
SPRING 2009

THE FEDERAL RESERVE BANK OF RICHMOND

VOLUME 13
NUMBER 2
SPRING 2009

COVER STORY

14 Reforming the Raters: Can regulatory reforms adequately realign the incentives of credit rating agencies?
Amid the financial turmoil, many have criticized the bond rating agencies. How do these agencies operate? And what do economists have to say about the role they play in a healthy capital market?

FEATURES

20 Honeybees: Market for pollination services grows
Some beekeepers in the Fifth District sell pollination services to farmers as far away as California’s almond fields. This market has expanded as wild bee populations have declined and honeybee hives have suffered from a variety of pests and problems.

23 Silver Screen Subsidies: Is hoping to land the next Hollywood hit a sound economic development strategy?
State legislators often try to lure movie and television productions to their states. But the effectiveness of these policies as an economic development tool is in question.

26 Clear Skies? The fight for dominance in the airline industry
The recession and oil price shocks have made for a turbulent few years for the airline industry. Today, strong competition and shifts in market realities are changing how airlines operate and will have real implications for future air travel.

30 Veto Politics: Can a line-item veto reduce spending?
The line-item veto power held by a state governor is often thought to keep the spending of the legislature in check. However, economists who have studied the issue have come to realize that the conventional wisdom may not be entirely correct.

DEPARTMENTS

1 President’s Message/The (Limited) Role of Credit Ratings in the Financial Crisis
2 Upfront/Economic News Across the Region
6 Federal Reserve/Capital Cushions
10 Jargon Alert/Underemployment
11 Research Spotlight/Have Free Markets Failed Us?
12 Policy Update/Are CEOs Paid Too Much?
13 Around the Fed/What Prolonged the Great Depression?
32 Interview/Allan Meltzer
37 Economic History/Sport of Kings
40 District Digest/Economic Trends Across the Region
48 Opinion/The Importance of Luck

Our mission is to provide authoritative information and analysis about the Fifth Federal Reserve District economy and the Federal Reserve System. The Fifth District consists of the District of Columbia, Maryland, North Carolina, South Carolina, Virginia, and most of West Virginia. The material appearing in Region Focus is collected and developed by the Research Department of the Federal Reserve Bank of Richmond.

DIRECTOR OF RESEARCH
John A. Weinberg

EDITOR
Aaron Steelman

SENIOR EDITOR
Stephen Slivinski

MANAGING EDITOR
Kathy Constant

STAFF WRITERS
Renee Courtois
Betty Joyce Nash
David van den Berg

EDITORIAL ASSOCIATE
Julia Ralston Forneris

REGIONAL ANALYST
Sonya Ravindranath Waddell

CONTRIBUTORS
Matthew Conner
Christina Zajicek

DESIGN
Beatley Gravitt Communications

CIRCULATION
Alice Broaddus

Published quarterly by the Federal Reserve Bank of Richmond, P.O. Box 27622, Richmond, VA 23261, www.richmondfed.org

Subscriptions and additional copies: Available free of charge through our Web site at www.richmondfed.org/publications or by calling Research Publications at (800) 322-0565.

Reprints: Text may be reprinted with the disclaimer in italics below. Permission from the editor is required before reprinting photos, charts, and tables. Credit Region Focus and send the editor a copy of the publication in which the reprinted material appears.

The views expressed in Region Focus are those of the contributors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.

ISSN 1093-1767

PRESIDENT’S MESSAGE

The (Limited) Role of Credit Ratings in the Financial Crisis

The cover story of this issue of Region Focus seeks to frame the policy debate about the future of the credit rating agencies. It’s certainly a timely discussion. When financial institutions began to post significant losses, some observers suggested that many financial institutions had invested in new, complex securities — some of which have been downgraded to junk status today — mainly because those assets were at that time given a seal of approval by one of the “Big Three” rating agencies. Some of the reform proposals being discussed in Washington are geared toward eliminating what many argue were conflicts of interest that arose in the course of awarding those ratings.

It’s important to acknowledge the concerns that many have about the agencies and how those agencies might have influenced the quality of investor information. After all, clear and reliable information is an important component of a properly functioning market. If an investor doesn’t understand how a securitized asset is constructed — maybe because it is too opaque or simply too confusing to understand — market discipline may be weakened. Either a lack of transparency or a lack of comprehension by the buyer of an asset can lead to little or no check on the originators and underwriters of those securities.

Yet it may not be entirely appropriate to blame the apparent shortcomings of the securitization markets simply on the complexity of the products. If indeed that complexity raised sufficient concern among investors, it should have been reflected in the prices of those assets. And if those risk premia were not as high as we think they should have been after the fact, an undeserved credit rating may not have been the only contributing factor. It could be that investors simply had an incorrect view of the future of the economy or of particular institutions.

Nor is it appropriate to place all the blame with the credit rating agencies. Yes, an investor’s false sense of security may have been reinforced by the inflated grade given to a securitized asset by the rating agencies. But intelligent institutional investors also probably had some understanding that the ratings awarded by the Big Three agencies were flawed in certain respects. That could have just as easily been factored into the price too. And indeed it was, to some extent, as structured securities routinely traded at spreads greater than similarly rated, but less complex, corporate bonds. This leads one to question the extent to which investors had a competing incentive to ignore countervailing information about the potential riskiness of the securitized assets they were buying.

One plausible reason investors bought these securities involves the incentives built into the capital requirements that financial institutions must observe. Credit ratings issued by the agencies were used to assign “risk weights” to the securities banks held. If the grade was high, banks could hold less capital as a buffer against losses. That gave banks an incentive to hold the highest-yielding (that is, riskiest) securities with any given rating — in short, potentially overrated securities.
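The mechanics of that incentive can be sketched in a few lines. This is an illustrative example only: the 8 percent base capital ratio, the two risk weights, and the function name are Basel-style assumptions of mine, not figures from this message.

```python
# Illustrative sketch of how rating-based risk weights change a bank's
# capital requirement. The 8 percent base ratio and the risk weights
# below are assumed, Basel-style numbers, not figures from the article.

BASE_CAPITAL_RATIO = 0.08  # capital required per dollar of risk-weighted assets

def required_capital(exposure, risk_weight):
    """Capital a bank must hold against an exposure, given its risk weight."""
    return exposure * risk_weight * BASE_CAPITAL_RATIO

# $100 million of securities: a high rating (low risk weight) frees up capital.
high_grade = required_capital(100_000_000, 0.20)  # highly rated weight: $1.6 million
low_grade = required_capital(100_000_000, 1.00)   # unrated-style weight: $8 million
```

Under these assumed numbers, the same $100 million position ties up five times less capital when it carries the favorable rating, which is the incentive the message describes.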
Such a strategy might seem especially desirable to certain financial institutions if market participants believed the federal government would treat those institutions as “too big to fail” and would take action to keep them alive in the face of impending insolvency. This implicit promise to bail out institutions considered important to the stability of capital markets could have dampened market discipline no matter how good the information produced by rating agencies and others might have been.

When the government is in the business of protecting a certain class of investors and institutions against downside risk, it should be no surprise that those investors and institutions are more likely to take on risk. It should also be no surprise that information which might have spurred caution might be given less attention in such cases.

Better information — whether through a reformed rating process or through increased disclosure — could contribute to better functioning markets. But better information alone will not be sufficient to bring effective market discipline to bear on institutions that are widely viewed as too big to fail. What will be necessary is a widespread belief among investors that the government will not necessarily protect large institutions which make imprudent investments. So far, investors have little reason to believe that is the case. Indeed, quite the opposite. Establishing tighter boundaries on the financial safety net — and making those boundaries well known and credible — is a key task facing policymakers.

JEFFREY M. LACKER
PRESIDENT
FEDERAL RESERVE BANK OF RICHMOND

Spring 2009 • Region Focus

1

UPFRONT

Economic News Across the Region
Backyard Burn

Coastal Wildfire Risk Swells with Population
The worst wildfire in more than 30 years burned nearly 20,000 acres and sent
smoke billowing over the Grand Strand near Myrtle Beach, S.C., in April after
a backyard debris burn spread to an adjacent property.

PHOTOGRAPHY: SOUTH CAROLINA FORESTRY COMMISSION

A wildfire near Myrtle Beach, S.C., burned about 20,000 acres last spring.

No one was injured, but the fire destroyed 75 homes and damaged 101 more. Four thousand people were evacuated. South Carolina’s coastal development has mushroomed since the biggest fire on record, the Clear Pond Fire, which burned 30,000 acres in 1976. The residential boom raises questions about what has become a problem not just in South Carolina but across the nation, as people settle in retirement or vacation communities near the woods. This fire, for example, threatened thousands of homes and came close to major developments like Carolina Forest and Barefoot Landing.

Most fires are caused by people. South Carolina Forest Protection Chief Darryl Jones says that his agency responds to between 5,000 and 6,000 fires a year, many started by people trying to burn leaves or yard trimmings. Some 88 percent of the 12.9 million acres of forest in South Carolina are privately owned.

As destructive as wildfires can be, especially near residential areas, fires serve to manage forest floor litter, which prevents worse fires. “Wildfires in forests are a part of the natural disturbance regime,” says economist Roger Sedjo, who directs the forest economics and policy program at Resources for the Future, a Washington, D.C., think tank. But suppression becomes a priority when human life and development are threatened.

Living near forests presents risks. Sedjo notes that the “insurance market has begun to adapt to these differential risks,” especially in the West. It makes more sense for the people whose assets are at risk to bear the cost of fire suppression, so society doesn’t pick up the whole tab. Jones says that his agency is working with insurance companies to consider factoring the risk of fires into insurance rates.

To fight the South Carolina fire, the commission got help from local fire departments as well as the United States National Guard. The Guard sent Black Hawk helicopters outfitted with 750-gallon buckets to scoop water from ponds to drop on the blaze. The Federal Emergency Management Agency will help South Carolina pay the Guard. Costs from the firefighting alone reached $1.5 million. Damage to timber is estimated at between $15 million and $20 million, with about $25 million in damage to homes.

There’s ongoing debate about how budgets, for example in the United States Forest Service, are allocated between suppression of wildfire and prevention. Sedjo says that most of the money today goes to firefighting when “there are obvious things people might do to decrease the probability that their house might burn down.”

The South Carolina Forestry Commission is responsible for forest fires in rural areas of the state and fights them with its fleet of fire tractor-bulldozers that plow firebreaks. Each machine costs about $250,000. Without a buffer zone of about 30 to 40 feet between a house and the woods, it’s not safe for firefighters. Materials to avoid include vinyl siding, wood stacked near the home, and certain types of flammable shrubs and mulch.

The fire still smoldered underground well into May, requiring the commission to monitor the area with heat sensors, amid an unusual coastal feature known as the “Carolina Bays,” elliptical depressions dotting the Southeast that contain peat bogs and flammable material.

— BETTY JOYCE NASH

Lead Foot


Traffic Tickets Rise in Recessions
When the stock market declines and unemployment rises, it might be a good
idea to pay a little extra attention to local traffic laws.
Recent studies have found evidence that police use traffic tickets to generate revenue during hard economic times, such as when tax receipts flag during recessions. In a 2006 paper, economists Thomas Garrett of the Federal Reserve Bank of St. Louis and Gary Wagner of the University of Arkansas at Little Rock find that the number of traffic tickets rises after state revenue sources fall. The economists studied data from counties in North Carolina from 1990 to 2003.

One implication of these findings is that police face a choice about how stringently to enforce traffic laws. Individual officers can choose whether to pull someone over, issue a ticket (and, to some degree, what the fine will be), or simply warn a driver.

“Clearly the police’s primary motive is public safety,” says Garrett, “but the revenue motive does appear to come into play.”

Once you consider that local police respond to incentives, perhaps it shouldn’t be a surprise that the revenue motive induces officers to issue more traffic tickets. “There is a lot of literature out there that suggests local governments are revenue maximizers,” Garrett says. “Whether you think that’s good or bad, it suggests they’ll look for alternative sources for revenue when existing revenue sources become constrained.”

This may explain why nonresidents of a municipality are issued more traffic tickets and bigger fines than residents, according to a 2009 paper by economists Michael Makowsky of Towson University and Thomas Stratmann of George Mason University. They studied municipalities in Massachusetts and compared the outcomes of drivers pulled over for speed violations. Their probability calculations found that out-of-town and out-of-state drivers got more tickets than residents, by 11 and 21 percentage points, respectively. This occurred even though the speeders who were pulled over drove the same number of miles per hour over the speed limit, on average.

Their study also finds that municipal officers are more likely to issue tickets after local voters have rejected increases in certain taxes. Then the prospects for out-of-town drivers get even worse: Their probability of receiving a fine after being pulled over increases by 38 percentage points. This effect disappears if voters have approved the tax increase.

This suggests that local police use traffic citations to generate revenue from a previously untapped group: those who pay no local property or income taxes. Makowsky and Stratmann also hypothesize that targeting nonresidents could provide a source of revenue from a group that is unable to retaliate come election day. Local police often report to elected officials who would be worried about such an outcome.

“I think if this form of revenue generation was subject to voter approval, maybe the fines would be lower,” says Garrett. “But then maybe they’d just have more tickets being issued to compensate for the lower fine.”

Raleigh Police Department spokesperson Laura Hourigan says that officers are not instructed to use tickets to recoup revenue during downturns, and that traffic citations are just one aspect of a police officer’s job description. “Their responsibilities are to keep our roads safe, our streets safe, and our citizens safe,” she says. In her view, it’s an old wives’ tale that officers intentionally write a greater number of tickets to get more revenue for the city at any particular time, let alone during recessions.

Garrett proposes a way to further test the theory that local police forces consider revenue when allocating resources toward issuing traffic tickets. “If the concern is purely about public safety, I would suggest that all revenue be donated to charity,” he says. “If there is no revenue motive, we would expect the number of traffic tickets to stay the same.”

— RENEE COURTOIS


Consumer Loans

Law May Constrain Payday Borrowers
The South Carolina General Assembly overrode a gubernatorial veto of a bill that
requires the creation of a database to track whether borrowers have outstanding
loans elsewhere.
The state will now contract with a third party to provide the database, and that company will be
allowed to charge payday lenders a fee to determine consumer eligibility. Companies can pass half of
the fee — which cannot exceed $1 per completed transaction — on to their customers, says Jamie
Fulmer, director of public affairs for Advance America, the nation’s largest payday lender, which is based
in Spartanburg, S.C.
The new rules specify that borrowers may take only one loan at a time and must wait one day between each of the first seven consecutive loans and two days between loans after that.
The maximum allowable individual loan will increase from $300 to $550.
Both the South Carolina House of Representatives and Senate overrode the veto by a wide margin.
Governor Mark Sanford worried the lending database would violate consumers’ privacy, according to
newspaper reports. He also argued the bill could make people’s financial situation worse or drive them
to illegal loan sharks and unregulated Internet lenders.
Payday loans are small, short-term consumer loans designed to be repaid in a single lump sum.
Borrowers only need to provide a pay stub, bank statement, and driver’s license. Lenders typically won’t
conduct a credit check of prospective borrowers but may investigate whether the applicant has a
checking account. If approved, the borrower typically writes a postdated check for the loan amount
plus a finance charge, and receives the loan amount in exchange. The lender will hold the check until a
future date, in most cases, two weeks. In some states, borrowers can renew loans before their postdated check is deposited, and incur additional fees.
In the Fifth District, South Carolina now joins Virginia in tracking borrowers’ activity and imposing a cooling-off period between loans for repeat borrowers. No storefront payday lenders operate in Maryland, the District of Columbia, North Carolina, or West Virginia.
Most states cap interest rates on consumer loans, usually in the double digits. Payday lenders often
can’t profitably operate in states with such laws because their customers are often relatively risky
borrowers. Maryland, West Virginia, and the District of Columbia each cap interest rates.
More than 22,000 outlets make payday loans to consumers nationwide. Typical payday borrowers
earn between $25,000 and $50,000 a year. Nearly 70 percent of customers are under 45 years old, most
are married, and 42 percent own homes. Payday borrowers are typically “early life-cycle, moderate
income, credit constrained consumers,” write Gregory Elliehausen and Edward C. Lawrence in a 2008
Contemporary Economic Policy article.
Lenders in South Carolina currently charge $15 for every $100 borrowed, for an annual percentage
rate of more than 400 percent. However, annual percentage rates for overdraft protection, offered by
banks, and for cash advances on credit cards can be even higher. Rates for $100 bounced checks (including merchant fees), credit card balances with late fees, and utility bills with reconnect fees may add up to finance charges of 1,000 percent.
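The annualization behind these figures can be sketched briefly. This is an illustrative calculation, not from the article: the function name and the two-week term (the typical holding period described above) are assumptions, and the simple, non-compounding annualization shown here lands near the cited figure rather than reproducing it exactly.

```python
# Hypothetical sketch of annualizing a payday loan fee into a simple APR.
# The $15-per-$100 fee comes from the article; the 14-day term and the
# simple (non-compounding) annualization are illustrative assumptions.

def simple_apr(fee, principal, term_days):
    """Simple annual percentage rate, in percent, for a single-fee loan."""
    return (fee / principal) * (365 / term_days) * 100

print(round(simple_apr(fee=15, principal=100, term_days=14)))  # 391
```

Shorter assumed terms raise the figure quickly (the same fee over 10 days annualizes to about 548 percent), which is one reason quoted payday APRs vary.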
Consumer advocacy groups condemn payday lenders. They argue payday loans are debt traps that
pose hardships for borrowers. However, in a Federal Reserve Bank of New York staff report, Bank
economist Donald Morgan and Cornell University doctoral student Michael Strain studied the effects
of legislation against payday loans in Georgia and North Carolina. They found residents of both states
bounced more checks than residents of states where payday loan laws did not change. The researchers
also found more Georgians and North Carolinians complained to the Federal Trade Commission about
debt collectors.
Since he started studying payday lending in 2005, Morgan says more states have banned or regulated the practice. The next big research question, Morgan says, is why some states regulate the loans
more strictly. “It’s not the borrowers themselves who are pushing to have these laws changed,” he says.
— DAVID VAN DEN BERG


VNTY PL8TS

Virginians Snap Up Personalized License Plates
Who’s so vain? Virginia is, according to the American Association of Motor Vehicle Administrators (AAMVA). The organization found in a 2007 survey that Virginia ranks No. 1 in the percentage of all registered vehicles with vanity license plates, which feature a personally chosen number, letter, or symbol combination.

The AAMVA’s study estimated that almost 4 percent of all registered motor vehicles in the United States are “vanitized,” equaling about 9 million total plates. But in Virginia, about 16 percent of all vehicles have vanity plates. New Hampshire came in second at 14 percent, and Texas was dead last at 0.56 percent.

“People seem to just really love personalized plates,” according to Melanie Stokes of the Virginia Department of Motor Vehicles (DMV). “It’s a fun way to put your personality on your car. Virginians really have fun with it and the DMV really enjoys administering it.”

Why are Virginians so eager to express themselves? According to economist Erik Craft at the University of Richmond, there are several reasons. In 2002, he used data collected from each state, with the help of the Virginia DMV, to figure out which factors affect the number of vanity plates you see on the road.

According to Craft’s study, one of the biggest determinants of vanity plate demand is the age range of the population. States with more 25- to 34-year-olds tend to have more vanity plates. “Younger people want to stand out,” Craft hypothesizes. “Single, young people may tend to be at the point where they want to make a statement with their style and attract attention.” If a state requires license plates mounted front and back, as in Virginia, then the proportion of cars with vanity plates rises even more, according to Craft’s study, because the impact of personalizing your car is even greater.

Craft’s study also found that vanity plates and “specialized license plates” are complementary goods. States that offer these specialty-background plates, which endorse a university, civic group, or nonprofit organization, sell more vanity plates too. By the time a driver has gone to the trouble to order a special background image for his plate, choosing a number and letter combination requires little extra effort.

Virginia offers more than 200 specialty plate styles. Each costs an extra $25, and yet more specialty plates are issued than vanity plates. Stokes reports that specialty plates generate almost $3 million for special groups and universities, including more than $404,000 for the Department of Game and Inland Fisheries through proceeds from the Wildlife Conservationist plate, which is the most popular.

But perhaps the biggest reason that Virginia’s drivers are so expressive is that it costs so little. In Virginia, a vanity plate costs only $10 at the time of purchase in addition to the usual vehicle registration fee, with a $10 annual renewal fee. Compare this with Minnesota, which charges $100 initially. The Virginia DMV also estimates that it takes about four minutes to buy your plate online. At prices like these, Virginians have shown more interest in being whimsical on their plates.

Though a state-by-state comparison of vanity plate demand hasn’t been repeated since 2007, Virginia residents need only look around to know whether their counterparts continue to express themselves in abundance. A recent stroll through the Richmond Fed’s parking garage one morning revealed a wide range of vanity plates, touting everything from a sweetheart’s name to a favorite NASCAR contender. None were Fed-related.

— RENEE COURTOIS

Spring 2009 • Region Focus

5

Consumer Loans

Law May Constrain Payday Borrowers
The South Carolina General Assembly overrode a gubernatorial veto of a bill that
requires the creation of a database to track whether borrowers have outstanding
loans elsewhere.
The state will now contract with a third party to provide the database, and that company will be
allowed to charge payday lenders a fee to determine consumer eligibility. Companies can pass half of
the fee — which cannot exceed $1 per completed transaction — onto their customers, says Jamie
Fulmer, director of public affairs for Advance America, the nation’s largest payday lender, which is based
in Spartanburg, S.C.
The new rules specify that borrowers will be allowed to take only one loan at a time, face a one-day
break between each of the first seven consecutive loans and a two-day break between loans after that.
The maximum allowable individual loan will increase from $300 to $550.
Both the South Carolina House of Representatives and Senate overrode the veto by a wide margin.
Governor Mark Sanford worried the lending database would violate consumers’ privacy, according to
newspaper reports. He also argued the bill could make people’s financial situation worse or drive them
to illegal loan sharks and unregulated Internet lenders.
Payday loans are small, short-term consumer loans designed to be repaid in a single lump sum.
Borrowers only need to provide a pay stub, bank statement, and driver’s license. Lenders typically won’t
conduct a credit check of prospective borrowers but may investigate whether the applicant has a
checking account. If approved, the borrower typically writes a postdated check for the loan amount
plus a finance charge, and receives the loan amount in exchange. The lender will hold the check until a
future date, in most cases, two weeks. In some states, borrowers can renew loans before their postdated check is deposited, and incur additional fees.
In the Fifth District, South Carolina now joins Virginia in tracking borrowers’ activity and the imposition of a cooling-off period between loans for repeat borrowers. No storefront payday lenders operate
in Maryland, the District of Columbia, North Carolina, or West Virginia.
Most states cap interest rates on consumer loans, usually in the double digits. Payday lenders often
can’t profitably operate in states with such laws because their customers are often relatively risky
borrowers. Maryland, West Virginia, and the District of Columbia each cap interest rates.
More than 22,000 outlets make payday loans to consumers nationwide. Typical payday borrowers
earn between $25,000 and $50,000 a year. Nearly 70 percent of customers are under 45 years old, most
are married, and 42 percent own homes. Payday borrowers are typically “early life-cycle, moderate
income, credit constrained consumers,” write Gregory Elliehausen and Edward C. Lawrence in a 2008
Contemporary Economic Policy article.
Lenders in South Carolina currently charge $15 for every $100 borrowed, for an annual percentage
rate of more than 400 percent. However, annual percentage rates for overdraft protection, offered by
banks, and for cash advances on credit cards can be even higher. Rates for $100 bounced checks including merchant fees, credit card balances with late fees, and utility bills with reconnect fees may add up
to finance charges of 1,000 percent.
Consumer advocacy groups condemn payday lenders. They argue payday loans are debt traps that
pose hardships for borrowers. However, in a Federal Reserve Bank of New York staff report, Bank
economist Donald Morgan and Cornell University doctoral student Michael Strain studied the effects
of legislation against payday loans in Georgia and North Carolina. They found residents of both states
bounced more checks than residents of states where payday loan laws did not change. The researchers
also found more Georgians and North Carolinians complained to the Federal Trade Commission about
debt collectors.
Morgan says more states have banned or regulated the practice since he started studying payday lending in 2005. The next big research question, Morgan says, is why some states regulate the loans
more strictly. “It’s not the borrowers themselves who are pushing to have these laws changed,” he says.
— DAVID VAN DEN BERG

4

Region Focus • Spring 2009

VNTY PL8TS

Virginians Snap Up Personalized License Plates
Who’s so vain? Virginia is, according to the American Association of Motor
Vehicle Administrators (AAMVA). The organization found in a 2007 survey that
Virginia ranks No. 1 in the percentage of all registered vehicles with vanity
license plates. They feature a personally chosen number, letter, or symbol
combination.
The AAMVA’s study estimated that almost
4 percent of all registered motor vehicles in the
United States are “vanitized,” equaling about 9
million total plates. But in Virginia, about 16
percent of all vehicles have vanity plates. New
Hampshire came in second at 14 percent, and
Texas was dead last at 0.56 percent.
“People seem to just really love personalized
plates,” according to Melanie Stokes of the
Virginia Department of Motor Vehicles
(DMV). “It’s a fun way to put your personality
on your car. Virginians really have fun with it
and the DMV really enjoys administering it.”
Why are Virginians so eager to express
themselves? According to economist Erik Craft
at the University of Richmond, there are several reasons. In 2002, he used data collected from
each state, with the help of the Virginia DMV,
to figure out which factors affect the number
of vanity plates you see
on the road.
According to Craft’s
study, one of the biggest
determinants of vanity
plate demand is the age
range of the population.
States with more 25- to
34-year-olds tend to have
more vanity plates.
“Younger people want to stand out,” Craft
hypothesizes. “Single, young people may tend
to be at the point where they want to make a
statement with their style and attract attention.” If a state requires license plates mounted
front and back, as in Virginia, then the proportion of cars with vanity plates rises even more,
according to Craft’s study, because the impact
of personalizing your car is even greater.
Craft’s study also found that vanity plates
and “specialized license plates” are complementary goods. States that offer these
specialty-background plates that endorse some

university, civic group, or nonprofit organization sell more vanity plates too. By the time a
driver has gone to the trouble to order a special
background image for his plate, choosing a
number and letter combination requires little
extra effort.
Virginia offers more than 200 specialty
plate styles. Each costs an extra $25, and yet
more specialty plates are issued than vanity
plates. Stokes reports that specialty plates generate almost $3 million for special groups and
universities, including more than $404,000
for the Department of Game and Inland
Fisheries through proceeds from the Wildlife
Conservationist plate, which is the most popular.
But perhaps the biggest reason that
Virginia’s drivers are so expressive is that it costs
so little. In Virginia, a vanity plate costs only
$10 at the time of purchase in addition to the
usual vehicle registration
fee, with a $10 annual
renewal fee. Compare
this with Minnesota,
which charges $100 initially. The Virginia DMV
also estimates that it
takes about four minutes
to buy your plate online.
At prices like these,
Virginians have shown more interest in being
whimsical on their plates.
Though a state-by-state comparison of vanity plate demand hasn’t been repeated since
2007, Virginia residents need only look
around to know whether their counterparts
continue to express themselves in abundance.
A recent stroll through the Richmond Fed’s
parking garage one morning revealed a wide
range of vanity plates, touting everything from
a sweetheart’s name to a favorite NASCAR
contender. None were Fed related.
— RENEE COURTOIS


FEDERALRESERVE
Capital Cushions
BY STEPHEN SLIVINSKI

The Basel Accords and bank risk

The recent “stress test” the federal government conducted on the nation’s biggest banks was an attempt to ascertain whether those depository institutions could withstand a market downturn. This new form of bank examination was meant to quell some of the uncertainty among investors about the value of the assets the banks were holding on their balance sheets as well as whether these banks had enough capital on hand to keep them standing in the wake of an extended economic storm.

Banks can finance their operations in two ways. They can borrow money — or accept more deposits from their customers, which by definition is a form of borrowing since the bank is required to return the full deposit balance if demanded by the customer — or they can sell stock. Banks can then turn around and lend this money to others. (The loans the banks extend to others are considered assets since they generate income for the bank through the interest payments made by borrowers.)

When a bank borrows money to fund its operations, this creates a liability that can cause the bank to fail if it cannot meet its repayment obligations. On the other hand, the revenue generated by a stock sale is considered “capital” since it can be used to pay off depositors or bondholders if necessary. Thus, the larger the portion of the bank’s operations that are financed by capital funds, the more losses the bank can absorb.

Measuring how much capital a bank has on hand relative to its assets has become an important function of the bank regulatory system. The main regulators of the U.S. banking system — the Federal Deposit Insurance Corporation, the Federal Reserve, and the Office of the Comptroller of the Currency — have routinely examined banks for years to measure the adequacy of their capital cushion, among other things.

One of the metrics by which this adequacy is measured is a capital-to-assets ratio. While this might sound like a simple concept to operationalize, the proper role for the ratio in regulatory policy is far from settled. In addition, current events have raised questions regarding the old assumptions about how best to define a bank’s capital cushion.

[Chart: Bank Capital Ratios Have Risen Since the 1980s — equity capital as a percent of assets, 1935-2005. SOURCE: FDIC Historical Statistics on Banking]

A Brief History
The numeric standards that the current capital adequacy requirements are based on are relatively new. Before the 1980s, bank supervisors did not impose a specific quantitative capital requirement on a bank. Instead, through most of the country’s history, an institution’s solvency was based largely on an examiner’s judgment. Supervisors had the freedom to take a look at each bank individually and use formal and informal measures and their knowledge of each bank’s circumstances to form their views.

Rigid adherence to something quantitative like a capital ratio was still widely perceived to discourage a more comprehensive and thoughtful analysis of a bank’s potential solvency in the face of an economic shock. For instance, the American Bankers Association’s 1954 “Statement of Principles” explicitly rejected the use of ratios as a centerpiece of bank supervision. Even as late as 1978, the FDIC Manual of Examination Policies — the rulebook for that agency’s bank auditors — instructed its examiners to use capital ratios as only “a first approximation of a bank’s ability to withstand adversity. A low capital ratio by itself is no more conclusive of a bank’s weakness than a high ratio is of its invulnerability.”

This was a sustainable strategy for bank examiners from the 1940s through the early 1970s. Bank failures were few in number and limited in scope during that time. The dollar-weighted average capital ratio for the banking industry also remained healthy, ranging from 6 percent to 8 percent between 1950 and 1970.

The high-inflation environment of the mid- to late 1970s led to high interest rates that severely weakened large banks and the savings and loan (S&L) industry. In 1981, the federal regulators introduced an explicit capital ratio requirement for the first time. It consisted of a “leverage ratio” of primary capital (mainly the amount of stockholder equity) to average total assets (an average of aggregate assets over a set time period, usually two years). Congress furthered the push by passing the International Lending Supervision Act of 1983 (ILSA). The legislation ushered in a common definition of uniform capital requirements for all bank regulatory agencies to use.
In 1985, under the auspices of ILSA, the standard mandated capital ratio for banks converged on 5.5 percent of total assets. Any bank operating at a leverage ratio below 3 percent was declared unsound and was required to comply with federal enforcement actions.
By 1986, however, regulators began to realize that the
ratio failed to differentiate between different sorts of risks
on the bank’s balance sheets. The simple ratio, by definition,
ranked all assets as being equally likely to maintain their
value. But during the 1980s, financial markets were becoming vastly more international in scope and innovations in
financial products were introducing a new element of risk
into bank holdings. Besides, many banks were beginning to
move away from lower-yielding liquid assets while also
experimenting with “off-balance-sheet” activities that
would allow them to make certain higher-yield (but riskier)
investments. Under the old rules, they didn’t have to
increase the size of their capital cushion as a result.

The Basel Accord and U.S. Policy
In the summer of 1988, central bank governors from the 10
biggest economies (also called the Group of Ten, or G-10)
met in the town of Basel, Switzerland, to approve an agreement — eventually called Basel I — that would set the
approach that bank regulators would take for the next 18
years. The first big result of the accord was to redefine the
way regulators in each participating country measure
capital. It created two “tiers” — Tier 1 (core) capital and Tier
2 (supplementary) capital. Tier 1 is basically equity owned by
common stockholders while Tier 2 consists of a variety of
other forms of capital, such as a “hybrid” equity instrument
like preferred stock that resembles equity in some form but
also maintains a liability claim on the bank in the event of
bankruptcy.
The next new step was to break away from a simplistic,
uniform approach to capital ratios and instead create a series
of risk categories into which the assets of a bank can be subdivided. A “risk weight” would then be assigned to each class
of asset for the purposes of taking into account the potential
for a loss in value or probability of default: The higher the
risk weight, the more capital the bank needs to have on hand
to compensate for the potential loss. Those ranged from a
“0.0 percent” risk weight for bonds issued by the governments of most developed countries to a “100 percent” risk
weight for corporate debt. Mortgages fell in the middle (a 50
percent weight). Off-balance-sheet assets were also included
in these “risk buckets” and weighted by a similar risk factor.
To calculate the risk-weighted capital ratio, regulators
would sum the new weighted values of the assets before they

calculated the capital-to-asset result. The standard would
require banks to hold capital (Tier 1 plus Tier 2) that consisted of 8 percent of their newly defined risk-weighted assets.
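The calculation just described can be sketched with the three risk weights mentioned above; the balance sheet is hypothetical, and real Basel I portfolios had many more asset classes:

```python
# Basel I risk weights from the text: government bonds of most developed
# countries 0%, residential mortgages 50%, corporate debt 100%.
RISK_WEIGHTS = {"government_bonds": 0.0, "mortgages": 0.5, "corporate_debt": 1.0}

def risk_weighted_assets(holdings):
    """Scale each asset class by its risk weight and sum the results."""
    return sum(amount * RISK_WEIGHTS[asset] for asset, amount in holdings.items())

def minimum_capital(holdings, ratio=0.08):
    """Basel I standard: Tier 1 plus Tier 2 capital of 8% of risk-weighted assets."""
    return ratio * risk_weighted_assets(holdings)

# A hypothetical portfolio, in millions of dollars
holdings = {"government_bonds": 100, "mortgages": 100, "corporate_debt": 100}
print(risk_weighted_assets(holdings))   # 150.0
print(minimum_capital(holdings))        # 12.0
```

Note how the weighting works against the incentive problem described earlier: shifting the same $100 million from government bonds to corporate debt raises the capital requirement, whereas a flat capital-to-assets ratio would leave it unchanged.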
Coincidentally, the year after the original Basel Accord
was agreed upon and the standards began to be adopted by
a number of countries — over 100 by the year 2002 — the
United States witnessed the largest number of bank failures
since the Great Depression. More than 530 FDIC-insured
banks failed in 1989. The concern among policymakers at the time was about “regulatory forbearance” — in
other words, the act of looking the other way when a
regulator discovered that a bank might be in jeopardy of
collapsing.
Analysts of the period often point out that bank regulators were aware of many of the warning signs, and that the losses from the S&L crisis of the 1980s were made worse than they might have been. “The consequent increased pressure to
might have been. “The consequent increased pressure to
forbear from managers and owners in the industry,
unchecked by an offsetting increased pressure to facilitate
early closure, may have led to changes in favor of such policies in the 1980s,” write economists Randall Kroszner of the
University of Chicago and Philip Strahan of Boston College
in a 1996 paper. (Kroszner subsequently served as a
Governor at the Federal Reserve Board.)
Partly in response to this concern, Congress passed
the Federal Deposit Insurance Corporation Improvement
Act (FDICIA) in 1991. It created a set of categories to
classify the capitalization of a bank. A bank was “well
capitalized” if it had a risk-weighted capital ratio of 10
percent or more. It was “adequately capitalized” at 8 percent
or more. Below 8 percent was considered “undercapitalized.” The law mandated “prompt corrective action”
by regulators to shut down banks that were considered
undercapitalized and failed to meet other criteria. The
purpose was to minimize the potential cost to taxpayers of
the government’s deposit insurance guarantees by heading
off a potential bank collapse while a bank still had a positive,
but low, capital ratio.
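The FDICIA categories named above map directly onto threshold checks. This is a simplified sketch covering only the three risk-weighted bands in the text; the actual rule also used additional criteria, such as leverage and Tier 1 measures:

```python
def fdicia_category(risk_weighted_ratio_pct):
    """Classify a bank by the risk-weighted capital ratio thresholds described above."""
    if risk_weighted_ratio_pct >= 10:
        return "well capitalized"
    if risk_weighted_ratio_pct >= 8:
        return "adequately capitalized"
    return "undercapitalized"

for ratio in (12, 9, 6):
    print(ratio, fdicia_category(ratio))
# 12 well capitalized
# 9 adequately capitalized
# 6 undercapitalized
```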

The Rise of Basel II
Soon, a variety of inherent flaws in Basel I’s treatment of
capital became apparent. First, the relationship between
assets’ actual revealed default risk and their risk weights
proved to be less reliable than had been thought. For
instance, all bonds issued by countries that were members of
the Organization for Economic Cooperation and
Development (OECD) were given the same weight even
though doing so might have downplayed the very real
differences in the risk of defaults among these countries or,
conversely, possibly overstated the difference in default risks
between OECD and non-OECD countries.
Second, the Basel methodology was too crude. It simply
summed the risk weights to construct a measure of overall
capital risk, but that is a poor proxy for actual risk. Doing so
does not take into account the overall portfolio risk of
the bank and the formula made no room for management

strategies that could reduce that overall risk. A bank portfolio can indeed be more or less risky than the mere sum of
its parts might indicate because of the correlation among
assets.
Third, the broad categories lumped together, and assigned a single weight to, a variety of assets that in reality exist along a spectrum of risk profiles. A loan to a startup
company, for instance, was treated the same as one to an
established Fortune 500 company. As such, banks investing
the same share of their portfolio in either asset would have
identical mandatory capital set aside. This creates an incentive for a bank to invest in high-yielding assets at the risky end of the spectrum without having to make a corresponding expansion of its capital cushion. This sort of activity
could over time increase the overall risk of a bank’s portfolio
although it would still meet Basel I standards.
In January of 2001, a second set of Basel standards —
called Basel II — attempted to remedy these problems.
(The implementation by the Federal Reserve began in the
fall of 2006.) The first big change altered the risk weights. By
using the ratings issued by credit rating agencies like
Standard and Poor’s and Moody’s to determine the potential
risk of default, Basel II set up a system by which assets within each broad “risk bucket” could be further classified.
The second big change was a new method by which risk
profiles could be measured. Instead of forcing all banks to
abide by the specific numeric standards set forth in Basel II,
certain banks could opt out. In place of the top-down
approach, the “internal ratings based” approach — available
only to sophisticated banks with the resources and knowledge base to develop an internal rating with a mathematical model — allowed some banks to estimate the necessary size
of their own capital cushion.
Both changes were aimed at answering the critics who
stated that the original Basel standards did not integrate any
market-based mechanisms for evaluating risk. Yet these
changes seem to have proven flawed as well. The grades
awarded by the ratings agencies for some mortgage-backed
securities, for instance, have been shown to be less reliable
than originally hoped. Some argue it’s hard to make a case
that a handful of firms which are largely insulated from competition by the Securities and Exchange Commission, as the
“Big Three” ratings agencies are, could be considered a sufficient market-based mechanism. (For a detailed analysis, see
this issue’s cover story on page 14.)
In addition, allowing banks to set their own capital
requirements doesn’t seem to acknowledge the current state
of the science of risk management. It has become apparent
that the models of risk used by many banks may not have
been sufficiently robust to anticipate the potential default of
complex new asset-backed securities.
There has been some discussion within the Federal
Reserve about how to overcome the incentive a bank would
have to lowball their capital requirement estimates. One way
to create an incentive for banks to be as honest as possible is
to require them to precommit to a maximum loss exposure


and corresponding capital buffer. If the bank’s losses exceed
the declared maximum, the bank supervisor would levy a
fine on the bank.
A criticism of the precommitment approach centers on
the ability and willingness of a regulator to assess fines. For
the fines to be a credible threat, they must be large enough
to spur action by the bank. But if an economic shock were to
reduce a bank’s soundness, a regulator might feel compelled,
if he believed the shock to be temporary, to avoid assessing
the fine if doing so would result in the bank’s failure. Yet the
failure to issue a penalty, especially if it is sufficiently steep
for the precommitment regime to work, would severely
restrict the credibility of the regulatory threat in the future.
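The precommitment mechanism discussed above can be sketched as a simple penalty rule. The proportional fine schedule here is purely illustrative; as the passage notes, the real difficulty is whether a regulator would credibly levy the fine at all:

```python
def precommitment_fine(actual_loss, declared_max_loss, fine_rate=0.5):
    """If realized losses exceed the bank's precommitted maximum loss
    exposure, the supervisor levies a fine proportional to the overshoot."""
    overshoot = max(0.0, actual_loss - declared_max_loss)
    return fine_rate * overshoot

print(precommitment_fine(8.0, 10.0))    # 0.0  (losses within the declared buffer)
print(precommitment_fine(14.0, 10.0))   # 2.0  (fine on the 4.0 overshoot)
```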

The Search for a Market-Based Mechanism
Critics of the Basel standards have pointed out that
each round of changes has yet to address a key conceptual
problem: Banks face a variety of risks that cannot be captured by a simple ratio. There is no attention paid to the
risks of a heavy concentration of a bank’s balance sheet in a
certain sort of investment. And a ratio has no way to gauge
the risks of poor management, the risks of an economic
shock, and the risks to reputation in the marketplace.
Critics argue that a real market-based mechanism that does
not rely almost solely on credit rating agencies or mathematical models would be better suited to managing not just
the capital ratios of a bank but also these other intangible
risk factors that those institutions face.
One proposal is to require large banks to fund a certain portion of their assets with long-term subordinated debt.
This form of debt would be uninsured — meaning it has no
claim to a federal guarantee — and would have a maturity
of more than a year. The term “subordinated” means that
the holders of these bonds are in line for repayment
behind depositors, conventional bondholders, and the
FDIC should the bank fail. The bonds could be traded in a
secondary market.
Supporters of this proposal suggest that these characteristics would be important for making this form of debt a
strong market-based barometer of a bank’s capital position.
Because these bondholders would be among the last to get
paid in the event of a bank failure, they would have an incentive to monitor the bank’s relative riskiness. Subordinated
debt holders would be watchful of the bank’s levels of leverage because that level would influence not just the
probability of the bank’s failure but also the composition of
risks on its balance sheet — and, consequently, the bank’s
ability to repay subordinated bondholders in the event of
failure. Finally, because the bonds can be traded in secondary markets, the yield on the debt would rise if the market perceived that the bank was taking on too much risk, thus sending a signal to both regulators and investors.
As Charles Calomiris of Columbia University and Robert
Litan of the Kauffman Foundation argue, a subordinated
debt requirement could be preferable to the current Basel
standard that encourages more equity financing of banks.

Stockholders of a bank are likely to be more concerned
about the bank’s profitability and, hence, more interested in
the bank making high-yield, potentially risky investments.
As Calomiris and Litan point out in a 2000 study, “because
holders of subordinated debt have no upside other than the
interest they are promised, they are likely to be less risk
seeking than shareholders.” They argue that these debt
holders would also have a relatively longer time horizon
than a stockholder because of the long-term nature of the
bond maturities. And they suggest that, because a portion
of the bonds will mature regularly, a subordinated debt
requirement on banks would force those banks to prove
themselves in the credit markets on a regular basis.
A criticism of the subordinated debt proposal suggests
that a secondary market for the asset may not emerge. The
amount of debt outstanding, particularly for a small bank,
might be too small for the market to be robust. Also, because
the proposal relies on the assumption that the bondholders
are relatively risk averse, they may be unusually sensitive to
new information and rush to redeem the debt after hearing
isolated pieces of bad economic news.
Another criticism of the subordinated debt is that political realities might make it a less effective tool at controlling
risk. In a world of deposit insurance and central governments unable to credibly commit to not bail out failing
banks, the upside of risk is privatized — by allowing the
bank’s stockholders to keep the profits of successful gambles
— but the downside is socialized because the government
ensures that the bank’s debtors don’t suffer. This creates an
incentive for banks to make even riskier investments than
they would otherwise. Meanwhile, the price of bank debt
will be influenced by the implicit or explicit insurance
guarantee, and the debt price would not necessarily yield
accurate information about a bank’s level of risk.
One way to control risk more directly is to approach the
question from the other end by limiting the net return a
bank can make and thereby limit its incentive to take too
much risk. This can be done by requiring banks to issue
stock warrants. Edward Simpson Prescott, an economist at
the Richmond Fed, argues this requirement would alter a
bank’s capital structure in such a way as to replicate the
incentives that a bank would face in a world in which deposit

insurance and bondholder guarantees didn’t exist.
The stock warrants would contain a strike price — a set
price at which the holders of the warrant could purchase a
share of bank equity. If the per-share return a bank experiences is higher than the strike price, then the warrant holder
could exercise his option to buy the stock at the predetermined price and reap the gains. The bank, on the other hand,
would only receive the price of the stock. Selling a stock warrant would, in other words, be equivalent to selling a portion
of the bank’s return to a set of investors. This would have the
effect of constraining the upper-end payoff a bank could
reap if the managers pursued a risky yet potentially highyield investment and should limit the incentive that banks
have to engage in such behavior.
A potential risk here is that a stock warrant could
penalize a bank that exhibits high returns generated by innovation or better management rather than risky leveraged
investments. There are also political economy issues. Bank
warrants can tip the balance of power away from bank managers, and a proposal to require warrants are likely to be
met with opposition. Additionally, by definition a stock warrant requirement would work best with a lower equity
capital requirement; high capital requirements choke
off investment. Yet it’s likely that a proposal to allow a lowering of capital requirements would be met with skepticism
today.
As the economic downturn unfolds, the debate about
the correct regulatory approach to capital buffers and the
best way to integrate market-based mechanisms will
continue. Bank regulation, by its nature, is often backwardlooking, adjusting to new financial innovations after they
become widespread. Some critics question whether the
attempts to continually modify capital standards can ever
keep up.
Nevertheless, capital ratios are quite firmly embedded in
U.S. law now. Yet it remains an open question whether the
spirit of the Basel II standards will survive intact. The Basel
Committee responded to the situation in the worldwide
financial markets in a November 2008 press release that recognized the “fundamental weaknesses” of Basel II and
proposed a goal of modifying the standards once again by
the end of 2009.
RF

READINGS
Burhouse, Susan, John Feid, George French, and Keith Ligon.
“Basel and the Evolution of Capital Regulation: Moving Forward,
Looking Back.” Washington, D.C.: Federal Deposit Insurance
Corporation, Jan. 14, 2003.
Calomiris, Charles W., and Robert E. Litan. “Financial Regulation
in a Global Marketplace.” Brookings-Wharton Papers on Financial
Services: 2000. Washington, D.C.: Brookings Institution Press,
pp. 283-323.
Kroszner, Randall S., and Philip E. Strahan. “Regulatory
Incentives and the Thrift Crisis: Dividends, Mutual-to-Stock
Conversions, and Financial Distress.” Journal of Finance,

September 1996, vol. 51, no. 4, pp. 1285-1319.
Kupiec, Paul, and James M. O’Brien. “The Pre-Commitment
Approach: Using Incentives to Set Market Risk Capital
Requirements.” Federal Reserve Board Finance and Economics
Discussion Series Paper 1997-14, March 1997.
Prescott, Edward S. “Regulating Bank Capital Structure to Control
Risk.”Federal Reserve Bank of Richmond Economic Quarterly,
Summer 2001, vol. 87, no. 3, pp. 35-52.
Rodriguez, L. Jacobo. “International Banking Regulation: Where’s
the Market Discipline in Basel II?” Cato Institute Policy Analysis
no. 455, Oct. 15, 2002.

Spring 2009 • Region Focus

9

strategies that could reduce that overall risk. A bank portfolio can indeed be more or less risky than the mere sum of
its parts might indicate because of the correlation among
assets.
Third, the broad categories lumped together, and assigned a single weight to, assets that in reality exist along a spectrum of risk profiles. A loan to a startup company, for instance, was treated the same as one to an established Fortune 500 company. As such, banks investing the same share of their portfolios in either asset would have to set aside identical mandatory capital. This created an incentive for a bank to invest in high-yielding assets at the risky end of the spectrum without having to make a corresponding expansion of its capital cushion. This sort of activity could, over time, increase the overall risk of a bank's portfolio even though the bank would still meet Basel I standards.
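The flat-weight arithmetic can be sketched in a few lines of Python. This is an illustration only: the 8 percent figure is the Basel minimum capital ratio, while the loan amounts and function names are invented for the example.

```python
# Basel I-style capital charge: a flat risk weight applies to every
# corporate loan, regardless of the borrower's actual riskiness.
MIN_CAPITAL_RATIO = 0.08  # the 8 percent Basel minimum

def required_capital(loan_amount, risk_weight=1.0):
    """Capital a bank must set aside against a single loan."""
    return loan_amount * risk_weight * MIN_CAPITAL_RATIO

# A loan to a risky startup and a loan to a Fortune 500 firm carry
# the same 100 percent weight, so the mandatory cushion is identical:
startup_charge = required_capital(10_000_000)
blue_chip_charge = required_capital(10_000_000)
print(startup_charge, blue_chip_charge)  # 800000.0 800000.0
```

Because the charge is identical, the extra yield on the riskier loan comes free of any extra capital requirement, which is the incentive problem described above.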
In January 2001, a second set of Basel standards — called Basel II — attempted to remedy these problems. (Implementation by the Federal Reserve began in the fall of 2006.) The first big change altered the risk weights: By using the ratings issued by credit rating agencies like Standard and Poor's and Moody's to determine the potential risk of default, Basel II set up a system by which assets within each broad "risk bucket" could be further classified.
The second big change was a new method by which risk
profiles could be measured. Instead of forcing all banks to
abide by the specific numeric standards set forth in Basel II,
certain banks could opt out. In place of the top-down
approach, the "internal ratings-based" approach — available only to sophisticated banks with the resources and knowledge base to develop an internal rating with a mathematical model — allowed some banks to estimate the necessary size of their own capital cushion.
Both changes were aimed at answering the critics who
stated that the original Basel standards did not integrate any
market-based mechanisms for evaluating risk. Yet these
changes seem to have proven flawed as well. The grades
awarded by the ratings agencies for some mortgage-backed
securities, for instance, have been shown to be less reliable
than originally hoped. Some argue it's hard to make the case that a handful of firms largely insulated from competition by the Securities and Exchange Commission, as the "Big Three" ratings agencies are, can be considered a sufficient market-based mechanism. (For a detailed analysis, see this issue's cover story on page 14.)
In addition, allowing banks to set their own capital
requirements doesn’t seem to acknowledge the current state
of the science of risk management. It has become apparent
that the models of risk used by many banks may not have
been sufficiently robust to anticipate the potential default of
complex new asset-backed securities.
There has been some discussion within the Federal
Reserve about how to overcome the incentive a bank would
have to lowball its capital requirement estimates. One way
to create an incentive for banks to be as honest as possible is
to require them to precommit to a maximum loss exposure

8

Region Focus • Spring 2009

and corresponding capital buffer. If the bank’s losses exceed
the declared maximum, the bank supervisor would levy a
fine on the bank.
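In outline, the penalty rule is a one-line comparison. A hypothetical sketch in Python (the proportional fine schedule is invented for illustration; the actual form of the penalty is left open in precommitment proposals):

```python
def precommitment_fine(declared_max_loss, realized_loss, fine_rate=0.5):
    """Fine levied only when realized losses breach the bank's own
    declared ceiling. fine_rate is a made-up parameter: here the fine
    is simply proportional to the size of the breach.
    """
    breach = max(0.0, realized_loss - declared_max_loss)
    return fine_rate * breach

print(precommitment_fine(100.0, 80.0))   # 0.0  (losses within commitment)
print(precommitment_fine(100.0, 140.0))  # 20.0 (fined on the 40 breach)
```

The bank's incentive to declare an honest ceiling comes from the trade-off the rule creates: a low declaration saves capital but courts fines.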
A criticism of the precommitment approach centers on
the ability and willingness of a regulator to assess fines. For
the fines to be a credible threat, they must be large enough
to spur action by the bank. But if an economic shock were to
reduce a bank’s soundness, a regulator might feel compelled,
if he believed the shock to be temporary, to avoid assessing
the fine if doing so would result in the bank’s failure. Yet the
failure to issue a penalty, especially if it is sufficiently steep
for the precommitment regime to work, would severely
restrict the credibility of the regulatory threat in the future.

The Search for a Market-Based Mechanism
Critics of the Basel standards have pointed out that
each round of changes has yet to address a key conceptual
problem: Banks face a variety of risks that cannot be captured by a simple ratio. There is no attention paid to the
risks of a heavy concentration of a bank’s balance sheet in a
certain sort of investment. And a ratio has no way to gauge
the risks of poor management, the risks of an economic
shock, and the risks to reputation in the marketplace.
Critics argue that a real market-based mechanism that does
not rely almost solely on credit rating agencies or mathematical models would be better suited to managing not just
the capital ratios of a bank but also these other intangible
risk factors that those institutions face.
One proposal is to require large banks to hold a certain
portion of their assets in long-term subordinated debt.
This form of debt would be uninsured — meaning it has no
claim to a federal guarantee — and would have a maturity
of more than a year. The term “subordinated” means that
the holders of these bonds are in line for repayment
behind depositors, conventional bondholders, and the
FDIC should the bank fail. The bonds could be traded in a
secondary market.
Supporters of this proposal suggest that these characteristics would be important for making this form of debt a
strong market-based barometer of a bank’s capital position.
Because these bondholders would be among the last to get
paid in the event of a bank failure, they would have an incentive to monitor the bank’s relative riskiness. Subordinated
debt holders would be watchful of the bank’s levels of leverage because that level would influence not just the
probability of the bank’s failure but also the composition of
risks on its balance sheet — and, consequently, the bank’s
ability to repay subordinated bondholders in the event of
failure. Finally, because the bonds can be traded in secondary markets, the yield on the debt would rise if the market perceived that the bank was taking on too much risk, thus sending a signal to both regulators and investors.
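The signaling channel can be illustrated with a one-period bond-pricing sketch. The numbers and the risk-neutral pricing here are simplifying assumptions, not part of the proposal itself:

```python
def subdebt_yield(default_prob, recovery=0.4, risk_free=0.03, face=100.0):
    """Yield implied by the market price of one-period subordinated debt.

    The price is the expected payoff discounted at the risk-free rate;
    the yield is the promised return over that price. All parameters
    are illustrative.
    """
    expected_payoff = (1 - default_prob) * face + default_prob * recovery * face
    price = expected_payoff / (1 + risk_free)
    return face / price - 1

# As perceived default risk rises, the yield holders demand rises with
# it, flagging the bank's risk-taking to regulators and investors:
for p in (0.01, 0.05, 0.10):
    print(f"default prob {p:.0%}: yield {subdebt_yield(p):.2%}")
```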
As Charles Calomiris of Columbia University and Robert
Litan of the Kauffman Foundation argue, a subordinated
debt requirement could be preferable to the current Basel
standard that encourages more equity financing of banks.

Stockholders of a bank are likely to be more concerned
about the bank’s profitability and, hence, more interested in
the bank making high-yield, potentially risky investments.
As Calomiris and Litan point out in a 2000 study, “because
holders of subordinated debt have no upside other than the
interest they are promised, they are likely to be less risk
seeking than shareholders.” They argue that these debt
holders would also have a relatively longer time horizon
than a stockholder because of the long-term nature of the
bond maturities. And they suggest that, because a portion
of the bonds will mature regularly, a subordinated debt
requirement on banks would force those banks to prove
themselves in the credit markets on a regular basis.
A criticism of the subordinated debt proposal suggests
that a secondary market for the asset may not emerge. The
amount of debt outstanding, particularly for a small bank,
might be too small for the market to be robust. Also, because
the proposal relies on the assumption that the bondholders
are relatively risk averse, they may be unusually sensitive to
new information and rush to redeem the debt after hearing
isolated pieces of bad economic news.
Another criticism of the subordinated debt proposal is that political realities might make it a less effective tool for controlling
risk. In a world of deposit insurance and central governments unable to credibly commit to not bail out failing
banks, the upside of risk is privatized — by allowing the
bank’s stockholders to keep the profits of successful gambles
— but the downside is socialized because the government
ensures that the bank’s debtors don’t suffer. This creates an
incentive for banks to make even riskier investments than
they would otherwise. Meanwhile, the price of bank debt
will be influenced by the implicit or explicit insurance
guarantee, and the debt price would not necessarily yield
accurate information about a bank’s level of risk.
One way to control risk more directly is to approach the
question from the other end by limiting the net return a
bank can make and thereby limit its incentive to take too
much risk. This can be done by requiring banks to issue
stock warrants. Edward Simpson Prescott, an economist at
the Richmond Fed, argues this requirement would alter a
bank’s capital structure in such a way as to replicate the
incentives that a bank would face in a world in which deposit

insurance and bondholder guarantees didn’t exist.
The stock warrants would contain a strike price — a set
price at which the holders of the warrant could purchase a
share of bank equity. If the per-share return a bank experiences is higher than the strike price, then the warrant holder
could exercise his option to buy the stock at the predetermined price and reap the gains. The bank, on the other hand, would receive only the strike price. Selling a stock warrant would, in other words, be equivalent to selling a portion
of the bank’s return to a set of investors. This would have the
effect of constraining the upper-end payoff a bank could
reap if the managers pursued a risky yet potentially high-yield investment, and it should limit the incentive that banks
have to engage in such behavior.
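The payoff split behind this argument can be made concrete. A stylized sketch, assuming one warrant outstanding per share and invented prices (this is just the option arithmetic, not Prescott's full model):

```python
def split_upside(share_value, strike):
    """Split a share's terminal value between existing shareholders and
    a warrant holder with the given strike price.

    Above the strike, the warrant holder pays the strike and keeps the
    excess, so the existing shareholders' payoff is capped at the
    strike; below it, the warrant expires worthless.
    """
    warrant_gain = max(0.0, share_value - strike)
    shareholders_keep = share_value - warrant_gain
    return shareholders_keep, warrant_gain

print(split_upside(80.0, 120.0))   # (80.0, 0.0): below the strike
print(split_upside(200.0, 120.0))  # (120.0, 80.0): upside capped at 120
```

Because the payoff from a successful gamble is capped at the strike, the expected gain from piling on risk shrinks, which is the incentive effect the requirement aims for.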
A potential risk here is that a stock warrant could
penalize a bank that exhibits high returns generated by innovation or better management rather than risky leveraged
investments. There are also political economy issues. Bank
warrants can tip the balance of power away from bank managers, and a proposal to require warrants is likely to be
met with opposition. Additionally, by definition a stock warrant requirement would work best with a lower equity
capital requirement; high capital requirements choke
off investment. Yet it’s likely that a proposal to allow a lowering of capital requirements would be met with skepticism
today.
As the economic downturn unfolds, the debate about
the correct regulatory approach to capital buffers and the
best way to integrate market-based mechanisms will
continue. Bank regulation, by its nature, is often backward-looking, adjusting to new financial innovations after they
become widespread. Some critics question whether the
attempts to continually modify capital standards can ever
keep up.
Nevertheless, capital ratios are quite firmly embedded in
U.S. law now. Yet it remains an open question whether the
spirit of the Basel II standards will survive intact. The Basel
Committee responded to the situation in the worldwide
financial markets in a November 2008 press release that recognized the “fundamental weaknesses” of Basel II and
proposed a goal of modifying the standards once again by
the end of 2009.
RF

READINGS
Burhouse, Susan, John Feid, George French, and Keith Ligon.
“Basel and the Evolution of Capital Regulation: Moving Forward,
Looking Back.” Washington, D.C.: Federal Deposit Insurance
Corporation, Jan. 14, 2003.
Calomiris, Charles W., and Robert E. Litan. “Financial Regulation
in a Global Marketplace.” Brookings-Wharton Papers on Financial
Services: 2000. Washington, D.C.: Brookings Institution Press,
pp. 283-323.
Kroszner, Randall S., and Philip E. Strahan. "Regulatory Incentives and the Thrift Crisis: Dividends, Mutual-to-Stock Conversions, and Financial Distress." Journal of Finance, September 1996, vol. 51, no. 4, pp. 1285-1319.
Kupiec, Paul, and James M. O’Brien. “The Pre-Commitment
Approach: Using Incentives to Set Market Risk Capital
Requirements.” Federal Reserve Board Finance and Economics
Discussion Series Paper 1997-14, March 1997.
Prescott, Edward S. "Regulating Bank Capital Structure to Control Risk." Federal Reserve Bank of Richmond Economic Quarterly, Summer 2001, vol. 87, no. 3, pp. 35-52.
Rodriguez, L. Jacobo. “International Banking Regulation: Where’s
the Market Discipline in Basel II?” Cato Institute Policy Analysis
no. 455, Oct. 15, 2002.


JARGON ALERT

Underemployment

BY DAVID VAN DEN BERG

The monthly unemployment rate most people are familiar with tracks people who are out of work and searching for new jobs. However, it's only one of six measures of unemployment published by the Bureau of Labor Statistics (BLS). The BLS also produces a broader measurement, sometimes referred to as the "underemployment rate" or the "U-6 rate" after the dataset on which it is based. The U-6 rate, according to the BLS, includes the officially unemployed plus all marginally attached workers and people employed part-time for economic reasons as a share of the civilian labor force plus all marginally attached workers. Through June 2009, the underemployment rate reached 16.5 percent, the highest since the BLS redesigned its unemployment figures and created the U-6 in 1994. In 1993 the BLS stopped the U-7 data set, which was previously its broadest measure of unemployment.

Workers classified as "marginally attached" and "discouraged workers" are included in the underemployment calculation. They are typically just a small portion of the people outside the labor force as measured by the BLS, which defines the labor force as the sum of all employed and unemployed people. Employed people performed any work for pay or profit during the survey week, did at least 15 hours of unpaid work in a family-owned business, or were absent from work because of bad weather, illness, vacation, industrial disputes, or various personal reasons. People who do not have a job but have actively searched for one in the last four weeks and are immediately available for work are counted as unemployed.

Typically most people not in the labor force do not seek employment because they're retired, attending to family responsibilities, going to school, or are physically unable to work. The marginally attached are neither employed nor looking for work but have sought work in the past year and are available immediately. Family responsibilities or transportation concerns can keep marginally attached workers out of the work force. Discouraged workers are not employed and not seeking work because they believe nothing is available for them.

All six unemployment measures the Bureau of Labor Statistics publishes follow a similar pattern: Both the underemployment and unemployment rates move in the same direction. What is perhaps most relevant to economic researchers is how these measures move relative to each other, says Jason Faberman, an economist at the Federal Reserve Bank of Philadelphia. For general audiences, the fact that the official unemployment rate follows the same trend as alternative unemployment figures makes things easy, says Faberman. "For a lay person, what this tells you is that looking at the unemployment number is going to give you the same story in relative terms as looking at the underemployment number." Because they move in the same direction, both numbers will tell the same general story over time, Faberman says.

Unemployment measures and other labor market indicators are derived from data generated by the Current Population Survey (CPS), sent to 60,000 households a month. Before the 1994 changes to the survey, the BLS sent the old and new versions of the questionnaire simultaneously between July 1992 and December 1993. The new questionnaire produced an unemployment rate half a percentage point higher for 1993. Survey participants faced more extensive questioning under the new questionnaire, which generally registered more labor force activity, especially for workers who traditionally have more part-time or irregular work force participation. That's why the new survey yielded a higher labor force participation rate. It also revealed longer durations of unemployment, a higher proportion of unemployed people re-entering the work force, and a lower proportion of new entrants.

Because the U-6 was first published in 1996, it is not possible to compare recent underemployment rates to those in earlier severe downturns such as the 1982 recession. For instance, marginally attached workers were not included in unemployment measures prior to the 1994 redesign. The BLS also tightened the definition of discouraged workers, which reduced their numbers considerably after the CPS redesign. However, one element of the underemployment rate can be compared to earlier downturns — the level of involuntary part-time workers. That figure, which can be traced back to 1955, is higher today than at any point since then.

Over time, the gap between unemployment and underemployment rates has remained fairly constant in percentage terms, according to BLS data. However, the severity of the current recession could produce some significant short-term structural changes in the labor market. Monitoring the unemployment and underemployment rates will be important both during the current downturn and the recovery following it.
RF

ILLUSTRATION: TIMOTHY COOK

RESEARCH SPOTLIGHT

Have Free Markets Failed Us?

BY STEPHEN SLIVINSKI

"The Age of Milton Friedman" by Andrei Shleifer. Journal of Economic Literature, March 2009, vol. 47, no. 1, pp. 123-135.

Is it merely a coincidence that living standards rose sharply and absolute poverty declined while the world embraced free market policies beginning in 1980? That's the question Harvard University economist Andrei Shleifer ponders in this essay.

He names the period between 1980 and 2005 the "Age of Milton Friedman" to acknowledge the adoption — at least in modified form — of many of the late Nobel laureate's market-oriented proposals. The policies pursued in that spirit include capital market deregulation, the lowering of trade barriers, inflation-conscious monetary policy, the adoption of flexible exchange rates, and tax cuts.

It's hard to argue that these policies didn't at least have some positive effect. As Shleifer points out, they corresponded to substantial increases in the rate of growth in per-capita GDP worldwide, and it's quite likely that they were the main drivers of the growth. The countries for which market liberalization policies provided the best relative return were those that were once the most heavily regulated, such as the countries of East and South Asia. (Aggregate growth trends mask a few key differences between regions. Rapid growth in Asia towers above slow growth in Latin America and stagnation in Africa.)

The triumph over runaway inflation and punitively high tax rates was evident during the Age of Friedman. The world median annual inflation rate declined from 14.3 percent in 1980 to 4.1 percent in 2005. Marginal income tax rates dropped from a population-weighted average of 65 percent in 1980 to 36.7 percent in 2005.

Markets became more international in scope due to a weakening of trade barriers too. Tariff rates fell from a population-weighted world average of 43 percent in 1980 to 13 percent in 2004. As formal goods markets became more free, black market activity declined.

The benefits of abandoning dirigiste policies have become clear to many in the developed world and this, in turn, has raised people's hopes and expectations. Shleifer recounts a trip he took to Chile a decade ago. At that time, the ambition of policymakers was to overtake Argentina. In 2007, policymakers wanted to match the growth of Australia and New Zealand.

Yet some scholars, most notably Columbia University economist and Nobel Prize winner Joseph Stiglitz, remain skeptical that free market policies are, in fact, good for the countries adopting them. For instance, these economists do not necessarily look askance at capital controls or see price stability as an important precondition to economic growth.

A recent book co-authored by Stiglitz, surveyed by Shleifer in this essay, seeks to make the case for significant state intervention in developing economies. Yet, Shleifer argues, the evidence offered is not persuasive. On inflation, for instance, their argument often amounts to a straw man, Shleifer maintains. Stiglitz and his co-authors see advocates of zero inflation as their main opposition when that point of view isn't held by most market-oriented economists, who argue that a certain level of inflation might need to be tolerated, at least in the short run. Meanwhile, Stiglitz and his co-authors are incautious when they "express little concern for the huge costs that high inflation has brought to countries that lost control of their fiscal policy, including many Latin American and transition economies."

Stiglitz and his co-authors also favor capital controls as a way to stem swings in speculative capital investment. As Shleifer notes, they lean heavily on the example of Malaysia as a country that imposed such controls and was able to escape the Asian financial crisis of the 1990s. Yet that example is still controversial, as recent analysis has failed to find that these controls had macroeconomic benefits. Instead, Shleifer suggests that such controls encouraged misallocation of capital and political corruption.

Shleifer reminds us that we must be careful to learn the right lessons from the experiences of developing economies. The transition to a more free market system "has taught us that economic and political disorganization, combined with obsolete human capital of both economic agents and politicians, can sharply slow down the economic turnaround." The other obvious problem facing the developing world now, he writes, is the lack of new business investment — a phenomenon that must be tied to the lack of institutional barriers to arbitrary political power, which spawns predatory regulatory and fiscal policies.

"On strategy, economics got the right answer: free market policies, supported but not encumbered by the government, deliver growth and prosperity," Shleifer concludes. "And while a lot has been accomplished in the last quarter century, a lot remains to be done." In short, the principles to which Milton Friedman devoted his career can continue to provide a suitable policy guide in the future.
RF
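The U-6 definition in the Jargon Alert above reduces to a short calculation. A sketch with illustrative counts in thousands (the figures are invented to produce a rate near the 16.5 percent cited in the article, not official BLS data):

```python
def u6_rate(unemployed, marginally_attached, part_time_economic, labor_force):
    """U-6: unemployed plus marginally attached plus involuntary
    part-time workers, as a share of the civilian labor force plus the
    marginally attached."""
    numerator = unemployed + marginally_attached + part_time_economic
    denominator = labor_force + marginally_attached
    return numerator / denominator

# Illustrative counts, in thousands of workers:
rate = u6_rate(unemployed=14_700, marginally_attached=2_200,
               part_time_economic=9_000, labor_force=154_900)
print(f"{rate:.1%}")  # 16.5%
```

Note that the marginally attached enter both the numerator and the denominator, which is why U-6 always sits above, and moves with, the official rate.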

JARGONALERT

RESEARCHSPOTLIGHT

Underemployment

Have Free Markets Failed Us?

BY DAV I D VA N D E N B E RG

T

10

Region Focus • Spring 2009

Reserve Bank of Philadelphia. For general audiences, the
fact that the official unemployment rate follows the same
trend as alternative unemployment figures makes things
easy, says Faberman. “For a lay person, what this tells you is
that looking at the unemployment number is going to give
you the same story in relative terms as looking at the underemployment number.” Because they move in the same
direction, both numbers will tell the same general story over
time, Faberman says.
Unemployment measures and other labor market indicators are derived from data generated by the Current
Population Survey (CPS), sent to 60,000 households a
month. Before the 1994 changes to the survey, the BLS sent
the old and new versions of the questionnaire simultaneously between July 1992
and December 1993. The new questionnaire produced an unemployment rate
half a percentage point higher for 1993.
Survey participants faced more extensive questioning under the new
questionnaire, which generally registered
more labor force activity, especially for
workers who traditionally have more
part-time or irregular work force participation. That’s why the new survey yielded
a higher labor force participation rate. It
also revealed longer durations of unemployment, a higher proportion of
unemployed people re-entering the work force, and a lower
proportion of new entrants.
Because the U-6 was first published in 1996, it is not possible to compare recent underemployment rates to those in
earlier severe downturns such as the 1982 recession. For
instance, marginally attached workers were not included in
unemployment measures prior to the 1994 redesign. The BLS
also tightened the definition of discouraged workers, which
reduced their numbers considerably after the CPS redesign.
However, one element of the underemployment rate can be
compared to earlier downturns — the level of involuntary
part-time workers. That figure, which can be traced back to
1955, is higher today than at any point since then.
Over time, the gap between unemployment and underemployment rates has remained fairly constant in
percentage terms, according to BLS data. However, the
severity of the current recession could produce some significant short-term structural changes in the labor market.
Monitoring the unemployment and underemployment rates
will be important both during the current downturn and the
recovery following it.
RF

Is it merely a coincidence that living standards rose sharply and absolute poverty declined while the world embraced free market policies beginning in 1980? That's the question Harvard University economist Andrei Shleifer ponders in this essay.

He names the period between 1980 and 2005 as the "Age of Milton Friedman" to acknowledge the adoption — at least in modified form — of many of the late Nobel laureate's market-oriented proposals. The policies pursued in that spirit include capital market deregulation, the lowering of trade barriers, inflation-conscious monetary policy, the adoption of flexible exchange rates, and tax cuts.

It's hard to argue that these policies didn't at least have some positive effect. As Shleifer points out, they corresponded to substantial increases in the rate of growth in per-capita GDP worldwide, and it's quite likely that they were the main drivers of the growth. The countries for which market liberalization policies provided the best relative return were those that were once the most heavily regulated, such as the countries of East and South Asia. (Aggregate growth trends mask a few key differences between regions. Rapid growth in Asia towers above slow growth in Latin America and stagnation in Africa.)

The triumph over runaway inflation and high punitive tax rates was evident during the Age of Friedman. The world median annual inflation rate declined from 14.3 percent in 1980 to 4.1 percent in 2005. Marginal income tax rates dropped from a population-weighted average of 65 percent in 1980 to 36.7 percent in 2005.

Markets became more international in scope due to a weakening of trade barriers too. Tariff rates fell from a population-weighted world average of 43 percent in 1980 to 13 percent in 2004. As formal goods markets became more free, black market activity declined.

The benefits of abandoning dirigiste policies have become clear to many in the developed world and this, in turn, has raised people's hopes and expectations. Shleifer recounts a trip he took to Chile a decade ago. At that time, the ambition of policymakers was to overtake Argentina. In 2007, policymakers wanted to match the growth of Australia and New Zealand.

Yet some scholars, most notably Columbia University economist and Nobel Prize winner Joseph Stiglitz, remain skeptical that free market policies are, in fact, good for the countries adopting them. For instance, these economists do not necessarily look askance at capital controls or see price stability as an important precondition to economic growth.

A recent book co-authored by Stiglitz, surveyed by Shleifer in this essay, seeks to make the case for significant state intervention in developing economies. Yet, Shleifer argues, the evidence offered is not persuasive. On inflation, for instance, their argument often amounts to a straw man, Shleifer maintains. Stiglitz and his co-authors see advocates of zero inflation as their main opposition when that point of view isn't held by most market-oriented economists, who argue that a certain level of inflation might need to be tolerated, at least in the short run. Meanwhile, Stiglitz and his co-authors are incautious when they "express little concern for the huge costs that high inflation has brought to countries that lost control of their fiscal policy, including many Latin American and transition economies."

Stiglitz and his co-authors also favor capital controls as a way to stem swings in speculative capital investment. As Shleifer notes, they lean heavily on the example of Malaysia as a country that imposed such controls and was able to escape the Asian financial crisis of the 1990s. Yet that example is still controversial, as recent analysis has failed to find that these controls had macroeconomic benefits. Instead, Shleifer suggests that such controls encouraged misallocation of capital and political corruption.

Shleifer reminds us that we must be careful to learn the right lessons from the experiences of developing economies. The transition to a more free market system "has taught us that economic and political disorganization, combined with obsolete human capital of both economic agents and politicians, can sharply slow down the economic turnaround." The other obvious problem facing the developing world now, he writes, is the lack of new business investment — a phenomenon that must be tied to the lack of institutional barriers to arbitrary political power, which spawns predatory regulatory and fiscal policies.

"On strategy, economics got the right answer: free market policies, supported but not encumbered by the government, deliver growth and prosperity," Shleifer concludes. "And while a lot has been accomplished in the last quarter century, a lot remains to be done." In short, the principles to which Milton Friedman devoted his career can continue to provide a suitable policy guide in the future. RF

"The Age of Milton Friedman" by Andrei Shleifer. Journal of Economic Literature, March 2009, vol. 47, no. 1, pp. 123-135.

ILLUSTRATION: TIMOTHY COOK

The monthly unemployment rate most people are
familiar with tracks people who are out of work and
searching for new jobs. However, it’s only one of
six measures of unemployment published by the Bureau
of Labor Statistics (BLS). The BLS also produces a broader
measurement, sometimes referred to as the “underemployment rate” or the “U-6 rate” after the dataset on which
it is based. The U-6 rate, according to the BLS, includes the
officially unemployed plus all marginally attached workers
and people employed part-time for economic reasons as a
share of the civilian labor force plus all marginally attached
workers. Through June 2009, the underemployment rate
reached 16.5 percent, the highest since the BLS redesigned
its unemployment figures and created the
U-6 in 1994. In 1993 the BLS stopped the
U-7 data set, which was previously its
broadest measure of unemployment.
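The U-3 and U-6 definitions above amount to simple ratios. A minimal sketch of the arithmetic, using invented round numbers (in millions) rather than actual BLS counts:

```python
# Sketch of the official (U-3) and broad (U-6) unemployment rates from
# the BLS definitions quoted above. All counts are invented round
# numbers, not actual BLS data.

def u3_rate(unemployed, labor_force):
    """Official unemployment rate: unemployed / civilian labor force."""
    return 100 * unemployed / labor_force

def u6_rate(unemployed, marginally_attached, part_time_econ, labor_force):
    """U-6: unemployed plus marginally attached plus part-time for
    economic reasons, over the labor force plus marginally attached."""
    numerator = unemployed + marginally_attached + part_time_econ
    denominator = labor_force + marginally_attached
    return 100 * numerator / denominator

labor_force = 150.0   # employed + unemployed
unemployed = 14.0     # jobless, searched in last four weeks, available
marginal = 2.0        # sought work in past year, available immediately
part_time = 9.0       # part-time for economic reasons

print(f"U-3: {u3_rate(unemployed, labor_force):.1f}%")                        # 9.3%
print(f"U-6: {u6_rate(unemployed, marginal, part_time, labor_force):.1f}%")   # 16.4%
```

Note that the broader measure's denominator also grows, so U-6 is not simply U-3 plus the extra groups over the same base.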
Workers classified as “marginally
attached” and “discouraged workers” are
included in the underemployment calculation. They are typically just a small portion
of the people outside the labor force as
measured by the BLS, which defines the
labor force as the sum of all employed and
unemployed people. Employed people performed any work for pay or profit during
the survey week, did at least 15 hours of
unpaid work in a family-owned business, or
were absent from work because of bad weather, illness, vacation, industrial disputes, or various personal reasons. People
who do not have a job but have actively searched for one in
the last four weeks and are immediately available for work
are counted as unemployed.
Typically most people not in the labor force do not seek
employment because they’re retired, attending to family
responsibilities, going to school, or are physically unable to
work. The marginally attached are neither employed nor
looking for work but have sought work in the past year and
are available immediately. Family responsibilities or transportation concerns can keep marginally attached workers
out of the work force. Discouraged workers are not
employed and not seeking work because they believe nothing is available for them.
All six unemployment measures the Bureau of Labor
Statistics publishes follow a similar pattern: Both the underemployment and unemployment rates move in the same
direction. What is perhaps most relevant to economic
researchers is how these measures move relative to each
other, says Jason Faberman, an economist at the Federal

BY STEPHEN SLIVINSKI

Spring 2009 • Region Focus

11

POLICYUPDATE

Are CEOs Paid Too Much?
BY DAVID VAN DEN BERG

In June, President Obama announced the appointment
of a Washington attorney as the administration’s new
“special master” for executive compensation. Kenneth
Feinberg, the appointee, will oversee pay packages of
company executives whose firms are receiving government
assistance.
Feinberg will review and approve any compensation for
the senior executives and the next 20 highest-paid employees at seven firms that received money through the federal
government's TARP program. Those companies include
Bank of America, Citigroup, AIG, General Motors, GMAC,
Chrysler, and Chrysler Financial, according to the Treasury
Department. Feinberg’s duties also include advising 80 more
financial companies that received government money about
executive pay.
Part of the debate in Washington about executive pay has
centered on the question of whether CEOs are overpaid
relative to their contribution to firm value. Another question has revolved around whether their compensation
packages create incentives for them to take excessive risks.
Across the corporate sector, the size of executive compensation packages has soared. The gap between the salaries
of the workers and the CEO of a corporation has widened
considerably. In 1994, the ratio of median CEO pay to
median production worker pay was 90 to 1, according to a
Congressional Research Service report. In 2005, that ratio
had increased to 179 to 1.
Executive compensation packages often contain multiple
elements. CEOs can receive company stock, stock options,
deferred compensation, long-term bonuses, and nonmonetary perks. Not all of these are new. Stock options have been
an important element of CEO pay since the 1950s, although
executives receive those more frequently now.
In a 2008 paper, New York University economists Xavier
Gabaix and Augustin Landier write: “[T]he sixfold increase
of U.S. CEO pay between 1980 and 2003 can be fully attributed to the sixfold increase in market capitalization of large
companies during that period.” Gabaix says that this suggests the market for CEOs works well and there are only a
few egregious examples of executives getting paid more than
you would expect based on their contributions to a company’s success.
CEOs may operate in a kind of superstar market, which
the late University of Chicago labor economist Sherwin
Rosen describes as one in which “relatively small numbers of
people earn enormous amounts of money and dominate the
activities in which they engage.” The differences in talent
levels among top executives are quite small, Gabaix and
Landier argue. However, those small differences can lead to
big gaps in compensation and are magnified by firm size. In
their paper, they note that the first CEO on the list earns
over 500 percent more than the 250th ranked executive.
The more-talented CEOs seem to add more value to
their companies than the less-talented ones. Marko Tervio
of the University of California at Berkeley tried to determine what would happen if the managers of the 1,000
largest U.S. companies in 2004 had been replaced by less-skilled executives, such as the CEO of the company at the
bottom of the list. The combined market value of the top
firms would have been perhaps $25 billion lower. Tervio’s calculations imply talented CEOs contributed $17 million to
$21 million, or 15 percent of the total market value of the
largest 1,000 firms, writes Arantxa Jarque in a 2008 paper for
the Richmond Fed’s Economic Quarterly.
Economists differ on how closely the executive’s pay
should be linked to the company’s performance. For
instance, stock options may prove problematic in CEO
compensation packages, Gabaix says, by encouraging excessive risk taking that only temporarily bolsters a firm’s share
price. In addition, a large decline in share price can render
the stock options worthless and granting new options or
re-pricing existing ones may seem to reward an executive for
failure.
Part of the CEO’s compensation should not be subject to
risk, providing some insurance against bad performance due
to factors outside of his control, Jarque writes. Failure to
provide that assurance would make it difficult to recruit
executives.
In a May 2009 paper, Gabaix and three co-authors
propose one possible solution for improving incentive structures. They suggest awarding executive pay through
“dynamic incentive accounts.” Under the plan, CEOs would
see their pay escrowed each year and would have no immediate access to most of it. A constant percentage of the
executive’s pay would be invested in company stock and the
remainder in cash. The portfolio would be continuously
rebalanced so that the portion of company stock is sufficient
to induce effort at minimum risk to the executive. The executive would receive small portions of the account gradually,
and that gradual vesting would continue even after an executive’s departure. This could discourage an executive from
behaving badly, such as using accounting tricks to inflate the
company’s short-run stock price before cashing out and
leaving the firm in shambles.
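A back-of-the-envelope sketch of how such an account might evolve over time; the target stock fraction and vesting rate below are illustrative assumptions, not parameters from the Gabaix paper:

```python
# Sketch of a "dynamic incentive account" as described above: pay is
# escrowed, a fixed fraction of the account is held in company stock,
# the account is rebalanced as the stock price moves, and a small slice
# vests each period. The 60 percent stock target and 10 percent vesting
# rate are invented for illustration.

def simulate_account(salary_per_period, stock_prices,
                     target_stock_frac=0.6, vest_frac=0.1):
    shares, cash, vested = 0.0, 0.0, 0.0
    for price in stock_prices:
        cash += salary_per_period          # new pay is escrowed, not paid out
        total = shares * price + cash
        # Rebalance so the stock portion equals the target fraction.
        shares = target_stock_frac * total / price
        cash = (1 - target_stock_frac) * total
        # A small slice of the whole account vests each period.
        total = shares * price + cash
        vested += vest_frac * total
        shares *= (1 - vest_frac)
        cash *= (1 - vest_frac)
    return vested, shares * stock_prices[-1] + cash

vested, escrowed = simulate_account(1.0, [10.0, 12.0, 9.0, 11.0])
print(f"vested so far: {vested:.2f}, still escrowed: {escrowed:.2f}")
```

Because most of the account stays escrowed and exposed to the stock price, a CEO who inflated the share price before leaving would see the later decline hit the unvested balance.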
In the end, structuring executive compensation in a way
that aligns the incentives of the CEO with those of the
company and its shareholders can be a tricky task — but one
crucial to well-functioning markets.
RF

AROUNDTHEFED

What Prolonged the Great Depression?
BY MATTHEW CONNER

“Capital Taxation During the U.S. Great Depression.” Ellen
R. McGrattan, Federal Reserve Bank of Minneapolis
Working Paper 670, April 2009.

While most economists would argue that the main
cause of the Great Depression was unwise monetary policies, such policies alone cannot adequately explain
the severity and duration of the crisis. In this paper Ellen
McGrattan of the Minneapolis Fed seeks to prove that
some fiscal policies during the period had more than a
small impact. One key insight of the paper is that prior
studies on this topic have assumed that the only sort of
capital taxed during this period was profit. Yet the big
change in policy was actually a substantial increase in the
taxation of dividends in the Revenue Act of 1932.
As McGrattan suggests, even the anticipation of dividend taxation — a proposal publicly suggested by President
Herbert Hoover as early as 1930 — could have had an effect
on investment in that period. In addition, the studies that
suggest tax increases had little or no effect note that few
people actually paid income taxes during this period.
McGrattan notes that while this is true, the taxpayers who
did pay those taxes earned almost all of their income
through dividends.
Adding dividend taxation to the standard growth model
on which the majority of research on this topic is based,
McGrattan discovers that a large fraction of the observed
declines in real GDP between 1929 and 1933 is explained by
her tax-inclusive model. The decline in production hours per capita during this period can also be
explained by her model.


“The Olympic Effect.” Andrew K. Rose and Mark M. Spiegel,
Federal Reserve Bank of San Francisco Working Paper 2009-06, March 2009.

The right to host a mega-event such as the Olympics
or the World Cup is seen as an honor to the nation
chosen, but economists are skeptical about the economic
benefits. In practice, these events usually end up imposing large costs on their hosts that are often not fully
recovered through revenue during the event or from the
structures that are left over afterward.
While it is commonly asserted that hosting the Olympics
will promote a nation’s exports, economists Andrew Rose of
the University of California at Berkeley and Mark Spiegel of
the San Francisco Fed examine the empirical evidence. They
find a large positive effect of the Summer Olympics on both
exports and overall trade. (The Winter Olympics are not


studied because fewer countries are able to host
that event.) The authors also find a strong positive effect
on trade from other mega-events such as the World Cup.
The research shows that Olympic host countries have seen
up to a 30 percent increase in exports. Yet the authors
also find an almost equal increase in trade in the nations that
vied for the right to host the event but were not chosen.
This implies that the effect on trade comes not from
actually hosting the games but from bidding for them in the
first place.
The authors speculate that this increase results from the
signal that bidding to host the event sends to the world. This
“signaling strategy” conveys the country’s interest in trade
liberalization. This idea is illustrated by the fact that just two
months after being awarded the right to host the 2008
Summer Games in July 2001, China successfully concluded
negotiations with the WTO, thus formalizing its commitment
to trade liberalization.
“Subprime Mortgage Pricing: The Impact of Race, Ethnicity,
and Gender on the Cost of Borrowing.” Andrew Haughwout,
Christopher Mayer, and Joseph Tracy, Federal Reserve Bank
of New York Staff Report 368, April 2009.

Some have argued that during the peak period for subprime lending (2004 to 2006) minority borrowers were
saddled with higher interest rates than nonminority
borrowers. The authors of this study test that claim using a
new sample that merges data on more than 75,000
adjustable rate mortgages with information on the race,
ethnicity, and gender of the borrowers. This dataset allows
them to examine the differences in mortgage lending while
controlling for both the risk profile of the mortgage and the
characteristics of the neighborhood in which the property
was located.
In contrast to some previous findings, their results
show that there is no evidence of adverse pricing for most
minority demographics. If anything, many minority
borrowers actually received slightly lower rates. Black and
Hispanic borrowers paid a slightly lower initial mortgage
rate than other borrowers, although Asian borrowers paid a
slightly higher rate. No appreciable differences were found
in lending terms based on gender. Finally, the adjustable
rates on the mortgages did not “reset” at higher levels for
minority borrowers relative to nonminority borrowers when
one controls for risk and location. The authors conclude
that these results suggest the possibility that subprime lending was a credit innovation that did serve as a positive credit
supply shock in locations with more minority residents. RF




Can regulatory reforms adequately realign the incentives of credit rating agencies?
BY RENEE COURTOIS

Just as consumer credit companies like Experian and
Equifax issue credit scores for individuals, in bond
markets credit rating agencies evaluate the risk level
of securities that are issued by corporations, local governments, and other entities to raise money. The processes
have substantial differences, but their purpose is largely
the same: to reduce asymmetric information in financial
markets that can otherwise raise the cost of connecting
borrowers and lenders. This is a valuable market function.
In the last decade, rating agencies have been an essential
part of the process of mortgage securitization, or turning
home mortgages into bonds that were sold throughout the
global financial market. Ratings opened up securities backed
by mortgages, including many subprime mortgages, to a
larger pool of investors than ever before, especially ones
constrained by regulations to hold only assets of a certain
safety level. This allowed profits from the booming housing
market to be shared throughout the financial system.
Like lenders and investors, rating agencies shared in that
profit. The “Big Three” rating agencies of Standard & Poor’s
(S&P), Moody’s Investors Service, and Fitch Ratings, which
together represent more than 95 percent of the market share
in the rating industry, made record profits rating mortgage-backed securities: The Big Three's revenue from ratings
doubled from $3 billion to $6 billion during the 2002 to
2007 heyday of subprime lending and securitization.
In hindsight, many of these ratings did not do a good job
of predicting the performance of the securities. The financial market turmoil — related to the declining housing
market — that started in the summer of 2007 led the rating
agencies to revise ratings downward in record numbers.
In 2007 Moody’s downgraded 31 percent of its asset-backed
collateralized debt obligations (CDOs), most of which
were based on mortgages. Just over six months into the
crisis, S&P had downgraded 44 percent of the residential
mortgage-backed securities (RMBS) based on subprime
mortgages that it had rated from 2005 through the third
quarter of 2007. In 2008 Fitch downgraded 51 percent of its
ratings on residential mortgage-backed securities.
In each of these cases, a large proportion (by historical
standards) of the downgrades was for securities rated AAA
— the highest possible rating, typically associated with
virtually zero default risk. The difficulty of pricing risks in
what had become a worldwide mortgage-backed security
market is ultimately what amplified the housing downturn
and made it a global problem.
Rating agencies were by no means the only parties
that underestimated the riskiness of these securities.
Nonetheless, the role that rating agencies played in the securitization process has led to an intense discussion about
reform within the rating industry, which will depend
critically on understanding the incentives these agencies
face to produce accurate ratings.

The Rating Process
The grade (called a rating) that a rating agency issues represents the probability the security issuer will default on the
bond it is issuing. For the most part, issuers of the securities
pay for the ratings to be developed, and then the majority of
ratings are published on the rating agency’s Web site for

public consumption free of charge. Rating agencies rate
virtually every corner of the financial market, from bonds
issued by insurance companies to foreign governments to
corporations. There are 10 official rating agencies in the
United States and more than 60 rating agencies worldwide.
The rating agencies are private, for-profit entities; of the Big
Three, only Moody’s is a publicly traded company.
The industry has been increasingly woven into financial
markets since its birth in 1909, when John Moody began
issuing public ratings of railroad bonds. Rating agencies
were virtually unregulated by the federal government until
2007, when the 2006 Credit Rating Agency Reform Act was
implemented. The act gave the Securities and Exchange
Commission (SEC), a government regulatory body that acts
as an advocate for investors, the authority to force the agencies to create certain procedures and investigate whether the
agencies adhered to those procedures. But the SEC drew a
very careful line prohibiting it from auditing the ratings
themselves, or forcing agencies to modify the sophisticated
methodologies used to produce them.
Rating agencies have historically centered their business
on grading bonds issued by a single entity, such as a corporation or a local government, to raise money. However, in the
last decade the agencies have gained an increasing amount of
their revenue, between one-third and one-half depending on
the agency, from rating a relatively new financial device
called “structured finance” bonds — so named because they
are “structured” out of other assets like mortgages. These
are much more complex and harder to rate because they
entail assessing the risk of many underlying assets.
A large class of structured finance products is residential mortgage-backed securities (RMBS).
To grade an RMBS, a rating agency works closely with
the security issuer, often an investment bank, to obtain
background information on the security, including the characteristics — like the borrowers’ FICO score, geographic
location, loan-to-value ratio, and whether income documentation was provided — of each of the up to several thousand
mortgages in the RMBS. Based on the risk characteristics of
the mortgages in the RMBS, the rating agency uses sophisticated mathematical models and some subjective judgment
to determine the probability that the issuer will default.
Based on that probability, the rating agency assigns a grade
to the security.
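The final step described above — turning an estimated default probability into a letter grade — can be sketched as a simple threshold lookup. The cutoffs below are invented for illustration; actual agency scales are proprietary and far more nuanced.

```python
# A sketch (not any agency's actual scale) of the final rating step:
# mapping an estimated probability of default (PD) to a letter grade.
# The thresholds here are invented for illustration.

def assign_grade(pd_estimate: float) -> str:
    """Map an estimated default probability to a coarse letter grade."""
    scale = [            # (max PD, grade), safest first; values invented
        (0.001, "AAA"),
        (0.01, "AA"),
        (0.05, "A"),
        (0.10, "BBB"),   # lowest "investment grade" tier in this sketch
        (0.25, "BB"),
        (0.50, "B"),
    ]
    for threshold, grade in scale:
        if pd_estimate <= threshold:
            return grade
    return "CCC"

print(assign_grade(0.0005))  # a very safe security -> AAA
print(assign_grade(0.20))    # a speculative one -> BB
```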
The grade is then provided to the issuer and, in the case
of the Big Three, published on their Web sites. The agency
continues to monitor the likelihood that the security issuer
will default, updating its rating as necessary. Some rating
agencies make these updates frequently to keep their ratings
current, while others, especially the Big Three, intentionally
do so only periodically to avoid erroneously adjusting ratings
in response to temporary blips in financial markets.

A Structural Problem
No other industry is structured quite like the credit rating
industry. Since the 1930s, certain financial institutions, such
as insurance companies, banks, pension funds, and money

market mutual funds, have been required to hold only securities that have been deemed “investment grade” by a rating
agency. Since then, rating agencies have been a part of the
regulatory apparatus. In 1975 regulations also began setting
minimum capital requirements for certain financial institutions based on the grades of the assets in their portfolio —
if a regulated financial institution held risky assets, regulators would require it to keep a little extra cash on hand as
protection.
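That ratings-based capital rule can be sketched as a simple lookup: the riskier the rating on an asset, the larger the required cushion. The capital weights below are invented, not any regulator's actual schedule.

```python
# A stylized sketch of ratings-based capital requirements. The weights
# are invented for illustration, not any regulator's actual schedule.

CAPITAL_WEIGHT = {
    "AAA": 0.01, "AA": 0.01, "A": 0.02,   # "investment grade": small charge
    "BBB": 0.04, "BB": 0.08, "B": 0.16,   # riskier grades: bigger charge
}

def required_capital(portfolio):
    """portfolio: list of (rating, dollar amount) pairs."""
    return sum(amount * CAPITAL_WEIGHT[rating] for rating, amount in portfolio)

# $1,000 of AAA paper and $100 of BB paper:
print(required_capital([("AAA", 1000), ("BB", 100)]))  # 10.0 + 8.0 = 18.0
```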
But the SEC began to worry that bogus rating firms
would emerge and issue beneficial ratings for anyone willing
to pay. This compelled it to spell out exactly whose “grades”
counted. For that, in 1975 the SEC created the nationally
recognized statistical rating organization (NRSRO) designation for rating agencies. The Big Three were granted the
NRSRO title by the SEC, and NRSROs were formally written into SEC and other regulations. It followed that many
investors, even those whose portfolios weren’t regulated
in this way, would choose to also base their investments
in part on ratings, further cementing demand for rating
agencies’ services.
In these early days, rating agencies were paid by the
investors who were bound by regulation to use ratings in creating their portfolios. However, this changed around the
time the SEC created the NRSRO category. With simple
photocopying, those who did not pay could have access to
the thick manuals of ratings published by the Big Three,
introducing the “free rider” problem to the rating industry.
Around the same time, the large bankruptcy of Penn
Central Railroad — one of the largest issuers of commercial
paper at the time — left issuers of securities desperate to
prove to investors that their paper was sound.
The Big Three realized that issuers of securities, as
opposed to investors, were ready and willing to pay for ratings, and they each moved to an “issuer-pays” structure.
Currently more than 98 percent of all credit ratings issued
by NRSROs are paid for by the issuer. The remaining ratings
are paid for by subscribers, usually investors, which are kept
private.
The issuer-pays model has not been easy for everyone to
swallow. Critics say the rating agency being paid directly by
the party that it is evaluating presents a conflict of interest
because both sides have incentive for ratings to be as optimistic as possible. There is nothing preventing issuers from
shopping around among rating agencies, or at least threatening to if they think they can get a higher rating elsewhere.
Rating agencies — which are paid according to the quantity of securities they rate — in turn have incentive to attract
the business of issuers by providing ratings that are inflated,
according to critics.
This structure doesn’t guarantee that rating agencies
will inflate ratings, but it certainly presents incentive for
them to do so. The surprising volume of rating downgrades that have taken place since the housing downturn, coupled with
anecdotal reports like a 2008 SEC investigation that found
rating analysts participated in fee negotiations with issuers,

Spring 2009 • Region Focus

15

highlight the possibility that conflicts of interest might
have affected the rating process in recent years.
For all its potential conflicts of interest, the issuer-pays
structure apparently posed no large problem until the recent
boom in subprime lending. There are two probable reasons
for this. The first is that securities grew increasingly complex during this time period (see sidebar), which encouraged
market participants to skimp on their own due diligence in
favor of over-relying on the straightforward simplicity of
ratings. Further, rating agency analysts’ models may not have
kept pace with the mounting complexity of RMBS and
CDOs, causing them to underestimate some of the risk. In
an open April 2009 SEC meeting on rating agencies, Daniel
Curry, head of Canadian rating agency DBRS’s U.S. operations, referred to increasing complexity as a “smokescreen”
that obscured any inaccuracies of the ratings.
Second, growth in securitization from the mortgage market was exceptional — CDO issuance grew from about $158
billion in 2004 to more than $520 billion in 2006. The result
was that large portions of rating agencies’ revenue became
increasingly concentrated in just a handful of clients since
the lucrative and exceedingly complex securities were issued

predominantly by a few firms. According to an SEC review
of 368 CDOs rated by the Big Three in 2006 and 2007, just
11 issuers accounted for 92 percent of them. These issuers
would have the power to wield more influence on rating
agencies to produce favorable ratings since, if they were
unhappy with ratings, they could threaten to take a very
large chunk of their business to a competing rating agency.
The rating agencies merely being conscious of this predicament could be enough to encourage them to inflate ratings
to keep business.
Supporters of the issuer-pays model, including the Big
Three, say that the question of who pays shouldn’t matter
since the market would weed out any agency that didn’t have
an established reputation for producing accurate ratings.
The issuer-pays rating agencies execute this “reputation
building” by publishing all of their ratings, covering virtually
every industry and every bond issuer, on their Web sites
for public consumption, providing the opportunity for
anyone, including competitors, to check on their ratings.
This ratings transparency was described as a “substantial
public good” by Raymond McDaniel, CEO of Moody’s,
in the April 2009 SEC roundtable. This public good would

Understanding Mortgage Securitization
In 2000 a J.P. Morgan analyst named David Li was intrigued
by the frequency with which someone dies after a spouse
passes away, commonly called the “broken heart” phenomenon. Li knew that insurance companies use mathematical
techniques to estimate that probability for the pricing
of insurance policies. He realized that a similar technique
could be applied to financial markets and published a highly
technical paper on the topic. He probably had no idea that his insight would arguably contribute to the worldwide financial market downturn, through the role that mortgage securitization played in it.
To understand how, you first need to know what mortgage
securitization is. When mortgage lenders sell mortgages on
the secondary market they are often grouped into a pool
called a residential mortgage-backed security (RMBS).
Then they are resold in pieces to institutional investors on
Wall Street.
Creating an RMBS requires some financial alchemy. The
issuer often divides the RMBS into groups called “tranches.”
Investors in the highest tranche get paid first as mortgage
payments come in; then the middle tranches are paid. The
lowest tranches are paid only if all the higher tranches have
been paid first. In other words, they bear losses first so they’re
a riskier investment. For the top tranches to be affected,
however, literally hundreds or thousands of homeowners in a
pool would have to default on their mortgages at once. That’s
not very likely to happen.
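The payment "waterfall" described above can be sketched in a few lines. The pool size and tranche boundaries below are invented for illustration.

```python
# A minimal sketch of the tranche "waterfall": incoming mortgage payments
# flow to the senior tranche first, and losses hit the bottom first.
# Pool and tranche sizes are invented for illustration.

def waterfall(collections: float, tranche_sizes: list[float]) -> list[float]:
    """Distribute collected payments to tranches, ordered senior -> junior."""
    payouts = []
    remaining = collections
    for size in tranche_sizes:
        paid = min(size, remaining)  # each tranche gets what is left, up to its size
        payouts.append(paid)
        remaining -= paid
    return payouts

# A $100 pool split into senior/mezzanine/equity tranches of 70/20/10.
# If only $85 comes in, the senior tranche is made whole and the
# shortfall falls on the lower tranches:
print(waterfall(85.0, [70.0, 20.0, 10.0]))  # [70.0, 15.0, 0.0]
```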
The lower RMBS tranches, on the other hand, were
obviously quite risky. But just as some mortgages could be


pooled to create a virtually risk-free asset, what if a new, safer
security could be created out of the risky low tranches of
RMBS too?
Issuers already had a name for such a security: a collateralized debt obligation (CDO), which is a bond that is itself
backed by another pool of bonds and sold in tranches like
RMBS. The key to creating a mortgage CDO is pooling
together the low-tranche RMBS bonds in such a way that
the probability they would default at the same time is sufficiently low. This would mean the high tranches are likely
never to see losses. In fact, if the default probabilities are
sufficiently uncorrelated, the higher tranches could even earn
an AAA rating — even if they are composed entirely of risky assets — and be sold to investors looking for safe assets.
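The diversification arithmetic behind that AAA claim can be made concrete. Assuming — crucially — that defaults are independent, the chance that losses ever reach a senior tranche protected by a junior cushion is a binomial tail probability. The numbers below are illustrative.

```python
# The arithmetic behind "AAA from risky pieces," assuming independent
# defaults. Numbers are invented for illustration.
from math import comb

def prob_senior_loss(n: int, p: float, cushion: int) -> float:
    """P(more than `cushion` of n assets default) when each defaults
    independently with probability p: a binomial tail probability."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(cushion + 1, n + 1))

# 100 risky bonds, each with a 10% default chance, and a 30-bond cushion:
print(prob_senior_loss(100, 0.10, 30))  # astronomically small if independent
# If defaults were instead perfectly correlated, the senior tranche would
# default whenever any one bond does -- a 10% chance, nothing like AAA.
```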
Estimating the correlation of a pool of bonds for a CDO is
relatively easy when they are based on corporate bonds,
which are relatively simple. But what if underlying assets are
based on mortgages, each with different homeowner FICO
scores, geographic locations, loan-to-value ratios, and dozens
of other characteristics? It is hard enough to use those characteristics to estimate the probability of default for even one
mortgage. Estimating the probably that the hundreds or
thousands of mortgages would default together — their
default correlation — would seem nearly impossible.
The trouble with assessing the default correlation of a pool of mortgages is that history offers too few observations of any given mix of mortgage characteristics to know
how they affect the likelihood of default. Further, some of the
characteristics, such as geographic location, are related across

Supply Does Not Meet Demand

Questions surrounding the incentive structure created by
the issuer-pays model are not the only ones on the table. The
rest comes down to supply and demand — both of which are
set artificially by the SEC. By establishing the NRSRO label
and parsimoniously choosing which rating agencies get that
label, the SEC has created barriers to entry into the rating

industry. These barriers were lowered somewhat after the
2006 act; there are now 10 approved NRSROs in the
United States.
In addition to restricting the supply of rating agencies,
the SEC has established guaranteed demand for NRSROs
by writing them into regulations. Issuers must get their securities rated in order for institutional investors — the largest
investors in the market — to hold them, ensuring that they
will always be in need of rating agencies’ services. The
presence of ratings in public regulations — and now in many
private contracts and investment guidelines — could mean
that the focus of a large proportion of the market’s investors
has shifted from holding sound investments to holding
investments that are simply highly rated. These should be
equivalent but may not be if there are active conflicts of
interest.
Economist Lawrence J. White of New York University is
one of the most vocal critics of the protection of NRSROs
in regulations. He believes that regulators have essentially
outsourced their responsibility to conflict-ridden credit rating agencies. “These third-party opinions had been given the
force of law,” he says. “Federal regulations make it clear that

mortgages, but we don’t always know how much. If a mortgage in Oakland County, Mich., defaults, how does that
impact the probability that another mortgage there will,
too, given the dozens of other differences between them?
It’s hard to say.
This is where David Li’s paper comes in. Insurance companies had used a “Gaussian copula” function to estimate the
probability of death — which he realized could also be used
to estimate the “death” of a security, or default. The copula
function predicts the likelihood of two events occurring
when they are somewhat affected by each other.
The breakthrough of the copula model was that rather
than gathering data from actual mortgage defaults, which
are rare, the copula looked at prices in bond markets, which
are abundant, to assess correlation. Through the lens of the
copula function, movements in certain asset prices revealed
their risk level, and produced the default correlation
between them. CDO issuers no longer had to scratch their
heads over the multitude of characteristics of each individual mortgage in the pool. The copula provided a much simpler way to evaluate default correlation. Thus, the mortgage CDO boom was born.
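A toy Monte Carlo sketch can illustrate the Gaussian-copula idea: each borrower defaults when a latent normal variable falls below a threshold set by its own default probability, and a shared factor ties borrowers together. This is an illustration of the technique, not Li's actual model or any agency's implementation.

```python
# A toy Monte Carlo version of a one-factor Gaussian copula: borrower i
# defaults when X_i = sqrt(rho)*M + sqrt(1-rho)*Z_i falls below the
# threshold Phi^-1(p_i), where M is a factor shared by all borrowers.
# An illustration of the technique, not Li's actual model.
import math
import random
from statistics import NormalDist

def joint_default_prob(p1, p2, rho, trials=200_000, seed=1):
    """Estimate P(both borrowers default) for a given correlation rho."""
    nd = NormalDist()
    t1, t2 = nd.inv_cdf(p1), nd.inv_cdf(p2)
    rng = random.Random(seed)
    both = 0
    for _ in range(trials):
        m = rng.gauss(0, 1)  # shared factor
        x1 = math.sqrt(rho) * m + math.sqrt(1 - rho) * rng.gauss(0, 1)
        x2 = math.sqrt(rho) * m + math.sqrt(1 - rho) * rng.gauss(0, 1)
        both += x1 < t1 and x2 < t2
    return both / trials

# Two mortgages, each with a 5% default chance:
print(joint_default_prob(0.05, 0.05, rho=0.0))  # roughly 0.05*0.05 = 0.0025
print(joint_default_prob(0.05, 0.05, rho=0.9))  # an order of magnitude higher
```

The single parameter `rho` is what made the model so convenient, and so fragile: one number stands in for everything that links borrowers together.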
The appetite for these structured finance securities was
substantial — in 2005, 81 percent of CDOs contained mortgages, the vast majority of which were highly rated by rating
agencies. Institutions also had begun issuing insurance policies for these RMBS, called credit default swaps (CDS).
The seller of the swap didn’t even have to own the RMBS
pool, and that allowed an unlimited number of securities to
be created out of a limited number of mortgages. From

2001 to 2007, the CDS market multiplied more than 67
times to $62 trillion, larger than the entire world’s gross
domestic product at that time.
CDO issuers became by far the largest purchasers of subprime mortgages in the secondary market, for the purpose
of issuing more securities. Subprime mortgage lenders saw
such a strong demand for their subprime loans that many
were encouraged to provide more of them, sometimes
lowering their lending standards to do so.
Securitization allowed mortgage-related securities to proliferate far beyond the number of actual mortgages extended during the boom. This explains how the
global economic impact of the housing market decline has
been many times larger than the total losses in subprime
loans. Securitization is not the enemy — it remains an
important way for financial markets to hedge risk. The
copula’s flaw was that the correlation estimates it provided
were extremely sensitive: As soon as market conditions
changed a tiny bit, the correlations became highly inaccurate. As mortgage holders defaulted in increasing numbers,
so did the trillions of dollars of securities that were built on them. The logic of mortgage securitization was based on
pooling assets that were not likely to default together.
But issuers and rating agencies never accounted for the possibility that house prices would fall simultaneously in so many regions.
But don’t blame Li for the mess others may have made
of his model. “The most dangerous part,” he warned in the
Wall Street Journal in 2005, “is when people believe everything coming out of it.”
— RENEE COURTOIS

go away if all rating agencies were required to switch to
the “subscriber-pays” model that some of the smaller
agencies use.
It may even be that it doesn’t truly matter which party
pays for ratings, since conflicts of interest can exist no matter who pays. For example, certain investors such as hedge
fund managers could just as easily persuade a subscriber-based rating agency to downgrade a security, allowing them
to short-sell it. “[A]s long as rating agencies are paid by any
party with a financial stake in the outcome of our opinions
… there are going to be pressures,” said Moody’s McDaniel
in October 2008 testimony before Congress. “And so the
question is not are there conflicts of interest. There are. It’s
managing them properly.”



‘investment grade’ is something entirely the creation of the
rating agencies.” Meanwhile, he notes, all ratings are published alongside disclaimers that the ratings are purely opinions and
shouldn’t be construed as investment advice.
In fact, the rating agencies cannot be held liable for the
quality of their grades. Several court cases have ruled that
ratings are opinions, legally equal to those of journalists and
therefore protected as “freedom of speech” under the First
Amendment. “We are giving the force of law to a bunch of
judgments where the judgment providers are caveating and
taking absolutely no responsibility for the force of law they
were granted,” White says.
Just how rating agencies could be held responsible is
tricky, however. The procedure of the Big Three is to adjust
ratings only after fundamental changes in order to avoid mistakenly responding to short-run market fluctuations as
opposed to a security’s fundamental health. So what constitutes getting a rating “wrong” as opposed to simply declining
to adjust a rating in response to what the rater believes is a
temporary market turn?
Rating agencies argue that if you saddle them with liability for the imprecise art of rating securities, the industry
would no longer be profitable and would wither away. “Ultimately,
we are not guaranteeing all the securities,” said Sean Egan of
Egan-Jones, a subscriber-based agency, in an October 2008
testimony before Congress. “There is too much out there.
The industry would go away … if you did away with the freedom of speech defense.” Barron Putnam of LACE Financial,
a smaller subscriber-pays agency, adds, “The industry needs
changes, but you have to be sure that you don’t kill it.”
White agrees, sort of. “It can’t be a healthy situation to
sue them anytime they make a modest mistake,” he says,
“but for big mistakes they ought to be held liable. There is a
difference between the kind of things they do and the kind
of things the New York Times and Wall Street Journal do.”
The agencies argue that they are indeed held liable —
again referring to the possibility that their reputations for
producing reliable ratings will be tarnished when their
ratings have to be downgraded. Heads of the Big Three have
conceded that their reputations have suffered as a result of
the subprime and securitization mess.
However, White is skeptical that concerns over reputation provide sufficient incentive for rating agencies to stay
in line. “The problem is, that’s what Arthur Andersen told us
up until the end of 2001, and we know where they ended up,”
he says, referring to the history-making collapse of the
accounting firm scandalized by Enron. “Of course there’s
always the long-run incentive to maintain one’s reputation,
but it can get overpowered, clouded, by short-run conflicts
and short-run temptations.” He says this is what appeared to
happen during the mortgage lending and securitization
boom.
So far, the rating agencies have not withered away like
Arthur Andersen, and no one seems to expect that outcome.
For example, one of the Federal Reserve’s recent programs
to assist financial markets, the Term Asset-Backed Securities


Loan Facility (TALF), makes loans to investors only if
backed by highly rated collateral, as deemed by the rating
agencies. The Fed explained its reliance on the rating
agencies by pointing out that their grades on asset-backed
securities unrelated to mortgages have been more stable,
and that ratings are not the only criterion used for TALF
collateral.

The Call for Bolder Reform
White is among a growing group of academics who advocate
taking NRSROs out of the regulatory process completely by
removing all references to them in SEC rules. Certain
investors would still be required to hold assets of a given
safety level, but the burden of proof of the safety of that
portfolio would be placed on the regulated financial institutions. They could deal with this either by conducting their
own analysis on their portfolios, or by consulting an advisor,
which could very well be a rating agency. However, rather
than blindly using the ratings as justification for the assets
they hold, they would need to justify to regulators why they
believe the rating agencies’ opinions on their portfolios are
sound. Indeed, a key reason to write NRSROs out of regulations would be to encourage investors to rely on alternative
measures of risk that are market based, such as spreads on
asset yields.
In June 2008 the SEC did propose writing NRSROs out
of regulations, although the proposal has been absent from
all subsequent iterations of regulation changes. A new set of
SEC rules that have been proposed but not yet adopted are
geared toward improving competition by requiring background information on securities to be shared among rating
agencies. This would allow competing agencies to formulate
and publish second opinions based on the very same information that the initial rating agency used. The issuer-pays
rating agencies do currently publish these unsolicited second opinions, but they are based only on information that is
publicly available, which is significantly less detailed.
These proposed regulations carefully traverse what is
actually a fine line between promoting competition and
destroying it. The concern of some of the agencies is over
the potential infringement on proprietary information such
as the agencies’ rating models. If forced to share them, the
smaller subscriber-pays agencies, which don’t make their rating methodologies public, could be disproportionately
affected, further increasing barriers to entry in the industry.
Some of these agencies view their classified ratings models
as their most important asset.
Perhaps surprisingly, a recent batch of research has suggested that competition in an industry dominated by the
issuer-pays model may not actually improve the quality of
ratings. A 2009 paper by New York University economists
Vasiliki Skreta and Laura Veldkamp shows that increasing
the number of rating agencies in the game could enlarge the
pool from which securities issuers can shop for ratings. In a
world where the average security has grown more complex, as in the past decade with RMBS and CDOs, raters are more likely to evaluate a security differently and thus
issue different ratings. The wider the dispersion of possible
ratings, the more likely an issuer is to find one that is overly
optimistic — an outcome that is possible even in the
absence of any fraud or active conflict of interest. Further, a
2009 paper by Bo Becker and Todd Milbourn of the
University of Illinois at Urbana-Champaign and Washington
University in St. Louis, respectively, suggests that since competition reduces profits in an oligopolistic setting like the
rating industry, it may also reduce the relative payoff for
rating agencies to “invest” in developing a reputation for
publishing consistently high-quality ratings compared to
other revenue-generating activities like ratings inflation.
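The ratings-shopping intuition can be illustrated with a stylized simulation: each agency observes the security's true quality plus independent noise, and the issuer publishes only the most favorable opinion. This is a simple sketch of the mechanism, not the Skreta-Veldkamp model itself.

```python
# A stylized simulation of ratings shopping: each agency sees the
# security's true quality (normalized to zero) plus independent noise,
# and the issuer publishes only the best opinion it receives. A sketch
# of the mechanism, not the Skreta-Veldkamp model itself.
import random

def average_published_rating(n_agencies, noise, trials=50_000, seed=7):
    """Average of the best of n_agencies noisy ratings of a zero-quality
    security; any positive value is pure inflation from shopping."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.gauss(0, noise) for _ in range(n_agencies))
    return total / trials

print(average_published_rating(1, noise=1.0))  # about 0: one agency, no shopping
print(average_published_rating(3, noise=1.0))  # positive: shopping inflates
print(average_published_rating(3, noise=2.0))  # noisier (more complex) securities inflate more
```

Note that no agency here is dishonest: inflation emerges purely from the issuer's ability to pick the most optimistic of several honest, noisy opinions.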
Despite his instincts as an economist, White is also not
convinced that increased competition is the answer given
the industry’s other significant structural flaws. “I’m a procompetition guy, but I have to acknowledge the possibility
that in this fifth-best world it may well be that increasing
competition may have perverse consequences.”
On the other hand, LACE Financial’s Putnam thinks
increasing competition is crucial to driving the agencies’ primary incentive back to building a reputation for creating
high-quality ratings. “If you control so much market share,
you’re not really accountable to anybody,” he says. Besides,
he adds, if the current regulations on the table don’t work,
increasing competition will be hard to avoid for another
reason: “Congress and the SEC can pass reforms to make
them do a better job, but in the long run if you can’t straighten out the industry, and something like the current mess
happens again, Congress will likely address the problem with
antitrust regulation.”

Incentivize Me
While regulations may make ratings disproportionately
important to issuers and investors, the agencies say that
many investors misunderstand their purpose to begin with:
The grades assess the probability of default, nothing more.
They are not meant to signify whether an investment is

adequately priced or aligns with a given investor’s risk
appetite. Furthermore, two assets with the same rating may
exhibit great differences in price volatility. Investors are particularly prone to over-relying on rating agencies in an
environment in which securities are growing excessively
complex. A simple letter grade is an enticing way for an institutional investor to meet a regulatory requirement and also
take part in an opaque but burgeoning market that many of
its competitors are finding profitable.
Ratings are intended to be simply one tool of many for
reducing asymmetric information, however. This logic
was spelled out in the SEC’s initial regulations requiring
institutions to rely on NRSROs. But the profitability and
complexity of the securitization market in recent years
induced investors to ignore this caution, a fact that issuers
and rating agencies may have intentionally or unintentionally exploited.
Even those who argue for taking credit rating agencies
out of regulations do not argue that the agencies provide no
value to the market. However, without their status as a government-protected oligopoly, the agencies would be profitable only if investors perceived that they produce consistent, high-quality ratings. In other words, it would
emphasize the need for the agencies to build a reputation by
developing a proven track record. What may also abet that
process is better procedural oversight of conflicts of interest
— such as oversight rules adopted by the SEC in February
2009 — which should fall short of regulating the ratings or
methodologies themselves.
At any rate, the discussion highlights that even if
there is no intentional fraud, the current structure of the
rating industry can, and did, produce adverse outcomes.
Ascertaining whether any particular agency engaged in intentional wrongdoing will take longer than the duration of the present economic downturn. Perhaps the most important outcome, however, is that the crisis has called attention to
the incentives that rating agencies faced in the past and still
face today.
RF

READINGS
Becker, Bo, and Todd Milbourn. “Reputation and Competition:
Evidence from the Credit Rating Industry.” Harvard Business
School Working Paper 09-051, October 2008.

Securities and Exchange Commission. “Summary Report of Issues
Identified in the Commission Staff’s Examinations of Select
Credit Rating Agencies.” July 2008.

Benmelech, Efraim, and Jennifer Dlugosz. “The Credit Rating
Crisis.” NBER Macroeconomics Annual 2009, forthcoming.

Skreta, Vasiliki, and Laura Veldkamp. “Ratings Shopping and Asset
Complexity: A Theory of Ratings Inflation.” National Bureau of
Economic Research Working Paper No. 14761, February 2009.

Benmelech, Efraim, and Jennifer Dlugosz. “The Alchemy of CDO Credit Ratings.” National Bureau of
Economic Research Working Paper No. 14878, April 2009.

Coval, Joshua, Jakub Jurek, and Erik Stafford. “The Economics of
Structured Finance.” Journal of Economic Perspectives, Winter 2009,
vol. 23, no. 1, pp. 3-25.

White, Lawrence J. “The Credit Rating Industry: An Industrial
Organization Analysis.” In Levich, Richard M., Giovanni Majnoni,
and Carmen Reinhart (eds.), Ratings, Rating Agencies, and the Global
Financial System. New York: Springer, 2002.

Salmon, Felix. “Recipe for Disaster: The Formula That Killed Wall
Street.” Wired Magazine, Feb. 23, 2009.


Honeybees
Market for pollination services grows
Only a beekeeper would move to South Carolina for
the pollen. But Chuck and Karen Kutik of
Manning, S.C., count on it to help feed their
livestock — 2,500 to 3,000 hives of honeybees. Bees mix
pollen and nectar to make food (beebread). A summer
hive, or colony, at peak can hold as many as 80,000 bees.
The Kutiks pack bees off to California almond fields in
February, apple orchards in New York in May, and blueberry fields in Maine in late spring with vegetable and fruit
stops along the Atlantic seaboard in the summer.
Charles Hatley of Concord, N.C., also rents hives. “You
want to try to keep your bees busy.” His bees, in mid-April, were
foraging for nectar in the raspberry fields of Stanly County,
N.C., before heading to blueberry and blackberry fields.
Beekeepers like Hatley and the Kutiks are part of a market for pollination services that has expanded over the past century, especially since the 1980s, when wild bee populations began to vanish. Farmers can’t rely on or
manage other pollinators — birds, other types of bees,
butterflies, wind, or water. Honeybees forage across flowering plants, improving quality and yields for farmers, while
the bees process the blossom nectar into honey, a boon for
the beekeeper if the weather, temperatures, and blossoms
cooperate. Pollination services can be found throughout the
nation and are estimated to be worth $15 billion annually.
Honeybees are vital to North Carolina’s $48 million blueberry crop, $28 million apple crop, and myriad vegetables
and crops like alfalfa, cotton, peanuts, and soybeans.
Commercial pollination markets have been well established since at least the 1940s. Yet research into the
economics of the honeybee and its role in agriculture continues to flourish as hive numbers fall and demand for
pollination grows.


Bees and Economic Thought
Honeybees also have appeared in economic theory. Imagine
adjacent property owners, a beekeeper and apple farmer.
Economist J. E. Meade suggested in a 1952 paper that beekeeping is an “unpaid factor” in apple production because neither farmer nor beekeeper arranged pollination or honey-making services, despite the mutual benefits of the bees’ stamen-to-pistil pollen deposits. Theory suggests that, absent an agreement over compensation, the farmer will not arrange for optimal beekeeping services, nor will the beekeeper establish the number of hives that would maximize the farmer’s return on apples. In that case, there is an
argument that bee pollination services — or the reverse,
nectar provision services — would be “under-provided” by
the market.
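Meade's under-provision logic can be put in a toy numeric example with invented payoffs: left alone, the beekeeper ignores the pollination value each extra hive adds to the neighbor's orchard, so too few hives are kept.

```python
# A toy numeric version of Meade's point, with invented payoff functions.
# The beekeeper picks the hive count that maximizes honey profit alone,
# ignoring the pollination value each hive adds next door.

def honey_profit(hives):            # beekeeper's private return
    return 10 * hives - hives ** 2  # diminishing returns to extra hives

def apple_benefit(hives):           # unpriced spillover to the farmer
    return 3 * hives

best_private = max(range(20), key=honey_profit)
best_joint = max(range(20), key=lambda h: honey_profit(h) + apple_benefit(h))

print(best_private)  # 5 hives: the uncoordinated outcome
print(best_joint)    # 6 hives: what a bargain over compensation could support
```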


Nectar provision and bee pollination are a “reciprocal
externality,” according to those early papers, both drawing
on the work of economist A.C. Pigou who in 1920 had
defined the concept of negative or positive side effects of a
firm’s behavior and termed them “externalities.” His theory
conceptualized the costs that aren’t borne by the firm.
Certain taxes might compensate for negative side effects
while positive side effects, such as pollination and honey-making in the bee case, could be encouraged by a subsidy.
(Such observations had minimal influence on honey price
support policies at the time, but the U.S. honey program of
the 1980s and 1990s was in fact designed to encourage bee
and pollination services, according to research by economist
Walter Thurman of North Carolina State University. Today,
there are no price supports for honey, but trade rules govern
some honey imports.)
In 1973, economist Steven N.S. Cheung, in his paper “The Fable of the Bees,” described a functioning market with
obvious transactions between beekeepers and farmers:
Pollination services were listed in the Yellow Pages of rural,
apple-growing Washington state, evidence that beekeepers
rented hives. When he looked at pollination fees, he found
buyers and sellers of these services. He concluded that
“observed pricing and contractual arrangements governing
nectar and pollination services are consistent with efficient
allocation of resources.”
Cheung’s work drew on the now-famous paper by Ronald
Coase published in 1960, “The Problem of Social Cost,”
that, among other insights, pointed out that when property
rights are well defined, firms generally will bargain among
themselves to find an efficient solution.
Thurman explains that Cheung’s paper highlighted the
need to understand the details, in this case, of the beekeeping and farming businesses. “While in principle the
externalities exist, once people start contracting, there’s a
market,” Thurman notes.
“Markets coordinate the joint production of pollination
and honey in the face of dramatic variation in output prices,
and do so against a backdrop of continually evolving scientific views on the efficacy of honeybee pollination,”
according to a paper on the subject that Thurman coauthored. “Markets must also coordinate the delivery of
pollination services to multiple crops during their blooming
seasons, not perfectly forecastable.” That is no small task.

Coast-to-Coast Demand
Demand for hired hives grew along with knowledge about
pollination benefits, which often depends on dissemination
of the latest research. Other factors contributed, too, such

PHOTOGRAPHY: DEBBIE ROOS, NORTH CAROLINA COOPERATIVE EXTENSION

BY B E T T Y J OYC E N A S H

as the invention of the movable hive, and produced
markets that expanded with transportation improvements
like engines, trucks, and roads. “The costs of market
exchange declined and the returns to specialization
increased,” Thurman notes. Finally, the demise of wild bee
colonies that began in the 1980s — probably from the
appearance of the varroa mite, a dangerous parasite to
honeybees — put more pressure on domestic honeybee
colonies for pollination.
Honeybees have become essential in the production of
certain crops, and nowhere is that more evident than in the
almond groves of California. The science of pollination has
led to varieties of crops that are ever more dependent on
pollination, according to Thurman. The more a crop
depends on pollination services, the more the farmers are
willing to pay to rent bee colonies, and California’s Central
Valley hosts the most vigorous market in the nation. In 2004
and 2005, almond acreage required an estimated 60 percent
of the approximately 2.5 million hives in the United States.
Dispatched by owners through brokers or trucked in by beekeepers, colonies are placed in February and early March to
pollinate almonds, 80 percent of world supply, 1.5 billion
pounds (shelled) in 2008. While the keepers also may
arrange pollination services for other crops while they’re in
California, the almonds are the primary and most lucrative
crop. The bees may roam a couple of miles from the crops
they’re supposed to pollinate. However, the effects are often
negligible, and when this does occur it is probably on fields
smaller than the vast almond groves.
When bees suck nectar via their long tongues, their sticky
hind legs pick up pollen grains that are necessary to fertilize
some plants. (Some crops like corn are self-pollinating and
don’t require bees.) While much of that pollen returns to the
hive with the bees in tiny pollen sacks, some is deposited as
they land on flower blossoms. A honeybee’s work can make a
difference, but that difference is hard to measure in money.
For one thing, aggregate pollination data are not recorded,
including even the fees paid to beekeepers, according to
Thurman and co-authors Michael Burgett of Oregon State
University and Randal Rucker of Montana State University,
who have written a paper about pollination fees.
But Burgett has kept crop-by-crop summaries of an
annual pollination survey of about 60 commercial beekeepers in Washington and Oregon since 1986. The survey
captures the upward trend in demand for the service and
increases in commercial beekeeping operations. The authors
found that pollination fees rise according to costs — for
example, accounting for the appearance of the varroa mite in
1991, which increased the price of rentals by about $4.60 per
colony. The authors also examined the value of honey produced during the pollination periods. Although some
beekeepers like the Kutiks say that they don’t factor honey
production into their pollination prices, the authors found
fees in Washington and Oregon vary across pollinated crops.
Ranking crops from vetch seed, which produces good honey,
to almonds, which produce barely palatable honey, the

Bees pick up and deposit pollen as they forage across flowering plants,
improving quality and yields. Farmers often hire honeybee hives to
pollinate crops because wild bee populations have declined.

authors found the fees paid for a honey crop like vetch are
lower than all fees reported for non-honey crops like
almonds. Almond pollination prices are higher when honey
production and pollination do not occur simultaneously.
The authors find the price of pollination services reflect
“a complex array of knowledge of entomology, horticulture,
environmental science, consumer preference, logistics, and
world trade.”
Bee pests have reduced available supplies, especially in
California, and so the demand for almond pollination continues to be reflected in prices, which Thurman cites as
about $130 per colony in 2006. He estimates fees paid to all
U.S. beekeepers for all crops at about $180 million in 2006
and increasing.
With an estimated 2.5 colonies per acre, and an increase
of 25 percent in almond acreage from 1996 to 2004, economist Daniel Sumner and research specialist Hayley Borris of
the University of California at Davis estimate hive requirements at roughly 1.4 million in 2004. By 2012, the almond
crop may need about 2 million colonies.
Bee operators who migrate to California to pollinate
almond blossoms may rent hives to fruit and vegetable
growers along the way. After almonds, many move on to the
Northwest for apple, pear, and cherry crops. During the
summer, hives remain in the Midwest, home to the mega
operations for honeybees. There, bees may frequent sunflower, clover, basswood, and various nectar sources to
produce honey.
Higher prices are attracting beekeepers from as far
away as the East Coast. The Kutiks sent their bees by truck
to California for the first time in 2008 and again for the
2009 almond pollination. They contract with another
beekeeper in California who unloads and then ships the
bees back. “We lease our bees to another beekeeper who
deals with the farmer,” Karen Kutik notes. “The bees are
inspected to make sure they are the proper standard that the
farmer expects for the money he pays. It was very lucrative

Spring 2009 • Region Focus

21

Honeybees
Market for pollination services grows
Only a beekeeper would move to South Carolina for
the pollen. But Chuck and Karen Kutik of
Manning, S.C., count on it to help feed their
livestock — 2,500 to 3,000 hives of honeybees. Bees mix
pollen and nectar to make food (beebread). A summer
hive, or colony, at peak can hold as many as 80,000 bees.
The Kutiks pack bees off to California almond fields in
February, apple orchards in New York in May, and blueberry fields in Maine in late spring with vegetable and fruit
stops along the Atlantic seaboard in the summer.
Charles Hatley of Concord, N.C., also rents hives. “You
want to try to keep your bees busy.” His bees, in mid-April, were
foraging for nectar in the raspberry fields of Stanly County,
N.C., before heading to blueberry and blackberry fields.
Beekeepers like Hatley and the Kutiks are part of a growing market for pollination services that has expanded over
the past century, especially since the 1980s when wild
bee populations began to vanish. Farmers can’t rely on or
manage other pollinators — birds, other types of bees,
butterflies, wind, or water. Honeybees forage across flowering plants, improving quality and yields for farmers, while
the bees process the blossom nectar into honey, a boon for
the beekeeper if the weather, temperatures, and blossoms
cooperate. Pollination services can be found throughout the
nation and are estimated to be worth $15 billion annually.
Honeybees are vital to North Carolina’s $48 million blueberry crop, $28 million apple crop, and myriad vegetables
and crops like alfalfa, cotton, peanuts, and soybeans.
Commercial pollination markets have been well established since at least the 1940s. Yet research into the
economics of the honeybee and its role in agriculture continues to flourish as hive numbers fall and demand for
pollination grows.


Bees and Economic Thought
Honeybees also have appeared in economic theory. Imagine
adjacent property owners, a beekeeper and apple farmer.
Economist J. E. Meade suggested in a 1952 paper that beekeeping is an “unpaid factor” in apple production because
neither farmer nor beekeeper arranged pollination or honey-making services in spite of mutual benefits to the bees’
stamen-to-pistil pollen deposits. Theory suggests that,
absent an agreement over compensation, the farmer will
neither arrange for optimal beekeeping services nor the beekeeper establish the number of hives that would maximize
the farmer’s return on apples. In that case, there is an
argument that bee pollination services — or the reverse,
nectar provision services — would be “under-provided” by
the market.
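Meade’s under-provision argument can be made concrete with a small numerical sketch. Everything below is invented for illustration (the profit functions and numbers are not from Meade or from the research discussed here): a beekeeper who maximizes honey profit alone stops short of the hive count that would maximize the two neighbors’ combined surplus.

```python
# Toy numerical version of Meade's "unpaid factor" (illustrative numbers):
# the beekeeper picks hives to maximize honey profit alone, ignoring the
# pollination value those hives create in the neighboring apple orchard.

def honey_profit(hives: int) -> float:
    """Beekeeper's profit from honey: diminishing returns per hive."""
    return 100 * hives - 5 * hives ** 2

def pollination_value(hives: int) -> float:
    """Extra apple revenue the farmer gets from the bees (unpriced)."""
    return 80 * hives - 2 * hives ** 2

def best_choice(objective) -> int:
    """Hive count (0-49) that maximizes the given objective."""
    return max(range(50), key=objective)

private_optimum = best_choice(honey_profit)
joint_optimum = best_choice(lambda h: honey_profit(h) + pollination_value(h))

print(private_optimum, joint_optimum)  # 10 13: fewer hives than jointly optimal
```

In this sketch a per-hive subsidy equal to the orchard’s marginal benefit, or a Coase-style contract between the neighbors, would move the choice from 10 hives to the joint optimum of 13 — which is exactly the territory the Pigou and Coase discussions below cover.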


Region Focus • Spring 2009

Nectar provision and bee pollination are a “reciprocal
externality,” according to those early papers, both drawing
on the work of economist A.C. Pigou who in 1920 had
defined the concept of negative or positive side effects of a
firm’s behavior and termed them “externalities.” His theory
conceptualized the costs that aren’t borne by the firm.
Certain taxes might compensate for negative side effects
while positive side effects, such as pollination and honey-making in the bee case, could be encouraged by a subsidy.
(Such observations had minimal influence on honey price
support policies at the time, but the U.S. honey program of
the 1980s and 1990s was in fact designed to encourage bee
and pollination services, according to research by economist
Walter Thurman of North Carolina State University. Today,
there are no price supports for honey, but trade rules govern
some honey imports.)
In 1973, economist Steven N.S. Cheung, in his paper “The
Fable of the Bees,” described a functioning market with
obvious transactions between beekeepers and farmers:
Pollination services were listed in the Yellow Pages of rural,
apple-growing Washington state, evidence that beekeepers
rented hives. When he looked at pollination fees, he found
buyers and sellers of these services. He concluded that
“observed pricing and contractual arrangements governing
nectar and pollination services are consistent with efficient
allocation of resources.”
Cheung’s work drew on the now-famous paper by Ronald
Coase published in 1960, “The Problem of Social Cost,”
that, among other insights, pointed out that when property
rights are well defined, firms generally will bargain among
themselves to find an efficient solution.
Thurman explains that Cheung’s paper highlighted the
need to understand the details, in this case, of the beekeeping and farming businesses. “While in principle the
externalities exist, once people start contracting, there’s a
market,” Thurman notes.
“Markets coordinate the joint production of pollination
and honey in the face of dramatic variation in output prices,
and do so against a backdrop of continually evolving scientific views on the efficacy of honeybee pollination,”
according to a paper on the subject that Thurman coauthored. “Markets must also coordinate the delivery of
pollination services to multiple crops during their blooming
seasons, not perfectly forecastable.” That is no small task.

BY BETTY JOYCE NASH
PHOTOGRAPHY: DEBBIE ROOS, NORTH CAROLINA COOPERATIVE EXTENSION

Coast-to-Coast Demand
Demand for hired hives grew along with knowledge about
pollination benefits, which often depends on dissemination
of the latest research. Other factors contributed, too, such
as the invention of the movable hive, and produced
markets that expanded with transportation improvements
like engines, trucks, and roads. “The costs of market
exchange declined and the returns to specialization
increased,” Thurman notes. Finally, the demise of wild bee
colonies that began in the 1980s — probably from the
appearance of the varroa mite, a dangerous parasite to
honeybees — put more pressure on domestic honeybee
colonies for pollination.
Honeybees have become essential in the production of
certain crops, and nowhere is that more evident than in the
almond groves of California. The science of pollination has
led to varieties of crops that are ever more dependent on
pollination, according to Thurman. The more a crop
depends on pollination services, the more the farmers are
willing to pay to rent bee colonies, and California’s Central
Valley hosts the most vigorous market in the nation. In 2004
and 2005, almond acreage required an estimated 60 percent
of the approximately 2.5 million hives in the United States.
Dispatched by owners through brokers or trucked in by beekeepers, colonies are placed in February and early March to
pollinate almonds; California grows 80 percent of the world’s
supply, some 1.5 billion pounds (shelled) in 2008. While the keepers also may
arrange pollination services for other crops while they’re in
California, the almonds are the primary and most lucrative
crop. The bees may roam a couple of miles from the crops
they’re supposed to pollinate. However, the effects are often
negligible, and when this does occur it is probably on fields
smaller than the vast almond groves.
When bees suck nectar via their long tongues, their sticky
hind legs pick up pollen grains that are necessary to fertilize
some plants. (Some crops like corn are self-pollinating and
don’t require bees.) While much of that pollen returns to the
hive with the bees in tiny pollen sacs, some is deposited as
they land on flower blossoms. A honeybee’s work can make a
difference, but that difference is hard to measure in money.
For one thing, aggregate pollination data are not recorded,
including even the fees paid to beekeepers, according to
Thurman and co-authors Michael Burgett of Oregon State
University and Randal Rucker of Montana State University,
who have written a paper about pollination fees.
But Burgett has kept crop-by-crop summaries of an
annual pollination survey of about 60 commercial beekeepers in Washington and Oregon since 1986. The survey
captures the upward trend in demand for the service and
increases in commercial beekeeping operations. The authors
found that pollination fees rise according to costs — for
example, accounting for the appearance of the varroa mite in
1991, which increased the price of rentals by about $4.60 per
colony. The authors also examined the value of honey produced during the pollination periods. Although some
beekeepers like the Kutiks say that they don’t factor honey
production into their pollination prices, the authors found
fees in Washington and Oregon vary across pollinated crops.
Ranking crops from vetch seed, which produces good honey,
to almonds, which produce barely palatable honey, the
authors found the fees paid for a honey crop like vetch are
lower than all fees reported for non-honey crops like
almonds. Almond pollination prices are higher when honey
production and pollination do not occur simultaneously.

Bees pick up and deposit pollen as they forage across flowering plants,
improving quality and yields. Farmers often hire honeybee hives to
pollinate crops because wild bee populations have declined.
The authors find the price of pollination services reflects
“a complex array of knowledge of entomology, horticulture,
environmental science, consumer preference, logistics, and
world trade.”
Bee pests have reduced available supplies, especially in
California, and so the demand for almond pollination continues to be reflected in prices, which Thurman cites as
about $130 per colony in 2006. He estimates fees paid to all
U.S. beekeepers for all crops at about $180 million in 2006
and increasing.
With an estimated 2.5 colonies per acre, and an increase
of 25 percent in almond acreage from 1996 to 2004, economist Daniel Sumner and research specialist Hayley Borris of
the University of California at Davis estimate hive requirements at roughly 1.4 million in 2004. By 2012, the almond
crop may need about 2 million colonies.
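The colony arithmetic above is simply bearing acreage times a stocking rate. The back-of-envelope sketch below reproduces it using the article’s 2.5 colonies per acre; the acreage figures are back-solved from the 1.4 million and 2 million colony estimates for illustration, not taken from Sumner and Borris directly.

```python
# Back-of-envelope almond pollination demand: colonies needed equal
# bearing acreage times the stocking rate of about 2.5 colonies per acre.

COLONIES_PER_ACRE = 2.5  # stocking rate cited in the article

def colonies_needed(bearing_acres: float, rate: float = COLONIES_PER_ACRE) -> float:
    """Rough colony requirement for a given almond acreage."""
    return bearing_acres * rate

# Acreage implied by the article's estimates (illustrative back-solves):
# 1.4 million colonies in 2004 -> about 560,000 acres;
# 2.0 million colonies by 2012 -> about 800,000 acres.
print(colonies_needed(560_000))  # 1400000.0
print(colonies_needed(800_000))  # 2000000.0
```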
Bee operators who migrate to California to pollinate
almond blossoms may rent hives to fruit and vegetable
growers along the way. After almonds, many move on to the
Northwest for apple, pear, and cherry crops. During the
summer, hives remain in the Midwest, home to the mega
operations for honeybees. There, bees may frequent sunflower, clover, basswood, and various nectar sources to
produce honey.
Higher prices are attracting beekeepers from as far
away as the East Coast. The Kutiks sent their bees by truck
to California for the first time in 2008 and again for the
2009 almond pollination. They contract with another
beekeeper in California who unloads and then ships the
bees back. “We lease our bees to another beekeeper who
deals with the farmer,” Karen Kutik notes. “The bees are
inspected to make sure they are the proper standard that the
farmer expects for the money he pays. It was very lucrative
last year for us and this year too.”
Trucking was cheaper this year too. She says they get paid
anywhere from $90 to $150 per hive — “what the guys are
willing to pay.” Prices for pollination vary but “have been
going up for the past few years.”
The Kutiks formerly rented bees to large-scale cucumber
farmers in South Carolina but some of those customers have
switched to other crops. And Karen Kutik says small fields
aren’t a good fit for the business any longer.
The Kutiks ship bees to New York to pollinate apples in
late April or early May for about $55 per hive. “There are a
lot more apple growers, and they’re not getting that much
for their apples. It’s what the market will bear. Some guys
[beekeepers] will rent for $30 per hive.”
While the Kutiks’ business is going well, most aspects of
the bee business are fickle. For instance, temperatures over
the recent winter were too cold for nectar in South Carolina.
“We have had to feed our bees this year,” Karen Kutik says.
Weather can wreak havoc on pollination and honey production alike. When it rains or temperatures drop, the bees
don’t forage. For instance, the bees may be out in the almond
groves of California for a month and only fly 10 days, she
explains.
The Kutiks depend on pollination services to round out
their income, which also derives from honey and making
“nucs,” the nucleus of a hive. Right now, honey is where the
money is, she says. Honey prices have risen, in part because
of a drought in major honey-producing countries and a
smaller than average crop in 2008, according to the
American Honeybee Producers Association. While there’s
no explicit honey subsidy, there was a new $2.63 per kilogram duty placed on Chinese honey in January.
Karen Kutik says they separate the honey production
from the pollination services. For example, although blueberries make good honey, when they pollinate that crop in
Maine, they “don’t even talk honey with them,” she says of
the blueberry growers. “That’s a perk. It is not a sure thing.
Honey-making isn’t ever a sure thing.” For instance, cool,
rainy weather in the past two years has stymied basswood
and locust honey production for the Kutiks. “It is feast or
famine,” she says, of the bee business in general. “Right now
seems to be a good time. For a number of years we were too
small.” She adds that they run between 2,500 and 3,000
hives, while among the Midwest bee operations, 10,000 is
considered small.

Future of the Bee Business
While feral bees have vanished from the fields and forests,
domestic bees are also struggling with a variety of mites and
viruses. There are pest control options, but keeping hives
healthy is tricky. Researchers are even examining the
possibility that the migrations may weaken bee colonies, making
them more susceptible to mites like varroa. Apiculturists are
worried. Some losses are odd and include reports of bees
failing to return to the hives and rapid colony losses for reasons
that remain largely unknown, according to a 2008 report by
the Congressional Research Service.
“The market for pollination services has grown and it has
coincided with these infestations of exotic pests we’ve had,”
says Don Hopkins, the state apiarist for North Carolina.
The pests are one reason most states require inspections,
certifications, and permits for incoming bees.
North Carolina has the most beekeepers of any state in
the nation, but most keep the bees as hobbies or sideline
businesses, like Charles Hatley. He has kept bees for 33 of his
45 years. With demand for pollination services ramping up,
and bee populations in jeopardy, he wants to transform his
sideline into a full-time operation. He currently breeds
queen bees, good for disease resistance, for eventual sale. He
places bees in a 400-acre forest of sourwood trees for a
distinctive honey that can bring a price premium of up to 200
percent over other varieties. Hatley also rents hives to
vegetable and fruit growers for about $50 per colony for six
weeks. He has drafted his own contract, one that specifies
whether they use insecticides because he prefers to rent
hives to organic farmers.
He now can’t keep up with demand. “I got a call from a
farmer who wanted 600 colonies for watermelon and
cucumber.” As research continues into colony collapse
disorder and the various pests plaguing managed beehives,
the demand for pollination intensifies. As he says, “This can
get as big as I want it to get.”
RF

Silver Screen Subsidies
Is hoping to land the next Hollywood hit a sound economic development strategy?
BY DAVID VANDENBERG

Subsidy contests among states to lure sports teams and
factories have been fought for years. Now many states
want to attract movies and television shows and offer
those Hollywood productions generous incentives. Critics
of incentive programs argue that they don’t pay for
themselves. Supporters of production incentives claim they
are an attractive and quick way to inject money into a
community.
When production companies arrive, they immediately
spend money on items such as lumber for set construction
and accommodations for out-of-town cast and crew. Tim
Reid, an actor who played the disc jockey “Venus Flytrap”
on the television show “WKRP in Cincinnati,” has firsthand
knowledge of these expenditures. Reid is also a filmmaker
and co-founder of New Millennium Studios in Petersburg,
Va. He says bringing a production to a community is
like hosting dream in-laws. “They come and visit you, they
spend lots of money and they leave quickly,” he says. “Who
wouldn’t want in-laws like that?”
“One Tree Hill,” a CW television network drama filmed
in Wilmington, N.C., shows the impact a production can
have, says David Hartley, a producer for the show. The
program has just finished its sixth season shooting in
Wilmington. In the time it has been there, Hartley says the
show has generated revenue for Wilmington’s economy
through spending at local businesses, which boosts the city’s
tax base. “We’re not even a big budget show,” he says.
The overall effectiveness of these subsidies, however,
remains in question. States that seek those revenues and
offer production incentives should be asking themselves
if this is a sound economic development strategy for the
long term.

READINGS

Burgett, Michael, Randal R. Rucker, and Walter N. Thurman.
“Economics and Honey Bee Pollination Markets.” American Bee
Journal, April 2004, vol. 144, no. 4, pp. 269-271.

Cheung, Steven N.S. “The Fable of the Bees: An Economic
Investigation.” Journal of Law and Economics, April 1973, vol. 16,
no. 1, pp. 11-33.

Coase, R.H. “The Problem of Social Cost.” Journal of Law and
Economics, October 1960, vol. 3, no. 1, pp. 1-44.

Meade, J.E. “External Economies and Diseconomies in a
Competitive Situation.” Economic Journal, March 1952, vol. 62,
no. 245, pp. 54-67.

Muth, Mary K., Randal R. Rucker, Walter N. Thurman, and
Ching-Ta Chuang. “The Fable of the Bees Revisited: Causes and
Consequences of the U.S. Honey Program.” Journal of Law and
Economics, 2003, vol. 46, no. 2, pp. 479-516.

Rucker, Randal R., Walter N. Thurman, and Michael Burgett.
“Internalizing Reciprocal Benefits: The Economics of Honey Bee
Pollination Markets.” Working Paper, June 2008, pp. 1-42.

The Incentives Game

Many states offer incentives to all sorts of companies
looking to relocate or open a plant. However, not all firms will
view incentives as a major factor in their location decision.
Education levels of the work force, the ease of transporting
goods, and the overall quality of life could prove just as
important for the company. A comparative advantage, like
the abundance of a particular natural resource or a
specialized labor input, may also attract a firm to a state.
Film and television productions differ from corporations
making choices about where to put factories because movie
productions in particular are short-term work. Television
series can stay longer in a community but don’t always last.
Besides, especially with feature films shot on location, much
of the labor force could come from somewhere else and
eventually leave.

Cinematographers and camera operators at EUE/Screen Gems
Studios in Wilmington, N.C., collaborate on a scene.
PHOTOGRAPHY: CHRIS BROMLEY

However, firms that choose to bring a plant or factory to a
community invest in the area, train workers, and will have at
least management personnel or corporate leaders living where
the new facility is located. “The motion picture industry isn’t
like that, except in Los Angeles or New York,” says Cornell
University City and Regional Planning Professor Susan
Christopherson.
Moviemaking and television production, furthermore,
don’t need to rely on a specific location. Just because a film
or television show takes place in one city doesn’t mean it has
to be shot there. Special visual effects can alter certain elements of a landscape or the look of a street. In these cases,
any city can be a substitute for any other, thereby reducing
any comparative advantage a city’s appearance provides.
The industry that can re-create any location also
produces one of the nation’s largest exports: movies. With
the decline in manufacturing and the appeal of the entertainment industry, it’s not surprising states would want to
attract film production, says Ned Rightor, principal of
MXCIX, a Boston area policy research firm. Rightor has
worked with Christopherson on research into production
incentives. Currently, more than 40 states — even film


last year for us and this year too.”
Trucking was cheaper this year too. She says they get paid
anywhere from $90 to $150 per hive — “what the guys are
willing to pay.” Prices for pollination vary but “have been
going up for the past few years.”
The Kutiks formerly rented bees to large-scale cucumber
farmers in South Carolina but some of those customers have
switched to other crops. And Karen Kutik says small fields
aren’t a good fit for the business any longer.
The Kutiks ship bees to New York to pollinate apples in
late April or early May for about $55 per hive. “There are a
lot more apple growers, and they’re not getting that much
for their apples. It’s what the market will bear. Some guys
[beekeepers] will rent for $30 per hive.”
While the Kutiks’ business is going well, most aspects of
the bee business are fickle. For instance, temperatures over
the recent winter were too cold for nectar in South Carolina.
“We have had to feed our bees this year,” Karen Kutik says.
Weather can wreak havoc on pollination and honey production alike. When it rains or temperatures drop, the bees
don’t forage. For instance, the bees may be out in the almond
groves of California for a month and only fly 10 days, she
explains.
The Kutiks depend on pollination services to round out
their income, which also derives from honey and making
“nucs,” the nucleus of a hive. Right now, honey is where the
money is, she says. Honey prices have risen, in part because
of a drought in major honey-producing countries and a
smaller than average crop in 2008, according to the
American Honeybee Producers Association. While there’s
no explicit honey subsidy, there was a new $2.63 per kilogram duty placed on Chinese honey in January.
Karen Kutik says they separate the honey production
from the pollination services. For example, although blueberries make good honey, when they pollinate that crop in
Maine, they “don’t even talk honey with them,” she says of
the blueberry growers. “That’s a perk. It is not a sure thing.
Honey-making isn’t ever a sure thing.” For instance, cool,
rainy weather in the past two years have stymied basswood
and locust honey production for the Kutiks. “It is feast or
famine,” she says, of the bee business in general. “Right now

Silver Screen Subsidies

seems to be a good time. For a number of years we were too
small.” She adds that they run between 2,500 and 3,000
hives, while among the Midwest bee operations, 10,000 is
considered small.

Is hoping to land the next Hollywood hit a sound economic development strategy?
BY DAV I D VA N D E N B E RG

Future of the Bee Business
While feral bees have vanished from the fields and forests,
domestic bees are also struggling with a variety of mites and
viruses. There are pest control options, but keeping hives
healthy is tricky. Researchers are even examining the possibility that the migrations may weaken bee colonies, making
them more susceptible to mites like varroa. Apiculturists are
worried. Some losses are odd and include reports of bees failing to return to the hives and rapid colony losses for reasons
that remain largely unknown, according to a 2008 report by
the Congressional Research Service.
“The market for pollination services has grown and it has
coincided with these infestations of exotic pests we’ve had,”
says Don Hopkins, the state apiarist for North Carolina.
The pests are one reason most states require inspections,
certifications, and permits for incoming bees.
North Carolina has the most beekeepers of any state in
the nation, but most keep the bees as hobbies or sideline
businesses, like Charles Hatley. He has kept bees for 33 of his
45 years. With demand for pollination services ramping up,
and bee populations in jeopardy, he wants to transform his
sideline into a full-time operation. He currently breeds
queen bees, good for disease resistance, for eventual sale. He
places bees in a 400-acre forest of sourwood trees for a distinctive honey that can bring a price premium of up to 200
percent over other varieties. Hatley also rents hives to
vegetable and fruit growers for about $50 per colony for six
weeks. He has drafted his own contract, one that specifies
whether they use insecticides because he prefers to rent
hives to organic farmers.
He now can’t keep up with demand. “I got a call from a
farmer who wanted 600 colonies for watermelon and
cucumber.” As research continues into colony collapse
disorder and the various pests plaguing managed beehives,
the demand for pollination intensifies. As he says, “This can
get as big as I want it to get.”
RF

ubsidy contests among states to lure sports teams and
factories have been fought for years. Now many states
want to attract movies and television shows and offer
those Hollywood productions generous incentives. Critics
of incentive programs argue that they don’t pay for themselves. Supporters of production incentives claim they
are an attractive and quick way to inject money into a
community.
When production companies arrive, they immediately
spend money on items such as lumber for set construction
and accommodations for out-of-town cast and crew. Tim
Reid, an actor who played the disc jockey “Venus Flytrap”
on the television show “WKRP in Cincinnati,” has firsthand
knowledge of these expenditures. Reid is also a filmmaker
and co-founder of New Millennium Studios in Petersburg,
Va. He says bringing a production to a community is
like hosting dream in-laws. “They come and visit you, they
spend lots of money and they leave quickly,” he says. “Who
wouldn’t want in-laws like that?”
“One Tree Hill,” a CW television network drama filmed
in Wilmington, N.C., shows the impact a production can
have, says David Hartley, a producer for the show. The program has just finished its sixth season shooting in
Wilmington. In the time it has been there, Hartley says the
show has generated revenue for Wilmington’s economy
through spending at local businesses, which boosts the city’s
tax base. “We’re not even a big budget show,” he says.
The overall effectiveness of these subsidies, however,
remains in question. States that seek those revenues and
offer production incentives should be asking themselves
if this is a sound economic development strategy for the
long term.


The Incentives Game

Many states offer incentives to all sorts of companies looking to relocate or open a plant. However, not all firms will
view incentives as a major factor in their location decision.
Education levels of the work force, the ease of transporting
goods, and the overall quality of life could prove just as
important for the company. A comparative advantage, like
the abundance of a particular natural resource or a specialized labor input, may also attract a firm to a state.
Film and television productions differ from corporations
making choices about where to put factories because movie
productions in particular are short-term work. Television
series can stay longer in a community but don’t always last.
Besides, especially with feature films shot on location, much
of the labor force could come from somewhere else and
eventually leave.

[Photo: Cinematographers and camera operators at EUE/Screen Gems Studios in Wilmington, N.C., collaborate on a scene.]

However, firms that choose to bring a plant or factory to a
community invest in the area, train workers, and will have at
least management personnel or corporate leaders living where
the new facility is located. “The motion picture industry isn’t
like that, except in Los Angeles or New York,” says Cornell
University City and Regional Planning Professor Susan
Christopherson.
Moviemaking and television production, furthermore,
don’t need to rely on a specific location. Just because a film
or television show takes place in one city doesn’t mean it has
to be shot there. Special visual effects can alter certain elements of a landscape or the look of a street. In these cases,
any city can be a substitute for any other, thereby reducing
any comparative advantage a city’s appearance provides.
The industry that can re-create any location also
produces one of the nation’s largest exports: movies. With
the decline in manufacturing and the appeal of the entertainment industry, it’s not surprising states would want to
attract film production, says Ned Rightor, principal of
MXCIX, a Boston-area policy research firm. Rightor has
worked with Christopherson on research into production
incentives. Currently, more than 40 states — even film

Spring 2009 • Region Focus

23

production hubs like California and New York — provide
incentives in various forms.
The entertainment industry is a fixture of the economy
in both Los Angeles and New York City. Companies are
involved in pre- and post-production, operating studios, and
renting production equipment. Service providers like
accountants and lawyers are all there to assist projects at
every stage. Both places initially established leadership in
the industry and developed a comparative advantage without tax incentives.
Now some states hope to use incentives to build
their own comparative advantage. Production incentives
generally come in the form of either tax credits or rebates.
Some states also offer incentives for in-state construction of
studios and other businesses related to moviemaking and
production. Filmmaking incentives are typically applied
toward “below the line” expenses such as equipment rentals
and wardrobe. Some states cap the amount of incentives
that can be applied toward “above the line” expenses such as
salaries for star actors.
Top stars and big-budget movies have come to New
Mexico to shoot. The state enacted its incentive program in
2002 and has since expanded it. The program includes a tax
rebate on production expenses, employment training for
“below the line” costs (mostly production workers), support
for film and media programs at colleges and universities,
and funds for capital expenses. Filmmakers have responded,
as projects including Oscar-winner “No Country for Old
Men” and the action film “Terminator Salvation” were
filmed in the state. A film production support industry
has grown there. According to a New Mexico State
University study, the industry had 136 businesses employing
2,284 workers in 2007. Both numbers had increased
since 2001.
In the same study, however, New Mexico State economist
Anthony Popp and a co-author show that in the 2008 fiscal
year, for every dollar provided in incentives, New Mexico
received only 14 cents in revenues. Companies have built
and announced plans to build studio complexes in the state
since the incentives took effect. Popp says he hopes the
state’s incentives will establish an industry that can survive
without them, but added that many of these sorts of companies are mobile. “The transaction costs of moving someplace
else are fairly small.”
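The fiscal math behind that 14-cent figure is simple enough to sketch in Python. The incentive total below is a hypothetical round number; only the 0.14 ratio comes from the study:

```python
# Back-of-the-envelope sketch of the fiscal return cited above.
# The 0.14 revenue ratio is the Peach-Popp estimate for FY 2008;
# the incentive total is a hypothetical round number.
incentives_paid = 10_000_000          # hypothetical: $10 million in rebates
revenue_per_incentive_dollar = 0.14   # study's FY 2008 estimate

tax_revenue = incentives_paid * revenue_per_incentive_dollar
net_fiscal_cost = incentives_paid - tax_revenue

print(f"Revenue recouped: ${tax_revenue:,.0f}")           # $1,400,000
print(f"Net cost to the state: ${net_fiscal_cost:,.0f}")  # $8,600,000
```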
Wilmington, N.C., has housed a studio since 1984. Film
producer Dino De Laurentiis brought the sound stages to
town after falling in love with the area while scouting filming
locations for Stephen King’s “Firestarter.” Numerous productions, including “Muppets from Space,” the HBO
television shows “Eastbound & Down” and “Little Britain
USA,” and “One Tree Hill,” have been shot there. Though
De Laurentiis built the studio, its former president Frank
Capra Jr. — the late son of the legendary director of
“Mr. Smith Goes to Washington” — is considered the
godfather of the city’s film industry. The studio is one
element of the comparative advantage the city has in film


and television production, and it was established initially
without subsidies from the state. Wilmington is also home
to a trained crew and multiple service providers.
The shooting of films and television series is one of the
most mobile parts of the production process. States provide
incentives for it in the hope that they can lure the less
mobile parts. That strategy has become more difficult as the
number of states offering production incentives has
increased, says Steven Miller, an economist at Michigan
State University. Michigan, Louisiana, and New Mexico have
succeeded in luring companies to build studios in their
states. But the only way a studio can make money is if a
production company owns it and shoots a steady number of
its own projects there, Christopherson says.
Boston has a comparative advantage in one specific area
of film and television production because it is home to PBS
station WGBH-TV. The station produces educational
programming and more PBS primetime and online productions than any other station. States interested in developing a film and television
industry should pursue opportunities for specific niches
instead of seeking the same productions other states
fight for, Christopherson says. Opportunities are out there.
“Regions should be trying to identify what’s distinctive in
their economy and what they can build on rather than just
competing on basis of cost,” she explains.
For states, trying to sell themselves on their comparative
advantage alone is easier said than done. If left to their own
devices, industries would choose to locate in places best
suited to their needs, says Miller. In a world where incentives
exist, however, states face a kind of prisoner’s dilemma. “If
they’re not bidding for businesses to locate or stay in their
geography, someone else is going to,” he says.
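The bidding dilemma Miller describes can be sketched as a simple payoff matrix in Python. The payoffs below are illustrative, not estimates from his research:

```python
# A minimal payoff-matrix sketch of the bidding dilemma described above.
# Payoffs are hypothetical net benefits to each state: collectively both
# do best by not bidding, but each is individually better off offering
# incentives no matter what the other state does.
payoffs = {
    # (state_a_action, state_b_action): (payoff_a, payoff_b)
    ("hold", "hold"): (3, 3),   # neither subsidizes; productions split anyway
    ("bid",  "hold"): (4, 0),   # A lures the production, B loses it
    ("hold", "bid"):  (0, 4),
    ("bid",  "bid"):  (1, 1),   # both pay subsidies; market share unchanged
}

def best_response(opponent_action):
    """State A's best action given what the other state does."""
    return max(["hold", "bid"],
               key=lambda a: payoffs[(a, opponent_action)][0])

# Bidding is the best response either way, even though
# ("hold", "hold") would leave both states better off.
assert best_response("hold") == "bid"
assert best_response("bid") == "bid"
```

Because bidding is each state's best response regardless of what the other does, mutual bidding is the equilibrium even though both states would prefer mutual restraint.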

Stand-in Cities
In a world where one city can double as another, incentives
can influence decisions about where productions are shot.
“The Curious Case of Benjamin Button,” the Oscar-nominated film starring Brad Pitt as a man who ages in reverse,
was based on an F. Scott Fitzgerald short story set in
Baltimore. The film’s director had chosen Maryland locations for filming and the Maryland Film Office provided
assistance, says Jack Gerbes, the office’s director. But, to
take advantage of Louisiana’s more generous incentives, the
setting of the story was changed to New Orleans, and most
of the movie was shot there. Pitt told reporters at the
movie’s New Orleans premiere that the project probably
could not have been completed without the tax breaks
Louisiana provided. Taxpayers there financed more than $27
million of the film’s $167 million budget.
There are more examples, including the movie
“Annapolis,” a 2006 film starring James Franco about a
young boxer struggling at the United States Naval
Academy. The production had opened offices in Baltimore and
was planning to shoot there and in Annapolis. But soon after
the offices opened, the Pennsylvania Legislature passed
production incentives, and within a couple of days producers

were on their way north to shoot the movie.
Sometimes a state’s comparative advantage is vital. “One
Tree Hill” started shooting in Wilmington before North
Carolina’s incentives started. It followed in the footsteps of
“Dawson’s Creek,” a drama shot in Wilmington for six years.
But that show’s setting was Massachusetts. Warner
Brothers chose to film “One Tree Hill” in Wilmington
because of the presence of EUE/Screen Gems Studios and
the city’s pre-existing base of crew members, Hartley says.
The incentives strengthened the argument for keeping the
show in Wilmington. If the show were starting today with no
incentives in place, Hartley says, it would likely not be
filming there; he adds that consideration was once even given
to moving “Dawson’s Creek” out of the city.
“Creatively if you have a certain look in mind there are certainly other places in the country that have incentive
programs that can approach this place as a comparison.”
Gerbes says state film commissioners like him are essentially salespeople who travel to trade shows, film festivals,
and similar events selling their states’ film industries, diversity of locations, and other amenities for filmmakers.
Nothing would make him happier than to return to
the 1990s, when decisions about whether to film in Maryland
were based on those factors. But now it’s all about incentives. “That’s unfortunately the economics of today’s
Hollywood,” Gerbes says.

A Shift in Strategy?
When will incentives stop? No one has asked that question,
Popp says, but he thinks they will stop when states can no
longer afford them. For now, whenever states want something developed, they award tax incentives for it. Politicians
often focus on the jobs created but disregard the costs. Any
halt to incentives would cause problems, including anger
from the film industry. Current economic conditions, however, may mean that the approach states take toward
economic development could have to change. “I think we’re
in a position where we ought to think about what we should
be doing in terms of economic development,” Popp says.
North Carolina may be at that point now. The state’s
incentive program took effect in August 2006. In 2007 and
2008, the state provided a combined $32 million to 41
productions that spent $215.4 million. Pending legislation
would increase the state’s film incentive program from a 15
percent rebate of select production expenses to 25 percent.
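As a rough check on those figures, the Python sketch below shows that $32 million against $215.4 million in spending implies an effective rate near the 15 percent statutory rebate, and what the same spending would cost at 25 percent:

```python
# Checking the rebate arithmetic in the figures above: $32 million awarded
# against $215.4 million in qualifying spending implies an effective rate
# close to the 15 percent statutory rebate.
incentives_paid = 32_000_000
qualified_spending = 215_400_000

effective_rate = incentives_paid / qualified_spending
print(f"Effective rebate rate: {effective_rate:.1%}")   # 14.9%

# Cost of the same spending under the proposed 25 percent rebate:
proposed_cost = qualified_spending * 0.25
print(f"Cost at 25 percent: ${proposed_cost:,.0f}")     # $53,850,000
```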
More than 800 films and 14 television series have been

filmed in North Carolina, many before the state started
offering incentives. After the subsidy took effect, the state
has continued attracting productions, including feature
films like “Nights in Rodanthe” and television shows like
HBO’s “Eastbound & Down,” both shot in Wilmington.
Even with all the productions that have been shot and the
infrastructure that’s in place, at a 15 percent rebate, “we’re
not a player anymore,” says Aaron Syrett, director of North
Carolina’s film office. “We’re seeing an industry that has
been thriving here for the last 25 years start to dissipate and
go away. We’re losing that competitive edge along with our
share of the market.”
EUE/Screen Gems Studios could see more activity if the
state expands incentives. The studio will add a 10th sound
stage this year, its largest. The new sound stage will have a
60- by 60-foot water tank and will put the company in
contention for productions it wouldn’t have a chance at
nabbing otherwise, says Bill Vassar, the studio’s executive
vice president. A television production with distribution,
money, and major talent behind it is interested in the new
stage, Vassar says. However, a Disney film starring Miley
Cyrus and written by a North Carolina author will be shot in
Georgia instead because of that state’s more generous incentives. “Disney would have been the first client in there,
which would have been great,” Vassar says.
Wilmington remains home to several small production
companies. Some of them benefit from the presence of large
productions like “One Tree Hill” in the city because they can
get called in to produce “behind the scenes” features for the
DVD release of the show, says Jennifer Mullins, who owns
Oriana East Productions with her husband, William. Their
steadiest source of work is post-production for nationally
broadcast reality shows. The company is now developing a
feature film that has financing outside the Hollywood studio
machine. As William Mullins explains, “We do have one
project that has a lot of development money in place at this
point, and fortunately it’s coming from private equity, so the
executive has a lot of creative control, and he wants to bring
it to Wilmington.”
The firm is serving as consulting producer on some
feature films, which may or may not be shot in North
Carolina. William Mullins says that decision — like so
many others in the film industry — depends on executive
producers, mostly based in Los Angeles. “The incentives
offered by Louisiana and Michigan are very often too high
for them to turn down.”
RF

READINGS
Abdulkadri, Abdul, and Steven R. Miller. “The Economic Impact
of Michigan’s Motion Picture Production Industry and the
Michigan Motion Picture Production Credit.” East Lansing,
Mich.: Michigan State University Center for Economic Analysis,
Feb. 6, 2009.
The Guide to United States Production Incentives. Santa Monica, Calif.:
The Incentives Office, Winter 2009.

Peach, James, and Anthony V. Popp. “The Film Industry in New
Mexico and the Provision of Tax Incentives.” Las Cruces, N.M.:
Arrowhead Center, New Mexico State University, Aug. 26, 2008.
Rollins Saas, Darcy. “Hollywood East? Film Tax Credits in New
England.” New England Public Policy Center Policy Brief 06-3,
Federal Reserve Bank of Boston, October 2006.

Volatile Profits for the Airline Industry

BY RENEE COURTOIS

Looking for a flight out of Charlotte, N.C.? You’ll have
3.6 percent fewer flight options by June 2009 compared to the same month last year. Excited to spend
a summer week in Myrtle Beach, S.C.? You’ll have 7.3
percent fewer flights for getting home than you would have
had last summer. Even our nation’s capital has seen about
6.5 percent fewer flights departing from Washington Dulles
International Airport this June compared to June 2008.
The main reason behind the capacity cuts at most of the
country’s major airports, of course, is the recession. When
the economy turns sour, people fly less. Since it doesn’t pay
to fly empty planes, airlines cut capacity by running fewer
flights or swapping big planes for smaller ones. “Right now
there are too many seats chasing too few passengers,” says
Vaughn Cordle of AirlineForecasts, an industry consulting
group.
But any seasoned traveler knows the recession is just the
latest in a series of shocks to hit the airline industry in this
decade. Oil prices — a key determinant of jet fuel prices and,
to a lesser extent, would-be travelers’ expendable cash —
spiked to a record-breaking $147 per barrel in July 2008.
The terrorist attacks of 9/11 led to huge costs for the
industry in the form of security protocols, and they
worried travelers, many of whom opted to just stay home.
The airline industry as a whole has been profitable for
only two years during this decade, 2006 and 2007. The industry
booked a loss again in 2008, and industry analysts are split
on what’s in the cards for this year. Analysts do agree, however, that because of the succession of shocks the industry
has experienced, and the emergence of a new breed
of competitors, we may be at a turning point in the
airline industry that could change how airlines operate in
the future.


Turbulence On the Books
In order to keep this in perspective, it is important to note
that the airline industry has never been consistently profitable. This is mostly a result of its structure. Airlines have
large upfront fixed costs for their fleet of jets, but their real
product is seats on those planes. They charge a fare for each
seat that is well above the marginal cost of flying one additional passenger in order to recoup those fixed costs over
time.
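That pricing logic implies a break-even load for each flight: the per-seat margin over marginal cost must cover the flight's fixed costs. A Python sketch with purely hypothetical numbers:

```python
# Illustrative sketch of the fare structure described above: fares are set
# well above marginal cost so each passenger's margin chips away at fixed
# costs. All numbers are hypothetical.
fixed_cost_per_flight = 30_000   # hypothetical: aircraft ownership, crew, gates
marginal_cost_per_pax = 25       # hypothetical: extra fuel burn, catering
fare = 225                       # hypothetical ticket price

margin_per_pax = fare - marginal_cost_per_pax            # $200 per seat sold
breakeven_load = fixed_cost_per_flight / margin_per_pax  # seats to break even

print(f"Seats needed to break even: {breakeven_load:.0f}")  # 150
```

Below that load the flight loses money, which is why airlines respond to falling demand by cutting flights or downsizing aircraft rather than flying planes that are mostly empty.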
With the exception of fuel, airlines’ costs are relatively
stable. The real uncertainty that they face is exceptionally
erratic demand resulting from business cycles, and they are
more sensitive to weather patterns and geopolitical turmoil
than perhaps any other industry in existence. The airline
industry experienced its first-ever decline in world traffic
volume in 1991, an outcome of anxiety over traveling during

26

Region Focus • Spring 2009

the Gulf War. Other notable extremes since have included
airlines’ high-profit years during the dot-com boom, the subsequent decline in global air travel following 9/11 and the
current financial crisis. The International Air Transport
Association (IATA) predicts global passenger traffic will fall
by 3 percent in 2009. Despite the industry’s cyclicality, this
is only the third time in the last 35 years that passenger
traffic has fallen. This may be one reason why industry analysts are now speculating on whether the industry’s oldest
players will survive in their current form.
In an industry whose profits are so volatile, it is no
surprise that the competitive landscape for airlines is constantly changing through mergers, bankruptcies, and
liquidations. A small handful of airlines have stayed in the
game since the industry was deregulated in 1978. These
so-called “legacy carriers” include some of the country’s
biggest names in air travel: American, Continental, United,
US Airways, Delta, and Northwest (the latter two of which
merged in October 2008 and are in the process of being fully
integrated under Delta’s brand). They have seen their share
of financial distress.
When times are tough for airlines, new competitors tend
to enter or expand in the market when aircraft, labor, and
airport space are cheaper. They also gobble up any routes
that have been abandoned by existing airlines. In the last
two decades, the most intense competition has come from
the so-called “low-cost carriers,” or LCCs. The LCCs are the
group of airlines — the names Southwest, JetBlue, AirTran,
Allegiant, and Frontier, the biggest of the LCCs, might ring
a bell — known for offering cheap fares for flights all over
the country. The LCCs aren’t always the cheapest flight
option, but many times they are. Customers have increasingly chosen them over the legacy carriers.
This is because seats are a commodity. They are not
easily differentiated among airlines and have no intrinsic
value on their own — people fly to get somewhere, not for
the sake of taking a flight. The airline’s sole aim is to control
the supply of that commodity relative to its competitors in
order tomanage the fares at a profitable level, or carry more
traffic for a given fare.
The commodity nature of seats means that price is king
in the airline industry: The airline that offers the cheapest
flight for a given market will usually win the customer.
Because the LCCs tend to offer cheaper flights, they
often act as price-setters for the rest of the industry and
“everyone else has to scramble to meet them,” according to
Edmund Greenslet, author of The Airline Monitor, an industry publication. The market share of the LCCs has grown
from about one-tenth of the industry in the early ’90s to
over one-quarter in 2008. Southwest now carries more

Coming to a Hub Near You
Another key difference between legacies and LCCs is the
routes they fly. The airline industry was heavily regulated
prior to 1978, with the Civil Aeronautics Board determining
what routes airlines could fly and what fares they could
charge. Thus, in effect the government determined the market share of each airline. Decisions were typically made
based on what would best serve the “public interest.” (The
holdover from this regulatory regime is the painstaking
merger approval process that still exists for airlines today.)

8

180

6

160

4

140

2

120

0

100

-2

80

-4

60

-6

40

-8

20

-10

0

-12

PROFIT ($BILLIONS)

The fight for dominance in the airline industry

200

1978
79
80
81
82
83
84
85
86
87
88
89
1990
91
92
93
94
95
96
97
98
99
2000
01
02
03
04
05
06
07
08

Clear Skies?

OPERATING REVENUE ($BILLIONS)

U.S. passenger and cargo airlines

passengers than any other U.S. airline.
How do the LCCs serve up cheap flights? Aptly named,
they operate within a business model that allows them to keep
costs down, run more efficiently, and thus charge lower fares.
The defining characteristic of the LCCs is that they have a
relatively nondiverse fleet of jets. Frontier Airlines runs only
three types of jets. The rest of the LCCs fly either one or two.
Notably, at the end of 2008 Southwest had the third-largest
fleet of jets in the industry (after the Delta/Northwest merger) at 537 jets and they’re all 737s.
A homogeneous fleet saves the LCCs bundles in terms of
maintenance and staff training since they don’t need to train
staff on how to repair and operate multiple types of jets.
This helps the LCCs better utilize their staff, including
cross-training them on lots of jobs — which is why you may
have noticed that the person who checked your bags on your
last LCC flight also appeared on board to deliver your
peanuts. The LCCs are also known for offering “no frills”
service by sometimes eliminating seat assignments, in-flight
meals, and entertainment. They often have an uncomplicated fare structure, sometimes selling only one-way flights.
These simplifying features streamline flight operations.
This lean business model has created a considerable cost
advantage in terms of “cost per available seat mile” (CASM)
— or the cost of flying one airline seat for one mile. Over
time, consulting firm Oliver Wyman estimates the LCCs
operate about 25 percent more cheaply than the legacies in
terms of CASM. No legacy carrier beats any LCC in terms of
this cost measure. The cost gap between the two groups in
absolute terms has also widened over time, despite avid costcutting measures by the legacies. As much as 65 percent of
the cost advantage of the LCCs may be attributable to its
simplified business model, according to consulting firm
Booz Allen Hamilton.
Labor remains the biggest expense for airlines, between
one-quarter and one-third of total operating expenses. But
because the LCCs are able to better manage other costs, this
is not an impediment. Southwest in particular is so good at
keeping costs down that it completely compensates for the
fact that it has the most expensive labor force of the major
airlines as a percentage of its CASM. Its labor force is 77
percent unionized, and its staff and pilots make among the
highest incomes in the industry, with the biggest benefits
packages — yet Southwest still has among the lowest CASM
in the industry.

Profit

Operating Revenue

SOURCE: Air Transport Association

After deregulation in 1978, American Airlines pioneered
a new method for determining routes. They funneled all
their passengers through one common location, called a
hub, bundled them into common connecting flights, and
shipped passengers out from there to the final destinations.
By accumulating passengers in one location, the legacy airlines could schedule a greater number of flights, serve more
cities, and earn more revenue. This became known as the
“hub-and-spoke” setup, and all the airlines at the time
quickly adopted it.
But the hub-and-spoke model does come with some
costs. Key to the model is amassing lots of passengers into
the hub at peak points during the day to fill outgoing flights
and minimize the amount of time that planes are left idle
waiting for passengers. Idle time means lost revenue. “You
wind up piling up everybody and trying to get them in and
out at the same time,” says Greenslet. It also means the airlines must build in time between flights to move bags, staff,
and passengers from one flight to the next.
The LCCs revolutionized commercial flying by providing
direct flights under a “point-to-point” model, with no
hub at all. The LCCs provide more flights that run directly
from one city to another, even if neither city is particularly
large. The reduced congestion and idle time allows
LCCs to get planes back in the air more quickly. “The
LCCs’ planes are more productive. They’re flying 11 to 13
hours a day, compared to 9 to 11 hours a day for the legacies,”
says Cordle. This business model turned the costs and
benefits of hub-and-spoke airlines on its head: The pointto-point model is less costly in part because it reduces idle
time, but offers less in connectivity and flight times, and
therefore risks accumulating fewer passengers per flight.
Over time, cost-conscious vacationers, who are relatively
flexible on flight times, have come to rely on the lower-fare
LCCs, while business travelers, for whom connectivity and
scheduling convenience is most important, have stuck with
the legacy carriers.

Meeting in the Aisle
In some ways the business models of the LCC and legacy
airlines are merging. As LCCs grow and the two groups fight

Spring 2009 • Region Focus

27

Clear Skies?
The fight for dominance in the airline industry

BY R E N E E CO U RTO I S

Looking for a flight out of Charlotte, N.C.? You'll have 3.6 percent fewer flight options by June 2009 compared to the same month last year. Excited to spend a summer week in Myrtle Beach, S.C.? You'll have 7.3 percent fewer flights for getting home than you would have had last summer. Even our nation's capital has seen about 6.5 percent fewer flights departing from Washington Dulles International Airport this June compared to June 2008.
The main reason behind the capacity cuts at most of the country's major airports, of course, is the recession. When the economy turns sour, people fly less. Since it doesn't pay to fly empty planes, airlines cut capacity by running fewer flights or swapping big planes for smaller ones. "Right now there are too many seats chasing too few passengers," says Vaughn Cordle of AirlineForecasts, an industry consulting group.
But any seasoned traveler knows the recession is just the latest in a series of shocks to hit the airline industry in this decade. Oil prices — a key determinant of jet fuel prices and, to a lesser extent, of would-be travelers' expendable cash — spiked to a record-breaking $147 per barrel in July 2008. The terrorist attacks of 9/11 led to huge costs for the industry in the form of security protocols, and they worried travelers, many of whom opted to just stay home.
The airline industry as a whole has been profitable for only two years during this decade, 2006 and 2007. Airlines booked a loss again in 2008, and industry analysts are split on what's in the cards for this year. Analysts do agree, however, that because of the succession of shocks the industry has experienced, and the emergence of a new breed of competitors, we may be at a turning point that could change how airlines operate in the future.

Turbulence On the Books
In order to keep this in perspective, it is important to note
that the airline industry has never been consistently profitable. This is mostly a result of its structure. Airlines have
large upfront fixed costs for their fleet of jets, but their real
product is seats on those planes. They charge a fare for each
seat that is well above the marginal cost of flying one additional passenger in order to recoup those fixed costs over
time.
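The fixed-cost arithmetic described above can be made concrete with a quick sketch. Every number below is an illustrative assumption, not a figure from the article: the point is simply that the spread between fare and marginal cost is what slowly pays down the aircraft.

```python
# Sketch of how fares above marginal cost recoup an aircraft's fixed cost.
# All numbers are hypothetical assumptions, not industry data.

fixed_cost = 60_000_000   # assumed purchase price of one jet, in dollars
fare = 200                # assumed average fare per seat, in dollars
marginal_cost = 40        # assumed cost of carrying one extra passenger

# Each passenger contributes fare minus marginal cost toward fixed costs.
contribution = fare - marginal_cost

# Passengers needed before the jet has paid for itself:
passengers_to_break_even = fixed_cost // contribution
print(passengers_to_break_even)   # 375000

# At an assumed 300 passengers a day, recovery takes years:
years = passengers_to_break_even / (300 * 365)
print(round(years, 1))            # 3.4
```

Under these made-up numbers, one jet needs hundreds of thousands of passengers — several years of full flying — before its purchase price is covered, which is why a demand shock is so painful for carriers.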
With the exception of fuel, airlines’ costs are relatively
stable. The real uncertainty that they face is exceptionally
erratic demand resulting from business cycles, and they are
more sensitive to weather patterns and geopolitical turmoil
than perhaps any other industry in existence. The airline
industry experienced its first-ever decline in world traffic
volume in 1991, an outcome of anxiety over traveling during


the Gulf War. Other notable extremes since have included
airlines’ high-profit years during the dot-com boom, the subsequent decline in global air travel following 9/11, and the
current financial crisis. The International Air Transport
Association (IATA) predicts global passenger traffic will fall
by 3 percent in 2009. Despite the industry’s cyclicality, this
is only the third time in the last 35 years that passenger
traffic has fallen. This may be one reason why industry analysts are now speculating on whether the industry’s oldest
players will survive in their current form.
In an industry whose profits are so volatile, it is no
surprise that the competitive landscape for airlines is constantly changing through mergers, bankruptcies, and
liquidations. A small handful of airlines have stayed in the
game since the industry was deregulated in 1978. These
so-called “legacy carriers” include some of the country’s
biggest names in air travel: American, Continental, United,
US Airways, Delta, and Northwest (the latter two of which
merged in October 2008 and are in the process of being fully
integrated under Delta’s brand). They have seen their share
of financial distress.
When times are tough for airlines — and aircraft, labor, and airport space are cheaper — new competitors tend to enter or expand in the market. They also gobble up any routes
that have been abandoned by existing airlines. In the last
two decades, the most intense competition has come from
the so-called “low-cost carriers,” or LCCs. The LCCs are the
group of airlines — the names Southwest, JetBlue, AirTran,
Allegiant, and Frontier, the biggest of the LCCs, might ring
a bell — known for offering cheap fares for flights all over
the country. The LCCs aren’t always the cheapest flight
option, but many times they are. Customers have increasingly chosen them over the legacy carriers.
This is because seats are a commodity. They are not
easily differentiated among airlines and have no intrinsic
value on their own — people fly to get somewhere, not for
the sake of taking a flight. The airline’s sole aim is to control
the supply of that commodity relative to its competitors in
order to manage fares at a profitable level, or carry more
traffic for a given fare.
The commodity nature of seats means that price is king
in the airline industry: The airline that offers the cheapest
flight for a given market will usually win the customer.
Because the LCCs tend to offer cheaper flights, they
often act as price-setters for the rest of the industry and
“everyone else has to scramble to meet them,” according to
Edmund Greenslet, author of The Airline Monitor, an industry publication. The market share of the LCCs has grown
from about one-tenth of the industry in the early ’90s to
over one-quarter in 2008. Southwest now carries more

passengers than any other U.S. airline.
How do the LCCs serve up cheap flights? Aptly named, they operate within a business model that lets them keep costs down, run more efficiently, and thus charge lower fares. The defining characteristic of the LCCs is a relatively nondiverse fleet of jets. Frontier Airlines runs only three types of jets; the rest of the LCCs fly either one or two. Notably, at the end of 2008 Southwest had the third-largest fleet in the industry (after the Delta/Northwest merger) at 537 jets — all of them 737s.
A homogeneous fleet saves the LCCs bundles on maintenance and staff training, since they don't need to train staff to repair and operate multiple types of jets. It also helps the LCCs better utilize their staff, including cross-training them on lots of jobs — which is why you may have noticed that the person who checked your bags on your last LCC flight also appeared on board to deliver your peanuts. The LCCs are also known for offering "no frills" service by sometimes eliminating seat assignments, in-flight meals, and entertainment. They often have an uncomplicated fare structure, sometimes selling only one-way flights. These simplifying features streamline flight operations.
This lean business model has created a considerable cost advantage in terms of "cost per available seat mile" (CASM) — the cost of flying one airline seat for one mile. Consulting firm Oliver Wyman estimates that the LCCs operate about 25 percent more cheaply than the legacies in terms of CASM; no legacy carrier beats any LCC on this measure. The cost gap between the two groups in absolute terms has also widened over time, despite avid cost-cutting by the legacies. As much as 65 percent of the LCCs' cost advantage may be attributable to their simplified business model, according to consulting firm Booz Allen Hamilton.
Labor remains the biggest expense for airlines, at between one-quarter and one-third of total operating expenses. But because the LCCs are able to better manage other costs, this is not an impediment. Southwest in particular is so good at keeping costs down that it completely compensates for having the most expensive labor force of the major airlines as a share of its CASM. Its labor force is 77 percent unionized, and its staff and pilots earn among the highest incomes in the industry, with the biggest benefits packages — yet Southwest still has among the lowest CASM in the industry.

[Figure: Volatile Profits for the Airline Industry. U.S. passenger and cargo airlines, operating revenue and profit in billions of dollars, 1978-2008. Source: Air Transport Association]

Coming to a Hub Near You
Another key difference between legacies and LCCs is the routes they fly. The airline industry was heavily regulated prior to 1978, with the Civil Aeronautics Board determining what routes airlines could fly and what fares they could charge. Thus, in effect, the government determined the market share of each airline. Decisions were typically made based on what would best serve the "public interest." (The holdover from this regulatory regime is the painstaking merger approval process that still exists for airlines today.)

After deregulation in 1978, American Airlines pioneered a new method for determining routes. It funneled passengers through one common location, called a hub, bundled them onto common connecting flights, and sent them on from there to their final destinations.
By accumulating passengers in one location, the legacy airlines could schedule a greater number of flights, serve more
cities, and earn more revenue. This became known as the
“hub-and-spoke” setup, and all the airlines at the time
quickly adopted it.
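A simple counting argument shows why a hub lets an airline "serve more cities" with fewer flights. This is a sketch under an assumed network size, not figures from the article: with n spoke cities attached to one hub, n routes connect every pair of cities via a connection, while nonstop service between every pair requires n(n-1)/2 routes.

```python
# Routes needed to connect n spoke cities: one hub vs. all-nonstop.
# n = 30 is an assumed network size, chosen purely for illustration.

def routes_hub(n: int) -> int:
    """One route per spoke city connects every city pair via the hub."""
    return n

def routes_direct(n: int) -> int:
    """Nonstop routes between every pair of n cities."""
    return n * (n - 1) // 2

n = 30
print(routes_hub(n))     # 30
print(routes_direct(n))  # 435
```

Thirty spoke routes through a hub cover the same 435 city pairs that would take 435 nonstop routes to serve directly — the connectivity advantage that made every airline of the era adopt the model.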
But the hub-and-spoke model does come with some
costs. Key to the model is amassing lots of passengers into
the hub at peak points during the day to fill outgoing flights
and minimize the amount of time that planes are left idle
waiting for passengers. Idle time means lost revenue. “You
wind up piling up everybody and trying to get them in and
out at the same time,” says Greenslet. It also means the airlines must build in time between flights to move bags, staff,
and passengers from one flight to the next.
The LCCs revolutionized commercial flying by providing
direct flights under a “point-to-point” model, with no
hub at all. The LCCs provide more flights that run directly
from one city to another, even if neither city is particularly
large. The reduced congestion and idle time allows
LCCs to get planes back in the air more quickly. “The
LCCs’ planes are more productive. They’re flying 11 to 13
hours a day, compared to 9 to 11 hours a day for the legacies,”
says Cordle. This business model turned the cost-benefit calculus of the hub-and-spoke airlines on its head: The point-to-point model is less costly in part because it reduces idle time, but it offers less connectivity and fewer flight times, and therefore risks accumulating fewer passengers per flight.
Over time, cost-conscious vacationers, who are relatively
flexible on flight times, have come to rely on the lower-fare
LCCs, while business travelers, for whom connectivity and
scheduling convenience are most important, have stuck with
the legacy carriers.
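Cordle's utilization figures imply a sizable productivity gap. The back-of-the-envelope sketch below uses an assumed seat count and block speed (illustrative only, not from the article) to show how two extra flying hours a day compound into seat-mile capacity:

```python
# Annual seat-mile output of one plane at different daily utilization.
# Seat count and block speed are assumed purely for illustration.

SEATS = 150        # assumed seats on a narrow-body jet
SPEED_MPH = 500    # assumed average block speed

def annual_asms(hours_per_day: float) -> float:
    """Available seat miles one plane produces in a year."""
    return SEATS * SPEED_MPH * hours_per_day * 365

lcc = annual_asms(12)     # midpoint of Cordle's 11-13 hours for LCCs
legacy = annual_asms(10)  # midpoint of his 9-11 hours for legacies
print(f"{lcc / legacy - 1:.0%}")  # 20%
```

Whatever seats and speed one assumes, they cancel in the ratio: flying 12 hours a day instead of 10 means the same airframe produces 20 percent more capacity, spreading its fixed cost over more seat miles.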

Meeting in the Aisle
In some ways the business models of the LCC and legacy
airlines are merging. As LCCs grow and the two groups fight directly for market share, they're picking up each other's habits. Legacy carriers have started to mimic some of the streamlined features of the LCCs. Many legacy carriers increasingly charge for, or eliminate, the "frills" of air travel. They have paid attention to the cost-minimizing innovations pioneered by the LCCs, like the fuel hedges that have famously saved Southwest billions.
Some have also migrated to "rolling hubs." Traditional hubs schedule many planes to land and depart around the same time during peak hours, which reduces the layovers with which passengers must deal but leads to costly congestion. Rolling hubs, on the other hand, smooth flights over the day rather than coordinating many flights to take off and land around the same time. This reduces congestion and gets planes back in the air more quickly.
As the LCCs have grown, their traffic has inevitably accumulated in certain cities where demand is strong. As a result, low-cost carriers increasingly operate out of hubs; they just might not call them that. Many of the LCCs instead call these de facto hubs "focus cities" or "gateways." Therefore it is something of a misnomer to say that the LCCs operate strictly with a point-to-point model, according to Mike Boyd of Boyd Group International, an airline forecasting firm based in Colorado. Southwest, for example, specifically calls itself a "point-to-point" airline, even though Boyd estimates as much as a third of its flights are connecting traffic. The LCCs don't make a concerted effort to market themselves as hub carriers, and many are still much less reliant on hubs than the legacies.
Adopting a partial hub system has allowed the LCCs to offer the greater connectivity that the legacy airlines do, which has expanded the number of markets they serve. They have also begun to target "the most lucrative passenger, the business traveler," by offering more perks and frequent flier programs, "and that's the bread and butter of the legacies," according to Cordle. He estimates that business travelers make up 8 percent to 12 percent of legacy carriers' passengers, but about 35 percent to 45 percent of their revenue, and in some cases as much as half.
It looks as though hubs are here to stay, even though, by some measures, they're more expensive to run. Hubs may be the only way to serve a country of our size and composition. "A country like ours, with a lot of population centers, generates a lot of travel demand even for relatively small cities, but not always enough traffic to support a direct flight to another medium-sized town. The only way to serve all those points is to hub the traffic," says Greenslet. "The train system does that in Europe. The hub-and-spoke system does that in this country."
As the LCCs saturate their existing markets, they have two options if they want to keep growing. They can branch into the small-city, short-haul traffic currently served by the regional airlines — the small, 50- to 70-seat operations that serve very small cities, often as subsidiaries of a legacy carrier. Or they can branch into long-haul (generally defined as six or more hours) and international travel like the legacies. The LCCs can't expect to keep branching into these areas while maintaining only one or two types of jets. However, buying an array of new jets departs rather dramatically from the business model that has kept their costs so low to begin with. "Right now they're too big to go to Montgomery, Ala., and too small to go to Shanghai," Boyd says.
What this means is that the low-hanging fruit for the LCCs may be just about gone. They used their novel business model to connect markets in a way that didn't previously exist — point-to-point service between midsized cities that created a low-cost alternative for people who would otherwise drive 300 miles to their destination. In other words, the LCCs expanded overall demand instead of taking it away from their competitors. As they've grown, they've moved into big-city markets and have been largely successful at undercutting the legacies on many routes. But they won't be able to keep growing without fighting tooth and nail to take market share from the legacy carriers, especially if consumer demand continues to fall.
What's more, the cost advantages that made them so successful to begin with may be dwindling. Their planes are becoming less fuel efficient as they age. Labor costs are rising too: Their staffs are gaining tenure, and airline wages are determined on a graduated scale by seniority. It's not obvious what more they can do to win market share from the legacy carriers and keep their cost advantage. "The big thing you'll continue to see is that the legacy carriers will keep pushing to lower their cost structure," says Yale University economist Steven Berry. "But the degree to which the LCCs can adopt the hub system, for example, is less certain." Still, don't be too quick to discount the innovative LCCs. Around since the 1970s, low-cost behemoth Southwest is a living case study of an aging LCC, and it has only seemed to get stronger. Regarding its purportedly disappearing cost advantages, "I've been saying that about Southwest for about 30 years. So far aging has had no major effect," Greenslet says.
In light of the changes that have taken place, economist Severin Borenstein of the University of California at Berkeley believes a single "hybrid" airline model appears to be emerging. "The idea that some airlines have the 'right' business model is nonsense. I think we'll see LCCs move increasingly toward hubbing, and I think we'll continue to see the legacy carriers move in the opposite direction and streamline," he says. "We're definitely seeing the two models merge."

[Figure: Fleet Diversity — The Defining Characteristic. Selected airline fleets as of year-end 2008, comparing total fleet size and number of jet types for low-cost and legacy carriers. Source: Air Transport Association]

[Figure: Legacy Carriers Losing Domestic Market Share to LCCs. Relative market share by airline type (LCC, regional, legacy) in terms of available seat miles, 1991-2008. Note: One ASM is one seat flown one mile; total ASM is the number of seat miles offered by all passenger airlines in a period — a measure of the total "product on the shelf" offered by airlines. Source: Airports:USA DataMiner]

Landing on Common Ground
The legacy and low-cost carriers will face some issues that both will find hard to ignore. One is the possible adoption of a federal "cap and trade" emissions control program that threatens to dramatically raise the cost of jet fuel. Another is an outdated air traffic control system that forces costly delays. Of course, economic cyclicality will continue to plague the airlines. The industry expands and contracts in line with, and at roughly twice the pace of, the overall economy. When the economy slows, so does travel demand, as businesses tighten their travel budgets and individuals opt for fewer recreational trips.
In the future, Cordle expects an airline industry that is smaller overall. "Because of excess spending and consumption in the United States since the early 2000s, with twin bubbles in stocks and housing, expenditures on air travel were inflated above long-run trend," he says. "Now we're getting back to the reality of what the consumers can actually manage. When you strip away all the noise, it really means the industry will be 10 percent or so smaller."
This contraction can take place through mergers or capacity cuts — both of which can be aided by Chapter 11 bankruptcy, to which the airlines are no stranger. Of the six legacy carriers, four have filed for bankruptcy since 2000. There have been more than 40 airline bankruptcies overall in this decade alone. Bankruptcy has become a normal course of business that helps the airlines renegotiate existing contracts, especially those with organized labor. "The airline industry's labor costs have come down 40 percent since 2000," Cordle estimates, "and much of that was accomplished through bankruptcy or near-bankruptcy positions." He says one airline got concessions from pilots as the lawyers were essentially walking up the steps of the court to file. This sort of negotiation has been a standard way for airlines to deal with labor costs during hard financial times.
Mergers are the way to go, according to Cordle, in part because he views the legacies' pension obligations as unsustainable. "Mergers can be win-win-win. Win for the customer, shareholder, and employees." A merger's ultimate impact on consumers depends on the airlines involved. For example, if the two airlines have largely overlapping routes, then consumers can be harmed because the merged airline will eliminate the overlap, which reduces the total network available to passengers, according to Berry. However, if the airlines have complementary networks, then mergers have the potential to create a broader network overall for consumers. "The government looks out for this and impedes mergers where the potential harm for consumers is greater," Berry says.
No matter what changes influence the new business models, it's hard to imagine a world without airlines. U.S. airlines operate 31,000 scheduled departures ferrying an average of 2.1 million passengers each day. The Federal Aviation Administration predicts global air traffic will double by 2025. The FAA also estimates that the industry adds more than 5 percent to U.S. gross domestic product through its direct and indirect economic impacts, and is responsible for nearly 10 million jobs in industries (other than airlines) related to hospitality and travel — even though U.S. spending on air travel is less than 1 percent of GDP, and airlines directly employ just over half a million people. "There is tremendous spillover that ripples through the entire economy," Cordle says.
From the passenger's perspective, ongoing capacity cuts by the airlines will mean "more crowded aircraft, less quality of service, yet better on-time performance because there are fewer capacity bottlenecks," Cordle sums up.
Boyd is also keen to put the ever-changing airline industry into perspective: "Flying will continue to be just as uncomfortable as ever in the same seat space," he says. "We can count on continuity in that sense."
RF

READINGS
“Air Transport Association 2008 Economic Report.” Washington, D.C.: Air Transport Association, 2008.
“Airlines 101 — A Brief History of the Airline and Commercial Aircraft Industries.” Ponte Vedra Beach, Fla.: The Airline Monitor, May 2006.
Borenstein, Severin, and Nancy Rose. “How Airline Markets Work … Or Do They? Regulatory Reform in the Airline Industry.” In Rose, Nancy L. (ed.), Economic Regulation and Its Reform: What Have We Learned? Chicago: University of Chicago Press, forthcoming.
“The Impact of Recession on Air Traffic Volumes.” Montreal, Quebec: International Air Transport Association, December 2008.

Spring 2009 • Region Focus


Fleet Diversity: The Defining Characteristic

Legacy Carriers Losing Domestic
Market Share to LCCs

Selected airline fleets as of year-end 2008

directly for market share, they’re picking up each other’s
habits. Legacy carriers have started to mimic some of the
streamlined features of the LCCs. Many legacy carriers
increasingly charge for, or eliminate, the “frills” of air travel.
They have paid attention to the cost-minimizing innovations pioneered by the LCCs, like the fuel hedges that have
famously saved Southwest billions.
Some have also migrated to “rolling hubs.” Traditional
hubs schedule many planes to land and depart around the
same time during peak hours, which reduces the layovers
with which passengers must deal but leads to costly congestion. Rolling hubs, on the other hand, smooth flights over
the day rather than coordinating many flights to take off and
land around the same time. This reduces congestion and gets
planes back in the air more quickly.
As the LCCs have grown, their traffic has inevitably
accumulated in certain cities where demand is strong. As a
result, low-cost carriers increasingly operate out of hubs,
they just might not call them that. Many of the LCCs
instead call these de facto hubs “focus cities” or “gateways.”
Therefore it is something of a misnomer to say that the
LCCs operate strictly with a point-to-point model, according to Mike Boyd of Boyd Group International, an airline
forecasting firm based out of Colorado. Southwest, for
example, specifically calls itself a “point-to-point” airline,
even though Boyd estimates as much as a third of its flights
are connecting traffic. The LCCs don’t make a concerted
effort to market themselves as hub carriers, and many are
still much less reliant on hubs than the legacies.
Resorting to a partial hub system has allowed the LCCs
to offer the greater connectivity that the legacy airlines do.
This has expanded the number of markets they serve. They
have also begun to target “the most lucrative passenger, the
business traveler,” by offering more perks and frequent flier
programs, “and that’s the bread and butter of the legacies,”
according to Cordle. He estimates that business travelers make up 8 percent to 12 percent of legacy carriers' passengers but account for about 35 percent to 45 percent of their revenue, and in some cases as much as half.

28

Region Focus • Spring 2009

It looks as though hubs are here to stay, even though, by some measures, they're more expensive to run. Hubs may be the only way to serve a country of our size and composition. "A country like ours, with a lot of population centers, generates a lot of travel demand even for relatively small cities, but not always enough traffic to support a direct flight to another medium-sized town. The only way to serve all those points is to hub the traffic," says Greenslet. "The train system does that in Europe. The hub-and-spoke system does that in this country."

As the LCCs saturate their existing markets, they have two options if they want to keep growing. They can branch into small-city short-haul traffic currently served by the regional airlines — the small, 50- to 70-seat airlines that serve very small cities, often as a subsidiary of a legacy carrier. Or, they can branch into long-haul (generally defined as six or more hours) and international travel like the legacies. The LCCs can't expect to keep branching into these areas while maintaining only one or two types of jets. However, buying an array of new jets departs rather dramatically from the business model that has kept their costs so low to begin with. "Right now they're too big to go to Montgomery, Ala., and too small to go to Shanghai," Boyd says.

What this means is that the low-hanging fruit for the LCCs may be just about gone. They used their novel business model to connect markets in a way that didn't previously exist — point-to-point service between midsized cities that created a low-cost alternative for people who would otherwise drive 300 miles to their destination. In other words, the LCCs expanded overall demand instead of taking it away from their competitors. As they've grown, they've moved into big-city markets and have been largely successful at undercutting the legacies on many flights. But they won't be able to keep growing without fighting tooth and nail to take market share from the legacy carriers, especially if consumer demand continues to fall.

What's more, the cost advantages that made them so successful to begin with may be dwindling. Their planes are becoming less fuel efficient as they age. Labor costs are getting higher too: Their staffs are gaining tenure, and airline wages are determined on a graduated scale by seniority. It's not obvious what more they can do to win market share from the legacy carriers while keeping their cost advantage. "The big thing you'll continue to see is that the legacy carriers will keep pushing to lower their cost structure," says Yale University economist Steven Berry. "But the degree to which the LCCs can adopt the hub system, for example, is less certain." Still, don't be too quick to discount the innovative LCCs. Low-cost behemoth Southwest has been around since the 1970s, making it a living case study of an aging LCC, and it has only seemed to get stronger. Regarding its purportedly disappearing cost advantages, "I've been saying that about Southwest for about 30 years. So far aging has had no major effect," Greenslet says.

Landing on Common Ground

In light of the changes that have taken place, economist Severin Borenstein of the University of California at Berkeley believes a single "hybrid" airline model is emerging. "The idea that some airlines have the 'right' business model is nonsense. I think we'll see LCCs move increasingly toward hubbing, and I think we'll continue to see the legacy carriers move in the opposite direction and streamline," he says. "We're definitely seeing the two models merge."

The legacy and low-cost carriers will face some issues that both will find hard to ignore. One is the possible adoption of a federal "cap and trade" emissions control program that threatens to dramatically raise the cost of jet fuel. Another is an outdated air traffic control system that forces costly delays. Of course, economic cyclicality will continue to plague the airlines. The industry expands and contracts in line with, and at roughly twice the pace of, the overall economy. When the economy slows, so does travel demand, as businesses tighten their travel budgets and individuals opt for fewer recreational trips.

In the future, Cordle expects an airline industry that is smaller overall. "Because of excess spending and consumption in the United States since the early 2000s, with twin bubbles in stocks and housing, expenditures on air travel were inflated above long-run trend," he says. "Now we're getting back to the reality of what consumers can actually manage. When you strip away all the noise, it really means the industry will be 10 percent or so smaller."

This can take place through mergers or capacity cuts — both of which can be aided by Chapter 11 bankruptcy, to which the airlines are no stranger. Of the six legacy carriers, four have filed for bankruptcy since 2000. There have been more than 40 airline bankruptcies overall in this decade alone. It's a normal course of business that helps airlines renegotiate existing contracts, especially those with organized labor. "The airline industry's labor costs have come down 40 percent since 2000," Cordle estimates, "and much of that was accomplished through bankruptcy or near-bankruptcy positions." He says one airline got concessions from pilots as the lawyers were essentially walking up the courthouse steps to file. This sort of negotiation has been a standard way for airlines to deal with labor costs during hard financial times.

Mergers are the way to go, according to Cordle, in part because he views the legacies' pension obligations as unsustainable. "Mergers can be win-win-win. Win for the customer, shareholder, and employees." A merger's ultimate impact on consumers depends on the airlines involved. For example, if the two airlines have largely overlapping routes, then consumers can be harmed because the airlines will eliminate the overlap, which reduces the total network available to passengers, according to Berry. However, if the airlines have complementary networks, then mergers have the potential to create a broader network overall for consumers. "The government looks out for this and impedes mergers where the potential harm for consumers is greater," Berry says.

No matter what changes influence the new business models, it's hard to imagine a world without airlines. U.S. airlines make 31,000 scheduled departures ferrying an average of 2.1 million passengers each day. The Federal Aviation Administration predicts global air traffic will double by 2025. The FAA also estimates that the industry adds more than 5 percent to U.S. gross domestic product through its direct and indirect economic impacts, and is responsible for nearly 10 million jobs in industries (other than airlines) related to hospitality and travel — even though U.S. spending on air travel is less than 1 percent of GDP, and airlines directly employ just over half a million people. "There is tremendous spillover that ripples through the entire economy," Cordle says.

From the passenger's perspective, ongoing capacity cuts by the airlines will mean "more crowded aircraft, less quality of service, yet better on-time performance because there are fewer capacity bottlenecks," Cordle sums up.

Boyd is also keen to put the ever-changing airline industry into perspective: "Flying will continue to be just as uncomfortable as ever in the same seat space," he says. "We can count on continuity in that sense."
RF

READINGS
"Air Transport Association 2008 Economic Report." Washington, D.C.: Air Transport Association, 2008.

"Airlines 101 — A Brief History of the Airline and Commercial Aircraft Industries." Ponte Vedra Beach, Fla.: The Airline Monitor, May 2006.

Borenstein, Severin, and Nancy Rose. "How Airline Markets Work ... Or Do They? Regulatory Reform in the Airline Industry." In Rose, Nancy L. (ed.), Economic Regulation and Its Reform: What Have We Learned? Chicago: University of Chicago Press, forthcoming.

"The Impact of Recession on Air Traffic Volumes." Montreal, Quebec: International Air Transport Association, December 2008.

Veto Politics
Can a line-item veto reduce spending?
BY DAVID VANDENBERG

State legislative sessions often feature intense debates over appropriations bills. Both legislatures and governors have their own weapons in these battles. One of the most well known is the ability many governors have to veto specific line items in a bill. The line-item veto is often assumed to be an effective way of keeping spending under control. But whether the conventional wisdom is correct on this is still an open question. In fact, the line-item veto is a tool that isn't always used in the context we might expect — and the results can be surprising.

Forty-four of America's 50 governors have some form of the line-item veto, according to the National Conference of State Legislatures. The six states without any form of it, including North Carolina, allow their governors to veto only entire bills, not portions of them.

In the states where it exists, the line-item veto functions differently and can shift the balance of power in budget debates. Governors who have the line-item veto can eliminate portions of bills. In some cases, they can adjust spending amounts, and in others, governors can amend legislative language. Governors can sometimes use the line-item veto to preserve their budget preferences, but legislators can combat it by bundling expenses the governor doesn't want with those the governor does want. Yet line-item vetoes, if comprehensive enough, can give governors a way to thwart those efforts.

Determining whether this sort of veto can be an effective way of imposing spending discipline requires a few assumptions. The first is that politicians, like anyone in any profession, face incentives. Governors aren't necessarily less prone to them than legislators are. The line-item veto may be nothing more than an additional bargaining chip that a governor can use to go after additional spending he might want, says Samuel Baker, a former economist at the College of William & Mary.

The second assumption is that the political climate affects how the veto power is used. The line-item veto, to some extent, shifts power to the executive branch. But, as we'll see, that may not matter much. If it does, there are some important contexts in which we can expect the veto to be exercised more frequently.

Line-Item Veto and Divided Government
Political contexts usually influence usage of line-item vetoes. They are used more often when opposing parties control the executive and legislative branches and the legislature cannot override the veto, argues Douglas Holtz-Eakin, an economist who taught at Syracuse University before working at the Council of Economic Advisers during the George W. Bush administration and then heading the Congressional Budget Office. Holtz-Eakin is now president of a consulting firm in Washington, D.C.

Highly partisan environments are most conducive to use of the line-item veto, says Glenn Abney, a former Georgia State University political scientist. "The governors will often use the veto because they disagree over policy," he notes. Conversely, when one party controls both the executive and legislative branches, the partisan temperature is lower. In those situations, the item veto is less likely to be used, Abney and University of Georgia political scientist Thomas Lauth argue in a 1985 paper.

While the line-item veto shifts some power to the executive branch, governors may have good reasons not to exercise this power. For example, a governor may decline to use the veto to avoid further antagonizing lawmakers, especially if relationships with the legislature have soured, in order to preserve remaining political capital. Those relationships matter: Stable political relationships between elected officials and the state bureaucracy can be crucial and can determine state expenditure levels, economists James Dearden of Lehigh University and Thomas Husted of American University write in a 1993 paper.

The scope of line-item veto powers may determine how useful they are to governors. Only 15 of the 44 governors with line-item vetoes can adjust both dollar amounts and statutory language in legislation, and governors are most likely to use the veto when they can amend both. In their paper, Dearden and Husted argue that a governor's ability to obtain a desired budget outcome increases with the comprehensiveness of the line-item veto authority.

Line-item vetoes don't render legislators powerless, however: They can write bills in ways that make it difficult for a governor to veto them. Lawmakers also have a bargaining chip of their own: the override. But research shows that line-item vetoes are rarely overridden. Several explanations for the upholding of vetoes are possible, argue Abney and Lauth. For one, supermajorities are often required for an override, which can be hard to achieve. When overrides are difficult, the veto power is more meaningful.

Fiscal Effects of the Line-Item Veto
The veto is not always used to strike dollar amounts. In a nationwide study published in 2002, Abney and Lauth review appropriations bills from line-item veto states for the years 1993 and 1995. Governors in only 18 states used the veto in 1993, while 22 used it in 1995. In both years, the researchers show, more than 60 percent of vetoes cut language about appropriations that did not contain dollar amounts. More than 20 percent of vetoes were of language totally unrelated to appropriations.

Vetoes of legislative language can still have fiscal effects, although it is difficult to assign them a dollar value. Language and appropriations in bills are not always related. Eliminating language requiring certain state agencies to maintain specific staffing levels could lead to job cuts and resulting cost savings, for example. Yet leaving an agency free to eliminate jobs may not necessarily lead to job cuts if it finds savings elsewhere in its budget, so it's hard to prove that the line-item veto would have a direct fiscal effect in such a case. In a research project about the line-item veto in Georgia, Lauth and Catherine Reese of Arkansas State University-Jonesboro find that 79 percent of the 209 line-item vetoes used between 1975 and 2002 eliminated language that had a fiscal impact that was hard to measure in dollars.

The threat of the veto can play an important role in legislative debates. Reese and Lauth's Georgia study covers several decades. They conducted interviews of the state's seven governors prior to Sonny Perdue, its current executive. The governors told Reese and Lauth the threat of the line-item veto was an important element of their power. Indeed, infrequent use of the veto may mean that its mere threat has made actual usage unnecessary, although it's hard to be certain, Reese and Lauth say.

Such evidence should be qualified. Budget officers overwhelmingly say that a constitutional balanced budget requirement is the most important factor in promoting fiscal responsibility, Lauth claims in a 1996 paper. Both executive and legislative budget officials were surveyed, and at least 90 percent of each group cited the balanced budget requirement's importance.

Resolving this dispute, then, requires turning to the empirical evidence. The most comprehensive analysis to date is still the Holtz-Eakin study, and there, evidence that the overall level of spending actually goes down because of the line-item veto is hard to find. In his paper, Holtz-Eakin concludes that the line-item veto may influence the spending level only over the short run — particularly in regard to reducing a current budget deficit — in cases where a governor's political party does not hold a majority in the legislature. Over the long run, however, there is no statistically significant effect on the size of the budget. Instead, it seems that the line-item veto simply alters the composition of spending.

So as voters watch their legislature haggle over the budget each year, they should keep in mind the admonition Holtz-Eakin includes in his study: "There are no simple truths concerning the impact of the line-item veto."
RF

[SIDEBAR: Executive Privilege. Not all governors can use the line-item veto the same way. Whether the veto can be used to eliminate budget items or legislative language depends on where you are in the Fifth District. A table indicates, for each Fifth District jurisdiction (MD, DC, VA, WV, NC, SC), whether the veto reaches appropriations and legislative language. SOURCES: National Conference of State Legislatures; League of Women Voters of Maryland; District of Columbia Mayor's Office; Virginia Department of Planning and Budget; South Carolina Office of State Budget]

READINGS
Abney, Glenn, and Thomas P. Lauth. "Gubernatorial Use of the Item Veto for Narrative Deletion." Public Administration Review, July-August 2002, vol. 62, no. 4, pp. 492-503.

____. "The Line-Item Veto in the States: An Instrument for Fiscal Restraint or an Instrument for Partisanship?" Public Administration Review, May-June 1985, vol. 45, no. 3, pp. 372-377.

Dearden, James A., and Thomas A. Husted. "Do Governors Get What They Want? An Alternative Examination of the Line-Item Veto." Public Choice, 1993, vol. 77, no. 4, pp. 707-723.

Holtz-Eakin, Douglas. "The Line-Item Veto and Public Sector Budgets: Evidence from the States." Journal of Public Economics, August 1988, vol. 36, no. 3, pp. 269-292.

____. "The Item Veto and Fiscal Responsibility." Journal of Politics, August 1997, vol. 59, no. 3, pp. 882-892.

Lauth, Thomas P. "The Line-Item Veto in Government Budgeting." Public Budgeting & Finance, Summer 1996, vol. 16, no. 2, pp. 97-111.

Reese, Catherine C., and Thomas P. Lauth. "The Line Item Veto in Georgia: Fiscal Restraint or Inter-Branch Politics?" Public Budgeting and Finance, Summer 2006, vol. 26, no. 2, pp. 1-19.


INTERVIEW
Allan Meltzer has long been one of the most prominent
monetary economists and historians, contributing
significantly to our knowledge of how to achieve price
stability and economic growth through his academic
work, his role as a policy adviser, and his popular
writings. In 2004, he published the first volume of his
history of the Federal Reserve System, which covered
the period from the Fed’s founding to its accord with
the Treasury Department in 1951. The long-awaited
second volume of his history will appear in two parts
this autumn.

Meltzer has been quite critical of the Federal Reserve’s
actions immediately preceding and during the financial
crisis. In his view, the failure of the Federal Reserve and
other agencies to curb the assumption that some institutions were “too big to fail” played a major role in
fueling the crisis. In addition, he believes that many
of the Fed’s lending programs, initiated since the crisis
began, were misguided, threatening the Fed’s independence and risking its ability to control inflation over the
long run.
Since 1957, Meltzer has taught at Carnegie Mellon University (then known as the Carnegie Institute of Technology). He co-founded the Carnegie-Rochester Conference on Political Economy, where scholars from academia, government, and business present papers on important public policy issues, and the Shadow Open Market Committee, which comments upon the policies of the Federal Reserve. He also served as an acting member of the Council of Economic Advisers during the Reagan administration and was chair of the International Financial Institution Advisory Commission, better known as the "Meltzer Commission," during the 1990s. In addition, Meltzer is a visiting scholar at the American Enterprise Institute for Public Policy Research in Washington, D.C.

Aaron Steelman interviewed Meltzer at his office at Carnegie Mellon on May 7, 2009.

[PHOTO CREDIT: David Aschkenas. Reprinted with permission of Carnegie Mellon University, Pittsburgh, Pennsylvania]

RF: What's the status of the second volume of your history of the Federal Reserve?

Meltzer: I just completed the last pages of the manuscript. The book will appear in October in two parts. I got a number of comments from readers, but the main comment was that the book is 1,400 pages long and we don't print 1,400-page books. So we ended up dividing it into two volumes: 2.1 and 2.2, which will come out simultaneously.

RF: Chronologically, how far did you go with the second volume?

Meltzer: The second volume goes to 1986. I chose 1986 because it was pretty clear by then that rampant inflation was over and that expected inflation was low. I have some comments about the current episode, but the editor asked me to include those as an epilogue. The most important message of the epilogue is that you won't get rid of crises until you get rid of "too big to fail."

RF: What do you think could reasonably be done to reduce the scope of the federal financial safety net?

Meltzer: How would I get rid of too big to fail? I would have bank reserves rise with the size of the bank. I think it's in the

second is it would further remove responsibility from the
banks. A regulator of last resort would worsen the too big to
fail problem.
I believe there are a few relatively straightforward rules
of regulation. First, regulation is written by lawyers and
bureaucrats, and over time markets learn to circumvent regulation. The Basel Accord is a great example of that. Banks
were supposed to hold more capital in order to take on more
risk. But, instead, they took those risks off their balance
sheets and didn’t hold more capital. In that case, both the
regulation and the circumvention failed. Regulation Q of the
Glass-Steagall Act, which prevented banks from paying
interest on demand deposits, is another good example. We
wouldn’t have money market funds if it weren’t for
Regulation Q. There are lots of examples of markets circumventing regulation — and not only in banking and finance.
RF: In addition to addressing the too big to fail problem,
The second rule is that regulation can be beneficial when
what other current policy issues do you think are particprivate costs and social costs are
ularly important?
not aligned. For instance, there is
We cannot continue to have a arguably a good case for regulation
Meltzer: There are quite a few.
of banks if you have deposit insurOne of the most important ones is
system where profits
ance — otherwise, banks might
to get rid of Fannie Mae and
take excessive risk knowing that
Freddie Mac. The only thing they
are privatized and losses
they will not bear the full costs
do is to subsidize mortgages. We
associated with a failure. The third
should put that subsidy on the
are socialized.
is that if regulations are not
budget. That’s where it belongs.
circumvented, the reason is
Having Fannie and Freddie do this
because they are either beneficial or they are enforced with
encourages corruption and encourages excessive zeal to help
Draconian measures.
particular parts of the housing system.
I also would make the Federal Deposit Insurance
RF: Looking at the Fed’s actions over the past year or so,
Corporation Improvement Act (FDICIA) applicable to all
how well do you think it has done handling the crisis
financial institutions. The purpose was to have structured
early intervention — that is, to close down commercial
once it was upon us?
banks before they used all of their capital. Then, the shareholders could be made to bear the losses and the institution
Meltzer: In the history of the Federal Reserve System, there
could be sold. FDICIA was supposed to do that, but the
are three enormous mistakes, in my opinion. The first one
regulators haven’t followed through effectively. They
was the Great Depression, of course. The second one was
should — and FDICIA should be extended to investment
the Great Inflation of the 1970s. And the third one was the
banks as well.
failure of Lehman Brothers in September 2008. As I said, in
The next one is less of a specific policy proposal and more
principle, I’m in favor of permitting institutions to fail when
of a general recommendation. We should pay more attention
they have acted incautiously and are insolvent as a result.
to the fat tails in the distribution of risk. What are those fat
But this was a failure that occurred after 30 years of bailing
tails? Things like the Russian default, the failure of Long
out just about every institution of any size, with no prior
Term Capital Management, the enormous climb in housing
announcement that the policy had changed. Suddenly the
prices. Our current models of risk distribution — and how
Fed changed what had been the standard procedure and
they can affect the economy — don’t take adequate account
allowed a big firm to fail. That was a mistake. It created
of them.
uncertainty in financial markets. And then, of course, the
Fed changed course shortly afterward, back to its longRF: What do you think of the idea of establishing a
established policy of bailing out institutions.
systemic risk regulator?
RF: So would you have recommended allowing Bear
Stearns to fail in March 2008? That possibly could have
Meltzer: The administration’s proposal to make the Fed a
sent a signal to the market that policy had changed
super-regulatory body is a mistake for two reasons. The first
and, as a result, the failure of Lehman later in the year
is the Fed has a poor record of anticipating crises. The
public interest to say, if you want to be big, you must hold
more reserves so that you will be forced to bear a loss if you
make a mistake. We cannot continue to have a system where
profits are privatized and losses are socialized.
I should point out that much of this is in Walter Bagehot,
of course, who was an early rational expectationist. He said
that if central banks are going to lend, they should do so at a
penalty rate against good collateral and that this policy
should be made well-known in advance of a crisis. That
system worked for the better part of the century. There were
banking problems and failures along the way, but they never
spread. The reason was bankers knew they had to hold
collateral to protect themselves, and so they did. We have
abandoned that system, to our detriment.

Allan Meltzer

Spring 2009 • Region Focus

33

wouldn’t have come as such a
surprise.

Allan Meltzer
➤ Present Position

Meltzer: I don’t think you change
policy in a crisis. That is likely to
make things worse. At the same time,
I don’t think the Fed should have
engaged in many of the fiscal actions
it has taken. I believe that the Fed
has sacrificed its independence. It
hasn’t always been independent, but
Volcker and, to some extent,
Greenspan built up independence
for the institution. That has been
squandered in the current crisis. The
Fed has become a financing arm of
the Treasury Department. Now that
it has alerted Congress that it is willing to go along with just about
anything, it is going to have a hard
time digging its way out.
RF: Do you think the recent
actions of the Fed have reduced
its credibility as an inflation
fighter and that it will have more
difficulty pursuing policies consistent with price stability when the
economy rebounds?

The Allan H. Meltzer University
Professor of Political Economy, Carnegie
Mellon University
➤ Education
A.B. (1948), Duke University; Ph.D.
(1958), University of California at
Los Angeles
➤ Selected Publications
Author of A History of the Federal Reserve:
Volume 1 (2004) and A History of the
Federal Reserve: Volumes 2.1 and 2.2 (2009);
author or co-author of numerous papers
in such journals as the American Economic
Review, Journal of Political Economy,
Quarterly Journal of Economics, Journal of
Law & Economics, Journal of Finance, and
Journal of Money, Credit and Banking
➤ Awards and Offices
Co-organizer, Carnegie-Rochester
Conference on Political Economy;
co-founder and co-chairman, Shadow
Open Market Committee; past president,
Western Economic Association;
Distinguished Fellow, American
Economic Association

Meltzer: Yes. I’ve had this discussion with members of the
Board of Governors and some members of the Fed’s staff.
They argue that the lending programs have been structured
in a way that will permit them to remove liquidity from the
system when needed. I have no doubt, as I’ve told them,
about their technical ability to do that. It’s the political
problem. I just don’t see them overcoming the political
problem. Where will the political problem come from?
Probably Congress and the administration, but also the business community. They’re going to say, “The economy is just
beginning to recover. And you’re going to tighten policy
now?” It’s not going to be an easy sell.
Consider monetary policy during the 1970s. The people
at the Fed were not idiots. They knew what they were doing.
They would swear to themselves that they were not going to
let inflation get out of line. But then the unemployment rate
rose, and all of that went out the window. They expanded the
money supply rapidly.
Volcker was finally able to put a stop to it for two reasons.
First, by then inflation had become such a problem that
everyone knew something had to be done about it; there was
considerably more popular and political support for taking
a hard line against inflation. Second, he demonstrated
enormous courage when the tightening was accompanied by

34

Region Focus • Spring 2009

very high and rising unemployment.
In January 1982, when the recession was at its worst point and new
construction had basically stopped,
Volcker gave a talk to a home
builders association. Basically, he
told them, “I know you’re hurting,
but you have to understand, if we
don’t do this now and finish it, we’re
going to have to do it again and it
will be even harder the next time
because we gave up on this one.”
They gave him a standing ovation,
not because they admired his policy
but because they admired his
courage. He kept raising rates when
everyone thought he would not have
the will to do so. For instance, I
recall Jim Tobin saying that it would
take 10 years to get rid of inflation,
when in fact it took much less time.
RF: Do you believe that the Fed
was too easy for too long following the recession of 2001?

Meltzer: Well that’s one where I
have some scars. I was a visitor at
the Fed in 2003. Alan Greenspan
invited me down to talk to him about deflation, which he
was quite concerned about at the time. He had read and
commented on the first volume of my book and had some
questions. I told him that there had been six periods of
deflation in Federal Reserve history that didn’t hurt anything and one that did, the Great Depression. I told him
that I did not think the evidence suggested that deflation —
especially a harmful deflation — was likely. For instance, I
pointed out that countries that have large budget deficits,
active money growth, and the probability of a declining
exchange rate are very unlikely to experience deflation. So I
was very much opposed to the policy at that point and tried
to talk him into adopting a more restrained policy. But I was
not able to persuade him. Having said that, let me also say
that while I think the blame he gets for that is correct,
I think that it’s been overdone. He didn’t tell the bankers to
use that money to buy bad mortgages.
RF: Many commentators — and even some economists
— have argued that the financial crisis was the result of
a fundamental failure of the market system. What is
your opinion of that claim?
Meltzer: I have had several journalists call to ask me about
that issue. I think the answer is obviously no. Just look

continued funding of the International Monetary Fund
(IMF). They agreed to vote for funding if the president
would agree to have a commission to study the effectiveness
of the IMF and similar organizations. So that’s how it got
started. Its official name was the International Financial
Institution Advisory Commission but became known as the
Meltzer Commission because I chaired it. (I was not, by
the way, the first choice to chair it. Originally, I was just
supposed to be a member, but a couple of other people could
not do it, so I wound up taking it on.) I requested papers
on a number of topics and the people who had worked on
them would explain to the other Commission members
RF: How did you and Karl Brunner come up with the
what the relevant issues were.
idea of creating the Shadow Open
For example, I knew very little
Market Committee and what were
The World Bank is full of
about the Bank for International
your goals for it?
Settlements — what it did or
people who want to do good
whether it was a good thing. We
Meltzer: We did that at a time when
eventually issued a report with a
wage and price controls recently had
things for the poor people
series of recommendations.
been adopted. Karl and I organized
of the world. But they don’t
The major recommendation
about a dozen economists to sign a
that we made regarding the IMF
statement that we published in the
understand which things
was that it needed to be more
Wall Street Journal saying that the
discriminating in allocating
controls were a bad idea and would
will help them and which
funds. If countries adopt good
not work. To get that statement
things won’t.
policies, the IMF should consider
written — this was before fax
helping them. If they don’t, it
machines and personal computers —
should not. Also, the loans that are issued should be issued at
we had to spend hours on the telephone. Any time we had to
a penalty rate. That gives countries a strong incentive to purmake a change to accommodate somebody, we had to call
sue wise policies and avoid the need for IMF assistance in
the others and tell them what the change was. Obviously
the first place. The banks were a much harder problem
that was not a very good way to do things. We decided that
because their record of accomplishment is very poor. Many
we needed to have a meeting.
countries that have received significant funding from them
What was our objective? Karl and I were both disturbed
have not fared very well. Meanwhile, others that have gotten
— I knew I was very disturbed — because of the way the
relatively little funding — such as China — have seen
problem of inflation was being discussed generally. For
millions of their citizens lifted out of poverty as they
instance, there was a lot of talk that we either needed to go
have liberalized and adopted more market-oriented policies,
back to the gold standard on the one hand or that inflation
something that the World Bank does not do a good job of
wasn’t really anything to worry about on the other. We
encouraging.
didn’t think these views represented anything close to the
The report didn’t say this, but I think the World Bank
consensus of the good academic work that was being done
should close. What the report said instead is that there
then. So we put together a group of both business and acashould be an independent audit to find out which programs
demic economists, and we tried to inform people and build
work and which do not. Then it could either improve the
a constituency for a quite different policy. That’s how we
ones that don’t work or get rid of them. The World Bank is
started. And we were fortunate in that the New York Times,
full of people who want to do good things for the poor peoWashington Post, and Wall Street Journal all gave that first
ple of the world. But they don’t understand which things will
meeting a lot of attention. So we were well-launched. The
help them and which things won’t. They do not generally
committee has continued to meet since then — although I
appreciate that the only system that produces growth and
left in 1990 — and I think it has enjoyed some success in
freedom is capitalism. Also, they have no follow-up on their
pushing the debate about inflation in the correct direction.
programs. Their whole system is geared to the idea that a
program is successful once the final set of funds has been
RF: What was the Meltzer Commission? What was its
discharged. So if they’re building schools in Africa, when the
purpose? And which conclusions did it arrive at?
school is built, they declare it a success. But they don’t know
whether there’s a road to connect to the school, how many
Meltzer: The Commission got started mainly because in
kids go to the school, or if they are learning anything.
1998 some members of Congress were not in favor of
around. Capitalism has spread from western Europe and
North America to the rest of the world. Now, why is that?
It’s the only system man has come up with that provides
both freedom and growth. No other system does as well. All
of the other systems are generally someone’s idea of utopia.
But it’s not everyone’s idea of utopia. And when people look
at the recent crisis and say that the market failed, they
are not getting to the right issue. The market didn’t fail.
What failed were the incentives that we — human beings —
created. At the top of that list is too big to fail.

Spring 2009 • Region Focus

35

wouldn’t have come as such a
surprise.

Allan Meltzer
➤ Present Position

Meltzer: I don’t think you change
policy in a crisis. That is likely to
make things worse. At the same time,
I don’t think the Fed should have
engaged in many of the fiscal actions
it has taken. I believe that the Fed
has sacrificed its independence. It
hasn’t always been independent, but
Volcker and, to some extent,
Greenspan built up independence
for the institution. That has been
squandered in the current crisis. The
Fed has become a financing arm of
the Treasury Department. Now that
it has alerted Congress that it is willing to go along with just about
anything, it is going to have a hard
time digging its way out.
RF: Do you think the recent
actions of the Fed have reduced
its credibility as an inflation
fighter and that it will have more
difficulty pursuing policies consistent with price stability when the
economy rebounds?

The Allan H. Meltzer University
Professor of Political Economy, Carnegie
Mellon University
➤ Education
A.B. (1948), Duke University; Ph.D.
(1958), University of California at
Los Angeles
➤ Selected Publications
Author of A History of the Federal Reserve:
Volume 1 (2004) and A History of the
Federal Reserve: Volumes 2.1 and 2.2 (2009);
author or co-author of numerous papers
in such journals as the American Economic
Review, Journal of Political Economy,
Quarterly Journal of Economics, Journal of
Law & Economics, Journal of Finance, and
Journal of Money, Credit and Banking
➤ Awards and Offices
Co-organizer, Carnegie-Rochester
Conference on Political Economy;
co-founder and co-chairman, Shadow
Open Market Committee; past president,
Western Economic Association;
Distinguished Fellow, American
Economic Association

Meltzer: Yes. I’ve had this discussion with members of the
Board of Governors and some members of the Fed’s staff.
They argue that the lending programs have been structured
in a way that will permit them to remove liquidity from the
system when needed. I have no doubt, as I’ve told them,
about their technical ability to do that. It’s the political
problem. I just don’t see them overcoming the political
problem. Where will the political problem come from?
Probably Congress and the administration, but also the business community. They’re going to say, “The economy is just
beginning to recover. And you’re going to tighten policy
now?” It’s not going to be an easy sell.
Consider monetary policy during the 1970s. The people
at the Fed were not idiots. They knew what they were doing.
They would swear to themselves that they were not going to
let inflation get out of line. But then the unemployment rate
rose, and all of that went out the window. They expanded the
money supply rapidly.
Volcker was finally able to put a stop to it for two reasons.
First, by then inflation had become such a problem that
everyone knew something had to be done about it; there was
considerably more popular and political support for taking
a hard line against inflation. Second, he demonstrated
enormous courage when the tightening was accompanied by

34

Region Focus • Spring 2009

very high and rising unemployment.
In January 1982, when the recession was at its worst point and new
construction had basically stopped,
Volcker gave a talk to a home
builders association. Basically, he
told them, “I know you’re hurting,
but you have to understand, if we
don’t do this now and finish it, we’re
going to have to do it again and it
will be even harder the next time
because we gave up on this one.”
They gave him a standing ovation,
not because they admired his policy
but because they admired his
courage. He kept raising rates when
everyone thought he would not have
the will to do so. For instance, I
recall Jim Tobin saying that it would
take 10 years to get rid of inflation,
when in fact it took much less time.
RF: Do you believe that the Fed
was too easy for too long following the recession of 2001?

Meltzer: Well that’s one where I
have some scars. I was a visitor at
the Fed in 2003. Alan Greenspan
invited me down to talk to him about deflation, which he
was quite concerned about at the time. He had read and
commented on the first volume of my book and had some
questions. I told him that there had been six periods of
deflation in Federal Reserve history that didn’t hurt anything and one that did, the Great Depression. I told him
that I did not think the evidence suggested that deflation —
especially a harmful deflation — was likely. For instance, I
pointed out that countries that have large budget deficits,
active money growth, and the probability of a declining
exchange rate are very unlikely to experience deflation. So I
was very much opposed to the policy at that point and tried
to talk him into adopting a more restrained policy. But I was
not able to persuade him. Having said that, let me also say
that while I think the blame he gets for that is correct,
I think that it’s been overdone. He didn’t tell the bankers to
use that money to buy bad mortgages.
RF: Many commentators — and even some economists
— have argued that the financial crisis was the result of
a fundamental failure of the market system. What is
your opinion of that claim?
Meltzer: I have had several journalists call to ask me about
that issue. I think the answer is obviously no. Just look

continued funding of the International Monetary Fund
(IMF). They agreed to vote for funding if the president
would agree to have a commission to study the effectiveness
of the IMF and similar organizations. So that’s how it got
started. Its official name was the International Financial
Institution Advisory Commission but became known as the
Meltzer Commission because I chaired it. (I was not, by
the way, the first choice to chair it. Originally, I was just
supposed to be a member, but a couple of other people could
not do it, so I wound up taking it on.) I requested papers
on a number of topics and the people who had worked on
them would explain to the other Commission members
RF: How did you and Karl Brunner come up with the
what the relevant issues were.
idea of creating the Shadow Open
For example, I knew very little
Market Committee and what were
The World Bank is full of
about the Bank for International
your goals for it?
Settlements — what it did or
people who want to do good
whether it was a good thing. We
Meltzer: We did that at a time when
eventually issued a report with a
wage and price controls recently had
things for the poor people
series of recommendations.
been adopted. Karl and I organized
of the world. But they don’t
The major recommendation
about a dozen economists to sign a
that we made regarding the IMF
statement that we published in the
understand which things
was that it needed to be more
Wall Street Journal saying that the
discriminating in allocating
controls were a bad idea and would
will help them and which
funds. If countries adopt good
not work. To get that statement
things won’t.
policies, the IMF should consider
written — this was before fax
helping them. If they don’t, it
machines and personal computers —
should not. Also, the loans that are issued should be issued at
we had to spend hours on the telephone. Any time we had to
a penalty rate. That gives countries a strong incentive to purmake a change to accommodate somebody, we had to call
sue wise policies and avoid the need for IMF assistance in
the others and tell them what the change was. Obviously
the first place. The banks were a much harder problem
that was not a very good way to do things. We decided that
because their record of accomplishment is very poor. Many
we needed to have a meeting.
countries that have received significant funding from them
What was our objective? Karl and I were both disturbed
have not fared very well. Meanwhile, others that have gotten
— I knew I was very disturbed — because of the way the
relatively little funding — such as China — have seen
problem of inflation was being discussed generally. For
millions of their citizens lifted out of poverty as they
instance, there was a lot of talk that we either needed to go
have liberalized and adopted more market-oriented policies,
back to the gold standard on the one hand or that inflation
something that the World Bank does not do a good job of
wasn’t really anything to worry about on the other. We
encouraging.
didn’t think these views represented anything close to the
The report didn’t say this, but I think the World Bank
consensus of the good academic work that was being done
should close. What the report said instead is that there
then. So we put together a group of both business and acashould be an independent audit to find out which programs
demic economists, and we tried to inform people and build
work and which do not. Then it could either improve the
a constituency for a quite different policy. That’s how we
ones that don’t work or get rid of them. The World Bank is
started. And we were fortunate in that the New York Times,
full of people who want to do good things for the poor peoWashington Post, and Wall Street Journal all gave that first
ple of the world. But they don’t understand which things will
meeting a lot of attention. So we were well-launched. The
help them and which things won’t. They do not generally
committee has continued to meet since then — although I
appreciate that the only system that produces growth and
left in 1990 — and I think it has enjoyed some success in
freedom is capitalism. Also, they have no follow-up on their
pushing the debate about inflation in the correct direction.
programs. Their whole system is geared to the idea that a
program is successful once the final set of funds has been
RF: What was the Meltzer Commission? What was its
discharged. So if they’re building schools in Africa, when the
purpose? And which conclusions did it arrive at?
school is built, they declare it a success. But they don’t know
whether there’s a road to connect to the school, how many
Meltzer: The Commission got started mainly because in
kids go to the school, or if they are learning anything.
1998 some members of Congress were not in favor of
around. Capitalism has spread from western Europe and
North America to the rest of the world. Now, why is that?
It’s the only system man has come up with that provides
both freedom and growth. No other system does as well. All
of the other systems are generally someone’s idea of utopia.
But it’s not everyone’s idea of utopia. And when people look
at the recent crisis and say that the market failed, they
are not getting to the right issue. The market didn’t fail.
What failed were the incentives that we — human beings —
created. At the top of that list is too big to fail.

Spring 2009 • Region Focus

35


Recently, there has been interest in pushing for better evaluations of the World Bank and the African Development Bank, and that’s a very positive development. So I think we’re having some sort of slow, long-term effect on the banks. But I believe they need to move faster in telling countries that if you want to grow, open up to the world market.

RF: During the late 1970s and early 1980s, you did quite a bit of work on public choice questions. In particular, you and Scott Richard published an influential paper in the Journal of Political Economy titled “A Rational Theory of the Size of Government.” What were the principal arguments of that paper?

Meltzer: I think that paper has probably gotten more attention by other academics than any paper I have ever written, including my work on money, which has been the focus of most of my professional life. The paper says that the principal factor determining the size of government is the distribution of income. In a system of majority rule, the voter with median income — not necessarily median ideological views — is decisive. Voters with income above the median favor lower taxes and less redistribution, while those with income below the median favor higher taxes and more redistribution. There are shocks, both political and economic, that can change the position of the median voter and, as a result, public policy. For instance, the expansion of the right to vote in the late 19th and early 20th centuries greatly increased the number of voters with low income, shifting the decisive voter down the income distribution. This, we argue, was one of the big reasons why taxes and government grew during the 20th century.

Scott Richard and I wrote another paper on how redistribution is actually carried out in the United States. Specifically, we wanted to explain why we have never adopted the negative income tax, despite it being a popular idea with economists and arguably the most efficient way to transfer resources to poor people. The answer we gave is that the decisive voter believes the amount that people work can be increased by giving in-kind benefits rather than cash benefits. If you look at the welfare system, for example, the major cash benefits are issued in the form of unemployment compensation and pensions to senior citizens. In short, those benefits go to people who have worked. But for those who do not work or have not worked, we give food stamps, housing subsidies, and a variety of other transfers — but we don’t give cash. And even when we have something that’s a modified version of the negative income tax, such as the earned income tax credit, it goes to people who work.

RF: You have served in the government on a couple of occasions. Do you think that policymakers pay much attention to the advice they solicit?

Meltzer: It very much depends on the politician. For example, Nixon didn’t care much about economics. He really relied on George Shultz to a considerable extent. As budget director, George got to learn what Nixon’s priorities and preferences were and he made a lot of the decisions based on that, without consulting Nixon on specific questions because Nixon simply wasn’t interested. Gerry Ford, who I got to know quite well, was entirely different. First of all, he knew the budget inside and out because he had been in Congress. But he also listened to his advisers. He took what they said into consideration and was willing to do what he thought was right, even if it cost him some political support. Reagan was a slightly different case. He may not have known the details of a piece of legislation as well as, say, Ford. But he had strong convictions and if the goals and likely effects of a bill coincided with what he believed, he would get behind it even if it was unpopular. RF
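The median-voter mechanism Meltzer describes can be sketched numerically. This is an illustrative toy, not the Meltzer-Richard model itself: the income figures and the mean-to-median summary statistic are assumptions chosen only to show the franchise-extension effect he mentions.

```python
import statistics

# Stylized illustration of the Meltzer-Richard intuition (all numbers invented):
# under majority rule the voter with *median* income is decisive, so extending
# the franchise to lower-income citizens moves the decisive voter down the
# income distribution and raises the demand for redistribution.
incomes = [10, 20, 30, 50, 90, 150, 350]   # whole population, sorted by income

voters_limited = incomes[3:]    # franchise restricted to the higher incomes
voters_universal = incomes      # universal franchise

def redistribution_pressure(voters, population):
    # One common summary: mean income relative to the decisive (median) voter's
    # income. The further the median voter sits below the mean, the more
    # redistribution majority rule tends to deliver.
    return statistics.mean(population) / statistics.median(voters)

print(redistribution_pressure(voters_limited, incomes))    # below 1
print(redistribution_pressure(voters_universal, incomes))  # 2.0
```

Extending the franchise leaves mean income unchanged but halves the decisive voter's income in this example, so measured pressure for redistribution rises, which is the paper's explanation for government growth in the 20th century.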

✧


ECONOMIC HISTORY
Sport of Kings: Horse Racing in Maryland
BY BETTY JOYCE NASH

Attendance at the Preakness Stakes in Baltimore fell by 30 percent last May compared to the previous year, yet total wagering on race day rose to nearly $87 million, an 18 percent jump over 2008. That was unusual. Industry watchers called it an anomaly because odds favored the winner, Rachel Alexandra, a thoroughbred filly, and 85 years had passed since a filly had crossed the finish line first at the Pimlico Race Track. The race is usually Maryland’s biggest one-day sports event.
But Pimlico’s future is in limbo. Owner Magna Entertainment is in bankruptcy, and the state Legislature has authorized the governor to use eminent domain, if necessary, to keep the race in Baltimore. The 2009 racing schedule has also been curtailed from 31 to 20 days. Pimlico’s plight illustrates an industry already dogged by sparse attendance and revenues dependent on slot machine gambling. Since the 1970s, horse racing has competed not only with alternative entertainment but also with gambling via state lotteries and, later, casinos and racetrack casinos. While those “racinos” recently won legislative approval in Maryland, bidding for casinos and construction there are off to a slow start. In neighboring Pennsylvania, Delaware, and West Virginia, however, the racinos are thriving, stealing business from Maryland tracks.

The once-vibrant racing industry monopolized legal gambling until the 1970s, but now it faces eroding revenues because of competition from other types of wagering.

PHOTOGRAPHY: MARYLAND HORSE BREEDERS ASSOCIATION

Patrons of the Turf

Horse racing today is thoroughly dependent on wagering. Portions of the gambling money provide funds for owners and trainers, and indirectly for breeders, since the value of a horse can be traced to expectations about its performance.
Of the many milestones in Maryland racing, perhaps the biggest was the introduction of the “French Mutuel” machine at Pimlico in 1873. The method paid in proportion to the total amount bet, and it dominates horse racing today. Maryland’s racing legacy also includes early off-track betting parlors as well as 19th century government incentives to build the historic Pimlico track.
Maryland and Virginia were the cradles of racing in the American colonies. Colonial governors, appointed by the King of England, imported the best-bred mares and stallions from the mother country. One mare competed so well she was barred from racing in Virginia in the 1700s, says Joe Kelly, a newspaperman who covered horse racing for the better part of the 20th century. Her name was Selima, and Laurel Racetrack named a race in her honor.
Overland races known as steeplechases were so named because riders raced from church steeple to steeple, and people wagered in a “my horse can beat your horse” fashion. George Washington’s diary noted wins and losses on his trips to race in Maryland, according to Joseph Challmes in The Preakness: A History.

War Admiral, a son of Man o’ War, was heavily favored when he raced Seabiscuit at Pimlico in 1938 but came up short by four lengths.


[Chart: Total Betting for Maryland Thoroughbred Racing, in $ millions, 1950-2007. Source: Maryland Racing Commission]

Racing of all kinds, steeplechase and flat track, can be found throughout the District today. That includes country-day races where people don’t bet, for example the Colonial and Carolina Cups in Camden, S.C., well-known and well-attended steeplechases held in the fall and spring. There are
also many on-track and off-track betting locations in
Maryland, Virginia, and West Virginia. Colonial Downs in
Virginia offers a racing summer season as well as off-track
betting and simulcasts. At West Virginia’s Charles Town and
Mountaineer Park tracks, there’s slot machine gambling.
Portions of gambling proceeds go to fund purses, and the
bigger purses bring in better horses, owners, and breeders.

The Dinner Party
Maryland Gov. Oden Bowie, a horseman himself, attended a
dinner party after the Saratoga races in New York in 1868. As
Baltimore rebuilt from the Civil War and grew, the city was
in a race of its own, vying with New York for economic
supremacy on the East Coast, Challmes says. The Baltimore
& Ohio Railroad rivaled New York’s Erie Canal when the
B&O became the nation’s first public commercial and
freight railroad. The city was known for other “firsts” such as
the Baltimore clipper ships that sped around the globe
returning with exotic cargo. This competition extended to
racing, already thriving in New York and Chicago.
Bowie used his own track on his plantation to test the
speed of his horses. He had also managed to hang on to his
wealth through the Civil War and regularly raced his horses
at Saratoga. When a wealthy land baron, the owner of a famous horse named Preakness, suggested a winner-take-all sweepstakes at this dinner party, Bowie persuaded the group to hold it in Baltimore.
It was a classic case of 19th century economic development. “Bring in wealthy people, they spend money,” Challmes
says in a telephone interview. It would be Maryland’s first
officially sanctioned race since 1859. But there was no race
course because the city was still remaking itself after the war.
The Maryland Jockey Club, formed in 1743 and now owned
by Magna Entertainment, chose a 70-acre site called Pimlico.
The track was partly funded by the state ($35,000) and the
city ($25,000) along with another $55,000 in private contributions, equivalent to about $1.9 million in 2008 dollars.
There were 4,000 seats in the grandstand.
In October 1870, the crowds arrived in force for a full race
day that included the “Dinner Party Stakes.” The stakes
offered a purse of $19,000, one of the biggest ever, $320,000
in 2008 dollars. The name of the winning horse was


Preakness. And from that race, the 1.5 mile Preakness Stakes
for 3-year-olds began in the spring of 1873.
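As a quick consistency check, the two 2008-dollar conversions quoted above imply nearly the same inflation factor for 1870 dollars; the article does not name the price index it used, so only the ratios of its own figures are computed here.

```python
# Figures from the article; only the ratios are computed here.
purse_1870, purse_2008 = 19_000, 320_000     # Dinner Party Stakes purse
cost_1870, cost_2008 = 115_000, 1_900_000    # Pimlico: $35k state + $25k city + $55k private

purse_factor = purse_2008 / purse_1870
cost_factor = cost_2008 / cost_1870
print(round(purse_factor, 1), round(cost_factor, 1))   # both close to 17
```

That the two factors agree to within a few percent suggests the same index year and series were applied to both 1870 amounts.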
The most enduring legacy of that 1873 Preakness may
have been the popularity of the parimutuel betting machine
that paid winners in proportion to the total amount they
wagered. The machines allowed small bets, and would come
to dominate horse wagering, even when other forms of betting were outlawed in some states. Previously, people had
bought into betting pools or placed bets with bookmakers,
which often led to corruption.
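The parimutuel method itself can be sketched in a few lines. This is a hypothetical illustration: the 15 percent takeout rate, the horse names, and the bet amounts are invented, and real pools split wagers across separate win, place, and show pools.

```python
# Minimal parimutuel sketch (hypothetical takeout and bets): everyone's money
# goes into one pool, the track keeps a fixed takeout, and the remainder is
# paid to those who backed the winner, in proportion to what each wagered.
def parimutuel_payouts(bets, winner, takeout=0.15):
    """bets maps horse -> {bettor: amount}; returns payouts to winning bettors."""
    pool = sum(amt for horse in bets.values() for amt in horse.values())
    distributable = pool * (1 - takeout)        # track keeps the takeout
    winning_pool = sum(bets[winner].values())   # total wagered on the winner
    return {bettor: distributable * amt / winning_pool
            for bettor, amt in bets[winner].items()}

bets = {"Survivor": {"alice": 10.0, "bob": 30.0},   # names and amounts invented
        "Rival":    {"carol": 60.0}}
payouts = parimutuel_payouts(bets, winner="Survivor")
# pool = 100, distributable = 85, winning pool = 40:
# alice's share is 85 * 10/40 = 21.25, bob's is 85 * 30/40 = 63.75
```

Because payouts depend only on the pool's final composition, odds are set by the bettors themselves rather than by a bookmaker, which is why the method resisted the corruption associated with the older pools.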
Maryland boasted many famous horsemen, and from
about 1878 through 1882, George Lorillard of the Lorillard
Tobacco family dominated Maryland racing. As Lorillard’s
health declined and he dispersed his stables, New York was
pulling ahead as the hub of racing on the East Coast, offering more tracks and better purses. The Preakness even
moved to New York and stayed until 1909, when that state,
among others, banned betting in a nationwide reform movement (that ultimately prohibited alcohol as well). Maryland
and Kentucky were the only states where you could gamble
in that era, says Raymond Sauer, a sports economist at
Clemson University.
In 1909, the Preakness returned to Pimlico, and built its
reputation through horses like Sir Barton. In 1919, he
became the first to win the Triple Crown: the Kentucky
Derby, the Preakness, and the Belmont Stakes. Yet the
Pimlico race still wasn’t widely known outside of Maryland.
“The Preakness in 1920 was not really on the map,”
Challmes says. “Most of the early Derby winners did not find
their way to the Preakness.”
That was about to change. In 1920, one of the most
famous race horses in history ran not in the Derby but in the
Preakness, in his first start as a 3-year-old. Man o’ War was
based on the Eastern Shore of Maryland. The horse was also
the grandsire of the acclaimed Seabiscuit, who would
become a racing star during the Depression. “After that, it
[the Preakness] attracted so much attention and press that it
became the normal thing, where the Derby winner would
come to the Preakness,” Challmes says. And by the late
1920s, the Triple Crown had evolved into the prestigious
racing event.
Perhaps Pimlico’s biggest day came in 1938 when the
track hosted a match between Seabiscuit and a son of
Man o’ War, War Admiral, the favorite. Forty thousand
people filled the stands and 40 million tuned in to the radio.
Seabiscuit won by four lengths.
Racing in those days was dominated by stables such as
Sagamore Farms, which in 1925 was given over to horse
breeding by the Bromo-Seltzer magnate. The farm passed to
his grandson Alfred Vanderbilt, the racing dynamo who
arranged the Seabiscuit versus War Admiral match and at one
time served as president and owner of Pimlico. Vanderbilt
cultivated and bred champions such as Native Dancer (a
Belmont and Preakness winner in 1953).
Maryland racing was some of the best in the country, says
Doug Reed, director of the University of Arizona Racetrack

Industry Program. “It was a state rich in history and horse
breeding.”

Racing’s Debt: Wagering
While gambling sustains horse racing today, tracks also made money on admission, parking, and food and beverage sales until television, radio, and the Internet expanded the audience. Off-track betting, however, is relatively new. New York was one of the first states to allow off-track betting in the 1970s, a practice since adopted by virtually all race tracks.
“In the ’70s and ’80s you were a standalone race track,”
Reed says. “The revenue I can get from you at the track is
different than what I would get from you far away.”
Today gambling provides most of the revenue in racing, about 90 percent. Yet the proliferation of wagering has
hurt racing. “You just can’t keep oversupplying a product,”
Reed says. “Racing is starting to see that. How much racing
do you need when you can bet on every track over the country at the same time?” The Maryland Jockey Club reported a
22.5 percent drop in wagering in 2008 over 2007 at its tracks,
Pimlico and Laurel. Total parimutuel handle (wagering) on
thoroughbred racing in North America fell by 7 percent in
2008, according to the Jockey Club.
Joe Kelly covered the 1946 Preakness. “Racing changed
to the point where you can see the racing in your living
room,” he says. “Technology took over and people decided it
wasn’t necessary to go into the physical part of it by attending the race track.”
Economist Richard Thalheimer heads a consulting firm
in Lexington, Ky. He studies the horse racing industry and
notes, along with Sauer, that the introduction of state lotteries in the 1970s and the proliferation of casinos in the latter
two decades of the 20th century have cut into horse race
betting. Wagering peaked in real dollars in the mid-1970s,
and has declined 45 percent to 50 percent, largely because of
competition, according to Thalheimer.
“Back in the Seabiscuit days, you’d have 70,000 people at
a track on a Wednesday afternoon,” says Remi Bellocq, the
chief executive officer of the Horsemen’s Benevolent
Protective Association.
Racing also got hurt because it resisted television broadcasts in the early years, Sauer says, under the misconception
that TV would cut into live attendance. “The response was
really slow and played out over a decade.” Baseball and
football broadcasts expanded, and so did those sports’ attendance. In the long run, television builds interest in the sport,
he argues, and racing suffered on a relative basis from the
TV exposure that baseball, football, and eventually basketball gained. Racing is also harder to broadcast, given its brief
spurts of action followed by the lags between races. “I think
the lack of regularly scheduled [television] racing and the
difficulty of convening it on television hurt in a period where
TV broadcasts made the landscape of modern sport.”
The Interstate Horseracing Act of 1978 changed the industry because it established the property rights of racing tracks over their own races so that they could be transmitted.
The 1974 Kentucky Derby was pirated by New York State
off-track betting sites — back then, Bellocq says, they didn’t
think simulcast would amount to much. “Now of course, 80
to 85 percent of wagering at a track is off-track.”
Those simulcasts widened racing’s distribution and may have increased its popularity. “We usually measure interest in our sport almost more by betting handle than by attendance,” says Reed. He adds that handle increased until 2003. But he agrees that the spread of gambling has
hurt racing. “It exploded in the 1990s — state after state
started approving riverboat, land based casinos, lotteries,”
he says. “The competition caught up to us.” In particular,
Maryland is ringed by states that made changes. “Charles
Town is having a huge negative effect on the horse and
customer population because of their change and Maryland
not having that change.”
Thalheimer notes that there is a decline in horse race
parimutuel betting because many people play the slots
rather than bet on the horses. In particular, he studied the
Mountaineer Park track in West Virginia and found that
parimutuel betting slowed when slots were introduced.
“The horse race handle went down on the order of 30 to 40
percent,” he says. “On the other hand, it produced enough
revenue to greatly increase purses. So the net benefit was to
the horse racing industry as well as to the state and track,
which both got far more money.” West Virginia is the only
state in the District with racinos, approved in 1994.
Racino revenues are growing even as overall gross gaming revenues fall. Expansion of racinos in Pennsylvania and Indiana fueled a 17 percent increase in racinos’ gross gaming revenue from 2007 to 2008, according to the American Gaming Association.
The future of betting on the horses, Thalheimer says,
may lie in wagering through telephone and the Internet,
where it’s legal. “It’s a great product to send out where it’s
convenient to bet on it,” he says.
And Maryland racing enthusiasts hope for a renaissance
of sorts now that Vanderbilt’s old place, Sagamore Farms,
has been restored into a horse farm once more. Kevin Plank,
who built the Under Armour empire, has entered the breeding business. He wants to win the Triple Crown.
RF

READINGS
Challmes, Joseph. The Preakness: A History. Baltimore: Anaconda
Publications, 1975.
Thalheimer, Richard. “Government Restrictions and the Demand
for Casino and Parimutuel Wagering.” Applied Economics, 2008,
vol. 40, no. 6, pp. 773-791.

Thalheimer, Richard, and Mukhtar Ali. “Exotic Betting
Opportunities, Pricing Policies and the Demand for Parimutuel
Horse Race Wagering.” Applied Economics, 1995, vol. 27, no. 8,
pp. 689-703.



The Shrinking Manufacturing Firm
Although employment in Fifth District manufacturing has
been declining steadily since 1990, the number of factories
actually grew by more than 10 percent from 1990 to 2000.
Starting in 2000, those levels began to drop, and by the third
quarter of 2008, the number of establishments had fallen by
more than 12 percent. Not surprisingly, employment
declined more dramatically as the number of establishments

fell.

[Chart: Fifth District Manufacturing: number of establishments and employment (thousands), 1990-2008. Source: Bureau of Labor Statistics/Haver, Quarterly Covered Employment and Wages]

Manufacturing employment fell by 6.5 percent in the 1990s, but since 2000 has dropped more than 30 percent.
As the number of manufacturing establishments grew and total employment fell through the 1990s, the size of the average establishment clearly fell. Despite the decline in the number of establishments that began in 2000, however, average establishment size has continued to shrink, falling from almost 65 workers per establishment in 1990 to about 54 in 2000 and down to 43 in 2008.
There are two possible explanations. First, there could be
a general decline in factory size across the District. Second,
more large factories could be closing relative to smaller
factories, leaving the District with smaller manufacturing
establishments on average. The data do not provide an
unequivocal answer, although most likely the explanation is
some combination of the two.

The Changing Face of Manufacturing
Manufacturing in the Fifth District is not concentrated
heavily in a particular product. In the third quarter of 2008,
only two products came close to accounting for 10 percent
of all manufacturing activity as measured by employment:
food and transportation equipment.
Transportation equipment has certainly been a growing
subsector of Fifth District manufacturing over the past two
decades as employment in the industry grew 4.5 percent and
the number of factories grew about 45 percent. Fabricated
metal products manufacturing, which transforms metal into
intermediate or end products (other than machinery, computers and electronics, or metal furniture), has also seen
considerable growth in the District. Employment in that
subsector grew 7.5 percent as the number of establishments
increased almost 23 percent since 1990.

The Manufacturing Sector Since 2000

60
50
40
30
20
10
2008

2005

2002

0
1999

Fifth District Manufacturing
THOUSANDS OF ESTABLISHMENTS

The goods-producing sector — which includes the subsectors of construction, natural resources and mining,
and manufacturing — has been falling steadily as a share
of Fifth District industry for quite some time. The story
of the decline, however, is really a story about the changing face of the region’s manufacturing base. Before the turn
of the century, most of the manufacturing decline was centered in the textile, apparel, and furniture industries. Today,
cutbacks have deepened and spread across subsectors of
manufacturing as both the number of establishments
engaged in manufacturing and employment in the sector
have decreased considerably.
Some of the recent employment losses can be attributed
to the globalization of manufacturing and the off-shoring of
some manufacturing operations. But much of the reduction
can be traced to increased labor productivity.
According to the Bureau of Labor Statistics, the goodsproducing sector in 1990 comprised 20 percent of all
business establishments in the Fifth District and 30 percent
of all employment. By 2008 those shares had fallen to 16 percent and 19 percent, respectively. This corresponds not only
with a decline in goods-producing employment, which fell
nearly 18 percent in the past two decades, but also with the
rise of the service sector — employment in that category has
expanded almost 50 percent over the same period.
To speak of broad trends in goods production, however,
can be misleading. Employment in the District’s manufacturing sector has fallen by more than a third (35 percent)
since 1990, and the number of establishments engaged in
manufacturing has dropped almost 3 percent. Meanwhile,
employment in the Fifth District natural resources and
mining sector has been generally steady over the past 20
years and, although the number of establishments has
recently stagnated, it remains above 1990 levels. In construction, too, employment and firm levels are 28 percent
and 29 percent above their 1990 mark, respectively, despite
a recent deterioration in activity.

70

1996

BY S O N YA R AV I N D R A N AT H WA D D E L L

Fifth District Average Manufacturing
Establishment Size

1993

Increased Productivity and Trade Have Reduced
Manufacturing Employment

The most notable structural change in the District’s
manufacturing base, however, occurred in the textile,
apparel, and furniture manufacturing. The decline in those
subsectors accounted for 72 percent of employment losses
and 63 percent of all firm closings from 1990 to 2008. Over
time, however, these subsectors’ contributions to total
losses diminished: They accounted for basically all employment losses (92 percent) in the 1990s, but only about half of
all losses since 2000.
Manufacturing activity in the Fifth District is not
distributed evenly across states, and therefore states have
been affected differently by the manufacturing decline.
North Carolina — which houses 38 percent of District
manufacturing firms and 43 percent of manufacturing
employment — has been hit the hardest. The Tar Heel State
accounted for about 50 percent of the gross decline in
employment and establishment numbers since 1990. That
year, more than 32 percent of North Carolinians worked in
manufacturing; the share has dropped to slightly more than
15 percent today.
All Fifth District states have lost more than 30 percent of
their manufacturing jobs over the past two decades, most
since 2000. North Carolina has led the Fifth District in
net employment losses, shedding over 300,000 manufacturing jobs since 1990. The other states in the District have
also seen manufacturing employment decline, but not as
severely. Virginia shed 121,670 jobs and South Carolina lost
112,060 jobs in manufacturing since 1990. (In both states,
more than 80 percent of the job losses occurred since 2000.)
Although the South Carolina economy has shed more
factory jobs than Virginia, South Carolina has also added
quite a few more. In particular, the Palmetto State has added
13,900 jobs in transportation equipment over the last two
decades, and in 2008 was home to about 275 automotiverelated companies.

1990

Economic Trends Across the Region

EMPLOYEES PER ESTABLISHMENT

DISTRICTDIGEST

SOURCE: Bureau of Labor Statistics/Haver, Quarterly Covered Employment
and Wages

sector’s job cuts since 2000 were in North Carolina. Fortyseven percent of those cuts were in textiles, textile products,
or apparel manufacturing, with an additional 14 percent in
furniture. In fact, these four subsectors in North Carolina
accounted for 30 percent of manufacturing cuts in the
District. North Carolina also saw sizeable losses in computer and electronic products (7 percent), and electrical
equipment and appliances (6 percent).
South Carolina and Virginia have continued to see their
manufacturing base move away from textile products, apparel, and furniture. In addition, although many subsectors of
manufacturing saw employment losses, certain industries,
such as computers and electronic products, contributed
more than average to the decline. Thirteen percent of
Virginia’s employment loss (and 15 percent of Maryland’s)
was in computer and electronic products.
Although manufacturing employment has declined at the
aggregate level, there are still some bright spots at the state
level. Employment in food manufacturing grew 8 percent in
North Carolina and almost 10 percent in South Carolina
between 2000 and the third quarter of 2008. South Carolina
also saw growth in transportation equipment (6 percent) and
petroleum and coal products (5 percent). Virginia saw
growth in petroleum and coal product employment (24
percent), as well as in textile product mills (5 percent).
Meanwhile, employment in plastics and rubber products
grew more than 6 percent in West Virginia.

The data from 1990 to 2008 show the loss of textile,
apparel, and furniture manufacturing and the rise in transportation equipment and fabricated metal production. Yet
employment has declined in all Fifth District subsectors of
manufacturing since 2000. The number of factories in the
Fifth District has dropped, and employment has fallen
even more precipitously.
Textile and textile products still accounted for about 30 percent of manufacturing job
Employment in the
losses since 2000, and apparel and furniture
District’s manufacturing
accounted for about 10 percent each. But
sector has fallen
the computer and electronic products indusby more than a third
try’s contribution rose to account for about
(35 percent) since 1990,
8 percent of losses. In addition, electrical
and the number of
equipment, wood products, chemicals,
establishments engaged
plastics, and machinery each contributed
in manufacturing
between 4 percent and 5 percent of total
has dropped almost
losses.
3 percent.
More than half of the manufacturing

QUICK
FACT

Deciphering the “Slump”
There are a few potential explanations for why
the District has seen such precipitous declines
in manufacturing employment, particularly
since 2000.
The first theory is that the demand for
manufactured goods — domestic or international — simply might have declined and the
lower demand spurred a cut in production. A
second theory is that foreign firms have outcompeted domestic firms in production. A
third theory is that American firms have found
it more profitable to manufacture goods

Spring 2009 • Region Focus

41


abroad. Finally, the manufacturing sector in the Fifth District simply might have become more productive as firms have found ways to produce the same output with fewer establishments and workers.

The first theory — a general decline in demand — might explain a more recent decrease in manufacturing activity. However, a decade of booming American consumer spending and rising per-capita incomes around the world does not suggest a reduced demand that could explain a decade-long decline across the Fifth District manufacturing sector.

The second and third theories — that overseas firms are more competitive or that formerly domestic jobs are moving overseas — have been commonly cited reasons for shuttered factories in textiles, apparel, and furniture production. North Carolina State University economist Mike Walden says the decline in textiles, apparel, furniture, and cigarette production may be due to increased imports and outsourcing.

But Walden reports that productivity accounts for declines in other sectors. In fact, this final theory is critical to understanding the manufacturing decline. It is virtually undisputed that manufacturing across the United States has become more productive. According to data from the Bureau of Economic Analysis and the Federal Reserve Bank of San Francisco, overall manufacturing productivity in the United States, as measured by the real value of output per worker, grew almost 40 percent from 2000 to 2007. This trend held true across Fifth District states, especially in North Carolina and Maryland.

Of all manufacturing subsectors in the Fifth District, the computer and electronic products industry had the highest productivity growth. In that subsector, output per worker grew about three and a half times in South Carolina and Virginia and more than quadrupled in Maryland and North Carolina between 2000 and 2006. The data provide evidence that much of the drop in computer and electronic product industry employment — which accounted for almost 10 percent of Fifth District manufacturing employment losses over the decade — is due to increased productivity.

The productivity data also provide evidence to dim some of the Fifth District "bright spots." Although employment in food production grew about 8 percent in North Carolina and about 10 percent in South Carolina from 2000 to 2008, output per worker in the subsector grew only 5 percent in North Carolina and fell 9 percent in South Carolina from 2000 to 2006.

On the other hand, transportation equipment manufacturing in South Carolina actually appeared to be a bright spot, as employment in the state subsector increased 6 percent even as output per worker in motor vehicle production more than doubled. (Productivity in the "other transportation" category in South Carolina also grew.) Productivity in West Virginia's motor vehicle production jumped notably as well, but the state accounts for only about 4 percent of all transportation equipment manufacturing in the District.

Meanwhile, some subsectors saw a decrease in both productivity and employment. Job losses in the chemical subsector accounted for almost 5 percent of total losses in Fifth District manufacturing employment, while productivity in that sector actually declined in three of the five states in the District.

Productivity increases are also not likely to account for the steep employment losses in the apparel, textile, and furniture industries. Increased imports and labor outsourcing probably played a larger role in those subsectors' work force reductions.

Share of Total District Manufacturing (percent of employment)

Manufacturing Subsector                 1990    2008
Apparel                                 10.0     1.7
Beverage and Tobacco Product             2.0     2.3
Chemicals                                7.0     8.4
Computer and Electronic Product          6.8     7.1
Electrical Equipment and Appliance       3.4     4.0
Fabricated Metal Product                 5.2     8.5
Food                                     6.1    10.0
Furniture and Related Product            7.1     5.6
Leather and Allied Product               0.4     0.1
Machinery                                5.3     7.0
Nonmetallic Mineral Product              3.2     3.7
Paper                                    3.2     3.9
Petroleum and Coal Products              0.3     0.3
Plastics and Rubber Product              4.5     6.8
Primary Metal                            2.8     2.5
Printing and Related Support             3.6     3.9
Textile Mills                           15.3     5.3
Textile Product Mills                    2.5     1.6
Transportation Equipment                 5.9     9.4
Wood Product                             3.4     4.7
Miscellaneous                            2.3     3.1

SOURCE: Bureau of Labor Statistics/Haver, Quarterly Covered Employment and Wages

QUICK FACT: The Bureau of Labor Statistics' Quarterly Covered Employment and Wages (QCEW) data come from quarterly tax reports of more than 8 million employers and some federal agencies. The data include 99.7 percent of all wage and salary civilian employment.

Looking Forward

As the marginal productivity gains — particularly in newer manufacturing industries such as computer and electronic products — start to decrease, we might begin to see the decline of manufacturing employment stabilize. New sectors such as biotechnology seem promising. Already, North Carolina is a leading state for biotech, with 450 companies involved in some phase of research, development, or manufacturing. Nonetheless, with the increasing globalization of industry and freedom of trade, the urbanization of our region, and continued productivity improvements, the share of our District devoted to manufacturing may remain on a downward trajectory for some time to come. RF
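The productivity measure the article leans on — the real value of output per worker — is simple enough to sketch. The figures below are purely illustrative assumptions (indexed output and employment), not actual BEA or BLS data; they show how roughly 40 percent productivity growth can arise even when real output is flat, so long as employment shrinks:

```python
# Hypothetical sketch of the article's productivity measure:
# output per worker = real value of output / employment.
# All numbers are invented for illustration only.

def output_per_worker(real_output: float, employment: float) -> float:
    """Real output per worker for a sector-year."""
    return real_output / employment

def growth_pct(start: float, end: float) -> float:
    """Percent change between two productivity levels."""
    return (end / start - 1) * 100

# Illustrative: real output indexed to 100 in both years, while
# employment falls from 1.00 to 0.715 (about a 28 percent decline).
p_2000 = output_per_worker(100.0, 1.000)
p_2007 = output_per_worker(100.0, 0.715)
print(f"{growth_pct(p_2000, p_2007):.0f}%")  # prints 40%
```

Under these assumptions, the same output produced by roughly 28 percent fewer workers registers as about 40 percent growth in output per worker — which is how rising productivity and falling employment can coexist in the data discussed above.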

Smaller Textile Industry Reaches New Markets
BY BETTY JOYCE NASH

Jeff Ward's mother kicked him and his business, Innovative Geotextiles Corp., out of the garage in 1983.
"I moved. And so today we're in a 10,000-square-foot
manufacturing plant,” he says. He calls his business
category “rejuvenation,” because he finds new purposes for
old products. His first effort was to take the polypropylene commonly used as dust covers under sofas and chairs
and re-purpose it as landscaping fabric. “I developed a retail
product you use for weed block — it lets the water through,
but not the sun.”
Rejuvenation also describes the District’s diverse, but
much, much smaller textile industry today. Even with all the
layoffs and outsourcing, North Carolina remains the No. 1
textile mill employer and yarn producer as well as the No. 4
apparel producer in the nation. Today, however, the textile
and apparel sector accounts for less than 2 percent of the
state’s employment, and the industry’s labor-intensive production has been replaced by ideas. These technological
innovations include carbon fiber that will be used in the “airbus” slated to be built in Kinston, N.C., to fabric that serves
as a structure for new skin growth on burn patients. The
definition of what qualifies as a “textile” appears unlimited.
Mansour Mohamed founded and serves as the chief
scientific officer of 3TEX based in Cary, N.C. Formerly the
head of the department of textile engineering, chemistry,
and science at the North Carolina State University College
of Textiles, he and his colleagues have put the firm’s patent
portfolio to work. Among other products, the firm engineers and manufactures armor systems using its patented
fabrics and composite systems. The 3TEX technology
includes three-dimensional, noncrimped woven fibers
known for strength.
“We are also gearing up for a new focus on wind energy —
windmill blades,” Mohamed says.
While giants such as Milliken in South Carolina, and
International Textile Group, Unifi, and Glen Raven in North
Carolina remain, a wide variety of firms — small and large,
old and new — make up the textile sector today. And, like
3TEX, the products they engineer and fabricate would surprise many people.
Like nonwoven fabrics, for instance — think diapers and
wipes. They’re not woven or knitted, and they comprise a
growing piece of the industry, which began with the development of synthetic fibers during World War II. The
category has exploded in recent years. The United States
produces and uses more nonwoven products than any
other country, and North Carolina has more nonwoven
fabric producers than any other state. These include firms
like Freudenberg (the world’s biggest producer of nonwovens), Kimberly Clark, and PGI Nonwovens, which
operates four locations in North Carolina.
“It’s a very inexpensive way of putting materials

together,” says Ian Butler, who keeps statistics for INDA,
the industry association for nonwoven goods. But it’s also an
industry that requires little labor, he says. Machines churn
out 1,000 baby diapers per minute.
Textile firms have also specialized in "performance fabrics" that retard flames, bacteria growth, and moisture,
and even keep socks and shirts from getting smelly. Textile
firms have also found military products to be a growing
niche, in part thanks to the 1941 Berry Amendment. The
amendment was made permanent in the U.S. Code in 2002
and says military products must be manufactured in the
United States. Milliken, for instance, has a military division
that makes flame-resistant flight suits and boots, among
other products, using various trademarked fabrics. In 2008,
the U.S. Department of Defense purchased $133 million in
North Carolina textile goods.
Medical textiles is also a growing segment. “That is the
hot area now,” says Blanton Godfrey, the dean of the North
Carolina State University College of Textiles, “where you’re
growing peoples’ organs on textile scaffolds, a fiber base.”
Other products include artificial arteries and hernia
patches. Those products are almost all made in the United
States, some in Canada. These new niches supply a still-robust part of the market. Until recently, automotive textile
suppliers were doing well.
Four years ago, a group of researchers, under a grant from
the North Carolina Department of Commerce, documented the textile industry in the state. Researchers from North
Carolina State, the University of North Carolina at Chapel
Hill, and Duke University merged a variety of databases and
identified 1,846 textile company locations in North Carolina
and more than 900 in South Carolina. They established Web
sites to connect firms in those states.
The North Carolina Hosiery Technology Center at
Catawba Valley Community College began 19 years ago to
train technicians and operators, but now helps firms test,
develop prototypes, and market products. The center’s testing
lab sees a lot of action these days, according to director Dan
St. Louis. “We test for a ton of people, like major brands Nike,
Lands End, Kmart; it could be for durability, fit, moisture
management, antimicrobial properties, compression testing,”
St. Louis says. Before firms choose which products to buy,
they have the samples tested. It doesn’t hurt that the center
has the resources of the North Carolina State University
College of Textiles behind it, among other expertise.
Manufacturing textiles today, says St. Louis, is not about
price. Thorlo, for instance, makes high-end athletic and
outdoor recreation socks in Statesville, N.C. “They focused
on quality,” he says, adding that they monitor to the
“nth degree.” Given the variety they now handle, the
center’s name is being changing to the Manufacturing
Solution Center.
RF


manufacturing. Nonetheless, with the
from quarterly tax reports
and electronic product industry employment
increasing globalization of industry and
of more than 8 million
— that accounted for almost 10 percent of
freedom of trade, the urbanization of
employers and some
Fifth District manufacturing employment
our region, and the continued productivity
federal agencies. This data
losses over the decade — is due to increased
improvements, the share of our District
includes 99.7 percent of all
productivity.
devoted to manufacturing may remain
wage and salary civilian
The productivity data also provide
on a downward trajectory for some time
employment.
evidence to dim some of the Fifth District
to come.
RF

QUICK
FACT

42

Region Focus • Spring 2009

Smaller Textile Industry Reaches New Markets
BY BETTY JOYCE NASH

Jeff Ward’s mother kicked him and his business, Innovative Geotextiles Corp., out of the garage in 1983.
“I moved. And so today we’re in a 10,000 square foot
manufacturing plant,” he says. He calls his business
category “rejuvenation,” because he finds new purposes for
old products. His first effort was to take the polypropylene commonly used as dust covers under sofas and chairs
and re-purpose it as landscaping fabric. “I developed a retail
product you use for weed block — it lets the water through,
but not the sun.”
Rejuvenation also describes the District’s diverse, but
much, much smaller textile industry today. Even with all the
layoffs and outsourcing, North Carolina remains the No. 1
textile mill employer and yarn producer as well as the No. 4
apparel producer in the nation. Today, however, the textile
and apparel sector accounts for less than 2 percent of the
state’s employment, and the industry’s labor-intensive production has been replaced by ideas. These technological
innovations range from carbon fiber that will be used in the “airbus” slated to be built in Kinston, N.C., to fabric that serves
as a structure for new skin growth on burn patients. The
definition of what qualifies as a “textile” appears unlimited.
Mansour Mohamed founded and serves as the chief
scientific officer of 3TEX based in Cary, N.C. Formerly the
head of the department of textile engineering, chemistry,
and science at the North Carolina State University College
of Textiles, he and his colleagues have put the firm’s patent
portfolio to work. Among other products, the firm engineers and manufactures armor systems using its patented
fabrics and composite systems. The 3TEX technology
includes three-dimensional, noncrimped woven fibers
known for strength.
“We are also gearing up for a new focus on wind energy —
windmill blades,” Mohamed says.
While giants such as Milliken in South Carolina, and
International Textile Group, Unifi, and Glen Raven in North
Carolina remain, a wide variety of firms — small and large,
old and new — make up the textile sector today. And, like
3TEX, the products they engineer and fabricate would surprise many people.
Like nonwoven fabrics, for instance — think diapers and
wipes. They’re not woven or knitted, and they comprise a
growing piece of the industry, which began with the development of synthetic fibers during World War II. The
category has exploded in recent years. The United States
produces and uses more nonwoven products than any
other country, and North Carolina has more nonwoven
fabric producers than any other state. These include firms
like Freudenberg (the world’s biggest producer of nonwovens), Kimberly-Clark, and PGI Nonwovens, which
operates four locations in North Carolina.
“It’s a very inexpensive way of putting materials

together,” says Ian Butler, who keeps statistics for INDA,
the industry association for nonwoven goods. But it’s also an
industry that requires little labor, he says. Machines churn
out 1,000 baby diapers per minute.
Textile firms have also specialized in “performance
fabrics” that retard flame and bacteria growth and moisture,
and even keep socks and shirts from getting smelly. Textile
firms have also found military products to be a growing
niche, in part thanks to the 1941 Berry Amendment. The
amendment was made permanent in the U.S. Code in 2002
and says military products must be manufactured in the
United States. Milliken, for instance, has a military division
that makes flame-resistant flight suits and boots, among
other products, using various trademarked fabrics. In 2008,
the U.S. Department of Defense purchased $133 million in
North Carolina textile goods.
Medical textiles are also a growing segment. “That is the
hot area now,” says Blanton Godfrey, the dean of the North
Carolina State University College of Textiles, “where you’re
growing people’s organs on textile scaffolds, a fiber base.”
Other products include artificial arteries and hernia
patches. Those products are almost all made in the United
States, some in Canada. These new niches supply a stillrobust part of the market. Until recently, automotive textile
suppliers were doing well.
Four years ago, a group of researchers, under a grant from
the North Carolina Department of Commerce, documented the textile industry in the state. Researchers from North
Carolina State, the University of North Carolina at Chapel
Hill, and Duke University merged a variety of databases and
identified 1,846 textile company locations in North Carolina
and more than 900 in South Carolina. They established Web
sites to connect firms in those states.
The North Carolina Hosiery Technology Center at
Catawba Valley Community College began 19 years ago to
train technicians and operators, but now helps firms test,
develop prototypes, and market products. The center’s testing
lab sees a lot of action these days, according to director Dan
St. Louis. “We test for a ton of people, like major brands Nike,
Lands’ End, Kmart; it could be for durability, fit, moisture
management, antimicrobial properties, compression testing,”
St. Louis says. Before firms choose which products to buy,
they have the samples tested. It doesn’t hurt that the center
has the resources of the North Carolina State University
College of Textiles behind it, among other expertise.
Manufacturing textiles today, says St. Louis, is not about
price. Thorlo, for instance, makes high-end athletic and
outdoor recreation socks in Statesville, N.C. “They focused
on quality,” he says, adding that they monitor to the
“nth degree.” Given the variety they now handle, the
center’s name is being changed to the Manufacturing
Solution Center.
RF


State Data, Q4:08

                                          DC         MD         NC         SC         VA         WV
Nonfarm Employment (000s)              707.0    2,576.3    4,080.0    1,894.9    3,721.5      759.8
  Q/Q Percent Change                    -0.4       -0.8       -1.3       -1.5       -1.2       -0.4
  Y/Y Percent Change                     1.3       -1.3       -2.1       -2.7       -1.3        0.2

Manufacturing Employment (000s)          1.4      126.1      497.9      235.8      258.8       55.2
  Q/Q Percent Change                   -12.5       -1.5       -2.8       -2.5       -2.0       -1.5
  Y/Y Percent Change                   -17.6       -4.0       -6.7       -5.1       -5.3       -5.3

Professional/Business Services
Employment (000s)                      152.7      399.5      487.0      215.1      650.5       60.1
  Q/Q Percent Change                    -0.3        0.1       -3.4       -1.9       -1.3       -0.8
  Y/Y Percent Change                     0.0        0.0       -3.7       -4.9       -0.4       -2.0

Government Employment (000s)           234.8      488.3      718.0      343.4      697.6      147.5
  Q/Q Percent Change                    -0.7       -0.1        1.3        0.1        0.1        0.2
  Y/Y Percent Change                     0.8        1.4        3.6        0.7        1.8        1.4

Civilian Labor Force (000s)            332.9    3,007.4    4,578.3    2,182.1    4,164.3      804.7
  Q/Q Percent Change                    -0.3        0.3        0.6        1.0        0.8        0.0
  Y/Y Percent Change                     0.9        0.6        1.4        2.5        1.9       -0.9

Unemployment Rate (%)                    8.0        5.1        7.5        8.3        4.6        4.4
  Q2:08                                  7.2        4.5        6.6        7.2        4.1        4.2
  Q3:07                                  5.7        3.6        5.0        5.7        3.3        4.4

Real Personal Income ($Mil)         31,897.6  224,316.3  262,490.3  117,934.5  275,775.9   45,643.6
  Q/Q Percent Change                     1.5        1.3        1.1        1.1        1.3        1.7
  Y/Y Percent Change                     1.6        0.8        0.8        0.8        0.9        2.9

Building Permits                          42      1,889      8,058      3,441      5,033        403
  Q/Q Percent Change                   -72.4      -50.5      -44.7      -48.7      -20.2      -53.8
  Y/Y Percent Change                   -74.7      -45.9      -49.4      -53.2      -33.4      -67.7

House Price Index (1980=100)           614.2      493.0      346.2      325.0      448.7      229.4
  Q/Q Percent Change                    -1.2       -1.5        0.1       -0.2       -0.9       -0.1
  Y/Y Percent Change                    -6.0       -7.7        1.1        0.3       -4.6       -0.5

Sales of Existing Housing
Units (000s)                             6.8       58.4      121.2       63.2      105.2       22.8
  Q/Q Percent Change                    -5.6      -11.0      -21.1      -21.4      -16.8       -9.5
  Y/Y Percent Change                   -15.0      -14.6      -34.7      -31.0        3.1      -17.4

Charts (change from prior year, First Quarter 1998 - Fourth Quarter 2008; Fifth District and United States): Nonfarm Employment; Unemployment Rate; Real Personal Income
Charts (change from prior year, First Quarter 1998 - Fourth Quarter 2008; Charlotte, Baltimore, Washington): Nonfarm Employment, Metropolitan Areas; Unemployment Rate, Metropolitan Areas; Building Permits (Fifth District and United States)
Charts (First Quarter 1998 - Fourth Quarter 2008): FRB-Richmond Manufacturing Composite Index; FRB-Richmond Services Revenues Index; House Prices, Change From Prior Year (Fifth District and United States)

NOTES:
Nonfarm Payroll Employment, thousands of jobs, seasonally adjusted (SA) except in MSAs; Bureau of Labor Statistics (BLS)/Haver Analytics. Manufacturing Employment, thousands of jobs, SA in all but DC and SC; BLS/Haver Analytics. Professional/Business Services Employment, thousands of jobs, SA in all but SC; BLS/Haver Analytics. Government Employment, thousands of jobs, SA; BLS/Haver Analytics. Civilian Labor Force, thousands of persons, SA; BLS/Haver Analytics. Unemployment Rate, percent, SA except in MSAs; BLS/Haver Analytics. Building Permits, number of permits, NSA; U.S. Census Bureau/Haver Analytics. Sales of Existing Housing Units, thousands of units, SA; National Association of Realtors®.
1) FRB-Richmond survey indexes are diffusion indexes representing the percentage of responding firms reporting increase minus the percentage reporting decrease. The manufacturing composite index is a weighted average of the shipments, new orders, and employment indexes.
2) Metropolitan area data, building permits, and house prices are not seasonally adjusted (NSA); all other series are seasonally adjusted.

SOURCES:
Real Personal Income: Bureau of Economic Analysis/Haver Analytics.
Unemployment rate: LAUS Program, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Employment: CES Survey, Bureau of Labor Statistics, U.S. Department of Labor, http://stats.bls.gov.
Building permits: U.S. Census Bureau, http://www.census.gov.
House prices: Federal Housing Finance Agency, http://www.fhfa.gov.


Metropolitan Area Data, Q4:08

                            Washington, DC   Baltimore, MD   Hagerstown-Martinsburg, MD-WV
Nonfarm Employment (000s)         2,442.0         1,313.3           101.1
  Q/Q Percent Change                  0.1            -0.3            -0.0
  Y/Y Percent Change                  0.2            -1.2            -3.0
Unemployment Rate (%)                 4.4             5.4             6.1
  Q1:08                               4.0             4.9             5.2
  Q2:07                               2.9             3.5             3.0
Building Permits                    2,928             684             170
  Q/Q Percent Change                -15.3           -57.8           -39.5
  Y/Y Percent Change                -40.0           -49.0           -64.3

                            Asheville, NC    Charlotte, NC    Durham, NC
Nonfarm Employment (000s)           175.0           854.1           293.9
  Q/Q Percent Change                 -0.2            -0.1             0.6
  Y/Y Percent Change                 -2.3            -2.7             1.3
Unemployment Rate (%)                 5.9             7.9             5.5
  Q2:08                               5.2             6.8             5.2
  Q3:07                               3.6             4.8             3.8
Building Permits                      263           2,018             339
  Q/Q Percent Change                -45.5           -23.6           -37.5
  Y/Y Percent Change                -55.3           -47.0           -40.6

                            Greensboro-High Point, NC   Raleigh, NC   Wilmington, NC
Nonfarm Employment (000s)           364.0           519.6           144.6
  Q/Q Percent Change                 -0.3            -0.2            -1.4
  Y/Y Percent Change                 -3.2            -1.3            -2.0
Unemployment Rate (%)                 7.9             5.9             7.3
  Q2:08                               6.9             5.2             5.9
  Q3:07                               4.8             3.6             4.2
Building Permits                      584           1,224             505
  Q/Q Percent Change                -14.0           -69.0           -47.8
  Y/Y Percent Change                -39.7           -56.6           -44.4

                            Winston-Salem, NC   Charleston, SC   Columbia, SC
Nonfarm Employment (000s)           216.5           298.5           365.2
  Q/Q Percent Change                  0.1            -1.0             0.0
  Y/Y Percent Change                 -2.2            -1.0            -1.2
Unemployment Rate (%)                 7.1             6.9             7.1
  Q2:08                               6.3             6.2             6.5
  Q3:07                               4.5             4.4             4.8
Building Permits                      263             798             617
  Q/Q Percent Change                -25.5           -26.8           -55.1
  Y/Y Percent Change                -57.0           -35.4           -46.4

                            Greenville, SC   Richmond, VA   Roanoke, VA
Nonfarm Employment (000s)           318.3           621.5           162.0
  Q/Q Percent Change                 -0.1            -1.1             0.1
  Y/Y Percent Change                 -0.9            -2.3            -1.3
Unemployment Rate (%)                 7.2             5.0             4.6
  Q2:08                               6.4             4.5             4.1
  Q3:07                               4.9             3.2             3.1
Building Permits                      312           1,045             103
  Q/Q Percent Change                -47.7            -7.4           -27.0
  Y/Y Percent Change                -73.0           -19.7           -42.1

                            Virginia Beach-Norfolk, VA   Charleston, WV   Huntington, WV
Nonfarm Employment (000s)           767.1           152.8           120.7
  Q/Q Percent Change                 -1.1            -0.2             1.6
  Y/Y Percent Change                 -1.0             0.8            -1.8
Unemployment Rate (%)                 4.9             3.3             4.9
  Q2:08                               4.4             3.3             5.0
  Q3:07                               3.3             3.4             4.1
Building Permits                      648              57               5
  Q/Q Percent Change                -50.2           -62.3           -37.5
  Y/Y Percent Change                -47.4            54.1           -82.8

For more information, contact Sonya Ravindranath Waddell at (804) 697-2694 or e-mail sonya.waddell@rich.frb.org


OPINION
The Importance of Luck
BY AARON STEELMAN

If you spend much time talking to proponents of free markets, you will find that many of them don’t have much to say about the role that luck plays in people’s lives. Instead, you will often hear a lot about how people determine their own fates — and that as long as there is a level playing field, then everyone has a good shot at making his dreams a reality.

There is a lot of truth in such statements. Most people do fundamentally determine their own happiness — which, in large measure, is determined by one’s general outlook on life. People can choose to be happy, or at least happier, just as they can choose to be miserable and unpleasant. This is not to deny that some people are prone to bouts of depression or sadness. But, fortunately, with effort people often can handle such predispositions, so that their feelings of melancholia are transitory and manageable rather than permanent and crushing. At bottom, happiness is an act of volition for most people.

Does the same logic apply to people’s material status? This is a more complicated question. Hard work is usually a necessary condition. But it often is not sufficient. Luck plays an enormous role. In fact, the most important factor affecting people’s material status is completely beyond their control: We simply cannot affect the conditions into which we are born.

“The most important factor affecting people’s material status is completely beyond their control: the conditions into which we are born.”

It is by pure chance that some of us were born in developed countries, while others were born in desperately poor ones. On average, people born in the United States can expect to live about 80 years and have access to luxuries unknown to even aristocrats just a few generations ago. In contrast, on average, people in parts of sub-Saharan Africa can expect to live only into their 40s and get by on less than a dollar a day.

International comparisons provide the starkest example of the role that chance plays in our lives. But intranational comparisons are instructive as well. Income inequality in the United States is significant. What’s more, people who are born poor tend to remain poor and people who are born rich tend to remain rich. It is possible to escape poverty in the United States — and as previously noted, being poor in the United States means living a wholly different life than a poor person in, say, Tanzania. But who can doubt the educational and cultural advantages, just to name a few, that accrue to people born to more affluent families? By definition, the playing field is not level at birth — and this has important consequences for people’s prospects throughout their lives.

Does that mean we should attempt to level conditions through, say, a confiscatory inheritance tax? One has to consider the incentives such a tax would create. Some people, no doubt, would not work as hard as they otherwise would because they would be unable to leave the fruits of their labor to their heirs. In contrast, some people who are now likely to receive significant inheritances might work harder knowing that this cushion would not be forthcoming. Which effect is larger is ultimately an empirical question. But more important, such a tax would codify into law the belief that things ought to be equal, that we should all start from the same position. This is simply unrealistic. It is also undesirable. Human beings are intrinsically different. Even if you equalize wealth, you cannot equalize talent or ambition. And, for that, we should be grateful. The world is much richer (financially and nonfinancially) because people have varied interests and goals. It is this diversity that makes the division of labor such a powerful force for improving the human condition — and the world such an interesting place.

It is also important to note that there are two kinds of luck. The first is what we normally think of and what is described above — that is, simple chance. The second is quite different. It is best illustrated by an example. When someone receives a promotion at work, we often say that he is lucky. It is true that a fortunate thing has happened to him. But that promotion probably did not just fall into his lap. He probably placed himself in that position by working hard and making wise decisions. In short, we make this second type of luck. Life is a combination of circumstances that we are dealt and those that we choose.

At the beginning of this column, I noted that many free marketeers downplay the role that chance plays in people’s lives. They may believe that acknowledging this weakens the argument for laissez faire and provides ammunition to those who favor redistributionist schemes. As I have argued, I don’t think this is the case. Regardless, the evidence for the importance of luck is all around us. And to deny it is to appear to be oblivious to the facts, perhaps willingly so. That is a very real risk, especially at a time when many in the public are expressing skepticism about the merits of a market system and the wisdom of those who support it. RF


NEXT ISSUE

Measuring the Standard of Living
Per-capita gross domestic product is the most common “standard of living” measure in economics, largely because it is well understood and widely available across countries. But it doesn’t capture all possible aspects of a population’s well-being. Other measures focus on health, environmental quality, income distribution, and happiness. Should policymakers look to these alternative measures when crafting economic policy?

Seasonal Employment
In tourist hotspots like Myrtle Beach, S.C., laid-off workers are now competing for seasonal jobs that students and temporary employees from overseas used to do. What does this mean for the labor market?

Retail Walk-In Health Clinics
Some retailers, like Wal-Mart and CVS, offer walk-in health clinics for their customers. Staffed mainly by nurse practitioners, these one-stop centers provide remedies for routine ailments and might have broader implications for the future of health care.

Antitrust
Antitrust laws were originally created to protect consumers against monopolies by breaking up powerful companies and cartels. Yet there are instances where monopoly power doesn’t hurt consumers in the way some initially thought decades ago. We’ll look at how the economics of antitrust has influenced the regulation of large firms.

Federal Reserve
Is the Fed’s “Beige Book” a crystal ball? We’ll take a look at how this publication is used to forecast trends in different sectors of the economy and regions of the country.

Economic History
Did the Fed’s actions to rescue Long Term Capital Management in 1998 affect market expectations about what the public sector would do to protect large nonbank financial institutions?

Research Spotlight
What might Fischer Black, one of the pioneers of modern financial economics, think about the economic crisis?

Jargon Alert
If you buy less of something as your income rises, chances are that item is an “inferior good.”

Visit us online: www.richmondfed.org
• To view each issue’s articles and Web-exclusive content
• To add your name to our mailing list
• To request an e-mail alert of our online issue posting


Federal Reserve Bank
of Richmond


P.O. Box 27622
Richmond, VA 23261

Change Service Requested

Please send subscription changes or address corrections to Research Publications or call (800) 322-0565.

RECENT Economic Research from the Richmond Fed

Economists at the Federal Reserve Bank of Richmond conduct research on a wide variety of monetary and macroeconomic issues. Before that research makes its way into academic journals or our own publications, though, it is often posted on the Bank’s Web site so that other economists can have early access to the findings. Recent offerings from the Richmond Fed’s Working Papers series include:
“Unemployment Insurance with a Hidden Labor Market”
Fernando Álvarez-Parra and Juan M. Sanchez, June 2009
“The Consolidation of Financial Market Regulation: Pros, Cons,
and Implications for the United States”
Sabrina R. Pellerin, John R. Walter, and Patricia E. Wescott,
May 2009
“Assessing the Effectiveness of the Paulson ‘Teaser Freezer’
Plan: Evidence from the ABX Index”
Eliana Balla, Robert E. Carpenter, and Breck Robinson, April 2009
“On the Implementation of Markov-Perfect Interest Rate and
Money Supply Rules: Global and Local Uniqueness”
Michael Dotsey and Andreas Hornstein, April 2009
“Credit and Self-Employment”
Ahmet Akyol and Kartik B. Athreya, April 2009
“The Role of Information in the Rise in Consumer
Bankruptcies”
Juan M. Sanchez, April 2009
“Notes on Collateral Constraints in a Simple Model of Housing”
Andreas Hornstein, April 2009
“The Optimal Rate of Inflation with Trending Relative Prices”
Alexander L. Wolman, March 2009
“Fiscal Policy and Default Risk in Emerging Markets”
Gabriel Cuadra, Juan M. Sanchez, and Horacio Sapriza,
February 2009

You can access these papers and more at:
www.richmondfed.org/publications/research/working_papers