The U.S. Experience with a Federal
Central Bank System
Adapted from a presentation made by President Santomero at the Oesterreichische Nationalbank’s 30th Economics Conference,
“Competition of Regions and Integration in the EMU,” Vienna, June 13, 2002
BY ANTHONY M. SANTOMERO

How does a decentralized central bank work?
The events of September 11 put the Federal
Reserve System, the central bank of the
United States, to the test and highlighted
the benefits of its geographic diversification. In his
quarterly message, President Santomero presents an
overview of the Federal Reserve’s design and explains
how it helps the Fed carry out its various roles,
including formulating monetary policy, regulating
financial institutions, and keeping the payments system
running.

The events of September 11,
and the days and weeks that followed,
put many aspects of the U.S. financial
system to the test and demonstrated its
resiliency. At the Federal Reserve, our
response to those events was a coordinated effort across all areas of responsibility and across the entire Fed System.
We kept the payments system operating,
provided access to credit for affected
banking institutions, and implemented
aggressive monetary expansion. The
Fed’s ability to feel the pulse of financial
activity across the country, operate in
multiple locations, and coordinate its
efforts to ensure financial stability is a
testimony to its present, geographically
diversified organizational design. In this
message, I’d like to present an overview
of that design and explain how it helps
the Federal Reserve perform its roles as
the central bank of the United States.
As the central bank, the
Federal Reserve controls the monetary
base of the economy to affect interest
rates and inflation; it provides liquidity
in times of crisis; and it ensures the
general integrity of our financial system.
I believe the Federal Reserve’s decentralized structure has been a positive
force in the U.S. economy. It has proved
a vital, and indeed very practical,
structure for our central bank. Throughout the Fed’s history, decentralization
has provided the local context and
contact necessary for effective
policymaking.
A key to the success of our
decentralized structure is its flexibility.
To be sure, there is no single model that
works everywhere or all of the time. In
fact, it is just the opposite. The structure
of a central bank must fit the economic
and political realities of the time, or it
will not survive. It must evolve in
response to the unique features of the
economy it serves. This adaptation is a
constant challenge with new twists and
turns along the way.

THE ESTABLISHMENT OF
DECENTRALIZED CENTRAL
BANKING IN THE U.S.
In 1913, the U.S. Congress
established the Federal Reserve System
to serve as the central bank. The System
comprised 12 independently incorporated Reserve Banks spread across the
United States, operating under the
supervision of a Board of Governors in
Washington, D.C.
Why did the central bank
come along so late in the economic
history of the United States? Moreover,
why was it given such a decentralized
structure?
The answers to these questions
are interconnected. In fact, the United
States had made two previous attempts
to establish a central bank. The First
Bank of the United States was established in 1791, and the Second Bank of
the United States was established in
1816. Congress gave each an initial 20-year charter. Yet, neither was able to
muster the political support to have its
charter renewed. Therefore, the United
States spent most of the 19th century
without a central bank.
By the early 20th century, a
series of financial panics and economic
recessions further demonstrated the
need for a central bank. It became
widely recognized that the nation
required a more elastic supply of money
and credit to meet the fluctuating
demands of the marketplace. It also
needed a more efficient infrastructure
for transferring funds and making
payments in the everyday course of
business, particularly by check.
While the need for a central
bank was clear, so were the reasons to be
wary of one. Many people, particularly
small-business owners and farmers across
the South and West, were concerned
that a central bank headquartered
“back East,” in either the financial
center of New York City or the political
center of Washington, D.C., would not
be responsive to their economic needs.
In some sense, this was a replay
of the broader governance issue the
United States wrestled with from the
beginning of its short national history.
The 13 colonies saw the need to bind
together and form a nation, but they
were wary of ceding power to a national
government. It was out of that tension
that the federal government of the
United States was forged in Philadelphia with the establishment of the U.S.
Constitution.
The Constitution provided for
the establishment of a federal government that acknowledged and preserved
the rights of the states, and a system of
checks and balances within the federal
government. In this way, power was not
unduly concentrated in any one
individual or group.
To galvanize the necessary
political support to establish a central
bank, President Woodrow Wilson and
Congress drew on the now familiar
model of a federal structure. That
structure, embodied in the Federal
Reserve Act of 1913, essentially remains
intact today.
Overseeing the System is a
seven-member Board of Governors
appointed by the President of the
United States and confirmed by the
United States Senate. The 12 Reserve
Banks, spread across the country from
Boston to San Francisco, each serve a
defined geographic area, or District.
Each Reserve Bank is overseen by its
own local board of directors, with some
elected by the local District banks and
some appointed by the Board of
Governors in Washington. Each Reserve
Bank’s board of directors selects a
president, in consultation with the
Board of Governors, who serves as CEO
and chief operating officer.
Our founders’ original vision
was that the “central” in the central
bank would be minimized. That is, the
Reserve Banks would be relatively
autonomous bankers’ banks providing a
full array of services to the banks
operating in their Districts. The Reserve
Bank would extend credit directly to
District banks with short-term liquidity
needs on a collateralized basis through
rediscounting. Banks would also maintain reserve accounts at their Federal
Reserve Bank and use those accounts to
clear checks, move funds, and obtain
currency for their customers.
Of course, the original vision of
self-contained regional banks began to
erode almost as quickly as the System
was established. Technological change
and the dynamics of the marketplace
were driving the U.S. economy,
particularly its financial and payments
systems, into a more fully integrated
entity. The Federal Reserve System
would have to integrate the activities of
its various components as well. Indeed,
this is exactly what has happened in the
Fed over the course of its history and
what continues to happen today.
This integration has occurred
on all levels, from making policy
decisions to managing backroom
operations. It occurs through all of our
central bank lines of business —
monetary policy, bank supervision and
regulation, and payment system support.
Yet, the integration continues
to evolve within the context of the
“federal” structure established almost 90
years ago. I consider this a testament to
the Federal Reserve’s flexibility and also
to the value of its structure in achieving
the Fed’s mission.
Let me be specific about how
the Fed has evolved its decentralized
structure in each area of its operations.
MONETARY POLICY
When the Fed was founded,
the notion was that local economic
conditions generated local credit conditions and regional Reserve Banks
would help the regional banks address
them. Meanwhile, with the nation on
the gold standard, the overall supply of
money — and, hence, the long-run
price level — was out of the central
bank’s hands.
Today, we think of monetary
policy as an independent tool at the
central bank’s disposal to help stabilize
overall economic performance. The
establishment of the Federal Open
Market Committee was the pivotal
event in the Fed’s evolution to an
independent, activist monetary policymaking body with national macroeconomic objectives.
Although the FOMC was not
formally established until 1935, its
history begins in the 1920s, when
regional Banks began looking for a
source of revenue to cover their
operating costs. As you may know, the
Fed does not receive an appropriation
from Congress. Instead, it funds itself
from the return on its portfolio. In fact, it
was with the intention of funding their
operations that the Federal Reserve
Banks began to purchase government
securities. Eventually, these assets were
managed collectively by the Federal
Reserve Bank of New York. This
portfolio became the System Open
Market Account, through which the
Fed now conducts open market
operations.
Gradually, it was recognized
that the Fed’s open market securities
transactions had a powerful and immediate impact on short-term interest rates
and the supply of money and credit.
Over time, open market operations
became the central tool for carrying out
monetary policy.
Congress created the structure of the FOMC in the midst of the Great Depression. The FOMC consists of the seven members of the Board of Governors and the 12 Reserve Bank presidents. Because it is a mix of presidential appointees, the members of the Board of Governors, and Reserve Bank presidents, who are selected by their respective boards of directors, the FOMC is a blend of national and regional input of both public and private interests.
The fundamental insight is this: While there can be only one national monetary policy, making the right policy decision is the product of sharing perspectives from different regions of the country.
The Reserve Bank presidents provide both valuable up-to-date intelligence about economic conditions and the perspective of business people about prospects for the future. They glean these from their meetings with their Banks’ boards of directors and advisory councils, through informal “town meetings” around their Districts, as well as through the contacts they make in the everyday course of operating a Reserve Bank.
Some of this finds its way into our regional reviews, the so-called Beige Book, but even this suffers from time lags and a formulaic approach to gathering intelligence. Our real-time grassroots perspective is valuable for helping to overcome the fundamental challenge to monetary policy — the effects of long and variable lags on its impact.
Beyond this, the Reserve Bank presidents can also bring broader perspectives on monetary policy. On a theoretical level, differences can coexist on the structure of the economy and the role of monetary policy. Some well-known examples include the monetarist perspective championed by the St. Louis Fed and the real-business-cycle perspective supported by research at the Minneapolis Fed. On a more practical level, differences still exist in the geographic distribution of industries across our nation. The perspective of some regions gives particularly useful insight into certain parts of our economy, for example, San Francisco’s technology focus and Chicago’s heavy industry concentration.
Decisions are usually made by consensus, so unanimous decisions are the rule rather than the exception. Nonetheless, we do have a
voting procedure. The 12 voting
members make the formal decision of
the FOMC. All seven Governors vote at
all times, while only five of the 12
presidents vote, on a rotating basis.
Philadelphia happens to be a voting
member in 2002. In any case, we all
participate on equal terms in the
discussion and consensus building that
leads to the formal policy vote.
Once the FOMC has made its
decision on the appropriate target level
for the federal funds rate, it is up to the
Fed’s trading Desk located at the
Federal Reserve Bank of New York to
achieve the objective. To facilitate that
process, a policy directive is drafted
requesting the appropriate action by the
New York Desk to achieve the overnight borrowing rate target.
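To make the mechanics concrete, here is a minimal, purely illustrative sketch of the idea behind those open market operations: the Desk adjusts the supply of reserves until the overnight rate implied by banks' demand for reserves sits at the FOMC's target. The linear demand curve, the coefficients, and the dollar figures below are invented for illustration and are not a model of the actual federal funds market.

    # Toy sketch of how open market operations steer the funds rate toward target.
    # The linear reserve-demand curve and all numbers are assumptions for illustration.
    def implied_funds_rate(reserve_supply, intercept=8.0, slope=0.05):
        """Overnight rate implied by a hypothetical linear demand for reserves."""
        return max(intercept - slope * reserve_supply, 0.0)

    target_rate = 1.75   # hypothetical FOMC target, in percent
    reserves = 120.0     # hypothetical reserve supply, in billions of dollars

    # Rate above target: buy securities, adding reserves. Rate below target: drain reserves.
    while abs(implied_funds_rate(reserves) - target_rate) > 0.01:
        reserves += 1.0 if implied_funds_rate(reserves) > target_rate else -1.0

    print(f"A reserve supply of about {reserves:.0f} billion dollars puts the implied "
          f"overnight rate at {implied_funds_rate(reserves):.2f} percent, the target.")

In practice the Desk's daily operations are far more involved, but the sketch captures the direction of each trade: purchases add reserves and push the overnight rate down; sales drain reserves and push it up.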
Time has shown that the
structure of the FOMC uses the
decentralized Federal Reserve to its best
advantage. This structure allows for the
generation of well-informed monetary
policy decisions at the national level,
plus an ability to communicate decisions
and rationale to various parts of the
nation. This two-way exchange of
information enhances the Fed’s ability
to monitor the economy and build
consensus for the needed policy action.
PAYMENTS INFRASTRUCTURE
Monetary policy is the role for
which central banks are best known.
But the Fed also plays an integral role in
the U.S. payments system. In fact,
payments processing is the largest
component of Fed operations. System-wide, the Federal Reserve Banks employ
over 23,000 people. Of these, about
12,000 — roughly half — are involved
in payments.
Over the years, the Fed’s
decentralized structure has given us an
advantage in supporting the payments
system. The U.S. has long been a nation
of many small banks serving relatively
limited geographic areas. Establishing a
network for the efficient movement of
money among them is one reason the
Fed was founded. One of the Fed’s first
projects was setting up a check-clearing
system. In that system, each Reserve
Bank provided the banks in its District
with a local clearinghouse and access to
a national clearing network through its
sister Reserve Banks.
As early as 1918, the Reserve
Banks also gave banks in their Districts
convenient access to a national
electronic funds transfer network —
Fedwire. At that time, transfers were made via telegraph connections among the Reserve Banks.
The traditional paper-based
forms of payment — cash and check —
still require a decentralized delivery
network. However, over time, the
movements toward electronic payments
and mergers in the U.S. banking
industry have been driving the Fed
toward greater coordination and
consolidation of payments services.
Accordingly, the Fed has reorganized to
provide nationally managed services
through the decentralized structure of
the regional Reserve Banks.

First, at the strategic level, the
Federal Reserve has established the
Payments System Policy Advisory
Committee (PSPAC). Its mission is to set
the direction for Fed payments activities
System-wide. Like the FOMC, PSPAC
is a committee of Fed Governors and
Reserve Bank presidents.
Second, at the operational
level, the Reserve Banks coordinate
their payments operations through
national product offices, reporting to the
Financial Services Policy Committee. By
this means, each payments product is
centrally managed by one Reserve Bank
and delivered, as appropriate, through
the Reserve Bank distribution network.
SUPERVISION AND
REGULATION
I have discussed the benefits of
the Federal Reserve’s decentralized
structure on the monetary policy
decision process, as well as on its
evolving role in the nation’s payment
system. This structure has also served us
well in our third area of responsibility,
bank supervision and regulation.
As I noted earlier, the U.S. has
long been a nation of many small banks,
serving local communities in narrow
geographic areas and offering relatively
limited product lines. This was primarily
the result of government regulation.
Long-standing state laws prohibited
banks from branching across state lines
and frequently other political boundaries as well. Then, in reaction to the
Great Depression, the U.S. Congress
passed legislation prohibiting commercial
banks from engaging in investment
banking or insurance activities.
During that period in our
history, under delegated authority, local
Reserve Banks kept a close watch on
the safety and soundness of the local
banks under their jurisdiction.
But, recently, in the U.S. and
around the globe, a wave of deregulation has cut away the thicket of
limitations on banks’ activities. Now
technology and the marketplace are
driving banking organizations to expand
their geographic reach and diversify
their array of product offerings. The
result has been the growth of larger and
more complex banking organizations
with national or international scale and
scope.
Through this process of
change, the Federal Reserve’s role in the
regulatory structure has been expanding. Congress first entrusted the Fed
with the responsibility of regulating all
bank holding companies. More recently,
the Fed has been assigned the additional role of “umbrella supervisor” for
newly formed financial holding
companies. As such, the Fed aggregates
the assessments of other regulators of the
financial services industry to form an
enterprise-wide view of risk and protect
depository institutions.
To fulfill its responsibilities in
this new environment, the Federal
Reserve has been transforming its
supervision and regulation function. Our
focus has shifted from point-in-time
financial statement reviews to continuous risk-based assessments; from on-site
examinations to early warning systems;
from strictly financial evaluations to
ones that include increased emphasis on
community lending and technology.
Furthermore, in light of the shift toward
broad financial holding companies, we
are working in closer cooperation with
other regulators in the banking and
financial industry.
In addition, to properly oversee
larger, more complex organizations, we
have employed new and more sophisticated analytical tools and have consolidated examination reports from
geographically dispersed subsidiaries into
overall financial profiles.
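As a purely hypothetical illustration of what consolidating geographically dispersed examination results into a single profile can involve (this is not the Federal Reserve's actual methodology, rating scale, or data), one might weight each subsidiary's examination assessment by its share of the organization's assets:

    # Hypothetical roll-up of subsidiary exam assessments into one enterprise profile.
    # Names, ratings (1 = lowest risk, 5 = highest), and asset sizes are all invented.
    subsidiaries = [
        {"name": "East Coast bank subsidiary", "risk_rating": 2, "assets": 180.0},
        {"name": "West Coast bank subsidiary", "risk_rating": 3, "assets": 90.0},
        {"name": "Nonbank securities affiliate", "risk_rating": 4, "assets": 30.0},
    ]

    total_assets = sum(s["assets"] for s in subsidiaries)
    composite = sum(s["risk_rating"] * s["assets"] for s in subsidiaries) / total_assets

    print(f"Asset-weighted composite risk assessment: {composite:.2f} on a 1-to-5 scale")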
Our approach has been the
System-wide coordination of bank
supervision to achieve efficiency in staff
deployment, yet still gain the benefits of
specialized knowledge. Still, we have
maintained face-to-face contact with
the regulated institutions, as well as the
use of on-site examinations. In the end,
even with all the changes in the
financial services industry, there is no
substitute for first-hand knowledge of
the organization and its leadership. The
Reserve Bank network allows the Fed to
have geographic proximity, which
substantially improves its ability to know
the institutions it regulates.
CONCLUSION
Since its creation almost 90
years ago, the Federal Reserve has
survived, and succeeded, by evolving.
Through congressional mandates and its
own internal restructuring, the Fed has
proved an ever-changing entity,
decentralized yet coordinated. The
trends in the financial sector imply a
continuation of the move toward a
single national market, with a growing
number of national and international
players. As a result, further coordination
and consolidation of activity are inevitable.
Yet, even as we develop into a
more fully integrated organization to
better address our central bank responsibilities, we continue to extract value
from our decentralized structure. Today,
as we have seen in both normal times
and times of crisis, the regional structure
of the Federal Reserve System is one of
its greatest strengths. BR


The Philadelphia Fed Policy Forum:
Summary of the 2001 Policy Forum and
Announcement of the 2002 Policy Forum
BY LORETTA J. MESTER

On November 30, 2001, the Federal Reserve
Bank of Philadelphia held its first
Philadelphia Fed Policy Forum, “Three
Questions for Monetary Policymakers.” This
event, sponsored by the Bank’s Research Department,
brought together a group of highly respected academics,
policymakers, and market economists, for discussion
and debate about important macroeconomic and
monetary policy issues the Fed needed to address in the
coming year. The Policy Forum was not intended to be
a traditional academic conference on monetary policy,
nor was it intended to be a discussion of issues relevant
to the next FOMC meeting. Rather, we took a longer
term perspective and tried to engage the right people in
a discussion of current macroeconomic research and its
implications for monetary policy.

Loretta J. Mester is a senior vice president and director of research at the Federal Reserve Bank of Philadelphia.

Last year’s Policy Forum addressed three questions facing monetary policymakers: How Should Monetary Policy React to Asset Prices?; How Should Monetary Policy and Fiscal
Policy Interact?; and How Transparent
Should a Central Bank Be?* In my
closing remarks, I noted that because
conferences have to be planned so far in
advance, there is always a danger of
focusing on topics that lose their
relevance as the event approaches. And
the economic changes that occurred
between November 2000 and November
2001 were striking, as the economy
headed into recession and the
September 11 tragedy unfolded. Still, in my view, the topics discussed at the Policy Forum turned out to be even more relevant as the economic landscape shifted, and the day’s program generated interesting debate and discussion and provided numerous insights.

* Many of the presentations reviewed here are available on our web site at www.phil.frb.org/conf/policyforum.html.
President Anthony M.
Santomero of the Philadelphia Fed
began the day, pointing out that
Chairman Greenspan moved one aspect
of the first issue, “How Should Monetary
Policy React to Asset Prices?” to the
forefront several years ago and, in the
process, introduced a new phrase into
the financial lexicon. During a speech in
December 1996, with the Dow at 6437,
he posed the now famous question:
“… how do we know when irrational
exuberance has unduly escalated asset
values?” As Santomero said, since that
time, the Chairman’s question has
gotten tougher to answer. And the
follow-up questions — How do dramatic
shifts in asset values affect aggregate
spending? Should they figure into the
Fed’s monetary policy decisions? And if
so, how? — are equally tough to answer.
In Santomero’s view, the Fed must take
into account the potential impact of
asset markets on the real economy.
Meanwhile, asset market participants
must take into account the Fed’s impact
on financial conditions, real-sector
performance, and hence the returns on
their portfolios. In his view, the Fed and
the asset markets are locked into a
complicated game. The question is:
What are the rules of that game, and
how should the Fed play it?
Santomero said that the
second Policy Forum question, “How
Should Monetary Policy and Fiscal
Policy Interact?” has also grown more
important and has become more
complex. The Fed can move monetary
policy quickly, but the effects of its
actions unfold slowly. Fiscal policy
actions usually take longer to
implement, but their impact may come
more quickly. (Andrew Abel, another
speaker, elaborated on this point.) It is
important to address how policymakers
should use the two to provide steady
support for the economy going forward,
add strength if necessary, and ease back
when appropriate.
In addressing the third
question, “How Transparent Should a
Central Bank Be?” Santomero
indicated the case for transparency is a
strong one. Making the Fed transparent is one way to cut through the
complex interplay between financial
markets and the Fed. The Fed states
its policy goals and its intended path for
achieving them. Financial markets
process the information efficiently and
adjust accordingly. Uncertainty and the
risk of confusion are minimized;
efficiency is maximized. Transparency
also serves the broader goal of building
public confidence in the Fed as an
institution. But Santomero also made
the point that in assessing whether
there is a need for greater transparency
one must consider a tradeoff. Greater
transparency can improve the Fed’s
clarity and increase its accountability.
But it can also limit the Fed’s
resourcefulness and slow its response
times. Wise and timely policy decisions
are the product of frank discussion and
open debate among the policymakers,
and maintaining the confidentiality of
those proceedings helps preserve their
frank and open character. In
Santomero’s view, the issue is striking
the right balance of transparency and
confidentiality.
In closing, Santomero made
the point that the issues being discussed
were relevant not only in the U.S. but
to policymakers around the world.
HOW SHOULD MONETARY
POLICY REACT TO ASSET
PRICES?
As pointed out by discussant
Mark Watson of Princeton University,
the first session’s three papers illustrate
that the answer to the question “How
Should Monetary Policy React to Asset
Prices?” depends on the imperfections
and frictions in the economy. The three
papers reached different conclusions
about how monetary policy should
respond, since they assumed different
causes for variations in asset prices.
Fernando Alvarez of the
University of Chicago presented a paper
that investigated how optimal monetary
policy changes when market participants’ level of risk aversion changes. In
his model, stock price fluctuations arise
from variations over time in the level of
investors’ risk aversion. The optimal
monetary policy depends on the level of
risk aversion, since inflation facilitates
risk sharing. Market participants trade
in the market (incurring transactions
costs) to insure against idiosyncratic
shocks to their income. Inflation reduces
the income of all economic agents, and
at the margin, it compresses the distribution of income, thereby reducing the
need to trade for insurance purposes.
But too much inflation leads everyone
to trade and incur transactions costs.
The optimal inflation rate balances
these two forces. Risk aversion affects
the amount of trading and therefore
the need for inflation to reduce cross-sectional income dispersion. When risk aversion is higher than average, the
optimal monetary policy is to choose a
lower inflation rate than average; when
risk aversion is less than average, the
optimal monetary policy is to choose a
higher inflation rate than average.
Thus, high risk aversion leads to lower
prices of risky assets and to lower levels
of inflation, and in this sense, optimal
monetary policy is procyclical.
Bill Dupor of the Wharton
School, University of Pennsylvania,
discussed his work on how monetary
policy should optimally respond to
movements in asset prices. According to
his model, optimal monetary policy is
contractionary in response to an
inefficient boom in the stock market or
in investment. Thus, in contrast to
Alvarez, the optimal policy is countercyclical. In Dupor’s model, firms make
investment decisions to maximize the
expected present value of their real
profits, but they sometimes mis-estimate
the future return to their investment.
These mis-estimates drive investment
and asset price movements in the model.
When firms overestimate future returns
to capital, they increase physical
investment and asset prices appreciate.
Optimal monetary policy works not only
to reduce nominal price fluctuations in
the economy but also to reduce these
nonfundamental asset price movements,
since these movements indicate that
firms’ investment decisions have been
distorted. By running a contractionary
policy in the face of inefficiently high
asset prices, the monetary authority
reduces the return on investment and
lowers the distortion. Dupor’s model
provides a formal justification for
monetary policy to respond to
nonfundamental movements in asset
prices at the expense of nominal price
stabilization.
The third presenter, Mark
Gertler of New York University,
summarized and updated his recent
work with Ben Bernanke. Asset price
bubbles can cause fluctuations in
spending and inefficient business cycles,
but in designing optimal monetary
policy, the central bank must remain
cognizant of the fact that it cannot be
confident about whether fundamentals
(like an improvement in technology) or
nonfundamentals (a bubble) are driving
asset prices. Gertler also pointed out
that even if the central bank were
certain that a rise in stock prices was a
bubble, the link between high-frequency moves in asset prices and spending is quite imprecise. In
Gertler’s view, the best feasible policy for
dealing with the harmful effects of asset
price bubbles is a flexible inflation-targeting strategy in which the central
bank commits explicitly or implicitly to
adjust interest rates to stabilize inflation
over the medium run. A central bank
that follows an inflation-targeting
strategy should respond to changes in
asset prices only to the extent that such
changes affect the central bank’s
forecast of inflation or deflation, or
movements in the equilibrium real
interest rate. This strategy would lead
the central bank to accommodate asset
price movements driven by fundamentals but offset nonfundamental asset
price movements that generate
inflationary and deflationary pressures.
Thus, the central bank should not
ignore asset prices; the central bank
should include them in the information
set with which it forecasts inflationary
pressures or movements in the
equilibrium real interest rate. In
Gertler’s view, inflation targeting
provides a nominal anchor for monetary
policy and has worked well in practice,
although, he points out, such a strategy
has not been stress tested by large swings
in asset prices.
In their work, Gertler and
Bernanke simulated how the economy
would react to a boom and bust cycle in
asset prices when the central bank
practices inflation targeting, that is,
when the monetary policy interest rate
instrument responds primarily to
changes in expected inflation. They
find that inflation targeting yields good
outcomes, substantially stabilizing both
output and inflation, when asset prices
are volatile. As in Dupor’s model, the
central bank offsets the effects of purely speculative increases or decreases in stock values that are transmitted through aggregate demand, and it accommodates
technology shocks. They found little
additional gain from allowing monetary
policy to respond to stock price
movements over and above their
implications for inflation. Gertler also
pointed out that aside from the model
predictions, it might be dangerous to
have the central bank attempt to
influence stock prices, since the effects
of such attempts on market psychology
are very unpredictable. Finally, Gertler
presented results suggesting that there is
only an imprecise link between short-term changes in asset prices and
spending. While more permanent
changes in asset prices, which change
wealth, lead to changes in consumption
spending (the wealth effect) and
investment spending, the evidence
indicates that short-run changes in asset
prices do not have a large impact on
spending. In Gertler’s interpretation, this
again suggests there is little to be gained
following policies that target asset prices.
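The kind of interest rate rule being compared in these simulations can be sketched as follows. This is a generic, illustrative Taylor-type rule, not Gertler and Bernanke's actual specification; the coefficients and the "stock price gap" input are assumptions made only for the example.

    # Illustrative Taylor-type policy rule, with and without a stock-price term.
    # All coefficients and inputs are assumed for the example, not taken from the paper.
    def policy_rate(expected_inflation, stock_price_gap, respond_to_stocks=False,
                    neutral_rate=4.0, inflation_target=2.0, phi_pi=1.5, phi_s=0.1):
        """Nominal policy rate implied by a simple rule (all values in percent)."""
        rate = neutral_rate + phi_pi * (expected_inflation - inflation_target)
        if respond_to_stocks:
            rate += phi_s * stock_price_gap  # lean against nonfundamental stock moves
        return rate

    # Hypothetical situation: expected inflation of 2.5 percent and stock prices
    # 20 percent above the level suggested by fundamentals.
    print(policy_rate(2.5, 20.0))                          # inflation-only rule: 4.75
    print(policy_rate(2.5, 20.0, respond_to_stocks=True))  # rule with stock term: 6.75

The finding reported above is that, once expected inflation is accounted for, adding the extra stock-price term buys little additional stabilization.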
Mark Watson of Princeton
University, the first discussant, pointed
out that the three papers all conclude
that monetary policy can act in a way to
ensure and improve macroeconomic
stability. But they differ in their
recommendations of how policy should
behave: Alvarez’s model suggests the
central bank should ease monetary
policy in response to rising asset prices;
Dupor’s model suggests the central bank
should tighten in response to rising asset
prices; and Gertler-Bernanke suggest
that the central bank should essentially
ignore asset prices except to the extent
that asset prices help forecast or signal
something about the overall state of the
economy or inflation. But how useful
are asset prices in forecasting future
inflation or future output? As Watson
points out, the answer is very mixed in
the literature. Watson’s comprehensive
study with James Stock of Harvard
University of seven countries and 38
asset prices, forecasting over two time
periods (1971-1983 and 1984-1998),
indicates that asset prices are useful for
predicting inflation sometimes and
somewhere, but there is little
consistency and there is a lot of
instability across time. For example,
trying to rely on one or two asset prices
to forecast inflation or output would be a
mistake — the forecasts are too noisy.
But if one combines information from
many asset prices in constructing
forecasts and averages across many asset
price predictors, one obtains forecasts
that are better than those that ignore
asset prices — essentially, one can
average out the noise.
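A stylized sketch of that combination idea, using made-up numbers rather than Stock and Watson's data or method: average the inflation forecasts implied by many individual asset prices so that the idiosyncratic noise in each tends to cancel.

    # Stylized forecast combination: average many noisy asset-price-based forecasts.
    # The individual forecasts are invented; the point is only that a simple average
    # is typically less noisy than any single predictor.
    individual_forecasts = [1.8, 3.4, 2.6, 1.2, 2.9, 2.2, 3.1, 1.9]  # percent inflation

    combined = sum(individual_forecasts) / len(individual_forecasts)
    print(f"Combined inflation forecast: {combined:.1f} percent")  # about 2.4 percent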
Ben Bernanke of Princeton
University (and current member of the
Federal Reserve Board of Governors)
stated that the Alvarez and Dupor
papers provide nicely worked out
theoretical analyses of the case for
monetary policy to respond to the stock
market over and above the extent
implied by the market’s implications for
inflation. Bernanke pointed out that this
is true even in the Bernanke-Gertler
model, since stock market bubbles lead
to excessive volatility in investment.
However, in Bernanke’s view the real
question is whether, in practice, we have
sufficient confidence in our understanding of stock market behavior and
its response to monetary policy to
improve over an inflation-targeting rule.
He is skeptical that we do or that the
Fed does, and he feels that history
argues against trying to stabilize the
stock market. While he strongly
encourages the central bank to make
emergency responses to financial crises
to protect the payments system (for
example, the 1987 stock market crash,
the Russian default, September 11),
Bernanke pointed out that past attempts
to prick stock market bubbles have led
to some very bad outcomes.
Jeremy Siegel of the Wharton
School, University of Pennsylvania, the
session’s final discussant and moderator,
argued that while there is some
empirical evidence that asset prices
might not be that informative about the
economy, in his view, they are becoming
increasingly informative. For example,
consumer confidence is more linked to
the stock market and the cost of capital
is dependent on equity prices. In his
view, there are signals in equity prices
that the Fed should pay attention to.
And he also believes that the Fed
should respond to them, but not with
the aim of pricking a bubble. For
example, to the extent that the late
1990s stock market boom reflected an
increase in productivity and therefore a
rise in the potential growth rate of the
economy, the equilibrium real interest
rate rose. Had the Fed not raised
interest rates, inflationary pressures
would have built. On the other hand, if
the central bank believes that the
market is too high, then in Siegel’s view,
trying to prick the bubble can be risky
because there are lags in the effect of
policy and interactions between policy
and the market.
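The link Siegel draws between faster trend growth and a higher equilibrium real rate can be made explicit with a textbook approximation from the consumption Euler equation (a standard relation, not something presented at the Forum):

    r^* \approx \rho + \sigma g

where r* is the equilibrium real interest rate, ρ the rate of time preference, σ the inverse of the intertemporal elasticity of substitution, and g the trend growth rate of consumption. A lasting rise in productivity growth raises g and hence r*, so holding the policy rate unchanged would have amounted to easier policy.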
HOW SHOULD MONETARY
POLICY AND FISCAL POLICY
INTERACT?
A panel of four speakers
addressed our second question.
Andrew Abel of the Wharton School,
University of Pennsylvania, started by
laying out and commenting on some of
the channels of interaction between
monetary and fiscal policy, some of
which he feels are more relevant now
than others. These include financing
and monetizing government deficits;
the effect of inflation on tax rates and
revenues; open market operations in
Treasury securities; the liquidity trap;
lags in monetary and fiscal policy; and
short-run vs. long-run uses of policy.
The first channel, financing
the government deficit, is the oldest and
simplest issue, according to Abel. During
World War II, the Fed cooperated with
the Treasury by keeping interest rates
low to reduce the Treasury’s financing
costs. But since the Treasury-Federal
Reserve Accord of 1951, the Fed has
become independent of the Treasury,
which is not to say that fiscal policy has
no effect on monetary policy.
Inflation affects effective tax
rates, since the tax code is not indexed
to inflation. Abel pointed out that
Martin Feldstein estimated that a
2-percentage-point reduction in
inflation would increase welfare by 1
percent of GDP per year, through its
impact on effective tax rates. In Abel’s
view a simple and desirable way of
remedying the problem would be to
index the tax code.
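A back-of-the-envelope illustration of the channel Abel and Feldstein have in mind (the statutory rate, real return, and inflation rate below are my own assumed numbers, not Feldstein's): because taxes are levied on nominal returns, the same statutory rate takes a much larger bite out of the real return when inflation is positive.

    # Illustrative effective tax rate on a real return when nominal returns are taxed.
    # The statutory rate, real return, and inflation rate are assumed for the example.
    statutory_rate = 0.30
    real_return = 0.04
    inflation = 0.03

    nominal_return = real_return + inflation                            # 7 percent
    after_tax_real = nominal_return * (1 - statutory_rate) - inflation  # 1.9 percent
    effective_rate = (real_return - after_tax_real) / real_return       # 52.5 percent

    print(f"Statutory rate {statutory_rate:.0%}; effective rate on the real return "
          f"{effective_rate:.1%}")

Indexing the tax code, as Abel suggests, would mean taxing only the real return, keeping the effective rate at the 30 percent statutory rate regardless of inflation.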
Another issue that has become
more topical is how monetary policy
should be conducted in a world with
shrinking government debt. Abel thinks
this is an interesting question; however,
he points out that over the longer run, it
will be much less of an issue, since
government debt will be “back with a
vengeance” in the long run.
Abel said that contrary to some
economists, he does not think the issue
of the liquidity trap applies to the U.S. at
the moment, although it might apply to
Japan, where interest rates had gone so
low that monetary policy had become
an ineffective tool for stimulating
spending. In Abel’s opinion, the
structural problems in Japan, for
example, the weak banking sector, are
quite different from those the U.S. was
facing at the time of the Policy Forum.
In thinking about how
monetary and fiscal policy interact, Abel
outlined three types of lags. The
recognition lag — how long it takes to
figure out there’s a need for policy
action — is short for monetary policy,
since the meetings are frequent, and
short to medium for fiscal policy. The
decision lag — how long it takes to
implement a policy change — is
incredibly short for monetary policy and
usually long for fiscal policy. Finally,
Abel cites Milton Friedman’s “long and
variable lags” as a good characterization
of monetary policy’s action lag — how
long it takes policy to affect the
economy once it is implemented; the
action lag for fiscal policy, Abel stated,
is medium to long. Based on this lag
structure, monetary policy should be
used for short-run stabilization, since it
generally has shorter lags. But in the
long run, monetary policy should focus
on keeping inflation low and stable.
Fiscal policy should be used to achieve
the following long-run goals. First, assess
whether programs are worth what they
cost; whether there are market failures
that need to be corrected; and what
public goods need to be provided. Then
set taxes to collect sufficient revenues
to fund these expenditures in a way
that respects economic efficiency and
equity and that minimizes distortions,
and perhaps meets some redistributive
goals. In Abel’s view, any short-run
stabilization through fiscal policy should
generally occur through automatic
stabilizers.
R. Glenn Hubbard, chairman
of the Council of Economic Advisers,
said he was also skeptical of using fiscal
policy for short-run stabilization. He
believes that the fiscal policy applied in
2001 was appropriate, viewing the tax
rebates in the spring 2001 tax act not as
a cyclical measure but as down
payments on a permanent tax cut.
Hubbard said the question of how fiscal
and monetary policy should interact is
an important one. He said the key was
cooperation, not coordination. When
monetary policy is made, it must
consider current and future fiscal
policy, and vice versa. The fiscal and
monetary authorities need to
understand what each is doing. At the
simplest level, this means talking to one
another, and there are a variety of ways
in which the Administration and the
Fed do communicate with each other.
This is different from coordination.
Hubbard agrees that monetary policy
independence is a key ingredient of
good policy and benefits the economy.
He pointed to the combination of
monetary and fiscal policy in 2001 as an
illustration of the harmonious working of
monetary and fiscal policy in the U.S.
And he stated that he believes that
monetary policy and fiscal policy are
committed to their long-term goals —
for monetary policy, its long-run goal of
price stability and for fiscal policy, its
long-run goal of improvement in long-term budget balance.
Laurence Kotlikoff of Boston
University disagreed; he thinks that
monetary and fiscal policy have exactly
the wrong long-term goals and direction.
He does not believe monetary policy
and fiscal policy should interact; we do
not want to use monetary policy as a
fiscal instrument. However, Kotlikoff
believes that in the U.S. they will
interact because of the nature of our
long-term fiscal problems. Based on his
research, our fiscal policy is highly
unsustainable. Kotlikoff and co-authors
have used generational accounting to
compare the size of the government’s
bills now and in the future to the
amounts available to pay those bills now
and in the future. These are not in
balance in the U.S. — future generations
will face a much higher tax burden than
the current generation, since we are
passing on a large debt to them.
According to Kotlikoff’s research, in the
U.S. it will be difficult to achieve
generational balance whereby the
lifetime net tax rates of future
generations equal that of the current
generation. Other countries facing a
similar problem have used hyperinflation
to bring about balance. Kotlikoff
outlined some alternative policies that
could be used to achieve generational
balance in the U.S., including tax
increases and cuts in transfers and
government purchases. For example,
according to his and his co-authors’
estimates, as of summer 2001, the U.S.
would need to raise federal income
taxes 68 percent or all taxes (local, state,
and federal) 26 percent to achieve
generational balance. Alternatively, it
would take a cut of 44 percent of all
government transfers. Kotlikoff said
that these numbers were so scary
because the demographics are that bad
— he stated that in 30 years the U.S.
will have twice as many old people and
only 15 percent more workers. Kotlikoff
pointed out that some have argued that
economic growth will bail out the U.S.
from this problem — as the population
ages, there will be a lot of wealthy older
people relative to young workers, which
will lead to more capital per worker,
higher real wages, and capital
deepening. This would mean that we
would have a higher tax base and that
tax rates would not have to rise as much.
Kotlikoff does not subscribe to this view.
He presented the results of some
simulation exercises that indicate that
instead of capital deepening, the
economy could experience capital
shallowing during the demographic
transition, since payroll tax rates might
have to rise so much. In conclusion,
Kotlikoff said that the menu of things
the U.S. needs to do to solve its fiscal
problem is very painful, but the
unsustainability of our current fiscal
policy should not be ignored, given the
great harm that has been inflicted on
other countries’ economies by their
pension liabilities.
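For readers unfamiliar with the bookkeeping behind such estimates, here is a toy version of the fiscal-gap arithmetic. The present values below are invented placeholders, not Kotlikoff's estimates; the point is only how a gap between the government's bills and its resources translates into a required permanent tax increase.

    # Toy generational-accounting arithmetic with invented present values (PV),
    # all in trillions of dollars; none of these figures come from Kotlikoff's work.
    pv_projected_spending = 60.0  # PV of projected transfers and purchases (assumed)
    pv_projected_taxes = 45.0     # PV of taxes under current policy (assumed)
    net_debt_outstanding = 3.0    # existing net government debt (assumed)

    fiscal_gap = pv_projected_spending + net_debt_outstanding - pv_projected_taxes
    required_increase = fiscal_gap / pv_projected_taxes

    print(f"Fiscal gap of {fiscal_gap:.0f} trillion dollars; closing it immediately "
          f"would take a permanent {required_increase:.0%} rise in all taxes.")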
In his response, Hubbard said
that the government’s fiscal situation is
less harrowing than the version
presented by Kotlikoff. He interprets
Kotlikoff’s research as making the
important point that delay in addressing
the problem is very costly; it is important
to take action. In Hubbard’s view,
action is being taken. In his opinion,
there is nonpartisan recognition of the
need to shore up entitlements and avoid
the crisis Kotlikoff discusses, and progress
is being made.
Christopher Sims of Princeton
University concluded the session by
discussing his research program on what
determines the price level in terms of
monetary and fiscal policy jointly, the so-called fiscal theory of the price level.
Sims explained that this way of thinking
about the price level recognizes that
monetary policy is fiscal policy; there is
no clean distinction between the two.
This might seem to contradict the
notion of central bank independence.
But in most countries, central bank
independence is a convention about
which aspects of fiscal policy are handed
over to the central bank. Monetary
policy has a direct impact on the interest
expenditure component of the federal budget. A change in interest rates
affects the nominal value of these
expenditures, and inflation affects the
real value. As Sims sees it, monetary
policy independence is a convention by
which the effects of monetary policy on
the federal budget aren’t subject to
policy dispute and argument between
the Treasury and the central bank. For
example, the Treasury doesn’t complain
to the Fed that there wasn’t enough
seignorage this year or ask the Fed to
lower interest rates because the interest
component of the budget has increased.
Moreover, the Fed and the public are
confident that when the Fed raises
interest rates, the fiscal system will
absorb the costs of increased interest
expenditures in the budget, for example,
by cutting other expenditures or raising
taxes. If this were not the case, a rise in
interest rates could lead to inflation
rather than having the desired
dampening effect on economic activity.
This convention has arisen to help
control the historical tendency of fiscal
authorities to systematically use
seignorage and inflation as a source of
revenue.
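A common textbook statement of the fiscal theory's core valuation equation, offered here only as a reference point and not as Sims' own notation, is

    \frac{B_{t-1}}{P_t} = E_t \sum_{j=0}^{\infty} \beta^{j} s_{t+j},

where B_{t-1} is the nominal government debt outstanding, P_t the price level, β a real discount factor, and s_{t+j} real primary surpluses inclusive of seignorage. With the stock of debt given, the price level must adjust so that the real value of the debt equals the present value of surpluses. If higher interest rates raise the government's interest expense and the fiscal authorities do not adjust surpluses, the equation is satisfied instead by a higher price level, which is the inflationary outcome Sims warns about.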
Independence is a good idea in
normal times, but it is possible only over
a certain range of conditions. Sims
argued that if we don’t understand its
nature, that is, that central bank
independence is a convention and
monetary policy has a fiscal impact, then
we can get into trouble in certain
historically unusual circumstances. For
example, during a liquidity trap, the
central bank might have to change how
it implements monetary policy in order
to have an effect. Instead of buying
short-term nominal government debt, it
might have to purchase other assets, like
long-term bonds, foreign government
bonds, or loans from banks, which would
expose the central bank’s balance sheet
to risk. Were those risks to materialize, the
Treasury would need to recapitalize the
central bank, and it should do so, even
though this would be a breach of the
usual independence between the
central bank and the Treasury. Another
extreme circumstance is wartime.
During almost every war the U.S. has
fought, a substantial fraction of the
financing of the war has come from
seignorage and inflation. In Sims’ view,
a surprise inflation that reduces the
value of outstanding government debt
— if used at times of fiscal stress when
the alternative is increased distortionary
taxes — may be a good thing to do.
Sims added that it is obviously not a
good thing to do regularly, and indeed,
it would work only if it were a surprise.
In relating his work to the
economic situation at the time of the
Policy Forum, Sims said he thought it
most likely that the U.S. economy
would not find itself in either of these
circumstances (that is, liquidity trap or
fiscal exigency). However, he said that
one thing we have learned is that
extremely surprising things can happen.
Thus, it is worthwhile having the
discussion.
HOW TRANSPARENT SHOULD A
CENTRAL BANK BE?
Our final session tackled the
issue of central bank transparency, that
is, how the central bank communicates
and explains its actions to the public. In
the view of all the speakers, transparency is beneficial, and central banks
have made progress toward greater
transparency in just a short time. Our
speakers did differ in their assessments
of the amount of progress that has been
made and that still needs to be made.
William Poole, president of the
Federal Reserve Bank of St. Louis,
began the discussion, pointing out that
the real questions are how, in fact, to be
transparent and what being transparent
really means. Poole said that
transparency means providing “the
fullest explanation possible of policy
actions and the considerations
underlying them, in as timely a manner
as possible.” One benefit of transparency is that it helps policymakers
themselves develop coherent views.
Having to explain things helps clarify
one’s own thinking. The success of
monetary policy depends on market
expectations and market confidence,
and those will be more accurate and
complete, the better market participants
understand the Fed’s actions. In Poole’s
view, the macroeconomics literature
supports the case for policymakers to
provide as much information as they can
about policy. This does not necessarily
mean that all disclosures are beneficial,
since meetings held in the open would
yield a different type of deliberation, not
necessarily better policymaking, and the
public might become confused more
than enlightened. But Poole said that
releasing transcripts of FOMC meetings
with a five-year lag, as is current
practice, does not inhibit his discussion
at meetings and provides a valuable
record for scholars. Poole also discussed
some of his research, co-authored with
others at the St. Louis Fed, showing that
prompt disclosure of policy actions
significantly improved the accuracy of
market forecasts of policy actions. Poole
concluded his remarks by indicating two
ways to improve Fed transparency:
announcing an explicit inflation
objective and reducing the statement
released at the end of FOMC meetings
to simple, boilerplate language (since the
current statement is open to a variety of
interpretations and may increase
uncertainty in the market).
Michael Prell, consultant and
former director of Research and
Statistics at the Board of Governors of
the Federal Reserve System, indicated
that the amount of information released
by the Fed has increased greatly over
the last 30 years. Prell says this has
served several purposes, including
meeting the demands of Congress,
lowering the “suspicions in some circles
that a secretive, non-elected body is
manipulating the financial markets,”
and increasing the effectiveness of
policy by allowing the markets to better
anticipate Fed policy actions. But in
Prell’s opinion, the Fed has been wary of
transparency over the years. In his view,
there has been some concern that
greater openness could jeopardize the
Fed’s independence and that markets
might overreact to indications of
potential Fed policy actions, thereby
causing noise that distorts the signals the
Fed could otherwise draw from the
market about underlying economic
pressures. He does say that the
challenge of transparency is greater
because Fed policymakers can have
disparate analytical views about the
economy, but he is against trying to
regiment these “many voices of the
System.” Rather, he favors allowing
these voices to speak, but in a clearer
fashion. In his view, the post-meeting
announcements by the FOMC are an
advance in transparency, although they
fall short of desired clarity. In conclusion, Prell says that the answer to the question posed in the session, “How transparent should a central bank be?” is “As much as possible, without jeopardizing its mission.”
Mickey Levy, chief economist
of Bank of America, provided a private-sector view. Levy believes the Fed has
dramatically improved its implementation of monetary policy and its transparency, with the Fed being more
straightforward and understandable.
However, he thinks further improvement could be achieved. In Levy’s view,
the announcements made by the Fed
after FOMC meetings suffer by
emphasizing current economic
conditions rather than the Fed’s long-run goals. Levy discussed his analysis of
18 FOMC policy announcements made
between February 2000 and November
2001. These announcements were
made after the Fed shifted from
providing a statement about its “bias” to
providing a “balance-of-risks” statement.
In Levy’s view, these announcements
fuel market speculation about near-term
monetary policy, just as earlier
announcements that included the bias
statement did. He also said that the
phrasing of the announcements could
mislead the public into believing that
the central bank’s objective is to limit
economic growth in order to control
inflation, a mistaken view of the
inflation process. In Levy’s view, the
Fed announcements should “reinforce
its long-run objectives and establish
guidelines to achieve them,” as one of
the goals of transparency is to build
credibility. Confusing statements can be
counterproductive.
Our final presenter on the
topic of transparency was Alan Blinder
of Princeton University, a former vice
chairman of the Federal Reserve Board
of Governors. Blinder based his
discussion of the “why, what, and how”
of central bank transparency on a recent
monograph he co-authored on
transparency at central banks around
the world (“How Do Central Banks
Talk?” A. Blinder, C. Goodhart, P. Hildebrand, D. Lipton, and C. Wyplosz).
In his view there has been a revolution
in central bank thinking on the subject
of transparency over the past five to 10
years — a very short period of time.
Blinder and co-authors begin with the
presumption that central banks should
reveal almost all information; while
there will be some pieces of information
that should not be revealed, the central
bank must have a good reason not to
reveal them. In other words, the central
bank should reveal enough information
so that interested observers understand
what it is doing, why it is doing it, and
how it makes decisions, and this
includes forward-looking information.
For the “why” of transparency,
Blinder cited two reasons. First,
transparency is important for democratic
accountability. Second, transparency
aids the effectiveness of monetary
policy, which works through
expectations. Blinder said that in his
FOMC experience, one of the more
difficult parts of setting monetary policy
was in understanding the transmission of
changes in the fed funds rate to other
interest rates and asset prices in the
economy. In his view, transparency helps
tighten the “gearing” between what the
Fed does and what the market does in
reaction to what the Fed does. The
central bank should try to condition
expectations and teach the markets to
think like it does. Blinder thinks that
theoretical arguments for mystery and
surprise do not hold up well to real-world
circumstances.
The “what” of transparency
involves the central bank’s articulating
its objectives. This is more difficult for
central banks such as the Fed that have
multiple objectives (price stability and
sustainable economic growth), and
somewhat easier for central banks with a
single objective, such as inflation targeting. Blinder said the central bank
also needs to reveal its methods,
including forecasts and models, for
reaching policy decisions. He noted that
the details of the forecast (for example,
forecasts of housing starts seven quarters
from now) are less important to most
people than the broad contours of the
outlook. He also favors the central bank
giving forward-looking indicators (for
example, the “balance-of-risks” or the
“bias”) of future policy actions.
The “how” of transparency
depends on how monetary policy
decisions are made at the central bank.
Blinder and co-authors categorize
central banks into three types: decisions
made by an individual (for example, the
Reserve Bank of New Zealand);
decisions made by a collegial committee
that works to reach a consensus (for
example, the European Central Bank);
and decisions made by an “individualistic” committee in which people vote
what they believe and the majority rules
(for example, the Bank of England).
Blinder and co-authors believe that the
modes of being transparent are different
in these three cases. As a simple
example, consider the question of how
much to reveal in statements versus in
the minutes of the meeting. When the
decision is made by an individual there
is no meeting and so no transcript to
issue. But then it is important for the
individual decision maker to explain
fully his or her rationale for the decision.
With an individualist committee, it is
difficult to explain the diverse views in a
statement. For Blinder and his co-authors, if the committee is collegial,
there is a real danger in having a
cacophony of voices, which may provide
a lot of noise without providing any new
information. However, if the committee
is an individualistic one, differences in
opinions across committee members are
very relevant and give forward-looking
information to the market. In this case,
Blinder (like Prell) thinks communication should be encouraged.
Blinder agreed with the other
speakers that the Fed has become more
transparent over time, pointing out that
it was only in 1994 that the Fed began
announcing its decision after FOMC
meetings. Unlike Levy, he views the
“balance-of-risks” statement as a vast
improvement over the “bias” statement.
He agrees with Prell that the statements

have improved over time, but he also
agrees with Prell and Levy that there is
further to go in making the statements
more informative. And while he
“philosophically” agrees with Poole that
the transcripts are valuable scholarly
records, he believes the cost has been
too great in terms of stultifying
conversation and debate; so he favors
discontinuing verbatim transcripts of
FOMC meetings. To conclude, Blinder
laid out what he would like the Fed to
do: clarify its objectives, publish its
forecasts, and make fuller statements.
In particular, Blinder said this will
become much more important in the
post-Greenspan era, when the markets
have to learn and understand the Fed’s
decision-making under a new Chairman.
KEYNOTE ADDRESS: THE
CENTRAL BANK OF BRAZIL:
TRANSFORMATION TO
TRANSPARENCY
Dr. Arminio Fraga, Governor
of the Central Bank of Brazil, delivered
the keynote address. Fraga presented an
overview of the reforms that have been
implemented by the Central Bank of
Brazil to increase the level of transparency. Included in the reforms was a
move to inflation targeting. Fraga
discussed the steps the Central Bank has
taken to announce its targets and
disclose information about its policy
meetings and its economic models. He
also discussed the benefits of such
reforms and the progress that has been
made on the inflation front in Brazil
since the reforms have been implemented. In Fraga’s view, over the years,
Brazil has been a laboratory; it has had
to deal with many of the issues research
economists in the Federal Reserve
System, other central banks, and
academia have studied. In Fraga’s
opinion, the Central Bank of Brazil’s
transparency has been beneficial to the
economy of Brazil. BR


We will hold our second annual Philadelphia Fed Policy
Forum on November 22, 2002 (the Friday before
Thanksgiving). This year’s topic is “Crises, Contagion, and
Coordination: Monetary Policy Issues in a Global Context.”
At right is the program. The Policy Forum brings together a
group of distinguished economists and policymakers for what
we hope will be a rousing discussion and debate of the
issues. For information on attending this year’s event, please
contact us at PHIL.Forum@phil.frb.org or visit our web page
at www.phil.frb.org/conf/policyforum2002.html.


The Philadelphia Fed Policy Forum
Crises, Contagion, and Coordination: Issues for Policymakers in the Global Economy
November 22, 2002
The Pennsylvania Convention Center, Room 113

Presentations
Welcoming Remarks
Anthony M. Santomero, President, Federal Reserve Bank of Philadelphia
Financial Crises
Moderator and Discussant: Loretta J. Mester, Director of Research, Federal Reserve Bank of Philadelphia
“Financial Crises in Emerging Market Economies”
V. V. Chari, University of Minnesota
“Foreshadowing LTCM: The Crisis of 1763”
Hyun Song Shin, London School of Economics
Financial Contagion and Business Cycle Correlation
Moderator and Discussant: Sylvain Leduc, Senior Economist, Federal Reserve Bank of Philadelphia
“Financial Stability and Currency Areas”
Franklin Allen, The Wharton School, University of Pennsylvania
“Globalization of Financial Turmoil”
Graciela Kaminsky, George Washington University
Policy Coordination
Moderator, Presenter, Discussant: Lawrence Christiano, Northwestern University
“The Gains from International Monetary Cooperation”
Kenneth Rogoff, Economic Counselor and Director, Research Department,
International Monetary Fund
“On the Fiscal Implications of Twin Crises”
Martin Eichenbaum, Northwestern University
“Monetary Policy After a Financial Shock”
Lawrence Christiano, Northwestern University
Policymaking in a Global Context
Moderator and Panelist: Anthony M. Santomero, President, Federal Reserve Bank of Philadelphia
Other Panelists:
Urban Bäckström, Governor, Central Bank of Sweden
Paul Jenkins, Deputy Governor, Bank of Canada
Robert Parry, President, Federal Reserve Bank of San Francisco

Innovation in Financial Services
And Payments
A Conference Sponsored by the Research Department and the Payment Cards Center
of the Federal Reserve Bank of Philadelphia

On May 16-17, 2002, the Federal Reserve
Bank of Philadelphia's Research Department
and Payment Cards Center co-sponsored a
conference on Innovation in Financial
Services and Payments. Robert Hunt, an economist in
the Research Department, put together the program,
which included three distinguished addresses and four
focused sessions. The conference addressed such
questions as: How far has the U.S. progressed in the
transition to electronic consumer payments? Does
competition between payment networks stimulate
innovation? These questions and others addressed at
the conference do not yet have definitive answers. One
goal of the conference was to encourage work in this
important, but under-researched area.

President Anthony M.
Santomero opened the conference,
providing an overview of changes in the
payment system, such as the growing
role played by nonbank providers of
these services. These changes have
important implications for the Fed,
which is both a provider of payment
services and a regulator of private
providers of these services.
In his address, David Balto
(White and Case, formerly of the
Federal Trade Commission) pointed out
that in financial services, innovation
often occurs through joint ventures or
network arrangements, organizational
forms that present a difficult challenge
for antitrust analysis. He explored this
theme by reviewing the complicated
history of antitrust litigation involving
the credit card associations MasterCard
and Visa. Balto was joined in this
conversation by Alex Miller (Visa
U.S.A.). On Friday, Lawrence J. White
(New York University), together with
Scott Frame (Federal Reserve Bank of
Atlanta), reviewed the existing
literature on financial innovation,
emphasizing how little empirical
research has been done and how much
more there is still to do.
The first panel examined
recent trends in the use and efficiency
of consumer payments in the United
States. David B. Humphrey (Florida
State University and Payment Cards

Center visiting scholar) documented the
decline in the share of consumer
transactions paid via cash or check and
the corresponding rise in the share of
payments accounted for by credit cards
and debit cards (point of sale) since
1990. Paul Bauer, together with Patrick
Higgins (both of the Federal Reserve
Bank of Cleveland), found that the unit
cost of the Federal Reserve System's
small-dollar electronic payment network
(the Automated Clearinghouse, or
ACH) has fallen 75 percent since 1990
and that there is evidence of further
economies of scale. The discussant,
Elizabeth Klee (Board of Governors),
emphasized the difficult problem taken
on in the Humphrey paper because of
the very limited data available. Klee also
suggested it would be worthwhile to
extend the analysis in the Bauer and
Higgins paper to cover the most recent
period to evaluate the use of ACH for
electronic check conversion at the point
of sale.
The second panel focused on
the role of networks in financial
markets. In many networks, including
telephone and most payment systems,
the value of participating in the network
rises with the number of other participants. Sujit Chakravorti (Federal
Reserve Bank of Chicago), together
with Ted To (Bureau of Labor Statistics), presented a theoretical model to
show that merchants accept credit cards
even though they are more costly than
cash or check payments because their
sales will be higher if they accept credit
cards than if they do not. This may
increase merchants’ profits in the short
run but, as the authors point out, not
necessarily in the long run. Gautam
Gowrisankaran (Federal Reserve Bank
of San Francisco) and Joanna Stavins
(Federal Reserve Bank of Boston)
showed that a bank's decision to provide
ACH payment services depends on the
concentration of the local banking
market and the extent to which
neighboring banks have adopted ACH.
Discussant James McAndrews (Federal
Reserve Bank of New York) argued that
the Chakravorti and To paper contributed to our understanding of the
welfare implications of pricing in credit
card networks. McAndrews said that
the Gowrisankaran and Stavins paper
had raised the standard for the use of
statistical techniques in empirical studies
of network effects in payments. He also
pointed out competing alternative
explanations of their results that cannot
yet be ruled out.
The third panel investigated
when and how firms in the industry
adopt financial innovations. Scott
Frame, together with Lawrence J. White
and Jalal Akhavein (Moody’s), reported
the results of their survey of banks'

adoption of credit scoring models for use
in small-business lending decisions.
They found that larger banks were more
likely to be early adopters of this
technology, and there was some
evidence of geographic clustering of
adopters (specifically, in the New York
Federal Reserve District). David
Nickerson (Colorado State University)
and Richard J. Sullivan (Federal Reserve
Bank of Kansas City) with Marsha
Courchane (Freddie Mac) examined
the adoption of Internet banking among
banks in the 10th Federal Reserve
District. They found that banks that are
relatively large compared with their
competitors were more likely to adopt
Internet banking. They also found that
banks facing more concentrated rivals
were less likely to adopt Internet
banking. Robert DeYoung (Federal
Reserve Bank of Chicago) discussed the
papers, putting their results in the
context of the industry’s response to
deregulation and new technologies
introduced over the last 20 years.
In the final session of the

conference, Robert Marquez, together
with Robert Hauswald (both of the
University of Maryland), showed how
financial innovations and intellectual
property can affect lenders’ incentives to
engage in their traditional activities of
screening and monitoring borrowers.
This, in turn, will affect the pricing and
availability of credit. John R. Thomas
(George Washington University Law
School) described the recent phenomenon of patenting methods of doing
business and how this may soon affect
providers of financial services. The
discussant, Bob Hunt (Federal Reserve
Bank of Philadelphia), described how
the Hauswald and Marquez paper could
be adapted to evaluate the welfare
effects of extending patents to financial
intermediaries and why existing criteria
for evaluating patent applications might
be inappropriate for this industry.
For a detailed summary of the
conference and electronic copies of all
the papers and presentations, please see
our web site: www.phil.frb.org/econ/
conf/innovations.html. BR


Should Business Bankruptcy Be a
One-Chapter Book?
BY MITCHELL BERLIN

What makes more economic sense? A
bankruptcy system that auctions a firm’s
assets and distributes the proceeds among
the creditors? Or one that allows a firm to
seek to resume business after renegotiations between its
stockholders and its creditors? Or is there room — or
even a need — for both? Mitchell Berlin outlines
current U.S. bankruptcy law and looks at recent
research that has reopened the debate on the value of
separate procedures for reorganizing the bankrupt firm.

Businesses sometimes go
bankrupt. That's a fact of life. Bankruptcy may occur because of bad
management, an economic downturn,
or simply a change in consumers’
preferences for the products they buy.
As a society, we would like to establish
laws to deal with bankrupt firms that
allow the firms' managers, workers, and
equipment to be deployed elsewhere as
quickly and efficiently as possible if the
firm is no longer viable. Alternatively,
the best solution may not be to break up
the firm but to have the firm draw up a
new business plan and to reach a new
understanding with its creditors.
Mitchell Berlin is a research officer and economist in the Research Department of the Philadelphia Fed.

In the United States, there are two different procedures for a firm's
bankruptcy. One, called Chapter 7 (the
chapter refers to its location in the U.S.
bankruptcy code), auctions all of the
firm's assets and distributes the proceeds
to the firm's creditors. The second
procedure, called Chapter 11, allows the
firm to go back into business once it has
renegotiated existing contracts with
suppliers and creditors.
For many years, critics — both
legal scholars and economists — have
charged that Chapter 11 is inefficient
and should be eliminated. They have
argued that reorganization proceedings
under Chapter 11 take too long, that
they reward and entrench incumbent
owners and managers, and that
reorganized firms end up being liquidated anyway, often after multiple
attempts at reorganization. In contrast,
Chapter 7’s auction procedure is simpler
and more efficient, according to these
same critics.
Nonetheless, the U.S. has yet
to close the book on Chapter 11. And

despite bankruptcy scholars’ criticism of
Chapter 11, other countries have
reformed their own bankruptcy laws to
look more like the U.S. law. For
example, both England and Germany —
with bankruptcy systems that were
heavily biased toward the liquidation of
enterprises, rather than their rehabilitation — have introduced new provisions
facilitating the reorganization of firms.
Do these reforms fly in the face of
economic reason and experience, or
have the critics of U.S. bankruptcy law
been missing something important?
In fact, recent economic
research has reopened the case against
U.S. bankruptcy law. Researchers have
shown that seemingly objectionable
features of Chapter 11 — for example,
the bias toward incumbent owners —
may make economic sense. Further,
while even proponents of using a single
chapter (such as Chapter 7) have always
recognized practical difficulties — for
example, the possibility that distressed
auctions would fetch fire-sale prices for
the firm's assets — more recent research
has raised new concerns about auctions
as a means to sell firms' assets. Researchers have also examined ways in which
auction procedures might be modified to
address some of these concerns.
U.S. BANKRUPTCY LAW
Under both Chapter 7 and
Chapter 11, a bankruptcy filing triggers
an automatic stay. Under an automatic
stay, the firm's creditors — its bankers,
bondholders, trade creditors, or pensioners, among others — must hold off any
attempts to satisfy their claims by
grabbing the firm's assets. In particular, a
secured creditor, whose contract states
that in the event of default, she has the
right to take possession of one (or more)
of the firm’s assets (for example, a drill
press), must wait until the courts decide
who gets what.
The underlying idea of the
automatic stay is to blunt the strong
incentive that the firm's creditors,
especially secured creditors with a legal
claim on particular assets, have to run to
the courthouse to be first in line. While
the first creditors on the courthouse
steps may get paid in full and would be
satisfied, this disorganized dash would
probably leave creditors, as a group,
worse off. For example, the drill press
may fetch a higher price when sold
along with the factory than if sold
separately, but the creditor with the
secured claim will be concerned only
with whether she can sell the drill press
for more than the unpaid portion of her
loan. A more organized disposal of the
firm's assets could ensure a higher sale
price for all the firm's assets and, thus,
extra dollars to share among the firm's
creditors.
Chapter 7: The Creditor
Comes First. When a firm enters
Chapter 7, its owners and managers are
immediately replaced by a court-appointed trustee, who acts as a
representative of all claimants as a
group. The trustee has two essential
roles. The first role is to secure the
highest possible value for the firm's assets
at auction. Assets might be sold
piecemeal; for example, the drill press
might be sold separately from the firm's
factory building (which might have
higher value as a space for an indoor
driving range). Alternatively, the factory
building and all the machines inside
might be most valuable as a single unit.
In this case, the trustee would seek a
bidder for all the firm's assets.
The trustee’s second role is to
distribute the money received for the
firm's assets, that is, to evaluate and rule
on competing claims.1 In those cases
where the firm's financial structure is
simple, this is a straightforward job.2 In
other cases, determining the value of
various claims may be more difficult, for
example, when there are bonds with
different levels of priority and debt
secured by assets.
Even for relatively simple
financial structures, the trustee must be
guided by some general principles in
deciding the value of competing claims.
In the U.S. and most other countries,
the overarching principle is the absolute
priority rule. According to this rule, all
investors are ranked in order of priority:
Creditors with claims secured by
particular assets — collateral — have
priority over unsecured creditors.
Among the unsecured creditors, those
with seniority clauses in their contracts
will be paid before those without such
clauses. Finally, all creditors have priority
over the firm's stockholders.3 Under the
absolute priority rule, all creditors with
higher priority must be paid the full
value of their claims before those with
lower priority receive a single cent.
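To make the mechanics of the rule concrete, the sketch below runs a hypothetical sale price down a priority ladder; the class names, claim amounts, and sale price are illustrative and are not taken from any actual case.

# A minimal sketch of an absolute-priority "waterfall": each class is paid in
# full, in order of priority, before the next class receives anything.
# The claim amounts and the sale price are purely illustrative.
def absolute_priority(sale_price, claims):
    """claims: list of (class_name, amount), ordered from highest to lowest priority."""
    payouts = {}
    remaining = sale_price
    for name, amount in claims:
        paid = min(amount, remaining)
        payouts[name] = paid
        remaining -= paid
    payouts["stockholders"] = remaining   # equity receives only what is left over
    return payouts

claims = [("secured creditors", 50.0),
          ("senior unsecured creditors", 30.0),
          ("junior unsecured creditors", 40.0)]
print(absolute_priority(100.0, claims))
# {'secured creditors': 50.0, 'senior unsecured creditors': 30.0,
#  'junior unsecured creditors': 20.0, 'stockholders': 0.0}

With a sale price of 100, the junior unsecured creditors recover only half of their claim and the stockholders receive nothing, which is exactly what the rule requires.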
Chapter 11: The Last Shall
Be First. Although a trustee is also
appointed in a Chapter 11 proceeding,
the firm's owners remain in control of
the firm until a reorganization plan has
been accepted. The trustee has many
roles in Chapter 11, but its main
responsibility is to protect creditors'
interests. In this role, for example, the
trustee will have to approve large
corporate expenditures to ensure that
owners are not seeking to enrich
themselves at creditors' expense.
1

I’ve simplified the discussion by talking
about money received for a firm’s assets. In
reality, bankruptcy claimants could receive
securities rather than cash.

Unlike the auction and
distribution procedure of Chapter 7,
Chapter 11 takes the form of structured
bargaining among investor groups: the
firm's owners, secured creditors,
unsecured creditors, and so forth.
Bargaining is structured in that Chapter
11 prescribes a set of rules under which
investor groups present reorganization
plans, which are then voted on by
committees representing the investors.4
The firm's owners — often, but not
always, represented by incumbent
management — have the sole right to
propose plans for reorganization for the
first six months. In practice, though, the
court trustee has substantial discretion to
extend this initial period. After six
months — or if the trustee determines
that the owners can't come up with an
acceptable plan — a committee of
creditors may then propose its own
reorganization plan.
A reorganization plan is a
complicated proposal that has two main
elements. The first is a blueprint for
deploying the firm's assets; this blueprint
often calls for the sale of some businesses
and the hiring of a new management
team to run the remaining business.5
The second element is an outline of the
firm's new financial structure, in
particular, how much and what types of
securities the various claimants would
receive. So, for example, a plan might
propose that the firm's banks — whose
claims are secured — receive stock and
cash worth 92 percent of the value of
their outstanding claims, unsecured
bondholders receive stock valued at 40
percent of the face value of their
outstanding bonds, and the firm's
shareholders retain 7 percent of the reorganized firm's stock.

2. Financial structure refers to a firm's mix of bonds, bank loans, and equity.

3. This is simplified. Other types of claimants exist, for example, the IRS and customers with outstanding lawsuits. Throughout, I focus on the main investor groups. David Epstein's book provides a particularly clear account of the system of priorities.

4. The trustee determines the precise structure of the committees.

5. A new management team is put in place 70 percent of the time, according to Edith Hotchkiss's sample. Hotchkiss reviews the evidence concerning management turnover from other studies.
Note that any reorganization
plan requires an estimate of the firm's
ongoing value, both to permit the trustee
to evaluate whether the plan serves
creditors' interests and to determine
precisely what mix of new securities and
cash each investor group will receive.6
Note also that the payments in
the example, which are in line with
actual U.S. experience, do not respect
absolute priority, even though the
bankruptcy code explicitly calls upon
trustees to follow this rule. Most
strikingly, the firm's existing owners
systematically retain a share of the
reorganized firm, even though unsecured creditors have received much less
than the outstanding value of their
claims. Many commentators note that
this systematic bias away from absolute
priority is the predictable effect of the
rules of Chapter 11. Specifically,
incumbent owners have lots of power,
both because they retain control of the
firm and because they get to offer the
initial reorganization plan. This power
enables them to retain a share of the
reorganized firm, even though investors
with higher priority have not had their
claims satisfied.
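One way to see the violation in a plan like the one sketched above is to check recovery rates pairwise: under absolute priority, a class should receive nothing until every more senior class has recovered 100 percent. The figures below are the illustrative ones from the plan described earlier, with the shareholders' 7 percent equity stake treated, loosely, as a positive recovery; the check itself is only a sketch, not a description of how courts apply the rule.

# Sketch: flag deviations from absolute priority in a proposed reorganization plan.
# Recovery rates are the illustrative figures from the plan discussed in the text.
recoveries = [("secured banks", 0.92),
              ("unsecured bondholders", 0.40),
              ("shareholders", 0.07)]   # ordered from most senior to most junior

for i in range(1, len(recoveries)):
    junior_name, junior_rate = recoveries[i]
    for senior_name, senior_rate in recoveries[:i]:
        if junior_rate > 0 and senior_rate < 1.0:
            print(f"{junior_name} recover {junior_rate:.0%} "
                  f"although {senior_name} recover only {senior_rate:.0%}")

Every line the sketch prints is a deviation; the last one, shareholders retaining value while unsecured bondholders recover only 40 percent, is the deviation the critics emphasize.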
BANKRUPTCY WITHOUT
CHAPTER 11
Let's Get Rid of Chapter 11.
In articles that have been influential
among legal scholars and economists,
lawyer Douglas Baird and economist
Michael Jensen have argued that
Chapter 7 can be used either to liquidate or to reorganize firms, and, thus,
there is no need for a separate bankruptcy procedure for reorganizations.

One of the key functions of a
bankruptcy mechanism is to create an
orderly forum for answering two related
questions: (1) Are the firm's assets worth
more if the firm is simply broken up? (2)
Should the firm be placed under new
management? But why not settle these
questions by auction, with current
owners and management teams bidding
along with others for the firm's assets? If
these assets are more valuable together,
the winning bidder will propose reorganization, rather than liquidation. And if
current management is the most
capable, the winning bidder would not
necessarily replace them with new
managers.

A large economic literature
supports the use of well-designed
auctions as a mechanism for getting the
largest possible value for the firm's assets
and, in turn, yielding the highest payoff
for a firm’s creditors.7
In contrast, according to its
many critics, the structured bargaining
of Chapter 11 leads to systematically
poor outcomes in economic terms. In
addition to promoting the systematic
violation of absolute priority,8 Chapter 11 serves as a venue for entrenching
inefficient managers (who were, after
all, running the firm when it went
bankrupt), and the lengthy bargaining
process itself leads to increased costs, for
example, lawyers’ and accountants’ fees
and other court costs.9

6

Stuart Gilson, Edith Hotchkiss, and Richard
Ruback's article presents compelling evidence
that reorganization plans have systematic
biases in their estimates of firms’ value. For
example, the firm’s priority bondholders would
prefer that the court place a low dollar value
on the firm, so that subordinated bondholders
and stockholders would receive only a small
share of the claims on the reorganized firm.

Of course, the case for Chapter
7 and against Chapter 11 is not really as
clear cut as the preceding arguments
would make it seem, as I'll discuss later.
But underlying most economic arguments against Chapter 11 (and in favor
of a single creditor's chapter like
Chapter 7) is a simple but powerful
economic idea about the features of a
well-functioning bankruptcy mechanism. The mechanism should keep
separate two issues: (1) how to get the
most money for the firm's assets; and (2)
how that money should be distributed.
The reason for keeping these
issues separate is that while all the firm’s
creditors may agree on little else, they — indeed anyone with a potential claim on the firm — would agree that all will be made better off if there is a larger pie to divide. And a substantial body of economic knowledge supports using auctions as a means of getting the largest pie. However, bargaining among investor groups over competing reorganization plans invariably mixes the issues of getting the most for the firm's assets and distributing the claims on those assets. It is unlikely that such bargaining would ever arrive at a plan that gives creditors the most money to split up.10 And since bargaining takes time, the firm's assets may be declining in value while investor groups dicker.11

7. Paul Klemperer's article contains a good review of the existing theoretical literature on auctions.

8. The merits of absolute priority are discussed below.

9. The evidence for systematic violations of absolute priority in Chapter 11 is voluminous. See the articles by Edith Hotchkiss for evidence about how often inefficient managers remain entrenched. See the article by Julian Franks, Kjell Nyborg, and Walter Torous for a range of estimates of the administrative costs of Chapter 11.

10. Mixing the two types of issues also makes bargaining more complicated and creates stronger incentives for groups to use bankruptcy proceedings in a strategic way.
The Reasons for Respecting
Absolute Priority. Chapter 11's
systematic violation of absolute priority
in favor of incumbent stockholders is
essentially a distributional issue. If so,
what is the significance of the particular
distribution dictated by the absolute
priority rule? Essentially, absolute priority
ensures that claimants’ payoffs are made
in the same order of priority that would
have existed had the distressed firm
never entered bankruptcy at all. As
argued by Thomas Jackson in his
influential book, a well-designed
bankruptcy mechanism avoids a race to
the courthouse to prevent a disorderly
— and value-destroying — assertion of
creditors’ rights, but it should not
overturn contractual agreements that
were freely negotiated by the firm and
its investors. These contracts were
negotiated with an eye toward keeping
the firm's funding costs as low as possible
and with the intention of raising the
firm’s value as much as possible.
Deviations from absolute
priority will increase the firm's borrowing
costs, since creditors who expect to lose
out in bankruptcy demand compensation through a higher rate of interest.12
Even worse, deviations that are hard to predict with certainty raise the firm's financing costs higher still because investors require compensation for the added uncertainty.

11. Legally, the trustee may petition the court to shift the bankruptcy proceedings from Chapter 11 to Chapter 7 if he or she feels that creditor interests would be served. However, the trustee could not unilaterally choose to make this decision. Instead, the court would decide after a hearing, with all groups of claimants represented.

12. This argument is not immune to criticism. Some economists have argued that freely negotiated contracts won't lead to the lowest possible financing costs, so long as the firm negotiates contracts in sequence with different investors. For example, the firm may offer collateral to a new creditor, thus reducing the value of all existing unsecured claims. Fearing this, prior investors would demand a higher interest rate or contractual protections that the prior investors — or their lawyers — must monitor closely. This line of thinking has raised questions about the desirability of absolute priority. See, for example, Lucien Bebchuck and Jesse Fried's article discussing these and related issues.
SOME PROBLEMS WITH
CHAPTER 7
Scholarly debate following
Baird’s and Jensen's criticisms of
Chapter 11 has taken issue with the
view that an efficient bankruptcy
mechanism would necessarily look like
Chapter 7: an auction that gets the
largest possible price for the firm's assets,
followed by a distribution of the money
received in line with the absolute
priority rule.13
Auctions May Not Obtain
the Highest Price for a Firm's Assets.
A key feature that distinguishes an
auction in bankruptcy from many other
auctions is that the potential bidders
include individuals with existing claims
on the object to be auctioned. In
addition to the firm's current owners,
the firm’s creditors or other investors
might also choose to make competing
bids. For example, vulture investors
— those who buy up a distressed
firm's debt at discounted prices in
order to play a significant role in
bankruptcy proceedings — are
experts at managing and breaking up
bankrupt firms.14
In a textbook auction, no

bidder would ever choose to bid more
for an asset than it was worth because
the bidder has no prior claim on the
auctioned item. However, this is not true
if the bidder has a prior claim on the
asset. Existing claimants systematically
overbid, that is, they bid more than they
think the assets are worth. An existing
claimant overbids because if he loses, he
gets a share of the money paid by the
winning bidder. Thus, unlike in a
textbook auction, the claimant gains if a
competing bidder ultimately pays too
much for the asset.15 But this means
that any potential bidder must take into
account not only the possibility of high
bids from someone who places a higher
value on the firm's assets but also the
possibility of high bids from someone
whose valuation is actually lower than
her own. This is a problem because some
outside bidders — ones not connected with the firm — who may have superior plans for running the firm (or selling its assets) will be driven away from the auction.16

13. In this article, I focus on recent theoretical work on the use of auctions in bankruptcy. I don't emphasize some important issues, for example, whether the difficulty of obtaining funding might act as a barrier for some bidders or the possibility that a distressed firm will be forced to sell assets at fire-sale prices. Both of these problems further reduce the relative attractiveness of auctions compared with structured bargaining. Oliver Hart's article discusses and evaluates some of these issues.

14. Edith Hotchkiss and Robert Mooradian's article describes the activities of vulture investors.

15. Mike Burkart's article explains overbidding in the context of a model of competing bids to take over a firm, although he notes that the same ideas apply to bankruptcy proceedings.
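The overbidding logic described above can be made concrete with a small numerical sketch. It assumes a second-price (ascending) auction, a rival whose bid is spread evenly between 0 and 200, and a claimant who values the assets at 100 but would receive 25 percent of the sale proceeds if someone else wins; all of these numbers are illustrative rather than taken from Burkart's model.

# Numerical sketch of overbidding by an existing claimant in a second-price
# (ascending) auction. Illustrative assumptions: the claimant values the assets
# at 100, holds a claim worth 25 percent of whatever price the estate receives,
# and faces a rival whose bid is approximated by an even grid on [0, 200].
value, share = 100.0, 0.25
rival_bids = [r / 10 for r in range(0, 2001)]

def expected_payoff(bid):
    total = 0.0
    for r in rival_bids:
        if r < bid:
            # Claimant wins, pays the rival's bid r into the estate,
            # and gets share * r of that price back as a claimant.
            total += value - (1 - share) * r
        else:
            # Rival wins and pays the claimant's bid, of which the
            # claimant receives his share.
            total += share * bid
    return total / len(rival_bids)

best_bid = max(range(0, 201, 5), key=expected_payoff)
print(best_bid)   # 120: the claimant's best bid exceeds his valuation of 100

A bidder with no claim on the proceeds (share = 0) would never bid above 100; the payment received when losing is what pulls the existing claimant's best bid above his own valuation.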
Separating Asset Deployment Issues and Distribution Issues
May Be Impossible. One reason
auctions of a firm's assets have appeared
attractive to economists who think
about bankruptcy is that a large
literature on auctions has established
that many types of auction procedures
will yield the same expected revenues to
the seller.17 We might conclude that
while existing claimants will disagree
about how revenues should be distributed, they should all agree upon an
auction procedure that generates the
highest expected price.
But the article by Sugato
Bhattacharyya and Rajdeep Singh
shows that senior and junior creditors
would disagree about the choice of
auction procedures, even when the
auctions yield the same expected
revenues. The reason is that while we
can predict the expected revenues for an
auction, the actual price that will be paid
by the winning bidder is uncertain. The
riskiness of the bids will be important to
the firm's creditors, and different types
of creditors will have different risk
preferences. Specifically, junior creditors
will prefer auction procedures with a
higher probability of both very low and
very high bids because they get paid
only if the senior creditors have already
been paid in full.18 And auction
procedures that generate a wide
dispersion of returns increase the probability that junior creditors will get paid. By the same reasoning, senior creditors prefer auctions with a narrower range of bidding.19

16. Per Stromberg's article presents some empirical evidence that overbidding actually occurs in Swedish bankruptcy auctions.

17. Some of the more familiar forms of auctions include the ascending-bid auction and the sealed-bid auction.

18. Actually, junior creditors (like all creditors) would prefer auctions that yield only high bids most of all. To focus on different investors' preferences for different degrees of risk, Bhattacharyya and Singh compared auction procedures that have the same expected return and different amounts of dispersion.
One conclusion we can draw
from Bhattacharyya and Singh's article
is that there is no bankruptcy procedure
— and that includes auctions — in
which asset deployment and distributional issues can be completely separated, at least as long as investors hold
different types of claims that yield
different preferences about risk. (See
The Options Approach, for an ingenious
auction procedure that helps overcome
this problem.)
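The intuition behind this result can be seen with two hypothetical sale-price distributions that have the same expected revenue but different spreads. The senior and junior face values below are illustrative, and the sketch reproduces only the payoff arithmetic that drives the result, not Bhattacharyya and Singh's model.

# Sketch: why senior and junior creditors rank risky auction outcomes differently.
# Two hypothetical sale-price distributions share the same mean (100) but have
# different spreads; senior debt of 80 and junior debt of 60 are illustrative.
def class_payoffs(price, senior_face=80.0, junior_face=60.0):
    senior = min(price, senior_face)
    junior = min(max(price - senior_face, 0.0), junior_face)
    return senior, junior

def expected_payoffs(prices):
    results = [class_payoffs(p) for p in prices]   # each price equally likely
    senior = sum(s for s, _ in results) / len(results)
    junior = sum(j for _, j in results) / len(results)
    return senior, junior

narrow = [95.0, 105.0]   # low-dispersion auction outcomes
wide = [50.0, 150.0]     # high-dispersion outcomes with the same expected price
print(expected_payoffs(narrow))   # (80.0, 20.0): seniors paid in full
print(expected_payoffs(wide))     # (65.0, 30.0): seniors lose, juniors gain from dispersion

The expected sale price is 100 in both cases, yet the senior class prefers the narrow outcome and the junior class prefers the wide one, so no single auction format makes every claimant best off.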
TWO CHAPTERS ARE BETTER
THAN ONE
Much of the literature on
bankruptcy has assumed that absolute
priority is a necessary component of an
efficient bankruptcy law. However, a
recent article by Elazar Berkovitch and
Ronen Israel explains why systematic
deviations from absolute priority may
make economic sense.20 Their model
indicates that an efficient bankruptcy
system includes a number of features that resemble the different bankruptcy laws we observe around the world. In fact, their model demonstrates that some types of economies are best served by a bankruptcy mechanism with two chapters: a creditor's chapter with similarities to Chapter 7 and a debtor's chapter with similarities to Chapter 11. Thus, their model suggests that a bankruptcy mechanism like that in the U.S. does have certain desirable features. However, Berkovitch and Israel's research also suggests that other types of economies are best served by a single chapter: the creditor's chapter. This system resembles the traditional British bankruptcy system.

19. The precise result shown by Bhattacharyya and Singh is that a senior creditor strictly prefers a sealed-bid first-price auction to a sealed-bid ascending-bid auction, while a junior creditor prefers the opposite. In an ascending-bid auction, bids rise until all but one bidder has dropped out. Thus, the winner need only pay (slightly more than) the price bid by the second-highest bidder. When the ascending-bid auction is a sealed-bid auction, none of the bidders sees when the others drop out. In a sealed-bid first-price auction the winner pays his or her own bid price, rather than the price bid by the second-highest bidder. Since the bids are sealed, bidders do not see one another's bids.

20. This is the most ambitious of a series of articles by Berkovitch and Israel (along with Jamie Zender) that explain why violations of absolute priority may be desirable. The common theme of these articles is that managers, with superior information, must be provided incentives to act on investors' behalf. In general, this requires that the manager receive a share of the firm's value in bankruptcy.
The two types of chapters
differ according to who initiates the
bankruptcy and whether the chapter
violates absolute priority by giving
incumbent stockholders a share of the
reorganized firm. The debtor's chapter is
initiated by the firm's stockholders and
violates absolute priority. The value of
violating absolute priority is that
stockholders are given an incentive to
voluntarily seek bankruptcy if they have
information that the firm is likely to fail.
Stockholders will never
voluntarily seek the protection of the
bankruptcy court unless there is
something to gain by doing so. Inducing
stockholders to voluntarily enter
bankruptcy can be valuable because the
firm's owners are often the first to
become aware of serious financial
troubles. Postponing bankruptcy too long
hurts all creditors because a troubled
firm's assets typically continue to decline
in value until the firm is reorganized or
dissolved. Thus, even creditors would
agree to give up a piece of a larger pie to
shareholders if it’s necessary to induce
stockholders to enter bankruptcy
voluntarily.
The creditor's chapter, which,
as its name suggests, is initiated by
creditors, respects the absolute priority
rule. This chapter permits creditors that
are well informed about the firm's affairs
to petition for bankruptcy without giving
anything to incumbent owners. Unless
the creditors are relying on the firm’s
owners to enter bankruptcy voluntarily,
creditors would never give the owners a
portion of the money received for the
bankrupt firm’s assets. Owners will
typically work harder to make the firm
profitable and avoid bankruptcy if they
know they’re not getting a share of the
assets when the firm goes bankrupt.
Either System May Be
Superior. In an undeveloped financial
market, especially one characterized by
strong relationships between a borrower
and its lender, Berkovitch and Israel
predict that an efficient bankruptcy law
will have only a creditor's chapter.21
In a relationship-driven
financial market, adding a debtor's
chapter would be both not very helpful
and too costly. Not very helpful, because
the lender's information about the
borrower is likely to be good when
relationships are close; thus, the
creditor's chapter will enforce efficient
liquidation most of the time even
without using the firm's information.22
Too costly, because a firm with bad news
about its prospects will have a powerful
incentive to use the debtor's chapter to
preempt its lender from initiating
proceedings, so as to capture a share of
the payoffs in bankruptcy.
In an economy without close
lending relationships, but with many different individuals, analysts, and investors producing information about firms, a two-chapter system may be both feasible and desirable. In such a system, a firm can't predict with certainty what creditors know about its financial condition, since the information available to a firm's owners and the information available to market participants are different. In this case, should a firm's owners become aware of serious problems, they will not always seek court protection to pre-empt creditors from forcing the firm into bankruptcy. After all, it may turn out that the firm's creditors won't receive information that would lead them to do so. Nonetheless, the firm's owners will sometimes enter bankruptcy voluntarily, thus improving the decisions made about liquidating and reorganizing firms. In such an economy — for example, the United States — two chapters can coexist and improve on a single-creditor chapter.
Interestingly, Berkovitch and Israel's model predicts that in an economy in which firms reduce their reliance on banks and shift more of their financing toward capital markets, an efficient bankruptcy system would shift from a single-chapter system (with only the creditor's chapter) to a two-chapter system. This shift toward capital markets is a trend in many developed countries. And, as predicted, many nations have introduced bankruptcy reform (and reform proposals) along the lines of the two-chapter model in the United States.

21. The combination of undeveloped financial markets and strong relationships is probably a fair description of Japan until the 1980s.

22. If creditors have conflicting interests — for example, if some claims are collateralized — it is possible that creditor-initiated proceedings could lead to premature liquidation. However, the automatic stay greatly reduces the possibility that any creditor could gain by pushing the firm into bankruptcy prematurely.

CONCLUSION
Recent economic scholarship on the efficiency of existing bankruptcy mechanisms has been a productive source of insights. Substantial empirical evidence holds that Chapter 11 reorganization proceedings are drawn-out, costly affairs, with a significant bias toward incumbent owners that is reflected in systematic deviations from absolute priority. Some critics have suggested replacing the two-chapter bankruptcy system of the U.S., in which auctions are used to liquidate firms in Chapter 7 and bargaining among claimants is used to reorganize firms in Chapter 11, with a single chapter. Specifically, the critics argue that all bankruptcies, whether liquidations or reorganizations, can be handled through auctions.
These proposals have generated further debate. While the outcome of the debate is not conclusive, a number of provisional conclusions have arisen. Although critics have complained that Chapter 11 proceedings don't separate the valuation of the firm's assets from the distribution of this value to claimants, it now seems clear that auctions suffer from the same problem. Furthermore, theorists have provided explanations not only for systematic deviations from absolute priority but also for bankruptcy mechanisms with significant similarities to the two-chapter bankruptcy mechanism in the United States. BR


THE OPTIONS APPROACH

As long as existing claimants on the bankrupt
firm have different types of claims,
decisions about how the firm’s assets should
be handled can’t be separated from
decisions about how the value of these
assets should be distributed. Thus, claimants would not
unanimously support efficient plans for selling the firm’s
assets or reorganizing under new management.
Lucien Bebchuck proposed the following approach to
satisfying claimants in bankruptcy.a The basic idea is that if
all creditors have the same type of claim, their interests are
harmonized, and getting the most value for the firm’s assets
becomes everyone’s objective. Bebchuck’s idea is to give
senior creditors all of the firm’s equity. They would receive
pro rata shares, according to the size of their claim on the
firm. Junior creditors would receive options to buy senior
creditors’ shares for cash. The firm’s stockholders would
similarly receive options to buy out the claims of both classes
of creditors.b
To get an idea how this would work, consider a highly
simplified example with only two types of claimants. At
bankruptcy, the firm has 100 bondholders, each with $1
debt outstanding, and five shareholders, each with 20 shares
of the firm’s total 100 shares of stock issued. Under this
scheme, the 100 shares of stock would be distributed
equally among the 100 bondholders, with each receiving
one share. Each stockholder would receive an option to buy
up to 20 shares of stock at $1 per share. The exercise price of
the option ($1) is set so that the firm’s former bondholders
are obliged to sell their current shares as long as they are
offered at least as much as the face value of their
original bond.c
Before individuals make decisions about whether to

exercise their options, a trustee would solicit plans for selling
the firm’s assets or reorganizing the firm. Participants’ ability
to buy and sell their options would ensure that those
individuals who place the highest value on the firm could
amass a majority of the firm’s equity. Under this procedure,
there is no need for everyone to agree that a particular plan
for the firm is best; those who don’t agree would sell their
option to the individual who places the highest value on the
firm.
If the firm’s former stockholders believe that the firm
is worth less than $100 — even under the best plan — they
would not exercise their options to buy the firm’s shares
because the cost of exercising the option exceeds the value
of the firm. However, if they believe that under some plan
the firm is worth, say, $120, the firm’s former stockholders
would choose to exercise their options to buy the firm’s
shares for $100. And since options can be sold, if other
investors believe that they have a plan worth more than
$100, the former shareholders would gladly sell their options
even if they disagree about the value of the plan.
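Under the assumptions of the example just given (100 bondholders owed $1 each receive the firm's 100 shares, and the former stockholders hold options to buy each share for $1), the exercise decision reduces to a simple comparison, sketched below.

# Sketch of the exercise decision in the options procedure, using the numbers
# from the example in the text: 100 bondholders owed $1 each receive the firm's
# 100 shares, and former stockholders hold options to buy each share for $1.
def exercise_options(firm_value, num_shares=100, strike=1.0):
    """Return (value to former bondholders, value to former stockholders)."""
    total_strike = num_shares * strike          # $100, the bondholders' face value
    if firm_value > total_strike:
        # Options are exercised: bondholders are paid in full, equity keeps the rest.
        return total_strike, firm_value - total_strike
    # Options expire unexercised: bondholders keep equity worth the firm's value.
    return firm_value, 0.0

print(exercise_options(90.0))    # (90.0, 0.0): the firm is worth less than the debt
print(exercise_options(120.0))   # (100.0, 20.0): options exercised at $1 per share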
Of course, no procedure is perfect. This approach
does not overcome the problem that existing claimants have
an incentive to overbid. Thus, we can’t assume that bidders
with the highest valued plan for the firm’s reorganization or
liquidation will participate. Also, as in any auction, the
procedure will work well only if those who place a high
value on the firm can also finance their purchase of equity
or options. Furthermore, for firms with both secured debt
and unsecured senior debt, the procedure may not be as
straightforward as in the example. In this case, the procedure must take account not only of the value of the plan as
a whole but also of the value of those assets pledged as
collateral.

a. Although the basic idea is Bebchuck's, Philippe Aghion, Oliver Hart, and John Moore extended Bebchuck's procedure to include a separate stage in which potential suitors propose different reorganization plans, as developed here.

b. The scheme does not require that investors purchase all of the claims of a senior class. However, an investor (or group of investors) may need to purchase a majority of the shares of the firm to gain control of the firm to ensure that a particular reorganization plan is carried out.

c. For a firm with a more complicated financial structure — with claims of many different priorities — a junior group receives options to buy out all claimants who are senior to that group. The version of Bebchuck's scheme developed here maintains absolute priority by requiring the senior claimant to sell at the exercise price. However, the scheme can be modified to permit deviations from absolute priority if so desired.


REFERENCES
Aghion, Philippe, Oliver Hart, and John Moore. "The Economics of Bankruptcy Reform," Journal of Law, Economics, and Organization, 1992, pp. 523-46.

Baird, Douglas. "The Uneasy Case for Corporate Reorganizations," Journal of Legal Studies, pp. 69-98.

Bebchuck, Lucien A. "A New Approach to Corporate Reorganizations," Harvard Law Review, pp. 775-804.

Bebchuk, Lucien A., and Jesse M. Fried. "The Uneasy Case for the Priority of Secured Claims in Bankruptcy," Yale Law Journal, January 1996, pp. 857-934.

Berkovitch, Elazar, and Ronen Israel. "Optimal Bankruptcy Laws Across Different Economic Systems," Review of Financial Studies, Spring 1999, pp. 347-77.

Bhattacharyya, Sugato, and Rajdeep Singh. "The Resolution of Bankruptcy by Auction: Allocating the Residual Right of Design," Journal of Financial Economics, 1999, pp. 269-94.

Burkart, Mike. "Initial Shareholdings and Overbidding in Takeover Contests," Journal of Finance, December 1995, pp. 1491-1515.

Epstein, David G. Bankruptcy and Other Debtor-Creditor Laws in a Nutshell. St. Paul, MN: West Publishing, 1995.

Franks, Julian R., Kjell G. Nyborg, and Walter N. Torous. "A Comparison of U.S., U.K., and German Insolvency Codes," Financial Management, Autumn 1996, pp. 86-101.

Gilson, Stuart, Edith Hotchkiss, and Richard Ruback. "Valuation of Bankrupt Firms," Review of Financial Studies, Spring 2000, pp. 43-75.

Hart, Oliver. "Different Approaches to Bankruptcy," Discussion Paper 1903, Harvard Institute of Economic Research, September 2000.

Hotchkiss, Edith S. "Post-Bankruptcy Performance and Management Turnover," Journal of Finance, 1995, pp. 3-21.

Hotchkiss, Edith S., and Robert M. Mooradian. "Vulture Investors and the Market for Control of Distressed Firms," Journal of Financial Economics, 1997, pp. 401-32.

Jackson, Thomas. The Logic and Limits of Bankruptcy Law. Cambridge, MA: Harvard University Press, 1986.

Jensen, Michael C. "Corporate Control and the Politics of Finance," Journal of Applied Corporate Finance, Fall 1991, pp. 13-33.

Klemperer, Paul. "Auction Theory: A Guide to the Literature," Journal of Economic Surveys, July 1999, pp. 227-86.

Pulvino, Todd C. "Do Asset Fire Sales Exist? An Empirical Investigation of Commercial Aircraft Transactions," Journal of Finance, June 1998, pp. 939-78.

Stromberg, Per. "Conflicts of Interest and Market Illiquidity in Bankruptcy Auctions: Theory and Tests," Journal of Finance, December 2000, pp. 2641-92.


The Taylor Curve and the
Unemployment-Inflation Tradeoff
BY SATYAJIT CHATTERJEE

In the past, monetary policy options were
described in terms of a tradeoff between the
unemployment rate and the inflation rate,
the so-called Phillips curve.
Macroeconomists no longer view the Phillips curve as
a viable “policy menu” because its use as such is
inconsistent with mainstream macroeconomic theory.
In the late 1970s, John Taylor suggested an alternative
set of options for policymakers to consider, one
consistent with macroeconomic theory. These
alternative options involve a tradeoff between the
variability of output and the variability of inflation.
Satyajit Chatterjee explains the logic underlying this
new variability-based policy menu and discusses its
implications for the conduct of monetary policy.
In thinking about how the Fed
should conduct monetary policy, it’s
important to know what monetary
policy can and cannot accomplish.
Without a clear idea of what is within
the reach of a central bank in terms of
controlling economic activity, it’s not
possible to make sensible choices
regarding monetary policy.
Satyajit Chatterjee is a senior economic advisor and economist in the Research Department of the Philadelphia Fed.

Scientific consensus on what central banks can do has evolved over
time and so have prescriptions for
conducting monetary policy.1 In the
1950s and 1960s, monetary policy
options were formulated in terms of a
tradeoff between the unemployment
rate and the rate of inflation, the so-called Phillips curve.2 Economists back
then thought that the Fed could sustain
a lower or higher rate of unemployment
by bringing about a higher or lower rate
of inflation. The implication was that if
the unemployment rate associated with
price stability (that is, zero inflation)
turned out to be too high, the Fed could
improve economic performance by engineering some inflation in order to reduce the unemployment rate.

1. See the article by Philadelphia Fed President Anthony Santomero in the First Quarter 2002 Business Review for more discussion of this point.
But by the early 1970s,
scientific support for a tradeoff between
the rate of inflation and the unemployment rate had ebbed. As a result of
advances in monetary theory and a
clearer perception of monetary facts,
economists recognized that a higher
inflation rate could lower the unemployment rate only temporarily. An expansionary monetary policy sustained over a
long period would, in the end, generate
only higher inflation with no reduction
in the unemployment rate.
Currently, the conduct of
monetary policy respects this circumscribed view of the effectiveness of
monetary policy actions. The challenge
for policymakers is to determine how
best to carry out monetary policy when
people know that monetary policy
actions have only temporary effects on
the unemployment rate.
One possibility is to refrain
from exploiting the temporary tradeoff
between inflation and unemployment
and carry out monetary policy with
some desired long-run inflation target in
mind. For instance, Nobel laureate

Milton Friedman has suggested that the Fed should endeavor to keep the money supply growing at a constant rate, one consistent with long-run price stability or a modest level of long-run inflation.3

2. British economist A.W. Phillips documented an inverse relationship between the rate of wage inflation for U.K. workers and the unemployment rate in the U.K. for the years 1861-1957. In 1960, American economists Paul Samuelson and Robert Solow drew attention to the inverse relationship between the rate of price inflation in the United States and the U.S. unemployment rate, a relationship they called a "modified Phillips curve." The qualifier "modified" has long since disappeared, and the Phillips curve is now generally understood to represent the inverse relationship between price inflation and the unemployment rate.
In 1979, economist John Taylor
suggested a different possibility.4 Taylor
pointed out that the temporary tradeoff
between inflation and unemployment
was consistent with a permanent
tradeoff between the variability of
inflation and the variability of output
over time. At some point, policymakers
face a choice between lowering the
variability of output at the cost of more
variability in the inflation rate or
lowering the variability of the inflation
rate at the cost of more variability in
output. In his article, Taylor estimated
the tradeoff between variability in
inflation and output for the U.S.
economy.5 This “Taylor curve” displays
one set of options available to
policymakers when monetary policy
actions have only temporary effects on
the unemployment rate.
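A stylized way to see why such a frontier exists is to suppose that each shock hitting the economy must show up somewhere, and that policy chooses only how much of it appears in inflation rather than in output. The sketch below embodies that assumption with an arbitrary policy parameter and simulated shocks; it illustrates the nature of the tradeoff and is not Taylor's estimated curve.

import random
import statistics

# Stylized illustration of an inflation-output variability tradeoff. Assume each
# shock must be absorbed somewhere: a policy parameter lam in [0, 1] pushes a
# fraction lam of every shock into output and the remainder into inflation.
random.seed(1)
shocks = [random.gauss(0.0, 1.0) for _ in range(10_000)]

for lam in (0.2, 0.5, 0.8):
    inflation_sd = statistics.pstdev([(1 - lam) * s for s in shocks])
    output_sd = statistics.pstdev([lam * s for s in shocks])
    print(f"lam={lam:.1f}  sd(inflation)={inflation_sd:.2f}  sd(output)={output_sd:.2f}")
# Pushing more of each shock into output (higher lam) lowers inflation
# variability only by raising output variability, tracing out a frontier of
# attainable combinations.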
In this article, I will explain
how policymakers can exploit a
temporary tradeoff between the
unemployment and inflation rates to
consistently achieve particular inflation
and output variability combinations on
the Taylor curve.6 Then I will discuss
what lessons about the conduct of
monetary policy can be drawn from the
Taylor curve. Taylor has argued that the
very shape of the curve reveals the
general nature of the monetary policy
rule that macroeconomists should
recommend to policymakers. I suggest

that macroeconomists should be cautious about recommending any particular policy rule too strongly until more is known about the effects that different combinations of inflation and output variability (on the Taylor curve) have on a typical household's standard of living.

3. Friedman stated his views in his 1967 presidential address to the American Economic Association. The text of his address appears in his 1968 article.

4. John Taylor is professor of economics at Stanford University and a renowned scholar on issues concerning monetary policy. Professor Taylor has served as a member of the President's Council of Economic Advisers and is currently serving as Undersecretary for International Affairs at the U.S. Department of Treasury.
A PRIMER ON THE THEORY OF
THE NATURAL RATE OF
UNEMPLOYMENT
The proposition that the policy
choices suggested by the Phillips curve
cannot be sustained is a key implication
of the theory of the natural rate of unemployment. Since the natural rate theory is
Taylor’s point of departure in his search
for a sustainable tradeoff between inflation and output, it’s best to begin with a
brief description of this theory and its
implications for the Phillips curve.
The theory of the natural rate
of unemployment centers on the
determinants of the unemployment rate.
The theory makes a distinction between
the fundamental determinants of the
unemployment rate and nonfundamental factors. Fundamental determinants are factors that change slowly over
time, such as demographics, technology,
laws and regulations, and social mores.
These fundamental factors determine
the natural rate of unemployment.

5

Taylor couches his arguments in terms of
variability of output rather than unemployment but this difference is not important
because the two are closely related.
Macroeconomists often use a rule of thumb to
translate variability in output to variability in
the unemployment rate. The rule of thumb is
that a 1-percentage-point reduction in the
unemployment rate goes hand-in-hand with a
3-percentage-point increase in output. This
rule of thumb, which appeared in a 1971
article by Arthur Okun, is referred to as
Okun’s Law. For the sake of comparison with
the Phillips curve, later in the article I’ll
couch Taylor’s arguments in terms of the
variability of the unemployment rate instead
of output.
6

Economists refer to this tradeoff as a “policy
menu.”

However, because of nonfundamental
factors, the actual unemployment rate
can deviate from the natural rate. The
theory links these deviations to events
that cause the actual inflation rate, at
any given date, to diverge from the
inflation rate expected for that date in
earlier periods.
The reasoning underlying this
link goes as follows.7 In modern
industrial economies, it’s common for
workers to enter into employment
contracts in which they agree to supply
as many hours of work as demanded by
their employers (within reasonable
limits) for an agreed-upon wage rate or
salary. This contractually fixed wage
rate or salary reflects, in part, what
workers and employers expect the
inflation rate to be over the term of the
contract. If the inflation rate turns out to
be as expected, employers demand (and
workers supply) the normal level of work
hours, and the overall unemployment
rate is close to the natural rate. If the
inflation rate turns out to be higher than
expected, employers buy additional
work hours because the price at which
they can sell their products is higher
than expected but the wage they must
pay for additional hours of work remains
contractually fixed. In this case the
utilization of labor rises, and the
unemployment rate tends to fall below
the natural rate. Conversely, if the
inflation rate turns out to be lower than
expected, firms lay some workers off
because the price at which firms can sell
their products is now lower than
expected but the wage they must pay
their workers remains contractually
fixed. In this case, the utilization of labor
falls, and the unemployment rate tends

7 There are two variants of the natural rate
theory. The text describes the variant
formulated, in part, by Taylor, which forms the
basis for Taylor’s subsequent work. Robert
Lucas Jr. developed the other variant, which
focuses on informational frictions rather than
employment contracts. Both variants appear
to be consistent with the evidence.

to rise above the natural rate.8
The architects of the natural
rate theory took a stand on which
events caused actual inflation to diverge
from expected inflation. They attributed
these discrepancies to erratic monetary
policy. They argued that when the
monetary authority expands the money
supply unexpectedly, it makes aggregate
demand for goods and services rise faster
than aggregate supply. This excess
demand causes the actual inflation rate
to rise above the expected inflation rate,
which, in turn, motivates firms to
increase the utilization of all factors of
production, including labor. The
increase in the utilization of labor leads
to a decline in the unemployment rate.
Conversely, when the monetary
authority unexpectedly contracts the
money supply, aggregate demand falls
short of aggregate supply. Now excess
supply causes the actual inflation rate to
fall below the expected inflation rate,
which, in turn, induces firms to reduce
the utilization of labor (and other factors
of production) and causes the unemployment rate to rise.
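The link can be written compactly. As a stylized summary (the notation is mine, not the article’s), the deviation of unemployment from the natural rate is proportional to the inflation surprise:

\[
u_t \;=\; u^{*} - b\,\bigl(\pi_t - \pi_t^{e}\bigr), \qquad b > 0,
\]

where u_t is the unemployment rate, u* is the natural rate, \pi_t is actual inflation, and \pi_t^{e} is the inflation rate that was expected when wage contracts were set. Inflation above expectations pushes unemployment below the natural rate; inflation below expectations pushes it above.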
The Natural Rate and the
Phillips Curve. Under certain conditions, the natural rate theory can explain
why the data on inflation and unemployment can take the form of a Phillips
curve. Recall that the Phillips curve
refers to a negative relationship between
the inflation rate and the unemployment rate: During years in which the
inflation rate is high, the unemployment
rate tends to be low; during years in
which the unemployment rate is high,
the inflation rate tends to be low. If the

8 If employers indexed wage rates or salaries
to future inflation outcomes, the incentives
to demand additional work hours when the
inflation rate is higher than expected and to
reduce work hours when the inflation rate is
lower than expected would disappear. Thus,
Taylor’s variant of the natural rate theory
leans rather heavily on the fact that most
employers do not appear to index wage-rate
or salary contracts to inflation outcomes in
the future.

average of unemployment rates over
time is a good proxy for the natural
unemployment rate and if the average
of inflation rates over time is a good
proxy for the expected inflation rate, the
natural rate theory implies that a plot of
the actual annual rates of inflation and
unemployment should trace out an
inverse relationship. According to the
theory, a year with a higher-than-expected
inflation rate should be a year with an
unemployment rate lower than the natural
rate, which, using averages of the two rates
over time, implies that a year with a
higher-than-average inflation rate should
also be a year with a lower-than-average
unemployment rate. In other words, there
should be a negative relationship between
the inflation and the unemployment rates.9

9 It’s worth noting that the prediction of the
natural rate theory concerning Phillips curves
holds up when the natural unemployment
rate and the expected inflation rate are
proxied by formulas more sophisticated than
simple averages of the rates over time. See, for
instance, Figure 1.5 in Thomas Sargent’s 1999
book on U.S. inflation.

The natural rate theory can explain why the
data on inflation and unemployment can take
the form of a Phillips curve but implies that the
Phillips curve shows a short-run tradeoff
between inflation and unemployment.

Figure 1 reproduces Paul
Samuelson and Robert Solow’s original
estimate of the “modified” U.S. Phillips
curve for the period 1933-58. The curve
shows a negative relationship between
the average annual rate of inflation and
the annual unemployment rate. For
instance, at point B on the curve, an
inflation rate of 4.5 percent accompanies
an unemployment rate of 3 percent; at
point A, an inflation rate of zero
accompanies an unemployment rate of
5.5 percent.

From the perspective of the
natural rate theory, however, the most
interesting aspect of the figure is the
authors’ labeling of the curve. As noted
at the bottom of the figure, Samuelson
and Solow thought that this curve
“shows the menu of choice between
different degrees of unemployment and
price stability.” The authors’ labeling
suggests that if policymakers find the 5.5
percent unemployment rate corresponding
to price stability (point A on the curve)
unacceptably high, monetary policy actions
could lower the unemployment rate to 3
percent at the cost of an annual inflation
rate of 4.5 percent (that is, move the
economy from point
A to point B on the curve).
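To see why averages-as-proxies deliver a downward-sloping cloud of points, consider the short simulation below. It is purely illustrative: it draws random inflation surprises around a constant expected rate and generates unemployment from a constant natural rate minus a multiple of each surprise, with parameter values chosen for convenience rather than estimated.

```python
import numpy as np

# Illustrative natural-rate simulation (all parameter values are made up).
rng = np.random.default_rng(0)
natural_rate, expected_inflation, b = 5.0, 3.0, 0.75

# Inflation differs from the (constant) expected rate only by random surprises;
# unemployment differs from the (constant) natural rate in proportion to them.
inflation = expected_inflation + rng.normal(0.0, 1.5, size=40)
unemployment = (natural_rate
                - b * (inflation - expected_inflation)
                + rng.normal(0.0, 0.3, size=40))

# Plotting these 40 "annual" pairs traces out a Phillips-like inverse relation.
print(np.corrcoef(inflation, unemployment)[0, 1])   # strongly negative
```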
Although the natural rate
theory accounts for the existence of a
Phillips curve in the data, the theory also
implies that the Phillips curve shows a
short-run tradeoff between inflation and
unemployment, not one that can be
sustained over the long run. To see why,
suppose that the natural rate of unemployment in the economy of Figure 1 is 5
percent, and suppose that policymakers
want to lower the unemployment rate to
3 percent. According to the natural rate
theory, the only way in which the
monetary authority can sustain an
unemployment rate of 3 percent is by
generating actual inflation that’s higher
than expected inflation. Initially, the
monetary authority may succeed in
generating higher-than-expected
inflation and get the unemployment
rate below the natural rate. But
eventually people will catch on to the
fact that the monetary authority is
generating more than the expected
amount of inflation, and employment
contracts will begin to take the new

FIGURE 1
Phillips Curve for U.S.

This figure shows the menu of choice between different degrees of unemployment and price
stability, as roughly estimated from American data from 1933-58. Adapted from Paul A.
Samuelson and Robert Solow, “Analytical Aspects of Anti-Inflation Policy,” American
Economic Review (Papers and Proceedings), 50, May 1960, pp. 177-94. Used with permission.

higher rate of inflation into account.
Once that discrepancy between actual
and expected inflation disappears, the
unemployment rate will rise again to 5
percent. Thus, unless the inflation rate is
continuously different from what people
expect, the unemployment rate will
return to the natural rate.
The natural rate theory implies
that for the monetary authority to keep
the unemployment rate permanently
below the natural rate, it must continually stay ahead of people’s expectations
of rising inflation by generating inflation
at an ever-rising rate. Put differently, the
only unemployment rate that’s consistent with nonaccelerating or
nondecelerating price inflation is the
natural unemployment rate. This also
implies that the inflation rate associated
with the natural rate is a matter of policy
choice. Within limits, it can be anything
the monetary authority wants it to be,
since once people come to expect the

chosen inflation rate, it will be consistent
with the natural rate of unemployment.
To summarize, the genesis of
the Phillips curve lies in studies of the
historical relationship between the
growth rates of wages and prices and the
unemployment rate. Although the
negative relationship between inflation
and unemployment exists in the
historical data (for that matter, in more
recent data as well), macroeconomists
no longer believe in a long-run policy
tradeoff between inflation and unemployment. The natural rate theory
persuaded most macroeconomists that
it’s impossible for a monetary authority
to achieve any unemployment rate
other than the natural rate without
eventually having either accelerating or
decelerating inflation. Although the
Phillips curve describes a genuine
pattern in the data, the reason underlying the pattern implies it cannot be
viewed as a policy menu.

THE TAYLOR CURVE: A
TRADEOFF CONSISTENT WITH
NATURAL RATE THEORY
If the Phillips curve cannot be
used as a policy tool, is there any
tradeoff between inflation and unemployment that can? Taylor argues that
there is. Like the Phillips curve, this
alternative curve also concerns the
relationship between inflation and
unemployment but focuses on the
variability of inflation and the variability
of unemployment.
To develop these variability-based combinations, Taylor takes the
view that there are other nonfundamental events, besides erratic changes in
monetary policy, that cause the actual
unemployment rate to deviate from the
natural rate. For instance, if consumers
become unduly pessimistic about their
prospects for future income and,
consequently, reduce their spending,
the economy can end up in a situation
where aggregate supply will exceed
aggregate demand at prices that firms
expected to prevail. In this situation, the
downward pressure on prices will make
the actual inflation rate fall below the
expected inflation rate, the utilization of
factors of production will fall, and the
unemployment rate will rise. Conversely,
if consumers become unduly optimistic
about prospects for future income and,
consequently, increase their spending
substantially, prices will be higher than
expected, the utilization of factors of
production will rise, and the
unemployment rate will fall.
Given the possibility of such
events, the central idea underlying
Taylor’s variability-based tradeoff is that
policymakers can choose the degree to
which monetary policy is used to buffer
the unemployment rate against
nonfundamental disturbances. For
instance, if consumers become unduly
pessimistic about the future and the
actual inflation rate turns out to be
lower than expected, the monetary
authority can then expand the money

supply to counteract the higher
unemployment that results from the
disinflationary shock. Similarly, if
consumers become unduly optimistic
about the future and the actual inflation
rate rises faster than expected, the
monetary authority can then contract
the money supply to counteract the
negative unemployment effect of the
inflationary shock.
The important point to note is
that such buffering is not inconsistent
with the natural rate theory because the
monetary authority is not trying to
create unexpected inflation or deflation
on a sustained basis. On the contrary,
the monetary authority is acting to offset
variability in unemployment caused by a
discrepancy between actual and
expected inflation. Various events can
cause actual inflation to deviate from
expected inflation, so there is a scope for
beneficial monetary policy actions that’s
entirely consistent with the natural rate
theory.
The Unemployment-Inflation Variability Tradeoff. Taylor
notes that successful buffering of the
unemployment rate against nonfundamental disturbances can dampen the
variability of both the inflation and the
unemployment rate. However, he also
argues that at some point, further
reduction in the variability of the
unemployment rate can come only at
the expense of more variability in the
inflation rate.
The problem is that a change
in the inflation rate tends to persist over
time. For instance, if the inflation rate
rises because of some unexpected event,
all else remaining the same, the inflation
rate will tend to be higher in the future.
This means that even if the monetary
authority undertakes monetary policy
action to fully offset the unemployment
effects of, say, a positive inflation shock,
it’s left facing a path of future inflation
that’s higher than the path that
everyone expected to prevail prior to the
shock. To nudge the inflation rate back

down toward the previously expected
path, the monetary authority has to
tighten monetary policy more than what
would be needed to keep the unemployment rate at the natural rate. The
additional monetary restraint raises the
unemployment rate above the natural
rate and, therefore, adds to the
variability of the unemployment rate.
But it also works to bring the inflation
rate back toward the pre-shock level
and therefore serves to lower the
variability of the inflation rate.
Furthermore, the more quickly the
monetary authority aims to bring the
inflation rate back down to the pre-shock level, the more variability it will
inflict on the unemployment rate.
This then is the tradeoff facing
policymakers, according to Taylor’s
theory. To reduce the variability of the
inflation rate, the monetary authority
must be willing to tolerate increased
variability in the unemployment rate.
Two ingredients seem necessary for such
a tradeoff to exist. First, there must be
disturbances (other than erratic
monetary policy actions) that cause the
actual inflation rate to deviate from the
expected inflation rate.10 Second, any
change in the inflation rate must tend to
be persistent. It’s this property of
persistence that leads to a situation
where the variability of the inflation rate
can be lowered only at the expense of
greater variability in the unemployment
rate.
To summarize, Taylor has
developed an inflation and output
tradeoff consistent with the natural rate
theory. His tradeoff involves the
variability of the inflation rate and the
variability of output, which, recall, is
closely related to the variability of the
unemployment rate. Figure 2 shows
what this tradeoff looks like for the U.S.
10 Such disturbances could be due to
consumers’ undue optimism or pessimism
about their future earning prospects. More
generally, any disturbance that results in
pricing mistakes by businesses would qualify.

By choosing how aggressively to combat
variability in the inflation rate, the
monetary authority determines where
on this curve to locate. A policy of
aggressively combating deviations in the
inflation rate from a given target path
will put the economy on a point like B,
where the variability of output is
relatively high but the variability of the
inflation rate is low. Conversely, a less
aggressive policy of combating
deviations in the inflation rate from a
given target path will put the economy
on a point like A, where the variability
in output is low but variability in the
inflation rate is relatively high.11
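A toy calculation can make the shape of such a frontier concrete. The sketch below is not Taylor’s estimated model: it simply assumes an accelerationist Phillips curve with slope alpha, a policy rule that leans against last period’s inflation with strength theta, and serially uncorrelated price shocks, and then reports the implied long-run standard deviations of inflation and the output gap for several settings of theta.

```python
import numpy as np

# Toy variance frontier in the spirit of the Taylor curve (not Taylor's model).
# Phillips curve:  pi_t = pi_{t-1} + alpha * y_t + eps_t
# Policy rule:     y_t  = -theta * pi_{t-1}
# Substituting gives pi_t = (1 - alpha*theta) * pi_{t-1} + eps_t, an AR(1)
# process, so the long-run variances have simple closed forms.

alpha, sigma_eps = 0.5, 1.0                 # assumed slope and shock volatility
for theta in (0.25, 0.5, 1.0, 1.5, 2.0):    # how hard policy leans on inflation
    rho = 1.0 - alpha * theta               # inflation persistence under the rule
    sd_pi = sigma_eps / np.sqrt(1.0 - rho**2)
    sd_y = theta * sd_pi                    # output-gap volatility the rule creates
    print(f"theta={theta:4.2f}  sd(inflation)={sd_pi:5.2f}  sd(output gap)={sd_y:5.2f}")

# Raising theta lowers sd(inflation) but raises sd(output gap); sweeping theta
# traces out a bowed-in frontier qualitatively like the curve in Figure 2.
```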
THE TAYLOR CURVE AND THE
CONDUCT OF MONETARY
POLICY
Taylor posed the problem of
the best way to conduct monetary policy
in the following way.12 Is there any
particular point on the Taylor curve
that’s likely to be acceptable to all
policymakers?
Suppose that some
policymakers are more concerned about
variability in the inflation rate and
others about variability in the
unemployment rate. In that case, the
point where the curve in Figure 2 bends sharply,
point C, is the variability combination for
which there is likely to be consensus.
The reasoning goes as follows.
Policymakers more concerned about
output variability are not likely to agree
on variability combinations that lie to
the northwest of point C because they
would be giving up a lot in terms of

11 The bowed-in shape of the curve indicates
that policymakers face a form of “diminishing
returns.” To bring about a given level of
decline in output variability, policymakers
must accept larger and larger amounts of
inflation variability (and vice versa). The
existence of such “diminishing returns” seems
plausible, although the exact reasons for it lie
in the character of the macroeconomic model
used by Taylor.
12 This description draws on Taylor's 1999
article.

output variability for meager gains in
inflation stability. Analogously,
policymakers more concerned about
inflation variability are not likely to
agree on variability combinations that lie
to the southeast of point C because they
would be giving up a lot in terms of
higher inflation variability for meager
gains in output stability. Consequently,
as long as there is some diversity of views
about the relative demerits of inflation
and output variability, the combination
for which there is likely to be consensus
is somewhere in the vicinity of point C.
Taylor recommended a policy
rule that gives equal weight to stabilizing
inflation and output. In particular, his
rule recommends that the Fed lower the
fed funds rate by half a percentage point
when real GDP falls below potential
GDP by 1 percent and that it raise the
fed funds rate by half a percentage point
if actual inflation rises above its target
path (of 2 percent) by 1 percentage
point. This policy rule has come to be
known as the Taylor rule. Taylor
recommended this rule, in part, because
it was simple. As he notes in his 1999
article (p. 47), this “[p]olicy rule was
purposely chosen to be simple. Clearly,
the equal weights on inflation and the
GDP gap are an approximation
reflecting the finding that neither
variable should be given negligible
weight.” 13
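For reference, the rule can be written as a simple formula. The half-point weights on the inflation and output gaps below are the ones described in the text; the 2 percent equilibrium real rate, and the one-for-one movement of the rate with inflation itself, come from Taylor’s well-known 1993 statement of the rule and are assumptions of this sketch rather than details given in the article.

```python
def taylor_rule_rate(inflation, output_gap, target=2.0, r_star=2.0):
    """Fed funds rate implied by a Taylor-type rule with equal 0.5 weights.

    inflation  : recent inflation rate, in percent
    output_gap : percent deviation of real GDP from potential (negative = shortfall)
    target     : inflation target path, 2 percent in the text
    r_star     : assumed equilibrium real rate (not given in the article)
    """
    return r_star + inflation + 0.5 * (inflation - target) + 0.5 * output_gap

# A 1 percent GDP shortfall lowers the prescribed rate by half a point, and each
# percentage point of inflation above target adds half a point to the gap term,
# matching the half-point responses described above.
print(taylor_rule_rate(inflation=2.0, output_gap=0.0))    # 4.0 at target, no gap
print(taylor_rule_rate(inflation=2.0, output_gap=-1.0))   # 3.5, half a point lower
```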
Taylor’s policy recommendation hinges on two important assumptions. His first assumption is that the
selection of a policy rule (or, equivalently, the selection of a variability
combination on the Taylor curve) will
occur through a democratic process.
Given this assumption, Taylor views the
economist’s job as proposing a policy rule
that’s most likely to command
consensus. His second assumption is that some
13 This rule will not put the economy on point
C on the Taylor curve, but it will deliver
similar variability in inflation and output.

policymakers are more leery of inflation
volatility and others more leery of
volatility in the unemployment rate.
This second assumption,
however, is troublesome. In effect,
Taylor treats a policymaker’s preferences
for inflation stability over output stability
or vice versa in the same way an economist would treat a person’s innate preferences for, say, apples over oranges. But
surely preferences about inflation and
output variability must derive from some
understanding of the relative merits of
output and inflation stability, an
understanding that ultimately must (or
should!) have some connection to how
output and inflation variability affects
the welfare of working households.
This consideration suggests
that the derivation of the variability
tradeoff is an important first step for the
satisfactory resolution of the question of
which monetary policy rule to adopt.
Taylor’s variability tradeoff defines the
choices that a monetary authority faces,
choices that are consistent with the

natural rate theory. But there remains a
second, equally important, step: to
determine how the economic welfare of
the typical household varies across
different points on the Taylor curve.
VARIABILITY AND ECONOMIC
WELFARE
At present, not much is known
about the economic welfare consequences of different variability
combinations on the Taylor curve.
Furthermore, the connection between
economic welfare and different degrees
of variability of inflation and output is
sufficiently complex that we cannot be
certain how economic welfare will
change as we move from a point like A
on the Taylor curve to points like B or C.
Turning first to the economic
welfare effects of inflation variability,
observe that variability of the inflation
rate will be most harmful if it affects the
real value, or purchasing power, of a
household’s earnings. During periods of
higher-than-expected inflation, growth

FIGURE 2
The Taylor Curve

Adapted from John B. Taylor, “Estimation and Control of a Macroeconomic Model with
Rational Expectations,” Econometrica, 47 (5), 1979, pp. 1267-86. Used with permission.


in nominal compensation will lag growth
in the general level of prices, and real
compensation will decline (recall that
this decline in real compensation is the
reason firms expand hiring during
periods of surprise inflation). Conversely,
during periods of lower-than-expected
inflation, households will experience
faster growth in real compensation.
These fluctuations in real
income inflicted by variability in
unexpected inflation cannot be good for
households. But how bothersome
variability in inflation is depends on how
much variability in unexpected inflation
it leads to. The important point here is
that the high variability of inflation at a
point like A in Figure 2 need not imply a
high variability of unexpected inflation.
The logic of the Taylor curve suggests
that some of it will come from variability
in expected inflation. But variability in
expected inflation need not have the
same effect on economic welfare as
variability in unexpected inflation. For
one thing, firms and workers have the
opportunity to alter compensation terms
in response to changes in inflation that
are expected to happen. Arguably, the
disruption caused by changes in
inflation that are expected to happen is
likely to be less than the disruptions
caused by unexpected changes in
inflation. Therefore, to assess the effects
of inflation variability on households, we
need information on how the mix
between expected and unexpected
inflation variability varies as we go from
a point like B on the Taylor curve to a
point like A. At present, this knowledge
is lacking.
Turning to the economic
welfare effects of output variability,
consider, again, points A and B on the
Taylor curve. At point A, variability in
output is much lower than at point B.
Why is this relevant? One obvious
answer is that output variability goes
hand-in-hand with variability in the
unemployment rate, which is of immediate concern to households. If we

use Okun's rule of thumb that a 1-percentage-point increase in the
unemployment rate corresponds to a 3-percentage-point drop in output from
trend, points A and B on the Taylor
curve would roughly correspond to
unemployment rate variability of about
1/3 and 1-1/3 percent, respectively.
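Written out (in my notation), the conversion is simply

\[
\sigma_u \;\approx\; \tfrac{1}{3}\,\sigma_y ,
\]

so, working backward from the unemployment figures just quoted, output variability of roughly 1 percent at point A and 4 percent at point B maps into unemployment-rate variability of about 1/3 and 1-1/3 percentage points.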
Fluctuations in the unemployment rate affect households in two
ways: the probability of job loss for
employed members and the probability
of job gain for unemployed members.
For instance, during a recession, when
the unemployment rate is relatively
high, the probability of job loss for
employed workers is also relatively high,
and the probability of job gain for
unemployed individuals is relatively low.
Thus, all individuals face a higher risk of
unemployment. Conversely, during an
economic expansion, the probability of
job loss for employed workers is relatively
low, and the probability of job gain for
unemployed workers is relatively high.
Hence, all individuals face a lower risk
of unemployment. If a policy rule reduces the variability of the unemployment rate, it will reduce fluctuations in
the risk of unemployment.
To make matters concrete, let’s
suppose that the monetary authority is
comparing two policy rules with the
following properties. Under the first
policy, the unemployment rate is
predicted to be (almost) constant at, say,
5 percent, and under the second policy
it’s predicted to fluctuate, with equal
probability, between 6 percent and 4
percent from one period to the next.
Observe that the average unemployment rate is 5 percent under the second
policy as well.
The effects of these two
policies on economic well-being will
depend on exactly how these policies
affect an individual’s probability of
experiencing unemployment. Suppose
that a lower or higher unemployment
rate implies that all households face a
proportionately lower or higher probability of experiencing unemployment. If
we ignore for now the inflation
variability effects of the two policies, it
follows that all households will benefit
under the second policy, relative to the
first, when the unemployment rate is 4
percent but will lose under the second
policy, relative to the first, when the
unemployment rate is 6 percent.
Economic research has shown that the
gain will be less than the loss so that,
overall, households will be economically
worse off under the second policy as
compared to the first. However, this
research has also shown that the
predicted loss can be quite small.14 If this
is the case, the important consideration
in comparing the two policies may turn
out to be the policies’ effects on inflation
variability rather than unemployment
rate, or output, variability.
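A back-of-the-envelope version of the comparison, under assumptions that are mine rather than the article’s (household consumption proportional to employment, 1 − u, and log utility), shows both why the gain in the good state falls short of the loss in the bad state and why the net loss can nonetheless be tiny.

```python
import math

def utility(u):
    # Assumed welfare when the unemployment rate is u: log utility over
    # consumption taken to be proportional to employment, 1 - u.
    return math.log(1.0 - u)

steady = utility(0.05)                                   # always 5 percent
fluctuating = 0.5 * utility(0.04) + 0.5 * utility(0.06)  # 4 or 6 percent, equal odds

# Concavity makes the gain at 4 percent smaller than the loss at 6 percent, so
# the fluctuating policy is worse on average, but only very slightly.
print(steady - fluctuating)   # about 0.00006 in utility units
```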
But this is not, by any means,
the only possibility. The economic
welfare effects of unemployment rate
variability depend importantly on the
details of how the fluctuations in the
unemployment rate affect an individual’s probability of experiencing
unemployment. If we drop the assumption that a lower or higher unemployment rate implies that all households
face a proportionately lower or higher
probability of experiencing unemployment, the outcome may be different. In
particular, if an increase or decrease in
the unemployment rate makes the
probability of experiencing unemployment rise or fall proportionately more for
people who are currently jobless, the loss
in economic welfare from following the
second policy will be larger. Also,
unemployment rate variability may not
be the only important consequence of
output variability; greater output
variability may adversely affect the
investment decision of firms and thereby
reduce the long-term growth rate of
worker productivity and wages.
14 For details on this point, see my Business
Review article.


CONCLUSION
An intelligent choice of
monetary policy requires knowledge
about what monetary policy can or
cannot accomplish. In the past,
monetary policy options were described
in terms of a tradeoff between the
unemployment rate and the inflation
rate, the so-called Phillips curve.
Macroeconomists no longer view the
Phillips curve as a viable “policy menu”
because its use as such is inconsistent
with mainstream macroeconomic
theory. In the late 1970s, John Taylor
suggested an alternative tradeoff for
policymakers to consider. Like that

suggested by the Phillips curve, Taylor’s
tradeoff is also concerned with
unemployment and inflation, but it
focuses on the variability of both the
unemployment rate and the inflation
rate. (Actually, Taylor focused on
output variability instead of unemployment rate variability, but the two are
very closely related.)
In particular, Taylor argued
that policymakers face a tradeoff
between the variability of inflation and
the variability of the unemployment
rate. Unlike the Phillips curve, the
Taylor curve displays a tradeoff
consistent with mainstream macro-

economic theory. Taylor’s development
and elucidation of this variability-based
tradeoff is clearly an important advance
in monetary policy thought. Still, the
Taylor curve does not resolve the
question of which monetary policy rule
to adopt. That decision requires some
understanding of how the welfare of
working households is affected by the
different combinations (of variability of
inflation and unemployment rates) on
the Taylor curve, an understanding that,
at present, is lacking. We hope that
future research will fill in this gap in our
knowledge. BR

REFERENCES

Chatterjee, Satyajit. “Why Does
Countercyclical Monetary Policy Matter?”
Federal Reserve Bank of Philadelphia
Business Review, Second Quarter 2001.

Friedman, Milton. “The Role of Monetary
Policy,” American Economic Review, 58,
March 1968, pp. 1-17.

Okun, Arthur M. “Potential GNP: Its
Measurement and Significance,” in
American Statistical Association,
Proceedings of the Business and Economics
Statistics Section; reprinted in Arthur M.
Okun, The Political Economy of Prosperity.
Washington, DC: The Brookings Institution,
1970.

Phillips, A.W. “The Relation Between
Unemployment and the Rate of Change of
Money Wage Rates in the United Kingdom,
1861-1957,” Economica, November 1958,
pp. 283-99.

Samuelson, Paul A., and Robert M. Solow.
“Analytical Aspects of Anti-Inflation
Policy,” American Economic Review (Papers
and Proceedings), 50, May 1960, pp. 177-94.

Santomero, Anthony M. “What Monetary
Policy Can and Cannot Do,” Federal
Reserve Bank of Philadelphia Business
Review, First Quarter 2002.

Sargent, Thomas J. The Conquest of
American Inflation. Princeton, NJ:
Princeton University Press, 1999.

Taylor, John B. “Estimation and Control of a
Macroeconomic Model with Rational
Expectations,” Econometrica, 47 (5), 1979,
pp. 1267-86.

Taylor, John B. “Monetary Policy Guidelines
for Inflation and Output Stability,” in
Benjamin M. Friedman, ed., Inflation,
Unemployment, and Monetary Policy.
Cambridge, MA: MIT Press, 1999.


Does Lower Unemployment
Reduce Poverty?
BY ROBERT H. DEFINA

Is the link between unemployment and
poverty as strong as many people think it
is? Possibly not. How strong the link is
depends critically on how we measure
poverty. And during the past two decades, researchers
have identified numerous shortcomings in the
government’s official procedures for determining the
extent of poverty. In this article, Bob DeFina presents
empirical evidence that improved measures of poverty
are less strongly related to changes in unemployment
than the headcount rate.
The record-setting U.S.
expansion of the 1990s, especially the
torrid growth in the latter half of the
decade, helped push the unemployment
rate down to its lowest level in 30 years.
By October 2000, the jobless rate had hit
3.9 percent, about 3 percentage points
below its previous peak. Such a
remarkable decline, when sustainable, is
to be celebrated for many reasons. In
part, an improving labor market signals
that the economy’s overall prosperity is
being more widely shared. These
improvements are especially welcome

Bob DeFina is the
John A. Murphy
Professor of Economics, Villanova
University, Villanova, Pennsylvania. When he
wrote this article,
he was a visiting
scholar in the Research Department of the
Philadelphia Fed.


when they help the country’s most
financially vulnerable population — the
poor.
As in most countries, the
extent of poverty in the United States is
officially gauged using a headcount
rate, which is the fraction of the
population that is poor. To determine
how many people are poor, government
statisticians estimate the income needed
for a minimally decent life; that number
is called a poverty threshold. A person is
considered poor if he or she lives in a
household with an income less than the
poverty threshold. Having counted the
number of poor individuals, statisticians
then divide that number by the total
population, which yields the headcount
rate. In 2000, about 31.1 million
individuals were classified as poor. With
a population of 275.9 million at the time,
the headcount rate was 31.1/275.9, or
11.3 percent.
A tightening labor market,
indicated by falling unemployment,
potentially reduces the headcount rate

in several ways. Temporary and long-lived changes in unemployment alter job
availability, work hours, promotion
possibilities, and real wages. These, in
turn, influence families’ financial
positions and their likelihood of falling
above or below official poverty
thresholds. The impact on the
headcount rate need not be immediate
or, at times, even strong. Other labor
market developments, perhaps specific
to population sub-groups, might interfere
with the benefits of a generally
prosperous economy. Still, analyses of
historical data, based on both national
and state-level data, indicate that
changes in the unemployment rate are
related to significant reductions in the
fraction of the population that is
officially poor, especially once other
factors are accounted for.1 For example,
the strong economy of the past decade
coincided with a substantial decline in
the headcount rate (Figure 1).
While seemingly intuitive and
straightforward, the link between
unemployment and poverty may not be
as strong as it has traditionally been
thought to be. Any conclusions about
how unemployment affects poverty
depend critically on the particular way
in which poverty is measured. And
during the past two decades, researchers
have identified numerous shortcomings
in the government’s official procedures
for determining the headcount rate.
They have suggested improvements,
both in the way individuals are identified
as poor and in the characteristics of the
poor population used to measure the
extent of poverty.2

1 Examples can be found in the articles by
Rebecca Blank (1996 and 2000) and the
articles by Blank and Alan Blinder; David
Cutler and William Katz; Blank and David
Card; Robert Haveman and John Schwabish;
and Paul Romer.

FIGURE 1
The Unemployment and Poverty Rates

On the basis of empirical
evidence presented in this article,
improved measures of poverty are less
strongly related to changes in
unemployment than the headcount
rate. The unemployment rate declines
of the 1990s were not related at all to
some alternative poverty indicators.
HOW IS POVERTY MEASURED
IN THE UNITED STATES?
Poverty in the United States is
measured by the Census Bureau, which
uses an approach developed in the early
1960s.3 The procedure begins with a
benchmark income threshold meant to
gauge the resources an individual needs
to purchase a minimally acceptable
bundle of goods and services. In 2000,
2 There are also variants on the way unemployment is measured. The headline unemployment rate, which measures unemployed
workers aged 16 years or older as a percentage
of the civilian labor force and which I use in
my analysis described below, is one of several
measures compiled by the Bureau of Labor
Statistics.
3 The procedure is detailed in Mollie
Orshansky’s article and in the article by Gary
Fisher.

the baseline threshold (for a single,
nonelderly adult) was $8959.
The individual baseline
threshold is then adjusted to account for
different family sizes and for the number
of children versus adults. The adjustments recognize that all material needs
do not rise proportionately with the
number of family members. Whether a
family has two or three individuals, it is
likely to have, say, only one refrigerator.
The less-than-proportional increases in
need show up in the official thresholds:
for example, moving from a family with
one nonelderly adult to a family with
two nonelderly adults causes the official
2000 poverty line to rise from $8959 to
$11,531, a 29 percent increase. The
adjustment factors for different family
sizes and types are known as equivalence scales because they are meant to
yield an amount of income necessary to
leave families of different size or
composition with an equivalent
standard of living.
The resulting thresholds are
increased annually for consumer price
inflation nationwide, with the aim of
keeping the purchasing power of the
poverty level unchanged over time. A
lack of data prevents an accounting for
differences in the cost of living in

different regions of the United States.
No adjustment is made for changes in
real living standards, such as raising
threshold levels in line with increases in
the average real income of families.
To identify who is poor, the
Census Bureau compares a family’s
actual pre-tax cash income (including
cash payments from the government)
with its appropriate poverty threshold.
Members of families whose income is
below their threshold are deemed poor.
The extent of poverty is then gauged by
simply summing the number of poor
individuals and expressing the result as a
fraction of the population, that is, the
headcount rate.
The headcount rate is
measured retrospectively once a year.
The Census Bureau collects the needed
data in its March Current Population
Survey, which asks questions about the
income that individuals received in the
preceding year. The March survey
covers about 60,000 households. Thus,
the Census Bureau does not literally
compare the incomes of every U.S.
family to its relevant threshold. Instead,
it makes the comparison for a large
random sample of U.S. families, then
uses the information to statistically
estimate the national headcount rate.
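A minimal sketch of that calculation, using a tiny made-up sample rather than actual Current Population Survey records:

```python
# Each record: pre-tax cash income, the official threshold for that family's
# size and composition, and the number of family members (all values made up).
families = [
    {"income": 7500,  "threshold": 8959,  "members": 1},   # poor
    {"income": 30000, "threshold": 11531, "members": 2},   # not poor
    {"income": 10000, "threshold": 11531, "members": 3},   # poor
]

poor_people = sum(f["members"] for f in families if f["income"] < f["threshold"])
total_people = sum(f["members"] for f in families)

headcount_rate = poor_people / total_people
print(headcount_rate)   # 4/6 here; 31.1/275.9, or 11.3 percent, for the U.S. in 2000
```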
PROBLEMS WITH THE OFFICIAL
MEASURE
The official poverty measure is
not without critics. Indeed, the Census
Bureau’s approach has widely recognized shortcomings that concern the
way individuals are officially identified as
poor and the way the extent of poverty is
measured. Because various studies have
provided comprehensive discussions of
these concerns, only the most important
ones are touched on here.4

4 Measuring Poverty: A New Approach, prepared
by the Panel on Poverty and Family Assistance, contains a thorough analysis of
identification issues. See the article by B.
Zheng for a discussion of aggregation
concerns.

Problems Identifying Who
Is Poor… Numerous researchers have
argued that the baseline poverty
threshold is too low. As mentioned
earlier, the poverty threshold for a family
of two adults is $11,531, a fairly meager
sum. A more glaring example perhaps is
the official threshold for a family of
eight adults: $31,704, or less than $4000
a person. The official adjustments to the
baseline for different family sizes and
compositions have also come under fire.
Critics argue that the adjustments are
inconsistent and counterintuitive.
Essentially, the changes in thresholds
assigned to families as their size and
composition change seem somewhat
judgmental, with no clear, discernable
pattern. These nonsystematic adjustments call into question the extent to
which the resulting poverty thresholds
represent equivalent standards of living
for families of different size or
composition.
Poverty analysts and budget
experts have prepared alternative
thresholds that are 30 percent to 100
percent above the official ones.5 These
suggested increases are based on
updated and more complete analyses of
budget data and family spending
patterns.
The measure of family income
that is compared to poverty thresholds is
also problematic. Official calculations
use a concept called census income,
which includes all the money income
received by a family before any income
taxes are deducted. Money income
includes wages and salaries, interest
income, government income support
payments like unemployment insurance,
or even a cash birthday gift.
Researchers have found the
concept of census income confusing. On
the one hand, it includes the portion of
a family’s income that may come from
5 Many of these alternative budgets are
discussed in Measuring Poverty: A New
Approach.

some government programs — the cash
income support payments from
unemployment insurance, Social
Security, and the like. On the other
hand, it excludes that part of a family’s
income that may come from other
government programs — those providing
in-kind payments like food stamps and
subsidized housing — even though the
in-kind payments represent real
purchasing power to families. Census
income also ignores the income taxes
that families pay, monies obviously not
available for spending. A more
consistent approach would either (1)
ignore all government payments and
taxes in order to measure poverty before
any government intervention; or (2)
recognize them all in order to gauge
poverty after the government’s actions
are taken into account. It would also
deduct any work-related expenses, since
these decrease a family’s spendable
income regardless of the government’s
policy actions.
Addressing these shortcomings
in the way poor individuals are identified would alter both the number of
individuals officially classified as poor
and their demographic mix.
Consequently, the relationship
between the newly defined poor
population and swings in
unemployment could be different
from that for the old official
population. Using higher poverty
thresholds, for instance, would mean that
the poverty population would include more
full-time workers, albeit ones with
relatively low wages. The poverty status of
such individuals would probably be less
sensitive to changes in unemployment, since
they would be deemed poor whether or not
they work. Correcting the other problems in
the official procedure would also change
the sensitivity of poverty to unemployment,
although the net impact of all the
recommended changes is unclear.
…And Problems
Determining What the Extent of
Poverty Is. The official method for
gauging the total degree of poverty has
also been criticized, essentially because
it neglects characteristics of the poor
population other than the number of
poor individuals. That is, the official
procedure equates the extent of poverty
with the headcount rate. But since
publication of the landmark work of
Nobel Prize-winning economist
Amartya Sen, many researchers feel
that the official approach is too
restrictive. They argue that, at a
minimum, any assessment of the degree
of poverty should also take into account
the average poverty gap and income
dispersion among poor individuals.
The average poverty gap
represents the average dollar difference
between the income of poor families and
their relevant poverty thresholds. In
2000, that gap equaled $6820 per
family.6 Why might the poverty
gap be relevant for gauging the
extent of poverty? Sen suggests performing the following
mental exercise. Suppose that

6 Official poverty data are published in the Census Bureau's Current Population Reports, P-60 series.


the number of poor individuals remains
unchanged, but each poor family has its
income cut in half. Now ask yourself,
“Has poverty increased as a result?”
Intuitively, many people would answer
“yes” because each family now suffers
greater financial hardship. Notice that
the headcount rate, which is based only
on the number of poor individuals,
indicates that the extent of poverty has
not changed.
Related logic suggests that
including income dispersion among poor
individuals is important in measuring the
degree of poverty. To see why, perform
another mental exercise. Suppose that
both the number of poor individuals and
the average poverty gap remain
unchanged. Now, take a dollar from the
poor person with the lowest income and
give it to the poor person with the
highest income. This monetary transfer
increases income dispersion among poor
individuals, since, other things being
equal, poor individuals at the extremes
of the income distribution move farther
apart. Once again, ask yourself, “Has
poverty increased as a result?”
According to Sen, the answer
is “yes” because a dollar is worth more to
the poorest person than to the least poor
person. Essentially, Sen accords greater
social weight to the financial situation of
the poorest person compared to that of
the least poor person. The loss to the
poorest person thus outweighs the gain
to the least poor person. In this view,
greater inequality among the poor, other
things equal, suggests a greater degree
of poverty. The official headcount rate,
by contrast, is unaffected.
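Putting the second mental exercise into numbers makes the point concrete. The incomes below are illustrative, the threshold is the official 2000 line for two nonelderly adults, and the transfer is scaled up to $500 simply so that the change is visible.

```python
import statistics as st

threshold = 11531                      # official 2000 line for two nonelderly adults
before = [3000, 6000, 9000]            # incomes of three poor families
after  = [2500, 6000, 9500]            # transfer from the poorest to the least poor

for incomes in (before, after):
    headcount = len(incomes)                             # number of poor families
    avg_gap = st.mean(threshold - y for y in incomes)    # average poverty gap
    dispersion = st.pstdev(incomes) / st.mean(incomes)   # income dispersion (CV)
    print(headcount, avg_gap, round(dispersion, 3))

# The headcount and the average gap are identical before and after the transfer,
# but the coefficient of variation rises, so a Sen-style measure registers more
# poverty even though the official headcount rate is unchanged.
```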
Sen’s assessment certainly can
be debated. For example, one can
reasonably argue that poor individuals
are in sufficiently similar circumstances
that a dollar in the hands of each should
be given equal weight. Still, his framework cannot be dismissed out of hand
and, in fact, has been championed by
many prominent poverty analysts.
During the past two decades, they have

developed new poverty indexes that
incorporate and expand upon Sen’s
original work.
Accounting for both the
average poverty gap and income
dispersion among the poor when
gauging poverty conceivably could alter
the perceived benefits of declines in
unemployment. It is possible, for
instance, that lower unemployment
results in a lower average poverty gap
without affecting the number of poor
individuals. Such an outcome would
occur if an unemployed person got a job

that paid poverty-level wages. The
person would remain officially poor, but
the income from the job could reduce
his poverty gap. Consequently, lower
unemployment would reduce a broader
measure of poverty but leave the
headcount rate unchanged. Alternatively, lower unemployment might
result in fewer poor individuals but leave
the average poverty gap unchanged.
This would happen, for instance, if the
individuals no longer deemed poor had
poverty gaps close to the average gap.

Accounting for both the average poverty gap
and income dispersion ... when gauging
poverty conceivably could alter the perceived
benefits of declines in unemployment.

In sum, recommended
improvements in the way poor
individuals are identified and grouped
potentially affect the relationship
between changes in the unemployment
rate and changes in measures of poverty.
It is, of course, impossible to know in
advance how the suggested changes will
actually affect the relationship.

AN EMPIRICAL ANALYSIS OF
ALTERNATIVE POVERTY
INDICATORS
To explore the practical
importance of the suggested improvements, I conducted an empirical analysis
of how the unemployment declines of
the 1990s were related to the headcount
rate and nine alternative poverty
indicators.7 The alternatives incorporate
suggested improvements for identifying
who is poor and for measuring the
extent of poverty. To keep the discussion
manageable, I will provide details on the
results for only three of the alternatives
and simply mention in passing some of
the other findings. The results for these
three alternatives are, however,
representative of the findings for the
others.

Three Alternative
Indicators. The first alternative
indicator is a revised headcount rate, for
which poor individuals are identified
using higher poverty thresholds, an
improved set of equivalence scales, and
a pre-tax measure of family income that
excludes all government cash and in-kind payments and subtracts an
estimate of work-related expenses. The
new thresholds and equivalence scales
are consistent with the recommendations of the Panel on Poverty and
Family Assistance, a group of experts
who worked on improving procedures
for measuring poverty.8
The second alternative
indicator is the average poverty gap. To
make the gap calculations more
meaningful, I express each family’s

7 See my working paper.

8 The new thresholds were set 30 percent
higher than the official ones. The new
equivalence scales were computed using the
poverty threshold of a single adult as the
benchmark.

income shortfall as a fraction of its
associated poverty threshold. Doing so is
a standard procedure. The methods for
identifying poor individuals and for
measuring income are the same as for
the alternative headcount rate.
The third alternative indicator
is a gauge of income dispersion among
the poor. I use the coefficient of
variation, which equals the standard
deviation of income among poor
individuals divided by the average
income of the poor.9 Once again, the
procedures for identifying poor
individuals and for measuring income
are the same as for the alternative
headcount rate.
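For concreteness, here is how the three alternative indicators could be computed for a toy sample of families. The incomes and the (revised) threshold are illustrative stand-ins; the actual calculations apply the thresholds, equivalence scales, and income definition described above to the Current Population Survey data.

```python
import statistics as st

# (income, revised threshold) pairs for a toy sample of families.
families = [(5000, 11600), (9000, 11600), (4000, 11600), (20000, 11600)]

poor = [(y, z) for y, z in families if y < z]

headcount = len(poor) / len(families)              # revised headcount rate
avg_gap = st.mean((z - y) / z for y, z in poor)    # average poverty gap, expressed
                                                   # as a share of the threshold
poor_incomes = [y for y, _ in poor]
dispersion = st.pstdev(poor_incomes) / st.mean(poor_incomes)   # coefficient of variation

print(headcount, round(avg_gap, 3), round(dispersion, 3))
```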
An Analysis of State-Level
Data. My analysis is based on data from
all 50 U.S. states (plus Washington,
D.C.) covering the years 1991 to 1998.
The data come from the Census
Bureau’s March Current Population
Survey, the same information used to
calculate the official headcount rate.
Using state-level data, as opposed to
national data, allows me to increase the
number of observations used in the
study. It also permits me to control for a
variety of demographic influences on
the poverty indicators not possible with
national data. These other variables will
serve as controls to better isolate the particular relationship with unemployment.
I computed state averages for
all of the indicators and other variables
in each of the years. Following Census
Bureau guidelines for handling state-level data, I then calculated two-year
averages for the years 1991/1992, 1993/
1994, 1995/1996, and 1997/1998. Thus,
my data set has 204 state-level values for
each variable in the study: one for each
of the 51 “states” in each of the four
time periods.

Average period values for the
four poverty indicators are presented in
Figure 2. As can be seen, both the
official and revised headcount poverty
rates initially rose and then fell
substantially during the nineties. The
decline in the official poverty rate was
greater. By contrast, the poverty income
gap and the dispersion of income among
the poor fell much less. Indeed, the
level of income dispersion ended the
study period higher than where it began.
These very different profiles suggest that
the relationship of each indicator to
unemployment will vary.
It is also useful to examine how
closely the different poverty indicators
correlate with one another across states
and time periods. The degree of
correlation suggests whether each
poverty indicator provides substantially
different information. To measure the
degree of correlation, I used a statistic
known as a correlation coefficient,
where a value of 1 indicates perfect
correlation. For the official and
alternative headcount rate, the value of
the correlation coefficient is 0.92. That
is, despite the different techniques used
for identifying poor individuals, the

patterns of variation in the alternative
headcount rates across states and over
time are quite similar. By contrast, the
correlation coefficients between the
poverty gap and the headcount rates
and between income dispersion and the
headcount rates are much lower. These
range between 0.25 and 0.35. Thus, the
poverty gap and income dispersion
measures appear to provide a different
view of the extent of poverty than the
headcount rates. Finally, the poverty gap
and income dispersion are themselves
quite highly correlated, with a
coefficient value of 0.96.
Statistical Models of the
Poverty Indicators. What is the
relationship between the unemployment
rate and each of the indicators? To
answer the question, I estimated
statistical models in which the
movements in each poverty indicator
are related to movements in the
unemployment rate and the other
control variables. The control variables
are ones that have been used in other
studies. Two of these are meant to
account for changes in wages and hours
that are not correlated with the
unemployment rate: median state real

FIGURE 2
Four Poverty Indicators

Average poverty indicators for 50 states + Washington, D.C., each divided by its average
level in the 1991-1992 period.

9 This is a standard way of measuring income
dispersion, although others, such as the
so-called Gini coefficient, are available. See the
1995 Business Review article by Martin Asher
and me.


per capita income and the standard
deviation in state real per capita income.
The others are demographic variables
that have been found to vary systematically with poverty indicators: the
percent of the population aged 16 years
to 19 years, the percent 65 years and
older, the percent in female-headed
families, the percent black, the percent
residing in metropolitan areas, the
percent with at least a college degree,
and the percent not in the labor force.10
The model also controlled for determinants of poverty that are unique to each
state and year but that are not captured
by the other variables.11
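A sketch of what that estimation could look like in code. The data file and column names are hypothetical stand-ins for the state-level panel described above; what the sketch preserves is the structure laid out in the text and in footnote 11: logs of the nondemographic variables, demographic controls in levels, and state and period fixed effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: 51 "states" x 4 two-year periods = 204 rows.
df = pd.read_csv("state_poverty_panel.csv")

formula = (
    "np.log(headcount) ~ np.log(unemployment) + np.log(median_income)"
    " + np.log(income_sd) + pct_16to19 + pct_65plus + pct_female_headed"
    " + pct_black + pct_metro + pct_college + pct_not_in_laborforce"
    " + C(state) + C(period)"          # state and period fixed effects
)
model = smf.ols(formula, data=df).fit()

# With both sides in logs, this coefficient is an elasticity: the percentage
# change in the poverty indicator per 1 percent change in the unemployment rate.
print(model.params["np.log(unemployment)"])
```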
Results of the estimations are
represented in Figures 3 through 6. Each
figure shows the relationship between
the unemployment rate and the particular poverty indicator, after statistically controlling for the influences of
all the other variables in the model,
based on 51 “states” and 4 two-year
periods. As mentioned before, controlling for the other influences allows
the link between unemployment and
each poverty indicator to be seen more
clearly. In statistical terms, the figures
show the partial correlation between the
unemployment rate and the poverty
indicators.
10 In theory, the use of the demographic
control variables can hinder estimation of the
relationship between the unemployment rate
and the poverty indicators if the variables are
highly correlated with the unemployment
rate. This is not an actual concern in the
present study. The correlation coefficients
between each of the demographic variables
and the unemployment rate are small, the
largest being about 0.34.
11 The approach I have used is technically
known as a fixed-effects regression. Rebecca
Blank and David Card’s study also used a
fixed-effects regression model to study the
relationship between unemployment and
poverty. Also, all the nondemographic
variables are expressed as natural logarithms.
Expressing the variables as natural logs allows
the estimated relationship between the
unemployment rate and the poverty
indicators to be interpreted as an elasticity –
the percentage change in the poverty
indicator associated with a 1 percent change
in the unemployment rate.

Figure 3 displays the relationship between the unemployment rate
and the official headcount rate. The
points in the scatterplot indicate a
generally positive relationship: As
unemployment rates rise, official
headcount rates tend to rise as well,
even after accounting for all other
influences on the headcount rates. The
upward-sloping line fitted through the
points gives the average relationship:
Each 1 percent increase in the unemployment rate is associated with about a
0.12 percent increase in poverty. The
estimated magnitude of the response is
consistent with that found by other
researchers using state-level data. While
there is clearly variation in this
relationship — not all points lie exactly
on the line — the points are clustered
closely enough for the relationship to be
statistically significant.
Figure 4 presents the results for
the revised headcount rate. As is true
for the official rate, the revised rate has
a clear positive relationship with the
unemployment rate, after accounting
for the other influences. The points are
rather closely clustered around the
average response line, and the
relationship is statistically significant.
The size of the estimated average
response is smaller, though, by about
half. Further investigation revealed that
the smaller response is due mainly to the
use of a higher poverty threshold. As
noted earlier, the higher thresholds
capture more individuals who remain
poor whether they work or not.
In contrast to the headcount
rates, neither the poverty gap nor
income dispersion among the poor is
significantly related to unemployment.
The relationship between the unemployment rate and the poverty gap is
illustrated in Figure 5. The points in
Figure 5 suggest a weakly positive relationship. Indeed, the average response
line barely slopes upward. Moreover, the
points are widely dispersed around the
line and are noticeably less clustered

than those in Figures 3 and 4. The large
amount of dispersion means that both
large and small poverty gaps occurred
regardless of whether unemployment
rates were low or high. Indeed, a formal
statistical test confirms the lack of a
significant link between the unemployment rate and the poverty gap.
A similar picture emerges for
income dispersion among the poor
(Figure 6). The average relationship
between the unemployment rate and
the adjusted income dispersion measure
is upward sloping, but less so than that
for the headcount rates. And as with
the poverty gap, the points in the
scatterplot are widely dispersed around
the line. A formal test indicates a
statistically insignificant link between
unemployment and income dispersion.
The results just described
appear to hold up under further study. I
redid the preceding analysis using a
different income definition to compute
the three indicators and the conclusions
were the same.12 Namely, the revised
headcount rate exhibits a significant
link with the unemployment rate, but of
a smaller magnitude than does the
official headcount rate. Neither the
recomputed poverty gap nor recomputed income dispersion among the poor
had a statistically significant relationship
with the unemployment rate. I also
explored the relationship between the
unemployment rate and a
comprehensive poverty index, developed by James Foster, Joel Greer and
Erik Thorbecke, that simultaneously
includes the headcount rate, the
average poverty gap, and income
dispersion among the poor. No
significant link emerged, regardless of
the income definition used.
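
For readers unfamiliar with the Foster, Greer, and Thorbecke family of measures, here is a brief Python sketch of the standard formula; the income values and poverty line are invented solely for illustration and are not data from the article.

import numpy as np

def fgt_index(incomes, poverty_line, alpha):
    """Foster-Greer-Thorbecke index: alpha = 0 is the headcount rate,
    alpha = 1 the average normalized poverty gap, and alpha = 2 weights
    larger shortfalls more heavily, reflecting dispersion among the poor."""
    incomes = np.asarray(incomes, dtype=float)
    poor = incomes < poverty_line
    gap = np.clip((poverty_line - incomes) / poverty_line, 0.0, None)
    return np.mean(np.where(poor, gap ** alpha, 0.0))

# Hypothetical annual incomes and poverty threshold, for illustration only.
incomes = [4_000, 8_000, 9_500, 15_000, 30_000, 60_000]
z = 10_000

print(fgt_index(incomes, z, alpha=0))   # 0.50: half the sample is poor
print(fgt_index(incomes, z, alpha=1))   # about 0.14: average gap ratio
print(fgt_index(incomes, z, alpha=2))   # about 0.07: dispersion-sensitive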

12 The other income concept starts with all private-sector income, subtracts all income taxes paid, and adds in all government cash and in-kind payments. It also subtracts an estimate of work expenses.
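
A rough sketch of how such an income concept might be assembled from household-level components appears below; the data file and column names are hypothetical stand-ins, since the underlying survey variables are not spelled out here.

import pandas as pd

# Hypothetical household records; column names are invented placeholders.
households = pd.read_csv("household_records.csv")

households["alt_income"] = (
    households["private_income"]          # all private-sector income
    - households["income_taxes_paid"]     # minus all income taxes paid
    + households["govt_cash_transfers"]   # plus government cash payments
    + households["govt_inkind_value"]     # plus the value of in-kind payments
    - households["est_work_expenses"]     # minus estimated work expenses
)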

FIGURES 3, 4, 5, AND 6
[Scatterplots with fitted average-response lines; images not reproduced in this text version.]
Figure 3: Unemployment and the Official Headcount Rate (headcount rate against the unemployment rate)
Figure 4: Unemployment and the Revised Headcount Rate (headcount rate against the unemployment rate)
Figure 5: Unemployment and the Poverty Gap (poverty gap against the unemployment rate)
Figure 6: Unemployment and Income Dispersion Among the Poor (income dispersion against the unemployment rate)

Note: All variables are in logarithms. The variable on each vertical axis has been adjusted to account for variables in the model other than the unemployment rate. Thus, each figure shows the partial correlation between the unemployment rate and a particular poverty indicator.
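
The adjustment described in the note is the standard partialling-out used in added-variable (partial regression) plots. The sketch below shows the idea in Python; the data file, variable names, and controls are hypothetical, and it illustrates the general technique rather than the article’s exact procedure.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

df = pd.read_csv("state_panel.csv")        # hypothetical panel data
controls = "C(state) + C(year) + pct_elderly"

# Residuals of the poverty indicator and of unemployment, each net of the
# controls; plotting one against the other shows their partial correlation.
poverty_resid = smf.ols(f"np.log(headcount) ~ {controls}", data=df).fit().resid
unemp_resid = smf.ols(f"np.log(unemp) ~ {controls}", data=df).fit().resid

# By the Frisch-Waugh-Lovell theorem, the fitted slope equals the coefficient
# on log unemployment in the full multiple regression.
slope, intercept = np.polyfit(unemp_resid, poverty_resid, 1)

plt.scatter(unemp_resid, poverty_resid)
plt.plot(unemp_resid, intercept + slope * unemp_resid)
plt.xlabel("unemployment rate (adjusted)")
plt.ylabel("poverty indicator (adjusted)")
plt.show()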

CONCLUSION
Historically, the official
headcount rate has generally moved
with changes in unemployment, rising as
unemployment rose and vice versa.
This sympathetic relationship offered
one more reason to cheer a strengthening labor market: not only did the average person gain but so did society’s most vulnerable.
It is widely recognized,
however, that the method by which
poverty is officially gauged has a variety
of shortcomings. These shortcomings
include the methods for identifying who
is poor and for measuring the extent of poverty. During the past two decades,
researchers have suggested numerous
improvements in poverty measurement,
including the use of higher poverty
thresholds, better equivalence scales,
more coherent income definitions, and
additional indicators that reflect
information beyond simply the number of poor individuals. Should these
improvements be implemented, it is
quite possible that the measured link
between poverty and unemployment
could change.
Indeed, my research on the
experience of the 1990s reveals that the
relationship between unemployment and the revised poverty headcount rate was much weaker than that between the unemployment rate and the official poverty rate. The revised headcount
rate did decline significantly as
unemployment fell, but 40 percent less
than the official headcount rate did.
Moreover, the unemployment rate showed no significant statistical link to
either the average poverty gap or
income dispersion among the poor.
Taken together, the findings caution
against overreliance on lower
unemployment as an anti-poverty
strategy. While helpful in some regards,
its impact could well be overstated. BR

REFERENCES

Asher, Martin A., and Robert H. DeFina. “Has Deunionization Led to Higher Earnings Inequality?” Federal Reserve Bank of Philadelphia Business Review, November/December 1995.

Blank, Rebecca. “Why Were Poverty Rates So High in the 1980s?” in D. Papadimitriou and E. Wolff, eds., Poverty and Prosperity in the USA in the Late Twentieth Century. London: Macmillan, 1993.

Blank, Rebecca. “Why Has Economic Growth Been Such an Ineffective Tool Against Poverty in Recent Years?” in J. Neill, ed., Poverty and Inequality, The Political Economy of Redistribution. Kalamazoo: W.E. Upjohn Institute, 1996.

Blank, Rebecca. “Fighting Poverty: Lessons From Recent U.S. History,” Journal of Economic Perspectives, 14, 2000, pp. 3-19.

Blank, Rebecca, and Alan Blinder. “Poverty and the Macroeconomy,” in Sheldon Danziger and Daniel Weinberg, eds., Challenging Poverty: What Works and What Doesn’t. Cambridge, MA: Harvard University Press, 1987.

Blank, Rebecca, and David Card. “Poverty, Income Distribution, and Growth: Are They Still Connected?” Brookings Papers on Economic Activity, 2, 1993, pp. 285-339.

Cutler, David, and Lawrence Katz. “Macroeconomic Performance and the Disadvantaged,” Brookings Papers on Economic Activity, 2, 1991, pp. 1-74.

DeFina, Robert. “The Impact of Unemployment on Alternative Poverty Measures,” Federal Reserve Bank of Philadelphia Working Paper 02-8, May 2002.

Fisher, Gary. “The Development and History of the Poverty Thresholds,” Social Security Bulletin, 55, 1992, pp. 3-14.

Foster, J.E., J. Greer, and E. Thorbecke. “A Class of Decomposable Poverty Measures,” Econometrica, 52, 1984, pp. 761-66.

Haveman, Robert, and John Schwabish. “Has Macroeconomic Performance Regained Its Anti-Poverty Bite?” Contemporary Economic Policy, 18, 2000, pp. 415-27.

Orshansky, Mollie. “Counting the Poor: Another Look at Poverty,” Social Security Bulletin, 28, 1965, pp. 3-29.

Panel on Poverty and Family Assistance. Measuring Poverty: A New Approach. Washington, D.C.: National Academy Press, 1995.

Romer, Paul. “Poverty and Macroeconomic Activity,” Federal Reserve Bank of Kansas City Economic Review, First Quarter 2000, pp. 1-13.

Sen, Amartya. “Poverty: An Ordinal Approach to Measurement,” Econometrica, 44, 1976, pp. 219-31.

Sen, Amartya. Poverty and Famines: An Essay on Entitlement and Deprivation. Oxford: Oxford University Press, 1981.

Zheng, B. “Aggregate Poverty Measures,” Journal of Economic Surveys, 11, 1997, pp. 123-62.
