recent research

research review

Issue No. 12, July 2009 – December 2009

federal reserve bank of boston

Research Department
Jeffrey C. Fuhrer
Executive Vice President and
Director of Research
Geoffrey M. B. Tootell
Senior Vice President and
Deputy Director of Research
Economists
Giovanni P. Olivei, VP
Robert K. Triest, VP
Michelle L. Barnes
Anat Bracha
Katharine Bradbury
Mary A. Burke
Daniel Cooper
Federico Diez
Christopher L. Foote
Fabià Gumbau-Brisa
Julian Jamison
Yolanda K. Kodrzycki
Sergei Koulayev
Jane Sneddon Little
Ali K. Ozdagli
Scott Schuh
Oz Shy
Joanna Stavins
J. Christina Wang
Paul S. Willen
Manager
Patricia Geagan, AVP
Editors
Suzanne Lorant
Elizabeth Murry

research review
Issue No. 12, July 2009 – December 2009
Research Review provides an overview of recent research by economists of the
research department of the Federal Reserve Bank of Boston. Included are summaries of scholarly papers, staff briefings, and Bank-sponsored conferences.
Research Review is available on the web at:
http://www.bos.frb.org/economic/ResearchReview/index.htm
Beginning with this issue, Research Review is available only online.
Earlier issues of Research Review in hard copy are available without charge. To
order copies of back issues, please contact the Research Library:
Research Library—D
Federal Reserve Bank of Boston
600 Atlantic Avenue
Boston, MA 02210
Phone: 617.973.3397
Fax: 617.973.4221
E-mail: boston.library@bos.frb.org
Views expressed in Research Review are those of the individual authors and do
not necessarily reflect official positions of the Federal Reserve Bank of Boston or
the Federal Reserve System. The authors appreciate receiving comments.

Graphic Designer
Heidi Furse
Research Review is a publication of
the Research Department of the
Federal Reserve Bank of Boston
ISSN 1552-2814 print (discontinued
beginning with the current issue)
ISSN 1552-2822 (online)
©Copyright 2010
Federal Reserve Bank of Boston

Research Department Papers Series of the Federal Reserve Bank of Boston
Public Policy Discussion Papers present research bearing on policy issues. They are
generally written for policymakers, informed business people, and academics. Many
of them present research intended for professional journals.
Working Papers present statistical or technical research. They are generally written
for economists and others with strong technical backgrounds, and they are intended
for publication in professional journals.
Public Policy Briefs present briefing materials prepared by Boston Fed research
staff on topics of current interest concerning the economy.
Research department papers are available online only.
http://www.bos.frb.org/economic/research.htm


Executive Summaries in This Issue

Public Policy Discussion Papers

p-09-4 Why Don't Lenders Renegotiate More Home Mortgages? Redefaults, Self-Cures, and Securitization
Manuel Adelino, Kristopher S. Gerardi, and Paul S. Willen

p-09-5 Securitization and Moral Hazard: Evidence from a Lender Cutoff Rule
Ryan Bubb and Alex Kaufman

p-09-6 Reinvigorating Springfield's Economy: Lessons from Resurgent Cities
Yolanda K. Kodrzycki and Ana Patricia Muñoz with Lynn Browne, DeAnna Green, Marques Benton, Prabal Chakrabarti, David Plasse, Richard Walker, and Bo Zhao

p-09-7 Did Easy Credit Lead to Overspending? Home Equity Borrowing and Household Behavior in the Early 2000s
Daniel Cooper

p-09-8 A TIPS Scorecard: Are TIPS Accomplishing What They Were Supposed to Accomplish? Can They Be Improved?
Michelle L. Barnes, Zvi Bodie, Robert K. Triest, and J. Christina Wang

p-09-9 Impending Spending Bust? The Role of Housing Wealth as Borrowing Collateral
Daniel Cooper

p-09-10 The 2008 Survey of Consumer Payment Choice
Kevin Foster, Erik Meijer, Scott Schuh, and Michael A. Zabek

p-09-11 Jobs in Springfield, Massachusetts: Understanding and Remedying the Causes of Low Resident Employment Rates
Yolanda K. Kodrzycki and Ana Patricia Muñoz with Lynn Browne, DeAnna Green, Marques Benton, Prabal Chakrabarti, Richard Walker, and Bo Zhao

Working Papers

w-09-7 Trends in U.S. Family Income Mobility, 1967–2004
Katharine Bradbury and Jane Katz

w-09-8 Real Estate Brokers and Commission: Theory and Calibrations
Oz Shy

w-09-9 Efficient Organization of Production: Nested versus Horizontal Outsourcing
Oz Shy and Rune Stenbacka

w-09-10 Estimating the Border Effect: Some New Evidence
Gita Gopinath, Pierre-Olivier Gourinchas, Chang-Tai Hsieh, and Nicholas Li

w-09-11 Social and Private Learning with Endogenous Decision Timing
Julian Jamison, David Owens, and Glenn Woroch

w-09-12 Housing and Debt Over the Life Cycle and Over the Business Cycle
Matteo Iacoviello and Marina Pavan

w-09-13 Financial Leverage, Corporate Investment, and Stock Returns
Ali K. Ozdagli

w-09-14 Inflation Persistence
Jeffrey C. Fuhrer

w-09-15 Closed-Form Estimates of the New Keynesian Phillips Curve with Time-Varying Trend Inflation
Michelle L. Barnes, Fabià Gumbau-Brisa, Denny Lie, and Giovanni P. Olivei

w-09-16 Estimating Demand in Search Markets: The Case of Online Hotel Bookings
Sergei Koulayev

w-09-17 Multiple Selves in Intertemporal Choices
Julian Jamison and Jan Wegener

w-09-18 The Valuation Channel of External Adjustment
Fabio Ghironi, Jaewoo Lee, and Alessandro Rebucci

w-09-19 Productivity, Welfare, and Reallocation: Theory and Firm-Level Evidence
Susanto Basu, Luigi Pascali, Fabio Schiantarelli, and Luis Serven

w-09-20 State-Dependent Pricing and Optimal Monetary Policy
Denny Lie

w-09-21 Seeds to Succeed? Sequential Giving to Public Projects
Anat Bracha, Michael Menietti, and Lise Vesterlund

Public Policy Briefs

b-09-1 A Proposal to Help Distressed Homeowners: A Government Payment-Sharing Plan
Christopher L. Foote, Jeffrey C. Fuhrer, Eileen Mauskopf, and Paul S. Willen

Contributing Authors

Public Policy Discussion Papers
p-09-4

Why Don’t Lenders Renegotiate More Home Mortgages?
Redefaults, Self-Cures, and Securitization
by Manuel Adelino, Kristopher S. Gerardi, and Paul S. Willen
complete text: http://www.bos.frb.org/economic/ppdp/2009/ppdp0904.htm
e-mail: madelino@mit.edu, kristopher.gerardi@atl.frb.org, paul.willen@bos.frb.org

Motivation for the Research
Many commentators have attributed the severity of the foreclosure crisis in the United States during 2007–2009 to the unwillingness of lenders to renegotiate mortgages—and as a consequence
have placed renegotiation at the heart of the policy debate. It is easy to understand the appeal of
renegotiation to policymakers: if a lender makes a concession to a borrower by, for example, reducing the principal balance on the loan, it can prevent a foreclosure. This is clearly a good outcome
for the borrower, and possibly good for society as well. But the key to the appeal of renegotiation is
the belief that it can benefit the lender too: the reasoning holds that the lender loses money only if
the reduction in the loan’s value exceeds the loss the lender would sustain in a foreclosure. In short,
proponents of home mortgage renegotiation see it as a type of public policy holy grail, in the sense
that it may help both borrowers and lenders while costing the government little.
In this paper, the authors seek to discover why renegotiation is so rare in practice, since it seemingly
benefits both borrowers and lenders. The leading explanation for lenders’ reluctance to renegotiate
has been the process of securitization, which involves slicing and dicing the loans into many pieces
and selling them to other investors, distributing ownership rights in the process. Thus, securitization sets up conflicting interests that complicate what might otherwise have been a simple resolution between the borrower and the original lender. But some market observers and researchers have
expressed doubts about the role securitization plays in limiting such renegotiations.
Research Approach
The authors employ a variety of analytical and statistical techniques to examine a large, detailed
dataset of residential mortgages from Lender Processing Services (LPS), covering approximately
60 percent of the U.S. mortgage market. They explore several definitions of renegotiation, looking initially at concessionary modifications that serve to lower a borrower’s monthly payment by
reducing the principal balance or interest rate, by extending the term of the loan, or by employing
various combinations of these methods. Renegotiation that involves lowering a borrower’s monthly
payment is a key focus of the analysis, because many market observers believe that concessionary
modifications are the most, or possibly the only, effective way to prevent foreclosures. The authors
then broaden the definition of renegotiation to include any modification, regardless of whether it
lowers the borrower’s payment. Modifications are often thought to always involve concessions to the
borrower, but many involve the capitalization of arrears into the balance of the loan and thus lead to
increased monthly mortgage payments.
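
To make the payment-reducing criterion concrete, the sketch below applies the standard fixed-rate amortization formula and flags a modification as concessionary only when the new monthly payment is lower than the old one. The loan terms and the helper functions are hypothetical illustrations, not the paper's classification procedure, which works from servicer-reported loan terms in the LPS data.

```python
def monthly_payment(balance, annual_rate, months):
    """Standard fixed-rate amortization formula: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12.0
    return balance * r / (1.0 - (1.0 + r) ** -months)

def is_concessionary(old_terms, new_terms):
    """A modification counts as 'concessionary' here if it lowers the monthly
    payment, whether via a lower balance, a lower rate, or a longer term."""
    return monthly_payment(*new_terms) < monthly_payment(*old_terms)

# Hypothetical example: capitalizing arrears raises the balance and the payment,
# so that modification is NOT concessionary even though the loan was modified.
original = (200_000, 0.065, 360)             # balance, rate, remaining months
capitalized_arrears = (208_000, 0.065, 360)
rate_reduction = (200_000, 0.050, 360)

print(is_concessionary(original, capitalized_arrears))  # False: payment rises
print(is_concessionary(original, rate_reduction))       # True: payment falls
```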
To examine the effect of securitization on renegotiation rates, the authors compare renegotiation
rates for two types of loans: private-label loans—loans serviced for private securitization trusts not
sponsored by any of the government-sponsored enterprises (GSEs) such as Fannie Mae and Freddie Mac—and portfolio loans—loans that are kept on banks' balance sheets. Private-label loans
are subject to contract frictions, including global limits on the number of modifications a servicer
can perform for a particular pool of mortgages, expense reimbursement rules that may provide a
perverse incentive to foreclose rather than modify a loan, and the possibility that investors whose

claims are adversely affected by modification may take legal action. Portfolio loans are immune to
such frictions, but may be subject to accounting concerns on the part of banks and servicer resource
constraints.
One potential problem with the data is that there may be unobserved heterogeneity in the characteristics of portfolio and private-label loans. To address this, the authors exploit subsets of the LPS
data, in which servicers provide an exceptional amount of information about borrowers. To further
test the robustness of the results, the authors limit the sample to only subprime loans (as defined by
LPS). These loans comprise only 7 percent of the LPS data but account for more than 40 percent
of serious delinquencies and almost 50 percent of the modifications identified in the data. Another
potential issue, arising from the authors’ focus on 60-day delinquent loans, is that portfolio lenders can contact borrowers at any time, whereas some securitization agreements forbid lenders from
contacting borrowers until they are seriously delinquent (at least 60 days late, equivalent to two
missed payments). To address this, the authors also examine 30-day delinquent borrowers (one
missed payment).
By looking at the “cure rate”—the percentage of delinquent loans that transition to current status
after being 60 days late—in both the full sample and the subprime sample, the authors test the
proposition that servicers engage in loss mitigation actions other than renegotiation, for example,
forbearance agreements and repayment plans. They then formalize the basic intuition of the investor
renegotiation decision with a simple theoretical model that, in a stylized way, mirrors the net present value calculation that servicers are supposed to perform when deciding whether to offer a loan
modification. Finally, because so much of the policy debate has focused on institutional obstacles to
modification—particularly obstacles associated with securitization—the authors examine institutional evidence to further test their conclusions.
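
To convey the flavor of that calculation, the sketch below compares the investor's expected recovery with and without a modification, building in self-cure and redefault risk. Every parameter value is made up for illustration, and the model here is deliberately cruder than the paper's.

```python
def value_without_modification(balance, p_self_cure, recovery_now):
    """The borrower either self-cures (investor collects the full balance)
    or the loan goes to foreclosure at a fractional recovery."""
    return p_self_cure * balance + (1 - p_self_cure) * recovery_now * balance

def value_with_modification(balance, concession, p_self_cure, p_redefault,
                            recovery_later, discount):
    """After a concession the balance is lower for everyone, including borrowers
    who would have cured anyway (the concession is wasted on them); non-curers
    either perform on the modified loan or redefault into a later, discounted
    foreclosure at a possibly lower recovery."""
    mod_balance = balance - concession
    cure = p_self_cure * mod_balance
    perform = (1 - p_self_cure) * (1 - p_redefault) * mod_balance
    redefault = (1 - p_self_cure) * p_redefault * discount * recovery_later * mod_balance
    return cure + perform + redefault

balance = 200_000
no_mod = value_without_modification(balance, p_self_cure=0.30, recovery_now=0.60)
mod = value_with_modification(balance, concession=30_000, p_self_cure=0.30,
                              p_redefault=0.45, recovery_later=0.55, discount=0.90)
print(round(no_mod), round(mod))  # 144000 142957: here, declining to modify pays more
```

With these invented numbers, self-cures and redefaults are enough to make not modifying the better bet for the investor, which is the intuition behind the paper's finding that renegotiation is rare.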
Key Findings
• Regardless of the definition of renegotiation used, one message is quite clear: lenders rarely renegotiate. Fewer than 3 percent of the seriously delinquent loans in the sample received a concessionary
modification in the year following the first serious delinquency. More loans were modified under
the broader definition, but the total number of renegotiations still accounted for fewer than 8 percent of the seriously delinquent loans. These numbers are small both in absolute terms and relative
to the approximately half of the sample of seriously delinquent loans against which foreclosure
proceedings were initiated and the nearly 30 percent that were completed.
• The empirical analysis provides strong evidence against the role securitization plays in preventing mortgage renegotiations. For the narrowest definition of renegotiation, a payment-reducing
modification, the difference between a private-label loan and a portfolio loan in the likelihood of
renegotiation in the 12 months subsequent to the first 60-day delinquency is neither economically
nor statistically significant. For the broader definition that includes any modification at all, which
one would expect to be most affected by securitization, the data reject even more strongly the role
of securitization in preventing renegotiation. Servicers are more likely to perform modifications,
broadly defined, and to allow the borrower to prepay on a private-label loan than on a portfolio loan.
• The results are highly robust. When the authors exclude observations where the servicers failed
to report whether the borrower’s income was fully documented at origination, or what the debtto-income ratio was at origination, the results become even stronger. For loans made with full
documentation of the borrower’s income at origination, the results are broadly consistent with, or
in some cases stronger than, the results for the full sample. Results for the subprime sample only are
also consistent with the results for the full sample. Focusing on 30-day delinquencies rather than
60-day delinquencies continues to show no meaningful difference between renegotiation rates of
private-label and portfolio loans.


• In the full sample, private-label loans are less likely to cure, but after correcting for observable characteristics the gap, although statistically significant, is small. The authors find a cure rate of around
30 percent for the typical portfolio loan and about 2 percentage points less for an otherwise equivalent private-label loan. However, for three subsamples—subprime loans, loans with information
about income documentation and debt-to-income status, and fully documented loans—the private-label loans are significantly more likely to cure than portfolio loans.
• The model results show that higher cure rates, higher redefault rates, higher expectations of house
price depreciation, and a higher discount rate all make renegotiation less attractive to the investor.
Thus, one cannot evaluate a modification by simply comparing the reduction in the interest rate
on the loan or in the principal balance with the expected loss in foreclosure. One must take into
account both the redefault and the self-cure risks, something that most proponents of modification
fail to do.
Implications
If contract frictions are not a significant problem, then what is the explanation for why lenders do
not renegotiate with delinquent borrowers more often? The authors argue for a very mundane explanation: lenders expect to recover more from a foreclosure than from a modified loan. This may
seem surprising, given the large losses lenders typically incur in foreclosure, which include both
the difference between the value of the loan and the collateral and the substantial legal expenses
associated with the conveyance. The problem is that renegotiating exposes lenders to two types of
risks that can dramatically increase their cost. The first is what the authors call “self-cure” risk: more
than 30 percent of seriously delinquent borrowers “cure” without receiving a modification; if taken
at face value, this means that as much as 30 percent of the money a lender spends on modifications
is wasted. The second cost comes from borrowers who subsequently redefault; the results show that
a large fraction of borrowers who receive modifications become seriously delinquent again within
six months. For them, the lender has simply postponed an inevitable foreclosure, and in a market
environment with rapidly falling house prices, the lender will now recover even less in foreclosure.
In addition, a borrower who faces a high likelihood of eventually losing the home will do little or
nothing to maintain the property and may even contribute to its deterioration, again reducing the
lender’s expected recovery.
This research has three main implications for policy. First, “safe harbor” provisions, which shelter
mortgage loan servicers from investor lawsuits, are unlikely to affect the number of modifications.
Second, and more broadly, the number of “preventable foreclosures” may be far fewer than many
observers believe. Finally, the model result showing why investors may not want to perform modifications does not necessarily imply that modifications may not be socially optimal. One key input
to the authors’ theoretical model is the discount rate, and it is possible that investors, especially in a
time when liquidity is highly valued, may be less patient than society as a whole and therefore may
pursue foreclosure when the broader society would prefer renegotiation. Large financial incentives to investors or even to borrowers to continue payment could mitigate this problem.


p-09-5									

Securitization and Moral Hazard: Evidence
from a Lender Cutoff Rule
by Ryan Bubb and Alex Kaufman

complete text: http://www.bos.frb.org/economic/ppdp/2009/ppdp0905.htm
e-mail: ryanbubb@fas.harvard.edu, akaufman@fas.harvard.edu

Motivation for the Research
A key question about the recent subprime mortgage crisis is whether securitization reduced originating lenders’ incentives to carefully screen borrowers. A fundamental role of financial intermediaries
is to produce information about prospective borrowers in order to allocate credit. But lenders’ incentives to generate information and screen borrowers may be attenuated if they know that they plan to
securitize the loans they originate by selling them to dispersed investors. On the other hand, rational
loan purchasers may recognize this moral hazard problem and take steps to mitigate it. Determining
whether securitization played a role in the recent sharp rise in mortgage defaults is critical to evaluating the social costs and benefits of securitizing residential mortgages.
One promising strategy to address this question is to examine variation in the behavior of market
participants induced by credit score cutoff rules. Credit scores are used by lenders as a summary
measure of default risk, with higher credit scores indicating lower default risk. Histograms of mortgage loan borrower credit scores reveal that they are step-wise functions. It appears that borrowers
with credit scores above certain thresholds are treated differently than borrowers just below these
thresholds, even though potential borrowers on either side of the threshold are very similar. These
histograms suggest using a regression discontinuity design to learn about the effects of the change in
behavior of market participants at these thresholds. But how and why does lender behavior change
at these thresholds? In this paper, the authors attempt to distinguish between two explanations for
credit score cutoff rules, each with different implications for what they imply about the relationship
between securitization and lender moral hazard. The authors investigate two hypotheses to explain
the cutoff rules: the explanation currently most accepted in the literature, which Bubb and Kaufman
call the securitizer-first theory, and an alternative theory, which Bubb and Kaufman propose in this
paper and call the lender-first theory.
The securitizer-first theory, initially put forth by Keys, Mukherjee, Seru, and Vig (2008), posits that
secondary-market mortgage purchasers employ rules of thumb whereby they are exogenously more
willing to purchase loans made to borrowers with credit scores just above some cutoff. The difference
in the ease of securitization induces mortgage lenders to adopt weaker screening standards for loan
applicants above the cutoff, since lenders know they will be less likely to keep these loans on their
books. In industry parlance, they will have less “skin in the game.” Because lenders screen applicants
more intensely below the cutoff than above, loans below the cutoff are fewer but of higher quality
(that is, they have a lower default rate) than loans above the cutoff.
In the lender-first theory, the causality goes the other way. As in the securitizer-first theory, lenders
will collect additional information only about applicants whose credit scores are below the cutoff
score. According to this theory, the reason they do so is that the benefit to lenders of collecting additional information and thereby screening out more high-risk applicants is greater for borrowers at
higher risk of default than for those at lower risk of default and therefore outweighs the per-applicant
fixed cost of screening, which drives use of the cutoff rule. A screening cutoff rule also results in a
discontinuity in the amount of private information lenders have about loans. Securitizers may respond to this problem in a variety of ways. Because the efficient amount of screening is greater and
therefore more costly below the screening cutoff, rational securitizers who are unable to contract on screening directly because of asymmetric information may reduce loan purchases below the cutoff score, leaving more loans on the books of originating lenders, in order to maintain their incentives to bear the costs of efficient screening. However, if securitizers have alternative incentive instruments to police lender moral hazard, they may use those instruments rather than leave more loans below the threshold credit score on the books of lenders.

[Figure: Default Rates of Conforming and Jumbo Loans by FICO Score. Two panels (conforming sample and jumbo sample) plot default rates, roughly 0.05 to 0.35, against FICO scores from 590 to 650, with separate fitted curves below and above the cutoff threshold. Source: Authors' calculations. Note: Fitted curves from 6th-order polynomial regression on FICO interval [500,800] without year fixed effects.]


[Figure: Securitization Rates of Conforming and Jumbo Loans by FICO Score. Two panels (conforming sample and jumbo sample) plot securitization rates, roughly 0.5 to 1.0, against FICO scores from 590 to 650, with separate fitted curves below and above the cutoff threshold. Source: Authors' calculations. Note: Fitted curves from 6th-order polynomial regression on FICO interval [500,800] without year fixed effects.]

Under the securitizer-first theory, finding discontinuities in the default rate and the securitization rate at the same credit score cutoff is evidence that securitization led to moral hazard in lender screening. Under the lender-first theory, finding discontinuities in the default rate and the securitization rate at the same credit score cutoff is evidence that securitizers with asymmetric information adjusted their loan purchases to maintain lenders' incentives to screen. The robust prediction of the lender-first theory is that lenders will use cutoff rules; how securitizers respond to this situation depends on their degree of sophistication and on the incentive instruments they have available to police lenders' moral hazard.
The securitizer-first model predicts discontinuities in the lending, default, and securitization rates at a single FICO score. This pattern of predictions is similar to that of the lender-first
model in the case of a rational securitizer with asymmetric information, except that the endogenous
screening threshold has been replaced by the securitizer’s exogenous threshold. Moreover, under the
securitizer-first theory, the change in the default rate of loans at the securitizer’s threshold cutoff
score is a measure of the extent to which securitization leads originating lenders to conduct less applicant screening.
Research Approach
To test these two theories of credit score cutoff rules, the authors examine loan-level data from Lender
Processing Services. These are data collected through the cooperation of 18 large servicers, including 9 of the top 10 mortgage servicers in the United States. After establishing that the data show
discontinuities in the frequency of loan issuance, with the largest discontinuity in log point terms occurring at a FICO score of 620, the authors develop a lender-first model in which the securitization
rate jumps up discontinuously as the screening threshold is crossed from below. The authors then use
the loan-level data and institutional evidence to test the lender-first and the securitizer-first theories.
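
A stripped-down version of the discontinuity estimate might look like the sketch below, which simulates loan-level data with an assumed jump in the default rate at a FICO score of 620 and measures the gap between separate fits on each side of the threshold. The simulated data, the linear (rather than 6th-order polynomial) fits, and the sample window are illustrative assumptions, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated loan-level data: FICO scores and default indicators with a
# hypothetical jump in the default rate at the 620 screening threshold.
fico = rng.integers(590, 651, size=20_000)
base = 0.30 - 0.002 * (fico - 590)        # default risk declines in FICO
jump = np.where(fico >= 620, 0.05, 0.0)   # assumed discontinuity at 620
default = rng.random(20_000) < base + jump

def side_fit(scores, outcomes, cutoff=620):
    """Fit default ~ FICO separately below and above the cutoff and return
    the two fitted values at the threshold; their gap estimates the jump."""
    below = scores < cutoff
    b_lo = np.polyfit(scores[below], outcomes[below].astype(float), 1)
    b_hi = np.polyfit(scores[~below], outcomes[~below].astype(float), 1)
    return np.polyval(b_lo, cutoff), np.polyval(b_hi, cutoff)

lo, hi = side_fit(fico, default)
print(f"estimated discontinuity at 620: {hi - lo:+.3f}")  # close to the assumed +0.05
```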
Key Findings
• Institutional evidence suggests that, as predicted by the lender-first theory, lenders make discrete
choices about screening intensity at a FICO score of 620, for reasons unrelated to the ease
of securitization.
• The lender-first theory of cutoff rules is substantially more consistent with the data than is the
securitizer-first theory: evidence from the loan-level dataset shows that in the conforming mortgage
market, largely serviced by the government-sponsored enterprises Fannie Mae and Freddie Mac,
as well as in a low-documentation sample, there are screening cutoffs at 620 but no securitization
discontinuity—a pattern of evidence consistent with the lender-first theory but not with the
securitizer-first theory.
• In the jumbo mortgage market for large, nonconforming loans (in 2010, greater than $417,000
in the contiguous United States for a single-family residence), in which only private securitizers
participate, the securitization rate is lower just below the screening threshold (a FICO score of
620). This suggests that private securitizers were aware of the moral hazard problem posed by loan
purchases and sought to mitigate it. However, in the conforming (non-jumbo) market dominated
by the GSEs, there is a substantial jump in the default rate at the 620 threshold but no jump at
620 in the securitization rate. One possible explanation for this result could be that the GSEs were
unaware of the moral hazard threat posed by securitization. An arguably more plausible explanation is that, as large repeat players in the industry, the GSEs had alternative incentive instruments
to police lender moral hazard.
Implications
Interpreting the cutoff rule evidence in light of the lender-first theory, the results from this study
suggest that private mortgage securitizers adjusted their loan purchases around the lender screening
threshold in order to maintain lender incentives to screen applicants. Although the paper’s findings
suggest that securitizers were more rational with regard to moral hazard than previous research has
judged, the extent to which securitization contributed to the subprime mortgage crisis is still an open
and pressing research and public policy question.


p-09-6									

Reinvigorating Springfield’s Economy:
Lessons from Resurgent Cities

by Yolanda K. Kodrzycki and Ana Patricia Muñoz, with Lynn Browne, DeAnna Green,
Marques Benton, Prabal Chakrabarti, David Plasse, Richard Walker, and Bo Zhao
complete text: http://www.bos.frb.org/economic/ppdp/2009/ppdp0906.htm
e-mail: yolanda.kodrzycki@bos.frb.org, anapatricia.munoz@bos.frb.org

Motivation for the Research
The economic position of Springfield, Massachusetts, has eroded over the past five decades. In 1960,
median family income in the city was slightly higher than the national average. By the mid-2000s,
median family income in Springfield had decreased to only about two-thirds of the national average.
Its poverty rate went from a little below average in 1980 to over twice the U.S. average in recent years.
To some extent, this deterioration in Springfield’s living standards reflects the forces of deindustrialization and suburbanization that challenged many city economies during these decades. However, these
nationwide forces do not fully account for Springfield’s decline. Although its economic position was
in line with its peer group of mid-sized manufacturing-oriented cities in the 1960s, by 2005–2007 median annual family income in Springfield had fallen to nearly $4,000 below the peer-city average and the poverty rate had risen to 4 percentage points above the peer-city average. As part of the Federal Reserve Bank of Boston's commitment to supporting efforts to revitalize Springfield's economy, this paper seeks to draw some lessons that may be useful in guiding the city's revitalization efforts.

[Map: Springfield, MA and 25 Peer Cities, selected based on population, employment in manufacturing, and the role of the city in the region in 1960. The map locates Springfield and its peer cities across the Northeast, Midwest, and Southeast, distinguishing resurgent cities from other peer cities: Akron, Allentown, Bridgeport, Dayton, Erie, Evansville, Flint, Fort Wayne, Gary, Grand Rapids, Greensboro, Hartford, Jersey City, New Haven, Paterson, Peoria, Providence, Rochester, Rockford, South Bend, Syracuse, Waterbury, Winston-Salem, Worcester, and Youngstown.]
Research Approach
From among a comparison group of 25 municipalities that were similar to Springfield in 1960, the
study identifies and draws some lessons from 10 “resurgent cities” that have made substantial progress
in improving living standards for their residents compared with other metropolitan centers facing
similar challenges and opportunities. Recognized by experts on economic development and policy as
vital communities in a broader sense, these 10 peer cities were chosen based on their being mid-sized
manufacturing-oriented cities and on the role each city plays in its region. Like Springfield, each of
the peer cities constitutes the primary urban center of its metropolitan area. Most had a population
of between 100,000 and 200,000 residents from 1960 to 1980, although a few started with larger
populations in 1960 before declining in size.
To characterize the cities, the authors considered broad measures of residents’ economic well-being
plus other information on community vitality drawn from a wide range of reports, books, and newspaper articles, and they focused on long-term trends as opposed to more temporary developments
associated with business cycles. From the set of peer group cities, the authors defined the subset
of resurgent cities as those showing better performance than Springfield in each of the following
respects: the median family income in 2005–2007, change in median family income ranking since
1960, poverty rate, and percentage point change in poverty rate since 1980. The percentage change
in population since 1960 and additional indicators were used as secondary criteria to distinguish
resurgent cities from others in the peer group.
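
Conceptually, the primary screen amounts to a multi-criteria filter: a peer city counts as resurgent only if it beats Springfield on all four primary indicators. The sketch below illustrates that logic on invented records; the field names and numbers are hypothetical, not the study's data.

```python
from dataclasses import dataclass

@dataclass
class City:
    name: str
    median_income_2005_07: float        # dollars
    income_rank_change_since_1960: int  # positive = improved ranking (assumed sign)
    poverty_rate: float                 # percent
    poverty_change_since_1980: float    # percentage points

def is_resurgent(city: City, springfield: City) -> bool:
    """Primary criteria: better than Springfield on all four measures."""
    return (city.median_income_2005_07 > springfield.median_income_2005_07
            and city.income_rank_change_since_1960 > springfield.income_rank_change_since_1960
            and city.poverty_rate < springfield.poverty_rate
            and city.poverty_change_since_1980 < springfield.poverty_change_since_1980)

# Entirely hypothetical numbers, for illustration only.
springfield = City("Springfield", 37_000, -40, 27.0, 8.0)
peers = [City("Peer A", 42_000, -10, 19.0, 2.0),
         City("Peer B", 35_000, -45, 30.0, 9.0)]

print([c.name for c in peers if is_resurgent(c, springfield)])  # ['Peer A']
```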
After identifying the resurgent cities and quantifying key economic and social differences between
these 10 cities and Springfield, the authors present a brief economic history of Springfield. This is
followed by case studies of each of the resurgent cities, focusing on their challenges and economic
development efforts. The authors conclude by drawing lessons from the case studies and suggesting
implications for Springfield.
Key Findings
• The research strongly suggests that industry mix, demographic composition, and geographic location are not the key factors distinguishing the resurgent cities from Springfield. Therefore, the
erosion of Springfield’s economic position relative to its peer cities has been due mostly to other factors. Identifying these other factors and taking the appropriate actions is likely to increase
Springfield’s chances of reaching its economic potential.
• The most important lessons from the resurgent cities concern leadership and collaboration. Initial
leadership in these cities came from a variety of key institutions and individuals. In some cases,
the turnaround started with efforts on the part of the public sector, while in other cases nongovernmental institutions or private developers were at the forefront. In all cases, the instigators of
revitalization in the peer group cities recognized that it was in their own interest to prevent further
deterioration in the local economy, and they took responsibility for bringing about improvement.
Regardless of who initiated the turnaround, economic redevelopment efforts spanned decades and
involved collaborations among numerous organizations and sectors. These joint efforts involved
creating new, distinct entities, with names like “Growth Alliance” or “Development Corporation.”
• The stories of the resurgent cities involve fundamental shifts in local economies and human and
physical infrastructure. Mid-sized cities that were once known for manufacturing goods ranging from refrigerators and home furnishings to jewelry and cigarettes have earned new identities.
Many have turned to more technology-related forms of manufacturing for part of their transformation. All of the cities have diversified their economic base away from the manufacturing sector.
• In addition to experiencing blows from the recent nationwide recession and financial crisis, the
resurgent cities continue to face the challenges of providing quality education and training to
broader segments of their populations and extending the benefits of resurgence to more of their
neighborhoods. Their efforts along these lines are multifaceted but often involve key initiatives on
the part of local educational institutions and foundations.
Implications
This study attempts to lay out reasonable aspirations for Springfield and add to the available information
concerning the economic development approaches tried by its peer cities. On one hand, the message is
a positive one: nothing about Springfield’s past or present industry mix, demographic composition, or
geography prevents the city from becoming as successful as the 10 resurgent cities that confronted similar
circumstances half a century ago. On the other hand, the report challenges Springfield’s various constituencies to compare their actions with those taken by their counterparts in other cities and to formulate and
act upon some fresh ideas about how to deal with the lingering challenges facing the city.
p-09-7									

Did Easy Credit Lead to Overspending? Home Equity
Borrowing and Household Behavior in the Early 2000s
by Daniel Cooper

complete text: http://www.bos.frb.org/economic/ppdp/2009/ppdp0907.htm
e-mail: daniel.cooper@bos.frb.org

Motivation for the Research
According to work by Greenspan and Kennedy (2007), U.S. households’ net equity extraction from
their homes averaged nearly 6 percent of disposable income between 2001 and 2005. This paper
examines the role of equity extraction during the recent house-price boom by analyzing what factors
influence households’ decisions to extract equity from their homes. The paper further considers how
equity extraction affects household spending, balance sheets, and residential investment.
There are multiple reasons why households may extract equity from their homes besides the need to
finance desired expenditures and/or to smooth consumption in response to a negative income shock.
Households may borrow to make home repairs or improvements. In this case, equity extraction is
used to fund residential investment needs. Alternatively, households may borrow against their homes
to consolidate more costly debt, such as credit cards. Recently, home equity credit has been one of the
cheapest forms of borrowing, so it makes sense for households to substitute toward such financing.
Not only are the interest rates on home equity lines of credit low compared with rates on credit cards,
but interest payments on home equity debt are, for the most part, a tax-deductible expense. Home
equity borrowing may also offer households a less expensive (and tax-deductible) way to help finance
their children’s education. In this regard, equity extraction helps finance human capital investment.
Households may extract equity to invest in personal businesses or other entrepreneurial ventures, and
to help finance the purchase of second homes and/or other real estate. Finally, some households may
extract equity to engage in a form of investment arbitrage. To the extent that such households believe
they can earn a greater return in the financial markets than the tax-adjusted cost of equity extraction,
they may borrow against their homes to invest in stocks, bonds, or other financial instruments.


[Figure: Home Equity and Credit Card Debt. Plots the revolving debt-to-income ratio and the home equity debt-to-income ratio, roughly 0.04 to 0.11, from 1990 to 2008. Sources: Income, NIPA; home equity debt, Federal Reserve Z.1 release; revolving debt, Federal Reserve G.19 release.]

Understanding households’ uses of extracted equity is important for understanding the potential
implications of the decline in house prices and households’ reduced ability to borrow against their
homes. Equity extraction that goes primarily toward funding household expenditures is potentially
a concern, since it will likely cause a decline in consumption when house prices fall, and consumer
spending makes up nearly two-thirds of U.S. GDP. A reduction in the availability of cheap forms of
credit to fund investment in residential or human capital is also a concern, but the macroeconomic
implications are likely different from the impact of a fall in consumer spending. In addition, if much
of households’ extracted home equity goes toward balance sheet reshuffling, then a drop in available
home equity will likely lead to fewer balance sheet changes and have a much more limited impact on
the overall macroeconomy than a sharp drop in household expenditures.
Research Approach
Using data through 2009 from the Panel Study of Income Dynamics (PSID), the author estimates
a cross-sectional, binary choice model to study the determinants of a household’s decision to extract
equity from its home. In addition, he uses a consumption function to study whether consumption
rises (or falls) when households extract home equity, conditional on the other factors that are known
to explain households’ spending behavior. The PSID tracks households over time and includes detailed data on households’ income, housing wealth, mortgage debt, balance sheets, automobiles, active saving, and home improvement investments. Beginning in 1999, the PSID added detailed data
on household expenditures in addition to food consumption, and the spending data were further
extended in 2005 to cover most of households’ personal spending categories. The 2009 data are
available in a limited pre-release from the PSID and include information on homeownership and
household balance sheets, but not on household spending or income. The analysis of households’
reasons for extracting equity focuses on the 1997–2009 period.
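
As a rough stand-in for that estimation step, the sketch below fits a simple logit of an extraction indicator on a few simulated household covariates. It assumes the statsmodels package is available and uses hypothetical variable names and a made-up data-generating process; the author's actual binary choice specification and the PSID variables are richer than this.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000

# Simulated covariates standing in for PSID measures (all hypothetical).
log_wealth = rng.normal(10.5, 1.0, size=n)            # log financial wealth
low_equity = (rng.random(n) < 0.25).astype(float)     # under 20% home equity
college_kids = (rng.random(n) < 0.30).astype(float)   # college-age children

# Assumed data-generating process: wealth and low home equity reduce the odds
# of extraction, while having college-age children raises them.
index = -0.5 - 0.05 * log_wealth - 0.8 * low_equity + 0.5 * college_kids
extract = (rng.random(n) < 1.0 / (1.0 + np.exp(-index))).astype(int)

X = sm.add_constant(np.column_stack([log_wealth, low_equity, college_kids]))
result = sm.Logit(extract, X).fit(disp=0)
print(result.params)  # estimated signs should mirror the assumed effects
```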


Key Findings
• Households with greater financial wealth were less likely to extract equity than other households,
since wealthier households possessed other resources to finance their spending and investment
needs. Households with less than 20-percent equity in their homes were substantially less likely to
borrow, as they had less extractable equity available. Households with college-age children also had
a higher predicted probability of borrowing against their homes, consistent with some households’
extracting equity to finance educational expenses. The bi-yearly results over the sample period suggest that the vast majority of households whose head was unemployed for 13 weeks or more during
the year were more likely to extract equity, and households with higher income growth were less
likely to extract equity, although neither effect is precisely estimated. There do not appear to be
time-specific patterns in the reasons for equity extraction.
• A $1 increase in equity extraction between 2003 and 2007 led to a 10 to 20 cent increase in overall
nonhousing expenditures for homeowners who did not relocate. This effect appears strongest in
the 2003 and 2005 periods (covering equity extraction in 2001–2003 and 2003–2005, respectively),
which preceded the 2006 downturn in house prices. The exact expenditure categories that increased
as a result of extraction in these years varied somewhat, but overall the increase was broadly concentrated in transportation-related expenses, food, schooling, and minor home upkeep (including
utilities). Equity extraction had a much smaller impact on consumer spending in 1999 and 2001,
when a good portion of the expenditure impact was concentrated in healthcare costs.
• Equity extraction also resulted in greater residential investment (home improvement spending),
as well as increased household saving. During the 2003–2005 and 2005–2007 periods, a $1.00
increase in equity extraction led to a roughly 20 cent increase in capital spending on home additions
and improvements for households that made such improvements. Household saving increased by a
similar amount over those time intervals. Overall, there was a positive relationship between equity
extraction and household saving between 2001 and 2007. The exact balance sheet location for the
increased saving varies by period, but, overall, households extracted equity to invest in personal
businesses as well as other real estate.
• The results do not explain the entire destination of each dollar of equity extracted during the recent
U.S. house-price boom. This is likely because the PSID data do not adequately account for households that extracted home equity as part of financing the purchase of a new home.
Implications
It will be interesting to see how U.S. household behavior with regard to home equity borrowing
changes, now that prices have dropped and households’ outstanding equity has generally declined.
The pre-release 2009 data provide a glimpse of what may happen, but it is difficult to draw strong
conclusions from such limited data. What little data there are suggest that some of the household
saving patterns in response to equity extraction observed in this paper remain, but are perhaps less strong.
An additional question worth considering in future work is the extent to which the timing of the
data matters for capturing the relationship between equity extraction and household spending and
investment behavior. In particular, this paper finds little if any empirical relationship between equity
extraction and repayment of noncollateralized debt (credit card debt and education loans) despite the
potential cost savings for households and anecdotal evidence suggesting that households did indeed
extract equity to consolidate other debt. This paper argues that this discrepancy could be due to the
timing of the PSID data, and the larger question is whether one gains additional insight into household behavior by trying to pin down households’ spending and investment decisions at the exact
moment they choose to extract home equity. It is not clear whether such data exist, however, and this
issue is left for consideration in future work.


p-09-8									

A TIPS Scorecard: Are TIPS Accomplishing What They
Were Supposed to Accomplish? Can They Be Improved?
by Michelle L. Barnes, Zvi Bodie, Robert K. Triest, and J. Christina Wang

complete text: http://www.bos.frb.org/economic/ppdp/2009/ppdp0908.htm
e-mail: michelle.barnes@bos.frb.org, zbodie@bu.edu, robert.triest@bos.frb.org, christina.wang@bos.frb.org

Motivation for the Research
The U.S. Treasury designed and issued Treasury Inflation-Protected Securities (TIPS) in order to
achieve three major policy objectives: (1) to provide consumers with a class of assets that enable them
to hedge against real interest rate risk; (2) to provide holders of nominal contracts with a way to
hedge against inflation risk; and (3) to provide everyone with a reliable indicator of the term structure
of expected inflation. This paper examines the extent to which these objectives have been achieved
and seeks to identify ways they can be achieved better in the future.
The viability of the TIPS market hinges on whether TIPS provide an effective hedge for most
investors against unexpected changes in the real rate of interest that could result from unexpected
fluctuations in inflation. Inflation-protected indexed bonds are designed to deliver a certain pre-tax
real return to maturity. In the United States, these bonds are indexed to the nonseasonally adjusted
consumer price index for all urban consumers (CPI-U). This paper focuses on two important factors that may limit the ability of this class of securities to offer investors a complete hedge against
unexpected changes in the real rate: (1) the possibility that the CPI may not be an appropriate index
for all investors, and (2) the potential for biases due to technical revisions to the measurement of the
CPI, such as those recommended by the Boskin Commission just before the initial TIPS auction in
January 1997. Either or both of these factors could engender inflation basis risk.
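
The indexation mechanics behind that pre-tax real return can be sketched roughly as follows. The code treats periods as stylized annual steps, the CPI path is invented, and it ignores the reference-CPI interpolation and the deflation floor on principal that actual TIPS carry.

```python
def tips_cash_flows(face, real_coupon, cpi_path):
    """Stylized annual periods: scale the principal by CPI growth since issue
    and pay the fixed real coupon on the inflation-adjusted principal."""
    cpi_at_issue = cpi_path[0]
    flows = []
    adjusted_principal = face
    for cpi in cpi_path[1:]:
        index_ratio = cpi / cpi_at_issue
        adjusted_principal = face * index_ratio
        flows.append(round(real_coupon * adjusted_principal, 2))
    # Adjusted principal is returned at maturity along with the final coupon.
    flows[-1] = round(flows[-1] + adjusted_principal, 2)
    return flows

# Hypothetical CPI-U path implying roughly 2.5 percent annual inflation:
# coupons and the principal repayment rise one-for-one with the index.
print(tips_cash_flows(face=1_000, real_coupon=0.02, cpi_path=[200.0, 205.0, 210.1]))
```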

[Figure: Actual and Expected Inflation Over a 10-Year Horizon. Plots annualized inflation rates (percent, roughly 0 to 10) from 1965:Q1 to 2000:Q1 for the CPI (all items), the CPI for the elderly, the PCE deflator, regional CPIs (Midwest, Northeast, South, West), and FRB/US model inflation expectations. Source: Haver Analytics, Bureau of Labor Statistics, Bureau of Economic Analysis, and Federal Reserve Board.]


During the summer of 2008, a spate of popular press articles emerged claiming that the existing
methodology for computing the CPI underestimates true inflation. It was even asserted that the
measure is subject to political influence and has been biased downward over time via methodological changes made during several presidential regimes. Since these concerns speak to uncertainties
regarding TIPS’ ability to hedge effectively against unexpected changes in the real rate, it is not
surprising that a few of these articles concluded that TIPS are not, in fact, good hedges of inflation
for many investors. Another criticism of TIPS that arises occasionally is that break-even inflation
rates as implied by simultaneously considering the TIPS and nominal Treasury markets often diverge
substantially from survey measures of inflation expectations. Such mounting criticisms and concerns
could jeopardize the viability of the TIPS market. This paper evaluates the premises of these criticisms, and, to the extent that the criticisms are valid, assesses their implications for the efficacy of
TIPS as a hedge against unexpected changes in the real rate of interest.
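
For reference, the break-even rate those comparisons rely on is, to a first approximation, the gap between a nominal Treasury yield and a TIPS real yield at the same maturity; as the authors note, it also embeds liquidity and inflation-risk premia, so it need not equal surveyed expectations. The yields below are hypothetical.

```python
def breakeven_inflation(nominal_yield, tips_real_yield):
    """Fisher relation: (1 + nominal) = (1 + real) * (1 + break-even inflation)."""
    return (1 + nominal_yield) / (1 + tips_real_yield) - 1

# Hypothetical 10-year yields: 4.0 percent nominal Treasury, 1.8 percent TIPS.
print(f"{breakeven_inflation(0.040, 0.018):.2%}")  # about 2.2 percent
```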
Research Approach
The authors explain the design of TIPS, their tax implications for investors, the demographics of
TIPS holders, and other considerations relating to whether TIPS should yield measures of breakeven inflation rates comparable with survey measures of consumers’ inflation expectations. The
authors use both theoretical and empirical analysis to evaluate criticisms of the CPI as an inflation
benchmark used to adjust the return on TIPS and discuss a number of issues that have been raised
concerning TIPS. These issues include: whether the potential mismeasurement of the CPI is relevant
to the efficacy of TIPS as a hedging instrument to guarantee the real return, whether the CPI is
a good measure for everyone, and whether there might be more appropriate measures for certain
heterogeneous groups, as well as the costs and benefits of issuing such securities. The authors then
demonstrate the efficacy of TIPS as a short-term versus a long-term hedge by comparing various ex
ante and ex post inflation measures. They conclude by drawing implications of their findings for the
design of the TIPS market.
Key Findings
• Buying and holding to maturity a newly issued TIPS is an effective way to lock in a risk-free real
rate of return. If TIPS had been available during the 1970s and early 1980s (periods characterized
by high or highly fluctuating inflation), they would have been a very effective means of achieving
a certain real rate of return. In contrast, long-term nominal Treasury issues produced unexpectedly
erratic rates of return.
• Although there are important differences across price indexes, the changes in the inflation rate
based on the CPI-U are highly correlated with inflation rates based on other price indexes over long
periods. In particular, many measures of inflation, including those designed for the elderly or based
on particular geographic regions, move together, so differences among these measures are swamped
by the difference between any of these measures and any survey-based measure of expected inflation.
• The difference between expected (ex ante) real yields on long-term Treasuries at the time of issue
and their ex post realized real returns provides one measure of the potential value of TIPS as a hedge
against unexpected fluctuations in inflation.
• Inflation basis risk arising from mismeasurement of the CPI is both small and uncorrelated with
common risk factors, suggesting that the concern on the part of the popular press that such mismeasurement leaves TIPS investors poorly hedged against inflation risk is unfounded. Since various
inflation measures are so highly correlated, it follows that inflation basis risk arising from specific
mismeasurement issues or from the fact that certain heterogeneous groups may face different inflation rates also tends to be uncorrelated with common risk factors, implying that the CPI-U is a
good index for TIPS for a variety of investors, despite a variety of measurement issues.


• Buy-and-hold investors are hedged best, and investors who buy and hold long-maturity TIPS are
better hedged than investors who hold short-term TIPS maturities. The same shocks that generate
unexpected changes in inflation will alter the coupon yield on new TIPS issues, so the short holding
period strategy becomes an ineffective hedge against short-term inflation fluctuations.
• TIPS-implied break-even inflation rates are conceptually not the same as inflation expectations and
hence are not necessarily good measures of inflation expectations. As a result, TIPS-implied breakeven inflation rates are also unlikely to be good forecasts of future inflation.
• For investors subject to the federal income tax, TIPS can provide protection against only a fraction
of inflation because the inflation compensation of TIPS is taxable on individual federal returns.
This is essentially no different than the tax treatment on nominal bonds, since the inflation premium component of nominal interest payments is also taxed. Of course, the investor receives full
inflation protection if TIPS are held in a tax-preferred account, such as a 401(k).
• Since the CPI does not take into account the aspect of durable goods as long-lived assets and the
attendant variations in their market values over time, it is likely more efficient to offer consumers
separate instruments (other than TIPS) to hedge the risk of unexpected changes in house prices.
Furthermore, house prices exhibit substantial heterogeneity across geographic regions.
• The most appropriate and useful role for TIPS may be for life cycle saving by individuals and
their agents.
• The TIPS market provides a good hedge against inflation risk, and from a cost/benefit perspective
there seems little to be gained from indexing to other inflation measures—be they broader, such as
the GDP deflator, or narrower, such as regional inflation measures or the CPI-E for the elderly. As
the proportion of retirees who have defined-benefit pensions continues to decrease, the task individuals face in managing lump-sum accounts to provide a steady stream of real income during retirement becomes more difficult. A "ladder" of TIPS with maturities linked to the dates when the money will be needed for expenses is a safe investment well-suited to retirees and those approaching retirement (a stylized sketch follows this list).
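
As a rough picture of such a ladder, the sketch below matches each future expense to a TIPS-like rung, treating every rung as a zero-coupon real bond at an assumed 2 percent real yield and ignoring coupons, taxes, and the discrete maturities actually on offer; the expense figures are invented.

```python
def build_tips_ladder(expenses_by_year, real_yield=0.02):
    """Amount to set aside today for each rung so that it grows, in real terms,
    to the expense due in that year: price = expense / (1 + real_yield)^t."""
    ladder = {}
    for t, (year, amount) in enumerate(sorted(expenses_by_year.items()), start=1):
        ladder[year] = round(amount / (1 + real_yield) ** t, 2)
    return ladder

# Hypothetical real spending needs (today's dollars) for three retirement years.
print(build_tips_ladder({2026: 40_000, 2027: 40_000, 2028: 42_000}))
```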
Implications
TIPS have the potential to be the backbone asset underlying inflation-indexed annuities, but to facilitate use of these annuities, the maximum duration of TIPS would need to be extended, since the
time horizon for many retirees extends to 30 or more years.
With respect to housing as an investment as opposed to a consumption good, there is room for
alternative hedging instruments and they are currently available in the form of futures contracts on
Standard & Poor’s/Case-Shiller Metro Home Price Indexes or forward contracts on the Residential
Property Index 25-MSA Composite (RPX). Intuitively, the need to hedge short-to-medium term
house-price fluctuations should be greatest for people who plan to make substantial changes in the
near future in the amount of housing held in their asset portfolio. One such group is those who
plan to become first-time home buyers in the next few years. Another group is those who plan to
downsize or upsize their houses. A third group is households that plan to move to an area where the
housing market is substantially different in terms of housing prices or price movements.


p-09-9

Impending U.S. Spending Bust? The Role of Housing
Wealth as Borrowing Collateral
by Daniel Cooper

complete text: http://www.bos.frb.org/economic/ppdp/2009/ppdp0909.htm
e-mail: daniel.cooper@bos.frb.org

Motivation for the Research
Life cycle models of household spending posit that individuals attempt to smooth their consumption
over their entire lives based on their expected lifetime earnings. An example of consumption smoothing is younger households borrowing and consuming more (saving less) in a given year, knowing that
their incomes will rise in future years. The theory assumes, however, that households are not borrowing constrained—they have access to all the credit they desire.
In the United States, an important component of a household’s wealth (assets) is its housing investment. As a durable good, housing provides a service flow that contributes to a household’s annual
consumption. In addition, people can use their housing equity as borrowing collateral to the extent
that they have sufficient equity in their homes. Rising house prices provide households with increased borrowing capacity, but when house prices are falling, individuals are limited in the degree to
which they can finance additional consumption through housing wealth. The key question therefore
is to what extent are spending decisions by U.S. households driven by housing’s changing value as
a financial asset? Of the two standard explanations, the first holds that increases in housing wealth
directly affect consumption through what is termed the “wealth effect”—household balance sheets
improve, so consumers feel justified in spending more. The alternative argument holds that housing
wealth serves as borrowing collateral to finance nonhousing consumption, thus relaxing the income constraints a household may face. Households that own their primary residence can access the equity they have amassed in their homes via a home equity line of credit (HELOC) or a cash-out refinancing, thus freeing up funds to achieve their desired level of current consumption. The ability to tap into housing wealth is particularly advantageous for households that may be experiencing a negative income shock due to unemployment or that are confronting high medical or educational expenses. Compared with using a credit card, HELOCs tend to offer much lower interest rates, higher borrowing limits, and potential income-tax deductibility of some of the financing costs.

[Figure: U.S. Real Consumption Growth versus U.S. Real House Price Growth. Plots the four-quarter percent change in real personal consumption expenditures and in real house prices from 1970 to 2010, ranging from roughly -8 to 12 percent. Source: Author's calculations based on NIPA data (real PCE) and OFHEO house price data.]
Throughout the first half of the last decade, real house prices rose rapidly in the United States. Aggregate U.S. consumption was strong during this period despite two notable events in 2001: the
collapse of the bubble in technology stocks and the economic slowdown following the September
11 attacks. Between 2000 and late 2006, real house prices rose 50 percent, while the Federal Reserve
Board found that between 2002 and 2005 the dollar value of outstanding HELOCs grew at an
annual rate of 30 to 40 percent—evidence that households were borrowing against their homes to
finance personal spending. The annual personal savings rate in the United States steadily decreased
to almost zero in 2005 before recovering slightly in 2006, when U.S. house prices peaked. Given the
recent financial crisis, understanding how household spending decisions may be driven by changes in
perceived housing wealth can inform projections of how the U.S. economy will recover from the current recession and can help quantify the implied aggregate impact of falling house prices. Now that U.S. housing prices have declined substantially, how might household consumption respond to this changed
financial landscape?
Research Approach
By comparing the net wealth effect channel with the borrowing collateral channel, the author investigates how individual U.S. households respond through consumption to changes in their house’s
asset value, conditional on their being content with their current level of housing services. Since
the households under consideration remain living in their current home, this approach isolates how
household spending decisions may be influenced by balance sheet gains or losses in the value of their
housing investment. The existing literature and many macroeconomic forecasting models do not
control for households’ individual borrowing needs when evaluating the relationship between housing wealth and nonhousing consumption, but a household-specific measure of borrowing demand
is important for analyzing this relationship. Using household-level data from the Panel Study of
Income Dynamics (PSID) allows the author to distinguish between individual households that do
and do not have a high demand for consumption financed by borrowing. Comparing a household’s
current real income with its average income—a measure of a household’s lifetime mean earnings—
identifies its deviations from average income and indicates its potential inability to fund its desired
current consumption. The author’s estimation includes all households in the PSID between 1984
and 2005 that own their primary residence and whose head of household is 65 years old or younger.
The sample starts in 1984, when the PSID began to track financial wealth data. The author deems
that a household is constrained, hence a potential borrower, if its current income is at least 10 percent
below its average income.
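The constraint classification amounts to a simple threshold rule. A minimal sketch in Python, using hypothetical income figures rather than actual PSID variables:

    # Illustrative sketch of the paper's constraint rule: a household is treated as
    # "constrained," and hence a potential borrower, when its current income falls
    # at least 10 percent below its own average (lifetime-mean) income.
    def is_constrained(current_income, average_income, threshold=0.10):
        return current_income <= (1.0 - threshold) * average_income

    # Hypothetical example: average income of $60,000, current income of $52,000.
    print(is_constrained(52_000, 60_000))  # True: income is about 13 percent below average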
Key Findings
• The author finds that across all households, a $1.00 increase in house values leads to a roughly
3.5 cent permanent increase in nonhousing expenditures. For constrained households, a $1.00 increase in home values yields approximately an 11 cent increase in nonhousing consumption. Yet for
households that have limited borrowing needs, changes in their home values have a small effect on
consumption that for the most part is not statistically different from zero. Overall, when the role of
housing wealth as household borrowing collateral is controlled for, there is little evidence of a net
housing wealth effect on consumer spending.

• Controlling for their borrowing needs, when households are categorized by age group—young
households under 35 years of age, middle-aged households 35 to 50 years old, and older households
50 to 65 years old—housing wealth has a substantial direct impact on nonhousing consumption
for the middle-aged and older groups, 11 cents and 12.6 cents, respectively, for each $1.00 increase
in housing wealth. For young households that are constrained, spending increases 6 cents for every
$1.00 increase in housing wealth. Overall, this suggests that the relationship between household
spending and housing wealth cannot be explained entirely by life cycle differences in individual
households’ housing tenure and spending needs. Rather, regardless of age, those households with
higher borrowing needs to finance nonhousing consumption will potentially use the value of their
housing wealth as collateral.
• When high versus low amounts of household leverage are compared against whether households
experience a positive or negative change in their housing wealth, the average consumption of highly
leveraged households increases in response to a housing capital gain. In addition, the marginal
consumption response to changes in housing wealth is substantially larger for highly leveraged
households than for households with lower debt levels.
• Extrapolating from his results, the author estimates that the roughly 11 percent decline in real
housing wealth between 2007:Q4 and 2008:Q4 caused about a 75 basis point (three quarters of 1
percent) decrease in aggregate real nonhousing consumption, a result that is robust to alternative
calculation approaches. Yet, overall, this finding shows that the direct effect of falling house prices
on aggregate U.S. consumption is small. About two-thirds of the reported aggregate decline in
spending is traced to the behavior of households with high borrowing needs. When home prices
fall, housing assets are worth less, so households have less equity to use as borrowing collateral.
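The aggregate extrapolation is essentially back-of-envelope arithmetic: an estimated marginal propensity to consume out of housing wealth, applied to the percentage decline in real housing wealth, scaled by the ratio of housing wealth to nonhousing consumption. A rough sketch in Python, in which the wealth-to-consumption ratio is an illustrative placeholder rather than a figure reported in the paper:

    # Back-of-envelope version of the aggregate calculation; the ratio below is a
    # hypothetical placeholder, not the author's number.
    mpc = 0.035                    # spending response per dollar of housing wealth
    housing_wealth_decline = 0.11  # roughly 11 percent, 2007:Q4 to 2008:Q4
    wealth_to_consumption = 2.0    # assumed ratio of housing wealth to nonhousing consumption

    consumption_drop = mpc * housing_wealth_decline * wealth_to_consumption
    print(f"implied fall in real nonhousing consumption: {consumption_drop:.2%}")
    # With these inputs the implied decline is about 0.8 percent, on the order of
    # the 75 basis points reported in the paper.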
Implications
While the borrowing collateral channel has a more important positive effect on consumption than
the largely negligible net wealth effect channel, it would be useful to understand better what specific
areas of consumer spending are impacted by rising or falling house prices. Particularly in the case
of declining home values, such information could indicate which sector(s) of the economy might be
most affected by falling house prices. Work in this area will depend on better data that are not yet
available.
p-09-10									

The 2008 Survey of Consumer Payment Choice
by Kevin Foster, Erik Meijer, Scott Schuh, and Michael A. Zabek

complete text: http://www.bos.frb.org/economic/ppdp/2009/ppdp0910.htm
e-mail: kevin.foster@bos.frb.org, meijer@rand.org, scott.schuh@bos.frb.org, michael.zabek@bos.frb.org

Motivation for the Research
In 2003, the Federal Reserve Bank of Boston launched the Survey of Consumer Payment Choice
(SCPC) program to develop high-quality, timely, comprehensive, and publicly available data on
consumer payment behavior. A general shortage of such data has inhibited the payments industry,
researchers, and public policymakers from fully understanding the ongoing transformation of the
U.S. payment system. Traditional paper-based payment instruments have been giving way to new
payment instruments that have emerged from innovations in information and communication technologies as well as from innovations in financial markets.
This paper presents the 2008 version of the SCPC, a nationally representative survey developed by
the Consumer Payments Research Center of the Federal Reserve Bank of Boston and implemented
by the RAND Corporation with its American Life Panel. This survey fills a gap in knowledge about
the role of consumers in the transformation from payments using paper as the primary medium of
exchange to those using electronic media. It provides a broad-based assessment of U.S. consumers’
adoption and use of nine payment instruments, including cash. Besides helping researchers learn
how consumers choose among the nine payment instruments, the 2008 SCPC data should also help
public policymakers design policies affecting the U.S. payment system and economy.
These data, which are expected to be produced annually, can be used for at least two purposes: (1)
to create aggregate time-series data that can be used to characterize and analyze trends in payment
markets pertaining to U.S. consumers and (2) to create a longitudinal panel of data that can be used to
study consumer payment behavior and evaluate public policies pertaining to the U.S. payment system.
The consumer-level micro data from the 2008 and 2009 SCPC will be released to the public in 2010.
This paper’s primary purpose is to publish and document for general readership the aggregate statistics obtained from the 2008 SCPC, which appear in a series of detailed tables in the full paper. More
information about the CPRC and supporting documentation for the 2008 SCPC, including the survey instrument, tables of standard errors, and the purpose and methodology of the SCPC program,
are available at http://www.bos.frb.org/economic/cprc/index.htm and in Schuh 2010 (forthcoming).
A secondary purpose of this paper is to provide a brief snapshot of the U.S. payments transformation
from using paper instruments to using electronic and other new payment instruments. The authors
report the most salient basic facts in this paper, but do not provide any economic or business interpretation of the 2008 results. A companion paper (Foster, Schuh, and Zabek 2010, forthcoming) will
provide a more in-depth, yet nontechnical, overview of the results from the 2008 SCPC. That paper
will include economic and business interpretations of the 2008 facts in historical context with results
from other surveys and data.
Research Approach
As noted above, the 2008 SCPC was developed by the Consumer Payments Research Center of the
Federal Reserve Bank of Boston and implemented by the RAND Corporation with its American
Life Panel, a nationwide panel of U.S. consumers. Since the intent of the SCPC is to measure
the payment choices of consumers, the survey concepts and definitions were constructed from the
perspective of a typical consumer. This demand-side approach to payments helps to fill a gap in
knowledge about consumer payment behavior. It also provides the information needed to understand
payment trends and to develop optimal public policies toward payments.
The consumer-oriented concepts and definitions may seem different from the terminology and perspectives of the supply side of the payment system, especially in the area of electronic payments. For
example, the supply-side perspective (the viewpoint of banks, the Federal Reserve System, nonbank
payment service providers and consultants, as well as merchants who accept payment from consumers) focuses on the network on which payments are settled. In contrast, the SCPC looks at payments
from the perspective of how a consumer initiates the payment.
The central focus of the SCPC is on measuring consumer choices about payment instruments. The
2008 SCPC asks questions about nine payment instruments commonly available to U.S. consumers:
four types of paper instruments—cash, checks, money orders, and travelers checks; three types of
payment cards—debit, credit, and prepaid; and two types of electronic payment instruments—online
banking bill payment (OBBP) and electronic bank account deduction (EBAD).
Consumers make three basic choices about payment instruments: (1) whether to get, or “adopt,”
them; (2) whether or not to use them (incidence of use); and (3) how often to use them (frequency
of use, or simply, “use”). The 2008 SCPC measures consumers’ adoption of payment instruments, as
well as consumers’ various banking and other payments practices. The survey also measures the use
of payment instruments by incidence (the percentage of consumers who use them), frequency (the
number of payments each consumer makes), and the types of transactions for which each consumer
uses the various instruments. The 2008 SCPC also asks questions about seven types of payment transactions: three types of bill payments—automatic, online, and in person/by mail; one type of nonbill
online payment; two types of retail goods payments—essential and nonessential; and other nonretail
payments. For each transaction type, the survey asks questions about the number of payments made
with each payment instrument that can be used for that type of transaction. Additionally, the 2008
SCPC asks respondents to rate eight types of payment characteristics—acceptance for payment; acquisition and setup; control over payment timing; cost; ease of use; payment records; payment speed;
and security—for each of six payment instruments—cash, check, debit card, credit card, prepaid card,
and both types of electronic account deductions combined. Finally, the survey collects demographic
information about the respondents.
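The three use concepts (adoption, incidence, and frequency) can be made concrete with a small tabulation. The records below are invented for illustration and are not actual SCPC data or variable names:

    # Toy tabulation of adoption, incidence, and frequency for one instrument
    # (say, debit cards): each record is (has_adopted, payments_made_this_month).
    respondents = [(True, 12), (True, 0), (True, 5), (False, 0), (True, 3)]

    n = len(respondents)
    adoption = sum(1 for adopted, _ in respondents if adopted) / n   # share who hold the instrument
    incidence = sum(1 for _, uses in respondents if uses > 0) / n    # share who used it at all
    frequency = sum(uses for _, uses in respondents) / n             # average payments per consumer

    print(f"adoption {adoption:.0%}, incidence {incidence:.0%}, frequency {frequency:.1f} payments/month")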
Key Findings
• With the nine common payment instruments enumerated above, U.S. consumers have more
payment instruments to choose from than ever before. In 2008, the average consumer had 5.1 payment instruments and used 4.2 payment instruments in a typical month.
• Consumers have widely adopted some, but not all, payment instruments. The vast majority of consumers have adopted cash. Both checks and payment cards (separately) have been adopted by more
than 90 percent of consumers. Slightly more consumers now have debit cards than credit cards (approximately 80 percent of consumers have debit cards, versus the 78 percent that have credit cards),
and consumers use debit cards more often than cash, credit cards, or checks.
• Consumers make 53 percent of their monthly payments (in terms of number of payments) with a
payment card (credit, debit, and prepaid) and only about 37 percent with paper instruments.
• Most consumers have used newer electronic payments, such as online banking bill payment, but
these instruments account for only 10 percent of consumer payments.
• Cash, checks, and other paper instruments are still popular and account for 37 percent of U.S. consumer payments, but more than half of consumers said that they wrote fewer paper checks in 2008
than in 2005. In contrast, during the same time period nearly half of consumers reported an increase in
their use of debit cards, more than 40 percent reported increasing their use of electronic bank account
deductions, and more than 60 percent reported increasing their use of online banking bill payments.
• For retail payments, cash is the most widely used payment instrument, and credit cards and debit
cards are the second and third most widely used payment instruments. Paper checks are still the
most widely used instrument for bill payment.
• Consumers rate security and ease of use as the most important characteristics of payment instruments.
Implications
Although the 2008 SCPC aggregate statistics presented in this paper are preliminary and subject to
revision, they shed new light on consumers’ practices and preferences in the use of various payment
media, and thus provide insight into where we are in the transition from paper to electronic media.
The SCPC complements and supplements existing sources of payments data. The two main publicly
available sources are the Survey of Consumer Finances (SCF) and the Federal Reserve Payment Studies (FRPS). The main advantages of the SCPC over both of these alternative data sources are: (1) it is
higher frequency (annual instead of triennial), so it will provide more timely information on payments;
and (2) it contains a more comprehensive assessment of payment behavior.

A number of private companies also provide some data on consumer payment behavior. Among others, these sources include: the American Bankers Association; Hitachi (formerly Dove Consulting),
which contributed to the FRPS; Javelin Strategy & Research; The Ohio State University Consumer
Finance Monthly; Phoenix Marketing International; Synergistics Research Corp; the U.S. Postal
Service Household Diary (NuStats); and Visa Inc. Most of these data sources are proprietary and
either unavailable to the public or prohibitively expensive, and the details and methodology underlying these data sources are often opaque and difficult to obtain.
Together, the information in these public and private data sources overlaps a great deal. As a result,
an opportunity exists to consolidate and streamline the data collection process into one publicly
available, standardized, and consistent data source on consumer payment behavior. The SCPC offers
that opportunity and the CPRC welcomes partners in this endeavor. Toward that end, the CPRC
developed a Board of Advisors in 2009, including representatives from industry, academia, and the
public sector, to provide input and help develop a consolidated and standardized source of data on
consumer payments as viewed from the consumer’s perspective.
p-09-11

Jobs in Springfield, Massachusetts: Understanding
and Remedying the Causes of Low Resident
Employment Rates

By Yolanda K. Kodrzycki and Ana Patricia Muñoz with Lynn Browne, DeAnna Green,
Marques Benton, Prabal Chakrabarti, Richard Walker, and Bo Zhao
complete text: http://www.bos.frb.org/economic/ppdp/2009/ppdp0911.htm
e-mail: yolanda.kodrzycki@bos.frb.org, anapatricia.munoz@bos.frb.org

Motivation for the Research
For decades the economy of Springfield, Massachusetts has lagged behind its peer cities in New
England. As part of the Federal Reserve Bank of Boston’s multi-year project to promote Springfield’s economic revitalization, this paper examines the causes of and potential remedies for the city’s
low resident employment and labor force participation rates, particularly in neighborhoods of concentrated poverty. As of 2000, labor force participation rates in some poor neighborhoods were below
50 percent. Since any potential solutions to Springfield’s economic problems must include increasing
its resident employment rates, this paper seeks to outline policy priorities to help achieve this goal.
Research Approach
Addressing Springfield’s employment challenges requires ascertaining whether there is a mismatch
between the number of jobs available and the number of potential workers or whether the problems
stem from other issues affecting the city’s residents, such as inadequacies in education, training, and
access to jobs. In 2005–2007 there were almost 76,000 jobs located in Springfield, plus another
90,000 jobs located within a 10-mile radius of the city. Springfield had approximately 148,000 total
residents; its working-age population, comprising those aged 16 years and older, was about 113,000,
of whom 58,000 were employed. This translates to an employment rate of 51 percent for Springfield's working-age population, which is lower than the 57–60 percent average among its peer cities in New
England. In contrast, the share of private industry jobs located in Springfield relative to the city’s
working-age population, 64 percent, is similar to the average job density in peer New England cities.
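The two summary rates cited here are simple ratios of the rounded figures in the text; a brief sketch of the arithmetic:

    # Arithmetic behind the resident employment rate and the job density rate,
    # using the rounded figures quoted above.
    working_age_population = 113_000
    employed_residents = 58_000
    jobs_in_city = 76_000

    employment_rate = employed_residents / working_age_population
    job_density = jobs_in_city / working_age_population

    print(f"resident employment rate: {employment_rate:.0%}")              # about 51 percent
    print(f"jobs per 100 working-age residents: {100 * job_density:.0f}")  # about 67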
To explore the situation in more detail, the authors estimate the number of private sector jobs available by industry and neighborhood areas in Springfield, using the 2006 ZIP Business Patterns
(ZBP). The ZBP data contain total employment in private establishments organized by zip code,
and the size distribution of these businesses. Using these data allowed the authors to categorize approximately 61,000 Springfield jobs, by industry and area. This classification does not fully account for employment in the city because the ZBP data exclude certain categories, notably the self-employed, and omit most public sector (government) jobs.

[Figure: Employment in Springfield. Panels compare employed residents with jobs located in Springfield, in thousands, for manufacturing, healthcare, finance and related activities, and construction, in 1980, 1990, 2000, and 2005–2007. Source: U.S. Bureau of the Census, Decennial Census (1980, 1990, 2000) and American Community Survey (2005–2007); Massachusetts Executive Office of Labor and Workforce Development Employment and Wage (ES-202) data (2005, 2006, 2007). Note: Healthcare and social assistance are reported as a single category in the 2000 and 2005–2007 census data; social assistance services are excluded using their share (around 11 percent) of the "healthcare and social assistance" sector in the 2005–2007 ES-202 data.]
Key Findings
• Springfield’s total job availability is not unusually low. Using the city job density rate, which compares the number of jobs to the size of the working-age population, Springfield has 67 jobs for
every 100 working-age residents—somewhat lower than the average ratio among its peer cities. Included in
the peer group are two state capitals, Hartford and Providence, which have comparatively high
numbers of government jobs. In terms of private-industry jobs, Springfield’s job density rate is 64,
which is close to its peer group’s average.

• Springfield’s low resident employment rate is partly rooted in pronounced demographic changes
over the last few decades. The rise in the city’s percentage of Hispanic residents has been particularly dramatic. As of 2005–2007, 29 percent of Springfield’s working-age population were Hispanic
and 20 percent were black. Springfield’s low employment rate is mostly traceable to the fact that its
disadvantaged groups are less likely to be employed than those in other cities, and not to the high
share of disadvantaged groups that make up Springfield’s population. The city’s employment rate
was 2 percentage points below the corresponding average of other New England cities for the city’s
whites, 7 percentage points lower for blacks, and 9 percentage points lower for Hispanics.
• While a lower percentage of Springfield’s residents have completed high school or college compared with those in other mid-sized New England cities, this gap in educational attainment plays a
relatively minor role in accounting for the differential between the employment rate in Springfield
and the average in other cities. The main difference is that at each level of educational attainment,
Springfield residents are less likely to be employed than are comparably educated individuals in the
other cities. This gap is particularly pronounced for less educated segments: only 39 percent of high
school dropouts were employed in Springfield, compared with 45–53 percent in other cities, while
64 percent of high school graduates were employed, compared with 65–75 percent in peer cities.
• Since local labor markets extend beyond municipal boundaries, another way of measuring job availability for city residents is to directly measure commuting time or distance. The results are mixed
for Springfield, for while a relatively high share of jobs are located within a 10-mile radius of its
downtown, over the last decade there has been a pronounced decentralization of jobs outside this
10-mile radius.
• Distance between residential neighborhoods and jobs is one barrier to employment that deserves
further attention. Springfield’s poor are concentrated near its downtown. Jobs within the city limits
are scattered across various neighborhoods. Most of the retail jobs, for example, are located on the
eastern edge of Springfield, requiring a lengthy bus ride from the city center for those without a car.
Jobs in the suburbs are moving even farther away from the city. To have full access to employment
opportunities in manufacturing and construction, in particular, workers must be able to commute
outside of Springfield.
• Healthcare and social assistance is Springfield’s largest industry and has been a source of growing
employment opportunities for the city’s residents. Service-sector industries, particularly leisure and
hospitality, also are significant employers in and near downtown Springfield. Hiring more people
from inner-city neighborhoods in these industries should be a component of any jobs strategy.
• Other social services aimed at enhancing residents’ abilities to hold a job are needed. Within the
city, poorer residents in the downtown area are hampered by single-parenting duties and the need to
rely on public transportation or carpooling to commute to jobs. Improving transportation options
would help, as well as better matching of residents with jobs near their homes.
Implications
Remedying Springfield’s economic malaise depends heavily on improving its residents’ employment rates.
Roughly 6,000 more residents, an increase of more than 10 percent in the number of employed residents, need to find jobs if
Springfield’s resident employment rate is to equal the average among its peer cities in New England.
Increasing employment will involve some combination of job creation, improving residents’ entry-level labor market skills so they are better employment candidates, improving informational access
to job opportunities, and improving physical access to work sites. Job density rates are quite high in
and near the Springfield neighborhoods with low incomes and low employment rates, so better job
matching of inner-city residents with inner-city jobs is a promising strategy.

Working Papers

w-09-7									

Trends in U.S. Family Income Mobility, 1967–2004
by Katharine Bradbury and Jane Katz

complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0907.htm
e-mail: katharine.bradbury@bos.frb.org, jane.katz@ny.frb.org

Motivation for the Research
Much of America’s promise is predicated on the existence of economic mobility—the idea that people are not limited or defined by where they start in life, but can move up the economic ladder based
on their own efforts and accomplishments. Family income mobility—changes in individual families’
incomes over time—is one indicator of the degree to which the eventual economic well-being of
any family is tethered to its starting point. In the United States, family income inequality has risen
yearly since the mid-1970s, raising questions about whether long-term income is also increasingly
unequally distributed. Changes over time in mobility, which can offset or amplify the cross-sectional
increase in inequality, determine the degree to which longer-term income inequality has risen in
tandem. Other things being equal, an economy with rising mobility—one in which people move
increasingly frequently or traverse increasingly greater income distances—will result in a more equal
distribution of lifetime incomes than an economy with declining mobility.
In the very broadest terms, economic mobility is the pace and degree to which individuals’ or families’ incomes (or other measures of economic well-being) change over time. Measures of mobility
summarize the transition process from the set of incomes in the economy at one point in time to the
incomes of those same individuals or families at a later point.

[Figure: Mobility of U.S. Families Over 10-Year Periods. Percent in the richest decile who moved down and percent in the poorest decile who moved up, plotted for overlapping 10-year periods. Source: Authors' calculations. Note: The deciles shown are the highest and lowest 10 percent of the U.S. family income distribution.]

Researchers have employed a variety of mobility concepts and measures, sometimes using different measures to address different underlying questions. In this paper, the authors focus on concepts and measures that most closely address
questions related to mobility as an equalizer of long-term incomes and the degree to which end-of-period income (or position) is independent of beginning-of-period income or position. The authors
are particularly interested in learning whether different concepts and measures tell a consistent story
and whether findings from previous studies are artifacts of the particular measures used.
Research Approach
Using data from the Panel Study of Income Dynamics (PSID), the authors examine time patterns
of income mobility for U.S. working-age families between 1967 and 2004 according to a number
of mobility concepts and measures, including a measure of the degree to which mobility equalizes
long-term incomes. Calculating these measures for overlapping 10-year periods, they document
mobility levels and trends for U.S. working families, overall and by race. For purposes of comparison,
the authors also look at shorter (4-year) periods and longer (16-year) periods.
The authors begin by discussing various concepts of income mobility and the associated measures.
They first distinguish between relative, absolute, and interaction mobility. Relative mobility refers
to individuals or families trading relative position in the distribution of outcomes between the beginning and end of a period. Absolute mobility is movement relative to some real standard of well-being or purchasing power, such as the poverty line or median income at the start of the period.
Although relative and absolute measures may move together, rising absolute mobility can occur
during a period of declining relative mobility and vice versa. Interaction mobility, a term coined by
the authors, refers to the interaction of changes in families’ relative ranks and the associated changes
in the level and spread of the income distribution. Thus, interaction mobility reflects changes in the
structure of rewards in the economy as well as changes in individual families’ access to these rewards
over time. Interaction measures are useful for summarizing how much movement the average family experiences—both relative to other families and in terms of absolute income change—taking
into account contemporaneous changes in average income levels and the degree of inequality of the
income distribution.
The authors next discuss the distinction between overall mobility—changes between the start and
end of a period in an entire vector of individual observations of well-being, such as family income
or rank in the distribution—and origin-specific mobility—changes over a period in the incomes of
individuals or families defined by their position in the distribution at the beginning of the period.
Origin-specific measures are of interest for several reasons, including concerns about the ability of
the poorest families to escape the bottom rungs of the income ladder and concerns about stability
at the top, as such measures may provide evidence of unequal opportunity or a lack of meritocracy.
Introducing the concept of subgroup mobility, the authors focus on between-group mobility, which
indicates how members of a subgroup move relative to members of another subgroup or relative to
the overall income distribution, rather than within-group mobility, which indicates the extent to
which members move relative to one another within a subgroup.
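One origin-specific measure used in the paper, the share of families starting in the poorest decile who move up, can be sketched as follows. The incomes are simulated toy data, not PSID observations:

    # Sketch of an origin-specific mobility measure: among families starting in the
    # poorest decile, what share end the period in a higher decile?
    import numpy as np

    rng = np.random.default_rng(0)
    start = rng.lognormal(mean=10.5, sigma=0.7, size=1000)       # family income at start of period
    end = start * rng.lognormal(mean=0.0, sigma=0.4, size=1000)  # income ten years later (toy dynamics)

    cuts = np.linspace(0.1, 0.9, 9)
    start_decile = np.digitize(start, np.quantile(start, cuts))  # 0 marks the poorest decile
    end_decile = np.digitize(end, np.quantile(end, cuts))

    poorest = start_decile == 0
    moved_up = (end_decile[poorest] > 0).mean()
    print(f"share of poorest-decile families who moved up: {moved_up:.0%}")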
Key Findings
• Different measures yield similar pictures of mobility trends. By most measures, family income
mobility has been lower in the more recent periods studied (the 1990s into the early 2000s) than
in the 1970s.
• Family income mobility apparently decreased or did not increase enough between the 1970s and
the 1990s to stem increases in long-term income inequality. Furthermore, a family’s position at the
end of a period was less likely to have been produced by a random process and more correlated with
the family’s starting position than was the case 30-plus years earlier.

[Figure: Upward Mobility of Poorest U.S. Families Over 10-Year Periods, by Race. Percent of white families and percent of black families in the poorest decile who moved up, plotted for overlapping 10-year periods. Source: Authors' calculations. Note: The poorest decile is the lowest 10 percent of the U.S. family income distribution.]

• Like overall mobility, the mobility of families starting near the bottom has worsened over time. In
addition, declines in mobility seem to be more pronounced lower in the income distribution, as
poorer families were decreasingly likely to move up.
• However, comparing only the most recent periods, the downward trend is less pronounced, or even
nonexistent, depending on the mobility measure employed—although a decrease in the frequency
of collection of panel data on family income in recent years makes it difficult to draw firm conclusions.
• Black families exhibit substantially less mobility than white families in all periods relative to the overall
distribution of families and in absolute terms, but the disparity between the races’ mobility patterns
does not appear to be growing except in terms of the between-race difference in long-term income.
• Taken together, the evidence suggests that over the 1967-to-2004 time span, a low-income family’s probability of moving up decreased, families’ later year incomes increasingly depended on their
starting places, and the distribution of families’ lifetime income became less equal.
Implications
Although the authors find that family income mobility has decreased and long-term inequality has
risen, they also note that there is no simple answer when it comes to evaluating levels and trends in
inequality and/or mobility. Some inequality in the potential and actual economic rewards to individuals and families undoubtedly produces efficiencies in allocation and production; it may encourage
people to work hard, to save, to invest in human and physical capital, and to innovate. But inequality
may also reflect restricted opportunity or barriers to mobility. Such barriers—individual circumstances, economic/social institutions or arrangements, discriminatory practices, imperfect capital
markets, imperfect information, or other impediments that prevent poor families from improving
their situation—result in unequal starting points being reinforced over time. These barriers not
only distort market incentives and discourage the hard work and investment that lead to economic
growth but are also likely to result in negative externalities such as crime and reduced social cohesion, making public policy decisions more difficult.
One public policy implication is relatively clear, however, based on the authors’ finding that the typical
poor family is less likely to move up and out of poverty within several years than it was 30 years ago:
policy remedies for those at the bottom should aim beyond short-term help, as the poor at any point in
time are likely to have low long-term incomes. Beyond the short term, the choice of policy presumably
hinges, at least in part, on the reasons for the decline in mobility—for example, whether it reflects rising barriers to opportunity or rising returns to high-stakes labor market promotion practices. Further
research is needed to assess the balance among these potential sources of the decline in mobility.
w-09-8									

Real Estate Brokers and Commission:
Theory and Calibrations
by Oz Shy

complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0908.htm
e-mail: oz.shy@bos.frb.org

Motivation for the Research
This paper has two goals: (1) to model an inherent conflict of interest between a seller of a house and
the real estate broker hired by the seller and (2) to calibrate the broker’s commission rates that would
maximize the seller’s expected gain. The inherent conflict of interest between the seller and the broker
results from the fact that the broker’s commission constitutes only a small fraction of the transaction
value. Thus, brokers often have an incentive to convince sellers that waiting for a higher-paying buyer
would be risky. A lower price increases the probability of a sale and hence leads to a faster sale. Faster sales
often reduce brokers’ costs by more than the extra commission they might receive from trying to sell
at higher prices. The calibrated rates may provide a rough indication of whether the widely used 6 percent commission rate reflects collusion among real estate agencies (in which case, the calibrated values
should be much lower than the observed value of 6 percent) or whether this rate is competitively determined (in which case the calibrated values should be around the observed value of 6 percent). This
investigation is important in view of the long-term investigations by the Federal Trade Commission
(FTC) and the Department of Justice (DOJ) concerning the possibility that the widespread use of the
6 percent commission rate may reflect collusive behavior in the real estate brokerage industry.
Most homesellers in the United States pay a 6 percent commission to real estate brokers. However,
under some circumstances, the individual agent who exerts most of the effort may receive only
around 1.5 percent of the sale price because the seller’s and the buyer’s agents (if they are not the
same) tend to split the 6 percent commission and each agency may take half of the remaining 3
percent. Outside the United States, sellers’ commission rates are generally much lower, often ranging from 1.5 percent to 2 percent. This may be a consequence of the fact that buyers also pay some
commission to brokers. Clearly, it is a puzzle why discount real estate brokers—who offer (perhaps)
more limited services for a lower commission—are not observed more frequently in the United
States, while discount brokers are now widely prevalent in U.S. financial markets.
This paper differs from the literature in that it does not attempt to explain the role played by
middlemen. Instead, its scope is much narrower: to measure the magnitude of the conflict of interest
between house sellers and real estate brokers by examining the difference between house prices set
by sellers and those set by brokers.

Research Approach
The paper develops a dynamic model in which a house seller hires a real estate broker to handle the
sale. Both the seller and the broker bear costs of delay each time the broker fails to sell the house
and the sales effort continues in a subsequent period. The paper demonstrates the inherent conflict
between a seller and a real estate broker, initially using a simple example with two types of buyers
who differ in their willingness to pay for a house, with the brokerage commission exogenously determined by, say, an association of real estate brokers. The paper then extends the model to a continuum
of buyer types and constructs a model in which the broker’s commission is determined by a seller
who maximizes the expected net-of-commission gain from selling a house. To address the second
goal, the author computes the commission rate that maximizes the seller’s expected gain, assuming
that the house price is determined by the broker and not by the seller. This assumption generates an
incentive on the part of sellers to pay a commission sufficient to motivate the brokers to avoid setting
a low price just to accelerate the sale. This model then calibrates the sellers’ most profitable commission rate, using data on housing prices and costs of delay taken from the website of the National
Association of Realtors.
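The conflict of interest can be illustrated with a stylized two-buyer-type example in the spirit of the model's opening case. The prices, arrival probability, and per-period delay costs below are assumptions chosen for illustration, not the paper's calibration:

    # Stylized two-buyer-type sketch of the seller/broker conflict. All numbers and
    # the waiting-cost structure are illustrative assumptions.
    v_low, v_high = 280_000, 320_000  # willingness to pay of the two buyer types
    q_high = 0.4                      # chance a high-type buyer arrives in a given period
    c = 0.06                          # commission rate
    seller_delay_cost = 4_000         # per period the house remains unsold
    broker_delay_cost = 2_000         # per period of continued sales effort

    def expected_payoffs(price, expected_periods):
        seller = (1 - c) * price - seller_delay_cost * (expected_periods - 1)
        broker = c * price - broker_delay_cost * (expected_periods - 1)
        return seller, broker

    low = expected_payoffs(v_low, 1.0)             # low price: sells right away
    high = expected_payoffs(v_high, 1.0 / q_high)  # high price: waits 1/q periods on average

    print("seller prefers the high price:", high[0] > low[0])  # True with these numbers
    print("broker prefers the high price:", high[1] > low[1])  # False: commission gain < extra delay cost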
Key Findings
• A real estate broker will recommend a lower price than the price that maximizes the seller’s expected gain as long as the broker’s commission rate is below 50 percent, which is always the case.
In other words, sellers prefer setting a higher price, which generally prolongs the sale of the house,
compared with the price that would be set by a commission-paid real estate broker. This finding
stems from the fact that a real estate agent has less to gain from selling at a high price than does
the seller.
• The results imply that the standard 6 percent commission rate, if paid to a single broker, far exceeds
the commission rate that would be preferred by a seller, despite the fact that a higher commission
rate would motivate the broker to ask for a higher price. This, however, need not be the case if the
commission is split among several brokers and agencies.
• If several brokers split the commission (for example, the buyer’s and seller’s brokers and the agencies that employ these brokers), then a 6 percent commission may be needed to motivate the broker
to sell at a high price.
Implications
The conflict of interest between a house seller and the real estate agent hired by the seller harms the
seller and benefits the buyer. In this model, real estate agents improve social welfare because they
reduce the cost of delaying a sale. That is, the pressure agents put on sellers to reduce their prices
shortens the amount of time it takes to sell a house. Since social welfare is not affected by the allocation of rents between sellers and buyers, and between sellers and real estate brokers, social welfare is
enhanced when sales decisions are delegated to realtors.
The model developed in this paper and the calibration itself can be easily modified to capture situations in which several brokers or agencies split the commission paid by a house seller. The important
empirical question to ask in this context is what fraction of real estate transactions involve one, two,
three, or four real estate brokers.
Another related empirical question is how commission rates affect the speed of home sales. This
investigation might be accomplished by comparing the number of house visits by potential buyers
divided by the number of brokers involved in the sale. One could also investigate whether houses
sold in countries with lower commission rates sell faster than in the United States. Clearly, in such
investigations it may be impossible to control for the institutional differences of housing markets in
different countries.

The model could be further extended by introducing two additional features. First, the model could
be extended by incorporating benefits for the seller in hiring a real estate broker. To accomplish this,
the seller’s utility function should be modified slightly to include the seller’s additional possible gains
from employing a broker compared with “sale by owner.” Second, the model could be extended to
enable analysis of how the commission rate influences the efforts exerted by brokers and how these
efforts are translated into the speed of sale.
The conflict of interest identified in this paper prevails not only in the market for residential real
estate but also in some other markets. For example, in legal cases for which attorneys receive a fraction of the final settlement instead of fixed fees, attorneys may recommend to their clients that they
should settle on lower compensation levels than the level that would maximize the client’s expected
benefit. Similar conflicts may exist between stock brokers and their clients because brokers’ compensation is contingent on their clients’ actual purchase and sale of stocks and mutual funds, and even
in agricultural contracts involving cropsharing.
w-09-9

Efficient Organization of Production: Nested
versus Horizontal Outsourcing
by Oz Shy and Rune Stenbacka

complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0909.htm
e-mail: oz.shy@bos.frb.org, rune.stenbacka@hanken.fi

Motivation for the Research
Manufacturing firms rely on intermediate components when assembling final (finished) goods. A
strategic part of a firm’s production process, termed the “make-or-buy” decision, is determining
whether to produce intermediate components in-house or to outsource some to subcontractors.
Firms choose different patterns of outsourcing production of components, and two principal types
of outsourcing are generally observed. The first involves outsourcing components to several component-producing firms. Under this outsourcing structure (which the authors call horizontal outsourcing), the subcontracted firms must produce the components themselves and cannot subcontract any production to other firms. In the second approach, the final good producer outsources the production
of some components to another firm, which then outsources the production of some components
to a third firm, and so on. The authors term this pattern nested (vertical) outsourcing because a
subcontractor may hire additional subcontractors to perform some of the work. For industries that
have high component-specific monitoring costs, how outsourcing is structured may have significant
effects on the firm’s overall production costs. For this reason, it is important to investigate two questions: (1) Why do firms in different industries adopt different patterns of outsourcing? (2) What is
the optimal pattern of outsourcing in a given industry?
Research Approach
This paper adds to the literature by comparing nested and horizontal outsourcing to find which
approach is the more efficient outsourcing method. Determining how to conduct outsourcing is
important for a firm that relies on component-specific monitoring in its manufacturing process. The
authors construct a model in which component-specific monitoring costs are incurred for managing
the in-house production of intermediate parts and managing the outsourced production of intermediate parts. Monitoring costs also increase with the number of subcontractors being employed. By
having constant marginal costs for production together with increasing marginal costs for monitoring production lines, the model focuses on the effects of these monitoring costs on the efficiency of
the outsourcing choice.
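The trade-off the model isolates can be illustrated with assumed functional forms: constant marginal production costs plus monitoring costs that are convex in the number of production lines monitored and in the number of subcontractors dealt with directly. The specification and parameters below are illustrative assumptions, not the paper's:

    # Illustrative cost comparison of horizontal versus nested outsourcing for a
    # final good requiring 12 production lines; functional forms are assumed.
    def monitoring_cost(lines_managed, subcontractors, gamma=1.0, delta=2.0):
        return gamma * lines_managed ** 2 + delta * subcontractors ** 2

    total_lines, unit_cost = 12, 1.0
    production = unit_cost * total_lines  # identical under either structure

    # Horizontal: the assembler keeps 6 lines and deals directly with 3 subcontractors
    # producing 2 lines each.
    horizontal = production + monitoring_cost(6, 3) + 3 * monitoring_cost(2, 0)

    # Nested: the assembler keeps 6 lines and uses 1 subcontractor, which keeps 4 lines
    # and passes 2 on to its own subcontractor.
    nested = production + monitoring_cost(6, 1) + monitoring_cost(4, 1) + monitoring_cost(2, 0)

    print(f"horizontal total cost: {horizontal:.0f}, nested total cost: {nested:.0f}")
    # With delta = 2 (strong diseconomies in the number of subcontractors), nesting is
    # cheaper here; with delta = 0.1 the ranking flips and horizontal outsourcing wins.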

Key Findings
• Under nested outsourcing, firms that are higher on the outsourcing ladder, where “higher” means
closer to the original firm that assembles the final product, produce a larger number of components
than firms that are lower on the outsourcing ladder.
• It is efficient to outsource a smaller fraction of production lines under nested outsourcing compared
with horizontal outsourcing as long as there are no significant diseconomies with respect to monitoring a large number of subcontractors. Under this condition, nested outsourcing is inefficient
relative to horizontal outsourcing.
• Nested outsourcing is more profitable for the final good producer than horizontal outsourcing if
there are strong diseconomies with respect to the number of subcontractors.
• A market failure may arise in situations where nested outsourcing is the market outcome but horizontal outsourcing is the efficient outcome.
Implications
For firms that require intermediate components in order to produce finished goods, the strategic decision of whether to outsource production of some components and, if so, how to allocate that production efficiently among subcontractors has implications for their total production costs and profits. Despite a market bias
towards using nested outsourcing, the authors find this approach to be inefficient in many instances.
This paper’s analysis of the most efficient approach to outsourcing might be extended by investigating the final good producer’s degree of bargaining power relative to the subcontracting firms.
w-09-10

Estimating the Border Effect: Some New Evidence

by Gita Gopinath, Pierre-Olivier Gourinchas, Chang-Tai Hsieh, and Nicholas Li
complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0910.htm
e-mail: gopinath@harvard.edu, pog@berkeley.edu, chsieh@chicagogsb.edu, nickli@econ.berkeley.edu

Motivation for the Research
According to the “law of one price,” the price of a given good will be the same everywhere once
adjusted for exchange rates. Yet this prediction does not hold empirically. Price differences at the
consumer level and at the wholesale level can result from varying transaction costs due to differences in currencies and regulations. Other factors such as different market conditions, wages, tastes,
and infrastructures also can result in price differences across countries. In international economics,
a critical question is the extent to which national borders and national currencies impose costs that
segment markets across countries. In the existing literature attempts to identify the factors that
generate the “border effect” and its magnitude have not controlled for heterogeneity among retailers
or established clear benchmarks that separate the border effect from other factors generating price
dispersion. This has given rise to an argument that a composition bias affects cross-country price
indexes at higher levels of aggregation. To address these deficiencies, this paper develops a cross-border model of price determination and exploits critical information about the geographic location
of individual stores to better estimate the factors that truly contribute to the border effect.
Research Approach
To address the issue of heterogeneity among retailers, the paper’s first key innovation is its use
of a dataset that contains weekly store-level price data from 325 grocery stores belonging to the
same large food and drug chain retailer operating in the United States and Canada. Collected from
January 2004 through June 2007, the data contain weekly total sales, quantities sold, retail prices,
wholesale unit costs, and a measure of per-unit gross profit for 125,048 unique goods identified by
universal product codes (UPCs) in 61 distinct product groups. The product observations are mostly
concentrated in the processed and unprocessed food and beverage category, housekeeping supplies,
personal care products, and books and magazines. The retail prices exclude U.S. sales taxes and
Canadian value-added taxes and provincial sales taxes. The authors match UPCs to get a set of
4,221 identical products available in at least one Canadian store and one U.S. store.
The second major innovation is the authors’ use of the individual store’s geographic location to isolate the border effect from other causes of price dispersion. In most of the existing literature, due to
a lack of data, no distinction is made between stores that are close to a border and stores that are far
from it. By developing a pricing model based on the store’s distance from the border and employing a regression-discontinuity approach, the authors establish patterns of cross-border prices that capture
more significant differences in market conditions and arbitrage costs for stores located close to and
farther away from the shared U.S.-Canadian border. By locating stores on a circle, this model estimates the distribution of prices within and across countries in the presence of a border effect, and
heterogeneity in marginal costs across countries. Through comparing the prices of identical products sold in stores run by the same retailer, the authors can test whether there are deviations in the
law of one price between stores located close to but across the border from each other. The authors’
results withstand four robustness checks.
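The basic object of comparison, the log price gap at the border for matched products, can be sketched as follows. The store records and exchange rate are invented; the actual analysis uses the confidential store-level scanner data and a regression-discontinuity design in distance to the border:

    # Sketch of the matched-UPC comparison: convert Canadian prices to U.S. dollars
    # and compute the log price gap for each product, then take the median across UPCs.
    import math, statistics

    # (upc, us_price_usd, canadian_price_cad) for stores just on either side of the border
    matched = [("0001", 3.49, 4.29), ("0002", 2.99, 3.19), ("0003", 5.49, 6.99)]
    cad_per_usd = 1.10  # hypothetical nominal exchange rate

    gaps = [math.log(ca / cad_per_usd) - math.log(us) for _, us, ca in matched]
    print(f"median log price gap at the border: {statistics.median(gaps):+.3f}")
    # About +0.11 with these made-up numbers, i.e., roughly an 11 percent gap.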
Key Findings
• The study’s results affirm the existence of significant border costs: at the border, large and heterogeneous price discontinuities across products are observed for retail prices and wholesale prices, and
smaller discontinuities are observed in markups.
• When border costs become sufficiently large, markets are fully segmented across countries, and
the magnitude of border costs no longer affects pricing decisions. The authors find strong evidence
of international market segmentation, even for identical goods. The failure of the law of one price
that they observe at the UPC level is very similar to the failure observed at a more aggregate level.
• The median retail and wholesale price discontinuities at the border move almost one-to-one with
the U.S.-Canadian nominal exchange rate. The Canadian dollar appreciated in cumulative terms
by 16 percent over the sample period. The median gap across UPCs between average Canadian and U.S. prices and costs increased from −5 percent in June 2004 to 15
percent in June 2007, a variation that closely tracks the U.S.-Canadian nominal exchange rate. It
appears that the U.S. dollar’s depreciation between January 2004 and June 2007 increased both the
costs and the prices in Canadian stores close to the border relative to U.S. stores on the other side.
Overall, the evidence indicates that the median price gap moves closely with the nominal exchange
rate and that cost differences play an important role.
• While the median price gap moves closely with the exchange rate, the price gap for an individual
UPC is likely to be dominated by idiosyncratic factors. The border effect on prices varies substantially across products, and there is a large dispersion of price gaps across UPCs at any given point
in time. Most differences in cross-border consumer prices arise from differences in an apparently
tradeable component of costs, and not from systematic markup differences.
• The median price discontinuity across UPCs is as high as 15 percent for consumer prices and
17 percent for wholesale prices, while the median absolute price discontinuity is 21 percent for
consumer prices and 21 percent for wholesale costs. The standard deviation across UPCs is large,
indicating that the discontinuity at the border across goods varies from large and positive to large
and negative.

Implications
The strong evidence of international market segmentation at both the barcode level and the aggregate level argues against the contention that aggregate-level differences in the law of one price
are due to a compositional bias. It appears that wholesale markets are highly segmented, even when
serving the same retailer, a striking result since wholesale costs are the most tradeable component
of overall costs. To the extent that the nature of price setting and the costs of arbitrage vary across
goods, or across retailers, further work that encompasses a wider range of goods and retailers would
be very useful.
w-09-11									

Social and Private Learning with Endogenous
Decision Timing
by Julian Jamison, David Owens, and Glenn Woroch

complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0911.htm
e-mail: julian.jamison@bos.frb.org, dowens@haverford.edu, glenn@econ.berkeley.edu

Motivation for the Research
Individuals and organizations routinely have the option to undertake large sunk investments in
technologies that could drastically alter how they operate. Typically, these expenditures come with
significant risk: besides uncertainty as to whether a new technology will live up to its promises, the
return on investment depends on factors outside their control, such as the cost of complements,
market conditions, and macroeconomic trends. Contemporary examples include the deployment of
an advanced computing or communications system or the adoption of green technologies to conserve energy and reduce pollution. While adoption initiates the stream of benefits from the innovation—whether these take the form of lowered costs or a new revenue source—delay allows a firm to
gather additional information on the prospects of the technology’s profitability.
Economists are generally puzzled as to why, in practice, superior technologies diffuse slowly through
the population. The authors of this paper seek to contribute to the vast literature on the causes and
patterns of the adoption and diffusion of innovations, concentrating on the portion that deals with how
information is used by potential adopters to select among available innovations and decide when to
adopt them. The aim of this paper is to investigate whether all the information available to members
of an industry is employed in making the choice among several potential innovations and whether the
best technology is chosen, given the available information. More specifically, the authors are interested
in the choice between a safe and risky innovation, in the extent to which people delay adoption to
gather private and public information, and in whether these two sources of information impact timing
in different ways. While many econometric studies of diffusion models have been undertaken, few
laboratory experiments have been conducted to test various hypotheses concerning these issues.
Research Approach
This paper employs laboratory experiments to investigate behavioral patterns that govern firm and
industry adoption of innovations as decisionmakers balance the tension between acting quickly and
waiting for more information. In the authors’ experiments, subjects choose between a safe and a
risky innovation and also decide when to adopt the technology. Prior to adoption, subjects earn a
return associated with a status quo technology that is smaller than the return on the safe innovation.
The authors implement three treatments that differ in terms of the amount of information made
available to the subjects. In addition to knowledge of the risk and return properties of the three technologies, each subject observes a private, imperfectly informative signal regarding the true return of
the risky innovation; in the first treatment (the control), this private signal is the only information available. In the second treatment (“private values”), the subjects additionally
observe the prior adoption decisions of other subjects, but each subject has a unique return on the
risky innovation—hence these observations are indicative of potentially useful timing information
but not useful payoff information. In the third treatment (“common values”), the risky return is the
same across all subjects—hence the observation of others’ decisions is indirectly informative of one’s
own payoff. Adoptions are irreversible, so delaying the decision is the only way to acquire additional
information about the return on the risky technology. Delay is not costless, however, since subjects
incur an opportunity cost equal to the difference between the per-period profitability of the safe
innovation and the status quo technology. To establish a benchmark against which to evaluate the
experimental results, the authors solve for the Bayesian-Nash equilibrium strategies for the subjects
in each of the three experimental treatments.
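As a heavily simplified illustration of the decision problem facing a subject in the control treatment, the sketch below simulates one subject who updates a belief about the risky return from noisy private signals and trades off better information against the per-period opportunity cost of remaining with the status quo. The payoff values, signal accuracy, and threshold rule are hypothetical; they are not the authors’ experimental parameters or the Bayesian-Nash benchmark they derive.

import numpy as np

# Stylized sketch of the control treatment; all numbers are hypothetical.
STATUS_QUO, SAFE = 1.0, 1.5          # per-period returns before adoption and for the safe innovation
RISKY_HIGH, RISKY_LOW = 3.0, 0.5     # the risky innovation pays one of these per period
SIGNAL_ACCURACY = 0.7                # each private signal matches the true state with this probability

def update_belief(belief_high, signal_high, q=SIGNAL_ACCURACY):
    # Bayes rule for P(risky return is high) after observing one private signal.
    like_high = q if signal_high else 1 - q
    like_low = (1 - q) if signal_high else q
    return belief_high * like_high / (belief_high * like_high + (1 - belief_high) * like_low)

def simulate_subject(true_high, rng, periods=10, threshold=0.75):
    belief, payoff = 0.5, 0.0
    for t in range(periods):
        signal_high = rng.random() < (SIGNAL_ACCURACY if true_high else 1 - SIGNAL_ACCURACY)
        belief = update_belief(belief, signal_high)
        if belief >= threshold or belief <= 1 - threshold:
            # Adopt once the belief is decisive; adoption is irreversible.
            expected_risky = belief * RISKY_HIGH + (1 - belief) * RISKY_LOW
            choice = "risky" if expected_risky > SAFE else "safe"
            realized = (RISKY_HIGH if true_high else RISKY_LOW) if choice == "risky" else SAFE
            return choice, t + 1, payoff + (periods - t - 1) * realized
        payoff += STATUS_QUO         # waiting forgoes SAFE - STATUS_QUO each period
    return "status quo", periods, payoff

rng = np.random.default_rng(0)
print(simulate_subject(true_high=True, rng=rng))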
The laboratory experiments offer tests of several behavioral hypotheses. From a purely decision-theoretic perspective, subjects may rely solely on their private information to decide whether to opt
for the risky alternative, ignoring completely the choices made by others. Delay would indicate a
subject’s desire to gather more private information to make a better choice between the two investment alternatives. At the other extreme, subjects may simply ignore their private information and
imitate the adoption decisions of others who acted earlier. While such unreflective imitation can
accelerate the diffusion of an innovation through the population, it can also lead to an industry-wide
selection of an inferior technology. Furthermore, such conformity could also create perverse incentives, as when a firm adopts an innovation early to steer the industry toward one technology rather
than another.
The authors’ experimental design, together with their hypotheses about adoption behavior, has been
greatly influenced by the rapidly growing body of research on social learning games and experiments.
Theoretical models in that literature analyze sequential investment games played by rational agents
who have access to both private and public information. These models have been preoccupied with
the possibility that adopters choose to imitate prior adoptions as a means to free ride on the information gathered by others, but most previous models have assumed that the timing of such decisions
is exogenously given.
Key Findings
• On average, subjects show a slight preference for choosing the safe innovation over the risky one.
Their adoption decisions significantly improve upon pure randomization, indicating that subjects
incorporate private and public information into their decisionmaking.
• Subjects do tend to be guided by their private signals. However, observation of others’ earlier
adoption decisions tends to improve subjects’ performance by inducing them to respond to their
own private signals earlier than they otherwise would have, even if their decisions are not based on
common payoffs (that is, even if the information about other subjects’ decisions does not provide
useful information regarding the outcomes). Surprisingly, subjects do a better job at picking the
better of the two innovations when they receive noninformative reports on prior adoptions than
when those reports contain valuable information.
• Roughly half of the subjects in all treatments do not follow the theoretical prescription to adopt
the technology favored by their first private signal. Instead, they delay the adoption decision with
the apparent intent of acquiring additional information. With social information available (in this
case, knowing the decisions of others, whether or not this information is payoff-relevant), subjects
are slightly less likely to make the choice in the first round. However, when subjects observe their
peers, they adopt more quickly as a group than when they do not. This result suggests that early
adopters generate “competitive pressure” on other subjects to act, even when such action diverges
from the most popular prior adoption decision.
• Profits earned by subjects appear to be related to their access to information, in that subjects earn a
higher profit when they have the opportunity to observe their peers than when they lack that opportunity. This superior profit performance is, in large part, a result of subjects’ tendency to adopt
an innovation more quickly than they would otherwise; it therefore seems to be driven less by the
diffusion of valuable information than by the competitive pressure mentioned above.
Implications
The finding that on average subjects make a decision about adopting a new technology earlier when
they have access to information about the choices made by their peers suggests that people pay attention to others—even if others’ choices have no direct implications for their own payoff—and
that, relatively speaking, they pay more attention to the fact that others do something than to what
in particular they do. This discovery has a range of implications both for firms that are attempting
to maximize profits and for those that are attempting to predict what choices firms will make. A
natural extension to this work, currently underway, is to include the additional possibility of direct
network payoff externalities.
w-09-12

Housing and Debt Over the Life Cycle and
Over the Business Cycle
by Matteo Iacoviello and Marina Pavan

complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0912.htm
e-mail: iacoviello@bc.edu, marina.pavan@ucd.ie

Motivation for the Research
While housing investment is an important and volatile component of GDP and movements in the
housing market are central to understanding aggregate fluctuations, modern business cycle models
often treat housing as just another form of capital, thus ignoring what makes the housing market distinct. When housing
is included in business cycle models, no allowances are made for the distinction between owning and
renting, for income and wealth heterogeneity, borrowing constraints, transactions costs, or life cycle
considerations. The authors seek to address this imbalance by studying the life cycle and business cycle
properties of household investment and household debt in a quantitative general equilibrium model.
Research Approach
The starting point is a standard life cycle model in which households face idiosyncratic income
and mortality risk. The authors modify it to include aggregate uncertainty (by making aggregate
productivity time-varying) and an explicit treatment of the housing sector. The model accounts for
characteristics that make housing different from other goods: the choice of renting versus owning,
the role that housing can play as collateral for loans, and the fact that, at the individual level, changes
in housing investment occur infrequently but in large amounts. Individual households differ in their
age profile and labor productivity. They can also belong to either of two groups, termed “patient”
and “impatient,” a modification that allows one, at the cross-sectional level, to mirror the skewed
U.S. wealth distribution and to replicate the life cycle profiles of housing and nonhousing wealth.
Patient households prefer to save more relative to current consumption, while impatient households
prefer consumption over saving. Recent literature suggests that heterogeneity in such preferences
can account for the fact that households with similar income levels amass very different amounts of
wealth over their life cycle.
At every stage of the life cycle, the model describes an individual household’s behavior as choosing
its preferred consumption, saving, labor supply, and housing investment by taking into account its
income, both current and expected, its liquid assets, and its housing position at the start of each

38 research review

Household Debt, Housing Investment, and GDP
(Hodrick-Prescott Filtered Variables)
Percent deviation from trend

10

0

-10

Real GDP
Residential Investment

-20
1955

1960

1965

1970

1975

1980

1985

1990

1995

2000

2005

Percent deviation from trend

6
Real GDP
Real Mortgage Debt

4
2
0
-2
-4
-6

1955

1960

1965

1970

1975

1980

1985

1990

1995

2000

2005

Source: Bureau of Economic Analysis, National Income and Product Accounts; Federal Reserve Board;
Flow of Funds of the United States; and authors’ calculations.

period, defined as one year. Households begin each period either as renters or homeowners; if renters
have sufficient liquid assets, they become homeowners. Every period existing homeowners face four
choices: continue in their current house, increase their house size, decrease their house size, or switch
to renting. The option an individual household chooses depends on a combination of the housing
and liquid assets it owns at the start of the period, as well as on its age and income. In the model,
households that are young, old, or poor hold few assets and are renters, while households that are
middle-aged and/or asset rich are homeowners.
The authors run a baseline calibration of their model from 1952 to 1982, a period when the U.S.
economy was characterized by relatively high aggregate volatility but low individual income volatility. During this era, downpayment requirements were high, and household mortgage debt was
strongly procyclical. Terming these years the early period, the authors then use their model to explore the business cycle implications of structural changes tied to increasing income volatility and
lower downpayment requirements that began in the early 1980s. During this later period from the
mid-1980s to the present, dubbed by many observers the Great Moderation, these structural changes
might affect the sensitivity of macroeconomic aggregates to economic shocks—so higher income
volatility and lower required downpayments would be potential candidates for explaining the role
played by debt and the housing market in the post-1980s U.S. economy. The volatility of housing
investment has fallen more than proportionately relative to GDP, while the correlation between
mortgage debt and macroeconomic activity has dropped substantially, from about 0.8 to 0.3. The
authors regard risk and the availability of mortgage financing as two key determinants of housing
demand and housing tenure: higher risk should make individuals more averse to purchasing large-ticket items that are costly to sell in bad economic times, while the greater availability of financing
should encourage housing demand, since a smaller amount of savings is needed to buy a house.
In sum, by using as inputs exogenous aggregate and idiosyncratic uncertainty, the model delivers the
endogenously derived dynamics of housing and nonhousing investment over the household’s life
cycle and the business cycle to address the question: what are the implications of lower downpayment requirements and higher income volatility for macroeconomic performance?
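The business-cycle moments referred to above (relative volatilities and the correlation of mortgage debt with GDP) are, as the figure indicates, computed from Hodrick-Prescott filtered data. The sketch below shows that calculation in minimal form; the synthetic series are placeholders for illustration only and do not reproduce the paper’s data or results.

import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

def pct_deviation_from_trend(series, lamb=1600):
    # HP-filter the log of a quarterly series; return percent deviations from trend.
    cycle, _trend = hpfilter(np.log(series), lamb=lamb)
    return 100 * cycle

# Synthetic stand-ins for real GDP, residential investment, and real mortgage debt.
rng = np.random.default_rng(0)
quarters = pd.period_range("1955Q1", "2005Q4", freq="Q")
data = pd.DataFrame(
    {name: 100 * np.exp(np.cumsum(rng.normal(0.008, vol, len(quarters))))
     for name, vol in [("gdp", 0.01), ("res_investment", 0.04), ("mortgage_debt", 0.02)]},
    index=quarters,
)

deviations = data.apply(pct_deviation_from_trend)
print(deviations.std())                 # cyclical volatility of each series
print(deviations.corr().loc["gdp"])     # correlation of each series with GDP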
Key Findings
• Lower downpayment requirements reduce the volatility of housing investment from about 6.7 to 6.4
percent. Effectively, lower downpayments allow people to more smoothly adjust their housing over
the life cycle, irrespective of business cycle fluctuations. By contrast, high downpayments mean that
more households are unable to save enough for a downpayment or are able to save only enough to
afford the minimum house size. The housing investment for these agents reacts strongly to shocks:
in good times they switch from renting to owning or to owning a larger house, and in bad times they
switch from owning to renting—in other words, housing investment is more volatile when downpayment requirements are higher.
• Lower downpayment requirements lead to an increase in the homeownership rate and a decrease
in the volatility of household investment and, to a lesser extent, of other components of demand.
Lower downpayment requirements allow households with relatively little net worth to own homes.
The model predicts that lower required downpayments substantially increase homeownership rates
for households between the ages of 30 and 65 years, with the homeownership rate rising from 64
to 76 percent.
• The model finds that the combination of larger idiosyncratic risk and lower required downpayments reduces aggregate volatility and housing investment volatility. The model explains 10 to 15
percent of the reduced variation in GDP observed in the data, and about half of the reduction in
the variance of housing investment. Compared with the early period, this effect explains the later
period’s decline in the correlation of household mortgage debt with GDP.
• Compared with renters, indebted homeowners are more likely to work during cyclical downturns
in order to finance mortgage payments, thus offsetting the decrease in output due to negative productivity shocks. Since homeowners are less likely to adjust their housing capital over the business
cycle, this effect mitigates both housing investment volatility and aggregate volatility, and might
help explain some aspects of the Great Moderation.
• Higher income risk leads to higher precautionary savings and to a slight decrease in homeownership rates among impatient agents, going from 64 percent to 62 percent. Higher risk makes
wealth-poor individuals more cautious, and thus they adjust their consumption, working hours,
and housing demand by smaller amounts in response to aggregate shocks. Since housing is a
particularly large purchase, this mechanism is quite pronounced for housing investment. Coupled
with lower downpayment requirements, these forces reduce the procyclicality of household debt
and reduce the sensitivity of housing demand to changes in aggregate conditions.
Implications
To the best of the authors’ knowledge, no previous model with rigorous micro-foundations for
housing demand has succeeded in reproducing housing’s procyclicality and volatility in quantitative
general equilibrium. Including these features should yield a better approach for central bank modeling and policymaking by helping to describe more precisely the effects that housing investment has
on aggregate economic activity.
w-09-13									

Financial Leverage, Corporate Investment,
and Stock Returns
by Ali K. Ozdagli

complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0913.htm
e-mail: ali.ozdagli@bos.frb.org

Motivation for the Research
Firms with a high ratio of book value of equity to market value of equity (value firms) earn higher
expected stock returns than firms with a low book-to-market equity ratio (growth firms). However,
conventional wisdom tells us that growth options should be riskier than assets already in place, so growth firms should command higher expected returns than value firms, which derive their value from
assets in place. Additionally, Fama and French (1992) have shown that portfolios of stocks with
different book-to-market ratios have similar risk profiles, as measured by the standard capital asset
pricing model (CAPM) of Sharpe (1964), Lintner (1965), and Black (1972). This phenomenon,
known as the value premium puzzle, helped the Fama and French model replace the CAPM as the
benchmark in the asset pricing literature.
This paper presents a dynamic model of the firm with risk-free debt contracts, investment irreversibility, and debt restructuring costs in order to analyze the effects of financial leverage on investment
and explain the cross-sectional differences in equity returns. In a parsimonious and tractable way, the
model captures several anomalies documented in the corporate finance and asset pricing literature.
Research Approach
The author develops a theoretical model that extends the investment irreversibility model of Abel
and Eberly (1996) by incorporating investors’ risk preferences, risk-free debt contracts, and debt adjustment costs. He then calibrates the model, drawing on data from the literature and estimating
the remaining parameters using maximum likelihood, based on the long-run stationary distribution
of book-to-market values from the Compustat database. The financing decisions in this model are
similar to those of Fischer, Heinkel, and Zechner (1989) and Gomes and Schmid (2009), who add
debt restructuring costs to the standard tradeoff theory of capital structure, in which a firm chooses its
financing policy by balancing the costs of bankruptcy against the benefits of incurring debt, such as tax
shields due to interest payments. The model developed in this paper assumes that firms benefit from
the tax shield of debt, as in the tradeoff theory, and that they face additional costs at the time of debt
restructuring. However, in this paper debt has two properties distinct from its properties in previous
papers: it is free from risk and endogenously limited by the lenders to a certain fraction of capital.

Key Findings
• An important property of the model is that book leverage—the fraction of total capital supplied
by lenders—is state-independent. Book leverage is determined in a manner that ensures that the
firm’s value is nonnegative even in the worst-case scenario, in order to avoid bankruptcy. This
worst-case scenario is independent of the state variables and hence a revision of the debt agreement at a later date would lead to the same amount of leverage. Thus, it is not optimal for a firm
to change its book leverage once it is set, and book leverage remains the same across firms with
different book-to-market equity ratios, whereas market leverage differs significantly. Moreover,
because the debt level is constant when the firm does not invest, the firm’s market debt-to-equity
ratio varies closely with fluctuations in its own stock price. This implication of the model is in line
with the results of Welch (2004), who finds that U.S. corporations do little to counteract the influence of stock price changes on their capital structures.
• Investment irreversibility alone causes a growth premium rather than a value premium. The firm’s
investment opportunity is a call option, because the firm has the right, but not the obligation, to buy
a unit of capital at a predetermined price. As is known from the financial options literature, when the price of the underlying security rises and falls, the price of the call option rises and falls at a greater rate (see the numerical sketch following this list of findings). This suggests that the value of a growth option, meaning the call option to invest, should be
more responsive to economic shocks than the assets in place. Therefore, growth options increase the
firm’s level of risk. Similarly, the disinvestment opportunity is a put option—because the firm has the
right, but not the obligation, to sell a unit of capital at a predetermined price. The value of this put
option is negatively related to the value of the underlying asset because the gain from exercising the
option is higher for less productive firms. Therefore, the disinvestment option provides value firms
that have low productivity with insurance against downside risk and hence reduces their risk. This
proposition is contrary to the conventional wisdom of recent literature—for example, Zhang (2005)
and Cooper (2006)—which presents investment irreversibility as the source of the value premium.
• In the author’s model, financial leverage affects stock returns directly—through its effect on equity risk à la Modigliani and Miller (1958), and indirectly, through its effect on business risk, by
influencing investment decisions. These two channels have opposing effects on the relationship
between book-to-market ratios and stock returns. However, the Modigliani-Miller effect strongly
dominates the investment channel and explains the major share of the value premium.
• Financial leverage also affects investment—and hence the business risk—because it influences the
effective degree of investment irreversibility faced by the firm’s owners. When investment can be
financed with leverage, the effective price of capital is reduced by the tax savings associated with
debt financing at the time the investment was made. On the other hand, at the time of disinvestment, the firm has to repay its debt, in line with the debt agreement, and therefore has to give
up the tax savings associated with the debt financing of that particular investment. Because the
purchase price is greater than the resale price and both should be adjusted by the same value of tax
savings, their ratio increases as a result of debt financing. In turn, this result increases the effective
irreversibility perceived by the firm’s owners. Since irreversibility reduces the value premium, the investment channel of leverage works against the value premium.
• Although financial leverage explains the major share of the value premium and investment irreversibility alone generates a growth premium instead, irreversibility still contributes importantly to improving the model’s fit with the data by generating a wide range of book-to-market values.
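The option-elasticity logic behind the second finding can be illustrated with a standard textbook calculation. The sketch below uses the Black-Scholes formula, which is not the model in the paper, simply to show that a call option’s value moves proportionally more than the underlying asset; the parameter values are arbitrary.

import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    # Black-Scholes value of a European call (textbook formula, not the paper's model).
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S, K, r, sigma, T = 100.0, 100.0, 0.03, 0.25, 1.0   # arbitrary illustrative parameters
base = bs_call(S, K, r, sigma, T)
bumped = bs_call(1.01 * S, K, r, sigma, T)
elasticity = (bumped / base - 1) / 0.01
print(f"A 1 percent rise in the asset raises the call value by about {elasticity:.1f} percent.")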


Implications
This paper makes a number of contributions to the growing literature that tries to link corporate
decisions to asset returns. First, the model’s closed-form solution identifies explicitly how investment
irreversibility, financial leverage, and their interaction affect the cross-section of stock returns. Second,
the debt capacity of the firm is endogenously determined. Third, because of the interaction of financial
leverage and irreversibility, the paper does not need to rely on a high degree of irreversibility in order
to generate a sizable variation in stock returns. Fourth, the paper calibrates the model using maximum
likelihood to capture the distribution of book-to-market values instead of plugging in parameter values
in an ad hoc manner, and the calibrated model captures the distribution of market leverage reasonably
well. Finally, the paper shows that financial leverage can explain the value premium.
Introducing debt into production-based asset pricing raises interesting possibilities for further research. For example, the model presented here could be extended with time-varying interest rates in
a framework similar to Merton’s (1973) intertemporal capital asset pricing model (ICAPM). This
extension would serve two purposes. First, it would decrease the explanatory power of the conditional market beta for stock returns and get us one step closer to solving the value premium puzzle.
Second, because firms with a high book-to-market ratio also have higher leverage, they would have
greater exposure to interest rate shocks, further reinforcing the value premium.
w-09-14									

Inflation Persistence
by Jeffrey C. Fuhrer

complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0914.htm
e-mail: jeff.fuhrer@bos.frb.org

Motivation for the Research
This paper examines the concept of inflation persistence in macroeconomic theory. For many
decades, economists assumed that inflation is an inertial or persistent economic variable, meaning
that the rate of change of the price level tends to remain constant in the absence of an economic
force to move it from its current level. The concept of the sacrifice ratio—the number of point-years
of elevated unemployment required to reduce inflation by a percentage point—implies that inflation
does not move freely but requires significant economic effort in the form of elevated unemployment
or lost output to reduce its level.
The early incarnations of the accelerationist Phillips curve modeled the apparent inertia in inflation by including lags of inflation. In this early literature, the theoretical justification for including
lags of inflation was to serve as a proxy for expected inflation and for price-setting frictions, such as
contracting. As an empirical matter, the lags helped the model fit the data.
The introduction of Muth’s (1961) theory of rational expectations into the macroeconomics literature and the consequent move toward explicit modeling of expectations posed considerable challenges in modeling prices and inflation. In the earliest rational expectations models of Lucas (1972)
and Sargent and Wallace (1975), the price level was a purely forward-looking or expectations-based
variable like an asset price, which in these models implied that prices were flexible and could “jump”
in response to shocks. It was difficult at first to reconcile the very smooth, continuous behavior of
measured aggregate price indexes such as the consumer price index with the flexible-price implications of these early rational expectations models.
A number of economists recognized the tension between the obvious persistence in the price-level
data and the lack of persistence implied by these early rational expectations models. Fischer (1977),
Gray (1977), Taylor (1980), Calvo (1983), and Rotemberg (1982, 1983) developed a sequence of
models that rely on nominal price contracting in attempts to impart a data-consistent degree of
inertia to the price level in a rational expectations setting. The overlapping contracts of Taylor and
Calvo/Rotemberg were successful in doing so, allowing contracts negotiated in period t to be affected by contracts set in neighboring periods, which would remain in effect during the terms of
the current contract. The subsequent trajectory of macroeconomic research drew heavily on these
seminal contributors, who had neatly reconciled rational expectations with inertial (or persistent)
macroeconomic time series.
However, in the early 1990s, a number of authors discovered that these rational expectations formulations yielded less satisfying implications for the change in the price level, that is, the rate of
inflation. Ball (1994) demonstrated that such models could imply a counterfactual “disinflationary
boom”—the central bank could engineer a disinflation that would cause output to rise rather than
contract. Fuhrer and Moore (1992, 1995) showed that Taylor-type contracting models implied a
degree of inflation persistence that was far lower than was apparent in inflation data of the postwar
period to that point.
While much of economists’ intuition about inflation persistence is obtained from responses to identified monetary policy shocks, considerable interest also centers on the behavior of inflation in response
to a central bank-engineered disinflation. The work of Ball (1994), Fuhrer and Moore (1992), and others emphasizes this aspect of inflation dynamics. In response to such a shock, the behavior of inflation in purely forward-looking models differs strikingly from its behavior in hybrid models. In purely forward-looking models, regardless of how persistent output is, the inflation rate jumps
to its new equilibrium in the period after the policy announcement, with no disruption of output. In
marked contrast, when lagged inflation is added to the inflation equation, inflation declines gradually
to its new long-run equilibrium, with a concomitant decline in output during the transition.
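A stylized numerical illustration of this contrast (not drawn from the paper) is sketched below: after a fully credible reduction of the inflation target to zero, and holding the driving process flat, a hybrid specification implies a geometric decline governed by the stable root of the lag polynomial, while a purely forward-looking specification implies an immediate jump. The parameter values are arbitrary.

import numpy as np

# Hybrid NKPC: pi_t = gb*pi_{t-1} + gf*E_t[pi_{t+1}] + k*mc_t, with mc_t held at zero
# after a credible disinflation to a zero target; parameters are illustrative only.
gb, gf = 0.40, 0.55
lam_stable = np.roots([gf, -1.0, gb]).min()     # stable root of gf*x^2 - x + gb = 0

T, pi0 = 12, 4.0                                # start from 4 percent inflation
hybrid = pi0 * lam_stable ** np.arange(T + 1)   # gradual, geometric disinflation
forward_looking = np.r_[pi0, np.zeros(T)]       # jumps to the new target in the next period

for t in range(T + 1):
    print(f"t={t:2d}  hybrid={hybrid[t]:5.2f}  purely forward-looking={forward_looking[t]:5.2f}")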
Many view the dynamics of the purely forward-looking specification as strikingly counterfactual.
Counterfactual or not, one needs to understand the dynamics of inflation to pursue appropriate
monetary policy. Knowledge of the reduced-form behavior of inflation is not sufficient. The central
bank needs to understand the sources of inflation dynamics: it needs to know whether inflation persistence arises from the persistence of output (which may in turn reflect the behavior of monetary policymakers) or from persistence intrinsic to the price-setting process. A third source of persistence is the behavior of the central bank itself: either through the vigor (or lack thereof) of its systematic response to deviations of inflation from its current target or through the low-frequency movement in its inflation target, the central bank can exert significant influence on the persistence of inflation. Thus, the issue
of persistence is of more than passing interest to macroeconomists and policymakers.
Research Approach
The author analyzes and explains the existing literature and current knowledge in the area of inflation persistence, weaving into the analysis new econometric results and pointing the way to
advancing the state of knowledge in this area. He begins by emphasizing the difference between
reduced-form and structural persistence and goes on to examine a number of empirical measures of
reduced-form persistence, considering the possibility that persistence may have changed over time.
Next, he examines the theoretical sources of inflation persistence, distinguishing intrinsic persistence (persistence arising from inherent price dynamics) from inherited persistence (persistence that inflation inherits from its response to changes in real activity and supply shocks), and deriving a number of analytical
results on persistence with emphasis on the influence of the monetary policy regime. He summarizes the implications for persistence from the literatures on imperfect information models, learning
models, and so-called trend inflation models, providing some new results throughout his analysis.
Finally, he summarizes the results on persistence from the many studies of disaggregated price data.
Key Findings
• It may be too early to draw firm conclusions about the structural sources of inflation persistence or
about the extent to which these sources have changed and manifested themselves in changes in
reduced-form inflation persistence. In the first case, it may be premature because there is not yet
widespread agreement about the appropriate mapping between micro data or reduced-form aggregate data and economists’ structural models. In the second case, the sample period from which
to draw inferences about potential changes is fairly short.
• To the extent that reduced-form persistence has changed, policymakers need to gain clarity about
the sources of the change. There are a number of structural channels through which
persistence may have changed. There may have been a change in the intrinsic persistence of inflation—the importance of lagged inflation in the structural Phillips curves. Alternatively, the amount
of inherited persistence may have changed. In principle, this could arise because the persistence
of the driving process has changed, or because the coefficient on the driving process has changed,
or because the relative variances of the shocks to inflation and the driving process have changed.
• It is unlikely that any change in inflation persistence has arisen from a change in the persistence
of the driving process, as this has remained remarkably stable throughout the period. In addition,
a dynamic stochastic general equilibrium model-based analysis suggests that while changes in the
systematic component of monetary policy likely have led to inflation that is less persistent, the
largest changes in persistence are most likely due to changes in the so-called intrinsic sources of
inflation persistence—whether these arise from indexation, rule of thumb price-setters, or a rising
price reset hazard.
• The models that depart from the standard Calvo framework suggest that other aspects of the
economy that impinge upon inflation persistence may be responsible for changes in its persistence.
These aspects may include smaller or less frequent changes in trend inflation or a smaller role for
learning, as central bank transparency regarding its policy goals has increased.
Implications
An impressive and growing body of evidence now exists on price- (and wage-)setting behavior at the
disaggregated level. This evidence strongly suggests that some of the inferences drawn from micro
data about the frequency of price changes, as well as the degree of inflation persistence, may pertain
largely to price responses to industry- or firm-specific shocks. The response to aggregate shocks by the
aggregate component common to the individual price series may well have quite different properties
from the responses of individual firms to idiosyncratic shocks. Integrating this evidence into our structural models, perhaps along the lines of rational inattention models (see Sims (2003), Gorodnichenko
(2008), and Maćkowiak and Wiederholt (2009)), seems a promising avenue for research.
Finally, economists are currently accumulating additional evidence that should allow them to draw a firmer conclusion on whether reduced-form persistence has changed and to discern the structural
sources of any such changes. The upheaval created by the 2007–2009 financial crisis and recession,
with the concomitant prospect of a prolonged period of elevated unemployment and depressed marginal cost, suggests that over the next decade sufficient evidence will have been gathered to enable
economists to test more fully the hypothesis that reduced-form inflation persistence has declined
and to test competing theories that identify the structural sources of persistence.
w-09-15									

Closed-Form Estimates of the New Keynesian Phillips
Curve with Time-Varying Trend Inflation
by Michelle L. Barnes, Fabià Gumbau-Brisa, Denny Lie, and Giovanni P. Olivei

complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0915.htm
e-mail: michelle.barnes@bos.frb.org, fabia.gumbau-brisa@bos.frb.org, dlie@bu.edu, giovanni.olivei@bos.frb.org

Motivation for the Research
This paper illustrates the importance of imposing model discipline on inflation expectations when
estimating a New Keynesian Phillips curve (NKPC). The standard difference equation (DE) form of
the NKPC states that current inflation is a function of past inflation, expected future inflation, and real
marginal costs. The alternative closed-form (CF) specification solves the DE form to express inflation
as a function of past inflation and a present discounted value of current and expected future marginal
costs. In essence, the CF solution explicitly states that if the NKPC were a good model for inflation,
then inflation expectations should be formed in a manner consistent with this model. Therefore, CF
estimates are particularly well suited to assess the validity of forward-looking relationships like the
NKPC as (part of) a macroeconomic model.
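To fix ideas, a stylized constant-trend version of the hybrid NKPC (abstracting from the time-varying trend inflation that is central to this paper) can be written in DE form as

\pi_t = \gamma_b \pi_{t-1} + \gamma_f \mathbb{E}_t \pi_{t+1} + \kappa \, mc_t ,

where mc_t denotes real marginal costs. Solving this equation forward, with \lambda_1 < 1 < \lambda_2 the roots of \gamma_f \lambda^2 - \lambda + \gamma_b = 0, yields the CF specification

\pi_t = \lambda_1 \pi_{t-1} + \frac{\kappa}{\gamma_f \lambda_2} \sum_{j=0}^{\infty} \lambda_2^{-j} \, \mathbb{E}_t \, mc_{t+j} .

In the CF form, expected future inflation has been replaced by a model-consistent discounted sum of expected future marginal costs, which is precisely the discipline on expectations that CF estimation imposes.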
There is now a large literature on estimating NKPC models. The forward-looking component in
the NKPC is usually derived from a micro-founded problem in which firms cannot reset prices
optimally in every period. Firms then take into account not only current market conditions, but
also expected future conditions when setting prices optimally. This mechanism alone provides no
role for lagged inflation in the NKPC. But in actual data the high degree of inflation persistence
often means that purely forward-looking versions of the NKPC fit the data worse than “hybrid”
versions where current inflation depends both on inflation expectations and on past inflation. The
role of past inflation in the NKPC is frequently introduced through some ad hoc pricing mechanism
(for example, indexation or “rule of thumb” price-setting). Nonetheless, this modeling approach is
unsatisfactory as the mechanism lacks micro-foundations; from a theoretical point of view, a purely
forward-looking NKPC would be much more convenient. Cogley and Sbordone (2008) explore the
possibility that the persistence in the inflation process is due to a time-varying inflation trend rather
than to some ad hoc element in firms’ price-setting decisions. Their empirical findings favor a purely
forward-looking Phillips curve where inflation persistence is entirely due to time variation in that
persistent trend. Supporting this explanation for inflation persistence, there is considerable evidence
that the Federal Reserve’s inflation target has slowly changed over time (Ireland 2007).
As long as the inflation target does not move, a purely forward-looking NKPC implies that inflation
is just as persistent as its driving process, which is typically a measure of real activity such as real
marginal costs. Instead, when the NKPC is not purely forward-looking, the adjustment of inflation
to movements in the driving process is slower because inflation also depends on its own past path.
The two alternative models differ in their implied tradeoffs between inflation and real activity, an
element of central importance to the optimal conduct of monetary policy.
Research Approach
The paper examines the differences that arise from estimating a New Keynesian Phillips curve
(NKPC) when the relationship is expressed as a difference equation (DE) or in its closed-form (CF)
specification. The initial Monte Carlo analysis ranks DE and CF estimates of the NKPC in terms of
their small sample bias and dispersion, and in terms of their sensitivity to a particular form of misspecification that the authors consider plausible for this specific model. Next, the empirical exercise
uses quarterly U.S. data from 1960:Q1 to 2003:Q4 to contrast the DE and CF estimates with and
without controls for the misspecification analyzed in the Monte Carlo exercise.
The gain in efficiency from using the CF estimation shown in the Monte Carlo exercise is likely
to apply to other relationships that express a variable as a function of its driving process, next-period expectations of the variable, and (possibly) its past value. The authors also present a general
method to estimate this kind of relationship that avoids the problem of computing the CF. The
method allows the econometrician to impose model-consistent expectations for a finite period of
time instead of ad infinitum as in the closed form. The authors show that in the context of the
NKPC, imposing model-consistent expectations for just a few periods forward yields efficiency
gains that quickly approximate the gains in efficiency from using the closed form. Both Monte
Carlo methods and actual U.S. data are used to illustrate this point.
Key Findings
• In the Monte Carlo exercise, the CF estimates are much more precise, are less affected by small
sample bias, and are more robust to a particular misspecification that alters the ad hoc part of the
model in a plausible way.
• Using actual data, deep parameter estimates of the NKPC obtained from the DE and CF specifications differ substantially. Some of the DE estimates imply that, once time-varying trend inflation is taken into account, the NKPC is purely forward-looking. Nonetheless, the corresponding
CF estimates always find a more important role for lagged inflation. Indeed, according to the CF
estimates, both lagged and expected future inflation enter the NKPC with rather similar weights.
• The CF estimates of the NKPC suggest that U.S. inflation has an important persistent component
that is not fully explained by time variation in the inflation trend, or by persistence in the driving process of inflation. Focusing the analysis on the post-1984 subsample leaves the main results
unaltered.
• Another important dimension in which the DE and CF estimates differ is the frequency with
which prices are readjusted optimally. In the DE specification, this frequency is estimated to be 3.9
months, while in the CF specification it is close to one year.
• The estimation method that imposes some model discipline on expectations yields estimates
that are very similar to the CF estimates. The authors show that having four quarters of model-consistent expectations already closes most of the gap between DE and CF estimates.
Implications
The Monte Carlo exercise illustrates that the CF estimates are more precise and less subject to small
sample bias than the DE counterparts. Additionally, the CF estimates are less affected by a plausible
form of misspecification in the ad hoc part of the NKPC. In order to place the DE and CF estimates
on a more comparable footing, part of the empirical application estimated on U.S. data controls for
this misspecification. The DE estimates obtained in that particular exercise already differ markedly from the DE estimates reported in Cogley and Sbordone (2008), and they imply that the NKPC is not purely forward-looking. The CF estimates always place a larger weight on past inflation, quite
close to the weight on expected inflation. The high autocorrelation of deviations of inflation from
its (time-varying) trend appears to square better with the reported CF estimates than with a purely
forward-looking specification of the NKPC.
The paper contributes to previous literature (Fuhrer, Moore, and Schuh 1995; Fuhrer and Olivei
2005) that compares DE and CF estimates, albeit in different settings and using different estimation
methods. Moreover, it provides a formal explanation for an important source of differences between
the DE and CF estimates of a forward-looking Euler equation, and illustrates how to improve on
the DE estimates by placing some model-consistent constraints on expectations without resorting
to the closed-form model solution. This is particularly convenient when the CF specification is
difficult to compute and in instrumental variables estimation settings where the CF involves infinite
sums of present discounted values—which, at best, can only be approximated. In this regard, the
paper links the estimation problem of forward-looking Euler equations (such as the NKPC) to the
minimum-distance and GMM (generalized method of moments) estimation literature on the efficiency gains that can result from imposing additional estimation restrictions.
w-09-16									

Estimating Demand in Search Markets:
The Case of Online Hotel Bookings
by Sergei Koulayev

complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0916.htm
e-mail: sergei.koulayev@gmail.com

Motivation for the Research
In markets with multiple sellers and frequently changing prices, consumers often have to engage in
costly search in order to collect information necessary for making a purchase. A rational consumer in
such a situation would make a sequence of search efforts, stopping when the expected benefit from
another attempt falls short of the search cost. When the search is over, the consumer makes a purchase from the set of goods discovered during the process, representing the choice set. Generated in
this way, choice sets have two distinct properties. First, since searching is costly, choice sets are usually small compared with the full set of available products: according to comScore data, only a third
of all consumers visit more than one store while shopping online. Second, choice sets are endogenous to (depend on) preferences. This is because the decision to stop searching is dictated in part
by the expected benefit of any additional search, which is itself a function of a searcher’s preferences.
These properties complicate inference about consumer demand for differentiated goods in search
markets. The standard approach is to recover preferences from the joint variation of market shares
of goods and their attributes, including price. Implicitly, this method assumes that consumers possess full information about all goods available on the market. Therefore, the variation of choice sets
across consumers comes from the availability of goods across markets, which is arguably exogenous
to (independent of) preferences. In search markets, where the variation of choice sets comes through
individual search efforts, these assumptions do not hold and the application of this method leads to
biased estimates of demand. The purpose of this paper is twofold: First, to propose an alternative
estimation method that corrects for this bias. Second, using this method, to evaluate the overall
magnitude of the bias due to search and to assess the individual contributions of its two sources—the limited nature of choice sets and their endogeneity to preferences. The emphasis on separating the two sources of
bias is motivated by the fact that their correction requires rather different approaches, both in nature
and in the cost of implementation.
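As a point of reference for the search logic described above, the sketch below implements a textbook McCall-style stopping rule for draws from a known distribution: the consumer keeps searching while the expected gain from one more draw exceeds the search cost. It is far simpler than the structural model estimated in the paper, and the normal-utility assumption and cost values are hypothetical.

import numpy as np
from scipy.stats import norm

def expected_gain(best_so_far, mu=0.0, sigma=1.0):
    # E[max(X - best, 0)] for X ~ N(mu, sigma^2): the benefit of sampling one more option.
    z = (best_so_far - mu) / sigma
    return sigma * (norm.pdf(z) - z * (1 - norm.cdf(z)))

def search(cost, rng, mu=0.0, sigma=1.0, max_draws=100):
    utilities = [mu + sigma * rng.standard_normal()]      # the first option is observed for free
    while expected_gain(max(utilities), mu, sigma) > cost and len(utilities) < max_draws:
        utilities.append(mu + sigma * rng.standard_normal())
    return max(utilities), len(utilities)                 # chosen utility and choice-set size

rng = np.random.default_rng(1)
for cost in (0.05, 0.25, 1.0):
    sizes = [search(cost, rng)[1] for _ in range(2000)]
    print(f"search cost {cost:4.2f}: average choice-set size {np.mean(sizes):4.1f}")

Consistent with the first property of choice sets noted above, higher search costs generate smaller choice sets.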
Research Approach
Correcting for the limited nature of choice sets can be achieved either by using information on
actual choice sets (as done in this paper) or by employing simulation methods developed in the literature. To correct for the endogeneity bias, the author takes the approach of estimating preferences
within a model that includes as outcome variables both observed search decisions and purchases.
Indeed, search decisions are precisely the channel through which preferences affect the distribution
of choice sets, leading to the endogeneity problem. However, explaining search decisions in the
context of differentiated goods poses an identification problem. A person may stop searching
either because she has a high idiosyncratic valuation for goods already found (her status quo) or
because she has a high search cost. Therefore, an observed measure of search intensity (such as the
distribution of search duration) can be explained either by variability in utilities across goods or by
moments of the search cost distribution. To separate the effects of search costs and preferences on
search decisions, one may use exogenous shifters of search costs. Alternatively, as in this paper, one
can use conditional search decisions: a search action together with the observable part of the search
history preceding the action. Using this approach, the author obtains a source of exogenous variation
in the status quo across consumers, which allows separation of the effects of search costs from the
effects of preferences on search decisions.
The author implements these ideas by estimating a structural model of sequential search, using a
unique dataset of search histories by consumers who were searching on a popular website for hotels
in Chicago. Although this website offers a variety of search tools, the author focuses on a subset of
consumers who employed a simple yet common strategy: start the search by sorting hotels by increasing price and then flip through the pages of search results. The advantage of this dataset is that
it offers detailed information on search histories: search actions, observed hotels, and clicks. The author compares price elasticities from the search model with those from a static discrete choice model
with full information. To correct for the limited choice sets, the author next drops the assumption of
full information and re-estimates the static model using data on actual choice sets.
Key Findings
• There is significant heterogeneity of search costs among the population. While the model does a
good job of predicting average search intensity, it performs rather poorly at detecting heterogeneous incentives to search.
• Both properties of choice sets generated by a search process—their limited nature and endogeneity to preferences—have a significant impact on estimates of the price elasticity of demand, an
important input in many applications, including pricing decisions, welfare analysis of mergers, and
benefits from the introduction of new products. Both factors lead to biased estimates in a static
demand framework that takes choice sets as given.
• Within a linear utility framework, the mean utility function and the search cost distribution of a
representative consumer are nonparametrically identified.
• The nested logit model with full information overestimates the price elasticities by as much as a
factor of five compared with the results from the search model. One explanation is that the choice
sets of these searchers include mostly cheaper-brand hotels that are located farther from the city
center. As a result, consumers choose lower-quality hotels not only because they are price sensitive
(as the full information model predicts), but also because the higher-quality ones are often not
observed. Although intuitive, this argument appeals only to the limited nature of choice sets, while
both properties of choice sets are responsible for the bias.
• After correcting for the limited choice sets by dropping the assumption of full information, the
logit model still overestimates the price elasticity by a factor of four. This is a consequence of the
endogeneity of choice sets. For example, if we see someone willing to incur a cost in order to find
more expensive but potentially better-quality hotels, we should conclude that the consumer in
question is less price-sensitive than the static model would predict. A static demand model ignores
this piece of information and therefore draws biased conclusions.
• The results indicate that accounting for actual choice sets but ignoring their endogeneity leads to overestimation of price elasticity by 17 to 400 percent across specifications. However, contrary to the above
case, the direction of the bias is specific to the dataset being used and cannot be determined a priori.
Implications
The biases found in this study are of significant magnitude from the perspective of decisionmaking by a firm. If for simplicity we assume that every hotel is a monopolist, the inverse elasticity
rule (which states that the optimal markup of a monopolist is inversely related to the elasticity of demand) implies that overestimating the elasticity by 50 percent leads to a (1 – 1/1.5) × 100 ≈ 33 percent loss of markup, because the price charged is suboptimal.
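To spell out the arithmetic (a textbook monopoly pricing calculation, taking the hypothetical 50 percent overestimate as given), the inverse elasticity rule sets the markup at

\frac{p - c}{p} = \frac{1}{\varepsilon} ,

so a firm that uses an elasticity estimate of 1.5\,\varepsilon instead of \varepsilon sets a markup of 1/(1.5\,\varepsilon), only two-thirds of the optimal markup, for a loss of (1 - 1/1.5) \times 100 \approx 33 percent.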
While the model does a good job of predicting average search intensity, it performs rather poorly
at detecting heterogeneous incentives. This fact points to some limitations of the model that suggest
directions for future research. In particular, it would be desirable to relax the assumptions of common prior and search cost distributions by introducing consumer heterogeneity. Also, the model
estimates are obtained for a rather select group of the population, that is, consumers who search by
price sorting. To generalize these results, it is important to increase the scope of search strategies by
adding more pages and other sorting and filtering tools.
This paper takes another step toward more realistic modeling of the search process, both in terms of
the specifics of the actual search environment and in terms of the complexity of goods searched for.
Clearly, greater realism comes at an increased cost of implementation and computation, which can
limit the scope of search behavior that can be modeled in a fully structural way. Nevertheless, the author believes this is a fruitful direction for research and offers two main reasons for this contention.
First, one can look more closely at the implications of search frictions for demand for heterogeneous
goods. Second, a comprehensive search model allows one to evaluate different ways of organizing
the display, an important problem in online markets such as those for hotel accommodations or
airline tickets.
w-09-17									

Multiple Selves in Intertemporal Choice
by Julian Jamison and Jon Wegener

complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0917.htm
e-mail: julian.jamison@bos.frb.org, jonw@drcmr.dk

Motivation for the Research
The notion of self has a long tradition in both philosophy and psychology, dating back to at least
Hume (1739). In economics, the focus on self has been primarily implicit, yet prominent in the assumption that an individual maximizes his or her utility function and in performing welfare analyses
by aggregating and comparing across individuals. The concept also has a long legal history, with
questions of autonomy rising to the forefront. Of course, “self” has essentially no meaning except in
distinction to some other individual or group, and so the relevant question becomes where to draw
the line between the self and the other entity.
Recent neuroscientific studies have found evidence that systems involved with the general process of
imaginatively putting oneself into the shoes of another (that is, the ability to distinguish between the self
and the other, or, stated differently, to perceive one’s own mental state and attribute analogous but distinct
mental states to others—known in the Theory of Mind (ToM) literature as mentalizing) are similar to
those involved in prospection (imagining oneself in the future). This raises the question as to whether
this neuroscientific evidence can shed light on the process of intertemporal decisionmaking (decisionmaking over multiple points in time) as conceptualized implicitly or explicitly in economic theory.
Research Approach
The authors draw connections between recent findings in neuroeconomic research and traditional
economic thinking about intertemporal choice. They describe the neuroscientific evidence that leads
them to propose a novel view of how individuals see their future selves. The authors then suggest
additional studies—behavioral, clinical, and neuroimaging—to confirm their conclusions. Finally,
they discuss the policy implications of their conceptual framework.
Key Points
• The new discipline of neuroeconomics has been defined as a set of experimental, empirical, and
theoretical analyses of the decisionmaking process that take into account the physical (and especially
the neurological) embodiment of the decisionmaker. Neuroeconomics combines neuroscience, economics, and psychology, but also touches on the concerns of philosophy, medicine, and public policy.
• Humans seem to use the same brain systems to think about themselves in the future as they do to
think about other conscious agents. By “think about,” the authors refer to empathy (not just affinity), the mentalization of intentionality, and the prediction of behavior.
• The authors propose that individuals consider future versions of themselves to be truly separate
persons from their present selves in terms of actual brain systems and that the decisionmaking
process involving a tradeoff between one’s current and future selves is substantially the same as the
decisionmaking process involving a tradeoff between oneself and other individuals. The authors’
approach differs from previous studies that draw a parallel between mentalizing and prospection
in that the authors argue that intertemporal choice and decisions concerning time preferences are
more analogous to mentalizing than is prospection, since intertemporal choice involves an implicit
prediction of future actions.
• Since similar outcomes from experimental studies could easily arise from entirely separate brain
processes, it is difficult to determine using only observed behavioral data whether a similar mechanism is being used for decisions relating to others and to one’s future selves. On the other hand,
since it is known that some subjects are better at mentalizing than others, it would be possible
to compare this trait with a related version regarding future selves. In particular, one could test
whether individuals who are proficient at predicting the behavior of others are also relatively proficient at predicting their own future actions, controlling for age and other relevant variables. Such a
correlation would be suggestive (although not conclusive) in confirming the validity of the analogy
between mentalizing and intertemporal choice along the dimension of predicting choice.
• It would be interesting to test whether subjects who are known to have theory of mind impairments (for example, subjects with a specific lesion to the temporoparietal junction) demonstrate
impairments in prospection and whether they discount future outcomes more heavily than normal
subjects. Patients with autism also would be expected to discount the future more than normal
subjects, and this prediction too could be tested. Both these hypotheses could be tested with purely
behavioral (choice-based) data and potentially augmented with neuroimaging.
• Merely observing the choices made by those with and without mentalizing impairments would be
insufficient to draw any conclusions about the underlying processes. Neuroimaging via fMRI (functional magnetic resonance imaging) could shed light on the brain processes involved while healthy
subjects were engaged in behavioral economic experiments, allowing direct comparison of brain
activity in various regions during decisionmaking in each case. Experiments could be conducted
to compare brain processes during activities that involve mentalizing about others and mentalizing
about subjects’ future selves. Other behavioral economics research could also benefit from information provided by neuroimaging about the areas of subjects’ brains engaged during experiments. For
example, many types of choices (such as those involving house purchases, severe medical interventions, or the environment) are simply infeasible to study via controlled nonhypothetical laboratory experiments. Given the fact that survey or self-reported responses are viewed as inherently less
trustworthy than observed behavior, there is a clear rationale for augmenting studies of such decisions
with concurrent neurological data in order to determine at least whether the decisionmaking process
is proceeding in a manner known to be valid and consistent in other circumstances. Although finding
overlapping areas of brain activity does not necessarily prove that precisely the same system is at work,
it is highly suggestive that similar cognitive processes are involved.


Implications
Social norms do not allow individuals to do excessive harm to their neighbors, and the empirical
findings discussed in this paper (that in the brain, a person’s future self is viewed as a neighbor)
suggest that perhaps society should likewise protect the welfare of an individual’s future selves.
As with one’s relations with one’s neighbors, this does not imply that the government or a panel
of experts would (or should) tell anybody what choices to make or exactly how to behave. Rather,
the government might make certain negative behaviors harder or more expensive to engage in to
counterbalance the underlying potentially harmful tendencies. For instance, the government might
require a waiting period before allowing individuals to make life-altering choices such as entering
into marriage. Neuroscience can provide a scientific foundation for why we as a society might want
to do this, and it can inform the debate as to when, to what extent, and how we should collectively
engage in trading off present freedom of choice against benefits to future selves.
The authors are not explicitly suggesting such policies; such a recommendation would depend on
both further scientific work and broader social decisions regarding the relative rights of future selves
(not future generations, as is more commonly debated). The purpose of this approach is to make
these sorts of choices more explicit and to provide the scientific input that is necessary but not sufficient for sound policymaking. The fact that future selves have no current voices of their own raises
the question of who gets to speak for them; hence any such policies face unusual constraints and
would need to be weighed especially carefully. Nevertheless, the authors believe that this view of
future selves is fundamentally different from the prevailing one, that it is based on sound data from
multiple sources, and that it has deep implications for policy that should be openly discussed.
If one takes seriously the idea of multiple selves over time, there are also individual responses that
do not require any government intervention. These can range from simply being more attuned to
discrepancies between past and present (leading to better predictions of one’s future actions or future
welfare), to playing a parental-equivalent role with friends and relatives, to voluntarily joining or creating institutions to encourage specific behaviors that take into account the welfare of future selves.
w-09-18									

The Valuation Channel of External Adjustment
by Fabio Ghironi, Jaewoo Lee, and Alessandro Rebucci
complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0918.htm
e-mail: fabio.ghironi@bc.edu, jlee3@imf.org, alessandror@iadb.org

Motivation for the Research
The experience of the United States over the past few decades shows that, measured by changes in
a country's net foreign asset position, external adjustment can take place not only through changes
in quantities and prices of goods and services—the so-called trade channel of adjustment—but also
through changes in asset prices and returns—the so-called financial channel of adjustment. International financial integration has greatly increased the scope for adjustment through the financial
channel. Although the precise magnitude, composition, and working of the financial channel of
adjustment are the subject of an ongoing debate, there is consensus that this channel is quantitatively
important in the case of the United States.
This paper examines a specific component of the financial channel of external adjustment that works
through valuation effects only, which the authors call the valuation channel of external adjustment.
The valuation channel works solely through a country’s capital gains and losses on the stock of gross
foreign assets and liabilities due to expected or unexpected asset price changes. In this paper, the
authors seek to understand the determinants of the valuation channel and its relative importance in
external adjustment and to illustrate its working and implications for macroeconomic dynamics and
risk sharing.
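As a rough illustration of the separation the authors exploit (a stylized identity in simplified notation, not the paper's own formulation), write net foreign assets as the value of the net external equity position, $nfa_t = p_t x_t$, where $x_t$ denotes quantities of net foreign equity holdings and $p_t$ their relative price. The change in the position then splits into a price component and a quantity component:

\[
\Delta nfa_{t+1} \;=\; \underbrace{x_t\,\Delta p_{t+1}}_{\text{valuation}} \;+\; \underbrace{p_{t+1}\,\Delta x_{t+1}}_{\text{portfolio adjustment}},
\]

where the first term captures capital gains and losses on existing holdings and the second corresponds to changes in quantities, the current-account-like component in the authors' model.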


Research Approach
The authors examine the valuation channel theoretically in a dynamic equilibrium portfolio model
with international trade in equity that encompasses complete and incomplete asset market scenarios.
The model is a two-country DSGE (dynamic stochastic general equilibrium) model with production under monopolistic competition. In the model, households supply labor, consume a basket of
goods that aggregates subbaskets of differentiated domestic and foreign goods in constant elasticity
of substitution fashion, and hold shares in domestic and foreign firms. To preserve the ability to
obtain a set of analytical results, the authors consider a simple production structure in which output
is produced using only labor, subject to country-wide productivity shocks. Monopolistic competition, based on product differentiation within countries, generates nonzero profits and firm values,
essential for the asset dynamics being studied. Uncertainty arises as a consequence of productivity
and government spending shocks, and asset markets are incomplete when both types of shocks are
present. The authors solve the model by combining a second-order approximation of the portfolio
optimality conditions with a first-order approximation of the rest of the model, using the technique
developed by Devereux and Sutherland (2009a) and Tille and van Wincoop (2008). They then illustrate their results with numerical examples, presenting impulse responses to relative productivity
and government spending shocks.
Key Findings
• The authors show that separating asset prices and asset quantities in defining asset positions makes
it possible to characterize the first-order dynamics of valuation effects (changes in relative crosscountry equity prices, interchangeably referred to as valuation in the paper) and portfolio adjustment (changes in quantities of net foreign equity holdings, or the current account of balance of
payments statistics in the authors’ model) and their relative contributions to net foreign asset and
macroeconomic dynamics.
• The initial response of valuation to a shock at time t is unanticipated as of time t – 1, but the
dynamics in all following periods are fully anticipated when the shock occurs. For instance, the response of the valuation channel to relative productivity shocks is generally described by an ARMA
(autoregressive moving average) (1, 1) process, while the response to relative government spending
is described by an i.i.d. (independent and identically distributed) variable. These results stem from
the fact that the cross-country dividend differential, which determines relative equity values in the
authors’ model, is proportional to the contemporaneous productivity and consumption differentials. The i.i.d. nature of valuation effects in response to government spending shocks then follows
because the consumption differential in the model obeys a random walk process. The proportionality of relative dividends to productivity (in addition to relative consumption) results in richer
ARMA dynamics of valuation (a stylized version of this logic is sketched after this list of findings).
• The share of valuation in net foreign asset adjustment is positive and constant in all periods after
the impact of a productivity shock, thus playing a distinct role in the adjustment of external accounts. In contrast, the share of valuation in the adjustment to government spending shocks is zero
in all periods except the impact period, with portfolio adjustment responsible for all changes in net
foreign assets in subsequent periods.
• The difference between the authors’ measure of the valuation channel and an excess return-based
measure used in Devereux and Sutherland (2009b) is nonnegligible in response to productivity
shocks. Excess returns are i.i.d., unpredictable variables in the authors’ model. Thus, their approach yields nonnegligible predictable valuation effects along the dynamics that follow productivity shocks.
• In response to productivity shocks in an incomplete markets scenario, plausible parameter values
imply that valuation represents a significantly larger share of net foreign asset movements than


portfolio adjustment. This finding is consistent with an equilibrium allocation that remains close
to the complete markets outcome. However, analytical results and numerical illustration show that
portfolio adjustment is the most important determinant of net foreign asset movements following
government spending shocks.
• Finally, separating quantities and prices in net foreign assets also enables the authors to fully characterize the role of capital gains and losses versus the current account in the dynamics of macroeconomic aggregates. The authors show how excess returns, changes in asset prices, and portfolio
adjustment affect consumption risk sharing with incomplete markets, contributing to dampening
or amplifying the impact response of the cross-country consumption differential to shocks, and to
keeping it constant in subsequent periods.
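As promised above, a stylized rendering of the ARMA(1,1) versus i.i.d. logic (simplified notation and only a sketch, not the authors' derivation): suppose the valuation effect is proportional to the change in the cross-country dividend differential, $d_t = z_t + \phi\, c_t$, where $z_t$ is the productivity differential and $c_t$ the consumption differential. Because $c_t$ obeys a random walk, $\Delta c_t$ is an i.i.d. innovation, so a shock that moves valuation only through consumption produces an i.i.d. valuation response. If instead $z_t$ follows an AR(1) process, $z_t = \rho z_{t-1} + \varepsilon_t$, then

\[
\Delta z_t = \rho\,\Delta z_{t-1} + \varepsilon_t - \varepsilon_{t-1},
\]

which is an ARMA(1,1) process, matching the richer dynamics that productivity shocks impart to valuation.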
Implications
The paper’s contribution to the literature on the financial channel of external adjustment is twofold.
On the methodological side, it shows the importance of distinguishing quantities and prices in the
definition of asset positions. On the substantive side, the authors obtain and illustrate a set of results
that shed light on the mechanics of valuation effects and portfolio adjustment that can be at work in
richer, quantitative models of international portfolio and business cycle dynamics.
w-09-19									

Productivity, Welfare, and Reallocation: Theory
and Firm-Level Evidence
by Susanto Basu, Luigi Pascali, Fabio Schianterelli, and Luis Serven

complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0919.htm
e-mail: susanto.basu@bc.edu, luigi.pascali@bc.edu, fabio.schianterelli@bc.edu, Lserven@worldbank.org

Motivation for the Research
What portion of aggregate growth is due to innovation and technological advances and what portion is due to changes in the efficiency of resource allocation? This question arises in a variety of
economic contexts, and in fields as diverse as growth and development, international trade, and
industrial organization. Yet despite the importance of the question, there is no consensus regarding
the answer. A large number of papers have proposed a bewildering variety of methods to measure
the importance of allocative efficiency, leading to a wide range of numerical estimates. Much of the
confusion stems from the lack of an organizing conceptual framework for studying this issue. This
paper proposes such a framework and provides a quantitative answer, using one particular set of data.
Research Approach
The authors start building their framework by using a standard utility-maximization approach. It
assumes that a representative household (consumer) with infinite horizons values both consumption and leisure, and maximizes its utility based on a standard intertemporal budget constraint. The
authors prove analytically that the change in welfare of a representative consumer is summarized
by the current and expected future values of the standard Solow productivity residual (a measure of
the change in total factor productivity) if the representative household maximizes utility while taking prices as given. This result justifies using total factor productivity (TFP) as the right summary
measure of welfare, even in situations where it does not properly measure technology, and makes
it possible to calculate the contributions to aggregate welfare of disaggregated units (industries or
firms), using readily available TFP data. Based on this finding, the authors compute firm and industry contributions to welfare for a set of European OECD countries (Belgium, France, Great Britain,
Italy, and Spain), using industry-level (EU KLEMS) and firm-level (Amadeus) data. After adding
further assumptions about technology and market structure (firms minimize costs and face common
factor prices), they show that changes in welfare can be decomposed into three components
that reflect, respectively, technological change, aggregate distortions, and allocative efficiency. Using
appropriate firm-level data, they assess the importance of each of these components as sources of
welfare improvement in the same set of European countries.
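In stylized form (a simplified sketch of the type of result described above, not the authors' exact expression, which involves additional terms and a precise mapping into utility units), the change in welfare of the representative household is summarized, to a first-order approximation, by the expected present value of Solow residual (TFP) growth,

\[
\Delta W_t \;\approx\; \sum_{s=0}^{\infty} \beta^{s}\, \mathbb{E}_t\!\left[\Delta \log \mathrm{TFP}_{t+s}\right],
\]

where $\beta$ is the household's discount factor and the residual is computed using the prices faced by households rather than by firms, a point the findings below return to.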
Key Findings
• The present value of aggregate TFP growth is a complete welfare measure for a representative
consumer, up to a first-order approximation. This result rigorously justifies using TFP, rather than
technological change or labor productivity, as the central statistic of interest in any exploration of
productivity at all levels of aggregation. Importantly, the result holds even when TFP is not a correct measure of technological change—for example, as a result of increasing returns, externalities,
or imperfect competition. It also suggests that productivity decompositions should be oriented
towards showing how particular features or frictions in an economy either promote or hinder aggregate TFP growth, since that measure is the key to economic welfare.
• The theoretical results point to a key role for the persistence of aggregate TFP growth, since welfare
change is related to the entire expected time path of productivity growth in addition to the current
growth rate.
• In order to create a proper welfare measure, TFP must be calculated using prices faced by households rather than prices faced by firms. In advanced economies with high rates of indirect and
income taxation, the gap between household and firm TFP can be considerable.
• One can explore the sources of welfare change using both nonparametric index numbers and formal econometrics. The nonparametric approach has the great advantage of simplicity, and it avoids
the need to address issues of econometric identification. Many interesting cross-country comparisons can be performed using the index-number approach, including calculating summary statistics
of allocative efficiency for each country, based on firm-level data. However, if one wants to ask
what share of aggregate TFP growth is due to technological change as opposed to scale economies
or improvements in allocative efficiency, one needs to make additional assumptions and estimate
production functions at the firm level, as the authors do in an example.
• In the majority of the OECD countries analyzed in this paper (Belgium, France, Great Britain,
Italy, and Spain), most of the growth in productivity during the period studied is accounted for by
advances in technology. This is certainly true for France and Great Britain. Moreover, aggregate
distortions are quite important in many countries, such as Belgium, Italy, and Spain. Finally, the
reallocation terms for primary factors or materials account for a small proportion of productivity
growth in all countries over the 1995–2005 period.
Implications
In a deep sense, neither the nonparametric approach nor the production-function approach can answer
the most interesting questions regarding the sources of welfare change, because neither allows one to answer counterfactual questions such as "How much lower
would welfare be if there had been no technological change in sector x over an interval of time y?” In
order to answer such questions, one needs to estimate a full general equilibrium (GE) model. However,
realistic GE models that allow for dynamic imperfect competition and nontrivial, firm-level heterogeneity are very complex to specify, let alone to estimate. The authors’ results, based on theory, enable
them to suggest an exercise that would be rigorous without requiring a full GE model.
One interesting question concerns the effects of various government policies on welfare. These policies might be trade policies, such as joining NAFTA, or purely domestic, such as a change in the
income tax rate. If one can isolate exogenous measures of policy change—an exercise that is difficult


but not impossible, as the literature on identifying exogenous monetary and fiscal policy shocks suggests—then, knowing that the entire welfare-relevant effects of these policy changes are summarized
by their effects on the time path of national TFP, one can simply regress TFP on current and lagged
measures of policy changes (in a single time series or using a panel of countries) and take the present
discounted value of the impulse response. The results would yield the effects of a particular policy
change on national welfare, without requiring the researcher to develop a GE model that specifies
all the channels through which the policy might operate. A similar exercise could be conducted for
the components of productivity growth resulting from technological change or resource reallocation.
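A minimal sketch of the regression-and-discounting exercise described above, under assumptions that are not in the paper: a single country's quarterly series, an already-identified exogenous policy-change measure, a distributed-lag specification with four lags, and a discount factor of 0.99. Function and variable names are hypothetical, and the simulated data are purely illustrative.

import numpy as np

def policy_welfare_effect(dtfp, policy, n_lags=4, beta=0.99):
    """Regress TFP growth on current and lagged policy changes, then return the
    present discounted value of the implied impulse response, which (under the
    paper's welfare result) summarizes the policy's welfare effect."""
    dtfp = np.asarray(dtfp)
    policy = np.asarray(policy)
    # Regressor matrix: constant, policy_t, policy_{t-1}, ..., policy_{t-n_lags}
    rows = [[1.0] + [policy[t - k] for k in range(n_lags + 1)]
            for t in range(n_lags, len(dtfp))]
    X = np.array(rows)
    y = dtfp[n_lags:]
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    irf = coefs[1:]  # response of TFP growth at horizons 0, 1, ..., n_lags
    return sum(beta ** h * irf[h] for h in range(len(irf)))

# Illustrative use with simulated data (not actual estimates).
rng = np.random.default_rng(0)
policy = rng.normal(size=200)
dtfp = 0.3 * policy + 0.1 * np.roll(policy, 1) + rng.normal(scale=0.5, size=200)
print(policy_welfare_effect(dtfp, policy))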
w-09-20

State-Dependent Pricing and Optimal Monetary Policy
by Denny Lie

complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0920.htm
e-mail: dlie@bu.edu

Motivation for the Research
In modern macroeconomic theory, monetary policy is assumed to have real effects because of a
tradeoff between nominal and real activity. When responding to various shocks to the economy, the
central bank’s policy goal is the optimal exploitation of this tradeoff. A large and growing literature
analyzing the nature of optimal monetary policy is dominated by time-dependent pricing (TDP)
models, which hold that firms have no choice about the timing of their price adjustments. Under
this assumption, price changes are exogenous and the frequency of price adjustment is constant—in
this type of environment, firms may not be able to adjust prices even if the economy experiences a
large shock. Yet there is increasing microeconomic evidence that firms' price adjustments are state-dependent and that it is the frequency, rather than the size, of price adjustment that has a strong
positive correlation with inflation. Under state-dependent pricing (SDP) models, individual firms
decide when to change prices and pay a small menu cost when they do so. Since the endogenous
timing of price adjustments may alter the monetary authority’s inflation-output tradeoff, the use of
SDP models may alter the prescription of what constitutes the optimal conduct of monetary policy.
This paper analyzes optimal monetary policy in an SDP environment.
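As a minimal illustration of the state-dependent pricing idea just described (a generic menu-cost adjustment rule, not the author's model; the demand specification, elasticity, menu cost, and numbers below are assumptions chosen only for the example):

def profit(price, marginal_cost, elasticity=4.0):
    """Per-period profit of a monopolistically competitive firm facing an
    isoelastic demand curve (illustrative functional form and elasticity)."""
    demand = price ** (-elasticity)
    return (price - marginal_cost) * demand

def optimal_reset_price(marginal_cost, elasticity=4.0):
    """Static markup rule for isoelastic demand: price = elasticity/(elasticity - 1) times cost."""
    return elasticity / (elasticity - 1.0) * marginal_cost

def adjust_price(current_price, marginal_cost, menu_cost=0.02):
    """State-dependent pricing: reset the price only if the profit gain from
    adjusting exceeds the menu cost; otherwise keep the current price."""
    reset = optimal_reset_price(marginal_cost)
    gain = profit(reset, marginal_cost) - profit(current_price, marginal_cost)
    return reset if gain > menu_cost else current_price

# A small cost shock leaves the price unchanged; a large one triggers adjustment.
print(adjust_price(current_price=1.34, marginal_cost=1.01))  # gain below menu cost: price stays
print(adjust_price(current_price=1.34, marginal_cost=1.30))  # gain exceeds menu cost: price resets

The point of the sketch is the timing margin: whether the firm adjusts depends on the state of the economy (how far its current price is from the optimum), which is what distinguishes SDP from time-dependent pricing.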
Research Approach
The author compares the optimal responses under TDP with and without monetary distortions
and under TDP and SDP with “full” distortions to gauge the optimal monetary policies under each
condition, and compares the different prescriptions implied by using a TDP or an SDP framework.
The SDP model used in this paper assumes the presence of monopolistically competitive firms and
nominal price rigidity. The approach studies the optimal precommitment monetary policy from a
timeless perspective, treating it as a policy implemented long ago, and focuses on the long-run responses to either
a productivity shock or a government purchase shock. This study departs from the widespread use
of the linear-quadratic approach; instead, it follows the public finance literature’s common practice
of evaluating social welfare through households’ lifetime utility. This alternative approach identifies
market distortions and how monetary policy influences the variations in these distortions and thus
affects welfare. The author’s SDP model features four distinct sets of distortions: (1) the markup
distortion that arises from a firm’s monopoly power, which causes the market-generated output level
to be inefficient; (2) the relative-price distortion arising from firms’ asynchronous price-adjustment
process; (3) the monetary (or exchange) distortions due to the use of money and credit to purchase
final consumption goods; and (4) the menu cost distortion due to the fixed cost of price adjustment.
The tradeoffs among these distortions require the central bank to balance the overall effect of the
distortions with the goal of achieving the socially optimal allocation.


Another innovation is the author’s method for solving the optimal policy problem. He computes a
second-order approximate equilibrium solution to the optimal policy problem in addition to employing the standard first-order (linear) approximation method most often used in the literature.
Using the second-order approximate solution addresses recent criticism that first-order approximations to SDP models miss the state-dependent nature and the nonlinear properties of these models.
Unlike most previous studies, this paper also examines the optimal monetary policy start-up problem, as the true Ramsey solution maximizing the welfare of representative agents specifies that the
monetary authority should treat the early period of implementing a precommitment policy differently from later periods. This is so because in the starting period there is no past commitment that
the central bank has to follow. The existing literature contends that to address the start-up problem,
the monetary authority should temporarily stimulate the economy by generating surprise inflation
in the starting period.
Key Findings
• Under the timeless perspective, the optimal monetary policy response to either a temporary productivity shock or a temporary government purchase shock can be characterized as an approximate price
stability rule—in the sense that the price level is still largely stabilized around its deterministic trend.
Hence, the optimal policy under SDP closely replicates the dynamics under the TDP assumption
found in previous studies. However, the SDP’s endogenous timing of price adjustments alters the
policy tradeoff faced by the monetary authority: it is optimal to let inflation vary more under SDP.
• Within the long-run timeless perspective policy, the optimal response based on a second-order
approximation to the policy problem is virtually identical to the response computed using a standard linear approximation method. This finding suggests that second-order components
(state-dependence and nonlinearity) play little role in the optimal policy response under SDP. However, this
finding is conditioned on the standard assumption in the literature that prior to the shock the
economy was at steady state. If this assumption is relaxed, there are some differences between
the first-order and second-order approximate dynamics under SDP. The degree of nonlinearity is
shown to depend on the interaction between the state of the economy before the shock, the size of
the shock, and the assumed policy rule.
• The cost of inflation variation in the relative price distortion is lower under SDP than under the standard TDP assumption. The presence of endogenous timing of price adjustments alters the tradeoff
faced by the monetary policy authority. Thus, compared to the standard TDP assumption, under
SDP it is desirable for the monetary authority to put less weight on inflation stabilization relative to
other stabilization goals.
• Incorporating SDP in the model leads to different start-up dynamics from the dynamics under the
standard TDP assumption. In particular, it is optimal to generate much higher start-up inflation
despite the fact that the monetary authority is shown to have less leverage over real activity in the
presence of SDP. This result is once again due to the subtle modification to the policy tradeoff
involving the lower cost of inflation variation on the relative-price distortion. However, the welfare improvement from generating this surprise inflation is shown to be relatively small. Thus, the
timeless perspective policy may be a good approximation to the true Ramsey policy.
• The author concludes that unlike TDP models, SDP models generally exhibit some degree of nonlinearity, both under optimal monetary policy and when the policy rule itself is linear. The nonlinearity depends on the interaction between the state of the economy and the size of the shock.
Although this nonlinearity does not seem to change the qualitative property of the responses, there
are some important quantitative differences. It follows that a conventional first-order approximation
to the equilibrium solution may be good enough for some purposes, such as when an analyst is only


interested in looking at the qualitative dynamics of an SDP model. But for other purposes such as
forecasting, estimation, and so on, a second- or higher-order approximation may be warranted.
Implications
Many of the findings for the SDP case closely track the response under TDP, but SDP offers a better
model of actual price-setting behavior. The paper’s analysis can be extended in several ways. While
the present study focuses on characterizing optimal monetary policy under SDP, the implementation issues should be considered in evaluating the model’s usefulness for policymakers. In the current
paper, the cyclical fluctuations are driven by a productivity shock and a government spending shock,
but consideration of a cost-push inflation shock should be added, and its inclusion will result in a
nontrivial modification to the SDP model’s policy tradeoff.
w-09-21

Seeds to Succeed: Sequential Giving to Public Projects
by Anat Bracha, Michael Menietti, and Lise Vesterlund
complete text: http://www.bos.frb.org/economic/wp/wp2009/wp0921.pdf
e-mail: anat.bracha@bos.frb.org, mem78@pitt.edu, vester@pitt.edu

Motivation for the Research
Fundraisers usually use one of two basic approaches: a simultaneous or a sequential fundraising campaign. In a simultaneous campaign, the total amount required is announced and all charitable donations are accepted in the order in which they are pledged. A sequential campaign employs a two-step
approach: First, a substantial amount of seed donation(s) is secured in a silent phase, and only then is
the campaign’s public phase launched. For instance, raising $500,000 for a project could be achieved
using a campaign that first secures pledges totaling $300,000 and then announces this amount while
publicly launching the campaign to raise the additional $200,000. This type of sequential strategy
is a widely accepted fundraising practice, especially for capital campaigns with large fixed costs such
as those for new building construction or buying expensive equipment.
Despite the common use of sequential fundraising, from a theoretical perspective sequential fundraising seems to have no advantage over a simultaneous campaign. This is because one donor’s
contribution is a perfect substitute for another’s, and therefore sequential provision will only shift
contributions, thus allowing the initial donor to free ride on subsequent donors (Varian 1994). This
mismatch between widely accepted fundraising practice and a theoretical prediction has prompted
further research to identify when it might be optimal to employ sequential fundraising campaigns.
Andreoni (1998) argued that when there are large fixed production costs, a sequential fundraising effort is preferable to a simultaneous one. He showed that in the presence of large fixed costs
multiple outcomes (equilibria) are possible, some of which will secure the fundraising target, while
others will not. Thus, capital campaigns that rely on the simultaneous fundraising strategy may fail,
while a sequential strategy is more apt to succeed, as a sufficiently large initial donation gives
subsequent donors an incentive to complete the campaign, eliminating inefficient outcomes and securing the desired goal. A study by List and
Lucking-Reiley (2002), using an actual campaign, finds evidence in line with Andreoni's model, though this evidence is
also consistent with other models of sequential fundraising, since in the field it is difficult to vary seed
donations and fixed production costs while keeping the treatments otherwise comparable. This paper
uses laboratory experiments to test whether sequential fundraising eliminates inefficient outcomes
that arise in the presence of fixed costs, as suggested by Andreoni.
Research Approach
The authors construct an experiment to examine simultaneous and sequential giving in the presence
and absence of fixed costs, resulting in four different treatments. They designate the fixed cost to
be six units, large enough so that no single donor has an incentive to cover it single-handedly, yet
small enough to secure both positive and zero provision outcomes in the simultaneous treatment
with fixed costs.

[Figure: Fraction of Public Goods Provided Under Simultaneous versus Sequential Fundraising
(with Fixed Costs of 6); fraction provided by round, rounds 1–14. Source: Authors' calculations.]

Indeed, in the simultaneous treatment with fixed costs of six, there are two possible equilibria: one with each player contributing three units, exactly covering the fixed costs and
providing the public good, while the other has each player contributing zero units and no provision
of the public good. In the sequential treatment, the zero provision outcome is eliminated, since the
first mover has an incentive to provide a sufficiently large donation to ensure that the second player
covers the remaining fixed cost.
The experiment was conducted at the University of Pittsburgh’s Experimental Economics Laboratory.
Three sessions lasting one hour each were held for each of the four treatments. Fourteen undergraduates participated in each session for a total of 168 participants. Each session had 14 rounds (periods)
in which participants played a public good game. More specifically, at the onset of each session, each
participant was assigned a role of either first mover or second mover, and this role was kept throughout
the session. At the beginning of each round, pairs of first and second movers were created randomly
and then each participant was given a $4 endowment that could be invested in a public account (project). The public good account of each pair was designed to yield benefits for both participants if their
joint investment equaled or exceeded the fixed costs. Investment was made in “units” and investment
could be any integer amount between zero and 10 units. Although the benefit was for both paired
participants, the per-unit investment cost was charged to the individual making the contribution—it
was 40 cents for the first three units, 70 cents for units four through seven, and $1.10 for the last three
units. Contributions were made either simultaneously or sequentially: in the simultaneous public good
game, the total contribution was revealed only after both parties placed their contribution, while in the
sequential public good game the second player was informed of the first mover’s contribution before
making his or her own contribution decision. After completion of the 14 rounds, three rounds were
randomly selected to count for payment, and average earnings were $22.
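The contribution-cost schedule and provision condition described above can be summarized in a short sketch (an illustration of the stated design only; the benefit from provision is not reported in this summary, so payoffs are deliberately left out rather than assumed):

UNIT_COSTS = [0.40] * 3 + [0.70] * 4 + [1.10] * 3  # cost of units 1-3, 4-7, and 8-10

def contribution_cost(units):
    """Dollar cost to an individual of contributing a given number of units (0 to 10)."""
    return sum(UNIT_COSTS[:units])

def good_provided(first_mover_units, second_mover_units, fixed_cost=6):
    """The public good is provided when joint investment covers the fixed cost."""
    return first_mover_units + second_mover_units >= fixed_cost

# Symmetric positive-provision equilibrium with fixed costs of six: each player
# contributes three units at a cost of $1.20, and the fixed cost is exactly covered.
print(round(contribution_cost(3), 2), good_provided(3, 3))  # 1.2 True
# Zero-provision equilibrium: nobody contributes and the good is not provided.
print(round(contribution_cost(0), 2), good_provided(0, 0))  # 0 False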

[Figure: Simultaneous versus Sequential Fundraising Results (with Fixed Costs of 8). Left panel:
mean individual contribution by round; right panel: likelihood of provision by round, rounds 1–14.
Source: Authors' calculations.]

The comparative statics across treatments allowed the authors to answer the following three questions: (1) In simultaneous play, do fixed costs give rise to inefficient outcomes? (Inefficient outcomes
are outcomes in which the public good is not provided, even though it is socially desirable.) (2) If
such inefficiencies exist under simultaneous play, does the sequential treatment help to eliminate
them and to increase the likelihood of positive provision? (3) Comparing the change in behavior
from simultaneous to sequential play with and without fixed costs, is the potential increase in contributions under sequential provision greater in the presence of fixed costs?
Key Findings
• Surprisingly, the authors found that under simultaneous play, introducing fixed costs of six
increased rather than decreased individual contributions. Individual donors seemed uncertain of which
outcome would result, and opted to increase their contributions to ensure the positive provision
outcome. All else equal, with fixed costs of six, sequential play reduced individual contributions by
almost one unit.
• For fixed costs of six, sequential play was shown to decrease both contributions and individual
payoffs. The reason for this deviation from theory is rooted in the simultaneous game, where the
introduction of fixed costs increases rather than decreases contributions. The larger-than-expected
contributions in the simultaneous treatment are due to coordination difficulties combined with
relatively low fixed costs. Since the sequential treatment alleviates the coordination problem by
making the first mover’s contribution known to the other player, both participants can safely contribute less and still secure a positive outcome. In the simultaneous situation, the cost of contributing is so low relative to the benefit from provision that individuals contribute an inefficiently large
amount to make sure the good is provided.
• Given these results, the authors ran similar treatments, but with a higher fixed cost of eight. In
this case they found that individual contributions were similar whether employing sequential or
simultaneous giving. However, although sequential giving did not increase individual donations, it
did increase the chances of provision and individual earnings. As predicted, with simultaneous play,
many participants did not contribute to the public good, or failed to coordinate on meeting the


fixed cost level needed to provide the good. Hence, the authors’ results support Andreoni’s claim,
but only for sufficiently high fixed costs. In this case, using seed money mitigated the risk of falling
short of the target goal.
Implications
This paper provides mixed support for Andreoni’s theory: it does not find evidence consistent with
the theory given small fixed costs, but does find support for it when fixed costs are sufficiently high.
That is, the findings affirm the fundraising practice of securing seed donations for projects with
fixed costs that are sufficiently large. However, the evidence reveals that in these cases fundraisers
must be cautious when designing a campaign: if the initial seed contribution is set too low, they run the
risk of first movers exploiting their advantage and causing subsequent donors to undercontribute,
and therefore failing to secure the public good. A concern for equity may help to explain why fundraisers have specific targets for the size of the seed donation as a share of the overall goal.

Public Policy Briefs
b-09-1

A Proposal to Help Distressed Homeowners:
A Government Payment-Sharing Plan

by Christopher L. Foote, Jeffrey C. Fuhrer, Eileen Mauskopf, and Paul S. Willen
complete text: http://www.bos.frb.org/economic/ppb/2009/ppb091.htm
email: chris.foote@bos.frb.org, jeff.fuhrer@bos.frb.org, eileen.mauskopf@frb.gov, paul.willen@bos.frb.org

Motivation for the Proposal
This public policy brief presents a proposal designed to help homeowners who are facing foreclosure because their incomes have fallen and because the balances owed on their mortgages exceed
the value of their homes. These homeowners represent a subset of all distressed homeowners, but
according to the authors' research, they face an elevated risk of default and are unlikely to be helped
by current programs aimed at reducing foreclosures. The authors’ proposal was originally posted in
January 2009 on the website of the Federal Reserve Bank of Boston.
Proposal Summary
The authors propose a government payment-sharing arrangement that provides a significant reduction in the homeowner’s monthly mortgage payment. Previous research indicates that foreclosures
most often occur when a homeowner has negative equity (owes more on the house than the property
is worth) and has suffered an adverse life event, such as job loss, illness, or divorce, making it difficult
to keep up with the mortgage payments.
The plan does not involve reducing the mortgage’s outstanding principal. Rather, it provides homeowners with direct government assistance to meet their monthly mortgage payments. Two options
are presented, both designed to help people with negative housing equity and a significant income
disruption. In one version, the government assistance comes in the form of a loan that must be
repaid when the borrower’s financial well-being is restored. The second version of the proposal features government grants that do not have to be repaid. In either case, the homeowner must provide
evidence of negative equity in the home and of job loss or other significant income disruption.
Key Points
• Upon determining eligibility, the government pays a significant share of the household’s current
mortgage payment directly to the mortgage servicer.


• The government’s share of the mortgage payment is equal to the percentage decline in the family’s
earned income as a result of the adverse life event (see the illustrative sketch after these key points).
• With both options, the plan requires proof of a recent and significant income disruption—the
authors suggest 25 percent.
• The assistance terminates upon resumption of the borrower’s normal income stream or after two
years, whichever comes first.
• The plan caps the maximum monthly payment that the government will pay. The authors offer
$1,500 as a plausible amount for the cap.
• In the loan version of the plan, the government’s payments accrue to a balance the homeowner
must repay with interest to the government at a future date. The interest rate reflects the risk entailed in lending to the borrower and thus may be above the rate charged on prime mortgages. If
the homeowner eventually sells the house for more than the value of the mortgage balance, the
government has first claim on any equity remaining after the mortgage has been paid off.
• In the grant version, there is no required repayment to the government for the share of the homeowner’s
mortgage payments it has made. This version includes an income limit for qualifying households.
• The cost of the plan depends on which version policymakers choose—loans or grants. Under
the grant version, the authors estimate the cost to the government of providing help to 3 million
homeowners (a generous estimate of the number of homeowners who would be eligible) to be
about $25 billion annually, or about $50 billion overall. Under the loan version, the cost would be
significantly smaller. If all recipients paid back their government loans, the program would be virtually costless to the government; some defaults on these loans are likely, however, and it is difficult
to estimate the rate of such defaults.
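As referenced in the key points above, a minimal sketch of how the payment-sharing rule might be computed, using only the parameters stated in the brief (the 25 percent income-disruption threshold and the $1,500 monthly cap); this is an illustration, not the proposal's official formula, and the function and variable names are hypothetical:

def government_payment_share(monthly_payment, income_before, income_after,
                             min_decline=0.25, monthly_cap=1500.0):
    """Monthly amount the government would pay to the servicer under the proposed
    payment-sharing plan: the share of the mortgage payment covered equals the
    percentage decline in earned income, subject to an eligibility threshold and
    a cap on the government's monthly payment."""
    decline = (income_before - income_after) / income_before
    if decline < min_decline:
        return 0.0  # income disruption too small to qualify
    return min(decline * monthly_payment, monthly_cap)

# Example: earned income falls 40 percent and the mortgage payment is $2,000 a month,
# so the government pays $800 and the homeowner pays the remaining $1,200.
print(government_payment_share(2000.0, income_before=5000.0, income_after=3000.0))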
Implications
The plan has a number of important advantages. First, the authors believe that the plan will stop
most preventable foreclosures from occurring. This benefits the borrower, the lender/investor, communities with many distressed mortgages, and the financial markets more broadly. Second, the plan
provides a significant reduction in the homeowner’s payment during a period of income loss, in contrast to existing loan modification programs that either lower payments insufficiently or even raise
monthly mortgage payments. Third, because it works with the homeowner’s existing mortgage, the
plan does not depend on lender/servicer or second lien-holder cooperation, a major stumbling block
to aiding a wider group of distressed homeowners. The plan works equally well for individual loans
held in portfolio and for securitized loans. Fourth, the private lender should be considerably better
off under this plan than by pursuing foreclosure.
The plan also has some disadvantages. First, it is unlikely to stop homeowners with very large
negative equity positions from defaulting when the government aid ends. To the extent that such
foreclosures are ultimately unavoidable, this plan may delay such an outcome without providing
any guarantee that such a delay is beneficial on either economic or social grounds. Next, there are
potential disadvantages that are specific to which option is implemented. If the program takes the
form of loans, some borrowers may be wary of taking on a government loan and may choose to
default instead. If it takes the form of grants, moral hazard problems could be more serious, despite
the safeguards included in the plan. Finally, administering this program would likely require some
cooperation from the mortgage servicers, such as providing information on such items as outstanding mortgage loan balances of applicants. The government could offer some payment to the servicer
for performing this function.


Contributing Authors
Manuel Adelino is a Ph.D. candidate in financial economics at the Massachusetts Institute of Technology and a research associate in the research department of the Federal Reserve Bank of Boston.
Michelle L. Barnes is a senior economist and policy advisor in the research department of the
Federal Reserve Bank of Boston.
Susanto Basu is a professor of economics at Boston College and a visiting scholar in the research
department of the Federal Reserve Bank of Boston.
Marques Benton is a vice president in the public and community affairs department of the Federal
Reserve Bank of Boston.
Zvi Bodie is the Norman and Adele Barron Professor of Management at Boston University.
Anat Bracha is an economist with the Research Center for Behavioral Economics in the research
department at the Federal Reserve Bank of Boston.
Katharine Bradbury is a senior economist and policy advisor in the research department of the
Federal Reserve Bank of Boston.
Lynn E. Browne is an executive vice president and economic advisor at the Federal Reserve Bank
of Boston.
Ryan Bubb is a Ph.D. candidate in the economics department at Harvard University, a Terrence M.
Considine Fellow in Law and Economics at Harvard Law School, and a graduate research fellow in
the research department at the Federal Reserve Bank of Boston.
Prabal Chakrabarti is an assistant vice president and the director of community development in the
public and community affairs department of the Federal Reserve Bank of Boston.
Daniel Cooper is an economist in the research department of the Federal Reserve Bank of Boston.
Christopher L. Foote is a senior economist and policy advisor in the research department of the
Federal Reserve Bank of Boston.
Kevin Foster is a survey methodologist with the Consumer Payments Research Center in the
research department of the Federal Reserve Bank of Boston.
Jeffrey C. Fuhrer is an executive vice president and the director of research at the Federal Reserve
Bank of Boston.
Kristopher S. Gerardi is a research economist and assistant policy advisor in the research department at the Federal Reserve Bank of Atlanta. At the time the papers summarized in this issue were
written he was also a visiting scholar of the Federal Reserve Bank of Boston.
Fabio Ghironi is an associate professor of economics at Boston College, a visiting scholar in the
research department at the Federal Reserve Bank of Boston, a research associate at the National
Bureau of Economic Research, and a fellow of the Euro Area Business Cycle Network.


Gita Gopinath is an associate professor of economics at Harvard University and a visiting scholar
in the research department at the Federal Reserve Bank of Boston.
Pierre-Olivier Gourinchas is an associate professor of economics at the University of California
at Berkeley.
DeAnna Green is a senior community affairs analyst in the public and community affairs department of the Federal Reserve Bank of Boston.
Fabià Gumbau-Brisa is a senior economist in the research department of the Federal Reserve Bank
of Boston.
Chang-Tai Hsieh is a professor of economics at the Booth School of Business at the University of
Chicago.
Matteo Iacoviello is an economist in the Division of International Finance at the Board of Governors of the Federal Reserve System. He is on leave in 2010 from Boston College, where he is an
associate professor of economics. At the time the paper summarized in this issue was written he was
a visiting scholar in the research department at the Federal Reserve Bank of Boston.
Julian Jamison is a senior economist with the Research Center for Behavioral Economics in the
research department at the Federal Reserve Bank of Boston and a visiting faculty member in the
department of economics at Yale University.
Jane Katz is the director of education programs at the Federal Reserve Bank of New York. At the
time the paper summarized in this issue was written she was also a visiting scholar in the research
department at the Federal Reserve Bank of Boston.
Alex Kaufman is a Ph.D. candidate in the economics department at Harvard University and a
graduate research fellow in the research department at the Federal Reserve Bank of Boston.
Yolanda K. Kodrzycki is a vice president and the director of the New England Public Policy Center
in the research department of the Federal Reserve Bank of Boston. At the time the papers summarized in this issue were written she was a senior economist and policy advisor in the research
department of the Federal Reserve Bank of Boston.
Sergei Koulayev was an economist in the Consumer Payments Research Center in the research
department of the Federal Reserve Bank of Boston at the time the paper summarized in this issue
was written.
Nicholas Li is a Ph.D. candidate in economics at the University of California at Berkeley.
Denny Lie is a research associate in the research department at the Federal Reserve Bank of Boston
and is affiliated with Boston University.
Jaewoo Lee is deputy division chief of the open economy macroeconomics division in the research
department of the International Monetary Fund and a fellow of the Euro Area Business Cycle
Network.


Eileen Mauskopf is a research economist at the Board of Governors of the Federal Reserve System.
Erik Meijer is an economist with the Roybal Center for Financial Decision Making of the RAND
Corporation.
Michael Menietti is a graduate student in economics at the University of Pittsburgh.
Ana Patricia Muñoz is a policy analyst in the public and community affairs department of the
Federal Reserve Bank of Boston. At the time the papers summarized in this issue were written she
was a research associate in the research department at the Federal Reserve Bank of Boston.
Giovanni P. Olivei is a vice president and economist in the research department of the Federal
Reserve Bank of Boston.
David Owens is an assistant professor of economics at Haverford College.
Ali K. Ozdagli is an economist in the research department of the Federal Reserve Bank of Boston.
Luigi Pascali is a Ph.D. candidate in economics at Boston College.
Marina Pavan is a senior research fellow at the Geary Institute of the University College Dublin.
David Plasse recently retired from the Federal Reserve Bank of Boston. At the time the paper
summarized in this issue was written he was vice president of check services at Windsor Locks,
Connecticut.
Alessandro Rebucci is a senior research economist at the Inter-American Development Bank and
a fellow of the Euro Area Business Cycle Network.
Fabio Schianterelli is a professor of economics at Boston College.
Scott Schuh is a senior economist and policy advisor and the director of the Consumer Payments
Research Center in the research department of the Federal Reserve Bank of Boston.
Luis Serven is research manager for macroeconomics and growth in the development research
group at the World Bank.
Oz Shy is a senior economist at the Federal Reserve Bank of Boston and a member of the Consumer
Payments Research Center in the research department.
Rune Stenbacka is a professor of economics at the Hanken School of Economics in Helsinki,
Finland.
Robert K. Triest is a vice president and economist in the research department of the Federal
Reserve Bank of Boston.
Lise Vesterlund is the Andrew W. Mellon Professor of Economics at the University of Pittsburgh.
Richard Walker is a vice president in the public and community affairs department of the Federal
Reserve Bank of Boston.


J. Christina Wang is a senior economist in the research department at the Federal Reserve Bank of
Boston.
Jan Wegener is a postdoctoral fellow in the decision neuroscience research group at the Copenhagen
Business School.
Paul S. Willen is a senior economist and policy advisor in the research department of the Federal
Reserve Bank of Boston and a faculty research fellow at the NBER.
Glenn Woroch is an adjunct professor of economics at the University of California, Berkeley.
Michael A. Zabek is a senior research assistant with the Consumer Payments Research Center
in the research department of the Federal Reserve Bank of Boston.
Bo Zhao is a senior economist in the New England Public Policy Center in the research department
at the Federal Reserve Bank of Boston.

