
research review

Issue No. 14, July 2010 – December 2010

federal reserve bank of boston

Research Department
Jeffrey C. Fuhrer
Executive Vice President and
Director of Research
Geoffrey M. B. Tootell
Senior Vice President and
Deputy Director of Research
Economists
Yolanda K. Kodrzycki, VP
Giovanni P. Olivei, VP
Robert K. Triest, VP
Michelle L. Barnes
Anat Bracha
Katharine Bradbury
Mary A. Burke
Daniel H. Cooper
Federico J. Díez
Christopher L. Foote
Fabià Gumbau-Brisa
Julian C. Jamison
Jane Sneddon Little
Ali K. Ozdagli
Alicia Sasser
Scott Schuh
Oz Shy
Joanna Stavins
J. Christina Wang
Paul S. Willen
Bo Zhao
Manager
Patricia Geagan, AVP
Editors
Suzanne Lorant
Elizabeth Murry

research review
Issue no. 14, July 2010 – December 2010
Research Review provides an overview of recent research by economists of the
research department of the Federal Reserve Bank of Boston. Included are summaries of scholarly papers, staff briefings, and Bank-sponsored conferences.

Research Review is available on the web at
http://www.bostonfed.org/economic/ResearchReview/index.htm.

Earlier issues of Research Review (through December 2009) are available in hard
copy without charge. To order copies of back issues, please contact the Research
Library:
Research Library—D
Federal Reserve Bank of Boston
600 Atlantic Avenue
Boston, MA 02210
Phone: 617.973.3397
Fax: 617.973.4221
E-mail: boston.library@bos.frb.org
Views expressed in Research Review are those of the individual authors and do
not necessarily reflect official positions of the Federal Reserve Bank of Boston or
the Federal Reserve System. The authors appreciate receiving comments.

Graphic Designers
Heidi Furse and Fabienne Madsen

Research Review is a publication
of the Research Department of the
Federal Reserve Bank of Boston
ISSN 1552-2814 print (discontinued
beginning with the June 2010 issue)
ISSN 1552-2822 (online).
© Copyright 2011
Federal Reserve Bank of Boston

Research Department Papers Series of the Federal Reserve Bank of Boston
Public Policy Discussion Papers present research bearing on policy issues. They
are generally written for policymakers, informed business people, academics, and
the informed public. Many of these papers present research intended for professional journals.
Working Papers present statistical or technical research. They are generally written for economists and others with strong technical backgrounds, and they are
intended for publication in professional journals.
Public Policy Briefs present analysis on topics of current interest concerning
the economy. These briefs are written by Boston Fed staff economists, based on
briefing materials or presentations prepared by Boston Fed research staff for senior
Bank executives or outside audiences.
Research department papers are available online only.
http://www.bostonfed.org/economic/research.htm


Executive Summaries in This Issue

Public Policy Discussion Papers

p-10-3
Who Gains and Who Loses from Credit Card Payments? Theory and Calibrations
Scott Schuh, Oz Shy, and Joanna Stavins

p-10-4
$1.25 Trillion Is Still Real Money: Some Facts about the Effects of the Federal Reserve’s Mortgage Market Investments
Andreas Fuster and Paul S. Willen

p-10-5
Reasonable People Did Disagree: Optimism and Pessimism about the U.S. Housing Market Before the Crash
Kristopher S. Gerardi, Christopher L. Foote, and Paul S. Willen

p-10-6
A Profile of the Mortgage Crisis in a Low-and-Moderate-Income Community
Lynn M. Fisher, Lauren Lambie-Hanson, and Paul S. Willen

Working Papers

w-10-9
In Search of Real Rigidities
Gita Gopinath and Oleg Itskhoki

w-10-10
Strategic Choice of Preferences: The Persona Model
David H. Wolpert, Julian C. Jamison, David Newth, and Michael Harre

w-10-11
Some Evidence on the Importance of Sticky Wages
Alessandro Barattieri, Susanto Basu, and Peter Gottschalk

w-10-12
Imputing Household Spending in the Panel Study of Income Dynamics: A Comparison of Approaches
Daniel H. Cooper

w-10-13
The Distress Premium Puzzle
Ali K. Ozdagli

w-10-14
Characterizing the Amount and Speed of Discounting Procedures
Dean T. Jamison and Julian C. Jamison

w-10-15
Internal Sources of Finance and the Great Recession
Michelle L. Barnes and N. Aaron Pancost

w-10-16
Affective Decision Making: A Theory of Optimism Bias
Anat Bracha and Donald J. Brown

w-10-17
The Financial Structure of Startup Firms: The Role of Asset, Information, and Entrepreneur Characteristics
Paroma Sanyal and Catherine L. Mann

Public Policy Briefs

b-10-3
Evidence of a Credit Crunch? Results from the 2010 Survey of First District Community Banks
Jihye Jeon, Judit Montoriol-Garriga, Robert K. Triest, and J. Christina Wang

Multimedia

The Great Recession (video)
Christopher L. Foote

Contributing Authors

Public Policy Discussion Papers
p-10-3									

Who Gains and Who Loses from Credit Card Payments?
Theory and Calibrations
by Scott Schuh, Oz Shy, and Joanna Stavins

complete text: http://www.bostonfed.org/economic/ppdp/2010/ppdp1003.htm
e-mail: scott.schuh@bos.frb.org, oz.shy@bos.frb.org, joanna.stavins@bos.frb.org

Motivation for the Research
The typical consumer is largely unaware of the full ramifications of paying for goods and services by
credit card. Faced with many choices—cash, check, debit or credit card, and so on—consumers naturally consider the costs and benefits of each payment instrument and choose accordingly. For credit
cards, consumers likely think most about the benefits of this method: delayed payment—“buy now,
pay later”—and perhaps the rewards earned—cash back, frequent flier miles, or other enticements.
What most consumers do not know is that their decision to pay by credit card involves merchant
fees, retail price increases, a nontrivial transfer of income from cash to card payers, and consequently
a transfer from low-income to high-income consumers. (For simplicity, the authors refer to consumers who do not pay by credit card as cash payers, where “cash” represents all payment instruments other
than credit cards: cash, checks, debit and prepaid cards, and so on.)
In contrast, the typical merchant is acutely aware of the ramifications of customers’ decision to pay
with credit cards. For the privilege of accepting credit cards, U.S. merchants pay banks a fee that
is proportional to the dollar value of the sale. The merchant’s bank then pays a proportional interchange fee to the consumer’s credit card bank. Naturally, merchants seek to pass the merchant fee to
their customers. Merchants may want to recoup the merchant fee only from the consumers who pay
by credit card. In practice, however, credit card companies impose a “no-surcharge rule” (NSR) that
prohibits U.S. merchants from doing so, and most merchants are reluctant to give cash discounts.
Instead, merchants mark up their retail prices for all consumers by an amount high enough to
recover the merchant fee from credit card sales.

[Figure: Fee and Payments in a Simple Market with a Card Network. The diagram traces fee and payment flows among low- and high-income consumers, the merchant, and the card network’s acquiring and issuing banks: consumers pay the retail price to the merchant; the merchant pays its acquiring bank a merchant fee (2 percent of each card sale) and bears a cash-handling cost (0.5 percent) on cash sales; the acquirer passes an interchange fee, set between the reward rate and the merchant fee, to the card issuer; and the issuer returns a reward (1 percent) to card users. Source: Authors’ illustration.]
This retail price markup for all consumers results in credit-card-paying consumers being subsidized
by consumers who do not pay with credit cards, a result that was first discussed in Carlton and
Frankel (1995) and later in Frankel (1998), Katz (2001), Gans and King (2003), and Schwartz and
Vincent (2006). Thus, cash buyers must pay higher retail prices to cover merchants’ costs associated
with the credit cards’ merchant fees. Because these fees are used to pay for rewards given to credit
card users, cash users also finance part of the rewards given to credit card users.
If the subsidy of card payers by cash payers results from heterogeneity in consumer preferences and
utility between cash and card payments, the subsidy may not be harmful in terms of consumer and
social welfare. However, U.S. data show that credit card use is strongly and positively correlated
with consumer income. Consequently, the subsidy of credit card payers by cash payers involves a
regressive transfer of income from low-income to high-income consumers. This regressive transfer
is amplified by the disproportionate distribution of rewards, which are proportional to credit card
sales, to high-income credit card users. Frankel (1998) was the first to connect the wealth transfers
to the average income of groups of consumers (that is, subsidies from noncardholders to wealthier
cardholders). This idea was later discussed in Carlton and Frankel (2005) and Frankel and Shampine (2006). This paper is the first to compute who gains and who loses from credit card payments in
the aggregate economy.
Research Approach
The authors compute dollar-value estimates of the actual transfers from cash payers to card users
and from low-income to high-income households. They propose a simple, model-free accounting
methodology to compute the two transfers by comparing the costs imposed by individual consumer
payment choices with actual prices paid by each buyer. To conduct a welfare and policy analysis of
these transfers, the authors construct a structural model of a simplified representation of the U.S.
payments market and calibrate it with U.S. micro data on consumer credit card use and related
variables. Their analysis is consistent with, but abstracts from, three features of the U.S. payments
market.
First, it focuses on the use of credit cards for convenience (payments only) and does not incorporate
a role for revolving credit, which is an important feature of the total consumer welfare associated
with credit cards. Revolving credit is a one-time application for a line of credit that has no fixed
payment schedule, can be drawn upon repeatedly up to the limit of the credit line, and leaves the
repayment plan (which is really the credit decision) up to the card holder (except for a minimum
payment). The authors use the term “revolving credit” to indicate credit that is not paid off completely at the end of each billing cycle.
Second, the study abstracts from the supply-side details of the payments market for both cash and
cards. The authors take as given the well-established seminal result of Rochet and Tirole (2006)
concerning the critical role of an interchange fee between acquiring and issuing banks in the two-sided credit card market, a result that notes that the optimal level of the interchange fee is an empirical issue. By incorporating both merchant fees and card reward rates, they can assume that the
interchange fee lies between these two rates and is set internally in the banking sector to the optimal
level conditional on fees and rewards.
Finally, they do not include a role for the distribution of bank profits from credit card payments to
households that own bank stocks, due to a lack of sufficient micro data. Given these three simplifications,
they can assess only the consumer welfare implications of the payment instrument transfers but not
the full social welfare implications.
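
The flavor of this accounting can be conveyed with a minimal sketch in Python. All rates and household figures below are hypothetical placeholders, not the authors’ data or calibration: merchants mark up all retail prices to recover the merchant fee on card sales, each household’s cross-subsidy is the merchant-fee cost its card use generates minus the markup it pays, and rewards (financed out of those same fees) flow back only to card users.

    # Minimal sketch of the cash-to-card transfer accounting; hypothetical rates
    # and households, not the authors' calibration or data.
    MERCHANT_FEE = 0.02  # merchant fee, as a share of each card sale
    REWARD_RATE = 0.01   # reward rebated to card users, as a share of each card sale

    # (annual spending, share of spending paid by credit card) -- illustrative values
    households = {
        "cash-heavy, low income":  (20_000, 0.10),
        "card-heavy, low income":  (20_000, 0.60),
        "card-heavy, high income": (80_000, 0.80),
    }

    total_spending = sum(spend for spend, _ in households.values())
    card_spending = sum(spend * share for spend, share in households.values())

    # Uniform retail markup rate merchants need so that all buyers cover the merchant fee.
    markup_rate = MERCHANT_FEE * card_spending / total_spending

    for name, (spend, card_share) in households.items():
        markup_paid = markup_rate * spend                # embedded in retail prices
        fee_imposed = MERCHANT_FEE * card_share * spend  # merchant-fee cost this household generates
        cross_subsidy = fee_imposed - markup_paid        # > 0: other consumers cover part of this cost
        rewards = REWARD_RATE * card_share * spend       # flows back only to card users
        print(f"{name:24s} imposes ${fee_imposed:8.2f}, pays ${markup_paid:8.2f} in markups, "
              f"net subsidy ${cross_subsidy:+8.2f}, plus rewards ${rewards:7.2f}")

In this toy calculation the card-heavy, high-income household comes out ahead for the same reason as in the paper: it generates most of the merchant fees and collects most of the rewards, while the markup that funds those fees is spread over everyone’s purchases.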
Key Findings
•	On average, each cash payer pays $149 to card users and each card payer receives $1,133 from cash
users every year, a total transfer of $1,282 from the average cash payer to the average card payer.
•	On average, and after accounting for rewards paid to households by banks, when all households
are divided into two income groups, each low-income household pays $8 to high-income households and each high-income household receives $430 from low-income households every year.
The magnitude of this transfer is even greater when household income is divided into seven categories: on average, the lowest-income household ($20,000 or less annually) pays a transfer of $21
and the highest-income household ($150,000 or more annually) receives a subsidy of $750 each year. The
transfers among income groups are smaller than those between cash and card users because some
low-income households use credit cards and many high-income households use cash.
• About 79 percent of banks’ revenue from credit card merchant fees is obtained from cash payers—and this comes disproportionately from low-income cash payers.
• According to the authors’ model, high-income households appear to receive an inherent utility
benefit from credit card use that is more than twice as high as that received by low-income households. Eliminating the merchant fee and credit card rewards (together) would increase consumer
welfare by 0.15 to 0.26 percent, depending on the degree of concavity of utility, which also can be
interpreted in an aggregate model as the degree of aversion to income inequality in society.
Implications
The authors do not allege or imply that banks or credit card companies have designed or operated the credit card market intentionally to produce a regressive transfer from low-income to high-income households. They are not aware of any evidence to support such an allegation, nor do they
have any a priori reason to believe it. However, the existence of a nontrivial regressive transfer in the
credit card market may be a concern that U.S. individuals, businesses, or public policymakers might
wish to address. If so, the authors’ analysis suggests several principles and approaches worth further
study and consideration.
Recent U.S. financial reform legislation, motivated by concerns about competition in payment card
pricing, gives the Federal Reserve responsibility for regulating interchange fees associated with debit
(but not credit) cards. The authors’ analysis provides a different but complementary motivation—
income inequality—for policy intervention in the credit card market.


p-10-4									

$1.25 Trillion Is Still Real Money: Some Facts About
the Effects of the Federal Reserve’s Mortgage
Market Investments
By Andreas Fuster and Paul S. Willen

complete text: http://www.bostonfed.org/economic/ppdp/2010/ppdp1004.htm
e-mail: afuster@fas.harvard.edu, paul.willen@bos.frb.org

Motivation for the Research
On November 25, 2008, the Federal Open Market Committee (FOMC) announced that the Federal Reserve Bank of New York would purchase $500 billion of mortgage-backed securities (MBS)
issued by Fannie Mae and Freddie Mac, the two main government-sponsored entities (GSEs) for
housing, as well as ones guaranteed by the government agency Ginnie Mae. This plan, informally termed the large-scale asset purchase program (LSAP), was intended to reduce the spread between mortgage interest rates and other interest rates of similar duration. The LSAP program was
largely meant to assist the U.S. housing market, which had slowed considerably, and to help stabilize
broader financial markets. In March 2009, the FOMC expanded the LSAP program, and before
its conclusion on March 31, 2010, the Federal Reserve bought a total of $1.25 trillion of agency
MBS, along with $175 billion of GSE debt and $300 billion of U.S. Treasury securities. This FOMC
action was substantial; the total LSAP intervention corresponds to about 22 percent of the total
outstanding stock of these securities. Despite the LSAP program’s scope and scale, relatively little is
known about its effect on the U.S. mortgage market and the overall U.S. macroeconomy. This paper
investigates the program’s impact on the U.S. mortgage market in terms of credit availability and
macroeconomic effects, and draws lessons for similar policy interventions in the future.
Research Approach
The authors employ an event-study approach and measure the movements in both interest rates and
the quantity of loan applications around the initial LSAP announcement in late November 2008
and subsequent announcement dates. The complex manner in which lenders price mortgages makes
it challenging to discover how borrower opportunities changed after the LSAP program was announced. Obviously interest rates differ depending on the amount of the loan, the borrower’s credit
score, and whether the loan is fixed-rate or variable-rate, prime or subprime. But the borrower’s
choice set is further complicated by discount points tied to bond market pricing, the par value of
the loan (meaning the amount the lender is financing), and the market value of the loan, which is
the price paid by investors in the secondary market for MBS. To bridge the gap between the market
price and the par value of the loan, lenders pay or receive discount points at the loan closing, and for
a given loan, lenders may offer a combination of different contract interest rates and corresponding
discount points. Discount points can be positive or negative and can influence whether it makes
sense for a borrower to purchase or refinance. Positive discount points mean that the lender pays.
Brokers often use positive points to offer “no points/no close” mortgages, meaning that the borrower
is not liable for paying points or closing costs. Negative discount points mean that the borrower
or sometimes (in the case of a purchase mortgage) the property seller must pay. For instance, if a
borrower wants an interest-only mortgage, the lender requires the borrower to pay points to obtain
these terms. The authors define the borrower opportunity set as the combined set of available discount points and interest rates for a given mortgage transaction at a given time.
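
The idea of a rate-point opportunity set can be made concrete with a small sketch. The menu below is hypothetical, not LoanSifter data, and points here are expressed as an amount the borrower pays at closing (a negative value is a lender credit), which is the reverse of the sign convention used in the summary above.

    # Minimal sketch of a borrower opportunity set: a menu of contract rates and
    # discount points for the same hypothetical loan. Positive points are paid by
    # the borrower at closing; negative points are a lender credit.
    LOAN = 300_000       # amount financed (hypothetical)
    TERM_MONTHS = 360    # 30-year fixed-rate mortgage

    # (annual contract rate, borrower-paid points as a percent of the loan amount)
    menu = [(0.0575, -1.0), (0.0550, 0.0), (0.0525, 1.0), (0.0500, 2.0)]

    def monthly_payment(principal, annual_rate, n_months):
        """Standard level-payment (annuity) formula."""
        r = annual_rate / 12
        return principal * r / (1 - (1 + r) ** -n_months)

    for rate, points in menu:
        upfront = LOAN * points / 100   # cash due (or credited) at closing
        payment = monthly_payment(LOAN, rate, TERM_MONTHS)
        print(f"rate {rate:.2%}  points {points:+.1f}  upfront ${upfront:+10,.0f}  monthly ${payment:,.0f}")

A shift in the opportunity set, such as the one the authors study around the LSAP announcement, can move some rows of this menu much more than others, which is why a single quoted rate understates how differently the program treated otherwise similar borrowers.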
There are two ways that the authors’ analysis is innovative. First, they focus on the entire menu
of price options available to prospective borrowers, rather than focusing on a single interest rate.
Second, unlike previous researchers they measure how many borrowers searched for loans, applied
for loans, were rejected for loans, and received loans immediately before and after the LSAP program was announced, as well as further along in the life of the program. By examining the credit
market conditions prevailing before and after the initial LSAP announcement, and by employing
micro-level data, the authors are able to examine whether borrower characteristics changed after
the program was announced—in other words to answer the question, did the LSAP program help
borrowers?
The authors use three different datasets each of which provides a different view of the primary U.S.
mortgage market just before the program’s inception and during its administration, and which taken
together provide a broad view of mortgage market activity. The first dataset, from LoanSifter, a firm
that aggregates lending rates and terms offered by over 140 lenders, provides a snapshot of a significant portion of the entire U.S. mortgage lending industry. Updated on a daily basis, the LoanSifter
database allows a mortgage broker to search many variables that influence lending rates and terms,
including the loan amount, the borrower’s FICO score (a measure of creditworthiness based on a
scale of 300 to 850, with the median around 720), the state where the property is located, whether
the loan is intended for a purchase of a new home or a refinance of an existing loan, the borrower’s
loan-to-value and debt-to-income ratios, whether the loan is fixed rate, variable rate, or requires
a balloon payment, and whether the property is being purchased as a primary residence or for investment purposes. When using the LoanSifter database, a broker enters either a desired number
of discount points or a desired interest rate and receives various offers. The authors have access to
LoanSifter’s daily database from October 16, 2008 to February 9, 2009 (excluding December 8–14,
2008 due to a backup failure). The authors pose as a certain broker and access the offers that would
have been received from affiliated lenders—on average, each broker in the sample has access to 20
lenders, although there is considerable variation in this number. The authors have access to loan offers made from January 1, 2008 to April 9, 2009, and have the history of actual searches conducted
by brokers and, after February 2009, by borrowers directly via Zillow, a consumer web site. Thus,
the authors can see borrower and loan characteristics, as well as the best offer received by the broker.
The second database is from the Home Mortgage Disclosure Act (HMDA), which requires lenders
to provide information about all applications for mortgage credit. HMDA collects information on
an applicant’s race, income, gender, occupancy status, loan amount, and property location. The lender
must also disclose whether the loan application was approved, denied, or withdrawn by the borrower.
The authors have access to confidential data files that include the loan application and action dates,
information that is not disclosed in the public data. Thus, the authors can observe when borrowers took a potentially costly step toward obtaining a new loan, while the origination date allows them to
link each application to loan-level data sets, which offer a wealth of additional information about the
borrower and the loan. The HMDA data cover over 90 percent of the total U.S. mortgage market.
The third database, a collection of records from loan-servicing agencies maintained by Lender Processing Services (LPS), records the loan amount, the property value and location, whether it is a
prime or subprime mortgage, whether it remains in the lender’s portfolio or was packaged into a
MBS, whether it is a first lien or a second lien loan, and the interest rate terms—including when an
adjustment might take place. The LPS dataset covers about 60 percent of the U.S. mortgage market,
but Avery et al. (2010) note that it appears to overrepresent GSE lending and to underrepresent
jumbo and subprime lending. The authors perform some analyses by matching the HMDA and
LPS data using a loan’s origination date, the loan amount, and the property’s zip code—this permits
getting detailed loan information for about 35 percent of the loans reported in the HMDA data.
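
A stripped-down version of that match might look like the sketch below. The file names, column labels, and the exact-match rule are illustrative assumptions; the authors’ matching procedure is not spelled out here beyond the three keys they mention.

    # Minimal sketch of matching HMDA records to LPS servicing records on the three
    # keys described above. File names and columns are hypothetical placeholders.
    import pandas as pd

    hmda = pd.read_csv("hmda_confidential.csv")   # includes application, action, and origination dates
    lps = pd.read_csv("lps_servicing.csv")        # includes loan terms, interest rate type, lien status

    keys = ["origination_date", "loan_amount", "zip_code"]

    # Keep only records that are unique on the keys on both sides, so that one HMDA
    # loan never pairs with several servicing records (a conservative, illustrative rule).
    hmda_unique = hmda.drop_duplicates(subset=keys, keep=False)
    lps_unique = lps.drop_duplicates(subset=keys, keep=False)

    matched = hmda_unique.merge(lps_unique, on=keys, how="inner", suffixes=("_hmda", "_lps"))
    print(f"matched {len(matched)} of {len(hmda)} HMDA loans "
          f"({100 * len(matched) / len(hmda):.1f} percent)")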
Key Findings
•	The initial November 25 announcement of the LSAP program led to an immediate and large
increase in borrower activity in the primary mortgage market. The LoanSifter data show
an approximately 300 percent increase in the number of borrowers shopping for refinance
mortgages on November 25 compared with preceding days. The nonpublic HMDA data show
that this increase in searches translated into a 150–200 percent increase in the number of applications and subsequent originations. Search activity peaked in mid-December and early January, and again after the program’s extension was announced on March 18.
•	The LSAP program resulted in significant interest rate reductions for prospective borrowers.
But due to the complex interaction of FICO scores, interest rates, and discount points, for some
borrowers the LSAP program was a boon, while for other very similar and sometimes even observationally equivalent borrowers, the LSAP program was irrelevant. Mortgage lenders typically
impose cutoff points at FICO scores of 680, 700, and 720, and in all three cases the data show that
loan originations were over 25 percent higher immediately above the cutoff than right below it (a simple tabulation of this comparison is sketched after this list).
•	The initial LSAP program announcement resulted in a marked shift in borrower characteristics.
Refinancing activity became highly skewed towards borrowers with high credit scores. The authors document this by using a matched sample of loans from LPS and HMDA that determines
the application date of originated mortgages. On November 25, refinance applications from borrowers
with FICO scores below 700 more than doubled relative to the previous day, the day before the initial
LSAP announcement. For borrowers with FICO scores between 700 and 720, the application volume
more than tripled; it quadrupled for borrowers with scores between 720 and 740, quintupled for those
with scores between 740 and 760, and for highly creditworthy borrowers with FICO scores above 760,
application activity increased over seven-fold. These
differences in refinancing activity persisted throughout the life of the LSAP program.
•	In the days immediately after November 25, the reduction in rates available to borrowers was
more pronounced for loans that required borrowers to pay discount points than for loans for
which borrowers expected the lender to pay points. The authors’ data show that a prototypical
borrower who expected to pay one point at closing saw the interest rate fall by 60 basis points, on
average, across lenders, while a borrower who expected the lender to pay one point saw the interest
rate fall by only 16 basis points. This asymmetry became more pronounced over time—by the first
week of January 2009, the average rate differential obtained by paying one discount point instead
of receiving one point had gone up to 120 basis points, compared with 70 basis points in the weeks
before the LSAP program was announced.
•	The HMDA information on applicant income shows that denial rates increased for all applicants
in all income categories after the LSAP program began.
•	The LSAP program did not significantly affect the market for purchase mortgages or originations. The LoanSifter data show little effect even on search activity, suggesting that the program
announcement did not increase interest among prospective buyers who did not already own a home.
•	Borrowers with poor credit are at higher risk of default, and are required to pay additional points
when closing a mortgage. The LSAP program did not reduce rates for borrowers with poorer
credit scores as much as it did for borrowers with good credit.
• The authors suggest that the presence of additional fees, known as loan-level price adjustments
(LLPAs), charged by the GSEs may help explain why borrowers with high credit scores were
overrepresented among those who benefited from the LSAP program. LLPAs were announced by Fannie Mae on
November 6, 2007, and Freddie Mac followed its lead a week later. The existence of price adjustments tied to borrowers’ creditworthiness was a relatively new aspect of the agency loan market,
with the fee depending on the mortgage’s loan-to-value ratio and the borrower’s FICO score. The
relationship between borrowers’ FICO scores and refinancing activity after the LSAP program
began was not smooth, but instead displayed discontinuities that coincide exactly with increases
in fees charged to borrowers. Since these additional fees interact with the changes in borrowers’
rate-point opportunity set, the new fees may have had a particularly large impact on the cost of
refinancing. Hence, many borrowers with low FICO scores may have found that they did not have
a sufficient incentive to refinance. It is also quite possible that borrowers with lower FICO scores
are more credit-constrained and hence less able or willing to pay discount points or other fees
involved with financing or refinancing a mortgage.
• Borrowers with less robust credit may also have been prevented from refinancing if they had little
positive equity in their homes, or if they were simply not as financially attuned to the benefits of
refinancing as more creditworthy borrowers, and hence did not pursue the opportunity to refinance
their mortgages.
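
The cutoff comparison referenced earlier in this list can be sketched as a simple tabulation of originations in narrow FICO bands around each threshold. The data file, column name, and ten-point bandwidth below are hypothetical choices, not the authors’ specification.

    # Minimal sketch of comparing refinance originations just above and just below
    # the FICO cutoffs of 680, 700, and 720. Inputs are hypothetical placeholders.
    import pandas as pd

    loans = pd.read_csv("matched_refinance_originations.csv")   # one row per originated refinance

    for cutoff in (680, 700, 720):
        below = loans[(loans["fico"] >= cutoff - 10) & (loans["fico"] < cutoff)]
        above = loans[(loans["fico"] >= cutoff) & (loans["fico"] < cutoff + 10)]
        jump = 100 * (len(above) - len(below)) / max(len(below), 1)
        print(f"FICO {cutoff}: {len(below)} just below, {len(above)} just above ({jump:+.0f} percent)")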
Implications
The authors’ results raise important policy implications. First, the LSAP announcement had immediate and large effects on U.S. mortgage prices and ignited activity in what had been a moribund
market. This greatly contrasts with other economic stimulus programs, such as tax cuts or “shovel
ready” construction projects, which typically operate with a lag before affecting economic activity. Yet it is questionable whether the LSAP program truly stimulated consumption and stabilized house prices. The
program did not result in a large increase in new purchase mortgages, although increased search activity could be interpreted as indicating that more households were considering a purchase. Rather,
the data suggest that most of the borrowers who took advantage of more favorable terms enabled
by the program were creditworthy homeowners looking to refinance an existing mortgage. Such
borrowers are less apt to funnel any savings realized from refinancing into additional consumption
expenditures. Homeowners more constrained by credit or income, such as subprime borrowers, were
not always able to take advantage of potentially lower rates. The authors suggest that because the
LSAP program’s inception produced such a dramatic change in the FICO-score distribution of
successful applicants, this outcome merits further investigation for possible unintended consequences.
p-10-5									

Reasonable People Did Disagree: Optimism and Pessimism
About the U.S. Housing Market Before the Crash
By Kristopher S. Gerardi, Christopher L. Foote, and Paul S. Willen
complete text: http://www.bostonfed.org/economic/ppdp/2010/ppdp1005.htm
e-mail: kristopher.gerardi@atl.frb.org, chris.foote@bos.frb.org, paul.willen@bos.frb.org

Motivation for the Research
Much of the blame for the recent mortgage crisis and ensuing “Great Recession” can be traced to
unrealistically high expectations for U.S. housing prices. Starting in the mid-to-late 1990s, house
prices experienced an almost decade-long expansion, with real house prices rising 72 percent according to the Case-Shiller repeat-sales index, and 41 percent according to the OFHEO (now
FHFA) repeat-sales index. This inflation-adjusted price growth was unprecedented; from the late
1940s through the mid-1990s, real house prices were essentially flat. In hindsight, it seems that by
the mid-2000s, housing prices had risen to unsustainable heights, so a crash in housing values could
have easily been foreseen. But the crash caught many observers unaware. This paper pieces together
the real-time evolution of beliefs about U.S. house prices during the peak of the recent housing
boom. The goal is to provide a retrospective understanding of why so many observers were unconcerned about housing prices during the housing boom—a boom that set the stage for the largest
financial crisis since the Great Depression.


Research Approach
The authors review the work of prominent academic and professional economists who wrote about
the U.S. housing market during the last decade. They pay particular attention to opinions written
about the 2004–2006 period. Collectively, these views take one of three positions: (1) a “pessimistic”
minority assessment of the U.S. housing market; (2) a strong “optimistic” assessment; and (3) an
“agnostic” majority viewpoint that was unwilling to take a strong position either way about U.S.
house prices.
Key Findings
•	Among the pessimists, Dean Baker was one of the first economists to claim that the U.S. housing
market was experiencing a bubble. He wrote in 2002 that the price-rent ratio in the housing market had risen almost 50 percent in nominal terms during the previous seven years. The implication was that this increase was out of line with previous norms and thus unsustainable. Karl Case
and Robert Shiller (2003) found that overall U.S. housing prices tracked market fundamentals
fairly well, but they discovered some evidence of speculative thinking in a survey that measured
attitudes among housing-market participants. Given the decline in housing prices that actually
occurred, these pessimistic economists now seem prescient. But some of them argued for a bubble
years before the housing market peaked, so they lost credibility when those predictions did not
materialize.
•	One housing pessimist, Paul Krugman, claimed in 2005 that the U.S. housing market could be
divided into “Flatland,” where prices remained in line with fundamentals, and a “Zoned Zone,”
where restrictions on new construction contributed to large house-price increases. The authors of
the paper present some empirical work indicating that many land-scarce cities, such as Boston,
New York, and San Francisco, did indeed experience sizeable price increases during the boom. But
the authors also found that some of the largest price increases were in cities like Las Vegas and
Phoenix, which had ample land to accommodate new construction. The authors conclude that the
data do not support Krugman’s claim that differences in city-level house-price growth stemmed
mainly from varying housing-supply elasticities interacting with a uniform rise in demand.
•	Among the optimists, Himmelberg, Mayer, and Sinai (2005) offered the most widely cited case
against the existence of a housing bubble. They took issue with the empirical measures used by
the pessimists, such as the price-rent ratio or the price-income ratio. Instead they studied the user
cost of housing, a concept that recognizes the many factors that either raise or lower the true cost
of homeownership. These factors include property taxes, maintenance costs, anticipated capital
gains, the mortgage interest deduction, and the risk of large capital losses. In their empirical work,
Himmelberg, Mayer, and Sinai found that user costs varied substantially across U.S. cities, but
that these costs did not indicate the presence of a nationwide housing bubble, as they were generally within the range of historical experience. (A simplified user-cost calculation is sketched after this list.)
•	The majority of professional economists were agnostic on the question of whether a housing price
bubble existed in the United States. Krainer and Wei (2004) studied the price-rent ratio for housing using statistical techniques to predict stock market returns. While they found evidence that
beliefs about future returns were important in driving current prices, they did not take a strong
stand on whether a bubble existed. Davis, Lehnert, and Martin (2008) constructed a long timeseries of rent-price ratios going back to 1960, and found that up until 1995 the rent-price ratio
fluctuated between 5 and 5.5 percent, but that it declined sharply to 3.5 percent between 1995
and 2006. They concluded that a return to the historic average would require a modest decline in
housing prices.
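
The user-cost logic invoked by the optimists can be illustrated with a back-of-the-envelope calculation. Every number below is a hypothetical placeholder; Himmelberg, Mayer, and Sinai’s actual calculations are city-specific and more detailed.

    # Minimal sketch of an annual user-cost-of-housing comparison with rent.
    # All inputs are hypothetical; they are not drawn from the paper.
    price = 400_000                 # house price
    annual_rent = 18_000            # rent on a comparable house

    mortgage_rate = 0.06            # cost of funds
    marginal_tax_rate = 0.25        # mortgage interest and property taxes are deductible
    property_tax_rate = 0.015
    maintenance_rate = 0.02
    risk_premium = 0.02             # compensation for the risk of large capital losses
    expected_appreciation = 0.035   # anticipated capital gain

    user_cost_rate = ((mortgage_rate + property_tax_rate) * (1 - marginal_tax_rate)
                      + maintenance_rate + risk_premium - expected_appreciation)
    annual_user_cost = price * user_cost_rate

    print(f"annual user cost ${annual_user_cost:,.0f} vs. annual rent ${annual_rent:,.0f}")
    verdict = "cheap" if annual_user_cost < annual_rent else "expensive"
    print(f"at these assumptions, owning looks {verdict} relative to renting")

Because plausible changes in expected appreciation or interest rates move the user cost a great deal, the same price-rent ratio can look reasonable to one analyst and bubbly to another, which is part of why reasonable people disagreed.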


Implications
Clearly, well-respected economists looked at the U.S. housing market during the early-to-mid-2000s and arrived at vastly different conclusions about the future trajectory of house prices. Moreover, many if not most of the economists who studied the housing market were not comfortable
making predictions one way or the other about where prices would go. The authors conjecture that
this majority agnostic opinion is a natural outgrowth of the type of training that Ph.D. economists
receive. In general, economists are taught that asset markets are efficient, in that these markets
already contain relevant information about the future supply of and demand for traded assets. This
efficiency assumption implies that asset prices are fundamentally unpredictable, so economists will
be loath to take on the heavy burden of proof to claim otherwise. While the assumption of efficient
asset markets is common among economists, it appears that large and systemic departures from efficiency do take place. Such deviations have been discussed in the theoretical literature, but matching
such models to real-world data is difficult. The recent housing crisis may prove helpful in this regard.
In any case, the authors claim that understanding how economists think about asset prices in real
time is critically important when crafting policy. Given widely held views about asset markets,
policymakers and regulators may not be able to prevent a bubble from forming, nor may they be able
to identify a bubble after the fact. Rather than try to prevent or pop asset bubbles, a more promising
policy stance might be to ensure that potential investors not only understand the risks associated
with investments but also are well prepared for them. As an example, individual homeowners should
be insured against significant declines in housing values. A standard way to do this—which was
sadly ignored by many homeowners during the housing boom—is to make a substantial down payment, which guards against incurring negative equity if and when house prices fall.
p-10-6 								

A Profile of the Mortgage Crisis in a
Low-and-Moderate-Income Community

By Lynn M. Fisher, Lauren Lambie-Hanson, and Paul S. Willen
complete text: http://www.bostonfed.org/economic/ppdp/2010/ppdp1006.htm
e-mail: lynn_fisher@kenan-flagler.unc.edu, lslh@mit.edu, paul.willen@bos.frb.org

Motivation for the Research
It is widely accepted that the U.S. foreclosure crisis has damaged communities, especially those whose
residents fall into the low-and-moderate-income category. Yet systematic, community-level measures of the
precise effects that falling house prices have had on sales and foreclosure activity are less
common, and as a result many assertions about how low-and-moderate-income communities have
fared during the crisis do not have solid empirical backing. This paper is an attempt, admittedly quite
narrow in scope, to study the effects of the foreclosure crisis upon one hard-hit community.
Chelsea, Massachusetts, a city located just north of Boston, was particularly affected by the foreclosure crisis. Ninety percent of its 34,356 residents live in census tracts identified by the Federal
Financial Institutions Examination Council as low-and-moderate-income. Over 56 percent of its residents are
Hispanic or Latino, and communities with high concentrations of minority and low-income residents, as well as borrowers with limited credit records (like immigrants), became targets for high-cost mortgage lending during the recent housing boom. The 2008 census recorded 12,798 housing
units in Chelsea, of which 8,158 (almost two-thirds) were built before 1940.
Only 4,609 of Chelsea’s housing units are owner-occupied, and only 17 percent are single-family
homes. The city’s most typical residential structure is a small multifamily building, as 6,579 of the
units are two-to-four-unit buildings. Chelsea’s residential property market peaked in 2005, and by
2009 house prices had fallen by almost 50 percent. Lenders foreclosed upon or agreed to short sales
on almost 8 percent of the city’s one-to-three-family properties. For the purposes of this study, the
authors define a short sale as a transaction for which the seller receives less than 75 percent of the
total amount of the purchase mortgage.
Research Approach
The authors exploit an exceptionally good dataset to explore five specific items impacted by the housing crisis: (1) repeat-sales prices, (2) foreclosure activity, (3) the accumulation of bank-owned properties, (4) investments made by owners to improve their properties, and (5) sales activity. The dataset they
use is a combination of three individual sources. The first is public record property-level transactions
assembled by the Warren Group, a Massachusetts company that collects residential property records
in New England. The Warren Group dataset has information on all one-to-three-family home and
condominium transactions taking place from 1987 on, including mortgage originations, foreclosure
petitions, foreclosure auctions, and deed transfers for both nonforeclosure and foreclosure sales. This
dataset distinguishes properties sold at foreclosure auction to a third party from those that
become bank-owned properties; it also gauges how long a property is retained by the bank before
being resold. The Warren Group data also contain information on the property’s structural characteristics and assessed valuations since 1987, which the authors supplement with information from the
Chelsea assessor’s office. Over 90 percent of the city’s one-to-three-family and condominium units
are tracked by the Warren Group. The second dataset is assembled by LPS Applied Analytics, and
collects records from large loan-servicing organizations, including the original amount borrowed, the
value and location of the property that secures the loan, whether the loan is classified as prime or subprime, whether the mortgage is held in the lender’s portfolio or was packaged into a mortgage-backed
security, whether the loan is a first-lien or second-lien loan, and whether the interest rate is fixed or
variable, and if the latter, the rules for changing it. Since Massachusetts public records do not identify
short sales, the authors matched a sample of loans from the Warren Group data to the First American
CoreLogic LoanPerformance dataset of securitized subprime loans, which do report investor losses
on the disposition of a loan—allowing the authors to identify short sales. The third main dataset consists of records of every building permit filed with Chelsea’s inspectional services department between
January 1996 and July 2009. Each permit lists the property address, issue date, permit fee paid, and a
description and cost estimate of the scheduled work. After cleaning and standardizing the addresses,
the authors matched the building permit records to the Warren Group data for one-to-three-family
dwellings. Condominiums were excluded because it is difficult to determine which unit the permit
applied to at a given address. The authors regard the building permit data as a good approximation of
the improvements owners made to their properties.
The authors used methods developed by Case and Shiller (1987 and 1989) to construct annual
weighted repeat sales price indices for one-to-three-family units and condominium properties, excluding properties sold through foreclosure or reverting to bank-owned status. The authors also
constructed indices separately by property class, along with a hedonic index measuring housing quality;
sales with outlier appreciation rates or prices were removed to avoid unduly skewing the
results. The authors looked at the Warren Group public record data and the LPS data to track the
monthly delinquency status of loans, though these data only cover a subset of servicers and thus
understate the actual amount of foreclosure activity in Chelsea.
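
The mechanics of a repeat-sales index can be illustrated with the unweighted first stage of the procedure (the Case-Shiller weighting by holding period is omitted here). The sale records below are made up; they are not Warren Group data.

    # Minimal sketch of an unweighted repeat-sales price index: regress the log price
    # change between consecutive sales of the same property on +1/-1 period dummies.
    import numpy as np
    import pandas as pd

    sales = pd.DataFrame({
        "property_id": [1, 1, 2, 2, 3, 3],
        "year":        [2000, 2005, 2000, 2009, 2005, 2009],
        "price":       [150_000, 330_000, 180_000, 200_000, 300_000, 190_000],
    })

    years = sorted(sales["year"].unique())
    pairs = []
    for _pid, grp in sales.sort_values("year").groupby("property_id"):
        rows = grp.to_dict("records")
        for first, second in zip(rows, rows[1:]):          # consecutive sale pairs
            x = np.zeros(len(years))
            x[years.index(second["year"])] = 1.0           # +1 at the resale year
            x[years.index(first["year"])] = -1.0           # -1 at the initial sale year
            pairs.append((x, np.log(second["price"]) - np.log(first["price"])))

    X = np.vstack([x for x, _ in pairs])[:, 1:]            # drop the base year (index = 100 there)
    y = np.array([dy for _, dy in pairs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)           # log index levels relative to the base year
    index = 100 * np.exp(np.concatenate([[0.0], beta]))
    print(dict(zip(years, index.round(1))))

The authors’ indices add Case and Shiller’s second-stage weighting and screen out distressed sales and outlier appreciation rates, but the core idea is the same: only properties that sell at least twice identify the index.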
Key Findings
• Excluding distressed sales, the average house price in Chelsea more than doubled between 2000
and 2005, then fell by about 40 percent by 2009. There was less price appreciation, and hence
less volatility, in the condominium market than in the market for one-to-three-family properties.
While this 40-percent figure is less than the almost 50-percent decline recorded if one includes
distressed sales due to foreclosure or short sale, such a decline still has a substantial deleterious effect on an owner’s
housing investment, especially if the home was purchased near the peak of the recent housing
boom. Since a typical homeowner is highly leveraged, falling house prices likely wiped out any
downpayment investment for most Chelsea homeowners who purchased since 2000. In contrast,
house prices across Massachusetts rose less dramatically than in Chelsea and fell by less than 13
percent by 2009.
•	After a period of exceptionally low activity, and no foreclosures between 2003 and 2004, Chelsea
saw a foreclosure increase beginning in 2006, peaking at 125 foreclosures in 2008 and then dropping to about 50 foreclosures in 2009. From 2006 through 2009, lenders had foreclosed on 263
properties, or roughly 6 percent of homes; 8 percent if short sales are included. As of April 2010,
98 properties were identified as being in post-petition, pre-deed foreclosure status, and another
152 properties were more than 90 days delinquent on their mortgage payments. Buyers who purchased homes after prices had stabilized had better credit scores, and may be in a better
position to avoid any eventual distressed sale.
•	The foreclosure crisis has resulted in a large accumulation of bank-owned properties [real-estate
owned (REO) in the industry lingo], and this inventory build-up has concerned policymakers,
in part because of the perception that vacant homes invite theft, vandalism, and a deterioration
of property values that may generate more foreclosures. Such policy concerns are especially pronounced for low-and-moderate-income communities. While Chelsea did experience a build-up
in REO properties after the foreclosure crisis began in 2006, with these stocks increasing after the
financial crisis began—tellingly, there were 41 bank-owned properties in 2007 and 120 in 2008—
by 2009 there were two positive developments. Lenders made increasing use of short sales, so that
properties passed directly from one owner to another. Banks also increased their sales of distressed
properties at foreclosure auctions. The main point is that banks did find willing buyers for the
properties, indicating that even at depressed prices Chelsea remains an attractive community to
many. In most cases, these sales have gone to owner-occupants, not to investors concerned with
flipping the property.
• While some observers argue that owners with no positive home equity are unlikely to invest in the
property’s upkeep, Chelsea tells a more optimistic story. Judging from the building permit data,
Chelsea’s homeowners remain quite willing to invest in their properties even if current house prices in the city are depressed. While work permit fees for improvements made to one-to-three-unit
properties peaked at almost $1.2 million in 2006:Q3 and then fell to $738,000 in 2008:Q3,
they rebounded to $900,000 in 2009:Q2. There are some interpretation problems associated with
these data, given that a post-2006 drop in home equity may have precluded obtaining cash-out
refinances or second mortgages, the two traditional sources for funding home improvements.
Furthermore, there was a possible credit crunch in 2008 following the collapse of Lehman Brothers and AIG. But despite credit supply issues that continued into 2009, home improvement investment, as proxied by the issuance of building permits, increased. Over the last decade in Chelsea,
recent homebuyers (those who purchased one to three years earlier) have accounted for a 27 to 35
percent share of these permits, and some of these owners saw their property values drop by 20 to
50 percent, depending on the year they bought the house. So, while Chelsea residents may have
lost equity in their homes, they did not lose an ongoing interest in investing in these properties.
•	The homeowners who exited the Chelsea market seem to consist mainly of individuals who purchased at the market peak in the mid-2000s, not the city’s long-term residents. During the 2004–
2005 height of the housing boom, about 45 homes, or 1 percent of the city’s residential housing
stock, changed ownership each month, and given this period’s rising prices, almost none of these
transactions represent distressed sales. By 2007, the total monthly sales were cut in half, and over
25 percent of these were distressed sales. While total sales in 2008 and 2009 rose to about 28 sales
per month, the majority of these were distressed sales, and this volume is still 40 percent lower
than before the housing crisis. While some argue that foreclosures drive down house prices by
increasing the supply of properties on the market, the drop in both prices and transactions implies
a reduction in demand. Chelsea’s long-term owners are not prone to selling, and should be able to
take advantage of higher prices when a recovery in house prices eventually occurs.
Implications
The authors’ analysis of Chelsea paints a picture of a fundamentally viable community coping, albeit
imperfectly, with a bad situation. While many homeowners lost equity in their homes, or lost their
homes outright, other buyers stepped in and assumed ownership of these properties. Chelsea’s story
offers a more positive take on what is often a cautionary tale about how low-and-moderate-income
communities respond to a housing crisis. Yet the authors are well aware that Chelsea’s location close
to Boston, an economically diverse city, may account for much of the hopeful picture it paints. For
similar cities located elsewhere in New England or in the Midwest, the collapse of manufacturing
industries underpinning the local economy has raised doubts about their long-term viability, and
this is reflected in the local housing market.

Working Papers
w-10-9 								

In Search of Real Rigidities
by Gita Gopinath and Oleg Itskhoki

complete text: http://www.bostonfed.org/economic/wp/wp2010/wp1009.htm
e-mail: gopinath@harvard.edu, itskhoki@princeton.edu

Motivation for the Research
Real rigidities are mechanisms that dampen price responses of firms because of factors such as strategic complementarities in price setting, real wage rigidity, the dependence of costs on input prices
that have yet to adjust, and others. A large literature has recently emerged that documents patterns
of nominal price stickiness at the very micro level—the goods level. The documented durations of
a given level of nominal prices are significantly shorter than the estimated duration of the real effects of money
on output. The long-lasting real effects of monetary shocks can be reconciled with moderate price
stickiness if real rigidities are an important phenomenon.
An important empirical literature has emerged recently that evaluates the question: are quantitatively important real rigidities present in the data? The answer appears to depend on what data one
examines. In international economics, there is a large and growing literature that estimates exchange
rate pass-through from exchange rate shocks into prices. The estimated exchange rate pass-through
is found to be incomplete; that is, if the U.S. dollar depreciates by 10 percent relative to the euro,
the dollar prices of goods imported from the euro area increase by less than 10 percent even in the
long run. This incomplete pass-through is argued to be consistent with the presence of important
real rigidities. Changes in exchange rates generate relative price movements for the same good across
markets despite costs being the same. This destination-specific markup is argued to be consistent
with the presence of significant strategic complementarities in price setting. The closed economy
literature, on the other hand, uses indirect tests of real rigidities in the absence of well-identified and
sizeable shocks like exchange rate shocks. The recent work based on micro evidence for retail prices
argues that real rigidities are not an empirically important phenomenon.
There are many developments in the measurement of real rigidities in the closed and open economy
literatures, but these developments have taken place in parallel and have not been reconciled. In this
paper the authors bring together the closed economy macro literature, which focuses mainly on
indirect tests of real rigidities, with the international pricing literature, which uses an observable
and sizeable shock—namely the exchange rate shock—to evaluate the behavior of prices, and in
particular, the behavior of strategic complementarities in pricing. The paper presents new empirical
results on price adjustment using international data; a closed economy model with differential
markup variability in the retail and wholesale sectors and sluggish price adjustment; and a model of
bargaining and variable markups in intermediate-goods pricing.
Research Approach
The authors first review the recent evidence on real rigidities to evaluate whether a consensus is
emerging on the importance of these rigidities in the data. Second, since the two literatures use different metrics to evaluate the importance of real rigidities, the authors use unpublished international
price data collected by the U.S. Bureau of Labor Statistics (BLS) to estimate both metrics using the
same data. Third, they present new evidence on the dynamic response of international prices to exchange rate shocks and the response to competitor prices. Fourth, they calibrate sticky-price macro
models (Calvo and menu cost) with a retail and wholesale sector to the evidence on the variable
markup channel of real rigidities. They evaluate their ability to match the behavior of prices in the
data and to measure the extent of monetary nonneutrality that this channel generates.
In reviewing the literature, the authors group evidence based on whether the prices studied refer to retail (consumer) prices or wholesale prices. Wholesale prices can alternatively be viewed as
intermediate-good prices in business-to-business transactions. The literature on exchange rate passthrough into at-the-dock prices of goods refers to wholesale prices. The authors next use the BLS
import price data to perform tests of real rigidity, using measures employed in the closed economy
literature, namely, the persistence of reset-price inflation (Bils, Klenow, and Malin 2009, henceforth
BKM) and measures employed in the open economy literature, namely, the dynamic response of
prices to exchange rate shocks. Next, the authors evaluate the importance of strategic complementarities in price setting for incomplete pass-through, using some measures that capture the pricing behavior of competitors and measures that capture the extent of competition in sectors. These
measures are not perfect but provide useful information about pricing behavior. The authors also
evaluate the sensitivity of firm pricing to shocks to competitors by measuring the response of prices
to movements in the U.S. trade-weighted exchange rate that is orthogonal to the bilateral exchange
rate for the country.
An important distinction between retail prices and wholesale prices is that the latter capture business-to-business transactions. Consequently, the strength of the buyer’s bargaining power can impact the extent of the pass-through. The authors use unpublished measures of market concentration
in the import sector provided to them by the BLS—specifically, the Herfindahl index and the number of importers that make up the top 50 percent of trade—to evaluate this hypothesis.
Lastly, the authors use estimates from the data to calibrate a closed economy model with different degrees of variable markup elasticity at the wholesale and retail level. In the existing monetary
literature there is typically no interesting distinction made between the retail and wholesale sectors.
The authors calibrate the parameters for the wholesale sector, using the evidence from international
prices. In the benchmark model, they use Calvo price setting and later evaluate the case of menu
cost pricing.
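
The pass-through metric used on the BLS microdata can be sketched as a regression of individual price changes on cumulative exchange rate movements. The file, column names, and clustering choice below are illustrative assumptions, not the authors’ exact specification.

    # Minimal sketch of a pass-through regression, conditional on a price change:
    # regress the item's log price change on the cumulative exchange rate change since
    # its last adjustment and over its previous price spell. Inputs are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per observed price change of an imported item:
    #   item_id           identifier for the item
    #   dlog_price        log change in the dollar price at adjustment
    #   dlog_er_current   cumulative log exchange rate change since the last adjustment
    #   dlog_er_previous  cumulative log exchange rate change over the previous price spell
    changes = pd.read_csv("import_price_changes.csv")

    model = smf.ols("dlog_price ~ dlog_er_current + dlog_er_previous", data=changes).fit(
        cov_type="cluster", cov_kwds={"groups": changes["item_id"]}
    )
    print(model.params)   # analogous to the conditional pass-through estimates in the findings below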
Key Findings
•	A review of the existing literature reveals one surprisingly consistent result across several studies—
surprising since these studies use different methodologies and datasets. This result is that strategic complementarities—for example, operating through variable markups—play only a small
role in affecting retail prices yet appear to have quite an important influence on wholesale prices.


•	The actual import-price inflation series has a monthly persistence of 0.56, while the corresponding reset-price inflation series has a persistence of −0.04. In comparison, BKM estimate
for retail prices that the inflation series has persistence of −0.05, while the reset-price inflation
series has a persistence of −0.41. In comparison to retail prices, import prices have greater persistence, but the magnitude of this persistence suggests very little sluggishness in price adjustment.
•	Projecting the aggregate import reset-price inflation on lags of the trade-weighted nominal
exchange rate changes yields autocorrelation of the fitted series substantially higher than that
of unconditional reset-price inflation (0.33 versus −0.04). Individual import prices, conditional on changing, respond to exchange rate shocks from before the last time the price was adjusted,
and these lagged effects are large and statistically significant. Conditional on a price change, the
pass-through of the cumulative exchange rate change since the last price adjustment is
0.11, and the response to the cumulative exchange rate change over the previous price duration is 0.08.
Both these pieces of evidence on the response to a specific shock suggest a more important role for real rigidities than the point estimate of the autocorrelation of reset prices would imply.
•	The prices set by competitor firms (firms in the same 10-digit or 4-digit harmonized code in
the import price sample) have an important positive effect on firms’ pricing, reducing the direct
pass-through of the exchange rate into prices. The point estimates are consistent with a markup
elasticity of 1.5, which implies a 40 percent pass-through for purely idiosyncratic shocks (pass-through is roughly 1/(1 + markup elasticity), so 1/(1 + 1.5) = 0.4).
•	The response of prices to movements in the U.S. trade-weighted exchange rate that is orthogonal to the bilateral exchange rate for the country is sizeable and significant. In a similar
vein, comparing the response to bilateral exchange rate shocks versus trade-weighted exchange
rate shocks shows that the exchange rate pass-through is higher in response to a more aggregate shock than to more idiosyncratic shocks. The incompleteness in pass-through is also related to certain sectoral features that proxy for the level of competition among importers.
•	Point estimates, using the Herfindahl index and the number of importers that make up the
top 50 percent of trade, suggest that in many cases sectors dominated by a few large importers
have lower pass-through from foreign firms; however, the estimated standard errors are large.
•	The model shows that sluggishness in the response of wholesale prices to monetary shocks feeds
into slow adjustment of retail prices. However, inflation, as measured by the aggregate inflation
and reset-price inflation series, exhibits little persistence, since the movement of these series is
dominated by more transitory shocks. Yet, conditional on monetary shocks or exchange-rate-like
shocks, inflation series exhibit considerable persistence. Similarly, output series can exhibit significant monetary non-neutralities. In addition, while calibrated real rigidities in the form of variable markups increase the size of the contract multiplier, these effects are limited unless coupled with exogenous sources of persistence. Still, the model fails to match the slow dynamics of price adjustment documented in the empirical data, suggesting that additional sources of persistence
are missing from the model.
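As a rough cross-check on the markup-elasticity figure above, note that in standard variable-markup pricing models the pass-through of a purely idiosyncratic shock is approximately 1/(1 + Γ), where Γ is the markup elasticity; this mapping is an assumption borrowed from that class of models rather than a formula stated in the summary:

```latex
\text{pass-through} \;\approx\; \frac{1}{1+\Gamma} \;=\; \frac{1}{1+1.5} \;=\; 0.4
```

which reproduces the 40 percent pass-through reported for purely idiosyncratic shocks.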
Implications
Why does one observe differences in markup variability at the wholesale and retail level? The authors
do not provide a definitive answer here, but conjecture that this result can be consistent with differences in the competitive environment at the two levels. That is, the retail sector can be described as
monopolistically competitive, while the wholesale sector is better described as a bilateral bargaining
environment. The authors present a static bargaining model of wholesale price setting that results
in variable markups and incomplete pass-through of shocks into wholesale prices. Specifically, each
final good producer bargains with its intermediate good suppliers regarding the price of intermediate goods. Given these bargained prices, the final good producer is free to choose quantities of the

intermediate inputs, as well as to set the price of its final good in the monopolistically competitive
consumer market. This model results in constant markups at the retail stage, but in variable markups
at the wholesale level that depend, among other things, on the relative bargaining power of the final
good producer and on the market share of the intermediate good supplier. Important outstanding questions are whether wholesale prices are allocative and also whether contracts specify fixed
prices at fixed quantities. While there is no simple way to test this, Gopinath and Rigobon (2008)
show that contracts for international prices typically involve a fixed price with a specified quantity range, as opposed to a fixed quantity. Moreover, firms export the same good at
the same price to multiple destinations and consequently prices behave in many cases like list prices.
Further, the behavior of prices is consistent with models of monopolistic price setting where prices
are allocative, as discussed in the papers by Gopinath, Itskhoki, and Rigobon (2010), Gopinath and
Itskhoki (2010), and Neiman (2009). Also, as the authors make clear, changes in intermediate good
prices affect final good prices, as these fully pass through into retail consumer prices. These separate
pieces of evidence are consistent with wholesale prices being allocative.
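To make the bargaining logic concrete, a minimal generalized Nash bargaining sketch, which is illustrative rather than the authors' model, would set the wholesale price as

```latex
p^{w} \;=\; \arg\max_{p}\;
\big[\Pi^{R}(p)\big]^{\theta}\,
\big[(p - c)\,q(p)\big]^{1-\theta}
```

where Π^R(p) is the final good producer's profit when it faces input price p and optimally chooses input quantities q(p) and its own consumer price, c is the intermediate supplier's marginal cost, and θ is the final good producer's bargaining weight. Because the bargained price divides a joint surplus, the implied wholesale markup varies with θ and with the supplier's importance to the buyer, so shocks to c pass through into p^w only partially, in line with the variable wholesale markups and constant retail markups described above.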
w-10-10									

Strategic Choice of Preferences: The Persona Model
by David H. Wolpert, Julian C. Jamison, David Newth, and Michael Harre

complete text: http://www.bostonfed.org/economic/wp/wp2010/wp1010.htm
e-mail: david.h.wolpert@nasa.gov, julian.jamison@bos.frb.org, david.newth@csiro.au, mike@centreforthemind.com

Motivation for the Research
In behavioral evolution of preference (EOP) models, it is well established that even in an anonymous single-shot game where every player knows he will never interact with his opponent(s) again,
human players often exhibit “nonrational” behavior (Camerer 2003; Gachter and Herrmann 2009,
and references therein). (“Nonrational” is a term used in the literature to remove the negative connotations of “irrational.”) Stated more precisely, often in an anonymous single-shot game where
there are exogenously provided (often material) underlying preferences, humans do not maximize
these underlying preferences. A great deal of research has modeled such nonrational behavior by
hypothesizing that humans have behavioral preferences that differ from their underlying preferences and that they maximize these behavioral preferences rather than maximizing their underlying
preferences. We refer to such models as behavioral preference models, and the nonrational behavior
given by simultaneous maximization of every player’s behavioral preferences as a behavioral preference equilibrium. Different kinds of behavioral preference models arise for different choices of how
to formalize the underlying and behavioral preferences.
Perhaps the most prominent example of a behavioral preference model is the work on interdependent, other-regarding social preferences (Sobel 2005; Bergstrom 1999; Kockesen et al. 2000). In
that work, both the underlying and the behavioral preferences are formalized as expectations of von
Neumann-Morgenstern utility functions. Accordingly, these behavioral preference models presume
that people do not maximize expected underlying utility subject to the play of their opponents, but
instead maximize expected behavioral utility. Often in this work on interdependent preferences the
behavioral utility function of player i is a parameterized combination of i’s underlying utility function and the underlying utility functions of i’s opponents. A typical analysis in this work seeks to
find parameters of such behavioral utility functions that provide a good fit for some experimental
data. Other work has explored behavioral preference models when the behavioral preferences are not
expected utilities. An example is the (logit) quantal response equilibrium (QRE).
In the interdependent preferences and QRE experimental work the researcher’s task is simply
to ascertain the parameters of real-world behavioral objective functions from data. Two important issues are unaddressed in that work. The first such issue is how the players acquire common knowledge of one another's behavioral objective functions before the start of play. This issue is
particularly pronounced in nonrepeated games, and even more so when the games are played anonymously. The second issue is how to explain why the parameters of the behavioral objective functions
have the values they do. The interdependent preferences and QRE experimental work does not
consider the issue of why a human should try to optimize a particular behavioral objective function
rather than his underlying objective function. In this paper, the authors address this second issue.
Research Approach
The authors note that, by definition, the strategy profile adopted by the players in any strategic scenario
is an equilibrium solution of the game specified by the players’ behavioral objective functions rather
than an equilibrium solution of the game specified by their underlying objective functions. Therefore,
changing the values of the parameters in the behavioral objective functions changes the equilibrium
strategy profile. In particular, for a fixed set of behavioral objective function parameters for all players
other than player i, by varying the parameters of i’s behavioral objective function, the authors create a
set of equilibrium profiles of the associated behavioral games. The profiles in that set can be ranked in
terms of player i’s underlying objective function. In this way, the possible values of the parameters in i’s
behavioral objective function can be ranked according to i’s underlying objective function.
In a nutshell, the authors’ thesis is simply that over the course of a lifetime a person learns what
parameter values of his behavioral objective function have the highest rank in terms of his underlying objective function. In this way, the parameters of an individual’s behavioral objective function
are determined endogenously, in a purely rational way, as the values that optimize his underlying
objective functions.
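The ranking step can be illustrated with a deliberately small example. The sketch below is a toy construction, not the authors' model: it uses a symmetric Prisoner's Dilemma, a one-parameter "altruism" persona, and ignores the signaling and learning dynamics of the paper. It varies the persona parameter, solves the resulting behavioral game, and reports the material payoff the player would earn at each behavioral equilibrium:

```python
from itertools import product

# Material (underlying) payoffs in a standard Prisoner's Dilemma.
# Actions: 0 = cooperate, 1 = defect.  M[profile] = (payoff to player 0, payoff to player 1).
M = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}

def behavioral_utility(profile, player, alpha):
    """Behavioral utility = own material payoff + alpha * opponent's material payoff."""
    return M[profile][player] + alpha * M[profile][1 - player]

def pure_nash_equilibria(alpha):
    """Pure-strategy equilibria of the behavioral game when both players use weight alpha."""
    equilibria = []
    for profile in product((0, 1), repeat=2):
        stable = True
        for player in (0, 1):
            for deviation in (0, 1):
                alt = list(profile)
                alt[player] = deviation
                if behavioral_utility(tuple(alt), player, alpha) > behavioral_utility(profile, player, alpha):
                    stable = False
        if stable:
            equilibria.append(profile)
    return equilibria

# Rank candidate persona parameters by the *material* payoff player 0 earns at the
# behavioral equilibria they induce -- the long-timescale learning step in the paper.
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    eqs = pure_nash_equilibria(alpha)
    outcomes = [(eq, M[eq][0]) for eq in eqs]
    print(f"alpha = {alpha:.2f}: (equilibrium, material payoff to player 0) -> {outcomes}")
```

For this payoff matrix, once the altruism weight exceeds roughly 2/3, mutual cooperation becomes a behavioral equilibrium and yields a material payoff of 3 rather than the 1 earned under mutual defection, which is the sense in which adopting a "nonrational" persona can be rational with respect to underlying preferences.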
Key Findings
•	Many of the formal difficulties of EOP models can be removed by modifying the two-timescale
games studied in the literature so that the strategic process on the long timescale is learning by
an individual across his or her lifetime rather than natural selection operating on genomes over
multiple generations.
•	Two-timescale games with the modified process can provide endogenous explanations for
why humans sometimes adopt interdependent preferences and sometimes exhibit logit quantal
response functions.
•	By trying to maximize the behavioral preferences (and in particular publicly committing to doing
so), a person in fact strategically maximizes his underlying preferences. So what we observe is
maximization of particular [optimal] behavioral preferences, but this is not inconsistent with an
ultimate goal of maximizing underlying preferences.
•	The modified process explains experimental data in the Traveler’s Dilemma and allows the authors
to show how cooperation can arise in nonrepeated versions of the Prisoner’s Dilemma. In the Prisoner’s Dilemma the modified process predicts a crowding out phenomenon, in which introducing
incentives to cooperate instead causes players to stop cooperating, and enables the authors to predict a tradeoff in the Prisoner’s Dilemma between the robustness and the benefit of cooperation.
Implications
One response to the observation that humans and some animals sometimes exhibit what appears
to be nonrational behavior when they play noncooperative games with others is to simply state this
observation as a fact and leave it at that. Under this response, essentially the best that can be done
is to catalog the various types of nonrationality that arise in experiments (loss aversion, framing
effects, the endowment effect, the sunk cost fallacy, confirmation bias, reflection points, other-regarding
preferences, uncertainty aversion, and so on). Inherent in this response is the idea that “science stops at
the neck”—that somehow logic suffices to explain the functioning of the pancreas but not of the brain.

There has been a lot of work that implicitly disputes this and tries to explain apparent nonrationality of humans as actually being rational, if we appropriately reformulate the strategic problem faced
by the humans. The implicit notion in this work is that the apparent nonrationality of humans in
experiments does not reflect “inadequacies” of the human subjects. Rather it reflects an inability of
scientists to know precisely what strategic scenario the human subjects are considering when they
act. From this point of view, the work of scientists should be to try to determine just what strategic
scenario really confronts the human subjects, as opposed to the one that apparently confronts them.
One body of work that adopts this point of view is evolutionary game theory, which holds that
humans (or other animals) really choose their actions in any single instance of a game to optimize
results over an infinite set of repetitions of that game, rather than to optimize it in the single instance
at hand. The persona framework is based on the same point of view: that the apparent game
and the real game differ. In the persona game framework, the apparent game is the underlying game,
but the real game the humans play is the persona game.
There are many interesting subtleties concerning when and how persona games arise in the real
world. For example, a necessary condition for a real-world player to adopt a persona other than
one of perfect rationality is that he believes that the other players are aware that he can do so.
The simple computer programs for maximizing utility that are currently used in game theory experiments do not have such awareness. Accordingly, if a human knows he is playing against such a
program, he should always play perfectly rationally, in contrast to his behavior when playing against
humans. This distinction between behavior when playing computers and playing humans agrees
with much experimental data, for example, data concerning the Ultimatum Game (Camerer and
Fehr 2006; Camerer 2003; Nowak et al. 2000).
What happens if the players in a persona game are unfamiliar with the meaning of one another’s
signals, say, because they come from different cultures? This might lead them to misconstrue the
personas (or more generally persona sets) adopted by one another. Intuitively, one would expect that
the players would feel frustrated when this happens, since in the behavioral game each does what
would be optimal if his opponents were using the misconstrued persona—but, in fact, his opponents
are not doing that. This frustration can be viewed as a rough model of what is colloquially called a
“culture gap” (Chuah et al. 2007).
Persona games provide a very simple justification for nonrationality (often disparaged in popular parlance as “irrationality”) with very broad potential applicability. They also make quantitative
predictions that can often be compared with experimental data. (In work currently being written
for submission, two of the authors have found that the predictions of the persona game framework
also agree with experimental data for the Ultimatum Game.) While in this paper the authors have
considered only personas involving degrees of rationality and degrees of altruism, there is no reason
not to expect other kinds of persona sets in the real world. Risk aversion, uncertainty aversion, reflection points, framing effects, and all the other “irrational” aspects of human behavior can often be
formulated as personas.
Even so, persona games should not be viewed as a candidate explanation for all nonrational behavior.
Rather they complement other explanations, for example, those involving sequences of games. Indeed, many phenomena probably involve sequences of persona games (or more generally, personality
games). As an illustration, say an individual i repeatedly plays a face-to-face persona game involving
signaling, persona sets, and so on, and adopts a particular persona distribution for these games. By
playing all these games, i would grow accustomed to adopting this persona. Accordingly, if i plays
new instances of the game, where signaling is prevented, he might at first continue to adopt the
same persona distribution. However, as he keeps playing signal-free versions of the game, he might realize that the persona he adopted in the game with signaling makes no sense in this new context.
This would lead him to adopt the fully rational persona instead. If, after doing so, he were to play
a version of the game where signaling was no longer prevented, he could be expected to return to
the original persona fairly quickly. This behavior agrees with experimental data (Cooper et al. 1996;
Dawes and Thaler 1988).
w-10-11								

Some Evidence on the Importance
of Sticky Wages

by Alessandro Barattieri, Susanto Basu, and Peter Gottschalk
complete text: http://www.bostonfed.org/economic/wp/wp2010/wp1011.htm
e-mail: barattie@bc.edu, susanto.basu@bc.edu, gottscha@bc.edu

Motivation for the Research
It is difficult to explain the estimated real effects of monetary policy shocks without assuming that
some nominal variables adjust sluggishly. In the General Theory, Keynes (1936) assumed that nominal wages were rigid, and thus that expansionary monetary policy would reduce real wages and
increase employment and output. Fischer (1977) and Taylor (1980) showed that nominal wage
contracts would have similar effects even in explicitly dynamic models with rational expectations.
Recent macro-econometric models have typically followed the important contribution of Erceg,
Henderson, and Levin (2000) and assumed that both prices and nominal wages are slow to adjust.
The large number of recent models with such features has inspired researchers to examine micro
data on the frequency of price changes for individual products, with notable papers by Bils and
Klenow (2004) and Nakamura and Steinsson (2008). However, to date there has been little research
using micro data to estimate the rigidity of nominal wages—even though Christiano, Eichenbaum,
and Evans (2005, henceforth CEE) find that nominal wage rigidity is more important than nominal
price rigidity for explaining the dynamic effects of monetary policy shocks. This paper attempts to
address this gap in the literature.
Research Approach
The lack of previous work on the business cycle implications of nominal wage rigidity using micro
data may be due in part to a lack of suitable datasets. The authors provide evidence about the frequency of wage adjustment in the United States using data from the Survey of Income and Program
Participation (SIPP). The SIPP, a survey conducted by the U.S. Census Bureau, provides individual
wage histories for a large and representative sample that is followed for a period of 24 to 48 months.
Importantly, the individuals are interviewed every four months. These data allow the authors to
examine wage changes using high-frequency data. Most previous work on nominal wage rigidity
using U.S. micro data has used the Panel Study of Income Dynamics, which is an annual survey
and thus less useful for high-frequency analysis. Other well-known sources of micro wage data, the
Current Population Survey and the Employment Cost Index, do not provide sufficiently long time-series data on individual wages and thus cannot be used for the authors' purpose. The authors use
the longest SIPP panel for which complete data are available: the 1996 panel (run from March 1996
to February 2000).
The authors focus on the frequency of nominal wage adjustments, disregarding employment history. This is arguably the concept that is most relevant for macro models with nominal wage
rigidities, particularly medium-scale dynamic stochastic general equilibrium models à la CEE. The
reason is that most business cycle models with nominal wage rigidity follow Blanchard and Kiyotaki (1987) and assume that all workers are monopolistically competitive suppliers of differentiated labor
services. In this framework, the worker sets the wage and revises it occasionally on his/her own schedule,
thus making the sequence of wages the relevant series to examine regardless of employment history.
As a baseline the authors use the results for hourly workers (or wage earners) who reported their
hourly wages to the SIPP interviewer. The reason is that computing hourly wages indirectly, as earnings divided by hours worked, increases measurement error. For the baseline results they chose to focus on the statistic measured
with least error, the hourly wage, at the cost of making the sample less representative. However, they
also present results for the sample of salaried workers, using their monthly earnings as their “wage”
measure. By reporting the results for both hourly workers and salaried workers, the authors leave the
decision of the “right number” for macroeconomics to individual researchers who may be interested
in calibrating their models using the estimates presented in this paper.
Regardless of the sample used, it is clear that the data are contaminated with a significant amount of
measurement error. This is a disadvantage of working with data on individual wages, which in U.S.
survey data are always self-reported. The authors deal with this problem by applying to the reported
wage and earnings series the correction for measurement error introduced by Gottschalk (2005),
who built upon the work of Bai and Perron (1998 and 2003). The application uses the identifying
assumption that wages are not adjusted continuously but are changed by a discrete amount when
an adjustment takes place, which corresponds to our usual intuition about labor market institutions.
The implied statistical model says that the true wage (or earnings) is constant for an unspecified
period of time and then changes discretely at unspecified breakpoints. Thus, true wage changes in a
noisy series can be estimated as one would estimate structural break dates in a standard time series.
The Bai-Perron-Gottschalk method is to test for a structural break at all possible dates in a series.
If one can reject the null hypothesis of no break for the most likely break date, then one can assume
that there is a break at that point in time. One examines the remaining subperiods for evidence of
structural breaks, and continues until one cannot reject the hypothesis of no break for all remaining
dates. The adjusted series have wage (earnings) changes at all dates where one can reject the no-break hypothesis, and are constant otherwise. This is a systematic way of excluding many instances
of transitory wage changes that look very much like measurement error. The authors apply this
method to SIPP data for individuals in their sample.
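A stripped-down version of this recursive break search can be sketched as follows. This is not the authors' code: it substitutes a simple Welch t-test for the Bai-Perron test statistics and critical values, and the simulated wage path and tuning constants are made up for illustration.

```python
import numpy as np
from scipy import stats

def find_breaks(y, start=0, min_seg=3, alpha=0.01):
    """Recursively search for the most likely break date in y; keep it if a
    two-sample t-test rejects 'no break', then repeat on each subsegment.
    A simplified stand-in for the Bai-Perron-Gottschalk test statistics."""
    n = len(y)
    if n < 2 * min_seg:
        return []
    best_t, best_p, best_k = 0.0, 1.0, None
    for k in range(min_seg, n - min_seg + 1):
        t, p = stats.ttest_ind(y[:k], y[k:], equal_var=False)
        if abs(t) > best_t:
            best_t, best_p, best_k = abs(t), p, k
    if best_k is None or best_p > alpha:
        return []  # cannot reject "no break": treat the segment as constant
    return (find_breaks(y[:best_k], start, min_seg, alpha)
            + [start + best_k]
            + find_breaks(y[best_k:], start + best_k, min_seg, alpha))

def adjusted_series(y, breaks):
    """Replace the noisy reported series with its segment means (the estimated true wage path)."""
    edges = [0] + sorted(breaks) + [len(y)]
    out = np.empty_like(y, dtype=float)
    for a, b in zip(edges[:-1], edges[1:]):
        out[a:b] = y[a:b].mean()
    return out

# Example: a wage that is flat, jumps once, and is reported with noise at each interview.
rng = np.random.default_rng(0)
true_wage = np.r_[np.full(8, 15.0), np.full(8, 16.5)]
reported = true_wage + rng.normal(0, 0.3, size=true_wage.size)
breaks = find_breaks(reported)
print("estimated break dates:", breaks)
print("adjusted wage series:", np.round(adjusted_series(reported, breaks), 2))
```

In spirit, the adjusted (piecewise-constant) series is the object from which the frequency-of-adjustment statistics are then computed.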
Key Findings
•	After correcting for measurement error, wages appear to be very sticky. In the average quarter, the
probability that an individual will experience a nominal wage change is between 5 and 18 percent,
depending on the samples and assumptions used.
•	The frequency of wage adjustment does not display significant seasonal patterns.
•	There is little heterogeneity in the frequency of wage adjustment across industries and occupations, although wages in manufacturing appear to be somewhat stickier than wages in services.
•	The hazard of a nominal wage change first increases and then decreases, with a peak at 12 months.
Thus, at a micro level, the pattern of wage changes appears somewhat more in keeping with the
staggered contracting model of Taylor (1980) than with the constant-hazard model of Calvo
(1983). However, the second result suggests that the timing of wage contracts is uniformly staggered throughout the year, which is the pattern that gives maximum persistence of nominal wages
following a shock.
•	The probability of a wage change is positively correlated with the unemployment rate and with
the consumer price inflation rate.

•	Higher wage stickiness makes it easier for macroeconomic models to match the stylized fact
that monetary shocks cause persistent changes in real output and small but relatively persistent
changes in prices.
Implications
The authors’ results shed some light on a small but interesting literature on the seasonal effects of
monetary policy shocks. Recently, Olivei and Tenreyro (2008) have found that monetary policy
shocks that occur in the first half of the year have larger real effects than those that occur later in the
year. They explain this result by positing a model where wage changes are more likely to occur in the
second half of the year. The authors of this paper find that while the frequency of wage changes is
indeed slightly higher in the second half of the year, the magnitude of the difference is much smaller
than assumed in the calibrated model of Olivei and Tenreyro, suggesting that a different model
might be needed to explain their very interesting empirical finding.
With respect to directions for future research, the authors suggest a number of areas to explore.
First, it is important to understand why the stickiness estimated from micro data is greater than that
estimated from aggregate data using Bayesian techniques. Idiosyncratic measurement error, such a
large concern in the analysis of micro data, is unlikely to be the explanation. Such errors would average out and contribute little to the variance of any aggregate wage series. One possibility is that the
difference is due purely to aggregation issues: for example, if high-wage workers’ wages also adjust
more frequently, then the aggregate wage will appear to be more flexible than the average worker’s
wage. The authors plan to investigate this possibility using their data, but since high-wage workers
are likely to be salaried workers, whose adjusted earnings they find to be stickier than the wages of
hourly workers, this explanation appears unlikely. The reasons for this micro-macro gap should shed
light on the perplexing issues of aggregation that must concern all macroeconomists interested in
structural models. Second, the lack of sizeable seasonality in wage changes raises the question: what
can explain the estimated differential effects of monetary shocks occurring in different quarters?
Nakamura and Steinsson’s (2008) finding that price adjustment is seasonal suggests one possible
answer. Third, the findings on the shape of the hazard functions suggest that one should explore the
properties of models based on fixed-length wage contracts, as in Taylor (1980), in addition to the
very tractable stochastic-length contracting models in the style of Calvo (1983). Fourth, the authors’
desire to estimate the key parameter of one particular macro-labor model led them to focus on wage
histories and disregard employment histories. However, the implication that employment history
is irrelevant is not shared by all macro models of the labor market. For example, in the literature on
search and matching in business cycle models, the wage stickiness that matters for macroeconomists
is the degree of (real) wage rigidity for new hires. The authors plan to further explore these issues in
future research. Finally, from an epistemological point of view, the authors hope that this work will
increase the awareness that greater communication between economists working in different fields
(in this case, macro and labor economics) can produce valuable insights at relatively low cost.

w-10-12								

Imputing Household Spending in the Panel Study
of Income Dynamics: A Comparison of Approaches
by Daniel H. Cooper

complete text: http://www.bostonfed.org/economic/wp/wp2010/wp1012.htm
e-mail: daniel.cooper@bos.frb.org

Motivation for the Research
Performing microeconomic analysis of macroeconomic issues often requires a comprehensive measure of household expenditures as well as detailed wealth and income data. Household-level data
allow researchers to investigate heterogeneity in household behavior—something that cannot be
addressed with aggregate data. Investigating and/or controlling for household heterogeneity is particularly important when analyzing issues such as the recent housing market and financial crises.
Yet for economists the usefulness of household surveys has been limited by a lack of comprehensive
household wealth and expenditure data in the same dataset.
The Panel Study of Income Dynamics (PSID) is an ongoing, nationally representative longitudinal
study of households and their offspring that began in 1968, and until 1999 gathered data primarily
on households’ food expenditures together with detailed information on household wealth, income,
and other demographics. In contrast, the Consumer Expenditure Survey (CEX) collects very detailed data on household expenditures but only limited data on income and wealth. Other household
surveys such as the Current Population Survey (CPS) and/or the Survey of Income and Program
Participation (SIPP) contain little if any information on household expenditures.
The PSID appeals to researchers because, unlike most household-level datasets, it has a long panel
dimension, which enables the researcher to control for household-specific effects and changes in
household behavior over time. In addition, the PSID is nationally representative in the cross-section. Until 1999, however, the only consistent measure of household spending in the PSID was
households’ expenditures on food, so the dataset failed to provide a comprehensive picture of households’ overall spending decisions. Questions were added to the survey beginning in 1999 that now
provide a broader picture of household expenditures.
Several approaches have been proposed to circumvent the dearth of expenditure data in the PSID.
Skinner (1987) imputed nondurable consumption in the PSID, based on the observed relationship
between nondurable consumption, food consumption, and a group of demographic variables that are
common to both the PSID and the CEX. Blundell, Pistaferri, and Preston (2006) (BPP) expanded on
Skinner’s approach and estimated food demand relationships in the CEX, which they then inverted
to get nondurable consumption in the PSID. The contribution of their paper is the use of an instrumental variable approach to deal with potential bias in the imputation process. In addition, Cooper
(2009) used an in-sample method to impute households’ nonhousing expenditures in the PSID, based
on households’ budget constraint and the available income and saving data. This paper compares the
different techniques for imputing a broader basket of household expenditures in the PSID.
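In schematic terms, and with the caveat that the functional forms, instruments, and variable definitions here are only an illustrative rendering of the cited papers, the BPP approach estimates a food demand equation in the CEX and inverts it in the PSID, while Cooper's in-sample approach backs nonhousing spending out of the household budget constraint:

```latex
\ln f_{it} = \alpha + \beta \ln c_{it} + \gamma' X_{it} + e_{it}
\;\;\Longrightarrow\;\;
\widehat{\ln c}_{it} = \frac{\ln f_{it} - \hat{\alpha} - \hat{\gamma}' X_{it}}{\hat{\beta}},
\qquad\qquad
\hat{c}^{\,\mathrm{nonhousing}}_{it} \approx Y_{it} - S_{it} - H_{it}
```

where f is food spending, c is the broader consumption measure being imputed, X are demographics common to the CEX and the PSID, Y is income, S is measured household saving, and H is housing outlays. The budget-constraint identity on the right is a simplification; Cooper's implementation handles the income, saving, and housing components in more detail.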
Research Approach
In particular, this paper analyzes and extends the approach in BPP through 2007 along with that of
Cooper (2009) and compares the data from BPP’s out-of-sample imputation method and Cooper’s
in-sample approach to aggregate benchmarks. The paper also looks at how well BPP’s imputation
method captures the actual spending data reported in the PSID from 1999 onward. The analysis
also extends the work in Charles et al. (2007) to provide a mapping between the disaggregated CEX
expenditure categories and the additional PSID spending questions added in 2005.

[Figure: Total Household Expenditures, in thousands of 2000 dollars, by period from 1984-1988 through 2005-2006, comparing PSID-imputed (BPP), PSID-imputed (Cooper), CEX-actual, total PCE, and PCE excluding housing. Sources: Blundell, Pistaferri, and Preston (2006); Consumer Expenditure Survey; U.S. National Income and Product Accounts; Cooper (2009).]

Key Findings
•	BPP’s out-of-sample approach does a good job of imputing households’ nondurable expenditures in
the PSID. The imputed data line up well with the actual CEX data, but tend to be somewhat lower
than the equivalent data from the National Income and Product Accounts (NIPA). The divergence
between the micro data and the aggregate data worsens when one imputes a broader basket of expenditures than BPP’s nondurable expenditure measure. In particular, total per capita imputed household expenditures and the actual CEX data are substantially lower than per capita total personal
consumption expenditures (PCE) in the NIPA. This finding is consistent with recent work by Sabelhaus (2010) and others showing that the CEX data under-report aggregate household spending.
•	In comparison, the in-sample imputation approach of Cooper (2009), based on households’ budget constraints, does a much better job of capturing total household expenditures in the PSID. As
predicted, these data lie somewhere between total PCE and total PCE excluding housing, and
follow the general trend observed in the NIPA data. This budget constraint-based approach clearly dominates BPP’s imputation approach when a researcher is interested in examining households’
total expenditures in the PSID. This method also is preferable to using households’ reported expenditure data recorded in the PSID from 1999 onward in terms of measuring households’ total
composite consumption. The actual PSID data, however, are reasonable and worth using when
a researcher is interested in households’ more disaggregated spending behavior. The actual PSID
data from 1999 onward are also preferable to using BPP’s technique to impute a comparable basket of goods.
Implications
This paper shows that none of the imputation techniques used to compute household expenditures
is perfect. The perceived accuracy of the imputation approaches depends somewhat on what one
believes is the appropriate spending benchmark for comparison purposes. The CEX under-reports
expenditures relative to the NIPA, but this under-reporting does not mean that the CEX data
should be hastily dismissed as a valid benchmark for disaggregated household expenditure measures, especially given the proposed reasons for the CEX's shortcomings. More work needs to be done to
improve the accuracy of imputed expenditures in the PSID, but, as this paper demonstrates, the two
existing techniques are reasonable given their goals.
w-10-13									

The Distress Premium Puzzle
by Ali K. Ozdagli

complete text: http://www.bostonfed.org/economic/wp/wp2010/wp1013.htm
e-mail: ali.ozdagli@bos.frb.org

Motivation for the Research
The conventional wisdom suggests that firms with high risk exposure should have high expected
returns and low market values and as a result of the latter should be closer to default than other firms.
Consequently, firms’ default probabilities should be positively correlated with market-based risk
characteristics, such as dividend-price, earnings-price, and book-to-market ratios, and firms that are
more likely to default should have higher expected equity returns. Indeed, Fama and French (1992)
claim that size and value premiums result from distress risk. However, using empirical estimates
of default probabilities, recent empirical research, including Dichev (1998), Griffin and Lemmon
(2002), and Campbell, Hilscher, and Szilagyi (2008), has reached the opposite conclusion: financially distressed firms have lower returns than other firms. This paper aims to reconcile the apparent
contradiction.
Research Approach
The author develops and calibrates a model, using Compustat/CRSP data, that aims to capture
the following three empirical regularities observed in the distress premium literature: (1) firms with
higher default likelihood have lower returns than other firms, (2) firms with higher earnings-price
ratios and higher book-to-market values have higher returns than other firms, and (3) when firms
are ranked according to their bond yields, firms with higher bond yields have higher returns.
Key Findings
•The apparent contradiction between the conventional wisdom, which suggests that firms exposed
to a high degree of risk should have high expected returns and low market values and should
therefore be more financially distressed than other firms, and recent research, which shows that
financially distressed firms have lower returns than other firms, can probably be understood once
one realizes that the default measures employed in recent research aim to capture the probability of
observing a default under the real probability measure—and that this probability does not necessarily line up with the risk-neutral default probability that governs the market value of equity and
the risk characteristics based on it. Therefore, one could not back out risk-neutral default probabilities using default observations from the data even if one had the perfect model, because one is
trying to fit the econometric model to observed defaults rather than to risk-neutral defaults.
•The author’s model successfully matches the three regularities it set out to match and in addition
successfully captures the following patterns noted in the literature, which involve book-to-market
value, financial leverage, and stock returns. (1) Stock returns are positively related to market leverage but are insensitive to book leverage. (2) Stock returns are less sensitive to market leverage
than to book-to-market leverage. (3) Market leverage is only weakly linked to stock returns after
controlling for book-to-market value. (4) Book leverage remains insensitive to stock returns after
controlling for market leverage.

Implications
Both Fama and French (1992) and the studies that find a negative relationship between stock returns and the likelihood of default are right. On the one hand, as empirical studies suggest, firms
with a higher observed likelihood of default should have lower returns, given risk-neutral default
probabilities. On the other hand, firms with a higher default probability under the risk-neutral
measure should have higher market-based risk characteristics and higher returns, given observed
default probabilities.
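A back-of-the-envelope example, with invented numbers, a single period, and a zero risk-free rate, shows how these two statements coexist. Consider a claim that pays 1 if the firm survives and 0 if it defaults, priced with risk-neutral default probability Q while defaults occur with physical (observed) probability P:

```latex
\text{price} \;=\; 1 - Q,
\qquad
\mathbb{E}[\text{gross return}] \;=\; \frac{1 - P}{1 - Q}
```

Firm A with P = 0.10 and Q = 0.12 has an expected return of 0.90/0.88, about 2.3 percent, while firm B with P = 0.05 and Q = 0.10 earns 0.95/0.90, about 5.6 percent. Firm A is more likely to default in the data yet has the lower expected return, because the wedge between Q and P that prices its default risk is smaller; sorting firms on observed default probabilities alone therefore need not line up with expected returns.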
The paper makes an additional claim: firms with a higher default risk under the risk-neutral measure
should have higher returns than other firms. This claim could be checked empirically, for example,
by using market data on credit default swaps. Given that the credit default swap instruments are
relatively new and currently do not cover the entire Compustat/CRSP universe, testing this hypothesis will be problematic with current data. So far, the findings of Anginer and Yildizhan (2010) using
bond yields seem to support this claim.
w-10-14

Characterizing the Amount and Speed of Discounting
Procedures
by Dean T. Jamison and Julian C. Jamison

complete text: http://www.bostonfed.org/economic/wp/wp2010/wp1014.htm
e-mail: djamison@uw.edu, julian.jamison@bos.frb.org

Motivation for the Research
Economists in a diverse range of specialized fields—including behavioral economics, environmental
economics, financial economics, and health economics—rely on discounting procedures in order to
evaluate the potential outcomes of policies and projects. The relevant time interval being evaluated
can range from a relatively short period, as is often the case in behavioral economics, to hundreds
of years, as might be the concern when implementing an environmental policy to curb greenhouse
gas emissions. Discount functions evaluate possible outcomes according to a present value function,
and the inverse of the present value of a unit stream of benefits (usually gauged in dollars or some
concept of utility, such as improved health outcomes) is a natural measure of the amount by which
a procedure discounts the future. Different procedures use different speeds to arrive at the present
value, with the result that, depending on the particular discounting procedure used, there can be
major differences in the weight given to the far future.
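Using only the definition above, the amount of discounting is the inverse of the present value of a unit benefit stream, and for the exponential case this pins down the discount rate directly (the numbers below simply illustrate the definition):

```latex
\text{amount} \;=\; \Big(\int_{0}^{\infty} D(t)\,dt\Big)^{-1},
\qquad
D(t) = e^{-rt} \;\Rightarrow\; \int_{0}^{\infty} e^{-rt}\,dt \;=\; \frac{1}{r}
```

so an exponential procedure that assigns the unit stream a present value of 50, as in the paper's illustrations, corresponds to r = 1/50 = 2 percent per year and an amount of discounting of 0.02.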
Exponential discounting, the procedure most commonly employed by economists, uses a constant
discount rate that fails to fully capture the variety of preferences of individuals who differentially value the present or near term versus the distant future; in other words, a single constant rate cannot separately specify the relative weight placed on the near versus the far future and the total amount of discounting. The profession has recognized that there is a great need
for nonconstant rate discounting procedures that decline slowly with time in order to more reasonably balance far-future outcomes relative to nearer-term outcomes, and some alternatives have been
proposed. Exponential discounting combines the concepts of amount and speed into a single parameter that must be disaggregated in order to characterize nonconstant rate procedures. Yet while the
exponential discounting procedure has many disadvantages, it remains the dominant discounting
method used in economics.
Research Approach
The authors categorize the increasingly diverse literature using nonconstant rate discounting procedures by distinguishing the speed of discounting from the total amount by which the future is discounted. The framework they develop facilitates a systematic comparison of these procedures and enhances their tractability. Second, the authors identify the inadequacies in existing approaches that use the average discount rate (ADR) or an average of the discount functions (ADF) to generate an aggregate social discounting procedure. The authors consider four different discount functions: exponential, hyperbolic, quasi-hyperbolic, and fast Weibull. They propose an alternative social discounting procedure that better reflects the preferences of all members of a society—meaning that the preferences of those who value the present or near term are better balanced against the preferences of individuals who place more emphasis on the distant future. The paper's overall aim is to improve the tools available for using discounting procedures and to facilitate the wider use of nonconstant rate discounting procedures.

[Figure: Three Discounting Procedures with Present Value(∞) = 50. Cumulative present value (dollars) over time (years, 0 to 280) for the zero-speed hyperbolic, exponential, and fast Weibull procedures. Source: Authors' calculations.]
Key Findings
•	Each of the four discounting procedures considered has a present value of 50, but they differ in how rapidly the
present value is acquired. By using geometrical- and time horizon-based measures of how rapidly a
procedure acquires its ultimate present value, and showing that these values are the same, the authors
establish an unambiguous measure of the speed of discounting. A value of 0 is slow, and a value of 2
is fast. Exponential discounting has a speed of 1, while the fast Weibull has a speed of π/2 (about 1.57).
•	On the question of how to trade off between two future time points when individual members of a society are heterogeneous in terms of their time preferences, the ADR method
counts all their opinions equally, even those who do not value the future. The ADF method
can be nonconvergent, generating infinite present value, a shortcoming that negates its viability as a general aggregation procedure. To overcome the shortcomings of the ADR and the
ADF methods, the authors propose what they call the average normalized discount function
(ANDF) aggregation process. While the other two methods each satisfy only one requirement,
the ANDF process satisfies both criteria: (1) the aggregate procedure discounts the future by
an amount that is the average of the individual amounts; and (2) the aggregate procedure’s
discount rates in the future place greater weight on individuals who value the future more highly. This results in a more socially representative aggregation of multiple individual discounting procedures and better reflects a range of preferences over both the short run and the long run.
•	The authors argue that a specific slow procedure they call the zero-speed hyperbolic (ZSH) function is a good alternative candidate to the widely used exponential procedure used for social discounting with long time horizons. The ZSH procedure has a speed of 0 and a single parameter
equal to the amount of discounting, which renders it a simple yet flexible procedure for social
discounting. The ZSH function provides an analytically tractable way to give substantial weight
to the far future in policy analyses while preserving reasonable discount rates in the short term.
Implications
The authors suggest that their proposed approach to discounting, ANDF and ZSH, provides answers to the practical objections that have inhibited a wider use of nonconstant rate discounting
procedures and provides a missing framework for integrating and comparing results in the existing
literature. Yet transforming the empirical literature into useful discounting procedures will require
two additional steps. First, to the extent that it is practical, the data underlying the reported literature needs to be characterized in terms of estimates of the amount and speed of individual discounting procedures. Second, the ANDF aggregation algorithm can be used to generate candidate social
discounting procedures.
w-10-15									

Internal Sources of Finance and the Great Recession
by Michelle L. Barnes and N. Aaron Pancost

complete text: http://www.bostonfed.org/economic/wp/wp2010/wp1015.htm
e-mail: michelle.barnes@bos.frb.org, aaron.pancost@bos.frb.org

Motivation for the Research
The financial crisis and ensuing credit supply shock that began in August 2007 were distinguished in
part by the largest and most persistent drop in real private nonresidential equipment and software
investment growth since the Bureau of Economic Analysis (BEA) began data collection in 1947. At
the same time as the crisis began, aggregate cash holdings as a share of total assets for nonfinancial
corporations were at a 30-year high and should have provided firms with a very large cushion to
absorb any shock to the supply of credit.
In this paper the authors seek to shed light on two basic questions. One, what role did cash and
its attributes play in the investment performance of firms during what has been called the Great
Recession and how does this compare with its role in previous recession and credit crunch episodes
(Bernanke and Lown 1991)? Two, in terms of investment, what are the characteristics of firms that
were hit hardest during the recent recession? In particular, the authors seek to contribute to the current policy debate regarding the need to restore the flow of credit to small firms (Bernanke 2010;
Duygan-Bump, Levkov, and Montoriol-Garriga 2010).
The striking upward trend in corporate cash holdings has been noted earlier (Bates, Kahle, and Stulz
2006, later published as Bates, Kahle, and Stulz 2009), as has its potential role in alleviating credit
constraints in the recent recession (Duchin, Ozbas, and Sensoy 2010, henceforth DOS). However, the
authors do not know of any paper on investment financing that looks as deeply as this paper does into
firms’ sources of cash holdings. By using variables not hitherto examined in the literature, they are able
to decompose firms’ cash stocks by source and show how the use of these sources has varied over time.
In particular, the authors examine the role of cash and its sources over business cycles, with an emphasis on understanding the role of cash from these various sources during the Great Recession. In this context, they also study the role of firm size in investment financing over the business cycle, because in
the literature firm size has been identified as indicative of financial-constraint status.
Research Approach
Bates, Kahle, and Stulz (2009) (henceforth BKS) argued that the rise in cash holdings was due in
part to an increase in cash-flow volatility. Consistent with the BKS story, firms at the extreme ends
of the cash-flow distribution do indeed have higher than usual stocks of cash. However, given that
these cash stocks do not come from current operating inflows, at least not for the firms at the bottom (negative) end of the cash-flow distribution, it is natural to ask how these firms financed their
cash holdings—by raising funds externally, or by saving systematically out of cash flows over time?
Which behavior would indicate a firm facing financial constraints? In an earlier paper, Almeida,
Campello, and Weisbach (2004) (henceforth ACW) showed theoretically that firms expecting future funding shortfalls (for example, because they need to finance losses) will systematically save
more cash out of income. ACW identified these “hoarding” firms empirically, and showed that they
are firms that are typically considered to be more “financially constrained”—smaller, without bond
ratings, and not paying dividends. This suggests that in order to understand how firms might be
financially constrained, one needs to identify the sources of firms’ accumulation of cash.
The financial-constraint literature stems from a seminal paper by Fazzari, Hubbard, and Petersen
(1988) (hereafter FHP) documenting the sensitivity of investment to operating cash flows at the
firm level. FHP argued that the apparent sensitivity of investment to cash flows, even after controlling for future investment opportunities using Tobin’s Q, indicates that capital market frictions
prevent firms from investing in all profitable opportunities, and that internal cash flows provide an
additional source of financing. Most of the literature since FHP has similarly focused on cash flows,
despite the theoretical results of Gomes (2001) and Alti (2003) that empirical investment/cash-flow
sensitivities can be observed even in the absence of financial constraints; the argument of Erickson
and Whited (2000) that cash-flow sensitivities disappear when measurement error in Q is treated;
and work by Cleary, Povel, and Raith (2007) and Kaplan and Zingales (1997) showing that the
positive cash-flow sensitivities are largely a result of sample selection. Regardless, cash flows were
originally intended only as a proxy for firm liquidity—although current cash flows may indeed be
important, it also seems reasonable to suspect that previous cash flows, saved to the present, should
also be considered as affecting firms’ investment choices, particularly against a backdrop of a large
secular increase in cash holdings.
Other than ACW and Opler et al. (1999), comparatively little attention has been paid to the stock
of cash as it relates to financial constraints, despite its secular rise as documented by BKS. One
exception is DOS, who showed that large cash stocks before the crisis are correlated with higher
investment during the crisis and argue that this result is consistent with the identification of the
period from 2007:Q3 to 2008:Q2—which partly coincides with the NBER dating of the Great Recession—as one characterized by a supply shock to external credit markets, a shock that firms with
higher internal liquidity were better able to weather. If external financial markets were functional
prior to the crisis, then firms’ cash stocks are choice variables and thus probably endogenous to most
dependent variables of interest, as argued above. For example, firms may issue a large amount of debt
prior to embarking on a large investment project for transaction reasons; this could induce a cash-stock/investment correlation even though in this scenario financial markets are perfectly functional.
A standard way around these difficulties is to use a difference-in-differences regression specification,
which controls for lower investment demand during recessions, as well as the “usual” correlation of cash
and investment during normal times. In this set-up we would use the estimated interaction between
recessions and the stock of cash to measure the presence of financial constraints; this is essentially
the approach taken by DOS. In addition, firm fixed-effects arguably control for any time-invariant investment demand effects at the firm level, and the inclusion of Tobin's Q could be expected to control for some time-varying future investment opportunities.

[Figure: Average Cash Stock as a Share of Total Assets, Decomposed. Quarterly, 1990:Q1 to 2010:Q1; share of total assets in percent, showing internal cash, external cash, and neither cash. Source: Authors' calculations.]
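A generic difference-in-differences specification of the kind described here, written with illustrative notation rather than the authors' exact regression, is

```latex
\frac{I_{it}}{K_{i,t-1}}
\;=\; \alpha_i + \lambda_t
+ \beta\,\mathit{Cash}_{i,t-1}
+ \gamma\,\big(\mathit{Cash}_{i,t-1}\times \mathit{Rec}_t\big)
+ \delta\,Q_{i,t-1} + \varepsilon_{it}
```

where α_i are firm fixed effects, λ_t are time effects, Rec_t marks recession quarters, and Q is Tobin's Q. The "usual" correlation between cash and investment loads on β, while a positive γ (a larger role for pre-existing cash precisely when credit conditions tighten) is read as evidence of financial constraints.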
However, even a difference-in-differences methodology does not get around the fact that the stock
of cash is a matter of firm choice and therefore—even lagged one year, or sampled prior to the crisis,
as in DOS—it is not truly exogenous to investment if firms are forward-looking. The authors of this
paper propose to mitigate this issue by decomposing firms’ cash stocks by component source, using
data from an unbalanced quarterly panel of almost 9,000 publicly traded firms from 1989 to 2009
from the Compustat database.
Since firms that accumulate cash by issuing debt or equity in order to finance future investment
would not, under normal credit conditions, be considered financially constrained, whereas firms
that meticulously save out of operating cash flows in order to finance future investment opportunities would be, it is important to distinguish between the two sources of cash. It is only financially
constrained firms that one would expect to invest more out of their internally generated cash stocks.
The authors include in internal sources such items as income before extraordinary items; depreciation and amortization; deferred taxes; sale of plant property and equipment; inventory decreases;
and net disinvestment, while external sources include such items as sale of equity stock, debt issuance, decreases in accounts receivable, increases in accounts payable, and changes in current debt.
The authors also experiment with excluding working capital components, as these are arguably used
to fund normal day-to-day operations as opposed to the more irregular investment in equipment,
software, and structures. Although the authors argue that they are better able to identify firms that
are financially constrained using this breakdown of cash stock into its sources, they do not claim to
identify a supply shock.
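The bookkeeping behind the decomposition might look something like the sketch below. The column names, classification lists, and cumulation convention are hypothetical stand-ins for the Compustat items listed above, not the authors' actual variable definitions or timing choices:

```python
import pandas as pd

# Hypothetical stand-ins for Compustat cash-flow-statement items; the authors'
# actual variable list, signs, and timing conventions will differ.
INTERNAL_ITEMS = ["income_before_extraordinary", "depreciation_amortization",
                  "deferred_taxes", "sale_of_ppe", "inventory_decrease",
                  "net_disinvestment"]
EXTERNAL_ITEMS = ["equity_issuance", "debt_issuance", "ar_decrease",
                  "ap_increase", "change_in_current_debt"]

def decompose_cash(df: pd.DataFrame) -> pd.DataFrame:
    """Cumulate each firm's flow sources of cash and scale by total assets,
    mirroring in spirit the internal/external split described in the text."""
    df = df.sort_values(["firm_id", "quarter"]).copy()
    df["internal_flow"] = df[INTERNAL_ITEMS].sum(axis=1)
    df["external_flow"] = df[EXTERNAL_ITEMS].sum(axis=1)
    df["internal_cash"] = df.groupby("firm_id")["internal_flow"].cumsum()
    df["external_cash"] = df.groupby("firm_id")["external_flow"].cumsum()
    for col in ("internal_cash", "external_cash"):
        df[col + "_share"] = df[col] / df["total_assets"]
    return df
```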
Key Findings
•	The rise in cash stocks first documented by BKS has been financed largely from internal sources.
•	The rise in internal funds has been driven primarily by small and medium-sized firms, as well as
by firms that do not pay dividends.

•	Lagged cash stocks are always correlated with investment, but much more so in the last recession.
•	The components of cash to which investment is sensitive have changed: in “normal” times
investment is most sensitive to externally generated cash, and this did not change during the last
recession. The increase in cash-sensitivity was due to an increase in the sensitivity of investment
to internally generated cash. Furthermore, it is not just small firms that appear constrained by this
metric during the Great Recession.
Implications
The paper’s results have important implications for the policy response to the recent financial crisis.
The evidence suggests that the recent financial turmoil has affected the real side of the economy by
constraining firms financially; thus policies that aim to ease credit conditions should be helpful in
increasing investment and speeding up the recovery. The findings also show that these financial constraints are greatest on smaller firms, suggesting that measures specifically designed to make credit
available to smaller firms might also be helpful.
Yet since a “small firm” in the Compustat data is still large relative to the rest of the economy (the
5th percentile of total assets, which is the median for firms below the 10th percentile, is about $10 million in 1982 dollars), this biases the authors' results against finding a size effect, and they conjecture that
financial constraints on even smaller, nonpublicly traded firms may be even greater. Furthermore,
the results suggest that firms as high as the 50th percentile of the Compustat size distribution were
affected by financial constraints in this recession. These firms are not small; thus credit-easing policies aimed at the economy as a whole are also important in combating this recession.
There are numerous directions for future research along these lines. In particular, a closer look at
the behavior of some of the detail components estimated—for example, income before extraordinary items, depreciation, net debt issuance, or sale of investments—might help to reveal why some
firms saved more internal cash than others. Indeed, armed with these detail data, it may even be
possible to understand why there is a break in many of the cash-stock series around the time of the
2001 recession. Also, further analysis of the detailed cash-stock data may help in understanding the
depth and duration of the Great Recession to the extent that it is related to a constrained credit environment. It also seems worthwhile to better understand the role of working capital and inventory
investment along the lines of the analysis in this paper. Given that detailed information exists about
the composition of the stock of cash, it might also be interesting to evaluate the age of different
components and their role in hoarding behavior. It should also be possible to derive a new measure
of financial flexibility by using a Herfindahl concentration index on the sources of funds that constitute the stock of cash to see how this compares with other measures put forth in the literature,
such as those of Arslan, Florackis, and Ozkan (2010). Finally, it might be profitable to use quantile
regression analysis to determine precisely which firms fared best and worst over the Great Recession
and to study their relative financial characteristics.

w-10-16

Affective Decision Making: A Theory of Optimism Bias
by Anat Bracha and Donald J. Brown

complete text: http://www.bostonfed.org/economic/wp/wp2010/wp1016.htm
e-mail: anat.bracha@bos.frb.org, donald.brown@yale.edu

Motivation for the Research
Many decisions, such as working on a project, getting a flu shot, or buying insurance, require an estimate of the probabilities of future events: the chances of a project’s success, of falling sick, or of being
involved in an accident. In assessing these probabilities, decision makers tend toward optimism bias, defined as the tendency to overestimate the likelihood of favorable future outcomes and underestimate the likelihood of unfavorable future outcomes.
Optimism bias affects both microeconomic and macroeconomic activity. For example, the investment of CEOs who are optimistic regarding their firm’s future performance is more sensitive to cash flow, which distorts their investment decisions (Malmendier and Tate 2005); optimistic CEOs are
also 65 percent more likely to complete mergers, to overpay for target companies, and to undertake value-destroying mergers (Malmendier and Tate 2008). On the macroeconomic level, Robert Shiller (2000, 2005) makes the case that irrational exuberance contributes to generating bubbles
in financial markets, where irrational exuberance is “wishful thinking on the part of investors that
blinds us to the truth of our situation.” Shiller points out several psychological and cultural factors
that affect individuals’ beliefs and consequently the investment behavior that leads to real macro-level effects. Many of these factors can be summarized as optimistically biased beliefs.
Yet optimism bias is inconsistent with the independence of decision weights and payoffs found in
models of choice under risk, such as expected utility, subjective expected utility, and prospect theory.
Research Approach
To explain the evidence suggesting that agents are optimistically biased, the authors suggest an
alternative model of risky choice where decision weights—labeled affective or perceived risk—are
endogenized. More specifically, the authors consider two systems of reasoning: the rational process
and the emotional process. The rational process decides on an action, while the emotional process
forms a perception of risk and in doing so is optimistically biased. The two processes interact to
yield a decision. This interaction is modeled as a simultaneous-move intrapersonal potential game,
and consistency between the two processes, which represents the agent’s choice, is the equilibrium
outcome realized as a pure strategy Nash equilibrium of the game.
This novel formulation of optimism bias, by employing a simultaneous choice of action and beliefs
where the tradeoff is accomplished through a game, may be viewed as a model of the specialization
and integration of brain activity considered in recent neuroscience studies (for instance, Reisberg
2001; Gray, Braver, and Raichle 2002; Camerer, Loewenstein, and Prelec 2004; Pessoa 2008). This
model is also consistent with the psychology literature that draws a distinction between analytical
and intuitive, or deliberate and emotional, processing (Chaiken and Trope 1999).
Formally, the rational process coincides with the expected utility model, where for a given risk perception (meaning the affective probability distribution), the rational process chooses an action to
maximize expected utility. The emotional process forms a risk perception by selecting an optimal
risk perception that balances two contradictory impulses: (1) affective motivation and (2) a taste for
accuracy. This is a definition of motivated reasoning, a psychological mechanism where emotional
goals motivate an agent’s beliefs (see Kunda 1990), and is a source of psychological biases, such as
optimism bias. Affective motivation is the desire to hold a favorable personal risk perception—optimism—and in the model it is captured by the expected utility term. The desire for accuracy is


modeled as a mental cost the agent incurs for holding beliefs that depart from her base rate probabilities,
given her desire for favorable risk beliefs. The base rate probabilities are the beliefs that minimize
the mental cost function of the emotional process, that is, the risk perception that is easiest and least
costly to justify. In many instances, one can think of the baseline probabilities as the empirical relative frequencies of the states of nature.
As an application of affective decision making, the authors present an example of the demand for
insurance with two states of the world: a bad state and a good state. The relevant probability distribution in insurance markets is personal risk; hence, the demand for insurance may depend
on optimism bias. Affective choice in insurance markets is defined as the insurance level and risk
perception that constitute a pure strategy Nash equilibrium of the affective decision making (ADM)
intrapersonal potential game.
The authors show that the ADM intrapersonal game is a potential game, where a (potential) function
of a penalized subjective expected utility (SEU) form characterizes the entire game. This potential function has a natural interpretation as the utility function of the composite agent, or the integration of the two
systems, and the authors use it to derive the axiomatic foundation of ADM potential maximizers.
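To make the structure of the intrapersonal potential game concrete, the Python sketch below maximizes a penalized expected-utility potential for a two-state insurance problem. The functional forms (log utility, a quadratic mental-cost term) and every parameter value are illustrative assumptions for exposition only, not the specification used by the authors.

import numpy as np

W, LOSS, PREMIUM = 10.0, 6.0, 0.25   # wealth, loss in the bad state, price per unit of coverage
Q0, THETA = 0.2, 0.05                # base rate probability of loss, weight on belief distortion

def utility(c):
    return np.log(np.maximum(c, 1e-9))

def expected_utility(x, q):
    # Rational process: expected utility of coverage x under perceived risk q.
    c_bad = W - LOSS + x - PREMIUM * x    # consumption if the loss occurs
    c_good = W - PREMIUM * x              # consumption if it does not
    return q * utility(c_bad) + (1.0 - q) * utility(c_good)

def mental_cost(q):
    # Emotional process: cost of holding a risk perception q away from the base rate Q0.
    return (q - Q0) ** 2 / (2.0 * THETA)

def potential(x, q):
    # Penalized-SEU potential; its maximizer is a pure strategy Nash equilibrium of the game.
    return expected_utility(x, q) - mental_cost(q)

xs = np.linspace(0.0, LOSS, 301)           # candidate coverage levels
qs = np.linspace(1e-3, 1.0 - 1e-3, 301)    # candidate risk perceptions
X, Q = np.meshgrid(xs, qs, indexing="ij")
i, j = np.unravel_index(potential(X, Q).argmax(), X.shape)
print(f"ADM equilibrium: coverage {xs[i]:.2f}, perceived risk {qs[j]:.3f}")

k = expected_utility(xs, Q0).argmax()      # standard EU benchmark holds beliefs at the base rate
print(f"EU benchmark:    coverage {xs[k]:.2f}, risk held at {Q0}")

Under these made-up parameters the equilibrium risk perception falls below the base rate and coverage falls below the standard expected-utility benchmark, illustrating one of the two directions of exaggeration the model allows.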
Key Findings
•	The emotional process leads to exaggerated choices relative to the standard expected utility model—
agents will buy too much or too little insurance.
•	Choices are subject to framing “context” effects—if the agents are manipulated to think first of risk,
they will generally buy less insurance than if their attention is manipulated to think first of insurance.
•	Report and choice tasks are different—reported risk will tend to be lower than the risk implied in
the actual action (insurance) taken.
•	Consistent with consumer research, the ADM model shows that campaigns intended to educate
consumers on the magnitude of their potential loss may backfire. That is, these campaigns may lead
consumers to purchase less, rather than more, insurance. Hence, the ADM model suggests that the
failure of the expected utility model to explain some datasets may be due to systematic affective biases.
•	There is a relationship between risk and ambiguity, and the ADM model has an alternative interpretation as ambiguity-seeking behavior. The authors draw a distinction between endogenous
and exogenous ambiguity. Endogenous ambiguity is generated by the agent in a skewed manner.
If the individual is optimistic, then the generated endogenous ambiguity would be favorable to
her; therefore, in this case being optimistic is being ambiguity-seeking. In this sense, attitudes
toward ambiguity are equivalent to holding optimistic attitudes. In contrast, uncertain or ambiguous situations are instances of exogenous ambiguity, meaning ambiguity that is imposed on
the individual. Using this distinction between endogenous and exogenous ambiguity together with existing studies, one would expect to find ambiguity-seeking à la ADM in endogenously ambiguous situations and ambiguity-aversion in exogenously ambiguous situations.
Implications
The ADM model proposed by the authors reconciles some of the discrepancies between actual decision making under risk and standard models of choice under risk, such as expected utility, subjective
expected utility, and prospect theory. The ADM model offers a more complex and nuanced interpretation of decision making under risk in which decisions are the product of two processes, an approach that
is consistent with recent literature from the fields of psychology and neuroscience.


w-10-17									

The Financial Structure of Startup Firms: The Role of
Assets, Information, and Entrepreneur Characteristics
by Paroma Sanyal and Catherine L. Mann

complete text: http://www.bostonfed.org/economic/wp/wp2010/wp1017.htm
e-mail: sanyal.paroma@gmail.com, clmann@brandeis.edu

Motivation for the Research
Financial structure is central to a firm’s business strategy and has important implications for firm
behavior, yet little is known about the financial structure of startup firms. Theoretical research and
most empirical investigations have focused on large established firms, which can tap an array of
financial sources, such as stock equity or commercial paper, a scenario quite different from the
situation facing small firms. Most empirical research on small firms has focused on ongoing firms
despite recent research revealing the importance of startups for economic vibrancy and job creation (Haltiwanger, Jarmin, and Miranda 2010; Kane 2010; Stangler 2010). During times of financial
crisis, such as 2008–2010, it is difficult to determine whether credit conditions affect startup activity without having a benchmark assessment of the financial structure of startup firms during more
normal credit conditions. The question addressed by this paper is whether the relative importance of
internal funds, external debt, and external equity that comes from established-firm theory plays out
for startups, which have different asset and information characteristics as well as different available
financial resources. The paper’s contribution arises from the fact that the research is based on the
Kauffman Firm Survey (KFS) dataset, which tracks a panel of 5,000 businesses from their year of initiation in 2004. These data enable the authors to compare the financial structures of firms at inception
with the structures predicted by the theories of established firms and with the findings of empirical
investigations of ongoing small firms.
Research Approach
In extensions of Modigliani-Miller (1958), theoretical analyses of large established firms have
addressed how the degree of asset specificity (asset value at bankruptcy) and information opacity
(alignment of manager and shareholder interests) influences governance and financial structure. Established-firm theory finds that, on the one hand, firms with highly specific assets (low liquidation
value at bankruptcy) should have a higher proportion of equity relative to debt, since stockholders
in principle can exercise greater control over the operations of the firm, whereas debtholders cannot
appropriate the highly specific assets. On the other hand, under conditions of information opacity
about managers’ activities, after first using internal resources the firm then should use bank debt,
which disciplines management, and only lastly turn to external equity for financing, since ensuring
the alignment of interests between managers and shareholders is more difficult.
Previous research points out that in the case of startups there are no ongoing operations and no track
record by which to judge the firm. This information opacity makes external financing more difficult
to obtain at the nascent stages. A startup’s potential external equity investors (such as angel or venture capital) may have limited information about the founder (unless s/he is a serial entrepreneur)
and about the prospects for the enterprise and may therefore demand a high ownership stake for a
given financial outlay. From the standpoint of the owner-founder, internal finance is preferred, followed by external debt such as bank financing, and only lastly would the founder use external equity,
which is expensive in terms of ownership stake.
These general predictions based on information opacity are qualified by the characteristics of the
assets of most startups. In small startups the entrepreneur provides not only managerial expertise,
but also financial and human capital to the firm. Such specific human capital may not be easily


transferable to alternative uses, which compounds the information opacity. The “inalienable” nature
of the entrepreneur’s human capital exacerbates the tension between the owner and debtholders
because the owner can threaten to walk away. Therefore, firms with a high degree of asset specificity
should be financed primarily by the entrepreneur’s own resources, followed by external equity such
as venture capital, and last by external debt.
Insights gleaned from theory suggest that startups would use internal funds first, followed by external resources, with predictions on the external debt-equity mix unclear and dependent on the
relative importance of asset specificity and information opacity. However, despite this theoretical
preference for internal finance, Berger and Udell (2003) reveals the importance of debt financing for
young firms in the United States, including high-growth startups. Therefore, in practice, internal
resource constraints faced by the entrepreneur mean that startups may have to rely primarily on
external financing of one sort or another.
Outside the issues of asset specificity, information opacity, and financial constraints, substantial empirical work focuses on the relationship between financial structure and entrepreneur characteristics
such as education, strategic alliances and networks, and experience of the founding team.
Ninety-eight percent of the 5,000 businesses tracked by the KFS have fewer than 25 employees.
Each business has a unique identification number, and the original survey posed more than 1,400
questions to each firm in the survey, including detailed questions on financial structure, owner and
founder characteristics, business and innovation activity, and location. The authors of this paper
examine the firms in 2004, their founding year, considering these entrepreneurial characteristics in
conjunction with financial structure. With regard to race and gender, the authors examine whether
the financial structure of firms owned by African-Americans and women differs from that of other
startups, and, specifically, whether these firms have less external funding.
The authors use multinomial logit applied to the KFS dataset to examine the financial structure of
startups, looking first at internal debt or equity versus external debt or equity. They then look into
the type of external debt, via a six-way decomposition of startups’ financial structure. To do this, they
take owner equity to be the base financial resource, with the other five sources being (1) internal
debt and equity (that is, equity owned by family, and loans from friends, family, and employees), (2)
external debt in the form of a bank loan, (3) external debt in the form of a personal or business credit
card, (4) other external debt, such as loans from the government and other businesses, and (5) equity
from venture capitalists, angel investors, and other sources.
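The Python sketch below shows the shape of such a six-way multinomial logit estimated on synthetic data. The covariate names are hypothetical stand-ins suggested by the discussion above rather than actual KFS variables, and the code is not the authors’ estimation routine.

# Six-way multinomial logit on synthetic data (illustrative only; not KFS data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "tangible_assets": rng.normal(size=n),         # proxy for collateralizable assets
    "has_patent": rng.integers(0, 2, size=n),      # intellectual-property indicator
    "home_based": rng.integers(0, 2, size=n),      # information-opacity indicator
    "college_founder": rng.integers(0, 2, size=n), # founder education
})
# Outcome categories: 0 = owner equity (base), 1 = internal debt/equity,
# 2 = bank loan, 3 = credit card, 4 = other external debt, 5 = external equity.
df["fin_structure"] = rng.integers(0, 6, size=n)

X = sm.add_constant(df[["tangible_assets", "has_patent", "home_based", "college_founder"]])
result = sm.MNLogit(df["fin_structure"], X).fit(disp=False)
# Each column of coefficients gives the log-odds of a funding category relative
# to the owner-equity base; with real data these would map to the findings below.
print(result.summary())

In the authors’ application, a positive coefficient on tangible assets in the bank-loan equation, for example, would correspond to the finding that startups with more collateralizable assets are more likely to carry external debt.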
Key Findings
•	Startups with more physical assets or those where the entrepreneurs have other similar businesses
are more likely than other startups to use external debt in the financial structure, since these assets
have a high liquidation value.
•	Startups with human capital embodied in the entrepreneur or with intellectual property assets
have a lower probability of using debt, consistent with the higher asset specificity and lower collateral value of these assets.
• Startups characterized as small, unincorporated, solo, first-time, or home-office-based are more
likely to be financed by self, family and friends, and importantly through credit cards, as these
startups have both highly specific assets and information opacity.
•	Startups with more educated founders and with non-African-American founders are more likely than other startups to be financed by external sources.
•	Controlling for other attributes of the startup, the financial structure of women-owned startups
does not differ from that of other startups.


•	High-tech startups’ financial structure differs significantly from the financial structure of startups
in other business sectors.
Implications
Consistent with theoretical underpinnings based on asset specificity, the findings show that startups
with more tangible assets as potential collateral are more likely to use external debt in their financial
structure, since these assets have a high liquidation value. Entrepreneurs with other businesses as
collateral are less likely to give up control to external equity investors. On the other hand, all else
being equal, startups with higher human capital embodied in the entrepreneur or more intellectual
property assets have a lower probability of using debt than other startups, consistent with the higher
asset specificity and lower collateral value of these assets.
In terms of information opacity, startups located in the entrepreneur’s home are the most opaque
and their financial structure is dominated by credit card debt. Team-run startups are less likely to use
debt financing, particularly credit cards and other external loans, and, consistent with their greater
personal resources and available information, more likely to have internal and external equity in their
financial structure. Serial entrepreneurs are equally likely to finance their businesses using their own
resources, bank loans, or external equity, since more information is available about these entrepreneurs, which mitigates the information opacity problem.
In terms of owner attributes, some—but importantly not all—of the findings mirror the research on ongoing small businesses. Educated entrepreneurs are more likely to use debt financing. African-American
entrepreneurs are more likely to use their own resources to finance their business and are less likely to
use credit card or nonbank debt. An important finding is that the financial structure of women-owned
startups does not differ from that of male-owned startups, controlling for many other attributes.
Regional factors and local conditions relate to the financial structure of startups. Areas with better-educated resident populations may have greater personal resources to finance startups using internal
debt. Startups in innovative states and states with higher venture capital activity have a greater probability of having external equity in their financial structure. Startups in larger states have a higher
probability of having bank loans in their financial structure.
Some of the biggest differences in the financial structure of high-tech startups and startups in other
sectors can be traced to the relationship between financial structure and race, citizenship, and business knowledge.

Public Policy Briefs
b-10-3 								

Evidence of a Credit Crunch? Results from the
2010 Survey of First District Banks

by Jihye Jeon, Judit Montoriol-Garriga, Robert K. Triest, and J. Christina Wang
complete text: http://www.bostonfed.org/economic/ppb/2010/ppb103.htm
e-mail: jihye.jeon@bos.frb.org, judit.montoriol-garriga@bos.frb.org, robert.triest@bos.frb.org, christina.wang@bos.frb.org

Motivation for the Research
Restricted access to credit, especially decreased availability of bank credit to small businesses, is often cited as a potentially important factor in amplifying the effects of the recent recession and contributing to the weakness of the subsequent expansion. In an effort to gather first-hand data to help assess how the supply of, and demand for, bank credit changed in the period following the financial crisis, the Research Department and Financial Institution Relations and Outreach (FIRO) group of the Federal Reserve Bank of Boston cooperated to conduct a survey of First District community banks in May 2010. The survey was designed (1) to assess how much community banks were willing and able to lend to local businesses that were formerly customers of large banks but had lost access to credit in the aftermath of the financial crisis and (2) to understand the role of Small Business Administration (SBA) lending in promoting business lending by community banks in New England.

[Figure: Change in underwriting standards for business lines of credit since August 2008. Bars show the fraction of respondent banks, separately for existing customers and new customers, reporting that standards eased considerably, eased somewhat, remained unchanged, tightened somewhat, or tightened considerably. Source: Authors’ calculations.]
Research Approach
The survey questionnaire was sent to 268 banks; of these, 135 responded. The response rate for qualitative questions was far higher than the response rate for quantitative questions. At least one of the
qualitative questions was answered by 124 banks and 121 banks answered all of the qualitative questions. In contrast, 84 banks answered one or more of the quantitative questions and only 44 banks
answered all of the quantitative questions.
Key Findings
•The survey responses provide some evidence that lending standards for commercial loans have
tightened moderately at community banks since late 2008, with the tightening being more severe
for new customers than for those that already had a relationship with the respondent bank. The
survey also reveals that expansions of several SBA guarantee programs since the crisis have ameliorated possible credit constraints on small businesses.
•	More than 40 percent of respondents reported that the amount of new originations remained essentially unchanged during 2008:Q4. On the other hand, more banks (slightly over 40 percent)
reported that origination volume decreased than reported that originations increased (16 percent).
•	Business loan applications from new customers decreased less than overall applications, suggesting
that businesses that had relied on large commercial banks for credit may have turned instead to
community banks for credit as the large banks cut back on lending because of the serious capital
constraint stemming from subprime-induced balance sheet losses.


• Of the banks that responded to the survey, the vast majority (78 percent) indicated that they
participate in one or more of the SBA programs. Slightly over one-third of these banks (35
percent) were SBA-preferred lenders. On average, banks increased business lending by $11.23
million as a result of the availability of SBA programs. As expected, this average increase was
larger for SBA-preferred lenders ($22.69 million) than for nonpreferred lenders ($4.49 million).
The median values are somewhat lower, given the skewness of the distribution. Overall, these
results suggest that the SBA programs were somewhat effective at promoting business lending
among community banks in New England, especially among the SBA-preferred lenders.
Implications
Although tighter lending standards for new customers than for existing customers make sense at any
given time, it is less obvious why underwriting standards for new customers should have been tightened more than for existing customers during the last two years. One possibility is that the community
banks believed that the information asymmetry problem with regard to firms that used to but were no
longer able to borrow from large banks had become more severe, since larger banks are likely to shed
their most problematic customers. Another possible reason is that community banks wanted to slow
the growth of their assets in the face of a rather uncertain economic outlook, while protecting their
investment in relationships with existing customers.
The community banks generally did not report that balance sheet problems impeded their ability to
lend. In contrast, many large commercial banks suffered graver losses during the financial crisis due to
their greater exposure to subprime-based assets and as a result were more likely to be forced to raise
their capital ratio by restricting lending. To the extent that some larger banks restricted lending as a
result of balance sheet problems, the survey responses suggest that the customers of these large banks
who were denied additional credit also would have faced a difficult time in obtaining credit from the
community banks.
Information gathered through this survey suggests that New England community banks have tightened their loan underwriting standards, especially for new customers, since the onset of the financial crisis. Nevertheless, deteriorating borrower qualifications and reduced demand for loans have also
clearly played a role in the contraction of bank credit.
The persistence of tighter standards is consistent with similar indications from the Senior Loan Officer Opinion Survey (SLOOS) of tighter lending standards at both large and small banks. The survey
data suggest that businesses that were turned away from large banks would generally have found it
difficult to get credit at community banks. Overall, community banks do not appear to have been able
or willing to offset the contraction in the credit supply stemming from the actions of large banks. On
the other hand, the survey responses provide some evidence in support of the efficacy of SBA lending
programs in boosting the supply of credit to small businesses. This suggests that further expansion of
the SBA programs could potentially be effective in increasing the supply of credit to small businesses,
all else being equal. More data and analysis of this issue should prove useful.

Multimedia
The Great Recession (video presentation)
by Christopher L. Foote

complete video: http://www.bostonfed.org//videos/index.htm
e-mail: chris.foote@bos.frb.org

This four-part video presentation examines the Great Recession, paying particular attention to
New England. A senior Boston Fed economist analyzes the recession from four perspectives: (1)


the expansion and bursting of the housing bubble; (2) the consequences of the recession for output,
employment, and inflation; (3) the fiscal and monetary policy response to the recession; and (4) the
Great Recession in the context of longer-term trends in the labor market. The presentation reflects
his independent views as a researcher and does not represent official views or policies of the Federal
Reserve System.

Contributing Authors
Alessandro Barattieri is a Ph.D. student at Boston College.
Michelle L. Barnes is a senior economist and policy advisor in the research department of the
Federal Reserve Bank of Boston.
Susanto Basu is a professor of economics at Boston College, a visiting scholar in the research
department at the Federal Reserve Bank of Boston, and a research associate at the National Bureau
of Economic Research.
Anat Bracha is an economist with the Research Center for Behavioral Economics in the research
department at the Federal Reserve Bank of Boston.
Donald J. Brown is the Phillip R. Allen Professor of Economics in the department of economics
at Yale University.
Daniel H. Cooper is an economist in the research department at the Federal Reserve Bank of
Boston.
Lynn M. Fisher is an associate professor of real estate in the department of urban studies and planning and at the Center for Real Estate at the Massachusetts Institute of Technology.
Christopher L. Foote is a senior economist and policy advisor in the research department of the
Federal Reserve Bank of Boston.
Andreas Fuster is a graduate student in the economics department at Harvard University and a
graduate fellow in the research department at the Federal Reserve Bank of Boston.
Kristopher S. Gerardi is a research economist and assistant policy advisor in the research department at the Federal Reserve Bank of Atlanta.
Gita Gopinath is an associate professor of economics at Harvard University and a visiting scholar
in the research department at the Federal Reserve Bank of Boston.
Peter Gottschalk is a professor of economics at Boston College and a research fellow at IZA.
Michael Harre is a post-doctoral fellow with the Centre for the Mind at the University of Sydney.
Oleg Itskhoki is an assistant professor of economics and international relations in the department
of economics and the Woodrow Wilson School of Public and International Affairs at Princeton
University and an NBER faculty research fellow.
Dean T. Jamison is a professor of global health at the University of Washington, Seattle.


Julian C. Jamison is a senior economist at the Federal Reserve Bank of Boston’s Research Center
for Behavioral Economics and a visiting lecturer at Harvard University.
Jihye Jeon is a research assistant in the research department of the Federal Reserve Bank of Boston.
Lauren Lambie-Hanson is a doctoral student in the department of urban studies and planning at
the Massachusetts Institute of Technology and a graduate fellow in the research department at the
Federal Reserve Bank of Boston.
Catherine L. Mann is the Barbara and Richard M. Rosenberg Professor of Global Finance in the
International Business School at Brandeis University and a visiting scholar in the research department of the Federal Reserve Bank of Boston.
Judit Montoriol-Garriga is a financial economist in the risk and policy analysis unit of the Federal
Reserve Bank of Boston.
David Newth is a research scientist with the CSIRO Centre for Complex Systems Science.
Ali K. Ozdagli is an economist in the research department of the Federal Reserve Bank of Boston.
N. Aaron Pancost is a research associate in the research department of the Federal Reserve Bank
of Boston.
Paroma Sanyal is an assistant professor of economics at Brandeis University.
Scott Schuh is director of the Consumer Payments Research Center and a senior economist in the
research department of the Federal Reserve Bank of Boston.
Oz Shy is a senior economist at the Federal Reserve Bank of Boston and a member of the Consumer
Payments Research Center in the research department.
Joanna Stavins is a senior economist and policy advisor and a member of the Consumer Payments
Research Center in the research department of the Federal Reserve Bank of Boston.
Robert K. Triest is a vice president and economist in the research department of the Federal
Reserve Bank of Boston.
J. Christina Wang is a senior economist in the research department of the Federal Reserve Bank
of Boston.
Paul S. Willen is a senior economist and policy advisor in the research department of the Federal
Reserve Bank of Boston and a faculty research fellow at the National Bureau of Economic Research.
David H. Wolpert is a senior computer scientist at the NASA Ames Research Center and a
consulting professor in the aeronautics and astronautics department of Stanford University.

