research review
Issue No. 8, July 2007 – December 2007
federal reserve bank of boston

Research Review provides an overview of recent research by economists of
the research department of the Federal Reserve Bank of Boston. Included are
summaries of scholarly papers, staff briefings, and Bank-sponsored conferences.

Research Department

Jeffrey C. Fuhrer
Executive Vice President and Director of Research

Geoffrey M. B. Tootell
Senior Vice President and Deputy Director of Research

Economists
Jane Sneddon Little, VP
Giovanni P. Olivei, VP
Michelle L. Barnes
Katharine Bradbury
Mary A. Burke
Christopher L. Foote
Lorenz Goette
Fabià Gumbau-Brisa
Jane Katz
Yolanda K. Kodrzycki
Stephan Meier
Scott Schuh
Joanna Stavins
Robert K. Triest
J. Christina Wang
Paul S. Willen

Manager
Patricia Geagan, AVP

Editors
Suzanne Lorant
Elizabeth Murry

Graphic Designer
Heidi Furse

Research Review is available without charge. To be placed on the mailing list,
or for additional copies, please contact the Research Library:

Research Library—D
Federal Reserve Bank of Boston
600 Atlantic Avenue
Boston, MA 02210
Phone: 617.973.3397
Fax: 617.973.4221
E-mail: boston.library@bos.frb.org

Research Review is available on the web at
http://www.bos.frb.org/economic/ResearchReview/index.htm.

Views expressed in Research Review are those of the individual authors and do
not necessarily reflect official positions of the Federal Reserve Bank of Boston
or the Federal Reserve System. The authors appreciate receiving comments.

Research Review is a publication of the
Research Department of the Federal Reserve
Bank of Boston.
ISSN 1552-2814 (print)
ISSN 1552-2822 (online)
© Copyright 2008 Federal Reserve Bank of Boston
Research Department Papers Series of the Federal Reserve Bank of Boston
Public Policy Discussion Papers present research bearing on policy issues. They are
generally written for policymakers, informed business people, and academics. Many
of them present research intended for professional journals.
Working Papers present statistical or technical research. They are generally written
for economists and others with strong technical backgrounds, and they are intended
for publication in professional journals.
Public Policy Briefs present briefing materials prepared by Boston Fed research
staff on topics of current interest concerning the economy.
Research department papers are available online only.
http://www.bos.frb.org/economic/research.htm

Executive Summaries in This Issue

Public Policy Discussion Papers

p-07-4   Consumer Behavior and Payment Choice: 2006 Conference Summary
         Margaret Carten, Dan Littman, Scott Schuh, and Joanna Stavins   4

p-07-5   Selection into Financial Literacy Programs: Evidence from a Field Study
         Stephan Meier and Charles Sprenger   6

Working Papers

w-07-7   Does Competition Reduce Price Discrimination?
         New Evidence from the Airline Industry
         Kristopher Gerardi and Adam Hale Shapiro   8

w-07-8   Population Aging, Labor Demand, and the Structure of Wages
         Margarita Sapozhnikov and Robert K. Triest   11

w-07-9   Doing Good or Doing Well? Image Motivation and
         Monetary Incentives in Behaving Prosocially
         Dan Ariely, Anat Bracha, and Stephan Meier   14

w-07-10  Space and Time in Macroeconomic Panel Data:
         Young Workers and Unemployment Revisited
         Christopher L. Foote   16

w-07-11  How Much Is a Friend Worth? Directed Altruism and
         Enforced Reciprocity in Social Networks
         Stephen Leider, Markus M. Möbius, Tanya Rosenblat, and Quoc-Anh Do   18

w-07-12  Social Networks and Vaccination Decisions
         Neel Rao, Markus M. Möbius, and Tanya Rosenblat   21

w-07-13  Active Decisions and Prosocial Behavior
         Alois Stutzer, Lorenz Goette, and Michael Zehnder   23

w-07-14  The Effects of Expectations on Perception: Experimental Design Issues
         and Further Evidence
         Tyler Williams   24

w-07-15  Subprime Outcomes: Risky Mortgages, Homeownership Experiences,
         and Foreclosures
         Kristopher Gerardi, Adam Hale Shapiro, and Paul S. Willen   27

w-07-16  Input and Output Inventories in General Equilibrium
         Matteo Iacoviello, Fabio Schiantarelli, and Scott Schuh   32

Public Policy Briefs

b-07-2   A Principal Components Approach to Estimating Labor Market Pressure
         and Its Implications for Inflation
         Michelle L. Barnes, Ryan Chahrour, Giovanni P. Olivei, and Gaoyan Tang   35

Contributing Authors   38

Public Policy Discussion Papers
p-07-4

Consumer Behavior and Payment Choice:
2006 Conference Summary
by Margaret Carten, Dan Littman, Scott Schuh, and Joanna Stavins
complete text: http://www.bos.frb.org/economic/ppdp/2007/ppdp0704.htm
email: margaret.carten@bos.frb.org, daniel.a.littman@clev.frb.org, scott.schuh@bos.frb.org,
joanna.stavins@bos.frb.org

Motivation for the Research
As a result of the technological revolution in information processing, we are in the midst of a historic migration from older, paper-based payment practices to electronic-based payment methods.
The traditional practice of using cash, money orders, or, most commonly, written paper checks
mailed to settle bills is being replaced by a host of other payment options, such as ATM cards, credit cards, debit cards, pre-paid cards, and online payment methods. While in hindsight this transformation has been underway since the mid-1990s, the dramatic aggregate decline in U.S. paper
check usage was not documented conclusively until a 2002 study by the Federal Reserve Board.
As part of its ongoing program to raise awareness of and encourage research about these sweeping
changes in payment practices, the Federal Reserve Bank of Boston’s Emerging Payments Research
Group held a second conference titled “Consumer Behavior and Payment Choice” in July 2006.
Designed to be “an unusual conference for unusual times,” the conference continued the inquiry
begun with the inaugural 2005 conference, this time with a concentration on exploring the numerous individual sources of information available on consumer payment practices.
Research Approach
By bringing together two groups of payments experts, one drawn from the private-sector payments
industry, the other from the academic, research, and policymaking communities, the organizers
aimed to capture the advantages gained by sharing information available from numerous but disparate and widely scattered sources on contemporary consumer payment practices. The goal of the
conference was to foster a constructive environment for pooling information, forming productive
research collaborations, and advancing the available collective knowledge to enable consumers,
firms, financial institutions, and policymakers to make the best choices in a time of rapidly changing payment practices.
Key Findings
• Debit cards are the fastest-growing electronic payment method, since for many consumers, a debit
card linked to a bank account is the best substitute for cash or paper checks, and is preferable to
a credit card. Generational differences among consumers will play a very large role in fostering a
larger migration away from checks and credit cards to debit card payments: younger consumers
tend to favor debit cards, while older adults tend to favor paper checks and credit cards.
• In the United States, payment cards, especially debit cards, are displacing cash in low-value transactions, defined as $25 or less. The adoption of card-based payment methods will accelerate along
with dynamic factors that influence payments behavior, such as demographic shifts and technological advances. The shift of substantial volumes of consumer purchases to the Internet is indicative of a shift from paper instruments (cash and checks) to electronic card payments. However,
the expectation is that the substitution of payment cards for cash will be uneven, penetrating
industry segments and population groups at varying rates.
• Cash, while no longer king, still rules in the sense that it maintains a strong presence in the
American payments landscape, and is likely to remain a viable and important payment method
for the foreseeable future. A distinct advantage that cash has over other payment methods is its
high degree of privacy and anonymity, qualities that are increasingly valued in the sometimes too
intrusive information age.
• Another issue affecting the move to electronic payment methods is that while these methods are more efficient, they are also more costly. The United States has the highest interchange fees
among the industrialized countries, and is the only nation where these costs are increasing. While
the move from cash to electronic payments may, in the long run, be better for social welfare, the
current situation seems to be one in which the more efficient method may be less equitable than
the less efficient one.
• While consumer demand for electronic payment methods is important, the supply of electronic
payment services is also necessary for promoting the transition to electronic payments. For example, in order to take full advantage of current payment technologies, many vending machines and
subway fare machines need to be redesigned.
• A recurring conference theme was the difficulty and costliness of conducting consumer payment
surveys that yield accurate and thorough information. Problems with existing data sources are that
these may not be truly representative of American consumers, as many surveys are prone to sample selection bias. For instance, some surveys intentionally restrict consumers based on age, geographic location, payment methods used, account holding, income, and other wealth characteristics. There is a need to be careful about using the data sources that currently exist, and an urgent
need for new and better information sources.
Implications
Some of the most complete sources on consumer payment practices are maintained by private firms
in the payments industry, and given the proprietary nature of this data, full disclosure of this information is not permissible. Yet even if readily accessible, the collective body of existing information
on consumer payment practices still leaves much to be desired in terms of accuracy and comprehensiveness. The current data sources are often redundant and not wholly representative of the
entire range of consumers, and hence are limited in scope. Given the vast differences among various segments of consumers regarding their payment choices, it is important to disaggregate the
trends in payment methods according to demographic characteristics.
There are myriad public interest needs for improving the information that currently exists on contemporary consumer payment practices. Business firms and consumers require better information
in order to make effective and efficient choices. It is unclear whether today’s consumers are making sound and informed choices about payment methods, or whether they are making economically important mistakes due to a lack of complete information, or due to a failure to completely
understand the available information. This same dilemma applies to policymakers, and further
underscores the need to assemble better information about how and why consumers make their
payment decisions. Theories of consumer behavior help to form the foundation of many financial,
fiscal, and monetary policies, so it is important to develop an accurate understanding of contemporary consumer payments behavior in order to make sound policy decisions. It is vitally important
for policymakers to understand how the payments system is going to evolve, and what the future
optimal payment system may be.


p-07-5

Selection into Financial Literacy Programs:
Evidence from a Field Study
by Stephan Meier and Charles Sprenger
complete text: http://www.bos.frb.org/economic/ppdp/2007/ppdp075.htm
email: stephan.meier@bos.frb.org and csprenger@ucsd.edu

Motivation for the Research
Because financial literacy has been shown to correlate with good financial decisionmaking, policymakers promote educational programs to improve individuals’ financial acumen. But who self-selects into these educational programs to acquire information about personal finance?
Understanding what kind of people decide to improve their financial literacy is crucial to assessing
the effects of financial education on economic behavior. If individuals select into educational programs based on unobservable characteristics that are directly linked to their financial outcomes, it
remains unclear whether those individuals “treated” with the information would have had outcomes
different from those of “untreated” individuals, even in the absence of such educational intervention. Additionally, those who voluntarily choose not to participate in the financial education programs may be those persons who would benefit the most.
Focusing on low-to-moderate-income individuals, this field study measures an individual’s discount factor independently of the decision whether or not to participate in a financial literacy program, and then investigates who self-selects into the educational program.
Research Approach
The authors offered more than 870 individuals the opportunity to participate in a short credit
counseling session (which included learning about their credit report and score), and independently measured individual time preferences using incentivized choice experiments to test whether self-selection into credit counseling is based on normally unobservable time preferences.
The study took place at a Volunteer Income Tax Assistance (VITA) site in a low-to-moderate-income neighborhood in Boston, Massachusetts. All taxpayers entering the VITA site while the
study was being conducted were offered a free, short credit counseling session while waiting for a
volunteer to help them with their taxes. Participation was free in the sense that the researchers paid
the cost of obtaining the credit report, individuals were explicitly informed that the credit report
request was a “soft inquiry” that would not affect their score, and most individuals already had to
wait a long time at the site for help with preparing their taxes. Concerns about identity theft were
unlikely to affect the decision to receive counseling, since individuals come to the VITA site voluntarily and entrust all their personal information to the tax preparation volunteers. Individuals
could choose to participate in the counseling session at any point in the tax preparation process.
Regardless of whether they opted into the credit counseling session, all individuals received a
preparatory packet with forms for their taxes, a survey asking for some sociodemographic and other
information, and a set of multiple price lists for the incentivized choice experiments. In these experiments individuals were asked to make a series of selections between receiving a smaller monetary
reward in an earlier period and a larger reward in a later period. After explaining how to fill in the
price lists and how the payment mechanism would work, the authors asked individuals to fill out
the surveys and complete the price choices. Those who elected to participate in the educational
program had a short credit counseling session before being assisted with tax preparation.
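To make the price-list mechanism concrete, consider a minimal illustration (the dollar amounts here are invented, not taken from the study). A respondent who is indifferent between a smaller reward now and a larger reward one period later reveals a per-period discount factor equal to their ratio:

\[
\delta = \frac{\text{earlier reward}}{\text{later reward}} = \frac{\$45}{\$50} = 0.90.
\]

The row of the price list at which a respondent switches from the smaller-sooner to the larger-later option therefore brackets his or her discount factor, with later switch points indicating less patience (a lower discount factor).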


Relationship of Discount Factors to
Self-Selection into Counseling

[Figure: Cumulative distributions (0 to 100 percent) of individual discount factors, ranging from 0.56 to 0.99, plotted separately for individuals who received counseling and those who did not.]

Note: Individuals who are more patient (indicated by higher discount factors) were more likely to choose counseling.

Key Findings
• Time preferences strongly influenced whether or not individuals chose to be educated about personal finance. The less individuals care about the future, the lower the probability that they will
elect to acquire information about a crucial aspect of personal finance. This result held when controlling for prior investment in human capital—both general (for example, educational attainment) and specific to financial awareness (for example, awareness of credit scores).
• Additionally, time preferences influenced information acquisition prior to receiving the offer of
the brief credit counseling session in the field study. Individuals who are more patient were more
likely to know what a credit score is, and, conditional on knowing what a credit score is, they were
more likely to believe that credit scores are important to their lives. Controlling for this prior
information acquisition, however, only partly explains the strong correlation between individual
time preferences and the individual participation decision.
• Comparing the two self-selected groups (those who chose to participate and those who chose not
to) shows that these people do not differ much in observable basic demographic characteristics
such as those typically controlled for in evaluation studies of education programs. The two groups
differ most in educational level and prior knowledge of what a credit score means. Both variables
are potentially correlated with time preferences (because these levels reflect investment in human
capital) and appear to be correlated with the participation decision.
• The two groups also differ in their experience with credit cards. Individuals who chose to become
better informed were more likely to have a credit card, and were more likely to have a substantial
amount of outstanding debt. However, even controlling for experience with credit cards, those
people who were more likely to self-select into credit counseling were those who are more patient.
• Only 55 percent of individuals who were offered the short credit counseling session chose to
accept the offer, while the rest declined.

Implications
The results of this study have important policy implications for financial literacy programs and
educational programs in general (for example, in the health domain). Self-selection on time preferences in attending financial education programs will affect the results of most evaluations of these
programs. For example, evidence on the positive effect of credit counseling programs is most likely biased upwards.
These findings indicate that individuals entering voluntary financial education programs probably
care more about the future than those who decide not to enter. Previous research has shown that
these more patient individuals (who are also the type of individual who chose to be “treated” in this
study) are more likely to have improved financial outcomes regardless of whether they participate in
education programs. Measured effects of “treatment” are therefore biased and the direction of the
bias is toward overestimation of positive effects. Unbiased evidence on the effect of financial education programs therefore requires randomized treatment, although the estimated effects are then
expected to be much smaller. In fact, one randomized study, published in 2003, finds very small
treatment effects. Additional unbiased studies are needed to evaluate whether promotion of financial education programs will indeed have positive effects on individual financial decisionmaking.
A second implication is that efforts to improve individual financial decisions through education programs are unlikely to reach those people who most need help. Making financial education mandatory, however, risks both irritating responsible consumers and having little effect on individuals who
would have avoided the program had it been voluntary. In any case, there is very little evidence as to
how, and whether, mandatory programs, like the one introduced in the new U.S. bankruptcy law,
work. One of the few convincing findings, in Bernheim, Garrett, and Maki (2001), shows that
mandatory financial education for high school students does increase future retirement savings.
A third implication concerns work that links financial literacy to the propensity to plan for the
future. The results of this study show that the discount factor, a more traditional preference parameter, may be able to explain information acquisition. Thus, what appears to be a propensity to plan
ahead may be a proxy for an individual’s discount factor, which gauges how one values the present
versus the future.
Evidence from this field research also raises questions as to whether individual time preferences are
fixed or are potentially susceptible to being influenced. The findings show that, even controlling for
education and prior financial knowledge, personal time preferences influence the willingness to
acquire new information. Future research should investigate the relationship between time preferences and such abilities as planning, imagination, and motivation in general. This will be crucial in
order to think about how time preferences are formed and hence how to promote financial literacy for all consumers.

Working Papers
w-07-7

Does Competition Reduce Price Discrimination?
New Evidence from the Airline Industry
by Kristopher Gerardi and Adam Hale Shapiro
complete text: http://www.bos.frb.org/economic/wp/wp2007/wp0707.htm
email: kris.gerardi@bos.frb.org, adam.shapiro@bos.frb.org


Motivation for the Research
The relationship between price discrimination and market structure has provoked a great deal of
economic research. Traditional economic theory argues that market power increases a firm’s ability
to sustain markups, and hence to implement price discrimination strategies. From this, it follows
that competitive firms cannot price discriminate, since they are price takers, while monopolists can
discriminate to the extent that there exists heterogeneity in consumers’ demand elasticities and a
usable sorting mechanism exists to distinguish between consumer types. The standard textbook
explanation therefore predicts that, other things being equal, more concentrated markets should be
characterized by more price discrimination. However, a number of recent theoretical articles have
argued that this may not be the case. In the absence of an accepted consensus on an overarching
theoretical model, understanding the relationship between market structure and price discrimination becomes an empirical question.
In a seminal empirical study of the airline industry—an industry that lends itself to price discrimination studies because the practice is so prominent in airlines’ pricing strategies and the industry
is rich with historical data—Borenstein and Rose (1994) found evidence of a negative relationship
between price discrimination and market concentration: routes with higher levels of competition
exhibited more price dispersion. They attributed this result to airline pricing practices that are
based on exploiting heterogeneity in customers’ brand preference, rather than solely on exploiting
heterogeneity in reservation prices for air travel. For example, business travelers tend to remain loyal
to a particular airline and ignore lower fares offered by competitors, while leisure travelers are more
apt to purchase tickets with lower fares regardless of brand. These findings spawned a new line of
research in the airline literature that attempts to verify the existence of a negative relationship
between concentration and price dispersion.
In the decade since the Borenstein and Rose study, the U.S. domestic airline industry has experienced major changes. Dramatic increases in the price of oil have placed severe upward pressure on
airlines’ operating costs, and the 2001 recession had a significant impact on the demand for domestic air travel. As a result, in recent years a number of the large, traditional U.S. “legacy carriers” have
been forced to declare bankruptcy. At the same time, a number of new airlines with vastly different business strategies have emerged. These aptly named “low-cost carriers” have increased competition along many domestic routes, threatening the remaining market shares of the legacy carriers. Although low-cost carriers initially targeted leisure travelers, their emergence has also benefited business travelers. This paper re-examines Borenstein and Rose’s findings, using more complete,
and more recent, data on airline prices.
Research Approach
The authors construct a panel data set of domestic airline ticket prices, focusing on direct, coach-class tickets issued by nine major U.S. carriers during the period from 1993 to 2006. In the panel,
each observation is a flight conducted by a specific airline between an originating and a destination
airport in a specific time period (year and quarter). Low-cost and regional carriers are excluded to
maintain consistency with Borenstein and Rose’s samples, and to follow the established practice of
many airline economists. However, the analysis does take into account the effects of competition
from low-cost and regional carriers. Ticket prices are obtained from the DB1B database of the
Bureau of Transportation Statistics, a 10-percent sample of all domestic tickets issued by airlines.
The DB1B database also includes information on the originating and destination airports, the number of passengers, the number of plane changes (stops), and the fare class. Additional route characteristics are drawn from the Bureau of Transportation Statistics T-100 Domestic Segment Database,
which contains domestic, non-stop segment data reported by all U.S. carriers. This information
includes the number of passengers transported, the flight origin and destination, the aircraft type
and available capacity, scheduled departures, actual departures performed, and aircraft hours.


Pricing Dynamics: Entry and Exit

[Figure: Two panels plot ticket prices ($0–$700) by year from 1993 to 2006. Panel 1, Philadelphia (PHL) to Orlando (MCO), US Airways, shows the 90th-percentile fares of ATA, AirTran, Delta Airlines, and Southwest, and the 90th and 10th percentiles of US Airways fares. Panel 2, Philadelphia (PHL) to Chicago (ORD), United Airlines, shows the 90th-percentile fares of ATA, Midway Airlines, Southwest, and US Airways, and the 10th percentile of US Airways fares.]

Note: This figure shows the entry and exit of carriers into two specified routes. Depicted are the 90th percentiles
of the entrants, and the 10th, 20th, 30th, 40th, 50th, 60th, 70th, 80th, and 90th percentiles of the incumbent carrier.

The analysis focuses on the Gini coefficient as the measure of price dispersion. The authors also calculate an additional statistic, the
inverse of the Herfindahl index, computed using passenger shares. This statistic can be interpreted
as the effective number of competitors per route. The authors segment the data into “leisure
routes”—routes with mainly price-sensitive leisure travelers—and “big-city routes”—routes with
both leisure and price-insensitive business travelers. By updating and extending the cross-sectional
regression analysis of Borenstein and Rose and performing a panel analysis using fixed-effects estimation that controls for all time-invariant, carrier-route-specific factors, the authors are able to
identify the effects on price dispersion over time of changes in a route’s competitive structure.
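As a rough illustration of the two statistics just described, the sketch below computes a route's fare Gini coefficient and the inverse Herfindahl index from passenger shares. It is a minimal reading of the definitions in the text, not the authors' code, and the fare and passenger numbers are invented.

```python
import numpy as np

def gini(fares):
    """Gini coefficient of a route's fare distribution (0 = all fares equal)."""
    x = np.sort(np.asarray(fares, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    # Standard formula for sorted data: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

def effective_competitors(passenger_counts):
    """Inverse Herfindahl index of passenger shares: the 'effective
    number' of carriers competing on a route."""
    s = np.asarray(passenger_counts, dtype=float)
    s = s / s.sum()                          # convert counts to shares
    return 1.0 / np.sum(s ** 2)

# Invented route-quarter: six observed fares and four carriers' passengers.
print(round(gini([90, 120, 180, 240, 310, 650]), 3))             # fare dispersion
print(round(effective_competitors([5000, 3000, 1500, 500]), 2))  # ~2.74 competitors
```

With shares of 0.50, 0.30, 0.15, and 0.05, the inverse Herfindahl index is 1/0.365, or roughly 2.7 effective competitors, even though four carriers nominally serve the route.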
Key Findings
• In cross-section estimation, the study was able to replicate the results of Borenstein and Rose,
finding a negative relationship between market concentration and price dispersion. However, the
panel analysis, in which any bias induced by time-invariant, route-specific effects was removed,
contradicts Borenstein and Rose’s results, finding instead a positive relationship between concentration and price dispersion. These results indicate that Borenstein and Rose’s cross-sectional estimates for route concentration suffer from a downward bias, possibly resulting from their omission
of plane size as an explanatory variable and their inclusion of distance in the instrument set.
• The effect of competition (in the form of an increase in the number of carriers on a route) is
stronger on routes that the authors identify as having a heterogeneous mixture of business travelers and leisure travelers. Specifically, an increase over time in the number of carriers on these
routes lowers prices at the top of the distribution to a greater extent than it lowers prices at the
bottom of the distribution, resulting in a decline in overall price dispersion. On routes with mostly leisure travelers, competition generally does not have a statistically significant effect on price
dispersion. Thus, business travelers are more affected by an increase in competition than are
leisure travelers, and the market entry of low-cost carriers or regional carriers has a stronger competitive effect than does the presence of legacy carriers.
Implications
These results lend support to the monopoly effect, showing that over time a decrease in market concentration (that is, an increase in competition) along a route results in a decrease in price dispersion,
in line with the theoretical textbook explanation of sustaining markups in monopoly markets.
In addition to the theoretical implications, the analysis also has potentially important public policy ramifications. The consequences for consumers of deregulation and privatization in a particular
industry depend on the relationship between competition and pricing in that industry. Since the
results of this study suggest that airlines are better able to price discriminate in more concentrated
markets, policies aimed at increasing the ease of new firms’ entry into existing airline markets
should narrow the gap between the prices charged to business and leisure consumers.
w-07-8

Population Aging, Labor Demand,
and the Structure of Wages
by Margarita Sapozhnikov and Robert K. Triest
complete text: http://www.bos.frb.org/economic/wp/wp2007/wp0708.htm
e-mail: msapozhnikov@crai.com, Robert.Triest@bos.frb.org

Motivation for the Research
It is widely recognized that almost all advanced countries will soon begin experiencing a profound
demographic shift that will affect labor force growth rates. In the United States over the next two
decades, the baby boom generation born between 1946 and 1964 will approach their early-to-mid-60s, the age range when normal retirement from the work force traditionally starts. As this shift
takes place, the elderly dependency ratio, defined as the ratio of those over age 65 to the population
aged 15–65, will increase dramatically.
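Written out, the ratio just described is:

\[
\text{elderly dependency ratio} = \frac{\text{population aged 65 and over}}{\text{population aged 15–65}}.
\]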


What is less well acknowledged is that the age distribution of the labor force, defined in this paper
as the working population aged 18 to 65 years old, will undergo a correspondingly momentous
change: while labor demand may outstrip labor supply amid an overall shortage of workers, there
will be a relative glut of older workers. It is firmly established in the research literature that the labor
market fortunes of workers are inversely related to the relative size of their birth cohort, and that
cohort crowding effects influence the wages large birth cohorts receive throughout their working
lives. As the population ages, what may happen to the wage structure of older workers—and the
structure of wages more generally—has important public policy implications. Anticipated wage
income can affect working and retirement guidelines, future payroll tax revenues, Social Security
benefits and payments, and aggregate earnings growth. The aim of this study is to explore how
changes in the age distribution of the U.S. working age population may impact the structure of
wages, and the challenges this might pose for public policy considerations.
Research Approach
The authors use wage data from the annual income and demographic supplement to the March
Current Population Survey spanning the years from 1963 through 2003. This data set is more recent
than ones used by previous researchers, and by tracking the oldest baby boomer cohorts through
their mid-50s, captures the majority of their working lives. Unlike prior studies, this one analyzes
the changes in the wages of women as well as those of men. The analysis groups individuals according to five educational attainment categories: those who did not finish high school, high school
graduates, those who completed 1–3 years of college, college graduates, and those with post-college graduate education. The empirical investigation explores changes in the age distribution of the
working age population over the past 40 years, changes in the relative supplies of workers with different levels of educational attainment and labor market experience, changes in the distribution of
labor market experience within the same educational attainment groups, and yearly changes in the
experience premium paid to workers within the same education-experience group. An econometric specification is then used to estimate more formally how the relationship between a given level
of educational attainment and labor market experience influences the wage differential specific to
a given gender-educational attainment-birth cohort combination. This allows the authors to investigate whether the relative cohort size effect changes as cohort members gain experience, a point of
contention in earlier research on the baby boom generation’s entry into the labor market.
Key Findings
• The distribution of the working age population in the United States will remain fairly even over
the next decade or so, then become increasingly skewed towards older workers. Based on U.S.
Census Bureau projections, the 2014 panel shows a working age population that is distributed
fairly evenly over all age ranges, with only a modest downward tilt towards people in their 50s and
60s. Yet this tilt will become more pronounced afterwards, and the historical trend of having a
large ratio of younger to older workers seems to be coming to a definitive end.
• The baby boom generation’s entry into the labor force had a larger initial impact on the age distribution of college-educated workers than it did on that of high school graduates, as the oldest
baby boomers were not only much larger in overall numbers than were earlier cohorts, but were
also much more likely to complete college. The relative impact of the baby boom’s oldest college-graduate cohort then decreased over time as the pre-baby boom cohorts were replaced by the
younger and even more highly educated baby boomers. In recent years, the experience distributions of the high school graduates and college graduates have converged, and in the future will
increasingly resemble a uniform distribution.
• The econometric estimates show that while real wages increase rapidly with labor market experience, there is a sharp drop in the growth rate of earnings as this experience increases. Real wage
rates tend to level off after 15 years of work experience.


Changes in the Distribution of the U.S. Working Age
Population Over Time

[Figure: Six panels (1964, 1974, 1984, 1994, 2004, and 2014) plot relative birth cohort size against age, from 20 to 60, for the U.S. working age population.]

Source: Authors’ calculations.

• The elasticity of wages with respect to the relative size of one’s cohort is uniformly negative, which
confirms that belonging to a relatively large birth cohort is associated with depressed lifetime wages.
These wage elasticities are sizeable, generally around -0.1 for high school graduates and a little over
-0.05 in magnitude for college graduates (see the back-of-the-envelope example following this list). There is a tendency for the coefficients to decrease in magnitude as educational attainment increases, which suggests that the substitutability between workers with different
experience levels increases with educational attainment. This result is somewhat surprising, since one
might expect more educated workers to face more sharply delineated career ladders.
• For all educational levels, the relative cohort size effect varies relatively little with years of labor
market experience, which implies that relative cohort size is roughly as important to the wages
earned late in one’s career as it is in one’s earlier work life. This evidence differs from earlier
research that argued that relative wage reductions associated with belonging to a large birth
cohort were concentrated in one’s early working years.
• The results imply that in the near future older American workers will face increasingly unfavorable relative labor market conditions as their ranks become crowded with the baby boomers.
Though the general slowing of labor force growth may create tight labor markets, the wage premium resulting from labor market tightness will disproportionately accrue to younger, less experienced workers.
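As a back-of-the-envelope reading of these magnitudes (the 10 percent figure is illustrative, not an estimate from the paper), an elasticity of about -0.1 implies that a birth cohort 10 percent larger than its neighbors earns wages roughly 1 percent lower:

\[
\%\Delta w \approx -0.1 \times \%\Delta(\text{cohort size}) = -0.1 \times 10\% = -1\%.
\]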
Implications
This paper offers strong empirical support for the claim that cohort crowding affects the wages of
large birth cohorts throughout their entire working lives. These cohort size effects are quantitatively important, and need to be incorporated into public policy deliberations that impact upon working and retirement decisions, the latter often being predicated on accumulated savings and expected
pension payments. The increasingly common practice of replacing defined benefit pension plans
with defined contribution plans, such as 401(k)s, and increases in Social Security retirement ages for
receiving full benefits, along with the possibility that future benefits will be cut, may prompt many
baby boomers to work longer and retire later than did preceding birth cohorts. Yet the boomers will
suffer from the same cohort crowding effects on wages at the end of their working lives as they confronted earlier in their careers. Many policy analysts believe that longer working lives must be a key
part of any solution to providing for the consumption needs of older citizens as the traditionally
defined dependency ratio increases, and as life expectancies increase. However, the effectiveness of
this solution partly depends on the wages older workers can expect to command in the labor market. Since a crowding effect depresses the wages of older workers, prolonging one’s working life may
not be a desirable option for some, and those who do elect to work longer may not earn as much as
they might expect. These considerations pose implications for forecasting future payroll tax revenues,
gauging expected Social Security benefit payments, and predicting aggregate earnings growth.
w-07-9

Doing Good or Doing Well?
Image Motivation and Monetary Incentives
in Behaving Prosocially
by Dan Ariely, Anat Bracha, and Stephan Meier
complete text: http://www.bos.frb.org/economic/wp/wp2007/wp0709.htm
email: ariely@mit.edu, bracha@post.tau.ac.il, stephan.meier@bos.frb.org

Motivation for the Research
Engaging in prosocial behavior—doing good to benefit others—may be prompted by motives other
than purely philanthropic intentions. Charities have long recognized that offering incentives tied
to donations can increase gifts by rewarding givers with some token of thanks, such as a coffee mug,
preferred seating at the opera, or naming a new hospital wing after the building fund’s principal
contributor. Granting tax breaks to encourage charitable giving or energy conservation operates on
the same motivational idea.
The various types of charitable contributions and the numerous ways organizations solicit and promote donations suggest that individuals vary in their response to three broadly defined types of
incentives for prosocial acts. Intrinsic motivation denotes a wholly altruistic impetus for promoting
the welfare of others. Extrinsic motivation describes any reward or perk, material or monetary,
received in return for acting in a prosocial manner. Image motivation refers to the advantages an
individual’s reputation may gain from how others perceive his/her behavior. This third concept captures how the desire for social approval may result in a person acting more generously in a public
setting—since a private act will send no external signal. These three motivations have separate and
interacting influences. While it is recognized that extrinsic incentives can have detrimental influences, the mechanism by which these may operate is not well established. One explanation is that
extrinsic motives may skew intrinsic motives, such as an individual’s own self-interest weakening
the absolute mutual trust that should underpin a principal-agent relationship. This paper tests how
the dual presence of extrinsic incentives and image-based incentives complicates determining
whether someone is acting purely for the greater social good, or whether—despite appearances of
being driven by purely altruistic tendencies—some individual self-interest plays a part in the underlying incentive for undertaking the prosocial activity.
Research Approach
A two-part experimental study investigates whether prosocial behavior is partly motivated by image
considerations, and exploits the fact that the public disclosure of one’s actions sends a signal to others.


At one university, each participating undergraduate was asked whether, in his/her opinion, a majority of fellow students would judge each cause on a list as either being “good” or “bad.” The main
experiment, performed in a laboratory setting, focused on making a voluntary donation to one of
two organizations: one has a strong positive public image, while the other cause, depending on subjective norms and values, often has a negative image. An effort made to help one of these groups
conveys a signaling value to others if the choice is publicly disclosed. Each subject’s choice was randomly assigned to be either a private decision made anonymously or one publicly disclosed to others. In addition
to the donation made to their selected charity, some randomly chosen participants were offered
individual monetary incentives tied to their contribution efforts. At the end of the laboratory experiment, those in the public treatment were asked to tell the other participants what charity they
chose, how much money they donated, and whether, based on the payment scheme, they earned
money for themselves. To check the laboratory results, a similar experiment was conducted in a
field setting.
Key Findings
• Monetary incentives have no effect on the contribution amount when the donation decision is
made public, while monetary incentives do increase the contribution effort when the decision is
made in private. This result supports the effectiveness hypothesis, which posits that private monetary incentives are less effective in public than in private settings.
• The different effectiveness of monetary incentives in the public versus the private condition is
substantial, and statistically significant at the 95 percent confidence level. It appears that monetary incentives
are less effective when interacting with image motivation if the decision is publicly disclosed. The
result implies that in promoting prosocial activities, private monetary incentives crowd out image
concerns.
• The above results were further tested in a limited field study, the results of which strongly support the lab-based findings: private monetary incentives appear to interact negatively with image
concerns, implying that monetary incentives are more effective in promoting prosocial decisions
made in private settings than in public ones.
Implications
Monetary incentives can have unintended consequences by actually reducing prosocial activity.
Pecuniary rewards are more effective in facilitating this behavior when decisions are made in private
(anonymously), instead of being subject to public disclosure. People want to be seen by others as
doing good solely for “goodness’s sake,” for in the absence of extrinsic incentives, observers will
attribute the prosocial act to the individual’s innate altruism. In the presence of extrinsic incentives,
the signaling value of a philanthropic act is diluted, as others might conclude that private self-interest drove the publicly virtuous decision.
These results have important policy implications for designing incentives to promote certain desirable behavior. For example, offering a tax break to promote the adoption of environmentally friendly technologies and practices may be more efficacious if the benefit is attached to purchasing energy-efficient furnaces or water heaters, which are non-visible technologies, rather than attached to
buying a hybrid car, which sends a visible signal to others. Major donors to cultural institutions
might not want to accept favorable seats at the opera, as this visible reward compromises the image
value signaled by their charitable contribution. This study concentrates on the effect of monetary
incentives, and further research should explore whether non-monetary incentives may have different effects on outcomes.


w-07-10

Space and Time in Macroeconomic Panel Data:
Young Workers and Unemployment Revisited
by Christopher L. Foote
complete text: http://www.bos.frb.org/economic/wp/wp2007/wp0710.htm
email: chris.foote@bos.frb.org

Motivation for the Research
Macroeconomists often use regional or state-level data when national variation is insufficient or
when a particular identification strategy is feasible only on a sub-national level. A recent example
is Robert Shimer’s provocative paper that carefully investigates the effects of demographic change
on U.S. labor markets [Shimer (2001)]. The traditionally accepted demographic adjustment for the
unemployment rate assumes that the aggregate unemployment rate moves mechanically along with
the population shares of various demographic groups. For example, the increase in young workers
in the 1970s and 1980s is generally thought to explain part of the increase in overall U.S. unemployment during this period, simply because younger workers experience higher unemployment
rates than older workers. Using data from U.S. states to estimate—rather than assume—the effect
that young workers have on aggregate unemployment, Shimer’s paper finds, surprisingly, that a
state’s unemployment rate falls when its youth share rises. This negative correlation is not driven
by the migration of young people to states with booming economies. Using lagged birth rates as a
source of exogenous variation for youth shares generates even larger negative effects of youth shares
on state-level unemployment rates. Shimer interprets his findings in two ways. First, he concludes
that firms want to locate in states with many young workers, because these workers are likely to be
mismatched in their current jobs and accept other job offers. Second, he argues that the large number of vacancies posted by firms in states with many young workers lowers unemployment among
all demographic groups.
In this paper, the author counters Shimer’s conclusions by showing that the surprising negative correlation between youth shares and state-level unemployment is much weaker in recent data. The
reason for the change, the author claims, is that the original negative correlation is not really a
robust feature of the data.
Research Approach
The author re-estimates Shimer’s regressions, using both ordinary least squares (OLS) and instrumental variables. Occasionally, corrections for first-order serial correlation (AR1) are also included. Two sample periods are used: 1973–1996 (as in the original paper) and 1973–2005. By making
use of some new statistical techniques, developed after the original paper was written, the author
contends that the estimated standard errors in the 2001 paper, using the shorter sample period,
were too small.
To show this, the author uses a variety of methods to calculate standard errors for the regressions
on both samples. These methods vary in the types of residual correlation that they account for.
Some methods account for serial correlation, as Shimer did in his 2001 paper. This type of correlation links residuals from the same state in different years. But the author also employs methods
that account for spatial correlation, which links residuals from different states in the same year.
Spatial correlation was not accounted for in Shimer’s original paper, but turns out to have important effects on the standard errors.
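The sketch below illustrates one common way to account for the two correlation structures just described, using cluster-robust standard errors: clustering by state allows arbitrary serial correlation within a state, and adding year as a second cluster dimension allows arbitrary cross-state correlation within a year. This is a generic approximation under assumed column names, not the specific procedures used in the paper, and (as the findings below note) it still cannot capture cross correlations that span both states and years.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-year panel with columns: state, year,
# log_urate (log unemployment rate), log_youth (log youth share).
df = pd.read_csv("state_panel.csv")
state_id = pd.factorize(df["state"])[0]   # integer codes for clustering
year_id = pd.factorize(df["year"])[0]

# Youth-share regression with state and year fixed effects.
model = smf.ols("log_urate ~ log_youth + C(state) + C(year)", data=df)

# Serial correlation: cluster by state, which allows residuals from the
# same state in different years to be arbitrarily correlated.
res_serial = model.fit(cov_type="cluster", cov_kwds={"groups": state_id})

# Spatial + serial: two-way clustering by state and year; a two-column
# array of group codes additionally allows residuals from different
# states in the same year to be correlated.
res_two_way = model.fit(
    cov_type="cluster",
    cov_kwds={"groups": np.column_stack([state_id, year_id])},
)

print(res_serial.bse["log_youth"], res_two_way.bse["log_youth"])
```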


Spatial Correlation in Unemployment
and Youth Shares in 1985

[Figure: Two U.S. state maps shade each state by quartile. Panel A: Unemployment (quartiles -2.82 to -1.07, -1.07 to 0.03, 0.03 to 0.78, and 0.78 to 3.59). Panel B: Youth Share (quartiles -0.03 to -0.00, -0.00 to 0.00, 0.00 to 0.00, and 0.00 to 0.01).]

Note: The data in each panel correspond to 1985 deviations from state and year means of the log of the unemployment rate
(top panel) or the log of the youth share (bottom panel). The means are taken over the years 1973 to 2005. The maps show
that the relative values of both variables (unemployment rate and youth share) for one state are generally close to corresponding
values in nearby states.

Key Findings
• The most striking finding is the large change in the estimated regression coefficients when the
longer sample period is used. The absolute value of the OLS point estimate drops by more than
70 percent, from –1.55 to –.42, when the estimation period changes from 1973–1996 to
1973–2005. The decline in the instrumental variable (IV) estimate is an even steeper 90 percent,
from –1.90 to –.19. Although the declines in the AR1-corrected estimates are not as severe, the
IV-AR1 estimate still declines by more than half, from –1.68 to –.82.
• Accounting for both spatial and serial correlation in the state-level data often renders the original
point estimates insignificant. This finding indicates that there is not enough variation in the state-level data to estimate youth-share effects on unemployment in a precise way. We should, therefore,
not be surprised if these estimates change dramatically when a different sample period is used.
• While controlling for both serial and spatial correlation is a good idea, current statistical procedures that attempt to do so may be inadequate. The primary reason is that it is difficult for these
methods to account for “cross correlations” among residuals that span several states and years,
especially when the sample period is relatively short. (An example of a cross correlation would be
a link between the residual for Michigan in 1990 and the residual for Wisconsin in 1991. These
residuals correspond to different states and years, but they may still be correlated if both spatial and serial correlation are present in the data.)
Implications
Macroeconomists should be wary of spatial correlation when using regional variables to test theories. In the United States, boundaries between states are often arbitrary political designations that
divide nearly identical parts of the country. Although in some research designs, economic similarity across a state border is a good thing, at other times it can lead to spatial correlation in residuals.
Such correlation can lead to imprecise estimates and misleading standard errors. In short, cross-sectional units may not generate adequate independent variation, and accounting for cross-state
correlations at long lags is difficult when the number of time periods is short.
w-07-11

How Much Is a Friend Worth? Directed Altruism
and Enforced Reciprocity in Social Networks
by Stephen Leider, Markus M. Möbius, Tanya Rosenblat, and Quoc-Anh Do
complete text: http://www.bos.frb.org/economic/wp/wp2007/wp0711.htm
email: sleider@hbs.edu, mobius@fas.harvard.edu, trosenblat@wesleyan.edu, do@fas.harvard.edu

Motivation for the Research
Despite a considerable existing body of research on cooperation and other-regarding preferences,
most experimental economics work on altruistic behavior has focused on interactions among
strangers. But it is reasonable to assume that the degree of altruism displayed varies according to
the social proximity or distance between players. In a variety of laboratory experiments, varying
magnitudes of prosocial behavior have been observed when the decisionmaker learns the unknown
partner’s gender or ethnicity. Yet there have been few experiments that rely explicitly on the subjects’ ongoing social relationships, situations that inherently create greater prospects for future
interactions, as opposed to a one-time encounter between two strangers.
A decisionmaker’s altruism towards strangers can be described as baseline altruism, while directed
altruism describes a decisionmaker’s treatment of partners with whom s/he shares some type of
social connection. Economic theory suggests that there are three reasons why a decisionmaker may
treat a partner more generously if there is some likelihood that they will interact in the future:
enforced reciprocity, signaling, and preference-based reciprocity. “Enforced reciprocity” means that
the decisionmaker expects a favor granted to the partner to be repaid in a future exchange.


“Signaling” is the idea that given the possibility of future encounters, the decisionmaker has a reputational incentive to show the partner a propensity to be generous. “Preference-based reciprocity”
captures the idea of reciprocal incentives by positing that a decisionmaker will preemptively treat
the partner kindly in anticipation that in the future s/he will be generous in return. This paper
explores the value of having a friend instead of a complete stranger make payoff-relevant decisions,
and tests what conditions make a friendship valuable for these outcomes.
Research Approach
Using a large real-world social network existing among undergraduates at one university, two web-based experiments are conducted over a 1–2 week period. Within the network, composed of students living in two residence halls, five varying categories of social distance (SD) are identified and
are used to approximate the strength of a relationship. SD1 refers to the relationship between direct
friends, while SD2, a “friend of a friend,” and SD3, a “friend of a friend of a friend,” denote progressively more distant relationships. SD4 describes a student who lives on the same staircase or
floor, and SD5 a student from the same dormitory who falls into none of the other categories.
From this network population, randomly selected decisionmakers are asked to make repeated
choices that determine payoffs for themselves and their randomly-chosen partners, who are either
anonymous or specifically identified by first and last name. The anonymous treatment gauges the
decisionmaker’s baseline altruism, and this outcome is compared with the results from the nonanonymous treatments. By including games in which giving is both efficient and inefficient, the
signaling mechanism can be distinguished from the reciprocity mechanism: if giving is efficient,
decisionmakers motivated by reciprocity should transfer more surplus to a socially close partner
only if the decisionmaker’s identity is known. In contrast, decisionmakers who want to signal their
generosity to the partner should always transfer more surplus, irrespective of whether the setting is
anonymous or not. Taken all together, the experiments isolate motives stemming from directed
altruism and those stemming from expectations of future interactions to determine the absolute
and relative strength of both effects, and to isolate the mechanism behind the repeated interaction
channel. The experiments also elicit the partners’ beliefs about the expected generosity of various
named decisionmakers in anonymous and non-anonymous decisionmaking scenarios.
Key Findings
• Subjects with a high level of baseline altruism tend to have more friends with a high level of baseline altruism, while selfish subjects tend to have more selfish friends. Partners with high baseline
altruism are treated substantially better by their friends than by subjects with low baseline altruism.
• There is a correlation between baseline altruism and directed altruism. A decisionmaker’s action
towards a named partner is strongly determined by his or her baseline altruism towards an anonymous partner, and his or her directed altruism towards socially close partners. Subjects who give more
to anonymous partners also give more to partners who are specifically identified. Controlling for
their baseline altruism, decisionmakers are substantially more generous towards friends, and transfer at least 50 percent more surplus to direct friends as compared to complete strangers. The
strength of this generosity declines when making decisions involving indirect friends.
• Close social ties promote directed altruism. Allocations made to friends are substantially higher
than allocations made to distant partners or strangers. Directed altruism makes a decisionmaker,
on average, at least half a standard deviation more generous relative to the distribution of the decisionmaker’s baseline altruism. Partners’ beliefs reflect directed altruism: subjects accurately predict
that, on average, their friends will behave more generously towards them than towards strangers.
However, people overestimate the generosity of friends of friends, and do not predict individual
differences in baseline altruism.


[Figure: Decisionmaker’s Social Network. Illustration of a decisionmaker’s social network with four direct friends (SD = 1) and two indirect friends at distance SD = 2 (“friend of a friend”) and at distance SD = 3 (“friend of a friend of a friend”); students who share a staircase or live in the same house fall into the remaining categories (SD = 4 and SD = 5).]

• Giving increases by 24 percent in treatments where giving is efficient and partners learn the decisionmaker’s identity. This result offers evidence that favors reciprocity, not signaling, as the motivating factor.
• Network flow highlights information about the network structure that is not reflected in the “consumption value” of friendship captured by the simple social distance measure. Intuitively, the greater
the number of distinct paths connecting the decisionmaker with the partner, the higher the network
flow. For instance, sharing a large number of common friends will tend to increase the network flow,
which replicates a widely accepted idea in the sociology literature that dense social networks are
important for building trust because agents have the opportunity to engage in informal arrangements. Network flow predicts the decisionmaker’s generosity under non-anonymity even after controlling for social distance. This finding suggests that the results are driven by enforced reciprocity
rather than by preference-based reciprocity. The non-anonymity effect increases with maximum
network flow. The non-anonymity effect and directed altruism are substitutes.
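To make the network-flow measure concrete, the sketch below computes a maximum flow between a decisionmaker and a partner on a small hypothetical friendship graph; the names, capacities, and use of an off-the-shelf max-flow routine are illustrative assumptions, not the authors’ implementation. Each friendship is modeled as a pair of directed edges whose capacity proxies tie strength, so adding more independent connecting paths raises the flow.

import networkx as nx

# Hypothetical friendship network; each tie is modeled as two directed
# edges with a capacity proxying the strength of the relationship.
G = nx.DiGraph()
ties = [("decisionmaker", "friend_a", 3), ("decisionmaker", "friend_b", 2),
        ("friend_a", "friend_b", 1), ("friend_a", "partner", 2),
        ("friend_b", "partner", 1)]
for u, v, c in ties:
    G.add_edge(u, v, capacity=c)
    G.add_edge(v, u, capacity=c)

# More distinct paths between the pair (e.g., many common friends)
# yield a larger maximum flow.
flow_value, _ = nx.maximum_flow(G, "decisionmaker", "partner")
print(flow_value)  # 3: two units via friend_a, one via friend_b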
Implications
This is the first time within-subjects experiments have attempted to distinguish between directed
altruism and future interaction effects. The results show that people tend to seek out and/or maintain friendships with others who have similar social preferences. In other words, it pays to be generous by “doing unto others as they might do unto you.” But enforced reciprocity, not signaling or preference-based reciprocity, is the motivating incentive behind this behavior.
The results in this paper are a first step towards developing a broader theory of how trust operates
in social networks. Most theoretical and experimental studies of absolute trust have centered on
one-shot games played between strangers, and basically ask why strangers trust one another. This study asks why and when some decisionmakers might be more trusting than others, a distinction
termed “differential trust” that is determined by measuring social networks. A natural extension of
this research, which is being pursued by some of the authors, is determining whether partners do
in fact choose “trustworthy” decisionmakers.
w-07-12

Social Networks and Vaccination Decisions
by Neel Rao, Markus M. Möbius, and Tanya Rosenblat
complete text: http://www.bos.frb.org/economic/wp/wp2007/wp0712.htm
email: trosenblat@wesleyan.edu

Motivation for the Research
Expanding vaccine coverage is a national health objective in the United States. The U.S.
Department of Health and Human Services lists immunization against influenza as a leading
health indicator, and has established a target vaccination rate of 90 percent among high-risk adults
in its bulletin Healthy People 2010. To achieve government health goals, many healthcare organizations have implemented mass inoculation programs that dispense vaccines at public sites, including schools, pharmacies, and supermarkets. Although epidemiologists and public health scholars
have long documented the role of social networks in the spread of infectious diseases, public health
literature on prevention and education has focused mostly on individual-level interventions, such
as reminders and education campaigns.
Since peer effects can amplify the efficacy of such interventions through social learning, measuring
how an individual’s medical beliefs and healthcare decisions are influenced by social interactions
can aid in the design of cost-effective public health programs. In several instances, policymakers
have begun to design interventions that utilize peer interactions to achieve public health goals. This
paper investigates how peers may influence an individual’s decision to get a flu vaccine, examining
whether peer effects can widen the impact of such programs and thereby improve vaccination coverage in communities at large.
Research Approach
The authors decompose social effects on vaccination decisions, obtaining dollar value estimates of
social learning and other peer interactions. Using data on each student’s health beliefs, the authors
directly measure social learning about the medical benefits of immunization. The authors also examine how the experience of having contracted influenza alters the effects of friends’ influence on an
individual’s valuation of the benefits of being vaccinated.
To measure peer influences, the authors use data on the social networks and vaccination histories
of students at Harvard College. Information on social networks was collected through an online
trivia game at the website facebook.com. Vaccination histories are from the University Health
Service and are matched with the social network data from the trivia game. Data on beliefs are
taken from the house experiment described below.
The house experiment takes advantage of the process by which students are assigned living quarters.
Each spring, groups of sophomores are randomly assigned to one of the college’s 12 residential houses. During the fall, vaccination clinics are held at four of these houses. Because individuals living in
houses with clinics may find it especially convenient to get vaccinated and may be better informed
about the location and time flu clinics are held, the randomization procedure helps to generate
exogenous variations in individual propensities to get vaccinated, enabling the authors to distinguish
peer influences from selection effects. Previous studies use randomization to assign reference groups to individuals. This setup differs in that individuals can select their peers but not where their peers
live. Thus, each individual’s allocation of social contacts across all 12 houses is exogenous.
The trivia game is a web-based economic experiment in which participants have clear incentives to
truthfully reveal their friendship links. In the house experiment, students living in two of Harvard’s
residential houses were invited to complete an online survey about their beliefs about the risks of the
influenza virus and benefits of the flu vaccine. Students also answered questions about their vaccination records, medical histories, and peer influences on their vaccination decision. The house experiment also collected data on the social ties among residents of the two houses, using a coordination-game technique in which each participant was asked to list her 10 best friends and indicate the average amount of time per week she spends with each friend, selecting from a menu of ranges.
The authors devise a procedure to obtain separate dollar-valued estimates of social learning and
other peer influences, enabling them to show that the positive results are attributable to social
learning about medical benefits of obtaining a flu vaccine, as opposed to mere imitation effects.
Using an adaptation of Ellison and Glaeser’s (1997) dartboard technique, the authors test whether
friends tend to cluster together at flu clinics, then measure peer effects on an individual’s vaccination decision, attempting to isolate the mechanisms whereby peers influence an individual’s behavior. In order to distinguish social effects on perceptions of health benefits from other peer influences on immunization decisions, the authors analyze how having experienced a flu infection moderates peers’ influence on an individual student’s beliefs and choices.
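One way to picture the excess-clustering test is as a permutation exercise: compare the observed share of friend pairs who used the same clinic with what random reassignment of clinic choices would produce. The sketch below is a simplified permutation analogue with made-up data, not the authors’ adaptation of the Ellison-Glaeser dartboard index.

import random

def excess_clustering(friend_pairs, clinic_of, n_perm=10000, seed=0):
    # Observed number of friend pairs who used the same clinic.
    observed = sum(clinic_of[a] == clinic_of[b] for a, b in friend_pairs)
    students = list(clinic_of)
    rng = random.Random(seed)
    count_geq = 0
    for _ in range(n_perm):
        # Reshuffle which student holds which clinic label.
        shuffled = students[:]
        rng.shuffle(shuffled)
        relabel = dict(zip(students, shuffled))
        permuted = sum(clinic_of[relabel[a]] == clinic_of[relabel[b]]
                       for a, b in friend_pairs)
        count_geq += permuted >= observed
    return observed, count_geq / n_perm  # one-sided permutation p-value

# Hypothetical data: four students, two clinics, three friendship pairs.
clinics = {"s1": "A", "s2": "A", "s3": "B", "s4": "B"}
pairs = [("s1", "s2"), ("s2", "s3"), ("s3", "s4")]
print(excess_clustering(pairs, clinics))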
Key Findings
• Social exposure to medical information raises people’s perceptions of the benefits of immunization. In
this study, the average student’s belief about the flu vaccine’s health value increased by $5.00 when an
additional 10 percent of the student’s friends were assigned to residences that host inoculation clinics.
• Among students with no recent first-hand experience of the flu, a 10 percent increase in the number of friends living in residences where clinics are held raised cumulative valuations of the vaccine by $10.92. Eighty-five percent of this increase is attributable to heightened perceptions about
the medical benefits of immunization. Students with recent flu experiences were less responsive
to social influences and relied more on their own personal judgment, when deciding whether or
not to be vaccinated.
• The study also finds evidence of positive peer effects on individuals’ vaccination decisions. The
likelihood of a student’s getting vaccinated increases by up to 8.3 percentage points if an additional 10 percent of the student’s friends receive flu shots. Furthermore, the excess clustering of
friends at inoculation clinics suggests that students coordinate their vaccination decisions with
their friends.
• Unlike a study by Miguel and Kremer (2007), which examines a setting in which individuals are
reluctant to adopt a treatment that has high social benefits but substantial private costs, this study,
in which the private costs of adoption are low, finds that vaccinated students appear to provide
favorable evaluations to their friends, thereby enhancing perceptions about the medical benefits
of immunization.
• While learning from peers may be the main social determinant of vaccination decisions, other
social interactions such as peer pressure and companionship needs also appear to influence location choices.


Implications
By exploiting the influence of social factors on healthcare decisions, targeted interventions can alter
behavior among the broader population, as individuals who receive flu shots at outreach clinics encourage their peers to get vaccinated as well. Thus, social networks can improve the efficacy and effectiveness of vaccine delivery systems by raising the demand for vaccination in the community at large.
w-07-13

Active Decisions and Prosocial Behavior
by Alois Stutzer, Lorenz Goette, and Michael Zehnder
complete text: http://www.bos.frb.org/economic/wp/wp2007/wp0713.htm
email: lorenz.goette@bos.frb.org

Motivation for the Research
Previous research on the effects of measurement on people’s behavior has examined the effect of
asking relatively general questions, such as whether they intend to buy “a car.” This study examines
the effects of an active decision intervention on choices regarding a very specific prosocial activity.
By adding well-defined context to the decisionmaking environment in this way, the authors aim to
test whether different types of prosocial behavior can be targeted and encouraged.
To measure the effect of active decisionmaking on prosocial behavior, this paper uses a large-scale
field experiment involving blood donations. The behavioral consequences of active decisions arise
in two ways when asking for an explicit statement regarding the choice: (1) cognitive processes are
stimulated in which a more in-depth examination of the choice takes place than if no explicit statement is involved, and (2) the expressed choice is understood as a commitment. In this study, subjects are asked either to consent to or to dissent from a request in an otherwise unrestrained choice
situation—that is, subjects are confronted with the same behavioral options as in a situation where
no active decision is involved.
Research Approach
The authors obtained permission to have a brief survey distributed to almost 2000 university students in seven large lecture classes, just before a break, and to have representatives of the Swiss Red
Cross issue an invitation to participate in a blood drive scheduled for the following week. The invitation was issued in a nonbinding manner, and the students were unaware that they were participating in an experiment. The survey requested demographic information, posed questions aimed
at measuring prosocial preferences and personality characteristics, and asked respondents whether
they felt sufficiently informed about the importance of donating blood. An information sheet was
appended to the survey, listing the time slots available for the blood drive.
There were three experimental conditions: two involved “active decisions,” in which the subjects
were asked to report a preference, and a third, control condition, in which no expressed preference
was solicited. The conditions were established by administering three different versions of the survey that were nearly identical but differed in the treatment at the end. In both active decision treatments, an additional survey page asked subjects whether they were willing to donate blood at one
of the times listed on the information sheet. In the strong active decision treatment, the choices
were “yes” and “no,” while in the weak active decision treatment, a third option was added that
allowed respondents to indicate that they did not wish to make a decision. In the control condition, there was no request at the end of the survey for an explicit response to the invitation to
donate blood. Treatments were randomized within each of the lecture classes and care was taken to
ensure identical information conditions for all subjects.
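The randomization step can be sketched as stratified assignment: within each lecture class, the three survey versions are allocated in balanced proportions and then shuffled. The rosters and function below are hypothetical, intended only to illustrate the design, not to reproduce the authors’ procedure.

import random

def assign_treatments(class_rosters, seed=1):
    """Randomize survey versions within each class (stratified randomization)."""
    treatments = ("strong active decision", "weak active decision", "control")
    rng = random.Random(seed)
    assignment = {}
    for students in class_rosters.values():
        # Balanced block of the three treatments, shuffled within the class.
        block = [treatments[i % 3] for i in range(len(students))]
        rng.shuffle(block)
        assignment.update(zip(students, block))
    return assignment

# Hypothetical rosters for two lecture classes.
rosters = {"econ101": ["ann", "ben", "cai"], "law201": ["dea", "eli", "fay"]}
print(assign_treatments(rosters))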


Key Findings
• For people without well-formed preferences on blood donation, a strong active decision intervention increased their likelihood of giving blood, despite the high immediate opportunity costs associated with this choice.
• The strong active decision treatment also had a significant positive effect, over and above the weak
active decision treatment, on people’s stated willingness to donate blood if they had no fully
formed prior opinions about the issue.
• In contrast to the effect on individuals who indicated relative unawareness of the topic, no significant difference was observed as a result of the active decision for subjects who were already aware
of the importance of donating blood.
Implications
The results indicate that whether or not people decide to act prosocially is not a foregone conclusion, but rather the choice is context-driven and issue-specific. Thus, active decisions are potentially a procedural innovation to develop the “latent social asset” in a society. However, it is important
to learn under what circumstances requesting active decisions is perceived as supportive of positive
societal values (rather than as controlling) and to work to build up prosocial preferences.
An active decision intervention might be effective in promoting social goals such as post-mortem
organ donation, where a statement with low immediate costs puts people on a donor list. At the
individual level, an active decision intervention might help those people who would prefer to act in
their long-term interests but struggle with self-control: an example would be undersaving, where
an active decision to participate in a savings plan could mitigate the problem. The active decision
intervention approach might be seen as an ethically attractive alternative to presumed consent,
which may strike some as controlling or coerced.
How often an active decision framework can be applied effectively without being seen as too controlling may also be specific to the particular context and issue at hand, given that the effects vary
with subject awareness. For some goals, like promoting post-mortem organ donation, a simple
active intervention to solicit a decision might be enough to overcome the inertia of a low-contribution status quo resulting from a system of self-selection.
w-07-14

The Effects of Expectations on Perception:
Experimental Design Issues and Further Evidence
by Tyler Williams
complete text: http://www.bos.frb.org/economic/wp/wp2007/wp0714.htm
email: tyler.williams@bos.frb.org

Motivation for the Research
Studies on a variety of sensory stimuli have shown that people’s sensory experiences are determined
not only by bottom-up processes (that is, through the impact of external stimuli on individuals’ sensory organs), but also by top-down processes, such as expectations and prior desires. Most such
studies measure the importance of top-down processes by comparing a control treatment, in which
individuals are given no information about the stimulus prior to experiencing it, with an experimental treatment, in which individuals receive relevant information prior to the stimulus. However,
the usual design of these experiments does not identify how top-down processes change individuals’ observations and/or judgments.


There are two general ways that information or expectations could play a role in these studies. Top-down processes may act indirectly on perceptions and decisionmaking by clouding the memory of
an experience or by changing the intensity of attention paid during the experience. Alternatively,
expectations may directly affect the perceptions that people have when presented with a stimulus.
The direct-effect hypothesis has strong implications for decision theory. It states that when information is provided before the stimulus is experienced, the interpretation of an experience—as
determined by individuals’ sensory organs and the part of the brain that interprets signals from
these organs—may change without any shift in attention to or uptake of the stimulus. According
to this hypothesis, changing an individual’s expectations by providing prior information primes the
brain to experience the stimulus differently. In the context of decision theory, this direct-effect
hypothesis suggests that an individual’s correct interpretation of the stimuli around him depends,
in part, on the person’s particular state of mind.
Lee, Frederick, and Ariely (2006) [hereafter, LFA], Hoch and Ha (1986), and Braun-LaTour and
LaTour (2005) provide some evidence in favor of the direct-effect hypothesis. All three papers have
drawbacks that potentially confound the studies’ ability to distinguish between direct and indirect
effects. Focusing on LFA, this paper critiques the methodology and evidence in these studies, and
conducts a new field experiment that attempts to address some of the simpler confounding issues
in the design of these experiments.
Research Approach
Like the aforementioned work, this study uses three between-subjects treatments that arguably test
whether expectations directly change perceptions. In the first treatment (the “before” treatment),
subjects receive expectations-generating information before they experience some stimulus and
then are asked to provide feedback on the stimulus. In the second treatment, subjects receive the
same information after they sample the stimulus, but before they are asked to provide feedback (the
“after” treatment). In the third treatment (the control treatment), subjects receive no additional
information about the stimulus that they are asked to sample.
The design of this study departs from that in LFA in two important ways. First, it eliminates one
potentially spurious generator of differences among the treatments by giving subjects no extraneous information about the stimulus in the control treatment. Second, subjects in this experiment
are not told what question they will have to answer about the stimulus until after sampling it. This
design is necessary to ensure consistent application of the “after” treatment.
The subjects in this study were volunteers who passed by a free lemonade stand on a cycling/walking path in Somerville, MA, over the course of a few summer afternoons. Volunteers were recruited using a sign offering free lemonade in return for participation in a brief research study. The subjects were given two small samples of lemonade to taste, one of which was slightly diluted. Both
samples were simply called “samples of lemonade” in all three treatments. In the “before” treatment,
subjects were told before tasting the lemonade that one sample was slightly diluted; in the “after”
treatment, this information was given immediately after the sample tasting. Subjects were told
before tasting, in all three treatments, that they would be asked a question when they were finished.
In all treatments, when they were finished tasting, subjects were asked which sample they preferred.
After completing the experiment, subjects filled out a short questionnaire that asked for their age,
gender, frequency of drinking lemonade in an average week, and how much they like lemonade on
a scale of one to seven.
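Because the outcome is binary (which sample a subject prefers), each treatment contrast reduces to a comparison of two proportions. The sketch below runs a standard two-sample test on made-up counts; the numbers are illustrative assumptions, not the study’s data.

import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts of subjects preferring the diluted sample in the
# control and "before" treatments.
prefers_diluted = np.array([12, 18])
n_subjects = np.array([40, 40])

# Two-sample z-test for equality of the two preference proportions.
z, p = proportions_ztest(prefers_diluted, n_subjects)
print(f"z = {z:.2f}, p = {p:.3f}")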
Key Findings
• In this experiment, information shifted aggregate preferences slightly toward the diluted lemonade in both the “before” and “after” treatments, but neither of these effects was statistically significant. In contrast, LFA found that, in general, information in the “before” treatment had a statistically significant effect on perception. Although LFA’s results suggest that information may change individual perceptions directly, at the aggregate level the results of this experiment do not support that conclusion.

[Figure: The Effect of Information on Preferences for Diluted Lemonade. Proportion of subjects who prefer the diluted lemonade, overall and by gender, under the control, “before,” and “after” treatments.]
• At the subgroup level, preliminary findings offer some support for LFA’s general finding for men,
but not for women, implying that any effect of top-down processes depends on individuals’ characteristics and/or the specific context.
• The finding that heterogeneity across individuals can play a role in determining the nature of the
effect of expectations on perceptions complicates the interpretation of results such as those presented in LFA.
• The difficult question of whether perceptions are directly altered by context-generated top-down
processes is likely far from resolved, not only because of the finding concerning heterogeneity, but
for other reasons as well. In particular, with this type of design, it may be impossible to ensure
equal attention to the stimulus across the three treatments.
Implications
Neither this study nor any of the other studies discussed in this paper can rule out all of the confounding influences on experiments designed to determine whether receiving prior information
changes perceptions directly. These influences include extraneous information, difficulties in
designing the “after” treatment, individual heterogeneities, attention effects, and probably other
considerations that have not been explored.


The problems of extra information and after-treatment design might be solved more completely by
having subjects experience the stimuli in a manner that eliminates the need to explain beforehand
what to expect in the “after” and control treatments.
The issue of individual heterogeneity is probably more difficult to solve. A within-subjects design,
which would overcome this issue easily, does not seem feasible, since giving subjects information
more than once would inherently ruin the timing of information. The best solution might be to
measure as many characteristics as possible and to control for these. A more creative solution might
be to use a within-subjects design but to use multiple stimuli and to ensure that each subject
receives a different stimulus in each experiment, randomly assigning the stimuli across the three
treatments. Another possibility would be to control for heterogeneity in information valences by
asking individuals, after the experiment, about the valence they associated with the information.
Controlling for differences in attention and search effort may be the hardest issue to resolve. A first
attempt might be to raise the subjects’ level of involvement to as high a level as possible, or simply
to encourage subjects to pay very close attention to the experience.
One possible avenue for new research would be to use brain-imaging tools, such as functional magnetic resonance imaging (fMRI), to determine which parts of the brain are most active during each stage of the three treatments. At the very least, a larger body of results using different
stimuli and measurement techniques should offer some preliminary bounds on the possible range
of situations in which top-down processes can change perception directly.
w-07-15

Subprime Outcomes: Risky Mortgages, Homeownership
Experiences, and Foreclosures
by Kristopher Gerardi, Adam Hale Shapiro, and Paul S. Willen
complete text: http://www.bos.frb.org/economic/wp/wp2007/wp0715.htm
email: kristopher.gerardi@bos.frb.org, adam.shapiro@bos.frb.org, paul.willen@bos.frb.org

Motivation for the Research
The ongoing foreclosure crisis and credit crunch in the United States are a direct result of falling
house prices, which are declining from an unprecedented historical peak. Many of these foreclosures are linked to the subprime lending channel, which coalesced in the early 1990s and reached
a lending peak during the 2004–2006 period. As subprime lenders make mostly high-cost loans to
risky borrowers, this situation has spawned a vigorous public policy debate about whether or not
the subprime lending channel should be regulated. Moreover, many people question whether or not
subprime borrowers should even be approved for purchase mortgages, as these borrowers seem to
end up in foreclosure “too often.” Yet without having better information about how frequently these
foreclosure events occur, it is impossible to judge whether subprime borrowers are overly prone to
defaulting on their mortgages. From a public policy standpoint, it is important to differentiate
between borrowers who used the subprime mortgage market to finance an initial residential housing purchase, and those who used the subprime channel to refinance pre-existing mortgages. In an
effort to provide the first rigorous assessment of this entire class of borrowers and to better inform
this national debate, the authors examine the Massachusetts history regarding subprime lending
experiences and outcomes. Since 2001, Massachusetts has consistently been among the top 15
states in terms of the subprime market share of mortgages issued.


Research Approach
The Warren Group, a firm that compiles information on New England real estate, provided the
authors with a unique data set covering the 1989–2007 period, which encompasses two cyclical
downturns in the Massachusetts residential housing market. Using these data, the authors document
the frequency of foreclosures linked to homeownerships begun with a subprime purchase loan, as
opposed to homeownerships initially financed with prime mortgages. The data set comprises the
state’s complete historical registry of deeds records from January 1989 to August 2007, and also
includes Massachusetts assessor data from 2006 and 2007. These individual-level data follow
approximately 1.6 million homeownerships in all 351 Massachusetts towns over an 18-year period.
Repeat-sale price indexes are used to calculate, from the initial purchase date, the average quarterly
cumulative price appreciation in the town where the residential property is located. The data allow
properties to be identified as single-family homes, multi-family homes, or condominiums, and the
latter two categories are assumed to be more indicative of investment properties, since these provide
a stream of rental income. Bureau of Labor Statistics data on town-level monthly unemployment
rates since 1990 are used as a proxy for labor demand and income shocks. The 6-month LIBOR rate,
a short-term interest rate that is a popular index for adjustable rate mortgages, especially in the subprime lending market, proxies for the aggregate interest rate prevailing at the time a given mortgage
was contracted. Foreclosure sales are used as a proxy for defaults, as these sales signify the owner’s
eviction from the property. The data set contains very accurate information about the identity of
each mortgage lender, and the Department of Housing and Urban Development’s list of subprime
mortgage lenders, first compiled in 1993 and updated annually, provides a very good, but not perfect, indication of which mortgages were issued to subprime borrowers. The current public policy
debate focuses on the subprime lending channel, so defining a subprime mortgage based on the
lender’s identity is a reasonable approach to exploring the experiences of subprime borrowers.
Departing from the traditional research literature, which usually tracks individual loans taken out
at a specific point in the ownership cycle, this study tracks the default probabilities over the entire
duration a borrower holds a mortgage on a particular residential property. This is an important
methodological innovation, as the authors find that the average number of mortgage instruments
held over an entire ownership cycle is 2.7. The data set contains information that allows the authors
to calculate initial loan-to-value (LTV) ratios for each purchase transaction, and the value of a second or even third mortgage is added to the initial LTV calculation. By characterizing sale and
default probabilities over the complete time horizon associated with a particular “ownership experience,” meaning the time that an individual household occupies the same residence, the analysis
more realistically models the individual household conditions that might trigger a default decision.
It is very unlikely that a household’s default probability is unrelated to the risk associated with prior
mortgages it has held on the property. In recent years, housing equity extractions, in the form of
refinancing an existing mortgage or taking out a home equity loan, have become common practice
among U.S. households. While some individual borrowers undertaking such refinancing may simply be engaging in optional consumption planning and portfolio rebalancing, for other households,
such refinancing may be a sign of financial distress. In this second scenario, the subsequent mortgage may carry a higher risk of default. By studying the entire ownership duration, the household’s
cumulative default probability can be calculated, even when a subprime mortgage is subsequently
refinanced. Ownership experiences are calculated over a 12-year period, since the data do not contain many ownership cycles that exceed this length.
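The cumulative default probability over an ownership experience can be approximated with a Kaplan-Meier estimator applied to ownership spells, as in the sketch below (using the lifelines library). The spells are hypothetical, and treating sales as censoring is a simplification: a competing-risks estimator would handle sales as a distinct exit. This illustrates the general technique, not the authors’ estimator.

import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical ownership spells: length in quarters and whether the
# spell ended in a foreclosure sale (1) or was sold/censored (0).
spells = pd.DataFrame({
    "quarters": [8, 20, 35, 12, 48, 30, 16, 44],
    "default":  [0,  1,  0,  1,  0,  0,  1,  0],
    "subprime": [1,  1,  0,  1,  0,  0,  1,  0],
})

kmf = KaplanMeierFitter()
for is_subprime, group in spells.groupby("subprime"):
    kmf.fit(group["quarters"], event_observed=group["default"])
    # Cumulative incidence of default = 1 - Kaplan-Meier survival.
    incidence = 1.0 - kmf.survival_function_["KM_estimate"]
    print("subprime" if is_subprime else "prime", float(incidence.iloc[-1]))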
Key Findings
• The authors estimate that, at any point over a 12-year period, residential homeownerships initiated with purchase mortgages financed by a subprime lender are almost 6 times more likely to
default than are ownerships financed through mortgages issued by a prime lender. Using data
from the entire 18-year sample, within 12 years of the initial purchase, the estimated default probability for a subprime borrower is approximately 18 percent, compared with 3 percent for prime borrowers. This means, however, that over a 12-year period, 82 percent of homeownerships initially financed with a subprime mortgage had positive outcomes, in the sense that the borrower successfully serviced the monthly mortgage payment and chose to remain in the residence or elected to sell the house.

[Figure: Subprime Market Shares. Subprime share of mortgages issued in Massachusetts and California, 2001–2005, in percent. Note: Statistics are based on HMDA data, as recorded in The 2007 Mortgage Market Statistical Annual.]
• Negative house price appreciation is the main determinant of foreclosures. Homeownership outcomes are highly sensitive to the evolution of house prices and to the initial combined LTV ratio
at origination, but are somewhat less sensitive to employment conditions. Since there is a large
variation in the values of these variables across different annual cohorts of home buyers, there is
a huge variation in the individual experiences of subprime borrowers. For instance, Massachusetts
borrowers who financed a purchase with a subprime mortgage in 1998 benefited from the state’s
historic run-up in house prices through 2005. Subprime borrowers who took out purchase loans
in 2005 were disproportionately vulnerable to declining property values, and hence to default.
• Much of the dramatic rise in Massachusetts foreclosures during 2006 and 2007 is attributable to
a decline in house price appreciation that began in mid-2005. Approximately 30 percent of
Massachusetts foreclosures that took place in 2006 and 2007 were traced to borrowers who used
a subprime mortgage to purchase their house, up from only 10 percent in 2003 and 2004.
However, almost 44 percent of the 2006–2007 foreclosures were tied to borrowers whose most
recent mortgage was issued by a subprime lender; of this group, almost 60 percent had initially
financed their residential housing purchase with a mortgage issued by a prime lender. This result
strongly implies that many of the current foreclosures have affected borrowers who initially
financed with a prime loan and subsequently refinanced with a subprime lender.


[Figure: Massachusetts House-Price Growth, Foreclosures, and Delinquencies, January 1989 to August 2007. Top panel: the foreclosure rate (percent of homes foreclosed) plotted against house price growth (percent, at annual rates). Bottom panel: the foreclosure rate plotted against the 30-day delinquency rate (percent of borrowers delinquent). Source: Authors’ calculations based on foreclosure and house-price data from the Warren Group, and 30-day delinquencies data from the Mortgage Bankers Association.]

• The relationship between house prices, foreclosures, and cash-flow problems at the individual
household level is consistent with a simple model of the mortgage default problem that is standard in the literature, but by enriching this model, the authors show how the individual household’s unique financial circumstances may determine the decision to default. Negative equity is a
necessary but not a sufficient condition for default, as future house price appreciation may make it optimal to continue servicing the mortgage. Owners of condominiums and multi-family homes
are estimated, respectively, to have a 42 percent and a 57 percent higher conditional default probability than are owners of single-family residences.
• The subprime mortgage market has played an important role in the ongoing foreclosure crisis by
creating a class of homeowners who were particularly vulnerable to declining house price appreciation. As a group, subprime borrowers had higher LTV ratios and, thus, a smaller financial cushion against negative house price appreciation. Subprime lenders also made loans to borrowers
with a history of cash-flow problems, and demanded monthly mortgage payments that in some
cases exceeded 50 percent of a household’s monthly income.
Implications
Looking at the entire history of the U.S. subprime mortgage market, it is evident that it experienced a run of good luck in the form of persistently increasing house price appreciation until mid-2005. The interaction between declining house prices and the rise in foreclosures is a significant consideration in assessing the current problems stemming from the foreclosure crisis, but policymakers should not lose sight of the fact that the presence of the subprime mortgage market has
enabled a class of borrowers, many of whom might not have qualified for a mortgage from a prime
lender, to become successful homeowners.
The current public policy debate improperly questions the value of the subprime lending channel
as a whole, and fails to differentiate among mortgages given to specific types of subprime borrowers. While subprime borrowers are more apt to experience foreclosure, 82 percent of borrowers who
use a subprime loan to finance an initial purchase mortgage have had successful outcomes, while a
substantial portion of the foreclosures tied to subprime loans is traced to prime mortgages that were
subsequently refinanced by a subprime lender. Therefore, it is important to distinguish among the
various components of the subprime lending channel in assessing whether subprime borrowers as a group default “too often.” In the context of certain public policy questions, the probability of default associated with an entire ownership experience may be much more relevant than the default probability associated with a single loan—an important distinction for judging the effects of the subprime mortgage market on increasing the homeownership rate. The authors plan additional research to try to pinpoint the exact types of borrower and mortgage characteristics that most often lead to defaults resulting in foreclosures.

[Figure: Kaplan-Meier Non-Parametric Cumulative Incidence of Default, Subprime versus Prime. Cumulative default incidence (percent) by quarters since house purchase, 0 to 52.]

[Figure: OFHEO House Price Indexes. House price levels for Massachusetts, California, and the United States, 1988–2006. Note: Data are from the Office of Federal Housing Enterprise Oversight (OFHEO). Only single-family residential homes are included in the house-price calculations.]
w-07-16

Input and Output Inventories in General Equilibrium
by Matteo Iacoviello, Fabio Schiantarelli, and Scott Schuh
complete text: http://www.bos.frb.org/economic/wp/wp2007/wp0716.htm
email: iacoviel@bc.edu, schianta@bc.edu, scott.schuh@bos.frb.org

Motivation for the Research
While it is widely recognized that inventory investment plays an important role in business cycle
fluctuations, the goal of building actual macroeconomic models that successfully explain the role
played by inventories has not been realized. The conventional view of inventories continues to hold
that these are a significant factor in cyclical changes. Moreover, some believe that structural changes
in inventory behavior are an important reason behind the GDP volatility decline in the United
States since the early 1980s, a phenomenon economists have widely termed the “Great
Moderation.”
Prior research on inventories has focused mainly on the manufacturing sector, and thus has not
really attempted to classify total inventories within a general equilibrium framework. Yet to be truly effective, models of aggregate inventory behavior must be comprehensive enough to explain heterogeneous behavior among different types of inventories and stocks from sectors other than manufacturing. Inventory target ratios exhibit substantial differences in their cyclical properties. The
output-inventory ratio is roughly acyclical, while the input-inventory ratio is very countercyclical,
consistently exhibiting sharp increases during recessions. Disaggregating inventories can help our
understanding of the reasons why input and output inventories display distinctly different cyclical
behavior. Models that distinguish between these two types of inventories are also likely to have an
advantage in explaining and understanding aggregate inventory behavior. By separating inventories
into input and output components, the authors sharpen the focus on the economic determinants of
inventory holding and their role in transmitting shocks throughout the economy.
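The cyclical properties noted above are conventionally measured as the correlation between the Hodrick-Prescott-filtered cyclical components of a ratio and of output. The sketch below illustrates the computation on synthetic quarterly series with the standard smoothing parameter (lambda = 1600); the data-generating choices are assumptions for illustration only, not the authors’ data.

import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def cyclicality(ratio, output, lamb=1600):
    # Correlation of HP-filtered cyclical components: negative values
    # indicate a countercyclical series, values near zero an acyclical one.
    cycle_r, _ = hpfilter(np.log(ratio), lamb=lamb)
    cycle_y, _ = hpfilter(np.log(output), lamb=lamb)
    return np.corrcoef(cycle_r, cycle_y)[0, 1]

# Synthetic quarterly series: a random-walk output and a ratio built to
# move against it, so the statistic comes out negative by construction.
rng = np.random.default_rng(0)
output = np.exp(np.cumsum(0.01 * rng.standard_normal(180)))
ratio = np.exp(-0.5 * np.log(output) + 0.02 * rng.standard_normal(180))
print(cyclicality(ratio, output))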
Research Approach
Using an estimated two-sector dynamic stochastic general equilibrium (DSGE) model that incorporates independent roles for input and output inventories, this paper provides the first data-consistent, structural decomposition of the Great Moderation. Novel features include having two sectors based on inventory-holding behavior (goods sector with inventories, services sector without),
and differentiating input- and output-inventories by stage of fabrication. Other unique features are
the use of non-zero inventory depreciation, which provides an empirically important incentive to
adjust inventories in response to shocks, and the inclusion of multiple shocks. Estimation by
Bayesian methods is based on the following six variables: 1) output from the goods sector; 2) output from the services sector; 3) the stock of input inventories; 4) the stock of output inventories; 5)
the relative price of goods to services; and 6) total fixed investment. The full sample period estimate runs from 1960:Q1 to 2004:Q4, and the model is also estimated over two sub-periods,
1960–1983 and 1984–2004. The sub-sample estimation takes into account the notable changes in
the steady-state values of the inventory-to-target ratios and the relatively greater importance played
by the services sector in the U.S. economy since 1984. This division into two sub-periods permits
an investigation of structural changes that may have occurred and caused a decline in output volatility since 1984, thus identifying what role inventories may have played in the Great Moderation.
Key Findings
• The model accounts reasonably well for the variance of the key variables, as well as for how these
variables respond to various shocks, and for most correlations among the variables in the data. The
model successfully mimics the greater volatility of input-inventory investment and its higher
degree of procyclicality as compared with output-inventory investment, both when testing the correlation between inventory investment and goods output, and when exploring the connection
between changes in inventory investment and the change in GDP. Moreover, the model can
reproduce the countercyclicality of the input-inventory target ratio, and the relative acyclicality of
the output-inventory target ratio.
• When estimated across two sub-periods, the model captures the volatility reduction observed in
aggregate variables. The sub-sample estimates for 1960–1983 and 1984–2004 match the volatility decline almost perfectly, showing a reduction in the standard deviation of GDP of .86 percentage points. The model accurately reproduces the reduced procyclicality of output-inventory investment after 1983. Yet in the second sub-period, inventory movements depend more on their sector’s own innovations, while for other variables, a larger component of the volatility in economic
activity, as summarized by GDP, appears to be due to demand-preference shocks.
• The model suggests that the decline in GDP volatility is primarily due to a decline in the volatility of technology shocks, especially goods technology. The correlation between the technology
shocks in the two sectors also decreased substantially between the two sub-periods, falling from
.75 in the 1960–1983 period to .51 in the 1984–2004 period. In the second sub-period, the share of GDP variance accounted for by non-technological shocks rose from 13 percent to 31 percent.
• Three important structural changes appear to have occurred in the U.S. economy during the
1984–2004 period. First, the depreciation rate for output inventories decreased, going from 6 percent to 4 percent. Second, in the service sector, the capital utilization rate rose. Third, capital
adjustment became more costly. The stock of fixed capital became more costly to adjust, especially in the goods sector, and varying the rate of capital utilization became more costly, especially in
the services sector. However, the authors find that these structural changes in the parameters
account for only a small fraction of the reduction in aggregate output volatility.
• The authors do not find that inventory investment played a significant role in the Great
Moderation—neither reduced volatility of inventory shocks nor changes in structural parameters
associated with inventories appear to have played a real role in the decline of GDP volatility. The
reduced ratio of input inventories to goods output observed in the data is associated with a
decrease in GDP and goods-sector output volatility, but the size of the decrease is small.
• To summarize, the model suggests that the bulk of the Great Moderation is attributable to the
lower volatility of shocks, which was evident primarily in the goods-sector technology shock, a
result consistent with other aggregate analyses of the Great Moderation. In the second period, the
standard deviations of the technology shock in the goods sector and of the input-inventory shock
exhibited the largest declines.
Implications
The most important lesson of this study is that an estimated DSGE model can help characterize
the behavior of input and output inventories in general equilibrium. Each kind of inventory investment plays a different role in the model and exhibits different degrees of volatility and procyclicality. The model can replicate the observed volatility and cyclicality of both input and output inventory investment, particularly the fact that input-inventory investment is more volatile and procyclical than output-inventory investment. The model also reproduces the countercyclicality of the
input-inventory target ratio, and the relative acyclicality of the output-target ratio.
The authors find that the estimated depreciation rates for inventories are significant, especially
those for output inventories, which decay at an estimated rate of 8 percent per quarter. The magnitude of this depreciation rate and its importance for fitting the data indicate that further research
on understanding inventory depreciation would be valuable.
While the model developed in this study provides a new, more expansive, and data-consistent
framework for analyzing the cyclical properties of inventories, it omits an examination of certain
aspects of inventory behavior that may be important to understanding business cycle fluctuations.
For example, categorizing inventories into only two types abstracts from the supply and distribution chains that pervade the actual input-output structure of the goods sector and likely play a vital
role in propagating shocks. The model does not consider how markup variations and nominal features matter for inventory behavior and business cycles, nor the micro-founded motivations for why
firms wish to hold finished goods. The representative-agent approach to output inventories used in
this model abstracts from the decentralized problem of inventory holding by retailers or by final
good producers that is common in partial-equilibrium inventory analyses. Further research should
explicitly model the relationship between individual consumers and retailers (or final goods producers) in an imperfectly competitive setting; such an analysis should use a model that also allows
for input-output supply-chain relationships.


Public Policy Briefs
b-07-2

A Principal Components Approach to Estimating Labor
Market Pressure and Its Implications for Inflation
by Michelle L. Barnes, Ryan Chahrour, Giovanni P. Olivei, and Gaoyan Tang
complete text: http://www.bos.frb.org/economic/ppb/2007/ppb072.htm
email: michelle.barnes@bos.frb.org, rc2374@columbia.edu, giovanni.olivei@bos.frb.org, gaoyan.tang@bos.frb.org

Motivation for the Research
While the unemployment rate gap, defined as the deviation of the unemployment rate from its long-term trend, usually serves as a good summary statistic of current labor market conditions, economic policymakers
follow many different labor market series to predict future wage and price inflation. At times, other
measures of labor market activity send different signals than are indicated by the unemployment
rate. Policymakers need the ability to usefully summarize data from a large number of series and to
determine whether the unemployment rate gap serves as a sufficient summary statistic of labor
market pressure, meaning tight or loose employment conditions.
Research Approach
A summary statistic is constructed for 12 different labor market series, some more specific to the
supply side, some to the demand side of the market. Taken together, these series provide a wide
array of information on labor market activity. The summary statistic is built using principal components analysis, wherein each principal component measures a different driving force in the original data and does not repeat information. The authors examine the period 1973:Q1 through
2002:Q4, and compare the evolution of the unemployment rate gap with the evolution of this principal components summary measure. The comparisons are tested for robustness by considering
dynamic factors in real time, and by using a larger set of series that encompasses more disaggregated information about the labor market to construct the summary measure. To explore the recent
evolution of the principal components measure against the evolution of the unemployment rate
gap, the analysis based on the original 12 series is then extended to include data from 2003:Q1
through 2007:Q2.
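Mechanically, the first principal component of a standardized panel can be extracted with a singular value decomposition, as in the sketch below. The synthetic one-factor panel is an illustrative assumption, not the brief’s 12 labor market series.

import numpy as np

def first_principal_component(X):
    # Standardize each series, then take the leading SVD component.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = U[:, 0] * s[0]                # time series of the first PC
    share = s[0] ** 2 / np.sum(s ** 2)     # variance share it explains
    return scores, share

# Hypothetical panel: 120 quarters of 12 labor market series driven in
# part by one common factor.
rng = np.random.default_rng(0)
factor = rng.standard_normal(120)
X = np.outer(factor, rng.uniform(0.5, 1.5, 12)) + rng.standard_normal((120, 12))
scores, share = first_principal_component(X)
print(f"first principal component explains {share:.0%} of the variance")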
Key Findings
• A comparison of the unemployment rate gap and the first principal component for the 12 series
spanning 1973:Q1 through 2002:Q4 reveals that the principal component measure explains 75 percent of the variability in the original data. The unemployment rate gap closely tracks this summary
statistic. Over this entire sample period, the correlation between the two series is 0.96, even during
periods when the unemployment rate gap peaked higher than the first principal component.
• When the evolution of the unemployment rate gap is compared to a dynamically estimated first
principal component using a 10-year rolling sample for the period 1982:Q1 to 2002:Q4, the correlation between the two series, at 0.97, is very close.
• When a larger set of labor market series, encompassing more disaggregated information, is used to construct the principal components summary statistic, the first principal component accounts for 68 percent of the variability in the larger set of series. This expanded principal component also
closely resembles the unemployment rate gap.


• The unemployment rate gap remains a good summary indicator of labor market pressure, even when
the unemployment rate series is not included in the information set upon which the first principal
component is based. The results confirm the close relationship between various principal-component-based summary indicators of labor market pressure and the unemployment rate gap.
• Over time, the first principal component and the unemployment rate gap have moved closely together. Where the two have recently diverged, during some quarters of the 2003:Q1–2007:Q2 period, the difference between the summary indicator and the unemployment rate gap has been a function of the de-trending assumption used for the unemployment rate. Using a Hodrick-Prescott filter, the difference is relatively small. The difference is more significant, judged by historical standards, when the unemployment rate trend is kept flat at its 2004:Q4 level: the unemployment rate gap suggests more labor market pressure than does the principal components measure, although the extent of the overstatement is not certain.
• Using a standard Phillips curve framework that relates wage or price inflation to labor market pressure, wage or price inflation expectations, and a set of other control variables, the unemployment
rate gap and the summary measure are taken as alternative variables for gauging labor market pressure. In dynamic simulations over the period 1996:Q1 to 2007:Q2 using 4-quarter moving averages of actual and predicted wage and price inflation, there is no clear-cut evidence as to whether
the principal components measure or the unemployment rate gap is the more accurate predictor of
wage and price inflation. While there is some evidence that wage inflation behavior has been more
consistent with the recent evolution of the summary measure than with the evolution of the unemployment rate gap, there is little statistical reason to favor one series over the other.
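The comparison rests on Phillips curve regressions of wage (or price) inflation on inflation expectations and a labor market pressure measure. The stylized sketch below estimates such a regression on simulated data; the series and coefficients are assumptions, not the brief’s specification, and swapping the principal components measure in for the unemployment gap would give the rival fit.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 140
gap = rng.standard_normal(T)                   # labor market pressure gap
expected = 2.0 + 0.5 * rng.standard_normal(T)  # inflation expectations
wage_inflation = 1.0 + 0.8 * expected - 0.4 * gap + rng.standard_normal(T)

# OLS Phillips curve: wage inflation on a constant, expectations, and the
# pressure gap.
X = sm.add_constant(np.column_stack([expected, gap]))
result = sm.OLS(wage_inflation, X).fit()
print(result.params)  # constant, expectations coefficient, gap coefficient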

[Figure: Actual and Predicted Wage Inflation (4-quarter moving average, 1996:Q1–2007:Q2). Percent scale, 1997–2007; series shown are actual wage inflation, predicted wage inflation using the principal components labor market activity gap measure, and predicted wage inflation using the unemployment gap measure. Note: Predicted wage inflation is based on the Phillips curve specification, using the measures noted in the legend.]


Implications
Principal components methodology allows policymakers to summarize many labor market series,
and to reconcile much of the divergence among the different labor market indicators. As a summary indicator, the principal components measure is robust to various specifications, and closely tracks
the unemployment rate gap. Yet over the last 35 years, relying on the unemployment rate gap rather
than on the principal components factor has not produced consistently worse outcomes for inflation forecasting; thus, the unemployment rate gap remains a good summary statistic for the current
state of the labor market. In addition to its usefulness for predicting wage and price inflation, the
principal components summary indicator of labor market pressure can also serve as an estimate of
the size of the output gap.


Contributing Authors
Dan Ariely is the Alfred P. Sloan Professor of Behavioral Economics at the Massachusetts Institute
of Technology’s Sloan School of Management.
Michelle L. Barnes is a senior economist and policy advisor in the research department at the
Federal Reserve Bank of Boston.
Anat Bracha is a professor at the Eitan Berglas School of Economics at Tel Aviv University.
Margaret Carten is a payments industry specialist in the emerging payments research group at the
Federal Reserve Bank of Boston.
Ryan Chahrour is a second-year graduate student in economics at Columbia University. At the
time the research for this paper was conducted, he was a research assistant in the research department of the Federal Reserve Bank of Boston.
Quoc-Anh Do is a Ph.D. candidate in economics at Harvard University.
Christopher L. Foote is a senior economist and policy advisor in the research department at the
Federal Reserve Bank of Boston.
Kristopher S. Gerardi is a research associate in the research department at the Federal Reserve
Bank of Boston and a Ph.D. candidate in economics at Boston University.
Lorenz Goette is a senior economist with the Research Center for Behavioral Economics and
Decisionmaking in the research department at the Federal Reserve Bank of Boston.
Matteo Iacoviello is a professor of economics at Boston College.
Stephen Leider is a Ph.D. candidate in economics at Harvard University.
Dan Littman is a payments research consultant in the payments research group at the Federal
Reserve Bank of Cleveland.
Stephan Meier is a senior economist with the Research Center for Behavioral Economics and
Decisionmaking in the research department at the Federal Reserve Bank of Boston.
Markus M. Möbius is an associate professor of economics at Harvard University and a faculty
research fellow at the National Bureau of Economic Research.
Giovanni P. Olivei is a vice president in the research department at the Federal Reserve Bank of
Boston.
Neel Rao is a graduate student at Harvard University.
Tanya Rosenblat is an assistant professor of economics at Wesleyan University. At the time the
paper summarized in this issue was written she was a visiting scholar in the research department of
the Federal Reserve Bank of Boston.


Margarita Sapozhnikov is a senior associate at CRA International.
Fabio Schiantarelli is a professor of economics at Boston College.
Scott Schuh is a senior economist and policy advisor in the research department at the Federal
Reserve Bank of Boston.
Adam Hale Shapiro is a research associate in the research department at the Federal Reserve Bank
of Boston and a Ph.D. candidate at Boston University.
Charles Sprenger is a graduate student at the University of California, San Diego. When this
paper was written, he was a research associate in the research department at the Federal Reserve
Bank of Boston.
Joanna Stavins is a senior economist and policy advisor in the research department at the Federal
Reserve Bank of Boston.
Alois Stutzer is an assistant professor at the University of Basel.
Gaoyan Tang is a senior research assistant in the research department at the Federal Reserve Bank
of Boston.
Robert K. Triest is a senior economist and policy advisor in the research department at the Federal
Reserve Bank of Boston and was recently a visiting scholar with the Center for Retirement
Research at Boston College.
Paul S. Willen is a senior economist and policy advisor in the research department at the Federal
Reserve Bank of Boston.
Tyler Williams is a senior research assistant in the research department at the Federal Reserve
Bank of Boston.
Michael Zehnder is a graduate student at the University of Basel.
