recent research

research review

Issue No. 4, July 2005 - December 2005

federal reserve bank of boston

Research Department
Jeffrey C. Fuhrer
Executive Vice President and
Director of Research

Economists
Jane Sneddon Little, VP
Geoffrey M. B. Tootell, VP
Mark Aguiar
Michelle L. Barnes
Katharine Bradbury
Mary A. Burke
Christopher L. Foote
Peter Fortune
Fabià Gumbau-Brisa
Jane Katz
Yolanda K. Kodrzycki
Borja Larrain
Stephan Meier
Giovanni P. Olivei
Scott Schuh
Joanna Stavins
Robert K. Triest
J. Christina Wang
Paul Willen
Manager
Patricia Geagan
Editor
Suzanne Lorant

research review
Issue No. 4, July 2005 – December 2005
Research Review provides an overview of recent research by economists of
the research department of the Federal Reserve Bank of Boston. Included are
summaries of scholarly papers, staff briefings, and Bank-sponsored conferences.
Research Review is available without charge. To be placed on the mailing list,
or for additional copies, please contact the Research Library:
Research Library—D
Federal Reserve Bank of Boston
600 Atlantic Avenue
Boston, MA 02210
Phone: 617.973.3397
Fax: 617.973.4221
E-mail: boston.library@bos.frb.org
Research Review is available on the web at
http://www.bos.frb.org/economic/ResearchReview/index.htm.
Views expressed in Research Review are those of the individual authors and do
not necessarily reflect official positions of the Federal Reserve Bank of Boston
or the Federal Reserve System. The authors appreciate receiving comments.

Graphic Designer
Heidi Furse

Research Review is a publication of the
Research Department of the Federal Reserve
Bank of Boston.
ISSN 1552-2814 (print)
ISSN 1552-2822 (online)
©Copyright 2006
Federal Reserve Bank of Boston

Research Department Papers Series of the Federal Reserve Bank of Boston
Working Papers present statistical or technical research. They are generally written
for economists and others with strong technical backgrounds, and they are intended
for publication in professional journals.
Public Policy Discussion Papers present research bearing on policy issues. They are
generally written for policymakers, informed business people, and academics. Many
of them present research intended for professional journals.
Public Policy Briefs present briefing materials prepared by Boston Fed research
staff on topics of current interest concerning the economy.
Research department papers are available online only.
http://www.bos.frb.org/economic/research.htm

Executive Summaries in This Issue

Public Policy Discussion Papers

p-05-2 Alternative Measures of the Federal Reserve Banks’ Cost of Equity Capital
Michelle L. Barnes and Jose A. Lopez

p-05-3 Lifecycle Prices and Production
Mark Aguiar and Erik Hurst

p-05-4 Deciding to Distrust
Iris Bohnet and Stephan Meier

Working Papers

w-05-9 The Roles of Comovement and Inventory Investment in the Reduction of Output Volatility
F. Owen Irvine and Scott Schuh

w-05-10 Tom Sawyer and the Construction of Value
Dan Ariely, George Loewenstein, and Drazen Prelec

w-05-11 Large Stakes and Big Mistakes
Dan Ariely, Uri Gneezy, George Loewenstein, and Nina Mazar

w-05-12 New Approaches to Ranking Economics Journals
Yolanda K. Kodrzycki and Pingkang David Yu

w-05-13 Changes in the Federal Reserve’s Inflation Target: Causes and Consequences
Peter N. Ireland

w-05-14 Real Wage Rigidities and the New Keynesian Model
Olivier Blanchard and Jordi Galí

w-05-15 Testing Economic Hypotheses with State-Level Data: A Comment on Donohue and Levitt (2001)
Christopher L. Foote and Christopher F. Goetz

w-05-16 Heterogeneous Beliefs and Inflation Dynamics: A General Equilibrium Approach
Fabià Gumbau-Brisa

w-05-17 Contracts with Social Multipliers
Mary A. Burke and Kislaya Prasad

w-05-18 Does Firm Value Move Too Much to be Justified by Subsequent Changes in Cash Flow?
Borja Larrain and Motohiro Yogo

Public Policy Briefs

b-05-2 Additional Slack in the Economy: The Poor Recovery in Labor Force Participation During This Business Cycle
Katharine Bradbury

Public Policy Discussion Papers
p-05-2

Alternative Measures of the Federal Reserve Banks’
Cost of Equity Capital
by Michelle L. Barnes and Jose A. Lopez
complete text: http://www.bos.frb.org/economic/ppdp/2005/ppdp052.pdf
email: michelle.barnes@bos.frb.org, jose.a.lopez@sf.frb.org

Motivation for the Research
The Monetary Control Act of 1980 requires the Federal Reserve System to provide payment services to depository institutions through the twelve Federal Reserve Banks at prices that fully reflect
the costs a private-sector provider would incur, including a cost of equity capital (COE). The
Reserve Banks require such an estimate every year, despite arguments in the literature that COE
estimates are “woefully” and “unavoidably” imprecise.
Research Approach
The authors examined several COE estimates based on the Capital Asset Pricing Model (CAPM)
and compared them using econometric and materiality criteria. First, they used common model
selection criteria and hypothesis-testing methods to examine whether it is appropriate to introduce
additional factors to the benchmark CAPM. Next, in order to examine whether COE estimates
generated by alternative calculation methods differ significantly from COE estimates based on the
benchmark approach, they generated almost 200 alternative COE estimates for the Reserve Banks’
payments business, using various modeling decisions applied primarily to data on publicly traded
bank holding companies.
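To make the benchmark concrete, the minimal sketch below works through a CAPM-based COE calculation. It is an illustration only, not the authors' implementation; the return series, risk-free rate, and market risk premium are hypothetical placeholders.

```python
import numpy as np

# Hypothetical monthly excess returns for a peer group of publicly traded
# bank holding companies and for a market index (placeholder values only).
peer_excess = np.array([0.012, -0.008, 0.021, 0.005, -0.013, 0.017])
market_excess = np.array([0.010, -0.006, 0.018, 0.004, -0.011, 0.015])

risk_free = 0.04        # assumed annualized risk-free rate
market_premium = 0.06   # assumed annualized equity market risk premium

# CAPM beta: slope of peer excess returns on market excess returns.
beta = (np.cov(peer_excess, market_excess, ddof=1)[0, 1]
        / np.var(market_excess, ddof=1))

# Benchmark CAPM cost of equity: COE = r_f + beta * market risk premium.
coe = risk_free + beta * market_premium
print(f"beta = {beta:.2f}, CAPM cost of equity = {coe:.1%}")
```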
Key Findings
• The benchmark CAPM applied to a large peer group of competing firms provides a COE
estimate that is not clearly improved upon by using a narrow peer group, introducing additional
factors into the model, or taking account of additional firm-level data, such as leverage and line-of-business concentration.
• While there are various important empirical decisions to be made ex ante, these decisions do not generate COE estimates that differ materially from those produced by simpler methods.
• A standard implementation of the benchmark CAPM provides a reasonable COE estimate,
which is needed to impute costs and set prices for the Reserve Banks’ payments business.
Implications
Since an important goal of the Private Sector Adjustment Factor methodology used by the Reserve Banks for estimating their imputed cost of equity capital is public replicability, these simpler methods seem preferable. However, the caveats regarding the theoretical validity and
empirical accuracy of the COE estimates generated by the CAPM must always be kept in mind.


p-05-3

Lifecycle Prices and Production
by Mark Aguiar and Erik Hurst
complete text: http://www.bos.frb.org/economic/ppdp/2005/ppdp053.pdf
email: mark.aguiar@bos.frb.org, erik.hurst@gsb.uchicago.edu

Motivation for the Research
This paper studies how households substitute time for money. The vast majority of the literature
on this question focuses on labor supply decisions. However, such an exclusive focus overlooks a
number of other mechanisms that households use to substitute time for money.
The economic theory that motivates this paper originated in two seminal works of the 1960s.
Becker formalized the notion that consumption is the output of a production function that combines market goods and time. Such a “home production” function allows households to substitute
time for expenditures optimally in response to fluctuations in the relative cost of time. A similar
implication lies behind Stigler’s model of search. In the presence of informational frictions, the
same good may sell for different prices at a given point in time. By shopping more intensively, a
household can lower the market price paid for a given basket of goods.
These theoretical insights are now familiar. However, the quantitative importance of these margins is
difficult to pin down. In exploring how households substitute time for money, this paper investigates
how prices for goods vary across households in practice, and to what extent this variation accords with
standard theory. The paper also estimates the parameters of a home production function.
Research Approach
Using scanner data and time diaries, the authors document how households substitute time for
money through shopping and home production. The scanner database is novel in that it has
detailed demographics about the household making the purchases and in that it tracks household
purchases across multiple retail outlets. The scanner data set also includes information about the
shopping trip, making it possible to infer the household’s shopping intensity, while the time diaries
data enable examination of time spent in shopping over the life cycle. From the elasticity suggested by the data and observed shopping intensity, the authors impute the shopper’s opportunity cost
of time. Using this measure of the price of time and observed time spent in home production, they
estimate the parameters of a home production function. Finally, they use the estimated elasticities
for shopping and home production to calibrate an augmented lifecycle consumption model that
predicts the observed empirical patterns quite well.
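As a rough illustration of the kind of relationship the scanner data are used to estimate (a stylized sketch with placeholder data, not the authors' specification, which includes household demographics and good-level detail), consider a log-log regression of the price paid for an identical good on shopping frequency:

```python
import numpy as np

# Hypothetical scanner-style data: shopping-trip frequency and the log of the
# price paid for an identical good, one observation per household.
freq = np.array([2.0, 4.0, 3.0, 6.0, 1.0, 5.0, 8.0, 2.5])
log_price = np.array([0.05, 0.00, 0.02, -0.03, 0.07, -0.01, -0.06, 0.03])

# Log-log regression of price paid on shopping frequency; the slope is an
# elasticity.  A slope near -0.15 would mean that doubling shopping frequency
# (an increase of ln 2 in log frequency) lowers the price paid by roughly
# 10 percent, the magnitude reported in the Key Findings below.
X = np.column_stack([np.ones_like(freq), np.log(freq)])
(intercept, elasticity), *_ = np.linalg.lstsq(X, log_price, rcond=None)

print(f"price-frequency elasticity: {elasticity:.2f}")
print(f"effect of doubling frequency: {np.expm1(elasticity * np.log(2)):+.1%}")
```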
Key Findings
• When households have access to home production and shopping technologies, market expenditure is a poor proxy for actual consumption.
• Observed household behavior, in terms of expenditure, time use, and prices, is consistent with
standard economic principles once the analytical framework allows households to access home production and shopping technologies.
• There is substantial heterogeneity in prices paid across households for identical consumption
goods in the same metro area at any given point in time.
• For identical goods, prices paid are highest for middle-aged, rich, and large households, consistent with the hypothesis that shopping intensity is low when the cost of time is high.
• Households with more children pay higher prices than households with fewer or no children.
• Households that shop more frequently pay lower prices for identical goods: The data suggest that a
doubling of shopping frequency lowers the price paid for a given good by approximately 10 percent.
• The cost of time is hump-shaped over the life cycle, in a manner that differs from the wage of the
household head, reflecting the reality that the shopper may not face the same wage as the household head and/or that the household may not be able to adjust labor hours at the margin.
• The shopper’s opportunity cost of time peaks in middle age at a level roughly 40 percent higher
than that of retirees.
• The estimated elasticity of substitution between time and market goods in home production is
close to 2.

[Figure: Price Paid, by Age of Household Head. The chart plots a price index (roughly 0.90 to 1.08) against the age of the household head (25-29 through 65+), separately for all households and for married households. Note: Data from AC Nielsen Homescan. See paper for details on construction of the price index.]

Implications
The paper provides a microfounded story of how the ability to home produce and shop implies a
nonseparability between expenditure and leisure even when utility is separable over consumption
and leisure.
Taken together, the results highlight the danger of interpreting lifecycle expenditure without
acknowledging the changing demands on time over the life cycle and the available margins of substituting time for money.


There is a growing interest in the role of nonmarket activities and the allocation of work between
the market and the household. The insights of household production have already proved fruitful in
explaining phenomena as disparate as baby booms and business cycles. While the focus of this paper
is primarily on lifecycle consumption, the authors believe that the data and analysis presented in the
paper support a broader emphasis on how time is spent outside of market labor.
p-05-4

Deciding to Distrust
by Iris Bohnet and Stephan Meier
complete text: http://www.bos.frb.org/economic/ppdp/2005/ppdp054.pdf
email: irisbohnet@harvard.edu, stephan.meier@bos.frb.org

Motivation for the Research
With the recent corporate scandals as background, this paper examines why people sometimes do
not distrust enough. The paper investigates whether the default option influences individuals’ trust
(distrust) level and their counterparts’ trustworthiness. For example, in doctor-patient
relationships “full trust” is normally the default; patients generally have to actively distrust their
doctors to seek confirmation of medical advice, as second opinions are uncommon. What is the
influence of this default on patients’ trust levels and doctors’ trustworthiness?
Traditional economic models based on selfish material preferences and common knowledge
thereof assume no trustworthiness and no trust. Various behavioral regularities, such as social
preferences, predict outcomes away from the traditional equilibrium. However, like traditional
models, such outcome-based social preference models do not predict that the default affects behavior—if the choice sets remain the same.
In contrast to the vast literature on trust, in which the default is “no trust,” the authors study the
effects of a default of “full trust” to test the hypothesis that in many principal-agent relationships
characterized by incomplete contracts, the trust default leads people to trust more than they would
if the default were “no trust,” often leaving them worse off than if they had not trusted at all.
Research Approach
To study the behavioral consequences of changing the default in a trust relationship, the authors
conducted two types of experiments, enabling them to compare results obtained employing a
traditional experimental design with results obtained using a new design. Students from various
universities in the greater Boston area served as subjects.
The paper investigates the effect of changing the default from “no trust” to “full trust” in an interaction between two parties, a “principal” and an “agent.” In both situations, the principal decides on
a level of trust and the agent decides how much to reciprocate, depending on the principal’s level
of trust. In one condition, trust has to be built up incrementally. The principal receives $10 and can entrust all or any portion of the money to her agent. The money entrusted is tripled before being transmitted to the agent to simulate the gains in efficiency created by trust.
In the other condition, the principal decides on her preferred level of trust by backing away from
total trust. The agent starts with $30 (as if the principal trusted fully) and the principal can show
full or partial mistrust of her agent by taking away some or all of the money. In symmetry with the first condition, the money the principal withdraws becomes one-third as valuable, in order to capture the losses in efficiency created by distrust.


After principals show their trust by either building up trust towards total trust or by reducing trust
from a situation of total trust, in a second stage agents have to decide how much of the money
currently in their possession to return to their principals.
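The payoff arithmetic implied by this description can be sketched as follows; it is a stylized reading of the summary above rather than the full experimental protocol, and the amounts returned in the example are purely illustrative.

```python
def trust_game(sent, returned):
    """'No trust' default: the principal starts with $10, builds trust by
    sending money, and the amount sent is tripled in transit to the agent."""
    principal = 10 - sent + returned
    agent = 3 * sent - returned
    return principal, agent

def distrust_game(withdrawn, returned):
    """'Full trust' default: the agent starts with $30 (as if fully trusted)
    and the principal shows distrust by withdrawing money, which is worth
    only one-third as much once taken back."""
    principal = withdrawn / 3 + returned
    agent = (30 - withdrawn) - returned
    return principal, agent

# Full trust in either framing leaves the agent holding $30 before repayment.
print(trust_game(sent=10, returned=15))        # -> (15, 15)
print(distrust_game(withdrawn=0, returned=15)) # -> (15.0, 15)
```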
Key Findings
• People are more trusting when the default is “full trust” (that is, in the distrust game) than when
it is “no trust” (in the trust game).
• At the same time, trustworthiness levels are lower if “full trust” is the default than in the
situation with “no trust” as default.
• Agents (second movers) punish distrust more in the distrust game than lack of trust in the trust
game, but principals (first movers) do not correctly anticipate this.
• The distrust game produces more efficient outcomes than the trust game but also more inequality:
Principals end up much worse than their agents in the distrust game.
• The results support the behavioral hypothesis that the default in a trust relationship matters.
Implications
In the long run, behavior that lies off the equilibrium path and leaves principals persistently worse off is not sustainable. If we want to avoid a fallback to complete distrust and preserve some of the efficiency gains that go along with trust, a default of “no trust” is more likely to yield success.
Trust should not be taken for granted. Rather, following the dictum “trust but verify,” second opinions
should be the standard procedure in doctor-patient relationships and parents should be encouraged
to pay surprise visits to their children’s schools.

Working Papers
w-05-9

The Roles of Comovement and Inventory Investment
in the Reduction of Output Volatility
by F. Owen Irvine and Scott Schuh
complete text: http://www.bos.frb.org/economic/wp/wp2005/wp059.pdf
email: irvinef@msu.edu, scott.schuh@bos.frb.org

Motivation for the Research
A substantial decline in the volatility of U.S. real GDP growth since the early 1980s has spawned a
growing literature filled with attempts to explain the decline. The primary explanations for the moderation in business cycles are: (1) improved monetary policy, (2) “good luck” in the form of fewer and less
severe shocks, (3) structural changes in the sales (demand) process, and (4) improved production and
inventory management techniques. No consensus has emerged yet on any of these explanations.
A common feature of most prior studies is the use of aggregate models and data. Each of
the leading explanations tends to abstract from heterogeneity among agents and to view aggregate
fluctuations as emanating from a single aggregate source, such as a monetary policy instrument or
some kind of aggregate shock. Comovement among aggregate variables is well established, but
comovement among industries (or sectors) has received less attention.


This paper investigates the role of the synchronized actions of heterogeneous agents in explaining
the reduction in GDP volatility. The authors provide evidence on the magnitude and nature of the
reduction in comovement among industries and offer an explanation based on structural changes
in the production and inventory behavior within and between industries.
Research Approach
Testing the competing hypotheses requires a macroeconometric framework with heterogeneous
agents and complete aggregation conditions that can encompass all of the hypotheses. The authors
use two frameworks to evaluate the data. One is a standard factor model, which does not include an
explicit role for structural relationships among industries. The other is a heterogeneous-agent vector
autoregression (HAVAR) model, which is well suited to evaluating the effects of structural changes on
comovement because it permits disaggregation of industries within an otherwise standard vector
autoregression model while maintaining all the necessary aggregation conditions. The authors
disaggregate the data into two sectors, manufacturing and trade.
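The sense in which comovement contributes to aggregate volatility rests on the identity Var(m + t) = Var(m) + Var(t) + 2 Cov(m, t). The sketch below applies this accounting to placeholder two-sector growth data; it illustrates the decomposition only and is not the factor or HAVAR models used in the paper.

```python
import numpy as np

def decompose(mfg, trade):
    """Split the variance of summed two-sector growth into own-sector
    variances and a comovement term: Var(m + t) = Var(m) + Var(t) + 2 Cov(m, t)."""
    c = np.cov(mfg, trade, ddof=1)
    return {"own_mfg": c[0, 0], "own_trade": c[1, 1],
            "comovement": 2 * c[0, 1],
            "aggregate": c[0, 0] + c[1, 1] + 2 * c[0, 1]}

# Placeholder sectoral growth series for an early and a late subperiod,
# drawn so that the later period has a much smaller cross-sector covariance.
rng = np.random.default_rng(0)
early = rng.multivariate_normal([0, 0], [[4.0, 2.5], [2.5, 4.0]], size=100).T
late = rng.multivariate_normal([0, 0], [[3.5, 0.5], [0.5, 3.5]], size=100).T

print("early:", decompose(*early))
print("late: ", decompose(*late))
```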
Key Findings
• Most of the reduction in GDP volatility since 1983 is accounted for by a decline in comovement
of output among industries that hold inventories.
• This decline in comovement was not the passive byproduct of a reduction in the volatility of common
factors; rather, the structure of the U.S. economy has undergone important changes in the behavior of sales and inventory investment among goods-producing industries.
• In the simulations, structural changes that occurred in long-run and dynamic relationships among
industries’ sales and inventory investment behavior accounted for more than 80 percent of the
reduction in output volatility—especially in the automobile and related industries.
• The greatest reduction in comovement occurred among industries that are linked by supply and
distribution chains.
• Both output volatility and comovement between industries declined in response to all kinds of
shocks (not just monetary policy shocks).
Implications
These findings significantly weaken the case for the “good luck” hypothesis that the U.S. economy
simply has been fortunate to have experienced less volatile shocks since 1983. Although that
hypothesis may appear to be correct when explored using macroeconomic data, when the goods-producing sector is disaggregated into just two industries (manufacturing and trade), structural changes in the relationships among industries’ sales, inventory investment, and output behavior become quite clear.
The fact that output volatility and comovement between industries both declined in response to all
kinds of shocks suggests that something changed fundamentally in the structure of the real economy.
Further, the economy’s dynamic responses to monetary policy shocks also changed, and these
changes differ from the responses to other types of shocks. This fact, along with apparent changes
in the monetary policy rule, suggests that something associated with monetary policy changed.
In order to understand the precise nature of the structural changes in production and inventory
management techniques and their implications for aggregate behavior and policy, additional theoretical and empirical research is needed.


w-05-10

Tom Sawyer and the Construction of Value
by Dan Ariely, George Loewenstein, and Drazen Prelec
complete text: http://www.bos.frb.org/economic/wp/wp2005/wp0510.pdf
email: ariely@mit.edu

Motivation for the Research
In a famous passage of Mark Twain’s novel, The Adventures of Tom Sawyer, Tom induces his friends
to relieve him of the tedious task of painting a fence by presenting the chore as a rare opportunity.
Tom’s friends wind up not only paying for the privilege of painting the fence, but deriving pleasure
from the task. In Twain’s words, Tom “had discovered a great law of human action, without knowing it—namely, that in order to make a man or a boy covet a thing, it is only necessary to make the
thing difficult to attain.”
Tom’s “law” challenges the intuition that whether a familiar activity or experience is pleasant or
unpleasant is a self-evident matter—at least to the person participating in that activity. If true,
Tom’s law would pose a fundamental challenge to economics. In a world in which people don’t
reliably know what they like, it cannot be assumed that voluntary trades will improve well-being or
that markets will increase welfare.
In a set of previous experiments, the authors demonstrated that valuations of goods and experiences
have a large arbitrary component and that, despite this arbitrariness, after one valuation has been
made, people’s subsequent valuations are scaled appropriately relative to the first; that is, they
exhibit “coherent arbitrariness.” A stronger test showed that individuals did not seem to have a preexisting personal dollar value for ordinary products and experiences.
Taking these findings as a starting point, the present paper asks a more basic question: Do people
even have a pre-existing sense of whether an experience is good or bad? Tom’s “law” suggests that
they do not—that the exact same experience can be desired or avoided, depending on context
and presentation.
Research Approach
Three experiments were conducted to test whether there exist experiences that individuals can
perceive as either positive or as negative. The subjects in each case were a class of undergraduate
college students. The experiments involved asking the students whether they would be willing to pay to experience an event (asked of one subgroup, comprising half the students) or would be willing to experience it in exchange for compensation (asked of the other subgroup). The first experiment was
designed to test whether an initial, hypothetical question would affect whether subjects viewed the
prospective experience positively or negatively. The second experiment was designed to test
whether subjects would assign successively greater valuations to successively greater durations of the
same prospective experience, whether they were initially induced to perceive the experience as
positive (one subgroup) or negative (the other subgroup). The third experiment provided a brief
taste of the experience at the outset in order to address the concern that, in the earlier experiments,
subjects might have inferred the quality of the experience from the initial, anchoring question.
Key Findings
• Individuals can be exogenously induced to classify some experiences as either positive or negative, depending on whether the preceding question asks them if they would pay, or would need to be paid, for the experience in question.


• After one arbitrary response is given (as either positive or negative), other responses follow in a
seemingly coherent fashion, showing that people can respond sensibly to changes in conditions
even when they do so from arbitrary baseline levels.
• “Tom’s law” holds even when the random assignment of individuals to either the “pay” or “be paid”
conditions is made transparent.
Implications
There are two main reasons why it matters whether economic decisions are determined by fundamental values. First, coherent arbitrariness violates the basic economic assumptions about how the
“general equilibrium” of an economy comes into existence. The assumption that exogenous
consumer preferences interact with “technologies” and initial endowments to produce equilibrium
states of the economy—prices and production levels—falls apart if preferences are themselves
influenced by the very equilibrium states that they are presumed to create. Indeed, in the domain
of economic decision making, the most salient and potentially powerful anchors may well be the
public parameters of the economy itself—the relative prices and scarcities of different
commodities. If these parameters function as public anchors, then consumer tastes no longer exist
independently of prices but are endogenous to the economy. In that case, the equilibrium price
and production levels of the economy are no longer uniquely determined by its physical and
human resources and characteristics. Rather, a certain price level may prevail because of collective
anchoring, triggered by historical accidents or manipulations.
Second, economics as practiced today is a prescriptive as well as a descriptive social science.
Economists derive the “welfare implications” of alternative policies, where welfare is defined in terms
of the degree to which a policy leads to the satisfaction of individual preferences. Although economists have identified many situations in which free market exchange may not increase
welfare, such market failures usually arise from interactions between people with asymmetric
information or from situations in which people do not internalize the costs they impose on each
other. In contrast, the suboptimalities that arise from coherent arbitrariness begin at the level of the
individual. If preferences have a large arbitrary component, then even strictly personal consumption
choices by fully informed individuals need not maximize welfare.
Moreover, these individual-level effects can be exacerbated by social and market interaction. The
literature on information cascades already shows that when people are uncertain about the quality of
consumption goods, initial choices can have big effects on market outcomes. Thus, for example, if a
small number of early diners arbitrarily choose new restaurant A over new restaurant B, A can end
up packed and B empty. The scope for such effects is enlarged to the degree that people are uncertain about their own preferences. The authors’ research suggests that the degree of uncertainty may
be very substantial, even when individuals have relevant experience with the objects of choice.
w-05-11

Large Stakes and Big Mistakes
by Dan Ariely, Uri Gneezy, George Loewenstein, and Nina Mazar
complete text: http://www.bos.frb.org/economic/wp/wp2005/wp0511.pdf
email: ariely@mit.edu

Motivation for the Research
Most upper-management and sales force personnel, as well as workers in many other jobs, are paid based
on performance, which is widely perceived as motivating effort and enhancing productivity relative to non-contingent pay schemes. However, psychological research suggests that excessive rewards can in
some cases produce supra-optimal motivation, resulting in a decline in performance.
The expectation that people will improve their performance when given high performance-contingent incentives rests on two subsidiary assumptions: (1) that increasing performance-contingent
incentives will increase motivation and effort, and (2) that this increase in motivation and effort
will result in improved performance.
Although there appear to be reasons to question the generality of the first assumption regarding
the positive relationship between effort and pay, in this paper the authors focus on the second
assumption and address the question of whether increased effort necessarily leads to improved
performance.
Research Approach
To test whether very high monetary rewards can decrease performance, the authors conducted a set of
experiments at MIT, at the University of Chicago, and in rural India. Subjects worked on various tasks
and received performance-contingent payments that varied in amount from small to large relative to
their typical levels of pay. By providing subjects with different levels of incentives, including some that
were very high relative to their normal income, the authors examined whether, across different tasks,
an increase in contingent pay leads to an improvement or decline in performance.
Key Findings
• Relatively high monetary incentives can have perverse effects on performance.
• Raising incentives can decrease performance: In eight of the nine tasks examined across the three experiments, higher incentives led to worse performance.
• Tasks that involve only effort are likely to benefit from increased incentives, while for tasks that
include a cognitive component, there seems to be a level of incentive beyond which further increases
can have detrimental effects on performance.
Implications
Many existing institutions provide very large incentives for exactly the types of tasks used here—
those that require creativity, problem solving, and concentration. These results challenge the
assumption that increases in motivation necessarily lead to improvements in performance.
The prevalence of very high incentives contingent on performance in many economic settings raises
questions about whether administrators base their decisions on empirically derived knowledge of the
impact of incentives or whether they are simply assuming that incentives enhance performance.
The results suggest that the challenge for administrators is to find ways to set incentives at levels at which they are motivating, while avoiding higher levels at which the added stress they create may result in impaired performance. Examples of ways to use incentives that may be more
likely to achieve the desired result include giving bonuses more frequently or awarding incentive
compensation based on a running average over a few years.
The fact that some of the tasks revealed nonmonotonic relationships between effort and
performance further cautions against generalizing results obtained with one level of incentives to
levels of incentives that are radically different. For many tasks, introducing incentives where previously there were none or raising small incentives on the margin is likely to have a positive impact
on performance. The experiments suggest, however, that one cannot assume that introducing or raising incentives always improves performance. It now appears that beyond some threshold level,
raising incentives may increase motivation to supra-optimal levels and result in perverse effects on
performance. Given that incentives are generally costly for those providing them, raising contingent incentives beyond a certain point may be a losing proposition.
w-05-12

New Approaches to Ranking Economics Journals
by Yolanda K. Kodrzycki and Pingkang David Yu
complete text: http://www.bos.frb.org/economic/wp/wp2005/wp0512.pdf
email: yolanda.kodrzycki@bos.frb.org and pingkang_yu@yahoo.com

Motivation for the Research
For at least the past two decades, economists have devoted serious effort to ranking economics
journals based on their intellectual influence. Despite various innovations, studies have continued to
assess economics journals according to how frequently they cite one another. While this approach
may be the appropriate methodology for some purposes, it is not suitable for analyzing the
broader influence of economics journals, nor does it produce rankings that address the varying
needs of different researchers within economics.
Economists may be interested in knowing whether the journals they hold in highest esteem are the
same as or different from the ones that other social scientists use in their evaluation of economic
research. In addition, this study is intended to guide publication decisions and evaluations of journals. For example, scholars may seek a more systematic understanding of the channels through
which economic research is disseminated to other fields.
Research Approach
The current study extends the literature on journal rankings by developing a flexible, citations-adjusted ranking technique that allows a specified set of journals to be evaluated using a wide range
of alternative criteria. Thus, the set of evaluated journals is not constrained to be identical to the set
of evaluating journals. While the methodology is general, specific applications developed in the
study rank economics journals according to their influence on the social science literature as well as
on policy, as measured by citations in policy-oriented journals.
Rather than relying on the Journal Citation Reports definition of economics, the authors inspect the content of a wide range of journals in order to identify those whose articles make extensive use of
concepts and methodologies central to economics. The authors then rank these journals according to the
number of times their articles are cited by, respectively, other economics journals, other social science
journals, and economics-oriented policy journals. Citations are weighted according to the influence of
the citing journal, and the citing journal’s influence is computed by applying an iterative process. The
analysis is based on citations in journal articles that appeared in 2003 and that pertain to articles published
from 1996 onward. An economics-oriented policy journal is defined as one that either focuses directly
on policy issues, presents clear recommendations for policy, and is written so as to be easily accessible
to policymakers, or contains a substantial proportion (more than one-third) of articles that have policy
implications and are written in a style accessible to non-specialists. The authors follow the existing
literature in excluding citations outside of scholarly journals.
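The iterative weighting of citations by the citing journal's influence is in the spirit of the sketch below, which uses a hypothetical four-journal citation matrix; the authors' actual data, normalizations, and procedure differ (see the paper).

```python
import numpy as np

# Hypothetical citation matrix: C[i, j] is the number of citations from
# citing journal i to cited journal j (four placeholder journals).
C = np.array([[0.0, 10.0, 3.0, 1.0],
              [8.0,  0.0, 5.0, 2.0],
              [2.0,  4.0, 0.0, 6.0],
              [1.0,  2.0, 7.0, 0.0]])

# Iteratively reweight: each journal's influence is the sum of citations it
# receives, with each citation weighted by the current per-citation influence
# of the journal making it.  Iterating to a fixed point yields
# impact-adjusted influence weights.
weights = np.ones(C.shape[0]) / C.shape[0]
for _ in range(200):
    received = C.T @ (weights / C.sum(axis=1))
    weights = received / received.sum()

print("impact-adjusted influence weights:", np.round(weights, 3))
```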
The authors draw a critical distinction between the influence of a journal and the influence of a
journal article and adjust rankings of journals according to their influence per article. The impact-per-article ranking is intended to filter out the size effect of a journal in a meaningful way.


Key Findings
• The list of journals with very high influence within the economics discipline generally agrees with
the apparent consensus of the economics profession, as well as with previous studies.
• In the within-economics approach, per-article rankings and all-articles journal rankings are
strongly correlated. The most noteworthy exceptions are journals that publish only a small number
of articles but manage to achieve relatively high influence for the journal as a whole.
• The list of top economics journals changes noticeably when one examines citations in the social
science and policy literatures, and when one measures citations—either within or outside
economics—on a per-article basis rather than in total.
• The changes in rankings are due to the relatively broad interest in applied microeconomics and
economic development, to differences in the relative importance that different literatures assign to
theoretical and empirical contributions, and to the lack of a systematic effect of journal size on
average influence per article.
• Although, in general, the overall-impact rankings give greater prominence to journals with comparatively broad accessibility, there are exceptions: Two econometrics journals highly influential in
the economics discipline remain highly influential in their overall impact on the social sciences.
• The study also highlights mutual links between some economics journals and journals in the
environmental studies and planning and development literatures that have received little attention
from other researchers.
Implications
The research confirms other researchers’ conclusions that economics is more self-contained than
almost any other social science discipline, while finding, nevertheless, that economics draws knowledge from a range of other disciplines.
The relatively high influence of two important econometrics journals on the social sciences
suggests that econometrics, as a tool, has been widely applied across the whole spectrum of social
sciences, and not just in economics.
This paper focuses on characteristics of articles and journals, and on the intensity of citations across
journals. Much more extensive research would be needed to identify the types of contributions from
the economics literature that are used most in other fields—contributions to methodology, theory, or
empirical questions or results. This would require categorizing and identifying the nature of
specific citations, not just tallying them.
With the increasing importance of the Internet as a communications channel within the intellectual community, studies appear to be cited more and more often in electronically available
working-paper form before being published, and several journals have “gone electronic” without
abandoning the refereeing process that characterizes many existing academic publications.
Application of the impact-adjusted citations methodology to these alternative outlets would
require their inclusion in the data as both citing and cited publications. The criteria for inclusion
in the JCR database do not impose any obvious barriers to the inclusion of electronic journals.
Users of ranking studies should hope that the entry of electronic journals with relatively short
refereeing and publication lags will serve to produce faster dissemination of economic research
in general. This would reduce the proportion of studies that are cited as working papers, which
generally lack the quality controls imposed by journals.


The authors conclude that, in the meantime, based on the paper’s findings regarding total versus
per-article citations, those who point to the growing influence of working paper series should
consider the impact of references to these series on a per-working-paper basis, not just in total.
That is, instead of counting only the number of references to various working paper series, the
authors advocate counting the number of references to an average paper in such a series.
w-05-13

Changes in the Federal Reserve’s Inflation Target:
Causes and Consequences
by Peter N. Ireland
complete text: http://www.bos.frb.org/economic/wp/wp2005/wp0513.pdf
email: irelandp@bc.edu

Motivation for the Research
“Inflation is always and everywhere a monetary phenomenon.” This famous statement by Milton
Friedman articulates a principle underlying an interest-rate rule for monetary policy of a type
proposed by John Taylor: Transitory movements in the measured rate of inflation can be driven by
shocks of various kinds, but large and persistent movements in inflation cannot occur without the
help of monetary policy.
Under the simplest such “Taylor rule,” the central bank adjusts the short-term nominal interest rate
in response to deviations of output and inflation from their target or steady-state levels according to
a simple formula. In adopting such a rule, the central bank accepts responsibility for choosing the
inflation target and for choosing a policy response coefficient that is sufficiently large to stabilize the
actual inflation rate around its target. In the short run, movements in measured inflation may occur
for many reasons, but in the long run, inflation remains tied down by monetary policy.
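In its simplest textbook form (standard notation, not necessarily the exact specification used in the paper), such a rule sets the nominal interest rate as

\[ i_t = r^{*} + \pi^{*} + \phi_{\pi}(\pi_t - \pi^{*}) + \phi_{y}(y_t - y^{*}), \]

where \pi^{*} is the inflation target, r^{*} is the steady-state real interest rate, and y_t - y^{*} is the output gap; a response coefficient \phi_{\pi} greater than one is what keeps actual inflation anchored to the target in the long run.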
Nothing, however, dictates that the central bank’s inflation target must remain constant over time.
Even in the relatively stable postwar U.S. economy, inflation has exhibited large and persistent
swings, trending upward throughout the 1960s and 1970s before reversing course and falling
during the 1980s and 1990s. Friedman’s “always and everywhere” dictum strongly suggests that
movements of this size and persistence could not have taken place without ongoing shifts in the
Federal Reserve’s inflation target.
The Federal Reserve has never explicitly revealed the setting for its inflation target. Hence, a
statistical or econometric model must be used to glean information about the Federal Reserve’s
inflation target from data on observable variables—that is, to disentangle those movements that
reflect shifts in the inflation target from those that are attributable to other types of shocks.
Research Approach
This paper develops an econometric model, drawing on contemporary macroeconomic theory and
estimated using quarterly U.S. data running from 1959:1 through 2004:2, to shed light on the
patterns, causes, and consequences of changes in the Federal Reserve’s implicit inflation target. The
model includes a generalized Taylor rule that allows the Federal Reserve’s inflation target to
respond systematically to shocks hitting the economy from the supply side. It incorporates the idea
that the upward secular trend in inflation for the period before 1980 is attributable to a systematic
tendency for Federal Reserve policy to translate short-run price pressures set off by adverse supply
shocks into more persistent changes in the inflation rate itself.


The model offers a tight description of both Federal Reserve policy and the optimizing
behavior of the households and firms that populate the American economy. Hence, estimates of the
structural parameters of this simultaneous-equation model not only provide a detailed interpretation of historical movements in output, inflation, and interest rates as seen in the U.S. data, but also
allow for an equally detailed consideration of counterfactual scenarios such as: What would the
behavior of these variables have been if, instead, the Federal Reserve had maintained a constant
inflation target throughout the postwar period?
Two sets of estimates are reported: one obtained from an unconstrained “endogenous target”
version of the model in which all the parameters are estimated freely, the other from a constrained
“exogenous target” version of the model in which the inflation target response coefficients are fixed
at zero while the other parameters are estimated freely.
Key Findings
• Technology shocks represent the dominant source of movements in output, although preference
and monetary policy shocks do play a supporting role in driving short-run output fluctuations.
• Preference shocks become more important in accounting for movements in the nominal interest rate.
• Both inflation and output growth enter significantly into the Taylor rule for the nominal interest
rate, but the policy response to inflation appears considerably more vigorous than the associated
response to output growth.
• The estimates from the model suggest that the Federal Reserve’s unobserved inflation target rose
from 1-1/4 percent in 1959 to over 8 percent in the mid-to-late 1970s before falling back below
2-1/2 percent in 2004.
• Both model variants attribute low-frequency movements in inflation to changes in the inflation target;
the constrained model interprets these movements as purely exogenous, whereas the unconstrained
model views them as reflecting the Fed’s deliberate policy response to cost-push shocks.
• Under the counterfactual scenario, the output path looks similar to its historical behavior, whereas inflation, of course, becomes much more stable.
• The estimates also suggest that absent changes in the Fed’s unobserved inflation target, U.S.
inflation would never have exceeded 4 or 4-1/2 percent.
Implications
By attributing the bulk of inflation’s rise and fall to Federal Reserve policy, the results confirm that
to a large extent, postwar U.S. inflation is indeed a “monetary phenomenon.”
Estimates from the best-fitting, “endogenous target” version of the model provide some
support for stories that attribute the rise in U.S. inflation during the 1960s and 1970s to a systematic
tendency for Federal Reserve policy to translate short-run price pressures set off by adverse supply-side shocks into more persistent movements in the inflation rate itself. Symmetrically, those same
estimates confirm that, since 1980, the Fed has acted “opportunistically” to bring inflation back
down in the aftermath of more favorable supply-side disturbances. However, considerable
uncertainty remains about the true source of movements in the Federal Reserve’s inflation target.
The results also provide some support for an interpretation of the data in which a time-consistency problem accounts for the Fed’s unwillingness to prevent inflation from rising in the face of adverse supply-side shocks and, later, for its ability to bring inflation back down following more
favorable supply-side disturbances.
Alternatively, to the extent that the adverse supply shocks of the 1970s can be blamed for
inaccuracies in official estimates of the output gap, the results are consistent with the argument that
mismeasurement of the output gap led Fed officials to mistakenly adopt an overly accommodative
monetary policy throughout that decade, fueling the coincident rise in inflation.
Finally, results from the exogenous target model might be reinterpreted in line with the hypothesis
that the Fed actively pushed inflation higher during the 1960s and 1970s in a futile effort to exploit
a misperceived Phillips curve tradeoff.
Further extensions and refinements of the model developed in this paper are needed to discriminate
more sharply among these competing views of the data, to understand more fully the policy
mistakes of the past, and to guard more reliably against similar mistakes in the future.

[Figure: Actual Inflation and the Federal Reserve’s Target, as Implied by the Unconstrained/Endogenous Target Model. The chart plots actual inflation and the implied Federal Reserve target, in percent (0 to 14), quarterly from 1959 through 2004.]

w-05-14

Real Wage Rigidities and the New Keynesian Model
by Olivier Blanchard and Jordi Galí
complete text: http://www.bos.frb.org/economic/wp/wp2005/wp0514.pdf
email: blanchard@mit.edu and jgali@mit.edu

Motivation for the Research
Most central banks perceive a tradeoff between stabilizing inflation and stabilizing the gap between
actual output and desired output. However, the standard new Keynesian framework implies no
such tradeoff. In the new Keynesian framework, stabilizing inflation is equivalent to
stabilizing the welfare-relevant output gap. This property of the new Keynesian framework, which
the authors call the divine coincidence, contrasts with a widespread consensus on the undesirability of policies that seek to stabilize inflation fully at all times and at any cost in terms of output. In this
paper, the authors propose a modification of the New Keynesian model in which the divine coincidence no longer holds.
Research Approach
The authors lay out a baseline new Keynesian model, with staggered price setting and no labor
market distortions. They then introduce real wage rigidities and show how their presence generates
a meaningful tradeoff between stabilization of inflation and the welfare-relevant output gap. They
consider the implications of alternative stabilization policies and the costs of disinflation. Finally,
they derive empirical relationships between inflation, unemployment, and observable supply shocks
implied by their framework and provide evidence on its ability to fit the data.
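Schematically (a textbook rendering of the baseline model rather than the paper's exact equations), the staggered-price-setting block delivers a purely forward-looking Phillips curve,

\[ \pi_t = \beta\, E_t \pi_{t+1} + \kappa\, x_t, \]

where x_t is the welfare-relevant output gap. With no independent cost-push term, setting \pi_t = 0 in every period also sets x_t = 0, which is the divine coincidence. Introducing real wage rigidities effectively adds an endogenous, shock-driven cost-push term to the right-hand side, so fully stabilizing inflation no longer stabilizes the welfare-relevant gap.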
Key Findings
• The divine coincidence is tightly linked to a specific property of the new Keynesian model, namely,
the fact that the gap between the natural level of output and the efficient level of output is constant
and invariant to shocks.
• The divine coincidence depends on the absence of nontrivial real imperfections in the standard new
Keynesian model.
• When the baseline new Keynesian model is extended to allow for real wage rigidities, the divine
coincidence disappears, and central banks indeed face a tradeoff between stabilizing inflation and
stabilizing the welfare-relevant output gap.
• The extended model provides a natural interpretation for the dynamic inflation-unemployment
relationship found in the data.
Implications
While the paper focuses on the implications of real wage rigidities, the authors see their results as
an example of a more general proposition. The optimal design of macroeconomic policy depends
very much on the interaction between real imperfections and shocks. In the standard new
Keynesian model, these interactions are limited. In particular, the size of real distortions is either
constant or varies over time in response to exogenous shocks to the distorting variables themselves
(for example, tax rates). This has strong implications for policy, one of them being the lack of a
tradeoff in response to most shocks, including conventional supply shocks. In reality, distortions are
likely to interact with shocks, leading to different policy prescriptions.
In the model developed in this paper, the interaction between real imperfections and shocks works
through endogenous variations in wage markups resulting from the sluggish adjustment of real
wages. However, a similar interaction could work through other mechanisms—for example,
through the endogenous response of desired price markups to shocks. Understanding these interactions should be high on macroeconomists’ research agendas.


w-05-15

Testing Economic Hypotheses with State-Level Data:
A Comment on Donohue and Levitt (2001)
by Christopher L. Foote and Christopher F. Goetz
complete text: http://www.bos.frb.org/economic/wp/wp2005/wp0515.pdf
email: chris.foote@bos.frb.org and christopher.goetz@bos.frb.org

Motivation for the Research
State-level data are often used in the empirical research of both macroeconomists and microeconomists. Using data that follow states over time allows economists to hold constant a host of
potentially confounding factors that might contaminate an assignment of cause and effect.
A good example is a paper by Donohue and Levitt that purports to show that hypothetical individuals resulting from aborted fetuses, had they been born and developed into youths, would
have been more likely to commit crimes than youths resulting from fetuses carried to term.
Donohue and Levitt correctly point out that this is an empirical question, quite apart from the
issues surrounding the morality of abortion.
Donohue and Levitt suggest two possible channels for a causal link between abortion and crime.
First, holding current pregnancy rates constant, abortion lowers the total number of young people
15 to 25 years later, and since young people are more likely to commit crimes than their elders, the
criminal propensity of the future population declines. The second, more controversial channel
suggested by Donohue and Levitt is that abortion can lower crime by preventing the births of
persons most likely to become criminals. The rationale for this channel is that women who choose
abortion tend to be women who would find it difficult to provide a nurturing environment for
raising a child and that therefore the aborted fetuses, had they not been aborted, would have grown
up in disadvantageous circumstances and thus developed into youths with a higher propensity to
commit crimes than others in their age cohort.
Research Approach
Employing Donohue and Levitt’s data and methodology, the authors revisit that paper, attempt to
reproduce the results, and then re-run the regression model, including state-level population data
and controls for state-year effects omitted in the original paper.
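A minimal sketch of the kind of specification at issue appears below; the file name, column names, and exact set of controls are hypothetical placeholders rather than the paper's actual data or code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-year-age panel of arrest data; 'ln_arrests_pc' is log
# arrests per capita for a cohort and 'abortion' is that cohort's effective
# abortion exposure.  The file and column names are placeholders.
df = pd.read_csv("arrests_panel.csv")

# Fixed-effects regression including the state-year interaction controls at
# issue in the comment; using a per-capita outcome stands in for adding
# population data to the model.
model = smf.ols(
    "ln_arrests_pc ~ abortion + C(state):C(year) + C(state):C(age) + C(age):C(year)",
    data=df,
)
print(model.fit().params["abortion"])
```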
Key Findings
• Although Donohue and Levitt’s paper is an excellent illustration of the power of state-level data
to test controversial hypotheses, the actual implementation of Donohue and Levitt’s statistical test
in their paper differed from what was described: Specifically, controls for state-year effects were left
out of their regression model. Remedying this omission reduces the coefficient on the abortion
term by about one-half.
• Performing the analysis on a per-capita basis reduces the abortion-term coefficient by about half
from its already reduced level.
• When Donohue and Levitt’s key test is run as described and augmented with state-level population
data, evidence for higher per-capita criminal propensities among the youths who would have developed, had they not been aborted as fetuses, vanishes.


Implications
Abortion may lower crime by reducing the overall number of young persons in the population, yet because only this cohort-size channel appears to operate in the abortion-crime link, the overall impact of abortion on crime would be smaller than Donohue and Levitt claim.
w-05-16

Heterogeneous Beliefs and Inflation Dynamics:
A General Equilibrium Approach
by Fabià Gumbau-Brisa
complete text: http://www.bos.frb.org/economic/wp/wp2005/wp0516.pdf
email: fabia@bos.frb.org

Motivation for the Research
This paper presents a channel through which heterogeneous beliefs shape how monetary
policy impacts the macroeconomy. In particular, it determines the importance of endogenous
nominal rigidities in an imperfect common knowledge environment with learning. This is shown
to have implications for the dynamics of inflation, and in particular, for inflation inertia.
The Calvo pricing mechanism assumed in New Keynesian models implies that inflation must lead
the cycle. In particular, it generates a forward-looking Phillips Curve in which inflation responds
to current and forecastable output gaps. Contrary to this prediction, evidence from
estimated vector autoregression models indicates that inflation has a maximal response to a monetary policy shock several quarters after output does. Moreover, empirical evidence seems to point
to the need to motivate a backward-looking component in the Phillips Curve relationship in order
to generate sufficient inflation inertia.
The complications posed by these empirical regularities for standard New Keynesian models have
spawned a vast amount of empirical and theoretical research. Some researchers have attempted
to sidestep these problems by amending and expanding the New Keynesian framework. Others
have focused on alternative modeling strategies for nominal rigidities. This paper adopts the latter
strategy, avoiding the Calvo pricing assumption.
Research Approach
The author develops a dynamic stochastic general equilibrium model with rational expectations
and imperfect common knowledge that is able to generate substantial inflation inertia even though
nominal contracts last only one period and can be adjusted without cost. Price rigidities are endogenous, in contrast with the main building block in New Keynesian models.
Key Findings
• The presence of a persistent policy shock provides incentives to accumulate any relevant information over time; that is, it leads to endogenous learning.
• A calibration exercise indicates that the percentage of relevant information processed every
period by the private sector is in the neighborhood of 25 percent.
• Inflation inertia results from the interaction of two phenomena. First, price-setters’ limited capacity to process information leads to heterogeneous beliefs about the persistent shock. They need
to learn from others’ decisions, and this introduces sluggish price adjustment dynamics. Second,
monetary policy has some control over how this heterogeneity of beliefs impacts the equilibrium
level of inflation inertia.


• Monetary policy and the extent of inflation inertia are linked through the effects of policy on the
degree of strategic complementarity in pricing. In a setting with imperfect common knowledge and
monopolistic competition, price-setters need to form conjectures about the aggregate price level. In
this way, price-setting depends on beliefs about what others perceive; these beliefs, in turn, depend
on other price-setters’ beliefs, and so on. Monetary policy can reduce the equilibrium impact of this
dependence on higher-order beliefs through a rule that stabilizes
inflation around a commonly known target level.
• After a monetary policy shock, the relationship between inflation and output takes the form of a
Hybrid Phillips Curve, including both past and future inflation. This relationship can also be
expressed as a Backward-Looking Expectations-Augmented Phillips Curve.
• In both formulations, some of the coefficients of the Phillips Curve are not constant over time;
they may change as price-setters learn from their environment.
• The dynamics of the rational expectations monetary model discussed in this paper exhibit
realistic features that are difficult to reproduce within the standard New Keynesian framework:
after a monetary policy shock, the inflation response peaks after the response of output, and
inflation is inertial and can be characterized by a Hybrid Phillips Curve (a generic form of which is sketched below).
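
For readers who want the functional forms in mind, the following is a minimal, generic sketch (in LaTeX) of a hybrid Phillips Curve and of gradual belief updating under partial information. It is illustration only, not the author’s model: the paper’s coefficients are time-varying and its exact specification differs.

% Hybrid Phillips Curve: inflation depends on lagged inflation, expected
% future inflation, and the output gap x_t.
\[
\pi_t = \gamma_b \, \pi_{t-1} + \gamma_f \, E_t \pi_{t+1} + \kappa \, x_t + \varepsilon_t
\]
% Gradual information processing: beliefs about the persistent shock s_t are
% updated with gain g each period; g of roughly 0.25 corresponds to the
% calibration result that about 25 percent of relevant information is
% processed per period.
\[
\hat{s}_t = \hat{s}_{t-1} + g \left( \mathrm{signal}_t - \hat{s}_{t-1} \right), \qquad g \approx 0.25
\]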
Implications
Taylor rules are not only an answer to the central bank’s time-inconsistency problem, but also a
mechanism that influences the impact of uncertainty on the equilibrium. The model shows that a
Taylor rule with a stronger emphasis on inflation targeting reduces the extent of nominal rigidities
and inflation inertia. This type of policy brings the behavior of the economy closer to that of the
perfect-information/flexible-price equilibrium.
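
For reference, a Taylor-type rule of the standard general form below is what is meant here; the coefficients shown are conventional illustrative values rather than those used in the paper.

\[
i_t = r^{*} + \pi^{*} + \phi_{\pi}\left(\pi_t - \pi^{*}\right) + \phi_{x}\, x_t, \qquad \phi_{\pi} > 1,
\]

where $i_t$ is the policy rate, $\pi^{*}$ the commonly known inflation target, and $x_t$ the output gap. A larger $\phi_{\pi}$ corresponds to the “stronger emphasis on inflation targeting” discussed above.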
When price-setters know that policy is more aggressive in maintaining inflation around the known
target, the dependence on others’ beliefs is reduced. Price setting becomes less interdependent
because the behavior of the aggregate price level is easier to predict. Therefore, heterogeneity has a
smaller impact on the equilibrium dynamics, reducing the extent of inertia, the degree of nominal
rigidities, and thereby the real effects of monetary policy.
The desirability of a strong focus on inflation targeting needs to be reassessed by introducing other
shocks into the model, in particular shocks that cause output and inflation to move in
opposite directions (for example, an oil price shock). In such a case, aggressive anti-inflationary policy would lead to larger output gaps, with potentially detrimental welfare effects. A more general
shock structure would then allow for analysis of the optimal monetary policy.
w-05-17

Contracts with Social Multipliers
by Mary A. Burke and Kislaya Prasad
complete text: http://www.bos.frb.org/economic/wp/wp2005/wp0517.pdf
email: mary.burke@bos.frb.org and kprasad@rhsmith.umd.edu

Motivation for the Research
This paper addresses a conspicuous gap in the economic theory of contracts and organizations:
namely, the social aspects of work within partnerships, teams, and firms. While the non-economic
literature on organizational behavior stresses the importance of social-psychological factors in the


workplace, such as peer pressure and corporate culture, the economic approach, as advanced by
agency theory, assumes that actors within organizations are narrowly self-interested and interact
with each other only indirectly, by way of contracts and production technologies.
Formal economic models of social interactions, involving strategic complementarity among the
actions of individuals in a social group, are by now well-established. Extending this theoretical
framework, the authors examine the implications of conformity pressure among co-workers for
optimal contract design and organizational productivity, thereby providing a bridge between
economic and non-economic approaches to organizations.
Research Approach
The authors develop a model of contracting in which individual effort choices are subject to social
pressure to conform to the average effort level of others in the same risk-sharing group. As in related models of social interactions, a change in exogenous variables or contract terms generates a social
multiplier. In this environment, small differences in fundamentals such as skill or effort cost can
lead to large differences in group productivity. The authors characterize the optimal contract for
this environment and describe the properties of equilibria—properties that agree with stylized facts
on effort compression in revenue-sharing settings.
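
To fix ideas, here is a minimal linear sketch (in LaTeX) of how conformity pressure generates a social multiplier. It is not the authors’ model, which is considerably richer (risk sharing, enforceability constraints, heterogeneous effort costs), but it conveys the amplification logic.

\[
e_i = a_i + \lambda \, \bar{e}, \qquad 0 < \lambda < 1,
\]

where $e_i$ is individual $i$’s effort, $a_i$ reflects fundamentals (skill, effort cost, contract terms), $\bar{e}$ is average group effort, and $\lambda$ measures conformity pressure. Averaging across the group and solving gives $\bar{e} = \bar{a}/(1-\lambda)$, so a shock that moves the average fundamental $\bar{a}$ by one unit moves average effort by $1/(1-\lambda)$ units. A multiplier of $1/(1-\lambda) \approx 1.6$, consistent with the 60 percent amplification reported in the Key Findings below, would correspond to $\lambda \approx 0.38$.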
The theoretical analysis of the paper is complemented with an empirical investigation, based on
medical partnership data. The authors test whether individual physician output, measured in office
visits per week, depends positively on the average work output of peers in the same practice group.
Given the multiple possibilities for estimation bias in this setting, such as endogenous
selection into partnerships and simultaneous choice of work effort, the authors adopt an instrumental
variables approach and include a number of controls for factors that could lead to spurious
correlation in physician output levels.
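
The instrumental variables strategy can be pictured with a short two-stage least squares sketch in Python. The snippet below is hypothetical: the data file, variable names, instruments, and controls are invented for illustration and are not those used in the paper.

import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical physician-level data: own weekly office visits, the average
# weekly visits of practice-group peers (endogenous), peer characteristics
# used as instruments, and a few controls. Column names are illustrative only.
df = pd.read_csv("physicians.csv")
df["const"] = 1.0

model = IV2SLS(
    dependent=df["own_visits"],
    exog=df[["const", "experience", "specialty_share"]],
    endog=df["peer_avg_visits"],
    instruments=df[["peer_avg_experience", "peer_group_size"]],
)
results = model.fit(cov_type="robust")  # heteroskedasticity-robust standard errors
print(results.summary)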
Key Findings
• For a given contract, a unique equilibrium profile of effort levels exists; furthermore, an optimal
contract—one maximizing the sum of utilities for the group, subject to enforceability constraints—
exists under quite general conditions.
• Given the social interaction effects, a small increase in a given individual’s cost of effort diminishes
his or her own effort as well as that of fellow group members, although the effect on the individual’s own effort is larger.
• Individuals with low cost of effort prefer to be in a group of similar individuals; consequently, it is
possible to separate individuals into a “high action” group and a “low action” group through an
appropriate choice of group-specific fees.
• Preference for conformity leads to the use of low-powered incentives in the high-cost groups,
while the low-cost groups use high-powered incentives.
• The empirical tests indicate the presence of a significant social multiplier on group output. The
estimated magnitude implies that an exogenous shock to fundamentals, such as the price of an office
visit, has roughly a 60 percent greater impact on group average output than it would have in the absence
of conformity pressure.
Implications
The model shows that social interactions among workers in a partnership significantly change the
character of the optimal contract. In addition, the model implies that small differences in underlying
characteristics or environmental conditions can lead to large differences in group productivity levels.


The authors also find that conformity pressure within groups strengthens the incentives for individuals to sort
into groups on the basis of preferences over work effort. Such sorting will lead to further
disparities in performance across groups, independently of social multiplier effects. The empirical
results support the hypothesis of conformity pressure, but as with other empirical work on social interactions, there may be residual sources of estimation bias, and results are interpreted cautiously.
The analytical model provides an explanation for a phenomenon that has been observed empirically:
the compression of individual effort levels that occurs when group profit-sharing is introduced in the presence of a social multiplier.
More broadly, the model shows that a given organization’s success may depend upon details of the
social interaction among the members of the organization. Replicating such success, therefore, is
difficult because such interactions cannot be reproduced at will. This lesson applies quite generally
at the societal level, despite this paper’s formal focus on organizations. In response to the puzzling
observation that societies in apparently similar situations, given similar economic prescriptions,
experience vastly different results, there has been a push toward understanding the role of norms and
social capital in development. The consensus emerging from this literature is that such divergent
paths can be understood only in light of differences in institutions. Institutions, in turn, can be
understood only in light of both market and nonmarket interactions.
w-05-18

Does Firm Value Move Too Much to be Justified
by Subsequent Changes in Cash Flow?
by Borja Larrain and Motohiro Yogo
complete text: http://www.bos.frb.org/economic/wp/wp2005/wp0518.pdf
email: borja.larrain@bos.frb.org and yogo@wharton.upenn.edu

Motivation for the Research
The purpose of this paper is to examine the valuation of corporate assets in relation to cash flow.
Movements in stock price cannot be explained by changes in expected future dividends. The wedge
between the log real value of a stock price index and the present value of future dividends discounted
at a constant rate represents variation in discount rates above and beyond the common variation
with expected dividend growth. In contrast, movements in the value of corporate assets (equity plus
liabilities) can be explained by changes in expected future cash flow. Asset value moves in lockstep
with the present value of future net payout (the total cash outflow from the corporate sector);
almost every movement in discount rates is matched by an offsetting movement in expected cash
flow growth.
Research Approach
The authors explain the role of equity and debt repurchase and issuance in the valuation of total
market assets and develop an econometric model to provide an analytical and empirical description of
net payout yield in the context of a firm’s intertemporal budget constraint. The estimation period is
1926–2004. The primary data source for the model is the Flow-of-Funds accounts for the period since
1946, extended back to 1926 by the authors using data from original sources.
Key Findings
• The ratio of net payout to assets, or net payout yield, mostly predicts net payout growth rather
than asset return, especially over long horizons.


• A variance decomposition of net payout yield shows that 12 percent of its variation is explained
by asset returns, while 88 percent is explained by net payout growth (a generic form of this decomposition is sketched after this list). The hypothesis that none of
the variation in net payout yield is explained by asset returns cannot be rejected.
• Variation in discount rates above and beyond the common variation with expected cash-flow
growth plays little or no role in the present-value relationship between cash flow and asset value.
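
For orientation, the following LaTeX sketch shows a standard log-linear present-value identity and the associated variance decomposition, in the spirit of Campbell and Shiller. The paper derives its version from the firm’s intertemporal budget constraint, so the exact expressions there differ; the 12 percent and 88 percent figures above are the paper’s estimates, used here only to label the two terms.

\[
npa_t \approx \mathrm{const} + E_t \sum_{j=1}^{\infty} \rho^{\,j-1} \left( r_{t+j} - \Delta np_{t+j} \right),
\]

where $npa_t$ is the log ratio of net payout to assets, $r_{t+j}$ the log asset return, $\Delta np_{t+j}$ log net payout growth, and $\rho$ a log-linearization constant slightly below one. Dividing the implied covariances by $\operatorname{Var}(npa_t)$ gives

\[
1 = \underbrace{\frac{\operatorname{Cov}\!\big(npa_t,\ \sum_{j} \rho^{\,j-1} r_{t+j}\big)}{\operatorname{Var}(npa_t)}}_{\text{asset returns}\ \approx\ 0.12}
\;+\;
\underbrace{\frac{\operatorname{Cov}\!\big(npa_t,\ -\sum_{j} \rho^{\,j-1} \Delta np_{t+j}\big)}{\operatorname{Var}(npa_t)}}_{\text{net payout growth}\ \approx\ 0.88}.
\]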
Implications
The key to understanding asset valuation is a comprehensive measure of cash flow. Dividends are
the appropriate measure of cash flow for an individual investor who owns one share of a value-weighted portfolio. The investor essentially follows a portfolio strategy in which dividends are
received and net repurchases of equity are reinvested. In contrast, net payout is the appropriate
measure of cash flow for a representative investor who owns the entire corporate sector. From a
macroeconomic perspective, net repurchase of equity and debt is a cash outflow from the corporate
sector that, by definition, cannot be reinvested.
A further reason to focus on net payout comes from the recent literature on corporate payout policy, which
has broadened the scope of payout beyond ordinary dividends. Because firms jointly determine all
components of net payout, rather than dividends in isolation, a comprehensive measure of cash flow
is necessary for understanding firm behavior.
Firms tend to use dividends to distribute the permanent component of earnings because
dividend policy requires financial commitment. Consequently, changes in dividends are slow and
mostly independent of discount rates. In contrast, firms tend to use repurchases to distribute
the transitory component of earnings because repurchase and issuance policy retains financial
discretion. Consequently, changes in repurchases and issuances are cyclical and share a common
component with discount rates.
This paper makes an indirect contribution to the corporate payout literature by documenting the
history of payout, issuance, and asset value of the U.S. nonfinancial corporate sector since 1926.
These data allow the authors to quantify, from a macroeconomic perspective, the relative importance of historical events such as the tightening of bond markets during the Great Depression, the
leveraged buyouts of the 1980s, and the surge of equity repurchase activity in the last twenty years.

Public Policy Briefs
b-05-2

Additional Slack in the Economy: The Poor Recovery in
Labor Force Participation During This Business Cycle
by Katharine Bradbury
complete text: http://www.bos.frb.org/economic/ppb/2005/ppb052.pdf
email: Katharine.Bradbury@bos.frb.org

Motivation for the Research
Based on materials presented in a briefing to the President and Academic Advisory Council of
the Federal Reserve Bank of Boston in March and April of 2005, this brief examines labor force
participation rates of various groups—older workers, women, and teens—in this recession and
recovery relative to patterns in earlier business cycles. While GDP growth since the 2001 recession
was reasonably robust through the spring of 2005, the labor market was much slower to recover.


Payroll jobs continued to decline through May 2003, and the unemployment rate continued to rise
through June 2003, a year and a half after the official trough of the recession. Even after job counts
began to rise and joblessness to subside, the share of the population that was employed did not
increase, and it had not improved measurably by the time this brief was written.
Research Approach
The analysis compares changes in participation rates in this business cycle for the men and women in
seven age groups with those in the five previous cycles, using monthly data from January 1948 through
February 2005. Monthly participation rates during the 12 months before the NBER business cycle
peak and four years after are expressed as percentage points above or below the participation rate in the
peak month and used to estimate the number of people who would be participating in the labor force
if each group’s participation rate were at its typical cyclical level.
Several alternative scenarios were simulated to explore the implications for labor market slack if
it were the case that the differences between this cycle and previous cyclical patterns reflected secular
changes involving selected groups—specifically teens, prime-age women, and older workers.
Because the labor market was unusually tight in the late 1990s, participation rates were relatively
high before this recession. Therefore, alternative calculations calibrated the current shortfalls by
group against rates from 1996, the latest pre-recession year when average annual participation was
lower than in 2000.

[Figure: Implications for Unemployment of Alternative Assumptions Regarding Return to “Normal” Participation Patterns. The chart plots the unemployment rate (percent) under five simulations, each measured relative to the November 2004 – February 2005 situation. Every bar stacks the 7.9 million currently unemployed with the hypothetical additional unemployed implied by a scenario: 1.6 million if all age groups revert to the historical average cyclical pattern (2001 baseline); 5.1 million if under-55 men and women revert to the historical pattern while those 55 and older hold steady (2001 baseline); 4.6 million if under-55s revert to the historical pattern (1996 baseline); 2.3 million if there is no trend for under-55 women, who act like men (2001 baseline); and 1.8 million if under-55 women act like men (1996 baseline).]

Key Findings
• Measured relative to the business cycle peak in March 2001, labor force participation rates almost
four years later had not recovered as much as usual, and the discrepancies are large. The depth of
the shortfall is most pronounced among teens and for women of all ages.
• Depending on the scenario, the labor force shortfall as of March 2005 ranged from 1.6
million to 5.1 million men and women. With 7.9 million people unemployed as of the writing of
this brief, the addition of these hypothetical participants would have raised the unemployment rate
by 1 to more than 3 percentage points (the arithmetic is illustrated in the sketch following this list).
• Among age-by-sex groups, the participation shortfall is especially pronounced among teens and
for prime-age women. Adult men’s rates are only modestly below average. Only for men and
women ages 55 and older has participation risen more than is usual four years after the business
cycle peak: 3.4 million men and women ages 55 and older would not be participating if this age
group’s rates had rebounded only as much as usual.
• Almost one-half of the overall rise in participation of individuals 55-and-older during the 2000-to-2004 period is the result of a decrease in the average age of this group as the leading edge of the
baby boom entered the group, according to a shift-share analysis of the changing age mix.
• Another non-cyclical contributor to the rise in participation of older workers is that older cohorts
of women are being replaced by younger ones whose participation rate over their lifetimes has
exceeded that of their predecessors.
• If participation by women below age 55 had risen to its typical degree by this time in the cycle,
the labor force would be larger by 4.1 million persons. However, under the extreme assumption
that the long uptrend in women’s participation is entirely over—that is, that women’s participation
will rebound only as much as men’s typically does—the prime-age women’s shortfall would be only
1.3 million.
• Use of the 1996 baseline implies that about 20 percent fewer under-55 women and about 20
percent more under-55 men would be “expected” in the labor force by this point than with the 2001 baseline, reflecting women’s lower participation rates and men’s higher rates in 1996 than in March 2001.
• On net, the participation shortfall for under-55-year-olds using the 1996 base would amount to almost
4.6 million individuals; this implies an increase of 3.0 percentage points in the unemployment rate.
• Combining the extreme assumption that the long uptrend in women’s participation is entirely
over with the use of 1996 participation rates, the total under-55 shortfall reduces to a still-substantial 1.8 million potential labor force participants, who could raise the unemployment rate by 1.3
percentage points.
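
As a rough illustration of the arithmetic behind these figures, the short Python sketch below recomputes the hypothetical unemployment rates. The 7.9 million unemployed comes from the brief; the total labor force level of roughly 148 million is an outside assumption used only to make the calculation concrete, so the resulting rates are approximate.

def hypothetical_unemployment_rate(unemployed_m, labor_force_m, additional_m):
    """Unemployment rate (percent) if `additional_m` million shortfall
    participants rejoined the labor force and were all counted as unemployed."""
    return 100.0 * (unemployed_m + additional_m) / (labor_force_m + additional_m)

UNEMPLOYED = 7.9      # millions, from the brief
LABOR_FORCE = 148.0   # millions, assumed for illustration (not from the brief)

baseline = hypothetical_unemployment_rate(UNEMPLOYED, LABOR_FORCE, 0.0)
for shortfall in (1.6, 1.8, 2.3, 4.6, 5.1):   # scenario shortfalls, in millions
    rate = hypothetical_unemployment_rate(UNEMPLOYED, LABOR_FORCE, shortfall)
    print(f"{shortfall:.1f}M shortfall: {rate:.1f}% vs. {baseline:.1f}% baseline "
          f"(+{rate - baseline:.1f} percentage points)")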
Implications
To the extent that the explanations for this sub-normal participation rebound are cyclical, additional
workers would be expected to join the labor force in the coming months as the recovery proceeds,
reducing potential pressures on wage costs and prices. That is, the substantial numbers of potential
workers who might still (re)enter the labor market can be seen as representing slack in the economy, slack that is not reflected in the unemployment rate.
Were it not for the 55-and-older men and women, the total participation shortfall would be three
times as large as it is. It seems unlikely that the 55-and-older men and women who are participating
to a much greater degree than typical will all withdraw from the labor force as the economy
picks up further. Indeed, it is difficult to tell a cyclical story about them: Their participation
rates began increasing well before the recession began, and the continuing increases are partly
attributable to baby boomers’ beginning to cross the age-55 threshold. What seems likely is that
the over-54s will remain in the labor force as the recovery proceeds. By the same token, prime-working-age individuals are unlikely to have left the labor force permanently.
The discrepancies between current and previous cyclical patterns are so large for women because
women’s participation rates have trended upward steeply since at least the 1960s; however, the trend
appears to have stalled in the mid-1990s. The critical question is the extent to which the atypical
declines in women’s participation in this cycle are permanent or transitory. Based on recent
pre-recession patterns, it seems likely that part of the below-average participation rebound for
women in this cycle reflects a secular downshift in women’s participation rather than a cyclical
response to the recession’s weak labor market.

