
CURRENT REAL BUSINESS CYCLE
THEORIES AND AGGREGATE LABOR
MARKET FLUCTUATIONS
Lawrence J. Christiano and Martin Eichenbaum
Working Paper Series
Macro Economic Issues
Research Department
Federal Reserve Bank of Chicago
June, 1990 (WP-90-9)

Current Real Business Cycle
Theories and Aggregate Labor Market Fluctuations

by

Lawrence J. Christiano
Federal Reserve Bank of Minneapolis
and
Martin Eichenbaum
Northwestern University, NBER and
Federal Reserve Bank of Chicago

January 1990

Abstract
In the 1930s, Dunlop and Tarshis observed that the correlation between hours worked and the return to
working is close to zero. This observation has become a litmus test by which macroeconomic models are
judged. Existing real business cycle models fail this test dramatically. Based on this result, we argue
that technology shocks cannot be the sole impulse driving post-war U.S. business cycles. We modify
prototypical real business cycle models by allowing government consumption shocks to influence labor
market dynamics in a way suggested by Aschauer (1985), Barro (1981,1987), and Kormendi (1983). This
modification can, in principle, bring the models into closer conformity with the data. Our results
indicate that when aggregate demand shocks arising from stochastic movements in government
consumption are incorporated into the analysis, and an empirically plausible degree of measurement error
is allowed for, the model’s empirical performance is substantially improved.

This paper is a substantially revised version of NBER Working Paper no. 2700, entitled "Is Theory
Really Ahead of Measurement? Current Real Business Cycle Theories and Aggregate Labor Market
Fluctuations". We are grateful to S. Rao Aiyagari, Finn Kydland, Edward C. Prescott and Mark
Watson for helpful conversations.




1. Introduction

This paper assesses the quantitative implications of existing real business cycle (RBC) models for the time series properties of average productivity and hours worked. We find that the single most salient shortcoming of existing RBC models lies in their predictions for the correlation between average productivity and hours worked. Existing RBC models predict that this correlation is well in excess of .9. The actual correlation which obtains in the aggregate data is roughly zero.1 These results lead us to incorporate aggregate demand shocks arising from stochastic movements in government consumption into the RBC framework.

In addition, we investigate the impact of two types of

measurement error on our analysis: (i) misalignment between standard measures of output
and hours worked and (ii) classical measurement error in hours worked. In combination,
these changes generate substantial improvements in the models’ empirical performance.
The ability to account for the observed correlation between the return to working
and hours worked is a traditional litmus test by which aggregate models are judged. For
example, Dunlop (1938) and Tarshis’ (1939) critique of the classical and Keynesian models
was based on the implications of those models for the correlation between real wages and
employment. Both models share the common assumption that real wages and hours lie on a
stable downward sloping marginal productivity of labor curve.2 Consequently, they predict,
counterfactually, a strong negative correlation between real wages and hours worked.
Modern versions of the Dunlop-Tarshis critique continue to play a central role in assessing
the empirical plausibility of different business cycle models. For example, in discussing the
Fischer (1977) sticky wage business cycle model, McCallum (1989, p. 191) states

As it happens, the main trouble with the Fischer model concerns its
real wage behavior. In particular, to the extent that the model itself
explains fluctuations in output and employment, these should be
inversely related to real wage movements: output should be high,





according to the model, when real wages are low. But in the actual
U.S. economy there is no strong empirical relation of that type.
In remarks that are particularly relevant for RBC models, Lucas (1981, p. 226) writes

Observed real wages are not constant over the cycle, but neither do
they exhibit consistently pro- or countercyclical tendencies. This
suggests that any attempt to assign systematic real wage movements
a central role in an explanation of business cycles is doomed to failure.
Existing RBC models fall prey to this (less well known) Lucas critique. In contrast to the classical and Keynesian models, which understate the correlation between hours worked and the return to working, existing RBC models grossly overstate that correlation. This is because, according to existing RBC models, the only impulses generating fluctuations in aggregate employment are stochastic shifts in the marginal product of labor. Loosely speaking, the time series on hours worked and the return to working are modeled as the intersection of a stochastic labor demand curve with a fixed labor supply curve. It is therefore not surprising that these theories predict a strong positive correlation between the return to working and hours of work.3
There are at least two strategies for modeling the observed weak correlation between measures of the return to working and hours worked. The first is to consider models in which the return to working is unaffected by shocks to agents' environments, regardless of whether they correspond to aggregate demand or aggregate supply shocks. Pursuing this strategy, Blanchard and Fischer (1989, p. 372) argue that the key assumption of Keynesian macro models, nominal wage and price stickiness, is motivated by the view that aggregate demand shocks affect employment while leaving real wages unaffected. The second response is to simply abandon one-shock models of the business cycle. The basic idea here is that aggregate fluctuations are generated by a variety of impulses. Under these circumstances the Dunlop-Tarshis observation imposes no restrictions per se on the response of real wages to any particular type of shock. But, given a particular structural model, it does impose restrictions on the relative frequency of different types of shocks.





This suggests that one strategy for reconciling existing RBC models with Dunlop-Tarshis type observations is to find measurable economic impulses that shift the labor supply function.4 With impulses impacting on both the labor supply and demand functions, there is no a priori reason for hours worked to display any sort of marked correlation with the return to working.
Candidates for such shocks include tax rate changes, shocks to the money supply, demographic changes in the labor force, and shocks to government spending. In this paper we focus on the latter type of shock, namely changes in government consumption. By ruling out any role for government consumption shocks in labor market dynamics, existing RBC models implicitly assume that public and private consumption have the same impact on the marginal utility of private spending. Aschauer (1985) and Barro (1981, 1987) argue that when $1 of additional public consumption drives the marginal utility of private consumption down by less than does $1 of additional private consumption, then shocks to government consumption in effect shift the labor supply curve outwards. Coupled with diminishing labor productivity, these types of impulses will, absent technology shocks, generate a negative correlation between hours worked and the return to working in RBC models.
In our empirical work we measure the return to working by the average productivity of labor rather than, for example, the real wage. We do this for two reasons. First, from an empirical point of view, our results are not very sensitive to whether the return to working is measured using real wages or average productivity. Neither displays a strong positive correlation with hours worked. For this reason, it seems appropriate to refer to the low correlation between the return to working and hours worked as the Dunlop-Tarshis observation, regardless of whether the return to working is measured by the real wage or average productivity. Second, from a theoretical point of view, it is well known that there are a variety of ways to support the quantity allocations emerging from RBC models. By using average productivity as our measure of the return to working we avoid imposing the





assumption that the market structure is one in which real wages are equated to the marginal product of labor on a period-by-period basis. In addition, existing parameterizations of RBC models imply that the marginal and average productivity of labor are proportional to each other, so that the two are interchangeable for the calculations we perform.
Our empirical results indicate that incorporating government into the analysis does
lead to some improvements in the model's performance. Interestingly, the impact of this perturbation is about as large as allowing for nonconvexities in labor supply of the type stressed by Hansen (1985) and Rogerson (1988). However, as long as we abstract from measurement error, this improvement is not sufficiently large so as to allow the model to account for the Dunlop-Tarshis observation. At the same time, taking measurement error into account substantially affects our results. Indeed, once measurement error and government are incorporated into the analysis, one cannot reject the hypothesis that a version of our model is consistent with the observed correlation between hours worked and average productivity, as well as the observed volatility of hours worked relative to average productivity. This is not the case if we account for measurement error but exclude government from the analysis.
The remainder of this paper is organized as follows. In section 2 we describe a general equilibrium model which nests as special cases a variety of existing RBC models. In section 3 we present our econometric methodology for estimating and evaluating the empirical performance of the model. In section 4 we present our empirical results. Finally, in section 5 we offer some concluding remarks.

2. Two Prototypical Real Business Cycle Models

In this section we present two prototypical real business cycle models. The first
corresponds to a stochastic version of the one sector growth model (see, e.g. Kydland and





Prescott [1980, p.174]). The second corresponds to a version of the model economy
considered by Hansen (1985) in which labor supply is indivisible. In both cases, we relax
the assumption implicit in existing RBC models that public and private spending have
identical effects on the marginal utility of private consumption.

2.1 The Models

Consistent with existing RBC models, we assume that the time series on the beginning-of-period-t per capita stock of capital, k_t, private time t consumption, c_t^p, and hours worked at time t, n_t, correspond to the solution of a social planning problem which can be decentralized as a Pareto optimal competitive equilibrium. The following problem nests both our models as special cases. Let N be a positive scalar which denotes the time t endowment of the representative consumer and let γ be a positive scalar. The social planner ranks streams of consumption services, c_t, leisure, N − n_t, and publicly provided goods and services, g_t, according to the criterion function:

(2.1)    E_0 Σ_{t=0}^∞ β^t {ln(c_t) + γV(N − n_t)}.

Following Kormendi (1983), Aschauer (1985), and Barro (1981, 1987) we suppose that consumption services are related to private and public consumption as follows:

(2.2)    c_t = c_t^p + αg_t,

where α is a parameter which governs the sign and magnitude of the derivative of the marginal utility of c_t^p with respect to g_t.5 Throughout, we assume that agents view g_t as an uncontrollable stochastic process. In addition, we suppose that g_t does not depend on the



current or past values of the endogenous variables in the model.6
We consider two specifications for the function V(·). In the divisible labor model, V(·) is given by

(2.3)    V(N − n_t) = ln(N − n_t)

for all t. In the indivisible labor model, V(·) is given by

(2.3)'   V(N − n_t) = N − n_t

for all t.

There are at least two interpretations of specification (2.3)'. First, it may just reflect the assumption that individual utility functions are linear in leisure. The second interpretation builds on the assumption that there are indivisibilities in labor supply. Here individuals can either work some positive number of hours or not at all. Assuming that agents' utility functions are separable across consumption and leisure, Rogerson (1988) shows that a market structure in which individuals choose lotteries rather than hours worked will support a Pareto optimal allocation of consumption and leisure. The lottery determines whether individuals work or not. Under this interpretation, (2.3)' represents a reduced form preference ordering which can be used to derive the Pareto optimal allocation using a fictitious social planning problem. This is the specification used by Hansen (1985).
Per capita output, y_t, is produced using the Cobb-Douglas production function

(2.4)    y_t = (z_t n_t)^{1−θ} k_t^θ,

where 0 < θ < 1 and z_t is an aggregate shock to technology which has the time series representation

(2.5)    z_t = z_{t−1} exp(λ_t).

Here λ_t is an iid process with mean λ and standard error σ_λ. The
aggregate resource constraint is given by

(2.6)    c_t^p + g_t + k_{t+1} − (1 − δ)k_t ≤ y_t,

where δ is the depreciation rate of capital; i.e., per capita consumption and investment cannot exceed per capita output.
At date 0 the social planner chooses contingency plans for {c_t^p, k_{t+1}, n_t : t ≥ 0} to maximize (2.1) subject to (2.3) or (2.3)', (2.4)-(2.6), k_0, and a law of motion for g_t. Because of the nonsatiation assumption implicit in (2.1) we can, without loss of generality, impose strict equality in (2.6). Substituting (2.2), (2.4) and this version of (2.6) into (2.1), we obtain the following social planning problem:
Maximize

(2.7)    E_0 Σ_{t=0}^∞ β^t {ln[(z_t n_t)^{1−θ} k_t^θ + (1 − δ)k_t − k_{t+1} + (α − 1)g_t] + γV(N − n_t)}

by choice of contingency plans for {k_{t+1}, n_t : t ≥ 0}, subject to k_0 given, a law of motion for g_t, and V(·) given by either (2.3) or (2.3)'.
It is convenient to represent the social planning problem (2.7) in a way that all of the planner's decision variables converge in a nonstochastic steady state. To this end we define the following detrended variables:

(2.8)    k̄_{t+1} = k_{t+1}/z_t,  ȳ_t = y_t/z_t,  c̄_t = c_t/z_t,  ḡ_t = g_t/z_t.

To complete our specification of agents' environment, we assume that ḡ_t evolves according to

(2.9)    ln(ḡ_t) = (1 − ρ)ln(ḡ) + ρ ln(ḡ_{t−1}) + μ_t,

where ln(ḡ) is the mean of ln(ḡ_t), |ρ| < 1, and μ_t is the innovation in ln(ḡ_t) with standard deviation σ_μ. Notice that g_t has two components, z_t and ḡ_t. Movements in the former give rise to permanent changes in the level of government consumption, whereas perturbations in the latter produce temporary changes in g_t. With this specification, the factors that give rise to permanent shifts in government consumption are the same as those which permanently enhance the economy's productive ability.
Substituting (2.8) into (2.7), we obtain the criterion function:

(2.10)    E_0 Σ_{t=0}^∞ β^t r(n_t, k̄_t, k̄_{t+1}, ḡ_t, λ_t) + φ,

where

(2.11)    r(n_t, k̄_t, k̄_{t+1}, ḡ_t, λ_t) = ln[n_t^{1−θ} k̄_t^θ exp(−θλ_t) + (1 − δ)k̄_t exp(−λ_t) − k̄_{t+1} + (α − 1)ḡ_t] + γV(N − n_t),

φ = E_0 Σ_{t=0}^∞ β^t ln(z_t), and V(·) is given by either (2.3) or (2.3)'. Consequently the original planning problem is equivalent to the problem of maximizing (2.10), subject to k̄_0, (2.9), (2.11) and V(·) given by either (2.3) or (2.3)'. Since φ is beyond the planner's control, it can be disregarded in solving the planner's problem.
The only case in which it is possible to obtain an analytical solution for the problem just discussed is when α = δ = 1 and the function V(·) is given by (2.3). This case is analyzed in, among other places, Long and Plosser (1983). For general values of α and δ, analytical solutions are not available. Here we use Christiano's (1988) log linear modification of the procedure used by Kydland and Prescott (1982) to obtain an approximate solution to our social planning problems. In particular, we approximate the optimal decision rules with the solution to the linear quadratic problem obtained when the function r in (2.11) is replaced by a function R which is quadratic in ln(n_t), ln(k̄_t), ln(k̄_{t+1}), ln(ḡ_t) and λ_t. The function R is the second order Taylor expansion of r[exp(A_1), exp(A_2), exp(A_3), exp(A_4), A_5] about the point [A_1, A_2, A_3, A_4, A_5] = [ln(n), ln(k̄), ln(k̄), ln(ḡ), λ]. Here n and k̄ denote the steady state values of n_t and k̄_t in the nonstochastic version of (2.10) obtained by setting σ_λ = σ_μ = 0.

It follows from results in Christiano (1988) that the decision rules which solve this problem are of the form:

(2.12)    k̄_{t+1} = k̄ (k̄_t/k̄)^{r_k} (ḡ_t/ḡ)^{d_k} exp[e_k(λ_t − λ)],

and

(2.13)    n_t = n (k̄_t/k̄)^{r_n} (ḡ_t/ḡ)^{d_n} exp[e_n(λ_t − λ)].

In (2.12) and (2.13), r_k, d_k, e_k, r_n, d_n and e_n are scalar functions of the model's underlying structural parameters.7
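Decision rules of the form (2.12)-(2.13) are straightforward to simulate once their coefficients are in hand. The sketch below is illustrative only: the coefficient values, steady states, and shock volatilities are made-up placeholders, not estimates derived from the structural parameters.

```python
import numpy as np

# Illustrative (made-up) coefficients for decision rules (2.12)-(2.13);
# in the model these are functions of the underlying structural parameters.
r_k, d_k, e_k = 0.95, -0.01, 0.5     # capital rule coefficients
r_n, d_n, e_n = -0.10, 0.05, 0.8     # hours rule coefficients
k_ss, n_ss, g_ss = 10.0, 0.3, 1.0    # assumed steady states
lam_bar = 0.004                      # assumed mean technology growth

rng = np.random.default_rng(0)
T = 200
k = np.empty(T + 1)
n = np.empty(T)
k[0] = k_ss
for t in range(T):
    lam = lam_bar + 0.01 * rng.standard_normal()      # technology shock
    g = g_ss * np.exp(0.02 * rng.standard_normal())   # government shock
    # (2.12): kbar_{t+1} = kbar (kbar_t/kbar)^{r_k} (gbar_t/gbar)^{d_k} exp[e_k(lam_t - lam)]
    k[t + 1] = k_ss * (k[t] / k_ss) ** r_k * (g / g_ss) ** d_k * np.exp(e_k * (lam - lam_bar))
    # (2.13): n_t = n (kbar_t/kbar)^{r_n} (gbar_t/gbar)^{d_n} exp[e_n(lam_t - lam)]
    n[t] = n_ss * (k[t] / k_ss) ** r_n * (g / g_ss) ** d_n * np.exp(e_n * (lam - lam_bar))

print(k[-1], n.mean())
```

With d_n > 0, a positive government shock raises hours on impact, while the technology shock enters through e_n; the loglinear form keeps both simulated series strictly positive.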
To gain intuition for the role of ḡ_t in aggregate labor market fluctuations, it is useful to briefly discuss the impact of three key parameters, α, ρ and γ, on the equilibrium response of n_t to ḡ_t. This response is governed by the coefficient d_n. First, notice that when α = 1, the only way in which c_t^p and g_t enter into the social planner's preferences and constraints is via their sum, c_t^p + g_t. Thus, exogenous shocks to g_t induce one-for-one offsetting shocks in c_t^p, leaving other variables like y_t, k_{t+1} and n_t unaffected. This implies that the coefficients d_n and d_k in the planner's decision rules for k̄_{t+1} and n_t both equal zero. Consequently, the absence of a role for g_t in existing RBC models can be rationalized by the assumption that α = 1.
Second, consider the case when α is less than one. The limiting case of α = 0 is particularly useful for gaining intuition. Here, government consumption is formally equivalent to a pure resource drain on the economy. Thus, agents respond to an increase in government consumption as if they had suffered a reduction in their wealth (as footnote 5 indicates, this does not imply that they have suffered a reduction in utility). Since we assume that leisure is a normal good, d_n is positive, i.e. increases in ḡ_t are associated with increases in n_t and decreases in y_t/n_t. Continuity suggests that d_n is decreasing in α. The
same logic suggests that d_n is an increasing function of ρ. This is because the wealth effect of a given shock to ḡ_t is increasing in ρ. For a formal analysis of the effects of government consumption in a more general environment than the one considered in this paper, see Aiyagari, Christiano and Eichenbaum (1989).
Finally, consider the impact of γ on aggregate labor market fluctuations. In several experiments we found that e_n and d_n were increasing in γ.8 To gain intuition into this result, it is useful to think in terms of a version of our model in which the gross investment decision rule is fixed exogenously. In this simpler model economy, labor market equilibrium is the result of the intersection of static labor demand and supply curves. It is straightforward to show that, given our assumptions regarding the utility function, the response of labor supply to a change in the return to working is an increasing function of γ, i.e., the labor supply curve becomes flatter as γ increases. Consequently, the equilibrium response of n_t to λ_t (which shifts the labor demand curve) is increasing in γ. This is consistent with the finding that in our model e_n is increasing in γ. With respect to d_n, it is straightforward to show that, in the static framework, the extent of the shift in the labor supply curve induced by a change in ḡ_t is also an increasing function of γ. This is consistent with the finding that in our model, d_n is an increasing function of γ.
The fact that e_n and d_n are increasing in γ leads us to expect that the volatility of hours worked will also be an increasing function of γ. However, we cannot say a priori how larger values of γ will impact on the Dunlop-Tarshis correlation. This is because larger values of e_n drive that correlation up, but larger values of d_n drive it down.

3. Econometric Methodology.

In this section we accomplish three tasks. First, we describe our strategy for
estimating the structural parameters of the model as well as various second moments of the
data. Second, we describe our method for evaluating the models’ implications for aggregate
labor market fluctuations. Third, we describe the basic data set used in our empirical
analysis.
While similar in spirit, our empirical methodology is quite different from the
methods typically used to evaluate RBC models. Much of the existing RBC literature makes little use of formal econometric methods, either at the stage when model parameter values are selected or at the stage when the fully parameterized model is compared with the data. Instead a variety of informal techniques, often referred to as "calibration", are
used. In contrast, we use a version of Hansen’s (1982) Generalized Method of Moments
(GMM) procedure at both stages of the analysis. Our estimation criterion is set up in a
way so that, in effect, estimated parameter values succeed in equating model and sample
first moments of the data. As it turns out, these values are very similar to the values
employed in existing RBC studies. However, an important advantage of our GMM
procedures is that we are able to quantify the degree of uncertainty in our estimates of the
model’s parameters. This turns out to be an important ingredient of our model evaluation
techniques.

3.1 Estimation

In this subsection we discuss our estimation strategy. The parameters of interest can be divided into three groups. Let Ψ_1 denote the structural parameters of the model:

(3.1)    Ψ_1 = {δ, θ, γ, ρ, ḡ, σ_μ, λ, σ_λ}.

The parameters N, β and α were not estimated. Instead we fixed N at 1369 hours per quarter, and set the parameter β so as to imply a 3% annual subjective discount rate, i.e. β = (1.03)^{−1/4}. Two alternative values of α were considered: α = 0 and α = 1.
The second and third sets of parameters, Ψ_2 and Ψ_3, correspond to various second moments of the data. Our measures of c_t^p, dk_t, k_t, y_t, (y/n)_t, and g_t all display marked trends, so that some stationary inducing transformation of the data must be adopted. The two sets of second moments correspond to the two different transformations which we employ. The first transformation is motivated by the fact that, according to our models, the first differences of the logarithms of all the variables which enter into the analysis are stationary stochastic processes. The second transformation corresponds to the Hodrick and Prescott (HP) detrending procedure discussed in Hodrick and Prescott (1980) and Prescott (1986). Our use of the HP transformation is motivated by the fact that many authors, including most prominently Kydland and Prescott (1982, 1988), Hansen (1985) and Prescott (1986), have investigated RBC models using data which have been filtered in this manner. That the HP filter is a stationary inducing transformation for difference stationary stochastic processes follows directly from results in King and Rebelo (1988). This result is applicable because, according to our model, the logarithms of c_t^p, dk_t, k_t, y_t, (y/n)_t, and g_t are all difference stationary stochastic processes.
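The HP trend solves a penalized least squares problem: it minimizes the sum of squared deviations from the series plus a penalty parameter times the sum of squared second differences of the trend. A minimal sketch of the filter (a direct dense-matrix solve, which is fine at quarterly sample sizes; 1600 is the conventional quarterly penalty):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Cyclical component of y under the Hodrick-Prescott filter.

    The trend tau minimizes sum (y - tau)^2 + lam * sum (second diff of tau)^2;
    the first-order conditions give (I + lam * K'K) tau = y, where K is the
    second-difference operator.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    K = np.zeros((T - 2, T))
    for i in range(T - 2):
        K[i, i], K[i, i + 1], K[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(T) + lam * K.T @ K, y)
    return y - trend

# A linear trend has zero second differences, so it survives the filter
# intact and its cyclical component is numerically zero.
cycle = hp_filter(np.linspace(0.0, 1.0, 50))
print(np.abs(cycle).max())
```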
Let Ψ_2 denote a vector of population moments corresponding to the output of our first stationary inducing transformation, i.e. growth rates of the data:

(3.2)    Ψ_2 = {σ_cp/σ_y, σ_dk/σ_y, σ_n, σ_n/σ_{y/n}, σ_g/σ_y, corr(y/n, n)},

where σ_x denotes the standard deviation of the growth rate of the variable x, x = {c^p, y, dk, n, y/n, g}, and corr(y/n, n) denotes the correlation between the growth rate of the return to working and the growth rate of hours worked. To carry out inference regarding σ_n/σ_y and σ_y, we exploit the fact that these are exact functions of the elements of Ψ_2.9
Let Ψ_3 denote a vector of population moments of the HP filtered data:

(3.3)    Ψ_3 = {(σ_cp/σ_y)^hp, (σ_dk/σ_y)^hp, (σ_n)^hp, (σ_n/σ_{y/n})^hp, (σ_g/σ_y)^hp, (corr(y/n, n))^hp}.

The superscript hp denotes the fact that the population moment refers to HP filtered data. In carrying out inference regarding (σ_n/σ_y)^hp and (σ_y)^hp, we exploit the fact that these are exact functions of the elements of Ψ_3.

The Unconditional Moments Underlying Our Estimator of Ψ_1

According to our model, δ = 1 + dk_t/k_t − k_{t+1}/k_t. Let δ* denote the unconditional mean of the time series {1 + dk_t/k_t − k_{t+1}/k_t}, i.e.

(3.4)    E{δ* − (1 + dk_t/k_t − k_{t+1}/k_t)} = 0.

We identify δ* with a consistent estimate of the parameter δ.
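Restriction (3.4) pins δ down as the sample mean of 1 + dk_t/k_t − k_{t+1}/k_t. A minimal sketch on fabricated capital and investment series (the depreciation rate and the noise process are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
delta_true = 0.021          # assumed quarterly depreciation rate (illustrative)
T = 400
k = np.empty(T + 1)
k[0] = 100.0
dk = np.empty(T)
for t in range(T):
    # fabricated gross investment series
    dk[t] = delta_true * k[t] * np.exp(0.05 * rng.standard_normal())
    k[t + 1] = (1.0 - delta_true) * k[t] + dk[t]   # capital accumulation identity

# (3.4): delta* is the unconditional mean of 1 + dk_t/k_t - k_{t+1}/k_t
delta_hat = np.mean(1.0 + dk / k[:-1] - k[1:] / k[:-1])
print(delta_hat)
```

Because the accumulation identity holds exactly in the constructed data, every term of the averaged series equals δ and the estimate is exact here; with actual data the mean smooths over measurement noise in the two series.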
The social planner's first order necessary condition for capital accumulation requires that the time t expected value of the marginal rate of substitution of goods in consumption equals the time t expected value of the marginal return to physical investment in capital. It follows that

(3.5)    E{β[θ(y_{t+1}/k_{t+1}) + 1 − δ](c_t/c_{t+1}) − 1} = 0.

This is the moment restriction that underlies our estimate of θ.
The first order necessary condition for hours worked requires that, for all t, the marginal productivity of hours times the marginal utility of consumption equals the marginal disutility of working. This implies the condition γ = (1 − θ)[y_t/n_t]/[c_t V'(N − n_t)] for all t. Let γ* denote the unconditional expected value of the time series on the right hand side of the previous expression, i.e.

(3.6)    E{γ* − (1 − θ)(y_t/n_t)/[c_t V'(N − n_t)]} = 0.

We identify γ* with a consistent estimate of the parameter γ.
Fourth, consider the random variable λ_t = ln(z_t/z_{t−1}) = (1 − θ)^{−1}Δln(y_t) − Δln(n_t) − θ(1 − θ)^{−1}Δln(k_t). Here Δ denotes the first difference operator. Under the null hypothesis of balanced growth, λ = Eλ_t equals the unconditional growth rate of output, μ_y. It follows that

(3.7)    E[Δln(y_t) − μ_y] = 0, and E[(λ_t − μ_y)^2 − σ_λ^2] = 0.

Relation (3.7) summarizes the moment restrictions underlying our estimators of λ (= μ_y) and σ_λ.
Our assumptions regarding the stochastic process generating government consumption imply the unconditional moment restrictions,

(3.8)    E[ln(ḡ_t) − (1 − ρ)ln(ḡ) − ρ ln(ḡ_{t−1})] = 0,
         E{[ln(ḡ_t) − (1 − ρ)ln(ḡ) − ρ ln(ḡ_{t−1})]ḡ_{t−1}} = 0,
         E{[ln(ḡ_t) − (1 − ρ)ln(ḡ) − ρ ln(ḡ_{t−1})]^2 − σ_μ^2} = 0.

These moment restrictions can be used to estimate ρ, ḡ and σ_μ.
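Up to the choice of instrument in the second condition, the restrictions in (3.8) amount to least squares applied to the AR(1) in (2.9). A sketch on simulated data, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)
rho, ln_g_bar, sigma_mu = 0.96, 1.0, 0.02   # illustrative parameter values
T = 50_000
ln_g = np.empty(T)
ln_g[0] = ln_g_bar
for t in range(1, T):
    # (2.9): ln(gbar_t) = (1 - rho) ln(gbar) + rho ln(gbar_{t-1}) + mu_t
    ln_g[t] = (1 - rho) * ln_g_bar + rho * ln_g[t - 1] + sigma_mu * rng.standard_normal()

# First two conditions in (3.8): regress ln(gbar_t) on a constant and its lag.
y, x = ln_g[1:], ln_g[:-1]
X = np.column_stack([np.ones_like(x), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)    # b = [(1-rho)ln(gbar), rho]
rho_hat = b[1]
ln_g_hat = b[0] / (1.0 - rho_hat)
# Third condition: sigma_mu^2 is the mean squared innovation.
sigma_mu_hat = np.sqrt(np.mean((y - X @ b) ** 2))
print(rho_hat, ln_g_hat, sigma_mu_hat)
```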
Equations (3.4)-(3.8) consist of eight unconditional moment restrictions involving the eight elements of Ψ_1 and

(3.9)    X_{t+1} = [dk_t/k_t, k_{t+1}/k_t, k_t/k_{t−1}, c_{t+1}^p/c_t^p, y_{t+1}/k_{t+1}, y_t/k_t, y_{t−1}/k_{t−1}, y_t/y_{t−1}, n_t, n_{t−1}, y_t/g_t, c_{t+1}^p/g_{t+1}, c_t^p/g_t, ḡ_t]'.

With this notation we can summarize (3.4)-(3.8) as

(3.10)    E{H_1[X_{t+1}, Ψ_1^0]} = 0, for all t ≥ 0,

where Ψ_1^0 is the true value of Ψ_1 and H_1(·,·) is the 8 x 1 vector valued function whose elements are the left hand sides of (3.4)-(3.8) before expectations are taken.

The Unconditional Moments Underlying Our Estimator of Ψ_2

Our model implies that the unconditional expected value of the growth rate of n_t equals zero. It follows that

(3.11)    E{[Δln(n_t)]^2 − σ_n^2} = 0,

which can be used to estimate σ_n. The unconditional moments underlying our estimators of σ_cp/σ_y, σ_dk/σ_y, σ_g/σ_y and σ_n/σ_{y/n} can be written as

(3.12)    E{[Δln(y_t) − μ_y]^2(σ_x/σ_y)^2 − [Δln(x_t) − μ_y]^2} = 0, for x_t = [c_t^p, dk_t, g_t],
          E{[Δln(y/n)_t − μ_y]^2(σ_n/σ_{y/n})^2 − [Δln(n_t)]^2} = 0.

Here we have used the fact that, under balanced growth, y_t, c_t^p, dk_t, (y/n)_t and g_t have the same unconditional growth rate, μ_y.

Finally, to estimate corr(y/n, n) we exploit the unconditional moment restriction,

(3.13)    E{[σ_n^2/(σ_n/σ_{y/n})]corr(y/n, n) − [Δln(y/n)_t − μ_y]Δln(n_t)} = 0,

where again we have used the balanced growth restriction that the unconditional growth rate of (y/n)_t equals μ_y.
Equations (3.11)-(3.13) consist of six unconditional moment restrictions involving the six elements of Ψ_2 and the elements of X_{t+1}. With this notation we can summarize (3.11)-(3.13) as

(3.14)    E{H_2[X_{t+1}, Ψ_2^0]} = 0, for all t ≥ 0,

where Ψ_2^0 is the true value of Ψ_2 and H_2(·,·) is the 6 x 1 vector valued function whose elements are the left hand sides of (3.11)-(3.13) before expectations are taken.
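Because the system is exactly identified, the estimators implied by (3.11)-(3.13) reduce to sample moments of growth rates. A sketch, with fabricated log-level series standing in for the hours and productivity data of section 3.3:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 300
# Fabricated log levels standing in for hours and average productivity.
log_n = np.cumsum(0.010 * rng.standard_normal(T))            # log hours
log_yn = np.cumsum(0.004 + 0.008 * rng.standard_normal(T))   # log y/n

dn = np.diff(log_n)      # growth rate of hours (mean zero under the model)
dyn = np.diff(log_yn)    # growth rate of average productivity
mu_y = dyn.mean()        # balanced-growth drift of y/n

sigma_n = np.sqrt(np.mean(dn ** 2))                        # (3.11)
sigma_yn = np.sqrt(np.mean((dyn - mu_y) ** 2))
ratio = sigma_n / sigma_yn                                 # sigma_n / sigma_{y/n}
corr = np.mean((dyn - mu_y) * dn) / (sigma_n * sigma_yn)   # (3.13)
print(sigma_n, ratio, corr)
```

Since the two fabricated series are independent here, corr comes out near zero, mimicking the Dunlop-Tarshis observation.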

The Unconditional Moments Underlying Our Estimator of Ψ_3

Data which are transformed via the HP filter are zero mean by construction. It follows that we can estimate the parameters of Ψ_3 by exploiting the unconditional moment restrictions,

(3.15)    E{(n̂_t)^2 − [(σ_n)^hp]^2} = 0,
          E{(ŷ_t)^2[(σ_x/σ_y)^hp]^2 − (x̂_t)^2} = 0, for x_t = [c_t^p, dk_t, g_t],
          E{[(ŷ/n)_t]^2[(σ_n/σ_{y/n})^hp]^2 − (n̂_t)^2} = 0,
          E{[((σ_n)^hp)^2/(σ_n/σ_{y/n})^hp]corr(y/n, n)^hp − (ŷ/n)_t n̂_t} = 0,

where the superscript ^ denotes data which have been transformed using the HP filter. Equation (3.15) consists of six unconditional moment restrictions involving the six elements of Ψ_3 and the vector valued function X̂_t = [ĉ_t^p, ŷ_t, d̂k_t, ĝ_t, (ŷ/n)_t, n̂_t]. With this notation we can rewrite (3.15) as

(3.16)    E{H_3[X̂_t, Ψ_3^0]} = 0, for all t ≥ 0,

where Ψ_3^0 is the true value of Ψ_3 and H_3(·,·) is the 6 x 1 vector valued function whose elements are the left hand sides of (3.15) before expectations are taken.
In order to discuss our estimator, it is convenient to define the 20 x 1 parameter vector Ψ = [Ψ_1' Ψ_2' Ψ_3']', the 20 x 1 vector valued function H = [H_1' H_2' H_3']', and the data vector F_{t+1} = [X_{t+1}', X̂_t']'. With this notation, the unconditional moment restrictions (3.10), (3.14) and (3.16) can be written as

(3.17)    E{H[F_{t+1}, Ψ^0]} = 0 for all t ≥ 0,

for Ψ = Ψ^0, the true parameter vector. Let g_T denote the 20 x 1 vector valued function

(3.18)    g_T(Ψ) = (1/T) Σ_{t=0}^{T} H(F_{t+1}, Ψ),

which can be calculated given a sample on {F_t : t = 1, 2, ..., T+1}. All of our models imply that F_{t+1} is a stationary and ergodic stochastic process. Since g_T(·) is of the same dimension as Ψ, it follows from Hansen (1982) that the estimator Ψ_T defined by the condition g_T(Ψ_T) = 0 is a consistent estimator of Ψ^0.
Let D_T denote the matrix of partial derivatives

(3.19)    D_T = ∂g_T(Ψ)/∂Ψ',

evaluated at Ψ_T. Then it follows from results in Hansen (1982) that a consistent estimator of the variance-covariance matrix of Ψ_T is given by

(3.20)    Var(Ψ_T) = [D_T' S_T^{−1} D_T]^{−1}/T.

Here S_T is a consistent estimate of the spectral density matrix of H(F_t, Ψ^0) at frequency zero.10
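The matrix S_T in (3.20) can be computed with, for example, the Bartlett-kernel (Newey-West) estimator of the spectral density at frequency zero. A generic sketch, assuming only that the moment conditions have been stacked into a T x m array:

```python
import numpy as np

def newey_west(h, lags):
    """Bartlett-kernel estimate of the zero-frequency spectral density of h.

    h : (T, m) array whose rows are the moment conditions H(F_t, Psi)
        evaluated at the parameter estimate.
    """
    h = h - h.mean(axis=0)       # center the moment conditions
    T = h.shape[0]
    S = h.T @ h / T              # lag-0 term
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1.0)          # Bartlett weight
        G = h[j:].T @ h[:-j] / T            # lag-j autocovariance
        S += w * (G + G.T)
    return S

# Sanity check: for iid standard normal moments, S should be near the identity.
rng = np.random.default_rng(4)
S = newey_west(rng.standard_normal((20_000, 3)), lags=4)
print(np.round(S, 2))
```

The Bartlett weights guarantee that the estimate is positive semidefinite, which matters because (3.20) requires inverting S_T.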

3.2 Testing

In this subsection we describe how a Wald-type test statistic described in Eichenbaum, Hansen and Singleton (1984) and Newey and West (1987) can be used to formally assess the plausibility of the models' implications for subsets of the second moments of the data. Our empirical analysis concentrates on assessing the model's implications for the labor market moments, [corr(y/n, n), σ_n/σ_{y/n}] and [corr(y/n, n)^hp, (σ_n/σ_{y/n})^hp].11 Here we describe our procedure for testing the first set of moments. The procedure for testing the second set of moments is completely symmetric. More generally, the test procedure can be used for any finite set of moments.
Given a set of values for Ψ_1, our model implies particular values for [corr(y/n, n), σ_n/σ_{y/n}] in population. We represent this relationship via the function f that maps R^8 into R^2:

(3.21)    f(Ψ_1) = [corr(y/n, n), σ_n/σ_{y/n}]'.

The function f(·) is highly nonlinear in Ψ_1 and must be computed using numerical methods. Here we used the spectral technique used in Christiano and Eichenbaum (1989).
Let A be the 2 x 20 matrix composed of zeros and ones with the property

(3.22)    AΨ = [corr(y/n, n), σ_n/σ_{y/n}]',

and let

(3.23)    F(Ψ) = f(Ψ_1) − AΨ.

Under the null hypothesis that the model is true,

(3.24)    F(Ψ^0) = 0.

In order to test the hypothesis that F(Ψ^0) = 0, we require the asymptotic distribution of F(Ψ_T) under the null hypothesis. Taking a first order Taylor series approximation of F(Ψ_T) about Ψ^0 yields

(3.25)    F(Ψ_T) ≅ F(Ψ^0) + F'(Ψ^0)[Ψ_T − Ψ^0].

It follows that a consistent estimator of Var[F(Ψ_T)] is given by

(3.26)    Var[F(Ψ_T)] = [F'(Ψ_T)]Var(Ψ_T)[F'(Ψ_T)]'.

An implication of results in Eichenbaum, Hansen and Singleton (1984) and Newey and
West (1987) is that the test statistic




19

(3.27)

J = F (# T )'V ar[F (« T )] 1F (# T )

is asymptotically distributed as a chi-square random variable with two degrees of freedom.
This fact can be used to test the null hypothesis (3.24).
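The delta-method calculation in (3.25)-(3.27) can be coded directly. This is a hedged sketch, not the paper's code: `f`, `A`, `psi_hat` and `V_psi` are whatever the estimation step produced, and the degrees of freedom equal the number of tested moments (two in the application above).

```python
import numpy as np
from scipy import stats

def wald_J(f, A, psi_hat, V_psi, eps=1e-6):
    """Wald type test of F(psi) = f(psi) - A psi = 0, following (3.21)-(3.27).

    f maps parameters into the model's predicted moments; A picks the
    corresponding data moments out of psi. Returns the J statistic and its
    asymptotic chi-square p-value (dof = number of tested moments)."""
    def F(psi):
        return f(psi) - A @ psi

    F_hat = F(psi_hat)
    k, m = len(psi_hat), len(F_hat)

    # numerical Jacobian F'(psi_hat): the delta method weights in (3.26)
    J_F = np.zeros((m, k))
    for j in range(k):
        dp = np.zeros(k)
        dp[j] = eps
        J_F[:, j] = (F(psi_hat + dp) - F(psi_hat - dp)) / (2.0 * eps)

    V_F = J_F @ V_psi @ J_F.T                      # eq. (3.26)
    J_stat = F_hat @ np.linalg.solve(V_F, F_hat)   # eq. (3.27)
    return J_stat, stats.chi2.sf(J_stat, df=m)
```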

3.3 The Baseline Dataset

Here, we discuss the baseline dataset, which we use initially in our analysis. Later, in section 4.2, we modify the dataset in order to assess the impact of measurement error on our analysis.

In all of our empirical work, private consumption, c_t, was measured as quarterly real expenditures on nondurable consumption goods plus services, plus the imputed service flow from the stock of durable goods. The first two measures were obtained from the Survey of Current Business. The third measure was obtained from the data base documented in Brayton and Mauskopf (1985). Government consumption, g_t, was measured by real government purchases of goods and services minus real government (federal, state and local) investment.¹² A measure of government investment was provided to us by John Musgrave of the Bureau of Economic Analysis. This measure is a revised and updated version of the measure discussed in Musgrave (1980). Gross investment, dk_t, was measured as private sector fixed investment plus real expenditures on durable goods plus government fixed investment. The capital stock series, k_t, was chosen to match the investment series. Accordingly, we measured k_t as the stock of consumer durables, producer structures and equipment, plus government and private residential capital, plus government nonresidential capital. Gross output, y_t, was measured as c_t plus g_t plus dk_t plus time t inventory investment. Given our consumption series, the difference between our measure of gross output and the one reported in the Survey of Current Business is that ours includes the imputed service flow from the stock of consumer durables but excludes net exports. Our baseline measure of hours worked corresponds to the one constructed by Hansen (1984), which is based on the household survey conducted by the Bureau of Labor Statistics. The data were converted to per capita terms using an efficiency weighted measure of the population. All series cover the period 1955:3-1983:4.¹³

4. Empirical Results

In this section we report our empirical results. The section is organized as follows. In subsection 4.1 we report results obtained using the baseline dataset described in Section 3.3. In subsection 4.2 we consider the impact of measurement error on our analysis.

4.1 Results for the Baseline Dataset

Table 1a reports our estimates of Ψ_T along with standard errors for the different structural models. The coefficients of the equilibrium laws of motion for k_{t+1} and n_t corresponding to the estimated values of Ψ_T are displayed in Table 2. One way to assess the plausibility of our estimates of Ψ_T is to investigate their implications for various first moments of the data. To do this, we used the equilibrium laws of motion reported in Table 2 to simulate 1000 time series, each of length 113, the number of observations in our dataset. First moments were calculated on each synthetic data set. Table 3 reports the average value of these moments across synthetic data sets, as well as estimates of the corresponding first moments of the data. As can be seen, all of the models do extremely well on this dimension. Notice that the model predicts the same mean growth rates for c_t, k_t, g_t and y_t. This reflects the balanced growth property of our model. This restriction does not seem implausible given the point estimates and standard errors reported in Table 2. The model also predicts that the unconditional growth rate of n_t is zero. Again, this restriction seems reasonably consistent with the data.
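The Table 3 exercise can be sketched as follows. The bivariate linear law of motion below is a hypothetical stand-in for the estimated laws of motion in Table 2 (which are not reproduced here); only the simulation design, 1000 synthetic samples of length 113 with first moments averaged across samples, follows the text.

```python
import numpy as np

# Hypothetical coefficients: x_{t+1} = mu + B x_t + C e_{t+1}, e ~ N(0, I).
mu = np.array([0.004, 0.0])
B = np.array([[0.9, 0.0],
              [0.1, 0.7]])
C = np.array([[0.01, 0.0],
              [0.0, 0.005]])

def simulate_moments(n_rep=1000, T=113, seed=0):
    """Average sample first moments across synthetic datasets."""
    rng = np.random.default_rng(seed)
    means = np.zeros((n_rep, 2))
    for r in range(n_rep):
        x = np.zeros((T, 2))
        for t in range(1, T):
            x[t] = mu + B @ x[t - 1] + C @ rng.standard_normal(2)
        means[r] = x[1:].mean(axis=0)   # first moments for this synthetic sample
    return means.mean(axis=0)           # average across synthetic samples
```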
Tables 4A and 4B display estimates of a subset of the second moments of the data as well as the analog model predictions. The first table reports results corresponding to HP filtered data, while the second table reports results obtained working with growth rates of the data. Since the results are qualitatively similar, we concentrate on Table 4A. All of the models do reasonably well at matching the estimated values of (σ_c/σ_y)_hp, (σ_dk/σ_y)_hp, (σ_g/σ_y)_hp, and (σ_y)_hp. Interestingly, introducing government into the analysis, i.e. moving from α = 1 to α = 0, actually improves the performance of the models with respect to (σ_c/σ_y)_hp and (σ_g/σ_y)_hp, but has relatively little impact on their predictions for (σ_dk/σ_y)_hp or (σ_y)_hp. In contrast, the models do less well at matching the volatility of hours worked relative to output. Not surprisingly, incorporating government into the analysis (α = 0) generates additional volatility in n_t, as does allowing for indivisibilities in labor supply. Indeed, the quantitative impact of these two perturbations to the base model (divisible labor, α = 1) is similar. Nevertheless, even when both effects are operative, the model still underpredicts the volatility of n_t relative to y_t. Similarly, allowing for nonconvexities in labor supply and introducing government into the analysis improves the model's performance with respect to the volatility of n_t relative to y_t/n_t. In fact the fourth model, which incorporates both of these effects, actually overstates the volatility of n_t relative to y_t/n_t.¹⁴

Next we consider the ability of the different models to account for the Dunlop-Tarshis observation. From Table 4A we see that the basic model (i.e., divisible labor, α = 1) fails dramatically along this dimension. Introducing nonconvexities in labor supply has almost no impact on the model's prediction for this correlation. Introducing government into the analysis (α = 0) does reduce the correlation between n_t and y_t/n_t. But, despite the improvement, the models with α = 0 still substantially overstate the correlation between average productivity and hours worked.
Table 5 reports the results of implementing the diagnostic procedures discussed in section 3. Columns labeled HP and DIFF refer to results generated from HP filtered data and growth rates, respectively. The first three rows report results for the correlation between average productivity and hours worked. The second set of three rows reports results for the relative volatility of hours worked and average productivity. The last row of the table, labeled "J", reports the statistic for testing the joint null hypothesis that the model predictions for both corr(y/n,n) and σ_n/σ_{y/n} (or corr(y/n,n)_hp and (σ_n/σ_{y/n})_hp) are true. As can be seen, this null hypothesis is overwhelmingly rejected for every version of the model, irrespective of whether growth rates or HP filtered data are used. Notice also that the t statistics associated with corr(y/n,n) and corr(y/n,n)_hp are, in every instance, larger than the corresponding t statistics associated with σ_n/σ_{y/n} and (σ_n/σ_{y/n})_hp. This is consistent with our claim that the single most striking failure of the models lies in their implications for the Dunlop-Tarshis observation, rather than the relative volatility of hours worked and average productivity.
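The statistics corr(y/n,n)_hp and (σ_n/σ_{y/n})_hp used throughout the tables can be computed mechanically from two series. The sketch below is illustrative: it assumes quarterly data and the conventional smoothing parameter λ = 1600, and does not reproduce the exact data handling described in section 3.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def hp_filter(y, lam=1600.0):
    """Cyclical component of the Hodrick-Prescott (1980) filter: the trend tau
    minimizes sum (y - tau)^2 + lam * sum (second difference of tau)^2."""
    T = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(T - 2, T))
    A = sparse.eye(T, format="csc") + lam * (D.T @ D)
    trend = spsolve(A.tocsc(), y)
    return np.asarray(y) - trend

def labor_market_moments(y, n, lam=1600.0):
    """corr(y/n, n) and sigma_n / sigma_{y/n} on HP filtered logs."""
    prod = hp_filter(np.log(np.asarray(y) / np.asarray(n)), lam)  # log productivity
    hrs = hp_filter(np.log(np.asarray(n)), lam)                   # log hours
    return np.corrcoef(prod, hrs)[0, 1], hrs.std() / prod.std()
```

A useful sanity check on the filter is that it removes a linear trend exactly, since the penalty term is zero along any linear path.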

4.2 Measurement Error

There are at least two reasons to believe that the negative correlation between hours worked and average productivity reported in section 4.1 is spurious and reflects measurement error. One potential source of distortion lies in the fact that our base output measure covers more sectors than do our base hours data (see Appendix 1 of Christiano and Eichenbaum (1988)). In addition, the base hours data may suffer from classical measurement error. This type of measurement error can have a particularly important impact on estimates of corr(y/n,n) and corr(y/n,n)_hp because average productivity is constructed using the hours worked data.

Alignment Error

In order to investigate the quantitative impact of alignment error, we considered alternative measures of hours worked and the return to working which do not suffer from this problem: output per hour of all persons in the non-agricultural business sector (CITIBASE mnemonic LBOUTU) and per capita hours worked by wage and salary workers in private non-agricultural establishments as reported by the Bureau of Labor Statistics (IDC mnemonic HRSPST). For convenience, we refer to this measure of n_t as establishment hours. With the new data, the estimated values of corr(y/n,n) and corr(y/n,n)_hp are .21 and .16, with corresponding standard errors of .07 and .08. These results are consistent with the view that the negative correlations reported in Table 5 reflect, in part, alignment error. Interestingly, our estimates of σ_n/σ_{y/n} and (σ_n/σ_{y/n})_hp are also significantly affected by moving to the new data sets. These now equal 1.27 and 1.64, with corresponding standard errors of .13 and .16. So while the models' performance with respect to the Dunlop-Tarshis observation ought to be enhanced by moving to the new data set, it ought to deteriorate with respect to the relative volatility of hours worked and output per hour. Therefore, the net effect of the new dataset on overall inference cannot be determined a priori.
To assess the net impact on the models' performance, we reestimated the structural parameters and redid the diagnostic tests discussed in section 3. The new parameter estimates are reported in Table 1b. The results of our diagnostic tests are summarized in Table 6, which is the exact analog of Table 5. The data used to generate Tables 5 and 6 are the same, with two exceptions. First, in the calculations associated with the intratemporal Euler equation, i.e. the third element of H(·,·), we used our new measure of average productivity, which is actually an index. This measure of average productivity was scaled so that the sample mean of the transformed index coincides with the sample mean of our measure of y_t divided by establishment hours. The second difference is that, apart from the calculations involving y_t/n_t, we measured n_t using establishment hours.
Notice that, for every single model and both stationarity-inducing transformations, the J statistics in Table 6 are lower than the corresponding entries in Table 5. Nevertheless, as long as government is not incorporated into the analysis, i.e. α = 1, the models are still rejected at essentially the zero percent significance level. However, this is no longer true when government is incorporated into the analysis, i.e. α = 0. In particular, when we work with growth rates, we can no longer reject the divisible labor model at the one percent significance level. Even more dramatically, when we work with HP filtered data, we cannot reject the indivisible labor model at even the ten percent significance level.¹⁵

To understand these results, consider first the impact of the new data set on inference regarding the correlation between hours worked and average productivity. Comparing the α = 0 models in Tables 5 and 6, we see a dramatic drop in the t statistics. There are two principal reasons for this improvement. The most obvious is that corr(y/n,n) and corr(y/n,n)_hp are positive in the new data set (.21 and .16) while they are negative in the base data set (−.71 and −.20). In this sense the data have moved towards the model. Second, the new values of Ψ_T generate smaller values for corr(y/n,n) and corr(y/n,n)_hp. For example, in the indivisible labor model (α = 0), corr(y/n,n)_hp drops from .737 to .575. In part, this reflects the new values of ρ and γ. Consider ρ first. With the baseline data set, ρ is .96 (after rounding) for all of the models. In the new data set, ρ is .98 (after rounding) for all the models. As we emphasized in section 2, increases in ρ are associated with decreases in the correlation between y_t/n_t and n_t.¹⁶ Next, consider γ. With the new data set the estimates of γ are consistently larger than we obtained with the old data set.¹⁷ For example, in the indivisible labor model (α = 0), γ was .0037, while now γ = .0046. As we noted in section 2, the impact of a change in γ on corr(y/n,n) and corr(y/n,n)_hp cannot be determined a priori. As it turns out, the increase in γ contributes to a drop in these statistics.¹⁸
We now examine the impact of the new data set on inference regarding the relative volatility of hours worked and average productivity. Comparing Tables 5 and 6, we see that in all cases but one, the t statistics drop. In the exceptional case, i.e. the divisible labor model with α = 0, the change is very small. There are three factors which influence the change in these t statistics. First, the point estimates of σ_n/σ_{y/n} and (σ_n/σ_{y/n})_hp are larger with the new dataset. Other things equal, this hurts the empirical performance of all the models, except the indivisible labor model with α = 0. Second, these statistics are estimated less precisely with the new data set. Other things equal, this contributes to a reduction in the t statistics. Finally, the new parameter estimates lead to an increase in each model's implied values of σ_n/σ_{y/n} and (σ_n/σ_{y/n})_hp. For example, the value of (σ_n/σ_{y/n})_hp implied by the indivisible labor model with α = 0 rises to 1.437 from 1.348. In part this reflects the new values of ρ and γ. For example, the value of (σ_n/σ_{y/n})_hp implied by the baseline indivisible labor model (α = 0) with ρ increased to .98 is 1.396. The analog experiment with γ increases the value of this statistic to 1.436.

Classical Measurement Error in Hours Worked

Recall that we have two different measures of hours worked, BLS establishment hours and the baseline measure (i.e., Gary Hansen's measure of hours worked). Denote these two time series by n_t^e and n_t^h, respectively. Let n_t^* denote true hours worked at time t. We assume, as does Prescott (1986a), that the measurement errors in these two time series are independently and identically distributed, and orthogonal to each other as well as to the logarithm of true hours worked, so that

(4.1)  ln n_t^e = ln n_t^* + v_t^e
       ln n_t^h = ln n_t^* + v_t^h.

It follows that

(4.2)  σ_{v^e}² = .5{σ_{Δln n^e}² − cov[Δln(n_t^e), Δln(n_t^h)]}

and

       σ_{v^h}² = .5{σ_{Δln n^h}² − cov[Δln(n_t^e), Δln(n_t^h)]},

where σ_{Δln n^e}² and σ_{Δln n^h}² denote the variances of the growth rates of n_t^e and n_t^h, respectively, while σ_{v^e}² and σ_{v^h}² denote the variances of v_t^e and v_t^h.

We can estimate σ_{v^e}² and σ_{v^h}² by replacing the objects to the right of the equalities in (4.2) by their sample counterparts. We map this estimator into our GMM framework in order to take into account the impact of sampling uncertainty in σ_{v^e}² and σ_{v^h}² on our model diagnostics. The unconditional moment restrictions associated with (4.2) are

(4.3)  E{σ_{v^h}² − .5[Δln(n_t^h)]² + .5Δln(n_t^e)Δln(n_t^h)} = 0
       E{σ_{v^e}² − .5[Δln(n_t^e)]² + .5Δln(n_t^e)Δln(n_t^h)} = 0.

In redoing the empirical analysis underlying Table 5 (Table 6) we added the left hand side of the first (second) equation in (4.3), before expectations are taken, to our specification of the function H(·,·) and added σ_{v^h}² (σ_{v^e}²) to our specification of Ψ.
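The identification argument behind (4.2) is easy to verify on simulated data. The following is a hedged sketch of the method-of-moments version of (4.2), not the paper's GMM implementation; it uses `np.cov`, which demeans the growth rates, an innocuous choice here since the model implies mean zero hours growth.

```python
import numpy as np

def measurement_error_variances(n_e, n_h):
    """Method of moments version of (4.2): given two independently contaminated
    measures of the same hours series, back out the error variances from the
    variances and covariance of the log growth rates."""
    d_e = np.diff(np.log(n_e))
    d_h = np.diff(np.log(n_h))
    c = np.cov(d_e, d_h)                  # 2x2 sample covariance matrix
    var_v_e = 0.5 * (c[0, 0] - c[0, 1])   # .5 * (var(dln n_e) - cov)
    var_v_h = 0.5 * (c[1, 1] - c[0, 1])   # .5 * (var(dln n_h) - cov)
    return var_v_e, var_v_h
```

The factor .5 appears because first-differencing doubles the variance contribution of i.i.d. measurement error, while the covariance of the two growth rates isolates the variance of true hours growth.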
Next, we show how measurement error impacts the remaining unconditional moment conditions which define our estimator of Ψ. To do this, we let v_t denote v_t^h or v_t^e, depending on whether measurement error is being incorporated into the new version of Table 5 or Table 6. The corresponding variance of the measurement error is denoted by σ_v². Obviously, if hours are mismeasured then so is the Solow residual, z_t. Let z_t^* denote the true Solow residual at time t. Relation (4.1) and the definition of z_t imply that

(4.5)  z_t = z_t^* − v_t.

It follows that the second equation in (3.7) must be replaced by

(4.6)  E[(λ_t − λ̄)² − σ_λ² − 2σ_v²] = 0.

Since z_t is mismeasured, we must also modify (3.8), the unconditional moment restrictions used to estimate ρ, σ_μ, and ḡ. Given our model of measurement error, we now have the restrictions

(4.7)  E[ln(g_t) − (1−ρ)ḡ − ρ ln(g_{t−1})] = 0
       E{[ln(g_t) − (1−ρ)ḡ − ρ ln(g_{t−1})] ln(g_{t−1}) + ρσ_v²} = 0
       E{[ln(g_t) − (1−ρ)ḡ − ρ ln(g_{t−1})]² − σ_μ² − (1+ρ²)σ_v²} = 0.
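The measurement error corrections in (4.7) can be checked by Monte Carlo. In the sketch below (parameter values illustrative, not the paper's estimates), i.i.d. noise v_t is added to the log of an AR(1) series; the residual formed at the true parameters then has covariance −ρσ_v² with the lagged level and variance σ_μ² + (1+ρ²)σ_v², exactly the adjustments appearing in (4.7).

```python
import numpy as np

rng = np.random.default_rng(0)
rho, gbar, sig_mu, sig_v = 0.96, 0.0, 0.02, 0.01
T = 300_000

# true AR(1) process for ln(g)
ln_g_true = np.zeros(T)
shocks = sig_mu * rng.standard_normal(T)
for t in range(1, T):
    ln_g_true[t] = (1 - rho) * gbar + rho * ln_g_true[t - 1] + shocks[t]
ln_g = ln_g_true + sig_v * rng.standard_normal(T)   # measured series

# residual evaluated at the true parameters
u = ln_g[1:] - (1 - rho) * gbar - rho * ln_g[:-1]
m_cross = np.mean(u * ln_g[:-1])   # approx -rho * sig_v**2
m_var = np.mean(u ** 2)            # approx sig_mu**2 + (1 + rho**2) * sig_v**2
```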

The last unconditional moment condition which involves n_t is the one which defines our estimator of γ. Under our assumptions regarding the nature of the measurement error, this estimator remains valid, to a first approximation. Consider, for example, the case of the divisible labor model. Under our assumptions the expected value of our estimator of γ, E(1−θ)(y_t/c_t)[(N/n_t)−1], equals

E(1−θ)(y_t/c_t)[(N/n_t^*)−1] + (1−θ)E(y_t/c_t)(N/n_t^*)[exp(−v_t)−1]

   = γ* + (1−θ)E(y_t/c_t)(N/n_t^*)E[exp(−v_t)−1]

   ≅ γ* + (1−θ)E[(y_t/c_t)(N/n_t^*)]σ_v²/2.

The equality in the above expression exploits the fact that, by definition, γ* = E(1−θ)(y_t/c_t)[(N/n_t^*)−1], and makes use of our independence assumptions on v_t. The last relation makes use of the approximation exp(−v_t) ≅ 1 − v_t + .5v_t². It follows that the bias in our estimator of γ is roughly (1−θ)E[(y_t/c_t)(N/n_t^*)]σ_v²/2. The same logic indicates that the bias in the indivisible labor model equals (1−θ)E[(y_t/c_t)(1/n_t^*)]σ_v²/2. Relative to the point estimates provided below, these biases are negligible.
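The second order approximation driving this bias calculation can be checked numerically: for small mean zero normal v_t, E[exp(−v_t) − 1] is close to σ_v²/2. The value of σ_v below is illustrative, chosen to be of the order of the estimates reported in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_v = 0.01
v = rng.normal(0.0, sigma_v, 4_000_000)
lhs = np.mean(np.exp(-v) - 1.0)   # Monte Carlo estimate of E[exp(-v) - 1]
rhs = 0.5 * sigma_v ** 2          # the second order approximation
```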
Using the methodology discussed above, we reestimated all the models, thus generating eight new sets of parameter estimates. The first four were obtained using the baseline dataset. Here, establishment hours are incorporated into the analysis in order to estimate σ_{v^h}. The results of the corresponding diagnostic tests are reported in Panel A of Table 7. The second four sets of parameter estimates were obtained using the alignment corrected data set. Here, the baseline hours data were incorporated into the analysis in order to estimate σ_{v^e}. In both cases the only parameter estimates which were significantly affected are those of σ_λ, σ_μ and ρ. In the baseline dataset,

(σ_λ, σ_μ, ρ) = (.018, .020, .96), without measurement error
              = (.014, .017, .98), with measurement error,

for all models. With the alignment corrected dataset,

(σ_λ, σ_μ, ρ) = (.012, .016, .98), without measurement error, α = 0, 1,
              = (.010, .014, .98), with measurement error, α = 1,
              = (.011, .014, .98), with measurement error, α = 0,

for the divisible and indivisible labor models. The estimated standard error of σ_λ and σ_μ is always .001, while the estimated standard error of ρ is .03 in all cases. Not surprisingly, taking measurement error into account reduces our estimates of σ_λ and σ_μ, but increases our estimate of ρ. The estimated values of σ_{v^e} and σ_{v^h} are .0041 (.0006) and .0087 (.0009), respectively, where numbers in parentheses denote standard errors. Evidently, our baseline measure of hours suffers more from measurement error than does the establishment measure. For example, our estimates imply that roughly 80 percent of the standard deviation of the growth rate in the baseline measure of hours worked can be attributed to measurement error. The corresponding figure for establishment hours worked is 58 percent.
We now consider the results of our formal diagnostic tests. Comparing Panel A of Table 7 with Table 5, we see that allowing for measurement error in the baseline dataset has a dramatic impact on the models' implications for the observed values of corr(y/n,n) and corr(y/n,n)_hp. For example, in the base model (divisible labor, α = 1), the predicted values of corr(y/n,n)_hp and corr(y/n,n) go from .951 and .960 to −.145 and −.638, respectively. The corresponding t statistics drop from 10.56 and 25.18 to .48 and 1.39, respectively. In this sense, measurement error alone resolves the Dunlop-Tarshis puzzle. However, the J statistic reveals that all of the models are rejected at very high significance levels. To see why, notice that the models overpredict corr(y/n,n) and corr(y/n,n)_hp but, with one exception, they underpredict σ_n/σ_{y/n} and (σ_n/σ_{y/n})_hp. At the same time, the correlation between F_1(Ψ) and F_2(Ψ) for the different models lies between .4 and .8. (Here, F is the 2 by 1 dimensional vector function F = [F_1 F_2]′ defined in (3.23).) This is why the J statistics assign low probability to these estimates. We conclude that measurement error alone does allow these models to account for the Dunlop-Tarshis observation and the relative volatility of hours worked and average productivity when each statistic is considered individually. But measurement error alone does not account for their joint behavior.

It is precisely on the joint behavior of these statistics that the models with government do somewhat better. Setting α = 0 increases the predicted values for the relative volatility of hours worked and average productivity. This effect is particularly dramatic in the indivisible labor model, where the model actually overpredicts (σ_n/σ_{y/n})_hp. In conjunction with the low t statistics, this produces a value for the J statistic according to which the model cannot be rejected at even the thirty percent significance level.

The combined impact of correcting for alignment and classical measurement error can be seen by comparing Table 5 and Panel B of Table 7. In all cases, the reported J statistic falls by a factor of three or more. To assess the impact of classical measurement error, conditioning on having adjusted for alignment error, compare Table 6 with Panel B of Table 7. With two exceptions, incorporating classical measurement error into the analysis substantially improves the performance of the models. Here, the indivisible labor models with government cannot be rejected at even the ten percent significance level, regardless of whether we work with growth rates or HP filtered data. Finally, to see the impact of incorporating government into the analysis when both types of measurement error are dealt with, consider the reported J statistics in Panel B of Table 7. In every case, moving from α = 1 to α = 0 substantially improves the performance of the model. Indeed, when we work with growth rates, neither model can be rejected at the three percent significance level. Working with HP filtered data yields more ambiguous conclusions. Here, the indivisible labor model cannot be rejected at the thirty percent significance level, but the divisible labor model can be rejected at the one percent significance level. We conclude that once measurement error is taken into account, incorporating government into the analysis substantially alters inference about the plausibility of the models.

5. Concluding Remarks

Existing RBC theories assume that the only source of impulses to post-war U.S. business cycles is exogenous shocks to technology. We have argued that this feature of RBC models generates a strong positive correlation between hours worked and average productivity. Unfortunately, this implication is grossly counterfactual, at least for the post-war U.S. This leads us to conclude that there must be other quantitatively important shocks driving fluctuations in aggregate U.S. output. This paper focused on assessing the importance of shocks to government consumption. Our results indicate that when aggregate demand shocks arising from stochastic movements in government consumption are incorporated into the analysis, and measurement error is allowed for, the model's empirical performance is substantially improved.

We wish to emphasize two important caveats about our empirical results. First, we have implicitly assumed that public and private capital are perfect substitutes in the aggregate production function. A number of authors, including most prominently Aschauer (1989), have argued that this assumption is empirically implausible. To the extent that these authors are correct, and to the extent that public investment shocks are important, our assumption makes it easier for our model to account for the Dunlop-Tarshis observation. This is because these kinds of shocks impact on the model in a manner very similar to technology shocks, so that they contribute to a positive correlation between hours worked and productivity. Second, we have implicitly assumed that all taxes are lump sum. We chose this strategy in order to isolate the role of shocks to government consumption per se.

We leave to future research the important task of incorporating distortionary taxation into our framework. It is not clear what impact distortionary taxes would have on our model's ability to account for the Dunlop-Tarshis observation. Recent work by Braun (1989) and McGrattan (1989) indicates that randomness in marginal tax rates enhances the model on this dimension. On the other hand, some simple dynamic optimal taxation arguments suggest the opposite. For example, suppose that it is optimal for the government to immediately increase distortionary taxes on labor in response to an increase in government consumption that is persistent. This would obviously mitigate the positive employment effect of an increase in government consumption, thus hurting the model's ability to account for the Dunlop-Tarshis observation. Suppose, however, that it is optimal for the government to increase taxes with a lag. We suspect that this would enhance the model's empirical performance.

References
Aiyagari, S. Rao, Christiano, Lawrence J. and Eichenbaum, Martin, "The Output and
Employment Effects of Government Spending," Federal Reserve Bank of
Minneapolis, January 1989.
Aschauer, David A., "Fiscal Policy and Aggregate Demand," American Economic
Review, March 1985, 75, 117-27.
Aschauer, David A., "Does Public Capital Crowd Out Private Capital?," Journal of
Monetary Economics, September 1989, 24, 171-88.
Ashenfelter, Orley, "Macroeconomic Analyses and Microeconomic Analyses of Labor
Supply," in K. Brunner and A. Meltzer, eds, Carnegie-Rochester Conference Series
on Public Policy, Autumn 1984, 21, 117—56.
Barro, Robert J., "Output Effects of Government Purchases," Journal of Political
Economy, 1981, 89.
Barro, Robert J., Macroeconomics, Second edition, New York: John Wiley and Sons, 1987,
Chapter 12.
Barro, Robert J. and King, Robert, "Time—Separable Preferences and Intertemporal
Substitution Models of Business Cycles," Quarterly Journal of Economics, 1984,
817—40.
Bencivenga, Valerie, "An Econometric Study of Hours and Output Variation with
Preference Shocks," manuscript, 1988.
Blanchard, Olivier Jean and Fischer, Stanley, Lectures on Macroeconomics, Cambridge,
MA: MIT Press, 1989.
Braun, Anton R., "The Dynamic Interaction of Distortionary Taxes and Aggregate
Variables in Postwar U.S. Data," manuscript, University of Virginia, 1989.
Brayton, F. and Mauskopf, E., "The MPS Model of the United States Economy," Board of
Governors of the Federal Reserve System, Division of Research and Statistics,
Washington, D.C., 1985.
Christiano, Lawrence J., "Dynamic Properties of Two Approximate Solutions to a
Particular Growth Model," Research Department Working Paper 338, Federal
Reserve Bank of Minneapolis, 1987a.
Christiano, Lawrence J., "Technical Appendix to ‘Why Does Inventory Investment
Fluctuate so Much?’," Research Department Working Paper 380, Federal Reserve
Bank of Minneapolis, 1987b.
Christiano, Lawrence J., "Why Does Inventory Investment Fluctuate So Much," Journal of
Monetary Economics, March/May 1988, 21, 247-80.
Christiano, Lawrence J. and Eichenbaum, Martin, "Unit Roots in GNP: Do We Know
and Do We Care?," National Bureau of Economic Research Working Paper 3130,
1989.

Dunlop, John T., "The Movement of Real and Money Wage Rates," Economic Journal,
1938, XLVIII, 413-34.
Eichenbaum, Martin and Hansen, Lars P., "Estimating Models with Intertemporal
Substitution Using Aggregate Time Series Data," Journal of Business and Economic
Statistics, 1988.
Eichenbaum, Martin, Hansen, Lars P. and Singleton, Kenneth J., "Appendix to ‘A Time
Series Analysis of Representative Agent Models of Consumption and Leisure Under
Uncertainty’," unpublished manuscript, Department of Economics, Northwestern
University, 1984.
Fischer, Stanley, "Long-Term Contracts, Rational Expectations and the Optimal Money
Supply Rule," Journal of Political Economy, February 1977, 85, 191—205.
Fischer, Stanley, "Recent Developments in Macroeconomics," Quarterly Journal of
Economics, 1988.
Ghez, Gilbert R. and Becker, Gary S., The Allocation of Time and Goods Over the
Life Cycle, New York: National Bureau of Economic Research, 1975.
Hall, R. E., "A Non-Competitive Equilibrium Model of Fluctuations," manuscript,
Stanford University, 1987.
Hansen, Gary D., "Fluctuations in Total Hours Worked: A Study Using Efficiency Units,"
Working Paper, University of Minnesota, 1984.
Hansen, Gary D., "Indivisible Labor and the Business Cycle," Journal of Monetary
Economics, November 1985, 16, 309—28.
Hansen, Lars P., "Large Sample Properties of Generalized Method of Moments
Estimators," Econometrica, 1982, 50, 1029-54.
Hodrick, Robert J. and Prescott, Edward C., "Post-War U.S. Business Cycles: An
Empirical Investigation," manuscript, Carnegie-Mellon University, 1980.
Keynes, John Maynard, The General Theory of Employment, Interest and Money,
New York: Harcourt, Brace and World, Inc., 1964.
King, Robert G. and Rebelo, Sergio T., "Low Frequency Filtering and Real Business
Cycles," manuscript, University of Rochester, February 1988.
Kormendi, Roger C., "Government Spending, Government Debt and Private Sector
Behavior," American Economic Review, 1983, 78, 994—1010.
Kydland, Finn E. and Prescott, Edward C., "A Competitive Theory of Fluctuations and
the Feasibility and Desirability of Stabilization Policy," in Stanley Fischer, ed.,
Rational Expectations and Economic Policy, National Bureau of Economic Research,
Chicago: The University of Chicago Press, 1980.
Kydland, Finn E. and Prescott, Edward C., "Time to Build and Aggregate Fluctuations,"
Econometrica, November 1982, 50, 1345—70.
Kydland, Finn E. and Prescott, Edward C., "The Work Week of Capital and Its Cyclical
Implications," Journal of Monetary Economics, March/May 1988, 21, 343-60.
Long, John B. and Plosser, Charles I., "Real Business Cycles," Journal of Political
Economy, 1983, 91, 39—69.
Lucas, Robert E. Jr., Studies in Business-Cycle Theory, Cambridge, MA: The MIT Press,
1981.
McCallum, Bennett, Monetary Economics: Theory and Policy, New York: Macmillan
Publishing Company, 1989.

McGrattan, Ellen, "The Macroeconomic Effects of Tax Policy in an Equilibrium Model,"
unpublished manuscript, Department of Economics, Duke University, 1989.
Musgrave, John C., "Government-Owned Fixed Capital in the United States, 1925-79,"
Survey of Current Business, March 1980, 33-43.
Newey, Whitney K. and West, Kenneth D., "A Simple, Positive Semi-Definite,
Heteroskedasticity and Autocorrelation Consistent Covariance Matrix,"
Econometrica, May 1987, 55, 703-708.

Prescott, E. C., "Theory Ahead of Business Cycle Measurement," Federal Reserve Bank of
Minneapolis Quarterly Review, Fall 1986a, 10, 9—22.
Prescott, E. C., "Response to a Skeptic," Federal Reserve Bank of Minneapolis Quarterly
Review, Fall 1986b, 10, 28—32.
Rogerson, R., "Indivisible Labor, Lotteries and Equilibrium," Journal of Monetary
Economics, January 1988, 21, 3—17.
Shapiro, Matthew and Watson, Mark, "Sources of Business Cycle Fluctuations," National
Bureau of Economic Research Working Paper 2589, May 1988.
Tarshis, L., "Changes in Real and Money Wage Rates," Economic Journal, 1939,
XLIX, 150-54.

Footnotes
¹This finding is closely related to McCallum's (1989) observation that existing RBC models generate grossly counterfactual predictions for the correlation between average productivity and output.
2In Keynes' own words: "Thus I am not disputing this vital fact which the classical economists have
(rightly) asserted as indefeasible. In a given state of organisation, equipment and technique, the real
wage earned by a unit of labour has a unique (inverse) correlation with the volume of employment."
(Keynes [1964, p. 17].)
3Although Prescott (1986a) and Kydland and Prescott (1982) never explicitly examine the hours/real
wage correlation implication of the RBC, Prescott (1986a) nevertheless implicitly acknowledges that
failure to account for the Dunlop-Tarshis observation is the key remaining deviation between "economic
theory" and observations. He states (p. 21): "The key deviation is that the empirical labor elasticity of
output is less than predicted by theory." Denote the empirical labor elasticity by η. By definition,
η = corr(y,n)σy/σn, where corr(i,j) is the correlation between i and j, σi is the standard deviation of i,
y is log detrended output and n is log hours. Simple arithmetic yields corr(y−n,n) = [η−1](σn/σ(y−n)).
If, as Prescott claims, the magnitude of σn/σ(y−n) in the RBC is empirically accurate, then saying
that the RBC overstates η is equivalent to stating that it overstates corr(y−n,n). In Prescott's model
corr(y−n,n) is exactly the same as the correlation between real wages and hours worked. (Also, under
log detrending, y−n is log detrended productivity.)
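The identity in footnote 3 is purely algebraic, so it holds exactly for sample moments as well as population moments. A quick numerical check (an illustrative Python sketch, not part of the original analysis; the simulated y and n are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Any joint distribution for log output y and log hours n will do;
# the identity corr(y-n, n) = (eta - 1) * sigma_n / sigma_(y-n) is algebraic.
n = rng.normal(size=100_000)
y = 0.8 * n + rng.normal(size=100_000)

eta = np.corrcoef(y, n)[0, 1] * y.std() / n.std()   # empirical labor elasticity
lhs = np.corrcoef(y - n, n)[0, 1]                   # productivity/hours correlation
rhs = (eta - 1.0) * n.std() / (y - n).std()

assert abs(lhs - rhs) < 1e-10                       # identity holds exactly
```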
4An alternative strategy is pursued by Bencivenga (1987), who allows for shocks to labor suppliers'
preferences. Shapiro and Watson (1988) also allow for unobservable shocks to the labor supply function.
5We can generalize the criterion function (2.1) by writing it as ln(ct) + γV(T−nt) + φ(gt), where φ(·)
is some positive concave function. As long as gt is modeled as an exogenous stochastic process, the
presence of such a term has no impact on the competitive equilibrium. However, the presence of φ(gt) >
0 means that agents do not necessarily feel worse off when gt is increased. The fact that we have set
φ(·) ≡ 0 reflects our desire to minimize notation, not the view that the optimal level of gt is zero.
6Under this assumption, gt is isomorphic to an exogenous shock to preferences and endowments.
Consequently, existing theorems which establish that the competitive equilibrium and the social planning
problem coincide are applicable.
7Christiano (1987a; 1988, ftn. 9, 18) discusses the different properties of the log-linear approximation
which we use here and linear approximations of the sort used by Kydland and Prescott (1982).
8The statements in the text about the relation between (en,dn) and γ are based on the following
experiments involving the divisible and indivisible labor models with α = 0. For the divisible labor
model we computed the decision rule parameters in (2.12) and (2.13) at two sets of model parameter
values. First we set the model parameters to their baseline values reported in the relevant column in
Table 1a. The associated decision rule parameters are reported in the relevant column in Table 3.
Second, we perturbed the baseline parameter values by setting γ = 5.15. With these new parameter
values, k = 9820.22, rk = .95, g = 190.81, dk = −.0017, ek = −.95, λ = .0040, n = 266.26, rn = −.42,
dn = .20, en = .42. We also computed two sets of decision rules for the indivisible labor (α = 0) model.
The first is reported in the relevant column of Table 3, and is based on the baseline parameter values
reported in the relevant column of Table 1a. The second is based on perturbing the baseline parameter
values by setting γ = .0046. The decision rule parameters corresponding to this are k = 9874.80, rk =
.94, g = 190.81, dk = .0023, ek = −.94, λ = .0040, n = 267.74, rn = −.61, dn = .28, en = .61. In each
experiment we found that en and dn increased with γ.
9Let b and c denote the fourth and sixth elements of Ψ2, respectively. Then, after some algebraic
manipulation, σn/σy = b/√(1 + 2cb + b²).

10Let S0 = Σ (k = −∞ to ∞) E[H(Ft+1,Ψ0)][H(Ft+1−k,Ψ0)]′ denote the true spectral density matrix of
H(Ft,Ψ0) at frequency zero. Proceeding as in Hansen (1982) we can estimate S0 by replacing the
population moments in the previous expression by their sample counterparts evaluated at the estimated
parameter values. In order to guarantee that our estimate of S0 is positive definite we use the damped,
truncated covariance estimator discussed in Eichenbaum and Hansen (1988). The results we report were
calculated by truncating after 6 lags.
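The estimator described in footnote 10 can be sketched as follows. The exact damping weights of the Eichenbaum and Hansen (1988) estimator are not reproduced here; this illustrative sketch uses Bartlett weights as in Newey and West (1987), which likewise guarantee a positive semi-definite estimate, together with the 6-lag truncation mentioned in the footnote:

```python
import numpy as np

def spectral_density_zero(h, lags=6):
    """Estimate S0 = sum_k E[h_t h_{t-k}'] (the spectral density of the
    sample moment conditions h at frequency zero) from a (T x m) array,
    truncating after `lags` autocovariances.  Bartlett weights damp the
    higher-order covariances so the estimate is positive semi-definite;
    the damped, truncated estimator of Eichenbaum and Hansen (1988)
    differs only in its choice of weights."""
    h = np.asarray(h, dtype=float)
    h = h - h.mean(axis=0)              # center the moment conditions
    T = h.shape[0]
    S = h.T @ h / T                     # Gamma_0
    for k in range(1, lags + 1):
        w = 1.0 - k / (lags + 1)        # Bartlett damping weight
        G = h[k:].T @ h[:-k] / T        # Gamma_k
        S += w * (G + G.T)
    return S
```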
11Our formal test does not include σn/σy and (σn/σy)hp because these are exact functions of
[corr(y/n,n), σn/σ(y/n)] and [corr(y/n,n)hp, (σn/σ(y/n))hp], respectively (see footnote 9).
12It would be desirable to include in gt a measure of the service flow from the stock of government-
owned capital, since government capital is included in our measure of kt. Unfortunately we know of no
existing measures of that service flow. This contrasts with the case of household capital, for which there
exist estimates of the service flow from housing and the stock of consumer durables. The first is included
in the official measure of consumption of services, and the second is reported in Brayton and Mauskopf
(1985).
13For further details on the data, see Christiano (1987b).
14These results differ in an important way from those in Hansen (1985). Using data processed using the
HP filter, he reports that the indivisible labor model with α = 1 implies a value of (σn/σ(y/n))hp equal
to 2.7 (see Hansen [1985], Table 1). This exceeds the corresponding empirical quantity by over 220%.
Our version of this model (α = 1) underpredicts (σn/σ(y/n))hp by over 20%. The reason for the
discrepancy is that Hansen chooses to model innovations to technology as having a transient effect on
zt, whereas we assume its effect is permanent. Consequently the intertemporal substitution effect of a
shock to technology is considerably magnified in Hansen's version of the model.
15Interestingly, despite the small t statistics associated with the indivisible labor model (α = 0), the J
statistic computed using growth rates is large. This is because the estimated correlation between F1(Ψ)
and F2(Ψ) is −.51. At the same time F1(Ψ) is .4 while F2(Ψ) is .2. Because of the negative correlation
between these statistics, the J statistic, which is computed under the null hypothesis that both are zero,
assigns very low probability to this outcome. The principal reason why this correlation is negative has to
do with the important role played by the sampling uncertainty in ρ. When ρ is high, the model
correlation between yt/nt and nt is low, but the relative volatility of nt and yt/nt is high. In fact, the
correlation between f1(Ψ) and f2(Ψ) is −.95. As it turns out, for the model under consideration, this
correlation is the key empirical determinant of the correlation between F1(Ψ) and F2(Ψ).
16Consistent with this, the value of corr(y/n,n)hp that emerges from the baseline indivisible labor model
(α = 0) with ρ increased to .98 equals .644.
17To see why the new data set generates a higher value of γ, it is convenient to concentrate on the
divisible labor model. The parameter θ is invariant to which data set or model is used. In practice, our
estimator of γ is approximately

     γ = [(1−θ)/(c/y)][(N/n) − 1],

where c/y denotes the sample average of (ct + αgt)/yt, and N/n denotes the sample average of N/nt.
Obviously, γ is a decreasing function of n. The value of n with our baseline set is 320.4, and the implied
value of n/N is .23. In the new data set, n = 257.7 and the implied value of n/N is .19. Our estimates
of γ are different from the one used by Kydland and Prescott (1982). This is because they deduce a
value of γ based on the assumption that n/N = .33. In defending this assumption, Prescott (1986b, p. 15)
states: "Ghez and Becker (1975) find that the household allocates approximately one-third of its
productive time to market activities and two-thirds to nonmarket activities." We cannot find any
statement of this sort in Ghez and Becker (1975).
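The approximation in footnote 17 can be checked with round numbers read off Tables 1a and 3 for the baseline divisible labor model (an illustrative sketch; the inputs are table entries, not the underlying data):

```python
# gamma_hat = [(1 - theta)/(c/y)] * [(N/n) - 1], footnote 17's approximation.
theta = 0.339                  # Table 1a, divisible labor
c_over_y = 0.56 + 0.178        # sample average of (c_t + alpha*g_t)/y_t, Table 3
n_over_N = 0.23                # implied by n = 320.4 in the baseline data set

gamma_hat = ((1.0 - theta) / c_over_y) * (1.0 / n_over_N - 1.0)
# gamma_hat comes out close to the 2.99 reported in Table 1a
```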
18For example, the value of corr(y/n,n)hp that emerges from the baseline indivisible labor model (α = 0)
with γ increased to .0046 equals .684 (see footnote 8 for more details about this computational
experiment).





Table 1a
Model Parameter Estimates (Standard Errors)
Generated by Baseline Dataset1

          Divisible    Indivisible  Divisible    Indivisible
          Labor        Labor        with Gov't   with Gov't
          (α=1)        (α=1)        (α=0)        (α=0)

T         1369         1369         1369         1369

δ         0.0210       0.0210       0.0210       0.0210
          (0.0003)     (0.0003)     (0.0003)     (0.0003)

β         1.03^-0.25   1.03^-0.25   1.03^-0.25   1.03^-0.25

θ         0.339        0.339        0.344        0.344
          (0.006)      (0.006)      (0.006)      (0.006)

γ         2.99         0.00285      3.92         0.00374
          (0.03)       (0.00003)    (0.05)       (0.00005)

λ         0.0040       0.0040       0.0040       0.0040
          (0.0015)     (0.0015)     (0.0015)     (0.0015)

σλ        0.018        0.018        0.018        0.018
          (0.001)      (0.001)      (0.001)      (0.001)

g         186.0        186.0        190.8        190.8
          (10.74)      (10.74)      (7.09)       (7.09)

ρ         0.96         0.96         0.96         0.96
          (0.028)      (0.028)      (0.029)      (0.029)

σμ        0.020        0.020        0.021        0.021
          (0.001)      (0.001)      (0.001)      (0.001)

1Standard errors are reported only for estimated parameters. Other parameters were set a
priori.




Table 1b
Model Parameters (Standard Errors)
Estimated on Alignment-Corrected Data Set1

          Divisible    Indivisible  Divisible    Indivisible
          Labor        Labor        with Gov't   with Gov't
          (α=1)        (α=1)        (α=0)        (α=0)

δ         0.0210       0.0210       0.0210       0.0210
          (0.0003)     (0.0003)     (0.0003)     (0.0003)

θ         0.339        0.339        0.344        0.344
          (0.006)      (0.006)      (0.006)      (0.006)

γ         3.92         0.00353      5.15         0.00463
          (0.03)       (0.00003)    (0.05)       (0.00005)

λ         0.0040       0.0040       0.0040       0.0040
          (0.0015)     (0.0015)     (0.0015)     (0.0015)

σλ        0.012        0.012        0.012        0.012
          (0.0008)     (0.0008)     (0.0008)     (0.0008)

g         144.9        144.9        148.9        148.9
          (22.30)      (22.30)      (19.65)      (19.65)

ρ         0.98         0.98         0.98         0.98
          (0.03)       (0.03)       (0.03)       (0.03)

σμ        0.016        0.016        0.016        0.016
          (0.001)      (0.001)      (0.001)      (0.001)

1Standard errors are reported only for estimated parameters. Other parameters were set a
priori.




Table 2
Decision Rule Parameters of Baseline Models

kt+1 = k(kt/zt-1)^rk [(gt/zt)/g]^dk exp[ek(λt − λ)]
nt = n(kt/zt-1)^rn [(gt/zt)/g]^dn exp[en(λt − λ)]
zt/zt-1 = exp(λt)
log gt = log zt + (1−ρ)log g + ρ[log gt-1 − log zt-1] + μt

          Divisible    Indivisible  Divisible    Indivisible
          Labor        Labor        with Gov't   with Gov't
          (α=1)        (α=1)        (α=0)        (α=0)

k         11,113.4     11,062.75    11,614.36    11,569.25
rk        0.95         0.94         0.95         0.94
g         186.0        186.0        190.8        190.8
dk        0.0          0.0          -0.0020      0.0020
ek        -0.95        -0.94        -0.95        -0.94
λ         0.0040       0.0040       0.0040       0.0040
n         315.52       314.08       314.91       313.68
rn        -0.30        -0.49        -0.38        -0.59
dn        0.0          0.0          0.15         0.23
en        0.30         0.49         0.38         0.59
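To make the laws of motion above concrete, the following sketch simulates the government consumption process in the last equation, using the "Divisible with Gov't" parameter values from Tables 1a and 2. It illustrates the structure of the process (gt inherits the stochastic trend in zt, while gt/zt is a stationary AR(1) around g); it is not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameter values from the "Divisible with Gov't" column (Tables 1a and 2).
gbar, lam, rho, sig_mu = 190.8, 0.0040, 0.96, 0.021

T = 113                       # sample length used in the paper
log_z = lam * np.arange(T)    # trend component of log z_t (technology shocks omitted)
x = np.empty(T)               # x_t = log g_t - log z_t
x[0] = np.log(gbar)
for t in range(1, T):
    # log g_t = log z_t + (1 - rho) log gbar + rho*(log g_{t-1} - log z_{t-1}) + mu_t
    x[t] = (1.0 - rho) * np.log(gbar) + rho * x[t - 1] + sig_mu * rng.normal()
log_g = log_z + x             # g_t drifts with z_t; g_t/z_t stays near gbar
```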




Table 3
Selected First Moment Properties,
Baseline Models1

            Divisible   Indivisible  Divisible   Indivisible  Data2
            Labor       Labor        with Gov't  with Gov't   (1955.4-1983.4)

ct/yt       0.56        0.56         0.56        0.56         0.55
            (0.012)     (0.012)      (0.010)     (0.010)      (0.003)

gt/yt       0.177       0.178        0.176       0.177        0.177
            (0.007)     (0.007)      (0.006)     (0.006)      (0.003)

dkt/yt      0.260       0.260        0.264       0.264        0.269
            (0.009)     (0.010)      (0.009)     (0.009)      (0.002)

kt+1/yt     10.54       10.54        10.68       10.68        10.62
            (0.268)     (0.260)      (0.307)     (0.293)      (0.09)

nt          315.60      314.24       315.19      314.12       320.2
            (3.01)      (4.09)       (4.47)      (5.74)       (1.51)

Δlog ct     0.0040      0.0040       0.0040      0.0040       0.0045
            (0.0017)    (0.0016)     (0.0016)    (0.0016)     (0.0007)

Δlog yt     0.0040      0.0040       0.0040      0.0040       0.0040
            (0.0017)    (0.0017)     (0.0017)    (0.0017)     (0.0014)

Δlog kt     0.0040      0.0040       0.0040      0.0040       0.0047
            (0.0015)    (0.0016)     (0.0015)    (0.0016)     (0.0005)

Δlog gt     0.0040      0.0040       0.0040      0.0040       0.0023
            (0.0019)    (0.0019)     (0.0019)    (0.0019)     (0.0017)

Δlog nt     0.1E-04     0.2E-04      0.1E-04     0.1E-04      0.0002
            (0.0002)    (0.0003)     (0.0003)    (0.0005)     (0.0013)

1Numbers are averages, across 1,000 simulated data sets of length 113 observations each,
of the sample average of the corresponding variable in the first column. Numbers in
parentheses are the standard deviation, across data sets, of the associated statistic.
2Empirical averages, with standard errors.



Table 4a
Second Moment Properties After HP Detrending
Models Estimated Using Baseline Dataset

              Models2
              Divisible   Indivisible  Divisible   Indivisible  U.S. Data3
Statistic1    Labor       Labor        with Gov't  with Gov't   (1955.4-1983.4)
              (α=1)       (α=1)        (α=0)       (α=0)

σc/σy         0.57        0.53         0.49        0.46         0.44
              (0.085)     (0.076)      (0.049)     (0.05)       (0.027)

σdk/σy        2.33        2.45         2.11        2.24         2.24
              (0.16)      (0.17)       (0.16)      (0.17)       (0.062)

σn/σy         0.36        0.50         0.46        0.62         0.86
              (0.004)     (0.006)      (0.02)      (0.03)       (0.060)

σn/σ(y/n)     0.54        0.96         0.79        1.36         1.21
              (0.01)      (0.03)       (0.07)      (0.14)       (0.11)

σg/σy         1.76        1.55         1.66        1.44         1.15
              (0.24)      (0.21)       (0.20)      (0.16)       (0.23)

σy            0.020       0.023        0.021       0.025        0.019
              (0.0026)    (0.003)      (0.003)     (0.003)      (0.001)

corr(y/n,n)   0.95        0.92         0.81        0.73         -0.20
              (0.014)     (0.022)      (0.058)     (0.074)      (0.11)

1All of the statistics in this table are computed after first logging and then detrending the
data using the Hodrick-Prescott (HP) method. σi is the standard deviation of variable i
detrended in this way. corr(x,w) is the correlation between detrended x and detrended
w.
2Average of corresponding statistics in column 1, across 1,000 simulated data sets each of
length 113. Number in parentheses is the associated standard deviation.
3Results for U.S. data. Numbers in parentheses are associated standard errors, computed
as discussed in the text.




Table 4b
Second Moment Properties After Log First Differencing
Models Estimated Using Baseline Dataset

              Models2
              Divisible   Indivisible  Divisible   Indivisible  U.S. Data3
Statistic1    Labor       Labor        with Gov't  with Gov't   (1955.4-1983.4)
              (α=1)       (α=1)        (α=0)       (α=0)

σc/σy         0.55        0.51         0.47        0.44         0.46
              (0.049)     (0.05)       (0.04)      (0.04)       (0.039)

σdk/σy        2.35        2.48         2.12        2.26         1.93
              (0.15)      (0.17)       (0.14)      (0.15)       (0.095)

σn/σy         0.36        0.51         0.46        0.62         1.30
              (0.003)     (0.004)      (0.013)     (0.014)      (0.14)

σn/σ(y/n)     0.55        1.00         0.80        1.41         0.98
              (0.010)     (0.025)      (0.039)     (0.083)      (0.05)

σg/σy         1.76        1.54         1.66        1.43         1.29
              (0.12)      (0.10)       (0.10)      (0.08)       (0.13)

σy            0.016       0.018        0.017       0.019        0.011
              (0.001)     (0.001)      (0.001)     (0.001)      (0.001)

corr(y/n,n)   0.97        0.95         0.84        0.77         -0.71
              (0.016)     (0.023)      (0.030)     (0.040)      (0.07)

1In this table, c, dk, y, y/n, n refer to the first difference of the log of the indicated
variable. Then, σi is the standard deviation of variable i and corr(i,j) is the correlation
between i and j.
2Average of corresponding statistics in column 1, across 1,000 simulated data sets each of
length 113. Number in parentheses is the associated standard deviation.
3Results for U.S. data. Numbers in parentheses are standard errors, computed as
discussed in the text.

Table 5: Diagnostic Results for Baseline Models1

                       corr(y/n,n)         σn/σ(y/n)           J
                       HP       DIFF       HP       DIFF       HP        DIFF

U.S. Data2             -.20     -.71       1.21     .98        -         -
                       (.11)    (.07)      (.11)    (.05)

Divisible Labor3       .951     .960       .543     .548       168.84    1004.33
                       (.11)    (.07)      (.11)    (.05)      {0}       {0}
                       [10.56]  [25.18]    [5.87]   [8.11]

Indivisible Labor      .915     .940       .959     .985       119.29    712.54
                       (.11)    (.07)      (.12)    (.05)      {0}       {0}
                       [10.23]  [24.94]    [2.13]

Divisible with Govt.   .818     .826       .785     .791       62.18     202.32
                       (.14)    (.11)      (.12)    (.05)      {0}       {0}
                       [7.10]   [13.97]    [3.67]   [3.63]

Indivisible with Govt. .737     .759       1.348    1.386      41.46     261.60
                       (.15)    (.12)      (.12)    (.06)      {0}       {0}
                       [6.17]   [12.08]    [1.16]   [7.14]

Notes:
1All results are based on data detrended by the Hodrick-Prescott filter (HP) or the log first difference filter (DIFF), as indicated.
2Point estimates based on U.S. data of the statistic named in the column heading. These numbers are taken directly from Tables 4a and 4b. Number in ( ) is the
associated standard error estimate.
3Number not in parentheses is the value of the statistic named in the column heading implied by the indicated model at its estimated parameter values. Number in ( )
is the standard error of the discrepancy between the statistic reported above and its associated sample value, reported in the corresponding U.S. Data
row. For the "DIFF" columns, this standard error is computed by taking the square root of the appropriate diagonal element of (3.26). The number in [ ]
is the associated t statistic. The J statistic is computed using (3.27). For the "HP" columns these standard errors, t and J statistics are computed using
the analogue of equations (3.26) and (3.27), valid when the data have been transformed by the HP filter. The number in { } is the probability that a
Chi-square with 2 degrees of freedom exceeds the reported value of the associated J statistic.



Table 6: Diagnostic Results for Alignment Corrected Data Set1

                       corr(y/n,n)         σn/σ(y/n)           J
                       HP       DIFF       HP       DIFF       HP        DIFF

U.S. Data              .16      .21        1.64     1.27       -         -
                       (.08)    (.07)      (.16)    (.13)

Divisible Labor        .958     .940       .605     .612       131.35    139.24
                       (.07)    (.07)      (.16)    (.12)      {0}       {0}
                       [10.50]  [10.25]    [6.45]   [5.28]

Indivisible Labor      .946     .915       .959     .985       100.53    111.12
                       (.08)    (.08)      (.16)    (.12)      {0}       {0}
                       [9.43]   [9.02]     [4.23]   [2.30]

Divisible with Govt.   .659     .668       .951     .960       14.55     6.40
                       (.22)    (.21)      (.18)    (.15)      {.0007}   {.041}
                       [2.30]   [2.25]     [3.75]   [2.14]

Indivisible with Govt. .575     .593       1.437    1.476      3.48      9.91
                       (.22)    (.22)      (.19)    (.15)      {.176}    {.007}
                       [1.84]   [1.79]     [1.07]   [1.35]

1See notes to Table 5.

Table 7: Impact of Measurement Error on Diagnostics

PANEL A: Results Based on Baseline Dataset

                       corr(y/n,n)         σn/σ(y/n)           J
                       HP       DIFF       HP       DIFF       HP        DIFF

U.S. Data              -.20     -.71       1.21     .98        -         -
                       (.11)    (.07)      (.11)    (.05)

Divisible Labor        .010     -.638      .766     .897       22.62     15.61
                       (.11)    (.05)      (.11)    (.05)      {0}       {.0024}
                       [1.86]   [1.39]     [4.13]   [1.59]

Indivisible Labor      -.098    -.544      .978     .996       16.11     17.15
                                (.06)                          {.0003}   {.0002}
                                [2.69]     [2.06]

Divisible with Govt.   -.145    -.596      .907     .959       10.14     11.07
                                (.05)      (.11)               {.0064}   {.0039}
                                [2.10]     [2.68]

Indivisible with Govt. -.015    -.508      1.225    1.113      2.20      10.15
                                (.06)               (.06)      {.335}    {.0055}
                       [1.46]   [3.19]              [2.39]

PANEL B: Results Based on Alignment Corrected Data Set

                       corr(y/n,n)         σn/σ(y/n)           J
                       HP       DIFF       HP       DIFF       HP        DIFF

U.S. Data              .16      .21        1.64     1.27       -         -
                       (.08)    (.07)      (.16)    (.13)

Divisible Labor        .326     -.254      .709     .830       45.70     18.22
                                (.22)      (.15)    (.11)      {0}       {.0001}
                                [2.11]     [6.30]   [4.10]

Indivisible Labor      .429     -.132      .969     .993       28.90     5.17
                       (.19)    (.23)      (.16)    (.12)      {0}       {.075}
                       [1.43]   [1.47]     [4.21]   [2.24]

Divisible with Govt.   .238     -.222      .986     .996       14.55     6.82
                                (.21)      (.17)    (.12)      {.0007}   {.033}
                                [2.06]     [3.80]   [2.26]

Indivisible with Govt. .232     -.162      1.365    1.249      2.09      4.13
                                (.20)      (.19)    (.16)      {.35}     {.127}
                                [1.80]     [1.40]   [.14]

Federal Reserve Bank of Chicago
RESEARCH STAFF MEMORANDA, WORKING PAPERS AND STAFF STUDIES
The following lists papers developed in recent years by the Bank’s research staff. Copies of those
materials that are currently available can be obtained by contacting the Public Information Center
(312) 322-5111.
Working Paper Series—A series of research studies on regional economic issues relating to the Sev­
enth Federal Reserve District, and on financial and economic topics.
Regional Economic Issues
*WP-82-1

Donna Craig Vandenbrink

“The Effects of Usury Ceilings:
the Economic Evidence,” 1982

David R. Allardice

“Small Issue Industrial Revenue Bond
Financing in the Seventh Federal
Reserve District,” 1982

WP-83-1

William A. Testa

“Natural Gas Policy and the Midwest
Region,” 1983

WP-86-1

Diane F. Siegel
William A. Testa

“Taxation of Public Utilities Sales:
State Practices and the Illinois Experience”

WP-87-1

Alenka S. Giese
William A. Testa

“Measuring Regional High Tech
Activity with Occupational Data”

WP-87-2

Robert H. Schnorbus
Philip R. Israilevich

“Alternative Approaches to Analysis of
Total Factor Productivity at the
Plant Level”

WP-87-3

Alenka S. Giese
William A. Testa

“Industrial R&D An Analysis of the
Chicago Area”

WP-89-1

William A. Testa

“Metro Area Growth from 1976 to 1985:
Theory and Evidence”

WP-89-2

William A. Testa
Natalie A. Davila

“Unemployment Insurance: A State
Economic Development Perspective”

WP-89-3

Alenka S. Giese

“A Window of Opportunity Opens for
Regional Economic Analysis: BEA Release
Gross State Product Data”

WP-89-4

Philip R. Israilevich
William A. Testa

“Determining Manufacturing Output
for States and Regions”

WP-89-5

Alenka S. Giese

“The Opening of Midwest Manufacturing
to Foreign Companies: The Influx of
Foreign Direct Investment”

WP-89-6

Alenka S. Giese
Robert H. Schnorbus

“A New Approach to Regional Capital Stock
Estimation: Measurement and
Performance”

**WP-82-2

*Limited quantity available.
**Out of print.



Working Paper Series (cont'd)

WP-89-7

William A. Testa

“Why has Illinois Manufacturing Fallen
Behind the Region?”

WP-89-8

Alenka S. Giese
William A. Testa

“Regional Specialization and Technology
in Manufacturing”

WP-89-9

Christopher Erceg
Philip R. Israilevich
Robert H. Schnorbus

“Theory and Evidence of Two Competitive
Price Mechanisms for Steel”

WP-89-10

David R. Allardice
William A. Testa

“Regional Energy Costs and Business
Siting Decisions: An Illinois Perspective”

WP-89-21

William A. Testa

“Manufacturing’s Changeover to Services
in the Great Lakes Economy”

WP-90-1

P.R. Israilevich

“Construction of Input-Output Coefficients
with Flexible Functional Forms”

WP-90-4

Douglas D. Evanoff
Philip R. Israilevich

“Regional Regulatory Effects on
Bank Efficiency”

WP-90-5

Geoffrey J.D. Hewings

“Regional Growth and Development Theory:
Summary and Evaluation”

WP-90-6

Michael Kendix

“Institutional Rigidities as Barriers to Regional
Growth: A Midwest Perspective”

Issues in Financial Regulation

WP-89-11

Douglas D. Evanoff
Philip R. Israilevich
Randall C. Merris

“Technical Change, Regulation, and Economies
of Scale for Large Commercial Banks:
An Application of a Modified Version
of Shepard’s Lemma”

WP-89-12

Douglas D. Evanoff

“Reserve Account Management Behavior:
Impact of the Reserve Accounting Scheme
and Carry Forward Provision”

WP-89-14

George G. Kaufman

“Are Some Banks too Large to Fail?
Myth and Reality”

WP-89-16

Ramon P. De Gennaro
James T. Moser

“Variability and Stationarity of Term
Premia”

WP-89-17

Thomas Mondschean

“A Model of Borrowing and Lending
with Fixed and Variable Interest Rates”

WP-89-18

Charles W. Calomiris

“Do "Vulnerable" Economies Need Deposit
Insurance?: Lessons from the U.S.
Agricultural Boom and Bust of the 1920s”





Working Paper Series (cont'd)

WP-89-23

George G. Kaufman

“The Savings and Loan Rescue of 1989:
Causes and Perspective”

WP-89-24

Elijah Brewer III

“The Impact of Deposit Insurance on S&L
Shareholders’ Risk/Return Trade-offs”

Macro Economic Issues
WP-89-13

David A. Aschauer

“Back of the G-7 Pack: Public Investment and
Productivity Growth in the Group of Seven”

WP-89-15

Kenneth N. Kuttner

“Monetary and Non-Monetary Sources
of Inflation: An Error Correction Analysis”

WP-89-19

Ellen R. Rissman

“Trade Policy and Union Wage Dynamics”

WP-89-20

Bruce C. Petersen
William A. Strauss

“Investment Cyclicality in Manufacturing
Industries”

WP-89-22

Prakash Loungani
Richard Rogerson
Yang-Hoon Sonn

“Labor Mobility, Unemployment and
Sectoral Shifts: Evidence from
Micro Data”

WP-90-2

Lawrence J. Christiano
Martin Eichenbaum

“Unit Roots in Real GNP: Do We Know,
and Do We Care?”

WP-90-3

Steven Strongin
Vefa Tarhan

“Money Supply Announcements and the Market’s
Perception of Federal Reserve Policy”

WP-90-7

Prakash Loungani
Mark Rush

“Sectoral Shifts in Interwar Britain”

WP-90-8

Kenneth N. Kuttner

“Money, Output, and Inflation: Testing
The P-Star Restrictions”

WP-90-9

Lawrence J. Christiano
Martin Eichenbaum

“Current Real Business Cycle Theories
and Aggregate Labor Market Fluctuations”

WP-90-10

S. Rao Aiyagari
Lawrence J. Christiano
Martin Eichenbaum

“The Output, Employment, And Interest Rate
Effects of Government Consumption”




Staff Memoranda—A series of research papers in draft form prepared by members of the Research
Department and distributed to the academic community for review and comment. (Series discon­
tinued in December, 1988. Later works appear in working paper series).

**SM-81-2

George G. Kaufman

“Impact of Deregulation on the Mortgage
Market,” 1981

**SM-81-3

Alan K. Reichert

“An Examination of the Conceptual Issues
Involved in Developing Credit Scoring Models
in the Consumer Lending Field,” 1981

Robert D. Laurent

“A Critique of the Federal Reserve’s New
Operating Procedure,” 1981

George G. Kaufman

“Banking as a Line of Commerce: The Changing
Competitive Environment,” 1981

SM-82-1

Harvey Rosenblum

“Deposit Strategies of Minimizing the Interest
Rate Risk Exposure of S&Ls,” 1982

*SM-82-2

George Kaufman
Larry Mote
Harvey Rosenblum

“Implications of Deregulation for Product
Lines and Geographical Markets of Financial
Institutions,” 1982

*SM-82-3

George G. Kaufman

“The Fed’s Post-October 1979 Technical
Operating Procedures: Reduced Ability
to Control Money,” 1982

SM-83-1

John J. Di Clemente

“The Meeting of Passion and Intellect:
A History of the term ‘Bank’ in the
Bank Holding Company Act,” 1983

SM-83-2

Robert D. Laurent

“Comparing Alternative Replacements for
Lagged Reserves: Why Settle for a Poor
Third Best?” 1983

**SM-83-3

G. O. Bierwag
George G. Kaufman

“A Proposal for Federal Deposit Insurance
with Risk Sensitive Premiums,” 1983

*SM-83-4

Henry N. Goldstein
Stephen E. Haynes

“A Critical Appraisal of McKinnon’s
World Money Supply Hypothesis,” 1983

SM-83-5

George Kaufman
Larry Mote
Harvey Rosenblum

“The Future of Commercial Banks in the
Financial Services Industry,” 1983

SM-83-6

Vefa Tarhan

“Bank Reserve Adjustment Process and the
Use of Reserve Carryover Provision and
the Implications of the Proposed
Accounting Regime,” 1983

SM-83-7

John J. Di Clemente

“The Inclusion of Thrifts in Bank
Merger Analysis,” 1983

SM-84-1

Harvey Rosenblum
Christine Pavel

“Financial Services in Transition: The
Effects of Nonbank Competitors,” 1984

SM-81-4
**SM-81-5





Staff Memoranda (cont'd)

SM-84-2

George G. Kaufman

“The Securities Activities of Commercial
Banks,” 1984

SM-84-3

George G. Kaufman
Larry Mote
Harvey Rosenblum

“Consequences of Deregulation for
Commercial Banking”

SM-84-4

George G. Kaufman

“The Role of Traditional Mortgage Lenders
in Future Mortgage Lending: Problems
and Prospects”

SM-84-5

Robert D. Laurent

“The Problems of Monetary Control Under
Quasi-Contemporaneous Reserves”

SM-85-1

Harvey Rosenblum
M. Kathleen O’Brien
John J. Di Clemente

“On Banks, Nonbanks, and Overlapping
Markets: A Reassessment of Commercial
Banking as a Line of Commerce”

SM-85-2

Thomas G. Fischer
William H. Gram
George G. Kaufman
Larry R. Mote

“The Securities Activities of Commercial
Banks: A Legal and Economic Analysis”

SM-85-3

George G. Kaufman

“Implications of Large Bank Problems and
Insolvencies for the Banking System and
Economic Policy”

SM-85-4

Elijah Brewer, III

“The Impact of Deregulation on The True
Cost of Savings Deposits: Evidence
From Illinois and Wisconsin Savings &
Loan Association”

SM-85-5

Christine Pavel
Harvey Rosenblum

“Financial Darwinism: Nonbanks—
and Banks—Are Surviving”

SM-85-6

G. D. Koppenhaver

“Variable-Rate Loan Commitments,
Deposit Withdrawal Risk, and
Anticipatory Hedging”

SM-85-7

G. D. Koppenhaver

“A Note on Managing Deposit Flows
With Cash and Futures Market
Decisions”

SM-85-8

G. D. Koppenhaver

“Regulating Financial Intermediary
Use of Futures and Option Contracts:
Policies and Issues”

SM-85-9

Douglas D. Evanoff

“The Impact of Branch Banking
on Service Accessibility”

SM-86-1

George J. Benston
George G. Kaufman

“Risks and Failures in Banking:
Overview, History, and Evaluation”

SM-86-2

David Alan Aschauer

“The Equilibrium Approach to Fiscal
Policy”





Staff Memoranda (cont'd)

SM-86-3

George G. Kaufman

“Banking Risk in Historical Perspective”

SM-86-4

Elijah Brewer III
Cheng Few Lee

“The Impact of Market, Industry, and
Interest Rate Risks on Bank Stock Returns”

SM-87-1

Ellen R. Rissman

“Wage Growth and Sectoral Shifts:
New Evidence on the Stability of
the Phillips Curve”

SM-87-2

Randall C. Merris

“Testing Stock-Adjustment Specifications
and Other Restrictions on Money
Demand Equations”

SM-87-3

George G. Kaufman

“The Truth About Bank Runs”

SM-87-4

Gary D. Koppenhaver
Roger Stover

“On The Relationship Between Standby
Letters of Credit and Bank Capital”

SM-87-5

Gary D. Koppenhaver
Cheng F. Lee

“Alternative Instruments for Hedging
Inflation Risk in the Banking Industry”

SM-87-6

Gary D. Koppenhaver

“The Effects of Regulation on Bank
Participation in the Market”

SM-87-7

Vefa Tarhan

“Bank Stock Valuation: Does
Maturity Gap Matter?”

SM-87-8

David Alan Aschauer

“Finite Horizons, Intertemporal
Substitution and Fiscal Policy”

SM-87-9

Douglas D. Evanoff
Diana L. Fortier

“Reevaluation of the Structure-Conduct-Performance Paradigm in Banking”

SM-87-10

David Alan Aschauer

“Net Private Investment and Public Expenditure
in the United States 1953-1984”

SM-88-1

George G. Kaufman

“Risk and Solvency Regulation of
Depository Institutions: Past Policies
and Current Options”

SM-88-2

David Aschauer

“Public Spending and the Return to Capital”

SM-88-3

David Aschauer

“Is Government Spending Stimulative?”

SM-88-4

George G. Kaufman
Larry R. Mote

“Securities Activities of Commercial Banks:
The Current Economic and Legal Environment”

SM-88-5

Elijah Brewer, III

“A Note on the Relationship Between
Bank Holding Company Risks and Nonbank
Activity”

SM-88-6

G. O. Bierwag
George G. Kaufman
Cynthia M. Latta

“Duration Models: A Taxonomy”

G. O. Bierwag
George G. Kaufman

“Durations of Nondefault-Free Securities”





Staff Memoranda (cont'd)

SM-88-7

David Aschauer

“Is Public Expenditure Productive?”

SM-88-8

Elijah Brewer, III
Thomas H. Mondschean

“Commercial Bank Capacity to Pay
Interest on Demand Deposits:
Evidence from Large Weekly
Reporting Banks”

SM-88-9

Abhijit V. Banerjee
Kenneth N. Kuttner

“Imperfect Information and the
Permanent Income Hypothesis”

SM-88-10

David Aschauer

“Does Public Capital Crowd out
Private Capital?”

SM-88-11

Ellen Rissman

“Imports, Trade Policy, and
Union Wage Dynamics”

Staff Studies—A series of research studies dealing with various economic policy issues on a national
level.
SS-83-1
**SS-83-2

Harvey Rosenblum
Diane Siegel
Gillian Garcia

“Competition in Financial Services:
the Impact of Nonbank Entry,” 1983
“Financial Deregulation: Historical
Perspective and Impact of the Garn-St
Germain Depository Institutions Act
of 1982,” 1983
