
MEASURES OF FIT FOR CALIBRATED MODELS
Mark W. Watson
Working Paper Series
Macro Economic Issues
Research Department
Federal Reserve Bank of Chicago
May, 1991 (WP-91-9)

First Draft: February 22, 1990
This Draft: April 30, 1991

Measures of Fit for Calibrated Models

Mark W. Watson
Department of Economics
Northwestern University
Evanston, IL 60208
and
Federal Reserve Bank of Chicago, Chicago, IL 60604

This research has benefited from constructive comments by many seminar
participants; in particular I thank John Cochrane, Marty Eichenbaum, Jon
Faust, Lars Hansen, Robert Hodrick, Robert King and Robert Lucas. The first
draft of this paper was written while I was visiting the University of
Chicago, whose hospitality is gratefully acknowledged. This research was
supported by the National Science Foundation through grant SES-89-10601.




Measures of Fit for Calibrated Models

Abstract
This paper develops a new procedure for assessing how well a given dynamic economic model describes a set of economic time series. To answer the question, the variables in the model are augmented with just enough error so that the model can exactly mimic the second moment properties of the actual data. The properties of this error provide a useful diagnostic for the economic model, since they show the dimensions in which the model fits the data relatively well and the dimensions in which it fits the data relatively poorly.

Mark W. Watson
Department of Economics
Northwestern University
Evanston, IL 60208




1. Introduction

The appropriate method for assessing the empirical relevance of economic models has been debated by economists for many years. The standard econometric approach can be traced back to Haavelmo (1944), who argued that an economic model should be embedded within a complete probability model and analyzed using statistical methods designed for conducting inference about unknown probability distributions. The appeal of this approach follows from interpreting the probability distribution as a likelihood function, which in turn provides the basis for a unified theory of estimation and inference. In the modern literature, this approach is clearly exemplified in work like that of Hansen and Sargent (1980) or McFadden (1981). However, many economic models do not provide a realistic and complete probability structure for the variables under consideration. Using the standard econometric approach these models must be discarded as empirically irrelevant, or augmented in some way with additional random components. Inferences drawn from these augmented models are meaningful only to the extent that the additional random components do not mask or change the salient features of the original economic models.

Another econometric approach, markedly different from the one advocated by Haavelmo, is becoming increasingly popular in empirical macroeconomics. This approach, which I'll call calibration/simulation, is most clearly articulated in the work of Kydland and Prescott (1982) and Prescott (1986). In a general sense, calibration/simulation asks whether data from a real economy share certain characteristics with data generated by the artificial economy described by an economic model. There is no claim that the model explains all of the characteristics of the actual data, nor is there any attempt to augment the model with additional random components to more accurately describe the data. Because of this, calibration/simulation results are often easier to interpret than results from traditional econometric analysis, since the economic model is not complicated by additional random elements added solely for statistical convenience. Yet, inference procedures for calibration/simulation lack statistical foundations and are necessarily ad hoc, since the economic model does not provide a complete probability structure. For example, a researcher may determine that a model fits the data well because it implies moments for the variables under study that are "close" to the moments of the actual data, even though the metric used to determine the distance between the moments is left unspecified.
This paper is an attempt to put the latter approach on a less ad hoc foundation by developing goodness of fit measures for the class of dynamic econometric models whose endogenous variables follow covariance stationary processes. It is not assumed that the model accurately describes data from an actual economy; the economic model is not a null hypothesis in the statistical sense. Rather, the economic model is viewed as an approximation to the stochastic processes generating the actual data, and goodness of fit measures are proposed to measure the quality of this approximation. A standard device -- stochastic error -- is used to motivate the goodness of fit measures. These measures answer the question: "How much random error would have to be added to data generated by the model so that the autocovariances implied by the model+error match the autocovariances of the observed data?"

The error represents the degree of abstraction of the model from the data. Since the error can't be attributed to a data collection procedure or to a forecasting procedure, etc., it is difficult a priori to say much about its properties; in particular its covariance with the observed data cannot be restricted by a priori reasoning. Rather than making a specific assumption about the error's covariance properties, a representation is constructed which minimizes the contribution of the error in the complete model. Thus, in this sense, the error process is chosen to make the model as close to the data as possible.

Many of the ideas in this paper are close to, and were motivated by, ideas in Altug (1989) and Sargent (1989). Altug (1989) showed how a one-shock real business cycle model, similar to the model developed in Kydland and Prescott (1982), could be analyzed using standard dynamic econometric methods, by augmenting each variable in the model with an idiosyncratic error. This produced a restricted version of the dynamic factor analysis or unobserved index models developed by Sargent and Sims (1977) and Geweke (1977). Sargent (1989) discusses two models of measurement error; in the first the measurement error is uncorrelated with the data generated by the model, and in the second it is uncorrelated with the sample data.¹ While similar in spirit, the approach taken in this paper differs from that of Altug and Sargent in two respects. First, in this paper, the error process is not assumed to be uncorrelated with the model's artificial data or with the actual data. Rather, the correlation properties of the error process are determined by the requirement that the variance of the error is as small as possible. Second, the joint data-error process is introduced to motivate goodness of fit measures; it is not introduced to describe a statistical model that can be used to estimate unknown parameters or to test statistical hypotheses.²

The minimum approximation error representation motivates two sets of statistics that can be used to evaluate the goodness of fit of the economic model. First, the variance of the approximation error can be used, like the variance of the error in a regression model, to form an "R²" measure for each variable in the model. This provides an overall measure of fit. Moreover, since the model is dynamic, spectral methods can be used to calculate the R² measure for each frequency. These can be used, for example, to measure the fit over the "business cycle" or "growth" frequencies. Second, the minimum measurement error representation can be used to form fitted values of the variables in the economic model using actual data. These fitted values show how well the model explains specific historical episodes; for example, can a real business cycle model simultaneously explain the growth in the 1960's and the 1981-1982 recession?

The plan of the paper is as follows. The next section develops the minimum approximation error representation and goodness of fit measures. The third section calculates these goodness of fit statistics for a standard real business cycle model using post-war U.S. macroeconomic data on output, consumption, investment and employment. The fourth section discusses a variety of statistical issues, and the fifth section concludes.

2. Measures of Fit

Consider an economic model that describes the evolution of an n×1 vector of variables x_t. Assume that the variables in the model have been transformed, say by first differencing or forming ratios, so that x_t is covariance stationary. As a notational device, it is useful to introduce the autocovariance generating function (ACGF) of x_t, A_x(z), which completely summarizes the unconditional second moment properties of the process. In what follows "economic model" and "A_x(z)" will be used interchangeably: the analysis considers only the unconditional second moment implications of the model. Nonlinearities and variation in conditional second and higher moments are ignored to keep the problem tractable. The analysis will also ignore the unconditional first moments of x_t; modifying the measures of fit for differences in the means of the variables is straightforward.

The empirical counterparts of x_t are denoted y_t. These variables differ from x_t in an important way. The variables making up x_t correspond to the variables appearing in the theorist's simplification of reality; in a macroeconomic model they are variables like "output," "money" and the "interest rate." The variables making up y_t are functions of raw data collected in a real economy; they are variables like "per capita Gross National Product in the United States in 1982 dollars," "U.S. M2" and "the yield on 3 Month U.S. Treasury Bills."

The question of interest is whether the model generates data with characteristics similar to those of the data from the real economy. Below, goodness of fit measures are proposed to help answer this question. Before introducing these new measures, it is useful to review standard statistical goodness of fit measures to highlight their deficiencies for answering the question at hand.
Standard measures of fit use the size of sampling error to judge the coherence of the model with the data. They are based on the following: First, A_y(z), the population ACGF of the data, is unknown but can be estimated from sample data. Discrepancies between the estimator Â_y(z) and A_y(z) arise solely from sampling error in Â_y(z), and the likely size of this error can be deduced from the stochastic process that generated the sample. Now, if A_y(z) = A_x(z), sampling error also accounts for the differences between Â_y(z) and A_x(z). Standard goodness of fit measures show how likely it is that A_y(z) = A_x(z), based on the probability that differences between Â_y(z) and A_x(z) arise solely from sampling error. If the differences between Â_y(z) and A_x(z) are so large as to be unlikely, standard measures of fit suggest that the model fits the data poorly, and vice versa if the differences between Â_y(z) and A_x(z) are not so large as to be unlikely. The key point is that the differences between A_y(z) and A_x(z) are judged by how informative the sample is about the population moments of y_t. This is a sensible procedure for judging the coherence of a null hypothesis, A_y(z) = A_x(z), with the data, but is arguably less sensible when this null hypothesis is known to be false.

Rather than rely on sampling error, the measures of fit proposed here are based on the size of the stochastic error required to reconcile the autocovariances of x_t with those of y_t. In particular, letting u_t denote an n×1 error vector, the importance of the difference between A_x(z) and A_y(z) will be determined by asking: "How much error would have to be added to x_t so that the autocovariances of x_t + u_t are equal to the autocovariances of y_t?" If the variance of the required error is large then the discrepancy between A_x(z) and A_y(z) is large, and conversely if the variance of u_t is small. The vector u_t is the approximation error in the economic model interpreted as a stochastic process. It captures the (second moment) characteristics of the observed data that are not captured by the model. Loosely speaking, it is analogous to the error term in a regression where the set of regressors is interpreted as the economic model. The economic model might be deemed a good approximation to the data if the variance of the error term is small (i.e., the R² of the regression is large) and might be deemed a poor approximation if the variance of the error term is large (i.e., the R² of the regression is small).

To be more precise, assume that x_t and y_t are jointly covariance stationary and define the error u_t by the equation

(2.1)   u_t = y_t - x_t,

so that

(2.2)   A_u(z) = A_y(z) + A_x(z) - A_xy(z) - A_yx(z),

where A_u(z) is the autocovariance generating function of u_t, A_xy(z) is the cross autocovariance generating function between x_t and y_t, etc. From the right hand side of (2.2), three terms are needed to calculate A_u(z). The first, A_y(z), can be consistently estimated from sample data; the second, A_x(z), is completely determined by the model; but the third, A_xy(z), is neither determined by the model nor can it be estimated from the data, since this would require a sample drawn from the joint (x_t, y_t) process. To proceed, an assumption is necessary.
A common assumption used in econometric analysis is that A_xy(z) = A_x(z), so that x_t and u_t are uncorrelated at all leads and lags. Equation (2.1) can then be interpreted as the dynamic analogue of the classical errors-in-variables model. Sargent (1989) discusses this assumption and an alternative assumption, A_xy(z) = A_y(z). He points out that under this latter assumption, u_t can be interpreted as signal extraction error, with y_t an optimal estimate of the unobserved "signal" x_t.³ In many applications, these covariance restrictions follow from the way that the data were collected or the way expectations are formed. For example, if x_t represented the true value of the U.S. unemployment rate and y_t the value published by the U.S. Department of Labor, then y_t would differ from x_t because of the sampling error inherent in the monthly Current Population Survey (CPS) from which y_t is derived. The sample design underlying the CPS implies that the error, u_t, is statistically independent of x_t. Similarly, if y_t denoted a rational expectation of x_t, then the error would be uncorrelated with y_t. Neither of these assumptions seems appropriate in the present context. The error isn't the result of imprecise measurement; it isn't a forecast or signal extraction error. Rather, it represents approximation or abstraction error in the economic model. Any restriction used to identify A_xy(z), and hence A_u(z), is arbitrary.⁴

It is possible, however, to calculate a lower bound for the variance of u_t without imposing any restrictions on A_xy(z). When this lower bound on the variance of u_t is large, then under any assumption about A_xy(z), the model fits the data poorly. If the lower bound on the variance of u_t is small, then there are possible assumptions about A_xy(z) that imply that the model fits the data well. Thus, this bound is potentially useful for rejecting models based on their empirical fit. Needless to say, models that appear to fit the data well using this bound require further scrutiny.

The bound is calculated by choosing A_xy(z) to minimize the variance of u_t subject to the constraint that the implied joint autocovariance generating function of x_t and y_t is positive semi-definite. Equivalently, since the spectrum is proportional to the autocovariance generating function evaluated at z = e^{-iω}, the cross spectrum between x_t and y_t, A_xy(e^{-iω}), must be chosen so that the spectral density matrix of (x_t' y_t')' is positive semi-definite at all frequencies.

Since the measures of fit proposed in this paper are based on the solution to this minimization problem and the implied minimum approximation error representation of the (x_t, y_t) process, it is useful to discuss the problem and its solution in detail. This is done by considering a few simple models before proceeding to the general case. Four models are considered. The first model is very simple, and the solution follows by inspection. The second model is more complicated than the first, the third more complicated than the second, etc. In the first model, x_t and y_t are scalar serially uncorrelated random variables. In the second model, x_t and y_t are serially uncorrelated random vectors with non-singular covariance matrices. Since many economic models contain fewer sources of noise than variables, x_t is allowed to have a singular covariance matrix in the third model. Finally, in the last model, x_t and y_t are allowed to be serially correlated. After discussing these four models in general terms, an example is presented.

Model 1:

Suppose that x_t, y_t and u_t are scalar serially uncorrelated random variables. The problem is to choose σ_xy to minimize the variance of u_t,

        σ_u² = σ_x² + σ_y² - 2σ_xy,

subject to the constraint that the covariance matrix of x_t and y_t remains positive semidefinite, i.e., |σ_xy| ≤ σ_x σ_y. The solution sets σ_xy = σ_x σ_y and yields σ_u² = (σ_x - σ_y)² as the minimum. Since σ_xy = σ_x σ_y, x_t and y_t are perfectly correlated, with

(2.3)   x_t = γ y_t,

where γ = σ_x/σ_y. Equation (2.3) is important because it shows how to calculate fitted values of x_t, given data on y_t. Variants of equation (2.3) will hold for all of the models considered. In each model, the minimum approximation error representation makes {x_t} perfectly correlated with {y_t}. In each model, the analogue of (2.3) provides a formula for calculating the fitted values of the variables in the model given data from the actual economy.
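The scalar solution is easy to check numerically. Below is a minimal sketch (the standard deviations are invented for illustration): it computes the minimized error variance and the fitted values implied by equation (2.3).

```python
# A minimal sketch of Model 1, with hypothetical standard deviations.
# The solution sets sigma_xy = sigma_x * sigma_y, so the minimized error
# variance is (sigma_x - sigma_y)^2 and fitted values follow equation (2.3).
import numpy as np

sigma_x, sigma_y = 0.8, 1.0                  # hypothetical standard deviations

sigma_u2 = sigma_x**2 + sigma_y**2 - 2 * sigma_x * sigma_y
gamma = sigma_x / sigma_y                    # coefficient in x_t = gamma * y_t

y = np.random.default_rng(0).normal(scale=sigma_y, size=5)
x_fitted = gamma * y                         # fitted model variables from data

print(sigma_u2, (sigma_x - sigma_y)**2)      # both 0.04: the lower bound
```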

Model 2:

Now suppose that x_t and y_t are serially uncorrelated random vectors with nonsingular covariance matrices Σ_x and Σ_y respectively. Let Σ_u = Σ_x + Σ_y - Σ_xy - Σ_yx denote the covariance matrix of u_t. Since Σ_u is a matrix, there is no unique definition of a "small" variance for u_t. Any metric comparing Σ_u with 0 will do. A convenient measure of the size of the variance of u_t is the trace of Σ_u, tr(Σ_u) = Σ_{i=1}^n Σ_u,ii, where Σ_u,ij denotes the ij'th element of Σ_u. While convenient, this measure is not always ideal, since it weights all variables equally. When the units of the variables are different, or when the researcher cares about certain variables more than others, unequal weighting might be preferred, say:

(2.4)   Σ_{i=1}^n Σ_u,ii w_i,

where w_i, i=1,...,n, are a set of nonzero constants or weights.

The appendix shows how Σ_xy can be chosen to minimize (2.4) subject to the constraint that the covariance matrix for (x_t' y_t')' is positive semidefinite. There it is shown that the solution sets Σ_xy = C_x'R'C_y, where C_x and C_y are arbitrary "square roots" of Σ_x and Σ_y (i.e., Σ_x = C_x'C_x and Σ_y = C_y'C_y; so for example, C_x and C_y can be the Cholesky factors of Σ_x and Σ_y). The orthonormal matrix R is a function of C = C_x W C_y', where W is a diagonal matrix with w_i as the i'th diagonal element. In particular, writing C'C = DΛD', where the columns of D contain the orthonormal eigenvectors of C'C and Λ is a diagonal matrix with the corresponding eigenvalues on the diagonal, the matrix R can be written as R = DΛ^{-1/2}D'C'.

One important implication of this solution is that, like the scalar example, the joint covariance matrix of (x_t' y_t')' is singular and x_t can be represented as

(2.5)   x_t = Γ y_t,

where Γ = C_x'R'(C_y')^{-1}. (Since R is orthonormal, this simplifies to the scalar result when x_t and y_t are scalars.)
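As a concrete illustration, the sketch below (with made-up 2×2 covariance matrices and equal weights) builds R and the minimizing cross covariance from Cholesky factors, following the formulas above, and forms Γ so that fitted values x_t = Γy_t could be computed.

```python
# A sketch of the Model 2 solution with hypothetical covariance matrices.
# Following the appendix: C = C_x W C_y', C'C = D diag(lam) D', and
# R = D diag(lam)^{-1/2} D' C' is orthonormal; Sigma_xy = C_x' R' C_y attains
# the minimized weighted trace, and Gamma gives fitted values x_t = Gamma y_t.
import numpy as np

Sigma_x = np.array([[1.0, 0.3], [0.3, 0.5]])    # hypothetical model covariance
Sigma_y = np.array([[1.2, 0.4], [0.4, 0.9]])    # hypothetical data covariance
W = np.eye(2)                                    # equal weights w_i = 1

Cx = np.linalg.cholesky(Sigma_x).T               # Sigma_x = Cx' Cx
Cy = np.linalg.cholesky(Sigma_y).T               # Sigma_y = Cy' Cy
C = Cx @ W @ Cy.T

lam, D = np.linalg.eigh(C.T @ C)                 # eigenvectors/values of C'C
R = D @ np.diag(lam ** -0.5) @ D.T @ C.T         # orthonormal: R @ R.T = I

Sigma_xy = Cx.T @ R.T @ Cy                       # minimizing cross covariance
Sigma_u = Sigma_x + Sigma_y - Sigma_xy - Sigma_xy.T
Gamma = Cx.T @ R.T @ np.linalg.inv(Cy.T)         # x_t = Gamma y_t, eq. (2.5)

print(np.trace(W @ Sigma_u))                     # minimized weighted trace
print(np.allclose(Gamma @ Sigma_y @ Gamma.T, Sigma_x))   # fitted x has the model covariance
```

Note the check in the last line: the fitted values Γy_t reproduce the model's covariance matrix exactly, which is the matrix analogue of the perfect correlation found in Model 1.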

Model 3:

In many economic models, the number of variables exceeds the number of shocks. In this case Σ_x is singular, and the solution derived in the appendix for non-singular Σ_x is not immediately applicable. The solution can be applied to a slightly modified problem, however. Suppose that Σ_x has rank k < n. Then the analysis for Model 2 can be applied to a k×1 subset of the elements of x_t and y_t. In particular, let S be a k×n matrix such that SΣ_xS' has full rank. Let x̃_t = Sx_t, ỹ_t = Sy_t, Σ_x̃ = SΣ_xS' and Σ_ỹ = SΣ_yS'. The results for Model 2 can then be used to find the value of cov(x̃_t, ỹ_t) that minimizes the (weighted trace of the) variance of ũ_t = x̃_t - ỹ_t. Moreover, from (2.5), the solution of the minimum variance problem implies that

(2.6)   x̃_t = Γ̃ỹ_t = Γ̃Sy_t,

where Γ̃ is the analogue of Γ in (2.5) constructed using Σ_x̃ and Σ_ỹ in place of Σ_x and Σ_y.

Now, since Σ_x and SΣ_xS' both have rank k, it is possible to express x_t as a linear combination of the elements of x̃_t. In particular, x_t = Bx̃_t, where the n×k matrix B is easy to compute from Σ_x and the matrix S.⁵ Thus,

(2.7)   x_t = Bx̃_t = BΓ̃ỹ_t = BΓ̃Sy_t,

so that Σ_xy = BΓ̃SΣ_y.

Model 4:

This same approach can be used in a dynamic multivariate model with slight modifications: when u_t is serially correlated, the weighted trace of the spectral density matrix, rather than the covariance matrix, can be minimized.

To motivate the approach, it is useful to use the Cramér representations for x_t, y_t and u_t (see, e.g., Brillinger [1981], section 4.6). Assume that x_t, y_t and u_t are jointly covariance stationary with mean zero; the Cramér representations can be written as:

(2.8)   x_t = ∫_0^{2π} e^{iωt} dz_x(ω)
        y_t = ∫_0^{2π} e^{iωt} dz_y(ω)
        u_t = ∫_0^{2π} e^{iωt} dz_u(ω),

where dz(ω) = (dz_x(ω)' dz_y(ω)' dz_u(ω)')' is a complex valued vector of orthogonal increments, with E[dz(ω)dz(λ)'] = δ(ω-λ)S(ω)dωdλ, where δ(ω-λ) is the Dirac delta and S(ω) is the spectral density matrix of (x_t' y_t' u_t')' at frequency ω. Equation (2.8) represents x_t, y_t, and u_t as the integral (sum) of increments dz_x(ω), dz_y(ω) and dz_u(ω) which are uncorrelated across frequencies and have variances and covariances given by the spectra and cross spectra of x_t, y_t, and u_t. Since the spectra are proportional to the autocovariance generating functions evaluated at z = e^{-iω}, E[dz_x(ω)dz_x(ω)'] is proportional to A_x(e^{-iω}), E[dz_x(ω)dz_y(ω)'] is proportional to A_xy(e^{-iω}), etc.

Now consider the problem of choosing A_xy(z) to minimize the variance of u_t. Since u_t can be written as the integral of the uncorrelated increments dz_u(ω), the variance of u_t can be minimized by minimizing the variance of dz_u(ω) for each ω. Since the increments are uncorrelated across frequency, the minimization problems can be solved independently for each frequency. Thus, the analysis carried out for Models 1-3 carries over directly, with spectral density matrices replacing covariance matrices. The minimum trace problems for Models 2 and 3 are now solved frequency by frequency using the spectral density matrix. In principle this introduces additional flexibility into the representation since the weights, w_i, in the objective function (2.4) can depend on frequency, as can the matrix S used for Model 3 to select the variables of interest.

Like Models 1-3, the solution yields:

(2.9)   dz_x(ω) = Γ(ω)dz_y(ω),

where Γ(ω) is the complex analogue of Γ from (2.5) when the spectral density matrix of x_t is non-singular, and the analogue of BΓ̃S from (2.7) when the spectral density matrix of x_t is singular. Equation (2.9) implies

(2.10)  A_xy(e^{-iω}) = Γ(ω)A_y(e^{-iω}), and

(2.11)  A_u(e^{-iω}) = A_x(e^{-iω}) + A_y(e^{-iω}) - A_xy(e^{-iω}) - A_yx(e^{-iω}).

The variances and covariances of u_t and all of its autocovariances follow directly from (2.11). Moreover, since dz_x(ω) and dz_y(ω) are perfectly correlated from (2.9), x_t can be expressed as a function of leads and lags of y_t:

(2.12)  x_t = β(L)y_t,

where β(L) = Σ_j β_j L^j, with β_j = (2π)^{-1} ∫_0^{2π} Γ(ω)e^{iωj}dω. Thus, fitted values of x_t can be calculated from leads and lags of y_t.
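To make the frequency-by-frequency logic concrete, the sketch below applies the scalar (Model 1) solution at each point of a frequency grid, using two assumed AR(1) spectra for the model and the data; gamma_w and s_u are the scalar analogues of Γ(ω) in (2.9) and the minimized error spectrum in (2.11).

```python
# A sketch of Model 4 in the scalar case: the minimization is solved
# frequency by frequency, with spectral densities replacing variances.
import numpy as np

omegas = np.linspace(1e-3, np.pi, 256)
s_x = 1.0 / np.abs(1 - 0.5 * np.exp(-1j * omegas))**2   # assumed AR(1) model spectrum
s_y = 1.0 / np.abs(1 - 0.8 * np.exp(-1j * omegas))**2   # assumed AR(1) data spectrum

gamma_w = np.sqrt(s_x / s_y)                # gain in dz_x(w) = gamma_w * dz_y(w)
s_u = (np.sqrt(s_x) - np.sqrt(s_y))**2      # minimized error spectrum at each frequency
```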

An Example:

The model considered in the next section describes the dynamic properties of output, consumption, investment and labor supply as functions of a single productivity shock. The mechanics of the minimum approximation error representation for that model can be demonstrated in a model in which x_t and y_t are bivariate, and the elements of x_t are driven by a single iid(0,1) shock ε_t. Letting x_t¹, x_t², y_t¹, and y_t² denote the elements of x_t and y_t, suppose

(2.13)  x_t = [α_1(L)  α_2(L)]' ε_t,

where α_1(L) and α_2(L) are scalar polynomials in the lag operator. Thus,

(2.14)  A_x(z) = [ A_x,11(z)   A_x,12(z) ]  =  [ α_1(z)α_1(z^{-1})   α_1(z)α_2(z^{-1}) ]
                 [ A_x,21(z)   A_x,22(z) ]     [ α_2(z)α_1(z^{-1})   α_2(z)α_2(z^{-1}) ].

Assume that the data y_t have a full rank ACGF, given by

(2.15)  A_y(z) = [ A_y,11(z)   A_y,12(z) ]
                 [ A_y,21(z)   A_y,22(z) ].

Since the spectrum of x_t has rank 1, the procedure outlined for Model 3 (modified for serially correlated data) is appropriate. Let S = [1 0], so that x̃_t = x_t¹. This choice of S means that A_xy(z) will be chosen to minimize the variance of u_t¹ = x_t¹ - y_t¹. Let dz_x1(ω), dz_y1(ω) and dz_u1(ω) denote the first elements of dz_x(ω), dz_y(ω) and dz_u(ω). Since dz_u1(ω) is a scalar, the solution to the minimum variance problem is the complex analogue of the solution described for Model 1. In particular, the solution sets:

(2.16)  dz_x1(ω) = δ(ω)dz_y1(ω),

where δ(ω) = [A_x,11(e^{-iω})/A_y,11(e^{-iω})]^{1/2}. Since the x_t process is singular, dz_x2(ω) is perfectly correlated with dz_x1(ω); in particular, from (2.13):

(2.17)  dz_x(ω) = B(ω)dz_x1(ω),

where B(ω) = [1   α_2(e^{-iω})/α_1(e^{-iω})]'. Thus:

(2.18)  dz_x(ω) = B(ω)δ(ω)S dz_y(ω),

so that

(2.19)  A_xy(e^{-iω}) = B(ω)δ(ω)S A_y(e^{-iω}),

and A_u(e^{-iω}) follows from (2.11).
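The sketch below works through these formulas at a single frequency, with invented lag polynomials α_1(L) and α_2(L) and an assumed value for the data spectrum A_y,11; none of these numbers come from the paper.

```python
# A sketch of the bivariate example: delta(w) is the gain in (2.16) and
# B(w) in (2.17) spreads the fitted first element to the second element,
# since the x_t process has rank one.
import numpy as np

omega = 2 * np.pi / 10                      # roughly a business cycle frequency
z = np.exp(-1j * omega)

def alpha1(z): return 1.0 / (1 - 0.9 * z)   # assumed alpha_1
def alpha2(z): return 0.5 / (1 - 0.9 * z)   # assumed alpha_2

A_x11 = (alpha1(z) * alpha1(1 / z)).real    # alpha_1(z) * alpha_1(z^{-1}), real on |z| = 1
A_y11 = 2.0                                  # assumed data spectrum at omega

delta = np.sqrt(A_x11 / A_y11)              # gain in (2.16)
B = np.array([1.0, (alpha2(z) / alpha1(z)).real])   # B(w); real here since the ratio is 0.5

# dz_x(w) = B(w) * delta(w) * dz_y1(w): both elements of the fitted x_t are
# driven by the single series y_t^1, reflecting the model's singularity.
```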

Relative Mean Square Approximation Error:

A bound on the relative mean square approximation error for the economic model can be calculated directly from (2.11). This bound -- analogous to a lower bound on 1-R² from a regression -- is:

(2.20)  r_j(ω) = [A_u(z)]_jj / [A_y(z)]_jj,   z = e^{-iω},

where [A_u(z)]_jj and [A_y(z)]_jj are the j'th diagonal elements of A_u(z) and A_y(z) respectively. Thus, r_j(ω) is the variance of the j'th component of dz_u(ω) relative to the variance of the j'th component of dz_y(ω), i.e., the variance of the error relative to the variance of the data at each frequency. A plot of r_j(ω) against frequency shows how well the economic model fits the data over different frequencies. Integrating the numerator and denominator of r_j(ω) provides an overall measure of fit. Note that since u_t and x_t are correlated, r_j(ω) can be larger than 1, i.e., the R² of the model can be negative.

One advantage of r_j(ω) is that it is unaffected by time invariant linear filters applied to the variables. Filtering merely multiplies both the numerator and denominator of r_j(ω) by the same constant, the squared gain of the filter. So, for example, r_j(ω) is invariant to Hodrick-Prescott filtering (see Hodrick and Prescott [1980] and King and Rebelo [1989]) or standard seasonal adjustment filters.⁶ The integrated version of the relative mean square approximation error is not invariant to filtering, since it is a ratio of averages of both the numerator and denominator across frequencies. When the data are filtered, the integrated version of r_j(ω) changes because the weights implicit in the averaging change. Frequencies for which the filter has a large gain are weighted more heavily than frequencies with a small gain.
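Continuing the hypothetical AR(1) spectra from the Model 4 sketch (repeated here so the block is self-contained), the following computes r_j(ω), its overall integrated version, and the version integrated over the 6-32 quarter business cycle band; with a uniform grid the integrals reduce to means.

```python
# A sketch of r_j(w) from (2.20) and its integrated versions, using the same
# invented model and data spectra as the Model 4 sketch above.
import numpy as np

omegas = np.linspace(1e-3, np.pi, 512)
s_x = 1.0 / np.abs(1 - 0.5 * np.exp(-1j * omegas))**2
s_y = 1.0 / np.abs(1 - 0.8 * np.exp(-1j * omegas))**2
s_u = (np.sqrt(s_x) - np.sqrt(s_y))**2       # minimized error spectrum

r_w = s_u / s_y                              # fit, frequency by frequency

r_overall = s_u.mean() / s_y.mean()          # integrate numerator and denominator separately

bc = (2 * np.pi / 32 <= omegas) & (omegas <= 2 * np.pi / 6)   # periods of 6-32 quarters
r_bc = s_u[bc].mean() / s_y[bc].mean()       # fit over business cycle frequencies
print(r_overall, r_bc)
```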




3. Measures of Fit for an RBC Model

In this section we investigate the coherence of a standard real business cycle model with post-war U.S. data using the measures of fit developed in the last section. The model, which derives from Kydland and Prescott (1982), is the "baseline" model detailed in King, Plosser, and Rebelo (1988b). It is a one sector neoclassical growth model driven by an exogenous stochastic trend in technology.⁷

This baseline model is analyzed, rather than a more complicated variant, for several reasons. First, the calibration/simulation exercises reported in King, Plosser and Rebelo suggest that the model explains the relative variability of aggregate output, consumption and investment, and produces series with serial correlation properties broadly similar to the serial correlation properties of post-war U.S. data. Second, King, Plosser, Stock, and Watson (1991) show that the low-frequency/cointegration implications of the model are broadly consistent with similar post-war U.S. data. Finally, an understanding of where this baseline model fits the data and where it doesn't fit may suggest how the model should be modified.

Only a brief sketch of the model is presented; a thorough discussion is contained in King, Plosser, and Rebelo (1988a, 1988b). The details of the model are as follows:

Preferences:
        u(C_t, L_t) = log(C_t) + θ log(L_t)

Technology:
        Q_t = K_t^{1-α}(A_t N_t)^α, with
        log(A_t) = a_t = γ_a + a_{t-1} + ε_t,   ε_t iid(0, σ_ε²)
        K_{t+1} = (1-δ)K_t + I_t

Constraints:
        Q_t = C_t + I_t
        1 = N_t + L_t

where C_t denotes consumption, N_t is labor input, L_t is leisure, Q_t is output, K_t is capital, I_t is investment, and A_t is the stock of technology, which is assumed to follow a random walk with drift γ_a and iid innovation ε_t.
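Using the calibrated values reported below (γ_a = .004, σ_ε = .010, quarterly), the technology process is easy to simulate. Since σ_ε = 2.5γ_a, technology falls whenever ε_t < -γ_a, an event with probability Φ(-0.4), roughly one quarter in three -- the point raised later when the plausibility of the technology shock is discussed.

```python
# A sketch simulating the exogenous technology random walk under the paper's
# calibrated parameter values.
import numpy as np

gamma_a, sigma_e = 0.004, 0.010
rng = np.random.default_rng(0)

eps = rng.normal(scale=sigma_e, size=100_000)
da = gamma_a + eps                       # growth in log technology
a = np.cumsum(da)                        # a_t = gamma_a + a_{t-1} + eps_t

print((da < 0).mean())                   # share of quarters with falling A_t, ~0.34
```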
To analyze the model's empirical predictions, the equilibrium of the model is calculated as a function of the parameters β, θ, α, γ_a, σ_ε² and δ. This equilibrium implies a stochastic process for the variables C_t, L_t, N_t, K_t, I_t and Q_t, and these stochastic processes can then be compared to the stochastic processes characterizing U.S. post-war data. As is well known, the equilibrium can be calculated by maximizing the representative agent's utility function subject to the technology and the resource constraints. In general, a closed form expression for the equilibrium does not exist and numerical methods must be used to calculate the stochastic process for the variables corresponding to the equilibrium. A variety of numerical approximations have been proposed (see Taylor and Uhlig (1989) for a survey); here I use the log-linearization of the Euler equations proposed by King, Plosser, and Rebelo (1987).⁸ A formal justification for approximating the equilibrium of this stochastic nonlinear model near its deterministic steady state using linear methods is provided in Woodford (1986, Theorem 2).

The approximate solution yields a VAR for the logarithms of Q_t, C_t, K_t, I_t and N_t. (Following the standard convention, these logarithms will be denoted by lower case letters.) Each of the variables except n_t is nonstationary, but can be represented as stationary deviations about a_t, the logarithm of the stock of technology, which by assumption follows an integrated process. Thus, q_t, c_t, i_t, and k_t are cointegrated with a single common trend, a_t. Indeed, the variables in the VAR are not only cointegrated, they are singular; the singularity follows since ε_t is the only shock to the system. The coefficients in the VAR are complicated functions of the structural parameters β, θ, α, γ_a, σ_ε² and δ. Values for these parameters are the same as those used by King, Plosser, and Rebelo (1988b) and the reader is referred to their work for a detailed discussion of the values chosen for these parameters. Assuming that the variables are measured quarterly, the parameter values are: α=.58, δ=.025, γ_a=.004, σ_ε=.010, β=.988, and θ is chosen so that the steady state value of N is 0.20. Using these values for the parameters, the VAR describing the equilibrium can be calculated and the autocovariance generating function of x_t = (Δq_t  Δc_t  Δi_t  n_t)' follows directly.⁹

These autocovariances will be compared to the autocovariances of post-war data for the United States. The data used here are the same data used by King, Plosser, Stock, and Watson (1991). The output measure is total real private GNP, defined as total real GNP less government purchases of goods and services. The measure of consumption is total real consumption expenditures and the measure of investment is total real fixed investment. The measure of labor input is total labor hours in private nonagricultural establishments. All variables are in per capita terms using the total civilian noninstitutional population over the age of 16.¹⁰ Letting q_t denote the log of per capita private output, c_t the log of per capita consumption expenditures, etc., the data used in the analysis will be written as y_t = (Δq_t  Δc_t  Δi_t  n_t)'.

The analysis presented in the last section assumed that the autocovariance generating function/spectrum of y_t was known. In practice of course this is not the case, and the spectrum must be estimated. In this work, the spectrum of y_t was estimated in two different ways. First, an autoregressive spectral estimator was used, calculated by first estimating a VAR for the variables and then forming the implied spectral density matrix. Following King, Plosser, Stock and Watson (1991), the VAR was estimated imposing the constraint that output, consumption and investment were cointegrated. Thus, the VAR was specified as the regression of y_t onto a constant, six lags of y_t, and the error-correction terms c_{t-1}-q_{t-1} and i_{t-1}-q_{t-1}. The parameters of the VAR were estimated using data from 1950 through 1988. (Values before 1950 were used as lags in the regression for the initial observations.) Second, a standard nonparametric spectral estimator was also calculated. The spectrum was estimated by a simple average of 10 periodogram ordinates after pre-whitening employment with the filter (1-.95L). These two estimators yielded similar values for the measures of fit, and to conserve space only the results for the autoregressive spectral estimator are reported.
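The mapping from estimated VAR coefficients to the implied spectral density matrix is standard: S(ω) = (2π)^{-1} H(ω) Σ H(ω)*, where H(ω) = (I - A_1 e^{-iω} - ... - A_p e^{-ipω})^{-1}. A sketch follows; the coefficient and innovation covariance matrices are placeholders, since in the paper they come from the six-lag error-correction VAR just described.

```python
# A sketch of the autoregressive spectral estimator: given VAR coefficients
# A_1,...,A_p and innovation covariance Sigma, form the implied spectrum.
import numpy as np

def var_spectrum(A_list, Sigma, omega):
    """Spectral density matrix of a VAR at frequency omega."""
    n = Sigma.shape[0]
    H = np.eye(n, dtype=complex)
    for j, A in enumerate(A_list, start=1):
        H = H - A * np.exp(-1j * omega * j)
    Hinv = np.linalg.inv(H)
    return (Hinv @ Sigma @ Hinv.conj().T) / (2 * np.pi)

# Hypothetical one-lag, two-variable example.
A1 = np.array([[0.5, 0.1], [0.0, 0.3]])
Sigma = np.array([[1.0, 0.2], [0.2, 0.5]])
S = var_spectrum([A1], Sigma, omega=2 * np.pi / 32)   # a business cycle frequency
```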
For each variable, Figure 1 presents the spectrum implied by the model, the spectrum of the data, and the spectrum of the error required to reconcile the model with the data. Since the spectral density matrix of variables in the model has rank one, the joint error process is determined by minimizing the variance of only one of the errors. The error spectra shown in Figure 1 were calculated by minimizing the error associated with output growth, Δq_t - Δq̂_t. For output, consumption and investment, the model spectra and the data spectra are similar for very low frequencies (periods greater than 50 quarters) and, for output and investment, at high frequencies (periods less than 5 quarters). There are significant differences between the spectra for periods typically associated with the business cycle; the largest differences occur at a frequency corresponding to approximately 10 quarters. The spectra of n_t and n̂_t are quite different. The employment data have much more low frequency movement than is predicted by the model.¹¹

The figure implies that relatively little measurement error is needed to reconcile the model and the data for output, consumption and investment over the very low frequencies. On the other hand, measurement error with a variance on the order of 30% to 55% of the magnitude of the variance of the series is needed for the output, consumption and investment components with periods in the 6-32 quarter range. At higher frequencies, this representation is able to match movements in output, but not in the other variables.
Table 1 provides a summary of the relative mean square approximation error for a variety of minimum error representations and filters. Each panel shows the relative mean square error (mse) for each variable constructed from four different minimum error representations. The first column of each panel provides a summary of the minimum output error representation, the second column presents results from the representation that minimizes the consumption error, the third column shows the results from the minimum investment error representation, and the final column shows the results from the minimum employment error representation. The top panel presents the results for the first differences of the data integrated across all frequencies; the middle panel shows the results for the levels of the series detrended by the Hodrick-Prescott filter integrated across all frequencies, and the bottom panel presents the results for the levels of the series integrated over business cycle frequencies (6-32 quarters). The tradeoff inherent in the different representations is evident in all panels. For example, in the top panel, using the minimum output error representation, the relative mse for output growth is 26%, while the relative mse for consumption growth is 78%; when the minimum consumption error representation is chosen, the relative mse of consumption growth can be reduced to 30%, but the relative mse for output growth increases to 76%. The bottom two panels show that, at least for output, consumption and investment, most of this tradeoff occurs at the high frequencies: for the business cycle frequencies the relative mse's are generally in the 40%-60% range.¹²

Given the minimum measurement error representation developed in section 2, it is possible to calculate x̂_t from the realization (..., y_{-1}, y_0, y_1, ...). Since the measurement error model represents y_t as x_t plus error, standard signal extraction formulas can be used to extract {x_t} from {y_t}. In general, of course, signal extraction methods will yield an estimate of x_t, say x̂_t, that is not exact in the sense that E[(x_t - x̂_t)²] ≠ 0. In the present context, the estimate will be exact since the measurement error process is chosen so that dz_x(ω) and dz_y(ω) are perfectly correlated for all ω.¹³ Figure 2 shows the realizations of the data, and the realizations of the variables in the model calculated from the data using the minimum output error representation.¹⁴

Looking first at Figure 2a, which shows the results for output, the model seems capable of capturing the long swings in the post-war U.S. data, but not capable of capturing all of the cyclical variability in the data. Using the standard NBER peak and trough dates, U.S. private per capita GNP fell by 8.4% from the peak in 1973 to the trough in 1975 and by 7.9% from the peak in 1979 to the trough in 1982. In contrast, the corresponding drops in Q̂_t -- output in the model -- were 2.7% and 3.3% respectively. The dampened cyclical swings in consumption and fixed investment, shown in Figures 2b and 2c, are even more dramatic. Finally, Figure 2d shows that the model predicts changes in labor input that have little to do with the changes observed in the U.S. during the post-war period.
Before leaving this section, six additional points deserve mention. First, the fitted values in Figure 2 are quantitatively and conceptually similar to figures presented in Christiano (1988) and Plosser (1989). They calculated the Solow residual from actual data and then simulated the economic model using this residual as the forcing process. Implicitly, they assumed that the model and data were the same in terms of their Solow residual, and then asked whether the model and data were similar in other dimensions. Figure 2 is constructed by making the model and data as close as possible in one dimension (in this case the variance of output growth) and then asking whether the model and data are similar in other dimensions. The difference between the two approaches can be highlighted by considering the circumstance in which they would produce exactly the same figure. If the Solow residual computed from the actual data followed exactly the same stochastic process as the change in productivity in the model, and if the approximation error representation was constructed by minimizing the variance of the difference between the Solow residual in the data and productivity growth in the model, then the two figures would be identical. Thus, the figures will differ if the stochastic process for the empirical Solow residual is not the same as assumed in the model, or the approximation error representation is chosen to make the model and data close in some dimension other than productivity growth.
Second, the inability of the model to capture the business cycle properties of the data is not an artifact of the minimum measurement error representation used to form the projection of x_t onto y_τ, τ=1,...,n. Rather, it follows directly from a comparison of the spectra of x_t and y_t. The fitted values are constrained to have an ACGF/spectrum given by the economic model. Figure 1 shows that, for all the variables, the spectral power over the business cycle frequencies is significantly less for the model than for the data. Therefore, fitted values from the model are constrained to have less cyclical variability than the data.

Third, the ability of the model to mimic the behavior of the data depends critically on the size of the variance of the technology shock. The value of σ_ε used in the analysis above is two and one-half times larger than the drift in the series. Thus, if ε_t were approximately normally distributed, the stock of technology A_t would, on average, fall in 1 out of 3 quarters. Reducing the standard deviation of the technology shock so that it equals the average growth in a_t drastically increases the size of the measurement error necessary to reconcile the model with the data. For example, integrated across all frequencies, the size of the measurement error variance relative to the variance of observed data increases to 63% for output.

Fourth, there is nothing inherent in the structure of the model that precludes the use of classical statistical procedures. Altug (1989) used maximum likelihood methods to study a version of the model which is augmented with serially correlated classical measurement errors. Singleton (1988) and Christiano and Eichenbaum (1990) pointed out that generalized method of moments procedures can be used to analyze moment implications of models like the one presented above. In the empirical work of Christiano and Eichenbaum, the singularity in the probability density function of the data that is implied by the model was finessed in two ways. First, limited information estimation and testing methods were used, and second, the authors assumed that their data on labor input were measured with error.

Fifth, many if not all of the empirical shortcomings of this model have been noted by other researchers. King, Plosser, and Rebelo clearly show that the model is not capable of explaining the variation in labor input that is observed in the actual data. The implausibility of the large technology shocks is discussed in detail in Mankiw (1989), McCallum (1989) and Summers (1986).

Finally, the analysis above has concentrated on the ability of the model to explain the variability in output, consumption, investment and employment across different frequencies. While it is possible to analyze the covariation of these series using the cross spectrum of the measurement error, such an analysis has not been carried out here. This is a particularly important omission, since this is the dimension in which the baseline real business cycle model is typically thought to fail. For example, Christiano and Eichenbaum (1990) and Rotemberg and Woodford (1989) use the model's counterfactual implication of a high correlation between average productivity and output growth as starting points for their analysis, and the empirical literature on the ICAPM beginning with Hansen and Singleton (1982) suggests that the asset pricing implications of the model are inconsistent with the data. It would be useful to derive simple summary statistics based on the cross spectra of the measurement error and the data to highlight the ability of the model to explain covariation among the series.

4. Statistical Issues

The empirical analysis in the last section highlights two related statistical issues. First, how can uncertainty about the parameters of the economic model and uncertainty about the ACGF of the data be incorporated in the analysis, and second, when the parameters of the economic model are unknown, does it make sense to estimate these parameters by minimizing the relative mean square approximation error?

It is conceptually straightforward to incorporate uncertainty about A_x(z) and A_y(z). Let r̂_j(ω) be an estimator of r_j(ω) constructed from Â_x(z) and Â_y(z); from the distribution of Â_x(z) and Â_y(z), the distribution of r̂_j(ω) can be readily deduced. This distribution can be used to construct confidence intervals for r_j(ω) or to carry out other standard inference procedures. This exercise would be like constructing the confidence interval for a regression R², which is possible (see Anderson [1984]), but almost never done.
The second issue, using the relative mean square approximation error as a criterion for choosing parameters, is more subtle. Dropping the standard statistical assumption that the economic model is correctly specified raises a number of important issues. Foremost among these is the meaning of the parameters. If the model doesn't necessarily describe the data, then what do the parameters measure? Presumably, the model is meant to describe certain characteristics of the data's stochastic process (the business cycle or the growth properties, for example), while ignoring other characteristics. It is then sensible to define the model's parameters as those that minimize the differences between the model and the data's stochastic process in the dimensions that the model is attempting to explain. So, for example, it seems sensible to define the parameters of a growth model as those that minimize r_j(ω) over very low frequencies, or the parameters of a business cycle model as those that minimize r_j(ω) over business cycle frequencies. Given this definition of the parameters, constructing an analog estimator (see Manski [1987]) by minimizing r̂_j(ω) corresponds to a standard statistical practice.

The parameters may also be defined using other characteristics of the model and the stochastic process describing the data. If the model is meant to describe certain moments of the data, then the parameters are implicitly defined in terms of these moments and can be efficiently estimated using GMM techniques (see Hansen [1982]).¹⁵ In any event, the important point is that the parameters must be defined in terms of the stochastic process for y_t before properties of estimators of the parameters can be discussed.




5. Discussion

Two final points deserve mention. First, while this paper has concentrated on measures of fit motivated by a model of measurement error, other measures are certainly possible. For instance, one measure, which like the measures in this paper uses only the autocovariances implied by the model and the data, is the expected log likelihood ratio using the Gaussian probability density functions of the data and the model. More precisely, if g(x) denotes the Gaussian pdf constructed from the autocovariances of the data, f(x) denotes the Gaussian pdf constructed from the autocovariances implied by the model, and E_g is the expectation operator taken with respect to g(x), the expected log likelihood ratio I(g,f) = E_g(log[g(x)/f(x)]) can be used to measure the distance between the densities f(·) and g(·). I(g,f) is the Kullback-Leibler Information Criterion (KLIC), which plays an important role in the statistical literature on model selection (e.g., Akaike [1973]) and quasi-maximum likelihood estimation (White [1982]). Unfortunately, the KLIC will not be defined when f(x) is singular and g(x) is not; the KLIC distance between the two densities is infinite. Thus, for example, it would add no additional information on the fit of the real business cycle model analyzed in Section 3 beyond pointing out the singularity.
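For mean-zero Gaussian vectors the expected log likelihood ratio has a well-known closed form, sketched below with placeholder covariance matrices: I(g,f) = ½[tr(Σ_f⁻¹Σ_g) - n - log det(Σ_f⁻¹Σ_g)]. A singular Σ_f makes the inverse fail, mirroring the infinite distance noted above.

```python
# A sketch of the KLIC between two mean-zero Gaussian densities with
# covariance matrices Sigma_g (data) and Sigma_f (model).
import numpy as np

def gaussian_klic(Sigma_g, Sigma_f):
    n = Sigma_g.shape[0]
    M = np.linalg.solve(Sigma_f, Sigma_g)      # Sigma_f^{-1} Sigma_g; fails if Sigma_f singular
    return 0.5 * (np.trace(M) - n - np.log(np.linalg.det(M)))

print(gaussian_klic(np.diag([1.0, 2.0]), np.eye(2)))   # ~0.153, and 0 when the two agree
```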
Finally, since the measures of fit developed in this paper are based on a representation that minimizes the discrepancy between the model and the data, they only serve as a bound on the fit of the model. Models with large relative mean square approximation errors don't fit the data well. Models with small relative mean square approximation errors fit the data well given certain assumptions about the correlation properties of the noise that comes between the model and the data, but may fit the data poorly given other assumptions about this noise.

Footnotes

1. Also see Hansen and Sargent (1988).

2. The spirit of the analysis in this paper is similar to the analysis in Campbell and Shiller (1989), Cochrane (1989), Durlauf and Hall (1989), and Hansen and Jaganathan (1991). Each of these papers uses a different approach to judge the goodness of fit of an economic model by calculating a value or an upper bound on the variance of an unobserved "noise" or a "marginal rate of substitution" or a "discount factor" in observed data.

3. The reader familiar with work on data revisions will recognize these two sets of assumptions as the ones underlying the "news" and "noise" models of Mankiw, Runkle, and Shapiro (1984) and Mankiw and Shapiro (1986).

4. Interestingly, it is possible to determine whether the dynamic errors-in-variables model or the signal extraction error model is consistent with the model and the data. The dynamic errors-in-variables model implies that A_y(z) - A_x(z) ≥ 0 for |z| = 1, so that the spectrum of y_t lies everywhere above the spectrum of x_t; the signal extraction error model implies the converse. If the spectrum of x_t lies anywhere above the spectrum of y_t, the errors-in-variables model is inappropriate; if the spectrum of y_t lies anywhere above the spectrum of x_t, the signal extraction model is inappropriate. If the spectra of x_t and y_t cross, neither model is appropriate.

5. Since Σ_x has rank k, there exists an (n-k)×n matrix S̄, with full row rank, such that S̄x_t = 0. (The rows of S̄ can be computed as the eigenvectors of Σ_x corresponding to zero eigenvalues.) Thus, [S' S̄']' x_t = [x̃_t' 0']'. Since SΣ_xS' has rank k, [S' S̄']' is non-singular, which implies that x_t = Bx̃_t, where the n×k matrix B contains the first k columns of [(S' S̄')']⁻¹.

6. Standard seasonal adjustment filters, such as the linear approximations to Census X-11, have zeros at the seasonal frequencies, so that r_j(ω) is undefined at these frequencies for filtered data.

7. This model is broadly similar to the model analyzed in Kydland and Prescott (1982). While the baseline model does not include the complications of time to build, inventories, time non-separable utility, and a transitory component to technology contained in the original Kydland and Prescott model, these have been shown to be reasonably unimportant for the empirical predictions of the model (see Hansen [1985]). Moreover, the King, Plosser and Rebelo baseline model appears to fit the data better at the very low frequencies than the original Kydland and Prescott model since it incorporates a stochastic trend rather than the deterministic trend present in the Kydland and Prescott formulation.

8. Sergio Rebelo kindly provided the computer software to calculate the approximate solution.

9. Of course, this is not the only possible definition of x_t. The only restriction on x_t is covariance stationarity, so for example the log ratios c_t - q_t and i_t - q_t could be included as elements.

10. All data are taken from Citibase. Using the Citibase labels, the precise variables used were gnp82-gge82 for output, gc82 for consumption, and gif82 for investment. The measure of total labor hours was constructed as total employment in nonagricultural establishments (lhem) less total government employment (lpgov) multiplied by average weekly hours (lhch). The population series was p16.

11. Figure 1 is reminiscent of figures in Howrey (1971, 1972), who calculated the spectra implied by the Klein-Goldberger and Wharton models. A similar exercise is carried out in Soderlind (1991), who compares the spectra of variables in the Kydland-Prescott model to the spectra of post-war U.S. data.

12. Using the notation introduced in Section 2 (see equation 2.6), Table 1 shows the relative mean squared approximation errors for four different choices of S. Lars Hansen has suggested that it would be useful to graphically present the results for all values of S, which would trace out the complete set of possible rmse combinations and more effectively show the tradeoff.

13. More precisely, the estimate is exact in the sense that the projection P(x_t | y_{t-j}, ..., y_t, ..., y_{t+j}) converges in mean square to x_t as j → ∞.

14. The estimates of x_t were calculated as the inverse Fourier transform of the Fourier transform of y_t multiplied by the estimated gain from equation (2.10); i.e., x̂_t is calculated as the inverse Fourier transform of Γ(ω)dz_y(ω), where Γ(ω) is given in equation (2.9) and dz_y(ω) is the finite Fourier transform of y_t, t=1,...,n. This procedure induces slight errors near the beginning and end of the sample. However, because the lead/lag coefficients in the projection of x_t onto y_τ, τ=1,...,n, are small for this model, this error is not expected to be large.

15. A careful analysis of a more complicated version of the model discussed in the last section is carried out by Christiano and Eichenbaum (1990) using GMM methods.





Appendix

Derivation of (2.5):

The function to be minimized is:

(A.1)   Σ_{i=1}^n Σ_u,ii w_i,

where Σ_u,ii is the ii'th element of Σ_u = Σ_x + Σ_y - Σ_xy - Σ_yx, and the w_i are a set of non-zero constants. Since Σ_x and Σ_y are given, (A.1) can be minimized by maximizing the function:

(A.2)   Σ_{i=1}^n Σ_xy,ii w_i,

where Σ_xy,ii is the ii'th element of Σ_xy.

It is convenient to parameterize the covariance matrices as Σ_x = F'F, Σ_y = G'G + H'H, and Σ_xy = F'G, where the matrices F, G and H will be chosen to maximize (A.2). This parameterization imposes the constraint that the resulting covariance matrix for (x_t' y_t')' is positive semi-definite. The minimum approximation error representation can be found by choosing F, G, and H to maximize (A.2) subject to the constraints Σ_x = F'F and Σ_y = G'G + H'H. Letting F_i, G_i and H_i denote the i'th column of F, G and H respectively, the Lagrangian is:

(A.3)   L = Σ_{i=1}^n w_i F_i'G_i - Σ_{i=1}^n Σ_{j=1}^n λ_ij (F_i'F_j - Σ_x,ij) - Σ_{i=1}^n Σ_{j=1}^n θ_ij (G_i'G_j + H_i'H_j - Σ_y,ij),

where the λ_ij and θ_ij are the Lagrange multipliers for the constraints. The first order conditions are:

(A.4.i)   ∂L/∂F_i = G_i w_i - Σ_{j=1}^n λ_ij F_j = 0,   i = 1, ..., n

(A.5.i)   ∂L/∂G_i = F_i w_i - Σ_{j=1}^n θ_ij G_j = 0,   i = 1, ..., n

(A.6.i)   ∂L/∂H_i = -Σ_{j=1}^n θ_ij H_j = 0,   i = 1, ..., n

and

(A.7)   Σ_x = F'F

(A.8)   Σ_y = G'G + H'H.

Horizontally concatenating (A.4.i), (A.5.i) and (A.6.i) for i = 1, ..., n yields:

(A.9)    GW = FΛ

(A.10)   FW = GΘ

(A.11)   0 = HΘ,

where W is a diagonal matrix with w_i on the diagonal, and Λ and Θ are symmetric matrices with typical elements λ_ij and θ_ij, respectively.

Since F and W are non-singular, (A.7)-(A.11) imply that H = 0. The first order conditions can then be solved by finding factors of Σ_x and Σ_y, F and G, such that F⁻¹GW and G⁻¹FW are symmetric. Equivalently, F and G must be chosen so that FWG' is symmetric.

Let C_x and C_y denote (arbitrary) matrix square roots of Σ_x and Σ_y, i.e., Σ_x = C_x'C_x and Σ_y = C_y'C_y, and let C = C_x W C_y'. Notice that C'C can be decomposed as C'C = DΛD', where the columns of D are the orthonormal eigenvectors of C'C and Λ is a diagonal matrix with the eigenvalues of C'C on the diagonal. The solution to the problem sets G = C_y and F = RC_x, where R = DΛ^{-1/2}D'C'. This solution can be verified by noting that FWG' = RC = DΛ^{1/2}D' is symmetric and that RR' = R'R = I, so that F'F = C_x'C_x. Note that both F = RC_x and F = -RC_x satisfy the first order conditions. The first, F = RC_x, corresponds to the value of F that maximizes the weighted covariance between the elements of x_t and y_t (and minimizes the weighted sum of the approximation error variances). The second, F = -RC_x, corresponds to the value of F that minimizes the weighted covariance between the elements of x_t and y_t (and maximizes the weighted sum of the approximation error variances).

References

Altug, S. (1989), "Time-to-Build and Aggregate Fluctuations: Some New Evidence," International Economic Review, 30, 889-920.

Anderson, T.W. (1984), An Introduction to Multivariate Statistical Analysis, Second Edition, John Wiley and Sons, New York.

Brillinger, D. (1981), Time Series: Data Analysis and Theory, Holden-Day, San Francisco.

Campbell, J.Y. and R.J. Shiller (1989), "The Dividend-Price Ratio and Expectations of Future Dividends and Discount Factors," The Review of Financial Studies, 1, 195-228.

Christiano, L.J. (1988), "Why Does Inventory Investment Fluctuate So Much?" Journal of Monetary Economics, 21, 247-280.

Christiano, L.J. and M. Eichenbaum (1990), "Current Real Business Cycle Theories and Aggregate Labor Market Fluctuations," Discussion Paper 24, Institute for Empirical Macroeconomics.

Cochrane, J.H. (1989), "Explaining the Variance of Price-Dividend Ratios," manuscript, University of Chicago.

Durlauf, S.N. and R.E. Hall (1989), "Measuring Noise in Stock Prices," manuscript, Stanford University.

Geweke, J. (1977), "The Dynamic Factor Analysis of Economic Time Series," in D.J. Aigner and A.S. Goldberger, eds., Latent Variables in Socio-Economic Models, North-Holland, Amsterdam, Ch. 19.

Haavelmo, T. (1944), "The Probability Approach in Econometrics," Econometrica, 12, Supplement.

Hansen, G.D. (1985), "Indivisible Labor and the Business Cycle," Journal of Monetary Economics, 16, 309-327.

Hansen, G.D. and T. Sargent (1988), "Straight Time and Overtime in Equilibrium," Journal of Monetary Economics, 21.

Hansen, L.P. and R. Jagannathan (1991), "Implications of Security Market Data for Models of Dynamic Economies," Journal of Political Economy, 99, 225-262.

Hansen, L.P. and T. Sargent (1980), "Formulating and Estimating Dynamic Linear Rational Expectations Models," Journal of Economic Dynamics and Control, 2, 7-46.

Hansen, L.P. and K.J. Singleton (1982), "Generalized Instrumental Variables Estimation of Nonlinear Rational Expectations Models," Econometrica, 50, 1269-1286.

Hodrick, R. and E.C. Prescott (1980), "Post-War U.S. Business Cycles: An Empirical Investigation," manuscript, Carnegie-Mellon University.

Howrey, E.P. (1971), "Stochastic Properties of the Klein-Goldberger Model," Econometrica, 39, 73-87.

Howrey, E.P. (1972), "Dynamic Properties of the Wharton Model," in B. Hickman, ed., Econometric Models of Cyclical Behavior, Volume 2, Columbia University Press, New York.

King, R.G., C.I. Plosser and S.T. Rebelo (1987), "Production, Growth, and Business Cycles: Technical Appendix," manuscript, University of Rochester.

King, R.G., C.I. Plosser and S.T. Rebelo (1988a), "Production, Growth, and Business Cycles: I. The Basic Neoclassical Model," Journal of Monetary Economics, 21, 195-232.

King, R.G., C.I. Plosser and S.T. Rebelo (1988b), "Production, Growth, and Business Cycles: II. New Directions," Journal of Monetary Economics, 21, 309-342.

King, R.G., C.I. Plosser, J.H. Stock and M.W. Watson (1991), "Stochastic Trends and Economic Fluctuations," American Economic Review, forthcoming.

King, R.G. and S.T. Rebelo (1989), "Low Frequency Filtering and Real Business Cycle Models," Working Paper No. 205, Rochester Center for Economic Research.

Kydland, F.E. and E.C. Prescott (1982), "Time To Build and Aggregate Fluctuations," Econometrica, 50, 1345-1370.

Mankiw, N.G. (1989), "Real Business Cycles: A New Keynesian Perspective," Journal of Economic Perspectives, 3, 79-90.

Mankiw, N.G., D.E. Runkle and M.D. Shapiro (1984), "Are Preliminary Announcements of the Money Supply Rational Forecasts?" Journal of Monetary Economics, 13, 15-27.

Mankiw, N.G. and M.D. Shapiro (1986), "News or Noise: An Analysis of GNP Revisions," Survey of Current Business, 66, 20-25.

Manski, C.F. (1987), Analogue Estimation Methods in Econometrics, Chapman and Hall, New York.

McCallum, B.T. (1989), "Real Business Cycle Models," in R.J. Barro, ed., Modern Business Cycle Theory, Harvard University Press, Cambridge.

McFadden, D. (1981), "Econometric Models of Probabilistic Choice," in C. Manski and D. McFadden, eds., Structural Analysis of Discrete Data, MIT Press, Cambridge, 198-272.

Plosser, C.I. (1989), "Understanding Real Business Cycles," Journal of Economic Perspectives, 3, 51-77.

Prescott, E.C. (1986), "Theory Ahead of Business Cycle Measurement," Carnegie-Rochester Conference Series on Public Policy, 25 (Autumn), 11-44.

Rotemberg, J. and M. Woodford (1989), "Oligopolistic Pricing and the Effects of Aggregate Demand on Economic Activity," NBER Working Paper 3206.

Sargent, T.J. (1989), "Two Models of Measurements and the Investment Accelerator," Journal of Political Economy, 97, 251-287.

Sargent, T.J. and C.A. Sims (1977), "Business Cycle Modeling without Pretending to Have Too Much A Priori Economic Theory," in C. Sims et al., New Methods in Business Cycle Research, Federal Reserve Bank of Minneapolis, Minneapolis.

Singleton, K.J. (1988), "Econometric Issues in the Analysis of Equilibrium Business Cycle Models," Journal of Monetary Economics, 21, 361-386.

Soderlind, P. (1991), "Is There a Cycle in Real Business Cycle Models?" manuscript, Stockholm University.

Summers, L.H. (1986), "Some Skeptical Observations on Real Business Cycle Theory," Federal Reserve Bank of Minneapolis Quarterly Review, 10, 23-27.

Taylor, J.B. and H. Uhlig (1990), "Solving Nonlinear Stochastic Growth Models: A Comparison of Alternative Solution Methods," Journal of Business and Economic Statistics, 8, 1-18.

White, H. (1982), "Maximum Likelihood Estimation of Misspecified Models," Econometrica, 50, 1-26.

Woodford, M. (1986), "Stationary Sunspot Equilibria: The Case of Small Fluctuations Around a Deterministic Steady State," mimeo, University of Chicago.





Table 1
Relative Mean Squared Approximation Error
Baseline Real Business Cycle Model
Minimum Variance Representations

A. First Differences -- all frequencies

                           Error Minimized with respect to
Variable          Output    Consumption    Investment    Employment
Output             .26         .76            .64           .79
Consumption        .78         .30            .75           .98
Investment         .63         .76            .28           .78
Employment         .71         .79            .71           .56

B. HP Levels -- all frequencies

                           Error Minimized with respect to
Variable          Output    Consumption    Investment    Employment
Output             .38         .61            .51           .66
Consumption        .62         .36            .66           .89
Investment         .50         .66            .38           .65
Employment         .74         .86            .73           .61

C. Levels -- 6-32 quarters

                           Error Minimized with respect to
Variable          Output    Consumption    Investment    Employment
Output             .40         .57            .44           .60
Consumption        .58         .40            .60           .81
Investment         .48         .61            .43           .61
Employment         .73         .85            .72           .61

Notes: Output, Consumption, and Investment are log first differences of
quarterly values. Employment is the log of quarterly labor input. See the
text for precise definitions. Each column presents the relative mean
square approximation error of the row variable constructed from the
representation that minimizes the measurement error variance for the
column variable. Relative mean square approximation error is the lower
bound on the variance of the approximation error divided by the variance
of the data.
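
In symbols, letting $u_t^{(k)}$ denote the approximation error from the representation that minimizes the error variance of column variable k (this superscript notation is introduced here only for compactness), the entry in row j and column k is $\operatorname{var}(u_{jt}^{(k)}) / \operatorname{var}(y_{jt})$.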





Figure 1
Decomposition of Spectra
Data, Model and Approximation Error
(Variance of Output Error Minimized)

[Figure: four panels -- A. Output, B. Consumption, C. Investment, D. Employment -- each plotting the spectrum of the model, the spectrum of the data, and the spectrum of the error against frequency in cycles per quarter (0.05 to 0.50).]

Figure 2
Historical Series
Actual Data and Realization from Model
(Variance of Output Error Minimized)
(Log Scale)

[Figure: four panels -- A. Output, B. Consumption, C. Investment, D. Employment -- each plotting the realization from the U.S. economy and the realization from the model, quarterly, 1950-1986, on a log scale.]