
April 1983
Vol. 65, No. 4


Why Do Food Prices Increase?
Polynomial Distributed Lags and the Estimation of the St. Louis Equation
Weekly Money Supply Forecasts: Effects of the October 1979 Change in Monetary Control Procedures

The Review is published 10 times per year by the Research and Public Information Department of the Federal Reserve Bank of St. Louis. Single-copy subscriptions are available to the public free of charge. Mail requests for subscriptions, back issues, or address changes to: Research and Public Information Department, Federal Reserve Bank of St. Louis, P.O. Box 442, St. Louis, Missouri 63166.
Articles herein may be reprinted provided the source is credited. Please provide the Bank's Research and Public Information Department with a copy of reprinted material.




Federal Reserve Bank of St. Louis
Review

April 1983

In This Issue . . .




This issue of the Review contains three articles that investigate the influence of
changes in money growth and monetary policy actions on diverse economic
behavior.
In the first article, "Why Do Food Prices Increase?" Michael Belongia discusses the various explanations that have been offered to account for increases in food prices. Many popular explanations (for example, unionization, price supports and "middlemen") fail to distinguish between relative prices and nominal (or money)
prices. Taking this distinction into account, the author analyzes graphically the
different patterns of price behavior that would be observed under each type of
price change. Plots of actual data suggest that most of the recent changes in food
prices have followed a path similar to that for changes in the nominal prices of other
goods. Therefore, models that explain isolated changes in relative prices are of
limited use, at best, in explaining ongoing changes in nominal food prices.
A statistical analysis of food prices from 1960 through 1982 shows that the
primary cause of changes in the food component of the Consumer Price Index
(CPI) has been the past growth of the money stock. Belongia’s analysis thus
indicates that, while many of the current explanations are inconsistent with the
actual behavior of food prices, the rate of increase in the food component of the
CPI in the current quarter shares an approximate one-to-one correspondence with
the rate of growth of the money stock over the previous four quarters.
In the second article, "Polynomial Distributed Lags and the Estimation of the
St. Louis Equation,” Dallas S. Batten and Daniel L. Thornton engage in a detailed
re-estimation of the nature of the impact of money growth and government
expenditures in the well-known St. Louis equation.
The major purpose of the study is to determine whether the conclusions drawn
from previous estimations of this equation depend on the selection of lag length or
the imposition of polynomial restrictions. In conducting this examination, the
authors generalize a procedure for selecting the lag length and polynomial degree
that is both convenient and computationally efficient.
They find that the St. Louis equation’s policy conclusions are unaffected by
the lag length selected or the polynomial restrictions imposed. In particular, the
long-run effectiveness of money growth on nominal spending growth and the
long-run ineffectiveness of the growth in government spending are substantiated.
Their investigation also identifies a different specification of the equation that
outperforms the currently used St. Louis equation in terms of both in-sample and
out-of-sample criteria. This new specification has substantially longer lags for both
money and government spending growth and more polynomial restrictions than
the currently specified St. Louis equation.
In the third article, R. W. Hafer focuses on the predictions of weekly money
growth that financial analysts use in attempting to anticipate Federal Reserve
policy actions. Although several studies have shown the weekly M1 numbers to be

unreliable predictors of long-term policy trends, weekly predictions of M1 frequently are used to determine short-term financial market strategies. In "Weekly Money Forecasts," Hafer examines whether the October 6, 1979, change in the Federal Reserve's procedures to control the money supply affected the forecasters' abilities to predict the change in M1. More specifically, he addresses the issue of whether the change in operating procedures affected the unbiasedness and efficiency characteristics of these M1 forecasts.
To answer this question, the author assesses the money supply forecasts from a survey of actively participating money market analysts. Using the average forecast as the "market's" prediction, he finds that the change in monetary control procedures significantly altered the characteristics of the weekly money supply forecasts. Prior to October 1979, forecasts of the weekly change in M1 generally were unbiased and efficient estimates of the actual change; since October 1979, these
forecasts have been biased and inefficient. These findings, along with those
presented in studies that analyze the effects of unanticipated weekly money changes on interest rates, "suggest that a more predictable [monetary policy] control procedure would contribute to a more stable financial market."

Why Do Food Prices Increase?
MICHAEL T. BELONGIA

Over the past decade economists have devoted
much research effort to identifying factors that in­
fluence the direction and magnitude of changes in food
prices. Under the widely-accepted belief that “food
prices rose faster than nonfood prices during the
1970s," many have attempted to identify the unique
characteristics of food products and their marketing
system that have caused food prices to rise faster than
the general rate of inflation.1 These studies typically
concluded that market concentration and increases in
the costs of assorted inputs were the chief causes of
increases in retail food prices.
Not all analysts share these views, however. First,
there is some disagreement concerning whether food
has, in fact, become relatively more expensive in re­
cent years. Second, recent empirical research has
found that increases in food prices are more directly
related to the monetary policy of the Federal Reserve
than they are related to unique marketing practices of
firms in the food industry. Thus, contrary to the predominant view, these arguments contend that increases in food prices, on average, share the same path
as that followed by other prices.
The following discussion attempts to clarify some of
these issues. After several basic economic concepts are
defined, a statistical analysis of the data is conducted.
The evidence suggests that virtually all of the long-run
increases in food prices can be explained by past rates
of growth of the money stock. Conversely, the discus­
sion in the article’s final section indicates that predic-

1. See, for example, R. McFall Lamm, "Prices and Concentration in the Food Retailing Industry," Journal of Industrial Economics (September 1981), pp. 67-78; Larry E. Salathe and William T. Boehm, Food Prices in Perspective: A Summary Analysis, Economics, Statistics and Cooperatives Service (U.S. Department of Agriculture, 1978); and R. McFall Lamm and Paul C. Westcott, "The Effects of Changing Input Costs on Food Prices," American Journal of Agricultural Economics (May 1981), pp. 187-96.



tions of competing theories often are contradicted by
actual events.

RELATIVE VS. NOMINAL PRICES
The first step necessary in a discussion of price changes draws the distinction between relative and nominal prices. Put most simply, nominal (or money) prices are the actual, dollar-denominated prices at which goods are exchanged; for example, a newspaper's nominal price is 25 cents. A relative price, however, expresses the cost of a good in terms of other goods, not in terms of money. That is, if a book's nominal price is $2, the relative price of a newspaper — relative to a book — is 1/8 ($0.25 ÷ $2.00 = 1/8). This shows that the newspaper is "worth" one-eighth of a book.
The importance of this distinction is more than numerical in nature. There is a crucial economic distinction between nominal and relative prices. Changes in relative prices reflect changes in the rate of exchange between goods caused by relative changes in the supply and/or demand for goods; changes in nominal prices reflect changes in the rate of exchange between goods and money associated with changes in the supply and/or demand for money. For example, under a neutral inflation, in which all nominal (money) prices increase at the same rate, a 20 percent increase in the price of newspapers to 30 cents would be matched by a 20 percent increase in the price of a book to $2.40 (1.20 × $2.00 = $2.40). This equal percentage increase in all money prices is neutral because relative prices are unaffected; that is, with a neutral 20 percent inflation, the relative price of a newspaper is still 1/8 ($0.30 ÷ $2.40 = 1/8) of the book.
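The neutrality arithmetic can be sketched in a few lines (an illustrative example, not from the article, using the newspaper and book prices in the text):

```python
# A neutral inflation raises every nominal price by the same factor,
# so relative prices are left unchanged.
newspaper, book = 0.25, 2.00            # nominal prices, in dollars
relative_before = newspaper / book      # 1/8: a paper "costs" 1/8 of a book

factor = 1.20                           # a neutral 20 percent inflation
relative_after = (newspaper * factor) / (book * factor)

print(round(relative_before, 6), round(relative_after, 6))  # 0.125 0.125
```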
The distinguishing feature of an equal percentage
change in all nominal prices is that it has no long-run
impact on economic activity; that is, it does not change
the allocation of resources between newspapers and
books.2 In other words, when all prices — including
incomes — are rising at equal rates, relative prices
remain unchanged. In this instance, an individual who
allocates fixed proportions of his income to newspa­
pers, books, food and housing is unaffected by a neutral
inflation: even though all prices rise by 10 percent,
these changes are offset by a 10 percent increase in
income. Nominal price changes of this nature share a
one-to-one correspondence with past rates of growth of
the money stock.3
Conversely, relative price changes for individual
products both result from, and contribute to, changes
in economic relationships. For example, if an increase
in demand doubled the price of newspapers from 25
cents to 50 cents, an individual who purchased news­
papers would adjust his spending patterns to reflect
this increase. That is, if one person previously had
purchased four newspapers per week for $1(4 X $0.25)
out of a $100 weekly income, there would be $99 per
week to spend on other items. When the newspaper
price rises to 50 cents, the four newspapers cost $2 and
only $98 remains for other purchases. The change in
the relative price of newspapers forces this individual
to reallocate the $100 of weekly income: either the
purchase of newspapers or other goods must be re­
duced by $1.
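The forced reallocation can be traced explicitly (an illustrative sketch using the newspaper figures from the text):

```python
# A doubling of the newspaper price, with nominal income fixed at $100,
# forces $1 of weekly spending on other goods to be given up.
income = 100.00
papers = 4                               # newspapers bought per week

before = income - papers * 0.25          # $99 left for other goods
after = income - papers * 0.50           # only $98 left

print(before, after)                     # 99.0 98.0
```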
The issue of changes in food prices also can be reduced to this simple dichotomy between movements in relative and nominal prices. Analysts who believe

2. Rational expectations theorists may argue that real economic activity will be affected in the short run unless price changes are forecast perfectly, e.g., Robert E. Lucas, Jr., "Expectations and the Neutrality of Money," Journal of Economic Theory (April 1972), pp. 103-24. The present analysis also ignores the effects of factors like a progressive tax structure, usury laws and other impediments that prevent or complicate a complete indexation of this type of price change. For purposes of illustration, however, this simple example is intended only to draw a distinction between relative and nominal prices.
3. The linkage between past growth rates of the money stock and the current rate of inflation has been established in a number of studies. Among these are: Peter I. Berman, Inflation and the Money Supply in the United States, 1956-1977 (Lexington Books, 1978); Yash P. Mehra, "An Empirical Note on Some Monetarist Propositions," Southern Economic Journal (July 1978), pp. 154-67; Robert E. Lucas, "Two Illustrations of the Quantity Theory of Money," American Economic Review (December 1980), pp. 1005-14; Denis S. Karnosky, "The Link Between Money and Prices," this Review (June 1976), pp. 17-23; Keith M. Carlson, "The Lag from Money to Prices," this Review (October 1980), pp. 3-10; and John A. Tatom, "Energy Prices and Short-Run Economic Performance," this Review (January 1981), pp. 3-17.
Further discussion of the distinction between inflation and changes in relative prices can be found in Lawrence S. Davidson, "Inflation Misinformation and Monetary Policy," this Review (June/July 1982), pp. 15-26.
[Figure 1: Theoretical Differences Between Rates of Price Change and Changes in Price Levels. Vertical axis: natural logarithms of price; horizontal axis begins in 1970. Horizontal reference lines mark the Food and Nonfood price levels reached by lines A, B and C.]

food prices have risen faster than nonfood prices are
arguing that shifts in the relative supply and demand
conditions for both food and nonfood products have
resulted in a net increase in the relative price of food.
Conversely, those who argue food prices grew at the
same rate as other prices believe that most of the
recent changes in food prices can be linked directly to
the high rate of money growth that existed over this
period. The distinction between these views is illustrated in the graphical analysis that follows.

ALTERNATIVE INTERPRETATIONS
OF HISTORICAL DATA
Those who argue that food prices increased at a
relatively faster rate than nonfood prices in the 1970s
(see footnote 1) base their conclusion on the observa­
tion that, over this period, the food component of the
Consumer Price Index (CPIF) increased by 87 percent
compared to a 66 percent increase for the nonfood
component (CPINF). Although these statistics are cor­
rect technically, they are based on total increases for
the 10-year period. That is, the 87 percent increase for CPIF is determined by constructing the simple difference of index values for December 1969 and December 1979. This simple calculation of price change, however, fails to distinguish between changes in price levels and average rates of price change.
To see the problem with this type of calculation,
consider figure 1. Lines A, B and C represent different
growth paths for the food and nonfood components of the CPI. The horizontal lines drawn at levels denoted
by Food and Nonfood indicate, respectively, the 87
and 66 percent increases these indices registered dur­
ing the 1970s.
Although lines A and B both are consistent with the
actual 87 percent increase in food prices that occurred
during the 1970s, the differences in their slopes imply
very distinct economic interpretations of this statistic.
On one hand, lines B and C are compatible with the
popular view that food prices increased at a relatively
faster rate over this 10-year interval. That is, since
1970, the slope of line B, which represents a constant
rate of growth for food prices, has been greater than the
slope of line C, which depicts the growth rate of non­
food prices. This suggests that fundamental differences
in production and marketing processes established
different long-run growth rates for food and nonfood
prices in the 1970s. Or, because the difference in
slopes appears to be a permanent structural difference,
lines B and C also carry the implicit hypothesis that
food will continue to increase in value, relative to
nonfood products.
Lines A and C also are consistent with the historical
data but do not imply any fundamental changes in the
relative growth rates of food and nonfood prices. In­
stead, line A illustrates the effect of certain events in
1973 on the relative level of food prices. But, aside from
this isolated change caused by relative shifts in world
food supply and demand relationships, lines A and C
have the same slope. That is, with the exception of
1973’s adjustment in relative prices, both food and
nonfood prices, on average, have grown at the same
rate both before and since 1973. Therefore, lines A and
C are consistent with the nominal price changes that
occur during a neutral inflation. Or, stated differently,
the slopes of lines A and C depict the shared increases
in all nominal prices that are associated commonly with
past rates of growth of the money stock.
These theoretical relationships can be compared to
plots of actual price changes shown in chart 1. In
general, these plotted lines reflect the same qualitative
results suggested by lines A and C in figure 1. The level
of food prices did increase, relative to nonfood prices,
in 1973 but, after the effects of this relative price
change dissipated, food and nonfood prices tended to
follow the same trend rate of growth. In fact, declines
in the relative price of food in every year since 1978
have caused the food price and the nonfood price lines
to converge. Or, rather, the large increase in the rela­
tive price of food during 1973-74 has been offset by five
consecutive declines in relative food prices since 1978.




Food Prices and Money Growth
The distinctions of the two preceding sections suggest that the problem for an analysis of food prices is to specify a statistical model that can distinguish between changes in relative and nominal prices or, alternatively, between the types of change depicted by lines A and B in figure 1. One such model can be specified as:
(1) CPIF_t = a + Σ(i=0 to 4) b_i × M_{t-i} + Σ(j=0 to 1) d_j × y_{t-j} + Σ(k=0 to 1) g_k × RP_{t-k} + h × Z1 + q × Z2 + e_t

where CPIF is the CPI for food; M is the narrowly defined money stock, M1; y is real GNP; RP is the ratio of the Producer Price Indexes for the "food" and "nonfood" groups;4 Z1 and Z2 are 0/1 dummy variables for phases I-II and phases III-IV, respectively, of Nixon administration price controls; b, d, g, h and q are estimated coefficients; t indicates time (quarterly intervals, 1960-82); and e_t is a model error term. Dots over variable names indicate data measured in growth rates. All data are seasonally adjusted.
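A regression of this form can be sketched with ordinary least squares on synthetic quarterly data (a minimal illustration only: the sample moments, dummy placement and "true" coefficients below are stylized assumptions, and the article's actual estimates appear in table 1):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 92                                    # quarterly observations, 1960-82

# Synthetic growth-rate series, with extra leading values for lags
m = rng.normal(1.3, 0.8, T + 4)           # money (M1) growth
y = rng.normal(0.8, 1.0, T + 1)           # real GNP growth
rp = rng.normal(0.0, 1.5, T + 1)          # relative producer-price growth
z1 = np.zeros(T); z1[46:52] = 1.0         # stylized phase I-II dummy
z2 = np.zeros(T); z2[52:57] = 1.0         # stylized phase III-IV dummy

# Current values and lags: columns M_t..M_{t-4}, y_t, y_{t-1}, RP_t, RP_{t-1}
M = np.column_stack([m[4 - i:4 - i + T] for i in range(5)])
Y = np.column_stack([y[1 - j:1 - j + T] for j in range(2)])
RP = np.column_stack([rp[1 - k:1 - k + T] for k in range(2)])

# Generate CPIF growth from assumed coefficients plus noise
b = np.array([0.10, 0.35, 0.30, 0.15, 0.24])   # money lags, sum ≈ 1.14
d = np.array([-0.22, -0.16])
g = np.array([0.17, 0.06])
cpif = (0.17 + M @ b + Y @ d + RP @ g - 0.84 * z1 + 1.77 * z2
        + rng.normal(0.0, 0.5, T))

# OLS via least squares; column order mirrors equation 1
X = np.column_stack([np.ones(T), M, Y, RP, z1, z2])
coef, *_ = np.linalg.lstsq(X, cpif, rcond=None)
print("estimated sum of money-growth coefficients:", coef[1:6].sum())
```

With a sample of this size the estimated sum of the money-growth coefficients lands close to the assumed long-run impact, which is the quantity the hypothesis test below examines.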
The reasoning behind this model of food price behavior derives from the basic considerations of figure 1 and the discussion of relative versus nominal prices.5 Because we know any observed change in food prices is likely to be apportioned in some manner between changes in relative and nominal values, a model of price change must include variables associated with general inflation and with changes in product supply-demand relationships. Therefore, the model includes past growth rates of the money stock to account for that portion of changes in food prices that is associated with general inflation. Changes in the growth rate of real GNP are included to represent a cyclical effect on prices not captured by money growth. That is, if the equation of exchange is rewritten in growth-rate form as P = M + V - y, then, for a given rate of increase in money and a given M1 velocity, a higher rate of real income growth will tend to be associated with a slower rate of nominal price increase. Therefore, the signs on coefficients d

4. The actual commodity groups are the Producer Price Indexes for "all farm foods and feed" and "all industrial commodities," respectively; these groups represent, essentially, a "food" and "nonfood" division of the PPI.
5. This same basic model, estimated with monthly data, and a more detailed explanation of its theoretical support is found in Michael T. Belongia and Richard A. King, "A Monetary Analysis of Food Price Determination," American Journal of Agricultural Economics (February 1983), pp. 131-35.

[Chart 1: Actual Movements in Food and Nonfood Prices, 1970-82. Data are from Consumer Price Indexes.]

are expected to be negative. Changes in basic food supplies are represented by a proxy of changes in the growth rate of relative food prices at the producer, or wholesale, level. The effects of official price controls from August 1971 through January 1974 are represented by variables Z1 and Z2. Together, these variables encompass the sources and types of price changes discussed earlier.
This model implies several specific hypotheses. First, a one-to-one relationship between past rates of money growth and nominal prices would be supported by a test of the full impact of all current and past values of M on CPIF; the specific hypothesis to be tested is:
(2) Σ(i=0 to 4) b_i = 1,
or that an X percent increase in the rate of money growth over the most recent five quarters will cause a similar X percent change in the current growth rate of nominal food prices.6
6. The postulated lag length is considerably shorter than the 20-quarter lag between money and prices reported in other studies. The reason for this difference is the choice of price index for the model's dependent variable. Because supply and demand functions for food products tend to be more inelastic than those associated with other goods, changes in the supply of, or demand for, food will tend to affect prices more quickly than is typical in other markets.
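The point estimate of this sum can be checked directly from the money-growth coefficients reported in table 1:

```python
# Money-growth coefficients b_0 through b_4 as reported in table 1
b = [0.097, 0.345, 0.300, 0.155, 0.238]

long_run_impact = sum(b)
print(round(long_run_impact, 3))   # 1.135, statistically indistinguishable from 1
```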


Another hypothesis concerns changes in relative
prices. Here, the concern is the net impact of a change
in the growth rate of real income and a change in
relative producer prices. In addition to the effect of
real activity on nominal price growth shown via the
equation of exchange, a change in product supplies also
could affect CPIF by changing the relative price of
food. Because these effects are expected to be offset­
ting, the hypothesis test takes the form:
(3) Σ(j=0 to 1) d_j + Σ(k=0 to 1) g_k = 0.
Finally, it is interesting to know whether general price controls during the 1971-74 period had significant effects on food prices, which were treated differently than other controlled commodities. If controls were effective, the coefficient on Z1 should be negative and the coefficient on Z2, when controls were gradually relaxed, should be positive.




The ordinary least squares results in table 1 support these propositions. The hypothesis test for equation 2 suggests that the net impact of money growth is not significantly different from one; the rate of money growth over the current and past four quarters causes an equal change in the subsequent growth rate of retail food prices.7 Therefore, except for transitory short-run deviations, the observed changes in retail food prices have been changes in their nominal values, not in their relative prices. Changes in food prices are related most closely to changes in the growth rate of the money stock.8

Table 1
Estimated Results for Equation 1

Variable     Coefficient estimate    t-statistic
Constant            0.165               0.58
M_t                 0.097               0.66
M_{t-1}             0.345               2.25
M_{t-2}             0.300               2.07
M_{t-3}             0.155               1.02
M_{t-4}             0.238               1.60
y_t                -0.218              -1.90
y_{t-1}            -0.156              -1.42
RP_t                0.168               5.03
RP_{t-1}            0.058               1.74
Z1                 -0.838              -2.06
Z2                  1.770               3.58

Hypothesis tests
Σ(i=0 to 4) b_i = 1                        F = 0.43
Σ(j=0 to 1) d_j + Σ(k=0 to 1) g_k = 0      F = 1.37
Critical value for F(1,79) = 3.97 (α = 0.05)
R² = 0.55    DW = 1.66

7. Although the coefficients on the third and fourth lags of money growth are nonsignificant individually, an F-test on their joint significance suggests these terms should be retained in the model.
8. This relationship also appears to be stable over time. The model also was estimated over 1960-72, 1970-82 and 1973-82 subsamples and, in each case, the growth rates of the money stock and food prices shared an approximate one-to-one correspondence.

This result is supported by the tests of other a priori hypotheses. The net effect of changes in the growth rates of real income and relative producer prices is shown to be zero, indicating that relative food prices have not changed significantly over this sample period. This provides further support for the notion that food prices have increased, on average, in a fashion similar to general inflation. Therefore, as the discussion in the next section indicates, studies based only on factors affecting supply and demand conditions are in substantial disagreement with the historical data: if relative prices have not changed appreciably, studies based on factors that shift supply and demand functions will not present accurate descriptions of observed price changes.

Finally, the coefficients on price control variables are of the expected sign. From August 1971 through the end of 1972, when controls were applied most stringently, they apparently did reduce the rate of increase in reported food prices.9 Then, from 1973 through 1974, controls were relaxed gradually and food prices began to increase at a faster rate. These results again support expected price behavior during this period.

9. This does not imply, however, that controls were an effective anti-inflationary policy. In fact, although there is an observed statistical effect on food prices in these results, controls themselves were abandoned, in large part, because of the resource allocation problems they caused. That is, controls masked changes in relative prices that give signals to producers concerning their output decisions. Consider, for example, that higher food prices are caused by product shortages. Higher prices, however, will tend to encourage increased production and, in the longer run, increased production will cause lower prices. Therefore, if price controls limit or forbid price increases, their negative impact on production incentives will exacerbate the shortage-high price conditions.
The general conclusion of this analysis might be seen more clearly by constructing a comparison of the effects of M, y and RP on the growth rate of retail food prices. After adjusting CPIF for the effects of the model's intercept, Z1 and Z2, it is possible to write:

(1') CPIF ≈ Σ(i=0 to 4) b_i × M̄ + Σ(j=0 to 1) d_j × ȳ + Σ(k=0 to 1) g_k × R̄P

where the bars over variable names indicate their average, or mean, values. By summing the coefficient estimates as indicated and inserting the data means, equation 1' can be rewritten as:

(4) 1.280 ≈ (1.136 × 1.32) + (-0.374 × 0.77) + (0.226 × (-0.23))

or,

(5) 1.280 ≈ 1.500 - 0.288 - 0.052 ≈ 1.160.
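The arithmetic in equations 4 and 5 can be reproduced directly from the summed coefficients and the data means:

```python
# Contributions to mean CPIF growth (equation 4), evaluated at the means:
money = 1.136 * 1.32          # sum of b's times mean money growth
income = -0.374 * 0.77        # sum of d's times mean real-income growth
rel_pp = 0.226 * (-0.23)      # sum of g's times mean relative-PPI growth

total = money + income + rel_pp
print(round(money, 3), round(income, 3), round(rel_pp, 3), round(total, 3))
# → 1.5 -0.288 -0.052 1.16
```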

In this form, an evaluation of the model's results at the data means indicates that M1 and CPIF share an approximate one-to-one correspondence, whereas changes in real activity — over this sample period — tend to decrease the relative price of food. Contrary to
the popular belief, food price increases would have
been larger had it not been for the mitigating effects of
real income growth and shifts in relative producer
prices.

NONMONETARY EXPLANATIONS
FOR FOOD PRICE INCREASES:
A CRITIQUE
A number of studies have offered alternative explanations for why food prices increase and, further, why they have increased relative to other prices. These explanations include increasing prices for farm products,10 farm price support programs,11 unionization of food sector employees12 and increased concentration of the food industry.13 The following discussion indicates that these explanations either are unrelated to the trend growth rate of food prices or predict results contrary to observed events.

10. See, for example, Don Paarlberg, Farm and Food Policy (University of Nebraska Press, 1980); Albert Eckstein and Dale Heien, "The 1973 Food Price Inflation," American Journal of Agricultural Economics (May 1978), pp. 186-96; Rodney C. Kite and Joseph M. Roop, "Changing Agricultural Prices and Their Impact on Food Prices Under Inflation," American Journal of Agricultural Economics (December 1981), pp. 956-61; and Lamm and Westcott, "The Effects of Changing Input Costs . . . ."
11. J. R. Penn, "Commodity Programs and Inflation," American Journal of Agricultural Economics (December 1979), pp. 889-95.

Rising Input Costs
One alleged cause of increased food prices attributes
observed increases in the CPIs for various food groups
to increases in the prices of inputs used to produce
finished retail food products. Specifically, some pre­
vious studies have found that increases in the nominal
costs of raw farm products have led to subsequent
increases in the retail prices of foods purchased by
consumers. The logic behind this explanation is, essen­
tially, that if the prices of the inputs used to produce
food items are increased, those processors and retailers
who produce and sell food products also must raise
their prices to maintain previous profit margins or
avoid losses.
The explanation that rising input costs have caused increases in retail food prices is flawed on an empirical basis, if for no other reason. That is, because the relative prices of major food groups at the producer level declined during most years of the 1970s, these inputs actually became relatively less expensive for food manufacturers. These declines in relative prices for
raw farm products should have put downward pressure
on both producers’ costs and output prices. Or, other
things being equal, these data suggest that food manu­
facturers should have been able to produce a given
quantity of food at lower — and declining — costs. This
is an unlikely explanation for increasing retail food
prices.

Concentration Ratios and Prices
Higher concentration ratios for the food industry or relatively higher union membership among workers in the food industry might explain why food prices are at a higher level than their values under perfect competition. But these structural characteristics of the industry could only cause food prices to rise continuously if it is shown that these monopolistic elements also strengthened continuously over the same period. Institutional arrangements — like union bargaining power and pricing strategies among a few relatively large
12. R. McFall Lamm, "Unionism and Prices in the Food Retailing Industry," Journal of Labor Research (Winter 1982), pp. 69-79.
13. Lamm, "Prices and Concentration . . . ."

firms — usually act in a manner similar to price support programs. That is, some degree of control over pricing decisions — such as a union's ability to secure higher nominal wages for union workers — can act like a price support which raises a commodity's price above its competitive market value. The ability of a union or a highly-concentrated food industry to raise wages or prices to higher levels, however, is not the same as an ability to raise relative wages or prices continuously. Again, there is a necessary distinction between rates of price change and changes in relative price levels.
There are at least two reasons why neither type of market power is likely to explain ongoing price changes. On the one hand, a producer facing a downward-sloping linear demand curve will have an incentive to raise prices until profits are more affected by declining sales than by higher prices. If a firm starts at a
position where raising prices is profitable and decides
to raise its product’s price, the firm will benefit in two
ways. The increased price will, ceteris paribus, reduce
the quantity sold, which will reduce costs. At the same
time, total revenue will increase because the per­
centage reduction in the quantity sold will be less than
the percentage increase in the output price. At some
point, where the product’s price elasticity is equal to
— 1, total revenue will be maximized. At prices above
this level, total costs will continue to decline but total
revenue also will fall. Therefore, as Batten has ex­
plained, price increases beyond some level will result
in reductions in marginal revenue (from a smaller
quantity sold) larger than the associated decreases in
marginal costs (from producing less).14 In this case, the
price increases will reduce profits and, if other firms do
not follow the price increases — as traditional oligopoly
theory suggests — the firm’s market share also will be
diminished.
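The revenue arithmetic in this argument can be checked directly: along a linear demand curve, raising the price increases total revenue only while demand is inelastic, and revenue peaks where the price elasticity equals −1. The sketch below uses hypothetical demand parameters (a and b are illustrative, not estimates from the article):

```python
# Total revenue along a linear demand curve Q = a - b*P.
# Revenue P*Q peaks where the price elasticity of demand equals -1;
# beyond that price, further increases reduce revenue.
a, b = 100.0, 2.0  # hypothetical demand parameters

def quantity(p):
    return a - b * p

def revenue(p):
    return p * quantity(p)

def elasticity(p):
    # (dQ/dP) * (P/Q) for Q = a - b*P
    return -b * p / quantity(p)

p_star = a / (2 * b)  # revenue-maximizing price for linear demand
assert abs(elasticity(p_star) + 1.0) < 1e-12
assert all(revenue(p_star) >= revenue(p) for p in (10, 20, 30, 40))
```

For Q = a − bP, the revenue-maximizing price is a/(2b); any price above it lowers total revenue, which is the disincentive to raising prices indefinitely that the text describes.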
A second counterargument to the alleged relationship between increasing concentration ratios and inflation is found in the reason why an industry becomes more concentrated. Eckard, who found no relationship between concentration ratios and price increases, argues that industries become more concentrated because firms are able to produce at lower cost.15 The sequence of events begins with gains in productivity (most notably, labor productivity) that reduce a firm’s input costs and allow it to price its

14Dallas S. Batten, “The Cost-Push Myth,” this Review (June/July 1981), pp. 20-25.

15E. Woodrow Eckard, Jr., “Concentration Changes and Inflation: Some Evidence,” Journal of Political Economy (October 1981), pp. 1044-51.

output below the level charged by competitors. Consequently, more efficient production and lower prices provide an opportunity for this firm to increase sales which, in turn, tends to make its industry more concentrated. This sequence of events — increased productivity and lower input costs ultimately resulting in increased industry concentration — is supported by empirical evidence provided by Peltzman.16 The concentration ratio-inflation hypothesis also suffers from its own predictions, however: if these models were correct, actual declines in the relative price of food must imply that the food industry has become less concentrated over this period.

Union Power and Prices
Similarly, the existence of union bargaining power might explain a higher level of costs for a firm purchasing this type of labor. And, a higher level of costs might be used to explain a higher price level for the products produced by a firm using union labor. For the same reasons used in the previous argument, however, the existence of bargaining power in wage negotiations is unlikely to explain why nominal or relative food prices would rise continuously.
One extension of the sequence by which union power causes higher prices through increased wages is presented explicitly in a model by Moore and implicitly in some food price studies.17 The argument presented is that union wage negotiations and their wage contracts are ongoing processes that result in continuous upward adjustments in nominal wage levels. Further, it is recognized that because wages are just one price among all prices, an increase in the relative price of labor necessarily must be offset by a decline in the relative price of one or more other goods unless the money stock is increased. So, instead of an adjustment of relative prices and wages, the models argue that the Federal Reserve will monitor nominal wage increases and “ratify” them by increasing the money supply. Increases in the growth rate of the money stock will cause inflation, however, and therefore will reduce the purchasing power of wages as product prices increase. This reduction in purchasing power will, it is alleged, set off another round of wage increases to re-establish purchasing power. But, the effort is futile as the money

16Sam Peltzman, “The Gains and Losses from Industrial Concentration,” Journal of Law and Economics (October 1977), pp. 229-63.
17Basil J. Moore, “Monetary Factors,” in Alfred S. Eichner, ed., A Guide to Post-Keynesian Economics (M. E. Sharpe and Co., 1979), pp. 120-38.



stock grows again and the rate of inflation increases further.
Although a plausible explanation for ongoing increases in food prices, this type of model rests on the assumptions that (a) wage increases established by union power cause increases in product prices, and (b) the Federal Reserve will ratify nominal wage increases with an expansion of the money stock. These are testable hypotheses of real-world behavior. But, an empirical investigation of these relationships rejected the notions that wage increases cause increases in food prices and that the growth rate of the money stock responds to changes in nominal wages.18 Therefore, in the one case when unions and food prices might be related, the statistical evidence does not support any direct linkage between wage rates and food prices.
18M. Belongia, “A Note on the Specification of Wage Rates in Cost-Push Models of Food Price Determination,” Southern Journal of Agricultural Economics (December 1981), pp. 119-24.



CONCLUSIONS

Changes in food prices since 1970 have been attributed to a variety of sources. These explanations, however, often are based on some confusion over the basic distinction between isolated changes in relative prices and ongoing changes in nominal price levels. After accounting for this distinction, statistical analysis of the data suggests that the recent increases in food prices are increases in nominal price levels that share an approximate one-to-one relationship with past rates of money growth. Competing explanations of food price behavior — unionization, oligopoly power and rising input prices, among others — actually predict results that are contrary to the observed data over this period. Specifically, competing models are based on theories that predict increases in the relative price of food; in fact, the relative price of food has declined over much of the sample period. Relating money growth to food prices appears to offer a better explanation of what actually produced the food price increases during the 1970s, and what is likely to do the same in the 1980s.

Polynomial Distributed Lags and the
Estimation of the St. Louis Equation
DALLAS S. BATTEN and DANIEL L. THORNTON

SINCE its introduction in 1968 to investigate the
relative impact of monetary and fiscal actions on economic activity, the St. Louis equation has been the focus of considerable criticism.1 Much of this criticism stemmed from the fact that Andersen and Jordan’s conclusions were substantially different from those of the larger econometric models. In particular, they found that changes in the money stock have a significant, lasting impact on nominal income, while changes in high-employment government expenditures and revenues, although having a short-run impact, have no significant, lasting effect.
Criticism of the St. Louis equation generally has fallen into two categories: the specification of the equation and the use of the polynomial distributed lag (PDL) estimation technique.2 The second category has

The authors would like to thank R. Carter Hill and Thomas B. Fomby for their suggestions and comments.
1The St. Louis equation first appeared in Leonall C. Andersen and Jerry L. Jordan, “Monetary and Fiscal Actions: A Test of Their Relative Importance In Economic Stabilization,” this Review (November 1968), pp. 11-24.
2There have been three major criticisms of the specification of the St. Louis equation. First, since the equation is not derived explicitly from a structural macroeconomic model, relevant exogenous, right-hand-side variables may be excluded, and, as a result, the equation may be misspecified. See, for example, Franco Modigliani and Albert Ando, “Impacts of Fiscal Actions on Aggregate Income and the Monetarist Controversy: Theory and Evidence,” in Jerome L. Stein, ed., Monetarism, vol. 1, Studies in Monetary Economics (North-Holland, 1976), pp. 17-42; and Robert J. Gordon, “Comments on Modigliani and Ando,” in Monetarism, pp. 52-66.
Second, failure to specify the appropriate indicators of monetary and fiscal actions may distort their exhibited relative importance. See Frank De Leeuw and John Kalchbrenner, “Monetary and Fiscal Actions: A Test of Their Relative Importance in Economic Stabilization — Comment,” this Review (April 1969), pp. 6-11; Edward M. Gramlich, “The Usefulness of Monetary and Fiscal Policy as Discretionary Stabilization Tools,” Journal of Money, Credit, and Banking (May 1971), pp. 506-32; and E. Gerald Corrigan, “The Measurement and Importance of Fiscal Policy Changes,” Federal Reserve Bank of New York Monthly Review (June 1970), pp. 133-45.



received far less attention in the literature, and investigations of it have been conducted in a far less systematic manner than investigations of the other category. Consequently, we have undertaken a thorough examination of the use of the PDL estimation technique to determine whether the conclusions of the St. Louis equation are sensitive to either the lag structure employed or the polynomial restrictions imposed.

A BRIEF SURVEY OF THE ST. LOUIS
EQUATION
The St. Louis equation has not changed substantially
since its introduction. The original specification was:
(1) ΔYt = α + Σ(i=0 to 3) βi ΔMt−i + Σ(i=0 to 3) γi ΔGt−i + Σ(i=0 to 3) δi ΔRt−i + et,

where Y = nominal GNP,
M = a monetary aggregate (either M1 or the monetary base),
G = high-employment federal government expenditures,
Finally, ordinary least squares (OLS) estimates of the parameters will exhibit simultaneous equation bias if the right-hand-side variables are not exogenous with respect to nominal income. See Stephen M. Goldfeld and Alan S. Blinder, “Some Implications of Endogenous Stabilization Policy,” Brookings Papers on Economic Activity (3: 1972), pp. 585-640; Robert J. Gordon, “Notes on Money, Income, and Gramlich,” Journal of Money, Credit, and Banking (May 1971), pp. 533-45; De Leeuw and Kalchbrenner, “Monetary and Fiscal Actions: Comment;” J. W. Elliott, “The Influence of Monetary and Fiscal Actions on Total Spending,” Journal of Money, Credit, and Banking (May 1975), pp. 181-92; Keith M. Carlson and Scott E. Hein, “Monetary Aggregates as Monetary Indicators,” this Review (November 1980), pp. 12-21; and R. W. Hafer, “The Role of Fiscal Policy in the St. Louis Equation,” this Review (January 1982), pp. 17-22.



R = high-employment federal government revenues, and
e = error term.3

The Δs indicate that all variables are first differences (i.e., ΔYt = Yt − Yt−1). The coefficients of each lagged variable were constrained to lie on a fourth degree polynomial with both endpoint coefficients for each variable constrained to equal zero.4 In the original article, longer lag lengths were estimated but, since no coefficient past the third lag was statistically significant, these lags were excluded. None of the reported results indicated any investigation of different lag lengths or different polynomial degrees for each variable individually.5 In addition, equation 1 also was estimated in a modified form by combining the high-employment government spending and revenue terms into the high-employment surplus/deficit (i.e., R-G).
When Andersen and Carlson made the St. Louis equation the cornerstone of the St. Louis model, it contained the contemporaneous value and four lags of ΔM and ΔG; ΔR, however, was excluded from the equation.6 The same degree polynomial was employed, and the endpoint constraints were imposed.
Many studies of the estimation of the St. Louis equation, both critical and supportive, appeared during the 1968-1975 period. These studies investigated, among other things, the sensitivity of the original results to the choice of lag structure and, indirectly, the appropriateness of the restrictions imposed by the use of a PDL model.7 Frequently, however, these studies

3Andersen and Jordan, “Monetary and Fiscal Actions.”
4Without these constraints, the use of a PDL model would have been erroneous, as each variable in the original equation had only four coefficients in its lag structure while five parameters are needed to construct a fourth degree polynomial; the imposition of the endpoint constraints reduces the number of parameters to three. Thus, the use of a PDL model in the original St. Louis equation conserves three degrees of freedom.
5Andersen, in a subsequent paper, did investigate longer lag lengths (again with the same lag length specified for each variable) using the minimum standard error of the regression as the criterion for choosing the appropriate lag structure. He concluded that, based on the above criterion, the appropriate lag structure was longer than the one chosen originally, but that the qualitative results were not sensitive to the lag structure chosen. See Leonall C. Andersen, “An Evaluation of the Impacts of Monetary and Fiscal Policy on Economic Activity,” Proceedings of the Business and Economic Statistics Section (American Statistical Association, 1969), pp. 233-40.
6Leonall C. Andersen and Keith M. Carlson, “A Monetarist Model for Economic Stabilization,” this Review (April 1970), pp. 7-25.
7Peter Schmidt and Roger N. Waud, “The Almon Lag Technique and the Monetary Versus Fiscal Policy Debate,” Journal of the American Statistical Association (March 1973), pp. 11-19; Elliott, “The Influence of Monetary and Fiscal Actions;” Leonall C.




made several changes simultaneously (e.g., employing different measures of monetary and/or fiscal policy actions and imposing a different polynomial degree and/or a different lag structure), so that it is difficult to identify the marginal impact of any individual change.8 Moreover, with one exception, the polynomial restrictions were never examined directly.9
Schmidt and Waud were the first to investigate the lag lengths for the individual variables of the St. Louis equation. They did so, however, within the framework of a fourth degree polynomial.10 They refrained from using endpoint constraints, arguing that the behavior of the polynomial outside of the range defined by the parameters is irrelevant. Using the minimum standard error as their criterion, they determined the appropriate lag structure for the original equation to be six lags of ΔM, five lags of ΔG and seven lags of ΔR. Despite these changes, their results were not qualitatively different from those of Andersen and Jordan.
Elliott attempted to examine systematically the sensitivity of the results to the choice of lag structure and the impact of the polynomial restrictions. Using a fourth degree PDL procedure, he estimated the equation as modified by Andersen and Carlson with four, eight and twelve lags for each variable. He also employed both ordinary least squares (OLS) and Shiller’s method of fitting lags with smoothness priors. His results indicated that the conclusions drawn from the estimation of the St. Louis equation do not depend importantly upon the lag structure chosen or the restrictions imposed by using a fourth degree PDL. Elliott did not conduct statistical tests of these propositions. Instead, he based his conclusions on a casual comparison of the results. Furthermore, he consid-

Andersen, “An Evaluation of the Impacts of Monetary and Fiscal Policy on Economic Activity;” Corrigan, “The Measurement and Importance of Fiscal Policy Changes;” De Leeuw and Kalchbrenner, “Monetary and Fiscal Actions: Comment;” William L. Silber, “The St. Louis Equation: ‘Democratic’ and ‘Republican’ Versions and Other Experiments,” The Review of Economics and Statistics (November 1971), pp. 362-67; Gramlich, “The Usefulness of Monetary and Fiscal Policy;” and Leonall C. Andersen and Denis S. Karnosky, “The Appropriate Time Frame for Controlling Monetary Aggregates: The St. Louis Evidence,” in Controlling Monetary Aggregates II: The Implementation, Proceedings of a Conference Sponsored by the Federal Reserve Bank of Boston (Series No. 9, 1972), pp. 147-77.
8For example, see Corrigan, “The Measurement and Importance of Fiscal Policy Changes;” Silber, “The St. Louis Equation: ‘Democratic’ and ‘Republican’ Versions;” Gramlich, “The Usefulness of Monetary and Fiscal Policy;” and De Leeuw and Kalchbrenner, “Monetary and Fiscal Actions: Comment.”
9The one exception is Elliott, “The Influence of Monetary and Fiscal Actions.”
10Schmidt and Waud, “The Almon Lag Technique.”


ered only three possible lag structures (which were assumed to be the same for each distributed lag variable) and only a fourth degree polynomial.
After the Andersen-Carlson modifications of the original Andersen-Jordan equation, the only substantive change in the equation took place as a result of an exchange between Friedman and Carlson in the late 1970s.11 In updating the sample period over which the equation had been estimated, Friedman noticed that the cumulative effect of government spending became statistically significant. In his response Carlson pointed out that when the original sample was expanded, the standard error of the regression nearly doubled. This indicated that these errors were heteroscedastic.12 Using annual rates of change in place of the original first differences of the variables, Carlson respecified the equation.13 In this form, the errors were homoscedastic and the cumulative effect of government spending was no longer statistically significant. Since the Friedman-Carlson exchange, the growth rate specification (or an approximately equivalent alternative, first differences in natural logarithms) has been the widely accepted one.14
In summary, even though a number of studies have attempted to investigate the effects of the lag length and PDL specification of the St. Louis equation, relatively little work has been directed at investigating

11Benjamin M. Friedman, “Even the St. Louis Model Now Believes in Fiscal Policy,” Journal of Money, Credit, and Banking (May 1977), pp. 365-67; and Keith M. Carlson, “Does the St. Louis Equation Now Believe in Fiscal Policy?” this Review (February 1978), pp. 13-19.
12When the variance-covariance matrix is misspecified, the estimated t-ratios are biased, and neither the direction nor extent of the bias can be determined a priori. See G. S. Watson, “Serial Correlation in Regression Analysis. I,” Biometrika (December 1955), pp. 327-41.
13This re-specification was proffered as an alternative to first differences in the original Andersen-Jordan article. John Vrooman, “Does the St. Louis Equation Even Believe in Itself?” Journal of Money, Credit, and Banking (February 1979), pp. 111-17, attempts to correct for heteroscedasticity in the first difference specification. He does so by dividing the observation matrix by the square-root of ΔYt. This transformation, however, creates correlation between the error term and the right-hand-side variables — a violation of one of the classical assumptions of ordinary least squares estimation.

14See, for example, Keith M. Carlson, “Money, Inflation, and Economic Growth: Some Updated Reduced Form Results and Their Implications,” this Review (April 1980), pp. 13-19; Carlson and Hein, “Monetary Aggregates as Monetary Indicators;” John A. Tatom, “Energy Prices and Short-Run Economic Performance,” this Review (January 1981), pp. 3-17; Laurence H. Meyer and Chris Varvares, “A Comparison of the St. Louis Model and Two Variations: Predictive Performance and Policy Implications,” this Review (December 1981), pp. 13-25; and Hafer, “The Role of Fiscal Policy in the St. Louis Equation.”




and testing the propriety of the polynomial constraints or the lag structure employed. Furthermore, most previous investigations have been conducted using the first difference specification of the equation. Thus, whether the policy conclusions drawn from the estimation of the equation (especially for the growth rate specification) are influenced significantly by the choice of lag length and polynomial restrictions employed remains unresolved.

POLYNOMIAL DISTRIBUTED LAGS
The PDL estimation technique forces the coefficients of each lagged variable of an equation to lie on a polynomial of degree p. In the presence of a high degree of multicollinearity, OLS estimates are not precise. Thus, the rationale for the use of the PDL technique is that it increases the precision of the estimates. Estimates of the individual lag weights, however, will be biased generally unless the correct lag length and degree of polynomial are specified.15 Therefore, it is important that the appropriate specification be determined.
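The polynomial restriction can be written as β = Hδ, so a PDL model can be estimated by OLS on transformed regressors: regress y on XH to get δ, then recover the lag weights as Hδ. The following is a sketch on simulated data (the series, lag length ℓ = 10 and degree p = 3 are illustrative, not the equation's):

```python
import numpy as np

# PDL estimation as OLS on transformed regressors: constrain the lag weights
# b_i to lie on a degree-p polynomial, b_i = d_0 + d_1*i + ... + d_p*i**p,
# i.e., beta = H @ delta with H[i, j] = i**j. Simulated data, not the article's.
rng = np.random.default_rng(0)
ell, p, T = 10, 3, 200                 # lag length, polynomial degree, sample size
x = rng.normal(size=T + ell)
# Column i holds x lagged i periods, for lags 0..ell.
X = np.column_stack([x[ell - i: T + ell - i] for i in range(ell + 1)])
true_beta = np.array([(i + 1) * (ell + 1 - i) / 30.0 for i in range(ell + 1)])
y = X @ true_beta + 0.1 * rng.normal(size=T)

H = np.vander(np.arange(ell + 1), p + 1, increasing=True).astype(float)
Z = X @ H                              # transformed regressors
delta, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = H @ delta                   # restricted lag-weight estimates
assert np.max(np.abs(beta_hat - true_beta)) < 0.2
```

Because the true weights above happen to lie on a low-degree polynomial, the restricted estimates recover them closely; when the polynomial degree is too low, the same procedure produces the bias the text describes.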
There are a number of procedures and criteria for determining the appropriate lag length and polynomial degree.16 We use a computationally efficient procedure outlined recently by Pagano and Hartley (hereafter PH).17 Details of the PH technique and other relevant considerations are presented in the appendix.
When Almon first introduced PDL models, she suggested that endpoint constraints always be employed.

15Let ℓ, p and ℓ*, p* denote the assumed and correct lag length and degree of polynomial, respectively. Estimates of the parameter vector will be biased if (a) ℓ = ℓ* and p < p*, (b) ℓ < ℓ* and p = p* or (c) ℓ > ℓ*, p = p* and ℓ − ℓ* > p*. In the instance where ℓ − ℓ* ≤ p*, the polynomial distributed lag estimates may be biased, but need not be. That is, there are restrictions that may or may not be satisfied by the data. Furthermore, PDL estimators will be inefficient if ℓ = ℓ* and p > p*. See P. K. Trivedi and A. R. Pagan, “Polynomial Distributed Lags: A Unified Treatment,” Economic Studies Quarterly (April 1979), pp. 37-49.
16See Trivedi and Pagan, “Polynomial Distributed Lags: A Unified Treatment;” D. F. Hendry and A. R. Pagan, “Distributed Lags: A Survey of Some Recent Developments,” unpublished manuscript; Robert J. Shiller, “A Distributed Lag Estimator Derived from Smoothness Priors,” Econometrica (July 1973), pp. 775-88; J. D. Sargan, “The Consumer Price Equation in the Post War British Economy: An Exercise in Equation Specification Testing,” The Review of Economic Studies (January 1980), pp. 113-35; and George G. Judge and others, The Theory and Practice of Econometrics (John Wiley and Sons, Inc., 1980), chap. 11.
17See Marcello Pagano and Michael J. Hartley, “On Fitting Distributed Lag Models Subject to Polynomial Restrictions,” Journal of Econometrics (June 1981), pp. 171-98.



The suggested endpoint constraints take the form

βℓ+1 = β−1 = 0,

where ℓ is the chosen lag length. Although the endpoint constraints put explicit restrictions on the distributed lag weights outside of their relevant range, they also imply homogeneous restrictions on the lag weights inside the range via homogeneous restrictions on the polynomial coefficients.18 Thus, the endpoint constraints add two additional homogeneous restrictions for each PDL variable to those already implied by the PDL model. The problem is that endpoint constraints have no basis in either economic or econometric theory, as Schmidt and Waud have pointed out.19 As a result, they represent a set of ad hoc restrictions whose sole purpose is to increase the efficiency of estimation. Nevertheless, their validity can be tested.
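One way to see the two extra restrictions per variable is to reparameterize the lag polynomial so that it vanishes at i = −1 and i = ℓ + 1 by construction: writing it as (i + 1)(ℓ + 1 − i) times a polynomial of degree p − 2 removes exactly two parameters. A minimal sketch (the parameter values are illustrative, not estimates):

```python
import numpy as np

# Endpoint constraints force the lag polynomial to zero at i = -1 and
# i = ell + 1. Writing it as q(i) = (i + 1)*(ell + 1 - i)*r(i), with r of
# degree p - 2, satisfies both constraints by construction, which is why
# they remove two parameters per PDL variable. Illustrative values only.
ell, p = 4, 4
i = np.arange(-1, ell + 2)             # positions -1, 0, ..., ell, ell+1

def constrained_weights(theta):
    # r(i) is a degree p-2 polynomial; np.polyval wants highest degree first.
    r = np.polyval(theta[::-1], i)
    return (i + 1) * (ell + 1 - i) * r

w = constrained_weights(np.array([1.0, 0.5, -0.1]))  # p - 2 = 2 -> 3 parameters
assert w[0] == 0.0 and w[-1] == 0.0    # both endpoint weights vanish
```

Whatever the three free parameters are, the weights at the two endpoints are identically zero, which is the homogeneous-restriction content of the constraints.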

APPLICATION TO THE ST. LOUIS
EQUATION
To investigate the appropriate lag lengths and
polynomial degrees for the St. Louis equation, we
employ the growth rate specification20

Ẏt = α + Σ(i=0 to J) βi Ṁt−i + Σ(i=0 to K) γi Ġt−i + et.

The dots over each variable represent quarter-to-quarter annualized rates of change, and Y, M and G represent nominal GNP, money (the M1 definition) and high-employment government expenditures, respectively. The estimation period considered is II/1962 to III/1982.
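The dotted variables are computed from quarterly levels as compounded annual rates. A minimal sketch of the calculation, with hypothetical GNP levels (the numbers are illustrative only):

```python
# Quarter-to-quarter annualized rate of change: 100 * ((X_t / X_{t-1})**4 - 1).
# The GNP levels below are hypothetical, chosen only to illustrate the formula.
def annualized_growth(levels):
    return [100.0 * ((cur / prev) ** 4 - 1.0)
            for prev, cur in zip(levels, levels[1:])]

gnp = [3000.0, 3030.0, 3075.5, 3090.0]
rates = annualized_growth(gnp)
# A 1 percent quarterly rise compounds to about a 4.06 percent annual rate.
assert abs(rates[0] - 4.0604) < 1e-3
```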

Lag Length Selection
The first step of the PH technique is to select
18This can be seen by noting that the endpoint constraints require
δ0 + δ1(−1) + δ2(−1)² + . . . + δp(−1)^p = 0 and
δ0 + δ1(ℓ+1) + δ2(ℓ+1)² + . . . + δp(ℓ+1)^p = 0.
These restrictions can be written as Rδ = 0, because for a PDL model, β = Hδ, so that δ = H⁺β, where H⁺ is a generalized inverse of H. Therefore, Rδ = RH⁺β = R*β = 0. Thus, the endpoint constraints impose a set of homogeneous restrictions R* on β. See Daniel L. Thornton and Dallas S. Batten, “Endpoint Constraints and the St. Louis Equation: A Clarification,” Federal Reserve Bank of St. Louis Research Paper No. 83-001 (1983).
19See Schmidt and Waud, “The Almon Lag Technique,” p. 12.
20We chose to employ this specification because it is the one included in the St. Louis model. For a complete specification of the St. Louis model, see the appendix to Keith M. Carlson, “A Monetary Analysis of the Administration’s Budget and Economic Projections,” this Review (May 1982), pp. 3-14.



appropriate lag lengths (J, K) for money and government expenditure growth. Once these lag lengths are selected, a re-application of the technique results in the selection of the polynomial degrees.21 The PH procedure is somewhat complicated when appropriate lag lengths and polynomial degrees must be selected for two variables.22
The use of the PH technique, like other procedures for specifying a distributed lag model, requires the choice of a maximum lag length (L). We considered two choices of L: 12 and 16.23
An application of the PH technique to the St. Louis equation results in a choice of 10 lags on M and 9 on G. This selection is basically consistent with the results of a standard F-test.24 Ordinary least squares estimates of this lag specification, as well as the usual specification with four lags on both M and G, are presented in table 1. Note that the standard error of the regression is reduced substantially and the adjusted R² is increased substantially by including the additional distributed lag variables. Furthermore, the coefficients on the longest lag terms are significant in the longer lag specification. These results suggest that this specification is preferable. Indeed, a likelihood ratio test of the restrictions implied by the current specification rejects them at the 5 percent level.25
Nevertheless, it is interesting to note that the conclusions about the long-run efficacy of monetary and fiscal policy are unaffected by the choice of lag structure. The hypothesis of the long-run ineffectiveness of money can be rejected for both lag specifications; the
21Standard statistical procedures cannot be used to select the lag length if the polynomial degree is specified first. See footnote 6 of the appendix for further details.
22The choice of lag length and polynomial degree also involves sequential hypothesis testing. As we note in the appendix, care must be taken in conducting sequential tests. Given the problems with sequential tests (and those of preliminary test estimation), we initially chose a relatively low significance level of 15 percent, opting to guard against incorrectly excluding relevant components of the distributed lag. As a general rule, one would have expected the chosen lag length to be shorter had we used a more common significance level, such as 5 percent. In our case, the lag specification would have been the same had we selected a 5 percent significance level.
23The results for L = 16 were identical to those for L = 12. Thus, the PH technique seems to be relatively insensitive to the choice of L.
24With L = 12 for both M and G, the F-statistic calculated to test the hypothesis that the 10th lag on M is significant was 2.45*. The F-statistics calculated for the same test for the 8th and 9th lags on G were 2.55* and 1.77, respectively. (The * indicates significance at the 10 percent level.)
25The likelihood ratio statistic was 32.13, which compares with a critical value of χ²(11) of 19.68 at the 5 percent level.


Table 1
Ordinary Least Squares Estimates of Alternative Lag Length Specifications of the St. Louis Equation, II/1962-III/1982

                    Estimated Coefficients
Variable     PH Specification      Current Specification
Constant      2.342  (1.56)         1.643  (1.07)
M0            0.767* (4.61)         0.474* (3.37)
M1            0.635* (3.66)         0.441* (3.09)
M2            0.295  (1.80)         0.356* (2.51)
M3           -0.377* (2.36)        -0.179  (1.22)
M4            0.233  (1.38)         0.022  (0.15)
M5           -0.127  (0.68)
M6           -0.134  (0.79)
M7           -0.126  (0.74)
M8            0.297  (1.69)
M9            0.230  (1.15)
M10          -0.530* (2.77)
ΣM            1.163* (4.50)         1.114* (4.69)
G0            0.110* (2.34)         0.108* (2.21)
G1            0.056  (1.24)         0.034  (0.71)
G2           -0.095* (2.11)        -0.096* (2.04)
G3            0.028  (0.61)         0.040  (0.84)
G4           -0.001  (0.03)        -0.004  (0.09)
G5           -0.042  (0.90)
G6            0.095  (1.93)
G7            0.047  (0.92)
G8           -0.116* (2.32)
G9           -0.116* (2.33)
ΣG           -0.034  (0.26)         0.082  (0.82)

              SE = 3.21             SE = 3.58
              R² = 0.47             R² = 0.33
              DW = 2.17             DW = 2.01

*Indicates significance at the 5 percent level. Absolute value of t-statistics in parentheses.

same hypothesis about government expenditures cannot be rejected.

Polynomial Degree Selection
The chosen lag structure is then used to select the appropriate polynomial degree, which is done by re-parameterizing the model and applying the same technique used to select the lag length.
A direct application of the PH technique to the question of polynomial degree selection results in selecting a ninth degree polynomial on M and a seventh degree polynomial on G. The results of conventional F-tests, however, indicate that there are more restrictive specifications that cannot be rejected at the 5 percent level. Given that the polynomial restrictions tend to smooth out the distributed lag weights and, thus, might result in more accurate out-of-sample forecasts, we decided to present the results of both the PDL specification resulting from a strict application of the PH technique and the one determined by employing the greatest number of polynomial constraints that satisfy a conventional F-test at the 5 percent level. The latter specification has a sixth degree polynomial on M and a third degree polynomial on G. The results of the estimation of these specifications (denoted A and B, respectively) and the PDL specification presently used (denoted C) are given in table 2. These equations were estimated with restricted least squares (RLS).26 We believe RLS is preferable to the standard PDL method because it makes the parameter restrictions explicit and permits ease in testing the individual and joint PDL restrictions.
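Restricted least squares imposes linear restrictions Rβ = r directly on the OLS estimate, which is what makes individual and joint restrictions easy to state and test. A generic sketch (the "coefficients sum to 6" restriction and the data are illustrative, not a PDL constraint from the equation):

```python
import numpy as np

# Restricted least squares: impose linear restrictions R @ beta = r on OLS.
# Generic example; the data and the restriction are illustrative only.
def rls(X, y, R, r):
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y                                  # unrestricted OLS
    lam = np.linalg.solve(R @ XtX_inv @ R.T, R @ b - r)
    return b - XtX_inv @ R.T @ lam                         # restricted estimate

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.normal(size=100)
b_r = rls(X, y, np.array([[1.0, 1.0, 1.0]]), np.array([6.0]))
assert abs(b_r.sum() - 6.0) < 1e-8     # the restriction holds exactly
```

Each PDL or endpoint constraint can be written as one row of R, so the restricted and unrestricted fits can be compared restriction by restriction with the usual F machinery.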
It is clear from these results that each of the two longer lag PDL specifications performs better than the current one. Each has a smaller standard error and a larger adjusted R². Nevertheless, it is interesting to note that the tests of the long-run efficacy of the monetary and fiscal policy variables also are insensitive to the PDL specification. The long-run effect of money is not significantly different from one, while the long-run effect of government expenditures is not significantly different from zero, for all three specifications.27 The short-run distributed lag response patterns, however, differ significantly.

Tests of the Endpoint Constraints
As we noted earlier, endpoint constraints represent ad hoc restrictions and, thus, should not be employed routinely. Nevertheless, since the current specification of the St. Louis equation employs polynomial restrictions only in the form of endpoint constraints, we decided to test these constraints for all three specifications. The results of these tests for the relevant joint and individual restrictions are presented in table

26For a discussion of the equivalence between standard PDL estimation and RLS, see Judge and others, The Theory and Practice of Econometrics, pp. 640-42.
27Estimates of two other PDL specifications yielded the same conclusions regarding the efficacy of monetary and fiscal policy. See the appendix for details of these specifications.



Table 2
Estimates of Various PDL Specifications of the St. Louis Equation, II/1962-III/1982

                         Estimated Coefficients
Variable         A                    B                    C
Constant    2.366  (1.56)        2.608  (1.63)        1.799  (1.16)
M0          0.642* (4.14)        0.557* (3.90)        0.461* (3.87)
M1          0.771* (5.01)        0.677* (5.01)        0.458* (5.62)
M2          0.236  (1.56)        0.198* (2.27)        0.244* (2.46)
M3         -0.312* (2.15)       -0.053  (0.57)        0.015  (0.19)
M4          0.075  (0.57)       -0.061  (0.78)       -0.092  (0.76)
M5          0.080  (0.63)       -0.037  (0.42)
M6         -0.243  (1.85)       -0.081  (1.05)
M7         -0.080  (0.51)       -0.087  (0.96)
M8          0.209  (1.27)        0.114  (1.20)
M9          0.410* (2.30)        0.355* (2.19)
M10        -0.645* (3.50)       -0.501* (2.64)
ΣM          1.143* (4.38)        1.081* (3.96)        1.086* (4.52)
G0          0.118* (2.52)        0.106* (2.32)        0.094* (2.18)
G1          0.039  (0.88)        0.022  (0.80)        0.022  (0.65)
G2         -0.068  (1.64)       -0.016  (0.58)       -0.041  (1.12)
G3         -0.002  (0.06)       -0.021  (0.82)       -0.026  (0.77)
G4          0.011  (0.31)       -0.008  (0.35)        0.034  (0.78)
G5         -0.016  (0.43)        0.012  (0.54)
G6          0.041  (1.10)        0.024  (0.94)
G7          0.096* (2.18)        0.016  (0.60)
G8         -0.125* (2.54)       -0.027  (1.07)
G9         -0.120* (2.42)       -0.116* (2.53)
ΣG         -0.026  (0.19)       -0.008  (0.07)        0.110  (0.82)

            SE = 3.24            SE = 3.42            SE = 3.65
            R² = 0.46            R² = 0.39            R² = 0.31
            DW = 2.27            DW = 2.41            DW = 2.17

*Indicates significance at the 5 percent level. Absolute value of t-statistics in parentheses. Specification A has ninth degree and seventh degree polynomials on M and G, respectively. Specification B has sixth and third degree polynomials on M and G, respectively. Specification C is the current specification with four lags on both M and G and endpoint constraints.

The test of all four endpoint constraints rejects these constraints for both specifications A and B, but not for the current specification. The head constraint on M, however, is never rejected by the F-test, and the tail constraint is rejected only for specification B. Nevertheless, in general, the endpoint constraints do not fare well when applied to the longer lag specifications.
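The F-tests reported in table 3 are standard tests of linear restrictions, comparing restricted and unrestricted residual sums of squares. A minimal sketch with hypothetical data (the two restrictions here simply zero out the first and last lag weights; they stand in for the article's head and tail endpoint constraints, which operate at lags -1 and ℓ+1):

```python
import numpy as np

def f_test(X, y, R):
    """F statistic for q homogeneous restrictions R b = 0:
    F = ((RSS_r - RSS_u)/q) / (RSS_u/(T - k))."""
    T, k = X.shape
    q = R.shape[0]
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y                        # unrestricted OLS
    A = XtX_inv @ R.T
    b_r = b - A @ np.linalg.solve(R @ A, R @ b)  # restricted estimate
    rss_u = float((y - X @ b) @ (y - X @ b))
    rss_r = float((y - X @ b_r) @ (y - X @ b_r))
    return ((rss_r - rss_u) / q) / (rss_u / (T - k))

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 5))
y = X @ np.array([0.1, 0.3, 0.4, 0.3, 0.1]) + rng.normal(scale=0.1, size=80)
R = np.array([[1., 0., 0., 0., 0.],    # first lag weight zero
              [0., 0., 0., 0., 1.]])   # last lag weight zero
F = f_test(X, y, R)
print(F >= 0)   # True; in practice F is compared with the F(2, 75) critical value
```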

Out-of-Sample Forecast Comparisons
While it is clear that the alternative PDL representations of the St. Louis equation perform better on an in-sample comparison, it is interesting to see how well they perform on the basis of out-of-sample forecasts. To this end, we estimated these specifications from II/1962 to a terminal period and forecasted out-of-sample for four quarters. We then added four quarters to our estimation period, re-estimated the equation and repeated the process. We did this for six periods beginning with a terminal date of III/1976, generating 24 out-of-sample forecasts of the growth of nominal GNP. The root mean square errors (RMSEs) of these forecasts are summarized in table 4. Both the PH specification and the current specification do about equally well by a RMSE criterion over the entire period; there are significant differences, however, in


Chart 1
Forecast Errors of Alternative Specifications of the St. Louis Equation
[Line chart, 1976-1982: actual minus predicted nominal GNP growth, in percent, for specifications B and C]

Table 3
Tests of Endpoint Constraints for Various PDL Specifications of the St. Louis Equation

                           F-Statistics for Constraints
Specification/Variable     Head      Tail      Head and tail
Specification A
  M                        3.22      1.99          1.61
  G                        3.66      8.42*         4.21*
  M and G                                          3.15*
Specification B
  M                        2.40      7.09*         3.59*
  G                        6.46*     6.86*         4.72*
  M and G                                          3.74*
Specification C
  M                        0.81      1.84          1.13
  G                        1.83      4.11*         2.18
  M and G                                          1.68

*Indicates significance at the 5 percent level.

Table 4
Root Mean Square Error of the Forecast for Various Specifications of the St. Louis Equation

Period                 A       B       C
IV/1976-III/1982      4.77    4.49    4.70
IV/1976-III/1977      4.13    2.77    2.98
IV/1977-III/1978      3.42    5.31    6.28
IV/1978-III/1979      5.35    3.81    2.02
IV/1979-III/1980      4.17    2.89    4.17
IV/1980-III/1981      6.29    5.96    4.87
IV/1981-III/1982      4.72    5.16    6.25

their subperiod forecast performances.28 The most restricted PDL specification shows an improvement over the current specification, reducing the out-of-sample RMSE by nearly 5 percent over the period and producing a smaller RMSE of the forecast in four of the six subperiods. A graph of the out-of-sample forecast errors for specifications B and C is presented in chart 1. It is clear from chart 1 that both specifications produce similar patterns of forecast errors over the period. The only significant exception occurs in the third quarter of 1982, when specification B underpredicts nominal GNP growth by about as much as specification C overpredicts it.
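The rolling scheme described above (estimate through a terminal quarter, forecast four quarters ahead, extend the sample, repeat) can be sketched as follows. The data here are simulated stand-ins; only the design, six rounds of four forecasts for 24 errors in all, mirrors the article's:

```python
import numpy as np

def rolling_rmse(X, y, first_train, step=4):
    """Re-estimate by OLS through a terminal observation, forecast the next
    `step` observations, extend the estimation sample, and repeat; return
    the RMSE of the pooled out-of-sample forecast errors."""
    errors = []
    t = first_train
    while t + step <= len(y):
        b, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
        errors.extend(y[t:t + step] - X[t:t + step] @ b)
        t += step
    return float(np.sqrt(np.mean(np.square(errors))))

# Simulated quarterly sample: estimation through observation 58 (the article's
# III/1976 terminal date), then six four-quarter forecast rounds.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(82), rng.normal(size=(82, 2))])
y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(scale=0.5, size=82)
rmse = rolling_rmse(X, y, first_train=58)
print(rmse > 0)   # True
```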

SUMMARY AND CONCLUSIONS
This paper has investigated the lag length and polynomial degree specifications of the St. Louis equation to determine whether its conclusions about the long-run efficacy of monetary policy and inefficacy of fiscal policy are affected by the lag length employed or its polynomial distributed lag specification. In so doing, we have employed a computationally efficient method for determining the appropriate lag length and polynomial degree of a general polynomial distributed lag model.

28One could argue that the result may be biased in favor of our PDL specification because the lag structure was chosen over the entire period. Indeed, the lag structure appears to lengthen during the latter part of the sample. The estimated lag structure for the period ending III/1976 was four on M and six on G. Thus, the lag structure chosen was nearly that of the current specification. The PDL specification was a first degree polynomial on M and a sixth degree on G. When this specification was used to forecast out-of-sample, it performed somewhat worse than the current specification, with a RMSE of 4.89. Our estimates indicate that the lag structure lengthened when the terminal date of the sample period was extended to III/1979. If the shorter lag structure were used over the first three subperiods and the longer lag structure (specification B) used over the last three, the RMSE for the entire period would be 4.39, somewhat better than either specification alone.
Our results indicate that the important policy conclusions of the St. Louis equation are insensitive to the lag length specified and to the polynomial restrictions imposed. In particular, the long-run effectiveness of money growth and the long-run ineffectiveness of growth in high-employment government expenditures are substantiated by ordinary least squares estimates of model parameters using both the Pagano-Hartley-determined lag length and the current lag length specifications, as well as by estimates of several PDL specifications. Thus, there is no evidence that the conclusion of the St. Louis equation can be traced to these types of econometric misspecification.
We did find a PDL specification that outperforms the current specification by both in-sample and out-of-sample criteria. This specification has considerably longer lags on both the monetary and expenditure variables and more polynomial restrictions.
Finally, we found that the Pagano-Hartley technique, used in conjunction with standard F-tests, is a convenient and computationally efficient tool for selecting the lag length and polynomial degree of a PDL model.

APPENDIX
Pagano and Hartley have recently developed a methodology for determining the appropriate lag length and degree of polynomial which is computationally efficient.1 In order to illustrate the use of the Pagano-Hartley (PH) technique, consider the general distributed lag model

(A.1)   Y_t = Σ_{k=1}^{K} μ_k Z_{kt} + Σ_{j=0}^{ℓ*} β_j X_{t−j} + ε_t,   t = 1, 2, ..., T,

where ε_t ~ NID(0, σ²), and where Z_{kt} is the kth independent variable and X_t is an independent variable which affects Y_t with a lag of length ℓ*.

The polynomial distributed lag (PDL) model involves imposing restrictions on the β coefficients such that

β_j = δ_0 + δ_1 j + δ_2 j² + ... + δ_{p*} j^{p*}.

That is, each of the individual lag weights falls on a polynomial of degree p*, where p* < ℓ*.2 These restrictions can be written more compactly in matrix notation as

β = Hδ,

where β = (β_0, β_1, ..., β_{ℓ*})′, δ = (δ_0, δ_1, ..., δ_{p*})′, and H is a (ℓ* + 1) by (p* + 1) matrix of coefficients.3 Substituting the above restrictions into the model, we get

(A.1′)   Y_t = Σ_{k=1}^{K} μ_k Z_{kt} + Σ_{q=0}^{p*} δ_q X*_{qt} + ε_t,

where X*_{qt} = Σ_{j=0}^{ℓ*} X_{t−j} h_{j+1,q+1}, and where h_{j+1,q+1} is the (j+1)th, (q+1)th element of H, j = 0, 1, 2, ..., ℓ* and q = 0, 1, 2, ..., p*. It is clear that imposing the polynomial restrictions reduces the number of parameters by ℓ* − p* and, thus, imposes ℓ* − p* homogeneous restrictions on the parameter vector β. Thus, estimating equation A.1′ is tantamount to estimating equation A.1 subject to homogeneous restrictions of the form Rβ = 0, where R is a (ℓ* − p*) by (ℓ* + 1) matrix.4 It should be apparent that the validity of the

1Pagano and Hartley, "On Fitting Distributed Lag Models."

2Strictly speaking, p* could equal ℓ*; however, there would be no polynomial restrictions. Thus, it is doubtful that one would describe a model as a PDL if p* = ℓ*.

3Specifically, H takes the general form

H = [ 1    0     0      ...   0
      1    1     1      ...   1
      1    2     2²     ...   2^{p*}
      ...
      1    ℓ*    ℓ*²    ...   ℓ*^{p*} ]

4There are a number of ways of generating the restriction matrix, R. See Shiller, "A Distributed Lag Estimator;" and Judge and others, The Theory and Practice of Econometrics (John Wiley and Sons, Inc., 1980), pp. 642-44.
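The matrix H and the transformed regressors X* are easy to construct directly. A sketch in the notation above (the function names and data handling are ours, for illustration):

```python
import numpy as np

def pdl_H(n_lags, degree):
    """H: (n_lags+1) x (degree+1), with row j = (1, j, j^2, ..., j^degree),
    so that beta = H @ delta places the lag weights on a polynomial."""
    j = np.arange(n_lags + 1)
    return np.vander(j, degree + 1, increasing=True).astype(float)

def pdl_regressors(x, n_lags, degree):
    """Stack x and its lags into X (T x (n_lags+1)), then form
    X* = X @ H, the regressors attached to delta in equation A.1'."""
    T = len(x) - n_lags
    X = np.column_stack([x[n_lags - j : n_lags - j + T]
                         for j in range(n_lags + 1)])
    return X @ pdl_H(n_lags, degree)

H = pdl_H(4, 2)      # ell* = 4, p* = 2: imposes 4 - 2 = 2 restrictions
print(H.shape)       # (5, 3)
print(H[3])          # row for lag 3: [1. 3. 9.]
```

Regressing Y on Z and these X* columns by OLS is the estimation of A.1′ described in the text.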


polynomial restrictions, including the endpoint constraints, can be tested easily.5

Of course, the correct values of the lag length and degree of the polynomial are generally unknown. Since the selection of an improper lag length or polynomial degree generally leads to biased coefficient estimates, the selection of ℓ and p is extremely important. The selection process, however, is not easy. For one thing, the appropriate lag length cannot be determined using standard procedures if the degree of the polynomial has been selected.6 Even though a number of techniques have been suggested for selecting ℓ and p, the PH method was chosen, in part for its computational convenience.7

The PH method proceeds by determining the lag length and then the degree of the polynomial. The PH technique can best be illustrated by rewriting equation A.1 in matrix form as

(A.2)   Y = Zμ + Xβ + ε,

where Z and X are T by K and T by (ℓ* + 1) matrices of observations on the independent variables, and μ and β are K by 1 and (ℓ* + 1) by 1 vectors of parameters. The procedure begins by choosing a maximum lag length L. Equation A.2 with the maximum lag length can be rewritten as

(A.3)   Y_L = W_L φ_L + ε_L,

where W_L = [Z : X_L] and φ_L = [μ′ : β_L′]′. The observation matrix W_L is then decomposed to

W_L = Q_L N_L

by the Gram-Schmidt decomposition. Here Q_L is a matrix whose columns form an orthonormal basis for the column space of W_L, and N_L is an upper triangular matrix with positive diagonal elements.8 Equation A.3 now can be rewritten as

Y_L = Q_L λ_L + ε_L,

where

λ_L = [λ^μ′ : λ^β′]′ = N_L φ_L.

Given that Q_L is orthonormal, the least squares estimate of λ_L is given by

λ̂_L = [λ̂^μ′ : λ̂^β′]′ = Q_L′ Y_L,

and the structural parameters can be obtained from

N_L φ̂_L = λ̂_L.

An advantage of the PH method comes in noting that the elements of λ̂_L are mutually independent random variables. In particular,

λ̂^β_i ~ NID(λ^β_i, σ²),   i = 0, 1, 2, ..., ℓ*
λ̂^β_i ~ NID(0, σ²),       i = ℓ* + 1, ℓ* + 2, ..., L.

Pagano and Hartley note that there is a one-to-one correspondence between the null hypotheses involving the βs and the λs. Given this and the orthogonality of the PH procedure, the following sets of hypotheses are equivalent:

H_{L−j}: β_L = β_{L−1} = ... = β_{L−j} = 0,   j = 0, 1, 2, ..., L

5There are a number of alternative norms for testing these restrictions. See Judge and others, The Theory and Practice of Econometrics, p. 646.

6This is seen by noting that, once the polynomial degree is selected, alternative lag specifications amount to imposing the polynomial restrictions on different parameter spaces. Thus, restrictions on the lag length are non-nested when p is specified. See Peter Schmidt, "A Modification of the Almon Distributed Lag," Journal of the American Statistical Association (September 1974), pp. 679-81; and Hendry and Pagan, "Distributed Lags: A Survey of Some Recent Developments." In this regard, it would be appropriate to use the maximum R2 criterion as Schmidt and Waud do; however, this procedure may lack power. A more useful procedure has been suggested by Pesaran. Neither procedure, however, provides information concerning the degree of polynomial. See Schmidt and Waud, "The Almon Lag Technique"; and M. H. Pesaran, "On the General Problem of Model Selection," Review of Economic Studies (April 1974), pp. 153-71.

7One attractive method has been suggested by Hendry and Pagan, "Distributed Lags: A Survey of Some Recent Developments." This procedure involves a sequence of hypothesis tests commencing with an initial arbitrary choice of a lag length. While this procedure has potential merit, it is not without its difficulties. Furthermore, it may involve an extremely laborious test procedure when there are two PDL variables, as in the St. Louis equation. For another procedure, see Sargan, "The Consumer Price Equation in the Post War British Economy."



H*_{L−j}: λ^β_L = λ^β_{L−1} = ... = λ^β_{L−j} = 0,   j = 0, 1, 2, ..., L.

Hence, the Gram-Schmidt decomposition provides a convenient basis for testing the null hypothesis that there exists a lag length, ℓ, such that the null hypothesis β_ℓ = 0 can be rejected. If no such ℓ can be found, then there is no distributed lag of X.

The test of the simple hypothesis λ^β_{L−j} = 0 can be carried out by a t-test of the form

t_{L−j} = λ̂^β_{L−j}/s,   j = 0, 1, 2, ..., L,

where s is the standard error estimated from the residuals Y_L − Q_L λ̂_L.

8The Gram-Schmidt procedure is often used when the observation matrix is ill-conditioned. If the diagonal elements are chosen to be positive, as they are in our case, Q_L and N_L are unique; see G. A. F. Seber, Linear Regression Analysis (John Wiley and Sons, Inc., 1977), chapter 11.
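The decomposition and the t-statistics on the λ̂s can be sketched with a standard QR routine, which delivers the same Q_L, N_L pair once signs are fixed so that the diagonal of N_L is positive (hypothetical data; W stands in for [Z : X_L]):

```python
import numpy as np

def ph_orthogonalize(W, y):
    """Decompose W = Q N (orthonormal Q, upper-triangular N with positive
    diagonal) and return lambda_hat = Q'y together with N and Q."""
    Q, N = np.linalg.qr(W)
    s = np.sign(np.diag(N))
    Q, N = Q * s, s[:, None] * N        # enforce a positive diagonal
    return Q.T @ y, N, Q

rng = np.random.default_rng(3)
W = rng.normal(size=(100, 6))
y = W @ rng.normal(size=6) + rng.normal(size=100)
lam, N, Q = ph_orthogonalize(W, y)

# Solving N phi = lambda_hat recovers the ordinary OLS coefficients:
print(np.allclose(np.linalg.solve(N, lam),
                  np.linalg.lstsq(W, y, rcond=None)[0]))   # True

# t-ratio for the last orthogonalized coefficient, as in the text:
resid = y - Q @ lam                     # the OLS residuals y - Q Q'y
s_err = np.sqrt(resid @ resid / (len(y) - W.shape[1]))
t_last = lam[-1] / s_err                # compared with the chosen critical value
```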

21

FEDERAL RESERVE BANK OF ST. LOUIS

APRIL 1983

Because of their common divisor, these t-statistics are not independent; however, they are uncorrelated.9

Pagano and Hartley also suggest that the above hypotheses are equivalent to

H′_{L−j}: λ^β_{L−j} = 0,   j = 0, 1, ..., L,

due to the orthogonality of their procedure. These hypotheses, however, are not equivalent in any direct sense. To see this, recall that

λ_L = N_L φ_L,

where N_L is an upper-triangular matrix with positive diagonal elements. The ith row of N_L can be represented as

N^i_L = (0, ..., 0, η_ii, η_{i,i+1}, ..., η_iL),

where η_ij is the ith-jth element of N_L. Thus, the hypothesis test that λ_L = 0 is given by

λ_L = η_LL β_L = 0.

Likewise, the test that λ_{L−1} = 0 is given by

λ_{L−1} = η_{L−1,L−1} β_{L−1} + η_{L−1,L} β_L = 0,

and so on. Thus, the hypotheses of H′_{L−j} are really tests of linear combinations of the distributed lag weights, where the particular linear combination is determined by the elements of rows of N_L. In practice we found that the absolute value of the diagonal elements of N_L tended to be somewhat large relative to the off-diagonal elements for the lag length selection and very small relative to the off-diagonal elements in the polynomial selection. In the former case, therefore, testing the hypothesis that λ_i = 0 was very near testing the hypothesis that β_i = 0, while in the latter case it was closer to the null hypothesis H*_{L−j}.

Given this, we decided to supplement the use of t-tests on the λs with conventional F-tests of the equivalent hypotheses of H and H*. We recommend that one investigate the N_L matrix to identify the nature of the hypotheses being tested when using the PH t-statistics.

We should note also that the use of the PH method is complicated somewhat by the presence of two distributed lag variables on the right-hand side. One can readily see that, in view of the upper-triangular form of N_L, hypothesis tests involving a second distributed lag will not be consistent with H*_{L−j} unless the Gram-Schmidt procedure is applied to each set of distributed lag regressors separately. Unfortunately, the resulting sets of jointly orthogonal regressors will not themselves be orthogonal to each other. As an alternative, we ran two separate Gram-Schmidt regressions with each distributed lag variable entered last. Furthermore, we did this by reducing by one the lag length or polynomial degree for one variable and holding the maximum lag length or polynomial degree for the other variable (which was entered last) constant. In this way, we determined whether the lag length chosen for one variable was affected by the lag length specified for the other. Of course, we were particularly concerned that the lag length selected for one be the same if the chosen lag length of the other was used instead of L. The procedure had the added advantage of allowing us to calculate an L by L matrix of F-statistics for all possible combinations of lag structures (or in the case of PDL selection, degrees of polynomials) from L orthogonal regressions.10

Hypothesis Testing Considerations
When determining the "correct" lag length using either the t-tests or the F-test, care must be taken in choosing a critical value on which to test the null hypothesis. Two considerations are important. First, the null hypotheses

H_{L−j}: λ^β_{L−j} = 0,   j = 0, 1, 2, ..., L

represent a set of sequential hypotheses. It is usually assumed that these hypotheses are nested so that if any one is true, the preceding hypotheses must be true also and, if any one is false, so must be the succeeding ones. Thus, the null hypothesis becomes more restricted as each successive test is conducted, and the probability of committing a Type I error increases. If we let ξ_j denote the significance level of the jth test, it can be shown that the probability of committing a Type I error for the jth test, α_j, is

α_j = ξ_j                           if j = 1
α_j = ξ_j(1 − α_{j−1}) + α_{j−1}    if j ≥ 2.

Thus, the probability of rejecting the null hypothesis when it is true will rise as the length of the lag is reduced. Anderson suggested that one would like to balance the desirability of not overestimating the lag length with the sensitivity to non-zero coefficients.11 He recommends setting L fairly large, but letting ξ_j be
He recommends setting L fairly large, but letting £j be
9This permits the use of t-tables from Seber. See Seber, Linear Regression Analysis, pp. 404-05.

10This can be seen by noting that the RSS when j lags are omitted is given by

RSS_j = Y_L′Y_L − Σ_{k=1}^{K} (λ̂^μ_k)² − Σ_{k=0}^{L−j−1} (λ̂^β_k)².

11Anderson also provides a test procedure for orthogonal regressors which have some optimal properties; however, the test is somewhat cumbersome. See T. W. Anderson, The Statistical Analysis of Time Series (John Wiley and Sons, Inc., 1971), pp. 30-43.


small for j near L. While no optimal rules exist, Anderson suggests

(A.4)   ξ_j = g(L + 1 − j)/L,   j = 1, 2, 3, ..., L

for subsequent tests. An alternative would be to use the t-tables from Seber.

In addition to the above problem, we have the problem that an estimator based on a prior test is a preliminary test estimator. While nothing is known about such estimators when the sequence of tests is greater than one, it is known that, in the case of one pre-test, the estimator has a risk function which may exceed that of OLS.12 Furthermore, the difference between the risk of the preliminary test estimator and OLS increases as the significance level is reduced. While the optimal critical value will vary with the particular choice of loss function, the evidence suggests that standard significance levels of 5 or 10 percent may be below the optimal level for one pre-test.13 These considerations, coupled with the fact that overestimates of the lag length are less likely to result in bias than underestimates, suggest that one may want to consider an initial value of the significance level that is fairly large.14
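The Type I error recursion and Anderson's declining significance levels (A.4) are straightforward to compute; a sketch with g set to the 15 percent initial level adopted in footnote 14:

```python
def cumulative_alpha(xis):
    """Overall Type I error after each of a sequence of nested tests:
    alpha_1 = xi_1; alpha_j = xi_j*(1 - alpha_{j-1}) + alpha_{j-1}."""
    alpha, out = 0.0, []
    for xi in xis:
        alpha = xi * (1 - alpha) + alpha
        out.append(alpha)
    return out

L, g = 12, 0.15
xis = [g * (L + 1 - j) / L for j in range(1, L + 1)]  # equation A.4
alphas = cumulative_alpha(xis)
print(round(xis[0], 4))           # 0.15 -- the first test uses the full level
print(round(xis[-1], 4))          # 0.0125 -- levels shrink as j nears L
print(alphas == sorted(alphas))   # True -- the cumulative error only accumulates
```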

POLYNOMIAL DEGREE SELECTION
Having selected a lag length, ℓ, the next step is to determine a polynomial degree, p. This can be accom-

12The risk function is E[(φ* − φ)′X′X(φ* − φ)], where φ* is the pre-test estimator of φ.

13For example, Sawa and Hiromatsu have shown that the standard critical values of the t-statistic are substantially above the optimal critical values in the case of a minimax regret loss function with one restriction. On the other hand, Toyoda and Wallace have shown that OLS should always be chosen when the number of linearly independent restrictions is less than five if one wishes to minimize the average regret. See Takamitsu Sawa and Takeshi Hiromatsu, "Minimax Regret Significance Points for a Preliminary Test in Regression Analysis," Econometrica (November 1973), pp. 1093-1101; and T. Toyoda and T. D. Wallace, "Optimal Critical Values for Pre-Testing in Regression," Econometrica (March 1976), pp. 365-75.

14To guard against incorrectly excluding components of the distributed lag or imposing invalid polynomial restrictions, an initial significance level of 15 percent was chosen. The critical t-values for testing each successive hypothesis are as follows:

j:        1     2     3     4     5     6     7     8     9    10    11    12
t-value: 1.46  1.51  1.56  1.61  1.67  1.74  1.81  1.90  2.00  2.12  2.30  2.57


plished by simply re-applying all of the procedures outlined above to the PDL model with lag length ℓ. To see this, write the model with the selected lag length as

(A.5)   Y_ℓ = Zμ + X_ℓ β_ℓ + ε_ℓ.

Recall that β_ℓ = Hδ, where H is (ℓ + 1) by (p* + 1) and δ is (p* + 1) by 1. Thus, this equation can be rewritten as

(A.6)    Y_ℓ = Zμ + X_ℓ Hδ + ε_ℓ
(A.6′)   Y_ℓ = Zμ + X*_ℓ δ + ε_ℓ.

It is clear from this expression that the choice of a polynomial degree p is completely analogous to the choice of the lag length above, where the maximum degree of the polynomial considered, p, initially is set equal to ℓ.15

EMPIRICAL RESULTS
In applying the PH technique, we initially chose a maximum lag length of 12; however, we also considered L = 16. The PH t-statistics for those runs with both M and G last are given in table A.1. This procedure chose 10 lags on M and 9 on G for L = 12 and 16. We then chose these lags for one variable and let the other be set at L = 12. The results were unchanged. These results also appear in table A.1. Furthermore, F-tests of the restrictions implied by this section were basically consistent with the PH results when L was set at 12 (see footnote 24 of the text). This was not true, however, for L = 16. In this instance, the presence of a number of insignificant coefficients prior to the first significant one diluted the calculated F-statistic so that a very short lag would have been chosen by an F-test. Thus, the PH t-statistics appear to be less sensitive to the choice of L than the standard F-test.

Letting the maximum degree polynomial be 10 for M and 9 for G, we then re-applied the PH technique to

15Pagano and Hartley offer an equivalent two-step procedure, which is not discussed here. See Pagano and Hartley, "On Fitting Distributed Lag Models Subject to Polynomial Restrictions." As an efficient alternative to either of these approaches, one could employ the stochastic information from the lag length selection process with the nonstochastic information in the design matrix in a Theil-Goldberger mixed estimation procedure similar to Shiller's Bayesian method. Fomby has shown that such stochastic restrictions can be tested under a generalized mean square error norm. See H. Theil and A. S. Goldberger, "On Pure and Mixed Statistical Estimation in Economics," International Economic Review (January 1961), pp. 65-78; Thomas B. Fomby, "MSE Evaluation of Shiller's Smoothness Priors," International Economic Review (February 1979), pp. 203-15; and Judge and others, The Theory and Practice of Econometrics, pp. 652-53.


Table A.1
Pagano-Hartley t-statistics for Lag Length Selection

        M with ℓ on G equal to        G with ℓ on M equal to
Lag       16      12       9            16      12      10
0        4.84    5.45    5.42          2.68    2.67    2.72
1        4.49    4.33    4.61          1.04    1.13    1.16
2        2.51    2.36    2.24         -1.84   -1.89   -1.90
3       -2.20   -1.73   -1.71          0.97    0.96    1.01
4        0.28    0.09    0.60          0.23    0.17    0.19
5       -1.96   -2.05   -2.11         -0.89   -1.21   -1.22
6       -0.42   -0.01   -0.47          1.34    1.37    1.41
7       -0.42   -0.61   -0.43          0.58    0.44    0.44
8        0.77    0.88    1.22         -2.30   -2.38   -2.34
9       -0.50    0.10   -0.13         -2.22*  -2.22*  -2.32*
10      -2.58*  -2.70*  -2.72*        -0.30   -0.58   -0.65
11       0.09   -0.13    0.19          0.93    1.18    1.20
12      -0.10    0.17    0.31          0.98    0.64    0.68
13      -0.57                          1.15
14       0.41                          1.01
15      -0.82                         -1.24
16       0.19                          1.28

*First significant t-statistic.

Table A.2
Pagano-Hartley t-statistics for Polynomial Degree Selection

Polynomial     M with p             G with p
Degree         on G equal to 9      on M equal to 10
0                3.44                 -0.27
1               -5.73                 -2.85
2                2.84                 -0.17
3               -2.17                 -2.77
4               -2.34                  0.73
5               -0.48                  1.05
6               -2.32                  1.12
7                0.44                  2.55
8                1.11                 -1.65*
9               -1.85*                -0.47
10               1.26

*First significant t-statistic.

determine the polynomial degree. The PH t-statistics are presented in table A.2. The PH technique selected a ninth degree polynomial on money and an eighth degree polynomial on government expenditures for the same significance level as used before. When we re-estimated the equation on the lower degree polynomials, however, the coefficient of the eighth degree on G failed to be significant. The seventh was significant, regardless of the lag length on M. Thus, the PH technique suggests a ninth degree polynomial on M and a seventh degree on G. This implies only one polynomial restriction on M and two on G. (An F-test of these restrictions could not reject the null hypothesis. The calculated F-statistic was 1.43.)

Furthermore, the matrix of F-statistics of all possible polynomial restrictions on a PDL model with 10 lags on M and 9 on G, given in table A.3, suggests that even more restricted models could pass an F-test. Clearly, a number of different polynomial degree specifications satisfy an F-test at the 5 percent level. We can see, for example, that had we chosen the polynomial degree on M first and then selected the polynomial degree on G, we would have chosen a fourth degree polynomial on M and an eighth degree polynomial on G. Alternatively, had we investigated G first, we would have chosen a seventh degree polynomial on G and a sixth on M. These are circled in table A.3. We could also choose the polynomial degree by selecting the most restricted model that passes an F-test at, say, the 5 percent level. This criterion would select a sixth degree polynomial on M and a third degree on G. This F-statistic is bracketed in table A.3. All four of these


Table A.3
F-statistics for Testing Polynomial Restrictions on M and G

                                    Degrees for G
Degrees for M     0      1      2      3      4      5      6      7      8      9
0               4.09   4.13   4.38   4.53   4.75   5.08   5.47   5.62   5.76   6.32
1               3.00   2.64   2.80   2.82   2.92   3.10   3.32   3.20   3.05   3.37
2               2.78   2.46   2.61   2.58   2.65   2.79   2.99   2.87   2.50   2.79
3               2.80   2.46   2.63   2.54   2.57   2.64   2.82   2.68   2.24   2.51
4               2.49   2.13   2.30   2.10   2.21   2.26   2.43   2.13   (·)    2.02
5               2.61   2.26   2.45   2.28   2.41   2.49   2.69   2.40   2.02   2.37
6               2.58   2.17   2.37  [1.96]  2.10   2.21   2.37  (1.46)  1.33   1.62
7               2.77   2.35   2.59   2.14   2.33   2.51   2.77   1.74   1.63   2.09
8               3.02   2.56   2.84   2.27   2.54   2.83   3.20   1.82   1.75   2.52
9               3.03   2.48   2.79   2.09   2.37   2.63   2.94   1.43   0.87   1.60
10              3.13   2.51   2.86   2.06   2.37   2.69   3.16   1.48   0.22

Note: Entries in parentheses are circled in the original table (the specifications chosen by selecting the degree on M first, or on G first); the bracketed entry is the most restricted specification passing the F-test. The circled entry at M = 4, G = 8 is illegible in this reproduction and shown as (·).

PDL specifications — the one selected by the PH technique and the three indicated in table A.3 — were estimated; however, only the results for the one selected by the PH technique and the most restricted specification are presented in this paper. The results of the other specifications were similar to those of the most restricted PDL specification and, hence, are not reported here.16

16The hypothesis tests concerning the effects of monetary and fiscal policy yielded conclusions identical to those reported here. The out-of-sample RMSEs of the forecast for the period III/1976-III/1982 were smaller than the RMSEs of specifications A or C.


Weekly Money Supply Forecasts:
Effects of the October 1979 Change in
Monetary Control Procedures
R. W. HAFER

THE activity of most financial market participants on Friday afternoons can be predicted with great accuracy: they anxiously will be awaiting the 4:15 p.m. EST announcement of the new weekly money stock data. Despite the fact that the weekly data are contaminated by a great deal of "noise," a fact that greatly reduces the data's usefulness in revealing any policy trend, market participants still wager large sums and reputations on correctly anticipating the elusive weekly money figure.1

The impact of unanticipated changes in the weekly money supply on short-term interest rates has been investigated extensively. In general, the evidence shows a positive relationship between unanticipated changes in money and movements in market rates.2 Although this empirical relationship existed through-

1See David A. Pierce, "Trend and Noise in the Monetary Aggregates," in Federal Reserve Staff Study, New Monetary Control Procedures, vol. II (February 1981), especially pp. 19-22. Pierce estimates that the noise in weekly money data is around $3 billion, assuming an aggregate level of $400 billion. As he notes, "In general, these results are further evidence that very little can be inferred from any but the most atypical movements in weekly data" (p. 22).

out the 1970s, the relative impact of weekly money "surprises" on short-term interest rates has been greater since the October 1979 change in monetary control procedures. In fact, over 25 percent of the volatility of the 3-month Treasury bill rate during the time period of the money supply announcement can be attributed directly to the increased volatility of unanticipated weekly changes in money since October 1979.3 Moreover, unanticipated money supply changes that lie outside the Federal Reserve's announced money growth range appear to have a relatively greater effect on interest rates than money surprises falling within the announced growth range.4

The evidence clearly indicates that unanticipated changes in the money stock have an important effect on interest rates. Consequently, examining the characteristics of the money supply forecasts that give rise to such behavior is important. Several studies have examined the weekly money supply forecasts for the period prior to October 1979; but little has been done on comparing the forecasts across the announced change in monetary control procedures.5 The purpose of this article is to analyze the effects of the October

2See, for example, Jacob Grossman, “The ‘Rationality’ of Money Supply Expectations and the Short-Run Response of Interest Rates to Monetary Surprises,” Journal of Money, Credit and Banking (November 1981), pp. 409-24; V. Vance Roley, “The Response of Short-Term Interest Rates to Weekly Money Announcements,” Working Paper No. 82-06, Federal Reserve Bank of Kansas City (September 1982); Thomas Urich, “The Information Content of Weekly Money Supply Announcements,” Journal of Monetary Economics (July 1982), pp. 73-88; and Thomas J. Urich and Paul Wachtel, “Market Response to the Weekly Money Supply Announcements in the 1970s,” Journal of Finance (December 1981), pp. 1063-72. For another interpretation, see Bradford Cornell, “Money Supply Announcements and Interest Rates: Another View,” Journal of Business (January 1983), pp. 1-23.

3Roley, “The Response of Short-Term Interest Rates.”
4Ibid. See also, Neil G. Berkman, “On the Significance of Weekly Changes in M1,” New England Economic Review (May-June 1978), pp. 5-22.
5Studies investigating the forecasts prior to the October 1979 policy shift are Grossman, “The ‘Rationality’ of Money Supply Expectations,” and Thomas Urich and Paul Wachtel, “The Structure of Expectations of the Weekly Money Supply Announcement,” (New York University, February 1982; processed). Roley, “The Response of Short-Term Interest Rates,” provides some evidence on this issue for the period February 1980 to November 1981.

FEDERAL RESERVE BANK OF ST. LOUIS

1979 change in monetary control on the weekly money supply forecasts. Under the assumption of rational expectations, a change from one recognized monetary control procedure to another should have no effect on the forecast characteristics.6 In other words, a change from one monetary control procedure to another should not affect the unbiasedness and efficiency properties of the forecasts. If, however, the new procedure is not “well-defined” — that is, the rules of the game are changing constantly — then weekly money supply forecasts may appear biased and inefficient.7

WHAT DOES “RATIONALITY” IMPLY?
The theory of rational expectations is based on the premise that market participants construct forecasts of the future in a manner that fully reflects the relevant information available to them. Because wealth-maximizing individuals will not make forecasts that are continually wrong in the same direction, the rational expectations approach suggests that forecasts of economic phenomena should be unbiased. Moreover, if the forecast errors could not have been reduced by using other available information, then forecasters have efficiently utilized the relevant data at their disposal.
The issue investigated here is whether the weekly forecasts of the M1 money stock change have been affected noticeably by the October 1979 change in monetary control procedures. More specifically, the question asked is: assuming rational expectations, has
6The concept of rational expectations is based on the belief that economic agents are utility maximizers. Thus, market participants form expectations that fully reflect all available information. More formally, rational expectations imply that individuals’ subjective probability distributions of possible outcomes are identical to the objective probability distributions that actually occur. Consequently, the only way policymakers can affect behavior is to “fool” the people in an inconsistent manner. This concept is developed more fully in John F. Muth, “Rational Expectations and the Theory of Price Movements,” Econometrica (July 1961), pp. 315-35; Robert E. Lucas, Jr., “Expectations and the Neutrality of Money,” Journal of Economic Theory (April 1972), pp. 103-24; Robert J. Barro, “Rational Expectations and the Role of Monetary Policy,” Journal of Monetary Economics (January 1976), pp. 1-32; and Thomas J. Sargent and Neil Wallace, “Rational Expectations, the Optimal Monetary Instrument, and the Optimal Money Supply Rule,” Journal of Political Economy (April 1975), pp. 241-54.
7Implicit in this is the presumption that market participants will expend resources to decipher the new policy procedures and adapt their forecast formation process accordingly. This does not seem unreasonable given the sophistication of financial market analysts in gauging actual Federal Reserve behavior. For a discussion of the transition from one policy to another and the implications for rational expectations, see Benjamin M. Friedman, “Optimal Expectations and the Extreme Information Assumptions of ‘Rational Expectations’ Macromodels,” Journal of Monetary Economics (January 1979), pp. 23-41.



APRIL 1983

the change in monetary control procedures affected the unbiasedness and efficiency characteristics of the weekly money supply forecasts? If the forecasts from the post-October 1979 period are not different from those from before, we then would conclude that the forecasters have adapted to the new policy regime. If they differ, however, the evidence would not reject the hypothesis that they have been unable to ascertain the policymaker’s behavioral rule.8
Three sample periods are used in the following analysis. The full period is from the week ending January 11, 1978, to the week ending June 16, 1982. Given the change in operating procedures in late 1979, the relevant subperiods are from the week ending January 11, 1978, to the week ending October 3, 1979, and from the week ending October 10, 1979, to the week ending June 16, 1982.9 With these sample periods, the unbiasedness and efficiency characteristics of the weekly money supply forecasts across the change in monetary control procedures can be investigated.

Weekly Money Supply Data
The money data series used in this article are the
actual and expected, initially announced week-to-

8The dilemma facing market participants is known as the “Lucas problem.” Essentially, even though individuals act rationally in making their forecasts — that is, use all of the information thought to be relevant — failure to account for a procedural shift will lead to incorrect forecasts. Thus, forecasting guidelines used under one procedure may not apply under another. For the specific problem tested here, it may be the case that the announced policy differs from that actually followed. If policy actions are not characterized easily, that is, if policy is unpredictable, then forecasts may be biased and inefficient simply because agents have not determined the structure of the model. For a discussion of this concept, see Robert E. Lucas, Jr., “Econometric Policy Evaluation: A Critique,” in Karl Brunner and Allan H. Meltzer, eds., The Phillips Curve and Labor Markets, The Carnegie-Rochester Conference Series on Public Policy (vol. 1, 1976), pp. 19-46.
Bradford Cornell recently has argued that apparent irrational behavior on the part of market participants, evidenced by biased and inefficient forecasts, may very well be due to the change from a predictable policy regime to one that continues to be unpredictable. As he states, “On October 6 [1979], market participants suddenly discovered that even the rules of the game were subject to change. As a result, they began studying weekly money supply figures not only with the goal of determining what the current policy was, but also with the goal of determining how the rules of the game might be changed.” In this sense, market participants face a perpetual “Lucas problem.” See Cornell, “Money Supply Announcements and Interest Rates: Another View,” p. 21.
9Note that the post-October 1979 period includes the period of credit controls, essentially the second quarter of 1980. This period is included because an examination of the error pattern from weekly money forecasts indicated no difference between this period and any other. Moreover, market participants continued to forecast weekly money changes throughout the control period.

week changes in the narrowly defined money stock (M1). Figures for the actual changes in M1 are taken from the Federal Reserve’s H.6 weekly statistical release. Because the sample covers a period of changing definitions, the following guideline is used: From January 11, 1978, to January 31, 1980, the weekly money supply changes are based on the old definition of M1. From February 8, 1980, to November 20, 1981, the money stock is defined as the actual M1B measure, not the M1B figure that was adjusted for NOW account movements. Finally, from November 27, 1981, to June 16, 1982, the data are based on the then-current definition of M1.
The data used as a measure of the market’s forecasts were obtained from Money Market Services, Inc.10 Since 1977 this firm has conducted a weekly telephone survey of 50 to 60 government securities dealers to get their expectations of the impending change in money. Prior to early 1980, the poll was conducted twice a week, on Tuesdays and Thursdays. Since then, however, only the Thursday survey has been conducted consistently, because of the shift in the Federal Reserve’s announcement of the weekly money supply figures from Thursday to Friday afternoon. For our purposes, therefore, we employ the mean of the Thursday survey responses.11

Are Weekly Money Forecasts Unbiased?
Forecasts of weekly changes in the money stock are unbiased predictors of the actual change if the actual and forecasted values differ only by some random term. Mathematically, this requirement can be stated as

(1) ΔMt = t−1ΔMtE + εt,

where ΔMt is the actual change in the money stock, t−1ΔMtE is the expectation held in period t−1 for the change in the money stock in period t, and εt is a random error term with zero mean and variance σ².

10It has been argued that survey data are not good measures of the market’s expectations of some macroeconomic variable. This argument is founded on the belief that most survey respondents are not actual market participants. In other words, their responses to the survey are not based on some profit-maximizing behavior that has generated the forecast. The weekly money forecasts used here are taken from dealers actively participating in the financial market, thus reducing the force of this criticism. See Edward J. Kane and Burton G. Malkiel, “Autoregressive and Nonautoregressive Elements in Cross-Section Forecasts of Inflation,” Econometrica (January 1976), pp. 1-16.
11For an analysis of the Tuesday and Thursday forecasts, see Grossman, “The ‘Rationality’ of Money Supply Expectations.” This analysis covers only the period 1977 to 1979.

To test for the absence of bias, equation 1 is rewritten and estimated as

(2) ΔMt = α0 + β1 t−1ΔMtE + εt,

where α0 and β1 are the parameters to be estimated.12 In this form, the weekly money forecasts are unbiased predictors of actual money supply changes if the joint hypothesis that α0 = 0 and β1 = 1 cannot be rejected. Moreover, the estimated residuals from this regression (εt) should not exhibit serial correlation if the forecasts are unbiased predictions of the actual change in money.
Table 1 presents the regression results from estimating equation 2 using the expected and actual money stock changes. The full-period results suggest that the forecasts of weekly changes in the money stock are unbiased predictors of the actual changes. The calculated F-statistic does not exceed the critical value of 3.04 at the 5 percent significance level. Consequently, the joint null hypothesis that α0 = 0 and β1 = 1 is not rejected. Moreover, the residuals of the equation show no indication of first-order serial correlation, as evidenced by the Durbin-Watson statistic. Thus, the weekly money supply forecasts appear to be unbiased across the full sample.
To see if the forecasts are unbiased before and after the October 1979 change in monetary control procedures, equation 2 was re-estimated for the two periods January 11, 1978, to October 3, 1979, and October 10, 1979, to June 16, 1982. These regression results also are reported in table 1.13
The estimates from the pre-October 1979 period again indicate that the forecasts are unbiased. The calculated F-statistic is not statistically significant, and the Durbin-Watson statistic again indicates no first-order serial correlation among the residuals. In contrast, the post-October 1979 regression results permit us to reject the hypothesis that the forecasts are unbiased predictors of the actual changes. Although the estimated constant term is statistically insignificant, the hypothesis that the estimated slope term (β1) does not differ from unity is rejected easily (t = 2.33). Consequently, the joint hypothesis underlying this
12This type of test is used widely in studies of expectations data. For studies examining money stock forecasts, see, for example, Grossman, “The ‘Rationality’ of Money Supply Expectations;” Urich and Wachtel, “The Structure of Expectations;” and Roley, “The Response of Short-Term Interest Rates.”
13This dichotomization of the sample is supported statistically by Chow-test results: the calculated F-value is F(2,228) = 3.93, which exceeds the critical 5 percent level.
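The Chow test cited in footnote 13 can be sketched as follows — a minimal illustration on simulated data, with a hypothetical function name and sample, not the article’s own calculation. The statistic compares the pooled sum of squared residuals with the sum from fitting the two subperiods separately:

```python
import numpy as np

def ols_ssr(X, y):
    """Sum of squared residuals from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

def chow_stat(X, y, split):
    """Chow F-statistic for a structural break after observation `split`,
    where k is the number of regressors: F(k, n - 2k) under the null."""
    n, k = X.shape
    ssr_p = ols_ssr(X, y)                          # pooled regression
    ssr_1 = ols_ssr(X[:split], y[:split])          # first subperiod
    ssr_2 = ols_ssr(X[split:], y[split:])          # second subperiod
    return ((ssr_p - ssr_1 - ssr_2) / k) / ((ssr_1 + ssr_2) / (n - 2 * k))

# Hypothetical data with a slope shift halfway through the sample.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 240)
slope = np.where(np.arange(240) < 120, 1.0, 1.5)
y = slope * x + rng.normal(0.0, 0.5, 240)
X = np.column_stack([np.ones(240), x])
f_break = chow_stat(X, y, 120)
```

When the relationship genuinely shifts at the break point, as in this simulation, the statistic is large; the article’s F(2,228) = 3.93 plays the same role for the October 1979 split.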


Table 1
Test Results for Bias
Equation Estimated: ΔMt = α0 + β1 t−1ΔMtE + εt

                      Estimated coefficients¹        Summary statistics²
Period                α0               β1            R̄²     DW      F³
1/11/78-6/16/82      -0.044 (0.30)    1.207 (10.43)  0.32   1.87    1.65
1/11/78-10/3/79      -0.352 (1.65)    1.060 (6.54)   0.32   1.89    1.60
10/10/79-6/16/82      0.181 (0.91)    1.373 (8.60)   0.35   1.86    3.70

¹Absolute values of t-statistics appear in parentheses.
²R̄² is the adjusted coefficient of determination; DW represents the Durbin-Watson test statistic. The reported F-statistic is used to test the null hypothesis that (α0, β1) = (0, 1).
³The relevant 5 percent critical F-values are: January 11, 1978, to June 16, 1982 — 3.04; January 11, 1978, to October 3, 1979 — 3.10; and October 10, 1979, to June 16, 1982 — 3.07.

test also is rejected; the calculated F-statistic of 3.70 exceeds the 5 percent critical value of 3.07. Thus, the evidence suggests that forecasts of weekly money supply changes have been biased since the October 1979 change in implementing monetary policy.

Are Weekly Money Forecasts Efficient?
The efficiency condition requires that forecasts fully reflect all pertinent and readily available information.14 Since the information available to individuals includes the past history of the series being forecast, it is possible to test the hypothesis that the forecasts are “weakly” efficient; that is, at least the information contained in the history of weekly money supply changes is used efficiently. This concept of efficiency requires that the process actually generating observed changes in weekly money and the process generating the forecasts of these changes are the same. The simplest process to assume is an autoregressive one, where observed and expected changes are generated solely by the past history of the series itself. Mathematically, this concept of efficiency can be stated as

(3) ΔMt = Σ(i=1 to n) βi ΔMt−i + μ1t,
14Of course, additional information will be acquired only if the marginal benefits are at least as large as the marginal costs of acquisition. A useful discussion of this point is provided in Armen A. Alchian, “Information Costs, Pricing, and Resource Unemployment,” in Edmund S. Phelps, and others, Microeconomic Foundations of Employment and Inflation Theory (W. W. Norton & Company, Inc., 1970), pp. 27-52.



(4) t−1ΔMtE = Σ(i=1 to n) βi′ ΔMt−i + μ2t,

where μ1t and μ2t are random error terms. In this format, weak-form efficiency requires that βi = βi′ for all i; i = 1, 2, ..., n.15
To determine if survey respondents efficiently utilized the information contained in past weekly money supply changes, equation 4 is subtracted from equation 3, yielding the estimated equation

(5) ΔMt − t−1ΔMtE = b0 + Σ(i=1 to n) bi ΔMt−i + φt,

where the dependent variable ΔMt − t−1ΔMtE represents the forecasters’ errors in predicting weekly money changes, and the independent variables, ΔMt−i, are the actual changes in money.16 The equation permits a constant term (b0) to be estimated instead of subsuming it into the error structure, which is represented by the term φt (= μ1t − μ2t). The null hypothesis to be tested is that the estimated bi (= βi −
15This form of the efficiency test was proposed in James E. Pesando, “A Note on the Rationality of the Livingston Price Expectations Data,” Journal of Political Economy (August 1975), pp. 849-58.
16The lagged values of data used in the efficiency test are the one-week revised numbers, not the initially reported weekly figures. Since the revised figures contain more information than the originally released data — the data contained in the revision itself — using original data would deprive forecasters of some information. It should be noted, however, that the conclusions reached were not affected when originally reported data were used to generate lagged changes in the money stock.


Table 2
Test Results for Weak-Form Efficiency
Equation Estimated: ΔMt − t−1ΔMtE = b0 + Σ(i=1 to 4) bi ΔMt−i + φt

                                  Estimated coefficients¹                            Summary statistics²
Period               b0              b1              b2              b3              b4              R̄²     DW      F³
1/11/78-6/16/82      0.139 (0.87)   -0.042 (0.69)   -0.067 (1.14)    0.087 (1.48)    0.005 (0.09)   0.01   1.92    0.81
1/11/78-10/3/79     -0.259 (1.24)    0.026 (0.28)   -0.077 (0.84)   -0.021 (0.23)   -0.017 (0.20)   0.01   1.95    0.28
10/10/79-6/16/82     0.398 (1.77)   -0.071 (0.90)   -0.068 (0.90)   -0.112 (1.48)    0.007 (0.10)   0.02   1.93    0.82

¹See notes accompanying table 1.
²See notes accompanying table 1. The reported F-statistic is used to test the null hypothesis that bi (i = 1, 2, 3, 4) = 0.
³The relevant 5 percent critical F-values are: January 11, 1978, to June 16, 1982 — 2.41; January 11, 1978, to October 3, 1979 — 2.48; and October 10, 1979, to June 16, 1982 — 2.44.

βi′) are not statistically different from zero for all i (i = 1, 2, ..., n) as a group. Moreover, the estimated error structure should not exhibit serial correlation.17
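The weak-form test in equation 5 amounts to regressing forecast errors on lagged actual changes and testing the lag coefficients jointly. The following sketch uses simulated data and a hypothetical function name, not the article’s series, to show how an inefficient forecaster — one who ignores the series’ own history — is caught by the test:

```python
import numpy as np

def weak_efficiency_f(actual, forecast, nlags=4):
    """Estimate equation 5: regress forecast errors on `nlags` lagged actual
    changes, and return the F-statistic for the null that all lag
    coefficients are jointly zero."""
    actual = np.asarray(actual, dtype=float)
    err = actual - np.asarray(forecast, dtype=float)
    y = err[nlags:]
    # Column i holds the change lagged by i periods, aligned with y.
    lags = np.column_stack([actual[nlags - i:len(actual) - i]
                            for i in range(1, nlags + 1)])
    X = np.column_stack([np.ones(len(y)), lags])
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ssr_u = np.sum((y - X @ beta) ** 2)
    ssr_r = np.sum((y - y.mean()) ** 2)            # constant-only model
    return ((ssr_r - ssr_u) / nlags) / (ssr_u / (n - k))

# Hypothetical example: the series follows an AR(1), but the forecaster
# ignores its history entirely, so the errors inherit the serial pattern.
rng = np.random.default_rng(2)
m = np.zeros(300)
for t in range(1, 300):
    m[t] = 0.5 * m[t - 1] + rng.normal()
naive = np.zeros(300)                              # history-blind forecast
f_stat = weak_efficiency_f(m, naive)
```

Here the F-statistic is large, signaling that lagged changes could have reduced the errors; an efficient forecaster would instead produce an insignificant statistic, as in table 2.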
Table 2 presents the results of estimating equation 5 for the period January 11, 1978, to June 16, 1982. Four lags were chosen to capture the informational content of past changes in weekly money. The regression results indicate that past changes in the money supply do not explain any significant portion of the forecast error. The calculated F-statistic (0.81) is far below acceptable critical values. The Durbin-Watson statistic also indicates that serial correlation is not present among the residuals. Thus, for the full period, we cannot reject the hypothesis that forecasters efficiently used the information contained in past changes in the money stock in forming their predictions.
We next test the efficiency hypothesis for the pre- and post-October 1979 periods; these empirical results also are found in table 2. In both instances, we again cannot reject the hypothesis that past information about weekly money changes was used efficiently. Neither F-statistic is significant at the 5 percent level. Based on these results, therefore, the weak-form efficiency hypothesis is not rejected by the data, regardless of the sample used.

17See Donald J. Mullineaux, “On Testing for Rationality: Another Look at the Livingston Price Expectations Data,” Journal of Political Economy (April 1978), pp. 329-36, for a discussion of this test.



Tests of Stronger-Form Efficiency
The above evidence suggests that forecasts of weekly money stock changes are weakly efficient. Efficiency, however, also may be considered in a broader sense. This broader efficiency criterion requires that forecasts incorporate all of the relevant and available information. Thus, similar to the previous hypothesis, efficiency in the broad sense requires that the forecast errors be orthogonal, or systematically unrelated, to all relevant available information sets.18
To test this concept of efficiency, we estimate the equation

(6) ΔMt − t−1ΔMtE = c0 + Σ(i=0 to n) ci It−i + wt,

where It−i refers to contemporaneous and lagged values (i = 0, 1, ..., n) of information that are not incorporated in past money stock changes, and wt is another random error term. The analysis is intended to determine whether the survey respondents’ weekly errors in forecasting money supply changes can be explained by some set(s) of information that are readily available. If the esti-

18Tests using this stronger form of efficiency are presented in Grossman, “The ‘Rationality’ of Money Supply Expectations,” and, using interest rate expectations data, in Benjamin M. Friedman, “Survey Evidence on the ‘Rationality’ of Interest Rate Expectations,” Journal of Monetary Economics (October 1980), pp. 453-65, where the phrase “information orthogonality” was coined.


Table 3
Test Results for Stronger-Form Efficiency
Equation Estimated: ΔMt − t−1ΔMtE = c0 + Σ(i=0 to n) ci It−i + wt

                                          Calculated F-statistics
Information set                           1/11/78-6/16/82   1/11/78-10/3/79   10/10/79-6/16/82
Consumer and industrial loans                  3.55¹             1.74              2.94¹
Demand deposits at large weekly
  reporting banks                              4.25¹             0.51              3.57¹
Float                                          0.60              0.55              0.61
Adjusted base                                  3.29¹             0.55              3.65¹

¹Significant at the 5 percent level of confidence. The relevant critical F-values are: January 11, 1978, to June 16, 1982 — 2.26; January 11, 1978, to October 3, 1979 — 2.32; and October 10, 1979, to June 16, 1982 — 2.28.

mated ci coefficients are not significantly different from zero as a group, then we cannot reject the stronger-form hypothesis of efficiency. If contrary evidence is found, then the results would suggest that forecasters could have reduced their prediction errors by using the information sets investigated here.
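The orthogonality regression in equation 6 can be sketched the same way as the weak-form test, with an outside information series in place of the lagged money changes. The example below is hypothetical — simulated data and an illustrative function name, not the loan, deposit, float or base series used in the article:

```python
import numpy as np

def orthogonality_f(errors, info, nlags=4):
    """Estimate equation 6: regress forecast errors on contemporaneous and
    `nlags` lagged values of an outside information series; return the
    F-statistic for the null that all information coefficients are zero."""
    errors = np.asarray(errors, dtype=float)
    info = np.asarray(info, dtype=float)
    e = errors[nlags:]
    # Column i holds the information series lagged by i periods
    # (i = 0 is the contemporaneous value), aligned with e.
    Z = np.column_stack([info[nlags - i:len(info) - i]
                         for i in range(0, nlags + 1)])
    X = np.column_stack([np.ones(len(e)), Z])
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, e, rcond=None)
    ssr_u = np.sum((e - X @ beta) ** 2)
    ssr_r = np.sum((e - e.mean()) ** 2)            # constant-only model
    return ((ssr_r - ssr_u) / (nlags + 1)) / (ssr_u / (n - k))

# Hypothetical example: the forecast errors partly track an observable
# series the forecasters failed to use, so orthogonality is rejected.
rng = np.random.default_rng(3)
signal = rng.normal(0.0, 1.0, 300)
errors = 0.5 * signal + rng.normal(0.0, 1.0, 300)
f_stat = orthogonality_f(errors, signal)
```

A large F-statistic, as here, means the information series could have reduced the forecast errors — the pattern table 3 reports for loans, demand deposits and the adjusted base after October 1979.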
It is, of course, impossible to account for every imaginable information set that each forecaster could have used. Consequently, we analyze several sets of information that are available on a timely basis and are potentially useful in estimating future money stock developments. The information sets used are consumer and industrial loans, demand deposits at large weekly reporting banks, float and the adjusted monetary base as defined by the Federal Reserve Bank of St. Louis. In all cases, the data used are taken from original Federal Reserve statistical releases that were available to forecasters prior to the weekly money stock announcements.19 Although we realize that the series

19All data are in terms of level changes from the previous week. Data sources are the Federal Reserve H.4.1 and H.4.2 statistical releases, and the Federal Reserve Bank of St. Louis.
This procedure may impart some measurement error since only the initially released data are used. Given the short time horizon used and the observation that the weekly data revisions are not severe, the approach used seems sufficient. It also should be noted that, since February 1980, data on consumer and industrial loans and demand deposits at weekly reporting banks have been released concurrently with the money supply numbers. Thus, these two series offer no prior information during the post-

chosen do not exhaust the set of possible information sources, they are sufficiently broad to test the hypothesis at hand.
Table 3 reports the calculated F-statistics from estimating equation 6 using the different information sets. In each test, the information set contains contemporaneous and four lagged terms. The outcome for the full period suggests that forecasters efficiently utilized the information contained in the float information set: the reported F-statistic is not large enough to reject the null hypothesis. The results for the other information sets — consumer and industrial loans, demand deposits at large weekly reporting banks and the adjusted base — reject the efficiency hypothesis. For these, the F-statistics exceed the 5 percent critical value (2.26), implying that forecast errors could have been lessened if the information contained in these data had been used.
Equation 6 was re-estimated for the pre- and post-October 1979 periods; these results also are found in table 3. The full-period results are dominated by the post-October 1979 period. Prior to the shift in control procedures, forecasters’ predictions of weekly money supply changes appear to have efficiently incorporated the information sets tested here: all the F-statistics are less than the 5 percent critical value (2.32). In contrast,
February 1980 period. They do, however, provide more information that forecasters may use in generating their expected money numbers.


the post-October 1979 results reveal that, except for float, the forecasters could have improved upon their ability to predict changes in the money stock by incorporating the information contained in the series on loans, demand deposits and the adjusted base. Thus, over the recent period, the forecasts do not meet the broader efficiency criterion tested here.

CONCLUSION
Previous examinations of survey data on weekly money supply forecasts have focused primarily on the effects of unanticipated money changes on market interest rates. Although several studies have examined the forecasts’ rationality, there has been no systematic investigation into the effect of the change in monetary control procedures on the unbiasedness and efficiency characteristics of the forecasts.



The evidence presented here indicates that the change in control procedures has had a significant effect on the characteristics of weekly money supply forecasts. Prior to October 1979, the forecasts of the change in the weekly money stock were unbiased and efficient. In contrast, weekly money forecasts since October 1979 have been biased and inefficient.
The results of this investigation lend support to the recently suggested hypothesis that, since October 1979, “market participants [have] concluded that the rules under which monetary policy is conducted could no longer be considered constant.”20 If this indeed is true, then the combined evidence from this study and those dealing with the interest rate effects of unanticipated money supply changes suggests that a more predictable control procedure would contribute to a more stable financial market.
20Cornell, “Money Supply Announcements,” p. 22.