Economic Review
Vol. 24, No. 3

Economic Review is published quarterly by the Research Department of the Federal Reserve Bank of Cleveland. Copies of the Review are available through our Public Information Department, 216/579-2157.

Coordinating Economist: Randall W. Eberts
Editor: William G. Murmann
Assistant Editor: Robin Ratliff
Design: Michael Galka
Typesetting: Liz Hanna

Opinions stated in Economic Review are those of the authors and not necessarily those of the Federal Reserve Bank of Cleveland or of the Board of Governors of the Federal Reserve System. Material may be reprinted provided that the source is credited. Please send copies of reprinted material to the editor.

Rules Versus Discretion: Making a Monetary Rule Operational
by John B. Carlson

The question of whether monetary policy should be conducted by rules known in advance to all or by policymaker discretion is an enduring debate. This paper traces the evolution of rule advocacy from the time of the Federal Reserve Act and examines the problems associated with making a rule operational. The author concludes that some degree of discretion may be necessary and that it is useful to analyze policies on the continuum between a pure rule and pure discretion. Recent changes in the operating procedure illustrate how actual policy has varied within this continuum.

Actual Competition, Potential Competition, and Bank Profitability in Rural Markets
by Gary Whalen

Despite a great deal of previous empirical work, disagreement remains about the relationship between market concentration and bank performance. Further, the performance impacts of potential competition remain virtually unexplored. In this study, the author examines the relationship between bank profitability and both actual and potential competition. Risk is explicitly incorporated into the analysis, and possible simultaneity is also explored. The results suggest that non-MSA markets are contestable or that potential competition has a significant influence on banks' performance.

Getting the Noise Out: Filtering Early GNP Estimates
by John Scadding

The accuracy of the U.S. Department of Commerce's estimates of GNP is vital in order for policymakers and private agents to make informed economic decisions. The early, or provisional, GNP estimates for each quarter can be seen as rational forecasts of the final numbers. This article takes an alternative view: that the early GNP numbers are estimates of the final number, but estimates that are contaminated with error, or "noise." The author investigates a way to adjust the numbers to remove the error and to produce more accurate predictions of the final GNP growth estimates.

Comment

ISSN 0013-0281

Rules Versus Discretion: Making a Monetary Rule Operational
by John B. Carlson

John B. Carlson is an economist at the Federal Reserve Bank of Cleveland. The author would like to thank Charles Carlstrom, Edward Gambier, William Gavin, James Hoehn, Mark Sniderman, and E.J. Stevens for their helpful comments.

Introduction

The rules-versus-discretion debate is the most
enduring, if not the most central, issue in mone­
tary policy. It concerns whether monetary policy
should be conducted by rules known in advance
to all or by policymaker discretion.
For many years, the case for a monetary rule
was associated with a particular proposal by Mil­
ton Friedman (1959). Building on a tradition
initiated by Henry Simons (1936), Friedman
introduced the idea that the effects of monetary
policy were uncertain, occurring with long and
variable lags. In short, he argued that discretion­
ary management of the money supply in the face
of such uncertainty actually amplified economic
fluctuations. Hence, Friedman argued for a
constant-money-growth rule.
The case for rules has changed fundamentally
since an important paper by Kydland and Pres­
cott (1977). They show that precommitment to a
rule could have beneficial effects that discretion­
ary policies cannot. Unlike Friedman’s argument,
the Kydland-Prescott case was not specific to any
one view of the world, but could be applied to a
very general class of models. In principle, one
cannot deny that a policy rule can have poten­
tially stabilizing effects.

The example of Kydland and Prescott, how­
ever, trivialized an important concern of policy­
makers: how to account for uncertainty in the
link between policy instruments and ultimate
objectives. Once one allows for uncertainty,
there is a potential role for flexibility to deal with
variability in the links. To the extent that some
variation is systematic and can be predicted, it is
possible to incorporate feedback into a rule.
However, some contingencies cannot be foreseen. When such events are potentially destabilizing, discretion cannot be ruled out a priori.
This suggests that it is reasonable to consider
the idea of rules with discretion. Fischer (1988) has concluded that the dichotomy between rules and discretion should be seen as a continuum, in which the extent of the monetary authority's discretion is determined by the immediacy of the link between its actions and the attainment of the objectives.
The actual practice of monetary policy can be
viewed as a point on the continuum. Moreover,
the rise of monetary targeting in the 1970s,
which led to alternative operating procedures
with differing degrees of commitment, illustrates
that the degree of commitment to any rule can
vary over time. Changes in the degree of com­
mitment are best understood when one confronts
the difficulties in making rules operational.

This paper reviews the historical development
of the rules-versus-discretion debate and exam­
ines the problems associated with making rules
operational. Section I traces the evolution of rule
advocacy from the time of the Federal Reserve
Act. Section II describes the actual operating
procedures from the early 1970s to the present.
The operational problems facing rule advocates
are highlighted in Section III, and Section IV dis­
cusses how two recently proposed rules address
the operational problems. Section V offers some
concluding comments.

I. Rule Advocacy in the United States After the Federal Reserve Act

The original Senate bill to create the Federal Reserve System in 1913 contained a provision that the system should promote a stable price level. This provision was stricken by the House Committee on Banking and Currency and was not included in the original Federal Reserve Act, reflecting the dominant influence of the real bills doctrine at that time. By the late 1920s, however, several bills had been proposed to amend the Federal Reserve Act explicitly to include a provision for price stability.¹ Advocates of these bills essentially sought to legislate a rule establishing the primacy of the price-level objective.

These efforts culminated in the Strong Hearings, held by the House Banking Committee in 1926-1927.² The hearings initially considered a bill by Representative James G. Strong including a provision that "all the powers of the Federal Reserve System should be used for promoting a stable price level." Specifically, Congressman Strong did not want the Federal Reserve to have the discretion to vary the price level for the pursuit of any other objective.

While the bill instructed that the Federal Reserve's discount-rate policy was to be determined with "the view of promoting price stability," no formula was specified. Thus, there was a certain vagueness about how the rule would be implemented.³ It left open the role for discretion in determining how much the discount rate should be altered when the price level deviated from its objective. A subsequent version of the bill was even more ambiguous about the objective of price stability. Eventually, Congressional interest in establishing the primacy of the objective of price stability faded.

■ 1 For a thorough review of the debate, see Fisher (1934). It should be noted here that a provision for purchasing power was eventually incorporated in the Employment Act of 1946. However, the price-stability goal was not included as the primary objective as most advocates of price stability in the 1920s had sought.

■ 2 For an excellent discussion of the background and events surrounding the Strong hearings, see Hetzel (1985).

■ 3 Hetzel (1985) notes that Congressman Strong and his supporters wanted to institutionalize the policy of Governor Strong (no relation) of the New York Federal Reserve Bank, which they credited for the considerable price stability that existed after 1922.

The Simons Tradition

In a widely celebrated article of 1936, Henry
Simons initiated a case for rules that was to
become known as the Chicago view. Specifically,
Simons contrasted two sharply distinct ways to
conduct monetary policy: one, to assign in
advance specific responsibilities to a monetary
authority to be carried out in accordance with
well-defined operational rules; the other, to
specify a general goal while allowing the mone­
tary authority wide discretionary powers to
achieve the goal. The essential distinction is that
the first regime defines the authority’s objective
in terms of the means, while the second defines
the objective in terms of the ends.
Simons argued for rules in terms of means. His case was predicated on liberal (19th-century sense) principles. "The liberal creed demands organization of our economic life largely through individual participation in a game with definite rules. It calls upon the state to provide a stable framework of rules within which enterprise and competition may effectively control and direct the production and distribution of goods." (Simons [1936], p. 1)
The essential notion is that government is
necessary for establishing laws that would define
the rules for a “game” in which competitive free
enterprise could flourish, but that government
should not be a player in the game. The idea
that government would manage the currency to
manipulate aggregate economic outcomes
meant that government would be a player and
thus violated the liberal creed.
An ideal rule according to Simons would be
one that fixed the quantity of the money supply.
He did not believe, however, that such a rule
could be made operational without radical
reform of the financial structure. Essentially, he
believed that an unregulated financial sector was
a source of great instability in money demand.
This instability was reflected in the perverse
behavior of velocity which, he argued, necessi­
tated a role for discretionary actions. Simons


therefore suggested a number of ideal reforms to reduce the variability of velocity to levels conducive to successful implementation of a fixed-money-supply rule. That is, government would need to redefine the rules of the game to avoid having to manage the money supply.

One proposed reform was the elimination of fractional-reserve banking. By requiring 100-percent reserves on all demand deposits, Simons sought to reduce greatly the threat of bank runs and the consequent effects on hoarding money (velocity changes). Such a reform would also give the monetary authority direct control over the total money supply by making it equivalent to the monetary base.
Simons recognized, however, that fixing the supply of deposits might merely serve to encourage the creation of effective money substitutes that would also affect velocity. Thus, another "ideal" (but even more radical) reform would be to prohibit fixed-money contracts. Restricting claims to residual-equity or common-stock form would essentially drive a wedge between money and other assets and would tend to minimize the variability of velocity. In sum, Simons believed that a monetary rule in terms of means could be made operational only under a highly regulated financial system.
Simons was not naive about the kind of assent
that could be gained for such radical reforms in
modern democratic societies. He thought that
adoption of an appropriate framework could be
implemented only after decades of “gradual and
systematic reordering of financial practices.” Iron­
ically, liberal principles also seem to support the
notion that financial institutions should be largely
unregulated and free to offer any instruments they
choose. Indeed, institutional reform has moved
in the opposite direction of Simons’s ideal.
Recognizing the practical difficulties of sharp
changes in velocity and that his ideal reforms
might be unattainable, Simons argued for a rule
for price stability in the interim. Because this is
a rule of ends rather than means, the opera­
tional procedures were not well defined. His
basis for this practical solution was that it was
the “least illiberal” of the alternatives he consid­
ered. Thus, he recognized that for immediate
purposes a certain amount of discretionary lati­
tude was necessary. While Simons may have
misjudged society’s willingness to adopt his
ideal reforms (new rules), his liberal view of
economic agents participating in a game was
prescient about the future state of the debate.
The Simons tradition was subsequently modified and popularized by Milton Friedman (1948, 1959, 1969). Initially, Friedman offered detailed proposals much in the spirit of Simons. They included the 100-percent reserves reform applied to both time and savings deposits at banks.
Subsequently, however, Friedman changed
tack, taking the position that the behavior of
velocity, particularly the velocity of the M2
aggregate, was not so perverse in a relative
sense, even under a fractional-reserve banking
system. He argued that the discretionary actions
of the Federal Reserve (albeit well-intentioned)
were likely to be a more perverse source of
economic instability. Thus, adherence to a
constant-money-growth rule would lead to
greater economic stability than would a rule
with feedback, with or without discretion. In
essence, Friedman maintained the idea that the
monetary authority should not be a player in the
game, but he eventually rejected the need for
wholesale reform of the financial system.
Friedman’s case for a constant-growth rule
was based less on the liberal creed and more on
pragmatism. His premise was that the economic
impact of monetary policy occurs with a long
and variable lag. Feedback, especially of the dis­
cretionary type, would have effects at the inap­
propriate time more often than not. Moreover,
Friedman argued that political pressures and
accountability problems under discretion are
likely to exacerbate the problem.
While Friedman’s case has intuitive appeal, it
is difficult to justify in principle. Potentially sta­
bilizing effects of policy feedback could be ruled
out a priori only if money were the exclusive
determinant of nominal GNP in the short run. If
other identifiable factors also have significant
explanatory power, then judicious use of feed­
back can, in principle, reduce the variability of
nominal GNP, even if the coefficients on lagged
money are stochastic. On the other hand, the stabilizing effects of policy feedback with parameter uncertainty are smaller than when parameters are nonstochastic (see Brainard [1967]).⁴

By eventually abandoning 100-percent reserves, Friedman also allowed a control problem: how to make a constant-growth rule operational for measures of inside money. Under 100-percent reserves there would be virtually no distinction between money and monetary base. Since Friedman also proposed closing the discount window, all money would essentially be outside money, and hence directly controlled by the Fed.

■ 4 When effects of monetary policy occur with a lag, there is a potential for instrument instability. The prospect of dynamic instability can be reduced with appropriate modifications to the objective function.

As advocates for constant money growth dropped the idea of 100-percent reserves, however, the issue of monetary control became relevant. When the measure of money is endogenous, the problem of making a constant-money-growth rule operational is far from trivial. Such was an important lesson of monetary targeting in the early 1980s. Perhaps recognizing this fact, advocates for money-growth rules now typically propose closing the discount window and adopting a constant-growth rule for the monetary base.
Arguments for a monetary rule in the Simons
tradition remain highly controversial in princi­
ple. One cannot rule out the possibility that an
intelligent policymaker could effectively take
account of incomplete information when decid­
ing optimal monetary policy. As Barro (1985) notes, "if the policymaker were also well-meaning, then there was no obvious defense for using a rule in order to bind his hands in advance." Moreover, Fischer argues, "At a formal level Friedman's analysis suffered from the logical weakness that discretion seemed to dominate rules: if a particular rule would stabilize the economy, then discretionary policymakers could always behave that way— and retain the flexibility to change the rule as needed."

Kydland and Prescott

The idea that discretion could always replicate a preferred policy rule seemed to provide a highly influential argument in which intelligent, well-meaning policymakers should not be bound by rules. However, in a widely recognized paper, Kydland and Prescott (1977) demonstrate a fallacy in this argument. It is now well understood that if economic outcomes depend on expectations about future policies, then credible precommitment to a rule could have favorable effects on the economic outcomes that discretionary policies cannot have.
Applications of the Kydland and Prescott result to monetary policy are often developed in familiar (and highly abstract) models of output and inflation.⁵ These models assume that wage-setters and the monetary authority are engaged in a noncooperative game. In this game, wage-setters must specify the nominal wage rate in a contract (their play) before prices are determined (the policymaker's play). Firms' decisions to hire are made after prices are determined, so that the real wage is known. Since firms are assumed to be profit maximizers, the real wage determines the level of output for the economy.

An essential feature of the game is that by determining the price level, the policymaker's play determines the real wage and level of real output. Moreover, expectational errors of wage-setters determine the deviation of output from its full-employment level. Thus, the game yields the familiar output supply function

(1)  y = y* + b(π − πᵉ),

where y and π are output and inflation, y* is full-employment output, and πᵉ is the expected inflation rate.

■ 5 The particular example presented here is the compact static model in Fischer (1988). The use of a static model to illustrate dynamic inconsistency has been criticized as inadequate. The basic concept, however, has been developed in the context of a dynamic model (see Roberds [1986]). Since it is the concept we want to convey here, the static model suffices.

The policymaker is assumed to have a loss function quadratic in the deviations of inflation and output from target levels. Here, desired inflation is assumed to be zero.

(2)  L = aπ² + (y − ky*)²

The target rate of output is assumed to be above the natural rate, that is, k > 1. One motivation for this assumption is that tax distortions and unemployment policy cause the natural rate to be too low from a social point of view. Alternatively, one might argue that the labor market is dominated by large unions (see Canzoneri [1985]). He assumes that the labor supply curve includes only union members and that wage-setters' behavior systematically excludes other workers. By contrast, the loss function includes all workers. Others have argued that equation (2) is not really a measure of social utility, but reflects the bias of policymakers to underestimate the natural rate of unemployment.

To illustrate the advantage of a rule, consider the case in which the policymaker has discretion in a one-period game. Because the policymaker chooses policy after the wage-setters specify the wage rate, the wage-setters know that the policymaker has the incentive to take the expected inflation rate as given and to induce higher employment with additional inflation, if possible. Given the known loss function, there is only one strategically rational expectation (that is, Nash solution) for inflation:

(3)  y = y*,  πᵉ = a⁻¹b(k − 1)y*.

Under this solution, the policymaker has no incentive to choose an inflation rate higher than expected. The gains from the additional output would be more than offset by the loss from the additional inflation. Note also that if the policymaker had an objective for the inflation rate less than the expected inflation rate before wage-setters acted, it would be inconsistent afterward. That a zero-inflation objective is not credible with discretion is an example of the problem of time inconsistency. The value of the loss function evaluated at the solution is denoted as Lᵈ and is given by

(4)  Lᵈ = (k − 1)²y*²[1 + a⁻¹b²].

If the policymaker could credibly precommit to a policy of zero inflation, that is, a dynamically consistent inflation objective, the loss function would be

(5)  Lᵖ = (k − 1)²y*².

Since Lᵖ < Lᵈ, precommitment to a zero-inflation objective affects expectations in a way that leads to a more favorable outcome than pure discretion would allow. Essentially, discretion buys nothing in terms of output, which is the same under both policies, but leads to an inflationary bias.
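To make the comparison concrete, here is a minimal numerical sketch of equations (3)-(5). The parameter values are illustrative assumptions, not values from the paper; the only point is that the precommitment loss is smaller whenever k > 1.

```python
# Assumed illustrative parameters: inflation weight a, supply slope b,
# output-target multiple k > 1, and full-employment output y*.
a, b, k, y_star = 1.0, 1.0, 1.5, 1.0

pi_e = (1.0 / a) * b * (k - 1.0) * y_star                  # eq. (3): Nash expected inflation
L_d = (k - 1.0) ** 2 * y_star ** 2 * (1.0 + b ** 2 / a)    # eq. (4): loss under discretion
L_p = (k - 1.0) ** 2 * y_star ** 2                         # eq. (5): loss under precommitment

print(f"Nash expected inflation:  {pi_e:.3f}")             # 0.500
print(f"loss under discretion:    {L_d:.3f}")              # 0.500
print(f"loss under precommitment: {L_p:.3f}")              # 0.250, less than L_d
```

Output is y* under both regimes; discretion adds only the inflation-bias term a⁻¹b² to the loss.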
To be sure, the basic result of Kydland and
Prescott demonstrates in a very precise way a
benefit to precommitment to a policy rule.
Although developed in a highly abstract model,
the result has been widely influential in aca­
demic research. A major shortcoming of the anal­
ysis, however, is that it trivializes the control
problem. Specifically, it presumes that the policy­
maker has a deterministic operating procedure
that enables precise control of inflation. Once
disturbances are introduced into the model, the
precommitment solution does not necessarily
dominate the discretion solution.
To analyze the control problem, Canzoneri
considers a stochastic disturbance to money
demand such that velocity follows a random
walk. In his game, wage-setters cannot see the
disturbance at the time they specify their wage,
but the Federal Reserve has some forecast of
money demand before it chooses its policy for
money growth. If the Fed is left with some flex­
ibility, it can accommodate the predictable
component of the change in velocity. As Can­
zoneri notes, this practice benefits both wage-setters and society as a whole. Thus, the policy problem becomes one of trading off the flexibility needed for stabilization with the constraint needed for eliminating the inflation bias.⁶

The discussion thus far has been in the con­
text of a one-period framework. In reality, how­
ever, the central bank has a horizon that extends
beyond one period. Indeed, this may explain
why central banks are typically isolated from
political pressures by design. It is now widely
understood that in a multiperiod context, the Fed
may be able to establish a reputation that serves
the same purpose as a monetary rule. This possi­
bility has been investigated by Barro and Gordon
( 1983a, 1983b). They find that under certain con­
ditions, reputation-building can lead to a result
that is superior to pure discretion, although not
as good as precommitment to a rule.
Barro and Gordon assume, however, that wage-setters eventually have access to the same information as the Fed. Canzoneri shows that when the Fed has its own private forecast of money demand, it has an incentive to misrepresent its intentions.⁷ He further demonstrates that no stable resolution of the credibility problem can rely
When the Barro and Gordon model is modified
to account for asymmetric information, the Fed
cannot build sufficient credibility by simply run­
ning a noninflationary policy for a few periods.
Rogoff (1985) has shown that other solutions
may mitigate the problem of dynamic inconsis­
tency. One such solution is that society can
benefit by choosing a “conservative” central
banker— one that places a high cost on inflation.
In the context of the simple model above, this
means that the central bank places a high value
on parameter a in its loss function. Equation (4) reveals that as a gets large, the value of the loss function diminishes, ultimately approaching the value of the precommitment solution given in equation (5).

Like Barro and Gordon, Rogoff assumed symmetric information. When the Fed has private information, it has the incentive to appear more conservative than it actually is; the wage-setters have no way of telling. The implication is that there could be periodic inflationary breakdowns followed by sustained periods where credibility builds and wage-setters learn the true intentions of their central bank. Unfortunately, Canzoneri shows that it is no simple matter to legislate incentive-compatible rules that would remedy the problem posed by private information.

■ 6 Fischer (1988) demonstrates in a formal model that when control error exists, the ordering of the loss functions under precommitment and discretion is ambiguous.

■ 7 If the money demand forecast were predicated on a stable model over time, it would be preferable for the Fed to commit to a contingent rule based on that model forecast. Thus, while the rule would allow flexibility, it would not admit discretion. Given the absence of evidence of stability in money demand, such a rule seems infeasible.
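Rogoff's conservative-banker result can be read directly off equation (4): as the weight a grows, the inflation-bias term a⁻¹b² vanishes. A brief numerical check, using the same assumed parameters as the earlier sketch:

```python
# As the policymaker's inflation weight a grows, the discretionary loss in
# equation (4) approaches the precommitment loss in equation (5).
# Parameter values are assumptions for illustration only.
b, k, y_star = 1.0, 1.5, 1.0
L_p = (k - 1.0) ** 2 * y_star ** 2
for a in (1.0, 10.0, 100.0, 1000.0):
    L_d = (k - 1.0) ** 2 * y_star ** 2 * (1.0 + b ** 2 / a)
    print(f"a = {a:6.0f}: L_d = {L_d:.4f}  (L_p = {L_p:.4f})")
```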

Rogoff also demonstrates that under certain
conditions, intermediate targeting may also pro­
vide a reasonable solution to the problem of
dynamic inconsistency. By providing the central
bank with incentives to hit an intermediate
target, it is possible to induce fewer inflationary
wage bargains in the context of his model. While
the Rogoff result demonstrates some a priori
basis for intermediate targeting, his analysis
abstracts from many problems the policymaker
faces in practice. Nevertheless, the literature
since 1977 suggests there is a reasonable basis
for some precommitment— if not to a rule for all
time— to some monetary policy on a continuum
between a pure rule and pure discretion.

II. The Operating
Strategy of the
Federal Reserve

The operating strategy of the Federal Reserve can
be viewed as a commitment to a policy on the
continuum between a pure rule and pure discre­
tion. The rule-like elements are embedded in
the Fed’s monetary targeting procedure. Mone­
tary targets are not ends in themselves, but are
intermediate variables between the instrument
variables that the Fed directly controls, such as
the federal funds rate or nonborrowed reserves,
and ultimate goals, such as price stability and
stable output growth. Thus, intermediate target
variables must be closely linked to both ultimate
objectives and instruments.
The use of intermediate targets has been
criticized as redundant and inefficient from a
control-theoretic perspective (see B. Friedman
[1975]). These objections, however, are based
on the assumption that policymakers have pre­
cise, reliable knowledge about the relationships
between instruments and final objectives. In
practice, policymakers see great uncertainty in
these links and doubt that such relationships
could be captured by econometric models accu­
rately enough to be operationally useful (see
Black [1987]). In contrast, intermediate target
variables are seen as relatively more controllable
than ultimate variables.
Moreover, policy decisions are made by major­
ity rule. It is therefore difficult, if not impossible
(Arrow’s theorem) to obtain a consensus for
adopting a particular social objective function,
which is necessary under direct targeting of final
objectives. Under an intermediate targeting strat­
egy, the Fed does not need to specify numerical
objectives for goal variables.

Intermediate targeting strategies can vary sub­
stantially in degree of flexibility or commitment.
In principle, intermediate targets may or may not
be designed to allow feedback. For example, a
target could be specified for a five-year horizon
without allowing for revisions, or for a three-month horizon to accommodate frequent
adjustments based on new information. Also, the
operating procedure used to control the target
variable may or may not allow for a high degree
of discretion. Thus, operating rules could be
highly automatic with infrequent discretionary
input or be judgmentally modified day-to-day,
based on the latest information.
Actual practice of monetary targeting indicates that the degree of flexibility and discretion incorporated into the strategy is influenced by
two key factors. The first is evidence concerning
the stability of the relationship on which the strat­
egy is based. If there is a broad consensus about
the reliability of the relationship between the
intermediate target and ultimate goals, then it is
more likely that a central bank would be willing
to commit to closer targeting of the variable with
less feedback from other sources, whether dis­
cretionary or not. The other key factor is the cen­
tral bank’s credibility or reputation in containing
inflationary expectations. If the central bank
establishes its credibility by avoiding inflationary
policies, then the public and Congress are gen­
erally more willing to accept a greater degree of
discretion in strategy and tactics.
The interplay of these factors may well
account for the increased reliance on monetary
aggregates as intermediate targets during the
early 1970s. Before the mid-1960s, there was
scant evidence that discretion exercised by the
Federal Reserve provided a substantive basis for
inflationary expectations. Nominal interest rates
were, on average, too low to indicate expecta­
tions of rising inflation. The public apparently
believed that the Fed would “take the punch­
bowl away just as the party got going,” a percep­
tion consistent with Rogoff's notion of a conser-
vative central bank. Although the Federal Reserve
had intermediate targets for interest rates— a strat­
egy that is now widely viewed as potentially
defective for avoiding inflation— the Fed seemed
to use its discretion judiciously in avoiding infla­
tion and hence in assuaging public doubt about
the efficacy of its operating strategy.
By the early 1970s, however, a basis for doubt
was beginning to emerge, as inflation had
accelerated to new and persistently high levels.
Over that decade the Fed gradually strengthened
its reliance on monetary aggregates as a source
of information about its ultimate objectives.

While the process was initially internal only, the
Fed began to announce publicly its desired
annual growth ranges for selected monetary
aggregates in response to a Congressional reso­
lution in 1975. Evidence in the early 1970s con­
vinced many that the relationship between
money and nominal GNP— as summarized by
velocity—was sufficiently reliable to choose
monetary targets over annual, or even longer,
horizons. Also, the parallel rise in the price level
offered simple but persuasive evidence that infla­
tion could be slowed by slowing growth of the
monetary aggregates. In 1979, the Fed adopted a
strategy for disinflation by gradually reducing
the rate of money growth from year to year.
The strategy was coupled with an automatic
feedback rule to enhance monetary control and
demonstrate a commitment to the strategy. Over
most of the 1970s, the Fed used the federal
funds rate— the interest rate banks charge one
another on overnight loans of reserves— as its
operating target for controlling money growth.
Specifically, it sought to influence the quantity
of money the public demanded by altering the
opportunity cost of money. For example, if
money growth was too rapid, it attempted to
raise the federal funds rate, and thereby raise
other short-term rates.
The higher rates were expected to slow
money growth by inducing the public to shift
from monetary assets to other financial assets.
Over longer horizons, higher interest rates
might also be expected to slow spending
growth and hence the transactions demand for
money. In practice, however, there is always
substantial pressure for the Fed to minimize
interest-rate movements, particularly interest-rate increases. For this reason and others, the
Federal Reserve did not respond sufficiently
promptly or intensively to keep monetary
growth from accelerating in the 1970s.
By late 1979, the inflation rate had accelerated
to double-digit levels. Financial markets, espe­
cially foreign markets, began reacting strongly to
the inflationary developments. The dollar was
falling rapidly as foreign investors appeared to
doubt the Fed’s resolve to contain inflation. In
response to the evident inflationary pressures,
the Federal Reserve adopted a new set of tactics
“as a sign of its commitment to longer-run re­
straint on money growth” (Lindsey [1984], p.
12). These tactics in effect eliminated a substan­
tial degree of discretion that the Fed had used to
smooth short-term interest-rate movements.
The new procedures sought to control money
growth by maintaining a short-run target path for
nonborrowed reserves. As Lindsey describes,
“holding to a nonborrowed reserves path essen­

tially introduces in the short run an upward sloping money supply curve on interest rate and money space" (p. 12). In effect, the nonborrowed reserves target created an automatic self-correcting mechanism that would partially resist all deviations of money from target. If money growth in a given week moved above target, the prespecified level of nonborrowed reserves virtually assured that the federal funds rate would move upward. In sum, the Federal Reserve gave up its discretion to minimize federal funds rate movements to assure financial markets of its commitment to the disinflation strategy.
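To see why the mechanism is self-correcting, consider a minimal sketch that solves for the funds rate clearing the reserves market when nonborrowed reserves are held fixed. Every functional form and number below is a hypothetical assumption for exposition, not a description of the Fed's actual procedure or balance sheet.

```python
def funds_rate(money_demand_shift, nonborrowed=40.0, discount_rate=5.0,
               reserve_ratio=0.1):
    """Bisect for the funds rate i that clears the reserves market:
    reserve_ratio * M(i) = nonborrowed + borrowings(i).
    All functional forms and magnitudes are hypothetical."""
    lo, hi = 0.0, 25.0
    for _ in range(60):
        i = (lo + hi) / 2.0
        money_demand = 500.0 + money_demand_shift - 20.0 * i   # falls with i
        borrowings = max(0.0, 2.0 * (i - discount_rate))       # rises with i - d
        excess = reserve_ratio * money_demand - nonborrowed - borrowings
        lo, hi = (i, hi) if excess > 0 else (lo, i)
    return i

print(f"baseline funds rate:      {funds_rate(0.0):.2f}")    # about 5.00
print(f"after money demand rises: {funds_rate(50.0):.2f}")   # about 6.25
```

When money demand moves above target, required reserves rise against the fixed nonborrowed supply, so the funds rate must increase to ration reserves and induce borrowing at the window, without any discretionary action.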
While the new procedure involved substantial
commitment at the tactics level, it permitted sig­
nificant discretionary feedback at the strategy
level. Under the strategy, the FOMC was free to
change its short-term monetary target to take
account of new information— a practice that led
to significant deviations of money from
announced annual targets. Such discretionary
feedback was deemed necessary as evidence
mounted that the velocity of money was not as
reliable as expected.
It was well understood at the time that dereg­
ulation in financial markets, changes in transac­
tions technology, and disinflation were having a
substantial impact on individual portfolios and
hence on the velocity of money. While such fac­
tors could account for the target misses in a qual­
itative sense, policymakers lacked means to pre­
dict the impact on money growth in order to
specify reliable target values. By August 1982 the
evidence was compelling that the behavior of
velocity had been altered in some permanent
way. Because time was needed to identify the
new patterns of velocity behavior, attempts to con­
trol monetary aggregates closely appeared futile.
Consequently, the Fed abandoned its operat­
ing procedure and hence its commitment to a
fixed path of nonborrowed reserves in the short
run. It de-emphasized the role of M l and
adopted a more flexible operating strategy.
Since the fall of 1982, the Fed’s operating target
has been the aggregate level of seasonal plus
adjustment borrowings at the discount window.
Under this procedure, the FOMC specifies a
short-term objective for this variable at each of
its regularly scheduled meetings (at approxi­
mately five- to six-week intervals).
Unlike with the nonborrowed reserves oper­
ating target, the current procedure does not
produce automatic self-correcting federal funds
rate responses to resist divergences of money
from its long-run path. Substantial changes in
the federal funds rate are largely a consequence
of judgmental adjustments to the borrowings
target. Thus, the Fed has regained much of the leeway to smooth short-term interest rate changes that it had prior to 1979.

It is important to note that by the end of 1982
the disinflation process had become credible to
most of the public. Financial markets, particu­
larly those for fixed-income securities, reacted
favorably to the procedural change. Long-term
interest rates continued to decline substantially
after the Fed announced abandonment of the
nonborrowed-reserves procedure. Moreover,
over the long term, wage demands moderated
to pre-1970s levels and have been persistently
moderate to this day.⁸ This would seem to be strong evidence that wage-setters haven't suspected the Fed of "cheating" on its goal of reducing and maintaining lower inflation.
The evolutionary cycle of the Federal Reserve’s
operating procedure provides a useful illustration
of how the degree of discretion has varied in
response both to evidence concerning the reli­
ability of the money-income relationship and to
the reputation of the Fed. As the Fed’s credibil­
ity on inflation appeared to wane in the 1970s, it
adopted procedures that increased reliance on
monetary aggregates as intermediate targets and
limited its discretion to smooth interest rates. As
evidence suggested a breakdown in the behavior
of velocity, the degree of commitment to mone­
tary control diminished to allow the necessary
operational flexibility. By that time the Fed’s
commitment to maintaining lower rates of infla­
tion had become credible. While the actual strat­
egy can be characterized as a monetary rule with
varying degrees of discretion, it never incorpo­
rated the degree of commitment that most
monetarists had hoped for— one that would
have not altered monetary targets at all.

III. Problems with
Making Rules Operational

The review of the Federal Reserve’s actual oper­
ating strategy also serves to highlight a number
of potential problems with making rules opera­
tional. Poole (1988), a longtime monetary rule
advocate, recently concludes that “there is a
serious and probably insurmountable problem
to designing a predetermined money growth
path to reduce inflation.” Essentially, he argues
that it is not possible to reliably quantify the
effects of disinflation on money demand and,
hence, on velocity.⁹ Thus, managed money is

■ 8 For evidence concerning moderation in compensation demands, see Groshen (1988).

unavoidable during the transition to lower infla­
tion. While Poole accepts the eventual efficacy of
a constant-growth rule, he believes there is no
formula to determine when the discretionary
mode should terminate. Presumably, it would
only be after inflation has been eliminated.
Even if the transition to lower inflation were
no longer operationally relevant, the experience
of the early 1980s makes it clear that money
demand and velocity have also been independ­
ently affected by regulatory change and by devel­
opments in transactions technology. McCallum
(1987) has recently argued that a rule should not
rely on the presumed absence of the effects of
such changes. This principle of rule design pre­
cludes simple, fixed rules like the constant
growth rate of money (or monetary base). Oper­
ational feasibility demands that a monetary rule
should at least be flexible enough to accommo­
date the effects of such changes on velocity.
Recognizing a need for some form of flexibil­
ity, some pure-rule advocates now propose nondiscretionary feedback rules. Nondiscretionary
feedback requires specification of a formula link­
ing goal (or target) variables to policy instru­
ments. The formula presumes the existence of
some reasonably stable and hence reliable model,
that is, one that characterizes sufficiently well the
relationship between instruments and objectives.
The absence of a consensus in macroeconomics about an appropriate model poses a serious obstacle for gaining assent for any particular feedback rule in practice. While most economists adopt a perspective, few seem willing to accept the notion that a particular (especially simple) characterization of the economy would be sufficiently reliable for long periods. Even among rule advocates sharing a common perspective, there are likely to be subtle differences about the formula specification that may splinter support for a given rule.
This problem of model uncertainty is com­
pounded by the important demonstration by
Lucas (1976) that “structural” models are in gen­
eral not invariant to the way in which policy is
implemented. Since this critique, there has been
no widely accepted means of evaluating opera­
tionally concrete policy proposals.¹⁰ While many
large-scale econometric models have met the
market test, few economists seem convinced by
policy evaluations based on particular economet­
ric models.

■ 9 This point is an example of a more general result of Lucas (1976), which is discussed below.

■ 10 Advocates of rules sometimes argue that if a nondiscretionary rule were to be implemented, relationships would stabilize, leading to more favorable outcomes than suggested by simulations based on historical relationships. While this purely a priori theoretical argument is consistent, it does not appear to be convincing to most economists.

Without a consensus about how monetary pol­
icy affects aggregate economic outcomes, it is
not compelling to argue that expectations of
economic agents (for example, wage-setters) are
based on any one model of the economy. Any
given rule could possibly be perceived as unsus­
tainable by a sufficient number of agents such
that the rule would not be credible in an aggre­
gate sense. If agents believed the rule was unsus­
tainable, the game between agents and policy­
makers would become extremely complicated,
with no apparent solution. Thus, it would not be
clear that commitment to a rule would be bene­
ficial. It would seem useful that a rule advocate
demonstrate that favorable consequences of a
proposed rule would be robust to alternative
models of the economy.

IV. Two Recently
Proposed Rules

Two recently proposed rules by McCallum (1987,
1988) and Hall (1984) illustrate how the debate
over rules versus discretion has evolved to a
more operationally concrete level. Both authors
appeal to the result of Kydland and Prescott as a
justification for implementing their rules. Both
also recognize a need for flexibility and address
operational problems. In sharp contrast, how­
ever, is the way they incorporate flexibility.
McCallum proposes a nondiscretionary feed­
back rule for nominal income using the mone­
tary base as the instrument. The target path of
nominal income is fixed and grows at a pre­
specified rate of 3 percent per year. The feed­
back formula is
(6)  Δbₜ = 0.00739 − (1/16)(vₜ₋₁ − vₜ₋₁₇) + λ(x*ₜ₋₁ − xₜ₋₁),

where bₜ = log of the monetary base (for period t), vₜ = log of base velocity, xₜ = log of nominal GNP, and x*ₜ = target path for nominal income.

The constant term 0.00739 is simply a 3 percent annual growth rate translated into quarterly logarithmic units. The second term subtracts the average growth rate of velocity, approximated by the average difference in the logarithm of velocity over the previous four years. This term can be thought of as a simple time-series estimate of trend velocity growth. The third term specifies how policy is to respond to deviations of nominal income from its target path.

The moving average of velocity growth is a simple statistical filter designed to detect permanent changes in velocity growth. As such, it provides a mechanism to maintain a long-term correspondence between the current base growth path and the long-term nominal objective to account for changes in transactions technology. Given the length of the moving-average period (four years) and the absence of any systematic feedback from interest rates, however, the rule provides virtually no adjustment in response to the current state of the business cycle or to financial conditions.¹¹
The third term provides feedback to assure that nominal income ultimately returns to its trend path. The choice of the parameter λ incorporates some degree of flexibility to deal with the potential problem of instrument instability. This problem arises when effects of policy occur over time, as they do in actual economies, particularly those with sticky prices. Large responses to maintain a target path in the near term could lead to longer-term effects in the opposite direction, requiring even greater offsetting policy responses in later periods. This sequence would be unstable if responses and effects were to become ever-increasing. The value of λ (presumably less than one) should be chosen to minimize the potential for this dynamic instability, under the constraint that it be sufficiently large to provide adequate responsiveness of base growth to target misses. McCallum suggests that a value of 0.25 appears to be somewhat robust for this objective over alternative models of the economy.
If velocity growth were constant, and if nominal GNP were on its target path for a sustained period, the policy prescribed by McCallum's rule would be the same as a 3 percent growth rule for the monetary base. Thus, McCallum's rule is essentially a generalization of the constant-money-growth rule. Because it is more general, it allows for flexibility to deal with some of the problems of making monetarist rules operational. Moreover, McCallum claims that because the monetary base is "controllable," the rule can be accomplished with no operational discretion.¹²

■ 11 Recent evidence suggests that velocity has become increasingly interest-sensitive in the 1980s. To the extent that systematic effects of interest rates could be reliably estimated, additional flexibility could be introduced into the rule as feedback to compensate for short-run variability in velocity. McCallum expresses doubt, however, that economists know enough to base policy on any one short-run empirical model. In this sense he defends, if only indirectly, the monetarist dictum of Friedman, in which monetary policy affects the economy with long and variable lags.

■ 12 Under current institutional arrangements, the total monetary base can be controlled only indirectly, working through effects of changes in interest rates on the demand for base components. Advocates of base targeting often call for institutional reforms— such as exactly contemporaneous reserve accounting and closure of the discount window— to enable direct control of the base. Alternatively, McCallum's rule can be applied to the nonborrowed base, which is directly controllable under existing institutions.
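To show the mechanics of equation (6), here is a minimal simulation sketch. The velocity process, initial conditions, and horizon are illustrative assumptions, not McCallum's data or results; the sketch only demonstrates how the rule's feedback pulls nominal income back toward its target path.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, g = 0.25, 0.00739      # feedback weight; 3 percent per year in quarterly logs
T = 120                     # quarters to simulate (assumed)

# Log base velocity: a random walk with drift (an assumed process).
v = np.cumsum(0.002 + 0.005 * rng.standard_normal(T))

b = np.zeros(T)             # log of the monetary base
x = v.copy()                # log nominal GNP; by definition x_t = b_t + v_t
x_star = np.zeros(T)
x_star[16:] = x[16] + g * np.arange(T - 16)   # 3 percent target path

for t in range(17, T):
    # Equation (6): trend term, minus the four-year average of velocity
    # growth, plus feedback on last quarter's nominal-income target miss.
    db = g - (v[t-1] - v[t-17]) / 16.0 + lam * (x_star[t-1] - x[t-1])
    b[t] = b[t-1] + db
    x[t] = b[t] + v[t]

miss = x[17:] - x_star[17:]
print(f"RMSE of nominal income around target: {np.sqrt(np.mean(miss**2)):.4f}")
```

Under these assumptions the target miss decays geometrically at rate (1 − λ) between velocity shocks, which is the sense in which λ = 0.25 provides responsiveness without instrument instability.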

In this sense McCallum’s proposal is a flexible
version of a rule for means. The flexibility is
extremely limited, however, involving only feed­
back from simple statistical models to maintain
long-run relationships. No role is given to struc­
tural models that might allow feedback for short­
term economic stabilization. Such a rule shows
little faith in macroeconomic models or in dis­
cretionary decisions of the Fed.
Some rule advocates, on the other hand, propose a much greater role for economic models and judgment of the Fed. An example is an ends-oriented rule advanced by Hall (1984). Under Hall's strategy, the Federal Reserve is instructed to stabilize the price level around a constant long-run average value. To make this strategy elastic in the short run, Hall proposes giving the Fed some prespecified leeway in achieving the target depending on the amount of unemployment. The permissible deviation of the actual price level, p, from its target, p*, is defined by a simple numerical rule linking it to the deviation of the unemployment rate, u, from its normal rate, presumed to be 6 percent:

(7)  100(p − p*)/p* = A(u − 6).

The coefficient A is to be specified by the Federal Reserve. Based on simulations, Hall tentatively recommends that it equal eight.

Specifically, this relationship is to be imposed as a constraint on policy instrument settings. In formal terms: "Monetary policy is on track when the deviation of the price level from its constant target is eight times the deviation of unemployment from its normal level [presumed to be 6 percent]. Policy is too tight if the price deviation is less than eight times the unemployment deviation; it is too expansionary when the price deviation is more than eight times the unemployment deviation. The elasticity of 8 in this statement is a matter for policymakers to choose." (Hall [1984], p. 140)
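The quoted criterion translates directly into a check against equation (7). The function below is a minimal sketch of that check; the function name and the example numbers are hypothetical.

```python
def hall_policy_stance(p, p_star, u, A=8.0, u_normal=6.0, tol=1e-9):
    """Classify policy under equation (7): on track when the percentage
    price deviation equals A times the unemployment gap (in points)."""
    price_dev = 100.0 * (p - p_star) / p_star
    allowed = A * (u - u_normal)
    if abs(price_dev - allowed) <= tol:
        return "on track"
    return "too tight" if price_dev < allowed else "too expansionary"

# With unemployment one point above normal, the rule calls for prices
# 8 percent above target, so prices 2 percent below target are too tight.
print(hall_policy_stance(p=98.0, p_star=100.0, u=7.0))   # -> too tight
```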
Policy formulation under this approach would
be prospective. Thus, the Fed would need to
employ a model that links instrument variables
to the price level and to the unemployment rate
over the criterion period.¹³ It would be free to use whatever model and instruments it chooses. Instrument settings would be determined by an iterative process. To begin, an initial forecast for the unemployment rate and price level would be compared against the rule formula to be judged for appropriateness— for example, too tight, too

easy, or on track. This process would thereby
determine the direction in which instrument set­
tings should be changed, if necessary. A second
round of forecasts would then be obtained and
compared. The process would continue until the
instrument settings yielded price-level and
unemployment forecasts consistent with the rule.
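A stylized sketch of that iterative search appears below. The forecast function stands in for whatever model the Fed might choose; its linear form, its coefficients, and the step size are purely hypothetical assumptions.

```python
def forecast(r):
    """Hypothetical model mapping an instrument setting r to forecasts
    of the price level and the unemployment rate. Purely illustrative."""
    return 100.0 - 1.5 * (r - 5.0), 6.0 + 0.5 * (r - 5.0)

P_STAR, A, U_NORMAL = 100.0, 8.0, 6.0
r = 3.0                                  # initial instrument setting (assumed)
for _ in range(100):
    p, u = forecast(r)
    # Residual of equation (7): positive means the forecasts look too expansionary.
    gap = 100.0 * (p - P_STAR) / P_STAR - A * (u - U_NORMAL)
    if abs(gap) < 0.01:
        break                            # forecasts are consistent with the rule
    r += 0.1 if gap > 0 else -0.1        # tighten or ease, then re-forecast
print(f"instrument setting consistent with the rule: {r:.1f}")
```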
To impose discipline, Hall would require the
Fed to be explicit about its forecasts, defending
them publicly at the semiannual Congressional
review and in comparison with private forecasts.
Hall argues that forecasting errors of good pri­
vate forecasters would provide a sufficiently
reliable standard to maintain unbiased out­
comes. If the Fed’s forecasts were consistently
different from reputable private forecasts, and if
the outside forecasts were more often correct,
then the Fed would be under public pressure to
modify its way of setting policy instruments. For
Hall, the problem with discretion lies not with
the use of faulty econometric models but with
the absence of a commitment to an explicit rule
for the price level.
Both Hall and McCallum employ small empir­
ical models to generate simulations under their
rules. McCallum uses a variety of models based
on competing views to examine the robustness
of his rule’s performance. His simulations sug­
gest that his rule would have produced a root
mean square error (RMSE) of nominal income
of around 2 percent from 1954 to 1985. This is
approximately one-third the RMSE of actual GNP
around its trend over the same period. He con­
cludes that his rule would have worked rela­
tively well in the United States.
To address the criticism that his simulations
are subject to the Lucas critique, McCallum notes
that his rule relates nominal demand to nominal
policy instruments. He argues that the sensitivity
of parameters to policy regime changes is likely
to be quantitatively less important for such rules
than for rules that relate real to nominal varia­
bles, for example, based on Phillips curve m od­
els. Hall's simulations, on the other hand, are based on the presumption that there is a reliable (policy-invariant) relationship between the variability of the inflation rate and the variability of the price level.¹⁴ His simulation results suggest that both price-level variability and unemployment variability would have been less than actually experienced from 1952 to 1983 under the elastic-price rule.

■ 13 Based on the assumption that monetary policy affects the unemployment rate reliably only after a yearlong lag, Hall argues that the criterion period should be the forecast horizon for the year beginning six months ahead.

■ 14 The analysis of policy in terms of the variability of unemployment and price level was developed by Taylor (1980, 1981). It is important to note that there is no implied trade-off in this model between the inflation rate and trend output growth.


While the results presented under both rules
appear favorable, few analysts seem convinced
by small-model simulations. Experience with
large-scale econometric models, for example,
suggests that interest rates would vary sharply
under McCallum’s rule. His models, which do
not allow for interest-rate interactions, cannot
account for the economic consequences of such
interest-rate variation. Fischer (1988) argues that
the natural vehicles for studying policy rules are
the large-scale econometric models, many of
which have met the market test. Nevertheless,
he notes that it would be difficult to justify legis­
lating any nondiscretionary rule given the vari­
ety and inadequacies of existing models. On the
other hand, existing models may be no more
reliable for discretionary decisions, particularly
when policymakers may use them selectively to
support their own prior beliefs.

V. Some Concluding
Comments

The success of the U.S. disinflation strategy early
in this decade helped reestablish the Federal
Reserve’s credibility as an inflation fighter. Much
of the reputational capital surely persists today.
Recently, however, some analysts have questioned
whether the current strategy is adequate to
extend and maintain the progress against infla­
tion (see Black [1987]).
A key concern is that the strategy may lack suf­
ficient institutional discipline to assure that
short-term objectives—such as interest-rate
smoothing— do not interfere with the achieve­
ment of longer-term price stability. This fear has
led to a renewed interest in alternative strategies
that are closer to a pure rule on the continuum
between a pure rule and pure discretion.
Ideally, a policy strategy should perform ade­
quately well under alternative views about
aggregate economic relationships so that suffi­
cient numbers of agents believe that the rule
could be credibly implemented. Rule advocates
might well follow the example of McCallum and
examine the robustness of their rule’s perfor­
mance, simulating with alternative models of the
economy. The choice of criteria for “adequate
performance” is of course a difficult and contro­
versial matter. We conclude here, as does Fischer
(1988), that the discussion of alternative policies
is too important to be suppressed by the econ­
ometric evaluation critique.

References

Barro, Robert J. "Recent Developments in the Theory of Rules Versus Discretion." Conference Papers: Supplement to the Economic Journal 96 (1985): 23-37.

________, and David B. Gordon. "A Positive Theory of Monetary Policy in a Natural Rate Model." Journal of Political Economy 91 (1983a): 589-610.

________, and David B. Gordon. "Rules, Discretion, and Reputation in a Model of Monetary Policy." Journal of Monetary Economics 12 (1983b): 101-21.

Black, Robert P. "The Fed's Anti-inflationary Strategy: Is It Adequate?" Federal Reserve Bank of Richmond Economic Review (September/October 1987): 1-9.

Brainard, William. "Uncertainty and the Effectiveness of Policy." American Economic Review Papers and Proceedings 57 (May 1967): 411-25.

Canzoneri, Matthew B. "Monetary Policy Games and the Role of Private Information." American Economic Review 75 (December 1985): 1056-70.

Fischer, Stanley. "Rules Versus Discretion in Monetary Policy." National Bureau of Economic Research Working Paper No. 2518 (February 1988).

Fisher, Irving. Stable Money: A History of the Movement. New York: Adelphi Company, 1934.

Friedman, Benjamin M. "Targets, Instruments, and Indicators of Monetary Policy." Journal of Monetary Economics 1 (October 1975): 443-73.

Friedman, Milton. "The Lag in Effect of Monetary Policy." Journal of Political Economy 69 (October 1961): 447-66. Reprinted in The Optimum Quantity of Money and Other Essays. Chicago: Aldine Publishing Company, 1969: 237-60.

________. A Program for Monetary Stability. New York: Fordham University Press, 1959.

________. "A Monetary and Fiscal Framework for Economic Stability." American Economic Review 38 (June 1948): 245-64. Reprinted in Essays in Positive Economics. Chicago: The University of Chicago Press, 1953.

Groshen, Erica L. "What's Happening to Labor Compensation?" Federal Reserve Bank of Cleveland Economic Commentary, May 15, 1988.

Hall, Robert E. "Monetary Strategy with an Elastic Price Standard." In Price Stability and Public Policy, a symposium sponsored by the Federal Reserve Bank of Kansas City, August 1984: 137-59.

Hetzel, Robert L. "The Rules Versus Discretion Debate Over Monetary Policy in the 1920s." Federal Reserve Bank of Richmond Economic Review (November/December 1985): 3-14.

Kydland, Finn E., and Edward C. Prescott. "Rules Rather than Discretion: The Inconsistency of Optimal Plans." Journal of Political Economy 85 (June 1977): 473-91.

Lindsey, David E. "The Monetary Regime of the Federal Reserve System." Conference on Alternative Monetary Regimes sponsored by Ellis L. Phillips Foundation and Dartmouth College (August 1984).

Lucas, Robert E. "Econometric Policy Evaluation: A Critique." Journal of Monetary Economics, Supplementary Series 1 (1976): 19-46.

McCallum, Bennett T. "Robustness Properties of a Rule for Monetary Policy." Carnegie-Rochester Conference Series on Public Policy. Carnegie Mellon University and National Bureau of Economic Research (February 1988).

________. "The Case for Rules in the Conduct of Monetary Policy: A Concrete Example." Federal Reserve Bank of Richmond Economic Review (September/October 1987): 10-18.

Poole, William. "Monetary Policy Lessons of Recent Inflation and Disinflation." The Journal of Economic Perspectives 2 (Summer 1988): 73-100.

Roberds, William. "Models of Policy Under Stochastic Replanning." Federal Reserve Bank of Minneapolis Research Department Staff Report 104 (March 1986).

Rogoff, Kenneth. "The Optimal Degree of Commitment to an Intermediate Monetary Target." Quarterly Journal of Economics 100 (November 1985): 1169-89.

Simons, Henry C. "Rules Versus Authorities in Monetary Policy." Journal of Political Economy 44 (February 1936): 1-30.

Taylor, John B. "Stabilization, Accommodation, and Monetary Rules." American Economic Review Papers and Proceedings 71 (May 1981): 145-49.

________. "Output and Price Stability: An International Comparison." Journal of Economic Dynamics and Control 2 (February 1980): 109-32.

Actual Competition, Potential Competition, and Bank Profitability in Rural Markets

by Gary Whalen

Gary Whalen is an economic advisor at the Federal Reserve Bank of Cleveland. The author wishes to acknowledge the helpful comments of Kelly Eakin.

Introduction

The nature of the relationship between the struc­
ture of the market in which banks operate— the
number and size distribution of actual competi­
tors in a market— and their performance has been
examined in a considerable number of empirical
studies over the past 20 years. 1 Industrial organi­
zation economists have investigated the structure/
performance relationship for a wide variety of
intra- and interindustry samples of firms.
The typical maintained hypothesis has been
that explicit or tacit collusion is more likely in
markets with a limited number of large competi­
tors and should result in a statistically significant
positive relationship between market concentra­
tion and the profitability of firms operating in the
market. Definitive support for this hypothesis
implies that an activist antitrust policy aimed at
limiting merger-related increases in concentra­
tion is an appropriate public policy goal.
A positive concentration/profits relationship
has been found in some, but far from all, of the
empirical studies investigating bank market
structure and performance. The mixed results of
this body of empirical work have been inter­
preted in widely different ways.
■ 1 For reviews of this work, see Rhoades (1982), Gilbert (1984), and Osborne and Wendel (1983).

Some researchers, predisposed to accept the
reasonableness of the concentration/collusion
hypothesis, have concluded that the weight of
the evidence supports this position and have
advanced a number of reasons to discount the
lack of consistent empirical support for the
expected relationship between concentration
and bank profitability. 2 One is that the equations
estimated in many of these studies have been
misspecified, possibly biasing the estimated
coefficient on the concentration variable. In par­
ticular, several researchers have suggested that
market concentration might impact bank man­
agement’s risk-return preferences or opportuni­
ties. 3 Specifically, bank management operating
in concentrated markets might trade off potential
monopoly profits for lower risk. If this is the
case, significant concentration-related differences
in profitability might not be evident in studies
that fail to explicitly control for risk.
Other researchers have argued that the single­
equation estimation techniques typically used in
previous empirical work, even those where risk
measures have been included as additional

■ 2 This is the conclusion of Rhoades (1982).

■ 3 See Heggestad (1977), Rhoades and Rutz (1982), Clark (1986b), and Liang (1987).

explanatory variables, may have biased the
results. 4 In their view, profitability and risk are
determined simultaneously, so we should rely
only on the results of studies where the relation­
ships between these variables and concentration
are investigated using simultaneous equation
estimation techniques.
Yet another group of researchers argue that the
concentration/collusion hypothesis is unreason­
able because it embodies a questionable implicit
assumption: that technological conditions, regu­
lation, other barriers to entry, or the threat of
predation allow colluding firms in concentrated
markets to disregard potential competitors.
Concentration-related monopoly power and
profits can exist and persist only when there is
no threat of entry by potential competitors. 5 Mar­
kets in which this type of behavior can occur
have been given the label “noncontestable.” In
theoretical work, researchers have shown that
when entry and exit are not precluded, or a mar­
ket is contestable, then outcomes can approxi­
mate those of perfect competition even if the
number of actual competitors is quite small or if
concentration is high . 6 Consequently, firm prof­
itability should not be expected to vary with
concentration.
The possibility that potential competitors may
significantly affect the prices charged and profits
earned by incumbent firms has been recognized
for some time . 7 Until quite recently, however,
banks and other financial intermediaries faced
numerous regulatory and legislative constraints
on geographic location, on permissible products
and services they could offer, as well as on the
prices they could charge. Thus, few of the geo­
graphic and product markets in which banks
operated approximated the contestable ideal.
This situation has changed dramatically in the
past 10 years. A large number of states have
reduced intrastate and, more recently, interstate

■ 4 This is the conclusion of Clark (1986b) and Liang (1987).

■ 5 See Brozen (1982) and Baumol, Panzar, and Willig (1982).

■ 6 Actually, researchers have differentiated markets according to the degree to which they are contestable. At one extreme are noncontestable markets. At the other extreme are perfectly contestable markets. In essence, perfectly contestable markets are ones in which entry and exit are costless. This, in turn, implies no barriers of any kind to entry and exit. In particular, zero sunk costs are required to enter the market. Markets in which entry and exit can occur but are not costless have been labeled imperfectly contestable. In such markets, potential competition is expected to influence the performance of incumbent firms. For a more detailed discussion of these issues, see Schwartz (1986), pp. 37-48, and Morrison and Winston (1987), pp. 53-60.

barriers to geographic expansion by commercial
banks and by savings and loan institutions. In
addition, the repeal of usury laws and removal of
Regulation Q ceilings on deposit rates have left
financial intermediaries basically free to compete
on a price basis.
Empirical investigations of scale and scope
economies in banking suggest that small-scale
entry is not precluded by cost conditions. 8 A
negligible amount of the costs of branching
appears to be sunk. These circumstances suggest
that banking markets— at least in states that have
liberalized branching to some extent, facilitating
entry by out-of-market firms— have become
contestable. Alternatively, potential competition
may have become an effective disciplinary force,
which could explain the absence of a strong
positive concentration/profitability relationship
in some of the more recent empirical studies. 9
Researchers who do not subscribe to the con­
centration/collusion hypothesis have offered an
alternative explanation for the significant posi­
tive relationship between concentration and
profitability reported in some previous studies.
They argue that such a finding need not neces­
sarily signal collusion or indicate causation run­
ning from concentration to profitability. In their
view, labeled the “efficient structure hypothe­
sis” (ESH), superior efficiency, management, or
luck could result in increased firm profitability
and market share and, ultimately, in higher con­
centration.10 If the ESH is correct, then the posi­
tive relationship between concentration and
profitability detected in empirical work where a
market share variable is not included is spurious
and simply reflects the correlation between
market share and concentration.
At present, then, there continues to be a great
deal of uncertainty and disagreement about the
relationship between market concentration,
potential competition, and bank performance.
Very few of the numerous previous studies have
incorporated risk, controlled for market share,
and investigated possible simultaneity.
More important, virtually no empirical work
on the impact of potential competition in bank­
ing, or in any other industry for that matter, has

■ 7 This possibility was noted in Bain (1949) more than 30 years ago.

■ 8 See Berger, Hanweck, and Humphrey (1986).

■ 9 For example, Evanoff and Fortier (1988) find evidence of a positive concentration/profitability relationship for a subsample of banks drawn from unit banking states but not for the subsample drawn from states where branching is permitted.

■ 10 See Smirlock (1985).

been done to date. 11 A number of circumstances
make banking an ideal subject for such research.
The partial, gradual elimination of geographic
barriers to market entry, cost conditions, and the
local nature of banking markets mean that entry
can occur if market conditions warrant and that
the number of potential bank entrants for each
local market can be determined.
This paper attempts to provide more defini­
tive evidence on the relationship between com­
petition and bank profitability. The relationship
between bank profitability and both actual and
potential competition is examined in a frame­
work that explicitly includes market share and
risk variables. Further, the impact of possible
simultaneity is also explored.
The sample consists of 159 banks drawn from
non-MSA (metropolitan statistical area) counties
in Ohio. The focus is on non-MSA counties for
several reasons. First, the number of actual bank
competitors in a typical non-MSA county is gen­
erally small, and concentration is high relative
to MSAs in the state. Second, economic and demo­
graphic characteristics of rural counties generally
make them less attractive for entry than urban
counties. Finally, actual and potential competi­
tion from out-of-market and nonbank suppliers
of financial services is likely to be limited.
Thus, if the concentration/collusion hypothe­
sis is correct and if potential competition is a rel­
atively unimportant determinant of firm perfor­
mance, supporting empirical evidence is likely to
be obtained from this data set. Conversely, ab­
sence of support for the concentration/collusion
hypothesis and the finding that potential com­
petition impacts bank performance in rural
markets is strong evidence that local banking
markets, both rural and urban, are contestable.
The time interval examined is from 1979 to
1981. This particular period was chosen because
the bank branching law in Ohio was liberalized
in January 1979. Before then, de novo branching
was limited to a bank’s home office county.
Under the new law, banks could branch de novo
into all counties contiguous to the county in

■ 11 The only explicit empirical test to date is Hannan (1979). In many structure/performance studies, the sign and statistical significance of coefficients on branching law dummies in estimated profitability equations are used to draw inferences about the intensity of potential competition. In others, the statistical significance (or lack of significance) of the estimated coefficient on the concentration term is used to obtain insight on this issue. In fact, very few explicit empirical tests of contestability/potential competition have been done for any industry, including the airline industry, which Baumol, et al. cited as an example of one with contestable markets. The study by Morrison and Winston (1987) may be the only one published to date.

which their head office was located. Thus, the
partial removal of geographic restrictions on
branching created an identifiable number of po­
tential bank entrants for each county in the state.
The choice of a three-year time period appears
somewhat arbitrary. However, a period of this
length should be short enough to ensure that
ongoing expansion activity by banks does not
materially affect the measure of potential com­
petition used in the study. It should also be
long enough to allow any performance impacts
attributable to potential competition to be
detected statistically.
In the following sections, we discuss the
model to be estimated, describe the sample and
estimation techniques, and present the results. A
summary and conclusions follow.

I. Model Specification

Unfortunately, there continues to be no strong
consensus about the “best” microeconomic
model of the banking firm. As a result,
researchers disagree about how the profitability
equation to be estimated—whether a single
reduced-form equation or a structural equation
in a simultaneous system— should be specified.
No attempt is made here to resolve the theoreti­
cal debate. Our approach is simply to estimate
versions used in previous studies, with market
share, risk, and potential competition variables
explicitly included.
Thus, the profitability equations estimated had
the following general form:
(1)  $PROF_i = f(AC_i, PC_i, MS_i, RISK_i, Z_i)$,

where

$PROF_i$: a measure of the profitability of bank $i$;
$AC_i$: a proxy for actual competition in the market in which bank $i$ operates;
$PC_i$: a proxy for potential competition faced by bank $i$;
$MS_i$: the market share of bank $i$;
$RISK_i$: a measure of the overall risk of bank $i$; and
$Z_i$: a vector of additional control variables.


The profitability measure employed as the
dependent variable in this study is rate of return
on equity (net income after taxes, excluding se­
curities gains and losses, divided by book equity,

both measured at year-end) averaged over the
three years from 1979 to 1981. This profitability
measure best reflects the efforts of managers
interested in shareholder wealth maximization.
The determinants of profitability of primary
interest in this study are actual and potential
competition. The former is proxied in two
alternative ways: by incumbent firm market
concentration and by the number of actual
competitors. The latter is proxied only by the
number of potential competitors. 12
The precise form of the relationship between the proxies for actual competition, potential competition, and profitability is unclear and could take a number of different forms.
The consensus view is that actual competition
will be more intense and incumbent profitabil­
ity will be lower, the greater the number of
actual competitors or the lower the market con­
centration. The relationship between these
proxies, the likelihood of collusion, and the
intensity of competition and ultimately profita­
bility might not be linear, however. 13 For
example, the marginal impact of additional
actual competitors might not be constant, but
could decline as the number of competitors
increased. As a result, we also investigate non­
linear relationships between the proxies for
actual competition and profitability.
As long as entry into rural banking markets is
not precluded, the prices and profits of incum­
bents should also vary systematically with the
number of potential entrants. However, there is
some uncertainty about the precise form of the
relationship between incumbent profitability and
the number of potential competitors because
the relationship between the number of potential
competitors and the intensity of potential com­
petition is unclear.14 The standard view appears
to be that the larger the number of potential
entrants, the greater the perceived threat of entry
and the lower the incumbent prices and profits.
Some writers, however, have suggested that
when more than one potential entrant exists,
each potential entrant will recognize that entry
by others could occur and could impact its


■ 12 Since it is not clear that the size distribution of potential competitors influences their performance impact, and since construction of a measure of potential competitor concentration would be extremely tedious, only the number of potential competitors is employed.

expected profit. 15 Researchers have demon­
strated that mutual awareness among potential
entrants could cause the relationship between
the number of potential entrants and the overall
likelihood of entry to be nonmonotonic, per­
haps even negative. This type of relationship
implies that the negative marginal impact of
additional potential competitors on incumbent
profitability could decline as the number of
potential entrants increases. Because of this
possibility, a quadratic potential competition
specification is also explored.
Several researchers have also suggested that
the impact of potential competition could vary
with the intensity of actual market competition,
and possibly with the two measures of market
structure employed here to proxy this force.16 In
particular, a given number of potential competi­
tors could impose a larger impact on incumbent
profitability if actual competition in the market
were less intense. To investigate this possibility,
actual competition/potential competition inter­
action variables are included in several versions
of the performance equations estimated.
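As a concrete illustration of these specifications, the sketch below assembles equation (1) with the quadratic and interaction terms and estimates it by ordinary least squares. The data are simulated stand-ins (the study's sample is not reproduced here), and the coefficients used to generate them are arbitrary; only the construction of the HCPESQ and HCNCBO terms follows the text.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 159                                        # size of the non-MSA Ohio sample
NCBO  = rng.integers(1, 10, n).astype(float)   # actual competitors in the market
HCPE  = rng.integers(0, 12, n).astype(float)   # potential de novo entrants
MSBO  = rng.uniform(5.0, 60.0, n)              # deposit market share (percent)
SDROE = rng.uniform(0.5, 3.0, n)               # risk: std. dev. of ROE

# Simulated average ROE; these generating coefficients are placeholders.
AROE = (14.0 - 0.5 * NCBO - 0.8 * HCPE + 0.1 * NCBO * HCPE
        + 0.04 * MSBO - 0.75 * SDROE + rng.normal(0.0, 2.0, n))

# Equation (1) with the quadratic and interaction terms, as in table 1.
X = np.column_stack([NCBO, HCPE, HCPE ** 2, NCBO * HCPE, MSBO, SDROE])
fit = sm.OLS(AROE, sm.add_constant(X)).fit()
print(fit.params)   # constant, NCBO, HCPE, HCPESQ, HCNCBO, MSBO, SDROE
```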
Our study uses two summary measures of
incumbent market structure: the three-firm
deposit concentration ratio and the number of
actual competitors. Two variants of each of
these measures are employed. One is calculated
using data for commercial banks only. The other
is calculated using data for both banks and sav­
ings and loans, in recognition of the typically
considerable thrift share of deposits in counties
throughout Ohio and their expanding ability to
compete with commercial banks.
The number of holding company organiza­
tions legally permitted to branch de novo into
each market is the measure of potential compe­
tition employed in this analysis. Available data
revealed that holding company affiliates were
responsible for most of the de novo branching
activity in Ohio from 1979 to 1981. We exclude
smaller banks that are unlikely to branch de
novo in order to produce a more precise meas­
ure of potential competition.17

■ 13 The possibility of a nonlinear relationship between measures of market structure and performance is noted in Heggestad (1979), pp. 468-69.

■ 14 For a discussion of the expected relationship between concentration, potential competition, and incumbent profitability, see Call and Keeler (1986), p. 224; Schwartz (1986), pp. 47-48; and Morrison and Winston (1987).

■ 15 See Kalish, Hartzog, and Cassidy (1978). Empirical evidence supporting this view appears in Hannan (1981) and Morrison and Winston (1987).

■ 16 Possible interactions between measures of actual and potential competition are discussed in Hannan (1979), pp. 442-43, and in Morrison and Winston (1987), p. 63.

■ 17 Examination of data on branching in Ohio over the 1979 to 1981 period revealed that holding company affiliates established 61 percent of the total number of de novo branches over this interval. Further, they established 64 percent of those opened in contiguous counties. See Whalen (1981).

Following the approach taken with the con
centration variable, market share for each bank
is defined in two different ways: by its share of
commercial bank deposits in the market and by
its share of bank and savings and loan deposits
in the market. An insignificant coefficient on the
incumbent market structure variable, in con­
junction with a positive, significant coefficient
on the related market share term, is evidence
supporting the efficient structure hypothesis.
The risk measure used in this study is the same
one used by a number of previous researchers:
the standard deviation of return on equity over
the period examined (1979 to 1981). There is
some disagreement about the nature of the rela­
tionship between this variable and profitability.
Heggestad (1979) and Clark ( 1986b) have argued
that the relationship should be positive; Liang
(1987) has suggested that it should be negative. 18
There is empirical evidence in support of both
positions. Because of the uncertainty and
because the precise nature of the relationship
between these two variables is not the primary
focus of this paper, the anticipated sign of the
coefficient on the risk measure is left ambiguous.
The other explanatory variables in the esti­
mated profitability equations are elements of
the vector Z.
These are presumably exogenous
variables that reflect differences in the character­
istics of an individual bank, or economic condi­
tions in its market or its regulatory environment
that could influence its profitability.
Three bank characteristic variables are
employed: a bank size measure, a dummy vari­
able measure of the number of branches oper­
ated, and a dummy variable indicating whether
the bank was a subsidiary of a bank holding
company. Economic conditions in each bank’s
local market are represented by two variables:
average per capita personal income and per cap­
ita personal income growth. Finally, we use a
Federal Reserve System membership dummy to
control for regulation-related cost differentials.
To determine if the estimated relationship
between actual competition, potential competi­
tion, and profitability is materially influenced by
the neglect of possible simultaneity, the profita­
bility equation is also viewed as a structural
equation in a multi-equation simultaneous sys­
tem. Specifically, a two-equation system similar
to that used in Liang (1987) is employed. In this


■ 18 In Liang's model, greater profit variability implies greater expected costs and associated penalties to the bank, resulting in a negative relationship between profit variability and expected profit margins.

system, bank risk is the other endogenous vari­
able. The main difference between her specifi­
cation and the one employed here is the addi­
tion of the potential competition term.
Liang’s structural equation for risk contains five
predetermined variables that do not appear in
the profitability equation discussed above.
These variables are designed to proxy market
uncertainty. They are the standard deviation of market per capita personal income, unexplained market deposit supply, unexplained variation in bank i's loan demand, unexplained variation in bank i's deposit supply, and the covariance of bank i's unexplained loan demand and deposit supply. The precise definition of each of these variables and the reduced-form equations for this model are detailed in the appendix.

II. Sample and
Methodology

Our sample consists of the 159 single-market
banks headquartered in non-MSA Ohio counties
at the end of 1981. Single-market banks are those
with all offices located within their home office
county. This criterion allows their performance to
be related to the characteristics of their particular
local markets. The presumption is that non-MSA
counties approximate local rural banking markets.
The profitability equations are estimated using
two different statistical techniques. Ordinary least
squares regression (OLS) is used to estimate ver­
sions in which risk is viewed as exogenous.
Two-stage least squares (2SLS) is the technique
used to estimate the profitability equation when
it is viewed as part of a simultaneous system.
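The two-stage procedure can be sketched directly: the risk measure is regressed on the predetermined variables (including the uncertainty proxies excluded from the profitability equation), and its fitted values replace the actual values in the second stage. The sketch below implements the two stages with least squares on simulated data; the variable set is abbreviated and the names are stand-ins, so it is a schematic of the method rather than a reproduction of the study's system.

```python
import numpy as np

def ols_beta(X, y):
    """Least-squares coefficients."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(7)
n = 159
exog  = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # stand-ins for MSBO, SIZE, ...
instr = rng.normal(size=(n, 2))   # stand-ins for the uncertainty proxies (MDU, LRISK, ...)
common = rng.normal(size=n)       # shock shared by risk and profit: the source of simultaneity

SDROE = instr @ np.array([0.8, -0.5]) + exog @ np.array([1.0, 0.2, 0.1, 0.3]) + common
AROE  = (exog @ np.array([12.0, 0.5, -0.4, 0.8]) - 0.8 * SDROE
         + 0.5 * common + rng.normal(size=n))

# Stage 1: regress the endogenous risk measure on all predetermined variables.
Z = np.column_stack([exog, instr])
SDROE_hat = Z @ ols_beta(Z, SDROE)

# Stage 2: profitability on the exogenous variables and fitted risk.
beta_2sls = ols_beta(np.column_stack([exog, SDROE_hat]), AROE)
print(beta_2sls[-1])   # 2SLS coefficient on risk; OLS on SDROE itself would be biased
```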

III. Results

Regression results are presented in tables 1 and
2. Only the equations containing measures of
actual market structure and market share calcu­
lated using commercial bank data are included
in the tables. The results were essentially the
same when savings and loans were considered
in the calculation of these variables and there­
fore are not reported.
Table 1 contains versions of the profitability
equation estimated using OLS; table 2 contains
abbreviated results obtained by estimating ver­
sions of the equations in table 1 viewed as part
of a two-equation simultaneous model. The esti­
mation technique is 2SLS. Only the coefficients
and t-statistics for the actual competition, poten­
tial competition, market share, and risk variables
are reported. In general, the overall explanatory

TABLE 1
OLS Versions of Profitability Equations
Dependent Variable: AROE

Variables    (1)          (2)          (3)          (4)
CBO          -0.003253    -0.012121
             (-0.17)      (-0.62)
NCBO                                   0.018071     -0.520431
                                       (0.12)       (-1.80)
MSBO         0.036970     0.043162     0.036723     0.035676
             (1.58)       (1.85)       (1.48)       (1.45)
PCPIGR       0.096151     0.100644     0.094267     0.090787
             (1.26)       (1.33)       (1.20)       (1.17)
PCPI         0.000213     0.000204     0.000210     0.000175
             (0.80)       (0.77)       (0.78)       (0.66)
OD           -0.495045    -0.634474    -0.496847    -0.467602
             (-0.76)      (-0.98)      (-0.76)      (-0.72)
FRM          -0.003253    -0.149174    -0.031221    -0.095712
             (-0.17)      (-0.28)      (-0.06)      (-0.18)
MBHC         2.183394     2.113173     2.186579     2.271347
             (3.14)       (3.06)       (3.10)       (3.26)
SIZE         -0.741219    -0.791260    -0.734965    -0.834644
             (-1.49)      (-1.61)      (-1.46)      (-1.68)
SDROE        -0.757202    -0.737778    -0.757808    -0.750641
             (-8.07)      (-7.91)      (-8.06)      (-8.08)
HCPE         -0.158573    -1.219721    -0.156887    -0.806247
             (-1.29)      (-2.34)      (-1.28)      (-2.46)
HCPESQ                    0.114109
                          (2.10)
HCNCBO                                              0.112784
                                                    (2.16)
INT          14.110421    13.787394    17.385094    16.894396
             (4.31)       (4.97)       (5.42)       (4.83)

F            7.46         7.34         7.41         7.33
RSQ          0.34         0.35         0.34         0.36

NOTE: T-statistics are in parentheses.
SOURCE: Author.

power of the estimated equations is good, given
the size and cross-sectional nature of the sample.
The coefficients on the actual and potential
competition and market share variables are of
primary interest. The signs and statistical signifi­
cance of the other variables in the estimated
equations are of secondary importance here and
will not be discussed.

The coefficient on the concentration variable
is never even marginally significant in any ver­
sion of the equation estimated. 19 The results
were invariant to specification and estimation
techniques. Including savings and loans in the
calculation of this variable and excluding the
market share term did not alter this finding.
When the number of actual competitors is
used as the actual competition proxy, the results
obtained do vary with the specification employed.
The coefficient on the number of actual competi­
tors term is insignificant when a linear specifica­
tion is employed and when an actual competition/
potential competition interaction term is not
included in the estimated equation. However,
when an interaction term is included, the coeffi­
cient on the number of actual competitors varia­
ble becomes negative and significant. This result
holds when savings and loans are included in
this measure and when a simultaneous-equations
estimation technique is employed. The coeffi­
cients are not significant when a quadratic ver­
sion is examined.
The estimated coefficient on the number of
potential competitors variable is negative, but
only marginally significant (that is, 10 percent
level, one-tail test) when a linear specification is
employed and when an actual competition/po­
tential competition interaction term is not
included. However, when this variable is used in
an estimated equation in conjunction with the
number of actual competitors and an interaction
term, the coefficient is negative and significant.
In these equations, the actual competition/
potential competition interaction term, con­
structed by multiplying the number of actual and
potential competitors, exhibits a positive signifi­
cant coefficient. This finding supports the view
that the negative marginal impact of additional
actual competitors declines as the number of
potential competitors increases. Similarly, the
larger the number of actual competitors in a
market, the smaller the negative marginal impact
of additional potential competitors.
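The interpretation follows from differentiating the estimated equation; using the column (4) coefficients from table 1 as a worked example,

$$\frac{\partial\,AROE}{\partial\,NCBO} = \hat{\beta}_{NCBO} + \hat{\beta}_{HCNCBO}\,HCPE \approx -0.520 + 0.113\,HCPE,$$

so the estimated marginal impact of an additional actual competitor is most negative when there are no potential entrants and shrinks toward zero as HCPE rises; the symmetric expression applies to the marginal impact of an additional potential competitor.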
When a quadratic potential competition speci­
fication is employed, the estimated coefficients
on the number of potential competitors term
and the square of this variable are both signifi­
cant. The pattern of signs (negative and positive,
respectively) could reflect mutual awareness
among potential entrants. This result suggests
that the marginal impact of additional potential
competitors is initially negative.

■ 19 A Herfindahl-Hirschman Index of market concentration was also employed in place of the three-firm concentration ratio. The change in the definition of the concentration ratio did not materially impact the results.

However, the size of the negative impact
declines as the number of potential competitors
increases and finally turns positive. The magni­
tudes of the coefficients imply that incumbent
firm profitability is constrained in markets with
five or fewer potential entrants. This finding
supports the notion of a nonlinear relationship
between the number of potential entrants and
the overall probability of entry.
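That threshold can be checked directly from the column (2) coefficients in table 1 (HCPE: -1.2197; HCPESQ: 0.1141). Setting the estimated marginal impact of an additional potential entrant to zero gives

$$\frac{\partial\,AROE}{\partial\,HCPE} = -1.2197 + 2(0.1141)\,HCPE = 0 \quad\Longrightarrow\quad HCPE^{*} = \frac{1.2197}{2(0.1141)} \approx 5.3,$$

consistent with the statement that profitability is constrained only in markets with roughly five or fewer potential entrants.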
Changing the definition of the market struc­
ture and market share variables to include sav­
ings and loans did not alter either the size or the
statistical significance of the coefficients on the
potential competition variables in any of the
specifications examined. Further, a comparison
of each equation in table 1 with its counterpart
in table 2 also demonstrates that the sign and
statistical significance of the coefficients on the
variables of interest in the estimated equations
are not sensitive to the estimation technique
used. 20 This was true for the other exogenous
control variables as well.
TABLE 2
Summary Results
2SLS Versions of Profitability Equations
Dependent Variable: AROE

Variables    (1)          (2)          (3)          (4)
CBO          -0.001529    -0.010870
             (-0.08)      (-0.53)
NCBO                                   0.007936     -0.520134
                                       (0.05)       (-1.79)
MSBO         0.036002     0.042391     0.035831     0.035265
             (1.52)       (1.79)       (1.43)       (1.43)
SDROE-hat    -0.857773    -0.810309    -0.870202    -0.803704
             (-2.95)      (-2.78)      (-3.00)      (-2.80)
HCPE         -0.159169    -1.186875    -0.158830    -0.801043
             (-1.28)      (-2.17)      (-1.30)      (-2.46)
HCPESQ                    0.110422
                          (1.93)
HCNCBO                                              0.111721
                                                    (2.13)

F            1.80         2.33         1.81         2.11
RSQ          0.11         0.15         0.11         0.14

NOTE: T-statistics are in parentheses.
SOURCE: Author.

In general, the coefficient on the market share
variable is positive and at least marginally signifi­
cant (at the 10 percent level, one-tail test) in
every variant of the profitability equation esti­
mated. As with the concentration measure,
somewhat stronger results are obtained when
savings and loan deposits are considered in the
construction of this variable.

IV. Summary and
Conclusions

The results support the notion that non-MSA bank­
ing markets are contestable. That is, we found
bank performance to be systematically related to
proxies designed to measure the intensity of
actual and potential competition. The threat of
entry by potential competitors does appear to
limit incumbent firm profitability, although the
threat of entry and the number of potential
competitors may not be monotonically related.
Incorporating risk into the analysis and consider­
ing possible simultaneity between risk and prof­
itability did not materially alter the results.
Neither proxy for actual competition, however, was found to be consistently related to bank performance.
was not found to be significantly related to the
profitability of banks operating in rural markets
in Ohio in any specification investigated. Only
the number of competitors proxy was found to
be significantly related to bank profitability in
the expected way.
The finding that potential competition has a
significant impact on incumbent performance is
somewhat surprising for several reasons. First,
potential competition is generally expected to be
a weak force in rural banking markets. Second,
researchers have argued that potential entrants
may not significantly impact incumbent prices
and profits in periods immediately after a change
in regulations that affects entry conditions. The
interval analyzed was just such a period. In addi­
tion, the potential entrant variable used in this
study does not include potential nonbank com­
petitors, particularly savings and loans. Thus, the
variable is obviously not a perfect proxy for the
threat of entry in the markets examined.
Further research on the impact of potential
competition in banking markets appears war­
ranted to determine if the observed relationships


■ 20 In addition, to further examine the sensitivity of the results to changes in specification, versions of the profitability equation similar to the one appearing in the four-equation model developed in Clark (1986b) were also estimated. The only change in Clark's specification was the addition of the potential competition measures used in this study. Again, this change in specification did not materially alter the results reported above.

are evident for other samples of banks and in
other time periods. However, the results of this
study suggest that it is unclear whether the con­
solidation taking place in banking in recent years
has substantially lessened competition, given the
simultaneous reductions in barriers to market
entry that have occurred.
For bank regulatory agencies, the results also
imply that the competitive impacts of bank

mergers cannot be reliably determined solely
from a mechanical analysis of changes in actual
market structure. Entry conditions and the exis­
tence of potential competition should also be
considered and used to temper conclusions
drawn from an analysis of merger-related
changes in concentration or in the number of
actual competitors.

APPENDIX

Variable Definitions

AROE: Bank i's annual after-tax return on equity, averaged over the 1979-1981 period.

CBO: Three-firm market concentration ratio, banks only, June 1980.

PCPI: Per capita personal income in the market averaged over the 1979-1981 interval.

PCPIGR: Per capita personal income growth in the market over the 1979-1981 interval.

NCBO: Number of banks operating in the market of bank i, June 1980.

SDPCPI: The standard deviation of market per capita personal income over the 1979-1981 interval.

HCPE: Number of holding company organizations legally permitted to branch de novo into the market.

HCPESQ: The square of HCPE.

HC--: Interaction term; HCPE times various alternative measures of market structure.

MSBO: Bank i's deposit market share, banks only, June 1980.

SDROE: Bank i's standard deviation of annual after-tax return on equity over the 1979-1981 period.

SIZE: Log of total assets of bank i.

OD: Dummy variable equal to one if bank i has at least one branch, otherwise equal to zero.

FRM: Dummy variable equal to one if bank i was a member of the Federal Reserve System, otherwise equal to zero.

MBHC: Dummy variable equal to one if bank i is a holding company subsidiary, otherwise equal to zero.

MDU: Market deposit uncertainty variable equal to the proportion of unexplained variation in market deposits derived from the regression of market deposits on market income over the 1979-1981 interval.

LRISK: Loan uncertainty variable for bank i equal to the proportion of unexplained variation in total loans derived from the regression of total loans on market income over the 1979-1981 interval.

DRISK: Deposit uncertainty variable for bank i equal to the proportion of unexplained variation in total transactions deposits derived from the regression of total transactions deposits on market income over the 1979-1981 interval.

COVLD: Covariance of unexplained loans and deposits for bank i over the 1979-1981 period.

SDROE-hat: Predicted value for SDROE derived from the following first-stage regression, with the relevant actual and potential competition variable(s) added:

SDROE = f(MSBO, SIZE, OD, FRM, MBHC, PCPI, PCPIGR, SDPCPI, MDU, LRISK, DRISK, COVLD).

References

Bain, Joe S. "A Note On Pricing in Monopoly and Oligopoly." American Economic Review, 39 (March 1949): 448-64.

Baumol, William J., John C. Panzar, and Robert D. Willig. Contestable Markets and the Theory of Industry Structure. New York: Harcourt, Brace and Jovanovich, 1982.

Berger, Allen N., and Timothy Hannan. "The Price-Concentration Relationship in Banking." In Merging Commercial and Investment Banking: Proceedings from a Conference on Bank Structure and Competition. Federal Reserve Bank of Chicago (May 1987): 538-9.

Berger, Allen N., Gerald A. Hanweck, and David B. Humphrey. "Competitive Viability in Banking: Scale, Scope, and Product Mix Economies." Board of Governors of the Federal Reserve System, Research Papers in Banking and Financial Economics, February 1986.

Boyle, Stanley E., Richard K. Ford, and Charles G. Martin. "The Application of Industrial Organization Theory to Commercial Banking: A Review and Analysis." University of Arkansas-Little Rock, Department of Economics and Finance, Working Paper, 1987.

Brozen, Yale. Concentration, Mergers, and Public Policy. New York: Macmillan Publishing Company, Inc., 1982.

Burke, J., and S. Rhoades. "Profits and 'Contestability' in Highly Concentrated Banking Markets." Review of Industrial Organization (Fall 1987).

Call, C., and T. Keeler. "Airline Deregulation, Fares, and Market Behavior: Some Empirical Evidence." In A. Daughety, ed., Analytical Studies in Transport Economics. Cambridge: Cambridge University Press, 1986.

Clark, Jeffrey A. "Single-Equation, Multiple-Regression Methodology: Is It an Appropriate Methodology for the Estimation of the Structure-Performance Relationship in Banking?" Journal of Monetary Economics, 18, no. 3 (November 1986a): 295-312.

________. "Market Structure, Risk, and Profitability: The Quiet Life Hypothesis Revisited." Quarterly Review of Economics and Business, 26, no. 1 (Spring 1986b): 45-56.

Evanoff, Douglas D., and Diana L. Fortier. "Reevaluation of the Structure-Conduct-Performance Paradigm in Banking." Journal of Financial Services Research, 1, no. 3 (June 1988): 277-94.

Gilbert, R. "Bank Market Structure and Competition." Journal of Money, Credit, and Banking, 16, no. 4 (November 1984).

Graddy, Duane B., and Reuben Kyle III. "The Simultaneity of Bank Decision-Making, Market Structure, and Bank Performance." Journal of Finance, 34, no. 1 (March 1979): 1-18.

Graham, David R., Daniel P. Kaplan, and David S. Sibley. "Efficiency and Competition in the Airline Industry." Bell Journal of Economics, 14, no. 1 (Spring 1983): 118-38.

Hannan, Timothy H. "Mutual Awareness Among Potential Entrants: An Empirical Examination." Southern Economic Journal, 47, no. 3 (January 1981): 805-8.

________. "Limit Pricing and the Banking Industry." Journal of Money, Credit, and Banking, 11, no. 4 (November 1979): 438-46.

Heggestad, Arnold A. "Market Structure, Competition, and Performance in Financial Industries: A Survey of Banking Studies." In Franklin R. Edwards, ed., Issues in Financial Regulation. New York: McGraw Hill Book Company, 1979.

________. "Market Structure, Risk, and Profitability in Commercial Banking." Journal of Finance, 32, no. 4 (September 1977): 1207-16.

Kalish, Lionel, Jerry Hartzog, and Henry Cassidy. "The Threat of Entry With Mutually Aware Potential Entrants: Comment." Journal of Political Economy, 86, no. 1 (February 1978): 147-50.

Liang, Nellie. "Bank Profitability and Risk." Unpublished paper, Board of Governors of the Federal Reserve System, November 1987.

Morrison, Steven A., and Clifford Winston. "Empirical Implications and Tests of the Contestability Hypothesis." Journal of Law and Economics, 30 (April 1987): 53-66.

Osborne, Dale K., and Jeanne Wendel. "Research on Structure, Conduct, and Performance in Banking, 1964-1979." Oklahoma State University, College of Business Administration, Working Paper 83-8, July 1983.

Rhoades, Stephen A. "Structure-Performance Studies in Banking: An Updated Summary and Evaluation." Board of Governors of the Federal Reserve System, Staff Study No. 119, August 1982.

________, and Roger D. Rutz. "Market Power and Firm Risk: A Test of the 'Quiet Life' Hypothesis." Journal of Monetary Economics, 9 (January 1982): 73-86.

Schwartz, Marius. "The Nature and Scope of Contestability Theory." Oxford Economic Papers, 38, Supplement (November 1986): 37-57.

Smirlock, Michael. "Evidence on the (Non)Relationship Between Concentration and Profitability in Banking." Journal of Money, Credit, and Banking, 17, no. 1 (February 1985).

Whalen, Gary. "Concentration and Profitability in Non-MSA Banking Markets." Federal Reserve Bank of Cleveland, Economic Review (Quarter 1, 1987): 2-14.

________. "Bank Expansion in Ohio." Federal Reserve Bank of Cleveland, Economic Commentary, April 6, 1981.

Getting the Noise Out: Filtering Early GNP Estimates

by John Scadding

John Scadding was a visiting scholar at the Federal Reserve Bank of Cleveland when he wrote this paper. Currently, he is an economist with the California Public Utilities Commission. The author would like to thank Stephen McNees for helpful comments.

Introduction

Real, or inflation-adjusted, gross national product
(GNP) is the most inclusive measure of the
nation’s economic activity. As such, it is probably
the most closely monitored economic barometer
for the information it contains about the eco­
nomic well-being of the economy and about the
economy’s prospects. It is the central focus of
most macroeconomic models and their forecasts,
and it plays a decisive role in shaping monetary
and fiscal policy decisions.
Given the critical role that GNP plays, it is not
surprising that the accuracy of GNP estimates is
crucial if informed decisions are to be made by
both private agents and government policymak­
ers. There is a trade-off, however, between the estimates' accuracy and their timeliness. Delays in reporting and revising data as more inclusive information becomes available mean that later estimates will typically be more accurate than earlier ones; but waiting longer entails forgoing the opportunity to take action sooner, when that may be a critical factor.

In the United States, the first official estimate for a particular quarter's GNP is released by the U.S. Department of Commerce approximately three weeks after that quarter has ended. Much of the data needed to construct GNP are still not available at that point, even though the quarter

has ended. The missing data therefore must be
estimated by the U.S. Department of Commerce’s
Bureau of Economic Analysis (BEA), which is
responsible for compiling the official estimate of
GNP. This first estimate is followed in relatively
rapid succession by two additional estimates,
one and two months after the initial number is
released. Thereafter, the delays in revisions
become much longer. Estimates are usually sub­
ject to three further annual revisions. After that,
an estimate is usually subject to further so-called
benchmark revisions every five years as data
from the Bureau of Census’ quinquennial eco­
nomic census are incorporated. At each stage,
source data are incorporated that had not been
available previously, and revisions to previous
data are incorporated as well.1
It is clear from this description that there is
never a final estimate of GNP that could be
equated with the "truth." Nevertheless, the three early, or provisional, estimates are obviously distinct from the later ones in terms of their timeliness. Although based on incomplete and preliminary information, the provisional estimates have the advantage that they are available

■ 1 Carson (1987) provides a comprehensive overview of the source data and estimation methods for constructing the different GNP estimates. See also Young (1987).

much sooner than the later, more comprehensive,
and presumably more accurate numbers. It is rele­
vant, therefore, to examine their accuracy in pre­
dicting the later numbers. As Allan Young, direc­
tor of the Bureau of Economic Analysis, noted in
a recent comprehensive survey of the properties
of GNP estimates: “Much of the concern with the
reliability of GNP comes down to whether the
early . . . quarterly estimates . . . provide a useful
indicator of the estimates . . . When complete
and final source data are available.” (Young
[1987], p. 18)
One important strand of the literature examining this question has concluded that the early numbers can be viewed as rational forecasts of the actual numbers. The term rational is used in the sense that the differences between a final GNP growth number and its corresponding preliminary estimates are uncorrelated with the preliminary numbers themselves (Mankiw and Shapiro [1986]; Walsh [1985]). On the face of it, this is a surprising result. It denies the intuitively appealing, and perhaps prevalent, view that if a preliminary estimate showed large positive growth for real GNP in a quarter, for example, it would be more likely than not that later estimates would be revised down; in other words, that the final GNP number would be smaller than its preliminary estimate. And, similarly, a large (in absolute value) negative preliminary estimate would be revised upward subsequently.

In a preliminary analysis reexamining this question, Scadding (1987) concluded that the statistical test used in the analyses mentioned above could not discriminate very well between the rational forecasts hypothesis and the alternative view that subsequent revisions to the GNP numbers would be correlated with the preliminary estimates. This alternative view implies that the early GNP numbers are estimates of the final number, but estimates that are contaminated with error.

If this alternative view is correct, then it is possible in principle to make estimates of the error in the preliminary numbers and to adjust the latter to remove the error; in other words, to filter out the "noise." This paper investigates one method of doing this. The results suggest there is scope for adjusting the provisional GNP growth rate numbers to make them better predictors of what the final numbers will turn out to be.

TABLE 1
Final Revisions to Real GNP Growth and Components, 1974:IIQ-1984:IQ

                      Final Revisions      Estimated Observation Error      Residual Forecast Error
                      Mean     Variance    Mean      Variance               Mean     Variance
Final minus 15-day    0.630    4.087       -0.630    0.764                  0.00     3.323
Final minus 45-day    0.413    2.876       -0.413    0.694                  0.00     2.182
Final minus 75-day    0.205    2.742       -0.205    0.890                  0.00     1.852

SOURCE: Author.

I. The Data

Table 1 has estimates of the final revisions for real GNP growth; that is, the difference between the final estimate and the three provisional estimates. There are three final revisions, corresponding to the difference between the final numbers and each of the three provisional numbers. For the sample period used in this paper (1974-1984), the early estimates came out 15 days, 45 days, and 75 days after the quarter ended, and the usual nomenclature is to refer to them as the 15-day estimate, and so on. Correspondingly, there is the 15-day final revision, which is the difference between the final number and the 15-day estimate, and so on. I follow the usual practice and define the "final" number as the currently available final number as of the quarter in question. Thus, final estimates in the earlier part of the sample will have been through more revisions than those later in the sample.2
For the 15-day estimate of GNP, many of the
source data are not complete and are subject to
revision. The data available for this estimate are
monthly data, like retail sales, manufacturers’
shipments of machinery and equipment, and
merchandise trade figures. Some of these data,
like retail sales, are based on surveys, and typi­
cally are revised substantially. In addition, some
of the monthly source data are not available for
all three months of the quarter. For example,
only one to two months of data are available for
estimating consumer spending on services,
which is about one-half of total consumer spend­
ing. And there are no monthly data at all for

■ 2 The data are from a study prepared by the Bureau of Economic Analysis and are the data used by Mankiw and Shapiro (1986), Mork (1987), and Walsh (1985). The data were adjusted to abstract from the effects of definitional changes and the change in the base year for calculating constant-dollar GNP. See Young (1987), p. 25. I am indebted to Professor Mork for providing me with a copy of these data.

about 40 percent of spending on services. This
component, therefore, is estimated by the
Department of Commerce, either by extrapolat­
ing by related series or by judgmental projection.
The succeeding 45- and 75-day estimates incorporate new monthly data unavailable for the 15-day estimate, as well as revisions to the monthly data that were included in the 15-day number. These two estimates also include new information available only on a quarterly basis: domestic corporate profits, balance of payments figures, and data on financial assets from the Federal Reserve Board's flow of funds accounts. The latter two sources are incorporated in the 75-day estimate only (Carson [1987], p. 107).
As table 1 shows, the final revisions are not
trivial. On average for the sample they are posi­
tive, suggesting a systematic tendency for the
preliminary numbers to understate the final
estimates, a phenomenon that has been noted
elsewhere (Mork [1987]). The deviations implied by the sample variance estimates reported in table 1 are large when measured against the mean growth of real GNP for the period, which was 2.9 percent. Thus, plus or minus one standard error about a preliminary estimate equal to this trend growth translates into an economy that, with equal probability, could be enjoying near boom-like conditions or behaving as if it were close to recession.
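To make the magnitude concrete, the 15-day final-revision variance of 4.087 reported in table 1 implies a standard error of about

$$\sqrt{4.087} \approx 2.0 \text{ percentage points}, \qquad 2.9 \pm 2.0 \Longrightarrow [0.9,\ 4.9] \text{ percent},$$

a range wide enough to span near-recession and boom conditions.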

II. The Nature of the Provisional GNP Estimates

As discussed briefly in the introduction, one possible way of thinking of the early GNP growth numbers is as forecasts of what the final estimate will turn out to be. Thus, suppose $X_t^*$ is the final estimate of GNP growth for quarter $t$; that estimate of course will not be made until some time after quarter $t$. In the meantime, however, a provisional estimate (in fact three), call it $X_t$, will be available soon after quarter $t$ has ended. This provisional estimate can be thought of as a forecast of what $X_t^*$ will be. From that perspective, it is natural to ask whether $X_t$ is a good forecast in the sense that, at a minimum, it is unbiased and is uncorrelated with the forecast error, which is equal to the final revision, $X_t^* - X_t$. If this description fits $X_t$, then

(1)  $X_t^* = X_t + z_t$,

where $z_t$ is a zero-mean, serially uncorrelated forecast error (white noise) that is uncorrelated with $X_t$.

Walsh (1985) defines these to be the properties of a rational forecast. The competing characterization of $X_t$ is that it is an early observation or "reading" of what $X_t^*$ will be, but an observation measured with error. Thus,

(2)  $X_t = X_t^* + u_t$,

where $u_t$ is also white noise, and uncorrelated with $X_t^*$ in this case. Note that this characterization implies that the final revision is correlated with the provisional estimate; in other words:

(3)  $E[(X_t^* - X_t)X_t] = -E(u_t^2) = -\sigma_u^2$,

where $\sigma_u^2$ is the variance of the observation error $u_t$.
The evidence on which characterization better
describes the nature of the provisional estimates
is decidedly mixed. Mankiw and Shapiro (1986)
adduce evidence in favor of the position that
preliminary numbers are rational forecasts, on
the criteria just described. However, I have argued
in a technical companion piece to this paper
(Scadding [1987]) that their test is likely to have
little power. They themselves raise this possibil­
ity because of the apparent contradiction of their
conclusion with evidence elsewhere that two
important data sources for the GNP estimates—
retail sales and inventories— have significant
measurement errors in them (Howrey [1984]
and Conrad and Corrado [1979]).
Walsh, using a slightly different sample from
Mankiw and Shapiro, finds corroborating evi­
dence for their result, but this conclusion is
compromised by his additional finding that the
provisional estimates are inefficient forecasts. In
addition, Mork, using different estimation tech­
niques from the other studies, found evidence
that the provisional estimates were biased
downwards, and that the final revisions were
correlated with previous-quarter GNP growth
and a forecast of GNP growth from a publicly
available survey of private forecasters.

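The two characterizations have a simple regression counterpart: under (1) the final revision $X_t^* - X_t$ is uncorrelated with $X_t$, while under (2), by equation (3), regressing the revision on the provisional estimate yields a negative slope. A minimal sketch of that check on simulated series (all numbers here are invented for illustration):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T = 44   # about the length of a 1974-1984 quarterly sample

# Case A, eq. (1): the provisional number is a rational forecast.
prov_A  = 2.9 + rng.normal(0.0, 2.0, T)
final_A = prov_A + rng.normal(0.0, 1.5, T)     # z_t uncorrelated with X_t

# Case B, eq. (2): the provisional number is a noisy observation.
final_B = 2.9 + rng.normal(0.0, 2.0, T)
prov_B  = final_B + rng.normal(0.0, 1.5, T)    # u_t uncorrelated with X*_t

for label, prov, final in [("rational forecast ", prov_A, final_A),
                           ("noisy observation ", prov_B, final_B)]:
    revision = final - prov
    slope = sm.OLS(revision, sm.add_constant(prov)).fit().params[1]
    print(label, f"slope of revision on provisional = {slope:+.3f}")
# Expect a slope near zero in case A and near -var(u)/var(X_t) in case B.
```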

III. Filtering the Early Data

I have argued elsewhere (Scadding [1987]) that
Walsh’s evidence of inefficient forecasting is
equally compatible with the view that provi­
sional GNP numbers are observations rather than
forecasts, with the observation errors in the three
provisional numbers being sequentially corre­
lated. Howrey (1984) found this to be a useful
characterization of the inventory investment
component of GNP. In my earlier paper, I

m

devised a test for discriminating between an inef­
ficient forecasts model and a serially correlated
measurement error model based on restrictions
on the variance-covariance matrix of the final
revisions. The results of that test suggest that the
provisional estimates of real GNP growth contain
measurement error.
The purpose of this paper is to estimate the
amount of observation (measurement) error in
the provisional GNP growth numbers and sub­
tract that error to obtain modified, or filtered,
provisional GNP estimates that have the proper­
ties of a rational forecast. Let $\hat{X}_t^*$ be the filtered estimate; then the estimated measurement and forecast errors are defined by

(4a)  $\hat{u}_t = X_t - \hat{X}_t^*$  and
(4b)  $\hat{z}_t = X_t^* - \hat{X}_t^*$.

The definitions (4a and 4b) implicitly define the decomposition of a final revision, $X_t^* - X_t$, into its measurement and forecast error components:

(5)  $X_t^* - X_t = \hat{z}_t - \hat{u}_t$.

Nonrecursive Kalman filtering, described below, is used to specify equations for estimating $\hat{X}_t^*$. Least-squares estimation of these equations yields an $\hat{X}_t^*$ series with the desired forecasting properties:

(6a)  $E(X_t^* - \hat{X}_t^*) = 0$  and
(6b)  $E[(X_t^* - \hat{X}_t^*)\hat{X}_t^*] = 0$.

As well, the estimated measurement and forecast errors are orthogonal to each other:

(6c)  $E(\hat{u}_t \hat{z}_t) = 0$.

Summary statistics for the final revisions and the estimated measurement and forecast errors are shown in table 1. Clearly, the filtering improves the forecasting precision of the provisional numbers. The sample variance of the forecasting error after filtering is on the order of 25 to 30 percent lower than the variance of the unfiltered final revision. Nevertheless, the residual forecast variance is still quite large.

The improvement in forecasting precision would appear to be based on two factors. First, the filtered estimates are derived by combining the provisional estimates with a simple time-series forecast of GNP growth. Mork has noted that the prior quarter's GNP has information about the size of the final revision in the current quarter. The time-series forecast presumably is picking up this information. In addition, filtering improves the precision of forecasting by exploiting the fact that part of the final revision is measurement error and therefore can be forecast from the provisional estimates.

Note the uniformly negative means of the estimated observation errors, indicating a systematic tendency of the provisional GNP estimates to underpredict the final numbers. This tendency has been noted by Mork, who ascribes it to concern by the Department of Commerce that the provisional estimates not be seen as being too optimistic and therefore serving some political agenda.

The presence of serially correlated measurement errors makes it relatively easy to predict interim revisions (in other words, from the 15-day to the 45-day estimate, and so on) compared to final revisions. As we shall see, the standard errors of the regression predicting the provisional estimates are about 50 percent lower than the standard errors of the equations predicting the final GNP estimates. Thus, the methodology outlined here provides forecasters with a relatively accurate way of forecasting subsequent preliminary estimates. More generally, this result suggests that the provisional estimates are more like each other than they are like the later estimates, a point that has been made by McNees (1986).

Many economists presumably would be offended by the notion that any attention should be paid to forecasting the provisional estimates themselves when what obviously matters is getting a good estimate of the final or "true" number. However, that is "obvious" only to the extent that the Federal Reserve or private agents, in reacting to new provisional estimates, discount the measurement error in them, an assumption that is not obvious on its face at least. It is customary to test market forecasts of GNP by their ability to predict final GNP; it would be interesting to inquire whether they do a better job of forecasting provisional GNP estimates.

A final observation suggested by this paper's results is that the frequent practice by forecasters of discarding their GNP growth forecast for a quarter when the first provisional estimate for that quarter becomes available probably is not efficient. The filtering technique used in this paper combines the provisional estimates of GNP growth with a forecast from a simple time-series model. The results suggest that the forecast still has information about final GNP growth even after the preliminary estimates become available. As McNees has noted: "...the distinction between forecasts and 'actual' data is often exaggerated. Both are estimates based on partial, incomplete information." (McNees [1986], p. 3)

IV. The Filtering Framework

The general idea of filtering data is easily sketched out. Suppose the variable we are interested in, $X^*_t$ (which in our case is the final estimate of GNP growth), evolves over time according to the law of motion

(7) $X^*_t = \phi X^*_{t-1} + w_t$,

where $\phi$ is a fixed parameter and $w$ is a random, serially uncorrelated term with zero mean and constant variance (white noise).
We cannot observe $X^*$ directly but have measurements of it, $X$, that involve error (here $X$ would be a provisional estimate of GNP growth):

(8) $X_t = hX^*_t + u_t$,

where $h$ is a fixed parameter and $u$ is also white noise.
The Kalman filter optimally weights the forecast of $X^*$ from equation (7) with the observation to form the best linear unbiased predictor of $X^*$, called the filtered value:

(9) $\hat{X}^*_t = \bar{X}_t + K_t(X_t - h\bar{X}_t)$,

where $\bar{X}$ is the forecast and $\hat{X}^*$ is the filtered value. The weighting coefficient, $K_t$, is called the Kalman gain, and is a function of the variances of $w$ and $u$ and of $h$. The filtered value is used to update the forecast. Using (7), this new forecast is combined with the next observation to calculate the next filtered value:

(10a) $\bar{X}_{t+1} = \phi\hat{X}^*_t$ and

(10b) $\hat{X}^*_{t+1} = \bar{X}_{t+1} + K_{t+1}(X_{t+1} - h\bar{X}_{t+1})$.
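To fix ideas, here is a minimal sketch of the scalar filter defined by equations (7) through (10). It is illustrative only and is not the paper's estimation code: the parameter values ($\phi$, $h$, and the variances $q$ and $r$ of $w$ and $u$) are hypothetical, and the gain is computed from the standard scalar variance recursion rather than estimated.

```python
# Minimal sketch of the scalar Kalman filter in equations (7)-(10):
#   state:        X*_t = phi * X*_{t-1} + w_t,  w_t ~ (0, q)
#   measurement:  X_t  = h * X*_t + u_t,        u_t ~ (0, r)
# All parameter values below are hypothetical, for illustration only.

def kalman_filter(observations, phi=1.0, h=1.0, q=1.0, r=0.5,
                  x0=0.0, p0=1.0):
    """Return the filtered values and the gains, one per observation."""
    x_fore, p_fore = x0, p0          # forecast and its variance
    filtered, gains = [], []
    for x_obs in observations:
        # Kalman gain: weight placed on the measurement innovation
        k = p_fore * h / (h * h * p_fore + r)
        # Equation (9): update the forecast with the innovation
        x_filt = x_fore + k * (x_obs - h * x_fore)
        p_filt = (1.0 - k * h) * p_fore
        filtered.append(x_filt)
        gains.append(k)
        # Equations (10a)-(10b): propagate to the next period
        x_fore = phi * x_filt
        p_fore = phi * phi * p_filt + q
    return filtered, gains

# Example: filter a short series of noisy growth-rate observations.
print(kalman_filter([2.1, 1.4, 3.0, 2.2]))
```

Note that as the measurement-error variance r grows relative to q, the gain falls and the filter leans more heavily on the forecast, which is exactly the economics of discounting noisy provisional estimates.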

Two modifications are necessary to apply this algorithm to the problem at hand. First, the three provisional estimates of GNP growth are repeated observations on the same final estimate. Thus, within the quarter, the law of motion is

(12a) $X_{t/1} = X^*_t + u_{1t}$,

(12b) $X_{t/2} = X^*_t + u_{2t}$, and

(12c) $X_{t/3} = X^*_t + u_{3t}$,

where the $X_{t/n}$'s are the provisional estimates and the $u_{nt}$'s are the corresponding measurement errors, for $n$ = 1, 2, 3: $n$ = 1 refers to the 15-day estimate, $n$ = 2 to the 45-day estimate, and $n$ = 3 to the 75-day estimate. Thus, within the quarter, the $\phi$ in equation (7) is unity, as is $h$ in (8), while the intraquarter $w$'s are uniformly zero.

The other modification follows from the fact that preliminary estimation suggests that the $u$'s in (12) are sequentially correlated. This serial correlation structure is shown in table 2. The filtering framework is easily adapted to this circumstance by expressing the observation variables in quasi-difference form, $X_{t/2} - a_{12}X_{t/1}$ and $X_{t/3} - a_{23}X_{t/2}$, where the $a$'s are the respective serial correlation coefficients of the errors from table 2 (see Bryson and Ho [1969], pp. 400-405). The modified set of filtering equations becomes

(13a) $\hat{X}^*_{t/1} = X_{t/0} + K_1(X_{t/1} - X_{t/0})$,

(13b) $\hat{X}^*_{t/2} = \hat{X}^*_{t/1} + K_2[(X_{t/2} - a_{12}X_{t/1}) - (1 - a_{12})\hat{X}^*_{t/1}]$, and

(13c) $\hat{X}^*_{t/3} = \hat{X}^*_{t/2} + K_3[(X_{t/3} - a_{23}X_{t/2}) - (1 - a_{23})\hat{X}^*_{t/2}]$.
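As a concrete illustration of the within-quarter updating, the sketch below applies (13a)-(13c) for a single quarter, taking the gains and serial correlation coefficients as given. The default values are the estimates reported later in tables 2 and 4; the function itself is an illustrative rendering, not the author's program.

```python
# Sketch of the within-quarter updating equations (13a)-(13c).
# Default gains are the estimated K's (table 4); the a's are the serial
# correlation coefficients of the measurement errors (table 2).

def filter_quarter(x0, x1, x2, x3,
                   k1=0.774, k2=1.483, k3=0.784,
                   a12=0.784, a23=0.932):
    """x0: time-series forecast; x1, x2, x3: the 15-, 45-, and 75-day
    provisional estimates. Returns the three filtered estimates."""
    # (13a): blend the forecast with the 15-day estimate
    xf1 = x0 + k1 * (x1 - x0)
    # (13b): update with the quasi-differenced 45-day observation
    xf2 = xf1 + k2 * ((x2 - a12 * x1) - (1 - a12) * xf1)
    # (13c): update with the quasi-differenced 75-day observation
    xf3 = xf2 + k3 * ((x3 - a23 * x2) - (1 - a23) * xf2)
    return xf1, xf2, xf3

# Example: a forecast of 2.5 percent growth and provisional estimates
# of 2.0, 2.3, and 2.4 percent.
print(filter_quarter(2.5, 2.0, 2.3, 2.4))
```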

The initial forecast $X_{t/0}$ is taken from a simple time-series model for real GNP growth. Given the forecast and estimates of the $K$'s and $a$'s, one could then calculate the filtered estimates directly. The approach taken here, however, is to estimate the $K$'s using ordinary least squares to produce a set of estimated measurement errors and residual forecast errors that are uncorrelated.³ Thus, the estimation equations corresponding to (13) are

(14a) $X^*_t = X_{t/0} + K_1(X_{t/1} - X_{t/0}) + z_{1t}$,

(14b) $X_{t/2} = a_{12}X_{t/1} + (1 - a_{12})\hat{X}^*_{t/1} + v_{2t}$,

(14c) $X^*_t = \hat{X}^*_{t/1} + K_2 v_{2t} + z_{2t}$,

(14d) $X_{t/3} = a_{23}X_{t/2} + (1 - a_{23})\hat{X}^*_{t/2} + v_{3t}$, and

(14e) $X^*_t = \hat{X}^*_{t/2} + K_3 v_{3t} + z_{3t}$.

■ 3 Conrad and Corrado (1979) and Howrey (1984) have used the Kalman framework for analyzing retail sales and inventory investment data, respectively.

To complete (14) we append the set of definitions of the filtered estimates of GNP growth:

(14a') $\hat{X}^*_{t/1} = X_{t/0} + \hat{K}_1(X_{t/1} - X_{t/0})$,

(14b') $\hat{X}^*_{t/2} = \hat{X}^*_{t/1} + \hat{K}_2 v_{2t}$, and

(14c') $\hat{X}^*_{t/3} = \hat{X}^*_{t/2} + \hat{K}_3 v_{3t}$.

The estimation of (14) proceeds sequentially. First, (14a) is estimated by regressing final GNP growth on the time-series forecast, $X_{t/0}$, and the 15-day provisional estimate, $X_{t/1}$. The residual, $z_{1t}$, is the forecast error for the filtered 15-day estimate. The first filtered estimate of final GNP growth, $\hat{X}^*_{t/1}$, is calculated using (14a'). The corresponding measurement error in the 15-day provisional GNP growth rate is estimated as

(15) $u_{1t} = X_{t/1} - \hat{X}^*_{t/1} = (1 - \hat{K}_1)(X_{t/1} - X_{t/0})$,

which by construction is uncorrelated with the forecast error.

The next step is to calculate the innovation in the measurement error in the 45-day provisional GNP number. The correlation structure between the measurement errors in the 15-day and 45-day numbers is

(16) $X_{t/2} - X^*_t = a_{12}(X_{t/1} - X^*_t) + v_{2t}$,

where $v_{2t}$ is the innovation in the measurement error. Rearranging (16) and substituting $\hat{X}^*_{t/1}$ for $X^*_t$ yields (14b), which is then estimated by regressing the 45-day provisional GNP number on the 15-day estimate and the first filtered estimate of GNP growth.

The innovation in the 45-day number then is used to update the filtered estimate of final GNP growth by regressing the final GNP number on the first filtered estimate and the measurement innovation in the 45-day number (equation 14c). The residual $z_{2t}$ provides an estimate of the forecast error in the 45-day number. The same sequence of estimations is performed to calculate the new filtered estimate of final GNP conditional on having the 75-day provisional GNP estimate, and its corresponding forecast error.
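The first two rounds of that sequence can be written out in a few lines of regression code. The sketch below is schematic, not the paper's actual program: the array names (x_final, x0, x1, x2) are hypothetical stand-ins for final GNP growth, the time-series forecast, and the 15- and 45-day estimates, and numpy's least-squares routine stands in for whatever regression package was actually used.

```python
# Schematic of the sequential least-squares estimation of (14),
# mirroring the order (14a), (14a'), (14b), (14c). Illustrative only.
import numpy as np

def ols(y, *regressors):
    """Regress y on a constant plus the given regressors; return the
    coefficient vector and the residuals."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

def first_two_rounds(x_final, x0, x1, x2):
    # (14a): final growth on the forecast and the 15-day estimate;
    # the residual z1 is the forecast error of the filtered estimate.
    (_, _, k1), z1 = ols(x_final, x0, x1)
    # (14a'): first filtered estimate of final GNP growth.
    xf1 = x0 + k1 * (x1 - x0)
    # (14b): the 45-day number on the 15-day number and the filtered
    # estimate; the residual v2 is the measurement innovation.
    _, v2 = ols(x2, x1, xf1)
    # (14c): final growth on the filtered estimate and the innovation;
    # the slope on v2 estimates the Kalman gain K2.
    (_, _, k2), z2 = ols(x_final, xf1, v2)
    # (14b'): second filtered estimate of final GNP growth.
    return xf1 + k2 * v2, z1, z2
```

The 75-day round repeats the same two-step pattern, producing $v_{3t}$, $\hat{K}_3$, and the third filtered estimate.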

TABLE 2
Correlation Structure of Final Revisions

                                                         Standard Error
                                                         of Estimate

$X^* - X_{t/1} = -0.698 + v_{1t}$                            2.02
                 (-2.18)

$X^* - X_{t/2} = 0.077 + 0.784(X^* - X_{t/1}) + v_{2t}$      0.503
                 (0.76)  (16.36)

$X^* - X_{t/3} = 0.230 + 0.932(X^* - X_{t/2}) + v_{3t}$      0.503
                 (2.79)  (19.61)

SOURCE: Author.

V. Estimation Results

The results of estimating equations (14a)-(14e) are shown in table 3. Almost uniformly, with one important exception discussed later, the estimated coefficients in table 3 are statistically different from zero at the 95 percent confidence level. Perhaps more importantly, again with the same exception just noted, the restrictions implied by equations (14) are all met. Thus, for example, equation (14a) implies that the coefficients on the time-series forecast and the 15-day provisional estimate sum to one. In other words, the 15-day filtered estimate of real GNP growth is a simple weighted average of the forecast and the 15-day GNP number. The last column reports the F-test statistic, and it clearly cannot reject the hypothesis, at the 95 percent confidence level, that the coefficients sum to unity.

Similarly, the restrictions in equations (14b) and (14d) that the coefficients sum to unity and that the coefficients on the lagged dependent variables equal the estimated correlation coefficients from table 2 are also met. In the case of equation (14d), the coefficient on $\hat{X}^*_{t/2}$ was not itself statistically significant, even though the joint hypothesis could not be rejected. When the coefficient on $X_{t/2}$ was constrained to be 0.932 (its a priori value as indicated by table 2), the coefficient on $\hat{X}^*_{t/2}$ became significant, which is the result reported in (14d).

Only equation (14e) gave any significant trouble. In this case, the estimated $K_3$ was not significantly different from zero, indicating that the 75-day estimate did not have any additional information about the final GNP number that was not already contained in the two preceding provisional estimates and the time-series forecast. This last result stands in sharp contrast to the information provided by the first two provisional GNP numbers about final GNP.

TABLE 3
Estimated Filtering Equations, 1974:IIQ-1984:IQ
(T-statistics in parentheses)

(14a) $X^*_t = 0.336 + 0.291 X_{t/0} + 0.774 X_{t/1} + z_{1t}$
              (0.89)  (2.61)         (8.45)
      Standard error of estimate = 1.823; F-statistic = 1.479a

(14b) $X_{t/2} = -0.047 + 0.81 X_{t/1} + 0.236 \hat{X}^*_{t/1} + v_{2t}$
                (-0.29)  (7.33)        (1.99)
      Standard error of estimate = 0.714; F-statistic = 2.247b

(14c) $X^*_t = -0.077 + 1.020 \hat{X}^*_{t/1} + 1.483 v_{2t} + z_{2t}$
              (-0.28)  (19.61)               (4.30)
      Standard error of estimate = 1.477; F-statistic = 2.378c

(14d) $X_{t/3} = 0.235 + 0.932 X_{t/2} + 0.098 \hat{X}^*_{t/2} + v_{3t}$
                (2.54)                  (5.95)
      Standard error of estimate = 0.560; F-statistic = 0.578d

(14e) $X^*_t = 0.083 + 0.974 \hat{X}^*_{t/2} + 0.784 v_{3t} + z_{3t}$
              (0.04)  (21.86)               (1.77)
      Standard error of estimate = 1.361; F-statistic = 0.339e

Addendum: time-series forecasting model
$X^*_t = 0.511 + 0.828 X^*_{t-1} + w_t - 0.415 w_{t-1}$
        (2.19)  (7.87)                  (2.40)
Standard error of estimate = 3.323

a. Test that the sum of coefficients is unity.
b. Test that the coefficient on $X_{t/1}$ is 0.784 and that the sum of coefficients is unity.
c. Test that the coefficient on $\hat{X}^*_{t/1}$ is unity and that $K_2$ = 1.234.
d. Test that the coefficient on $X_{t/2}$ is 0.932 and that the coefficients sum to unity. The equation was reestimated with the coefficient on $X_{t/2}$ restricted to 0.932; the results are the ones reported in (14d).
e. Test that the coefficient on $\hat{X}^*_{t/2}$ is unity; the restriction that the coefficient was unity and that the coefficient on $v_{3t}$ was nonzero was rejected at the 5 percent confidence level.
SOURCE: Author.

The estimated Kalman gains $\hat{K}_1$ and $\hat{K}_2$ in (14a) and (14c) are 0.774 and 1.483, respectively, and both are statistically different from zero at the 95 percent confidence level. The latter number may seem too high; presumably it should be between zero and one. However, with serial correlation in the measurement errors, the constraint is that $a_{12}K_2$ must be less than one. This constraint is satisfied by the calculated theoretical Kalman gains shown in table 4.⁴ Clearly, the estimated $K_2$ of 1.483 is not statistically different from the theoretical value of 1.234, given the size of its standard error, a conclusion that is substantiated by the results of the F-test in table 3.

The nonzero coefficient on the time-series forecast variable, $X_{t/0}$, suggests that the provisional estimates do not fully incorporate information about final GNP contained in the previous quarter's estimates.⁵ This suggests perhaps a tendency on the part of the BEA to be conservative in extrapolating trends in the GNP data. It also suggests that the typical practice in forecasting and policy analysis of discarding forecasts for the immediately prior quarter once the provisional estimates become available may be inefficient.

The fact that the 75-day estimate does not appear to add any additional information about final GNP is interesting, given that it is the first estimate to incorporate quarterly data. The high degree of serial correlation between the 45-day and 75-day provisional estimates shown in (14d), with relatively low variance in the residual, indicates, however, that the two estimates are not very different from each other despite the addition of the quarterly information. Indeed, to an important extent this is true of all three provisional estimates: they provide more information about each other than they do about the final GNP number.

■ 4 The similarity of the "theoretical" and estimated Kalman gains suggests that there would be no advantage from calculating the filtered estimator using the theoretical numbers. There does not appear to be any clear consensus on whether regression-based weighting of forecasts is preferable to sample-estimated optimal weighting. See, for example, Lupoletti and Webb (1986), pp. 279-281.

■ 5 The time-series forecasts used only past data available at the time the new provisional estimate first became available, not past values of the final GNP growth estimate.

TABLE 4
Calculated and Estimated Kalman Gains

          Calculated    Estimated    Standard Error of
                                     Estimated Gains
$K_1$       0.719         0.774          (0.092)
$K_2$       1.234         1.483          (0.345)
$K_3$       0.679         0.784          (0.443)

SOURCE: Author.

VI. Conclusion
A recent and interesting analysis of the early GNP

estimates has concluded that “they behave
neither as efficient forecasts nor as observations
measured with error” (Mork [1987], p. 173). The
purpose of this paper has been to filter the early
GNP numbers, to remove the measurement error,
and to produce more accurate predictions of the
final GNP growth estimates. In a related paper
(Scadding [1987]), I have shown that these fil­
tered estimates do not exhibit the unconditional
bias and inefficiency that Mork found for the raw
estimates. Another interesting sidelight of the
results of this paper is that the Mankiw-Shapiro
test for discriminating between observation and
forecast errors does a poor job when applied to
the estimated observation and forecast errors
calculated in this paper, corroborating other
indications of the poor power of the test.
For the forecaster, the filtering approach out­
lined in this paper provides an easy and syste­
matic way of adjusting the provisional numbers
to make them better estimates of “actual” GNP
growth. It would be intriguing to inquire
whether forecasters do in fact adjust the early
numbers in a way that is consistent with the
approach taken here.
The estimation results reported are model
specific in the sense that they depend, to an
unknown extent, on the specific forecasting
model used to initialize the filtering procedure.
Again, it would be interesting to see the extent
to which the filtering results were sensitive to
the forecasting model by using forecasts from
alternative models. One offshoot of such an
exercise would be that a particular model’s per­
formance could be evaluated in terms of the
extent to which its forecasts contributed to
improving the forecasting ability of the prelimi­
nary GNP numbers.

References

Bryson, Arthur Earl, and Yu-Chi Ho. Applied Optimal Control. Waltham, Mass.: Blaisdell Publishing Co., 1969.

Carson, Carol S. “GNP: An Overview of Source Data and Estimating Methods.” Survey of Current Business 67, no. 7 (July 1987): 103-26.

Conrad, William, and Carol Corrado. “Application of the Kalman Filter to Revisions in Monthly Retail Sales Estimates.” Journal of Economic Dynamics and Control 1, no. 2 (May 1979): 177-98.

Howrey, E. Philip. “Data Revision, Reconstruction and Prediction: An Application to Inventory Investment.” Review of Economics and Statistics 66, no. 3 (August 1984): 386-93.

Lupoletti, William M., and Roy H. Webb. “Defining and Improving the Accuracy of Macroeconomic Forecasts: Contributions from a VAR Model.” Journal of Business 59, part 1 (April 1986): 263-85.

Mankiw, N. Gregory, and Matthew D. Shapiro. “News or Noise? An Analysis of GNP Revisions.” Survey of Current Business 66, no. 5 (May 1986): 20-25.

McNees, Stephen K. “Estimating GNP: The Trade-off between Timeliness and Accuracy.” New England Economic Review (January/February 1986): 3-10.

Mork, Knut A. “‘Ain’t Behavin’: Forecast Errors and Measurement Errors in Early GNP Estimates.” Journal of Business and Economic Statistics 5, no. 2 (April 1987): 165-75.

Scadding, John L. “The Nature of GNP Revisions.” Working Paper No. 8718, Federal Reserve Bank of Cleveland, December 1987.

Walsh, Carl E. “Revisions in the ‘Flash’ Estimate of GNP Growth: Measurement Error or Forecast Error?” Federal Reserve Bank of San Francisco Economic Review (Fall 1985): 5-13.

Young, Allan H. “Evaluation of the GNP Estimates.” Survey of Current Business 67, no. 8 (August 1987): 18-42.

Comment: Intervention and the Dollar’s Decline
by Owen F. Humpage

Owen F. Humpage is an economic advisor at the Federal Reserve Bank of Cleveland. This comment concerns an article he wrote for the preceding issue of Economic Review (Quarter 2 1988), pp. 2-16.

After publication of “Intervention and the Dollar’s Decline” in the preceding issue of Economic Review, some confusion arose regarding exactly when the exchange-rate quotes in that article were taken and from what market they were derived. This comment will explain the differences and respecify some of the equations to dispel any misinterpretation.
The daily data for the article were taken from
DRI-FACS in August 1987. We understood from
reading the DRI-FACS manual that the data series
from August 7, 1984 to August 28, 1987 were
morning opening exchange-rate quotes from the
New York market.
The recently revised DRI-FACS manual (now called DRIFACS PLUS) indicates that after October 8, 1986, the data refer to closing quotes in the London market.¹ We therefore reestimated the equations in tables 3 and 4 of the article to determine if this change had any significant effect on the results.

■ 1 DRIFACS PLUS, the Dictionary of Money Markets and Fixed Income Data, Data Resources, Inc., February 1988. Data prior to October 8, 1986 are as originally reported.

While some of the point estimates are slightly different under these new estimations, the overall conclusion of the article remains the same:

Between August 1984 and August 1987, day-to-day U.S. intervention did not systematically affect day-to-day exchange-rate movements. However, on some occasions, intervention did have a temporary effect on mark-dollar and/or yen-dollar exchange rates.
Statistical tests in the article included U.S. intervention with a one-day lag to avoid problems with bidirectional causality between exchange rates and intervention. Generally, the results are interpreted on the assumption that the effects of U.S. intervention on day t-1 occurred between the opening quote on day t-1 and the opening quote on day t. After October 8, 1986, however, the data are closing quotes from the London market. Since the New York market opened before the London market closed, U.S. intervention on day t-1 could have affected the London closing exchange-rate quote on day t-1 and on day t.

To allow for this possibility, we reestimated the relevant equations, including a contemporaneous intervention term. Tables 3A and 4A, which correspond to tables 3 and 4 of the original article, present the results.
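In schematic form, each reestimated equation regresses the daily exchange rate on its own one-day lag and on intervention dummies entered both contemporaneously ("no lag") and with a one-day lag. The sketch below is a stylized rendering only; the variable names, the dummy construction, and the omission of a constant term are assumptions for illustration, not a description of the article's exact specification.

```python
# Stylized form of the reestimated equations: the exchange rate on its
# one-day lag plus contemporaneous and lagged intervention dummies.
import numpy as np

def estimate(rate, dummies):
    """rate: daily exchange-rate series (length-T numpy array).
    dummies: dict mapping a label to a length-T 0/1 dummy series."""
    y = rate[1:]                     # exchange rate on day t
    cols = [rate[:-1]]               # lagged dependent variable
    names = ["lagged dependent"]
    for label, d in dummies.items():
        cols.append(d[1:])           # contemporaneous ("no lag") term
        cols.append(d[:-1])          # one-day lagged term
        names += [label + ", no lag", label + ", lagged"]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(names, beta))
```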

TABLE 3A
Intervention

I. Estimation Period: February 23, 1987 to July 2, 1987

A. Dependent Variable: mark-dollar exchange rate

Independent Variables                    Coefficient    T-statistic
Intervention dummies
  Initial purchases, no lag (1)             0.009          1.73a
  Initial purchases, lagged (1)            -0.007         -1.35b
  Subsequent purchases, no lag (0)            —              —
  Subsequent purchases, lagged (0)            —              —
  Initial sales, no lag (3)                -0.007         -2.38c
  Initial sales, lagged (3)                -0.006         -2.06c
  Subsequent sales, no lag (2)             -0.006         -1.14
  Subsequent sales, lagged (2)             -0.008         -1.56
Lagged dependent                            1.00         994.8d

Sum of Squared Residuals = 0.002; R2 = 0.824; n = 90

B. Dependent Variable: yen-dollar exchange rate

Independent Variables                    Coefficient    T-statistic
Intervention dummies
  Initial purchases, no lag (0)               —              —
  Initial purchases, lagged (0)               —              —
  Subsequent purchases, no lag (0)            —              —
  Subsequent purchases, lagged (0)            —              —
  Initial sales, no lag (2)                -0.011         -1.89a
  Initial sales, lagged (2)                -0.001         -0.21
  Subsequent sales, no lag (16)            -0.007         -3.08d
  Subsequent sales, lagged (16)             0.0005         0.21
Lagged dependent                            1.000       7016.4d

Sum of Squared Residuals = 0.003; R2 = 0.969; n = 90

NOTE: Intervention refers to U.S. purchases or sales of foreign currencies. Numbers in parentheses indicate the number of times the dummy equals 1.
a. Significant at the 10% confidence level.
b. Significant at the 10% confidence level (one-tailed).
c. Significant at the 5% confidence level.
d. Significant at the 1% confidence level.
SOURCE: Author's calculations.

Table 3A lists the results for the period February
23, 1987 to July 2, 1987. For the West German
mark, the coefficient for initial purchases of
marks is positive and significant. One cannot interpret this coefficient unambiguously, because causality is bidirectional without the lag; nevertheless, the positive coefficient is not consistent with the view that intervention purchases of marks produced a dollar depreciation.
The lagged value on initial intervention is
marginally significant and correctly signed. The
United States bought a small amount of marks
on March 11, as the dollar rose above 1.85
marks. The dollar depreciated on the following
day. The coefficients on the sales of marks are
incorrectly signed and/or insignificant. For the
Japanese yen, all of the coefficients are either
incorrectly signed or insignificant.
Table 4A presents the results for the period
July 5, 1987 to August 28, 1987. For the West
German mark, the coefficient for initial pur­
chases of marks is positive and significant. As
before, this coefficient cannot be unambiguously
interpreted, but the sign is not consistent with
the view that intervention purchases of marks
produced a dollar depreciation. The remaining
intervention variables are not significant. For the
yen, the coefficients are either incorrectly signed
or are not significant.

TABLE 4A
Intervention

I. Estimation Period: July 5, 1987 to August 28, 1987

A. Dependent Variable: mark-dollar exchange rate

Independent Variables                    Coefficient    T-statistic
Intervention dummies
  Initial purchases, no lag (1)             0.012          2.53a
  Initial purchases, lagged (1)            -0.001         -0.27
  Subsequent purchases, no lag (3)          0.003          0.75
  Subsequent purchases, lagged (3)          0.002          0.47
  Initial sales, no lag (0)                   —              —
  Initial sales, lagged (0)                   —              —
  Subsequent sales, no lag (0)                —              —
  Subsequent sales, lagged (0)                —              —
Lagged dependent                            0.999        758.5b

Sum of Squared Residuals = 0.001; R2 = 0.849; n = 38

B. Dependent Variable: yen-dollar exchange rate

Independent Variables                    Coefficient    T-statistic
Intervention dummies
  Initial purchases, no lag (0)               —              —
  Initial purchases, lagged (0)               —              —
  Subsequent purchases, no lag (0)            —              —
  Subsequent purchases, lagged (0)            —              —
  Initial sales, no lag (1)                -0.018         -2.51a
  Initial sales, lagged (1)                 0.009          1.20
  Subsequent sales, no lag (0)                —              —
  Subsequent sales, lagged (0)                —              —
Lagged dependent                            1.000       4166.2b

Sum of Squared Residuals = 0.002; R2 = 0.830; n = 38

NOTE: Intervention refers to U.S. purchases or sales of foreign currencies. Numbers in parentheses indicate the number of times the dummy equals 1.
a. Significant at the 5% confidence level.
b. Significant at the 1% confidence level.
SOURCE: Author's calculations.

Economic Review

■ Quarter III 1987
Can Services Be a Source of Export-led Growth? Evidence from the Fourth District
by Erica L. Groshen

Identifying Amenity and Productivity Cities Using Wage and Rent Differentials
by Patricia E. Beeson and Randall W. Eberts

FSLIC Forbearances to Stockholders and the Value of Savings and Loan Shares
by James B. Thomson

■ Quarter IV 1987

Learning, Rationality, the Stability of Equilibrium and Macroeconomics
by John B. Carlson

Airline Hubs: A Study of Determining Factors and Effects
by Paul W. Bauer

A Comparison of Risk-Based Capital and Risk-Based Deposit Insurance
by Robert B. Avery and Terrence M. Belton

■ Quarter I 1988

Can Competition Among Local Governments Constrain Government Spending?
by Randall W. Eberts and Timothy J. Gronberg

Exit Barriers in the Steel Industry
by Mary E. Deily

Why Do Wages Vary Among Employers?
by Erica L. Groshen

■ Quarter II 1988

Intervention and the Dollar’s Decline
by Owen F. Humpage

Using Financial Data to Identify Changes in Bank Condition
by Gary Whalen and James B. Thomson

Developing Country Lending and Current Banking Conditions
by Walker F. Todd


Third Quarter Working Papers

Working Paper Notice

Current Working Papers of the Cleveland Federal Reserve Bank are listed in each quarterly issue of the Economic Review. Copies of specific papers may be requested by completing and mailing the attached form below.

Single copies of individual papers will be sent free of charge to those who request them. A mailing list service for personal subscribers, however, is not available.

Institutional subscribers, such as libraries and other organizations, will be placed on a mailing list upon request and will automatically receive Working Papers as they are published.

■ 8805 Lessons of the Past and Prospects for the Future in Lender of Last Resort Theory
by Walker F. Todd

■ 8806 Capital Requirements and Optimal Bank Portfolios: A Reexamination
by William P. Osterberg and James B. Thomson

■ 8807 Testing for Speculative Bubbles in Stock Prices
by Asli Demirguc-Kunt and Hashem Dezhbaksh

■ 8808 Financial Structure and the Adjustment of Capital Stock
by William P. Osterberg

■ 8809 Interurban Comparisons of the Quality of Life
by Patricia E. Beeson and Randall W. Eberts

■ 8810 Using SMVAM as a Linear Approximation to a Nonlinear Function: A Note
by Asli Demirguc-Kunt and James B. Thomson

Please complete and detach the form below and mail to:

Federal Reserve Bank of Cleveland
Research Department
P.O. Box 6387
Cleveland, Ohio 44101

Please send the following Working Paper(s):

Check item(s):  □ 8805   □ 8806   □ 8807   □ 8808   □ 8809   □ 8810

Send to: (Please print)

Name ____________________________________________

Address __________________________________________

City ______________________ State ________ Zip ________