
Authorized for public release by the FOMC Secretariat on 2/3/2021

BOARD OF GOVERNORS
OF THE
FEDERAL RESERVE SYSTEM
WASHINGTON, D.C. 20551

January 30, 1976

CONFIDENTIAL (FR)
CLASS II-FOMC

To:   Federal Open Market Committee

From: Arthur L. Broida

Attached are the first four sections of the Interim Staff Report: Stage II for the Subcommittee on the Directive. The remaining two sections should be completed and ready for mailing early next week.

Attachment


CONFIDENTIAL (FR)
CLASS II-FOMC
January 30, 1976

INTERIM STAFF REPORT: STAGE II
FOR THE SUBCOMMITTEE ON THE DIRECTIVE

John H. Kalchbrenner


Interim Staff Report: Stage II
For the Subcommittee on the Directive

Table of Contents

Section                                                                Page

I.   Introduction ................................................. 1

II.  An Overview of Optimal Control Analysis
        Introduction .............................................. 5
        Optimal Control Methods ................................... 6
        Feedback and Filtering ................................... 21
        Further Extensions ....................................... 25
        Some Concluding Observations ............................. 27

III. Instruments, Information Variables and Targets in the
     Determination of Monetary Policy
        Introduction ............................................. 29
        Optimal Control Analysis of Intermediate Targets ......... 31
        Questions of the Definition of Instruments and
          Intermediate Targets in Practice ....................... 35
        Concluding Remarks ....................................... 40

IV.  Suggested Changes in Operating Procedures Over the Near Term
        Closer Integration of the Green Book and the Blue Book ... 43
        The Role of Alternative Conditional Forecasts in FOMC
          Discussions ............................................ 48
        The 'Decision' Tree ...................................... 50

V.   Reappraisal of the Stage I Analysis, Results and
     Recommendations
        Introduction ............................................. 51
        Review of Stage I Empirical Results ...................... 52
        Re-specification of the Stage I Equations ................ 54
        Possible Use of the Federal Funds Rate as an Operating
          Target ................................................. 58
        Optimal Control Analysis of Operating Targets or
          Instruments ............................................ 60
        Recommendations Concerning the Federal Funds Rate
          Constraint and Operating Procedures .................... 64


Introduction¹
The Subcommittee on the Directive reported its findings and conclusions from the first stage of its investigation in March 1975. While the more limited objectives of Stage I were being achieved, research proceeded on the broader inquiry encompassed by the second stage of the Subcommittee's research program. The second stage focused on the question of the appropriate role of intermediate targets, such as the monetary aggregates, in the determination and conduct of monetary policy. The analysis of the role of intermediate targets led to investigation of such related questions as the relationships among operating, intermediate and ultimate targets; the appropriate policy responses to incoming information and forecast errors; and how best to take account of uncertainties concerning the structure of the economy, the current position of the economy, and the likely course of important exogenous variables.

¹ In the preparation of this report, the author received helpful guidance and comments from many individuals. Thanks are due to the members of the Subcommittee and S. Axilrod for pointing out and providing assistance in areas that needed special attention or that were inadequately developed. J. Enzler, J. Kareken, M. Keran, W. Poole and P. Tinsley were especially helpful in providing contributions, interpretations, and comments. The author also wishes to express appreciation to the individual researchers throughout the System for completing the project in a timely fashion and for providing valuable interpretations. Of course, none of those mentioned is responsible for any remaining errors in the report. Special thanks are due J. Pierce, who organized the Subcommittee on the Directive research program and guided a large portion of the work to its completion. Mary Flaherty bore the burden of typing this report and numerous memoranda, as well as coordinating the duplication and distribution of the large number of research papers. Her efficiency saved the author from total inundation, and her help is gratefully acknowledged.


A major part of recent macroeconomic analysis has been conducted in the framework of single-equation relationships or relatively small analytical models that include little institutional or sectoral detail. Many of the policy-related propositions that have occupied center stage in macroeconomics in recent years have been derived from analyses of such relatively simple formulations. Unfortunately, while their simplicity makes possible rigorous analysis, this same simplicity limits the questions that can be considered or the alternative hypotheses that can be tested effectively. While progress has been evident, too many issues remain unresolved, and seemingly contradictory propositions enjoy empirical support from the same data bases.
Recent macroeconomic analysis has also been done in the context of large structural econometric models that were developed principally for short-run forecasting applications. Because of the complexity of these models and the principal use for which they were designed, they are not amenable to application of the same techniques of analysis that can be used with simple formulations. Instead, the greater part of the policy analysis that has been conducted with these models has involved simulation techniques to analyze and illustrate such matters as policy multipliers, the results of alternative policy assumptions, model stability and alternative model structures. While the results of these efforts have been instructive, there remains a need for a method of analyzing important policy questions systematically and efficiently.


In the Stage II analysis of the Subcommittee, a technique is employed that has existed in economics for over twenty-five years but has received relatively little use. The technique is called optimal control analysis. It provides a systematic method for analyzing policy questions that is equally applicable to simple or complex model constructions. More specifically, optimal control analysis permits explicit inclusion in the analysis of different degrees of uncertainty; it takes into account complex dynamic behavior efficiently, thereby permitting analysis of questions of timing of policy actions; it incorporates means of analyzing and correcting for forecast errors in operational situations; and it provides valuable information about the characteristics of the models to which it is applied. The results of this research effort, while they do not settle many of the open questions, provide the basis for a reorientation of thinking about a number of issues that have been focal points of debate in macroeconomic and monetary analysis in recent years, as well as suggesting several avenues of further research.
This report has four principal sections. The first section contains an overview of the optimal control analysis underlying the analysis of the questions addressed in the second stage of the Subcommittee research program. Section II applies optimal control analysis to the question of the role of instruments, information variables and targets in the determination of monetary policy. The second section also discusses the extent to which the issues remain open and the appropriate strategy to follow until these questions can be answered. The third section contains a number of suggested changes in FOMC and staff procedures that could be made currently. These changes, some of which stem from optimal control analysis and some from independent analyses, are principally designed to increase the effectiveness of the information and analysis provided to the FOMC in its determination of monetary policy.

The final section contains a reappraisal of the Stage I analysis and conclusions. In addition to providing a re-specification of the Stage I empirical work that confirms the earlier findings, this section also contains an optimal control analysis of the question of choice of operating instruments or targets.

The report is not highly technical, although it introduces and uses the terminology of optimal control analysis. The intent is to provide a comprehensive summary of the Stage II research program that will be accessible to economists as well as to members of the Subcommittee and the Federal Open Market Committee. It is hoped the report will provide a helpful elaboration of the matters discussed in the Subcommittee on the Directive's preliminary second-stage report to the Federal Open Market Committee, which necessarily treats the issues in a brief manner.


II. An Overview of Optimal Control Analysis²
Introduction
Reliance on the techniques of optimal control theory underlies much of the analysis done during Stage II of the work of the Subcommittee on the Directive. Consequently, a brief non-technical introductory survey of relevant topics from optimal control analysis should aid understanding of many of the conclusions and recommendations that resulted from these efforts.³ Unfortunately, control analysis is not widely understood in the economics profession because of its mathematical complexity and the belief that it is an analytical technique of interest to only a limited audience concerned with a narrowly circumscribed set of problems.
More recently, however, it has been recognized that the techniques of optimal control are applicable to a broad variety of problems in economic analysis, both macroeconomic and microeconomic. Analysis of the rational expectations hypothesis, differential game behavior in the theory of oligopoly, and the implications of including uncertainty in general equilibrium analysis are among the more important examples of areas in which optimal control techniques are being applied. These techniques have also been applied to the theory of the firm, providing a potentially powerful tool for analyzing the impact of regulation on individual bank behavior.4/

²Peter Tinsley is the co-author of this section.
³More detailed discussions and further references can be found in Chow (4), (5), (6), Friedman (11), Kalchbrenner and Tinsley (14), Kareken and Miller (17) and Theil (33).
4/ Technically, in each of these areas the common element has been the application of a technique for optimization of objectives subject to stochastic constraints and imperfect information.


In concept, optimal control analysis is not a recent development. The basic techniques have been employed in macroeconomics for more than twenty years in varying forms. Early work concentrated on the problems of economic stabilization policy in the context of deterministic models (i.e., models with no uncertainty), or models in which simplified representations of uncertainty could be dealt with by using the mathematical expectations of stochastic (uncertain) relationships and proceeding as if the model were deterministic. Recent work has extended the analysis to more complex non-linear models and less restrictive treatments of the nature of uncertainty in econometric models. These extensions of optimal control techniques were made possible by generalizations of numerical techniques developed for aerospace applications in electrical engineering.
Optimal control studies have not been confined solely to theoretical analysis. Significant progress has been made in the development of the computational techniques that are required to be able to apply optimal control methods to large-scale, non-linear econometric models such as the quarterly econometric model used in the Federal Reserve System.5/ Even though theoretical questions remain unanswered and there are many unexplored applications of existing control theory, the analysis can already be applied in a limited fashion to quite complex and realistic situations.
Optimal Control Methods
In reaching policy decisions, the monetary authority has available a limited number of instruments or control variables that are under the direct control of the authority and can be manipulated in order to achieve desired broad economic objectives. The decision process is complicated first by the fact that economic behavior is imperfectly understood and is subject to change through time. The problem is further complicated by the fact that all of the likely objectives or ultimate targets cannot in general be achieved simultaneously with the limited number of policy instruments that are available.

5/ Examples of the application of control theory to large-scale systems can be found in Craine, Havenner and Tinsley (9) and Kalchbrenner and Tinsley (15).
In the context of monetary policy decision-making, the relevance of optimal control analysis stems from what might be labeled the "policy problem." The policy problem is to determine the optimal manner in which to set or manipulate the policy instruments in order to come as close to achieving the policy objectives as possible. As is the case in many problems in economic analysis, choosing the optimal policy strategy reduces fundamentally to an optimization problem of maximization or minimization. Put in simplest terms, optimal control analysis is nothing more than a set of rules for efficient calculation of solutions to the policy problem. The solution to an optimal control problem provides a rule (the policy strategy) for setting the policy instruments over a planning horizon that will minimize the (expected) deviations from desired targets, given the constraints imposed by the assumed behavior of the economic system and any further restrictions placed on the behavior of the component parts of the system. Among the constraints imposed by the economy are the limits to output at any time as determined by productive capacity and resources,
or the fact that fiscal policy actions must somehow be financed, causing secondary effects throughout the economy. Other constraints, such as concern about the effects of monetary policy on housing or state and local finance, also limit the freedom of monetary policy actions.
From this discussion, it is apparent that an explicit representation of the economic system--a model of the economy--is required in order to be able to apply optimal control analysis. This model must relate the policy instruments or control variables to the ultimate target variables, or the model provides no guidance in making policy decisions. The model itself represents a system of constraints in the sense that policy actions are transmitted by means of the given model relationships, most often indirectly.
The second principal requirement of optimal control analysis is an explicit representation of the policy objectives. This is referred to variously as the criterion, the loss function or the objective function. In economics, the loss function to be minimized has been written most often in terms of the weighted squared deviations of actual values from the desired targets or objectives. In addition to the policy objectives, the loss function often also includes additional ad hoc policy constraints, such as restrictions on the volatility of policy instruments or other variables. Conceptually, these ad hoc considerations can be accommodated by more detailed modeling of the impact of policy instruments. By definition, the optimal solution to the control problem is the solution that minimizes this loss function (or its expectation). The popularity of the quadratic form stems
from its mathematical tractability, but other more complicated representations of the loss function can and have been employed in deterministic control problems.6/
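In notation of our own choosing (not the report's), the weighted quadratic loss just described can be written as

```latex
L \;=\; E\!\left[\,\sum_{t=1}^{T}\sum_{i} w_i \,\bigl(y_{i,t} - y_{i,t}^{*}\bigr)^{2}\right],
```

where $y_{i,t}$ is the actual value of objective $i$ in period $t$, $y_{i,t}^{*}$ its desired value, and $w_i$ the weight attached to deviations in that objective; the minimization is carried out subject to the constraints of the model.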

The appropriate specification of the loss function poses one of the significant problems for application of optimal control analysis. At the theoretical level, it is often objected that the loss functions typically used are largely ad hoc and provide only a gross, perhaps badly distorted, approximation of the true objectives of policy makers or society in general. On the other hand, it should be noted that this problem exists not because a complicated objective function cannot be incorporated in optimal control analysis, but because such a function is, in fact, not known. As a consequence, the same objection can be raised in any policy decision context, and the problem does not disappear simply because no attempt is made to be explicit. In the absence of a loss function based on true social welfare criteria, it can be argued that the currently available variations on the simple quadratic form (or alternative mathematical forms) can approximate adequately the objectives that policy makers seek in practice, so long as the end results are interpreted carefully and cautiously.
6/ For examples, see Friedman (11), Kalchbrenner and Tinsley (15) and Ando and Palash (1). Also see Waud (37).

Optimal control analysis can be approached from several levels of complexity and realism. The simplest case is the single-period horizon, deterministic example studied by Tinbergen, where the policy objectives can be achieved exactly if the number of policy instruments equals the number of policy objectives. In this case, the settings required to achieve the objectives can be determined directly without the need to minimize a loss function.7/ But if the number of instruments is less than the number of targets, a loss function must be employed in order to determine the solution, because the structure of the model (the constraints) will generally mean that the desired target variables cannot all be achieved simultaneously. Where the desired objectives are inconsistent in this sense, the role of the loss function is to define the extent to which each objective should be achieved, given the preferences and weights specified in that function. Solutions to highly oversimplified problems of this type also provide examples of the sensitivity of the solution to the choice of variables that appear in the loss function, as well as to the weights that are assigned to these variables. At the risk of belaboring the obvious, it should be noted that there will be a different optimal solution associated with each loss function. A single, global optimal solution does not exist except in the abstract world in which it is possible to posit a generalized macroeconomic social welfare function.
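The Tinbergen counting argument can be stated compactly in notation of our own choosing. With a linear model $y = Ax$ relating the vector of instruments $x$ to the vector of targets $y$,

```latex
x = A^{-1}y^{*} \quad (\text{instruments} = \text{targets},\; A \text{ invertible}),
\qquad
x = (A'WA)^{-1}A'W\,y^{*} \quad (\text{fewer instruments}),
```

where the second solution minimizes the quadratic loss $(y^{*}-Ax)'W(y^{*}-Ax)$. In the first case the targets are hit exactly and no loss function is needed; in the second, the weighting matrix $W$ determines how the unavoidable misses are apportioned among the objectives.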
The second level of complexity involves the addition of additive random error terms (random intercepts) to the model in order to reflect the uncertainty associated with economic behavior as well as the inadequacies of models in representing the true economic structure. The optimal control solution in this case can be obtained by applying the certainty equivalence theorem. This theorem demonstrates that an optimal policy is obtained by deterministic solution methods after all the additive random elements are replaced by their mathematical expectations.

7/ It is usually not necessary to specify tradeoffs among policy objectives when the number of policy instruments is greater than or equal to the number of policy objectives. See Tinbergen (34).
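The content of the theorem can be seen in a one-period scalar example of our own construction. With $y = ax + \varepsilon$, $E[\varepsilon] = 0$, and loss $E[(y - y^{*})^{2}]$,

```latex
E\bigl[(ax + \varepsilon - y^{*})^{2}\bigr]
  \;=\; (ax - y^{*})^{2} + \operatorname{Var}(\varepsilon)
  \quad\Longrightarrow\quad \hat{x} = y^{*}/a .
```

The variance term does not depend on the instrument setting, so the optimal setting is the one obtained by replacing the random element with its expectation and solving the resulting deterministic problem.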


The situation becomes more complicated when account is taken of the dynamic characteristics of the economic system under uncertainty, and longer horizons are included in the solution. Under these circumstances, the optimal control solution must determine a path of settings for the policy variables that will minimize the expected value of the loss function over the entire planning horizon.
The exact specification of the policy horizon involves several issues. Theoretically, the policy horizon should approach infinity because the objectives of public policy are the objectives of a nation which has no finite life-span. In practice, however, the policy horizon that can be used will be limited by the availability of econometric models that maintain reasonable properties as the analysis is extended forward in time.
Given the long lags that most econometric studies indicate are involved in realizing major price effects, restrictions on the magnitude of changes in the policy instruments in the short run, and the likely importance of price behavior in policy objective functions, the policy horizon should be at least three to five years. Initial explorations of policy horizons of this length have uncovered stability problems with some econometric models that were designed principally for use in short-run forecasting. In addition, it becomes more important in the context of long-horizon problems to investigate the implications of uncertain forecasts of non-controlled exogenous variables such as fiscal policy actions.


There are means of getting around these difficulties, even though they are not totally satisfactory. It is not difficult to use two- to three-year horizons in practice, and if the solutions are recalculated every quarter over a moving horizon, the dangers of myopic solutions are lessened considerably. In addition, it is possible to impose constraints on values of important variables, such as the policy instruments, that inhibit the solution to the optimal control problem from ignoring price or other effects beyond the horizon. Finally, the sensitivity of the control solution to alternative assumptions about the behavior of uncertain forecasts of exogenous variables can be examined in order to gain perspective on the likely risks involved.
In the extension to the multi-period policy horizon under uncertainty, there are two traditional methods of obtaining the optimal control settings. The first is an open-loop strategy based on the principle of first-period certainty equivalence, and the second is a closed-loop strategy derived by dynamic programming.
In the case of linear models and quadratic loss functions, the single-period certainty equivalence analysis has been extended to the multi-period case. As before, all random variables in the model are set at their conditional expectations, and the system is solved as a multi-period deterministic problem. However, only the policy instrument solution for the first period is executed. For this reason, this procedure is referred to as first-period certainty equivalence. At the end of the first period, the
estimated position of the economy is then adjusted for measurements of economic activity that were received during the period.8/ The alternating process of deterministic optimization and measurement is then repeated sequentially for each of the remaining periods in the policy horizon. This process is referred to as a feedback procedure.
For any period in the feedback procedure, the optimal instrument setting of that period is a function of both past measurements and forecasts of future behavior of the economy. Note that the forecasts of the economy are a function of the distributed lag effects of both current and future settings of the policy instruments. Thus, the optimality of the current instrument setting can only be evaluated in the context of the optimal path of the instruments over the entire planning horizon. Therefore, the planning horizon should be of some reasonable length relative to the distributed lag impact of the instrument, regardless of the fact that the plan for future instrument settings will be revised by measurements in subsequent periods.
If the solution determined for the control instruments for each period were applied without subsequent modification as new information became available, the policy strategy would be termed an open-loop strategy. As noted, only the first-period choice of the instruments would be truly optimal under this strategy, essentially because it ignores the fact that new information will become available on future residuals or model forecast errors. If the observations on the residuals, or the new initial conditions, are taken into account and the open-loop strategy is recalculated each period by applying first-period certainty equivalence, the policy strategy is referred to as open-loop with feedback.

8/ That is, the conditional expectations of the random variables are sequentially updated. For an example of this procedure, see Kalchbrenner and Tinsley (15).
Under an open-loop with feedback procedure, a path of settings for the control instruments is based on the initial conditions and the forecasts of the exogenous variables, with the knowledge that only the first-period solution will be executed. The entire solution path is only conditionally optimal given current information and will be revised when future information becomes available. The path solution is recalculated each period to determine the next-period control setting and the conditional path over the remainder of the horizon, based on the most recent information. Because of the need to recalculate the entire solution each period, this procedure is inefficient, but it has the important advantage that it can be and is applied to complex problems in practice. It has the additional advantage of making explicit the expected results of the policy strategy each time the latest information is taken into account.
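The open-loop with feedback procedure can be illustrated with a deliberately simple one-equation sketch. The model, coefficients, target, and shock values below are illustrative assumptions of ours, not taken from the report; the point is only the mechanics: plan under certainty equivalence, execute the first-period setting, observe the forecast error, and replan.

```python
# Stylized open-loop with feedback (illustrative assumptions throughout).
# Assumed economy: y[t+1] = a*y[t] + b*u[t] + shock[t]; the target is ystar.
a, b, ystar = 0.9, 0.5, 100.0
horizon = 8
shocks = [1.5, -2.0, 0.7, 0.0, -1.1, 2.3, -0.4, 0.9]   # realized forecast errors

def open_loop_path(y0, periods):
    """Certainty-equivalent plan: set future shocks to their expectation
    (zero) and choose each u so the deterministic model hits ystar exactly."""
    path, y = [], y0
    for _ in range(periods):
        u = (ystar - a * y) / b        # one instrument, one per-period target
        path.append(u)
        y = a * y + b * u              # expected, shock-free evolution
    return path

y, realized = 95.0, []
for t, eps in enumerate(shocks):
    plan = open_loop_path(y, max(horizon - t, 1))  # replan on latest information
    u0 = plan[0]                       # execute only the first-period setting
    y = a * y + b * u0 + eps           # the actual outcome includes the shock
    realized.append(y)
```

Under this scheme each realized outcome misses the target only by the current period's unforecastable shock, which is exactly the sense in which the full solution path is conditionally optimal and perpetually revised.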
The second method of obtaining the optimal control solution is based on dynamic programming. By this method, instead of determining expected optimal instrument values for all periods in the planning horizon, the solution is in the form of a control rule or feedback control function which can be applied period-by-period, thereby automatically taking new information into account as events unfold. The instrument choice is made at the beginning of each period, for that period only, on the basis
of a solution rule that indicates the optimal response to conditions at the beginning of that period. The rule itself is dependent on the loss function and the structure of the model that is employed, and will, therefore, vary according to the circumstances under which it is derived.
In contrast to the open-loop procedure, a feedback control function provides a period-by-period update and revision of the control settings for the subsequent period in response to revised initial conditions, and does not explicitly indicate what happens to future control instrument settings. Intuitively, the rule might be interpreted as relating revisions in open-loop instrument settings to realized forecast errors.9/ Because the rule implicitly imbeds or contains allowance for future exogenous variables and other characteristics of the economic system, it is not a fixed rule in the sense of a rule dictating steady money growth or some other fixed response. The parameters of a feedback control function vary over time as the underlying situation to which it is being applied is altered.
The feedback control function is derived by dynamic programming techniques based on the principle of optimality.10/ This principle states that an optimal policy has the property that, whatever the initial conditions and the initial decision, the implied remaining decisions must be optimal with respect to the initial conditions resulting from the first decision. An optimal path can be determined most efficiently by working backward, in much the same way that the most efficient way to find the path through a maze is to begin at the exit and work backward to the

9/ For linear models and quadratic loss functions, the open-loop strategy can also be formulated in an explicit feedback form. In so doing, the levels of the open-loop control settings are related to the initial conditions of the economy, and future values of the exogenous variables are implicitly imbedded in the parameters of the feedback rule. See Kalchbrenner and Tinsley (14).
10/ See Bellman (2).

entrance gate. By proceeding in this manner all paths that do not lead to the exit are ruled out, and this is an efficient means of ensuring that the path to the exit will be optimal from each point along the solution path. The dynamic programming method determines what is referred to as a closed-loop or explicit feedback solution to the control problem.
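The backward, start-at-the-exit logic can be made concrete for the textbook scalar linear-quadratic case. Everything here (the model, the loss weights, the horizon) is an illustrative assumption of ours; the recursion itself is the standard dynamic-programming computation of a feedback rule of the form u = -k[t]*y.

```python
# Backward induction for a scalar linear-quadratic problem (illustrative).
# Loss: sum over t of q*y[t]**2 + r*u[t]**2, subject to y[t+1] = a*y[t] + b*u[t].
a, b, q, r, T = 1.2, 1.0, 1.0, 0.1, 20

p = q            # value of the remaining loss at the terminal period ("the exit")
gains = []
for _ in range(T):
    k = a * b * p / (r + b * b * p)    # optimal feedback gain for this period
    p = q + a * a * p - a * b * p * k  # scalar Riccati recursion, one step back
    gains.append(k)
gains.reverse()  # computed exit-first; reversed to run forward in time

# Applying the rule forward: each period's choice depends only on the current y.
y, path = 10.0, []
for k in gains:
    y = (a - b * k) * y                # closed-loop dynamics under u = -k*y
    path.append(abs(y))
```

Note that the rule responds to whatever initial condition materializes each period, with no need to recalculate the entire solution path.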
There are several advantages to using this solution format. In announcing the basis of the plan or the strategy, it is clear that the plan is contingent on actual occurrences. Intended actions will not be carried through unless forecasts of future events are realized, so it is not necessary to announce and explain a revised plan each time new information dictates a change. In addition, it is convenient to compare the outcome of applying the rule with competing reaction rules that are often proposed, including the no-response rule. Finally, as indicated, it is an efficient procedure that avoids recalculating the entire solution each period.
Nevertheless, there are also disadvantages to using the closed-loop approach. It appears to be an automatic reaction rule. And, without additional steps, it does not permit direct examination of the expected sequential effects of the action prescribed. Finally, except in the special case of a linear model and a quadratic loss function, the feedback control rule is difficult to compute for all but very simple models.


If the objective function is quadratic and the econometric model is linear, the open-loop with feedback and the closed-loop approaches are equivalent. Theoretically, the latter is preferable on the grounds that it is a multi-period generalization of the certainty equivalence approach, and the only substantive task in each decision period is to provide the necessary revised estimates of model variables needed for feedback.
If the optimal control solution is derived using a non-linear model, either of these two approaches can still be employed. And, as indicated earlier, a simple quadratic objective function can be made quite complicated and still be used with either method. But the solutions derived with a non-linear stochastic model (a model incorporating uncertainty) will no longer be optimal in a formal sense. All solutions must be approximate because the solution for the expected policy loss cannot be exactly measured. Further, approximations of the expected loss are fairly sensitive to where one expects the economy to be in the future. Because future developments will, in all probability, differ from expectations, the expected loss for any remaining periods in a policy horizon must be recalculated in response to forecast error measurements. Currently, it is believed that if a model is locally linear, obtaining solutions to non-linear models incorporating additive uncertainty does not pose serious problems in

Authorized for public release by the FOMC Secretariat on 2/3/2021

-

practice.

18 -

Many large econometric models do appear to be locally linear.

If, however, the uncertainty extends to the structure of the model (random
coefficient models, discussed below), obtaining solutions becomes more
difficult.
Nevertheless, if the model does not depart too greatly from
linearity, variants of the methods discussed above can be used to obtain
approximately optimal solutions for the instrument settings. One straightforward
method is to use a linearized version of the non-linear model. Since
this is usually not desirable because it reduces forecasting capabilities,
an alternative is to derive the solution path using the non-linear model
by means of a sequence of iterative approximations of the model that can
minimize a given loss function to any desired degree of accuracy without
sacrificing non-linear detail. In applications involving large, non-linear
econometric models (such as the quarterly model used within the Federal
Reserve System), the open-loop with feedback approach combined with iterative
linearization appears preferable because of the computational expense of
working with the closed-loop approach.
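The iterative-linearization idea can be sketched in a one-period toy case. The non-linear "model" f, its curvature, and the target below are hypothetical; the point is only that repeatedly linearizing around the current guess and solving the linearized problem converges to the non-linear solution without discarding the model's curvature:

```python
# Toy sketch of iterative linearization (all numbers hypothetical): a
# non-linear one-period model y = f(u) = u + 0.1*u**2 must hit a target
# y_star. Each pass replaces f with its linear approximation around the
# current instrument guess and solves that linear problem exactly.
def f(u):
    return u + 0.1 * u * u          # hypothetical non-linear model

def f_prime(u):
    return 1.0 + 0.2 * u            # slope used in each linearization

y_star = 2.0
u = 0.0                             # initial guess for the instrument
for _ in range(20):
    # Solve the linearized problem: f(u) + f'(u) * (u_new - u) = y_star.
    u = u + (y_star - f(u)) / f_prime(u)
assert abs(f(u) - y_star) < 1e-9    # the target is hit to high accuracy
```

Each pass solves only a linear problem, yet the converged setting satisfies the non-linear model exactly, which is the sense in which accuracy is obtained "without sacrificing non-linear detail."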
The final complicating factor that can be introduced into the
optimal control problem is to allow for uncertainty about not only the
intercepts of the econometric model (additive random error terms) but the
slope coefficients as well (uncertainty in the parameters of the model).
Under these circumstances the econometric model is known as a random coefficient model.


Allowing for random coefficients complicates optimal control
analysis greatly. For this reason, very little applied work has been done
with random coefficient models to date, and there is doubt that such an
approach can be applied to large-scale econometric models in the foreseeable
future, despite the obvious desirability of doing so.
Two important features about random coefficient models should
be noted. First, if the coefficients vary over time in a predictable
fashion, filtering techniques (discussed below) can be used to correct for
predictable movements in the coefficients. These are corrections of the
mean or first-order characteristics. Second, knowledge about the degree
of uncertainty concerning the estimates of the model coefficients can be
used to take into account more precisely the degree of uncertainty about
the effects of changes in the policy instruments (the policy multiplier
effects). These are considerations involving second-order characteristics.
Given the same mean estimates of the model coefficients (i.e., identical
multiplier predictions), the optimal strategy will be quite different depending
on whether uncertainty is allocated to the model intercepts or to the
general model structure.
Work is beginning on small, manageable random coefficient
models in order to gain empirical insights concerning the likely policy
restrictions or responses that would be appropriate in the context of
large-scale structural random coefficient models. One suggestion stemming from
analysis of simple random coefficient models has an important policy
implication. Analysis of these models suggests that optimal strategies
under uncertainty about the coefficients will be more conservative than
under more certain conditions. Intuitively, if the likely response to a
given policy action is highly uncertain, then it is reasonable to proceed
cautiously in changing the policy instruments. Whether optimal policy
strategies would be more conservative in practice depends, technically, on
the covariances between the model coefficients. While this is an empirical
question in the last analysis, the conditions required to justify highly
active policy strategies appear less likely to exist.11/
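The attenuation result can be illustrated with a one-line calculation. The gap, multiplier mean, and uncertainty figures below are hypothetical, and the quadratic-loss setup is a standard textbook simplification rather than the models cited in the footnote:

```python
# Sketch (hypothetical numbers) of why coefficient uncertainty makes
# optimal policy more conservative. A gap g is to be closed with an
# instrument whose multiplier b is uncertain, with mean b_bar and
# variance sigma_b**2. Minimizing the expected squared miss
# E[(g + b*u)**2] over u gives u* = -b_bar*g / (b_bar**2 + sigma_b**2),
# which shrinks toward zero as sigma_b grows.
def optimal_setting(gap, b_bar, sigma_b):
    return -b_bar * gap / (b_bar ** 2 + sigma_b ** 2)

gap, b_bar = 2.0, 0.5
u_certain = optimal_setting(gap, b_bar, sigma_b=0.0)    # full offset of the gap
u_uncertain = optimal_setting(gap, b_bar, sigma_b=0.5)  # attenuated response
assert abs(u_uncertain) < abs(u_certain)
```

With additive uncertainty only (sigma_b = 0), the optimal setting offsets the expected gap in full; once the multiplier itself is uncertain, a full offset raises the variance of the outcome, so the optimal response is deliberately smaller.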
One interesting means of characterizing proponents of aggressive
or conservative monetary policy is in terms of their implicit a priori
judgments about model reliability. Proponents of aggressive monetary
policy seem to attribute errors made by models to simple additive error
processes. In this case, all that is required to follow relatively active
policy strategies is that the models be adjusted appropriately to incorporate
new information so that a feedback strategy can be used. Advocates of
conservative policies, on the other hand, appear to assume such great
uncertainty about the model coefficients that active control strategies
would be imprudent if not damaging to economic stability.

11/ For more extended discussions, see Chow (4), Craine and Havenner (8),
and Kalchbrenner and Tinsley (14).

Feedback and Filtering
A discussion of one additional topic, the relationship between
feedback and filtering, is necessary to complete this overview of optimal
control analysis. In the discussion of open-loop and closed-loop control
strategies, it was noted that feedback is essentially a revision of the
policy strategy (the control instrument settings) in response to information
that actual economic performance had deviated from that anticipated (i.e.,
in response to forecast errors).
In practice, while monetary policy is conducted almost continuously,
a relatively complete and reliable picture of the economy is only available at
quarterly intervals, and then only with about a six-week lag.12/ This absence of
frequent complete information makes it difficult to obtain the information needed
to adopt a weekly or even a monthly feedback strategy. Nevertheless,
pictures of the economy can be constructed more often on the basis of data
that are available with greater frequency. Judgmental forecasting techniques
rely heavily on such information, implicitly or explicitly, to revise the
estimates of the position or condition of the economy during intra-quarterly
intervals. The 'add-factor' adjustments usually made to econometric models
during intra-quarterly periods also rely on these partial or indirect
observations. In an optimal control setting, filtering plays the same role.

Filtering, in this context, is simply a technique for estimating
a set of unobservable variables by using observable variables or intermediate
variables that maintain a predictable correspondence to the unobserved
variables. Information from the observable variables can be exploited to
extract information about unobservable variables on the basis of structural
links between the variables in an econometric model and relationships
between the forecast errors of the two sets of variables.

12/ Preliminary information is available with about a two-week lag, but is highly
unreliable. The usefulness of these preliminary data can also be investigated
by means of the techniques described in this section. See Conrad (7).
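A minimal numerical sketch of this updating rule may help fix ideas. All of the variances, covariances, and forecast values below are hypothetical; the mechanism is the standard regression (Kalman-gain) update, which is the simplest instance of the filtering described above:

```python
# Minimal filtering sketch with hypothetical numbers: an unobserved
# variable z and an observable indicator x have jointly normal forecast
# errors with variance var_x and covariance cov_xz. The observed error
# in x revises the forecast of z through the regression (Kalman-gain)
# coefficient cov_xz / var_x.
var_x, cov_xz = 4.0, 1.2
z_forecast, x_forecast = 50.0, 10.0

x_observed = 11.0                       # intra-quarter observation arrives
gain = cov_xz / var_x                   # weight placed on the observed error
z_revised = z_forecast + gain * (x_observed - x_forecast)
assert abs(z_revised - 50.3) < 1e-12    # forecast of z is revised up by 0.3
```

The strength of the revision depends entirely on how tightly the two forecast errors are linked: with cov_xz = 0 the observation on x would leave the forecast of z unchanged.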
For example, if complete information about the quarterly model
data bank were available at the end of a quarter, a prediction for the
entire quarterly data bank could be made for the next quarter. Once into
the new quarter, intermediate information becomes available daily, weekly or
monthly about some of the variables in the quarterly model data bank, or
about variables that are related to that data bank. At the beginning of the
quarter, forecasts of some of these intermediate variables can also be made.
In general, there will be errors in these shorter-term forecasts that can be
measured within the quarter as new observations become available. It is the
information contained in the measurement of these errors that is useful
in carrying out a frequent feedback strategy using the quarterly model. For
this reason, the intermediate variables are also referred to as information
variables or indicators.
If some of the more frequently measured variables are
components of the quarterly model data bank, the observed forecast errors
provide a certain update or correction of the forecasts of those variables as
soon as they are measured. Given the relationships between the corrected
variables and the remaining unobserved variables, the known forecast error can
also be used to provide updated estimates of the behavior of the unobserved
variables. This potential usefulness of filtering techniques provides an
obvious reason for interest in structural, or more detailed, econometric
models. Simple relationships between a few important economic variables
cannot be used as readily to exploit the information in intermediate
variables from a variety of sources.
The observable intermediate variables need not be a part of the
quarterly model data bank. It is possible to apply filtering techniques to
extract information from data that are not part of the quarterly data set so
long as a relationship between the observable forecast errors and the errors
in the quarterly set can be established. By the same token, filtering
procedures can be useful to judgmental forecasters by providing forecast
corrections of some important unobserved economic variables that can then be
extended judgmentally to other important unobserved variables.13/ For example,
information from the industrial production index might be useful in revising
forecasts of inventory behavior or investment, and therefore income and
output. Or, retail sales data or data on auto sales might be used to
revise consumption estimates, with consequent revisions of inventory, output
and income forecasts.
Any source of intermediate information is potentially a useful
indicator of the current position of the unobserved portion of the economy.
In practice, this does not mean that all intermediate variables are used.

13/ Examples of analysis incorporating filtering can be found in Kalchbrenner
and Tinsley (14), Kareken, Muench and Wallace (18), LeRoy (20), and LeRoy and
Waud (21). Applications of filtering can be found in Conrad (7), Kalchbrenner
and Tinsley (15), and Tinsley (35). An alternative means of predicting
unobserved macroeconomic variables is described in Porter (28, 29).


That is, filtering does not imply "looking at everything" indiscriminately.
Some intermediate variables provide relatively little useful information
because they are not strongly linked to variables of interest or their
early measurements contain too much statistical "noise." Others are
valuable for updating the information for a few important unobserved
variables, but not others. In any case, a single indicator will not
provide the same information content as consideration of a set of intermediate
variables carefully processed through a structural model. Finally,
it should be noted that the frequency of feedback is related to the
availability of useful information. In cases where frequent measurement
or filtering provides little useful information, there is little gain to
be had from more frequent feedback, and, therefore, little reason to do so.
This discussion has focused on the use of relatively frequent
measurement to update the forecast of a quarterly model. The process can
work in reverse as well, where shorter-term models are linked to a quarterly
model. For example, in the monthly money market model, a key variable is
personal income, a variable that is also included in the quarterly model
data bank. To the extent that personal income forecasts in the quarterly
model can be revised by filtering techniques, the forecasts of the monthly
model will also be revised. In this fashion, the richer detail that is
typical of structural quarterly models can be exploited to enhance the
performance of less detailed shorter-term models.
For the case of models with uncertain coefficients, it has been
suggested that, conceptually, filtering techniques could be used to obtain
estimates of the effects of shocks to the economy on the random coefficients.
This would be an extension to a random coefficients situation of the
'add-factor' procedures typically used in current practice. Under these
circumstances, the 'add-factor' adjustments would alter both the initial
conditions and the expected policy multipliers associated with the model.
Further Extensions
One important area of control analysis that is not included
in the preceding discussion is the question of coordination of policy
instruments.14/ In general, the greater the number of policy instruments
that can be used, the more closely can multiple objectives be achieved.
Optimal control techniques provide a means of estimating the extent to
which objectives could be achieved more closely if instrument settings were
coordinated, or the extent of the constraints placed on monetary policy by
a failure to coordinate actions. Given the long lags with which monetary
policy effects occur, it is desirable to explore the implications of
situations in which monetary policy must take as given, and respond to,
independently determined fiscal policy actions.
A natural extension of the optimal control techniques discussed
in this section is the provision of confidence intervals for the projections
of the important ultimate variables. The availability of these intervals
provides a means of determining the range of outcomes that is associated
with any given choice of the instrument settings. It also provides
information on the areas of the econometric model that would, if they could
be improved, yield the greatest gains in the overall performance of the
model. Presenting these ranges is a simple matter in an applied control
setting because filtering applications require calculation of the
historical error performance of the equations in the econometric model.

14/ This issue is discussed in Ando and Palash (1) and Craine, Havenner
and Tinsley (9).
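One simple way such intervals can be produced is by stochastic simulation. The one-equation model, coefficient values, and shock standard deviation below are hypothetical stand-ins; in practice the shock scale would come from the historical error record just mentioned:

```python
import numpy as np

# Sketch of projection confidence bands by stochastic simulation, using
# a hypothetical one-equation model y[t+1] = a*y[t] + b*u + shock. The
# shock standard deviation stands in for the historical forecast-error
# record of the model's equations.
rng = np.random.default_rng(1)
a, b, u, y0, sd = 0.9, 0.5, 4.0, 95.0, 1.0
horizon, n_draws = 4, 5000

final = np.empty(n_draws)
for i in range(n_draws):
    y = y0
    for _ in range(horizon):
        y = a * y + b * u + rng.normal(scale=sd)   # one simulated path
    final[i] = y
low, high = np.percentile(final, [5, 95])   # a 90% band for the period-4 projection
assert low < final.mean() < high
```

The spread between the band limits, for a fixed instrument setting, is exactly the "range of outcomes" the text refers to; re-running the simulation with an improved (lower-variance) equation shows where model improvement would narrow the band most.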

Finally, it is possible to conduct sensitivity analyses related
to a variety of issues in a control context. For example, for a given loss
function, if the acceptable loss is increased by five or ten per cent, what
is the effect on instrument variability and the behavior of the objective
variables? Or, how sensitive are the instrument settings or policy
strategies to such matters as changes in the variables included in the
loss function or their weights, the functional form of the loss function,
the length of the policy horizon or alternative model structures?
Even though most of these extensions are designed to provide
analytical information about the nature of the control solution under
differing circumstances, they can also be employed in an operational setting
to supplement the analysis that is provided on a regular basis. They can
also be useful in providing policy makers with a better understanding of the
decision environment, and of the relative importance of implicit choices
of priorities. The less the sensitivity of the control solution to
differences in model structure or the form and content of the loss function,
the less is the necessity to focus on the specific differences involved.

Some Concluding Observations
Likely areas of benefit from the application of optimal control
techniques, as well as substantive problems, have been discussed in this
brief overview of optimal control analysis. From this discussion it should
be apparent that optimal control analysis is not a panacea for policy
decision problems. By the same token, however, neither is it a 'black box'
approach, competitive with general economic or econometric analysis, that is
likely to lead policy makers seriously astray. Even though optimal control
techniques can be of assistance in taking into account uncertainty and model
forecast errors systematically, they are not substitutes for improved
economic and econometric analyses.
While much work remains to be done, it is now possible to apply
optimal control analysis to complex and realistic situations in the context
of large-scale structural econometric models, and early studies of the gains
from filtering are quite promising. Moreover, even at the present stage of
development, optimal control provides a useful conceptual framework, both
comprehensive and systematic, within which policy issues can be analyzed.
Because optimal control analysis is an optimization technique that can only
be applied to specific economic or econometric formulations, the technique
itself is neutral with respect to 'Keynesian' and 'Monetarist' issues, as
well as to questions of fine tuning versus conservative policy strategies.
Nevertheless, it provides a means by which the unresolved issues in these
areas can be clarified and analyzed in a systematic manner.


However, care must be taken not to overstate the gains that
can be expected from optimal control applications. No one, now or in
the foreseeable future, seriously advocates that policy should be
determined by optimal solutions of econometric models. It has been suggested
that applications of these techniques are likely to (a) provide a systematic
exploration of the implications for monetary policy of alternative goals,
(b) permit explicit consideration of uncertainty in the formulation of
strategies, (c) improve the mixed judgmental-econometric model analyses
that are prepared as briefing material and (d) indicate those areas of model
improvement that would most aid policy decisions.

III. Instruments, Information Variables and Targets in the Determination
of Monetary Policy
Introduction
For all intents and purposes, monetary policy is in continuous
operation, but the frequency with which monetary policy decisions can be
made is limited by the fact that the information required to reach sound
decisions is incomplete or available infrequently. This situation led to
the well-known proposal that the monetary authority adopt monetary aggregate
intermediate targets that are observable with greater frequency and bear a
reasonably stable relationship to the ultimate targets of monetary policy.
This proposal rests on the proposition that if short-run policy instruments
are manipulated in such a way as to achieve intermediate targets thought to
be consistent with some ultimate targets, then the ultimate targets will not
deviate substantially from their desired paths.
The emphasis in the second stage of the Subcommittee on the
Directive's investigations was on the question of whether pursuit of such
intermediate targets is desirable or appropriate in formulating monetary
policy. Included in the research were the related questions of the
relationships between operating, intermediate and ultimate variables; the
appropriate time horizon for monetary policy; the conditions under which
targets should be altered in response to incoming information; and the best
way to take into account the uncertainties under which policy decisions must
be made. This is an area that has been plagued by semantic problems that
have made it difficult to focus on the real issues involved. Analysis of
these questions in an optimal control framework has provided a clarification
of the issues, as well as indicating that a reorientation of thinking about
intermediate targets would be desirable.15/

In recent years, the Federal Open Market Committee has shifted
its focus toward greater emphasis on monetary aggregate intermediate targets.
This shift in focus and the events leading up to it resulted in wide
discussion of such related issues as the controllability of money, the
appropriate choice of policy instruments to control money effectively, how
strategies should be altered to respond to misses in money targets, the role
of money market interest rate constraints, and the appropriate period over
which monetary aggregate targets should be specified. In much of this
discussion, there has been an implicit assumption that the monetary
aggregate targets were consistent with ultimate economic objectives that
were never specified exactly. Frequently, it has not been clear exactly
what the expected links were between the short-term instrument choices or
settings (e.g. reserve measures and/or interest rates), the intermediate
targets (e.g. various measures of money and bank credit) and the ultimate
objectives (e.g. the rate of growth of real output, unemployment, the rate
of change of prices and international considerations).
15/ The material in this section is discussed in greater detail in
Kalchbrenner and Tinsley (14), Kareken, Muench and Wallace (18), and
Kareken and Miller (17). Earlier work that pointed in the same direction
as the conclusions discussed here was done by Kareken (16) and Poole (26).


Analysis of these issues in an optimal control framework has
the important advantage of including most of the relevant issues in the
analysis simultaneously. And, once this is done, it becomes apparent that
there are significant similarities between the optimal control approach
and Federal Open Market Committee procedures of recent years. The latter
will be brought out in greater detail in section IV below.16/
Optimal Control Analysis of Intermediate Targets
In order to clarify the issues more precisely when viewed in an
optimal control context, it is helpful to ignore, temporarily, questions of
practicality and concentrate on qualitative insights. In section II, it
was indicated that an optimal control analysis of a policy problem begins
with the specification of desired ultimate targets or objectives that in
general cannot be achieved exactly, given a limited number of instruments,
the constraints represented by the model of the economy and the uncertainty
inherent in economic analysis. Relying on a first-period certainty
equivalence approach, the solution to the policy problem will provide
settings for the policy instruments in the next period that are based on
expected settings of the instruments and expectations about the behavior of
the economy over the remainder of the policy horizon.
The forecast of economic behavior can only be an expectation
because of the uncertainty associated with the econometric model. The
forecasts of the individual variables are, in addition, conditional
expectations because they depend on, or are "conditioned" by, the settings
of the policy instruments and assumptions about the other important but
non-controlled exogenous variables. If monetary aggregates are viewed as
being determined by an interaction between the economy and the monetary
authority, they too are among the conditional expectations that result from
the optimal control solution.

16/ The correspondence between FOMC procedures and optimal control is discussed
in some detail in the paper by Kareken and Miller (17).
The important insight to be gained about the role of intermediate
variables from optimal control analysis lies in the behavior of these
variables relative to expectations once the policy strategy is carried out.
This role can be seen clearly by tracing through the steps that would be
involved if an open-loop with feedback strategy were carried out.
With the passage of time in the first period as the policy
decisions are carried out, information becomes available on the behavior of
the observable intermediate variables that are either variables included
in the model, or excluded variables that can be related to variables in
the model explicitly. Instrument settings in this context are
operations affecting reserves or interest rates in the short run
(Trading Desk operations), while intermediate variables include
interest rates, various money and credit measures, the unemployment
rate, and a variety of other real sector variables measured monthly
or weekly. These observations, by definition, will be available prior to
quarterly information on the behavior of the ultimate variables of
concern that are included in the loss function. Because the relationships
between the instruments and the intermediate variables are stochastic
(or subject to uncertainty), in general the expectations held at the outset
concerning the response of the intermediate variables to particular
instrument settings, the values of other exogenous variables and other
determinants of the initial conditions will not be realized. That is,
forecast errors will generally be observed.
A filtering process can then be applied to these forecast
errors for the intermediate variables to obtain revised forecasts of the
unobservable variables, including the ultimate targets. These revisions
will then become part of a new set of initial conditions (along with
adjustments in the econometric model where indicated) that can be used in a
feedback procedure to re-calculate the optimal settings of the policy
instruments in the manner described earlier. Note that no adjustments would
be made in the policy instrument settings (short-run monetary policy) unless
information could be extracted from the forecast errors of the intermediate
variables that could be used to revise the forecasts of the unobservable
variables by a sufficient amount to alter the optimal control solution.
A number of points about following an optimal control procedure
in this fashion should be stressed. First, the optimal control solution for
the policy instruments is obtained in terms of the relationships between
instruments, intermediate variables and ultimate variables. These
relationships are not typically recursive in the sense of a simple
sequential line of causality running from the instruments to the
intermediate variables and then to the ultimate variables. Second, at the
beginning of the policy period, the intermediate variables are expectations,
but they become information variables that provide a monitor of the way the
adopted policy strategy is working out in practice as time passes and they
can be observed or measured. Third, because the intermediate variables are
coupled or linked to both the instruments and the variables of ultimate
interest, the interim information they provide can be used either to
validate the original policy, or to serve as the basis for changing the
policy when it appears expectations are not being realized. Finally, the
expected values of the intermediate variables are not ends in themselves.
The expected values they take on are the result of the solution process
relating ultimate targets to instruments, and those values are only
appropriate to the extent that the expected relationships between the
instruments and the ultimate targets remain unchanged.
This same conceptual approach can be used to see the implications
of treating a limited set of intermediate variables as true targets in the
sense that deviations from the original intermediate expectations (i.e.,
forecast errors) are to be corrected by short-run monetary policy. For this
to be an optimal procedure in terms of achieving the ultimate objectives,
the shocks hitting the intermediate targets would have to be such
that correcting the resulting forecast errors and returning the
intermediate targets to the original expected path would simultaneously
correct for shocks that were displacing the ultimate targets from
their desired paths or values. Otherwise, correcting errors in forecasts
of the intermediate targets would push the errors into some other unknown
area of the economy, thereby reducing the chances of achieving the ultimate
objectives. The alternative of using a filtering-feedback procedure
can provide information that yields the appropriate response, if any, to
shocks or forecast errors in the intermediate variables on the basis of
a specific analysis of the impact of the errors on the total linkage from
policy instruments to ultimate targets.
The conclusion from optimal control analysis is that, conceptually,
pursuit of intermediate targets is a sub-optimal procedure to follow.
Intermediate variables should be treated as information variables to provide
the basis for adjustments in policy instruments in order to achieve ultimate
targets.
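The contrast between the two procedures can be sketched numerically. The error magnitudes, covariance, and policy multiplier below are hypothetical, and the two-variable setup is a deliberate simplification of the linkages discussed above:

```python
# Stylized sketch (hypothetical numbers) of the conclusion above. A money
# forecast error e_m may reflect a money-demand shock with no implication
# for output, or a shock that also signals an output-forecast error. The
# filtering-feedback response scales the instrument change by
# cov(e_m, e_y) / var(e_m); a strict intermediate-target rule would
# offset e_m in full regardless of its source.
def filtering_response(e_m, cov_my, var_m, multiplier):
    revised_y_gap = (cov_my / var_m) * e_m   # implied output-forecast revision
    return -revised_y_gap / multiplier       # instrument change that offsets it

e_m, multiplier = 1.0, 0.5
# Pure money-demand shock: the miss carries no news about output,
# so the optimal instrument response is zero.
assert filtering_response(e_m, cov_my=0.0, var_m=1.0, multiplier=multiplier) == 0.0
# Correlated case: part of the miss signals an output-forecast error,
# and only then does the filtering procedure move the instrument.
resp = filtering_response(e_m, cov_my=0.6, var_m=1.0, multiplier=multiplier)
assert abs(resp + 1.2) < 1e-12
```

A strict money-target rule would respond identically to both cases; the filtering response is zero precisely when the intermediate-variable miss carries no information about the ultimate targets.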
Questions of the Definition of Instruments and Intermediate Targets in Practice

In the immediately preceding discussion, questions of practicality
and the readiness or inadequacy of both econometric models and optimal
control techniques were ignored. Given the discussion in section II of the
problems involved in applying optimal control techniques, and the known
performance of econometric models, some caveats are in order. To this end,
a summary list of the likely near-term contributions of optimal control to
actual monetary policy decisions was given at the end of section II.
In addition, other points were raised during the course of
reviewing the research done in Stage II, although it should be noted that
the soundness of the conceptual conclusions from the foregoing optimal
control analysis was not questioned.17/ The first issue is in part semantic
and in part substantive. As indicated in section II, an instrument is
a variable that is under the direct control of the Federal Reserve.
Strictly speaking, direct control means that the variable can be set
exactly by the Federal Reserve with no uncertainty. Under this definition
of a control instrument, the Federal Reserve controls either very short-term
interest rate movements or the Federal Reserve portfolio of assets.
Each of the reserve measures considered in Stage I (and reviewed in
section V below), as well as the several alternative measures of money, are
'intermediate variables' that respond to changes in these two fundamental
policy operating instruments.

17/ The following material is discussed in greater detail in Dew (10) and
Poole (27).
In the preceding discussion, reserve measures (or interest rates)
are treated as instruments and the monetary aggregates are treated as
intermediate variables or information variables. In the review of the
Stage II results, this usage was challenged on the grounds that both
reserve measures and the monetary aggregates are intermediate variables.
Therefore, it is argued, the monetary aggregates can also be treated as
instruments of monetary policy instead of as intermediate information
variables.
This argument is based on the proposition that the Federal Reserve
could, in fact, control monetary aggregates very closely if it set out to
do so, even though this might require extensive changes in daily operating
variables (the policy instruments under the strict definition given above).
Control theory requires an instrument that the controller sets exactly.
Therefore, if the monetary aggregates are controlled very closely, the
monetary aggregates meet the definition of an instrument just as well as
reserve measures.
If this argument is accepted, and the monetary aggregates are
classified as policy instruments rather than as intermediate variables,
the entire optimal control analysis can be applied as above with the
policy instrument, money, subject to adjustment based on information
concerning the behavior of the ultimate objectives derived from intermediate variables or direct observation. Under this classification,
monetary aggregate measures would indeed be targets since they would be
the instrument used by policy makers to carry out policy strategy.
In the discussion of the issue of whether money should properly
be viewed as an instrument or an information variable, institutional
changes to make money more controllable were excluded on the grounds that
the institutional environment was taken as given in Stage II. Given this
restriction, this issue is an empirical matter in practice rather than
a matter of theory. In practice, quarterly econometric analysis within the
Federal Reserve is conducted principally in the context of a large
structural model that has used money (Ml) as the policy instrument in the
past. The model can also be used with nonborrowed reserves as the policy
instrument, but this has not normally been done. Monthly and shorter-term
econometric analysis has been conducted using smaller models that take
the real sector as given and concentrate on financial variables.
In these models, the policy instrument can be short-term interest rates or
reserve measures, and in recent periods the former has been used. The
shorter-term models have only rather minor linkages to the quarterly model.
On the basis of these considerations, it is argued that
the existing shorter-term models can be most useful in controlling money
intra-quarterly, but that little information can be extracted from
the variables in these models (and presumably other frequently observed
variables) to revise estimates of behavior of the ultimate variables intra-quarterly.

Therefore, it is argued that the control or policy procedure
ought to be partitioned into (1) deciding the quarterly desired paths of
money to achieve desired ultimate objectives (by the FOMC), and (2) deriving
the monthly reserve paths to achieve the desired quarterly paths of money
(by the Desk and the staff). Feedback could occur at weekly or monthly
intervals in achieving the quarterly money targets, and quarterly in
achieving the ultimate target variables.
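The two-stage partition described above can be sketched in code. The sketch below is purely illustrative, not an actual FOMC rule: the linear feedback coefficients, the function names, and every numerical value are hypothetical assumptions introduced only to show the shape of the procedure.

```python
# Illustrative sketch of the partitioned control procedure: the FOMC
# sets a quarterly money-growth target; the Desk and staff adjust
# reserves monthly to hit it. All coefficients are hypothetical.

def quarterly_money_path(current_gnp, target_gnp, base_growth=5.0, k=0.5):
    """Stage 1 (FOMC): choose a quarterly money-growth target from the
    percentage gap between the ultimate objective and its current level."""
    gap_pct = 100.0 * (target_gnp - current_gnp) / current_gnp
    return base_growth + k * gap_pct  # hypothetical feedback rule

def monthly_reserve_path(money_target, money_observed, reserves, m=0.25):
    """Stage 2 (Desk/staff): revise the reserve level each month to close
    the gap between observed money growth and the quarterly target."""
    return reserves * (1.0 + m * (money_target - money_observed) / 100.0)

# One quarter of the procedure: the money target is set once, then
# reserves are revised monthly as money-growth readings arrive.
money_target = quarterly_money_path(current_gnp=1600.0, target_gnp=1640.0)
reserves = 35.0
for observed_growth in (4.2, 4.8, 5.6):  # hypothetical monthly readings
    reserves = monthly_reserve_path(money_target, observed_growth, reserves)
```

Under this partition, feedback on money occurs monthly inside the quarter, while feedback on the ultimate targets occurs only when the quarterly path is reset.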
The advantage of greater policy clarity is also cited in favor
of this two-stage approach. It is felt that a policy strategy defined in
terms of a reserve instrument rather than a monetary aggregate would involve
more extensive changes in the instrument that are not related to changes in
information about the behavior of ultimate targets so much as to changes
in operating factors or regulatory changes. Variations in such factors as
reserve requirement ratios, the distribution of deposits by type and class
of bank, or the distribution of deposits between member and non-member
banks would require changes in a reserve instrument, but no change,
necessarily, in a monetary aggregate instrument. Since such instrument
changes would be largely technical and would occur relatively frequently,
it is argued that policy decisions would be facilitated and public
understanding would be enhanced by classifying money as the instrument
rather than reserves (or short-term interest rates).
Theoretically, this position can be challenged (see the discussion on pp. 31-35 above). If there is a model that relates shorter-term
instruments (reserve measures or short-term interest rates) to the longer-term instrument or intermediate target (money), and there is a model that
relates money to ultimate targets, then there is a model linkage relating
reserves or interest rates to ultimate targets.

In practice, the issue can only be resolved by determining which procedure yields the closest control
over ultimate targets.
In control terminology, the question comes down to evaluating
three possible strategies. The first is to pursue a quarterly open-loop
with feedback strategy in terms of the relationships between money and the
ultimate targets. Coupled to this strategy is a shorter-term (weekly or
monthly) open-loop with feedback strategy relating a quarterly money target
to operations of the Trading Desk.

The quarterly open-loop between money
and the ultimate targets could be interrupted if special circumstances
dictated a change in the money instrument setting.

A second possibility is to adopt a monthly open-loop with
feedback strategy relating reserves directly to the ultimate targets with
the monetary aggregates serving as intermediate information variables.
Coupled to this strategy is a shorter-term open-loop with feedback strategy
relating a monthly reserve target to operations of the Trading Desk on a
weekly basis.
The third possibility is to adopt a Federal funds rate instrument
(a control instrument in the strictest sense) and relate the instrument
directly to the ultimate targets by means of a frequent feedback strategy
using reserves, monetary aggregates and other intermediate variables as
information variables.

For reasons explained in section V, this alternative
is not recommended by the Subcommittee at this time, but the possible merits
of such an approach will be investigated in coming months.
Stated another way, the choice among these strategies depends on whether the forecast errors
from monthly or more frequent data can be used by means of filtering
techniques to revise quarterly data and vice versa. To investigate this
question, a project is currently underway to determine if the quarterly
and monthly models can be linked, with information flows filtered both
from the monthly model to the quarterly model, and from the more detailed
quarterly model to the monthly model. The results will shed light on the
question of whether feedback should or can occur at quarterly or more
frequent intervals, at least from the viewpoint of econometric model use.
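The filtering idea referred to above can be illustrated with a minimal scalar example. Everything in the sketch is a hypothetical assumption (the prior estimate, the variances, and the monthly readings); it shows only the mechanical form of an update that weights a quarterly model estimate against incoming monthly observations.

```python
# Illustrative sketch of the filtering idea: a scalar Kalman-style
# update that revises a quarterly estimate as monthly observations of
# a related variable arrive. All numbers are hypothetical.

def filter_update(estimate, est_var, observation, obs_var):
    """Combine a prior estimate with a noisy observation, weighting
    each by the inverse of its variance."""
    gain = est_var / (est_var + obs_var)  # weight placed on new data
    new_estimate = estimate + gain * (observation - estimate)
    new_var = (1.0 - gain) * est_var      # uncertainty shrinks with data
    return new_estimate, new_var

# A quarterly model forecast of money growth, revised by monthly readings.
estimate, var = 6.0, 1.0                  # prior and its variance
for monthly_obs in (5.2, 5.0, 5.4):       # hypothetical monthly data
    estimate, var = filter_update(estimate, var, monthly_obs, obs_var=0.5)
```

In this form, monthly forecast errors steadily shrink the variance of the quarterly estimate, which is the sense in which higher-frequency data could revise the quarterly model's expectations.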
Concluding Remarks
Optimal control analysis of the question of the desirability of
specifying intermediate targets in monetary policy indicates that such a

practice is sub-optimal theoretically. As a practical matter, however,
the desirability of specifying intermediate monetary aggregate targets is
an empirical question that remains open. Those who argue for the use of
quarterly monetary aggregate intermediate targets believe the greatest
policy gains would come from truly controlling money over this interval, and
that more complex and frequent feedback procedures would yield small gains by
comparison. Work currently underway to try to measure the gains to be
derived from short-run filtering and feedback should provide at least partial
answers to these questions within nine months.
Another result from this analysis concerns the recently adopted
FOMC practice of announcing one-year targets for the monetary aggregates.
The feedback analysis suggests that this practice would be undesirable if the
announced choices were viewed as true targets, invariant with respect to economic
developments. It would be preferable to treat these announced 'target' rates
of growth as expectations in the same manner as the instrument paths beyond
the first period of a first-period certainty equivalence solution are
treated. On statistical grounds, it might be preferable to state the
announced 'targets' as single values (point estimates) subject to a standard
error, rather than stating target ranges. Nevertheless, a range specification might be justified by the earlier discussion of the difficulty of
specifying a loss function, or doubts about the 'model' on which the policy
decision was based.
In practice, the FOMC implicitly appears to have adopted a moving
horizon, quarterly, open-loop with feedback strategy since the longer-term
'targets' are reconsidered and extended by one quarter at quarterly intervals.
This is a contingency strategy, as pointed out in the discussion of feedback control rules and feedback procedures in section II.

It would perhaps increase the flexibility of the Committee and public understanding of
monetary policy if this practice were made more explicit.
In this report, the terms mathematical expectations or expected
values have been used in referring to forecasts of stochastic variables
because this is the correct usage in an analytic context. In the Subcommittee report to the FOMC, on the other hand, the reader will find the
term intended values for reserves or monetary aggregate 'targets'. The
use of the word intended rather than expected was considered preferable in
a policy decision context on the grounds that once the decision is made, the
FOMC indeed intends to achieve the values decided upon until the Committee
decides there are sufficient grounds to change the decision. Thus, while
the policy choices may well be expected values rather than invariant targets
analytically, the decision element involved endows them with a dimension
that the Subcommittee felt was better captured by use of intended values.

IV. Suggested Changes in Operating Procedures Over the Near Term
A review of the analyses carried out in Stage II suggests
changes that should be made in the manner in which staff materials are
prepared and presented. The staff is currently in the process of adopting
some of these changes and considering others. The changes involve making
the interconnection and consistency between the Green Book and the
Blue Book more explicit, and providing a broader analysis of the options
available to the FOMC in determining monetary policy.
Closer Integration of the Green Book and the Blue Book
Interest in making the relationships between the Green Book and
the Blue Book more explicit arises from two basic considerations. First,
given the complexity of the processes by which both analyses are prepared,
more explicit specification of the linkages between the short-run and
the long-run analyses would provide a better check on consistent projections
by the staff. Second, an explicit linkage between the longer-run and
shorter-run analyses, and the consequent linkage of FOMC decisions, should
lessen the likelihood of inadvertently inconsistent policy decisions.
The staff has recently changed its procedures in order to improve
Blue Book and Green Book consistency as described below. Prior to the
change, members of the staff provided a first approximation of the
likely quarterly patterns of interest rates and monetary growth that would
be consistent with achieving some longer-term rate of growth of Ml within
the range established by the FOMC, usually at the previous meeting. The
specific longer-term rate within the range established by the FOMC to be
used as the conditioning assumption in making these initial forecasts
was obtained from the senior staff.
For a variety of reasons, this initial estimate often did not
correspond to the patterns of interest rates and monetary growth that
were discussed in or implied by the corresponding month's final Blue
Book. Further, the quarterly patterns implicit in the conditional Green
Book forecast were not emphasized in staff briefings or subsequent
FOMC discussions. The short-term focus of the Blue Book meant that

interest rate behavior over the full period of the Green Book forecast
was discussed in only general terms in that document. To some extent,
this general treatment of interest rates can be attributed to the difficulty
of forecasting interest rates by either judgmental or econometric
means.
However, it is the expected consistent patterns of quarterly
interest rates and monetary growth that provide the principal link
between the Green Book and the Blue Book. This is important, because
it is not the case that virtually any timing of monetary growth or
interest rate patterns assumed over the shorter horizon of the Blue
Book will be consistent with the longer-run forecast of financial
activity of the Green Book.

Although the evidence is not conclusive, it seems to indicate
that the alternative financial patterns typically presented in the Blue
Book differ from each other by such a small amount that they do not
generate very different behavior in the nonfinancial sector over a one-year horizon.18/ Nevertheless, the alternatives do imply different quarterly
interest rate patterns and some differences in nonfinancial behavior even
over this short a period. Furthermore, the differential nonfinancial
effects may be enlarged over longer time periods because of the lags in
the system. This information should be made available to the FOMC so
comparisons between alternative short-run timing patterns can be assessed
relative to the likely longer-term financial and nonfinancial market
effects.19/
The more fundamental need for consistency between the Green Book
and the Blue Book has to do with the ultimate goals of monetary policy
decisions. The current Green Book forecast is not the result of an optimization procedure designed to indicate the 'best' strategy for achieving some
stated objectives. Instead, it is a forecast conditioned by the assumption

of relatively smooth money growth and interest rate paths that typically do
not deviate substantially from recent past trend paths. Nevertheless, if the
Green Book forecast does in fact indicate an outcome acceptable to the FOMC
for the coming year and beyond, considering both implicit objectives and
18/ For simulation experiments using the quarterly econometric model that
are related to this issue, see McElhattan (19).
19/ Comparisons between alternative short-run timing patterns might be made by
using more elaborate versions of the decision tree discussed below (p. 48).

constraints, the associated consistent quarterly financial patterns must
be known in order to determine the appropriate shorter-term strategy to
achieve that projected outcome.
Furthermore, there is a 'best' monthly and shorter-term strategy
to follow in order to achieve the quarterly financial patterns associated
with the Green Book forecast of real economic activity and price behavior.
It is true that a fairly wide range of short-term behavior of interest
rates or monetary aggregates is consistent with given quarterly interest
rates or monetary aggregates. But there is only one path of expected
interest rates, or reserves or monetary aggregates, that will yield,
simultaneously, the same expected quarterly patterns as contained in the
quarterly analysis for reserves, monetary aggregates and interest rates.
That is the consistent path that links the Green Book and the Blue Book.
Different sets of constraints placed on the quarterly and monthly process
may mean that the 'best' short-term paths cannot be achieved. But, if this
were a serious problem, it would be a matter of inconsistent constraints in
different periods of analysis or operation rather than a matter of the
nonexistence or lack of importance of consistent short-term and longer-term
paths.
Uncertainty about current information, the structure of the
economy or future exogenous variables does not prevent the determination of
such consistent paths. With uncertainty, the appropriate consistent paths
are conditional expectations subject to revision if subsequent information
indicates these expectations are incorrect. As uncertainty increases, the
confidence with which the expectations are held declines, and this could
be noted in the analysis.

In general, with the passage of time, errors in
the expected consistent paths will occur. As discussed earlier, measurement
of these errors, used in conjunction with other incoming information, can then
form the basis for monitoring progress in achieving more fundamental but
unobservable goals, and may be useful for sequentially revising conditional
expectations and policy. In the absence of operational filtering procedures,
an alternative aid to reaching the necessary decisions is discussed below.
It is particularly important to attempt to determine the consistent short-term and long-term strategies if restrictive interest rate
smoothing constraints are imposed on the monetary policy process.20/ When such
restrictions are imposed, ignoring the short-term consistency requirements
increases the likelihood of not achieving the desired quarterly patterns
or timing and, therefore, of not achieving the ultimate objectives.
Under the revised procedures that are being developed currently,
the practice of preparing pre-Green Book conditional financial forecasts
will be continued, but with greater involvement of staff members principally
responsible for the preparation of the Blue Book. During the process of
preparing the Green Book forecast, the accuracy and consistency of these
initial estimates will be monitored more closely to take into account altered
policy assumptions or new information about economic performance.

By taking a more active role in the preparation of the Green
Book, staff members who prepare the Blue Book are, in effect, making early
estimates of the Blue Book analysis. In turn, they are reviewing in greater
detail than previously the current nonfinancial behavior that forms the
basis for the Green Book conditional forecast. This improved exchange
of information should improve the consistency of the two documents. The end

20/ In addition, Henderson (13) discusses reasons why similar restrictions on
international economic variables are likely to be important. In Kalchbrenner
and Tinsley (15), there is an example of the impact of a restrictive interest
rate constraint on the expected path of monetary growth in the face of a
substantial unanticipated change in economic conditions.

result of this interaction will be to make explicit the quarterly interest
rate and monetary growth patterns underlying the Green Book, as well as
providing a Blue Book analysis of the expected behavior of shorter-term
financial market variables consistent with the Green Book patterns.
The Role of Alternative Conditional Forecasts in FOMC Discussions
In a formal control approach, policy alternatives are explored
in a systematic and computationally efficient manner based on specification
of ultimate objectives, policy constraints, and a model of the economy. The
result is a single conditional forecast of economic behavior over the time
horizon, and a path of policy instrument settings that condition and are
consistent with that forecast. Alternatives need not be considered explicitly,
given the model, the objectives, and constraints, because the optimal control
solution considers alternatives implicitly in determining the 'best' path
for the policy instruments.21/ Even though this technique cannot now be used
in practice for policy purposes, it suggests changes in current procedures
that should improve the usefulness of the analyses presented to the FOMC.
Under present procedures, staff presentations to the FOMC center
around two analyses. The first is a single detailed consensus conditional
forecast of economic behavior over a one to one and one-half year horizon in
the Green Book (generally one year). The second has been a Blue Book
presentation that is based on an analysis of the likely effects over the next
six months of maintaining current money market conditions, as well as an
assessment of the effects over the same horizon of varying reserve and money
market conditions by relatively small amounts on either side of existing
21/ Strictly speaking, this statement is true only if the loss function is
known. In the absence of knowledge about the characteristics of the loss
function, it is prudent to test the sensitivity of the solution to changes in
the loss function specification and other assumptions as noted in section II.

conditions. The discussion of each of the alternatives has usually been in
terms of achieving the FOMC's stated one-year monetary growth objectives.
This procedure has been varied somewhat in recent staff presentations.
In recent months, the Blue Book discussion has been extended to a
one-year horizon in a general way.

During chart shows, an alternative conditional forecast based on the Board's quarterly econometric model adjusted to
incorporate judgmental information has been discussed. In addition, alternative econometric model simulations or conditional forecasts have always been
prepared, but they have been discussed only when the issue of alternatives
was raised during FOMC deliberations. No attempt has been made in the past to
provide Blue Book analyses consistent with any of the alternatives.
Quite obviously, there are distinct limitations to expanding the
number of alternatives presented to the FOMC because of time constraints
on both the staff and the Committee. Nevertheless, it would be possible to

explore the policy options of the FOMC more fully by considering a greater
number of conditional forecasts on a regular basis. This would be a working
approximation to a formal control approach. Initial explorations of the
feasibility of providing several alternatives to the FOMC indicate that it
could be accomplished, and that the use of some variant of a 'decision tree'
appears to offer the most promising avenue.

The 'Decision Tree'22/

The amount of detail currently included in the consensus Green
Book forecast precludes presentation of a number of alternatives in the same
detail. In order to make it feasible to provide a number of alternatives,
a 'decision tree' capability has been programmed for the quarterly model.
The program permits the division of a given forecast or simulation horizon
into as many periods as desired. Any combination of monetary growth rates
(or some other measure of monetary policy) can be chosen for the various
periods. For example, in the attached graphical representation of a
decision tree, 12 quarters are divided into three one-year periods and two
alternative money growth rates have been considered for each of the three
periods. As shown, this results in 8 possible money growth strategies.

More than three periods or two monetary growth rates could have been used.
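The branching arithmetic described above can be reproduced by enumerating every combination of per-period alternatives. The sketch below is purely illustrative; the particular growth rates are hypothetical.

```python
from itertools import product

# Two alternative Ml growth rates (percent) for each of three one-year
# periods, as in the attached decision tree; the values are hypothetical.
alternatives_per_period = [(4.0, 6.0), (4.0, 6.0), (4.0, 6.0)]

# Each strategy is one branch of the tree: a growth rate for each period.
strategies = list(product(*alternatives_per_period))

print(len(strategies))  # 2 ** 3 = 8 possible money growth strategies
```

With more periods, or more alternatives per period, the count grows multiplicatively, which is why the presentation limits the variables displayed at each stage.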
In this decision tree representation, the number of variables
shown at each stage has been limited to four. More variables could be
included and they could be shown in terms of differences from a detailed
central conditional forecast. By choosing the principal variables of interest,
the complexity problems can be made manageable.

Thus, the decision tree format provides a means of presenting a large number of alternatives economically.
It also provides a convenient way to present the expected outcomes of varying
rates of growth of the monetary aggregates rather than using different constant growth rate assumptions for each alternative as is now done. This
permits consideration of the behavior of ultimate variables under a fairly
wide number of alternative policy mixes without the necessity of specifying
an explicit loss function.
22/ The suggestion that 'decision trees' might be useful arose from joint discussions between members of the Special Studies Section and the Econometric and
Computer Applications Section at the Board. The capability to use this technique
was developed by J. Enzler and D. Battenberg.

[Chart: Projected Conditions, Alternative Ml Growth Rates. A decision tree beginning from initial conditions in 1975 II and branching on alternative Ml growth rates over three successive one-year periods (through 1976 II, 1977 II, and 1978 II), with projected conditions shown at each stage. P = percent increase over 4 quarters earlier in non-farm compensation per man-hour; y = level of real GNP; R = Treasury bill rate.]


Decision tree analyses might also be used to evaluate the
implications of uncertain exogenous shocks to the economy such as the
possibility of further OPEC oil price increases. This could be accomplished by
re-running the decision tree output after making allowance for the likely
impacts of some contingency, and comparing the results with an analysis
based on the assumption the contingency will not occur.
A truncation of the amount of detail presented from full simulations also provides a means to deal with situations in which last-minute
data revisions or new information necessitate changes in the assumed patterns of
monetary growth after the Green Book forecasting process has been completed.
Similarly, the decision tree might simplify the problem of making explicit the quarterly interest rate and
monetary growth patterns implied by Blue Book forecasts that are conditional
on the same average money growth rate as assumed in the Green Book, but with
different patterns over the year.
In the Subcommittee report to the FOMC, it is recommended that
the staff be asked to present at least three alternative longer-term and
shorter-term forecasts at each quarterly chart show, or more frequently
under special circumstances. It is further recommended that the longer-term
forecasts be prepared over a policy horizon long enough to indicate the
differences in effects of the alternative policy assumptions for major
variables. Finally, the report recommends that the staff develop and
present the ranges of probable forecast error associated with the presentations.

The Subcommittee concluded that only a limited number of
variables should be presented for the alternatives as in the preceding
sample decision tree, and that the results should be expressed in terms
of differences from a central detailed conditional forecast. If these
alternatives are confined to smooth policy paths, the decision tree will
only provide a minimal amount of information about expected dynamic policy
multipliers. It would be preferable to explore alternatives that are
not restricted to smooth policy instrument paths. A wide variety of
alternatives could be provided currently using a more elaborate decision
tree than the example shown, but the resulting complexity presents
problems, particularly in the context of Committee deliberations. It is
assumed that the current detailed Green Book format with accompanying
text will be continued. These alternatives are viewed as an addition to
current material presented to the FOMC.
The suggested changes discussed in this section are by no means
derived solely from optimal control analysis, but they are consistent
with that approach. The suggested changes are also consistent with both
longstanding and recent developments in FOMC procedures.

The FOMC has lengthened its policy horizon in recent years to take into account lagged
relationships; the relationships between longer-term and shorter-term
strategies have been discussed more frequently; and, it has always been a
feature of FOMC procedures to consider the behavior of a large number of
variables. The suggested changes are designed to enable the Committee to
analyze these various aspects of the decision process more effectively.