
FEDERAL RESERVE BANK OF CLEVELAND
ECONOMIC REVIEW
1987 QUARTER 4

Learning, Rationality, the Stability of
Equilibrium and Macroeconomics. The

issue of how agents learn to form rational
expectations has received increasing atten­
tion lately. The approach taken in many pa­
pers treats model stability as a problem in
learning. In reviewing this literature, the
author examines carefully the assumptions
about individual behavior required for learn­
ing to form rational expectations. The mean­
ing of rationality in a macroeconomy charac­
terized by highly decentralized markets is
also discussed.

Economic Review is published quarterly by the Research Department of the Federal Reserve Bank of Cleveland. Copies of the issues listed here are available through our Public Information Department, 216/579-2047.

Editor: William G. Murmann
Assistant Editor: Robin Ratliff
Design: Michael Galka
Typesetting: Liz Hanna

Opinions stated in Economic Review are those of the authors and not necessarily those of the Federal Reserve Bank of Cleveland or of the Board of Governors of the Federal Reserve System.

Airline Hubs: A Study of Determining Factors and Effects. One of the most widely noted and least studied changes in the airline industry has been the switch to hub-and-spoke networks. While there have been
some analyses that explain the advantages and
effects of hub-and-spoke networks, there have
been no attempts to study empirically the de­
terminants that influence where airlines
choose to establish their hubs. This paper pro­
vides insights into the future evolution of the
airline industry by identifying the quantitative
effects of these determinants and examining
the effect of hub status on airport traffic.

A Comparison of Risk-Based Capital and Risk-Based Deposit Insurance.

The author develops a bank-risk model, based
on six FDIC variables for predicting bank
failure or loss, and uses it to compare alterna­
tive proposals for controlling the level of
bank risk. He finds that both risk-based capi­
tal and risk-based insurance systems would
affect banks’ behavior. The impact of the two
systems, however, would most likely not be
identical, but implementation of either system
would probably lead to significant progress
in the effort to control bank risk.

Material may be reprinted provided that the source is credited. Please send copies of reprinted materials to the editor.

ISSN 0013-0281

Learning, Rationality, the
Stability of Equilibrium and
Macroeconomics
by John B. Carlson
John B. Carlson is an economist at
the Federal Reserve Bank of
Cleveland. The author would like to
acknowledge helpful criticisms from
Richard Kopcke, Mark Sniderman,
E.J. Stevens, Alan Stockman, Owen
Humpage, Steve Strongin, William
T. Gavin, Randall W. Eberts, and
Charles Carlstrom on earlier drafts.

Introduction
It is sometimes argued that the strength in mod­
els that assume rational expectations is the weak­
ness of their competitors. For example, McCallum
(1980) says: “Each alternative expectational
hypothesis, that is, explicitly or implicitly posits
the existence of some particular pattern of system­
atic expectational error. This implication is unat­
tractive, however, because expectational errors
are costly. Thus, purposeful agents have incen­
tives to weed out all systematic components.”
This alluring intuition, however,
glosses over a very difficult problem that remains
unsolved in general: How do agents acquire the
information and understanding sufficient to enable
them to “weed out” systematic error? The acquisi-
tion of information is costly and no one actually
believes anyone knows the true underlying model
of the economy. Discovering systematic error is
one thing; knowing what to do about it is
another. The central issue is one of learning.
The problem of learning in models
that assume rational expectations has received
increasing attention lately.1 The approach taken
in many papers treats stability of equilibrium as a
problem in learning. That is, the issue of conver­
gence to rational expectations equilibrium (REE)
is presumed tantamount to the question of how
agents acquire sufficient information to weed out systematic expectational error.

1. For a concise review of these models, see Blume, Bray, and Easley (1982).

While several
modeling approaches have found such “stability”
under different and reasonably plausible assump­
tions, there are no general theorems. More
importantly, however, even the limited results
found in these models presume continuous
market clearing. Thus, the meaning of stability is
quite restricted. The fundamental issue— how
individual behavior will lead to the necessary
price adjustment— is never explicitly modeled.
Neglect of this issue is not new; it has long hin­
dered progress in general equilibrium theory.
The purpose of this paper is to ex­
amine carefully the assumptions about individual
behavior required for stability in models where
agents learn to form rational expectations. Section
one provides a restatement of the importance of
stability analysis for deriving meaningful results
from equilibrium models, and introduces the
idea of developing learning models to describe
the transition process to systemic equilibrium.
To illustrate the correspondence
between learning processes and stability of REE,
two examples are presented. The first, presented
in section two, presumes rational agents know
the structure (that is, the functional form) of the
true economic model, but not the parameters.
The example presented in section three pre­
sumes agents don’t even know the model struc­
ture while they are learning. The precise meaning
of stability in both models is discussed in section
four. A distinction is made between expectational
equilibrium and equilibrium of the aggregative
economy. In section five, we discuss the difficul­
ties facing the researcher who seeks to model

learning in an aggregative economy. The issues
are developed in a general model employing a
notion of equilibrium proposed by Frank Hahn
(1973). Section six offers concluding remarks.

I. Importance of Stability
Analysis of positions and characteristics of equilib­
rium is by far the most widely accepted mode of
economic analysis. Typically, such equilibria are
derived from (or presumed to be) the solution of
individual optimization problems. A key hypothe­
sis that begets coordination of individual plans
(aggregative consistency) is that certain
variables— usually prices—take on values that
make all individual plans mutually consistent.
Under these circumstances, no individual has any
incentive for further change. Economists rarely
specify a behavioral process that could account
for how variables, like prices, adjust to recoordi­
nate individual plans when conditions change.
Rather “changes” in equilibrium outcomes are
generally developed in comparative static anal­
ysis, which compares equilibria corresponding to
different values of underlying parameters.
The use of comparative statics in economics was first explained in rigorous detail by Samuelson (1947). He recognized, however, that to obtain definite, operationally meaningful theorems in comparative statics, one has to specify a hypothesis about the dynamical properties that will lead to equilibrium values. The ‘duality’ between the problem of stability and the problem of deriving fruitful theorems in comparative statics is what Samuelson called the Correspondence Principle.

The importance of dynamical foundations has recently been restated by Fisher (1983). He argues that if general equilibrium models are to be of any use, then we must have some confidence that the system is stable, that is, that it must converge to an equilibrium, and that such convergence to equilibrium must take place relatively quickly:

If the predictions of comparative statics are to be interesting in a world in which conditions change, convergence to equilibrium must be sufficiently rapid that the system, reacting to a given parameter shift, gets close to the predicted new equilibrium before parameters shift once more. If this is not the case, and, a fortiori, if the system is unstable so that convergence never takes place, then what will matter will be the ‘transient’ behavior of the system as it reacts to disequilibrium. Of course, it will then be a misnomer to call such behavior ‘transient’ for it will never disappear. (p. 3)

Fisher goes on to emphasize his point in the context of models assuming rational expectations:

In such models, analysis generally proceeds by finding positions of rational expectations equilibrium if they exist. At all other points, agents in the model will have arbitrage opportunities; one or another group will be able systematically to improve its position; .... The fact that arbitrage will drive the system away from points that are not rational expectations equilibria does not mean that arbitrage will force the system to converge to points that are rational expectations equilibria. The latter proposition is one of stability and it requires a separate proof. Without such a proof—and, indeed without a proof that such convergence is rapid—there is no foundation for the practice of analyzing only equilibrium points of a system which may spend most or all of its time far from such points and which has little or no tendency to approach them. (pp. 3-4)

Fisher argues that analysis of this problem requires a full-dress model of disequilibrium—one that is based on explicit behavior of optimizing agents.2 A general model would accommodate trading, consumption and production while the model is out of equilibrium. That is, such an approach would provide a theoretically based alternative to the Walrasian auctioneer. Arbitrage would follow from individual rationality. Unfortunately, practitioners of this approach have not advanced the subject enough to address the stability of model-consistent (that is, of rational) expectations.

2. Fisher (1983) does make a contribution in this direction, but only under the assumption of perfect foresight. His monograph illustrates the burden that lies ahead of any serious theoretician in this matter.
The stability of REE has been addressed, extensively, however, on a less fundamental level. This approach presumes that markets clear and that REE is the true underlying long-run equilibrium. It examines different processes by which agents might acquire (learn) the information necessary for an expectations equilibrium consistent with REE. An important paper by Cyert and DeGroot (1974) defends the use of models of the learning process:

The attempt to develop process models immediately opens us to the criticism of developing ad hoc models. We acknowledge that there may be a large number of models that could potentially describe the process to equilibrium. Our position is that, while the models have a certain
amount of face validity, our major contri­
bution is the introduction of an explicit
learning process described in Bayesian
terms. The notion of developing models to
describe the transition process toward
equilibrium of a system disturbed by some
random shocks may be questioned by
some economists. The development of
comparative statics and the neglect of
dynamic analysis is in part a reflection of
such attitudes in the profession. Yet with­
out well-developed process models, the
concept of rational expectations is essen­
tially a black box. (p. 522)
Thus, models of the learning pro­
cess are essentially provisional tools that enable
us to interpret REE in a more realistic way. We
may think of the development of such models as
an attempt to justify the use of the rational expec­
tations hypothesis.
These models, at the very least,
allow us to ask if it is conceivable that agents
could “learn their way” to equilibrium in the
model at hand. This problem is not simple.
Because agents are presumed to base their deci­
sions on their own estimates of a model’s
parameters, their actions cannot be considered
exogenous to parameter estimation. If estimates
of parameters change, agents adjust their behav­
ior accordingly. Moreover, agent actions generate
the data on which the estimates of parameters are
made, making learning an endogenous process.
To correctly specify the model, agents would
need to take the endogeneity into account. Con­
ventional econometric techniques are typically
not well-suited for this task.
The question of convergence to REE
has been examined in two frameworks. The first
assumes that agents know the functional form of
the model or, at least, the appropriate specifica­
tion of the likelihood function underlying the
generation of the data. In this framework, agents
are presumed to learn about the value of param­
eters either through classical statistical methods,
repeated use of Bayes’ Theorem, or some other
statistical method. The second framework does
not require that agents know the model, although
some of this work assumes that agents base their
expectations on the basis of one model chosen
from a set that includes the true model.

II. Learning When Agents Know the Model
To illustrate a process of learning and its connec­
tion to the stability of REE, we first examine one
approach taken by Cyert and DeGroot (1974).
They proposed to design models that describe
the process by which rational expectations may
develop within a market. They build on a version

of the cobweb model used by Muth (1961) to
propose the concept of rational expectations.
Muth posited a partial equilibrium model for a
homogeneous good with a production lag. Using
the notation of Bray and Savin (1986), the market
equations have the following form in any period t:

(1)  d_t = m_1 - m_2 p_t               (demand)
(2)  s_t = m_0 + m_3 p_t^e + v_{2t}    (supply)
(3)  d_t = s_t                         (equilibrium),

where m_0, m_1, m_2, and m_3 are fixed parameter values; p_t is the market-clearing price of the good; p_t^e is the market-anticipated price before trade takes place; and v_{2t} is an exogenous shock to supply. It is assumed that all units demanded are consumed in period t and that firms make production decisions before trade takes place. Thus, the deterministic component of supply is fixed in period t. The assumption of market clearing yields:

(4)  p_t = M - a p_t^e + u_t,

where M = (m_1 - m_0) m_2^{-1}, a = m_3 m_2^{-1}, and u_t = -m_2^{-1} v_{2t}.
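As a quick numerical check on the reduced form (4), the fragment below plugs in illustrative parameter values (my own assumptions, not values taken from the paper) and computes M, a, and the anticipated price that is consistent with (4) when the expectational error is zero.

# Numerical sketch of (4) under assumed parameter values.
# With no expectational error, p_e solves p_e = M - a*p_e, that is, p_e = M/(1 + a).
m0, m1, m2, m3 = 1.0, 4.0, 2.0, 1.0    # illustrative values only
M = (m1 - m0) / m2                      # M = 1.5
a = m3 / m2                             # a = 0.5
p_e = M / (1.0 + a)                     # anticipated price consistent with (4): 1.0
print(M, a, p_e)

At these values, an anticipated price of 1 makes the expected market-clearing price from (4) equal to 1 as well, which is the fixed point exploited in equation (5) below.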

Under the usual assumption of rational expectations, the market-anticipated price equals the objective mathematical expectation for price given the model and as conditioned on the data available when the expectation was formed.3 That is, p_t^e = E_{t-1}(p_t). Cyert and DeGroot propose a similar basis for determining p_t^e. They assume that expectations are consistent, meaning that the firms’ expectations are based on the mechanism implied by the model. The essence of this distinction is that while agents are presumed to know the correct likelihood functions, they are not required to know the parameter values. Cyert and DeGroot derive an explicit expression for market-anticipated price by taking expectations of both sides of (4), substituting p_t^e for E_{t-1}(p_t), and solving for p_t^e:

(5)  p_t^e = [E_{t-1}(M) - E_{t-1}(u_t)] / [1 + E_{t-1}(a)].
Note that since the parameter values are unknown, the market-anticipated price is expressed in terms of expected values of the parameters, not true values. Agents (firms) learn to form rational expectations if, with additional data, the expected values of the parameters converge to their true values. Note also that market-anticipated price will differ from actual market price both because of expectational error and the supply shock.

3. It is perhaps more accurate to call such expectations model-consistent instead of “rational.” (See Simon 1978.)
The economic process evolves as follows: In each period, the firms form consistent expectations of the price in the next period from (5) based on expected parameter values (priors). The actual price is then generated according to the model incorporating the consistent expectations, that is, price is given by (4). The observed values of actual price contain new information that leads firms to change their expectations of the values of the parameters and, hence, to change their expectations of the price in the following period. The actual price in the next period is again generated by the model and the process continues in this manner.

Cyert and DeGroot verify that such a process can, in fact, converge to REE when the slope coefficients m_2 and m_3 are known, even if the intercepts m_0 and m_1 are not. In this example, the authors assume that the random (supply intercept) error has a normal distribution with mean 0 and known precision (inverse of variance). Moreover, they posit a posterior distribution for M at the end of period t-1 that is normal with finite mean and precision. Finally, they show that a Bayesian updating of parameter values does converge to the true value of M.
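The loop below is a minimal sketch of this kind of process under assumptions of my own choosing: the slope ratio a is known, only the reduced-form intercept M is unknown, the supply shock is normal with known variance, and the prior on M is normal. It is meant only to illustrate how consistent expectations and Bayesian updating interact, not to reproduce Cyert and DeGroot's derivations.

# Sketch of consistent expectations with Bayesian updating of M (assumed setup).
import numpy as np

rng = np.random.default_rng(0)
M_true, a, sigma = 1.5, 0.5, 0.3    # illustrative "true" values, not from the paper
mu, tau = 0.0, 1.0                  # normal prior on M: mean mu, precision tau

for t in range(200):
    p_e = mu / (1.0 + a)            # expectation from (5) with E(u_t) = 0 and a known
    p = M_true - a * p_e + rng.normal(0.0, sigma)   # actual price generated by (4)
    y = p + a * p_e                 # p_t + a*p_e equals M + u_t: a noisy signal of M
    tau, mu = tau + 1.0 / sigma**2, (tau * mu + y / sigma**2) / (tau + 1.0 / sigma**2)

print(mu, M_true)                   # the posterior mean settles near the true M

As the posterior mean approaches M, the anticipated price approaches the rational expectations value M/(1 + a).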
The convergence result was
encouraging. It showed that one need not
assume all knowledge is innate, but that, from a
Bayesian point of view, the relationship between
expectations and other variables in the model
arises naturally when economic agents form
expectations in a manner internally consistent
with the mechanism generating the data. In sim­
ple terms, this means that agents can learn
parameter values even though their expectations
affect outcomes of the model. An essential
assumption is that all agents can correctly specify
likelihood functions of unknown parameters, that
is, that they “know” the structure of the model.
An implicit assumption underlying
this and all other models obtaining convergence
when agents know the model is that the solution
concept being employed is Nash equilibria. This
means that each agent has no reason to alter his
specification of the likelihood function, given his
own specification and those of all other agents.
Thus, the approach assumes not only that agents
know the model, but also that agents know that
other agents know the model. The implications of
this are discussed by Blume, Bray, and Easley (1982):

The concept of a Nash equilibrium in
learning strategies has much to commend
it. Any other learning process is to some
degree ad hoc; if some or all of the agents
are learning by using mis-specified mod­
els, at some stage they should realize this

and change the specification. Nash equilib­
ria in learning strategies are rational expec­
tations equilibria in which agents take into
account their uncertainty about features of
the world which they are assumed to know
in standard models of rational expectations
equilibria. However, Nash equilibria in
learning strategies are liable to be consid­
erably more informationally demanding
than conventional rational expectations
equilibria, as agents require extensive
knowledge about the structure and dynam­
ics of the model that prevails while they
learn. There may also be problems with
the existence of equilibrium. Thus, while
this approach yields convergence to a con­
ventional rational expectations equilib­
rium, its extreme informational demands
make it an unsatisfactory answer to the
initial question of how agents learn how to
form rational expectations. (p. 315)
In sum, employing the Nash solu­
tion concept begs the question as to how agents
learn the structural form of the underlying model.
Moreover, it provides no economic justification for
why any agent should believe that all other agents
will know what forecast methods other agents
use. What incentives are there for such behavior?

III. Learning When Agents Don’t Know the Model
When agents know the structural form of the
economy, it is a relatively straightforward task to
identify informational requirements sufficient to
obtain convergence to REE. As we have seen,
however, these requirements are quite demand­
ing. They presume that agents have extensive
knowledge about what other agents believe as
they all learn about the parameters. It is some­
what interesting, however, that in situations
where agents don’t know the model, convergence
can occur under somewhat weaker assumptions
about the learning process. These results, how­
ever, are model specific. Other, equally reason­
able, approaches lead to instability of REE.
Achieving convergence depends not only on the
nature of learning but on the structural and sto­
chastic parameters of the underlying model.
When agents don’t know the
model, the problem of learning has been
addressed in two distinct ways. The first approach
provides an explicit model that allows agents to
modify their forecasting rules in light of observa­
ble outcomes (see Blume and Easley [1982]).
Typically, they choose among a set of models that
includes the true one. Convergence occurs when all agents eventually adopt the true model. In this
approach, we find that the results are mixed. In
some models, rational expectations equilibria are
locally stable but not unique.
The second approach examines the
possibility of convergence when agents never
switch models, despite the fact that they may
have misspecified the model while they are learn­
ing. Essentially, this approach considers whether
“irrational” learning can lead to rational expecta­
tions equilibrium.
An interesting model by Bray and Savin (1986) examines the second kind of learning. An appealing feature of this model is that agents learn using conventional techniques—such as by estimating the parameters of a standard linear-regression model. While this is the correct econometric specification for their postulated model in equilibrium, the econometric model is misspecified while people are learning. Moreover, Bray and Savin use simulations to examine the rate at which convergence takes place and to assess the possibility that agents discover that their estimated model is misspecified.
Following Townsend (1978), they extend the cobweb model to include stochastic demand, to allow for exogenous shocks to aggregate supply, and to accommodate diversity of firm expectations and decisions. All firms are assumed to face the same technology as defined by a quadratic cost function

c_{it} = q_{it}^2 / (2 m_3),

where m_3 > 0 and q_{it} is the output of firm i at date t. Under the profit-maximizing postulate, firm i chooses an output level equal to m_3 p_{it}^e, where p_{it}^e is the mean of its prior on market-anticipated price.4
The aggregate of these expectations over all firms is denoted as p_t^e. Their model is thus given by:

(6)  d_t = m_1 - m_2 p_t + v_{1t}          (demand)
(7)  s_t = m_3 p_t^e + x_t' m_4 + v_{2t}   (supply)
(8)  d_t = s_t                             (equilibrium),

where x_t' m_4 + v_{2t} is an exogenous supply shock and x_t' is observable. Market clearing implies that:

(9)  p_t = x_t' m + a p_t^e + u_t,

where x_t' is redefined to include 1 as its first component, m = [m_1 : -m_4] m_2^{-1}, a = -m_3 m_2^{-1}, and u_t = (v_{1t} - v_{2t}) m_2^{-1}.

4. Bray and Savin consider a continuum of firms producing a homogeneous good. The set of firms is the unit interval [0,1] indexed by i. Thus, market-anticipated price is a Lebesgue integral. It is in that sense an average expected price.

If agents knew both the model structure and the values of the parameters, the REE price forecast would be:

(10)  p_{it}^e = x_t' m (1 - a)^{-1}

for all i, assuming a ≠ 1. Together, (9) and (10) imply that the REE price, for each t, is:

(11)  p_t = x_t' m (1 - a)^{-1} + u_t.

The linear relationship between actual price and exogenous-supply influences applies only in equilibrium when agents all share the same expectations. This simple relationship does not hold when agents are learning the values of the parameters. To illustrate this, Bray and Savin assume agents maintain the hypothesis that

(12)  p_t = x_t' b + u_t

satisfies the assumptions of the standard linear model, and estimate b accordingly. They consider the consequences that agents may be classical or Bayesian statisticians. If all agents (firms) are Bayesian statisticians who assume u_t is i.i.d. N(0, σ²), and if firm i's initial prior on b is b_{i0} with prior precision S_0/σ², firm i may obtain revised priors on b after observing (x_1, p_1), ..., (x_{t-1}, p_{t-1}), which will have mean b_{i,t-1} and precision S_{t-1}/σ², where

(13)  b_{i,t-1} = S_{t-1}^{-1} (S_0 b_{i0} + Σ_{j=1}^{t-1} x_j p_j)   and   S_{t-1} = S_0 + Σ_{j=1}^{t-1} x_j x_j'.

Note that the classical statistician is essentially a Bayesian statistician whose initial prior on b is diffuse (S_0 = 0).

With this revised prior, agent i's forecast of p_t is p_{it}^e = x_t' b_{i,t-1}. The aggregate of market-anticipated price is p_t^e = x_t' b_{t-1}, where b_{t-1} is an aggregate of b_{i,t-1} over all firms. Substituting this in (9) gives:

(14)  p_t = x_t' (m + a b_{t-1}) + u_t.

Equation (14) generates the actual observed price given both the market mechanism and the way agents form expectations. Note that the coefficient of x_t, (m + a b_{t-1}), varies with time. Thus, agents are incorrectly assuming that price is generated by a standard linear model with a constant coefficient. The model is incorrect because it fails to take account of the effects of learning on the parameter values. If agents knew what we know, they would not use linear regressions to form expectations.
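The sketch below is a stripped-down simulation in the spirit of this setup, with simplifying assumptions of my own: all firms share one estimate (so b_{it} = b_t), the prior is nearly diffuse, and the parameter values are illustrative. It shows the self-referential loop in which prices are generated by (14) while agents update b by the recursion in (13).

# Sketch of least-squares learning with feedback, as in (13)-(14) (assumed values).
import numpy as np

rng = np.random.default_rng(1)
m = np.array([1.0, 0.5])            # reduced-form coefficients (illustrative)
a, sigma = -0.6, 0.2                # a = -m3/m2; values with a < 1 are the stable case
b = np.zeros(2)                     # common estimate held by all firms
S = 1e-3 * np.eye(2)                # S_0: nearly diffuse prior precision

for t in range(5000):
    x = np.array([1.0, rng.normal()])                 # constant plus observed supply shifter
    p = x @ m + a * (x @ b) + rng.normal(0.0, sigma)  # actual price from (14)
    S_old = S
    S = S + np.outer(x, x)                            # precision update from (13)
    b = np.linalg.solve(S, S_old @ b + x * p)         # mean update from (13)

print(b, m / (1.0 - a))             # the estimate heads toward the REE value m(1 - a)^(-1)

With a value of a greater than one, the same loop typically fails to settle down, which is the unstable case referred to below.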
Despite the fact that agents may misspecify the model, Bray and Savin are able to show that: (1) the difference between the individual estimates b_{it} and the average estimate b_t tends to zero with probability one as t tends to infinity; and (2) the average estimate b_t cannot converge to any value other than the REE value m(1 - a)^{-1}. The intuition they offer is that if b_t tends to b for large t, the actual price is p_t = x_t'(m + ba) + u_t. Since the data generation process closely approximates the standard linear model with coefficient m + ba, the estimate b_t tends to m + ba, which is impossible unless b = m(1 - a)^{-1}.

These results enable Bray and Savin to obtain the restrictions on the model's parameters that are necessary and sufficient for existence, uniqueness, and ‘stability’ of the REE. The conditions are precisely the same conditions for the existence, uniqueness, and tatonnement stability of a market in which supply and demand are simultaneous, that is, a Walrasian model in which supply at time t is based on actual price at t as opposed to market-anticipated price.
The intuition behind the conver­
gence process of the Bray-Savin model is straight­
forward. Suppose suppliers’ beliefs are such that,
in the aggregate, they underestimate price cor­
responding to a given set of exogenous influences.
This would lead them to supply less than they
otherwise would have done. Consequently, the
auction would assure that the market-clearing
price would be above the market-anticipated
price. Taking account of the newly observed
price, suppliers would, on average, raise their
estimate of price corresponding to the same set
of exogenous influences. Provided they don’t
overreact, learning would bring them closer to
REE in each successive period.
An important feature of the Bray
Savin approach is that the specified learning pro­
cess is reasonably simple and plausible despite
the fact that the underlying mechanism is much
more complicated. A potential problem, however,
is that agents might discover that they have incor­
rectly specified the model. Since the estimated
model is not the true one while they are learning,
the data may confirm the misspecification. On the
other hand, if convergence is sufficiently fast,
their test may fail to spot the misspecification.
To examine this possibility, Bray
and Savin use computer simulations. The simula­
tions suggest that the rate of convergence can be
slow if the ratio of the slopes of demand and supply is near the boundary of the stability region, especially if the initial prior mean is incorrect for REE and the prior precision is high.
Thus, the fact that equilibrium may be stable may
not mean much. Equilibrium behavior may not
provide a reasonable enough approximation of
the actual behavior to be meaningful.
Bray and Savin also use the simula­
tions to examine the likelihood that agents will dis­
cover that their estimated model is misspecified.
Agents are presumed to examine the Durbin-Watson statistic as a diagnostic check for model
misspecification. The results suggest that if REE is
stable, and if the estimates converge rapidly,
agents are unlikely to identify the misspecification.
Thus, it is reasonable to expect that agents could
persist using simple linear (misspecified!) meth­
ods and eventually learn all they need to know to
form expectations in a manner consistent with REE.
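For concreteness, the fragment below shows the kind of diagnostic being described, applied to made-up data of my own rather than to the Bray-Savin simulations: fit the agents' linear regression and compute the Durbin-Watson statistic of the residuals. Values near 2 give the agents little reason to suspect serial correlation, and hence little reason to suspect misspecification.

# Sketch of a Durbin-Watson check on regression residuals (illustrative data).
import numpy as np

def durbin_watson(resid):
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.normal(size=200)])       # the agents' regressors
p = X @ np.array([1.0, 0.8]) + rng.normal(0.0, 0.1, size=200)   # stand-in price series
b_hat, *_ = np.linalg.lstsq(X, p, rcond=None)
print(durbin_watson(p - X @ b_hat))   # roughly 2 here, since these residuals are white noise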

IV. The Meaning of Stability
The major contribution of the learning models
discussed above is that they provide an explicit
framework for describing a transition process
toward equilibrium of a system disturbed by
some random shocks.5 While they successfully
demonstrate how rational expectations may
develop in a perfectly competitive market, learn­
ing models do not provide the kind of underpin­
nings sought by general equilibrium theorists in
stability analysis. They focus only on the devel­
opment of expectational equilibrium. No attempt
is made to specify the dynamics of price forma­
tion. Rather, the framework implicitly assumes an
auction process not substantively different from
that required to achieve standard competitive
(Walrasian) equilibrium.
Thus, these models beg the central
question that continues to plague general equilib­
rium theorists: how to derive behavioral founda­
tions for price adjustment. This is not a criticism
specific to the models at hand, but is a fundamen­
tal problem with all equilibrium models, including
fixed-price models. To appreciate the problem, it
is useful to review briefly the theoretical founda­
tions of the stability of competitive equilibrium.
Stability analysis of competitive
equilibrium builds on the earliest notions about
price adjustment, which were imbedded in the
“law of supply and demand.” It essentially holds
that in competitive markets, prices will rise when
there is excess demand and fall when there is excess supply.

5. It is the view of Cyert and DeGroot that such a process has to be developed if the rational expectations hypothesis is to be a scientific truth rather than a religious belief.

This argument has the familiar
dynamic formulation first proposed by Samuelson
in 1941 (see 1947):
(15)  dp/dt = h(D - S),  h(0) = 0,  h' > 0, and
(16)  D = D(p, a),  S = S(p),

where D and S are quantities demanded and supplied for a homogeneous good; p is the market price of that good, and a is an exogenous shift parameter. The properties of the static demand and supply functions are derived under the standard hypothesis that households and firms maximize familiar objective functions. Formal proofs for the stability of competitive markets essentially derive sufficient conditions for the dynamic relations expressed by (15) to yield time paths of prices that approach their equilibrium values from arbitrary points.6 Unfortunately, global stability is obtained only under very severe restrictions on excess demand functions, the most notable being the assumption that all goods be gross substitutes.

6. See Arrow and Hurwicz (1958) and Arrow, Hurwicz, and Block (1959).
While the assumption implicit in (15) seems plausible, it is beset by some important conceptual difficulties. The first problem is that (15) has never been deduced as the maximizing response of economic agents to changing data. Sonnenschein (1973) has shown that the standard assumptions about individual behavior do not imply any restrictions on excess demand functions beyond homogeneity of degree zero and Walras' Law—conditions not sufficient for stability. Thus, adjustment to Walrasian equilibrium lacks the rigorous basis that is accorded to the properties of static supply and demand functions. Moreover, it is not clear who changes prices when the system is not in equilibrium. In competitive equilibrium, sellers and buyers are typically treated as price takers. Therefore, it is presumed that there is some implicit market manager who sets price.

The idea of a market manager whose behavioral rule for price adjustment is given by (15) was, of course, the ingenious answer given by Walras. This approach is tantamount to an assumption that all consumers and suppliers gather in one place. The market manager quotes a set of prices for each commodity. Then each trader writes on a piece of paper (a ticket) the amounts of each of the commodities he wishes to buy or sell at the given set of prices. If there is excess demand for commodity i, the manager raises the price of i; if there is an excess supply for commodity j, he lowers the price of j. Each time a new set of prices is quoted, each trader submits a revised ticket. The process continues until excess demand is zero, that is, equilibrium price is determined. Until then no trade or production takes place.7 Essentially, this is a timeless description of a process by which market clearing can be achieved and thus fails to help in understanding the dynamics of price.

7. The requirement that no trade take place before equilibrium is determined is essential if such a process is to converge to a unique equilibrium. Fisher (1983) shows how trading at “false” prices affects endowments of agents and, hence, the ultimate outcome of the process. Thus equilibrium would depend not only on initial endowments, but also on the process that achieves equilibrium. Such a property is sometimes called hysteresis.

The only difference between this Walrasian situation and the one implied by the Bray-Savin model is that, under the latter, suppliers commit to production levels prior to trade. Suppliers therefore must base their decisions for output levels on the anticipated price for their good. While these anticipated prices may initially differ when suppliers use Bayesian learning models, the observed market-clearing price at any point in time must be the same for all suppliers. Because the model used by suppliers to determine anticipated price specifies the single market-clearing price as the dependent variable, a tatonnement process is necessary to generate data that is essential for the process to be operational. Clearly, the auction process plays an essential role in consolidating information that is necessary for convergence.

A key distinction between the Bray-Savin process and a pure Walrasian process involves a restriction on what suppliers can learn about the aggregate supply function. In a standard Walrasian auction, suppliers are free to adjust the quantities they would produce for all the prices quoted. In this way, the auction process also synthesizes for all agents all the relevant properties about both aggregate supply and demand. In the Bray-Savin model, on the other hand, suppliers offer the same quantity for all prices quoted. The auction essentially determines the point on the demand curve that corresponds to the predetermined level of output. That is, the auction synthesizes only responses of consumers to the array of price quotes. Suppliers learn from the (temporary) equilibrium price about whether they under- or overestimated prices, but they do not know how well other suppliers estimated prices and, consequently, how aggregate supply might adjust to different prices. This information is revealed only through a succession of auction outcomes.

Notwithstanding information lags, the situation in the Bray-Savin model may not be very plausible for markets where prices are not

determined by auction processes, even though
the markets may appear competitive. Arrow
(1959) noted that there is an inconsistency
between the assumptions required of individuals
in a state of equilibrium and those necessary to
explain behavior in disequilibrium. He argued
that, in situations of excess demand, firms do not
behave as price takers but, in fact, use price-setting tactics similar to the profit-maximizing tac-
tics of a monopolist.
The problem is somewhat more
complex in that a firm’s competitors will also be
raising prices. Moreover, on an individual basis,
no seller would have the incentive to agree to an
auctioneer, since the market-clearing price would
be less than what he could obtain in disequilib­
rium. In situations of excess supply, Arrow shows
that firms are still monopolists, but buyers are
monopsonists; thus, it is a joint decision that
establishes price. The lesson is that disequilib­
rium price adjustment may need to recognize
elements of imperfect competition.
Theories of imperfect competition
require elements of strategic behavior, that is,
situations in which two or more agents choose
strategies that interdependently affect each other.
Such problems involve game theory. Arrow
(1986) recently concluded that analysis of games
with structures that are extended over time leads
to very weak implications— in the sense that
there are continua of equilibria. The fact is that
we know very little about how economic man
interacts with other economic men in situations
of excess demand or supply. Unfortunately, the
learning models considered above provide no
shortcuts around this problem.

V. Learning in the Macroeconomy
While Bray-Savin learning shows that agents using
“plausible” models can “learn their way” to REE
in auction markets, it is doubtful that such a
result could obtain for a highly decentralized
market economy. This section identifies some dif­
ficulties, apart from the problems of modeling strategic behavior, that confront a modeler seeking to extend the Bray-Savin result to the macroeconomy. The issues are sketched using a notion of equilibrium proposed by Frank Hahn (1973).
It is the essence of a decentralized
economy that individuals have different informa­
tion.8 Furthermore, each individual is specialized
in certain activities and has, in general, special­
ized knowledge about those activities. There is no reason to believe that individuals base their expec-
tations on the rather general kind of information
that econometricians use. Instead, different indi­
viduals base their decisions on different sets of
information. In short, a “plausible” model of
learning in macroeconomics would need to incor­
porate the existence of heterogenous information.
The problem of learning when
agents have incomplete and different information
has recently been studied by Marcet and Sargent
(1986b).9 In their approach, agents use least-squares estimation to formulate expectations that
they think are relevant to understanding the under­
lying law of motion as it affects them. Marcet and
Sargent assume that agents do not respecify their
regressions over time, but maintain the same
“theory” about the world they observe. As with
Bray-Savin, their model accommodates feedback
from agent expectations to the actual law of
motion of the system. Marcet and Sargent show
that the existence of informational asymmetries
does not preclude convergence to REE when the
law of motion is a linear stochastic process.
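The following toy illustration (my own construction, far simpler than the Marcet-Sargent environments) conveys the flavor of least-squares learning with heterogeneous information: two agents each observe a different exogenous variable, each regresses the price on the variable it sees, and both forecasts feed back into the price.

# Toy sketch of least-squares learning under heterogeneous information (assumed setup).
import numpy as np

rng = np.random.default_rng(3)
m1, m2, a, sigma = 1.0, -0.5, 0.5, 0.1   # illustrative values only
b = np.zeros(2)                          # the two agents' slope estimates
s_xx = np.full(2, 1e-3)                  # running sums of x_k**2 for each agent
s_xp = np.zeros(2)                       # running sums of x_k * p for each agent

for t in range(20000):
    x = rng.normal(size=2)               # agent k observes only x[k]
    forecasts = b * x                    # agent k's forecast is b[k] * x[k]
    p = m1 * x[0] + m2 * x[1] + a * forecasts.mean() + rng.normal(0.0, sigma)
    s_xx += x * x
    s_xp += x * p
    b = s_xp / s_xx                      # each agent's univariate least-squares estimate

print(b, np.array([m1, m2]) / (1.0 - a / 2.0))

In this toy setup each estimate settles at a fixed point consistent with the agent's own information set, even though neither agent ever learns the full law of motion.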
While the class of learning models studied by Marcet and Sargent imposes some restrictions on the economic environment, the mechanism can accommodate a wide class of economic theories. Nothing inherent in the least-squares learning schemes precludes convergence
to a non-Walrasian equilibrium.
The idea that an economic system
might converge to a non-Walrasian equilibrium
is, no doubt, difficult to accept for some econo­
mists. For example, won’t arbitrage opportunities
arise? Although there would be such opportuni­
ties vis-a-vis a Walrasian ideal, it is not evident
that agents can perceive the ideal to identify the
opportunities. Because agents don’t observe con­
tinuous market-clearing equilibrium outcomes in
a non-Walrasian environment, there is no reason
that their expectations will ever become consis­
tent with Walrasian equilibrium in the long run.
The point here is that agents’ ex­
pectations could become consistent with the
conventions (including price-setting mecha­
nisms) that determine the laws of motion of the
system. While equilibrium expectations would
not be systematically inconsistent with observed
outcomes of the model, agent choices would not
necessarily be Pareto-optimal. Nevertheless, to the
extent that market forces operate, it is conceiva­
ble that price-setting conventions could develop that would lead to an equilibrium that is “approximately competitive.”10

8. This point and the following were made by Arrow (1978) as a criticism of the use of Muthian expectations to the aggregate economy.

9. See Marcet and Sargent (1986a).
To understand what “approximately
competitive” might mean, it is useful to introduce
a notion of equilibrium proposed by Hahn
(1973). In Hahnian equilibrium, each agent holds
his own theory about the way the economy will
develop and about the consequences of his own
actions.11 The agent abandons his theory when it
produces systematic and persistent errors. To the
extent the agent maintains a theory, his actions
are conditioned on his perceptions about the
laws of motion of such a system. The agent is
said to be in equilibrium when he maintains his
theory. The economy is said to be in equilibrium
if it doesn’t produce outcomes systematically and
persistently inconsistent with agents’ perceptions.
In the context of Marcet-Sargent
learning, the theories agents hold are embodied in
the regressors they choose. Under the assumption
that the true law of motion is linear, agents will ultimately not be able to falsify their theories.12
Thus, they would have no reason to abandon the
theory. In the context of Hahn’s notion, each
agent would be considered in equilibrium.
Moreover, since the actual outcomes would not
be inconsistent with predictions of agents’ theor­
ies, the economy would be in equilibrium.
Although Hahn was not completely
precise about his notion of equilibrium, he
clearly intended it to be more general than the
equilibrium obtained in Marcet-Sargent learning.
For Hahn, the structure of true “laws of motion”
need not be independent of the theories agents
choose. The theories could determine the struc­
ture of the laws of motion— a structure that could
have nonlinearities that agents could never com­
prehend. In the model of Sargent and Marcet, the
underlying structure is constrained to obey a lin­
ear (stochastic) law of motion.
10. The meaning of “approximately competitive” equilibrium developed below is different from the sense that allocations in the core are said to be approximately competitive. The latter refers to outcomes of a bargaining process, while the former refers to outcomes derived from habitual behavior that allows agents to “survive” in a competitive economy.

Another important difference is that Hahnian equilibrium would accommodate agent behavior that could be inconsistent at any point in time, but not persistently so. In the
Marcet-Sargent limit point, agents ultimately learn
enough so that their expectational error is white
noise, that is, agent actions lead to a steady-state
equilibrium. This means that agent expectations
would ultimately become mutually consistent in
every period, given what they can know. Because
Hahn only imposes that actions (expectations) of
agents not be systematically and persistently inconsistent, his equilibrium would not be unique.
Hence, at any point in time, equilibrium would
be distinct from a steady state. Local stability
would mean that, for short enough periods and
for small enough disturbances, the set of equilibria
is large but that it shrinks.
It is useful to stress here that the
agents in the Hahnian concept of equilibrium are
rational in the spirit of McCallum’s intuition. That
is, agents do not maintain their “theory” when
systematic errors are sufficiently persistent for fal­
sification of the theory. However, the meaning of
rationality is much less restrictive (hence more
plausible) than is presumed in conventional for­
mulations of rational expectations. Agents in
Hahnian equilibrium are rational only in a subjec­
tive sense. Nothing inherent in the Hahnian
approach would assure that aggregate economic
outcomes would converge to a stationary stochas­
tic process with a unique objective probability
distribution. Without such convergence, agents’
subjective expectations could not coincide with
an objective expectation of aggregate outcomes.
Imposing the restriction that agents’
expectations be mutually consistent with each other and with a particular objective probability
distribution underlying a given model seems too
restrictive to be very useful in practice. This point
has been developed in an alternative model pro­
posed by Swamy, Barth, and Tinsley (1982).13
An attractive feature of the Hahnian equilibrium concept is that it can accommodate
more plausible market structures such as the
“approximately competitive” economy suggested
above. Agents may adopt stable reaction rules that
allow them to cope in a competitive environment
without requiring unreasonable computational
abilities necessary for analyzing the aggregative impacts of strategic behavior. Moreover, the equilibrium of such a model would accommodate a wide variety of nonstationarities in the variables. Nevertheless, Hahnian equilibrium too has some severe limitations.

11. Clearly, this notion abstracts from many difficult problems posed by strategic behavior. For a more complete description of Hahn's notion of equilibrium and a comparison to the Austrian view, see Littlechild (1982).

12. It is not evident that agents would maintain their theories in the early stages of learning. For any given model, one might want to provide sensitivity analysis a la Bray-Savin.

13. Swamy et al. show how confounding 'objective' and 'subjective' notions of probability may violate the axiomatic basis of statistical theory. They propose an alternative model for aggregation of subjective expectations. The problem with conventional formulations of the rational expectations hypothesis in macroeconomic models lies not with the concept of individual rationality but with the context in which it is developed—namely, in the representative agent model. Once one allows agents to differ both in the information they have and in the theories they hold, a model can accommodate arbitrage opportunities that are deemed essential for a process leading to a rational expectations equilibrium. How agents learn to recognize arbitrage opportunities, however, remains an open, but difficult, issue.
A key difficulty for a researcher
modeling approximately competitive environ­
ments is that an infinite set of plausible conven­
tions could be developed that would lead to
“model consistent” (rational, in the sense of
Hahn?) expectations. This may not be relevant for
the individual agent in Hahnian equilibrium. The
agent could be satisfied with his own conven­
tions for dealing in his specialized corner of the
world. A macromodeler, on the other hand, may
not have access to all relevant information. His
estimates of underlying relationships would be
inconsistent because of the bias from omitting relevant explanatory variables. Thus, it may be impossible for a modeler of aggregate economic activ-
ity to discover adequately the law of motion for
the economy as a whole, even when the econ­
omy is in Hahnian equilibrium. This, of course, is
the essence of the Austrian criticism of macro­
economics, both Keynesian and New Classical.14
The most difficult problem for
modeling learning in an approximately competi­
tive model, however, is the situation in which
agents change theories.15 In the context of
Hahnian equilibrium, this is the problem of global stability: a shock to equilibrium may be so big that it causes agents to change their theories.
Hahn argued that it is impossible to make any
claims about global stability. He concluded that
this limitation was imposed by the current state
of economic knowledge. Economists know very
little about how agents adapt to a changing eco­
nomic environment.
When confronted with the limits of
equilibrium analysis, economists are often more
willing to invoke a convenient fiction than to
modify their fundamental tools. The urge to close
the model typically prevails over a venture into a
methodological frontier. As is often noted, some
people searching for a lost wallet at night prefer
to look under a street lamp even though it may be more likely that they lost the wallet in the dark alley. Hahn's proposed reformulation of equilibrium was useful in illuminating the problems of learning in a large, decentralized economy. In this sense, it demonstrates the potential value of building new streetlamps.

14. Another way of looking at the same problem is that the specification of “approximately competitive” behavior in this paper is too general to have empirical content. Nevertheless, the researcher is free to specify his own set of conventions—provided, of course, that they are logically consistent. Because of the difficulties in falsifying economic theories, one might choose among alternative specifications on the basis of out-of-sample forecasts. The foundations of such a method are found in Swamy, Conway, and von zur Muehlen (1985).

15. This is what Hahn calls learning. It is also the sense of learning examined by Blume and Easley.

VI. Concluding Remarks
This paper opened with the idea that rational,
purposeful individuals have incentives to weed
out systematic errors in their own expectations.
Thus, it is argued that economic models should
not allow expectational errors to persist. Conven­
tional formulations of rational expectations,
which assume Walrasian market-clearing, do not
violate this restriction. The implicit auction pro­
cess works to assure that all decisions are mutu­
ally consistent both with what agents can know
about the model and with the underlying model.
This paper presented the Bray-Savin result that shows that agents may use “plau-
sible” learning mechanisms to “learn their way”
to rational expectational equilibrium in auction
markets. Thus, learning models extend the results
of tatonnement stability analysis to situations
where agents form model-consistent expectations
about the environment they are in. The restriction
that economic models not permit systematic
expectational errors to persist, however, does not
require that agents behave in a mutually consis­
tent manner in each period of time as in Walra­
sian equilibrium. The restriction is weaker than
that and hence allows for a broader scope in the
meaning of rationality than is generally considered
in conventional formulations of the rational
expectations hypothesis. That is, the restriction
allows a broader class of economic models than
the Walrasian economy.
The model of “approximately
competitive” equilibrium sketched in this paper
illustrates one potential subclass of such models.
The sketch provides a plausible example of how
rational, self-seeking agents might “learn their
way” to non-Walrasian equilibria. Without an auc­
tioneer in each and every market, a modeler can­
not rule out such equilibria a priori simply by assuming agents have incentives to weed out systematic expectational errors.

REFERENCES

Arrow, Kenneth J., L. Hurwicz, and H.D. Block. "On the Stability of Competitive Equilibrium II," Econometrica, vol. 27 (January 1959), pp. 82-109.

Arrow, Kenneth J., and L. Hurwicz. "On the Stability of the Competitive Equilibrium I," Econometrica, vol. 26 (October 1958), pp. 522-552.

Arrow, Kenneth J. "Rationality of Self and Others in an Economic System," The Journal of Business, vol. 59, no. 4, part 2 (October 1986), pp. 385-399.

Arrow, Kenneth J. "Toward a Theory of Price Adjustment," in The Allocation of Economic Resources, Stanford University Press (1959), pp. 41-51.

Arrow, Kenneth J. "The Future and the Present in Economic Life," Economic Inquiry, vol. XVI, no. 2 (April 1978), pp. 157-169.

Blume, Lawrence E., and D. Easley. "Learning to be Rational," Journal of Economic Theory, vol. 26, no. 2 (April 1982), pp. 340-351.

Blume, L.E., M.M. Bray, and D. Easley. "Introduction to the Stability of Rational Expectations Equilibrium," Journal of Economic Theory, vol. 26, no. 2 (April 1982), pp. 313-317.

Bray, M.M., and N.E. Savin. "Rational Expectations Equilibria, Learning and Model Specification," Econometrica, vol. 54, no. 5 (September 1986), pp. 1129-1160.

Bray, M.M. "Convergence to Rational Expectations Equilibrium," in Individual Forecasting and Aggregate Outcomes, Cambridge University Press, 1983, pp. 123-137.

Cyert, Richard M., and Morris H. DeGroot. "Rational Expectations and Bayesian Analysis," Journal of Political Economy, vol. 82, no. 3 (May/June 1974), pp. 521-536.

Fisher, Franklin M. Disequilibrium Foundations of Equilibrium Theory, Econometric Society Monographs in Pure Theory No. 6, New York: Cambridge University Press, 1983.

Hahn, F.H. On the Notion of Equilibrium in Economics, Cambridge: Cambridge University Press, 1973.

Littlechild, S.C. "Equilibrium and the Market Process," in Method, Process, and Austrian Economics: Essays in Honor of Ludwig von Mises, ed. Israel M. Kirzner, Lexington Books, 1982.

Marcet, Albert, and Thomas J. Sargent. "Convergence of Least Squares Learning Mechanisms in Self Referential Linear Stochastic Models," Manuscript, October 1986.

Marcet, Albert, and Thomas J. Sargent. "Convergence of Least Squares Learning in Environments with Hidden State Variables and Private Information," Manuscript, October 1986.

Muth, John F. "Rational Expectations and the Theory of Price Movements," Econometrica, vol. 29, no. 3 (July 1961), pp. 315-335.

McCallum, Bennett T. "Rational Expectations and Macroeconomic Stabilization Policy: An Overview," Journal of Money, Credit, and Banking, vol. XII, no. 4, part 2 (November 1980), pp. 697-746.

Samuelson, Paul A. Foundations of Economic Analysis, Harvard University Press, Cambridge, Mass., and London, England (1947).

Simon, H.A. Models of Man, Wiley Press, New York, 1957.

Simon, H.A. "Rationality as Process and as Product of Thought," American Economic Review, vol. 68, no. 2 (May 1978), pp. 1-16.

Sonnenschein, Hugo. "Do Walras' Identity and Continuity Characterize the Class of Community Excess Demand Functions?" Journal of Economic Theory, vol. 6, no. 4 (August 1973), pp. 345-354.

Swamy, P.A.V.B., R.K. Conway, and P. von zur Muehlen. "The Foundations of Econometrics - Are There Any?" Econometric Reviews, vol. 4, no. 1 (1985), pp. 1-61.

Swamy, P.A.V.B., J.R. Barth, and P.A. Tinsley. "The Rational Expectations Approach to Economic Modelling," Journal of Economic Dynamics and Control, vol. 4, no. 2 (May 1982), pp. 125-147.

Taylor, John B. "Monetary Policy During a Transition to Rational Expectations," Journal of Political Economy, vol. 83, no. 5 (October 1975), pp. 1009-1021.

Townsend, Robert M. "Market Anticipations, Rational Expectations, and Bayesian Analysis," International Economic Review, vol. 19, no. 2 (June 1978), pp. 481-494.

Airline Hubs: A Study
of Determining Factors
and Effects
by Paul W. Bauer
Paul W . Bauer is an economist at
the Federal Reserve Bank of Cleve­
land. The author would like to thank
James Keeler, Randall W. Eberts,
and Thomas J. Zlatoper and others
who provided useful comments on
earlier drafts of this paper. Paula
Loboda provided valuable research
assistance.

Introduction
The Airline Deregulation Act (ADA) of 1978
caused many changes in the industry. For the first
time in 40 years, new airlines were permitted to
enter the industry, and all airlines could choose
the routes they would serve and the fares they
would charge. Airlines were also free to exit the
industry (go bankrupt), if they made poor choices
in these matters. Naturally, this has led to many
changes in the way airlines operate.
Many aspects of airline behavior,
particularly fares, service quality, and safety, have
been subjected to intense study and debate. The
development of hub-and-spoke networks is one
of the most important innovations in the industry
since deregulation, and it has affected all of these
aspects. Yet comparatively little research has been
done on this phenomenon.
A hub-and-spoke network, as the
analogy to a wheel implies, is a route system in
which flights from many “spoke” cities fly into a
central “hub” city. A key element of this system is
that the flights from the spokes all arrive at the
hub at about the same time so that passengers
can make timely connections to their final desti­
nations. An airline must have access to enough
gates and takeoff and landing slots at its hub air­
ports in order to handle the peak level of activity.
An example of a hub-and-spoke
network can be seen in figure 1, which shows the
location of the hub and spoke cities used in this
study. From Pittsburgh, USAir offers service to
such cities as Albany, Buffalo, Cleveland, Dallas-Fort Worth, London, New York, Philadelphia, and

Syracuse to name just a few. Hub cities tend to
have much more traffic than spoke cities. Much
of the hub-city traffic centers on making connec­
tions. For example, over 60 percent of the pas­
sengers who use the Pittsburgh airport hub are
making connections, vs. 25 percent at the Cleve­
land spoke airport.
The advantages of hub-and-spoke
networks have been analyzed by several sets of
researchers. Bailey, Graham, and Kaplan (1985)
discussed the effects of hubbing on airline costs
and profitability. Basically, hubbing allows the air­
lines to fly routes more frequently with larger air­
craft at higher load factors, thus reducing costs.
Morrison and Winston (1986) looked at the
effects of hubbing on passenger welfare, finding
that, on average, passengers benefited from the
switch to hub-and-spoke networks by receiving
more frequent flights with lower fares and slightly
shorter travel times.
It is important to note, however,
that while passengers benefit on average from
hub-and-spoke networks, there are some detrimen­
tal effects such as the increased probability of miss­
ing connections or losing baggage and having di­
rect service converted into connecting service
through a hub (although this is partially offset in
many cases by more frequent service). Current
public perceptions about the state of airline ser­
vice have been strongly influenced by the transi­
tory problems many of the carriers have had inte­
grating acquired airlines into their service network.

FIGURE 1
Hub and Spoke Network
SOURCE: Author.

McShan (1986) and Butler and
Huston (1987) have shown another aspect of the
switch to hub-and-spoke networks. McShan argues
that airlines with access to the limited gate space
and takeoff and landing slots at the most desira­
ble hub locations before deregulation have bene­
fited the most from deregulation. Butler and Hus­
ton have shown that the airlines are very adept at
employing their hub market power, charging
lower fares to passengers flying through the hub
(who typically have more than one choice as to
which hub they pass through) than to passengers
flying to the hub (who have fewer options).
Some of these authors have specu­
lated as to why hubs exist in some locations but
not in others. Bailey, Graham, and Kaplan (1985)
and McShan (1986) have suggested that an ideal
hub network would have substantial local traffic
at the hub and would be centrally located to
allow noncircuitous travel between the airline’s
hub and spoke cities. However, no empirical
exploration of this issue has yet been attempted.

In an attempt to more fully under­
stand the hubbing phenomenon, this paper looks
for the main factors that airlines consider in eval­
uating existing and potential hubs, and investi­
gates the impact of the hubbing decision on air­
port traffic.
The paper is organized as follows.
Section I discusses the cost and demand charac­
teristics of the airline industry that lead to hub
and spoke networks. From these stylized facts
about the airline industry, a two-equation empiri­
cal model is constructed in section II. The first
equation predicts whether a city is likely to have
a hub airline and the second equation estimates
the total revenue passenger enplanements the
city is likely to generate as a result of the hub
activity. Empirical estimates are obtained for this
model, using data from a sample of the 115 largest airports in the U.S., and are discussed in
section III. The implications of these results on
the present and future structure of the U.S. airline
industry are discussed in section IV.

I. Characteristics of Airline Demands and Costs
To understand the factors that influence the loca­
tion of hubs, it is first necessary to look at the
demand determinants and costs for providing air
service. Basically, people travel for business or
pleasure. Travelers usually can pick from several
transportation modes. The primary modes of
intercity travel in the U.S. are automobiles, airlines,
passenger trains, and buses. A traveler's
choice of transport is influenced by the distance
to be traveled, the relative costs of alternative
transportation, and the traveler’s income and
opportunity cost of time spent traveling.
Aggregating up from individual
travelers to the city level, the flow of airline pas­
sengers between any two cities is largely
explained by the following factors:
1) the air fare between the two cities and the
cost of alternative transportation modes,
2) the median income of both cities,
3) the population of both cities,
4) the quality of air service (primarily the
number of intermediate stops and the
frequency of the flights),
5) the distance between the two cities, and
lastly,
6) whether either of the cities is a business
or tourist center.
It is important to distinguish
between business and tourist travelers. While
both generate traffic, business travelers are more
time-sensitive and less price-sensitive than tourist
travelers. Business travelers would prefer to pay
more for a convenient flight, whereas tourists
would prefer to pay less, even if it means spending
more time en route. These factors influence the
demand for air service. The cost of providing that
service can now be discussed.
As with any firm, airline costs are
determined by how much output is produced
and by the price of the inputs required to pro­
duce that output. Output in the airline industry is
usually measured in revenue passenger miles
(rpm), which is defined as one paying passenger
flown one mile. Average cost per revenue pas­
senger mile declines as either the average stage
length (the average number of miles flown per
flight) or the average load factor (the average
number of seats sold per flight) increases.
It is easy to see why costs behave in
this manner. First, every flight must take off and
land. These activities incur high fixed costs. In
addition to the usually modest takeoff and landing
fees, much more fuel is used up when taking off
than at other stages of the flight. Taxiing to and
from the runways also takes up a significant
amount of time. Those costs are unrelated to the
distance of the flight or to the number of pas­
sengers. By comparison, flying at the cruising alti­

tude is relatively inexpensive. Thus, with each
mile flown the high fixed costs per flight are dis­
tributed over more and more miles, which lowers
the average cost per revenue passenger mile.
Second, average cost per revenue passenger mile
declines as the average load factor is increased,
because it is cheaper to fly one airplane com­
pletely full than it is to fly two planes half full.
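This cost behavior can be illustrated with a simple back-of-the-envelope calculation. The per-flight and per-mile cost figures and the seat count below are purely hypothetical and are not taken from the article; they serve only to show that average cost per revenue passenger mile falls as stage length or load factor rises.

# Illustrative only: fixed_cost_per_flight, cost_per_mile, and seats are assumed numbers.
def avg_cost_per_rpm(stage_length_miles, load_factor, seats=150,
                     fixed_cost_per_flight=6000.0, cost_per_mile=15.0):
    # Costs incurred regardless of distance (takeoff, landing, taxiing) plus
    # cruising costs that grow with miles flown.
    total_cost = fixed_cost_per_flight + cost_per_mile * stage_length_miles
    # Revenue passenger miles: paying passengers times miles flown.
    rpm = seats * load_factor * stage_length_miles
    return total_cost / rpm

for stage in (250, 500, 1000):
    for lf in (0.50, 0.70):
        print(stage, lf, round(avg_cost_per_rpm(stage, lf), 3))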
Studies have shown that the costs of
airline operations do not exhibit increasing
returns to scale.1 In other words, large airlines do
not enjoy cost advantages over small airlines if
load factors and stage lengths are taken into
account. This does not mean that large airlines
may not have other advantages over their smaller
rivals. One advantage that they may have is that
they have more flights to more destinations with
more connections, so that they may be able to
achieve higher load factors, which reduces cost.
Frequent-flyer programs also tend to favor larger
airlines, since passengers will always try to use
one airline to build up their mileage credits faster.
The larger airlines, having more flights and more
destinations, are more likely to be able to satisfy
this preference.

1
See Bauer (1987 working paper) and White (1979).
Under these cost and demand con­
ditions, the chief advantage to establishing a suc­
cessful hub is the increase in the average load
factor, which lowers average cost. Hubbing en­
ables an airline to offer more frequent nonstop
flights to more cities from the hub because of the
traffic increase from spoke cities. Passengers orig­
inating from the hub city thus enjoy a higher level
of service quality than would have been possible
if spoke travelers were not making connections
there. Passengers from the spoke cities may also
enjoy better service, because they can now make
one-stop flights to many cities that they may have
only previously reached by multistop flights.
Hubbing has a significant effect on
the demand for air travel through its effects on
both air fares and the quality of air service. Pas­
sengers prefer nonstop flights to flights with
intermediate stops, and if there are intermediate
stops, passengers prefer making “online” connec­
tions (staying with the same air carrier) to mak­
ing “interline” connections. Nonstop and online
flights minimize flying time and are less stressful
and exhausting to passengers. The development
of a new hub increases the number of nonstop
and one-stop flights in a region, while reducing
multistop flights, which were common on some
routes prior to deregulation. In general, service


quality increases for both the hub city and the
spoke cities when a hub-and-spoke network is
created. However, some of the larger spoke cities
could end up worse off, because they may lose
some nonstop service to other cities that may now
have to be reached by flying through the hub.
Now the problem of how to deter­
mine whether a particular city might make a suc­
cessful hub, and the resulting implications for the
volume of air traffic at the airport, can be
considered.

II. Empirical Model of
the Hubbing Phenomenon

The potential for airlines to serve a number of
city pairs and the flow of passengers between
those city pairs depends upon the demand and
cost factors discussed in the last section. Given
these factors, airlines trying to maximize profits
face the simultaneous problem of choosing
which cities to serve and how to serve them, that
is, which cities to make hubs, which cities to
make spokes, and which pairs to join with
nonstop service. This is a complicated problem since
the choice of a hub affects fares and service quality
and, hence, passenger flows. Decisions by the
airline's competitors will also affect the passenger
flows within its system.

To investigate how important each
of the various demographic factors discussed
below is in deciding whether a given city would
make a viable hub, a data set containing information
on 115 cities with the largest airports in the
U.S. was compiled. These cities range in size from
New York City to Bangor, Maine, and are shown
in figure 1 with the hub cities in green and the
spoke cities in orange. Notice that most of the
hubs are located east of the Mississippi in cities
surrounded by a large population base.

The data were collected from several
sources. Information on whether a city was
considered to have a hub airline (if the i-th
city had a hub airline, then h_i = 1; otherwise
h_i = 0) and the total revenue passenger miles
handled by the city was obtained from 1985
Department of Transportation statistics. Data on
the population (pop) and the per capita income
(inc) of the city were obtained from the State and
Area Data Handbook (1984) and from the Survey
of Current Business (April 1986 issue).

In addition, a set of variables was
collected to identify whether the city was a business
or tourist center. The first of these variables
(DBTP, "Dummy Business-Tourist-Proxy") is a
dummy variable that is set equal to one if the
total receipts from hotels, motels, and other lodging
places for each city is greater than an arbitrary
threshold, and is zero otherwise. This series
was also collected from the State and Area Handbook
(1984). A value of one for this variable
should correspond to cities that are either a business
or tourist center. Unfortunately, this variable
only measures the joint effect of both activities
and does not distinguish between business and
tourist travelers.

To construct separate measures of
business and tourist activity, three variables are
introduced. The number of Standard and Poor's
500 companies headquartered in each city (corp)
was compiled to be used as a proxy for the business
traffic that each city is likely to generate.
Measures of the likelihood that a city will generate
significant tourist activity are obtained from
the Places Rated Almanac published by Rand
McNally. The measures are respectively the rank
of the city in recreation (rec) and the rank of the
city in culture (cult). These variables were
transformed so that the higher the rank, the higher the
city's scores were in that category.

In this study, a long-run approach
is implicitly taken that ignores individual airport
characteristics. In the long run, runways, gates,
and even whole airports can be constructed.2 The
decision concerning where to locate hubs in the
long run is determined by the location of those
cities and by demographic variables that determine
the demand for travel between cities. Unfortunately,
deriving an economically meaningful measure
of location is difficult in this context. Hubs can be
set up to serve either a national or regional market,
or to serve east-west or north-south routes.
Thus, while location is an important factor in
determining the location of hubs, constructing an
index that measures the desirability of a city's
location is beyond the scope of the current study.3

2
For short-run analysis, information on individual airport
characteristics is required. This approach will be employed in
future research.

3
Future research will attempt to look at this question more
directly.

A more formal model of the hubbing
decision can be constructed as follows. Let
the viability of a given airport as a potential hub
be a log linear function of the demographic
variables discussed above where:

(1)  h*_i = a_0 + a_1 ln(pop_i) + a_2 ln(inc_i) + a_3 DBTP_i
            + a_4 ln(corp_i) + a_5 ln(rec_i) + a_6 ln(cult_i) + v_i.

Here, h*_i measures the viability of a hub in the
i-th city. If this index is above a given threshold k
(at which point the marginal cost of setting up
the hub is equal to the marginal revenue that the


hub brings in), then an airline will set up a hub
there. Thus, h_i is related to h*_i as follows:

(2)  h_i = 1, if h*_i > k,
     h_i = 0, otherwise,

where k is the threshold between hubs and
nonhubs and v_i is statistical noise.

The traffic an airport can be
expected to handle will depend on the same
demographic variables that also influence
whether a city is a hub, and on whether or not
the city actually is a hub. Thus, traffic, as measured
by revenue passenger enplanements (rpe), can be
modeled as a log linear function of the demographic
variables and the hub variable:

(3)  ln(rpe_i) = b_0 + b_1 ln(pop_i) + b_2 ln(inc_i) + b_3 DBTP_i
               + b_4 ln(corp_i) + b_5 ln(rec_i) + b_6 ln(cult_i)
               + b_7 h_i + e_i,

where e_i is statistical noise.

Since the model is diagonally recursive
(only one of the equations includes both
endogenous variables and it is assumed that there
are no cross equation correlations), each equation
of the model can be estimated separately.4
The equation predicting the viability of the hub
was estimated using the Probit maximum likelihood
method. The traffic equation was estimated
by ordinary least squares.

4
The results reported here are not sensitive to the assumption of
no cross equation correlations.
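As a rough illustration of this estimation strategy, the sketch below fits a probit for the hub decision and an ordinary least squares regression for log enplanements. It assumes the variables have been gathered into a pandas DataFrame with the hypothetical column names pop, inc, DBTP, corp, rec, cult, hub, and rpe; the article does not specify the software or data layout actually used.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def estimate_hub_model(df: pd.DataFrame):
    # Log-transform the continuous demographic variables; DBTP is already a dummy.
    # (Cities with zero values, e.g., no S&P 500 headquarters, would need an
    # adjustment such as log(1 + x); that detail is not addressed here.)
    X = pd.DataFrame({
        "ln_pop": np.log(df["pop"]),
        "ln_inc": np.log(df["inc"]),
        "DBTP": df["DBTP"],
        "ln_corp": np.log(df["corp"]),
        "ln_rec": np.log(df["rec"]),
        "ln_cult": np.log(df["cult"]),
    })
    X = sm.add_constant(X)

    # Equations (1)-(2): probit maximum likelihood for whether the city has a hub.
    hub_eq = sm.Probit(df["hub"], X).fit(disp=False)

    # Equation (3): OLS for log revenue passenger enplanements, adding the hub dummy.
    traffic_eq = sm.OLS(np.log(df["rpe"]), X.assign(hub=df["hub"])).fit()
    return hub_eq, traffic_eq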

TABLE 1
Parameter Estimates from Decision to Hub Equation

Parameter    Estimate    t-statistic
Constant     -0.347      -0.627
pop           0.869       1.60
inc          -1.57       -0.795
DBTP          0.478       0.920
corp          0.138       1.29
rec          -0.00232    -0.902
cult          0.0110      1.46

Percentage of predictions correct = 87.0.
Chi-squared statistic = 69.4
SOURCE: Author.

TABLE 2
Estimates from Revenue Passenger Enplanements Equation

Parameter    Estimate    t-statistic
Constant     16.6        118.0
pop           0.545        5.13
inc           1.15         2.73
DBTP          0.914        5.53
corp         -0.0131      -1.46
rec           0.00101      1.71
cult          0.00107      0.922
hub           0.795        4.98

R-squared = 0.850.
F-statistic = 86.3
SOURCE: Author.

III. Results
Results from estimating the above model are
presented in tables 1 and 2. Table 1 presents the
parameter estimates from the equation that pre­
dicts the viability of a hub in any given city. The
overall prediction power of the model is quite
good. The point estimates of the parameters all
have the expected signs except for the coefficient
on per-capita income, though the level of statisti­
cal significance is very weak. The high correlation
among most of the demographic variables sug­
gests that multicollinearity is a problem and that
the standard errors are inflated, leading to lower
t-statistics. Even with this problem, estimates from
this equation do correctly predict whether or not
a city will be a hub 87 percent of the time.
A city is more likely to become a
hub as its population, lodging receipts (DBTP), or
number of S&P 500 corporations increase, or
as its ranking for recreation or culture improves.
Business travelers (being more time-sensitive and
less price-sensitive) should be more important to
an airline than tourist travelers in the location of
hubs, so that the number of S&P 500 corporations
should be more important than either recreation
or culture. One-tailed tests conducted at the 90
percent confidence level indicate that increasing
a city’s population and number of S&P 500 cor­
porations, and improving the cultural ranking, all
have nonnegative effects on the viability of a hub
for a given city, other things being equal. It would
have been reasonable to expect that increases in
per-capita income would also increase the viability
of the hub, but higher per-capita incomes reduce
the likelihood of a city being a hub, although this
result is not statistically significant.
The results from the estimation of
the traffic equation are presented in table 2. Most
of the parameter estimates are statistically signifi­
cant in this equation. All the estimates have the
expected sign, except the coefficient on the
number of S&P 500 corporations, although it is
not statistically significant.
Given the construction of the
model, some of these parameters can be inter­
preted as elasticities. For example, a one percent



TABLE 3
Outlier Cities

Likely, but do not have a hub      Unlikely, but do have a hub
Cleveland                          Raleigh
San Diego                          Syracuse
New Orleans                        Orlando
Phoenix                            Nashville
Tampa                              Kansas City

SOURCE: Author.
For example, a one percent increase in a city's population would lead to a
0.55 percent increase in revenue passenger
enplanements, while a one percent increase in a
city's per capita income would lead to a 1.15 percent
increase in revenue passenger enplanements.
The coefficient of lodging receipts
(DBTP) can be interpreted as follows. From these
estimates, it can be calculated that cities classified
as business/tourist centers have roughly 2.49
times the traffic that other cities have.
The coefficient for the hub variable
has a similar interpretation, given its construction.
If two cities are identical, except that one has a
hub and the other does not, then the city with
the hub can be expected to have over 2.19 times
more revenue passenger enplanements than the
other city. For example, Cleveland and Pittsburgh
have very similar demographic characteristics, yet
as a result of USAir’s hub, Pittsburgh has about 2.3
times the revenue passenger enplanements that
Cleveland has. It was noted earlier that pas­
sengers making connections in Pittsburgh
account for most of this difference because only
25 percent of the passengers who use Cleveland’s
airport are there making connections, whereas
over 60 percent of the passengers at Pittsburgh’s
airport are there making connections. Clearly, the
creation of a hub greatly increases the activity
occurring at an airport.
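These multiples follow directly from exponentiating the dummy-variable coefficients in the log-linear traffic equation; the short check below simply reproduces the arithmetic with the point estimates reported in table 2.

from math import exp

b_dbtp = 0.914   # coefficient on the business/tourist dummy (table 2)
b_hub = 0.795    # coefficient on the hub dummy (table 2)

print(round(exp(b_dbtp), 2))  # ~2.49: business/tourist centers relative to other cities
print(round(exp(b_hub), 2))   # ~2.21: hub cities relative to otherwise identical non-hubs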
Table 3 presents two lists of outliers
as a by-product of the estimation process. The
first list is of cities that the model predicts should
be hubs, but are not. The second list is of cities
that the model predicts should not be hubs, but
are. It is likely that San Diego, Phoenix, and
Tampa would not be outliers if a location variable
were included in the model, since these cities lie
in the southwest and southeast corners of the
country (see figure 1). Cleveland and New
Orleans, on the other hand, appear to be more
likely candidates for future hubs. Other midwest
cities to watch are Indianapolis and Columbus.

Two factors can explain why most
cities made the second list: location and measurement
problems with the hub variable. Although it
is hard to develop an index for location, it is easy
to get an intuitive feel for it. Both Kansas City and
Nashville are situated near the center of the country,
giving them an advantage over Phoenix or
San Diego in the competition for hubs. The
second factor involves the problem of deciding
what constitutes hub service at a city. Clearly the
activity going on in Chicago by both United Airlines
and American Airlines is quantitatively different
from what USAir is doing in Syracuse, yet in
this study both cities are counted as hubs.

IV. Summary and Implications for the Future

This paper has explored the characteristics that
influence hub location and the effect on airport
traffic as a result of hub activity. The results indi­
cate that population is the most important factor
determining hub location. An increase in per-capita income leads to a larger proportional
increase in revenue passenger enplanements,
whereas an increase in population leads to a less
than proportional increase. One of the most
interesting findings was that the creation of a hub
at a city leads to a more than doubling of revenue
passenger enplanements generated at that city.
The framework developed here is
implicitly long run: airlines, passengers, and air­
ports are assumed to have fully adjusted to the
new deregulated environment. Given the recent
merger wave in the industry, this does not appear
to be the case, and many changes are likely in the
coming years. More cities will probably become
hubs, as traffic cannot increase much further at
some large airports that have almost reached their
capacity limits using current technology.
The only question is where to hub,
not whether to hub. As the airline industry
evolves, it will be interesting to track what
happens to the air service provided to the com­
munities listed in table 3. Given the expected
growth in future air travel, cities on the first list
are more likely to receive hub service than cities
on the second list are to lose hub service.

REFERENCES

Bailey, Elizabeth E., David R. Graham, and David P. Kaplan. Deregulating the Airlines.
Cambridge, MA: The MIT Press, 1985.

Bauer, Paul W. "An Analysis of Multiproduct Technology and Efficiency Using the Joint Cost
Function and Panel Data: An Application to the U.S. Airline Industry." Ph.D. Dissertation,
University of North Carolina at Chapel Hill, 1985.

Butler, Richard V., and John H. Huston. "Actual Competition, Potential Competition, and the
Impact of Airline Mergers on Fares." Paper presented at the Western Economic Association
meetings, Vancouver, B.C., July 1987.

McShan, William Scott. "An Economic Analysis of the Hub-and-Spoke Routing Strategy in the
Airline Industry," Ph.D. Dissertation, Northwestern University, 1986.

Morrison, Steven, and Clifford Winston. The Economic Effects of Airline Deregulation.
Washington, D.C.: Brookings Institution, 1986.

White, Lawrence J. "Economies of Scale and the Question of 'National Monopoly' in the
Airline Industry," Journal of Air Law and Commerce, vol. 44, no. 3 (1979), pp. 545-573.

A Comparison of Risk-Based Capital and Risk-Based Deposit Insurance
by Robert B. Avery
and Terrence M. Belton

Robert B. Avery is a senior economist in the Division of
Research and Statistics at the Board of Governors of the
Federal Reserve System. Terrence M. Belton is a senior
economist at the Federal Home Loan Mortgage Corporation.
An earlier version of this paper was presented at the
Federal Reserve Bank of Cleveland's fall seminar on the
role of regulation in creating/solving problems of risk in
financial markets, November 3, 1986.
The authors would like to thank Randall W. Eberts,
Edward Ettin, Gerald Hanweck, Myron Kwast, James
Thomson, and Walker Todd for helpful comments and
suggestions.

Introduction
The perception of increased bank risk-taking has
raised concerns as to whether changes and
improvements are needed in our system of regu­
latory supervision and examination. These con­
cerns clearly underlie recent proposals for risk-based capital standards issued by all three bank
regulatory agencies—the Federal Reserve Board,
the Federal Deposit Insurance Corporation
(FDIC), and the Comptroller of the Currency— as
well as proposals by the FDIC and Federal Sav­
ings and Loan Insurance Corporation (FSLIC) for
risk-based deposit insurance premiums. None of
these approaches has, as yet, been implemented,
and each is still under active consideration by at
least one regulatory body.
As part of an ongoing evaluation of
the potential effectiveness of various methods of
controlling bank risk-taking, this paper presents a
comparison of risk-based capital and risk-based
deposit insurance premium proposals. Although
these proposals may appear to represent quite
different methods of controlling bank risk, the
results presented below suggest that this need
not be the case and that, if implemented prop­
erly, the two methods can produce a similar level
of bank risk-taking.
The paper also suggests that differ­
ences that exist between the two methods lie not
in the fact that one controls premiums and the
other capital levels, but that one prices risk and
the other sets a risk standard. This is discussed
informally in section I, while evidence of how
both a risk-based insurance and risk-based capital

system could be implemented using similar mea­
sures of risk is presented in the section that
follows.

I. Discussion
In the current regulatory environment, commer­
cial banks are subject to a fixed minimum level
of primary capital per-dollar of assets and a fixed
deposit insurance premium per-dollar of domestic
deposits regardless of the risk that they present to
the FDIC. As many critics have pointed out, this
presents a potential problem of incentives in that
banks may not bear the full social costs of
increased risk-taking. Both a risk-based capital
and risk-based insurance system are designed to
address this problem by inducing banks to inter­
nalize the expected costs that their risk-taking
imposes on the FDIC and society in general.1 The
programs appear to differ significantly, however,
in how they attempt to achieve this goal.
As proposed, a risk-based deposit
insurance system would explicitly price risk-taking behavior on the part of insured banks.
Periodically, the FDIC would assess the risk
represented by each bank and charge an insur­
ance premium reflecting the expected social

1

Another objective may be to distribute the costs of risk-taking
more equitably across banks even if such differences stem from

exogenous factors and if issues of moral hazard and allocative efficiency

are irrelevant.

TABLE 1
Risk Variables

Symbol      Definition
KTA         percent ratio of primary capital to total assets
PD90MA      percent ratio of loans more than 90 days past due to total assets
LNNACCA     percent ratio of nonaccruing loans to total assets
RENEGA      percent ratio of renegotiated loans to total assets
NCOFSA      percent ratio of net loan charge-offs (annualized) to total assets
NETINCA     percent ratio of net income (annualized) to total assets

Source: Board of Governors of the Federal Reserve System.

costs attributable to it.2 Because banks would in
principle bear the full expected cost of their
actions, they would either be deterred from
excessive risk-taking or would pay the full
expected costs to the FDIC.
A risk-based capital standard works
by setting a standard that, by absorbing losses,
limits the amount of risk an insured bank can im­
pose on the FDIC, rather than by explicitly pricing
risk. If the regulators determine that a bank
represents a risk above the allowable standard at
its current level of capital, they would require the
bank to raise more capital. By adjusting capital
“buffers,” regulators can control the size of poten­
tial losses irrespective of bank behavior.
The regulator uses information on
differences in risk-taking behavior across banks to
require different amounts of capital or coinsurance, not to charge different premiums.
Indeed, since adjustment of the capital buffer is
used to reduce the risk represented by each bank
to the same level, it is then appropriate that they
be charged a flat premium rate.3 Bank risk-taking
behavior may be deterred because banks would
recognize that they will incur higher expected
capital costs, an implicit price, even though banks
do not face explicit prices for risk. In both
schemes, overall system risk-taking would be
reduced because banks would take full account

of the expected consequences of their actions,
either through explicit insurance premiums or
implicit prices via higher capital costs.

2
If the FDIC cannot fully assess the ex-ante risk represented by each
bank, perhaps because monitoring costs would be excessive, then the
"optimal" risk premium would also include "penalties" over and above
the FDIC's estimate of each bank's expected social cost.

Current Proposals on Risk-Based Deposit
Insurance and Risk-Based Capital
In recent years, there have been several specific
proposals made by the federal regulatory agen­
cies for basing insurance premiums or capital
requirements on the perceived risk of depository
institutions. In 1986, for example, the FDIC asked
for legislation authorizing the adoption of a risk-based deposit insurance system and has devel-
oped a specific proposal for implementing such
a system. More recently, the Federal Reserve
Board, in conjunction with the Bank of England
and with other U.S. banking regulatory authorities
has published for public comment a proposal for
risk-based capital requirements.
The FDIC proposal for risk-based
deposit insurance utilizes two measures for
assessing bank risk-taking.4 The first measure is
based on examiner-determined CAMEL ratings for
individual commercial banks. CAMEL ratings,
which range from 1 through 5 (with 5 representing
the least healthy bank), are intended to measure
the bank's capital adequacy (C), asset quality (A),
management skills (M), earnings (E), and
liquidity (L). The FDIC's problem-bank list consists
of all banks with CAMEL ratings of 4 and 5.

The second measure of bank risk
employed in the FDIC proposal is a risk index
developed by the FDIC that is based on publicly
available Call Report data. The index is defined as:

(1)  I = .818 - .451 KTA + .211 PD90MA + .265 LNNACCA
         + .111 RENEGA + .151 NCOFSA - .347 NETINCA,
where all variables are defined in table 1. The
weights in the index were estimated from historical
data with a probit model that predicts whether
or not an individual bank is on the FDIC's problem-bank
list. The index can be interpreted as providing
a measure of the likelihood that a bank is a
problem bank. Banks with higher values of the
index are more likely to be problem institutions
and therefore more likely to impose higher
expected costs on the FDIC.
Premiums would be assessed,
under the FDIC proposal, by defining two pre­
mium classes. Banks having a positive value of
the risk index and a CAMEL rating of 3, 4, or 5,
would be classified as above-normal risk.


3

Assuming the risk-based capital requirement is binding so that no
institution holds capital in excess of its requirement.

4
The proposal is described in "Risk-Related Program," FDIC Discussion
Paper, September 20, 1985, and Hirschhorn, E., "Developing a Proposal
for Risk-Related Deposit Insurance," Banking and Economic Review,
FDIC, September/October 1986.


TABLE 2
Summary of Risk Weights and Major Risk Categories for State Member Banks and Bank Holding Companies

Category A1 (0 percent weight)
  Cash (domestic and foreign).
  Claims on Federal Reserve Banks.

Category A2 (10 percent weight)
  Short-term (one year or less) claims on U.S. Government and its Agencies.

Category A3 (25 percent weight)
  Cash items in process of collection.
  Short-term claims on domestic depository institutions and foreign banks, including foreign central banks.
  Claims (including repurchase agreements) collateralized by cash or U.S. Government or Agency debt.
  Claims guaranteed by the U.S. Government or its Agencies.
  Local currency claims on foreign central governments to the extent that bank has local currency liabilities.
  Federal Reserve Bank stock.

Category A4 (50 percent weight)
  Claims on U.S. Government-sponsored Agencies.
  Claims (including repurchase agreements) collateralized by U.S. Government-sponsored Agency debt.
  General obligation claims on states, counties and municipalities.
  Claims on multinational development institutions in which the U.S. is a shareholder or contributing member.

Category A5 (100 percent weight)
  All other assets not specified above, including:
  Claims on private entities and individuals.
  Long-term claims on domestic and foreign banks.
  All other claims on foreign governments and private obligators.

Source: Board of Governors of the Federal Reserve System.

These institutions would be charged an annual pre-
mium equal to one-sixth of one percent of
domestic deposits, or twice the current premium
level. All other institutions (that is, institutions
having either a negative value for the risk index
or a CAMEL rating of 1 or 2) would be classified
as normal-risk banks and be charged the current
premium of one-twelfth of one percent.
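A minimal sketch of this two-class assessment rule is given below. The function is purely illustrative: it simply encodes the classification and the one-sixth versus one-twelfth of one percent rates described above, and the example inputs are hypothetical.

def fdic_premium_rate(risk_index: float, camel_rating: int) -> float:
    """Annual premium as a fraction of domestic deposits under the proposed two-class scheme."""
    above_normal = (risk_index > 0) and camel_rating in (3, 4, 5)
    # One-sixth of one percent for above-normal risk, one-twelfth of one percent otherwise.
    return 0.01 / 6 if above_normal else 0.01 / 12

print(fdic_premium_rate(0.3, 4))    # above-normal risk bank
print(fdic_premium_rate(-0.2, 2))   # normal-risk bank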
The risk-based capital requirement
proposed by the Federal Reserve Board, in con­
junction with other regulatory authorities, mea­
sures bank risk-taking in a somewhat different
fashion than the FDIC’s deposit insurance pro­
posal. Capital requirements would be assessed,
under the Board’s proposal, as a fraction of the
on- and off-balance-sheet activity of individual
commercial banks.5

5
The proposal is described in two press releases of the Board of
Governors of the Federal Reserve System titled "Capital Maintenance:
Revision to Capital Adequacy Guidelines," dated February 12, 1987
and March 18, 1987.

Specifically, the proposal

defines five asset categories that are shown in
table 2. These categories are intended to measure,
in broad terms, assets having varying degrees of
credit risk. Cash and claims on Federal Reserve
Banks (category A1) are deemed to have no credit
risk and require no capital support. Commercial
loans to customers other than banks (category A5)
are deemed to have the greatest amount of credit
risk. The minimum primary capital level, K,
required under the proposal would be defined as:

(2)  K = a(0 A1 + .10 A2 + .25 A3 + .5 A4 + A5),

where a denotes the minimum required ratio (not
yet specified in the proposal) and A1 to A5
denote the asset categories defined in table 2.

The requirement shown in equation (2)
effectively imposes different minimum
capital standards on each of the five asset categories.
If a is set at 7 percent, for example, all
commercial loans, except those to other banks
(category A5), would effectively have minimum
required capital ratios equal to 7 percent; claims
on U.S. government-sponsored agencies (category A3)
would have required capital ratios equal
to 1.75 percent; and short-term treasury securities
(category A2) would have required capital ratios
of 0.7 percent.6
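The arithmetic of equation (2) can be sketched as follows; the category balances are hypothetical, and a is set to the 7 percent value used in the example above.

def minimum_capital(a, a1, a2, a3, a4, a5):
    # Equation (2): K = a(0*A1 + .10*A2 + .25*A3 + .5*A4 + A5).
    return a * (0.00 * a1 + 0.10 * a2 + 0.25 * a3 + 0.50 * a4 + 1.00 * a5)

# Hypothetical balance sheet (millions of dollars) with a = 7 percent.
print(minimum_capital(0.07, a1=50, a2=100, a3=200, a4=150, a5=500))
# Effective minimum ratios by category: 0, 0.7, 1.75, 3.5, and 7.0 percent.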
It is clear that a major difference
between the risk-based capital and risk-based
deposit insurance proposals just described is the
type of information that is used to assess bank
risk-taking. The risk-based deposit insurance
proposal focuses on measures of bank perfor­
mance, such as earnings and asset quality; the
risk-based capital proposal focuses on the types
of activities in which banks are involved. The
former view is based on statistical evidence that
suggests these performance measures provide the
best forecast of future bank problems.7 The latter
approach to measuring bank risk-taking is based
on the view that certain activities are inherently
more risky than other activities and that these
more risky activities should be capitalized at
higher levels.
In contrasting the two approaches
to measuring bank risk, it should be emphasized
that the different measures used do not represent
an inherent difference between risk-based capital
and risk-based insurance. Indeed, both systems
could, in principle, use identical information in
assessing the risk of individual banks. The differ­
ence between the two systems lies not in what
information the regulator collects, nor in how it
uses that information to assess bank risk; rather,
the difference results primarily because one
system controls risk by a standard and the other by
explicit prices. In the next subsection, we describe
how these differences affect both banks
and bank regulators.

6

In addition to imposing capital requirements on various balance-sheet asset categories, the proposal also addresses the risk from

off-balance-sheet activities. Capital requirements for those activities are
determined by first converting the face-amount of off-balance-sheet
items to a balance-sheet equivalent. This is done by multiplying the face
amount of the off-balance-sheet contract by an appropriate credit con­
version factor. The resulting balance-sheet equivalent is then assigned to
one of the five risk categories depending on the identity of the obligator
and, in certain cases, on the maturity of the instrument.

7
In addition to the empirical work on predicting problem banks, the
literature also suggests that earnings, capital and asset quality measures
are important predictors of future bank failure. See J. Bovenzi, J. Marino,
and F. McFadden, "Commercial Bank Failure Prediction Models," in
Economic Review, Federal Reserve Bank of Atlanta (November 1983)
and Robert B. Avery, Gerald A. Hanweck and Myron L. Kwast, "An
Analysis of Risk-Based Deposit Insurance for Commercial Banks,"
Proceedings of a Conference on Bank Structure and Competition (1985),
Federal Reserve Bank of Chicago.
Differences Between Risk-based Capital and
Risk-based Deposit Insurance
Because one system is based on a minimum
standard and the other on a price, a number of
differences are likely to exist between risk-based
capital and risk-based insurance. One difference
is that enforcement of a risk-based capital system
is likely to offer the regulator more flexibility and
potential for discretion than a risk-based pre­
mium system. If an annual insurance assessment
appeared on a bank’s income statement, and there­
fore was public, it would be difficult to waive or
adjust the fee without alerting competing banks,
financial market participants, and the public. More­
over, enforcement would likely be very mechani­
cal. Banks would be assessed a fee, and examin­
ers would have to deal individually only with
those banks that could not or would not pay.
However, enforcement of a risk-based capital standard is likely to be of a very dif-
ferent nature. Enforcement might focus only on
those firms close to or under the standard, and
would likely entail more individual examiner
input. Moreover, the judgement of whether or
not a bank with a continually changing balance
sheet meets the standard—and if not, how long it
has to comply— is likely to offer considerable
potential for discretion. Thus, in a regulatory
environment based on judgement and discre­
tionary supervision and regulation, a risk-based
capital standard might be more attractive.
Another difference is that because
a risk-based premium system prices risk rather
than limiting it by forced capital adjustments, it is
likely to offer banks a more flexible, and therefore
potentially more efficient, means of response.
Under a risk-based capital system, a risky bank
facing abnormally high capital costs does not
have the option of paying the FDIC for the right
to take excessive portfolio risk even though this
may be its most cost-effective response.8 This fea­
ture is likely to favor a risk-based premium
approach under virtually all regulatory environ­
ments. It might be argued that banks should not
be allowed too much freedom as they may not
properly respond to prices. However, this could
be accommodated in a risk-based premium sys­
tem by shutting down banks with excessive risk-taking or by altering their behavior by other
supervisory means.

8
Technically, raising capital is not the only adjustment available to the
bank as it can adjust any factor used in the regulator's assessment of
risk. Thus, the relevant price banks face is the price of the minimum-cost
method of meeting the standard. If this price is not equal to the
regulator's price, there will be an inefficiency.
The two proposals are also likely
to have significant differences in the amount of
information that they reveal to the public. At


most, a risk-based capital standard would reveal
only whether or not a bank met the standard.
One could not even infer that a bank adding cap­
ital was doing so because it had become exces­
sively risky; the extra capital might be needed
because of anticipated expansion, etc. However,
it would be very difficult to keep a bank’s insur­
ance premium confidential. Low-risk banks
would have an incentive to advertise this fact and
investors would have incentives to identify high-risk banks. This might cause particular problems
in the use of confidential data to calculate premi­
ums. Knowledge of a bank’s premium could be
used to draw strong inferences about values of
any confidential inputs used. To the extent that
this would deter the use of confidential data in a
risk-based premium system, it might mean that
risk assessment with a risk-based capital system
would be more accurate and therefore fairer.
Moreover, even if confidential data
were not used, public disclosure of a bank’s pre­
mium might create the possibility of bank runs.
The official declaration of the FDIC that a bank
was risky, even if based on a mechanical calcula­
tion from publicly available balance sheet data,
might be sufficient to induce significant
withdrawals.
Yet another difference between the
two methods is likely to occur in the regulatory
response lag. Because it is based on a standard, a
risk-based capital system may have a built-in
response lag that is not present with a risk-based
premium system. Under a risk-based premium
system, a bank could be required to compensate
the FDIC immediately for its risk exposure. In
contrast, particularly if it entails raising new capi­
tal, adherence to a capital standard would likely
entail some lag, thereby delaying the ability of
the insurer to control its risk exposure.
Finally, even if the FDIC’s assess­
ment rate were adjusted so that it bore equivalent
actuarial risk, there may be some differences in
the number of bank failures under the two sys­
tems. Either system should reduce the number of
bank failures from current levels because of the
reduced risk-taking that should result when banks
are required to bear the full costs of their risk-taking.9 The magnitude of this reduction, how-
ever, may differ for the two systems. As noted ear­
lier, risk-based deposit insurance systems allow
banks the flexibility of holding capital levels

below those required under a comparable risk-based
capital system and of offsetting the higher
risk by paying larger insurance premiums. For
those banks that opt to hold capital levels below
those required under a capital standard and pay
correspondingly larger insurance premiums, the
incidence of failure would be higher under a risk-based
insurance system than that observed under
a risk-based capital standard.

9
Some critics have charged that a risk-based capital or deposit
insurance system might actually increase failures and incentives for
risk-taking because regulators would measure risk poorly or misprice it.
While this may be true, it should be pointed out that the current system
assumes all banks represent the same risk. The relevant question, therefore,
is not whether regulators would do a perfect job, but whether they
could differentiate among banks at all.
By the same token, a risk-based
insurance system would provide other banks the
flexibility of holding capital levels well above
those required under a risk-based capital standard
and of being compensated for this increased capi­
tal by paying lower insurance premiums. For
such banks, the incidence of failure will be lower
under a risk-based insurance system than under a
capital standard. This difference between the two
systems stems from the fact that a capital standard
does not reward banks for having capital greater
than the minimum standard; a risk-based insur­
ance system provides such a reward in the form
of a reduced premium.
The foregoing analysis suggests
that, in the aggregate, it is unclear which of the
two systems would reduce bank failures by the
greatest amount. Prediction of whether an indi­
vidual bank’s capital would be greater under a
risk-based capital standard than under a risk-based premium system depends on the cost of
capital faced by the bank and upon the degree to
which the risk-based insurance system penalizes
banks for reductions in their capital. When the
cost of raising capital in the private market (or
other adjustment methods) is high relative to the
penalty rate charged by the deposit insurer for
reductions in capital, banks will be more likely to
choose lower capital levels under a risk-based
insurance scheme than that required under a risk-based capital standard. Conversely, when the insur-
ance system assigns a relatively steep penalty rate
for reductions in bank capital, individual banks
would be more likely to hold larger amounts of
capital under a risk-based insurance system,
implying a lower incidence of bank failure.
Despite these differences, if based
on the same method of assessing bank risk,
proposals for risk-based capital and risk-based
insurance should have a similar impact on bank
risk-taking. To provide a glimpse as to how such
proposals might work, a practical system of risk-based deposit insurance and risk-based capital is
developed and presented in the next section.
Both proposals are based on the same method of

TABLE 3
Sample Variable Statistics

Variable    Means of Failed Banks    Means of Nonfailed Banks
KTA          6.14                     9.26
PD90MA       3.41                     0.77
LNNACCA      3.64                     0.57
RENEGA       0.28                     0.07
NCOFSA       2.89                     0.43
NETINCA     -2.94                     0.90

Source: Board of Governors of the Federal Reserve System.

assessing bank risk. As this represents only part of
an ongoing effort to develop such systems, we
only briefly summarize our work.10

II. A Model of Bank Risk
Both the risk-based capital and risk-based insur­
ance premium proposals require an accurate
method of assessing bank risk. Forming an index
or rank ordering of banks by risk entails two
steps. First, variables must be selected that are
good predictors of risk; and second, weights must
be calculated to transform values of the vector of
predictor variables into a single-valued index.
Development of a good index is a
substantial task and is well beyond the scope of
this paper. It was decided somewhat arbitrarily,
therefore, to use the same six predictor variables
used by the FDIC in its risk-based insurance pro­
posal (see table 1). One good method of forming
weights for the index is to use historical data to
“fit” values of the predictor variables to an observ­
able ex-post measure of loss. Candidates for ex­
post measures of bank performance might be
bank failure and FDIC losses when failure occurs,
or bank earnings or loan charge-offs. Although we
use other measures of bank performance in other
work, for the illustrative proposals developed for
this paper it was decided to utilize bank failure.
The basic strategy followed was to use historical
data on bank failure to estimate weights that
could be used to transform values of the six vari­
ables listed in table 1 into an index of risk. This

index forms the basis of both our risk-based capi­
tal and risk-based deposit insurance proposals.
In selecting data used in this study
for both estimation and model evaluations, the
following specific procedures were used. The
sample was restricted to insured commercial
banks headquartered in the United States. Mutual
savings banks were excluded. Microdata were col­
lected for each bank for each of the five semian­
nual call and income reports filed from Decem­
ber 1982 through December 1984.11
Each of the “calls” represented a
potential observation with the following adjust­
ments (thus each bank could appear in the sam­
ple five times). Because new banks are thought
to follow a different behavioral process, all calls
were eliminated whenever a bank had not been
in continuous existence for three years at that
point. Banks without assets, deposits, or loans
were also eliminated. The sample was further
reduced by eliminating all banks with assets
above $1 billion (approximately two percent of
all banks) because of the virtual absence of large
bank failures.12 These adjustments reduced the
banks available in December 1984, for example,
from 14,460 to 13,388. The actual estimation
sample was further reduced by only using 10
percent (randomly selected) of the calls reported
by banks that did not fail within a year of the call.
This stratification of the nonfailed
banks (which was corrected for in the estimation
procedure) was done to create an estimation
data-set of manageable size. All calls where the
bank failed within a year of the call were used
(thus a failed bank could contribute two calls to
the sample). The final estimation sample con­
sisted of 6,869 observations, 160 of which repres­
ented calls for banks that failed within six months
of the call and 138 for banks that failed between
six months and a year after the call.
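The sample-selection rules just described can be summarized in a short filtering routine. The column names used below (age_years, assets, deposits, loans, failed_within_year) are illustrative stand-ins for the underlying call report fields, which are not listed in the article.

import pandas as pd

def build_estimation_sample(calls: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    # Drop new banks, banks with no assets, deposits, or loans, and banks over $1 billion.
    kept = calls[
        (calls["age_years"] >= 3)
        & (calls["assets"] > 0)
        & (calls["deposits"] > 0)
        & (calls["loans"] > 0)
        & (calls["assets"] <= 1_000_000_000)
    ]
    # Keep every call filed within a year of a failure, but only a 10 percent
    # random sample of the remaining (nonfailed) calls.
    failed = kept[kept["failed_within_year"]]
    nonfailed = kept[~kept["failed_within_year"]].sample(frac=0.10, random_state=seed)
    return pd.concat([failed, nonfailed])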
The data used for the study were
taken directly from the bank’s filed call report,
with slight adjustment. June values for the two
income variables—charge-offs and net income—
were recalculated to reflect performance over the
previous year rather than the 6-month period
reported. Means of the variables for the estima­
tion data are given in table 3. The data were fit
using a logistic model to predict bank failure


where a bank was deemed to have failed if it
failed within a year following the call. The estimated
risk index is:

(3)  R = -2.42 - .501 KTA + .428 PD90MA + .314 LNNACCA
        (3.07)  (4.89)      (5.16)         (4.31)

              + .269 RENEGA + .223 NCOFSA - .331 NETINCA,
                (1.07)         (1.60)        (2.68)

where the logistic form of the model implies that
the probability that a bank will fail within a year is

(3a)  PROB = 1 / (1 + exp(-R)).

t-statistics for the estimated coefficients are given
in parentheses under each weight.13 All weights
are statistically significant except those for NCOFSA
(which has a perverse sign) and RENEGA.14

Although the overall fit of the model
suggests that predicting bank failure is difficult,
the failed banks in the sample had an average
predicted probability of failure of 0.24, a number 69
times larger than the average predicted failure
probability of nonfailed banks in the sample.
Hence, the model clearly does have some ability
to discriminate between high- and low-risk banks.

10
See Robert B. Avery and Gerald A. Hanweck, "A Dynamic Analysis
of Bank Failures," Proceedings of a Conference on Bank Structure and
Competition (1984), Federal Reserve Bank of Chicago; Robert B. Avery,
Gerald A. Hanweck and Myron L. Kwast, "An Analysis of Risk-Based
Deposit Insurance for Commercial Banks," Proceedings of a Conference
on Bank Structure and Competition (1985), Federal Reserve Bank of
Chicago; and Terrence M. Belton, "Risk-Based Capital Standards for
Commercial Banks," presented at the Federal Reserve System Conference
on Banking and Financial Structure, New Orleans, Louisiana,
September 19-20, 1985.

11
More time periods could have been used. However, it was decided
to limit the length of the estimation period so that an "out of sample"
measure of the model's performance could be computed.

12
The elimination of large banks had virtually no effect on the
results.
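To make equations (3) and (3a) concrete, the sketch below computes the index and the implied one-year failure probability for a single bank. Evaluating it at the failed-bank means from table 3 gives a probability of roughly 0.23, broadly consistent with the 0.24 average reported above (the two need not coincide exactly, since a probability evaluated at mean ratios differs from a mean of probabilities).

from math import exp

def failure_probability(kta, pd90ma, lnnacca, renega, ncofsa, netinca):
    # Risk index R from equation (3) ...
    r = (-2.42 - 0.501 * kta + 0.428 * pd90ma + 0.314 * lnnacca
         + 0.269 * renega + 0.223 * ncofsa - 0.331 * netinca)
    # ... and the logistic transformation from equation (3a).
    return 1.0 / (1.0 + exp(-r))

print(failure_probability(6.14, 3.41, 3.64, 0.28, 2.89, -2.94))  # means of failed banks
print(failure_probability(9.26, 0.77, 0.57, 0.07, 0.43, 0.90))   # means of nonfailed banks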

III. Risk-Based Deposit Insurance Premiums
Several somewhat arbitrary assumptions were
used to convert the estimated risk-assessment
model into a risk-based deposit insurance premium
system. First, the FDIC's expected cost of
insuring each bank (per dollar of deposits) was
computed as the estimated probability of failure
(from the formula in [3]) times the average FDIC
loss when failure occurs (13.6 cents per dollar).15
Assessment of this premium, which averaged 7.2
basis points per dollar of deposits in December
1985, would be actuarially fair if there were no
monitoring or social costs. Since these factors are
not known, and to provide comparability with the
current system, an intercept (or flat premium) of
1.1 basis points per dollar of deposits was added
to the risk-based assessment so that the total
assessment would be equivalent to the FDIC's
actual revenues as of December 1985 (with the
current flat-rate assessment of 8.3 basis points).
While certainly not a necessary ingredient of a
risk-based system, the FDIC revenue constraint
was adopted in order to allow the concentration
of effort and discussion on estimating the risk-based
component of the premium while not having to
address the issue of what the appropriate level of
gross revenues should be. Finally, premiums were
"capped" at 100 basis points because of the belief
that premiums above this level would be difficult
to collect.

13
Coefficients for a logistic model have a less straightforward
interpretation than those in regression models. When multiplied by
PROB(1 - PROB), each coefficient represents the expected change in the
probability of failure resulting from a one-unit change in the variable.
Thus, if a bank with a probability of failure of 0.1 raised its capital ratio
one percentage point, the model implies that its probability of failure
would fall by .045, that is, (-.501 x .1 x .9). Although they were estimated
using the same variables, and with data drawn from similar time periods,
the coefficients in (3) differ somewhat from those in (1). This occurs, in
part, because the FDIC model was estimated using a probit rather than
logistic specification, which affects the scaling of the variables (logistic
coefficients should be approximately 1.8 times as large). It also stems
from the fact that the FDIC used problem-bank status rather than bank
failure as a dependent variable.

14
The model's log-likelihood R squared, a concept similar to the
R squared in a regression model, is 0.22. The sign on the weight of
NCOFSA may not be as perverse as it appears. The coefficient on
charge-offs represents the marginal impact on failure holding net income
constant. Because charge-offs are also in net income, they are effectively
counted twice. The positive sign on charge-offs indicates they have less
impact on failure than other contributory factors toward earnings. The
total impact of charge-offs (the sum of the coefficients of NCOFSA and
NETINC) has the expected negative sign.

15
This number is the average ratio of the FDIC's loss reserve to total
domestic deposits calculated for banks that failed between 1981 and
1984. See Avery, Hanweck, and Kwast, "An Analysis of Risk-based
Deposit Insurance."
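Under these assumptions, the premium calculation reduces to a few lines. The sketch below converts a failure probability into basis points of domestic deposits using the 13.6-cent loss rate, adds the 1.1 basis-point flat component, and applies the 100 basis-point cap; the example probability is chosen only to show that a bank of roughly average estimated risk ends up near the current 8.3 basis-point assessment.

def risk_based_premium_bp(prob_failure, loss_cents_per_dollar=13.6,
                          flat_bp=1.1, cap_bp=100.0):
    # 13.6 cents per dollar of deposits equals 1,360 basis points, so the
    # expected-loss component is the failure probability times that figure.
    expected_loss_bp = prob_failure * loss_cents_per_dollar * 100.0
    return min(cap_bp, expected_loss_bp + flat_bp)

print(risk_based_premium_bp(0.0053))  # roughly 8.3 basis points
print(risk_based_premium_bp(0.10))    # a high-risk bank: hits the 100 basis-point cap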
Estimates of December 1985 risk-based premiums under this system are presented
in table 4. Premiums are computed across seven
asset-size classes of banks (rows [1] through [7])
and six premium-size intervals (columns [l]
through [6]). It should be emphasized that while
premiums for banks with over $1 billion in assets
are computed and reported, these are extrapola­
tions as no banks of this size were included in
the sample used to estimate the risk index. Rows
(8) and (9) show the premium distribution for
banks that subsequently failed in 1986 and 1987
(through September 30), giving an idea of the
system’s capacity to identify and penalize risky
banks. Row (10) and column (7) present totals
for all banks. The first number in each cell is the
average risk-based premium expressed in basis
points of total domestic deposits. The second
number is the average estimated (percentage)
probability of failure by banks in that cell, and the
third figure is the number of banks, based on the
total of 13,522 banks used to compute the pre­
mium, that are predicted to fall into each size and
risk-class category.
The primary conclusion to be drawn
from table 4 is that the risk-based system
depicted there would divide banks into three
major groups. First, even with the FDIC revenue
constraint imposed, the vast majority of banks

15. This number is the average ratio of the FDIC's loss reserve to total domestic deposits calculated for banks that failed between 1981 and 1984. See Avery, Hanweck, and Kwast, "An Analysis of Risk-based Deposit Insurance."

TABLE 4
Estimated Commercial Bank Risk-based Premiums, December 1985
(Basis points of total domestic deposits)
First number is the average premium for banks in the cell. Second number is average estimated probability of failure in percent. Third number is number of banks.
Rows: (1)-(7) asset-size classes (under $10 million; $10-$25; $25-$50; $50-$100; $100-$500; $500-$1,000; over $1,000 million); (8) banks failing in 1986; (9) banks failing in 1987; (10) all banks.
Columns: (1)-(6) premium-size classes (under 8.3; 8.3-12.4; 12.5-24; 25-49; 50-99; and 100 basis points); (7) all banks.
Source: Board of Governors of the Federal Reserve System.

The primary conclusion to be drawn from table 4 is that the risk-based system depicted there would divide banks into three major groups. First, even with the FDIC revenue constraint imposed, the vast majority of banks would pay a lower insurance premium under the estimated risk-based scheme than the current gross premium of 8.3 basis points. As may be seen from the table, this is true for all size classes, with the proportion paying less ranging from a low of 75 percent to a high of 90 percent. Overall, 89 percent of all institutions are estimated to pay less, with an average premium of 3.0 basis points.

The second group of banks is composed of the 9 percent of all banks that would pay an increased premium ranging from a low of 8.3 basis points to 99 basis points (columns 2 through 5). This range of almost 92 basis points is quite large and appears wide enough both to provide a strong incentive to alter current risk-taking behavior by banks and to deter excessive risk-taking in the future. Some perspective on the size of the estimated risk-based premium is given by noting that the average bank's return on total deposits in 1985 was only 82 basis points. The average bank's premium would have been almost 1 percent of its previous year's total capital, and somewhat over 4 percent of its net income. But in the higher risk categories (columns 4-6), the capital percentages range up to 25.5 percent.


The third group of banks is the one
percent that would have been asked to pay an
insurance premium of over one percent (capped
at 100 basis points) of total domestic deposits in
1985 (column 6 of table 4). For these banks it is
not unusual for the average expected cost imposed
on the FDIC to exceed 500 basis points. Indeed,
the total cost that would have been expected to
be imposed on the FDIC in 1986 by the 211
banks in column 6 was $477 million, or 25 per­
cent of the total expected cost of $1.9 billion for
all 13,522 commercial banks for which premiums
were computed. Clearly, because the size of the
assessment might be sufficient, by itself, to force
these banks into insolvency, special measures
might be needed to deal with them.
The ability of the system to identify risky banks in advance is illustrated by the premiums that would have been charged in December 1985 to banks that subsequently failed. Over 87 percent of the banks that failed in 1986 would have been required to pay higher premiums than they pay currently, a figure in sharp contrast to the overall figure of 11 percent. Over one-half of the 1986 failed banks would have been assessed premiums at the highest rate of 100 basis points. Figures for banks that failed in 1987 are somewhat less dramatic. Still, 67 percent of 1987 failed banks would have been required to pay higher premiums in 1985, and almost one-fourth would have been placed in the highest risk class.

IV. Risk-based Capital
Conversion of the bank failure model estimates
into a risk-based capital system was somewhat
more complicated than procedures used for the
risk-based insurance premium system. To ensure
comparability with the current system, it was
decided to set a standard so that if all banks held
exactly the required capital ratio, the expected
losses to the FDIC would be identical to its
expected losses under the current system. It was
determined that this would occur if each bank in
December 1985 were required to hold enough cap­
ital so that its probability of failure was 0.7 per­
cent (about 95 expected bank failures per year).
A floor and ceiling were also im­
posed so that no bank would be required to have
a capital ratio of less than 3 percent nor more
than 15 percent. This particular standard was
chosen in order to make the expected losses to
the FDIC of the risk-based capital system as close
as possible to the risk-based insurance system out­
lined in the previous section. Imposition of the 3 percent minimum floor was similar to the addition of an intercept term in the risk-based premium system, and is a tacit admission that any realistic risk-based capital system would have to have a floor. The 15 percent maximum capital standard is similar to the cap imposed on the risk-based premium.
Solution for the amount of capital each bank would have to hold follows straightforwardly from the estimated risk index. The formula given in equation (3a) implies that a bank with a risk index value of -4.95 would have a probability of failure of precisely 0.7 percent. Equation (3), therefore, implies that the required minimum capital level, KTA*, must satisfy

(4)   -4.95 = -2.42 - .501KTA* + .428PD90MA + .314LNNACCA + .269RENEGA + .223NCOFSA - .331NETINCA,

or,

(5)   KTA* = 5.04 + .854PD90MA + .627LNNACCA + .537RENEGA + .445NCOFSA - .661NETINCA,
which can be solved for each bank.16
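A minimal sketch of how equation (5) could be applied bank by bank follows. It assumes the regressors are expressed in the same units used to estimate the model, and it folds in the 3 percent floor and 15 percent ceiling described above; the function name is illustrative, not from the article.

def required_capital_ratio(pd90ma, lnnacca, renega, ncofsa, netinca):
    """KTA*: capital, in percent of total assets, that puts the risk index at -4.95."""
    kta_star = (5.04 + 0.854 * pd90ma + 0.627 * lnnacca + 0.537 * renega
                + 0.445 * ncofsa - 0.661 * netinca)
    # Apply the 3 percent floor and 15 percent ceiling described in the text.
    return min(max(kta_star, 3.0), 15.0)

# With every other regressor at zero, the requirement is simply the 5.04 intercept.
print(required_capital_ratio(0, 0, 0, 0, 0))   # 5.04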
Table 5 gives an indication as to
how a risk-based capital system might work. It
shows the December 1985 distribution of
required capital by bank-size class and future
failure. Rows (1) through (7) represent banks of
increasing size, row (8) shows banks that failed
in 1986, row (9) shows banks that failed in 1987
(through September 30), and row (10) shows the
sum of all banks. The columns show the number
and percent of banks in each size class that
would have been assigned to various required
capital classes. For each cell, the first number
given is the average required capital level for
banks in the cell, the second number is the per­
centage of banks that would have to raise capital
to meet the new standard, and the third number
is the number of banks in the cell.
The numbers in table 5 suggest
several interesting conclusions. Eighty-six percent
of all banks would have a risk-based capital
assessment below 6.5 percent. A middle group
would be required to hold capital ratios between
6.5 and 10 percent; and a small group (3.4 per­
cent of the total) would have to hold capital of
over 10 percent of assets. There is an indication
that banks with higher risk already hold more
capital than required. Thus, almost 92 percent of
banks would not have to raise more capital under
the risk-based standard.

"I

Z '

Jl O

The formula implies that a bank would

reduce

its index value

by 0.501 for each percentage point rise in its capital ratio.

Thus, a bank with a 5.5 percent capital ratio and a risk index- of -3.70
would be required to raise its capital ratio 2.5 percentage points to

8

percent, that is 2.5 = [4.95 - 3 ,71]/.5 0 1, Banks with risk indices below
-4.95 would be allowed to divest one percentage point of capital for
each 0.501 they were below -4.95.
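The footnote's arithmetic can be restated as a one-line calculation. This is a minimal, self-contained sketch with illustrative names, using the 0.501 capital weight and the -4.95 target index from the text.

def capital_shortfall(current_index, target_index=-4.95, capital_weight=0.501):
    """Percentage points of additional capital needed; a negative value is slack."""
    return (current_index - target_index) / capital_weight

# The bank in the footnote: a 5.5 percent capital ratio and a risk index of -3.70
# would need roughly 2.5 more points of capital, i.e. a ratio near 8 percent.
print(round(capital_shortfall(-3.70), 1))   # 2.5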

TABLE 5
Estimated Commercial Bank Risk-based Required Capital, December 1985
(Percent of total assets)
Each cell shows three numbers, separated by slashes. First number is the average capital ratio for banks in the cell. Second number is percent of banks that would have to raise capital. Third number is number of banks.

Asset Size Class        Required Capital Class (percent of total assets)
($ millions)            (1) <5.5        (2) 5.5-6.4     (3) 6.5-7.4     (4) 7.5-9.9     (5) 10.0-14.9   (6) 15.0        (7) All Banks

(1)  <$10               4.6/0.0/529     6.0/1.0/198     7.0/3.3/119     8.5/27.7/130    11.8/76.1/46    15.0/84.6/13    6.1/8.5/1,035
(2)  $10-$25            4.7/0.1/1,936   5.9/0.9/755     7.0/9.0/365     8.5/50.0/326    11.6/92.9/141   15.0/97.1/35    5.9/10.4/3,558
(3)  $25-$50            4.8/0.2/2,158   5.9/1.1/749     6.9/14.0/336    8.5/54.0/252    11.8/95.7/92    15.0/100.0/15   5.7/8.3/3,602
(4)  $50-$100           4.8/0.4/1,752   5.9/3.0/535     6.9/16.7/239    8.4/53.8/158    11.7/90.2/61    15.0/91.7/12    5.6/7.8/2,757
(5)  $100-$500          4.9/0.1/1,366   5.9/4.0/448     6.9/24.1/116    8.3/69.8/96     11.7/100.0/31   15.0/100.0/3    5.5/7.2/2,060
(6)  $500-$1,000        4.9/1.5/137     5.9/10.8/37     6.9/27.8/18     8.7/100.0/6     10.9/100.0/3    15.0/100.0/1    5.5/10.4/202
(7)  >$1,000            5.0/3.1/191     5.9/29.0/93     6.8/47.4/19     8.6/100.0/4     10.2/100.0/1    0.0/0.0/0       5.4/15.3/308
(8)  Banks failing
     in 1986            4.6/0.0/5       5.9/33.3/3      7.1/53.3/15     9.0/86.4/22     12.4/98.1/54    15.0/100.0/34   11.5/86.5/133
(9)  Banks failing
     in 1987            5.0/9.1/11      6.0/16.7/12     6.8/21.0/19     8.8/75.5/49     12.1/96.7/30    15.0/72.7/11    9.2/61.4/132
(10) All Banks          4.8/0.3/8,069   5.9/2.9/2,815   6.9/13.7/1,212  8.4/51.1/972    11.7/91.7/375   15.0/94.9/79    5.7/8.8/13,522

Source: Board of Governors of the Federal Reserve System.

However, there is a small group that would have to raise a substantial
amount of additional capital. The efficiency of a
risk-based system is evident from the fact that
aggregate bank capital would be reduced by 18
percent from the actual December 1985 total, yet
expected FDIC losses would be exactly the same
as under the current system. This happens
because the risk-based system shifts capital to
those banks most likely to fail.

The evidence of the banks that
failed in 1986 and 1987 is particularly telling. All
but 18 of the 133 banks that failed in 1986 would
have been required to raise additional capital in
December 1985. As a group, these banks would
have been required to almost double their aggre­
gate capital. Over 60 percent of the banks that
failed in 1987 would have been required to raise
additional capital and over 90 percent would
have been assigned a capital ratio above the cur­
rent standard.

V. Final Comments
The systems presented here are meant to be illus­
trative and would probably require substantial
modification before they could be actually imple­
mented. They do show, however, that both risk-based capital and risk-based insurance systems
could be constructed that discriminate between
banks in a way that would likely affect behavior.
The similarities between the distributions of banks shown in the tables summarizing the two proposals are striking. This, however,
should not be surprising since both systems are
based on the same risk measure. Indeed, if we
had arrayed banks by the amount of new capital
they would have to raise, instead of by required
levels, the rank orderings of banks in the two sys­
tems would have been identical. They differ in
the arrangements shown only because some
banks that would otherwise have higher risk hold
more capital than required under the current sys­
tem, and thus, would reduce their premiums.
This does not mean that the two
systems would have identical impacts on bank
behavior or on overall system risk. As argued ear­
lier, the regulatory environment surrounding
each system is likely to differ. If banks face prices
for risk in the capital market different from those
charged by the FDIC, there will be inefficiencies
in a risk-based capital standard that could pro­
duce different levels of system risk.
The incentives for banks to alter
their risk-taking activities are very likely to differ
between the two systems. It is not clear, however,
that the impact of such differences would be
major. Both systems share a common basis in the
principle of differentially regulating banks accord­
ing to the risk they represent to society. Imple­
mentation of either type of system is likely to
lead to significant progress in the battle to control
bank risk.

Economic Review

Quarter II 1986
Metropolitan Wage Differentials:
Can Cleveland Still Compete?
by Randall W. Eberts
and Joe A. Stone
The Effects of Supplemental Income
and Labor Productivity on Metropolitan
Labor Cost Differentials
by Thomas F. Luce
Reducing Risk in Wire Transfer Systems
by E.J. Stevens

Quarter III 1986
Exchange-Market Intervention:
The Channels of Influence
by Owen F. Humpage
Comparing Inflation Expectations
of Households and Economists: Is
a Little Knowledge a Dangerous Thing?
by Michael F. Bryan
and William T. Gavin
Aggressive Uses of Chapter 11
of the Federal Bankruptcy Code
by Walker F. Todd

Quarter IV 1986
Disinflation, Equity Valuation,
and Investor Rationality
by Jerome S. Fons
and William P. Osterberg
Identifying Amenity and Productivity Cities
Using Wage and Rent Differentials
by Patricia E. Beeson
and Randall W. Eberts
The Collapse in Gold Prices:
A New Perspective
by Eric Kades
FSLIC Forbearances to Stockholders and
the Value of Savings and Loan Shares
by James B. Thomson
"Don't Panic": A Primer
on Airline Deregulation
by Paul W. Bauer

Quarter I 1987
Concentration and Profitability
in Non-MSA Banking Markets
by Gary Whalen
The Effect of Regulation on
Ohio Electric Utilities
by Philip Israilevich
and K.J. Kowalewski
Views from the Ohio Manufacturing Index
by Michael F. Bryan
and Ralph L. Day

Quarter II 1987
A New Effective-Exchange-Rate Index
for the Dollar and Its Implications
for U.S. Merchandise Trade
by Gerald H. Anderson,
Nicholas V. Karamouzis
and Peter D. Skaperdas
How Will Tax Reform Affect Commercial Banks?
by Thomas M. Buynak

Quarter III 1987
Can Services Be a Source of Export-led Growth?
Evidence from the Fourth District
by Erica L. Groshen

Fourth Quarter Working Papers

Working Paper Notice
The Federal Reserve Bank of Cleveland has changed its method of distribution for the Working Paper series produced by the Bank's Research Department. As of January 1, 1987, we no longer send Working Papers to individuals as part of a mass mailing. Our current Working Papers will be listed on a quarterly basis in each issue of the Economic Review. Individuals may request copies of specific Working Papers listed by completing and mailing the attached form below. Papers will be sent free of charge to those who request them. A regular mailing list for Working Papers, however, will not be maintained for personal subscribers. Libraries and other organizations may request to be placed on a mailing list for institutional subscribers and will automatically receive Working Papers as they are published.

Working Papers 8711-8719:

A Test of Two Views of the Regulatory Mechanism: Averch-Johnson and Joskow
by Philip Israilevich and K.J. Kowalewski

Deposit Insurance and the Cost of Capital
by William P. Osterberg and James B. Thomson

Exit from the U.S. Steel Industry
by Mary E. Deily

Monetary Policy Under Rational Expectations with Multiperiod Wage Stickiness and an Economy-Wide Credit Market
by James G. Hoehn

Implicit Contracts, On-the-job Search and Involuntary Unemployment
by Charles T. Carlstrom

Estimating Multivariate ARIMA Models: When Is Close Not Good Enough?
by Michael L. Bagshaw

Turnover Wages and Adverse Selection
by Charles T. Carlstrom

The Nature of GNP Revisions
by John Scadding

Interest Rate Rules Are Infeasible and Fail to Complete Macroeconomic Models
by James G. Hoehn

Please complete and detach the form below and mail to:

Federal Reserve Bank of Cleveland
Research Department
P.O. Box 6387
Cleveland, Ohio 44101

Please send the following Working Paper(s). Check item(s) requested:

□ 8711    □ 8714    □ 8717
□ 8712    □ 8715    □ 8718
□ 8713    □ 8716    □ 8719

Send to (please print):

Name
Address
City                    State                    Zip Code