ECONOMIC REVIEW
Federal Reserve Bank of Cleveland
1991 Quarter 1, Vol. 27, No. 1

Why a Rule for Stable Prices May Dominate a Rule for Zero Inflation
by William T. Gavin and Alan C. Stockman

Predicting Bank Failures in the 1980s
by James B. Thomson

A Proportional Hazards Model of Bank Failure: An Examination of Its Usefulness as an Early Warning Tool
by Gary Whalen
Why a Rule for Stable Prices May Dominate a Rule for Zero Inflation
by William T. Gavin and Alan C. Stockman

There is a technical distinction between a zero-inflation rule and a price-level rule. The former allows bygones to be bygones; random shocks to the price level are allowed to accumulate over time. A price-level rule would require the Federal Reserve to offset these accumulated effects eventually. This paper shows that a rule for the price level may dominate a rule for the inflation rate, even in the case where, for purely economic reasons, an inflation rule is preferred. A price-level rule constrains the current behavior of policymakers because today's choices directly affect tomorrow's options.


Predicting Bank Failures in the 1980s
by James B. Thomson

This paper uses a single-equation logit model to discriminate between samples of failed and nonfailed banks over the 1984-1989 period. Previous failure prediction studies had to pool bank failures across years to obtain an adequate sample. The historically high number of failed banks over the past decade, however, allows each year in the sample period to be examined separately. The author incorporates measures of economic conditions in the failure prediction equation, along with the traditional balance-sheet risk measures, and finds that the majority of these variables are significantly related to bank failure as much as four years before an institution actually folds.

A Proportional Hazards Model of Bank Failure: An Examination of Its Usefulness as an Early Warning Tool
by Gary Whalen

The large number of bank failures in recent years has created incentives for both regulators and providers of funds to identify high-risk banks accurately. One potentially cost-effective way to do this is through use of a statistical "early warning model." In this article, the author shows that a Cox proportional hazards model can identify both failed and healthy banks with a high degree of accuracy using a relatively small set of publicly available data.


Economic Review is published quarterly by the Research Department of the Federal Reserve Bank of Cleveland. Copies of the Review are available through our Public Affairs and Bank Relations Department, 216/579-2157.

Coordinating Economist: Randall W. Eberts
Editors: Tess Ferg, Robin Ratliff
Design: Michael Galka
Typography: Liz Hanna

Opinions stated in Economic Review are those of the authors and not necessarily those of the Federal Reserve Bank of Cleveland or of the Board of Governors of the Federal Reserve System.

Material may be reprinted provided that the source is credited. Please send copies of reprinted material to the editors.

ISSN 0013-0281

Why a Rule for Stable Prices May Dominate a Rule for Zero Inflation
by William T. Gavin and Alan C. Stockman

William T. Gavin is an assistant vice president and economist at the Federal Reserve Bank of Cleveland. Alan C. Stockman is a professor of economics at the University of Rochester and a consultant to the Federal Reserve Bank of Cleveland. The authors thank Shaghil Ahmed and Joseph Haubrich for useful comments.

Introduction

Economists have long debated the wisdom of various constitutional constraints on monetary policy. Milton Friedman argued that economists do not know enough about the complexities of the economy to make discretionary policies that would be better than rules, and that the attempt to improve economic performance through discretionary policies has led to consequences worse than those that would have resulted from rules.1 Another, entirely different, argument in favor of rules comes from the time-consistency literature: Rules affect expectations, and committing in advance to a policy rule (typically of a state-contingent nature) leads to better outcomes than can be obtained with optimal discretionary actions.2
Opponents of rules typically focus on the complexity of optimal state-contingent rules, arguing that these complex rules might be better approximated by discretionary policy actions than by simple rules that could be written and enforced at reasonable cost.3

1. See Friedman (1959).

2. See Kydland and Prescott (1977) for an original statement of the time-consistency problem.
One implication of the time-consistency literature is that institutions and rules might be used to improve an economy's inflation performance without sacrificing output. Some of our current monetary institutions have been rationalized as attempts to achieve a lower inflation outcome than occurs in a world in which the optimal short-run policy is not time-consistent. One institution that has lowered inflation is the independent central bank.4 Another is the practice of appointing conservative central bankers.5
In this paper, we show that a rule for the price level may dominate a rule for the inflation rate, even in the case where, for purely economic reasons, an inflation rule is preferred. In our model, policymakers do not have perfect control over inflation, some policymakers have a preference for more inflation than is socially desirable, and the penalty for breaking the rules is not overly severe. Under these conditions, an inflation rule will lead some policymakers to attribute policy-induced inflation to nonpolicy causes. Because nonpolicy shocks to the inflation rate can occur in any time period, a severe penalty is not optimal.

Under a price-level rule, the source of the inflation in any time period does not matter. The penalty associated with this rule provides an incentive for policymakers to offset inflation, regardless of the source. A price-level target constrains the current behavior of policymakers because today's choices directly affect tomorrow's options.

3. See Summers (1988) for three arguments against rules.

4. See Bade and Parkin (1987) and results summarized in Alesina (1988), table 9 on page 41.

5. See Rogoff (1985).

I. Stable Prices vs. Zero Inflation

We present a simple example of inflation and monetary policy in which two types of policymakers might be in charge of monetary policy. These two types want different levels of inflation. They differ because inflation has two effects: 1) a negative effect on overall social welfare and 2) uninsurable redistributive effects that benefit some people at the expense of others. We assume one type of potential policymaker receives private gains from inflation that may dominate his share of the overall social loss. The other type loses more than the aggregate social loss from inflation. We do not model the reasons for the lack of insurability of these redistributive consequences of inflation; our model simply assumes there are limits on insurance or financial markets that prevent such insurance from operating perfectly.
We assume that while inflation is observable, the behavior of policymakers is not: people observe and understand inflation, but not the monetary policy that affected it. Monetary policy cannot be perfectly inferred from either inflation or monetary growth because random factors, such as shifts in output supply and the demand for money, also affect these variables.

We interpret rules as penalties (or rewards) for policymakers based on observed outcomes of inflation: They are features of the overall compensation package of policymakers. This package could include implicit as well as explicit payments, and deferred as well as current payments (in such forms as fame, praise by the news media, and opportunities to give speeches and write books, or to take various desirable positions after the policymaker's term of office expires).



If there were no limits on the penalties that could be imposed on policymakers, a rule could specify an extreme penalty for policymakers whenever inflation deviates from zero by some threshold amount. Then any policymaker would try to achieve zero inflation. But the random forces affecting inflation would sometimes make it exceed that threshold, so the penalty would sometimes apply. To induce anyone to be a policymaker, the salary would have to compensate for the risk of high inflation due to random events and the subsequent penalty. With risk-averse individuals, the required salary would have to be very high to compensate for the risk of a severe penalty.
Thus, an optimal reward structure for policymakers involves a limited penalty for deviating from target inflation and a correspondingly smaller expected reward. We do not model the incentives to enforce the rule; we simply assume that constitutional rules are enforceable and are actually enforced.

For simplicity, we assume the socially optimal inflation rate is zero (though the optimal inflation rate is immaterial to our argument).6 We compare two policy rules (compensation packages for policymakers): one that penalizes policymakers whenever inflation deviates from the socially optimal rate (zero), and another that penalizes them whenever prices deviate from a stable level.
We believe that a stable price level is a better goal for monetary policy than a zero-inflation goal that allows drift in the price level. A stable-price policy eliminates inflation and the associated uncertainty that interferes with efficient long-term nominal contracting and borrowing. To avoid biasing our results in favor of the price-level rule, however, we ignore these arguments for a stable price level. Instead, we assume that society gains from a zero rate of inflation (even if this means price-level drift). The stable-price-level rule requires inflation or deflation to correct for past changes in the price level. Although this inflation or deflation causes a social loss when it occurs, the stable-price rule can generate a socially better outcome because it alters the incentives of policymakers.

6. We have argued elsewhere for zero inflation (see Gavin and Stockman [1988]). Our arguments there suggest that policymakers should stabilize the price level rather than its rate of change. In our current example, we assume that the socially optimal policy is designed to achieve a zero rate of change of prices.

II. A Simple Model

We examine a simple two-period model in which inflation, π, results from a monetary policy variable, m, and an exogenous random disturbance, ε:

(1)  $\pi = m + \epsilon$.

We assume E(ε) = 0 and that ε is observed only after m is chosen. This random disturbance may be thought of as a combination of shocks to output supply, shifts in money demand, and errors in monetary control. This random component prevents people from observing policy actions directly.
Inflation is socially costly. We assume there is a social loss from inflation Z(π), where

(2)  $Z(\pi) = z\pi^2, \quad z > 0$.

The population of the economy is fixed and normalized at two. The social cost of inflation is divided equally among all households, so each bears one-half of this social cost.

There are two types of households in the economy: type-i households, who privately benefit from inflation at the level π* > 0, and type-0 households, who privately lose from nonzero inflation. The population of each type is normalized at one.
The purely private component of the loss to each type-0 household from nonzero inflation is H(π), where

(3)  $H(\pi) = (h/2)\pi^2, \quad h > 0$.

The total loss each period to each type-0 household is the sum of the two losses, Z(π)/2 + H(π).

The purely private component of the loss to each type-i household is G(π), where

(4)  $G(\pi) = (g/2)(\pi - \pi^*)^2, \quad g > 0$.

The total loss each period to each type-i household is Z(π)/2 + G(π).

In our example, as inflation rises from zero to π*, some households gain at the expense of others. In addition to this redistribution, inflation has a social cost of Z(π).

The monetary policy variable, m, is controlled by a central bank that may be captured by either group. We do not model this capture here, as it is largely immaterial for our argument. The outcome of this process is a random variable. We assume that the same policymaker is in charge for both periods.

We consider two alternative rules for monetary policy. Each rule is a set of penalties to the group in charge of policy for deviating from some target inflation outcome. We assume these rules can be perfectly enforced. Section III considers a rule for zero inflation, one that does not penalize the policymaker for failing to correct past changes in the price level. Section IV then considers a rule for a stable price level, a zero-inflation rule that penalizes policymakers for failing to correct past changes in the price level.

III. A Zero-Inflation Rule that Allows Price Drift

Consider a rule for zero inflation that does not penalize a policymaker for failing to correct past changes in prices. The rule consists of a penalty (smaller total compensation) for inflation. We assume it takes the form

(5)  $K(\pi) = (k/2)\pi^2, \quad k > 0$.

We do not derive the optimal penalty in this paper. To do so would require the explicit specification of the relationship between the cost of compensating policymakers and the level of the penalty. The optimal penalty would be chosen so that the marginal benefit from a lower inflation trend associated with a higher penalty would just offset the increased compensation required by the policymaker at the higher penalty rate.

Type-0 Policymakers

If a type-0 individual controls policy, his problem in the second period (t = 2) is to choose m to minimize
$E[\,H(\pi) + Z(\pi)/2 + K(\pi)\,]$

subject to (1). Let q = z + k. The type-0 policymaker minimizes

$E\left[\tfrac{h+q}{2}(m+\epsilon)^2\right]$,

which implies that he chooses m = 0. His minimized expected loss is then $E[\tfrac{h+q}{2}\,\epsilon^2]$.

The optimization problem of a type-0 policymaker in the first period is to choose m to minimize

(6)  $E\left[\tfrac{h+q}{2}(m+\epsilon)^2 + \beta\,\tfrac{h+q}{2}\,\epsilon_2^2\right]$,

where β is a discount factor and ε₂ denotes the second-period realization of the random disturbance ε. This obviously has the same solution as at t = 2, namely m = 0. A type-0 policymaker subject to this rule would choose monetary policy that results in zero expected inflation each period.

Type-i Policymakers

We now turn to the optimization problem of a type-i policymaker. At t = 2, he chooses m to minimize

$E\left[\tfrac{g}{2}(m+\epsilon-\pi^*)^2 + \tfrac{q}{2}(m+\epsilon)^2\right]$.

This implies

(7)  $m = \tfrac{g}{g+q}\,\pi^* \equiv \mu\pi^*$.

The policy rule for zero expected inflation results in positive expected inflation if a type-i policymaker is in charge because he balances the penalty for higher inflation against his private gains from inflation. The limitations on penalties discussed earlier prevent the penalty from being so large that this policymaker would set m = 0.
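The algebra behind equation (7) is easy to check numerically. The short Python sketch below, using purely illustrative parameter values that are not taken from the paper, verifies that the closed-form choice m = [g/(g+q)]π* minimizes the type-i policymaker's expected loss under the zero-inflation rule.

```python
import numpy as np

# Illustrative parameters (not from the paper): private-gain weight g,
# combined social-loss-plus-penalty weight q = z + k, and target pi*.
g, q, pi_star = 2.0, 3.0, 0.04
sigma = 0.01                      # std. dev. of the control error epsilon

rng = np.random.default_rng(0)
eps = rng.normal(0.0, sigma, 100_000)

def expected_loss(m):
    """Type-i loss under the zero-inflation rule, averaged over epsilon."""
    return np.mean(0.5 * g * (m + eps - pi_star) ** 2 + 0.5 * q * (m + eps) ** 2)

grid = np.linspace(0.0, pi_star, 401)
m_numeric = grid[np.argmin([expected_loss(m) for m in grid])]
m_closed_form = g / (g + q) * pi_star          # equation (7): mu * pi*

print(m_numeric, m_closed_form)    # both should be approximately 0.016
```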

The minimized expected loss of the type-i policymaker at t = 2 is

(8)  $E\left\{\tfrac{g}{2}[(\mu-1)\pi^* + \epsilon]^2 + \tfrac{q}{2}(\mu\pi^* + \epsilon)^2\right\}$.

In the first period, this policymaker chooses m to minimize

$E\left[\tfrac{g}{2}(m+\epsilon-\pi^*)^2 + \tfrac{q}{2}(m+\epsilon)^2\right] + \beta E\left\{\tfrac{g}{2}[(\mu-1)\pi^* + \epsilon_2]^2 + \tfrac{q}{2}(\mu\pi^* + \epsilon_2)^2\right\}$,

which implies

(9)  $m = \tfrac{g}{g+q}\,\pi^* = \mu\pi^*$.

This is the same monetary growth rate as in the second period. So, a type-i policymaker chooses a time-invariant money growth rate that yields positive expected inflation.

IV. A Stable-Price Zero-Inflation Rule

We now turn to a stable-price rule, which invokes a penalty in the second period if inflation deviates from a level that would return the price level to its original position in the first period. We assume the penalty at t = 2 raises the per-household cost of inflation, to the households in charge of policy, from K(π) to K(π − π_T), where π_T is the target inflation specified by the rule. This target inflation is, in our setup, simply the negative of actual inflation at t = 1: π_T = −π₁ = −(m₁ + ε₁), where m₁ is the first-period money growth rate and ε₁ is the first-period exogenous disturbance. This implies $K(\pi - \pi_T) = (k/2)(m + \epsilon + m_1 + \epsilon_1)^2$.

Type-0 Policymakers

A type-0 policymaker at t = 2 chooses m to minimize

$E\left[\tfrac{h+z}{2}(m+\epsilon)^2 + \tfrac{k}{2}(m+\epsilon+m_1+\epsilon_1)^2\right]$,

which implies

(10)  $m = \tfrac{-k}{h+z+k}\,(m_1+\epsilon_1) = -r(m_1+\epsilon_1)$.

The policymaker weighs the costs of nonzero inflation against the costs of deviating from the rule. He then chooses money growth to attempt to reverse a fraction r of the previous period's inflation. His minimized expected loss at t = 2 under the stable-price rule is

(11)  $E\left\{\tfrac{h+z}{2}[-r(m_1+\epsilon_1) + \epsilon]^2 + \tfrac{k}{2}[(1-r)(m_1+\epsilon_1) + \epsilon]^2\right\}$.

Now consider the incentives of this policymaker in the first period. He chooses m₁ to minimize

$E\left[\tfrac{h+q}{2}(m_1+\epsilon_1)^2\right] + \beta E\left\{\tfrac{h+z}{2}[-r(m_1+\epsilon_1) + \epsilon_2]^2 + \tfrac{k}{2}[(1-r)(m_1+\epsilon_1) + \epsilon_2]^2\right\}$.

This implies that m₁ = 0 in the first period and that the second-period policy simplifies to m = -rε₁. So a type-0 policymaker would choose the policy m = 0 in the first period. In the second period, he would choose policy to try to reverse a fraction r of any accidental inflation in the first period resulting from the random shock ε.

Type-i Policymakers

Finally, we turn to the behavior of a type-i policymaker subject to the stable-price rule. In the second period, he chooses m to minimize

$E\left[\tfrac{z}{2}(m+\epsilon)^2 + \tfrac{g}{2}(m+\epsilon-\pi^*)^2 + \tfrac{k}{2}(m+\epsilon+m_1+\epsilon_1)^2\right]$.

This implies

$E[\,z(m+\epsilon) + g(m+\epsilon-\pi^*) + k(m+\epsilon+m_1+\epsilon_1)\,] = 0$,

or

(12)  $m = \dfrac{g\pi^* - k(m_1+\epsilon_1)}{z+g+k} = \mu\pi^* - r(m_1+\epsilon_1)$.

In period 2, the type-i policymaker chooses an inflation rate that balances the private gain from positive inflation against the cost of the penalty for deviating from zero. Under the stable-price regime, however, the cost of deviating from zero inflation also depends on the inflation rate in the first period. The period 2 money growth will be modified to offset some of the period 1 inflation. The minimized expected loss of the policymaker, conditional on t = 1 variables, is

$E\left\{\tfrac{z}{2}[\mu\pi^* - r(m_1+\epsilon_1) + \epsilon]^2 + \tfrac{g}{2}[(\mu-1)\pi^* - r(m_1+\epsilon_1) + \epsilon]^2 + \tfrac{k}{2}[\mu\pi^* + (1-r)(m_1+\epsilon_1) + \epsilon]^2\right\}$.

In the first period, a type-i policymaker knows that positive inflation will be costly in the second period and chooses m to minimize

$E\left[\tfrac{z+k}{2}(m+\epsilon)^2 + \tfrac{g}{2}(m+\epsilon-\pi^*)^2\right] + \beta E\left\{\tfrac{z}{2}[\mu\pi^* - r(m+\epsilon) + \epsilon_2]^2 + \tfrac{g}{2}[(\mu-1)\pi^* - r(m+\epsilon) + \epsilon_2]^2 + \tfrac{k}{2}[\mu\pi^* + (1-r)(m+\epsilon) + \epsilon_2]^2\right\}$.

So,

$(z+k)m + g(m-\pi^*) + \beta\{\,z(\mu\pi^* - rm)(-r) + g[(\mu-1)\pi^* - rm](-r) + k[\mu\pi^* + (1-r)m](1-r)\,\} = 0$,

which implies

(13)  $m = \dfrac{g\pi^*[\,z + g + (1-\beta)k\,]}{(z+g+k)^2 + (z+g)\beta k} \equiv \varphi\pi^*$.

An important feature of this solution for money growth is that it is positive but smaller than μπ*, the money growth rate that the policymaker would choose in the absence of the stable-price rule. The solution in (13) for first-period policy implies that the second-period policy choice is

(14)  $m = \mu\pi^* - r(\varphi\pi^* + \epsilon_1)$.

We summarize these results in table 1.

TABLE 1  Money-Growth Rates Under Alternative Policy Rules

                      Zero-Inflation Rule              Stable-Price Rule
                      Permitting Price-Level Drift
                      Period 1      Period 2           Period 1      Period 2
Type-0 households     0             0                  0             -rε₁
Type-i households     μπ*           μπ*                φπ*           μπ* - r(φπ* + ε₁)

NOTE: μ, r, and φ are as defined in equations (9), (10), and (13). Recall that φ is smaller than μ, so the stable-price rule results in less inflation in each period if the policymaker is type-i. This is the social benefit of a stable-price rule. The cost of that rule is the nonzero expected inflation in the second period that occurs if the policymaker is type-0.
SOURCE: Authors.

The stable-price rule has costs and benefits relative to the rule permitting price-level drift. If type-0 households control monetary policy, they choose zero money-growth rates under the latter rule, but they choose money growth that attempts to reverse a portion of previous inflation under the stable-price rule. This is a cost of a stable-price rule, because it would be socially optimal, ignoring incentives, for money growth to be zero each period.

But the stable price level also has important benefits. Under this rule, if a type-i person controls monetary policy, he chooses lower money growth each period. In the second period, the stable-price rule operates directly by penalizing him for failing to return the price level to its target level. In the first period, expectations of this penalty lead him to choose less money growth.

Suppose the probability that the policymaker is type-0 is p and the probability that the policymaker is type-i is 1 - p. Then expected inflation under a zero-inflation rule that permits price drift is (1 - p)μπ* each period; expected inflation under a stable-price rule is (1 - p)φπ* < (1 - p)μπ* in the first period and (1 - p)(μ - rφ)π* - rε₁ < (1 - p)μπ* in the second period.

The variance of inflation under the zero-inflation rule allowing drift is p(1 - p)μ²π*² + σ² each period. The variance under the stable-price rule is p(1 - p)φ²π*² + σ² in the first period and p(1 - p)(μ - rφ)²π*² + σ² in the second period. Since μ > φ > rφ > 0, the stable-price rule also reduces the variance of inflation.



V. Conclusion

We have presented an example in which a rule for monetary policy specifying a stable price level dominates a rule for zero inflation with price-level drift. This result occurs despite our assumptions that zero inflation, rather than a stable price level, is socially optimal and that policymakers cannot perfectly control inflation. Our example thereby ignores the arguments we have made elsewhere for a stable price level. Nevertheless, a stable-price rule can be better than a rule for zero inflation that permits price drift, particularly because policy is unobservable. The stable-price rule raises the penalty on a policymaker who purposely engineers positive inflation but falsely claims that it was the unintended result of random forces. The cost of this rule is a change in incentives of policymakers who would act in the social interest without the rule. But, as in our example, this cost can be second-order, while the benefit is first-order.

In this two-period model, the policymaker always prefers the zero-inflation rule over the price-level rule. Well-intentioned policymakers know that they would deliberately aim for the social optimum without the rule. Those who would privately gain from inflation would find that the zero-inflation rule is less costly than the price-level rule.

There are several artificial features of our example. For simplicity, we assume a two-period model. There is likely to be some inflation on average over the two periods even under a constant-price-level rule. This average inflation converges to zero as the number of periods increases.

We have not explained the social costs of inflation, though we have attempted to summarize them elsewhere. We have interpreted a rule as a penalty function for failing to achieve some goal, and we have ignored the problem of incentives for enforcement. Nevertheless, there may be enforceable rules that the government can impose on the behavior of one of its agencies, such as a central bank. If so, our conclusion may be fairly general.

This paper has not addressed the question of an optimal rule. But it shows why a simple stable-price rule can dominate a simple zero-inflation rule by reducing the policymaker's incentive to create inflation for special interests and blame it on random events.

References

Alesina, Alberto. "Macroeconomics and Politics," in NBER Macroeconomics Annual 1988. Cambridge, Mass.: MIT Press, 1988.

Bade, Robin, and Michael Parkin. "Central Bank Laws and Monetary Policy," manuscript, University of Western Ontario, June 1987.

Friedman, Milton. A Program for Monetary Stability. New York: Fordham University Press, 1959.

Gavin, William T., and Alan C. Stockman. "The Case for Zero Inflation," Federal Reserve Bank of Cleveland, Economic Commentary, September 15, 1988.

Kydland, Finn E., and Edward C. Prescott. "Rules Rather than Discretion: The Inconsistency of Optimal Plans," Journal of Political Economy, vol. 85, no. 3 (June 1977), pp. 473-91.

Rogoff, Kenneth. "The Optimal Degree of Commitment to an Intermediate Monetary Target," Quarterly Journal of Economics, vol. 100, no. 4 (November 1985), pp. 1169-89.

Summers, Lawrence. Comment on "Postwar Developments in Business Cycle Theory: A Moderately Classical Perspective," Journal of Money, Credit, and Banking, vol. 20, no. 3, part 2 (August 1988), pp. 472-76.




Predicting Bank Failures in the 1980s
by James B. Thomson

James B. Thomson is an assistant vice president and economist at the Federal Reserve Bank of Cleveland. The author would like to thank David Altig, Brian Cromwell, and Ramon DeGennaro for helpful comments, and Lynn Seballos for excellent research assistance.

Introduction

From 1940 through the 1970s, few U.S. banks failed. The past decade was a different matter, however, as bank failures reached record post-Depression rates. More than 200 banks closed their doors each year from 1987 through 1989, while 1990 saw 169 banks fold. And because more than 8 percent of all banks are currently classified as problem institutions by bank regulators, failures are expected to exceed 150 per year for the next several years.1 The recent difficulties in the commercial real estate industry, especially in the Northeast and the Southwest, will likely add to the number of problem and failed banks in the 1990s.

The increase in bank failures in the 1980s was accompanied by an increase in the cost of resolving those failures. Furthermore, the cost of failure per dollar of failed-bank assets, which is already high, may continue to rise. For banks failing in 1985 and 1986, failure resolution cost estimates averaged 33 percent of failed-bank assets, while the estimated loss to the Federal Deposit Insurance Corporation (FDIC) reached as high as 64 percent of bank assets (see Bovenzi and Murton [1988]).
One characteristic that is different for some of the recent failures is bank size, as large-bank failures became more common in the 1980s. In 1984, for example, the FDIC committed $4.5 billion to rescue the Continental Illinois National Bank and Trust Company of Chicago, which at that time had $33.6 billion in assets. In 1987, BancTexas and First City Bancorporation of Dallas were bailed out by the FDIC at a cost of $150 million and $970 million, respectively. The $32.5 billion-asset First Republic Bancorp of Dallas collapsed in 1988, costing the FDIC approximately $4 billion, while 20 bank subsidiaries of MCorp of Houston, with a total of $15.6 billion in assets, were taken over by the FDIC in 1989 at an estimated cost of $2 billion.2 Most recently, the Bank of New England, with $22 billion in assets, was rescued by the FDIC at an estimated cost of $2.3 billion.
1. Examiners rate banks by assessing five areas of risk: capital adequacy, asset quality, management, earnings, and liquidity. This is called the CAMEL rating. For an in-depth discussion of the CAMEL rating system, see Whalen and Thomson (1988).

2. In the 1980s, other large banks such as Texas American Bankshares, National Bank of Texas, First Oklahoma, and National of Oklahoma were either merged or sold with FDIC assistance. In addition, Seafirst of Seattle, Texas Commerce Bankshares, and Allied Bankshares had to seek merger partners to stave off insolvency.

The study of bank failures is interesting for two reasons. First, an understanding of the factors related to an institution's failure will enable us to manage and regulate banks more efficiently. Second, the ability to differentiate between sound banks and troubled ones will reduce the expected cost of bank failures. In other words, if examiners can detect problems early enough, regulatory actions can be taken either to prevent a bank from failing or to minimize the cost to the FDIC and thus to taxpayers. The ability to detect a deterioration in bank condition from accounting data will reduce the cost of monitoring banks by lessening the need for on-site examinations (see Benston et al. [1986, chapter 10] and Whalen and Thomson [1988]).

An extensive literature on bank failures exists.3 Statistical techniques used to predict or to classify failed banks include multivariate discriminant analysis (Sinkey [1975]), factor analysis and logit regression (West [1985]), event-history analysis (Lane, Looney, and Wansley [1986, 1987] and Whalen [1991]), and a two-step logit regression procedure suggested by Maddala (1986) to classify banks as failed and nonfailed (Gajewski [1990] and Thomson [1989]). Recently, Demirguc-Kunt (1989a, 1991, and forthcoming) has extended this work to include market data and a model of the failure decision. Unfortunately, market data are available only for the largest banking institutions, while the majority of banks that fail are small.
This study uses 1983-1988 book data from the June and December Federal Financial Institutions Examination Council's Reports of Condition and Income (call reports) in statistical models of bank failure. In addition to traditional balance-sheet and income-statement measures of risk, the failure equation incorporates measures of local economic conditions.

The historically high number of failures for every year in the sample period allows each year to be investigated separately. Previous studies had to pool the failures across years to obtain a sufficiently large failed-bank sample, making it difficult to construct holdout samples and to do out-of-sample forecasting. This was especially true for tests across years. The sample in this study is not limited in this way, however. Once failures for a particular year are classified by the model, failures in subsequent years can be used to determine the model's out-of-sample predictive ability. For example, the failure prediction model used to classify failures in 1985 can be applied to the 1984 data for banks that failed in 1986 and 1987.

3. For a review of this literature, see Demirguc-Kunt (1989b).

I. Modeling Bank Failures

The economic failure of a bank occurs when it becomes insolvent. The official failure of a bank occurs when a bank regulator declares that the institution is no longer viable and closes it.4 Insolvency is a necessary condition for regulators to close a bank, but not, Kane (1986) argues, a sufficient one. He suggests that the FDIC faces a set of four constraints on its ability to close insolvent banks. These constraints, which arise because of imperfect information, budget limitations, and principal-agent conflicts, include information constraints, legal and political constraints, implicit and explicit funding constraints, and administrative and staff constraints (see Kane [1989]). Both Thomson (1989, 1991) and Demirguc-Kunt (1991) formally incorporate Kane's constraints on the FDIC's ability to close banks into models of the closure decision. These authors, along with Gajewski (1990), estimate two-equation models that formally separate economic insolvency and closure.

The model in this paper is a variant of those in the traditional bank failure prediction literature in that it is a single-equation model, the primary goal of which is to predict bank failures; therefore, it does not formally distinguish between insolvency and failure. Thus, unlike the models in Thomson (1991) and Demirguc-Kunt (1991), the one presented here does not allow for the study of bank closure policy. On the other hand, unlike the traditional failure prediction literature, this study includes proxy variables to control for the effects of Kane's four constraints on the probability of failure. Finally, the model is an extension of the previous failure prediction models in that it incorporates general measures of local economic conditions into the analysis.

The purpose of this study is to model bank failures of all sizes. This precludes the use of market data, which are available only for a limited number of large banking organizations. Therefore, I use proxy variables based on balance-sheet and income data from the call reports. These variables, defined in box 1, are drawn from the extensive literature on bank failures.

4. I consider a bank as failed if it is closed or requires FDIC assistance to remain open. For a discussion of the different failure resolution techniques available to the FDIC, see Caliguire and Thomson (1987).

BOX 1  Definitions of Proxy Variables

Dependent variable
DFAIL      Dummy variable: equals one for a failed bank, zero otherwise.

Regressors
NCAPTA     Book equity capital plus the reserve for loan and lease losses minus the sum of loans 90 days past due but still accruing and nonaccruing loans/total assets.
NCLNG      Net chargeoffs/total loans.
LOANHER    Loan portfolio Herfindahl index constructed from the following loan classifications: real estate loans, loans to depository institutions, loans to individuals, commercial and industrial loans, foreign loans, and agricultural loans.
LOANTA     Net loans and leases/total assets.
LIQ        Nondeposit liabilities/cash and investment securities.
OVRHDTA    Overhead/total assets.
ROA        Net income after taxes/total assets.
INSIDELN   Loans to insiders/total assets.
BRANCHU    Dummy variable: equals one if the state is a unit banking state, zero otherwise.
DBHC       Dummy variable: equals one if the bank is in a bank holding company, zero otherwise.
SIZE       Natural logarithm of total assets.
AVGDEP     Natural logarithm of average deposits per banking office.
BOUTDVH    Output Herfindahl index constructed using state-level gross domestic output by one-digit SIC codes.
UMPRTC     Unemployment rate in the county where the bank is headquartered.
CPINC      Percent change in state-level personal income.
BFAILR     Dun and Bradstreet's state-level small-business failure rate per 10,000 concerns.
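As a rough illustration of how the box 1 ratios are built from call-report items, the following Python sketch computes a few of them with pandas. The column names are hypothetical stand-ins for the corresponding report items, not the actual call-report mnemonics.

```python
import pandas as pd

# Hypothetical call-report extract; the column names are illustrative only.
calls = pd.DataFrame({
    "equity_capital":    [4_000, 2_500],
    "loan_loss_reserve": [  600,   300],
    "loans_past_due_90": [  250,   700],
    "nonaccrual_loans":  [  150,   500],
    "net_loans_leases":  [60_000, 55_000],
    "total_assets":      [100_000, 80_000],
    "net_income":        [  900,  -400],
    "failed":            [0, 1],
})

proxies = pd.DataFrame({
    "DFAIL":  calls["failed"],
    # NCAPTA: book equity plus loan-loss reserve, net of bad loans, over assets.
    "NCAPTA": (calls["equity_capital"] + calls["loan_loss_reserve"]
               - calls["loans_past_due_90"] - calls["nonaccrual_loans"])
              / calls["total_assets"],
    "LOANTA": calls["net_loans_leases"] / calls["total_assets"],   # risky-asset share
    "ROA":    calls["net_income"] / calls["total_assets"],         # earnings proxy
})
print(proxies)
```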




The dependent variable, DFAIL, is the dummy variable for failure. The first eight regressors in the model are motivated by the early warning system literature. Early warning systems are statistical models for off-site monitoring of bank condition used by bank regulators to complement on-site examination. These models seek to determine the condition of a bank through the use of financial data.5 The proxy variables used in the statistical monitoring models are motivated by the CAMEL rating categories, which regulators use during on-site examinations to determine a bank's condition.

NCAPTA, the ratio of book equity capital less bad loans to total assets, is the proxy for capital adequacy (the C in CAMEL). This variable is similar to Sinkey's (1977) net-capital-ratio variable, which is the ratio of primary capital less classified assets to total assets.6 Both Sinkey and Whalen and Thomson (1988) show that similar proxy variables are better indicators of a bank's true condition than is a primary capital-to-assets ratio.

The next three early warning system variables are proxies for asset quality and portfolio risk (the A in CAMEL). NCLNG measures net losses per dollar of loans and, hence, the credit quality of the loan portfolio. LOANHER is a measure of the diversification of the risky asset or loan portfolio and is therefore a measure of portfolio risk. LOANTA is the weight of risky assets in the total asset portfolio and, hence, a proxy for portfolio risk.

OVRHDTA and INSIDELN are proxies for management risk (the M in CAMEL). OVRHDTA is a measure of operating efficiency, while INSIDELN is the proxy for another form of management risk: fraud or insider abuse. Graham and Horner (1988) find that for national banks that failed between 1979 and 1987, insider abuse was a significant factor, contributing to the failure of 35 percent of the closed institutions; material fraud was present in 11 percent of these failures. ROA, the return on assets, is the proxy for the earnings component of the CAMEL rating (the E in CAMEL), and LIQ is included to proxy for liquidity risk (the L in CAMEL).

5. The purpose of early warning systems is to detect the deterioration of a depository institution's condition between scheduled examinations so that the FDIC can move that institution up in the on-site examination queue. For further information, see Korobow and Stuhr (1983), Korobow, Stuhr, and Martin (1977), Pettway and Sinkey (1980), Rose and Kolari (1985), Sinkey (1975, 1977, 1978), Sinkey and Walker (1975), Stuhr and Van Wicklen (1974), Wang and Sauerhaft (1989), and Whalen and Thomson (1988).

6. Classified assets is a measure of bad loans and other problem assets on a bank's confidential examination report; consequently, it is measured infrequently and is often unavailable to researchers.

In another study (Thomson [1991]), I show that LOANTA, LIQ, OVRHDTA, and ROA may also proxy for the non-solvency-related factors that contribute to the decision to close insolvent banks, providing additional justification for the inclusion of these variables in the failure prediction equation. I include the remainder of the variables listed in box 1 in the failure prediction equation either because the aforementioned study has shown them to be related to the closure decision (BRANCHU, DBHC, SIZE, AVGDEP), or because they serve as proxies for the economic conditions in the bank's home market (BOUTDVH, UMPRTC, CPINC, BFAILR).

BRANCHU is included in the regression to control for intrastate branching restrictions. Branching restrictions effectively limit both the opportunities for geographic diversification of a bank's portfolio and the FDIC's options for resolving an insolvency.

DBHC is a dummy variable for holding company affiliation, motivated by the source-of-strength doctrine. Source of strength is the regulatory philosophy, espoused by the Federal Reserve, that the parent holding company should exhaust its own resources in an attempt to make its banking subsidiaries solvent before asking the FDIC to intercede.

I include SIZE, the natural logarithm of total assets, in the failure prediction equation to control for the "too big to let fail" doctrine (TBLF). Bank regulators adopted TBLF in the 1980s as a result of the administrative difficulties, the implications for the FDIC insurance fund, and the political fallout associated with the failure of a large bank.

The average deposits per banking office, AVGDEP, is used as the proxy for franchise or charter value. Buser, Chen, and Kane (1981) argue that the FDIC uses charter values as a restraint on risk-taking by banks, and that bank closure policy is aimed at preserving charter value in order to minimize FDIC losses. Because the primary source of a bank's charter value is its access to low-cost insured deposits, the level of deposits per banking office should be positively correlated with the value of the banking franchise.

Finally, I include four measures of economic conditions in the bank's markets in order to incorporate the effects of local economic conditions on the bank's solvency: unemployment (UMPRTC), growth in personal income (CPINC), the business failure rate (BFAILR), and a measure of economic diversification (BOUTDVH). Unlike Gajewski (1989, 1990), who includes proxies for energy and agricultural shocks in his failure prediction models, I have included economic condition proxies that do not require knowledge of which economically important sectors will experience problems in the future.

II. The Data

Bank failures from July 1984 through June 1989 comprise the failed-bank sample and are taken from the FDIC's Annual Reports from 1984 through 1987 and from FDIC press releases for 1988 and 1989. Only FDIC-insured commercial banks in the United States (excluding territories and possessions) are included.

The nonfailed sample includes U.S. banks operating from June 1982 through June 1989 that filed complete call reports. I have drawn this sample randomly from the call reports and have checked the nonfailed sample to ensure that it is representative of the population of nonfailed banks. For instance, the majority of banks in the population are small; therefore, the nonfailed sample is drawn in a manner that ensures that small banks are adequately represented.

Data for the failed banks are drawn from the June and December call reports for 1982 through 1988 and are collected for up to nine semiannual reports prior to the date the bank was closed. I do not collect data for failed banks from call reports within six months of the failure date, because call reports are unavailable to regulators for up to 70 days after a report is issued. Furthermore, window dressing on the call reports of distressed banks just prior to their failure makes that data unreliable. In the cases where all or the majority of bank subsidiaries of a bank holding company are closed at once (for example, BancTexas Group, First City Bancorp of Houston, First Republic Bancorp of Dallas, and MCorp of Houston), the closed institutions are aggregated at the holding company level and treated as a single failure decision. I include a total of 1,736 banks in the nonfailed sample. The number of failed banks in the sample in each year appears in table 1.

7. Call report data are screened for errors. I deleted from the failed and nonfailed samples those banks that were found to have missing or inconsistent loan data or negative values for expense items such as operating and income expense. Roughly 2 percent of the failed sample and 4 percent of the nonfailed sample were eliminated for these reasons. In addition, I eliminated banks in the nonfailed sample that were missing a June or December call report between 1982 and 1988.

TABLE 1  Number of Failed Banks in the Sample

Year      Number of banks
1984       78
1985      115
1986      133
1987      193
1988      174
1989a      77

a. 1989 failure number is for banks closed during the first six months of the year.
NOTE: Number of banks in the nonfailed sample in each year is 1,736.
SOURCES: Author's calculations and FDIC Annual Reports.

I obtain data on economic condition from several sources. State-level gross domestic output data are obtained from the Bureau of Economic Analysis for the years 1980 through 1986. County-level employment data are taken from the Bureau of Labor Statistics' Employment and Earnings for the years 1980 through 1986. State-level personal income data are from the Bureau of Economic Analysis' annual personal income files for the years 1981 through 1988, and business failure data are from Dun and Bradstreet's Business Failure Record for the years 1982 through 1988. Because all of the economic condition data are annual, I match the business failure and personal income data with the December call report data of the same year and the following June. The gross domestic output and employment data are matched with the December and June call report data in a similar manner, but with a two-year lag.8

8. The output and employment data are matched with the sample having a two-year lag, because at the time this study was conducted I had access to these data only through the end of 1986; therefore, I could not match the employment and output data to the call data through 1988 without lagging them. Because a state's output mix is unlikely to change much in two years, this data-matching procedure should not affect the performance of BOUTDVH. However, while the decline of the financial sector is likely to follow a decline in the real sector, as measured by unemployment, the choice of a two-year lag is clearly ad hoc.
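A minimal sketch of this matching step is shown below, assuming the annual series sit in simple year-indexed tables; the frames and column names are hypothetical.

```python
import pandas as pd

def match_year(call_date: pd.Timestamp, extra_lag: int = 0) -> int:
    """Annual data year matched to a call date: a December report uses the
    same calendar year, the following June report uses the prior year;
    extra_lag shifts the match back further (e.g., for the output and
    employment series, which the text matches with a two-year lag)."""
    base = call_date.year if call_date.month == 12 else call_date.year - 1
    return base - extra_lag

# Hypothetical frames; column names are stand-ins, not the source files' own.
banks = pd.DataFrame({
    "state": ["TX", "OH"],
    "call_date": pd.to_datetime(["1985-12-31", "1986-06-30"]),
})
state_income = pd.DataFrame({
    "state": ["TX", "OH"],
    "year": [1985, 1985],
    "cpinc": [0.031, 0.045],       # percent change in state personal income
})

banks["year"] = banks["call_date"].apply(match_year)
merged = banks.merge(state_income, on=["state", "year"], how="left")
print(merged)
```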

III. Empirical Results

I estimate the model by logit regression, using the LOGIST regression procedure in SAS. I have chosen logit estimation rather than ordinary least squares (OLS) because of the undesirable properties of the OLS estimator when the dependent variable in the model is binary (Amemiya [1981]). The unequal frequency of

the failed and nonfailed samples suggests the use of logit rather than probit estimation because logit is not sensitive to the uneven sampling frequency problem (Maddala [1983]). The panel nature of the data allows two types of tests to be performed. First, I pool the data over time (using the June 1983 through December 1988 call reports) and assess the predictive accuracy of the model for up to 48 months before failure. Then, using the June call reports for 1983 through 1986, I ascertain the model's in-sample and out-of-sample accuracy.
Overall, the results indicate that up to 30 months before failure, solvency and liquidity are the most important predictors of failure. As the time to failure increases, however, asset quality, earnings, and management gain in importance as predictors of failure. The performance of the FDIC closure constraint proxies in table 2 demonstrates that the distinction between official failure and insolvency is significant and should be accounted for in studies of bank failures. Although the performance of the economic condition variables is mixed, their inclusion increases the predictive accuracy of the model.

Table 2 shows that the coefficient on NCAPTA is negative and significant for banks failing within 30 months of the call date and positive for banks failing within 30 to 48 months of the call date. However, the coefficient is only positive and significant for the 36- to 42-month subsample. The positive sign on NCAPTA for banks failing after 30 months is paradoxical, because it suggests that book solvency is positively related to failure. This, however, is not a new result (see Thomson [1991] and Seballos and Thomson [1990]). One possible explanation is that banks beginning to experience difficulties improve their capital positions cosmetically by selling assets on which they have capital gains and by deferring sales of assets on which they have capital losses. Another explanation, although not a mutually exclusive one, is that strong banks are more aggressive in recognizing and reserving against emerging problems in their loan portfolios than are weak banks.

The probability of failure is a negative function of asset quality, as the coefficient on NCLNG is negative and significant in all of the regressions except the six- to 12-month subsample. In addition, portfolio risk is positively related to the probability of failure, as evidenced by the positive and significant coefficient on LOANTA for all subsamples.

The positive and significant coefficients on OVRHDTA and INSIDELN for all subsamples indicate that management risk and insider abuse are positively related to failure. In addition, the negative and significant coefficient on ROA in all subsamples and the positive and significant coefficient on LIQ for all regressions (except the 30- to 36-month and 42- to 48-month subsamples, for which the coefficient is positive and insignificant) indicate that the probability of failing is a negative function of earnings and liquidity.

With the exception of BRANCHU (all subsamples) and SIZE in the 42- to 48-month subsample, the coefficients on Thomson's (1991) closure constraint proxies are all significant, with the sign predicted by the author's call-option closure model in all the regressions.

The results for the economic condition variables are somewhat mixed. The coefficients on BOUTDVH, UMPRTC, and CPINC are negative and significant for all subperiods. In other words, the probability of failure is negatively related to state-level economic concentration (BOUTDVH), to county-level unemployment (UMPRTC), and to changes in state-level personal income (CPINC).

The negative sign on CPINC is consistent with its use as a proxy for differences between market and book solvency across regions. The significant negative relationship between the probability of failure and both BOUTDVH and UMPRTC is counterintuitive. If the condition of the banking industry were affected by the health of the economy, then I would expect the coefficients on both BOUTDVH and UMPRTC to be positive. BOUTDVH is a measure of economic diversity in the state where a bank does business. The more diversified a state's or region's economy, the more stable that economy should be and the lower BOUTDVH should be. It could be that BOUTDVH and UMPRTC are picking up the increased political constraints associated with the closing of banks in depressed regions like the Southwest. These political constraints increase as the number of insolvencies in a region grows. Finally, the coefficient on BFAILR is negative and insignificant for all subsamples.

Table 3 gives the results when the model is estimated using cross-sectional data from the June 1984, 1985, and 1986 call reports and from failures occurring in the subsequent calendar year. I use cross-sectional estimation for two reasons: 1) to test indirectly the pooling restriction imposed in the earlier tests and 2) to investigate the model's ability to predict failures outside the sample. To facilitate out-of-sample forecasting, I also split the nonfailed sample into two random samples of 868 banks. One is for use in in-sample forecasting, and the second is for use in out-of-sample forecasting. As seen in table 3, with the exception of the coefficients on ROA and DBHC, no significant difference seems to exist between the coefficients of each model across years. Therefore, the results reported in table 2 do not appear to be sensitive to the pooling restriction.

TABLE 2  Logit Regression Results from the Pooled Sample

[Table 2 reports logit coefficient estimates, with standard errors in parentheses, for seven subsamples defined by months to failure after the call report was issued: 6 to 12, 12 to 18, 18 to 24, 24 to 30, 30 to 36, 36 to 42, and 42 to 48 months. The rows are the intercept and the regressors NCAPTA, NCLNG, LOANHER, LOANTA, OVRHDTA, INSIDELN, ROA, LIQ, BRANCHU, DBHC, SIZE, AVGDEP, BOUTDVH, UMPRTC, CPINC, and BFAILR, followed by the model chi-square, the type I and type II error rates, the overall classification error, and PPROB. The individual cell values are not reliably recoverable from the extracted text; see the original document for the full table.]

a. Significant at the 5 percent level.
b. Significant at the 1 percent level.
c. Significant at the 10 percent level.
d. Model chi-square with 16 degrees of freedom.
e. Percentage of all banks misclassified.
NOTE: Dependent variable = DFAIL. Standard errors are in parentheses.
SOURCE: Author's calculations.




TABLE 3  Cross-Sectional Logit Regression Results

Call date:         June 1984        June 1985        June 1986
Year failed:       1985             1986             1987

Constant             0.41 (2.85)      0.54 (2.88)      1.38 (2.34)
NCAPTA             -31.53 (5.82)a   -29.90 (4.75)a   -43.51 (5.22)a
NCLNG               21.93 (20.01)    -1.21 (13.72)     7.20 (8.77)
LOANHER              2.57 (1.39)b     0.41 (1.63)     -0.92 (1.61)
LOANTA              10.13 (1.74)a     9.49 (1.80)a     7.37 (1.52)a
OVRHDTA            301.45 (86.83)a  242.80 (96.12)c  489.16 (86.14)a
INSIDELN            39.58 (12.15)a   30.84 (10.75)a   50.00 (13.20)a
ROA                -49.91 (27.18)b  -69.79 (20.67)a   -0.41 (17.66)
LIQ                  2.71 (1.22)c     1.53 (0.81)      0.95 (0.40)c
BRANCHU              0.11 (0.34)     -0.17 (0.35)      0.03 (0.36)
DBHC                -0.79 (0.34)c    -0.16 (0.37)a    -0.99 (0.33)a
SIZE                -1.13 (0.32)a    -0.67 (0.25)     -0.91 (0.26)a
AVGDEP               0.58 (0.38)      0.42 (0.32)      0.50 (0.32)
BOUTDVH            -10.22 (9.42)    -16.36 (9.32)b     1.87 (8.24)
UMPRTC              -0.07 (0.07)     -0.03 (0.05)     -0.09 (0.05)c
CPINC              -22.91 (7.74)a   -30.19 (8.81)a   -28.02 (14.32)b
BFAILR              -0.00 (0.00)     -0.00 (0.00)      0.00 (0.00)

Model chi-squared  402.59a          462.30a          667.31a
Type I              11.30            11.28             9.38
Type II             10.48             9.56             7.03
Classe              10.58             9.79             7.45
PPROB                0.13             0.15             0.22

a. Significant at the 1 percent level.
b. Significant at the 10 percent level.
c. Significant at the 5 percent level.
d. Model chi-square with 16 degrees of freedom.
e. Percentage of all banks misclassified.
NOTE: Dependent variable = DFAIL. Standard errors are in parentheses.
SOURCE: Author's calculations.





In-Sample Classification Accuracy

The second criterion for judging bank failure models is the classification accuracy of the model. In other words, how precise is the model in discriminating between failed and nonfailed banks within the sample, and how effective is it in discriminating between failed and nonfailed banks outside the sample?

For the pooled data, I perform only in-sample forecasting. Tables 2 and 3 report the overall classification accuracy of the three models, along with each model's type I and type II error. Type I error occurs when a failed bank is incorrectly classified as a nonfailed bank, and type II error occurs when a nonfailed bank is incorrectly classified as a failed bank. The overall classification error is the weighted sum of both types of errors. Typically, there is a trade-off between type I error and overall classification accuracy.
The logit model classifies a bank as failed if the predicted value of the dependent variable exceeds an exogenously set probability cutoff point (PPROB). The PPROB is set according to the prior probabilities of being in each group, typically at 0.5. However, for studies such as this one, where closed banks are sampled at a higher rate than nonclosed banks, Maddala (1986) argues that the use of logit leads to a biased constant term that reduces the predictive power of the model. To correct for this, he suggests that one should assume that the prior probabilities are the sampling rates for the two groups. In addition, if type I error is seen to be more costly than type II error, a lower value for the PPROB is justified.
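In code, the classification rule and the two error rates reduce to a few lines. The sketch below assumes the fitted model's predicted probabilities are already in hand and, as the text describes, uses the sampling rate of failures as the cutoff; the toy numbers are purely illustrative.

```python
import numpy as np

def classification_errors(p_hat: np.ndarray, failed: np.ndarray, pprob: float):
    """Type I, type II, and overall error rates (in percent) for a cutoff PPROB."""
    predicted_fail = p_hat >= pprob
    type1 = 100 * np.mean(~predicted_fail[failed == 1])   # failed called nonfailed
    type2 = 100 * np.mean(predicted_fail[failed == 0])    # nonfailed called failed
    overall = 100 * np.mean(predicted_fail != (failed == 1))
    return type1, type2, overall

# Toy example: 3 failed and 7 nonfailed banks.
failed = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
p_hat  = np.array([0.9, 0.4, 0.1, 0.5, 0.2, 0.1, 0.05, 0.3, 0.02, 0.01])

pprob = failed.mean()          # prior probability = sampling rate, here 0.3
print(classification_errors(p_hat, failed, pprob))   # roughly (33.3, 28.6, 30.0)
```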
Overall, the model's in-sample classification accuracy is excellent (see table 2). Using the ratio of failed to nonfailed observations in the sample as the PPROB, I find that type I error ranges from 7.99 percent in the six- to 12-month subsample to 20.19 percent in the 42- to 48-month subsample. Overall classification error ranges from 6.86 percent in the six- to 12-month subsample to 18.56 percent in the 36- to 42-month subsample. As expected, type I errors and overall classification errors increase with time to failure.

Out-of-Sample
Forecasting
One reason for studying bank failures is so that
statistical models can be constructed to identify
banks that may fail in the future. Such models
are referred to as off-site monitoring or early
warning systems in the literature and are used
by bank regulators as a complement to on-site
examinations. Out-of-sample forecasting not
only yields information on the usefulness of the
bank failure model as an examination tool, but
also provides data on the stability of the failure
equation over time.
For the out-of-sample forecasts, I use the
estimated coefficients from the cross-sectional
logit regressions, employing data from the June
call reports of 1984 through 1986 and half of the
nonfailed sample. The failed sample consists of
all banks that failed in the year following the
one from which the call report data were drawn.
The coefficients for the model estimated over
this sample appear in table 3. I use the second
half of the nonfailed sample as the holdout sam­
ple for forecasting. I also construct three failed
holdout samples using data from the June 1984
and June 1985 call reports. Only two holdout
samples could be constructed for the June 1986
call date, because the failed-bank sample only
runs through June 1989. The first failed holdout
sample consists of banks failing in the second
calendar year following the call report, and the
second consists of banks failing in the third cal­
endar year following the call report. The third
holdout sample (unavailable for forecasting
when the June 1986 call report is used) is com­
prised of banks failing in the fourth calendar
year following the call report.
The results for this out-of-sample forecasting
experiment appear in table 4. The PPROB cut­
off point for classifying banks as failed or non­
failed is the ratio of failed to nonfailed banks
from the in-sample regressions. Other cutoff
points yield similar results. When PPROB =
0.132, the model misclassifies 10.19 percent of
the banks in the holdout sample using 1986
failures. The type I error rate indicates that the
model misclassifies nearly two-thirds of the fail­
ures, while roughly 2 percent of the nonfailed
sample (type II error rate) is misclassified. Look­
ing at the results for the 1987 and 1988 failure
holdout samples, one can see that the type I
errors and overall classification errors for all

TABLE 4
Out-of-Sample Forecasts

Date of call report              Failure date
June 1984 (PPROB = 0.132)        1986      1987      1988
  Type I                         64.66     67.21     74.83
  Type II                         1.84      1.84      1.84
  Classa                         10.19     13.23     12.66
June 1985 (PPROB = 0.153)        1987      1988      1989b
  Type I                         63.73     71.93     75.34
  Type II                         1.73      1.73      1.73
  Classa                         13.01     13.28      7.44
June 1986 (PPROB = 0.221)        1988      1989b
  Type I                         52.87     62.67
  Type II                         3.23      3.23
  Classa                         11.52      7.95

a. Percentage of all banks misclassified.
b. 1989 sample of failed banks consists of banks closed during the first six months of the year.
NOTE: Forecasts employ the half of the nonfailed sample not used for the logit regressions in table 3.
SOURCE: Author's calculations.

TABLE 5
Additional Out-of-Sample Forecasts

Call date     Year failed    Type I    Type II    Classa    In-sample forecast
June 1985     1986           52.63      0.86       4.55           —
June 1986     1987           39.58      1.73       5.50          12.48
June 1987     1988           27.59      2.07       4.40          10.21
June 1988     1989b          22.08      1.67       2.54          10.91

a. Percentage of all banks misclassified.
b. 1989 sample of failed banks consists of banks closed during the first six months of the year.
NOTE: Out-of-sample forecasting is done with PPROB equal to 0.066 (the ratio of failed to nonfailed banks for the in-sample logit regressions) and using coefficients estimated from logit regressions on 1985 failures and the nonfailed sample from the June 1984 call report.
SOURCE: Author's calculations.




three models increase as we attempt to forecast
further into the future. Note that the results for
the June 1985 and June 1986 call reports are
similar to those obtained using June 1984 data.
Given the high type I error rates, one might
question the usefulness of the model for off-site
monitoring of bank condition. However, the
type I error rate could be lowered by decreas­
ing PPROB enough so that the percentage of
failed banks classified as nonfailed becomes
acceptable. What is interesting from the stand­
point of an early warning application is the low
classification error and the low type II error. If
one wanted to use this model to determine
which banks should be examined next, low
type II error would be an extremely important
consideration, since the FDIC has limited exam­
ination resources.
In practice, the first out-of-sample experiment
is of little use for designing early warning models
because it requires the ability to identify failures
in subsequent years in order to apply it. There­
fore, I perform a second out-of-sample experi­
ment that is able to mimic an early warning
model in practice. Employing the June 1984 call
report data, I estimate the three models using the
entire nonfailed sample and the failures occur­
ring in the next calendar year. I then use the
coefficients to perform out-of-sample forecasting
using 1) June call data for 1985 through 1988 on
the nonfailed sample and 2) failures in the calen­
dar year following the call report as the holdout
samples. The PPROB is again set equal to the
ratio of failed banks to nonfailed banks used in
the in-sample logit regressions.
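A minimal sketch of the mechanics of this second experiment, assuming a hypothetical pair of frozen logit coefficients (the ratios, values, and function name below are illustrative, not the paper's estimates): the coefficients estimated once on June 1984 data are simply reapplied to later call-report observations, and banks whose fitted probabilities exceed PPROB are flagged as likely failures.

import numpy as np

def logit_failure_prob(X, coef, intercept):
    # Apply previously estimated (frozen) logit coefficients to new
    # call-report observations and return fitted failure probabilities.
    z = intercept + np.asarray(X, dtype=float) @ np.asarray(coef, dtype=float)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical frozen coefficients on two balance-sheet ratios.
coef = np.array([0.25, -0.35])
intercept = -3.0

later_call = np.array([[12.0, 2.0],    # weak bank: high risk ratio, thin capital
                       [ 3.0, 9.0]])   # sound bank
probs = logit_failure_prob(later_call, coef, intercept)
flagged = probs > 0.066                # PPROB = in-sample failed/nonfailed ratio
print(probs, flagged)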
The results in table 5 show that using the
1984 version of the failure model, the out-of-sample classification error ranges from 5.50 percent in June 1986 to 2.54 percent in June 1988,
and type I error ranges from 52.63 percent in
June 1985 to 22.08 percent in June 1988. It is
somewhat curious that the out-of-sample clas­
sification accuracy of all three models increases
as we move further from the call date of the
in-sample experiment. Again, note that the type
I error for the out-of-sample regressions could
be lowered at the expense of the type II error
(and the overall classification error) by decreas­
ing PPROB. The performance of all the models
in the second out-of-sample forecasting experi­
ment suggests that they could be used as part of
an early warning system of failure.


IV. Conclusion

References

This study shows that the probability that a
bank will fail is a function of variables related to
its solvency, including capital adequacy, asset
quality, management quality, earnings perform­
ance, and the relative liquidity of the portfolio.
In fact, the CAMEL-motivated proxy variables
for bank condition demonstrate that the majority
of these factors are significantly related to the
probability of failure as much as four years
before a bank fails.
Overall, the model demonstrates good clas­
sification accuracy in both the in-sample and
out-of-sample tests. For the in-sample tests, it is
able to classify correctly more than 93 percent
of the banks in the six- to 12-month subsamples
and more than 82 percent of the banks in the 42- to 48-month subsamples. In addition, the model
correctly classifies more than 94 percent of those
banks that fail between six and 12 months of
the call date and almost 80 percent of those that
fail between 42 and 48 months of the call date.
Out-of-sample classification accuracy is also
excellent, indicating that the model could be
modified for use as an early warning model of
bank failure.
Economic conditions in the markets where a
bank operates also appear to affect the proba­
bility of bank failure as much as four years before
the failure date. However, given that regional
economic risk is diversifiable, the sensitivity of
the banking system to regional economic condi­
tions suggests that policymakers should revise
the laws and regulations that limit banks’ ability
to diversify their portfolios geographically (espe­
cially in light of the fact that the national econ­
omy was relatively strong during the years
covered in this study).
Finally, the performance of the closure-constraint proxy variables indicates that the
probability of failure is not simply the prob­
ability that a bank will become insolvent, but
that it will be closed when it becomes insolvent.
In other words, the results show that the distinc­
tion between official failure and economic insol­
vency is an important one, suggesting the need
for further research on the determinants of the
incentive systems faced by bank regulators (see
Kane [1986, 1989]).

Amemiya, T. "Qualitative Response Models: A Survey," Journal of Economic Literature, vol. 19 (December 1981), pp. 1483-1536.

Benston, George J., Robert A. Eisenbeis, Paul M. Horvitz, Edward J. Kane, and George G. Kaufman. Perspectives on Safe and Sound Banking: Past, Present, and Future. Cambridge, Mass.: MIT Press, 1986.

Bovenzi, John F., and Arthur J. Murton. "Resolution Costs of Bank Failures," FDIC Banking Review, vol. 1, no. 1 (Fall 1988), pp. 1-13.

Buser, Stephen A., Andrew H. Chen, and Edward J. Kane. "Federal Deposit Insurance, Regulatory Policy, and Optimal Bank Capital," Journal of Finance, vol. 36, no. 1 (March 1981), pp. 51-60.

Caliguire, Daria B., and James B. Thomson. "FDIC Policies for Dealing with Failed and Troubled Institutions," Federal Reserve Bank of Cleveland, Economic Commentary, October 1, 1987.

Demirguc-Kunt, Asli. "Modeling Large Commercial-Bank Failures: A Simultaneous-Equation Analysis," Federal Reserve Bank of Cleveland, Working Paper 8905, May 1989a.

________. "Deposit-Institution Failures: A Review of Empirical Literature," Federal Reserve Bank of Cleveland, Economic Review, 4th Quarter 1989b, pp. 2-18.

________. "On the Valuation of Deposit Institutions," Federal Reserve Bank of Cleveland, Working Paper 9104, March 1991.

________. "Principal-Agent Problems in Commercial-Bank Failure Decisions," Federal Reserve Bank of Cleveland, Working Paper (forthcoming).

Gajewski, Gregory R. "Assessing the Risk of Bank Failure," in Proceedings from a Conference on Bank Structure and Competition, Federal Reserve Bank of Chicago, May 1989, pp. 432-56.


________. "Modeling Bank Closures in the 1980's: The Roles of Regulatory Behavior, Farm Lending, and the Local Economy," in George G. Kaufman, ed., Research in Financial Services: Private and Public Policy. Greenwich, Conn.: JAI Press, Inc., 1990, pp. 41-84.

Graham, Fred C., and James E. Horner. "Bank Failure: An Evaluation of the Factors Contributing to the Failure of National Banks," in Proceedings from a Conference on Bank Structure and Competition, Federal Reserve Bank of Chicago, May 1988, pp. 405-35.

Kane, Edward J. "Appearance and Reality in Deposit Insurance: The Case for Reform," Journal of Banking and Finance, vol. 10 (June 1986), pp. 175-88.

________. The S&L Insurance Mess: How Did It Happen? Washington, D.C.: The Urban Institute, 1989.

________. "Econometric Issues in the Empirical Analysis of Thrift Institutions' Insolvency and Failure," Federal Home Loan Bank Board, Invited Research Working Paper 56, October 1986.

Korobow, Leon, and David P. Stuhr. "The Relevance of Peer Groups in Early Warning Analysis," Federal Reserve Bank of Atlanta, Economic Review, November 1983, pp. 27-34.

________, ________, and Daniel Martin. "A Nationwide Test of Early Warning Research in Banking," Federal Reserve Bank of New York, Quarterly Review, Autumn 1977, pp. 37-52.

Lane, William R., Stephen W. Looney, and James W. Wansley. "An Application of the Cox Proportional Hazards Model to Bank Failure," Journal of Banking and Finance, vol. 10, no. 4 (December 1986), pp. 511-31.

________, ________, and ________. "An Examination of Bank Failure Misclassification Using the Cox Model," in Proceedings from a Conference on Bank Structure and Competition, Federal Reserve Bank of Chicago, May 1987, pp. 214-29.

Maddala, G.S. Limited-Dependent and Qualitative Variables in Econometrics. New York: Cambridge University Press, 1983.

Pettway, Richard H., and Joseph F. Sinkey, Jr. "Establishing On-Site Bank Examination Priorities: An Early Warning System Using Accounting and Market Information," Journal of Finance, vol. 35 (March 1980), pp. 137-50.

Rose, Peter S., and James W. Kolari. "Early Warning Systems as a Monitoring Device for Bank Condition," Quarterly Journal of Business and Economics, vol. 24 (Winter 1985), pp. 43-60.

Seballos, Lynn D., and James B. Thomson. "Underlying Causes of Commercial Bank Failures in the 1980s," Federal Reserve Bank of Cleveland, Economic Commentary, September 1, 1990.

Sinkey, Joseph F., Jr. "A Multivariate Statistical Analysis of the Characteristics of Problem Banks," Journal of Finance, vol. 30 (March 1975), pp. 21-36.

________. "Problem and Failed Banks, Bank Examinations and Early-Warning Systems: A Summary," in Edward I. Altman and Arnold W. Sametz, eds., Financial Crises. New York: Wiley Interscience, Inc., 1977.

________. "Identifying 'Problem' Banks: How Do the Banking Authorities Measure a Bank's Risk Exposure?" Journal of Money, Credit and Banking, vol. 10 (May 1978), pp. 184-93.

________, and David A. Walker. "Problem Banks: Definition, Importance and Identification," Journal of Bank Research, vol. 5, no. 4 (Winter 1975), pp. 209-17.

Stuhr, David P., and Robert Van Wicklen.
“Rating the Financial Condition of Banks: A
Statistical Approach to Aid Bank Supervision,”
Federal Reserve Bank of New York, Monthly
Review, September 1974, pp. 233-38.

Thomson, James B. “An Analysis of Bank
Failures: 1984 to 1989,” Federal Reserve
Bank of Cleveland, Working Paper 8916,
December 1989.

_______ . “Modeling the Bank Regulator’s
Closure Option: A Two-Step Logit Regression
Approach,” unpublished manuscript, January
1991.

Wang, George H. K., and Daniel Sauerhaft. "Examination Ratings and the Identification of Problem/Non-Problem Thrift Institutions," Journal of Financial Services Research, vol. 2 (October 1989), pp. 319-42.

West, Robert Craig. "A Factor-Analytic Approach to Bank Condition," Journal of Banking and Finance, vol. 9 (June 1985), pp. 253-66.

Whalen, Gary. “A Proportional Hazards Model
of Bank Failure: An Examination of Its Useful­
ness as an Early Warning Tool,” Federal Re­
serve Bank of Cleveland, Economic Review,
1st Quarter 1991, pp. 21-31.

______ , and James B. Thomson. “Using Finan­
cial Data to Identify Changes in Bank Condi­
tion,” Federal Reserve Bank of Cleveland,
Economic Review, 2nd Quarter 1988, pp.
17-26.





A Proportional Hazards
Model of Bank Failure:
An Examination of Its
Usefulness as an Early
Warning Tool
by Gary Whalen

Gary Whalen is an economic advisor
at the Federal Reserve Bank
of Cleveland. The author would
like to thank Paul Bauer and
William Curt Hunter for helpful
comments.

Introduction
The number of U.S. bank failures jumped sharply
in the mid-1980s and has remained disturbingly
high, averaging roughly 170 banks a year over
the 1985-1990 period. Furthermore, large-bank
failures have become increasingly common. For
a variety of reasons, the timing of closures and
the resolution techniques used have severely
strained the resources of the Federal Deposit In­
surance Corporation (FDIC). These develop­
ments have stimulated a great deal of debate
about the causes of costly bank closures and
about alternative ways to prevent them. One
focus of this debate has been on the appropriate
roles of market versus regulatory discipline. A
necessary condition for effective discipline by
either force is the ability to identify high-risk
banks accurately at a reasonable length of time
prior to failure without the use of expensive and
time-consuming on-site examinations. This re­
quires the use of some sort of statistical model,
conventionally labeled an “early warning
model,” to translate bank characteristics into
estimates of risk. There is considerable debate
about whether models of sufficient accuracy can
be built using only currently available accounting data.1

This study examines a particular type of early
warning model called a Cox proportional haz­
ards model, which basically produces estimates
of the probability that a bank with a given set of
characteristics will survive longer than some
specified length of time into the future. The sam­
ple consists of all banks that failed between
January 1, 1987 and October 31, 1990 and a ran­
domly selected group of roughly 1,500 nonfailed
banks. Using a relatively small set of publicly
available explanatory variables, the model iden­
tifies both failed and healthy banks with a high
degree of accuracy. Furthermore, a large propor­
tion of banks that subsequently failed are
flagged as potential failures in periods prior to
their actual demise. The classification accuracy
of the model over time is impressive, since the
coefficients are based on 1986 data and are not
updated over time. In short, the results demon­
strate that reasonably accurate early warning
models can be built and maintained at relatively
low cost.
The following section describes the propor­
tional hazards model (PHM) in general terms
and compares it to alternative statistical early

■ 1 This is the opinion reflected in Randall (1989), for example.

warning models. A short discussion of sampling
issues follows. Section III contains a more
detailed discussion about the specification of
the model estimated in this paper, and section
IV presents the model’s estimation results and
classification accuracy. The final section con­
tains a brief summary and conclusions.

I. The Proportional
Hazards Model

Of the large number of early warning/failure prediction studies that have been done, most have employed discriminant analysis or probit/logit techniques to construct the models. These models are designed to generate the probability that a bank with a given set of characteristics will fall into one of two or more classes, most often failure/nonfailure.2 Further, the predicted probabilities are of failure/nonfailure at some unspecified point in time over an interval implied by the study design.

Like these statistical techniques, a PHM can be used to generate estimates of the probability of bank failure or, alternatively, of survival. However, a PHM has several advantages relative to these other types of models, including the ability to produce estimates of probable time to failure. In fact, it can be used to generate a survival profile for any commercial bank (the estimated probability of survival longer than specified times as a function of time). The other types of models yield only the probability that a bank will fail at some point in time over some specified period, but provide no insight on when the failure will occur over this period. Additionally, a PHM does not require the user to make assumptions about the distributional properties of the data (for example, multivariate normality) that may be violated. In the one somewhat dated study of bank failures in which a PHM is estimated and used, the model is also found to be slightly more accurate than alternative models (see Lane, Looney, and Wansley [1986, p. 525]).

The dependent variable in a PHM is time until failure, T. The survivor function, which represents the probability of surviving longer than t periods, has the following general form:

(1)  S(t) = Prob(T > t) = 1 - F(t),

where F(t) is the cumulative distribution function for the random variable, time to failure. The probability density function of t is equal to f(t) = -S'(t). Given these definitions, the general form of the so-called hazard function is then

(2)  h(t) = lim[dt -> 0] P(t < T < t + dt | T > t)/dt = -S'(t)/S(t).
The hazard function specifies the instantaneous
probability of failure given survival up to time t.
A number of different types of hazard models
can be specified, depending on the assumptions
made about the nature of the failure time distri­
bution.3 In the PHM, the hazard function is
assumed to have the following rather simple
form:
(3)  h(t | X, B) = h0(t) g(X, B),

where X represents a collection of characteris­
tic variables assumed to affect the probability of
failure (or, alternatively, of survival) and B
stands for the model coefficients to be estimated
that describe how each characteristic variable
affects the likelihood of failure. The first part of
this expression, h0(t), is a nonparametric term
labeled the baseline hazard probability. This
probability depends only on time. To obtain the
failure probability in a particular case, the base­
line hazard probability is shifted proportionally
by the parametric function that is the second
part of the expression. In the Cox variant used
in this paper, the second function is assumed to
have an exponential form. That is, the Cox PHM
has the following form:
(4)  h(t | X, B) = h0(t) e^(X'B).

The related survivor function for the Cox PHM, which is used to calculate the probability that a commercial bank with a given set of characteristics will survive longer than some given amount of time into the future, is as follows:

(5)  S(t | X, B) = S0(t)^q,

where q = e^(X'B) and

     S0(t) = exp[ -∫0^t h0(u) du ].

■ 2 In some studies, the categorization of banks into risk classes is made on the basis of confidential CAMEL ratings.

■ 3 See, for example, the excellent review of a number of hazard models in Kiefer (1988) or Kalbfleisch and Prentice (1980).
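The exponential form of S0(t) in equation (5) follows directly from the definition of the hazard in equation (2); the short derivation below is standard survival-analysis algebra, added here for completeness rather than reproduced from the original article:

\[
  h(t) = \frac{-S'(t)}{S(t)} = -\frac{d}{dt}\ln S(t)
  \quad\Longrightarrow\quad
  S(t) = \exp\!\left(-\int_0^t h(u)\,du\right).
\]

Applying the same step to the baseline hazard h0(t) yields S0(t) = exp[ -∫0^t h0(u) du ], the quantity raised to the power q = e^(X'B) in equation (5).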

As in the hazard function, the first part of this
expression, S0(t), is called the baseline sur­
vival probability and depends only on time. It is
the same for every bank. To calculate survival
probabilities for any bank, it is necessary to
choose the relevant time horizon that deter­
mines the relevant baseline probability and then
plug the values of its characteristic variables
into the formula.
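As an arithmetic illustration of equation (5), the short Python sketch below computes survival probabilities for a single bank from a coefficient vector and a set of baseline survival probabilities. All numbers are invented for illustration; they are not the estimated baseline probabilities or coefficients reported in this article.

import numpy as np

def cox_survival(x, b, baseline_s):
    # Equation (5): S(t | X, B) = S0(t) ** q, with q = exp(X'B).
    q = np.exp(np.dot(x, b))
    return np.power(baseline_s, q)

# Made-up baseline survival probabilities for the 12-, 18-, and 24-month
# horizons, made-up coefficients for three covariates, and one bank's values.
s0 = np.array([0.95, 0.92, 0.90])
beta = np.array([0.02, 0.18, -0.05])
bank = np.array([60.0, 4.0, -1.0])
print(cox_survival(bank, beta, s0))   # survival probabilities at 12, 18, and 24 months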
The PHM does have several disadvantages,
although some of these are shared by competing
failure prediction models. Perhaps the most
important drawback is that estimation of the
PHM requires data on the time to failure. As
many others have noted, there is a distinction in
banking between insolvency (an economic
event) and failure (a regulatory event). That is,
bank failure represents a regulatory decision. So
whether one uses a PHM or a logit model, it is
actually the regulatory closure rule that is being
modeled. This can be problematic when one is
analyzing bank failures over the late 1980s. Dur­
ing this time, regulators had to resolve a number
of large distressed holding companies in Texas,
where financial problems were concentrated in
some but not all of a holding company’s banks
(generally, the lead or large subsidiary banks).
Typically, closure of the insolvent units was
delayed while attempts were made to dispose of
the entire organization. Thus, in some cases, the
reported financial condition of the larger subsidi­
aries of these holding companies suggests that
they were probably insolvent prior to resolution,
while smaller, sometimes numerous coaffiliate
banks exhibited relatively healthy financials even
shortly before closure. Failure to control for these
circumstances in some way could significantly
affect the coefficients and classification accuracy
of any type of estimated early warning model,
but the nature of the adjustment is critically
important for PHMs given the nature of the
dependent variable.
Empirically, this problem can be dealt with
in a number of ways. Some researchers have
added a consolidated holding-company size
variable to estimated bank failure equations (see
Gajewski [1989]). Others have estimated two-equation systems: a solvency equation and a
failure equation, adding holding company var­
iables to the latter (see Thomson [1989]). Alter­
natively, one could take the view that smaller
bank affiliates in unit banking states are the
functional equivalent of branches and so should
be consolidated into one or more of the larger
subsidiary banks in failure prediction studies.4
Another, somewhat cruder, solution that is
 generally equivalent to consolidation is simply


to exclude some or all of the smaller bank sub­
sidiaries of the holding companies in question.
This is the approach taken here. I include in the
estimation sample only the larger subsidiaries
(more than $500 million in total assets) of the
large Texas holding companies that failed.5
One is still left with the problem of somewhat
ambiguous dates of failure for some of the large
Texas holding companies. For example, in sev­
eral cases, resolution transactions that were
announced (indicating that the company was
judged to be failing as of a specific date) ulti­
mately collapsed, and the institutions were not
closed until some later date. Here, following
standard practice, I use the failure date desig­
nated by the FDIC (typically the date that FDIC
funds are disbursed).
Another possible disadvantage of the simple
PHM is the assumption that the values of the
explanatory variables remain constant over the
time horizon implicit in the specification. Obvi­
ously, this may not be the case, and if this
assumption is violated, classification accuracy of
estimated PHMs could suffer. It is possible to
estimate PHMs that relax this assumption (with
so-called time-varying covariates).6 However,
this complicates the analysis and is not under­
taken here.

II. Sampling
Using the entire population of banks to gen­
erate early warning models is typically not done,
since this method is costly and requires substan­
tial computer time and suitable hardware and
software. Practically, models of comparable ac­
curacy can be built and maintained much more
easily and cheaply using a sample of banks.
This is the approach taken here.
In bank failure studies, sampling is an impor­
tant issue, since it can significantly affect the
reported results. One common approach — the
one used in the only PHM study done to date —
is the use of a matched sample. In this type of
approach, the sample initially consists of some
collection of failed banks. Then, for each failed

■ 4

In fact, limited consolidation was authorized under a change in
Texas branching laws in 1987, and was done in varying degrees by
several of the state’s multibank holding companies.

■ 5

However, I include a ll bank affiliates of failed holding companies
in the holdout sample.

■

6 For a discussion of time-varying covariates, see Kalbfleisch and
Prentice (1980).


bank included in the sample, the researcher
adds one or more nonfailed banks determined
to be peers. This method is tedious and costly
and requires numerous subjective judgments on
the part of the researcher. It also is infeasible to
use when analyzing relatively recent failures,
since close matching is simply not possible. Fur­
thermore, it is not clear that models developed
using matched samples could be easily updated/
reestimated, and updating may be necessary to
preserve model accuracy.
I rejected relying solely on random sampling
because of the danger of too few failed banks
and because of cost considerations. Instead, I
employed a choice-based sampling approach
similar to that used in numerous other failure
prediction studies. Specifically, the data set
includes all banks that failed between January 1,
1987 and October 31, 1990 for which complete
data could be obtained and that were in oper­
ation for at least three full years prior to failure.
The nonfailed portion of the sample consists of
roughly 1,500 randomly selected banks. The esti­
mation sample is comprised of the 1987 and 1988
failures and approximately 1,000 of the non­
failed banks. The remainder of the failed and
nonfailed banks comprise the holdout sample.

III. The Specific
Form of the Model

The Approach
of Lane, Looney,
and Wansley

Lane, Looney, and Wansley (1986), hereafter referred to as LLW, estimate two different versions of PHMs using a relatively small sample of banks that failed over the 1979-1983 period and a matched sample of nonfailed banks. One version, labeled a one-year model, is designed to generate a survivor function that permits the user to predict the probability that a bank with a given set of characteristics will survive longer than times ranging from roughly zero to 12 months into the future. Another version, the two-year model, allows the user to predict survival probabilities ranging from roughly 12 to 24 months into the future. In their sample, LLW pool failures from all the years examined and use stepwise methods to select a relatively small subset of 21 financial condition variables for use as explanatory variables. They do not employ any local economic condition variables.

In the one-year model, LLW find the following ratios to be significant and include them in the final form of the estimated equation: the log of the commercial loans to total loans ratio, the total loans to total deposits ratio, the log of the total capital to total assets ratio, and the log of the operating expense to operating income ratio. In the two-year model, the ratios included are the total loans to total assets ratio, the log of the commercial loans to total loans ratio, the log of the total capital to total assets ratio, the log of the operating expense to operating income ratio, the log of the municipal securities to total assets ratio, and the rate of return on equity.7 It is interesting that none of the loan quality variables that LLW examine is found to be significant in either model. However, their set of loan quality variables does not include a measure of nonperforming loans, since such data were not reported by banks over the period examined. This may have lowered the classification accuracy of LLW's models, because nonperforming loan data are probably a better leading indicator of incipient asset quality problems than variables such as loan loss provisions or net chargeoffs, and asset quality problems are a primary cause of bank failure. The out-of-sample classification accuracy of these relatively simple models is good, although the holdout sample is relatively small and the time period examined is quite short.

The Current Model

I designed the model used here to produce estimates of the probability that a bank with some given set of characteristics will survive longer than times ranging from roughly zero to 24 months into the future. To accomplish this, I measure the dependent variable, the time to failure, as the time in months from the end of 1986 to the failure date for each failed bank in the estimation sample. For all nonfailed banks in the estimation sample, I censor the time to failure at 24 months, since these banks are known to have survived at least this amount of time into the future.8 I measure all of the explanatory variables for both failed and nonfailed banks as of year-end 1986. This approach

■

7 LLW use log values in some cases to transform explanatory vari­
ables that appear to be non-normally distributed. This is done because
the authors estimate competing discriminant models that require the ex­
planatory variables to be multivariate normal.

■

8 An additional advantage of the PHM is that it can accommodate
censored failure times.


is feasible given the relatively large number of
failed banks over the 1987-1988 period and the
sampling method used. So, unlike LLW, I do not
pool failures from different years. Furthermore,
by estimating a single model with a 24-month
time horizon, I incorporate the implicit assump­
tion that these survival probabilities depend
only on a single set of explanatory variables.
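Under these assumptions, the estimation setup can be sketched with modern, freely available tools. The fragment below is only an illustration: it uses the Python lifelines package as a stand-in for the SYSTAT routine the author reports using later in the article, and the bank observations and covariate values are invented.

import pandas as pd
from lifelines import CoxPHFitter   # stand-in for the SYSTAT Survival Module

# Hypothetical year-end 1986 observations: months to failure (censored at 24
# for banks not observed to fail within the window), an event flag, and a few
# covariates defined in the appendix. All values are invented.
banks = pd.DataFrame({
    "months_to_failure": [7, 14, 22, 24, 24, 24],
    "failed":            [1,  1,  1,  0,  0,  0],   # 0 = censored (survived 24+ months)
    "LAR":  [68.0, 55.0, 62.0, 52.0, 60.0, 51.0],
    "NPCR": [-6.0,  3.0,  2.5,  7.5,  1.5,  7.0],
    "ROA":  [-4.5,  0.2, -1.0,  0.6, -0.5,  0.7],
})

cph = CoxPHFitter(penalizer=0.1)   # a small ridge penalty keeps this tiny toy fit stable
cph.fit(banks, duration_col="months_to_failure", event_col="failed")
cph.print_summary()                # coefficient table, analogous in spirit to table 1

# Survival profile for the first bank, analogous to the curves in figure 1.
covariates = banks.drop(columns=["months_to_failure", "failed"])
profile = cph.predict_survival_function(covariates.iloc[[0]])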

The Explanatory
Variables
In general, I employ subsets of a relatively small
number of “typical” financial ratios used in
previous bank failure prediction studies as
explanatory variables in this study. All of these
are publicly available numbers drawn from the
year-end reports of the Federal Financial Institu­
tions Examination Council’s Reports on Condi­
tion and Income, known as call reports. The
variable names and definitions, along with the
1986 mean values for banks in the estimation
sample, appear in the appendix. I do not use
loan classification data drawn from examination
reports for a variety of reasons, the most impor­
tant of which is that such data are available only
at irregular intervals.9
The only other type of explanatory variable
used in this study is a single indicator of “local”
economic conditions. Recently, a consensus has
emerged that such variables have a significant
impact on the probability of bank failure and
should somehow be incorporated into the analy­
sis. However, an examination of previous
research reveals that this has not typically been
done in the past. In those studies that use local
economic variables, the standard approach is to
add one or more as explanatory variables in the
estimated failure equation. The identity of the
variables and the precise forms of these relation­
ships differ considerably. Some researchers have
found that such variables are significant and aid
classification accuracy.
More recent studies have used state-level eco­
nomic variables such as the change in personal
income, unemployment, or real estate construc­
tion. Some employ a form of state economic
diversification variable, while others simply add
variables designed to capture the importance of
the energy or farm sector in a given state. In a
few studies, economic data from the county level
or the metropolitan statistical area are employed.

■ 9 It would be interesting to add the currently confidential data on 30- to 89-day nonperforming loans to the model to see if this resulted in a substantial increase in explanatory power. Such data are likely to be highly correlated with classified loans and are available at regular intervals.

It seems inappropriate to simply add farm- or
energy-sector variables to failure prediction
equations. Although it is true that downturns in
these industries appear to be highly correlated
with bank failures in the recent past, there is no
reason to believe that this pattern will repeat
itself in the future (in the Northeast or the South­
east in the early 1990s, for example). If one
deems it desirable to add local economic vari­
ables to a bank failure model (and this may not
be the best way to proceed), a preferable
approach would be to use local variables such
as unemployment, employment, or some con­
struction series that reflect local economic shocks
regardless of their source.
I employ a state-level variable rather than a
more local variable for several reasons. Incor­
porating more local variables into the analysis is
much more tedious and costly. It would also be
more difficult to update such variables over
time. Furthermore, it is not clear that using more
local variables would produce more accurate
failure probabilities than state-level data. Pre­
vious research indicates that two of the most
useful leading indicators of economic condi­
tions at the state level are movements in build­
ing permits and initial unemployment claims.10
Here, only one state variable is used: the per­
centage change in state residential housing per­
mits issued over the three-year period ending in
the year in which the other explanatory vari­
ables are measured.
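The permit variable itself is just a three-year percentage change; a one-line sketch (with hypothetical permit counts) makes the construction explicit:

def pchp(permits_start_year, permits_end_year):
    # Percent change in a state's residential housing permits over the
    # three-year window ending in the measurement year (the PCHPxy variable).
    return 100.0 * (permits_end_year - permits_start_year) / permits_start_year

print(pchp(42_000, 28_000))   # roughly -33 percent, a bust-state decline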
Realistically, the response of the financial
condition of any individual bank to local
economic conditions varies across banks and
changes over time as managers react to antici­
pated movements in relevant local and national
economic variables. This view suggests that per­
haps a more correct approach (and a much
more ambitious one) would be to use only fore­
casted bank financial condition variables in the
failure prediction equation. The values of these
variables would be based on forecasts of local
or regional economic conditions generated
using separate models (see Goudie [1987], for
example). Alternatively, one might develop
state-level leading economic index series and
sequential probability models, which can be
used to generate the probability of a local reces­
sion, and then use these probabilities in a fail­
ure prediction model (see Phillips [1990]).
Neither of these approaches is attempted here.

■

10 See Whalen (1990). The leading-indicator variables could also
reflect the divergence between actual and anticipated local economic con­
ditions, which should be an important determinant of bank asset quality
and therefore of the probability of failure.


TABLE 1
PHM Estimation Results

Covariate    Coefficient    Standard error    t statistic
LAR             0.0242          0.0055            4.39
OHR             0.1766          0.0339            5.21
ROA            -0.0499          0.0193           -2.58
CD100R          0.0105          0.0050            2.07
NPCR           -0.1419          0.0086          -16.56
PCHP64         -0.0120          0.0019           -6.26

NOTE: Model chi square with six degrees of freedom: 1490.75. See appendix for variable definitions.
SOURCE: Author's calculations.

IV. Model Estimation
and Results
I derive the survivor function from the underly­
ing hazard function that is actually estimated.11
Although the focus in this study is on the former,
it should be noted that the coefficients from the
hazard function appear in the survivor function
unchanged. As a result, in a survivor function,
coefficients can be expected to exhibit counter­
intuitive signs. Variables that are expected to be
positively associated with the probability of sur­
vival, like return on assets (ROA ), will exhibit
negative coefficients. Similarly, variables that are
expected to be negatively associated with the
probability of survival, such as the overhead
expense ratio ( OHR ), will have positive coeffi­
cients.
The survivor function consists of estimated
baseline survival probabilities (S0[t] for various
t ’s) and a vector of estimated model coefficients
(the B vector), which I use to generate survival
probabilities for banks, given their particular set
of characteristics. I estimate a number of alterna­
tive models with differing sets of explanatory
variables. The estimation results for one of these
model specifications appear in table 1. I focus
only on a single model because this allows the
classification results to be examined in detail.

■ 11 I used the Survival Module of SYSTAT to estimate the model, a routine that employs the partial likelihood approach to estimate the B coefficients. This approach does not require that the form of the baseline hazard be specified. For tied failure times, I use Breslow's generalization of the Cox likelihood function. For details, see Steinberg and Colla (1988), appendix C.

However, I obtain similar classification results
using the other specifications.
All of the estimated coefficients exhibit the
correct sign and are highly significant. However,
it should be noted that, as in multiple regression,
collinearity among explanatory variables can be
and is a problem. Therefore, this specification,
like the others examined, is necessarily parsi­
monious. The variables that consistently exhibit
the strongest statistical relationships to the prob­
ability of bank survival are OHR, the large cer­
tificate of deposit dependence ratio, the loan to
asset ratio, the primary capital ratio, the nonper­
forming loan ratio, the net primary capital ratio,
and the change in housing permits variable. It is
interesting to note that the commercial real
estate loan variable is never found to be signifi­
cant in any version of the equation estimated,
possibly reflecting the somewhat aggregated
form of the variable used. A construction loan
variable was not employed, and this type of ac­
tivity is generally viewed as the riskiest form of
commercial real estate lending.
As noted above, the models estimated here
can be used to generate the probability that a
bank will survive longer than t units, where t
can take on any value from roughly zero to 24
months. This is done by substituting the relevant
X, B, and baseline survival probabilities into
equation (5). Allowing t to vary over the entire
permissible range for a bank with some given
set of characteristics results in the survival
profile for that bank. Thus, this profile shows
the probability that some particular bank will
survive longer than each possible t value, and
vividly portrays the model’s estimate of the
health of a particular institution. Three illustra­
tive profiles are presented in figure 1.
The top curve depicts the survival profile for
a typical “healthy” bank. This profile is derived
by inserting the 1986 mean values of the explan­
atory variables for the nonfailed banks in the
estimation sample into the estimated survivor
function. Thus, the curve shows that the esti­
mated probability of a healthy bank surviving
longer than any number of months ranging from
roughly zero to 24 is high — above 0.9. The
intermediate profile is for a hypothetical
“unhealthy” bank. In this case, the explanatory
variable values are set at the 1986 mean value
for the banks in the estimation sample that failed
in 1988 (that is, those that survived roughly 12 to
24 months into the future). The vertical distance
between the two curves represents the esti­
mated reduction in survival probability for the
unhealthy bank relative to the healthy bank at
every time horizon. The estimated probability

27

FIGURE 1
Survivor Profiles for Three Hypothetical Banks

[Figure: estimated survival probability (vertical axis, 0.0 to 1.0) plotted against months into the future (horizontal axis, 0 to 24) for three profiles: a healthy bank, an unhealthy bank, and a critically ill bank.]

SOURCE: Author's calculations.

TABLE 2
In-Sample Classification Accuracy

Time horizon (months)    Type I        Type II
12                       23 (14.0)     139 (11.8)a
18                       34 (13.5)     115 (10.6)b
24                       36 (10.8)      94 (9.3)

a. Of the 139 banks, 107 subsequently failed within 12 to 24 months. Thus, only 32 type II errors occurred for nonfailed banks (3.2 percent).
b. Of the 115 banks, 60 subsequently failed after 18 months. Thus, only 55 type II errors occurred for nonfailed banks (5.5 percent).
NOTE: In the estimation sample, 164 banks failed within 12 months, 252 failed within 18 months, and 333 failed within 24 months. The number of nonfailed banks used in the estimation sample is 1,008. Thus, the type II error rates at the 12-month, 18-month, and 24-month horizons are based on 1,177 banks, 1,089 banks, and 1,008 banks, respectively. The percentage of banks misclassified is in parentheses.
SOURCE: Author's calculations.

of the unhealthy bank surviving longer than 24
months is roughly 0.46. The bottom curve is the
survival profile for a hypothetical “critically ill”
bank: The values of all the explanatory variables
are set at the 1986 mean values for those banks
in the estimation sample that failed within 12
months (that is, 1987 failures). Because the val­
ues of the explanatory variables for this group

of banks are indicative of very high risk and
greater likelihood of failure, the survival profile

lies well below that of both of the other groups.
The estimated 24-month survival probability for
the critically ill bank is just 0.11.
Tables 2 through 8 present the classification
results produced using the estimated model. The
analysis of classification accuracy and the types
of classification errors made using an estimated
model are the acid tests of the worth of a poten­
tial early warning model.
In the analysis presented here, I focus only
on predicted 12-, 18-, and 24-month survival
probabilities. In order to use the estimated
models to classify banks as failures or nonfail­
ures at each of these time horizons in and out of
sample, the generated survival probabilities
must be compared to some critical probability
cutoff value. Typically, the proportions of failed
and nonfailed banks in the estimation sample
are used to determine the cutoff values. This is
the approach taken here. In the estimation sam­
ple used in this study, the probabilities of a bank
surviving beyond 12, 18, and 24 months are
roughly 0.88, 0.81, and 0.75, respectively. These
are the cutoff values used in the analysis. Thus,
if a bank’s estimated 24-month survival proba­
bility is less than 0.75, it is predicted to fail
within two years. If its estimated survival prob­
ability is greater than 0.75, it is predicted to sur­
vive longer than 24 months.
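The decision rule itself is simple. In the sketch below, the 0.75 cutoff is the 24-month value just described, while the two survival probabilities are hypothetical inputs:

def predicted_to_fail(survival_prob, cutoff):
    # A bank is classified as a predicted failure at a given horizon when its
    # estimated survival probability falls below the cutoff for that horizon.
    return survival_prob < cutoff

print(predicted_to_fail(0.46, 0.75))   # True  -> predicted to fail within two years
print(predicted_to_fail(0.92, 0.75))   # False -> predicted to survive beyond two years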
Type I and type II errors are defined in the
typical fashion: The former is a bank that failed
over some specified time horizon during which
it was predicted to survive, and the latter is a
bank that survived beyond some specified time
horizon during which it was predicted to fail.
Both types of errors are important in evaluating
the potential usefulness of an early warning
model. Obviously, a good model should exhibit
low type I error rates. Missing failures typically
implies delayed resolution, higher resolution
costs, or both. However, if an early warning
model is to be useful in allocating scarce exam­
ination resources, type II error rates should also
be relatively low. One exception to this general
rule is illustrated below. In particular, the catego­
rization of a prediction as a type II error depends
on the time period and the time horizon exam­
ined. Some type II errors could actually represent
banks that ultimately fail in some future period.
In evaluating the accuracy of any early warning
model, it is useful to identify how many banks
fall into this category of type II error, since they
actually represent a success.
The estimated models are quite accurate in-sample (see table 2). The type I and type II
error rates are typically in the 10 to 15 percent
range, and the overall classification accuracy is


TABLE 3
Out-of-Sample Classification Accuracy: 1988 Failed Banks

Time horizon (months)    Type I       Type II
1987 Data
12                       21 (12.4)      —
18                       18 (10.7)      —
24                       11 (6.5)       —

NOTE: Total number of failed banks in the sample is 169. Percentage of banks misclassified is in parentheses.
SOURCE: Author's calculations.

TABLE 4
Out-of-Sample Classification Accuracy: 1989 Failed Banks

Time horizon (months)    Type I       Type II
1987 Data
12                         —         121 (73.3)
18                       13 (15.5)    67 (40.6)
24                       12 (7.3)       —
1988 Data
12                       20 (12.1)      —
18                       14 (8.5)       —
24                        7 (4.2)       —

NOTE: Total number of failed banks is 165. Of these, 84 failed in the first six months of 1989. Percentage of banks misclassified is in parentheses.
SOURCE: Author's calculations.

above 85 percent for the 12- and 18-month time
horizons. The results for the 24-month time hori­
zon are slightly better. Furthermore, a relatively
large proportion of the type II errors at the 12- and 18-month time horizons are banks that ultimately failed before 24 months elapsed. Thus,
the model was signaling that these banks were
potential failures prior to their actual closure.
However, the important yardstick of success
for a failure prediction or early warning model is
its out-of-sample forecasting accuracy. To obtain
insight on this issue, I use the estimated model



to generate survival probabilities for all banks in
the estimation and holdout samples using data
for 1987, 1988, and 1989. Obviously, data are
not available for all banks for all years. For ex­
ample, only 1987 data exist for the 1988 failures.
I never reestimate the model coefficients, and
use the same cutoff values detailed above. The
results for every year are presented for each of
the various subsamples in tables 3 to 8.
Turning first to table 3, it is apparent that the
model does a relatively good job of identifying
the 1988 failed banks. The type I error rate
declines from 12.4 percent at the 12- month
horizon to 6.5 percent at the 24-month horizon.
No type II errors are possible for this subsample.
Table 4 shows the results for the 1989 fail­
ures using 1987 and 1988 data. Note that the
type I error rates remain relatively low. A look
at the type II errors again demonstrates that the
model does a reasonably good job of providing
an early warning of high-risk banks. For exam­
ple, using 1987 data, 73.3 percent of the 1989
failures were predicted to fail within 12 months
(that is, by year-end 1988).
Results for the 1990 failures (table 5) are sim­
ilar. The type I error rates are virtually the same
as those for the banks that failed in previous
years. And again, relatively high proportions of
the banks that ultimately failed in 1990 are iden­
tified as potential problems in 1987 and 1988.
Table 6 contains the 1987-1989 results for
the nonfailed banks used in the estimation
sample. Because none of these banks failed, no
type I errors are possible. The number and rate
of type II errors for this nonfailed subsample are
quite low. Table 7 contains virtually identical
results for a holdout sample of nonfailed banks.
Finally, table 8 presents results for the largest
possible sample. The total number of banks and
the numbers classified as failures and nonfailures
necessarily change through time. For the 1987
data, for example, the total number of failed
banks at the 12-month time horizon consists of
all the 1988 failures. The total number of non­
failed banks consists of the 1989 and 1990 fail­
ures and the roughly 1,500 nonfailed banks in
the estimation and holdout samples. At the 18-month time horizon, those banks that failed in
the first six months of 1989 are removed from
the nonfailed subsample and considered to be
failures. At the 24-month time horizon, all of the
1989 failures are removed from the nonfailed
subsample and counted as failures. I use the
same procedure to define the subsamples in
subsequent years. This exercise perhaps gives
the best idea of the potential usefulness of a
PHM as an early warning model.

29

TABLE 5
Out-of-Sample Classification Accuracy: 1990 Failed Banks

Time horizon (months)    Type I       Type II
1987 Data
12                         —          69 (56.7)
18                         —          87 (71.3)
24                         —          97 (79.5)
1988 Data
12                         —          92 (75.4)
18                        9 (10.1)    25 (20.5)
24                        9 (7.4)       —
1989 Data
12                       15 (12.3)      —
18                        8 (6.6)       —
24                        2 (1.6)       —

NOTE: 1990 failed-bank data through October 31. Total number of failed banks is 122. Of these, 89 failed in the first six months of 1990. Percentage of banks misclassified is in parentheses.
SOURCE: Author's calculations.

TABLE 6
Out-of-Sample Classification Accuracy: Nonfailed Estimation Sample

Time horizon (months)    Type I    Type II
1987 Data
12                         —        29 (2.9)
18                         —        58 (5.8)
24                         —       101 (10.1)
1988 Data
12                         —        33 (3.3)
18                         —        68 (6.7)
24                         —       116 (11.5)
1989 Data
12                         —        20 (2.0)
18                         —        41 (4.1)
24                         —        78 (7.7)

NOTE: Total number of nonfailed banks in the estimation sample is 1,008. Percentage of banks misclassified is in parentheses.
SOURCE: Author's calculations.




The model appears to perform quite well. In
each year, type I error rates are relatively low
for all three time horizons. Similarly, type II
error rates are also quite low, particularly if the
impact of misclassification of subsequent fail­
ures is considered. For example, when 1987
data are used and subsequent failures are ex­
cluded, the type II error rates for the 12-, 18-,
and 24-month horizons fall to 2.7 percent, 5.6
percent, and 9.4 percent, respectively. As noted
above, type II errors attributable to misclassifica­
tion of banks that ultimately fail are not undesir­
able but rather indicate the ability of the model
to identify subsequent failures early. The model
appears to perform this task quite well.
The fact that the classification accuracy does
not decline over time even though the model
coefficients are not reestimated is encouraging.
It indicates that the relationship between the
explanatory variables and bank survival proba­
bilities represented by the estimated model is
relatively stable. This is a desirable characteristic
of an early warning model, since it obviates the
need to update the model coefficients or to
change the specification frequently.12

V. Summary and
Conclusions

The results strongly suggest that a PHM with a
relatively small number of explanatory variables
constructed only from publicly available data
could be an effective early warning tool. The
overall classification accuracy of the estimated
model is high, while both type I and type II error
rates are relatively low. Furthermore, the model
flags a considerable proportion of failures early.
Many further refinements (in variables or in
specification, for example) are possible. In par­
ticular, it would be interesting to determine if the
currently confidential data on 30- to 89-day non­
performing loans would have a significant im­
pact on the explanatory power of this type of
equation. It would also be interesting to investi­
gate the relationship between the model’s pre­
dictions and CAMEL ratings, which reflect
additional nonpublic information generated at
considerable cost.

■ 12 Although the errors are not examined in detail, a cursory review
reveals that a considerable number of the type II out-of-sample errors in­
volved Texas banks. This is an important consideration given the earlier
discussion of insolvency versus failure, and may imply that the accuracy
of the model is even slightly higher than indicated by the classification
results reported in the tables.

TABLE 7
Out-of-Sample Classification Accuracy: Nonfailed Holdout Sample

Time horizon (months)    Type I    Type II
1987 Data
12                         —        12 (2.4)
18                         —        26 (5.1)
24                         —        40 (7.8)
1988 Data
12                         —         8 (1.6)
18                         —        26 (5.1)
24                         —        43 (8.4)
1989 Data
12                         —         8 (1.6)
18                         —        17 (3.3)
24                         —        36 (7.1)

NOTE: Total number of nonfailed banks in the holdout sample is 510. Percentage of banks misclassified is in parentheses.
SOURCE: Author's calculations.

TABLE 8
Out-of-Sample Classification Accuracy: Maximum Sample

Time horizon (months)    Type I       Type II
1987 Data
12                       21 (12.4)    231 (12.8)a
18                       31 (12.3)    238 (13.8)b
24                       23 (6.9)     238 (14.5)c
1988 Data
12                       20 (12.1)    133 (8.1)d
18                       23 (9.1)     119 (7.7)e
24                       22 (7.7)     159 (10.5)
1989 Data
12                       15 (12.3)     28 (1.8)
18                        8 (6.6)      58 (3.8)
24                        2 (1.6)     114 (7.5)

a. 190 of these subsequently failed.
b. 154 of these subsequently failed.
c. 97 of these subsequently failed.
d. 92 of these subsequently failed.
e. 25 of these subsequently failed.
NOTE: When year-end 1987 data are used, the sample consists of the 1988, 1989, and 1990 failures and the nonfailed estimation and holdout samples. The number of failed and nonfailed banks at each time horizon depends on the year and time horizon examined. The percentage of banks misclassified is in parentheses.
SOURCE: Author's calculations.


Finally, it will be interesting to see how accu­
rately the model forecasts failures in 1991 and
beyond. Some believe that the reasons why
banks are encountering financial difficulties at
present are somehow different than those faced
during the 1980s by southwestern banks, which
make up a large part of the sample used to esti­
mate this model. Many argue that effective m oni­
toring of bank financial conditions requires
disclosure of additional detailed information on
the market value of assets and liabilities. If the
estimated PHM exhibits the same degree of accu­
racy reported here over the next several years, it
suggests that neither of these views is correct.


APPENDIX

Estimation Sample: 1986 Mean Valuesa

            1987 failures    1988 failures    Nonfailures
LAR             63.28            64.41           50.82
COMLR           19.33            21.03           11.25
CRELR            9.39            13.14            8.92
CD100R          18.19            22.59            8.44
ROA             -4.57            -2.03            0.59
OHR              5.04             4.68            3.34
PCR              4.31             7.26            9.20
NPCR            -5.62             1.52            7.64
NCOR             6.71             3.13            1.53
NPLR            16.01             9.26            3.10
PCHPxy         -32.53           -39.06            6.09
ASSETS         271.72           238.38           37.36

a. Assets measured in millions of dollars. All other variables are measured in percentages.
SOURCE: Author's calculations.

Variable Definitions

LAR:      Total loans/total assets
COMLR:    Commercial and industrial loans/total assets
CRELR:    Commercial real estate loans/total assets
CD100R:   Total domestic time deposits in denominations of $100,000 or more/total assets
ROA:      Consolidated net income/average total assets
OHR:      Operating expenses/average total assets
PCR:      Primary capital/average total assets
NPCR:     PCR less (total nonperforming loans/average total assets)
NCOR:     Total net chargeoffs/average net loans plus leases
NPLR:     Total nonperforming loans/total loans plus leases
PCHPxy:   Percent change in state's residential housing permits measured over the 198x to 198y period

References

Gajewski, Gregory R. "Assessing the Risk of Bank Failure," in Proceedings from a Conference on Bank Structure and Competition, Federal Reserve Bank of Chicago, May 1989, pp. 432-56.

Goudie, A.W. "Forecasting Corporate Failure: The Use of Discriminant Analysis within a Disaggregated Model of the Corporate Sector," Journal of the Royal Statistical Society, vol. 150 (1987), pp. 69-81.

Kalbfleisch, J.D., and R.L. Prentice. The Statistical Analysis of Failure Time Data. New York: John Wiley & Sons, Inc., 1980.

Kiefer, Nicholas M. "Economic Duration Data and Hazard Functions," Journal of Economic Literature, vol. 26 (June 1988), pp. 646-79.

Lane, William R., Stephen W. Looney, and James W. Wansley. "An Application of the Cox Proportional Hazards Model to Bank Failure," Journal of Banking and Finance, vol. 10, no. 4 (December 1986), pp. 511-31.

Phillips, Keith R. "The Texas Index of Leading Economic Indicators: A Revision and Further Evaluation," Federal Reserve Bank of Dallas, Economic Review, July 1990, pp. 17-25.

Randall, Richard E. "Can the Market Evaluate Asset Quality Exposure in Banks?" Federal Reserve Bank of Boston, New England Economic Review, July/August 1989, pp. 3-24.

Steinberg, Dan, and P. Colla. Survival: A Supplementary Module for SYSTAT. Evanston, Ill.: Systat, Inc., 1988.

Thomson, James B. "An Analysis of Bank Failures: 1984-1989," Federal Reserve Bank of Cleveland, Working Paper 8916, December 1989.

Whalen, Gary. "Time Series Forecasts of Regional Economic Activity: Recent Evidence from Ohio," Federal Reserve Bank of Cleveland, unpublished manuscript, 1990.

First Quarter
Working Papers

Current Working Papers of the Cleveland Federal Reserve Bank are listed in each quarterly issue of the Economic Review. Copies of specific papers may be requested by completing and mailing the attached form below.

Single copies of individual papers will be sent free of charge to those who request them. A mailing list service for personal subscribers, however, is not available.

Institutional subscribers, such as libraries and other organizations, will be placed on a mailing list upon request and will automatically receive Working Papers as they are published.

■ 9101  Risk-Based Capital and Deposit Insurance Reform, by Robert B. Avery and Allen N. Berger
■ 9102  Inflation, Personal Taxes, and Real Output: A Dynamic Analysis, by David Altig and Charles T. Carlstrom
■ 9103  Generational Accounts: A Meaningful Alternative to Deficit Accounting, by Alan J. Auerbach, Jagadeesh Gokhale, and Laurence J. Kotlikoff
■ 9104  On the Valuation of Deposit Institutions, by Asli Demirguc-Kunt

Please complete and detach the form below and mail to:
Research Department
Federal Reserve Bank of Cleveland
P.O. Box 6387
Cleveland, Ohio 44101

Check item(s) requested:  □ 9101   □ 9102   □ 9103   □ 9104

Send to (please print):
Name
Address
City                    State                    Zip