Remarks by
Ricki Helfer
Chairman
Federal Deposit Insurance Corporation
at the
Brookings Institution
Washington, D.C.
December 19, 1996

The speakers and panelists at this forum have spent the day talking about what has
happened in the five years since the passage of FDICIA and what may lie ahead. I am
here to add another perspective: like the Ghost of Christmas Past, I will take us back to
the way things were -- in this case, to the way things were in the banking crisis of the
1980s and early 1990s, the crisis that led to the passage of the law.
It has been said that experience is a tough teacher -- first you get the test, then you
learn the lesson. Bank regulators were tested by the crisis, and learned lessons. Did we
learn the correct lessons?
When I became FDIC Chairman, I initiated a project to find the answer to that question,
an answer based on objective analysis. The result is a series of 14 papers that the FDIC
will publish over the coming year. Next month, drafts of three papers -- an overview, as
well as an analysis of bank examination and enforcement from 1980 through 1994 and
an analysis of off-site surveillance systems during the same period -- will be presented
at a symposium we are hosting. Although our studies cover many other issues, I will
focus today on our findings relating to examination and supervision.
Effective bank supervision is critical to sound deposit insurance. Without it, the insurer is
potentially faced with writing a blank check. It is also one of the important tools we have
for containing the problem of moral hazard that arises from any form of insurance -- whether public or private. The failure of the federal savings and loan insurance fund was
a direct result of the failure of supervision. It resulted in the taxpayer writing a blank
check. Without strong supervision, deposit insurance simply becomes a public resource
that risk takers exploit.
While economic, legislative and regulatory forces all contributed to a demanding
environment for banking, the more immediate cause of bank failures in the 1980s and
early 1990s was a series of severe sectoral and regional recessions. In agriculture,
energy and commercial real estate -- and in the Southwest, the Northeast and California
-- these recessions followed periods of exuberant expansion often characterized by
speculative activity. In all these cases, the conventional wisdom was that the boom
would not end. Regulators, too, reacted to the good times by becoming complacent.

Moreover, two decisions that were embraced by the Office of the Comptroller of the
Currency and the Federal Deposit Insurance Corporation -- and to some extent by the
Federal Reserve System -- to change examination policies during the late 1970s and
early 1980s had an important negative impact on the outcome and severity of the crisis
that was to follow. Those two decisions were (1) to place relatively more weight on off-site supervision and relatively less upon on-site examinations and (2) to concentrate
examination resources on those institutions that posed the greatest risk to the insurance
fund and the stability of the financial system. Both decisions ultimately resulted in fewer
field examiners and reduced numbers of examinations for most of the 1980s,
weakening the ability of bank supervisors to detect -- and respond to -- problems.
The total number of state and federal examiners declined by 13 percent from 1980 to
1984. The OCC and the FDIC experienced a greater decline of 17 percent. Even after
hiring resumed, it was not until 1987 that the examiner force -- federal and state -- was
restored to 1980 levels. In the meantime, the number of annual bank failures increased
from 10 to 184 between 1980 and 1987, while the number of troubled banks increased
from 217 to 1,575 over the same period.
The decline in the number of examiners led to marked changes in the frequency of
examinations. In 1980, the average length of time between examinations was 15
months. By 1986, the average interval had increased to 20 months -- and in the most
extreme cases, had increased to seven years. The greatest change was for CAMEL 1-rated banks, whose average interval increased from 15 to 28 months between 1980 and
1986.
With that background, today I will highlight six of the findings of our historical study -- findings based on evidence that, indeed, we regulators learned -- and are applying -- the correct lessons from our experience.
Lesson #1 -- There is no substitute for regular, on-site examinations of depository
institutions for addressing specific problems at individual institutions. On-site
examinations generate information on the condition of banks that is not available from
any other source.
During the 1980s, examination ratings that were up-to-date generally identified most of
the banks that required increased supervisory attention well before the bank actually
failed. Examinations were generally effective in identifying problem banks in a two-to-three-year window prior to failure. As we have seen, however, the problem was that far
too many examinations were out-of-date, and could not, therefore, serve the function of
identifying current difficulties in the industry. Of the 1,617 banks that failed in 1980
through 1994, 36 percent had CAMEL ratings of "1" or "2" two years prior to failure.
FDICIA, of course, requires annual full-scope examinations for all banks, except that an
18-month interval can be substituted for small banks with satisfactory ratings.
Lesson #2 -- Even though up-to-date CAMEL ratings were generally successful in
identifying banks that required greater supervisory attention, they had limitations.

Because CAMEL ratings are based on the internal operations of the bank, they do not
take into account economic developments that may pose future problems. This partly
explains why 1- or 2-rated institutions could fail only two years later.
We at the FDIC have created a Division of Insurance to monitor economic
developments; to provide data to our supervisory staff, as well as to the staffs of the
other regulatory agencies; and to make economic risk assessments available to the
industry in order to bridge the gap between the individual institution and the economic
environment in which it operates. We are also developing a model for projecting bank
failures that will incorporate regional and macroeconomic information in the forecast,
which up to now has been based solely on supervisory and historical information.
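As a generic illustration of that idea -- and emphatically not the FDIC's actual model -- the sketch below combines a supervisory rating with regional economic variables in a simple logistic score; the variables, coefficients, and function name are invented for illustration.

```python
# Generic illustration (not the FDIC's model) of how regional and
# macroeconomic variables might enter a failure forecast that previously
# relied only on supervisory information. Coefficients are invented.
import math

def failure_probability(camels_rating, regional_unemployment_change,
                        regional_real_estate_price_change):
    """Logistic score combining a supervisory rating with regional conditions."""
    score = (-6.0
             + 1.1 * camels_rating                       # weaker rating, higher risk
             + 0.5 * regional_unemployment_change        # rising unemployment
             - 0.4 * regional_real_estate_price_change)  # falling property values
    return 1.0 / (1.0 + math.exp(-score))

# A 4-rated bank in a deteriorating region scores far higher than a
# 2-rated bank in a stable one.
print(failure_probability(4, 2.0, -5.0))
print(failure_probability(2, 0.0, 1.0))
```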
Lesson #3 -- Because CAMEL ratings are generally a measure of the current condition
of the bank at the time it is examined, they do not systematically track risk factors that
may produce future losses. In response to this lesson, all of the regulatory agencies
today have programs aimed at tracking risk. At the FDIC, for example, we have
developed a flow chart for our examiners to use in tracking interest rate risk. It reflects a
graduated approach to determining the risk exposure of an institution -- the more risk
the examiner finds, the more steps he or she must take.
We are now field testing 10 more flow charts that cover areas ranging from underwriting
and credit administration practices to loan review systems to insider transactions. The
purpose of this structured risk-assessment approach is to look beyond the examination
date to how a bank can respond to changing market conditions in the context of its
individual risk profile.
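To make the graduated idea concrete -- more risk found, more steps required -- here is a minimal sketch, not the FDIC's actual flow chart; the risk measure, thresholds, and step descriptions are invented for illustration.

```python
# Illustrative sketch of a graduated risk-assessment flow: the higher the
# preliminary risk indicator, the more examination steps are required.
# The measure, thresholds, and step names are hypothetical, not the FDIC's.

def assess_interest_rate_risk(rate_sensitivity_gap_ratio):
    """Return the examination steps triggered by a bank's one-year
    rate-sensitivity gap as a share of assets (a hypothetical measure)."""
    steps = ["Review board policy limits on interest rate risk"]
    if abs(rate_sensitivity_gap_ratio) > 0.10:   # modest exposure
        steps.append("Test the bank's internal rate-risk measurements")
    if abs(rate_sensitivity_gap_ratio) > 0.20:   # elevated exposure
        steps.append("Model earnings under rate-shock scenarios")
    if abs(rate_sensitivity_gap_ratio) > 0.30:   # high exposure
        steps.append("Discuss corrective commitments with management")
    return steps

for gap in (0.05, 0.15, 0.35):
    print(f"gap {gap:.0%}: {len(assess_interest_rate_risk(gap))} steps")
```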
Moreover, risk-based capital takes into account off-balance sheet risk.
Most recently, the CAMEL rating system has been updated to become CAMELS to
emphasize risk assessment and the risk profile of the institution.
Lesson #4 -- Once troubled institutions were identified during the 1980-94 period, they
were subjected to supervisory and enforcement actions that were by and large effective
in reducing failures and losses to the insurance fund. About one-half of all banks rated "4" or "5" by the FDIC from 1980 through 1994 were the subject of formal enforcement
actions; many of the remaining banks received informal enforcement actions.
About 75 percent of all problem banks recovered, while 25 percent failed.
As opposed to the thrift experience, bank supervisory actions led to lower asset growth,
reduced dividend payments, and increased capital injections at troubled banks. This
had the effect of limiting risk-taking by problem banks and limiting losses to the
insurance fund when the banks failed.
Lesson #5 -- While capital is important as a cushion to protect banks from failure and
the insurance funds from loss, even sizable capital will not save an institution with
significant problem assets and a high risk profile. We looked at banks in 1982 and

separated them into two groups. The first group survived the next five years. The
second were the banks that failed in 1986 and 1987.
In 1982, the banks that did not fail had an average equity ratio of 8.84 percent, while
failed banks had a ratio of 8.29 percent, only 55 basis points lower. Moreover, 8.29
percent -- the lower number -- is above the level needed to be considered well-capitalized under the risk-based system now in effect.
Capital is a lagging indicator of the health of an institution -- an important point in
weighing the significance of the prompt corrective action requirements of FDICIA.
Examiners analyze considerably more information than capital ratios to determine a
bank's likelihood of failure.
The real value of prompt corrective action, therefore, may be that the regulators must
maintain a staff of examiners sufficient to meet its demands and the demands of
mandated regular examinations. In light of the experience in the early 1980s, that is
valuable.
Lesson #6 -- Based on the experience of the 1980s, risk factors can be used to identify
groups of banks that have a higher risk of failure.
For example, the banks that failed in the years 1982 through 1987 had distinctly higher
risk profiles in 1982 than banks that did not fail. They had higher loan-to-asset ratios
than survivors. They had substantially higher ratios of interest and fee income on their
loan and lease portfolios, which suggests that their loans were riskier. They also had
higher growth rates than the banks that did not fail, but these growth rates were sharply
cut back as the banks approached failure, as FDIC enforcement actions took effect.
This finding suggests that the focus on risk assessment in current supervisory thinking
is on target.
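To make the finding concrete, here is a minimal sketch of a screen built on the risk factors just described; the field names, cutoff values, and sample figures are invented for illustration.

```python
# Sketch of a simple risk-factor screen of the kind the finding suggests:
# flag banks whose loan concentration, loan yields, and asset growth are
# all well above their peers. Field names and cutoffs are invented.

def high_risk_flag(bank, peer_medians):
    """True if the bank exceeds peer medians on all three risk factors."""
    loans_to_assets = bank["loans"] / bank["assets"]
    loan_yield = bank["interest_and_fee_income"] / bank["loans"]
    growth = bank["asset_growth_rate"]
    return (loans_to_assets > peer_medians["loans_to_assets"] and
            loan_yield > peer_medians["loan_yield"] and
            growth > peer_medians["asset_growth_rate"])

peers = {"loans_to_assets": 0.55, "loan_yield": 0.11, "asset_growth_rate": 0.08}
bank = {"loans": 70.0, "assets": 100.0, "interest_and_fee_income": 9.1,
        "asset_growth_rate": 0.20}
print(high_risk_flag(bank, peers))  # True: high concentration, yield, and growth
```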
Beyond these and other specific lessons that our studies confirm, the FDIC's history of
the eighties and early nineties project reinforces the general lesson from that time: that
balance is the key to success in both regulating banks and managing deposit insurance.
In banking regulation, balance means that we recognize that when things are going
badly, the pendulum has a way of swinging back -- and when things are going well, the
pendulum will someday swing the other way, too. We regulators can maintain this
balance only if we follow the basic principles of bank supervision both in good times and
in bad.
During good times, we must be alert to problems and do something about them before
they result in severe problems for the banking system. We must be just as realistic
when the cycle turns down as we are when the cycle is on the upswing. Banks are in
the business of accepting risk as financial intermediaries and of making a profit. We
should not fall into the mindset that problems lurk under every rock and in every loan
file. We should justify the balance we maintain as regulators on the basis of fact and
critical analysis.

Balance in managing deposit insurance means assuring stability in the financial system
while addressing the problem of moral hazard that arises from public, or private, deposit
insurance. By protecting depositors against loss, deposit insurance virtually eliminates
the risk of bank runs and disruptive breakdowns in bank lending that damage the
economy.
On the other hand, by assuming the risk of losses that would otherwise be borne by
depositors, deposit insurance provides incentives for increased risk-taking by bank
management, thereby exposing the insurance fund to greater losses. Moral hazard is a
particularly serious concern if the institution is nearing insolvency. Then, the owners
have strong incentives to make risky investments because profits accrue to the owners,
while losses fall on the deposit insurance fund.
In the 1980s, the balance tipped in favor of stability. In assuring stability, the FDIC was
eminently successful. Stability was achieved, however, at great cost -- and with respect
to savings and loan failures, at great cost to the taxpayers. FDICIA was the Congress'
call to us to restore the balance by giving more attention to the problem of moral hazard.
In carrying out the requirements of FDICIA -- and pursuing other initiatives -- we are
doing so through risk-based and higher minimum capital standards, risk-related deposit
insurance premiums, the least-cost test for resolving bank failures, and national
depositor preference.
First, the development of internationally-accepted risk-based capital standards is one of
the most significant innovations in the history of banking regulation. The Basle
Committee on Banking Supervision has laid out a framework for assessing an
institution's capital adequacy by weighting its assets and off-balance sheet exposures on
the basis of counterparty risk. Moreover, recognizing that international banks have been
actively involved in trading securities and derivative products, the Committee has
developed progressive standards through the use of standardized and internal models
to measure the unique market risks of specific portfolios.
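As a rough illustration of how risk weighting works, the sketch below computes a risk-based capital ratio from a made-up balance sheet; the 0, 20, 50, and 100 percent buckets follow the familiar Basle Accord counterparty categories, while the portfolio figures and function names are invented.

```python
# Minimal sketch of a risk-based capital calculation: each asset class is
# weighted by counterparty risk before the capital ratio is computed.
# Portfolio figures are invented; the 0/20/50/100 percent buckets follow
# the familiar Basle categories and are used here only for illustration.

RISK_WEIGHTS = {
    "cash_and_treasuries":   0.00,
    "interbank_claims":      0.20,
    "residential_mortgages": 0.50,
    "commercial_loans":      1.00,
}

def risk_weighted_assets(exposures):
    """Sum each exposure times its counterparty risk weight."""
    return sum(amount * RISK_WEIGHTS[asset] for asset, amount in exposures.items())

def risk_based_capital_ratio(capital, exposures):
    return capital / risk_weighted_assets(exposures)

portfolio = {
    "cash_and_treasuries": 200.0,
    "interbank_claims": 100.0,
    "residential_mortgages": 300.0,
    "commercial_loans": 400.0,
}
# RWA = 0 + 20 + 150 + 400 = 570; with 50 of capital the ratio is about 8.8%.
print(f"risk-based capital ratio: {risk_based_capital_ratio(50.0, portfolio):.1%}")
```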
Second, higher minimum capital standards are enforced through prompt corrective
action. The principle embedded in prompt corrective action is gradation of risk and of
appropriate regulatory response: The less capital a bank has, the smaller the cushion it
has to absorb losses, and the greater the risk it poses to the insurance fund. The
greater the risk, the more attention it should receive from regulators, but strong capital,
as we have seen, is a necessary but not sufficient condition for safe and sound banking.
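A minimal sketch of that gradation follows; the category names echo the prompt corrective action framework, but the single leverage-ratio test, the cutoff values, and the listed responses are simplifying assumptions rather than the full regulatory definitions.

```python
# Sketch of prompt corrective action's gradation: less capital, stronger
# supervisory response. The single leverage-ratio test, the cutoffs, and
# the listed responses are simplifications for illustration only.

PCA_LADDER = [
    (0.05, "well capitalized",               "normal supervision"),
    (0.04, "adequately capitalized",         "no brokered deposits without a waiver"),
    (0.03, "undercapitalized",               "capital restoration plan; growth limits"),
    (0.02, "significantly undercapitalized", "restrictions on pay and affiliate transactions"),
]

def pca_response(leverage_ratio):
    """Map a leverage ratio to a capital category and an illustrative response."""
    for floor, category, response in PCA_LADDER:
        if leverage_ratio >= floor:
            return category, response
    return "critically undercapitalized", "receivership or conservatorship"

print(pca_response(0.061))  # ('well capitalized', 'normal supervision')
print(pca_response(0.015))  # ('critically undercapitalized', ...)
```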
Third, the principle of gradation of risk and response is also reflected in our system of
risk-related FDIC insurance premiums. The greater the risk, the higher the premiums
the institutions pay. Risk-related premiums promote safety and soundness -- and help to
address the issue of moral hazard -- by giving institutions an economic incentive -- through lower deposit insurance premiums -- to improve their conditions and maintain
lower risk profiles.

The deposit insurance premium for an individual institution is now established on the
basis of its capital and supervisory ratings -- with three categories of each and a nine-block grid. Currently, 94 percent of institutions insured by the Bank Insurance Fund and
89 percent of the institutions insured by the Savings Association Insurance Fund are in
the FDIC's best category for deposit insurance premiums, which means these
institutions are both well-capitalized and either 1- or 2-rated.
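To show how such a matrix works, here is a minimal sketch: three capital groups crossed with three supervisory subgroups, each cell carrying an annual assessment rate. The rates are placeholders chosen only to span the 27-basis-point spread discussed below; they are not the FDIC's actual schedule, and the function is illustrative.

```python
# Sketch of a nine-block risk-related premium matrix: three capital groups
# by three supervisory subgroups. The rates (in basis points of assessable
# deposits, per year) are placeholders preserving a 27-basis-point spread
# between the best and worst cells; they are not the FDIC's actual schedule.

PREMIUM_GRID_BP = {
    #                         subgroup A  subgroup B  subgroup C
    "well capitalized":       {"A": 0,   "B": 3,   "C": 17},
    "adequately capitalized": {"A": 3,   "B": 10,  "C": 24},
    "undercapitalized":       {"A": 10,  "B": 24,  "C": 27},
}

def annual_premium(capital_group, supervisory_subgroup, assessable_deposits):
    """Premium in dollars for a given grid cell and deposit base."""
    rate_bp = PREMIUM_GRID_BP[capital_group][supervisory_subgroup]
    return assessable_deposits * rate_bp / 10_000

# A well-capitalized, 1- or 2-rated bank (subgroup A) with $500 million in
# assessable deposits would pay nothing under these placeholder rates.
print(annual_premium("well capitalized", "A", 500_000_000))
```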
We are analyzing whether other factors are relevant to risk to the insurance funds -- and
whether the nine-block grid for setting deposit insurance premiums should be
expanded. We are also examining whether our current 27-basis point spread is
sufficient to price the risks to the insurance funds posed by individual institutions. Those
are questions that we will give a lot of attention to during the next year.
Fourth, in resolving bank failures, the FDIC is required by FDICIA to accept the proposal
from a potential purchaser that is the least costly to the deposit insurance fund of all the
proposals we receive. After the law took effect, in more than half of the failures in 1992 -- 66 out of 120 -- uninsured depositors received less than 100 cents on each dollar
above $100,000. That was a significant increase from 1991, when fewer than 20 percent of the failures involved a loss for
uninsured depositors. While the number of bank failures in 1992 was lower than in
previous years, the number of uninsured depositors experiencing a loss was
significantly greater.
Finally, the passage of a national depositor preference law in 1993 gave creditors of
banks other than depositors an extra incentive to be concerned about the condition of
their institutions. If a bank fails, anyone with a non-deposit claim gets nothing until all
depositors, including the FDIC as insurer, have been made whole. It is still too early to
assess the impact of this statutory change.
Conceptually, higher risk-based and minimum capital standards, risk-related deposit
insurance premiums, and the least-cost test for resolving bank failures are direct and
indirect surrogates for the discipline that depositors would logically impose if they had
access to the economist's dream: perfect information in a purely competitive market.
In conclusion, we have been working to improve our system of banking regulation and
supervision -- including the safety net -- for more than a decade. The banking crisis of
the 1980s and early 1990s exposed weaknesses in the banking system -- and in the
system of bank regulation. FDICIA was a reaction, but not the only one. Fortunately,
regulators have continued to work beyond FDICIA's bounds to find better ways of
responding to supervisory issues.
The Ghost of Christmas Past came with the message that the past was prelude to the
future. In the euphoria of a year when the commercial banking industry is likely to make
$50 billion in profits for the first time, perhaps we, too, can benefit from reflecting on that
message. As we have seen in our history of the 1980s and early 1990s project, it took years for problems at banks to surface.

In the end, the chief lesson of the 1980s is a clear one: there is a continuing, strong
need for effective and balanced supervision.
