
Conference Overview: Major Themes
and Directions for the Future
William J. McDonough

This special issue of the Economic Policy Review presents
the proceedings of “Financial Services at the Crossroads:
Capital Regulation in the Twenty-First Century,” a conference hosted by the Federal Reserve Bank of New York in
partnership with the Bank of England, the Bank of Japan,
and the Board of Governors of the Federal Reserve System.
The conference, held in New York on February 26-27,
1998, examined a wide variety of topics: the impact of
capital standards on bank risk taking, new industry
approaches to quantifying risk and allocating capital, proposals for reforming the current structure of capital rules,
and the role of capital regulation in bank supervision.
Although the speakers at the conference took very
different positions on several regulatory capital issues, their
papers all directly or indirectly point to one question:
Where do we go from here? In this overview, I will try to
summarize some of the main themes that emerged from
the papers and discussion. I will then suggest what these
themes imply for the choices facing financial institutions
and their supervisors in the years ahead and for the future
of capital regulation as a whole.

William J. McDonough is the president of the Federal Reserve Bank of New York.

EVOLUTION IN RISK MEASUREMENT AND
MANAGEMENT PRACTICES IS CONTINUOUS
Risk measurement and management practices have evolved
significantly since the Basle Accord was adopted in 1988,
and there is every reason to believe that this evolution will
continue. In fact, the papers and discussion at this conference suggest that change is the natural state of the world in
risk management and that no model or risk management
approach can ever be considered final.
Even in a well-developed risk measurement area
such as value-at-risk modeling for market risk exposures,
innovations and fresh insights are emerging. These
advances are the outgrowth of both academic research
efforts and financial institutions’ day-to-day experience
with value-at-risk models. The papers presented in the
session on value-at-risk modeling exemplify how academic research can suggest new approaches to addressing
real-world problems in risk measurement.
Evolution is even more evident in the developing
field of credit risk modeling. As the papers in the credit
risk session demonstrate, advances in credit risk measurement are occurring along several fronts. First, financial
institutions are refining the basic empirical techniques that
they use to assess credit risk. In particular, banks have developed enhanced methods of evaluating portfolio effects—effects shaped by credit risk concentrations and correlations in defaults and credit losses across different positions—and have improved their ability to measure the impact of these effects on the overall credit risk exposure of an institution. In addition, the new empirical techniques allow financial institutions to assess more accurately the risk that each transaction contributes to the credit portfolio as a whole, as well as the risk of each transaction on a stand-alone basis. Thus, credit risk models, although still in the early days of development and implementation, have the potential to deepen banks’ and supervisors’ understanding of the complete risk profile of credit portfolios.

The discussion during the credit risk session revealed that there are many approaches to credit risk modeling and a variety of applications. The diversity of ideas about credit risk modeling is the sign of a healthy climate of exploration and development, which should lead to improved modeling techniques and a more effective use of models’ output by financial institutions making internal risk management, capital allocation, and portfolio decisions.

RAPID CHANGES IN RISK MANAGEMENT
REQUIRE CORRESPONDING CHANGES
IN SUPERVISORY DIRECTION

The rapid evolution in financial institutions’ risk management practices presents a substantial challenge to supervisors. As several of the conference papers make clear, the impact of supervisory rules and guidelines—especially regulatory capital requirements—can vary substantially as the financial condition, risk appetite, and risk management approaches used by financial institutions change, both across institutions and for a given institution over time. In an environment in which financial institutions are developing new and increasingly complex methods of assuming and managing risk exposures, regulatory capital requirements and other supervisory practices must continually evolve if they are to be effective in meeting supervisory objectives. Simply keeping up with innovations in the measurement and control of risk is therefore a vital task for supervisors, although merely a starting point.

The speakers in the opening session of the conference argued that regulatory capital requirements and other supervisory actions can have significant effects on the risk-taking behavior of financial institutions. In response to capital requirements, banks adjust their risk profiles, altering the overall level of risk undertaken and shifting their exposures among different types of risk that receive different treatments under regulatory rules. Further, the speakers indicated that each bank’s response to changes in regulatory capital requirements will depend on the capital constraints faced by the bank. Banks under more binding capital constraints may have greater incentives to engage in “risk shifting” and other practices to reduce the constraints from regulatory capital requirements. Taken together, these findings suggest that supervisors must pay attention to the incentive effects of regulation as well as the evolution of risk management practice in the industry.
The discussion in several sessions offers a corollary
to this last point, namely, that supervisors have many ways
to adapt their practices in response to industry developments. They can, for example, build on the incentives that
already motivate financial institutions to improve their
risk measurement and management capabilities. Expanding the use of risk measurement models for regulatory
capital purposes—as some observers now suggest in the
case of credit risk models—is only one way in which
supervisors can take advantage of existing advances in risk
management within financial institutions. Improved risk
management techniques can also enhance the ability of
supervisors to monitor the risk profiles of financial institutions and to assess both the strengths and the vulnerabilities
of the financial institutions under their charge. Although
the focus of this conference is regulatory capital, we should
not lose sight of the fact that supervisors can use innovations
in risk management to deepen their understanding of the
risks facing financial institutions.

“ONE-SIZE-FITS-ALL” CAPITAL RULES
WILL BE INEFFECTIVE
As financial institutions become more complex and more
specialized, “one-size-fits-all” capital rules are more likely to be ineffective or to induce unintended and undesirable
reactions. Perhaps the most significant theme to emerge
from the discussion at the conference is the idea that such
“one-size-fits-all” approaches to capital regulation will
fail in the long run. Conference participants suggested
that in the future, supervisory practice and capital regulation will be based less on specific rules and prescriptions
and more on a system of general principles for sound and
prudent management. This change will come about in
part because supervisors will find it harder to formulate
precise rules to regulate the increasingly sophisticated
activities of financial institutions. However, a more
important reason for the change—raised in several of the
papers in this conference—is the difficulty of crafting
effective regulatory capital requirements when the circumstances and characteristics of individual financial
institutions heavily influence the way in which each
institution responds to any particular set of rules. Thus, a
single rule or formula could have quite different effects
across institutions—effects that could diverge markedly
from those intended by supervisors.
This last point was made forcefully in the session
on incentive-compatible regulation and the precommitment approach and in the session on the role of capital
regulation in supervision. Papers presented in both sessions
stressed that effective regulatory capital regimes must take
into account the risk profile and characteristics of individual
institutions. Some participants suggested that this principle
should guide the choice of a scaling factor in the internal
models approach to market risk capital requirements;
others applied it to the choice of a penalty in the precommitment approach; still others related it to the overall
nature and structure of regulatory capital requirements.
This principle also emerged, in a slightly different
form, in the sessions on value-at-risk and credit risk modeling. The papers presented in these sessions used a variety
of modeling approaches, reflecting in part contrasting
views of the objectives of risk modeling. Participants
took different positions on the best method of modeling
market and credit risk and of determining an institution’s
optimal level of capital, suggesting that no single formula for setting capital requirements would be optimal for all
institutions.

FINANCIAL INSTITUTIONS AND SUPERVISORS
FACE CHALLENGES FOR THE FUTURE
The issues that I have discussed define the challenges facing
financial institutions and supervisors entering the
twenty-first-century world of supervisory capital regulation. For financial institutions, one key challenge is to
determine how best to measure the types of risk they face.
The discussion over the past two days has highlighted a
number of areas in credit risk modeling that deserve further
attention—including the shortage of historical data on
default and credit loss behavior, the difficulty of comparing models and modeling approaches across institutions,
and the need to develop methods of model validation.
Although these issues are indeed the focus of much attention, banks and other financial institutions are also
attempting to understand and manage other important
forms of risk—such as operational and legal risk—that are
just as complex and less easily quantifiable. Finally, financial institutions face the challenge of implementing
advances in risk modeling in a coherent and systematic
fashion, whether for pricing, portfolio management, or
internal capital allocation.
For supervisors, the most important challenge
involves developing an approach to capital regulation that
works in a world of diversity and near-constant change.
The papers presented at this conference provide evidence of
an active effort to meet this challenge. Supervisory capital
requirements will undoubtedly continue to evolve, reflecting innovations in risk management and measurement at
financial institutions as well as changes in supervisors’
views of the appropriate capital regime. Whatever the
approaches eventually adopted, the next generation of
supervisory capital rules must take into account the vital
role of incentives in determining the behavior of financial
institutions.
Financial institutions and supervisors alike must
consider how the adoption of new approaches to capital
regulation will affect the overall level of capital in financial
institutions and the relationship between required capital
and economic capital. To this end, we must address a series
of key questions about capital regulation: What risks
should be covered through capital requirements? How do
we decide on the level of prudence? What is the role of
minimum capital requirements? And what is the supervisor’s role in the assessment of capital adequacy? A number
of the papers given over the past two days have taken up
these vital questions, and the next step is to develop our
thinking on these key issues in a more systematic way.
More fundamentally, we need to give fuller consideration to the purpose of capital, as it is seen by financial
institutions on the one hand and by supervisors and central
bankers on the other. In addition, we need to understand
the relationship between these two perspectives, and to
evaluate how this relationship could influence capital adequacy and the incentives to assume and manage risk under
various regulatory capital frameworks. This task involves
developing a better grasp of the objectives of capital regulation in light of the rapidly changing character of financial
institutions, the availability of new risk management
techniques, and the need for systemic stability.
The challenges highlighted here create a substantial agenda for future research. The need for additional
research, together with the enormous interest that this conference has generated, suggests that it would be wise to
establish a forum for further analysis and discussion of
capital regulation issues. As a first step, a series of seminars
on technical issues might be held. These seminars would be
conceived as an open exchange of ideas rather than a
decision-making or advisory initiative. Such efforts to
foster an ongoing dialogue and to build consensus among
academics, supervisors, and industry practitioners on regulatory issues could be extremely beneficial. Certainly, the
resolution of these issues—or the failure to resolve them
in an intelligent fashion—will shape the future course of
capital regulation for financial institutions.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Opening Remarks
Chester B. Feldberg

On behalf of the Federal Reserve Bank of New York, I
would like to welcome all of you to New York City and
to our conference “Financial Services at the Crossroads:
Capital Regulation in the Twenty-First Century.” Today’s
large and distinguished audience reflects our good fortune
in deciding early last year to hold a conference on this
particular topic at this particular time. We have more than
250 registered participants as well as many observers from
throughout the Federal Reserve System. Among those
attending today are fifteen members of the Basle Committee
on Banking Supervision, virtually all members of the
Capital Subgroup of the Basle Committee, several senior
U.S. financial supervisors, and representatives of financial
institutions from more than fifteen countries. The academic community is also well represented.
Although we at the New York Fed are the hosts of
this conference, the conference has been organized in close
collaboration with the Bank of England, the Bank of Japan,
and our colleagues at the Board of Governors of the Federal
Reserve System. It is a sure sign of how truly global our
financial system has become that the very first step we took
in planning today’s conference was to enlist the active participation of those institutions. I would like to thank the individuals from those institutions who helped arrange the conference—Patricia Jackson of the Bank of England, Masatoshi Okawa of the Bank of Japan, and Allen Frankel of the Board of Governors—as well as the team here in New York, led by Bev Hirtle, for their outstanding work.

Chester B. Feldberg is an executive vice president at the Federal Reserve Bank of New York.
It was just about a year ago that we began planning the conference. At that time, we were deeply engaged
in several capital-related activities: the completion and
implementation of the Market Risk Amendment to the
Basle Accord, a Federal Reserve study of credit risk modeling, the development of a supervisory approach to credit
derivatives, and the assessment of a new round of securitization activity. All of these efforts suggested that it was an
appropriate time to hold a forum on capital regulation.
Further stimulus was provided by developments
in the research and financial communities. We were seeing
new techniques of risk management—techniques that
relied on innovations in analytical and statistical approaches
to measuring risk. We were also seeing an increasing integration of traditional banking functions, such as commercial lending and interest rate risk management, with the
full range of capital markets activities. Finally, we could not
ignore the widening gap between the sophisticated risk
management practices of financial institutions and the
simpler approach to credit risk capital requirements
embodied in our current capital standards.
It is important to remember that the original
Basle Accord incorporated what was, in the mid-to-late
1980s, state-of-the-art assessment of capital adequacy at
large financial institutions. Partly for this reason, the Basle
Accord was, and still is, viewed as a landmark achievement
of the Basle Committee and a milestone in the history of
banking supervision.
The adoption of the Accord was quickly followed
by a critique of everything from the original risk-weighting
scheme to the handling of derivatives-related credit exposures. The Basle Committee has responded by amending
the Accord several times to update it and to incorporate the
new capital standards for market risks—standards that
were seen as necessary even at the time the Accord was first
published. Thus, more than most international agreements, the Accord is truly a living document that has
continued to evolve with advancing financial industry
practices.
Evolution is almost too soft a word to describe the
changes we have witnessed in the financial sector over the
decade since publication of the Accord. Innovation in this
sector seems to come in bursts. Consider, for example, the
development of derivatives in the early 1980s and the
growth of option-related instruments in the late 1980s.
And in the late 1990s, innovation in credit risk management appears to be reaching high gear. Indeed, in the
relatively brief period since we announced this conference
last spring, we have seen the launch of credit-modeling
packages by major financial market participants; new uses
for credit derivatives and credit models in the securitization of commercial credit; and, for supervisors, a sure sign
that an innovation has arrived—the first problems relating
to Asian credit derivatives.
Credit risk is without question the most important risk for banks, but not just for banks. I suspect that
when one tallies the losses racked up in the securities,
insurance, asset management, and finance company industries, no small measure of the total losses can be attributed
to credit risk in some form. Therefore, how we adapt our
supervisory approaches and our capital requirements to
credit-risk-related innovation has high stakes both for
financial institutions generally and for the global supervisory community.
Credit risk, however, is not the only important
front on which change has been extraordinarily rapid. The
pace of convergence among the banking, securities, and
insurance industries and their various product offerings is
accelerating. For that reason, we have entitled this conference “Financial Services at the Crossroads” rather than
“Banking at the Crossroads.”
As the number of true financial conglomerates
steadily increases and the risks faced by the different industries within the financial sector become more alike, we in
the supervisory community are increasing our dialogue on
such issues as corporate governance, risk management,
and capital adequacy, especially through organizations
such as the Joint Forum. One result of this dialogue is a
growing recognition of the value of choosing regulatory
approaches that can accommodate a wide range of financial
firms and activities. In addition, we are working to unify
our vocabulary and to reach a shared understanding of key
risk concepts and practices. Certainly, a foundation of
common risk concepts and practices would contribute
significantly to greater transparency within the financial
sector.
These are broad issues. But for this conference to
achieve its full purpose, it must take a broad perspective.
One benefit of an academic-style conference, with a call
for papers and a long lead time for paper preparation, is
the ability to search the horizon for as many creative ideas
as possible.
Given our intention to represent a wide range of
thought on capital regulation, it may surprise you to see
that half of the conference sessions with prepared papers
deal with risk modeling. I conclude from the prevalence of
this topic among the papers submitted to us that the financial community, including the supervisory community, has
moved resolutely and irrevocably to incorporate sophisticated financial techniques into its thinking about capital,
risk management, and financial condition. Nevertheless, as

I believe you will see throughout the program, risk modeling is itself a mansion with many, many rooms, which
we and the financial community have just begun to
explore. Therefore, in searching for approaches to twenty-first-century capital standards, we should not stop at the
very first room. Moreover, the growing industry reliance
on risk modeling itself raises many questions about how
supervisors should make use of information from risk
models and the extent to which we should accept a financial institution’s own assessment of its capital adequacy,
whether assessed through models or other means. Several
papers in the second half of the program will discuss
these issues.
Our hope is that this conference can accelerate
the development of a consensus between the public and
private sectors on an agenda for twenty-first-century capital regulation. My special focus is on the work of the
Basle Committee, of which I am pleased to be a member,
since the Committee has played and continues to play a leadership role in the development of capital standards for
the industry.
I am very aware that the process of developing
supervisory policy at the international level will take considerable time. We need time to educate ourselves about
the impact of our current capital standards and to examine
how those standards are affected by new developments,
especially innovations in credit risk management. We need
time to study the possible responses to such developments
and the full ramifications of the responses. We need time to
choose carefully among the various options available. And
we need time to plan for implementation and transition.
The need for such a long period of preparation suggests
strongly to me that now is the right moment to devote
the better part of two intensive days to a conference on
twenty-first-century capital standards.
Once again, I am delighted to welcome you to the
Federal Reserve Bank of New York. I am confident that you
will find the conference both provocative and productive.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


The Impact of Capital Requirements
on U.K. Bank Behaviour
Tolga Ediz, Ian Michael, and William Perraudin

CAPITAL REQUIREMENTS AND THEIR
POTENTIAL IMPACT ON BANK BEHAVIOUR
The 1988 Basle Accord obliges banks to maintain equity
and quasi-equity funding equal to a risk-weighted proportion of their asset base. Regulators’ intentions in adopting
the Accord were, first, to reinforce financial stability, second,
to establish a level playing field for banks from different
countries, and third, in the case of some countries, to
reduce explicit or implicit costs of government-provided
deposit guarantees. But extensive reliance by banking
supervisors on capital requirements inevitably raises questions about the possibly distortionary impact on bank
behaviour.
The most obvious possible, and undesirable,
impact on bank behaviour of risk-weighted capital requirements is that excessive differentials in the weights applied
to different categories of assets might induce banks to substitute away from highly risk-weighted assets. In the early
1990s, U.S. banks shifted sharply from corporate lending
to investing in government securities, and many commentators and researchers have attributed this shift to the post–Basle Accord system of capital requirements.

Tolga Ediz is an economist and Ian Michael a senior manager in the Regulatory Policy Division of the Bank of England. William Perraudin is a professor of finance at Birkbeck College, University of London, and special advisor to the Regulatory Policy Division of the Bank of England.
While papers such as Hall (1993), Haubrich and
Wachtel (1993), Calem and Rob (1996), and Thakor
(1996) make a persuasive case that capital requirements
played a role in this switch, the conclusion is not entirely
uncontroversial. Hancock and Wilcox (1993), for example,
present evidence that U.S. banks’ own internal capital targets explain the decline in private sector lending better
than do the capital requirements imposed by regulators.
Furthermore, the fact that capital requirements affect bank
behaviour does not of course imply that the impact is
undesirable. Bank supervisors must judge whether the
induced levels of capital are adequate, or not, given the
broad goals of regulation.
A second potential, undesirable impact on banks
of risk-weighted capital requirements of the Basle Accord–
type is that banks may shift within each asset category
toward riskier assets. Imposing equal risk weights on
different private sector loans may make the safer, lower
yielding assets less attractive, leading to substitution
toward higher risk investments. Kim and Santomero (1988)
show formally how a bank that maximises mean-variance
preferences and faces uniform proportional capital requirements may substitute toward higher risk assets.


Theoretical contributions by Keeley and Furlong
(1989, 1990) and Rochet (1992) show that such substitution effects are sensitive to assumptions about banks’ objective functions and to whether or not asset markets are
complete. The extent to which banks are affected by this
kind of distortion therefore remains an empirical question.
Several recent econometric studies have looked for substitution effects attributable to capital requirements using
data on U.S. banks. See, for example, Shrieves and Dahl
(1992), Haubrich and Wachtel (1993), and Jacques and
Nigro (1997).

CAPITAL REQUIREMENTS
IN THE UNITED KINGDOM
All the empirical papers cited above draw on the U.S. experience. U.S. data have many advantages, most notably the
very large number of banks for which data are available and
the detailed information one may obtain on individual
institutions. Nevertheless, it is important to examine the
impact of capital requirement systems operating in other
countries. Although the Basle approach provides a basic
framework of minimum capital standards, regulators in
different countries have supplemented it with a range of
other requirements that deserve empirical investigation.
Furthermore, data from other (that is, non-U.S.) banking
markets may shed interesting light on the effects of capital
requirements simply because they constitute a largely
independent sample. The impact of capital requirements
can only really be studied by looking at cross-sectional
information on banks. Since U.S. banks are inevitably subject to large common shocks, banking industries in other
countries provide a valuable additional source of evidence.
In our paper titled “Bank Capital Requirements
and Regulatory Policy” (1998), we employ confidential
supervisory data for British banks to address some of the
issues outlined above. The panel data set we use comprises
quarterly balance sheet and income data from ninety-four
banks stretching from fourth-quarter 1989 to fourthquarter 1995. The two questions we are primarily interested in are (a) does pressure from supervisors affect bank
capital dynamics when capital ratios approach their regulatory minimum, and (b) by adjusting which items in

16

FRBNY ECONOMIC POLICY REVIEW / OCTOBER 1998

their balance sheets do banks increase their capital ratios
when subject to regulatory pressure?

BANK CAPITAL REGULATION
IN THE UNITED KINGDOM
To understand the interest and implications of our study, it
is important to have a clear idea of the operation of bank
capital regulation in the United Kingdom. While the U.K.
approach is fully consistent with the basic standards laid
down in the Basle Accord, various additional requirements
are placed on banks by U.K. supervisors. First, U.K. supervisors set two capital requirements—a “trigger ratio,”
which is the minimum capital ratio with which a bank
must comply, and a “target” ratio set somewhat above the
trigger ratio. The gap between the target and the trigger
acts as a buffer in that regulatory pressure is initiated when
a bank’s risk asset ratio (RAR) falls below the target. If the
RAR falls below the trigger ratio, supervisors take more
drastic action, and ultimately may revoke a bank’s license.
Another important feature of U.K. practice is that
supervisors specify bank-specific capital requirements.
Banks adjudged to be risky by the supervisors must meet
higher capital requirements than less risky institutions.
“Risky” in this context may reflect supervisors’ evaluation of
the bank’s loan book or possibly their perception that there
exist weaknesses in systems of control or in the competence
of management. For most U.K. banks, capital requirements exceed the Basle minimum of 8 percent. The ability
to vary a bank’s capital requirement administratively provides regulators with a very useful lever with which they
can influence the actions of the bank’s management.
The empirical implications of the system
described above are (a) that one might expect that banks
experiencing or fearing regulatory pressure will seek to
boost their capital ratios when their RARs enter a region
above the regulatory minimum, and (b) that changes in a
bank’s trigger ratio will induce a change in the bank’s capital dynamics. We investigate these hypotheses below.

DATA DESCRIPTION
Before looking at bank capital dynamics statistically, it
is useful to examine our data to understand its basic

structure. In Chart 1, we provide a scatter diagram of
changes over a quarter in banks’ RARs (pooled across
banks and time periods) plotted against the lagged level of
the RAR. Rather than expressing the lagged RAR in its
natural units, we prefer to measure it in terms of deviations
from the trigger ratio divided by the sample standard deviation of the RAR for each individual bank. This approach
makes sense because banks are likely to change their behaviour, boosting their RARs, when they are in danger of hitting their regulatory minimum. The volatility of the RAR
(which varies substantially across different banks) is just as
important, therefore, as the actual distance in percent from
the current RAR to the trigger.
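To make this scaling concrete, the following sketch (in Python, using pandas) computes the variable used on the horizontal axis of the charts. The data layout and column names are illustrative assumptions only; the underlying supervisory data are confidential.

```python
import pandas as pd

# Illustrative bank-quarter panel; column names are assumptions, not the
# authors' actual (confidential) dataset.
panel = pd.DataFrame({
    "bank":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "quarter": [1, 2, 3, 4, 1, 2, 3, 4],
    "rar":     [11.0, 10.4, 11.8, 12.1, 14.0, 15.5, 13.2, 13.9],  # risk asset ratio, percent
    "trigger": [9.0, 9.0, 9.0, 9.0, 10.0, 10.0, 10.0, 10.0],      # supervisory trigger, percent
})
panel = panel.sort_values(["bank", "quarter"])

# Bank-specific standard deviation of the RAR over the sample period.
rar_sd = panel.groupby("bank")["rar"].transform("std")

# Horizontal axis of Charts 1-4: deviation of the RAR from the trigger,
# measured in bank-specific standard deviations.
panel["dist_in_sd"] = (panel["rar"] - panel["trigger"]) / rar_sd

# Vertical axis of Chart 1: quarterly change in the RAR, plotted against
# the lagged scaled distance.
panel["d_rar"] = panel.groupby("bank")["rar"].diff()
panel["lagged_dist"] = panel.groupby("bank")["dist_in_sd"].shift(1)
print(panel[["bank", "quarter", "d_rar", "lagged_dist"]])
```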
To facilitate interpretation of Chart 1, we include
a simple OLS linear regression line of RAR changes on
lagged RAR levels. As one might expect, this line is downward sloping, reflecting the fact that low initial RAR levels induce banks to rebuild their capital ratios. Perhaps the
most interesting feature of the chart, however, is the fact
that a clear nonlinearity is apparent in that deviations from
the regression line for low levels of the RAR are consistently positive. This bears out our hypothesis that there
exists a regime switch in bank capital dynamics in the
region immediately above the trigger level.
The second question that interested us is exactly
how banks go about increasing their capital ratios when
they are low. Banks might, for example, cut back private sector loans, which bear high risk weights, in favour of government securities, which attract low risk weights. Alternatively, they might boost their capital directly by issuing new equity or by cutting dividends. As
we noted in the introduction, the substitution by banks
toward low-risk-weighted assets, which one might term
the credit crunch hypothesis, has been thoroughly discussed in the case of U.S. banks in the early 1990s by a
series of papers.
Chart 2 shows the change in 100-percent-weighted assets as a ratio to total risk-weighted assets
(TRWA) plotted against the lagged level of the RAR. Once
again, the RAR level is expressed as a deviation from the
bank-specific trigger and is scaled by the standard deviation of the RAR appropriate for each bank. The chart indicates that there exists only a slight positive relationship
between changes in 100-percent-weighted assets and
lagged RARs. Furthermore, the nonlinearity clearly evident in Chart 1 appears not to be present. Thus, banks only
slightly reduce their holdings of 100-percent-weighted
assets when their RARs fall close to trigger levels, and the
credit crunch hypothesis appears not to be borne out.
Charts 3 and 4 repeat Chart 1 except for different
capital ratios. Respectively, they show changes in Tier 1
and Tier 2 capital as ratios to total risk-weighted assets
plotted against the lagged level of the RAR. Tier 1 represents narrow capital, mainly consisting of equity and retained earnings. Recall that the Basle Accord specifies that banks have to hold a ratio of Tier 1 capital to risk-weighted assets of at least 4 percent. Tier 2 consists of broad capital less narrow capital and primarily comprises subordinated debt and other equity-like debt instruments. Both the Tier 1 and the Tier 2 scatter plots exhibit strong negative relationships between capital and the distance of the RAR from the trigger ratio.

Chart 1: Change in Risk Asset Ratio, plotted against RAR standard deviations from trigger.
Chart 2: Change in 100-Percent-Weighted Assets/TRWA, plotted against RAR standard deviations from trigger.
Chart 3: Change in Tier 1 Capital/TRWA, plotted against RAR standard deviations from trigger.
Chart 4: Change in Tier 2 Capital/TRWA, plotted against RAR standard deviations from trigger.

REGRESSION ANALYSIS
Although scatter plots provide valuable clues to the
bivariate relationship between capital changes and the
lagged level of capital, a formal regression analysis must be
performed if one wishes to understand the impact on capital changes of regulatory pressure, holding other influences
on capital constant. This is important because when a firm
falls into financial distress, it may seek to adjust its capital
in line with its own internally generated capital targets,
even without intervention by regulators (see the discussion
in Hancock and Wilcox [1993]). We, therefore, formulate a dynamic, multivariate panel regression model in
which changes in capital ratios depend on the lagged level
of the ratio, a range of conditioning variables describing
the nature of the bank’s business and its current financial
health (these proxy for the bank’s internal capital target),
and variables that may be regarded as measuring regulatory
pressure. Formally, our model may be stated as:

$$Y_{n,t+1} - Y_{n,t} = \beta_0 + \sum_{j=1}^{N} \beta_j X_{n,t,j} + \gamma Y_{n,t} + \varepsilon_{n,t},$$

where $E(\varepsilon_{n,t}) = E(X_{n,t,j}\,\varepsilon_{n,t}) = 0$, $t$ indicates the time period, and $X_{n,t,j}$, $j = 1, 2, \ldots, N$, are a set of regressors. The error terms follow an AR(1) process,

$$\varepsilon_{n,t+1} = \rho\,\varepsilon_{n,t} + \nu_{n,t} \qquad \forall n, t,$$

where $E(\nu_{n,t}) = 0$ for all $n, t$, and $E(\nu_{n,t}\,\nu_{m,s}) = 0$ for all $t, s, n, m$ except when $t = s$ and $n = m$. To include random effects, we suppose that for any bank, $E(\nu_{n,t}^2) = \sigma_n^2$.
Our conditioning variables designed to proxy the
bank’s own internal capital target include net interest
income over total risk-weighted assets, fee income over total
risk-weighted assets, bank deposits over total deposits, total
off-balance-sheet exposures over total risk-weighted assets,
provisions over total risk-weighted assets, profits over total
risk-weighted assets, and 100-percent-weighted assets over
total risk-weighted assets. The net interest income, fee
income, and 100-percent-weighted asset variables reflect
the nature and riskiness of the bank’s operations. Bank
deposits and off-balance-sheet exposure variables reflect the
bank’s vulnerability to runs on deposits although they may
also reflect the degree of financial sophistication of the bank
and its consequent ability to economise on capital. Total
profit and loss and provisions variables indicate the bank’s
state of financial health.
We measure regulatory pressure in two ways.
First, we incorporate a dummy variable that equals one if

the bank has experienced an upward adjustment in its trigger ratio in the previous three quarters. Second, we include
a dummy that equals unity if the RAR falls close to the
regulatory minimum. As we argue above, the degree to which a bank is “close” to its trigger depends not just on the absolute percentage difference between the current RAR and
the trigger but also on the volatility of the RAR. Hence,
we calculate the dummy in such a way that it is unity if the
RAR is less than one bank-specific standard deviation above
the bank’s trigger. Thus, our hypothesis is that there exists
a zone above the trigger in which the bank’s capital ratio
choices are constrained by regulatory pressure. In this respect,
our study is comparable to Jacques and Nigro (1997).
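As an illustration, the two regulatory pressure measures could be constructed from such a panel as follows. This is a sketch under assumed column names, and the exact timing convention for "the previous three quarters" is one possible interpretation of the text.

```python
import pandas as pd

def add_regulatory_pressure_dummies(panel: pd.DataFrame) -> pd.DataFrame:
    """Add the two regulatory pressure dummies described in the text.

    Sketch only: expects a bank-quarter panel with columns 'bank',
    'quarter', 'rar', and 'trigger' (illustrative names).
    """
    panel = panel.sort_values(["bank", "quarter"]).copy()
    by_bank = panel.groupby("bank")

    # Dummy 1: the trigger ratio was raised within the previous three
    # quarters (rolling maximum of a "trigger increased" flag).
    raised_now = (by_bank["trigger"].diff() > 0).astype(float)
    panel["trigger_raised"] = (
        raised_now.groupby(panel["bank"])
                  .transform(lambda s: s.rolling(3, min_periods=1).max())
                  .astype(int)
    )

    # Dummy 2: the RAR lies within one bank-specific standard deviation
    # of the trigger, i.e. the bank is "close" to its regulatory minimum.
    rar_sd = by_bank["rar"].transform("std")
    panel["near_trigger"] = ((panel["rar"] - panel["trigger"]) < rar_sd).astype(int)
    return panel
```

These dummies would then enter the estimating equation alongside the conditioning variables.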
The dummy associated with a one-standard-deviation zone above the trigger may be regarded as
introducing a simple regime switch in the model for low
levels of the RAR. To generalise this regime switch, we
also estimate switching regression models in which all the
parameters on the conditioning variables (not just the
intercept) are allowed to change when the RAR is less than
one standard deviation above the trigger. This specification allows for the possibility that all the dynamics of the capital ratio change when the bank is close to its regulatory
minimum level of capital.
In formulating our panel model, we adopt a random rather than a fixed-effects specification. We are not so interested in obtaining estimates conditional on the particular sample available, which is the usual interpretation of the fixed-effects approach (see Hsiao [1986]), and so the random-effects approach seems more appropriate.
Thus, we suppose that the variance of error terms has a
bank-specific component. Furthermore, we suppose that
the residuals are AR(1). The latter assumption seems natural as one might expect shocks to register in bank capital
ratios over more than a single quarter. The fact that error
terms are autocorrelated somewhat complicates estimation since our model contains lagged endogenous variables. To avoid the biases in parameter estimates this
would otherwise induce, we employ the instrumental
variables approach introduced by Hatanaka (1974).
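To show the structure of the estimating equation in code, here is a deliberately simplified sketch: it fits the capital-change equation by pooled ordinary least squares, ignoring the bank-specific random effects, the AR(1) error structure, and the Hatanaka (1974) instrumental variables step used in the actual estimation. Column names are illustrative.

```python
import numpy as np
import pandas as pd

def capital_change_regression(panel: pd.DataFrame, x_cols: list) -> dict:
    """Pooled-OLS sketch of
        Y[n,t+1] - Y[n,t] = b0 + sum_j b_j * X[n,t,j] + g * Y[n,t] + e[n,t],
    with Y the capital ratio (here the column 'rar').

    Simplifications relative to the paper: no random effects, no AR(1)
    errors, and no instrumental variables correction.
    """
    panel = panel.sort_values(["bank", "quarter"]).copy()
    # Dependent variable: next quarter's capital ratio minus this quarter's.
    panel["dy"] = panel.groupby("bank")["rar"].shift(-1) - panel["rar"]
    rows = panel.dropna(subset=["dy"] + list(x_cols))

    X = np.column_stack(
        [np.ones(len(rows))]                                # intercept b0
        + [rows[c].to_numpy(dtype=float) for c in x_cols]   # conditioning variables and dummies
        + [rows["rar"].to_numpy(dtype=float)]               # lagged level Y[n,t]
    )
    y = rows["dy"].to_numpy(dtype=float)
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(["const"] + list(x_cols) + ["lagged_rar"], beta))
```

The regulatory pressure dummies from the previous sketch would be passed in x_cols; the switching regression variant roughly corresponds to estimating the same equation separately for observations below and above the one-standard-deviation zone.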
Table 1 reports regression results for the case in which
the dependent variable is the RAR. Note that estimates in the table are scaled by 100.

Table 1
RAR AND 100-PERCENT-WEIGHTED ASSETS REGRESSION RESULTS

                                        RAR                              100-Percent-Weighted Assets/TRWA
                                             < trig      > trig                   < trig      > trig
                                             + 1 s.d.    + 1 s.d.                 + 1 s.d.    + 1 s.d.
Constant                            0.05     0.08        -0.38         -0.01      -0.11       -0.48
                                   (1.38)   (1.63)      (-0.73)       (-0.28)    (-2.21)     (-3.17)
Change in trigger dummy             0.27     1.46          —           -0.16      -0.58         —
                                   (1.42)   (1.94)                    (-0.90)    (-0.58)
Fee income/net interest income      0.00    -0.01         0.00          0.00      -0.00        0.01
                                   (0.40)  (-0.17)       (0.35)        (0.06)    (-0.15)      (0.70)
Net interest income/TRWA            0.04     4.57        -0.66          1.30      -8.95        1.72
                                   (0.02)   (0.41)      (-0.23)        (0.67)    (-1.71)      (0.83)
Deposits from banks/TRWA           -0.19     0.54        -0.30          0.14      -0.12        0.32
                                  (-1.82)   (1.88)      (-2.47)        (1.47)    (-0.87)      (2.49)
(RAR - trigger) less than 1 s.d.    0.44      —            —           -0.03        —           —
                                   (4.64)                             (-0.39)
Off-balance-sheet assets/TRWA       2.21     2.74         2.68         -1.01      -1.57       -0.43
                                   (1.65)   (0.80)       (1.64)       (-0.90)    (-0.62)     (-0.29)
Profit and loss/TRWA               -3.93    -8.35        -4.45         -1.42      -1.41       -3.58
                                  (-1.13)  (-0.57)      (-1.27)       (-0.49)    (-0.14)     (-1.29)
Total provisions/TRWA               1.29     3.96         0.86         -0.59      -1.08       -0.18
                                   (1.26)   (1.32)       (0.70)       (-0.54)    (-0.27)     (-0.16)
100-percent-weighted assets/TRWA    0.19     0.31         0.05           —          —           —
                                   (1.52)   (1.05)       (0.32)
Lagged dependent variable          -0.44    -2.62         0.77         -0.08      -1.64       -0.06
                                  (-0.81)  (-0.92)       (1.13)       (-1.14)    (-3.03)     (-0.72)

Notes: TRWA and RAR denote total risk-weighted assets and risk asset ratio. Data are for ninety-four banks from fourth-quarter 1989 to fourth-quarter 1995. Estimates are scaled by 100. All regressions employ the Hatanaka (1974) method. t-statistics appear in parentheses.

Our estimates strongly suggest
that capital requirements significantly affect banks’ capital
ratio decisions. The coefficient of the regime dummy is
positive and significant. The point estimate implies that
banks increase their RARs by around 1/2 percent per
quarter when their capital approaches the regulatory minimum. In addition, we find that banks raise their RAR by
1/3 percent per quarter following an increase in their
trigger ratio by the supervisors.
In columns 2 and 3 of Table 1, we report estimates
for a switching regression model in which the coefficients
on all the conditioning variables are allowed to change
depending on whether the RAR is greater than or less than
one standard deviation above the trigger. One might note
that the impact of being near to or far from the trigger
appears to change little between the simpler model and
this generalised switching regression model. In the first
case, the parameter estimate on the dummy for proximity
to the trigger was 1/2 percent, while the difference between
the two intercepts in the switching regression model is also
around 1/2 percent. By contrast, the magnitude of the dummy for recent increases in the trigger is far greater when we relax the specification, rising from 1/3 percent in the simpler model to 1 1/2 percent in the switching regression model.

One should also note that the coefficients on the conditioning variables in the regressions all have plausible signs. For example, higher profits reduce capital ratios while higher provisions or 100-percent-weighted assets increase them. It is also interesting that in the switching regression model, banks with greater reliance on bank deposits tend to increase their capital ratios. Overall, we conclude that capital requirements induce banks to increase their capital ratios even after one allows for internally generated capital targets. This conclusion is in contrast to that of Hancock and Wilcox (1993) in their study of U.S. banks.

Table 2
TIER 1 AND TIER 2 CAPITAL REGRESSION RESULTS

                                      Tier 1 Capital/TRWA                     Tier 2 Capital/TRWA
                                             < trig      > trig                   < trig      > trig
                                             + 1 s.d.    + 1 s.d.                 + 1 s.d.    + 1 s.d.
Constant                            0.08     0.15        -0.88         -0.05      -0.08        0.11
                                   (1.95)   (3.03)      (-2.64)       (-3.40)    (-3.63)      (0.83)
Change in trigger dummy            -0.15     2.61          —            0.06       0.13         —
                                  (-0.69)   (1.97)                     (0.74)     (0.27)
Fee income/net interest income      0.00     0.02         0.00          0.00       0.01        0.00
                                   (0.32)   (0.41)       (0.22)        (0.63)     (0.31)      (0.38)
Net interest income/TRWA            3.15     3.25         7.72         -0.20       0.08       -3.16
                                   (1.49)   (0.37)       (3.89)       (-0.23)     (0.02)     (-3.54)
Deposits from banks/TRWA           -0.15     0.40        -0.19         -0.03      -0.00       -0.03
                                  (-1.52)   (1.77)      (-1.85)       (-0.75)    (-0.01)     (-0.50)
(RAR - trigger) less than 1 s.d.    0.17      —            —            0.15        —           —
                                   (2.54)                              (3.58)
Off-balance-sheet assets/TRWA       2.22    -0.40         3.29          0.38       2.39        0.18
                                   (2.04)  (-0.14)       (2.73)        (1.04)     (2.06)      (0.28)
Profit and loss/TRWA               -2.73    -4.86        -3.99         -1.53      -8.63        0.10
                                  (-0.87)  (-0.38)      (-1.55)       (-1.15)    (-1.53)      (0.08)
Total provisions                   -0.04     3.83        -2.71         -0.22      -1.85        0.85
                                  (-0.04)   (1.49)      (-2.68)       (-0.53)    (-1.14)      (1.89)
100-percent-weighted assets/TRWA    0.16    -0.32         0.35          0.09       0.23       -0.09
                                   (1.44)  (-1.25)       (2.52)        (1.86)     (1.75)     (-1.61)
Lagged dependent variable           0.52    -3.89         1.86         -3.09      -0.78       -2.82
                                   (1.13)  (-1.83)       (4.38)       (-4.90)    (-0.37)     (-3.27)

Notes: TRWA and RAR denote total risk-weighted assets and risk asset ratio. Data are for ninety-four banks from fourth-quarter 1989 to fourth-quarter 1995. Estimates are scaled by 100. All regressions employ the Hatanaka (1974) method. t-statistics appear in parentheses.

The second question we are interested in is exactly how banks achieve changes in their capital ratios if they are subjected to regulatory pressure. The most obvious possibilities are either that they adjust the asset side of their balance sheets, for example, substituting government securities
(which attract low-risk weights in bank capital calculations) for private sector loans (which attract high-risk
weights), or alternatively that they raise extra capital by
issuing securities or by retaining earnings.
The three right-hand columns of Table 1 show
regressions of changes in 100-percent-weighted assets as
a ratio to total risk-weighted assets on the lagged level
of this ratio and on the same conditioning variables as
those included in the RAR regressions. Although the
parameters for the two regulatory intervention dummies
have the right signs, they are insignificant. The magnitudes of the point estimates are fairly small as well. In
general, t-statistics are low, suggesting that the
100-percent-weighted asset ratio does not behave in a
statistically stable way over time and across banks. In
summary, it seems fair to conclude that banks do not
significantly rely on asset substitution away from
high-risk-weighted assets to meet their capital requirements as they approach the regulatory minimum.
Table 2 reports results for regressions similar to
our RAR regressions reported above but using different
capital ratios. Both the Tier 1 and Tier 2 capital ratio
regressions we perform indicate that banks raise their ratios
when they come close to their triggers. The response of
banks to increases in their triggers is much higher for
Tier 1 than for Tier 2 capital, suggesting that the bulk of

the adjustment comes through increases in narrow capital.
The adjustment in capital that occurs when banks are close
to their triggers is more evenly spread across the two categories of capital.

CONCLUSION
In this paper, we summarise some of the results of Ediz,
Michael, and Perraudin (1998) on the impact of bank
capital requirements on the capital ratio choices of U.K.
banks. We use confidential supervisory data including
detailed information about the balance sheet and profit and
loss of all British banks over the period 1989-95.
The conclusions we reach are reassuring in that
capital requirements do seem to affect bank behaviour over
and above the influence of the banks’ own internally generated capital targets. Furthermore, banks appear to achieve
adjustments in their capital ratios primarily by directly
boosting their capital rather than through systematic
substitution away from assets such as corporate loans,
which attract high-risk weights in the calculation of Basle
Accord–style capital requirements.
In short, this interpretation of the U.K. evidence
makes capital requirements appear to be an attractive regulatory instrument since they serve to reinforce the stability of the banking system without apparently distorting
banks’ lending choices.

The views expressed in this article are those of the authors and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


ENDNOTE

The views expressed in this paper are the authors’ and do not necessarily reflect the
views of the Bank of England.

REFERENCES

Calem, P. S., and R. Rob. 1996. “The Impact of Capital-Based Regulation on Bank Risk-Taking: A Dynamic Model.” Board of Governors of the Federal Reserve System, Finance and Economics Discussion Series 96, no. 12 (February): 36.

Dietrich, J. K., and C. James. 1983. “Regulation and the Determination of Bank Capital Changes.” JOURNAL OF FINANCE 38, no. 5: 1651-8.

Ediz, T., I. Michael, and W. R. M. Perraudin. 1998. “Bank Capital Dynamics and Regulatory Policy.” Bank of England, mimeo.

Hall, B. 1993. “How Has the Basle Accord Affected Bank Portfolios?” JOURNAL OF THE JAPANESE AND INTERNATIONAL ECONOMIES 7: 408-40.

Hancock, D., and J. Wilcox. 1993. “Bank Capital and Portfolio Composition.” BANK STRUCTURE AND COMPETITION. Federal Reserve Bank of Chicago.

Hatanaka, M. 1974. “An Efficient Two-Step Estimator for the Dynamic Adjustment Model with Autoregressive Errors.” JOURNAL OF ECONOMETRICS 2: 199-220.

Haubrich, J. G., and P. Wachtel. 1993. “Capital Requirements and Shifts in Commercial Bank Portfolios.” Federal Reserve Bank of Cleveland ECONOMIC REVIEW 29 (third quarter): 2-15.

Hsiao, C. 1986. “Analysis of Panel Data.” New York: Cambridge University Press.

Jacques, K. T., and P. Nigro. 1997. “Risk-Based Capital, Portfolio Risk and Bank Capital: A Simultaneous Equations Approach.” JOURNAL OF ECONOMICS AND BUSINESS: 533-47.

Keeley, M. C., and F. T. Furlong. 1989. “Capital Regulation and Bank Risk-Taking: A Note.” JOURNAL OF BANKING AND FINANCE 13: 883-91.

———. 1990. “A Reexamination of Mean-Variance Analysis of Bank Capital Regulation.” JOURNAL OF BANKING AND FINANCE 14: 69-84.

Kim, D., and A. Santomero. 1988. “Risk in Banking and Capital Regulation.” JOURNAL OF FINANCE 43: 1219-33.

Rochet, J. C. 1992. “Capital Requirements and the Behaviour of Commercial Banks.” EUROPEAN ECONOMIC REVIEW 36: 1137-78.

Shrieves, R. E., and D. Dahl. 1992. “The Relationship Between Risk and Capital in Commercial Banks.” JOURNAL OF BANKING AND FINANCE 16: 439-57.

Thakor, A. V. 1996. “Capital Requirements, Monetary Policy, and Aggregate Bank Lending: Theory and Empirical Evidence.” JOURNAL OF FINANCE 51, no. 1: 279-324.


Assessing the Impact of Prompt
Corrective Action on Bank
Capital and Risk
Raj Aggarwal and Kevin T. Jacques

In December 1991, the U.S. Congress passed the Federal
Deposit Insurance Corporation Improvement Act (FDICIA),
which emphasized the importance of capital ratios in
addressing the problems that led to the large number of
bank and thrift failures in the 1980s. In addressing these
issues, FDICIA contained two key provisions designed to
reduce the cost and frequency of bank failures. First, FDICIA
contained a provision for early closure of institutions that
allowed bank regulators to close failing institutions at a
positive level of capital. Such an early closure policy had
been advocated as a solution to excessive losses to the
deposit insurance fund, as discussed by Kane (1983). The
second key provision of FDICIA, prompt corrective action
(PCA), involved early intervention in problem banks by
bank regulators. While PCA was intended to supplement the
existing supervisory authority of bank regulators, FDICIA
legislated mandatory intervention, rather than regulatory
discretion, in undercapitalized institutions in an effort to
save banks from becoming insolvent.
To date, the PCA provisions of FDICIA appear to
have been a major success in improving the safety and soundness of the U.S. banking system. Failures declined precipitously in the years following the passage of FDICIA, while a casual observation of bank capital ratios and levels suggests that PCA has been successful in getting banks to increase capital. From year-end 1991 through year-end 1993, equity capital held by U.S. commercial banks in the aggregate increased by over $65 billion, an increase of 28.0 percent, while the ratio of equity capital to assets increased from 6.75 percent to 8.01 percent.

Raj Aggarwal is the Edward J. and Louise E. Mellen Chair and Professor of Finance in the Department of Economics and Finance at John Carroll University. Kevin T. Jacques is a senior financial economist at the Office of the Comptroller of the Currency.
While the adoption and implementation of PCA
have focused attention on bank capital ratios, two issues
merit further attention. First, did PCA cause banks to
increase their capital ratios, or is the increase attributable
to some other factor such as bank income levels in the
early 1990s? Second, a number of theoretical and empirical
studies suggest that increasingly stringent regulatory
capital standards in general, and PCA in particular, may
have the unintended effect of causing banks to increase
their level of portfolio risk.
This paper examines the impact that the PCA
standards had on bank portfolios following the passage of
FDICIA in 1991. To do this, the simultaneous equations
model developed by Shrieves and Dahl (1992), and later
modified by Jacques and Nigro (1997) to study the impact
of risk-based capital, is used to examine how PCA simultaneously influenced bank capital ratios and portfolio risk
levels. Unlike prior studies on this topic, this approach uses a simultaneous equations model, so that the endogeneity of both capital and portfolio risk is explicitly recognized and the impact of possible changes in bank capital ratios on risk in a bank’s portfolio can be examined.

THE PROMPT CORRECTIVE ACTION STANDARDS
In December 1991, the U.S. Congress passed FDICIA,
with the PCA provisions becoming effective in December
1992. Specifically, Section 131 of FDICIA defined for
banks five capital thresholds used to determine what supervisory actions would be taken by bank regulators, with
increasingly severe restrictions being applied to banks as
their capital ratios declined. As shown in Table 1, banks are
classified into one of five capital categories depending on
how well they meet capital thresholds based on their total
risk-based capital ratio, Tier 1 risk-based capital ratio, and
Tier 1 leverage ratio.1 For example, in order to be classified as
well capitalized, a bank must have a total risk-based capital
ratio greater than or equal to 10 percent, a Tier 1 risk-based
capital ratio greater than or equal to 6 percent, and a Tier 1
leverage ratio greater than 5 percent, while adequately capitalized institutions have minimum thresholds of 8 percent,

Table 1
CAPITAL THRESHOLDS AND BANK
PROMPT CORRECTIVE ACTION
Capital Threshold
Well capitalized
Adequately capitalized
Undercapitalized
Significantly undercapitalized
Critically undercapitalized

CLASSIFICATION UNDER

Total RiskTier 1 RiskTier 1
Based Capital Based Ratio Leverage Ratio
≥10%
≥6%
≥5%
≥8%
≥4%
≥4%
<8%
<4%
<4%
<6%
<3%
<3%
Tangible equity ≤ 2%

NUMBER OF BANKS AND PERCENTAGE
OF TOTAL B ANK A SSETS BY PCA Z ONE
PCA Zone
Well capitalized
Adequately capitalized
Undercapitalized
Significantly undercapitalized
Critically undercapitalized

1991
10,725
43.30
807
45.82
221
10.17
71
0.39
96
0.32

1992
10,989
87.51
335
11.72
67
0.29
33
0.17
42
0.32

1993
10,752
96.24
171
3.51
22
0.11
16
0.12
10
0.03

Source: Data are from the Office of the Comptroller of the Currency.

24

FRBNY ECONOMIC POLICY REVIEW / OCTOBER 1998

4 percent, and 4 percent, respectively. If a bank falls into
one of the three undercapitalized categories, mandatory
restrictions are placed on its activities that become increasingly severe as the bank’s capital ratios deteriorate. For
example, undercapitalized banks are subject to restrictions
that include the need to submit and implement a capital
restoration plan, limits on asset growth, and restrictions on
new lines of business, while significantly undercapitalized
banks face all of the restrictions imposed on undercapitalized banks, as well as restrictions on interest rates paid on
deposits, limits on transactions with affiliates and affiliated
banks, and others. Finally, once a bank’s tangible equity
ratio falls to 2 percent or less, the bank is considered to
be critically undercapitalized and faces not only more
stringent restrictions on activities, but also the appointment
of a conservator (receiver) within ninety days of becoming critically undercapitalized.2
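The classification rule summarized in Table 1 can be written compactly as a small decision function. The sketch below is ours, not part of the original study: the function name, the use of Python, and the handling of boundary cases are assumptions; the thresholds themselves follow Table 1.

def pca_zone(total_rbc, tier1_rbc, tier1_lev, tangible_equity):
    # Illustrative sketch: ratios are in percent. A bank is assigned the highest
    # zone whose thresholds it meets, except that tangible equity of 2 percent or
    # less makes it critically undercapitalized regardless of the other ratios.
    if tangible_equity <= 2.0:
        return "critically undercapitalized"
    if total_rbc >= 10.0 and tier1_rbc >= 6.0 and tier1_lev >= 5.0:
        return "well capitalized"
    if total_rbc >= 8.0 and tier1_rbc >= 4.0 and tier1_lev >= 4.0:
        return "adequately capitalized"
    if total_rbc >= 6.0 and tier1_rbc >= 3.0 and tier1_lev >= 3.0:
        return "undercapitalized"
    return "significantly undercapitalized"

# Example: a bank with a 9.5 percent total risk-based ratio, a 5.0 percent Tier 1
# risk-based ratio, a 4.5 percent leverage ratio, and 4.5 percent tangible equity
# is adequately capitalized.
print(pca_zone(9.5, 5.0, 4.5, 4.5))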
Table 1 also shows the breakdown of insured commercial banks by PCA zone over the period 1991-93. For
example, at year-end 1991, the time when FDICIA was
passed, 10,725 banks, accounting for only 43.3 percent of
the total assets in the U.S. banking system, were classified
as well capitalized. In contrast, 221, 71, and 96 banks were
classified as undercapitalized, significantly undercapitalized, or critically undercapitalized, respectively. In
total, 388 banks with 10.88 percent of all bank assets were
undercapitalized to some degree at the end of 1991 and
therefore faced at least some degree of regulatory sanction
if their capital ratios did not improve by the time PCA
went into effect.
By year-end 1992, the period after PCA provisions
were announced but before they went into effect, the
results in Table 1 show that well-capitalized banks numbered 10,989, accounting for over 87 percent of all bank
assets, while all types of undercapitalized banks fell to only
142, thus accounting for less than 1 percent of total bank
assets. A similar but less dramatic shift is seen in 1993, the
first year the PCA regulations were in effect. By year-end
1993, 96.24 percent of banking assets were in banks classified as well capitalized, while only forty-eight banks were
classified in the three undercapitalized zones, and those
banks accounted for less than 0.25 percent of all banking

assets. These findings suggest that PCA had a significant
announcement effect on bank capital ratios during 1992, as
well as a significant implementation effect on capital ratios
once the standards were implemented.
While PCA appears to have been effective in getting banks to increase their capital ratios, it has not been
without its critics.3 One criticism that has been levied
against regulatory capital standards in general is that they
may lead to increasing levels of bank portfolio risk.
Research by Kahane (1977), Koehn and Santomero (1980),
and Kim and Santomero (1988) has shown, using the
mean-variance framework, that regulatory capital standards
cause leverage and risk to become substitutes and that as
regulators require banks to meet more stringent capital
standards, banks respond by choosing assets with greater
risk.4 Thus, increases in minimum capital standards by bank regulators not only cause banks to increase their capital ratios, but may also have the unintended effect of causing them to increase their level of risk.
While one of the primary purposes of early closure
is to prevent banks from taking increasing levels of risk as
they approach insolvency, recent research by Levonian
(1991) and Davies and McManus (1991) demonstrates that
early closure may fail to protect the deposit insurance fund
from losses because it creates incentives for banks to
increase portfolio risk by increasing their holdings of high-risk assets. As such, the design of the PCA standards has
important implications not only for capital levels, but also
for the level of risk, and ultimately, the safety and soundness of the banking system.

MODEL SPECIFICATION
To examine the possible impact of the PCA standards on
bank capital ratios and portfolio risk levels, the simultaneous equation model developed by Shrieves and Dahl
(1992) is modified to incorporate the PCA zones. In their
model, observed changes in bank capital ratios and portfolio risk levels are decomposed into two components, a discretionary adjustment and a change caused by an
exogenously determined random shock such that:
(1)  ∆CAPj,t = ∆CAP^d j,t + Ej,t ;

(2)  ∆RISKj,t = ∆RISK^d j,t + Uj,t ,

where ∆CAPj,t and ∆RISKj,t are the observed changes in capital ratios and risk levels for bank j in period t, ∆CAP^d j,t and ∆RISK^d j,t represent the discretionary adjustments in capital ratios and risk levels, and Ej,t and Uj,t are exogenous shocks. Recognizing that banks may not be able to adjust to their desired capital ratios and risk levels instantaneously, the discretionary changes in capital and risk are modeled using the partial adjustment framework. As a result:

(3)  ∆CAPj,t = α (CAP*j,t – CAPj,t-1) + Ej,t ;

(4)  ∆RISKj,t = β (RISK*j,t – RISKj,t-1) + Uj,t .
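As a purely illustrative calculation (the numbers here are ours, not estimates from the paper): if a bank's target capital ratio CAP*j,t is 8 percent, its lagged ratio CAPj,t-1 is 6 percent, and α = 0.5, equation 3 implies a discretionary adjustment of 0.5 × (8 – 6) = 1 percentage point in period t, with the remaining gap closed gradually in later periods; a smaller α implies slower adjustment toward the target.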

Thus, the observed changes in bank capital ratios and portfolio risk levels in period t are a function of the target capital ratio CAP*j,t and target risk level RISK*j,t, the lagged capital ratio CAPj,t-1 and risk level RISKj,t-1, and any random shocks. The target capital ratio and risk level are not observable, but are assumed to depend upon some set of observable variables including the size of the bank (SIZE), multibank holding company status (BHC), a bank's income (INC), changes in portfolio risk (∆RISKj,t) and capital ratios (∆CAPj,t), while the exogenous shock that could affect bank capital ratios or risk levels is the regulatory pressure brought about by PCA.
Specifically, SIZE is measured as the natural log of
total assets and BHC is a dummy variable equal to 1 if a
bank is affiliated with a multibank holding company. As
Shrieves and Dahl (1992) note, size may have an impact on
a bank’s capital ratios and level of portfolio risk because
larger banks have greater access to capital markets. For
banks belonging to multibank holding companies, both
capital and portfolio risk may be managed at the holding
company level, thus resulting in these banks having lower
target capital ratios and higher target portfolio risk levels
than independent banks. Following Jacques and Nigro
(1997), the ratio of net income to total assets, INC, is
included to recognize the ability of profitable banks to
increase their capital ratios by using retained earnings. In
addition, as implied by the use of the partial adjustment model, lagged capital ratios and risk levels are included to reflect the fact that banks adjust their capital ratios and risk levels to their target levels over time.
To recognize the possible simultaneous relationship between capital and risk, ∆CAP j, t and ∆RISK j, t are
included in the risk and capital equations, respectively.
Shrieves and Dahl (1992) note that a positive relationship
between changes in capital and risk may signify, among
other possibilities, the unintended impact of minimum
regulatory capital requirements, while Jacques and Nigro
(1997) note that a negative relationship may result because
of methodological flaws in the capital standards underlying
PCA.5 Empirical estimation of the simultaneous equations
model requires measures of both bank capital ratios and portfolio risk. Following previous research, portfolio risk was measured in two ways, using both total risk-weighted assets as a percentage of total assets (RWARAT) and nonperforming loans as a percentage of total assets (NONP).6 Avery and Berger (1991) have shown that RWARAT correlates with risky behavior, while other studies, such as those by Berger (1995) and Shrieves and Dahl (1992), use nonperforming loans. With respect to capital, the leverage ratio is used because Baer and McElravey (1992) find it was more binding than the risk-based capital standards during the period under study.
Of particular interest in this study are the regulatory pressure variables. Consistent with Shrieves and Dahl (1992), this study uses dummy variables to signify the degree of regulatory pressure that a bank is under. Specifically, the PCA dummies are:
PCAA = 1 if the bank is adequately capitalized; else = 0.
PCAU = 1 if the bank is undercapitalized, significantly undercapitalized, or critically undercapitalized (hereafter referred to as undercapitalized); else = 0.
These variables allow banks across different PCA zones to respond differently, both in capital ratios and in portfolio risk. A priori, banks in the undercapitalized group, PCAU, would be expected to have the strongest response because PCA imposes penalties on their activities. Furthermore, adequately capitalized banks, PCAA, may increase their capital ratios or reduce their portfolio risk if they perceive a significant penalty for not being considered well capitalized, or if they desire to hold a buffer stock of capital as a cushion against shocks to equity, as argued by Wall and Peterson (1987, 1995) and Furlong (1992). Besides being included as a separate variable, PCA is included in an interaction term with the lagged capital ratios. The use of this term allows banks in different PCA zones to have different speeds of adjustment to their target capital ratios. As such, banks in the undercapitalized PCA zones would be expected to adjust their capital ratios at faster rates than better capitalized banks.
Given these variables, equations 3 and 4 can be written:

(5)  ∆CAPj,t = δ0 + δ1 SIZEj,t + δ2 BHCj,t + δ3 INCj,t + δ4 ∆RISKj,t + δ5 PCAA + δ6 PCAU
              – δ7 CAPj,t-1 – δ8 (PCAA × CAPj,t-1) – δ9 (PCAU × CAPj,t-1) + µj,t ;

(6)  ∆RISKj,t = λ0 + λ1 SIZEj,t + λ2 BHCj,t + λ3 ∆CAPj,t + λ4 RISKj,t-1
              + λ5 PCAA + λ6 PCAU + ωj,t ,

where µj,t and ωj,t are error terms, and PCAA × CAPj,t-1 and PCAU × CAPj,t-1 are interaction terms, which allow a bank's speed of adjustment to be influenced by the PCA zone the bank is in.

EMPIRICAL ESTIMATION
As noted earlier, the FDICIA was passed in December
1991, with the PCA thresholds becoming effective in
December 1992. This study covers the period after passage
but before implementation (1992), and the first year the
PCA standards were in effect (1993). In addition, because
all of the capital ratios used in PCA are available beginning
at the end of 1990, 1991 is used as a control period. As
noted earlier, a significant decline in the number of all
types of undercapitalized institutions occurred during the
year after FDICIA was passed. This result is not surprising
because restrictions would be placed on the activities of
these banks beginning in December 1992. Alternatively, in
studying the impact of the risk-based capital standards,
Haubrich and Wachtel (1993) note that because the composition of bank portfolios can be changed quickly, and because banks appear to have experienced a period of learning, the impact appears more clearly after the implementation
date. The same argument may be true for PCA, although
learning by banks may be less significant with regard to PCA
because all of the capital ratios defined in the PCA standards
had been in effect since at least December 1990.7

RESULTS
This study examines 2,552 FDIC-insured commercial banks with assets of $100 million or more using year-end call report data from 1990 through 1993.8 The model is estimated using the two-stage least squares procedure, which recognizes the endogeneity of both bank capital ratios and risk levels in a simultaneous equation framework and, unlike ordinary least squares, provides consistent parameter estimates.
The results of estimating the simultaneous system of equations 5 and 6 are presented in Tables 2 and 3. Table 2 uses the ratio of risk-weighted assets to total assets (RWARAT) to measure portfolio risk, while Table 3 measures risk using nonperforming loans as a percentage of total assets (NONP). All of the variables included to explain variations in capital ratios and risk levels are statistically significant in at least some of the equations. Bank size (SIZE) had a negative and significant impact on capital ratios in two equations, while multibank holding company status (BHC) was consistently negative and significant in the capital equations. Income (INC) had a positive and significant impact on capital ratios in all equations, suggesting that one reason for increasing capital ratios by banks over the period studied was the increase in their income levels. The parameter estimates on lagged risk (RISKj,t-1) in the risk equations range from 5.3 percent to 24.7 percent, while the parameter estimates on lagged capital (CAPj,t-1) in the capital equations range from 6.2 percent to 8.9 percent. These results imply that banks adjusted their capital ratios and risk positions very slowly over this period to their target levels. Finally, Tables 2 and 3 show mixed results in assessing the relationship between changes in capital ratios and changes in risk. When portfolio risk was measured using NONP, the changes in capital ratios and risk were negatively correlated, but when portfolio risk was measured using RWARAT, the parameter estimates were positive. Thus, the relationship between changes in capital ratios and changes in risk during this period is not unambiguous. The goal of this study is to clarify this relationship by examining the possible simultaneous impact of the PCA standards on both bank capital ratios and risk levels.

Table 2
TWO-STAGE LEAST SQUARES ESTIMATES OF PROMPT CORRECTIVE ACTION ON RISK (RWARAT) AND CAPITAL

Variable           1991 ∆CAP         1991 ∆RISK        1992 ∆CAP         1992 ∆RISK        1993 ∆CAP         1993 ∆RISK
INTERCEPT          0.005* (7.57)     0.021* (2.89)     0.005* (6.77)     0.029* (6.33)     0.007* (8.46)     0.032* (7.62)
SIZE              -0.000 (-1.27)     0.001** (1.71)    0.000 (0.66)     -0.000 (-0.92)     0.000 (1.09)     -0.000* (-1.97)
BHC               -0.001* (-3.83)    0.015* (5.33)    -0.002* (-5.64)    0.004* (2.48)    -0.003* (-7.51)    0.008* (4.64)
INC                0.387* (20.32)    —                 0.551* (26.47)    —                 0.409* (14.71)    —
CAPt-1            -0.070* (-9.47)    —                -0.089* (-11.39)   —                -0.062* (-7.31)    —
RISKt-1            —                -0.144* (-13.11)   —                -0.069* (-8.99)    —                -0.053* (-7.74)
∆CAP               —                 1.351* (8.14)     —                 0.284* (2.74)     —                 0.552* (3.20)
∆RISK              0.017* (5.47)     —                 0.014* (4.24)     —                 0.042* (2.61)     —
PCAA               0.009* (2.98)     0.037* (9.25)     0.022* (6.10)    -0.015* (-4.32)    0.027* (3.72)    -0.024* (-4.78)
PCAU               0.023* (8.05)     0.037* (5.01)     0.039* (9.70)    -0.016* (-2.40)    0.024* (3.17)    -0.037* (-3.97)
PCAA × CAPt-1     -0.135* (-2.92)    —                -0.301* (-4.91)    —                -0.389* (-3.19)    —
PCAU × CAPt-1     -0.319* (-5.17)    —                -0.627* (-6.75)    —                -0.129 (-0.68)     —
R2                  .218              .123              .271              .063              .146              .060

Note: t-statistics appear in parentheses.
* Significant at the 5 percent level.
** Significant at the 10 percent level.
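As a rough illustration of the two-stage least squares procedure described above, the sketch below estimates the capital equation of a system like equations 5 and 6 on synthetic data. It is a minimal sketch, not the authors' code: the variable names, the simulated data, and the use of Python with numpy are all assumptions, and standard errors are omitted.

import numpy as np

rng = np.random.default_rng(0)
n = 2552  # matches the number of banks in the sample, purely for scale

# Synthetic placeholders for the exogenous variables of the system.
size = rng.normal(size=n)
bhc = rng.integers(0, 2, n)
inc = rng.normal(size=n)
pcaa = rng.integers(0, 2, n)
pcau = rng.integers(0, 2, n)
cap_lag = rng.normal(size=n)
risk_lag = rng.normal(size=n)
d_risk = 0.3 * risk_lag + rng.normal(size=n)                  # endogenous change in risk
d_cap = 0.02 * d_risk - 0.07 * cap_lag + rng.normal(size=n)   # endogenous change in capital

def ols(y, X):
    # Least-squares coefficients via numpy; no intercept is added automatically.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

const = np.ones(n)
# Instrument set: all exogenous variables appearing anywhere in the system.
Z = np.column_stack([const, size, bhc, inc, cap_lag, risk_lag, pcaa, pcau,
                     pcaa * cap_lag, pcau * cap_lag])

# Stage 1: regress the endogenous regressor of the capital equation on the instruments.
d_risk_hat = Z @ ols(d_risk, Z)

# Stage 2: capital equation with the fitted value replacing the actual change in risk.
X_cap = np.column_stack([const, size, bhc, inc, d_risk_hat, pcaa, pcau,
                         cap_lag, pcaa * cap_lag, pcau * cap_lag])
print("capital-equation coefficients:", np.round(ols(d_cap, X_cap), 3))

The risk equation would be handled symmetrically, with the fitted change in capital replacing the actual change in capital.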

IMPACT OF PCA ON CAPITAL
In examining the impact of PCA, the results in Tables 2
and 3 provide some rather interesting insights. In the capital equations of each table, the impact of the regulatory pressure variables is captured both by an intercept term (PCAA or PCAU) and a speed of adjustment term (PCAA × CAPt-1 or PCAU × CAPt-1). For adequately capitalized banks (PCAA), regulatory pressure had a positive impact on capital ratios in both 1992 and 1993, with the parameter estimate in most cases being at least 100 percent larger in 1992 and 1993 than in 1991. Furthermore, the speed of adjustment terms for adequately capitalized banks are statistically significant, being in most cases two to four times greater in 1992 and 1993 than in 1991. Taken together, these results suggest that in both 1992 and 1993, banks classified as being adequately capitalized increased their capital ratios and the speed with which they adjusted their capital ratios in response to PCA. Furthermore, this result is consistent with the hypothesis that banks held capital above the regulatory minimum as a buffer against shocks that could cause their capital ratios to fall below the adequately capitalized thresholds.

Table 3
TWO-STAGE LEAST SQUARES ESTIMATES OF PROMPT CORRECTIVE ACTION ON RISK (NONP) AND CAPITAL

Variable           1991 ∆CAP         1991 ∆RISK        1992 ∆CAP         1992 ∆RISK        1993 ∆CAP         1993 ∆RISK
INTERCEPT          0.004* (5.42)     0.001* (5.71)     0.004* (5.38)     0.001* (4.43)     0.004* (3.90)     0.000** (1.71)
SIZE              -0.000** (-1.67)   0.000 (0.38)     -0.000** (-1.71)  -0.000* (-4.07)   -0.000 (-1.07)    -0.000 (-0.23)
BHC               -0.001* (-4.03)   -0.001* (-2.55)   -0.002* (-6.35)   -0.001* (-3.11)   -0.003* (-6.56)   -0.000 (-1.07)
INC                0.436* (20.14)    —                 0.578* (25.70)    —                 0.594* (14.98)    —
CAPt-1            -0.078* (-10.14)   —                -0.089* (-10.66)   —                -0.086* (-8.62)    —
RISKt-1            —                -0.247* (-18.31)   —                -0.171* (-11.78)   —                -0.228* (-17.62)
∆CAP               —                -0.011 (-0.61)     —                -0.058* (-3.52)    —                 0.076* (3.00)
∆RISK             -0.295* (-5.43)    —                -0.476* (-9.43)    —                -0.957* (-6.43)    —
PCAA               0.011* (3.55)     0.000 (1.11)      0.015* (3.67)     0.003* (4.84)     0.036* (4.14)    -0.000 (-0.20)
PCAU               0.021* (7.19)    -0.000 (-0.37)     0.028* (6.35)     0.000 (0.35)      0.034* (3.67)    -0.006* (-4.57)
PCAA × CAPt-1     -0.165* (-3.42)    —                -0.166* (-2.47)    —                -0.599* (-4.05)    —
PCAU × CAPt-1     -0.302* (-4.66)    —                -0.414* (-4.06)    —                -0.601* (-2.52)    —
R2                  .194              .134              .261              .078              .119              .144

Note: t-statistics appear in parentheses.
* Significant at the 5 percent level.
** Significant at the 10 percent level.

The same results appear to hold true for undercapitalized banks (PCAU), although the timing and magnitude of the changes appear somewhat different. The
parameter estimates on PCAU are significantly different
from zero in both 1992 and 1993, and in all cases, they are
larger than during the control period. In addition, the
speed of adjustment estimates are generally significant and of
greater magnitude than during the control period, thereby
suggesting that undercapitalized banks adjusted their capital
ratios at much faster rates than their well-capitalized
counterparts. Examining the results in Table 2, the
parameter estimates on PCAU and PCAU × CAPt-1 for
1992 are almost twice as large as the estimates for the
control period, while the 1993 estimates are similar in
magnitude or not significant. These results are not surprising because banks that were classified in one of the three
undercapitalized zones at the end of 1991 faced regulatory
sanctions if they did not significantly increase their capital
ratios by the time the PCA standards went into effect in
December 1992.
It is also interesting to compare the parameter
estimates on PCAU and PCAA in the capital equations. In
general, the estimates on PCAU and PCAU × CAPt-1
are larger than similar estimates for adequately capitalized
banks in 1992, but not in 1993. This result is also not
surprising because undercapitalized banks faced severe
restrictions on their activities once PCA went into effect,
while adequately capitalized banks did not.

IMPACT OF PCA ON RISK
With respect to portfolio risk, the results in Tables 2 and 3
provide some evidence that the regulatory pressure
brought about by PCA led both adequately capitalized and
undercapitalized banks to decrease their level of portfolio
risk. While the results with respect to risk in Table 3 are
generally insignificant, when portfolio risk is measured
using RWARAT (Table 2), the results suggest that adequately capitalized banks (PCAA) significantly decreased
their portfolio risk in both 1992 and 1993, with the
parameter estimate for 1993 being 60 percent larger than
the estimate for 1992. In a similar manner, the parameter
estimates for undercapitalized banks (PCAU) in Table 2 are

negative and significant in both 1992 and 1993, with the
parameter estimate for 1993 being more than twice as
large as the 1992 estimate. This is in sharp contrast to the
results for 1991, where the parameter estimates for both
adequately capitalized and undercapitalized banks are positive and significant, thus suggesting that these banks were
increasing portfolio risk in the period before FDICIA was
passed. For 1992 and 1993, the reduction in risk is not
surprising because while PCA was announced in December
1991, sanctions and restrictions on banks became effective
at the end of 1992. Therefore, if banks viewed the sanctions associated with PCA as being costly, they had a
greater incentive once PCA became effective to reduce their
portfolio risk level, and thereby reduce the probability of
falling below the capital thresholds due to shocks to equity
or income.
Finally, the 1992 parameter estimate on PCAU in
Table 2 is almost identical to that on PCAA, a result that
suggests that while both types of banks responded to the
announcement of PCA by reducing risk, the reduction in
risk by undercapitalized banks was not significantly different from that of adequately capitalized institutions. Given
the results of the capital equations in Table 2 that undercapitalized banks had larger adjustments to their capital
ratios in 1992 than in 1993, and recognizing that undercapitalized banks may be able to adjust their risk levels
faster than they can adjust their capital ratios, it is possible
that undercapitalized banks emphasized increasing capital
rather than reducing risk in 1992. However, in 1993, the
parameter estimate on PCAU in the risk equation of Table 2
is over 50 percent greater than the parameter estimate on
PCAA. This provides some evidence that undercapitalized
banks may have felt even greater pressure than adequately
capitalized banks to reduce their level of portfolio risk once
the PCA standards became effective.

CONCLUSION
The purpose of this paper has been to investigate the
impact of the PCA standards on bank capital ratios and
portfolio risk levels. The results suggest that during both
1992 and 1993, adequately capitalized and undercapitalized banks increased their capital ratios and the rate at
which they adjusted their capital ratios in response to the
PCA standards. In addition, this study finds some evidence
that the PCA standards led to significant reductions in
portfolio risk, particularly in 1993, the year after PCA
took effect. While these results do not guarantee that bank capital levels are adequate relative to the risk in
bank portfolios, they do suggest that PCA has been effective in getting banks to simultaneously increase their
capital ratios and reduce their level of portfolio risk.

The views expressed in this article are those of the authors and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


ENDNOTES

1. In addition, FDICIA authorizes bank regulators to reclassify a bank
at a lower capital category if, in the opinion of the bank regulators, the
bank is operating in an unsafe or unsound manner.
2. The tangible equity ratio equals the total of Tier 1 capital plus
cumulative preferred stock and related surplus less intangibles except
qualifying purchased-mortgage-servicing rights divided by the total of
bank assets less intangible assets except qualifying purchased-mortgage-servicing rights.
3. For example, see Peek and Rosengren (1996, 1997).
4. The mean-variance framework has been criticized by some because it
fails to incorporate the effects of deposit insurance. See Furlong and
Keeley (1989) and Keeley and Furlong (1990).
5. Shrieves and Dahl (1992) note that a positive relationship between
changes in capital ratios and portfolio risk may also occur because of
regulatory costs, bankruptcy cost avoidance, and managerial risk
aversion.


6. Because loans made in a given year will not be recognized as
nonperforming until a future period, we follow Shrieves and Dahl (1992)
and use nonperforming loans in the following year. Thus, the NONP
variable is the ratio of nonperforming loans to total assets from year-end
1992 through 1994.
7. Finally, a word of caution is necessary because this analysis may be
complicated by other factors present during this time period, such as the
end of the interim period for implementation of the risk-based capital
standards and other provisions of FDICIA, all of which make it difficult
to isolate and definitively assess the impact of the PCA provisions.
Nevertheless, with the simultaneous assessment of changes in bank
capital, portfolio risk, and the regulatory environment, this study is a
significant improvement over our prior understanding of the impact of
FDICIA, in general, and PCA, in particular.
8. As noted in endnote 6, because of the nature of nonperforming loans,
NONP was calculated using year-end data from 1992 through 1994.


REFERENCES

Avery, Robert B., and Allen N. Berger. 1991. "Risk-Based Capital and Deposit Insurance Reform." Journal of Banking and Finance 15: 847-74.

Baer, Herbert L., and John N. McElravey. 1992. "Capital Adequacy and the Growth of U.S. Banks." Federal Reserve Bank of Chicago Working Paper WP-92-11.

Berger, Allen N. 1995. "The Relationship Between Capital and Earnings in Banking." Journal of Money, Credit, and Banking 27: 432-56.

Davies, Sally M., and Douglas A. McManus. 1991. "The Effects of Closure Policies on Bank Risk-Taking." Journal of Banking and Finance 15: 917-38.

Furlong, Frederick T. 1992. "Capital Regulation and Bank Lending." Federal Reserve Bank of San Francisco Economic Review 3: 23-33.

Furlong, Frederick T., and Michael C. Keeley. 1989. "Capital Regulation and Bank Risk-Taking: A Note." Journal of Banking and Finance 13: 883-91.

Haubrich, Joseph G., and Paul Wachtel. 1993. "Capital Requirements and Shifts in Commercial Bank Portfolios." Federal Reserve Bank of Cleveland Economic Review 3: 2-15.

Jacques, Kevin, and Peter Nigro. 1997. "Risk-Based Capital, Portfolio Risk, and Bank Capital: A Simultaneous Equations Approach." Journal of Economics and Business 49: 533-47.

Kahane, Yehuda. 1977. "Capital Adequacy and the Regulation of Financial Intermediaries." Journal of Banking and Finance 1: 207-17.

Kane, Edward. 1983. "A Six-Point Program for Deposit-Insurance Reform." Housing Finance Review 2: 269-78.

Keeley, Michael C., and Frederick T. Furlong. 1990. "A Reexamination of Mean-Variance Analysis of Bank Capital Regulation." Journal of Banking and Finance 14: 69-84.

Kim, Daesik, and Anthony M. Santomero. 1988. "Risk in Banking and Capital Regulation." Journal of Finance 43: 1219-33.

Koehn, Michael, and Anthony M. Santomero. 1980. "Regulation of Bank Capital and Portfolio Risk." Journal of Finance 35: 1235-50.

Levonian, Mark E. 1991. "What Happens If Banks Are Closed 'Early'?" In Rebuilding Banking: Proceedings of the 27th Conference on Bank Structure and Competition, 273-95. Federal Reserve Bank of Chicago.

Peek, Joe, and Eric S. Rosengren. 1996. "The Use of Capital Ratios to Trigger Intervention in Problem Banks: Too Little, Too Late." Federal Reserve Bank of Boston New England Economic Review, September-October: 49-58.

———. 1997. "Will Legislated Early Intervention Prevent the Next Banking Crisis?" Southern Economic Review 64: 268-80.

Shrieves, Ronald E., and Drew Dahl. 1992. "The Relationship Between Risk and Capital in Commercial Banks." Journal of Banking and Finance 16: 439-57.

Wall, Larry D., and David R. Peterson. 1987. "The Effect of Capital Adequacy Guidelines on Large Bank Holding Companies." Journal of Banking and Finance 11: 581-600.

———. 1995. "Bank Holding Company Capital Targets in the Early 1990s: The Regulators Versus the Market." Journal of Banking and Finance 19: 563-74.


Fair Value Accounting and Regulatory
Capital Requirements
Tatsuya Yonetani and Yuko Katsuo

1. INTRODUCTION
Advocates of fair value accounting believe that fair values
provide more relevant measures of assets, liabilities, and
earnings than do historical costs. These advocates assert
that fair value accounting better reflects underlying economic values. The advantages of this method—and the
corresponding weaknesses of historical cost accounting—
are described in more detail in “Accounting for Financial
Assets and Financial Liabilities,” a discussion paper published by the International Accounting Standards Committee (IASC) in March 1997. The IASC requires that all
assets and liabilities be recognized at fair value. Under fair
value accounting, changes in fair values (that is, unrealized
holding gains and losses) are recognized in current earnings. In contrast, under historical cost accounting, changes
in fair values are not recognized until realized.
Even though the fair value accounting debate
relates to all entities and all assets and liabilities, the focus
has been on banks’ securities. In the United States, the
Financial Accounting Standards Board (FASB) issued

Tatsuya Yonetani is a senior economist at the Bank of Japan’s Institute for
Monetary and Economic Studies. Yuko Katsuo is a Ph.D. candidate at the
University of Tokyo.

Statement of Financial Accounting Standards No. 115,
“Accounting for Certain Investments in Debt and Equity
Securities,” in May 1993. The FASB intended this standard
to encourage banks to recognize at fair value more investment securities than before. In Japan, fair value accounting
was introduced for the trading accounts of banks’ securities
in April 1997, but investment accounts for banks’ securities have not yet been recognized at fair value. The concept
of fair value accounting has also been partly adopted in regulatory capital requirements based on the 1988 Basle
Accord. In this framework, unrealized profits of investment securities can be included only in the numerator of
the capital-to-assets ratio used to assess capital adequacy.
However, some fair value accounting critics are
concerned that the precipitous adoption of market value
accounting will have adverse effects on both banks and the
financial system as a whole. In particular, these critics
believe that earnings based on fair values for investment
securities are likely to be more volatile than those based on
historical cost. They assert that this increased volatility
does not reflect the underlying economic volatility of
banks’ operations and that investors will demand an excessive premium, therefore causing investors to allocate funds
inefficiently.


Critics also assert that using fair value accounting
for investment securities is likely to cause banks to violate
regulatory capital requirements more often than is economically appropriate, resulting in excessive regulatory
intervention or in costly actions to reduce the risk of regulatory intervention. Actually, regulatory capital requirements
based on the 1988 Basle Accord may have strongly influenced Japanese banks’ lending behavior after the bubble
period. Following that period, Japanese banks experienced a
sharp reduction in unrealized gains from equities. This may
have led banks to adopt overly cautious lending behaviors
to reduce the risk of regulatory intervention.
Using data on U.S. banks, Barth, Landsman, and
Wahlen (1995) have investigated the empirical validity of
the above-mentioned concerns about fair value accounting.
They found no convincing evidence to justify these
concerns. Specifically, Barth, Landsman, and Wahlen found:
• Fair-value-based earnings are more volatile than
historical cost earnings, but share prices do not reflect
the incremental volatility.
• Banks violate regulatory capital requirements more
frequently under fair value than under historical cost
accounting.
• Fair-value-based violations help predict actual regulatory capital violations, but share prices do not reflect
this potential increase in regulatory risk.
In this paper, we describe an empirical study of
fair value accounting, applying to data on Japanese banks
the analytical methods of Barth, Landsman, and Wahlen.
We also discuss a further study of regulatory risk in capital requirements associated with fair value accounting,
focusing on banks with low Basle capital adequacy ratios.
This is a different approach from that of Barth, Landsman, and Wahlen. In the United States, these authors calculated capital ratios on a fair value accounting basis with
unrealized securities profits. Using these figures, they
tested how fair-value-based violations help predict actual
regulatory capital violations and to what extent investors
recognize this potential increased regulatory risk. In this
paper, we investigate, using actual Basle adequacy ratios,
the regulatory risk in capital requirements associated
with fair value accounting. The outline of our study is as
follows:
• We examine how fair value accounting affects earnings volatility and whether any incremental volatility
is reflected in bank share prices. If this is the case, do
investors view fair value earnings volatility as a better
proxy for economic risk than historical cost earnings
volatility?
• We examine the effect of fair value accounting on the
volatility of regulatory capital ratios and whether any
increase in regulatory risk associated with fair value
accounting is reflected in share prices. (Regulatory
risk is one component of banks’ total economic risk.)
We specifically focus on banks with low Basle capital
adequacy ratios, examining how far the incremental
volatility associated with fair value accounting is
reflected in bank share prices.
• We seek a better formula for Basle capital adequacy
ratios, using the concept of fair value accounting. Specifically, we compare the volatility of capital adequacy
ratios, using the current Basle Accord formula (only
capital is calculated using the unrealized gains of investment securities), the formula using historical cost accounting, and the fair value formula (in which both capital and assets are calculated using the unrealized gains of investment securities).
We find that:
• Bank earnings based on the fair values of investment
securities are significantly more volatile than earnings
based on historical cost securities gains and losses.
• However, the assertion that investors generally
demand an excessive premium because of the
increased volatility associated with fair value accounting, thereby raising banks’ cost of capital, is not supported by any strong empirical evidence.
• On those critical occasions when investors value low-capital-ratio banks' shares, the volatility in fair value
earnings incremental to that in historical cost earnings is also priced as risk. The choice of accounting
formula adopted in regulatory capital requirements is
therefore very important.
• The Basle capital adequacy formula adopts (somewhat) the concept of fair value accounting because
the formula allows the inclusion of unrealized gains
of investment securities in the calculation of capital (the numerator). However, when including such unrealized gains, they should also be used in the
calculation of assets (the denominator). From the
practical point of view, this assertion is also supported
by the fact that the fair value formula (both capital
and assets are calculated using the unrealized gains of
investment securities) is less volatile than the current
formula.
The remainder of this paper is organized as follows: Section 2 describes our data and sample banks. Sections 3 and 4 present our empirical findings related to
earnings volatility and regulatory risk associated with fair
value accounting. In section 5, we seek a better formula for
Basle Accord capital adequacy ratios using fair value
accounting. Section 6 concludes our discussion.

2. DATA AND SAMPLE BANKS
The sample comprises annual data from fiscal year (FY)
1989-FY1996 for eighty-seven Japanese banks that more
than once during this period adopted capital adequacy
ratios based on the 1988 Basle Accord. Our estimation
includes banks that, because of their fragile financial condition, have adopted Basle capital adequacy ratios only during a limited period. However, banks that defaulted during
the period are excluded (even though these banks’ property
has been handed over to other banks).
We focus in this study on listed investment securities, because only unrealized gains for listed securities are
calculated in capital adequacy ratios based on the 1988
Basle Accord.1 These estimates are obtained from annual
statements of accounts. We can estimate annual fair value
profits and losses of investment securities during the
FY1989-FY1996 period, using data from annual statements of accounts in which unrealized gains and losses for
listed securities have been disclosed since FY1990 and unrealized securities gains calculated in Basle Accord capital adequacy ratios have been disclosed since FY1989.

3. EARNINGS VOLATILITY
Here we address two specific questions:
• Are earnings more volatile using fair value accounting
for investments rather than using historical cost?

• If earnings are more volatile, do investors perceive
this increased volatility as an additional risk premium
and do banks’ share prices reflect such a premium?
This will be the case if volatility in earnings based
on fair values for investment securities is a better proxy for
economic risk than that based on historical cost.

3.1. EMPIRICAL MEASURES OF EARNINGS VOLATILITY
Table 1 presents cross-sectional descriptive statistics of
earnings under historical cost and fair value accounting
and realized and unrealized securities gains and losses
using a sample of eighty-seven Japanese banks over the
1989-96 period. The four earnings variables are historical
cost earnings (HCE—that is, ordinary income), HCE
plus unrealized annual gains and losses for investment
securities (that is, fair value earnings, or FVE), realized
securities gains and losses (RSGL), and unrealized securities gains and losses (URSGL). Realized investment securities gains and losses are recognized under historical cost
accounting. Under fair value accounting, banks recognize investment securities gains and losses equal to the sum of RSGL and URSGL.2
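For concreteness, the construction of the FVE series and the year-by-year statistics reported in Table 1 can be sketched as follows. The column names, the pandas data layout, and the function itself are assumptions of this illustration; the underlying data are not reproduced here.

import pandas as pd

def earnings_summary(panel: pd.DataFrame) -> pd.DataFrame:
    # panel has one row per bank-year with columns 'year', 'hce', and 'ursgl'.
    # Fair value earnings are historical cost earnings plus unrealized gains and losses.
    panel = panel.assign(fve=panel["hce"] + panel["ursgl"])
    return panel.groupby("year")[["hce", "fve"]].agg(["mean", "std"])

# The "sigma of mean" row of Table 1 is then the standard deviation of the yearly
# cross-sectional means, e.g. earnings_summary(panel)[("fve", "mean")].std().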
Obviously, URSGL is more volatile than RSGL.
The effect of unrealized securities gains and losses on
ordinary income in any given year can be large. Table 1
shows the standard deviations over the 1989-96 period,
measured for the cross-sectional mean in fair value earnings

and in historical cost earnings. The former (σ of mean: 168.8) is more than five times greater than the latter (σ of mean: 29.8).

Table 1
DESCRIPTIVE STATISTICS: EARNINGS VARIABLES

                  HCE                     FVE                 RSGL                URSGL
Year    N      Mean       σ          Mean       σ         Mean      σ         Mean       σ
89      87    105.1    157.8        -207.1   333.0         25.5    62.3      -312.2   457.7
90      87     97.5    146.3        -104.5   238.3         43.1   101.5      -202.0   332.6
91      87     82.5    124.6        -212.9   565.9          0.4    43.6      -295.3   623.4
92      87     83.9    133.2         107.6   320.4          5.7    54.1        23.7   261.3
93      87     67.8    116.6         146.1   226.5         30.1    70.0        78.3   143.7
94      87     74.9    144.8        -129.4   315.8         16.2    98.8      -204.3   333.0
95      87     26.0    156.0         197.1   360.3         86.2   161.0       171.2   250.1
96      87     26.2    203.0        -171.6   448.0          4.4    80.0      -197.8   365.2

Mean (N=8)     70.5                  -46.8                  26.5              -117.3
σ of Mean      29.8                  168.8                  28.2               182.2

Note: σ denotes standard deviation.

3.2. EARNINGS VARIABILITY AND SHARE PRICES
The increased earnings volatility associated with fair
value accounting for investment securities documented in
Table 1 raises the question: Does the market perceive this
increased volatility as additional risk?
To address this question, we estimate the following relationship:
(1)  P = α0 + α1 PREEit + α2 (σHCit × PREEit) + α3 [(σFVit – σHCit) × PREEit] + εit ,

where P is the bank’s end-of-fiscal-year share price,3 PREE
is earnings per share before securities gains and losses, and i
and t represent banks and years, respectively. σ HCit and
σ FVit are the standard deviations of historical cost and fair
value earnings per share for each bank measured over the
recent four years. Because σ HC and σ FV are computed
using four years of data, this analysis extends only from
FY1992 through FY1996.4
However, this estimation period covers the entire
duration of the Basle capital adequacy ratios, excluding the
trial period. Using this estimation, we can investigate the
regulatory risk associated with fair value accounting in
accordance with the Basle Accord of 1988. We deal with
this in section 4. Equation 1 is based on a valuation model
where price is determined as earnings divided by the cost
of equity capital. The model assumes that a firm’s equity
value equals an earnings multiple times permanent earnings, where risk is one of many determinants of the earnings multiple. The earnings multiple is assumed to be
negatively related to risk (see appendix).
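As a purely numerical illustration (the figures are ours, not drawn from the sample): if permanent earnings per share are 100 and the cost of equity capital is 10 percent, the model implies a price of 100/0.10 = 1,000, an earnings multiple of 10; if perceived risk raises the cost of capital to 12.5 percent, the multiple falls to 8 and the price to 800. Equation 1 captures this inverse relation by letting the earnings coefficient shrink as the volatility proxies rise.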
Equation 1 permits the coefficient on earnings to
vary with two risk proxies based on earnings variability. If
historical cost accounting earnings and their variance are
good proxies for permanent earnings and risk, then the
expected sign of α 2 is negative. Because we are trying to
determine whether the market perceives the variance associated with fair value accounting as risk incremental to historical cost earnings variance, our test is whether α3
equals zero. Finding that α 3 is significantly different from
zero is consistent with any difference between fair value
and historical cost earnings variance being perceived by the
market as risk.
Note that the sign of α3 depends on the sign of the difference between σHC and σFV. Because Table 1 reports that the variance of fair value earnings, σFV, exceeds the variance of historical cost earnings, σHC, we expect the sign of α3 to be negative. To be consistent with the going-concern assumption in the underlying valuation model, we eliminate observations with negative earnings, PREE.
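A minimal sketch of how equation 1 can be estimated with bank and year fixed effects is given below. The column names and the use of the statsmodels formula interface are our assumptions; the interaction terms are built beforehand so that the formula mirrors the equation directly.

import pandas as pd
import statsmodels.formula.api as smf

def estimate_eq1(df: pd.DataFrame):
    # df has one row per bank-year with columns: price, pree, sigma_hc, sigma_fv, bank, year.
    # Observations with negative pree are dropped, as in the text.
    df = df[df["pree"] >= 0].assign(
        hc_x_pree=lambda d: d["sigma_hc"] * d["pree"],
        dfv_x_pree=lambda d: (d["sigma_fv"] - d["sigma_hc"]) * d["pree"],
    )
    # C(bank) and C(year) add bank and year dummies, i.e., two-way fixed effects.
    return smf.ols("price ~ pree + hc_x_pree + dfv_x_pree + C(bank) + C(year)",
                   data=df).fit()

# The coefficient on dfv_x_pree plays the role of alpha_3 in equation 1.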
Table 2 presents regression estimates (N=302)
using a fixed-effects estimation of eighty-seven banks. It
describes estimations of three fixed-effects models that
pool observations across years (FY1992-FY1996). Panel
A contains the regression summary statistics for equation
1. Panels B and C present regression summary statistics
from estimating versions of equation 1 that include either
the volatility of historical cost earnings or fair value earnings, each interacting with earnings before securities
gains and losses, but not both.
Panel A indicates that volatility in fair value
earnings is not associated with a reduced earnings multiple assigned by investors. The coefficient on
(σFVit – σHCit) × PREEit, α3, is insignificantly different from zero (t = 0.40), indicating that the volatility in fair
value earnings incremental to that in historical cost earnings is not priced as risk.
The findings in Panel A are inconsistent with fair
value accounting critics’ assertions that increased volatility
associated with fair value earnings directly affects investors’
capital allocation decisions. The findings are consistent
with investors who perceive that volatility in historical cost
earnings is a better measure of economic risk than volatility
in fair value earnings. The fact that bank share prices do not
reflect the incremental volatility of fair value earnings is
consistent with the findings using U.S. bank data over the
1976-90 period in Barth, Landsman, and Wahlen (1995).
To eliminate collinearity between the two volatility
measures, we also estimate each measure alone. Panels B and
C indicate that each measure has a significant dampening

effect on the earnings multiple. The coefficients representing the effect of historical cost earnings volatility and fair value earnings volatility on the earnings multiple are significantly negative, with t-statistics of -4.47 and -2.07, respectively. Both volatility measures are therefore proxies for risk. But our findings in Panel A indicate that historical cost volatility dominates fair value earnings volatility as a risk proxy.

Table 2
REGRESSION ESTIMATES FROM FIXED-EFFECTS ESTIMATION

Panel A
Pit = α0i + α0t + α1 PREEit + α2 (σHCit × PREEit) + α3 (σFVit – σHCit) × PREEit + εit
Coefficient estimates:
α1 = 1.40 (t = 3.55)
α2 = -0.01 (t = -4.13)      F-test: F(82,216) = 78.646, P-value = [.0000]
α3 = 0.0002 (t = 0.40)      Hausman test: CHISQ(3) = 155.28, P-value = [.0000]

Panel B
Pit = α0i + α0t + α1 PREEit + α2 (σHCit × PREEit) + εit
Coefficient estimates:
α1 = 1.47 (t = 4.11)        F-test: F(82,217) = 87.120, P-value = [.0000]
α2 = -0.01 (t = -4.47)      Hausman test: CHISQ(2) = 107.33, P-value = [.0000]

Panel C
Pit = α0i + α0t + α1 PREEit + α2 (σFVit × PREEit) + εit
Coefficient estimates:
α1 = 1.07 (t = 2.69)        F-test: F(82,217) = 74.363, P-value = [.0000]
α2 = -0.0007 (t = -2.07)    Hausman test: CHISQ(2) = 145.78, P-value = [.0000]

Notes: P is price per share; PREE is earnings per share before securities gains and losses; σHC is the standard deviation of historical cost earnings per share for each bank measured over the most recent four years; σFV is the standard deviation of fair value earnings per share, calculated as historical cost earnings plus unrealized gains and losses for investment securities, for each bank measured over the most recent four years; i is bank i; t is year t.

4. REGULATORY RISK
4.1. A COMPARISON OF REGULATORY
CAPITAL MEASURES
Based on the findings in Table 1, we expect regulatory capital ratios based on fair value accounting to be more volatile than those based on historical cost. This may also be
true of Basle adequacy ratios, which, in part, adopt the concept of fair value accounting for investment securities.
Table 3 shows a comparison of volatility between current
Basle capital adequacy ratios and capital adequacy ratios

calculated without unrealized profits for investment securities. Obviously, the former is more volatile than the latter.
In the table, the mean of the mean ( µ ) and the standard
deviation ( σ ) are measured for each bank over the period
FY1989-FY1996 using three formulas. These formulas
are: current Basle capital adequacy ratios (only capital is

calculated with unrealized gains from investment securities), capital ratios based on historical cost accounting, and capital ratios based on fair value accounting (both capital and assets are calculated with unrealized gains of investment securities). The table uses a sample of eighty-seven Japanese banks over the period FY1989-FY1996. Actually, in Japan the current Basle capital adequacy formula is sometimes criticized because the inclusion of unrealized gains of investment securities in capital (the numerator) intensifies the volatility of capital adequacy ratios, thus having an inappropriate impact on bank behavior.

Table 3
COMPARISON OF VOLATILITY OF CAPITAL ADEQUACY RATIOS

       BIS-R    HC-R    FV-R
µ       9.17    7.33    8.81
σ       3.14    2.62    3.02

Notes:
BIS-R is the mean of the mean and the standard deviation measured for each bank over the period FY1989-FY1996, using current Basle capital adequacy ratios (only capital is calculated with unrealized gains of investment securities).
HC-R is the mean of the mean and the standard deviation measured for each bank over the period FY1989-FY1996, using capital ratios based on historical cost accounting.
FV-R is the mean of the mean and the standard deviation measured for each bank over the period FY1989-FY1996, using capital ratios based on fair value accounting (both capital and assets are calculated with unrealized gains of investment securities).

4.2. REGULATORY RISK AND SHARE PRICES
Now we investigate the pricing effect of regulatory risk by
estimating equation 1 for banks with low Basle capital adequacy
ratios. Banks with low Basle capital adequacy ratios may
have a greater possibility of regulatory capital violations
caused by the volatility of unrealized profits for investment
securities than do banks with high capital adequacy ratios.
If so, fair value earnings volatility is most likely to be
priced incrementally to historical cost earnings volatility
for banks with low Basle capital adequacy ratios. If the fair
value earnings volatility of banks with low capital adequacy ratios is reflected in their share prices, investors
should recognize the regulatory risk associated with fair
value accounting.
Table 4 presents Basle capital adequacy ratio levels
and the number of banks having those levels. We focus on

banks with low capital adequacy ratios (under 9.0 percent).

Table 4
BANKS' BASLE CAPITAL ADEQUACY RATIO LEVELS, 1992-96

BIS-R (Percent)     1992    1993    1994    1995    1996
9.00 ~                59      70      50      59      61
8.75~9.00             15       8      12       8       6
8.50~8.75             12       8      13       7       3
8.25~8.50              1       1       9       9       8
8.00~8.25              0       0       1       1       2
7.75~8.00              0       0       0       0       0
7.50~7.75              0       0       0       0       0
7.25~7.50              0       0       0       0       0
7.00~7.25              0       0       0       0       0
~ 7.00                 0       0       1       0       1

Note: BIS-R is the Basle Accord regulatory capital ratio.
Table 5 provides estimates of the relationships between bank share prices and earnings before securities gains and losses, volatility in reported earnings, and volatility in fair value earnings. Regression estimates are from fixed-effects estimation. The sample represents Japanese banks with low capital adequacy ratios (under 9.0 percent) during the 1992-96 period. The table reveals that the coefficients on both volatility measures are significantly negative (with t-statistics of -3.01 and -3.37), even though the historical cost earnings coefficient is larger than the fair value earnings coefficient. So, for banks with low capital adequacy ratios,5 both volatilities are reflected in bank share prices. This finding indicates that investors recognize the regulatory risk associated with fair value accounting.6 In this sense, we cannot reject the possibility of increased volatility having some impact on capital allocation decisions and bank behavior. If this is the case, does it mean that regulatory capital requirements using fair value accounting are irrelevant? We deal with this issue in the next section.

Table 5
REGRESSION ESTIMATES, SAMPLE OF LOW-CAPITAL-RATIO BANKS

Pit = α0i + α0t + α1 PREEit + α2 (σHCit × PREEit) + α3 (σFVit – σHCit) × PREEit + εit
Coefficient estimates:
α1 = 8.43 (t = 5.33)
α2 = -0.02 (t = -3.01)      F-test: F(31,39) = 30.472, P-value = [.0000]
α3 = -0.008 (t = -3.37)     Hausman test: CHISQ(3) = 23.260, P-value = [.0000]

Notes: P is price per share; PREE is earnings per share before securities gains and losses; σHC is the standard deviation of historical cost earnings per share for each bank measured over the most recent four years; σFV is the standard deviation of fair value earnings per share, calculated as historical cost earnings plus unrealized gains and losses for investment securities, for each bank measured over the most recent four years; i is bank i; t is year t; t-statistics are in parentheses.

5. APPROPRIATE ACCOUNTING FORMULA FOR CAPITAL ADEQUACY RATIOS
In section 3, we showed that the volatility in fair value earnings is not generally recognized by investors as a better risk proxy than that in historical cost earnings. However, in section 4 we demonstrated that under critical circumstances, such as the valuation of low-capital-ratio banks' shares, the volatility in fair value earnings incremental to that in historical cost earnings is also priced as risk.
We interpret these findings as follows:
• No strong empirical evidence supports the assertion that investors generally demand an excessive premium because of the increased volatility associated with fair value accounting, therefore raising banks' cost of capital.
• However, this does not mean that fair value earnings are value-irrelevant. In fact, on those critical occasions when investors value low-capital-ratio banks' shares, fair value earnings provide us with more useful information than do historical cost earnings.

• The perceived volatility in fair value earnings incremental to that in historical cost earnings in the valuation of low-capital-ratio banks' shares can be interpreted as regulatory risk associated with fair value accounting.
Examined from a different angle, our findings
indicate that the choice of accounting formula adopted in
regulatory capital requirements is very important. If an
inappropriate accounting formula is adopted, there is a
possibility that the regulatory capital requirements mislead investors and lead to inefficient capital allocation decisions and inappropriate bank behavior.
We now ask, how relevant is the current accounting formula used to calculate capital requirements under
the terms of the 1988 Basle Accord? This question should
be addressed in terms of the purpose of the bank capital
standards. Broadly speaking, bank capital standards are
aimed at limiting bank failures by decreasing the likelihood of bank insolvency (that is, decreasing the likelihood
that banks have negative economic net worth, in which liabilities exceed assets). Therefore, banks’ capital ratios
should be a good indication of the future probability of
banks’ negative net worth. When we assess the future
probability of banks’ negative net worth, both assets and
liabilities should be fair-valued, reflecting future risk factors.
Capital ratios based on historical cost cannot accurately indicate economic net worth. In some cases, failed
institutions report positive net worth in excess of regulatory requirements under historical cost accounting, even
though these institutions already have negative economic

net worth. We can therefore consider relevant regulatory
capital requirements using fair value accounting since
these formulas lead regulators to address institutions’
financial difficulties earlier.
So, what is “fair value” in the context of capital
adequacy ratios? Theoretically, we consider valid the assertion that all assets and liabilities should be calculated using
fair value (taking into account fluctuations in value from
various risk factors, such as market risk, credit risk, and
liquidity risk). However, we find it difficult, realistically,
to use fair value accounting on all assets and liabilities to
calculate capital adequacy ratios. We have much to explore
on this matter.
In this paper, we do not deal with general risk factors or fair value accounting associated with Basle capital
adequacy ratios. Our study provides evidence to support
the assertion that inappropriate or incorrect fair values
adopted in regulatory capital requirements should be
revised, because of the possibility that they will cause inefficient capital allocations by investors and inappropriate
bank behavior. From this point of view, the current Basle
capital adequacy formula allows biased treatment, at least
theoretically, of the calculation of unrealized gains from
investment securities.7 The current formula includes unrealized gains of investment securities only in the calculation
of capital (the numerator), but assets (the denominator)
should also be calculated to include unrealized gains from
investment securities.
This is not only justified by theoretical arguments. Practically, this assertion is appropriate, because
this alternative formula (that is, calculating unrealized
gains of investment securities for denominators, as well as
numerators) mitigates capital adequacy ratios’ volatility.
Table 3 shows a comparison of the volatility of capital
adequacy ratios using the current Basle Accord formula
(only capital is calculated using the unrealized gains from
investment securities), the formula using historical cost
accounting, and fair value formulas (both capital and
assets are calculated using the unrealized gains of investment securities). Under the fair value formula, 45 percent
of the unrealized gains of investment securities is
included in capital (the numerator), which follows the
treatment under the current formula, taking into account
the concept of tax effect accounting.8 However, assets
include 100 percent of unrealized gains of investment
securities. This treatment is relevant because, under tax
effect accounting, profits can be adjusted but the asset side
remains unchanged. Obviously, the current and fair value
formulas are more volatile than the historical cost formula,
but between the two former formulas, the fair value formula—calculating unrealized gains from investment
securities—mitigates the increased volatility.
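To make the comparison concrete, the three formulas behind Table 3 can be sketched as below. The 45 percent inclusion rate in capital and the full inclusion of unrealized gains in assets follow the description above; the function names and the simplified treatment of the risk-weighted asset denominator are assumptions of this sketch.

def bis_ratio(capital, risk_assets, unrealized_gains):
    # Current Basle formula: 45 percent of unrealized gains enters capital only.
    return 100.0 * (capital + 0.45 * unrealized_gains) / risk_assets

def hc_ratio(capital, risk_assets, unrealized_gains):
    # Historical cost formula: unrealized gains are ignored entirely.
    return 100.0 * capital / risk_assets

def fv_ratio(capital, risk_assets, unrealized_gains):
    # Fair value formula: 45 percent of gains in capital, 100 percent in assets.
    return 100.0 * (capital + 0.45 * unrealized_gains) / (risk_assets + unrealized_gains)

# Because the denominator of fv_ratio moves with unrealized gains as well, the
# ratio fluctuates less than bis_ratio when those gains swing, which is the
# pattern reported in Table 3.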
In Japan, the current Basle capital adequacy formula is sometimes criticized because it includes unrealized
gains of investment securities in capital (the numerator),
intensifying the capital adequacy ratios’ volatility and
therefore having an inappropriate impact on bank behavior.
The findings in Table 3 show that, even from the critics’
point of view, the fair value formula (calculating both capital and assets using the unrealized gains from investment
securities) is more appropriate than the current formula.

6. CONCLUSION
This paper investigated the assertions of those who criticize
the use of fair value accounting to estimate the value of
investment securities. We studied the regulatory risk associated with capital adequacy ratios based on fair value
accounting. We addressed these issues using earnings that
we calculated using disclosed fair value estimates of banks’
investment securities and Basle capital adequacy ratios,
which partly adopt the concept of fair value accounting.

We reached the following conclusions:
• Although earnings are more volatile under fair value
accounting, this increased volatility does not necessarily represent a proxy of economic risk.
• However, in critical circumstances—where investors
value low-capital-ratio banks’ shares—the volatility
in fair value earnings, incremental to that in historical
cost earnings, is also priced as risk.
Our first conclusion is consistent with the findings of Barth, Landsman, and Wahlen (1995), who use data
on U.S. banks. However, our second conclusion is different
from their empirical results. Presumably, this difference is
brought about partly by differences in regulation and in
bank behavior.
In the United States, banks generally are not allowed to hold equity securities, so the size of such holdings is limited.9 In Japan, however, equity securities holdings are much larger,10 and the resulting volatility in unrealized gains can be considered to have more impact than in the United States on investors’ valuation of banks’ shares under critical circumstances.
Our conclusions suggest the following:
• The assertion that investors generally demand an
excessive premium because of the increased volatility
associated with fair value accounting, thereby raising
banks’ cost of capital, is not supported by any strong
empirical evidence.
• However, this does not mean that fair value earnings
are value-irrelevant. In fact, on those critical occasions
when investors value low-capital-ratio banks’ shares,
fair value earnings provide us with more useful information than do historical cost earnings.
• The perceived volatility in fair value earnings incremental to that in historical cost earnings in the valuation of low-capital-ratio banks’ shares can be
interpreted as regulatory risk associated with fair
value accounting and it indicates the importance of
the accounting framework of the Basle capital
adequacy formula. If an inappropriate accounting
formula is adopted, there is a possibility that regulatory capital requirements will mislead investors and
lead to inefficient capital allocation decisions and inappropriate bank behavior. The Basle capital
adequacy formula adopts in part the concept of fair
value accounting in the sense that it allows the inclusion of unrealized gains of investment securities in the
calculation of capital (the numerator). However, when
including unrealized gains, we should also include those gains in the calculation of assets (the denominator). This assertion is supported by the fact that
the fair value formula (both capital and assets are
calculated using the unrealized gains of investment
securities) is less volatile than the current formula.

APPENDIX: VALUATION AND CAPITAL ASSET PRICING MODELS

Suppose that the current price of a share is $P_0$, that the expected price at the end of a year is $P_1$, and that the expected dividend per share is $DIV_1$. We assume that the equity investors invest for both dividends and capital gains, and that the expected return is $r$.
Our fundamental valuation formula is, therefore,

$$P_0 = \frac{DIV_1 + P_1}{1 + r}.$$

This formula will hold in each period, as well as in the present. That allowed us to express next year’s forecast price in terms of the subsequent stream of dividends per share $DIV_1, DIV_2, \ldots$. If dividends are expected to grow forever at a constant rate, $g$, then

$$P_0 = \frac{DIV_1}{r - g} = \frac{(1 + g)DIV_0}{r - g}.$$

We transform this into the following formula, where $b$ is the retention rate and $E_0$ is the current earnings per share:

$$P_0 = \frac{(1 - b)E_1}{r - g} = \frac{(1 + g)(1 - b)E_0}{r - g} \equiv \theta E_0. \qquad \text{(A1)}$$

We obtain the relationship that equity value equals an earnings multiple ($\theta$) times current earnings per share $E_0$.
Now, we focus on expected return $r$. By using the capital asset pricing model, the following equation is obtained:

$$r_i = r_f + \beta_i (r_m - r_f) = r_f + \frac{\rho\,\sigma_{r_i}}{\sigma_{r_m}}(r_m - r_f), \qquad \text{(A2)}$$

where $r_f$ is the risk-free rate, $r_m$ is the expected return on the market index, and $\rho$ is the covariance $(r_i, r_m)/\sigma_{r_i}\sigma_{r_m}$.
When we combine equations A1 and A2, we find the earnings multiple is described in the form $1/(A + B\sigma_{r_i})$. If we assume that the portion of the earnings multiple attributable to risk can be disaggregated
linearly from the total earnings multiple, then we obtain
equation 1 in the main text.
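
As a numerical illustration of equations A1 and A2 (all inputs below are invented; they are not drawn from the paper’s data):

```python
# Numerical illustration of equations A1 and A2. All inputs are hypothetical.
r_f, r_m = 0.02, 0.07            # risk-free rate and expected market return
sigma_ri, sigma_rm, rho = 0.30, 0.20, 0.8
b, g, E0 = 0.4, 0.03, 50.0       # retention rate, dividend growth rate, current EPS

beta = rho * sigma_ri / sigma_rm               # beta implied by the correlation form in A2
r = r_f + beta * (r_m - r_f)                   # expected return from the CAPM (A2)
theta = (1 + g) * (1 - b) / (r - g)            # earnings multiple from A1
P0 = theta * E0                                # implied share price

print(f"beta={beta:.2f}  r={r:.3f}  theta={theta:.2f}  P0={P0:.1f}")
```

Holding the other inputs fixed, a larger $\sigma_{r_i}$ raises $r$ and lowers $\theta$, which is the $1/(A + B\sigma_{r_i})$ form referred to above.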

The views expressed in this article are those of the authors and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


ENDNOTES

The authors are very grateful to Professor Satoshi Daigo, Professor Kazuyuki
Suda, Masaaki Shirakawa, Hiroshi Fujiki, and Nobuyuki Oda for their
comments and suggestions. They also thank Wataru Suzuki for his support.
The views expressed in the paper are the authors’ and do not necessarily reflect
the views of the Bank of Japan.
1. The investment securities holdings of 149 Japanese banks (including
city banks, long-term credit banks, trust banks, regional banks, and
regional banks II) on average account for 15.4 percent (in 1996) of their
total assets.
2. Under the current accounting rules in Japan, banks’ investment
securities are recognized at historical cost (equity securities are
recognized at the lower of cost or market) and estimates of their fair
values are disclosed. In this paper, on the assumption that disclosure and
recognition are informationally equivalent, we make fair value estimates
by adding URSGL to RSGL.
3. Banks’ annual statements cannot be obtained at the end of the fiscal
year. However, investors may infer those figures by evaluating forecast
figures in the semiannual statements, movements of interest rates, the
stock price index (Nikkei Heikin), and other information sources such as
from rating firms. Therefore, a bank’s end-of-fiscal-year share price can
be considered relevant. Incidentally, Barth, Landsman, and Wahlen
(1995) analyze U.S. banks by using end-of-year data—the same type of
information we use to study Japanese banks.
4. The four-year calculation period reflects the tradeoff between having
a sufficient number of observations to estimate the earnings variance
efficiently and having a sufficient number of observations to estimate
efficiently equation 1.
5. When simply conducting the same estimation with regard to high
capital adequacy ratios, the coefficient of earnings per share before
securities gains and losses, as well as that of the increased volatility of
fair value estimates, is insignificant. Presumably, this result is driven
somewhat by the large-scale loan writeoffs in recent years: In this
situation, high earnings are not necessarily positively valued, because
myopic behavior, such as reporting high profits in the short run while deferring the writeoffs of nonperforming loans, is negatively valued.
Mainly large banks, such as city banks that have relatively high capital
ratios, have conducted the large-scale writeoffs. At any rate, for this
study we have to conduct the empirical estimation using other financial
data such as the sum of writeoffs and nonperforming loans, which we
think will be the subject of future studies.
6. The risk investors recognize regarding capital adequacy ratios is not
limited to regulatory risk. Even without regulatory capital requirements,
investors monitor the economic capital ratios of banks and, if these ratios
decrease, they will demand an excessive premium. In this sense, we
cannot easily draw the line between regulatory risk and risk regarding
economic capital ratios. In this paper, we focus on regulatory risk and do
not touch upon such issues as the meaning of capital for shareholders and
managers and the meaning of internal capital allocation.
7. The treatment of unrealized gains from investment securities is left
to each country’s regulator. In Japan, banks are allowed to include
unrealized gains from investment securities. In this paper, we consider
the treatment of unrealized gains in Japan.
8. To be precise, under the current formula, the figure 45 percent is
considered to be determined not only by tax effect accounting, but also
by the fact that not all of unrealized profits can be realized. At any rate,
regarding the inclusion of unrealized gains in the calculation of capital,
we adopt the figure 45 percent in the calculation of the fair value formula
to clarify the comparison with the current formula.
9. “Except as hereinafter provided or otherwise permitted by law,
nothing herein contained shall authorize the purchase by the
association for its own account of any shares of stock of any
corporation.” (Title 12, United States Code Section 24, Seventh.)
10. The investment securities holdings of U.S. commercial banks
(9,528) on average account for 17.5 percent of total assets (in 1996),
which is larger than the amount for Japanese banks (15.4 percent).
However, U.S. banks’ equity securities account for only 2.7 percent of their total securities holdings, while Japanese banks’ equity securities account for 34.7 percent of theirs.

REFERENCES

Barth, M. E. 1994. “Fair Value Accounting: Evidence from Investment Securities and the Market Valuation of Banks.” Accounting Review 69 (January): 1-25.

Barth, M. E., W. H. Beaver, and M. A. Wolfson. 1990. “Components of Bank Earnings and the Structure of Bank Share Prices.” Financial Analysts Journal 46 (May-June): 53-60.

Barth, M. E., W. R. Landsman, and J. M. Wahlen. 1995. “Fair Value Accounting: Effects on Banks’ Earnings Volatility, Regulatory Capital, and Value of Contractual Cash Flows.” Journal of Banking and Finance 19: 577-605.

Daigo, S., ed. 1995. Jikahyouka to Nihon Keizai. Nihon keizai shinbunsha.

International Accounting Standards Committee. 1997. “Accounting for Financial Assets and Financial Liabilities,” March.

Watts, R. L., and J. L. Zimmerman. 1991. Jisshoriron to shiteno Kaikeigaku (Positive Accounting Theory). Translated by K. Suda. Hakutou shobo.


Measuring the Relative Marginal Cost
of Debt and Capital for Banks
Summary of Presentation
Thuan Le and Kevin P. Sheehan

The implicit assumption for the existence of an optimal
capital structure is that the cost of capital depends on the
degree of leverage and that there exists sufficient friction
that prevents investors from taking advantage of arbitrage
opportunities. By exploiting the equilibrium condition
under an optimal capital structure—that a bank’s cost of funds from equity and from debt should be equal—we derive a measure of capital bindingness. This measure suggests that, since 1993, the cost of equity capital has been very high relative to the cost of debt by historical standards, yet banks have not lowered their capital ratios as
theory would predict.
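
The summary does not spell out how the measure is constructed. Purely as a hypothetical illustration of a price-based gauge of this kind—not the authors’ method—one could compare a CAPM-style estimate of the cost of equity with an average cost of debt taken from accounting data:

```python
# Hypothetical illustration of a price-based gauge: the gap between an
# estimated cost of equity and an estimated (average) cost of debt.
# This is NOT the authors' construction, which the summary does not detail.
def cost_of_equity(r_f, beta, market_premium):
    """CAPM-style estimate of the required return on bank equity."""
    return r_f + beta * market_premium

def cost_of_debt(interest_expense, total_debt):
    """Average (not marginal) cost of debt from accounting data."""
    return interest_expense / total_debt

equity_cost = cost_of_equity(r_f=0.05, beta=1.1, market_premium=0.06)   # invented figures
debt_cost = cost_of_debt(interest_expense=3.2, total_debt=80.0)         # invented figures
print(f"equity-debt cost gap: {equity_cost - debt_cost:.3f}")
```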
This finding seems to indicate that banks are
somehow “constrained” to holding a higher fraction of

their liabilities in the form of more expensive equity
capital instead of the relatively cheaper debt. Perhaps the
reason that banks are constrained from lowering their
capital ratios has less to do with regulatory capital
requirements than with the banks’ inability to effectively
reduce excess capital. Recent data show that banks are
growing more slowly today than in the past, which would
preclude increasing debt as a means of lowering the capital
ratios. Empirical data suggest that banks may be attempting to reduce equity through consolidation and stock
repurchases. Some may view stock repurchases as costly
compared with mergers and acquisitions, but our work
suggests that both methods will lower capital ratios and
bring the marginal cost of debt and equity closer together.

The views expressed in this article are those of the authors and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.

Thuan Le and Kevin P. Sheehan are economists in the Division of Research
and Statistics of the Federal Deposit Insurance Corporation.


Commentary
Stephen G. Cecchetti

This session contains four interesting papers that are
brought together by the following important question:
What does it mean for a bank to be capital constrained?
Put slightly differently, the papers by Ediz, Michael, and
Perraudin; Aggarwal and Jacques; Yonetani and Katsuo;
and Le and Sheehan all attempt to measure how banks react
to the presence of capital requirements. In the following, I
will summarize and comment on what I believe to be the
primary focus of each of these four papers as it relates to
this question. I will then close with some general remarks.
The first paper, by Ediz, Michael, and Perraudin,
entitled “The Impact of Capital Requirements on U.K.
Bank Behaviour,” examines the behavior of British banks
near the regulatory trigger levels for capital, as set by the
examining authorities in the United Kingdom. The
authors ask the very interesting question: What actions do
banks take when their capital ratios fall close to the regulatory limit? Their conclusion is that banks approaching the
limits imposed by regulators raise capital, and do not shed
loans. This conclusion is valuable, as it suggests that the
reaction of lenders to capital requirements is not to clamp

Stephen G. Cecchetti is an executive vice president and the director of research at
the Federal Reserve Bank of New York.

down on their borrowers. Regulatory constraints do not, by
themselves, appear to reduce the supply of loans.
I view Ediz, Michael, and Perraudin’s results as
preliminary. The authors present a number of very interesting descriptive statistics that support these
conclusions. For example, they convincingly establish
(graphically) that the closer a bank’s capital (relative to
risk-weighted assets) gets to the regulatory trigger, the
more likely a bank is to increase its capital. But their
sophisticated econometric analysis has one fairly large difficulty. The authors estimate a simple model in which banks
have an optimal or target level of capital in mind and
adjust slowly to this target. Looking at the numerical
results in the paper, one finds that banks are adjusting their
capital levels each year by more than the difference
between the current level and the target. That is, the estimated adjustment rate exceeds one, meaning that the
banks are overshooting the target (and by more and more
each year).
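
The difficulty can be seen in a stylized partial adjustment model of the kind Cecchetti describes. In the sketch below, the target level, starting level, and adjustment rates are invented for illustration:

```python
# Stylized partial adjustment: K_t = K_{t-1} + lam * (K_star - K_{t-1}).
# With 0 < lam < 1 the bank closes part of the gap each year; with lam > 1
# it overshoots the target; with lam > 2 the deviation grows each year.
def simulate(k0, k_star, lam, years=5):
    path = [k0]
    for _ in range(years):
        path.append(path[-1] + lam * (k_star - path[-1]))
    return [round(k, 2) for k in path]

print(simulate(k0=6.0, k_star=10.0, lam=0.5))   # smooth convergence
print(simulate(k0=6.0, k_star=10.0, lam=1.4))   # overshoots, then oscillates toward the target
print(simulate(k0=6.0, k_star=10.0, lam=2.2))   # overshoots by more each year
```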
The second paper, by Aggarwal and Jacques, is
entitled “Assessing the Impact of Prompt Corrective
Action on Bank Capital and Risk.” The authors attempt to
measure the impact of prompt corrective action (PCA) on
bank capital levels and bank risk; again, an issue clearly
worthy of study. In this work, Aggarwal and Jacques use data on bank balances for the years 1991-93. This allows
the assessment of banks’ behavior before and after the institution of PCA in 1992. The authors find that banks with
low levels of capital at the beginning of the period
increased their levels of capital by the end and reduced the
riskiness of their asset portfolios (using the authors’ chosen
measure).
While Aggarwal and Jacques’ conclusions are
plausible, can we really ascribe them to prompt corrective
action? In order to fully confirm the causal link from PCA
to the bank balance sheet changes they document, the
authors need to confront two important difficulties. First,
are there plausible alternative explanations for the findings? What else happened in the 1991-93 period? And second, does their measure of risk really track the quantity of
interest? Again, is there another, equally plausible interpretation of the results? With respect to the first question,
a number of things happened during this period that may
have contaminated the results, making this an unfortunate
period to use for an attempt to isolate the impact of PCA.
First, 1992 was the year in which the 1988 Basle Capital
Accord was implemented in the United States. In preparation for this, banks began reporting risk-based capital in
1990-91. It seems likely that banks’ behavior during this
period was a reaction both to PCA and to the implementation of the Basle Capital Accord, and that sorting out their
relative impact will be very difficult.
Second, the early 1990s was an unusual point in
what was an important cycle in the banking industry. Prior
to this, in the late 1980s through 1991, banks had taken
loan losses associated with their real estate portfolios.
Banks’ loan-loss reserves were depleted and their capital
was significantly reduced. The natural reaction of the
banks in 1992-93 was to rebuild their capital positions.
Was the overall reaction of bank capital during the
1991-93 period the result of prompt corrective action?
Maybe, but we do not yet have convincing proof.
Aggarwal and Jacques’ second set of results concerns the impact of PCA on banks’ willingness to assume
risk. They measure bank risk exposure as the ratio of
risk-weighted assets to total assets, and presume that the
higher this ratio, the more risk a bank assumes per dollar of book value. Unfortunately, this measures only credit risk,
and not very well. What about other sources of risk, such as
interest rate risk? I am led to conclude that they have not
convincingly shown that PCA reduced the overall riskiness
of banks’ assets.
In “Fair Value Accounting and Regulatory Capital
Requirements,” Yonetani and Katsuo examine how market
and regulatory discipline interact to affect Japanese banks.
The market might perceive that banks are undercapitalized
and might value their shares accordingly. But, Yonetani
and Katsuo hypothesize, there may be a separate influence
on the bank that comes when it actually hits its regulatory
limit. At this point, does the market punish the bank even
more? Or, does the market properly perceive the riskiness
of the bank’s asset position and value it correctly? The
authors conclude that bank earnings based on fair market
value are more volatile than those based on historical cost
and that the impact of this additional volatility depends on
the level of bank capital, suggesting that the two (negative)
effects reinforce one another.
Yonetani and Katsuo’s work is relevant in helping
us answer a much broader question than the one on which
they primarily focus: For the purposes of meeting regulatory capital requirements, at what frequency should we
require banks to mark their portfolios to market? This is an
extremely difficult question to answer. It seems that some
market value accounting is necessary, and so “never” is not
the right answer. But then, a very high frequency, even if it
were cheap to administer, does not seem to be the right
answer either. Should we insist that the bank’s capital, at
market prices, exceeds the regulatory minimum at every
instant? Probably not, as some portions of a bank’s portfolio may experience significantly more high-frequency volatility than low-frequency volatility. But we surely could
use an answer to this question, and more work in this area
would be very valuable.
The final paper in this group is Le and Sheehan’s
“Measuring the Relative Marginal Cost of Debt and
Capital for Banks.” In their study, these two authors ask
whether we can measure the impact of capital requirements
by looking at prices. The general idea of looking for the
impact of quantity constraints by examining prices seems like a good one. Here, Le and Sheehan proceed by studying
the behavior of the difference between the cost of capital
and the cost of debt. Does this give us the information we
really want?
In assessing their methods, one must ask whether
fluctuations in the cost of capital relative to debt are likely
to tell us anything about the degree to which capital
requirements bind. In trying to answer this question, first
ask whether the cost of capital will equal the cost of debt
even if there were no capital requirements. I think that the
answer to this must be no. First, capital is more risky than
debt, and so it should have a higher expected rate of return.
Second, even if deposit insurance cuts the link between the
marginal cost of debt and the level of capital, with costly
bankruptcy, the marginal cost of capital will depend on the
level of debt. As a result, anything that changes the riskiness of capital or the likelihood of bankruptcy will change
the cost of capital relative to debt—even if there is no capital requirement at all.
Looking briefly at Le and Sheehan’s empirical
results, I have two comments. First, it is very difficult to
measure the marginal cost of capital, which is what they
need. Most techniques will allow measurement of the average cost. Second, looking at the specifics of their results,
you see that the time path of their measure of how binding
the constraints are depends critically on exactly how they
choose to measure it. Is the deviation of the estimated cost of capital from the estimated cost of debt calculated relative to the interest rate on Treasury bills or not? It turns
out to make a big difference what measure is used, and
since the authors provide no reason for one or the other, I
am left puzzled.
In thinking about capital regulation generally, the
problem that brings these four papers together is a fundamental one: What does it mean for banks to be capital constrained? The common methodology in addressing this
question is to look at the behavior of banks as they
approach the constraint imposed by regulators. But is this
likely to give us an answer to the question we really care
about? The one result that comes through in all of these
papers is that banks that are undercapitalized raise capital.
But surely undercapitalized banks will be under market
pressure at the same time they come under regulatory pressure. Can we really say that the behavior we observe with
the regulations is different from the behavior we would
observe without them?
I realize that in these comments I have raised more
questions than I have answered. My conclusion is that the
success of these papers, really, is in helping us to refine the
questions to which we need answers. After reading these
four interesting papers, I am left asking myself two questions to which we would like to know the answers: How is
it that required capital ratios work to affect bank behavior?
What are capital requirements really supposed to achieve?

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Industry Practices in Credit Risk Modeling
and Internal Capital Allocations:
Implications for a Models-Based
Regulatory Capital Standard
Summary of Presentation
David Jones and John Mingo

I. WHY SHOULD REGULATORS BE
INTERESTED IN CREDIT RISK MODELS?
Bank supervisors have long recognized two types of shortcomings in the Basle Accord’s risk-based capital (RBC)
framework. First, the regulatory measures of “capital” may
not represent a bank’s true capacity to absorb unexpected
losses. Deficiencies in reported loan loss reserves, for
example, could mask deteriorations in banks’ economic net
worth. Second, the denominator of the RBC ratios, total
risk-weighted assets, may not be an accurate measure of
total risk. The regulatory risk weights do not reflect
certain risks, such as interest rate and operating risks.
More importantly, they ignore critical differences in credit
risk among financial instruments (for example, all commercial credits incur a 100 percent risk weight), as well as
differences across banks in hedging, portfolio diversification, and the quality of risk management systems.
These anomalies have created opportunities for
“regulatory capital arbitrage” that are rendering the formal

David Jones is an assistant director and John Mingo a senior adviser in the
Division of Research and Statistics of the Board of Governors of the Federal
Reserve System.

RBC ratios increasingly less meaningful for the largest,
most sophisticated banks. Through securitization and
other financial innovations, many large banks have lowered
their RBC requirements substantially without reducing
materially their overall credit risk exposures. More
recently, the September 1997 Market Risk Amendment to
the Basle Accord has created additional arbitrage opportunities by affording certain credit risk positions much lower
RBC requirements when held in the trading account rather
than in the banking book.
Given the prevalence of regulatory capital arbitrage
and the unstinting pace of financial innovation, the current
Basle Accord may soon become overwhelmed. At least for
the largest, most sophisticated banks, it seems clear that
regulators need to begin developing the next generation of
capital standards now—before the current framework is
completely outmoded. “Internal models” approaches to
prudential regulation are presently the only long-term
solution on the horizon.
The basic problem is that securitization and other
forms of capital arbitrage allow banks to achieve effective
capital requirements well below the nominal 8 percent
Basle standard. This may not be a concern—indeed, it may
be desirable from a resource allocation perspective—when, in specific instances, the Basle standard is way too high in
relation to a bank’s true risks. But it is a concern when
capital arbitrage lowers overall prudential standards.
Unfortunately, with the present tools available to supervisors, it is often difficult to distinguish these cases,
especially given the lack of transparency in many off-balance-sheet credit positions.
Ultimately, capital arbitrage stems from the
disparities between true economic risks and the “one-size-fits-all” notion of risk embodied in the Accord. By contrast, over the past decade many of the largest banks have
developed sophisticated methods for quantifying credit
risks and internally allocating capital against those risks.
At these institutions, credit risk models and internal
capital allocations are used in a variety of management
applications, such as risk-based pricing, the measurement
of risk-adjusted profitability, and the setting of portfolio
concentration limits.

II. THE RELATIONSHIP BETWEEN PDF
AND ALLOCATED ECONOMIC CAPITAL
Before discussing various credit risk models per se, it may
be helpful to describe how these models are used within
banks’ capital allocation systems. Internal capital allocations against credit risk are based on a bank’s estimate of
the probability density function (PDF) for credit losses.
Credit risk models are used to estimate these PDFs (see
chart). A risky portfolio is one whose PDF has a relatively
long, fat tail—that is, where there is a significant likelihood that actual losses will be substantially higher than
expected losses, shown as the left dotted line in the chart.
In this chart, the probability of credit losses exceeding the
level X is equal to the shaded area under the PDF to the
right of X.
The estimated capital needed to support a bank’s
credit risk exposure is generally referred to as its “economic
capital” for credit risk. The process for determining this
amount is analogous to VaR methods used in allocating
economic capital against market risks. Specifically, the economic capital for credit risk is determined in such a way
that the estimated probability of unexpected credit losses
exhausting economic capital is less than the bank’s “target insolvency rate.” Capital allocation systems generally
assume that it is the role of reserving policies to cover
expected credit losses, while it is the role of equity capital to
cover credit risk, or the uncertainty of credit losses. Thus,
required economic capital is the amount of equity over and
above expected losses necessary to achieve the target insolvency rate. In the chart, for a target insolvency rate equal
to the shaded area, the required economic capital equals
the distance between the two dotted lines.
In practice, the target insolvency rate is usually
chosen to be consistent with the bank’s desired credit rating.
For example, if the desired credit rating is AA, the target
insolvency rate might equal the historical one-year default
rate for AA-rated corporate bonds (about 3 basis points).
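
A minimal sketch of this capital allocation logic, using a simulated loss distribution in place of a bank’s estimated PDF (the distribution and all figures are hypothetical):

```python
import numpy as np

# Economic capital as the loss quantile at the target insolvency rate, less
# expected losses. The simulated lognormal losses stand in for a bank's
# estimated PDF; all figures are hypothetical.
rng = np.random.default_rng(0)
losses = rng.lognormal(mean=2.0, sigma=0.8, size=200_000)   # simulated portfolio credit losses

target_insolvency_rate = 0.0003                   # roughly the one-year AA default rate cited above
expected_loss = losses.mean()                     # assumed to be covered by reserves
loss_quantile = np.quantile(losses, 1 - target_insolvency_rate)
economic_capital = loss_quantile - expected_loss  # equity needed beyond expected losses

print(f"expected loss {expected_loss:.1f}, "
      f"{1 - target_insolvency_rate:.4%} loss quantile {loss_quantile:.1f}, "
      f"economic capital {economic_capital:.1f}")
```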
To recap, economic capital allocations for credit
risk are based on two critical inputs: the bank’s target
insolvency rate and its estimated PDF for credit losses. Two
banks with identical portfolios, therefore, could have very
different economic capital allocations for credit risk, owing
to differences in their attitudes toward risk taking, as
reflected in their target insolvency rates, or owing to differences in their methods for estimating PDFs, as reflected in their credit risk models.

[Chart: The Relationship between PDF and Allocated Economic Capital. The chart plots the probability density function of losses (PDF) against the level of losses; expected losses and the loss level X are marked by dotted lines, and the allocated economic capital is the distance between them. Note: The shaded area under the PDF to the right of X (the target insolvency rate) equals the cumulative probability that unexpected losses will exceed the allocated economic capital.]

Obviously, for competitive equity
and other reasons, regulators prefer to apply the same
minimum soundness standard to all banks. Thus, any
internal models approach to regulatory capital would likely
be based on a bank’s estimated PDF, not on the bank’s own
internal economic capital allocations. That is, the regulator
would likely (a) decide whether the bank’s PDF estimation
process was acceptable and (b) at least implicitly, set a
regulatory maximum insolvency probability (rather than
accept the bank’s target insolvency rate if such a rate was
deemed “too high” by regulatory standards).

III. TYPES OF CREDIT RISK MODELS
When estimating the PDF for credit losses, banks generally
employ what we term either “top-down” or “bottom-up”
methods (see exhibit). Top-down models are often used for
estimating credit risk in consumer or small business portfolios. Typically, within a broad subportfolio, such as credit
cards, all loans would be treated as more or less homogeneous. The bank would then base its estimated PDF on the
historical credit loss rates for that subportfolio taken as a
whole. For example, the variance in subportfolio loss rates
over time could be taken as an estimate of the variance of loss rates associated with the current subportfolio. A limitation of top-down models, however, is that they may not
be sensitive to changes in the subportfolio’s composition.
That is, if the quality of the bank’s card customers were to
change over time, PDF estimates based on that portfolio’s
historical loss rates could be highly misleading.
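
A sketch of the top-down calculation for a single subportfolio, with a hypothetical charge-off history; note that, as discussed above, nothing in this calculation responds to changes in the composition of the current book:

```python
import numpy as np

# Top-down estimate for a credit card subportfolio: historical charge-off
# rates are treated as draws from the loss-rate distribution of the current
# book. The rates and exposure are hypothetical.
historical_loss_rates = np.array([0.031, 0.028, 0.045, 0.052, 0.037, 0.030, 0.041])
exposure = 5_000.0   # current subportfolio size, in millions

mean_rate = historical_loss_rates.mean()
std_rate = historical_loss_rates.std(ddof=1)

expected_loss = mean_rate * exposure
unexpected_loss_one_sd = std_rate * exposure     # a crude width measure for the PDF
print(f"expected loss {expected_loss:.0f}, "
      f"one-standard-deviation unexpected loss {unexpected_loss_one_sd:.0f}")
```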
Where changes in portfolio composition are a
significant concern, banks appear to be evolving toward
bottom-up models. This is already the predominant
method for measuring the credit risks of large and middle-market customers. A bottom-up model attempts to
quantify credit risk at the level of each individual loan,
based on an explicit credit evaluation of the underlying
customer. This evaluation is usually summarized in terms
of the loan’s internal credit rating, which is treated as a
proxy for the loan’s probability of default. The bank
would also estimate the loan’s loss rate in the event of
default, based on collateral and other factors. To measure
credit risk for the portfolio as a whole, the risks of
individual loans are aggregated, taking into account
correlation effects. Unlike top-down methods, therefore,
bottom-up models explicitly consider variations in credit
quality and other compositional effects.

Overview of Risk Measurement Systems

Aggregative Models (top-down techniques, generally applied to broad lines of business):
• Peer analysis
• Historical cash flow volatility

Structural Models (applied separately to credit risks, market risks, and operating risks). For credit risks:
• Top-down methods (common within consumer and small business units): historical charge-off volatility
• Bottom-up methods (standard within large corporate business units), with the following building blocks:
  1. Internal credit ratings
  2. Definition of credit loss: default mode (DM) or mark-to-market (MTM)
  3. Valuations of loans
  4. Treatment of credit-related optionality
  5. Parameter specification/estimation
  6. PDF computation engine: Monte Carlo simulation or mean/variance approximation
  7. Capital allocation rule


IV. MODELING ISSUES
The remainder of this summary focuses on four aspects
of credit risk modeling: the conceptual framework,
credit-related optionality, model calibrations, and model
validation. The intent is to highlight some of the modeling
issues that we believe are significant from a regulator’s
perspective; the full version of our paper provides significantly greater detail.

A. CONCEPTUAL FRAMEWORK
Credit risk modeling procedures are driven importantly by
a bank’s underlying definition of “credit losses” and the
“planning horizon” over which such losses are measured.
Banks generally employ a one-year planning horizon and
what we refer to as either a default-mode (DM) paradigm or a
mark-to-market (MTM) paradigm for defining credit losses.

1. Default-Mode Paradigm
At present, the default-mode paradigm is by far the most
common approach to defining credit losses. It can be
thought of as a representation of the traditional “buy-and-hold” lending business of commercial banks. It is
sometimes called a “two-state” model because only two
outcomes are relevant: nondefault and default. If a loan
does not default within the planning horizon, no credit
loss is incurred; if the loan defaults, the credit loss equals
the difference between the loan’s book value and the
present value of its net recoveries.

2. Mark-to-Market Paradigm
The mark-to-market paradigm generalizes this approach
by recognizing that the economic value of a loan may
decline even if the loan does not formally default. This
paradigm is “multi-state” in that “default” is only one of
several possible credit ratings to which a loan could
migrate. In effect, the credit portfolio is assumed to be
marked to market or, more accurately, “marked to model.”
The value of a term loan, for example, typically would
employ a discounted cash flow methodology, where the
credit spreads used in valuing the loan would depend on
the instrument’s credit rating.


To illustrate the differences between these two
paradigms, consider a loan having an internal credit rating equivalent to BBB. Under both paradigms, the loan
would incur a credit loss if it were to default during the
planning horizon. Under the mark-to-market paradigm,
however, credit losses could also arise if the loan were to
suffer a downgrade short of default (such as migrating from
BBB to BB) or if prevailing credit spreads were to widen.
Conversely, the value of the loan could increase if its credit
rating improved or if credit spreads narrowed.
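
A simplified numerical contrast of the two paradigms for a single loan (all figures are hypothetical):

```python
# Simplified contrast of the two loss definitions for a single BBB-rated
# term loan. All figures are hypothetical.
book_value = 100.0

# Default mode (two-state): a loss arises only if the loan defaults.
recovery_pv = 60.0
dm_loss_if_default = book_value - recovery_pv

# Mark-to-market (multi-state): revalue the loan at the horizon by discounting
# its remaining cash flows at a spread tied to the end-of-horizon rating.
cash_flows = [7.0, 7.0, 107.0]   # remaining annual coupons plus principal

def value(spread, risk_free=0.05):
    return sum(cf / (1 + risk_free + spread) ** t for t, cf in enumerate(cash_flows, start=1))

mtm_loss_on_downgrade = value(spread=0.015) - value(spread=0.035)   # assumed BBB vs. BB spreads

print(f"DM loss if default: {dm_loss_if_default:.1f}")
print(f"MTM loss on a BBB-to-BB downgrade: {mtm_loss_on_downgrade:.1f}")
```

The default-mode figure is zero unless the loan actually defaults, whereas the mark-to-market figure registers a loss on the downgrade alone.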
Clearly, the planning horizon and loss paradigm are
critical decision variables in the credit risk modeling process.
As noted, the planning horizon is generally taken to be one
year. It is often suggested that one year represents a reasonable interval over which a bank—in the normal course of
business—could mitigate its credit exposures. Regulators,
however, tend to frame the issue differently—in the context
of a bank under stress attempting to unload the credit risk of
a significant portfolio of deteriorating assets. Based on
experience in the United States and elsewhere, more than one
year is often needed to resolve asset-quality problems at
troubled banks. Thus, for the banking book, regulators may
be uncomfortable with the assumption that capital is needed
to cover only one year of unexpected losses.
Since default-mode models ignore credit deteriorations short of default, their estimates of credit risk may be
particularly sensitive to the choice of a one-year horizon.
With respect to a three-year term loan, for example, the
one-year horizon could mean that more than two-thirds of
the credit risk is potentially ignored. Many banks attempt
to reduce this bias by making a loan’s estimated probability of default an increasing function of its maturity. In
practice, however, these adjustments are often made in an
ad hoc fashion, so it is difficult to assess their effectiveness.

B. CREDIT-RELATED OPTIONALITY
In contrast to simple loans, for many instruments a bank’s
credit exposure is not fixed in advance, but rather depends
on future (random) events. One example of such “credit-related optionality” is a line of credit, where optionality
reflects the fact that drawdown rates tend to increase as a customer’s credit quality deteriorates. As observed in
connection with the recent turmoil in foreign exchange
markets, credit-related optionality also arises in derivatives
transactions, where counterparty exposure changes randomly
over the life of the contract, reflecting changes in the
amount by which the bank is “in the money.”
As with the treatment of optionality in VaR models,
credit-related optionality is a complex topic, and methods
for dealing with it are still evolving. At present, there is
great diversity in practice, which frequently leads to very
large differences across banks in credit risk estimates for
similar instruments. With regard to virtually identical
lines of credit, estimates of stand-alone credit risk can differ
by as much as tenfold. In some cases, these differences reflect
modeling assumptions that, quite frankly, seem difficult to
justify—for example, with respect to committed lines of
credit, some banks implicitly assume that future drawdown rates are independent of future changes in a customer’s
credit quality. Going forward, in our view the treatment of
credit-related optionality needs to be a priority item, both
for bank risk modelers and their supervisors.
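
One simple way to avoid the independence assumption criticized above is to let the assumed drawdown of the undrawn commitment depend on the obligor’s end-of-horizon rating. A sketch with invented usage assumptions:

```python
# Exposure on a committed line of credit when the assumed drawdown of the
# undrawn portion depends on the obligor's end-of-horizon rating.
# The usage assumptions below are invented for illustration.
commitment = 200.0
drawn = 80.0
drawdown_given_rating = {"A": 0.20, "BBB": 0.35, "BB": 0.55, "B": 0.75}

def exposure(rating):
    """Exposure if the obligor migrates to the given rating."""
    return drawn + drawdown_given_rating[rating] * (commitment - drawn)

for rating in ("A", "BBB", "BB", "B"):
    print(f"{rating:>3}: exposure {exposure(rating):.0f}")
```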

C. MODEL CALIBRATION
Perhaps the most difficult aspect of credit risk modeling is
the calibration of model parameters. To illustrate this
process, note that in a default-mode model, the credit loss
for an individual loan reflects the combined influence of
two types of risk factors—those determining whether or not
the loan defaults and, in the event of default, risk factors
determining the loan’s loss rate. Thus, implicitly or explicitly, the model builder must specify (a) the expected
probability of default for each loan, (b) the probability
distribution for each loan’s loss-rate-given-default, and
(c) among all loans in the portfolio, all possible pair-wise
correlations among defaults and loss-rates-given-default.
Under the mark-to-market paradigm, the estimation problem is even more complex, since the model builder needs
to consider possible credit rating migrations short of
default as well as potential changes in future credit spreads.
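
To make the list of required parameters concrete, the following sketch simulates portfolio losses under a default-mode, one-factor model in which default correlation enters through a single asset correlation. It illustrates the kinds of parameters a model builder must specify; it is not a description of any particular bank’s model, and every value is hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Default-mode Monte Carlo under a one-factor model: each loan needs a default
# probability and a loss rate given default, and joint defaults are driven by a
# single asset correlation. All parameter values are hypothetical.
rng = np.random.default_rng(1)
n_loans, n_sims = 500, 20_000
exposure = 1.0           # equal exposure per loan
pd_ = 0.02               # one-year default probability per loan
lgd = 0.45               # loss rate given default (treated as fixed here)
asset_corr = 0.15        # common-factor correlation

threshold = norm.ppf(pd_)                              # default if the asset value falls below this
z = rng.standard_normal(n_sims)                        # systematic factor
eps = rng.standard_normal((n_sims, n_loans))           # idiosyncratic factors
assets = np.sqrt(asset_corr) * z[:, None] + np.sqrt(1 - asset_corr) * eps
losses = (assets < threshold).sum(axis=1) * exposure * lgd   # portfolio loss per scenario

print(f"expected loss {losses.mean():.1f}, "
      f"99.9th percentile loss {np.quantile(losses, 0.999):.1f}")
```

Even in this stripped-down setting, the extreme tail of the simulated distribution moves substantially with the assumed asset correlation, which is the sensitivity to key parameters emphasized below.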
This is a daunting task. Reflecting the longer term
nature of credit cycles, even in the best of circumstances—assuming parameter stability—many years of data, spanning
multiple credit cycles, would be needed to estimate default
probabilities, correlations, and other key parameters with
good precision. At most banks, however, data on historical
loan performance have been warehoused only since the
implementation of their capital allocation systems, often
within the last few years. Owing to such data limitations,
the model specification process tends to involve many crucial
simplifying assumptions as well as considerable judgment.
In our full paper, we discuss assumptions that are
often invoked to make model calibration manageable.
Examples include assumptions of parameter stability and
various forms of independence within and among the various types of risk factors. Some specifications also impose
normality or other parametric assumptions on the underlying probability distributions.
It is important to note that estimation of the
extreme tail of the PDF is likely to be highly sensitive to
these assumptions and to estimates of key parameters.
Surprisingly, in practice there is generally little analysis
supporting critical modeling assumptions. Nor is it
standard practice to conduct sensitivity testing of a
model’s vulnerability to key parameters. Indeed, practitioners generally presume that all parameters are known
with certainty, thus ignoring credit risk issues arising
from parameter uncertainty or model instability. In the
context of an internal models approach to regulatory capital
for credit risk, sensitivity testing and the treatment of
parameter uncertainty would likely be areas of keen
supervisory interest.

D. MODEL VALIDATION
Given the difficulties associated with calibrating credit risk
models, one’s attention quickly focuses on the need for
effective model validation procedures. However, the same
data problems that make it difficult to calibrate these models
also make it difficult to validate the models. Owing to insufficient data for out-of-sample testing, banks generally do not
conduct statistical back testing on their estimated PDFs.
Instead, credit risk models tend to be validated
indirectly, through various market-based “reality” checks.


Peer-group analysis is used extensively to gauge the reasonableness of a bank’s overall capital allocation process.
Another market-based technique involves comparing
actual credit spreads on corporate bonds or syndicated
loans with the break-even spreads implied by the bank’s
internal pricing models. Clearly, an implicit assumption of
these techniques is that prevailing market perceptions and
prevailing credit spreads are always “about right.”
In principle, stress testing could at least partially
compensate for shortcomings in available back-testing
methods. In the context of VaR models, for example, stress
tests designed to simulate hypothetical shocks provide
useful checks on the reasonableness of the required capital
levels generated by these models. Presumably, stress-testing
protocols also could be developed for credit risk models,
although we are not yet aware of banks actively pursuing
this approach.

V. POSSIBLE NEAR-TERM APPLICATIONS
OF CREDIT RISK MODELS
While the reliability concerns raised above in connection
with the current generation of credit risk models are substantial, they do not appear to be insurmountable. Credit
risk models are progressing so rapidly it is conceivable they
could become the foundation for a new approach to setting
formal regulatory capital requirements within a reasonably
near time frame. Regardless of how formal RBC standards
evolve over time, within the short run supervisors need to
improve their existing methods for assessing bank capital
adequacy, which are rapidly becoming outmoded in the
face of technological and financial innovation. Consistent
with the notion of “risk-focused” supervision, such new
efforts should take full advantage of banks’ own internal
risk management systems—which generally reflect the
most accurate information about their credit exposures—
and should focus on encouraging improvements to these
systems over time.
Within the relatively near term, we believe that
there are at least two broad areas in which the inputs or
outputs of banks’ internal credit risk models might usefully be incorporated into prudential capital policies. These
include (a) the selective use of internal credit risk models in
setting formal RBC requirements against certain credit
positions that are not treated effectively within the current
Basle Accord and (b) the use of internal credit ratings and
other components of credit risk models for purposes of
developing specific and practicable examination guidance
for assessing the capital adequacy of large, complex banking organizations.

A. SELECTIVE USE IN FORMAL RBC REQUIREMENTS
Under the current RBC standards, certain credit risk
positions are treated ineffectually or, in some cases, ignored
altogether. The selective application of internal risk models
in this area could fill an important void in the current RBC
framework for those instruments that, by virtue of their
being at the forefront of financial innovation, are the most
difficult to address effectively through existing prudential
techniques.
One particular application is suggested by the
November 1997 Notice of Proposed Rulemaking on
Recourse and Direct Credit Substitutes (NPR) put forth by
the U.S. banking agencies. The NPR discusses numerous
anomalies regarding the current RBC treatment of recourse
and other credit enhancements supporting banks’ securitization activities. In this area, the Basle Accord often produces
dramatically divergent RBC requirements for essentially
equivalent credit risks, depending on the specific contractual
form through which the bank assumes those risks.
To address some of these inconsistencies, the NPR
proposes setting RBC requirements for securitization-related
credit enhancements on the basis of credit ratings for these
positions obtained from one or more accredited rating agencies. One concern with this proposal is that it may be costly
for banks to obtain formal credit ratings for credit enhancements that currently are not publicly rated. In addition,
many large banks already produce internal credit ratings for
such instruments, which, given the quality of their internal
control systems, may be at least as accurate as the ratings
that would be produced by accredited rating agencies. A
natural extension of the agencies’ proposal would permit a
bank to use its internal credit ratings (in lieu of having to

obtain external ratings from accredited rating agencies),
provided they were judged to be “reliable” by supervisors.
A further extension of the agency proposal might
involve the direct use of internal credit risk models in setting formal RBC requirements for selected classes of
securitization-related credit enhancements. Many current
securitization structures were not contemplated when the
Accord was drafted, and cannot be addressed effectively
within the current RBC framework. Market acceptance of
securitization programs, however, is based heavily on the
ability of issuers to quantify (or place reasonable upper
bounds on) the credit risks of the underlying pools of
securitized assets. The application of internal credit risk
models, if deemed “reliable” by supervisors, could provide
the first practical means of assigning economically reasonable capital requirements against such instruments. The
development of an internal models approach to RBC
requirements—on a limited scale for selected instruments—
also would provide a useful test bed for enhancing supervisors’ understanding of and confidence in such models,
and for considering possible expanded regulatory capital
applications over time.

B. IMPROVED EXAMINATION GUIDANCE
As noted above, most large U.S. banks today have highly
disciplined systems for grading the credit quality of individual financial instruments within major portions of their
credit portfolios (such as large business customers). In combination with other information from banks’ internal risk
models, these internal grades could provide a basis for
developing specific and practical examination guidance to
aid examiners in conducting independent assessments of the
capital adequacy of large, complex banking organizations.
To give one example, in contrast to the one-size-fits-all Basle standard, a bank’s internal capital allocation
against a fully funded, unsecured commercial loan will
generally vary with the loan’s internal credit rating. Typical
internal capital allocations often range from 1 percent or
less for a grade-1 loan, to 14 percent or more for a grade-6
loan (in a credit rating system with six “pass” grades).
Internal economic capital allocations against classified, but not-yet-charged-off, loans may approach 40 percent—not
counting any reserves for expected future charge-offs.
Examiners could usefully compare a particular bank’s
actual capital levels (or its allocated capital levels) with the
capital levels implied by such a grade-by-grade analysis
(using as benchmarks the internal capital allocation ratios,
by grade, of peer institutions). At a minimum, such a comparison could initiate discussions with the bank on the
reliability of its internal approaches to risk measurement
and capital allocation. Over time, examination guidance
might evolve to encompass additional elements of banks’
internal risk models, including analytical tools based on
stress-test methodologies. Regardless of the specific details,
the development and field testing of examination guidance
on the use of internal credit risk models would provide useful
insights into the longer term feasibility of an internal models
approach to setting formal regulatory capital standards.
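
A sketch of the grade-by-grade comparison described above. Exposures and benchmark capital ratios are invented; in practice the benchmarks might be drawn from peer institutions’ internal allocations, as suggested in the text.

```python
# Grade-by-grade check of capital against benchmark allocation ratios.
# Exposures and benchmark ratios are invented; in practice the benchmarks
# might come from peer institutions' internal allocations.
exposures_by_grade = {1: 400, 2: 900, 3: 1200, 4: 700, 5: 300, 6: 100}   # millions, pass grades 1-6
benchmark_capital_ratio = {1: 0.01, 2: 0.02, 3: 0.04, 4: 0.06, 5: 0.09, 6: 0.14}

implied_capital = sum(exposures_by_grade[g] * benchmark_capital_ratio[g]
                      for g in exposures_by_grade)
actual_capital = 250.0   # the bank's actual (or internally allocated) capital, also invented

print(f"capital implied by grade-by-grade benchmarks: {implied_capital:.0f}")
print(f"bank's capital: {actual_capital:.0f}")
```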
More generally, both supervisors and the banking
industry would benefit from the development of sound
practice guidance on the design, implementation, and
application of internal risk models and capital allocation
systems. Although important concerns remain, this field
has progressed rapidly in recent years, reflecting the growing awareness that effective risk measurement is a critical
ingredient to effective risk management. As with trading
account VaR models at a similar stage of development,
banking supervisors are in a unique position to disseminate
information on best practices in the risk measurement
arena. In addition to permitting individual banks to
compare their practices with those of peers, such efforts
would likely stimulate constructive discussions among
supervisors and bankers on ways to improve current risk
modeling practices, including model validation procedures.

VI. CONCLUDING REMARKS
The above discussion provides examples by which information from internal credit risk models might be usefully
incorporated into regulatory or supervisory capital policies.
In view of the modeling concerns described in this summary, incorporating internal credit risk measurement and
capital allocation systems into the supervisory and/or regulatory framework will occur neither quickly nor without significant difficulties. Nevertheless, supervisors should
not be dissuaded from embarking on such an endeavor. The
current one-size-fits-all system of risk-based capital
requirements increasingly is inadequate to the task of
measuring large bank soundness. Moreover, the process of “patching” regulatory capital “leaks” as they occur appears
to be less and less effective in dealing with the challenges
posed by ongoing financial innovation and regulatory
capital arbitrage. Finally, despite difficulties with an internal
models approach to bank capital, no alternative long-term
solutions have yet emerged.

ENDNOTE
The views expressed in this summary are those of the authors and do not necessarily
reflect those of the Federal Reserve System or other members of its staff. This paper
draws heavily upon information obtained through our participation in an ongoing
Federal Reserve System task force that has been reviewing the internal credit risk
modeling and capital allocation processes of major U.S. banking organizations.
The paper reflects comments from other members of that task force and Federal
Reserve staff, including Thomas Boemio, Raphael Bostic, Roger Cole, Edward
Ettin, Michael Gordy, Diana Hancock, Beverly Hirtle, James Houpt, Myron
Kwast, Mark Levonian, Chris Malloy, James Nelson, Thomas Oravez, Patrick
Parkinson, and Thomas Williams. In addition, we have benefited greatly from
discussions with numerous practitioners in the risk management arena, especially
John Drzik of Oliver, Wyman & Company. We alone, of course, are responsible
for any remaining errors.

REFERENCES

Jones, David, and John Mingo. 1998. “Industry Practices in Credit Risk
Modeling and Internal Capital Allocations: Implications for a
Models-Based Regulatory Capital Standard.” Paper presented at the
conference “Financial Services at the Crossroads: Capital Regulation in
the Twenty-First Century,” Federal Reserve Bank of New York,
February 26-27.

The views expressed in this article are those of the authors and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Credit Risk in the Australian
Banking Sector
Brian Gray

This paper presents a brief overview of developments
currently taking place in the Australian banking sector
relating to the measurement and management of credit
risk. Section I provides, as background, a sketch of the
structure of banking in Australia. Section II considers some
of the forces operating within the Australian banking and
financial system to increase the significance of credit and
capital management in banks. Section III outlines some of
the credit risk management practices being adopted in the
major Australian banks. Section IV looks at the implications of these developments and speculates on the scope for
greater use of banks’ internal credit risk models, or other
possible approaches, for capital adequacy purposes. A
summary and brief conclusion are in Section V.

I. THE STRUCTURE OF BANKING
IN AUSTRALIA
The banking system in Australia can be summarised in a
number of simple statistics. It comprises forty-three banking
groups, with aggregate global assets totaling more than
A$900 billion. Asset size, including the credit equivalent
of all off-balance-sheet activity, ranges from around A$250
billion for the largest bank to around A$300 million for the smallest.

Brian Gray is chief manager, Bank Supervision Department, Reserve Bank of Australia.

As a group, banks hold more than 75 percent
of the assets held by all financial intermediaries in Australia.
The four major banking groups account for more than
75 percent of that total. Measured in terms of the assets
of the financial system as a whole (including insurance
companies and fund managers), banks now account for just
less than 50 percent.
The history of banking in Australia can be summarised as one in which a long period of heavy regulation
was followed by a period (dating from the late 1970s to the
early 1980s) of financial deregulation. Banks dominated
the system in absolute terms for many years but lost
ground over the years to the newly emerging (and largely
unregulated) nonbank sector. Between the late 1920s and
1980, banks’ share of intermediated assets fell from around
90 percent to about 55 percent. That trend changed with
the advent of financial deregulation. The long-term slide in
the proportion of financial assets held by the banks was
halted, and the expansion in the number of domestic and
foreign banks operating in the Australian market, combined with the additional freedoms given to banks as a
result of deregulation, enabled banks’ share of business to
rise. These trends have been widely documented and will
not be examined in this paper.
In contrast to the position in a number of countries, banking in Australia encompasses all aspects of


financial intermediation. Banks are the main providers of
funds to households (through personal lending and lending
for residential housing) as well as to the small and
medium-sized business sectors. They are involved heavily
in wholesale and institutional markets, including all
aspects of traded markets. Through fully owned subsidiaries,
they are prominent in insurance and funds management.
There are no limitations or artificial barriers of substance to
the type of activity that can be conducted through a bank
or its associated companies, provided the activity can be
classified as financial in nature.

II. RISK MANAGEMENT AND THE UNDERLYING
FORCES IN AUSTRALIAN BANKING
Three sets of forces have been instrumental in generating
greater interest over the past five years in risk measurement
and management within the Australian banking system:
• the after-effects of the 1988 and 1992 periods, which
saw some Australian banks suffer large losses (Chart 1).
This experience led to a recognition that in a world
characterised by financial deregulation, the potential
existed for large volatility in earnings (and potentially
large losses) induced by credit cycles. The product was
a new-found interest on the part of bank management
in ways to measure and manage credit and other forms
of risk more precisely so as to avoid, as far as possible,
the reemergence of such problems in the future.
• a recognition that the increasing volume and complexity of financial instruments and products
required that better ways be found to measure
associated risks. Growth and increasing complexity
were not limited to traded financial products, but
also extended to many balance-sheet products
offered to the household and business sectors that
involved complex structures, often incorporating
hard-to-measure degrees of optionality.
• the structural changes taking place in the financial
sector and the growth in competitive pressures. Despite
the post-deregulation resurgence in the growth of
“banking” as opposed to “nonbank” activities, the
middle years of the 1990s and beyond have been a
period of increasingly strong competition in the
financial system, and that trend is likely to continue.
Against a background of falling underlying profitability, banks have begun to place greater focus than
ever before on the maintenance of shareholder returns


and the potential for improved risk measurement and
management practices to enhance performance through
better portfolio selection and management.
This is the broad canvas against which the issue of
possible regulatory-induced inefficiencies has emerged in the
Australian market. Central to the Australian regulatory
system is the 1988 Capital Accord, which (among other
things) provided a rough rule-of-thumb for the measurement of required regulatory capital. The capital adequacy
arrangements were readily accepted within the Australian
banking system and, for a long time (often to the frustration
of bank supervisors), were even used as an internal mechanism for allocating capital within at least some banks. This
was possible, in large part, because banks’ capital levels were
well in excess of the regulatory minimum; in reality, capital
allocation (to the extent that it was practiced at all) was very
much a mechanical process with little meaning to the actual
business activities of banks (Chart 2).
That situation is now in the process of changing
and the gap between the current capital adequacy arrangements and the work being carried out by banks in relation
to credit risk is becoming more apparent. Analogies are
being drawn between the innovative regulatory approach
adopted for traded market risk and the existing credit
standards. At this stage, the arguments being presented by
the main Australian banks are still in the early stages of
development, and it could not be said that there is a strong
consensus for change, at present, to the existing arrangements.
[Chart 1: Banks' Operating Profit Attributable to Shareholders. Ratio to average shareholders' funds, percent, 1986-96.]

[Chart 2: Aggregate Capital Ratio. Australian banking system, percent, 1990-97.]

However, it is only a matter of time before calls for change
become more pronounced. Now would seem to be the right
time, therefore, to think seriously about how an alternative
approach to the treatment of regulatory capital might be
developed.

III. CREDIT RISK MANAGEMENT
IN AUSTRALIAN BANKS
What is the current state of play concerning credit risk
management in the Australian banking system? While it is
difficult in a short paper to outline the full scope of activities taking place, and the pace of evolution, this section
attempts to give an impressionistic feeling for the nature of
changes we are seeing.
First, some general observations. As discussed above,
there is no doubt that up until the early 1990s, credit risk
measurement was at a rudimentary level in Australian banks,
while the management of credit risk was largely subjective. It
was a system that relied on experienced and skilled credit
officers within the banks. Little attention was paid to assessing, in an objective manner, the nature and extent of credit
exposures. In some cases, formal credit systems (in the modern
sense of the term) were virtually nonexistent.
Since then, Australian banks have greatly
improved their credit measurement capabilities as well as
the broader systems in place to track and report on credit
exposures. This is possibly the key finding of the program

of credit risk visits initiated by the Reserve Bank of
Australia in 1992. Credit processes are now better documented and understood within institutions. Asset and
security valuation arrangements, a particular problem
during the last credit cycle, are much tighter than in the
past. There is a new focus on the accuracy and timeliness of
information on counterparties. There is now widespread
use within banks of risk grading systems, and credit
approval and monitoring processes are being automated.
There is now greater separation between the credit and
marketing functions within banks. In some places, centralised credit bureaus have been developed to draw together
information on, and take responsibility for, credit risk
management at the group level. A more recent trend has
been the emergence of centralised and independent risk
management groups that seek to assess, in an integrated
fashion, all risks faced by a banking group (such as credit,
market, operational, and legal). The output of such groups
is routinely circulated to senior management within banks
and bank boards.
The criteria for assessing the effectiveness of risk management systems are, of course, multifaceted. They range from the quantity and quality of the underlying data collected on customers and their exposures, through the extent to which formal risk grading is used and how it is used, the degree to which the pricing of exposures is linked to the grading system, and whether risk-adjusted returns are measured and used within an institution, to the extent to which broader portfolio modeling is adopted to take into account correlations between counterparty and/or industry exposures. Once again, the general conclusion is that
techniques are evolving rapidly, though the rigour of the
methodologies used and the comprehensiveness of credit risk
management processes vary among banks. There is no doubt
that in relation to some of the more complex or leading-edge
aspects of credit risk measurement and portfolio management, “thinking” on what is required is still well ahead of
actual application or implementation.
Some of the criteria by which credit risk management systems might be judged, described above, are considered in greater detail below. For the purposes of discussion,
the focus will be mainly on the larger Australian banks.


DATA COLLECTION

Banks now store a wide range of information on counterparties, from the value of all exposures measured across the whole banking group and against limits, to a wealth of financial and other information on the counterparty, including a history of share prices, where applicable. Typically, data required to conduct extensive cash flow analysis on borrowing firms as well as on associated industry prospects are now collected or calculated. While there has been significant progress in risk-based data collection by Australian banks, in many cases the data sets still cover only relatively short time frames. This reflects the fact that many banks did not collect extensive risk-related information, or did not store such information in a useful form, prior to the upsurge of work in this area over the past few years. Access to good-quality, risk-related data remains an important constraint on the wider application of credit analysis and modeling within the Australian system.

RISK GRADING

Risk grading is now carried out by the bulk of Australian banks. Though subjective assessment is still used by banks (as it should be), energy has been devoted to the application of statistical techniques to introduce greater objectivity into the grading process and to provide benchmarks against which subjective assessments can be gauged. Credit application and behavioural scoring are now commonplace where retail/consumer portfolios are concerned, with tailored models used for the measurement of risk in the corporate and institutional banking areas. Grading systems naturally tend to vary between banks, with the number of grades and demarcations between grades reflecting the structure of banks' balance sheets. In the absence of comparable Australian data, gradings are benchmarked where possible against U.S. default and loss data compiled by Moody's and Standard and Poor's or assessed against KMV or like methodologies. Some banks have adopted external models to assist in the risk grading and portfolio management process. In others, the output of grading systems is carried through to an assessment of the required level of general provisions (a process termed "dynamic provisioning") and then through to profit and loss.

RISK GRADING AND PRICING

A logical extension of risk grading is the determination of risk-adjusted pricing for exposures. At this stage at least, it is not clear that this process has gone far within the Australian banking system. Some banks have certainly used their estimates of risk to decline exposures that do not meet their risk/return requirements, and there has been a definite move away from the simple "pass/fail" mentality of the past to one more sympathetic to the view that riskiness is a continuum that should be reflected in pricing. However, a common theme among risk managers in Australian banks is the difficulty of introducing more active pricing-for-risk regimes within their banks: it is hard to "sell" the idea that an otherwise good exposure should not be accepted because a "technical" assessment shows an imbalance between risk and expected return. It is an especially difficult message to convey to senior bank management when competitive pressures in the market are strong. This raises the broader issue of the "cultural" changes needed within a bank to make risk management (broadly defined) truly effective, and the need for an extensive "top down" education process within financial institutions. This issue is touched upon further below.
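As a purely illustrative sketch of the arithmetic behind pricing along that continuum, the fragment below sets the required margin on an exposure to cover its expected loss plus a hurdle return on attributed capital. All figures (default probabilities, loss given default, capital ratio, hurdle rate, and costs) are invented for illustration and do not describe any bank's methodology.

```python
# Illustrative risk-based pricing: the required margin covers expected loss
# plus a hurdle return on attributed capital. All inputs are assumed values.
def required_margin(pd_: float, lgd: float, capital_ratio: float,
                    hurdle_rate: float, operating_cost: float) -> float:
    """Required lending margin (as a fraction of exposure) for a graded exposure."""
    expected_loss = pd_ * lgd
    capital_charge = capital_ratio * hurdle_rate
    return expected_loss + capital_charge + operating_cost

# Margins across a hypothetical five-grade scale with rising default probabilities.
for grade, pd_ in enumerate([0.0005, 0.002, 0.01, 0.03, 0.08], start=1):
    margin = required_margin(pd_, lgd=0.45, capital_ratio=0.06,
                             hurdle_rate=0.15, operating_cost=0.005)
    print(f"Grade {grade}: required margin {margin:.2%}")
```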


MEASUREMENT OF RISK-ADJUSTED PERFORMANCE
The leading Australian banks have begun to measure
risk-adjusted performance and estimate “economic”
measures of capital. The accuracy of these measures will,
of course, turn on how well the underlying data and the
related grading systems capture risk. The absence of
comprehensive data on how well or otherwise the
Australian banking sector performs in times of economic
downturn will, for some time, place a question mark on
the reliance that can be put on such figures, especially
those relating to business and corporate loan exposures.
Nonetheless, the estimates are being produced routinely
by the leading banks and circulated to the highest levels
within the banks. In some banks, remuneration policies are now geared to risk-adjusted performance measures.
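A minimal sketch of such a risk-adjusted performance measure is given below, under assumed inputs: the risk-adjusted return is revenue net of expected credit losses and operating costs, divided by the economic capital attributed to the exposure. The figures are illustrative only.

```python
# Illustrative risk-adjusted return on economic capital. All inputs are assumed.
net_interest_and_fees = 2.0     # annual revenue on the exposure, $m
expected_credit_loss = 0.6      # from the grading system, $m
operating_costs = 0.5           # allocated costs, $m
economic_capital = 8.0          # capital attributed to the exposure, $m

risk_adjusted_return = (net_interest_and_fees - expected_credit_loss
                        - operating_costs) / economic_capital
print(f"Risk-adjusted return on economic capital: {risk_adjusted_return:.1%}")
```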

USE OF BROADER PORTFOLIO MODELS
While it is acknowledged that there are benefits to be
gained in adopting active portfolio diversification techniques, the leading Australian banks are still very much at
the experimental stage in examining the potential offered
by such approaches. Allen (1997) has summarised the state
of play in relation to portfolio diversification techniques
and their applicability to Australian banks. His analysis
and conclusions are not repeated here. Suffice it to say that it
is likely to be some time before the potential advantages
offered by portfolio-based approaches are implemented
within the institutions. Short of full acceptance of such
techniques, however, banks have begun to experiment with
buying and selling loans to realise better balanced portfolios while credit derivatives are being used more actively
to the same effect (though the market in Australia is still
quite small). Securitisation of banks’ more homogeneous
portfolios has been a feature of the Australian banking
scene for several years, though there have been only limited
attempts to date to securitise other, less uniform credit
portfolios.
To summarise, the past five years have witnessed a
rapid evolution in approaches to credit risk in the Australian
banking system. Whether “world best practice” can realistically be applied to present credit risk measurement and
management practices in all the leading Australian banks is
questionable, though it is equally questionable just how
many international banks with balance sheets comparable to
Australian banks would justify that description.
A useful trend observed in this market is the
recognition (referred to above) that improving risk management within banks is as much about changing attitudes
to risk as it is about introducing complex technical models
to the organisation. It is important to avoid the temptation
to view the issue of improved risk management as essentially technical in nature. Nimmo (1997) recently referred
to the challenges of improving risk management within a
major bank in the following terms:
Improved risk management, therefore, requires
significant cultural change to make it effective.
Implementation creates a great deal of discomfort
amongst bank staff because it requires people to

move away from traditional ways of doing things,
to ways that are more logical but nonetheless unfamiliar. There is typically huge resistance to that
process of change. Nevertheless, these changes have
to be implemented in such a way that they form a
fundamental part of the management of financial
institutions. . . . The question is whether the commitment exists within institutions to actually
make the changes which, in time, will deliver the
shareholder value that waits to be extracted.

IV. IMPLICATIONS FOR SUPERVISORS
Risk management practices have improved in the Australian
banking system and the range of techniques now being
applied is expanding and growing in complexity. To what
extent does this suggest the need for supervisors to
reassess their current approach to the measurement of
regulatory capital?
It could be argued that while such developments
are highly desirable in their own right, they have little
implication for supervisors whose role is to set minimum
supervisory and capital standards. Existing capital adequacy arrangements could be seen as satisfying this
role—maintaining the pressure on minimum capital levels
and generally ensuring better coordination of capital rules
internationally (one of the original aims). Provided that the
arrangements are not used by banks to influence lending or
portfolio decisions (which should always be the product of
more sophisticated methodologies than those imposed by
supervisors), then the implications of retaining the existing
arrangements should not be too significant.
There are a number of counterarguments, but the
key one relates to the issues of supervisory relevance and
financial market efficiency. While there is little reason for
bank supervisors to lead the market in the application of
new risk technologies for supervisory purposes (a strong
case can be made against taking that approach), there are
also problems in their falling behind market developments.
Effective supervision hinges, in large part, on supervisors
maintaining credibility and being able to demonstrate that
their policies have relevance to the world in which they
apply. That was the case in 1988, when the capital adequacy
arrangements were first introduced, and it can also be said


of the recent amendment to the Capital Accord covering
market risk. If the banking industry is developing better
methods for the measurement and management of risk,
and genuinely using those techniques in their risk
management activities, then it is reasonable to expect
that supervisors will assess those developments against
existing arrangements.
Competitive pressures in banking also need to be
borne in mind. There has been an increasing tendency in
the Australian market for nontraditional providers of
finance to enter and compete strongly in areas formerly
occupied mainly by banks. That trend, which is likely to
become stronger over time, should be encouraged in the
interests of greater competition. In many cases, however,
these new providers are not supervised as intermediaries,
nor should they be given the particular structures under
which some of them operate (through securitisation vehicles
and so on). One effect of the new competition in banking,
therefore, may be to increase the competitive disadvantages
associated with current forms of regulation, a point already
made strongly by some banks. Market efficiency considerations, therefore, come into the equation and further
strengthen the case to look at alternative regulatory options.

INTERNAL MODELS
The obvious option to consider is the use of internal credit
models for regulatory purposes. The issue is more complex,
however, than simply observing the increased use of such
models in the market and concluding that they should be
applied for supervisory purposes. Even if such an approach
were accepted as a good idea in principle, the real question
is how the arrangement could be made workable, be efficient from a market perspective, and satisfy prudential
objectives. There are some significant obstacles.
The fact that credit risk is the biggest risk factor
confronting most banks is a major issue and possibly a key
obstacle to the adoption of internal models. While market
risk has the potential to cause serious damage to some
banks, it is relatively insignificant for the bulk of Australian
banks. The risk of experimenting with alternative methodologies, therefore, is much less critical where market risk,
as opposed to credit risk, is concerned.


As discussed above, the practical matter of data
will always be a critical problem where credit risk modeling
is concerned. In Australia’s case, for example, there is no
long-term history of default and loss rates across different
categories or grades of counterparty; that observation
would hold true for many other countries as well. The data
that are available (mainly from the United States) show a
wide variation in risk across different gradings. For the
lower grades, risks also appear highly cyclical, skewed, and
“fat tailed.” This means that the determination or interpretation of average and worst-case loss, or volatility of loss, is
much more complex for credit risk than for traded market
risk, where price and volatility data, and hence losses or gains, can be estimated continuously. Yet getting the numbers right in relation to credit risk is critical. Migration from one credit grading to a lower grading can often involve exponential increases in default risk.
Capital adequacy arrangements built on inadequate or
incomplete data may, therefore, generate dangerously
inadequate results. The skeptic might conclude, on that
basis alone, that internal models offer little, but carry very
significant risks.
Yet as we look at the current arrangements, it is
hard to believe that they could be the appropriate regulatory model to take the rapidly evolving banking system
into the next century. The simplicity of the present framework, which at the beginning was one of its great virtues,
will, in a more complex financial system, become its greatest
failing. The financial world has become more complex and
the regulatory world must move in step. The issue, therefore, is not whether regulatory arrangements should be
modernised, but rather how modernisation can be achieved in a balanced way and in a reasonable time frame.
As the world of internal models is approached, it
will almost certainly be argued that problems of consistency will arise—how to ensure equal treatment across
different institutions. It should be recognised, however,
that simple approaches already suffer severely from this
problem. To use an admittedly overused example, the
current system can generate the same capital requirement
for a bank holding only blue-chip corporate exposures as
it can for another bank holding loans to risky small

businesses. That approach cannot be generating the right
messages either for the bank, the supervisor, or the market.
The true capital needs of an institution can be determined
only from the risk characteristics of its balance sheet and
its other exposures. That must be true not only for
internal management purposes but also for the purposes
of regulation.

WILL CAPITAL LEVELS FALL?
A common concern is that the use of credit risk models
will lead to lower overall capital levels in banks. That need
not occur. Under the market risk guidelines, for example,
the output of the banks’ models is multiplied by a factor to
produce the required degree of conservatism for regulatory
purposes. That approach, or some variant of it, could be
adopted in any future approach applied to credit models.
Alternatively, the use of capital estimates derived from
internal models, combined with a capital floor determined
by some simpler regulatory-based methodology, could also
be considered.
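The arithmetic of these two safeguards is simple; the sketch below combines an assumed supervisory multiplier with an assumed floor based on a simpler regulatory calculation. Both numbers are illustrative assumptions, not proposals.

```python
# Illustrative combination of a model multiplier and a regulatory floor.
# All numbers are assumed for illustration.
model_credit_risk_capital = 40.0   # capital implied by the bank's internal model, $m
multiplier = 1.5                   # supervisory scaling factor (cf. the market risk rules)
floor_ratio = 0.75                 # floor as a share of a simpler standardised charge
standardised_requirement = 80.0    # capital under the existing rule-of-thumb approach, $m

required_capital = max(multiplier * model_credit_risk_capital,
                       floor_ratio * standardised_requirement)
print(f"Regulatory capital requirement: ${required_capital:.1f}m")
```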
It is worth noting, in this context, that tentative
estimates of possible capital requirements flowing from the
use of credit models have been made by a number of
Australian banks. Using some quite conservative assumptions, the results point to credit risk capital requirements
of around half of those required under the existing arrangements. However, when estimates of possible operational
risks are taken into account (Australian banks are also
attempting to quantify this component of risk as part of
the risk mapping exercises being carried out within the
institutions), then the resulting overall capital figure
increases again to something not greatly different from the
present requirement. This might suggest the need for any
new capital adequacy arrangement to reach more broadly
than just credit risk, perhaps into the area of operational
risk. Possibly the time has come to develop an even
broader approach encapsulating all forms of measurable
risk. Although this would add greatly to the complexity
of the regulatory development task, it would be consistent with the trend observed in banks to look in an
integrated way at the broad range of risks being faced as a
result of their activities.

OTHER POSSIBILITIES
To the extent that the simplicity of the present capital
structure is seen as desirable, there may be merit in contemplating an extension to the risk grading system built
into the current arrangements. It is possible to envisage the
risk weighting scale extended from the current five grades
to a higher number (say, ten), thereby providing greater
demarcation between gradings. Movement in this direction
has already occurred to some extent through the introduction of concessional risk weightings under the market
risk (standard method) guidelines. This might deliver a
closer alignment of regulatory capital rules with more
“economic-based” measures of risk. While possible, this
approach would not align with broader portfolio modeling
approaches where the impact of a single counterparty on a
bank’s overall credit risk might differ depending on the
structure of the portfolio itself. Such portfolio-based
approaches would raise challenges for any supervisory
system that continued to measure credit risk on the basis of
fixed risk gradings. Perhaps more importantly, to the
extent that internal models are viewed as the appropriate
long-term approach to capital adequacy, it may be best to
avoid “band-aid” solutions that could divert attention from
the ultimate goal. The simplified approach may have some
relevance, however, for the less sophisticated of the banks
and those with simpler balance sheets. Whatever new
arrangements were introduced, there would still be a need
for a simpler alternative for the less advanced banks.
There may also be merit in exploring, for application in the area of credit risk, some of the ideas developed
over recent years by Kupiec and O’Brien in relation to
“precommitment.” The precommitment proposal is targeted
at the calculation of a capital charge for traded market risk.
A bank commits to a maximum loss over a fixed period and
allocates capital to cover the exposure. The bank is given
incentives to set realistic, and sufficiently conservative,
capital charges—incentives that take the form of penalties if
a bank’s losses exceed its committed capital. It would avoid
the need for supervisors to preordain a fixed methodology for
measuring risk—with the appropriateness of any bank estimates of risk and loss determined solely by results. In theory,
this broad approach could be applied to any form of risk.
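A stylised rendering of the precommitment incentive is sketched below, assuming a simple proportional penalty. Kupiec and O'Brien consider the design of such incentives in detail, and nothing here should be read as their specific proposal.

```python
# Stylised precommitment rule: the bank commits capital for the period and is
# penalised if realised losses exceed the commitment. The penalty rate is an
# assumption for illustration only.
def precommitment_penalty(committed_capital: float,
                          realised_loss: float,
                          penalty_rate: float = 0.3) -> float:
    """Return the penalty owed when losses exceed the precommitted capital."""
    shortfall = max(0.0, realised_loss - committed_capital)
    return penalty_rate * shortfall

print(precommitment_penalty(committed_capital=50.0, realised_loss=65.0))  # 4.5
```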


It is not at all clear how this approach could translate to the area of credit risk (the authors see its application
largely to the area of market risk). Whereas a bank could be
assessed on a precommitment model designed to cover
market risk at regular intervals (quarterly, for example),
that would not be possible where credit risk is involved
since the nature of credit cycles is such that true tests come
only infrequently (that is, over a full economic or banking
cycle). When problems do arise, they have the potential to
be serious events. There would have to be serious doubts
about the credibility of any approach that is based on the
application of sanctions where losses involved might be
very significant or even institution-threatening.
Nevertheless, the idea of a system based on the
concept of banks committing to a certain level of capital,
with supervisors avoiding the need to attempt a complex
standardisation of rules and parameters surrounding credit
models, is an attractive thought and worth exploring.

DISCLOSURE-BASED APPROACHES
Much of the discussion above assumes the ongoing presence of a capital-based regulatory regime. Another quite
different and more radical approach is also worthy of
mention. It would involve stepping away completely from
any formal determination of capital requirements and
insisting upon much greater disclosure by banks, allowing the market to determine the relative degrees of safety
attached to the different institutions. This thought process
lies behind the current regulatory regime in New Zealand
(though it should be noted that the supervisory authority
in that country has, in fact, retained much of the traditional supervisory and capital adequacy structure).
In a disclosure-based approach, banks would be
required to provide detailed information on their measurements of credit risk, the methodologies used to derive the
estimates, capital holdings, and any other data or information relevant to interested parties. To the extent that a
bank stepped out of line with established banking norms,
these external parties would go elsewhere or demand
changes within the institution, the result being that the
institution would either go broke or be forced to comply
with market expectations, whether they be in relation to


capital, risk levels, liquidity arrangements, management
structure, or something else.
There is a very strong case to be made favouring
greater market discipline on the banking sector, and supervisors, internationally, have been at the forefront of the
debate on disclosure. The issue, however, is not about the
merits of improved banking disclosure as such (about
which there is little debate), but the extent to which disclosure could form a realistic alternative to the more traditional capital-based approach.
Ultimately, it is a philosophical judgment as to
whether market-based approaches might work or whether
the health and safety of the banking sector are considered
too important to leave entirely to the market. The latter is
the mainstream view and one that is likely to be maintained. Acceptance of this position in no way reduces the
importance of improved disclosure of financial information
by institutions.

V. ASSESSMENT AND CONCLUSIONS
There is no definitive answer to the question of how capital
adequacy arrangements, or indeed supervisory arrangements more broadly defined, should evolve in the future.
The emphasis on risk-based capital adequacy as the basis
for supervision in the industrialised world is now firmly
established and seems unlikely to change in the foreseeable future.
The option of leaving the current arrangement in
place in its present form (or with some minor modifications) may be realistic as far as most banks are concerned.
However, the activities of the leading banks are pushing
regulatory arrangements in the direction of greater sophistication of credit risk measurement (just as they did in the
case of market risk measurement). Credit modeling is still
in an early phase of development in the Australian market
and it would be unrealistic to believe that a regime based on
that approach is viable in the short term. However, developments are occurring quickly and credit modeling will
become much more significant for banks in the medium
term. Very importantly, growing competition in the provision of financial services may be increasing the competitive
disadvantages associated with existing arrangements.

As supervisors of the Australian banking system,
we are keen to see the supervisory structure evolve with
the market. Without trying to downplay the complexities
that will be involved, we believe there is a strong case to
commit to the development of an approach to capital
adequacy that utilises better measures of credit risk and
portfolio modeling techniques. Over the longer term,
more integrated approaches to risk measurement (for
example, embodying credit, market, and operational risk)
may need to be the goal. Looking specifically at credit
risk modeling, there is reason to believe that a relatively
large number of Australian banks would in time see
themselves as potential model users. The work currently
being done by the major banks in Australia provides
grounds for a belief that internally based models could be
a feasible option for that group. Even for the smaller
regional banks, with a high proportion of residential

housing on their balance sheets, it would be a relatively
simple task to model credit risk, given the stability of
residential housing default and loss rates in Australia over
a long period (this is one of the few reliable long-term
statistics available in this market).
How soon might all this occur? Realistically, it
may be some years before credit risk modeling becomes
feasible in the Australian system, and longer than that for
more sophisticated approaches that attempt to integrate
different forms of risk within a single framework. However, developments are occurring rapidly in the banking
system and it is also the case that supervisory arrangements
have evolved in recent years and supervisors (as a group) are
now technically better equipped to deal with complex
issues, such as credit modeling, than they were a decade
ago. Together, these factors may bring the respective time
frames forward.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


REFERENCES

Allen, W. 1997. "Alternative Approaches to the Diversification of Portfolios." In B. Gray and C. Cassidy, eds., CREDIT RISK IN BANKING, 166-88. Sydney: Reserve Bank of Australia.

Conroy, F. 1997. "Managing Credit Risk—An Overview: Discussion." In B. Gray and C. Cassidy, eds., CREDIT RISK IN BANKING, 21-3. Sydney: Reserve Bank of Australia.

Edey, M., and B. Gray. 1996. "The Evolving Structure of the Australian Financial System." In M. Edey, ed., THE FUTURE OF THE FINANCIAL SYSTEM, 6-44. Sydney: Reserve Bank of Australia.

Grenville, S. 1991. "The Evolution of Financial Deregulation." In I. Macfarlane, ed., THE DEREGULATION OF FINANCIAL INTERMEDIARIES, 3-35. Sydney: Reserve Bank of Australia.

Kupiec, P. H., and J. M. O'Brien. 1995. "A Pre-Commitment Approach to Capital Requirements for Market Risk." Board of Governors of the Federal Reserve System Finance and Economics Discussion Series, no. 95-36.

Nimmo, R. 1997. "Round Table Discussion." In B. Gray and C. Cassidy, eds., CREDIT RISK IN BANKING, 254-6. Sydney: Reserve Bank of Australia.

Smout, C. 1997. "The Bank of England: What's on the Agenda?" Paper presented at IBC Conference "Responding to and Preparing for the Implementation of Basle/CAD 2 and the Changing Regulatory Framework," London, April 8-9.

Thompson, G. J. 1991. "Prudential Lessons." In I. Macfarlane, ed., THE DEREGULATION OF FINANCIAL INTERMEDIARIES, 115-42. Sydney: Reserve Bank of Australia.

Wallis, S. 1997. "Financial System Inquiry Final Report," March.


Portfolio Credit Risk
Thomas C. Wilson

INTRODUCTION AND SUMMARY
Financial institutions are increasingly measuring and managing the risk from credit exposures at the portfolio level,
in addition to the transaction level. This change in perspective has occurred for a number of reasons. First is the
recognition that the traditional binary classification of
credits into “good” credits and “bad” credits is not sufficient—a precondition for managing credit risk at the portfolio level is the recognition that all credits can potentially
become “bad” over time given a particular economic scenario. The second reason is the declining profitability of
traditional credit products, implying little room for error
in terms of the selection and pricing of individual transactions, or for portfolio decisions, where diversification and
timing effects increasingly mean the difference between
profit and loss. Finally, management has more opportunities to manage exposure proactively after it has been originated, with the increased liquidity in the secondary loan
market, the increased importance of syndicated lending,
the availability of credit derivatives and third-party guarantees, and so on.

Thomas C. Wilson is a principal of McKinsey and Company.

In order to take advantage of credit portfolio
management opportunities, however, management must
first answer several technical questions: What is the risk
of a given portfolio? How do different macroeconomic
scenarios, at both the regional and the industry sector
level, affect the portfolio’s risk profile? What is the effect of
changing the portfolio mix? How might risk-based pricing
at the individual contract and the portfolio level be influenced by the level of expected losses and credit risk capital?
This paper describes a new and intuitive method
for answering these technical questions by tabulating the
exact loss distribution arising from correlated credit events
for any arbitrary portfolio of counterparty exposures, down
to the individual contract level, with the losses measured
on a marked-to-market basis that explicitly recognises the
potential impact of defaults and credit migrations.1 The
importance of tabulating the exact loss distribution is
highlighted by the fact that counterparty defaults and rating migrations cannot be predicted with perfect foresight
and are not perfectly correlated, implying that management faces a distribution of potential losses rather than a
single potential loss. In order to define credit risk more
precisely in the context of loss distributions, the financial
industry is converging on risk measures that summarise
management-relevant aspects of the entire loss distribution. Two distributional statistics are becoming increasingly relevant for measuring credit risk: expected losses
and a critical value of the loss distribution, often defined as
the portfolio’s credit risk capital (CRC). Each of these
serves a distinct and useful role in supporting management
decision making and control (Exhibit 1).
Expected losses, illustrated as the mean of the distribution, often serve as the basis for management’s reserve
policies: the higher the expected losses, the higher the
reserves required. As such, expected losses are also an
important component in determining whether the pricing
of the credit-risky position is adequate: normally, each
transaction should be priced with sufficient margin to
cover its contribution to the portfolio’s expected credit
losses, as well as other operating expenses.
Credit risk capital, defined as the maximum loss
within a known confidence interval (for example, 99 percent)
over an orderly liquidation period, is often interpreted as
the additional economic capital that must be held against a
given portfolio, above and beyond the level of credit
reserves, in order to cover its unexpected credit losses.
Since it would be uneconomic to hold capital against all
potential losses (this would imply that equity is held
against 100 percent of all credit exposures), some level of

capital must be chosen to support the portfolio of transactions in most, but not all, cases. As with expected losses, CRC also plays an important role in determining whether the credit risk of a particular transaction is appropriately priced: typically, each transaction should be priced with sufficient margin to cover not only its expected losses, but also the cost of its marginal risk capital contribution.

[Exhibit 1: Loss Distribution. $100 portfolio, 250 equal and independent credits with default probability equal to 1 percent. Loss probability density with the 1 percent tail marked: expected losses = -1.0 (expected losses = reserves), standard deviation = 0.63, credit risk capital = -1.8 (maximum loss = credit risk capital).]
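These two statistics can be read directly off a tabulated loss distribution. The sketch below assumes the stylised portfolio of Exhibit 1 ($100 of exposure spread over 250 equal, independent credits, each with a 1 percent default probability and zero recovery) and takes credit risk capital as the 99 percent loss quantile in excess of expected losses, which is one reading of the exhibit's figures.

```python
# Exact (binomial) loss distribution for 250 equal, independent credits with a
# 1 percent default probability and zero recovery; expected loss and credit
# risk capital (99 percent loss quantile less expected loss) follow directly.
from math import comb

n_credits, pd_, exposure_each = 250, 0.01, 100.0 / 250

pmf = [comb(n_credits, k) * pd_**k * (1 - pd_)**(n_credits - k)
       for k in range(n_credits + 1)]
losses = [k * exposure_each for k in range(n_credits + 1)]

expected_loss = sum(p * l for p, l in zip(pmf, losses))

# 99 percent quantile of the loss distribution.
cumulative, q99 = 0.0, losses[-1]
for p, l in zip(pmf, losses):
    cumulative += p
    if cumulative >= 0.99:
        q99 = l
        break

credit_risk_capital = q99 - expected_loss
print(f"Expected losses (reserves): {expected_loss:.2f}")        # about 1.0
print(f"Credit risk capital (99%):  {credit_risk_capital:.2f}")  # about 1.8
```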
In order to tabulate these loss distributions, most
industry professionals split the challenge of credit risk
measurement into two questions: First, what is the joint
probability of a credit event occurring? And second, what
would be the loss should such an event occur?
In terms of the latter question, measuring potential losses given a credit event is a straightforward exercise
for many standard commercial banking products. The
exposure of a $100 million unsecured loan, for example, is
roughly $100 million, subject to any recoveries. For derivatives
portfolios or committed but unutilised lines of credit, however, answering this question is more difficult. In this
paper, we focus on the former question, that is, how to model
the joint probability of defaults across a portfolio. Those
interested in the complexities of exposure measurement for
derivative and commercial banking products are referred to
J.P. Morgan (1997), Lawrence (1995), and Rowe (1995).
The approach developed here for measuring
expected and unexpected losses differs from other
approaches in several important respects. First, it models the actual, discrete loss distribution, depending on
the number and size of credits, as opposed to using a
normal distribution or mean-variance approximations.
This is important because with one large exposure the
portfolio’s loss distribution is discrete and bimodal, as
opposed to continuous and unimodal; it is highly
skewed, as opposed to symmetric; and finally, its shape
changes dramatically as other positions are added.
Because of this, the typical measure of unexpected losses, the standard deviation, is like a “rubber ruler”: it can
be used to give a sense of the uncertainty of loss, but its
actual interpretation in terms of dollars at risk depends
on the degree to which the ruler has been “stretched” by
diversification or large exposure effects. In contrast, the
model developed here explicitly tabulates the actual,

discrete loss distribution for any given portfolio, thus
also allowing explicit and accurate tabulation of a “large
exposure premium” in terms of the risk-adjusted capital
needed to support less-diversified portfolios.
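To illustrate the point, the sketch below convolutes the two-point loss distributions of a handful of independent positions, one of them very large, and contrasts the exact 99 percent loss quantile with the figure a normal (mean and standard deviation) approximation would give. The exposures and default probabilities are invented, and defaults are treated as independent here, so the example isolates the large-exposure effect only.

```python
# Exact loss distribution of a small portfolio by convolution versus a normal
# approximation. Exposures and default probabilities are invented; defaults
# are independent in this sketch (no systematic component).
from collections import defaultdict
from statistics import NormalDist

positions = [(80.0, 0.02)] + [(5.0, 0.02)] * 16   # one large and 16 small exposures

# Convolute position-level two-point loss distributions into a portfolio distribution.
dist = {0.0: 1.0}
for exposure, pd_ in positions:
    new_dist = defaultdict(float)
    for loss, prob in dist.items():
        new_dist[loss] += prob * (1 - pd_)          # no default
        new_dist[loss + exposure] += prob * pd_     # default, zero recovery
    dist = dict(new_dist)

mean = sum(l * p for l, p in dist.items())
var = sum((l - mean) ** 2 * p for l, p in dist.items())

# Exact 99 percent quantile of the discrete distribution.
cumulative, q99_exact = 0.0, max(dist)
for loss in sorted(dist):
    cumulative += dist[loss]
    if cumulative >= 0.99:
        q99_exact = loss
        break

q99_normal = NormalDist(mean, var ** 0.5).inv_cdf(0.99)
print(f"Exact 99% loss quantile: {q99_exact:.1f}")
print(f"Normal approximation:    {q99_normal:.1f}")
```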
Second, the losses (or gains) are measured on a
default/no-default basis for credit exposures that cannot be
liquidated (for example, most loans or over-the-counter
trading exposure lines) as well as on a theoretical markedto-market basis for those that can be liquidated prior to the
maximum maturity of the exposure. In addition, the distribution of average write-offs for retail portfolios is also
modeled. This implies that the approach can integrate the
credit risk arising from liquid secondary market positions
and illiquid commercial positions, as well as retail portfolios
such as mortgages and overdrafts. Since most banks are
active in all three of these asset classes, this integration is an
important first step in determining the institution’s overall
capital adequacy.
Third, and most importantly, the tabulated loss
distributions are driven by the state of the economy, rather
than based on unconditional or twenty-year averages that
do not reflect the portfolio’s true current risk. This allows
the model to capture the cyclical default effects that determine the lion’s share of the risk for diversified portfolios.
Our research shows that the bulk of the systematic or nondiversifiable risk of any portfolio can be “explained” by the
economic cycle. Leveraging this fact is not only intuitive,
but it also leads to powerful management insights on the
true risk of a portfolio.
Finally, specific country and industry influences
are explicitly recognised using empirical relationships,
which enable the model to mimic the actual default correlations between industries and regions at the transaction
and the portfolio level. Other models, including many
developed in-house, rely on a single systematic risk factor
to capture default correlations; our approach is based on a
true multi-factor systematic risk model, which reflects
reality better.
The model itself, described in greater detail in
McKinsey (1998) and Wilson (1997a, 1997b), consists of
two important components, each of which is discussed in
greater detail below. The first is a multi-factor model of sys-

tematic default risk. This model is used to simulate jointly
the conditional, correlated, average default, and credit
migration probabilities for each individual country/industry/rating segment. These average segment default probabilities are made conditional on the current state of the
economy and incorporate industry sensitivities (for example,
“high-beta” industries such as construction react more to
cyclical changes) based on aggregate historical relationships.
The second is a method for tabulating the discrete loss distribution for any portfolio of credit exposures—liquid and
nonliquid, constant and nonconstant, diversified and nondiversified. This is achieved by convoluting the conditional,
marginal loss distributions of the individual positions to
develop the aggregate loss distribution, with default correlations between different counterparties determined by the
systematic risk driving the correlated average default rates.

SYSTEMATIC RISK MODEL
In developing this model for systematic or nondiversifiable
credit risk, we leveraged five intuitive observations that
credit professionals very often take for granted.
First, that diversification helps to reduce loss uncertainty, all else being equal. Second, that substantial systematic
or nondiversifiable risk nonetheless remains for even the most
diversified portfolios. This second observation is illustrated by
the “Actual” line plotted in Exhibit 2, which represents the
average default rate for all German corporations over the
1960-94 period; the variation or volatility of this series can be interpreted as the systematic or nondiversifiable risk of the “German” economy, arguably a very diversified portfolio. Third, that this systematic portfolio risk is driven largely by the “health” of the macroeconomy—in recessions, one expects defaults to increase.

[Exhibit 2: Actual versus Predicted Default Rates, Germany. Average default rates, actual and predicted, plotted annually over 1960-92.]
The relationship between changes in average
default rates and the state of the macroeconomy is also
illustrated in Exhibit 2, which plots the actual default
rate for the German economy against the predicted
default rate, with the prediction equation based solely
upon macroeconomic aggregates such as GDP growth
and unemployment rates. As the exhibit shows, the
macroeconomic factors explain much of the overall variation in the average default rate series, reflected in the
regression equation's R2 of more than 90 percent for
most of the countries investigated (for example, Germany, the United States, the United Kingdom, Japan,
Switzerland, Spain, Sweden, Belgium, and France). The
fourth observation is that different sectors of the economy react differently to macroeconomic shocks, albeit
with different economic drivers: U.S. corporate insolvency rates are heavily influenced by interest rates, the
Swedish paper and pulp industry by the real terms of
trade, and retail mortgages by house prices and regional
economic indicators. While all of these examples are
intuitive, it is sometimes surprising how strong our
intuition is when put to statistical tests. For example,
the intuitive expectation that the construction sector
would be more adversely affected during a recession
than most other sectors is supported by the data for all
of the different countries analysed.
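The following sketch indicates, in schematic form, the kind of regression relationship described here: an average default-rate series related to macroeconomic aggregates through a logistic transform. The data are synthetic and the single-equation form is only illustrative; the full model is considerably richer.

```python
# Schematic regression of a synthetic sector default-rate series on
# macroeconomic aggregates. A logistic transform keeps fitted default rates
# in (0, 1); the coefficients and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_years = 35
gdp_growth = rng.normal(0.02, 0.02, n_years)        # synthetic GDP growth
unemployment = rng.normal(0.07, 0.015, n_years)     # synthetic unemployment rate

# Synthetic "true" relationship: defaults rise when growth falls and unemployment rises.
logit_true = -5.0 - 25.0 * gdp_growth + 20.0 * unemployment
default_rate = 1 / (1 + np.exp(-(logit_true + rng.normal(0, 0.1, n_years))))

# Estimate the relationship by least squares on the logit of the default rate.
y = np.log(default_rate / (1 - default_rate))
X = np.column_stack([np.ones(n_years), gdp_growth, unemployment])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

r_squared = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"Estimated coefficients: {beta.round(2)}")
print(f"R-squared of the logit regression: {r_squared:.2f}")
```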
Exhibit 3 illustrates the need for a multi-factor
model, as opposed to a single-factor model, for systematic
risk. Performing a principal-components analysis of the
country average default rates, a good surrogate for systematic risk by country, it emerges that the first “factor”
captures only 77.5 percent of the total variation in systematic default rates for Moody’s and the U.S., U.K.,
Japanese, and German markets. This corresponds to the
amount of systematic risk “captured” by most single-factor models; the rest of the variation is implicitly

[Exhibit 3: Total Systematic Risk Explained. Share of the variation in systematic default rates captured by factors 1, 2, and 3 (and the remainder) from a principal-components analysis, shown in total and for Moody's and the U.S., Japanese, U.K., and German markets. Note: The factor 2 band for Japan is 79.7; the factor 3 band for the United Kingdom is 82.1.]

assumed to be independent and uncorrelated. Unfortunately, the first factor explains only 23.9 percent of the
U.S. systematic risk index, 56.2 percent for the United
Kingdom, and 66.8 percent for Germany. The exhibit
demonstrates that the substantial correlation remaining
is explained by the second and third factors, explaining
an additional 10.2 percent and 6.8 percent, respectively,
of the total variation and the bulk of the risk for the
United States, the United Kingdom, and Germany. This
demonstrates that a single-factor systematic risk model
like one based on asset betas or aggregate Moody’s/Standard and Poor’s data alone is not sufficient to capture all
correlations accurately. The final observation is also
both intuitive and empirically verifiable: that rating
migrations are also linked to the macroeconomy—not
only is default more likely during a recession, but credit
downgrades are also more likely.
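The principal-components exercise itself can be illustrated with synthetic data: the sketch below decomposes the correlation matrix of several artificial country default-rate indices and reports the share of variation captured by each factor. Because the data are invented, the shares will not match the figures quoted above.

```python
# Principal-components decomposition of synthetic country default-rate indices,
# illustrating how much systematic variation a first factor captures.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_countries = 35, 5
global_factor = rng.normal(0, 1, n_years)

# Each country's default index loads partly on a common factor, partly on its own shocks.
loadings = np.array([0.9, 0.7, 0.5, 0.4, 0.3])
idiosyncratic = rng.normal(0, 1, (n_years, n_countries))
default_indices = global_factor[:, None] * loadings + 0.8 * idiosyncratic

# Eigen-decomposition of the correlation matrix gives the variance share per factor.
corr = np.corrcoef(default_indices, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
variance_shares = eigenvalues / eigenvalues.sum()
print("Share of total variation by factor:", variance_shares.round(3))
```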
When we formulate each of these intuitive observations into a rigorous statistical model that we can estimate, the
net result is a multi-factor statistical model for systematic
credit risk that we can then simulate for every country/industry/rating segment in our sample. This is demonstrated in
Exhibit 4, where we plot the simulated cumulative default
rates for a German, single-A-rated, five-year exposure based on
current economic conditions in Germany.

[Exhibit 4: Simulated Default Probabilities. Germany, single-A-rated, five-year cumulative default probability; the simulated distribution is plotted against a normal distribution.]

LOSS TABULATION METHODS
While these distributions of correlated, average default probabilities by country, sector, rating, and maturity are interesting, we still need a method of explicitly tabulating the loss distribution for any arbitrary portfolio of credit risk exposures. So we now turn to developing an efficient method for tabulating the loss distribution for any arbitrary portfolio, capable of handling portfolios with large, undiversified positions and/or diversified portfolios; portfolios with nonconstant exposures, such as those found in derivatives trading books, and/or constant exposures, such as those found in commercial lending books; and portfolios comprising liquid, credit-risky positions, such as secondary market debt or loans, and/or illiquid exposures that must be held to maturity, such as some commercial loans or trading lines. Below, we demonstrate how to tabulate the loss distributions for the simplest case (for example, constant exposures, nondiscounted losses) and then build upon the simplest case to handle more complex cases (for example, nonconstant exposures, discounted losses, liquid positions, and retail portfolios). Exhibit 5 provides an abstract timeline for tabulating the overall portfolio loss distribution. The first two steps relate to the systematic risk model and the third represents loss tabulations.

[Exhibit 5: Model Structure. An abstract timeline over periods t-1, t, t+1. In each period: 1. determine the state of the world (distribution of states ranging from economic recession to economic expansion); 2. determine each segment's probability of default from the estimated equations; 3. determine the loss distributions (loss probability density across companies and segments).]

Time is divided into discrete periods, indexed by t. During each period, a sequence of three steps occurs: first, the state of the economy is determined by simulation; second, the conditional migration and cumulative default probabilities for each country/industry segment

are determined based on the equations estimated earlier;
and, finally, the actual defaults for the portfolio are determined by sampling from the relevant distribution of segment-specific simulated default rates. Exhibit 6 gives
figures for the highly stylised single-period, two-segment
numerical example described below.
1. Determine the state: For any given period, the first
step is to determine the state of the world, that is, the health
of the macroeconomy. In this simple example, three possible
states of the economy can occur: an economic “expansion”
(with GDP growth of +1 percent), an “average” year (with
GDP growth of 0 percent), and an economic “recession”
(with GDP growth of -1 percent). Each of these states can
occur with equal probability (33.33 percent) in this numerical sample.
2. Determine segment probability of default: The second step is to then translate the state of the world into conditional probabilities of default for each customer segment
based on the estimated relationships described earlier. In
this example, there are two counterparty segments, a “low-beta” segment, whose probability of default reacts less
strongly to macroeconomic fluctuations (with a range of
2.50 percent to 4.71 percent), and a “high-beta” segment,
which reacts quite strongly to macroeconomic fluctuations
(with a range of 0.75 percent to 5.25 percent).
3. Determine loss distributions: We now tabulate the
(nondiscounted) loss distribution for portfolios that are
constant over their life, cannot be liquidated, and have
known recovery rates, including both diversified and nondiversified positions. Later, we relax each of these assumptions within the framework of this model in order to estimate more accurately the expected losses and risk capital from credit events.

Exhibit 6

NUMERICAL EXAMPLE

1. Determine state
State        GDP    Probability (Percent)
Expansion    +1     33.33
Average       0     33.33
Recession    -1     33.33

2. Determine segment probability of default
State        Low-Beta Probability of    High-Beta Probability of
             Default A (Percent)        Default B (Percent)
Expansion    2.50                       0.75
Average      2.97                       3.45
Recession    4.71                       5.25

3. Determine loss distributions
The conditional loss distribution in the simple
two-counterparty, three-state numerical example is tabulated by recognising that there are three independent
“draws,” or states of the economy and that, conditional on
each of these states, there are only four possible default scenarios: A defaults, B defaults, A+B defaults, or no one
defaults (Exhibit 7).
The conditional probability of each of these loss
events for each state of the economy is calculated by convoluting each position’s individual loss distribution for each
state. Thus, the conditional probability of a $200 loss in
the expansion state is 0.01 percent, whereas the unconditional probability of achieving the same loss given the
entire distribution of future economic states (expansion,
average, recession) is 0.1 percent after rounding errors. For
this example, the expected portfolio loss is $6.50 and the
credit risk capital is $100, since this is the maximum
potential loss within a 99 percent confidence interval
across all possible future states of the economy.
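The example can be reproduced in a few lines. The sketch below takes the state probabilities and conditional default probabilities from Exhibit 6, convolutes the two $100 exposures state by state (assuming zero recovery), and reads off the expected loss and the 99 percent loss quantile; rounding aside, it returns the figures quoted above.

```python
# Reproduces the stylised two-counterparty, three-state example: convolute the
# conditional loss distributions per state, mix over states, then compute the
# expected loss and the 99 percent loss quantile (credit risk capital, per the
# text's definition of the maximum loss within a 99 percent confidence interval).
from collections import defaultdict

states = {                      # state: (probability, PD of A, PD of B), from Exhibit 6
    "expansion": (1 / 3, 0.0250, 0.0075),
    "average":   (1 / 3, 0.0297, 0.0345),
    "recession": (1 / 3, 0.0471, 0.0525),
}
exposure = 100.0                # each counterparty, zero recovery assumed

loss_dist = defaultdict(float)
for prob_state, pd_a, pd_b in states.values():
    # Conditional on the state, A and B default independently.
    for a_def in (0, 1):
        for b_def in (0, 1):
            loss = exposure * (a_def + b_def)
            prob = (pd_a if a_def else 1 - pd_a) * (pd_b if b_def else 1 - pd_b)
            loss_dist[loss] += prob_state * prob

expected_loss = sum(l * p for l, p in loss_dist.items())

cumulative, crc = 0.0, max(loss_dist)
for loss in sorted(loss_dist):
    cumulative += loss_dist[loss]
    if cumulative >= 0.99:
        crc = loss
        break

print(f"Expected loss: ${expected_loss:.2f}")       # about $6.50
print(f"Credit risk capital (99%): ${crc:.0f}")     # $100
```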
Our calculation method is based on the assumption that all default correlations are caused by the correlated segment-specific default indices. That is, no further
information beyond country, industry, rating, and the state
of the economy is useful in terms of predicting the default
correlation between any two counterparties. To underscore
this point, suppose that management is confronted with
two single-A-rated counterparties in the German construction industry with the prospect of either a recession or an
economic expansion in the near future. Using the traditional approach, which ignores the impact of the economy
in determining default probabilities, we would conclude
that the counterparty default rates were correlated. Using
our approach, we observe that, in a recession, the probability of default for both counterparties is significantly higher
than during an expansion and that their joint conditional
probability of default is therefore also higher, leading to
correlated defaults. However, because we assume that all
idiosyncratic or nonsystematic risks can be diversified

away, no other information beyond the counterparties’
country, industry, and rating (for example, the counterparties’ segmentation criteria) is useful in determining their
joint default correlation. This assumption is made implicitly by other models, but ours extends the standard single-factor approach to a multi-factor approach that better captures country- and industry-specific shocks.
Intuitively, we should be able to diversify away all
idiosyncratic risk, leaving only systematic, nondiversifiable
risk. More succinctly, as we diversify our holdings within a
particular segment, that segment’s loss distribution will converge to the loss distribution implied by the segment index.
This logic is consistent with other single- or multi-factor
models in finance, such as the capital asset pricing model.
Our multi-factor model for systematic default
risks is qualitatively similar, except that there is no single
risk factor. Rather, there are multiple factors that fully
describe the complex correlation structure between countries, industries, and ratings. In our simple numerical
example, for a well-diversified portfolio consisting of a
large number of counterparties in each segment (the NA &
NB = Infinity case), all idiosyncratic risk per segment is

diversified away, leaving only the systematic risk per segment (Exhibit 8).
In other words, because of the law of large numbers, the actual loss distribution for the portfolio will converge to the expected loss for each state of the world,
implying that the unconditional loss distribution has only
three possible outcomes, representing each of the three
states of the world, each occurring with equal probability
and with a loss per segment consistent with the conditional
probability of loss for that segment given that state of the
economy. While the expected losses from the portfolio
would remain constant, this remaining systematic risk would
generate a CRC value of only $9.96 for the $200 million
exposure in this simple example, demonstrating both the
benefit to be derived from portfolio diversification and the
fact that systematic risk cannot be diversified away.
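A short sketch of this limiting argument, using the same state probabilities and segment default rates as in Exhibit 8 and assuming each segment's exposure totals 100, shows how the conditional segment losses collapse to their expected values; only the three systematic outcomes remain.

```python
import numpy as np

# In the fully diversified limit, the loss in each segment, conditional on the
# state, converges to its expected value: exposure x conditional default rate.
state_prob = np.array([1/3, 1/3, 1/3])
pd_A = np.array([0.0250, 0.0297, 0.0471])
pd_B = np.array([0.0075, 0.0345, 0.0525])
exposure_per_segment = 100.0

loss_per_state = exposure_per_segment * (pd_A + pd_B)   # [3.25, 6.42, 9.96]
expected_loss = (state_prob * loss_per_state).sum()     # about 6.5, unchanged
crc_99 = loss_per_state.max()                           # 9.96: only systematic risk remains
print(loss_per_state, expected_loss, crc_99)
```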
In the second case (labeled NA = 1 & NB = Infinity), all of the idiosyncratic risk is diversified away within
segment B, leaving only the systematic risk component for
segment B. The segment A position, however, still contains idiosyncratic risk, since it comprises only a single risk
position. Thus, for each state of the economy, two outcomes

Exhibit 7
NUMERICAL EXAMPLE: TWO EXPOSURES
1. Determine state
2. Determine segment probability of default
3. Determine loss distributions

Expansion (correlation (A,B) = 0 percent)
   A      B     A+B    Probability of Default (Percent)
-100   -100    -200     0.01
-100      0    -100     0.83
   0   -100    -100     0.24
   0      0       0    32.36

Average (correlation (A,B) = 0 percent)
   A      B     A+B    Probability of Default (Percent)
-100   -100    -200     0.03
-100      0    -100     0.96
   0   -100    -100     1.12
   0      0       0    31.23

Recession (correlation (A,B) = 0 percent)
   A      B     A+B    Probability of Default (Percent)
-100   -100    -200     0.08
-100      0    -100     1.49
   0   -100    -100     1.67
   0      0       0    30.10

Unconditional loss distribution (conditional correlation (A,B) = 1 percent): probability of loss event 93.4 percent at 0, 6.5 percent at -100, and 0.1 percent at -200; Credit RAC = 100.

are possible: either the counterparty in segment A goes bankrupt or it does not; the unconditional probability that counterparty A will default in the economic expansion state is 0.83 percent (33.33 percent probability that the expansion state occurs multiplied by a 2.5 percent probability of default for a segment A counterparty given that state). Regardless of whether or not counterparty A goes into default, the segment B position losses will be known with certainty, given the state of the economy, since all idiosyncratic risk within that segment has been diversified away.

Exhibit 8
NUMERICAL EXAMPLE: DIVERSIFIED EXPOSURES
1. Determine state
2. Determine segment probability of default
3. Determine loss distributions

NA & NB = Infinity
             Loss                             Probability of
             A        B        A+B            Default (Percent)
Expansion    -2.50    -0.75    -3.25          33.33
Average      -2.97    -3.45    -6.42          33.33
Recession    -4.71    -5.25    -9.96          33.33
Unconditional correlation (A,B) = 91.00 percent; Credit RAC = 9.96

NA = 1 & NB = Infinity
             Loss                             Probability of
             A        B        A+B            Default (Percent)
Expansion    -100     -0.75    -100.75         0.83
                0     -0.75      -0.75        32.50
Average      -100     -3.45    -103.45         0.99
                0     -3.45      -3.45        32.30
Recession    -100     -5.25    -105.25         1.57
                0     -5.25      -5.25        31.80
Credit RAC = 105.25

To illustrate the results using our simulation model, suppose that we had equal $100, ten-year exposures to single-A-rated counterparties in each of five country segments—Germany, France, Spain, the United States, and the United Kingdom—at the beginning of 1996. The aggregate simulated loss distribution for this portfolio of diversified country positions, conditional on the then-current macroeconomic scenarios for the different countries at the end of 1995, is given in the left panel of Exhibit 9.

The impact of introducing one large, undiversified exposure into the same portfolio is illustrated in the right panel of Exhibit 9. Here, we take the same five-country portfolio of diversified index positions used in the left panel, but add a single, large, undiversified position to the “other” country’s position.

The impact of this new, large concentration risk is clear. The loss distribution becomes “bimodal,” reflecting the fact that, for each state of the world, two events might occur: either the large counterparty will go bankrupt, generating a “cloud” of portfolio loss events centered around -140, or the

undiversified position will not go bankrupt, generating a similar cloud of loss events centered around -40, but with higher
probability. This risk concentration disproportionately
increases the amount of risk capital needed to support the
portfolio from $61.6 to $140.2, thereby demonstrating the
large-exposure risk capital premium needed to support the
addition of large, undiversified exposures.
The calculations above illustrate how to tabulate
the (nondiscounted) loss distributions for nonliquid portfolios with constant exposures. While useful in many
instances, these portfolio characteristics differ from reality in
two important ways. First, the potential exposure profiles
generated by trading products are typically not constant (as
pointed out by Lawrence [1995] and Rowe [1995]). Second,
the calculations ignore the time value of money, even though a potential loss in the future is “less painful” in terms of today’s value than a loss today.
In reality, the amount of potential economic loss in
the event of default varies over time, due to discounting,
or nonconstant exposures, or both. This can be seen in
Exhibit 10. If the counterparty were to go into default
sometime during the second year, the present value of the
portfolio’s loss would be $50 in the case of nonconstant exposures and $100·e^(−r_2·2) in the case of discounted exposures, as opposed to $100 and $100·e^(−r_1·1) if the counterparty had gone into default sometime during the first year.
Unlike the case of constant, nondiscounted exposures, where
the timing of the default is inconsequential, nonconstant
exposures or discounting of the losses implies that the timing
of the default is critical for tabulating the economic loss.

Exhibit 9
EXAMPLES OF PORTFOLIO LOSS DISTRIBUTIONS
[Two simulated portfolio loss distributions (probability plotted against loss). Left panel, diversified portfolio: E_Loss = 37.545, CRAC = 24.027, Total = 61.572, with losses ranging from roughly -80 to 0. Right panel, nondiversified portfolio: E_Loss = 41.284, CRAC = 98.91, Total = 140.193, with a bimodal distribution of losses ranging from roughly -200 to 0.]
Note: Business unit, book, country, rating, maturity, exposure.

Addressing both of these issues requires us to work
with marginal, as opposed to cumulative, default probabilities.
Whereas the cumulative default probability is the aggregate
probability of observing a default in any of the previous
years, the marginal default probability is the probability of
observing a loss in each specific year, given that the default
has not already occurred in a previous period.
Exhibit 11 illustrates the impact of nonconstant
loss exposures in terms of tabulating loss distributions.
With constant, nondiscounted exposures, the loss distribution for a single exposure is bimodal. Either it goes into
default at some time during its maturity, with a cumulative default probability covering the entire three-year
period equal to p1 + p2 + p3 in the exhibit, implying a loss of 100, or it does not. If the exposure is nonconstant, however, you stand to lose a different amount depending upon the exact timing of the default event. In the above example, you would lose 100 with probability p1, the marginal probability that the counterparty goes into default during the first year; 50 with probability p2, the marginal probability that the counterparty goes into default during the second year; and so on.
So far, we have been simulating only the cumulative default probabilities. Tabulating the marginal
default probabilities from the cumulative is a straightforward exercise. Once this has been done, the portfolio
loss distribution can be tabulated by convoluting the
individual loss distributions, as described earlier. The
primary difference between our model and other models
is that we explicitly recognise that loss distributions for
nonconstant exposure profiles are not binomial but multinomial, recognising the fact that the timing of default
is also important in terms of tabulating the position’s
marginal loss distribution.
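A minimal sketch of this step, assuming that the marginal probability for year t is simply the increment of the cumulative probability (consistent with the statement above that p1 + p2 + p3 equals the three-year cumulative probability); the cumulative default probabilities are hypothetical, and the exposure profile follows Exhibit 11.

```python
import numpy as np

def marginal_from_cumulative(cum_pd):
    """Convert cumulative default probabilities C_t = P(default by year t)
    into marginal probabilities p_t = C_t - C_{t-1}."""
    cum_pd = np.asarray(cum_pd, dtype=float)
    prev = np.concatenate(([0.0], cum_pd[:-1]))
    return cum_pd - prev

# Hypothetical cumulative default probabilities for years 1-3 and the
# (nonconstant) amount at risk in each year.
cum_pd = [0.010, 0.022, 0.035]
exposure_by_year = [100.0, 50.0, 25.0]

p = marginal_from_cumulative(cum_pd)                  # p1, p2, p3
outcomes = [-e for e in exposure_by_year] + [0.0]     # loss by default year, or no loss
probs = list(p) + [1.0 - p.sum()]                     # multinomial, not binomial
expected_loss = sum(o * q for o, q in zip(outcomes, probs))
print(list(zip(outcomes, probs)), expected_loss)
```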

Exhibit 10
NONCONSTANT OR DISCOUNTED EXPOSURES

Credit Event Tree          Exposure Loss Profile
                           Nonconstant    Discounted(a)
No default                 0              0
Default, year three        25             100·e^(−r3·3)
Default, year two          50             100·e^(−r2·2)
Default, year one          100            100·e^(−r1·1)

(a) r_t is the continuously compounded, per annum zero coupon discount rate.

LIQUID OR TRADABLE POSITIONS AND/OR
ONE-YEAR MEASUREMENT HORIZONS
So far, we have also assumed that the counterparty exposure must be held until maturity and that it cannot be
liquidated at a “fair” price prior to maturity; under such


Exhibit 11
NONCONSTANT OR DISCOUNTED EXPOSURES

Credit Event Tree                      Exposure Profile
                                       Nonconstant    Constant
No default (1 − p1 − p2 − p3)          0              0
Default, year three (p3)               25             100
Default, year two (p2)                 50             100
Default, year one (p1)                 100            100

circumstances, allocating capital and reserves to cover
potential losses over the life of the asset may make sense.
Such circumstances often arise in opaque segments
where the market may perceive the originator of the credit
to have superior information, thereby reducing the market
price below the underwriter’s perceived “fair” value. For
some other asset classes, however, this assumption is inadequate for two reasons:
• Many financial institutions are faced with the increasing probability that a bond name will also show up in
their loan portfolio. So they want to measure the
credit risk contribution arising from their secondary
bond trading operations and integrate it into an overall credit portfolio perspective.
• Liquid secondary markets are emerging, especially in
the rated corporate segments.
In both cases, management is presented with two
specific measurement challenges. First, as when measuring
market risk capital or value at risk, management must
decide on the appropriate time horizon over which to measure the potential loss distribution. In the previous illiquid
asset class examples, the relevant time horizon coincided
with the maximum maturity of the exposure, based on the
assumption that management could not liquidate the position prior to its expiration. As markets become more liquid, the appropriate time horizons may be asset-dependent
and determined by the asset’s orderly liquidation period.
The second challenge arises in regard to tabulating
the marked-to-market value losses for liquid assets should
a credit event occur. So far, we have defined the loss distribution only in terms of default events (although default
probabilities have been tabulated using rating migrations
as well). However, it is clear that if the position can be liquidated prior to its maturity, then other credit events (such


Exhibit 11 (continued): resulting loss distributions.
Constant exposure: loss of -100 with probability p1 + p2 + p3; loss of 0 with probability 1 − p1 − p2 − p3.
Nonconstant exposure: loss of -100 with probability p1; -50 with probability p2; -25 with probability p3; 0 with probability 1 − p1 − p2 − p3.

as credit downgrades and upgrades) will affect its markedto-market value at any time prior to its ultimate maturity.
For example, if you lock in a single-A-rated spread and the
credit rating of the counterparty decreases to a triple-B,
you suffer an economic loss, all else being equal: while the
market demands a higher, triple-B-rated spread, your commitment provides only a lower, single-A-rated spread.
In order to calculate the marked-to-market loss
distribution for positions that can be liquidated prior to
their maturity, we therefore need to modify our approach
in two important ways. First, we need to simulate not only the cumulative default probabilities for each rating class,
but also their migration probabilities. This is straightforward, though memory-intensive. Complicating this calculation, however, is the fact that if the time horizons are
different for different asset classes, a continuum of rating
migration probabilities might need to be calculated, one
for each possible maturity or liquidation period. To reduce
the complexity of the task, we tabulate migration probabilities for yearly intervals only and make the expedient
assumption that the rating migration probabilities for any
liquidation horizon that falls between years can be approximated by some interpolation rule.
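The paper leaves the interpolation rule unspecified; the sketch below shows one possible choice, a linear blend between yearly migration matrices, with multi-year matrices taken as powers of the one-year matrix and the identity used below one year. The two-state matrix is purely illustrative.

```python
import numpy as np

def interpolated_migration_matrix(one_year_matrix, horizon_years):
    """One possible interpolation rule for liquidation horizons between yearly
    grid points: blend linearly between the t-year and (t+1)-year matrices.
    This is an illustrative choice, not the rule used in the paper."""
    M = np.asarray(one_year_matrix, dtype=float)
    t = int(np.floor(horizon_years))
    w = horizon_years - t
    lower = np.linalg.matrix_power(M, t)     # identity when t = 0 (no migration yet)
    upper = lower @ M
    return (1.0 - w) * lower + w * upper

# Tiny hypothetical two-state (performing/default) example.
M1 = np.array([[0.98, 0.02],
               [0.00, 1.00]])
print(interpolated_migration_matrix(M1, 0.5))   # roughly half the one-year migration
```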
Second, and more challenging, we need to be able
to tabulate the change in marked-to-market value of the
exposure for each possible change in credit rating. In the
case of traded loans or debt, a pragmatic approach is simply
to define a table of average credit spreads based on current
market conditions, in basis points per annum, as a function of
rating and the maturity of the underlying exposure. The
potential loss (or gain) from a credit migration can then be
tabulated by calculating the change in marked-to-market
value of the exposure due to the changing of the discount rate
implied by the credit migration.
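A hedged sketch of this pragmatic approach, using an invented spread table and a flat risk-free rate (neither is from the paper) and a bullet exposure discounted with continuous compounding; in practice the spread table would be keyed by both rating and maturity, as described above.

```python
import math

# Hypothetical average credit spreads (basis points per annum) by rating and a
# flat risk-free zero rate; both are illustrative only.
spreads_bp = {"AAA": 20, "AA": 30, "A": 45, "BBB": 75, "BB": 180, "B": 350}
risk_free = 0.05

def pv_of_bullet(face, maturity_years, rating):
    """Present value of a bullet exposure discounted at the risk-free rate plus
    the average spread for the given rating (continuous compounding)."""
    rate = risk_free + spreads_bp[rating] / 10_000.0
    return face * math.exp(-rate * maturity_years)

# Marked-to-market impact of a downgrade from single-A to triple-B on a
# five-year exposure with a face value of 100.
loss = pv_of_bullet(100.0, 5, "BBB") - pv_of_bullet(100.0, 5, "A")
print(round(loss, 3))   # negative: the higher spread lowers the discounted value
```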

The results of applying this approach are illustrated in Exhibit 12, which tabulates the potential profit and loss profile from a single traded credit exposure, originally rated triple-B, which can be liquidated prior to one year. For this example, we have used a recovery rate of 69.3 percent, a proxy for the average recovery rate for senior secured credits rated triple-B. Inspection of Exhibit 12 shows that it is inappropriate to talk about “loss distributions” in the context of marked-to-market loan or debt securities, since a profit or gain in marked-to-market value can also be created by an improvement in the counterparty’s credit standing.

Exhibit 12
MARKED-TO-MARKET CREDIT EVENT PROFIT/LOSS DISTRIBUTION
[Bar chart of profit/loss outcomes for the single triple-B exposure, ranging from -30.7 (default, given the 69.3 percent recovery rate) through -1.3, -0.8, -0.4, 0, 0.4, 0.8, and 1.3 (rating migrations), with probability concentrated (roughly 0.97) at zero.]

Although this approach allows us to capture the impact of credit migrations while holding the level of interest rates and spreads constant, it must be seen as a complement to a market risk measurement system that accurately captures the potential profit-or-loss impact of changing interest rate and average credit spread levels. If your market risk measurement system does not capture these risks, then a more complicated approach could be used, such as jointly simulating interest rate levels, average credit spread levels, and credit rating migrations.

RETAIL PORTFOLIOS

Tabulating the losses from retail mortgage, credit card,
and overdraft portfolios proceeds along similar lines.
However, for such portfolios, which are often characterised by large numbers of relatively small, homogeneous
exposures, it is frequently expedient to simulate directly
the average loss or write-off rate for the portfolio under
different macroeconomic scenarios based on similar,
estimated equations as those described earlier, rather
than migration probabilities for each individual obligor.
Once simulated, the loss contribution under a given
macroeconomic scenario for the first year is calculated as
P1·LEE1, for the second year as P2·(1 − P1)·LEE2, and so on, where Pi and LEEi are the average simulated write-off rates and loan equivalent exposures for year i, respectively.
A bank’s aggregate loss distribution across its total
portfolio of liquid, illiquid, and retail assets can be tabulated by applying the appropriate loss tabulation method
to each asset class.
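A small sketch of the year-by-year retail loss contributions, assuming that the “and so on” extends the pattern multiplicatively with the cumulative survival probability (an assumption consistent with the first two terms given above); the write-off rates and loan equivalent exposures are hypothetical.

```python
def retail_loss_contributions(write_off_rates, loan_equivalent_exposures):
    """Year-by-year loss contributions under one simulated macroeconomic
    scenario: P1*LEE1 for year one, P2*(1-P1)*LEE2 for year two, and so on,
    carrying forward the cumulative survival probability."""
    contributions = []
    survival = 1.0
    for p_i, lee_i in zip(write_off_rates, loan_equivalent_exposures):
        contributions.append(p_i * survival * lee_i)
        survival *= (1.0 - p_i)
    return contributions

# Hypothetical simulated average write-off rates and loan equivalent exposures.
print(retail_loss_contributions([0.02, 0.025, 0.03], [1000.0, 900.0, 800.0]))
```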


ENDNOTE

1. This approach is embedded in CreditPortfolioViewTM, a software
implementation of McKinsey and Company.

REFERENCES

Credit Suisse First Boston. 1997. “CreditRisk+: Technical Documentation.” London: Credit Suisse First Boston.

Kealhofer, Stephen. 1995a. “Managing Default Risk in Portfolios of Derivatives.” In DERIVATIVE CREDIT RISK: ADVANCES IN MEASUREMENT AND MANAGEMENT. London: Risk Publications.

———. 1995b. “Portfolio Management of Default Risk.” San Francisco: KMV Corporation.

Lawrence, D. 1995. “Aggregating Credit Exposures: The Simulation Approach.” In DERIVATIVE CREDIT RISK: ADVANCES IN MEASUREMENT AND MANAGEMENT. London: Risk Publications.

McKinsey and Company. 1998. “CreditPortfolioView™ Approach Documentation and User’s Documentation.” Zurich: McKinsey and Company.

Moody’s Investors Service. 1994. CORPORATE BOND DEFAULTS AND DEFAULT RATES, 1970-1993. New York: Moody’s Investors Service.

Morgan, J.P. 1997. “CreditMetrics: Technical Documentation.” New York: J.P. Morgan.

Rowe, D. 1995. “Aggregating Credit Exposures: The Primary Risk Source Approach.” In DERIVATIVE CREDIT RISK: ADVANCES IN MEASUREMENT AND MANAGEMENT. London: Risk Publications.

Wilson, Thomas C. 1997a. “Credit Portfolio Risk (I).” RISK MAGAZINE, October.

———. 1997b. “Credit Portfolio Risk (II).” RISK MAGAZINE, November.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Capital Allocation and Bank
Management Based on the
Quantification of Credit Risk
Kenji Nishiguchi, Hiroshi Kawai, and Takanori Sazaki

1. THE NEED FOR QUANTIFICATION
OF CREDIT RISK
Liberalization and deregulation have accelerated in recent years. Because financial institutions must control their risk appropriately to maintain the safety and soundness of their operations, it is useful to keep risk within a certain level in relation to capital. In 1988, the Basle Capital Accord—International Convergence of Capital Measurement and Capital Standards—introduced a uniform framework for the implementation of risk-based capital rules. However, this framework applies the same “risk weight” (a ratio applied to assets for the calculation of aggregated risk assets) to loans to all private corporations, regardless of their creditworthiness. Such an approach might encourage banks to eliminate loans that can be terminated easily while maintaining loans with higher risk.
As shareholder-owned companies, banks are
expected to maximize return on equity during this competitive era, while performing sound and safe banking
functions as financial institutions with public missions.
Banks are finding it useful to conduct business according to a management method that requires them to maintain risk within capital and to use risk-adjusted return on allocated capital as an index of profitability, based on more accurate quantification of credit risk.

Kenji Nishiguchi and Hiroshi Kawai are assistant general managers and Takanori Sazaki is a manager in the Corporate Risk Management Division of the Sakura Bank, Limited.

2. OUTLINE OF THE MODEL FOR THE
QUANTIFICATION OF CREDIT RISK
2.1. BASIC DEFINITIONS FOR THE QUANTIFICATION
OF CREDIT RISK
“Credit risk” (also referred to as maximum loss), in a narrow sense, is defined as the worst expected loss (measured
at a 99 percent confidence interval) that an existing portfolio (a specific group) might incur until all the assets in it
mature. (We set the longest period at five years here.) Capital should cover credit risk—the maximum loss exceeding
the predicted amount.
“Credit cost” (also referred to as expected loss) is
defined as the loss expected within one year. Credit cost
should be regarded as a component of the overall cost of the
loan and accordingly be covered by the loan interest.
“Loss amount” is defined as the cumulative loss we
incur over a specific time horizon because of the obligor’s
default. Loss amount is equal to the decrease in the present
value of the cash flows related to a loan caused by setting
the value of the cash flows (after the default) at zero: Loss


amount equals value in consideration of default less value in
case no default occurs.
Here, the loan is regarded as a bond that pays an
annual fixed rate. The minimum unit period for a loan is
one year; any shorter periods are to be rounded up to the
nearest year. The value of each cash flow after default is zero.
The discount rate can be determined only for one currency
that is applied to all the transactions. Mark-to-market in
case of downgrades or upgrades of credit rating is not
performed. Loss amount consists of principal plus
unpaid interest.
Loss amount = PV_d − PV_0,

PV_d = Σ_{t=1..d−1} D_t·r·P + D_d·λ·P,

PV_0 = Σ_{t=1..M} D_t·r·P + D_M·P.

Here, d denotes the year of default, M the maturity of the loan, D_t the discount rate for year t, r the interest rate of the loan, P the outstanding balance of the loan, and λ the recovery rate. We set at zero the discount rate and the interest rate of the loan.

The above measurement does not include new lendings or rollovers that might be extended in the future. Prepayment is not considered, and the risks until the contract matures will be analyzed. (We set the longest period, however, at five years.)

“Recovery rate” is defined as the ratio of 1) the current price of the collateral multiplied by the factors according to the internal rule to 2) the principal amount of each loan on the basis of the present perspective of recovery. In calculations of the loss amount, the amount that can be recovered is deducted from the principal amount of each loan (corresponding to D_d·λ·P in the above formulas). “Uncovered balance” is loan balance less collateral coverage amount obtained by using the above recovery rate. We do not consider the fluctuation of the recovery amount in the future.

2.2. CHARACTERISTICS OF THE MODEL FOR THE
QUANTIFICATION OF CREDIT RISK
First, we use Monte Carlo simulation in our model
(Figure 1). When dealing with credit risk—as opposed to
market risk—we must contend with a probability distribution function that is not normal. We overcome this problem

Figure 1
Fundamental Framework of the Model for the Quantification of Credit Risk
[Flowchart. A database (transaction data with collateral cover; customer data with rating assignments) feeds a data set of credit rating transition probabilities and correlation coefficients (between industries and between customers). The model runs a Monte Carlo simulation generating 10,000 scenarios covering the whole maturity, with two characteristics: 1) simulation of credit rating transitions and 2) taking account of correlation. From the outcomes it measures expected loss (the average of the 10,000 outcomes) and maximum loss (99 percent confidence interval), together with the credit risk delta applied to risk analysis for the bank as a whole, each business area, each branch, and each customer; capital is allocated to cover risk, and risk-adjusted return on equity (integrated ROE) is derived.]


by relying on simulation approaches instead of analytical
methods.
Scenarios of credit rating transition (including
default) in the future for each obligor are generated
through simulation. We then calculate the loss amount
that we may incur for each scenario. We repeat this process
10,000 times and measure the distribution of the results.
Since no distribution of profit and loss is assumed in the
simulation approach, we can more precisely calculate and
easily understand factors such as the average loss amounts
and confidence intervals.
Second, with respect to each obligor’s credit rating
transition in Monte Carlo simulation, we take into account
the correlation between individual obligors. Simulation in
consideration of “chain default” is therefore possible, and
we can generate distributions sufficiently skewed toward
the loss side. This also permits the control of concentration
risk—that is, the risk that exposures are concentrated in,
for example, one industry.
Finally, for our model, we devise a method so that
the risk amount in a particular category can be simply
obtained by performing the Monte Carlo simulation for the
entire portfolio, measuring the ratio of the calculated risk
amount to the uncovered balance of each loan, and summing individual risks.

3. DATA SET
3.1. CREDIT RATING TRANSITION MATRIX
“Credit rating transition matrix” is defined as a matrix that shows the probability of credit rating migration in one year, including a default case for each rating category. The probability is calculated on the basis of number of customers. A matrix is generated for each year. In this model, we obtain the mean and volatility of credit rating migration through the bootstrap (resampling) method. Therefore, the data set is nothing more than several years’ matrices.
We construct the credit rating transition matrices using internal data (Table 1). The numbers of customers who went through credit rating migration are summed across categories.

Probability of transition from rating m to n =
(Number of customers whose ratings migrated from m to n) / (Number of customers with rating m).

Table 1
EXAMPLE: TRANSITION MATRIX

Year n   Year n+1
         1     2     3     4a    4b    4c    5a    5b    5c    6a    6b    6c    7     D
1        0.81  0.13  0.04  0.02  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
2        0.01  0.76  0.17  0.03  0.00  0.01  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00
3        0.00  0.03  0.84  0.02  0.03  0.04  0.02  0.01  0.01  0.00  0.00  0.00  0.00  0.00
4a       0.00  0.00  0.00  0.69  0.15  0.07  0.03  0.02  0.01  0.01  0.01  0.00  0.00  0.00
4b       0.00  0.00  0.00  0.25  0.33  0.21  0.10  0.05  0.03  0.02  0.01  0.00  0.01  0.00
4c       0.00  0.00  0.00  0.07  0.19  0.33  0.24  0.09  0.04  0.02  0.01  0.00  0.01  0.00
5a       0.00  0.00  0.00  0.03  0.06  0.19  0.36  0.21  0.08  0.04  0.02  0.01  0.01  0.00
5b       0.00  0.00  0.00  0.01  0.02  0.07  0.21  0.35  0.18  0.07  0.04  0.02  0.02  0.00
5c       0.00  0.00  0.00  0.01  0.01  0.02  0.08  0.22  0.33  0.18  0.08  0.03  0.03  0.01
6a       0.00  0.00  0.00  0.00  0.01  0.01  0.03  0.08  0.22  0.35  0.16  0.06  0.06  0.01
6b       0.00  0.00  0.00  0.01  0.00  0.01  0.02  0.04  0.09  0.23  0.32  0.16  0.11  0.02
6c       0.00  0.00  0.00  0.00  0.00  0.00  0.01  0.02  0.05  0.11  0.25  0.30  0.22  0.03
7        0.00  0.00  0.00  0.00  0.00  0.00  0.01  0.02  0.02  0.05  0.10  0.17  0.56  0.06
D        0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  0.00  1.00

3.2. CORRELATION
“Correlation” is defined as a data set to incorporate the correlation between industries in the simulation. It is a matrix of correlations between industry scores obtained from the internal data. The industry score is the average score of the customers in each industry. Incorporation of credit rating transition correlation into the simulation enables us to quantify the credit risk in consideration of chain default

across industries. We assume that each of the nine industries specified in the Industry Classification Table of the
Bank of Japan consists of only one company.
To estimate the correlation between industries, we
first measure and standardize the average industry score. In
this paper, we use the weighted average according to the
sales amount. We then measure the correlations between
industries with respect to the logarithmic rate of change in
industry score.

3.3. INDUSTRY CONTRIBUTION RATE
“Industry contribution rate” is defined as the degree to
which each company’s fluctuation can be described by the
movement factors (independent variables) representing the
industry to which each company belongs. Our model
focuses on industries as independent variables among
others such as country and company group. The contribution rate corresponds to the coefficient of determination in
regression analysis in that the square root of the coefficient
of determination is equal to the industry contribution rate.
In this model, several industries are independent
variables. The ratio of each independent variable’s impact is
its industry ratio. The square of the variable’s multiple
coefficient of correlation is its industry contribution rate.
We estimate the industry contribution rate as the
correlation coefficient by using regression analysis on the
relative movement of scores for individual companies
against industry scores (calculated in Section 3.2). We
assume in our model that the movement of the scores for
individual companies can be described by one industry
only. (See the simple regression model below.)
x_{j,y} = α_j + β_j·m_{i,y} + ε_j,
where x_{j,y} denotes the score of company j for year y; α_j and β_j denote the regression coefficients; m_{i,y} denotes the average score of industry i for year y; and ε_j denotes the error term.
Because it is difficult to apply individually the
industry contribution rate measured for each company
(because of data reliability questions and operational limitations), we use one identical industry contribution rate for
one industry. We calculate the industry contribution rate
to be uniformly applied to one industry by averaging the
industry contribution rates of the companies with scores that


are positively correlated with those of the relevant industry.
Here, however, the average of the industry contribution rates calculated for each industry is uniformly
applied to all customers. The average of the industry contribution rates with positive correlation is 0.5.

3.4. CORRELATION BETWEEN INDIVIDUAL
COMPANIES
The correlation between individual companies is calculated
on the basis of the above analysis. The correlation between
company 1 in industry i and company 2 in industry j is
given as ρ_12 = C_ij·r_1·r_2, where C_ij denotes the correlation between industry i and industry j, r_1 denotes the industry contribution rate of company 1, and r_2 denotes the industry contribution rate of company 2.
Because both r_1 and r_2 are 0.5, r_1·r_2 = 0.25. That is, the correlation between companies in the same industry is 0.25, and the correlation between companies in different industries is at most 0.25 (in practice distributed between 0.1 and 0.2).

4. MONTE CARLO SIMULATION
4.1. CREDIT RATING TRANSITION SCENARIO
Two factors are incorporated into the credit rating transition model, that is, the specific factor for each company
and the correlation between industries (Figure 2). In our
model, we assume no distribution of profit and loss
attributable to credit risk. The default scenarios in the
future are generated by moving the following two factors

Figure 2
Credit Rating Transition Model
[Diagram: correlated industry factors (Industry 1, Industry 2, Industry 3, and so on) and an independent company-specific factor together determine the score of company i, which drives the transition of its credit rating.]

through Monte Carlo simulation: movement of credit rating
transition probabilities, including default, and uncertainty
of credit rating transition of each customer, including
default, under a given credit rating transition probability
(Figure 3).
As for movement of credit rating transition probabilities, calculating the standard deviation of credit rating
transition probabilities—based on the data for a five-year
period only—may not be adequate in light of data reliability. In our model, we generate the simulation of movement
of credit rating transition probabilities using the bootstrap
method as follows.

Figure 3
Flowchart of Monte Carlo Simulation
[Flowchart: for each of the 10,000 simulation scenarios and each simulation year, common industry factors are drawn using the volatilities and correlations of industries; together with company-specific factors and the coefficients (industry contribution rate and industry ratio), these produce scores for the companies. The scores, combined with the transition matrix, yield each company's credit rating and its transition over years 1 to 5, from which the present value of cash flows is computed.]

The matrices for each year in the future to be used
in simulation are selected at random from given sets of
matrices by creating random numbers. Although it is possible to put discretionary weight on selection, the same
probability is applied in our model. We use selected matrices as the transition probability in the future.
Regarding uncertainty of credit rating transition
(credit rating transition scenario), the credit rating is
moved annually. The credit rating transition variable V_i is defined for each customer. V_i follows a normal distribution. Mean µ and standard deviation σ can take discretionary numbers. Credit rating is moved as follows.
We determined the credit rating transition matrix used in the simulation for each year after incorporating the correlation (described later). Z_mn, defined as follows, is determined with a given credit rating transition matrix [P_{m→n}], according to the credit rating transition:

P_{m→1} = 1 − F(Z_{m1})
P_{m→2} = F(Z_{m1}) − F(Z_{m2})
⋮
P_{m→7} = F(Z_{m6}) − F(Z_{m7})
P_{m→d} = F(Z_{m7}),

where P_{m→n} denotes the rate of transition from rating m to n, and F denotes the cumulative distribution function of N(µ, σ²).
The credit rating of customer i, whose current rating is l, will be m after one year, where m is the largest number that satisfies Z_{lm} < V_i and the credit rating transition variable V_i for customer i is created at random.
Credit rating transition variable V_i, in consideration of correlation, is created to incorporate the correlation into the customer's credit rating transition. We use the following regression model on the assumption that each company's movement can be explained by the industry movement:

V_i = a_i + b_{1i}·X_1 + b_{2i}·X_2 + … + ε_i,

where X_j denotes the driving factor common to industry j (multivariate normal distribution), b_{ji} denotes the sensitivity of company i to the driving factor of industry j, and ε_i denotes the movement specific to company i.


Coefficients are determined by the industry contribution rate and the industry ratio, defined, respectively, as follows:

Industry contribution rate: Var(Σ_j b_{ji}·X_j) / Var(V_i)

Industry ratio: b_{1i} : b_{2i} : …

The mean and standard deviation of V_i can take discretionary numbers. For the sake of simplicity, we adjust the coefficients in the following analysis so that V_i will follow the standard normal distribution. Here, we move the rating on the condition that one industry consists of one company.

V_i, the credit rating transition variable ∼ N(0,1) for company i, is defined as

V_i = r_i·X_G(i) + √(1 − r_i²)·ε_i,

where
i: company
G(i): industry of company i
X_G(i): variable ∼ N(0,1) common to the industry of company i
ε_i: variable ∼ N(0,1) specific to company i
r_i: industry contribution rate of company i to industry G(i)
ρ_{ε_i ε_j} = 0 if i ≠ j and 1 if i = j (the correlation between different company-specific variables is zero)
ρ_{ε_i X_G(j)} = 0 (the correlation between the company-specific variable and the industry variable is zero)
ρ_{X_G(i) X_G(j)}: coefficient of correlation between industries G(i) and G(j) (given correlation matrix).

It follows that ρ_{V_i V_j} = r_i·r_j·ρ_{X_G(i) X_G(j)}. Random number X_m is created by a function of the multivariate normal distribution ∼ N(0, C).

4.2. RESULT OF CALCULATION
Table 2 compares the amounts of required capital, which are identical to the maximum loss (see Section 6.1), based on the regulations of the Bank for International Settlements (BIS) and the quantification of credit risk with respect to our loan portfolio in a certain category at a certain time.

Table 2
COMPARISON OF REQUIRED CAPITAL

                                                 Required Capital     Ratio to the Risk Asset
                                                 (Millions of Yen)    (Percent)
Risk asset                                       17,326,350
Required capital, based on BIS regulations        1,386,108           8.00
Required capital, based on the quantification
  of credit risk                                    693,889           4.00

The required capital calculated by using the quantification of credit risk, which considers obligors’ creditworthiness, is more effective than that based on a uniform formula without such consideration. The correlation between individual companies has been incorporated into the credit rating transition of each company in the Monte Carlo simulation. This incorporation enables us to perform the simulation assuming chain default and to generate distributions skewed sufficiently toward the loss side. This incorporation also enables us to manage functions such as concentration risk or the risk of concentration of credit in, for example, a particular industry (Figure 4).

Figure 4
Distribution of Losses
[Histogram of simulated losses: frequency (0 to 2,500) on the vertical axis, loss on the horizontal axis.]
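A single-period sketch of the rating transition step described in Section 4.1: one row of the transition matrix is mapped into thresholds via the standard normal CDF, and correlated transition variables V_i = r·X_G(i) + sqrt(1 − r²)·ε_i are drawn and compared against those thresholds. The bootstrap selection of yearly matrices, the multi-year horizon, and the full thirteen-grade scale are omitted; the example row, the 0.5 contribution rate, and the two-industry correlation matrix are illustrative only.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def thresholds(row):
    """Map one row [P_{m->n}] of the transition matrix into thresholds so that
    P_{m->1} = 1 - F(Z_1), P_{m->n} = F(Z_{n-1}) - F(Z_n), and the final
    (default) probability equals F of the last threshold, F being the N(0,1) CDF."""
    row = np.asarray(row, dtype=float)
    tail = np.cumsum(row[::-1])[::-1]        # tail[k] = P(moving to category k or worse)
    return norm.ppf(np.clip(tail[1:], 1e-12, 1 - 1e-12))

def one_year_transitions(row, industry_corr, industry_of, r=0.5, n_sims=10_000):
    """Simulate next-year categories for obligors sharing the same current rating.
    Correlation enters through V_i = r * X_{G(i)} + sqrt(1 - r^2) * eps_i, with
    X ~ N(0, C) the common industry factors."""
    Z = thresholds(row)
    chol = np.linalg.cholesky(industry_corr)
    n_obligors = len(industry_of)
    out = np.empty((n_sims, n_obligors), dtype=int)
    for s in range(n_sims):
        X = chol @ rng.standard_normal(industry_corr.shape[0])
        eps = rng.standard_normal(n_obligors)
        V = r * X[industry_of] + np.sqrt(1.0 - r ** 2) * eps
        out[s] = (Z[None, :] >= V[:, None]).sum(axis=1)  # 0 = best category, last = default
    return out

# Illustrative inputs: one transition-matrix row with four categories (last = default),
# two industries with correlation 0.6, and three obligors in industries 0, 0, and 1.
row = [0.02, 0.90, 0.05, 0.03]
C = np.array([[1.0, 0.6], [0.6, 1.0]])
sims = one_year_transitions(row, C, industry_of=np.array([0, 0, 1]))
print((sims == 3).mean(axis=0))   # simulated one-year default rates, each close to 0.03
```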

5. CREDIT RISK DELTA
5.1. CREDIT RISK DELTA
Japanese city banks have tens of thousands of clients
whose creditworthiness ranges from triple A to unrated
(for example, privately owned businesses). Monte Carlo
simulation is therefore inappropriate for each new lending transaction since the simulation demands a heavy
calculation load and accordingly a lengthy credit
approval process. In our model, we perform Monte Carlo
simulation once for all the portfolios and then calculate
the risk ratio on the uncovered balance of each loan on the
basis of the simulation result. We have devised a method
to calculate the risk amount in a particular category by
summing individual risks. We introduce the concept of
credit risk delta to achieve this purpose. The credit risk
delta is a measurement of the marginal increase in the
risk of the entire portfolio when loans to one segment
that constitutes the portfolio are increased. The maximum
credit risk delta is measured at a 99 percent confidence
interval. The average of credit risk deltas is equal to the
expected loss, but the delta’s maximum does not correspond to the maximum loss.
Credit risk delta by segment =
(the credit risk after 10 percent increase in loans to a segment − the present credit risk) / (10 percent of the loans to the segment).

Our model uses a 13 × 2 segmentation based on credit rating (thirteen grades) and loan period (one year or less, over one year). Two cases are considered for each segment (that is, a new loan and an increase in an existing loan). Accordingly, credit risk deltas are measured in 13 × 2 × 2 patterns.

5.2. METHOD OF MEASURING THE CREDIT RISK
DELTA: PART 1
We consider two patterns of increase in loan amount:
• To increase the amount of an existing loan. This is the
case where the balance of the existing loans in the relevant segment is increased at a certain ratio.
• To add a new loan client. This is the case where a new
loan client is added to the relevant segment on the

assumption that the attributes of the new loan are
essentially the same as those of existing loans.
In light of actual banking practice, both of the
above are extreme cases. Reality is expected to lie in the
middle. Accordingly, we determine that the credit risk
delta is the average of the results in the two cases. Methods
of measurement differ depending on the patterns mentioned above.

Increase in the Amount of an Existing Loan
The profit and loss attributed to each customer are proportionate to the principal amount of the loan. With respect to
a client whose loan is increased at a certain ratio, therefore,
the same coefficient should be applied to the profit and
loss. The increment is the credit risk delta. It is not necessary to run a new Monte Carlo simulation.

New Loan Client
The default of a new loan client is not perfectly linked to
that of an existing loan. Therefore, it is necessary to run a
new Monte Carlo simulation. In our model, the Monte
Carlo simulation (generation of default scenarios) is performed separately for the entire loan portfolio, including
new loan clients selected at random in a certain proportion
from existing loan clients in the relevant segment. New
loan clients are deemed to be new on the assumption that
new loan attributes are essentially the same as those of
existing loans. The credit risk delta is the increment of the
loss attributable to the addition of new loan clients.
This method makes it difficult to obtain the credit
risk delta at a desired confidence interval because of the
characteristics of the simulation. (The confidence interval
for the measurement of credit risk delta under a certain
scenario may not always correspond to that for the entire
portfolio, which is 99 percent, for example.)

5.3. METHOD OF MEASURING THE CREDIT RISK
DELTA: PART 2
Although it is possible to calculate credit risk delta only
using the method described in Section 5.2, the order of
the risk ratios measured therein, as mentioned above, may
not always correspond to the credit ratings, hence an


unrealistic outcome. In our model, we determine the credit
risk delta on the basis of the analysis of its distribution, as
described below.
Figure 5 presents the distribution of loss amounts
for the entire portfolio. Figure 6 is an example of the credit
risk delta measurement for each segment in the case of an
increase in the amount of existing loans in the segment
that covers rating 6a and periods longer than one year. We
determined that the credit risk delta is the increment of
the risk amount when the loan balance in such a segment is
increased by 10 percent.
Figures 5 and 6 show that the credit risk delta
increases monotonically with the width of the confidence
interval for maximum loss. Therefore, the credit risk delta
corresponds to the confidence interval for the maximum
loss (the method described in Section 5.2). On the other
hand, the credit risk delta fluctuates significantly at each
particular point. Accordingly, the risk amount based simply
on the credit risk delta at the relevant confidence interval
may move to a great extent when the confidence interval is
slightly shifted. Consequently, the distribution of the
observed credit risk deltas should be statistically analyzed
to find out the relationship between credit risk delta and
the confidence interval as follows.

First, the credit risk delta ratio is equal to the
credit risk delta (measured above) divided by the increment of loan balance (loan balance JPY95,400 million
× 10 percent). The ratio is depicted in Figure 7. To
improve the visual observation, the vertical axis represents
the fourth root of the credit risk delta ratio.
Figure 8 plots the fourth root of credit risk delta
ratio on the vertical axis with the horizontal axis representing the standard normal variables (Q-Q plotting), which
replace the confidence intervals in Figure 7. Figure 8 shows
that the credit risk delta in Q-Q plotting is distributed
almost linearly. That is, the fourth root of credit risk delta
follows approximately normal distribution.
Then, we estimate the regression coefficient by
performing regression analysis on this Q-Q plotting. Since
the distribution can be approximated by a linear graph, we
estimate the relationship between confidence interval and
credit risk delta ratio through the linear regression function in this analysis.
Credit risk delta v is given as v = (a + bx)⁴, where x denotes the standard normal variable corresponding to the confidence interval in the standard normal distribution (2.33 for 99 percent).
The regression analysis for the example presented
in Figure 8 gives the following result: a=0.437, b=0.0867

Figure 5
Distribution of the Portfolio’s Losses
[Loss (billions of yen, 0 to 3,500) plotted against the confidence interval (0 to 1.0).]

Figure 6
Credit Risk Delta Measurement: An Example
Marginal Risk (Rating 6a and Periods Longer Than One Year)
[Loss (billions of yen, 0 to 45) plotted against the confidence interval (0 to 1.0).]

Figure 7
Credit Risk Delta Ratio
Fourth Root of Credit Risk Delta (Rating 6a and Periods Longer Than One Year)
[Fourth root of the credit risk delta ratio (0 to 1.0) plotted against the confidence interval (0 to 1.0).]

Figure 8
Credit Risk Delta Ratio Measured in Q–Q Plotting
(Rating 6a and Periods Longer Than One Year)
[Fourth root of the credit risk delta ratio (0 to 0.9) plotted against standard normal variables (-4 to 4).]

(coefficient of determination R² = 0.83, number of samples = 10,000). That is, the credit risk delta ratio of the existing loans in the segment that covers rating 6a and periods longer than one year is estimated at (0.437 + 0.0867 × 2.33)⁴ = 0.167 (16.7 percent).
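The point estimate can be reproduced directly from the regression coefficients reported above; the snippet below simply evaluates the fitted function at the 99 percent confidence level.

```python
# Estimated regression coefficients from the Q-Q plot (Figure 8) and the
# standard normal variable for the chosen confidence level.
a, b = 0.437, 0.0867
x_99 = 2.33                                  # standard normal variable for 99 percent

credit_risk_delta_ratio = (a + b * x_99) ** 4
print(round(credit_risk_delta_ratio, 3))     # about 0.167, i.e., 16.7 percent
```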

5.4. COMPILATION OF THE RESULTS AND
ADJUSTMENT OF THE CREDIT RISK DELTAS
We now classify in thirteen ratings the rates measured
for 13 × 2 × 2 categories. For each rating, we calculate the
average of the rates for the periods of one year or less and
more than one year (weighted average according to outstanding balance) as well as the average of those for new
loan clients and existing loans (arithmetic mean).
Credit risk delta is regarded as the degree of
effect that an individual risk has on the portfolio. In our
model, we made an adjustment to equate the sum of the
credit risk deltas with the risk of the entire portfolio so
that risks ranging from those of an individual company to
those of the whole portfolio can be interpreted consistently through credit risk delta (Table 3). The sum for
all the clients is denoted Σ.


When Σ Credit Risk Delta < the Risk Amount for the
Entire Portfolio
We adjust the credit risk deltas by multiplying them by a constant—the risk amount for the entire portfolio divided by Σ marginal risk—so that their sum will equal the risk amount for the entire portfolio.

When Σ Credit Risk Delta > the Risk Amount for the
Entire Portfolio
We do not adjust the credit risk deltas. We regard Σ
credit risk delta as the risk amount for the entire portfolio.
Furthermore, the capital required for credit risk is
assumed to be equal to credit risk.

6. BUSINESS MANAGEMENT BASED ON
THE QUANTIFICATION OF RISK
6.1. ALLOCATION OF CAPITAL
The amount of capital required to cover each type of risk
can be quantified based on the concept of maximum loss, a
measurement common to all risks. We assign capital to
each risk as “allocated capital.” Required capital equals the


Table 3
RESULT OF CREDIT RISK DELTA CALCULATION

         Credit Cost  Credit Risk  Asset              Uncovered Balance  Required Capital   Percent-to-Asset  BIS Regulation
Rating   (Percent)    (Percent)    (Millions of Yen)  (Millions of Yen)  (Millions of Yen)  Ratio             (Percent)
1        0.00          0.00         1,194,230          1,185,094               0             0.00             8.00
2        0.00          0.00           876,139            846,015               0             0.00             8.00
3        0.00          0.03         1,712,623          1,555,640             467             0.03             8.00
4a       0.05          1.38           725,792            488,218           6,737             0.93             8.00
4b       0.07          2.07           865,106            546,752          11,318             1.31             8.00
4c       0.12          2.79         1,221,975            744,359          20,768             1.70             8.00
5a       0.20          4.05         1,744,059          1,068,275          43,265             2.48             8.00
5b       0.31          5.87         1,951,575          1,131,679          66,430             3.40             8.00
5c       0.71          9.18         1,788,003            952,833          87,470             4.89             8.00
6a       1.05         12.21         1,824,986          1,034,857         126,356             6.92             8.00
6b       1.54         15.33         1,330,100            670,638         102,809             7.73             8.00
6c       1.88         16.66           912,579            477,417          79,538             8.72             8.00
7        3.37         21.10         1,179,183            704,891         148,732            12.61             8.00
Total                              17,326,350         11,406,668         693,889             4.00             8.00

risk amount measured as maximum loss and is kept below
the allocated capital amount. This enables us to keep the
risk amount within the capital and to perform safe and
sound bank management. Table 4 gives an example.

6.2. INTEGRATION OF PROFITABILITY
MEASUREMENT
We measure the profitability of each business area using
risk-adjusted return on allocated capital (integrated ROE),
not return on asset (ROA). We calculate the integrated
ROE as follows:
Integrated ROE =
(net business profit – expected loss)/allocated capital.
The ratio of profit net of expected loss to the risk
actually taken is termed “risk-return ratio.”
Risk-return ratio =
(net business profit – expected loss)/capital required to cover risk.
The risk-return ratio is useful when assessing the profitability of each business area or reviewing the capital allocation because, more than the other indices, it provides a tool for deciding whether to put more capital and resources into the more profitable existing business lines.
We use the allocated capital utilization ratio to
measure the rate of usage of the allocated capital.



Allocated capital utilization ratio =
capital required to cover risk /allocated capital.
With these indices, we can consistently measure
the profitability of the bank as a whole, each business area,
each branch, and each customer.
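As a check on these definitions, the following sketch reproduces the integrated ROE figures of Table 5. The allocated capital for each customer is taken to equal its credit risk amount, which is an assumption made here only to reproduce the table.

```python
def integrated_roe(net_business_profit, expected_loss, allocated_capital):
    # Integrated ROE = (net business profit - expected loss) / allocated capital
    return (net_business_profit - expected_loss) / allocated_capital

def risk_return_ratio(net_business_profit, expected_loss, required_capital):
    # Risk-return ratio = (net business profit - expected loss) / capital required to cover risk
    return (net_business_profit - expected_loss) / required_capital

def capital_utilization_ratio(required_capital, allocated_capital):
    # Allocated capital utilization ratio = required capital / allocated capital
    return required_capital / allocated_capital

# Customers A and B from Table 5 (millions of yen), with allocated capital set
# equal to the credit risk amount for this check.
print(round(100 * integrated_roe(10.0, 3.10, 58.70), 2))   # 11.75 percent (Customer A)
print(round(100 * integrated_roe(15.0, 7.10, 91.80), 2))   # 8.61 percent (Customer B)
```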

6.3. EVALUATION OF PERFORMANCE
Evaluation of profitability by customers using integrated
ROE in the example in Table 5 is as follows: Although
Customer B yields a better interest rate spread (or interest
rate spread minus credit cost) than Customer A, its profitability—in light of credit risk—is lower than that of A.
Table 4
ALLOCATION OF CAPITAL: AN EXAMPLE
Billions of Yen

                              Required Capital            Required Capital Based on           Allocated
Risk Asset                    Based on BIS Regulations    the Quantification                  Capital
Risk asset          41,042    41,042 x 8% = 3,283         Credit risk                1,465    1,538
Market risk                                               Interest rate risk [ALM]      87      712
  in trading           316                        25      Equity risk                  543      570
                                                          Market risk                   25      416
Total               41,358                     3,308      Total                      2,120    3,236

Table 5
PROFITABILITY BY CUSTOMER

           Credit     Loan Amount         Profit              Credit Cost         Credit Risk         Integrated ROE
Customer   Rating     (Millions of Yen)   (Millions of Yen)   (Millions of Yen)   (Millions of Yen)   (Percent)
A          5b         1,000               10 (1.00%)          3.10 (0.31%)        58.70 (5.87%)       11.75
B          5c         1,000               15 (1.50%)          7.10 (0.71%)        91.80 (9.18%)        8.61

Notes: Recovery rate is zero. Percentages in parentheses show annual rate on loan amount.

Table 6
PERFORMANCE EVALUATION
(Integrated ROE, risk-return ratio, and allocated capital utilization ratio are compared between the previous month and this month.)

Grade   Integrated ROE   Risk-Return Ratio   Allocated Capital Utilization Ratio   Evaluation
A       Up               Up                  Up                                    Very good
B       Up               Up                  Down                                  Good
C       Up               Down                Up                                    Good/fair
D       Down             Up                  Down                                  Good/fair
E       Down             Down                Up                                    Poor
F       Down             Down                Down                                  Poor
                                             More than 100%                        Warning

A: Capital utilization ratio increased. Profitability improved.
B: Although profitability was improved, capital utilization ratio declined. Potential remains.
C: Although both capital utilization ratio and profitability were improved, the profitability of new business was low.
D: Both capital utilization ratio and profitability declined. Return on risk improved.
E: Although capital utilization ratio increased, it did not lead to improved profitability.
F: Capital utilization ratio declined. Profit decreased as well.
Warning: Risk (capital required to cover risk) exceeds the allocated capital. Need for reduction.

The integrated ROE, risk-return ratio, and allocated capital utilization ratio employed together enable us
to evaluate the performance of each branch. Table 6 shows
the possible combinations of the three indices and the
corresponding evaluations.

7. CONCLUSION
Safe and sound banking is maintained through the allocation and control of capital by the use of integrated risk
management techniques that are based on quantification
of the risks inherent in the banking business. Furthermore, business management with the integrated ROE


(that is, risk-adjusted ROE) facilitates efficient utilization of capital. Such management contributes to the
growth of a bank’s profitability. For promoting this type of management at Japanese banks with large portfolios of transactions—both in number and amount—the concept of credit risk delta is an effective tool. The credit risk delta helps to quantify risks while taking into account the types of business management that city banks use. This management method provides consistent and simple measurement applicable to all levels—from individual customers up to branches and the bank as a whole.

The views expressed in this article are those of the authors and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


ENDNOTE

The authors thank the individuals at Sakura Bank who gave them useful advice
and instructions in preparing this document, as well as the Fujitsu Research
Institute, which codeveloped the methods of quantification of credit risk.



Commentary
William Perraudin

I shall divide my comments into three parts: (i) general
thoughts about credit risk modeling and the technical
difficulties involved, (ii) remarks on the implementation of
such models, with particular reference to the papers in this
session by Wilson and by Nishiguchi et al., and (iii) a discussion of the policy implications of credit risk modeling
and the light shed on this issue by the papers by Jones and
Mingo and by Gray.

BACKGROUND
It is important to understand the background to the current interest in credit risk modeling. Recent developments
should be seen as the consequence of three factors. First,
banks are becoming increasingly quantitative in their
treatment of credit risk. Second, new markets are emerging
in credit derivatives, and the marketability of existing
loans is increasing through growth in securitizations and
the loan sales market. Third, regulators are concerned
about improving the current system of bank capital
requirements, especially as it relates to credit risk.
These three factors are strongly self-reinforcing.
The more quantitative approach taken by banks could be

William Perraudin is a professor of finance at Birkbeck College, University of
London, and special advisor to the Regulatory Policy Division of the Bank of
England.

seen as the application of risk management and financial
engineering techniques initially developed in the fixed
income trading area of banks’ operations. However, these techniques raise the possibility of pricing and hedging credit risk
more generally and encourage the emergence of new
instruments such as credit derivatives. Furthermore, if
banks are adopting a more quantitative approach, regulators may be able to develop more sophisticated and
potentially less distortionary capital requirements for
banking book exposures. However, if regulators do permit the use of models in capital requirement calculations,
banks will have a substantial incentive to invest further
in the development of credit risk models.
The basic problems in developing models of credit
risk are (i) obtaining adequate data and (ii) devising a satisfactory way of handling the covariability of credit exposures. On data, banks face the difficulty that they have
only recently begun to collect relevant information in a
systematic manner. Many do not even know simple facts
about defaults in their loan books going back in time.
Although serious, this difficulty is transitional and will
be mitigated as time goes by and perhaps as banks make
arrangements to share what data exist.
The more serious data problem is that bank loans
and even many corporate bonds are either partly or totally
illiquid and mark-to-market values are therefore not


available. This means that one must rely on some other
measure of value in order to establish and track the riskiness of credit-sensitive exposures. Two approaches have
been followed by credit risk modelers. J.P. Morgan and
Credit Suisse Financial Products in their respective
modeling methodologies, CreditMetrics and CreditRisk+,
employ ratings and probabilities of ratings transitions as
bases for measuring value and risk. The consulting firm
KMV uses equity price information to infer a borrower’s
underlying asset value and the probability that it will fall
below some default trigger level.
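To make this logic concrete, here is a minimal sketch of a stylized Merton-type calculation in the spirit of the KMV approach; the asset value, default trigger, volatility, and horizon are invented for illustration, and the actual KMV methodology, which infers asset value and volatility from equity prices and maps distances to default into empirical default frequencies, is considerably more elaborate.

```python
from math import log, sqrt
from statistics import NormalDist

def merton_default_probability(asset_value, default_point, asset_vol,
                               horizon_years=1.0, drift=0.0):
    """Stylized Merton-type calculation: probability that a lognormal asset
    value falls below the default trigger over the horizon. All inputs here
    are illustrative assumptions, not the KMV parameterization."""
    distance_to_default = (log(asset_value / default_point)
                           + (drift - 0.5 * asset_vol ** 2) * horizon_years) / (asset_vol * sqrt(horizon_years))
    return NormalDist().cdf(-distance_to_default)

# Hypothetical borrower: assets of 120 against a default trigger of 100,
# with 25 percent annual asset volatility.
print(round(merton_default_probability(120.0, 100.0, 0.25), 4))
```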
The second major problem faced by credit risk
analysts is that of modeling the covariation in credit risks
across different exposures. It is particularly difficult to do
this in a tractable way while respecting the basic nature
of credit risk, that is, return distributions that are fat-tailed and highly skewed to the left. Two approaches have been taken. On the one hand, the CreditMetrics approach to covariation consists of supposing that ratings transitions are driven by changes in underlying, continuous stochastic processes. Correlations between these processes (and hence in ratings transitions) are inferred from correlations in equity returns (to some degree therefore relying on the KMV methodology). CreditRisk+, on the other
hand, allows parameters of the univariate distributions of
individual exposures to depend on common conditioning
variables (for example, the stage of the economic cycle).
Conditionally, exposures are supposed to be independent,
but unconditionally they are correlated.
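The mechanics of this conditioning can be illustrated with a small simulation; the sketch below is not CreditRisk+ itself, merely a two-state example in which default probabilities depend on a common economic-cycle variable, so that defaults are independent conditionally but correlated unconditionally. The default probabilities, the probability of a bust, and the portfolio size are arbitrary assumptions.

```python
import random

def simulate_default_counts(n_obligors=100, p_boom=0.01, p_bust=0.05,
                            prob_bust=0.3, n_trials=20000, seed=0):
    """Defaults are independent Bernoulli draws *given* the state of the cycle;
    mixing over the boom and bust states induces unconditional correlation."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_trials):
        p = p_bust if rng.random() < prob_bust else p_boom
        counts.append(sum(rng.random() < p for _ in range(n_obligors)))
    mean = sum(counts) / n_trials
    variance = sum((c - mean) ** 2 for c in counts) / n_trials
    return mean, variance

mean, variance = simulate_default_counts()
# Under full independence the variance of the default count would be close to
# the binomial value n*p*(1-p); the excess here reflects the common factor.
print(f"mean defaults {mean:.2f}, variance {variance:.2f}")
```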

IMPLEMENTATIONS OF CREDIT
RISK MODELING
Two papers in this session represent implementations of
credit risk methods, namely, those by Wilson and by
Nishiguchi et al. The Wilson study describes an approach
to credit risk modeling that resembles CreditRisk+. More
specifically, this approach employs binomial and multinomial models of default/no-default events and of movements
between ratings. Correlations between the risks on different exposures are incorporated by allowing the probabilities to vary according to whether the macroeconomy is in


one of two states. It is slightly difficult to see how such
a framework would perform in actual applications. For
example, it might be considered a limitation that the economy can only be in a boom or a bust. Integrating over
a larger number of states or over some continuous set of
different states might be more natural.
Although the Wilson paper does discuss ratings
changes, the primary focus (as in CreditRisk+ ) is on probabilities of default. Credit losses are deemed to occur only
if a borrower defaults and not if, for example, its rating
declines sharply without default taking place. This
approach resembles traditional practices in insurance and
banking markets. By contrast, CreditMetrics takes a more
portfolio-theoretic approach in which losses are registered
as the credit rating of a borrower declines. From an economic viewpoint, the portfolio-theoretic approach appears
preferable. For example, it more straightforwardly yields
prescriptions about how a given credit risk may be hedged.
The Nishiguchi et al. paper resembles CreditMetrics in that it takes a more portfolio-theoretic
approach. However, in its treatment of correlations, its
approach, like that of Wilson and CreditRisk+, is to allow
exogenous conditioning variables to serve as the source of
covariation in credit risk. Like the Wilson paper, the
Nishiguchi et al. paper does not explore the effectiveness
of the authors’ very complicated approach to modeling
correlation. Since correlations are crucial inputs to the
credit risk measures that come out of such models, a critical evaluation of the sensitivity of the results to different
approaches would be desirable.

POLICY RELEVANCE
The other two papers in this session, those by Jones and
Mingo and by Gray, provide extremely useful snapshots
of what U.S. and Australian banks, respectively, have
achieved in their implementation of quantitative credit
risk modeling. In both cases, it is notable how far the banks have come, although significant obstacles
remain. Substantial efforts have been directed at collecting
data and implementing credit risk measurement systems. Almost no banks follow a fully portfolio-theoretic

approach. Most employ ratings-based approaches like
CreditMetrics or CreditRisk+ rather than KMV techniques.
Supervisors in both the United States and Australia
have had extensive contact with banks, monitoring
progress and, in the Australian case, coordinating the
exchange of data.
For regulators, a crucial question that Jones and
Mingo, and to some extent Gray, address is whether bank
models are sufficiently developed and comprehensive to
be employed in the calculation of risk-sensitive capital
requirements on banking book exposures. Both studies
are quick to conclude that global use of credit risk models
for the entire banking book is quite infeasible at the current
stage of development of credit risk modeling. Nevertheless, both studies view the adoption of such models in
some form as inevitable. The primary argument advanced
by Jones and Mingo is that large U.S. banks currently
engage in substantial “capital arbitrage,” using securitizations and other transactions to cut their capital levels
while retaining the underlying credit risk. A more positive argument, perhaps, is that by allowing the use of
models, supervisors may reduce distortions in banks’
portfolio choices attributable to the current capital
requirement system, with its unsophisticated approach to
risk weighting.
There are two ways in which credit risk models
could be employed in a limited sense for capital requirement calculations. The first would involve their use as a
guide in banking supervision. In their contact with
banks, U.S. supervisors suggest capital add-ons for banking book assets over and above the Basle 8 percent capital
charge. In the United Kingdom, such add-ons have a
more formal status in that regulators actually require
banks to hold amounts of capital over and above the Basle
8 percent charge. Thus, each U.K. bank is required to maintain a risk-asset ratio (that is, the ratio of broad capital to risk-weighted assets) that exceeds a bank-specific trigger ratio. In principle at least, output from
credit risk models could be used as an input to decisions
about such formal or informal capital add-ons.
Second, credit risk models could be employed for
part but not all of the banking book. Jones and Mingo
have a limited discussion of this point. The section of the
banking book to which models might be applied could be
selected either because it is the source of substantial capital arbitrage or possibly because the assets involved have
stable credit risk on which considerable information is
available. Jones and Mingo presumably have the first of
these two criteria in mind when they argue that certain
transactions involving securitization should be subjected
to modeling. More generally, loans issued by borrowers
that already possess ratings on traded debt or that have
quoted equity might be obvious candidates for credit risk
modeling. Alternatively, some particularly homogeneous
asset categories such as mortgages, personal loans, or
credit card debt may be judged to have stable default
behavior susceptible to credit risk modeling.

CONCLUSION
The papers in the session serve to underline the fact that
credit risk modeling will be a crucial area for regulators
and industry practitioners in coming years. It is hard to
resist the conclusion that models in some shape or form
will be used before too long in bank capital calculations.
As Jones and Mingo argue, the current division of bank
assets between the trading and banking books in and of
itself obliges regulators to consider changes since it provides banks with strong incentives to reduce capital
requirements through arbitrage. On a more positive note,
making bank capital requirements more sensitive to the
credit risks a bank faces will reduce distortions inherent
in a non-risk-adjusted system without impairing the main
function of capital requirements, that of bolstering the
stability of the financial system.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Supervisory Capital Standards:
Modernise or Redesign?
Edgar Meister

I. I am delighted to have the opportunity to speak to
such an eminent group at this important conference on
capital regulation.
“If you see a banker jump out of the window,
jump after him: there is sure to be profit in it,” said the
eighteenth-century French philosopher Voltaire. Looking
at the situation in Southeast Asia, I am not entirely convinced that it would always be wise to follow Voltaire’s
advice. Even if all banks pursue the same course, their
actions are not necessarily appropriate.
It is also becoming clear, however, that the Asian
crisis has given new urgency to the already important
topics of risk and capital adequacy. In that respect, this
conference has come at a very opportune moment.
The question addressed by this conference is
whether the prudential supervisory standard established by
the 1988 Basle Capital Accord can meet the challenges of
the twenty-first century. If an entirely new standard is
needed, then our task is to consider which alternative system of capital requirements might be superior to the
present one. There are differences of opinion on these
issues, not only between the supervised institutions and

the supervisors but also, in some cases, among the supervisors themselves.
In debating whether it is better to modernise the
Basle Accord or to redesign it by developing a new set of
capital rules, we need to keep two considerations in mind:
• A capital standard should promote the security of individual institutions—that is, each institution’s ability
to manage risk and to maintain an adequate cushion
of capital against losses—and the overall stability of
the banking system. I assume that no one wants less
financial market stability than we have now.
• The easing of regulatory burdens and the creation
of a level playing field for banks are important
objectives. Although the extent of the regulatory
burdens imposed by different capital standards should
not be the main criterion in deciding whether to
modernise or redesign the Basle Accord, efforts to
streamline regulation are welcome because they reduce
the competitive disadvantages experienced by banks and
optimise the cost-effectiveness of the supervisory
system. A related consideration is that any prudential
measures taken should not create competitive discrepancies between different groups of banks.

Edgar Meister is a member of the Board of Directors of the Deutsche Bundesbank.


II. In terms of risk considerations, an ideal capital standard would fully capture an institution’s risks and would
produce a capital base that takes due account of risk. An
ideal standard would also increase market discipline. In
reality, we are still far away from these theoretical ideals.
There are differences in the measurability and hence also in
the controllability of the main risks to which banks and
other financial intermediaries are exposed. Market risks, for
example, can be measured quite accurately using existing
data and risk-monitoring techniques.
By contrast, in what is still the main risk area
for banks, credit risk, a purely quantitative determination
of risk—comparable to market risk modeling—is much
more difficult and has not yet been achieved. For that
reason, assessment of credit risk still relies heavily on
traditional methods—that is, the judgement of the
banks’ credit officers.
Efforts to improve the quantification of credit risk
through the use of models are mainly hampered by insufficient or poor-quality data. For that reason, the survey of
data sources for credit risk models that was recently
released by the International Swaps and Derivatives
Association (ISDA) is very welcome. It remains to be seen,
however, whether the quality of the data in major market
segments will be adequate.
Data problems also complicate the modeling of
operational risks. These risks range from the inadequate
segregation of duties to fraud and errors in data processing.
At present, measures of these risks are “guesstimates” based
largely on data not objectively observable.

III. The difficulties in risk measurement are a problem
not only for institutions, but also for the supervisory
agencies that define capital requirements. Our existing
regulatory framework aims to ensure that institutions have
an adequate cushion of capital as a protection against
unavoidable losses. Although this “shield” of capital is supposed to cover all risk factors—including operational and
legal risks—the calculation of required capital has essentially been geared to a single risk factor: default risk. At
the beginning of this year, separate capital requirements


were implemented for banks’ market risk exposures, but
default risk remains the primary target of capital rules.
Bankers and some supervisors have recently called
the Capital Accord into question, not least because of its
inexact categorisation of risks. They point out, for example,
that exposures to countries in the Organization for Economic
Cooperation and Development are assigned a uniform risk
weight of 0 percent, although there are considerable differences in risk within that group of countries. Similar questions arise about the assignment of a 100 percent risk weight
to exposures to nonbanks, a group that includes blue-chip
firms known worldwide. Additionally, critics claim that risk
weights under the Basle Accord do not take into account the
degree of diversification in individual institutions' loan
books—an oversight that may prevent institutions from
using their funds in the most productive way.
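As a rough illustration of this criticism, the back-of-the-envelope Accord arithmetic looks like the following; the 8 percent minimum and the broad weight categories follow the 1988 Accord, but the portfolio itself is hypothetical.

```python
# Stylized Basle Accord arithmetic: required capital = 8 percent of
# risk-weighted assets. The exposures (in millions) are purely hypothetical.
exposures = [
    ("OECD government bond",   100.0, 0.00),  # uniform 0 percent weight for OECD sovereigns
    ("Loan to blue-chip firm", 100.0, 1.00),  # 100 percent weight for nonbank exposures
    ("Loan to high-risk firm", 100.0, 1.00),  # same weight despite much higher risk
]

risk_weighted_assets = sum(amount * weight for _, amount, weight in exposures)
required_capital = 0.08 * risk_weighted_assets
print(f"RWA = {risk_weighted_assets:.0f}, required capital = {required_capital:.0f}")
# The blue-chip and high-risk loans attract identical charges, which is
# precisely the coarseness the critics point to.
```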

IV. This is the backdrop against which more sophisticated methods of credit risk measurement are being discussed. These methods include a subtly differentiated
prudential weighting scheme, the use of internal ratings,
the inclusion of portfolio effects and credit risk models,
and certain new concepts completely different from the
Capital Accord. It is my assessment that supervisors are
fundamentally open-minded about these alternatives.
Notable among the new concepts are the precommitment
approach put forward by economists from the Board of
Governors of the Federal Reserve System and a framework
that emphasises self-regulation, proposed by the Group of
Thirty (G-30).
Under the precommitment approach, a bank itself
decides how much capital it will hold within a given
period to cover the risks arising from its trading book.
Sanctions will apply if the accumulated losses exceed that
amount. This approach is appealing in many respects. It
could ease the job of supervisors and reduce the regulatory
burden for institutions. Moreover, the approach is highly
market-oriented.
The precommitment approach poses a number of
fundamental difficulties, however. First, it involves a
purely ex post analysis of a bank’s risk and capital status.

This perspective means that supervisory authorities are
reacting to market outcomes and to choices already made
by an institution rather than specifying a given level of
capital for the institution in a preventive manner. Without
wishing to preempt this afternoon’s discussion, I would
argue that some institutions facing regulatory sanctions for
failing to commit sufficient capital to cover their losses
might be motivated to accept additional risk—on the
theory that “If you are in trouble, double.”
A second problem with the precommitment
approach is the difficulty of finding a logically consistent
penalising mechanism. If an institution takes risks that
result in losses greater than the capital reserved, banking
supervisors would have to impose mandatory fines or
higher capital requirements, which would end up exacerbating the financial difficulties of that institution.
Another penalty contemplated under the precommitment scheme—public disclosure—points to a third
problem with this approach. The idea that an institution
could be required to inform the market if it failed to limit
its losses has met with considerable reservations on the part
of many institutions and supervisory authorities. I am
quite doubtful whether institutions would be prepared to
go that far in terms of disclosure. At the risk of exaggeration, I would suggest that the precommitment approach
represents a bank’s promise that it will not become insolvent. If that promise cannot be kept, then the question
whether supervisors can or will impose sanctions remains
open—at least in critical cases.
A proposal by the G-30, which goes further than
the precommitment approach in reducing the role of bank
supervisors, essentially leaves the development of regulatory strategies to the market or to a small group of major
international financial institutions. The involvement of
supervised institutions in the creation of regulatory standards is not new in principle; it has been tried and tested.
Whenever industry methods of measuring and monitoring
risk have become state of the art, supervisors have been
ready to adopt them—as was recently the case with the recognition of internal models for market risk. Nevertheless,
in the absence of administrative sanctions to enforce standards, how binding could those standards be?

Trusting solely in effective market controls presupposes a comparatively high degree of transparency.
As in the case of the precommitment approach, it is questionable whether all market players would be prepared to
disclose their risk positions and losses to the market. Such
disclosures would require institutions to reveal market
expectations, trading strategies, and other business secrets.
Furthermore, under the G-30 proposal, the interests of the select group of member institutions might not
prove to be identical with the general interests of the financial industry. In particular, competitive distortions at the
expense of smaller institutions might arise. As mentioned
above, an outcome in which supervisory standards cause
new competitive problems should at all events be avoided.

V. As concepts, the precommitment approach and the
G-30 proposal for self-regulation supply important and
thought-provoking ideas. Because of their pronounced
market orientation, these alternatives to the present prudential standard would reduce the regulatory burden and
give banks greater freedom in their risk management.
At the same time—in addition to the reservations
already mentioned—I perceive the danger of a decline in
the overall security level of the individual credit institution
and the banking system. Existing risks might be covered
by less capital than under the Capital Accord.
Although self-regulation aimed at greater market
discipline would be welcome, the precommitment approach
and G-30 proposal would probably not be able to achieve it on
a lasting basis—especially if a bank or a banking system were
in a difficult situation. In such a situation, these alternative
approaches would not be able to make up for the disadvantages of allowing institutions to maintain a lower capital base.
What should also be given consideration is that
both approaches are intended to apply mainly to large
banks that operate internationally. These institutions are
players with an especially prominent role in maintaining
the stability of the financial markets. At the same time, we
know that the world of risk has become more complex
during the last few years and that the risks borne by institutions under the pressure to perform have increased. Risky


high-yield transactions in emerging markets, for example,
are likely to become increasingly significant in the future
despite the recent turmoil in Asia.
Indeed, the events in Southeast Asia demonstrate
how difficult it is to determine bank-specific risks with
sufficient accuracy. Even leading rating agencies have
tended to run behind the markets in line with the maxim
“Please follow me, I am right behind you.”

VI. Capital is, therefore, still a modern prudential requirement. The Basle Capital Accord is, in this context, a rough
and comparatively simple approach. This standard, which
has now been put into practice virtually worldwide,
undoubtedly has some weaknesses. It has, however, demonstrated its suitability under changing conditions in the
almost ten years since its introduction. In my view, the
empirical findings are definitely positive.
The Capital Accord has not worked, however,
when the calculated capital was not actually in place. In many
countries that have experienced crises, credit institutions had
only formally fulfilled the norm of 8 percent minimum
capital. An evaluation of actual assets and liabilities in
line with market conditions would have shown that the
capital had been used up long beforehand.
Because the Capital Accord sets capital requirements more conservatively than do the precommitment
approach and the G-30 proposal, there remains a buffer
for cushioning the risks that are difficult to measure—
operational and legal risks, for instance. To that extent, an
adequate cushion of capital can make up for shortcomings
in risk identification, measurement, and control.

VII. To come back to the original question: I am in
favour of an evolutionary solution. The Basle Accord
should be modernised and not—at present—replaced by
other concepts. Other approaches are indeed worth discussing, but at present I cannot identify any alternative that
would be operationally viable, practicable, and superior to
the Capital Accord.
The Capital Accord itself is adaptable enough to
allow new developments in the markets to be integrated
with its system in a meaningful manner—as occurred in
the case of market risk, for example. It can also accommodate all other developments currently under discussion,
such as on-balance-sheet netting, credit derivatives, credit
risk models, and new capital elements.
The capital requirements established by the Basle
Accord will, of course, have to be expanded to include buffers for risks that have so far gone uncovered. For example,
given an easing of capital requirements in other areas,
buffers for operational risks, valuation risks, and concentration risks must no longer be a “no-go” area.
Generally speaking, further qualitative requirements
may also help to curb risks and hence create a stabilising
impact in micro- and macro-prudential terms. In that respect,
the Basle Committee’s “Framework for the Evaluation of
Internal Control Systems” is especially important. Qualitative
and quantitative minimum standards for the use of credit risk
models—validated through extensive testing and application—would also have to be specified in due course.
In my view, self-regulation can have a stimulating
effect, but it cannot replace the administrative supervision
of banks and other financial intermediaries. To that extent,
self-regulation is an approach that complements prudential
supervision. I believe that this assessment has been reinforced by various bank crises in the past and borne out yet
again by the Asian crisis.
A revised capital framework incorporating greater
self-regulation requires that supervisors work closely with
financial institutions. Such cooperation should yield regulations that are, on the one hand, up-to-date and compatible
with the market and, on the other hand, conducive to
market discipline and the stability of the overall system.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


The Value of Value at Risk: Statistical,
Financial, and Regulatory Considerations
Summary of Presentation
Jon Danielsson, Casper G. de Vries, and Bjørn N. Jørgensen

Value at risk (VaR) has emerged as a major tool for measuring market risk, and it is used internally by banks for
risk management and as a regulatory tool for ensuring the
soundness of the financial system. A large amount of
research work into VaR has emerged, and various aspects
of VaR have been extensively documented. There are two
areas of VaR-related research that we feel have been relatively neglected: the relationship of VaR to statistical
theory and the financial-economic foundations of VaR.
Most VaR methods, however, are based on normality; as Alan Greenspan (1997) stated, "the biggest problems we now have with the whole evaluation of risk is the fat-tailed problem, which is really creating very large conceptual difficulties."
Common methods for measuring VaR fall into
two major categories—parametric modeling of the
conditional (usually normal) distribution of returns and
nonparametric methods. Parametric modeling methods
have been adapted from well-known forecasting technologies to the problem of VaR prediction. As a result, they

Jon Danielsson is a professor in the Department of Accounting and Finance at the
London School of Economics and a contributor to the Institute of Economic Studies
at the University of Iceland. Casper G. de Vries is a research fellow of Tinbergen
Institute and a professor at Erasmus University. Bjørn N. Jørgensen is an
assistant professor at Harvard Business School.

seek to forecast the entire return distribution, from which
only the tails are used for VaR inference.
Value at risk, however, is not about common
observations. Value at risk is about extremes. For most
parametric methods, the estimation of model parameters
is weighted to the center of the distribution and, perversely, a method that is specifically designed to predict
common events well is used to predict extremes, which
are neglected in the estimation. Nonparametric historical
simulation, where current portfolio weights are applied
to past observations of the returns on the assets in the
portfolio, does not suffer from these deficiencies. However,
it suffers from the problem of tail discreteness and from
the inability to provide predictions beyond the size of the
data window used.
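A minimal sketch of nonparametric historical simulation is given below; the two-asset portfolio, the simulated return history, and the 99 percent confidence level are illustrative assumptions.

```python
import random

def historical_var(asset_returns, weights, p=0.99):
    """Historical simulation: apply today's portfolio weights to past asset
    returns and read the VaR off the empirical loss distribution. The method
    cannot extrapolate beyond the largest loss in the data window."""
    portfolio_returns = [sum(w * r for w, r in zip(weights, day))
                         for day in asset_returns]
    losses = sorted(-r for r in portfolio_returns)
    index = max(int(p * len(losses)) - 1, 0)  # empirical p-quantile of losses
    return losses[index]

# Illustrative history: 1,000 days of returns on two assets, equal weights.
rng = random.Random(1)
history = [(rng.gauss(0, 0.01), rng.gauss(0, 0.02)) for _ in range(1000)]
print(round(historical_var(history, weights=(0.5, 0.5)), 4))
```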
Danielsson and de Vries (1997) apply semiparametric extreme value theory to the problem of
value at risk, where only the tail events are modeled
parametrically, while historical simulation is used for
common observations. Extreme value theory is especially
designed for extremum problems, and hence their semiparametric method combines the advantages of parametric
modeling of tail events and nonparametric modeling of
common observations. Danielsson and de Vries (1997)
develop estimators for both daily and multiday VaR predictions, and demonstrate that for their sample of U.S.


stock returns, the conditional parametric methods underestimate the VaR and hence extreme risk, while historical simulation suffers from undesirable statistical properties in the tails. The semiparametric method, however, performs better than either a parametric conditional variance-covariance method or nonparametric historical simulation.
Conditional parametric methods typically depend
on conditional normality for the derivation of multiperiod VaR estimates, primarily because of the self-additivity of the normal distribution. The Basle Accord
suggests using the so-called square-root-of-time rule to
obtain multiday VaR estimates from one-day VaR values,
where multiperiod volatility predictions are obtained by
multiplying one-day volatility by the square root of the
time horizon. However, relaxation of the normality
assumption results in this scaling factor becoming incorrect. Danielsson and de Vries (1997) argue that the
appropriate method for scaling up a single-day VaR to a
multiday VaR is an alpha-root rule, where alpha is the
number of finite-bounded moments, also known as the
tail index. This eventually leads to lower multiday VaRs
than would be obtained from the normal rule. Hence, the

normality assumption may be, counterintuitively, overly
conservative in a multiperiod analysis.
Danielsson, Hartmann, and de Vries (1998)
examine the impact of these conclusions in light of the
current market risk capital requirements and argue that
most current methodologies underestimate the VaR, and
are therefore ill-suited for market risk capital. Better VaR
methods are available, such as the tail-fitting method
proposed by Danielsson and de Vries (1997). However,
financial institutions may be reluctant to use these methods
because current market risk regulations may, perversely,
provide incentives for banks to underestimate the VaR.
Danielsson, Jørgensen, and de Vries (1998) investigate the question of why regulators are interested in
imposing VaR regulatory measures. Presumably, VaR
reporting is meant to counter systemic risk caused by
asymmetric information, that is, in a perfect market there
is no need for VaR reports. But, as we argue, even if
VaR reveals some hidden information, VaR-induced
recapitalization may not improve the value of the firm.
In our opinion, the regulatory basis for VaR is not well
understood and merits further study.

REFERENCES
The authors’ research papers are available on the World Wide Web at
http://www.hag.hi.is/~jond/research.

Danielsson, J., and C. G. de Vries. 1997. "Value at Risk and Extreme Returns." London School of Economics, Financial Markets Group Discussion Paper no. 273.

Danielsson, J., P. Hartmann, and C. G. de Vries. 1998. "The Cost of Conservatism: Extreme Value Returns, Value-at-Risk, and the Basle 'Multiplication Factor.'" RISK, January.

Danielsson, J., B. N. Jørgensen, and C. G. de Vries. 1998. "On the (Ir)relevancy of Value-at-Risk." London School of Economics, mimeo.

Greenspan, Alan. 1997. Discussion at Federal Reserve Bank of Kansas City symposium "Maintaining Financial Stability in a Global Economy."

The views expressed in this article are those of the authors and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Horizon Problems and Extreme Events
in Financial Risk Management
Peter F. Christoffersen, Francis X. Diebold, and Til Schuermann

I. INTRODUCTION
There is no one “magic” relevant horizon for risk management. Instead, the relevant horizon will generally vary by
asset class (for example, equity versus bonds), industry
(banking versus insurance), position in the firm (trading
desk versus chief financial officer), and motivation (private
versus regulatory), among other things, and thought must
be given to the relevant horizon on an application-by-application basis. But one thing is clear: in many risk
management situations, the relevant horizons are long—
certainly longer than just a few days—an insight incorporated, for example, in Bankers Trust’s RAROC system, for
which the horizon is one year.
Simultaneously, it is well known that short-horizon asset return volatility fluctuates and is highly
forecastable, a phenomenon that is very much at the center
of modern risk management paradigms. Much less is
known, however, about the forecastability of long-horizon
volatility, and the speed and pattern with which forecastability decays as the horizon lengthens. A key question

Peter F. Christoffersen is an assistant professor of finance at McGill University.
Francis X. Diebold is a professor of economics and statistics at the University of
Pennsylvania, a research fellow at the National Bureau of Economic Research,
and a member of the Oliver Wyman Institute. Til Schuermann is head of research
at Oliver, Wyman & Company.

arises: Is volatility forecastability important for long-horizon risk management, or is a traditional constant-volatility assumption adequate?
In this paper, we address this question, exploring the interface between long-horizon financial risk
management and long-horizon volatility forecastability
and, in particular, whether long-horizon volatility is
forecastable enough that volatility models are useful for long-horizon risk management. Specifically, we
report on recent relevant work by Diebold, Hickman,
Inoue, and Schuermann (1998); Christoffersen and Diebold
(1997); and Diebold, Schuermann, and Stroughair
(forthcoming).
To assess long-horizon volatility forecastability, it
is necessary to have a measure of long-horizon volatility,
which can be obtained in a number of ways. We proceed in
Section II by considering two ways of converting short-horizon volatility into long-horizon volatility: scaling and
formal model-based aggregation. The defects of those procedures lead us to take a different approach in Section III,
estimating volatility forecastability directly at the horizons
of interest, without making assumptions about the nature
of the volatility process, and arriving at a surprising conclusion: Volatility forecastability seems to decline quickly
with horizon, and seems to have largely vanished beyond
horizons of ten or fifteen trading days.


If volatility forecastability is not important for
risk management beyond horizons of ten or fifteen trading
days, then what is important? The really big movements
such as the U.S. crash of 1987 are still poorly understood,
and ultimately the really big movements are the most
important for risk management. This suggests the desirability of directly modeling the extreme tails of return
densities, a task potentially facilitated by recent advances
in extreme value theory. We explore that idea in Section IV,
and we conclude in Section V.

II. OBTAINING LONG-HORIZON
VOLATILITIES FROM SHORT-HORIZON
VOLATILITIES: SCALING AND FORMAL
AGGREGATION1
Operationally, risk is often assessed at a short horizon, such
as one day, and then converted to other horizons, such as
ten days or thirty days, by scaling by the square root of
horizon [for instance, as in Smithson and Minton (1996a,
1996b) or J.P. Morgan (1996)]. For example, to obtain a ten-day volatility, we multiply the one-day volatility by $\sqrt{10}$. Moreover, the horizon conversion is often significantly longer than ten days. Many banks, for example, link trading volatility measurement to internal capital allocation and risk-adjusted performance measurement schemes, which rely on annual volatility estimates. The temptation is to scale one-day volatility by $\sqrt{252}$. It turns out, however, that scaling is both inappropriate and misleading.

SCALING WORKS IN IID ENVIRONMENTS
Here we describe the restrictive environment in which scaling is appropriate. Let $v_t$ be a log price at time $t$, and suppose that changes in the log price are independently and identically distributed,

$$v_t = v_{t-1} + \varepsilon_t, \qquad \varepsilon_t \overset{iid}{\sim} (0, \sigma^2).$$

Then the one-day return is

$$v_t - v_{t-1} = \varepsilon_t,$$

with standard deviation $\sigma$. Similarly, the h-day return is

$$v_t - v_{t-h} = \sum_{i=0}^{h-1} \varepsilon_{t-i},$$

with variance $h\sigma^2$ and standard deviation $\sqrt{h}\,\sigma$. Hence, the "$\sqrt{h}$ rule": to convert a one-day standard deviation to an h-day standard deviation, simply scale by $\sqrt{h}$. For some applications, a percentile of the distribution of h-day returns may be desired; percentiles also scale by $\sqrt{h}$ if log changes are not only iid, but also normally distributed.

SCALING FAILS IN NON-IID ENVIRONMENTS
The scaling rule relies on one-day returns being iid, but high-frequency financial asset returns are distinctly not iid. Even if high-frequency portfolio returns are conditional-mean independent (which has been the subject of intense debate in the efficient markets literature), they are certainly not conditional-variance independent, as evidenced by hundreds of recent papers documenting strong volatility persistence in financial asset returns.2

To highlight the failure of scaling in non-iid environments and the nature of the associated erroneous long-horizon volatility estimates, we will use a simple GARCH(1,1) process for one-day returns,

$$y_t = \sigma_t \varepsilon_t$$
$$\sigma_t^2 = \omega + \alpha y_{t-1}^2 + \beta \sigma_{t-1}^2$$
$$\varepsilon_t \sim NID(0, 1),$$

$t = 1, \ldots, T$. We impose the usual regularity and covariance stationarity conditions, $0 < \omega < \infty$, $\alpha \geq 0$, $\beta \geq 0$, and $\alpha + \beta < 1$. The key feature of the GARCH(1,1) process is that it allows for time-varying conditional volatility, which occurs when $\alpha$ and/or $\beta$ is nonzero. The model has been fit to hundreds of financial series and has been tremendously successful empirically; hence its popularity. We hasten to add, however, that our general thesis—that scaling fails in the non-iid environments associated with high-frequency asset returns—does not depend in any way on a GARCH(1,1) structure. Rather, we focus on the GARCH(1,1) case because it has been studied the most intensely, yielding a wealth of results that enable us to illustrate the failure of scaling both analytically and by simulation.

Drost and Nijman (1993) study the temporal aggregation of GARCH processes.3 Suppose we begin with a sample path of a one-day return series, $\{ y_{(1)t} \}_{t=1}^{T}$, which follows the GARCH(1,1) process above.4 Then Drost and Nijman show that, under regularity conditions, the corresponding sample path of h-day returns, $\{ y_{(h)t} \}_{t=1}^{T/h}$, similarly follows a GARCH(1,1) process with

$$\sigma_{(h)t}^2 = \omega_{(h)} + \beta_{(h)} \sigma_{(h)t-1}^2 + \alpha_{(h)} y_{(h)t-1}^2,$$

where

$$\omega_{(h)} = h\omega\,\frac{1 - (\beta + \alpha)^h}{1 - (\beta + \alpha)}, \qquad \alpha_{(h)} = (\beta + \alpha)^h - \beta_{(h)},$$

and $|\beta_{(h)}| < 1$ is the solution of the quadratic equation

$$\frac{\beta_{(h)}}{1 + \beta_{(h)}^2} = \frac{a(\beta + \alpha)^h - b}{a\left(1 + (\beta + \alpha)^{2h}\right) - 2b},$$

where

$$a = h(1 - \beta)^2 + 2h(h - 1)\,\frac{(1 - \beta - \alpha)^2\,(1 - \beta^2 - 2\beta\alpha)}{(\kappa - 1)\left(1 - (\beta + \alpha)^2\right)} + 4\,\frac{\left(h - 1 - h(\beta + \alpha) + (\beta + \alpha)^h\right)\left(\alpha - \beta\alpha(\beta + \alpha)\right)}{1 - (\beta + \alpha)^2},$$

$$b = \left(\alpha - \beta\alpha(\beta + \alpha)\right)\frac{1 - (\beta + \alpha)^{2h}}{1 - (\beta + \alpha)^2},$$

and $\kappa$ is the kurtosis of $y_t$. The Drost-Nijman formula is neither pretty nor intuitive, but it is important, because it is the key to correct conversion of one-day volatility to h-day volatility. It is painfully obvious, moreover, that the $\sqrt{h}$ scaling formula does not look at all like the Drost-Nijman formula.

Despite the fact that the scaling formula is incorrect, it would still be very useful if it was an accurate approximation to the Drost-Nijman formula, because of its simplicity and intuitive appeal. Unfortunately, such is not the case. As $h \to \infty$, the Drost-Nijman results, which build on those of Diebold (1988), reveal that $\alpha_{(h)} \to 0$ and $\beta_{(h)} \to 0$, which is to say that temporal aggregation produces gradual disappearance of volatility fluctuations. Scaling, in contrast, magnifies volatility fluctuations.

A WORKED EXAMPLE
Let us examine the failure of scaling by $\sqrt{h}$ in a specific example. We parameterize the GARCH(1,1) process to be realistic for daily returns by setting $\alpha = 0.10$ and $\beta = 0.85$, which are typical of the parameter values obtained for estimated GARCH(1,1) processes. The choice of $\omega$ is arbitrary; we set $\omega = 1$.

The GARCH(1,1) process governs one-day volatility; now let us examine ninety-day volatility. In Chart 1, we show ninety-day volatilities computed in two different ways. We obtain the first (incorrect) ninety-day volatility by scaling the one-day volatility, $\sigma_t$, by $\sqrt{90}$. We obtain the second (correct) ninety-day volatility by applying the Drost-Nijman formula.

It is clear that although scaling by $\sqrt{h}$ produces volatilities that are correct on average, it magnifies the volatility fluctuations, whereas they should in fact be damped. That is, scaling produces erroneous conclusions of large fluctuations in the conditional variance of long-horizon returns, when in fact the opposite is true. Moreover, we cannot claim that the scaled volatility estimates are "conservative" in any sense; rather, they are sometimes too high and sometimes too low.

[Chart 1: Ninety-Day Volatility, Scaled and Actual. Top panel: scaled volatility, m = 90; bottom panel: true volatility, m = 90. Vertical axis: conditional standard deviation.]
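A minimal sketch of the worked example is given below; it implements the Drost-Nijman conversion exactly as reconstructed above, so any slip in those formulas would carry over to the code, and the closed-form expression used for $\kappa$ assumes normal innovations.

```python
from math import sqrt

def drost_nijman(omega, alpha, beta, kappa, h):
    """Convert daily GARCH(1,1) parameters into h-day weak GARCH(1,1)
    parameters, following the aggregation formulas reconstructed in the text."""
    s = alpha + beta
    omega_h = h * omega * (1 - s ** h) / (1 - s)
    a = (h * (1 - beta) ** 2
         + 2 * h * (h - 1) * (1 - beta - alpha) ** 2 * (1 - beta ** 2 - 2 * beta * alpha)
           / ((kappa - 1) * (1 - s ** 2))
         + 4 * (h - 1 - h * s + s ** h) * (alpha - beta * alpha * s) / (1 - s ** 2))
    b = (alpha - beta * alpha * s) * (1 - s ** (2 * h)) / (1 - s ** 2)
    c = (a * s ** h - b) / (a * (1 + s ** (2 * h)) - 2 * b)
    beta_h = (1 - sqrt(1 - 4 * c ** 2)) / (2 * c)  # root of c*x^2 - x + c = 0 with |x| < 1
    alpha_h = s ** h - beta_h
    return omega_h, alpha_h, beta_h

# Daily parameters from the worked example; kappa uses the standard
# normal-innovation GARCH(1,1) kurtosis formula (an assumption of this sketch).
alpha, beta, omega, h = 0.10, 0.85, 1.0, 90
kappa = 3 * (1 - (alpha + beta) ** 2) / (1 - (alpha + beta) ** 2 - 2 * alpha ** 2)
omega_h, alpha_h, beta_h = drost_nijman(omega, alpha, beta, kappa, h)
print(omega_h, alpha_h, beta_h)  # alpha_h and beta_h are tiny: fluctuations damp out
```

Consistent with the text, the implied ninety-day $\alpha_{(h)}$ and $\beta_{(h)}$ are close to zero, whereas naive $\sqrt{90}$ scaling simply stretches the one-day volatility fluctuations.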

FORMAL AGGREGATION HAS PROBLEMS
OF ITS OWN
One might infer from the preceding discussion that formal
aggregation is the key to converting short-horizon volatility estimates into good, long-horizon volatility estimates,
which could be used to assess volatility forecastability. In
general, such is not the case; formal aggregation has at least
two problems of its own. First, temporal aggregation formulae are presently available only for restrictive classes of
models; the literature has progressed little since Drost and
Nijman. Second, the aggregation formulae assume the
truth of the fitted model, when in fact the fitted model is
simply an approximation, and the best approximation to
h-day volatility dynamics is not likely to be what one gets
by aggregating the best approximation (let alone a mediocre
approximation) to one-day dynamics.

III. MODEL-FREE ASSESSMENT
OF VOLATILITY FORECASTABILITY
AT DIFFERENT HORIZONS5
The model-dependent problems of scaling and aggregating
daily volatility measures motivate the model-free investigation of volatility forecastability in this section. If the true
process is GARCH(1,1), we know that volatility is forecastable at all horizons, although forecastability will decrease
with horizon in accordance with the Drost-Nijman formula.
But GARCH is only an approximation, and in this section
we proceed to develop procedures that allow for assessment
of volatility forecastability across horizons with no assumptions made on the underlying volatility model.

THE BASIC IDEA
Our model-free methods build on the methods for evaluation of interval forecasts developed by Christoffersen
(forthcoming). Interval forecasting is very much at the
heart of modern financial risk management. The industry
standard value-at-risk measure is effectively the boundary
of a one-sided interval forecast, and just as the adequacy
of a value-at-risk forecast depends crucially on getting
the volatility dynamics right, the same is true for interval
forecasts more generally.


Suppose that we observe a sample path $\{ y_t \}_{t=1}^{T}$ of the asset return series $y_t$ and a corresponding sequence of one-step-ahead interval forecasts, $\{ (L_{t|t-1}(p), U_{t|t-1}(p)) \}_{t=1}^{T}$, where $L_{t|t-1}(p)$ and $U_{t|t-1}(p)$ denote the lower and upper limits of the interval forecast for time $t$ made at time $t-1$ with desired coverage probability $p$. We can think of $L_{t|t-1}(p)$ as a value-at-risk measure, and $U_{t|t-1}(p)$ as a measure of potential upside. The interval forecasts are subscripted by $t$ as they will vary through time in general: in volatile times a good interval forecast should be wide and in tranquil times it should be narrow, keeping the coverage probability, $p$, fixed.

Now let us formalize matters slightly. Define the hit sequence, $I_t$, as

$$I_t = \begin{cases} 1, & \text{if } y_t \in [L_{t|t-1}(p), U_{t|t-1}(p)] \\ 0, & \text{otherwise}, \end{cases}$$

for $t = 1, 2, \ldots, T$. We will say that a sequence of interval forecasts has correct unconditional coverage if $E[I_t] = p$ for all $t$, which is the standard notion of "correct coverage."

Correct unconditional coverage is appropriately viewed as a necessary condition for adequacy of an interval forecast. It is not sufficient, however. In particular, in the presence of conditional heteroskedasticity and other higher order dynamics, it is important to check for adequacy of conditional coverage, which is a stronger concept. We will say that a sequence of interval forecasts has correct conditional coverage with respect to an information set $\Omega_{t-1}$ if $E[I_t \mid \Omega_{t-1}] = p$ for all $t$. The key result is that if $\Omega_{t-1} = \{ I_{t-1}, I_{t-2}, \ldots, I_1 \}$, then correct conditional coverage is equivalent to $\{ I_t \} \overset{iid}{\sim} \text{Bernoulli}(p)$, which can readily be tested.

Consider now the case where no volatility dynamics are present. The optimal interval forecast is then constant, and given by $\{ (L(p), U(p)) \}$, $t = 1, \ldots, T$. In that case, testing for correct conditional coverage will reveal no evidence of dependence in the hit sequence, and it is exactly the independence part of the iid Bernoulli($p$) criterion that is designed to pick up volatility dynamics. If, however, volatility dynamics are present but ignored by a forecaster who erroneously uses the constant $\{ L(p), U(p) \}$ forecast, then a test for dependence in the hit sequence will reject the constant interval as an appropriate forecast: the ones and zeros in the hit sequence will tend to appear in time-dependent clusters corresponding to tranquil and volatile times.
It is evident that the interval forecast evaluation
framework can be turned into a framework for assessing
volatility forecastability: if a naive, constant interval forecast produces a dependent hit sequence, then volatility
dynamics are present.

MEASURING AND TESTING DEPENDENCE
IN THE HIT SEQUENCE
Now that we have established the close correspondence
between the presence of volatility dynamics and dependence in the hit sequence from a constant interval forecast,
it is time to discuss the measurement and testing of this
dependence. We discuss two approaches.
First, consider a runs test, which is based on
counting the number of strings, or runs, of consecutive
zeros and ones in the hit sequence. If too few runs are
observed (for example, 0000011111), the sequence exhibits
positive correlation. Under the null hypothesis of independence, the exact finite sample distribution of the number of
runs in the sequence has been tabulated by David (1947),
and the corresponding test has been shown by Lehmann
(1986) to be uniformly most powerful against a first-order
Markov alternative.
We complement the runs test by a second measure, which has the benefit of being constrained to the
interval [-1,1] and thus easily comparable across horizons
and sequences. Let the hit sequence be first-order Markov
with an arbitrary transition probability matrix. Then
dependence is fully captured by the nontrivial eigenvalue,
which is simply $S \equiv \pi_{11} - \pi_{01}$, where $\pi_{ij}$ is the probability of a $j$ following an $i$ in the hit sequence. $S$ is a natural persistence measure and has been studied by Shorrocks (1978) and Sommers and Conlisk (1979). Note that under independence $\pi_{01} = \pi_{11}$, so $S = 0$, and conversely, under strong positive persistence $\pi_{11}$ will be much larger than $\pi_{01}$, so $S$ will be large.
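A minimal sketch of this machinery is shown below: it forms the hit sequence for a constant ±2 standard deviation interval and computes the persistence measure $S$; the regime-switching simulated returns are only an illustrative stand-in for actual data.

```python
import random
from statistics import mean, stdev

def hit_sequence(returns, num_std=2.0):
    """Indicator of whether each return falls inside a constant +/- num_std
    standard deviation interval around the sample mean."""
    mu, sigma = mean(returns), stdev(returns)
    lower, upper = mu - num_std * sigma, mu + num_std * sigma
    return [1 if lower <= r <= upper else 0 for r in returns]

def persistence_eigenvalue(hits):
    """S = pi_11 - pi_01: probability of a hit following a hit minus the
    probability of a hit following a miss; zero under independence."""
    follows = {0: [], 1: []}
    for prev, curr in zip(hits[:-1], hits[1:]):
        follows[prev].append(curr)
    pi_11 = mean(follows[1]) if follows[1] else 0.0
    pi_01 = mean(follows[0]) if follows[0] else 0.0
    return pi_11 - pi_01

# Illustrative returns with volatility clustering: tranquil and volatile
# regimes of 250 days alternate, so misses should cluster in volatile spells.
rng = random.Random(42)
returns = [rng.gauss(0, 0.01 if (t // 250) % 2 == 0 else 0.03) for t in range(5000)]
print(round(persistence_eigenvalue(hit_sequence(returns)), 3))
```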

AN EXAMPLE: THE DOW JONES COMPOSITE
STOCK INDEX
We now put the volatility testing framework to use in an
application to the Dow Jones Composite Stock Index,
which comprises sixty-five major stocks (thirty industrials,
twenty transportations, and fifteen utilities) on the New
York Stock Exchange. The data start on January 1, 1974,
and continue through April 2, 1998, resulting in 6,327
daily observations.
We examine asset return volatility forecastability
as a function of the horizon over which the returns are calculated. We begin with daily returns and then aggregate to
obtain nonoverlapping h-day returns, $h = 1, 2, 3, \ldots, 20$. We set $\{ L(p), U(p) \}$ equal to $\pm 2$ standard deviations and
then compute the hit sequences. Because the standard
deviation varies across horizons, we let the interval vary
correspondingly. Notice that p might vary across horizons,
but such variation is irrelevant: we are interested only in
dependence of the hit sequence, not its mean.
At each horizon, we measure volatility forecastability using the P-value of the runs test—that is, the
probability of obtaining a sample that is less likely to conform to the null hypothesis of independence than does the
sample at hand. If the P-value is less than 5 percent, we
reject the null of independence at that particular horizon.
The top panel of Chart 2 on the next page shows the P-values
across horizons of one through twenty trading days. Notice
that despite the jaggedness of the line, a distinct pattern
emerges: at short horizons of up to a week, the P-value is
very low and thus there is clear evidence of volatility forecastability. At medium horizons of two to three weeks, the
P-value jumps up and down, making reliable inference
difficult. At longer horizons, greater than three weeks, we
find no evidence of volatility forecastability.
We also check the nontrivial eigenvalue. In order
to obtain a reliable finite-sample measure of statistical
significance at each horizon, we use a simulation-based
resampling procedure to compute the 95 percent confidence interval under the null hypothesis of no dependence
in the hit sequence (that is, the eigenvalue is zero). In the
bottom panel of Chart 2, we plot the eigenvalue at each

horizon along with its 95 percent confidence interval. The qualitative pattern that emerges for the eigenvalue is the same as for the runs test: volatility persistence is clearly present at horizons less than a week, probably present at horizons between two and three weeks, and probably not present at horizons beyond three weeks.

[Chart 2: Volatility Persistence across Horizons in the Dow Jones Composite Index. Top panel: P-value in runs test; bottom panel: eigenvalue with 95 percent confidence interval; horizontal axis: horizon in number of trading days. Notes: The hit sequence is defined relative to a constant ±2 standard deviation interval at each horizon. The top panel shows the P-value for a runs test of the hypothesis that the hit sequence is independent; the horizontal line corresponds to a 5 percent significance level. The bottom panel shows the nontrivial eigenvalue from a first-order Markov process fit to the hit sequence. The 95 percent confidence interval is computed by simulation.]

MULTI-COUNTRY ANALYSIS OF EQUITY, FOREIGN
EXCHANGE, AND BOND MARKETS
Christoffersen and Diebold (1997) assess volatility forecastability as a function of horizon for many more assets
and countries. In particular, they analyze stock, foreign
exchange, and bond returns for the United States, the
United Kingdom, Germany, and Japan, and they obtain
results very similar to those presented above for the Dow
Jones composite index of U.S. equities.

For all returns, the finite-sample P-values of the runs tests of independence tend to rise with the aggregation level, although the specifics differ somewhat depending on the particular return examined. As a rough rule of thumb, we summarize the results as saying that for aggregation levels of less than ten trading days we tend to reject independence, which is to say that return volatility is significantly forecastable, and conversely for aggregation levels greater than ten days.

The estimated transition matrix eigenvalues tell the same story: at very short horizons, typically from one to ten trading days, the eigenvalues are significantly positive, but they decrease quickly, and approximately monotonically, with the aggregation level. By the time one reaches ten-day returns—and often substantially before—the estimated eigenvalues are small and statistically insignificant, indicating that volatility forecastability has vanished.

IV. FORECASTING EXTREME EVENTS6
The quick decay of volatility forecastability as the forecast

horizon lengthens suggests that, if the risk management
horizon is more than ten or fifteen trading days, less energy
should be devoted to modeling and forecasting volatility
and more energy should be devoted to modeling directly
the extreme tails of return densities, a task potentially
facilitated by recent advances in extreme value theory
(EVT).7 The theory typically requires independent and
identically distributed observations, an assumption that
appears reasonable for horizons of more than ten or fifteen
trading days.
Let us elaborate. Financial risk management is
intimately concerned with tail quantiles (for example, the
value of the return, $y$, such that $P(Y > y) = .05$) and tail probabilities (for example, $P(Y > y)$, for a large value $y$).
Extreme quantiles and probabilities are of particular interest, because the ability to assess them accurately translates
into the ability to manage extreme financial risks effectively, such as those associated with currency crises, stock
market crashes, and large bond defaults.
Unfortunately, traditional parametric statistical
and econometric methods, typically based on estimation of

entire densities, may be ill-suited to the assessment of
extreme quantiles and event probabilities. Traditional
parametric methods implicitly strive to produce a good fit
in regions where most of the data fall, potentially at the
expense of a good fit in the tails, where, by definition, few
observations fall. Seemingly sophisticated nonparametric
methods of density estimation, such as kernel smoothing,
are also well known to perform poorly in the tails.
It is common, moreover, to require estimates of
quantiles and probabilities not only near the boundary of
the range of observed data, but also beyond the boundary.
The task of estimating such quantiles and probabilities
would seem to be hopeless. A key idea, however, emerges
from EVT: one can estimate extreme quantiles and probabilities by fitting a “model” to the empirical survival function of a set of data using only the extreme event data
rather than all the data, thereby fitting the tail and only
the tail.8 The approach has a number of attractive features,
including:
• the estimation method is tailored to the object of
interest—the tail of the distribution—rather than the
center of the distribution, and
• an arguably reasonable functional form for the tail can
be formulated from a priori considerations.
The upshot is that the methods of EVT offer hope for
progress toward the elusive goal of reliable estimates of
extreme quantiles and probabilities.
Let us briefly introduce the basic framework. EVT
methods of tail estimation rely heavily on a power law
assumption, which is to say that the tail of the survival
function is assumed to be a power law times a slowly varying function:
$$P(Y > y) = k(y)\, y^{-\alpha},$$

where the "tail index," $\alpha$, is a parameter to be estimated. That family includes, for example, $\alpha$-stable laws with $\alpha < 2$ (but not the Gaussian case, $\alpha = 2$).
Under the power law assumption, we can base an
estimator of α directly on the extreme values. The most
popular, by far, is due to Hill (1975). It proceeds by ordering the observations with $y_{(1)}$ the largest, $y_{(2)}$ the second largest, and so on, and forming an estimator based on the difference between the average of the $m$ largest log returns and the $m$-th largest log return:

$$\hat{\alpha} = \left[ \frac{1}{m} \sum_{i=1}^{m} \left( \ln(y_{(i)}) - \ln(y_{(m)}) \right) \right]^{-1}.$$

It is a simple matter to convert an estimate of α into
estimates of the desired quantiles and probabilities. The
Hill estimator has been used in empirical financial settings,
ranging from early work by Koedijk, Schafgans, and de Vries
(1990) to more recent work by Danielsson and de Vries
(1997). It also has good theoretical properties; it can be
shown, for example, that it is consistent and asymptotically
normal, assuming the data are iid and that m grows at a
suitable rate with sample size.
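A minimal implementation of the Hill estimator as reconstructed above is sketched below, together with a standard power-law extrapolation of a tail quantile; the simulated Student-t data, the choice of m, and the extrapolation formula are illustrative assumptions (in practice, the sensitivity of the estimate to m is itself an important diagnostic).

```python
import math
import random

def hill_estimator(data, m):
    """Hill (1975) tail-index estimator from the m largest observations:
    alpha_hat = [ (1/m) * sum_i (ln y_(i) - ln y_(m)) ]^(-1)."""
    top = sorted(data, reverse=True)[:m]
    return 1.0 / (sum(math.log(x) - math.log(top[-1]) for x in top) / m)

def tail_quantile(data, m, prob):
    """Quantile extrapolated under the assumed power-law tail,
    P(Y > y) ~ (m/n) * (y / y_(m))^(-alpha)."""
    n = len(data)
    y_m = sorted(data, reverse=True)[m - 1]
    alpha = hill_estimator(data, m)
    return y_m * (m / (n * prob)) ** (1.0 / alpha)

# Illustrative heavy-tailed sample: absolute Student-t(4) draws (true tail index about 4).
rng = random.Random(7)
sample = [abs(rng.gauss(0, 1) / math.sqrt(sum(rng.gauss(0, 1) ** 2 for _ in range(4)) / 4))
          for _ in range(5000)]
m = 200
print(round(hill_estimator(sample, m), 2), round(tail_quantile(sample, m, 0.001), 2))
```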
But beware: if tail estimation via EVT offers
opportunities, it is also fraught with pitfalls, as is any
attempt to estimate low-frequency features of data from
short historical samples. This has been recognized in other
fields, such as the empirical finance literature on long-run
mean reversion in asset returns (for instance, Campbell, Lo,
and MacKinlay [1997, Chapter 2]). The problem as relevant
for the present context—applications of EVT in financial
risk management—is that for performing statistical inference on objects such as a “once every hundred years”
quantile, the relevant measure of sample size is likely better approximated by the number of nonoverlapping hundred-year intervals in the data set than by the actual
number of data points. From that perspective, our data
samples are terribly small relative to the demands placed
on them by EVT.
Thus, we believe that best-practice applications of
EVT to financial risk management will benefit from awareness of its limitations as well as its strengths. When the
smoke clears, the contribution of EVT remains basic and
useful: it helps us to draw smooth curves through the
extreme tails of empirical survival functions in a way that is
consistent with powerful theory. Our point is simply that we
should not ask more of the theory than it can deliver.

V. CONCLUDING REMARKS
If volatility is forecastable at the horizons of interest, then
volatility forecasts are relevant for risk management. But
our results indicate that if the horizon of interest is more
than ten or fifteen trading days, depending on the asset
class, then volatility is effectively not forecastable. Our
results question the assumptions embedded in popular risk
management paradigms, which effectively assume much
greater volatility forecastability at long horizons than

appears consistent with the data, and suggest that for
improving long-horizon risk management, attention is
better focused elsewhere. One such area is the modeling of
extreme events, the probabilistic nature of which remains
poorly understood, and for which recent developments in
extreme value theory hold promise.

The views expressed in this article are those of the authors and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


ENDNOTES

We thank Beverly Hirtle for insightful and constructive comments, but we alone are responsible for remaining errors. The views expressed in this paper are those of the authors and do not necessarily reflect those of the International Monetary Fund.

1. This section draws on Diebold, Hickman, Inoue, and Schuermann (1997, 1998).

2. See, for example, the surveys of volatility modeling in financial markets by Bollerslev, Chou, and Kroner (1992) and Diebold and Lopez (1995).

3. More precisely, they define and study the temporal aggregation of weak GARCH processes, a formal definition of which is beyond the scope of this paper. Technically inclined readers should read “weak GARCH” whenever they encounter the word “GARCH” in this paper.

4. Note the new and more cumbersome, but necessary, notation: the subscript, which keeps track of the aggregation level.

5. This section draws on Christoffersen and Diebold (1997).

6. This section draws on Diebold, Schuermann, and Stroughair (forthcoming).

7. See the recent book by Embrechts, Klüppelberg, and Mikosch (1997), as well as the papers introduced by Paul-Choudhury (1998).

8. The survival function is simply 1 minus the cumulative density function, 1 − F(y). Note, in particular, that because F(y) approaches 1 as y grows, the survival function approaches 0.


REFERENCES

Andersen, T., and T. Bollerslev. Forthcoming. “Answering the Critics: Yes, ARCH Models Do Provide Good Volatility Forecasts.” INTERNATIONAL ECONOMIC REVIEW.

Bollerslev, T., R. Y. Chou, and K. F. Kroner. 1992. “ARCH Modeling in Finance: A Review of the Theory and Empirical Evidence.” JOURNAL OF ECONOMETRICS 52: 5-59.

Campbell, J. Y., A. W. Lo, and A. C. MacKinlay. 1997. THE ECONOMETRICS OF FINANCIAL MARKETS. Princeton: Princeton University Press.

Christoffersen, P. F. Forthcoming. “Evaluating Interval Forecasts.” INTERNATIONAL ECONOMIC REVIEW.

Christoffersen, P. F., and F. X. Diebold. 1997. “How Relevant Is Volatility Forecasting for Financial Risk Management?” Wharton Financial Institutions Center Working Paper no. 97-45.

Danielsson, J., and C. G. de Vries. 1997. “Tail Index and Quantile Estimation with Very High Frequency Data.” JOURNAL OF EMPIRICAL FINANCE 4: 241-57.

David, F. N. 1947. “A Power Function for Tests of Randomness in a Sequence of Alternatives.” BIOMETRIKA 34: 335-9.

Diebold, F. X. 1988. EMPIRICAL MODELING OF EXCHANGE RATE DYNAMICS. New York: Springer-Verlag.

Diebold, F. X., A. Hickman, A. Inoue, and T. Schuermann. 1997. “Converting 1-Day Volatility to h-Day Volatility: Scaling by h is Worse Than You Think.” Wharton Financial Institutions Center Working Paper no. 97-34.

———. 1998. “Scale Models.” RISK 11: 104-7. (Condensed and retitled version of Diebold, Hickman, Inoue, and Schuermann [1997].)

Diebold, F. X., and J. Lopez. 1995. “Modeling Volatility Dynamics.” In Kevin Hoover, ed., MACROECONOMETRICS: DEVELOPMENTS, TENSIONS AND PROSPECTS, 427-72. Boston: Kluwer Academic Press.

Diebold, F. X., T. Schuermann, and J. Stroughair. Forthcoming. “Pitfalls and Opportunities in the Use of Extreme Value Theory in Risk Management.” In P. Refenes, ed., COMPUTATIONAL FINANCE. Boston: Kluwer Academic Press.

Drost, F. C., and T. E. Nijman. 1993. “Temporal Aggregation of GARCH Processes.” ECONOMETRICA 61: 909-27.

Embrechts, P., C. Klüppelberg, and T. Mikosch. 1997. MODELLING EXTREMAL EVENTS. New York: Springer-Verlag.

Hill, B. M. 1975. “A Simple General Approach to Inference About the Tail of a Distribution.” ANNALS OF STATISTICS 3: 1163-74.

Koedijk, K. G., M. A. Schafgans, and C. G. de Vries. 1990. “The Tail Index of Exchange Rate Returns.” JOURNAL OF INTERNATIONAL ECONOMICS 29: 93-108.

Lehmann, E. L. 1986. TESTING STATISTICAL HYPOTHESES. 2d ed. New York: John Wiley.

Morgan, J.P. 1996. “RiskMetrics Technical Document.” 4th ed.

Paul-Choudhury, S. 1998. “Beyond Basle.” RISK 11: 89. (Introduction to a symposium on new methods of assessing capital adequacy, RISK 11: 90-107.)

Shorrocks, A. F. 1978. “The Measurement of Mobility.” ECONOMETRICA 46: 1013-24.

Smithson, C., and L. Minton. 1996a. “Value at Risk.” RISK 9: January.

———. 1996b. “Value at Risk (2).” RISK 9: February.


Sommers, P. M., and J. Conlisk. 1979. “Eigenvalue Immobility Measures
for Markov Chains.” JOURNAL OF MATHEMATICAL SOCIOLOGY 6:
253-76.


Methods for Evaluating Value-at-Risk
Estimates
Jose A. Lopez

I. CURRENT REGULATORY FRAMEWORK
In August 1996, the U.S. bank regulatory agencies
adopted the market risk amendment (MRA) to the 1988
Basle Capital Accord. The MRA, which became effective
in January 1998, requires that commercial banks with
significant trading activities set aside capital to cover the
market risk exposure in their trading accounts. (For further
details on the market risk amendment, see Federal Register
[1996].) The market risk capital requirements are to be
based on the value-at-risk (VaR) estimates generated by the
banks’ own risk management models.
In general, such risk management, or VaR, models
forecast the distributions of future portfolio returns. To fix
notation, let y t denote the log of portfolio value at time t.
The k-period-ahead portfolio return is ε t + k = y t + k – y t .
Conditional on the information available at time t, ε t + k
is a random variable with distribution f t + k . Thus, VaR
model m is characterized by f mt + k , its forecast of the
distribution of the k-period-ahead portfolio return.
VaR estimates are the most common type of forecast generated by VaR models. A VaR estimate is simply a
specified quantile (or critical value) of the forecasted
f mt + k . The VaR estimate at time t derived from model
m for a k-period-ahead return, denoted VaR mt ( k ,α ) , is

Jose A. Lopez, formerly an economist at the Federal Reserve Bank of New York, is
now an economist at the Federal Reserve Bank of San Francisco.

the critical value that corresponds to the lower α percent
tail of f mt + k . In other words, VaR estimates are forecasts of
the maximum portfolio loss that could occur over a given
holding period with a specified confidence level.
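As a simple illustration (a hypothetical sketch, not drawn from the MRA or from the paper), a VaR estimate for a model whose forecast distribution is normal with mean zero is just the lower α percent quantile of that forecast:

```python
from scipy.stats import norm

def var_estimate(sigma_k, alpha_pct=1.0, mean=0.0):
    """VaR_mt(k, alpha): the lower alpha-percent quantile of a normal forecast of the
    k-period-ahead portfolio return with the given mean and standard deviation."""
    return mean + sigma_k * norm.ppf(alpha_pct / 100.0)

# A forecast standard deviation of 2 percent for the k-period return implies a
# 1 percent VaR estimate of about -4.65 percent (a loss of roughly 4.65 percent).
print(var_estimate(sigma_k=0.02, alpha_pct=1.0))
```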
Under the “internal models” approach embodied
in the MRA, regulatory capital against market risk
exposure is based on VaR estimates generated by banks’
own VaR models using the standardizing parameters of a
ten-day holding period ( k = 10 ) and 99 percent coverage
( α = 1 ) . A bank’s market risk capital charge is thus based
on its own estimate of the potential loss that would not be exceeded with 99 percent certainty over the subsequent two-week period. The market risk capital that bank m must hold for time t + 1, denoted MRC_{m,t+1}, is set as the larger of VaR_mt(10, 1) or a multiple of the average of the previous sixty VaR_mt(10, 1) estimates, that is,

MRC_{m,t+1} = max[ VaR_mt(10, 1) ; S_mt × (1/60) Σ_{i=0}^{59} VaR_{m,t−i}(10, 1) ] + SR_mt,

where S mt is a multiplication factor and SR mt is an additional capital charge for the portfolio’s idiosyncratic credit
risk. Note that under the current framework S mt ≥ 3 .
The S mt multiplier explicitly links the accuracy of
a bank’s VaR model to its capital charge by varying over
time. S mt is set according to the accuracy of model m’s VaR
estimates for a one-day holding period ( k = 1) and 99 percent coverage, denoted VaR mt ( 1 ,1 ) or simply VaR mt .
S mt is a step function that depends on the number of
exceptions (that is, occasions when the portfolio return ε t + 1
is less than VaR mt ) observed over the last 250 trading days.
The possible number of exceptions is divided into three
zones. Within the green zone of four or fewer exceptions, a
VaR model is deemed “acceptably accurate,” and S mt
remains at its minimum value of three. Within the yellow
zone of five to nine exceptions, S mt increases incrementally
with the number of exceptions. Within the red zone of ten
or more exceptions, the VaR model is deemed to be “inaccurate,” and S mt increases to its maximum value of four.
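The following sketch illustrates the capital calculation described above. It is illustrative only: the yellow-zone increments for S_mt are an assumption (the MRA prescribes specific values not reproduced in the text), and the VaR figures are hypothetical.

```python
import numpy as np

def multiplier(exceptions):
    """Simplified S_mt step function: 3 in the green zone (four or fewer exceptions),
    4 in the red zone (ten or more), and an assumed linear ramp across the yellow zone
    (the MRA's actual yellow-zone increments are not reproduced in the text)."""
    if exceptions <= 4:
        return 3.0
    if exceptions >= 10:
        return 4.0
    return 3.0 + 0.2 * (exceptions - 4)           # 3.2, 3.4, ..., 3.8 for 5-9 exceptions

def market_risk_charge(var_history, s_mt, sr_mt=0.0):
    """MRC_{m,t+1} = max[VaR_mt(10,1); S_mt * average of the last 60 VaR_mt(10,1)] + SR_mt."""
    v = np.asarray(var_history, dtype=float)      # ten-day, 99 percent VaR, most recent last
    return max(v[-1], s_mt * v[-60:].mean()) + sr_mt

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    var_10d = rng.uniform(8.0, 12.0, size=60)     # hypothetical VaR estimates ($ millions)
    s = multiplier(6)                             # hypothetical backtest: six exceptions
    print("S_mt =", s, " MRC =", round(market_risk_charge(var_10d, s, sr_mt=2.0), 2))
```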

II. ALTERNATIVE EVALUATION METHODS
Given the obvious importance of VaR estimates to banks
and now their regulators, evaluating the accuracy of the
models underlying them is a necessary exercise. To date,
two hypothesis-testing methods for evaluating VaR estimates have been proposed: the binomial method, currently
the quantitative standard embodied in the MRA, and the
interval forecast method proposed by Christoffersen (forthcoming). For these tests, the null hypothesis is that the
VaR estimates in question exhibit a specified property
characteristic of accurate VaR estimates. If the null hypothesis is rejected, the VaR estimates do not exhibit the specified property, and the underlying VaR model can be said to
be “inaccurate.” If the null hypothesis is not rejected, then
the model can be said to be “acceptably accurate.”
However, for these evaluation methods, as with any
hypothesis test, a key issue is their statistical power, that is,
their ability to reject the null hypothesis when it is incorrect.
If the hypothesis tests exhibit low power, then the probability of misclassifying an inaccurate VaR model as “acceptably
accurate” will be high. This paper examines the power of
these tests within the context of a simulation exercise.
In addition, an alternative evaluation method that
is not based on a hypothesis-testing framework, but instead
uses standard forecast evaluation techniques, is proposed.
That is, the accuracy of VaR estimates is gauged by how
well they minimize a loss function that represents the
regulators’ concerns. Although statistical power is not relevant for this evaluation method, the related issues of
comparative accuracy and model misclassification are
examined within the context of a simulation exercise. The
simulation results are presented below, after the three
evaluation methods are described. (See Lopez [1998] for a
more complete discussion.)

EVALUATION OF VAR ESTIMATES BASED ON THE
BINOMIAL DISTRIBUTION
Under the MRA, banks will report their VaR estimates to
their regulators, who observe when actual portfolio losses
exceed these estimates. As discussed by Kupiec (1995),
assuming that the VaR estimates are accurate, such exceptions can be modeled as independent draws from a binomial
distribution with a probability of occurrence equal to 1 percent. Accurate VaR estimates should exhibit the property
that their unconditional coverage α∗ = x ⁄ 250, where x is
the number of exceptions, equals 1 percent. Since the probability of observing x exceptions in a sample of size 250
under the null hypothesis is
Pr(x) = (250 choose x) × (0.01)^x × (0.99)^(250−x),

the appropriate likelihood ratio statistic for testing whether α* = 0.01 is

LR_uc = 2 [ log( (α*)^x (1 − α*)^(250−x) ) − log( (0.01)^x (0.99)^(250−x) ) ].

Note that the LR_uc test is uniformly most powerful for a given sample size and that the statistic has an asymptotic χ²(1) distribution.
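A minimal sketch of the LR_uc calculation (illustrative only; the 3.84 cutoff shown is the asymptotic 5 percent chi-squared critical value, whereas the simulation exercise below uses finite-sample critical values):

```python
import numpy as np

def lr_uc(exceptions, n=250, target=0.01):
    """Likelihood ratio statistic for the null that the exception probability equals target."""
    x = exceptions
    a_star = x / n                                 # observed unconditional coverage

    def loglik(p):
        # Binomial log-likelihood kernel, treating 0*log(0) as 0.
        t1 = x * np.log(p) if x > 0 else 0.0
        t2 = (n - x) * np.log(1.0 - p) if x < n else 0.0
        return t1 + t2

    return 2.0 * (loglik(a_star) - loglik(target))

if __name__ == "__main__":
    for x in (0, 2, 5, 10):
        stat = lr_uc(x)
        flag = "rejects at the asymptotic 5 percent level" if stat > 3.84 else ""
        print(f"{x:2d} exceptions: LR_uc = {stat:6.2f}  {flag}")
```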

EVALUATION OF VAR ESTIMATES USING THE
INTERVAL FORECAST METHOD
VaR estimates are also interval forecasts of the lower 1 percent tail of f t + 1 , the one-step-ahead return distribution.
Interval forecasts can be evaluated conditionally or unconditionally, that is, with or without reference to the information available at each point in time. The LR uc test is an
unconditional test since it simply counts exceptions over
the entire period. However, in the presence of variance
dynamics, the conditional accuracy of interval forecasts is an

important issue. Interval forecasts that ignore variance
dynamics may have correct unconditional coverage, but at
any given time, they will have incorrect conditional coverage.
In such cases, the LR uc test is of limited use since it will
classify inaccurate VaR estimates as “acceptably accurate.”
The LR cc test, adapted from the more general test
proposed by Christoffersen (forthcoming), is a test of
correct conditional coverage. Given a set of VaR estimates,
the indicator variable I_{m,t+1} is constructed as

I_{m,t+1} = 1 if ε_{t+1} < VaR_mt, and I_{m,t+1} = 0 if ε_{t+1} ≥ VaR_mt.

Since accurate VaR estimates exhibit the property of correct conditional coverage, the I_{m,t+1} series must exhibit both correct unconditional coverage and serial independence. The LR_cc test is a joint test of these two properties. The relevant test statistic is LR_cc = LR_uc + LR_ind, which is asymptotically distributed χ²(2). The LR_ind statistic is the likelihood ratio statistic for the null hypothesis of serial independence against the alternative of first-order Markov dependence.
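The sketch below illustrates the LR_ind component under the standard first-order Markov construction; the returns and VaR estimates are hypothetical, and LR_cc would be obtained by adding the LR_uc statistic from the previous sketch.

```python
import numpy as np

def xlogy(x, y):
    """x * log(y), with the convention 0 * log(0) = 0."""
    return 0.0 if x == 0 else x * np.log(y)

def lr_ind(indicator):
    """LR statistic for serial independence of the exception indicator series against
    a first-order Markov alternative (the independence component of the LR_cc test)."""
    i = np.asarray(indicator, dtype=int)
    prev, curr = i[:-1], i[1:]
    n00 = int(np.sum((prev == 0) & (curr == 0)))
    n01 = int(np.sum((prev == 0) & (curr == 1)))
    n10 = int(np.sum((prev == 1) & (curr == 0)))
    n11 = int(np.sum((prev == 1) & (curr == 1)))
    p01 = n01 / (n00 + n01) if (n00 + n01) else 0.0
    p11 = n11 / (n10 + n11) if (n10 + n11) else 0.0
    p = (n01 + n11) / (n00 + n01 + n10 + n11)
    ll_markov = xlogy(n00, 1 - p01) + xlogy(n01, p01) + xlogy(n10, 1 - p11) + xlogy(n11, p11)
    ll_indep = xlogy(n00 + n10, 1 - p) + xlogy(n01 + n11, p)
    return 2.0 * (ll_markov - ll_indep)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    returns = rng.standard_normal(250)             # hypothetical one-day portfolio returns
    var_est = np.full(250, -2.32)                  # hypothetical 1 percent VaR estimates
    exceptions = (returns < var_est).astype(int)
    print("LR_ind =", round(lr_ind(exceptions), 3))
    # LR_cc = LR_uc + LR_ind is asymptotically chi-squared with two degrees of freedom.
```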

EVALUATION OF VAR ESTIMATES USING
REGULATORY LOSS FUNCTIONS
The loss function evaluation method proposed here is not
based on a hypothesis-testing framework, but rather on
assigning to VaR estimates a numerical score that reflects
specific regulatory concerns. Although this method forgoes
the benefits of statistical inference, it provides a measure
of relative performance that can be used to monitor the
performance of VaR estimates.
To use this method, the regulatory concerns of
interest must be translated into a loss function. The general
form of these loss functions is

C_{m,t+1} = f(ε_{t+1}, VaR_mt) if ε_{t+1} < VaR_mt, and C_{m,t+1} = g(ε_{t+1}, VaR_mt) if ε_{t+1} ≥ VaR_mt,

where f(x, y) and g(x, y) are functions such that f(x, y) ≥ g(x, y) for a given y. The numerical scores are constructed with a negative orientation, that is, lower values of C_{m,t+1} are preferred since exceptions are given higher scores than nonexceptions. Numerical scores are generated for individual VaR estimates, and the score for the complete regulatory sample is

C_m = Σ_{i=1}^{250} C_{m,t+i}.

Under very general conditions, accurate VaR estimates will generate the lowest possible numerical score. Once a loss function is defined and C_m is calculated, a benchmark can be constructed and used to evaluate the performance of a set of VaR estimates. Although many regulatory loss functions can be constructed, two are described below (see diagram).

Loss Function Implied by the Binomial Method
The loss function implied by the binomial method is

C_{m,t+1} = 1 if ε_{t+1} < VaR_mt, and C_{m,t+1} = 0 if ε_{t+1} ≥ VaR_mt.

Note that the appropriate benchmark is the expected value of C_{m,t+1}, which is E[C_{m,t+1}] = 0.01, and for the full sample, E[C_m] = 2.5. As before, only the number of exceptions is of interest, and the same information contained in the binomial method is included in this loss function.

[Diagram: Loss Functions of Interest. The diagram graphs both the binomial and the magnitude loss functions against ε_{t+1}. The binomial loss function is equal to 1 for ε_{t+1} < VaR_mt and zero otherwise. For the magnitude loss function, a quadratic term is added to the binomial loss function for ε_{t+1} < VaR_mt.]


Loss Function That Addresses the Magnitude of the Exceptions
As noted by the Basle Committee on Banking Supervision
(1996), the magnitude as well as the number of exceptions
are a matter of regulatory concern. This concern can be
readily incorporated into a loss function by introducing a
magnitude term. Although several are possible, a quadratic
term is used here, such that

C_{m,t+1} = 1 + (ε_{t+1} − VaR_mt)² if ε_{t+1} < VaR_mt, and C_{m,t+1} = 0 if ε_{t+1} ≥ VaR_mt.

Thus, as before, a score of one is imposed when an
exception occurs, but now, an additional term based on its
magnitude is included. The numerical score increases with
the magnitude of the exception and can provide additional
information on how the underlying VaR model forecasts the
lower tail of the underlying f t + 1 distribution. Unfortunately,
the benchmark based on the expected value of C mt + 1 cannot be determined easily, because the f t + 1 distribution is
unknown. However, a simple, operational benchmark can
be constructed and is discussed in Section III.
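To illustrate how the two loss functions convert a regulatory sample of returns and VaR estimates into numerical scores, here is a small sketch; the data are hypothetical, and the loss functions follow the formulas above.

```python
import numpy as np

def binomial_loss(returns, var_est):
    """C_{m,t+1} under the binomial loss function: 1 for an exception, 0 otherwise."""
    return (np.asarray(returns) < np.asarray(var_est)).astype(float)

def magnitude_loss(returns, var_est):
    """C_{m,t+1} under the magnitude loss function: 1 plus the squared size of the
    exception when one occurs, 0 otherwise."""
    r, v = np.asarray(returns), np.asarray(var_est)
    return np.where(r < v, 1.0 + (r - v) ** 2, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    eps = rng.standard_normal(250)                 # hypothetical regulatory sample of returns
    var_est = np.full(250, -2.32)                  # hypothetical 1 percent VaR estimates
    print("binomial score C_m :", binomial_loss(eps, var_est).sum(), "(benchmark E[C_m] = 2.5)")
    print("magnitude score C_m:", round(magnitude_loss(eps, var_est).sum(), 2))
```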

Simulation Exercise
To analyze the ability of the three evaluation methods to
gauge the accuracy of VaR estimates and thus avoid VaR
model misclassification, a simulation exercise is conducted. For the two hypothesis-testing methods, this
amounts to analyzing the power of the statistical tests,
that is, determining the probability with which the tests
reject the null hypothesis when it is incorrect. With
respect to the loss function method, its ability to evaluate
VaR estimates is gauged by how frequently the numerical
score for VaR estimates generated from the true data-generating process (DGP) is lower than the score for
VaR estimates from alternative models. If the method is
capable of distinguishing between these scores, then the
degree of VaR model misclassification will be low.
In the simulation exercise, the portfolio value y t + 1
is specified as y t + 1 = y t + ε t + 1, where the portfolio return
ε t + 1 is generated by a GARCH(1,1)-normal process. That
is, h_{t+1}, the variance of ε_{t+1}, has dynamics of the form h_{t+1} = 0.075 + 0.10 ε_t² + 0.85 h_t. The true DGP is one of
eight VaR models evaluated and is designated as the “true”
model, or model 1.
The next three alternative models are homoskedastic VaR models. Model 2 is simply the standard normal
distribution, and model 3 is the normal distribution with a
variance of 1½. Model 4 is the t-distribution with six
degrees of freedom, which has fatter tails than the normal
distribution and an unconditional variance of 1½.
The next three models are heteroskedastic VaR
models. For models 5 and 6, the underlying distribution is
the normal distribution, and h mt + 1 evolves over time as an
exponentially weighted moving average of past squared
returns, that is,
h_{m,t+1} = (1 − λ) Σ_{i=0}^{∞} λ^i ε_{t−i}² = λ h_{mt} + (1 − λ) ε_t².

This type of VaR model, which is used in the well-known
RiskMetrics calculations (see J.P. Morgan [1996]), is calibrated here by setting λ equal to 0.94 and 0.99 for models 5 and 6, respectively. Model 7 has the same variance
dynamics as the true model, but instead of using the normal distribution, it uses the t-distribution with six
degrees of freedom. Model 8 is the VaR model based on
historical simulation using 500 observations, that is, using
the past 500 observed returns, the α percent VaR estimate
is observation number 5α of the sorted returns.
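A rough sketch of the data-generating process and three of the competing VaR models follows; the initialization choices and random seeds are arbitrary assumptions, not taken from the paper.

```python
import numpy as np

Z99 = -2.326   # 1 percent quantile of the standard normal distribution

def simulate_garch(n, omega=0.075, a=0.10, b=0.85, seed=4):
    """Simulate returns and conditional variances from the GARCH(1,1)-normal DGP."""
    rng = np.random.default_rng(seed)
    h = omega / (1.0 - a - b)                      # start at the unconditional variance (1.5)
    eps, hs = np.empty(n), np.empty(n)
    for t in range(n):
        hs[t] = h
        eps[t] = np.sqrt(h) * rng.standard_normal()
        h = omega + a * eps[t] ** 2 + b * h
    return eps, hs

def var_true(h_next):
    """Model 1: 1 percent VaR implied by the true conditional variance."""
    return Z99 * np.sqrt(h_next)

def var_ewma(eps, lam=0.94):
    """Models 5 and 6: exponentially weighted moving average variance (RiskMetrics-style)."""
    h = float(np.var(eps[:50]))                    # crude initialization (an assumption)
    for e in eps:
        h = lam * h + (1.0 - lam) * e ** 2
    return Z99 * np.sqrt(h)

def var_historical(eps, window=500, alpha_pct=1):
    """Model 8: the alpha percent VaR is observation number 5*alpha of the sorted returns."""
    return np.sort(eps[-window:])[5 * alpha_pct - 1]

if __name__ == "__main__":
    eps, hs = simulate_garch(1000)
    h_next = 0.075 + 0.10 * eps[-1] ** 2 + 0.85 * hs[-1]
    print("true-model VaR       :", round(var_true(h_next), 3))
    print("EWMA (lambda = 0.94) :", round(var_ewma(eps), 3))
    print("historical (500 obs.):", round(var_historical(eps), 3))
```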
In the table, panel A presents the power analysis of
the hypothesis-testing methods. The simulation results
indicate that the hypothesis-testing methods can have relatively low power and thus a relatively high probability of
misclassifying inaccurate VaR estimates as “acceptably
accurate.” Specifically, the tests have low power against the
calibrated normal models (models 5 and 6) since their
smoothed variances are quite similar to the true GARCH
variances. The power against the homoskedastic alternatives is quite low as well.
For the proposed loss function method, the simulation results indicate that the degree of model misclassification generally mirrors that of the other methods, that is,
this method has a low-to-moderate ability to distinguish
between the true and alternative VaR models. However, in
certain cases, it provides additional useful information on

SIMULATION RESULTS FOR GARCH(1,1)-NORMAL DGP
Units: percent

                               Homoskedastic              Heteroskedastic          Historical
Model                          2       3       4          5       6       7        8

PANEL A: POWER OF THE LR_UC AND LR_CC AGAINST ALTERNATIVE VAR MODELS (a)
LR_uc                          52.3    21.4    30.5       5.1     10.3    81.7     23.2
LR_cc                          56.3    25.4    38.4       6.7     11.9    91.6     33.1

PANEL B: ACCURACY OF VAR ESTIMATES USING REGULATORY LOSS FUNCTIONS (b)
Binomial loss function         91.7    41.3    18.1       52.2    48.9    0        38.0
Magnitude loss function        96.5    56.1    29.1       75.3    69.4    0        51.5

Notes: The results are based on 1,000 simulations. Model 1 is the true data-generating process, ε_{t+1} | Ω_t ~ N(0, h_{t+1}), where h_{t+1} = 0.075 + 0.10 ε_t² + 0.85 h_t. Models 2, 3, and 4 are the homoskedastic models N(0, 1), N(0, 1.5), and t(6), respectively. Models 5 and 6 are the two calibrated heteroskedastic models with the normal distribution, and model 7 is a GARCH(1,1)-t(6) model with the same parameter values as model 1. Model 8 is the historical simulation model based on the previous 500 observations.

(a) The size of the tests is set at 5 percent using finite-sample critical values.

(b) Each row reports the percentage of simulations for which the alternative VaR estimates have a higher numerical score than the “true” model, that is, the percentage of the simulations for which the alternative VaR estimates are correctly classified as inaccurate.

the accuracy of the VaR estimates under the defined loss
function. For example, note that the magnitude loss function classifies VaR estimates correctly more often than the binomial loss function. This result is not surprising given that it incorporates the additional information
on the magnitude of the exceptions into the evaluation.
The ability to use such additional information, as well as
the flexibility with respect to the specification of the
loss function, makes a reasonable case for the use of the
loss function method in the regulatory evaluation of VaR
estimates.

III. IMPLEMENTATION OF THE LOSS
FUNCTION METHOD
Under the current regulatory framework, regulators
observe { ε_{t+i}, VaR_{m,t+i} }_{i=1}^{250} for bank m and thus can
construct, under the magnitude loss function, C m . However, for a realized value C m∗ , aside from the number of
exceptions, not much inference on the performance of these
VaR estimates is available. It is unknown whether C m∗ is a
“high” or “low” number.

To create a comparative benchmark, the distribution of C m , which is a random variable due to the random
observed portfolio returns, can be constructed. Since each
observation has its own distribution, additional assumptions must be imposed in order to analyze f ( C m ), the distribution of C m . Specifically, the observed returns can be
assumed to be independent and identically distributed
(iid); that is, ε t + 1 ∼ f . This is quite a strong assumption,
especially given the heteroskedasticity often found in
financial time series. However, the small sample size of 250
mandated by the MRA allows few other choices.
Having made the assumption that the observed
returns are iid, their empirical distribution ˆf ( ε t + 1 ) can be
estimated parametrically, that is, a specific distributional
form is assumed, and the necessary parameters are estimated from the available data. For example, if the returns
are assumed to be normally distributed with zero mean, the variance can be estimated such that f̂(ε_{t+1}) is N(0, σ̂²). Once f̂(ε_{t+1}) has been determined, the empirical distribution of the numerical score C_m under the distributional assumptions, denoted f̂(C_m), can be generated since the distribution of the observed returns and the corresponding VaR estimates are now available. For example, if ε_{t+1} ~ N(0, σ̂²), then the corresponding VaR estimates are VaR_{f̂t} = −2.32 σ̂. Using this information, f̂(C_m) can then be constructed via simulation by forming 1,000 values of the numerical score C_m, each based on 250 draws from f̂(ε_{t+1}) and the corresponding VaR estimates.
Once f̂(C_m) has been generated, the empirical quantile q̂_m = F̂(C_m*), where F̂(C_m) is the cumulative distribution function of f̂(C_m), can be calculated for the observed value C_m*. This empirical quantile provides a performance benchmark, based on the distributional assumptions, that can be incorporated into the evaluation of the
underlying VaR estimates. In order to make this benchmark
operational, the regulator should select a threshold quantile
above which concerns regarding the performance of the VaR
estimates are raised. This decision should be based both on
the regulators’ preferences and the severity of the distributional assumptions used. If q̂ m is below the threshold that
regulators believe is appropriate, say, below 80 percent, then

FRBNY ECONOMIC POLICY REVIEW / OCTOBER 1998

123

C m∗ is “typical” under both the assumptions on ˆf ( ε t + 1 )
and the regulators’ preferences. If q̂ m is above the threshold,
then C m∗ can be considered atypical, and the regulators
should take a closer look at the underlying VaR model.
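A compact sketch of this benchmarking procedure follows; the returns and reported VaR estimates are hypothetical, while the magnitude loss function and the −2.32σ̂ VaR rule follow the text above.

```python
import numpy as np

def magnitude_score(returns, var_est):
    """Sample score C_m under the magnitude loss function used above."""
    r = np.asarray(returns)
    return float(np.sum(np.where(r < var_est, 1.0 + (r - var_est) ** 2, 0.0)))

def benchmark_quantile(observed_returns, observed_score, n_sims=1000, seed=0):
    """Empirical quantile of an observed score C_m* under the assumption that returns
    are iid N(0, sigma^2), with sigma estimated from the observed sample and the
    corresponding VaR estimates set to -2.32 * sigma_hat."""
    rng = np.random.default_rng(seed)
    sigma_hat = np.std(observed_returns, ddof=1)
    var_hat = -2.32 * sigma_hat
    sims = np.array([magnitude_score(sigma_hat * rng.standard_normal(250), var_hat)
                     for _ in range(n_sims)])
    return float(np.mean(sims <= observed_score))  # empirical CDF evaluated at C_m*

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    sample = rng.standard_normal(250)              # hypothetical observed returns
    reported_var = np.full(250, -2.0)              # hypothetical (understated) reported VaR
    c_star = magnitude_score(sample, reported_var)
    q_hat = benchmark_quantile(sample, c_star)
    print("observed C_m* =", round(c_star, 2), "  empirical quantile q_hat =", q_hat)
    # A q_hat above the regulators' threshold (say, 0.80) would prompt a closer look.
```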
Note that this method for evaluating VaR estimates does not replace the hypothesis-testing methods, but
instead provides complementary information, especially
regarding the magnitude of the exceptions. In addition,
the flexibility of this method permits many other concerns
to be incorporated into the analysis via the choice of the
loss function.

IV. CONCLUSION
As implemented in the United States, the market risk
amendment to the Basle Capital Accord requires that commercial banks with significant trading activity provide their
regulators with VaR estimates from their own internal
models. The VaR estimates will be used to determine the
banks’ market risk capital requirements. This development
clearly indicates the importance of evaluating the accuracy
of VaR estimates from a regulatory perspective.

The binomial and interval forecast evaluation
methods are based on a hypothesis-testing framework and
are used to test the null hypothesis that the reported VaR
estimates are “acceptably accurate,” where accuracy is
defined by the test conducted. As shown in the simulation
exercise, the power of these tests can be low against reasonable alternative VaR models. This result does not negate
their usefulness, but it does indicate that the inference
drawn from this analysis has limitations.
The proposed loss function method is based on
assigning numerical scores to the performance of the VaR
estimates under a loss function that reflects the concerns of
the regulators. As shown in the simulation exercise, this
method can provide additional useful information on the
accuracy of the VaR estimates. Furthermore, it allows the
evaluation to be tailored to specific interests that regulators
may have, such as the magnitude of the observed exceptions. Since these methods provide complementary information, all three could be useful in the regulatory
evaluation of VaR estimates.

REFERENCES

Basle Committee on Banking Supervision. 1996. “Supervisory Framework for the Use of ‘Backtesting’ in Conjunction with the Internal Models Approach to Market Risk Capital Requirements.” Manuscript, Bank for International Settlements.

Christoffersen, P. F. Forthcoming. “Evaluating Interval Forecasts.” INTERNATIONAL ECONOMIC REVIEW.

Federal Register. 1996. “Risk-Based Capital Standards: Market Risk.” Vol. 61: 47357-78.

J.P. Morgan. 1996. RISKMETRICS TECHNICAL DOCUMENT. 4th ed. New York: J.P. Morgan.

Kupiec, P. 1995. “Techniques for Verifying the Accuracy of Risk Measurement Models.” JOURNAL OF DERIVATIVES 3: 73-84.

Lopez, J. A. 1998. “Methods for Evaluating Value-at-Risk Estimates.” Federal Reserve Bank of New York Research Paper no. 9802.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.



Commentary
Beverly Hirtle

I am very pleased to speak here today and to comment on
these three very interesting and constructive papers dealing
with value-at-risk modeling issues. In my view, each paper
is an excellent example of what academic research has to
tell practitioners and supervisors about the practical problems of constructing value-at-risk models. Each paper
examines a particular aspect of value-at-risk modeling or
validation, and offers important insights into the very real
issues that can arise when specifying these models and
when considering their use for supervisory purposes. In
that sense, the papers make important contributions to
our understanding of how these models are likely to work
in practice.

DANIELSSON, DE VRIES, AND JØRGENSEN
The Danielsson, de Vries, and Jørgensen paper examines
some key issues surrounding the question of how well
current state-of-the-art, value-at-risk models capture the
behavior of the tails of the distribution of profit and loss,
that is, those rare but important instances in which large
losses are realized. As the paper points out, this question is
a fundamental one in the world of value-at-risk modeling,

Beverly Hirtle is a vice president at the Federal Reserve Bank of New York.

since both risk managers and supervisors are presumably
quite concerned about such events. In fact, one of the key
motivations for the development of value-at-risk models
was to be able to answer the question, If something goes
really wrong, how much money am I likely to lose? Put
more technically, risk managers and the senior management of financial institutions wanted to be able to assess
both the probability that large losses would occur and the
extent of losses in the event of unfortunate movements in
markets. When supervisors began considering the use of
these models for risk-based capital purposes, the fundamental questions were much the same. Thus, for all these
reasons, the ability to model the tails of the distribution
accurately is an important concern.
As the Danielsson et al. paper shows, this ability is
especially key when there is suspicion that the distribution might feature “fat tails.” As you know, the phrase
fat tails refers to the situation in which the actual probability of experiencing a loss of a given size—generally, a large
loss that would be considered to have a low probability
of occurring—is greater than the probability predicted
by the distribution assumed in the value-at-risk model.
Obviously, this disparity would be a matter of concern for
risk managers and for supervisors who would like to use
value-at-risk models for risk-based capital purposes.


The paper suggests a method for addressing this
situation. I will not go into the details of the analysis, but
the paper proposes a method of estimating the overall
distribution of potential profits and losses that essentially
combines fairly standard methods for specifying the
middle of the distribution with an alternative approach for
estimating the tails. The paper then tests this modeling
approach using random portfolios composed of U.S.
equities and concludes that, at least for these portfolios, the
“tail estimator” approach outperforms value-at-risk models
based on a normal distribution and historical simulation.
When thinking about the practical implications of
the proposed tail estimator technique, at least one significant question occurs to me. The empirical experiments
reported in the paper are based on a fairly large data sample
of 1,500 trading-day observations, or about six years of historical data. While this long data history may be available
for certain instruments, it strikes me that these are more
data than are likely to be available for at least some of the
key risk factors that could influence the behavior of many
financial institutions’ portfolios, particularly when regime
shifts and major market breaks are taken into account.
Thus, the question that arises is, How well would the
proposed tail estimator approach perform relative to more
standard value-at-risk techniques when used on an historical data set more typical of the size used by financial
institutions in their value-at-risk models, say, one to three
years of data? At its heart, the question I am asking is
whether the tail estimator approach would continue to
perform significantly better than other value-at-risk
methods under the more typical conditions facing financial
institutions, both in terms of data availability and in terms
of more complex portfolios. This is a question on which
future research in this area might focus.

CHRISTOFFERSEN, DIEBOLD,
AND SCHUERMANN
The Christoffersen, Diebold, and Schuermann paper
addresses another key practical issue in value-at-risk
modeling, namely, whether the volatility of important
financial market variables such as stock price indices and
exchange rates is forecastable. By asking whether volatility
is forecastable, the paper essentially asks whether there
is value to using recently developed econometric techniques—such as some form of GARCH estimation—to
try to improve the forecast of the next period’s volatility,
or whether it makes more sense to view volatility as
being fairly constant over the long run. In technical
terms, the question concerns whether conditional volatility
estimates, which place more weight on recent financial
market data, outperform unconditional volatility estimates,
which are based on information from a fairly long historical
observation period.
The answer, as the paper makes clear, is that it
depends. Specifically, it depends on the horizon—or holding
period—being examined. The results in the paper indicate
that for holding periods of about ten days or more, there is
little evidence that volatility is forecastable and, therefore,
that more complex estimation techniques are warranted.
For shorter horizons, in contrast, the paper concludes that
volatility dynamics play an important role in our understanding of financial market behavior.
The basic message of the paper—that the appropriate estimation technique depends on the holding period
used in the value-at-risk estimate—implies that there is no
simple response to the question, What is the best way to
construct value-at-risk models? The answer will clearly
vary with the value-at-risk estimates’ purpose.
As valuable as the contribution of the Christoffersen
et al. paper is, there are some extensions that would link
the work even more closely to the real world issues that
supervisors and risk managers are likely to face. In particular, the analysis is based on examinations of the behavior
of individual financial time series, such as equity price
indices, exchange rates, and U.S. Treasury bond returns.
Essentially, the analysis considers each individual financial
variable as a very simple portfolio consisting of just one
instrument. An interesting extension would be to see how
or whether the conclusions of the analysis would change if
more complex portfolios were considered. That is, would
the conclusions be altered if the volatility of portfolios of
multiple instruments were considered?
The results already suggest that the ability to
forecast volatility is somewhat dependent on the financial

variable in question—for instance, Treasury bond returns
appear to have forecastable volatility for holding periods
as long as twenty days, compared with about ten days for
some of the other variables tested. It would be interesting, then, to build on this observation by constructing
portfolios composed of a mixture of instruments that
more closely mirror the portfolio compositions that
financial institutions are likely to have in practice. Such
an experiment presumes, of course, that the risk manager
is interested in knowing whether the volatility of the
portfolio can be forecast, as opposed to the volatility of
individual financial variables. In practice, risk managers
and supervisors may be interested in knowing the answer
to both questions.

LOPEZ
Finally, the paper by my colleague Jose Lopez addresses
another important area in the world of value at risk: model
validation. The paper explores the question, How can we
assess the accuracy and performance of a value-at-risk
model? To answer this question, it is first necessary to
define what we mean by “accuracy.” As the paper points
out, there are several potential definitions. First, by accuracy, we could mean, how well does the model measure a
particular percentile of the profit-and-loss distribution?
This is the definition that has been incorporated into the
market risk capital requirements through the so-called
backtesting process. As the paper points out, approaches to
assessing model accuracy along this dimension have
received considerable attention from both practitioners and
researchers, and the properties of the associated statistical
tests have been explored in several studies.
However, the main contribution of the Lopez paper
is its suggestion that alternative approaches to evaluating
the performance of value-at-risk models are possible. For
instance, another potential approach involves specifying a
characteristic of value-at-risk models that a risk manager or
a supervisor may be particularly concerned about—say, the
model’s ability to forecast the size of very large losses—and
designing a method of evaluating the model’s performance
according to this criterion. Such approaches are not formal
hypothesis tests, but instead involve specifying what is

known as a “loss function,” which captures the particular
concerns of a risk manager, supervisor, or other interested
party. In essence, a loss function is a shorthand method of
calculating a numerical score for the performance of a
value-at-risk model.
The results in the Lopez paper indicate that this
loss function approach can be a useful complement to more
traditional hypothesis-testing approaches. I will not go
over the details of his analysis, but the loss function
approach appears to be able to provide additional information that could allow observers to separate accurate and
inaccurate value-at-risk models. The important conclusion
here is not that the loss function approach is superior to
more traditional hypothesis-testing methods or that it
should be used in place of these methods. Instead, the
appropriate conclusion, which is spelled out in the paper,
is that the loss function approach is a potentially useful
supplement to these more formal statistical methods.
A further implication of the analysis is that the
assessment of model performance can vary depending on
who is doing the assessing and what issues or characteristics are of particular concern to the assessor. Each interested
party could assess model performance using a different loss
function, and the judgments made by these different
parties could vary accordingly.
Before moving on to my concluding remarks, I
would like to discuss briefly the material in the last section
of the Lopez paper. This last section proposes a method for
implementing the loss function approach under somewhat
more realistic conditions than those assumed in the first
section of the paper. Specifically, the last section proposes a
method for calibrating the loss function in the entirely
realistic case in which the “true” underlying distribution of
profits and losses is unknown. Using a simulation technique, the paper demonstrates how such an approach could
be used in practice, and offers some illustrations of the type
of information about model accuracy that the approach
could provide.
The material in this last section is a promising
beginning, but before the actual usefulness of this application of the loss function approach can be assessed, it seems
necessary to go beyond the relatively stylized simulation
framework presented in the paper. The ideal case would
be to use actual profit-and-loss data from a real financial
institution’s portfolio to rerun the experiments presented in
the paper. Admittedly, such data are unlikely to be readily
available outside financial institutions, which makes such
testing difficult. However, the issue of whether the proposed
loss function approach actually provides useful additional
information about model performance is probably best
assessed using real examples of the type of portfolio data
that would be encountered if the method was actually
implemented.

CONCLUDING REMARKS
In making a few brief concluding remarks about the
lessons that can be drawn from these three papers, I would
like to point out two themes that I see running through
the papers’ results. First, as discussed above, the papers
highlight the point that in the world of value-at-risk modeling, there is no single correct way of doing things. The
papers illustrate that the “right approach” often depends
on the question that is being asked and the circumstances
influencing the concerns of the questioner. The most
important contribution of these papers is their helping us
to understand what the “right answer” might be in certain
situations, whether that situation is the presence of a fat-tailed distribution or different holding period horizons.
Furthermore, the papers illustrate that in some situations,
multiple approaches may be required to get a full picture
of the behavior of a given portfolio or the performance of a
particular model. In both senses, the three papers in this
session have helped to provide concrete guidance on how to
make such choices as circumstances vary.

The second theme that I see emerging from these
papers is a little less direct than the issues I have just discussed. In my view, the papers reinforce the point that
value-at-risk modeling—indeed probably most types of
risk modeling—is a dynamic process, with important
innovations and insights occurring along the way. It has
been several years since I myself first started working on
value-at-risk issues, as part of the original team that developed the internal models approach to market risk capital
charges. Even at that stage, many financial institutions had
already devoted considerable time and resources—over
periods spanning several years—to the development of the
models they were using for internal risk management.
Despite this long history, these papers clearly indicate that
serious thinking about value at risk is still very much a live
issue, with innovations and new insights continuing to
come about.
For that reason, probably no value-at-risk model can ever be considered complete or final; it is always a matter of keeping an eye on the most recent developments and
incorporating them where appropriate. This is probably a
pretty obvious observation to those of you who are involved
in risk modeling on a hands-on basis. Nonetheless, it is an
important observation to keep in mind as new studies
emerge illustrating new shortcomings of old approaches
and new approaches to old problems. These studies—such
as the three presented here today—do not reflect the failure
of past modeling efforts, but instead demonstrate the
importance of independent academic research into the
practical questions facing risk managers, supervisors, and
others interested in risk modeling.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Pilot Exercise—Pre-Commitment
Approach to Market Risk
Jill Considine

An international group of ten banking organizations (the
“Participating Institutions”) participated in a pilot (the
“Pilot”) of the pre-commitment approach to capital
requirements for market risks (the “Pre-Commitment
Approach”). The Pre-Commitment Approach was described
in the request for comments published by the Board of
Governors of the Federal Reserve System (the “Federal
Reserve Board”) in 60 Fed. Reg. 38142 (July 25, 1995). In
brief, under the Pre-Commitment Approach, banks would
specify the amount of capital they wished to allocate to
cover market risk exposures over a given period, subject to
penalties if trading losses over that period exceeded this
precommitted amount.
The Pilot was organized by The New York Clearing
House Association (the “Clearing House”). The Participating
Institutions were BankAmerica Corporation, Bankers Trust
New York Corporation, the Chase Manhattan Corporation, Citicorp, First Chicago NBD Corporation, First
Union Corporation, the Fuji Bank Limited, J.P. Morgan &
Co. Incorporated, NationsBank Corporation, and Swiss
Bank Corporation. This is their report on the Pilot.

SUMMARY
Set forth below in Part I is a discussion of the background
of the Pilot; in Part II, conclusions arising out of the conduct
of the Pilot; and in Part III, the Participating Institutions’
views as to the next steps. The Pilot left the Participating
Institutions with three core conclusions:
• that the Pre-Commitment Approach is a viable alternative to the internal models approach for establishing the capital adequacy of a trading business for
regulatory purposes. When properly structured and
refined, it should be implemented as an alternative,
and not an “add-on,” to existing capital standards;
• that, for progress to be made, it is essential that the
bank regulatory agencies participate actively with
the banking industry in the effort to refine how the
Pre-Commitment Approach would be implemented
in practice; and
• that the most important remaining question requiring an answer is what penalty would result for an
institution that incurs losses in its trading business
exceeding its pre-committed amount for a relevant
period.

I. BACKGROUND

Jill Considine is president of The New York Clearing House Association L.L.C.

The complexity and diversity of activities conducted by
banking organizations and other financial institutions have
developed at a rapid pace in recent years. It has become
increasingly apparent to the Participating Institutions,
and increasingly recognized by bank regulators as well,
that a standardized “one-size-fits-all” regulatory approach,
whether as to capital or other matters, is becoming less and
less appropriate. With regard to bank capital standards for
market risks, the Basle supervisors recognized this view in
1995 by developing the internal models approach as an
alternative to the standardized model issued two years
earlier. The Pre-Commitment Approach builds upon the
logic of the internal models approach by having each
banking organization develop its capital requirements in
relation to the organization’s own activities. By relying
on economic incentives instead of on fixed rules, the
Pre-Commitment Approach stands at the opposite end of
the spectrum from the one-size-fits-all approach.
In a comment letter to the Federal Reserve Board
dated October 31, 1995, the member banks of the Clearing
House suggested that the Federal Reserve Board and other
regulators consider adoption of the Pre-Commitment
Approach for two reasons. First, the Pre-Commitment
Approach might constitute a way to establish effectively
a relationship between an institution’s calculation of
value at risk for management purposes and prudent capital
requirements for regulatory purposes. Second, the Pre-Commitment Approach by its nature results in capital requirements for market risks tailored to the particular circumstances of each institution; it thereby solves the one-size-fits-all problem of the standardized model in the Basle
capital standards while avoiding the inaccuracies created
by the rigid, uniform quantitative standards imposed by
the internal models approach. The letter also suggested
that one or more institutions apply the Pre-Commitment
Approach on a trial basis; the suggestion was the genesis of
the Pilot described in this report.
The purpose of the Pilot was to provide further
information and experience to the Federal Reserve Board,
the Federal Deposit Insurance Corporation (“FDIC”), and
the Office of the Comptroller of the Currency (the
“OCC”)—collectively, the “U.S. Agencies”—as well as to
the Ministry of Finance, the Bank of Japan in Japan, and
the Federal Banking Commission in Switzerland—
together with the U.S. Agencies, the “Agencies”—as well
as to the Participating Institutions themselves, as to the
usefulness and viability of the Pre-Commitment Approach
for regulatory purposes as applied to the Participating
Institutions’ trading portfolios and activities. In addition,
the appropriate relationship between (i) “value at risk”
and other measurements of risk, on the one hand, and
(ii) the appropriate regulatory capital level, on the other,
is unique to each institution and its circumstances. It
was hoped that the Pilot would generate practical experience concerning that relationship for the Participating
Institutions.
The Pilot was conducted under the assumption
that, in practice, the Pre-Commitment Approach would be
a substitute for other market risk capital standards, and
not an additional capital measurement or requirement to
be added to other capital standards or requirements. In
addition, the Clearing House, as well as several of the
Participating Institutions individually, are on record as
believing that the appropriate penalty for exceeding
pre-committed capital levels is disclosure by the affected
institution that a loss exceeding its pre-committed capital
amount for the relevant period has occurred. The Participating Institutions conducted the Pilot under the assumption that the penalty would be disclosure.
Prior to commencing the Pilot, the Participating
Institutions held several meetings with the U.S. Agencies
to discuss the upcoming Pilot, how it should be conducted,
and what it might accomplish. The non-U.S. Participating
Institutions met with the relevant Agencies in their countries as well. Following these meetings, the Participating
Institutions agreed upon the purpose, scope, and mechanics
of the Pilot.
In particular, the Participating Institutions agreed
that the Pilot would be conducted for four quarterly measurement periods (“Measurement Periods”) corresponding
to calendar quarters as well as to customary reporting periods
for both call report purposes and reporting under the Securities Exchange Act of 1934. The Measurement Periods
were (i) October 1, 1996, through December 31, 1996;
(ii) January 1, 1997, through March 31, 1997; (iii) April 1,
1997, through June 30, 1997; and (iv) July 1, 1997,
through September 30, 1997.

The Pilot was conducted by the Participating
Institutions on a consolidated basis. Accordingly, pre-committed capital amounts and related P&L Changes (as
defined below) were identified for, and took into account,
the consolidated trading operation, including activities in
bank subsidiaries as well as Section 20 subsidiaries.
Prior to the commencement of each Measurement
Period, (i) each Participating U.S. Institution identified
in writing to the Board and to the Agency that is the
primary regulator for its lead bank subsidiary (together,
its “Primary Regulators”), as well as to the Clearing
House, its pre-committed capital amount for the upcoming
Measurement Period; and (ii) each non-U.S. Participating
Institution identified to the Agency that is its primary
regulator (its “Primary Regulator”), as well as to the
Clearing House, its pre-committed capital amount for the
upcoming Measurement Period. That amount was eventually
compared with the change in the relevant Participating
Institution’s trading profits and losses (the “P&L Change”)
for the relevant Measurement Period based upon all of such
Participating Institution’s consolidated trading activities
(both proprietary and for its customers), not just its
proprietary account. Accordingly, the P&L Change took
into account, in addition to net gains or losses from
proprietary trading, (i) brokerage fees, (ii) dealer spreads,
(iii) net interest income before taxes associated with trading positions, and (iv) the net change between the beginning and end of the Measurement Period in the
Participating Institution’s reserves maintained against its
trading activities.

The pre-committed capital amount identified by a
Participating Institution for a Measurement Period covered
both general market risk and specific risk arising out of such
Participating Institution’s trading portfolios and activities
for the relevant period.1 This approach is consistent with
defining the P&L Change with which a pre-committed
capital amount is compared as the change in the relevant
Participating Institution’s trading profits and losses for the
relevant Measurement Period from all sources and risks.
Each Participating Institution delivered to the
Agency that is its primary regulator an “Individual Institution Report” for each Measurement Period. These Individual
Institution Reports contained both pre-committed capital
amounts and P&L Changes for each Measurement Period.
Thus, the reports made possible a simple comparison of the
pre-committed capital amount for each Measurement
Period with, if applicable, the negative cumulative P&L
Change calculated as of the end of such Measurement
Period. Each Participating Institution reported its P&L
Change for each Measurement Period irrespective of whether
the P&L Change was positive (a profit) or negative (a loss).2
The Clearing House also prepared and distributed
to all of the Agencies and to the Participating Institutions
an “Aggregate Data Report.” The Aggregate Data Report
is cumulative (see table). It shows, for each Participating
Institution (identified by number instead of name for
confidentiality reasons) and Measurement Period, the ratio
of such Participating Institution’s P&L Change to its pre-committed capital amount for the relevant Measurement
Period.

PRE-COMMITMENT PILOT EXERCISE: AGGREGATE DATA REPORT

        Fourth-Quarter 1996   First-Quarter 1997   Second-Quarter 1997   Third-Quarter 1997
Bank    P&L:PCA Ratio         P&L:PCA Ratio        P&L:PCA Ratio         P&L:PCA Ratio
1       0.56                  1.21                 1.39                  1.09
2       2.27                  1.20                 2.18                  0.96
3       3.56                  3.79                 3.25                  3.61
4       0.44                  0.59                 0.74                  0.84
5       1.84                  2.92                 1.89                  1.81
6       0.42                  0.68                 0.75                  0.54
7       0.81                  1.01                 1.12                  1.12
8       0.77                  0.42                 1.15                  0.91
9       5.43                  5.89                 5.11                  6.60
10      1.46                  1.99                 1.36                  1.88

Notes: P&L is trading profit and loss on consolidated trading activities for the Measurement Period. PCA is the pre-committed capital amount for market risk for the Measurement Period.


II. CONCLUSIONS FROM THE PILOT
The Participating Institutions drew the following conclusions from the Pilot of the Pre-Commitment Approach:
1. In the view of the Participating Institutions, steps
should be taken to implement the Pre-Commitment
Approach, when properly structured and refined,
as a replacement for existing market risk capital
requirements. The Pilot demonstrated that the
Pre-Commitment Approach is a viable alternative
to the internal models approach for establishing
the capital adequacy of a trading business for regulatory purposes. The Participating Institutions
believe that the Pilot demonstrated that the
Pre-Commitment Approach provides strong incentives for prudent risk management and more
efficient allocation of capital as compared with
other existing capital standards. The Participating
Institutions were able to establish and report in a
timely manner pre-committed capital amounts
and P&L Changes for the relevant Measurement
Periods.
2. The Pilot in effect assigned to the Participating
Institutions the responsibility for determining an
appropriate level of capital, free of any regulatory
preconceptions as to what that specific level
should be. As a result of having to focus on an
appropriate amount of capital, the Pilot contributed to the development and depth of the Participating Institutions’ thinking as to the purpose of
capital and the distinction between the economic
capital maintained for the benefit of shareholders
to accommodate the variability of revenue and
income and the regulatory capital available to
protect the safety and soundness of the financial
system from the effects of unanticipated losses.
3. At the outset of the Pilot, it was anticipated that
the Aggregate Data Report would include the
ratio of the pre-committed capital amount to the
market risk capital requirement for each Participating Institution in each Measurement Period.
This turned out not to be feasible because the
Participating Institutions became certified to use
the internal models approach for market risk capital
requirements at different times. Nevertheless,
each Participating Institution has, on an informal basis, compared its pre-committed capital amount
with its estimated market risk capital requirement
under the internal models approach; generally,
pre-committed capital amounts were significantly
less than the market risk capital requirements
estimated to apply under the market risk provisions. The Participating Institutions believe that
the results of the Pilot suggest that the “3X”
multiplier, as well as the specific risk component,
even after the Basle Committee’s revision dated
September 17, 1997, lead to excessive regulatory
capital requirements for their trading positions.
4. As reflected in the Aggregate Data Report, no
Participating Institution reported a negative P&L
Change exceeding its pre-committed capital
amount. The Participating Institutions recognize
that the Pilot was conducted during a period of
moderate market volatility and generally favorable
trading results reported by financial institutions.
Nonetheless, the pre-committed capital amounts
were calculated to cover losses stemming from
unusual spikes in volatility and market reversals,
and the Participating Institutions would not
change the procedures, methods, and vetting processes applied during the Pilot in light of the
unsettled markets in October 1997 following the
conclusion of the Pilot.
5. The ratios of P&L Changes to pre-committed capital amounts varied significantly. For example, the
ratios reported by Participating Institution no. 9
were generally five times those of Participating
Institution no. 4. The Participating Institutions are
not uncomfortable with the differences. Such differences arise from differences among the institutions
in the nature of their trading books, the varying
risk appetites and risk management techniques
among firms, differing ratios of proprietary trading revenues to customer flow revenues among
firms, and differing views as to the relationship
between economic and regulatory capital. It would
be of interest to know whether the Agencies,
which have access to the full spectrum of the data
underlying the Aggregate Data Report, have
additional insights as to the sources of differences
among the Participating Institutions, which did not
share their own underlying data with each other.

III. LOOKING FORWARD
The Participating Institutions believe that the Pre-Commitment Approach is a viable alternative to the internal
models approach for determining the capital adequacy of a
trading business, and that steps should be taken to refine
and ultimately implement the Pre-Commitment Approach.
Before further effort by the banking industry can be justified or progress made, it is essential that the Agencies
participate actively in the effort to refine how the Pre-Commitment Approach will be implemented in practice.
Assuming the Agencies concur with the Participating Institutions' views, implementation of the Pre-Commitment Approach requires that the Agencies confirm
what penalties would apply if a banking institution violates the criteria for capital adequacy specified in the Pre-Commitment Approach. The Participating Institutions
believe that disclosure is the appropriate penalty, and they
conducted the Pilot under the assumption that disclosure
would indeed be the penalty. It would be useful to discuss
with the Agencies whether they concur with this view, and
how they believe such disclosure might occur.

Finally, although the Pre-Commitment Approach
was initially proposed (and the Pilot was conducted) for
the market risk of trading businesses, the Participating
Institutions believe that the benefits of the Approach are
likely to exist when applied to other risks of trading businesses. The Pre-Commitment Approach goes directly to
the basic question of whether a business possesses adequate
capital to absorb unanticipated losses. The pre-committed
capital as applied to a business covers any risk—market,
specific, operational, legal, settlement—that has the
potential to create a loss. As a result, the Pre-Commitment
Approach avoids many of the complications and inefficiencies generated when capital charges are set separately for
each category of risk. Furthermore, institutions differ in how
they measure and manage the component risks, and the
correlations between the risks likely will vary according to
each institution’s business mix. The Pre-Commitment
Approach recognizes these differences while providing
incentives to ensure that minimum prudential standards
are maintained within the industry.


ENDNOTES

1. A Participating Institution’s pre-committed capital amount for a
Measurement Period did not cover, however, foreign exchange and
commodities positions outside the trading account (activities that are
covered in the market risk rule that was recently adopted).

2. If the Pre-Commitment Approach is implemented, only a negative
cumulative P&L Change for a Measurement Period having an absolute
value exceeding the relevant Participating Institution’s pre-committed
capital amount for such Measurement Period would give rise to a
disclosure requirement or other penalty.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Value at Risk and Precommitment:
Approaches to Market Risk Regulation


Arupratan Daripa and Simone Varotto

1. INTRODUCTION
Traditionally, regulation of banks has focused on the risk
entailed in bank loans. Loans are typically nontraded
assets. In recent years, another component of bank assets
has become increasingly important: assets actively traded
in the financial markets.1 These assets form the “trading
book” of a bank, in contrast to the “banking book,” which
includes the nontraded assets such as loans. Though for
most large banks the trading book is still relatively small
compared with the banking book, its rising importance
makes the market risk of banks an important regulatory
concern.
In January 1996, the European Union (EU)
adopted rules to regulate the market risk exposure of
banks, setting risk-based capital requirements for the trading books of banks and securities houses. At this point, one
must ask what the purpose of such regulatory capital is.
We proceed under the hypothesis that the purpose of regulatory capital is to provide a buffer for contingencies
involving large losses, in order to protect both depositors
and the system as a whole by reducing the likelihood that
the system will fail. In this paper, we look at two different ways of calculating bank capital for market risk exposures and compare their performance in delivering an adequate cover for large losses.

Arupratan Daripa is a lecturer in the Department of Economics at Birkbeck College, and Simone Varotto is an analyst in the Regulatory Policy Division of the Bank of England.
The approach taken by the EU is to use a "hard-link" regime that sets a relation between exposure and
capital requirement exogenously. The adopted requirements, known as the standardised approach, laid down
rules for calculating the capital requirement for each
separate risk category (that is, U.K. equities, U.S. equities, U.K. interest rate risk, and so on). These are added
together to give the overall requirement. A weakness of
this method is that it does not take into account the
diversification benefits of holding different risks in the
same portfolio, and thus yields an excessive capital requirement for a large diversified player. One way to correct for
this problem is to use the value-at-risk (VaR) models that
some banks have developed to measure overall portfolio
risk. The Basle Supervisors’ Committee has now agreed to
offer an alternative regime, with capital requirements
based on such internal VaR models, and the EU is considering whether to follow suit.
While the measure of risk exposure employed by
the two regimes is different, in both approaches the regulator lays down the parameters for the calculation of the
capital requirement for a given exposure. Thus, both
regimes embody a hard link.


Under VaR, the capital requirement for a particular
portfolio is calculated using the internal risk management
models of the banks.2 For any portfolio, the aim is to
estimate a level of potential loss over a particular time
period that would only be exceeded with a given probability. Both the probability and the period are laid down by
the regulator. Basle has set these at 1 percent and ten days,
respectively. The capital requirement is based on this
potential loss.3
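To make the calculation concrete, here is a minimal sketch (ours, not the authors'; the portfolio value and volatility are hypothetical) of a parametric VaR at those parameters under an assumption of normally distributed returns, together with the multiplier of three noted in endnote 3.

```python
# A minimal sketch (not the Basle formula itself): a parametric value-at-risk
# estimate at the 1 percent / ten-day horizon described above, assuming
# normally distributed portfolio returns. Figures are hypothetical.
from scipy.stats import norm

portfolio_value = 1_000_000_000   # $1 billion trading portfolio (assumed)
daily_volatility = 0.007          # 0.7% daily return volatility (assumed)
horizon_days = 10
confidence = 0.99

# Scale daily volatility to the ten-day horizon (square-root-of-time rule)
horizon_volatility = daily_volatility * horizon_days ** 0.5
var_99_10day = portfolio_value * horizon_volatility * norm.ppf(confidence)
print(f"99%/10-day VaR: ${var_99_10day:,.0f}")

# Under the internal models approach, the capital requirement is based on this
# figure, with the multiplier of (at least) three noted in endnote 3.
print(f"Indicative capital charge (3 x VaR): ${3 * var_99_10day:,.0f}")
```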
But using VaR comes at a price. The regulator
must try to ensure that the internal model used to calculate
risk is accurate. Otherwise, banks might misrepresent their
risk exposure. However, back-testing to check the accuracy
of an internal VaR model is difficult in the sense that a
large number of observations are needed before an accurate
judgment can be made about the model.4 This motivated
economists Kupiec and O’Brien (1997) of the Federal
Reserve Board to put forward a new “precommitment”
approach (PCA) that proposes the use of a “soft link.” Such
a link is not externally imposed, but arises endogenously.

In the case of the proposed precommitment approach, the link between exposures held and the capital backing them is induced by the threat of penalties whenever trading losses exceed a level prespecified by the bank (known as the precommitment capital).
Specifically, under PCA, banks are asked to choose a level of capital to back their trading books for a given period of time (for example, one quarter). If the cumulative losses of the trading book exceed the chosen cover at any time during the period, the banks are penalised, possibly by fines. The chosen capital is thus a "precommitment" level, beyond which penalties are imposed. The task of the regulator is to choose an appropriate schedule of penalties to induce a desirable choice of cover for each level of risk. The banks then position themselves in terms of risk and capital choices for the trading book. The idea is attractive because it does not require the regulator to estimate the level of trading book risk of any particular bank or to approve the firm's model, and it promotes a more "hands-off" regulation.

2. AGENCY PROBLEMS AND FRAUD
This paper examines whether principal-agent problems between the shareholders and the managers in banks would undermine the use of a capital regime relying on incentives for the shareholders.5 In particular, it looks at whether the management might choose to run positions that were excessive relative to the capital of the bank. This is not a question of illicit activity such as the hiding of positions, which no capital regime will deal with, but whether the managers, because of concerns about market share, their own bonuses, etc., might on occasion take excessive risk. For example, a very large position might be taken on the assumption that it could be traded out of in minutes. Hard-link regimes avoid this issue because the positions taken at any time must be consistent with the amount of capital available to back them according to a formula laid down by the regulators. There is no scope for judgment by the managers. The scope for such judgment is an advantage in PCA. Depending on the effectiveness of the incentives, however, it could also be a weakness.

3. HARD LINKS AND SOFT LINKS:
A POTENTIAL TRADE-OFF
PCA not only circumvents the problems of back-testing,
but also gives the banks much greater freedom in choosing
the portfolios they wish to carry. Since the trading desks
of banks are likely to be more adept at estimating risks of
various trades, it seems inefficient to impose hard links.
While these advantages of PCA have been discussed in the literature, another aspect of this soft-link
approach seems to have received little attention. The flexibility of a soft-link approach such as PCA comes from the
fact that it is not directly prescriptive, but creates incentives through the use of penalties. In more general terms,
PCA tries to solve what is known as a “mechanism design”
problem. It attempts to specify a mechanism (in this case,
a penalty framework that the banks take into account
in choosing portfolio risk and committed capital) that
would make it incentive-compatible for the banks to
choose the socially desirable risk profile. The success of
such a programme depends on how well the regulator

anticipates the strategic opportunities that a mechanism
might create.
In other words, while soft-link approaches are
flexible and not subject to measurement problems, they
create a host of strategic issues. To build a successful soft-link regulatory policy, one must recognise all possible
conflicts of interest that might arise subsequently, and
provide incentives to align them with the objectives of
the regulator.
The first step toward building an optimal soft-link policy is to analyse the incentive effects of PCA in a
detailed model of the conflicts of interest within the bank.
An example of such a model can be found in Daripa and
Varotto (1998a).
In Daripa and Varotto, we find that switching to
PCA from a hard-link approach does entail a trade-off. On
the one hand, the switch would allow firms greater scope to
choose portfolios that were appropriate given their expertise and market liquidity. On the other hand, the switch
could also increase the likelihood that large players have
insufficient capital to cover market spikes. One issue is
whether key features of the soft-link approach could be
combined with certain features of a hard-link approach in
order to circumvent certain incentive problems.

4. SEPARATION OF OWNERSHIP
AND CONTROL IN LARGE BANKS:
THE AGENCY PROBLEM
A large part of the corporate finance literature explores the
corporate control problem. The problem is empirically well
documented and theoretically well understood. The typical
solution to agency problems is to use incentive contracts
(see, for example, Gibbons and Murphy [1992], Jensen and
Murphy [1990], Garen [1994], and the survey by Jensen
and Warner [1988]). A corporate control problem arises
whenever ownership is separate from the decision-making
body. In many large corporations, ownership is diffuse and
decisions are taken by managers.
As in most large corporations, an integral feature
of large modern banks is the separation of owners from
day-to-day decision making. The ownership is diffuse—

there are numerous small shareholders who have little
impact on most decisions. For example, in the United
Kingdom, shareholders rarely have more than 2 to 3 percent of the shares in any one bank. Even relatively large
shareholders would in general have hardly any impact on
day-to-day risk taking. It is the incentives of, say, the
traders of the bank that determine what specific strategies
they might adopt on a particular day. Thus, it is important
to see to what extent the owners can control their actions.
However, in regulating banks, little attention
has been paid so far to such internal control problems
and their effect on the success of the regulatory mechanism. There is a good reason for this lack of attention.
Regulation usually takes the form of an exogenous specification for capital for each level of estimated risk carried
by the bank (combined with some form of inspection to
ensure that the rules were adhered to). As Daripa and
Varotto (1998a) show, regulation by such a hard link is
not sensitive to agency problems.6 But this is no longer
true when we consider a soft-link approach. In Kupiec
and O’Brien (1997), the regulator interacts with banks
intended as homogenous entities. Shareholders and managers are not considered as separate centres of interest.
This leaves aside the important issue of the effects of the
incentive structure within the bank. Indeed, under
PCA, the generation of the right incentives is at the
very heart of the problem. Thus agency-related control
problems become central issues and must be addressed
in order to gain a clear understanding of the regulatory
incentives that would be generated.
As a control device, the owners write contracts
with managers, and then the managers make most of the trading decisions. Moreover, managers cannot usually
be fined (that is, paid negative salaries) in the event of a
loss.7 Thus, decisions about trading-book risk are taken by
managers with limited liability, while the owners have to
suffer the losses in the trading book and pay the penalty in
the case of a breach under PCA.
This fact implies that to study the effectiveness
of the incentive structure generated by PCA, it is no
longer sufficient to consider the bank as a single entity whose actions are influenced directly by the regulatory
incentives. Without explicitly modeling the agency
structure and the nature of optimal incentive contracts in
the bank, the effect of regulatory policies on large banks
is difficult to gauge.
In other words, to evaluate a soft-link regulatory
scheme, the appropriate question to ask relates to the effect
of the regime on the incentive structure within the bank.
An analysis of this question would tell us which regulatory
objectives are filtered through, and what aspects of the regulatory mechanism need further modification. In this
paper, we aim to provide such an analysis.

5. SUMMARY OF THE RESULTS
In Daripa and Varotto (1998a), we investigate the above
issues in a simple principal-agent framework. We obtain
the following results.

5.1 AGENCY INCENTIVES UNDER
A HARD-LINK APPROACH
First, we show that conflicts of interest within the bank8
have no implications for hard-link policies. The regulator
sets a capital requirement for each level of estimated risk.
At any point in time, the risk cannot exceed the level consistent with the given capital. It is easy to see that this is
true irrespective of the incentive structure in the bank.
Clearly, when regulators are relying on models specified by
the firms to generate capital requirements there may be
some scope for managers to produce results that downplay
the losses. But the managers’ scope is severely limited. The
regulators lay down the amount of returns data that must be used (one year minimum) and the parameters used in the model, and they approve the model. The regulators also carry out back-testing.
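As an illustration of why back-testing needs many observations before a judgment can be made about a model (the point made in the introduction, which cites Kupiec [1995]), the following sketch, which is ours and uses simulated data, counts VaR exceptions over roughly a year of daily profit and loss.

```python
# A simple sketch of the back-testing idea referred to above: count the days on
# which losses exceeded the model's 99 percent VaR and compare the exception
# rate with the 1 percent the model implies. This is an illustration of the
# general approach, not the regulators' procedure. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_days = 250                                   # roughly one year of daily P&L
daily_pnl = rng.normal(loc=0.0, scale=1.0e6, size=n_days)   # simulated P&L
var_99 = 2.326 * 1.0e6                         # the model's one-day 99% VaR

exceptions = int((daily_pnl < -var_99).sum())
expected = 0.01 * n_days
print(f"Exceptions: {exceptions} observed vs. {expected:.1f} expected")

# With only ~250 observations the sampling error is large (the standard
# deviation of the exception count is about (250 * 0.01 * 0.99) ** 0.5,
# roughly 1.6), which is why many observations are needed before an accurate
# judgment can be made about the model.
```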
So, while a hard-link regime such as VaR is subject
to measurement problems—as highlighted in the literature—and is economically unattractive in some respects,
the presence of a hard link does manage to sort out some
potential strategic complications. A hard link works
because it sets an exogenous requirement that cannot be
breached.


However, the estimated risk under VaR uses
fixed parameters and does not take into account extra
information about, say, future market liquidity that
might be available to the manager. The estimated risk
also fails to reflect managerial expertise in choosing holding periods optimally, given the opportunity set. Thus,
the VaR estimate may often be an overestimate. Of
course, an overestimate provides even better cover for
extreme losses; at the same time, however, it cuts off certain investment opportunities inefficiently.

5.2 AGENCY INCENTIVES UNDER PCA
While the structure of an agency would be a concern under
any soft-link regime, the precise effects would differ across
different soft-link policies. In this paper, we analyse the
effects of agency on the outcomes generated by PCA.
Under PCA, the capital chosen does not constrain
the manager’s choice of riskiness. Even if the shareholders
used an internal model to monitor risk, they would not
want to cut off too many investment opportunities. In fact,
they would like to rely on the judgments of the manager in
order to reap the benefits of his expertise. Instead of putting a priori constraints on portfolios, they would want to
link payment to “performance.”
In the absence of a priori restrictions on the
choice of risk, the outcome depends on the manager’s
preferences, because even with the use of a VaR model
the manager could choose the holding period according
to expected market liquidity or price volatility. We
show that if managers care only about monetary compensation, the principal (that is, the bank owner/shareholders) could design contracts that would generate
incentives for the manager to behave consistently with
the principal’s objectives, and in turn, the regulator
could therefore achieve the right capital levels. But the
manager might also be interested in nonmonetary
rewards (for example, attaining star status by generating
large positive returns) and might therefore undertake
high-risk strategies (limited managerial liability
implies that only the upside matters). In Daripa and
Varotto (1998a), we show that in this case tighter controls

on the manager can be achieved only at the cost of the
principal’s own profit. This leads the principal to choose
a level of control that is not too tight, resulting in a
nontrivial probability of very risky investments and
large losses in relation to the amount of capital precommitted.

6. MODIFYING PCA: OPTIMAL
REGULATION
Correcting for agency distortions is, in general, not
straightforward. This is a problem of designing a mechanism to implement a certain objective given that various
interacting agents have conflicting preferences.9 Such a
general approach could be very fruitful in this context.
While devising a suitable approach is one of our research
areas, an analysis along this line is beyond the scope of
the paper.
However, there is another possible route—since
the interaction between the regulator and the banks takes
place repeatedly over time, we need not focus simply on
static regulation. The key problem here is that on the one
hand, maintaining flexibility makes it necessary to allow
the banks to choose their own riskiness. On the other hand,
such flexibility might result in loss of control by the principal over the manager. A hard link is inflexible, but it
allows full control.
A loss of control occurs when managers of different
types have different preferences for portfolio risk. In view of
this, we might attempt to retain the flexibility and yet
harden the soft links under PCA in the following manner.

Consider the following scheme for any given bank:
• Regulate according to PCA to start with.
• In any future period t, if there has been no breach in
period t-1, regulate according to PCA.
• If a breach occurred in period t-1, adopt a hard-link
approach for T periods (if VaR is econometrically
problematic, adopting the standardised approach
would do just as well—as would any other hard-link
regime that puts limits on managerial risk taking). At
the end of T periods, switch back to PCA.
Such a scheme would help eliminate the agency
distortion. The reason is that the manager must trade off
risk today with risk tomorrow.10
Suppose the manager puts a large weight on portfolio risk. Suppose he takes a very high-risk strategy in
period t and large losses occur. In a static context, limited
liability implies that the manager would not care about the
losses. But now there are other consequences. Since the manager puts a large weight on risk, unless he discounts the
future heavily, he would care about the risk he can undertake
in period t+1 and after. Higher risk in period t increases the
chances of facing a hard-link regime for T periods that
would put limits on managerial risk taking. Thus, there is
now a trade-off. This helps reduce the agency distortion.
The policy is simple enough—a violating bank
must go through a “probationary” phase during which its
risks would be very inflexibly controlled. This approach
maintains the flexibility of PCA, while hardening the links
on punishment paths.
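A minimal sketch of this probationary scheme, with parameter names of our choosing, might look as follows.

```python
# A minimal sketch of the regime-switching scheme described above: a bank is
# regulated under PCA until a breach occurs, then spends T periods under a
# hard-link regime before returning to PCA. Parameter names are ours.

def regulate(breaches, T):
    """Return the regime applied in each period given a list of booleans
    indicating whether the bank breached its precommitted capital."""
    regimes, probation_left = [], 0
    for breached in breaches:
        if probation_left > 0:
            regimes.append("hard-link")
            probation_left -= 1
        else:
            regimes.append("PCA")
            if breached:          # a breach under PCA triggers probation
                probation_left = T
    return regimes

# Example: a breach in period 2 puts the bank under a hard link for T = 3 periods.
print(regulate([False, False, True, False, False, False, False], T=3))
# ['PCA', 'PCA', 'PCA', 'hard-link', 'hard-link', 'hard-link', 'PCA']
```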
In future research, we hope to explore these issues
further and shed light on optimal regulation.


ENDNOTES

The views expressed in this paper are those of the authors and do not necessarily
reflect those of the Bank of England. The content of the paper as well as the
exposition have benefited enormously from regular interaction with Patricia
Jackson and Ian Michael. We have also benefited from comments on an earlier
version by William Perraudin and two referees for the Bank of England Working
Paper series, as well as comments by our discussants Jean-Charles Rochet and
Paul Kupiec at the Financial Regulation and Incentive Conference at the Bank
of England. We are grateful to all of them.
1. For example, securities and foreign exchange or commodities
positions that are held for short-term trading purposes.
2. The value at risk of a given portfolio can be calculated via parametric
or nonparametric (historical-simulation) models. Parametric approaches
are based on the assumption that the distribution of future returns
belongs to a given parametric class. The historical-simulation approach
produces a time series of profits and losses that would have occurred if the
portfolio had been held over a specified estimation period.
3. The Basle rules specify an additional multiplier of three, which is
applied to the results of the VaR model to convert it into a capital
requirement.


4. See Kupiec (1995) and Jackson and Perraudin (1996).
5. This paper is a summary of the results derived by Daripa and Varotto
(1998a). Readers interested in a more formal discussion should refer to
that paper.
6. With this we do not mean that hard-link regulation prevents managers from undertaking fraudulent activities. An implicit assumption in
our analysis is that managers act legally.
7. Even when fired, most managers are usually able to find other jobs.
8. Clearly, if they do not degenerate into fraudulent actions on the part
of the manager.
9. For a lucid discussion of the central issues in the implementation
literature, see the survey by Moore (1992).
10. Of course, such a scheme would work only if the expected duration
of the manager’s employment were not very short.


REFERENCES

Besanko, D., and G. Kanatas. 1996. "The Regulation of Bank Capital: Do Capital Standards Promote Bank Safety?" JOURNAL OF FINANCIAL INTERMEDIATION 5: 160-83.

Bliss, R. 1995. "Risk-Based Bank Capital: Issues and Solutions." Federal Reserve Bank of Atlanta ECONOMIC REVIEW 80, September-October: 32-40.

Campbell, T., Y. Chan, and A. Marino. 1992. "An Incentive Based Theory of Bank Regulation." JOURNAL OF FINANCIAL INTERMEDIATION 2: 255-76.

Chan, Y., S. Greenbaum, and A. Thakor. 1992. "Is Fairly Priced Deposit Insurance Possible?" JOURNAL OF FINANCE 47: 227-45.

Daripa, A., P. Jackson, and S. Varotto. 1997. "The Pre-Commitment Approach to Setting Capital Requirements." Bank of England FINANCIAL STABILITY REVIEW, autumn.

Daripa, A., and S. Varotto. 1998a. "Agency Incentives and the Regulation of Market Risk." Bank of England Working Paper, January.

———. 1998b. "A Note On: Agency Incentives and the Regulation of Market Risk." Bank of England FINANCIAL STABILITY REVIEW, spring.

Dewatripont, M., and J. Tirole. 1993. THE PRUDENTIAL REGULATION OF BANKS. Cambridge: MIT Press.

Garen, J. E. 1994. "Executive Compensation and Principal-Agent Theory." JOURNAL OF POLITICAL ECONOMY 102: 1175-99.

Giammarino, R., T. Lewis, and D. Sappington. 1993. "An Incentive Approach to Banking Regulation." JOURNAL OF FINANCE 48: 1523-42.

Gibbons, R., and K. J. Murphy. 1992. "Optimal Incentive Contracts in the Presence of Career Concerns: Theory and Evidence." JOURNAL OF POLITICAL ECONOMY 100: 468-505.

Jackson, P. 1995. "Risk Measurement and Capital Requirements for Banks." Bank of England QUARTERLY BULLETIN, May: 177-84.

Jackson, P., D. Maude, and W. Perraudin. 1996. "Bank Capital and Value-at-Risk." Birkbeck College Working Paper.

Jensen, M. C., and K. J. Murphy. 1990. "Performance Pay and Top-Management Incentives." JOURNAL OF POLITICAL ECONOMY 98: 225-64.

Jensen, M. C., and J. B. Warner. 1988. "The Distribution of Power among Corporate Managers, Shareholders, and Directors." JOURNAL OF FINANCIAL ECONOMICS 20: 3-24.

Kupiec, P. 1995. "Techniques for Verifying the Accuracy of Risk Measurement Models." JOURNAL OF DERIVATIVES, winter: 73-84.

Kupiec, P., and J. O'Brien. 1997. "The Pre-Commitment Approach: Using Incentives to Set Market Risk Capital Requirements." Federal Reserve Board Finance and Economics Discussion Series, no. 97-14, March.

Marshall, D., and S. Venkatraman. 1997. "Bank Capital Standards for Market Risk: A Welfare Analysis." Federal Reserve Bank of Chicago Working Paper, April.

Moore, J. 1992. "Implementation, Contracts and Renegotiation in Environments with Complete Information." In ADVANCES IN ECONOMIC THEORY: SIXTH WORLD CONGRESS. Vol. 1: 182-282. Cambridge: Cambridge University Press.

Prescott, E. 1997. "The Precommitment Approach in a Model of Regulatory Banking Capital." Federal Reserve Bank of Richmond ECONOMIC QUARTERLY.

Rochet, J. 1992. "Capital Requirements and the Behaviour of Commercial Banks." EUROPEAN ECONOMIC REVIEW 36: 1137-78.

The views expressed in this article are those of the authors and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Designing Incentive-Compatible
Regulation in Banking: The Role of
Penalty in the Precommitment Approach
Shuji Kobayakawa

1. INTRODUCTION
The purpose of this paper is to present a framework for
incentive-compatible regulation that would enable regulators to ensure that riskier banks maintain higher capital
holdings.
Under the precommitment approach, a bank
announces the appropriate level of capital that covers the
maximum value of expected loss that might arise in its
trading account. If the actual loss (after a certain period)
exceeds the announced value, the bank is penalised. This
framework creates the correct incentive for banks: The
banks choose the level of capital that minimises the total
cost, which consists of the expected cost of penalty and the
cost of raising capital.
Nevertheless, it is not certain that the regulator
will always implement the mechanism through which
banks accurately reveal their riskiness. To be more precise,
the approach relies solely on the first-order condition of
cost minimisation, in which the regulator need only offer a
unique penalty rate and let each bank select the amount of
capital that satisfies the first-order condition. This implies that the regulator needs no information ex ante with regard to the riskiness of each bank (that is, the regulator can extract private information ex post by observing how much capital each bank chooses to hold after setting the unique penalty rate).

Shuji Kobayakawa is an economist at the Bank of Japan's Institute for Monetary and Economic Studies.
It is, however, questionable whether riskier banks
will always choose a higher level of capital. The choice of
capital holding depends on the bank’s private information,
such as the shape of the density function of its investment
return. Riskier banks may in fact choose smaller amounts
of capital. Thus, the normative capital requirement dictating that riskier banks should hold higher levels of capital
may not always be satisfied under the precommitment
approach. With this in mind, we examine an alternative
to the precommitment approach, in which the regulator
is viewed as offering incentive-compatible contracts that
consist of both the level of capital and the penalty rate,
and see whether banks fulfill the normative capital
requirement.
The paper is organised as follows: In the next section, we briefly review the precommitment approach and
show that in some cases it may not be possible to determine each bank’s riskiness by observing how much capital
it decides to hold. In Section 3, we develop a model from
the perspective of mechanism design whereby the regulator designs a menu of contracts. We then examine under different scenarios whether the regulator could achieve the
norm where riskier banks decide to hold higher levels of
capital. Section 4 summarises the paper’s findings.

2. OUTLINE OF THE PRECOMMITMENT
APPROACH
In this section, we briefly review the model set forth by
Kupiec and O’Brien (1995), who first proposed the precommitment approach. We will examine the case where
monetary fines are used as a penalty and will discuss how
the fines work by letting banks hold optimal levels of
capital, according to the innate qualities of the assets in
their trading accounts.1
First, the net return of assets in banks' trading accounts is denoted by ∆r, which follows the density function dF(∆r), and banks hold capital equivalent to k. In the model, there are two cost factors—the cost associated with raising capital and the expected cost of the penalty. The penalty is imposed if the actual net loss exceeds the precommitted amount (that is, if the net return is lower than −k, then the penalty is imposed). Assuming the penalty is imposed proportional to the excess loss, the total cost function is written as follows:

(1)  $C(k, \rho) = \eta k - \rho \int_{-\infty}^{-k} (\Delta r + k)\, dF(\Delta r)$,

where η is the marginal cost of capital, and ρ is the penalty rate. The first term represents the cost of raising capital. The second term shows the total expected cost of the penalty. Taking the first derivative with respect to k, we have

(2)  $\dfrac{\partial C(k, \rho)}{\partial k} = \eta - \rho F(-k) = 0$.

Given the rate of penalty, banks choose their optimal levels of capital, which satisfy equation 2.2
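As a numerical illustration of equation 2 (ours; the distribution and parameter values are assumptions, since the model leaves dF(∆r) general), the optimal capital can be computed directly once a distribution is specified.

```python
# A numerical sketch of equation 2 under an assumed distribution: if the net
# trading return dF(dr) is normal with mean mu and standard deviation sigma,
# the bank's optimal capital solves F(-k*) = eta / rho, i.e.
# k* = -(mu + sigma * Phi^{-1}(eta / rho)). Parameter values are assumptions.
from scipy.stats import norm

eta = 0.05              # marginal cost of capital
rho = 1.00              # penalty rate set by the regulator
mu, sigma = 0.0, 0.10   # assumed return distribution

k_star = -(mu + sigma * norm.ppf(eta / rho))
print(f"Optimal precommitted capital k* = {k_star:.4f}")
# Check the first-order condition: eta - rho * F(-k*) should be (close to) zero.
print(f"FOC residual: {eta - rho * norm.cdf(-k_star):.2e}")
```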
Although Kupiec and O’Brien do not go beyond
this point, let us extend the model in such a way that it
incorporates the riskiness of banks.3 Suppose now that two
types of banks exist: banks with riskier assets (H-type
banks), whose density function is denoted by dF_H(∆r),
and banks with less risky assets (L-type banks), whose density function is denoted by dF_L(∆r). We assume the variance of dF_H(∆r) is larger than that of dF_L(∆r). Then, we
can imagine one example of the minimum cost curves, for

H-type and L-type banks, on which the first-order condition is always satisfied (Chart 1).

[Chart 1: One Example of Minimum Cost Curves for High-Risk and Low-Risk Banks. The chart plots the penalty rate ρ against capital k and shows the minimum cost curves C^H_min(k, ρ) and C^L_min(k, ρ), with penalty rates ρ_1 and ρ_2 marked on the vertical axis.]

C^H_min(k, ρ) is the minimum cost curve for H-type banks, and C^L_min(k, ρ) is the minimum cost curve for L-type
banks. The higher the penalty rate offered by the regulator,
the higher the capital requirement for banks to satisfy the
first-order condition. The figure also generalises the case
where H-type banks have a gentle curve when k is low,
while they have a steep curve when k is high. This occurs
because when k is low (that is, close to the mean of the
density function), an additional increase in the penalty rate
requires H-type banks to add more capital than L-type
banks must add to retain the first-order condition. The
magnitude of changes in the density function per one-unit
increase in capital level is less for H-type banks (whose
variance is larger) than for L-type banks. Conversely,
when k is high (that is, close to the tail of the density
function), an additional increase in the penalty rate may
require L-type banks to add more capital than H-type
banks to reestablish the first-order condition. The reason is
that the magnitude of changes in its density function per
one-unit increase in capital level is less for L-type banks.
The following two situations could arise:

• If the regulator charges a penalty rate higher than ρ_2, then L-type banks choose to hold higher levels of capital.
• If the regulator charges ρ ∈ [ρ_1, ρ_2], then H-type banks choose to hold higher levels of capital.

A summary of these situations follows.
Kupiec and O’Brien assume that the regulator,
without knowing the banks’ riskiness, can allow banks to
reveal their riskiness by charging a unique penalty rate.4
Each bank, given the penalty rate, voluntarily chooses the
level of capital that minimises the total cost. The authors
further claim that the choice of capital level is incentive
compatible for every bank. But without knowing where the
minimum cost curves lie, the regulator cannot assess banks’
riskiness just by observing the levels of capital (that is,
high-risk banks sometimes hold more capital, sometimes
less). In this situation, we are not sure whether the regulator can overcome private information (that is, the riskiness
of each bank) just by penalising at the uniform rate.
Next, we suggest a general model in which the
regulator offers contracts that consist of the level of capital
and the penalty rate and lets banks select a contract—an
arrangement that enables the regulator to assess the
riskiness of each bank correctly. We will see how we could
satisfy the normative requirement that high-risk banks
hold higher levels of capital.

3. THE MODEL
The following model is designed to establish whether the
regulator could determine banks’ riskiness by offering
banks a menu of contracts and letting each select one. We
are interested in two points: How incentive compatibility
can be satisfied in both the precommitment approach and
the model presented below, and whether the normative
standard of capital requirements—whereby banks with
riskier assets choose to hold higher levels of capital than
those with less risky assets—is fulfilled.

3.1. SETUP OF THE MODEL
Two players participate in the game: the regulator and the
banks. The banks are categorised according to the innate
qualities of the assets in their trading accounts. For simplicity, we assume there are two types of banks—H-type
(a bank whose portfolio consists of high-risk, or large-variance, assets) and L-type (a bank whose portfolio
consists of low-risk assets). Although the banks know their
own types, the regulator does not know ex ante which bank

belongs to which type. One may argue, however, that the
regulator can learn each bank’s type through monitoring or
from the records of on-site supervision. Nevertheless, we
assume that most of the assets in the trading accounts are
held short term and that banks can form the portfolios
with different levels of riskiness. The assessment of the
riskiness of a portfolio at the time of on-site supervision
may therefore not be valid for a long time. Hence, it is reasonable to assume that the regulator is uninformed about
the types. Remember, we are concerned with the quality of
the banks’ assets in their trading accounts. It may not be
appropriate to extend the same interpretation to the assets
in their banking accounts. Because these assets are held for
much longer periods, the information obtained through
monitoring is valid longer. The scope for private information is therefore much more limited.
Next, let us explain the sequence of events in the
model. In each of the game’s three periods, the following
events take place.
Period 0
1. Banks collect one unit of deposits, whose rate of
interest is normalised to zero. The deposit has to
be paid back to depositors at the end of the game
(that is, in Period 2).
2. The banks then invest the money in financial
assets.
Period 1
1. The regulator offers a menu of contracts consisting
of different levels of required capital and penalty
rates corresponding to each capital requirement
level.
2. Banks choose a contract from the menu. For them,
accepting a contract means that they hold
k_i ∈ (0, 1) (i = H, L) as capital.
Period 2
1. The return on investment, r̃ , is realised.
2. If the return fails to achieve the precommitted
level, the regulator penalises the bank.
Let the return on investment be a stochastic variable in the range [r−, r+], and it follows a density function dF(r̃). We denote the return on investment by dF_H(r̃) for an H-type bank and dF_L(r̃) for an L-type bank. We assume that the variance of dF_H(r̃) is larger than that of dF_L(r̃), but we do not assume any specific shape of distribution functions.5
The regulator penalises the bank if the net loss from the investment, −(r̃ − 1), exceeds the precommitted value k_i; hence the penalty is imposed if 1 − k_i ≥ r̃. Let the penalty rate be denoted by p_i (i = H, L), so that the amount of penalty is p_i × [(1 − k_i) − r̃].
We analyse the three following cases according to
the relative size of the cumulative density:6
Case 1: F_H(1 − k_i) ≥ F_L(1 − k_i) for k_i ∈ (0, 1)

The cumulative density for H-type banks is always larger than the one for L-type banks.7

Case 2: F_H(1 − k_i) ≥ F_L(1 − k_i) for k_i close to 0, and F_H(1 − k_i) < F_L(1 − k_i) for k_i close to 1

The cumulative density for H-type banks is larger when the level of capital is close to 0; it is smaller when the level of capital is close to 1.

Case 3: F_H(1 − k_i) ≤ F_L(1 − k_i) for k_i close to 0, and F_H(1 − k_i) > F_L(1 − k_i) for k_i close to 1

The cumulative density for H-type banks is smaller when the level of capital is close to 0; it is larger when the level of capital is close to 1.8
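Although the paper deliberately avoids assuming a specific shape for the distribution functions, a small numerical experiment (ours) shows how the cases can arise. With normal densities sharing a common mean and differing only in variance, case 1 or case 3 obtains depending on whether the mean gross return lies above or below 1; equal-mean normals cannot produce case 2, which would require, for example, different means or skewed distributions.

```python
# An illustrative check (ours) of which case arises when the two return
# densities are normal with a common mean and sigma_H > sigma_L. Figures are
# assumptions chosen only to show the pattern.
from scipy.stats import norm

def classify(mu, sigma_H, sigma_L, eps=0.01):
    """Compare F_H(1-k) with F_L(1-k) near k = 0 and near k = 1."""
    signs = []
    for k in (eps, 1 - eps):
        diff = norm.cdf(1 - k, mu, sigma_H) - norm.cdf(1 - k, mu, sigma_L)
        signs.append("H larger" if diff >= 0 else "L larger")
    return signs

# Mean gross return above 1: F_H(1-k) >= F_L(1-k) for all k in (0,1)  -> case 1
print(classify(mu=1.05, sigma_H=0.30, sigma_L=0.10))   # ['H larger', 'H larger']
# Mean gross return below 1: the ordering flips as k rises            -> case 3
print(classify(mu=0.95, sigma_H=0.30, sigma_L=0.10))   # ['L larger', 'H larger']
```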
We now write the bank's cost function as follows:

(3)  $C_{HH} \equiv \int_{r^-}^{1-k_H} p_H \left[ (1 - k_H) - \tilde r \right] dF_H(\tilde r) + \eta k_H$,

where C_ji represents the cost function of the bank that has an innate riskiness of j but announces the riskiness i. The first term in this cost function is the expected cost of a penalty. The second term is the cost associated with raising capital equivalent to k_H, where η is the marginal cost of capital. Likewise, the cost function of an L-type bank is as follows:

(4)  $C_{LL} \equiv \int_{r^-}^{1-k_L} p_L \left[ (1 - k_L) - \tilde r \right] dF_L(\tilde r) + \eta k_L$.

3.2. REGULATOR’S PROGRAMME
Let us now analyse how the regulator designs the mechanism
in which the H-type and L-type banks reveal their types
truthfully. The following programme is a starting point:9
$$\min_{\{k_L,\,k_H,\,p_L,\,p_H\}} \; L = \delta \times \left\{ \left[ k_L - \left( 1 - F_L^{-1}\!\left( \frac{\eta}{p_L} \right) \right) \right]^2 + \left[ k_H - \left( 1 - F_H^{-1}\!\left( \frac{\eta}{p_H} \right) \right) \right]^2 \right\} + (1 - \delta) \times \max(0,\, k_L - k_H), \quad \text{where } k_L \neq k_H$$

$$(\text{IC}_H) \quad C_{HH} = \int_{r^-}^{1-k_H} p_H \left[ (1 - k_H) - \tilde r \right] dF_H(\tilde r) + \eta k_H \;\le\; \int_{r^-}^{1-k_L} p_L \left[ (1 - k_L) - \tilde r \right] dF_H(\tilde r) + \eta k_L \equiv C_{HL}$$

$$(\text{IC}_L) \quad C_{LL} = \int_{r^-}^{1-k_L} p_L \left[ (1 - k_L) - \tilde r \right] dF_L(\tilde r) + \eta k_L \;\le\; \int_{r^-}^{1-k_H} p_H \left[ (1 - k_H) - \tilde r \right] dF_L(\tilde r) + \eta k_H \equiv C_{LH}.$$

The loss function of the regulator consists of both
the deviation of capital from the level specified by the
first-order condition and the difference between capital
holdings of banks with different risk levels. The term in
parentheses after δ represents any capital holding that is
not equivalent to the optimal level. Such a case is regarded
as costly for the regulator. This applies to both L-type and
H-type banks. The term after ( 1 – δ ) shows that the regulator is willing to let high-risk banks hold more capital. As
long as high-risk banks hold more capital, the regulator
does not incur any loss. This is consistent with the norm
specifying that the level of capital holding should increase
with riskiness.
The two inequalities after the regulator’s objective
function are called incentive-compatibility constraints
for H-type and L-type banks. We denote them by IC_H and IC_L, respectively. These constraints guarantee that
each bank will select the contract appropriate to its
type. By choosing the wrong contract, a bank will have to
pay a higher cost. Any pair of contracts that satisfy the
incentive-compatibility constraints is one of a number of
possible solutions.
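As a numerical illustration of how such a menu can be checked, the sketch below (ours; the normal return densities and contract terms are assumptions, corresponding to a case 1 configuration) evaluates the cost C_ji that a bank of innate type j incurs when it takes the contract meant for type i and verifies both incentive-compatibility constraints.

```python
# A numerical sketch (ours, with assumed normal return densities) of the
# incentive-compatibility check: build a candidate menu of contracts, with each
# penalty rate p_i set so that the contracted capital k_i satisfies the
# first-order condition for its own type, then verify IC_H and IC_L.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

eta = 0.05                                    # marginal cost of capital
dists = {"H": norm(loc=1.0, scale=0.30),      # high-variance return density dF_H
         "L": norm(loc=1.0, scale=0.10)}      # low-variance return density dF_L

# Candidate capital levels; setting p_i = eta / F_i(1 - k_i) puts (k_i, p_i) on
# type i's minimum cost curve (the first-order condition discussed above).
k = {"H": 0.60, "L": 0.25}
p = {i: eta / dists[i].cdf(1 - k[i]) for i in ("H", "L")}

def cost(j, i):
    """C_ji: expected penalty plus capital cost for a type-j bank on contract i."""
    integrand = lambda r: p[i] * ((1 - k[i]) - r) * dists[j].pdf(r)
    return quad(integrand, -np.inf, 1 - k[i])[0] + eta * k[i]

ic_H = cost("H", "H") <= cost("H", "L")       # H-type prefers its own contract
ic_L = cost("L", "L") <= cost("L", "H")       # L-type prefers its own contract
print(f"IC_H holds: {ic_H}, IC_L holds: {ic_L}, k_H > k_L: {k['H'] > k['L']}")
```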
Case 1: F_H(1 − k_i) ≥ F_L(1 − k_i) for k_i ∈ (0, 1)

In this case, the minimum cost curve—where the first-order condition is satisfied—for H-type banks is always below the curve for L-type banks (Chart 2).

[Chart 2: Minimum Cost Curve, Case 1. The chart plots the penalty rate p against capital k and shows the minimum cost curves C^H_min(k, p) and C^L_min(k, p), penalty rates p_0, p_1, p_2^L, and p_2^H, and capital levels k_1^L, k_2^L, k_1^H, and k_2^H.]

Chart 2 also depicts the iso-cost curve, where the total cost remains constant (reverse U-shaped function). The curvature of the iso-cost curve is easily verified. The slope of the curve is always 0 when it crosses the minimum cost curve. The reason is that, in the case of H-type banks,

$$\left. \frac{dp}{dk} \right|_{C = \text{const}} = \frac{p_H F_H(1 - k_H) - \eta}{\int_{r^-}^{1-k_H} \left[ (1 - k_H) - \tilde r \right] dF_H(\tilde r)}$$

is zero whenever the first-order condition is satisfied.

Next, we check the marginal cost. Additional capital will influence the total cost through two different channels. First, it will reduce the range of r̃ in which the penalty is imposed (penalty cost-saving effect), so that the more capital the bank holds, the less expected cost it will incur. Second, more capital means the total cost of raising capital increases (capital cost effect). On the right-hand side of the minimum cost curve, the iso-cost curve is downward sloping because the marginal cost is positive. In other words, the capital cost effect exceeds the penalty cost-saving effect, so that the more capital the bank holds, the more costly it is. Hence, to retain the same level of cost, the penalty rate needs to be reduced. On the left-hand side of the minimum cost curve, the iso-cost curve is upward sloping because the marginal cost is negative. In other words, the penalty cost-saving effect exceeds the capital cost effect, so that the more capital the bank holds, the less costly it is. Hence, to retain the same level of cost, the penalty rate needs to be raised.

Here, the menu of contracts can be incentive compatible. One example of the menu is depicted in Chart 2. If the regulator provides (k_2^L, p_2^L) and (k_2^H, p_2^H), L-type banks will choose the former and H-type banks will choose the latter. The menu options minimise the loss function of the regulator (that is, the menu identifies the level of capital that satisfies the first-order condition, and H-type banks are offered a higher level of capital). The menus also satisfy incentive compatibility, namely that

$$\int_{r^-}^{1-k_2^H} p_2^H \left[ (1 - k_2^H) - \tilde r \right] dF_H(\tilde r) + \eta k_2^H \;<\; \int_{r^-}^{1-k_2^L} p_2^L \left[ (1 - k_2^L) - \tilde r \right] dF_H(\tilde r) + \eta k_2^L$$

for an H-type bank and

$$\int_{r^-}^{1-k_2^L} p_2^L \left[ (1 - k_2^L) - \tilde r \right] dF_L(\tilde r) + \eta k_2^L \;<\; \int_{r^-}^{1-k_2^H} p_2^H \left[ (1 - k_2^H) - \tilde r \right] dF_L(\tilde r) + \eta k_2^H$$

for an L-type bank.
At the same time, the regulator offering the unique penalty rate also guarantees incentive compatibility because the penalty rate minimises the loss function. To see this point, suppose that the regulator offers p_1 in Chart 2. The pairs (k_1^L, p_1) and (k_1^H, p_1) are incentive compatible, namely that

$$\int_{r^-}^{1-k_1^H} p_1 \left[ (1 - k_1^H) - \tilde r \right] dF_H(\tilde r) + \eta k_1^H \;<\; \int_{r^-}^{1-k_1^L} p_1 \left[ (1 - k_1^L) - \tilde r \right] dF_H(\tilde r) + \eta k_1^L$$

for an H-type bank and

$$\int_{r^-}^{1-k_1^L} p_1 \left[ (1 - k_1^L) - \tilde r \right] dF_L(\tilde r) + \eta k_1^L \;<\; \int_{r^-}^{1-k_1^H} p_1 \left[ (1 - k_1^H) - \tilde r \right] dF_L(\tilde r) + \eta k_1^H$$

for an L-type bank.
for an L-type bank.
Because this model and the original approach
satisfy both incentive compatibility and the requirement that riskier banks hold more capital, the menu of
contracts with different penalty rates may not be necessary: As long as the single penalty rate is offered by the
regulator, the regulator’s objective is fulfilled.10
Case 2: F_H(1 − k_i) ≥ F_L(1 − k_i) for k_i close to 0, and F_H(1 − k_i) < F_L(1 − k_i) for k_i close to 1

In case 2, the minimum cost curves intersect at k > 0 (Chart 3).

[Chart 3: Minimum Cost Curve, Case 2. The chart plots the penalty rate p against capital k and shows the minimum cost curves C^H_min(k, p) and C^L_min(k, p), penalty rates p_0, p_1, p_2, and p_3, and capital levels k_2^L, k_1^H, and k_1^L.]
In the precommitment approach, any penalty rate
that lies between p_0 and p_3 will yield the same result as in case 1. A problem arises, however, when a penalty rate above p_3 is imposed. Here, the regulator can no longer achieve its objective: Although the capital levels chosen by the banks are incentive compatible, the regulator incurs an additional loss by letting L-type banks hold more capital than H-type banks. Our approach, however, may be able to overcome this problem. Suppose that in Chart 3 the regulator offers two contracts, (k_2^L, p_2) and (k_1^H, p_1). It is indeed the
case that L-type banks choose the first contract and H-type
banks choose the second (incentive compatibility is satisfied). Moreover, the regulator achieves its objective by
minimising the loss: an additional loss is not incurred as
long as H-type banks choose to hold more capital than
L-type banks.
We therefore propose two modifications to the
precommitment approach. First, the regulator collects necessary information concerning banks’ risk characteristics so
that it will not impose a penalty rate above p_3. Any penalty rate between p_0 and p_3 will achieve the objective: the
regulator will be able to assess each bank’s riskiness by
observing the level of capital that the bank chooses to hold.
Second, the regulator again collects necessary information
on banks’ riskiness and provides banks with two contracts
having different penalty rates. Note that both modifications would require regulators to gather extensive information about banks’ risk characteristics.
Case 3: F_H(1 − k_i) ≤ F_L(1 − k_i) for k_i close to 0, and F_H(1 − k_i) > F_L(1 − k_i) for k_i close to 1

Our final case is the opposite of case 2 (Chart 4). In the precommitment approach, any penalty rate above p_3 will yield the same result as in case 1, but p ∈ (p_0, p_3) must be avoided. Unfortunately, our approach may not be able to overcome this difficulty. When one of a pair of contracts deals with a penalty rate below p_3, the regulator's objective cannot be achieved, because H-type banks are
permitted to hold less capital. To achieve the normative capital requirement, two contracts must thus be offered with penalty rates above p_3. The regulator's objective can also be achieved by offering the single penalty rate as in the precommitment approach, under the condition that the regulator knows p_3, the penalty rate at which the two minimum cost curves intersect. Perhaps it would be simpler to rely on the single penalty rate above p_3—in which case incentive compatibility is automatically satisfied—rather than to design a menu of contracts that requires the regulator to ensure that incentive compatibility is satisfied.

[Chart 4: Minimum Cost Curve, Case 3. The chart plots the penalty rate p against capital k and shows the minimum cost curves C^L_min(k, p) and C^H_min(k, p), penalty rates p_0, p_1, and p_3, and capital levels k_1^H and k_1^L.]

4. CONCLUSION
In this paper, we developed a model from the perspective of
mechanism design and demonstrated that, in some cases,
the penalty also plays an important role in persuading riskier banks to hold more capital than less risky banks.
In the original precommitment approach framework, the regulator can allegedly discover a bank’s riskiness
by offering a unique penalty rate. Nonetheless, the
appropriate level of capital for each bank depends on the
bank’s private information, such as the shape of its investment
return’s density function. Thus, it is not certain that
riskier banks always choose to hold more capital than less
risky banks.
We then developed a model of mechanism design
in which the regulator offers a menu of contracts representing different levels of capital and the corresponding penalty rates. We found that the regulator can implement
incentive-compatible contracts in which banks with one
level of riskiness voluntarily separate themselves from
banks with other levels of riskiness.
We examined three cases. In case 1, if the cumulative density for H-type banks is always greater than the
cumulative density for L-type banks, then both the precommitment framework and our approach achieve the
regulator’s objective: The level of capital holding is equivalent to the amount specified by the first-order condition. In
addition, the level of capital holding increases as the bank’s
riskiness goes up. In this case, it would probably be easier
for the regulator to implement the original approach rather
than to offer contracts with various penalty rates. In case 2,
the cumulative density for H-type banks is greater than the
cumulative density for L-type banks for small amounts of
capital; the cumulative density is smaller for large amounts
of capital. In this instance, our model may be able to
achieve the regulator’s objective. By contrast, in the precommitment approach, the penalty rate must fall within a
particular range; otherwise, the regulator’s objective is not
completely fulfilled in that incentive compatibility is satisfied but the normative capital requirement is not achieved.
In case 3, we examined an instance in which the cumulative density for H-type banks is smaller than the cumulative density for L-type banks for small amounts of capital,
whereas cumulative density is greater for large amounts of
capital. In case 3, neither approach achieves the regulator’s
objective as long as either one or two penalty rates take the
value where the cumulative density for H-type is smaller. To
avoid this, the penalty rate must be set in the range where

the cumulative density for H-type is larger. Then, both the
precommitment approach and our modification of this
approach achieve the regulator’s objective. In this instance,
it would probably be easier, as in case 1, to implement the
original approach.
We have demonstrated that both the precommitment approach and our approach have limitations that prevent them from achieving the optimal result as specified in
the regulator’s objective function. Here, the key element is
how much information the regulator needs to assess banks’
risk characteristics. In their recent paper, Kupiec and
O’Brien (1997) also note the importance of information to
regulators attempting to develop incentive-compatible
regulation. Future research must examine the amount of
necessary information and the extent to which there may
be a limit to the amount of pressure the regulator can place
on banks to disclose their riskiness truthfully.
As we have observed, incentive-compatible contracts cannot be provided unless the regulator obtains
certain information. In this sense, incentive-compatible
regulation will not replace the traditional role of the regulator as an ex ante monitor of banks: The provision of
incentive-compatible contracts and the monitoring by the
regulator can be complementary. On a related matter, it has
been proposed that the regulator’s penalty be replaced by
public disclosure. In other words, whenever a bank’s actual
loss exceeds its precommitted value, the regulator will
inform the market of the fact. Such a proposal might be
feasible if market participants have the necessary information to assess others’ riskiness and if market participants
can impose a penalty that satisfies incentive compatibility.


ENDNOTES

This is a revised version of the paper presented at the conference. The author thanks discussant Pat Parkinson and other participants in the conference, especially Jim O’Brien, for useful comments and criticisms. Any errors are the author’s. The views expressed here are the author’s and not necessarily those of the Bank of Japan.

1. Kupiec and O’Brien (1995) stress that since the regulator’s objective is to let banks precommit levels of capital that satisfy the desired value-at-risk (VaR) capital coverage, it is incentive compatible as long as banks achieve the regulator’s goal: Incentive compatibility is allegedly satisfied if they hold the amount of capital that is equivalent to the desired VaR capital requirement.

2. $F(-k)$ in equation 2 is the probability that losses exceed the level of capital, which represents the basis for a VaR capital requirement. In this interpretation of incentive compatibility, it does not matter whether banks with higher risk levels hold higher capital: As long as they hold the right amount of capital consistent with the desired VaR capital requirement, they are regarded as incentive compatible with the regulator’s objective. We feel this interpretation is rather unusual. Generally speaking, incentive compatibility may not be an instrument that ensures consistency with the principal’s objective. There may be a case where a capital requirement is inconsistent with the principal’s objective but nevertheless satisfies the incentive-compatibility constraints.

3. To be more precise, we take the riskiness of banks as exogenous. This may contradict what Kupiec and O’Brien maintain. The underlying idea of the precommitment approach claims that banks, after being offered a penalty rate, would either commit capital, adjust risk, or do both to satisfy the first-order condition. Here, the riskiness is taken as an endogenous strategy for the banks. Nonetheless, if we view both the risk adjustment and capital holding as endogenous variables, banks do not have any preference-ordering among the pairs of these variables as long as they satisfy the first-order condition. Then there may not be an incentive for banks to “separate.” They can be pooled by choosing the same pair. Consequently, the regulator may not need to identify banks’ characteristics.

4. To be fair, Kupiec and O’Brien’s recent paper (1997) mentions that the regulator should collect information in order to assess banks’ risk characteristics.

5. Kupiec and O’Brien are critical of such simplifying assumptions as first-order/second-order stochastic dominance.

6. These cases may not cover all the possibilities. As the bank portfolio becomes more complex, the shape of the distribution becomes more complex as well, and the cumulative densities for H-type and L-type banks may intersect repeatedly. Still, the fundamental idea developed in this section can be applied to more complex cases.

7. Note that the opposite case—in which the cumulative density for H-type is always smaller than the one for L-type—does not exist.

8. Note that we have implicitly assumed that all these events—from case 1 to case 3—take place in the feasible range for the level of capital holding.

9. We have neglected individual rationality constraints for H-type and L-type by simply assuming that the regulator will not offer contracts that exceed the reservation level of cost for both types.

10. This observation implies that the precommitment approach is a special case of our model, where $\rho^L = \rho^H$ (that is, the penalty rates offered to L-type and H-type banks are identical).


REFERENCES

Alworth, J., and S. Bhattacharya. 1995. “The Emerging Framework of Bank Regulation and Capital Control.” LSE Financial Markets Group Special Paper no. 78.

Baron, D. 1989. “Design of Regulatory Mechanisms and Institutions.” In R. Schmalensee and R. Willig, eds., HANDBOOK OF INDUSTRIAL ORGANIZATION. Vol. 2. New York: Elsevier Science Publishers.

Besanko, D., and G. Kanatas. 1996. “The Regulation of Bank Capital: Do Capital Standards Promote Bank Safety?” JOURNAL OF FINANCIAL INTERMEDIATION 5: 160-83.

Galloway, T., W. Lee, and D. Roden. 1997. “Banks’ Changing Incentives and Opportunities for Risk Taking.” JOURNAL OF BANKING AND FINANCE 21: 509-27.

Giammarino, R., T. Lewis, and D. Sappington. 1993. “An Incentive Approach to Banking Regulation.” JOURNAL OF FINANCE 48, no. 4: 1523-42.

Gumerlock, R. 1996. “Lacking Commitment.” RISK 9, no. 6: 36-9.

Hendricks, D., and B. Hirtle. 1997. “Regulatory Minimum Capital Standards for Banks: Current Status and Future Prospects.” Paper presented at the Conference on Bank Structure and Competition at the Federal Reserve Bank of Chicago.

Huang, C., and R. Litzenberger. 1988. FOUNDATIONS FOR FINANCIAL ECONOMICS. Englewood Cliffs, N.J.: Prentice Hall.

Kobayakawa, S. 1997. “Designing Incentive Compatible Regulation in Banking: Part I—Is Penalty Credible? Mechanism Design Approach Using Capital Requirement and Deposit Insurance Premium.” Unpublished paper.

Kupiec, P., and J. O’Brien. 1995. “A Pre-Commitment Approach to Capital Requirements for Market Risk.” Board of Governors of the Federal Reserve System Finance and Economics Discussion Paper no. 95-36.

———. 1997. “The Pre-Commitment Approach: Using Incentives to Set Market Risk Capital Requirements.” Board of Governors of the Federal Reserve System Finance and Economics Discussion Paper no. 97-14.

Marshall, D., and S. Venkataraman. 1997. “Bank Capital Standards for Market Risk: A Welfare Analysis.” Paper presented at the Conference on Bank Structure and Competition at the Federal Reserve Bank of Chicago.

Mas-Colell, A., M. Whinston, and J. Green. 1995. MICROECONOMIC THEORY. New York: Oxford University Press.

Myerson, R. 1979. “Incentive Compatibility and the Bargaining Problem.” ECONOMETRICA 47: 61-73.

Nagarajan, S., and C. Sealey. 1995. “Forbearance, Deposit Insurance Pricing, and Incentive Compatible Bank Regulation.” JOURNAL OF BANKING AND FINANCE 19: 1109-30.

Park, S. 1997. “Risk-Taking Behavior of Banks under Regulation.” JOURNAL OF BANKING AND FINANCE 21: 491-507.

Prescott, E. 1997. “The Pre-Commitment Approach in a Model of Regulatory Banking Capital.” Federal Reserve Bank of Richmond ECONOMIC QUARTERLY 83, no. 1: 23-50.

Salanié, B. 1997. THE ECONOMICS OF CONTRACTS. Cambridge: MIT Press.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Commentary
Patrick Parkinson

I appreciate the opportunity to participate in this discussion
of the pre-commitment approach to achieving regulatory
objectives relating to bank capital.
The presenters might reasonably expect the discussant to take up each of their papers in turn, commenting
on their strengths and weaknesses and offering an overall
assessment of their quality. I am concerned, however, that
while the usual approach might best do justice to the presenters, it could leave the audience at something of a loss
as to what to make of all this. So I am going to take a
different approach. I will begin by briefly reviewing the
objective of capital regulation and identifying the factors
that make achieving that objective so complex and difficult. In that context, I will then try to frame the debate
between proponents of the more traditional approaches to
capital regulation and proponents of incentive-based
approaches, including the pre-commitment approach, in
terms of three basic questions. First, how effective is the
current internal models approach to capital for market
risk? Second, is the pre-commitment approach a viable
alternative? Third, can the two approaches be integrated in ways that play to their respective strengths while avoiding their respective weaknesses? Most of the major arguments made by the presenters will surface in addressing these questions. I shall conclude by offering my own views on these key questions.

Patrick Parkinson is an associate director in the Division of Research and Statistics at the Board of Governors of the Federal Reserve System.

CAPITAL REGULATION: OBJECTIVES
AND APPROACHES
In general terms, there seems to be agreement on the objective of capital regulation. Regulators seek to ensure that
banks maintain sufficient capital so that banks’ portfolio
choices fully reflect risks as well as returns. Regulation is
necessary because the government safety nets that support
banks weaken the incentives for capital adequacy that
would otherwise be provided by the market discipline of
bank creditors, a phenomenon that is usually called “moral
hazard.” An important difficulty facing regulators as they
attempt to achieve their objective is that the riskiness of
banks’ portfolios is not readily ascertainable. Traditional
approaches to capital regulation have placed ex ante restrictions on bank portfolios that have been based on regulatory
risk measurement schemes of lesser or greater sophistication and complexity. Inevitably, however, such regulatory
measurement schemes are simpler and less accurate than
banks’ own risk measurement schemes.


As a result, such schemes are not incentive-compatible, that is, they do not create incentives for banks
to make decisions that produce outcomes consistent with
regulatory objectives. To the contrary, they create the
motive and the opportunity for banks to engage in regulatory arbitrage that frustrates the achievement of regulatory
objectives. Specifically, they create incentives for banks to
reduce holdings of assets whose risks are overestimated by
regulators and to increase holdings of assets whose risks are
underestimated by regulators. Regulators may seek to
compensate for such reactions by raising the level of capital
requirements, but such actions may intensify the incentives
for regulatory arbitrage without meaningfully reducing the
opportunities.
Incentive-compatible approaches to capital regulation are intended to solve this problem by inducing banks
to take actions that reveal their superior information
about the riskiness of their portfolios. In some of these
approaches, including the pre-commitment approach, the
inducement takes the form of ex post penalties that are
imposed on banks in the event that portfolios produce
sizable losses. For example, under the pre-commitment
approach, a bank would be required to specify the amount
of capital it chose to allocate to cover market risks. If
cumulative trading portfolio losses over some subsequent
interval exceeded the commitment, the bank would be
penalized. In principle, the prospect of future penalties
would induce banks to commit an amount of capital that
reflected their private information on the riskiness of their
portfolios.
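As a rough illustration of the mechanics just described, the sketch below applies a proportional ex post penalty when cumulative losses exceed the pre-committed capital; the penalty form and the figures are assumptions for illustration only.

```python
# Minimal sketch of the pre-commitment mechanics described above. The proportional
# penalty form and the numbers are illustrative assumptions.

def precommitment_penalty(committed_capital, cumulative_loss, penalty_rate=1.5):
    # The penalty applies only to the shortfall of the commitment relative to realized losses.
    shortfall = cumulative_loss - committed_capital
    return penalty_rate * shortfall if shortfall > 0 else 0.0

# A bank commits 100 against its trading book; cumulative losses of 130 over the
# evaluation interval leave a shortfall of 30, so the penalty applies to that amount.
print(precommitment_penalty(committed_capital=100.0, cumulative_loss=130.0))  # 45.0
```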
None of this, it should be emphasized, is news to
regulators. In particular, the recent evolution of capital
requirements for market risks has reflected a growing
recognition of the limitations of supervisory risk measurement schemes, the potential for regulatory arbitrage to
undermine achievement of regulatory objectives, and the
importance of incentive compatibility. Specifically, the
January 1996 amendments to the Basle Accord included an
internal models approach (IMA) to setting capital requirements for the market risks of assets and liabilities that are
carried in banks’ trading accounts. Under the IMA, the
capital requirement for a bank that meets certain qualitative


and quantitative standards for its risk measurement and
risk management procedures is set equal to a multiple of a
widely used measure of market risk—so-called value at risk
(VaR)—that is estimated using the bank’s own internal
model. The minimum multiplier was arbitrarily set equal
to three. However, subject to this floor, the IMA provided
economic incentives for accurate risk measurement by
imposing a penalty—a “plus factor” that could increase a
bank’s VaR multiplier to a maximum of four if the bank
failed a “back-test” of its VaR estimates, that is, if its daily
trading losses exceeded its VaR estimates with sufficient
frequency.
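The sketch below illustrates the structure of this rule: capital is a multiple of VaR, and the multiplier rises from the minimum of three toward four as back-test exceptions accumulate. The exception-to-plus-factor schedule shown is an assumption for illustration; the Basle back-testing rules define the precise mapping.

```python
# Hedged sketch of the IMA capital rule described above: a VaR multiplier of at least
# three, increased by a "plus factor" (up to a maximum of four) as back-test
# exceptions accumulate. The schedule below is an illustrative assumption.

ASSUMED_PLUS_FACTORS = {5: 0.40, 6: 0.50, 7: 0.65, 8: 0.75, 9: 0.85}

def ima_multiplier(exceptions_last_250_days):
    if exceptions_last_250_days <= 4:
        return 3.0                          # minimum multiplier, no plus factor
    if exceptions_last_250_days >= 10:
        return 4.0                          # maximum multiplier
    return 3.0 + ASSUMED_PLUS_FACTORS[exceptions_last_250_days]

def ima_capital_requirement(var_estimate, exceptions_last_250_days):
    return ima_multiplier(exceptions_last_250_days) * var_estimate

# A bank with a VaR estimate of 10 and six exceptions over the back-testing window:
print(ima_capital_requirement(10.0, 6))  # 35.0 under this illustrative schedule
```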
Thus far, however, supervisors have been unwilling to rely more heavily on incentive approaches to capital
regulation. In particular, although the Federal Reserve
System continues to study the pre-commitment approach,
that approach is not currently under active consideration
by the Basle Committee. Most regulators seem to believe
that the IMA will prove quite effective, and some have
openly questioned the viability of the pre-commitment
approach.

EFFECTIVENESS OF THE INTERNAL
MODELS APPROACH
On the efficacy of the internal models approach, Daripa
and Varotto characterize it as “a ‘hard-link’ regime that sets
a relation between exposure and capital requirement.”
They do not mean to imply, however, that VaR is a perfect
measure of risk. They acknowledge that VaR is subject to
measurement problems and that the use of a fixed holding
period in computing VaR ignores management information about the liquidity of markets that might make a shorter or longer holding period appropriate. Still, they seem to think that VaR, if anything,
overestimates risk and, therefore, that the IMA is a prudent,
if somewhat costly, means of ensuring that regulatory
objectives relating to capital are met.
The New York Clearing House Association evidently is more skeptical of the effectiveness of the IMA,
although its criticism of the approach is surprisingly
oblique. The Clearing House’s report does state clearly that
the institutions participating in the pilot believe that the

minimum multiplier of three results in excessive regulatory capital requirements—the amounts that institutions
pre-committed during the pilot generally were significantly less than those implied by applying the minimum
multiplier to the firms’ internal VaR estimates. Furthermore, they argue that the use of any fixed multiplier, even
one smaller than three, is not an appropriate means of
establishing a regulatory capital requirement. Use of a
fixed multiplier constitutes a “one-size-fits-all” approach
that they feel does not adequately account for differences in
the nature of banks’ trading businesses and trading portfolios. Finally, they note that market risk is but one source of
risk in a trading business. The participating institutions
fear that possible future efforts by regulators to develop
capital charges for operational risks (or even legal risks
or settlement risks) will be fraught with complications and
inefficiencies that could be avoided through use of the
pre-commitment approach.

VIABILITY OF THE PRE-COMMITMENT
APPROACH
On the viability of the pre-commitment approach as an
alternative to the IMA, the Clearing House’s report asserts
that the pilot demonstrates that the approach is a viable
alternative to the IMA. In a narrow sense, this is true—the
pilot demonstrated that the participating institutions have
internal procedures for allocating capital for market risks
and other risks in their trading businesses. However, what
the pilot did not, and realistically could not, demonstrate
is that these internal allocations are sufficiently large to
meet regulatory objectives with respect to minimum bank
capital. The fact that no participating institution reported
a loss in excess of its commitment during the pilot is not
compelling. None of the institutions incurred a cumulative
loss over any of the four quarters. Hence, no violations
would have occurred even if no capital had been committed. To be
fair, without a more precise understanding of the desired
loss coverage of regulatory minimum capital requirements,
the report could not be expected to demonstrate that pre-commitment is a viable means of meeting that objective.
Both Kobayakawa, and Daripa and Varotto cast
doubt on the viability of the pre-commitment approach,

at least in its present form. Kobayakawa concludes that a
simple penalty—in the form of a fine proportional to the
amount by which cumulative losses exceed the capital
commitment—would not reliably induce banks to commit amounts of capital commensurate with their private
information on their riskiness. In their presentation
tomorrow, Paul Kupiec and Jim O’Brien, who developed
the theoretical model that motivated the pre-commitment
approach, reach the same conclusion. The fundamental
problem is that a one-size-fits-all approach to setting
penalties would not work. To achieve regulatory objectives reliably, the penalty would need to be bank-specific.
Moreover, the appropriate penalty would depend on a
bank’s cost of capital and on its individual investment
opportunities, factors that unfortunately are not ascertainable by regulators.
Daripa and Varotto argue that the effectiveness of
the pre-commitment approach could be undermined by
principal-agent problems between shareholders and bank
managers and that the internal models approach is immune
to such problems. The potential importance of agency
problems in banking certainly is incontrovertible. When
managers or staff have different objectives and incentives
than shareholders, shareholders can suffer greatly, as the
Barings, Daiwa, and numerous other episodes have made
clear. In addition, it may be that agency problems could
undermine the pre-commitment approach. What seems
implausible, however, is the claim that the IMA avoids
such problems. This claim seems to be a corollary of the
view that the IMA creates a hard link between risk and
capital. To be sure, it creates a hard link between VaR and
capital, but VaR and risk are hardly the same thing. To see
this, one need only ask—would a VaR-based capital
requirement have saved Barings from its fatal agency problem? Clearly not. The fatal positions were hidden from
senior management, shareholders, and regulators, and
would not have entered into any calculation of VaR nor
been covered by a VaR-based capital requirement. Both the
IMA and the pre-commitment approach recognize that
quantitative controls (VaR measures or penalties, respectively) must be supplemented by qualitative requirements
for risk management, including requirements relating to


the internal controls that are the only realistic solution to
potential agency problems.

CAN THE INTERNAL MODELS
AND PRE-COMMITMENT APPROACHES
BE INTEGRATED?
Although both Kobayakawa, and Daripa and Varotto are
critical of the pre-commitment approach as proposed,
they are, it should be emphasized, fully appreciative and
supportive of incentive-compatible capital regulation.
Kobayakawa suggests amending the pre-commitment
approach to offer banks a schedule of combinations of ex ante
capital requirements and ex post penalties that he claims
would induce banks to reveal to regulators their private
information about the riskiness of their portfolios. As he
claims, his approach would more reliably achieve regulatory objectives than a pre-commitment approach that utilizes a uniform penalty for all banks. Nonetheless,
Kobayakawa’s alternative faces the same practical difficulties that Kupiec and O’Brien have acknowledged as limiting the effectiveness of the pre-commitment approach and
any other incentive-compatible approaches. Specifically,
banks will reveal their “riskiness” through their choices
from Kobayakawa’s menu only if he sets the “schedules” of
the capital requirements and penalties quite adroitly. But
doing so requires extensive knowledge of banks’ portfolio
opportunities and capital costs that regulators simply do
not (and realistically cannot) possess.
Daripa and Varotto suggest that the pre-commitment
approach be amended to provide for use of the IMA as the
penalty for violating a pre-commitment. Although they do
not provide a formal theoretical justification for their suggestion, they reason that the future prospect of what they
see as a hard-link internal models approach would diminish the agency problems that they argue are unique to the
pre-commitment approach. As indicated earlier, agency
problems are not unique to pre-commitment, nor can they
be eradicated by use of a VaR-based capital requirement.
However, an alternative way of looking at their
suggestion is as a modification of the IMA. In this regard,
it does address some of the concerns that the Clearing
House report expressed about the IMA. Daripa and


Varotto’s suggested approach is not a one-size-fits-all
approach, and it would eliminate the minimum and purportedly excessively conservative multiplier of three, at least
for banks that had never violated their pre-commitment.
Of course, this type of penalty scheme is opposed in the
Clearing House report. They argue that the appropriate
penalty for violation of a pre-commitment would be public
disclosure that a violation had occurred and that regulatory
penalties would be unnecessary.

MY OWN VIEWS ON THE ISSUES
My views on the issues raised by the presenters will perhaps please no one. In brief, I see ample room to question
the effectiveness of the IMA. But I am sympathetic to regulators’ concerns about reliance on a pure incentives-based
approach. Thus, I believe consideration should be given to
more modest alternatives to the IMA that would loosen
but not eliminate ex ante restrictions while enhancing and
reorienting the use of ex post penalties.
Regarding the IMA, its essential weakness is the
tenuous link between VaR and regulatory capital objectives. VaR is defined as a 99 percent confidence limit for
potential losses over a one-day period. But regulators are
concerned about the potential for cumulative losses from
more extreme price movements over longer time horizons.
In such circumstances, application of a multiplier to a
bank’s VaR estimate is clearly necessary. However, as the
Clearing House report argues, the appropriate multiplier
needs to be portfolio-specific and probably bank-specific as
well, to take account of banks’ different abilities to curb
losses through active portfolio management. The choice of
three as a minimum multiplier no doubt is excessive for
some portfolios and may, as the Clearing House report suggests, be too conservative for the portfolios currently held
by most banks. In practice, this may provide incentives for
banks to focus trading activities on illiquid instruments,
such as emerging market currencies and debt instruments,
for which even a multiplier of three may be insufficient.
Furthermore, because of the tenuous link between VaR and
regulatory objectives, back-testing of VaR estimates is of
limited value. A bank that passed its back-test could suffer
severe losses from future price movements more extreme

than those allowed for by the VaR estimates. Conversely, a
bank with poor VaR estimates might not be vulnerable to
large cumulative losses if its positions were held in very
liquid markets and it had the capacity to close out those
positions promptly.
Regarding pre-commitment and other incentive-based approaches, they have their own limitations, and
those limitations should be recognized. The most recent
work by Kupiec and O’Brien has acknowledged that the
link between any simple system of ex post penalties and
regulatory capital objectives is also tenuous. The penalty
appropriate to achieving regulatory objectives relating to
capital coverage for trading risks is bank-specific and
depends on characteristics that cannot be measured precisely by regulators. Moreover, the efficacy of an approach
that relies on ex post penalties to influence bank behavior
implicitly assumes that the bank is forward-looking and
takes the potential penalties into account when making its
current capital allocation. This is a reasonable assumption
for healthy banks that are managed as going concerns, but
Kupiec and O’Brien have acknowledged that weak banks
may not care about future penalties that, in the extreme,
might not be enforceable owing to insolvency.
In the end, I find merit in Daripa and Varotto’s
suggested modification to the pre-commitment approach,
although I think it more useful to view it as a modification
to the IMA. Institutions would be free to choose a capital
allocation for risks in their trading activities—not only
market risks but also operational and legal risks—that is
less than three times VaR. However, if losses exceeded the
capital allocated, the existing IMA would be reimposed for
some extended period, presumably with a large “plus factor,”

that is, a multiplier larger than three. To assuage regulators’
legitimate concerns about the limitations of incentive-based approaches, a floor might be placed under the pre-commitment, perhaps expressed as a multiple of VaR.
However, to enhance incentives for ongoing improvements
in risk management and to diminish incentives for counterproductive and costly regulatory arbitrage, the minimum
should be well below the existing minimum of three
times VaR.
In effect, this would involve two important
changes to the tests and penalties embodied in the existing
IMA. First, the back-test would be based not on daily VaR
measurement but on cumulative quarterly risk management performance as reflected in the quarterly profit and
loss. Second, favorable back-test results, that is, successful
efforts to avoid losses in excess of commitments, would be
rewarded—in effect, a “minus” would be subtracted from
the standard multiplier of three. Furthermore, the minus
would not be some arbitrary amount, but instead would
reflect banks’ judgments about their ability to avoid losses
in their trading businesses.
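A rough sketch of how such a regime might operate appears below. The floor multiple, the penalty multiplier applied after a violation, and the portfolio figures are all illustrative assumptions rather than parameters proposed in the text.

```python
# Hedged sketch of the modified regime outlined above: a bank pre-commits a capital
# allocation (possibly below three times VaR but above an assumed floor); if cumulative
# quarterly losses breach the commitment, an IMA-style requirement with a large
# multiplier is reimposed. All parameter values are illustrative assumptions.

def required_capital(precommitted, var_estimate, quarterly_loss,
                     floor_multiple=1.5, penalty_multiplier=4.0):
    # The pre-commitment cannot fall below an assumed floor expressed as a multiple of VaR.
    commitment = max(precommitted, floor_multiple * var_estimate)
    if quarterly_loss > commitment:
        # Violation: revert to a VaR-multiple requirement with a large "plus factor".
        return penalty_multiplier * var_estimate
    return commitment

# A bank commits 18 against a VaR of 10 (below three times VaR); no violation this quarter.
print(required_capital(precommitted=18.0, var_estimate=10.0, quarterly_loss=5.0))   # 18.0
# After a quarter in which cumulative losses exceed the commitment, the stiffer rule applies.
print(required_capital(precommitted=18.0, var_estimate=10.0, quarterly_loss=25.0))  # 40.0
```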
Clearly, these would not be radical changes. But
they would be important ones, ones that would relate capital requirements more closely to regulatory objectives and
provide stronger incentives for banks to sharpen their skills
at risk management rather than their skills at regulatory
arbitrage. They would, I believe, be consistent with the
widely shared belief that regulatory capital requirements
need to continue to evolve, consistent with their basic
objectives.
Thank you.

ENDNOTE
The views expressed in this commentary are Mr. Parkinson’s and do not
necessarily reflect those of the Federal Reserve System or its staff.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


The Role of Capital in Optimal
Banking Supervision and Regulation
Alan Greenspan

It is my pleasure to join President McDonough and our colleagues from the Bank of Japan and the Bank of England in
hosting this timely conference. Capital, of course, is a topic
of never-ending importance to bankers and their counterparties, not to mention the regulators and central bankers
whose job it is to oversee the stability of the financial system. Moreover, this conference comes at a most critical and
opportune time. As you are aware, the current structure of
regulatory bank capital standards is under the most intense
scrutiny since the deliberations leading to the watershed
Basle Accord of 1988 and the Federal Deposit Insurance
Corporation Improvement Act of 1991.
In this tenth anniversary year of the Accord, its
architects can look back with pride at the role played by
the regulation in reversing the decades-long decline in bank
capital cushions. At the time that the Accord was drafted,
the use of differential risk weights to distinguish among
broad asset categories represented a truly innovative and,
I believe, effective approach to formulating prudential
regulations. The risk-based capital rules also set the stage
for the emergence of more general risk-based policies
within the supervisory process.

Alan Greenspan is the chairman of the Board of Governors of the Federal
Reserve System.

Of course, the focus of this conference is on the
future of prudential capital standards. In our deliberations,
we must therefore take note that observers both within the
regulatory agencies and in the banking industry itself are
raising warning flags about the current standard. These
concerns pertain to the rapid technological, financial, and
institutional changes that are rendering the regulatory
capital framework less effectual, if it is not on the verge of
becoming outmoded, with respect to our largest, most
complex banking organizations. In particular, it is argued
that the heightened complexity of these large banks’ risk-taking activities, along with the expanding scope of
regulatory capital arbitrage, may cause capital ratios as
calculated under the existing rules to become increasingly
misleading.
I, too, share these concerns. In my remarks this
evening, however, I would like to step back from the technical discourse of the conference’s sessions and place these
concerns within their broad historical and policy contexts.
Specifically, I would like to highlight the evolutionary
nature of capital regulation and then discuss the policy
concerns that have arisen with respect to the current capital
structure. I will end with some suggestions regarding basic
principles for assessing possible future changes to our
system of prudential supervision and regulation.


To begin, financial innovation is nothing new, and
the rapidity of financial evolution is itself a relative concept—what is “rapid” must be judged in the context of the
degree of development of the economic and banking structure. Prior to World War II, banks in this country did not
make commercial real estate mortgages or auto loans. Prior
to the 1960s, securitization, as an alternative to the traditional “buy and hold” strategy of commercial banks, did
not exist. Now banks have expanded their securitization
activities well beyond the mortgage programs of the 1970s
and 1980s to include almost all asset types, including corporate loans. And most recently, credit derivatives have
been added to the growing list of financial products. Many
of these products, which would have been perceived as too
risky for banks in earlier periods, are now judged to be safe
owing to today’s more sophisticated risk measurement and
containment systems. Both banking and regulation are
continuously evolving disciplines, with the latter, of
course, continuously adjusting to the former.
Technological advances in computers and in telecommunications, together with theoretical advances—
principally in option-pricing models—have contributed to
this proliferation of ever more complex financial products.
The increased product complexity, in turn, is often cited as
the primary reason that the Basle standard is in need of
periodic restructuring. Indeed, the Basle standard, like the
industry for which it is intended, has not stood still over
the past ten years. Since its inception, significant changes
have been made on a regular basis to the Accord, including, most visibly, the use of banks’ internal models to assess
capital charges for market risk within trading accounts. All
of these changes have been incorporated within a document
that is now quite lengthy—and written in appropriately
dense, regulatory style.
While no one is in favor of regulatory complexity,
we should be aware that capital regulation will necessarily
evolve over time as the banking and financial sectors themselves evolve. Thus, it should not be surprising that we
constantly need to assess possible new approaches to old
problems, even as new problems become apparent. Nor
should the continual search for new regulatory procedures
be construed as suggesting that existing policies were ill


suited to the times for which they were developed or will
be ill suited for those banking systems that are at an earlier
stage of development.
Indeed, so long as we adhere in principle to a common prudential standard, it is appropriate that differing
regulatory regimes may exist side by side at any point in
time, responding to differing conditions between banking
systems or across individual banks within a single system.
Perhaps the appropriate analogy is to computer-chip manufacturers. Even as the next generation of chip is being
planned, two or three generations of chip—for example,
Pentium IIs, Pentium Pros, and Pentium MMXs—are
being marketed, and at the same time, older generations of
chip continue to perform yeoman duty within specific
applications. Given evolving financial markets, the question is not whether the Basle standard will be changed but
how and why each new round of change will occur and to
which market segment it will apply.
As it oversees the necessary evolution of the Accord
for the more advanced banking systems, the regulatory
community would do well to address some of the basic
issues that, in my view, it has not adequately addressed to
date. In so doing, perhaps we can shed some light on the
source of our present concerns with the existing capital
standard. There really are only two questions here: First,
How should bank “soundness” be defined and measured?
Second, What should be the minimum level of soundness
set by regulators?
When the Accord was being crafted, many supervisors may have had an implicit notion of what they meant
by soundness—they probably meant the likelihood of a
bank becoming insolvent. Although by no means the only
one, this definition of soundness is perfectly reasonable.
Indeed, insolvency probability is the standard explicitly
used within the internal risk measurement and capital allocation systems of our major banks. That is, many of the
large banks explicitly calculate the amount of capital they
need in order to reduce to a targeted percentage the probability, over a given period, that losses would exceed the
allocated capital and drive the bank into insolvency.
But whereas our largest banks have explicitly set
their own internal soundness standards, regulators really

have not. Rather, the Basle Accord set a minimum capital
ratio, not a maximum insolvency probability. Capital, being
the difference between assets and liabilities, is of course an
abstraction. Thus, it was well understood at the time that
the likelihood of insolvency is determined by the level of
capital a bank holds, the maturities of its assets and liabilities, and the riskiness of its portfolio. In an attempt to
relate capital requirements to risk, the Accord divided
assets into four risk “buckets,” corresponding to minimum
total capital requirements of 0 percent, 1.6 percent,
4.0 percent, and 8.0 percent, respectively. Indeed, much of
the complexity of the formal capital requirements arises
from rules stipulating which risk positions fit into which
of the four capital buckets.
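The arithmetic of the four-bucket rule is straightforward, as the sketch below illustrates; the assignment of exposures to buckets is an illustrative assumption, since the Accord’s actual classification rules are far more detailed.

```python
# Sketch of the four-bucket minimum capital calculation described above. The
# asset-to-bucket assignments in the sample portfolio are illustrative assumptions.

CAPITAL_CHARGES = {"bucket_0": 0.000, "bucket_1": 0.016, "bucket_2": 0.040, "bucket_3": 0.080}

# Hypothetical portfolio: (exposure amount, assigned bucket)
portfolio = [
    (500.0, "bucket_0"),
    (200.0, "bucket_1"),
    (100.0, "bucket_2"),
    (300.0, "bucket_3"),  # e.g., corporate loans, all charged 8 percent
]

minimum_capital = sum(amount * CAPITAL_CHARGES[bucket] for amount, bucket in portfolio)
print(minimum_capital)  # 31.2
```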
Despite the attempt to make capital requirements
at least somewhat risk-based, the main criticisms of the
Accord—at least as applied to the activities of our largest,
most complex banking organizations—appear to be warranted. In particular, I would note three: First, the formal
capital ratio requirements, because they do not flow from
any particular insolvency probability standard, are for the
most part arbitrary. All corporate loans, for example, are
placed into a single, 8 percent bucket. Second, the requirements account for credit risk and market risk but not
explicitly for operating and other forms of risk that may
also be important. Third, except for trading account
activities, the capital standards do not take account of
hedging, diversification, and differences in risk management techniques, especially portfolio management.
These deficiencies were understood even as the
Accord was being crafted. Indeed, it was in response to
these concerns that, for much of the 1990s, regulatory
agencies focused on improving supervisory oversight of
capital adequacy on a bank-by-bank basis. In recent years,
the focus of supervisory efforts in the United States has
been on the internal risk measurement and management
processes of banks. This emphasis on internal processes has
been driven partly by the need to make supervisory policies
more risk-focused in light of the increasing complexity of
banking activities. In addition, this approach reinforces
market incentives that have prompted banks themselves to
invest heavily in recent years to improve their management

information systems and internal systems for quantifying,
pricing, and managing risk.
It is appropriate that supervisory procedures evolve
to encompass the changes in industry practices, but we
must also be sure that improvements in both the form
and the content of the formal capital regulations keep
pace. Inappropriate regulatory capital standards, whether
too low or too high in specific circumstances, can entail significant economic costs. This resource allocation effect of
capital regulations is seen most clearly by comparing the
Basle standard with the internal “economic capital” allocation processes of some of our largest banking companies.
For internal purposes, these large institutions attempt
explicitly to quantify their credit, market, and operating
risks by estimating loss probability distributions for various
risk positions. Enough economic, as distinct from regulatory, capital is then allocated to each risk position to satisfy
the institution’s own standard for insolvency probability.
Within credit risk models, for example, capital for internal
purposes often is allocated so as to hypothetically “cover”
99.9 percent or more of the estimated loss probability
distribution.
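The sketch below illustrates this kind of internal calculation: economic capital is set at a chosen percentile (here 99.9 percent) of a simulated loss distribution. The lognormal loss model and its parameters are purely illustrative assumptions.

```python
# Illustrative sketch of an internal economic capital calculation: capital is set to
# cover losses up to the 99.9th percentile of an estimated loss distribution.
# The lognormal loss model and its parameters are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
simulated_losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # assumed loss model

economic_capital = np.percentile(simulated_losses, 99.9)  # 99.9 percent coverage
expected_loss = simulated_losses.mean()
print(f"expected loss: {expected_loss:.2f}, economic capital at 99.9 percent: {economic_capital:.2f}")
```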
These internal capital allocation models have
much to teach the supervisor and are critical to understanding the possible misallocative effects of inappropriate
capital rules. For example, the Basle standard lumps all
corporate loans into the 8 percent capital bucket, but the
banks’ internal capital allocations for individual loans vary
considerably—from less than 1 percent to well over 30 percent—depending on the estimated riskiness of the position
in question. In the case in which a group of loans attracts
an internal capital charge that is very low compared with
the Basle 8 percent standard, the bank has a strong incentive
to undertake regulatory capital arbitrage to structure the
risk position in a manner that allows it to be reclassified
into a lower regulatory risk category. At present, securitization is, without a doubt, the major tool used by large U.S.
banks to engage in such arbitrage.
Regulatory capital arbitrage, I should emphasize,
is not necessarily undesirable. In many cases, regulatory
capital arbitrage acts as a safety valve for attenuating the
adverse effects of those regulatory capital requirements that


are well in excess of the levels warranted by a specific activity’s underlying economic risk. Absent such arbitrage, a
regulatory capital requirement that is inappropriately high
for the economic risk of a particular activity could cause a
bank to exit that relatively low-risk business by preventing
the bank from earning an acceptable rate of return on its
capital. That is, arbitrage may appropriately lower the
effective capital requirements against some safe activities
that banks would otherwise be forced to drop by the effects
of regulation.
It is clear that our major banks have become quite
efficient at engaging in such desirable forms of regulatory
capital arbitrage, through securitization and other devices.
However, such arbitrage is not costless and therefore not
without implications for resource allocation. Interestingly,
one reason that the formal capital standards do not include
very many risk buckets is that regulators did not want to
influence how banks make resource allocation decisions.
Ironically, the “one-size-fits-all” standard does just that, by
forcing the bank into expending effort to negate the capital
standard, or to exploit it, whenever there is a significant
disparity between the relatively arbitrary standard and
internal, economic capital requirements.
The inconsistencies between internally required
economic capital and the regulatory capital standard create
another type of problem: Nominally high regulatory capital ratios can be used to mask the true level of insolvency
probability. For example, consider the case in which the
bank’s own risk analysis calls for a 15 percent internal
economic capital assessment against its portfolio. If the
bank actually holds 12 percent capital, it would, in all
likelihood, be deemed to be well capitalized in a regulatory
sense, even though it might be undercapitalized in the
economic sense.
The possibility that regulatory capital ratios may
mask true insolvency probability becomes more acute as
banks arbitrage away inappropriately high capital requirements on their safest assets by removing these assets from
the balance sheet via securitization. The issue is not solely
whether capital requirements on the bank’s residual risk
in the securitized assets are appropriate. We should also
be concerned with the sufficiency of regulatory capital


requirements on the assets remaining on the book. In the
extreme, such “cherry picking” would leave on the balance
sheet only those assets for which economic capital allocations
are greater than the 8 percent regulatory standard.
Given these difficulties with the one-size-fits-all
nature of our current capital regulations, it is understandable that calls have arisen for reform of the Basle standard.
It is, however, premature to try to predict exactly how the
next generation of prudential standards will evolve. One
set of possibilities revolves around market-based tools and
incentives. Indeed, as banks’ internal risk measurement
and management technologies improve, and as the depth
and sophistication of financial markets increase, bank
supervisors should continually find ways to incorporate
market advances into their prudential policies, when
appropriate. Two potentially promising applications of this
principle have been discussed at this conference. One is the
use of internal credit risk models as a possible substitute
for, or complement to, the current structure of ratio-based
capital regulations. Another approach goes one step further
and uses market-like incentives to reward and encourage
improvements in internal risk measurement and management practices. A primary example is the proposed precommitment approach to setting capital requirements for
bank trading activities. I might add that precommitment
of capital is designed to work for only the trading account,
not the banking book, and then for only strong, well-managed organizations.
Proponents of an internal-models-based approach
to capital regulations may be on the right track, but at
this moment of regulatory development, it would seem
that a full-fledged, bankwide, internal models approach
could require a very substantial amount of time and
effort to develop. In a paper given earlier today, Federal
Reserve Board economists David Jones and John Mingo
enumerate their concerns about the reliability of the
current generation of credit risk models. They suggest,
however, that these models may, over time, provide a
basis for setting future regulatory capital requirements.
Even in the shorter term, they argue, elements of internal
credit risk models may prove useful within the supervisory process.

Still other approaches are of course possible,
including some combination of market-based and traditional ratio-based approaches to prudential regulation. But
regardless of what happens in this next stage, as I noted
earlier, any new capital standard is itself likely to be superseded within a continuing process of evolving prudential
regulations. Just as manufacturing companies follow a
product-planning cycle, bank regulators can expect to
begin working on still another generation of prudential
policies even as proposed modifications to the current
standard are being released for public comment.
In looking ahead, supervisors should, at a minimum, be aware of the increasing sophistication with which
banks are responding to the existing regulatory framework
and should now begin active discussions on the necessary
modifications. In anticipation of such discussions, I would
like to conclude by focusing on what I believe should be
several core principles underlying any proposed changes to
our current system of prudential regulation and supervision.
First, a reasonable principle for setting regulatory
soundness standards is to act much as the market would if
there were no safety net and all market participants were
fully informed. For example, requiring all of our regulated
financial institutions to maintain insolvency probabilities
that are equivalent to a triple-A rating standard would be
demonstrably too stringent because there are very few such
entities among unregulated financial institutions not subject
to the safety net. That is, the markets are telling us that the
value of the financial firm is not, in general, maximized at
default probabilities reflected in triple-A ratings. This suggests, in turn, that regulated financial intermediaries cannot
maximize their value to the overall economy if they are
forced to operate at unreasonably high levels of soundness.
Nor should we require individual banks to hold
capital in amounts sufficient to protect fully against rare
systemic events, which, in any event, may render standard
probability evaluation moot. The management of systemic
risk is properly the job of the central banks. Individual
banks should not be required to hold capital against the
possibility of overall financial breakdown. Indeed, central
banks, by their existence, appropriately offer banks a form of
catastrophe insurance against such events.

Conversely, permitting regulated institutions that
benefit from the safety net to take risky positions that, in
the absence of the net, would earn them junk bond ratings
for their liabilities is clearly inappropriate. In such a world,
our goals of protecting taxpayers and reducing the misallocative effects of the safety net would simply not be
realized. Ultimately, the setting of soundness standards
should achieve a complex balance—remembering that the
goals of prudential regulation should be weighed against
the need to permit banks to perform their essential risk-taking activities. Thus, capital standards should be structured to reflect the lines of business and the degree of risk-taking chosen by the individual bank.
A second principle should be to continue linking
strong supervisory analysis and judgment with rational
regulatory standards. In a banking environment characterized by continuing technological advances, this means
placing an emphasis on constantly improving our supervisory techniques. In the context of bank capital adequacy,
supervisors increasingly must be able to assess sophisticated internal credit risk measurement systems and to
gauge the impact of the continued development in securitization and credit derivative markets. It is critical that
supervisors incorporate, where practical, the risk analysis
tools being developed and used on a daily basis within the
banking industry itself. If we do not use the best analytical
tools available and place these tools in the hands of highly
trained and motivated supervisory personnel, then we
cannot hope to supervise under our basic principle—
supervision as if there were no safety net.
Third, we have no choice but to continue to plan
for a successor to the simple risk-weighting approach to
capital requirements embodied within the current regulatory standard. While it is unclear at present exactly what
that successor might be, it seems clear that adding more
and more layers of arbitrary regulation would be counterproductive. We should, rather, look for ways to harness
market tools and market-like incentives whenever possible,
by using banks’ own policies, behaviors, and technologies
in improving the supervisory process.
Finally, we should always remind ourselves that
supervision and regulation are neither infallible nor likely


to prove sufficient to meet all our intended goals. Put
another way, the Basle standard and the bank examination
process, even if structured in optimal fashion, are a second
line of support for bank soundness. Supervision and regulation can never be a substitute for a bank’s own internal
scrutiny of its counterparties and for the market’s scrutiny
of the bank. Therefore, we should not, for example, abandon

efforts to contain the scope of the safety net or to press for
increases in the quantity and quality of financial disclosures
by regulated institutions.
If we follow these basic prescriptions, I suspect
that history will look favorably on our attempts at crafting
regulatory policy.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Building a Coherent Risk Measurement
and Capital Optimisation Model
for Financial Firms
Tim Shepheard-Walwyn and Robert Litterman

I. INTRODUCTION
Risk-based capital allocation methodologies and regulatory
capital requirements have assumed a central importance in
the management of banks and other financial firms since
the introduction of the Basle Committee’s Capital Accord
in 1988. However, as firms have progressively developed
more sophisticated techniques for measuring and managing risk, and as regulators have begun to utilise the output
of internal models as a basis for setting capital requirements for market risk, it is becoming increasingly clear
that the risk as measured by these models is significantly
less than the amount of equity capital that the firms themselves choose to hold.1
In this paper, we therefore consider how risk
measures, based on internal models of this type, might be
integrated into a firm’s own methodology for allocating
risk capital to its individual business units and for determining its optimal capital structure. We also consider the
implications of these developments for the future approach
to determining regulatory capital requirements.

Tim Shepheard-Walwyn is managing director, Corporate Risk Control,
UBS AG. Robert Litterman is managing director, Asset Management Division,
Goldman Sachs.

II. WHY DO FINANCIAL FIRMS NEED
INTERNAL RISK MEASUREMENT
AND RISK-BASED CAPITAL
ALLOCATION METHODOLOGIES?
The core challenge for the management of any firm that
depends on external equity financing is to maximise shareholder value. To do this, the firm has to be able to show at
the margin that its return on investment exceeds its
marginal cost of capital. In the context of a nonfinancial
firm, this statement is broadly uncontentious. If the expected
return on an investment can be predicted, and its cost is
known, the only outstanding issue is the marginal cost of
capital, which can be derived from market prices for the
firm’s debt and equity.
In the case of banks and other financial firms,
however, this seemingly simple requirement raises significant difficulties. In the first place, the nature of risk in
financial markets means that, without further information
about the firm’s risk profile and hedging strategies, even
the straightforward requirement to be able to quantify the
expected return on an investment poses problems. Second,
the funding activities of financial firms do not provide
useful signals about the marginal cost of capital. This is
because, for the majority of large and well-capitalised
financial firms, the marginal cost of funds is insensitive to
day-to-day changes in the degree of leverage or risk in their


balance sheets. This, in turn, leads to a third problem,
which is how to determine the amount of capital that the
firm should apply to any particular investment. For a nonfinancial company, the amount of capital tied up in an
investment can be more or less equated to the cost of its
investment. However, in the case of a financial firm, where
risk positions often require no funding at all, this relationship does not hold either.
It therefore follows that a financial firm that wants
to maximise shareholder value cannot use the relatively
straightforward capital pricing tools that are available to
nonfinancial firms, and must seek an alternative shadow
pricing tool to determine whether an investment adds to or
detracts from shareholder value. This is the purpose that is
served by allocating risk capital to the business areas
within a financial firm.

III. RISK MEASUREMENT, SHADOW PRICING,
AND THE ROLE OF THE SHARPE RATIO
Since the objective of maximising shareholder value can be
achieved either by increasing the return for a given level
of risk, or alternatively by reducing the risk for a given
rate of return, the internal shadow pricing process needs
to be structured in a way that will assist management in
achieving this objective. In other words, the shadow pricing tool has to have as its objective the maximisation of the
firmwide Sharpe Ratio, since the Sharpe Ratio is simply
the expression of return in relation to risk. Seen in these
terms, we can draw a number of important conclusions that
will assist us in determining how we should build our
shadow pricing process.
First, and importantly, the shadow pricing process
should operate in a manner that is independent of the level
of equity capital in the firm. This follows because, where
the perceived risk of bankruptcy is negligible, as is the case
for most large financial firms, the Sharpe Ratio is independent of the amount of equity within a firm (see appendix).
Thus, for any given set of assets, the amount of equity the
firm has does not alter the amount of risk inherent in the
assets, it merely determines the proportion of the risk that
is assumed by its individual equity holders. Consequently,
for any given level of equity, shareholder value can always


be enhanced either by increasing the ex post rate of return
for the given level of risk, or more importantly for a bank,
which has little scope for significantly enhancing the earnings on its loan portfolio, by reducing the variance of those
earnings through improved portfolio management.
Second, if the purpose of the process is to maximise
the firm’s Sharpe Ratio by encouraging risk-optimising
behaviour, it has to capture all the important components
of a firm’s earnings volatility. The Sharpe Ratio that is relevant to the investor is simply the excess return on the
firm’s equity relative to the volatility of that return.
In ex post terms, this can be expressed as:

Sharpe Ratio_t = (R_pt - R_ft) / σ_t,

where R_pt is the observed firmwide return on the investment at time t, R_ft is the risk-free rate of return at time t, and σ_t is the standard deviation of R_pt measured at time t.
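As a purely illustrative sketch in Python (the return figures are invented, and the sample standard deviation of the observed returns stands in for σ_t), the ex post calculation might look as follows:

```python
# Illustrative only: an ex post Sharpe Ratio computed from a short series of
# observed firmwide returns.  All figures are invented for the example.
import statistics

firmwide_returns = [0.031, 0.018, -0.004, 0.027, 0.022, 0.009]  # R_p per period
risk_free_rate = 0.012                                           # R_f per period

excess_returns = [r - risk_free_rate for r in firmwide_returns]
volatility = statistics.stdev(firmwide_returns)   # stands in for sigma_t

sharpe_ratio = statistics.mean(excess_returns) / volatility
print(f"ex post Sharpe Ratio: {sharpe_ratio:.2f}")
```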
Management’s objective at time t is therefore to
maximise the expected Sharpe Ratio over the future
period t+1. In order to do this, management has to be able
to predict R_{pt+1} and σ_{t+1}. This means that we need to be able to understand both the components of E(R_{pt+1}) and the determinants of its variance, σ²_{t+1}.
In a simple model of the firm, we can express E(R_{pt+1}) as follows:

E(R_{pt+1}) = E(ΔP_{t+1} + Y_{t+1} - C_{t+1}),

where E(R_{pt+1}) is the forecast value of earnings in time t+1, ΔP_{t+1} is the change in the value of the firm's portfolio of assets in time t+1, Y_{t+1} is the value of the firm's new business revenues in time t+1, and C_{t+1} is the costs that the firm incurs in time t+1.

We can express Var(R_{pt+1}) as σ²_{t+1}, so that by definition:

σ²_{t+1} = σ²_{ΔP,t+1} + σ²_{Y,t+1} + σ²_{C,t+1} + 2(Cov(ΔP_{t+1}, Y_{t+1}) - Cov(ΔP_{t+1}, C_{t+1}) - Cov(Y_{t+1}, C_{t+1})).

Because this is a forward-looking process, the firm
cannot rely solely on observed historical values. It needs to
be able to estimate their likely values in the future. The
firm must therefore understand the dynamics of each of
∆P t + 1 , Y t + 1 , and C t + 1 , and in particular the elements
that contribute significantly to both their variance and
covariance. These are the risk drivers of the business, which
need to be identified and modeled if the firm is to have an
effective shadow pricing process for its risk.
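To make the decomposition concrete, the following sketch builds σ²_{t+1} from assumed standard deviations of ΔP, Y, and C and assumed pairwise correlations; every input is a stand-in chosen for illustration, not an estimate from any actual firm.

```python
# Illustrative decomposition of next-period earnings variance into the
# variances and covariances of its components; every input is an assumption.
import math

sigma = {"dP": 40.0, "Y": 15.0, "C": 8.0}   # std devs of Delta-P, Y, and C
rho = {("dP", "Y"): 0.30, ("dP", "C"): 0.10, ("Y", "C"): 0.25}

def cov(a: str, b: str) -> float:
    return rho[(a, b)] * sigma[a] * sigma[b]

variance = (sigma["dP"] ** 2 + sigma["Y"] ** 2 + sigma["C"] ** 2
            + 2 * (cov("dP", "Y") - cov("dP", "C") - cov("Y", "C")))

print(f"sigma_t+1 (earnings volatility): {math.sqrt(variance):.1f}")
```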
As a result of this approach, it is possible to think
in terms of a generic risk pricing approach for maximising
shareholder value, using generally agreed-upon risk pricing
tools that could be applicable to all financial firms. Just
as value at risk measures for market risk have become a
common currency for comparing and analysing market
risk between firms, a similar approach to other risk factors
could readily be developed out of this model.

IV. DETERMINING THE OPTIMAL CAPITAL
STRUCTURE FOR THE FIRM
As we have explained, there is no causal link between the
level of gearing that a firm chooses and its Sharpe Ratio.
However, this is subject to one important caveat, which
is that the amount of equity capital that a firm holds has
to be large enough to enable it to survive the “normal”
variability of its earnings. This means that at the minimum, a firm will need to have some multiple of its
expected earnings volatility, (σ_{t+1})k, where k is a fixed
multiplier—as equity capital. Failure to maintain such an
amount should lead to a risk premium on the firm’s equity,
which would make the cost of capital prohibitive. In most
cases, though, management will choose to operate in some
excess of this minimum level.
The question we therefore need to address here is
how much equity capital in excess of (σ_{t+1})k will a
well-managed firm choose to hold, and how should it
reach that decision?
Although by definition the amount of equity that the firm chooses will itself be a multiple of E(σ_{t+1})k,² the methodology for deciding how to set that amount needs to be significantly different from the methodology by which the shadow pricing amount σ_{t+1} is determined.

This is so for three reasons. First, financial markets are
prone to fat tails, which means that it
is dangerous to rely solely on the properties of statistical
distributions to predict either the frequency or the size of
extreme events. Given that one of the responsibilities of the
management of a financial firm is to ensure the continuity
of that firm in the long term—which will in turn help to
ensure that the perceived risk of bankruptcy is kept to a
minimum—the firm needs to be able to analyse the nature
of these rare events and ensure that the capital and balance-sheet structure are robust enough to withstand these occurrences and still be able to continue in business thereafter.
Thus, while in the case of certain risk factors the
potential stress or extreme loss that the firm faces and
needs to protect against may indeed be best estimated by
an extension of the statistical measures used to calculate
σ_{t+1}, in other cases the results of scenario analysis may
yield numbers well in excess of the statistical measure.
(The 1987 market crash, for example, was a 27 standard
deviation event—well outside the scope of any value-at-risk
measure.) As a result, statistical techniques that are applicable to a risk pricing process need to be supplemented
with effective scenario and stress analysis techniques in
order for management to assess the potential scale of the
firm’s exposure to such extreme events.
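The point can be sketched minimally as follows; the σ_{t+1} estimate, the tail multiple, and the scenario losses are all assumptions chosen only to illustrate retaining the larger of the statistical and scenario-based figures.

```python
# Illustrative stress-loss assessment: the statistical tail estimate is a
# multiple of sigma_{t+1}, the scenario losses are specified directly, and
# the larger figure is retained.  All numbers are assumptions.
sigma_next = 55.0      # estimated sigma_{t+1} of earnings
tail_multiple = 4.0    # assumed multiple of sigma used as the statistical estimate

statistical_stress = tail_multiple * sigma_next

scenario_losses = {
    "1987-style market crash": 310.0,
    "major counterparty default": 180.0,
    "funding squeeze": 140.0,
}

stress_loss = max(statistical_stress, max(scenario_losses.values()))
print(f"statistical estimate: {statistical_stress:.0f}")
print(f"worst scenario loss:  {max(scenario_losses.values()):.0f}")
print(f"stress loss retained: {stress_loss:.0f}")
```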
The second consideration in managing the firm’s
capital is how to optimise the firm’s equity structure in an
imperfect world. In theory, in the absence of any significant
risk of bankruptcy, the market should be indifferent between
different levels of leverage for firms with the same Sharpe
Ratio, but it is not clear that this is the case. In particular, highly capitalised banks, which should have lower target
returns on equity to compensate for their lower risk premia,
appear to remain under pressure to provide similar returns
on equity to more thinly capitalised firms.
Third, management has the additional requirement to ensure that it complies with regulatory capital
requirements, set by reference to regulatory measures of
risk, which often do not correspond with internal risk measures and in many cases conflict with them.
This means that one of the principal strategic considerations for management is to optimise the capital structure, bearing in mind the three different considerations of protecting the firm against catastrophic loss,
meeting shareholder expectations, and complying with
external regulatory requirements.
The essential requirement for this optimisation
exercise is to ensure that the two following conditions are
always met:
(σ_{t+1})k_i ≤ Total Capital_i,    (Condition 1)

where (σ_{t+1})k_i is the minimum level of capitalisation at which firm i can raise capital funds in the market for its given level of risk, and Total Capital_i is the amount of capital that the firm actually holds, and

Regulatory Capital_i ≤ Total Capital_i,    (Condition 2)

where Regulatory Capital_i is the amount of capital that firm i is required to hold under the existing regulatory capital regime.
This formulation shows clearly why in a shadow
pricing approach to risk, based on the calculation of σ_{t+1},
the amount of capital at risk and therefore being charged to
the business is always likely to be less than the total capital
of the firm.
Furthermore, from the perspective of the firm, the
preferable relationship between these three considerations
would also be such that
(σ_{t+1})k_w < Regulatory Capital_w < Optimal Capital_w,    (Condition 3)

where Optimal Capital_w is the amount of capital that the firm would choose for itself in the absence of a regulatory constraint.
Where this condition can be met, the firm can
concentrate solely on optimising its capital structure and
maximising shareholder value without having to factor
considerations about the impact of a regulatory capital
regime into its optimisation exercise.
For completeness, we can also note the further condition that should hold, from the regulatory perspective, for any regulatory capital regime to be appropriately represented as risk-based, which is

(σ_{t+1})k_i ≤ Regulatory Capital_i,    (Condition 4)

so that the risk-based regulatory capital requirement is at
least consistent with the market’s assessment of the minimum amount of capital a firm should have in order to
protect against the risk inherent in its business. This, in
turn, by combining Conditions 2 and 4, leads us to the
minimum requirement for a satisfactory regulatory capital
regime that
(σ_{t+1})k_i ≤ Regulatory Capital_i ≤ Total Capital_i.    (Condition 5)
We return to this issue, and in particular to the relationship between the regulatory requirements and the optimal capital structure for the firm, in more detail in Section VI.
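These conditions lend themselves to a simple consistency check. The sketch below tests Conditions 1 through 5 for a single hypothetical firm; the multiplier k and every capital figure are invented for the example.

```python
# Illustrative check of Conditions 1-5 for one firm; every number is assumed.
sigma_next = 50.0        # sigma_{t+1}
k = 6.0                  # market-implied minimum multiplier
total_capital = 450.0
regulatory_capital = 360.0
optimal_capital = 420.0  # capital the firm would choose absent regulation

minimum_capital = sigma_next * k   # (sigma_{t+1}) k

checks = {
    "Condition 1: (sigma)k <= Total": minimum_capital <= total_capital,
    "Condition 2: Regulatory <= Total": regulatory_capital <= total_capital,
    "Condition 3: (sigma)k < Regulatory < Optimal":
        minimum_capital < regulatory_capital < optimal_capital,
    "Condition 4: (sigma)k <= Regulatory": minimum_capital <= regulatory_capital,
    "Condition 5: (sigma)k <= Regulatory <= Total":
        minimum_capital <= regulatory_capital <= total_capital,
}

for name, satisfied in checks.items():
    print(f"{name}: {'met' if satisfied else 'not met'}")
```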

V. RISK MEASUREMENT—THE CHALLENGE
OF NORMALISATION
Now that we have distinguished between the different
purposes of risk measurement for shadow pricing of risk
and for the determination of the optimal capital structure,
we can move on to consider the challenges of building an
effective risk measurement system. The objective here is to
enable management to assess the different risks that a firm
faces in a broadly similar fashion, and to understand their
interrelationships. This requires both a common measurement framework and a methodology for ensuring that the
risk process covers all the material risks that may impact
the shadow pricing process or the decisions about the
capital structure.
At the outset, a firm has to have a clear understanding of the meaning of risk if it is to develop an
effective risk measurement methodology. For the purposes
of this paper, we can define the risk in a firm on an ex post
basis as the observed volatility of the firm’s earnings over
time around a mean value. The firm’s risk measures are
thus the firm’s best estimates of that volatility, which management can then use to make choices between different
business strategies and investment decisions and to determine the firm’s capital structure.

In order to achieve this, it is necessary to distinguish among the three measures of expected, unexpected,
and stress loss as follows.
The expected loss associated with a risk factor is simply
the expected value of the firm’s exposure to that risk factor.
It is important to recognise that expected loss is not itself a
risk measure but is rather the best estimate of the economic
cost of the firm’s exposure to a risk. The clearest example of
this at present is the treatment of credit risk, where banks
know that over the credit cycle they will incur losses with a
high probability, but only account for those losses as they
occur. This introduces a measure of excessive volatility into
the firm’s reported earnings, which is not a true measure of
the “risk,” given that the defaults are predictable with a high
degree of confidence. The true risk is only that part of the
loss that diverges from the expected value.
Having established the expected loss associated
with a risk, it is then possible to measure the variance of
that cost in order to establish the extent to which it contributes to the overall variance of the firm’s earnings, which
we term the unexpected loss associated with the risk factor.
Both VaR for market risk and the credit risk measures produced by CreditMetrics and CreditRisk+ are examples of
measures of unexpected loss that can be used in an internal
risk pricing process of the type discussed in Section III.
However, comparison of these two approaches also points
up the significance of adopting different time horizons in
measuring different risks.
VaR measures for market risk are typically either
a one-day or ten-day measure of risk. By contrast, the
modeling of default risk, which is still at an early stage of
development, typically utilises an annual observation
period, since default frequencies change over a much longer
time horizon than market prices. As a result of these different time horizons, a ten-day 99 percent confidence
interval for market risk would imply that the VaR limit
could be expected to be exceeded once every three years. An
annually based VaR of 97.5 percent for credit risk, however, would be expected to be exceeded only once every
forty years. Aggregating the two measures into a single
measure of the firm’s risk—even assuming for the moment
that the firm’s market and credit risk were independent—

would not provide a satisfactory indication of the aggregate
risk that the firm faces.
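The arithmetic behind this comparison is straightforward to reproduce. The sketch below converts a horizon and a confidence level into an approximate number of years between exceedances, using calendar-day horizons, which appears to be the convention behind the figures quoted above.

```python
# Convert a (horizon, confidence level) pair into an approximate number of
# years between exceedances, assuming calendar-day horizons.
def years_between_exceedances(horizon_days: float, confidence: float,
                              days_per_year: float = 365.0) -> float:
    periods_per_year = days_per_year / horizon_days
    exceedance_probability = 1.0 - confidence      # per horizon period
    return 1.0 / (exceedance_probability * periods_per_year)

print(f"10-day 99% market risk VaR:  exceeded about every "
      f"{years_between_exceedances(10, 0.99):.1f} years")
print(f"1-year 97.5% credit measure: exceeded about every "
      f"{years_between_exceedances(365, 0.975):.0f} years")
```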
A further problem with the estimation of unexpected losses is the availability of reliable data for the
different risk factors that a firm faces. Significant progress
has been made on measuring market risk because of the
availability of daily data for prices and for revenues within
firms, and more recently progress has also been made on
modeling credit risk, although here the data quality
problem is proving more challenging. In the case of other
risk factors such as liquidity, legal, and operational risks,
however, the analysis is likely to have to rely on firms’ own
internal data, and very little work has yet been undertaken
to examine the statistical properties of those risks. Moreover, meaningful estimates of the covariances between risk
factors will only be possible once reliable estimates can be
made of unexpected loss on a stand-alone basis.
In addition to the need to develop expected and
unexpected loss measures, which are particularly relevant
to the firm’s risk pricing methodology, the firm also has
to have a methodology for determining the extreme or
stress loss that it might face over the longer term horizon as
a result of its exposure to a risk factor in order to make
meaningful decisions about its capital structure and risk
limits systems. A number of risk measures and limits, such
as the concentration limits that banking regulators use to
limit the proportion of a bank’s capital that can be at risk
to any one counterparty, are derived explicitly or implicitly
from this type of measure. The methodology that a firm
may choose for calculating the potential stress loss associated with a particular risk will vary from risk factor to risk
factor, but will typically consist of a form of scenario simulation, which envisions the type of situation where the firm
could potentially be put at risk from a particular risk
factor, or a combination of factors, and then assesses the
firm’s capital resources and limits structures by reference to
the results of this exercise.
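A stripped-down sketch of such an exercise is given below; the exposures, scenario shocks, capital figure, and stress-loss limit are all invented, and a real scenario engine would of course be far richer.

```python
# Illustrative scenario simulation: apply assumed shocks to assumed exposures
# and compare the resulting losses with capital and a stress-loss limit.
exposures = {"equities": 900.0, "credit": 1_500.0, "property": 400.0}

scenarios = {
    "equity crash":      {"equities": -0.30, "credit": -0.05, "property": -0.10},
    "recession":         {"equities": -0.15, "credit": -0.12, "property": -0.20},
    "combined downturn": {"equities": -0.25, "credit": -0.10, "property": -0.25},
}

capital = 600.0
stress_loss_limit = 500.0

for name, shocks in scenarios.items():
    loss = -sum(exposures[asset] * shocks[asset] for asset in exposures)
    within_limit = loss <= stress_loss_limit
    print(f"{name}: loss {loss:.0f}, {loss / capital:.0%} of capital, "
          f"{'within' if within_limit else 'breaches'} stress-loss limit")
```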
Given that the purpose of measuring risk is to
estimate the exposure of the firm to earnings variability
from its principal risk drivers, the firm also needs to
have a factor model that identifies the key risk factors to
which it is exposed and measures their impact on the volatility of the earnings stream. The issue we now need
to address is, What are these risk drivers and how can
they be measured effectively?
In order to establish a starting point for this
exercise, we can use the 1994 Basle Committee paper on
risk management for derivatives, which identified six risks
that firms face—market risk, credit risk, settlement risk,
liquidity risk, legal risk, and operational risk. If we relate
this list back to the shadow pricing equation in Section III,
we can readily see how much still remains to be done in
establishing an effective internal risk pricing process.
As we discussed in Section III, firms have started
this process by analysing their trading exposure to
market risk, which is where the data are most readily
available. It is interesting to note, however, that even in
the context of market risk, few firms are yet able to
measure their overall revenue exposure from areas such as
corporate finance or funds management to movements in
market variables, even though these may be significantly
more powerful factors in determining the quality of their
earnings in the medium term, not least because the time
horizons are different.
In a manner similar to their work on market risk,
firms have turned their attention more recently to the
issues associated with the measurement of the unexpected
loss associated with credit risk. Work in this area derives
from two parallel initiatives. On the one hand, there has
been increasing interest, stimulated in considerable part by
the Basle Committee’s model-based approach to capital
requirements for market risk, in developing models of the
specific risk in the trading book. On the other hand, there
has been an increasing effort to develop reliable models for
measuring the default risk in the banking book.
The third category of risk identified in the 1994
paper in the context of derivative products was settlement
risk. In practice, settlement risk is a special case of credit
risk, since it arises from the failure of a counterparty to
perform on a contract. Its particular characteristic is that it
arises on a daily basis as transactions—particularly in
foreign exchange and payments business—are settled, and
the magnitude of the daily exposure between different
financial institutions in relation to settlement risk is many times larger than for other risk factors. The primary challenge for a financial firm is therefore to be able to capture
and monitor its settlement risk in a timely manner. Once
this has been done, the same methodology for measuring
expected and unexpected loss can be applied to settlement
risk as for other types of credit risk.
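As a rough illustration of the monitoring step, the sketch below totals the amounts due to settle with each counterparty on a given day and flags any breaches of assumed settlement limits; the trades and limits are hypothetical.

```python
# Illustrative intraday settlement-risk monitor: aggregate the amounts due to
# settle with each counterparty today and flag breaches of assumed limits.
settlements_due_today = [
    ("Bank A", 120.0), ("Bank B", 75.0), ("Bank A", 260.0),
    ("Bank C", 40.0), ("Bank B", 310.0),
]
settlement_limits = {"Bank A": 350.0, "Bank B": 300.0, "Bank C": 100.0}

exposure = {}
for counterparty, amount in settlements_due_today:
    exposure[counterparty] = exposure.get(counterparty, 0.0) + amount

for counterparty, total in sorted(exposure.items()):
    limit = settlement_limits[counterparty]
    status = "OK" if total <= limit else "LIMIT EXCEEDED"
    print(f"{counterparty}: {total:.0f} vs limit {limit:.0f} -> {status}")
```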
To date, the techniques for measuring liquidity risk
have tended to focus on the potential stress loss associated
with the risk, whether in the form of the cash capital measure used by the U.S. securities firms or the funding gap
analysis undertaken by bank treasuries. Both are attempts
to quantify what might occur in extreme cases if the firm’s
funding sources dried up. While this is clearly a prudent
and desirable part of corporate financial management, it
should also be possible to apply the framework of expected
and unexpected loss to liquidity risk by measuring the
extent to which the liquidity risk inherent in the business
gives rise to costs in hedging out that risk through the
corporate treasury function.
In a similar way to the approach to liquidity risk,
the focus to date in analysing the impact of legal risk and
other aspects of operational risk has been on seeking to
prevent the serious problems that have given rise to the
well-publicised losses, such as those of Hammersmith
and Fulham in the context of legal risk, or those of Barings
and Daiwa Bank in the context of operational risk more
generally. As with liquidity risk, however, the issue that
has yet to be addressed in the context of internal risk
pricing is how these risk factors contribute to the earnings volatility of the firm, since operational risk can be
seen as a general term that applies to all the risk factors
that influence the volatility of the firm’s cost structure as
opposed to its revenue structure. It is therefore necessary
for the firm to classify and analyse more precisely the
nature of these risk factors before any meaningful attempt
can be made to fit them into a firmwide risk model of the
type envisaged by this paper.
As the foregoing analysis indicates, considerable further work remains to be undertaken in the development of risk modeling in financial firms. Nevertheless, the evident gaps in the development of a full risk model do not preclude proceeding to implement a risk pricing methodology for
those risks that can be measured. This is because with risk
pricing there is no presumption that the risk measures should
add up to the total capital of the firm, and thus there is no
danger of misallocating capital to the wrong business, which
can occur if a risk-based capital allocation model is used with
an incomplete risk model. Given this fact, the integrity of
the risk measure for the particular risk factor is the primary
consideration, and the need for a strict normalisation of risk
measures—so that the measures for each risk factor can be
aggregated on a consistent “apples for apples” basis—
assumes a lesser importance as an immediate objective.
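The paper does not prescribe a particular charging rule, but one simple way to picture the contrast is sketched below: each measurable risk factor is charged on its own unexpected-loss estimate (here, an assumed hurdle rate times that estimate), unmeasured factors are left uncharged for the time being, and the charges are not forced to sum to total capital. Every figure and the charging rule itself are assumptions made for illustration.

```python
# Illustrative risk pricing without normalisation: each measurable risk factor
# is charged on its own unexpected-loss estimate (hurdle rate times estimate
# is assumed here); unmeasured risks are left uncharged, and the charges are
# not rescaled to add up to the firm's total capital.
unexpected_loss = {"market": 120.0, "credit": 260.0, "settlement": 45.0}
# liquidity, legal, and operational risk: not yet reliably measured, so no charge
hurdle_rate = 0.12
total_capital = 900.0

charges = {risk: hurdle_rate * ul for risk, ul in unexpected_loss.items()}

for risk, charge in charges.items():
    print(f"{risk}: charge {charge:.1f}")
print(f"sum of charges: {sum(charges.values()):.1f} (total capital: {total_capital:.0f})")
```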

VI. RISK ALLOCATION METHODOLOGIES
AND REGULATORY CAPITAL
REQUIREMENTS—A SYNTHESIS?
Having outlined the components of an integrated approach
to risk pricing and capital optimisation within financial
firms, we can now consider the implications of this analysis
for the structure of a satisfactory regulatory capital framework. In this context, we do not seek to analyse the different rationales for capital regulation, but simply note that it
is now widely accepted that any regulatory capital requirement should be risk-based and should be consistent with
firms’ own internal risk measurement methodologies, so
that a firm that carries more risk is subject to a higher capital requirement than one that carries less risk.
As we have explained, the core objective of a
firm’s own internal risk pricing mechanism should be to
enhance shareholder value by encouraging behaviour that
will improve the firm’s overall Sharpe Ratio. In normal
circumstances, this will be separate from the process of
determining the optimal capital structure for the firm.
The difference between the two is that the risk pricing
exercise is based on a measure of unexpected loss and is
designed to operate at the margin, at the level of the individual business decision. The decision on the capital
structure should, by contrast, be based on an assessment
of stress loss scenarios and be independent of activity at
the margin, leading to the minimum capital condition, identified in Section IV, that

(σ_{t+1})k_i ≤ Total Capital_i.    (Condition 1)

In Section IV, we also derived the following minimum condition, which we believe should be satisfied in order to characterise a regulatory capital regime as adequately risk-based,

(σ_{t+1})k_i ≤ Regulatory Capital_i ≤ Total Capital_i,    (Condition 5)

and we identified the desirable condition for a well-managed and well-capitalised firm that

(σ_{t+1})k_w < Regulatory Capital_w < Optimal Capital_w.    (Condition 3)
We can now assess how these requirements compare under
three alternative approaches to setting regulatory capital
requirements, which can be summarised as follows:
• the fixed ratio approach (Basle 1988/CAD/SEC
net capital rule)
• the internal measures approach (Basle market risk
1997/Derivatives Policy Group proposals)
• the precommitment approach.
The fixed ratio approach calculates the required
regulatory capital for a financial firm by reference to a regulatory model of the “riskiness” of the firm’s balance sheet.
The problem associated with any regime of this sort, which
seeks to impose an arbitrary measure of the riskiness of a
firm’s business on a transaction-by-transaction basis, is that
there is no mechanism for testing it against the true risk in
the firm, which will by definition vary from firm to firm.
As a result, the only part of Conditions 3 and 5 that this approach can satisfy a priori is that

Regulatory Capital_i ≤ Total Capital_i,

which is achieved by regulatory requirement. But Condition 4 is not guaranteed, because we cannot be sure that

(σ_{t+1})k_i ≤ Regulatory Capital_i,

and equally, there is no way of ensuring for a well-managed firm that Condition 3 can be met, because there is no mechanism for ensuring that

Regulatory Capital_w < Optimal Capital_w.
Given these flaws, it is difficult to see how a fixed ratio
regime could realistically be adapted to meet our conditions for an optimal capital structure.


By comparison with the fixed ratio approach, the
internal models approach is clearly preferable from the viewpoint of the well-managed firm, since it seeks to equate
regulatory capital to
(σ_{t+1})m,
where m is the regulatory multiplier.
If we assume that m is set at a level that is higher than k (the minimum capital multiplier for a viable firm) but at a level that is still economic, it is likely that the well-managed firm will be able to live with this regime, provided it has a sufficient margin of capital between (σ_{t+1})m_w and Optimal Capital_w.
However, it is questionable whether such a “full
models” regime is genuinely optimal, or could be introduced quickly, since neither the industry nor the regulators
are yet able to define the model that determines σ_{t+1} for
the whole firm. Consequently, a decision to use a full
models approach for regulatory capital purposes would
commit both regulators and financial firms to a significant
investment of resources, with an indeterminate end date,
and would at the same time provide no assurance that the
outcome was superior to a simpler and less resource-intensive approach.
The precommitment approach, by contrast with either
the fixed ratio or internal models approach, has the attraction of simplicity and synergy with the firm’s own processes since it allows firms to determine their own capital
requirement for the risks they face. If the regulators are
able to ascertain that the firm’s internal procedures are such
as to ensure that
(σ_{t+1})k_i ≤ Total Capital_i
with sufficient margin to satisfy the regulatory needs for
capital, then precommitment in its most complete sense
has the simple result that
(σ_{t+1})k_i ≤ Total Capital_i ≡ Regulatory Capital_i,
which satisfies the requirements of our three conditions.
However, it is questionable whether a full precommitment approach, as outlined, can be defined as a
regulatory capital regime at all. It would probably be
better described as an internal controls regime, since in
substance it would mean that the regulator would review the methodology whereby the firm undertook its risk pricing and capital structuring decisions and would either
approve them—allowing precommitment—or impose a
capital requirement if they were not satisfied with the
process. In addition, the regulatory authority would be
susceptible to criticism, in the event that a problem was
encountered at a firm that had been allowed to employ the
precommitment approach, that it had unnecessarily foregone an important regulatory technique.
Given the evident problems of a move that is as
radical as the precommitment proposal, we therefore
believe that it is worthwhile to consider a fourth approach,
which we refer to as the base plus approach. Under this
approach, the regulator would determine directly on a
firm-by-firm basis the regulatory capital requirement for
the forthcoming period as an absolute amount, say R_{t+1},
based on some relatively simple rules such as a multiple
of the firm’s costs or revenues in the previous year, and
modified to take account of the risk profile of the firm. The
basis for setting this requirement should be clearly defined,
and would need to be sufficient to ensure that the condition for the well-managed firm was met such that
(σ_{t+1})k_w < Regulatory Capital_w < Optimal Capital_w.
However, in order to prevent the firm from
exploiting this fixed capital requirement by changing its
risk profile after the capital requirement was set, the firm
would also be required to supplement its regulatory capital
by a precommitment amount that should be sufficient to
cover the amount that its risk profile changed during the
reference period.
The advantage of this approach would be that it
would be simple from the firm’s perspective, it would
require relatively little detailed assessment by the regulator
of the firm’s own internal models regime, and would not be
conditional on the firm having modeled every material risk
before it took effect. At the same time, it could have incentives built in, since the more confident the regulator was
about the quality of the firm’s internal controls the lower
could R_{t+1} be set, while still leaving the regulator the ultimate authority to ensure that all firms were capitalised at a level sufficiently in excess of (σ_{t+1})k to protect the overall system against the risk of extreme systemic events. From the perspective of the firms, the fact that additional capital was required at the level of changes in (σ_{t+1})k and
not based on a higher multiplier would ensure that the
regulatory regime remained in line with the requirements
of the internal risk pricing, so avoiding the risk of regulatory arbitrage arising from inappropriate capital rules.
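A minimal sketch of the base plus arithmetic, with every figure assumed: the regulator sets a base amount R_{t+1} for the period, and the firm supplements it by the amount that its measured risk, expressed as (σ_{t+1})k, has risen since the base was fixed.

```python
# Illustrative "base plus" requirement: an assumed regulator-set base amount
# plus a precommitment top-up covering any rise in (sigma_{t+1})k since the
# base was fixed.  All numbers are invented for the example.
base_requirement = 320.0        # R_{t+1}, set firm-by-firm by the regulator
k = 6.0                         # minimum-capitalisation multiplier

sigma_at_setting = 50.0         # sigma_{t+1} when the base requirement was set
sigma_now = 58.0                # sigma_{t+1} after the risk profile has changed

risk_increase = max(0.0, (sigma_now - sigma_at_setting) * k)
required_capital = base_requirement + risk_increase

print(f"base requirement:   {base_requirement:.0f}")
print(f"precommitment plus: {risk_increase:.0f}")
print(f"total requirement:  {required_capital:.0f}")
```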

VII. CONCLUSION
It is becoming increasingly clear that the regulatory capital
requirements for both banks and securities firms are not
appropriately aligned either with the risk that those firms
are taking or with the way in which those firms manage
their own risks in order to maximise shareholder value and
optimise their capital structures. In this paper, we have
argued that this process has two elements. Internal risk
measures such as value at risk can be used by financial firms
as a means of enhancing shareholder value by targeting

directly the firmwide Sharpe Ratio rather than through the
indirect mechanism of internal capital allocation. However,
we argue that these measures of unexpected loss need to be
supplemented by techniques such as scenario analysis when
assessing the firm’s potential exposure to stress loss and
thus determining the firm’s optimal capital structure.
In light of these considerations, we do not believe that any of the currently proposed regulatory capital regimes, which we characterise as the fixed ratio approach, the internal models approach, and the precommitment approach, is consistent with this account of risk pricing
and capital optimisation within firms. By contrast, we
believe that our proposal for a base plus approach to regulatory capital would be consistent with both regulatory
objectives and firms’ own internal processes, and as such
would provide a sound basis for a regulatory capital regime
for financial firms in the twenty-first century.


APPENDIX: THE SCALE INDEPENDENCE OF THE SHARPE RATIO

1. Definitions:
I Arbitrary Amount of Investment
F Financing Amount of Investment I
C Capital Allocated to Investment I
Such that:
I = F + C.
(This is merely a restatement of an accounting fact
that assets = liabilities.)
Further:
Exp(P)      Expected Profits from Investment I net of direct and allocated indirect costs before funding
Exp(P_net)  Expected Net Profits, that is, profits after funding costs
Exp(R)      Expected Return (percent) on (arbitrary amount) Capital Allocated to Investment I,

where:

Exp(R) = Exp(P_net) / C.

Vol_P       Volatility of Profits
Vol_R       Volatility of Return on Equity
r_f         the Default-Free Interest Rate

In its simplest form, the Sharpe Ratio is defined as
the excess return of an investment over the standard
deviation of the excess return. If we assume that interest rates are fixed over the time horizon of the investment, then the volatilities of returns and of excess
returns are the same.
2. First Result:
Many activities in banking effectively require little or
no investment at the outset (if regulatory capital
requirements are neglected for a moment), such as
swaps and futures. For this reason, we choose to start
with an absolute revenue-based Sharpe Ratio and
extend it to a relative (percent) measure in a second
step.
The excess profits over the risk-free rate of interest
for capital and after any refinancing costs are given by:
Exp(P) - r_f F - r_f C,

and the Sharpe Ratio therefore by

(Exp(P) - r_f F - r_f C) / Vol_P = (Exp(P) - r_f (F + C)) / Vol_P
    = (Exp(P) - r_f I) / Vol_P = (Exp(P_net) - r_f C) / Vol_P.
The Sharpe Ratio of the Expected Revenues is thus
given by the profits net of the costs for full (that is,
100 percent) refinancing over the volatility of earnings.
3. Second Result:
If return is measured as the ratio of absolute return to
allocated capital (which can be an arbitrary amount),
then the following result holds for volatilities:
Vol(Return) = Vol(P/C) = (1/C) Vol(P).
This simple result obviously guarantees that the
Sharpe Ratio does not change its value since both the
numerator and the denominator are scaled by the same
amount. A closer examination of the above formula,
however, gives some intuition for this result:

((Exp(P) - r_f I) / C) / Vol(R) = ((Exp(P) - r_f I) / C) / ((1/C) Vol(P))
    = (Exp(P)/C - r_f (F/C + 1)) / ((1/C) Vol(P)).
Apart from the fact that the C cancels out, one can see
that the higher the leverage the higher the expected
return on the one hand, but the higher also the volatility
of the returns, which leaves the Sharpe Ratio
unchanged.
4. Conclusion:
As long as the institution can refinance itself at
approximately the risk-free rate, or its refinancing rate
is indifferent to changes in volatility over the relevant
range, the amount of capital that it allocates to the
business will not affect its Sharpe Ratio. This can be
seen by solving the Sharpe Ratio backwards for some (arbitrary) capital allocation C:
(Exp(R) - r_f) / Vol(R) = (Exp(P_net)/C - r_f) / Vol(P/C)
    = (Exp(P)/C - r_f (F + C)/C) / ((1/C) Vol(P))
    = (Exp(P)/C - r_f I/C) / ((1/C) Vol(P))
    = (Exp(P) - r_f I) / Vol(P).


Of course, this whole relationship changes as soon
as the marginal cost of funding becomes a function of
the credit quality of the institution. In that case, the
costs of funding become an increasing function of the
volatility of the profits (or returns) and, as a consequence, the Sharpe Ratio drops.
It is for this reason that the absolute level of capital in banks is held at some multiple of the volatility of
the earnings, since this ensures that the cost of funding
at the margin remains independent of day-to-day
changes in the risk profile of the firm.
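The algebra above can also be checked numerically. The sketch below, using invented figures, computes the return-based Sharpe Ratio for several arbitrary capital allocations C and confirms that it always equals the profit-based ratio (Exp(P) - r_f I)/Vol(P).

```python
# Numerical check of the scale-independence result, with assumed figures:
# the Sharpe Ratio on allocated capital does not depend on the allocation C.
expected_profit = 80.0     # Exp(P), before funding costs
vol_profit = 60.0          # Vol(P)
investment = 1_000.0       # I = F + C
risk_free = 0.05           # r_f

profit_based = (expected_profit - risk_free * investment) / vol_profit
print(f"profit-based Sharpe Ratio: {profit_based:.3f}")

for capital in (50.0, 200.0, 500.0):
    financing = investment - capital                       # F
    exp_net = expected_profit - risk_free * financing      # Exp(P_net)
    exp_return = exp_net / capital                         # Exp(R)
    vol_return = vol_profit / capital                      # Vol(R) = Vol(P)/C
    sharpe = (exp_return - risk_free) / vol_return
    print(f"C = {capital:5.0f}: return-based Sharpe Ratio = {sharpe:.3f}")
```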


ENDNOTES

The authors are grateful to Marcel Rohner of Swiss Bank Corporation for his
contribution to the development of this paper and for providing the appendix.
1. This is borne out by the experience of the recent precommitment
pilot study and by the value at risk returns provided by members of the
Derivatives Policy Group in the United States to the Securities and
Exchange Commission.

2. Strictly, we should denote our risk term as E(σ_{t+1})_t, that is, the expected value at time t of the standard deviation of earnings at time t+1. For ease of notation, however, we adopt the term σ_{t+1} for the rest of this paper.


The views expressed in this article are those of the authors and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.



Capital from an Insurance Company
Perspective
Robert E. Lewis

This morning, I would like to give a few practical comments on capital adequacy from an insurance company
perspective. In doing so, I will present two views on capital
adequacy and capital allocation in the insurance industry.
The first view is the regulatory perspective, that is, the
motivations behind regulatory capital requirements in the
insurance industry, the structure of those requirements,
and the relationship between regulatory capital amounts
and the actual risks facing insurance companies. The second
view is an insurance company perspective, in particular, the
approach taken by the American International Group (AIG)
to determine adequate capital allocations for our various
businesses and for the firm overall.

REGULATORY PERSPECTIVE
The regulatory perspective on capital adequacy was well
summarized, in June 1996, by B.K. Atchinson, president of
the National Association of Insurance Commissioners
(NAIC):
The most important duty of insurance commissioners is to help maintain the financial stability of
the insurance industry—that is, to guard against
insolvencies.... Among the greatest weapons against
insolvency are the risk-based capital requirements.

Robert E. Lewis is chief credit officer at American International Group, Inc.

In other words, the NAIC recognizes the important role
that capital can play in preventing insolvencies and has
implemented a set of risk-based capital requirements
intended to address this concern.
Without going into the details of the calculations,
the NAIC’s risk-based capital requirements are intended to
capture several forms of risk facing insurance companies.
For life/health companies, these risks include:
• asset risk: the risk of default or a decline in the market
value of assets;
• insurance risk: the risk that claims exceed expectations;
• interest risk: the risk of loss from changes in interest
rates; and
• business risk: various risks arising from business
operations, including guarantee fund assessments for
the eventuality that one insurance company fails and
others have to stand by with capital to assume some of
those losses.
For property/casualty companies, the risks covered by the
capital calculations are different, because the business is
quite different. In brief, the risk-based capital calculations
are intended to cover:
• asset risk: the risk of default or a decline in the market
value of assets;
• credit risk: the risk of loss from unrecoverable reinsurance and other receivables;
• underwriting risk: the risk of loss from pricing and
reserving inadequacies; and
• off-balance-sheet risk: the risk of loss from factors
such as contingencies or high business growth rates.
While the regulatory capital requirements are
intended to cover a wide range of the risks facing insurance
companies, the rules have a number of shortcomings. From
a technical perspective, the calculations impose overly
harsh capital requirements along several dimensions. For
one, the calculations do not include covariance adjustments
within risk groups, so the benefits of diversification of risks
are not fully recognized. Further, the requirements impose
undue penalties on affiliated investments, ceded reinsurance, and adequate reserving, as well as on affiliated foreign
insurers. The NAIC’s risk-based capital rules also have a
number of shortcomings from a practical or operational
perspective. In particular, the requirements are applied
only to insurance firms in the United States; there is no
international acceptance of these requirements and, therefore, no level playing field with regard to capital regulation. Even within the United States, not all states apply the
NAIC guidelines. Finally, since the requirements do not
cover the full range of risks facing insurance firms, supervisors typically expect insurers to maintain multiples of the
minimum risk-based capital requirement.
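To see why the absence of a covariance adjustment matters, the sketch below compares a straight sum of risk charges with a square-root aggregation that gives credit for diversification. The charges are invented and the calculation is only a schematic illustration, not the actual NAIC risk-based capital formula.

```python
# Purely illustrative: compare adding risk charges straight up with a
# square-root ("covariance") aggregation that recognises diversification.
# The charges are invented; this is not the NAIC risk-based capital formula.
import math

risk_charges = {"asset": 120.0, "insurance": 200.0, "interest": 60.0, "business": 40.0}

simple_sum = sum(risk_charges.values())
diversified = math.sqrt(sum(charge ** 2 for charge in risk_charges.values()))

print(f"simple sum of charges:          {simple_sum:.0f}")
print(f"square-root aggregation:        {diversified:.0f}")
print(f"implied diversification credit: {simple_sum - diversified:.0f}")
```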
Further, in practice, the requirements have not
proven to provide either a good predictor of future insolvency or a consistent rating of relative financial strength
among insurers. History has shown that only a small percentage of insolvent insurers failed the risk-based capital test
prior to their insolvency. Conversely, of those insurers that
fail the risk-based capital test, only a small percentage
actually become insolvent. Thus, the risk-based capital rules
provide a very noisy indicator of the actual financial strength
of U.S. insurance companies. On the plus side, however, the
rules have permitted supervisors to take prompt regulatory
steps against insurers without court action.

INSURANCE COMPANY PERSPECTIVE
A number of factors are influencing insurers’ views concerning capital adequacy in the current insurance industry
environment. Overall, a shortage of capital is not a problem for most insurers operating today; indeed, in the view
of many, there is overcapacity in the industry. However,
current conditions in the insurance industry may not
prevail in the future. Overcapacity has intensified competition in the market for insurance products, driving a
loosening in underwriting standards. While combined
ratios—a measure of an insurer’s overall underwriting
profitability—are improving, this improvement largely
reflects a lack of “catastrophes” and the resulting surge of
claims, rather than strong underwriting practices. In
many cases, loss reserves are not increasing commensurate
with premium growth and profitability is being driven
by attractive financial market returns, rather than by core
underwriting activities. These conditions suggest that
capital adequacy may become more of an issue in the
not-too-distant future.
In March 1994, these views were nicely summarized by Alan M. Levin of Standard and Poor’s:
Of course, a strong capital base is an important
determinant, but without good business position
and strategy, management acumen, liquidity and
cash flow, favorable trends in key insurance
markets, dependable reinsurance programs, and
numerous other factors, a strong capital base can
be rendered inadequate in an astonishingly short
time.
As this quotation suggests, there are many sources of unexpected losses that can quickly erode an insurer’s capital
base. These include adverse claims development (as the
result of one or more catastrophes or because general
expectations of claims were understated); unrecognized
concentrations of risk exposures in investments and credit
extensions; unexpected market risk developments that
adversely affect investment returns; and legal risks such as
legislation requiring retroactive coverage of exposures.
Given these considerations and the general environment in the insurance industry today, AIG has developed a
set of basic principles concerning our approach to capital
adequacy and business strategy. To begin, capital must be
sufficient to cover unexpected losses while maintaining
AIG’s credit rating. We feel that the credit rating, the best
credit rating, is absolutely important for an insurance
company to maintain soundness, to maintain credibility

and confidence, and to be able to seek any opportunity that
it finds profitable.
Further, the insurance business must return an
underwriting profit, without consideration of returns from
the investment portfolio, and underwriting decisions must
be kept separate from investment decisions. We find “cash-flow underwriting,” as the term is called in the industry, to
be a disturbing situation where risks are written assuming
discount rates that require an insurer to take financial risk
in order to achieve a profit. In a similar vein, operating
cash flow and liquidity must be adequate to insulate the
corporation from the need to liquidate investments to
cover expected claims and losses. Finally, reserves must be
built consistent with the company’s current underwriting
risk profile.
Our approach to modeling capital adequacy
reflects these basic principles. First, we begin with actuarial
assessments of capital and reserve adequacy for our underwriting business. We then look at balance-sheet capital,
make economic adjustments, and allocate the adjusted
capital to profit centers throughout the corporation. Each
profit center must meet a hurdle rate of return without
benefit of investment income. In this way, we assess capital
adequacy in relation to the basic underwriting business,
without relying on investment returns. To assess investment and other forms of credit risk, we are installing a
credit risk costing model. Finally, we are in the process of
implementing a market risk measurement model to assess
market risks in our insurance-related investments as well as
in our financial services businesses.
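A minimal sketch of the hurdle-rate test described above, with hypothetical profit centers and figures: each center's underwriting result, excluding investment income, is compared with a hurdle return on the capital allocated to it.

```python
# Illustrative hurdle-rate test: underwriting profit (excluding investment
# income) relative to allocated capital, per profit center.  Figures assumed.
hurdle_rate = 0.15

profit_centers = {
    # name: (allocated capital, underwriting profit excl. investment income)
    "commercial lines": (500.0, 90.0),
    "personal lines":   (300.0, 36.0),
    "specialty":        (200.0, 34.0),
}

for name, (capital, underwriting_profit) in profit_centers.items():
    achieved = underwriting_profit / capital
    verdict = "meets" if achieved >= hurdle_rate else "misses"
    print(f"{name}: {achieved:.1%} vs hurdle {hurdle_rate:.0%} -> {verdict}")
```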
One important aspect of risk modeling that
deserves special attention is concentration risk. Diversification of businesses is key to providing stable earnings,
reserving, and capital growth. Ideally, capital modeling
would be done using full covariance matrices to assess the
degree of diversification—or, conversely, the degree of
concentration—in business activities and other risks.

However, designing an approach that makes use of full
covariance matrices is a complex undertaking. Instead, we
plan to emphasize stress testing of correlation risks. In this
way, we can assess the impact from adverse events on insurance, investment, liquidity, and financial services, and get a
picture of the extent of concentration risk across our business activities.
In our firm, we try to stress test through scenarios
that look at the correlation of insurance investments, market risks, and liquidity risks. For example, we might look
at an eight-point Richter Scale earthquake in Tokyo, which
our geologists tell us is a highly positively correlated event
with a sizable earthquake in California. When we look at
that scenario and at what could happen from an insurance
company perspective, we look at the possibility that financial markets are disrupted or closed for a period of time. In
this environment, companies have to react and respond, have
the liquidity to be able to make the investment decisions,
and not have to sell assets into a very disrupted market. At
the same time, we want to have enough capital, and a
strong enough credit rating, to be the corporation that
we are today. These are the types of stress tests that we
undertake, and judgment is a big component of the
whole exercise.
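The kind of joint scenario described here can be sketched very simply; all exposures and shock sizes below are invented, and a real exercise would rest on detailed catastrophe models and, as noted, a large measure of judgment.

```python
# Illustrative joint stress test: one catastrophe scenario hits claims,
# investment values, and liquidity at the same time.  Figures are invented.
capital = 2_000.0
liquid_assets = 800.0
invested_assets = 5_000.0

scenario = {
    "catastrophe claims": 650.0,     # claims from the earthquake itself
    "correlated claims": 250.0,      # claims from the correlated event
    "investment markdown": 0.06,     # assumed fall in investment values
    "liquidity drawn": 400.0,        # cash needed while markets are disrupted
}

claims_loss = scenario["catastrophe claims"] + scenario["correlated claims"]
investment_loss = invested_assets * scenario["investment markdown"]
total_loss = claims_loss + investment_loss

print(f"total scenario loss: {total_loss:.0f} ({total_loss / capital:.0%} of capital)")
print(f"liquidity remaining: {liquid_assets - scenario['liquidity drawn']:.0f}")
```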

CONCLUSION
This paper has provided a brief overview of the factors
affecting capital adequacy in the insurance industry, both
from the perspective of insurance regulators and an individual insurance company. The key idea is that we try to
approach capital adequacy from the perspective of not only
being able to play the game after adverse events have
occurred, but being able to play the game the way we play
it today. While risk modeling is an important part of this
assessment, we use the modeling only with a very high
degree of reason and discussion.
Thank you.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Commentary
Masatoshi Okawa

In my understanding, the issue of internal capital allocation is usually referred to as the question of how to allocate
the overall capital of a financial firm among individual
business areas of the firm, taking into account the amount
of risk incurred by each business area. Internal capital
allocation is used as a basis to decide the pricing of individual transactions or to evaluate the performance of each
business area by the management of a firm. In this sense,
the establishment of risk measurement methodologies is
usually regarded as a prerequisite for successful internal
capital allocation, as seen in the most famous example in
this area, RAROC of Bankers Trust. Another concrete
example of internal capital allocation is outlined in the
paper, “Capital Allocation and Bank Management Based
on the Qualification of Credit Risk,” by Kenji Nishiguchi,
Hiroshi Kawai, and Takanori Sazaki, although that paper
deals only with credit risk.
It seems to me, however, that this session’s first
paper, “Building a Coherent Risk Measurement and
Capital Optimisation Model for Financial Firms,” by
Tim Shepheard-Walwyn and Robert Litterman, tackles
the issue from a different angle, reflecting the fact that risk

Masatoshi Okawa is chief manager of the Planning and Coordination Division
of the Currency Issue Department at the Bank of Japan.

measurement methodologies are still developing rapidly.
The paper emphasizes how to quantify overall optimal capital for financial firms rather than how to allocate overall
capital among individual business areas of the firm. I will
not repeat the contents of the paper in detail. But I would
like to point out some of the most challenging ideas.
First, the paper focuses on a risk pricing methodology called shadow pricing, instead of the more traditional
risk-based capital allocation methodology. The objective is
to maximize the firmwide Sharpe ratio, which represents
the relationship between risk and the returns of a firm. The
authors advocate this approach because risk-based capital
allocation techniques would run the risk of incentivizing
inappropriate behavior by overcharging for the risks that are
yet to be subject to effective measurement. Although such
techniques seek to allocate the total capital to the risks that
have been identified and quantified, the traditional risk-based capital allocation methodology may lead to overcharging for risk because it lacks a comprehensive risk-factor
model. In addition, this risk pricing methodology allegedly
has some technical merits compared with the risk-based
capital allocation methodology. For one, it recognizes covariance effects, and it can be implemented on a sequential basis without significant risk of creating perverse incentives. I am not quite sure whether these technical
aspects could be verified or not, and am interested to hear comments on this point from the session’s participants.
Second, the paper considers a model for an optimal
regulatory capital regime called the base-plus approach,
which could replace the existing fixed-ratio approach,
internal models approach, or even the precommitment
approach. Under the base-plus approach, regulators determine a fixed amount of capital as a base requirement for
the firm. In addition, regulators permit the firm to adopt
the precommitment approach or models-based approach to
cover any increase in the firm’s risk profile during the reference period by the “plus” amount of the regulatory capital. The base-plus approach could be regarded as a
combination of the fixed-ratio approach and the internal
models or precommitment approach; the authors argue
that it has some of the merits of both approaches.
The new base-plus approach is conceptually very
interesting. Practically speaking, however, calculating the
plus amount using the internal models approach or the
precommitment approach could present a problem, especially for regulators. The plus amount is added to the base
amount set by regulators for the purpose of covering any
increase in the firm’s risk profile. This seems redundant,
however, given the multiplication factor of “at least three”
that has been introduced in the market risk capital requirement because of the same concerns about the theoretical
limitations of internal models. Furthermore, the required
amount of capital in the 1988 Basle Capital Accord is

already expected to function as a cushion for unexpected
events of default. I very much look forward to hearing comments about this aspect of the base-plus approach from
supervisors.
The second paper, “Capital from an Insurance
Company Perspective,” by Robert Lewis, explains the regulatory capital regime surrounding insurance firms in the
United States, taking into account the function of capital
at these firms and their differences compared with other
types of financial firms. I would like to make just one
remark here. It is a matter of course that the function of
capital differs between insurance companies and other
types of financial firms; these firms maintain different
portfolio structures and conduct different activities. Problems could arise when the capital of these different types
of financial firms is treated together. I would like to point
out that this February the Basle Committee, IOSCO, and
IAIS each released several papers on the supervision of
financial conglomerates that are the result of the activities
of the Joint Forum—an organization of banking, securities,
and insurance supervisors. These organizations are seeking
comments from the outside world. One of the papers
released this February deals with possible methodologies for
calculating the groupwide capital of financial conglomerates,
including insurance companies. In this area, the paper by
Robert Lewis offers us some important insights.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Formulas or Supervision? Remarks
on the Future of Regulatory Capital
Arturo Estrella

INTRODUCTION
How much capital should a bank have? There was a time,
not too long ago, when the answer to this question seemed
simple, at least to some. Then came floating exchange
rates, oil shocks, global inflation, swaps, inverse floaters,
and other tribulations, and the answer seemed not to be so
simple after all. Regulators responded in kind with more
complicated formulas; they introduced risk weights,
credit-equivalent amounts, potential future exposures,
maturity buckets, and disallowances. How does this story
end, and what is the moral of the story? Were things ever
really simple? Do we have more confidence now in the
accuracy of the capital assessments?
We must bear in mind two important facts in
order to address those questions. First, regulatory capital
has never been a mindless game played with simple
mechanical formulas. Second, firms themselves have used a
changing array of prevailing practices to develop their own
estimates of the level of capital they should have. To be
sure, mistakes have been made, but those mistakes
typically have not resulted from thoughtless reliance on
mechanical formulas.

Arturo Estrella is a senior vice president at the Federal Reserve Bank of New York.

This paper focuses on the relative emphasis that
the structure of regulatory capital places on formulas and
on supervision. The two are not viewed as mutually exclusive, but as elements to which capital policy implicitly
assigns relative weights. We will see that in U.S. regulatory practice, these weights have shifted over time, not
always in the same direction. Furthermore, we will explore
the relationships among regulatory formulas, supervisory
appraisals, and the prevailing business practices in the
banking industry.1 We then ask, what is the appropriate
mix of formulas and supervision?
Why is this an important issue? Consider three
related reasons. First, there is a risk of an increasing disconnect between regulatory capital and what banks and other
financial institutions do. The last few decades have brought
tremendous changes in the nature of financial firms, their
activities, and their approaches to risk management. In
such an environment, past regulatory achievements provide
no guarantee of future success. Second, for much the same
reasons, inertia will almost surely lead regulators down the
wrong path. Steady progress in a given direction is not
enough if the business has a tendency to change course—to
innovate. Third, banks and other institutions are in danger
of being over- or underregulated as the business changes
course. Overregulation can thwart a useful economic role
for financial institutions. Underregulation can undermine faith in the financial sector and dampen its role as a catalyst
for economic progress.
The issues considered here are difficult and fundamental, and they seem resistant to an approach based solely
on straightforward economic analysis. Therefore, this article
makes use of a variety of tools: analytical, historical,
doxographical. We examine the rationale for capital regulation; the history of regulatory capital in the United
States, including current and proposed approaches to
regulatory capital; and the expressed views of practitioners
and theorists.
To preview the results, the principal conclusion is
a reaffirmation of the benefits of informed supervision.
Mechanical formulas may play a role in regulation, but
they are in general incapable of providing a solution to the
question of how much capital a bank should have. At the
margin, scarce public resources are better employed to
enhance supervision than to develop new formulas whose
payoff may be largely illusory.

ASSUMPTIONS OF REGULATORY
CAPITAL POLICY
We examine in this section the basic reasoning that underlies regulatory capital as we observe it in practice. One
conclusion to be drawn from the existing academic literature on this topic is that it is difficult to define—let alone
compute—the right level of capital for an arbitrary institution.2 In the end, the problem is so complicated and the
technical tools so limited that reasonable persons may have
substantial disagreements about the right amount of
capital that a given firm should hold.
Since it is impossible to “prove” that there is any
one right approach to regulatory capital, and since support
for any approach must ultimately rest on some ungrounded
propositions, I attempt here simply to list a series of
assumptions that are likely to be representative of the
thinking behind existing systems of regulatory capital. The
structure provided by this inventory can then serve as a
backdrop for the discussion of specific aspects of the regulatory capital framework.
Consider first some very general assumptions
concerning the rationale for capital. These assumptions are
relatively noncontroversial and are probably widely held.
1. Capital can help protect the safety and soundness
of individual institutions.
2. Capital can help protect the safety and soundness
of the financial system.
3. Supervisors can play a socially useful role by monitoring the capital levels of financial institutions.
Support for assumptions 1 and 2 may be found in
Berger, Herring, and Szegö (1995) and in many of the references contained in that paper. Assumption 3 may be
slightly less straightforward, particularly if an extreme
“free market” point of view is adopted. Nevertheless, it
seems likely that most observers would admit that the
capital decisions of individual institutions may produce
externalities and that an impartial public-sector supervisor
with enforcement powers can play a useful monitoring role.
The following assumptions involve the appropriate
levels of capital more directly, or the means of estimating
such levels. Most of these assumptions are likely to have
been maintained in the framing of capital requirements at
one time or another.
4. There is some level of capital that is consistent
with the interests of the firm and the regulatory
and supervisory objectives of safety and soundness.
Call this the optimum level of capital.
5. The optimum level of capital can be estimated
with reasonable accuracy.
6. A lower bound for the optimum level of capital
can be computed from a mechanical formula.
7. An accurate estimate of the optimum level of capital can be computed from a mechanical formula.
Assumption 4 strikes a balance between the objectives of the firm and those of regulators, which in general
are not identical.3 In assumptions 6 and 7, note that the
term “mechanical formula” does not presuppose that the
formula is simple, but only that it be computable in a
mechanical way, for instance, by means of a computer
program. Explicit regulatory capital requirements in the
United States and in most other industrial countries are
consistent with assumption 6. In fact, the 1988 Basle
Accord (Basle Committee on Banking Supervision 1988)
states that: “It should be stressed that the agreed framework is designed to establish minimum levels of capital for
internationally active banks” (italics in original).

Assumption 7 is more controversial. The Basle
Committee on Banking Supervision (1988), for example, is
careful to point out that its measure is in no way optimal.
The committee emphasizes “that capital adequacy as measured by the present framework, though important, is one
of a number of factors to be taken into account when assessing the strength of banks.” Of course, the fact that one
specific formula is not sufficiently accurate does not rule
out that other, more accurate formulas may exist.
If assumptions 1 through 7 all held, there would be
a high degree of confidence in the well-functioning of regulatory capital. In fact, many of these assumptions are
unlikely to be controversial. Most problematic are those
assumptions that involve some knowledge of the optimum
level of capital, perhaps obtained by means of a mechanical
formula. I refrain at this point from taking a stand on the
assumptions. In a later section, I return to the issue of
whether optimum capital is calculable by means of
mechanical formulas.

U.S. REGULATORY PRACTICE
IN HISTORICAL PERSPECTIVE
A brief preliminary review of the history of regulatory
capital for U.S. banks may provide a helpful perspective on
the issue of the relative importance of formulas and supervision.4 Before 1981, there were no explicit regulatory
requirements for capital ratios. Examiners from the federal
supervisory agencies (the Office of the Comptroller of the
Currency, the Federal Deposit Insurance Corporation, and
the Federal Reserve System) were responsible for formulating
opinions about the capital adequacy of individual firms.
Any formulas used differed from supervisor to supervisor,
and possibly even from bank to bank, and were conceived
as informal guidelines rather than as precise estimates of an
optimum level of capital. In terms of the structure of the
previous section, we could think of the pre-1981 regime as
embodying the first five assumptions, but not the last two.
In 1981, in the aftermath of the thrift crisis and in
the midst of widespread discontent with the actual capital
ratios of many banking institutions, a new three-tier set of
explicit capital requirements was introduced. These
requirements were based on the ratio of primary capital,
which consisted mainly of equity and loan loss reserves, to
total assets. The multi-tier framework was instituted to
facilitate the transition to the new system by larger institutions, whose capital ratios were in general less than desired.
The distinctions among banks of different sizes were eliminated in 1985.5 In this early period of explicit capital
requirements, we could say that regulators and supervisors
became more comfortable with assumption 6 regarding a
lower bound for optimum capital.
Toward the mid-1980s, there was again some
discontent with the levels of capital of U.S. institutions,
and once again the focus tended to be on the larger firms.
At the same time, regulators in other countries, including
the United Kingdom and Japan, had similar concerns
about their own institutions. These countries joined
forces with others in the so-called Group of 10 and issued
in 1988 the Basle Accord (Basle Committee on Banking
Supervision 1988).6
The Accord differed in two significant respects
from the structure of capital requirements then in place in
the United States. First, for the purpose of calculating
required capital, asset values were weighted by a few simple
credit risk factors. Second, the risk-weighted assets were
supplemented by credit-equivalent amounts corresponding
to off-balance-sheet instruments. The 1988 innovations
relied on the same assumptions 1 through 6 as the 1981
requirements. However, the changes reflected two new
developments.
First, large firms were increasingly engaged in
activities that produced risky exposures not captured (or
not fully captured) on the balance sheet. This change
exposed a natural weakness of mechanical formulas: they
typically have to be adjusted when there are unforeseen
changes in the environment. The second development was,
in essence, increased confidence in assumption 6, that
is, in the precision of formulas for calculating a lower
bound for optimum capital. For example, factors corresponding to potential future exposure of off-balance-sheet
instruments were based, albeit loosely, on state-of-the-art
mathematical simulation methods.
The most recent event in our chronology is the
introduction of market risk rules by the Basle Committee
(1996). The 1988 Basle Accord had recognized that there
were various problems that were left unresolved for future
iterations. The 1996 rules took the ground-breaking step
of allowing banks to calculate their exposure to market
risk using their own internal models, subject to some
restrictions on the choices of parameters and features of the
model.7 As in 1988, these changes reflected increased
confidence in assumptions 1 through 6, rather than the
introduction of a new one. In 1996, the optimism centered
on assumption 5—on the accuracy with which optimum
capital could be estimated using state-of-the-art modeling
techniques.
To summarize, history demonstrates that supervision and examination have always played a major role in
regulatory capital in the United States, and that it is only
since 1981 that mechanical formulas have been used
explicitly across the board. Of the assumptions listed in the
previous section, only assumption 7 failed to be invoked
historically. However, through history, there has been a
clear recurrent fascination with the idea of reducing everything to formulas, and it seems unlikely that such an ideal
has been given up at this point. In the next section, I turn
to assumption 7 or, more specifically, to the drawbacks of
mechanical formulas and to their limitations in defining
regulatory capital.

THE PROBLEMS WITH FORMULAS
The landmark Basle Accord of 1988 was issued by the
Basle Committee on Banking Supervision under the
chairmanship of W.P. Cooke. The Accord relies heavily on
mechanical formulas, but it is clear from the document
that it by no means constitutes an unqualified endorsement
of formulas. In fact, a few years earlier, Cooke (1981)
had stated bluntly that “There is no objective basis for
ex-cathedra statements about levels of capital. There can be
no certainty, no dogma about capital adequacy.” This section
is an attempt to understand the limitations of mechanical
formulas.
One could easily conceive of mechanical formulas
playing a useful role in banking if the business were completely determined by formal laws that were clearly stated
and strictly implemented. In the words of legal philosopher H.L.A. Hart (1994), "Everything could be known,
and for everything, since it could be known, something
could be done and specified in advance by rule. This would
be a world fit for ‘mechanical’ jurisprudence.” However,
the reality of banking is quite different: the business has
important informal determinants and conventions that
have evolved over the course of several centuries and that
continue to evolve.
Banking has developed in most countries as a
market solution to a common array of business problems.
Furthermore, not only is the institution of banking an
evolving response to economic conditions, but evolving
economic conditions are in turn profoundly affected by the
institution of banking. These mutual influences are so
important that it would be impossible, in the context of a
mature banking sector, to identify one as logically or
chronologically prior to the other.8
Fundamentally, banks and other financial firms are
social institutions. They have emerged not by external
design, but as sets of rules that rest on a social context of
common activity. These rules are not limited to formal
laws, like banking statutes and regulations, but also
include conventions that are predicated on the agreement
of the parties involved and on the existence of formal and
informal criteria that may be used to determine whether
the rules are being followed.9
Examples of informal rules abound in banking.
There is remarkable consistency in the instruments that
banks employ, even banks of different sizes and geographical
locations. Consider, for example, commercial loans. There
is some variation in the terms of these loans, such as
maturity and reference interest rates, but the choices are
typically conventional and essentially “menu-driven.”
Furthermore, even the criteria for loan approval are determined by the normal practices of the business. Other
examples of conventional instruments are consumer loans,
mortgages, demand deposits, and time deposits. Closer to
the issue of regulatory capital are conventions with regard
to risk management, such as simulation models for calculating exposures to fluctuations in market prices and, more
generally, value-at-risk models. Consensus on these techniques, while not universal, is widespread.

The business practices of the financial sector, and
in particular the network of informal rules and conventions
on which they are partly based, provide a certain level of
consistency, but they are also dynamic and complex. A
supervisory or regulatory regime that ignores these practices will fail to deal with the economic reasons for the
existence of the financial sector and, if the restrictions are
binding or even relevant, the regime will create economic
distortions and inefficiencies that will make everyone
worse off. Consider in turn the implications of dynamism
and complexity.
There is no question that the financial sector is
dynamic. Commons ([1934] 1990) anticipated later observers in noting that “Working rules are continually changing in the history of an institution.” And North (1990),
drawing on historical observations, contends that “The
stability of institutions in no way gainsays the fact that
they are changing. From conventions, codes of conduct,
and norms of behavior to statute law, and common law,
and contracts between individuals, institutions are evolving and, therefore, are continually altering the choices
available to us.”
How can we rely on static formulas if they have to
be applied to a business that is continually changing?
Obviously, the only way to keep pace is to change the formulas. However, predictability in regulation is helpful,
perhaps essential. What happens if, in an effort to keep up
with the dynamism of banking, inflexible regulatory
regimes have to be modified at an increasing pace? There is
a tradeoff between predictability and dynamism, and there
is a danger that changes are now (and will continue to be)
required with increasing frequency.
Let us turn to the issue of complexity. The very
fact that an activity is based on informal rules brings with
it some degree of complexity. North (1990) contends that:
It is much easier to describe and be precise about
the formal rules that societies devise than to
describe and be precise about the informal ways by
which human beings have structured human
interaction. But although they defy, for the most
part, neat specification and it is extremely difficult
to develop unambiguous tests of their significance,
they are important.

To be sure, one of the reasons for the complexity of
informal rules is that they have not been written down, or
formalized. However, the problem is not simply that they
have not been specified, but rather that they defy specification. Behind the network of routine practices of the
business lurks a system of true inherent complexity.
So, where do we turn? A decision by the Supreme
Court of the United States (1933) may be useful in providing some sense of direction.10 In referring to the Sherman
Anti-Trust Act of 1890, the Court stated that
As a charter of freedom in the public interest, the
act has a generality and adaptability comparable to
that found to be desirable in constitutional provisions. It does not go into detailed definitions
which might either work injury to legitimate
enterprise or through particularization defeat its
purposes by providing loopholes for escape. The
restrictions the act imposes are not mechanical or
artificial.
Abstracting from the specific legal issue facing the Court
on that occasion, the general economic principles are close
in spirit to those that we address here. The suggestions are
clear: strive for generality and adaptability in statute and
regulation, avoid detailed definitions that may be inefficient and circumventable, stay away from the mechanical
or artificial.
Do we want to say, in conclusion, that there is no
role for mechanical formulas in regulatory capital? No, that
would be dogmatic and inflexible. Even if formulas are
problematic as constraints on banks’ decisions, they may
still be useful in some circumstances, for instance, to convey certain kinds of information about the bank or to make
some interbank comparisons. We do not want, however, to
be unreasonably restrained by lingering mechanical formulas for years or decades at a time. It therefore seems advisable to avoid writing detailed mechanical formulas into
statute and possibly even into regulation.

WHAT ELSE IS THERE?
If mechanical formulas hold very little promise of identifying appropriate levels of regulatory capital, what else is
there for regulators to turn to? In announcing the sweeping
changes in financial regulation and supervision that took
place in the United Kingdom in 1997, Sir Andrew Large
(1997) indicated that “I don’t think we should lose sight of
the fact that so much in regulation is not about structure
but about attitude and management: the ‘how’ of regulation; the way it is done.” The implications for regulatory
capital seem clear. It is an important priority of supervisors
to determine whether the appropriate “attitude and management” toward capital prevail in a firm, to focus on the
way things are done. It is less clear that they need to provide the firm with mechanical formulas to estimate the
appropriate level of capital.
Yet mechanical formulas produce tangible results,
whereas attitude and management seem quite fuzzy. If we
were to rely less on formulas, is there any substitute for the
determinacy they seem to provide, or are we inevitably
thrust into an environment in which there are no guideposts and only discretion prevails? This is potentially a
serious difficulty, certainly in practical terms, but especially in view of the arguable importance for authorities to
commit in advance to certain types of behavior in order to
avoid problems of moral hazard and time inconsistency.11
However, in banking, there is a network of informal constraints—as described in the preceding section—that can
provide a solid grounding for the capital decisions of firms
and the informed judgment of supervisors.
These informal constraints or conventions are also
useful in dealing with moral hazard and time consistency
problems. Although formal economic models often imply
that mechanical rules are necessary for those purposes,
Williamson (1983) and North (1990), among others, conclude that conventions are sufficient to achieve “credible
commitments” in real-world situations. A particularly
relevant case is presented by North and Weingast (1989).
They argue that, following the Glorious Revolution in
seventeenth-century England, the Crown and Parliament
agreed to abide by credible commitments that led to new
institutional arrangements. These new institutions, in
turn, made possible the development of modern financial
markets.
The foregoing considerations suggest that, in
designing regulatory capital requirements, it is desirable to
avoid excessive detail in statute and regulation. However,
to determine how much capital a bank should have, detail
is ultimately unavoidable. One solution to this regulatory
dilemma is to ensure both that firms delve into whatever
level of detail is necessary and that supervisors have the
necessary expertise to determine whether the details are
properly handled by the firm. In terms of the initial question of this paper, less weight could be placed on the development of mechanical formulas, and more weight could be
devoted to supervision.
We should note that, in this regard, there is no
immediate cause for alarm. The principal concerns, however,
are not with the present, but with the future evolution of the
system. How do we make further progress, and how do we
avoid allowing the dynamic environment to elude us?
Let us review a couple of recent ideas. First, consider the “pre-commitment approach,” an attempt to do
away with mechanical formulas for the calculation of capital for market risk and to replace them with penalties for
firms whose decisions are proven wrong by experience.12
Under this approach, firms pre-commit a certain amount of
capital for market risk at the beginning of, say, each quarter. This amount may be determined by whatever means
the firm sees fit. At the end of the quarter, the supervisor
compares the firm’s losses arising from market risk, if any,
with the pre-committed amount. If the loss exceeds the
amount, a penalty of some sort is imposed. Kupiec and
O’Brien (1995b) consider a broad range of possible penalties, from monetary fines to supervisory disclosures.
The pre-commitment approach is attractive for
several reasons. First, it provides considerable flexibility in
the determination of capital amounts. Second, it is not
intrusive; it is designed to allow the firm to pursue its
business objectives with few distortionary effects from regulation. Third, it seems to require little knowledge or
effort on the part of the supervisor. With regard to banks’
internal models, Kupiec and O’Brien (1995a) argue that
“It is virtually impossible for a regulator to verify the accuracy of the size of the losses associated with rare tail
events.” They propose instead the easier task of comparing
actual losses with a pre-committed amount.
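To make the mechanics concrete, the following sketch illustrates the quarter-end comparison the approach envisions. The committed amount, the loss figure, and the proportional fine used as a penalty are hypothetical placeholders, since the proposal itself leaves the penalty structure open.

```python
# Illustrative sketch of the pre-commitment comparison described above.
# The proportional fine is a placeholder; Kupiec and O'Brien (1995b)
# discuss a range of penalties, from monetary fines to supervisory
# disclosures.

def precommitment_review(committed_capital: float,
                         quarterly_trading_loss: float,
                         fine_rate: float = 0.1) -> dict:
    """Compare a quarter's market-risk loss with the pre-committed capital.

    A positive quarterly_trading_loss is a loss; gains enter as zero or
    negative values and never trigger a penalty.
    """
    shortfall = max(quarterly_trading_loss - committed_capital, 0.0)
    return {
        "breach": shortfall > 0.0,
        "shortfall": shortfall,
        "penalty": fine_rate * shortfall,  # illustrative penalty rule only
    }

# Example: a firm pre-commits 50 of capital and then loses 65 over the quarter.
print(precommitment_review(committed_capital=50.0, quarterly_trading_loss=65.0))
# -> {'breach': True, 'shortfall': 15.0, 'penalty': 1.5}
```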
Though the approach is theoretically attractive, there are serious problems in its implementation. One central issue is the design of the penalty
structure. The approach circumvents the need for mechanical formulas in the initial determination of capital, but regulators must address the need for a “penalty formula” at the
other end. Should this be a mechanical formula, which
might suffer from the shortcomings described in the previous section? Should there be room for supervisory discretion? Some proponents of the method might be put off by
the introduction of discretion in a method conceived as
objective and nondiscretionary. There are also other, more
mundane issues, such as defining what is meant by “the
firm’s losses arising from market risk.” Thus, the pre-commitment approach is basically attractive, but is not without its share of practical problems.
Another idea from the recent literature is what we
might call the “supervisory approach,” whose rationale is to
focus primarily on the determination of optimum capital
by the firm, monitored by the supervisor, while limiting
reliance on mechanical formulas to a simple, well-defined
role in which they are more likely to be useful.13 Under
this approach, the firm would be accountable in the first
instance for determining its own appropriate level of capital, abiding by sound practices developed in the context of
the business. Firms engaged in trading of complex financial instruments, for example, would need to apply sophisticated mathematical techniques, which they would be
required by supervisors to have at any rate for risk management purposes. Firms that focus on small business lending
would have to apply very different techniques, most likely
emphasizing more traditional credit analysis.
The supervisor would monitor the performance of
the firm in the determination of the appropriate level of
capital. There is substantial potential synergy between the
supervisory review of risk management activities, which is
already an important part of bank examinations, and the
monitoring of regulatory capital in the way described. Furthermore, the attention paid by supervisors to the process,
not just to the final result, provides incentives for firms to
refine their management of risk. In monitoring the determination of capital, the supervisors would also ensure that
the views of the firm are consistent with the public goals of
systemic safety and soundness, and that there is no attempt

to take undue advantage of elements of the financial safety
net, such as deposit insurance. Procedures to enforce compliance through supervisory sanctions would have to be in
place, much as they are now in the United States and other
countries.
Finally, mechanical formulas could be retained in a
relatively modest role as rough indicators of severely inadequate capital. If an institution were to require closure, it is
in the public interest to prevent any losses from having to
be borne ultimately by taxpayers. A formula may be helpful in this regard as a trigger point, much in the same way
that prompt corrective action regulation is implemented
for U.S. banks.
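As an illustration of how such a trigger point might operate, the sketch below classifies a bank by a simple capital-to-assets formula and escalates the supervisory response when the ratio falls below preset thresholds. The thresholds and category labels are hypothetical and are not the actual prompt corrective action cutoffs.

```python
# Schematic trigger-point check in the spirit of prompt corrective action:
# a simple ratio formula flags severely inadequate capital and escalates
# supervisory attention. Thresholds below are illustrative placeholders.

def capital_trigger(capital: float, assets: float,
                    warning_ratio: float = 0.04,
                    closure_ratio: float = 0.02) -> str:
    ratio = capital / assets
    if ratio < closure_ratio:
        return "critically undercapitalized: mandatory supervisory action"
    if ratio < warning_ratio:
        return "undercapitalized: heightened supervisory review"
    return "no trigger: rely on ordinary supervisory judgment"

print(capital_trigger(capital=3.0, assets=100.0))  # undercapitalized
print(capital_trigger(capital=1.5, assets=100.0))  # critically undercapitalized
```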
One important issue in the supervisory approach is
that it places a substantial burden both on firms and supervisors. Firms have to be ready to take the necessary steps to
make an accurate assessment of their need for capital. For
many of them, reliance on mechanical formulas would not
be an option. Supervisors would have to develop and retain
human and other resources that would enable them to
come to grips with the full diversity of methods employed
by firms.
The supervisory approach is in many ways similar
to the system in place in the United States prior to 1981,
which regulators in the end found unsatisfactory. However,
the similarities are only superficial, because a broad array of
new conventions has been introduced in the financial markets since 1981. For instance, in the 1970s, many financial
institutions were caught off guard by sudden bursts of
inflation and sharp rises in interest rates, and the magnitude of the resulting losses was staggering. Today, even the
smallest institutions are aware of interest rate risk and are
required by supervisors to manage it prudently. In general,
firms and regulators are much more cognizant today of risk
and risk management, and this awareness has led to a
whole structure of conventions designed to deal flexibly
with new risks as they are identified.
The approaches to regulatory capital described
above are only two examples of methods that can help
effect a shift from mechanical formulas to supervision in
the context of regulatory capital. As these and other potential ideas are discussed, what criteria can be used to evaluate them? Toward this goal, we conclude with the
following series of questions, which are based on the analysis of this paper.
• Does the idea make sense in principle? Does it address
the shortcomings of the current system and is it based
on sound theoretical analysis?
• What are the practical implications of implementation? What exactly is required on the part of the institution and on the part of supervisors?
• Is it a short-term fix or a long-term solution? Is it
capable of handling new instruments and practices?
• Is it applicable to the institution as a whole? Would
other different—and potentially inconsistent—
approaches have to be developed for other risks or
other parts of the business?

ENDNOTES

1. Although most of the discussion of this paper focuses on banks, the principles delineated also apply to other types of financial institutions that perform similar services. The focus on banks is adopted to make the analysis more concrete, especially since history is one of the main tools employed in the paper. For similar reasons, examples are drawn mostly from the U.S. experience.

2. For example, see Berger, Herring, and Szegö (1995) and Dewatripont and Tirole (1994). Historical approaches to banking crises include Bernanke (1983) and Mishkin (1991), whereas Davis (1992) and Calomiris and Gorton (1991) combine theoretical and historical analysis.

3. The Modigliani-Miller (1958) theorem implies that under certain ideal conditions, the firm would not have a preference for any determinate level of capital. However, see also Berger, Herring, and Szegö (1995), and Miller (1995).

4. See Gaske (1995), Berger, Herring, and Szegö (1995), and Kaufman (1991).

5. Board of Governors of the Federal Reserve System (1985).

6. An account of the process that led to the Basle Accord is found in Bardos (1987-88).

7. The model-based rules are described in detail in Hendricks and Hirtle (1997).

8. An interesting attempt to model these types of mutual influences is found in Caplin and Nalebuff (1997).

9. In this paper, the terms "rules," "formulas," and "models" have very different meanings, as the usage in the text demonstrates. Rules are interpreted quite generally to include conventions and other practices that are generally followed in the course of business but are not formally prescribed, for example, by statute or regulation. Mechanical formulas include mathematical expressions, but more generally any formula that can be constructed, for example, by means of a computer program and therefore that can be computed without human judgment or intervention. Finally, models refers to mathematical techniques applied to a specific problem, say, to the estimation of optimum capital for a given bank. These models may include, among others, value-at-risk models for calculating market risk of trading portfolios.

10. I am grateful to Arturo Estrella, Sr., for this reference.

11. See, for example, Kydland and Prescott (1977).

12. See Kupiec and O'Brien (1995b).

13. Some thoughts on how a regulatory approach could be designed are found in Estrella (1995).

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


REFERENCES

Bardos, Jeffrey. 1987-88. "The Risk-Based Capital Agreement: A Further Step towards Policy Convergence." Federal Reserve Bank of New York QUARTERLY REVIEW 12, no. 4 (winter).
Basle Committee on Banking Supervision. 1988. "International Convergence of Capital Measurement and Capital Standards." Basle: Bank for International Settlements, June.
———. 1996. "Amendment to the Capital Accord to Incorporate Market Risks." Basle: Bank for International Settlements, January.
Berger, Allen N., Richard J. Herring, and Giorgio P. Szegö. 1995. "The Role of Capital in Financial Institutions." JOURNAL OF BANKING AND FINANCE 19: 393-430.
Bernanke, Ben S. 1983. "Non-Monetary Effects of the Financial Crisis in the Propagation of the Great Depression." AMERICAN ECONOMIC REVIEW 73: 257-76.
Board of Governors of the Federal Reserve System. 1985. "Announcements." FEDERAL RESERVE BULLETIN, January: 440-1.
Calomiris, Charles W., and Gary Gorton. 1991. "The Origins of Bank Panics: Models, Facts, and Bank Regulation." In R. Glenn Hubbard, ed., FINANCIAL MARKETS AND FINANCIAL CRISES. Chicago: University of Chicago Press.
Caplin, Andrew, and Barry Nalebuff. 1997. "Competition among Institutions." JOURNAL OF ECONOMIC THEORY 72: 306-42.
Commons, John R. [1934] 1990. INSTITUTIONAL ECONOMICS: ITS PLACE IN POLITICAL ECONOMY. Reprint, New Brunswick, N.J.: Transaction Publishers.
Cooke, W.P. 1981. "Banking Regulation, Profits and Capital Generation." THE BANKER, August.
Davis, E.P. 1992. DEBT, FINANCIAL FRAGILITY, AND SYSTEMIC RISK. Oxford: Clarendon Press.
Dewatripont, Mathias, and Jean Tirole. 1994. THE PRUDENTIAL REGULATION OF BANKS. Cambridge: MIT Press.
Estrella, Arturo. 1995. "A Prolegomenon to Future Capital Requirements." Federal Reserve Bank of New York ECONOMIC POLICY REVIEW 1, no. 2: 1-12.
Gaske, Ellen. 1995. "A History of Bank Capital Requirements." Unpublished paper, Federal Reserve Bank of New York, December.
Hart, H.L.A. 1994. THE CONCEPT OF LAW. 2d ed. Oxford: Clarendon Press.
Hendricks, Darryll, and Beverly Hirtle. 1997. "Bank Capital Requirements for Market Risk: The Internal Models Approach." Federal Reserve Bank of New York ECONOMIC POLICY REVIEW 3, no. 4: 1-12.
Kaufman, George G. 1991. "Capital in Banking: Past, Present and Future." JOURNAL OF FINANCIAL SERVICES RESEARCH 5: 385-402.
Kupiec, Paul, and James O'Brien. 1995a. "Internal Affairs." RISK, May: 43-7.
———. 1995b. "Model Alternative." RISK, June: 37-40.
Kydland, Finn E., and Edward C. Prescott. 1977. "Rules Rather Than Discretion: The Inconsistency of Optimal Plans." JOURNAL OF POLITICAL ECONOMY 85: 473-92.
Large, Sir Andrew. 1997. "Regulation and Reform." Speech delivered to the Society of Merchants Trinity House, Tower Hill, London, May 20.
Miller, Merton H. 1995. "Do the M&M Propositions Apply to Banks?" JOURNAL OF BANKING AND FINANCE 19: 483-9.
Mishkin, Frederic S. 1991. "Asymmetric Information and Financial Crises: A Historical Perspective." In R. Glenn Hubbard, ed., FINANCIAL MARKETS AND FINANCIAL CRISES. Chicago: University of Chicago Press.
Modigliani, Franco, and Merton H. Miller. 1958. "The Cost of Capital, Corporation Finance, and the Theory of Investment." AMERICAN ECONOMIC REVIEW 48: 261-97.
North, Douglass C. 1990. INSTITUTIONS, INSTITUTIONAL CHANGE AND ECONOMIC PERFORMANCE. Cambridge: Cambridge University Press.
North, Douglass C., and Barry R. Weingast. 1989. "Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England." JOURNAL OF ECONOMIC HISTORY 49: 803-32.
U.S. Supreme Court. 1933. APPALACHIAN COALS V. UNITED STATES, 288 U.S. 344.
Williamson, Oliver E. 1983. "Credible Commitments." AMERICAN ECONOMIC REVIEW 73: 519-40.


Deposit Insurance, Bank Incentives, and
the Design of Regulatory Policy
Paul H. Kupiec and James M. O’Brien

1. INTRODUCTION
A large literature studies bank regulatory policies intended
to control moral hazard problems associated with deposit
insurance and optimal regulatory design. Much of the
analysis has focused on uniform bank capital requirements,
risk-based capital requirements, risk-based or fairly priced
insurance premium rates, narrow banking, and, more
recently, incentive-compatible designs.
All formal analyses employ highly simplified
treatments of an individual bank or banking system. This
study is concerned with the appropriateness of modeling
simplifications used to characterize banks’ investment
opportunity sets and access to equity financing. While the
characteristics of assumed investment opportunities differ
among studies, all are highly simplified relative to the
actual opportunities available to banks. In some studies,
banks are assumed to invest only in 0 net present value
(NPV) market-traded securities while in other studies only
in risky nontraded loans. In models where banks make
risky nontraded loans, loan opportunity set characteristics
are highly specialized. Frequently, a bank is limited to

Paul H. Kupiec is a principal economist at the Freddie Mac Corporation. James M.
O’Brien is a senior economist in the Division of Research and Statistics at the
Board of Governors of the Federal Reserve System.

choosing between a high- and a low-risk asset. In both
these cases and those in which loan opportunity sets are
expanded, a well-defined relationship between risk and
NPV is assumed. Further, in many analyses, banks are
assumed to have unrestricted access to equity capital at the
risk-free rate on a risk-adjusted basis.
In the full version of this paper (Kupiec and
O’Brien [1998]), we show that these modeling specializations have been important for policy results frequently
cited in the literature. The shorter version presented here
is limited to showing that substantial difficulties in optimal regulatory design arise when greater complexity in
bank investment opportunity sets and financing alternatives is recognized.
For the analysis, banks are assumed to maximize
net shareholder value, which derives from the banks' "economic value-added" and the net value to shareholders of
deposit insurance. Economic value-added comes from positive net present value loan investments and from providing
liquidity or transaction services associated with deposit
issuance. A bank’s economic value-added is measured net of
dead-weight costs associated with outside equity financing
(equity issuance costs) and the present value of potential
distress costs. The latter costs are incurred when outside
capital is raised by the bank against its franchise value to
cover a current account deficit. In contrast to previous
models of bank regulation where loan investments are
assumed to satisfy a well-defined investment opportunity locus—such as first- or second-order stochastic dominance—different loan NPV and risk configurations are
permitted here.1 Even if a bank’s optimal loan choices can
be limited to a subset of all its loan investment opportunities, this set will depend on the regulatory regime. Also, in
determining its risk exposure, the bank has access to risk-free and risky 0 NPV market-traded securities.
Because deposit insurance can create moral hazard
incentives, share value maximization need not coincide
with maximization of the bank’s economic value-added. In
our model, the objective of regulatory policy is to minimize reductions in banks’ economic value-added due to
moral hazard influences on bank investment and financing
decisions. Besides the determinants of economic value-added described above (that directly enter shareholder net
values), optimal regulatory design must also factor in the
dead-weight costs incurred in closing an insolvent bank.
If, as assumed in previous models of bank regulation, the bank has unrestricted access to equity capital at
the risk-free rate on a risk-adjusted basis, the moral hazard problem associated with deposit insurance in these
models can be resolved by requiring full collateralization
of insured deposits with the risk-free asset and setting the
insurance premium at zero. Since equity financing is available at the risk-free rate on a risk-adjusted basis, the bank
will want to undertake all positive NPV loan investment
opportunities and deposit issuance will be governed by the
profitability of providing deposit transaction services.
The optimal design of regulatory policy becomes
much more complicated when it is recognized that outside
equity financing can be costly, that is, all-in issuance costs
may significantly exceed the risk-free rate on a risk-adjusted basis. When equity issuance is costly, regulatory
schemes that require the bank to raise a lot of equity capital, including narrow banking, can impose significant
dead-weight costs on bank shareholders and discourage
positive NPV investments. Under costly equity issuance,
an optimal bank capital requirement that most efficiently
resolves moral hazard incentives will be tailored to each
bank's investment (risk and NPV) opportunities and its
access to capital financing. The optimal bank-specific capital requirements and insurance premium rates, however,
are difficult to achieve because regulators must have information on banks’ investment choices or opportunity sets
at the level of a bank insider.
Incentive-compatible regulatory mechanisms have
been proposed as a way of solving the information problems
that regulators face in designing an optimal policy.2 However, when bank investment opportunities are more complex
than typically assumed, we find substantial limitations on
the incentive-correcting or sorting potential of incentive-compatible proposals. Our results suggest that incentive
approaches that are able to achieve optimal bank-specific
results, even if possible, require extensive information
gathering. More likely, feasible regulatory alternatives will
be much less information-intensive and, even when usefully
employing incentives, will be uneven in their effectiveness
and decidedly suboptimal on an individual bank basis.

2. BANK SHAREHOLDER VALUE
AND ECONOMIC VALUE
2.1. MODEL ASSUMPTIONS
Each bank makes investment and financing decisions in
the initial period to maximize the net present value of
shareholders’ claims on bank cash flows realized in the next
period. On the asset side, a bank may invest in one-period
risky nontraded loans, risky 0 NPV market-traded securities, and a 0 NPV risk-free security.
Individual loans are discrete investments and a
bank’s loan investment opportunity set is defined to be the
set of all possible combinations of the discrete lending
opportunities it faces. Each loan has an associated investment requirement, NPV, and set of risk characteristics.
While financial market equilibrium (absence of arbitrage)
requires that the expected returns on traded assets be linearly
related to their priced risk components, this condition
places no restrictions on the relationship between the NPV
and risk of nontraded assets. Assets with positive NPV are
expected to return to bank shareholders more than their

market equilibrium required rates of return. For such
assets, there are no equilibrium conditions that impose a
relationship among NPV, investment size, or risk. Thus, a
bank’s loan investment opportunity set could be characterized by a wide variety of investment size, loan portfolio
NPV, and risk combinations. Any subset of investment
portfolios that a bank may choose to restrict itself to will
depend on the regulatory policy regime.
The bank finances its investments in loans (L), risky securities (M), and the risk-free asset (T) with a combination of internal equity capital, external equity, and deposits. End-of-period deposit values (B) are government insured against default. Internal equity (W) represents the contribution of the initial shareholders. Outside equity financing (E) generates issuance costs of d_0 ≥ 0 per dollar of equity issued. While deposit accounts provide transactions or liquidity services, the model treats these accounts as equivalent to one-period discount bonds. Deposits earn the one-period risk-free return of r, less a charge for liquidity services that earns the bank a profit of π per dollar of deposits. Both these profits and the bank's deposit insurance premium payments, denoted by φBe^{-r}, are paid at the beginning of the period. The bank has a maximum deposit base of B̄ (par value).
In the second period, the bank’s cash flows from its
loans, risky securities, and risk-free bonds are used to pay
off depositors. Shareholders receive any excess cash flows
and obtain rights to a fixed franchise value, J.3 If cash flow is insufficient to meet depositors' claims, the bank may issue equity against its franchise value. However, equity issued against J to finance end-of-period cash flow shortfalls generates "distress issuance costs" of d_1 ≥ 0 per dollar of equity issuance. As with equity sales in nondistress periods, distress issuance costs would include both transaction fees and costs for certifying the value of the issue. The deposit insurer assumes control of the bank if it cannot cover its existing deposit liabilities.

2.2. BANK SHAREHOLDER VALUE
Under these assumptions, the net present value of initial
shareholders’ claims is given by

(1)
$$
S = j_{L0} - I + e^{-r}J + \pi B e^{-r} + P_I - \phi B e^{-r}
    - \frac{d_1}{1-d_1}\,(P_D - P_I) - \frac{d_0}{1-d_0}\,E ,
$$
where
$$
E = \max\left\{\, I + T + M + \phi B e^{-r} - (1+\pi) B e^{-r} - W ,\; 0 \,\right\}
$$
and
$$
I = \sum_{j \in L} I_j , \qquad j_{L0} = \sum_{j \in L} j_{Lj0} .
$$

The components of shareholder value follow: j_{L0} is the value of the loan portfolio, I its required initial investment, and j_{L0} - I the loan portfolio's net present value; e^{-r}J is the present value of the bank's end-of-period franchise value; πBe^{-r} are the profits from deposit-generated fee income; P_I - φBe^{-r} is the net value of deposit insurance to bank shareholders. P_I has a value equivalent to that of a European put option written on the bank's total asset portfolio with a strike price of j_d = B - Te^{r} - (1 - d_1)J. This strike price is the cash flow value below which the bank's shareholders default on the bank's deposit liabilities. For j_d ≤ 0, P_I ≡ 0.
The second line in equation 1 captures the costs associated with outside equity issuance. E covers any financing gap that remains after deposits, inside equity, and deposit profits net of the insurance premium, (π - φ)Be^{-r}, are exhausted by the bank's investments. Each dollar of external finance generates d_0 in issuance costs, requiring that 1/(1 - d_0) dollars of outside equity be raised. The term [d_1/(1 - d_1)](P_D - P_I) is the initial value of the contingent liability generated by end-of-period distress costs. The distress costs are proportional to the difference between two simple put options, P_D and P_I, where both options are defined on the underlying value of the bank's asset portfolio. P_D is the value of a put option with a strike price of j_{ds} = B - Te^{r}, the threshold value below which the bank must raise outside equity to avoid default. The strike prices of these options define the range of cash-flow realizations, (j_d, j_{ds}), within which shareholders bear financial distress costs.4 Distress costs reduce shareholder value since P_D ≥ P_I.5
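The following sketch shows one way equation 1 could be evaluated numerically. It assumes, purely for illustration, that the end-of-period cash flow of the loan-plus-risky-securities portfolio is lognormal under the pricing (risk-adjusted) measure, so that P_I and P_D can be valued as discounted expectations over simulated cash flows; the distributional assumption and all parameter values are hypothetical rather than taken from the paper.

```python
# Minimal numerical sketch of equation 1 under an assumed lognormal
# pricing-measure distribution for the risky portfolio's cash flow.
import numpy as np

def put_value(strike, mean_cf, sigma, r, n=200_000, seed=0):
    """Risk-adjusted present value of a European put on the portfolio cash flow."""
    if strike <= 0:
        return 0.0                        # the text sets P_I = 0 when j_d <= 0
    rng = np.random.default_rng(seed)
    mu = np.log(mean_cf) - 0.5 * sigma ** 2   # lognormal with the stated mean
    cash_flow = np.exp(mu + sigma * rng.standard_normal(n))
    return np.exp(-r) * np.mean(np.maximum(strike - cash_flow, 0.0))

def share_value(jL0, I, T, M, W, B, J, pi, phi, d0, d1, r, sigma):
    """Net value of initial shareholders' claims, in the spirit of equation 1."""
    mean_cf = (jL0 + M) * np.exp(r)          # illustrative pricing-measure mean
    j_d  = B - T * np.exp(r) - (1 - d1) * J  # default strike
    j_ds = B - T * np.exp(r)                 # distress strike
    P_I = put_value(j_d,  mean_cf, sigma, r)
    P_D = put_value(j_ds, mean_cf, sigma, r)
    E = max(I + T + M + phi * B * np.exp(-r) - (1 + pi) * B * np.exp(-r) - W, 0.0)
    return (jL0 - I + np.exp(-r) * J + pi * B * np.exp(-r) + P_I - phi * B * np.exp(-r)
            - d1 / (1 - d1) * (P_D - P_I) - d0 / (1 - d0) * E)

# Hypothetical inputs, loosely in the spirit of the numerical section below.
print(share_value(jL0=230.0, I=225.0, T=0.0, M=0.0, W=27.0, B=200.0, J=40.0,
                  pi=0.025, phi=0.002, d0=0.2, d1=0.4, r=0.05, sigma=0.25))
```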

2.3. SHAREHOLDER VALUE MAXIMIZATION
The shareholder value function, S, must be maximized using integer programming methods. This is necessitated by the assumption that loans are discrete nontradeable investments with individualized risk and return characteristics.
Let j_{L_k 0} represent the risk-adjusted present value of loan portfolio k that can be formed from the bank's loan investment opportunity set. The loan portfolio has a required investment of I_k and an NPV equal to j_{L_k 0} - I_k. The bank shareholder maximization problem can be written as

(2)
$$
\max S = e^{-r}J + \max_{\forall k}\Bigl\{\, (j_{L_k 0} - I_k) + \max K(L_j)\big|_{L_j = L_k} \,\Bigr\} ,
$$
where
$$
K(L_j) = P_I + (\pi - \phi) B e^{-r} - \frac{d_0}{1-d_0}\,E - \frac{d_1}{1-d_1}\,(P_D - P_I)
$$
and K(L_j)|_{L_j = L_k} indicates that the function K is to be evaluated conditional on the loan portfolio L_k. The conditional value of K is maximized over T, M, B, W, and the risk characteristics of the market-traded securities portfolio, with E satisfying the financing constraint in equation 2, B ∈ (0, B̄), and I, T, M, W, E ≥ 0. Thus, for each possible loan portfolio (including the 0 investment loan portfolio), the bank maximizes the portfolio's associated K value by making the appropriate investment choices for risk-free and risky securities, outside equity issuance, and inside capital (or dividend payout policy). The bank then chooses the loan portfolio for which the sum of loan portfolio NPV and associated maximum K value is the greatest.
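A brute-force version of this two-stage maximization is easy to write down. In the sketch below (illustrative only), the inner maximization of K is supplied by the caller as a function of the chosen loan subset, since its form depends on the put valuations and financing constraints described above; the loan figures in the usage line are hypothetical.

```python
# Schematic version of the maximization in equation 2: enumerate every
# discrete loan portfolio (including the empty, zero-investment portfolio)
# and add the conditional maximum of K, supplied here as a callable.
import math
from itertools import combinations

def maximize_share_value(loans, max_K_given_portfolio, J, r):
    """loans: list of (jL0_value, required_investment) pairs.
    max_K_given_portfolio: subset of loan indices -> max K for that portfolio."""
    best = None
    for k in range(len(loans) + 1):
        for subset in combinations(range(len(loans)), k):
            npv = sum(loans[i][0] - loans[i][1] for i in subset)
            total = math.exp(-r) * J + npv + max_K_given_portfolio(subset)
            if best is None or total > best[0]:
                best = (total, subset)
    return best  # (maximized S, chosen loan portfolio)

# Usage with hypothetical loan values and a placeholder K function.
loans = [(80.4, 75.0), (52.6, 50.0), (110.5, 100.0)]
print(maximize_share_value(loans, lambda s: -0.5 * len(s), J=40.0, r=0.05))
```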

2.4. BANK ECONOMIC VALUE-ADDED
For analyzing the efficiency of alternative regulatory environments, we define a measure of the bank’s economic
value-added. As a simplification, the bank is assumed to
capture entirely the economic value-added from its investment and deposit activities. That is, the bank’s profits from
deposit taking mirror the depositor welfare gains generated
by transaction accounts, and the bank’s asset portfolio NPV
reflects the entire NPV produced by its investment activities.
This avoids modeling the production functions, utility
functions, and bargaining positions of the bank’s counterparties when constructing a measure of social welfare. The
bank’s franchise value, J, is assumed to reflect entirely economic value-added (the future NPV of lending opportunities, providing deposit liquidity services, with no net
insurance value).6


Netted against these economic value-added components are the bank’s dead-weight equity issuance costs
and distress costs, and the dead-weight costs borne by the
insurer if the bank is closed. Under insolvency, the insurer
pays off depositors with the realized cash flow from the
bank’s investments, the sale of the bank’s franchise, and a
drawdown on its cash reserve from accumulated premium
payments. Dead-weight closure costs arise if, in disposing
of the bank’s franchise, the insurer loses a fraction of the
initial value J . While the magnitude of such losses is
unclear in practice, the simplest approach is to assume this
fraction is the same as that lost by shareholders in a distress
situation, d_1.7 Under this assumption, the insurer's dead-weight closure costs are d_1 J. Aggregating across all of the bank's claimants the realized end-of-period payments (payouts), taking their risk-adjusted present expected values, and subtracting initial investment outlays yield the bank's economic value-added. Where closure costs are equal to d_1 J, the bank's economic value-added (EVA) is

(3)
$$
EVA = j_{L0} - I + \pi B e^{-r} + J e^{-r} - \frac{d_0}{1-d_0}\,E - \frac{d_1}{1-d_1}\,(P_D - P_I) .
$$
Because of the influence of deposit insurance on
bank investment and financing choices, bank policies
that maximize the net value of shareholder equity may
not maximize the banks’ EVAs. In the present analysis, an
optimal regulatory policy consists of an insurance pricing
rule and supplemental regulations, that is, capital
requirements, that minimize the distortive incentive
effects of deposit insurance, taking into account the direct
effects on EVAs of the regulatory policy as well. The
insurer or regulator is constrained to providing deposit
insurance to an ongoing bank without subsidy, which is
always possible in our model (see below).

3. OPTIMAL REGULATORY POLICY WHEN
EQUITY ISSUANCE IS COSTLESS
First, consider the possibility of fairly priced insurance
when the bank has perfect access to equity capital financing, that is, there are no equity issuance costs (d_0 = 0). The insurance is said to be fairly priced if the insurance premium is equal to the value of deposit insurance to bank shareholders, that is, φBe^{-r} = P_I.8 Under a fair-pricing
condition, no equity issuance costs, and access to a risk-free
0 NPV investment, net shareholder value is maximized by
choosing all positive NPV loans and accepting all insured
deposits. Any funding requirements in excess of the bank’s
internal equity capital and deposits can be costlessly met
with outside equity financing. If there are potential distress
costs ( d 1 > 0 ), these can be costlessly eliminated by investing in the risk-free asset, as well as investing in positive
NPV loans.
Further, when an intermediary can guarantee its
deposit obligations by collateralizing them with risk-free
bonds, if outside equity issuance is costless, the potential
for costless collateralization creates the possibility of
implementing fairly priced deposit insurance without any
governmental subsidy to the banking system. This possibility is formalized in Proposition 1.

Proposition 1 If (i) initial equity issuance is costless (d_0 = 0) and (ii) the bank has unrestricted access to risk-free bond investments, then a bank is indifferent between: (a) fairly priced deposit insurance and (b) a requirement that all insured deposits be collateralized with risk-free bond investments with an insurance premium equal to 0.

Proposition 1 establishes the possibility of an efficient, fairly priced deposit insurance system in the form of a "narrow bank" deposit collateralization requirement. This proposition does not depend on banks earning deposit rents and would hold in a competitive equilibrium. Proposition 1 does require, however, that banks can issue equity at competitive risk-adjusted rates with no costs or discounts generated, for example, by informational problems or tax laws.

4. REGULATORY POLICY WHEN EQUITY ISSUANCE IS COSTLY
When it is costly to issue outside equity (the likely situation), a narrow banking requirement can generate significant social costs in the form of equity issuance costs and the opportunity cost of positive NPV investments that go unfunded. However, absent a narrow bank policy, pricing the deposit insurance guarantee is fraught with difficulties. One difficulty is that the bank regulators are unlikely to have sufficient expertise to value the bank's (nontraded) assets or assess their risk.9 Even if regulators have sufficient expertise, the bank has an incentive to disguise high-risk investments or substitute into high-risk assets after its insurance premium has been set. Without resorting to highly intrusive monitoring, the moral hazard problem necessitates capital or other regulations that reduce risk-taking incentives arising from the deposit guarantee. The analysis here assumes that the insurer has the expertise to value individual assets banks might acquire and examines capital-based regulatory policies intended to solve the moral hazard problem.
To facilitate the analysis, we consider a hypothetical banking system comprised of four independent banks. Each bank faces a unique loan investment opportunity set consisting of three possible loans (seven possible loan combinations). For simplicity, individual loans have log-normal end-of-period payoffs that include a single systematic (priced) risk source and an idiosyncratic risk.10 Banks' individual loan opportunity sets are described in Table 1. Bank A's opportunity set includes loans with relatively modest overall risk. Bank B can invest in two loans with relatively high risk, one of which has substantial NPV. Bank C's opportunities also include relatively high-risk loans; its most profitable loan has negative systematic risk. Bank D's investment opportunity set includes a large, low-risk, high-NPV loan and a large, high-risk, 0 NPV loan. All four banks can invest in a risk-free bond and a risky 0 NPV security whose characteristics are described in the last row of Table 1. For simplicity, all heterogeneity across banks is assumed to arise from differences in loan investment opportunities. The four banks are subject to identical equity issuance costs (d_0 = .2), distress costs (d_1 = .4), franchise values (J = 40), maximum internal equity capital (W = 27), maximum deposits (B̄ = 200), and a common transaction service profit rate (π = 0.025). The risk-free rate is arbitrarily set at .05.

Table 1
ALTERNATIVE LOAN OPPORTUNITY SETS

Loan      Loan      Expected    Systematic        Nonsystematic   Total
Number    Amount    Return^a    (Priced) Risk^b   Risk^c          Risk^d    NPV^e

Loan Opportunity Set A
1         75        .20         .08               .20             .22       5.44
2         50        .10         .00               .45             .45       2.56
3         100       .25         .10               .30             .32       10.52

Loan Opportunity Set B
1         75        .30         .10               .50             .51       12.14
2         140       .12         .05               .20             .21       2.83
3         50        .20         .10               .60             .61       2.56

Loan Opportunity Set C
1         75        .20         .10               .45             .46       3.85
2         100       .03         -.10              .35             .36       8.33
3         50        .21         .12               .45             .47       2.04

Loan Opportunity Set D
1         190       .21         .05               .10             .11       21.30
2         190       .75         .70               .90             1.14      0.00
3         50        .21         .12               .45             .47       2.04

Risky Market-Traded Security
-         -         .35         .30               .30             .42       .00

a One-period expected return to loan i, defined by µ_i + .5σ_i². See endnote 10.
b One-period systematic risk (standard deviation) for loan i, s_0i.
c One-period nonsystematic (idiosyncratic) risk for loan i, s_1i.
d Total risk for loan i (one-period return standard deviation), σ_i = (s_0i² + s_1i²)^½.
e NPV is calculated using the expression in endnote 10, where the market price of systematic risk is λ = 1 and r = .05 is the risk-free rate.
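The total-risk column of Table 1 can be reproduced directly from the definitions in footnotes b through d; a quick check on a few rows, with the figures hard-coded from the table:

```python
# Footnote d of Table 1 defines total risk as sigma_i = (s_0i^2 + s_1i^2)^(1/2).
rows = {  # loan: (systematic s_0, nonsystematic s_1, reported total)
    "A1": (0.08, 0.20, 0.22),
    "B1": (0.10, 0.50, 0.51),
    "C2": (-0.10, 0.35, 0.36),
    "D2": (0.70, 0.90, 1.14),
}
for name, (s0, s1, reported) in rows.items():
    total = (s0 ** 2 + s1 ** 2) ** 0.5
    print(f"{name}: computed {total:.2f}, reported {reported:.2f}")
```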

4.1. THE FIRST-BEST SOLUTION
To establish an optimal benchmark, assume that the insurer
has sufficient knowledge to set a fair insurance premium and
that the bank must irrevocably commit to its asset portfolio
and capital structure before the insurer sets its premium.
Table 2 reports each bank’s optimization results.11 Columns
2-6 report optimal loan, securities, and equity financing
choices. Net share value is defined in equation 1 above. Eco-

Table 2
FAIRLY PRICED INSURANCE WITH
Bank Optimizing Results
Bank
A
B
C
D

Loans
1, 2, 3
1, 2
1, 2, 3
1

Risky Security
0.00
0.00
0.00
0.00

Economic value-added is the bank's net social value and is defined assuming that insurer closure costs mirror bank distress costs (equation 3). Net insurance value, P_I − φBe^(-r), is
zero by construction. For the risk capital ratio, capital is
defined as the book value of loans and securities minus
deposits, and risk assets are defined as the book value of loans
plus risky securities. Under the closure cost assumption, if
deposit insurance is fairly priced, S = EVA , and maximizing net share value also maximizes economic value-added.
By this measure, fairly priced deposit insurance is a first-best
policy with no need for capital requirements.
Implementing a fairly priced deposit insurance
system is problematic when a bank’s decisions cannot be
completely and continuously monitored. Although each
bank’s insurance premium may be calibrated to fair value
by assuming a bank operating policy that achieves maximum economic value-added, given this premium and an
ability to alter its asset mix, a bank may face incentives to
substitute into a more risky asset portfolio. In the example
in Table 2, banks B and D could increase their insurance
values, and net shareholder values, if they could substitute
into higher risk assets at the given insurance rates (reported
in footnote a). The insurance would become underpriced
and, while shareholder values would increase, economic
value-added would be reduced.
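The asset-substitution incentive can be seen in a deliberately stripped-down calculation: treat the insurer's one-period liability as a put option on the bank's end-of-period asset value and hold the premium fixed while asset risk rises. The sketch below is only that illustration; it omits the distress costs, franchise value, and closure rule of the model above, and its balance-sheet figures are hypothetical rather than taken from Table 2.

```python
import math, random

def insurance_put_value(assets, deposits, sigma, r=0.05, n=100_000, seed=0):
    """Monte Carlo value of the insurer's one-period liability, max(D - V_1, 0),
    with a lognormal end-of-period asset value V_1 (risk-neutral drift r).
    A stripped-down illustration only, not the paper's full model."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        v1 = assets * math.exp((r - 0.5 * sigma ** 2) + sigma * z)
        total += max(deposits - v1, 0.0)
    return math.exp(-r) * total / n

deposits, assets = 200.0, 227.0          # hypothetical book values
for sigma in (0.10, 0.20, 0.40):          # a riskier asset mix raises the insurer's liability
    print(sigma, round(insurance_put_value(assets, deposits, sigma), 3))
```

With the premium held at its originally fair level, the rise in the put value as asset risk increases is exactly the transfer that banks B and D could capture by substituting into higher risk assets.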

4.2. OPTIMAL POLICY WITH
IMPERFECT MONITORING
Absent complete information on each bank’s investments,
deposit insurance can still be fairly priced and moral hazard
incentives removed by imposing a narrow banking requirement that all deposits be collateralized with the risk-free
asset. While feasible, the narrow banking solution can
entail large reductions in banks’ EVAs due to equity issuance costs and foregone positive NPV loan opportunities
for which financing costs are now too high (see Kupiec and
O’Brien [1998] for numerical illustration). However, if the
regulator has complete information about each bank’s
investment opportunities and can enforce a minimum capital requirement, moral hazard incentives can be eliminated and fair insurance premiums can be set at a smaller
social cost than is incurred under narrow banking. To set optimal minimum capital requirements, the regulator must find the capital requirement and insurance premium rate combination that maximizes each bank's economic value-added, subject to a fair-pricing condition and an incentive-compatibility condition that the
bank have no incentive to engage in asset substitution at
its required capital and insurance premium settings.12 The
optimal capital requirement will vary with each bank’s
investment opportunity set.
The optimal bank-specific capital requirements
are calculated for each bank in Table 3. The second and
third columns in the table present bank-specific minimum
capital requirements and fair-premium rates for the four
banks. The fourth column shows the maximum economic
value-added for each bank and, for comparison, the fifth
column shows the first-best economic value-added
reported in Table 2. The minimum capital requirements
remove the moral hazard incentives for banks B and D that
would exist at first-best capital requirements and premium
rates. The costs of imposing the capital requirements are a small reduction in bank B's EVA due to a reduced loan
portfolio NPV and equity issuance costs incurred by bank D. In general, the incentive-compatibility constraints required when the regulator cannot perfectly monitor bank actions will result in an optimal policy that is not a first-best solution.

Table 3
OPTIMAL BANK-SPECIFIC CAPITAL REQUIREMENTS AND FAIR INSURANCE RATES WITHOUT PERFECT MONITORING

Bank   Required Risk-Capital Ratio   Premium Rate   Economic Value-Added   First-Best Economic Value-Added^a   Net Insurance Value
A^b    ≥ .154                        .002           59.33                  59.33                               0.00
B      ≥ .247                        .005           55.30                  55.35                               0.00
C^c    ≥ .154                        .009           53.58                  53.58                               0.00
D      ≥ .351                        .000           55.36                  64.08                               0.00
Total                                               223.57                 232.34

a Figures taken from Table 2.
b Bank A's optimal strategy for any minimum required risk-capital ratio between 0 and .154.
c Bank C's optimal strategy for any minimum required risk-capital ratio between .045 and .154.
Notice that the optimal bank-specific capital
requirements are not ‘‘risk-based’’ capital requirements as
defined under current bank capital regulations but are
designed to solve the moral hazard problems. The insurance premium rates, being fair premiums, are risk-based.
This is a more efficient solution than “risk-based” capital
requirements with a fixed deposit insurance rate. Also note
that the costs associated with a minimum risk-asset capital
standard do not include a loss in the value of ‘‘liquidity services.’’ Because the capital requirement applies to risk
assets defined to exclude an identifiable risk-free asset
(such as Treasury bills), there is no incentive for banks to
reduce deposit levels. This result contrasts with studies
that suggest an important cost of more stringent capital
requirements is a reduction in the provision of socially
valuable liquidity services (for example, John, John, and
Senbet [1991]; Campbell, Chan, and Marino [1992]; and
Giammarino, Lewis, and Sappington [1993]).

4.3. IMPERFECT MONITORING AND
INCOMPLETE INFORMATION
The design of an optimal bank-specific capital policy
imposes the unrealistic requirement that the regulator
know each bank’s investment opportunity set. A growing
literature has proposed the use of incentive-compatible contracting mechanisms that can simultaneously identify
the investment opportunity sets specific to individual
banks and control moral hazard behavior even when the
regulator is not fully informed a priori. Among others,
Kim and Santomero (1988a); John, John, and Senbet
(1991); Chan, Greenbaum, and Thakor (1992); Campbell,
Chan, and Marino (1992); Giammarino, Lewis, and Sappington (1993); and John, Saunders, and Senbet (1995)
provide formal analyses of incentive-compatible policies.
In the spirit of this approach, assume as before that
there are four banks each with a loan investment opportunity set that is one of the types presented in Table 1, either
A, B, C, or D. While an individual bank knows its type,
the regulator only knows the characteristics of the alternative investment opportunity sets but does not know the
opportunity set associated with each individual bank.
Because it cannot distinguish bank types, the regulator
cannot directly set the bank-specific capital requirements
and insurance premiums that achieve the results in Table 3,
that is, that solve the policy problem when the regulator
has complete information on investment opportunity sets.
The incentive-compatible literature suggests, however,
that the risk types can be identified by an appropriate set
of contracts.
Consider, as in Chan, Greenbaum, and Thakor
(1992), an ex ante incentive-compatible policy based on a
menu of contracts whose terms consist of combinations of
a required minimum capital ratio and insurance premium
rate, assuming the regulator can enforce a minimum capital requirement. As in the preceding case, the optimal
capital and insurance premium combinations will satisfy the constraint that each individual bank will not "asset-substitute" given its minimum capital requirement and
insurance premium. In addition, the menu offered to
banks must be such that each bank not prefer a capital
requirement–insurance premium rate combination intended
for another bank type.
In general, the capital requirement–premium rate
combinations that satisfy these incentive-compatibility
constraints will differ from those that solve the policy
problem where there is imperfect monitoring but complete
information. For example, if banks were offered a menu of
contract terms taken from columns 1 and 2 of Table 3—
the capital requirements and premium rate combinations
that maximize firm values under the full information
assumption—bank optimizing choices would not identify
their types. Given such a menu, all banks would claim to
have a type A investment opportunity set.
If bank A is excluded from the table, the fair-pricing contract terms for the remaining banks in Table 3
show a monotonic inverse relationship between the contract’s capital requirement and its insurance premium. The
inverse relationship is consistent with the ordering of
terms proposed by Chan, Greenbaum, and Thakor (1992)
as an incentive-compatible policy when the regulator is not
completely informed of banks’ specific investment opportunity sets. This inverse relationship will not, however,
produce a correct sorting of banks in the table, as type B and D banks would represent themselves as type C banks.
They would choose higher risk investments and produce
lower EVAs than the full information results presented in
Table 3, and their insurance would be underpriced.
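The sorting failure described here is easy to check mechanically: given each bank type's value under each contract on the menu, truthful sorting requires that every type's best choice be the contract intended for it. The sketch below runs that check on purely hypothetical payoff numbers (they are not taken from Table 3 or Table 4), constructed to reproduce the failure just described, in which types B and D prefer the contract meant for type C.

```python
# Net share value to each bank type under each contract on a hypothetical menu.
# The numbers are illustrative only; they are chosen so that types B and D
# do best by taking the contract intended for type C.
payoffs = {
    "A": {"A": 59.3, "B": 58.0, "C": 57.5, "D": 55.0},
    "B": {"A": 54.0, "B": 55.3, "C": 56.1, "D": 53.0},
    "C": {"A": 52.0, "B": 52.5, "C": 53.6, "D": 51.0},
    "D": {"A": 60.0, "B": 61.0, "C": 63.5, "D": 55.4},
}

def chosen_contracts(payoffs):
    """Each type picks the contract that maximizes its own net share value."""
    return {bank: max(menu, key=menu.get) for bank, menu in payoffs.items()}

choices = chosen_contracts(payoffs)
print(choices)                                               # {'A': 'A', 'B': 'C', 'C': 'C', 'D': 'C'}
print(all(bank == pick for bank, pick in choices.items()))   # False: the menu is not incentive-compatible
```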

Table 4
OPTIMAL INCENTIVE-COMPATIBLE CAPITAL REQUIREMENTS AND FAIR INSURANCE RATES WITH INCOMPLETE INFORMATION

Bank   Required Bank-Capital Ratio^a   Premium Rate   Economic Value-Added   First-Best Economic Value-Added^b   Net Insurance Value
A      ≥ .351                          0              52.17                  59.33                               0.00
B      ≥ .351                          0              54.16                  55.35                               0.00
C      ≥ .351                          0              49.59                  53.58                               0.00
D      ≥ .351                          0              55.36                  64.08                               0.00
Total                                                 211.28                 232.34

a Banks A, C, and D will optimally operate at the minimum required capital ratio. Bank B will optimally choose to operate at a capital ratio of .423.
b Figures taken from Table 2.

The optimal solution to the incentive-compatible
contracting problem is given in Table 4. The optimal
incentive-compatible contract imposes a uniform minimum risk-asset capital requirement and a uniform insurance premium on all banks. Bank EVAs also are mostly
smaller than those presented in Table 3. This occurs
because greater limits on regulators’ information impose
additional incentive-compatibility conditions on the regulator that constrain further the set of feasible policies from
which to choose. Given the bank investment opportunities
(and equity issuance costs) in this example, the incentive-compatible policy even fails to distinguish banks. However, because it allows for some deposit-financed lending,
the optimal policy is still more efficient than the narrow
banking solution.
Contracts like those in Chan, Greenbaum, and
Thakor (1992) fail to generate a separating equilibrium in this
example because our investment opportunity set and financing structures are more complex than those that underlie
their model. By assumption, all bank loan investment
opportunity sets in Chan, Greenbaum, and Thakor can be
ranked according to first-order or second-order stochastic
dominance.13 In our model, the set of possible asset portfolios represents investment opportunities whose combinations of risk, NPV, and financing requirements do not fit any
well-defined risk ordering. In particular, the opportunity
sets cannot be uniquely ordered by a one-dimensional risk
measure such as first- or second-order stochastic dominance.
This last example illustrates that, with less stylized investment opportunity sets, designing incentive-compatible policies that achieve a high degree of sorting
among bank types can impose formidable information
requirements on regulators. In some respects, the information assumptions made here are still very strong in
that regulators are unlikely to have a clear idea of the constellation of investment opportunities available to banks.
In the present model, if regulators had to consider a wider

set of investment opportunities for each bank than the
four assumed, an optimal policy would produce an economic value-added for each bank somewhere between
that shown in Table 4 and the results under a narrow
banking approach.

5. CONCLUSIONS
The preceding analysis has shown the difficulties inherent
in designing an optimal bank regulatory policy where
commonly used modeling stylizations of banks' investment and financing choices are relaxed. When banks can
issue equity at the risk-adjusted risk-free rate, a common
modeling stylization, collateralization of deposits with a
risk-free asset costlessly resolves moral hazard inefficiencies
and insurance pricing issues addressed in the literature.
With costly equity issuance, this narrow banking approach
can impose large dead-weight financing costs and reduce
positive NPV investments funded by the banking system.
When equity issuance is costly, the most effective and efficient capital requirements are bank-specific, as they
depend on individual banks’ investment opportunities and
financing alternatives. Directly implementing optimal
bank-specific capital requirements, however, requires
detailed regulatory information on the investment opportunities and financing alternatives of individual banks.
Incentive-compatible designs have been proposed
in the theoretical literature as a way of minimizing regulatory intrusiveness and information requirements in obtaining optimal bank-specific results. However, in relaxing
previous modeling stylizations, we found that heavy information requirements also inhibited incentive-compatible
designs in obtaining optimal bank-specific results. Despite
the potential benefits of incentive approaches over rigid regulations, feasible approaches are still likely to be substantially constrained by limited regulatory information and by
“level playing field” considerations and thus are likely to be
decidedly suboptimal at the individual bank level.


ENDNOTES

The authors are grateful to Greg Duffee and Mark Fisher for useful discussions
and to Pat White, Mark Flannery, and Erik Sirri for helpful comments.
1. For example, see Gennotte and Pyle (1991); Chan, Greenbaum, and
Thakor (1992); and Giammarino, Lewis, and Sappington (1993) for use
of stochastic dominance assumptions.
2. For example, see Chan, Greenbaum, and Thakor (1992);
Giammarino, Lewis, and Sappington (1993); Kim and Santomero
(1988); John, John, and Senbet (1991); Campbell, Chan, and Marino
(1992); and John, Saunders, and Senbet (1995).
3. Franchise value may arise from continuing access to positive NPV
loan opportunities, the ability to offer transaction accounts at a profit,
and the net value of deposit insurance in future periods.
4. The term [d1/(1 - d1)]P_D is a hypothetical value of the distress costs the bank would face if it could not default on its deposit obligations. Because bank shareholders will not have to bear distress costs for portfolio value realizations less than j_d, the default threshold, the term [d1/(1 - d1)]P_I credits shareholders with the default portion of the distress costs.
5. See Kupiec and O’Brien (1998) for a more complete development of
the option components of the bank’s net shareholder value.
6. This assumption is consistent with the regulatory policies analyzed
below.
7. See James (1991) for a description and estimates of bank closure
costs.

8. The fairly priced premium will equal the insurer's liability value if the insurer's costs in liquidating the bank are the same as the distress costs to shareholders (see above).

9. Flannery (1991) emphasizes this point and considers the consequences for insurance pricing and bank capital policy, although his analysis does not incorporate moral hazard behavior.
10. In terms of earlier notation (see equation 1), the second-period cash flow from loan i is j_i1 = I_i0 exp(µ_i + s_0i z_0 + s_1i z_1i), where I_i0 is the bank's initial required outlay for loan i, µ_i the expected return, s_0i z_0 the systematic risk component, s_1i z_1i the idiosyncratic component, and the z terms are independent standard normal variates. The initial value of loan i is j_i0 = I_i0 exp[µ_i + .5(s_0i^2 + s_1i^2) - λs_0i - r], where λ is the market price of risk and r the one-period risk-free rate. For positive NPV loans, j_i0 > I_i0.

11. The shareholder equity maximization problem is solved numerically
using integer programming as described in equation 2 above. As the sum
of lognormal variables is not lognormal and does not have a closed form
density function, all option values are calculated using numerical
techniques. A lognormal distribution approximation to the sum of lognormal variables is used (see Levy [1992] for details). Option values from
the use of the lognormal approximating distribution were similar to
values calculated using Duan and Simonato’s (1995) empirical
martingale simulation technique.
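For readers unfamiliar with the Levy (1992) approximation used here, the idea is to replace the sum of (possibly correlated) lognormal payoffs with a single lognormal whose first two moments match. The sketch below is a generic version of that moment matching, not the authors' code; the example parameters are derived from loans 1 and 2 of opportunity set B in Table 1 (with µ_i backed out from the expected returns via footnote a).

```python
import math

def levy_lognormal_match(means, cov):
    """Moment-matched lognormal approximation to S = sum_i exp(X_i), where the
    X_i are jointly normal with mean vector `means` and covariance matrix `cov`.
    Returns (mu, sigma) such that exp(N(mu, sigma^2)) matches E[S] and E[S^2]."""
    n = len(means)
    m1 = sum(math.exp(means[i] + 0.5 * cov[i][i]) for i in range(n))          # E[S]
    m2 = sum(                                                                  # E[S^2]
        math.exp(means[i] + means[j] + 0.5 * (cov[i][i] + cov[j][j] + 2.0 * cov[i][j]))
        for i in range(n) for j in range(n)
    )
    sigma2 = math.log(m2 / m1 ** 2)
    return math.log(m1) - 0.5 * sigma2, math.sqrt(sigma2)

# Loans 1 and 2 of opportunity set B: payoff_i = I_i * exp(mu_i + s0_i*z0 + s1_i*z1_i),
# so X_i = ln(I_i) + mu_i + ..., Var(X_i) = s0_i^2 + s1_i^2, and Cov(X_1, X_2) = s0_1*s0_2.
means = [math.log(75) + 0.17, math.log(140) + 0.09875]   # mu_i = expected return - .5 * total variance
cov = [[0.26, 0.005],
       [0.005, 0.0425]]
print(levy_lognormal_match(means, cov))
```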
12. See Kupiec and O'Brien (1998) for the formal incentive-compatibility conditions.
13. This ordering is also assumed in Giammarino, Lewis, and
Sappington (1993); John, John, and Senbet (1991); and John, Saunders,
and Senbet (1995).



REFERENCES

Campbell, Tim, Yuk-Shee Chan, and Anthony Marino. 1992. ‘‘An IncentiveBased Theory of Bank Regulation.’’ JOURNAL OF FINANCIAL
INTERMEDIATION 2: 255-76.
Chan, Yuk-Shee, Stuart I. Greenbaum, and Anjan Thakor. 1992. ‘‘Is Fairly
Priced Deposit Insurance Possible?’’ JOURNAL OF FINANCE 47, no. 1:
227-45.
Craine, Roger. 1995. ‘‘Fairly Priced Deposit Insurance and Bank Charter
Policy.’’ JOURNAL OF FINANCE 50, no. 5: 1735-46.
Duan, Jin-Chuan, and Jean-Guy Simonato. 1995. ‘‘Empirical Martingale
Simulation for Asset Prices.’’ Manuscript, McGill University.
Flannery, Mark J. 1991. ‘‘Pricing Deposit Insurance When the Insurer
Measures Bank Risk with Error.’’ JOURNAL OF BANKING AND
FINANCE 15, nos. 4-5: 975-98.
Gennotte, Gerard, and David Pyle. 1991. ‘‘Capital Controls and Bank
Risk.’’ JOURNAL OF BANKING AND FINANCE 15, nos. 4-5: 805-24.
Giammarino, R., T. Lewis, and D. Sappington. 1993. ‘‘An Incentive
Approach to Banking Regulation.’’ JOURNAL OF FINANCE 48, no. 4:
1523-42.

James, Christopher. 1991. ‘‘The Losses Realized in Bank Failures.’’
JOURNAL OF FINANCE 46, no. 4: 1223-42.

John, Kose, Teresa John, and Lemma Senbet. 1991. ‘‘Risk-Shifting Incentives
of Depository Institutions: A New Perspective on Federal Deposit
Insurance Reform.’’ JOURNAL OF BANKING AND FINANCE 15,
nos. 4-5: 895-915.
John, Kose, Anthony Saunders, and Lemma W. Senbet. 1995. ‘‘A Theory of
Bank Regulation and Management Compensation.’’ New York
University Salomon Center Working Paper S-95-1.
Kim, Daesik, and Anthony Santomero. 1988a. ‘‘Deposit Insurance Under
Asymmetric and Imperfect Information.’’ Manuscript, University of
Pennsylvania, March.
———. 1988b. ‘‘Risk in Banking and Capital Regulation.’’ JOURNAL
OF FINANCE 43, no. 5: 1219-33.
Kupiec, Paul H., and James M. O’Brien. 1998. “Deposit Insurance, Bank
Incentives, and the Design of Regulatory Policy.” FEDS working
paper no. 1998-10, revised May 1998.
Levy, Edmond. 1992. ‘‘Pricing European Average Rate Currency Options.’’
JOURNAL OF INTERNATIONAL MONEY AND FINANCE 11: 474-91.


The views expressed in this article are those of the authors and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Issues in Financial Institution Capital
in Emerging Market Economies
Allen B. Frankel

I. INTRODUCTORY REMARKS
For the past twenty years, Asia has been regarded as an
economic success story. The recent economic turmoil in the
region, however, has prompted a reevaluation of the long-term sustainability of this dynamic economic performance.
Undoubtedly, lessons will be drawn from the Asian experience—lessons that will inform future decisions at various
levels to move financial liberalization forward while
providing for prudential concerns.
Many thoughtful analysts surveying the Asian
experience have focused on the inadequacies and inefficiencies
in the banking systems of Asian nations as particularly significant elements in precipitating the current crisis. The banking
problems of these nations, however, only bring into specific
relief deep, complex, and more pervasive problems in the
institutional arrangements of the affected nations—problems
that are, in fact, common to many emerging market countries
throughout the world. These issues have particular relevance
to the consideration of future financial liberalization and the
broadening of the international financial community via
multilateral trade negotiations and international understandings among national financial supervisors.

II. AN OUTLINE OF THE POLICY PROBLEM
This paper sets out to discuss a policy problem involving
the integration of emerging market banking systems into
international financial markets. Below is an outline of the
policy problem:
Policy problem. Promote financial market liberalization in
emerging economies through the exploitation of international financial linkages, including interbank transactions.
Constraint. Satisfy system-wide prudential policy needs.
Premise. As long as entry of foreign banks is restricted,
domestic banks have superior capacity to gather information on domestic economic actors and discriminate among
those actors.
Instruments that can be applied to the solution of the policy
problem:
• robust institutional arrangements;1
• design of macroeconomic policy instruments;
• binding international agreements such as the Financial
Services Agreement of the General Agreement on
Trade in Services; and
• multilateral understandings such as the Basle Core
Principles.

Allen B. Frankel is the chief of the International Banking Section of the Division
of International Finance at the Board of Governors of the Federal Reserve System.


This paper is organized as follows. In Section III, we
explicate the policy problem through an overview discussion
of the Asian experience. In Section IV, we consider institutional deficiencies in emerging market countries and their
negative implications for prudent banking. We also extend
the discussion to include the impact of institutional issues
on the credit relationships between domestic and foreign
banks. In Section V, we discuss the relevance of trade agreements in financial services and agreements among supervisors to the process of integrating emerging markets into
international banking markets. Drawing on the insights of
incomplete contracting theory, we consider how the
involvement of emerging market countries might influence
both the form and the coverage of multilateral agreements
covering prudential standards.

III. PUTTING THE POLICY PROBLEM
IN CONTEXT
Our statement of the general policy problem has been
informed by the Asian experience. Many observers link the
poor macroeconomic performance of Asian emerging market
countries in the recent past to an inconsistency in economic
policy: although these countries encouraged domestic
institutions to be actively involved in international financial markets, they did not at the same time aggressively
pursue domestic institutional reform. The Asian economies
made efforts to implement financial market liberalization
by removing restrictions, for example, on the character and
magnitude of funding activities that Asian banks could
conduct in international interbank markets.
Over the last decade, Asian governments sought to
support the preeminent role of their banking systems as
sources of finance for investment projects by removing
interest rate controls and by initiating other liberalizing
measures designed to avoid disintermediation. The policies
were successful in that they permitted seemingly well-capitalized banks to assume investment responsibility for
large amounts of domestic and foreign savings. They were
unsuccessful, ex post, in that many of the projects financed
did not generate sufficient revenues to meet contractual
loan payments. The eventual result has been the current
crisis in Asia, which has generated both domestic economic problems in affected Asian countries and concern about the
impact on banks in other countries. This outcome can be
associated with the deficient state of institutional arrangements in emerging Asian economies. In particular, these
economies commonly lack complete legal arrangements as
well as well-developed mechanisms to produce good
accounting information. In turn, this produces a lack of
transparency in corporate financial affairs, distorted incentive structures for economic agents, and a lack of certainty
as to the locus of corporate control. In the next section, we
will look more closely at how the deficient institutional
arrangements create difficulties for the making of prudent
credit decisions and can in fact generate prudential concerns.

IV. INSTITUTIONAL FAILINGS:
ASIAN EXAMPLES
A. ACCOUNTING, MONITORING,
AND MACROECONOMIC POLICY
As noted above, many emerging market countries lack strong
accounting mechanisms and traditions. Numerous factors
may contribute to these weaknesses. First, many countries lack
legal requirements for the independent auditing of financial
statements. Second, the limited penetration of sophisticated
accounting systems in many emerging market countries
reduces the quality and timeliness of financial data. In addition, the lack of liquid, well-developed asset markets in these
nations often limits the validity of financial information; companies must use internal estimates of values rather than objective, transparent, market-based observations. Finally, the
values of corporate transparency, avoidance of conflicts of
interest, and safeguarding of corporate assets are not fully
ingrained in some of the emerging markets.
Furthermore, macroeconomic policies in emerging
markets often make prudent banking more difficult, as
foreseeable consequences of those policies cannot be
managed readily by emerging market banks with underdeveloped risk management systems. As the Asian experience demonstrates, the choice of exchange rate regime can
introduce instability into the domestic banking markets.
To some extent, this occurred in Mexico in 1994 and 1995.
The most striking example of this phenomenon, however,
took place in Chile in the late 1970s. Diaz-Alejandro

(1983) reported that real lending rates in Chile averaged
more than 75 percent per annum over the period 1975-82.
It was not surprising to him that, in these circumstances,
Chilean banks borrowed heavily in foreign currency and
lent the proceeds to domestic customers. Finally, he noted
that Chilean banks had not taken into account the substitution of exchange rate risk exposure for credit risk. This
failure, in turn, contributed to Chilean bank failures.

Stylized Example
To provide additional insight into the impact of these
institutional issues on banking markets, we will present a
stylized example based on the Asian experience. To begin
with, let us consider the economic and financial circumstances present in Asia.
The slowdown in economic growth in Asia has
been reflected in sharp deteriorations in the cash holdings
(liquidity) of Asian companies.2 It seems apparent, based
on available data, that in some cases companies chose to
respond to these cash squeezes by taking on currency risk
by arranging and drawing down hard currency
credit facilities from domestic and foreign banks. Companies found these hard currency credit facilities attractive
because they permitted the companies to reduce the rate
of drawdown of their cash reserves by lowering interest
payments. This reduction in cash outflow came at the cost of assuming the financial risk of a depreciation of the domestic currency.
The chart shows data for two countries: Korea and
Thailand. For both of these countries, there was a strong
association of the buildup of the foreign borrowing of
domestic banks with the increase in domestic credit extensions to the domestic private sector. The greater steepness
of the foreign bank borrowing line in both cases is consistent with a story that external bank borrowing was undertaken to accommodate the corporate sector’s heightened
interest in conserving scarce cash liquidity. We would
caution, however, that available data do not permit us to
verify the presumed behavior that banks passed on the
currency risk to liquidity-constrained corporate borrowers.
Now let us develop our stylized example. Consider the following circumstances regarding the exchange

rate environment. The monetary authorities are seeking
to avoid currency depreciation through open-market
purchases. To hold a position in the domestic currency,
market participants require compensation in the form of
higher domestic interest rates for the anticipated future
depreciation.3
Let us assume that the borrowing behavior
described above is sufficiently prevalent among the borrowers of a particular bank that a depreciation of the exchange
rate would significantly increase the credit exposures of that
bank. Furthermore, assume that the bank’s credit decisions
are based on a single criterion, the borrower’s credit history,
reflecting the only information available to the bank.4 This
assumption is based on the notion that domestic corporates
either do not prepare financial data or that the data they prepare are of highly uncertain value and therefore cannot be
relied on as a basis for credit decisions. An important characteristic of the data on the borrowing history is that they have
only been accumulated during an observation period in
which the sensitivities of borrowers’ financial situations to
exchange rate movements could not be observed.

Chart
Funding of Domestic Bank Credit in International Interbank Markets
Index, year-end 1992 = 100; quarterly observations, 1992:Q4 through 1997:Q2. Series plotted: Thai FBB, Korean FBB, Thai DBC, Korean DBC.

Sources: Various editions of The Maturity, Sectoral and Nationality Distribution of International Bank Lending, Bank for International Settlements Monetary and Economic Department; and International Financial Statistics, International Monetary Fund, January 1998.

Notes: FBB = foreign currency borrowing by domestic banks. DBC = domestic bank credit to private sector Korean borrowers and to nonfinancial private Thai firms.

Now, let us consider the consequences to the bank
of a depreciation of the currency. The immediate impact of
the depreciation would be to increase exposures to all borrowers who have been loaned funds in the foreign currency.
The bank, given the information available, would have no
way to assess the consequences of the depreciation on the
ability of any particular borrower to pay. The depreciation
may have impaired the ability of some borrowers, who
are unhedged, to honor their debt obligations. Other
borrowers, however, who are effectively hedged (for example, those who have receivables denominated in the foreign
currency) would not be adversely affected by the depreciation. Let us assume that a borrower of each type approaches
the bank to restructure its loan. Each borrower requests
that its loan payment, expressed in domestic currency, be
no more than required before depreciation. Given the
absence of firm financial information, the bank has no
objective basis to differentiate between the two applicants
for debt relief. Thus, the bank faces the possible result that
if it gives concessional terms to both applicants, it has
advantaged one unnecessarily. If it does not permit the concessional terms, it forces one borrower into bankruptcy
unnecessarily. And if it takes the third course, offering one applicant a concession and denying the other, it faces the possibility of ending up in the worst of all possible worlds, in which one borrower defaults and the other is unnecessarily provided with reduced payments.
Now let us bring the domestic bank/foreign bank
relationship into our analysis. Assume that the only information disclosed to a foreign bank is data on the level of
nonperforming loans of the domestic bank. In the period
before depreciation, differences in the levels of nonperforming loans among banks would not have systematically
revealed the hedging or nonhedging of these banks’ borrowers. The analysis above suggests that if the concentration of a bank’s loans is to unhedged borrowers, then
depreciation might result in a large increase in nonperforming loans. But this characteristic of the bank’s loan
portfolio would be revealed only ex post. This demonstrates how the foreign bank, in the absence of effective
monitoring mechanisms, would not have the wherewithal
to alter the way it processes information and makes credit decisions. Such mechanisms could inform decision making
at foreign banks, and could therefore lead to avoidance of
exposures to those domestic banks likely to be affected by a
currency depreciation.
This example suggests the potential value of
forward-looking information. Such information can be
produced by stress tests.5 The tests are particularly useful
when historical experience has been limited by successful
government efforts to fix asset prices (most prominently,
fixed exchange or interest rates). The information drawn
from these tests can support alternate projections of cash
flows, so bank managements can take various contingencies
into account for purposes of capital planning.6
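A minimal version of such a stress test simply revalues a borrower's hard currency debt service under an assumed depreciation and nets out any hard currency receivables, distinguishing the hedged and unhedged borrowers discussed above. The sketch below uses hypothetical figures and a single shock; it is meant only to show the kind of forward-looking calculation the text has in mind.

```python
def stressed_coverage(fx_debt_service, local_cash_flow, fx_receivables, depreciation):
    """Debt-service coverage after the domestic currency loses `depreciation`
    (for example, 0.30 = 30 percent) of its value against the hard currency.
    All inputs are stated in domestic currency at the pre-shock exchange rate;
    fx_receivables proxies a natural hedge. A hypothetical toy calculation."""
    shock = 1.0 / (1.0 - depreciation)                      # domestic cost of one hard-currency unit rises
    cost = fx_debt_service * shock                          # hard currency debt service becomes dearer
    resources = local_cash_flow + fx_receivables * shock    # hard currency receivables appreciate as well
    return resources / cost                                 # below 1.0 signals likely repayment distress

# Hypothetical unhedged and hedged borrowers under a 30 percent depreciation
print(round(stressed_coverage(100, 110, 0, 0.30), 2))    # unhedged: coverage drops below 1
print(round(stressed_coverage(100, 30, 90, 0.30), 2))    # hedged: receivables offset the shock
```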

B. THE BANKRUPTCY REGIME
Let us now turn to the relevance of a country’s bankruptcy
regime to the relationship between the domestic bank and
the domestic borrower as well as between the foreign bank
and the domestic bank. We will consider how deficient
rules for corporate debt workouts, and in particular violation of the absolute priority rule (APR), undermine the
ability of domestic banks to make credit decisions at the corporate level and of foreign banks to discriminate among domestic banks in the interbank market.
Again, the experience of Asia is both especially instructive
and particularly relevant.
Economic analysis of bankruptcy arrangements
has focused on the impact of the bankruptcy regime, and in
particular the absolute priority rule, on the efficiency of
financial contracting (see Longhofer [1997]). The absolute
priority rule provides for the retention in bankruptcy of the
priority of claims established outside of bankruptcy. In
other words, the most senior creditors should be paid off
before anything is given to the next senior creditors, and so
on down to the shareholders.
Asian countries appear to have a high tolerance for
violations of APR. This has been traced by legal scholars to
Asian cultural traditions as well as to the influence of European
civil law heritages in a number of countries, including
Indonesia.7 The character of significant violations of APR,
which we think are prevalent, is suggested by the following examples. In a recent restructuring of a major Thai

company with $330 million in debt, creditors will forgive
95 percent of their debt and shareholders will retain an
interest in the company. The terms reflect the power of the
dominant shareholder to veto proposed restructuring
arrangements, a widely recognized shortcoming of Thailand’s
bankruptcy arrangements.8
In the case of Korea, violations of APR have been
associated with the behavior of entrenched managements.
The managements of bankrupt chaebols and other large
Korean companies have been able to apply for court mediation which, when granted, has permitted them to stay in
place. This process violates the absolute priority rule
because control of corporate assets has not been transferred
to the new owners. The Korean government has now proposed legislation that restricts the opportunities of managements at troubled companies to entrench themselves.9
Now let us evaluate the impact of violation of
APR on the creditor relationship between the domestic
bank and borrower. The higher the probability of APR violations in a given legal structure, the less incentive owners/
managers have to avoid bankruptcy. The lessened incentive
reflects the diminished discrepancy in outcomes between
the bankruptcy and nonbankruptcy states. In these circumstances, the domestic creditor bank would be less favorably
treated than in the absence of APR violations.
Consider the case where the domestic bank does
have special ability vis-à-vis foreign banks to discriminate
between domestic companies as to the likelihood of
default. The presence of such a superior capacity helps
explain why foreign banks would choose to fund domestic
banks’ extensions of credit to domestic borrowers. That is,
the presence of APR violations enhances the domestic
banks’ advantage.
To summarize, badly structured bankruptcy
regimes can result in the increased likelihood of bankruptcy (because of the reduction in incentives) and reduced
recoveries in states of bankruptcy, and thus will tend to
reduce the attractiveness for creditors of debt positions in
those economies. As well, poor bankruptcy arrangements
increase the likelihood that foreign banks will use domestic
banks as intermediaries in lending relationships with
domestic corporate borrowers.

Our analysis above provides a basis for the presumption that the interests of emerging market countries
would be served by addressing institutional failings. Additionally, instability in domestic financial markets associated
with such institutional arrangements could be transmitted
to international markets. Therefore, international supervisors also have incentives for evaluating the state of institutional arrangements in emerging market countries when
considering whether and how to negotiate on international
prudential and financial liberalization issues.

V. MULTILATERAL AGREEMENTS
This section reviews the consequences of the size and composition of the group participating in international prudential and liberalization agreements for the contractual character of those agreements. In particular, we focus on the significance for international liberalization and supervisory arrangements of the inclusion of emerging market countries. In connection with this discussion, we review
three agreements: the Basle Capital Accord, the Financial
Services Agreement of the General Agreement on Trade
in Services, and the Basle Committee’s Core Principles for
Effective Banking Supervision.

A. BASLE CAPITAL ACCORD
The Basle Capital Accord is an understanding among the
bank supervisory agencies of the G-10 member countries.
The agreement, signed in 1988, was undertaken during a
period when these authorities expressed interest in a
shared-rule framework for judging the financial strength of
applicant banks, which were, at that time, primarily from
each other’s countries. The thrust of the revised Basle
Accord (updated to include the coverage by capital regulation of market risks) can be summarized as follows:
1. A bank must hold equity capital equal to at least a
fixed percent of its risk-weighted credit exposures
as well as capital to cover market risks in the
bank’s trading account.
2. When performance causes capital to fall below this
minimum requirement, shareholders can retain
control provided that they recapitalize the bank to
meet the minimum capital ratio.


3. If the shareholders fail to do so, the bank’s regulatory agency is required to sell or liquidate the
bank.
The Basle Accord provides de facto liberalization
by establishing a transparent standard for the crucial variable, capital, that is used in making judgments on various
applications, including those for entry, of foreign banks.
Due to its transparent framework and simplicity, the agreement operates to limit discretion for supervisors in signatory countries and other countries that voluntarily chose to
adhere to it.
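To put the capital standard in point 1 of the summary in concrete terms, the sketch below computes a capital-to-risk-weighted-assets ratio in the spirit of the 1988 Accord, using the familiar 0, 20, 50, and 100 percent credit risk weights and the 8 percent minimum. The portfolio figures are hypothetical, and the separate charge for market risks in the trading account is omitted.

```python
# Credit risk weights in the spirit of the 1988 Basle Accord (illustrative subset)
RISK_WEIGHTS = {
    "oecd_government":       0.00,
    "oecd_bank":             0.20,
    "residential_mortgage":  0.50,
    "corporate_loan":        1.00,
}
MINIMUM_RATIO = 0.08   # the Accord's minimum capital-to-risk-weighted-assets ratio

def risk_weighted_ratio(capital, exposures):
    """Capital divided by risk-weighted assets for a dictionary of exposures
    keyed by asset class. Hypothetical portfolio; trading-book market risk
    charges are ignored in this sketch."""
    rwa = sum(RISK_WEIGHTS[asset] * amount for asset, amount in exposures.items())
    return capital / rwa

portfolio = {"oecd_government": 300, "oecd_bank": 200,
             "residential_mortgage": 250, "corporate_loan": 400}
ratio = risk_weighted_ratio(capital=50, exposures=portfolio)
print(round(ratio, 3), ratio >= MINIMUM_RATIO)   # 0.088 True for these hypothetical numbers
```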
It is instructive to consider what lessons the
Accord provides regarding the factors that influence the
outcome of contracting among groups of national supervisory authorities. Economic analysis of contracting would
suggest that the small size and homogeneous character of
the group of signatories explain the simplicity of the Basle
Capital Accord. These characteristics allowed the negotiations to be effectively limited to questions involving
capital issues.10 Additionally, the cost and complexity of
negotiations were reduced as the agreement does not
involve formal treaty obligations and accords flexibility in
national implementation.
Let us consider the case that can be made to use
the Basle Accord as a complete and controlling international prudential agreement covering all banking systems,
including those of the emerging markets. Four characteristics of the Basle Accord are inconsistent with this case.
First, the framers of the Accord implicitly presumed that the signatory countries had compatible
institutional arrangements. As discussed above, in many
cases the institutional failings of emerging market countries make them incompatible with those of established
financial centers.
Second, the Accord is an incomplete agreement
that affords considerable discretion to national supervisory
authorities. For example, it provides no guidance as to how
signatory supervisors should address failures of bank shareholders to meet agreed minimum requirements. The disparate implementation of prompt corrective action
initiatives in the United States and Japan affirms this
observation.11 Additionally, the Accord offers no specific guidance as to the circumstances in which a host country
supervisor may close a branch office of a foreign bank.12
Because of the incomplete nature of the contract, national
supervisors in G-10 countries have had to expend considerable effort to make adaptations and to develop informal
understandings in order to keep the Basle Accord relevant
and useful. Enlargement of the group of nations consulted in this process would considerably increase the
costs of reaching consensus on modifications of the Accord
and could possibly discourage needed adaptations.
Third, the Basle Accord is tightly focused on
issues related to capital measurement and the setting of
minimum capital adequacy standards. For example, it does
not offer standards for banks’ efforts to identify, measure,
monitor, and control material risks. It would be important
to reach agreement on standards such as these if the group
of countries negotiating standards became more diverse.
Fourth, there is no formal enforcement mechanism
in the Accord. In the signatory countries, there has been an
increased understanding that formal enforcement mechanisms such as prompt corrective action are required at the
national level. The same view, however, has not become as
widely accepted in emerging market countries. In the
absence of an enforcement mechanism, enlargement of the
signatory group risks the introduction of a rogue national
banking system into international markets. The presence of
such a rogue signatory could undercut the understandings
on which the normal functioning of the international interbank markets are based.

B. THE GENERAL AGREEMENT ON TRADE AND
SERVICES AND THE BASLE CORE PRINCIPLES
In this section, we will consider how emerging markets
might be brought into the Basle-based discussions. To begin,
however, it is important to appreciate the significance of the
separate process of negotiating international liberalization
organized under the aegis of the World Trade Organization.
The General Agreement on Trade in Services
(GATS) promotes competitive and efficient markets worldwide. In particular, the Financial Services Agreement of
GATS brought trade in financial services into a global
multilateral framework comparable to that provided for

trade in goods (see Key [1997]). The agreement calls for a
process of liberalization involving the reduction or removal
of barriers to foreign financial services and foreign financial
services providers from national markets.
The coverage of financial services by GATS is
modified by the so-called prudential carve-out. The carve-out permits signatory countries to take measures for prudential purposes notwithstanding other GATS provisions.
However, limited guidance has been provided as to what
constitutes prudential measures. It is clear only that the
carve-out permits measures for the protection of various
classes of stakeholders such as policyholders and depositors
or “to ensure the integrity and stability of the financial
system.” Therefore, fleshing out the meaning of the prudential carve-out requires reference to alternative sources.
Consider the character of guidance that would be
provided for this concept by the recently drafted Basle Core
Principles for Effective Banking Supervision. The Core
Principles are intended to serve as a basic reference for
supervisory and other public authorities. That is, they
provide general, not detailed, guidance on an extensive
listing of topics (see Basle Committee on Banking Supervision [1997]). The Core Principles were drafted by representatives from the Basle Committee’s G-10 member
countries and nine emerging market countries. Supervisors
from all countries, however, are being encouraged to
endorse the Core Principles. The Basle Core Principles comprise twenty-five basic standards that relate to: preconditions for effective banking supervision (Principle 1),
licensing and structure (Principles 2 to 5), prudential
regulation and requirements (Principles 6 to 15), methods
of ongoing banking supervision (Principles 16 to 20),
information requirements (Principle 21), formal powers
of supervisors (Principle 22), and cross-border banking
(Principles 23 to 25).
The Core Principles employ the concept of capital
regulation established in the Basle Accord. To this they
add an extensive set of supervision issues. One might
interpret the greater breadth of the Principles as reflecting
the now-established international sentiment that improvements need to be made in the supervisory systems of many
countries.

VI. ALTERNATIVES TO THE BASLE ACCORD
METHODOLOGY
The breadth of the Core Principles may make them more
useful than the Basle Accord for extended application to
the emerging market countries. As noted, however, the
Core Principles still make use of the Basle Capital Accord.
Therefore, some of the same arguments against the further
expansion of Basle Accord signatory countries apply to the
Core Principles as well. There is rather broad agreement
that the Accord’s methodology has flaws, but certainly
no consensus on what, if any, alternative could or should
replace it. In this section, we will make some observations
regarding two of these alternative prudential methodologies.
We will first consider fair pricing of deposit insurance and
then the so-called precommitment approach.

A. FAIR PRICING OF DEPOSIT INSURANCE
John, Saunders, and Senbet (1995) have argued that countries should adopt fairly priced deposit insurance to avoid
the distorting consequences for resource allocation associated with capital regulation. They argue that appropriate
risk-adjusted deposit insurance premiums would provide
bank owners with incentives to put in place optimal
management compensation structures. The motivation for
such a scheme would be to induce managers to avoid
taking risks beyond those that are optimal for an “all-equity-financed bank.”
The experience in the United States indicates that
implementation of risk-adjusted premiums is a politically
difficult task. The range of risk-adjusted premiums now
charged by the Federal Deposit Insurance Corporation is
about 30 basis points, well below the approximately
100-basis-point range routinely estimated by researchers
in the early 1990s as required to adequately account for
risk differences among banks. The European experience
also suggests that gaining agreement among countries on
adopting risk-adjusted premiums would not be an easy
task. In 1993, the European Commission issued the Directive on Deposit Guarantees requiring EU member nations
to adopt a national system of deposit insurance that met
broadly agreed-upon standards.13 National authorities
were given wide latitude, however, in implementing the Directive in their home countries. Countries chose a wide
variety of implementation mechanisms; only two, however,
chose risk-adjusted premiums.14

B. THE PRECOMMITMENT APPROACH
Now let us consider the possibility of substituting a
precommitment-type approach for the current Basle Accord
methodology. Under the precommitment approach, a bank
commits to its regulator that it will not exceed a certain
magnitude of loss for a period to come. Each bank determines this amount on its own. If the bank violates this
commitment, then it faces a penalty, which must be viewed
as credible in order for the approach to be effective.
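Mechanically, the approach reduces to comparing the realized profit-loss outcome for the period with the precommitted maximum loss and assessing a penalty on any breach. The sketch below shows that comparison; the linear penalty schedule is purely hypothetical, since the approach itself only requires that the penalty be credible, and the text's point is that the realized profit-loss figure must be verifiable.

```python
def precommitment_penalty(realized_pnl, committed_max_loss, penalty_rate=0.5):
    """Penalty owed when a period's trading loss exceeds the bank's precommitted
    maximum loss. The linear penalty_rate is hypothetical; the approach itself
    only requires a credible penalty for breaching the commitment."""
    loss = max(-realized_pnl, 0.0)                   # positive when the bank lost money
    breach = max(loss - committed_max_loss, 0.0)     # amount by which the commitment was exceeded
    return penalty_rate * breach

print(precommitment_penalty(realized_pnl=-120.0, committed_max_loss=100.0))  # 10.0: commitment breached
print(precommitment_penalty(realized_pnl=-80.0, committed_max_loss=100.0))   # 0.0: within the commitment
```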
To date, there has been little, if any, discussion
regarding the challenges involved in ex post verification
of periodic profit-loss outcomes. The reason for this
dearth of deliberation seems clear—the precommitment
approach raises no new issues in economies with strong
accounting traditions and systems. However, if consideration were to be given to emerging market banks employing such an approach, verification would become an issue
due to institutional shortcomings in these countries. In
particular, recent discussions on the current operation of
emerging market banking systems suggest that these
systems are often characterized by a lack of transparency,
a scarcity of supervisory personnel with requisite technical training, incomplete avoidance of conflicts of
interest, and lax safeguarding of corporate assets by
system participants. These deficiencies could undermine
the verification procedure, which is a key aspect of a self-assessed regulatory approach. This discussion suggests
that, at present, there are significant barriers to the use
of incentive-compatible regulatory schemes in emerging
market economies.

VII. ENFORCEMENT MECHANISMS
So far, we have considered various international agreements
and frameworks for dealing with prudential concerns and
their relevance to a world in which a growing and increasingly diverse group of countries participate in international
markets. As our final topic, we will discuss the arguments
related to the choice of whether to include enforcement mechanisms in multilateral agreements on capital adequacy and associated prudential issues.
The argument for rewards and penalties is that they provide incentives for participant countries to take actions
that would tend to improve prospects for stability of the
international financial system. Two obstacles, however,
must be overcome. First, it would be difficult to ensure
that enforcement actions are applied fairly and to insulate
them from forces other than those related to prudential
concerns. Second, supervisors would more closely scrutinize any proposals if the proposals were connected to a
binding agreement. This would increase the difficulty of
negotiating an agreement.

VIII. CONCLUDING REMARKS
A lesson of the Asian financial crisis concerns the macroeconomic costs of poorly designed institutional structures.
One possible explanation for the continued tolerance of these structures is that they afford a competitive advantage to the domestic financial institutions of
emerging market countries. The competitive advantage of
these banks is based on their value as intermediaries
between international markets and domestic agents. This
value arises from their knowledge of the intricacies of the
institutional structures in their home countries.
Much of our discussion of the policy problem
assumes that, going forward, emerging market supervisors
will be included in the negotiation of multilateral supervisory understandings. The analysis of the paper suggests
that their participation will influence the nature of the understandings reached among regulators. In particular,
the outcome would likely result in an attainable standard
that is consistent with a process of institutional reform over
time. During this period of reform, emerging market
banks would be insulated from the full consequences of
market discipline and thus would retain some protection of
their competitive status. This would result in an agreed-upon strategy for integrating emerging market banking
systems into international markets.
If we relax the assumption that emerging
market supervisors must agree to an international supervisory standard, the standard would move toward one

that permits the most efficient employers of bank capital
to fully exploit their competitive advantages. In the
absence of protected franchises, few emerging market
banks would be able to compete in the market for equity
capital at this time. Under these conditions, one possible
response of emerging market authorities could be to close
off their markets to avoid direct competition between
more efficient foreign banks and their less efficient
domestic institutions. This could well be accompanied by
lessened emphasis on institutional reform efforts. The

costs of these policy measures would presumably be less
real economic integration of emerging market economies
with the international economy. We also cannot discount
the possibility that the less complete and more slowly
implemented institutional reforms will have a negative
impact on systemic risk. This might occur if market participants failed to take into account in their own risk management actions a scaling back of the market-oriented
oversight of banking and other financial supervisors in
emerging market economies.


ENDNOTES

The author wishes to acknowledge the extraordinary assistance provided by
Garrett Ulosevich in the preparation of this paper. This paper presents the views
of the author and should not be interpreted as reflecting the views of the Board
of Governors of the Federal Reserve System or other members of its staff.
1. See Annex of Financial Stability in Emerging Market Economies, Report
of the Working Party on Financial Stability in Emerging Market
Economies. This report is available on the World Wide Web at the Bank
for International Settlements’ site (http://www.bis.org). It provides an
illustrative List of Indicators of Robust Financial Systems. The six
main headings of the listing are: 1) legal and juridical framework;
2) accounting, disclosure, and transparency; 3) stakeholder oversight and
institutional governance; 4) market structure; 5) supervisory/regulatory
authority; and 6) design of the safety net.
2. Korean data for deposits at banks by individuals and corporations
show much stronger increases in growth of deposits of individuals in
1996 and 1997. In addition, Korean data show sizeable increases in
foreign-currency-denominated bank loans equal to 47 percent between
the end of 1995 and the end of the third quarter of 1997. That is, the
Korean data appear to be broadly consistent with the circumstances
described in the stylized example (Bank of Korea 1997).
3. See Krugman (1996) for a discussion of conventional currency crisis
theory. See Krugman (1998) for an exposition of a model that
concentrated on the problem of moral hazard in the financial sector and
its macroeconomic consequences.
4. In the discussion of the stylized example, we abstract from the possible
use of collateral. The credit policies in many emerging market economies
are asset based rather than cash-flow based. Because banks do not require
information on cash flows of the underlying asset, they are unable to
evaluate independently the asset’s value through discounted cash flow or
similar methodologies. In such circumstances, collateral should provide the
lender less comfort than when the collateral-assumed values are consistent
with estimates derived from a discounted cash-flow analysis.
5. See Gibson (1997) for a discussion of how the design of an
information system depends on the risk measurement methodology that
a bank chooses.
6. For a discussion of the usefulness of cash-flow analysis in emerging
market countries, see Kane (1995).
7. For overview discussions of the administration of insolvency laws
across Asia, see Tomasie and Little (1997). Tomasie and Little have
commented on the impact of Confucian philosophy on the resolution of
financially troubled companies in Asia. They suggest that the cultural ideal
of communal risk bearing results in an unwillingness to visit total loss on any
class of stakeholders. Tomasie and Little have also commented on the separate
influence of the European civil law tradition. They have observed that under
this tradition, judges look first to the satisfaction of public policy objectives
and only then consider the proposed resolution’s consistency with the
structure of creditor preference outside of bankruptcy. For a more general
discussion of how the character of legal rules and the quality of law
enforcement affect financial activity, see La Porta et al. (1996, 1997).
8. See Sherer (1998) for an article on the restructuring plan proposed for
Alphatec Electronics PLC.
9. To address this situation, the Korean government proposed
legislation, in early 1998, that would restrict the circumstances in which
management could apply to the courts for protection. Under current
Korean law, a company can file for liquidation, reorganization, or court
mediation. It is estimated that almost all large company filings have been
for court mediation. Korean commentators have asserted that filings for
the court mediation option are often undertaken by managements
seeking to retain authority rather than for the purpose of present
liquidation. Under the proposed legislation, debtor companies would not
be permitted to withdraw from a proceeding once an order has been
issued. It is anticipated that this change would address the problem of
management abuse of the process.
10. However, the agreement did not call for limiting the benefits to
signatories. For example, applications to the Federal Reserve from banks
from countries that adhere to the Accord are required to meet the Basle
guidelines as administered by their home country supervisors. An
applicant from a country not subscribing to the Basle Accord is required
to provide information regarding the capital standard applied by the
home country regulator, as well as information required to make data
submitted comparable to the Basle framework. See Misback (1993).
11. The gist of the U.S. implementation of prompt corrective action is
to limit the discretion available to regulators with respect to the actions
they require bank owners to take in response to lowered capital ratios. In
contrast, the Japanese implementation can be interpreted as providing a
menu of options for supervisors.
12. In its use here, the term Accord should be broadly construed to refer
to the public documents that have been issued by the Basle Committee.
13. For details on the EU Directive on Deposit Guarantees, see
McKenzie and Khalidi (1994).
14. Portugal and Sweden employ risk-adjusted deposit insurance
premiums. The only other foreign countries with risk-adjusted deposit
insurance premiums are Argentina and Bulgaria. See Garcia (1997).

REFERENCES

Bank for International Settlements. 1997. FINANCIAL STABILITY IN EMERGING MARKET ECONOMIES. Report of the Working Party on Financial Stability in Emerging Market Economies.

Bank of Korea. 1997. MONTHLY STATISTICS BULLETIN, December.

Basle Committee on Banking Supervision. 1997. CORE PRINCIPLES FOR EFFECTIVE BANKING SUPERVISION.

Diaz-Alejandro, C. 1983. “Goodbye Financial Repression, Hello Financial Crash.” JOURNAL OF DEVELOPMENT ECONOMICS 19: 1-24.

Garcia, Gillian. 1997. “Commonalities, Mistakes and Lessons: Deposit Insurance.” Paper presented at the Federal Reserve Bank of Chicago/World Bank Conference on Preventing Banking Crises.

Gibson, Michael. 1997. “Information Systems for Risk Management.” In THE MEASUREMENT OF AGGREGATE MARKET RISK. Basle: Bank for International Settlements.

John, Kose, Anthony Saunders, and Lemma W. Senbet. 1995. “A Theory of Bank Regulation and Management Compensation.” New York University Salomon Center Working Paper S-95-1.

Kane, Edward. 1995. “Difficulties of Transferring Risk-Based Capital Requirements to Developing Countries.” PACIFIC-BASIN FINANCE JOURNAL 3, nos. 2-3 (July).

Key, Sydney J. 1997. “Financial Services in the Uruguay Round and the WTO.” Group of Thirty Occasional Paper no. 54.

Krugman, Paul. 1996. “Are Currency Crises Self-Fulfilling?” NBER MACROECONOMICS ANNUAL.

———. 1998. “Bubble, Boom, Crash: Theoretical Notes on Asia’s Crisis.” Unpublished note, January.

La Porta, Rafael, Florencio Lopez-de-Silanes, Andrei Shleifer, and Robert W. Vishny. 1996. “Law and Finance.” NBER Working Paper no. 5661.

———. 1997. “Legal Determinants of External Finance.” NBER Working Paper no. 5879.

Longhofer, Stanley D. 1997. “Absolute Priority Rule Violations, Credit Rationing and Efficiency.” Federal Reserve Bank of Cleveland Working Paper no. 9710.

McKenzie, George, and Manzoor Khalidi. 1994. “The EU Directive on Deposit Insurance: A Critical Evaluation.” JOURNAL OF COMMON MARKET STUDIES 32, no. 2 (June).

Misback, Ann E. 1993. “The Foreign Bank Supervision Enhancement Act of 1991.” FEDERAL RESERVE BULLETIN 79, no. 1 (January).

Sherer, Paul M. 1998. “Major Thai Restructuring Would Pay 5% to Creditors.” WALL STREET JOURNAL, February 4, p. A19.

Tomasie, Roman, and Peter Little, eds. 1997. INSOLVENCY LAW AND PRACTICE IN ASIA. FT Law & Tax Asia Pacific.

The views expressed in this article are those of the author and do not necessarily reflect the position of the Federal Reserve
Bank of New York or the Federal Reserve System. The Federal Reserve Bank of New York provides no warranty, express or
implied, as to the accuracy, timeliness, completeness, merchantability, or fitness for any particular purpose of any information
contained in documents produced and provided by the Federal Reserve Bank of New York in any form or manner whatsoever.


Commentary
Christine M. Cumming

In commenting on the three thought-provoking papers in
this session, I would like to consider the first two papers
together and then turn to the third.
From the standpoint of methodology, the first two
papers could not be more different. The Estrella paper
blends analytical and historical methodologies, with
attention to supervisors’ own understanding of their
policies and practices, to consider the appropriate role of
formulas and judgment in the supervisory assessment of
capital adequacy. The Kupiec and O’Brien paper considers
a series of results in the literature in the context of a more
general model. Paul Kupiec and Jim O’Brien have done a
great service in their paper by bringing these strands of the
academic literature into a common framework. They help
us to understand better the role of capital requirements and
the interaction of capital requirements with risk management, the public safety net, and the short- and long-run
optimization problems of firms, where franchise value is
interpreted as capturing the long-run value of the firm as
a going concern.
The themes in the two papers, however, are very
similar. Estrella emphasizes the dynamism and complexity of the financial
system and, more particularly, of the rules and conventions that guide
financial institution and supervisory behavior. In doing so, he draws on
literature beyond economics that discusses the phenomenon of reliance on
judgment and interpretation in the crafting and execution of rules and
conventions. Reliance on simple quantitative rules applicable to all
institutions—in Estrella’s language, formulas—cannot work as supervisors
would like them to.

Christine M. Cumming is a senior vice president at the Federal Reserve Bank of
New York.

In their paper, Kupiec and O’Brien make much
the same point by generalizing the models used in the
literature on capital requirements and deposit insurance
pricing. Well-known policy prescriptions developed in
models with certain assumptions change markedly with
the relaxation of even one or two assumptions. In particular,
for banks with different strategies or different investment
opportunities, the “optimal” capital requirement—the requirement at which
shareholder value is maximized while moral hazard is held in check—is
bank-specific. No two capital
requirements are likely to be the same.
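A deliberately stylized illustration may help fix the point; it is introduced here for exposition only, is not the model in the Kupiec and O'Brien paper, and its symbols are defined only for this sketch. Suppose shareholders choose portfolio risk $\sigma$ and a capital ratio $k$ to solve

    \max_{\sigma,\,k} \; V(\sigma,k) \;=\; P(\sigma,k) + \bigl(1 - p_{d}(\sigma,k)\bigr)\,F - c(k),

where $P$ is the value of the safety-net put (increasing in $\sigma$, decreasing in $k$), $p_{d}$ is the probability of closure, $F$ is franchise value, and $c(k)$ is the cost of holding capital. The capital level that just neutralizes the incentive to raise $\sigma$ then depends on $F$, on the investment opportunities that shape $P$ and $p_{d}$, and on $c$, all of which differ across banks.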
In both the Estrella and the Kupiec and O’Brien
papers, the development of bank-specific requirements
entails large amounts of information and a degree of precision that is not reasonable to expect of anyone, except the
owners of the firm. As the world becomes more analytical,
precise, and complex, it becomes all the more difficult to
specify simple and hard-and-fast regulatory rules.


Yet both papers see a role for capital requirements—to limit moral hazard, to benchmark information,
and to provide a cushion to limit the social costs of a bank
liquidation. If we look beyond these papers to actual practice, formulas such as minimum capital requirements
appear to have additional purposes. Such requirements
shorten the negotiation time to agreement between firm
and supervisor on appropriate capital levels by providing a
lower bound to the possible outcomes. A related consideration is transparency. Since the regulator has statutory
powers to enforce capital adequacy, the considerations
influencing its evaluation should be known to the financial
firm, and the government should be able to demonstrate
capital inadequacy in setting out any remedial action.
What, then, do the conclusions in these papers
mean for supervisors?
First, capital requirements will necessarily be
imperfect and have only temporary effectiveness. Second,
the increasing sophistication and complexity of risk management in financial institutions call for more judgment in
assessing capital adequacy. Third, capital cannot be considered in isolation, but has to be understood in the context of
strategy, investment opportunities, risk management, and
the cost of equity issuance. Capital requirements need to be
seen in the broad context of supervisory activity, and
capital adequacy supervision must necessarily involve some
elements of supervisory judgment. Fourth, the conclusions
in these papers help explain why we increasingly see a link
between the quality of risk management and various
supervisory rules and permissions. For example, the internal models approach includes both qualitative and quantitative criteria. With prompt corrective action and under
the recently revised Regulation Y in the United States,
limitations on activities and requirements to seek regulatory permission to conduct activities can be triggered by
supervisory judgments, as reflected in the CAMEL or
Management ratings given by U.S. supervisors during
a bank examination. Finally, the results also help to
explain the appeal of “hybrid” approaches described
by Daripa and Varotto and by Parkinson; the supervisory approach described in Estrella’s 1995 paper, “A
Prolegomenon to Future Capital Requirements”; and
the approach described in the Shepheard-Walwyn and
Litterman paper.
In reading the Frankel paper, I found myself surprised. After the breadth of perspective in the previous two
papers, Frankel moves the point of perspective higher and
further back to survey the broad global scene, and generates the shock of the unexpected—the problems we just
considered in Estrella and in Kupiec and O’Brien are yet
more complex. The shock is reinforced by the contrast
between the elegance of the two earlier papers and
Frankel’s candid observations.
Frankel’s paper considers two sets of issues. First,
he points out that certain preconditions have to be met for
financial supervision to have any meaningful role. These
preconditions include meaningful financial statements,
publicly available on a timely basis, and a clear set of rules
determining what happens when debtors cannot pay. In
other words, we need to have adequate accounting, disclosure and bankruptcy principles established and applied in
every country active in the international financial markets.
No one in this room is likely to disagree openly
with his point. Frankel argues that the absence of these
preconditions in some countries contributed to and exacerbated the recent crisis in Asia. Moreover, that crisis does
seem to have created a defining moment for G-10 supervisors and central banks. The G-10 official community
shows every sign that it agrees on the need to strengthen
global accounting, disclosure, and bankruptcy rules and
practices. What makes the moment defining is that these
issues are not new—efforts have already been made to
address them within the G-10 countries with mixed success, and the need for genuine success is all the greater.
That brings me to Frankel’s second set of issues. I
did not fully understand his arguments, but the issue of the
respective roles of authorities in the G-10 and the emerging
market countries in creating these preconditions is important. In my view, there is no question where leadership
should come from. In the context of capital regulation,
leadership from the G-10 countries—rooted in a perspective that encompasses the emerging market countries—
suggests some considerations in evaluating possible
approaches to twenty-first-century capital requirements. In particular, we might look for approaches that provide
evolutionary paths for capital requirements, with financial
institutions proceeding along the path at their own
pace and consistent with the nature of their business
strategy and risk management and internal control processes. The 1996 Market Risk Amendment to the Basle
Accord, with its standardized and internal models
approaches, represented one example of the creation of an
evolutionary path.
One caution, however. The path concept cannot be
seen as a reason to avoid moving expeditiously down the
path or failing to put the preconditions described by
Frankel in place. When you drive on the Autobahn, you cannot drive at 25 kilometers per hour or operate a car in need of repair.
The substantive issues raised by Frankel’s paper are these: what changes to the national and international financial systems do we want, and how much do we want them? The
other issues he raises—who is a signatory to international
agreements and whether and how to have some international enforcement mechanism to ensure minimum
standards among participants in the international financial
markets—are issues of process. We first have to work on
agreeing on the substantive issues. The very process of
forging a consensus is by its nature inclusive, and that
suggests some clear considerations for the process issues.



Capital Regulation: The Road Ahead
Tom de Swaan

INTRODUCTION
It is a great pleasure for me to be here and to participate in
the discussion of the future of capital adequacy regulation.
I would like to compliment the organizers of this conference on the programme they have set up, covering many
relevant topics, and the range of experts they have been
able to bring together.
In my address, as I am sure you would expect, I
will approach the issues from a supervisory perspective
and in my capacity as chairman of the Basle Committee.
Most of the questions that have arisen and been discussed
here in the last two days are complicated, and many issues
will require careful review. So do not at this stage expect
me to provide clear answers on specifics. I do hope to be
fairly explicit, however, on some of the more general
issues at stake, in particular on the level of capital
adequacy required for prudential purposes. In other
words, my address today should be seen as part of the
exploratory process that should precede any potentially
major undertaking.

Tom de Swaan recently joined ABN AMRO Bank; on January 1, 1999, he will
become a member of the managing board of the institution. At the time of the
conference, he was executive director of De Nederlandsche Bank and Chairman of
the Basle Committee on Banking Supervision.

STARTING POINT: THE BASLE ACCORD
When assessing the setup of capital regulation, I take as
my starting point the Basle Capital Accord of 1988. It is
commonly acknowledged that the Accord has made a major
contribution to international bank regulation and supervision. The Accord has helped to turn a prolonged downward tendency in international banks’ capital adequacy
into an upward trend in this decade. This development has
been supported by the increased attention paid by financial
markets to banks’ capital adequacy. Also, the Accord has
effectively contributed to enhanced market transparency, to
international harmonization of capital standards, and thus,
importantly, to a level playing field within the Group of
Ten (G-10) countries and elsewhere. Indeed, virtually all
non-G-10 countries with international banks of significance have introduced, or are in the process of introducing,
arrangements similar to those laid down in the Accord.
These are achievements that need to be preserved.
It is often said that the Accord was designed for a
stylized (or simplified) version of the banking industry at
the end of the 1980s and that it tends to be somewhat rigid
in nature—elements, by the way, that have enabled it to be
widely applicable and that have contributed to greater harmonization. Since 1988, on the other hand, banking and
financial markets have changed considerably. A fairly


recent trend, but one that clearly stands out, is the rapid
advances in credit risk measurement and credit risk management techniques, particularly in the United States and
in some other industrialized countries. Credit scoring, for
example, is becoming more common among banks. Some
of the largest and most sophisticated banks have developed
credit risk models for internal or customer use. Asset
securitization, already widespread in U.S. capital markets,
is growing markedly elsewhere, and the same is true for the
credit derivative markets. Moreover, one of the advantages of the Capital Accord, the simplicity it achieves through a small number of risk buckets, has itself become a target of increasing criticism.
Against this background, market participants
claim that the Basle Accord is no longer up-to-date and
needs to be modified. As a general response, let me point
out that the Basle Accord is not a static framework but is
being developed and improved continuously. The best
example is, of course, the amendment of January 1996 to
introduce capital charges for market risk, including the
recognition of proprietary in-house models upon the industry’s request. The Basle Committee neither ignores market
participants’ comments on the Accord nor denies that there
may be potential for improvement. More specifically, the
Committee is aware that the current treatment of credit
risk needs to be revisited so as to modify and improve the
Accord, where necessary, in order to maintain its effectiveness. The same may be true for other risks, but let me first
go into credit risk.

OBJECTIVES
Before going on our way, we should have a clear idea of
what our destination is. One of the objectives for this
undertaking is, at least for supervisors, that the capital
standards should preferably be resilient to changing needs
over time. That is, ideally, they should require less frequent
interpretation and adjustments than is the case with the
present rules. Equally desirable is that capital standards
should accurately reflect the credit risks they are meant
to cover, without imposing a regulatory burden that
would ultimately be unproductive. Substantial differences between the risks underlying the regulatory capital
requirements and the actual credit risks would entail the


wrong incentives. These would stimulate banks to take
on riskier loans within a certain risk category in pursuit
of a higher return on regulatory capital. To obtain better
insight into these issues, we should further investigate
banks’ methods of determining and measuring credit risk
and their internal capital allocation techniques. In doing
so, however, we should not lose sight of the functions of
capital requirements as discussed in the preceding session
of this conference.
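To make the incentive concrete, here is a minimal numerical sketch. The two loans, their spreads, and their expected losses are invented; the 8 percent charge and the single 100 percent corporate risk weight are those of the 1988 Accord.

# Stylized illustration: two hypothetical corporate loans sit in the same
# 100 percent risk bucket, so both attract the same regulatory capital charge
# even though their economic risk differs. All loan figures are invented.

LOANS = {
    "high-grade borrower": {"spread": 0.0050, "expected_loss": 0.0010},
    "low-grade borrower":  {"spread": 0.0200, "expected_loss": 0.0120},
}

RISK_WEIGHT = 1.00    # both loans fall in the 100 percent corporate bucket
CAPITAL_RATIO = 0.08  # minimum capital against risk-weighted assets

for name, loan in LOANS.items():
    regulatory_capital = RISK_WEIGHT * CAPITAL_RATIO       # per unit of exposure
    net_margin = loan["spread"] - loan["expected_loss"]    # after expected losses
    return_on_regulatory_capital = net_margin / regulatory_capital
    print(f"{name:20s} return on regulatory capital: {return_on_regulatory_capital:5.1%}")

Because the denominator is identical for both loans, the riskier credit mechanically reports the higher return on regulatory capital, which is exactly the incentive problem described above.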
Moreover, the Accord should maintain its transparency as much as possible: with the justified ever-greater
reliance on disclosure, market participants should be able
to assess relatively easily whether a bank complies with the
capital standards and to what extent. Especially in this
respect, the present Accord did an outstanding job. Every
self-respecting bank extensively published its Bank for
International Settlements ratios.
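As a reminder of how little is needed to perform that check, the following sketch computes the ratio from disclosed figures; the balance-sheet numbers are invented, while the risk weights and minimums are the broad buckets and floors of the 1988 Accord.

# Minimal sketch of the Bank for International Settlements capital ratio that
# a reader of a bank's disclosures could compute. Balance-sheet figures are
# invented; the risk weights are the broad buckets of the 1988 Accord.

assets = [
    # (description, amount, 1988 Accord risk weight)
    ("cash and OECD government claims", 200.0, 0.00),
    ("claims on OECD banks",            150.0, 0.20),
    ("residential mortgages",           300.0, 0.50),
    ("corporate loans",                 400.0, 1.00),
]

tier1_capital = 30.0  # equity and disclosed reserves (invented figure)
tier2_capital = 20.0  # subordinated debt, general provisions, etc. (invented)

risk_weighted_assets = sum(amount * weight for _, amount, weight in assets)
tier1_ratio = tier1_capital / risk_weighted_assets
total_ratio = (tier1_capital + tier2_capital) / risk_weighted_assets

print(f"risk-weighted assets: {risk_weighted_assets:.0f}")
print(f"tier 1 ratio:         {tier1_ratio:.1%}  (Accord minimum: 4 percent)")
print(f"total capital ratio:  {total_ratio:.1%}  (Accord minimum: 8 percent)")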
Capital requirements foster the safety and soundness of banks by limiting leverage and by providing a
buffer against unexpected losses. Sufficient capital also
decreases the likelihood of a bank becoming insolvent and
limits—via loss absorption and greater public confidence—
the adverse effects of bank failures. And by providing an
incentive to exercise discipline in risk taking, capital can
mitigate moral hazard and thus protect depositors and
deposit insurance. Admittedly, high capital adequacy ratios
do not guarantee a bank’s soundness, particularly if the
risks being taken are high or the bank is being mismanaged. Therefore, supervisors consider a bank’s capital adequacy in the context of a host of factors. But the bottom
line is that capital is an important indicator of a bank’s
condition—for financial markets as well as depositors and
bank regulators—and that minimum capital requirements
are one of the essential supervisory instruments.

GUIDING PRINCIPLES
Therefore, it should be absolutely clear that, when it
assesses the treatment of credit risk, the Basle Committee
will have no predetermined intention whatsoever of
reducing overall capital adequacy requirements—maybe
even the contrary. Higher capital requirements could
prove necessary, for example, for bank loans to higher risk countries. In fact, this has been publicly recognized by
bank representatives in view of the recent Asian crisis.
More generally, we should be aware of the potential instability that can result from increased competition among
banks in the United States and European countries in the
longer run. And we should not be misled by the favourable
financial results that banks are presently showing, but keep
in mind that bad banking times can—and will—at some
point return. In those circumstances, credit risk will still
turn out to be inflexible, still difficult to manage, and still
undoubtedly, as it has always been, the primary source of
banks’ losses. Absorption of such losses will require the
availability of capital. A reduction of capital standards
would definitely not be the right signal from supervisors to
the industry, nor would it be expedient.
Of course, I am aware of the effects of capital standards on the competitiveness of banks as compared with
largely unregulated nonbank financial institutions such as
the mutual funds and finance companies in the United
States. Admittedly, this is a difficult issue. On the one
hand, too stringent capital requirements for banks that
deviate too much from economic capital requirements
would impair their ability to compete in specific lending
activities. On the other hand, capital standards should not
per se be at the level implicitly allowed for by market
forces. Competition by its very nature brings prices down
but, alas, not the risks. If competitive pressures were to
erode the spread for specific instruments to the point where
no creditor is being fully compensated for the risks
involved, prudent banks should consider whether they
want to be involved in that particular business in the first
place. It is therefore up to supervisors to strike the optimal
balance between the safety and soundness of the banking
system and the need for a level playing field. In the longer
run, efforts should be made to harmonize capital requirements among different institutions conducting the same
activities, or at least to bring them into closer alignment.
A first exchange of views on this takes place in the joint
forum on the supervision of financial conglomerates.
Another principle that the Basle Committee wants
to uphold is that the basic framework of the Capital
Accord—that is, minimum capital requirements based on risk-weighted exposures—has not outlived its usefulness.
The rapid advances in credit risk measurement and credit
risk management techniques are only applicable to sophisticated, large financial institutions. When discussing
changes in the present Capital Accord, one should remember that it is not only being applied by those sophisticated
institutions but also by tens of thousands of banks all over the
world. The Asian crisis has underlined once again that
weak supervision, including overly lax capital standards,
can have severe repercussions on financial stability. In the
core principles for effective banking supervision published
by the Basle Committee last year, it is clearly indicated
that application of the Basle Capital Accord for banks is an
important prerequisite for a sound banking system.
Changes in the Capital Accord should take into account
that the sophisticated techniques referred to above require
among other things sophisticated risk management standards and a large investment in information technology—
preconditions most banks in both industrialized and
emerging countries cannot meet in the foreseeable future.
Consequently, for these banks, the basic assumptions of the
present Accord should be maintained as much as possible.
Precisely because the Capital Accord is relatively simple,
the framework is useful for banks and their supervisors
in emerging market countries and contributes to market
transparency.
Keeping that in mind, one should, however,
acknowledge that the current standards are not based on
precise measures of credit risk, but on proxies for it in the
form of broad categories of banking assets. Indeed, banks
regularly call for other (that is, lower) risk weightings of
specific instruments. In order to obtain more precise
weightings, the Basle Committee should be willing to consider less arbitrary ways to determine credit risks. But it is
unrealistic to expect that internationally applicable risk
weightings can be established that accurately reflect banks’
risks at all times and under all conditions. Compromises in
this respect are inevitable.

CREDIT RISK MODELS
A way out may be to refer to banks’ own methods and
models to measure credit risk, under strict conditions


analogous to the treatment of market risks. At present, I
would describe credit risk models as still being in a development stage, although the advances that some banks have
made in this area are potentially significant. Ideally, as
sound credit risk models bring forward more precise
estimates of credit risk, these models will be beneficial for
banks. Models can be and are used in banks’ commercial
operations—for example, in pricing, in portfolio management or performance measurement, and naturally in risk
management. The quantification that a model entails
implies a greater awareness and transparency of risks
within a bank. More precise and concise risk information
will enhance internal communication, decision making,
and subsequent control of credit risk. Also, models enable
banks to allow for the effects of portfolio diversification
and of trading of credit risks or hedging by means of
credit derivatives. So it can be assumed that a greater
number of banks will introduce credit risk models and
start to implement them in their day-to-day credit operations, once the technical challenges involved in modeling
have been solved.
The more difficult question is whether credit risk
models could be used for regulatory capital purposes, just
as banks’ internal models for market risk are now being
used. As should be clear from what I have just said, credit
risk models can have advantages from a prudential point of
view. For this reason, the Committee is conscious of the
need not to impede their development and introduction in
the banking industry. However, there are still serious
obstacles on this road. First, credit risk models come with
substantial statistical and conceptual difficulties. To mention just a few: credit data are sparse, correlations cannot be
easily observed, credit returns are skewed, and, because of
the statistical problems, back testing in order to assess a
model’s output may not be feasible. Clearly, there are
model risks here.
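The flavour of these difficulties can be conveyed with a small simulation based on a standard one-factor default model; this is a generic textbook construction, not any particular bank's model, and every parameter below is illustrative.

# Sketch of why credit losses are hard to model and backtest: a standard
# one-factor default simulation with invented parameters.

import random
from math import sqrt
from statistics import NormalDist, mean

random.seed(7)

N_OBLIGORS = 500   # identical obligors, for simplicity
PD = 0.02          # one-year default probability of each obligor
RHO = 0.20         # asset correlation with the common factor
LGD = 0.45         # loss given default
N_YEARS = 2000     # simulated years; a real backtest has only a handful

threshold = NormalDist().inv_cdf(PD)
losses = []
for _ in range(N_YEARS):
    z = random.gauss(0, 1)  # common (systematic) factor for the year
    defaults = sum(
        1 for _ in range(N_OBLIGORS)
        if sqrt(RHO) * z + sqrt(1 - RHO) * random.gauss(0, 1) < threshold
    )
    losses.append(LGD * defaults / N_OBLIGORS)  # portfolio loss rate

losses.sort()
print(f"mean loss rate:       {mean(losses):.3%}")
print(f"median loss rate:     {losses[len(losses) // 2]:.3%}")
print(f"99th percentile loss: {losses[int(0.99 * len(losses))]:.3%}")

Even with invented parameters, the exercise shows a long right tail: the 99th percentile loss sits far above the mean, and a handful of annual loss observations could never validate an estimate of that tail.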
Second, if models were to be used for regulatory
capital purposes, competitive equality within the banking
industry could be compromised. Because the statistical
assumptions and techniques used differ, it is very likely
that credit risk models’ results are not comparable across
banks. The issue of competitive equality would be complicated
even further by the potential differences in required
capital between banks using models and banks using the
current approach.
Third, and most important, a credit risk model
cannot replace a banker’s judgement. Models do not manage.
A model can only contribute to sound risk management
and should be embedded in it. This leads me to conclude
that if credit risk models are to be used for regulatory
capital purposes, they should not be judged in isolation.
Supervisors should also carefully examine and supervise the
qualitative factors in a bank’s risk management and set
standards for those factors. A possible strategy would be to
start applying models for a number of asset categories for
which the technical difficulties mentioned before are more
or less overcome, while at the same time maintaining the
present—albeit reassessed—Accord for other categories.
This clearly has the advantage of giving an incentive to the
market to develop the models approach further so that the
approach can be applied to all credits. On the other hand,
it might jeopardize transparency.

MARKET RISK AND THE PRECOMMITMENT
APPROACH
Let me now make a short detour and discuss the supervisory treatment of risks other than credit risk. First, market
risk. Although the internal models approach was introduced only recently, research work is going on and possible
alternatives to this approach are being developed. The
Federal Reserve, for instance, has proposed the precommitment approach. Its attractive features are that it incorporates a judgement on the effectiveness of a bank’s risk
management, puts greater emphasis on the incentives for a
bank to avoid losses exceeding the limit it has predetermined, and reduces the regulatory burden. In my opinion,
however, under this approach, too, a bank’s choice of a capital commitment and the quality of its risk management
system still need to be subject to supervisory review. And
there are a number of other issues that are as yet
unsolved—for example, comparability across firms given
that the choice of the precommitment is subjective, the
role of public disclosure, and the supervisory penalties,
which are critical to the viability of the approach. For these

reasons, international supervisors will have to study the
results of the New York Clearing House pilot study
carefully.

OTHER RISKS
Now, let me turn to the other risks. If one leaves aside the
recent amendment with respect to market risks, it is true
that the Capital Accord deals explicitly with credit risk
only. Yet the Accord provides for a capital cushion for
banks, which is meant to absorb more losses than just those
due to credit risks. Therefore, if the capital standards for
credit risk were to be redefined, an issue that cannot be
avoided is how to go about treating the other risks. Awareness of, for instance, operational, legal, and reputational
risks among banks is increasing. Some banks are already
putting substantial effort into data collection and quantification of these risks. This is not surprising. Some new
techniques, such as credit derivatives and securitization
transactions, alleviate credit risk but increase operational
and legal risks, while several cases of banks’ getting into
problems because of fraud-related incidents have led to an
increased attention to reputational risk. Not surprisingly,
then, the Basle Committee will also be considering the
treatment of risks that are at present implicitly covered by
the Accord, such as those just mentioned and possibly
interest rate risk as well.
In this process, it will be important to distinguish
between quantifiable and nonquantifiable risks and their
respective supervisory treatments. More specifically, the
Committee will have to consider whether it should stick to a
single capital standard embracing all risks, including market
risks, or adopt a system of capital standards for particular risks—that is, the quantifiable ones—in combination
with a supervisory review of the remaining risk categories.
From a theoretical point of view, one capital standard
might be preferable, since risks are not additive. Given the
present state of knowledge, however, one all-encompassing
standard for banking risks that takes account of their
interdependencies still seems far away. As the trend thus
far has been toward the development of separate models for
the major quantifiable risks, a system of capital standards
together with a supervisory review of other, nonquantifiable risks seems more likely.
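A small numerical sketch may help fix ideas; the stand-alone capital figures and the correlations are invented, and the square-root-of-the-quadratic-form aggregation is itself only the familiar variance-covariance shortcut, not a recommendation.

# Sketch of why "risks are not additive": adding stand-alone capital numbers
# overstates the total unless all risks always peak together. All figures and
# correlations are invented.

from math import sqrt

standalone = {"credit": 60.0, "market": 25.0, "operational": 15.0}

# Illustrative correlations between the three loss distributions.
correlation = {
    ("credit", "market"): 0.4,
    ("credit", "operational"): 0.2,
    ("market", "operational"): 0.1,
}

def rho(a, b):
    if a == b:
        return 1.0
    return correlation.get((a, b), correlation.get((b, a)))

names = list(standalone)
variance = sum(
    standalone[a] * standalone[b] * rho(a, b) for a in names for b in names
)

print(f"simple sum of stand-alone capital: {sum(standalone.values()):.1f}")
print(f"correlation-adjusted total:        {sqrt(variance):.1f}")

The simple sum overstates the correlation-adjusted figure, which is one reason a single all-encompassing standard is attractive in principle but, given how little is known about the interdependencies, hard to calibrate in practice.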

CONCLUSION
The overall issue of this conference, particularly of this
session, is where capital regulation is heading. In my
address, I have argued that, since supervisory objectives are
unchanged, a reduction in banks’ capital adequacy would
not be desirable. Alterations in the basic framework of the
Capital Accord should not only take into account the
developments in risk measurement techniques as increasingly applied by sophisticated banks, but should also
reflect the worldwide application of the Accord. The Basle
Committee is committed to maintaining the effectiveness
of capital regulation and is willing to consider improvements, where possible. In this regard, the advances made
by market participants in measuring and modeling credit
and other risks are potentially significant. They should be
carefully studied for their applicability to prudential
purposes and might at some point be incorporated into
capital regulation. But before we reach that stage, there are
still formidable obstacles to be overcome.
Thank you.



Risk Management: One Institution’s
Experience
Thomas G. Labrecque

I am very pleased to be part of this forward-looking conference on developments in capital regulation. Because the
purpose of capital is to support risk, I decided to approach
this session from the viewpoint of someone leading an
institution that depends, for its success or failure, on how
well it manages risk. My plan is to take you through my
experiences at Chase Manhattan Corporation and to close
with some thoughts on the implications of these experiences for capital regulation in the twenty-first century.
What I am going to describe to you is a dynamic
approach to risk management, though not a perfect one.
We continually make improvements, and we need to.
Nevertheless, if I look back on the last six months—and
the Asian crisis that has dominated this period—I would
argue that never during this time did I feel that we had
failed to understand the risks we were facing. In addition,
I feel fairly confident that our regulators have a reasonably
good understanding of the systems we use, and that, in the
event of a crisis, these regulators would have access to daily
information if they needed it.
Let me speak for a minute about market risk.
There has been considerable discussion at this conference about the
limitations of the value-at-risk approach to risk measurement. This approach
is, of course, imperfect: it is built on the same kinds of assumptions that we
all use routinely in our work.

Thomas G. Labrecque is the president and chief operating officer of Chase
Manhattan Corporation.

In my view, value at risk is important, but it cannot stand alone. At Chase, we calculate our exposure to
market risk by using both a value-at-risk system and a
stress-test system. These systems apply to both the mark-to-market portfolio and the accrual portfolios. We use this
combination of approaches to set limits on the risks we
undertake and to assign capital to cover our exposures.
We came into 1997 with five stress-test scenarios
built into our systems: the October 1987 stock market
crash, the 1992 exchange rate mechanism crisis, the March
1994 bond market sell-off, the December 1994 peso crisis,
and a hypothetical flight-to-quality scenario. We are currently expanding this set of scenarios to include four new
prospective scenarios. In developing at least three of these
four, we will have to use our judgment to predict how currencies, interest rates, and markets would be affected. By
contrast, in the case of four of the five scenarios now in use,
we already know the outcome.
Our risk limits in 1997, and certainly into early
1998, have been set by assessing our risks against these
stress scenarios and the value-at-risk system. In fact, in the
last year, the balance between the two approaches to risk


management has probably moved more to the center. In
any case, this combination of approaches has enabled us to
manage market risk successfully.
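In spirit, the combination can be sketched as follows; the profit-and-loss history, the scenario loss figures, and the limit rule are all invented for illustration and are not Chase's actual systems or numbers.

# Stylized sketch of using value at risk and stress scenarios side by side to
# size a market risk limit. All figures below are invented.

import random

random.seed(1)

# Hypothetical one-day trading profit-and-loss history, in millions.
pnl_history = [random.gauss(0.5, 6.0) for _ in range(500)]

def historical_var(pnl, confidence=0.99):
    """Loss not exceeded on `confidence` of past days (historical simulation)."""
    ordered = sorted(pnl)  # worst outcomes first
    index = int((1.0 - confidence) * len(ordered))
    return -ordered[index]

# Hypothetical portfolio losses under named stress scenarios, in millions.
stress_losses = {
    "October 1987 stock market crash": 42.0,
    "1992 exchange rate mechanism crisis": 23.0,
    "March 1994 bond market sell-off": 31.0,
    "December 1994 peso crisis": 18.0,
    "hypothetical flight to quality": 37.0,
}

var_99 = historical_var(pnl_history)
worst_stress = max(stress_losses.values())

print(f"one-day 99 percent VaR:  {var_99:6.1f}")
print(f"worst stress-test loss:  {worst_stress:6.1f}")
# One possible limit rule: hold a cushion for the larger of the two figures.
print(f"indicative risk limit:   {max(var_99, worst_stress):6.1f}")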
Now, turning briefly to credit risk, let me review
how our institution handles it. First, at Chase, we monitor
individual transactions from several angles. We examine
not only how the transaction is structured but also how it
measures up against our lending standards. In this regard,
an independent risk-rating process for applying and verifying risk ratings—one that is entirely independent of the
units that actually carry out the bank’s business—is an
essential part of the credit review process at Chase. We also
decide, at the time of the transaction, which credits we
plan to hold in our portfolio and which we plan to sell into
the market. Finally, we determine the contribution that
each transaction makes to the overall risk of the portfolio
because that contribution forms the basis of the capital
allocation process.
Second, we identify and control credit risk by looking carefully at portfolio concentrations. Many of the crises
of the 1980s—the real estate crisis, the savings and loan
failures, the debt buildup in developing countries—can be
traced to a failure to monitor portfolio concentrations.
Recognizing these concentrations—for instance, by industry
or by country—is a key element of understanding the true
risks of the credit portfolio.
Institutions should track these concentrations as
part of a dynamic approach to managing their portfolios.
Dynamic portfolio management involves changing exposures to various risk categories through securitization,
sell-downs, syndication, and other means, while continuing
to serve your good clients.
At Chase, such dynamic management of concentrations in the portfolio is an important aspect of our
overall risk management strategy. We’ve found that it
brings results: for instance, because of our attention to
portfolio concentrations, Chase did not have finance
company risk in Korea in 1997. That was not an accident.
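Mechanically, the tracking can be as simple as the sketch below; the exposures and the 25 percent policy limit are invented.

# Sketch of tracking credit concentrations by country and by industry and
# flagging any share above a policy limit. All entries are invented.

from collections import defaultdict

# (borrower, country, industry, exposure in millions)
exposures = [
    ("A", "Korea",    "finance companies", 120.0),
    ("B", "Korea",    "electronics",       340.0),
    ("C", "Thailand", "real estate",       210.0),
    ("D", "Brazil",   "utilities",         180.0),
    ("E", "Thailand", "finance companies",  90.0),
    ("F", "Germany",  "autos",             260.0),
]

CONCENTRATION_LIMIT = 0.25  # no single country or industry above 25 percent

def shares(column):
    totals = defaultdict(float)
    for row in exposures:
        totals[row[column]] += row[3]
    grand_total = sum(totals.values())
    return {key: amount / grand_total for key, amount in totals.items()}

for label, column in (("country", 1), ("industry", 2)):
    for key, share in sorted(shares(column).items(), key=lambda kv: -kv[1]):
        flag = "  <-- above limit" if share > CONCENTRATION_LIMIT else ""
        print(f"{label:8s} {key:18s} {share:5.1%}{flag}")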
Third, we control risk by applying stress testing
to our credit portfolio. Although the stress tests are not
perfect, they do provide important guidance. For example,
in the early stages of the Asian crisis, we ran a simulation


in which we took the Asian segment of our portfolio and
lowered the ratings of every credit by two grades. Then,
by using historical data on nonperforming credits and
charge-offs, we estimated how much of our Asian portfolio,
in a two-grade drop, would be identified as nonperforming
and how much would be charged off. Again, although the
stress-testing approach has its limits, it was helpful in
assessing our institutional risks.
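The arithmetic of that exercise can be sketched as follows; the internal rating scale, the historical nonperforming and charge-off rates, and the exposures are all invented.

# Sketch of a two-grade downgrade stress test of the kind described above.
# The rating scale, loss rates, and exposures are invented.

GRADES = ["1", "2", "3", "4", "5", "6", "7", "8"]  # 1 = best, 8 = worst

# Hypothetical historical experience by grade.
NONPERFORMING_RATE = {"1": 0.001, "2": 0.002, "3": 0.005, "4": 0.012,
                      "5": 0.030, "6": 0.080, "7": 0.200, "8": 0.450}
CHARGE_OFF_RATE = {"1": 0.000, "2": 0.001, "3": 0.002, "4": 0.006,
                   "5": 0.015, "6": 0.040, "7": 0.110, "8": 0.280}

# Hypothetical Asian exposures, in millions, by current internal grade.
portfolio = {"3": 800.0, "4": 1200.0, "5": 900.0, "6": 400.0}

def downgrade(grade, notches=2):
    """Move a credit down the scale, capped at the worst grade."""
    return GRADES[min(GRADES.index(grade) + notches, len(GRADES) - 1)]

nonperforming = 0.0
charge_offs = 0.0
for grade, exposure in portfolio.items():
    stressed = downgrade(grade)
    nonperforming += exposure * NONPERFORMING_RATE[stressed]
    charge_offs += exposure * CHARGE_OFF_RATE[stressed]

print(f"estimated nonperforming after a two-grade drop: {nonperforming:7.1f}")
print(f"estimated charge-offs after a two-grade drop:   {charge_offs:7.1f}")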
A fourth way in which we manage credit risk is to
review our customers on a real-time basis. It is especially
important in an environment of crisis—such as the current
financial turmoil in Asia—to look at every customer carefully. In this way, we have an evolving customer-bycustomer view of our risk exposures, as well as an evolving
stress-test view of our risks.
Moving on, let’s consider how institutions can
manage operating risks. Anyone who has been in this
business as long as I have—and it is probably longer than
you imagine—knows that payments system operating
risks are crucial. Institutions must pay attention to the
condition of their counterparties and to changes in the
patterns of clearing activity. They should also regularly
review the suitability of their intraday bilateral limits. In
this regard, I would argue that the world’s clearing systems
and, most important, the New York Clearing House and
the Clearing House Interbank Payments System [CHIPS]
have worked with incredible efficiency and effectiveness
to manage the operating risks that have arisen during the
last six months.
Now, let’s turn our attention to management oversight. Considerable responsibility for the sound operation
of an institution rests with the management. Having a
range of risk-monitoring systems is important, but if the
findings of these systems are not relayed to management,
then the systems will be of limited use. At Chase, market
risk information is made available daily—not only to the
traders but also to managers at the highest levels—the
business manager, the head of capital markets, Walter
Shipley (chairman and chief executive officer of Chase), and
me. These daily reports are used to assess current risk control strategies and to develop an appropriate limit structure
for the institution.

Similarly, information relating to credit risk goes
to the business manager, to the head of the global division,
to the corporate credit policy division, and to Walter and
me. Information bearing on operating risk and payments
system risk is reviewed by the payments system manager,
the head of Chase Technology Services, the head of credit
for institutional clients, and Walter and me.
In addition to reviewing the risk estimates provided by the business units, the senior officers of an
institution also need an independent risk management
unit. At Chase, this group runs the models and the management information systems, tests the models, works on
the theory underlying the models, and gives us an entirely
independent view of what we are doing every day.
As part of our approach to risk control, Walter and
I routinely begin the week with two meetings: one is to
review market risk, and the other is to assess credit and
underwriting risk as well as current developments. Because
of the events in Asia in recent months, we have held these
meetings even more frequently—in fact, on a daily basis
during some periods. In addition, each night we have
reports on every market risk item on our desks.
The careful identification and analysis of risk are,
however, only useful insofar as they lead to a capital allocation system that recognizes different degrees of risk and
includes all elements of risk. At Chase, each business is
allocated capital on the basis of the different types of risk
it assumes—market risk, credit risk, and operating risk—
and for the good will and other intangible assets it creates.
Finally, we have added to these capital allocations a balance
sheet tax for assets and for stand-by letters of credit—two
measures that have not proved entirely popular.
The rationale for our procedures is that once we
have characterized our risks, we want to make sure that we
have allocated capital in accordance with these risks. In
addition, we want to make sure that the returns we get
from our businesses are commensurate with the risks we
are actually taking.
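In bookkeeping terms, the allocation described above can be sketched as follows; the business lines, coefficients, and figures are invented, and the hurdle rate is included only to show how allocated capital feeds the return comparison.

# Bookkeeping sketch of a business-level capital allocation: capital for
# market, credit, and operating risk, plus goodwill and balance-sheet charges.
# All figures and rates are invented.

businesses = {
    "global markets":    dict(market=900, credit=300, operating=150,
                              goodwill=0, assets=20000, standbys=0),
    "corporate lending": dict(market=100, credit=1200, operating=200,
                              goodwill=250, assets=45000, standbys=6000),
    "payments/custody":  dict(market=20, credit=80, operating=400,
                              goodwill=400, assets=8000, standbys=0),
}

BALANCE_SHEET_TAX = 0.002  # per unit of assets (invented rate)
STANDBY_TAX = 0.004        # per unit of stand-by letters of credit (invented)
HURDLE_RATE = 0.15         # required return on allocated capital (invented)

for name, b in businesses.items():
    allocated = (b["market"] + b["credit"] + b["operating"] + b["goodwill"]
                 + BALANCE_SHEET_TAX * b["assets"]
                 + STANDBY_TAX * b["standbys"])
    required_earnings = HURDLE_RATE * allocated
    print(f"{name:18s} allocated capital {allocated:7.1f}  "
          f"required earnings {required_earnings:7.1f}")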
What are the implications of our experience for
regulators? First, it would be unwise to develop regulations
that place inflexible restrictions on detailed aspects of our
businesses. Banking is a very dynamic business, and regulation must be flexible enough to fit the institutions that
are being examined.
Second, regulators should be very comfortable
with the risk models used by each bank. In evaluating an
internal model, regulators should adopt four criteria:
Does the model closely mirror the markets? Is the complexity of the model (or of the combination of models
used by the bank) commensurate with the institution’s
business and level of complexity? Does the model truly
differentiate among various degrees of risk? Can the
model be adapted to accommodate new products and new
business, and, if so, is the review process for new products
and services a sound one?
Third, regulators should examine an institution’s
capital allocation system for how closely it mimics markets
and how well it differentiates risk.
If regulators follow these suggestions, then it
should be easy to determine whether institutions are successfully managing their exposures or exceeding their risk
limits. It should also be easy to check the returns on the
risk-adjusted capital applied.
In closing, I would like to return for a moment to
a theme raised in the conference’s keynote address. Alan
Greenspan remarked that our major banks use the probability of insolvency as the measure of institutional soundness for their internal risk assessments. It might be helpful,
then, to identify some early warning signals of insolvency.
In this connection, I recommend that supervisors monitor
more carefully the level of subordinated debt issued by
banks. Under what market conditions is the debt issued?
How is the debt priced? How does the market react to the
issue? How does the issue subsequently trade? At Chase,
we are already attempting to implement this kind of
review with our clients.
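One simple way a supervisor might operationalize such monitoring is sketched below; the yields, dates, and the 50 basis point alert threshold are invented.

# Sketch of tracking a bank's subordinated debt spread over a benchmark as an
# early warning signal. All observations are invented.

observations = [
    # (date, subordinated debt yield, benchmark Treasury yield)
    ("issue date", 0.0710, 0.0580),
    ("+1 month",   0.0718, 0.0578),
    ("+3 months",  0.0745, 0.0575),
    ("+6 months",  0.0810, 0.0570),
]

ALERT_WIDENING = 0.0050  # flag a 50 basis point widening from issue (invented)

issue_spread = observations[0][1] - observations[0][2]
for date, sub_yield, treasury_yield in observations:
    spread = sub_yield - treasury_yield
    flag = "  <-- review" if spread - issue_spread > ALERT_WIDENING else ""
    print(f"{date:12s} spread over benchmark {spread * 10000:5.0f} bp{flag}")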
Another early warning signal might become available with the adoption of private-sector deposit insurance.
I have thought long and hard about this issue over the
years and can make a good case for private-sector deposit
insurance. I would argue that if an institution were to buy
commercially the first 5 percent of its insurance coverage
on deposits (in the United States, this would mean that
the Federal Deposit Insurance Corporation would be


responsible for the remaining 95 percent), observers could
learn a great deal about the soundness of that institution
from the pricing of the insurance.
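A back-of-the-envelope sketch shows why the price of that first-loss layer would be informative; the failure probabilities and loss severities are invented, and pricing is simplified to expected loss with no risk premium.

# Back-of-the-envelope sketch of pricing a privately written first-loss layer
# (the first 5 percent of insured deposits). All inputs are invented and the
# premium is simplified to expected loss.

DEPOSITS = 100_000.0     # insured deposits, in millions (invented)
FIRST_LOSS_SHARE = 0.05  # portion of coverage bought commercially

banks = {
    # name: (annual failure probability, loss to depositors given failure)
    "well-capitalized bank": (0.001, 0.03),
    "weaker bank":           (0.020, 0.10),
}

layer_size = FIRST_LOSS_SHARE * DEPOSITS
for name, (p_fail, severity) in banks.items():
    # The private layer absorbs the first losses, up to its own size.
    loss_given_failure = min(severity * DEPOSITS, layer_size)
    expected_loss = p_fail * loss_given_failure
    premium_rate = expected_loss / layer_size
    print(f"{name:22s} indicative premium: {premium_rate:.3%} of the layer")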
What I have given you today is the view of a
practitioner, one who seeks to identify and control risks
that could undermine the first-class institution he manages. My experience suggests that regulators should seek
dynamic, rather than static, solutions to the problems of

risk management and capital adequacy—solutions that
reflect the diversity of the regulated institutions and the
rapid changes in the structure, products, and risk control
practices of the financial industry. If regulators look carefully at the risks assumed by each institution and the models
each institution uses to calculate its exposure, then I am confident that they can determine the right capital positions.
Thank you all very much.

