At the International Center for Business Information’s Risk Management 2004
Conference, Geneva, Switzerland
December 7, 2004

It’s Not Just about the Models: Recognizing the Importance of Qualitative Factors
in an Effective Risk-Management Process
Good afternoon. I am delighted to join you today. I have spent much of my career in the
field of risk management, and of course the Federal Reserve has a keen interest in this topic.
For those of us who have spent more than a few years in the business, it is easy to see the
recent progress in the quantitative or scientific aspects of risk management brought about by
improved databases and technological advances. These increased capabilities have opened
doors and minds to new ways of measuring and managing risk.
These advances have made possible the development of new markets and products that are
widely relied upon by both financial and nonfinancial firms, and that in turn have helped to
promote the adoption of the best risk measurement and management practices. They have
also made the practice of risk management far more sophisticated and complex. These
changes have come about because of better risk-measurement techniques and have the
potential, I believe, to substantially improve the efficiency of U.S. and world financial
markets.
Although the importance of the quantitative aspects of risk measurement may be quite
apparent--at least to practitioners of the art--the importance of the qualitative aspects may
be less so. In practice, though, these qualitative aspects are no less important to the
successful operation of a business--as events continue to demonstrate. Some qualitative
factors--such as experience and judgment--affect what one does with model results. It is
important that we not let models make the decisions, that we keep in mind that they are just
tools, because in many cases it is management experience--aided by models to be sure--that
helps to limit losses. Some qualitative factors affect what the models and risk measures say,
including the parameters and modeling assumptions used as well as choices that relate to the
characteristics of the underlying data and the historical period from which the data are
drawn.
In my comments today, I will address three topics. First, sound risk management is more
than technical skill in building internal models. Models of risk need to be integrated into a
robust enterprise-wide program that encompasses even line management's routine business
practices. Second, regular testing of "data integrity" in its broadest sense as it relates to these
risk measurement and management processes is essential to the effectiveness of these
processes. Finally, I want to raise some issues around accounting and disclosure of risk.
If there is a single theme to my remarks today, it is simply this: Keep improving, refining,
and innovating in risk management. Although Basel II is a remarkable achievement, and the
subject of this conference, don't let your best practices be limited by what Basel II does or
requires. Indeed, one of the more desirable aspects of Basel II is that it anticipates that it can
evolve with best industry practices without creating a new framework. The designers did not
intend to build a straitjacket, and the policymakers have insisted on this characteristic,
which my colleague Vice Chairman Ferguson has referred to as Basel II's "evergreen"
aspect.
Enterprise-wide risk management
Within financial institutions, the better and greater focus on risk measurement has helped to
bridge the gap between the perspective of the traditional credit risk officer and that of the
"quant." This is no small accomplishment, and a significant advance in credit culture. If you
will pardon my use of stereotypes, historically the credit risk officer has been the fellow who
always says "no" because it is the conservative thing to do, while the financial modeler has
been the proponent of active-trading strategies that fulfill the promise of a data-mined
efficient frontier that has been estimated to several decimal places of precision. The logic of
risk and return in a competitive marketplace, as measured by return on economic capital that
is founded in empirical analysis, has provided both sides with a common language and set of
standards. It is not unusual anymore to hear chief credit officers describe their appetite for
risk in terms of risk-adjusted return on capital (RAROC) or monitor current spreads on
credit default swaps to look for market signals on borrower credit quality. With better credit
cultures and improved tools, institutions can also measure and evaluate more objectively the
results of their business strategies and use that information to enhance future performance.
They can decide how much risk to take, rather than letting their risk profile be the
consequence of other decisions.
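To make that RAROC vocabulary concrete, here is a minimal sketch; the figures and the deliberately simplified formula are hypothetical, and actual implementations differ in how they define revenue, expected loss, and economic capital.
```python
# Illustrative RAROC calculation with hypothetical figures and a simplified formula:
# RAROC = (net revenue - expected loss - operating cost) / economic capital

def raroc(net_revenue: float, expected_loss: float,
          operating_cost: float, economic_capital: float) -> float:
    """Risk-adjusted return on capital for a single exposure or portfolio."""
    return (net_revenue - expected_loss - operating_cost) / economic_capital

# A hypothetical $10 million loan earning a 250 basis point spread:
net_revenue      = 10_000_000 * 0.025   # spread income
expected_loss    = 10_000_000 * 0.010   # 1.0% expected loss rate
operating_cost   = 50_000
economic_capital = 10_000_000 * 0.06    # 6% internal capital allocation

print(f"RAROC: {raroc(net_revenue, expected_loss, operating_cost, economic_capital):.1%}")
# -> RAROC: 16.7%
```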
The evolution of interest rate risk management in the United States is a great illustration of
how an enterprise-wide approach can help institutions customize products that better serve
customers, set prices to reflect risk exposures and attain profit targets, and ensure that
corporate earnings contributions are met. Thirty years ago, bankers who were used to taking
fixed-rate deposits--capped under the old Regulation Q ceilings--and making fixed-rate term
loans, found the cost of their deposits rising with the market after short-term rates rose
dramatically late in 1979. Financial institutions found that, to meet market interest rates,
they had to pay higher rates of interest on deposits than they were receiving on loans. As a
banker, I went through that period in 1980 when the popular new six-month CDs that were
booked in March, at annualized interest rates of around 15 percent, were funding loans at a
negative carry when the prime rate fell to 11 percent by August. The roller coaster continued
as lower CD rates in the second half of 1980 were funding loans at a prime rate of more than
20 percent by January 1981.
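The arithmetic of that squeeze is simple; the sketch below uses the approximate rates just mentioned (the late-1980 CD rate is a hypothetical stand-in for "lower CD rates") and ignores fees, reserve requirements, and compounding.
```python
# Approximate carry on a loan funded by a six-month CD.  Simplified: ignores
# fees, reserve requirements, and compounding; the 10% CD rate for late 1980
# is a hypothetical stand-in for the "lower CD rates" mentioned above.
def carry(loan_rate: float, funding_rate: float) -> float:
    return loan_rate - funding_rate

# March 1980 CDs at ~15% funding prime-rate loans that fall to 11% by August:
print(f"Carry, August 1980:  {carry(0.11, 0.15):+.0%}")   # -> -4%

# Cheaper CDs in late 1980 funding loans at a 20%+ prime by January 1981:
print(f"Carry, January 1981: {carry(0.20, 0.10):+.0%}")   # -> +10%
```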
One of the first challenges bankers faced in this environment was developing the information
and analytical systems needed to manage the institution's overall interest rate sensitivity. So,
in the early 1980s, taking advantage of the newly emerging computer technology and
software, they developed asset-liability management models that integrated information on
deposit and loan repricing. Further, the management committees responsible for interest rate
risk changed. Instead of committees that included only management from the funding-desk
and investment-portfolio management, a new group was created--the Asset/Liability
Committee, or ALCO. This committee included the old finance committee members and
new ALCO staff but also, most important, added business-line managers responsible for
major corporate and retail banking activities. For the first time, pricing of loans and deposits
was moved from the silos of business-line management, recognizing that the enterprise as a
whole had to coordinate balance sheet usage in order to maintain the net interest margin
around a targeted level. While this enterprise-wide approach to market risk evolved at
varying paces in different depository institutions, by the latter part of the 1980s all of the
basic elements were in place and, by the 1990s, the process had matured. The ALCO
process is now widely recognized as a critical element in both management and board
governance processes.
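At the heart of those early asset-liability models was the repricing-gap report: bucket assets and liabilities by the time until their rates reset and estimate how a rate move changes net interest income. A minimal sketch of that idea, using hypothetical balance sheet figures, looks like this:
```python
# Minimal repricing-gap sketch of the kind early ALCO models formalized.
# Figures are hypothetical; a real model also handles prepayments, rate caps,
# and the behavior of non-maturity deposits.

buckets = ["0-3m", "3-12m", "1-5y", ">5y"]
rate_sensitive_assets      = [300, 200, 350, 150]   # $ millions repricing in each bucket
rate_sensitive_liabilities = [450, 250, 200, 100]

gaps = [a - l for a, l in zip(rate_sensitive_assets, rate_sensitive_liabilities)]
cumulative_one_year_gap = sum(gaps[:2])             # 0-12 month repricing gap

# First-order effect of a +100 bp parallel rate move on one-year net interest income.
delta_nii = cumulative_one_year_gap * 0.01

for b, g in zip(buckets, gaps):
    print(f"{b:>5}: gap {g:+}")
print(f"Cumulative 1-year gap: {cumulative_one_year_gap:+} -> "
      f"NII change for +100 bp: {delta_nii:+.1f} ($ millions)")
```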
This discipline emerged not only because of better asset-liability risk measures, but also
because these new risk measurement and management techniques and computer technology
facilitated rapid innovation in financial instruments. The industry turned to new
securitization techniques to pool mortgages and remove the interest rate risk from balance
sheets, techniques that eventually expanded to other loan types as well. Interest rate
derivatives, structured investment securities, and callable debt have allowed financial
institutions to meet customer demands more effectively while managing the liquidity and
interest rate risk exposures those relationships entail. Enterprise-wide market-risk
management became a value-added activity and has been widely accepted as a critical
element of the governance and strategic processes at financial institutions.
The example I presented of how effective asset-liability management committees and
processes can support business-line strategies as well as governance is intended to illustrate
that effective enterprise-wide risk-management processes are not built just to comply with
regulations, such as banking regulations or Sarbanes-Oxley requirements in the United
States. Rather, these processes can add value when they become an integral part of both
strategic and tactical business-decision processes.
Corporate strategies often focus on the "most likely" future scenario and the benefits of a
strategic initiative. A sound governance, risk-management, and internal-control environment
starts by stretching the strategic planning exercise to consider alternative outcomes. That is,
while the strategy is being developed, management and the board should consider how risk
exposures will change as part of the planning process. Then, appropriate controls can be
built into the process design, the costs of errors and rework in the initial rollout can be
reduced, and the ongoing initiative can be more successful because monitoring processes can
signal when activities and results are missing their intended goals and corrective actions can
be initiated more promptly.
According to a global survey of governance at financial institutions conducted by
PricewaterhouseCoopers (and reported in April), one of the reasons financial institutions are
not making the grade is that they equate effective governance with meeting the demands of
regulators and legislators, without recognizing that sound governance is also good for
business.1 In other words, they tend to look at this as another compliance exercise. The
study goes on to state that this compliance mentality is limiting these institutions' ability to
achieve strategic advantages through governance.
I agree that any institution that views corporate governance as merely a compliance exercise
is missing the mark. Over the years, corporate managers have learned that focusing on better
process management and quality can enhance financial returns and customer satisfaction.
They have learned that correcting errors, downtime in critical systems, and undertraining of
staff all result in higher costs and lost revenue opportunities. I challenge you to consider the
corporate governance structure appropriate to your bank's unique business strategy and scale
as an important investment, and to consider returns on that investment in terms of the
avoidance of the costs of poor internal controls and of customer dissatisfaction.
As you know, once an organization gets lax in its approach to corporate governance,
problems tend to follow. We have some experience in that regard. Some of you may recall
the time and attention that management of U.S. banks devoted to section 112 of the Federal
Deposit Insurance Corporation Improvement Act, which first required bank management
reports on internal controls and auditor attestations in the early 1990s. Then the process
became routine, delegated to lower levels of management and unresponsive to changes in
the way the business was being run. Unfortunately, for organizations with weak governance,
trying to change the culture again to meet Sarbanes-Oxley requirements is taking an
exceptional amount of senior management and directors' time--time taken away from
building the business. The challenge, therefore, is not only to achieve the proper control
environment at one point in time, but also to maintain that discipline and, indeed, ensure that
corporate governance keeps pace with the changing risks that you will face in the coming
years.
One weakness we have seen is the delegation by management of both the development and
the assessment of the internal-control structure to the same risk-management, internal-control,
or compliance group. It is important to emphasize that line management has the
responsibility for identifying risks and ensuring that the mitigating controls are effective--and
to leave the assessments to a group that is independent of that line organization. Managers
should be expected to evaluate the risks and controls within their scope of authority at least
annually and to report the results of this process to the chief risk officer and the audit
committee of the board of directors. An independent group, such as internal audit, should
perform a separate assessment to confirm management's assessment.
Credit risk management
The evolution of a portfolio approach to credit risk management has followed a path similar
to that of asset-liability management. It began in the late 1980s, in the aftermath of serious
credit-quality deterioration. Models and databases on defaults and credit spreads have since
become more sophisticated, and loan review committees have evolved into committees that
consider more broadly the various aspects of portfolio risk management. As a result, loans
are priced better to reflect their varying levels of risk, they are syndicated and securitized to
mitigate lenders' risk, and credit derivatives have been created to limit credit-risk exposures
that are retained.
On that last topic, I would be remiss not to draw your attention to a recent consultative
paper on credit-risk transfer issued by the Joint Forum in October. This consultative paper
focuses on credit derivatives and related transactions--themselves an outgrowth of better
risk measurement--and is open to public comment through January 2005. It documents the
remarkable growth and innovation in these credit products, with aggregate notional value of
$2.3 trillion and with about 1,200 regularly traded reference entities or "names." The paper
emphasizes that much more growth is likely, because these products are still in the early
phases of their life cycle, and that the most important issue now facing market participants is
the continuing development of their risk measurement and management capabilities.
In that context, the consultative paper responds to three questions: whether the transactions
accomplish a clean transfer of risk, whether participants understand the risks involved, and
whether undue concentrations of risk are developing. Overall, the paper offers seventeen
specific recommendations for improving the practice and supervisory oversight of credit risk
transfer activity, drawing heavily on discussions with sophisticated market participants. I
encourage you to review this paper and consider its recommendations seriously. Let me
discuss two of them briefly.
One recommendation is that firms understand fully and apply discipline to their credit
models in order to ensure quality and manage the usage of these models appropriately.
Correlation assumptions receive special attention here, including the growing presence of
"correlation trading desks" and the observation from several market participants that there
may be too much commonality in these assumptions across market participants. Insufficient
diversity in views could lead to the kind of turmoil that occurred in markets for longer-term
Treasury instruments in mid-2003. In that case, unexpected increases in rates led a large
number of similarly positioned financial institutions to seek to take the same side of
transactions simultaneously.
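To see why correlation assumptions deserve that attention, consider a toy one-factor simulation, offered only as an illustration and not as a method endorsed by the paper: two borrowers with identical default probabilities can default together far more often under a higher assumed asset correlation.
```python
# Toy one-factor Gaussian copula: how the assumed asset correlation changes
# the probability that two borrowers, each with a 2% PD, default together.
# Illustrative sketch only; parameters are hypothetical.
import math
import random
from statistics import NormalDist

random.seed(0)

def joint_default_rate(pd: float, rho: float, trials: int = 100_000) -> float:
    threshold = NormalDist().inv_cdf(pd)             # latent-variable default barrier
    joint = 0
    for _ in range(trials):
        m = random.gauss(0, 1)                       # common (systematic) factor
        x1 = math.sqrt(rho) * m + math.sqrt(1 - rho) * random.gauss(0, 1)
        x2 = math.sqrt(rho) * m + math.sqrt(1 - rho) * random.gauss(0, 1)
        joint += (x1 < threshold) and (x2 < threshold)
    return joint / trials

for rho in (0.0, 0.2, 0.5):
    print(f"rho={rho:.1f}: joint default probability ~ {joint_default_rate(0.02, rho):.4f}")
# Independence gives roughly 0.02 * 0.02 = 0.0004; higher assumed correlation
# raises the joint default probability severalfold.
```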
Another recommendation is that participants properly understand the economic meaning of
external ratings that are applied to credit risk transfer instruments, especially collateralized
debt obligations or CDOs--as compared with ratings given to more traditional obligations.
Identical ratings across different types of instruments do not guarantee identical risk
characteristics, and in particular may imply equal probability of a loss event but unequal
severity of loss. It is important to understand both the specific methodology used by the
rating agency--which the agencies make available in extensive detail, including how default
correlation is addressed--and the structure of a specific transaction in order to properly
assess its contribution to a portfolio's risk profile.
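A stylized example, with hypothetical figures, of why identical ratings need not mean identical risk: two instruments with the same probability of a loss event can differ sharply in loss given default and therefore in expected loss.
```python
# Same rating (read here as the same probability of a loss event), very
# different severity: a stylized comparison of a corporate bond and a thin
# CDO tranche.  Figures are hypothetical and for illustration only.
def expected_loss(pd: float, lgd: float) -> float:
    return pd * lgd

bond_el    = expected_loss(pd=0.005, lgd=0.40)   # senior unsecured bond
tranche_el = expected_loss(pd=0.005, lgd=0.90)   # thin mezzanine CDO tranche

print(f"Bond expected loss:    {bond_el:.3%}")    # -> 0.200%
print(f"Tranche expected loss: {tranche_el:.3%}") # -> 0.450%
```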
Data integrity, broadly defined
Even the best of processes suffers if the data used to measure risk and performance are
flawed. In understanding the drivers of good risk management, qualitative factors are a
critical influence on the reliability and characteristics of the "data" used to evaluate risk and
performance. In this broader sense, "data integrity" can refer not only to the consistency,
accuracy and appropriateness of the information in the data base and model, but also to the
processes that produce and utilize these measures. Used this way, "data integrity" includes
the quality of credit files, tracking of key customer characteristics, internal processes and
controls, and even the training that supports them all.
When one says "data integrity" in risk-management circles these days, most people think of
the qualifying standards for the internal-ratings-based approaches to credit risk capital under
Basel II. I think it is a broader concept, so let me spend a moment on that subject. The
proposed timetable for U.S. implementation of Basel II reflects the requirements of our
rulemaking processes and the need for banks and supervisors to prepare for the introduction
of the new standards. As you probably know, the U.S. banking agencies envision formal
release of the proposed regulations in mid-2005, with parallel running of Basel I and Basel II
in 2007 and full implementation in 2008. Even on that timetable, the regulatory community
recognizes that substantial data limitations may prevent banks from developing viable and
robust parameter estimates in the near term--even for probability of default in some cases.
For this reason, both banks and their supervisors will have to wait while data accumulate
before banks can estimate and validate parameter inputs in a reliable, robust manner.
In the interim, banks and supervisors will have to rely heavily on qualitative validation
approaches--although not entirely. Supervisors across countries are working together to
address validation issues, and, I believe, will develop useful guidelines for banks and
supervisors alike. In the early years, more weight may need to be placed on qualitative
reviews of a bank's internal policies and procedures, including its internal validation and
documentation. But we expect that, soon after implementation, banks should have the ability
to generate the needed parameters from actual data, and supervisors will want to see positive
steps being taken by banking organizations to develop good databases to provide the sort of
data integrity I am discussing. Qualitative and quantitative benchmarking studies, which
compare methodologies and parameter estimates across banks, will be important tools for
validation and for encouraging the diffusion of best practices throughout the industry during
both the initial, more qualitative interval and the later, more quantitative one.
But, as I noted, high-quality data are important for strong risk management, and not just for
Basel II. Data are needed for other models and risk measures used in financial services,
including credit scoring models, market-based measures such as KMV, and value-at-risk and
other economic capital models. As you know, these economic capital models are a key
element of Pillar 2. The broader concept of data integrity also applies to the development
and maintenance of well-controlled processes including those that measure risk and
performance. If the environment in which the models operate is not appropriate--if an
institution considers internal controls just to be a checklist--its risk measures will not provide
the performance it hopes to achieve.
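As one illustration of that dependence on data, a historical-simulation value-at-risk estimate produces whatever the history fed into it implies; the sketch below, built on hypothetical profit-and-loss figures, shows how directly flawed or unrepresentative data would flow through to the risk measure.
```python
# Minimal historical-simulation VaR sketch: the estimate is only as good as
# the profit-and-loss history behind it.  All figures are hypothetical.
import random

def historical_var(pnl_history: list[float], confidence: float = 0.99) -> float:
    """Loss level exceeded on roughly (1 - confidence) of historical days."""
    losses = sorted(-p for p in pnl_history)          # losses as positive numbers
    index = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[index]

random.seed(1)
pnl = [random.gauss(0.1, 2.0) for _ in range(500)]    # 500 hypothetical daily P&Ls ($ millions)
print(f"1-day 99% VaR: {historical_var(pnl):.2f} ($ millions)")
```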
Accounting, disclosure, and market discipline
Strong risk measurement and disciplined maintenance of data also improve the
communication between the institution and its investors and counterparties. This sense of
data integrity relates just as well to the information provided to these parties. I would like,
now, to turn to some of the recent accounting issues surrounding complex instruments and
the role of financial disclosure in promoting risk management.
Some of you may have experienced earnings volatility resulting from the use of credit
derivatives. Under U.S. generally accepted accounting principles, credit derivatives are
generally required to be recognized as an asset or liability and measured at fair value, and
the gain or loss resulting from the change in fair value must be recorded in earnings. Most
credit derivatives do not qualify for hedge accounting treatment, implying greater earnings
volatility if the hedged portfolio or securities are carried at historic cost in their banking
book. As a bank supervisor, I am concerned if the accounting treatment discourages the use
of new risk-management financial instruments.
You may be wondering if the answer to this volatility issue is fair value accounting. If the
hedged asset were measured at fair value, the changes in values of the hedged item and the
credit derivative would offset each other, reducing the volatility that arises when only the
derivative is marked to market, depending of course on the effectiveness of the hedge. But
some volatility is likely to remain, since it is the lack of close correlation that prevents hedge
accounting treatment.
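A stylized example with hypothetical numbers illustrates both points: marking both sides to market removes most of the earnings swing, but an imperfect hedge still leaves a residual.
```python
# Stylized earnings impact of a credit-derivative hedge under two treatments.
# Numbers are hypothetical; premium accruals and taxes are ignored.
loan_value_change       = -8.0   # $ millions: spreads widen on the hedged loan
derivative_value_change = +7.0   # the hedge gains, but the hedge is imperfect

# 1) Loan at historical cost, derivative marked to market:
#    only the derivative swing hits earnings this period.
print(f"Earnings impact, derivative-only MTM: {derivative_value_change:+.1f}")

# 2) Both marked to market (fair value option): changes largely offset,
#    leaving only the residual from the imperfect hedge.
print(f"Earnings impact, both at fair value:  {loan_value_change + derivative_value_change:+.1f}")
```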
The IASB developed the new "fair value option" under International Accounting Standard
(IAS) 39, under which firms could mark to market both the credit derivative and the hedged
position and report changes in their fair values in current earnings. While at first glance the
fair value option might be viewed as the solution to the problems of the current
accounting model, it also raises a number of concerns. Without observable market prices and
sound valuation approaches, fair value measurements are difficult to determine, verify, and
audit. Reporting would become less comparable across institutions. Moreover, if an entity's
creditworthiness deteriorates significantly, there is potentially a peculiar result. In this
circumstance, financial liabilities would be marked down to fair value and a gain would be
recorded in the entity's profit and loss statement. In the most dramatic case, an insolvent
entity might appear solvent as a result of marking to market its own deteriorated credit risk.
Many of these concerns, as well as recommendations to address them, were included in a
July comment letter to the IASB from the Basel Committee.2 As heavy users of customer
and investor corporate financial statements, bankers should also consider how a fair value
measure may mask underlying reasons for the change in fair value.
As institutions using IASB standards consider how to use the fair value option for their own
financial reporting purposes, they should be aware of certain related complexities. For
example, if loans are reported using the fair value option, changes in fair value would
presumably affect loan loss allowances and thus regulatory capital, important asset-quality
measures like nonperforming assets, and even net interest margins.
One area in which improved disclosures by banking organizations are needed involves credit
risk and the allowance for loan losses. As you know, a high degree of management judgment
is involved in estimating the loan-loss allowance, and that estimate can have a significant
impact on an institution's balance sheet and earnings. Expanded disclosures in this area
would improve market participants' understanding of an institution's risk profile and whether
the firm has adequately provided for its estimated credit losses in a consistent,
well-disciplined manner. Accordingly, I strongly encourage institutions to provide additional
disclosures in this area. Examples include a breakdown of credit exposures by internal credit
grade, allowance estimates broken down by key components, more-thorough discussions of
why allowance components have changed from period to period, and enhanced discussions
of the rationale behind changes in the more-subjective allowance estimates, including
unallocated amounts.
Thus, as both users and preparers of financial statements, bankers should encourage
transparency in accounting and disclosure of risk positions.
Conclusion
In my remarks today, I have sought to encourage you to continue your efforts to support the
evolution of risk measurement and risk management practices and to heighten the degree of
professionalism that every effective risk manager should demonstrate. I want to leave by
reminding all of you not to become so caught up in the latest technical development that you
lose sight of the qualitative aspects of your responsibilities. Models alone do not guarantee
an effective risk-management process. You should encourage continuous improvement in all
aspects, including data integrity, legal clarity, transparent disclosures, and internal controls.
For the risk managers at this conference, I hope the message you have heard is that you
should be actively engaged with managers throughout the organization, talking about the
merits of a consistent, sound enterprise-wide risk management culture. In doing so, you can
help managers see that the risk-management process will allow them to better understand the
inherent risks of their activities so that they in turn can more effectively mitigate these risks
and achieve their profit goals.
Footnotes
1. PricewaterhouseCoopers and the Economist Intelligence Unit, "Governance: From
Compliance to Strategic Advantage" (April 2004).
2. A copy of the letter, dated July 30, 2004, can be found at
www.bis.org/bcbs/commentletters/iasb14.pdf.
