Statement and Testimony by Eric Kolchinsky
Before the Financial Crisis Inquiry Commission
I want to thank Chairman Angelides, Vice Chair Thomas and the Commissioners for inviting me to speak
about the role of the rating agencies in the financial crisis.
My name is Eric Kolchinsky, and during the majority of 2007, I was the Managing Director in charge of
the business line which rated sub‐prime backed Collateralized Debt Obligations (also known as ABS
CDOs) at Moody’s Investors Service. I have spent my entire career in structured finance and began
working with CDOs specifically in 1998. In addition to spending 8 years at Moody’s, I have also worked
at Goldman Sachs, Merrill Lynch, Lehman Brothers and MBIA.
I hold an undergraduate degree in aerospace engineering from the University of Southern California, a
law degree from the New York University School of Law and a Master of Science in Statistics from NYU’s
Stern School of Business.
I hope to shed some light on a fundamental question facing the commission – what caused the rating
agencies to assign such erroneous ratings? How could renowned companies like Moody’s, S&P and Fitch, with a hundred years of experience in credit analysis, produce such poor analysis? More importantly, how can this be prevented from happening again?
The answers lie primarily in the structure of the market for rating services. While the initial users of
ratings may be private entities, they seek ratings to satisfy various regulatory mandates. Thus the
nature of rating agencies is quasi‐regulatory and is very similar to the auditing work performed by
accounting firms. The failure of the rating agencies can be seen as an example of “regulatory capture” –
a term used by economists to describe a scenario where a regulator acts in the benefit of the regulated
instead of the public interest. In this case, the “quasi” regulators were the rating agencies, the
“regulated” included banks and broker/dealers and the public interest lay in the guarantee which
taxpayers provide for the financial system.
This dynamic manifested itself in the interplay of several factors: the mandated outsourcing of the credit analysis of highly complex and flexible structured finance instruments, without any associated mandated standards, to private companies whose managers were strongly incentivized to maximize profits. In
short, the rating agencies were given a blank check.
The combination of these factors fundamentally changed the incentives of managers at rating agencies
and allowed the creation of hundreds of billions of toxic instruments which lay at the core of the
financial crisis.
Let me discuss each element separately:
1) Mandated risk outsourcing. Not only did private investors outsource their risk management to
rating agencies, but the government did as well. Rules regulating bank capital are designed to
ensure that the managers of a bank do not abuse the subsidy provided them by government
guarantee. For structured finance instruments, capital rules directly relied (and still rely) on the
rating agencies. The higher the rating, the less money a bank is required to set aside for any given
instrument. Yet these rules fail to set out any standards for ratings – even such basic concepts as whether the ratings should measure expected loss or probability of default, or what the coveted AAA actually means. (A simple illustration of this mechanism follows this list.)
2) Structured Finance. Structured finance instruments are unlike any other debt rated by the rating
agencies. Not only are they structurally flexible, they also have little history to analyze. Given the
product’s complexity, there was no general understanding of structured credit outside of issuers,
bankers and rating agencies. This gave the agencies greater latitude in deciding which factors were relevant in the initial analysis as well as in surveillance.
3) Private Entities. The rating agencies are private entities whose managers are incentivized to
maximize revenue.
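To make the first point concrete, below is a minimal sketch, in Python, of how a ratings-based capital rule operates: the capital charge is a simple function of the letter rating, and the rule asks nothing about what that rating measures or how it was produced. The risk weights, capital ratio and names used here are my own illustrative assumptions, not actual regulatory figures.

```python
# Purely illustrative sketch of a ratings-based capital rule.
# The risk weights and capital ratio below are invented placeholders chosen only
# to show the shape of the mechanism; they are not actual regulatory figures.

ILLUSTRATIVE_RISK_WEIGHTS = {
    "AAA": 0.07,   # the higher the rating, the smaller the capital charge
    "AA":  0.08,
    "A":   0.12,
    "BBB": 0.60,
    "BB":  2.50,
}

ASSUMED_CAPITAL_RATIO = 0.08  # assumed minimum capital ratio


def required_capital(face_amount: float, rating: str) -> float:
    """Capital a bank must set aside for a position, given only its letter rating."""
    return face_amount * ILLUSTRATIVE_RISK_WEIGHTS[rating] * ASSUMED_CAPITAL_RATIO


if __name__ == "__main__":
    position = 100_000_000  # a $100 million structured finance tranche
    for rating in ("AAA", "BBB", "BB"):
        print(f"{rating}: ${required_capital(position, rating):,.0f} of required capital")
```

The point of the sketch is that the rule consumes the letter grade as an input; the standards behind that grade are left entirely to the rating agency.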
Consider the incentives created by these factors. The rating agencies could generate billions in revenue
by rating instruments which few people understood. The lack of guidance from the private and public
users of ratings ensured that there was little concern that anyone would question the methods used to
rate the products. The only negative factor to consider was an amorphous concept of “reputational risk”. In other words, the rating agencies faced the age-old and pedestrian conflict between long-term product quality and short-term profits. They chose the latter.
These asymmetric incentives caused a shift of the culture at Moody’s from one resembling a university
academic department to one which values revenue at all costs. Two stories can demonstrate the
difference.
Upon first joining Moody’s in 2000, I was asked to opine on a new transaction backed only by
telecommunication loans. After doing my research, I reported to my managers that I did not believe the
deal could be rated. I felt no pressure to reverse my decision and was complimented for my work even
though we lost out on a lucrative piece of business. The decision not to rate was a sound one – shortly
thereafter, as a result of the dot‐com collapse, many telecom loans defaulted.
However, by 2007, the culture at Moody’s had changed dramatically. It was now a major public company
with revenues of over $2 billion. It had been one of the best equity performers in the S&P 500.
Managers received a significant portion of their compensation in restricted stock and options.
The products rated by my group had grown from a financial backwater to a profit leader. In 2001, a
total of $57 billion of CDOs was rated. By 2006, the number had reached $320 billion – a nearly sixfold
increase. In the first half of 2007, our revenues represented over 20% of the total rating agency
revenues earned by Moody’s.

The growth was the result of surging structured finance origination and a focus on increasing and
maintaining market share. Senior management would periodically distribute emails detailing their
departments’ market share. Even if the market share dropped by a few percentage points, managers
would be expected to justify “missing” the deals which were not rated. Colleagues have described
enormous pressure from their superiors when their market share dipped.
For senior management, concern about credit quality took a back seat to market share [1]. While there
was never any explicit directive to lower credit standards, every missed deal had to be explained and
defended. Management also went out of its way to placate bankers and issuers. For example, and
contrary to the testimony of a Moody’s senior managing director, banker requests to keep certain
analysts off of their deals were granted.
The focus on market share inevitably led to an inability to say “no” to transactions. It was well
understood that if one rating agency said no, then the banker could easily take their business to
another. During my tenure as the head of US ABS CDOs, I was able to say no to just one particularly
questionable deal. That did not stop the transaction – the banker enlisted another rating agency and
received the two AAA ratings he was looking for.
The poor performance of structured finance ratings is primarily the result of senior management’s
directive to maintain and increase market share. Leverage during negotiations can only be gained if one side has the ability to walk away (or the opposing party believes it can walk away). Without this
leverage, the power to extract meaningful concessions from bankers ceased to exist. Instead, analysts
and managers rationalized their concessions since the nominal performance of the collateral was often
quite exceptional.
The increased use of synthetics (in the form of credit default swaps) also changed the ABS CDO market, altering the nature, the role and the incentives of the players in it. The ability to go short created a new class of “investors” whose goal was to maximize losses.
The influence of these players was never anticipated by our models and assumptions.
Additionally, the ability to infinitely replicate any credit synthetically also raised concerns about
correlation between any two CDOs. The probability of two identical bonds appearing in two separate portfolios was no longer limited by the outstanding size of the issue. This increased correlation would have an
impact on our ratings since most CDOs had a sizable exposure to other CDOs. This correlation concern
was especially true with respect to the bonds in the ABX index.
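The overlap concern can be illustrated with a small, hypothetical Monte Carlo sketch: the more reference names two CDOs share, the more their losses move together. The pool size, default probability and trial count below are arbitrary assumptions used only for illustration, not parameters from any actual model.

```python
# Hypothetical sketch: loss correlation between two CDO portfolios as a function
# of how many reference names they share. All parameters are illustrative
# assumptions, not calibrated to any actual deal or rating methodology.
import random

NUM_NAMES = 100        # bonds referenced by each portfolio
DEFAULT_PROB = 0.05    # assumed independent per-bond default probability
TRIALS = 20_000


def loss_correlation(shared: int) -> float:
    """Correlation of default counts when `shared` names appear in both portfolios."""
    losses_a, losses_b = [], []
    for _ in range(TRIALS):
        common = sum(random.random() < DEFAULT_PROB for _ in range(shared))
        only_a = sum(random.random() < DEFAULT_PROB for _ in range(NUM_NAMES - shared))
        only_b = sum(random.random() < DEFAULT_PROB for _ in range(NUM_NAMES - shared))
        losses_a.append(common + only_a)
        losses_b.append(common + only_b)
    mean_a, mean_b = sum(losses_a) / TRIALS, sum(losses_b) / TRIALS
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(losses_a, losses_b)) / TRIALS
    var_a = sum((a - mean_a) ** 2 for a in losses_a) / TRIALS
    var_b = sum((b - mean_b) ** 2 for b in losses_b) / TRIALS
    return cov / (var_a * var_b) ** 0.5


if __name__ == "__main__":
    for shared in (0, 20, 50, 100):
        print(f"{shared:3d} shared names -> loss correlation ~ {loss_correlation(shared):.2f}")
```

With cash bonds, the number of portfolios that can hold the same name is capped by the size of the issue; with credit default swaps it is not, which in terms of this sketch pushes the shared fraction, and therefore the correlation between CDOs, toward the top of the range.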
The ABX was a synthetic index which tracked the performance of 20 sub‐prime transactions. The index
or its components started to appear frequently in many of the CDOs we rated. It was also a popular tool
for hedge funds shorting the subprime market. A methodology detailing this concern and limiting CDO exposure to the index was ready to be published in October of 2006. However, it was not published due to market share concerns.

[1] In Congressional testimony, Moody’s has tried to distinguish between the pursuit of market coverage and market share. I don’t believe that there is a substantive difference between the two terms.

Synthetics also changed the dynamics of the rating process. A CDO is already a very complex structure,
but adding synthetics increased this intricacy geometrically. First, each credit default swap was in itself
a highly customizable and bespoke instrument. Second, the analysts also had to deal with issues such as
counter‐party risk, collateral (funding) risk and multiple waterfall (cash and synthetic) dynamics.
In addition, while a cash transaction would have taken months to accumulate the collateral it needed to
close, a synthetic transaction could “ramp up” in a week. This significantly shortened the window in which analysts could analyze their transactions. Pressure from bankers, always high, increased
tremendously. Without any negotiating leverage, it was difficult to manage this increased workflow
effectively.
Despite the increasing number of deals and the increasing complexity, our group did not receive
adequate resources. By 2007, we were barely keeping up with the deal flow and the developments in
the market. Many analysts, under pressure from bankers and their high deal loads, began to do the bare minimum of work required. We did not have the time to do any meaningful research into all the emerging credit issues. I was chided by my manager for my own attempts to stay on top of the increasingly troubled market. She told me that I spent too much time reading research.
As the market began to falter after the collapse of the Bear Stearns hedge funds, I was asked to keep senior management posted on developments in the markets. There appeared to be little concern regarding
credit quality. According to my manager, the CEO, Ray McDaniel, was asking for information on our
potential deal flow prospects: “obviously, they're getting calls from [equity] analysts and investors”. I
believe that this mindset helps to explain how in the fall of 2007, Moody’s nearly committed securities
fraud.
During the course of that year, the group which rated and monitored subprime bonds did not react to
the deterioration in their performance statistics. During the summer of 2007, a relatively small batch of
subprime bonds was downgraded. However, by early September, during an impromptu meeting with
Mr. Nicolas Weill, I was told that the ratings on the 2006 vintage of subprime bonds were about to be
downgraded broadly and severely. While the understaffed group needed time to determine the new
ratings, I left the meeting with the knowledge that the then-current ratings were wrong and no longer
reflected the best opinion of the rating agency.
The rating methodology for CDOs backed by subprime bonds relied on those bond ratings to assess credit
risk. The worse the sub‐prime ratings, the more subordination would be required for any particular CDO
rating. If the underlying ratings were no longer correct, then the ratings on the CDOs would also be
wrong. I believed that to assign new ratings based on assumptions which I knew to be wrong would
constitute securities fraud.
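The mechanical dependence I am describing can be reduced to a simple, hypothetical calculation: the CDO model translates each underlying bond’s rating into an assumed loss, and the total drives the subordination required for a given CDO rating. The rating-to-loss table and cushion multiplier below are invented placeholders, not Moody’s actual methodology; they are meant only to show how stale input ratings flow directly into a wrong output.

```python
# Hypothetical sketch of why stale underlying ratings corrupt a CDO rating.
# The expected-loss table and cushion multiplier are invented placeholders,
# not any rating agency's actual assumptions.

ASSUMED_EXPECTED_LOSS = {   # assumed fraction of face value lost, by rating
    "AAA": 0.001, "AA": 0.003, "A": 0.010,
    "BBB": 0.030, "BB": 0.100, "B": 0.250,
}


def required_subordination(portfolio):
    """Subordination (as a fraction of the pool) needed below the senior tranche,
    taken here as a simple multiple of the pool's modeled expected loss."""
    total_face = sum(face for face, _ in portfolio)
    modeled_loss = sum(face * ASSUMED_EXPECTED_LOSS[rating] for face, rating in portfolio)
    return 3.0 * modeled_loss / total_face  # illustrative cushion multiplier


if __name__ == "__main__":
    # The same pool of subprime bonds, first at its stale ratings, then re-rated.
    stale_inputs = [(10_000_000, "BBB")] * 50
    updated_inputs = [(10_000_000, "B")] * 50
    print(f"Subordination implied by stale ratings:   {required_subordination(stale_inputs):.1%}")
    print(f"Subordination implied by updated ratings: {required_subordination(updated_inputs):.1%}")
```

In this toy example, a senior tranche sized off the stale inputs has only a fraction of the cushion the updated opinion would call for, which is why rating new CDOs off inputs known to be wrong was untenable.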

Since we still had a handful of CDOs in our pipeline, I immediately notified my manager and proposed a
solution to this problem. My manager declined to do anything about the potential fraud, so I sought
advice from Gary Witt, my colleague and former manager. He suggested that I take the matter up with
the managing director in charge of credit policy. As a result of my intervention, a procedure for lowering
sub‐prime bond ratings going into CDOs was announced on September 21, 2007. I believe that this
action saved Moody’s from committing securities fraud. About a month and a half after my
intervention, I was asked to leave the CDO group and offered a transfer and demotion to a lower
position in another department at Moody’s.
What can be done to improve rating quality? One solution which has been proposed is to completely
remove any references to ratings in regulations. While this proposal seems simple and just, it is also
impractical. At this point in time there are no organizations ready to take the agencies’ role in the
capital markets. Furthermore, the perverse incentives described above will apply to any private
organizations charged with the same task.
The only practical solution is to add accountability to the system by mandating minimum credit
standards. This would put a floor under market-share-motivated free-falls in methodologies and restrict competition to where it belongs – price and service.
