Full text of Financial Industry Studies : December 1998
# FEDERAL RESERVE BANK OF DALLAS · DECEMBER 1998 · FINANCIAL INDUSTRY STUDIES

Concentration, Technology, and Market Power in Banking: Is Distance Dead? (Robert R. Moore)

Benchmarking the Productive Efficiency of U.S. Banks (Thomas F. Siems and Richard S. Barr)

This publication was digitized and made available by the Federal Reserve Bank of Dallas' Historical Library (FedHistory@dal.frb.org).

Financial Industry Studies
Federal Reserve Bank of Dallas

Robert D. McTeer, Jr., President and Chief Executive Officer
Helen E. Holcomb, First Vice President and Chief Operating Officer
Robert D. Hankins, Senior Vice President
W. Arthur Tribble, Vice President

Economists: Jeffery W. Gunther, Robert R. Moore, Kenneth J. Robinson, Thomas F. Siems, Sujit "Bob" Chakravorti
Financial Analysts: Robert V. Bubel, Robert F. Mahalik, Karen M. Couch, Kelly Klemme, Edward C. Skelton, Kory A. Killgo
Graphic Designer: Candi Aulbaugh
Editors: Jeffery W. Gunther, Robert R. Moore, Kenneth J. Robinson
Publications Director: Kay Champagne
Copy Editor: Jennifer Afflerbach
Design & Production: Laura J. Bell

Financial Industry Studies is published by the Federal Reserve Bank of Dallas. The views expressed are those of the authors and should not be attributed to the Federal Reserve Bank of Dallas or the Federal Reserve System. Articles may be reprinted on the condition that the source is credited and a copy of the publication containing the reprinted article is provided to the Financial Industry Studies Department of the Federal Reserve Bank of Dallas. Financial Industry Studies is available free of charge by writing the Public Affairs Department, Federal Reserve Bank of Dallas, P.O. Box 655906, Dallas, Texas 75265-5906, or by telephoning (214) 922-5254 or (800) 333-4460, ext. 5254. It is also available on the Dallas Fed's web site, www.dallasfed.org.

Contents

Concentration, Technology, and Market Power in Banking: Is Distance Dead?
Advancing technology is reducing the barrier that distance has traditionally posed between potential buyers and sellers for a variety of goods and services. To what extent will technology overcome distance as a barrier in banking? Consistent with distance becoming less of a barrier and banking markets becoming larger in geographic scope, I find that the presence of nearby competitors helps explain bank profitability in 1986 and 1987 but not in 1996 and 1997. Hence, while it may be premature to pronounce distance dead in banking, its role does appear to be diminishing. (Robert R. Moore)

Benchmarking the Productive Efficiency of U.S. Banks

Effective benchmarking allows comparisons among similar business units to discover best practices and incorporate process and product improvements into ongoing operations. Most current benchmarking analyses are limited in scope by taking a one-dimensional view of a service, product, or process and by ignoring any interactions, substitutions, or trade-offs between key variables. In this study, we use a constrained-multiplier, input-oriented data envelopment analysis (DEA) model to benchmark the productive efficiency of U.S. banks. We find that the most efficient banks effectively control costs and hold a greater percentage of earning assets than the least efficient banks. Performance measures for the most efficient banks indicate that they earn a significantly higher return on average assets, hold more capital, and manage less risky and smaller loan portfolios. We find a close association between a bank's relative efficiency score derived from the DEA model and its examination (CAMEL) rating. (Thomas F. Siems and Richard S. Barr)

Concentration, Technology, and Market Power in Banking: Is Distance Dead?

Advancing technology is breaking down the barrier that distance has traditionally posed between potential buyers and sellers.
For example, residents in remote rural areas once would have had to travel great distances to shop at a large bookstore, but using the Internet, they can now browse through bookstores offering millions of titles without leaving their homes. But the extent to which distance will become irrelevant in other types of transactions will depend on the nature of the transaction. It is difficult to imagine a world in which it would be as easy to enjoy the cuisine of a restaurant a thousand miles away as that of a restaurant a block away. At the other extreme, downloading software from a company on the other side of the world can be as easy as downloading software from a company next door. Where does banking fit into this spectrum?

Robert R. Moore

The answer is taking on heightened importance in light of the recent frenzied pace of bank mergers, which has brought about a conspicuous decline in the number of U.S. banking organizations. One potential effect of banking consolidation is a reduction in the competitiveness of banking markets. Reflecting the traditional view that the presence of nearby competitors is essential for competitive outcomes, markets for banking services often have been thought to be confined to relatively limited geographic areas. If an abundance of nearby competitors is essential for competition, then banking consolidation might threaten competition if it reduced the number of competitors in some local markets. But although the number of U.S. banking organizations is declining, other factors are heightening competition in the industry. Deregulation has reduced geographic restrictions on banking, allowing the banking organizations that remain to have a physical presence in a larger number of areas and fostering greater competition among those organizations.
Nonbank competitors have become a more important source of competition. And although traditional antitrust analysis considers physical presence in a market essential for competing in that market, competing in distant markets is becoming less difficult than it once was. Advancements in communications are making it easier and less expensive to exchange information over great distances through the telephone and Internet. Also, with banking innovations such as credit scoring and telephone banking, transactions that once required face-to-face contact can now be conducted at a distance. Whether distance remains a significant barrier to competition in light of these innovations has important ramifications for banking market competitiveness.

Robert R. Moore is a senior economist and policy advisor at the Federal Reserve Bank of Dallas.

Higher HHI values imply a greater concentration of market share among a few competitors. The HHI would attain its highest possible value (10,000 = 100²) in a monopolistic market and would approach zero in a market divided equally among an infinite number of competitors. In merger analysis, mergers proposed in markets with high initial HHI values or mergers that would increase the HHI substantially are considered potentially anticompetitive. In particular, Department of Justice guidelines view bank mergers as potentially anticompetitive if they would result in a postmerger HHI above 1,800 and would increase the HHI by 200 or more (Rhoades 1993).2

If distance is a formidable barrier to competition, then a bank operating in a market without nearby competitors might be able to set lower-than-competitive deposit rates and higher-than-competitive loan rates, as it would be difficult for a customer to transact with a distant bank, even if that bank offered more competitive rates.
But if technology has made it easier for distant banks to compete, then operating in a market with little competition nearby would not boost profits. To see whether distance has become less of a barrier to competition in banking, I examine the relationship between the profitability of banking markets and the physical presence of competitors in those markets. Consistent with distance becoming less important in banking, I find that although operating in a market with few nearby competitors boosted profitability a decade ago, more recently it does not.

Calculating the HHI requires a definition of the banking market. In practice, banking markets are defined as relatively small geographic areas for antitrust purposes.3 In an urban area, the banking market would typically be defined as the Census Bureau's metropolitan statistical area (MSA); outside urban areas, the county (or parish in Louisiana) is typically used as the definition of the banking market (Radecki 1998). I refer to these definitions as "local" banking markets.4 The traditional focus on relatively small geographic areas is linked to the historical importance of distance as a barrier in banking; to the extent that it was historically difficult to transact with distant banks, they were excluded from the market definition. Consistent with banking markets being confined to local areas, two recent surveys find that only a small fraction of consumers and small businesses use commercial banks outside their local area (Kwast, Starr-McCluer, and Wolken 1997). Cyrnak (1998) finds, however, that lenders outside the local market play an important role in making loans to small businesses, especially in rural areas. The differences between these studies' findings highlight the challenge of defining banking markets in today's environment.

TRADITIONAL ANTITRUST POLICY

1 See United States v. Philadelphia National Bank, 374 U.S. 321 (1963).
2 These guidelines are supplemented with a consideration of mitigating factors, including, but not limited to, potential competition, the competitive viability of the target firm, economic conditions in the market, market shares of leading firms, economies of scale in small mergers, and the importance of nonbank competitors.

Various academic studies have examined the scope of banking markets. Osborne (1988) examines loan rates in various regions and among various loan sizes and concludes that banking is an integrated, national market. In contrast, Hannan (1991) examines the relationship between local market concentration and the terms of bank lending to businesses and finds significant local-market effects. Jackson and Eisenbeis (1997) find that consumer deposit rates in different regions are cointegrated, supporting the idea that banking is an integrated, national market.

3 While these definitions serve as reasonable proxies for the local markets used in antitrust analysis, the actual definition of the banking market for antitrust analysis is more complicated and involves market-by-market analysis of commuting patterns, location of schools, shopping facilities, advertising, and other factors (Amel 1997).

4 The focus here is on the geographic definition of the banking market. Additional questions concerning the definition of the relevant products and providers are beyond the scope of this article.

Competition has long been viewed as essential for market forces to work in the best interest of an economy. In recognition of the importance of maintaining competition, the United States enacted various laws near the turn of the twentieth century that were intended to ensure adequate competition. One such law was the Clayton Act of 1914, which prohibited mergers that would substantially reduce competition.
Some uncertainty existed about the applicability to banking of the turn-of-the-century antitrust laws and their subsequent amendments, but the Philadelphia National Bank case of 1963 made it clear that banks were subject to those laws.1 To provide a concrete framework for quantifying the impact of bank mergers on competition, in 1982 the Department of Justice published guidelines for merger approval based on the Herfindahl-Hirschman Index (HHI), with some subsequent revisions to the guidelines. The Federal Reserve uses these guidelines as an initial step in evaluating the competitive impact of a proposed bank merger (Rhoades 1993). The HHI equals the sum of the squared market shares of the firms in the market, based on deposits. If, for example, the deposits in a market are equally divided between two competitors, then that market would have an HHI of 5,000 = 50² + 50². The HHI measures the concentration of the market, that is, the degree to which market shares are concentrated among a few competitors.

Changing the definition of the banking market will change the numerical value of the HHI.5 As an extreme example, suppose banking markets were defined as blocks within a large city that had one bank on each block. The HHI would then count each bank as having a monopoly in its market, when in practice bank customers in the city could readily transact with any bank in town, implying that the banks were not monopolists. While that example is unrealistic, it does make an important point: if markets are defined too narrowly, the HHI will be artificially high. Moreover, if an artificially high HHI is used in the regulatory decision of whether to approve a merger, some mergers would be rejected as anticompetitive when they actually are not.

Each step of this process would have been more difficult to complete with distant sellers than with nearby sellers. Consequently, nearby sellers had an advantage relative to distant sellers.
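The HHI arithmetic and the Department of Justice screen described above can be sketched in a few lines of Python. This is a minimal illustration; the deposit figures are hypothetical, while the 1,800 level and the 200-point increase are the guideline thresholds cited in the text (Rhoades 1993):

```python
def hhi(deposits):
    """Herfindahl-Hirschman Index: sum of squared percentage market shares,
    with shares measured by deposits."""
    total = sum(deposits)
    return sum((100.0 * d / total) ** 2 for d in deposits)

def merger_flagged(deposits, i, j):
    """Apply the merger screen: flag the merger of banks i and j as potentially
    anticompetitive if the postmerger HHI exceeds 1,800 and the merger raises
    the HHI by 200 or more."""
    pre = hhi(deposits)
    merged = [d for k, d in enumerate(deposits) if k not in (i, j)]
    merged.append(deposits[i] + deposits[j])
    post = hhi(merged)
    return post > 1800 and post - pre >= 200

# Two equal competitors give the text's example: 50^2 + 50^2 = 5,000.
print(hhi([500, 500]))
```

A market split 40/30/20/10, for instance, has an HHI of 3,000; merging the two smallest banks pushes it to 3,400, which the screen flags.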
And if there were only a few nearby sellers, those sellers could use the advantage conferred by their proximity to the buyers as a source of monopoly power; that is, the sellers could set prices above those that would prevail in a competitive market.6 Technological advancement is making distance a less formidable barrier than it once was. Cairncross (1997) discusses the "death of distance," arguing that technological advancements are on their way to making distance virtually irrelevant in economic transactions. The competitive ramifications of this would be profound: a seller down the block would no longer have a distance-based advantage over a seller on the other side of the country, or even the world. Banking is largely an information-based business. Traditionally, customers brought information to the bank in person. Because the cost of traveling to the bank depended on the distance between the customer and the bank, distance served as a barrier to competition. But information can flow into a bank through other means, such as by telephone. As advancements in communication technology make it easier to communicate from afar, distance as a barrier to competition in banking could be eroded. Two statistics are suggestive of the decline in communication costs from 1987 to 1997.7 First, the price index for telephone services decreased by an inflation-adjusted 21 percent during that time. Second, in 1987 the price of a very long (2,455-mile) call was 47 percent higher than the price of a short (39-mile) toll call, but by 1997 the price of the two calls was equal (Waldon 1998).8 Thus, the overall cost of communicating by telephone has dropped, and the price of very long-distance calls has declined relative to the price of shorter calls. Banking by phone allows customers to interact with their bank without visiting the bank in person. 
To the extent that the cost of going to the bank has remained roughly constant, the decrease in communication costs has lowered the cost of transacting at a distance relative to the cost of transacting in person.9 While some impediments to conducting business at a distance may remain, the relative cost has in all likelihood declined, thus reducing the barrier to competition. STRUCTURE, CONDUCT, AND PERFORMANCE Much of the basis for traditional antitrust analysis—for banking and other industries— stems from the structure-conduct-performance (SCP) paradigm. As summarized by Tirole (1988), the SCP paradigm argues that the structure of a market (which includes factors such as market concentration, product differentiation among sellers in the market, and cost structure) influences firms' conduct in the market (which includes factors such as pricing, advertising, and research and development), and conduct influences performance (which includes such factors as profitability, efficiency, and price relative to marginal cost). Empirical tests of the SCP paradigm in banking have often found that the structure of the market—especially market concentration— has influenced conduct and performance. Gilbert (1984) reviews the banking literature and concludes that the majority of the evidence supports the idea that bank market structure influences bank performance. He also concludes that the evidence supports measuring concentration at the local market level to explain performance. Additional studies have been conducted since Gilbert's review. Berger and Hannan (1989) find that higher local market concentration is associated with banks paying lower rates on deposits. Hannan (1991) finds that higher local market concentration is associated with higher interest rates on loans. More recently, however, Radecki (1998) looks at the relationship between interest rates paid by a bank and the concentration of the market in which the bank operates. 
When examining data from 1996, he finds no link between local market concentration and the rates paid on deposits; he does find, however, a link between the concentration of a state's banking market and deposit rates paid in that state, suggesting that the state may now be the relevant definition of the banking market.

THE CHANGING RELEVANCE OF DISTANCE

Distance has historically been a barrier between buyers and sellers in many markets. To conduct a transaction, a buyer would need to gather information about the seller and the seller's product, take possession of the good or service, and pay for the purchase.

6 When distance is a barrier between potential buyers and sellers, buyers in a market where nearby sellers are sparse can be viewed as having a high search cost, as in Stigler (1961). In such models, higher search costs make the buyer more willing to accept high prices.

7 The decline in communication costs from 1987 to 1997 is part of a much longer trend. From 1928 to 1997, the inflation-adjusted price of a 10-minute, 2,752-mile daytime call on AT&T regular rates fell by 98.9 percent. Data on AT&T rates are from Waldon (1998).

8 The relative prices are for AT&T basic residential daytime rates. Discount flat-rate calling plans have also made the relative price of very long-distance calls equal to the price of short toll calls.

9 By allowing documents to move quickly and cheaply over long distances via telephone lines, the fax machine also reduces the importance of distance.

Concurrent advancements in financial technologies have also worked to reduce the obstacle of distance in banking. Credit scoring approaches allow decisions that once would have been made using information obtained in a face-to-face transaction to now be made using information obtained elsewhere.
Credit scoring uses statistical analysis to evaluate the riskiness of loan applicants based on the historical relationship between borrower characteristics and borrower performance.10 Also, the adoption of banking by personal computer has the potential to downplay distance even further. Already, 231 U.S. banks offer services over the Internet (Online Banking Report 1998). For interaction over the Internet, the physical distance between the customer and the bank is irrelevant.

First, simple univariate tests compare profitability in markets with concentration at or below the median level with profitability in markets with concentration above the median level. Second, regressions that control for additional market characteristics provide more evidence on the relationship. I conduct these tests for 1986, 1987, 1996, and 1997. If distance is becoming less relevant in banking, then local market concentration should be a less important determinant of profitability in the latter years than in the earlier years.

Past studies tend to use the interest rates paid on deposits or the interest rates charged on loans as the measure of banking outcomes. One potential shortcoming of these measures is that they may not capture differences in the underlying bank products that could account for differences in the rates charged or paid. If, for example, a bank paid a lower interest rate on deposits than its competitors did, the bank might nevertheless attract depositors in a competitive market if it maintained a larger staff that provided better service than other, more thinly staffed banks. Some customers might be willing to accept a lower interest rate on deposits if they were compensated with better service.
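The statistical credit-scoring idea described above can be sketched as a simple logistic scorecard. Everything here is hypothetical (the borrower characteristics, the coefficient values, and the approval cutoff); in practice the coefficients would be estimated from the historical relationship between borrower characteristics and loan performance:

```python
import math

# Hypothetical coefficients; a real scorecard would estimate these from
# historical borrower data (e.g., by logistic regression).
WEIGHTS = {
    "intercept": -2.0,
    "debt_to_income": 3.5,       # higher debt burden -> higher default risk
    "years_at_job": -0.15,       # longer job tenure -> lower default risk
    "prior_delinquencies": 0.8,  # each past delinquency raises risk
}

def default_probability(applicant):
    """Estimated default probability from the logistic scoring model."""
    z = WEIGHTS["intercept"] + sum(WEIGHTS[k] * applicant[k] for k in applicant)
    return 1.0 / (1.0 + math.exp(-z))

def approve(applicant, cutoff=0.10):
    """Approve the loan when estimated default risk falls below the cutoff."""
    return default_probability(applicant) < cutoff

applicant = {"debt_to_income": 0.25, "years_at_job": 8, "prior_delinquencies": 0}
print(round(default_probability(applicant), 3), approve(applicant))
```

Because no step requires meeting the applicant in person, the same score can be computed for a borrower next door or a thousand miles away, which is the point made in the text.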
Cairncross (1997) reports that when the telephone spread to villages in Sri Lanka, farmers in outlying areas received prices for their crops that were 80 percent to 90 percent of those in Colombo, the capital; before use of the telephone allowed village farmers to know prices in Colombo, farmers were receiving only 40 percent to 50 percent of the Colombo price. Similarly, when bank depositors are able to learn easily the interest rates offered on deposits by distant banks, using the Internet or other sources, local banks may have a heightened incentive to offer more competitive rates. As a measure of banking outcomes, profitability avoids this shortcoming. Under competitive conditions, paying lower rates while providing a larger staff would leave profitability unchanged: although paying lower rates on deposits would increase profitability, that boost to profitability would be offset by the cost of maintaining the larger staff. In addition, more banks can be included in a study based on profitability than in a study based on interest rates because all banks report profitability on the Report of Condition and Income ("call report"), whereas reliable interest-rate information must be obtained from more limited survey data. Although profitability has some advantages as a performance measure, it also has some drawbacks. First, the accounting conventions that are used to compute profitability may introduce imprecision into the measurement of profitability itself. Second, while the rate paid or charged by a bank is a fairly direct measure of the link to the customer, profitability is a larger, less immediate concept subject to various influences that may be difficult to control for statistically. Because profitability lacks immediacy as a measure of the terms of banking services that customers receive, the results of a study based on profitability may be interpreted in several ways. 
MEASURING THE IMPACT OF LOCAL AREA MARKET CONCENTRATION ON BANK CUSTOMERS

10 See Mester (1997) for a review of credit scoring. Peek and Rosengren (1998) argue that the adoption of credit scoring is promoting small businesses' access to credit from new sources.

11 One difficulty in linking banking performance in a market to conditions in that market is that some banks have branches located in more than one market. Because the Report of Condition and Income ("call report") does not provide income data at the branch level, assigning income to individual branches would be problematic. To avoid this problem, I limit attention to banks that have all of their operations confined to a single market. Such banks accounted for 88.4 percent of all banks by number and held 46.3 percent of bank assets in June 1987. In June 1997, those percentages were 75.3 and 28.2, respectively.

The arguments above suggest that distant banks are a more important source of competition than in the past. Suppose a local banking market were highly concentrated and banks in that market offered monopolistic terms to their customers. In the past, it would have been difficult for banks outside the local market to offer services at more competitive terms to customers in that market. But today, with the decline in communication costs and the ability to conduct long-distance transactions that once required face-to-face contact, distant banks would be able to compete for those customers; that additional source of competition would drive the terms on bank products toward competitive rates. These arguments imply that high concentration in a local banking market is not as likely to result in noncompetitive effects today as in the past. If distant banks are an important source of competition, then the traditional definition of local banking markets is too narrow.
To examine that claim empirically, I examine the relationship between profitability in a local market and the concentration of the market using two approaches.11

Under an interpretation rooted in the SCP paradigm, finding that higher market concentration is associated with higher profitability would be taken as a sign of anticompetitive practices: high concentration implies that competition is limited, allowing firms in the market to exercise pricing power that results in monopoly profits. As Peltzman (1977) discusses, however, a positive correlation between profitability and concentration could emerge for reasons other than anticompetitive practices. If a market has a firm that grows large because of cost advantages, that market would exhibit high profitability and high concentration, even in the absence of anticompetitive practices. Results of earlier studies of the banking industry (e.g., Berger and Hannan 1989; Hannan 1991), however, suggest that a positive relationship between concentration and profitability would reflect market power, given that those studies found that higher concentration was associated with less competitive terms on loans and deposits.

Table 1
Univariate Tests of the Relationship Between Profitability and Market Concentration

| Year and market type | Average profitability in markets with HHI at or below median | Average profitability in markets with HHI above median |
| --- | --- | --- |
| 1986 Rural | .83 | 1.01** |
| 1986 Urban | .76 | .85 |
| 1987 Rural | .82 | .94* |
| 1987 Urban | 1.09 | 1.36 |
| 1996 Rural | 1.23 | 1.29 |
| 1996 Urban | 1.29 | 1.34 |
| 1997 Rural | 1.25 | 1.31 |
| 1997 Urban | 1.21 | 1.05 |

NOTES: ** and * denote market categories where return on assets is significantly different in the high HHI markets than in the low HHI markets at the 1-percent and 5-percent levels, respectively. Statistical significance of differences in means was tested using a two-tailed t-test.
The particular measure of profitability used is the return on average assets (ROA) for banks within the local market, where local markets are approximated as metropolitan statistical areas (MSAs) in urban areas and as counties in nonmetropolitan areas. For each market, ROA is the ratio of the net income of the banks in the market to the average over the year of the banks' assets. Market concentration is measured by the HHI. As discussed above, the HHI is used in antitrust analysis, and higher values of the HHI are associated with a more concentrated market. Also, in computing the HHI, thrift deposits were included but were weighted by 50 percent, reflecting some, but presumably imperfect, substitutability between bank and thrift deposits.12

Finally, in 1997, average profitability was higher in concentrated rural markets than in unconcentrated rural markets, but in urban markets, average profitability was actually lower in concentrated markets; none of the differences in profitability in 1997 is statistically significant, however. The univariate results thus support the idea that operating in a market with few nearby competitors tended to boost profitability—at least for rural markets—a decade ago. More recently, however, operating in a market with few nearby competitors is not associated with higher profitability. These results are consistent with distant competitors becoming more important over the past decade; that is, the results are consistent with distance declining as a barrier in banking.

Univariate Approach

If operating in a local market with few nearby competitors confers pricing power on the banks operating in those markets, then local markets with a high HHI should have high profitability. Table 1 shows the average profitability of markets with an HHI at or below the median in a given year and those with an HHI above the median. This analysis is conducted separately for urban and rural markets.
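The comparison reported in Table 1 is a two-tailed t-test on market-level ROA. The sketch below uses made-up ROA figures rather than the article's data, Welch's form of the two-sample statistic, and the large-sample 1.96 critical value as a stand-in for the exact 5-percent two-tailed critical value:

```python
from statistics import mean, variance

def market_roa(net_income, average_assets):
    """Return on average assets for a market, in percent."""
    return 100.0 * net_income / average_assets

def welch_t(sample_a, sample_b):
    """Two-sample t-statistic in Welch's form, which allows unequal variances."""
    se2 = variance(sample_a) / len(sample_a) + variance(sample_b) / len(sample_b)
    return (mean(sample_a) - mean(sample_b)) / se2 ** 0.5

# Hypothetical ROA (percent) for rural markets above and below the median HHI.
high_hhi = [1.05, 0.98, 1.10, 1.02, 0.95, 1.08]
low_hhi = [0.82, 0.88, 0.79, 0.85, 0.90, 0.84]

t = welch_t(high_hhi, low_hhi)
print(round(t, 2), abs(t) > 1.96)  # |t| > 1.96: roughly significant at 5 percent
```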
Table 1 also displays the results of statistical tests run to detect significant differences in profitability between markets with high and low concentration. As Table 1 shows, in 1986 and 1987, average profitability was higher for concentrated markets, as expected, in both urban and rural markets; the difference in profitability was statistically significant only in rural markets, however. In 1996, average profitability was higher in concentrated markets for both urban and rural markets, but the differences were not statistically significant.

Regression Approach

Although the univariate analysis above is consistent with local area concentration no longer being an important determinant of local market profitability, that analysis does not control for factors beyond concentration that could influence profitability. The regression approach below attempts to isolate the relationship between profitability and the HHI by controlling for additional factors that could influence profitability. Table 2 provides formal definitions of the variables used in the model. As in the univariate analysis, under the hypothesis that higher concentration allows banks to earn monopoly profits, the regression coefficient on HHI should be positive. However, to the extent that it has become easier for financial service providers to compete in distant markets, I would expect any positive relationship between local area concentration and profitability to be weaker in 1996 and 1997 than in 1986 and 1987.

12 The degree of substitutability between thrifts and banks is debatable, so thrifts were allowed to enter the concentration figures in two other ways. First, thrifts were excluded entirely, reflecting nonsubstitutability between banks and thrifts. Second, thrifts were included with their deposits weighted by 100 percent, reflecting complete substitutability between banks and thrifts. The results shown in Tables 3a-3d are based on thrift deposits being weighted at 50 percent.
Weighting thrift deposits at zero and 100 percent produced qualitatively similar results.

Branching restrictions increase the cost of entering a market. By making it more difficult to enter, restrictions on branching could make it possible for high profitability to persist; in the absence of restrictions, a highly profitable market would be likely to attract outside competition. With the entry of additional competitors, bank customers would obtain terms that are more favorable, and the profitability of banks in that market would decline. To the extent that branching restrictions impede that dynamic, I would expect branching restrictions to be associated with greater profitability; that is, I would expect a negative sign for BRANCH and a positive sign for UNIT. However, if distance is not a barrier in banking in 1996 and 1997, the effects of branching restrictions would be zero; if distance no longer impedes transactions, then barriers to establishing a physical presence in a distant location would no longer affect profitability.
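The thrift-weighting alternatives discussed in the note above (0, 50, and 100 percent) can be sketched as follows; the deposit figures are hypothetical:

```python
def weighted_hhi(bank_deposits, thrift_deposits, thrift_weight=0.5):
    """HHI with thrift deposits included at a fractional weight; 0.5 reflects
    partial substitutability between bank and thrift deposits."""
    deposits = list(bank_deposits) + [thrift_weight * d for d in thrift_deposits]
    total = sum(deposits)
    return sum((100.0 * d / total) ** 2 for d in deposits)

banks, thrifts = [600, 400], [500]
for w in (0.0, 0.5, 1.0):
    print(w, round(weighted_hhi(banks, thrifts, w)))
```

Raising the thrift weight spreads market share over more competitors, so the measured concentration falls; as the note states, the qualitative results were similar across the three weights.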
Table 2
Variable Definitions

| Variable | Definition |
| --- | --- |
| ROA | Return on average assets, percent |
| HHI | (Sum of squared market shares of all banks in market)/100,000 |
| POP | Population of market in hundreds of millions of people |
| BRANCH | Equals 0.01 if market is in a state with unrestricted branching, zero otherwise |
| UNIT | Equals 0.01 if state is a unit banking state, zero otherwise |
| PINCOME | Per capita personal income in market, $100,000 |
| DEPGROW | Ratio of change in deposits to prior year deposits in market |
| TAR | Ratio of troubled assets to total assets |
| CONSUMER | Ratio of consumer loans to total assets |
| AGRI | Ratio of agricultural loans to total assets |
| CI | Ratio of commercial and industrial loans to total assets |
| REALEST | Ratio of real estate loans to total assets |
| SECURITIES | Ratio of securities to total assets |
| SUBCHS | Fraction of assets in market held by Subchapter S banks |

NOTES: The variables ROA, TAR, CONSUMER, AGRI, CI, REALEST, SECURITIES, and SUBCHS are computed using the subset of banks that have all their operations confined to a single geographic area.

While the relationship between profitability and the HHI is the primary focus, factors other than HHI could influence the profitability of a market. The first control factor is the size of the market, measured by the population within the local area (POP). A market with a small population might not attract entrants even though existing banks in the market were highly profitable, if the gains from entering the market could not be justified by the cost. To the extent that the cost of competing from afar has declined over time, entering small markets would have become easier over time, making it less likely for a small size to be associated with higher profitability in 1996 or 1997 than in 1986 or 1987. The model includes other control factors that could affect profitability.
Per capita personal income (PINCOME) controls for a potential influence of affluence on profitability. Yearover-year deposit growth (DEPGROW) measures the growth in deposits in the market. To the extent that rapid growth is demand-driven, I would expect DEPGROW to be associated with higher profits. TAR, the troubled asset ratio, is the ratio of loans past due 90 days or more, nonaccrual loans, and other real estate owned to total assets. A high vahie of TAR reflects banking problems that would make it difficult for banks in the market to be profitable. The variables CONSUMER, AGRI, CI, REALEST, and SECURITIES are included to control for any effect of portfolio composition on profitability; these variables measure the fraction of assets that are held in consumer loans, agricultural loans, commercial and industrial loans, real estate loans, and securities, respectively. Restrictions on banks' ability to expand into new markets through branching could also influence market profitability. The model includes two variables that reflect such legal restrictions. First, the variable BRANCH identifies states where intrastate branching is freely permitted; in states where intrastate branching is freely permitted by merger, acquisition, or on a de novo basis, BRANCH equals one; otherwise BRANCH equals zero.13 In addition, in 1986 and 1987, a few states were unit banking states, " Data on states'branching regulations were obtained from Amel (1993) and Conference of state Bank Supervisors (1986,1996). Finally, a recent change in tax law allows certain banks to be organized as "Subchapter S" corporations (Greef and Weinstock 1996). Because banks structured as Subchapter S corporations avoid corporate income tax, those banks would be expected to report higher profitability than other banks. To capture the influence of Subchapter S status on the profitability of a market, I include a variable SUBCHS, the fraction of the assets in a market that are held by Subchapter S banks. 
Because Subchapter S status was not available until 1997, this variable is only included in the 1997 regressions. where no branching was allowed at all. To capt u r e t h e p o s s i b l e effect of these more Stringent restrictions, I include a variable UNIT that equals one for markets located in unit banking 6 Table 3 a Table 3 b Estimation Results for Rural Markets for 1986 and 1996: Dependent Variable ROA Intercept HHI POP 1986 Estimation Results for Rural Markets for 1987 and 1997: Dependent Variable ROA 1996 .08 (-35) -1.10 (1.81) 5.05** (1.24) (1.10) 1.35 103.30 (107.33) -49.97 (85.03) BRANCH 14.55 (8.74) 4.57 (4.01) UNIT 9.10 (5.71) PINCOME 1987 Intercept HHI -3.47 (4.09) 4.54** (1.20) -1.68 (2.84) -5.21 (181.56) 488.39 (354.23) BRANCH 10.07 (5.49) 6.28 (8.08) UNIT 3.80 (5.75) POP — 1997 -.97 (.69) — -3.78** (1.09) -.09 (.55) .61** (.19) .42 (.23) -27.37** (2.27) -13.86* (6.53) CONSUMER 1.64** (.54) 3.38 (2.15) CONSUMER AGRI 1.86** (.50) 2.86 (2.05) AGRI CI 1.85** (.57) 3.03 (1.84) CI REALEST 2.34** (.46) 2.82 (1.98) REALEST 3.00** (.95) 6.80 (5.13) SECURITIES 2.10** (.39) 2.47 (2.00) SECURITIES 2.84** (.83) 6.61 (5.05) DEPGROW TAR R2 Chi-square statistic for overall significance .38 542.2** PINCOME DEPGROW TAR .09 61.9** -7.24 (4.83) .35 (.19) 1.06 (.96) -19.23** (2.57) -10.34* (4.26) 3.06** (1.13) 5.60 (5.66) 3.03** (.92) 7.66 (5.56) 1.48 (1.06) SUBCHS — R2 .25 Chi-square statistic for overall significance NOTES: ** and * denote statistical significance at the 1-percent and 5-percent levels, respectively. Heteroskedasticity-consistent standard errors are shown in parentheses. Coefficient estimates were obtained by ordinary least squares. Sample size was 2,003 for 1986 and 1,927 for 1996. -1.73 (1.03) 389.3** 7.25 (5.52) .87*' (.16) .11 77.0** NOTES: ** and * denote statistical significance at the 1-percent and 5-percent levels, respectively. Heteroskedasticity-consistent standard errors are shown in parentheses. 
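As defined in Table 2, the HHI is the sum of squared market shares, scaled by 100,000. A minimal sketch of the computation, using hypothetical deposit figures (not data from the article):

```python
# Herfindahl-Hirschman index from bank deposits in one market.
# With shares in percent, the raw HHI runs from near 0 (atomistic
# market) to 10,000 (monopoly); the regression variable in Table 2
# divides this by 100,000. The deposit figures below are made up.
def hhi(deposits):
    total = sum(deposits)
    shares = [100.0 * d / total for d in deposits]  # percent shares
    return sum(s * s for s in shares)

market = [500.0, 300.0, 200.0]   # deposits of three banks
raw = hhi(market)                # 50^2 + 30^2 + 20^2 = 3,800
scaled = raw / 100_000           # the HHI variable of Table 2
```

A monopoly market, e.g. `hhi([100.0])`, gives the maximum raw value of 10,000.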
Coefficient estimates were obtained by ordinary least squares. Sample size was 1,974 for 1987 and 1,620 for 1997.

RESULTS

Tables 3a–3d show the results from estimating the following equation relating market profitability to market characteristics:

ROA = α0 + α1 HHI + α2 POP + α3 BRANCH + α4 UNIT + α5 PINCOME + α6 DEPGROW + α7 TAR + α8 CONSUMER + α9 AGRI + α10 CI + α11 REALEST + α12 SECURITIES + α13 SUBCHS.

The regressions are run separately for urban and rural areas.

Tables 3a and 3b show the results for rural areas. For both 1986 and 1987, ROA in rural markets has a significant, positive relationship with HHI. Thus, in the earlier years examined, higher concentration is associated with higher profitability in rural markets; this result is consistent with traditional antitrust policy that views concentration in local banking markets as influencing the terms that consumers of banking services receive. The rural markets in 1996 and 1997, in contrast to the earlier years, do not show a statistically significant relationship between the concentration measures and market profitability; although not statistically significant, the estimated sign on HHI is actually negative in 1997.14 Moreover, the coefficients on HHI are significantly different in the statistical sense when comparing 1997 with 1987 and when comparing 1997 with 1996.15 The 1996 and 1997 results cast doubt on the notion that concentration in local banking markets continues to affect the terms that consumers of banking services receive. The finding that a market's profitability is no longer tied to its concentration is consistent with the argument that geographic distance is becoming less relevant in banking.
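The notes to Tables 3a–3d state that the coefficients come from ordinary least squares with heteroskedasticity-consistent standard errors. A minimal numpy sketch of that estimator (White/HC0 covariance); the data here are synthetic, not the article's:

```python
import numpy as np

def ols_hc0(X, y):
    """OLS coefficients with White (HC0) heteroskedasticity-consistent
    standard errors. X must already include a constant column."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    # "Meat" of the sandwich estimator: sum of e_i^2 * x_i x_i'
    meat = X.T @ (X * (resid ** 2)[:, None])
    cov = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(0)
n = 500
conc = rng.uniform(0, 1, n)                       # synthetic regressor
X = np.column_stack([np.ones(n), conc])
y = 1.0 + 2.0 * conc + rng.normal(0, 0.5 + conc)  # heteroskedastic noise
beta, se = ols_hc0(X, y)
```

With the noise variance depending on the regressor, the HC0 standard errors remain valid where the classical OLS formula would not be.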
Measures of concentration that are based on physical presence in local banking markets ignore the role of distant competitors; to the extent that distance is becoming a lower barrier to competition, ignoring distant competitors by defining banking markets locally is becoming increasingly misleading.

The significance of HHI in 1986 and 1987 and its insignificance in 1996 and 1997 do not change when the HHI is calculated under the alternative ways of including thrifts mentioned earlier. Also, the results shown were obtained using all markets for which the relevant data were available; a similar pattern of results occurs when extreme observations are excluded by limiting the sample to markets with an ROA between -20 percent and 10 percent.

The statistical test for differences across years also showed that the effects of PINCOME differed in rural markets when comparing 1986 and 1996, and that the effects of TAR differed in urban markets when comparing 1987 and 1997. The differences in effects across years for the other variables were not statistically significant.

The insignificance of HHI might also stem from multicollinearity of the explanatory variables. To reduce multicollinearity concerns, I estimate a model where HHI is the only explanatory variable. The results of these parsimonious regressions agree with those of the regressions with the control factors included: HHI is positive and significant only in the 1986 and 1987 rural equations.

The control variables in the rural models entered with mixed significance. BRANCH and UNIT are never significant. TAR is always significant with the expected negative sign. SUBCHS was significant with the expected positive sign in 1997. All the portfolio variables are individually significant in 1986, but none is individually significant in 1996; however, in both years the portfolio variables are jointly significant. In 1987, all the portfolio variables except CI are individually significant, and in 1997, none is; the portfolio variables are jointly significant in 1987 but not in 1997. The reduction in the significance of the portfolio shares as determinants of bank profitability may reflect the tranquility of bank credit markets in 1996 and 1997 relative to the oil-price-induced decline in asset quality that occurred in 1986 and 1987.

Table 3c
Estimation Results for Urban Markets for 1986 and 1996: Dependent Variable ROA
[Coefficient estimates and standard errors omitted.]
NOTES: ** and * denote statistical significance at the 1-percent and 5-percent levels, respectively. Heteroskedasticity-consistent standard errors are shown in parentheses. Coefficient estimates were obtained by ordinary least squares. Sample size was 342 for 1986 and 358 for 1996.

Tables 3c and 3d show the results for urban markets. Unlike in the rural markets, profitability in the urban markets does not show a significant relationship with HHI in any of the years examined; moreover, the estimated sign on HHI is negative in 1997. In all the years examined, the average level of concentration is much lower in urban markets than in rural markets. Hence, it is possible that concentration is low enough in urban markets for competitive outcomes to have been obtained in all periods of examination. Also, difficulties in controlling for all potential influences on profitability may have obscured the relationship between profitability and concentration.
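The robustness check that excludes extreme observations, limiting the sample to markets with an ROA between -20 percent and 10 percent, amounts to a simple filter before re-estimation. A sketch with made-up values:

```python
# Trim extreme ROA observations before re-estimating, mirroring the
# robustness check described above. The ROA values are illustrative.
def trim_roa(markets, low=-20.0, high=10.0):
    """Keep only markets whose ROA (in percent) lies in [low, high]."""
    return [m for m in markets if low <= m["roa"] <= high]

sample = [{"roa": 1.2}, {"roa": -35.0}, {"roa": 0.8}, {"roa": 14.5}]
trimmed = trim_roa(sample)   # drops the -35.0 and 14.5 outliers
```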
Finally, given that ROA is only as precise as the accounting assumptions that underlie it, it is possible that an underlying relationship between profitability and concentration exists but is masked by imprecision in the data.16

The control variables included in the urban models entered with mixed significance. BRANCH and UNIT, the variables included to capture the potential effect of geographic branching restrictions on profitability, are insignificant in all equations estimated. TAR is statistically significant for 1986 and 1987, but not in 1996 and 1997. Overall, these results are consistent with distance declining in importance in banking.

Table 3d
Estimation Results for Urban Markets for 1987 and 1997: Dependent Variable ROA
[Coefficient estimates and standard errors omitted.]
NOTES: ** and * denote statistical significance at the 1-percent and 5-percent levels, respectively. Heteroskedasticity-consistent standard errors are shown in parentheses. Coefficient estimates were obtained by ordinary least squares. Sample size was 341 for 1987 and 336 for 1997.

CONCLUSION

While concentration of local markets helped to explain profitability in rural markets a decade ago, it no longer does. The traditional approach to antitrust enforcement in banking is to view the market for banking services as local and geographically limited. However, if distance is becoming less relevant in banking, then local market concentration should be a less important determinant of profitability. The evidence presented here is consistent with the presence of nearby competitors influencing the competitiveness of rural banking markets in 1986 and 1987. In 1996 and 1997, however, the data are no longer consistent with that view: the presence of nearby competitors no longer helps to explain the profitability of the banking market, suggesting that markets for banking services are now geographically broader than before.

The change in the relationship between local market concentration and profitability over the past ten years accords with recent changes in technology. Falling communication costs are making distant competitors increasingly important, since lower communication costs make obtaining information about and transacting with distant banks more economical. Moreover, banking innovations are enabling long-distance transactions that once required face-to-face contact.

While the results presented here are not sufficient to pronounce distance dead in banking, they are consistent with a weakening of its role. Banks continue to maintain and grow extensive branch networks, suggesting that a substantial number of customers still value geographic proximity. However, the conduct and performance of banks appear to depend less now on the physical presence of competitors in local markets, suggesting that linkages between local areas and the broader banking market are stronger now than in the past. This finding is consistent with the eroding impact of distance as a barrier to competition in banking. The benefits of vigorous competition under a free market system have long been recognized. Advancing technology is promoting competition in new ways by diminishing the barrier of distance.

REFERENCES

Amel, Dean F. (1993), "State Laws Affecting the Geographic Expansion of Commercial Banks," Board of Governors of the Federal Reserve System, unpublished manuscript, September, 1-43.

Amel, Dean F. (1997), "Antitrust Policy in Banking: Current Status and Future Prospects," Proceedings of the 1997 Conference on Bank Structure and Competition (Chicago: Federal Reserve Bank of Chicago).

Berger, Allen N., and Timothy H. Hannan (1989), "The Price-Concentration Relationship in Banking," Review of Economics and Statistics 71 (May): 291-99.

Cairncross, Frances (1997), The Death of Distance: How the Communications Revolution Will Change Our Lives (Boston: Harvard Business School Press).

Conference of State Bank Supervisors (1986, 1996), A Profile of State Chartered Banking.

Cyrnak, Anthony W. (1998), "Bank Merger Policy and the New CRA Data," Federal Reserve Bulletin 84, September, 703-15.

Gilbert, R. Alton (1984), "Bank Market Structure and Competition: A Survey," Journal of Money, Credit and Banking 16 (November): 617-45.

Greef, Charles E., and Peter G. Weinstock (1996), "Tax Freedom Day Comes Early--Sub S Status Now Available for Banks," The Texas Independent Banker 23 (October): 16-18.

Hannan, Timothy H. (1991), "Bank Commercial Loan Markets and the Role of Market Structure: Evidence from Surveys of Commercial Lending," Journal of Banking and Finance 15 (February): 133-49.

Jackson, William E., and Robert A. Eisenbeis (1997), "Geographic Integration of Bank Deposit Markets and Restrictions on Interstate Banking: A Cointegration Approach," Journal of Economics and Business 49 (July/August): 335-46.

Kwast, Myron L., Martha Starr-McCluer, and John D. Wolken (1997), "Market Definition and the Analysis of Antitrust in Banking," Antitrust Bulletin 42 (Winter): 973-95.

Mester, Loretta J. (1997), "What's the Point of Credit Scoring?," Federal Reserve Bank of Philadelphia Business Review, September/October: 3-16.

Online Banking Report (1998), "True U.S. Internet Banks," <http://www.onlinebankingreport.com/fulserv2/shtml>, visited October 1, 1998.

Osborne, Dale K. (1988), "Competition and Geographical Integration in Commercial Bank Lending," Journal of Banking and Finance 12 (March): 85-103.

Peek, Joe, and Eric S. Rosengren (1998), "The Evolution of Small Bank Lending to Small Businesses," Federal Reserve Bank of Boston New England Economic Review, March/April: 27-36.

Peltzman, Sam (1977), "The Gains and Losses from Industrial Concentration," Journal of Law and Economics 20 (October): 229-63.

Radecki, Lawrence J. (1998), "The Expanding Geographic Reach of Retail Banking Markets," Federal Reserve Bank of New York Economic Policy Review, June, 15-34.

Rhoades, Stephen A. (1993), "The Herfindahl-Hirschman Index," Federal Reserve Bulletin 79, March, 188-89.

Stigler, George (1961), "The Economics of Information," Journal of Political Economy 69 (June): 213-25.

Tirole, Jean (1988), The Theory of Industrial Organization (Cambridge, Mass.: MIT Press).

Waldon, Tracy (1998), The Industry Analysis Division's Reference Book of Rates, Price Indices, and Expenditures for Telephone Services (Washington, D.C.: Federal Communications Commission).

Benchmarking the Productive Efficiency of U.S. Banks

The U.S. banking industry is highly competitive. Conventional wisdom holds that in competitive industries the strongest institutions survive and that those institutions are among the most efficient and effective. Success in competitive markets demands achieving the highest levels of performance through continuous improvement and learning. It is therefore imperative that managers understand where they stand relative to competitors and best practices regarding their productivity.
Comparative and benchmarking information can provide impetus for significant improvements and can alert institutions to new practices and new paradigms. Uncovering and understanding best practices, however, is often limited by the simplicity of the analytical framework and the difficulty in collecting and analyzing vast quantities of data for large-scale problems. Simple gap analyses—probably the most commonly used technique for benchmarking—can provide important insights but are somewhat limited in scope because they take a one-dimensional view of a service, product, or process and because they ignore any interactions, substitutions, or trade-offs between key variables.

For the U.S. banking industry, DeYoung (1998) provides evidence that simple, one-dimensional accounting ratios give an incomplete picture. DeYoung found that well-managed banks often incur significantly higher raw (accounting-based) unit costs than poorly managed banks. DeYoung reports that blind pursuit of accounting-based cost benchmarks actually might reduce a bank's cost-efficiency by cutting back on expenditures essential to a well-run bank. Thus, a more inclusive multiple-input, multiple-output framework for evaluating productive efficiency and providing benchmarking information on how to become a well-managed bank seems essential to improving decision making at poorly managed banks.

Thomas F. Siems and Richard S. Barr

Thomas F. Siems is a senior economist and policy advisor at the Federal Reserve Bank of Dallas. Richard S. Barr is an associate professor of computer science and engineering at Southern Methodist University. We would like to thank Kory Killgo, Kelly Klemme, and Sheri Zimmel for outstanding research assistance. The second author acknowledges that this work was supported in part by the National Science Foundation, grant DMII 93-13346, and the Texas Higher Education Coordinating Board, Advanced Technology Program, grant ATP-003613-023.

More efficient banks tend to be higher performers and safer institutions. We use a constrained-multiplier, input-oriented, data envelopment analysis (DEA) model to create a robust quantitative foundation to benchmark the productive efficiency of U.S. banks. DEA is an alternative and a complement to traditional central-tendency (statistical regression) analyses, and it provides a new approach to traditional cost-benefit analyses and frontier (or best-practices) estimation. DEA is a linear-programming-based technique that converts multiple inputs and multiple outputs into a scalar measure of relative productive efficiency.

In this study, we are interested in benchmarking the productive efficiency of U.S. banks. This is accomplished by comparing the volume of services provided and resources used by each bank with those of all other banks. To further evaluate our results and demonstrate their usefulness as a complementary off-site monitoring tool for regulators, we compare our DEA results with bank examination (CAMEL) ratings.

We find that the most efficient banks are relatively successful in controlling costs and also hold a greater amount of earning assets. The more efficient banks also earn a significantly higher return on average assets, hold more capital, and manage less risky and smaller loan portfolios than less efficient institutions. To validate our results, we compare the relative efficiency scores derived from the DEA model with the examination ratings assigned by bank supervisors. We find a close association between our efficiency scores and bank examination ratings, suggesting that our model could be useful to regulators as a complementary off-site monitoring tool.

WHAT IS BENCHMARKING?

Benchmarking is the search for best practices to improve an organization's products and processes. The word "benchmark" comes from geographic surveying and means "to take a measurement against a reference point." Benchmarking has become the darling of the continuous-process-improvement movement; in fact, in 1991 it became an integral part of the Malcolm Baldrige National Quality Award guidelines.1 Xerox Corp.'s pioneering use of benchmarking led to its reclamation of leading market share from overseas competitors. Xerox cites 40 percent to 50 percent lower production costs, increased quality, 25 percent to 50 percent reduction in product-development cycle time, and inventory reductions of 75 percent (Finein 1990). Such radical improvements have not been lost on those organizations eager to excel.

Virtually every documented benchmarking analysis has the following steps: (1) determine what to benchmark, (2) form a benchmarking team, (3) identify benchmarking targets, (4) collect and analyze information and data, and (5) take action.2 The fundamental idea is to measure and compare the products, services, or work processes of organizations that are identified as representing best practices. From this, one can use benchmarking to assess relative performance, establish organizational targets and goals, and monitor and learn from industry best practices.

Step 1 consists of choosing the items or processes to be benchmarked and selecting customer-oriented measures for evaluation purposes. Banks can benchmark all kinds of things, although the most valuable generally fall into the following four categories: business results, cycle time, quality assurance, and assets. Steps 2 and 3 are the organizational steps necessary for information and data collection. It is useful to form a benchmarking team with employees from many different parts of the bank and to develop efficient data collection and information gathering systems. Step 4 uses the measures drawn from all relevant information sources to assess the best-practice organizations, and one's standing relative to them, both at present and projected into the future. The U.S. banking industry reports balance sheet and income statement data to federal banking regulators on a quarterly basis. These data are often used to assess performance relative to peer groups. Finally, once the best practices are identified and understood, step 5 uses these results to formulate action plans for improvement.

LIMITATIONS OF CURRENT BENCHMARKING METHODS

Every documented benchmarking study contains a data analysis component. In Camp's (1989) seminal book on benchmarking, data analysis involves determining the current performance "gap" and then projecting future performance levels. However, benchmarking analysts are often left to their own devices as to how to actually analyze the data, characterize and measure gaps, and project future performance levels.

One of the earliest approaches to competitor assessment consists of simple time-series plots and projections of each measure identified for the benchmarking organization and its perceived best competitor.3 While these analyses can be useful (mostly for financial performance measures like return on assets or relative stock price movements), they are somewhat limited in scope; that is, simple gap analyses are one-dimensional views of a service, product, or process that ignore any interactions, substitutions, or trade-offs between key variables. For example, a negative correlation between two or more desired qualities will be disregarded using simple gap analyses. An automobile manufacturer designing a new car would like both "high fuel economy" and "low time from 0 to 60 miles per hour." But improvement on one quality measure will have a negative impact on the other. And a complete understanding of the process is aggravated further as more quality measures are considered, such as vehicle safety. Simple gap analyses examine only one measure at a time and ignore any interactions between variables. Such analyses are difficult to interpret when trade-offs and choices must be made between multiple measures.

The commonly employed analytical framework and surrounding theory and methodology used to identify best-practice competitors and contrast them within the reference population is somewhat limited. In addition, the fundamental fact remains that benchmarking in the service sector is far more challenging than in manufacturing, primarily due to the difficulty in measuring services.4 Overcoming these limitations requires an innovative approach.

1 The Malcolm Baldrige National Quality Award was established in 1987 through passage of the Malcolm Baldrige Quality Improvement Act. The award was created to stimulate American companies to improve quality and productivity, recognize their achievements, and establish criteria to evaluate quality improvement efforts. See U.S. Department of Commerce (1993) and Hart and Bogan (1992).
2 See Camp (1989), Harrington (1991), and Spendolini (1992).
3 See Sammon, Kurland, and Spitalnic (1984).

DEA was first used to evaluate productive efficiency among nonprofit entities. Its use then spread to evaluate the relative productive efficiency of branches in large networks and of individual institutions in entire industries.6 In general, DEA focuses on technological, or productive, efficiency rather than economic efficiency.7 For our purposes, productive efficiency focuses on levels of inputs relative to levels of outputs.
To be productively efficient, a firm must either maximize its outputs given inputs or minimize its inputs given outputs. Economic efficiency is somewhat broader in that it involves optimally choosing the levels and mixes of inputs and/or outputs based on reactions to market prices. To be economically efficient, a firm seeks to optimize some economic goal, such as cost minimization or profit maximization. In this sense, economic efficiency requires both productive efficiency and allocative efficiency.

As discussed in Bauer et al. (1998), it is quite plausible that some productively efficient firms are economically inefficient, and vice versa. Such efficiency mismatches depend on the relationship between managers' abilities to use the best technology and their abilities to respond to market signals. Productive efficiency requires only input and output data, whereas economic efficiency also requires market price data. Allocative efficiency is about doing the right things, productive efficiency is about doing things right, and economic efficiency is about doing the right things right. DEA was developed specifically to measure relative productive efficiency, which is our focus here.

DATA ENVELOPMENT ANALYSIS: A NEW WAY TO ANALYZE DATA

Despite the paucity of tools available to analyze best practices and compute relative strengths and weaknesses, the success of benchmarking underscores its inherent usefulness as a process and points to the dramatic additional gains that are possible. A more useful benchmarking paradigm should have the following attributes:

• a solid economic and mathematical underpinning,
• alternative actual and composite/hypothetical best-practice units,
• the ability to take into account the trade-offs and substitutions among the benchmark metrics, and
• a means to suggest directions for improvement on the many organizational dimensions included in the study.
Data envelopment analysis, or DEA, is a frontier estimation methodology with the above attributes.5 DEA has proven to be a valuable tool for strategic, policy, and operational decision problems, particularly in the service and nonprofit sectors. Its usefulness to benchmarking is adapted here to provide an analytical, quantitative benchmarking tool for measuring relative productive efficiency. DEA was originally developed by Charnes, Cooper, and Rhodes (1978) to create a performance measure that managers could use when conventional market-based performance indicators were unavailable. DEA computes the relative technical (or productive) efficiency of individual decision-making units by using multiple inputs and outputs.

According to microeconomic theory, the concept of a production function forms the basis for a description of input-output relationships in a firm; that is, the production function shows the maximum amount of outputs that can be achieved by combining various quantities of inputs. Alternatively, considered from an input orientation, the production function describes the minimum amount of inputs required to achieve the given output levels. For a given situation, the production function, if it were known, would provide a description of the production technology. Efficiency computations then could be made relative to this frontier. Specifically, inefficiency could be determined by the amount of deviation from this production function, or frontier. In practice, however, one has only data—a set of observations corresponding to achieved output levels for given input levels. Thus, the initial problem is the construction of an empirical production frontier based on the observed data.

[Chart 1: Comparison of DEA and Regression Approaches]

DEA calculates each firm's efficiency as the ratio of its total weighted output to its total weighted input. This is represented as:

EFFICIENCY_k = (Σ_r u_rk OUTPUT_rk) / (Σ_i v_ik INPUT_ik),

where u_rk is the unit weight placed on output r and v_ik is the unit weight placed on input i by the kth firm in the population. Now, how should the weights (the u's and v's) be determined?

4 In many industries, the intangibility, multiplicity, and heterogeneity of service outputs make it difficult to construct clear and uniform performance standards within an industry. And of those measures in use, most are simple ratios—return on investment, time per transaction, cost per person served, services delivered per hour, etc.—with no synthesizing metrics commonly accepted. Interested readers are directed to Fitzgerald et al. (1991).
5 Frontier estimation methodologies are mathematical approaches to determine best-practice firms, that is, those firms performing on the frontier. In the past two decades, four main frontier approaches have been developed to assess firm performance relative to some empirically defined best-practice standard. DEA is a nonparametric linear programming approach. The other three approaches are econometric approaches—the stochastic frontier approach (SFA), thick frontier approach (TFA), and distribution-free approach (DFA). The approaches differ primarily in their assumptions regarding the shape of the efficient frontier and how random error is handled. Interested readers are directed to Berger and Humphrey (1997), Bauer et al. (1998), and the papers included in Fried, Lovell, and Schmidt (1993).
6 See, for example, Banker and Johnston (1994), Thompson et al. (1990), Boussofiane, Dyson, and Thanassoulis (1991), and Grosskopf et al. (forthcoming).
7 DEA can be adapted to examine economic efficiency by observing the costs to produce a set of outputs given the best-practice technology and input prices. Interested readers are directed to Bauer et al. (1998) and Fare, Grosskopf, and Lovell (1994).
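Before turning to how the weights are chosen, note that the efficiency ratio itself is straightforward to compute for any fixed set of weights. A minimal sketch; the input, output, and weight values below are hypothetical, not from the article:

```python
# Weighted-output to weighted-input efficiency ratio for one firm,
# in the spirit of the EFFICIENCY ratio above. Numbers are illustrative.
def efficiency(outputs, inputs, u, v):
    """u[r] is the weight on output r; v[i] is the weight on input i."""
    weighted_out = sum(ur * yr for ur, yr in zip(u, outputs))
    weighted_in = sum(vi * xi for vi, xi in zip(v, inputs))
    return weighted_out / weighted_in

# Two hypothetical outputs (e.g., loans, securities) and two
# hypothetical inputs (e.g., labor, funds):
score = efficiency(outputs=[40.0, 10.0], inputs=[5.0, 45.0],
                   u=[0.01, 0.02], v=[0.04, 0.01])
# (0.01*40 + 0.02*10) / (0.04*5 + 0.01*45) = 0.6 / 0.65
```

The interesting part of DEA, taken up next in the text, is that the weights are not fixed in advance but chosen per firm by an optimization.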
DEA selects the weights that maximize each firm's productive efficiency score as long as no weight is negative and the weights are universal; that is, any firm should be able to use the same set of weights to evaluate its own efficiency ratio, and the resulting ratio must not exceed one. So, for each firm, DEA maximizes the ratio of its own total weighted output to its own total weighted input. In general, the model will put higher weights on those inputs the firm uses least and those outputs the firm produces most.

Parametric approaches require the imposition of a specific functional form—such as a regression equation or production function—that relates the independent variables to the dependent variables. In contrast, as a nonparametric method, DEA requires no assumptions about the functional form and calculates a maximal performance measure for each firm relative to all other firms. Interested readers are directed to Charnes et al. (1994).

In a study of hospital efficiency reported by Banker, Conrad, and Strauss (1986), regression analysis concluded that no returns to scale were present, whereas DEA uncovered the possibility of returns to scale in individual hospitals. In another context, Leibenstein and Maital (1992) used DEA to demonstrate that gains from moving inefficient firms onto the frontier can be more significant than gains from moving efficient firms to the optimal point on the frontier.

DEA was originally developed by Charnes, Cooper, and Rhodes (1978), who expanded on the concept of technical efficiency as outlined in Farrell (1957). See Ali (1992). The specific DEA model incorporated here is the constrained-multiplier, input-oriented ratio model described in Charnes et al. (1990) and Charnes et al. (1989).

10 A constrained-multiplier DEA model places restrictions on the range for the weights, or multipliers.
While unconstrained DEA models allow each firm to be evaluated in its best possible light, undesirable consequences can result when firms appear efficient in ways that are difficult to justify. More specifically, to maximize a particular firm's efficiency score, unconstrained models often assign unreasonably low or excessively high values to the weights. In contrast, constrained-multiplier models incorporate judgment, or a priori knowledge, into the evaluation of each firm. Upper and lower bounds are imposed on the individual weights and used to transform the data before the individual DEA efficiency scores are computed. (See the box titled "Mathematical Foundations for DEA" for more details.)

DEA constructs such an empirical production frontier. More precisely, DEA is a nonparametric frontier estimation method that applies linear programming to observed data to locate the best-practice frontier.8 This frontier can then be used to evaluate the productive efficiency of each of the organizational units responsible for the observed output and input quantities. As such, DEA is a methodology directed to frontiers rather than central tendencies. As shown by the single-input, single-output representation in Chart 1, instead of trying to fit a regression line through the center of the data, DEA "floats" a piecewise linear surface on top of the observations. The focus of DEA is on the individual observations, in contrast to the focus on averages and the estimation of parameters associated with regression approaches. Because of this unique orientation, DEA is particularly adept at uncovering relationships that remain hidden from other methodologies.9 Key to the identification of such a frontier from empirical observations is the solution of a set of mathematical programming problems of sizable proportions.
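The weight-selection rule described above—each firm receives the weights that maximize its own weighted-output-to-weighted-input ratio, subject to universality and nonnegativity—can be sketched as one such mathematical program. The sketch below is illustrative only and is not the authors' code; the function name and the use of NumPy/SciPy are our assumptions, and the model shown is the standard unconstrained (CCR) multiplier form rather than the article's constrained variant.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, k):
    """Input-oriented CCR (multiplier-form) DEA score for unit k.

    X: (m, n) array of inputs, Y: (s, n) array of outputs;
    each column is one decision-making unit (DMU)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [u (s output weights), v (m input weights)].
    # Maximize u'y_k, i.e., minimize -u'y_k.
    c = np.concatenate([-Y[:, k], np.zeros(m)])
    # Universality: with these weights, no unit's ratio may exceed one,
    # i.e., u'y_j - v'x_j <= 0 for every unit j.
    A_ub = np.hstack([Y.T, -X.T])
    b_ub = np.zeros(n)
    # Normalization of unit k's virtual input: v'x_k = 1.
    A_eq = np.concatenate([np.zeros(s), X[:, k]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun  # efficiency score in (0, 1]
```

A full DEA study solves one such program per unit, which is why evaluating thousands of banks was computationally demanding until refined algorithms appeared.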
Specifically, if one is to evaluate and compare n different organizations along a variety of criteria simultaneously, then n separate, but related, mathematical programming problems must be optimized and the results combined. The computational capacity and speed needed for these large-scale problems have only recently become available. Until just a few years ago, the maximum number of units, or organizations, that could be evaluated was in the hundreds.11 More recently, refined algorithms that employ parallel-processing technology have produced the capability to simultaneously consider tens of thousands of units.12 While most benchmarking studies narrow their scope to a small number of units (most likely because of the computational speed required and the difficulty in obtaining vast quantities of data), global or world-class benchmarking can now be performed with these new algorithms and computational structures. As a result, large-scale analyses can be used to distill the true leaders from a large pool of competitors, and the entire U.S. banking industry can now be analyzed using DEA.

DEA produces relative efficiency measures. The solid line in Chart 1 is the derived efficient frontier, or envelopment surface, which represents the revealed best-practice production frontier. The relative efficiency of each firm in the population is calculated in relation to this frontier. For each inefficient firm (those that lie below the envelopment surface), DEA identifies the sources and level of inefficiency for each of the inputs and outputs. Mathematically, the relative productive efficiency of each firm is computed as the ratio of its total weighted output to its total weighted input.

Table 1: Variable Definitions. The inputs are salary expense, premises and fixed assets, other noninterest expense, interest expense, and purchased funds*; the outputs are earning assets†, interest income, and noninterest income.
Table 1: Variable Definitions (Call Report item codes)

Inputs:
Salary expense: RIAD4135
Premises and fixed assets: RCFD2145
Other noninterest expense: RIAD4093 - RIAD4135
Interest expense: RIAD4073
Purchased funds: RCFD0278 + RCFD0279 + RCON2840 + RCFD2850 + RCON6645 + RCON6646

Outputs:
Earning assets: RCFD2122 - (RCFD1407 + RCFD1403) + RCFD0390 + RCFD0071 + RCFD0276 + RCFD0277 + RCFD2146
Interest income: RIAD4107
Noninterest income: RIAD4079

* Purchased funds are federal funds purchased and securities sold under agreement to repurchase, demand notes issued to the U.S. Treasury, other borrowed money, time certificates of deposit of $100,000 or more, and open-account time deposits of $100,000 or more.

† Earning assets are total loans less loans past due 90 days or more and loans in nonaccrual status, plus total securities, interest-bearing balances, federal funds sold and securities purchased under agreements to resell, and assets held in trading accounts.

USING DEA TO BENCHMARK PRODUCTIVE EFFICIENCY OF BANKS

To examine this analytical framework for benchmarking, we focus on the U.S. commercial banking industry. Following earlier research, we slightly modify a five-input, three-output DEA model that captures the essential financial intermediation functions of a bank (see Chart 2).13 The model approximates the decision-making nature of bank management by incorporating the input allocation and product mix decisions needed to attract deposits and make loans and investments. In general, the five inputs represent resources required to operate a bank (i.e., labor costs, buildings and machines, and various funding costs). The three outputs represent desired outcomes: earning assets, interest income, and noninterest income. The variable definitions and Call Report item codes are shown in Table 1.14 According to this model, productively efficient banks—or best-practice banks—allocate resources and control internal processes by effectively managing their employees, facilities, expenses, and sources and uses of funds while working to maximize earning assets and income.
The upper and lower bounds for the unit weights used in the constrained-multiplier model were determined through a survey of twelve experienced Federal Reserve Bank of Dallas bank examiners regarding their knowledge of factors that are important in judging the quality of bank management. The survey was intended to identify the correct set of the most important inputs and outputs and then evaluate the importance of each variable in relation to the others. Examiners were asked the following four questions:

1. What publicly available data do you think are important in judging the quality of bank management?
2. What publicly available data do you think are important in influencing the quality of bank management?
3. Which of the given list of criteria are most important in judging and influencing the quality of bank management?
4. Evaluate the relative importance (via pairwise comparisons) of the factors given below using one of the following indicators: the factors are equal in importance (=); one factor is slightly greater in importance (> or <); one factor is greater in importance (> or <); one factor is much greater in importance (» or «).

Questions 1 and 2 focus on the most important outputs and inputs, respectively. Each criterion was rated on a scale of 1 (not important) to 7 (extremely important). Questions 3 and 4 provide relative comparisons among the multiple inputs and multiple outputs. The "given list of criteria" and "factors given below" referenced in questions 3 and 4, respectively, refer to the five inputs and three outputs used in our model.

Chart 2: DEA Model

12 See Barr and Durchholz (1997).

13 Variables included in this model have been shown to capture the importance of management to a bank's survival. See Barr, Seiford, and Siems (1993) and Barr and Siems (1997) for discussion of a similar model used to evaluate bank efficiency and then predict survivability.
14 Banks submit Call Reports to the federal banking regulators on a quarterly basis to capture balance sheet and income statement information. Additional variables as potential candidates for inclusion in the model are numerous, however, so we employ this relatively parsimonious model because of its value in previous research studies.

Table 2: Constraints for the Multipliers (Weights) in the DEA Model

                                Survey range    Survey average    Analytic hierarchy process weights
                                (percent)       (percent)         (percent)
Inputs
  Salary expense                15.8-35.9       23.1              25.2
  Premises and fixed assets      3.1-15.7        9.6              11.4
  Other noninterest expense     15.8-35.9       22.7              19.8
  Interest expense              17.2-42.8       25.9              23.5
  Purchased funds               12.1-34.0       18.8              20.2
Outputs
  Earning assets                40.9-69.5       51.3              52.5
  Interest income               25.7-46.9       34.3              33.8
  Noninterest income            10.2-20.2       14.4              13.7

To check robustness, the relative average weights were also computed using the analytic hierarchy process and compared with the survey averages.15 As shown, four of the five publicly available input variables used in our model have relatively equal importance; only premises and fixed assets has a much lower average weight. For the three publicly available output variables, earning assets is clearly the most important, followed by interest income and then noninterest income.

MODEL RESULTS

Our DEA benchmarking model was applied to year-end 1991, 1994, and 1997 data. There were 11,397 banks in operation in 1991, 10,224 banks in 1994, and 8,628 banks in 1997 that conformed to our data requirements.16 To evaluate the input and output factors driving the efficiency results, the banks were divided into quartiles for each of the three analysis periods based on their DEA efficiency scores. Table 3 shows the average values for each input and output variable, expressed as a percentage of total assets.

As shown in Table 2, upper and lower bounds on the values of the multipliers were established from the survey results based on the relative scores given by the bank examiners.
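One way such bounds can be imposed is as linear constraints on each multiplier's share of a unit's virtual input and virtual output. The sketch below is our own illustration of that idea, not the paper's exact constrained-multiplier formulation (which transforms the data directly); the function and parameter names are hypothetical, and NumPy/SciPy are assumed.

```python
import numpy as np
from scipy.optimize import linprog

def constrained_dea(X, Y, k, in_lo, in_hi, out_lo, out_hi):
    """Input-oriented CCR score for unit k, with each input's and
    output's share of unit k's virtual input/output bounded.

    X: (m, n) inputs; Y: (s, n) outputs; *_lo/*_hi: share bounds."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([-Y[:, k], np.zeros(m)])      # maximize u'y_k
    rows, rhs = [], []
    # Universality: u'y_j - v'x_j <= 0 for all units j.
    for j in range(n):
        rows.append(np.concatenate([Y[:, j], -X[:, j]]))
        rhs.append(0.0)
    # Output-share bounds: lo_r * (u'y_k) <= u_r y_rk <= hi_r * (u'y_k),
    # written as linear rows in u.
    for r in range(s):
        e = np.zeros(s)
        e[r] = Y[r, k]
        rows.append(np.concatenate([out_lo[r] * Y[:, k] - e, np.zeros(m)]))
        rhs.append(0.0)
        rows.append(np.concatenate([e - out_hi[r] * Y[:, k], np.zeros(m)]))
        rhs.append(0.0)
    # Normalization v'x_k = 1 makes input shares equal to v_i x_ik,
    # so input-share bounds become simple bounds on each v_i.
    A_eq = np.concatenate([np.zeros(s), X[:, k]]).reshape(1, -1)
    bounds = [(0, None)] * s + [
        (in_lo[i] / X[i, k], in_hi[i] / X[i, k]) for i in range(m)]
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return -res.fun
```

With trivial bounds (each share forced to its only possible value in a one-input, one-output example), the score reduces to the unconstrained one.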
Mathematical Foundations for DEA

The mathematical programming approach of DEA is a robust procedure for efficient frontier estimation. In contrast to statistical procedures based on central tendencies, DEA focuses on revealed best-practice frontiers. That is, DEA analyzes each decision-making unit (DMU) separately; it then measures relative productive efficiency with respect to the entire population being evaluated. DEA is a nonparametric form of estimation; that is, no a priori assumptions about the analytical form of the production function and no distributional assumptions are required.

The essential characteristic of the constrained-multiplier, input-oriented DEA model construction is the reduction of the multiple-input, multiple-output situation for each DMU to that of a single "virtual" input and a single "virtual" output. For a DMU, the ratio of this single virtual output to single virtual input provides a measure of relative efficiency that is a function of the multipliers. Thus, each DMU seeks to maximize this ratio as its objective function. The decision variables are the unit weights (multipliers) for each of the inputs and outputs, so that the objective function seeks to maximize the ratio of the total weighted output of DMU_k divided by its total weighted input.

15 The analytic hierarchy process is an effective decision-making tool that quantifies subjective judgments and preferences. In essence, a hierarchy of components is developed, numerical values are assigned to subjective judgments using pairwise comparisons, and then the judgments are synthesized to determine which components have the highest priority and influence in the decision process. Interested readers are directed to Saaty (1982) and Golden, Wasil, and Harker (1989).
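The pairwise judgments collected in question 4 can be synthesized in the standard analytic hierarchy process way, by taking the normalized principal eigenvector of a reciprocal comparison matrix. This is a minimal sketch of Saaty's method in general, not the authors' implementation; the function name and use of NumPy are our assumptions.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix:
    the normalized principal eigenvector (Saaty's AHP)."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)  # principal eigenvector
    return w / w.sum()                              # normalize to sum to 1
```

For example, a single judgment that one factor is three times as important as another gives the matrix [[1, 3], [1/3, 1]] and weights (0.75, 0.25).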
16 Banks that were chartered within three years of the analysis date were excluded from the analysis because de novo banks typically have different cost structures than more established banks (see DeYoung, 1998). Also, banks reporting nonpositive values for any input or output variable (with the exception of purchased funds, which could equal zero) were removed from the analysis.

In the discussion to follow, we assume there are n DMUs to be evaluated. Each DMU consumes varying amounts of m different inputs to produce s different outputs. Specifically, DMU_k consumes amounts X_k = {x_ik} of inputs (i = 1,...,m) and produces amounts Y_k = {y_rk} of outputs (r = 1,...,s). We assume x_ik > 0 and y_rk > 0. The s × n matrix of output measures is denoted by Y, and the m × n matrix of input measures is denoted by X.

A number of different mathematical programming DEA models have been proposed in the literature (see Charnes et al., 1994). Essentially, these various models each seek to establish which of the n DMUs determine an envelopment surface, which defines the best-practice efficiency frontier. The geometry of this envelopment surface is prescribed by the specific DEA model employed. To be efficient, the point corresponding to DMU_k must lie on this surface. DMUs that do not lie on the envelopment surface are termed inefficient. The DEA results identify the sources and amounts of inefficiency and provide a summary measure of relative productive efficiency.

The objective for DMU_k is to

maximize EFFICIENCY_k = (Σ_r u_rk y_rk) / (Σ_i v_ik x_ik),

where u_rk is the unit weight selected for output y_rk and v_ik is the unit weight selected for input x_ik. For the constrained-multiplier model, these weights must lie within the range specified by incorporating expert information, managerial preference, or other judgment into the analysis. The universality criterion requires DMU_k to choose weights subject to the constraint that no other DMU would have an efficiency score greater than one if it used the same set of weights, so that

(Σ_r u_rk y_rj) / (Σ_i v_ik x_ij) ≤ 1, for all j = 1,...,n.
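Together with the nonnegativity conditions that follow, this fractional program has a standard ordinary-linear-program equivalent, obtained by normalizing DMU_k's virtual input to one (the Charnes and Cooper transformation). A conventional statement, written here in the notation of this box as a sketch:

```latex
\begin{aligned}
\max_{u,v}\quad & \sum_{r=1}^{s} u_{rk}\, y_{rk} \\
\text{subject to}\quad & \sum_{i=1}^{m} v_{ik}\, x_{ik} = 1, \\
& \sum_{r=1}^{s} u_{rk}\, y_{rj} - \sum_{i=1}^{m} v_{ik}\, x_{ij} \le 0,
  \qquad j = 1,\dots,n, \\
& u_{rk} \ge 0 \;(r = 1,\dots,s), \qquad v_{ik} \ge 0 \;(i = 1,\dots,m).
\end{aligned}
```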
In addition, the selected weights cannot be negative, so that u_rk ≥ 0 for r = 1,...,s and v_ik ≥ 0 for i = 1,...,m. This fractional programming problem is then transformed, following Charnes and Cooper (1962), into an equivalent ordinary linear programming problem. A complete DEA solution involves the solution of n such linear programs, one for each DMU.

Table 3: Bank Profiles by DEA Efficiency Quartile
DEA efficiency quartile: 1 Most efficient (percent), 2 (percent), 3 (percent), 4 Least efficient (percent), Most to least efficient difference (percent)

1991 data
Inputs 1.83 2.22 -.40* -1.22* 2.41 4.62 16.07 -.87* -9.78* 88.24 4.44* .13* -.05 Salary expense / total assets 1.43 1.54 1.65 Premises and fixed assets / total assets Other noninterest expense / total assets Interest expense / total assets 1.00 1.53 4.71 1.48 1.62 1.76 1.84 Purchased funds / total assets 6.29 4.70 8.17 4.66 11.12 Earning assets / total assets 92.68 91.67 Interest income / total assets 8.68 8.71 90.59 8.67 .95 .79 .89 8.55 1.00 Number of institutions 2,850 2,850 .7340 .6334 2,848 .5982 2,849 Average efficiency score Lower boundary .5387 .4611 .5665 .6334 .5092 .5664 0 .5091 1.59 1.56 1.70 1.77 1.95 2.14 1.61 1.72 2.62 2.13 2.68 10.89 12.88 -3.38* 91.54 90.36 6.67 2.23* .37* .08* Outputs Noninterest income / total assets Upper boundary 1.0000 .2728*

1994 data
Inputs Salary expense / total assets 1.57 Premises and fixed assets / total assets Other noninterest expense / total assets 1.19 Interest expense / total assets 1.79 2.52 Purchased funds / total assets 9.50 2.58 10.29 92.59 7.04 92.08 6.91 1.30 .80 6.80 .85 Number of institutions 2,556 2,556 2,557 Average efficiency score .7356 .6404 .5742 2,555 .5207 .5550 0 1.0000 .6150 .5932 .6404 .5932 .5550 Salary expense / total assets 1.67 1.60 Premises and fixed assets / total assets Other noninterest expense / total assets Interest expense / total assets .98 1.85
1.55 1.31 1.64 1.94 1.75 2.44 3.29 3.30 12.33 -.38* -.96* -.34* -.16* Outputs Earning assets / total assets Interest income / total assets Noninterest income / total assets Lower boundary Upper boundary 1.05 .25 .2149*

1997 data
Inputs Purchased funds / total assets 10.46 -.08 -1.45* -.07 .14* 1.50 1.92 3.27 13.63 3.15 15.32 -4.85* 91.83 90.65 2.33* 7.37 .84 7.33 .13† .90 .90* Outputs Earning assets / total assets 92.99 Interest income / total assets 7.45 Noninterest income / total assets 1.80 92.60 7.41 .77 Number of institutions 2,157 2,157 2,157 2,157 Average efficiency score Lower boundary .6685 .4722 .4313 .3982 .3717 .3067 1.0000 .4721 .3451 .3981 0 .3450 Upper boundary .3617*

* Indicates significant difference at the .01 level. † Indicates significant difference at the .05 level.

Table 4: Bank Performance Measures by DEA Efficiency Quartile
DEA efficiency quartile: 1 Most efficient (percent), 2 (percent), 3 (percent), 4 Least efficient (percent), Most to least efficient difference (percent)
1.00 8.81 53.34 1.65 .82 8.25 54.74 1.96 .01 7.76 56.56 2.93 1.22* 2.59* -7.61* -1.38*
1991 data Return on average assets Equity / total assets Total loans / total assets Nonperforming loans / gross loans 1.23 10.35 48.95 1.55 Number of institutions Average efficiency score Lower boundary Upper boundary 2,850 .7340 .6334 1.0000 2,848 .5982 .5665 .6334 2,849 .5387 .5092 .5664 2,850 .4611 0 .5091 .2728* .91* 2.56* -1.06† -.46*

1994 data Return on average assets Equity / total assets Total loans / total assets Nonperforming loans / gross loans 1.52 11.19 53.22 1.00 1.26 9.62 55.58 .96 1.06 9.10 55.38 .99 .61 8.63 54.27 1.46 Number of institutions Average efficiency score Lower boundary Upper boundary 2,556 .7356 .6404 1.0000 2,556 .6150 .5932 .6404 2,557 .5742 .5550 .5932 2,555 .5207 0 .5550 .2149* .72* 3.13* -5.75* -.11†

1997 data Return on average assets Equity / total assets Total loans / total assets Nonperforming loans / gross loans 1.57 12.42 54.78 1.04 1.31 10.19 59.06 .92 1.20 9.49 60.23 .94 .86 9.29 60.53 1.15 Number of institutions Average efficiency score Lower boundary Upper boundary 2,157 .6685 .4722 1.0000 2,157 .4313 .3982 .4721 2,157 .3717 .3451 .3981 2,157 .3067 0 .3450 .3617*

* Indicates significant difference at the .01 level. † Indicates significant difference at the .05 level.

Comparing the most efficient quartile of banks with the least efficient quartile reveals some interesting differences. In 1991, the most efficient banks had significantly lower salary expense, premises and fixed assets, other noninterest expense, and purchased funds, and they had significantly higher relative levels of earning assets, interest income, and interest expense. By 1997, only interest expense, premises and fixed assets, and purchased funds had statistically significant differences among the input variables. On the output side, significant differences still existed for earning assets and interest income, and a significant advantage for the most efficient banks was found for noninterest income.

While DEA efficiency scores cannot be compared from year to year because they reveal relative efficiencies for the time period under analysis, the underlying trends regarding the significance and strength that each variable contributes can help explain the changes that took place in the banking industry. From 1991 to 1997, noninterest income became a significantly more important part of banking. In 1991, noninterest income as a percentage of total assets averaged 0.95 percent for the most efficient banks and 1 percent for the least efficient banks. By 1997, the most efficient banks had increased this percentage to 1.8 percent, whereas the ratio for the least efficient banks dropped to 0.9 percent.
The increase in noninterest income as a percentage of total assets for the most efficient institutions is consistent with banks' increased focus on earning greater fee income and participating in off-balance-sheet activities. The traditional role of U.S. banks as strictly financial intermediaries is widely viewed as changing, as banks move beyond the balance sheet and compete in other arenas (see Clark and Siems, 1997).

To see whether our productive efficiency model correlates with performance, the average values for a few important bank performance measures are given in Table 4 by DEA efficiency score quartile for each analysis period. As shown, the most efficient banks earned a significantly higher return on average assets than the least efficient institutions. In 1991, the most efficient bank quartile earned an average 1.23 percent on average assets, whereas the least efficient bank quartile earned just 0.01 percent. In 1997, return on average assets increased for the banking industry, with the most efficient bank quartile earning an average 1.57 percent and the least efficient bank quartile earning 0.86 percent.

In 1991, the largest institutions were found to be significantly less efficient than the smallest banks. For 1994 and 1997, there were no significant differences in productive efficiency between the largest and smallest institutions, suggesting no significant economies of scale.17

CLASSIFYING BANKS USING EXAMINER RATINGS

In the early 1970s, federal regulators developed the CAMEL rating system to help structure the bank examination process. In 1979, the Uniform Financial Institutions Rating System was adopted to provide federal bank regulatory agencies with a framework for rating the financial condition and performance of individual banks. Since then, the use of the CAMEL factors in evaluating a bank's financial health has become widespread among regulators.
More efficient banks also held significantly higher equity capital. In 1991, the most efficient bank quartile's capital-to-assets ratio averaged 10.35 percent, versus 7.76 percent for the least efficient banks. Similar to the gains in profitability, capital levels increased substantially for the entire banking industry by 1997, with the most efficient bank quartile holding a capital-to-assets ratio of 12.42 percent and the least efficient bank quartile 9.29 percent.

More efficient banks also managed relatively smaller loan portfolios that tended to have fewer risky assets, as evidenced by their lower ratios of nonperforming loans to gross loans. In 1991, the most efficient bank quartile had an average ratio of total loans to total assets of 48.95 percent, significantly less than the average 56.56 percent held by the least efficient banks. Nonperforming loans to gross loans for the most efficient banks in 1991 averaged 1.55 percent, versus 2.93 percent for the least efficient banks. By 1997, banking conditions had improved, but the significant differences in portfolio composition, asset quality, and risk levels remained. The most efficient bank quartile in 1997 had a total loans-to-total assets ratio of 54.78 percent, significantly less than the 60.53 percent for the least efficient bank quartile. And, while overall asset quality improved, nonperforming loans to gross loans for the most efficient bank quartile averaged 1.04 percent, significantly less than the 1.15 percent for the least efficient banks.

As shown in Table 5, when the data are separated into asset-size quartiles, we find no significant differences in efficiency between the largest and smallest banks, with one exception.

The evaluation factors are as follows:

• Capital adequacy
• Asset quality
• Management quality
• Earnings ability
• Liquidity

Each of the five factors is scored from one to five, with one being the strongest rating.
An overall composite CAMEL rating, also ranging from one to five, is then developed from this evaluation.18 As a whole, the CAMEL rating, which is determined after an on-site examination, provides a means to categorize banks based on their overall health, financial status, and management. The Commercial Bank Examination Manual produced by the Board of Governors of the Federal Reserve System describes the five composite rating levels as follows:

CAMEL = 1 An institution that is basically sound in every respect.
CAMEL = 2 An institution that is fundamentally sound but has modest weaknesses.
CAMEL = 3 An institution with financial, operational, or compliance weaknesses that give cause for supervisory concern.
CAMEL = 4 An institution with serious financial weaknesses that could impair future viability.
CAMEL = 5 An institution with critical financial weaknesses that render the probability of failure extremely high in the near term.

17 The literature on economies of scale in banking is extensive; interested readers are directed to Clark (1996) for a review. The quartile approach here should be viewed as tentative, because quartiles are too broad to capture differences among the larger institutions.

18 Beginning in 1997, the CAMEL rating system was revised to include a sixth component: S—sensitivity to market risk. This study uses the original CAMEL rating system for the 1991 and 1994 samples, as it was the one in use during those periods. Because market risk had been implicitly considered in the original CAMEL rating, its introduction in the revised rating, CAMELS, was not expected to result in significant changes to the composite rating.
Table 5: Bank Profiles by Asset Quartile
Asset quartile: 1 Largest (percent), 2 (percent), 3 (percent), 4 Smallest (percent), Largest to smallest difference (percent)

1991 data
Inputs
  Salary expense / total assets               1.49    1.52    1.61    1.82    -.33*
  Premises and fixed assets / total assets    1.64    1.62    1.63    1.57     .07†
  Other noninterest expense / total assets    1.90    1.73    1.82    1.95    -.04
  Interest expense / total assets             4.63    4.70    4.71    4.66    -.03
  Purchased funds / total assets             13.69   10.46    9.51    7.99    5.71*
Outputs
  Earning assets / total assets              90.23   91.40   91.01   90.55    -.32*
  Interest income / total assets              8.56    8.64    8.68    8.74    -.18*
  Noninterest income / total assets           1.09     .84     .85     .86     .23*
Score
  DEA efficiency score                       .5661   .5826   .5867   .5965   -.0304*
  Number of institutions                     2,849   2,849   2,849   2,850

1994 data
Inputs
  Salary expense / total assets               1.56    1.63    1.69    1.92    -.36*
  Premises and fixed assets / total assets    1.69    1.74    1.71    1.51     .19*
  Other noninterest expense / total assets    1.92    1.76    1.73    1.85     .07
  Interest expense / total assets             2.54    2.62    2.61    2.63    -.08*
  Purchased funds / total assets             14.54   10.92    9.86    8.24    6.30*
Outputs
  Earning assets / total assets              91.29   91.99   91.71   91.59    -.30*
  Interest income / total assets              6.72    6.84    6.89    6.98    -.26*
  Noninterest income / total assets           1.21     .94     .86     .99     .23†
Score
  DEA efficiency score                       .6156   .6121   .6060   .6118    .0039
  Number of institutions                     2,556   2,556   2,556   2,556

1997 data
Inputs
  Salary expense / total assets               1.54    1.61    1.63    1.87    -.33*
  Premises and fixed assets / total assets    1.73    1.87    1.79    1.52     .21*
  Other noninterest expense / total assets    1.72    1.57    1.63    1.66     .05
  Interest expense / total assets             3.23    3.27    3.28    3.23     .01
  Purchased funds / total assets             16.03   13.37   12.22   10.13    5.90*
Outputs
  Earning assets / total assets              91.76   92.20   92.12   91.98    -.22
  Interest income / total assets              7.32    7.39    7.41    7.45    -.13
  Noninterest income / total assets           1.38     .98     .97     .98     .39†
Score
  DEA efficiency score                       .4641   .4133   .4401   .4607    .0034
  Number of institutions                     2,157   2,157   2,157   2,157

* Indicates
significant difference at the .01 level. † Indicates significant difference at the .05 level.

Table 6: DEA Efficiency Scores by Strong/Weak CAMEL Rating
Columns: 1991 Strong banks (percent), 1991 Weak banks (percent), 1994 Strong banks (percent), 1994 Weak banks (percent), 1997 Strong banks (percent), 1997 Weak banks (percent)

Inputs Salary expense / total assets 1.54 Premises and fixed assets / total assets Other noninterest expense / total assets Interest expense / total assets 1.53 1.64 4.64 Purchased funds / total assets 10.24 1.83* 1.97* 1.65 1.65 2.23* 1.98* 1.63 1.72 2.41* 1.71 2.92* 4.82* 2.61 11.39* 10.95 2.04* 1.43 1.94† 2.26* 2.73* 3.25 3.48* 10.79 12.65 14.83* Outputs Earning assets / total assets 91.65 Interest income / total assets 8.55 87.85* 8.88* .83 1.05* 6.83 .93 Noninterest income / total assets 92.18 89.72* 1.38* 7.33 .85 7.88* 1.25* 91.92 88.11* 7.32* DEA efficiency score .5942 .5235* .6137 .5532* .4272 .3751* Number of institutions 5,641 1,846 7,188 491 4,273 221

NOTE: Strong banks are those with CAMEL ratings of 1 or 2; weak banks are those with CAMEL ratings of 3, 4, or 5.

19 Currently, federal banking agencies permit banks that have less than $250 million of assets, are well-capitalized, are well-managed, have CAMELS ratings of 1 or 2, and have not experienced a change of control during the previous 12 months to be examined every 18 months. Problem banks—those with CAMELS ratings of 4 or 5—typically are examined twice per year.

20 Cole and Gunther (1998) assess the speed with which the information content of CAMEL ratings decays when benchmarked against an off-site monitoring system. Applying a probit model to publicly available accounting data, Cole and Gunther found that their econometric forecasts provide a more accurate indication of survivability for banks with examination ratings more than one or two quarters old.
Cargill (1989) found that CAMEL ratings are primarily proxies for available market information. Berger and Davies (1994) found that downgrades in CAMEL ratings precede stock price reductions.

* Indicates significant difference between strong and weak banks at the .01 level. † Indicates significant difference between strong and weak banks at the .05 level.

Commercial banks are examined annually for safety and soundness by one of the federal bank regulatory agencies or a state regulator.19 The use of examiner ratings in research studies has been limited.20 DeYoung (1998) uses CAMEL ratings and a logit model to separate banks into well-managed and poorly managed samples and then estimates a thick cost frontier model to measure X-inefficiency differences between the two samples.21 DeYoung found that the well-managed banks had significantly lower estimated unit costs than the poorly managed banks.22 Despite this significant cost-efficiency difference, DeYoung also found that the well-managed banks incurred significantly higher raw (accounting-based) unit costs than did the poorly managed banks. This result is important because it implies that cost-efficient bank management requires expenditures generally not made by poorly managed banks.

As shown in Table 6, for each of our analysis periods, strong banks (those with composite CAMEL ratings of 1 or 2) had significantly higher efficiency scores than weak banks (those rated 3, 4, or 5).23 For the input variables, strong banks generally have significantly lower salary expense, premises and fixed assets, other noninterest expense, interest expense, and purchased funds (as a percentage of total assets) than weak banks. For the output variables, we find that strong banks hold significantly more earning assets than weak banks, as one would expect; but, somewhat surprisingly, they generate significantly less interest income and noninterest income as a percentage of total assets than weak banks.
The higher relative interest and noninterest income levels for the weak banks might be due to their generally higher risk positions and poorer asset quality. Weak banks might be earning greater interest and noninterest income because their investments have more risks. But the increased income levels do not make up for the significantly higher input costs needed to monitor these investments and service these assets. Chart 3 shows the percentage of banks within each CAMEL-rating category that falls into each efficiency score quintile. This analysis uses the entire sample of banks for all three years. If the DEA efficiency scores do not differentiate between strong banks and weak banks, To further evaluate our DEA model, individual bank efficiency scores were compared with confidential bank examiner ratings. For our 1991 sample, 7,487 U.S. commercial banks were examined and given CAMEL ratings in 1992; for our 1994 and 1997 samples, 7,679 and 4,494 banks were examined and given ratings in 1995 and 1998, respectively. To simplify our analysis, we grouped the banks into two categories based on their composite CAMEL rating: strong and weak. Strong banks are those institutions FEDERAL RESERVE BANK OF DALLAS 21 FINANCIAL INDUSTRY STUDIES SEPTEMBER 1998 The thick frontier approach is one of the main parametric frontier methods used by researchers to evaluate efficiency. The thick frontier method compares estimates of costs derived from a best-practice cost function with those derived from a cost function using data from the worst-practice firms (see Berger and Humphrey, 1997). In the banking cost literature, X-inefficiency describes any excess cost of production not caused by suboptimal scale or scope. While the methodology selected has a great effect on the X-inefficiency differences, most studies find X-inefficiencies equal to about 20 percent to 25 percent of costs. 
See Berger, Hunter, and Timme (1993) and Evanoff and Israilevich (1991) for thorough reviews of this literature. 22 The relationship between management quality and X-inefficiency has not been explored as thoroughly as one might expect, given the number of studies that conclude that the quality of management is the most important factor in the success or failure of a bank. For more on the link between X-inefficiency and management quality, interested readers are directed to Peristiani (1997), who found a statistically significant correlation between X-inefficiency and bank examiners' numerical assessments of "management quality," and Barr and Siems (1997) and Wheelock and Wilson (1995), who use an efficiency measure as a proxy for management quality in bank failure studies. 23 Our analysis was also carried out using the M-rating instead of the composite CAMEL rating and produced qualitatively similar results to those presented here. periods: 1991, 1994, and 1997. Interestingly, noninterest income became a significantly more important variable over time as banking conditions improved and banks focused on generating more fee income and offering a greater selection of products. Our analysis also reveals that the most efficient banks earn a significantly higher return on average assets, hold significantly more capital, and manage relatively smaller loan portfolios with fewer troubled assets. Consistent with previous empirical research, the results found here for U.S. commercial banks confirm the commonsense proposition that banks that receive better CAMEL ratings by banking regulators are significantly more efficient. Using our DEA model, we find that strong banks (those rated 1 or 2) are significantly more efficient than weak banks (those rated 3, 4, or 5). This result points to the potential usefulness of our DEA efficiency model as an additional off-site monitoring tool for bank examiners. 
The development and extension of frontier estimation research has been limited historically by compLitational feasibility. Recent breakthroughs in solving truly large-scale models, such as the one developed here, open up a wide range of new possibilities and directions for research. A benchmarking support system could be developed to help individual institutions explicitly gauge their shortcomings and formalize and prioritize action plans to improve productive efficiency. Additionally, large-scale efficiency analyses of the entire banking system can be used to better understand the effects of industry dynamics and structure changes— mergers and acquisitions, local market concentration and competitiveness, regulatory changes, technological improvements, and so forth. Chart 3 Efficiency Score Quintiles by CAMEL Rating Combined Data 1991,1994,1997 Percentage of banks in efficiency quintile 100 CAMEL rating fCXj 1st quintile (highest DEA scores) 4th quintile I 5th quintile (lowest DEA scores) | 2nd quintile | \ ' j 3rd quintile then each CAMEL-rating category would be expected to contain 20 percent of the highest scoring banks, 20 percent of the second qLiintile banks, etc. However, as shown in the chart, there is a clear separation of efficiency quintiles: the most efficient banks are overrepresented in the CAMEL-1 group, while the least efficient banks are overrepresented in the CAMEL-5 group. More specifically, 30 percent of the CAMEL-1-rated banks have efficiency scores in the highest score quintile, while 57 percent of the 5-rated banks have efficiency scores in the lowest score qLiintile. The close association between efficiency scores derived from the DEA model and bank examiner CAMEL ratings suggests that the scores may be useful as an additional off-site surveillance tool for bank regulators. Overall, benchmarking the productive efficiency of U.S. 
banks can help bank managers and regulators better understand a bank's productive abilities relative to competitors and industry best practices. We have shown that more efficient banks tend to be higher performers and safer institutions. CONCLUSIONS AND DIRECTIONS FOR FUTURE RESEARCH In this study, we used a constrained-multiplier, input-oriented DEA model to evaluate the relative productive efficiency of U.S. banks. Using this measure, we can consider the sources of inefficiency and possible paths to boost productive efficiency. In addition, this productive efficiency measure provides an indicator to benchmark performance and is conceptually superior to measures produced using common gap analysis methodologies. Using our five-input, three-outpLit model, we find that the most efficient bank quartile has significantly higher DEA efficiency scores than the least efficient quartile for all three analysis REFERENCES Ali, A. I. (1992), "Streamlined C o m p u t a t i o n for Data Envelopment Analysis," European Research Journal of Operations 64 (January): 6 1 - 6 7 . Banker, R. D., R. F. Conrad, a n d R. P. Strauss (1986), "A C o m p a r a t i v e A p p l i c a t i o n of DEA a n d Translog Methods: A n Illustrative Study of Hospital Production," ment Science 22 32 (January): 3 0 - 4 4 . Manage- Banker, R. D., and H. H. Johnston (1994), "Evaluating Charnes, A., and W. W. Cooper (1962), "Programming the Impacts of Operating Strategies on Efficiency in the with Linear Fractional F u n c t i o n a l , " Naval U.S. Airline Industry," in Data Envelopment Logistics Theory, Methodology and Applications, Analysis: Research Quarterly 9 (3/4): 181-85. ed. A. Charnes, W.W. Cooper, A. Y. Lewin, and L. M. Seiford (Norwell, Charnes, A., W. W. Cooper, Z. M. Huang, and D. B. Sun Mass.: Kluwer Academic Publishers), 97-128. (1990), "Polyhedral Cone-Ratio DEA Models with an Barr, R. S., and M. L. 
Durchholz (1997), "Parallel and Journal of Econometrics Illustrative Application to Large Commercial Banks," 46 (1/2): 7 3 - 9 1 . Hierarchical Decomposition Approaches for Solving Large-Scale Data Envelopment Analysis Models," Charnes, A., W. W. Cooper, A. Y. Lewin, and L.M. Seiford Annals of Operations (1994), Data Envelopment Research 73: 3 3 9 - 7 2 . and Applications Barr, R. S., L. W. Seiford, and T. F. Siems (1993), "An Analysis: Theory, Methodology (Norwell, Mass.: Kluwer Academic Publishers). Envelopment-Analysis Approach to Measuring the Managerial Efficiency of Banks," Annals of Operations Charnes, A., W. W. Cooper, and E. Rhodes (1978), "Meas- Research 45: 1-19. uring the Efficiency of Decision Making Units," Journal of Operational European Research 2 (November): 4 2 9 - 4 4 . Barr, R. S., and T. F. Siems (1997), "Bank Failure Prediction Using DEA to Measure Management Quality," Charnes, A., W. W. Cooper, Q. L. Wei, and Z. M. Huang in Interfaces (1989), "Cone Ratio Data Envelopment Analysis and in Computer Science and Research: Advances in Metaheuristics, Stochastic Technologies, Modeling Operations Optimization, and Multi-Objective Programming," International ed. R. S. Barr, R. V. Journal of Systems Science 20 (July): 1099-1118. Helgason, and J. L. Kennington (Norwell, Mass.: Kluwer Academic Publishers), 3 4 1 - 6 5 . Clark, J. A. (1996), "Economic Cost, Scale Efficiency and Competitive Viability in Banking," Journal of Money, Bauer, P. W., A. N. Berger, G. D. Ferrier, and D. B. Credit, and Banking 28 (3) Part 1: 3 4 2 - 6 4 . Humphrey (1998), "Consistency Conditions for Regulatory Analysis of Financial Institutions: A Comparison of Frontier Efficiency Methods," Journal of Economics Clark, J. A., and T. F. Siems (1997), "Competitive Viability and in Banking: Looking Beyond the Balance Sheet," Finan- Business 50 (March/April): 8 5 - 1 1 4 . cial Industry Studies Working Paper no. 5-97 (Federal Reserve Bank of Dallas, December). Berger, A. N., and S. M. 
Davies (1994), "The Information Content of Bank Examinations," Finance and Economics Cole, R. A., and J. W. Gunther (1998), "Predicting Bank Discussion Series, no. 94-20 (Washington, D.C.: Board Failures: A Comparison of On- and Off-Site Monitoring of Governors of the Federal Reserve System, July). Systems," Journal of Financial Services Research 13 (April): 103-17. Berger, A. N., and D. B. Humphrey (1997), "Efficiency of Financial Institutions: International Survey and Directions DeYoung, R. (1998), "Management Quality and for Future Research," European Journal of X-lnefficiency in National Banks," Journal of Financial Operational Research 98 (April): 1 7 5 - 2 1 2 . Services Research Berger, A. N., W. C. Hunter, and S. G. Timme (1993), Evanoff, D., and P. Israilevich (1991), "Productive "The Efficiency of Financial Institutions: A Review and Efficiency in Banking," Federal Reserve Bank of Chicago Preview of Research Past, Present, and Future," Journal Economic 13 (February): 5 - 2 2 . Perspectives, July, 11-32. of Banking and Finance 17 (April): 2 2 1 - 4 9 . Fare, R., S. Grosskopf, and C. A. K. Lovell (1994), Boussofiane, A., R. G. Dyson, and E. Thanassoulis (1991), "Applied Data Envelopment Analysis," Journal of Operational Production European Frontiers (Cambridge: Cambridge University Press). Research 52 (February): 1-15. Farrell, M. J. (1957), "The Measurement of Productive Camp, R. C. (1989), Benchmarking: Best Practices That Lead to Superior The Search for Efficiency," Journal of the Royal Statistical Performance Society, Series A, General, Part 3: 2 5 3 - 8 1 . (Milwaukee: Quality Press). Finein, E. (1990), "Leadership Through Quality," from the Cargill, T. F. (1989), "CAMEL Ratings and the CD Market," IEEE/TLC Videoconference "Total Quality: The Malcolm Journal of Financial Services Research 3 (December): Baldrige Approach to Quality Management," National 347-58. Technological University, Fort Collins, Colo.: September 13. 
FEDERAL RESERVE BANK OF DALLAS 23 FINANCIAL INDUSTRY STUDIES SEPTEMBER 1998 Fitzgerald, L., R. Johnston, S. Brignall, R. Silvestro, and Saaty, T. L. (1982), Decision Making for Leaders C. Voss (1991), Performance (Belmont, Calif.: Lifetime Learning Publications). Businesses Measurement in Service (Surrey: Unwin Brothers). Sammon, W. L., M. A. Kurland, and R. Spitalnic (1984), Fried, H. O., C. A. K. Lovell, and S. S. Schmidt, eds. Business Competitor (1993), The Measurement Organizing, niques and Applications of Productive Efficiency: Tech- Intelligence: and Using Information Methods for Collecting, (New York: Wiley). (New York: Oxford University Spendolini, M. J. (1992), The Benchmarking Press). Book {New York: AMACOM). Golden, B. L., E. A. Wasil, and P. T. Harker, eds. (1989), The Analytic Hierarchy Process: Applications and Studies Thompson, R. G., L. N. Langemeier, C. T. Lee, and R. M. Thrall (1990), "The Role of Multiplier Bounds in Efficiency (New York: Springer). Analysis with Application to Kansas Farming," Journal of Econometrics Grosskopf, S., K. J. Hayes, L. L. Taylor, and W. L. Weber 46 (1/2): 9 3 - 1 0 8 . (forthcoming), "Anticipating the Consequences of School Reform: A New Use of DEA," Management Science. U.S. Department of Commerce (1993), 1993 Award Harrington, H. J. (1991), Business Process Improvement Md.: National Institute of Standards and Technology). Criteria, Malcolm Baldrige Quality Award (Gaithersburg, (New York: McGraw-Hill). Wheelock, D. C., and P. W. Wilson (1995), "Explaining Hart, C. W. L., and C. E. Bogan (1992), The Baldrige Bank Failures: Deposit Insurance, Regulation, and Efficiency," Review of Economics (New York: McGraw-Hill). (November): 6 8 9 - 7 0 0 . Leibenstein, H., and S. Maital (1992), "Empirical Estimation and Partitioning of X-lnefficiency: A Data-Envelopment Approach," American Economic Review 82 (May): 428-33. Peristiani, S. (1997), "Do Mergers Improve the X-Efficiency and Scale Efficiency of U.S. Banks? 
Evidence from the 1980s," Journal of Money, Credit and Banking 29 (August): 3 2 6 - 3 7 . 24 and Statistics 77
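The article's DEA model is a constrained-multiplier, input-oriented formulation solved at very large scale. As a rough illustration of the underlying technique only (not the authors' implementation), the sketch below computes basic input-oriented, constant-returns (CCR) efficiency scores by solving one small linear program per bank with SciPy. The multiplier-cone constraints and the article's five-input, three-output specification are omitted, and the function name and data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y):
    """Input-oriented, constant-returns (CCR) DEA efficiency scores.

    X: (n, m) array of inputs; Y: (n, s) array of outputs, one row per bank.
    Returns an array of n scores in (0, 1]; efficient banks score 1.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for k in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n].
        # Minimize theta, the radial contraction of bank k's inputs.
        c = np.r_[1.0, np.zeros(n)]
        # Inputs: sum_j lambda_j * x_ij - theta * x_ik <= 0 for each input i
        A_in = np.hstack([-X[k].reshape(m, 1), X.T])
        # Outputs: -sum_j lambda_j * y_rj <= -y_rk for each output r
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[k]],
                      bounds=[(0, None)] * (n + 1),
                      method="highs")
        scores[k] = res.x[0]
    return scores
```

For each bank k, the program finds the largest proportional shrinkage of k's inputs that some nonnegative combination of peer banks could still match while producing at least k's outputs; a bank on the best-practice frontier receives a score of 1.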
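Table 6 flags input and output ratios that differ significantly between strong and weak banks at the .01 and .05 levels. The article does not state which test produced those markers; as a hedged sketch, a two-sample Welch t-test on each ratio is one conventional choice. The function and variable names here are illustrative, not the authors'.

```python
import numpy as np
from scipy.stats import ttest_ind

def flag_difference(strong, weak, ratio_name):
    """Compare one balance-sheet ratio between strong and weak banks and
    append Table 6's markers: '*' for the .01 level, '†' for the .05 level."""
    t_stat, p_value = ttest_ind(strong, weak, equal_var=False)  # Welch's t-test
    marker = "*" if p_value < .01 else "†" if p_value < .05 else ""
    return (f"{ratio_name}: strong mean {np.mean(strong):.2f}, "
            f"weak mean {np.mean(weak):.2f}{marker}")
```

Running this once per ratio and year, on the bank-level data behind each cell, would reproduce the table's strong/weak columns with their significance markers.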
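The Chart 3 tabulation (the percentage of banks in each efficiency score quintile, broken out by CAMEL rating) can be sketched with pandas, assuming one series of DEA scores and a parallel series of composite ratings. This illustrates the cross-tabulation only; it is not the authors' code, and the names are hypothetical.

```python
import pandas as pd

def quintile_mix(scores, ratings):
    """Percentage of banks in each efficiency-score quintile, by CAMEL rating.

    Quintile 1 holds the highest DEA scores, quintile 5 the lowest, matching
    the chart's legend. Rows sum to 100 within each rating category.
    """
    # qcut labels run low-to-high, so listing [5, 4, 3, 2, 1] puts the
    # lowest scores in quintile 5 and the highest in quintile 1.
    quintiles = pd.qcut(scores, 5, labels=[5, 4, 3, 2, 1])
    return pd.crosstab(ratings, quintiles, normalize="index") * 100
```

Reading a row of the result (say, the CAMEL-1 row) gives the chart's stacked-bar percentages for that rating category.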