Financial Modernization: Vastly Different or Fundamentally the Same?

Edward G. Boehne

This article is an edited version of the Hutchinson Lecture, delivered at the University of Delaware on April 18, 2000, by Edward G. Boehne. On May 31, 2000, Ed Boehne retired from the Federal Reserve Bank of Philadelphia after 32 years of service, the last 19 of them as president. Closing his career with the publication of an article in the Business Review seems fitting, since writing a Business Review article was one of his first tasks when he joined the Philadelphia Fed as an economist in 1968. Ed thanks Loretta Mester for her assistance in preparing the Hutchinson Lecture.

The Financial Modernization Act (Gramm-Leach-Bliley), which was passed last fall, is in the process of taking effect this year. Among other things, this act repealed the Glass-Steagall Act (which separated commercial banking and securities underwriting) and was yet another step in dismantling a regulatory structure put in place nearly seven decades ago. Ongoing deregulation of the banking and financial system, along with rapid changes in technology, has raised some questions about the ultimate outcomes of several trends in the banking industry. Five key questions come to mind:

1. Will ongoing consolidation in the financial system and the ability of banks to expand into new product markets lead inevitably to financial supermarkets?

2. Will banking become primarily e-banking?

3. Will relationships between a small business and a bank, or between a consumer and a bank, go the way of the horse and buggy? To put it another way, will bank products primarily be bought and sold in the financial marketplace as commodities, or will personal service and personal contact still matter?

4. Will the major focus of the business of banking become the collection of fee income instead of the margins on loans?

5. Will bank regulators eventually rely primarily on market discipline instead of on more traditional bank examinations to determine the health of banks?

Past and current trends in the banking industry, along with the recent passage of Gramm-Leach-Bliley, may lead many people to respond in knee-jerk fashion and answer “yes” to all of these questions. I will make the case that there is more to the story: the financial system, to be sure, is going to be vastly different in terms of the structure of the industry and the delivery systems used by banks to provide services to their customers. Nevertheless, I will also argue that the financial system will be fundamentally the same; that is, the fundamental underpinning of a stable and successful financial system will remain what it has always been: public confidence. This fact will help shape, and perhaps limit, the ways some of these banking trends unfold.

SUMMARY OF THE FINANCIAL MODERNIZATION ACT

At least for the foreseeable future, financial institutions in the U.S. will operate under the new financial landscape established late last year. The Gramm-Leach-Bliley Act of 1999, also called the Financial Modernization Act, became law on November 12, 1999, and the remnants of the separation between commercial banking and investment banking were finally carted away. I say remnants, since the Glass-Steagall Act, which established the separation in 1933, had been whittled away by market forces, court rulings, and regulatory actions.
For example, banks were able to affiliate with securities underwriters, but there were limits on the types and amount of debt and equity underwriting in which the securities affiliate could engage. With passage of Gramm-Leach-Bliley, this is no longer the case.

The act sets up a two-tier system for expanding activities. A new entity, called a financial holding company, can have commercial bank subsidiaries along with other subsidiaries that engage in activities considered “financial in nature,” “incidental” to financial activities (as determined by the Fed and the Office of the Comptroller of the Currency), or “complementary” to financial activities (as determined by the Fed). These activities include insurance underwriting, real estate development, merchant banking, and securities underwriting.

In addition, subsidiaries of commercial banks (called financial subsidiaries) will be able to engage in some expanded activities, including insurance agency and brokerage activities (but not underwriting) and securities underwriting. These bank subs will not be allowed to participate in real estate development or merchant banking, at least for now, or in activities considered “complementary” to financial activities. Those activities can be done in affiliates of a bank within a financial holding company structure, but not in subsidiaries of the bank itself. Limiting some of the activities of bank subs while allowing those activities in bank affiliates is intended to let banks engage in new activities, but in such a way that their risks can be limited.

The act contains other provisions intended to limit the risks of new activities. To qualify as a financial holding company, the institution’s bank subsidiaries must be well capitalized and well managed. They also must have Community Reinvestment Act (CRA) ratings of satisfactory or higher. As of early April 2000, the Federal Reserve Board had approved 144 applications from bank holding companies wishing to become financial holding companies. Mellon Financial Corporation, First Union Corporation, PNC, Chase, and Citigroup are a few familiar names that have been approved.

Similarly, for a bank to qualify for having a financial subsidiary, the bank and its depository affiliates must be well capitalized and well managed and must have CRA ratings of satisfactory or higher. Also, if a bank is one of the 50 largest insured banks in the United States, it must have at least one issue of unsecured long-term debt that is rated in one of the three highest categories by a nationally recognized rating agency like Moody’s or Standard & Poor’s. This brings the discipline of the market down on banks that wish to engage in expanded activities.

These criteria are intended to protect the FDIC’s bank insurance funds and prevent extension of the discount window safety net to nonbanking firms. In addition to these criteria, the Gramm-Leach-Bliley Act uses the provisions of Sections 23A and 23B of the Federal Reserve Act to limit credit extensions and require arm’s-length activity between a bank and its subsidiaries and between a bank and other financial holding company subsidiaries. As further protection, Gramm-Leach-Bliley does not permit the mixing of banking and commerce, and it closed the unitary thrift holding company loophole, whereby commercial firms could enter the banking industry by buying a single thrift institution.
Now that institutions can engage in a wider array of activities, the regulatory structure will change to one of functional regulation. That is, instead of a bank regulator having primary responsibility for supervising all aspects of an institution, each of the institution’s functions will be supervised by the appropriate regulator. For example, the insurance agency activities of a commercial bank subsidiary will be subject to state insurance regulations, and securities affiliates will be regulated by the Securities and Exchange Commission and the National Association of Securities Dealers. In addition, the Federal Reserve will act as an umbrella supervisor over financial holding companies, similar to the role it plays today with respect to bank holding companies. Here the Fed will have oversight of the entire organization, with a focus on consolidated risk management.

TRENDS IN THE FINANCIAL INDUSTRY

Now, let’s go back to the five questions raised earlier. Even though the Gramm-Leach-Bliley Act permits banks to perform a wide variety of activities, the question remains: will institutions take advantage of their new powers? And if so, will this, coupled with the ongoing consolidation we’ve seen in the financial services industry over the past few years, lead to financial institutions that look like financial supermarkets, offering all things to all people?

Over the past 10 years, the number of banks in the United States has fallen from over 12,000 to under 9,000, a 30 percent decline. Much of the consolidation was driven by mergers and acquisitions among existing banks and bank holding companies. In fact, several hundred mergers and acquisitions occurred each year. We have seen some of the largest bank mergers and acquisitions ever in the past few years, including several between institutions with assets over $100 billion each. Consolidation has led to increased concentration in the banking industry, which can be illustrated by a few simple comparisons:

• Over the last 10 years, the share of domestic banking assets held by the 10 largest banking organizations in the country doubled, from about 20 percent to about 40 percent.

• Over the last 15 years, the share of industry assets in very large banks has risen substantially. Now, over 60 percent of industry assets are in banks with more than $10 billion in assets, compared with 40 percent in 1985. In inflation-adjusted dollars, the average asset size of U.S. banks has doubled since 1985 and is currently about $550 million.

• Consolidation is even more striking at the holding company level. The share of assets in bank holding companies with over $100 billion in assets has tripled since 1985; these institutions now hold over 40 percent of industry assets.

Banks are getting bigger, but are they getting better? From the standpoint of profitability, the answer is yes. While the industry has been consolidating, bank performance, as measured by return on assets (ROA) and return on equity (ROE), has improved greatly. Is this just a coincidence? Much of the improvement in performance reflects the favorable macroeconomic environment in which banks have been operating. The banking industry is similar to other cyclically sensitive industries in this respect, although banking is probably more sensitive to the interest rate cycle than other industries. But recent evidence also suggests that consolidation may have played an important role in the dramatic changes in bank performance.
Indeed, while merging banks appear to have experienced some increased costs, they have more than made up for this by increased revenues. In a changing marketplace, banks must reinvent themselves to stay competitive. And regulators must allow this to happen while ensuring that the safety and soundness of the industry remains intact as the transformation occurs.

Banks now compete with money market mutual funds and stock and bond funds on the deposit side. Data from the Federal Reserve’s Flow of Funds Accounts show that 25 years ago, nearly one-quarter of households’ assets were in deposits and less than 2 percent were in mutual and money market funds combined. Today, the deposit share has fallen to under 13 percent, while the share in mutual and money market funds has risen to almost 10 percent. And this does not even include households’ pension fund assets, which have seen tremendous growth.

On the loan side, the growth of the commercial paper market and the entry of nonbank firms into the market for middle market borrowers and now even for small business loans have changed the face of commercial lending. Technological innovations have enabled the development of credit scoring and automated loan application processing, for example, which can be used by bankers and nonbankers alike. In addition, these technological changes have spurred banks to become larger to capture the scale economies embedded in these new technologies.

Consolidation and expansion into new activities can increase bank efficiency by allowing an institution to reach a scale or mix of output that is more profitable. Gains might come from lower costs but also from higher revenues as banks provide better quality services or additional services valued by their customers. Restructuring can also mean a change in managerial behavior that improves the efficient use of resources or that improves the tradeoff between the bank’s risk and its expected returns, by allowing the bank to expand into broader geographic areas or into different product areas with different return characteristics.

Recent research has documented the benefits from banks’ increasing their scale of operations, but what about banks’ expanding into new activities? Here the research results are more mixed. There is some evidence (not strong evidence) that risk might be lower when commercial banks expand their activities: for example, studies have simulated the performance of portfolios that include both permitted and previously nonpermitted banking activities, and usually, the variance of returns (a measure of risk) is smaller when nonpermitted activities are included. To date, research has not conclusively shown that there are many cost or revenue synergies between different financial service offerings (although it is probably too early to tell, since banks have been restricted in the amounts of these other services that they could provide). The cross-selling opportunities, still to be worked out in light of the privacy rules now being written, suggest there may be gains to banks from expanding into new product markets. However, it is not at all clear that one-stop shopping, which the modernization legislation now more easily permits, will be what consumers demand.
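The diversification logic behind those portfolio simulations is easy to see in a small numerical sketch. The Python example below is illustrative only: the mean returns, volatilities, and correlation are assumptions, not figures from the studies just mentioned. It shows how adding an imperfectly correlated activity can reduce the variance of a bank’s overall returns even when the new activity is riskier on its own.

```python
import numpy as np

# Illustrative assumptions, not estimates from the studies cited in the text:
# annual returns on traditional banking and on a previously nonpermitted
# activity (say, securities underwriting), imperfectly correlated.
rng = np.random.default_rng(seed=0)
n_years = 100_000
means = [0.10, 0.12]   # assumed mean returns
vols = [0.05, 0.09]    # assumed standard deviations
corr = 0.3             # assumed correlation between the two business lines
cov = [[vols[0] ** 2, corr * vols[0] * vols[1]],
       [corr * vols[0] * vols[1], vols[1] ** 2]]
returns = rng.multivariate_normal(means, cov, size=n_years)

bank_only = returns[:, 0]                            # all-banking portfolio
blended = 0.8 * returns[:, 0] + 0.2 * returns[:, 1]  # 20% in the new activity

print(f"variance, banking only: {bank_only.var():.6f}")  # ~0.0025
print(f"variance, blended:      {blended.var():.6f}")    # ~0.0024, lower
```

With these assumed numbers, the blended portfolio’s variance comes out below that of banking alone, which is the pattern the simulation studies typically report.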
At the same time that the law now permits commercial banks to offer insurance and other financial services, it is now easier for consumers to do comparison shopping and to switch accounts if it looks as if there’s a better deal to be had elsewhere. Thus, the convenience of one-stop shopping might be overstated. In fact, early attempts to develop financial supermarkets failed in the U.S. What’s more, lessons from other industries suggest that the trend is not always toward greater product diversification. Among nonfinancial industries that have always had the ability to diversify across product lines, one finds that the desire to form big conglomerates ebbs and flows. And many nonfinancial firms still specialize in just one industry or in just one aspect of an industry. Certainly, the more than 2000 small banks that have entered the industry since 1985 think that they can make a go of it without being huge or without being all things to all people.

What is likely to occur is that the average size of the financial firm will be larger, but there will still be niche players — institutions that focus on providing particular services to particular segments of the market. Indeed, one of the byproducts of technological innovation is that it has become easier to tailor products to individual customers’ needs. Banks that wish to emphasize customer service might choose to remain small. While they would be less able to take advantage of some technologies, which require a larger size over which to spread the fixed costs of the technologies, they would be able to use other technologies to provide better service to their customers. For example, banks of all sizes have been expanding their presence on the Internet.

Which brings us to our next question: Will banking become primarily e-banking? E-commerce seems to take up 50 percent of TV advertising, but so far it accounts for probably less than 5 percent of total retail sales. The advertising and hype lead one to think that e-commerce and e-banking are really big, but that day is still some way off. Certainly electronic payments will become more important with time. But we can’t count out paper-based payments just yet. Back in the 1960s, analysts predicted that by now we would have a checkless society because of the spread of electronic transfers, but checks are still with us. Indeed, between 65 and 70 billion checks are written annually in the U.S.

Still, there is no doubt that electronic means are becoming a more important outlet for banking services. Banks began offering PC banking in the 1980s via proprietary computer systems. But these systems were so slow that consumers were turned off, and it was very difficult to get them to try PC banking again once the technology improved. Various sources estimate that users of online banking currently number between 4 and 7 million, or 4 to 7 percent of households. And forecasts say the number of online banking users will double several times by 2003. One development that makes predictions of double- or even triple-digit growth of PC banking more credible now than at any time in the past is the growth of the Internet. Internet banking, one form of PC banking, offers customers 24-hour access and the ability to bank from multiple venues, since proprietary software need not reside on each machine. According to estimates, 30 to 40 percent of all households access the Internet now, and this number has been growing quickly.
According to the Graphics, Visualization, and Usability (GVU) Center’s 1998 survey, over 90 percent of the Internet users surveyed are making purchases online, and about 60 percent are also paying for the items over the Internet most or all of the time. This indicates that these buyers have some confidence in the security of the Internet for financial transactions. Indeed, at the end of last year, over one-third of all banks indicated on their Reports of Condition and Income (Call Reports) that they had a web site, and many more reported plans to build one.

Some banks see the Internet as a way to deliver their products to customers; others see it as a separate line of business for the bank. Whereas some banks just offer information about their products on the web, others have transactional web sites at which their customers can do things like check their account balances, transfer funds between accounts, pay their bills, use financial planning software, apply for loans, stop payments, or trade online.

And some banks exist only on the Internet. They have no physical presence — no branches at all! These banks save on the costs of brick and mortar, but need to spend more on advertising. So far, there are only a handful of these virtual banks, and many are finding it difficult to remain branchless. Seeing is believing, and that’s as true in banking as in anything else. A special problem that virtual banks have to solve is how to deliver cash to their customers and how to accept deposits. Some are allowing customers to access ATMs without cost. Direct deposit can sometimes be used, but in other cases, deposits have to be mailed to the bank. So much for the electronic age!

Most banks are offering online banking now as a way to retain customers, rather than generate new business, although not always successfully (surveys suggest that many customers who have tried online banking have stopped using it — many thought it was too time consuming and some thought there was poor customer service). In this way, the online banking of today resembles the ATM of the 1970s. It took time for a large volume of customers to use ATMs. While the cost savings to the bank were substantial, trying to coerce customers to use ATMs instead of tellers wasn’t successful. Youth, high income, and a college degree are associated with a higher incidence of computer banking. It seems reasonable to predict that online banking will eventually take its place alongside the other ways customers can interact with their banks: branches, telephone centers, loan production offices, and ATMs. But it seems unlikely that all banking will become e-banking. Even in this technologically advanced age, the in-person visit remains the main way people interact with their banks, as they develop and maintain their relationship with the institution.

But will this relationship change as banking becomes more electronic? Will relationships between a consumer and a bank, or between a small business and a bank, go the way of the horse and buggy? To put it another way: will banking products primarily be bought and sold in the financial marketplace as commodities, or will personal service and personal contact still matter? This question is related to both of the first two questions, concerning bank size and new technologies.
Small-business lending and consumer lending used to be the purview of small banks, which devoted substantial resources to getting to know their customers and developing relationships with them. But this is changing. Today, large banks, which want to take advantage of the scale economies that come with size, are using credit scoring to make small-business loans and are processing applications using automated and centralized systems. These banks are able to generate large volumes of small-business loans at low cost even in areas where they do not have extensive branch networks. Applications are being accepted over the phone, and some banks are soliciting customers via direct mail, as credit card lenders do. Technology is also helping nonbanks become larger players in the small-business loan market. For example, American Express is one of the top granters of credit lines to small businesses in the Philadelphia Federal Reserve District, especially lines with face values under $100,000.

The smallest loans are the most likely to benefit from new technologies, and data indicate that those are the types of loans where the larger banks are increasing their small-business lending. But these loans differ from traditional small-business loans. Recently, economists Rebel Cole, Lawrence Goldberg, and Lawrence White studied more than 1200 loan applications made by small businesses. Their results indicate that large and small banks do differ in the way they handle applications from small businesses: large banks rely more on easily verified, interpreted, and quantifiable financial data, while smaller banks use more subjective criteria indicative of “character,” or relationship-type, lending.

Some types of small-business loans are like credit card loans, which do not require much in the way of information-intensive credit evaluation beyond what is done in a credit scoring model. The scale economies in automation available to large banks allow them to produce these transactions-type small-business loans more cheaply than a small bank can. Credit scoring will tend to standardize these loans and make default risk more predictable. These steps should make it more feasible to securitize the loans. This ability to securitize would bring a new set of investors into the small-business loan market, a positive effect.

Borrowers who have credit histories good enough to receive a passing grade from a credit scoring model will find it cheaper to obtain credit from larger banks. Small banks will still serve the small borrowers who may not have the financials to qualify for a passing credit score, but who, upon further credit evaluation, are good risks. Small banks will continue to offer the traditional relationship-driven lending, which requires the bank to stay in contact with borrowers over time to gain information about them. It also requires the bank to be a specialist in evaluating the creditworthiness of borrowers for whom there is little public information. The more complicated organizational structure of large banks may put them at a disadvantage in making these relationship-type loans. So small banks should retain their niche in relationship lending — but that niche is likely to be smaller than it is today.

It’s important for small businesses to realize they are making a choice between different kinds of credit when they choose their type of lender. (I’m not sure they do realize this — new small businesses have not experienced a recession in which their financials quickly deteriorate, thereby making it difficult to pass a credit scoring model.)
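To make the contrast concrete, here is a minimal sketch of the scorecard logic behind automated small-business lending. The variables, weights, and approval cutoff are hypothetical illustrations, not any actual bank’s model; in practice, the weights would be estimated from large samples of loan histories (for example, by logistic regression).

```python
from dataclasses import dataclass

@dataclass
class Application:
    years_in_business: float
    owner_credit_score: int   # owner's personal bureau score, 300-850
    debt_to_income: float     # firm debt service relative to income

def score(app: Application) -> float:
    """Hypothetical scorecard: a weighted sum of quantifiable inputs only."""
    s = 25.0 * min(app.years_in_business, 10) / 10     # business longevity
    s += 50.0 * (app.owner_credit_score - 300) / 550   # bureau score
    s += 25.0 * max(0.0, 1.0 - app.debt_to_income)     # leverage
    return s

CUTOFF = 60.0  # hypothetical approval threshold

app = Application(years_in_business=4, owner_credit_score=710, debt_to_income=0.35)
print(f"score = {score(app):.1f}, approve = {score(app) >= CUTOFF}")
```

Everything the model sees is verifiable and quantifiable, which is exactly why it scales to high volumes. The “character” information a relationship lender accumulates through years of contact has no column in such a scorecard, and that is the gap small banks continue to fill.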
It is also important for borrowers to understand the pricing of relationship vs. commodity-type loans. With a relationship loan, a bank can offer better terms to a firm facing temporary problems, then make up for these concessionary rates when the firm turns around. But, of course, the firm should expect to pay something for this kind of insurance. With commodity-type lending, the bank charges its break-even price period by period. The borrower should not expect to get a concessionary rate in bad times.

Thus, technology will impact not only the type of loans being offered but also their pricing. Technology can also affect pricing in a less direct way, which brings us to our next question: Will the major focus of the business of banking become the collection of fee income instead of the margins on loans? The answer is yes, because it is happening already.

As new technologies have come into the banking industry, new players have entered too. Commercial banks’ share of loans to businesses and households has declined significantly over the past two decades. The increased competition banks are facing from insurance companies, mortgage banks, and the commercial paper market has driven commercial banks to seek more stable sources of income. These fee-based services include mortgage servicing, cash management, data processing, and investment services. Income from these sources is less sensitive to the business cycle. FDIC data indicate that in 1984, noninterest income was about 25 percent of operating revenue; by 1999, it had risen to over 40 percent. Both large and small banks are increasing their percentage of fee-based income, although there has been a stronger rise at the larger banks because they operate at a sufficient size to capture the scale economies inherent in many of the technologies used to provide these fee-based services.

The push toward fee income goes hand in hand with the expansion of banking into new product areas and the commoditization of traditional bank products. Technology allows the unbundling of banking products so that fees can be assessed for each component of the product. Note that there’s an interaction between the pricing of new products and their acceptance by the consumer. For example, until recently, consumers didn’t explicitly pay for the costs of using checks; they implicitly paid for check services by receiving lower rates on deposits or paying higher rates on loans, but the costs were not apparent to consumers. The fact that they now see the cost of using paper checks may spur them to explore new, more efficient electronic forms of payment.

There is no doubt that the changing nature of banking will necessitate changes in the way banks are supervised and regulated. Which brings up our final question: Will bank regulators eventually rely primarily on market discipline instead of more traditional bank examinations to determine the health of banks? Already, the trend is toward relying more on the market’s assessment of the health of larger, more complex banking institutions than had been possible in the past. As banks become larger, they are more likely to have stocks and bonds that are actively traded. The market’s view of the institution is embedded in the prices of these securities.
Indeed, under the Gramm-Leach-Bliley Act, if a bank is one of the 50 largest insured banks in the U.S., it can have a financial subsidiary that engages in expanded activities only if it has at least one issue of unsecured long-term debt that is rated in one of the three highest categories by a nationally recognized rating agency (such as Moody’s or Standard & Poor’s). The act also calls for a study by the Fed and the OCC to determine the feasibility of requiring large banks and financial holding companies to hold some of their regulatory capital in the form of subordinated debt. The idea is that subordinated debtholders are sophisticated investors, but unlike equityholders, they would not share in any upside gains from risky actions that happened to pay off. Thus, these investors have lower preference for risk than do equityholders.

The Financial Modernization Act allows for a more complicated banking organization. These complicated institutions will be more difficult to monitor using only examinations. And research has shown that information in bank exams gets stale fairly quickly — in about six months. Thus, it is important to set up rules that give institutions better incentives to align their interests with those of society. One can view reliance on market discipline in this way: if the market has information about the institution, it will exact a risk premium from those institutions considered to be especially risky. The bank’s having to pay more for taking on risk works to control the bank’s risk-taking. However, it is important that the market have access to information about the institution in order to make its evaluation. Hence, disclosure becomes much more important. Indeed, disclosure is one of the key factors in helping to ensure public confidence in the financial system.

THE FUNDAMENTAL ROLE OF PUBLIC CONFIDENCE

Ultimately, the answers to the above questions will depend on whether trends in the financial industry reinforce or undermine the public’s confidence in the financial system, since public confidence forms the essential underpinning of the financial system. If any of these trends pushes the envelope too far and begins to erode public confidence, it must be stopped. And it can be stopped in one of two ways: banks can take steps to stop it, or Congress or state legislatures will step in and stop it.

What is the foundation of the public’s confidence in the financial system? Members of the public want their money to maintain its purchasing power; they don’t want its value eaten away by inflation. They want banks and other financial institutions to be safe and sound. They want their financial transactions (whether involving loans or deposits or other services) to be executed in a timely and accurate manner. They want convenience, and they also want privacy and fairness. Note that there is nothing new about any of these — no matter what form the banking system takes, these are the things the public will continue to care about.

Public confidence is essential: if consumers do not believe their money is in safe hands, they will exit the financial system. We saw this happen during the Great Depression and in later episodes of bank runs. Just a few blocks from the Federal Reserve Bank of Philadelphia are the First and Second Banks of the United States. These banks, established in 1791 and 1816, respectively, are still standing today and remain quite impressive buildings. The strength of the structures was meant to convey the strength of the institutions and to instill public confidence. Most commercial bank buildings were constructed with the same idea in mind: solid buildings with strong vaults that would instill public confidence.

Today, it is just as important for the public to have confidence in its financial system, but the means for ensuring this confidence is different in this age of technological innovation. It is important that the public be assured that banks will use the highest level of electronic security, not just imposing physical structures with strong vaults, to protect their money and the information that customers give to their banks. From the bank’s viewpoint, this information can be as valuable as the money a customer places in the bank.

The trends in banking that we’ve been discussing cannot occur unless the public remains confident in the financial system as it undergoes transformation and unless the public sees a benefit in the changes taking place. For example, the trend is toward moving from a paper-based payments system to an electronic one, but how fast a payments instrument is adopted depends on how the risks, costs, and benefits of the new instrument are distributed among participants. Why have smart cards done so poorly in trials in the U.S.? Because consumers and merchants could not see much benefit in using the cards instead of using cash for small purchases. Similarly with e-commerce and e-banking. If consumers don’t see much benefit or, even worse, if they are burned by fraudulent e-commerce or e-banking practices, the trend toward electronic banking will slow down. When banks offered their own proprietary PC banking systems in the 1980s, these systems were so slow that consumers were turned off. It was difficult to get consumers to take another look once the technologies improved.

The issue of privacy is another example. Banks now have the reputation of treating customers’ information in a secure manner, but this could be jeopardized in the move to e-banking, as some recent episodes suggest. Consider DoubleClick. This company has built a database of consumer profiles by using “cookies” planted on computers when users visit any of the 11,000 web sites operated by the company’s 1500 clients. Until recently, DoubleClick said it would not collect or share information that could identify individuals. But, in January, DoubleClick announced it would begin doing so to facilitate targeted advertising. Unfortunately for DoubleClick, there was an immediate backlash. Two states plan to sue the company for violating their consumer protection laws, and the FTC is investigating whether DoubleClick uses unfair and deceptive trade practices in failing to properly disclose what information it collects and how it is used. Faced with mounting criticism, DoubleClick has retreated and again says it will not provide consumer-profile data to advertisers. Other firms also are being sued for breaching states’ privacy laws.
In addition to these types of intentional disclosures involving targeted advertising, there have also been a number of highly publicized unintentional disclosures of customer information. For example, Intuit’s policy states that Intuit will not willfully disclose customer data without a customer’s permission. However, a programming bug in Quicken relayed to DoubleClick some customer data, including income, assets, and debts. Similarly, H&R Block inadvertently exposed some customers’ tax data to other customers. Episodes like these can make deep and lasting impressions on the minds of potential users of Internet financial services.

Both banks and regulators must be aware of the overriding public interest in maintaining confidence in the financial system. Banks are now allowed to expand into new product markets, and they now have more avenues at their disposal by which to provide these products to their customers. The general approach being taken is one of allowing banks and other providers of financial services to determine which services to provide and in what manner, within broad guidelines. This is consistent with the current supervisory framework, where bank management is responsible for risk management and control, and bank supervisors are responsible for ensuring that bank management has an effective system to manage risk.

There has been a recognition on the part of regulators that market forces are difficult, if not impossible, to thwart. Regulation needs to accommodate changes in the financial system. Thus, rather than regulate through proscriptions, the goal is to establish the proper incentives, so that it is in a financial service firm’s interest to act in a prudent and responsible manner. This approach has the potential of yielding a more efficient, flexible, and innovative financial system (all attributes that can bolster confidence in the system).

Banks, however, must share the responsibility for maintaining public confidence in the financial system. Banks must understand that if they fail to take appropriate actions or find themselves unable to maintain the public’s confidence, regulators and legislators will be forced to take action. Unfortunately, political interventions can sometimes be inefficient in the long run. In light of the Gramm-Leach-Bliley Act, a prime example of this comes to mind. The Glass-Steagall Act that separated commercial banking from securities underwriting was passed in the backlash of the Depression. About 10,000 banks failed between 1929 and 1933. In response to the banking crises, some 20 laws were passed between 1932 and 1935. These laws limited competition and restricted bank activities in an attempt to secure bank safety. In retrospect, most bank failures during the 1930s were likely due not to excessive risk-taking on the part of banks but rather to some bad policy-making. Thus, it appears that the Glass-Steagall Act was not really necessary, yet we lived with it for almost 70 years.

This is not to say regulations and legislation are always uncalled for. It is well to remember that public confidence is a public good, and sometimes the market may fail to ensure sufficient provision of this public good. In these cases, measured intervention can be a help.
For example, regulators are currently writing rules regarding financial institutions’ handling of customers’ personal data. Until recently, the general approach to privacy on the Internet was one of self-regulation, where industry providers would take care to establish privacy standards and stick with them. From a competitive viewpoint, it makes perfect sense that banks should be at the forefront, establishing policies to safeguard their customers’ privacy. Why wouldn’t banks want to build on the reputation they already have and one that their nonbank competitors have yet to establish? But breaches of privacy standards have pushed privacy onto the radar screens of legislators and regulators. And rules are now being written to implement privacy provisions of the Gramm-Leach-Bliley Act. These types of rules enable further development of electronic financial services by assuring consumers that information about them on such electronic systems is safe.

Regulators can also work to encourage financial market participants to communicate with one another as financial innovations develop. They can help to clarify some of the legal issues surrounding financial innovations, which would help facilitate their growth. For example, it is not yet clear what the potential liabilities, rights, and responsibilities of issuers, merchants, and consumers are with regard to some of the new electronic payments instruments. If an issuer were to become bankrupt or insolvent, what would be the status of the claim represented by a balance on a smart card? Clarifying such legal ambiguities also helps to ensure public confidence.

In the post-Gramm-Leach-Bliley era, where banks are less restricted in what they can do, the task for regulators is to determine how banks can enter new businesses in ways that maintain the public’s confidence in the financial system.

CONCLUSION

Overriding all of the changes in the banking system I have discussed is the public’s need for confidence in the banking and financial system. A well-functioning financial system is the underpinning of a strong economy, and public confidence is the underpinning of a successful financial system. My major point is that while the financial system after passage of the Financial Modernization Act will be vastly different in terms of its structure and delivery systems, it will also be fundamentally the same: public confidence will remain the basis of a sound financial system. All parties — banks, nonbanks, regulators, legislators — share a responsibility to ensure public confidence. How this shared responsibility plays out will be a powerful influence shaping, and perhaps limiting, the trends we’ve discussed here.

Economics and the New Economy: The Invisible Hand Meets Creative Destruction

Leonard I. Nakamura*

*Leonard Nakamura is an economic advisor in the Research Department of the Philadelphia Fed.

As the third millennium begins, the buzzwords “new economy” and “new paradigm” are invoked repeatedly to explain the U.S. economy. In general, these words refer to a view that high-tech innovations and the globalization of world markets have changed our economy enough that we need to think about it and operate within it differently. Perhaps what we notice most is a new Zeitgeist of accelerating change in the worlds of work and knowledge, change that’s emphasized in books with titles
like Blur (Davis and Meyer) and Faster: The Acceleration of Just About Everything (Gleick). Unsurprisingly, economists by no means agree that there is a new economy or that there is a need for a new paradigm.

One sign that there has been a fundamental shift is that direct production of goods and services no longer absorbs the preponderance of workers’ time. In 1975, production of goods and services ceased being the occupation of the majority of U.S. workers. Never before had a society been so productive that it could afford to assign most of its workers to white-collar tasks such as management, paperwork, sales, and creativity. As recently as 1900, production workers in goods and services accounted for 82 percent of the U.S. workforce (Figure).1 Over the course of the century, that number declined by large steps, to 64 percent in 1950, and to 41 percent in 1999. Managers, professionals, and technical workers, who are increasingly involved in creative activities, have risen from 10 percent of the workforce in 1900 to 17 percent in 1950, to 33 percent in 1999.2

[FIGURE: The Decline of Production Work — major occupational categories as proportions of total employment. Sources: 1900-70, Historical Statistics of the United States; 1980, Census of Population; 1990 and 1999, Employment and Earnings, January 1991 and January 2000. Production occupations are defined here to include farming, forestry, and fishing; precision production, craft, and repair; operators, fabricators, and laborers; private household and other service workers. Sales and clerical workers include sales workers and administrative support, including clerical workers. Managers, professionals, and technical workers include executive, administrative, and managerial workers, professional specialty workers, and technical and related support workers.]

1 The 1998 occupational data used here are from the Current Population Survey of the U.S. Bureau of Labor Statistics, published in Employment and Earnings, and the data for years before 1972 are from the decennial U.S. Censuses of Population as recorded in the Historical Statistics of the United States. Production occupations are defined here to include farming, forestry, and fishing; precision production, craft, and repair; operators, fabricators, and laborers; and private household and other service workers.

2 Managers, professionals, and technical (MPT) occupations include executive, administrative, and managerial workers; professional specialty positions; and technicians and related support. The residual category of occupations is composed of sales and administrative support, including clerical. This sales and clerical category rose from 8 percent of the workforce in 1900 to 19.5 percent in 1950 and grew more rapidly than MPT during that time. It continued to grow more rapidly than MPT until it reached 25 percent in 1970. Since then, however, the proportion of clerical and sales workers has been relatively stable; it amounted to 26 percent in 1999. Much of the function of these workers involves paperwork, the processing of which has been greatly automated in the past 30 years.

In 1999 the U.S. economy employed 7.6 million professional creative workers — 2.3 million engineers and architects, 2.9 million scientists, and 2.4 million writers, designers, artists, and entertainers. At the start of the 20th century, this group numbered 200,000 workers — less than 1 percent of the 29.3 million workers then employed. By 1950, the count had risen more than five times to 1.1 million — almost 2 percent of the total of 59 million workers. There are now more than six times as many creative professionals as in 1950, representing 5.7 percent of the workforce (Table).

TABLE: Professional Creative Workers

Year | Millions of professional creative workers | Proportion of all employment (percent)
1999 | 7.6 | 5.7
1990 | 5.6 | 4.7
1980 | 3.7 | 3.8
1970 | 2.6 | 3.3
1960 | 1.6 | 2.3
1950 | 1.1 | 1.9
1900 | 0.2 | 0.7

Sources: 1900-1980, Censuses of Population. 1990 and 1999, Employment and Earnings, January 1991 and January 2000. Professional creative workers consist of architects, engineers, mathematical and computer scientists, natural scientists, social scientists and urban planners, writers, artists, entertainers, and athletes. Minor multiplicative adjustments have been made to exclude teachers of dance, music, and art from the artists and entertainers category in earlier years; teachers of all types are now separated from artists and entertainers in the occupational statistics.

These professional creative workers are paid for their efforts primarily through property rights to their creations: they (and the corporations that employ them) are granted copyrights, patents, brand names, or trademarks. These property rights in turn create temporary exclusivity, temporary monopoly power that negates the unfettered access to markets so prized in economic theory.

The clash between creativity and traditional economics runs deep. Perfect competition is the central paradigm economists have relied on to describe capitalist economies. This paradigm, which underlies Adam Smith’s “Invisible Hand” theorem, focuses on production processes and abstracts from the informational tasks that managers, professionals, clerks, and sales workers perform. The paradigm of perfect competition was formulated by William S. Jevons, Leon Walras, and Carl Menger in the late 19th century, a time when direct production of goods and services dominated work.3 Is this paradigm still appropriate in an age in which innovation is such an important economic activity; millions of workers are employed in creative activities, such as designing, inventing, and marketing new products; and more and more economic activity is devoted to creating technical progress?

3 American economist Frank Knight is generally credited with formalizing the paradigm of perfect competition in the first years of the 20th century. His book Risk, Uncertainty, and Profit dates from his 1916 doctoral thesis.

In light of the changes summarized above, perhaps the theory set forth by Joseph Schumpeter and often referred to as creative destruction is a better paradigm for the current U.S. economy. Paul Romer (1998), a Stanford professor of economics and one of the new Schumpeterian theorists, uses the metaphor of cooking to describe direct production as following existing recipes while creativity is seen as creating new recipes. The new recipes that result from creative endeavors allow a higher standard of living. But creative efforts are risky: while some efforts will fail and yield little, if any, payoff, efforts that yield successful new products are richly rewarded. Firms and workers whose products are outmoded by the new products are harmed.
The unevenness of reward implies that an economy that devotes a lot of its resources to creative efforts may have greater inequality, as well as a higher average standard of living, than one that is less creative. And if creativity continues to increase in importance, inequality may continue to rise in the long run, or at least may not decline. FOLLOWING EXISTING RECIPES: THE WORLD OF THE INVISIBLE HAND Ever since Adam Smith’s The Wealth of Nations (1776), most economists have espoused the view that a specific aspect of competition called perfect competition is the main spur to economic efficiency. In terms of the metaphor of recipes, this type of competition requires that all firms in an industry have access to the same set of recipes. Let’s explore this idea to gain insight into the standard demonstration of the Law of the Invisible Hand. A recipe for producing a good or a service has a list of ingredients: quantities of inputs, including the services of labor and capital, that go into making the final product. The desire to maximize profits induces each firm to produce the product at the lowest possible cost — that is, to use the recipe that allows the firm to produce the good or service at minimum cost — given the prices of ingredients. If many firms compete, and all of them can use the same recipes, no firm can charge more than the lowest cost at which all competing firms can make the product. If it did, a competitor would offer the product at a lower price and make a profit doing so. If prices of inputs change, firms may adopt a different recipe, but they will still seek to produce at lowest cost, and competition will still force firms to charge no more than the new lowest cost. Thus, a consumer buys from firms that, in their own self-interest, produce products as efficiently as the consumer could wish and charge prices that reflect the lowest possible production cost. 18 JULY/AUGUST 2000 Guided by the invisible hand of the marketplace, firms are led by self-interest to behave in a way that maximizes each consumer’s well being — so long as there is vigorous competition among firms. This is the Law of the Invisible Hand. In general, Smith’s Law of the Invisible Hand implies that government interference in the perfectly competitive economy is unnecessary except for ensuring that monopoly does not arise. If a firm can exclude other firms from its market, thereby monopolizing a good, it will maximize profits by restricting supply and charging more than the cost of production. When that happens, consumers buy less of the monopolized good than they would at the lower price that competition would force firms to charge. The result is that the economy will operate inefficiently: too little of the monopolized good will be produced and consumers will be worse off than they would be if the good were produced competitively. In this theory, monopoly is a primary threat to the efficiency of a capitalist economy. In some cases, however, a single producer may yield the lowest cost way of producing a good or service, perhaps because the cost of making an additional unit of the good keeps falling as more units are produced by a producer (economists refer to this as scale economies). In such cases, the government’s role is to regulate the monopoly so that it does not artificially restrict supply. Smith’s theory also implies that governments can assist the invisible hand by abolishing artificial barriers to trade. This can force into competition firms that otherwise might have monopolized small markets. 
At the same time, larger markets encourage individuals to specialize in different parts of the production process and coordinate their labor. In turn, specialization — the division of labor — is the chief engine of increased productivity. Division of labor, according to Smith, owes its power to increase productivity to three sources: “first, to the increase of dexterity in every particular workman; secondly, to the saving of the time which is commonly lost in passing from one species of work to another, and lastly, to the invention of a great number of machines which facilitate and abridge labor, and enable one man to do the work of many” (p. 7). Smith saw the inventive activity that improved production techniques as being a byproduct of the division of labor, since, when a worker concentrated attention on one activity, time-saving inventions often came to mind. Of course, even in the 18th century, when Smith was writing, the activity of inventors and other creative workers was evident in the economy, but the flow of payments to creative work was minuscule compared with those that flowed to the labor, land, and capital that directly produced products.4

Smith saw progress in economic activity as flowing naturally, almost magically, from wider markets. The theory of the invisible hand, as it has evolved within modern economic growth theory, treats both economies of scale and creative activity as exogenous, that is, outside the scope of economic theory, and therefore “magical.”5 But an alternative perspective is to describe economies of scale and technical progress as endogenous to the economy, viewing creativity as an economic activity. This perspective on economics found its foremost advocate in a Harvard professor named Joseph Schumpeter, who wrote in the first half of the 20th century, during the years when formal corporate research and development first emerged on a substantial scale.

CREATING NEW RECIPES: THE NEW ECONOMY OF CREATIVE DESTRUCTION

Schumpeter argued that what really made capitalism powerful was profits derived from creativity.6 He believed that the force of habit was extremely powerful in work life and that since economic development required implementing creativity, overcoming this inertia was crucial. In his masterwork, Capitalism, Socialism, and Democracy (1942), Schumpeter constructed a paradigm for economic theory in which creativity was the prime mover in a modern economy, and profits were the fuel. He argued that what is most important about a capitalist market system is precisely that it rewards change by allowing those who create new products and processes to capture some of the benefits of their creations in the form of short-term monopoly profits.7 Competition, if too vigorous, would deny these rewards to creators and instead pass them on to consumers, in which case firms would have scant reason to create new products.
These monopoly profits provide entrepreneurs with the means to (1) fund creative activities in response to perceived opportunities; (2) override the natural conservatism of other parties who must cooperate with the new product’s launch as well as the opposition of those whose markets may be harmed by the new products; and (3) widen and deepen their sales networks so that new products are quickly made known to a large number of customers.8

4 Smith ascribes this inventive activity to workers in industries that make capital equipment.

5 In his book Krugman uses the term magic to describe the exogenous sources of economic growth in a nice exposition of this point of view.

6 Good academic introductions to this point of view are in the articles by Paul Romer (1986, 1990) and the book by Joseph Stiglitz. Romer (1998) is a good business-oriented popular discussion. The book by Gene Grossman and Elhanan Helpman is an advanced text.

7 Schumpeter ignores the theoretical possibility that new recipes can be developed and paid for using perfect contracts, where the inventors are paid for their labor and the recipes are then made available freely to all firms. It appears that new consumer products cannot be readily specified in advance, as such a perfect contract would require. The book by Stiglitz discusses evidence that creative destruction is difficult to assimilate into a perfect contract world.

8 Opposition to new products can arise from consumer and political groups, from workers who make rival products within or outside the firm, or from potential distributors. This opposition may be formal or informal, legal or illegal. Consider the recent worldwide opposition to genetically modified agricultural products or the protests at the Seattle meeting of the World Trade Organization.

The drive to temporarily capture monopoly profits promotes, in Schumpeter’s memorable phrase, “creative destruction,” as old goods and livelihoods are replaced by new ones.9 Thus, while Adam Smith saw monopoly profits as an indication of economic inefficiency, Joseph Schumpeter saw them as evidence of valuable entrepreneurial activity in a healthy, dynamic economy. Indeed, Schumpeter’s view was that new products and processes are so valuable to consumers that governments of countries should encourage entrepreneurs by granting temporary monopolies over intellectual property and other fruits of creative effort. Thus, in contrast to Adam Smith, Schumpeter argued that government action to prevent or dismantle monopolies might harm growth and the consumer in the long run.10, 11 In practice, temporary intellectual property protection has been adopted by all advanced industrial economies, suggesting that this reward system is indeed valuable in promoting economic growth.

9 The monopoly is only temporary; it lasts until a better product comes along that drives out the old or until the patent or copyright expires and others are able to copy the idea or process and compete with the originator. If the grant of monopoly were long-lived, the monopolist would have less incentive to create innovations and might have the power to prevent potential competitors from introducing innovations.

10 Schumpeter’s book gloomily prophesied that capitalism itself would succumb to socialism because of the intellectual disrepute into which economic theory had plunged monopoly and monopolists, when these very monopolists were the heroes of capitalism, properly understood.
To this extent, modern economies have not obeyed the law of the invisible hand. We have made monopoly, albeit temporary, an important instrument of national development policy.12

12 Mark Rose's study of the development of English copyright law illustrates the explicit balancing of the property rights of the creator against the desirability of limiting monopoly power.

On the other hand, the temporary monopoly protections of intellectual property law are not the only way modern societies reward innovators. For example, much scientific research is funded by grants from public agencies or private foundations. Development of military products is often done for a fixed payment, determined by a bidding process, or on the basis of the incurred and audited costs of the developer. However, these alternative reward systems are employed only where a normal market does not exist for the product. For consumer products, it appears that, in general, the marketplace is the best measure of the value of an invention. The more valuable the product, the greater the reward to its creator should be. And that is exactly what a patent or copyright provides: a reward that rises with consumer value, because the greater a product's consumer value, the more profit a monopolist can realize from its sales, since the monopolist can charge more for it.13 At the same time, it remains true that the temporary monopoly itself deprives society of the full value of the creation, since, to secure their monopoly profits, firms limit supply. Thus, the full value of the creation is realized only when the monopoly ends.14 (A stylized calculation below illustrates both points.) While Schumpeterian theories tell us some form of intellectual property protection for creators is desirable, they do not yet tell us how much protection to award, for instance, how long patents should last.

13 The theoretical basis for this, as well as modern views of the underlying complexities, is laid out in the article by Suzanne Scotchmer and the one by Francesca Cornelli and Mark Schankerman. One important limitation of the theoretical result is that it assumes away patent races.

14 Robert Hunt's article is a good summary of theoretical and empirical evidence about the uncertainties of optimal patent protection.

There are two important drawbacks to an economy of creative destruction. First, an economy of creative destruction knows only one pace — hectic. There is no way to establish who created something except by priority: whoever says or does it first. Once something is discovered, it is easy to copy, so someone who independently creates something, but does so belatedly, gets no credit and no share in the reward. The rewards of creativity go to the swiftest; it is thus no accident that long hours are a frequent correlate of creative activity. Second, creative destruction, as its name implies, involves risk and change. Those whose products are outmoded by a new product lose their livelihoods. Even those who create a new product can predict but a small part of its consequences.
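The stylized calculation promised above checks both claims: that the creator's reward rises with consumer value, and that the monopolist's restricted supply defers part of the creation's full value. It is not from the article; the linear demand curve and the specific numbers are invented for illustration.

```python
# A stylized calculation, not from the article. Demand is P = a - b*Q and
# marginal cost is c; all numbers are invented for illustration.

def outcomes(a, b, c):
    """Compare competitive and monopoly outcomes for demand P = a - b*Q."""
    # Competition: price falls to marginal cost; consumers get all the surplus.
    q_comp = (a - c) / b
    full_value = 0.5 * (a - c) * q_comp
    # Monopoly: output where marginal revenue (a - 2bQ) equals marginal cost.
    q_mono = (a - c) / (2 * b)
    p_mono = (a + c) / 2
    profit = (p_mono - c) * q_mono                  # the creator's reward
    consumer_surplus = 0.5 * (a - p_mono) * q_mono
    return profit, (profit + consumer_surplus) / full_value

# A more valuable invention (a higher demand intercept a) earns its creator
# a larger reward, but only 75 percent of the full value is realized while
# the monopoly lasts; the rest arrives when the monopoly ends.
for a in (10.0, 20.0):
    profit, share = outcomes(a, b=1.0, c=2.0)
    print(f"a = {a:4.1f}: monopoly profit = {profit:5.1f}, "
          f"share of full value realized = {share:.0%}")
```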
The forces that oppose creativity are not irrational; they are the natural concerns of economic participants about how they will be affected by creativity.

WHY ARE THE FORCES OPPOSING CREATIVITY SO STRONG?

Why oppose change and growth in the economy? Because of the riskiness of creating, making, competing against, and buying new products.15 All activities are at risk in an environment of creative destruction.

15 Discussions of the impact of increasing risk and inequality in the U.S. are found in the book by Robert Frank and Philip Cook and the book by Michael Mandel.

Creativity Puts Existing Products At Risk. One aspect of competition within the creative destruction paradigm is what might be called leapfrogging competition, but which economists call a "quality ladder."16 In this form of competition — which can be observed in video game machines, microprocessors, computer software, pharmaceuticals, cell phones, and color televisions — companies try to create new generations of the same product so that the bang for the buck (in economic terms, quality-adjusted value per dollar) rises. A clear example is the personal computer (PC), whose power and speed have been rising at rapid rates for over 20 years. In the competition to supply components of the PC, such as modems or memory, any firm that wants to play the game has to invest in creating new, faster, and smaller versions of the component. To earn profits that justify this investment and its uncertainties, the resulting innovation must leapfrog the competition by creating a new generation. The first firm to market with the new generation can often grab the bulk of the entire market and, with it, almost all the profits to be had. Of course, this typically wipes out the profitability of the previous generation and sets the stage for the next leapfrogger, who will then destroy the profits of the current leader.

16 The pioneering article is the one by Philippe Aghion and Peter Howitt. Grossman and Helpman's book is a nice exposition, albeit at an advanced level. The competition being described is not easy to model mathematically because the firms engaged in it have to worry about both the past and the future — the qualities of existing products and the future products that will be discovered — in calculating the likely profitability of their investments.

Another aspect of creative destruction is competition across different types of products. The creation of a new type of product will, first and foremost, increase the variety of products available to consumers.17

17 The seminal paper is the one by Avinash Dixit and Stiglitz.

Beyond that, it will enhance the desirability of some kinds of products and lower that of others, just as the automobile increased the demand for rubber tires and gasoline and reduced the demand for horseshoes and buggy whips. More generally, new products encompass both aspects — they can be seen both as quality improvements and as different products that widen the market. Consider new drugs like Celebrex and Vioxx, improved versions of aspirin that minimize the gastrointestinal side effects of long-term use of aspirin and aspirin substitutes.
These products have modestly reduced the demand for aspirin, but because of their current high price, their main effect has been to expand the market to those who have had adverse reactions to aspirin and aspirin substitutes.

Being Creative Is Inherently Risky. You don't know what will work until you try it. While successful new products may earn immense returns, others inevitably fail and cause losses to their creators and their supporters. Every new product is a step into the unknown.18

18 The first economist to focus on the fundamental uncertainties of creativity was Frank Knight, and in his honor, this aspect of uncertainty is often called Knightian risk. Because we cannot rely on new creativity to be like past creativity, an empirical analysis of Knightian risk will likely always be at least somewhat unsatisfactory. It also implies that the confidence of investors (which Keynes called their animal spirits) may be an important determinant of the rate of investment.

Recent examples of products that were expected to fare well in the marketplace, but did not, include the antibiotic Trovan and the 1998 remake of the movie Godzilla. Trovan was expected to be a multibillion-dollar antibiotic. Its launch in 1998 was a tremendous success: two million prescriptions were written in a year. But of these users, 14 suffered severe liver damage as a side effect, and several died.19 As a result, Trovan's distribution was limited to use in supervised settings (that is, hospitals) in the United States, and the European Union banned it outright. Now Trovan is no longer expected to be a blockbuster drug. Similarly, among movies, the remake of Godzilla was expected to be the summer blockbuster of 1998. Instead, its sales were very disappointing.

19 In clinical trials, 7,000 patients were exposed to Trovan, and no cases of acute liver failure were reported. ("Questions and Answers about TROVAN Advisory," FDA Medwatch, June 9, 1999.)

Careers and Sequels. For individual scientists and artists, past success is no guarantee of future success. If we could pick winners, we would give those who are going to be productive the resources they need, but often we recognize talent only after the fact. After he published his Principia, Newton's scientific output essentially disappeared. Computer laser typesetting pioneer Wang Xuan, of Beijing University, was quoted in Science magazine as lamenting, "When I was in my prime, doing the most advanced research, I was not recognized. [N]ow that my creative peak has long passed...my fame grows while I'm making fewer and fewer contributions."20

20 Quoted in the section "Random Samples," Science 285, September 10, 1999, p. 1663.

This riskiness extends to those who work with creators, because their continued employment may depend on the success of the creators. Some kinds of downsizing can be viewed as the natural consequence of failed creativity, of the inability of a group to maintain a stream of innovation. Of course, in a world of creative destruction, those who don't even attempt to innovate also get downsized. And workers whose employment is attached to outmoded methods of production or outmoded goods suffer large penalties if they are unable to adapt to change.

Networks and Risk. Another aspect of the risk of creative destruction is that consumers also invest in a product or system.21 If the product or system becomes outmoded, consumers suffer along with the producer.
Hence, consumers also must try to pick winners. This effect becomes sharper when the number of consumers investing in a given system influences its value for each consumer: for example, the more of your friends who have email, the more useful email is to you.

21 Carl Shapiro and Hal Varian's book gives a readable introduction to the consumer network effects that have been the focus of much economic research. John Sutton's book discusses the general issue of consumer investments in a system.

Phonograph records suddenly became a risky investment in the 1980s when compact discs took the market. Compact discs offered enough advantages to ensure that new consumers would want to switch to the new technology. Older consumers had to bear switching costs: their existing collections of records and stereo equipment became outmoded, and new records ceased to be widely available.

Betamax looked like a technology winner to most experts when videocassette recorders (VCRs) were introduced in the late 1970s. Beta was competing with VHS, and insiders knew that Sony had had the opportunity to develop either Beta or VHS and had chosen Beta as the superior technology. But the corporations that developed VHS were able to lengthen videocassette playback times more rapidly. Consumers who did adopt Beta eventually found that they had to switch to VHS, as Sony was forced to abandon the system by the greater availability of prerecorded videocassettes on VHS. When consumers do choose a system, the system's rivals may suffer irreversible setbacks, as the Beta system did. This underscores the risks of such competition: network competition creates big winners and big losers.

In 1961, back in the early days of the computer, when each piece of computer software was written for a specific model of computer, IBM decided to create an operating system that would permit computer users to run the same programs across the entire family of IBM computers. The difficulty of creating such a system proved much greater than expected, and IBM nearly failed waiting for its completion in 1966 (see the book by Thomas Watson). But once the system was together and operating, IBM's rivals in the computer business were helpless, and virtually all of their important customers migrated to this new system that could grow as they did. Here the "consumers" were large corporate users, whose investments in software became much more durable once programs could be used unchanged on different models of computers. IBM's U.S. competitors became known as the Seven Dwarfs, and IBM dominated the worldwide computer market for 20 years thereafter.
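How this kind of network competition tips toward big winners and big losers can be illustrated with a stylized simulation. It is not from the article; the qualities, network weight, and taste noise are invented for illustration.

```python
import random

# A stylized simulation, not from the article: two incompatible systems,
# A and B, compete. Each period one new consumer adopts whichever system
# offers higher perceived value: intrinsic quality, plus a network term
# that rises with the installed base, plus idiosyncratic taste. All
# parameters are invented for illustration.

def simulate(quality_a=1.05, quality_b=1.00, network_weight=0.01,
             consumers=1000, taste_sd=1.0, seed=0):
    rng = random.Random(seed)
    base_a = base_b = 0
    for _ in range(consumers):
        value_a = quality_a + network_weight * base_a + rng.gauss(0, taste_sd)
        value_b = quality_b + network_weight * base_b + rng.gauss(0, taste_sd)
        if value_a >= value_b:
            base_a += 1
        else:
            base_b += 1
    return base_a, base_b

# Early adoptions are driven largely by taste (luck); once one installed
# base pulls ahead, the network term dominates and the market tips. The
# intrinsically better system A usually wins, but not always, echoing
# Betamax's fate.
a_wins = 0
for seed in range(10):
    base_a, base_b = simulate(seed=seed)
    a_wins += base_a > base_b
    print(f"run {seed}: A = {base_a:4d}, B = {base_b:4d}")
print(f"A won {a_wins} of 10 runs")
```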
The costs associated with the riskiness of creativity must be balanced against the gains obtained. Unfortunately, measuring the economic gains due to new products is harder than measuring those from more efficiently produced existing products.

CREATIVITY IS HARD TO VALUE

The investments that consumers make in using a product, or that firms make in new complements, make that product more valuable. When VCRs first came on the market, they were mainly used to record television programs for playback at a more convenient time. But as VCRs proliferated and were able to play longer tapes, they became a convenient format for playing movies. Businesses that rented prerecorded tapes to consumers further enhanced the value of the VCR. Similarly, the development of software and of the Internet has further enhanced the value of personal computers.

Because we learn about the true value of new products only with experience, and because consumers invest in new product systems only over time — and in doing so enhance their value — it takes a long time to know how valuable any given piece of creativity is. The enthusiasm of the moment, whether highbrow or lowbrow, may not be what lasts. Samuel Johnson said that a century was long enough to judge that Shakespeare's plays were indeed immortal. Shakespeare himself thought that his sonnets would last, but he didn't publish his plays.22 Yet when Harold Bloom argues that Shakespeare created the modern world, he is citing the plays, not the sonnets. Will Seinfeld be an important source of humor for the 22nd century? Will John Cage or John Lennon be seen as the more important composer a century from now?

22 In Sonnet 18, Shakespeare promised his now forgotten patron that his verse would be immortal: "So long as men can breathe or eyes can see, So long lives this, and this gives life to thee."

Not only is measuring the value of creativity inherently difficult, but the task is made harder because many of our measures implicitly assume perfect competition. The U.S. Bureau of Economic Analysis (1998) describes the classification of products in the national income and product accounts as follows: "Goods are products that can be stored or inventoried, services are products that cannot be stored and are consumed at the time of their purchase, and structures are products that are usually constructed at the location where they will be used and that typically have long economic lives." This description appears to leave no room for intangible assets, such as the copyright for Windows 98 and the patent for Viagra, that result from creative endeavors. These assets are not material and are thus unlike goods and structures, but they may be long-lived, unlike services. Under the theoretical ideal of the perfectly competitive economy, intangible assets do not exist, because the monopoly power they imply is ruled out. Put another way, in a perfectly competitive economy, because all recipes are freely available, no one earns a profit from owning one. A direct consequence of the use of the invisible hand paradigm is that the value of creativity disappears from statistical view.

The result is that creativity is poorly measured in the U.S. economy. Our official statistics generally don't treat creativity as an investment (Nakamura, 1999a). This in turn causes the statistics to understate nominal output, savings, and profits. Retail innovations and the proliferation of new products that result from creative activity have made it more difficult to measure the inflation rate (Nakamura, 1995, 1998, 1999b). Indeed, our official statistics almost certainly overstate inflation. The combination means that our measures understate real economic growth (Nakamura, 1997). One of the anomalous features of the U.S. economy is the slow rate of measured productivity growth since the mid-1970s, during this period of intensive creativity. In large part, the reason for this anomaly is that the perfect competition paradigm describes creativity as unimportant, and therefore our economic statistics tend to ignore it. However, measures of U.S. economic growth are in the process of being revised.
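A minimal sketch of the measurement point, with invented figures and not from the article: when spending on creative work is expensed rather than capitalized, measured profit, and with it measured saving and output, is lower in the year the work is done, even though the firm's real activity is identical.

```python
# A minimal sketch, not from the article; all figures are invented. It
# illustrates the accounting point above: expensing creative spending
# (software, R&D) lowers measured profit in the year the work is done,
# while capitalizing it records an intangible asset instead.

def toy_income_statement(revenue, production_costs, creative_spending,
                         capitalize):
    expensed = 0.0 if capitalize else creative_spending
    profit = revenue - production_costs - expensed
    new_intangible_assets = creative_spending if capitalize else 0.0
    return {"measured profit": profit,
            "new intangible assets": new_intangible_assets}

# The firm's real activity is the same in both rows; only the measurement
# convention differs, yet measured profit (and the saving and output built
# up from such measures) differs by the full amount of creative spending.
print("expensed:   ", toy_income_statement(1000, 700, 100, capitalize=False))
print("capitalized:", toy_income_statement(1000, 700, 100, capitalize=True))
```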
In the 1999 revision to the national income accounts, the U.S. Bureau of Economic Analysis raised the estimated annual growth rate for the period 1978 to 1998 from 2.6 percent to 3.0 percent. As a result of this change, the Bureau of Labor Statistics raised its estimates of average growth in output per hour in the nonfarm business economy from 1.1 percent to 1.5 percent per year. This change was made primarily because the BEA recognized software as an investment and also improved its measures of financial sector output to reflect product change — in both cases bringing increased awareness of new products' impact on economic growth into the national accounts.

Until the process of revising our statistical structure is reasonably far along, it will be hard for the economics profession to judge the empirical validity of the paradigm of creative destruction. If there is to be a scientific paradigm shift, the creative destruction paradigm must explain data better than the invisible hand paradigm does. This in turn requires that the fundamental measures the economics profession uses to generate data be reformulated to reasonably reflect the value of creativity, not only for the current period but for the past. If, upon doing so, we observe a long-term acceleration of productivity, this observation would provide valuable empirical evidence that the creative destruction paradigm is superior (Romer, 1986). Moreover, if these arguments are correct, we should then be able to describe the sources of economic growth more precisely and convincingly.

Another point of difference between the invisible hand and creative destruction is a prediction about the distribution of outcomes. The law of the invisible hand suggests that competition among workers and companies will tend to equalize wages, whereas creative destruction suggests that markets may tend to magnify inequalities.

IN THE NEW ECONOMY, INEQUALITY MAY BE ON THE RISE

Inequality and Productivity Growth in the U.S. Productivity growth in the U.S. has been phenomenal if we look at long periods of time, even using traditional measures of output. Output per hour has doubled every 30 to 40 years for the past 120 years, leading to a standard of living roughly 10 times higher than that just after the Civil War (see the book by Angus Maddison). Even the poorest U.S. citizens are far better off than in the distant past. But over the past 20 years, inequality has risen distinctly in the U.S., and creative destruction appears to have had an important role in its increase. While very highly paid male workers earned less than 2.5 times the pay of poorly paid male workers (precisely, the worker at the 90th percentile in earnings compared with one at the 10th percentile) in the 1960s and the early 1970s, the multiple has since risen fairly steadily. Since the mid-1990s, very highly paid male workers have earned roughly four times what poorly paid male workers earn.23

23 See the article by Peter Gottschalk.

On average, workers at companies that are engaged in creative activities — as measured by research and development expenditures, investment in computers, and on-the-job training — have earned more and had greater income growth.
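For concreteness, the 90/10 measure quoted above can be computed as follows. This sketch uses invented weekly wage data and is not from the article.

```python
# A small sketch, not from the article, of the inequality measure quoted
# above: earnings at the 90th percentile divided by earnings at the 10th
# percentile. The weekly wages below are invented for illustration.

def percentile(sorted_values, p):
    """Simple percentile by rank: the value p percent of the way up the
    sorted list (rounded to the nearest rank)."""
    k = round(p / 100 * (len(sorted_values) - 1))
    return sorted_values[k]

wages = sorted([310, 340, 420, 500, 560, 640, 700, 820, 980, 1250, 1600])
p90 = percentile(wages, 90)
p10 = percentile(wages, 10)
print(f"90/10 ratio: {p90 / p10:.2f}")  # about 3.7 with these invented data
```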
The rapid technological change in this period appears to have favored the highly educated — those who are best prepared to create, to assist in creativity, and to learn new ways of working to accommodate the resulting changes.24 Even though the supply of the highly educated has risen rapidly, demand has outpaced supply, and the value of higher education has risen. Quantitatively, the proportion of the working population over age 25 with at least a bachelor's degree rose from 22 percent in 1979 to 31 percent in 1999. The median worker with a college degree earned 68 percent more a week than the median worker with a high school degree in 1999, up from 29 percent more in 1979.25

24 For the background to this argument, see the Symposium on Wage Inequality in the Spring 1997 Journal of Economic Perspectives, where articles by Gottschalk, George Johnson, Robert Topel, and Nicole Fortin and Thomas Lemieux present a variety of views on skill-biased technical change.

25 Economic Report of the President, February 2000, U.S. Government Printing Office, pp. 135-36.

There is a clear and close connection between the rising value of a college education and the rapid growth of managerial and professional work that is increasingly centered on creativity. A college degree is often required for these occupations, and those who earn college degrees generally enter them. As of March 1997, 62 percent of managers and professionals had bachelor's or advanced degrees. Conversely, 68 percent of all holders of bachelor's or advanced degrees were either managers or professionals. At least some of the value imputed to a college degree is likely to be a return to greater continuing investment in knowledge: holders of college degrees are much more likely than others to engage in formal education while working. And inequality has risen substantially even after we control for measurable changes in education, demographics, and the growth of trade.26

26 See the Symposium in the Journal of Economic Perspectives cited earlier.

If the U.S. economy continues to change as dynamically as it has in the recent past — and the evidence on the proportion of the workforce devoted to creativity suggests that it will — there is scant reason to suppose that inequality will decline. Moreover, increases in inequality are occurring not only within the United States but also between the advanced industrial economies and other countries.

Inequality in the World Economy. The paradigm of perfect competition implies that inequality between rich and poor countries should fall as barriers to trade fall. Opening up trade permits countries to specialize more in the products they produce most efficiently. Allowing the unhindered importing of capital lets poor countries adopt the technology of richer ones. Under fairly general conditions, the wages of workers and the return to capital in rich and poor countries will tend to become more similar. Workers in less developed countries should benefit more than workers in developed countries as both types of economies become more efficient and the relative wages of workers in the less developed countries rise. As global trade increases, average output per person should become less disparate.27

27 That international trade tends to increase both equality of returns and efficiency was put on a firm foundation by a series of economists beginning with David Ricardo and continuing to the present. See, for example, the text by Wilfred Ethier.

But while global trade has increased, the evidence on whether inequality has diminished is, at best, equivocal.
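The next paragraphs compare rich-to-poor income ratios across more than a century. As a back-of-the-envelope aid, the sketch below (invented growth rates, not from the article) shows how slowly such a ratio moves: it compounds the gap between rich-country and poor-country growth rates.

```python
# A back-of-the-envelope sketch, not from the article; the growth rates
# below are invented. If rich-country income per person grows at g_rich
# and poor-country income at g_poor, the rich-to-poor income ratio
# compounds as ratio(t) = ratio(0) * ((1 + g_rich) / (1 + g_poor)) ** t.

def income_ratio(ratio0, g_rich, g_poor, years):
    return ratio0 * ((1 + g_rich) / (1 + g_poor)) ** years

# Equal growth leaves the ratio unchanged, however fast both groups grow...
print(round(income_ratio(10.0, 0.020, 0.020, 38), 1))   # stays 10.0
# ...while even a small persistent gap widens it noticeably over 128 years
# (roughly the 1870-1998 span discussed below).
print(round(income_ratio(9.0, 0.018, 0.0165, 128), 1))  # about 9 -> 11
```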
Output per worker among the advanced industrial countries has tended to converge, but over long periods of time, the gap between the advanced countries and the less developed countries has not generally diminished. Output per worker throughout the world has risen dramatically, as it has in the United States, but there remain large pockets of poverty in which households produce little more than the bare minimum necessary for subsistence. According to the World Bank's World Development Indicators 2000, the 3.5 billion inhabitants of the low-income countries had an average gross national product per person of $2,170 in 1998.28 The middle-income countries, with 1.5 billion inhabitants, averaged $5,990 per person that year, while the high-income countries, with 0.9 billion inhabitants, averaged $23,420 per person. As a group, the richest countries generate 11 times as much gross national product per person as the low-income countries. By comparison, Lant Pritchett has argued that in 1870, the income gap between the high-income countries and the low-income countries must have been less than nine times. While low-income countries have experienced, on average, a very substantial increase in income, so have the high-income countries. The net result is that worldwide inequality has not diminished over the past 130 years.

28 Product here is measured in terms of its purchasing power in 1998 U.S. dollars.

No doubt much of this inequality is the result of bad governance and bad luck, including the rapacity of local oligarchs, disease, war, colonial policy, and civil disorder. This period of history also includes extended stretches during which trade barriers between nations were quite high and rising. But if we confine our observations to the period since 1960, during which trade barriers around the world have fallen, we still see relatively little decline in income inequality.29 According to Robert Summers and Alan Heston, gross domestic product per person in 1960 in the high-income countries was 10 times higher than in the low-income countries. Thus, the 1998 ratio of 11 times shows scant convergence even in the recent period of trade liberalization.

29 Trade barriers fell first under the General Agreement on Tariffs and Trade and now under the auspices of the World Trade Organization.

Can we expect more rapid convergence in an era in which economic value increasingly depends on creative destruction? Consider the advantages the United States has vis-a-vis a less developed country in the race to create. The U.S. has a well-educated, diverse, and disciplined workforce; access to the most recent research; a deregulated economy relatively unencumbered by bureaucratic restrictions; moderate taxes; a smoothly functioning financial market to finance investment; a long history of rule by law and democracy; a military under firm civilian control; and a host of highly innovative corporations.
These absolute advantages count for a great deal in the world of creative destruction, where speed, flexibility, and advanced education all matter in developing new products and bringing them rapidly to the marketplace. Indeed, to the extent that creative individuals and firms benefit from geographic proximity, the direct economic benefits of successful creativity will tend to be concentrated in the most advanced countries. The United States will have these advantages whether or not the less developed countries participate in globalization. Even so, in the long run, less developed countries benefit from the improved ability of the world economy to provide new recipes. But the benefits of globalization should not be oversold. In the short run, rapid obsolescence will tend to deter adoption of new technology in nations where indigenous markets are small. And less developed countries will find it difficult to emulate — and are not allowed by the rules of intellectual property protection to copy — the development of new products. The paradigm of creative destruction implies, in all probability, persistent or even rising inequality between countries.

HOW TO THINK ABOUT A CHANGE IN PARADIGM FOR ECONOMICS

What should the fundamental paradigm of economics be: creative destruction or the invisible hand? This is an empirical matter that depends on the importance of creativity. Creativity is, indeed, hard to measure precisely. But if we fail to recognize it in our economic theory or in our economic measures, we are doomed to be precisely wrong rather than approximately correct. Federal Reserve Chairman Alan Greenspan made this point when he said, "But the essential fact remains that even combinations of very rough approximations can give us a far better judgment of the overall cost of living than would holding to a false precision of accuracy and thereby delimiting the range of goods and services evaluated. We would be far better served following the wise admonition of John Maynard Keynes that 'it is better to be roughly right than precisely wrong.'"30

30 Testimony of Chairman Alan Greenspan before the Committee on the Budget, U.S. House of Representatives, March 4, 1997.

How should economists and noneconomists think about the possibility of a paradigm shift in economics? British Nobel laureate economist John Hicks took up this topic in his 1983 paper on "revolutions" in economics:

"Our special concern [in economics] is with the fact of the present world; but before we can study the present, it is already past. In order that we should be able to say useful things about what is happening, before it is too late, we must select, even select quite violently. We must concentrate our attention, and hope that we have concentrated it in the right place.

"Our theories, regarded as tools of analysis, are blinkers in this sense. Or it may be politer to say that they are rays of light, which illuminate a part of the target, leaving the rest in the dark. As we use them, we avert our eyes from things that may be relevant. ...But it is obvious that a theory which is to perform this function satisfactorily must be well chosen; otherwise it will illumine the wrong things. Further, since it is a changing world that we are studying, a theory which illumines the right things now may illumine the wrong things another time. This may happen because of changes in the world (the things neglected may have grown relative to the things considered) or because of changes in our sources of information (the sorts of facts that are readily accessible to us may have changed) or because of changes in ourselves (the things in which we are interested may have changed). There is, there can be, no economic theory which will do for us everything we want all the time."
Put succinctly, Hicks argues that economic science must adapt to the nature of the economy. The growing importance of creative endeavors appears to be what's new in the New Economy. If so, the New Economy represents a significant change in the nature of the U.S. economy, one that is difficult to align with the paradigm of perfect competition. The New Economy is highly competitive, but creative destruction, not production, is at the center of the competition. This implies, in line with Hicks's views, that for understanding the New Economy, Joseph Schumpeter's creative destruction paradigm may be superior to Adam Smith's invisible hand.

BIBLIOGRAPHY

Aghion, Philippe, and Peter Howitt. "A Model of Growth Through Creative Destruction," Econometrica 60 (2), March 1992, pp. 323-51.

Berman, Eli, John Bound, and Zvi Griliches. "Changes in the Demand for Skilled Labor Within U.S. Manufacturing: Evidence from the Annual Survey of Manufacturing," Quarterly Journal of Economics 109, May 1994, pp. 367-98.

Berman, Eli, John Bound, and Stephen Machin. "Implications of Skill-Biased Technological Change: International Evidence," NBER Working Paper 6166, September 1997.

Bureau of Economic Analysis. National Income and Product Accounts of the United States, 1929-94, Volume 1. Washington, DC: U.S. Government Printing Office, 1998.

Cornelli, Francesca, and Mark Schankerman. "Patent Renewals and R&D Incentives," RAND Journal of Economics 30 (2), Summer 1999, pp. 197-213.

Davis, Stan, and Christopher Meyer. Blur. New York: Addison-Wesley, 1998.

Dixit, Avinash, and Joseph E. Stiglitz. "Monopolistic Competition and Optimum Product Diversity," American Economic Review 67, 1977, pp. 297-308.

Ethier, Wilfred J. Modern International Economics, Third Ed. New York: W.W. Norton, 1997.

Fortin, Nicole M., and Thomas Lemieux. "Institutional Changes and Rising Wage Inequality: Is There a Linkage?" Journal of Economic Perspectives 11 (2), Spring 1997, pp. 75-96.

Frank, Robert H., and Philip J. Cook. The Winner-Take-All Society. New York: Viking Penguin, 1996.

Gleick, James. Faster: The Acceleration of Just About Everything. New York: Pantheon, 1999.

Gottschalk, Peter. "Inequality, Income Growth and Mobility: The Basic Facts," Journal of Economic Perspectives 11 (2), Spring 1997, pp. 21-40.

Grossman, Gene M., and Elhanan Helpman. Innovation and Growth in the Global Economy. Cambridge: MIT Press, 1991.

Hicks, John. "'Revolutions' in Economics," in John Hicks, Classics and Moderns, Collected Essays, Vol. III. Cambridge: Harvard University Press, 1983, pp. 3-16.

Hunt, Robert. "Patent Reform: A Mixed Blessing for the United States Economy?" Federal Reserve Bank of Philadelphia Business Review, November/December 1999, pp. 15-29.

Johnson, George E. "Changes in Earnings Inequality: The Role of Demand Shifts," Journal of Economic Perspectives 11 (2), Spring 1997, pp. 41-54.
Knight, Frank. Risk, Uncertainty and Profit. Boston: Houghton Mifflin, 1921.

Krugman, Paul. Peddling Prosperity. New York: Norton, 1994.

Maddison, Angus. Monitoring the World Economy 1820-1992. Paris: OECD, 1995.

Mandel, Michael J. The High-Risk Society: Peril and Promise in the New Economy. New York: Random House, 1998.

Nakamura, Leonard. "Measuring Inflation in a High-Tech Age," Federal Reserve Bank of Philadelphia Business Review, November/December 1995.

Nakamura, Leonard. "Is the U.S. Economy Really Growing Too Slowly? Maybe We're Measuring Growth Wrong," Federal Reserve Bank of Philadelphia Business Review, March/April 1997.

Nakamura, Leonard. "The Retail Revolution and Food-Price Measurement," Federal Reserve Bank of Philadelphia Business Review, May/June 1998.

Nakamura, Leonard. "Intangibles: What Put the New in the New Economy?" Federal Reserve Bank of Philadelphia Business Review, July/August 1999a.

Nakamura, Leonard. "The Measurement of Retail Output and the Retail Revolution," Canadian Journal of Economics 32 (2), April 1999b, pp. 408-25.

Parente, Stephen L., and Edward C. Prescott. "Monopoly Rights: A Barrier to Riches," American Economic Review 89, December 1999, pp. 1216-33.

Pritchett, Lant. "Divergence, Big Time," Journal of Economic Perspectives 11 (3), Summer 1997, pp. 3-17.

Romer, Paul M. "Increasing Returns and Long-Run Growth," Journal of Political Economy 94, 1986, pp. 1002-37.

Romer, Paul M. "Endogenous Technological Change," Journal of Political Economy 98, 1990, pp. S71-S102.

Romer, Paul M. "Bank of America Round Table on the Soft Revolution," Journal of Applied Corporate Finance, Summer 1998, pp. 9-14.

Rose, Mark. Authors and Owners: The Invention of Copyright. Cambridge: Harvard University Press, 1993.

Schumpeter, Joseph. Capitalism, Socialism, and Democracy. New York: Harper, 1942.

Scotchmer, Suzanne. "On the Optimality of the Patent Renewal System," RAND Journal of Economics 30 (2), Summer 1999, pp. 181-96.

Shapiro, Carl, and Hal R. Varian. Information Rules. Boston: Harvard Business School Press, 1999.

Smith, Adam. The Wealth of Nations. Reprinted Homewood, IL: Irwin, 1963.

Stiglitz, Joseph E. Whither Socialism? Cambridge: MIT Press, 1994.

Summers, Robert, and Alan Heston. "The World Distribution of Well-being Dissected," in Alan Heston and Robert E. Lipsey, eds., International and Interarea Comparisons of Income, Output, and Prices. Chicago: University of Chicago Press, 1999.

Sutton, John. Sunk Costs and Market Structure. Cambridge: MIT Press, 1991.

Topel, Robert H. "Factor Proportions and Relative Wages: The Supply-Side Determinants of Wage Inequality," Journal of Economic Perspectives 11 (2), Spring 1997, pp. 55-74.

Watson, Thomas J., Jr. Father, Son & Co.: My Life at IBM and Beyond. New York: Bantam, 1990.