
Economic Review
Federal Reserve Bank of San Francisco
1992, Number 2

Bharat Trehan

Predicting Contemporaneous Output

Brian A. Cromwell

Does California Drive the West?
An Econometric Investigation
of Regional Spillovers

Carolyn Sherwood-Call

Changing Geographical Patterns
of Electronic Components Activity

Predicting Contemporaneous Output .......................... 3
Bharat Trehan

Does California Drive the West?
An Econometric Investigation of Regional Spillovers .......................... 13
Brian A. Cromwell

Changing Geographical Patterns of Electronic Components Activity .......................... 25
Carolyn Sherwood-Call


Opinions expressed in the Economic Review do not necessarily reflect the views of the management of the Federal Reserve Bank of San Francisco, or of the Board of Governors of the Federal Reserve System.
The Federal Reserve Bank of San Francisco's Economic Review is published quarterly by the Bank's Research Department under the supervision of Jack H. Beebe, Senior Vice President and Director of Research. The publication is edited by Judith Goff. Design, production, and distribution are handled by the Public Information Department, with the assistance of Karen Flamme and William Rosenthal.
For free copies of this and other Federal Reserve publications, write or phone the Public Information Department, Federal Reserve Bank of San Francisco, P.O. Box 7702, San Francisco, California 94120. Phone (415) 974-2163.
Printed on recycled paper
with soybean inks.


Predicting Contemporaneous Output

Bharat Trehan
Senior Economist, Federal Reserve Bank of San Francisco.
I am grateful to John Judd, Brian Motley and Sun Bae Kim
for comments. Robert Ingenito provided capable and conscientious research assistance.

This paper presents an update of a simple model for
predicting real GDP using contemporaneous monthly data.
These forecasts are based on just three variables, all of
which are available early in the quarter. The earlier version
of this model was used at this Bank for more than four years.
An analysis of the real-time forecasts made over this period
shows that the forecasting errors were reasonable, and that
the model's forecasts compare well to the Blue Chip
consensus forecasts.

It is easy to appreciate the value of an accurate reading
on the current state of the economy during times of great
uncertainty. During the first quarter of this year, for example, observers were trying to determine whether an economic recovery would take hold, or whether the economy
would slip back into recession. Such a determination is
likely to be especially important for monetary policymakers. For instance, information that the economy was contracting over the first quarter could very well have led to a
further easing of policy. Yet data on broad measures of
economic activity are available only with a lag. In this
paper I discuss a method of obtaining estimates of contemporaneous aggregate activity using data that are available
with relatively short lags.
Earlier work at this bank showed that reasonable forecasts of real gross national product (GNP) growth in the
current quarter could be obtained using a set of only three
variables: nonagricultural employment, industrial production, and real retail sales.1 This paper adds to the earlier
analysis in three ways. First, I update the model so that it
can be used to predict real gross domestic product (GDP)
instead of real GNP. After searching over a list of about a
dozen or so variables, I find that the same three variables
(with one small modification) still provide reasonably
accurate forecasts of real GDP.
Second, I present data on the ex ante forecast accuracy
of this model. The model (which I shall refer to as the
Monthly Indicators model or MI model below) has been
used to predict real GNP at the Federal Reserve Bank of
San Francisco for about four years now. The model's (real
time) forecasts over this period have been more precise
than the Blue Chip consensus forecast (which is the
average of the forecasts of roughly fifty leading private
sector forecasters).
Last, I look at what the MI model contributes to the
accuracy of real GDP forecasts over a time horizon of one
to two years. I use a quarterly Bayesian vector autoregression (BVAR) model which forecasts GDP (plus some other
variables) to examine this issue. I present evidence which

1See Trehan (1989) for a discussion.


suggests that attempts to improve the MI model's forecast
of current quarter real GDP growth are unlikely to have
large payoffs in terms of forecasting real GDP growth over
longer horizons.
The rest of the paper is organized as follows. Section I
briefly reviews earlier work on the model and then discusses the process that was used to choose the variables to
predict real GDP. Sections II and III present tests of the
forecasting accuracy of the model. I present both results for
the variables used to predict real GDP (the indicator
variables) and the results of predicting real GDP itself.
Section III also contains the comparison with the Blue
Chip forecasts. Section IV takes up the issue of what the
BVAR contributes to real GDP forecasts beyond the one-quarter horizon, mainly to determine the likely benefits of
making the current quarter forecast more precise. Section
V concludes.
I. CHOOSING THE INDICATOR VARIABLES

The Original Model
When the model was first specified, three criteria were
employed to choose variables that would be used to predict
real GNP. The same criteria will be used this time as well.
For a variable to be included in our model, the first (and
most important) test it must pass is purely statistical:
variables will be ranked on the basis of their usefulness in
predicting real GDP. Second, in order to limit the costs of
collecting and processing data, I also impose the requirement that only a relatively small number of variables be
used to predict real GDP. This rules out methods that
attempt to predict each (or most) component(s) of GDP in
the National Income and Product Accounts (NIPA). Finally, since I am interested in obtaining current quarter real
GDP forecasts as early as possible, I impose the requirement that the monthly variables that are to be included be
available relatively early.
Based largely on considerations of timeliness, a set of
more than a dozen variables was chosen for statistical
analysis. These included different measures of interest
rates, sales, labor inputs, and so on.2 I found that reasonable forecasts could be obtained on the basis of three
variables: nonagricultural employment, industrial production, and retail sales deflated by the producer price index.

The estimated equation was

RGNP_t = 0.8 + 0.17 IP_t + 0.14 RSALS_t + 1.13 EMP_t
        (2.2)  (2.·)       (·.4)          (·.0)
      - 0.21 RGNP_{t-1} - 0.09 RGNP_{t-2} - 0.26 RGNP_{t-3},
        (3.0)             (1.4)             (4.0)

adjusted R² = 0.74, SEE = 2.17
where
RGNP = real GNP
IP = industrial production
RSALS = real retail sales
EMP = nonfarm payroll employment,
(all variables are included as annualized growth rates)
The estimation period was 1968.Q2 to 1988.Q2. The
absolute values of the t-statistics are shown in parentheses.
Even though purely statistical criteria were employed to
select indicator variables, the final selection consists of
three of the four key variables that the NBER's Business
Cycle Dating Committee used to date the beginning of the
current recession.3 Further, employment and industrial
production are two of the four series included in the
Commerce Department's Index of Coincident Indicators.
That index also includes real personal income and real
manufacturing and trade sales.4 Note that the real retail
sales variable included in the MI model is similar to the
latter variable and has the advantage of being available
roughly one month earlier. This similarity to the coincident
indicator index suggests that the model should do reasonably well at turning points. (I will return to this issue
below.)
Before going further it is also worth noting that data on
the variables included in the model become available
relatively quickly. Specifically, data for any month are
available by the middle of the following month. For example, data for January are available by mid-February.

Updating the Model
An important reason for updating the model has to do
with the benchmark revision of the National Income and
Product Accounts (NIPA) that was released in early December 1991. Two of the numerous changes introduced as
part of that revision are particularly relevant for the purpose of forecasting GDP. First, the Bureau of Economic
Analysis announced that it was shifting from the gross

2In addition to the three variables included in the model, the list of
variables I looked at contains manufacturing shipments and inventories,
housing starts, automobile sales, retail sales net of autos, total labor
hours, average weekly hours, manufacturing hours, and short-term and
long-term interest rates. For a detailed description of the variable
selection strategy, see Trehan (1989).


3The fourth variable used by the NBER is real income. See Hall (1991-92) for a discussion of how the NBER dates cycles.
4See U.S. Department of Commerce (1984) for a discussion.


national product (GNP) to the gross domestic product
(GDP) as the primary measure of production. (GNP includes net receipts of factor income from the rest of the
world while GDP excludes it.) Second, the base period of
the NIPA was shifted from 1982 to 1987. We need to
determine whether the MI model has to be respecified
because of these changes.
I did make one small change to the original specification
before carrying out this analysis. The first time around, the
producer price index was used to deflate retail sales instead
of the more obvious consumer price index (CPI), because
producer prices typically became available more than one
week earlier than consumer prices. However, the gap
between the release dates of the two series has narrowed
over time, and thus it is now possible to employ the CPI to
deflate retail sales and produce the forecasts at around the
same time as when the PPI was used.
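For concreteness, the deflation step described here is simple to write down. The sketch below is illustrative only; the use of pandas, the function names, and the series inputs are assumptions rather than part of the original model. It deflates nominal retail sales by the CPI and converts the result to annualized monthly growth rates of the kind used throughout this paper.

import numpy as np
import pandas as pd

def annualized_growth(series: pd.Series, periods_per_year: int = 12) -> pd.Series:
    # Annualized log-difference growth rate, in percent.
    return np.log(series).diff() * periods_per_year * 100.0

def real_retail_sales_growth(nominal_retail_sales: pd.Series, cpi: pd.Series) -> pd.Series:
    # Deflate nominal retail sales by the consumer price index, then compute growth.
    real_sales = nominal_retail_sales / cpi
    return annualized_growth(real_sales)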
The search for the best specification was carried out in
two parts. Starting with a set of 16 variables, I first isolated
variables that were useful in explaining within-sample
changes in real GDP. Several alternative statistical criteria
were used to help determine the best set of variables.5 At
the end of this procedure I ended up with a set of variables
that included the three variables in the original monthly
indicators model as well as average weekly hours worked
and the 10-year Treasury bond rate. In the second part of
this procedure the "out-of-sample" forecasts obtained
from this set of variables were compared to the out-of-sample forecasts obtained from the set of variables originally included in the MI model. (This procedure involves
estimating the real GDP equation up to a given quarter and
using the indicator variables to predict real GDP the
following quarter. I used a sample of more than 40 forecasts to carry out this comparison.) It turns out that this
larger set of variables does not provide forecasts that are
noticeably different from the three variables originally
included in the equation. Consequently, I decided not to
alter the specification of the original monthly indicators
model.
Thus, nonfarm payroll employment, industrial production and real retail sales (which is obtained by deflating
nominal retail sales by the consumer price index) turn out
to provide reasonably good forecasts of real GDP as well as
of real GNP.
5These included using the "general-to-specific" strategy recommended
by David Hendry (see Hendry and Mizon 1978, for example) as well as
the "Final Prediction Error" criterion (see Judge, et al. 1985 for a
description) to determine which variables and lag lengths were to be
included. Some judgment was also involved; for instance, a variable for
which a mechanical procedure included the second lag but not the
contemporaneous term was dropped.


The estimated equation is

RGDP_t = 1.1 + 0.20 IP_t + 0.16 RSALS_t + 0.96 EMP_t
        (3.9)  (4.1)       (5.0)          (5.5)
      - 0.20 RGDP_{t-1} - 0.10 RGDP_{t-2} - 0.26 RGDP_{t-3},
        (3.4)             (1.8)             (4.7)

adjusted R² = 0.79, SEE = 1.80,

where
RGDP = real GDP
IP = industrial production
RSALS = real retail sales
EMP = nonfarm payroll employment,
(all variables are included as annualized growth rates)
The absolute values of the t-statistics are shown in parentheses. The equation has been estimated over the period
1968.Q2 to 1991.Q2; as before, the starting date is determined by the availability of the retail sales data. The
Lagrange multiplier test for first order serial correlation
leads to a test statistic of 0.5, which has a marginal
significance level of 50 percent. Thus, it appears that the
inclusion of the lagged real GDP terms is sufficient to
eliminate serial correlation.
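As a rough illustration of how an equation of this form can be estimated and used, the sketch below regresses quarterly annualized real GDP growth on the three contemporaneous indicators and three of its own lags. The DataFrame and column names are hypothetical and this is not the Bank's code; it is a minimal sketch of the specification written out above.

import pandas as pd
import statsmodels.api as sm

def estimate_mi_equation(df: pd.DataFrame):
    # df holds quarterly annualized growth rates in columns 'RGDP', 'IP', 'RSALS', 'EMP'.
    X = pd.DataFrame({
        'IP': df['IP'],                 # industrial production growth, current quarter
        'RSALS': df['RSALS'],           # real retail sales growth, current quarter
        'EMP': df['EMP'],               # payroll employment growth, current quarter
        'RGDP_1': df['RGDP'].shift(1),  # three lags of real GDP growth
        'RGDP_2': df['RGDP'].shift(2),
        'RGDP_3': df['RGDP'].shift(3),
    })
    X = sm.add_constant(X)
    sample = pd.concat([df['RGDP'], X], axis=1).dropna()
    return sm.OLS(sample['RGDP'], sample.drop(columns='RGDP')).fit()

A current-quarter forecast is then the fitted equation evaluated at the quarter's indicator values (actual or BVAR-forecast) together with the three most recent quarters of GDP growth.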
It is worth noting that the new equation is not very
different from the original one, despite definitional
changes in the dependent variable (specifically, the use of
real GDP instead of real GNP, as well as the change in the
base year) and in one of the explanatory variables (specifically, the use of the CPI to deflate retail sales instead of
the PPI).
It also is tempting to speculate about why the lagged real
GDP terms are significant in the estimated equation. One
reason that comes to mind is the role played by inventories.
This conjecture can be verified by subtracting changes in
inventories from real GDP and re-estimating the above
equation in terms of final sales:

RFSAL_t = 1.2 + 0.03 IP_t + 0.30 RSALS_t + 0.55 EMP_t
         (4.7)  (0.6)        (10.9)         (3.5)
       - 0.12 RFSAL_{t-1} - 0.06 RFSAL_{t-2} - 0.01 RFSAL_{t-3},
         (1.7)              (1.0)              (0.1)

adjusted R² = 0.76, SEE = 1.52,

where
RFSAL = real final sales.
Note that lags of the dependent variable are significantly
less important than in the GDP equation; in fact, the
F(3,87) statistic for the null hypothesis that the three
lagged RFSAL terms are zero is 1.2, so that the null


hypothesis cannot be rejected at any reasonable significance level. This suggests that the lagged RGDP terms in the RGDP equation are capturing the effects of inventory adjustments.
These results suggest that including inventory data may help to make the forecast more precise. Unfortunately, inventory data are released too late to be useful in this forecast. The lag for data on nominal magnitudes is about two months, while the lag for data on the appropriate deflators is even longer.

II. PREDICTING THE INDICATOR VARIABLES

Since the forecaster (or policymaker) is likely to be interested in obtaining real GDP forecasts even before three months of information on the indicator variables becomes available, it is necessary to have a method for predicting the monthly values of the indicator variables themselves. I estimate a Bayesian vector autoregression (BVAR) to obtain these forecasts. A vector autoregression (VAR) involves regressing each of a set of variables on lagged values of all variables in the system. Estimating a BVAR implies imposing priors so that the resulting coefficients are a mixture of the coefficients that would be obtained from an unrestricted VAR and the forecaster's prior beliefs.
The prior employed here has been termed the "Minnesota prior"; it imposes the belief that most economic time series behave like random walks with drift. For each variable the coefficient on its own first lag is pushed towards one, while the coefficients on all other right-hand-side variables are pushed towards zero. How much should the estimated coefficients be pushed towards this prior? Answering this question involves estimating different versions that vary in how tightly the prior is imposed. The forecasting performance of these different versions is then compared, and the specification that leads to the best forecasts is chosen.6
Searching for the best specification to forecast the indicator variables led to a BVAR with five variables: the three indicator variables themselves, plus the interest rate on six-month commercial paper and the average weekly hours of production workers on private, nonagricultural payrolls. Each equation contains 12 lags of each of the variables plus a constant. Since interpreting this many coefficients would be a difficult task, the estimated equations are not presented here. Instead, Table 1 shows cumulative errors from the BVAR over horizons from one to three months; the errors are measured as annualized growth rates.


6See Todd (1984) for a discussion of Bayesian vector autoregressions.

Table 1
Predicting the Indicator Variables: January 1981-June 1991

                                Mean      Mean Absolute    Root Mean      Theil's
Months Ahead                    Error         Error        Square Error   U-Statistic(a)

Nonfarm Payroll Employment
  1                             -.06          1.45             2.19           .73
  2                             -.08          1.15             1.54           .91
  3                             -.10          1.12             1.43          1.03

Industrial Production
  1                              .79          6.04             8.42           .78
  2                              .67          4.75             6.17           .82
  3                              .55          3.10             4.05           .78

Real Retail Sales
  1                              .83         12.54            17.74           .57
  2                             -.10          7.41             9.80           .58
  3                             -.23          5.30             7.04           .61

Note: Growth rates are annualized. The errors shown here are cumulative. For instance, the mean error three months ahead is the error in predicting the annualized growth rate between today and three months into the future.
a. This is the ratio of the RMSE of the model forecast to the RMSE of the naive forecast of no change in growth rates.
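The accuracy measures reported in Tables 1 through 3 can be computed in a few lines. The sketch below is a minimal illustration; the array inputs and the sign convention for the mean error (forecast minus actual) are assumptions.

import numpy as np

def forecast_error_stats(actual, forecast, naive):
    # ME, MAE, RMSE, and Theil's U: the ratio of the model's RMSE to the RMSE
    # of the naive forecast of no change in growth rates.
    actual, forecast, naive = (np.asarray(x, dtype=float) for x in (actual, forecast, naive))
    err = forecast - actual
    rmse = np.sqrt(np.mean(err ** 2))
    rmse_naive = np.sqrt(np.mean((naive - actual) ** 2))
    return {'ME': err.mean(), 'MAE': np.abs(err).mean(), 'RMSE': rmse, 'Theil U': rmse / rmse_naive}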


The sample period covers slightly more than ten years, extending from January 1981 to June 1991, a total of 126 forecasts. For each forecast, the BVAR is estimated up to the prior month and then used to forecast the next three months. For example, for the first forecast the model is estimated through December 1980 and is used to generate forecasts over the January-March period. Next time around the model is estimated through January 1981 and forecasts are generated over the February-April period. Four different measures of forecast accuracy are presented in Table 1: the mean error (ME), the mean absolute error (MAE), the root mean square error (RMSE), and Theil's U-statistic (which compares the RMSE of the model forecast with the RMSE of the naive forecast of no change).
Note that the errors get smaller as the forecast horizon lengthens, a result consistent with the presence of substantial negative serial correlation in the monthly errors. As may be expected, the differences in the size of the errors reflect differences in volatility among the variables; for instance, the standard deviation of the month-to-month growth rates (over the 1981.M1-1991.M6 period) of the employment variable is 2.8 percent, that of industrial production is more than three times as much, and that of real retail sales is roughly seven times as much. The Theil statistics show that the model outperforms the naive forecast by a greater margin when predicting real retail sales than when predicting either industrial production or nonfarm payroll employment.
The errors from the BVAR are smaller than those obtained from univariate autoregressive equations for the same variables, although the differences are not large. Averaging across the three variables, the errors from univariate AR equations are roughly 5 percent larger than those from the BVAR at the one-month horizon and roughly 10 percent larger at the three-month horizon.
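The following is a stylized sketch of how a Minnesota-type prior can be imposed equation by equation. It is not the Bank's BVAR code, and it simplifies the prior (a single tightness parameter rather than variances that tighten with lag length and scale with residual variances), but it shows the mechanics described above: each coefficient vector is pulled toward a random walk, with the own first lag pushed toward one and everything else toward zero.

import numpy as np

def make_lags(data: np.ndarray, p: int):
    # Build (Y, X): Y drops the first p observations, X stacks a constant and p lags
    # of every variable, aligned with Y.
    T, _ = data.shape
    X = np.hstack([data[p - j - 1:T - j - 1, :] for j in range(p)])
    X = np.hstack([np.ones((T - p, 1)), X])
    return data[p:], X

def minnesota_posterior(data: np.ndarray, p: int = 12, tightness: float = 0.2):
    # Ridge-style posterior mean of each equation under a random-walk ("Minnesota") prior.
    Y, X = make_lags(data, p)
    n = data.shape[1]
    k = X.shape[1]
    coefs = np.zeros((n, k))
    for eq in range(n):
        prior_mean = np.zeros(k)
        prior_mean[1 + eq] = 1.0                 # own first lag shrunk toward one
        prior_prec = np.eye(k) / tightness ** 2  # other coefficients shrunk toward zero
        prior_prec[0, 0] = 0.0                   # leave the constant essentially unrestricted
        coefs[eq] = np.linalg.solve(X.T @ X + prior_prec,
                                    X.T @ Y[:, eq] + prior_prec @ prior_mean)
    return coefs

Choosing the tightness parameter by comparing out-of-sample forecast errors across several candidate values mirrors the specification search described in the text.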


III. PREDICTING REAL GDP

Error statistics for the real GDP forecast are shown in Table 2. The full sample period runs from 1981.Q1 to 1991.Q2, a total of 42 forecasts.

Table 2
Real GDP Forecast Errors from Monthly Indicators Model

Month of                        Mean      Mean Absolute    Root Mean      Theil's
Forecast(a)                     Error         Error        Square Error   U-Statistic(b)

Full Sample 1981.Q1-1991.Q2 (42 forecasts)
  1                              .25          2.08             2.60           .78
  2                              .27          1.56             1.92           .58
  3                              .26          1.13             1.59           .48
  4                              .26          1.11             1.54           .46

Subsample 1981.Q1-1986.Q1 (21 forecasts)
  1                              .52          2.39             2.86           .70
  2                              .60          1.77             2.17           .53
  3                              .70          1.45             1.88           .46
  4                              .75          1.44             1.89           .46

Subsample 1986.Q2-1991.Q2 (21 forecasts)
  1                             -.02          1.77             2.32          1.02
  2                             -.06          1.34             1.63           .71
  3                             -.18          0.81             1.22           .53
  4                             -.23          0.78             1.06           .47

Note: Growth rates are annualized.
a. These dates refer to the month of the quarter in which the forecast becomes available. The fourth month is the month after the quarter ends. Each forecast is based on complete data for the previous month.
b. This is the ratio of the RMSE of the model forecast to the RMSE of the naive forecast of no change in growth rates.


In addition, I also show the results for the two halves of the sample period, that is, for the subperiods 1981.Q1-1986.Q1 and 1986.Q2-1991.Q2. For each forecast, the GDP equation is estimated up to the previous quarter, and the resulting coefficients are used, together with the current quarter values of the indicator variables, to predict real GDP growth in that quarter.
Four different exercises were performed for each sample
period to duplicate the amount of information available
over the course of the quarter. The first one tests the
forecasting capabilities of the model during the first month
of each quarter, when no information is available on the indicator variables. In this case, the BVAR forecasts the values of the indicator variables for all three months of the
quarter, and these values are used in the GDP equation to forecast GDP growth. The second assumes that we are in the second month of the quarter, when we have one month of data on the indicator variables, and the BVAR is used to forecast the values of the indicator variables for the remaining two months of the quarter. Similarly, the third set of GDP forecasts is based on two months of data for the quarter, and the BVAR is used to forecast the values of the indicator variables in the third month of the quarter. Finally, the fourth set is based on all three months of actual data for the indicator variables, so that no BVAR forecast is required to predict GDP growth.
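A schematic sketch of these four scenarios is given below. The function and argument names are made up for illustration: with m months of actual data in hand, the BVAR supplies the remaining 3 - m months, and the three monthly values for each indicator are then aggregated to the quarter before being fed into the estimated GDP equation.

import numpy as np

def current_quarter_indicators(actual_months, bvar_forecast_fn):
    # actual_months: list (possibly empty) of length-3 arrays, one per observed month,
    #                holding the growth rates of the three indicators.
    # bvar_forecast_fn(h): BVAR forecasts for the next h months, one array per month.
    m = len(actual_months)
    forecast_months = list(bvar_forecast_fn(3 - m)) if m < 3 else []
    months = np.vstack(list(actual_months) + forecast_months)
    # Aggregate the three monthly values to the quarter (averaging the annualized
    # monthly growth rates is used here as a simple approximation) and pass the
    # result to the estimated GDP equation.
    return months.mean(axis=0)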
Table 2 reveals that the monthly indicators model does not do a very good job of predicting real GDP growth when it has no information about the current quarter. Indeed, for the 1986.Q2-1991.Q2 subsample, the RMSE of the monthly indicators model is slightly larger than the RMSE of the forecast that is based on the simple rule that the rate of real GDP growth this quarter will be the same as it was last quarter (which is why the computed U-statistic is slightly greater than 1).
The model's forecasts become noticeably more precise in the second month of the quarter, that is, once information about the first month of the quarter becomes available. For the full sample, both the MAE and the RMSE fall by around 25 percent. The arrival of the second month of information leads to some further improvement in the forecast.7
In comparing the two subsamples, note that while the
RMSEs of the first half (that is, the 1981.Q1-1986.Q1
period) are larger than those for the second half (the

7In the original version of the model the arrival of the second month of information did not lead to a reduction in the model's forecast errors. This is reflected in the results of real-time forecasting shown in Table 3. The reasons behind this change are not obvious, although experimentation suggests that the change in results has to do with the change in base years and not the change from GNP to GDP.


1986.Q2-1991.Q2 period), the reverse is true for the U-statistics. This finding suggests that real GDP was more volatile in the first subsample than in the second. Indeed, this conjecture is confirmed by the data, which show that the standard deviation of quarterly real GDP growth fell by more than 50 percent, from 4.1 percent over the 1981.Q1-1986.Q1 period to 1.9 percent over the 1986.Q2-1991.Q2 period.

Real-time versus Final Data
It is possible that the results presented in Table 2
exaggerate the precision of the BVAR forecast, since they
are based upon better data than would be available for use
in forecasts made in real time. While it is not possible to
overcome this problem completely, some information on
the model's performance can be obtained from the real time
forecasts of real GNP that have been made over the past
four years. Specifically, we have compiled data on the
original model's forecasts since the model began forecasting in 1987.Q3, which gives us a total of 16 forecasts to
analyze.
The results of this analysis are shown in Table 3. To
provide some sense of the model's relative performance,
the table also includes data on the forecasting performance
of the consensus real GNP forecast from the Blue Chip
Survey. These data are taken from a newsletter titled Blue
Chip Economic Indicators published by Capitol Publications. This well-known consensus forecast is the average of
the individual forecasts of about 50 major forecasters in the
private sector.
It needs to be pointed out that it is difficult to line up the
two forecasts so that the two are based upon the same
amount of information. The Blue Chip forecasts have been
dated on the basis of the month in which they are released.
For instance, the second quarter Blue Chip forecast released on June 10 is compared to the model forecast available on June 15. Thus, the Blue Chip forecast will be
based on less information than the MI forecast. Further,
while the official release date of the Blue Chip survey is the
10th of the month, the survey itself is conducted over the
first week of the month. Of the three indicator variables
used in the real GDP equation, the only variable likely to be available at that time is payroll employment.8
One way to overcome this problem is to compare the
model forecast in a given row with the Blue Chip forecast
in the following row. Note that such a comparison will tend
to overcompensate in those months when employment data
for the previous month are released before the survey is

8Forecasters will also know interest rates and labor hours.


Table 3
Comparison of Real GNP Forecast Errors, 1987.Q3-1991.Q2

                        Monthly Indicators Model Forecasts                Blue Chip Forecasts
Month of            Mean    Mean Abs.  Root Mean    Theil's      Mean    Mean Abs.  Root Mean    Theil's
Forecast(a)         Error     Error    Sq. Error   U-Statistic   Error     Error    Sq. Error   U-Statistic

Using Real-Time GNP
  2                 0.11      0.71       0.90         0.60       0.42      1.13       1.49         0.99
  3                 0.21      0.86       1.09         0.72       0.36      0.99       1.30         0.86
  4                 0.14      0.79       1.02         0.68       0.34      0.90       1.14         0.76

Using Revised GNP
  2                 0.14      1.01       1.34         0.86       0.46      1.37       1.99         1.28
  3                 0.24      1.06       1.48         0.95       0.39      1.26       1.83         1.17
  4                 0.18      1.05       1.38         0.88       0.38      1.19       1.67         1.07

Note: Growth rates are annualized.
a. These dates refer to the month of the quarter in which the forecast becomes available. The fourth month is the month after the quarter ends. This dating convention implies that the model forecast may be based on as much as one month of additional information compared to the Blue Chip forecast. See text for details.

conducted. (Employment data for a particular month are
usually released on the first Friday of the following month.)
The top half of the table compares both sets of forecasts
with "early" GNP data. These early GNP data have been
obtained from the Commerce Department's Survey of
Current Business four months after the end of the quarter.
The idea is to reproduce, as closely as possible, the GNP
data as it existed when the forecasts were made. The results
for the monthly indicators model show that the MAEs
average around 0.8 percent, regardless of whether we have
one, two, or three months of data on hand. Similarly, the
RMSEs are around 1.0 percent.
The results for the Blue Chip consensus forecast show
that the MAE varies around 1 percent depending upon the
amount of information available, while the RMSE falls
from around 1.5 percent for the forecast made in month 2 of the quarter to approximately 1.1 percent for the forecast made in the month after the quarter has ended. While these errors are not that much larger than those of the monthly
indicators model, it is worth pointing out that the MI
forecasts made in the second month of the quarter (that is,
forecasts that are based on one month of information) are
more accurate than the Blue Chip consensus forecast made
after the quarter has ended (month 4). The Theil statistics
show that both sets of forecasts do better than the naive
forecast of no change in growth rates.
The second half of the table compares the two forecasts

Federal Reserve Bank of San Francisco

to revised real GNP data. Specifically, the two forecasts are
compared to real GNP data as of the fourth quarter of 1991.
Note that this increases the forecast errors of both models;
the deterioration is especially noticeable in the case of the
Blue Chip forecast since it does worse than the simple
prediction that real GNP growth this quarter will be the
same as it was last quarter.
Chart 1 plots the MI and Blue Chip forecasts as well as
early GNP data over this period. The top panel of the chart
shows forecasts based on one month of information, while
the lower panel shows forecasts based on three months of
information. Note that the MI forecast tracks the recession
quite well, a result that is not surprising since the forecasts
are based on information about the current quarter. Recall
also that the set of indicator variables is close to the set of
variables included in the Index of Coincident Indicators.
Finally, as the results in Table 3 would suggest, while the
MI forecasts are more accurate on average than the Blue
Chip forecasts, this is not always the case.
Before going further it needs to be pointed out that the
Blue Chip consensus forecast has been used only as a
benchmark (since it is widely available), and not because it
is taken to be the most accurate forecast of real activity in
the current quarter. In fact, it is not unreasonable to believe
that the forecasters included in the panel were trying to
minimize their forecast errors over a time span of a year or
so instead of a quarter. In that context, it is useful to ask


Chart 1
Real Time Forecasts of Real GNP Growth
A. Based on One Month of Information (percent; Monthly Indicators and Blue Chip forecasts plotted against early GNP data, 1987-1991)
B. Based on Three Months of Information (percent; Monthly Indicators and Blue Chip forecasts plotted against early GNP data, 1987-1991)
Note: Real GNP data have been taken from the Commerce Department's Survey of Current Business four months following the end of each quarter.

what the monthly indicators model contributes to the
accuracy of real GDP forecasts beyond the current quarter.
We examine this question in the next section.

IV. EVALUATING THE USEFULNESS OF THE CURRENT QUARTER GDP FORECAST

Usually the forecaster (or policymaker) is interested not
just in the forecast of real GDP growth this quarter, but in
growth over some longer time period, such as a year or two.
It is, therefore, natural to ask what the monthly indicator
model's forecast contributes to predicting real GDP over
somewhat longer horizons. Perhaps a more important issue
for the project at hand concerns the payoff to making the
MI forecast more precise. As discussed above, the MI
model is a simple one; adding greater detail could improve
its accuracy somewhat, especially late in the quarter when
more information becomes available. However, greater


detail also implies greater cost. Thus, we need to compare
the benefits to greater accuracy with the costs of putting
together and maintaining a more detailed model.
In the present context (where we are interested in
looking at contributions to forecast accuracy over horizons
of one to two years), a measure of the benefits can be
obtained by examining how the accuracy of real GDP
forecasts over one to two years is affected as we increase
the accuracy of the current quarter forecast. Here I will
make an extreme assumption about how much more accurate the current quarter forecast can be: I will assume that
real GDP this quarter is known with certainty.
Forecasts over a two-year horizon will be generated
using a BVAR model that is similar to one used for
forecasting at the Federal Reserve Bank of San Francisco.
This model is estimated on quarterly data (and I will refer
to it as the quarterly BVAR). It contains a total of ten variables, including real GDP, consumption, unemployment,
the dollar, a measure of money, measures of short- and long-term interest rates, and inflation.
Chart 2 plots the percentage reduction in the RMSE of
the GDP forecast from the quarterly BVAR when the MI
forecast for the first period is included or when the actual
value of GDP for the first quarter is included.9 I show forecasts for an eight-quarter horizon over the 1981.Q1-1991.Q2 period. The errors are cumulative; that is, the
RMSE of the four-quarter ahead forecast measures the
errors in predicting the level of real GDP four quarters in
the future.
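The quantity plotted in Chart 2 is straightforward to compute; a minimal sketch (the array inputs are assumed) is:

import numpy as np

def pct_rmse_reduction(errors_without, errors_with):
    # Percentage reduction in RMSE at a given horizon when the extra information
    # (the MI forecast, or the actual first-quarter value) is added to the quarterly BVAR.
    rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
    return 100.0 * (1.0 - rmse(errors_with) / rmse(errors_without))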
Including the MI forecast reduces the RMSE of the one-quarter ahead forecast by about 50 percent and the two-quarter ahead forecast by 35 percent (compared to the case
when the MI forecast is not included). The degree of
improvement becomes smaller as the forecasting horizon
lengthens, falling to less than 15 percent after four quarters
and to less than 5 percent in the seventh and eighth
quarters.
The degree of improvement we obtain is, of course,
dependent upon the model that is being used to forecast
real output over the next two years. However, the question
of whether the returns to making the MI forecast more
precise are worth the effort can be answered in a way that is
less model-dependent. We begin by looking at how much
the forecast from the quarterly BVAR can be improved when next quarter's real GDP is assumed to be known, that is, assuming perfect information.

9The first quarter here is actually the quarter for which we already have
data for the indicator variables. This was termed the contemporaneous
quarter in Sections I-III. The change in terminology is necessitated by
the introduction of the quarterly BVAR, which contains no contemporaneous information. Note also that the MI forecasts used here are based
on three months of information.


Chart 2
Percentage Reduction in the RMSE of the GDP Forecast
(Horizontal axis: quarters ahead, 1 through 8. One curve shows the reduction when the first quarter's real GDP is known; the other shows the reduction when the MI forecast of the first quarter is included.)

Perfect information implies that the first quarter value of this number is 100 by assumption. More interestingly, knowledge of the first quarter's real GDP reduces the RMSE of the two-quarter ahead forecast by 40 percent and the RMSE of the four-quarter ahead forecast by about 15 percent.
As before, the precise effects of including information about next quarter on the one-year ahead forecast are likely to depend upon the model that is being used, since models differ in their ability to process information about the next (or any other) quarter's GDP. Nevertheless, it is possible to compare the marginal benefit of moving from the no information case to the case where the MI forecast is known to the marginal benefit of moving from knowledge of the MI forecast to knowledge of next quarter's real GDP. (Recall that this is a theoretical upper bound to further improvements in the MI forecast.) Chart 2 provides a simple way of making the comparison. At each point in time, the marginal benefit of moving from the no information case to the case where MI is known is given by the vertical distance between the horizontal axis and the MI line; the marginal benefit of moving from knowledge of MI to perfect information is measured by the vertical distance between the two curves. The greater the difference between the two curves relative to the height of the MI curve, the greater the advantage to improving upon the MI forecast.
The chart indicates that at a two-quarter horizon, the relative improvement in going from the no information case to including the MI model forecast is substantially greater than the relative improvement in going from the MI model to the perfect information case. This continues to be the case at all forecast horizons; in fact, the difference between the two curves is essentially zero from the fourth quarter on. Of course, both curves are close to zero towards the end of the forecast horizon and the difference between them at that point is not very significant.
This exercise suggests that further attempts at improving the current quarter forecast of real GDP are not likely to have substantial rewards in terms of improving our ability to forecast real GDP over somewhat longer horizons. In other words, if the objective is to forecast real GDP beyond the first two quarters, then the simple MI model reaps a large proportion of the gains that would accrue in going from the case of no information about the first quarter's real GDP to the case where the first quarter's real GDP is known with certainty, and does so at relatively little cost.

V. SUMMARY AND CONCLUSIONS

This paper has reviewed a simple method of predicting
real GDP. This method requires relatively few resources;
the forecasts are cheap to produce and update. The evidence presented above demonstrates that these forecasts
compare well to those obtained from major private sector
forecasters.
It is possible that the forecast of current quarter real
GDP growth could be made more precise by devoting
additional resources to the task. However, the evidence
presented above also suggests that, if the objective is to
forecast real GDP beyond the current quarter, then such an
endeavor is likely to lead to relatively limited returns.

REFERENCES

Hall, Robert E. 1991-1992. "The Business Cycle Dating Process." NBER Reporter (Winter), pp. 2-3.
Hendry, D.F., and G.E. Mizon. 1978. "Serial Correlation as a Convenient Simplification, Not a Nuisance: A Comment on a Study by the Bank of England." Economic Journal, pp. 549-563.
Judge, George G., W.E. Griffiths, R. Carter Hill, Helmut Lutkepohl, and Tsoung-Chao Lee. 1985. The Theory and Practice of Econometrics. New York: John Wiley & Sons.
Todd, Richard M. 1984. "Improving Economic Forecasting with Bayesian Vector Autoregressions." Federal Reserve Bank of Minneapolis Quarterly Review (Fall).
Trehan, Bharat. 1989. "Forecasting Growth in Current Quarter Real GNP." Federal Reserve Bank of San Francisco Economic Review (Winter), pp. 39-52.
U.S. Department of Commerce. 1984. Handbook of Cyclical Indicators.


Does California Drive the West? An Econometric
Investigation of Regional Spillovers

Brian A. Cromwell
Economist, Federal Reserve Bank of San Francisco. The
author thanks Randall Eberts, Frederick Furlong, Michael
Hutchison, Philip Israilevich, Sun Bae Kim, Dwight
Jaffee, Elizabeth Laderman, Randall Pozdena, Carolyn
Sherwood-Call, Ronald Schmidt, Lori Taylor, and Bharat
Trehan for useful comments and suggestions. Karen Trenholme provided excellent research assistance.

This paper measures linkages between the California
economy and its neighbors, and the extent to which
economic shocks to California spill over to its neighbor
states, through vector autoregression techniques. Leading
and lagging relationships between California and other
western states are identified through Granger causality
tests. Then, under certain identifying assumptions, the
economic importance of these relationships is measured.
Finally, the sources of the linkages are considered by
examining the effect of California on specific sectors
within a state. In general, the results suggest that the
California economy does have important spillover effects
on other western states, particularly those in close geographic proximity to it.

Federal Reserve Bank of San Francisco

In terms of population, output, and diversity, California
dwarfs its neighbors in the Twelfth Federal Reserve District
-which includes Alaska, Arizona, California, Hawaii,
Idaho, Nevada, Oregon, Utah, and Washington. In July
1990, the 12.9 million jobs in California accounted for
almost two-thirds (63 percent) of total employment in the
District. For comparison, it had five times as much employment as the next largest District state, Washington, which
had 2.2 million jobs.
This paper examines the extent to which the California
economy drives the western region. In particular, it attempts to measure linkages between the California economy and its neighbors, and the extent to which economic
shocks to California spill over to its neighbor states.
The topic is relevant to the most recent recession, which
hit California and the nation in mid-1990. Most District
states, however, were not affected until much later, with
employment declines becoming evident only in early 1991.
To the extent that systematic spillovers from California
occur with a lag of two to three quarters, this pattern of
regional recession would not be surprising. Accounting for
these spillovers would yield better forecasts of economic
developments in western states.
A more general motivation is that information on linkages and spillovers between states adds to the understanding of how regions operate and when regional analysis is
appropriate. A model of regional linkages due to trade
flows, for example, results in different predictions from a
model of linkages due to factor flows. Positive shocks that
increase economic activity in one state may stimulate trade with other states, inducing positive spillovers. If the increased economic activity induces labor to migrate, however, a negative effect on neighbor states might result.
Furthermore, if regional economies are relatively open and
driven by national shocks, a broad macroeconomic perspective might be appropriate for monetary or fiscal policy
analysis. If regional economies are closed to spillovers
from the nation or other states, however, a region-by-region
approach to policy analysis might be called for. Finally, if
particular sectors (such as housing or finance) are shown to
be more closed than others, policies targeted toward those


sectors can be implemented on a regional rather than
national basis.
This paper measures linkages through vector autoregression (VAR) techniques. Employment growth rates (used as a proxy for growth in economic activity) in Twelfth District states are estimated as a function of lagged growth in own employment, lagged growth in California employment, and lagged growth in national employment. The goal is to explore the extent to which economic fluctuations in a state are driven by the state's own economy or by linkages to California or national markets.
Leading and lagging relationships between California and other western states are identified through Granger causality tests. A standard decomposition of the forecast error variance then measures the economic importance of these relationships. The sources of the linkages are then explored through examining the effect of California on specific sectors within a state.
In general, the results suggest that the California economy has important spillover effects on its neighboring states in the Twelfth District, namely, Arizona, Nevada, Oregon, Utah, and Washington, but not on Alaska, Hawaii, and Idaho. In the reverse direction, Granger causality tests suggest that only Arizona has significant spillover effects on California.
The variance decomposition results indicate that the measured spillovers from California to its neighbors are relatively large and statistically significant through three quarters. The state with the largest measured linkages is Arizona, followed by Nevada, Oregon, Washington, and Utah.
The sectoral breakdowns suggest varied sources of linkages. Shocks to California affect manufacturing in Arizona, Oregon, and Utah, while the service sectors appear to respond in Arizona, Nevada, Oregon, and Utah.1 No spillovers are observed in finance. The observed spillovers in manufacturing are consistent with a model of linkages propagated through trade flows of manufactured products between firms, while spillovers in the service sector suggest that trade flows also occur in nonmanufacturing sectors, possibly tourism and recreation.
In sum, the results indicate that shocks to California influence its neighbor states, and suggest the magnitude of spillovers that can be expected given this historical relationship. The estimates should be interpreted with caution, however. In particular, the VAR modeling approach does not capture structural change or adequately measure factor flows. Moreover, it may not control adequately for shocks

1While Washington exhibits a significant overall linkage, no one sector
is significantly affected.


common to western states (perhaps due to common industries). The spillovers identified in this paper, however,
indicate that these problems merit further research.
This paper is organized as follows. Section I reviews the theory of linkages between regions and considers the strengths and weaknesses of using VARs to model them. Section II presents the basic results. Section III explores which sectors are most affected by spillovers. Section IV concludes and considers areas for future research.

I. MODELING REGIONAL LINKAGES WITH VARs

While linkages between states may arise for many reasons, this paper is concerned with measuring spillovers of economic shocks to the California economy to its neighbor states. As such, our focus is on linkages that are principally economic in nature: flows of goods (trade) and of factors of production.2 What then is the nature of the economic shocks, and how are they transmitted through these linkages?
Positive economic shocks to California could come from
the demand side (for example, due to jumps in national
demand for California products like computers, entertainment, aerospace), or from the supply side (for example,
from technological innovations that enhance productivity
or result in new products). Negative shocks, of course, also
have occurred and are of current concern. Falling national
demand for California defense products is reducing manufacturing activity. Recent natural
shocks include
the 1989 Loma Prieta earthquake, freezes, and drought.
Supply constraints induced by environmental problems,
inadequate infrastructure, or regulatory burdens also may
become binding.
Trade flows of goods and services between regions are an
obvious mechanism for transmission of economic shocks
from California to its neighbors. Increases in economic
activity in California heighten the demand for imports of
raw materials, intermediate inputs, and final products from
other states. Raw materials could include minerals, electricity, or water. Intermediate inputs could range from
lumber and wood products for housing, to electronic
components for defense and aerospace. Final products
could include the whole range of consumer goods. Economic growth in California also can affect the consumption
of services in other states, including entertainment (skiing
in Utah or casinos in Nevada).
2Linkages other than trade or factor flows also may exist. First, multiregional government institutions (such as Federal Reserve Districts) or
multiregional firms may exist. Second, information flows may give rise
to differential adoption rates of innovations across regions. Third,
physical flows of pollutants such as acid rain across regional boundaries
could occur.

Economic Review / 1992, Number 2

The transmission of shocks through trade should occur
relatively quickly, as California factories place orders for
goods, or as consumers plan vacations. If the shocks are
measured as changes in growth rates from trend, however,
they should be short-run in nature. A jump in demand from California would permanently raise the level of economic activity in a neighbor state, but the period of higher growth
would be of relatively short duration.
In general, if positive (negative) economic shocks to
California spill over to other states through trade flows, they should have a positive (negative) short-run effect on growth in the state that dampens down relatively quickly. Furthermore, since transportation costs increase with distance, I expect more trade to be conducted between California and states in close geographic proximity. As such, states contiguous to California should show larger spillover effects than those at a greater distance.
If the linkages between states are through factor flows as
well as trade, the expected spillover effects of shocks to
California become less clear. Positive shocks to California
that raise the demand for labor might attract workers from
other states, leading to a negative effect on economic activity as the population and labor force emigrate. Alternatively, a positive shock that raises demand for California products might lead firms to relocate production facilities to other states if supply constraints in infrastructure (or environment) become binding.
productivity also could lead firms to relocate. Much attention is currently being given to California firms relocating
production facilities to other western states due to regulatory burdens and other perceived costs of operating in
California.
If the predominant mechanism for regional linkages is
factor flows, then I have no clear prediction of how shocks
to California will affect neighbor states. Spillovers propagated through factor flows, however, will likely occur over
a longer time horizon than those propagated through trade
flows. (Relocating a firm takes longer.)
A further problem, however, is that spillovers involving
factor flows entail long-run changes in regional economies that will result in changed trade flows. The VAR model assumes that these relationships are fixed and cannot distinguish between long-run and short-run differences in the data. This limits our ability to distinguish
between trade and factor flows.
A final note is that trade and factor flows should be reciprocal. The relative size of the California economy compared with its neighbors, however, suggests that the neighbors' effect
on California growth will be smaller than California's
effects on its neighbors. Though theory predicts that a
relationship exists, in practice it may be difficult to pick up
a small effect in noisy data.

Federal Reserve Bank of San Francisco

The VAR Approach
This paper uses a VAR approach to model linkages
between states in the Twelfth District. The advantages of
this method include its parsimonious use of data, allowance for top-down effects from the nation to the region,
allowance for feedbacks (with a lag) from the region to the
nation, and identification of leading and lagging relationships between pairs of states. The drawbacks include the
lack of an explicit model of the mechanism of linkages and the need for untestable identifying
restrictions to measure the economic importance of
spillovers.
A vector autoregression is a relatively simple modeling
approach that has become widely used by economists to
gather evidence on business cycle dynamics. Typically, these models focus on a limited number of random variables at the national level, such as money, interest rates,
prices, and output. Each variable is expressed as a linear
function of past values of itself, past values of the other
variables, and nonrandom constant terms and time trends.
After estimating the model (equation by equation with
ordinary least squares), the results can be used to identify leading and lagging relationships between variables and,
with further identifying restrictions, to measure the economic importance of these dynamic relationships.
The identification of leading and lagging relationships is accomplished through causality tests. For example, if there are two time series m and y, the series y fails to Granger-cause m according to the Granger (1969) test if, in a regression of m on lagged m and lagged y, the latter (lagged y) takes on a zero coefficient. If y fails to Granger-cause m, then m is said to be exogenous with respect to y. Furthermore, if in addition m does Granger-cause y, m is said to be causally prior to y.3
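A minimal sketch of this test, in its bivariate form, can be run with the statsmodels implementation. The series names are placeholders; the six-lag choice follows the lag length used in the VARs later in the paper, and this is an illustration rather than the author's code.

import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def y_granger_causes_m(m: pd.Series, y: pd.Series, lags: int = 6):
    # F-test of the null that lagged y adds nothing to a regression of m on lagged m.
    data = pd.concat([m, y], axis=1).dropna()   # statsmodels tests column 2 causing column 1
    results = grangercausalitytests(data, maxlag=lags)
    return {lag: res[0]['ssr_ftest'][1] for lag, res in results.items()}   # p-value by lag length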
While statistical leading and lagging relationships can
be identified through Granger tests, measuring the economic importance of these relationships requires further
identifying restrictions. The standard approach developed
by Sims (1980a,b) uses the estimated VAR results to measure the dynamic interactions among variables in two different ways. First, from a moving average representation of a VAR model, each variable can be written as a function of the errors. A tabulation of the response of the ith variable to an innovation in the jth variable is called an impulse response function, which shows how one variable

3See Cooley and LeRoy (1985). In another approach presented by Sims (1972), y fails to Granger-cause m if in a regression of y on lagged y and

future m, the latter takes on a zero coefficient. Jacobs, Leamer, and
Ward (1979) show that the Granger and Sims tests are implications of
the same null hypothesis.


responds over time to a single surprise increase in itself or
another variable. Second, a forecast error variance decomposition (or innovation accounting) can be used to analyze
the errors the model would make if used to forecast. It
determines the proportion of each variable's forecast error
that is attributable to each of the orthogonalized innovations in the VAR model.
Identification of a VAR system is achieved by assuming
a recursive chain of causality among the surprises in any
given period. This identification (or ordering of equations), however, is justified only under a predeterminedness assumption. If y_t is predetermined with respect to m_t,
the conditional correlation between y_t and m_t is attributed to the contemporaneous effect of y_t on m_t; the contemporaneous effect of m_t on y_t is restricted to zero.
This assumption, however, is untestable in the absence of
prior restrictions derived from theory. In particular, since
Granger noncausality (which tests for the effect of lagged
as opposed to contemporaneous variables) is neither necessary nor sufficient for predeterminedness, predeterminedness is not tested by the Granger or Sims tests.4
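As an illustration of this identification scheme, the sketch below orders the variables nation, then California, then the neighbor state (the NATEMP, CALEMP, and STEMP growth rates defined in Section II) and computes orthogonalized impulse responses and the forecast error variance decomposition with statsmodels. The DataFrame, its column names, and the horizon are assumptions; this is not the author's code.

import pandas as pd
from statsmodels.tsa.api import VAR

def spillover_decomposition(df: pd.DataFrame, lags: int = 6, horizon: int = 8):
    # Column order fixes the recursive (Cholesky) ordering: the nation is treated as
    # predetermined with respect to California, and California with respect to the state.
    ordered = df[['NATEMP', 'CALEMP', 'STEMP']]
    results = VAR(ordered).fit(lags)
    fevd = results.fevd(horizon)   # share of each forecast error due to each innovation
    irf = results.irf(horizon)     # orthogonalized impulse response functions
    return fevd, irf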

Table 1
Twelfth District State Payroll Employment, July 1990

                   Payroll Employment    As a Percent of    As a Percent
State                  (thousands)          California          of U.S.

Alaska                      239                  1.9               0.2
Arizona                   1,486                 11.6               1.3
California               12,861                100.0              11.7
Hawaii                      529                  4.1               0.5
Idaho                       384                  3.0               0.3
Nevada                      625                  4.9               0.6
Oregon                    1,255                  9.8               1.1
Utah                        725                  5.6               0.7
Washington                2,157                 16.8               2.0

U.S.                    110,078                                  100.0

Identifying Assumptions for Regional Modeling
Previous research using VARs to measure national-regional linkages by Sherwood-Call (1988) and Cargill and Morris (1988) has used the identifying assumption that
growth in the (large) national economy is predetermined
with respect to any particular (small) state. The observed
contemporaneous correlation of errors stems from the
national economy affecting the region, and not vice versa. 5
To achieve identification between California and its
neighbors, I extend this assumption as follows: The national economy is predetermined with respect to states, and the
large California economy is predetermined with respect to
its smaller neighbors. (The orders of magnitude involved
are displayed in Table 1 which shows payroll employment
figures for the nine states in the Twelfth District in July
1990, the most recent business cycle peak.) Any observed
4See Cooley and LeRoy (1985) for a detailed review of the applications
and pitfalls of vector autoregression.
5Sherwood-Call (1988) uses the portion of the forecast error for an
individual state attributable to national innovations as her measure of
linkage between the nation and state. Among Twelfth District states, she
found California to be most linked to the national economy. In modeling
the Nevada economy, Cargill and Morris (1988) also assume that the
nation is predetermined with respect to Nevada. Furthermore, they
recognize the proximity and interrelatedness of the California and
Nevada economies and include California civilian employment in the
system of VAR equations. VARs also have been used to generate
regional forecasts, as with the VAR model of Ninth District states run by
the Federal Reserve Bank of Minneapolis (Todd 1984).


contemporaneous correlation of shocks between California
and its neighbors is due to California affecting the neighbors, rather than vice versa.
An alternative explanation and potentially serious objection, however, would be that the correlation of the errors
represents some joint regional shock common to both
California and its neighbors. For example, if California
and Nevada both rely heavily on the same industry (perhaps
tourism), an industry-specific shock could cause the observed error pattern. Exploring such possibilities is beyond
the scope of this paper and is left for future research.
A final cautionary note to the VAR analysis is the extent to which results are robust. A criticism of the Sims analysis of monetary intervention, for example, is that the results often changed for seemingly arbitrary redefinitions of variables, time periods, and periods of observation. In
this analysis I test the robustness of the results for different
time periods, but because of data limitations, I cannot test
for the robustness of the results across different measures
of economic activity. 6

6See Todd (1990) and Spencer (1989).


II. MODEL AND ESTIMATION

I examine the linkages between California and its neighbor states using a three-equation VAR model with employment growth rates for the nation (NATEMP), California
(CALEMP), and neighboring states (STEMP) as the random variables. Several specifications are tested. First, I
include all Twelfth District states (except California) in
STEMP. Second, I include only states contiguous to California (Oregon, Nevada, and Arizona) in STEMP to examine the importance of geographic proximity. Finally, I
estimate eight separate VARs (one each for the Twelfth
District states other than California) to examine stateby-state spillovers from California. In all specifications,
NATEMP excludes CALEMP and STEMP, and employment growth rates are measured as deviations from trend by including a constant term in the regression.
Economic activity is measured with quarterly payroll
employment data. This variable is chosen as a proxy of
economic activity for several reasons. First, it is measured
consistently over time and across states from state-level
payroll records. Second, other state-level variables (such
as personal income) are in part derived from the payroll
employment data. Some alternative measures of state-level
economic activity (such as state gross product) are not
considered reliable at present. Third, employment data are
broken into sectors, allowing for the examination of the
source of spillovers between states. Finally, employment
fluctuations should adequately capture relative output fluctuations between states over time if relative capital-labor
ratios across states change little over time.
The estimation period is from 1947.Q1 to 1991.Q4 (except for Alaska and Hawaii). To test for robustness I also
break the sample period into two segments.
The basic form of the VAR is shown in equations (1)
through (3). The growth rate (in log difference form
signified by a dot) of each variable is estimated as a
function of 6 lags of itself and the other two variables using
ordinary least squares.7

    \dot{NATEMP}_t = \alpha_1 + \sum_{i=1}^{6} \beta_{1i} \dot{NATEMP}_{t-i} + \sum_{i=1}^{6} \beta_{2i} \dot{CALEMP}_{t-i} + \sum_{i=1}^{6} \beta_{3i} \dot{STEMP}_{t-i} + e_{nt}     (1)
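As the text notes, equations (2) and (3) take the same form for the California and neighbor-state growth rates; the coefficient and error-term labels below are chosen only for exposition and may differ from the paper's own notation:

    \dot{CALEMP}_t = \alpha_2 + \sum_{i=1}^{6} \gamma_{1i} \dot{NATEMP}_{t-i} + \sum_{i=1}^{6} \gamma_{2i} \dot{CALEMP}_{t-i} + \sum_{i=1}^{6} \gamma_{3i} \dot{STEMP}_{t-i} + e_{ct}     (2)

    \dot{STEMP}_t = \alpha_3 + \sum_{i=1}^{6} \delta_{1i} \dot{NATEMP}_{t-i} + \sum_{i=1}^{6} \delta_{2i} \dot{CALEMP}_{t-i} + \sum_{i=1}^{6} \delta_{3i} \dot{STEMP}_{t-i} + e_{st}     (3)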

7The choice of lag length is somewhat arbitrary. A lag of over one year
was desired to accommodate seasonal fluctuations. Alternative lag
lengths yield qualitatively similar short-run effects, though different
long-run dynamics. As I am interested in short-run spillovers, a
relatively short lag length is chosen. Long-run dynamics, of course, may
be biasing our short-run estimates.
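As a rough illustration of how such a system could be estimated today, the sketch below uses the VAR class in the Python statsmodels package; the data file and column names are hypothetical stand-ins for the quarterly payroll employment series described above, not the paper's actual data set.

    # Sketch of estimating the three-equation VAR of equations (1)-(3).
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    # "employment.csv" is a hypothetical file of quarterly payroll employment
    # levels for the nation (excluding the District states), California, and
    # the neighbor-state aggregate.
    emp = pd.read_csv("employment.csv", index_col=0, parse_dates=True)

    # Growth rates: log differences, the dotted variables in equation (1).
    growth = np.log(emp[["NATEMP", "CALEMP", "STEMP"]]).diff().dropna()

    model = VAR(growth)
    results = model.fit(6)     # six lags of each variable; a constant is included by default
    print(results.summary())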


The estimated coefficients and standard errors of the
individual coefficients are numerous and difficult to interpret. Following standard procedure I instead report
summary statistics from the Granger tests, forecast error
variance decomposition, and impulse response analysis.
First, I consider whether California has a Granger causal effect on its neighbor states. Granger causation is tested through an F test of the joint significance of the lagged CALEMP variables in the STEMP equation. An F statistic
greater than the critical value of 2.10 results in rejection
of the null hypothesis of non-Granger causation. Results of
these tests are shown in the first column of Table 2.
When the other Twelfth District states are aggregated
together into STEMP, California does not appear to have a
leading predictive relation. The F statistic for non-Granger
causation is 1.09, which is below the critical value of 2.10.
When only contiguous states are included in STEMP, however, the F statistic is 3.55, suggesting that developments
in California do have predictive power. Likewise, when individual states are examined, shocks to California appear
to have predictive power for Arizona, Nevada, Oregon,
Utah, and Washington, but not for Alaska, Hawaii, and
Idaho.
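Continuing the illustrative statsmodels sketch above, the exclusion restriction behind this Granger test could be evaluated along the following lines (the fitted results object is the hypothetical one from the earlier sketch):

    # Does California employment growth help predict growth in the neighbor-state
    # aggregate?  An F test of the joint significance of the lagged CALEMP terms
    # in the STEMP equation.
    gc = results.test_causality(caused="STEMP", causing="CALEMP", kind="f")
    print(gc.summary())   # reports the F statistic, critical value, and p-value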
Second, I consider the reverse relationship, that is,
whether growth in neighboring states has a Granger causal
effect on California. The results (Table 2, second column)
show that, except for Arizona, the null hypothesis of non-Granger causation is not rejected for any state when tested
either individually or together. Since this reverse effect is
not significantly different from zero, the results show that
California is causally prior to Nevada, Oregon, Washington, and Utah, and to the contiguous states when aggregated. In other words, changes in California employment
growth have a predictive power for employment growth in
these neighboring states. California and Arizona appear to
be jointly determined, with employment growth in each
state having predictive power for the other.
While the tests identify a statistical leading effect of California on its neighbors, measuring the magnitude (or economic importance) of these dynamics requires identifying assumptions regarding the causal ordering of the contemporaneous errors. As discussed in the previous section, the causal ordering I assume is that contemporaneous shocks flow from the nation to California and neighbors, and from California to the neighbor states.

Table 2
Results of Granger Causality Tests

                                  California            State
                                  "Granger Causes"      "Granger Causes"
State                             State                 California

Other 12th District States        No    1.09            No    0.51
Contiguous States (OR, NV, AZ)    Yes   3.55            No    1.24
Alaska                            No    1.87            No    0.50
Arizona                           Yes   5.02            Yes   2.69
Hawaii                            No    0.44            No    0.91
Idaho                             No    1.63            No    0.82
Nevada                            Yes   3.10            No    1.32
Oregon                            Yes   3.75            No    1.44
Utah                              Yes   2.81            No    2.00
Washington                        Yes   3.47            No    1.08

Note: F test statistic of null hypothesis of non-Granger causality. The critical value for rejecting the null hypothesis is 2.10.

Table 3
Contemporaneous Correlation and Variance Decomposition

                                                Variance Decomposition (%)
                                                                        California
State                             Correlation   California   Nation    (Reverse Order)

All Other 12th District States       0.45           17.1       21.0         5.4
Contiguous States                    0.65           32.3       30.9        11.5
Arizona                              0.39           28.3       16.1        17.8
Nevada                               0.46           27.5       10.5        11.0
Oregon                               0.60           25.8       24.6        17.5
Washington                           0.48           24.9       27.6        16.2
Utah                                 0.33           21.0       25.9        18.9
Idaho                                0.40           17.7       18.4        16.9
Alaska                               0.20            9.1        7.0         7.8
Hawaii                               0.14            3.0       25.2         2.9

Note: Percent of forecast error variance attributable to California after 24 quarters.

Table 3 reports the correlation of errors between California and its neighbors from the estimated covariance matrices. The correlation between California and all other District states is 0.45. For contiguous states the correlation is 0.65. For individual states, the correlation ranges from 0.60 in Oregon to 0.14 in Hawaii. In general these correlations are large, and point out the importance of the identifying assumption. The contemporaneous shocks are assumed to be due to the impact of California on its neighbors. If the reverse is true, or if some unobserved common factor is affecting both states, the VAR results will be inconsistent.

Subject to this identifying assumption, the forecast error
the model makes for a neighbor state can be decomposed
into the error due to the state's own lags, the error due to the
nation, and the error due to California. I use this variance
decomposition as a measure of how states are linked to
California. Column 2 in Table 3 reports the proportion of
the forecast error at 24 quarters attributable to California.
For all other Twelfth District states, 17.1 percent of the forecast error variance is attributable to California, while the linkage to the nation is 21.0 percent. In contiguous states, however, the proportion of the forecast error attributable to California rises to 32.3 percent (30.9 percent for the nation).
Among individual states, Arizona exhibits the largest
degree of linkage: 28.3 percent of the error the model
would make in forecasting Arizona is attributable to errors
(innovations) in the California equation. Arizona is followed closely by Nevada (27.5 percent), then Oregon,
Washington, and Utah (all in the 21 to 26 percent range), then
by Idaho, Alaska, and Hawaii, which exhibit relatively
little linkage to California.
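In the same illustrative statsmodels sketch, the decomposition reported in Table 3 corresponds to a forecast error variance decomposition at a 24-quarter horizon, with the Cholesky ordering following the column order (nation, California, neighbor states):

    # Share of each variable's 24-quarter-ahead forecast error variance that is
    # attributable to each orthogonalized innovation.
    fevd = results.fevd(24)
    fevd.summary()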
The sensitivity of these results to the predeterminedness


assumption is tested by reversing the ordering of the equations, that is, assuming that the neighbor states are predetermined with respect to California. These results are
shown in the final column of Table 3. When the states
are aggregated, reversing the ordering reduces the measured linkage by over half. For all Twelfth District states it
falls from 17.1 to 5.4, and for contiguous states it falls from
32.3 to 11.5. The results for the aggregate measures of
neighboring states thus are very sensitive to the ordering
assumption. For individual states, however, changing the
ordering assumption has less of an effect. Arizona's linkage falls from 28.3 to 17.8, Oregon from 25.8 to 17.5, and
Washington from 24.9 to 16.2. Utah and Idaho change
relatively little. Nevada, however, drops more than half
(from 27.5 to 11.0). The sensitivity of the results points out the importance of the contemporaneous correlations in measuring spillovers.
An alternative measure of the effects of California on its neighbors is obtained through impulse response analysis. The effects of a one-standard-deviation shock to California on neighboring states over 24 quarters are shown graphically in Charts 1 through 3.

For all Twelfth District states (shown in Chart 1) a one-standard-deviation shock to quarterly employment growth in California of 0.0043 (in log difference form, or approximately 0.43 percent) results in a 0.29 percent higher growth rate in the rest of the District in the first quarter. The response goes away by quarter 5. (It slightly overshoots, then dampens to zero by quarter 18.) For contiguous states (shown in Chart 2) the response to a shock to California is larger. The response rises to 0.31 percent in the first quarter, and remains above the response for the all-Twelfth-District aggregate until quarter 4. This suggests that the magnitude of spillovers from California is larger for contiguous states. Responses for the individual states are shown in Chart 3.

Chart 1
Response of All Twelfth District States and California to One-Standard-Deviation Shock to California

Chart 2
Response of Contiguous States and California to One-Standard-Deviation Shock to California

These results also suggest that spillovers are larger in states that are geographically closer to California. The largest peak responses are seen in Oregon (0.38 percent in quarter 1) and in Arizona and Nevada (both at 0.35 percent in quarter 3). In contrast, smaller responses are seen in Washington and Utah. (Idaho, Alaska, and Hawaii exhibit small responses but are not shown, for clarity of exposition.) Nevada shows the largest sustained spillover (remaining positive through quarter 6), while Oregon's is of shorter duration, reaching zero by quarter 4. As with the aggregate measures, the responses in the individual states slightly overshoot, then dampen to zero by quarter 18.

Chart 3
Response of Neighbor States to One-Standard-Deviation Shock to California (Oregon, Arizona, Washington, Utah, Nevada)

Are these spillovers statistically significant? Charts 4 and 5 report the impulse responses for the all-District and contiguous states, respectively, with 95 percent confidence bounds calculated through a Monte Carlo simulation. The confidence bound for the all-District response is greater than zero in quarter 1, touches zero in quarter 2, is just above zero in quarter 3, then contains zero from quarter 4 on, suggesting that the measured spillover is not significantly different from zero beyond three quarters. The results for contiguous states, however, suggest that the impulse is estimated more precisely. The confidence bound is well above zero through three quarters, then, as with the all-District response, contains zero from quarter 4 on. Results for individual states reveal statistically significant spillovers in Nevada (through quarter 6), Arizona (through quarter 4), and Oregon, Washington, and Utah (through quarter 3).

Chart 4
Response of All Twelfth District States with Two-Standard-Error Bound
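A rough statsmodels counterpart of the impulse responses and Monte Carlo error bands discussed above is sketched below; it is illustrative only, and the option names reflect my understanding of the library rather than anything in the paper.

    # Orthogonalized impulse responses over 24 quarters, with error bands.
    irf = results.irf(24)
    irf.plot(orth=True)                                        # point estimates
    irf.plot(orth=True, stderr_type="mc", repl=1000, seed=0)   # Monte Carlo bands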
The robustness of the results is tested by splitting the
sample into two periods (1947.Q1-1970.Q1 and 1970.Q2-1991.Q4). This tests for structural change, at the cost,
however, of reducing the degrees of freedom. In general,
splitting the sample period lowers the value of the F
statistics for Granger causality, with the leading relationship becoming insignificant in certain states. While the
overall qualitative pattern of the results does not change,
for most states the measured linkage to California appears
larger in the first period than in the second, while the
measured linkage to the rest of the nation rises. This
suggests that western states are becoming more integrated

into the national economy over time, while the relative
linkage to California is falling. The impulse responses in both sample periods, however, reveal significant spillovers for three quarters following a shock to California.
For the aggregate of states contiguous to California, for
example, the F statistic for Granger causality is 2.90 for the
first period, but only 0.6 in the second period, and the
measured linkage to California declines from 38.2 to 25.9.
The linkage to the nation, however, rises from 34.1 to 40.9,
suggesting some substitution in linkage from California to
the nation. The pattern of the impulse responses (shown in
Chart 6) to a shock from California, however, is little
changed between the two sample periods and remains
significantly greater than zero for three quarters. While
these results are suggestive of structural change, testing for
this will involve a modeling approach that allows for time-varying coefficients and represents an area for future research.

Chart 5
Response of Contiguous States with Two-Standard-Error Bound

Chart 6
Response of Contiguous States When Sample Period Is Split in Two (1947.Q1-1970.Q1 and 1970.Q2-1991.Q4)
To summarize these initial results, California has statistically significant leading relationships for several
neighboring states, including Arizona, Nevada, Oregon,
Washington, and Utah. With the exception of Arizona, a
reverse effect on California is not seen. Furthermore, under
the identifying assumption that observed contemporaneous
shocks flow from California to its neighbors, California
appears to have significant economic spillovers to its
neighbor states. The largest spillovers appear in states
geographically near California. The results are sensitive,
however, to the assumption of predeterminedness and the
choice of sample period. There is some indication that the
linkage of California to neighboring economies may be
decreasing over time relative to their linkage to the national
economy.

III. SECTORAL LINKAGES

Table 4
California vs. Sectors in Neighbor States
Results of Granger Causality Tests

State           Manufacturing    Services      Other         Finance

Arizona         Yes   3.4        Yes   3.5     No    1.5     No    0.6
Nevada          No    1.1        Yes   4.3     No    2.1     No    1.2
Oregon          Yes   4.3        Yes   2.3     Yes   4.1     No    1.1
Utah(a)         Yes   2.2        n.a.          Yes   3.3     No    0.6
Washington      No    1.1        No    1.3     No    1.9     No    0.4

Note: F test statistic for null hypothesis of non-Granger causality. The critical value for rejecting the null hypothesis is 2.10.
(a) For Utah, no service sector data are available, so Services are included in Other.

To explore linkages between sectors in California and
sectors in its neighbors, I expand the three-equation VAR
model estimated in Section II to a six-equation system.
NATEMP and CALEMP remain unchanged from the earlier specification. STEMP, however, is divided into the
following sectors: manufacturing, services, "other," and
finance. An equation is included for each sector. As before,
each of these six components then is regressed on lagged
values of itself and lagged values of the other components.
I conduct this analysis only on states for which California
had a significant overall Granger causal effect.
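A sketch of how such a six-equation system could be set up, extending the earlier illustrative code; the sectoral column names are hypothetical stand-ins for the sectoral employment series described above.

    # Replace the neighbor-state aggregate with four sectoral employment series.
    sector_cols = ["NATEMP", "CALEMP", "ST_MFG", "ST_SVC", "ST_OTH", "ST_FIN"]
    sector_growth = np.log(emp[sector_cols]).diff().dropna()
    sector_results = VAR(sector_growth).fit(6)

    # Does California Granger-cause each neighbor-state sector?
    for col in ["ST_MFG", "ST_SVC", "ST_OTH", "ST_FIN"]:
        test = sector_results.test_causality(caused=col, causing="CALEMP", kind="f")
        print(col, round(test.test_statistic, 2), round(test.pvalue, 3))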
The results are reported in Table 4. California appears to have a leading effect on manufacturing in Arizona, Oregon, and Utah. California also appears to have a leading effect on the service sectors of its neighbors, with significant results seen for Arizona, Nevada, and Oregon. Of particular interest is the strong result for Nevada, showing the expected impact of California on the casino-related service sector of the state. A significant effect is also seen in the "other" sectors in Utah and Oregon. (Service employment is included in the "other" sector for Utah due to data availability.) California does not appear to have an effect on any specific sector in Washington, though the "other" sector has the strongest measured effect, with an F statistic of 1.9 that is significant at the 80 percent level. Finally, the California economy does not have a Granger causal effect on the financial sectors of its neighbors.
To estimate the magnitude of these linkages, a causal
ordering is again needed. I again assume that the nation is
predetermined with respect to California and its neighbors, and that California is predetermined with respect to
its neighbors. More problematic, however, is determining


the direction of causality among the sectors. The results for
the linkage to California, however, are invariant to the
ordering of sectors.
The forecast error variances of the state sectors due to
California shocks are shown in Table 5.


Table 5
Percent of Forecast Error Variance in State Sector Attributable to California after 24 Quarters

State           Manufacturing    Services     Other     Finance

Arizona              26.4          11.6        17.1       10.0
Nevada                9.4           8.1         9.8        7.2
Oregon               16.2          11.9        11.3        9.7
Utah(a)               9.8          n.a.        14.6        6.0
Washington           10.3           5.5        10.1        5.8

(a) For Utah, no service sector data are available, so Services are included in Other.


Note that in a six-equation system, the observed linkage to California declines because shocks in the other sectors affect the forecast
variance. The results are thus not strictly comparable to the
three-equation model, but are used to suggest relative
strengths of linkages across states and across sectors.
In general, manufacturing displays a higher degree of
linkage to California than the other sectors. Arizona manufacturing appears to be most linked to California, followed by Oregon and Washington. In services, Arizona
and Oregon display the greatest linkage. The "other" category displays large linkages in Arizona and Utah. In spite
of the significant Granger-test of California on Nevada, the
estimated linkage is of relatively small magnitude.
The observed spillovers in manufacturing are consistent
with a model of linkages propagated through trade flows
between firms. The spillovers in the service sector suggest
that linkages also exist in sectors such as tourism and
recreation. This is particularly true in the case of Nevada,
where growth in California has strong effects on the casino-dominated recreation sector. The lack of spillovers in
finance suggests that growth in this sector is largely
determined by developments internal to the state, rather
than spillovers from California.

IV. CONCLUSION AND FUTURE RESEARCH

Using a set of three-equation VAR models of the nation,
California, and other Twelfth District states, this paper
established that California has a statistically significant
leading relationship with employment growth in several of
its neighbor states-Arizona, Nevada, Oregon, Utah, and
Washington. The sectors affected are manufacturing in
Arizona, Oregon, and Utah, and services in Arizona,
Nevada, and Oregon. The financial sectors of these states
are not affected.
The magnitudes of these linkages were then measured through VAR variance decomposition and impulse response analysis. This measurement requires identifying assumptions regarding the observed correlation of contemporaneous shocks. I assume a causal ordering that runs from the nation to California and other states, and from California to its neighbors. Under this assumption, the
measured spillovers appear to be important, but dampen
relatively quickly.
These results are broadly consistent with a model of
regional linkages occurring through trade of goods and
services. Positive shocks to California have positive short-run spillovers. The spillovers in manufacturing can be
attributed to orders for goods, while spillovers in services
potentially are due to demand for recreation and tourism.
An extension of this research will further explore these


linkages and the reasonableness of the identifying assumptions. Alternative explanations for the joint regional shocks
to California and its neighbors could include industrial mix
(aerospace or tourism, for example), or shocks associated
with being located on the Pacific Rim. An explicit accounting for aerospace between Washington and California, for example, could explore whether this industry is driving the
observed overall linkage in manufacturing.
This paper also suggests, however, that simple VAR
modeling of regional economies can be pushed only so far.
The results are sensitive to structural change, and imposing
a standard model on unique states results in dynamic
patterns that suggest problems in specification.
While VAR modeling may effectively pick up trade flows, measuring longer-run factor flows suggests a modeling approach that explicitly accounts for structural change.

REFERENCES

Cargill, Thomas F., and Steven A. Moros. 1988. "A Vector Autoregression Model of the Nevada Economy." Federal Reserve Bank of San
Francisco Economic Review (Winter) pp. 21-32.
Cooley, Thomas F., and Stephen F. Leroy. 1985. "Atheoretical Macroeconometrics: A Critique." Journal of Monetary Economics 16,
pp. 283-308.
Granger, C.W.J. 1969. "Investigating Causal Relations by Econometric Models and Cross-Spectral Methods." Econometrica 37, pp. 424-438.
Jacobs, Rodney L., Edward E. Leamer, and Michael P. Ward. 1979.
"Difficulties with Testing for Causation." Economic Inquiry pp.
401-403.
Sherwood-Call, Carolyn. 1988. "Exploring the Relationships between
National and Regional Economic Fluctuations." Federal Reserve
Bank of San Francisco Economic Review (Summer) pp. 15-25.
Sims, Christopher A. 1972. "Money, Income, and Causality." American Economic Review 62 (September) pp. 540-552.
_ _ _ _ . 1980a. "Comparison of Interwar and Postwar Business Cycles: Monetarism Reconsidered." American Economic Review 70 (May) pp. 250-257.
_ _ _ _ . 1980b. "Models and Their Uses." American Journal of
Agricultural Economics 71 (May) pp. 489-494.
Spencer, David E. 1989. "Does Money Matter? The Robustness of
Evidence from Vector Autoregressions." Journal of Money, Credit, and Banking 21 (November) pp. 442-454.
Todd, Richard M. 1984. "Improving Economic Forecasting with Bayesian Vector Autoregression." Federal Reserve Bank of Minneapolis Quarterly Review (Fall) pp. 18-29.
_ _ _ _ . 1990. "Vector Autoregression Evidence on Monetarism:
Another Look at the Robustness Debate." Federal Reserve Bank of
Minneapolis Quarterly Review (Spring) pp. 19-37.


Changing Geographical Patterns
of Electronic Components Activity

Carolyn Sherwood-Call
Economist, Federal Reserve Bank of San Francisco. The
author would like to thank Stephen Dean, Karen Trenholme, and Brantley Dettmer for their research assistance,
and the editorial committee for many insightful comments.

Some observers have argued that high technology industries are leaving early technology centers, such as
Silicon Valley, for lower-cost locations. These assertions
are consistent with a world in which early innovations tend
to be concentrated geographically, but proximity to the
innovating region becomes less important than other costs
as the product's market grows and standardized production technologies are developed.
This study finds little evidence of such patterns in
electronic components activity within the U.S. In contrast,
the U.S. share of the total worldwide electronics market
has fallen dramatically, while nations with lower costs and
less developed technological infrastructures are gaining
market share.


During the 1980s, some observers argued that high
technology industries were leaving early technology centers, such as Silicon Valley, for lower-cost locations in the
U.S. and abroad (see, for example, Saxenian 1984). These
assertions are consistent with the views of some economists, who believe that the factors affecting firms' location
decisions may vary during the course of a product's life
cycle (Vernon 1966). In this view, innovation in a particular
industry tends to be concentrated in a region that offers
access to technological expertise, even if the general level
of costs in that region is relatively high. As the market for
the new product grows and standardized production technologies are developed, proximity to the innovating region
becomes less important, freeing firms to seek lower land
and labor costs elsewhere. Thus, according to this theory,
infant industries may be concentrated in high cost regions,
while mature industries are more likely to be located where
production costs are low.
Previous studies addressing similar issues (Malecki
1985, Park and Lewis 1991) found that geographical dispersion did not occur within the technology-oriented industries
they studied. However, these results are not necessarily
inconsistent with Vernon's hypothesis. First, they looked
only at changes within the U.S. Vernon's paper, in contrast,
discussed these changes in an international context, and the
forces suggested by his theory may be more readily apparent by making international comparisons.
Second, they looked for "dispersion," defined as an
even distribution of activity across all geographic areas.
However, a search for low-cost production sites would not
result in an even distribution of production across localities
if production moved en masse to a region that offered lower
land and labor costs. Thus, the level of geographic concentration in the industry could remain constant, even if the
location of that concentration were to change. In addition,
the geographic areas used for previous studies (census
regions and states) may be too large to capture some of the
changes that do occur.
An alternative explanation for the empirical studies'
results is that the product life cycle theory may not hold for
high-tech industries. One possible reason is that the pace of


innovation has been so rapid that many products have lifespans of only a few years. In this dynamic environment,
the investments in standardized production technology that
allow production to move away from the innovation site
may never be economically feasible. Moreover, the frequent changes mean that an "industry" as defined by data
classifications does not describe a single homogeneous
product, but instead includes a series of several distinct
products, which substitute for each other over time.
This study examines the issues raised by Vernon's theory
in the context of the electronic components industry. The
paper is organized as follows. The first section discusses
the factors that could cause firms' location decision parameters to change over the course of a product's life cycle.
Section II addresses whether the changes described by the
product life cycle theory have occurred internationally.
Section III addresses the same question within the U.S.,
using more detailed U.S. data and defining the questions
somewhat differently from previous U.S. studies. Section
IV draws conclusions.

I. THE PRODUCT LIFE CYCLE

Vernon (1966) provided a rationale for why firms' location decisions might change during the course of a product's life cycle. As in many stage theories, the stages
themselves are somewhat arbitrary, and the events of one
stage do not provide a compelling explanation of why
events ought to progress to the next stage. Nevertheless, its
major points are both plausible and consistent with popular
notions of the changes that have occurred in high technology industries. Moreover, it provides a convenient framework for a more general discussion of the changes in the
industry's structure.
Initially, production might be concentrated geographically simply because each innovation must take place
somewhere. However, some regions are more likely to be
seedbeds for innovations than others, since a critical mass
of related activity can yield external economies of scale
that make each input more productive. For example, a
region with a cluster of related activities is likely to have
the business service and financial infrastructure in place to
serve firms with similar needs. Moreover, workers with
appropriate skills are likely to be more plentiful in such a
location, and if these skills are relatively unusual in the
general population, the location will be particularly appealing to firms. For innovations in which work force
characteristics and local infrastructure are critical, proximity to these factors is likely to outweigh other considerations in firms' location decisions at the innovation stage.
Therefore, during the innovation stage a cluster of activity


could occur even in a location where the general level of
costs is high.
After the initial innovation comes a transition stage, in
which increased demand for the product makes investment
in production technology feasible and the technology of
production can be transferred from one location to another.
At this point, firms are not tied. to the site of the original
innovation as closely as they were in the first stage.
Nevertheless, the continued need for technological expertise as the production process is refined may lead the firm
to confine its site search to a smaller region than it might
otherwise explore. Thus, in this second stage, the industry
may spread out somewhat from its initial concentration of
activity.
The third and final stage of the product life cycle is
standardization. In principle, standardization occurs when
technological innovations are complete, so the research
activities that previously were concentrated in the innovating region are no longer necessary. In practice, some
technological inputs may be required even when production is relatively standardized, but in any case, site location
decisions should be based primarily on the costs of inputs
to a standardized production process. Most interregional
cost differences would be expected to result from differences in land and labor costs.
If patterns in the electronic components industry were
consistent with the product .life cycle theory, and if the
initial activity were concentrated in high-cost areas, activity should have left the regions where early innovations
took place, and current activity should be most prevalent in
areas that offer low levels of production costs for the
industry. Wages and other direct costs should have become
more important locational determinants over time, while
attributes associated with technological expertise should
have become less important. These trends should have
been especially prevalent in the line production activities,
which would require relatively little technological expertise if they were standardized. Moreover, if the product life
cycle theory accurately describes changes in the electronic
components industry, research and production activities
should have become less closely linked to each other over
time.
In some ways, these kinds of changes seem plausible for
the high-tech industries in general and for the electronic
components industry in particular. Personal computers, an
important end use for many electronic components, provide an obvious example of a growing market for components during the 1977 to 1987 period.


II. INTERNATIONAL COMPARISONS

Table 1 presents various measures of production and
costs for the electronics industries in several important
producing countries, along with measures of technological
sophistication for those countries.1 The table suggests that, in 1988, the U.S. dominated the world's electronics industry by most measures. The U.S. made 38 percent of the world's electronic products. Japan and the European Community (EC) also contributed significantly to world production, with shares of 26 and 24 percent, respectively.
Thus, these three entities accounted for 88 percent of total
worldwide electronics production in 1988. Other producers, including India, Taiwan, Singapore, Brazil, and
South Korea, together accounted for only 6.4 percent of the
world's electronics production.
The U.S. outranks other producing countries by most
l"Electronics" is defined here to include electronic materials and components, software, computers, telecommunications equipment, business
equipment (copiers, fax machines, and so forth), and instruments.

measures of technological sophistication. The U.S. holds
commanding leads in terms of the number of telephones
per capita and the number of scientists and engineers. The
U.S. also ranks second (to Japan) in the number of scientists and engineers relative to total population.
Gross domestic product (GDP) per capita provides a
rough measure of the differences across countries in the
cost of doing business. Relatively high GDP per capita
reflects the high labor costs in those countries and also
suggests that the level of investment in both human and
physical capital is sufficient to generate relatively high
returns to land and other factors. By this measure, the U.S.
ranked second, at $18,393, with only Japan ($19,448)
posting a higher GDP per capita. Both the U.S. and Japan
have significantly higher GDP per capita than the EC's
$13,137, and none of the other producing countries listed
in Table 1 has GDP per capita that reaches even half of the
U.S. or Japan levels.
Table 1
Electronics Sectors in Selected Countries

                                          U.S.     Japan     EC(a)  S. Korea   Taiwan  Singapore   Brazil    India     World

Value of Electronics Production
  ($M, 1988)                           186,232   127,208   115,136     9,103    7,890      7,651    3,876    2,314   486,718
% of World Total (1988)                   38.3      26.1      23.7       1.9      1.6        1.6      0.8      0.5       100
% of World Total (1984)                   43.0      22.5      21.9       0.9      1.1        0.8      0.6      0.2       100
Real Annual Growth Rate (%, 1984-88)         1         8         6        24       15         23       11       23         4
Production/GDP (%, 1987)                   3.9       4.6       2.6       6.0     10.0       28.6      1.0      1.7       N/A
Production ($000)/Employment             104.9     105.9      79.2      35.8     40.7      107.8     15.1     11.6       N/A
Total Electronics Employment
  (000, 1986)                            1,776     1,201     1,454       254      194         71      257      200       N/A
Annual Growth Rate (%, 1980-86)            1.3       9.6       1.5       9.8      2.6       -0.2      N/A      N/A       N/A

General Technology Characteristics
Telephones/1000 Pop. (1986)                791       558       520       186      228        417       84        4       N/A
Scientists & Engineers (000, 1986)         787       575       468        47       42          2       33      100       N/A
Scientists & Engineers/Million Pop.      3,230     4,712     1,443     1,116    2,149        923      230      128       N/A
GDP per Capita ($, 1987)                18,393    19,448    13,137     2,881    3,794      7,654    2,304      326       N/A

Note: "Electronics" is defined here to include electronic materials and components, software, computers, telecommunications equipment, business equipment (copiers, facsimile machines, and so on), and instruments.
(a) EC data exclude Portugal and Greece.

There are signs, however, that the U.S. domination of the industry may be waning. The growth rate in U.S. production between 1984 and 1988 was only 1 percent, by

far the slowest growth among the group of countries included in Table 1. Electronics industry growth rates in countries that offer much lower costs were all in double digits, stronger growth than in the countries that dominated worldwide production in 1988. In addition, according to the Semiconductor Industry Association (SIA), the share of U.S. company production in total world semiconductor production fell fairly steadily, from 65 percent in 1977, to 60 percent in 1982, 45 percent in 1987, and 38 percent in 1988 (SIA 1990), before picking up slightly to 40 percent in 1990.2 It is likely that these numbers understate the movement of semiconductor production outside of the U.S., since U.S.-based companies have moved their own production offshore even as foreign companies have increased their production.
Thus, in some ways, the industry's patterns appear to be
consistent with the product life cycle hypothesis. The
U.S., a technology-oriented, high-cost country, dominated the industry in its early days, suggesting that the U.S. was
the site of early innovations. Between 1984 and 1988, the
electronics industry grew fastest in the countries that
offered lower production costs. This change is consistent
with the industry moving toward the standardization stage,
with firms seeking locations that offer lower costs.
Another characteristic that is broadly consistent with the
product life cycle theory is that, within the electronics
industry, the product mix varies substantially across countries. Korea, for example, specializes in computer assembly, while Japan's electronics industry is dominated by
semiconductor fabrication. Differences in product mix and
capital intensity are reflected in wide variations in the value
of production per worker across countries. In the U.S.,
Japan, and Singapore, the average value of production per
worker is $100,000. In sharp contrast, the value per worker
is $40,000 or less in India, Brazil, Taiwan, and South
Korea.
Nevertheless, it is worth noting that the U.S. does retain the world lead in some segments of the electronics industry. U.S. producers dominate world production in highly profitable areas, such as processors that are vital to the calculating, graphics, and sound functions in computers.
Japanese producers, in contrast, dominate the less profitable market for standard memory chips (Pollack 1992).
In a similar vein, Saxenian (1990) suggests that during
the middle and late 1980s, Silicon Valley spawned a new
generation of flexible, interconnected firms that specialize
in particular aspects of technological development or pro-

2Note that semiconductors are just one type of electronic component.
Semiconductors are a subset of the electronic components category
measured by SIC 367.


duction. These firms do not attempt to make high-volume
commodity chips (as Intel and Advanced Micro Devices had
in an earlier generation), but instead seek out small,
specialized niches in which they can take advantage of
their technological expertise and flexibility.
The view that U.S. producers continue to play an important role in the high-tech industries, but not in mass-produced commodity products, also is supported by the pattern of employment growth within the U.S. Between
1977 and 1987, the number of electronic components
workers in nonproduction functions (including research
and development, headquarters functions, and marketing)
grew 88 percent. During the same period, the number of
production workers grew by a relatively modest 28 percent.
These figures are consistent with a change in the U.S.
industry away from mass production and toward custom
products with smaller markets.
The overall picture that emerges is one of a very heterogeneous industry, in which different countries have tended
to specialize in different functions, and there is still a role
for U.S. producers, although that role is different from its
role in the past. The product life cycle theory is not strictly
applicable to a group of products as heterogeneous as this
one. The continued role of U.S.-based firms in developing and implementing the new technologies, and the dominance of the U.S., the EC, and Japan in total worldwide
production, suggests that the product life cycle story does
not fully capture the dynamics of the electronic components industry. At the same time, though, the rapid growth
in assembly activity in Taiwan and South Korea is consistent with the product life cycle theory's assertion that
standardization allows production activities to move to
regions that offer relatively low costs, even though their
technological infrastructures are less well developed.
III. THE ELECTRONIC COMPONENTS INDUSTRY
IN THE U.S.

Malecki (1985) and Park and Lewis (1991) examined
whether the kinds of geographic changes predicted by the
product life cycle theory are at work in high technology
industries within the U.S. Malecki found that there was little dispersal across four census regions between 1973 and 1983 in the four 4-digit Standard Industrial Classification (SIC) categories he examined.3 Park and Lewis conducted shift-share analysis using state-level data for three
of Malecki's four 4-digit industries, with mixed results.

3Those four industries were Electronic Computing Equipment (SIC
3573), Semiconductors (SIC 3674), Medical and Surgical Instruments
(SIC 3841), and Computer Programming (SIC 7372).


They concluded that their results did not support the
product life cycle model.
Glasmeier (1986) disaggregated employment in 2-digit
SIC categories by occupation.4 She found that the technical
and professional jobs were concentrated geographically,
but that many of the locations of these concentrations had
relatively few production workers in the same industry
groups. This supports the product life cycle contention that
production and nonproduction activities tend to become
more separate as the product's life cycle progresses, al-

though the continued existence of a large cadre of nonproduction workers suggests that production was not yet
fully standardized at the end of her sample period.
This section addresses these issues using Census of
Manufactures data for the 3-digit SIC industry of electronic components (367). (See Box for a description of the
data.) The present study differs from previous work in that
it explores the question at the metropolitan area level rather
than at the level of the state (Park and Lewis, Glasmeier) or
census region (Malecki).

Characteristics of Innovating Regions
"These industries are Chemicals (SIC 28), Nonelectrical Machinery (SIC 35), Electrical Machinery (SIC 36), Transportation Equipment
(SIC 37), and Scientific Instruments (SIC 38).


Box 1

Census of Manufactures Data
The manufacturing data in this study come from the
Census of Manufactures, produced by the U.S. Commerce Department. The Census of Manufactures provides metropolitan area data on such variables as
the number of production and nonproduction workers,
work hours, and payroll costs. The Standard Industrial
Classification (SIC) used for this study is electronic
components (SIC 367), which includes electron tubes,
printed circuit boards, semiconductors and related devices, and electronic capacitors, resistors, coils, transformers, and connectors. It does not include finished
technology products such as computers or scientific
instruments.
The number of Metropolitan Statistical Areas
(MSAs) for which complete data are available varies
among the sample years. Data are withheld for MSAs
with only a small number of employers in SIC 367, and
for MSAs in which a single employer dominated that
MSA's industry. The data set includes 44 MSAs for 1977
and 68 MSAs for 1987.
The MSAs for which complete data are reported
leave much of the U.S. employment in the industry unaccounted for. For example, the 44 MSAs for which 1977 data are available account for only 49 percent of U.S. employment in SIC 367. For 1987, the sample size
rises closer to 60 percent, but clearly a large portion of
the industry remains unreported. This large unreported
portion of the industry could potentially affect the
empirical results, if many of the industry's


changes are occurring outside MSAs or in MSAs that do
not report complete data for SIC 367.
One problem with using SIC 367 is that the characteristics of the products classified in this 3-digit SIC
category have changed over time. Between 1977 and
1987, the share of semiconductors in the total rose from
30.5 percent to 33.8 percent. The share of electronic
connectors also rose, and printed circuit boards were
added as a separate category in 1987. (Prior to 1987,
printed circuit boards were included in "Electronic
Components, not elsewhere classified.") During the
ten-year period, electronic capacitors, resistors, and
tubes became significantly less important to the entire
industry. While going to the 4-digit industry level would
alleviate many of the problems associated with changes
in the composition of SIC 367, too few MSAs report
4-digit data to allow a meaningful analysis.
Another potential problem with these data is that the
manufacturing census is conducted by establishment,
and an establishment is considered to be in SIC 367 only
if production occurs on the site. Therefore, an establishment engaged only in research and development, sales,
administration, or other "auxiliary" functions would
not be included in the totals for SIC 367, even if the firm
produces nothing but electronic components. This
means that the data regarding production activities are
likely to be more accurate than the data for nonproduction activities, although the extent of the problem with
nonproduction data is impossible to determine.


Table 2
Characteristics of Regions Producing Electronic Components, 1977

                                    Percent of U.S. Employment              Per Capita   Production    Nonprod'n       High School   4+ Years of
                                    SIC 367                        All      Personal     Worker Wage   Worker Salary   Graduates(a)  College(a)
MSA                                 All    Production  Nonprod'n   Total    Income ($)   ($/Hour)      ($/Year)        (%)           (%)

San Jose, CA                        10.1      8.1        14.4       0.7       8,865        5.41          19,850          79.5          26.4
Chicago, IL                          5.4      5.7         4.8       3.7       8,885        4.63          15,855          67.5          18.5
Los Angeles/Long Beach, CA           5.2      5.7         4.2       3.9       8,473        4.78          19,163          69.8          18.5
Phoenix, AZ                          4.1      3.0         6.6       0.6       7,059        5.02          14,053          75.0          18.3
Dallas/Fort Worth, TX                3.7      3.8         3.7       1.4       7,878        5.49          15,233          70.0          20.2
Boston, MA                           3.4      3.3         3.5       1.6       7,984        4.86          17,463          77.2          24.7
Anaheim/Santa Ana, CA                3.3      3.4         3.3       0.8       8,968        4.61          19,711          80.4          22.6
Nassau, NY                           2.0      2.0         2.1       1.0       8,870        4.81          17,167          75.8          20.9
New York, NY                         1.9      2.0         1.6       4.4       8,643        4.61          16,526          63.5          19.2
Philadelphia, PA                     1.8      2.0         1.4       2.2       7,844        4.97          17,125          66.0          13.6
Ten MSAs                            41.1     39.0        45.7      20.3       8,426        4.97          17,640          72.5          20.3
U.S.                                 100      100         100       100       7,297        4.83          18,158          68.6          17.0

(a) Education data are for 1980.

Table 2 presents information on various characteristics of the ten Metropolitan Statistical Areas (MSAs) that accounted for the largest shares of U.S. electronic components employment in 1977. Together, the top ten areas
accounted for 41 percent of electronic components employment. By way of comparison, these ten MSAs accounted
for about 20 percent of the nation's total employment
across all industries.
Of these ten MSAs, eight (San Jose, Chicago, Los Angeles/Long Beach, Phoenix, Dallas/Fort Worth, Boston,
Anaheim, and Nassau) are much more strongly represented in the electronic components industry than their
sizes would suggest. In contrast, New York and Philadelphia are large metropolitan areas whose contributions
to the electronic components industry derive primarily
from their large sizes. Their contributions to electronic
components employment actually are smaller than their
contributions to total employment.
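One way to make this comparison concrete is the ratio of an area's share of industry employment to its share of total employment (often called a location quotient); the following is a minimal Python sketch using the 1977 shares from Table 2.

    # Shares of U.S. employment, in percent, from Table 2 (1977).
    shares_sic367 = {"San Jose": 10.1, "New York": 1.9, "Philadelphia": 1.8}
    shares_total  = {"San Jose": 0.7,  "New York": 4.4, "Philadelphia": 2.2}

    for msa in shares_sic367:
        lq = shares_sic367[msa] / shares_total[msa]
        print(f"{msa}: location quotient = {lq:.1f}")

    # San Jose comes out around 14, far above 1; New York and Philadelphia fall
    # below 1, matching the observation that their electronic components shares
    # are smaller than their total employment shares.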
These ten areas accounted for a significantly larger share
of the nation's nonproduction workers in the electronic
components industry than their share of production workers. They contained 46 percent of the nation's nonproduction workers in the electronic components industry,
but only 39 percent of national production workers. The
greater concentration among nonproduction workers than


among production workers is due to sharp differences in
only two MSAs: San Jose and Phoenix. The other important electronic components producing areas were either
proportionately represented by production and nonproduction workers, or were relatively over-represented by production workers.
One of the most striking observations from this table
is that San Jose clearly dominated the industry. The San
Jose area, which accounted for only 0.7 percent of the nation's total jobs in 1977, provided fully 11 percent of the
nation's electronic components employment. Moreover,
the San Jose MSA had more than twice as many electronic
components jobs as Chicago, which ranked second. San
Jose's share of the industry's nonproduction jobs was even
greater, at over 14 percent. These jobs, which cover
functions other than line production, include research and
development, sales, and headquarters functions. Thus,
nonproduction jobs are more likely to require advanced
education and technological training.
In most respects, the characteristics of the San Jose area
during the late 1970s were those that tend to be associated
with technological innovations. Stanford University, with
one of the nation's top electrical engineering programs, is

located in the area. The University of California at Berkeley, with another top-rated electrical engineering program,
is only about 50 miles away. Moreover, the San Jose area
boasts a highly educated population. Of residents over 25
years old, 80 percent had high school diplomas and 26
percent had at least 4 years of college in 1980, much higher proportions than the nation, where the figures among the same age group were 69 percent and 17 percent,
respectively.
The San Jose area had high costs by any measure. The
average hourly wage for production workers in the electronic components industry was $5.41 in San Jose, compared with $4.83 nationally. The average annual salary for
a nonproduction worker in San Jose was $19,850, much
higher than the $18,158 national average. Per capita personal income, a more general measure of the level of
incomes (and presumably costs) in an area, also was
relatively high in the San Jose area, at $8,865 compared to
a national average of $7,297. Housing prices are not listed
in the table, but home prices also were relatively high in the
San Jose area. 1980 Census data, for example, revealed that
the median monthly mortgage payment for an owner-occupied dwelling was $475 in San Jose, much higher than
the $365 national median. Rents in San Jose also were
much higher than the national average, with a median
monthly rental payment of $365, compared with the U.S.
median of $243.
Venture capital, an important source of funding for
high-tech start-up companies, also was more readily available in the San Jose area than in many other parts of the
country. While the New York/New Jersey/Connecticut
area clearly dominates the venture capital field with over a
third of the 100 biggest funds in 1986, Northern California
had 20 funds on the list, followed by the Boston area with
15 ("Venture Capital 100 for 1986"). In fact, 73 ofthe 100
largest funds were located in these three areas alone, with
the remainder scattered throughout the rest of the United
States.
The overall picture of the San Jose area that emerges
is one of a quintessential innovating region. It clearly
dominated the industry early on, particularly in the nonproduction areas that require highly educated, technical
personnel. The area was near universities with first-rate
electrical engineering programs, and had a highly educated population and relatively good access to venture
capital. Moreover, the high level of costs in the area, both
generally and in terms of labor for the electronic components industry, suggests that if the industry were moving
toward standardization during the late 1970s and early
1980s, firms would have had an incentive to move their
operations to lower-cost regions.
Most of the other areas listed in Table 2 had some of the


characteristics of an innovating region. In particular, nine
out of ten had a higher percentage of college graduates than
the nation, and seven out of ten had a higher percentage of
high school graduates. But the characteristics of the other
MSAs are not as striking as those of San Jose. For example,
in or adjacent to each of these metropolitan areas there is at
least one university with an electrical engineering program. However, the only top-20 programs in areas producing electronic components are in San Jose (Stanford), Los
Angeles/Long Beach (UCLA, USC, and Cal Tech), and
Boston (MIT).5 Nevertheless, most of the others have top-ranked electrical engineering departments within a few
hours' drive. Anaheim is adjacent to the Los Angeles area
and its universities. Princeton is within 50 miles of both
Philadelphia and New York. Similarly, Purdue and the
University of Illinois at Urbana-Champaign, both top-ranked departments, are within 150 miles of Chicago.
Dallas/Fort Worth is almost 200 miles from the University
of Texas at Austin. Among the most important metropolitan areas for producing electronic components, only
Phoenix does not have a top-ranked electrical engineering
department within a few hundred miles.6
If there were an incentive to shift activity to lower-cost
regions since 1977, costs in these cities would be expected
to have been relatively high. Table 2 shows that, for these
ten cities as a group, labor costs in the electronic components industry were only slightly higher than the national
average. Indeed, the average annual salary for nonproduction electronic component workers actually was lower for
these cities than it was nationally. San Jose is the only
metropolitan area in this group with both production wages
and nonproduction salaries higher than the national average. In contrast, personal income per capita, a more
general measure of the MSA's level of costs, was substantially higher in these areas than it was nationally. Taken
together, these figures suggest that, even if production and
nonproduction activities became less closely linked over

5According to the author's calculations based on information provided by
the Conference Board of Associated Research Councils (1982), the top
twenty electrical engineering programs were: MIT (Cambridge, MA),
Stanford (Stanford, CA), Illinois (Urbana/Champaign, IL), California
(Berkeley, CA), UCLA (Los Angeles, CA), USC (Los Angeles, CA),
Purdue (West Lafayette, IN), Maryland (College Park, MD), Cornell
(Ithaca, NY), Carnegie-Mellon (Pittsburgh, PA), Ohio State (Columbus, OH), Michigan (Ann Arbor, MI), Wisconsin (Madison, WI), Texas
(Austin, TX), Rensselaer (Albany, NY), Princeton (Princeton, NJ), Cal
Tech (Pasadena, CA), Florida (Gainesville, FL), UCSD (San Diego,
CA), and UCSB (Santa Barbara, CA).
6The University of Arizona, in Tucson, is 34th of the 91 ranked electrical
engineering departments; Arizona State in Tempe (a Phoenix suburb)
ranks 57th.


time, as the product life cycle theory suggests, the potential cost savings from shifting electronic components activity elsewhere may be relatively modest, except in the San Jose area.

Changes within the U.S.
Around the late 1970s and early 1980s, the personal
computer became an important fixture in offices and
universities. With the huge growth in the industry, demand
for components increased enormously. Given these changes
in the industry, the characteristics of the regions that were
important producers of electronic components in 1977, and
the discussion of the changes that occur over a product's life
cycle, we would expect to see a significant reduction in the
importance of the San Jose area over time. In contrast, we
would expect to see much less dramatic changes in the
patterns among the other technology-oriented areas listed in
Table 2.

To see whether such changes have in fact occurred, Table
3 provides similar data for the ten most important areas in
the industry in 1987.7 Contrary to expectations based on the
product life cycle theory, San Jose's share of national
employment in the electronic components industry rose
from 10.1 percent in 1977 to 11.5 percent in 1987. Moreover,
San Jose became more dominant in both the production and
nonproduction parts of the industry. The San Jose area
accounted for 14.4 percent of the nation's nonproduction
workers in 1977 and 15.4 percent in 1987; the area's share of
total production workers rose from 8.1 percent to 9.0
percent during the same period.
7. Since educational attainment data were available for only one year, the education figures in Table 3 are identical to those in Table 2.

Table 3
Characteristics of Regions Producing Electronic Components, 1987

                                Percent of U.S. Employment          Per Capita   Production    Nonprod'n       High School    4+ Years of
                                SIC 367                             Personal     Worker Wage   Worker Salary   Graduates(a)   College(a)
MSA                             Total   Prod'n   Nonprod'n   All    Income ($)   ($/Hour)      ($/Year)        (%)            (%)

San Jose, CA                    11.5     9.0      15.4       0.8     21,547       11.60         38,701          79.5           26.4
Los Angeles/Long Beach, CA       5.1     5.8       4.0       3.9     17,680        9.39         32,535          69.8           18.5
Phoenix, AZ                      4.1     3.4       5.1       0.9     16,064        8.36         32,090          75.0           18.3
Anaheim/Santa Ana, CA            3.7     4.0       3.3       1.1     21,405        9.72         34,458          80.4           22.6
Boston, MA                       3.5     3.4       3.6       1.7     20,330        9.44         31,936          77.2           24.7
Chicago, IL                      3.1     3.6       2.3       3.0     17,662        8.19         30,360          67.5           18.5
Dallas/Fort Worth, TX            2.9     2.8       3.2       1.8     16,998        9.91         39,406          70.0           20.2
Nassau, NY                       2.2     2.3       2.1       1.1     22,139        8.82         34,311          75.8           20.9
San Diego, CA                    1.8     1.9       1.6       0.8     16,658        8.98         36,114          78.0           20.9
Minneapolis, MN                  1.4     1.5       1.1       1.3     18,205        9.20         36,087          79.9           21.9
Ten MSAs                        39.3    37.7      41.6      16.3     18,490        9.71         35,585          75.3           21.3
U.S.                             100     100       100       100     15,511        9.32         34,751          68.6           17.0

(a) Education data are for 1980.


Changes among the other producing cities also were modest. One change is that the composition of the list varies slightly between the two years. In 1987, New York and Philadelphia moved down to ranks 12 and 11, respectively, while San Diego and Minneapolis moved into the top ten. Chicago fell from number 2 to number 6, but other changes in rank within the top ten were small.
Taken together, the top ten MSAs accounted for 39
percent of national employment in the electronic components industry in 1987, down from 41 percent in 1977. The
ten cities as a group also accounted for a smaller share of
nonproduction employment in 1987 (42 percent) than they
did in 1977 (46 percent). This change is consistent with the
notion that technological expertise might diffuse or become less important as the product progresses through its
life cycle. The change in share for production workers,
however, was quite small, from 39 to 38 percent. This
small change tends to contradict the notion that firms are
moving production activities from their early centers to
other, lower-cost locations within the United States.
An additional prediction of the product life cycle theory
is that production and nonproduction activities become
less closely linked over time, as production processes
become more standardized. To see whether this pattern has emerged within the U.S., I compute simple correlations between each MSA's share of U.S. production employment and its share of nonproduction employment for each year, using the entire sample of MSAs.8 If the linkage between the two has weakened, the correlation coefficient should shrink over time.
In 1977, the correlation coefficient between MSAs'
shares of national production employment in SIC 367 and
their shares of nonproduction employment was quite high,
at 0.914. In 1987, the correlation was even higher, at
0.927. These figures suggest that, within the U.S., the
linkage between production and nonproduction activities
remains strong. This result contradicts the expectations
based on the product life cycle theory.
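To make the calculation concrete, the sketch below (in Python) computes the cross-MSA correlation between production and nonproduction employment shares for a single year. The employment counts and the helper functions (shares, correlation) are hypothetical placeholders rather than the Census of Manufactures data or code used in this study; only the arithmetic of converting employment counts to national shares and correlating the two series is meant to mirror the procedure described above.

```python
# A minimal sketch of the cross-MSA share correlation described in the text.
# The employment counts below are hypothetical placeholders, not the Census
# of Manufactures figures used in the study.

def shares(counts):
    """Convert raw employment counts into shares of the national total."""
    total = sum(counts)
    return [c / total for c in counts]

def correlation(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical SIC 367 employment by MSA for one year.
production_emp = [18000, 11000, 7000, 6500, 3000]      # production workers
nonproduction_emp = [14000, 4000, 4500, 3200, 1000]    # nonproduction workers

prod_shares = shares(production_emp)
nonprod_shares = shares(nonproduction_emp)

# A coefficient near 1 means MSAs with large shares of production employment
# also hold large shares of nonproduction employment; a shrinking coefficient
# across years would signal a weakening link.
print(round(correlation(prod_shares, nonprod_shares), 3))
```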
The Census of Manufactures data run only through
1987, and data for SIC 367 are not available for most MSAs
for non-census years. However, intercensal data on SIC 36, electric and electronic equipment, the 2-digit category that includes SIC 367, are available for many MSAs.
For MSAs in which SIC 36 data are available, the shares
of the top ten cities remained relatively stable from 1977 to
1987, following the pattern seen in SIC 367 (see Appendix).
The share dropped off sharply between 1987 and 1991, from 34.6 percent to 24.6 percent. However, over half of the drop-off in SIC 36 between 1987 and 1991 occurred in Los Angeles, where the share fell from 8.9 percent to 3.5 percent. Since the Bureau of Labor Statistics (BLS) does in fact report 3-digit data for SIC 367 for Los Angeles, we can check whether SIC 367 is responsible for the sharp decline in Los Angeles' share of SIC 36 activity. The BLS numbers for SIC 367 reveal a much smaller decline in Los Angeles' share of electronic components employment, from 4.5 percent to 3.9 percent. Thus the evidence leaves open the possibility that dramatic changes have occurred in the electronic components industry (SIC 367) since the 1987 manufacturing census, but those changes probably were not as dramatic as the changes in SIC 36.
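The Los Angeles check amounts to simple share arithmetic, sketched below with assumed employment levels (placeholders chosen only to roughly reproduce the shares reported above, not the actual BLS series): compute the MSA's share of national employment in each classification for the two end years and compare the sizes of the declines.

```python
# A sketch of the Los Angeles share comparison described above. Employment
# levels are assumed placeholders, not actual BLS data; only the arithmetic
# (an MSA's share of the national total, before and after) is illustrated.

def share(msa_emp, national_emp):
    """MSA employment as a percentage of national employment."""
    return 100.0 * msa_emp / national_emp

# Hypothetical employment in thousands: (MSA, nation) pairs for 1987 and 1991.
sic36_1987, sic36_1991 = (185.0, 2080.0), (72.0, 2050.0)     # SIC 36 (2-digit)
sic367_1987, sic367_1991 = (25.0, 556.0), (21.0, 540.0)      # SIC 367 (3-digit)

drop_36 = share(*sic36_1987) - share(*sic36_1991)
drop_367 = share(*sic367_1987) - share(*sic367_1991)

# If the 3-digit decline is much smaller than the 2-digit decline, most of the
# SIC 36 drop-off occurred outside electronic components (SIC 367).
print(f"SIC 36 share fell {drop_36:.1f} points; SIC 367 share fell {drop_367:.1f} points")
```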

IV. CONCLUSIONS

This paper started with the observation that many are
concerned about shifts in electronic components activity
away from historical centers such as Silicon Valley. Such a
shift is consistent with the views of some economists who
argue that the factors affecting firms' location decisions
may vary during the course of a product's life cycle.
This study analyzed a variety of data at the international
and national level. Consistent with previous work by
Malecki (1985) and Park and Lewis (1991), the analysis
found little evidence to support the contention that the
product life cycle theory explains changes in the location
of electronic components activity within the U.S. In particular, the San Jose metropolitan area, which includes
the Silicon Valley, continues to play a dominant role in the
electronic components industry within the United States.
In contrast, an examination of the international data
revealed that the U.S. share of the total worldwide electronics market has fallen dramatically, and that nations
with lower costs and less developed technological infrastructures are gaining market share. This finding is
consistent with expectations based on the product life cycle
theory. Nevertheless, the U.S. does continue to play an
important role in the industry, suggesting that complete
standardization of the industry either has not yet occurred
or will never occur in the fast-changing world of high-tech
production.

8. The sample includes 44 MSAs for 1977 and 68 MSAs for 1987.


Appendix
MSA Employment as a Percentage of National Employment

MSA                             SIC         1977    1982    1987    1991

San Jose, CA                    367         10.1    11.7    11.5    n.a.
                                36           3.9     5.8     5.7     5.1

Chicago, IL                     367          5.4     4.2     3.1    n.a.
                                36           7.7     6.2     5.0     4.9

Los Angeles/Long Beach, CA      367          5.2     5.5     5.1    n.a.
                                367 (BLS)    4.4     4.2     4.5     3.9
                                36           7.1     8.2     8.9     3.5

Phoenix, AZ                     367          4.1     4.1     4.1    n.a.

Dallas/Fort Worth, TX           367          3.7     3.5     2.9    n.a.
                                36           2.8     3.3     4.2     3.7

Boston, MA                      367          3.4     3.5     3.5    n.a.
                                36           3.0     3.3     3.3     2.2

Anaheim/Santa Ana, CA           367          3.3     3.8     3.7    n.a.
                                36           2.7     3.1     3.6     2.1

Nassau, NY                      367          2.0     2.1     2.2    n.a.

New York, NY                    367          1.9     1.6     1.0    n.a.
                                36           2.4     2.0     1.6     1.2

Philadelphia, PA                367          1.8     1.4     1.0    n.a.
                                36           2.6     2.5     2.3     1.9

10 MSA Average                  367         38.3    39.1    36.7    n.a.
                                36          30.3    32.2    31.4    21.4

Note: Unless noted otherwise, SIC 367 data are from the Census of Manufactures, which runs only through 1987 (n.a. = not available). SIC 36 data are from the Bureau of Labor Statistics and are not available for Phoenix or Nassau.


REFERENCES

Armington, Catherine. 1986. "The Changing Geography of High-Technology Businesses." In Technology, Regions, and Policy, ed. John Rees. Totowa, NJ: Rowman & Littlefield.

Carlton, Dennis W. 1983. "The Location and Employment Choices of New Firms: An Econometric Model with Discrete and Continuous Endogenous Variables." The Review of Economics and Statistics (August) pp. 440-449.

Conference Board of Associated Research Councils. 1982. "Electrical Engineering Programs." In An Assessment of Research-Doctorate Programs in the United States: Engineering, pp. 69-82. Washington, DC: National Academy Press.

Dorfman, Nancy S. 1988. "Route 128: The Development of a Regional High Technology Economy." In The Massachusetts Miracle: High Technology and Economic Revitalization, ed. D. Lampe. Cambridge, MA: MIT Press.

Glasmeier, Amy K. 1986. "High-Tech Industries and the Regional Division of Labor." Industrial Relations 25 (Spring) pp. 197-211.

Landis, John, Cynthia Kroll, and Barbara Johnson. 1990. "Responses to High Housing Prices: Economies, Firms, and Households." Draft Report, Center for Real Estate and Urban Economics, University of California, Berkeley.

Malecki, Edward. 1985. "Industrial Location and Corporate Organization in High Technology Industries." Economic Geography 59, pp. 345-369.

Park, Siyoung, and Lawrence T. Lewis. 1991. "Developments in the Location of Selected Computer-Related Industries in the United States." Growth and Change (Spring) pp. 17-35.

Pollack, Andrew. 1992. "U.S. Chip Makers Stem the Tide in Trade Battles with Japanese." New York Times (April 9).

Rees, John, and Howard A. Stafford. 1986. "Theories of Regional Growth and Industrial Location: Their Relevance for Understanding High-Technology Complexes." In Technology, Regions, and Policy, ed. John Rees. Totowa, NJ: Rowman & Littlefield.

Saxenian, AnnaLee. 1990. "The Origins and Dynamics of Production Networks in Silicon Valley." Working Paper 516. Institute of Urban and Regional Development, University of California at Berkeley (April).

________. 1984. "The Urban Contradictions of Silicon Valley: Regional Growth and the Restructuring of the Semiconductor Industry." In Sunbelt-Snowbelt: Urban Development and Regional Restructuring, eds. Larry Sawers and William K. Tabb. Oxford: Oxford University Press.

Schmenner, Roger W. 1982. Making Business Location Decisions. Englewood Cliffs, NJ: Prentice-Hall, Inc.

Semiconductor Industry Association. 1990. 1989-90 Yearbook and Directory. Cupertino, CA.

U.S. Congress. Joint Economic Committee. 1982. Location of High Technology Firms and Regional Economic Development. Washington, DC: Government Printing Office.

U.S. Department of Commerce. International Trade Administration. 1990. The Competitive Status of the U.S. Electronics Sector: From Materials to Systems. Washington, DC: Government Printing Office.

Venture. 1987. "Venture Capital 100 for 1986." (August).

Vernon, Raymond. 1966. "International Investment and International Trade in the Product Cycle." Quarterly Journal of Economics (May) pp. 190-207.

Wasylenko, Michael, and Therese McGuire. 1985. "Jobs and Taxes: The Effects of Business Climate on States' Employment Growth Rates." National Tax Journal (December) pp. 497-511.