

Neural Network Model Forecasts of the Niño 3.4 SST

contributed by Benyang Tang1, William Hsieh1 and Fredolin Tangang2

1Department of Earth and Ocean Sciences, University of British Columbia, Vancouver, B.C., Canada

2Department of Marine Science, Faculty of Science & Natural Resources, National University of Malaysia

Web site of the UBC Climate Prediction Group: http://www.ocgy.ubc.ca/projects/clim.pred/



We present here a new neural network model (model 2.1), which has better forecast skill than our earlier series 1 models described in Tang et al. (1994) and Tangang et al. (1997). The series 1 models all used the tropical FSU wind stress data as the predictor, while the new model 2.1 uses the tropical COADS sea level pressure (SLP) data as the predictor. Two other minor changes are that (1) we no longer apply the detrending procedure used in our series 1 models, and (2) the domain of the predictor field has been reduced from 30S-30N to 20S-20N. These changes have led to an increase in forecast skill, especially at longer lead times: at a 12-month lead time, for example, the cross-validated forecast correlation skill of model 2.1 is 0.59, versus 0.37 for our previous series 1 model.

Here we give a brief description of our neural network model. We have implemented a procedure called bootstrap aggregating, or bagging, to increase the skill and stability of the forecasts. Bagging (Breiman 1997) works as follows. First, training pairs are formed, each consisting of the data at the initial time and at the forecast target time, which is a certain number of months later (the lead time). The available training pairs are separated into a training set and a test set; the test set is reserved for testing only and is not used for training. The training set is used to generate an ensemble of neural network models, where each member of the ensemble is trained on only a subset of the training set. The subset is drawn at random with replacement from the training set and has the same number of training pairs as the training set; some pairs therefore appear more than once in the subset, while about 37% of the training pairs are absent from it. The final model output is the average of the outputs from all members of the ensemble.
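
As a concrete illustration of the resampling step, here is a minimal sketch in Python. It assumes NumPy arrays X_train and y_train holding the training pairs, and it uses scikit-learn's MLPRegressor as a generic stand-in for the feed-forward networks actually used; the variable names, the library choice and the default settings are ours, not part of the original model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def bagged_forecast(X_train, y_train, X_test, n_members=30, seed=0):
    """Bootstrap-aggregated (bagged) ensemble of small feed-forward networks.

    Each member is trained on a bootstrap sample drawn with replacement
    from the training pairs; the ensemble forecast is the member average.
    """
    rng = np.random.default_rng(seed)
    n_pairs = len(y_train)
    forecasts = []
    for _ in range(n_members):
        # Draw a bootstrap subset the same size as the training set;
        # about 37% of the original pairs are left out of each draw.
        idx = rng.integers(0, n_pairs, size=n_pairs)
        net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000)
        net.fit(X_train[idx], y_train[idx])
        forecasts.append(net.predict(X_test))
    # Final model output: average over all ensemble members.
    return np.mean(forecasts, axis=0)
```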

The advantage of bagging is that it reduces the variance, or instability, of the neural network. The error surface of neural network training is full of local minima, and training runs with different initial weights and different training data usually end up trapped in different local minima. These local minima reflect partly the fitting of regularities in the data and partly the fitting of noise. Because the noise part varies among the ensemble members, bagging tends to cancel it out while retaining the fit to the regularities of the data. The ensemble in our bagging neural network model has 30 members.

The neural networks were trained with the Niño 3.4 index (calculated from the NOAA gridded SST data, ftp://nic.fb4.noaa.gov/pub/ocean/clim1/) and the COADS monthly sea level pressure (SLP) data of the tropical Pacific Ocean (Woodruff et al. 1987, http://www.cdc.noaa.gov/coads/). The gridded SLP data and the Niño 3.4 index were first averaged into 3-month means, and the SLP data were then reduced to a few EOF modes.
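
For readers who want to reproduce this preprocessing, the sketch below is one possible implementation with NumPy. Whether the 3-month averages are running or non-overlapping means is not specified above, so the running-mean choice here is an assumption, as are the function and variable names.

```python
import numpy as np

def running_3month_mean(x):
    """3-month running mean along the time axis (axis 0).

    Assumption: the original preprocessing may instead have used
    non-overlapping seasonal means.
    """
    kernel = np.ones(3) / 3.0
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="valid"), 0, x)

def leading_eofs(field, n_modes=7):
    """Reduce a gridded field (time x gridpoints) to its leading EOFs.

    Returns the EOF spatial patterns and the principal component (PC)
    time series of the first `n_modes` modes, computed by SVD of the
    anomaly field.
    """
    anomalies = field - field.mean(axis=0)      # remove the time mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    pcs = u[:, :n_modes] * s[:n_modes]          # PC (EOF) time series
    eofs = vt[:n_modes]                         # spatial patterns
    return eofs, pcs
```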

Three of the first seven EOF time series from the SLP data show considerable variability on decadal or longer time scales, with the third one showing a linear trend. We found that removing these trends significantly degrades the forecast skill beyond 6 months. We were surprised by this result, as the data contain only three or fewer cycles of these long-term variations, yet the neural network models seem able to capture their patterns.

The neural networks in this forecast have 31 inputs and 5 hidden neurons. The Niño 3.4 index and the first 7 SLP EOF time series at the initial month form the first 8 inputs. The first 7 EOF time series at 3, 6 and 9 months before the initial month are also used as inputs. The last 2 inputs are a sine and a cosine with a 12-month period, indicating the phase of the annual cycle.
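
The layout of these inputs can be written out explicitly. The following sketch assembles the 31 predictors for one initial month; the array names nino34 and pcs (the 3-month averaged Niño 3.4 index and the 7 leading SLP EOF time series) and the assumption that index 0 corresponds to a January are ours.

```python
import numpy as np

def build_inputs(nino34, pcs, t):
    """Assemble the 31 network inputs for initial month index t.

    nino34 : 1-D array of the (3-month averaged) Nino 3.4 index
    pcs    : 2-D array (time x 7) of the leading SLP EOF time series
    """
    # Phase of the annual cycle; assumes index 0 is a January.
    phase = 2.0 * np.pi * (t % 12) / 12.0
    return np.concatenate([
        [nino34[t]],                        # 1 input:  Nino 3.4 at t
        pcs[t],                             # 7 inputs: EOFs at t
        pcs[t - 3],                         # 7 inputs: EOFs 3 months earlier
        pcs[t - 6],                         # 7 inputs: EOFs 6 months earlier
        pcs[t - 9],                         # 7 inputs: EOFs 9 months earlier
        [np.sin(phase), np.cos(phase)],     # 2 inputs: annual-cycle phase
    ])                                      # 31 inputs in total
```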

We designed a cross-validation scheme to estimate the forecast skill of the neural network models. For each lead time from 3 to 15 months and for each given year from 1950 to 1997, the data from a 5-year window, starting in January of the given year, were withheld. Training pairs were composed from the remaining data. After the model was trained, 12 forecasts were made, initialized from 12 consecutive months starting in November of the year before the given year. The 5-year window was then moved forward by 12 months and the procedure repeated. The 5-year window balances two goals: avoiding the influence of the training data on the test data, and making efficient use of the available data. The forecast target month is at least 24 months away from the training data that follow it. The overlap between the forecast input data and the training data is legitimate, as it also occurs in real-time forecasting.
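
The bookkeeping of the moving window can be sketched as follows; the function name and the handling of the ends of the record are our assumptions.

```python
def cross_validation_windows(first_year=1950, last_year=1997, window=5):
    """Yield (withheld_years, forecast_init) for the moving test window.

    For each given year, a `window`-year block starting in January of that
    year is withheld from training, and 12 forecasts are initialized from
    consecutive months starting in November of the preceding year.
    """
    for year in range(first_year, last_year + 1):
        withheld_years = list(range(year, year + window))
        # (year, month) pairs of the 12 forecast initial months,
        # running from November of the prior year to October of this year.
        forecast_init = [(year - 1 + (10 + m) // 12, (10 + m) % 12 + 1)
                         for m in range(12)]
        yield withheld_years, forecast_init
```

For example, the window for the given year 1980 withholds 1980-1984 from training and initializes the 12 forecasts from November 1979 through October 1980.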

All the forecasts for a given lead time were collected to form a continuous Niño 3.4 forecast time series. The two panels of Fig. 1 compare the forecast Niño 3.4 at lead times of 6 and 12 months with the observed Niño 3.4 SST. The correlation coefficients between the forecast and observed SST were calculated as skill measures for each test decade, as well as for the whole period. The results are listed in Table 1.
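
The skill measure itself is the Pearson correlation over each test period. A sketch of the calculation behind Table 1 is given below; the array names are ours, and the decade boundaries are taken from the table.

```python
import numpy as np

def correlation_skill(forecast, observed):
    """Pearson correlation between forecast and observed Nino 3.4."""
    return np.corrcoef(forecast, observed)[0, 1]

def decadal_skills(forecast, observed, years,
                   periods=((1960, 1969), (1970, 1979),
                            (1980, 1989), (1990, 1997), (1960, 1997))):
    """Correlation skill for each test period in Table 1.

    forecast, observed : 1-D arrays of the forecast and observed index
    years              : 1-D array giving the verification year of each value
    """
    skills = {}
    for start, end in periods:
        mask = (years >= start) & (years <= end)
        skills[f"{start}-{end}"] = correlation_skill(forecast[mask],
                                                     observed[mask])
    return skills
```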

Figure 2 shows the forecasts at lead times of 3, 6, 9, 12 and 15 months, using data up to November 1997. The forecasts indicate that the current warm condition will return to normal in early or mid-1998, and turn into a La Niña event by the end of 1998.



Table 1. The test correlation skills for different test periods for 6 lead times.

Test Period   3-mo lead   6-mo lead   9-mo lead   12-mo lead   15-mo lead   18-mo lead
1960-1969     0.84        0.69        0.70        0.65         0.46         0.33
1970-1979     0.89        0.76        0.69        0.62         0.55         0.36
1980-1989     0.87        0.73        0.68        0.66         0.65         0.64
1990-1997     0.71        0.51        0.27        0.22         0.31         0.07
1960-1997     0.856       0.723       0.648       0.585        0.512        0.367






References

Tang, B., 1995: Periods of linear development of the ENSO cycle and POP forecast experiments. J. Climate, 8, 682-691.

Tangang, F.T., W.W. Hsieh and B. Tang, 1997: Forecasting the equatorial Pacific sea surface temperatures by neural network models. Climate Dynamics, 13, 135-147.

Woodruff, S.D., R.J. Slutz, R.L. Jenne, and P.M. Steurer, 1987: A comprehensive ocean-atmosphere data set. Bull. Am. Meteorol. Soc., 68, 1239-1250.

Work in progress:

Breiman, L., 1997: Bagging predictors. Machine Learning, in press. Available at ftp://stat.berkeley.edu/users/pub/breiman

Tangang, F.T., W.W. Hsieh, and B. Tang, 1998: Forecasting regional sea surface temperatures in the tropical Pacific by neural network models, with wind stress and sea level pressure as predictors. J. Geophys. Res., in press.

Tangang, F.T., B. Tang, W.W. Hsieh, and A. Monahan, 1998: Forecasting ENSO events: a neural network approach. J. Climate, 11, in press.

Fig. 1. Continuous Niño 3.4 forecast time series (circles) using the model 2.1 neural networks in cross-validation mode at leads of 6 months (top) and 12 months (bottom). The observed SST is indicated by the solid line. Correlation coefficients between the forecast and observed SSTs are shown in Table 1.

Fig. 2. Current forecasts for Niño 3.4 SST using the neural networks at 3-, 6-, 9-, 12- and 15-month lead times.


