In these studies, our intent is to calculate the ozone trend in a given region over a given time, but in practice we can only estimate the real ozone trend given our limited information. Uncertainties in our calculations, model, and data prevent us from deriving the true depletion. We have already discussed how we construct our model to best fit the data, and mentioned that the estimated trend uncertainty is related to the unexplained variability, or residual term. This uncertainty is called the statistical uncertainty and is denoted by σs. It is a measure of the goodness of our model fit to the data. But this is only half the story. We must also include the instrument uncertainty, denoted σi, in our error estimates. This is the uncertainty of the measurements themselves, owing to inherent limits in the accuracy of the instruments used to obtain the data. To get the final trend uncertainty, σ, we combine the statistical and instrumental uncertainties. There are several ways to do this, including simply adding the separate terms (σ = σs + σi). This is the worst case scenario, because it implies that both sources of error affect the measurements in the same direction. The best case scenario would be if the error sources acted in opposite directions, such that they canceled. In our calculations, we assume a solution between these extreme cases: we combine the uncertainty terms using a root sum of squares. That is,

σ = √(σs² + σi²)
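As a minimal numerical sketch of this combination (the values and function name here are hypothetical, chosen only for illustration), the root sum of squares always falls between the best case (full cancellation) and the worst case (simple sum):

```python
import math

def combine_uncertainties(sigma_s, sigma_i):
    """Combine statistical and instrument uncertainties by root sum of squares."""
    return math.sqrt(sigma_s**2 + sigma_i**2)

# Hypothetical uncertainties, in % per decade
sigma_s = 1.2  # statistical uncertainty, from the model residual
sigma_i = 0.9  # instrument (calibration drift) uncertainty

sigma = combine_uncertainties(sigma_s, sigma_i)   # root sum of squares: 1.5
worst_case = sigma_s + sigma_i                    # simple sum: both errors aligned
print(sigma, worst_case)
```

Note that the combined value (1.5% per decade) is smaller than the simple sum (2.1% per decade) but larger than either term alone.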
The uncertainty, σ, is expressed as an interval about the model estimated trend value. That is,

trend = (model estimated trend) ± σ
The model trend result is not complete without an uncertainty estimate. What we are calling the uncertainty estimate may also be called the standard error, or statistical error. In any statistical calculation, this quantity is necessary to interpret the results. Political polls are a common example: in addition to the percentage of the vote carried by each candidate, a statistical uncertainty term is always quoted.
According to statistical theory (see suggested readings), there is a 68% probability that the correct trend is within ±σ of the estimated trend. This interval is the 1σ uncertainty level. There is a 95% probability that the correct answer lies within ±2σ of the estimated trend. This is the 2σ uncertainty level. There is a 99.7% probability that the correct answer lies within ±3σ of the estimated trend, and so on. Clearly, as we widen our range, the chance that the true answer is within that range increases.
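These coverage probabilities follow from the normal (Gaussian) error distribution assumed by the theory. The familiar 68-95-99.7 rule can be checked directly with the standard error function:

```python
from math import erf, sqrt

def coverage(level):
    """Probability that a normally distributed error falls within +/- level*sigma."""
    return erf(level / sqrt(2.0))

print(round(coverage(1), 3))   # 0.683 -> the 1-sigma level
print(round(coverage(2), 3))   # 0.954 -> the 2-sigma level
print(round(coverage(3), 4))   # 0.9973 -> the 3-sigma level
```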
As an example, if our estimated trend is -6% per decade and the uncertainty estimate is 1% per decade, the trend at the 1σ uncertainty level is -6 ±1% per decade. That is, there is a 68% chance that the actual trend lies within the range of -7 to -5% per decade. The trend at the 2σ uncertainty level is -6 ±2% per decade: there is a 95% chance that the actual trend lies within the range of -8 to -4% per decade. When a trend result is given with an uncertainty range, the uncertainty level must also be stated.
In our discussion, we refer to trends as being statistically significant at the 2σ level. This means that the trends are statistically different from a zero trend at the 2σ uncertainty level, or put another way, a zero trend is not included in the uncertainty interval at the 2σ level. If the uncertainty estimate in our example above were 4% per decade, then the trend at the 1σ level would be -6 ±4% per decade and the trend at the 2σ level would be -6 ±8% per decade. The first interval does not include zero, but the second interval does. Therefore, we say this trend is statistically significant at the 1σ level, but not statistically significant at the 2σ level.
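The significance test above is just interval arithmetic, and can be sketched as follows (the helper names are our own; the numbers are the example's hypothetical trend of 6% per decade depletion with a 4% per decade uncertainty):

```python
def trend_interval(trend, sigma, level):
    """Return the (low, high) uncertainty interval at the given sigma level."""
    half_width = level * sigma
    return (trend - half_width, trend + half_width)

def significant(trend, sigma, level):
    """A trend is significant at this level if the interval excludes zero."""
    low, high = trend_interval(trend, sigma, level)
    return not (low <= 0.0 <= high)

trend = -6.0  # % per decade (depletion)
sigma = 4.0   # % per decade

print(trend_interval(trend, sigma, 1))  # (-10.0, -2.0): excludes zero
print(significant(trend, sigma, 1))     # True  -> significant at 1 sigma
print(trend_interval(trend, sigma, 2))  # (-14.0, 2.0): includes zero
print(significant(trend, sigma, 2))     # False -> not significant at 2 sigma
```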
Many factors can contribute to statistical error. The ozone time series contains variations that are not included as terms in the model, and if these variations mimic the expected trend, we may incorrectly attribute them to ozone depletion. In addition, we are assuming a linear trend based on what we see in the data (e.g., Figures 9.01 and 9.02), but the true depletion may not be exactly linear. Finally, our ability to calculate the true trend is also limited by the finite number of independent pieces of information (data) that we have.
Correctly estimating the statistical uncertainty involves some detailed statistical computations. Therefore, we will only qualitatively discuss the factors that can affect the uncertainty in this section. The details of computing statistical uncertainty intervals can be found in statistical textbooks, including Mendenhall and Scheaffer (1973), Box and Jenkins (1976), and Neter et al. (1985).
We use the residual from our statistical regression model to estimate the size of the statistical uncertainty for each component in the model. In this discussion, we are primarily concerned with the uncertainty of the long-term, seasonally varying trend component. This uncertainty primarily depends on three things.
4.1.2 Explaining coherent patterns in the residuals

Our regression model has been used to account for ozone variability on scales from 3 months to 11 years. If variability at these scales has been correctly accounted for, then theory says that the statistical trend uncertainty is a simple function of the magnitude of the random variations, the number of model components, and the number of data points in the residual. [For more information, the reader is referred to the definition of standard error in statistics textbooks on linear regression.]
There are many more sources of variability than what we have accounted for in our model, including ENSO effects, volcanic eruptions, and other internal dynamical processes and external forcings on the atmosphere. We also know that our proxy time series do not exactly capture the actual variations of the ozone time series. Therefore, coherent signals remain in the residual from our model fit. The statistical term for this is a correlated residual. Examples of coherent structure in the residuals (bottom panels) can be seen in the series of plots in Figure 9.06a-d. In the 30°N-50°N model residual (red line in Figure 9.06a), for instance, we see that there are many periods where the residual is positive for an extended period, and then negative for an extended period. The residual is above zero for extended periods in early 1986, mid-1987, and early 1991. Even the global model residual, which is inherently less variable, shows extended periods of positive or negative deviations, particularly in 1987 (positive residual) and 1991-1992 (negative residual).
The statistical uncertainty depends on the number of independent pieces of information we have available. When coherent cycles remain in the residual, not all of the data points in the residual are independent. That is, the residual at one point in time is related to the residual at surrounding points. Therefore, the number of independent pieces of information is less than the total number of data points, and the statistical uncertainty increases. There are several techniques for calculating the statistical uncertainty for a correlated residual. More information can be found in Neter et al. (1985), Efron (1982) and Miller (1974).
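One common approximation of this effect, and not necessarily the exact method used in these studies, treats the residual as a first-order autoregressive (AR(1)) process with lag-1 autocorrelation φ, which reduces the effective number of independent points by a factor of (1 − φ)/(1 + φ). A sketch with hypothetical numbers:

```python
def effective_sample_size(n, phi):
    """Approximate number of independent points in an AR(1) residual
    with lag-1 autocorrelation phi (standard approximation: n*(1-phi)/(1+phi))."""
    return n * (1.0 - phi) / (1.0 + phi)

# Hypothetical: 20 years of monthly residuals with moderate autocorrelation
n = 240
phi = 0.5
print(effective_sample_size(n, phi))  # 80.0: only a third of the points
                                      # act as independent information
```

With fewer effective data points, the standard error of the fitted trend grows, which is why a correlated residual widens the uncertainty interval.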
After a satellite instrument is launched, it is very difficult to physically check how it is performing. Instruments in space slowly degrade over time, causing the data to slowly change, or drift, over time. If the drift is large enough, our trend calculations will be incorrect. Therefore, we must determine how to correct the data to account for these instrument changes. We call this instrument calibration. Instruments are complicated, and it is impossible to perfectly calibrate an instrument. Therefore, each data set has an instrument uncertainty (σi) associated with it, based on the uncertainty of the instrument calibration. When calculating the long-term trend, we are mostly concerned with any long-term drift of the instrument that could mimic a trend in the data. Therefore, σi is the long-term drift uncertainty of the calibration, and typically has units of percentage change per unit time.
As we calculate trends over longer time periods, we must use data from different instruments. This introduces the additional difficulty of maintaining and comparing the calibrations of several instruments. Not only can each instrument calibration vary in time, but each instrument may be slightly different to start with. All instrumental effects must be accounted for to create a consistent time series over the full time period. To accurately estimate the long-term trend in a multi-instrument ozone time series, we must know the relative calibration of the instruments from the beginning to the end of the time period to within ~1% per decade.
Determining the calibration of an instrument requires long-term analysis. Many instruments have onboard calibration systems to aid in determining the calibration. The calibration for satellites without this feature (or for satellites on which the calibration system failed) must be inferred from the available data. Often the changes are subtle and difficult to distinguish from real changes in the ozone. This makes determining the instrument calibration a difficult task. Therefore, data products from an instrument are released in versions, where a new version tracks changes in the algorithm or calibration. The results presented in this chapter use the latest version of each data set currently available. However, keep in mind that these studies are always being updated as new data and new versions become available.
