The importance of computer models in assessment studies cannot be overstated. If we think of Earth as one large laboratory, then the interaction of human beings and the planet is our experiment. The release of CFCs and other gases is an experiment in that laboratory that will have long-term, as yet unknown consequences on biological processes and the overall biosphere. Like it or not, that experiment has already begun: we have released a large quantity of chlorine and carbon dioxide (to name but two of many species) into the atmosphere. Their ultimate impact on the Earth system balance and the future health of the planet is a topic of some debate.
Because the time scale over which global climate change occurs is long compared to the life expectancy of humans, the debate could not be settled quickly if we had no way of forecasting likely future trends. Ideally, we would like to speed up the passage of time to get the answer. If we knew, for example, that continued release of carbon dioxide (CO2) at present rates would lead to an 8°C warming by 2020, we would be inspired to take steps to reduce CO2 emissions, even if those steps entailed high and disruptive economic costs, since the alternative is so unacceptable (melting of the polar ice caps and dramatic global climate change). On the other hand, if we knew for sure that the temperature would warm only slightly, a degree or two, then there would be no real need for such costly actions. Similarly, we would like to know the potential impact on ozone of industrial emissions of CFCs, reactive nitrogen species, and other ozone-depleting chemicals. By exploring different scenarios for the rates at which such ozone-depleting chemicals are pumped into the atmosphere, and the resulting changes in global ozone levels, society (all of us) can make appropriate political and economic decisions.
Ozone concentrations in the stratosphere are controlled by a complicated combination of physical and chemical processes, as discussed in all of the previous chapters. Before we can trust a computer model to predict the future realistically, we must include in it all the chemical and physical processes we know to be important in the atmosphere. We then test our models by hindcasting, or "predicting the past," to see how well the model output compares to actual historical data. If the model does not accurately represent the past, then we have reason to doubt its forecast of the future. In addition to the computational constraints imposed by the computers we use, there is the conceptual constraint imposed by our lack of full understanding of the chemical and dynamical processes underway in the real atmosphere. To the extent that we do not completely understand the chemistry and dynamics, our models will be in error. By continually comparing model output to actual data, we can improve our models and our conceptual understanding of the behavior of the real atmosphere.
Computer models translate theoretical ideas into specific predictions that can be compared with observations. When the model predictions agree with observations, the underlying theory is supported. When they disagree, there are three possible reasons: (1) the underlying theory is incorrect; (2) the theory is correct but the implementation of the theory in the model fails to lead to an accurate prediction; or (3) the observations themselves are incorrect, though this last possibility is much rarer today than it was in the past. Determining which of these possibilities explains a discrepancy between a model prediction and an observation is a primary task for atmospheric scientists, and the process of doing so is often referred to as atmospheric modeling.
2.1.1 Model extrapolation and impact assessments -- Building a model involves writing the complex computer code that represents different aspects of the atmosphere, either through explicit mathematical equations or simplifying numerical assumptions referred to as parameterizations. Such parameterizations arise from both computational constraints and conceptual limitations. Once the model is constructed and successfully run, its output is compared against actual observations. In this way, scientists evaluate the accuracy of the model.
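The idea of a parameterization can be made concrete with a toy sketch. The function below stands in for a process too small to resolve explicitly, using the text's example of the daily moisture given off by a forest; the linear form and every coefficient here are invented purely for illustration, not taken from any real model.

```python
# Illustrative sketch of a parameterization: an unresolved physical process
# is replaced by a simple empirical formula. The linear fit and its
# coefficients below are hypothetical, chosen only for demonstration.

def forest_moisture_flux(temp_c: float) -> float:
    """Parameterized daily moisture flux (mm/day) for a forested grid cell.

    A full model would resolve canopy physics explicitly; here a
    made-up linear function of surface temperature stands in for it.
    """
    base_flux = 2.0   # mm/day at 0 degrees C (assumed value)
    slope = 0.15      # mm/day per degree C (assumed value)
    return max(0.0, base_flux + slope * temp_c)

# Under this toy scheme, a warmer cell gives off more moisture.
print(forest_moisture_flux(20.0))
```

The point is not the formula itself but the trade: a cheap, approximate expression replaces a process the model cannot afford to compute from first principles.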
After the model is "validated," it can be used to make predictions at times and in places where no observations of the atmosphere exist. This process is known as extrapolation. An example of extrapolation is as follows: environmental scientists are interested in determining the impact of a hypothetical fleet of supersonic commercial aircraft that would fly in the stratosphere. Such a fleet does not currently exist, but we can assess its impact through the use of our computer models. Rockets and the space shuttle also inject gases into the stratosphere that affect ozone. Our computer models allow us to determine the local and global impact of these emissions. These sorts of studies are typically referred to as impact assessments.
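The structure of an impact assessment can be sketched in a few lines: run the same model under several emission scenarios and compare the outcomes. The "model" below is a hypothetical one-line sensitivity, and the scenario names and numbers are invented; a real assessment would run a full chemistry model.

```python
# Illustrative sketch of an impact assessment: the same (toy) model is run
# under different emission scenarios. The sensitivity factor and all
# scenario values are hypothetical, for demonstration only.

def ozone_change(annual_emission: float, years: int,
                 sensitivity: float = -0.02) -> float:
    """Percent change in column ozone after `years` of constant emissions.

    `sensitivity` (percent ozone per emission unit per year) is an
    invented number standing in for a full model's response.
    """
    return sensitivity * annual_emission * years

# Hypothetical fleet scenarios (emission units per year).
scenarios = {"no fleet": 0.0, "small fleet": 1.0, "large fleet": 5.0}
for name, emission in scenarios.items():
    print(f"{name}: {ozone_change(emission, years=25):+.1f}% ozone change")
```

The value of such a study lies in the comparison between scenarios, which lets decision makers weigh each projected ozone loss against the economic case for the fleet.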
2.1.2 Ozone Hindcasting -- A number of processes influence ozone concentrations. Some are natural, such as cyclical sunspot activity and irregular volcanic eruptions; others are anthropogenic (manmade), such as CFC emissions, aircraft exhaust, and the emission of other gases associated with human activity such as methane (CH4), nitrous oxide (N2O), and methyl bromide (CH3Br). All of these processes have led to net changes in ozone concentrations over time, and they are expected to continue to do so in the future. To be able to predict the state of the ozone layer 25 years in the future, however, we need reliable computer models. To determine the reliability of computer models over such long time scales, scientists frequently resort to hindcasting, which we've defined above as "predicting the past." It is the opposite of forecasting, or "predicting the future." In hindcasting studies, models are initialized with data representative of the state of the atmosphere in the past, for example, 25 years ago. These models are then run "forward in time" to the present and compared to the actual atmospheric data that exist over that time period. The extent to which the model agrees with the past observational data helps determine our confidence in the model's ability to predict the future.
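The hindcasting procedure described above can be sketched as a three-step recipe: initialize a model with a past state, run it forward to the present, and score its output against the historical record. Everything below is a toy stand-in: the linear-trend "model," the Dobson-unit values, and the observed record are all invented for illustration.

```python
import math

# Illustrative hindcast sketch: initialize a toy model in the past, run it
# forward, and compare against historical observations. The model, trend,
# and observed values are all hypothetical.

def toy_model(initial_ozone: float, years: int,
              trend: float = -0.3) -> list[float]:
    """Yearly column-ozone values (Dobson units) from an assumed linear trend."""
    return [initial_ozone + trend * t for t in range(years + 1)]

def rmse(predicted: list[float], observed: list[float]) -> float:
    """Root-mean-square error between model output and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

# Hypothetical observed record, one value per year, starting 5 years ago.
observed = [300.0, 299.6, 299.4, 299.1, 298.6, 298.4]

# Step 1-2: initialize with the past state and run forward to the present.
hindcast = toy_model(initial_ozone=observed[0], years=5)

# Step 3: score the hindcast; a small error builds confidence in forecasts.
print(f"hindcast RMSE: {rmse(hindcast, observed):.2f} DU")
```

A single error number like the RMSE here is, of course, a crude summary; real hindcast evaluations compare spatial patterns and seasonal cycles as well, but the logic is the same.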
It is important to note at the outset that no one model is appropriate for all of the tasks we would like to assign to it. The ideal atmospheric model would incorporate all the dynamical, chemical, and radiative processes at all levels from the surface of Earth to the top of the atmosphere, with fine enough spatial detail (referred to as the resolution) to capture explicitly the processes that are ordinarily too small to represent mathematically and hence are parameterized (such as the amount of moisture given off daily by a forest). It would accurately capture the full range of land-sea-air interactions. Even if all these processes were fully understood conceptually, which is not the case, the computational power and time needed for such a model are often prohibitively expensive, or even simply unavailable. It is for this reason that atmospheric scientists instead utilize a variety of simpler models, each constructed to address a particular problem. This is why there are several different types of models, each of which incorporates a different set of assumptions. In the next chapter, we will explore in more detail the different types of models in use.