There are a variety of techniques used for change detection, but all of them compare, directly or indirectly, one image with others acquired on different dates. Key to these comparisons is careful processing of the image set so that the images are precisely and accurately spatially registered to one another; this reduces spurious differences caused by poor or absent registration. In this section some of the more common techniques are discussed. Keep in mind that there are several other techniques in use that will not be discussed here (see the suggested readings for references to information about other techniques).
The simplest way to compare two or more images is to perform some sort of image calculation: pixel values from the input images are combined in a formula to produce a new image whose pixels contain the results of the calculation. For example, the first pixel on line 1 of the first image can be subtracted from the first pixel on line 1 of the second image, and the result written to the first pixel on line 1 of a third image (see Chapter 4, Figure 4.05). Three types of calculations will be discussed -- subtraction, ratio, and regression.
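The pixel-by-pixel arithmetic described above can be sketched in a few lines of NumPy; the images, digital numbers, and 3 x 3 window size here are hypothetical, purely for illustration:

```python
import numpy as np

# Two co-registered single-band "images" from different dates
# (hypothetical 8-bit digital numbers in a 3 x 3 window).
image_date1 = np.array([[100, 102,  98],
                        [101, 140,  99],
                        [100, 100,  97]], dtype=np.int16)
image_date2 = np.array([[101, 103,  97],
                        [102,  60, 100],
                        [ 99, 101,  98]], dtype=np.int16)

# Subtract pixel by pixel: each output pixel is date2 minus date1,
# written to the corresponding position of a third image.
difference = image_date2 - image_date1
```

Here the large negative value at the center pixel (140 dropping to 60) is exactly the kind of result the calculation is designed to surface, while stable pixels yield values near zero.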
3.1.1 Differences and ratios -- Results from image subtraction range from negative to positive, and thresholds can be applied to identify where land cover has changed. Thresholds for identifying change are usually set by trial and error. The subtraction technique assumes that a change in land cover produces a change in reflectance or in an index value, which in many cases holds true. Another simple technique is to divide (ratio) two images. Like image subtraction, ratioing makes the same assumption about changes in reflectance. Resultant pixel values vary around 1, and ratios highlight extreme changes more strongly than differences do.
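A minimal sketch of both techniques follows, with hypothetical digital numbers and trial-and-error thresholds (the cutoffs 30, 0.75, and 1.33 are illustrative assumptions, not recommended settings):

```python
import numpy as np

# Hypothetical red-band digital numbers for the same co-registered
# scene on two dates; the center pixel changed markedly.
date1 = np.array([[120., 118., 122.],
                  [119.,  40., 121.],
                  [120., 119., 118.]])
date2 = np.array([[122., 119., 120.],
                  [118., 130., 119.],
                  [121., 120., 119.]])

difference = date2 - date1   # stable pixels spread around 0
ratio = date2 / date1        # stable pixels cluster around 1

# Thresholds are typically set by trial and error for the scene at hand:
changed_by_diff = np.abs(difference) > 30
changed_by_ratio = (ratio < 0.75) | (ratio > 1.33)
```

Both masks flag only the center pixel; note how its ratio (130/40 = 3.25) stands much further from 1 than the stable pixels' ratios, reflecting the tendency of ratios to emphasize extreme change.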
3.1.2 Regression -- When evaluated in aggregate over a study area, image regression yields parameters (a and b in the following formula) that adjust the more recent image's pixel values to reduce the effects of differences in atmospheric conditions and sun angle. A subsequent processing run then computes a change measure for each pixel in the study area. Image regression follows the general formula: eij = Yij - (a + b * Xij), where Yij is the value of the pixel in line i and column j of the more recent image; Xij is the value at the same pixel location in the earlier image; a is the intercept; b is the slope, which adjusts for the difference in variance; and eij is the residual of the regression. The further eij is from zero, the greater the likelihood that the area represented by that pixel has changed between the two dates.
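The two-pass procedure can be sketched as follows. The images are synthetic: the more recent one is brightened and rescaled to mimic a different sun angle and atmosphere, and one pixel is genuinely changed. NumPy's `polyfit` stands in for whatever least-squares routine is actually used:

```python
import numpy as np

# Older image: a simple gradient of hypothetical digital numbers.
older = np.tile(np.array([50.0, 100.0, 150.0, 200.0]), (4, 1))

# More recent image: same scene, but brighter and with higher gain,
# as if acquired under a different sun angle and atmosphere.
recent = 10.0 + 1.2 * older

# One pixel genuinely changed between the dates:
recent[0, 0] += 60.0

# Pass 1: fit recent = a + b * older over the whole study area.
b, a = np.polyfit(older.ravel(), recent.ravel(), 1)

# Pass 2: residuals e_ij = Y_ij - (a + b * X_ij).
# A large |e_ij| suggests real land cover change rather than a
# scene-wide radiometric difference.
residual = recent - (a + b * older)
changed = np.abs(residual) > 3 * residual.std()
```

Because a and b absorb the scene-wide offset and gain, only the genuinely changed pixel produces a residual far from zero.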
3.1.3 Time traces -- Perhaps the purest form of monitoring involves assessing image measures for each pixel through a long and internally consistent time series. Spectral vegetation indices (SVIs) derived from the image data are commonly used in this type of examination. The single time trace for each pixel position comprises a history of that location's SVI signature, with the most recent SVI value providing some measure of the pixel's current status. The mechanics of deriving consistent index values from data acquired under different cloud cover, haze, illumination angles, and sensor calibrations have been researched extensively, with enough success that nearly all terrestrial ecosystems have been measured in this manner for decades.
Intensive SVI datasets available in the public domain have been underutilized relative to their potential benefits. Typical output configurations include reporting cells of 8 x 8 kilometer spatial resolution; time steps of 10-day intervals or monthly composites; in some areas, an established "average year" against which to plot deviations; and, for a growing number of areas, a set of clusters representing the regional or continental "universe" of seasonality characters. A simple reduction of the time traces is to calculate a normal year for the study area, calculate the variance about that normal for each reporting cell, and then assess new data by measuring its deviation against that standard variance. Many more complex strategies exist for using these index time series.
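The simplified reduction described above (normal year, per-cell variability, deviation test) might be sketched like this; the six-step SVI series, the five archive years, and the 3-sigma cutoff are all toy assumptions for illustration:

```python
import numpy as np

# Five archived years of a vegetation index for one reporting cell,
# simplified to six time steps per year (hypothetical values).
history = np.array([
    [0.20, 0.35, 0.55, 0.60, 0.45, 0.25],
    [0.22, 0.33, 0.52, 0.62, 0.47, 0.24],
    [0.19, 0.36, 0.56, 0.58, 0.44, 0.26],
    [0.21, 0.34, 0.54, 0.61, 0.46, 0.25],
    [0.20, 0.35, 0.53, 0.59, 0.45, 0.24],
])

normal_year = history.mean(axis=0)   # the cell's "average year"
normal_std = history.std(axis=0)     # step-by-step standard variance

# A new year of data with a mid-season depression in the index:
new_year = np.array([0.21, 0.34, 0.38, 0.41, 0.45, 0.25])

# Deviation from normal, in units of the standard variance (a z-score):
z = (new_year - normal_year) / normal_std
anomalous = np.abs(z) > 3
```

Only the depressed mid-season steps fall outside the normal envelope; the early- and late-season values, though not identical to the average year, sit well within the cell's ordinary year-to-year variability.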
Automated classification refers to a series of techniques that use digital image processing algorithms to process images for interpretation purposes. In other words, automated classification can be used to transform a satellite image into a thematic map showing different types of land cover. Although, in most cases, automated classification is not quite as accurate as a human interpreter, digital techniques often allow the interpretation to be performed rapidly and objectively with acceptable results. The details of how automated classification works can be very complicated and are beyond the scope of this discussion. However, two generic methodologies bear mention.
The first method involves classifying multispectral imagery for two dates using the same legends and, as much as possible, the same procedures. The final step is to overlay the two maps and encode the areas that changed between the two dates. For example, we look for areas that were delineated as forest in the first image and as nonforest in the second image (see Figure 7.03).
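A sketch of the overlay-and-encode step, assuming two already-classified maps with hypothetical class codes (1 = forest, 2 = nonforest):

```python
import numpy as np

FOREST, NONFOREST = 1, 2

# Hypothetical classified maps for the two dates (same legend, same grid).
map_date1 = np.array([[1, 1, 2],
                      [1, 1, 2],
                      [2, 2, 2]])
map_date2 = np.array([[1, 2, 2],
                      [1, 2, 2],
                      [2, 2, 1]])

# Encode the overlay by combining the codes, e.g. 10 * date1 + date2:
#   11 = forest both dates,    12 = forest -> nonforest,
#   22 = nonforest both dates, 21 = nonforest -> forest.
change_map = 10 * map_date1 + map_date2

# Areas that were forest on date 1 and nonforest on date 2:
forest_loss = change_map == 12
```

The combined codes make every from/to transition a distinct class, so the change map can be tallied or displayed directly.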
Another way automated classification is used for change detection is to combine the images from two different dates into a single multilayer image and classify it using techniques similar to those used for multispectral data. In this case, the image that is processed is a mix of multispectral (assuming the satellite images are multispectral) and multitemporal information. The output of such an automated classification will be an image with classes such as forest for both dates, nonforest for both dates, change from forest to nonforest, and change from nonforest to forest. This is a simple case, but the number of classes in the final image can potentially be quite large.
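As a sketch of this combined approach, the two dates can be stacked into one multitemporal pixel vector and classified directly; here a simple minimum-distance classifier with hypothetical class means stands in for the full multispectral classifiers actually used, and low digital numbers are assumed to indicate forest:

```python
import numpy as np

# Single-band images from two dates (hypothetical digital numbers;
# roughly 30 = forest, 80 = nonforest).
date1 = np.array([[78., 80., 31.],
                  [82., 30., 29.],
                  [80., 79., 30.]])
date2 = np.array([[80., 31., 30.],
                  [81., 32., 78.],
                  [79., 80., 31.]])

# Stack the dates so each pixel becomes a 2-element multitemporal vector.
stack = np.stack([date1, date2], axis=-1)        # shape (rows, cols, 2)

# Hypothetical class means for the four multitemporal classes:
classes = {
    "forest/forest":       (30.0, 30.0),
    "nonforest/nonforest": (80.0, 80.0),
    "forest->nonforest":   (30.0, 80.0),
    "nonforest->forest":   (80.0, 30.0),
}
names = list(classes)
means = np.array([classes[n] for n in names])    # shape (4, 2)

# Minimum-distance classification of every stacked pixel vector:
distances = np.linalg.norm(stack[..., None, :] - means, axis=-1)
labels = distances.argmin(axis=-1)               # index into names
```

Each output pixel now carries one of the four combined classes, so stable areas and both directions of change fall out of a single classification run.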
Another technique for monitoring changes in vegetation cover is visual interpretation. This traditional approach predates digital imagery and is still very widely used. The advantage of visual interpretation is that the human eye is very good at identifying different cover types based on tone, texture, shape, and the relation of one area to another. It is a low-tech approach that does not require sophisticated equipment, making it accessible to a wide variety of users. However, the delineations and quality control are highly labor intensive, so visual interpretation is preferable only when human judgment is essential to correct classification; it is poorly suited to sifting massive data volumes for a few obvious changes, or to analyzing the same areas frequently over short time intervals. Also, change detection between land cover classes may vary with different personnel, since the interpretations involve subjective judgment.
There are different visual approaches to analyzing changes in land cover over time. One approach is to create separate land cover maps for each date by interpreting the images or photographs, and then to overlay the maps as described above for automated classification. A second approach is to interpret the change in land cover directly, without creating intermediate land cover maps. To do this, there must be some way to superimpose the two images, or at least to locate the same points on both. A systematic comparison is then made in which each neighborhood is checked for differences between the two images. For example, if an area was clearly forested in the earlier image and the later image showed patches where the forest was clearcut, the interpreter would delineate the clearcut patches and label them as change from forest to nonforest. By systematically interpreting the image and noting all of the areas that underwent change, a map representing the change in vegetation cover between the two dates is created.
Although the process of change detection may seem straightforward from the descriptions above, there are limitations, most of them attributable to the incomplete manifestation of change within satellite imagery. Limitations in image characteristics include spatial and temporal resolution, spectral matching of key indicators of change, view and illumination angles, atmospheric conditions and corrections, and geographic registration. Concerning habitat characteristics, limitations arise from soil moisture and from vegetation phenology and structure. If each of these perturbing factors is carefully considered and minimized throughout the change detection effort, a surprisingly useful result may be obtained.