Remotely sensed data provide information in the following domains.
Each domain has an associated "resolution" relevant to the information gathered: resolution refers to the level of detail at which data are measured, and different sensors offer different resolutions in these domains.
Remote sensing instruments are designed to detect various wavelengths of the electromagnetic spectrum. Each discrete, distinctly recorded wavelength interval measured by a sensor is referred to as a "band" or "channel." Some instruments detect many discrete bands, each with relatively narrow wavelength widths, whereas others sense fewer, broader bands. A fundamental achievement of remote sensing has been to characterize the different spectral signatures of different objects and surfaces.
Most sensors are multispectral (detecting more than one band). Using multispectral data to create multispectral images (by building up image layers, each representing a single spectral band's response for the same scene) provides the ability to differentiate objects that otherwise cannot be resolved by differences in texture or shape. Figures 5.01a and 5.01b show multispectral images made up of different spectral band layers. (Note: the images have been enhanced and have undergone advanced processing, such that the multispectral scanner (MSS) data are not at the spatial resolution of the original data. More information about the MSS and TM sensors is given in Section 4.1 of this chapter.)
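The layer-stacking idea can be sketched in a few lines; the band values below are invented reflectance intensities for a tiny 2x2 scene, not data from any real sensor:

```python
# Sketch: building a multispectral "image" by stacking single-band layers.
# Each layer holds one spectral band's response for the same 2x2 scene.
green = [[10, 12], [11, 60]]
red   = [[ 8,  9], [10, 55]]
nir   = [[40, 42], [41, 15]]  # near-infrared

rows, cols = 2, 2
multispectral = [
    [(green[r][c], red[r][c], nir[r][c]) for c in range(cols)]
    for r in range(rows)
]

# Each pixel now carries one value per band; the pixel at (1, 1) is bright
# in the visible bands but dark in near-infrared, unlike its neighbors:
print(multispectral[1][1])  # (60, 55, 15)
```

Because each pixel now holds a value per band, pixels with similar texture and shape can still be separated by their differing spectral signatures.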
Multispectral sensors have fewer than 70 channels (bands) and have bandwidths commonly measured in micrometers (µm). A more recent technology is that of hyperspectral sensors, which detect and record data in very narrow spectral channels with bandwidths measured in nanometers (nm); one thousand nanometers equal one micrometer. Hyperspectral sensors have more than 100 bands.
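As a quick unit check, a hyperspectral bandwidth given in nanometers can be converted to micrometers by dividing by 1000 (the 10 nm channel width below is a hypothetical value):

```python
# Unit conversion between hyperspectral (nm) and multispectral (µm) bandwidths.
nm_per_um = 1000              # 1000 nanometers = 1 micrometer
bandwidth_nm = 10             # hypothetical narrow hyperspectral channel
bandwidth_um = bandwidth_nm / nm_per_um
print(bandwidth_um)           # 0.01 (micrometers)
```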
Historically, visible and near-infrared wavelengths were the most commonly used spectral regions for vegetation study, but the use of microwave and thermal sensing systems to research various aspects of vegetation has become more widespread.
The spatial resolution of image data is defined by the smallest spatial area sampled or viewed by a sensor's detectors. Spatial resolution is usually described as "high" (or "fine") versus "low" (or "coarse"), and these descriptors refer to the degree of detail discerned by a sensor. Objects much smaller than a sensor's spatial resolution cannot be distinctly differentiated, so the smaller the dimensions of the resolution cell, the more detail one can see or "resolve" in an image, and thus the "higher" or "finer" the resolution. "Low" or "coarse" spatial resolution means the smallest area resolved by a sensor is relatively large, which means less detail.
The resolution of many satellite-based instruments ranges from about 10 meters to several kilometers. Defense/military satellites typically have higher (more detailed) spatial resolutions, but the data are often classified and not available for widespread use. Commercial satellites planned for launch in the near future will carry sensors with spatial resolutions of 5 meters and less.
In digital images, a scene is created by displaying data (from a digital grid or array) as picture elements (pixels). Each pixel has both spatial and spectral attributes. The spatial information includes the location (position) of the pixel in the image and the apparent size of its resolution cell (the area on the ground represented by the pixel). The spectral information is the value assigned to the pixel, usually a numeric representation of the intensity of reflectance or emittance measured by the sensor for that resolution cell in a particular spectral band.
Spatial resolution corresponds to the spatial area each displayed or printed pixel represents. Though almost always displayed as square on a computer display or image print, pixels can represent not only square but rectangular areas also, depending on characteristics of the sensor system that recorded the data. An example pixel layout is shown in Figure 5.02.
The spatial size of pixels representing square areas can be described with one value. For example, if spatial resolution is specified as simply "20 meters," each image pixel represents a 20-meter by 20-meter square on the ground (total area covered by the pixel is therefore 20 m x 20 m = 400 square meters). Information about the phenomenon being measured (for example, reflected light intensity) in the 20-meter by 20-meter square is stored as a single value for each image pixel. For data where the smallest resolved sampling area is rectangular, each image pixel (though appearing square) actually represents a rectangular area, so an x (horizontal direction on the image hardcopy or screen display) and y (vertical direction on the image hardcopy or screen display) resolution need to be specified if areal information is important.
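The pixel-area arithmetic above can be written as a small helper; the 30 m by 120 m rectangular cell is a hypothetical example, not the specification of any particular sensor:

```python
# Ground area represented by one pixel.
# Square resolution cells need one dimension; rectangular cells need x and y.
def pixel_area_m2(x_res_m, y_res_m=None):
    """Area on the ground covered by one pixel, in square meters."""
    if y_res_m is None:        # square cell: y defaults to x
        y_res_m = x_res_m
    return x_res_m * y_res_m

print(pixel_area_m2(20))       # 20 m x 20 m square -> 400
print(pixel_area_m2(30, 120))  # hypothetical rectangular cell -> 3600
```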
2.2.1 A discussion of spatial scale terminology -- Spatial resolution is sometimes confused with the spatial scale of the data or image. Scale and resolution are often associated, but they are not the same. The scale of an image is the ratio of the distance between two points on a hardcopy or displayed image (image distance) to the actual geographic distance between the same two points on the ground (ground distance). A large-scale map (e.g., 1:500) shows great detail; great detail indicates high spatial resolution and is usually associated with maps or images covering a small area. A small-scale map (e.g., 1:2,000,000) shows much less detail; less detail indicates low spatial resolution and is usually associated with maps or images covering large areas.
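The scale ratio can be computed directly from the two distances; the values below are illustrative, not taken from any particular map:

```python
# Scale = image distance : ground distance (both in the same units).
image_distance_m = 0.02        # 2 cm between two points on the hardcopy
ground_distance_m = 10_000.0   # 10 km between the same points on the ground

scale_denominator = ground_distance_m / image_distance_m
print(f"1:{scale_denominator:,.0f}")  # 1:500,000 -- a small-scale map
```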
Misuse or confusion of scale terminology is common, with the phrase "large-scale" often used inaccurately to refer to large areal coverage, such as global sampling. At present, most global remote sensing data sets are in fact small-scale, because covering the entire globe precludes great detail. To be accurate and avoid confusion, some researchers describe the datasets they work with as "large area" versus "small area," avoiding scale terms altogether.
The time of day or year at which an image was taken is an important consideration in the analysis of remotely sensed data. This may mean selecting a morning image over an afternoon image or perhaps a spring image over an autumn image. Also, knowing the date and time a particular image was taken can provide valuable information during image interpretation. This is especially true when interpreting vegetation classes.
Multitemporal imagery (also referred to as a "time series") is imagery acquired at different times. This is useful to study changes in the environment and to monitor various processes. Sometimes the dynamics of interest take place during the course of a day, a week, or over a number of years. To assess changes over a time sequence accurately, effects in the data not caused by true environmental change (such as differing atmospheric conditions and sun or view angle positions) must first be accounted for.
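A minimal sketch of multitemporal change detection by image differencing, assuming both dates have already been corrected for atmospheric and sun/view-angle effects (all values are invented intensities):

```python
# Two co-registered 2x2 images of the same scene at different dates.
date1 = [[50, 52], [48, 90]]
date2 = [[51, 53], [47, 30]]   # one pixel changed markedly, e.g. cleared vegetation

threshold = 10  # differences smaller than this are treated as noise, not change
changed = [
    [abs(date2[r][c] - date1[r][c]) > threshold for c in range(2)]
    for r in range(2)
]
print(changed)  # [[False, False], [False, True]]
```

The threshold is the crude stand-in here for the corrections mentioned above: without accounting for non-environmental effects, small differences would be flagged as spurious change.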
The term "bidirectional" refers to the pair of angles involved in remote measurement: the angle at which the energy source strikes an object or surface, known as the illumination angle, and the angle at which the remote sensing instrument receives reflected or emitted energy, called the view angle. For example, consider looking at a field with trees in it throughout the day. In the morning, with the sun low in the sky, many long shadows can be seen in your field of view. As the sun climbs higher, the scene looks brighter overall because the shadows shorten. Face into the sun, and your eyes see textures formed by brightly lit tops of objects interspersed with dark patches of shadow. View the scene with your back to the sun, and your view is filled more with the bright return of light and much less shadow.
Many sensors have historically viewed earth surfaces at only one view angle -- looking straight down -- which is referred to as a "nadir" view. More recently, some sensors have been designed to acquire image data at more than one angular view, and the use of multiangle data is increasing.
Exactly how a scene appears to a sensor depends not only on the sun and view angles but also on the nature of the surface material and the relief of the terrain (for example, flat vs. mountainous). Various objects and surfaces reflect light differently as sun and view angles change. Water, for example, reflects light primarily in one direction, whereas sandy soil tends to reflect light in many directions. Short, even vegetation such as grass shows an overall brighter response than forest canopies, where trees of varying heights cause clumps of shadow. This angular information can be very useful in object discrimination and canopy structure analysis.
One common application of multiangle imagery is to make a "stereopair" of aerial photographs or satellite images. By combining scenes taken from different angles, it is possible to construct a 3D view of features such as terrain, and if there is enough detail, smaller objects such as trees can be viewed in three dimensions.
Research relating to global vegetation dynamics involves using data taken at a wide variety of spatial scales, spatial resolutions and spectral resolutions. Multitemporal (time series) images are commonly used for monitoring vegetation dynamics, and the use of multiangular data is growing.
Understanding individual leaves and plants requires large-scale (small area) information, while the study of global patterns and processes is advanced by global or small-scale (large area) views. Data and images taken at a variety of spatial scales and from different information domains can be used together in many different ways, for example, to address the same research question at different scales, or to answer different questions each requiring different scale data.
Multiple scale (multiscale) data of different spatial resolutions can be combined using various techniques. This is helpful because cost and processing limitations can restrict the acquisition of high resolution spatial data: when sampling at high spatial resolution, researchers can typically afford to cover only small geographic areas. In a multistage sampling design, statistical methods are used to combine the detail of high spatial resolution data (obtained for selected small locations within a larger area) with the large-area coverage of low spatial resolution data. This sort of sampling approach can also be used to provide a check on the accuracy of image interpretation.
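A two-stage estimate of this kind can be sketched as a simple ratio correction; every number below is invented for illustration, and real multistage designs use more formal estimators:

```python
# Coarse-resolution data cover the whole area; high-resolution samples at a
# few sites calibrate the coarse estimate. All values are hypothetical.
coarse_forest_fraction = 0.40      # forest fraction over the full area (coarse data)

# At sampled sites, compare the coarse classification with high-resolution data:
site_coarse = [0.35, 0.50, 0.42]   # coarse-data forest fraction at each site
site_fine   = [0.40, 0.54, 0.47]   # high-resolution forest fraction at same sites

# Ratio correction: scale the wall-to-wall coarse estimate by the pooled
# fine-to-coarse ratio observed at the sampled sites.
ratio = sum(site_fine) / sum(site_coarse)
adjusted = coarse_forest_fraction * ratio
print(round(adjusted, 3))  # 0.444 -- coarse estimate nudged upward by the samples
```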
Another technique using multiscale data is to overlay two or more images of different scales. For example, low resolution multispectral data can be overlaid with a higher resolution panchromatic (single band) image. The resulting image combines the multispectral information of the one image set with the spatial resolution of the panchromatic image. Figure 5.03 shows the result of merging 5-meter panchromatic and 30-meter multispectral images.
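One common way to perform such a merge is a Brovey-style transform, sketched below on invented values; the multispectral pixels are assumed to have already been resampled to the panchromatic grid:

```python
# Merge low-resolution multispectral bands with a high-resolution
# panchromatic band by rescaling each pixel's band values so their total
# intensity matches the pan band (band ratios, i.e. color, are preserved).
ms = [[(30, 20, 10), (32, 22, 12)],
      [(28, 18,  8), (60, 40, 20)]]   # (band1, band2, band3) per pixel
pan = [[66, 60], [54, 180]]           # high-resolution intensity

sharpened = []
for r in range(2):
    row = []
    for c in range(2):
        b1, b2, b3 = ms[r][c]
        total = b1 + b2 + b3
        row.append(tuple(round(b * pan[r][c] / total) for b in (b1, b2, b3)))
    sharpened.append(row)

print(sharpened[0][0])  # (33, 22, 11): band ratios kept, intensity from pan
```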