Digital data and image processing require the use of computers to methodically handle the raw values acquired by sensors, eventually producing data and then images in forms more readily useful. Any number of steps can be involved in the processing sequence, depending on a sensor's complexity, software design, and any changes in the sensor or its detector response over time. Typically a variety of data and image products are available for any given sensor, with basic products requiring fewer steps and advanced products undergoing additional processing or enhancement.

Much data processing requires detailed knowledge of a sensor's characteristics, technical specifications and data recording systems, and for these reasons it has usually been done in a centralized fashion. Software development, operational processing and quality control can be expensive and time-consuming.

Initial processing (sometimes referred to as preprocessing) is often partially, if not entirely, carried out by the organization providing the sensor imagery. This step converts raw values into a more useful format. Reformatted numbers then usually undergo some sort of "calibration" treatment, which adjusts a sensor's measured response to absolute values of illumination intensity. Often this is done using laboratory or field measurements of a sensor's response to either known standard sources of light or objects of standardized reflectance.
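For many sensors, this calibration step reduces to a linear conversion from raw digital numbers (DN) to at-sensor radiance. A minimal sketch follows; the gain and offset values and the example array are purely illustrative (real coefficients come from a sensor's calibration files or image metadata):

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Convert raw digital numbers (DN) to at-sensor radiance
    with a linear model: radiance = gain * DN + offset."""
    return gain * np.asarray(dn, dtype=float) + offset

# Hypothetical 2x2 band of raw 8-bit counts, with made-up
# calibration coefficients for illustration only.
raw = np.array([[0, 64],
                [128, 255]])
radiance = dn_to_radiance(raw, gain=0.5, offset=1.0)
```

Each band of a multispectral sensor typically has its own gain and offset, so in practice this conversion is applied band by band.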

Geometric corrections minimizing spatial distortions may also be done before distributing image data. These corrections compensate for effects such as distortions inherent in a sensor and its data-acquisition design, Earth movement under a sensor platform, platform motion over Earth, distortions from terrain relief, and those involved in transferring a round object (Earth) to a 2D "flat" image surface. Distortions related to a sensor can be compensated for automatically using sophisticated algorithms; however, other spatial distortions must be dealt with on a case-by-case basis and require information about the surface being imaged.
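One small building block of geometric correction is the mapping between image pixel positions and ground (map) coordinates. The sketch below uses a six-parameter affine transform, a form common in geospatial software; the transform values here are hypothetical (30 m pixels, an arbitrary upper-left corner, no rotation):

```python
def pixel_to_map(row, col, transform):
    """Map an image (row, col) position to map (x, y) coordinates
    using a six-parameter affine transform:
    (x0, dx, rx, y0, ry, dy), where x0/y0 locate the upper-left
    corner, dx/dy are pixel sizes, and rx/ry are rotation terms."""
    x0, dx, rx, y0, ry, dy = transform
    x = x0 + col * dx + row * rx
    y = y0 + col * ry + row * dy
    return x, y

# Hypothetical north-up transform: 30 m pixels, upper-left corner
# at (500000, 4600000), no rotation (rx = ry = 0).
t = (500000.0, 30.0, 0.0, 4600000.0, 0.0, -30.0)
x, y = pixel_to_map(row=10, col=20, transform=t)
```

A full geometric correction also resamples pixel values onto the new grid (e.g. nearest-neighbor or bilinear interpolation), which this sketch omits.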

More advanced processing can correct remote sensing data for the effects of the atmosphere. The atmosphere can modify light traveling through it, so measurements received by satellite and airborne sensors at varying distances above Earth (referred to as "at-sensor" values) are not necessarily what would have been recorded had a sensor been observing the reflection or emittance at surface level. Atmospheric effects are complex, and the extent of the effects depends on atmospheric conditions, spectral wavelength, sensor height above ground level, and other factors.
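A very simple first-order approach to atmospheric correction is dark-object subtraction: pixels that should be nearly black (deep shadow, clear deep water) are assumed to owe their nonzero readings to additive atmospheric path radiance, which is then subtracted from the whole band. The sketch below uses hypothetical values and is only one of many, far more sophisticated, correction methods:

```python
import numpy as np

def dark_object_subtraction(band, percentile=1.0):
    """Crude atmospheric correction: estimate additive path
    radiance from the darkest pixels and subtract it everywhere,
    clipping negatives to zero."""
    band = np.asarray(band, dtype=float)
    haze = np.percentile(band, percentile)
    return np.clip(band - haze, 0.0, None)

# Hypothetical band where even the darkest pixel reads 12.0
# because of atmospheric scattering.
band = np.array([[12.0, 40.0],
                 [75.0, 200.0]])
corrected = dark_object_subtraction(band, percentile=0.0)
```

Physically based corrections instead model scattering and absorption explicitly from atmospheric conditions, wavelength, and sensor altitude, as noted above.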

Other advanced processing techniques can adjust remote sensing data for variations caused by different sun angle and viewing angle conditions. The sun's height in the sky and the angle at which a sensor "looks" at a surface both influence the observed brightness of surfaces.
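The simplest sun-angle adjustment is a cosine correction: dividing by the cosine of the solar zenith angle (90 degrees minus solar elevation). This first-order sketch ignores terrain slope and view angle; the input values are illustrative:

```python
import math

def sun_angle_correction(radiance, solar_elevation_deg):
    """Normalize observed radiance for illumination by dividing by
    the cosine of the solar zenith angle. A first-order correction
    that ignores terrain and sensor viewing geometry."""
    zenith = math.radians(90.0 - solar_elevation_deg)
    return radiance / math.cos(zenith)

# The same surface observed under a low sun appears darker;
# the correction brings the two measurements into agreement.
high_sun = sun_angle_correction(100.0, solar_elevation_deg=90.0)
low_sun = sun_angle_correction(50.0, solar_elevation_deg=30.0)
```

After correction, both hypothetical observations describe the same surface brightness, despite the factor-of-two difference in raw readings.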

The types of processing a dataset has undergone, which may include some or perhaps all of the above, are generally indicated by various product "level" categories; information on levels can be obtained from a data provider. Routine processing is usually done by a data provider; advanced processing may be done by a provider, or by researchers if they have the means. Users need to be aware of the data products they require for their particular work.

Computer display devices can ingest a variety of different data products and display them as images. Sometimes several types of image formatting are routinely done before image distribution and made available for input into various widely used software analysis programs. There are many ways to digitally manipulate data and images.

Image enhancement is used to facilitate visual interpretation. Using any of a number of special techniques, images can be enhanced to improve the identification of features that are of interest. These techniques often involve altering the contrast and brightness of an image so that it is easier to distinguish between different features. Also, filtering routines can be used to increase image sharpness and decrease noise (speckle) in an image.
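A common contrast enhancement is a linear stretch, which remaps a band's occupied value range onto the full display range. A minimal sketch with a hypothetical low-contrast band:

```python
import numpy as np

def linear_stretch(img, out_min=0, out_max=255):
    """Linear contrast stretch: remap the image's min..max range
    onto the full display range so features are easier to
    distinguish visually."""
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    scaled = (img - lo) / (hi - lo) * (out_max - out_min) + out_min
    return scaled.astype(np.uint8)

# Hypothetical band whose values occupy only counts 100..140,
# which would display as a nearly uniform gray.
dull = np.array([[100, 110],
                 [120, 140]])
bright = linear_stretch(dull)
```

Histogram equalization and the sharpening/noise-reduction filters mentioned above work on the same principle of reshaping pixel values or their local neighborhoods for visual effect.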

Image classification uses quantitative techniques to identify and subdivide image pixels into classes. A variety of statistically based decision rules and spectral, spatial and temporal pattern recognition routines help determine the cover type of each pixel. There are two broad types of image classification: supervised (numerical descriptors of the desired land cover classes are specified to the classification program) and unsupervised (the classification program subdivides the pixels into natural groupings or clusters without any a priori specifications).
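The unsupervised case can be illustrated with a toy one-dimensional k-means clustering of pixel brightnesses: the algorithm finds natural groupings without any class descriptions being supplied. The pixel values below are hypothetical, and real classifiers cluster multi-band spectral vectors rather than single values:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Toy unsupervised classifier: 1-D k-means clustering of
    pixel values into k groups. Returns a label per pixel and
    the sorted cluster centers."""
    values = np.asarray(values, dtype=float).ravel()
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        # Assign each pixel to its nearest center...
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # ...then move each center to the mean of its pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, np.sort(centers)

# Hypothetical brightnesses: dark pixels (e.g. water) and bright
# pixels (e.g. bare soil) separate into two clusters.
pixels = [10, 12, 11, 200, 205, 198]
labels, centers = kmeans_1d(pixels, k=2)
```

In a supervised classification, by contrast, the analyst supplies training statistics for each desired class, and pixels are assigned by a decision rule such as maximum likelihood.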

Data merging and GIS (Geographic Information Systems) integration combine image data for particular geographic areas with other geographically referenced datasets for the same area. For example, image data can be combined with soil maps, topographic maps, rainfall and soil moisture data.
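A typical overlay operation on co-registered layers is a zonal summary: averaging an image-derived quantity within each category of a thematic map. The sketch below combines a hypothetical vegetation-index band with a hypothetical soil-class map on the same grid:

```python
import numpy as np

def zonal_mean(image, zones):
    """Combine two co-registered raster layers: average the image
    values within each zone (category) of a thematic map, e.g. a
    soil map. Returns {zone_id: mean_value}."""
    image = np.asarray(image, dtype=float)
    zones = np.asarray(zones)
    return {int(z): image[zones == z].mean() for z in np.unique(zones)}

# Hypothetical 2x2 vegetation-index band and soil-class map
# covering the same geographic area on the same pixel grid.
ndvi = np.array([[0.2, 0.4],
                 [0.6, 0.8]])
soil = np.array([[1, 1],
                 [2, 2]])
means = zonal_mean(ndvi, soil)
```

The same pattern extends to topographic, rainfall, or soil-moisture layers, provided all datasets have first been geometrically registered to a common grid.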