The starting point lies in the nature of the signal representing the remotely
sensed scene. The MSS on Landsat is a good example from which to learn the principles.
Each band records the (averaged) total radiance (largely ground reflectance,
but with contributions from the atmosphere) reaching the detectors that
measure it. The spatial unit of measurement in the scene is the ground area
that is the source of the radiance; it coincides with the sensor unit that
records it, namely, a single pixel. Each pixel receives some number of
photons, collected within some spectral interval, from that ground area,
depending largely on the nature of the scene (its averaged radiance) and the
dwell time (related to spacecraft velocity and mirror oscillation rate). The
radiation impinging on the detector delivers a finite quantity of energy.
The quantized photons (whose cumulative effect gives rise to a specific
intensity) knock electrons loose (the photoelectric effect); these electrons
can be read out of the detector (during its characteristic sampling time) as
an electronic signal that varies in proportion to the number of electrons
released. That signal can be expressed as a voltage whose variation is thus
a measure of the photon-energy/electron-release effect. (Some systems
measure variations in current instead.)
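The photon-to-voltage chain described above can be sketched in a few lines.
All numbers here (quantum efficiency, readout gain, photon counts) are
illustrative assumptions, not values for any real Landsat MSS detector:

```python
# Sketch of the photon -> electron -> voltage signal chain.
# Constants are illustrative assumptions, not real MSS detector values.

def detector_voltage(photons_collected, quantum_efficiency=0.5,
                     volts_per_electron=1e-6):
    """Convert the photon count for one pixel's dwell time into a voltage.

    photons_collected: photons reaching the detector during the dwell time,
        proportional to scene radiance times dwell time.
    quantum_efficiency: fraction of photons that free an electron
        (the photoelectric effect).
    volts_per_electron: readout gain converting released charge to voltage.
    """
    electrons = photons_collected * quantum_efficiency
    return electrons * volts_per_electron

# A brighter ground area delivers more photons, hence a higher voltage.
v_dark = detector_voltage(100_000)
v_bright = detector_voltage(400_000)
assert v_bright > v_dark
```

The key point the sketch captures is the proportionality: voltage tracks
electrons released, which in turn track the photon count set by scene
radiance and dwell time.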
There is now a voltage value for the signal recorded at a pixel, for that
particular moment during the scanning period and for the "piece of real
estate" it covers. As successive pixels along a line of scan are sampled,
each acquires its own voltage value; when this variation is plotted as
intensity (radiance-dependent) versus time, a continuous series of points
(spaced by pixel dimensions) represents voltage variations corresponding to
changes in scene radiance. This is repeated line by line (for the MSS, a set
of six detectors sweeps along adjacent parallel lines during one advance of
the scanning mirror). As these sweeps accumulate with the forward motion of
the observing platform, a two-dimensional array of contiguous pixels
develops. Each pixel can then be represented by a number: some specific
value, usually a fraction or decimal, within the range of voltages
measurable by the sensor. The stream of successive voltages can, in
principle, be telemetered back to a receiving station on Earth (either
directly or through a relay satellite), or recorded first on magnetic tape
or a similar device and transmitted later.
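The build-up of the two-dimensional pixel array can be sketched as follows.
The six-detectors-per-sweep figure comes from the MSS description above; the
line length, sweep count, and voltage values are fabricated for illustration:

```python
# Sketch of how successive mirror sweeps build a 2-D pixel array.
# Six detectors per sweep follows the MSS description; other numbers
# are illustrative only (real MSS lines are thousands of pixels wide).
import random

DETECTORS_PER_SWEEP = 6   # MSS: six adjacent lines per mirror advance
PIXELS_PER_LINE = 10      # illustrative line length
SWEEPS = 3                # platform advances between sweeps

def sample_line():
    """One detector's voltage samples along a single line of scan."""
    return [random.uniform(0.0, 5.0) for _ in range(PIXELS_PER_LINE)]

image = []                # the growing 2-D array of contiguous pixels
for sweep in range(SWEEPS):
    for detector in range(DETECTORS_PER_SWEEP):
        image.append(sample_line())

assert len(image) == SWEEPS * DETECTORS_PER_SWEEP   # 18 lines total
```

Each mirror advance thus contributes six lines at once, and the platform's
forward motion positions the next six just below them.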
It is feasible to convert this variable voltage signal into an image by using
it to drive some electro-optical instrument that produces a light beam output
that changes with the voltage. The different beam intensities can be used directly
to manufacture various types of photo products. Since film gray levels correlate
with the number of photons activating the silver halide grains, a black-and-white
image consisting of ranges of gray levels that correspond to relative radiances
(e.g., varying reflectances) will result. Standard photointerpretive methods
then draw upon spatial patterns and feature or material brightnesses to facilitate
identification of scene categories (classes).
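The voltage-to-gray-level mapping implied above can be sketched with a simple
linear transfer function. The linearity and the 0-5 V range are illustrative
assumptions; real film response curves are nonlinear:

```python
# Sketch of mapping detector voltage to an image gray level.
# The linear transfer and voltage range are illustrative assumptions.

V_MIN, V_MAX = 0.0, 5.0   # assumed measurable voltage range

def gray_level(voltage, levels=256):
    """Map a detector voltage to a gray level (0 = black, levels-1 = white)."""
    fraction = (voltage - V_MIN) / (V_MAX - V_MIN)
    return min(levels - 1, int(fraction * levels))

assert gray_level(0.0) == 0      # no radiance -> black
assert gray_level(5.0) == 255    # full-scale radiance -> white
assert gray_level(2.5) == 128    # mid-range radiance -> mid-gray
```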
However, many manipulations of the raw data can be applied to enhance
interpretation, using mathematically based display and interpretive
programs: contrast stretching, edge enhancement, ratioing, supervised
classification, and so on. Today these are best and most rapidly
accomplished with computers.
This requires that the analog signal be converted to digital values (A/D
conversion), such as the DNs defined and described on page 1-12. This conversion can take place
either on the ground after the electronic signal is recorded, or, now most commonly,
onboard the spacecraft simultaneously with acquisition of the signal.
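The A/D conversion step can be sketched as quantizing each voltage into a
digital number (DN). The 6-bit range (DN 0 to 63) matches the quantization
used for Landsat MSS data; the full-scale voltage is an illustrative
assumption:

```python
# Sketch of analog-to-digital (A/D) conversion of a detector voltage
# into a DN. 6 bits (0-63) matches MSS quantization; the voltage
# range is an illustrative assumption.

V_FULL_SCALE = 5.0   # assumed full-scale detector voltage

def to_dn(voltage, bits=6):
    """Quantize an analog voltage into a digital number (DN)."""
    max_dn = (1 << bits) - 1                         # 63 for 6 bits
    clipped = max(0.0, min(voltage, V_FULL_SCALE))   # clamp to valid range
    return int(clipped / V_FULL_SCALE * max_dn + 0.5)

assert to_dn(0.0) == 0
assert to_dn(5.0) == 63
```

Whether performed on the ground or onboard, the operation is the same:
the continuous voltage stream becomes a stream of integer DNs suitable for
computer processing.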
Nicholas M. Short, Sr.
email: nmshort@epix.net
Jeff Love, PIT Developer (love@gst.com)