
Tuesday, March 13, 2012

Aerial Photography



Airphotos have been an important source of data for mapping since the first decades of the 20th century. Aerial photography can be conducted from space, high or low altitude aircraft, or near ground platforms.
Aerial photographs are acquired using a photographic camera and film to record reflected EMR within the camera's field of view. This is an optical-chemical system in which the lens focuses EMR on the film, which is coated with a light-sensitive emulsion that detects reflected EMR in the wavelength band from 0.3 µm to 0.9 µm, i.e., from the mid ultraviolet to the near IR range. The result is a continuous-tone photograph that has high spatial resolution (i.e., shows fine spatial detail) but low spectral resolution (i.e., is sensitive to EMR in a broad spectral band). The entire scene within the camera's field of view is recorded instantaneously. However, there is distortion in the image because it is a perspective rather than planimetric view of the surface.
A variety of films and film formats can be used to acquire airphotos. The most common film format is 35 mm which is the standard slide or colour print film. Larger formats such as 70 mm are available and have the advantage of recording greater spatial detail. Most airphotos used for mapping are obtained using a 23 cm x 23 cm metric mapping camera. This large image size maximizes the spatial detail that can be captured on the image.
Film types can include black and white or colour films that are sensitive to the visible (0.4 µm to 0.7 µm) portion of the electromagnetic spectrum or to the near IR portion of the spectrum (0.7 µm to 0.9 µm). Normal photographic film is sensitive to a broad range of the electromagnetic spectrum. However, it is possible to record EMR in narrower bands by using filters to block out selected wavelengths. For example, if a film that is sensitive to visible light is used with a filter that blocks out blue and green wavelengths (0.4 µm to 0.6 µm), the film will only record EMR in the red portion of the spectrum (0.6 µm to 0.7 µm).

Airphoto Geometry

Airphotos used for mapping purposes are usually vertical airphotos, although oblique airphotos are often used to aid in visual interpretation or in mapping mountainous areas. The following diagram illustrates the configuration of the camera for both vertical and oblique airphotos. EMR reflected off the surface of the Earth is directed back toward the camera lens. The lens focuses the reflected EMR on the film in the back of the camera. The camera focal length is the distance from the front of the lens to the film. Using a normal lens, this distance is approximately 153 mm. As is discussed in more detail below, the camera focal length and the altitude of the lens above the ground determine the scale of the airphoto.

 

Vertical and Oblique Airphoto Geometry




Because the EMR reflected off the Earth passes through the camera lens, the image formed on the film is a negative image. The rays of reflected EMR represented by the diagonal lines expose opposite sides of the film - i.e., the right edge of the field of view on the ground appears as the left edge of the film and vice versa. However, an imaginary positive image plane exists at a distance equal to the camera focal length in front of the lens.
High and low oblique airphotos differ in that the horizon is visible on a high oblique airphoto (e.g. a view of the Earth from a space shuttle), but not on a low oblique airphoto.

Flight Lines

Air photos are usually taken in sequence along a series of parallel flight lines traversing the area of interest. Flight line maps are prepared that show the locations of the flight lines and the positions of individual images taken along each flight line. Prints are identified by a unique reference number that is superimposed on the image. On Federal airphoto series, the reference number is composed of a roll number and a print number. For example, the airphoto with the reference number A12919 111 is print number 111 on roll number A12919. On Province of Ontario airphoto series, the reference number is a combination of year, roll number, flight line number, and print number. For example, a provincial airphoto reference number of 91-44-2478-74 means that this is print number 74 on flight line 2478, roll number 44. The image was taken in 1991. Airphotos from other sources may use different referencing systems, but the image reference number always uniquely identifies each individual airphoto. By referring to the appropriate flight line map, you can determine the reference number(s) of print(s) that cover a particular area of interest.

Metadata on Airphotos

In addition to the reference number, all airphotos show important metadata in the margin of the print. This includes a spirit level that can be used to determine whether the camera was level to the ground at the instant the airphoto was taken. A clock indicates the time of day that the image was taken, usually in Greenwich Mean Time. This can be useful in interpreting the image because the time of exposure can influence shadows on the image. An altimeter shows the altitude of the plane above sea level (ASL) at the instant the airphoto was taken. Finally, a frame counter identifies the frame number, which can be used for sequencing images along the flight path, and also identifies the camera focal length. The altitude of the lens and the focal length can be used to determine the scale of the image.
The airphoto prints also show fiducial marks located midway along the edges of the print. These are v-shaped notches that are used to locate the X and Y axes of the image. The intersection of the X and Y axes is the principal point of the airphoto. This point is the centre of the airphoto. For a vertical airphoto, this will be the point that was directly below the centre of the lens at the instant of exposure.

Fiducial Marks and Principal Point (P)

Airphotos are taken so that the images overlap by approximately 60% along flight lines (overlap) and 20% to 30% between flight lines (sidelap). The exposure station is the position of the front nodal point of the lens at the instant of exposure. The distance between the exposure stations of two successive images is called the air base and is equal to the ground distance between the principal points of the two images. Overlap between images is essential to allow three-dimensional viewing of airphotos, but it is also needed to determine the direction of the flight line and to allow construction of mosaics that contain little distortion. This requires using only the central portion of each image since distortion increases towards the edge of the image due to perspective effects.
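The relationship between forward overlap and air base can be illustrated with a short calculation. The following minimal Python sketch uses hypothetical values (a photo covering 2,300 m of ground along the flight line with 60% overlap) and simply applies air base = ground coverage × (1 − overlap fraction).

    def air_base(ground_coverage_m, forward_overlap):
        # Ground distance between successive exposure stations, given the
        # ground width covered by one photo along the flight line and the
        # forward overlap expressed as a fraction (0.6 for 60% overlap).
        return ground_coverage_m * (1.0 - forward_overlap)

    # Hypothetical example: 2,300 m of along-track coverage with 60% overlap
    print(air_base(2300, 0.60))   # 920.0 m between exposure stations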

Overlap and Sidelap

While it might be expected that the flight line would coincide with the X-axis of the image, this is rarely the case. The flight line would only coincide with the X-axis if there was no wind and the plane was able to fly straight along the flight line. Under windy conditions, the pilot must compensate by flying slightly into the wind in order to stay on course. This process is known as crabbing and affects the area of overlap between images.

Crabbing

The flight line can always be determined by plotting the principal point of one airphoto on the area of overlap with the adjacent image. This point is known as the conjugate principal point. In the example below, the first image has a tree at its principal point and the second image has a building. The location of the same building on the first image identifies the conjugate principal point. The flight line is given by connecting the principal point and the conjugate principal point with a straight line.

Conjugate Principal Point

Airphoto Scale

On a large scale map, the effect of curvature of the Earth's surface is negligible and the map is planimetrically correct. Map scale can therefore be defined as the ratio of map distance to ground distance, usually expressed as a representative fraction. On an airphoto, scale can be thought of as the ratio of photo distance to ground distance. We can estimate the scale as the ratio of the photo distance between the principal point and the conjugate principal point to the air base (ground distance between exposure stations). However, because the airphoto is a perspective view, this ratio is only approximately correct. Airphoto scale varies from the centre towards the edges of the image.
Airphoto scale can also be determined based on the camera focal length and the altitude of the front nodal point of the camera lens at the instant of exposure. However, an implication of this is that airphoto scale varies with terrain elevation. Higher elevations are closer to the camera lens and are therefore shown on the image at larger scale than areas of lower elevation that are further from the lens. This is illustrated in the following diagram.

Airphoto Scale

The scale at point A can be determined as the ratio of the image distance ao in the positive image plane to the ground distance AOA. Since the triangles Loa and LOAA are similar triangles (same shape but different sizes), oa/OAA = Lo/LOA = f/H'A, where f is the camera focal length and H'A is the height of the front nodal point of the lens above the ground at point A. This relationship proves that airphoto scale is equal to the focal length divided by the height of the lens above the terrain.
The airphoto metadata provide values for the focal length (f) and the height of the lens above sea level (H). To determine the scale at points A and B, we need to know their elevations above mean sea level. This information can be obtained by inspecting a topographic map of the area. Once we know the elevations of the two points, we can calculate the scale at these locations using scale = f / (H - h). Assume that the ground elevation at A is 3000 m, the ground elevation at B is 1500 m, and the height of the lens above mean sea level is 4500 m. Then
  • scale at A = f / (H - hA) = 150 mm / (4500 m - 3000 m) = 0.150 m / 1500 m = 1 / 10,000
  • scale at B = f / (H - hB) = 150 mm / (4500 m - 1500 m) = 0.150 m / 3000 m = 1 / 20,000
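These calculations are easy to script. The following minimal Python sketch simply applies scale = f / (H - h) using the values from the example above; the function name is illustrative only.

    def photo_scale(focal_length_m, flying_height_asl_m, ground_elevation_m):
        # Airphoto scale as a representative fraction: f / (H - h)
        return focal_length_m / (flying_height_asl_m - ground_elevation_m)

    # f = 150 mm, H = 4500 m ASL, as in the example above
    print(1 / photo_scale(0.150, 4500, 3000))   # 10000.0 -> 1:10,000 at A
    print(1 / photo_scale(0.150, 4500, 1500))   # 20000.0 -> 1:20,000 at B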

Example Problem

A runway measures 5 cm on an airphoto but measures 500 m on the ground. The elevation of the runway is 1000 m ASL. If the focal length of the camera is 150 mm, what is the altitude of the aircraft?
Using the relationship scale = photo distance / ground distance = f / (H - h):
  • 5 cm / 500 m = 150 mm / (H - 1000 m )
  • 5 cm / 50000 cm = 0.150 m / (H - 1000 m)
  • 1 / 10,000 = 0.150 m / (H - 1000 m)
  • H - 1000 m = 0.150 m * 10,000
  • H = 0.150 m * 10,000 + 1000 m = 2500 m
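The same rearrangement can be expressed as a short Python sketch; it mirrors the steps above and uses only the numbers given in the problem.

    def flying_height(photo_dist_m, ground_dist_m, focal_length_m, elevation_m):
        # Rearrange scale = photo distance / ground distance = f / (H - h)
        # to solve for the flying height H (metres above sea level).
        scale = photo_dist_m / ground_dist_m          # 0.05 / 500 = 1/10,000
        return focal_length_m / scale + elevation_m   # H = f / scale + h

    print(flying_height(0.05, 500, 0.150, 1000))      # 2500.0 m ASL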

Factors Affecting Airphoto Scale

Airphoto scale is thus a function of several factors including: camera focal length, flying height of the aircraft, and ground elevation above sea level. A camera with a wide angle lens (shorter focal length) has a wider field of view and thus produces a smaller scale image for a given film format. Conversely, a telephoto lens, with its longer focal length, views a smaller area and produces a larger scale image. Flying height also affects photo scale. The higher the altitude of the aircraft (or satellite), the smaller the scale of the resulting image. Variations in ground elevation are the main reason for scale distortion in airphotos. Higher elevations are closer to the camera and thus appear at larger scale in the image. It is general practice to try to minimize scale distortion by ensuring that the maximum relief of the area represented in the airphoto is less than 10% of the flying height. This implies that in Southern Ontario, where the maximum relief is unlikely to exceed 200 m, a flying height of 2000 m is adequate to minimize scale distortion. However, in the Rocky Mountains, where maximum relief might be 3,000 m, a flying height of 30,000 m would be required.
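The 10% rule of thumb described above can be written as a one-line check in Python. This is only a sketch of the rule as stated here, not a survey specification.

    def min_flying_height(max_relief_m, relief_fraction=0.10):
        # Minimum flying height above ground so that the maximum relief in
        # the scene stays below the given fraction of the flying height.
        return max_relief_m / relief_fraction

    print(min_flying_height(200))    # Southern Ontario example -> 2000.0 m
    print(min_flying_height(3000))   # Rocky Mountains example  -> 30000.0 m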

Camera Focal Length and Airphoto Scale

Distortion in Airphotos

There are several types of distortion in airphotos. Scale changes are primarily due to changes in terrain elevation but there can also be scale changes between successive images along a flight line due to changes in flying height between prints. This can occur due to turbulence that prevents the pilot from maintaining a constant altitude.
Further distortion can occur due to the camera not being level to the ground at the instant of exposure. This can occur if the nose of the aircraft is slightly up or down (pitch) or if a wing is tilted up or down (roll). Both conditions can be caused by turbulence or by manoeuvring to stay on course. The result is the introduction of distortion due to obliqueness in the image. Obliqueness is measured by the angle between a vertical plumb line through the centre of the lens and the optical axis of the camera lens. The principal point (P) is always at the intersection of the optical axis of the camera lens and the image plane. The nadir (N) is the intersection of the vertical plumb line through the centre of the lens and the image plane. On a true vertical airphoto, the principal point and the nadir are the same point but on an oblique airphoto they are at different positions on the image plane.

Obliqueness in Airphotos

Because photographs are perspective views, all airphotos are subject to radial or relief displacement which causes objects in the image to be displaced outward from the nadir of the image. Displacement increases with the height of the object and distance from the nadir. The following diagram illustrates the radial displacement of a series of hydro poles in a true vertical airphoto. The pole that lies directly below the camera lens is seen in plan view. Poles that lie close to the nadir (principal point in a vertical airphoto) are only slightly displaced while poles further from the nadir are displaced greater distances on the image. The bottom of the pole is closer to the nadir than the top of the pole. The same effect can be seen in images of urban areas in which the tops of buildings are displaced outward relative to the base of the buildings.

Radial Displacement

Radial displacement is a source of distortion and can result in tall objects that are close to the nadir hiding objects that are further away from the nadir. Radial displacement can also make 3-d viewing difficult if objects appear too dissimilar in successive images. This is especially a problem if the scene contains tall objects. Nevertheless, radial displacement can be useful. Radial displacement of objects results in the sides as well as the tops of objects being visible in the airphoto. This can facilitate interpretation since objects such as office buildings and apartment buildings that may be difficult to distinguish from a plan view may be distinguished based on the appearance of the sides of the buildings, e.g. whether or not there are balconies, which are more likely to be found on an apartment building. Since radial displacement is always outward from the nadir of the image, we can locate the nadir by finding the intersection of lines showing the direction of object displacement, e.g. lines representing the corners of buildings that have vertical walls. As will be discussed in the next section, we can also use radial displacement as a means of calculating the heights of objects in the image.
Distortion due to scale changes, obliqueness and radial displacement can make it difficult to transfer detail from airphotos to maps. This is most likely to be a problem in mountainous terrain where there is significant local relief. The effect is to dramatically change the shape of objects as they appear in successive images along a flight line.

Distortion in Airphotos

The above diagram illustrates an extreme case in which successive images along a flight line have been taken from opposite sides of a mountain ridge. In the left-hand image, side A of the mountain occupies most of the image while side B is a narrow sliver. The opposite is true in the right hand image. The extreme difference between the two images will make it difficult to view this pair of images in three dimensions and will also make it difficult to represent the mountain ridge accurately on a map. This problem can be minimized by: flying at higher altitudes to minimize scale variations and radial displacement (relief distortion); flying along valleys; using more overlap so that you have more principal points and more images to compare.

Height Calculations

The height of objects in airphotos can be calculated using two different methods: the radial displacement method and the shadow length method.
The logic of the radial displacement method is illustrated in the following diagram. The vertical line PQ represents an object, e.g. a building, whose height (h) we want to calculate. On the image, the building appears as the line aq. Because of radial displacement of objects in the image, the top of the building is displaced outwards on the image relative to the base of the building. D is the length of the side of the building on the image. R is the distance from the nadir (n) of the image to the top of the building on the image. In this example, we are assuming that we have a vertical airphoto so the nadir and principal point are the same. The triangles ONA, PQA and Ona are similar triangles (same proportions but different sizes). Therefore, h/H = D/R. Thus the height of the building can be calculated using: h = H*D/R.
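As a worked sketch of the radial displacement method, the short Python function below applies h = H*D/R; the measurement values are hypothetical and would normally be taken from the photo with a ruler or comparator.

    def height_from_displacement(flying_height_m, displacement_mm, radial_dist_mm):
        # h = H * D / R, where D is the displaced length of the object on the
        # image and R is the radial distance from the nadir to the object's top.
        return flying_height_m * displacement_mm / radial_dist_mm

    # Hypothetical measurements: H = 1500 m, D = 2.0 mm, R = 80.0 mm
    print(height_from_displacement(1500, 2.0, 80.0))   # 37.5 m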

Object Height Using Radial Displacement Method

An alternative method for calculating object heights is based on the shadows cast by the object on the image. Objects that are located very close to the nadir will have little radial displacement, making estimation of height by the radial displacement method prone to measurement errors. However, objects on images taken under clear sky conditions do cast shadows, regardless of their position on the image. Although we cannot directly measure the height of objects on the image, we can measure the length of their shadows and can use the shadow length of the object to calculate its height provided that we know the solar angle at the time the image was taken. Airphotos include a clock that gives the time of exposure in Greenwich Mean Time. We can convert this into local time if we know the longitude of the object. To get the solar angle, we would need to know the latitude of the object and the date of the image. There are now several web sites that have calculators that determine sun angle for a given location and time of day. The tangent of the solar angle is equal to the height of the object divided by the length of its shadow. Thus if we know the solar angle (a) and the length of the shadow (l), we can calculate the height (h) of the object as h = l*tan(a).
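A minimal Python sketch of the shadow-length method, assuming the solar elevation angle has already been obtained from one of the calculators mentioned above (the numbers are hypothetical):

    import math

    def height_from_shadow(shadow_length_m, solar_angle_deg):
        # h = l * tan(a), where a is the solar elevation angle at exposure time
        return shadow_length_m * math.tan(math.radians(solar_angle_deg))

    # Hypothetical example: a 20 m shadow with the sun 40 degrees above the horizon
    print(height_from_shadow(20.0, 40.0))   # about 16.8 m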

Object Height Based on Shadow Length

Recognition Elements

Airphoto interpretation is the process of viewing airphotos, identifying geographic features represented in airphotos based on image characteristics, and relating image characteristics to known ground conditions in order to obtain information about things you can't see in the airphoto. For example, an experienced interpreter can distinguish between high and low income areas on an airphoto based on looking at lot and building size and on associations between features such as presence of swimming pools, lots backing onto a golf course, etc.
Several image characteristics may be used to identify features and interpret ground conditions. These include pattern, shape, tone, texture, shadow, associated features, and size.
Patterns can help to identify natural, agricultural and urban features. Natural patterns often reflect surficial bedrock geology or dominant geomorphic processes. For example, evidence of glaciation may be found in scraping of the topsoil from bedrock or in depositional features such as drumlins or moraines. Patterns can also be used to differentiate agricultural features. For example, orchards and vineyards show distinctive spatial patterns. Fields subjected to circular irrigation are also clearly evident on airphotos, as are settlement patterns derived from splitting large blocks of land into smaller farms. In urban landscapes, patterns can help distinguish between residential, commercial and industrial areas and may even allow you to differentiate residential areas based on their age.
Shape is particularly important in interpretation of urban images. Shapes can help distinguish between different building types. Roof shape often provides a clear indication of the type of structure which may help in identifying its function.
Tone can be a useful image characteristic but can also be problematic. Tone tends to vary too much across the image, in part because tone is affected by shadows of objects in the image. Airphotos are usually taken in late morning, so the sun angle is typically from the southeast. Because of radial displacement of objects from the nadir, we see the sides of objects as well as the tops of objects in the airphoto. However, in the southeast quadrant of the image, we see the shadowed northwest sides of objects, producing a darker tone, while in the northwest quadrant, we see the sunlit sides, producing a lighter tone. This variation in tone can make interpretation more difficult. Tone may be used to delineate drainage networks since wetter soils will have a darker tone, but darker tones could also be caused by the presence of organic soils.
Texture is particularly important in interpreting vegetation types. Not only is it possible to distinguish between broad forest classes such as deciduous vs coniferous forest, but an experienced interpreter can also distinguish varieties of trees, e.g. red maple vs sugar maple or cherry trees vs peach trees, based on the texture of the image. Detailed information about forest stands can be interpreted from airphotos, as has been done in producing Ontario's Forest Resource Inventory (FRI) maps. These maps are derived primarily through image interpretation with some field checking and describe the age and species composition of individual forest stands.
Shadows can reveal the types of structure or allow differentiation of different types of trees. Shadows are more pronounced on low sun angle photographs, making identification of feature types easier. However, shadows may hide detail in the image and affect the tone of the image which may make interpretation more difficult.
Many types of features can be easily identified by examining associated features. For example, a public school and a high school may be similar flat roofed building structures but it may be possible to identify the high school by its association with an adjacent football field and track. Similarly, a light industrial building and a shopping plaza may be difficult to distinguish based on the building structure type but the shopping plaza will be associated with a larger parking area than the industrial building.
The size of objects can also aid in interpretation. A cemetery and a campground may appear similar on an airphoto image since both show a regular spatial pattern of paths/roads and rectangular objects. In this case, the size of the objects can be used to aid in interpretation, although it may be necessary to determine the scale of the image to arrive at the correct interpretation.

3-D Airphoto Interpretation

Because of the overlap between successive airphotos along a flight line, it is possible to view airphotos stereoscopically, i.e. in three dimensions. However, stereoscopic viewing is limited to the area of overlap between the images.
Stereoscopic viewing is based on binocular vision. Each of our eyes sees a scene from a slightly different perspective. Our brains reconstruct the two images recorded by our eyes into a three dimensional view of the scene. The same thing is possible with airphotos (or other images) provided that when we view the airphotos, each eye is focused on a single image.
There are several methods that can be used to ensure that each eye sees only one of a pair of images. Early 3-D movies relied on the use of anaglyphs. To see the movie in 3-D, the audience was required to wear glasses with red and green lenses. The coloured lenses filter out different colours, so each eye sees a different image which the brain reconstructs into a perspective view. Polarized light and projectors operate in a similar way. By changing the direction of polarization, each eye views a different image.
In airphoto interpretation, stereoscopic viewing is usually assisted by the use of pocket or mirror stereoscopes. Both operate on the same principle. Mirror stereoscopes have the advantage of being able to view larger images than is possible with a pocket stereoscope, which is limited by the approximately 5 cm distance between our eyes. We look at a pair of overlapping airphotos through lenses that force each eye to see only one of the pair of photos. Once again, our brain reconstructs the three dimensional view from the pair of images.

Pocket Stereoscope


Mirror Stereoscope

Depth perception is a function of the parallax angle, which is the angle between the eyes and an object in a pair of stereo images. The parallax angle decreases with distance from the object. Because of radial displacement of objects in the image, the top of an object appears to be at a different depth than the bottom of an object.

Parallax Angle

In setting up airphotos for stereoscopic viewing, care must be taken to avoid pseudoscopic vision. Pseudoscopic vision can occur in two ways: if the order of the airphotos is reversed or if the shadows in the image point away from the observer. Both of these conditions will cause the 3-D image to appear to be inverted.

Pseudoscopic Vision

A final problem with three dimensional viewing of airphotos is vertical exaggeration. Objects in the image appear to be taller than in reality and slopes appear to be steeper. This exaggeration can sometimes aid in interpretation but is somewhat disorienting to inexperienced viewers. Vertical exaggeration occurs because of the difference in geometry when taking the airphotos and when viewing the airphotos. Vertical exaggeration varies with camera focal length and % overlap between successive images. Vertical exaggeration can be calculated as:
  • VE = (B / H) / (b / h)
  • where: B is the air base; H is the height of the aircraft above the ground; b is the eye base (approximately 6 cm) and h is the distance from the eye at which the stereo model is perceived (approximately 45 cm)
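As a quick illustration of the formula above, the following Python sketch uses the eye base and viewing distance quoted in the definition and a hypothetical air base and flying height.

    def vertical_exaggeration(air_base_m, flying_height_m,
                              eye_base_m=0.06, viewing_distance_m=0.45):
        # VE = (B / H) / (b / h), with b and h defaulting to the values above
        return (air_base_m / flying_height_m) / (eye_base_m / viewing_distance_m)

    # Hypothetical stereo pair: 920 m air base, 1500 m flying height above ground
    print(vertical_exaggeration(920, 1500))   # about 4.6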

Vertical Exaggeration


Multi-Concept

Many applications of airphoto interpretation require interpretation of multiple images. This can include use of multi-scale images, multi-temporal images and multi-spectral images.
Multi-scale interpretation requires a series of images at different scales, taken at the same time. Although simultaneous acquisition is difficult, if not impossible, it is often possible to acquire images from different sources that were taken at approximately the same time, i.e. within a few days of one another. Multi-scale images could include satellite-based Landsat MSS, Landsat Thematic Mapper or SPOT images, airborne MEIS or CASI images, and airphotos taken from different flying heights or using different camera lenses. In general, in interpreting multi-scale images, we use the larger scale images to interpret smaller scale imagery. Alternatively, smaller scale imagery may be used for reconnaissance purposes and larger scale imagery for more detailed analysis within selected sub-areas of the smaller scale image.
Multi-temporal images are used to analyze landscape change over time. Examples could include examining changes in river systems or sand dunes, monitoring crops over a growing season to forecast crop yields, or monitoring urban growth. In these types of application, we are using images of the same area acquired at different points in time.
Multi-spectral imagery is often used to aid in interpretation of specific types of features. For example, colour IR film clearly distinguishes water from land and is useful at distinguishing between different vegetation types which may be hard to interpret from normal black and white or colour airphotos. In this case, we can select spectral bands that are best suited to identifying the types of features we are interested in. We can also combine spectral bands to create a new index that may be more revealing than the individual bands alone.

Applications

There are numerous potential applications of airphoto interpretation. Airphoto interpretation has been widely used as the basis for land use classification and mapping, and for mapping changes in land use over time. In developing countries that often do not have reliable population databases, airphoto interpretation can be used to estimate housing density. By calculating the housing density for representative sample areas within an airphoto image, reliable estimates of housing density can be obtained for other similar areas in the image. If information is available on average household size, then this method can be extended to produce estimates of population density.
Airphotos have often been used in transportation studies and can be used to identify vehicle types, estimate traffic flows, identify parking problems on city streets, estimate parking lot usage, and even to measure the speed of vehicles on a highway.
Airphotos are regularly used in the aftermath of natural disasters such as earthquakes, volcanic eruptions or floods, to guide relief efforts. Insurance companies also make use of airphotos to assess damage and verify insurance claims.
Some municipalities use airphotos to identify building code violations and enforce compliance with permitting procedures. Most municipalities require building permits for any construction project larger than a small backyard shed. New construction can be identified on an airphoto and permit records can be checked to verify that a building permit was issued for the project. This type of application requires large scale imagery such as 1:5,000.
Airphoto interpretation has often been used to aid in locating businesses or public facilities such as schools, fire stations or libraries. By specifying a set of criteria that represent desirable locations for the business or public facility, airphoto interpretation can be used to identify sites that satisfy project requirements. In a similar manner, airphoto interpretation can be used to do avoidance screening. The objective here is to identify areas where development cannot occur. This could include areas of steep slopes, organic soils, buffer zones around marshes, rivers, shorelines or top of steep slopes, ecologically sensitive areas, conflicting land uses, class 1 and 2 agricultural land, or gravel deposits. An experienced interpreter can quickly identify these constraint areas on an airphoto, often by tracing their outlines on an acetate overlay. While this type of analysis is increasingly being done using geographic information systems, manual airphoto interpretation can be much faster than the time required to develop the GIS database.

ELEMENTS OF AERIAL PHOTOGRAPHY

Aerial photographs have been a main source of information about what is at the Earth's surface almost since the beginning of aviation more than 100 years ago. Until space imagery became available, these photos were the principal means by which maps were made of features and spatial relationships on the surface. Cartography, the technology of mapping, depends largely on aerial/satellite photos/images to produce maps in two dimensions or three (see next Section). Aerial photos are obtained using mapping cameras that are usually mounted in the nose or underbelly of an aircraft that then flies in discrete patterns or swathes across the area to be surveyed. These two figures show a camera and a cutaway indicating its operation:
A typical camera used to obtain aerial photos.
Cutaway diagram (simplified) showing a mapping aerial camera; the film is advanced automatically on reel spindles.
A variant of this camera system is the multispectral camera (also discussed on page 11-1). This type uses separate lenses, each with its own narrow band color filter, that are opened simultaneously to expose a part of the film inside the camera. Here is one such camera developed for use in the Skylab space station program:
The six band Skylab multispectral camera.
Aerial photos are taken from a variety of platforms: airplanes; helicopters; unmanned drones; balloons; kites; tall buildings. For the most common platform - airplanes - most cameras are mounted in the underside of the aircraft. Propeller or JetProp aircraft are preferred, for two reasons: 1) they fly slower, allowing easier film advance; 2) they cost less to operate. This photo shows two such aircraft used by NOAA in its remote sensing programs:
The Dept. of Commerce's principal vehicles used in aerial photography: The larger aircraft is the Lockheed turboprop WP-3D Orion; the smaller plane is the GulfStream JetProp Commander.
In previous sections, we have employed aerial photography to look closer at areas of which we had satellite based images (such as Morro Bay in Section 1). In fact, satellite image interpretation is in essence an extension of the concepts underlying aerial photography, taken to higher altitudes that allow coverage of larger pieces of real estate. Space remote sensing uses devices that, while much more costly to build and operate, rely on the same physical principles to interpret and extract information content.
Most textbooks on remote sensing are outgrowths of earlier texts that once dwelt dominantly on acquiring and interpreting aerial photos. New books still include one to several chapters on this basic, convenient approach to Earth monitoring. We shall allot only limited space to explore some essentials of this expansive topic in the present section and the next. In Section 11, we consider photogrammetry as the tool for quantifying topographic mapping and other types of mensuration. For anyone seeking more details about aerial photography/photogrammetry, we recommend consulting the reading list in the RST Overview (first page), and/or going to Volume 1 (Module 1) of the Remote Sensing Core Curriculum. Below is a recommended entry from that reading list.
Avery, T.E. and Berlin, G.L., Fundamentals of Remote Sensing and Airphoto Interpretation, 6th Ed., 1992, MacMillan Publ. Co., 472 pp.

Examples of Aerial Photos

An aerial photo is just a black and white (b & w) or color "picture" of an area on the Earth's surface (plus clouds, often), either on print or on transparency, obtained by a film or digital camera located above that surface. This camera shoots the picture from a free-flying platform (airplane, helicopter, kite or balloon) some preplanned distance above the surface. Two types are distinguished by the angle of view relative to the surface. The first, oblique photography, snaps images from a low to high angle relative to vertical. The example below is the most common type (high oblique), showing Lyttleton Harbor, near Christchurch, on South Island of New Zealand, with more detail in the foreground and a panorama with reduced detail in the background.
Low oblique aerial photo of Lyttleton Harbor, Christchurch, South Island, New Zealand.
10-1: For the moment we shall define resolution in a photograph as the size of the smallest object whose tonal appearance is notably different from its surroundings or background; technically there is a more precise definition, given in terms of the minimum spacing between two dark lines embedded in a light background that can be visually separated. How does spatial resolution vary in this oblique photo? ANSWER
The second type of aerial photos is oriented vertically, that is, it results from pointing the camera straight down (to the nadir, at the photo center point) to show the surface directly from above. The size of the photo and the sizes of the features represented within the photos can vary depending on the following: the camera's optical parameters, the surface area of the exposed film (frame size), the subsequent printing sizes (e.g., enlargement), and the altitude of the camera platform.

Image Scale

The ratio of the size of any object, feature, or area within the photo to its actual size on the ground is called the scale (defined and discussed on the third page of this Section).
We now present a series of aerial photos, acquired at different times and scales, most covering areas that lie within this June, 1977, Landsat image (scale = 1:1,000,000) of south-central Pennsylvania, a scene we have looked at in earlier sections, and especially during the Exam at the end of Section 1.
 False color Landsat-1 image of central Pennsylvania, including Harrisburg, originally printed at a 1:1,000,000 scale.
This scene contains heavily forested fold ridges. Some of the bluish-black areas are defoliation patches caused by the Gypsy Moth. Other areas near top center are surfaces covered with black dust from the Anthracite coal strip mining in fold valleys. Bluish areas in the wide valleys are fields still bare or with early stage growth. The Susquehanna River, which empties into the top of Chesapeake Bay, bisects the image. Near the left center, a blue pattern with spokes is Harrisburg, the state capital, with York below it and Lancaster to the right. Next, we show a standard medium-scale (moderate area of coverage but with considerable detail [individual buildings still visible]), black and white aerial photo of part of Harrisburg. The scale value given is that of the original photo before it was reduced to your screen size; quoting this value helps to appreciate what can be seen (resolved) at that scale, no matter what the eventual picture size becomes through enlargement or reduction.
 Black and white vertical aerial photo showing the downtown part of the city of Harrisburg (right) and towns across the Susquehanna River to its west.
Harrisburg (Scale = 1:100,000)
The number in the upper left corner of this black and white photo of Harrisburg is the date; on the right is the Mission number; and in center is a number denoting the flight line and particular photo within that line. Individual fields, smaller rivers, bridges, and roads are easily picked out.
10-2: One meaning of scale is this: 1 inch on the photo equals X inches on the ground. For the 1:100,000 photo above, determine how many feet are represented by an inch (on the photo, or in this case, the image on your screen) and likewise how many mile(s) extend across that inch. ANSWER
The next photo is large scale (small coverage area and high resolution for identifying features smaller than buildings, e.g., cars) and covers an area within Harrisburg, just east of the previous photo, bisected by Interstate 83. Note particularly the lake-filled quarry (left center).
Aerial photograph of urban Harrisburg (scale=1:4000).
Urban Harrisburg (1:4,000)
10-3 In which other photo on this page can you find the quarry lake? For the above photo, what is/are the number of miles represented by an inch on the screen? Make an educated guess as to the effective resolution of this 1:4000 photo; how did you do it? ANSWER
In the lower right corner of the Landsat image is an agricultural area along the Chesapeake and Delaware Canal. Its expression in a moderately large-scale, natural-color photo is shown here:
Color aerial photograph along the Chesapeake Bay and Delaware Canal (scale=1:24,000).
Natural Color Photo (1:24,000)
At a still smaller scale, we next show a false-color, IR image of the Susquehanna Water Gap passing through Blue Mountain just north of Highway 81 (bottom of the picture) that, to the east, runs along the north side of Harrisburg.
Color IR aerial photograph of the Susquehanna Water Gap near Harrisburg (scale=1:8000).
Color IR Photo (1:8,000)
Much the same area is part of a small scale (large area coverage with reduced detail) aerial photo obtained from a NASA RB-57 aircraft, flown at an altitude near 15,200 m (about 50,000 ft) on February 5, 1974. On this date, the color-IR photo shows limited red tones from fields in which winter wheat is growing. The image is 25.2 km (15.7 mi) on a side (635 square km; 246 square mi).
Color infrared aerial photo, taken from the NASA RB-57 high altitude aircraft, again showing Harrisburg, and towns to the west.
High Altitude Aerial Photo (1:141,000)

10-4: There is an easy way to determine whether a scale is large, medium, or small, by looking at its stated value, e.g., 1: 30,000. Propose a simple rule for this. ANSWER
A word of caution at this point. Because of shadow orientation and other factors, features in a photograph (much rarer in space imagery) that represent relief (differences in elevation), such as hills, can appear to the eye as inverted, i.e., a high appears as a valley, a valley as a hill. The best example the writer has found is a group of mesas and troughs on Mars. On the left is the correct expression (mesas look higher); on the right is the inverted case. If you ever see an aerial photo that does not look right (expected highs show up as lows), just reorient the photo (a few people may need to reorient their brain).
Normal view of mesas and troughs on Mars. The same image when rotated 180 degrees causes the troughs to appear higher.
Among the most obvious features in a photograph are tones and tonal variations (as grays or colors) and patterns made by these. These, in turn, depend on the physical nature and distribution of the elements that make up a picture. These "basic elements" can aid in identifying objects on aerial photographs.
Tone (closely related to Hue or Color) -- Tone refers to the relative brightness or color of elements on a photograph. It is, perhaps, the most basic of the interpretive elements because without tonal differences none of the other elements could be discerned.
Size -- The size of objects must be considered in the context of the scale of a photograph. The scale will help you determine if an object is a stock pond or Lake Minnetonka.
Shape -- refers to the general outline of objects. Regular geometric shapes are usually indicators of human presence and use. Some objects can be identified almost solely on the basis of their shapes: for example - the Pentagon Building, (American) football fields, cloverleaf highway interchanges.
Texture -- The impression of "smoothness" or "roughness" of image features is caused by the frequency of change of tone in photographs. It is produced by a set of features too small to identify individually. Grass, cement, and water generally appear "smooth", while a forest canopy may appear "rough".
Pattern (spatial arrangement) -- The patterns formed by objects in a photo can be diagnostic. Consider the difference between (1) the random pattern formed by an unmanaged area of trees and (2) the evenly spaced rows formed by an orchard.
Shadow -- Shadows aid interpreters in determining the height of objects in aerial photographs. However, they also obscure objects lying within them.
Site -- refers to topographic or geographic location. This characteristic of photographs is especially important in identifying vegetation types and landforms. For example, large circular depressions in the ground are readily identified as sinkholes in central Florida, where the bedrock consists of limestone. This identification would make little sense, however, if the site were underlain by granite.
Association -- Some objects are always found in association with other objects. The context of an object can provide insight into what it is. For instance, a nuclear power plant is not (generally) going to be found in the midst of single-family housing.
These elements can be ranked in relative importance:
The elements of interpretation, ranked in complexity and value.
Since aerial photography is dependent on photographs, we need, at this juncture, some basic insight into how a photo is made.

The Photographic Process

Before beginning this page, a review of the answer to the first question, concerning the human eye, in the Quiz at the end of the Introduction may be helpful. With this in mind, use this diagram to compare the components and functions of the eye with that of a photo camera:
Comparison of the operation of the human eye in obtaining an image with the function of a film camera in recording an image.
Black and white (b & w) photographs start with exposing a light-sensitive film to incoming electromagnetic radiation (light), selected from the spectral range between ultraviolet through visible, and into the near infrared. The optical system of the camera focuses the light, reflected from the target, onto the focal plane (plane of focus). The film is held flat at the focal plane, and the light exposes positions on the film in the same spatial arrangement as the surfaces within the scene from which the photons were reflected. The recorded exposure is a function of many variables, of which the three principal ones relate to the scene, the camera, and the film:
1) The scene usually contains various objects that contribute their spectral character (reflected wavelengths) and intensities of the reflected radiation.
2) In the camera, we can vary the lens diameter, D, and the effective size of the aperture opening, d.
The aperture depends on the diaphragm width for admitting light. An open/shut shutter controls the duration of light admission. The optical characteristics of the lens vary the distance from the lens to the film (focal length, f) at which the focus is sharpest. This light-gathering system is adjustable for film response (ISO, formerly ASA values);
3) In the film, its properties vary, e.g., which wavelengths it is most sensitive to, and under which conditions it develops best as a negative and is then printed.
For most cameras, the four variables that we normally adjust are:
1) The focus, by moving the lens back and forth, relative to the focal plane, so that the target image is in focus in the plane of the film;
2) The F-stop, defined as f/d, the focal length divided by the effective diameter of the lens opening. Typical values of the F-number are F/1 (the lens opening is the same size as the focal length), F/1.4, F/2 (the lens opening is half the focal length), and so on through F/2.8 to F/22. The denominator increases by approximately the square root of 2 (1.414...), so that each step up in F-number halves the amount of light admitted. Thus the F-number increases as the lens diameter decreases, and therefore we photograph dark scenes at low F-stops, e.g., F/2, and bright scenes at high F-stops, e.g., F/32.
3) The shutter speed (typically, in a sequence from 1/2000, 1/1000, 1/500, 1/250, 1/30, 1/15, 1/8, 1/2, 1/1, 2, 4 ...., in seconds), which controls film exposure times;
4) The film speed, i.e., the exposure levels over which the film responds. The ISO (ASA) rates film properties. High ISO numbers refer to "fast" film (high speed), e.g., ISO 1000, which requires less radiation (hence, a shorter exposure time or a smaller aperture) to achieve a given response. "Slow" film, e.g., ASA 64, requires a longer exposure or a larger aperture, but provides higher resolution. For aerial film, the AFS (Aerial Film Speed) is more commonly used.
One general equation for exposure is:
E = (s × d² × t) / (4 × f²)
where
  • E = exposure, in joules (J) per mm²
  • s = intrinsic scene brightness, in J per mm² per second
  • d = diameter of the lens opening, in mm
  • t = exposure time, in seconds
  • f = lens focal length, in mm
(see Ch. 2 in Lillesand & Kiefer, 2000). Changes in any one or combination of these variables brings about variations in photo response characteristics. These differences can be favorable and we actuate them by adjusting one or more camera settings.
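A minimal numeric sketch of this equation in Python (with made-up values, only to show how the variables interact) follows; note how doubling the lens opening diameter, i.e. two F-stop steps from F/2 to F/1 on a 50 mm lens, quadruples the exposure.

    def film_exposure(scene_brightness, lens_diameter_mm, time_s, focal_length_mm):
        # E = s * d^2 * t / (4 * f^2): s in J per mm^2 per s, d and f in mm, t in s
        return scene_brightness * lens_diameter_mm**2 * time_s / (4 * focal_length_mm**2)

    # Hypothetical values: a 50 mm lens at 1/250 s
    print(film_exposure(1e-6, 25.0, 1/250, 50.0))   # d = 25 mm (F/2)
    print(film_exposure(1e-6, 50.0, 1/250, 50.0))   # d = 50 mm (F/1), four times larger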
10-5: Given a camera in which you maintain control of the focal length, exposure time, and F-Stop (the old-fashioned or professional kind, not like those today that have automated the adjustment of these settings), and assuming it has a built-in light meter, enumerate the steps you would take in getting ready to take a picture of a) a nearby person, and b) a distant mountain range, on a sunny day and again near sunset. ANSWER
Black and white film consists of a base or backing, coated by an emulsion composed of gelatin, in which are embedded tiny crystals of silver halides (commonly silver chloride, AgCl) together with wavelength-sensitive dyes. The dyes respond to radiation from segments of the electromagnetic spectrum, such as ultraviolet, visible, and visible/near IR. Special films respond to photons from shorter or longer wavelengths; for example, X-ray film. When a number of photons strike a halide crystal, they knock loose electrons from some of the silver (Ag) atoms, ionizing them. The number of electrons thus activated depends on the brightness (intensity) of the radiation. We can control the part of the spectral range by using color filters over the lens. These filters admit radiation from limited segments of the spectrum. This process is a photochemical reaction which conditions the halide grains for later chemical change, forming an intermediate latent image (invisible but ready to appear when we develop it).
Developing begins by immersing the film in an alkaline solution of specific organic chemicals that neutralize the electrons and reduce Ag+ ions into minute grains of black silver metal. The number of such metallic grains in a given volume determines the film (negative) density. For parts of the emulsion receiving more light, the density (darkness) of the film is greater. In the developing process, we must stop the ion conversion at some point using an acidic stop bath. We remove any silver halides that remain undeveloped by chemical fixing. Volumes in the thin film that saw little exposure (fewer photons) end up with minimal silver grains and thus appear as light and clear in the film negative. We can control and modify the development process, and hence relative densities in the negative, by changing such variables as solution strengths, developer temperatures, and times in each processing step.
Next, we must use the negative to make a separate, positive, black and white print, in which dark tones correspond to darker areas in the scene, and light tones to light areas. We do this during the printing process. A print (or a positive transparency) consists of an emulsion, backed (in a print) by paper. We pass white light through the negative onto the print material. Clear areas allow ample light to pass and strike the print, which produces high densities of dark (silver-rich) tones. Thus, the initial low levels of photons coming from the target (relative darkness) ultimately produce a print image consisting of many silver grains that make the areas affected dark. Bright target areas in turn, being represented by dark areas in the negative that prevent light from passing, are expressed as light (whitish to light gray) tones in the print (little silver, so that the whiteness of the paper persists). Once again, we can control the relative levels of gray, or increasing darkness, in the development process by changing the same variables as above, by modifying exposure times, by using print papers with specific radiation responses, and by using filters with different spectral responses (minimizing passage of certain wavelengths) or light transmission. Thus, we can choose different average tonal levels of the print, and, more important, we can adjust the relative levels of gray (tones) to present a pictorial expression, called contrast. Contrast determines whether a scene with variable colors and brightness appears flat or presents wide ranges of light-dark areas that aid in discriminating features. Contrast is related to how rapidly density changes with the logarithm of exposure. This relationship is plotted as the Hurter-Driffield (H-D) curve of density against log exposure, which is a straight line over a middle range of exposures but becomes curved at high and low exposures.
We can expose b & w films under a condition that converts them into multispectral images. We do this by using color filters that pass limited ranges of wavelengths (bandpass filters) during exposure. As we explained in the Introduction, a red filter, for example, passes mainly radiation in the red and nearby regions of the visible spectrum. Reddish objects produce high exposures that appear in dark tones on a negative and reappear as light tones in b & w prints or in red on color positive film. We describe why this is so different from the response of b & w film in the following paragraphs. Green appears as dark in a b & w multispectral image, representing the red region, and as dark or subdued green in a color multispectral version. We can project multispectral positive transparencies for different color bands using several color filters onto color print paper to produce natural or false color composites, as described in the Introduction.
As an aside, transparencies representing different bands can be combined in a projection system, using filters to determine the colors sought, that result in a color composite. Commercial systems are available, as exemplified by this Color Additive Viewer made by International Imaging Systems, Inc. (their first prototype viewer was built for the writer during his early Landsat days at NASA Goddard).
A color additive viewer; the transparency film, with different bands, aligns any three multispectral bands, each with a unique color filter, into a single projection system that produces a color composite image on the viewing screen.
The use of filters to produce individual color photographs is one of two principal ways to make this product (the other uses multiple color-sensitive layers in the film itself). Much as Landsat and other systems utilize filters on the sensors to subdivide the light received into wavelength intervals (the bands), a multiple camera array will have filters of different bandpass intervals over the different lenses involved. Here is a plot that shows the spectral properties of such filters:
Spectral response curves for several filters.
How we use color film to produce color images involves some different concepts, although many of the same factors and mechanisms are still valid. Starting with the three additive primary colors, red, green, and blue, or the subtractive primary colors, yellow, cyan and magenta, we can make other colors by using the principles of either the color addition or the color subtraction process. Look at these diagrams:
Color Models
Color additive and subtractive models, using overlapping color circles.
Additive Color Model
Subtractive Color Model
Color addition works when we superimpose the primary colors on one another. For example, if we shine a green light and a red light on the same spot on a white wall, we will see some shade of orange or yellow, depending on the relative intensity of the red and green illumination. If we add a third blue light to the spot, we will see white or a shade of gray. Computer displays work this way. To create a color, we can typically choose a number between 0 and 255 to indicate how much of each of the three primary colors we want. If our display board has sufficient memory, we will have 255³ (16,581,375) colors to choose from.
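A tiny Python sketch of additive mixing on a display (purely illustrative; real displays also apply gamma correction, which is ignored here):

    def mix_additive(*lights):
        # Sum the (R, G, B) channels of each light and clip to the 0-255 range
        return tuple(min(255, sum(light[i] for light in lights)) for i in range(3))

    print(mix_additive((255, 0, 0), (0, 255, 0)))               # red + green -> yellow
    print(mix_additive((255, 0, 0), (0, 255, 0), (0, 0, 255)))  # all three -> white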
In subtractive color, we use filters to remove colors. For example, a yellow filter removes colors other than yellow, as do cyan and magenta filters for their own colors. If one superimposes all three filters, little or no visible light gets through, so either black or dark gray results. By combining pairs of the subtractive primary colors, we can create each of the additive primary colors. Magenta and yellow produce red. What do cyan and magenta, and yellow and cyan, produce?
The principles of color subtraction apply to color-sensitized film. This film consists of emulsion layers containing silver chloride treated with light sensitive dyes, each responding to a limited wavelength range. These layers act as subtractive filters during development. Thus each layer of the film responds to different sections of the scene's spectrum. These layers are stacked, respectively, as follows: a blue-sensitive layer on the top, then a yellow filter layer (to screen out ultraviolet and blue from passing into the next layers; omitted from the diagrams below), and finally, green- and red-sensitive layers.
Effects of color filters that permit transmittance or absorptance of light of different wavelengths.
From F.F. Sabins, Jr., Remote Sensing: Principles and Interpretation. 2nd Ed., © 1987. Reproduced by permission of W.H. Freeman & Co., New York City.
Referring to the above diagram, when a shade of red passes through a color layer sensitized to cyan (a blue-green, the complementary color to red; the sum of any primary color and its opposing complement always equals white), its absorption activates the dye/silver grains in that layer to produce, in a negative, cyan tones in areas associated spatially with reddish objects in the scene. In color film, the three subtractive color layers stack together (a fourth serves a special purpose, described below) on top of a clear base. To guide you in reasoning through production of other colors, check this schematic diagram:
 Schematic diagrams representing both color negative and positive film, showing how different color emulsion layers respond to light of different wavelengths; the process is two-step when prints are the final product (a different response pattern determines color transparencies).
From F.F. Sabins, Jr., Remote Sensing: Principles and Interpretation. 2nd Ed., © 1987. Reproduced by permission of W.H. Freeman & Co., New York City.
Thus, in a similar manner, light from a blue subject reacts with the yellow layer to produce a yellow shade (red and green make this complementary color) for its area on the negative.
10-6: To test your understanding, from the above diagram, you set up the response for green objects (magenta, a bluish-red, is a mix of red and blue). Also look at the diagram just below. No doubt you can see an obvious rule working here: consider, then state it. ANSWER
Additive and Subtractive Color Triangle diagram.
From F.F. Sabins, Jr., Remote Sensing: Principles and Interpretation. 2nd Ed., © 1987. Reproduced by permission of W.H. Freeman & Co., New York City.
As evident in the diagram, each primary color activates the layer containing the subtractive color opposite it. Several other rules or observations apply:
1) A given primary color does not directly activate the other two film layers.
2) Note that yellow + magenta = red. The red is common to each of these subtractive colors, with blue and green being filtered out. The same rationale applies to the other two combinations of subtractive colors.
3) White light exposes all three subtractive layers in the negative. The sum of these three layers (the center of the color diagram on the right) on a positive is black. Conversely, black (absence of light) objects produce a clear (not colored) area in the three layers of film.
4) We must insert a fourth, special yellow filter layer below the yellow layer, because the dyes in the red and green sensitive layers below are also sensitive to blue, which this filter layer screens out and then dissolves away during developing.
To comprehend how to make a color print, follow this set of arguments: when white light passes through the color negative to initiate the printing, cyan areas transmit that light through the cyan layer of the print film (called the positive or reversal film) but not through the magenta or yellow areas, exposing each so that it assumes its color. Since the sum of yellow and magenta is red, during development, the print film is red in the areas that are cyan in the negative. The same line of reasoning applies to the magenta and yellow areas on the negative, with green and blue resulting. If the negative has yellow and magenta occupying the same areas on the two superimposed layers, the results will be green + green? = yellow, and so forth, for other non-primary colors. To reiterate, the blue from the cyan of the negative activates, first the yellow layer (sensitive to blue which is absorbed) and then the magenta layer (sensitive to green), but the cyan bottom layer is not sensitized by the cyan light (passes through), becoming clear during development. We can tailor this statement for each of the other two negative colors.
We can generate color transparencies by a similar color-reversal technique but without the need for a negative. First, we develop the exposed transparency film to cause it initially to act as a negative image (converting the sensitized silver chloride/dyes to color grains) in each of the three color emulsion layers. We then re-expose the film to white light to develop any remaining silver halide . This latent positive image is then chemically coupled or combined with color dyes to produce a positive color image in each layer. Next, we treat the film in a bleach, which, without affecting the dyes, converts silver into soluble salts and removes unused dye couplers, while removing the initial yellow filter layer. A red subject forms a magenta and a yellow image pattern on the green- and blue-sensitive layers. When white light projects through this transparency, yellow and magenta layers absorb blue and green respectively, allowing red to appear in the projected image where red should be spatially. (Likewise for the other colors.)
Other systems of color production have been devised. One mentioned briefly here is the IHS system, in which:
  • I = the color intensity or brightness,
  • H = the hue, comprised of a dominant wavelength, averaged from a limited range of adjacent wavelengths
  • S = saturation, the purity of the color relative to gray.
This system is sensitive to controllable transformations (computer-aided) that optimize and enhance color representations.



