All About Digital Earth Watch—Chapter 1


Introduction: The Power of Digital Images
Of our five senses, sight provides near-instantaneous information over a large range of distances and space. So it is no coincidence that people use a plethora of images to convey a rich variety of data: photographs, maps, drawings, paintings, movies, text documents, signs, etc.  

With the inventions of the computer and the Internet, images are now more than memories. Each contains millions of color intensity measurements organized spatially allowing us to measure the location, motion, orientation, shape, texture, size, chemical and physical properties of objects in the images. Image analysis software provides ready access to these data. There are nearly unlimited uses of digital image data. Every aspect of industry uses digital images creatively. The caption in the image to the right has examples of types of data that can be found in a digital image.


This picture shows the son and dog of one of the authors, taken many years ago. It holds a great deal of personal meaning for him, but it also contains a tremendous amount of data. Because the granite block of the rock wall they are walking on has a known length, the dimensions of the boy, dog, and nearby grass can be measured, as can the width of the tree trunk in the background; the colors of everything in the image may be analyzed as well.

Digital images offer a powerful way to share data in the Information Age. But how many of us really know how to harness the data? A primary goal of the NASA-funded project Digital Earth Watch (DEW) was to provide scaffolded activities and software for people to learn how to manipulate digital images and to analyze the types of data they contain:

  • spatial data (size, orientation, position, etc.),
  • spectral data (color), and
  • temporal data (change over time).

Below is a flow diagram of the ways to extract information from digital images using the free software, AnalyzingDigitalImages, developed for this project.

Consider using this as a checklist of your understanding of digital images. If a topic is not familiar to you, read the appropriate sections and use the referenced website activities and software.


Differences Between Paper-Based and Digital Images

When you first learned to paint, at some point you most likely mixed all of your paints together.  The resulting color? Something dark - a dark gray, brown, or even black (it depends on the type and variety of paint colors you had). Yet when white light from the Sun passes through a prism, we see a rainbow of colors. A rainbow of colors of light makes white; a rainbow of paints made black. This is the biggest difference between paper-based images and digital images: pigments/paints subtract light reflecting from their surfaces, while in digital images, separate beams of colored light are added together by your eye and brain to create color. 

Because colors are produced differently by pigments/paints and beams of colored light, different sets of primary colors are used to make the rainbow of colors.  In paints, it was red, yellow, and blue, but this has been updated to yellow, magenta, and cyan (this became important when color images were printed onto paper). In digital images, the primary colors are red, green, and blue.

Primary Colors of Light

Pigment:

c-m-y venn diagram

Light: 

r-g-b venn diagram

Notice that mixing two of the primary colors of either pigments or light produces the primary color of the other (look at the intersections of two circles of color above). To thoroughly understand the data in digital images, you must be familiar with how colors are created using red, green, and blue light in varying intensities.  If you aren't, try out the recommended activities suggested below.
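The additive mixing shown in the Venn diagrams above can be sketched in a few lines of code. This is a minimal illustration, not part of the ColorBasics software: colors are (red, green, blue) intensity triples from 0 to 255, and mixing two beams of light simply adds the intensities.

```python
# Sketch of additive mixing: combining full-intensity primary colors of
# light two at a time yields the primary colors of pigment, as in the
# Venn diagrams above. Colors are (red, green, blue) intensities, 0-255.

def mix_light(c1, c2):
    """Add two beams of colored light, capping each intensity at 255."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(mix_light(RED, GREEN))   # (255, 255, 0): yellow
print(mix_light(GREEN, BLUE))  # (0, 255, 255): cyan
print(mix_light(RED, BLUE))    # (255, 0, 255): magenta
```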

Recommended activities and software to learn about color with light:

  • Dueling Light Beams: Make colors by mixing colored light in a darkened room - Movies of the activities are available on this page if you can't get the materials together.

  • Compare how colors mix in paint and light using the "Compare Colors" tab panel in the ColorBasics software.  Recommended to use Chapter 1 of "ABCs of Digital Earth Watch Software" (http://www.globalsystemsscience.org/studentbooks/dew) or the Compare Colors page on this site.

  • When you feel proficient with making colors with light, have fun playing against the computer or another person using the "Play with Colors" tab panel in the ColorBasics software. The computer keeps track of how well you (and your opponent) are identifying the red, green, and blue intensities of the challenge colors.

  • Assessment: See how well you can make colors from red, green, and blue intensities using the "Test Yourself" tab panel in the ColorBasics software. TIP: You should not move on until you can correctly identify the red, green, and blue intensities for the 10 randomly generated colors in 15 tries or fewer.

NOTE: ColorBasics is available free (at http://www.globalsystemsscience.org/software/download)

What Is Color?

Color is a quality of visible light that is emitted, reflected, or transmitted from an object.  So first, what is light? Another name for light is electromagnetic (EM) radiation, which is a stream of photons emanating from a surface.  Photons are packets of energy that have characteristics of both waves and particles, but each photon contains a specific amount of energy. All the colors in the rainbow that humans see ...
red - orange - yellow - green - blue - violet
...are photons in a relatively narrow band of energies of the EM radiation, which is called visible light.  Photons of violet light have more energy than photons of any other visible color, and red photons have the least.  In terms of waves, red has the longest wavelength and violet the shortest.  

Photons of light striking an object have three possible fates:

  • Photons are absorbed by the object
  • Photons are reflected from the object
  • Photons are transmitted right through the object.

The light that reflects from a surface is greatly influenced by surface irregularities.  Most surfaces in our everyday world are quite rough when viewed microscopically, and these surfaces produce diffuse reflection, meaning you can't see an image in the reflection (unlike a mirror or smooth metallic surface).   Most objects we see and photograph have diffuse reflection of light from their surfaces.  Adding up the intensity of visible light that is absorbed, reflected, and transmitted will equal the intensity of the light falling on the object.  In other words, the percent light reflected, transmitted, and absorbed adds up to 100%.  Keeping this in mind while analyzing images allows one to assess material properties of the photographed objects (for example, the leaves below).
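The 100% rule above means that if you can measure two of the three fates of light, the third follows by subtraction. A minimal sketch, with made-up values rather than real measurements:

```python
# Sketch of the conservation rule above: for light striking an object,
# reflected + transmitted + absorbed percentages must total 100%,
# so the absorbed fraction can be inferred from the other two.
# The values below are illustrative, not measurements.

def absorbed_percent(reflected, transmitted):
    """Infer percent of light absorbed from reflection and transmission."""
    absorbed = 100.0 - reflected - transmitted
    if absorbed < 0:
        raise ValueError("reflected + transmitted cannot exceed 100%")
    return absorbed

# A hypothetical leaf in green light: reflects 20%, transmits 30%
print(absorbed_percent(20.0, 30.0))  # 50.0
```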

Photons and electrons

On the Basics of DEW>EM Spectrum page (http://dew.globalsystemsscience.org) there are links to a movie created by NASA showing five ways light is emitted (given off) by atoms and molecules. Of those, only two generate visible light: electron shell change and blackbody radiation.  Since objects need to be at several thousand degrees to emit visible light from blackbody radiation, most of the light we see day to day is light given off when electrons change from a higher to a lower orbital energy state. Conversely, when light is absorbed electrons move to a higher orbital energy state farther from the atom's nucleus.  Photons are absorbed only if they have the exact energy (the exact color) to move an electron to the higher energy orbital position. 

Color helps us determine the chemical composition of objects: the types of atoms and how they are bonded together in objects determine the colors of light that are absorbed.  

Visible light that isn't absorbed by an object does not cause the electrons to change their orbital position, but the photons can still interact with the electrons.  The electrons vibrate with small amplitudes for brief time periods, and the light is re-emitted as the same color in the form of reflected and/or transmitted light.  Light that isn't absorbed or reflected is transmitted through the object.

For a more thorough treatment of the physics of color, the book The Physics and Chemistry of Color by Kurt Nassau (1983) describes 15 ways color is created by atoms and molecules!


    
Examples of light reflected from a leaf (above, left) and light transmitted through leaves (above, right).
Since green light is both the predominant color reflected and transmitted,
it is the color that is least absorbed - and the one that is not used by plants for photosynthesis. 
The colors of light that drive photosynthesis (or that chlorophyll absorbs) are red and blue light.
See how foresters use this knowledge to
assess the health of plants (http://dew.globalsystemsscience.org/tools/plant-stress-detection-filters).

Recommended activities to study how light interacts with objects:

Explore how to use LEDs (Light Emitting Diodes) to measure the light reflected, transmitted, absorbed and/or emitted from objects in Chapter 5 of ABCs of Digital Earth Watch Software (http://www.globalsystemsscience.org/studentbooks/dew).




Color in Digital Photographs
Diagram of color pixels

A digital image is an electronic file of the numeric values of the color intensity of small tiles of uniform color (called pixels, a shortened form of picture elements) organized in a two dimensional field.  There are two general ways to consider how color is represented in the image. The first is that each pixel has three primary color intensities associated with it (red, green, and blue), and the pixel is at a specific location in the image.  The combined three intensities are most likely created by the chemical and/or physical properties of the object at this location in the image. 

It is easy to forget that a pixel is actually an area that has physical meaning relative to the objects in the digital image.  Since the color is uniform across the pixel, we can't see detail smaller than the size of the pixel.  This isn't unique to digital images; our eyes also have limited resolution.  For example, we can't see the stomates in a leaf or the hairs on the legs of a fly without a magnifying glass.  At some point, the detail becomes smaller than the sensors (rods and cones) in our eyes can resolve. 

Below is an example of what we would see looking at the same scene but with different size sensors, which create different size pixels.  Note that you aren't zooming into the image, just seeing more detail with smaller pixels.  In a sense, it is helpful to think about pixels as uniform color tiles that are often found in bathrooms and kitchens.  When organized, they can be used to make pictures.


Starting with a 2x2 pixel image, each image has 4 times as many pixels compared to the image on its left
(twice as many pixels high and wide). Notice how the amount of detail changes with smaller and smaller pixels.
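The progression in the figure above can be sketched in reverse: starting from a detailed image, average each block of values into one uniform tile. This toy example uses a made-up 4x4 grayscale grid rather than a real photograph:

```python
# A minimal sketch of pixelation: average each block x block region of
# a tiny grayscale "image" (a grid of 0-255 brightness values) into one
# uniform tile, larger pixels discarding the finer detail.
# The 4x4 grid below is made up for illustration.

def pixelate(image, block):
    """Average block x block regions into single uniform pixels."""
    h, w = len(image), len(image[0])
    out = []
    for r in range(0, h, block):
        row = []
        for c in range(0, w, block):
            vals = [image[r + i][c + j]
                    for i in range(block) for j in range(block)]
            row.append(sum(vals) // len(vals))
        out.append(row)
    return out

image = [[ 10,  20, 200, 220],
         [ 30,  40, 210, 230],
         [100, 100,   0,   0],
         [100, 100,   0,   0]]
print(pixelate(image, 2))  # [[25, 215], [100, 0]]
```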

The second way to consider how colors are organized in a digital image is that there are three separate layers of color intensities, red, green, and blue, each organized in a two dimensional field. It may be helpful to think of each layer as what you would see in a dark room where only light of that color is shining.  Below is an example of a hand that looks remarkably different in different lighting conditions.  We all look younger using red light, and as the wavelengths get shorter, more of the damage caused by the sun and aging is evident.

RGB color hand   red hand
green hand   blue hand


Recommended activities and software to learn about pixels:

  • Examine the same image with different size pixels. Try identifying the mystery pictures using the "Pixels" tab panel in the software, DigitalImageBasics.  NOTE: the software will also pixelate any image on your computer (works best with images that are 512 x 512), and it displays the images with the largest/fewest pixels first.  Move the cursor across the image to see the size of the pixel as well as its color based on red, green, and blue intensities.  Recommended to use Chapter 2 of "ABCs of Digital Earth Watch Software" (http://www.globalsystemsscience.org/studentbooks/dew). 
  • Turn the three color layers of any digital image on and off using the "Colors" tab panel in the software, DigitalImageBasics.  Move the cursor across the image to see the pixel's color based on red, green, and blue intensities.  Also, display the original to compare how the image has changed as the layers of color are manipulated.
NOTE: DigitalImageBasics is available free (at http://www.globalsystemsscience.org/software/download).


 

Digital Images Not Based on Visible Light

There are two types of digital images that are not based on visible light (so are not photographs): maps representing the presence and/or amounts of a variable and images based on electromagnetic radiation outside the visible light spectrum (often called false color images).

When making a map of a spatially distributed variable, the magnitudes of the values are converted to color intensities.  In some cases, the values are converted directly to a color intensity of red, green, or blue, or the values can be divided among a variety of colors, as shown in the image below.  In this example, the color key is essential to convert the colors to the original values of chlorophyll in the ocean.  With image analysis software, such as AnalyzingDigitalImages, you may select a range of colors and display only these values, so you will be able to display specific ranges of phytoplankton concentrations using the image below.
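The color-key idea can be sketched as a lookup from data value to display color. The bin edges and colors below are made up for illustration; real SeaWiFS chlorophyll keys span several orders of magnitude on a logarithmic scale:

```python
# Sketch of how a map's color key works: each data value falls into a
# bin, and each bin is assigned a display color. Reversing the key (as
# image analysis software does) recovers the value range from the color.
# Bin edges and colors here are hypothetical.

# (upper bound of bin in mg/m^3, display color as an RGB triple)
COLOR_KEY = [
    (0.1,  (128, 0, 255)),   # purple: very low chlorophyll
    (1.0,  (0, 0, 255)),     # blue: low
    (5.0,  (0, 255, 0)),     # green: moderate
    (20.0, (255, 255, 0)),   # yellow: high
]

def value_to_color(value):
    """Convert a chlorophyll value to its key color."""
    for upper, rgb in COLOR_KEY:
        if value <= upper:
            return rgb
    return (255, 0, 0)       # red: highest concentrations

print(value_to_color(0.5))   # (0, 0, 255)
print(value_to_color(50.0))  # (255, 0, 0)
```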

 

Seawifs satellites (http://oceancolor.gsfc.nasa.gov/SeaWiFS/BACKGROUND/SEAWIFS_970_BROCHURE.html)
have been monitoring phytoplankton for more than a decade by analyzing the color of the ocean,
which is affected by the amount of chlorophyll in the floating plankton.
The colors in map above are based on the concentration of phytoplankton across the oceans.

Because each digital image contains three color layers, up to three sets of data may be displayed in each image.  Below is an example of what three different fields of data would produce in the final image.  If you know how to interpret color, you will be able to identify how the trends do and do not overlap, which then helps to explore cause and effect, or at least correlation, between variables.  For this example, let ocean temperature be displayed in the red color layer, ocean salinity in the green layer, and phytoplankton concentration in the blue layer.  In all three layers, black represents the lowest values, and the brightest intensity of a given color represents the maximum value of the variable.

 
red gradient
Field of ocean surface temperatures where black represents
the coldest waters and bright red the warmest waters.
green gradient
Field of ocean salinity where black represents the
least saline water and bright green the most saline.
blue gradient
Field of phytoplankton concentration where black represents
the lowest concentrations and bright blue the highest values.

Combined fields of ocean surface temperatures, ocean salinity,
and phytoplankton concentration where each field is mapped in a separate color layer.

In the fictitious example above, the sources of warm and cold water appear to be different from the causes of high and low salinity.  There is an interesting correlation with phytoplankton: it is lowest where temperature and salinity are both lowest and where both are near maximum values, while the highest phytoplankton concentrations occur in cold water with high salinity and in warm water with low salinity.  
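The layered-field idea can be sketched directly: scale each field to 0-255 and stack the three fields as the red, green, and blue layers of one image. The tiny 2x2 grids below are fictitious, like the example in the text:

```python
# Sketch of the three-layer mapping described above: three separate
# data fields, each already scaled to 0-255, become the red, green,
# and blue layers of a single combined image. Values are made up.

temperature   = [[  0, 255],   # red layer: black = coldest
                 [128,  64]]
salinity      = [[255,   0],   # green layer: black = least saline
                 [128, 192]]
phytoplankton = [[255, 255],   # blue layer: black = lowest concentration
                 [  0,  30]]

# Each pixel becomes an (R, G, B) triple combining the three fields.
combined = [
    [(temperature[r][c], salinity[r][c], phytoplankton[r][c])
     for c in range(2)]
    for r in range(2)
]
print(combined[0][0])  # (0, 255, 255): cold, salty, plankton-rich -> cyan
```

Reading the combined image then works in reverse: a cyan pixel, for instance, means low red (cold), high green (salty), and high blue (plankton-rich).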

Recommended activities to study how to work with digital maps:       
 

Representative Color in Digital Images

Today there are many sensors designed to "photograph" a wide range of the electromagnetic spectrum, and in order to visualize these data, the intensities of the invisible light are used as one or more of the color layers in a digital image. This is particularly common with satellite imagery. Because images taken from above the Earth's surface have such an unusual perspective, we will first explore a common type of satellite image (Landsat, in particular) using a scene photographed from the ground.

Most digital cameras, with a little alteration, can take images in near-infrared light (NIR), which is invisible to our eyes.  What color would you use to display it?  As a first try, how about shades of gray, with black representing no infrared light and white showing high intensities of infrared light?  Compare this image to the true color image.

near infrared image
Near infrared image of landscape.
same image in visible light
Same landscape in visible light.

Since a digital image may contain three measurements of intensities within the electromagnetic spectrum, we can combine several of the measurements from the above images to create one.  In this case, the NIR will be displayed as the red layer, the red intensities as the green layer, and the green intensities as the blue layer - so none of the true colors are displayed as the same color in the digital image.  This is truly what is often called a false color image! A more flattering term for this is representative color—a display color is representing a detected energy of light that is normally invisible to us.
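The NRG remapping described above is just a per-pixel reassignment of measurements to display layers. A minimal sketch, with made-up intensity values:

```python
# Sketch of the NRG composite described above: for each pixel, the
# near-infrared (NIR) measurement is displayed as the red layer, red
# as the green layer, and green as the blue layer. The camera's blue
# measurement is not used in this composite. Values are hypothetical.

def to_nrg(nir, r, g, b):
    """Map (NIR, red, green) measurements into an (R, G, B) display pixel."""
    return (nir, r, g)

# A healthy leaf reflects strong NIR, little red, and some green,
# so it appears red-to-pink in the composite:
print(to_nrg(nir=240, r=30, g=90, b=40))  # (240, 30, 90)
```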

near infrared as red
Near infrared intensities displayed in
red layer.
red as green
Red intensities displayed in green layer.
green as blue
Green intensities displayed in blue layer.

near infrared, red, and green image
Voila, pink trees and cyan (blue-green) pathways!
This false color image is one of the standard color composites used with Landsat data.
It is often referred to as an NRG image: near IR, red, and green image.

There are many satellites orbiting the Earth making measurements in a wide range of the electromagnetic spectrum. What would an image look like when using the NIR, red, and green sensors?  Below is an example of what a Landsat satellite image would look like using these wavelengths.  

Since 1972 Landsat sensors have provided rich information about land features, including the mapping of the location, areal extent, and health of vegetated landcover.  The NRG color composite illustrated below helps scientists to compare vegetation coverage of selected locations over time.  

Based on this color mapping, healthy vegetation reflects a large portion of NIR and will appear as red to pink.  Urban areas and other non-vegetated areas reflect relatively large amounts of all light and will appear light blue/cyan/gray.  Water, which does not reflect much NIR, red, or green light, appears dark.

landsat image, near infrared as red
Near infrared intensities
displayed in red layer.
landsat image, red as green
Red intensities displayed in green layer.
landsat image, green as blue
Green intensities displayed in blue layer.

Since our eyes have varying sensitivity to red, green, and blue, it is helpful to view each set of measurements in black and white.

landsat image, near infrared as gray
Near infrared intensities displayed
in black and white.
landsat image, red as gray
Red intensities displayed in black and white.
landsat image, green as gray
Green intensities displayed in black and white.

landsat image displayed as near infrared, red, and green  
A Landsat image of eastern Massachusetts with the NRG color composite presented above: 
NIR measurements are mapped as the image's red layer, red intensities appear as green colors, and measured green light is displayed as blue. 
Image from Landsat Clic 'N Pic.

Satellites typically measure more than three wavelengths of the electromagnetic spectrum.  Landsat measures 7 wavelengths.  This means there are 210 unique ways to view the combinations of Landsat measurements with a digital image!  This is why standard color composites were created - to minimize and standardize the options for the viewer - but there are also ways to use all of the measurements at once.  To access more than three sets of satellite measurements, you need to use advanced remote sensing software, which is described in Chapter 6.
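The count of 210 comes from choosing which of the 7 bands to display in each of the 3 color layers, where order matters: 7 x 6 x 5. This can be verified directly:

```python
# Checking the count stated above: with 7 Landsat bands and 3 ordered
# display layers (red, green, blue), the number of distinct band
# assignments is the number of permutations of 7 items taken 3 at a time.

from itertools import permutations

count = len(list(permutations(range(7), 3)))
print(count)  # 210, matching 7 * 6 * 5
```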

 

Benefits of Digital Images

There are many ways to use digital images to measure variables of objects in an image and share data with a larger community.  In order to do so efficiently and accurately, you need to be familiar with the fundamental ways data may be accessed spatially and spectrally.  When there are multiple images of the scene, then changes of the spatial and spectral variables are possible.  The DEW information, learning activities, and free software are available to help you learn these fundamentals as well as provide powerful tools for you to begin using digital images to make measurements that are important to you.  Consider joining the PicturePost community (see Chapter 8) to begin sharing photographs and measurements of changes in your local environment.

 

What data are in digital images:


01intro.pdf - Alan D. Gould, Feb 28, 2012