Thermal Imaging

Somax is equipped with a Panasonic AMG8833 infrared thermal sensor. Thermal sensors have a wide range of applications; Somax currently uses the sensor to locate humans and other warm-bodied creatures.

Somax can send thermal data to different endpoints, such as a file or a display. When the data is displayed, it must be formatted to match the capabilities of the display. Each temperature is represented as a single floating-point value. To display the values in a meaningful way, each value must be converted to a color, and each color must then be assigned to a location on the display. Often the dimensions of the display do not match those of the image data, so an additional conversion is needed to fill the display's vacant pixels.

Converting temperature values to color is achieved by mapping each value onto a color palette. Filling the vacant pixels is done by interpolation.
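
As a rough sketch of that first conversion (the frame contents and the 15-40 °C display range below are hypothetical, and plain grayscale stands in for a real palette), the mapping boils down to normalizing each reading into a fixed range:

```python
import numpy as np

# Hypothetical 8 x 8 frame of temperatures in degrees Celsius.
frame = np.random.uniform(20.0, 36.0, (8, 8))

# Normalize into a fixed display range, then map to 8-bit brightness.
T_MIN, T_MAX = 15.0, 40.0                        # assumed display range
norm = np.clip((frame - T_MIN) / (T_MAX - T_MIN), 0.0, 1.0)
gray = (norm * 255).astype(np.uint8)             # 0 = coldest, 255 = hottest
```

The palette and upscaling steps are covered in detail below.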

The discipline concerned with thermal imaging is called thermography. Below are some thoughts and information on the palette mapping and interpolation methods used in thermography.

Palette Mapping

Thermal image mapped to the Ironbow color palette

Palette mapping is the process of assigning a color to a value. The science behind this type of mapping is called false color visualization.

Palettes for thermal imaging are not standardized, but they usually employ a strategy that emphasizes contrast. Below are a few of the most popular.

Common thermal imaging palettes.

Color palettes are implemented by two primary mechanisms: gradients and lookup tables.

Gradients are continuous distributions that blend a starting color, an ending color, and sometimes a set of intermediary colors in series. Many algorithms, and even dedicated software programs, exist for generating gradients.
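
For example, a gradient can be generated by linearly interpolating between a series of color stops. The sketch below uses NumPy; the Ironbow-like stop colors are rough approximations, not an official palette definition.

```python
import numpy as np

def make_gradient(stops, steps):
    """Blend a series of RGB color stops into a continuous gradient.

    stops: list of (r, g, b) tuples, 0-255, in order.
    steps: total number of output colors.
    """
    stops = np.asarray(stops, dtype=float)
    xs = np.linspace(0.0, 1.0, len(stops))   # stop positions along [0, 1]
    ts = np.linspace(0.0, 1.0, steps)        # sample positions
    # Linearly interpolate each RGB channel between neighboring stops.
    channels = [np.interp(ts, xs, stops[:, c]) for c in range(3)]
    return np.stack(channels, axis=1).astype(np.uint8)

# A rough Ironbow-like run: black -> purple -> red -> orange -> yellow -> white.
ironbow = make_gradient(
    [(0, 0, 0), (128, 0, 128), (255, 0, 0),
     (255, 128, 0), (255, 255, 0), (255, 255, 255)],
    steps=256,
)
```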

Lookup tables are finite, discrete subsets of gradients. In computer graphics, lookup tables are often referred to as indexed color. As with the generation of gradients, many methods exist for selecting the colors in a lookup table.

Embedded applications favor lookup tables when computational resources are limited; gradients are favored when memory is the scarcer resource.
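
A lookup table, then, is just a discrete sampling of a gradient, and colorizing a frame becomes a quantize-and-index operation. In this sketch a grayscale ramp stands in for a real gradient, and the 32-entry table size and 15-40 °C range are arbitrary choices:

```python
import numpy as np

# Sample a 256-color gradient down to a 32-entry indexed palette.
gradient = np.linspace([0, 0, 0], [255, 255, 255], 256).astype(np.uint8)
lut = gradient[np.linspace(0, 255, 32).astype(int)]

def colorize(frame, lut, t_min=15.0, t_max=40.0):
    """Map a 2-D array of temperatures to RGB via an indexed-color LUT."""
    norm = np.clip((frame - t_min) / (t_max - t_min), 0.0, 1.0)
    idx = np.round(norm * (len(lut) - 1)).astype(int)   # quantize to indices
    return lut[idx]                                      # shape (H, W, 3)

rgb = colorize(np.random.uniform(20.0, 36.0, (8, 8)), lut)
```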

For more information on the theory and implementation of false color in thermography, see Petter Sundin's master's thesis: Intuitive Colorization of Temperature in Thermal Cameras.

Interpolation

Scaling up thermal images using interpolation.

Interpolation is a mathematical means of assigning values to unknown data points that fall within the range of known data points. Closely related is extrapolation, which predicts values that lie outside the range of the input data. In our case we wish to resize an image by upscaling: filling in the missing values based on the existing ones. When the border pixels of the original image are used as the border of the new image, every new pixel falls within the known range, so the whole problem can be solved by interpolation.
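
The sketch below illustrates the idea with plain bilinear interpolation: every output pixel is mapped back onto the source grid and blended from its four surrounding neighbors, so nothing is ever extrapolated. (This illustrates the principle; it is not the Somax implementation.)

```python
import numpy as np

def bilinear_upscale(src, out_h, out_w):
    """Upscale a 2-D array, treating the border pixels of the source as
    the border of the output so every output pixel is interpolated."""
    in_h, in_w = src.shape
    # Map each output coordinate back onto the source grid.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]                  # fractional distances
    wx = (xs - x0)[None, :]
    # Blend the four surrounding source pixels.
    top = src[np.ix_(y0, x0)] * (1 - wx) + src[np.ix_(y0, x1)] * wx
    bot = src[np.ix_(y1, x0)] * (1 - wx) + src[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

big = bilinear_upscale(np.random.uniform(20, 36, (8, 8)), 128, 128)
```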

AI Intelligence bubble ... interpolation, extrapolation, and neural networks.

Neural networks are amazing, but they have one notable drawback: they cannot extrapolate. Some methods approximate extrapolation via interpolation, but the formal answer is that they simply do not extrapolate. Read more about the limitation over at statistics4u, and read about a workaround in Douglas Blank's blog post: Can a Neural Network Extrapolate?

The Somax code base interpolates using one of several methods: nearest neighbor, linear, quadratic, cubic, and lanczos4. See the difference between methods in Anthony Tanbakuchi's illustrated comparison.
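
If you want to experiment yourself, OpenCV's cv2.resize exposes most of these methods directly (quadratic is the one listed method OpenCV does not offer; SciPy's spline interpolators cover it). A minimal comparison harness might look like this, with the input frame faked by random data:

```python
import cv2
import numpy as np

frame = np.random.uniform(20.0, 36.0, (8, 8)).astype(np.float32)

methods = {
    "nearest":  cv2.INTER_NEAREST,
    "linear":   cv2.INTER_LINEAR,
    "cubic":    cv2.INTER_CUBIC,
    "lanczos4": cv2.INTER_LANCZOS4,
}
# Upscale the same 8 x 8 frame to 128 x 128 with each method.
scaled = {name: cv2.resize(frame, (128, 128), interpolation=flag)
          for name, flag in methods.items()}
```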


Somax thermal images are initially 8 x 8 pixels. To be displayed, they must be scaled up to a minimum of 128 x 128 pixels. That is quite a bit of scaling, and at first glance poor image quality would seem inevitable. To my surprise, the quality is actually quite respectable. It is very easy to make out the human form, and at closer range even individual fingers. With this seemingly small amount of information, many applications in gesture control, warm-body detection, and object tracking can be built.


Interpolation is a guess based on rules applied to the available data. In our case we have an 8 x 8 array of temperature values. I was initially concerned about scaling this up to 128 x 128, which got me thinking about how I might acquire additional data.

My first thought was to use a second sensor. It is known that measurement error can be reduced by averaging multiple samples; in fact, the sensor has a moving average mode that does something similar across multiple readings of the same device. This would help, but at the expense of additional sensor cost and valuable real estate on the gimbal.
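
The same averaging idea is trivial in software, assuming some driver call that returns one 8 x 8 frame (read_frame below is a stand-in): uncorrelated noise should shrink roughly with the square root of the sample count.

```python
import numpy as np

def read_frame():
    # Stand-in for the driver; a real call would return one 8 x 8 array
    # of temperatures (e.g. the pixels property in the Adafruit driver).
    return np.random.normal(30.0, 2.5, (8, 8))

def averaged_frame(n=10):
    """Average n successive readings; uncorrelated noise should shrink
    roughly by a factor of sqrt(n)."""
    return np.mean([read_frame() for _ in range(n)], axis=0)
```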

I was reading up on camera calibration for a low-resolution camera I will be adding to the gimbal. This got me thinking about the effects of calibration on the AMG8833. I remembered seeing camera specifications in the AMG8833 datasheet. It calls out the overall horizontal and vertical viewing angles, the viewing angle per pixel, the optical gap between pixels, and the combined vertical and horizontal viewing angle of the central four pixels (the sweet spot). In the diagram below you can see how these values spread out as the angle approaches its maximum. It seems logical to either correct for this up front or find a way to use it during interpolation.

Later I finalized the driver code for a different sensor, one that returns ranging information in millimeters. That got the gears in the old noodle turning again!

From the datasheet, the angle and distance between pixels are fixed and known. If I combine this information with the newly acquired distance data, then with a bit of trigonometry I should be able to work out the coverage area and location, in 3-space, of each pixel. The corollary is that calculating the area and location of the optical gaps should also be possible in a similar manner.
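
A first pass at that trigonometry might look like the following sketch. The 60-degree field of view and the 7.5-degrees-per-pixel figure are approximations to be checked against the datasheet, and off-axis obliquity is ignored:

```python
import math

# Approximate AMG8833 optics: a 60 degree field of view shared by
# 8 pixels per axis, so roughly 7.5 degrees per pixel.
FOV_DEG = 60.0
PIXELS = 8
PIXEL_ANGLE = math.radians(FOV_DEG / PIXELS)

def pixel_footprint(distance_mm, row, col):
    """Estimate the side length and 3-space center of one pixel's coverage
    patch on a plane perpendicular to the optical axis at distance_mm."""
    # Width/height of the patch a single pixel sees at this distance.
    side = 2.0 * distance_mm * math.tan(PIXEL_ANGLE / 2.0)
    # Angular offset of the pixel center from the optical axis.
    off_y = (row - (PIXELS - 1) / 2.0) * PIXEL_ANGLE
    off_x = (col - (PIXELS - 1) / 2.0) * PIXEL_ANGLE
    center = (distance_mm * math.tan(off_x),   # x on the target plane
              distance_mm * math.tan(off_y),   # y on the target plane
              distance_mm)                     # z along the optical axis
    return side, center

side, center = pixel_footprint(1000.0, 0, 0)   # ~131 mm patch at 1 m
```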

I considered the possibility that the sensor might already use this information; if so, using it again would only degrade the image. While the manufacturer may well be correcting with these values, the correction most certainly would not factor in the distance to the target. I suspect the most they could reliably correct for is the distortion between the lens and the thermopile.

I also believe it may be possible to correct some of the overall error, which, as mentioned previously, is in the neighborhood of 5 degrees Celsius. Temperature here is derived from long-wave infrared radiation, which has the same properties as light, so the same equations apply. One of these, the inverse-square law, relates the loss in intensity to the distance between the observer and the emitter. Since the distance can now be determined, so can the intensity falloff, and the resulting temperature estimate should improve. One other thing to take into account is that the ranging sensor reports the distance to a specific region of space. This would limit which pixels are correctable unless the gimbal, or some other mechanism, is employed to range the full viewing angle of the thermal sensor.
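
As a sketch of that correction and nothing more (a real fix would need the sensor's radiometric model, and atmospheric absorption is ignored), the inverse-square law lets an intensity-like reading taken at range d be referenced back to a calibration range d0:

```python
def range_compensate(reading, d_mm, d0_mm=1000.0):
    """Refer an intensity-like reading taken at range d_mm back to a
    reference range d0_mm via the inverse-square law. Hypothetical
    correction: treats the target as a small source and ignores the
    sensor's own internal processing."""
    return reading * (d_mm / d0_mm) ** 2

# e.g. a pixel ranged at 2 m is scaled by 4x relative to a 1 m reference.
```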

This is worth investigating. More to come!


Optical characteristics of the AMG8833