Techniques
Cameras
I use monochrome video cameras from "The Imaging Source" and image through my 9¼" Celestron telescope (see About).
Imaging in Red & Infra-red (IR) Light
Though the Moon is essentially a black-and-white object, imaging in white light will not give the best results due to atmospheric refraction. At high magnifications, colour fringing caused by atmospheric refraction is quite apparent. Blue light (450-480 nm) is more strongly refracted than red light (620-700 nm), and the effect increases as altitude above the horizon decreases. Light is also scattered by molecules and dust in the atmosphere, which is a separate effect. Blue light is far more strongly scattered than red light, hence the blue colouration of the sky. So even under good seeing conditions (i.e. minimal turbulence), an image captured in blue light will look less sharp than a red image.
Atmospheric turbulence results in rapid variation in refraction, causing stars to “twinkle”. The underlying cause is rapid variation in the density, and hence the refractive index, of the air column through which the starlight passes. Due to turbulence, at high magnifications the Moon appears to have a rapidly moving, “boiling” surface. This is simulated in the animated GIF, built from 10 consecutive aligned frames. It was recorded in red light under good seeing conditions; far less detail is visible under typical or poor conditions. Turbulence is far less disruptive to red light than to blue light, and near infra-red light (700-1000 nm) is even less susceptible to atmospheric effects.
Fortunately some “black-and-white” CCD chips are very sensitive to red and infra-red light, allowing imaging at the wavelengths that are least affected by the atmosphere. Under perfect conditions (no atmosphere), the resolving power of a telescope should be better in blue light (450-480 nm) than in longer-wavelength red light (620-700 nm), since R (in arcseconds) = 0.21 × λ/D, where λ = wavelength in nm and D = telescope aperture in mm. But in practice red outperforms blue in most cases because of the atmosphere. There are many quality filters that allow isolation of selected wavelengths (colours) of light. I use either an Astronomik Red Type IIc filter (~580-670 nm passband, >95% transmission) or an Astronomik ProPlanet 742 IR filter (742-1100 nm, >95% transmission) and present processed images in black-and-white. Lunar features show more tonal variation in red than in infra-red light; lunar ray systems in particular are much more distinct when imaged in red light. In general, if seeing is good, the red filter will deliver better resolution than the IR filter.
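To get a feel for the numbers, the formula above can be evaluated for each passband. A minimal sketch in Python (the 235 mm aperture is the C9.25 mentioned elsewhere on this page; the sample wavelengths are taken from the filter passbands above):

```python
# Diffraction-limited resolution: R (arcsec) = 0.21 * wavelength (nm) / aperture (mm)
APERTURE_MM = 235  # 9.25-inch Celestron

def resolution_arcsec(wavelength_nm, aperture_mm=APERTURE_MM):
    return 0.21 * wavelength_nm / aperture_mm

for label, wavelength in [("blue 465 nm", 465), ("red 650 nm", 650), ("IR 742 nm", 742)]:
    print(f"{label}: R = {resolution_arcsec(wavelength):.2f} arcsec")

# blue 465 nm: R = 0.42 arcsec
# red 650 nm:  R = 0.58 arcsec
# IR 742 nm:   R = 0.66 arcsec
```

On paper blue wins, but the difference is small compared with typical seeing, and the atmosphere degrades blue far more than red or infra-red.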
Video Imaging
Rather than record a single image, the DMK cameras are used to record AVI files, which contain thousands of images. This data is then used to construct a single image. There are two main reasons for doing this. The first is to overcome atmospheric turbulence. Even at 1/60 sec exposures, the majority of individual images will be blurred or otherwise distorted by the atmosphere. However, up to 8% of images will be good, with a handful of excellent images - they just have to be identified. Typically I would record ~3,500 images and select only 250-300 of the best for further processing. As an example, the three images below are the best (left), median (centre) and worst (right) frames from 120 images of crater Tycho captured over a 2 second period.
Best frame from 120 frames | Typical median frame | Worst frame from 120 frames |
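Registax does the frame grading automatically, but the idea behind selecting the best few hundred frames can be sketched in a few lines of Python using OpenCV. This is only an illustration of the principle, not the algorithm Registax uses; the file name and the choice of 270 frames are placeholders.

```python
import cv2
import numpy as np

def sharpness(gray):
    """Variance of the Laplacian: higher values indicate more fine detail."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def best_frames(avi_path, keep=270):
    """Score every frame of an AVI and return the sharpest `keep` frames."""
    capture = cv2.VideoCapture(avi_path)
    frames, scores = [], []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(gray)
        scores.append(sharpness(gray))
    capture.release()
    order = np.argsort(scores)[::-1]        # best first
    return [frames[i] for i in order[:keep]]

# e.g. selected = best_frames("tycho_red.avi", keep=270)
```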
The second reason is that "stacking", the process of aligning images on common features and then summing or integrating them to give a single consensus image, results in impressive levels of image enhancement. This is due to the improvement in the signal-to-noise ratio (S/N), which allows subtle details to rise up out of the background noise. (S/N ∝ √n, where n = number of stacked images.) Processed images of Tycho derived from a 4,000 frame AVI illustrate the point below. The single best frame from the 4,000 (left) is clearly sharper than the best frame above, and an image constructed from the 16 best frames is even better (centre). The best 256 frames give a composite image that contains a high level of detail and very little noise (right).
Best frame from 4000 frames | Best 25 frames (from 4000) stacked | Best 256 frames stacked |
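The stacking step itself - align each selected frame to a reference, then average - can also be sketched. The phase-correlation alignment below handles only simple translation, a far cruder approach than Registax's multi-point alignment, but it shows where the √n improvement in S/N comes from.

```python
import cv2
import numpy as np

def stack(frames):
    """Align frames to the first one by phase correlation, then average.

    Averaging n aligned frames improves S/N by roughly sqrt(n): the lunar
    detail adds coherently while the random noise partially cancels.
    """
    reference = np.float32(frames[0])
    height, width = reference.shape
    accumulator = np.zeros_like(reference)
    for frame in frames:
        frame32 = np.float32(frame)
        (dx, dy), _ = cv2.phaseCorrelate(reference, frame32)  # sub-pixel shift
        shift_back = np.float32([[1, 0, -dx], [0, 1, -dy]])   # undo the shift
        accumulator += cv2.warpAffine(frame32, shift_back, (width, height))
    return accumulator / len(frames)

# e.g. stacked = stack(best_frames("tycho_red.avi", keep=256))
```

Note that in this sketch a poor frame still makes it into the average if it is passed in, which is exactly the "rubbish in, rubbish out" problem illustrated below.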
But rubbish in, rubbish out - a subset of good images is required for best results. The image below was constructed by stacking all 4,000 frames and the fine detail is lost. Further image improvement can be made by careful use of image processing techniques such as wavelet manipulation, unsharp masking and blurring to remove noise. Over-processing can however introduce artefacts. The registration, quality rank ordering, stacking and wavelet manipulation of individual images in AVI files can be performed by Registax, one of the most powerful and impressive image processing software packages around. And it is free.
All 4000 frames stacked - poor result |
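Of the post-processing steps mentioned above, unsharp masking is the simplest to illustrate: subtract a blurred copy of the stacked image and add back a fraction of the difference. The radius and amount below are arbitrary placeholder values; pushing them too hard produces exactly the halo and ringing artefacts warned about above.

```python
import cv2
import numpy as np

def unsharp_mask(image, radius=2.0, amount=0.8):
    """Sharpen by adding back (image - Gaussian-blurred image).
    Modest values of `amount` sharpen; large values over-process."""
    image = np.float32(image)
    blurred = cv2.GaussianBlur(image, (0, 0), radius)  # kernel size derived from sigma
    sharpened = image + amount * (image - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# e.g. final = unsharp_mask(stacked, radius=2.0, amount=0.8)
```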
In older versions of Registax the user would select a good quality reference frame and the alignment point(s). Whilst the software did a good job of rank-ordering the images by quality, I would also manually go through 200+ images, discarding some before stacking. With a single-core processor a large AVI could take 20 min to align. Registax 6 entirely automates the process, including the automated selection of hundreds of alignment points. Whilst some parameters can be altered, the user basically selects the number of frames to stack and the software does the rest. With powerful multicore processors, processing is completed in minutes and the results are better than manual processing with Registax 5. Consequently I have reprocessed my older images with Registax 6. Here is a high quality image of Tycho with detail down to 1 km, built from 270 images software-selected from 3,500 by Registax 6 and recorded with the DMK 41AU02 in red light.
Image Scale and Resolution
In capturing an image on the CCD chip, a "continuous" analogue image formed by the telescope optics is recorded as a set of "discrete" digital signals by the pixel array. If too few pixels are used, hard-won detail is thrown away. If the signal is spread over too many pixels it becomes weak, and detail is compromised by the longer exposure times then needed. The animation shows 50 × 50 pixels taken from five good AVI frames. This is Tycho's central mountain region, blown up to show how details at the limit of resolution are spread over several pixels. Averaging these good frames gives a high quality image. So how many pixels? According to the "Nyquist" sampling theorem, a signal can be accurately reproduced provided the sampling rate is at least twice the maximum frequency in the signal. In plain language, the image must be projected at a scale where the limit of resolution (as a linear dimension) spans at least two pixels.
There are several measures of telescope resolution, but a good practical guide is Dawes' limit (116/D arcseconds for an aperture of D mm), empirically derived from double star observations in the 1800s by William R. Dawes. For my 9¼" (235 mm) Celestron, Dawes' limit = 0.49". However, unless you observe from a special location such as Mauna Kea, the atmosphere itself limits resolution to ~0.5" (0.00014°), and even then only fleetingly; on most nights >1" is the best that can be achieved. A 0.5" limit of resolution allows lunar detail down to just under 1 km to be discerned; the Moon has a diameter of 3,474 km and an average angular size of 1,890".
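The figures in this paragraph follow from two simple relations: Dawes' limit ≈ 116/D arcseconds (D in mm) and the Moon's kilometres-per-arcsecond scale, given its diameter and average angular size. A quick check:

```python
APERTURE_MM = 235                 # 9.25-inch Celestron
MOON_DIAMETER_KM = 3474
MOON_ANGULAR_SIZE_ARCSEC = 1890   # average apparent diameter

dawes_limit = 116 / APERTURE_MM                              # ~0.49 arcsec
km_per_arcsec = MOON_DIAMETER_KM / MOON_ANGULAR_SIZE_ARCSEC  # ~1.84 km per arcsec
seeing_limit = 0.5                                           # arcsec, a very good night

print(f"Dawes' limit: {dawes_limit:.2f} arcsec")
print(f"Smallest lunar detail at {seeing_limit} arcsec seeing: "
      f"{seeing_limit * km_per_arcsec:.2f} km")              # ~0.92 km
```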
An analogue image with 0.5" resolution requires sampling at a scale of 0.25" or less, i.e. 0.5" must span at least two pixels. That requires a projection scale of at least 22.4 µm/arcsecond for the DMK 21AU618 and 18.6 µm/arcsecond for the DMK 41AU02. A further consideration, however, is the signal noise inherent in the electronics, which occurs on a single-pixel scale. Its impact can be minimized by blurring or averaging neighbouring pixel values, and when the image scale is larger than the minimum suggested by sampling theory, this noise reduction tends not to damage 'real' detail. In practice I have found that an image scale of 6 pixels per arcsecond works well. This is achieved for the DMK 21AU618 and DMK 41AU02 using a 3x Televue® Barlow lens and a 2.5x Televue® Powermate respectively.
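These sampling figures can be reproduced from the camera pixel sizes and the effective focal length. The sketch below assumes the commonly quoted pixel sizes for these cameras (5.6 µm for the DMK 21AU618, 4.65 µm for the DMK 41AU02) and the C9.25's nominal 2,350 mm focal length; none of these numbers are stated above, so treat them as assumptions.

```python
ARCSEC_PER_RADIAN = 206265
FOCAL_LENGTH_MM = 2350            # nominal C9.25 focal length (assumption)

setups = {
    # name: (pixel size in microns - assumed chip spec, amplification factor)
    "DMK 21AU618 + 3x Barlow":     (5.6, 3.0),
    "DMK 41AU02 + 2.5x Powermate": (4.65, 2.5),
}

for name, (pixel_um, amplification) in setups.items():
    effective_focal_mm = FOCAL_LENGTH_MM * amplification
    arcsec_per_pixel = ARCSEC_PER_RADIAN / effective_focal_mm * (pixel_um / 1000)
    nyquist_um_per_arcsec = pixel_um / 0.25   # 2 pixels across a 0.5" detail
    print(f"{name}: {1 / arcsec_per_pixel:.1f} px/arcsec "
          f"(Nyquist minimum {nyquist_um_per_arcsec:.1f} um/arcsec)")

# DMK 21AU618 + 3x Barlow:     6.1 px/arcsec (Nyquist minimum 22.4 um/arcsec)
# DMK 41AU02 + 2.5x Powermate: 6.1 px/arcsec (Nyquist minimum 18.6 um/arcsec)
```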
Visual Observing
On the few occasions that I opt to observe the Moon visually, I use high quality eyepieces with long eye relief. My favourite is a 10 mm Televue Radian, which gives a magnification of 235× and apparent field of 60°. When seeing is very good, I sometimes use an 8 mm Radian (294×). For the reasons outlined above, a yellow filter such as a Meade 4000 series #12 filter can improve image quality by cutting out the blue end of the spectrum. Observing the Moon's surface on the computer screen using the DMK 41AU02 video camera and a red filter is also a very nice high magnification approach provided the seeing is good.
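For reference, eyepiece magnification is simply the telescope focal length divided by the eyepiece focal length, and the true field is the apparent field divided by the magnification. A quick check, again assuming the C9.25's nominal 2,350 mm focal length:

```python
FOCAL_LENGTH_MM = 2350   # nominal C9.25 focal length (assumption)

for eyepiece_mm, apparent_field_deg in [(10, 60), (8, 60)]:  # Radians have a 60 degree apparent field
    magnification = FOCAL_LENGTH_MM / eyepiece_mm
    true_field_arcmin = apparent_field_deg / magnification * 60
    print(f"{eyepiece_mm} mm Radian: {magnification:.0f}x, "
          f"true field ~{true_field_arcmin:.0f} arcmin")

# 10 mm Radian: 235x, true field ~15 arcmin
# 8 mm Radian:  294x, true field ~12 arcmin
```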
Imaging Tips
Focus: Sharp focus is everything. The standard focusers on most commercial SCTs do not give the fine control required for very high power observing. I replaced the standard unit with a Feather Touch® microfocuser from the US company Starlight Instruments. It provides a 1:10 fine focus dial, and the focuser thread is also cut to give a tighter fit than is typical. I have found that critical focus comes within a quarter turn of the microfocuser, i.e. about 1/40th of a turn of the regular focuser. I have also fitted a small geared servo motor that drives the microfocuser very slowly and smoothly via a removable rubber belt. Focus needs to be adjusted periodically as the telescope cools at night. Focus for photography is determined by the on-screen image, not through the eyepiece (though the two are set to be parfocal).