Magnification under the microscope, part 2: a closer look at the optimal focal ratio


A few years ago, in part 1 of this article, I explained the relationship between a camera's pixel size (|px|) and its optimal focal ratio (f#). This resulted in the formula f# > 3 x |px| x (i), where (i) equals 1 for monochrome cameras and 2 for colour cameras with a Bayer pattern. That method was based on the desire to still be able to resolve a Rayleigh object. However, the ideal f# can be determined in other ways, each with a slightly different outcome. In this second part of Magnification under the Microscope, three other methods are discussed and it is re-examined whether the factor 2 is really necessary for the colour camera. In addition, a method is presented that enables us to determine whether a recording is over- or undersampled.


To keep it as clear as possible, this article has been split into a number of topics.


A new concept and formula

This second part requires a new concept and accompanying formula:

Spatial Cut-off Frequency (f0)

In optics, the Spatial Cut-off Frequency is an accurate way of quantifying the smallest object that can be resolved by an optical system. Without the effects of Fraunhofer diffraction, a telescope with a small aperture (say 5 centimetres) could theoretically produce images as sharp as one with an aperture of tens of metres. In the first part of this article we have already seen that Fraunhofer diffraction produces an Airy-disc with diffraction rings surrounding it. For this reason, a small-diameter telescope will always perform worse than a large-diameter one.
The resulting limitation is also known as the Spatial Cut-off Frequency (f0). For a perfect optical system it is given by:¹

f0 = (λ x f#)^-1 [cycles/µm] [1]

With λ in µm and f# the focal ratio of the telescope. For example, at f/10 in green light (λ = 0.540µm) this gives f0 = 1 / (0.540 x 10) ≈ 0.19 cycles/µm, i.e. the finest resolvable pattern has a period of about 5.4µm.


The ideal f# according to the spatial cut-off frequency

Formula [1] gives the Spatial Cut-off Frequency; the corresponding spatial period (the wavelength of the finest resolvable pattern) is therefore (λ x f#). If we apply the Nyquist criterion to this, we find the optimal pixel size |px| as:

|px| = (λ x f#) / 2 [µm] [2]

Which we can rearrange to:

f# = 2 x λ^-1 x |px| [2.1]

For green light (λ = 540nm), this becomes:²

f# = 2 / 0.540 x |px| = 3.70 x |px| [2.2]

This factor of 3.7 is the lower limit, just like in the first part (3.7 times the pixel size is therefore the minimum focal ratio that is required). It is, however, more than 20% above the factor 3 found in the first part of this article. Yet these two factors are not contradictory: the factor 3 was determined for Rayleigh objects, while the factor 3.7 belongs to the Spatial Cut-off Frequency. As already shown in the first part, there are two more criteria: those of Dawes and Sparrow (see Optimal focal ratio (part 1)).
Of these three criteria, only Sparrow's is roughly in agreement with the Spatial Cut-off Frequency, because there is no dip in intensity between the two objects (assuming that both objects have the same intensity).


The ideal f# according to Rayleigh

In the first part the Rayleigh criterion was introduced. It states that two light sources are just resolvable when they are spaced apart by the Airy-disc radius. With green light (540nm) this radius is:

rAiry = 1.22 x 0.540 x f# = 0.66 x f# [µm] [3.1]

Since we need to capture this radius with two pixels, we find for the pixel size |px|:

|px| = (0.66 x f#) / 2 [µm] [3.2]

Which can be written as:

f# = 2 / 0.66 x |px| = 3 x |px| [3.3]


The ideal f# according to Sparrow

In the first part we saw that the Sparrow criterion is approximately 77% of the Rayleigh criterion.

Applying this 77% to [3.3] gives:

f# = 2 / (0.77 x 0.66) x |px| = 3.94 x |px| [4.1]

This is considerably larger than the Rayleigh factor of 3 and even exceeds the factor 3.7 found via the Spatial Cut-off Frequency. Between the Rayleigh and Sparrow criteria there is still the Dawes criterion, so it is time to look at that too.


The ideal f# according to Dawes

The Dawes criterion was established empirically, by observing with a particular telescope at what separation the intensity dip between two objects was no longer visible. This criterion therefore has no basis in physics and depends on the observer and the telescope used. For the sake of completeness it is nevertheless worthwhile to calculate with this criterion as well.
In the first part we saw that the Dawes criterion is approximately 85% of the Rayleigh criterion. With green light (540nm) [3.3] becomes:

f# = 2 / (0.85 x 0.66) x |px| = 3.57 x |px| [5.1]

This is only a fraction less than the factor 3.7 found via the Spatial Cut-off Frequency.


Full visible light bandwidth graph

The following graph shows the pixel-size to focal-ratio factor for all four criteria for the whole bandwidth of visible light between 400nm and 700nm:


When considering the Spatial Cut-off Frequency, the pixel-size to focal-ratio factor thus varies from about 3 in far red to about 5 in deep blue, and is approximately 3.7 in central green. This is, however, only valid for a perfect telescope that is not affected by our atmosphere.
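The graph can be reproduced with a few lines of Python (a sketch of my own; the coefficients follow directly from the criteria discussed above):

```python
import numpy as np

wavelengths = np.linspace(0.400, 0.700, 301)   # visible band, wavelength in µm

# Coefficient c in: resolvable separation = c x lambda x f#.
# c = 1 for the Spatial Cut-off Frequency (period of the finest pattern),
# 1.22 for Rayleigh, and 85% / 77% of Rayleigh for Dawes / Sparrow.
criteria = {
    "Spatial Cut-off": 1.00,
    "Rayleigh":        1.22,
    "Dawes":           1.22 * 0.85,
    "Sparrow":         1.22 * 0.77,
}

for name, c in criteria.items():
    factor = 2.0 / (c * wavelengths)           # Nyquist: f# = factor x |px|
    print(f"{name:16s} factor at 540nm: {2.0 / (c * 0.540):.2f}, "
          f"range {factor.min():.2f} (700nm) to {factor.max():.2f} (400nm)")
```

For the Spatial Cut-off Frequency this prints 3.70 at 540nm and the range 2.86 to 5.00, matching the values in the text and footnote 2.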


The influence of seeing

So far, the influence of seeing has been disregarded in all calculations. It goes without saying that seeing plays a major role in planetary images, but exactly how large its influence is and how it affects the optimal focal ratio has not yet been addressed here. The image as it falls on the imaging chip is shaped by a combination of Fraunhofer diffraction and the seeing. Before entering the objective side of the telescope, the star's (or planet's) light is smeared by the atmosphere as it passes through it, with a typical magnitude of a few arc-seconds. This smearing follows a normal (Gaussian) distribution. Quantitatively, its value is usually taken as twice the RMS radius of the smeared area.³


Figure 1: The critical value of seeing and aperture for a diffraction-limited image.
The influence of the seeing can be calculated from the convolution of the Fraunhofer diffraction (based on the Bessel function) with the Gaussian distribution. In the convolution, both the Bessel and Gauss functions are normalized to 1, so that the resulting convolution is normalized as well. In addition, the Gaussian function is scaled to fit the aperture of the telescope. Basically, for every point of the 2D Fraunhofer distribution the convolution samples the surrounding area, weighted by the Gaussian distribution, and takes the sum as the intensity value for that point.⁴
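As an illustration, the following Python sketch performs such a convolution. This is my own minimal reconstruction, not the code used for the research; the telescope and camera values are example assumptions, and the seeing value is taken as twice the RMS radius (footnote 3), so sigma is approximated as half the seeing:

```python
# Convolve an Airy (Fraunhofer) pattern with a Gaussian seeing kernel.
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j1

lam = 0.540e-6           # wavelength [m]
D = 0.279                # aperture [m], e.g. a C11
fnum = 10.0              # focal ratio
px = 2.95e-6             # pixel size on the chip [m]
N = 256                  # grid size [pixels]

x = (np.arange(N) - N // 2) * px
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)

# Airy intensity: I(v) = (2 J1(v) / v)^2, with v = pi r / (lambda f#);
# the first minimum lies at r = 1.22 x lambda x f#.
v = np.pi * r / (lam * fnum)
v[N // 2, N // 2] = 1e-12            # avoid 0/0 at the centre
airy = (2.0 * j1(v) / v) ** 2
airy /= airy.sum()                   # normalise to 1

# Gaussian seeing kernel: seeing [arc-seconds] -> linear sigma in the
# focal plane (focal length = f# x D); sigma ~ seeing / 2 (assumption).
seeing = 1.0
sigma = np.deg2rad(seeing / 3600.0) / 2 * fnum * D   # [m]
gauss = np.exp(-0.5 * (r / sigma) ** 2)
gauss /= gauss.sum()

psf = fftconvolve(airy, gauss, mode="same")   # the seeing-degraded PSF
```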
We will see below that each arc-second of seeing leads to approximately a doubling of the oversampling, so we very rarely get close to the theoretically optimal focal ratio unless we use a small-aperture scope. How the seeing and the telescope affect this can be seen in figure 1. The line in the graph shows the critical value above which a telescope is no longer diffraction-limited at a given seeing.
Suppose a Celestron C11 (aperture 279mm) is used for the imaging: it will no longer produce a diffraction-limited image in green light of 574nm when the seeing exceeds approximately 0.5 arc-seconds. A 180mm f/15 Maksutov becomes seeing-limited at a seeing of approximately 0.8 arc-seconds. So only when the seeing is better than 0.5 and 0.8 arc-seconds respectively does the focal-ratio rule apply for these telescopes. If the seeing is about an arc-second worse, the focal ratio will have to be reduced for optimal sampling. In practice we will not do this, because the seeing can sometimes come close to its optimum during the recording. When stacking with software such as AutoStakkert!, it is the best seeing during the recording that determines to what extent the seeing leads to oversampling; only if even that best seeing is higher than the critical value does the seeing lead to oversampling.


Figure 2: The effect of seeing on oversampling.
The extent to which oversampling occurs can be seen in figure 2, in which the seeing is plotted against the oversampling-factor. The values are based on eight sunspot animations from the research into the relationship between seeing, aperture and the visible level of detail by Van der Werf and myself.⁴ The graph includes data from two telescopes popular among planetary photographers: a 279mm (11″) f/10 Schmidt-Cassegrain and a 180mm f/15 Maksutov. As we have seen above, the critical seeing for these two telescopes is 0.5 and 0.8 arc-seconds respectively (at 574nm); below these values the seeing no longer has any effect and the telescopes are diffraction-limited. The figure shows the analysis of the above eight sunspot animations with ImageJ (see below) and shows that for both telescopes the effect of seeing on the oversampling-factor is approximately the same (the oversampling-factor is the degree to which oversampling occurs in the recorded data, and thus the factor by which the theoretical optimal focal ratio can be divided without loss of detail). On average, the relationship between seeing and oversampling-factor for these two telescopes is 1.6 x [seeing]^0.74 (for green light of 574nm). If we are going to stack, only the best seeing during the recording affects the oversampling-factor. On an ideal evening, it can come close to the telescope's critical seeing. At a best seeing of 0.6 arc-seconds the oversampling-factor will only be about 1.1, at 0.7 arc-seconds about 1.2, so the ideal focal ratio will only have to be adjusted by a few tens of percent (slightly more with the SCT than with the Maksutov).
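In code form this empirical relation becomes a small helper (my own sketch, clamped so the factor never drops below 1 in the diffraction-limited regime):

```python
def oversampling_factor(seeing: float) -> float:
    """Empirical relation 1.6 x seeing^0.74 (574nm, C11 SCT / 180mm Mak)."""
    return max(1.0, 1.6 * seeing ** 0.74)   # never below 1 (diffraction-limited)

# Example: best seeing of 0.7 arc-seconds with 5.9 micron pixels
factor = oversampling_factor(0.7)            # ~1.2
ideal = 3.7 * 5.9                            # cut-off rule: ~f/21.8
print(f"seeing-adjusted focal ratio: f/{ideal / factor:.1f}")
```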


The colour camera reconsidered

Part 1 of this article showed that, because it has a Bayer pattern, the colour camera has only one red, one blue and two green pixels per group of four pixels. This means that the effective pixel size must be multiplied by 2 for red and blue and by the square root of 2 for green. Since red and blue have the lowest resolution, the optimal focal ratio should be multiplied by 2.
Now this conclusion is fine in itself, provided a single frame is considered, but what happens if we stack? When stacking images, we can assume that the planet will not be stationary on the chip due to seeing and tracking errors. Processing software such as AutoStakkert! determines the centre of the planet and thus shifts the data by a number of pixels (a whole number of pixels, I assume, but perhaps this also happens at the sub-pixel level). So imagine that frame 2 has a planetary centre that is 1 pixel (or an odd number of pixels) to the left or right and 0 pixels up/down; stacking then causes two adjacent pixels to be filled with data. Similarly, frame 3 may be shifted by an odd number of pixels up/down, so it will fill adjacent pixels in that direction. Assuming that is how the stacking of colour images works, colour cameras obey the same f# rule as mono cameras (so we do not need to multiply by a factor of 2). However, it does mean that we have to stack more data (4 times as much for R/B, 2 times as much for G) to get the same signal-to-noise ratio as with a monochrome camera.
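This assumption can be made tangible with a small simulation (a sketch of my own, not how AutoStakkert! works internally): random whole-pixel shifts of a Bayer mosaic quickly fill every pixel position of a colour channel.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 64                                   # toy chip size in pixels

# RGGB Bayer mask: True where a pixel carries red data (1 in 4 pixels).
y, x = np.mgrid[0:N, 0:N]
red_mask = (y % 2 == 0) & (x % 2 == 0)

covered = np.zeros((N, N), dtype=bool)   # red coverage accumulated over frames
for frame in range(20):
    dy, dx = rng.integers(-3, 4, size=2) # random whole-pixel shift (seeing/tracking)
    covered |= np.roll(red_mask, (dy, dx), axis=(0, 1))
    print(f"frame {frame + 1:2d}: red coverage {covered.mean():6.1%}")
```

Once shifts of both parities (odd and even) have occurred in both directions, every pixel position has received red data at least once, which is what makes the mono f# rule applicable to the stacked result.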


Conclusion

We have seen above that different approaches to this problem lead to different answers. Assuming the Spatial Cut-off Frequency and green light with a wavelength of 540nm, we arrive at a factor of 3.7, which is considerably higher than the factor of 3.0 according to the Rayleigh criterion from the first part of this article. These are not contradictory, because the Rayleigh criterion has not yet reached the limit of what is still detectable in terms of detail. The Sparrow criterion is the theoretical limit at which two objects of equal intensity can no longer be separated; calculating with it gives, rounded off, a factor of 3.9. The Dawes criterion, which originated from empirical research and indicates when objects can no longer be visually separated, leads to (again rounded off) a factor of 3.6.

Criterion                            f# (@ 540nm)
Sparrow criterion                    3.9 x |px|
Spatial Cut-off Frequency            3.7 x |px|
Dawes criterion                      3.6 x |px|
Rayleigh criterion (from part 1)     3.0 x |px|

Now the question remains which factor is really useful. If the Dawes criterion were widely applicable (i.e. applied to all observers and telescopes), then no extra detail would be visible above a factor of 3.6, which corresponds well with the factor of 3.7 resulting from the Spatial Cut-off Frequency. The calculations in these two parts, however, apply to ideal optics in the green part of the spectrum; aberrations, reduced contrast and the seeing have not yet been taken into account. Especially the seeing affects the smallest details that we can still capture and generally requires us to adjust the factor downwards by a few tens of percent. The factor 3 from the first part is therefore a safe limit to use because, given the ever-present seeing and aberrations, the factor 3.7 would likely lead to oversampling.
Contrary to what was discussed in the first part, the optimal focal ratio does not have to be adjusted to the type of camera, provided that stacking is applied. The sampling interval compensation factor (i) is therefore omitted. However, data collected with a colour camera requires a larger stack to achieve the same signal-to-noise ratio as when using a monochrome camera. In formula form:

f# > 3 x |px|

Below we will see how undersampling or oversampling can be measured.


Testing oversampling and undersampling

Figure 3: FFT of wave patterns at 2x (top) and 4x oversampling.
So far we have only considered the ideal f# and the effects of oversampling and undersampling from a theoretical point of view. However, it would be nice if we could also test this. If we can see afterwards whether we are over- or undersampling, we get an idea of whether we are doing the right thing, or whether we may need to adjust our set-up.
The image editing package Fiji, also known as ImageJ, has exactly the functionality we need for this: the Fast Fourier Transform (FFT). The FFT is an efficient implementation of the Discrete Fourier Transform, an algorithm for analysing the frequencies present in a sampled signal. By applying a Fourier transform to an image in Fiji, we gain insight into the frequencies it contains and thus into whether the sampling was optimal.
Figure 3 shows how Fiji makes this insightful. The top row shows 2-fold oversampling, the bottom row 4-fold. At the far left are images of wave patterns. In the top one, each wave consists of two vertical light lines followed by two dark lines. The whole wave thus consists of four lines, while two lines would suffice to draw this pattern, so this is 2-fold oversampling. In the lower image, each whole wave is made up of eight lines, so it is oversampled 4-fold. The images in the centre show the Frequency Spectra of both patterns. These spectra are produced at the same ratio as the source file, are direction-sensitive (see figure 8) and have the origin (0) in the centre (this is a user choice, as we will see below). The centre of the image therefore represents a frequency of 0 px^-1 (i.e. a wavelength of an infinite number of pixels), while the centres of the edges (horizontally left and right and vertically above and below the centre) represent a frequency of 0.5 px^-1 (a wavelength of 2 pixels, the ideal sampling rate). Since the wave direction in the left-hand images runs along the horizontal axis, we should expect frequency peaks on the horizontal axis at 50% (top pattern) and 25% (bottom pattern) of the distance from the centre to the edge, on both sides of the centre, and indeed we see this happening in the middle images. The right-hand images show this in a profile plot.
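The same experiment can be reproduced outside Fiji. A minimal numpy sketch (my own illustration) of the 2-fold oversampled pattern:

```python
import numpy as np

N = 256
x = np.arange(N)
# Square wave along the horizontal axis: 2 light lines, 2 dark lines (period 4px).
pattern = np.tile((x % 4 < 2).astype(float), (N, 1))

spectrum = np.fft.fftshift(np.abs(np.fft.fft2(pattern)))    # origin at the centre
freqs = np.fft.fftshift(np.fft.fftfreq(N))                  # cycles per pixel

row = spectrum[N // 2]                                      # horizontal profile plot
peak = freqs[np.argmax(row * (freqs != 0))]                 # strongest non-DC peak
print(f"peak at {abs(peak):.3f} cycles/px (Nyquist = 0.5)") # 0.25 = 50% of Nyquist
```

The peak lies halfway between the centre and the edge of the spectrum, exactly as in the middle image of figure 3's top row.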


Figure 4: Fiji's FFT and FFTJ functions.
Fiji is easy to install. After downloading the package, it can be extracted to the desktop. Inside the root directory is the main application ImageJ-win64.exe. Fiji has two methods of performing an FFT: the first is native and is simply called FFT; the second works via a plug-in called FFTJ that is part of the installation. Figure 4 shows where the two functions are located. The two functions each have their advantages. FFT works directly with colour images, but FFTJ is more flexible and produces a somewhat clearer result. On the other hand, FFTJ needs a gray-scale image. Fortunately, after opening a colour image via File/Open, the Image/Color/Split Channels function can be used to split it into the individual RGB channels that FFTJ can use. Optionally, a full gray-scale image can also be made via Image/Color/RGB to Luminance. From here on I will only use the FFTJ method.


Figure 5: The work-flow for FFTJ in Fiji.
To understand what FFTJ produces, let's first look at two examples: white and pink noise. A white-noise image can be found on Wikipedia. With File/Open we open this image (see figure 5, A). We then open the plug-in via Plugins/FFTJ/FFTJ (figure 5, B). Here we select the image as the Real part of input, leave the Imaginary part of input at none, set the Complex Number Precision to Double Precision and the FFT Direction to Forward.
If we now click OK, a Disclaimer window will open (figure 5, D) to indicate that FFTJ has started. Depending on the size, the image may take several minutes to process, after which the FFTJ window opens (figure 5, E). Here we set the Fourier Domain Origin to At Volume-Center and click the Show Frequency Spectrum (logarithmic) button, after which the result is shown as an image (figure 5, F).
What FFTJ has now done is generate an image (the Frequency Spectrum) with the same dimensions as the original, used as a canvas to show the frequency components. Now it is only a matter of stretching this image, which can be done via Image/Adjust/Brightness Contrast. This B&C window (figure 5, G) has an Auto-button, which gives a reasonable result after a few clicks; the result can then be further refined with the left-right buttons of the Maximum (in the red circles). In this example we again see a picture with noise. If we look closely, some structure is visible, which is already an indication that the source file was not complete white noise.


Figure 6: From left to right, three white noises: the original, with black frame, and original scaled at 200%.
Now that we know how Fiji works, let's do a few tests. First I framed the white-noise image with a black frame, as if the white-noise part is a planet against the night background. The image is made exactly twice as wide and twice as high. Then I scaled the original white-noise image by 200%, making it the same size as the one with the black frame. Figure 6 shows these three images side by side. Below them are the corresponding Frequency Spectra from FFTJ, and at the very bottom left a copy of the left Frequency Spectrum with two areas marked that are clearly darker than the rest; these same areas are also found in the other two spectra.
Since there is white noise here, the whole canvas should be randomly filled. The fact that two dark areas are still visible means that the source does not contain 100% white noise. The middle image (the one with white-noise and a black frame) still contains the original data, but the black predominates around it. The resulting Frequency Spectrum is therefore completely filled with noise again, but with the 'bonus' of a lighter cross through the centre due to the black along the edges of the source image. The image on the right is blown up by 200%, so all the original pixels are spread over four pixels (this is double oversampling). The resulting Frequency Spectrum shows this because it now contains data only halfway to the edge, while the edges are almost uniformly black.
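This effect of scaling on the spectrum is easy to reproduce in Python (my own sketch; scipy's zoom stands in for the image editor's resize, and the threshold of half the Nyquist frequency marks 2-fold oversampling):

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
noise = rng.random((256, 256))           # white-noise test image
scaled = zoom(noise, 2.0, order=1)       # 200% resize (2-fold oversampled)

def high_freq_fraction(img):
    """Fraction of spectral power beyond half the Nyquist frequency."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    n = img.shape[0]
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(n)),
                         np.fft.fftshift(np.fft.fftfreq(n)), indexing="ij")
    outer = (np.abs(fx) > 0.25) | (np.abs(fy) > 0.25)   # beyond 50% of Nyquist
    spec[n // 2, n // 2] = 0                            # ignore the DC term
    return spec[outer].sum() / spec.sum()

print(f"original: {high_freq_fraction(noise):.2f}")   # ~0.75: power everywhere
print(f"200%:     {high_freq_fraction(scaled):.2f}")  # much lower: outer region empty
```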


Figure 7: From left to right, three pink noises: the original, with black frame, and scaled 200%.
Figure 7 shows the same processing, but now for pink noise. The source image looks similar to figure 6, but the Frequency Spectrum immediately shows that the noise is not uniform (in the centre the intensity is higher than towards the edges). Adding a black frame again only makes the Frequency Spectrum two times larger and adds a central cross. Here, too, patterns are recognizable in both Frequency Spectra. When scaled up to 200%, the Frequency Spectrum again shows that the source data is not only non-uniform, but also oversampled.

Now that it's clear how to interpret the results of FFTJ, we can start analysing a real image. To this end, I took my image of Jupiter from September 24, 2022 and ran the original green channel from AutoStakkert!3 through FFTJ (see figure 8). The image was taken with a C11 EdgeHD at f/20 and a ZWO ASI174MM with |px| = 5.9µm. At the top is the analysis of the original stack with the Frequency Spectrum next to it (the strong diagonal in it is caused by the bands of Jupiter).


Figure 8: FFTJ analysis of a Jupiter image with a C11 EdgeHD at f/20 with a ZWO ASI174 (5.9 micron pixels).
This clearly shows that the image is oversampled. How much can be deduced by scaling the original image. At the bottom left this has been done at 50%, and the adjacent Frequency Spectrum shows that this is almost the correct sampling. Going even further, to 33% (bottom right), we see that the Frequency Spectrum is clipped at the edges of the image, which means that at 33% there is undersampling. The correct sampling in this case therefore lies somewhere between f/10 (50%) and f/6.67 (33%). This seems to contradict the theory, however: the camera has a pixel size of 5.9µm, so the telescope should be operated at a focal ratio of at least 3 x 5.9 = f/17.7, and the f/20 used here even exceeds that.


Figure 9: Sunspots at increasing seeing (from left to right 0, 1, and 2") and corresponding frequency spectra.
This apparent discrepancy is caused by the seeing. Recently I published an article together with Siebren van der Werf on the influence of seeing and the diameter of the objective on the level of detail in sunspot observations.⁴ For that article we programmed a convolution of a Gaussian distribution with Fraunhofer diffraction to generate sunspot images for different types of telescopes and seeing levels. Figure 9 shows three images as they would have been taken with a C11 at f/10 and a camera with a pixel size of 2.95µm, at an increasing seeing of 0, 1 and 2 arc-seconds respectively. The results of FFTJ clearly show that the seeing has a significant influence on the oversampling. As we saw above, the oversampling increases as 1.6 x [seeing]^0.74 (for popular planetary telescopes). The fact that the first image is already oversampled is because the source file used for the article was already degraded by seeing. Although we tried to deal with this by deconvolution, further sharpening and scaling, we clearly did not fully succeed (which was not important for that article, whose aim was to make the influence of seeing understandable). The fact that the image of Jupiter shown above is oversampled is therefore almost certainly due to the seeing, which was apparently somewhere around 2 arc-seconds (assuming the theory is correct and that we should indeed be imaging at about f/20).
So, if we are going to test with FFT(J), we have to take into account that the seeing has a significant influence on the Frequency Spectrum.


A simple oversampling detection test

Figure 10: Resampling Jupiter to test oversampling (see adjacent text for which one is the original).
A very simple and straightforward method to test oversampling is resizing the image. A good start is to first reduce it to 50%, immediately followed by a 200% resize. If both images look the same, the original was oversampled by a factor 2.
If no difference can be seen, this method can be repeated using a 33%-300% resize; when differences are apparent, a test with 75%-133% can be done to see whether the oversampling is less than a factor 1.33.
The adjacent image shows Jupiter, where the left image was resized 50%-200% and the right image is the original. As no difference can be seen between them, we can state that the original was oversampled by a factor 2. The original recording was made using a C11 @ f/20 and a ZWO ASI174MM camera with a 5.9µm pixel size. The Spatial Cut-off Frequency tells us that, using this camera, the optimal focal ratio would be 3.7 x 5.9 = f/21.8. But despite being imaged at f/20, we still see that the image was oversampled by about a factor 2, indicating that the seeing was about 1.5 arc-seconds or worse.
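In code, this test looks as follows (a sketch of my own; Pillow's Lanczos resize stands in for the image editor, and the file name jupiter.png is hypothetical):

```python
import numpy as np
from PIL import Image

img = Image.open("jupiter.png").convert("L")     # hypothetical input file
w, h = img.size

# 50% down, then 200% up: detail finer than half the sampling rate is lost.
roundtrip = img.resize((w // 2, h // 2), Image.LANCZOS).resize((w, h), Image.LANCZOS)

a = np.asarray(img, dtype=float)
b = np.asarray(roundtrip, dtype=float)
rms = np.sqrt(np.mean((a - b) ** 2))
print(f"RMS difference: {rms:.2f} of 255")   # near zero suggests 2x oversampling
```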


Footnotes

[1]: Saleh and Teich, p. 502.

[2]: I chose green light in these calculations because both our eye and the camera are most sensitive here. If we look at the entire spectrum within which we image, say from 400nm to 700nm, then there is still considerable variation. The factors then become 5.0 and 2.86 respectively. Now blue is most scattered by our atmosphere, so it is unlikely that we will actually hit the limit in that colour, even if we were to persist to f# = 5 x |px|.

[3]: Seykora, pp.390-91. Sometimes the seeing is also defined as the FWHM (Full Width at Half Maximum): FWHM = √(2 ln 2) × (2 × RMS-radius) = 2.355σ, see: ESO, Schaefer, p.411, Karachik, p.3.

[4]: Hilster and Van der Werf.



Bibliography

ESO, “Analysis of telescope image quality data“, (last visited 12 August 2022).

Hilster, N. de, Werf, S. van der, “The effect of aperture and seeing on the visibility of sunspots when using early modern and modern telescopes of modest size”, (16-18 August 2022), arXiv:2208.07244.

Karachik, N.V., Pevtsov, A.A., Nagovitsyn, Y.A., “The effect of telescope aperture, scattered light, and human vision on early measurements of sunspot and group numbers”, (12 July 2019), p.3. arXiv:1907.04932.

Saleh, B.E.A., Teich, M.C., Fundamentals of Photonics, (Hoboken, 2019).

Schaefer, B.E., “Visibility of Sunspots”, in: The Astrophysical Journal, vol. 411, (July 1993), pp.909-919.

Seykora, E.J., “Solar Scintillation and the Monitoring of Solar Seeing”, in: Solar Physics, vol. 145, (1993), pp.390-91.


If you have any questions and/or remarks please let me know.

