Optical Microscopy – What is Resolution and how can it be Improved?

It is important to understand the optical properties of microscopy equipment when choosing a digital camera for a microscope, so that its resolution is appropriate for the specific application.

In fact, it is the microscope’s properties, and not the camera’s, that define the smallest resolvable detail in a slide.

When light rays travel through a small opening, they undergo diffraction: the rays diverge from the incident axis and interfere with one another. The smaller the opening, the more pronounced the diffraction. As the rays diverge, they travel varying distances to their target, which in this case is the camera’s image sensor.

Airy Disk

The phase of each light ray is altered by the change in the distance traveled, producing an interference pattern. If the small opening is circular, such as a microscope objective, the interference pattern created appears like a bull’s eye target. This is called an "Airy disk."

The image given below shows the Airy disk interference pattern. The image on the left is a simulated Airy disk pattern; the image on the right is an actual Airy disk, produced by projecting a laser beam through a pinhole; and the image in the middle is a 3D plot of the Airy disk’s intensity.

Image source: Wikipedia

Resolution of Optical Systems

The resolution limit of an optical system is directly linked to the size of the central bright spot of the Airy disk, which is defined by the diameter of the first dark ring. Sharpness is lost when the central spots of two Airy disks begin to overlap, and once their centers are closer together than the radius of the first dark ring, the two points can no longer be resolved. Such a system is called a "diffraction limited system."

In the images below, the left-side image shows two Airy disks that are completely resolvable; the middle image depicts two Airy disks at the critical overlap point; and the right-side image displays two Airy disks that are no longer resolvable.

Image source: Wikipedia

The objective’s numerical aperture (NA) and the wavelength of light (λ) used are the two variables that determine the size of the Airy disk in a microscope. The Rayleigh criterion combines these variables to give the smallest resolvable dimension, d, of an optical system:

d = 0.61 λ / NA

Using green light (λ = 550 nm) with a 20x objective of 0.5 NA, the smallest resolvable dimension is 0.671 µm. When projected onto the image sensor, this dimension is magnified by the product of the objective and coupler magnifications.

In the case of a 0.5x coupler, combined with the 20x objective and a 1/2" sensor, this dimension becomes 6.71 µm on the sensor. To sample it properly, the Nyquist theorem requires at least two pixels across the smallest resolvable dimension, i.e. a pixel size of no more than half of it. The image sensor of the camera should therefore have pixels no larger than 3.36 µm each in order to properly resolve detail at this magnification.

Varying these parameters shows that a larger NA requires smaller pixels at the same magnification, because there is less diffraction. Conversely, the pixels can be larger if red light is used, because its wavelength is longer. Increased magnification also permits larger pixels, since the minimum resolvable dimension is projected onto a larger area of the sensor.

At higher magnifications such as 60x or 100x, the minimum resolvable dimension is enlarged further when projected onto the sensor, so smaller pixels provide no additional detail: there is no extra information to capture. Because larger pixels are sufficient to gather all the available detail in a highly magnified image, a lower-resolution sensor suffices for a given sensor format size.

Because light is collected from a smaller region of the slide at higher magnifications, the larger pixels also improve the sensitivity of the camera. This is beneficial for applications where high sensitivity is necessary and low light is a problem.

The following table provides a few concrete examples with different objective magnifications and numerical apertures. Note that, although the Nyquist theorem requires dividing the minimum resolvable dimension by two to determine the maximum pixel size, monochrome sensors resolve small dimensions more reliably if the smallest dimension is divided by three, and color sensors if it is divided by four, because of the layout of the Bayer filter pattern on the pixels. The table below shows the largest pixel size needed to fully resolve an object’s smallest visible detail using the coupler, sensor size, and wavelength from the earlier example: 0.5x, 1/2", and 550 nm, respectively.

Magnification / NA | Resolution Limit (µm) | Projected Size (µm) | Required Pixel Size (µm), Mono / Color | Sensor Resolution (Mono)
10x / 0.30         | 1.12                  | 11.2                | 3.73 / 2.80                            | 1717 x 1288
20x / 0.50         | 0.67                  | 13.4                | 4.47 / 3.36                            | 1431 x 1073
40x / 0.75         | 0.45                  | 17.9                | 5.96 / 4.47                            | 1073 x 805
60x / 0.85         | 0.39                  | 23.7                | 7.89 / 5.92                            | 811 x 608
100x / 1.30        | 0.26                  | 25.8                | 8.60 / 6.45                            | 744 x 558


Selecting a Camera with the Smallest Pixel Size

Since large pixels collect more light, it is not always best to simply choose the camera with the smallest pixel size. There is a trade-off between sensitivity and resolution, and a balance between these two factors will offer the most favorable solution. The pixel size should be large enough to capture an adequate amount of light, producing a quality image without requiring long exposure times, yet small enough to resolve the required amount of detail.

If users are weighing the merits of two cameras with similar sensor sizes but different pixel sizes, they should err on the side of smaller pixels. In general, image sensors with more pixels have smaller pixels. By itself, such a sensor provides more resolution at the cost of reduced sensitivity, but Lumenera CCD cameras support a “pixel binning” feature, in which the software groups a cluster of pixels together to form a larger “super pixel.”

This increases the effective sensitivity of the sensor and is useful in situations such as imaging at higher magnifications, where larger pixels are preferred for increased sensitivity and high pixel density is not required.

This information has been sourced, reviewed and adapted from materials provided by Lumenera Corporation.

For more information on this source, please visit Lumenera Corporation.
