Some 500 million years ago, the oceans teemed with trillions of trilobites, distant relatives of the horseshoe crab. All trilobites had a broad range of vision thanks to their compound eyes: each eye consisted of tens to thousands of tiny independent units, each with its own cornea, lens and light-sensitive cells.
One group, however, Dalmanitina socialis, was exceptionally farsighted. Their bifocal eyes, each perched on a stalk and containing two lenses that bent light at different angles, allowed these marine animals to simultaneously view prey floating nearby and predators approaching from more than 1 km away.
Inspired by the eyes of D. socialis, researchers at the National Institute of Standards and Technology (NIST) have designed a miniature camera with a bifocal lens that achieves a record-setting depth of field: the range of distances over which the camera can produce sharp images in a single photo.
The camera can simultaneously image objects as near as 3 cm and as far away as 1.7 km. The team developed a computer algorithm to correct for aberrations, sharpen objects at intermediate distances between these near and far focal points, and produce a final all-in-focus image covering this enormous depth of field.
These kinds of lightweight, large-depth-of-field cameras, which combine photonic technology at the nanoscale with software-powered photography, have the potential to transform future high-resolution imaging platforms.
In particular, the cameras would greatly boost the ability to produce highly detailed images of crowds of organisms spread across a large field of view, of cityscapes, and of other scenes in which both near and far objects must be brought into sharp focus.
NIST researchers Henri Lezec and Amit Agrawal, together with their colleagues from Nanjing University and the University of Maryland, College Park, report their findings online in the April 19 issue of Nature Communications.
The researchers fabricated an array of tiny lenses known as metalenses: ultrathin films etched or imprinted with groupings of nanoscale pillars tailored to manipulate light in precise ways.
To engineer their metalens, Agrawal and his colleagues studded a flat glass surface with millions of tiny, rectangular nanoscale pillars. The shape and orientation of these constituent nanopillars focused light in such a way that the metasurface acted simultaneously as a macro lens (for close-up objects) and a telephoto lens (for distant ones).
The nanopillars captured light from a scene of interest, which can be divided into two equal parts: light that is right circularly polarized and light that is left circularly polarized. (Polarization refers to the direction of the electric field of a light wave; right circularly polarized light has an electric field that rotates clockwise, while left circularly polarized light has an electric field that rotates counterclockwise.)
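This splitting of light into two equal circular components can be sketched with Jones vectors. The snippet below is a minimal illustration, assuming one common sign convention for the circular basis (conventions differ between optics texts); it shows that linearly polarized light decomposes into equal right- and left-circular parts.

```python
import numpy as np

# Jones vectors for right- and left-circularly polarized light
# (one common convention; the sign of the imaginary part varies by text).
R = np.array([1, -1j]) / np.sqrt(2)
L = np.array([1,  1j]) / np.sqrt(2)

# Linearly polarized light along x.
E = np.array([1.0, 0.0])

# Project the field onto the circular basis (np.vdot conjugates its
# first argument, giving the inner product <basis|E>).
c_R = np.vdot(R, E)
c_L = np.vdot(L, E)

print(abs(c_R)**2, abs(c_L)**2)  # each component carries half the intensity
```

Summing the two projected components reconstructs the original field, which is why the metalens can process the two halves independently without losing light.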
The nanopillars bent the right and left circularly polarized light by different amounts, depending on the orientation of the pillars. The researchers oriented the rectangular nanopillars so that some of the incoming light had to travel through the longer dimension of the rectangles and the rest through the shorter dimension.
Along the longer path, light passed through more material and therefore experienced more bending; along the shorter path, it encountered less material and was bent less.
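The extra bending along the longer path reflects the extra optical phase that light accumulates when it crosses more material, phi = 2*pi*n*L/lambda. A quick sketch, using assumed pillar dimensions and refractive index (not the paper's values), shows the longer axis imposing proportionally more phase delay:

```python
import math

# Optical phase accumulated crossing a pillar: phi = 2*pi*n*L/lambda.
# All values below are illustrative assumptions, not the paper's dimensions.
n = 2.0               # assumed effective refractive index of the pillar material
wavelength = 550e-9   # green light, in meters

def phase(length_m):
    """Phase delay (radians) after traversing length_m of material."""
    return 2 * math.pi * n * length_m / wavelength

long_axis, short_axis = 300e-9, 100e-9   # assumed rectangle dimensions
ratio = phase(long_axis) / phase(short_axis)
print(ratio)   # ~3: three times the material, three times the delay
```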
Light that is bent by different amounts comes to a focus at a different distance: the greater the bending, the closer the focus. In this way, depending on whether light traveled through the longer or the shorter dimension of the rectangular nanopillars, the metalens could image both distant objects (1.7 km away) and nearby ones (just a few centimeters from the camera).
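The link between bending strength and focus distance can be illustrated with the thin-lens equation. In the sketch below, the lens-to-sensor distance is an assumed illustrative value; only the 3 cm and 1.7 km object distances come from the article. Focusing the near object onto the same sensor plane demands a shorter focal length, i.e. stronger bending.

```python
# Thin-lens sketch: for a fixed sensor (image) distance, a nearby object
# requires a shorter focal length than a distant one.

def focal_length(d_obj, d_img):
    """Thin-lens equation: 1/f = 1/d_obj + 1/d_img."""
    return 1.0 / (1.0 / d_obj + 1.0 / d_img)

d_img = 0.005                            # 5 mm assumed lens-to-sensor distance
f_near = focal_length(0.03, d_img)       # object at 3 cm
f_far = focal_length(1700.0, d_img)      # object at 1.7 km

print(f"near-focus f = {f_near*1000:.3f} mm")  # shorter f: light bent more
print(f"far-focus  f = {f_far*1000:.3f} mm")   # ~ d_img: light nearly collimated
```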
Without further processing, however, objects at intermediate distances (more than a few meters from the camera) would remain unfocused. Agrawal and his colleagues employed a neural network, a computer algorithm that loosely mimics the human nervous system, to teach software to recognize and correct defects such as chromatic aberration and blurriness in objects lying between the near and far foci of the metalens.
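The team's network is trained on real captures and is far more elaborate than anything shown here. As a stand-in for the idea, the sketch below applies a single fixed 3x3 sharpening kernel, the same kind of operation one convolutional layer performs, with hand-picked rather than learned weights:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution (no padding); kernel here is symmetric,
    so correlation and convolution coincide."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

# Hand-picked sharpening kernel; a trained network would learn its weights.
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

blurry = np.full((5, 5), 0.5)
blurry[2, 2] = 1.0                 # a soft bright spot on a gray background
sharp = conv2d(blurry, sharpen)
print(sharp[1, 1], sharp[0, 0])    # spot boosted, flat background unchanged
```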
The researchers tested their camera by placing objects of different shapes, colors and sizes at various distances in a scene of interest and applying the software correction to generate a final image that was focused and free of aberrations over the entire kilometer-scale depth of field.
The metalenses created by the researchers boost light-gathering ability without sacrificing image resolution. Moreover, because the camera automatically corrects for aberrations, it has a high tolerance for error, allowing the scientists to use simple, easy-to-fabricate designs for the miniature lenses, Agrawal explained.
Fan, Q., et al. (2022). Trilobite-inspired neural nanophotonic light-field camera with extreme depth-of-field. Nature Communications. https://doi.org/10.1038/s41467-022-29568-y