A new diffractive network design enables real-time 3D and multispectral imaging without digital reconstruction or moving parts.

A recent study published in Light: Science & Applications introduces a new optical framework for engineering three-dimensional (3D) point spread functions (PSFs) using spatially incoherent diffractive networks.
The approach offers a fully optical solution for manipulating volumetric data and eliminates the need for digital reconstruction, spectral filters, or axial scanning. It has wide-ranging potential across imaging, sensing, and data processing applications.
PSF Engineering and Its Role in Imaging
PSFs define how light from a single point source spreads in an optical system, directly impacting image clarity and depth resolution.
Engineering PSFs to shape these responses intentionally has allowed researchers to enhance performance in 3D microscopy, astronomy, and data storage. Traditional PSF engineering has relied on fixed phase masks at the Fourier plane, which limit the spatial and spectral adaptability that more advanced imaging applications require.
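To make the PSF's role concrete, consider a minimal sketch of image formation under spatially incoherent light, where the recorded image is the object's intensity convolved with the system's PSF. This is a textbook model rather than anything from the study, and the Gaussian PSF and array sizes below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, sigma=2.0):
    """A Gaussian stand-in for a real system's PSF (illustrative only)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()   # normalize so total energy is preserved

# Under spatially incoherent illumination, imaging is linear in intensity:
# the recorded image is the object's intensity convolved with the PSF.
object_intensity = np.zeros((64, 64))
object_intensity[32, 32] = 1.0   # a single point source
image = fftconvolve(object_intensity, gaussian_psf(), mode="same")
```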
Recent advances in diffractive optics, including deep-learning-optimized multilayer structures, have enabled powerful 2D light manipulation for tasks such as classification, encryption, and phase retrieval.
However, until now, these systems have not been used to create spatially varying 3D PSFs, which are essential for volumetric and multispectral imaging.
Mapping Arbitrary Volumes with Diffractive Networks
The researchers developed a general framework capable of generating arbitrary 3D spatial and spectral PSFs across both input and output volumes.
Their system uses a stack of transmissive, phase-only layers optimized through deep learning to create precise intensity transformations from an input voxel grid to an output voxel grid. Each input voxel maps to the output volume through its own engineered PSF, and because spatially incoherent light adds in intensity, the full transformation is described by a matrix with non-negative, real-valued entries.
To keep the transformation error low, the network requires a number of trainable phase features on the order of twice the product of the input and output voxel counts (2NiNo). Deeper networks, particularly those with four or more diffractive layers, consistently outperformed shallower configurations.
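In matrix terms, the mapping the network learns can be sketched as follows. This toy model follows the article's description (intensity-only, non-negative entries) but is not the authors' code; the voxel counts are arbitrary placeholders.

```python
import numpy as np

Ni, No = 64, 64                 # illustrative input/output voxel counts
rng = np.random.default_rng(0)

# Spatially incoherent light adds in intensity, so the volume-to-volume
# transformation is modeled as a matrix A with non-negative, real-valued
# entries: column j is the engineered 3D PSF of input voxel j, flattened
# over the output voxels.
A = rng.random((No, Ni))

x = rng.random(Ni)              # emission intensities in the input volume
y = A @ x                       # resulting intensities over the output voxels

# Feature budget quoted in the study: roughly 2 * Ni * No trainable
# phase features are needed to approximate an arbitrary mapping well.
print(f"trainable phase features needed: ~{2 * Ni * No:,}")
```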
The study also explored the network's physical constraints, finding that performance declines as voxel spacing approaches the axial diffraction limit. Layer spacing and aperture size set the numerical aperture, which in turn determines the maximum resolvable voxel density. Bit depth is also critical; a minimum of 8-bit phase control is required for reliable results.
Interestingly, mismatches between training and actual hardware bit depth substantially reduced accuracy, underlining the importance of hardware-aware design.
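The bit-depth sensitivity is easy to illustrate with a simple quantization sketch, again hypothetical rather than taken from the study: a phase profile designed at 8-bit precision picks up much larger errors when deployed with coarser control.

```python
import numpy as np

def quantize_phase(phase, bits):
    """Round phase values in [0, 2*pi) to a grid of 2**bits levels."""
    step = 2 * np.pi / 2 ** bits
    return np.round(phase / step) * step

rng = np.random.default_rng(1)
trained_phase = rng.uniform(0, 2 * np.pi, size=(200, 200))  # hypothetical layer

# Deploying a design on hardware with coarser phase control introduces
# quantization errors that grow as the bit depth drops, mirroring the
# accuracy loss reported for bit-depth mismatches.
for bits in (8, 4):
    err = np.abs(quantize_phase(trained_phase, bits) - trained_phase)
    print(f"{bits}-bit control: max phase error = {err.max():.4f} rad")
```

The maximum phase error doubles with every bit removed, which helps explain why a network trained at one bit depth loses accuracy on coarser hardware.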
Simulation Results and Experimental Scenarios
The team demonstrated the system's key capabilities through simulations. In one example, a four-layer diffractive network performed snapshot 3D imaging across four axial planes, each spaced 2.67 wavelengths apart.
Signals from each plane were routed to distinct detector pixels, enabling direct spatial demultiplexing without post-processing. Emission patterns from the sources within the volume were successfully reconstructed with minimal deviation from target output intensities.
To assess spectral versatility, the setup was tested with input sources emitting at 580, 600, and 620 nm. The network accurately separated signals by both wavelength and axial position, assigning each to specific detector pixels and delivering simultaneous 3D and spectral imaging on a single detector array.
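One way to picture this joint demultiplexing is as a lookup from (wavelength, axial plane) to a dedicated detector pixel, with one intensity matrix per spectral channel. The sketch below is purely illustrative; the pixel assignments and matrix sizes are invented for the example.

```python
import numpy as np

wavelengths = (580e-9, 600e-9, 620e-9)   # emission wavelengths from the study
n_planes = 4                             # axial planes, spaced 2.67 wavelengths

# Hypothetical routing: each (wavelength, plane) pair gets its own
# detector pixel, so one snapshot separates depth and color at once.
pairs = [(wl, p) for wl in wavelengths for p in range(n_planes)]
routing = {pair: pixel for pixel, pair in enumerate(pairs)}

rng = np.random.default_rng(2)
Ni, No = 64, len(pairs)                  # toy voxel and pixel counts

# One non-negative intensity matrix per spectral channel; incoherent
# channels are mutually independent and add in intensity at the detector.
A = {wl: rng.random((No, Ni)) for wl in wavelengths}
x = {wl: rng.random(Ni) for wl in wavelengths}
detector = sum(A[wl] @ x[wl] for wl in wavelengths)

# e.g. light at 600 nm emitted from plane 2 lands on this pixel:
print("pixel", routing[(600e-9, 2)], "reads", detector[routing[(600e-9, 2)]])
```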
The researchers also probed the system's real-world robustness by testing its response to refractive index mismatches between training and operating conditions.
The study found that the diffractive network's performance declined when trained assuming an air medium but evaluated with a water-immersed input volume. However, retraining the network under the correct refractive index conditions restored imaging accuracy, demonstrating the system's adaptability through proper environmental modeling.
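This sensitivity follows from how the refractive index enters the free-space propagation model used to train such networks. A minimal angular spectrum sketch, standard optics rather than the authors' implementation, shows how the propagation kernel changes between air and water; the grid size, pixel pitch, wavelength, and distance below are illustrative.

```python
import numpy as np

def angular_spectrum_kernel(nx, dx, wavelength, z, n_medium):
    """Transfer function for propagating a field a distance z through a
    medium of refractive index n_medium (evanescent waves suppressed)."""
    fx = np.fft.fftfreq(nx, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    lam = wavelength / n_medium                # wavelength shrinks in the medium
    arg = 1.0 - (lam * FX) ** 2 - (lam * FY) ** 2
    kz = (2 * np.pi / lam) * np.sqrt(np.maximum(arg, 0.0))
    return np.exp(1j * kz * z) * (arg > 0)

# Training assumed air (n = 1.00) but operation happened in water
# (n = 1.33); every propagation step between layers changes, which is
# why the mismatched network degrades until it is retrained.
H_air = angular_spectrum_kernel(256, 0.5e-6, 600e-9, 10e-6, n_medium=1.00)
H_water = angular_spectrum_kernel(256, 0.5e-6, 600e-9, 10e-6, n_medium=1.33)
# A field U propagates as: np.fft.ifft2(np.fft.fft2(U) * H_air)
```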
Broader Implications for Imaging and Sensing
This method opens the door to compact, low-power imaging devices that can optically perform complex 3D and spectral tasks. Removing the need for mechanical motion or digital computation offers a path to real-time diagnostics in wearable optics, portable sensing, lab-on-chip systems, and biomedical platforms such as flow cytometry and holographic microscopy.
Beyond imaging, this approach could be used in optical signal processing, spatial data encoding, and future memory storage systems, wherever precise, diffraction-limited light control is required.
This study presents a compelling method for universal 3D PSF synthesis using deep-learning-optimized diffractive networks. The system achieves all-optical, real-time volumetric imaging and multispectral sensing without spectral filters, axial scanning, or digital reconstruction.
Future work could include experimental validation, integration with current imaging tools, adaptation to nonlinear or scattering media, and implementation with programmable or reconfigurable optical materials.
Journal Reference
Rahman, M.S.S., Ozcan, A. Universal point spread function engineering for 3D optical information processing. Light Sci Appl 14, 212 (2025). DOI: 10.1038/s41377-025-01887-x, https://www.nature.com/articles/s41377-025-01887-x