
Enhancing Holographic Imaging Through Optics-Inspired Deep Learning

Holographic imaging in dynamic settings has long grappled with unpredictable distortions, a formidable challenge for the field.

Leveraging spatial coherence as a physical prior to guide the training of a deep neural network, the TWC-Swin method excels at capturing both local and global image features and eliminates image degradation caused by arbitrary turbulence. Image Credit: Tong, X., et al.

Traditional deep learning approaches often stumble when conditions change because they rely heavily on the specific data conditions seen during training.

To confront this issue, a team of researchers from Zhejiang University delved into the intersection of optics and deep learning. In doing so, they unveiled the pivotal role of physical priors in aligning data and pre-trained models effectively.

They investigated how spatial coherence and turbulence affect holographic imaging and proposed an inventive technique, TWC-Swin, to restore high-quality holographic images in the presence of these disruptions.

This groundbreaking research is published in the Gold Open Access journal Advanced Photonics.

Spatial coherence measures the orderly behavior of light waves. Chaotic light waves carry less information and can render holographic images blurry and noisy, so maintaining spatial coherence is vital for clear, sharp holographic imaging.

Dynamic environments, such as those characterized by oceanic or atmospheric turbulence, introduce fluctuations in the refractive index of the medium. This disrupts the phase correlation of light waves and distorts spatial coherence. Consequently, holographic images may become blurred, distorted, or even lost.
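This degradation mechanism can be illustrated with a toy simulation (an assumption-laden sketch, not the paper's turbulence model): a smooth random phase screen stands in for refractive-index fluctuations, and a simple statistic shows phase coherence falling as the screen's strength grows. All function names and parameters here are invented for illustration.

```python
import numpy as np

def random_phase_screen(shape, strength, rng):
    """Model turbulence as a random perturbation of the optical phase.

    A smooth random field stands in for refractive-index fluctuations;
    `strength` (radians RMS) sets how severe the turbulence is.
    """
    noise = rng.standard_normal(shape)
    # Low-pass filter in the Fourier domain so the screen varies smoothly,
    # as a physical refractive-index field would.
    kx = np.fft.fftfreq(shape[0])[:, None]
    ky = np.fft.fftfreq(shape[1])[None, :]
    lowpass = np.exp(-(kx**2 + ky**2) / (2 * 0.05**2))
    smooth = np.fft.ifft2(np.fft.fft2(noise) * lowpass).real
    smooth *= strength / smooth.std()
    return np.exp(1j * smooth)  # unit-modulus phase factor

def coherence_after_screen(field, screen):
    """Apply the screen and report a simple coherence proxy:
    |mean of field| / mean of |field| (1.0 = perfectly aligned phases)."""
    distorted = field * screen
    return np.abs(distorted.mean()) / np.abs(distorted).mean()

rng = np.random.default_rng(0)
plane_wave = np.ones((128, 128), dtype=complex)  # perfectly coherent input
for strength in (0.1, 1.0, 3.0):
    screen = random_phase_screen(plane_wave.shape, strength, rng)
    print(strength, round(coherence_after_screen(plane_wave, screen), 3))
```

As the phase perturbation strengthens, the coherence proxy drops toward zero, mirroring how turbulence washes out the phase correlations that holography depends on.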

The Zhejiang University researchers devised the TWC-Swin method to address these challenges. TWC-Swin, an abbreviation for “train-with-coherence swin transformer,” harnesses spatial coherence as a physical prior to guide the training of a deep neural network. This network, built on the Swin transformer architecture, excels at capturing local and global image features.
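The Swin transformer's distinguishing mechanism is self-attention computed within local windows, with the windows shifted between layers so that global context builds up across depth. As a rough single-head illustration (not the authors' TWC-Swin code; the dimensions and helper names are invented), one windowed attention step might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_attention(x, window, wq, wk, wv):
    """Self-attention restricted to non-overlapping windows (the core
    Swin idea): cheap local mixing within each window."""
    h, w, c = x.shape
    out = np.empty_like(x)
    for i in range(0, h, window):
        for j in range(0, w, window):
            patch = x[i:i+window, j:j+window].reshape(-1, c)  # window tokens
            q, k, v = patch @ wq, patch @ wk, patch @ wv
            attn = softmax(q @ k.T / np.sqrt(c))
            out[i:i+window, j:j+window] = (attn @ v).reshape(window, window, c)
    return out

def shifted(x, window):
    """Cyclically shift the feature map so the next layer's windows
    straddle the previous layer's window boundaries."""
    s = window // 2
    return np.roll(x, (-s, -s), axis=(0, 1))

rng = np.random.default_rng(0)
c = 8
x = rng.standard_normal((16, 16, c))
wq, wk, wv = (rng.standard_normal((c, c)) * 0.1 for _ in range(3))
y = window_attention(x, 4, wq, wk, wv)               # local windows
y = window_attention(shifted(y, 4), 4, wq, wk, wv)   # shifted windows
print(y.shape)  # (16, 16, 8)
```

Alternating plain and shifted windows is what lets the architecture capture both local detail and global structure without the quadratic cost of full-image attention.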

To evaluate their method, the authors developed a light processing system that generated holographic images under varying spatial coherence and turbulence conditions. These holograms featured natural objects and served as training and testing data for the neural network.

The results unequivocally demonstrate that TWC-Swin proficiently restores holographic images, even when spatial coherence is low and turbulence is arbitrary, surpassing traditional convolutional network-based methods.

Furthermore, the method exhibits robust generalization, extending its applicability to scenes not included in the training dataset.

This research marks a significant breakthrough in addressing image degradation in holographic imaging across diverse scenarios. By integrating physical principles into deep learning, this study reveals a successful synergy between optics and computer science.

The current research sets the stage for enhanced holographic imaging, granting the ability to perceive clearly through turbulence.

Journal Reference:

Tong, X., et al. (2023). Harnessing the magic of light: spatial coherence instructed swin transformer for universal holographic imaging. Advanced Photonics, 5(6), 066003. doi.org/10.1117/1.AP.5.6.066003

Source: https://spie.org/?SSO=1
