Deep learning, a form of machine learning, is considered one of the main technologies behind recent advances in applications such as automated image and video labeling and real-time speech recognition.
The method, which employs multi-layered artificial neural networks to automate data analysis, has also shown major promise for health care: it could, for instance, automatically identify abnormalities in patients’ X-rays, CT scans and other medical images and data.
In two new papers, UCLA researchers report that they have developed new applications for deep learning: enhancing optical microscopy and reconstructing holograms to form microscopic images of objects.
Their new holographic imaging technique produces better images than existing methods that use multiple holograms, and it is easier to implement because it requires fewer measurements and performs computations faster.
The research was led by Aydogan Ozcan, an associate director of the UCLA California NanoSystems Institute and Chancellor’s Professor of Electrical and Computer Engineering at the UCLA Henry Samueli School of Engineering and Applied Science, and by postdoctoral scholar Yair Rivenson and graduate student Yibo Zhang, both of UCLA’s electrical and computer engineering department.
In one study, published in Light: Science & Applications, the researchers produced holograms of Pap smears, which are used to screen for cervical cancer, as well as of blood samples and breast tissue samples. In each case, the neural network learned to extract and separate the features of the true image of the object from unwanted light interference and from other physical byproducts of the image reconstruction process.
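The papers themselves describe the networks and training procedure; as a loose, purely illustrative analogy to the idea of learning image recovery from examples rather than from a physical model, the sketch below trains a tiny one-hidden-layer network (synthetic data, made-up sizes, plain NumPy — none of this is the authors’ actual architecture) to map signals corrupted by structured interference back to their clean versions.

```python
import numpy as np

# Illustrative sketch only: a small network learns to strip structured
# "interference" from synthetic 1-D signals, loosely analogous to learning
# hologram reconstruction from example pairs. All data and sizes are invented.
rng = np.random.default_rng(0)
n_samples, dim, hidden = 256, 32, 64

# Synthetic "true images": smooth sinusoids with random frequencies.
t = np.linspace(0.0, 1.0, dim)
clean = np.sin(2 * np.pi * rng.uniform(1, 3, (n_samples, 1)) * t)

# Corrupted "measurements": clean signal plus a fixed interference pattern
# and a little random noise.
noisy = clean + 0.5 * np.cos(2 * np.pi * 7 * t) \
              + 0.1 * rng.standard_normal(clean.shape)

# One hidden layer with tanh activation, trained by full-batch gradient
# descent on mean-squared reconstruction error.
W1 = 0.1 * rng.standard_normal((dim, hidden)); b1 = np.zeros(hidden)
W2 = 0.1 * rng.standard_normal((hidden, dim)); b2 = np.zeros(dim)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(noisy)
loss_before = np.mean((pred0 - clean) ** 2)

lr = 0.05
for _ in range(500):
    h, pred = forward(noisy)
    err = (pred - clean) / n_samples          # gradient of MSE w.r.t. pred
    gW2, gb2 = h.T @ err, err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)        # backprop through tanh
    gW1, gb1 = noisy.T @ dh, dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(noisy)
loss_after = np.mean((pred1 - clean) ** 2)
print(f"MSE before: {loss_before:.3f}, after: {loss_after:.3f}")
```

The point of the toy example is the one the article makes: the mapping from corrupted measurement to clean signal is learned from example pairs, with no explicit model of how the corruption arises.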
“These results are broadly applicable to any phase recovery and holographic imaging problem, and this deep-learning–based framework opens up myriad opportunities to design fundamentally new coherent imaging systems, spanning different parts of the electromagnetic spectrum, including visible wavelengths and even X-rays,” said Ozcan, who is also an HHMI Professor with the Howard Hughes Medical Institute.
Another advantage of the new technique is that it works without modeling light–matter interaction or solving the wave equation, both of which can be time-consuming and challenging for each new sample type and form of light.
“This is an exciting achievement since traditional physics-based hologram reconstruction methods have been replaced by a deep-learning–based computational approach,” Rivenson said.
UCLA researchers Harun Günaydin and Da Teng, both members of Ozcan’s lab, were the other members of the team.
In the second study, published in the journal Optica, the researchers used the same deep-learning framework to improve the quality and resolution of optical microscope images.
That advance could help pathologists and diagnosticians find extremely small-scale abnormalities in large blood or tissue samples, and Ozcan said it demonstrates the powerful opportunities for deep learning to enhance optical microscopy for medical diagnostics and other fields in science and engineering.
Support for Ozcan’s research was provided by the National Science Foundation–funded Precise Advanced Technologies and Health Systems for Underserved Populations program and the NSF, as well as by the Army Research Office, the National Institutes of Health, the Howard Hughes Medical Institute, the Vodafone Americas Foundation and the Mary Kay Foundation.