Salk Researchers Use Deep Learning to Improve Microscope Image Quality

Deep learning is a potential tool for scientists to glean more detail from low-resolution microscopy images, but it's often difficult to gather enough matched training data to teach the computer the task. Now, a new method developed by scientists at the Salk Institute could make the technology more accessible--by taking high-resolution images and artificially degrading them.

The new tool, which the researchers call a "crappifier," could make it significantly easier for scientists to get detailed images of cells or cellular structures that have previously been difficult to observe because they require low-light conditions. Mitochondria, for example, can start dividing when stressed by the lasers used to illuminate them.

It could also help democratize microscopy, allowing scientists to capture high-resolution images even if they don't have access to powerful microscopes. The findings were published March 8, 2021, in the journal Nature Methods.

"We invest millions of dollars in these microscopes, and we're still struggling to push the limits of what they can do," says Uri Manor, director of the Waitt Advanced Biophotonics Core Facility at Salk. "That's the problem we were trying to solve with deep learning."

Deep learning is a type of artificial intelligence (AI) in which computer algorithms learn and improve by studying examples. To use deep learning to improve microscope images--either by increasing the resolution (sharpness) or by reducing background "noise"--the system must be shown many matched pairs of high- and low-resolution images of the same sample.

That's a problem, because capturing two matched exposures of the same field of view can be difficult and expensive. It's especially challenging when imaging living cells that may move during the process.

That's where the crappifier comes in. According to Manor, the method takes high-quality images and computationally degrades them so that they resemble the lowest-quality images the team would actually acquire.
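To make that concrete, here is a minimal sketch of what such a degradation step could look like, assuming a grayscale image stored as a NumPy array with values in [0, 1]. The paper's actual crappifier differs in its details; the function below is purely illustrative.

```python
import numpy as np

def crappify(img, scale=4, noise_sigma=0.1, rng=None):
    """Hypothetical degradation step: downsample, then add noise.

    `img` is a 2-D grayscale array with values in [0, 1]. This is an
    illustrative stand-in, not the paper's actual crappifier.
    """
    rng = rng or np.random.default_rng()
    h, w = img.shape
    # Crop so both sides divide evenly by `scale`, then block-average:
    # each output pixel is the mean of a scale x scale block, mimicking
    # a faster, lower-resolution scan.
    img = img[: h - h % scale, : w - w % scale]
    low = img.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    # Additive Gaussian noise as a rough stand-in for acquisition noise.
    noisy = low + rng.normal(0.0, noise_sigma, size=low.shape)
    return np.clip(noisy, 0.0, 1.0)
```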

Manor's team showed the high-resolution images and their degraded counterparts to the deep learning software, called Point-Scanning Super-Resolution, or PSSR. After studying these image pairs, the system was able to improve images that were genuinely poor quality to begin with.
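In outline, training pairs each degraded image with its pristine original and asks a network to map one back to the other. The sketch below is a simplified stand-in, not the PSSR implementation itself: the tiny model, the mean-squared-error loss, and the `pairs` iterable are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class TinySR(nn.Module):
    """Toy 4x super-resolution network (illustrative; PSSR's is larger)."""
    def __init__(self, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a scale-x larger image
        )

    def forward(self, x):
        return self.net(x)

model = TinySR()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# `pairs` is assumed to yield (degraded, high_res) tensor batches built
# with the crappifier above; it is a placeholder, not part of the paper.
for degraded, high_res in pairs:
    optimizer.zero_grad()
    loss = loss_fn(model(degraded), high_res)
    loss.backward()
    optimizer.step()
```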

That's significant because, in the past, computer systems trained on artificially degraded data still struggled when presented with raw data from the real world.

"We tried a bunch of different degradation methods, and we found one that actually works," Manor says. "You can train a model on your artificially-generated data, and it actually works on real-world data."

"Using our method, people can benefit from this powerful, deep learning technology without investing a lot of time or resources," says Linjing Fang, image analysis specialist at the Waitt Advanced Biophotonics Core Facility, and lead author on the paper. "You can use pre-existing high-quality data, degrade it, and train a model to improve the quality of a lower-resolution image."

The team showed that PSSR works with both electron microscopy and fluorescence live-cell imaging--two settings where it can be extraordinarily difficult or impossible to obtain the matching high- and low-resolution images needed to train AI systems.

While the study demonstrated the method on images of brain tissue, Manor hopes it could be applied to other systems of the body in the future.

He also hopes it could someday be used to make high-resolution microscopic imaging more widely accessible. Currently, the most powerful microscopes in the world can cost upwards of a million dollars because of the precision engineering required to create high-resolution images. "One of our visions for the future is to be able to start replacing some of those expensive components with deep learning," Manor says, "so we could start making microscopes cheaper and more accessible."

Other authors on the study are Sammy Weiser Novak, Cara R. Schiavon, Tong Zhang and Melissa Wu of the Salk Institute; Fred Monroe of the Wicklow AI Medical Research Initiative; Lindsey Kirk and Kristen Harris of the University of Texas at Austin; Seungyoon B. Yu and Gulcin Pekkurnaz of the University of California San Diego; Kyle Kastner of the Université de Montréal; Yoshiyuki Kubota of the National Institute for Physiological Sciences, Okazaki, Japan; Zhao Zhang of the University of Texas at Austin; and Alaa Abdel Latif, Zijun Lin, Andrew Shaw and Jeremy Howard of the University of San Francisco.

The research was supported by the National Science Foundation, the Chan Zuckerberg Initiative, the Waitt Foundation, the National Cancer Institute, the National Institute on Deafness and Other Communication Disorders, the National Institute of Mental Health, the Wicklow AI Medical Research Initiative, the Parkinson's Foundation, the National Institutes of Health, and the Japan Society for the Promotion of Science.

Source: https://www.salk.edu/
