Experts Discuss Contribution of Latest Optical Technologies to Field of 3D Medicine

Live 3D imaging is one of the hottest topics in optics today, transforming medical imaging capabilities and delivering the immersive experience behind augmented and virtual reality. During The Optical Society’s Light the Future centennial program in Heidelberg, Germany, on 26 July, Dr. Joseph Izatt of Duke University and Microsoft’s Bernard Kress gave an insider’s look at how these technologies are advancing medicine and changing the future of how we interact with computers.

Picking up where Ray Kurzweil left off in the previous Light the Future presentation about artificial intelligence and the continually shrinking scale of medicine, Izatt discussed some of the latest optical technologies delivering micron precision to surgery. Specifically, he sees the convergence of real-time 3D imaging, augmented reality (AR) and virtual reality (VR) visualization, and surgical robotics as the driver of these advanced capabilities.

Optical coherence tomography (OCT) has already delivered the ability to map the entire vascular network within the retina, down to the single capillary level. Thanks to the computing power of graphics processing units (GPUs), the real power of this 3D resolution is only now starting to emerge as the images are brought to the surgeon live during an operation, using a variety of display methods.
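To give a sense of the processing behind each volumetric frame, the sketch below reconstructs a single spectral-domain OCT depth profile (an A-scan) from a simulated interference spectrum using a Fourier transform, which is the basic step repeated millions of times per second in live volumetric OCT. It is an illustrative toy only: the wavenumber range, reflector depths, and the use of NumPy on a CPU (rather than the GPU pipelines Izatt describes) are assumptions, not details from the talk.

```python
# Minimal sketch: recovering a spectral-domain OCT depth profile (A-scan).
# The spectrometer records an interference spectrum versus wavenumber k;
# a Fourier transform converts it to reflectivity versus depth.
import numpy as np

n_samples = 2048                              # spectrometer pixels (assumed)
k = np.linspace(7.5e6, 8.5e6, n_samples)      # wavenumber samples (1/m), ~800 nm band

# Toy sample: two reflectors at assumed depths of 100 um and 250 um
depths = np.array([100e-6, 250e-6])
reflectivities = np.array([1.0, 0.4])

# Detected spectrum: DC background plus one cosine fringe per reflector
spectrum = np.ones_like(k)
for z, r in zip(depths, reflectivities):
    spectrum += r * np.cos(2 * k * z)

# Remove the DC background, then FFT to obtain the depth profile
a_scan = np.abs(np.fft.fft(spectrum - spectrum.mean()))
dz = np.pi / (k[-1] - k[0])                   # depth sampling set by spectral span
depth_axis = np.arange(n_samples) * dz

peak_depth = depth_axis[np.argmax(a_scan[: n_samples // 2])]
print(f"Strongest reflector recovered near {peak_depth * 1e6:.0f} um")
```

A full B-scan or volume simply repeats this transform across thousands of lateral positions, which is why GPU acceleration is what makes the live, intraoperative display Izatt describes feasible.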

Izatt and collaborators have developed a way to integrate OCT hardware into the surgical microscopes typically used for eye surgery. The retina, for example, has the consistency of wet tissue paper, and any operation on it demands complex and delicate technique. Describing the implications of the OCT device, Izatt said, “In addition to just visualizing this better, this technology enables quantitative measurements that cannot be made without the technology.”

Where he sees major challenges is in how this live 3D imaging is displayed to the surgeon. Izatt’s team has had success with stereoscopic heads-up displays it developed to integrate the live OCT images into the surgeon’s microscope oculars. This is a relatively small field of view to work with, however, and there is great potential for developing more advanced visualizations for surgeons.

Some operating rooms already employ 3D TV displays, which avoid the physical discomfort that can come with looking through a microscope. While they also offer a slightly larger field of view, at around 55°, Izatt sees far greater potential in head-mounted AR/VR displays that could come much closer to a full 4π steradian field of view; in other words, they would allow the surgeon to view these volumetric images from any angle she or he chooses.

A challenge in using head-mounted displays arises, perhaps surprisingly, in adapting the display and volumetric perspective to a surgeon’s intuitive hand-eye coordination. Here, Izatt sees great potential for haptic (touch-driven) control of robotics. With the right levels of sensory feedback, such cooperative control not only offers higher precision for microsurgery but also improves safety for the patient.

The technology of such head-mounted display systems was the focus of the session’s second presentation, given by Bernard Kress, optical architect for Microsoft’s HoloLens AR headset.

For many, the names HoloLens, Oculus, Vive, or even Google Cardboard may seem very new. In terms of the market, however, Kress notes that head-mounted displays have already passed through the hype cycle, paving the way for their ubiquity in as little as 10 years. “At Microsoft we think strongly that this will be the next computing platform,” says Kress.

Although there is plenty of overlap, he points out that VR, AR and smart eyewear such as Google Glass (which Kress also helped develop before moving to Microsoft) are three very different technologies, at least for now. He, along with many other major players in the field, sees the fusion of the three as an eventuality. These are far from trivial claims given the capabilities and challenges each technology involves.

Beyond the physical differences among users who all want to wear such complex devices comfortably, each person also differs in field of view and visual processing. Moreover, even within a single user’s vision, different parts of the visual field are suited to different tasks; reading text in the near field versus detecting the motion of objects in the periphery is one example.

For the time being at least, AR and VR require different solutions to these problems. While VR must fill the largest field of view, the entire visual environment is simulated and can be accounted for in development. AR works within the viewer’s natural field of view of roughly 220°, but its simulated objects are painted on top of the viewer’s reality and must adapt to continually changing surroundings.

Aside from the technical challenges in developing head-mounted displays, Kress also pointed out some of the market challenges already being addressed and conquered. Cost is an obvious hurdle: the devices typically run close to USD 1,000 today. With so few people able to try them, cultural barriers can also take hold, leaving many averse to something so unfamiliar and even fearful of such exotic technology.

This is where Kress sees products like Google Cardboard and Pokémon Go playing major roles in promotion and democratization. Each of these has millions of users already gaining familiarity with AR and VR capabilities. “Imagine you’re playing Pokémon Go with HoloLens instead of your smartphone. It’s just the tip of the iceberg,” says Kress.

Despite the difficulties, Kress emphasizes that AR and VR technologies are taking off. As they do, both the user experience and the (shrinking) hardware will ultimately be shaped by a growing toolkit of optical technologies. Among those Microsoft is already exploring are geometric phase holograms, surface plasmons, metamaterials and parity-time symmetry optics.

Whether it’s for finding a single capillary in a retina, or finding Pikachu in the park, VR and AR technologies have eyes clearly focused on the future.
