
Hypermultiplexed Integrated Photonics–based Optical Tensor Processor

A paper recently published in the journal Science Advances introduced a hypermultiplexed integrated photonics–based tensor optical processor (HITOP) for scalable optical computing with hyperdimensional parallelism.

Scalable optical computing with hyperdimensional parallelism is possible thanks to HITOP

Image Credit: amgun/Shutterstock.com

Existing Tensor Processing Limitations

Tensor processors have emerged as a key computing building block in artificial intelligence (AI) and high-performance computing (HPC) because of their efficiency on data-intensive algorithms. Their adoption has driven progress in areas such as deep learning, iterative solvers, and NP-hard optimization problems. Large language models (LLMs) in particular impose demanding hardware requirements, long training times, and high energy costs, and this need for computing power has become a key barrier to AI model development and deployment.

Conventional von Neumann architectures handle these workloads inefficiently because tensor processing places heavy demands on the memory interface. New computing approaches are therefore being developed that harness different forms of intrinsic parallelism. The major figures of merit (FoM) to be optimized are energy efficiency, model scalability, computing power, computing density, and nonlinearity, all within a compact footprint and at low latency and low energy.

The Potential of Optics

Optics can improve these computing FoM substantially thanks to the large optical bandwidth and ultra-low loss of light propagation. However, current techniques based on wavelength and spatial multiplexing still rely on energy-intensive, high-speed analog-to-digital converters and require O(N²) modulator devices, since each matrix element needs its own dedicated modulator. Time-multiplexed methods have recently shown promise for scalable computing, yet existing optical interference-based demonstrations offer limited scalability and computing accuracy.

On the device level, interfacing digital memory with low-loss, low-voltage, high-speed optical transmitters remains a critical bottleneck: existing technologies offer limited optical bandwidth and require complex wavelength tuning, while broadband modulators face their own performance trade-offs. As a result, innovations spanning from architectural design down to the device level are essential to fully realize the benefits of optical computing for scalable and energy-efficient performance.

The Study

In this study, researchers introduced a hypermultiplexed tensor optical processor that leverages three-dimensional space-time-wavelength optical parallelism to achieve trillions of operations per second. This approach enables O(N²) operations per clock cycle while using only O(N) modulator devices, significantly improving scalability and efficiency. HITOP is fundamentally a hybrid optoelectronic system designed for general-purpose tensor processing. The primary goal of the study was to meet the increasing demand for scalable and energy-efficient computing hardware.
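To make this scaling argument concrete, the short NumPy sketch below (an illustration under assumed array names and sizes, not the authors' code) shows how a single clock cycle can generate N² pairwise products while only about 2N encoder values are updated:

```python
import numpy as np

# Minimal sketch (not the authors' code): in one clock cycle, N wavelength-encoded
# inputs fan out across N spatially routed weight channels, so all N*N pairwise
# products form in parallel even though only 2N encoder values were updated.

N = 8                                  # example size (assumption)
rng = np.random.default_rng(0)
x = rng.standard_normal(N)             # one symbol per wavelength (input activations)
w = rng.standard_normal(N)             # one symbol per spatial channel (weights)

products = np.outer(w, x)              # N x N multiplications from O(N) encoders
print(products.shape)                  # (8, 8)
```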

The system was developed using wafer-fabricated III/V micrometer-scale lasers and thin-film lithium niobate (TFLN) electro-optics to encode data at tens of femtojoules (fJ) per symbol. The lasing threshold provided an inline analog rectified linear unit (ReLU) nonlinearity, realizing low-latency activation. III/V-semiconductor vertical-cavity surface-emitting laser (VCSEL) transmitters were chosen for wavelength multiplexing owing to their high transmission speeds, power efficiency, and scalability.
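The role of the lasing threshold as an activation function can be pictured with a toy model: below threshold the laser emits essentially nothing, and above threshold its output grows roughly linearly with drive, mimicking a ReLU. The sketch below is purely illustrative; the threshold and slope values are placeholder assumptions, not device parameters from the paper.

```python
import numpy as np

def vcsel_relu(drive_current, threshold=1.0, slope=1.0):
    """Toy model of a laser-threshold nonlinearity acting as an inline ReLU.
    Threshold and slope are illustrative placeholders, not measured values."""
    return slope * np.maximum(0.0, drive_current - threshold)

currents = np.linspace(0.0, 2.0, 5)
print(vcsel_relu(currents))   # [0.  0.  0.  0.5 1. ]
```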

A TFLN modulator platform was developed that simultaneously performed weight streaming and time-wise dot products across several wavelengths. TFLN photonics offers a realistic route to optical transmitters that overcomes existing limitations of silicon photonics, combining scalable fabrication with superior Pockels properties for next-generation integrated optoelectronic circuits.

The dual-port TFLN modulator design, paired with differential detection to represent the positive and negative weight values essential to AI computing, underpins the parallel processing scheme. Each TFLN modulator in HITOP was biased at quadrature, with equal power on the two output arms so that the differential detection cancels. Data modulation then shifted optical power between the arms, creating positive and negative photocurrents proportional to the encoding voltages.
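A rough picture of this signed encoding, in assumed normalized units rather than the paper's actual calibration: a weight value steers the split of a fixed optical power between the two arms, and the balanced detector reports the photocurrent difference, which is proportional to that weight.

```python
import numpy as np

def differential_readout(w, total_power=1.0, responsivity=1.0):
    """Toy model of dual-port modulation with balanced detection.
    w in [-1, 1] sets how the optical power is split between the two arms
    (quadrature bias gives an equal split at w = 0). Units are normalized
    placeholders, not device values from the paper."""
    p_plus = 0.5 * (1.0 + w) * total_power    # power on the "positive" arm
    p_minus = 0.5 * (1.0 - w) * total_power   # power on the "negative" arm
    return responsivity * (p_plus - p_minus)  # signed photocurrent, proportional to w

weights = np.array([-1.0, -0.25, 0.0, 0.5, 1.0])
print(differential_readout(weights))          # [-1.   -0.25  0.    0.5   1.  ]
```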

HITOP’s high scalability was enabled by its low energy consumption and system simplicity. HITOP computed O(N²) multiply-accumulate (MAC) operations every clock cycle but read out results only after integrating the MAC products over K (≈1000) time steps. This approach significantly reduced both the electronics power required for high-speed readout and the optical power needed to reach the target computing precision.
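The benefit of time integration can be seen in a brief sketch (the stream length K and signal values are assumptions for illustration): per-cycle products accumulate on the receiver, so the electronics digitize one value per dot product rather than one value per clock cycle.

```python
import numpy as np

# Sketch of time-integrating readout (illustrative; K and signals are assumptions).
K = 1000
rng = np.random.default_rng(1)
x = rng.standard_normal(K)   # input symbol stream over K clock cycles
w = rng.standard_normal(K)   # weight symbol stream over K clock cycles

# The receiver integrates the per-cycle products and is read out once,
# so the readout electronics run K times slower than the optical clock.
integrated = 0.0
for t in range(K):
    integrated += w[t] * x[t]

assert np.isclose(integrated, np.dot(w, x))   # one readout = one length-K dot product
```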

Significance of the Study

HITOP established neural connectivity using wavelength-division multiplexing and spatial beam routing, enabling high-density on-chip integration. Weight data were encoded with a 0.6 V peak-to-peak drive (~90 fJ per symbol); this low-voltage operation kept the modulators in a quasilinear region, where data-encoding errors below 0.4% were achieved, corresponding to about 8 bits of precision.
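The link between a 0.4% encoding error and roughly 8 bits of precision is a quick calculation, since an 8-bit code resolves one part in 2^8 = 256 (about 0.39%); a minimal check:

```python
import math

relative_error = 0.004                      # <0.4% encoding error reported
effective_bits = math.log2(1.0 / relative_error)
print(round(effective_bits, 2))             # ~7.97, i.e., about 8 bits
```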

HITOP differs from state-of-the-art (SOTA) optical computing systems in several respects: it achieves O(N²) throughput with only O(N) modulators, and it builds on high-speed, energy-efficient photonic material platforms, pairing scalable, compact lasers with broadband TFLN photonics for electro-optic conversion at tens of fJ per encoded symbol.

Additional advantages include high model scalability through temporal data mapping, which lets each device process ten billion parameters per second, and time-integrating receivers that support low-optical-power operation. Latency is further reduced by exploiting the lasing threshold as an inline optoelectronic analog nonlinearity. System scalability was verified using machine learning (ML) models with 405,000 parameters.

These advantages are crucial for enabling efficient, scalable AI computing. Evaluated on models with nearly half a million parameters, the system demonstrated a full-system energy efficiency of 260 trillion operations per second per watt, more than a 100-fold improvement over state-of-the-art digital systems such as the NVIDIA H100.

To summarize, this work unlocks the potential of light for low-energy AI accelerators by combining energy-efficient processing, programmability, and high clock rates, supporting diverse applications such as large AI model training and real-time decision-making in edge deployments.


Source:

Journal Reference

Ou, S. et al. (2025). Hypermultiplexed integrated photonics–based optical tensor processor. Science Advances, 11(23). DOI: 10.1126/sciadv.adu0228, https://www.science.org/doi/full/10.1126/sciadv.adu0228

Written by

Samudrapom Dam

Samudrapom Dam is a freelance scientific and business writer based in Kolkata, India. He has been writing articles related to business and scientific topics for more than one and a half years. He has extensive experience in writing about advanced technologies, information technology, machinery, metals and metal products, clean technologies, finance and banking, automotive, household products, and the aerospace industry. He is passionate about the latest developments in advanced technologies, the ways these developments can be implemented in a real-world situation, and how these developments can positively impact common people.

