AI is transforming sensors from passive data collectors into intelligent, adaptive systems, dramatically boosting their accuracy, sensitivity, and robustness across real-world environments.1
By tightly integrating machine learning with advanced sensors, industries from robotics and biomedical devices to industrial automation and infrastructure monitoring are unlocking powerful new capabilities and applications.1

Why Real-Time Vision Systems Need a Rethink
For more than three decades, CMOS image sensors have relied on a largely unchanged architecture in which pixels capture light, data are transmitted off-sensor, and physically separate processors perform computation. While effective for conventional imaging, this decoupled design is increasingly misaligned with the requirements of real-time intelligent vision at the edge.2
This misalignment is driven by two fundamental constraints. The power wall arises as higher resolutions, faster frame rates, and on-device intelligence dramatically increase the energy cost of moving raw visual data between sensor, memory, and processor.2
The memory wall further limits performance through latency and bandwidth bottlenecks in hierarchical memory systems, which are particularly restrictive for edge platforms operating under milliwatt-level power budgets and microsecond-scale latency requirements.2
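To make the power wall concrete, consider a rough back-of-the-envelope estimate. The sketch below is purely illustrative: the per-bit transfer energy and sensor parameters are assumed order-of-magnitude figures, not values from the cited work.

```python
# Rough estimate of the energy cost of streaming raw video off a
# conventional image sensor. All constants are illustrative
# order-of-magnitude assumptions, not measured values.

WIDTH, HEIGHT = 1920, 1080   # 1080p sensor
BITS_PER_PIXEL = 10          # typical raw ADC depth
FPS = 30                     # frame rate
ENERGY_PER_BIT_J = 10e-12    # assumed ~10 pJ to move one bit off-chip

bits_per_second = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS
transfer_power_w = bits_per_second * ENERGY_PER_BIT_J

print(f"Raw data rate:  {bits_per_second / 1e9:.2f} Gbit/s")  # ~0.62 Gbit/s
print(f"Transfer power: {transfer_power_w * 1e3:.1f} mW")     # ~6.2 mW
# Several milliwatts go to moving data alone -- a large share of a
# milliwatt-class edge power budget before any computation happens.
```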
Separating sensing, memory, and processing is inefficient because it treats vision as passive data acquisition rather than active perception. In contrast, biological vision systems perform substantial preprocessing at the retina, reducing redundancy before higher-level processing.2
Commercial demands in robotics, autonomous vehicles, and wearable devices intensify the need for change, as continuous visual perception under strict power and form-factor constraints makes off-sensor data transfer impractical.2
Concurrently, advances in neuromorphic and bio-inspired imaging, including event-based sensors and spiking neural networks, demonstrate that meaningful computation can occur directly at the sensor. These developments converge on in-sensor computing, where sensing and processing are co-designed to enable efficient, low-power intelligent vision.2
The Breakthrough: Optical Synapse from Hebei University
Recent work by Wang et al. from Hebei University marks an important advance toward integrated optical intelligence. The key innovation is a low-energy photoelectric memristor that combines optical sensing, memory, and neuromorphic processing within a single device, functioning as an artificial optical synapse.3
The device employs emerging low-dimensional semiconductors, including layered oxide and chalcogenide systems such as Bi2O2Se-based heterostructures. These materials offer high carrier mobility, strong light-matter interaction, and compatibility with thin-film fabrication. Optical stimuli directly modulate the conductance of the memristive element, removing the need for separate photodetectors and memory units.4
Functionally, the device reproduces essential retinal and synaptic behaviours. Light acts as both input and learning signal, enabling persistent or transient conductance changes similar to synaptic plasticity. Temporal integration of optical pulses supports adaptive responses such as contrast enhancement and noise suppression, operations typically handled by digital processors.4
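A toy model helps to make this behaviour concrete. The sketch below simulates a leaky-integrator conductance that strengthens with each light pulse and relaxes between pulses; the equations and constants are illustrative stand-ins, not the device physics reported by Wang et al.

```python
import numpy as np

def simulate_synapse(pulse_times_s, duration_s=1.0, dt=1e-3,
                     gain=0.2, tau_s=0.15):
    """Leaky-integrator toy model of an optical synapse.

    Each light pulse increments a normalized conductance (potentiation);
    between pulses the conductance decays toward baseline with time
    constant tau_s (forgetting). All constants are illustrative.
    """
    steps = int(duration_s / dt)
    g = np.zeros(steps)
    pulse_steps = {int(t / dt) for t in pulse_times_s}
    for k in range(1, steps):
        g[k] = g[k - 1] * (1 - dt / tau_s)   # exponential decay
        if k in pulse_steps:
            g[k] += gain * (1 - g[k])        # saturating potentiation
    return g

# Closely spaced pulses integrate into a larger response than the same
# number of widely spaced pulses -- the temporal integration described above.
fast = simulate_synapse([0.10, 0.12, 0.14, 0.16, 0.18])
slow = simulate_synapse([0.10, 0.28, 0.46, 0.64, 0.82])
print(f"Peak (closely spaced pulses): {fast.max():.2f}")
print(f"Peak (widely spaced pulses):  {slow.max():.2f}")
```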
Reported performance is well suited to edge AI applications, with synaptic energy consumption at picojoule to femtojoule levels, fast switching speeds compatible with real-time vision, and nonvolatile memory that supports stateful operation without continuous power.4
The research team, led by Professor Yan, has emphasized scalability, array level integration, and compatibility with existing semiconductor manufacturing, positioning optical synapses as viable components for future intelligent sensor platforms rather than laboratory demonstrations.3
Commercial Significance and Industry Relevance
The commercial implications of on-sensor optical AI are substantial. One immediate possibility is the partial replacement of conventional CMOS image sensors in embedded vision systems. Rather than outputting raw pixel intensities, future sensors could directly emit feature maps, event streams, or task-specific representations, dramatically reducing downstream computation.5
Such capabilities are especially compelling for compact, autonomous devices that must operate continuously on limited energy budgets. Drones, micro-robots, and medical implants all stand to benefit from sensors that “understand” visual input at the point of capture. In these contexts, even modest reductions in data movement translate into meaningful gains in operational lifetime and responsiveness.5
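To put rough numbers on that reduction, the hypothetical comparison below contrasts a raw 1080p frame with a compact on-sensor feature map of assumed dimensions; neither figure describes any specific device.

```python
# Hypothetical per-frame output size: raw pixels versus a compact
# on-sensor feature map. All dimensions are illustrative assumptions.

raw_bits = 1920 * 1080 * 10        # 1080p frame at 10 bits per pixel
feature_bits = 32 * 32 * 64 * 8    # assumed 32x32x64 feature map, 8 bits each

print(f"Raw frame:   {raw_bits / 8 / 1024:.0f} KiB")      # ~2531 KiB
print(f"Feature map: {feature_bits / 8 / 1024:.0f} KiB")  # 64 KiB
print(f"Reduction:   ~{raw_bits / feature_bits:.0f}x less data to move")
```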
For manufacturers developing edge-AI or neuromorphic chips, optical synapses offer a complementary pathway to existing electronic approaches. IBM’s TrueNorth and Intel’s Loihi, for example, demonstrated the efficiency of spiking neural architectures, yet they still rely on conventional sensors and electronic interconnects.5-6
Optical AI hardware, pursued by companies such as SynSense and Prophesee, pushes intelligence closer to the physical signal itself. Integrating optoelectronic synapses at the sensor level could further collapse the boundary between perception and cognition.5
Synergies also emerge with application domains such as smart glasses, augmented and virtual reality, surveillance, and medical imaging. In AR/VR, low-latency visual processing is critical for user comfort, while power consumption constrains wearable form factors. In medical imaging, on-sensor intelligence could enable adaptive acquisition, selectively emphasizing diagnostically relevant features while minimizing radiation dose or illumination intensity.5-6
Methods Summary
At a high level, the optical synapse device is fabricated by stacking ultrathin material layers onto a substrate using controlled deposition techniques. Electrodes define a channel whose electrical conductance can be modulated. When light illuminates the active region, it generates charge carriers that alter the internal state of the device, much like adjusting the strength of a biological synapse.7
Testing typically involves exposing the device to sequences of light pulses with varying intensity and duration. Each pulse produces a measurable change in electrical response, which is recorded to evaluate memory retention, sensitivity, and speed. By carefully designing the pulse patterns, researchers can demonstrate learning-like behaviours, such as gradual strengthening under repeated stimulation or fading responses when stimulation ceases.7
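In code, such a characterization might resemble the toy experiment below, which reuses the leaky-integrator idea from the earlier sketch: repeated pulses gradually strengthen the response, and a delayed readout probes how much of that change is retained. This is purely illustrative, not the authors' measurement protocol.

```python
import math

TAU_S = 0.15   # assumed decay time constant (s), illustrative
GAIN = 0.1     # assumed potentiation per pulse, illustrative

def run_pulse_train(n_pulses, interval_s, readout_delay_s):
    """Apply a train of identical light pulses, then read out the
    retained conductance after a delay. Toy model only."""
    g = 0.0
    for _ in range(n_pulses):
        g += GAIN * (1 - g)                        # potentiation per pulse
        g *= math.exp(-interval_s / TAU_S)         # decay until next pulse
    return g * math.exp(-readout_delay_s / TAU_S)  # decay until readout

# Gradual strengthening under repeated stimulation...
for n in (1, 5, 20):
    print(f"{n:2d} pulses -> retained response {run_pulse_train(n, 0.02, 0.1):.3f}")

# ...and a fading response once stimulation ceases.
for delay in (0.1, 0.5, 1.0):
    print(f"readout {delay:.1f} s after 10 pulses -> {run_pulse_train(10, 0.02, delay):.3f}")
```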
An intuitive analogy is the human retina: just as retinal neurons preprocess light before sending compressed information to the brain, the optical synapse preprocesses visual input before passing it to higher-level circuits. The result is not a photograph, but a representation already shaped by relevance and context.7
Future Directions and Industry Outlook
Despite its promise, on-sensor optical AI faces several challenges. Scaling from single devices to large, uniform arrays remains nontrivial, particularly when variability in material properties can affect learning behaviour. Integration with existing readout circuitry and packaging technologies must also be addressed to ensure reliability and durability under real-world operating conditions.2
Nonetheless, progress in neuromorphic optics and integrated photonics suggests that these hurdles are surmountable. Advances in wafer-scale growth of low-dimensional materials, alongside hybrid electronic–photonic integration, are steadily narrowing the gap between laboratory demonstrations and manufacturable systems.2
The broader implication is a redefinition of what a sensor is. Over the next decade, sensors may evolve from passive data sources into active computational elements that perceive, decide, and adapt. On-sensor optical AI embodies this shift, pointing toward a future in which intelligence begins not in the processor, but at the very moment light is captured.
References and Further Reading
1. Cao, L.; Abedin, S.; Cui, G.; Wang, X. Artificial Intelligence and Machine Learning in Optical Fiber Sensors: A Review. Sensors 2025, 25, 7442.
2. Chen, L.; Xia, C.; Zhao, Z.; Fu, H.; Chen, Y. AI-Driven Sensing Technology. Sensors 2024, 24, 2958.
3. Wang, K.; Ren, S.; Jia, Y.; Yan, X. An Ultrasensitive Biomimetic Optic Afferent Nervous System with Circadian Learnability. Advanced Science 2024, 11, 2309489.
4. Mallick, D.; Ghosh, S.; Chen, A.-H.; Liao, J.; Yoo, J.; Lu, Q.; Randolph, S. J.; Retterer, S. T.; Eres, G.; Chen, Y. P. Next-Generation Electronics by Co-Design with Chalcogenide Materials. npj Spintronics 2025, 3, 41.
5. Baek, Y.; Bae, B.; Shin, H.; Sonnadara, C.; Cho, H.; Lin, C.-Y.; Mu, Y.; Shen, C.; Shah, S.; Wang, G. Edge Intelligence through In-Sensor and Near-Sensor Computing for the Artificial Intelligence of Things. npj Unconventional Computing 2025, 2, 25.
6. Golovastikov, N. V.; Kazanskiy, N. L.; Khonina, S. N. Optical Fiber-Based Structural Health Monitoring: Advancements, Applications, and Integration with Artificial Intelligence for Civil and Urban Infrastructure. Photonics 2025, 615.
7. Sun, X.; Hu, Y.; Jiang, C.; Yang, S. Optoelectronic Synapses Based on Two-Dimensional Transition Metal Dichalcogenides for Neuromorphic Applications. InfoScience 2025, e70005.