Light and Sound Waves Combined in Silicon Chips for Novel Signal Processing

The digital era is enabled by integrated circuits in silicon. Now, with the advent of photonics (components used for producing, guiding, and detecting light), the capabilities of electronic circuits have been extended even further.

Measured frequency response of a narrow radio-frequency filter, realized using light and sound waves in a silicon chip. Blue: Experimental results. Red: Designed response. (Image credit: Bar-Ilan University)

Together, photonics and electronics support entire systems for data communication and processing, all on a single chip. Yet some signal-processing tasks remain out of reach for optical and electrical signals, precisely because those signals travel so fast.

According to Professor Avi Zadok of Bar-Ilan University’s Faculty of Engineering and Institute for Nanotechnology and Advanced Materials, sometimes it is actually better to move slowly.

Important signal processing tasks, such as the precise selection of frequency channels, require that data is delayed over time scales of tens of nanoseconds. Given the speed of light, optical waves propagate over many meters within these timeframes. One cannot accommodate such path lengths in a silicon chip; it is unrealistic. In this race, fast doesn’t necessarily win.

Avi Zadok, Professor, Faculty of Engineering, Institute for Nanotechnology and Advanced Materials, Bar-Ilan University

Actually, the problem is a long-standing one. For six decades, analog electronic circuits have encountered similar problems in signal processing, and a suitable solution was found in acoustics: the target signal is transformed from the electrical domain into an acoustic wave. Compared with light, sound travels more slowly by a factor of about 100,000.

Acoustic waves can therefore achieve the required delays over tens of micrometers rather than meters, and path lengths of this scale are easily accommodated on a chip. After propagation, the delayed signal can be converted back to the electrical domain.

In the latest study, which was recently reported in the journal Nature Communications, Zadok and colleagues applied this working principle to silicon-photonic circuits.

“There are several difficulties with introducing acoustic waves to silicon chips,” stated doctoral student Dvir Munk from Bar-Ilan University, who also took part in the research. “The standard layer structure used for silicon photonics is called silicon-on-insulator. While this structure guides light very effectively, it cannot confine and guide sound waves. Instead, acoustic waves just leak away.”

Because of this complication, earlier works that integrated sound and light waves in silicon did not use the standard layer structure. Instead, additional non-standard materials had to be incorporated in a hybrid manner.

That first challenge can be overcome by using acoustic waves that propagate at the upper surface of the silicon chip. These surface acoustic waves do not leak down as quickly. Here, however, there is another issue: Generation of acoustic waves usually relies on piezo-electric crystals. These crystals expand when a voltage is applied to them. Unfortunately, this physical effect does not exist in silicon, and we much prefer to avoid introducing additional materials to the device.

Dvir Munk, Doctoral Student, Bar-Ilan University

Instead, Munk, fellow student Moshe Katzman, and colleagues relied on the illumination of metals. “Incoming light carries the signal of interest,” Katzman explained. “It irradiates a metal pattern on the chip. The metals expand and contract, straining the silicon surface below. With proper design, that strain can drive surface acoustic waves. In turn, the acoustic waves pass across standard optical waveguides in the same chip.”

Katzman continued, “Light in those waveguides is affected by the surface waves. In this way, the signal of interest is converted from one optical wave to another via acoustics, and a significant delay is accumulated over a very short distance.”

The concept integrates sound and light in standard silicon, without piezo-electric crystals or suspended membranes. Acoustic frequencies of up to 8 GHz are achieved, and the concept can be scaled up to 100 GHz.

Beyond silicon, the working principle is applicable to other substrates as well. Applications are also presented: the concept is used to realize narrowband filters of incoming radio-frequency signals, with the highly selective filters relying on delays as long as 40 ns.

“Rather than use five meters of waveguide, we achieve this delay within 150 microns,” stated Munk.
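The path-length figures quoted here can be checked with simple arithmetic. The sketch below assumes an optical group index of about 2.4 for a silicon waveguide and a surface-acoustic-wave speed of roughly 4,000 m/s; both are representative textbook values, not figures taken from the paper.

```python
# Back-of-the-envelope check of the delay-line arithmetic.
# Assumed values (typical, not from the paper):
#   n_g ~ 2.4     -- optical group index of a silicon waveguide
#   v_s ~ 4,000 m/s -- surface acoustic wave speed in silicon
c = 3.0e8        # speed of light in vacuum, m/s
n_g = 2.4        # assumed optical group index
v_s = 4.0e3      # assumed surface-wave speed, m/s
delay = 40e-9    # target delay: 40 ns

optical_path = (c / n_g) * delay   # waveguide length needed for light
acoustic_path = v_s * delay        # path length needed for sound

print(f"optical path:  {optical_path:.2f} m")          # 5.00 m
print(f"acoustic path: {acoustic_path * 1e6:.0f} um")  # 160 um
```

With these assumed values, the optical path comes out at 5 m and the acoustic path at about 160 µm, in line with the roughly 150 µm reported.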

Acoustics is a missing dimension in silicon chips: it can perform specific tasks that are difficult to do with electronics and optics alone. For the first time, we have added this dimension to the standard silicon-photonics platform. The concept combines the communication and bandwidth offered by light with the selective processing of sound waves.

Avi Zadok, Professor, Faculty of Engineering, Institute for Nanotechnology and Advanced Materials, Bar-Ilan University

Such devices could be used in upcoming cellular networks, broadly referred to as 5G. In these networks, digital electronics alone may not be sufficient to support the signal-processing needs; hybrid light-and-sound devices could help close that gap.

