As computing resources and data storage move to the cloud, the demand on data centers to provide appropriate levels of data transfer and storage capacity has increased dramatically over the last decade.[1]
Stored data must be accessed and transferred continuously, both to and from these data centers and within the data centers themselves. This places a rapidly growing load on interconnects, from backplane to backplane, across the data servers in a data center. A single sizeable data center may need hundreds of thousands of these interconnects.
Key factors driving higher-capacity data center interconnect bandwidth include:
- Gigabit Ethernet development: The evolution of 10 Gigabit Ethernet (GE), 25 GE and 40 GE network adapters.
- Constant data mobility and availability: The distribution of virtual computing and/or storage resources across numerous physical devices.
- Cloud IT: Now, just one request can prompt several data exchanges between servers in a single data center, as well as between servers housed in other data centers.
- New storage technology: Solid-state drives (SSD), flash memory and software‐defined storage all boost the appeal of cloud storage.
- Dynamic resource allocation: Dynamic allocation of storage, server and network resources allows for improved resource sharing.
Interconnects in Action
The diagram below (Figure 1) illustrates the typical Clos topology found in many large data centers.[1] Here, a huge number of interconnects is required to maintain efficient communication among the thousands of servers within an individual data center and between data centers on the same campus.
As data rates at the backplane have grown beyond 10 Gb/s to 25 Gb/s and up to 800 Gb/s, the copper connections customarily used to link backplanes to top of rack (TOR) switches, and TOR switches to leaf switches, have been replaced with optical interconnects. This has been essential in accommodating the higher data transfer rates required.[2]
Figure 1. Interconnect Clos topology – This provides a more direct interconnection between top of rack (TOR) switches and all other servers; however, it also requires a large number of interconnects. Every leaf switch in the leaf-spine architecture connects to every spine switch in the network fabric, and the spine switches connect to the leaf switches just as the leaf switches connect to the TOR switches.
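To illustrate how quickly the interconnect count grows in such a fabric, the link total of an idealized two-tier leaf-spine topology can be estimated. The port counts below are hypothetical, chosen only for illustration, not taken from any specific data center design.

```python
def leaf_spine_links(num_leaves, num_spines, tors_per_leaf):
    """Count point-to-point links in an idealized two-tier leaf-spine fabric.

    Every leaf switch connects to every spine switch (a full mesh
    between the two tiers), and each TOR switch uplinks to one leaf.
    """
    leaf_to_spine = num_leaves * num_spines   # full mesh between tiers
    tor_to_leaf = num_leaves * tors_per_leaf  # TOR uplinks
    return leaf_to_spine + tor_to_leaf

# Hypothetical mid-size fabric: 64 leaves, 16 spines, 32 TORs per leaf
print(leaf_spine_links(64, 16, 32))  # -> 3072 links
```

Even this modest sketch yields thousands of links; scaling to the hundreds of thousands of servers in a large data center makes clear why interconnect cost, power and density dominate the design.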
This has driven the replacement of more copper connections with optical fiber, met through the use of more efficient optoelectronic transceiver components that are compact, cost-effective and energy-efficient.
This migration also drives the need for new technology and designs. Traditional transceiver components used at these data rates in long-haul systems are not always suitable for the data center: both the active and passive components are configured for longer-distance interconnects (which raises costs), consume more power, and typically have a larger form factor.
As optical data center interconnects approach data transfer rates similar to those of long-haul fiber connections, optical interconnect bandwidth enhancement using dense wavelength division multiplexing (DWDM), single-mode fiber (SMF), multimode fiber (MMF), multiple spatial modes, coherent digital optical transmission, increased data rates and similar technologies is expected to become commonplace.
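As one concrete example of how DWDM packs channels onto a fiber, the ITU-T G.694.1 frequency grid defines channel centers as offsets from a 193.1 THz anchor (100 GHz spacing in the sketch below; the grid also defines finer spacings). This is a generic illustration of the grid arithmetic, not a description of any particular transceiver.

```python
C = 299_792_458  # speed of light in vacuum, m/s

def dwdm_channel(n, spacing_ghz=100, anchor_thz=193.1):
    """Center frequency (THz) and wavelength (nm) of DWDM channel n
    on the ITU-T G.694.1 grid: f = 193.1 THz + n * spacing."""
    f_thz = anchor_thz + n * spacing_ghz / 1000.0
    wavelength_nm = C / (f_thz * 1e12) * 1e9
    return f_thz, wavelength_nm

for n in (-1, 0, 1):
    f, lam = dwdm_channel(n)
    print(f"n={n:+d}: {f:.2f} THz  {lam:.2f} nm")
# n=0 lands at 193.10 THz, roughly 1552.52 nm, in the C-band
```

The sub-nanometer channel spacing that falls out of this arithmetic is what makes the narrow, steep-edged thin film filters discussed below necessary.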
Energy consumption is a key consideration for all these technologies, and implementing the most appropriate optical interconnect can help minimize both power draw and heat dissipation.[3] These considerations can affect the optical design and the optical filters required.
Optical Filters in the Data Center
Optical transceivers installed at the ends of the fiber interconnect tend to use optical filters to manage the various wavelength channels in CWDM, DWDM and other WDM configurations that employ multiple wavelengths.
Figure 2. Some of Iridian's thin film optical filters for telecom/data center applications
Optical filters employed within these shorter interconnect applications use the same base filter coating technology as filters used in metro or long-haul applications. However, the filter size, optical design and thickness are modified to accommodate the unique requirements of these exceptionally compact products.
Ensuring sufficient optical performance while aiming to incur the lowest cost possible is essential; thankfully, a series of filter solutions separate from conventional telecom offerings is available to address these needs.
The use of custom filter solutions for interconnect applications is standard practice, and filter manufacturers are continually working to meet these needs. Filters used within these products include the usual single-wavelength bandpass and edge-pass filters found in single-channel WDM systems, such as the CWDM bandpass filter (Figure 3) and the DWDM edge filter (Figure 4). Etalons and similar optical filters may also be found in the integrable tunable laser assembly (ITLA) sources often used within these systems.
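A CWDM bandpass filter's job can be sketched numerically. The ITU-T G.694.2 CWDM grid defines 18 channels on 20 nm centers from 1271 nm to 1611 nm; the ±6.5 nm half-passband used below is a typical figure for illustration, and real filter specifications vary by product.

```python
# ITU-T G.694.2 CWDM grid: 18 channel centers, 20 nm apart
CWDM_CENTERS_NM = list(range(1271, 1612, 20))

def cwdm_channel_for(wavelength_nm, half_passband_nm=6.5):
    """Return the CWDM channel center whose (assumed +/-6.5 nm) passband
    contains the given wavelength, or None if it falls between channels."""
    for center in CWDM_CENTERS_NM:
        if abs(wavelength_nm - center) <= half_passband_nm:
            return center
    return None

print(cwdm_channel_for(1550.3))  # -> 1551 (falls in the 1551 nm passband)
print(cwdm_channel_for(1540.0))  # -> None (between the 1531 and 1551 channels)
```

The wide 20 nm channel spacing is what lets CWDM use uncooled lasers and relatively relaxed filter tolerances, in contrast to the much tighter DWDM grid.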
Figure 3. Typical CWDM bandpass filter – www.iridian.ca
Figure 4. Typical DWDM edge pass filter – www.iridian.ca
The increased prevalence of data centers used for distributed computing and cloud storage is the primary driver of the need for ever-higher data transmission rates within the data center.
In turn, this is expanding both the role and capability of optical interconnects, resulting in a large increase in demand for the more specialized optical filters and optoelectronics that are essential within those data centers. The future is bright for those organizations working to improve, enable and develop the cloud.
References and Further Reading
- [1] Zhou, X., Optical Fiber Technology (2017). http://dx.doi.org/10.1016/j.yofte.2017.10.002
- [2] Xie, C., 2017 IEEE Optical Interconnects Conference (OI), pp. 37–38.
- [3] Kahn, J. M. and Miller, D. A. B., Nature Photonics, Vol. 11, January 2017. www.nature.com/naturephotonics
This information has been sourced, reviewed and adapted from materials provided by Iridian Spectral Technologies.