Economics of Data Center Optics
The use of optics in data centers has been slowly increasing over the last ten years. While Fibre Channel over Ethernet holds the promise of one converged data center network, Discerning Analytics, LLC (DA) is skeptical that FCoE will ever carry the entire network. What makes the most sense economically in the near term is what has been adopted up to now: separate storage, LAN, and clustering networks.
First and foremost will be the surge in demand expected in 2016 for the new 100G QSFP28 format. This module, in conjunction with new 100GbE switch silicon from several vendors, will trigger high-volume deployment of 100GbE in hyper-scale computing environments.
This is truly a historic moment, as it represents the beginning of the transition from 10G technology, which served customers for the past decade, to faster 25G lane speeds. During 10G's reign, component vendors delivered tens of millions of modules while prices dropped roughly 20% annually. 10G technology was later bootstrapped to serve the demand for 40GbE, but that was an evolutionary rather than revolutionary step.
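To put that in perspective, here is a quick back-of-the-envelope sketch of what a steady 20% annual price decline compounds to over a decade (the rate and horizon come from the text; the starting price is simply normalized to 1.0):

```python
# Back-of-the-envelope: cumulative effect of a steady 20% annual price decline.
# The starting price is normalized to 1.0; the 20% rate and ten-year horizon
# are taken from the paragraph above, not from any specific price data.
price = 1.0
for year in range(1, 11):
    price *= 0.8  # a 20% year-over-year drop
    print(f"Year {year}: {price:.2f} of the original price")
# After ten years the module sells for roughly 11% of its launch price.
```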
All the world's data – pictures, video, sound, and text – has to traverse complex networks of optical fiber that crisscross cities, regions, and countries. To better handle this glut of information, a research team from NOKIA Bell Labs has developed a new device that could become a crucial component of flexible, optimized networks.
When it comes to optically interconnected network routers, do you prefer your boxes white or black? We’re discussing whether you believe in “bare metal” hardware that requires you to install your own operating system (white box), or heavily pre-integrated hardware and software solutions (black box).
Data center customers are demanding a steep downward trajectory in the cost of 100G pluggable transceivers. Existing 100G module MSAs (Multi-Source Agreements) such as PSM4 and CWDM4 have limited capacity for cost reduction due to the cost of the fiber (PSM4) and the large number of components (both PSM4 & CWDM4). Similarly, two-lambda PAM4 (2x50G) trades off some optical components for a more expensive DSP and will struggle to improve upon the cost of CWDM4.
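The structural difference behind those cost ceilings can be illustrated with a toy link-cost model. All dollar figures below are hypothetical placeholders, not market data; the point is only that PSM4 link cost grows with reach because it needs eight parallel single-mode fibers, while CWDM4 pays a fixed premium for four lasers plus a mux/demux but runs over a single duplex fiber:

```python
# Illustrative only: a toy cost model for comparing 100G module options.
# Module and fiber prices here are made-up placeholders, chosen solely to
# show how the two cost structures scale differently with reach.

def psm4_link_cost(reach_m, module_cost=100.0, fiber_cost_per_m=0.05, fibers=8):
    """PSM4: a module at each end plus parallel fiber whose cost scales with distance."""
    return 2 * module_cost + fibers * fiber_cost_per_m * reach_m

def cwdm4_link_cost(reach_m, module_cost=150.0, fiber_cost_per_m=0.05, fibers=2):
    """CWDM4: pricier modules (four lasers plus mux/demux) but only duplex fiber."""
    return 2 * module_cost + fibers * fiber_cost_per_m * reach_m

for reach in (100, 500, 2000):
    print(f"{reach} m: PSM4 ${psm4_link_cost(reach):.0f}, CWDM4 ${cwdm4_link_cost(reach):.0f}")
```

With these placeholder numbers PSM4 wins at short reach and CWDM4 wins at long reach, which mirrors why neither MSA has much room left for cost reduction: one is dominated by fiber, the other by component count.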
Data centers have become the largest target client sector for equipment manufacturers and component suppliers over the past ten years. Yet many of these vendors still do not understand that no two data centers are the same. And while data centers fall into a few broad categories, within each category networking needs can vary considerably. In fact, the network asset acquisition trends of some of the largest data centers have changed drastically over this period.
There is some underlying confusion in the market about transceiver warranties and what, if anything, can void them. As a provider of third-party transceivers focused on quality and value over brand name, we want to debunk the myths, bring clarity to the uncertainty, and ease any doubt in the minds of consumers so they can make the best choices for their business.
Data center interconnect (DCI) is a growing market for the entire optical value chain – from carriers to optical component suppliers. Standard OTN, DWDM, Carrier Ethernet, or legacy SONET/SDH alone cannot address the high-bandwidth, low-power, and high-density demands of DCI. As a result, many equipment makers have developed purpose-built solutions, some of which are described below.