
The switch to 400 GbE may be closer than you think

Large hyperscale cloud providers initially championed 400 Gigabit Ethernet because of their endless thirst for networking bandwidth. Like so many other technologies that start at the highest end with the most demanding customers, the technology will eventually find its way into regular enterprise data centers.
Most enterprise networks primarily use 100 GbE for their backbone and leaf-spine infrastructure, with 10 GbE and 25 GbE switches further down the stack. Because these are production environments, customers are hesitant to change anything, either because the equipment has not fully depreciated or because applications are not reaching bandwidth limits. On the surface, if customers are not topping out their 100 GbE infrastructures today, there would not be much demand for a fourfold increase in bandwidth.
But other dynamics point to the need for the greater network bandwidth that 400 GbE can provide, and they may push the technology toward enterprise data centers sooner than expected. Businesses won't start ripping out their existing core infrastructure and rewiring their data centers anytime soon, though. More likely, we'll see 400 GbE phased into the leaf and spine, where greater bandwidth density can help relieve crowded aggregation networks. A 400 GbE port can be split via a multiplexer into smaller increments, the most popular options being 2 x 200 Gb, 4 x 100 Gb or 8 x 50 Gb.
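To see why all three breakout options are equivalent from a capacity standpoint, here is a quick Python sketch; the only inputs are the 400 Gb port speed and the split options named above, so treat it as an illustration rather than anything vendor-specific.

# Breakout options for a single 400 GbE port: each entry is
# (number of sub-links, speed per sub-link in Gb).
BREAKOUT_OPTIONS = [(2, 200), (4, 100), (8, 50)]
PORT_SPEED_GB = 400

for links, speed in BREAKOUT_OPTIONS:
    aggregate = links * speed
    # Every split consumes the port's full 400 Gb of capacity.
    assert aggregate == PORT_SPEED_GB
    print(f"{links} x {speed} Gb = {aggregate} Gb")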
At the aggregation layer, as these new higher-speed connections increase the bandwidth per port, we will see a reduction in port counts and simpler cabling requirements. As an example, one of the most common leaf-spine switches today is the Cisco Nexus 9300 Series, which features six 100 Gb uplinks for 600 Gb of aggregate upstream bandwidth. When setting up two top-of-rack switches for redundancy, a rack will require 12 upstream 100 Gb links -- or only three 400 Gb links.
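Putting rough numbers on that consolidation, the sketch below derives the link counts from the figures above; the function name and layout are purely illustrative.

import math

# Figures from the example above: two top-of-rack switches,
# each with 600 Gb (6 x 100 Gb) of upstream bandwidth.
UPSTREAM_GB_PER_SWITCH = 600
SWITCHES_PER_RACK = 2
total_gb = UPSTREAM_GB_PER_SWITCH * SWITCHES_PER_RACK  # 1,200 Gb

def links_needed(total_gb, link_speed_gb):
    # Physical links required to carry the aggregate bandwidth.
    return math.ceil(total_gb / link_speed_gb)

links_100g = links_needed(total_gb, 100)  # 12 links
links_400g = links_needed(total_gb, 400)  # 3 links
print(f"100 GbE: {links_100g} links; 400 GbE: {links_400g} links "
      f"({links_100g // links_400g}-to-1 reduction)")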
Though the top of rack at the leaf level may stay the same, a 4-to-1 reduction in connections to the spine will reduce port counts and provide more room for future expansion. Most importantly, this will bring new breathing room to infrastructures that are starting to feel the pinch, whether from space or port availability. While port density at the aggregation level may be an important driver for 400 GbE, there is another area where density matters as well: transceivers. The Quad Small Form Factor Pluggable (QSFP) Ethernet transceivers that customers use today -- like QSFP, QSFP28 or QSFP56 -- will not support 400 GbE bandwidth. Reaching 400 GbE requires denser form factors such as QSFP-DD (Double Density), which doubles the electrical lanes from four to eight while keeping a QSFP-compatible footprint.
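The lane math below shows why the older form factors top out short of 400 GbE; the lane counts and per-lane rates here are the standard figures for each module type, not values taken from any particular vendor's data sheet.

# Electrical lane configurations for common QSFP-family modules:
# (lanes, Gb per lane). Standard figures for each form factor.
MODULES = {
    "QSFP+":   (4, 10),   # 4 x 10 Gb = 40 GbE
    "QSFP28":  (4, 25),   # 4 x 25 Gb = 100 GbE
    "QSFP56":  (4, 50),   # 4 x 50 Gb = 200 GbE
    "QSFP-DD": (8, 50),   # 8 x 50 Gb = 400 GbE (double density)
}

for name, (lanes, per_lane) in MODULES.items():
    total = lanes * per_lane
    verdict = "yes" if total >= 400 else "no"
    print(f"{name:8} {lanes} x {per_lane} Gb = {total:3} Gb; "
          f"supports 400 GbE: {verdict}")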
