Ethernet congestion control


Photo: Mitch Gusat

Ethernet congestion control - overview

Commercial data centers are under great pressure to reduce cost and power consumption. In networking, the current focus is on converging Fibre Channel (FC) storage networks with Ethernet communication networks. Convergence reduces the number of cables, switches, and network interface cards (NICs), as well as the cost of managing two separate networks. A new Ethernet standard for speeds of 10 Gb/s and above, called Data Center Bridging (DCB), is being developed to address the combined requirements of the two older network technologies. Flow control, multiple priorities, and transmission selection are all part of DCB.

Another important part is congestion control (CC), which prevents the catastrophic drop in aggregate performance that lossless networks suffer when congestion occurs. These mechanisms allow network utilization to be increased while maintaining quality of service.
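To make the CC idea concrete, the mechanism standardized in IEEE 802.1Qau, Quantized Congestion Notification (QCN), has a switch queue (the congestion point) sample its occupancy and compute a feedback value from the queue's offset from a setpoint and its recent growth. The sketch below illustrates that feedback computation; the constants `Q_EQ` and `W` and all function names are illustrative assumptions, not values taken from this article.

```python
# Illustrative sketch of QCN-style congestion feedback (IEEE 802.1Qau).
# Q_EQ and W are assumed example parameters, not standard-mandated values.
Q_EQ = 26_000   # desired equilibrium queue length in bytes (assumption)
W = 2.0         # weight on the queue growth term (assumption)

def qcn_feedback(q_len: int, q_old: int) -> float:
    """Feedback Fb sampled at a congestion point (switch queue).

    Fb = -(Q_off + W * Q_delta), where
      Q_off   = q_len - Q_EQ   (offset from the equilibrium setpoint)
      Q_delta = q_len - q_old  (queue growth since the last sample)

    A negative Fb indicates congestion: it is quantized and sent to the
    traffic source, which reduces its rate. Fb >= 0 means no congestion,
    and no notification message is generated.
    """
    q_off = q_len - Q_EQ
    q_delta = q_len - q_old
    return -(q_off + W * q_delta)

# Queue above the setpoint and growing -> strongly negative feedback.
print(qcn_feedback(30_000, 28_000))  # -(4000 + 2.0 * 2000) = -8000.0
```

Because the feedback combines both absolute queue excess and queue velocity, a source is throttled harder when the queue is simultaneously long and growing, which is what lets the network avoid the saturation-tree collapse described above.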

Our team has contributed benchmarks to validate the mechanisms evaluated in the IEEE 802.1Qau task group and has significantly shaped the resulting CC mechanism, which is expected to be ratified at the end of 2009 [5]. Currently, we are validating the performance of solutions from IBM's preferred vendors and helping them incorporate competitive CC functionality into their chips.