Awesome, What Happened to Networking in 2011?


After eight or nine years of stagnation in networking innovation, the number of new developments that started in 2010 and exploded in 2011 is astounding. Speeds and feeds keep increasing, but the exciting work in 2011 was in new technologies supporting initiatives like cloud computing, storage and data convergence, and the migration to IPv6.


Here are the highlights.

Multipath Ethernet was all the rage in 2011. Protocols like Multichassis Link Aggregation (MLAG), Transparent Interconnection of Lots of Links (TRILL), Shortest Path Bridging (SPB) and various proprietary protocols all aim to solve one of the thorniest issues in networking: getting rid of spanning tree and making use of all the interconnects between switches. The problem is that none of the multipath Ethernet product suites are standards-compatible. Part of the issue is that TRILL and SPB still aren't fully ratified, so there isn't a standard to conform to. But the other part is that early implementations of the current protocol drafts have strayed far from what will likely be the final version. Brocade's VCS uses TRILL framing but not IS-IS, the protocol TRILL switches use to form a coherent view of the network. Cisco's FabricPath has taken TRILL and "enhanced" it to work better. Both Cisco and Brocade claim they will support standard TRILL after it is ratified.

Of course, the question has to be asked: Is MLAG good enough? Unless you have an Internet-scale data center with tens of thousands of servers, you probably don't have the port count, port density or strict SLAs that would require the partial or full mesh network that a TRILL-based design can provide. If all you need is to reduce oversubscription between the EoR/ToR switches and the core, then MLAG may be a workable choice. HP thinks that eschewing both TRILL and SPB in favor of MLAG is the way to go.
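To see why MLAG alone can be enough, consider the oversubscription arithmetic. The sketch below uses made-up port counts and speeds (not figures from any vendor) to show how bundling uplinks to two core switches, rather than letting spanning tree block half of them, cuts the ratio:

```python
# Hypothetical example: oversubscription ratio for a ToR switch.
# All port counts and speeds are illustrative, not from the article.

def oversubscription(server_ports: int, port_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Ratio of server-facing bandwidth to usable uplink bandwidth."""
    return (server_ports * port_gbps) / (uplinks * uplink_gbps)

# A ToR with 48 x 10G server ports and 4 x 40G usable uplinks:
single = oversubscription(48, 10, 4, 40)      # 3.0, i.e. 3:1

# MLAG lets the ToR bundle uplinks to two core switches at once,
# doubling usable uplink capacity instead of blocking half with STP:
with_mlag = oversubscription(48, 10, 8, 40)   # 1.5, i.e. 1.5:1

print(f"{single}:1 without MLAG, {with_mlag}:1 with MLAG")
```

The same math explains the limit: MLAG improves the ratio between a ToR and the core, but it doesn't create the any-to-any mesh paths a TRILL- or SPB-style fabric offers.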

Juniper, for its part, went in a totally different direction with QFabric, taking the chassis concept and distributing its components: a stand-alone director that acts as the brains of the fabric, and ToR switches that connect to servers and home-run back to an interconnect chassis serving as the backplane. It's a bold move, and the proprietary approach is one we have been critical of.

The question of whether multipath Ethernet standards will ever be implemented and, more importantly, whether various vendor products will interoperate is cloudy at best. Perhaps standards don’t matter and vendor choice does, because in all likelihood, if you are going to buy into a vendor’s fabric, you’re going all in.

 

All-in with OpenFlow

Software Defined Networking (SDN), which lets applications and systems other than traditional network management tools manipulate the network, builds on multipath Ethernet, converged networking and orchestration, and has primarily been used to build private clouds in your own data center. The darling, of course, is OpenFlow, a protocol designed for controller-based flow management. The hyperbole around OpenFlow has been thick, with claims that it will commoditize switching, make networks faster and more reliable, and treat male pattern baldness. The first two claims are just outrageous.

There is value in OpenFlow, and the promise of a programmable network that is both dynamic and robust is powerful, but let's remember that OpenFlow made its commercial debut in 2011, with NEC and Fujitsu announcing switch platforms at Interop 2011 and Big Switch announcing a controller. The InteropNet Labs OpenFlow demonstration showed just the tip of the iceberg of what can be accomplished with OpenFlow-based networking, but we have yet to see anything unique or innovative. That's coming.
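The core idea of controller-based flow management can be sketched in a few lines. This is a toy model, not the actual OpenFlow wire protocol or any real controller's API: a switch matches packets against a priority-ordered flow table of match/action entries installed by a controller, and punts table misses back to the controller.

```python
# Toy model of controller-based flow management in the spirit of OpenFlow.
# Field names and classes are illustrative, not the OpenFlow specification.

from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    match: dict          # header fields to match, e.g. {"ip_dst": "10.0.0.5"}
    actions: list        # e.g. ["output:2"] to forward out port 2
    priority: int = 0

@dataclass
class Switch:
    table: list = field(default_factory=list)

    def handle(self, packet: dict) -> list:
        # Highest-priority matching entry wins; a table miss is punted
        # to the controller, which may then install a new flow entry.
        for entry in sorted(self.table, key=lambda e: -e.priority):
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return ["send_to_controller"]

sw = Switch()
sw.table.append(FlowEntry({"ip_dst": "10.0.0.5"}, ["output:2"], priority=10))
print(sw.handle({"ip_dst": "10.0.0.5"}))   # ['output:2']
print(sw.handle({"ip_dst": "10.0.0.9"}))   # ['send_to_controller']
```

The point of the model is the separation of concerns: forwarding decisions live in a table the switch consults at line rate, while the policy that populates that table lives in software a controller can change on the fly.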

What is promising is the vendor backing of the Open Networking Foundation, an industry consortium founded by some of the largest Internet companies, including Deutsche Telekom, Facebook, Google, Microsoft, Verizon and Yahoo, and which counts participants from every major networking vendor.

 

IPv6 Out With A Whimper

You’ve been warned. In February, the IANA handed out the last of its IPv4 address space to the Regional Internet Registries (RIRs). There are no more blocks to allocate, and the RIRs are parsimoniously doling out what remains. While the IPocalypse is not a cause for panic, you’d be remiss if you weren’t planning to migrate to IPv6 in the near future. There will be challenges, mostly in supporting existing IPv4 servers and devices that will never have an IPv6 stack, as well as supporting any Internet-facing services. We’ve put together a resource page, which we keep updated, to get you started.

What’s bigger news is that there is so little IPv6 adoption under way. It’s as if the lack of a hard deadline, like we had with Y2K, means that adoption can be pushed off indefinitely. The fact of the matter is that, despite products coming to the fore, moving to IPv6 presents some significant hurdles.

In 2010, the Interop conference announced it was giving back its IPv4 class A address space to IANA (potentially worth millions on the open market) and moved to a dual-stack IPv4/IPv6 network for the show. While it went OK, the InteropNet team learned some lessons. Everyone who deals with networks, from engineers and support staff to end users, has grown accustomed to reading off IP addresses. But as the InteropNet engineers found out, that is untenable in IPv6 networking, where the address strings are long.
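A quick comparison with Python's standard-library ipaddress module shows the scale of the problem; the addresses below are documentation examples, not real hosts:

```python
# Why reading addresses aloud stops scaling with IPv6: compare a typical
# IPv4 address with a fully expanded IPv6 address. Addresses are made up
# (RFC 5737 / RFC 3849 documentation ranges).

import ipaddress

v4 = ipaddress.ip_address("192.0.2.10")
v6 = ipaddress.ip_address("2001:db8:0:0:8a2e:370:7334:1")

print(str(v4), len(str(v4)))          # 10 characters to dictate
print(v6.exploded, len(v6.exploded))  # 39 characters, fully expanded
print(v6.compressed)                  # shorter, but still awkward to read aloud
```

Even the compressed form mixes hex digits and double-colon shorthand, which is why DNS names and copy-and-paste, rather than dictation, become the working habit on IPv6 networks.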

 

PS: Original resource from networkcomputing.com

 

 
