WAN Optimization 101: Know Your Options
November 30, 2006
By Charlie Schluting
http://www.enterprisenetworkingplanet.com/netsp/article.php/3646606
WAN, or Wide Area Network, is a term used to describe most external network connectivity to a business. WAN optimization is a hot topic, and there are many vendors who'd like you to realize this. Short of buying a faster connection, there are many different approaches to optimizing your WAN connectivity.
The standard mention of "WAN" implies a high-cost, low-bandwidth (relative to your local network), high-latency connection to an ISP. These take the form of T1 or T3 circuits, or SONET-based connections such as OC-3s or OC-48s. "WAN" is almost as vague a term as "network." It doesn't really mean any one thing, but it is commonly used to describe non-local network connections.
The real problem with WAN connectivity is that business-critical applications are generally in direct competition with all other Internet traffic on your link. Often business applications are delayed because of unwanted traffic to a site. Spam, viruses, and even worker-driven Web traffic can tremendously hinder a business's ability to complete its mission.
The real culprit is TCP, because it doesn't care about much aside from getting data through reliably. All applications are treated equally unless some sort of engineering has been done to prevent this. Furthermore, TCP will happily use all available bandwidth, in a very bursty, inconsistent manner. TCP's congestion control mechanism is simply to start sending slowly and then increase the rate until loss occurs. When loss happens, the lost data must be retransmitted, creating even more congestion. If only there were a way to send high-bandwidth traffic to known endpoints intelligently, without relying on TCP's fickle congestion control.
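To see where the burstiness comes from, consider a toy sketch of TCP-style congestion control: slow start, additive increase, and a halving of the send rate on loss. This is an illustrative simulation only; the link capacity and loss model are made-up assumptions, not a real TCP implementation.

```python
# Toy simulation of TCP-style congestion control (AIMD). Purely illustrative;
# the "link capacity" and loss behavior are invented for the example.

LINK_CAPACITY = 100   # pretend the link carries 100 segments per round trip
cwnd = 1              # congestion window starts tiny (slow start)
ssthresh = 64

for rtt in range(1, 31):
    lost = cwnd > LINK_CAPACITY          # assume loss once we overrun the link
    if lost:
        ssthresh = max(cwnd // 2, 2)     # back off: halve the window
        cwnd = ssthresh
    elif cwnd < ssthresh:
        cwnd *= 2                        # slow start: exponential growth
    else:
        cwnd += 1                        # congestion avoidance: additive increase
    print(f"RTT {rtt:2d}: cwnd={cwnd:3d} {'(loss)' if lost else ''}")
```

The output is the familiar sawtooth: the sender repeatedly fills the link, backs off after loss, and climbs again, which is exactly the inconsistent behavior described above.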
Products do exist that allow site-to-site optimization by placing optimizers at the entrance to each network. Specified TCP connections can be terminated locally, and the WAN-facing leg then uses a proprietary protocol optimized for the link it runs over. Just taking TCP out of the picture in this situation can tremendously improve throughput. Some marketing material even claims 5,000 percent improvements, for what that's worth.
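The "terminate TCP locally" idea can be pictured as a pair of relays, one on each side of the WAN. The sketch below is a bare-bones illustration under assumed addresses and ports (LOCAL_ADDR, PEER_OPTIMIZER are invented): a client's TCP session ends at the local box, and the bytes are relayed onward separately. A real appliance would speak its own WAN-tuned protocol on that second leg rather than plain TCP.

```python
import socket
import threading

# Hypothetical addresses: where local clients connect, and the peer optimizer
# across the WAN. In a real product the WAN leg would use a proprietary,
# WAN-optimized protocol instead of another plain TCP socket.
LOCAL_ADDR = ("0.0.0.0", 9000)
PEER_OPTIMIZER = ("203.0.113.10", 9000)

def pump(src, dst):
    """Copy bytes one way until the source closes."""
    while True:
        data = src.recv(65536)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def handle(client):
    # The client's TCP session terminates here; a separate connection carries
    # the data across the WAN, so the client never sees WAN latency or loss.
    wan = socket.create_connection(PEER_OPTIMIZER)
    threading.Thread(target=pump, args=(client, wan), daemon=True).start()
    threading.Thread(target=pump, args=(wan, client), daemon=True).start()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LOCAL_ADDR)
server.listen(16)
while True:
    conn, _ = server.accept()
    handle(conn)
```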
Of course, that's just for site-to-site business applications, with two configured endpoints. What about all other traffic? If the WAN link is completely saturated, the special traffic above still won't improve that much.
Some proposed solutions involve QoS (quality of service) to identify which traffic is important and which isn't. This is normally accomplished by coloring, or marking, packets. Once traffic is classified, the network can give the marked traffic priority over everything else.
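"Coloring" usually means setting the DSCP bits in the IP header so routers along the path can classify the traffic. The snippet below is a small sketch of how an application (or an agent marking traffic on its behalf) could set those bits on a socket; the DSCP value shown (EF, Expedited Forwarding) and the destination address are just examples, and the IP_TOS option may not be exposed on every platform.

```python
import socket

# DSCP value for Expedited Forwarding (46), shifted into the upper six bits
# of the IPv4 TOS byte: 46 << 2 == 0xB8.
DSCP_EF = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)

# From here on, packets sent on this socket carry the EF marking, and any
# router configured to honor DSCP can queue them ahead of best-effort traffic.
sock.connect(("192.0.2.25", 443))   # example destination only
```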
Again, we're in a situation where we have prioritized important traffic, but we still have a congested link. The partially effective workaround is to implement some type of queuing or traffic-shaping mechanism. Now things start to get ugly. The queuing idea, simply put, classifies all traffic into queues: the most important data gets processed first, and the rest later. When congestion already exists on a link, all queuing accomplishes is slowing everything down except the high-priority traffic.
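A strict-priority queue is easy to picture in a few lines. The toy scheduler below is illustrative only, with invented traffic classes: it always drains the highest-priority queue first, which shows both the benefit (critical traffic never waits behind bulk traffic) and the drawback just described (everything else simply waits).

```python
from collections import deque

# Three queues, highest priority first. The classes and "packets" are invented.
queues = {
    "critical": deque(["ERP transaction 1", "ERP transaction 2"]),
    "normal":   deque(["web page 1", "web page 2", "web page 3"]),
    "bulk":     deque(["backup chunk 1", "backup chunk 2"]),
}

def dequeue_next():
    """Strict priority: always serve the highest non-empty queue."""
    for name in ("critical", "normal", "bulk"):
        if queues[name]:
            return name, queues[name].popleft()
    return None, None

while True:
    klass, pkt = dequeue_next()
    if pkt is None:
        break
    print(f"sending [{klass}] {pkt}")
```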
The level of optimization required depends heavily on the specific application. An ideal solution will allow you to prioritize traffic and guarantee a certain amount of available bandwidth for mission-critical applications. Sorting out highly critical traffic, such as ERP and CRM applications, and giving it priority over Web browsing or video can go a long way toward ensuring efficiency.
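One common building block behind such guarantees is a token bucket: by capping the rate of the non-critical classes, the remainder of the link stays free for the critical ones. The sketch below is a generic token-bucket limiter; the rates, link size, and helper function are assumptions for illustration, not any vendor's implementation.

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter (rate and burst in bytes)."""

    def __init__(self, rate, burst):
        self.rate = rate          # tokens (bytes) added per second
        self.burst = burst        # maximum bucket size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# Example policy: cap bulk web/video traffic at 512 kB/s so the rest of a
# hypothetical 2 MB/s WAN link remains available to ERP/CRM traffic.
bulk_limiter = TokenBucket(rate=512 * 1024, burst=64 * 1024)

def forward_bulk_packet(packet: bytes) -> bool:
    """Send a non-critical packet only if it fits within the bulk class's share."""
    return bulk_limiter.allow(len(packet))
```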
In the WAN optimization arena there are two types of products: B-DRO and D-DRO, where DRO stands for Data Replication Optimization. B-DRO handles branch-office-to-data-center traffic, generally over a low-speed link. D-DRO handles data-center-to-data-center connections.
Complete WAN optimization solutions allow a business to do much more than simply queue the bad traffic. They can block unwanted traffic (inbound and outbound), allow it only at certain times of day, give priority to certain hosts, and enforce many other related policies. They will optimize the actual traffic as well, providing lower latency and higher throughput for the most critical applications. Compression is a very powerful tool here.
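How much compression helps depends entirely on the data, but the effect is easy to demonstrate. The snippet below simply compresses a highly repetitive payload with zlib; real WAN optimizers use dictionary and block-level deduplication schemes that go well beyond this, so treat the payload and numbers as purely illustrative.

```python
import zlib

# Highly repetitive data (think log lines or database replication traffic)
# compresses dramatically; already-compressed or encrypted data will not.
payload = b"GET /app/orders?id=1001 HTTP/1.1\r\nHost: erp.example.com\r\n" * 500
compressed = zlib.compress(payload, 6)

print(f"original:   {len(payload)} bytes")
print(f"compressed: {len(compressed)} bytes "
      f"({100 * len(compressed) / len(payload):.1f}% of original)")
```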
Broadly speaking, all WAN optimization solutions boil down to a short list of available tricks:
Traffic prioritization
End-to-end tunnels, employing better protocols than TCP, or TCP tricks
TCP tricks: selective ACKs, limiting retransmissions, reordering packets, and compression
We didn't mention VoIP prioritization or MPLS providers. Frequently, MPLS providers cannot help with the real source of congestion: your last mile. If that's not the bottleneck, then MPLS services are useful, and the hype is well deserved. Just realize that an understanding of your traffic and your network bottlenecks is required before a decision can be made. VoIP was left out because if you're using VoIP, you probably have a WAN optimization solution in place already, since VoIP requires guaranteed bandwidth availability. When saturation begins to take hold, the first application to become unusable is VoIP, and it isn't hard to prioritize.
Disaster recovery solutions often involve replicating data over D-DRO WAN links. Sharing files, or actually hosting file services over a WAN link, is also very tempting. Everyone who has attempted CIFS or NFS over WAN links knows that this is a road fraught with peril. Wide Area File Systems, or WAFS, are designed to allow remote offices to remain serverless. WAFS technology deploys many tricks to make this possible, but it is something completely different from the general WAN acceleration technologies discussed here. Many WAN optimizers now support WAFS, so even though there is a clear distinction in functionality, these products are merging, much as switches and routers did.
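The peril with CIFS or NFS over the WAN is chattiness: each operation is a synchronous round trip, so latency, not bandwidth, dominates. A back-of-the-envelope calculation makes the point; the round-trip count, latency, and file size below are illustrative assumptions, not protocol measurements.

```python
# Rough model: total time ~= round_trips * RTT + bytes / bandwidth.
# All numbers are illustrative assumptions.

rtt_s = 0.080                 # 80 ms round trip on a typical WAN link
bandwidth_Bps = 1.5e6 / 8     # T1: 1.5 Mbit/s is roughly 187 kB/s
round_trips = 200             # a chatty CIFS open/read sequence can take hundreds
file_bytes = 500 * 1024       # 500 kB document

latency_cost = round_trips * rtt_s
transfer_cost = file_bytes / bandwidth_Bps
print(f"latency cost:  {latency_cost:.1f} s")
print(f"transfer cost: {transfer_cost:.1f} s")
print(f"total:         {latency_cost + transfer_cost:.1f} s")
```

Under these assumptions the round trips cost about 16 seconds while the actual data transfer costs under 3, which is why a faster pipe alone does little for file protocols over the WAN.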
Join us next week when we discuss the purpose of WAFS, and how it can make the cost of branch offices insignificant, from the IT budget's standpoint at least.