Internet Performance Optimization: The Early Days of the Internet


When Pathfinder was about to land on Mars in 1997, the challenge for NASA emerged, paradoxically, not in space but on Earth, on the Internet. The 20 NASA mirror sites around the world had to serve 32.8 million hits while several computer-networking problems occurred. A misconfigured router at the NASA Ames site in Mountain View, CA made the site's network unusable. Several Web servers ran out of disk space and crashed. And the memory of the two major Web servers at NASA's JPL, connected through two T3 lines, had to be quadrupled.

After the Internet became a commercial entity in 1995, the slow response time of Web pages became noticeable to every Internet user: the networks were overloaded by the rapid growth in the number of Web servers.

The first factor in latency was (and still is) the way that HTTP works with TCP/IP. As an application protocol, HTTP makes very inefficient use of TCP: it requires many TCP connections to be created and destroyed per Web page transferred, ignoring some of the fundamental concepts of TCP/IP design. However, it is the remarkable congestion-control mechanisms of TCP/IP that saved the Internet from the famous prediction of Bob Metcalfe, founder of 3Com, that the Internet would collapse.
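
The connection-per-object cost is easy to reproduce with Python's standard library. The sketch below (a throwaway local server, 20 objects per page, all numbers assumed for illustration) contrasts the HTTP/1.0 pattern of one TCP connection per request with a single reused HTTP/1.1 connection:

```python
import http.client
import threading
import time
from http.server import HTTPServer, SimpleHTTPRequestHandler

class QuietHandler(SimpleHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # lets the server keep connections open
    def log_message(self, *args):   # silence per-request logging
        pass

# Throwaway local server standing in for a Web server of the era.
server = HTTPServer(("127.0.0.1", 0), QuietHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

N = 20  # objects on a hypothetical Web page (assumed number)

def fetch(conn):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()                     # drain the body so the socket can be reused
    return resp.status

# HTTP/1.0 pattern: a fresh TCP connection (handshake + teardown) per object.
t0 = time.perf_counter()
statuses_new = []
for _ in range(N):
    conn = http.client.HTTPConnection("127.0.0.1", port)
    statuses_new.append(fetch(conn))
    conn.close()
t_new = time.perf_counter() - t0

# Persistent-connection pattern: one TCP connection reused for every object.
t0 = time.perf_counter()
conn = http.client.HTTPConnection("127.0.0.1", port)
statuses_reused = [fetch(conn) for _ in range(N)]
conn.close()
t_reused = time.perf_counter() - t0

server.shutdown()
print(f"{N} objects, new connection each: {t_new:.3f}s; "
      f"one reused connection: {t_reused:.3f}s")
```

On a loopback interface the gap is small; over a WAN, each extra connection also pays a full handshake round trip and restarts TCP slow start, which is exactly the inefficiency described above.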

Web servers were (and still are) generally not the bottlenecks, except in a very few cases such as the NASA example. Processing HTTP required no more than 5% of a server's utilization, and a typical UNIX server could handle on average 3.5 million hits per day. There were, however, some cases in which Web servers could become bottlenecks. A Web server could only handle a certain number of simultaneous connections, and a connection was not released until the HTTP request was serviced. If responding to a request took a long time (such as retrieving large video files from disk, or running the heavy computation behind a search engine), then the Web server could starve for connections, and incoming users would increasingly see "server not responding" errors as connections could not be serviced.
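
The starvation ceiling follows directly from Little's law. The numbers below (connection limit, service times) are illustrative assumptions, not measurements from the period:

```python
# Illustrative (assumed) numbers for a mid-1990s Web server.
max_connections = 256    # simultaneous TCP connections the server accepts
fast_service_s = 0.05    # seconds to serve a static page
slow_service_s = 2.0     # seconds for a large video file or a search query

# Little's law (L = lambda * W): with every connection slot occupied, the
# sustainable request rate is the connection limit divided by service time.
fast_ceiling = max_connections / fast_service_s
slow_ceiling = max_connections / slow_service_s

print(f"static pages: {fast_ceiling:.0f} req/s; "
      f"slow requests: {slow_ceiling:.0f} req/s")
```

With these numbers, the same 256-connection server drops from a ceiling of 5,120 requests per second to 128 the moment every request becomes slow, regardless of how little CPU the HTTP processing itself consumes.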

The major, and practically the only, source of Web latency was (and still is) the network. The first contributor to network latency was the WAN: a WAN could inject as much as 100-500 ms of latency even when the link was not fully utilized. Upgrading a WAN link from T1 to T3 would improve latency by only about 20% for transferring a Web page from Boston to San Francisco, despite a roughly 29-fold increase in bandwidth!
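
A back-of-the-envelope model shows why: with non-persistent connections, the round-trip term dominates the serialization term, so raising bandwidth barely moves page load time. The RTT, object count, and object size below are assumptions chosen for illustration (only the T1 and T3 line rates are real):

```python
# Back-of-the-envelope model of fetching a Web page coast to coast
# over non-persistent HTTP connections. Assumed workload numbers:
RTT = 0.070                  # seconds, Boston <-> San Francisco round trip
OBJECTS = 10                 # objects on the page
OBJECT_BITS = 5 * 1024 * 8   # 5 KB per object

T1_BPS = 1.544e6             # T1 line rate
T3_BPS = 44.736e6            # T3 line rate (~29x the bandwidth of a T1)

def page_time(bw_bps):
    # Each object pays a TCP handshake round trip plus a request/response
    # round trip, then its serialization time on the bottleneck link.
    per_object = 2 * RTT + OBJECT_BITS / bw_bps
    return OBJECTS * per_object

t1, t3 = page_time(T1_BPS), page_time(T3_BPS)
improvement = 1 - t3 / t1
print(f"T1: {t1:.2f}s  T3: {t3:.2f}s  improvement: {improvement:.0%}")
```

Under these assumptions the ~29x bandwidth jump buys only about a 15% reduction in page load time, because 1.4 of the 1.7 seconds is round trips, which no amount of bandwidth can remove.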

The second contributor to network latency was routing. With the exponential increase of Web traffic, routing was degrading at an alarming rate. Internet routing was becoming very unstable, with routes "fluttering," that is, changing rapidly between sources and destinations. Research on the Internet backbone showed that updates from BGP, the Internet's inter-domain routing protocol, were dominated by pathological or redundant messages, adding yet more traffic to the Internet. Routing instability led to general network instability.

Further research on Internet packets revealed that Murphy's law was in full force: all assumptions about network behavior were violated. Packets were frequently lost, corrupted, or arrived badly out of order.

The only solution believed capable of reducing latency on the Internet was the deployment of reverse proxy caches, which could disseminate the content of Web servers as a function of the demand for that content.
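
The idea can be sketched in a few lines: cache origin responses by URL, keep the most recently demanded entries, and evict the rest. This is a minimal illustration, not any particular product's design; the `fetch_origin` callable, capacity, and TTL are all hypothetical:

```python
import time
from collections import OrderedDict

class ReverseProxyCache:
    """Minimal sketch of a reverse proxy cache: responses are cached by URL
    with a TTL, and the least recently demanded entries are evicted first."""

    def __init__(self, fetch_origin, capacity=2, ttl_s=60.0):
        self.fetch_origin = fetch_origin  # callable url -> body (origin server)
        self.capacity = capacity
        self.ttl_s = ttl_s
        self._store = OrderedDict()       # url -> (expires_at, body)
        self.hits = 0
        self.misses = 0

    def get(self, url):
        entry = self._store.get(url)
        if entry is not None and entry[0] > time.monotonic():
            self._store.move_to_end(url)  # refresh recency on a hit
            self.hits += 1
            return entry[1]
        # Miss (or expired): fetch from the origin and cache the response.
        self.misses += 1
        body = self.fetch_origin(url)
        self._store[url] = (time.monotonic() + self.ttl_s, body)
        self._store.move_to_end(url)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used URL
        return body

# Hypothetical origin server: every call here stands in for a trip
# across the WAN that the cache is meant to avoid.
origin_calls = []
def origin(url):
    origin_calls.append(url)
    return f"<page {url}>"

cache = ReverseProxyCache(origin, capacity=2)
for url in ["/a", "/a", "/b", "/c", "/a"]:
    cache.get(url)
print(f"hits={cache.hits} misses={cache.misses} "
      f"origin fetches={len(origin_calls)}")
```

The demand-driven behavior is visible in the trace: the repeated request for `/a` is served from the cache, while the eviction of cold entries keeps the cache sized to the hottest content.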

But a few technology breakthroughs significantly improved the performance of the wireline Internet. Internet routers became much more efficient at processing packets. A new protocol, MPLS (Multi-Protocol Label Switching), was created to enable network operators to better deal with network congestion. And large deployments of optical transport and optical switching equipment let network operators catch up with the demand for Internet content.

Unfortunately, network operators invested in so much network equipment that their costs grew much faster than their revenues, feeding the telecom bubble that burst in 2001.

Good Readings
Van Jacobson, "How to Kill the Internet," SIGCOMM '95, Cambridge, MA, August 1995
L. Zhang, S. Floyd, and V. Jacobson, "Adaptive Web Caching"
C. Labovitz, G. R. Malan, and F. Jahanian, "Internet Routing Instability," SIGCOMM '97
Vern Paxson, "End-to-End Routing Behavior in the Internet," SIGCOMM '96, Stanford, CA, August 1996
Vern Paxson, "End-to-End Internet Packet Dynamics," SIGCOMM '97
"Requirements for Traffic Engineering over MPLS," IETF RFC 2702

Note: the picture above is "Gelb-Rot-Blau" by Vassily Kandinsky.

Copyright © 2005-2011 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com.

Categories: Internet