Tuesday, September 1, 2009

The Design Philosophy of the DARPA Internet Protocols; Clark

This paper describes some of the basic design principles that went into the DARPA Internet, the foundation of the Internet as we know and love it today. The top goal of the DARPA Internet was to interconnect existing networks, such as the ARPANET and the ARPA packet radio network. Two additional aspects of the Internet carried over from the ARPANET: packet switching and store-and-forward of packets by gateway packet switches. Still, it was the incorporation of existing networks that most strongly drove the direction of the Internet. It's not clear to me how the Internet would be different today if the designers had chosen to create a new, unified multi-media system instead of incorporating existing networks; would it be significantly higher performing and more survivable?

The top secondary goal of the designers was survivability: short of a complete network partition, service should continue in some form between sender and receiver. Surviving a failure means protecting the state of ongoing conversations, such as the number of packets sent and acknowledged. To maintain this state information, the design uses "fate sharing," that is, keeping the state at the endpoint hosts themselves rather than inside the network. The argument is that losing the state information about an entity is okay if the entity itself is lost at the same time.
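
To make fate sharing concrete, here is a minimal sketch (mine, not the paper's; the class and field names are illustrative) of per-connection state living entirely at the endpoint host, so a crashed gateway loses nothing the conversation needs:

```python
class EndpointConnectionState:
    """Per-connection state kept only at the communicating host.

    If an intermediate gateway crashes, nothing here is lost; the
    conversation can continue over any surviving path. The state
    disappears only when the host itself fails, and at that point the
    conversation it describes is gone anyway, so losing it is acceptable.
    """

    def __init__(self):
        self.next_seq_to_send = 0        # next byte sequence number to transmit
        self.last_ack_received = 0       # highest byte acknowledged by the peer
        self.unacked_data = bytearray()  # bytes sent but not yet acknowledged

    def on_timeout(self):
        # Recovery is driven entirely by endpoint state: resend
        # everything past the last acknowledged byte.
        return bytes(self.unacked_data)
```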

Another secondary goal was accommodating different types of service, which differ in their requirements for speed, latency, and reliability. Providing all of these services proved to be too much for one protocol to handle effectively, which ultimately led to the split of TCP and IP as well as the creation of UDP. Additionally, the Internet was meant to allow for a variety of networks, which it does by making a minimal set of assumptions: the network must be able to transport a datagram. On top of that are some fuzzier requirements: the packet must be of "reasonable" size and be delivered with "reasonable" reliability through a network with "suitable" addressing.
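
The TCP/UDP split is still visible in the ordinary sockets API today; a minimal Python sketch of choosing a service type:

```python
import socket

# Reliable, ordered byte stream (TCP): suited to services like file
# transfer, where losing data is worse than waiting for a retransmission.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Unreliable datagrams (UDP): suited to services like packetized voice,
# where a late packet is useless and it is better to simply drop it.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

tcp_sock.close()
udp_sock.close()
```

The application picks the tradeoff; IP underneath makes only the minimal datagram assumption either way.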

In creating a framework that cobbled together existing networks, the designers made it difficult to accurately simulate the Internet architecture or to state formal performance constraints for it. Also of interest is that the end-to-end principle made the architecture less cost-effective and (at least initially) more difficult to connect new hosts to. While the datagram as a building block served its purpose of providing the flexibility-related secondary goals, it did not provide for easy resource management and accounting, which would have been better served by some notion of a "flow" through the network (see the sketch below). All of these factors, from datagrams to byte-oriented TCP (which sequences and acknowledges bytes rather than packets) to packet switching, allowed the Internet to achieve its main goal, but they also played some part in keeping it from being the cleanest, most efficient, highest-performing implementation possible.
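
As a rough sketch of the missing "flow" abstraction (my own illustration; the packet fields are assumed, not drawn from the paper), a gateway that wanted to do accounting might classify packets by the classic 5-tuple and keep per-flow counters:

```python
from collections import Counter

def flow_key(pkt):
    """Classify a packet by the classic 5-tuple.

    `pkt` is assumed to be a dict of already-parsed header fields; a
    real gateway would extract these from the IP and transport headers.
    """
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["proto"])

bytes_per_flow = Counter()

def account(pkt):
    # Per-flow bookkeeping: exactly the state that pure datagram
    # forwarding does without, and whose absence is what makes resource
    # management and accounting awkward in the original architecture.
    bytes_per_flow[flow_key(pkt)] += pkt["length"]

account({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
         "src_port": 4321, "dst_port": 80, "proto": "tcp", "length": 1500})
```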

1 comment:

Randy H. Katz said...

This is an ideal summary in terms of length and depth!