What Happened Before I Logged On
What Makes the Internet Work?
by John Woody
It is sometimes helpful to go back to the basics to get a fresh grasp of what something is. This is especially true when dealing with the Internet. Keeping with the theme of this month's issue, this article goes back to the beginning.
The government agency charged with designing the Internet had among its goals the development of a set of standards that would allow any computer on the network to talk to any other computer, without regard to that computer's type or build. These standards became the protocols that govern how data is exchanged between different networks and computers. Originally, the government project by ARPA (the Advanced Research Projects Agency) of the Department of Defense (DoD) envisioned that the Internet project would be limited to research scientists logging onto remote computers to run programs. Soon, file transfer, electronic mail, and mailing lists were added to provide needed new capabilities. Thus, in 1968 as now, the three main functions of the Internet, FILE TRANSFER, E-MAIL, and REMOTE LOGIN, form the basis of what is done on the Internet.
The right protocol, the set of conventions that determines how data will be handled between different programs, within the computer and the network, was finalized in 1974 and has become the de facto standard for networking in nearly every network operating today. That protocol is TCP/IP (Transmission Control Protocol/Internet Protocol).
The Internet is a collection of networks connected together at points so that all of the networks are in communication with each other. These connecting points, called gateways, are serviced by routers, which contain the addresses of the two connecting networks. The routers know where to forward each packet so that it reaches its destination without problems. All of the LANs (Local Area Networks) eventually connect into what is called the backbone, that portion of the Internet that ties the whole network together. The backbone, first operated by the military (ARPA) and then by the National Science Foundation and called ARPANET, carries thousands upon thousands of IP packets at one time.

These networks are essentially incompatible with one another when merely joined by a transmission medium, i.e., cable or radio. It was the job of the ARPA project to define a means of connecting these networks so that they would work together. That is where the TCP/IP protocol came into being. The backbone has since been enlarged to include nearly all of the long-distance telephone carriers as well as many made-to-order long-distance transmission media. One of the great things that happened in that early development was that the source code for all of the protocols was made open. This idea has overtaken proprietary network systems, as nearly everyone has adopted the open-systems concept in building individual networks. For example, the Alamo PC Organization training network uses the TCP/IP protocols in its operation. TCP/IP has become the de facto central protocol for all networks.
As long as Internet development was in the hands of the military and scientists, its growth was deliberate and controlled. Few users had access, and development was orderly. Bell Telephone Laboratories had developed a new operating system named UNIX. The ARPA project decided that TCP/IP would be disseminated through UNIX to all of the universities that wanted it, and gave the University of California at Berkeley a research contract to implement TCP/IP in UNIX. The Berkeley UNIX version became known as BSD UNIX and was the standard used by most universities to develop their own network systems. The U.S. military made a commitment to use TCP/IP in all of its networks. This, coupled with university use, completed the circle to make TCP/IP the standard. With TCP/IP in use in all the LANs, it was easy for these groups to connect to each other. The year after the U.S. military switched to TCP/IP, the Internet doubled in size, and this was before any civilian use.
The NSF (National Science Foundation) took a major interest in the Internet in the late 1970s and developed a project to connect all of the major research universities together. This project became known as the Computer Science Network, or CSNET. Each university paid the cost of its own connection; research universities that could not afford the full cost were provided limited or shared network services. The university connections brought many professors and graduate students into the Internet project, and their research produced huge advances in the technology.
With NSF and military support assured, a means of managing the technology advances was established in the form of a coordinating group, the IAB (Internet Activities Board). Volunteers from the computer research community were placed into task-force groups to review and propose changes. Each group reported its suggestions, reviews, and proposals to the RFC (Request For Comments) Editor. All proposals and suggestions were reviewed and discussed, and those changes that made it through the review process were turned into specifications as RFCs. In 1992, the IAB joined the Internet Society, the governing body for the Internet. One of the task-force groups established by the IAB is the IETF (Internet Engineering Task Force), which is charged with reviewing and refining the short-term development of the Internet. It was in this time frame that many commercial computer businesses joined the Internet development effort.
In 1987, the NSF developed a research WAN (Wide Area Network) backbone to support its computer research centers at thirteen universities. This became known as the NSFNET backbone. Commercial companies, IBM and MCI, were involved in this project along with MERIT, a Michigan university group handling that state's backbone network. This is when the rest of us started getting involved in the Internet. Use increased to the point that the backbone's capacity had to be tripled within about one year. The NSF next undertook to further privatize the backbone operation, and thus draw private and commercial research funding into its development, by having IBM, MCI, and MERIT form a nonprofit company named ANS (Advanced Networks and Services). By 1992, ANS had built a new WAN called ANSNET that ran at 30 times any previous backbone capacity. Separately, MCI developed a WAN known as the vBNS (very-high-speed Backbone Network System). This transfer of assets to private companies moved the Internet toward commercialization and privatization. From this point on, the Internet has seen exponential growth: in 1983 the Internet connected 562 computers; ten years later, it connected 1,200,000 computers. Today, that number has quadrupled.
What Does All This Have to Do with Me?
Additionally, those responsible for solving Internet problems in the IAB, Internet Society, and IETF have really done their homework, refining the original design into the seamless service the Internet's underlying protocols deliver for us today. The underlying protocols have not been changed so much as refined to work with today's high-speed computers. The real advances have been in the hardware designed and built to take advantage of those underlying protocols. Hardware and transmission media make up most of the advances we enjoy; bigger data pipes, such as broadband connectivity for individual users, are one example. One of the next big steps will be bringing the Internet directly to our wristwatches so that we can speak directly to everyone else. Handheld PDAs (Personal Digital Assistants) and handheld computers, coupled with wireless transmission media such as wireless modems and cell telephones, provide mobility we haven't really experienced yet. Also coming into use are WAP (Wireless Application Protocol) compliant cell telephones, which are Internet ready.
John Woody is a networking communications consultant specializing in small office and home office networks, training setups, and Internet connectivity.