Comm Corner
What Happened Before I Logged On
What Makes the Internet Work?
by John Woody


It is sometimes helpful to go back to the basics to get a fresh grasp of a subject.  This is especially true when dealing with the Internet.  Keeping with the theme of this month’s PC Alamode, I will attempt to do just that.  From the beginning, the Internet was designed to be a general set of computer services open to anyone who wishes to use it for any purpose, good or bad.  Well, maybe the designers wanted everything to be ALL GOOD.  For the most part, that has been the case.

The government agency charged with the responsibility for designing the Internet had among its goals the development of a set of standards that would allow any computer on the network to talk to any other computer without regard to that computer’s type or build.  These standards became the protocols that govern how data is exchanged between different networks and computers.  Originally, ARPA (the Advanced Research Projects Agency of the Department of Defense, DoD) envisioned that the Internet project would be limited to research scientists logging onto remote computers to run programs remotely.  Soon, file transfer, electronic mail, and mailing lists were added to provide needed new capabilities.  Thus, from the late 1960s until now, the three main functions of the Internet, FILE TRANSFER, E-MAIL, and REMOTE LOGIN, have formed the basis of what is done on the Internet.

The right protocol, that set of conventions which determines how data will be handled between different programs, within the computer and across the network, was finalized in 1974 and has become the de facto standard for nearly every network running today.  That protocol is named TCP/IP (Transmission Control Protocol/Internet Protocol).  It is based on packet data transmission: the data being transmitted is placed in “envelopes” of a defined size, given a delivery address, and sent in accordance with the protocol standards, regardless of the file size.  This means that if I send a data file with 500,000 bytes of data, it is broken into smaller packets of approximately 2,500 bytes each; each packet is given the delivery address, i.e., the Internet Protocol (IP) address of the recipient, and is sent to that location.  The Transmission Control Protocol (TCP) portion ensures that each packet is sequentially numbered and received at the destination address.  The neat thing about all this is that it does not make any difference what path each packet takes in transmission.  Think of it as sending 200 letters through the post office to the same address, but having the post office deliver the letters using 50 mailmen, each on his own schedule.  The address ensures that each letter arrives at the destination, and the sequence numbers tell the receiver the proper order in which to open them.  And, unlike the post office, TCP makes sure that each packet is receipted for, ensuring that each transmission is complete at the receiving end.
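To make the envelope idea concrete, here is a toy Python sketch of that split, shuffle, and reassemble cycle.  The packet size, the dictionary “header” fields, and the address are invented for illustration; real TCP/IP headers are far more involved than this.

```python
import random

PACKET_SIZE = 2500  # bytes of data per "envelope" (chosen to match the example)

def packetize(data: bytes, dest_ip: str):
    """Split data into numbered packets addressed to dest_ip."""
    packets = []
    for seq, start in enumerate(range(0, len(data), PACKET_SIZE)):
        packets.append({
            "dest": dest_ip,   # the delivery address (IP's job)
            "seq": seq,        # the sequence number (TCP's job)
            "data": data[start:start + PACKET_SIZE],
        })
    return packets

def reassemble(packets):
    """Restore the original data, no matter what order the packets arrived in."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["data"] for p in ordered)

message = b"x" * 500_000                   # a 500,000-byte file
packets = packetize(message, "192.0.2.1")  # 192.0.2.1 is a documentation address
random.shuffle(packets)                    # packets may take different paths
assert reassemble(packets) == message      # sequence numbers restore the order
print(len(packets))  # 200 packets, like the 200 letters in the analogy
```

Like the 50 mailmen, the shuffle scrambles delivery order, yet the sequence numbers put everything back exactly as sent.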

The Internet is a collection of networks connected together at points so that all of the networks are in communication with each other.  These connecting points, called gateways, are serviced by routers which contain the addresses of the two connecting networks.  The routers know where to forward each packet so that it reaches its destination without problems.  All of the LANs (Local Area Networks) eventually connect into what is called the backbone, that portion of the Internet that ties the whole network together.  The first backbone, called ARPANET, was operated by the military (ARPA) and later by the National Science Foundation, and carried thousands upon thousands of IP packets at one time.  These networks are essentially incompatible with one another if you just connect them by a transmission medium, i.e., cable or radio, etc.  It was the job of the ARPA project to define a means to connect these networks so that they would work together.  That is where the TCP/IP protocol came into being.  The backbone has since been enlarged to include nearly all of the long-distance telephone carriers as well as many made-to-order long-distance transmission media.  One of the great things that happened in that early development was that the source code for all of the protocols was made open source.  This idea has overtaken proprietary network systems, as everyone has adopted the open-system concept in building individual networks.  For example, the Alamo PC Organization training network uses the TCP/IP protocols in its operation.  TCP/IP has become the de facto central protocol for all networks.
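A router’s forwarding decision can be sketched in a few lines of Python.  This is a simplified longest-prefix lookup; the network numbers and gateway names are made up for the example and do not belong to any real router’s table.

```python
import ipaddress

# A toy forwarding table: destination network -> where to send the packet next.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "gateway-A",
    ipaddress.ip_network("10.1.0.0/16"): "gateway-B",   # a more specific route
    ipaddress.ip_network("0.0.0.0/0"):   "default-gw",  # everything else
}

def next_hop(dest: str) -> str:
    """Forward to the most specific (longest-prefix) matching network."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("10.1.2.3"))      # gateway-B (longest prefix wins)
print(next_hop("10.9.9.9"))      # gateway-A
print(next_hop("198.51.100.7"))  # default-gw
```

Every real router at a gateway is doing essentially this lookup, millions of times a second, for each packet that passes through.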

As long as the Internet’s development was in the hands of the military and scientists, its growth was deliberate and controlled.  Few users had access.  Development was orderly.  Bell Telephone Laboratories developed a new operating system named UNIX.  The ARPA project decided that TCP/IP would be disseminated through UNIX to all of the universities that wanted it, and gave the University of California at Berkeley a research contract to develop TCP/IP in UNIX.  The Berkeley UNIX version became known as BSD UNIX and was the standard used by most universities to develop their own network systems.  The U.S. military made a commitment to use TCP/IP in all of its networks.  This, coupled with university use, completed the circle to make TCP/IP the standard.  With TCP/IP in use in all these LANs, it was easy for the groups to connect to each other.  The year after the U.S. military switched to TCP/IP, the Internet doubled in size, and this was before any civilian use.

The NSF (National Science Foundation) took a major interest in the Internet in the late 1970s and developed a project to connect all of the major research universities together, known as the Computer Science Network, or CSNET.  Each university paid the cost of its connection.  Research universities which could not afford the full cost were provided limited or shared network services.  The university connections brought many professors and graduate students into the Internet project, and their research topics produced huge advances in the technology.

With NSF and military support assured, a means of managing the technology advances was established in the form of a coordinating group, the IAB (Internet Activities Board).  Volunteers from the computer research community were placed into task-force groups to review and propose changes.  Each group reported its suggestions, reviews, and proposals to the RFC (Request For Comments) Editor.  All proposals or suggestions made to this group were reviewed and discussed.  Those changes that made it through the review process were turned into specifications as RFCs.  The IAB was later brought under the Internet Society, the governing body for the Internet.  One of the task-force groups established by the IAB is the IETF (Internet Engineering Task Force), which is charged with reviewing and refining the short-term development of the Internet.  It was in this time frame that many commercial computer businesses joined the Internet development.

In 1987, the NSF developed a research WAN (Wide Area Network) backbone to support its computer research centers at thirteen universities.  This became known as the NSFNET backbone.  Commercial companies IBM and MCI were involved in this project, along with MERIT, a Michigan university group handling the local backbone network.  This is when the rest of us started getting involved in the Internet.  Use increased to the point that capacity had to be tripled within about one year.  The NSF next undertook to further privatize the backbone operation, and thus draw private and commercial research funding into its development, by having IBM, MCI, and MERIT form a nonprofit company named ANS (Advanced Networks and Services).  By 1992, ANS had built a new WAN called ANSNET that ran at 30 times any previous backbone capacity.  Separately, MCI developed a WAN known as the vBNS (very high-speed Backbone Network System).  This transfer of ownership to a private company is what moved the Internet toward commercialization and privatization.  From this point on, the Internet has had exponential growth: in 1983 the Internet connected 562 computers; ten years later, it connected 1,200,000 computers.  Today, that number has quadrupled.

What Does All This Have To Do With Me?
The basic fact is that the Internet works seamlessly for each of us, without problems.  Granted, sometimes problems arise, but for the most part the Internet moves what we send through it without trouble.  Most of the problems are self-inflicted or result from a Windows OS glitch.  E-mail is delivered almost instantly to nearly anywhere in the world and is accurately received.  Nearly anything that can be digitized can be transferred to another address, including photos, programs, drawings, graphics, or documents.  And so much information and mis-information is available as to overwhelm nearly any search we attempt about nearly anything.  And we can check out books from the San Antonio Public Library from our home computers.  Notice that I have included those three functions the Internet provides, e-mail, file transfer, and remote logon, in the preceding three sentences.  That was the original intent of the Internet and remains the primary function it plays for each of us today.  What has transpired from the 1989 time frame to today is that our OSs (Operating Systems) and client applications have gotten more user friendly.  The GUI (Graphical User Interface) built into today’s OSs simplifies how we use the Internet.  Point and Click, Drag and Drop, and all those other GUI features in the OS and applications make operation so easy that we do not even have to know what is happening in our computer or on the Internet.

Additionally, those responsible for solving Internet problems in the IAB, Internet Society, and IETF have really done their homework, refining the original intent into the seamless functioning that the protocols underlying the Internet give us today.  The underlying protocols have not been changed so much as refined to work with today’s high-speed computers.  The real advances have been in the hardware designed and built to take advantage of those underlying protocols.  Hardware and transmission media make up most of the advances we enjoy.  Bigger data pipes, such as broadband data connectivity for individual users, are one example.  One of the next big steps will be bringing the Internet directly to our wrist watches so that we can speak directly to everyone else.  Handheld PDAs (Personal Digital Assistants) and handheld computers, coupled with wireless transmission media such as wireless modems and cell telephones, provide mobility we haven’t really experienced yet.  Also coming into use are WAP (Wireless Application Protocol) compliant cell telephones, which are Internet ready.

The underlying protocols that govern use of the Internet run seamlessly from one’s personal computer to the Internet without the user really having to understand what is going on to get to that cool WWW site.  The trickiest part users have to put up with is getting the personal computer to make the proper connection to the Internet access point, the phone number that the ISP (Internet Service Provider) has established for that connection.  Current OSs are network compliant and have the connection protocol, PPP (Point-to-Point Protocol), built in.  All that is required of the user is a User ID and Password.
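To show just how little the application itself has to do once the connection is up, here is a minimal Python sketch of a TCP conversation with both ends running on the local machine (so no ISP connection is needed).  The OS’s built-in TCP/IP stack handles all the packets, sequencing, and acknowledgements; the program just writes and reads bytes.

```python
import socket
import threading

def echo_once(server_sock):
    """Accept one connection and echo back whatever arrives."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# One end listens; 127.0.0.1 keeps everything on this machine,
# and port 0 asks the OS to pick any free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# The other end connects; TCP/IP does everything else underneath.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, Internet")
    reply = client.recv(1024)
print(reply)  # b'hello, Internet'
```

A real connection to a WWW site works the same way: once the OS has established the link, the browser simply opens a socket to the site’s IP address and the underlying protocols carry the data.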

John Woody is a networking communications consultant specializing in small office and home office networks, training setup, and Internet connectivity.