Internet History & Glossary


The Internet was born about thirty-five years ago out of an effort to connect the Defense Department network—called the ARPANet (Advanced Research Projects Agency Network)—and various radio and satellite networks. It was designed to support military research, especially research into how to build networks that could withstand partial outages—such as those sustained in a bomb attack, natural disaster or power outage—and still function. In the ARPANet model, communication always occurs between a source computer and a destination computer. The network itself is assumed to be unreliable, and is thus designed to require a minimum of information from its computer clients (as the source and destination computers are called).

To send a message on the network, a source computer simply has to put data in "envelopes" called IP (Internet Protocol) packets and address the packets correctly. The communicating computers—not the network itself, which was composed of phone lines, radio or satellite signals—were also given the responsibility for ensuring that the communication was accomplished. Thus the destination computer had the task of receiving and assembling the bits of the message and displaying it in its entirety to the end user (the person sitting in front of the destination computer). The philosophy was that every computer on the network could talk as a peer (in the same language) with any other computer.

Thus messages from one computer could be scattered in little bits and sent to find the fastest, safest and surest way to their destination address over phone, radio, cable or satellite communication links to be delivered and reassembled into the original message by the receiving computer. The idea seemed fantastic thirty-five years ago, but has since proved invaluable.
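The idea described above—a message scattered into little bits that travel independently and are reassembled at the destination—can be sketched in a few lines. This is a toy illustration, not real IP: the function names and the sample message are invented, and real packets carry addresses, checksums and much more.

```python
# Toy sketch of packet-switched messaging: split a message into
# independently numbered "packets" that may arrive out of order,
# then reassemble them by sequence number at the destination.
import random

def packetize(message: str, size: int = 4):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Sort packets by sequence number and rejoin the original message."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("Hello, ARPANet!")
random.shuffle(packets)        # packets may take different routes
print(reassemble(packets))     # -> Hello, ARPANet!
```

However the packets are shuffled in transit, the receiver recovers the original message from the sequence numbers alone—which is why the network itself can be "dumb" and unreliable.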

Using this model, the US was able to develop a working network linking academic and research institutions. Users who had access to this network became addicted to this type of communication and demanded more networking. While the ISO (International Organization for Standardization) spent years designing a standard for computer networking, researchers and others couldn't wait. Developers in the US, UK and Scandinavia began designing IP software—in a variety of computer languages—for every conceivable type of computer in the world. Governments, schools and organizations across the planet bought up this software before realizing that their computers—now equipped with sophisticated communication software—couldn't "talk" to one another because they "spoke and heard" in different computer languages. What was needed was a system like ARPANet which allowed all computers—whether Apple, Compaq, IBM, Fujitsu, etc.—to not only communicate with one another within their institution but with other institutions and across the world as well. Soon many companies were building private networks using the same communications protocols as the ARPANet.

One of these newer networks was the NSFNet (National Science Foundation Network), commissioned by the National Science Foundation, a US agency. In the mid-80s, the NSF created five supercomputer centers at major universities in the US. (Up to this point, the world's fastest computers had only been available to weapons developers and a few researchers from large corporations.) The NSF made these computers available for ANY scholarly research. As these computers were so expensive to set up, the NSF could afford only five. It needed a way to connect the centers together and to allow clients to access them. At first the NSF turned to the ARPANet but soon backed out because of bureaucratic problems.

The NSF then decided to build its own network, connected with 56,000 bps (bits per second) phone lines. It soon became obvious that if they tried to connect every university directly to a supercomputing center, they would go broke (phone lines are paid for by the mile). So they decided to create regional networks. In each area of the country, schools would be connected to their nearest neighbor. Each chain was connected to a supercomputer center at one point, and the centers were connected together. With this configuration, any computer could eventually communicate with any other by forwarding the "conversation" through its neighbors.
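The forwarding scheme described above—regional chains of schools, each chain touching a supercomputer center, with conversations relayed through neighbors—can be modeled as a small graph. The node names below are invented for illustration; a breadth-first search finds a neighbor-to-neighbor path, much as the network forwarded traffic hop by hop.

```python
# Toy model of the regional-network topology: schools chained to
# neighbors, chains touching supercomputer centers, centers linked
# to each other. BFS finds a hop-by-hop route between any two nodes.
from collections import deque

links = {
    "school_A": ["school_B"],
    "school_B": ["school_A", "center_1"],
    "center_1": ["school_B", "center_2"],
    "center_2": ["center_1", "school_C"],
    "school_C": ["center_2"],
}

def route(src, dst):
    """Return one neighbor-to-neighbor path from src to dst, or None."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("school_A", "school_C"))
# -> ['school_A', 'school_B', 'center_1', 'center_2', 'school_C']
```

No school needs a direct (and expensive, per-mile) line to a distant center; it only needs a link to its nearest neighbor.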

This worked REALLY well. Soon sharing supercomputers also allowed the connected sites to share a lot of other things not related to the centers. Suddenly these schools had a world of data and collaborators at their fingertips. The network's traffic increased until eventually the computers controlling the network and the telephone lines connecting them were overloaded. In 1987 a contract to manage and upgrade the network was awarded to a company outside the government. In 1995, NSFNet began a phased withdrawal, turning what had become the backbone of the Internet over to a consortium of commercial backbone providers (PSINet, UUNET, ANS/AOL, Sprint, MCI, and AGIS-Net99) while the NSF launched a new research network, the vBNS.

In the early days of ARPANet and NSFNet, users communicated using FTP (File Transfer Protocol) and BBSs (Bulletin Board Systems). FTP is a protocol (and program) that sends entire files from one computer to another. A BBS is an electronic message database where people can "log in" (connect to the BBS via their computers) and leave messages, which can be retrieved by others who log in to the BBS. Most of these people were computer aficionados who didn't mind that they had to learn arcane commands to operate and use the system, or that files appeared as lines of text on their screens. The Internet would still be a distant dream if it were not for the development, in 1993, of Mosaic, described as "the killer application of the 1990s." Mosaic was the first program to provide a slick multimedia GUI (Graphical User Interface) to the Internet's burgeoning wealth of distributed information services, at a time when access to the Internet was expanding rapidly outside its previous domain of academia and large industrial research institutions.

Mosaic was originally designed and programmed by Marc Andreessen and Eric Bina at NCSA (the National Center for Supercomputing Applications). Version 1.0 was released in April 1993. This remarkable program, essentially the first "browser," allowed the user to see text and image data from the Internet displayed in a window in full color. The end user had only to point and click on hyperlinked words (usually in blue and underlined) to be taken to another page on the network. What allowed this to happen was the use of HTML (HyperText Markup Language), which the page designer used in the form of "tags." Text or pictures that were so tagged could be linked to any page on the Internet, or displayed in a wide variety of font styles and colors. There is now a wide variety of browsers available, but all of them are basically descendants of Mosaic.

The development of GUI browsers made the Internet more accessible to the mainstream public, and individual computer users were soon demanding more services and faster information transfer. Within a few years, DSL (Digital Subscriber Line)—a family of digital telecommunications protocols designed to allow high-speed data communication over existing copper telephone lines between end-users and telephone companies—along with cable modems and satellite modems, ensured an ever-expanding role for the Internet in our lives.



  • Browser: A program that allows a person to read hypertext. The browser gives some means of viewing the contents of nodes (or "pages") and of navigating from one node to another. Browsers act as clients to remote web servers. Examples of browsers for the Internet include: KidSafe Explorer, Lynx, MacWeb, Minuet, Mosaic, Mozilla, Microsoft Internet Explorer, Netscape Navigator, Opera, Web Explorer, etc. (For an expanded list of browsers check out the Web Developers Notes Browser List Page.)
  • CGI: Common Gateway Interface. A standard for running external programs from a Web HTTP server. CGI specifies how to pass arguments to the executing program as part of the HTTP request. It also defines how some computer code will be interpreted. Commonly, the program will generate some HTML that will be passed back to the browser but it can also request URL redirection. CGI allows the returned HTML (or other document type) to depend in any arbitrary way on the request. The CGI program can, for example, access information in a database and format the results as HTML. A CGI program can be any program that can accept command line arguments. Perl is a common choice for writing CGI scripts. Some HTTP servers require CGI programs to reside in a special directory, often "/cgi-bin" but better servers provide ways to distinguish CGI programs so they can be kept in the same directories as the HTML files to which they are related. Whenever the server receives a CGI execution request it creates a new process to run the external program. If the process fails to terminate for some reason, or if requests are received faster than the server can respond to them, the server may become swamped with processes. In order to improve performance, Netscape devised NSAPI and Microsoft developed the ISAPI standard, which allowed CGI-like tasks to run as part of the main server process, thus avoiding the overhead of creating a new process to handle each CGI invocation.
  • Client: A computer system or process that requests a service of another computer system or process (a "server") using some kind of protocol and accepts the server's responses. A client is part of a client-server software architecture. For example, a workstation requesting the contents of a file from a file server is a client of the file server.
  • DNS: Domain Name System. A system that provides unique “addresses” for all computers that connect to the Internet.
  • Domains: Domains are simply “addresses” on the Internet. These addresses are divided up into sections, separated by periods (usually pronounced as "dot"). There are 6 original High Level Domains. These are: 1) com—for commercial organizations (i.e. businesses), 2) edu—for educational organizations (i.e. universities, secondary schools, etc.), 3) gov—for non-military government organizations, 4) mil—for military organizations (i.e. army, navy, etc.), 5) org—for organizations, and 6) net—for network resources. These domains are used at the end of the domain address to delineate which type of address it is. There are also domain names for countries outside the US (i.e. uk—United Kingdom, fi—Finland, ie—Ireland, etc.). These country codes come at the very end of the domain address. So… would get you to the Oxford University Home Page, the "www" denoting the World Wide Web, the "ox" denoting Oxford, the "ac" denoting an academic institution (used in the UK in place of "edu," which is used in the US) and the "uk" denoting that the page originates from the United Kingdom. Alternately, … gets you to the University of Hawaii, Manoa Home Page.
  • NOTE: Currently, with the rapid expansion of the Internet, many new High Level Domains are being considered.
  • DSL: Digital Subscriber Line. A family of digital telecommunications protocols designed to allow high-speed data communication over the existing copper telephone lines between end-users and telephone companies. When two conventional modems are connected through the telephone system (PSTN), it treats the communication the same as voice conversations. This has the advantage that there is no outlay required from the telephone company (telco) but the disadvantage is that the bandwidth available for the communication is the same as that available for voice conversations, usually 64 kb/s at most. The twisted-pair copper cables into individual homes or offices can usually carry significantly more than 64 kb/s but the telco needs to handle the signal as digital rather than analog. There are many implementations of the basic scheme, differing in the communication protocol used and providing varying service levels. The throughput of the communication can be anything from about 128 kb/s to over 8 Mb/s, the communication can be either symmetric or asymmetric (i.e. the available bandwidth may or may not be the same upstream and downstream). Equipment prices and service fees also vary considerably. The first technology based on DSL was ISDN, although ISDN is not often recognized as such nowadays. Since then a large number of other protocols have been developed, collectively referred to as xDSL, including HDSL, SDSL, ADSL, and VDSL.
  • Ethernet: TCP/IP (see definitions below) computers are frequently connected to the Internet via Ethernet (a type of LAN).
  • FTP: File Transfer Protocol. A protocol (and the programs that implement it) for transferring entire files from one computer to another.
  • GUI: Graphical User Interface. An interface that uses pictures rather than just words to represent the input and output of a program. A program with a GUI runs under some windowing system (e.g. the X Window System, MacOS, Microsoft Windows, Acorn RISC OS, NEXTSTEP). The program displays certain icons, buttons, dialogue boxes, etc. in its windows on the screen and the user controls it mainly by moving a pointer on the screen (typically controlled by a mouse) and selecting certain objects by pressing buttons on the mouse while the pointer is pointing at them. This contrasts with a command line interface, where communication is by exchange of strings of text. Windowing systems started with the first real-time graphic display systems for computers, namely the SAGE Project and Ivan Sutherland's Sketchpad (1963). Douglas Engelbart's Augmentation of Human Intellect project in the 1960s developed the On-Line System, which incorporated a mouse-driven cursor and multiple windows. Several people from Engelbart's project went to Xerox PARC in the early 1970s, most importantly his senior engineer, Bill English. The Xerox PARC team established the WIMP concept, which appeared commercially in the Xerox 8010 (Star) system in 1981. Shortly thereafter Jef Raskin and the Macintosh team at Apple Computer (which included former members of the Xerox PARC group) continued to develop such ideas in the first commercially successful product to use a GUI, the Apple Macintosh, released in January 1984. In 2001 Apple introduced Mac OS X. Microsoft modeled the first version of Windows, released in 1985, on the MacOS. Windows was a GUI for MS-DOS, which had been shipped with IBM PCs and compatible computers since 1981. Apple sued Microsoft over infringement of the look-and-feel of the MacOS; the court case ran for many years.
  • Hypertext: A term coined by Ted Nelson around 1965 for a collection of documents (or "nodes") containing cross-references or "links" which, with the aid of an interactive browser program, allow the reader to move easily from one document to another. The extension of hypertext to include other media (sound, graphics, and video) has been termed "hypermedia," but is usually just called "hypertext," especially since the advent of the Internet and HTML.
  • HTML: HyperText Markup Language. This markup language formats a document to be sent and received over the Internet. “Tags” (instructions in angle brackets) are embedded in the text of a document. These tags tell the viewer's browser how to display or "interpret" the document's graphics, color, text and other formatting on the monitor. (e.g. <b><i><font color="red">Hi!</font></i></b> tells the browser to display "Hi!" in bold italics and in red.) NOTE: The tags to link text or graphics to another page are <a href="newpage">, followed by a written description of the new page, followed by </a>. Thus, <a href="…">Media Design</a> would direct the browser to the address given in the href, and would appear in the browser as "Media Design."
  • HTTP: HyperText Transfer Protocol. The protocol used on the Internet for the exchange of HTML documents.
  • ISDN: Integrated Services Digital Network. A set of communications standards allowing an individual wire or optical fibre to carry voice, digital network services and video. ISDN is intended to eventually replace the plain old telephone system. ISDN was first published as one of the 1984 ITU-T Red Book recommendations. ISDN uses mostly existing Public Switched Telephone Network (PSTN) switches and wiring, upgraded so that the basic "call" is a 64 kilobits per second, all-digital end-to-end channel. ISDN is offered by local telephone companies, but most readily in Australia, France, Japan and Singapore, with the UK somewhat behind and availability in the USA rather spotty.
  • IAB: Internet Architecture Board.
  • IESG: Internet Engineering Steering Group. Just what it sounds like: the group that oversees the technical management of IETF activities and carefully steers the development of Internet standards into the future.
  • IETF: Internet Engineering Task Force.
  • InterNIC: Internet Network Information Center. In cooperation with the Internet community, the National Science Foundation developed and released in the spring of 1992 a solicitation for one or more Network Information Service (NIS) Managers to provide and/or coordinate services for the NSFNet community. Three organizations were selected to receive cooperative agreements in the areas of Information Services, Directory and Database Services, and Registration Services. Together these three awards constitute the InterNIC. General Atomics provides information services, AT&T provides directory and database services, and Network Solutions, Inc. (NSI) provides registration services. (e.g. I had to apply and pay the InterNIC to obtain my own domain.) Currently many companies who are "web hosts" will sell you a domain, but you still have to check with the InterNIC to find out if the domain name of your choice is available (not already in use by someone else).
  • IP: Internet Protocol. The basic protocol used to address and deliver packets of data over the Internet.
  • ISO/OSI: International Organization for Standardization/Open Systems Interconnection. Used to describe aspects of your computer's ability to communicate with other computers.
  • ISO: International Organization for Standardization. An international standards body based in Geneva, Switzerland, whose OSI effort was a long-running attempt to standardize computer networking.
  • ISOC: Internet Society.
  • ISP: Internet Service Provider. A site that has a large enough computer to store information and software and "land lines" (telephone lines) that connect the user (you) with the WWW. You must have: a) a computer, b) a phone line or DSL line or cable that can handle digital data (most do), c) software that can "communicate" with other computers.
  • LAN: Local Area Network. Used to describe "small" networks, usually inside a company building or within a single corporation. All the computers in that company or corporation are connected using network protocol software and thus can communicate with each other.
  • NAPs: Network Access Points. In the United States, a network access point (NAP) is one of several major Internet interconnection points that serve to tie all the Internet access providers together so that, for example, an AT&T user in Portland, Oregon can reach the Web site of a Bell South customer in Miami, Florida. Originally, four NAPs - in New York, Washington, D.C., Chicago, and San Francisco - were created and supported by the National Science Foundation as part of the transition from the original U.S. government-financed Internet to a commercially operated Internet. Since that time, several new NAPs have arrived, including WorldCom's "MAE West" site in San Jose, California and ICS Network Systems' "Big East." The NAPs provide major switching facilities that serve the public in general. Companies apply to use the NAP facilities and make their own intercompany peering arrangements. Much Internet traffic is handled without involving NAPs, using peering arrangements and interconnections within geographic regions. The vBNS network, a separate network supported by the National Science Foundation for research purposes, also makes use of the NAPs.
  • NIC: Network Information Center.
  • NNTP: Network News Transfer Protocol.
  • NOC: Network Operations Center.
  • OSI: Open Systems Interconnect Protocol. A protocol suite, designed by the ISO, intended to allow computers to communicate over networks.
  • PANS: Pretty Amazing New Stuff.
  • PPP: Point-To-Point Protocol. A protocol used to carry Internet traffic over serial lines, such as dial-up telephone connections.
  • Protocol: A set of formal rules describing how to transmit data, especially across a network. Low-level protocols define the electrical and physical standards to be observed, bit- and byte-ordering and the transmission and error detection and correction of the bit stream. High-level protocols deal with the data formatting, including the syntax of messages, the terminal-to-computer dialogue, character sets, sequencing of messages etc.
  • PSTN: Public Switched Telephone Network. The collection of interconnected systems operated by the various telephone companies and administrations (telcos and PTTs) around the world. Also known as the Plain Old Telephone System (POTS) in contrast to xDSL and ISDN. The PSTN started as human-operated analogue circuit switching systems (plugboards), progressed through electromechanical switches. By now this has almost completely been made digital, except for the final connection to the subscriber (the "last mile"). The signal coming out of the phone set is analogue. It is usually transmitted over a twisted pair cable still as an analogue signal. At the telco office this analogue signal is usually digitized, using 8,000 samples per second and 8 bits per sample, yielding a 64 kb/s data stream (DS0). Several such data streams are usually combined into a fatter stream: in the US 24 channels are combined into a T1, in Europe 31 DS0 channels are combined into an E1 line. This can later be further combined into larger chunks for transmission over high-bandwidth core trunks. At the receiving end the channels are separated, the digital signals are converted back to analogue and delivered to the receiving phone. While all these conversions are inaudible when voice is transmitted over the phone lines, they can make digital communication difficult.
  • PTT: Post, Telephone and Telegraph administration. One of the many national bodies responsible for providing communications services in a particular country. Traditionally, PTTs had monopolies in their respective countries. This monopoly was first broken in the US, with the UK joining somewhat later. Currently the markets are being deregulated in Europe as well as other parts of the world.
  • RFC: Request for Comments. The series of documents in which the Internet's standards are documented and published.
  • Server: 1. A program that provides some service to other (client) programs. The connection between client and server is normally by means of message passing, often over a network, and uses some protocol to encode the client's requests and the server's responses. The server may run continuously, waiting for requests to arrive or it may be invoked by some higher-level program that controls a number of specific servers. 2. A computer which provides some service for other computers connected to it via a network. The most common example is a file server which has a local disk and services requests from remote clients to read and write files on that disk, often using Sun's Network File System (NFS) protocol or Novell Netware on IBM PCs.
  • SLIP: Serial Line Internet Protocol. A protocol for carrying IP packets over serial lines (e.g. dial-up modem connections); largely superseded by PPP.
  • SMTP: Simple Mail Transfer Protocol. Software used to transfer electronic mail between computers. It is a server-to-server protocol, so other protocols are used to access the messages. The SMTP dialog usually happens in the background under the control of the message transport system (e.g. sendmail), but it is possible to interact with an SMTP server using Telnet to connect to the normal SMTP port.
  • SNA: Systems Network Architecture. IBM's proprietary high-level networking protocol standard, used by IBM and IBM-compatible mainframes.
  • TCP: Transmission Control Protocol. A protocol, layered on top of IP, that provides reliable, ordered delivery of data between computers on the Internet.
  • Telnet: A terminal emulation protocol that allows you to log onto a remote computer system on the Internet.
  • UNIX: An early popular operating system important to the development of the Internet as it was used as the primary operating system for university computers.
  • UDP: User Datagram Protocol. A protocol, layered on top of IP, that delivers individual packets of data without TCP's guarantees of reliability or ordering.
  • URL: Uniform Resource Locator. "Addresses" on the Internet are given as URLs. (e.g. in the URL to my home page, the "http" refers to HyperText Transfer Protocol, the "www" refers to the World Wide Web, the domain name refers to the location where all my web pages are stored, the "index2" refers to my index page, and the "htm" extension indicates that the page is written in HTML format.)
  • Usenet: An informal group of systems that exchange news (predates the Internet).
  • vBNS: The vBNS (very high-speed Backbone Network Service) is a network that interconnects a number of supercomputer centers in the United States and is reserved for science applications requiring the massive computing that supercomputers can provide. Scientists at the supercomputer centers and other locations apply for time on the supercomputers and use of the vBNS by describing their projects to a committee that apportions computer time and vBNS resources. The vBNS and the supercomputer centers were initiated and are maintained by the National Science Foundation (NSF). The vBNS began operation in April 1995, as the successor to the NSFNet. The NSFNet itself succeeded the ARPANet, the original Internet network. The vBNS is the scientific portion of the Internet that NSF continues to fund. The physical infrastructure for the original Internet is now owned and maintained by the national commercial backbone companies in the United States and worldwide. Currently, MCI provides the backbone infrastructure for the vBNS under contract from the National Science Foundation. The backbone consists mainly of interconnected Optical Carrier level (OCx) lines (operating at 155 Mbps or higher). The vBNS provides connections to the four national network access points (NAPs). The vBNS infrastructure itself is not shared with commercial companies and ordinary users. As part of the evolution toward a commercially self-sustained Internet, the National Science Foundation continues to operate the routing arbiter, a service that the NAPs and other routers use to route and reroute packets and optimize traffic flow on the Internet. The routing arbiter service is managed by Merit under a contract from the NSF that expires in July 1999. The vBNS has recently become part of the infrastructure of Internet2. A new NSF-funded initiative is developing an advanced network infrastructure referred to as the National Technology Grid.
  • WWW: World Wide Web. The worldwide collection of hypertext documents (web pages), stored on servers across the Internet and viewed with browsers using HTTP.
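The CGI entry above can be illustrated with a minimal script. This is a hedged sketch in Python rather than the Perl mentioned there; the function name and sample query are invented. Per the CGI standard, the query string arrives in the QUERY_STRING environment variable, and the program writes an HTTP header, a blank line, and the document to standard output.

```python
#!/usr/bin/env python3
# Minimal CGI-style script: read the query string from the environment
# (as the CGI standard passes it) and emit an HTTP header plus HTML.
import os
from urllib.parse import parse_qs

def respond(environ=os.environ):
    params = parse_qs(environ.get("QUERY_STRING", ""))
    name = params.get("name", ["world"])[0]
    body = "<html><body><p>Hello, " + name + "!</p></body></html>"
    # The blank line (\r\n\r\n) separates the header from the body.
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    print(respond({"QUERY_STRING": "name=reader"}))
```

A server placing this in its cgi-bin directory would spawn a new process per request, which is exactly the overhead NSAPI and ISAPI were designed to avoid.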
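The digitization arithmetic in the PSTN entry above works out as follows: 8,000 samples per second at 8 bits per sample gives one 64 kb/s DS0 channel; 24 DS0s plus 8 kb/s of framing make a US T1, and the European E1 line rate comes from 32 timeslots (31 of which carry the DS0 channels mentioned above, the remaining one carrying framing).

```python
# PSTN channel arithmetic: DS0, T1 and E1 line rates in bits per second.
SAMPLES_PER_SECOND = 8_000
BITS_PER_SAMPLE = 8

ds0 = SAMPLES_PER_SECOND * BITS_PER_SAMPLE   # one voice channel: 64,000 b/s
t1 = 24 * ds0 + 8_000                        # US T1: 1,544,000 b/s
e1 = 32 * ds0                                # European E1: 2,048,000 b/s

print(ds0, t1, e1)   # -> 64000 1544000 2048000
```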
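The SMTP dialog mentioned in the SMTP entry above can be sketched as the sequence of client commands a mail transport (or a person connected with Telnet to the SMTP port) would send. The addresses and hostnames here are invented for illustration; a real session also waits for a numbered server reply after each command.

```python
# Sketch of the client side of a minimal SMTP session, as a sequence
# of protocol commands joined by CRLF line endings.
def smtp_dialog(sender, recipient, body):
    """Build the client side of a minimal SMTP session."""
    return "\r\n".join([
        "HELO example.org",
        "MAIL FROM:<" + sender + ">",
        "RCPT TO:<" + recipient + ">",
        "DATA",
        body,
        ".",      # a line containing only a dot ends the message body
        "QUIT",
    ])

print(smtp_dialog("alice@example.org", "bob@example.net", "Hello Bob"))
```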
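The parts of a URL described in the URL entry above can be picked apart programmatically. The address below is a stand-in (example.com), not the author's real page; Python's standard urllib.parse module does the splitting.

```python
# Breaking a URL into the components described in the URL glossary
# entry: the protocol, the domain, and the page within that domain.
from urllib.parse import urlparse

url = "http://www.example.com/index2.htm"
parts = urlparse(url)

print(parts.scheme)   # the protocol: http
print(parts.netloc)   # the domain: www.example.com
print(parts.path)     # the page: /index2.htm
```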
NOTE: You can find more help at my WWW/Internet Resource Page. You can also find more computer definitions at FOLDOC (the Free On-line Dictionary of Computing), and more Internet definitions at WhatIs.com.

© Media Design, All Rights Reserved.
Site built by Media Design