Tuesday, June 26, 2007

Broadband Internet

Broadband Internet access

From Wikipedia, the free encyclopedia

A WildBlue Satellite Internet dish.

Broadband Internet access, often shortened to "broadband Internet" or just "broadband", is an Internet connection with a high data-transmission rate. DSL and cable modem, both popular consumer broadband technologies, typically transmit data much faster than a dial-up modem's 56 kbit/s (kilobits per second) maximum. Upload speeds for dial-up modems are slower still (31.2 kbit/s for V.90, 44 kbit/s for V.92).

Broadband Internet access became a rapidly developing market in many areas in the early 2000s; one study found that broadband Internet usage in the United States grew from 6% in June 2000 to over 30% in 2003. [1]

Modern consumer broadband implementations, at up to 30 Mbit/s, are several hundred times faster than the connections available when the Internet first became popular (such as ISDN and 56 kbit/s dial-up), while costing less than ISDN and sometimes no more than dial-up, though performance and costs vary widely between countries.

"Broadband" in this context refers to the relatively high available bitrate, when compared to systems such as dial-up with lower bitrates (which could be referred to as narrowband).


Overview

Broadband transmission rates
Connection   Transmission speed
DS-1 (T-1)   1.544 Mbit/s
E-1          2.048 Mbit/s
DS-3 (T-3)   44.736 Mbit/s
OC-3         155.52 Mbit/s
OC-12        622.08 Mbit/s
OC-48        2.488 Gbit/s
OC-192       9.953 Gbit/s
OC-768       39.813 Gbit/s
OC-1536      79.6 Gbit/s
OC-3072      159.2 Gbit/s

Broadband is often called high-speed Internet because it usually has a high rate of data transmission. In general, any connection to the customer of 256 kbit/s (0.256 Mbit/s) or more is considered broadband Internet. The International Telecommunication Union Standardization Sector (ITU-T) recommendation I.113 has defined broadband as a transmission capacity faster than primary rate ISDN, i.e. 1.5 to 2 Mbit/s. The FCC defines broadband as at least 200 kbit/s (0.2 Mbit/s) in one direction, and advanced broadband as at least 200 kbit/s in both directions. The Organisation for Economic Co-operation and Development (OECD) has defined broadband as 256 kbit/s in at least one direction, and this bit rate is the most common baseline marketed as "broadband" around the world. There is no specific bitrate defined by the industry, however, and "broadband" can cover lower-bitrate transmission methods; some Internet Service Providers (ISPs) use this to their advantage, marketing lower-bitrate connections as broadband.
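
To make these overlapping thresholds concrete, here is a minimal sketch in Python (the helper and its name are hypothetical; the threshold figures are simply the FCC and OECD numbers quoted above, not any official API) that classifies a connection against those definitions:

    # Hypothetical illustration of the broadband thresholds quoted above.
    # Rates are in kbit/s; the values come from the FCC and OECD definitions
    # cited in the text.
    FCC_BROADBAND_KBPS = 200      # at least 200 kbit/s in one direction
    OECD_BROADBAND_KBPS = 256     # at least 256 kbit/s in one direction

    def classify(downstream_kbps, upstream_kbps):
        labels = []
        if max(downstream_kbps, upstream_kbps) >= FCC_BROADBAND_KBPS:
            labels.append("FCC broadband")
        if min(downstream_kbps, upstream_kbps) >= FCC_BROADBAND_KBPS:
            labels.append("FCC advanced broadband")
        if max(downstream_kbps, upstream_kbps) >= OECD_BROADBAND_KBPS:
            labels.append("OECD broadband")
        return labels or ["narrowband"]

    print(classify(768, 128))    # typical early ADSL tier
    print(classify(56, 33.6))    # dial-up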

In practice, the advertised bandwidth is not always reliably available to the customer; ISPs often sign up more subscribers than their backbone connection can handle simultaneously, on the assumption that most users will not use their full connection capacity very often. This aggregation strategy usually works, so users can typically burst to their full bandwidth most of the time; however, peer-to-peer (P2P) file sharing systems, which often require high bandwidth for extended periods, strain these assumptions and can cause serious problems for ISPs that have oversubscribed their capacity too aggressively. For more on this topic, see traffic shaping. As take-up of these introductory products increases, telcos are starting to offer higher bit rate services; for existing connections, this usually involves only reconfiguring the existing equipment at each end of the connection.
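
A back-of-the-envelope sketch of this oversubscription arithmetic (all figures are invented for illustration and do not describe any particular ISP):

    # Hypothetical oversubscription arithmetic.
    backbone_mbps = 155.52            # e.g. a single OC-3 uplink
    subscribers = 1000
    advertised_mbps = 3.0             # advertised per-subscriber rate

    contention_ratio = (subscribers * advertised_mbps) / backbone_mbps
    print("Contention ratio: %.1f:1" % contention_ratio)          # ~19.3:1

    # If only 5% of subscribers transfer at full rate at the same moment,
    # each of them can still get close to the advertised rate:
    active_users = subscribers * 0.05
    print("Share per active user: %.2f Mbit/s" % (backbone_mbps / active_users))  # ~3.11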

As the bandwidth delivered to end users increases, the market expects that video on demand services streamed over the Internet will become more popular, though at the present time such services generally require specialized networks. The data rates on most broadband services still do not suffice to provide good quality video, as MPEG-2 video requires about 6 Mbit/s for good results. Adequate video for some purposes becomes possible at lower data rates, with rates of 768 kbit/s and 384 kbit/s used for some video conferencing applications, and rates as low as 100 kbit/s used for videophones using H.264/MPEG-4 AVC. The MPEG-4 format delivers high-quality video at 2 Mbit/s, at the high end of cable modem and ADSL performance.
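
To put these bitrates in perspective, here is a short worked example (the figures are taken from the rates quoted above; the helper function is purely illustrative) relating bitrate and duration to the volume of data transferred:

    # Rough arithmetic relating video bitrate, duration, and data volume.
    def stream_size_gb(bitrate_mbps, duration_hours):
        bits = bitrate_mbps * 1e6 * duration_hours * 3600
        return bits / 8 / 1e9          # gigabytes

    print("MPEG-2 @ 6 Mbit/s, 2 h: %.1f GB" % stream_size_gb(6.0, 2))   # ~5.4 GB
    print("MPEG-4 @ 2 Mbit/s, 2 h: %.1f GB" % stream_size_gb(2.0, 2))   # ~1.8 GB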

Increased bandwidth has already made an impact on newsgroups: postings to groups such as alt.binaries.* have grown from JPEG files to entire CD and DVD images. According to NTL, traffic on their network grew from an inbound news feed of 150 gigabytes per day and 1 terabyte out per day in 2001 to 500 gigabytes in and over 4 terabytes out per day in 2002.[citation needed]

Technology

The standard broadband technologies in most areas are DSL and cable modems. Newer technologies in use include VDSL and pushing optical fiber connections closer to the subscriber in both telephone and cable plants. Fiber-optic communication, while only recently being used in fiber-to-the-premises and fiber-to-the-curb schemes, has played a crucial role in enabling broadband Internet access by making transmission of information over larger distances much more cost-effective than copper wire technology. In a few areas not served by cable or ADSL, community organizations have begun to install Wi-Fi networks, and in some cities and towns local governments are installing municipal Wi-Fi networks. As of 2006, high-speed mobile Internet access has become available at the consumer level in some countries, using the HSDPA and EV-DO technologies. The newest technology being deployed for mobile and stationary broadband access is WiMAX.

Multilinking Modems

It is possible to roughly double dial-up capability with multilinking technology. This requires two modems, two phone lines, two dial-up accounts, and either ISP support for multilinking or special software at the user end. The option was popular with some high-end users before ISDN, DSL and other technologies became available.

Dual Analog Lines

Diamond and other vendors created dual-phone-line modems with bonding capability, giving combined speeds of more than 90 kbit/s. To use such a modem, the ISP must support line bonding, and the Internet and phone charges are roughly twice those of ordinary dial-up.

ISDN

Integrated Services Digital Network (ISDN) is one of the oldest high-speed digital access methods for consumers and businesses to connect to the Internet. It is a telephone data service standard. Its use in the United States peaked in the late 1990s, prior to the availability of DSL and cable modem technologies. Broadband service is usually compared to ISDN-BRI because this was the standard high-speed access technology that formed a baseline for the early broadband providers, who sought to compete against ISDN by offering faster and cheaper services to consumers.

A basic rate ISDN line (known as ISDN-BRI) is an ISDN line with 2 data "bearer" channels (DS0 - 64 kbit/s each). Using ISDN terminal adapters (erroneously called modems), it is possible to bond together 2 or more separate ISDN-BRI lines to reach speeds of 256 kbit/s or more. The ISDN channel bonding technology has been used for video conference applications and high-speed data transmission.

Primary rate ISDN, known as ISDN-PRI, is an ISDN line with 23 DS0 channels and a total speed of 1,544 kbit/s (US standard). An ISDN E1 line (European standard) has 30 DS0 channels and a total speed of 2,048 kbit/s. Because ISDN is a telephone-based product, much of the terminology and many physical aspects of the line are shared with the ISDN-PRI used for voice services. An ISDN line can therefore be "provisioned" for voice or data and many different options, depending on the equipment being used at any particular installation and on the offerings of the telephone company's central office switch. Most ISDN-PRIs are used for telephone voice communication using large PBX systems, rather than for data; one obvious exception is that ISPs usually have ISDN-PRIs for handling ISDN data and modem calls.

It is mainly of historical interest that many of the earlier ISDN data lines used 56 kbit/s rather than 64 kbit/s "B" channels of data. This caused ISDN-BRI to be offered at both 128 kbit/s and 112 kbit/s rates, depending on the central office's switching equipment.
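
The bit rates quoted in this section follow directly from the per-channel figures; here is a brief sketch of the channel arithmetic (the 8 kbit/s framing figure for a DS-1 is an added detail, not stated in the text above):

    # Channel arithmetic behind the ISDN figures quoted above.
    B_CHANNEL = 64          # kbit/s per "bearer" (DS0) channel
    LEGACY_B_CHANNEL = 56   # kbit/s on older central-office switches

    print("BRI, 2 x 64 kbit/s:", 2 * B_CHANNEL, "kbit/s")             # 128
    print("BRI, 2 x 56 kbit/s:", 2 * LEGACY_B_CHANNEL, "kbit/s")      # 112
    print("PRI (US), 24 x 64 kbit/s:", 24 * B_CHANNEL, "kbit/s")      # 1536 payload
    # The 1,544 kbit/s DS-1 line rate adds 8 kbit/s of framing overhead.
    print("E1 (Europe), 32 x 64 kbit/s:", 32 * B_CHANNEL, "kbit/s")   # 2048 line rate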

Advantages:

  1. Constant data speed at 64 kbit/s for each DS0 channel.
  2. Two way high speed symmetric data transmission, unlike ADSL.
  3. One of the data channels can be used for phone conversation without disturbing the data transmission through the other data channel. When a phone call is ended, the bearer channel can immediately dial and re-connect itself to the data call.
  4. Call setup is very quick.
  5. Low latency
  6. ISDN Voice clarity is unmatched by other phone services.
  7. Caller ID is almost always available for no additional fee.
  8. Maximum distance from the central office is much greater than it is for DSL.
  9. When using ISDN-BRI, there is the possibility of using the low-bandwidth 16 kbit/s "D" channel for packet data and for always-on capabilities.

Disadvantages:

  1. ISDN offerings are dwindling in the marketplace due to the widespread use of faster and cheaper alternatives.
  2. ISDN routers, terminal adapters ("modems"), and telephones are more expensive than ordinary POTS equipment, like dial-up modems.
  3. ISDN provisioning can be complicated due to the great number of options available.
  4. ISDN users must dial in to a provider that offers ISDN Internet service, which means that the call could be disconnected.
  5. ISDN is billed as a phone line, to which is added the bill for Internet ISDN access.
  6. "Always on" data connections are not available in all locations.
  7. Some telephone companies charge unusual fees for ISDN, including call setup fees, per minute fees, and higher rates than normal for other services.

T-1/DS-1

These are highly regulated services traditionally intended for businesses. They are managed through Public Service Commissions (PSCs) in each state, must be fully defined in PSC tariff documents, and have management rules dating back to the early 1980s that still refer to teletypes as potential connection devices. As such, T-1 services have very strict and rigid service requirements, which drive up the provider's maintenance costs and may require a technician on standby 24 hours a day to repair the line if it malfunctions. (In comparison, ISDN and DSL are not regulated by the PSCs at all.) Because of their expensive and regulated nature, T-1 lines are normally installed under a written agreement, with a contract term typically of one to three years. However, there are usually few restrictions on an end-user's use of a T-1: uptime and bandwidth may be guaranteed, quality of service may be supported, and blocks of static IP addresses are commonly included.

Since the T-1 was originally conceived for voice transmission, and voice T-1s are still widely used in businesses, the terminology can confuse the uninitiated subscriber. It is often best to refer to the type of T-1 being considered, using the appropriate "data" or "voice" prefix to differentiate between the two. A voice T-1 terminates at a phone company's central office (CO) for connection to the PSTN; a data T-1 terminates at a point of presence (POP) or datacenter. The T-1 line between a customer's premises and the POP or CO is called the local loop. The owner of the local loop need not be the owner of the network at the POP where the T-1 connects to the Internet, so a T-1 subscriber may hold separate contracts with these two organizations.

The nomenclature for a T-1 varies widely; it is cited in some circles as a DS-1, a T1.5, a T1, or a DS1. Some of these usages try to distinguish different aspects of the line, treating the data standard as the DS-1 and the physical structure of the trunk line as the T-1 or T-1.5. T-1s are also called leased lines, although that term is sometimes reserved for data speeds under 1.5 Mbit/s, so a T-1 may be included in or excluded from the term "leased line". Whatever it is called, it is closely related to other high-speed access methods, including T-3, SONET OC-3, and other T-carrier and Optical Carrier services. Additionally, more than one T-1 may be aggregated to produce an nxT-1, such as a 4xT-1, which has exactly 4 times the bandwidth of a T-1.

When a T-1 is installed, there are a number of choices to be made: the carrier, the location of the demarc, the type of channel service unit (CSU) or data service unit (DSU) used, the WAN IP router used, the speeds chosen, and so on. Specialized WAN routers are used with T-1 lines to route Internet or VPN data onto the T-1 line from the subscriber's packet-based (TCP/IP) network using customer premises equipment (CPE). The CPE typically consists of a CSU/DSU that converts the DS-1 data stream of the T-1 to a TCP/IP packet data stream for use on the customer's Ethernet LAN. Many T-1 providers optionally maintain and/or sell the CPE as part of the service contract, which can affect the demarcation point and the ownership of the router, CSU, or DSU.

Although a T-1 has a maximum of 1.544 Mbit/s, a fractional T-1 might be offered which only uses an integer multiple of 128 kbit/s for bandwidth. In this manner, a customer might only purchase 1/12th or 1/3 of a T-1, which would be 128 kbit/s and 512 kbit/s, respectively.

T-1 and fractional T-1 data lines are symmetric, meaning that their upload and download speeds are the same.
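
A quick check of the fractional T-1 arithmetic above (this sketch assumes the 24-channel, 64 kbit/s view of the T-1 payload, which is an added assumption rather than a statement from the text):

    # Fractional T-1 arithmetic, assuming a 24 x 64 kbit/s channelized payload.
    ds0_kbps = 64
    t1_payload_kbps = 24 * ds0_kbps                            # 1536 kbit/s usable
    print("1/12 of a T-1:", t1_payload_kbps // 12, "kbit/s")   # 128
    print("1/3 of a T-1: ", t1_payload_kbps // 3, "kbit/s")    # 512
    print("4xT-1 line rate:", 4 * 1.544, "Mbit/s")             # 6.176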

Wired Ethernet

Where available, Ethernet connection to the Internet indicates very fast access. However, just because Ethernet is offered does not mean that the full 10, 100, or 1000 Mbit/s connection can be used for direct Internet access. In a college dormitory, for example, the 100 Mbit/s Ethernet access might be fully available to on-campus networks, while Internet access speeds might be closer to 4xT-1 speed (6 Mbit/s). When a broadband connection is shared among users in a building, the access speed of the leased line into the building governs each end-user's speed.

However, in certain locations, true Ethernet broadband access might be available. This would most commonly be the case at a POP or a datacenter, and not at a typical residence or business. When Ethernet Internet access is offered, it could be fiber-optic or copper twisted pair, and the speed will conform to standard Ethernet speeds of up to 10 Gbit/s. The primary advantage is that no special hardware is needed for Ethernet. Ethernet also has a very low latency.

Rural broadband

One of the great challenges of broadband is providing service to potential customers in areas of low population density, such as farmers and ranchers. In cities, where population density is high, a service provider can easily recover equipment costs, but each rural customer may require thousands of dollars of equipment to get connected. A similar problem existed a century ago when electric power distribution was new: cities were the first to receive electric lighting, as early as 1880, while in the United States some remote rural areas were not electrified until the 1940s, and even then only with the help of federally funded programs like the Tennessee Valley Authority (TVA).

Several rural broadband solutions exist, though each has its own pitfalls and limitations. Some choices are better than others, but much depends on how proactive the local phone company is about upgrading its rural technology.

Satellite Internet

Main article: Satellite Internet

This employs a satellite in geostationary orbit to relay data between the satellite company and each customer. Satellite Internet is usually among the most expensive ways of gaining broadband Internet access, although in rural areas its only competition may be cellular broadband. Costs have been coming down in recent years, however, to the point that it is becoming more competitive with other high-speed options.

Satellite Internet also has a high latency problem, caused by the signal having to travel 35,000 km (22,000 miles) out into space to the satellite and back to Earth again. The signal delay can be as much as 500 to 900 milliseconds, which makes the service unsuitable for applications requiring real-time user input, such as certain multiplayer Internet games and first-person shooters. Many games can still be played, but the scope is largely limited to real-time strategy or turn-based games. Live interactive access to a distant computer can also suffer from the problems caused by high latency. These problems are more than tolerable for basic email access and web browsing, and in most cases are barely noticeable.

There is no simple way around this problem. The delay is primarily due to the speed of light, which is 300,000 km/second (186,000 miles per second). Even if all other signaling delays could be eliminated, it would still take a radio signal about 233 milliseconds to cover the roughly 70,000 km (44,000 miles) from the user up to the satellite and back down to the satellite company's ground station.

Since the satellite link is used for two-way communication, the total distance for a request and its reply is about 140,000 km (88,000 miles), which takes a radio wave roughly 466 ms to travel. Factoring in normal delays from other network sources gives a typical connection latency of 500-700 ms. This is far worse than what most dial-up modem users experience, typically only 150-200 ms of total latency.
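
The round-trip figure follows directly from the distances quoted above; a small sketch of the arithmetic (geostationary altitude rounded to 35,000 km, as in the text):

    # Propagation-delay arithmetic for geostationary satellite Internet.
    SPEED_OF_LIGHT_KM_S = 300_000
    altitude_km = 35_000                       # rounded, as in the text

    one_way_km = 2 * altitude_km               # user -> satellite -> ground station
    round_trip_km = 2 * one_way_km             # request out and reply back

    print("One-way delay:    %.0f ms" % (one_way_km / SPEED_OF_LIGHT_KM_S * 1000))     # ~233 ms
    print("Round-trip delay: %.0f ms" % (round_trip_km / SPEED_OF_LIGHT_KM_S * 1000))  # ~467 ms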

Most satellite Internet providers also have a Fair Access Policy (FAP). Perhaps one of the largest drawbacks of satellite Internet, these policies usually throttle a user's throughput to dial-up speeds after an "invisible wall" of transfer volume is hit (usually around 200 MB a day). The throttling usually lasts for 24 hours, after which the user's throughput is restored to the purchased tier. This makes bandwidth-intensive activities, such as P2P file sharing and newsgroup binary downloading, nearly impossible to complete in a reasonable amount of time.

Advantages

  1. True global broadband Internet access availability
  2. Mobile connection to the Internet (with some providers)

Disadvantages

  1. Very high latency compared to other broadband services, especially 2-way satellite service
  2. Unreliable: drop-outs are common during travel, inclement weather, and during sunspot activity
  3. The narrow-beam highly directional antenna must be accurately pointed to the satellite orbiting overhead
  4. The Fair Access Policy limits heavy usage
  5. VPN use is discouraged, problematic, and/or restricted with satellite broadband, although available at a price
  6. One-way satellite service requires the use of a modem or other data uplink connection
  7. VoIP is not supported.
  8. Satellite dishes are large. Although most of them employ plastic to reduce weight, they are typically between 80 and 120 cm (30 to 48 inches) in diameter.

Cellular Broadband

Cellular telephones are becoming more and more capable as Internet browsers. The widespread use of cellular phones in most areas has allowed cellular telephone networks to expand quickly into broadband Internet service networks. Since the cellular phone towers are already in place, cellular broadband access is rapidly becoming a popular means to access the Internet, with or without a cell phone.

Most of the cell phones sold today have some kind of support for Internet access. Broadband access is mainly concentrated in the cities at this time (2007), but all of the major U.S. carriers intend to expand their broadband offerings. New broadband technologies such as 3G EVDO Rev. 0 and Rev. A are being deployed for CDMA phones, and HSDPA for GSM phones in the US. Currently (2007), GSM phones in the US most often use the lower-speed EDGE system, but HSDPA deployment is expected to catch up soon.

This means that, for now, nationwide broadband cellular in the U.S. is offered only by carriers that use EVDO or HSDPA, giving customers a typical 400-700 kbit/s download speed. With cellular speed ratings, the companies always specify a range of typical speeds, because congested cellular networks mean lower download speeds. They do not highlight the fact that the technology is capable of 2.4 Mbit/s burst download rates, because real-world throughput rarely approaches that figure.

Since cellular networks often cover large areas of the nation, many traveling people prefer cellular Internet access to other technologies such as WiFi wireless and satellite. Although some satellite services allow end-users to reposition their dish antenna, there are considerable drawbacks to pointing a large satellite dish on a mobile platform (such as an automobile or vessel). Cellular service can normally be received using a small omnidirectional antenna.

Because many people need to connect computer equipment to the Internet, and not just their cell phone, cellular broadband access is available with this in mind. A user with a single computer can access the Internet by tethering their cell phone to their laptop or PC, normally using a USB connection. There are also Cardbus, ExpressCard, and USB modems available that can perform a similar function but require no cell phone. Some of these modem cards are compatible with cellular broadband routers, which allow more than one computer to be connected to the Internet using one cellular connection.

Advantages

  1. The only broadband connection available on many cell phones and PDAs
  2. Mobile wireless connection to the Internet
  3. Available in all metropolitan areas, most large cities, and along major highways throughout the U.S.
  4. No need to aim an antenna in most cases
  5. The antenna is extremely small compared to a satellite dish (which is typically ~100 cm or ~36 inches in diameter)
  6. Lower latency compared to satellite Internet
  7. Higher availability than WiFi "Hot Spots"
  8. A traveler who already has cellular broadband will not need to pay different WiFi Hot Spot providers for access.

Disadvantages

  1. Unreliable: drop-outs are common during travel and during inclement weather
  2. Not truly nationwide service
  3. Speed varies widely throughout the day, sometimes falling well below the 400 kbit/s target during peak times
  4. Asymmetric service: the upload rate is always much slower than the download rate.
  5. High latency compared to other broadband services

Remote DSL

This allows a service provider to set up DSL hardware out in the country in a weatherproof enclosure. However, setup costs can be quite high since the service provider may need to install fiber-optic cable to the remote location. Also, the remote site has the same distance limits as the metropolitan service, and can only serve an island of customers along the trunk line within a radius of about 2 km (7000 ft).

DSL repeater

This is a very new technology which allows DSL to travel longer distances to remote customers. One version of the repeater is installed at approximately 3 km (10,000 ft) intervals along the trunk line, and strengthens and cleans up the DSL signal so it can travel another 3 km (10,000 ft).

Power-line Internet

This is a new service still in its infancy that may eventually permit broadband Internet data to travel down standard high-voltage power lines. However, the system has a number of complex issues, the primary one being that power lines are inherently a very noisy environment. Every time a device turns on or off, it introduces a pop or click into the line. Energy-saving devices often introduce noisy harmonics into the line. The system must be designed to deal with these natural signaling disruptions and work around them.

Broadband over power lines (BPL), also known as power line communication, has developed faster in Europe than in the US due to a historical difference in power system design philosophies. Nearly all large power grids transmit power at high voltages to reduce transmission losses, then use step-down transformers near the customer to reduce the voltage. Since BPL signals cannot readily pass through transformers, repeaters must be attached to the transformers. In the US, it is common for a small transformer hung from a utility pole to serve a single house; in Europe, it is more common for a somewhat larger transformer to serve 10 to 100 houses. For delivering power to customers this design difference matters little, but it means that delivering BPL over the power grid of a typical US city will require an order of magnitude more repeaters than in a comparable European city.

The second major issue is signal strength and operating frequency. The system is expected to use frequencies in the 10 to 30 MHz range, which has been used for decades by licensed amateur radio operators, as well as international shortwave broadcasters and a variety of communications systems (military, aeronautical, etc.). Power lines are unshielded and will act as transmitters for the signals they carry, and have the potential to completely wipe out the usefulness of the 10 to 30 MHz range for shortwave communications purposes.

Wireless ISP

This typically employs the current low-cost 802.11 Wi-Fi radio systems to link up remote locations over great distances, but can use other higher-power radio communications systems as well.

Traditional 802.11b was licensed for omnidirectional service spanning only 100-150 meters (300-500 ft). By focusing the signal down to a narrow beam with a Yagi antenna it can instead operate reliably over a distance of many miles.

Rural wireless-ISP installations are typically not commercial in nature and are instead a patchwork of systems built up by hobbyists mounting antennas on radio masts and towers, agricultural storage silos, very tall trees, or whatever other tall objects are available. There are, however, a number of companies that provide this service commercially; a map of wireless Internet service providers (WISPs) in the USA is publicly available.

iBlast

iBlast was the brand name for a proposed high-speed (7 Mbit/s), one-way digital data transmission technology from digital TV stations to users, developed between June 2000 and October 2005.

Advantages:

  1. Low-cost, high-speed data transmission from a TV station to users; the technology could be used to deliver websites and files from the Internet.

Disadvantages:

  1. One-way data transmission; it must be combined with another method of transmitting data from users back to the TV station.
  2. Privacy/security concerns.
  3. The 8VSB tuner needed to receive the iBlast signal was not built into many consumer electronic devices.

In the end, the disadvantages outweighed the advantages: the glut of fiber-optic capacity that followed the collapse of the Internet bubble drove the cost of transmission so low that an ancillary service such as this was unnecessary, and the company folded at the end of 2005. The partner television stations, as well as over 500 additional television stations not part of the iBlast Network, continue to transmit separate digital signals as mandated by the Telecommunications Act of 1996.

WorldSpace

WorldSpace is a digital satellite radio network based in Washington, DC. Its satellites cover most of Asia and Europe plus all of Africa. Besides digital audio, users can receive a one-way, high-speed digital data stream (150 kbit/s) from the satellite.

Advantages:

  1. Low-cost (about US$100) receivers that combine a digital radio receiver and a data receiver; the technology can be used to deliver websites and files from the Internet.
  2. Access from remote places in Asia and Africa.

Disadvantages:

  1. One-way data transmission; it must be combined with another method of transmitting data from users back to WorldSpace headquarters.
  2. Privacy/security concerns.

Broadband worldwide

See also: List of countries by broadband users for June 2006 statistics.

Broadband subscribers per 100 inhabitants, by technology, December 2006, in the OECD

Rank  Country  DSL  Cable  Other  Total  Total subscribers
1 Denmark 17.4% 9.0% 2.8% 29.3% 1,590,539
2 Netherlands 17.2% 11.1% 0.5% 28.8% 4,705,829
3 Iceland 26.5% 0.0% 0.7% 27.3% 80,672
4 Korea 13.2% 8.8% 4.5% 26.4% 12,770,911
5 Switzerland 16.9% 9.0% 0.4% 26.2% 1,945,358
6 Finland 21.7% 3.1% 0.2% 25.0% 1,309,800
7 Norway 20.4% 3.8% 0.4% 24.6% 1,137,697
8 Sweden 14.4% 4.3% 4.0% 22.7% 2,046,222
9 Canada 10.8% 11.5% 0.1% 22.4% 7,161,872
10 United Kingdom 14.6% 4.9% 0.0% 19.4% 11,622,929
11 Belgium 11.9% 7.4% 0.0% 19.3% 2,025,112
12 United States 8.0% 9.8% 1.4% 19.2% 56,502,351
13 Japan 11.3% 2.7% 4.9% 19.0% 24,217,012
14 Luxembourg 16.0% 1.9% 0.0% 17.9% 81,303
15 Austria 11.2% 6.3% 0.2% 17.7% 1,460,000
16 France 16.7% 1.0% 0.0% 17.7% 11,105,000
17 Australia 13.9% 2.9% 0.6% 17.4% 3,518,100
18 Germany 14.7% 0.3% 0.1% 15.1% 12,444,600
19 Spain 10.5% 3.1% 0.1% 13.6% 5,917,082
20 Italy 12.6% 0.0% 0.6% 13.2% 7,697,249
21 Portugal 7.9% 5.0% 0.0% 12.9% 1,355,602
22 New Zealand 10.7% 0.5% 0.6% 11.7% 479,000
23 Czech Republic 3.9% 2.0% 3.5% 9.4% 962,000
24 Ireland 6.8% 1.0% 1.4% 9.2% 372,300
25 Hungary 4.8% 2.9% 0.1% 7.8% 791,555
26 Poland 3.9% 1.3% 0.1% 5.3% 2,032,700
27 Turkey 2.9% 0.0% 0.0% 3.0% 2,128,600
28 Slovak Republic 2.2% 0.5% 0.2% 2.9% 155,659
29 Mexico 2.1% 0.7% 0.0% 2.8% 2,950,988
30 Greece 2.7% 0.0% 0.0% 2.7% 298,222


See also:

Broadband technologies

Broadband implementations

Broadband applications

Wednesday, March 14, 2007

The Internet

What Is The Internet (And What Makes It Work) - December, 1999
By Robert E. Kahn and Vinton G. Cerf

This paper was prepared by the authors at the request of the Internet Policy Institute (IPI), a non-profit organization based in Washington, D.C., for inclusion in their upcoming series of Internet related papers. It is a condensation of a longer paper in preparation by the authors on the same subject. Many topics of potential interest were not included in this condensed version because of size and subject matter constraints. Nevertheless, the reader should get a basic idea of the Internet, how it came to be, and perhaps even how to begin thinking about it from an architectural perspective. This will be especially important to policy makers who need to distinguish the Internet as a global information system apart from its underlying communications infrastructure.

INTRODUCTION

As we approach a new millennium, the Internet is revolutionizing our society, our economy and our technological systems. No one knows for certain how far, or in what direction, the Internet will evolve. But no one should underestimate its importance.

Over the past century and a half, important technological developments have created a global environment that is drawing the people of the world closer and closer together. During the industrial revolution, we learned to put motors to work to magnify human and animal muscle power. In the new Information Age, we are learning to magnify brainpower by putting the power of computation wherever we need it, and to provide information services on a global basis. Computer resources are infinitely flexible tools; networked together, they allow us to generate, exchange, share and manipulate information in an uncountable number of ways. The Internet, as an integrating force, has melded the technology of communications and computing to provide instant connectivity and global information services to all its users at very low cost.

Ten years ago, most of the world knew little or nothing about the Internet. It was the private enclave of computer scientists and researchers who used it to interact with colleagues in their respective disciplines. Today, the Internet’s magnitude is thousands of times what it was only a decade ago. It is estimated that about 60 million host computers on the Internet today serve about 200 million users in over 200 countries and territories. Today’s telephone system is still much larger: about 3 billion people around the world now talk on almost 950 million telephone lines (about 250 million of which are actually radio-based cell phones). But by the end of the year 2000, the authors estimate there will be at least 300 million Internet users. Also, the total numbers of host computers and users have been growing at about 33% every six months since 1988 – or roughly 80% per year. The telephone service, in comparison, grows an average of about 5-10% per year. That means if the Internet keeps growing steadily the way it has been growing over the past few years, it will be nearly as big as today’s telephone system by about 2006.


THE EVOLUTION OF THE INTERNET

The underpinnings of the Internet are formed by the global interconnection of hundreds of thousands of otherwise independent computers, communications entities and information systems. What makes this interconnection possible is the use of a set of communication standards, procedures and formats in common among the networks and the various devices and computational facilities connected to them. The procedures by which computers communicate with each other are called "protocols." While this infrastructure is steadily evolving to include new capabilities, the protocols initially used by the Internet are called the "TCP/IP" protocols, named after the two protocols that formed the principal basis for Internet operation.

On top of this infrastructure is an emerging set of architectural concepts and data structures for heterogeneous information systems that renders the Internet a truly global information system. In essence, the Internet is an architecture, although many people confuse it with its implementation. When the Internet is looked at as an architecture, it manifests two different abstractions. One abstraction deals with communications connectivity, packet delivery and a variety of end-end communication services. The other abstraction deals with the Internet as an information system, independent of its underlying communications infrastructure, which allows creation, storage and access to a wide range of information resources, including digital objects and related services at various levels of abstraction.

Interconnecting computers is an inherently digital problem. Computers process and exchange digital information, meaning that they use a discrete mathematical “binary” or “two-valued” language of 1s and 0s. For communication purposes, such information is mapped into continuous electrical or optical waveforms. The use of digital signaling allows accurate regeneration and reliable recovery of the underlying bits. We use the terms “computer,” “computer resources” and “computation” to mean not only traditional computers, but also devices that can be controlled digitally over a network, information resources such as mobile programs and other computational capabilities.

The telephone network started out with operators who manually connected telephones to each other through “patch panels” that accepted patch cords from each telephone line and electrically connected them to one another through the panel, which operated, in effect, like a switch. The result was called circuit switching, since at its conclusion, an electrical circuit was made between the calling telephone and the called telephone. Conventional circuit switching, which was developed to handle telephone calls, is inappropriate for connecting computers because it makes limited use of the telecommunication facilities and takes too long to set up connections. Although reliable enough for voice communication, the circuit-switched voice network had difficulty delivering digital information without errors.

For digital communications, packet switching is a better choice, because it is far better suited to the typically "burst" communication style of computers. Computers that communicate typically send out brief but intense bursts of data, then remain silent for a while before sending out the next burst. These bursts are communicated as packets, which are very much like electronic postcards. The postcards, in reality packets, are relayed from computer to computer until they reach their destination. The special computers that perform this forwarding function are called variously "packet switches" or "routers" and form the equivalent of many bucket brigades spanning continents and oceans, moving buckets of electronic postcards from one computer to another. Together these routers and the communication links between them form the underpinnings of the Internet.

Without packet switching, the Internet would not exist as we now know it. Going back to the postcard analogy, postcards can get lost. They can be delivered out of order, and they can be delayed by varying amounts. The same is true of Internet packets, which, on the Internet, can even be duplicated. The Internet Protocol is the postcard layer of the Internet. The next higher layer of protocol, TCP, takes care of re-sending the “postcards” to recover packets that might have been lost, and putting packets back in order if they have become disordered in transit.
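
As a toy illustration of the reordering and duplicate removal described above (a deliberately simplified sketch; real TCP numbers bytes rather than whole packets and also handles retransmission of lost data):

    # Toy model: reassemble out-of-order, possibly duplicated "postcards"
    # (packets) by sequence number, as TCP does conceptually.
    def reassemble(packets):
        unique = {seq: data for seq, data in packets}        # discard duplicates
        return "".join(data for _, data in sorted(unique.items()))

    received = [(2, "lo, "), (1, "Hel"), (3, "wor"), (2, "lo, "), (4, "ld!")]
    print(reassemble(received))   # -> Hello, world!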

Of course, packet switching is about a billion times faster than the postal service or a bucket brigade would be. It also has to operate over many different communications systems, or substrata. The authors designed the basic architecture to be so simple and undemanding that it could work with most communication services. Many organizations, including commercial ones, carried out research using the TCP/IP protocols in the 1970s. Email was in steady use over the nascent Internet during that time, and remains so to the present. It was not until 1994 that the general public began to be aware of the Internet by way of the World Wide Web application, particularly after Netscape Communications was formed and released its browser and associated server software.

Thus, the evolution of the Internet was based on two technologies and a research dream. The technologies were packet switching and computer technology, which, in turn, drew upon the underlying technologies of digital communications and semiconductors. The research dream was to share information and computational resources. But that is simply the technical side of the story. Equally important in many ways were the other dimensions that enabled the Internet to come into existence and flourish. This aspect of the story starts with cooperation and far-sightedness in the U.S. Government, which is often derided for lack of foresight but is a real hero in this story.

It leads on to the enthusiasm of private sector interests to build upon the government funded developments to expand the Internet and make it available to the general public. Perhaps most important, it is fueled by the development of the personal computer industry and significant changes in the telecommunications industry in the 1980s, not the least of which was the decision to open the long distance market to competition. The role of workstations, the Unix operating system and local area networking (especially the Ethernet) are themes contributing to the spread of Internet technology in the 1980s into the research and academic community from which the Internet industry eventually emerged.

Many individuals have been involved in the development and evolution of the Internet covering a span of almost four decades if one goes back to the early writings on the subject of computer networking by Kleinrock [i], Licklider [ii], Baran [iii], Roberts [iv], and Davies [v]. The ARPANET, described below, was the first wide-area computer network. The NSFNET, which followed more than a decade later under the leadership of Erich Bloch, Gordon Bell, Bill Wulf and Steve Wolff, brought computer networking into the mainstream of the research and education communities. It is not our intent here to attempt to attribute credit to all those whose contributions were central to this story, although we mention a few of the key players. A readable summary on the history of the Internet, written by many of the key players, may be found at www.isoc.org/internet/history. [vi]

From One Network to Many: The role of DARPA

Modern computer networking technologies emerged in the early 1970s. In 1969, the U.S. Defense Advanced Research Projects Agency (variously called ARPA and DARPA), an agency within the Department of Defense, commissioned a wide-area computer network called the ARPANET. This network made use of the new packet switching concepts for interconnecting computers and initially linked computers at universities and other research institutions in the United States and in selected NATO countries. At that time, the ARPANET was essentially the only realistic wide-area computer network in existence, with a base of several dozen organizations, perhaps twice that number of computers and numerous researchers at those sites. The program was led at DARPA by Larry Roberts. The packet switches were built by Bolt Beranek and Newman (BBN), a DARPA contractor. Others directly involved in the ARPANET activity included the authors, Len Kleinrock, Frank Heart, Howard Frank, Steve Crocker, Jon Postel and many, many others in the ARPA research community.

Back then, the methods of internetworking (that is interconnecting computer networks) were primitive or non-existent. Two organizations could interwork technically by agreeing to use common equipment, but not every organization was interested in this approach. Absent that, there was jury-rigging, special case development and not much else. Each of these networks stood on its own with essentially no interaction between them – a far cry from today’s Internet.

In the early 1970s, ARPA began to explore two alternative applications of packet switching technology based on the use of synchronous satellites (SATNET) and ground-based packet radio (PRNET). The decision by Kahn to link these two networks and the ARPANET as separate and independent networks resulted in the creation of the Internet program and the subsequent collaboration with Cerf. These two systems differed in significant ways from the ARPANET so as to take advantage of the broadcast and wireless aspects of radio communications. The strategy that had been adopted for SATNET originally was to embed the SATNET software into an ARPANET packet switch, and interwork the two networks through memory-to-memory transfers within the packet switch. This approach, in place at the time, was to make SATNET an "embedded" network within the ARPANET; users of the network would not even need to know of its existence. The technical team at Bolt Beranek and Newman (BBN), having built the ARPANET switches and now building the SATNET software, could easily produce the necessary patches to glue the programs together in the same machine. Indeed, this is what they were under contract with DARPA to provide. By embedding each new network into the ARPANET, a seamless internetworked capability was possible, but with no realistic possibility of unleashing the entrepreneurial networking spirit that has manifested itself in modern-day Internet developments. A new approach was in order.

The Packet Radio (PRNET) program had not yet gotten underway so there was ample opportunity to change the approach there. In addition, up until then, the SATNET program was only an equipment development activity. No commitments had been obtained for the use of actual satellites or ground stations to access them. Indeed, since there was no domestic satellite industry in the U.S. then, the only two viable alternatives were the use of Intelsat or U.S. military satellites. The time for a change in strategy, if it was to be made, was then.


THE INTERNET ARCHITECTURE

The authors created an architecture for interconnecting independent networks that could then be federated into a seamless whole without changing any of the underlying networks. This was the genesis of the Internet as we know it today.

In order to work properly, the architecture required a global addressing mechanism (or Internet address) to enable computers on any network to reference and communicate with computers on any other network in the federation. Internet addresses fill essentially the same role as telephone numbers do in telephone networks. The design of the Internet assumed first that the individual networks could not be changed to accommodate new architectural requirements; but this was largely a pragmatic assumption to facilitate progress. The networks also had varying degrees of reliability and speed. Host computers would have to be able to put disordered packets back into the correct order and discard duplicate packets that had been generated along the way. This was a major change from the virtual circuit-like service provided by ARPANET and by then contemporary commercial data networking services such as Tymnet and Telenet. In these networks, the underlying network took responsibility for keeping all information in order and for re-sending any data that might have been lost. The Internet design made the computers responsible for tending to these network problems.

A key architectural construct was the introduction of gateways (now called routers) between the networks to handle the disparities such as different data rates, packet sizes, error conditions, and interface specifications. The gateways would also check the destination Internet addresses of each packet to determine the gateway to which it should be forwarded. These functions would be combined with certain end-end functions to produce the reliable communication from source to destination. A draft paper by the authors describing this approach was given at a meeting of the International Network Working Group in 1973 in Sussex, England and the final paper was subsequently published by the Institute for Electrical and Electronics Engineers, the leading professional society for the electrical engineering profession, in its Transactions on Communications in May, 1974 [vii]. The paper described the TCP/IP protocol.
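
As a minimal sketch of the forwarding decision described above (the table and addresses are invented for illustration, and the longest-prefix matching shown here reflects modern CIDR routing rather than the gateways of the period):

    # Minimal sketch of a gateway/router forwarding decision: look up a packet's
    # destination address in a table of prefixes and choose the next hop.
    import ipaddress

    forwarding_table = [
        (ipaddress.ip_network("10.1.0.0/16"), "gateway-A"),
        (ipaddress.ip_network("10.2.0.0/16"), "gateway-B"),
        (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),
    ]

    def next_hop(destination):
        addr = ipaddress.ip_address(destination)
        matches = [(net, hop) for net, hop in forwarding_table if addr in net]
        return max(matches, key=lambda m: m[0].prefixlen)[1]   # most specific match

    print(next_hop("10.2.33.7"))   # gateway-B
    print(next_hop("192.0.2.1"))   # default-gateway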

DARPA contracted with Cerf's group at Stanford to carry out the initial detailed design of the TCP software and, shortly thereafter, with BBN and University College London to build independent implementations of the TCP protocol (as it was then called – it was later split into TCP and IP) for different machines. BBN also had a contract to build a prototype version of the gateway. These three sites collaborated in the development and testing of the initial protocols on different machines. Cerf, then a professor at Stanford, provided the day-to-day leadership in the initial TCP software design and testing. BBN deployed the gateways between the ARPANET and the PRNET and also with SATNET. During this period, under Kahn's overall leadership at DARPA, the initial feasibility of the Internet Architecture was demonstrated.

The TCP/IP protocol suite was developed and refined over a period of four more years and, in 1980, it was adopted as a standard by the U.S. Department of Defense. On January 1, 1983 the ARPANET converted to TCP/IP as its standard host protocol. Gateways (or routers) were used to pass packets to and from host computers on “local area networks.” Refinement and extension of these protocols and many others associated with them continues to this day by way of the Internet Engineering Task Force [viii].


GOVERNMENT’S HISTORICAL ROLE

Other political and social dimensions that enabled the Internet to come into existence and flourish are just as important as the technology upon which it is based. The federal government played a large role in creating the Internet, as did the private sector interests that made it available to the general public. The development of the personal computer industry and significant changes in the telecommunications industry also contributed to the Internet’s growth in the 1980s. In particular, the development of workstations, the Unix operating system, and local area networking (especially the Ethernet) contributed to the spread of the Internet within the research community from which the Internet industry eventually emerged.

The National Science Foundation and others

In the late 1970s, the National Science Foundation (NSF) became interested in the impact of the ARPANET on computer science and engineering. NSF funded the Computer Science Network (CSNET), which was a logical design for interconnecting universities that were already on the ARPANET and those that were not. Telenet was used for sites not connected directly to the ARPANET and a gateway was provided to link the two. Independent of NSF, another initiative called BITNET ("Because it's there" Net) [ix] provided campus computers with email connections to the growing ARPANET. Finally, AT&T Bell Laboratories development of the Unix operating system led to the creation of a grass-roots network called USENET [x], which rapidly became home to thousands of “newsgroups” where Internet users discussed everything from aerobics to politics and zoology.

In the mid 1980s, NSF decided to build a network called NSFNET to provide better computer connections for the science and education communities. The NSFNET made possible the involvement of a large segment of the education and research community in the use of high speed networks. A consortium consisting of MERIT (a University of Michigan non-profit network services organization), IBM and MCI Communications won a 1987 competition for the contract to handle the network’s construction. Within two years, the newly expanded NSFNET had become the primary backbone component of the Internet, augmenting the ARPANET until it was decommissioned in 1990. At about the same time, other parts of the U.S. government had moved ahead to build and deploy networks of their own, including NASA and the Department of Energy. While these groups originally adopted independent approaches for their networks, they eventually decided to support the use of TCP/IP.

The developers of the NSFNET, led by Steve Wolff who had the direct responsibility for the NSFNET program, also decided to create intermediate level networks to serve research and education institutions and, more importantly, to allow networks that were not commissioned by the U.S. government to connect to the NSFNET. This strategy reduced the overall load on the backbone network operators and spawned a new industry: Internet Service Provision. Nearly a dozen intermediate level networks were created, most with NSF support, [xi] some, such as UUNET, with Defense support, and some without any government support. The NSF contribution to the evolution of the Internet was essential in two respects. It opened the Internet to many new users and, drawing on the properties of TCP/IP, structured it so as to allow many more network service providers to participate.

For a long time, the federal government did not allow organizations to connect to the Internet to carry out commercial activities. By 1988, it was becoming apparent, however, that the Internet's growth and use in the business sector might be seriously inhibited by this restriction. That year, CNRI requested permission from the Federal Networking Council to interconnect the commercial MCI Mail electronic mail system to the Internet as part of a general electronic mail interconnection experiment. Permission was given and the interconnection was completed by CNRI, under Cerf’s direction, in the summer of 1989. Shortly thereafter, two of the then non-profit Internet Service Providers (UUNET [xii] and NYSERNET) produced new for-profit companies (UUNET and PSINET [xiii] respectively). In 1991, they were interconnected with each other and CERFNET [xiv]. Commercial pressure to alleviate restrictions on interconnections with the NSFNET began to mount.

In response, Congress passed legislation allowing NSF to open the NSFNET to commercial usage. Shortly thereafter, NSF determined that its support for NSFNET might not be required in the longer term and, in April 1995, NSF ceased its support for the NSFNET. By that time, many commercial networks were in operation and provided alternatives to NSFNET for national level network services. Today, approximately 10,000 Internet Service Providers (ISPs) are in operation. Roughly half the world's ISPs currently are based in North America and the rest are distributed throughout the world.


A DEFINITION FOR THE INTERNET

The authors feel strongly that efforts should be made at top policy levels to define the Internet. It is tempting to view it merely as a collection of networks and computers. However, as indicated earlier, the authors designed the Internet as an architecture that provided for both communications capabilities and information services. Governments are passing legislation pertaining to the Internet without ever specifying to what the law applies and to what it does not apply. In U.S. telecommunications law, distinctions are made between cable, satellite broadcast and common carrier services. These and many other distinctions all blur in the backdrop of the Internet. Should broadcast stations be viewed as Internet Service Providers when their programming is made available in the Internet environment? Is use of cellular telephones considered part of the Internet and if so under what conditions? This area is badly in need of clarification.

The authors believe the best definition currently in existence is that approved by the Federal Networking Council in 1995, http://www.fnc.gov and which is reproduced in the footnote below [xv] for ready reference. Of particular note is that it defines the Internet as a global information system, and included in the definition, is not only the underlying communications technology, but also higher-level protocols and end-user applications, the associated data structures and the means by which the information may be processed, manifested, or otherwise used. In many ways, this definition supports the characterization of the Internet as an “information superhighway.” Like the federal highway system, whose underpinnings include not only concrete lanes and on/off ramps, but also a supporting infrastructure both physical and informational, including signs, maps, regulations, and such related services and products as filling stations and gasoline, the Internet has its own layers of ingress and egress, and its own multi-tiered levels of service.

The FNC definition makes it clear that the Internet is a dynamic organism that can be looked at in myriad ways. It is a framework for numerous services and a medium for creativity and innovation. Most importantly, it can be expected to evolve.


WHO RUNS THE INTERNET

The Domain Name System

The Internet evolved as an experimental system during the 1970s and early 1980s. It then flourished after the TCP/IP protocols were made mandatory on the ARPANET and other networks in January 1983; these protocols thus became the standard for many other networks as well. Indeed, the Internet grew so rapidly that the existing mechanisms for associating the names of host computers (e.g. UCLA, USC-ISI) to Internet addresses (known as IP addresses) were about to be stretched beyond acceptable engineering limits. Most of the applications in the Internet referred to the target computers by name. These names had to be translated into Internet addresses before the lower level protocols could be activated to support the application. For a time, a group at SRI International in Menlo Park, CA, called the Network Information Center (NIC), maintained a simple, machine-readable list of names and associated Internet addresses which was made available on the net. Hosts on the Internet would simply copy this list, usually daily, so as to maintain a local copy of the table. This list was called the "host.txt" file (since it was simply a text file). The list served the function in the Internet that directory services (e.g. 411 or 703-555-1212) do in the US telephone system - the translation of a name into an address.

As the Internet grew, it became harder and harder for the NIC to keep the list current. Anticipating that this problem would only get worse as the network expanded, researchers at USC Information Sciences Institute launched an effort to design a more distributed way of providing this same information. The end result was the Domain Name System (DNS) [xvi] which allowed hundreds of thousands of "name servers" to maintain small portions of a global database of information associating IP addresses with the names of computers on the Internet.

The naming structure was hierarchical in character. For example, all host computers associated with educational institutions would have names like "stanford.edu" or "ucla.edu". Specific hosts would have names like "cs.ucla.edu" to refer to a computer in the computer science department of UCLA, for example. A special set of computers called "root servers" maintained information about the names and addresses of other servers that contained more detailed name/address associations. The designers of the DNS also developed seven generic "top level" domains, as follows:

Education - EDU
Government - GOV
Military - MIL
International - INT
Network - NET
(non-profit) Organization - ORG
Commercial - COM

Under this system, for example, the host name "UCLA" became "UCLA.EDU" because it was operated by an educational institution, while the host computer for "BBN" became "BBN.COM" because it was a commercial organization. Top-level domain names also were created for every country: United Kingdom names would end in “.UK,” while French names would end in “.FR.”
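
To make the name-to-address translation concrete, here is a minimal sketch using Python's standard library resolver; the hostname is merely an example, and the addresses returned will depend on where and when the query is made.

import socket

# A hierarchical name: host "cs" within the domain "ucla.edu" under the "edu" top-level domain.
hostname = "cs.ucla.edu"

# getaddrinfo asks the local resolver, which in turn walks the DNS hierarchy
# (root servers, then the .edu servers, then the ucla.edu servers) on our behalf.
addresses = {entry[4][0] for entry in socket.getaddrinfo(hostname, None)}
print(hostname, "->", addresses)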

The Domain Name System (DNS) was and continues to be a major element of the Internet architecture, one that contributes to its scalability. It also contributes to controversy over trademarks and general rules for the creation and use of domain names, creation of new top-level domains and the like. At the same time, other resolution schemes exist as well. One of the authors (Kahn) has been involved in the development of a different kind of standard identification and resolution scheme [xvii] that, for example, is being used by book publishers as the base technology for identifying books on the Internet, adapting various existing identification schemes for use in the Internet environment. For example, International Standard Book Numbers (ISBNs) can be used as part of the identifiers. The identifiers then resolve to state information about the referenced books, such as location information (e.g. multiple sites) on the Internet that is used to access the books or to order them. These developments are taking place in parallel with the more traditional means of managing Internet resources. They offer an alternative to the existing Domain Name System with enhanced functionality.
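
As a purely illustrative sketch (not the actual Handle System interface), the idea can be pictured as follows: a persistent identifier incorporating an ISBN resolves to current state information about the book, such as the locations used to access or order it. All identifiers, fields and URLs below are hypothetical.

# Hypothetical registry mapping persistent identifiers to state information.
registry = {
    "isbn/9780000000000": {
        "locations": ["https://publisher.example/book", "https://mirror.example/book"],
        "order": "https://store.example/order?isbn=9780000000000",
    },
}

def resolve(identifier):
    # Resolution returns the current state record; the record can change
    # (e.g. the book moves to a new site) without changing the identifier itself.
    return registry.get(identifier)

print(resolve("isbn/9780000000000"))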

The growth of Web servers and users of the Web has been remarkable, but some people are confused about the relationship between the World Wide Web and the Internet. The Internet is the global information system that includes communication capabilities and many high-level applications. The Web is one such application. The existing connectivity of the Internet made it possible for users and servers all over the world to participate in this activity. Electronic mail is another important application. As of today, over 60 million computers take part in the Internet, and about 3.6 million web sites are estimated to be accessible on the net. Virtually every user of the net has access to electronic mail and web browsing capability. Email remains a critically important application, and these two functions, email and web browsing, largely dominate the use of the Internet for most users.

The Internet Standards Process

Internet standards were once the output of research activity sponsored by DARPA. The principal investigators on the internetting research effort essentially determined what technical features of the TCP/IP protocols would become common. The initial work in this area started with the joint effort of the two authors, continued in Cerf's group at Stanford, and soon thereafter was joined by engineers and scientists at BBN and University College London. This informal arrangement has changed with time and details can be found elsewhere [xviii]. At present, standards efforts for the Internet are carried out primarily under the auspices of the Internet Society (ISOC). The Internet Engineering Task Force (IETF) operates under the leadership of its Internet Engineering Steering Group (IESG), which is populated by appointees approved by the Internet Architecture Board (IAB), which is itself now part of the Internet Society.

The IETF comprises over one hundred working groups, organized into areas that are managed by Area Directors specializing in those areas.

There are other bodies with considerable interest in Internet standards or in standards that must interwork with the Internet. Examples include the International Telecommunication Union Telecommunication Standardization Sector (ITU-T), the Institute of Electrical and Electronics Engineers (IEEE) local area network standards group (IEEE 802), the International Organization for Standardization (ISO), the American National Standards Institute (ANSI), the World Wide Web Consortium (W3C), and many others.

As Internet access and services are provided by existing media such as telephone, cable and broadcast, interactions with standards bodies and legal structures formed to deal with these media will become an increasingly complex matter. The intertwining of interests is simultaneously fascinating and complicated, and has increased the need for thoughtful cooperation among many interested parties.

Managing the Internet

Perhaps the least understood aspect of the Internet is its management. In recent years, this area has become the subject of intense commercial and international interest, involving multiple governments and commercial organizations, and recently congressional hearings. At issue is how the Internet will be managed in the future and, in the process, what oversight mechanisms will ensure that the public interest is adequately served.

In the 1970s, managing the Internet was easy. Since few people knew about the Internet, decisions about almost everything of real policy concern were made in the offices of DARPA. It became clear in the late 1970s, however, that more community involvement in the decision-making processes was essential. In 1979, DARPA formed the Internet Configuration Control Board (ICCB) to ensure that knowledgeable members of the technical community discussed critical issues, educated people outside of DARPA about the issues, and helped others to implement the TCP/IP protocols and gateway functions. At the time, there were no companies that offered turnkey solutions to getting on the Internet. It would be another five years or so before companies like Cisco Systems were formed. There were no PCs yet, and the only workstations available were specially built; their software was not generally configured for use with external networks, and they were certainly considered expensive at the time.

In 1983, the small group of roughly twelve ICCB members was reconstituted (with some substitutions) as the Internet Activities Board (IAB), and about ten “Task Forces” were established under it to address issues in specific technical areas. The attendees at Internet Working Group meetings were invited to become members of as many of the task forces as they wished.

The management of the Domain Name System offers a kind of microcosm of issues now frequently associated with overall management of the Internet's operation and evolution. Someone had to take responsibility for overseeing the system's general operation. In particular, top-level domain names had to be selected, along with persons or organizations to manage each of them. Rules for the allocation of Internet addresses had to be established. DARPA had previously asked the late Jon Postel of the USC Information Sciences Institute to take on numerous functions related to administration of names, addresses and protocol related matters. With time, Postel assumed further responsibilities in this general area on his own, and DARPA, which was supporting the effort, gave its tacit approval. This activity was generally referred to as the Internet Assigned Numbers Authority (IANA) [xix]. In time, Postel became the arbitrator of all controversial matters concerning names and addresses until his untimely death in October 1998.

It is helpful to consider separately the problem of managing the domain name space and the Internet address space. These two vital elements of the Internet architecture have rather different characteristics that color the management problems they generate. Domain names carry semantics that plain numbers do not, and thus a means of determining who can use what names is needed. As a result, speculators in Internet names often claim large numbers of them with no intent to use them other than to resell them later. Alternate resolution mechanisms [xx], if widely adopted, could significantly change this landscape.

The rapid growth of the Internet has triggered the design of a new and larger address space (the so-called IP version 6 address space); today's Internet uses IP version 4 [xxi]. However, little momentum has yet developed to deploy IPv6 widely. Despite concerns to the contrary, the IPv4 address space will not be depleted for some time. Further, the use of Dynamic Host Configuration Protocol (DHCP) to dynamically assign IP addresses has also cut down on demand for dedicated IP addresses. Nevertheless, there is growing recognition in the Internet technical community that expansion of the address space is needed, as is the development of transition schemes that allow interoperation between IPv4 and IPv6 while migrating to IPv6.
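
To illustrate the scale of that expansion, the following sketch uses Python's standard ipaddress module; the example addresses are taken from reserved documentation ranges.

import ipaddress

# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
print(f"IPv4 address space: {2 ** 32:,} addresses")      # about 4.3 billion
print(f"IPv6 address space: {2 ** 128:.2e} addresses")   # about 3.4 x 10^38

v4 = ipaddress.ip_address("192.0.2.1")     # IPv4 documentation address
v6 = ipaddress.ip_address("2001:db8::1")   # IPv6 documentation address
print(v4.version, v4)
print(v6.version, v6.exploded)             # full, uncompressed IPv6 notation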

In 1998, the Internet Corporation for Assigned Names and Numbers (ICANN) was formed as a private-sector, non-profit organization to oversee the orderly progression in the use of Internet names and numbers, as well as certain protocol-related matters that required oversight. The birth of this organization, which was selected by the Department of Commerce for this function, has been difficult, embodying as it does many of the conflicts inherent in this arena. However, there is a clear need for an oversight mechanism for Internet domain names and numbers, separate from their day-to-day management.

Many questions about Internet management remain. They may also prove difficult to resolve quickly. Of specific concern is what role the U.S. government and indeed governments around the world need to play in its continuing operation and evolution. This is clearly a subject for another time.


WHERE DO WE GO FROM HERE?

As we struggle to envision what may be commonplace on the Internet in a decade, we are confronted with the challenge of imagining new ways of doing old things, as well as trying to think of new things that will be enabled by the Internet, and by the technologies of the future.

In the next ten years, the Internet is expected to be enormously bigger than it is today. It will be more pervasive than the older technologies and penetrate more homes than television and radio programming. Computer chips are now being built that implement the TCP/IP protocols, and recently a university announced a two-chip web server. Chips like this are extremely small, cost very little, and can be put into anything. Many of the devices connected to the Internet will be Internet-enabled appliances (cell phones, fax machines, household appliances, hand-held organizers, digital cameras, etc.) as well as traditional laptop and desktop computers. Information access will be directed to digital objects of all kinds and to services that help to create them or make use of them [xxii].

Very high-speed networking has also been developing at a steady pace. From the original 50,000 bit-per-second ARPANET, to the 155 million bit-per-second NSFNET, to today’s 2.4 – 9.6 billion bit-per-second commercial networks, we routinely see commercial offerings providing Internet access at increasing speeds. Experimentation with optical technology using wavelength division multiplexing is underway in many quarters; and testbeds operating at speeds of terabits per second (that is trillions of bits-per-second) are being constructed.
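
A back-of-envelope calculation using only the figures quoted above conveys the pace of this growth.

# Growth factors implied by the figures in the text (all rates in bits per second).
arpanet = 50_000             # original ARPANET rate
nsfnet = 155_000_000         # later NSFNET backbone rate
commercial = 9_600_000_000   # high end of the commercial backbone rates cited

print(round(nsfnet / arpanet))       # roughly 3,100 times the ARPANET rate
print(round(commercial / arpanet))   # roughly 192,000 times the ARPANET rate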

Some of these ultra-high-speed systems may one day carry data from very far away places, like Mars. Already, design of the interplanetary Internet as a logical extension of the current Internet is part of the NASA Mars mission program now underway at the Jet Propulsion Laboratory in Pasadena, California [xxiii]. By 2008, we should have a well-functioning Earth-Mars network that serves as a nascent backbone of the interplanetary Internet.

Wireless communication has exploded in recent years with the rapid growth of cellular telephony. Increasingly, however, Internet access is becoming available over these networks. Alternate forms of wireless communication, including both ground radio and satellite, are in development and use now, and the prospects for increasing data rates look promising. Recent developments in high data rate systems appear likely to offer ubiquitous wireless data services in the 1-2 Mbps range. It is even possible that wireless Internet access may one day be the primary way most people get access to the Internet.

A developing trend that seems likely to continue in the future is an information-centric view of the Internet that can live in parallel with the current communications-centric view. Many of the concerns about intellectual property protection are difficult to deal with, not because of fundamental limits in the law, but rather because of technological and perhaps management limitations in knowing how best to deal with these issues. A digital object infrastructure that makes information objects “first-class citizens” in the packetized “primordial soup” of the Internet is one step in that direction. In this scheme, the digital object is the conceptual elemental unit in the information view; it is interpretable (in principle) by all participating information systems. The digital object is thus an abstraction that may be implemented in various ways by different systems. It is a critical building block for interoperable and heterogeneous information systems. Each digital object has a unique and, if desired, persistent identifier that will allow it to be managed over time. This approach is highly relevant to the development of third-party value-added information services in the Internet environment.
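
A minimal sketch of that abstraction (the class and field names are hypothetical, purely for illustration) might look like the following: each object carries a unique, persistent identifier alongside its content and metadata, while repositories remain free to store and serve it however they choose.

from dataclasses import dataclass, field

@dataclass
class DigitalObject:
    identifier: str     # unique, persistent identifier, independent of location
    data: bytes         # the object's content, opaque to the underlying network
    metadata: dict = field(default_factory=dict)    # descriptive attributes

obj = DigitalObject(
    identifier="hdl:20.5000.1/example-object",      # hypothetical handle-style identifier
    data=b"...",
    metadata={"type": "technical report"},
)
print(obj.identifier)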

Of special concern to the authors is the need to understand and manage the downside potential for network disruptions, as well as cybercrime and terrorism. The ability to deal with problems in this diverse arena is at the forefront of maintaining a viable global information infrastructure. “IOPS.org” [xxiv], a private-sector group dedicated to improving coordination among ISPs, deals with issues of ISP outages, disruptions and other trouble conditions, as well as related matters, through discussion, interaction and coordination among the principal players. Business, the academic community and government all need as much assurance as possible that they can conduct their activities on the Internet with high confidence that security and reliability will be present. The participation of many organizations around the world, including especially governments and the relevant service providers, will be essential here.

The success of the Internet in society as a whole will depend less on technology than on the larger economic and social concerns that are at the heart of every major advance. The Internet is no exception, except that its potential and reach are perhaps as broad as any that have come before.



[i] Leonard Kleinrock's doctoral dissertation at MIT was written during 1961: "Information Flow in Large Communication Nets", RLE Quarterly Progress Report, July 1961, and published as a book, "Communication Nets: Stochastic Message Flow and Delay", New York: McGraw Hill, 1964. This was one of the earliest mathematical analyses of what we now call packet switching networks.

[ii] J.C.R. Licklider & W. Clark, "On-Line Man Computer Communication", August 1962. Licklider made tongue-in-cheek references to an "inter-galactic network" but in truth, his vision of what might be possible was prophetic.

[iii] [BARAN 64] Baran, P., et al, "On Distributed Communications", Volumes I-XI, RAND Corporation Research Documents, August 1964. Paul Baran explored the use of digital "message block" switching to support highly resilient, survivable voice communications for military command and control. This work was undertaken at RAND Corporation for the US Air Force beginning in 1962.

[iv] L. Roberts & T. Merrill, "Toward a Cooperative Network of Time-Shared Computers", Fall AFIPS Conf., Oct. 1966.

[v] Davies, D.W., K.A. Bartlett, R.A. Scantlebury, and P. T. Wilkinson. 1967. "A Digital Communication Network for Computers Giving Rapid Response at Remote Terminals," Proceedings of the ACM Symposium on Operating System Principles. Association for Computing Machinery, New York, 1967. Donald W. Davies and his colleagues coined the term "packet" and built one node of a packet switching network at the National Physical Laboratory in the UK.

[vi] Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, Stephen Wolff, "A Brief History of the Internet," www.isoc.org/internet/history/brief.html; and see below for timeline.

[vii] Vinton G. Cerf and Robert E. Kahn, "A Protocol for Packet Network Intercommunication," IEEE Transactions on Communications, Vol. COM-22, May 1974.

[viii] The Internet Engineering Task Force (IETF) is an activity taking place under the auspices of the Internet Society (www.isoc.org). See www.ietf.org

[ix] From the BITNET charter:

BITNET, which originated in 1981 with a link between CUNY and Yale, grew rapidly during the next few years, with management and systems services provided on a volunteer basis largely from CUNY and Yale. In 1984, the BITNET Directors established an Executive Committee to provide policy guidance.

(see http://www.geocities.com/SiliconValley/2260/bitchart.html)

[x] Usenet came into being in late 1979, shortly after the release of V7 Unix with UUCP. Two Duke University grad students in North Carolina, Tom Truscott and Jim Ellis, thought of hooking computers together to exchange information with the Unix community. Steve Bellovin, a grad student at the University of North Carolina, put together the first version of the news software using shell scripts and installed it on the first two sites: "unc" and "duke." At the beginning of 1980 the network consisted of those two sites and "phs" (another machine at Duke), and was described at the January Usenix conference. Steve Bellovin later rewrote the scripts into C programs, but they were never released beyond "unc" and "duke." Shortly thereafter, Steve Daniel did another implementation in C for public distribution. Tom Truscott made further modifications, and this became the "A" news release.

(see http://www.ou.edu/research/electron/internet/use-soft.htm)

[xi] A few examples include the New York State Education and Research Network (NYSERNET), New England Academic and Research Network (NEARNET), the California Education and Research Foundation Network (CERFNET), Northwest Net (NWNET), Southern Universities Research and Academic Net (SURANET) and so on. UUNET was formed as a non-profit by a grant from the UNIX Users Group (USENIX).

[xii] UUNET called its Internet service ALTERNET. UUNET was acquired by Metropolitan Fiber Networks (MFS) in 1995 which was itself acquired by Worldcom in 1996. Worldcom later merged with MCI to form MCI WorldCom in 1998. In that same year, Worldcom also acquired the ANS backbone network from AOL, which had purchased it from the non-profit ANS earlier.

[xiii] PSINET was a for-profit spun out of the NYSERNET in 1990.

[xiv] CERFNET was started by General Atomics as one of the NSF-sponsored intermediate level networks. It was coincidental that the network was called "CERF"Net - originally they had planned to call themselves SURFNET, since General Atomics was located in San Diego, California, but this name was already taken by a Dutch Research organization called SURF, so the General Atomics founders settled for California Education and Research Foundation Network. Cerf participated in the launch of the network in July 1989 by breaking a fake bottle of champagne filled with glitter over a Cisco Systems router.

[xv] October 24, 1995, Resolution of the U.S. Federal Networking Council

RESOLUTION:

"The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term "Internet".

"Internet" refers to the global information system that --

(i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons;

(ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and

(iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein."

[xvi] The Domain Name System was designed by Paul Mockapetris and initially documented in November 1983. Mockapetris, P., "Domain names - Concepts and Facilities", RFC 882, USC/Information Sciences Institute, November 1983 and Mockapetris, P.,"Domain names - Implementation and Specification", RFC 883, USC/Information Sciences Institute, November 1983. (see also http://soa.granitecanyon.com/faq.shtml)

[xvii] The Handle System - see www.handle.net

[xviii] See Leiner, et al, "A Brief History…", www.isoc.org/internet/history/brief.html

[xix] See www.iana.org for more details. See also www.icann.org.

[xx] see www.doi.org

[xxi] Version 5 of the Internet Protocol was an experiment that has since been terminated.

[xxii] see A Framework for Distributed Digital Object Services, Robert E Kahn and Robert Wilensky at www.cnri.reston.va.us/cstr/arch/k-w.html

[xxiii] The interplanetary Internet effort is funded in part by DARPA and has support from NASA. For more information, see www.ipnsig.org

[xxiv] See www.iops.org for more information on this group dedicated to improving operational coordination among Internet Service Providers.
