ALOHAnet, also known as the ALOHA System, [1] [2] [3] or simply ALOHA, was a pioneering computer networking system developed at the University of Hawaii. ALOHAnet became operational in June 1971, providing the first public demonstration of a wireless packet data network. [4] [5]
The ALOHAnet used a new method of medium access, called ALOHA random access, and an experimental ultra high frequency (UHF) band for its operation. In its simplest form, later known as Pure ALOHA, remote units communicated with a base station (Menehune) over two separate radio frequencies (inbound and outbound, respectively). Nodes did not wait for the channel to be clear before sending; instead, they waited for acknowledgment of successful receipt of a message and re-sent it if no acknowledgment arrived. Nodes would also stop and re-transmit data if they detected any other messages while transmitting. While simple to implement, this scheme yields a maximum channel utilization of only 18.4%. A later advancement, Slotted ALOHA, improved the efficiency of the protocol by reducing the chance of collision, raising the maximum throughput to 36.8%.
ALOHA random access was subsequently employed in the cable-based Ethernet network in the 1970s, and following regulatory developments in the early 1980s it became possible to use ALOHA random-access techniques in both Wi-Fi and mobile telephone networks. ALOHA channels were used in a limited way in the 1980s in 1G mobile phones for signaling and control purposes. In the late 1980s, the European standardization group that developed the pan-European digital mobile communication system GSM greatly expanded the use of ALOHA channels for access to radio channels in mobile telephony. In the early 2000s, additional ALOHA channels were added to 2.5G and 3G mobile phones with the widespread introduction of General Packet Radio Service (GPRS), using a slotted-ALOHA random-access channel combined with a version of the Reservation ALOHA scheme first analyzed by a group at BBN Technologies.
One of the early computer networking designs, the ALOHA network was developed beginning in September 1968 at the University of Hawaii under the leadership of Norman Abramson and Franklin Kuo, along with Thomas Gaarder, Shu Lin, Wesley Peterson and Edward ("Ned") Weldon. The goal was to use low-cost commercial radio equipment to connect users on Oahu and the other Hawaiian islands with a central time-sharing computer on the main Oahu campus. The first packet broadcasting unit went into operation in June 1971. Terminals were connected to a special-purpose terminal connection unit using RS-232 at 9600 bit/s. [6]
ALOHA was originally a contrived acronym standing for Additive Links On-line Hawaii Area. [7]
The original version of ALOHA used two distinct frequencies in a hub configuration, with the hub machine broadcasting packets to everyone on the outbound channel, and the various client machines sending data packets to the hub on the inbound channel. If data was received correctly at the hub, a short acknowledgment packet was sent to the client; if an acknowledgment was not received by a client machine after a short wait time, it would automatically retransmit the data packet after waiting a randomly selected time interval. This acknowledgment mechanism was used to detect and correct for collisions created when two client machines both attempted to send a packet at the same time.
ALOHAnet's primary importance was its use of a shared medium for client transmissions. Unlike the ARPANET, where each node could only talk to a single node at the other end of a wire or satellite circuit, in ALOHAnet all client nodes communicated with the hub on the same frequency. This meant that some sort of mechanism was needed to control who could talk at what time. The ALOHAnet solution was to allow each client to send its data without controlling when it was sent, and to implement an acknowledgment/retransmission scheme to deal with collisions. This approach radically reduced the complexity of the protocol and the networking hardware, since nodes do not need to negotiate over who is allowed to speak.
This solution became known as a pure ALOHA, or random-access channel, and was the basis for subsequent Ethernet development and later Wi-Fi networks. [5] Various versions of the ALOHA protocol (such as Slotted ALOHA) also appeared later in satellite communications, and were used in wireless data networks such as ARDIS, Mobitex, CDPD, and GSM.
The ALOHA network introduced the mechanism of randomized multiple access, in which a device transmits a packet immediately and, if no acknowledgment is received, repeats the transmission after a random waiting time. [8] The probability distribution of this random waiting time for retransmission of an unacknowledged packet is critically important for the stability of ALOHA-type communication systems. The average waiting time for retransmission is typically shorter than the average time between generation of new packets at the same client node, but it must not be so short as to compromise the stability of the network, causing a collapse in its overall throughput. [9]
Also important was ALOHAnet's use of the outgoing hub channel to broadcast packets directly to all clients on a second shared frequency, using an address in each packet to allow selective receipt at each client node. [4] Separate frequencies were used for incoming and outgoing communications to the hub so that devices could receive acknowledgments even while transmitting.
The original version of the protocol (now called Pure ALOHA, and the one implemented in ALOHAnet) was quite simple:
- If you have data to send, send the data.
- If, while you are transmitting data, you receive any data from another station, there has been a message collision. All transmitting stations will need to try resending "later".
Pure ALOHA does not check whether the channel is busy before transmitting. Since collisions can occur and data may have to be sent again, ALOHA cannot efficiently use 100% of the capacity of the communications channel. How long a station waits before it retransmits, and the likelihood of a collision, are interrelated, and both affect how efficiently the channel can be used. This means that the concept of "retransmit later" is a critical aspect: the quality of the backoff scheme chosen significantly influences the efficiency of the protocol, the ultimate channel capacity, and the predictability of its behavior.
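The behavior described above can be summarized in a short sketch. The following Python fragment is a minimal, hypothetical model of a Pure ALOHA sender, not ALOHAnet code; the callbacks send_frame and ack_received, and the timeout and backoff constants, are illustrative assumptions.

```python
import random
import time

ACK_TIMEOUT = 0.2   # seconds to wait for an acknowledgment (assumed value)
MAX_BACKOFF = 1.0   # upper bound of the random retransmission delay (assumed)

def send_with_pure_aloha(send_frame, ack_received, frame):
    """Transmit per Pure ALOHA: send immediately without sensing the
    channel, then retransmit after a random delay until acknowledged."""
    while True:
        send_frame(frame)                      # transmit without checking the channel
        if ack_received(timeout=ACK_TIMEOUT):  # hub received the frame intact
            return
        # No ACK: assume a collision (or loss) and retry after a random wait,
        # so two stations that collided are unlikely to collide again.
        time.sleep(random.uniform(0, MAX_BACKOFF))
```

The random (rather than fixed) retransmission delay is the part the preceding paragraph calls critical: with a fixed delay, two colliding stations would simply collide again on every retry.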
To assess Pure ALOHA, we need to predict its throughput, the rate of successful transmission of frames. [10] First, make a few simplifying assumptions:
- All frames have the same length.
- Stations cannot generate a frame while transmitting or attempting to transmit.
- The population of stations attempts to transmit (both new frames and retransmissions of frames that collided) according to a Poisson distribution.
Let T refer to the time needed to transmit one frame on the channel, and define the frame-time as a unit of time equal to T. Let G refer to the mean of the Poisson distribution of transmission attempts: that is, on average, there are G transmission attempts per frame-time.
Consider what needs to happen for a frame to be transmitted successfully. Let t refer to the time at which the frame is sent. The frame occupies the channel for one frame-time beginning at t, and every other station must refrain from transmitting during the interval from t-T to t+T: a frame whose transmission begins between t-T and t would overlap the start of our frame, and one whose transmission begins between t and t+T would overlap its end.
For any frame-time, the probability of there being k transmission attempts during that frame-time is:

$$\Pr[k\ \text{attempts}] = \frac{G^k e^{-G}}{k!}$$

The average number of transmission attempts over two consecutive frame-times is 2G. Hence, for any pair of consecutive frame-times, the probability of there being k transmission attempts during those two frame-times is:

$$\Pr[k\ \text{attempts in}\ 2T] = \frac{(2G)^k e^{-2G}}{k!}$$

Therefore, the probability ($\mathrm{Prob}_\text{pure}$) of there being zero transmission attempts between t-T and t+T (and thus of a successful transmission for us) is:

$$\mathrm{Prob}_\text{pure} = e^{-2G}$$

The throughput can be calculated as the rate of transmission attempts multiplied by the probability of success, so the throughput ($S_\text{pure}$) is:

$$S_\text{pure} = G e^{-2G}$$

The maximum throughput is 0.5/e frames per frame-time (reached when G = 0.5), which is approximately 0.184 frames per frame-time. This means that, in Pure ALOHA, only about 18.4% of the time is used for successful transmissions.
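This closed-form result can be sanity-checked numerically. The sketch below is an illustrative Monte Carlo model (not ALOHAnet code): it draws Poisson transmission attempts on a continuous timeline and counts an attempt as successful only when no other attempt starts within one frame-time on either side.

```python
import math
import random

def pure_aloha_throughput(G, num_frame_times=200_000):
    """Estimate Pure ALOHA throughput by simulation, with T = 1.

    G is the mean number of transmission attempts per frame-time. A frame
    starting at time t succeeds iff no other frame starts in (t-1, t+1).
    """
    # Poisson arrivals at rate G: exponential gaps between start times.
    starts, t = [], 0.0
    while t < num_frame_times:
        t += random.expovariate(G)
        starts.append(t)
    successes = 0
    for i, s in enumerate(starts):
        prev_clear = i == 0 or starts[i - 1] <= s - 1.0
        next_clear = i == len(starts) - 1 or starts[i + 1] >= s + 1.0
        successes += prev_clear and next_clear
    return successes / num_frame_times

for G in (0.25, 0.5, 1.0):
    print(G, round(pure_aloha_throughput(G), 3), round(G * math.exp(-2 * G), 3))
# Simulated and analytic columns agree; both peak near 1/(2e) ≈ 0.184 at G = 0.5.
```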
An improvement to the original ALOHA protocol was Slotted ALOHA, which introduced discrete time slots and increased the maximum throughput. [11] A station can start a transmission only at the beginning of a time slot, which reduces collisions. In this case, only transmission attempts within one frame-time, rather than two consecutive frame-times, need to be considered, since a frame sent at the start of a slot can only collide with frames sent in that same slot. Thus, the probability of there being zero transmission attempts by other stations in a single time slot is:

$$\mathrm{Prob}_\text{slotted} = e^{-G}$$

The probability of a transmission requiring exactly k attempts (k-1 collisions followed by one success) is: [10]

$$\Pr[k\ \text{attempts}] = e^{-G}(1-e^{-G})^{k-1}$$

The throughput is:

$$S_\text{slotted} = G e^{-G}$$
The maximum throughput is 1/e frames per frame-time (reached when G = 1), which is approximately 0.368 frames per frame-time, or 36.8%.
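The same check works for Slotted ALOHA. In the illustrative sketch below, a Poisson number of attempts is drawn for each slot (using Knuth's sampler, since the Python standard library has no Poisson generator), and a slot succeeds only when exactly one station transmits.

```python
import math
import random

def poisson(lam):
    """Knuth's Poisson sampler: multiply uniforms until the product drops
    below e^-lam; the number of factors consumed is Poisson-distributed."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def slotted_aloha_throughput(G, num_slots=100_000):
    """A slot carries a frame successfully iff exactly one attempt falls in it."""
    successes = sum(poisson(G) == 1 for _ in range(num_slots))
    return successes / num_slots

for G in (0.5, 1.0, 2.0):
    print(G, round(slotted_aloha_throughput(G), 3), round(G * math.exp(-G), 3))
# Both columns peak near 1/e ≈ 0.368 at G = 1.
```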
Slotted ALOHA is used in low-data-rate tactical satellite communications networks by military forces, in subscriber-based satellite communications networks, mobile telephony call setup, set-top box communications and in contactless RFID technologies.
Reservation ALOHA, or R-ALOHA, is an effort to improve the efficiency of Slotted ALOHA. The improvements offered by Reservation ALOHA are markedly shorter delays and the ability to support higher levels of utilization efficiently. As a point of comparison, simulations have shown that Reservation ALOHA exhibits less delay at 80% utilization than Slotted ALOHA at 20–36% utilization. [12]
The chief difference between Slotted and Reservation ALOHA is that with Slotted ALOHA, any slot is available for utilization without regard to prior usage. Under Reservation ALOHA's contention-based reservation scheme, a slot is temporarily considered "owned" by the station that successfully used it, and the station relinquishes that ownership simply by ceasing to transmit once it has finished sending its data. As a rule, idle slots are considered available to all stations, which may then implicitly reserve (utilize) a slot on a contention basis.
Packet reservation multiple access (PRMA) is an implicit reservation scheme. Some fixed number of slots form a frame. After each frame, the satellite broadcasts the status of each slot from the previous frame, which indicates the reservation status of the corresponding slots of the next frame. All ground stations wishing to transmit compete exactly as in slotted ALOHA during any "free slot" of that next frame (i.e., a slot in which either no one transmitted during the previous frame, or there was a collision when multiple ground stations transmitted in it). If exactly one ground station happens to transmit during a "free slot", that ground station succeeds in reserving that slot of the frame: the corresponding slot is implicitly reserved in all future frames. From then on, the satellite broadcasts that this particular ground station has reserved that slot, and the station can continue transmitting with a guaranteed data rate during that slot of each frame; other ground stations are careful not to transmit during that slot, so there are no collisions during reserved slots. When a ground station with a reserved slot has nothing to send, it simply stops transmitting, which gives up its reservation; the satellite notices that the reserved slot is idle in one frame and broadcasts that fact, which indicates that the slot will be a "free slot" in the next frame. [13] [14]
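The per-slot bookkeeping described above can be sketched as follows. This is a simplified, hypothetical model (a single assumed persistence probability, no satellite propagation delay), not an implementation of any deployed PRMA system.

```python
import random

P_CONTEND = 0.3  # probability a backlogged station transmits in a free slot (assumed)

def prma_next_frame(slot_owner, stations_with_data):
    """Advance the PRMA slot state by one frame.

    slot_owner maps each slot index to the owning station, or None if free.
    Owners with nothing to send lose their reservation; free slots are
    contended for in slotted-ALOHA fashion, and a lone transmitter wins.
    """
    for slot, owner in enumerate(slot_owner):
        if owner is not None:
            if owner not in stations_with_data:
                slot_owner[slot] = None      # idle reserved slot becomes free
            continue                          # reserved and in use: no contention
        contenders = [s for s in stations_with_data
                      if s not in slot_owner and random.random() < P_CONTEND]
        if len(contenders) == 1:             # exactly one transmitter: reservation won
            slot_owner[slot] = contenders[0]
        # zero transmitters or a collision: the slot stays free next frame
    return slot_owner
```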
Maximum channel efficiency for slotted ALOHA is 36%; PRMA improves maximum channel efficiency to 80%. [14]
Demand assigned multiple access (DAMA), also called reservation ALOHA, is an explicit reservation scheme often used in satellite communications. DAMA alternates between two phases: During the reservation phase of a frame, DAMA acts like slotted ALOHA for some fixed number of short slots, except instead of ground stations sending complete packets, ground stations only send short requests for later transmission. The satellite collects all the successful requests (i.e., the ones not destroyed by collision) and sends them back as a reservation list assigning specific ground stations to specific TDM slots. During the TDM phase of a frame, the ground stations obey the reservation list and each one only transmits during the long TDM slot(s) reserved for it. Collisions may occur during the reservation phase, but not during the TDM phase. [14] [15]
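A compact sketch of one DAMA frame, again as an illustrative model rather than a real satellite protocol implementation:

```python
import random

def dama_frame(requesting_stations, num_minislots, num_tdm_slots):
    """Run one DAMA frame: a slotted-ALOHA reservation phase of short
    request minislots, then a collision-free TDM phase.

    Returns the reservation list the satellite would broadcast: the
    stations granted TDM slots, in slot order.
    """
    # Reservation phase: each station sends a short request in a random minislot.
    choice = {s: random.randrange(num_minislots) for s in requesting_stations}
    granted = []
    for m in range(num_minislots):
        senders = [s for s, c in choice.items() if c == m]
        if len(senders) == 1:        # request heard without collision
            granted.append(senders[0])
    # TDM phase: only the first num_tdm_slots successful requesters transmit,
    # each alone in its assigned slot, so no collisions can occur here.
    return granted[:num_tdm_slots]
```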
Maximum channel efficiency for slotted ALOHA is 36%; DAMA improves maximum channel efficiency to 80%. [14]
The use of a random-access channel in ALOHAnet led to the development of carrier-sense multiple access (CSMA), a listen-before-send random-access protocol that can be used when all nodes send and receive on the same channel. CSMA in radio channels was extensively modeled. [16] The AX.25 packet radio protocol is based on the CSMA approach with collision recovery, [17] drawing on the experience gained from ALOHAnet. A variation of CSMA, CSMA/CD, was used in early versions of Ethernet.
ALOHA and other random-access protocols have an inherent variability in their throughput and delay performance characteristics. For this reason, applications that need highly deterministic load behavior may use master/slave or token-passing schemes (such as Token Ring or ARCNET) instead of contention systems.
The central node communications processor was an HP 2100 minicomputer called the Menehune, which is the Hawaiian language word for dwarf people, [18] and was named for its similar role to the original ARPANET Interface Message Processor (IMP) which was being deployed at about the same time. In the original system, the Menehune forwarded correctly received user data to the UH central computer, an IBM System 360/65 time-sharing system. Outgoing messages from the 360 were converted into packets by the Menehune, which were queued and broadcast to the remote users at a data rate of 9600 bit/s. Unlike the half-duplex radios at the user TCUs, the Menehune was interfaced to the radio channels with full-duplex radio equipment. [19]
The original user interface developed for the system was an all-hardware unit called an ALOHAnet Terminal Control Unit (TCU) and was the sole piece of equipment necessary to connect a terminal into the ALOHA channel. The TCU was composed of a UHF antenna, transceiver, modem, buffer and control unit. The buffer was designed for a full line length of 80 characters, which allowed handling of both the 40- and 80-character fixed-length packets defined for the system. The typical user terminal in the original system consisted of a Teletype Model 33 or a dumb CRT user terminal connected to the TCU using a standard RS-232 interface. Shortly after the original ALOHA network went into operation, the TCU was redesigned with one of the first Intel microprocessors, and the resulting upgrade was called a Programmable Control Unit (PCU).
Additional basic functions performed by the TCUs and PCUs were generation of a cyclic-parity-check code vector, decoding of received packets for error detection, and generation of packet retransmissions using a simple random-interval generator. If an acknowledgment was not received from the Menehune after the prescribed number of automatic retransmissions, a flashing light was used as an indicator to the human user. Also, since the TCUs and PCUs did not send acknowledgments to the Menehune, a steady warning light was displayed to the human user when an error was detected in a received packet. Considerable simplification was incorporated into the initial design of both the TCU and the PCU for interfacing a human user to the network.
In later versions of the system, simple radio relays were placed in operation to connect the main network on the island of Oahu to other islands in Hawaii, and Menehune routing capabilities were expanded to allow user nodes to exchange packets with other user nodes, the ARPANET, and an experimental satellite network. [4]
Two fundamental choices which dictated much of the ALOHAnet design were the two-channel star configuration of the network and the use of random access for user transmissions.
The two-channel configuration was primarily chosen to allow for efficient transmission of the relatively dense total traffic stream being returned to users by the central time-sharing computer. An additional reason for the star configuration was the desire to centralize as many communication functions as possible at the central network node (the Menehune) to minimize the cost of the original all-hardware terminal control unit (TCU) at each user node.
The random-access channel for communication between users and the Menehune was designed specifically for the traffic characteristics of interactive computing. In a conventional communication system, a user might be assigned a portion of the channel on either a frequency-division multiple access or time-division multiple access basis. Since it was well known that computer and user data in time-sharing systems (circa 1970) were bursty, such fixed assignments are generally wasteful of bandwidth because of the high peak-to-average data rates that characterize the traffic.
To achieve a more efficient use of bandwidth for bursty traffic, ALOHAnet developed the random-access packet switching method that has come to be known as a pure ALOHA channel. This approach dynamically allocates bandwidth to a user as soon as it has data to send, using the acknowledgment and retransmission mechanism described earlier to deal with occasional access collisions. While the average channel loading must be kept below about 10% to maintain a low collision rate, this still results in better bandwidth efficiency than fixed allocations in a bursty traffic context.
Two 100 kHz channels in the experimental UHF band were used in the implemented system, one for the user-to-computer random-access channel and one for the computer-to-user broadcast channel. The system was configured as a star network, allowing only the central node to receive transmissions on the random-access channel. All user TCUs received each transmission made by the central node on the broadcast channel. All transmissions were made in bursts at 9600 bit/s, with data and control information encapsulated in packets.
Each packet consisted of a 32-bit header and a 16-bit header parity check word, followed by up to 80 bytes of data and a 16-bit parity check word for the data. The header contained address information identifying a particular user so that when the Menehune broadcast a packet, only the intended user's node would accept it.
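The packet layout can be made concrete with a short sketch. The field widths follow the text (32-bit header, 16-bit header check, up to 80 data bytes, 16-bit data check), but the byte order, field semantics, and the CRC-16 used here are illustrative assumptions; the actual ALOHAnet cyclic parity-check code is not specified in this description.

```python
import struct

def crc16(payload: bytes) -> int:
    """CRC-16/ARC (reflected polynomial 0xA001), a stand-in check word."""
    crc = 0x0000
    for byte in payload:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def build_packet(header: int, data: bytes) -> bytes:
    """Pack an ALOHAnet-style packet: 32-bit header (carrying the user
    address), 16-bit header check, <= 80 bytes of data, 16-bit data check."""
    assert len(data) <= 80, "packets carried at most 80 bytes of data"
    header_bytes = struct.pack(">I", header)           # 32-bit header
    header_check = struct.pack(">H", crc16(header_bytes))
    data_check = struct.pack(">H", crc16(data))
    return header_bytes + header_check + data + data_check
```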
In the 1970s, ALOHA random access was employed in the nascent cable-based Ethernet network [20] and then in the Marisat (now Inmarsat) satellite network. [21]
In the early 1980s frequencies for mobile networks became available, and in 1985 frequencies suitable for what became known as Wi-Fi were allocated in the US. [22] These regulatory developments made it possible to use the ALOHA random-access techniques in both Wi-Fi and in mobile telephone networks.
ALOHA channels were used in a limited way in the 1980s in 1G mobile phones for signaling and control purposes. [23] In the late 1980s, the European standardization group that developed the pan-European digital mobile communication system GSM greatly expanded the use of ALOHA channels for access to radio channels in mobile telephony. In addition, SMS text messaging was implemented in 2G mobile phones. In the early 2000s, additional ALOHA channels were added to 2.5G and 3G mobile phones with the widespread introduction of General Packet Radio Service (GPRS), using a slotted-ALOHA random-access channel combined with a version of the Reservation ALOHA scheme first analyzed by a group at BBN Technologies. [24]
Ethernet is a family of wired computer networking technologies commonly used in local area networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN). It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3. Ethernet has since been refined to support higher bit rates, a greater number of nodes, and longer link distances, but retains much backward compatibility. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET.
General Packet Radio Service (GPRS), also called 2.5G, is a mobile data standard on the 2G cellular communication network's global system for mobile communications (GSM). Networks and mobile devices with GPRS started to roll out around the year 2001. At the time of introduction it offered for the first time seamless mobile data transmission using packet data for an "always-on" connection, providing improved Internet access for web, email, WAP services, and Multimedia Messaging Service (MMS).
In digital radio, packet radio is the application of packet switching techniques to digital radio communications. Packet radio uses a packet switching protocol as opposed to circuit switching or message switching protocols to transmit digital data via a radio communication link.
Network throughput refers to the rate of message delivery over a communication channel in a communication network, such as Ethernet or packet radio. The data that these messages contain may be delivered over physical or logical links, or through network nodes. Throughput is usually measured in bits per second, and sometimes in packets per second or data packets per time slot.
Time-division multiple access (TDMA) is a channel access method for shared-medium networks. It allows several users to share the same frequency channel by dividing the signal into different time slots. The users transmit in rapid succession, one after the other, each using its own time slot. This allows multiple stations to share the same transmission medium while using only a part of its channel capacity. Dynamic TDMA is a TDMA variant that dynamically reserves a variable number of time slots in each frame to variable bit-rate data streams, based on the traffic demand of each data stream.
Carrier-sense multiple access with collision avoidance (CSMA/CA), in computer networking, is a network multiple access method in which carrier sensing is used, but nodes attempt to avoid collisions by beginning transmission only after the channel is sensed to be "idle". When they do transmit, nodes transmit their packet data in its entirety.
Carrier-sense multiple access with collision detection (CSMA/CD) is a medium access control (MAC) method used most notably in early Ethernet technology for local area networking. It uses carrier-sensing to defer transmissions until no other stations are transmitting. This is used in combination with collision detection in which a transmitting station detects collisions by sensing transmissions from other stations while it is transmitting a frame. When this collision condition is detected, the station stops transmitting that frame, transmits a jam signal, and then waits for a random time interval before trying to resend the frame.
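As a sketch of that sequence of steps (carrier sense, transmit while listening, jam, back off), the following hypothetical fragment assumes channel_idle, transmit_and_watch, and jam are provided by the underlying interface; the timing constants are illustrative.

```python
import random
import time

def csma_cd_send(channel_idle, transmit_and_watch, jam, frame, max_tries=16):
    """CSMA/CD sketch: defer while the channel is busy, transmit while
    listening for collisions, and on collision jam and back off randomly."""
    for attempt in range(max_tries):
        while not channel_idle():            # carrier sense: defer to others
            time.sleep(0.001)
        if transmit_and_watch(frame):        # True when no collision detected
            return True
        jam()                                # ensure all stations see the collision
        slots = random.randrange(2 ** min(attempt + 1, 10))  # truncated binary
        time.sleep(slots * 0.0005)           # exponential backoff (assumed slot time)
    return False
```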
Carrier-sense multiple access (CSMA) is a medium access control (MAC) protocol in which a node verifies the absence of other traffic before transmitting on a shared transmission medium, such as an electrical bus or a band of the electromagnetic spectrum.
In telecommunications and computer networks, a channel access method or multiple access method allows more than two terminals connected to the same transmission medium to transmit over it and to share its capacity. Examples of shared physical media are wireless networks, bus networks, ring networks and point-to-point links operating in half-duplex mode.
A communication channel refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel in telecommunications and computer networking. A channel is used for information transfer of, for example, a digital bit stream, from one or several senders to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.
Round-robin (RR) is one of the algorithms employed by process and network schedulers in computing. As the term is generally used, time slices are assigned to each process in equal portions and in circular order, handling all processes without priority. Round-robin scheduling is simple, easy to implement, and starvation-free. Round-robin scheduling can be applied to other scheduling problems, such as data packet scheduling in computer networks. It is an operating system concept.
In wireless networking, the hidden node problem or hidden terminal problem occurs when a node can communicate with a wireless access point (AP), but cannot directly communicate with other nodes that are communicating with that AP. This leads to difficulties in medium access control sublayer since multiple nodes can send data packets to the AP simultaneously, which creates interference at the AP resulting in no packet getting through.
Exponential backoff is an algorithm that uses feedback to multiplicatively decrease the rate of some process, in order to gradually find an acceptable rate. These algorithms find usage in a wide range of systems and processes, with radio networks and computer networks being particularly notable.
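A minimal sketch of the idea, assuming a retry window that doubles after each failure up to a cap (all names and constants here are illustrative):

```python
import random
import time

def with_exponential_backoff(attempt, max_tries=8, base=0.05, cap=5.0):
    """Call attempt() until it succeeds, doubling the random delay
    window after each failure, up to cap seconds."""
    for i in range(max_tries):
        if attempt():
            return True
        window = min(cap, base * (2 ** i))     # window doubles each failure
        time.sleep(random.uniform(0, window))  # randomize within the window
    return False
```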
A duplex communication system is a point-to-point system composed of two or more connected parties or devices that can communicate with one another in both directions. Duplex systems are employed in many communications networks, either to allow for simultaneous communication in both directions between two connected parties or to provide a reverse path for the monitoring and remote adjustment of equipment in the field. There are two types of duplex communication systems: full-duplex (FDX) and half-duplex (HDX).
A computer network is a set of computers sharing resources located on or provided by network nodes. Computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunication network technologies based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies.
Multiple Access with Collision Avoidance for Wireless (MACAW) is a slotted medium access control (MAC) protocol widely used in ad hoc networks. Furthermore, it is the foundation of many other MAC protocols used in wireless sensor networks (WSN). The IEEE 802.11 RTS/CTS mechanism is adopted from this protocol. It uses an RTS-CTS-DS-DATA-ACK frame sequence for transferring data, sometimes preceded by an RTS-RRTS frame sequence, in order to provide a solution to the hidden node problem. Although protocols based on MACAW, such as S-MAC, use carrier sense in addition to the RTS/CTS mechanism, MACAW does not make use of carrier sense.
In radio, cooperative multiple-input multiple-output is a technology that can effectively exploit the spatial domain of mobile fading channels to bring significant performance improvements to wireless communication systems. It is also called network MIMO, distributed MIMO, virtual MIMO, and virtual antenna arrays.
In statistical time division multiplexing, contention is a media access method that is used to share a broadcast medium. In contention, any computer in the network can transmit data at any time.
Mobile Slotted Aloha (MS-Aloha) is a wireless network protocol proposed for applications such as vehicle networks.
In mathematics and telecommunications, stochastic geometry models of wireless networks refer to mathematical models based on stochastic geometry that are designed to represent aspects of wireless networks. The related research consists of analyzing these models with the aim of better understanding wireless communication networks in order to predict and control various network performance metrics. The models require using techniques from stochastic geometry and related fields including point processes, spatial statistics, geometric probability, percolation theory, as well as methods from more general mathematical disciplines such as geometry, probability theory, stochastic processes, queueing theory, information theory, and Fourier analysis.