
IV. What is a Protocol?

A protocol is a set of rules that governs the communications

between computers on a network. These rules include guidelines that

regulate the following characteristics of a network: access method,

allowed physical topologies, types of cabling, and speed of data transfer.

 

The most common protocols are:

· Ethernet

· LocalTalk

· Token Ring

· FDDI

· ATM

· Token bus

· ALOHA

 

(A) Ethernet

 

The Ethernet protocol is by far the most widely used. Ethernet uses

an access method called CSMA/CD (Carrier Sense Multiple Access/Collision

Detection). This is a system where each computer listens to the cable

before sending anything through the network. If the network is clear, the

computer will transmit. If some other node is already transmitting on the

cable, the computer will wait and try again when the line is clear.

Sometimes, two computers attempt to transmit at the same instant. When

this happens a collision occurs. Each computer then backs off and waits a

random amount of time before attempting to retransmit. With this access

method, it is normal to have collisions. However, the delay caused by

collisions and retransmitting is very small and does not normally affect the

speed of transmission on the network.

 

The Ethernet protocol allows for linear bus, star, or tree topologies.

Data can be transmitted over twisted pair, coaxial, or fiber optic cable at

speeds from 10 Mbps up to 1000 Mbps.

 

Access Method

 

· CSMA/CD Carrier Sense Multiple Access Collision Detection is a

network access method in which devices that are ready to transmit

data first check the channel for a carrier. If no carrier is sensed, a

device can transmit. If two devices transmit at once, a collision

occurs and each computer backs off and waits a random amount of

time before attempting to retransmit. This is the access method

used by Ethernet. This standard enables devices to detect a

collision. After detecting a collision, a device waits a random delay

time and then attempts to re-transmit the message. If the device

detects a collision again, it waits twice as long before trying to retransmit the

message. This is known as exponential backoff.
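The backoff rule described above can be sketched in Python; the slot time and retry cap are illustrative assumptions, not figures taken from any standard:

```python
import random

def backoff_delay(collisions, slot_time=1.0, max_exponent=10):
    """Binary exponential backoff: after the n-th consecutive collision,
    wait a random number of slot times drawn from 0 .. 2**n - 1, so the
    expected delay doubles with each further collision."""
    exponent = min(collisions, max_exponent)   # cap the growth (assumed limit)
    slots = random.randrange(2 ** exponent)    # random wait in whole slots
    return slots * slot_time
```

Because the range doubles each time, two colliding devices quickly become unlikely to pick the same delay again.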

 

· CAM is short for channel access method. It is a protocol for how

data is transmitted in the bottom two layers of the OSI model.

CAMs describe how network systems put data on the network

media, how low-level errors are dealt with, and how the network

polices itself. Polling, contention, and token passing are examples of

CAMs.

 

o Polling. Polling is a CAM. In a master/slave scenario, the

master queries each slave device in turn as to whether it

has any data to transmit. If the slave answers yes then the

device is permitted to transmit its data. If the slave answers

no then the master moves on and polls the next slave

device. The process is repeated continuously.
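One polling cycle can be sketched as a short loop; modelling each slave as a (has_data, payload) pair is purely an illustrative simplification:

```python
def poll_cycle(slaves):
    """One master/slave polling cycle: the master asks each slave in
    turn whether it has data; only a slave that answers yes is
    permitted to transmit. Each slave is a (has_data, payload) pair."""
    received = []
    for has_data, payload in slaves:
        if has_data:            # slave answers "yes": it transmits its data
            received.append(payload)
        # slave answers "no": the master moves on and polls the next device
    return received
```

In practice the master repeats this cycle continuously, as the text notes.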

 

o Contention. Contention is a competition for resources. The

term is used especially to describe the situation when two or

more nodes attempt to transmit a message across the same

wire at the same time. The contention protocol defines what

happens when this occurs.

 

o Token Passing. Token passing uses a token, or a series of

bits to grant a device permission to transmit over the

network. Whichever device has the token can put data into

the network. When its transmission is complete the device

passes the token along to the next device in the topology.

System rules in the protocol specifications mandate how

long a device may keep the token, how long it can transmit

for, and how to generate a new token if there isn’t one

circulating.
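A minimal sketch of one token circulation, where the per-station frame limit stands in for the holding-time rules mentioned above (the limit of one frame per turn is an assumption, not a quoted rule):

```python
def token_round(queues, max_frames=1):
    """Pass the token once around the logical ring. Each station is a
    list of pending messages; the station holding the token may send
    up to max_frames of them, then must pass the token on."""
    sent = []
    for pending in queues:              # token visits stations in order
        for _ in range(max_frames):
            if pending:
                sent.append(pending.pop(0))
    return sent                         # transmission order this round
```

A station with nothing to send simply passes the token straight on, so idle stations cost almost nothing.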

 

· CSMA/CA Carrier Sense Multiple Access Collision Avoidance is a

network access method in which each device signals its intent to

transmit before it actually does so. This prevents other devices

from sending information, thus preventing collisions from occurring

between signals from two or more devices. This is the access

method used by LocalTalk.

 

A.1. Fast Ethernet

 

To allow for an increased speed of transmission, a new Ethernet

standard that supports 100 Mbps has been developed. This is

commonly called Fast Ethernet. Fast Ethernet requires the use of

different, more expensive network concentrators/hubs and network

interface cards. In addition, category 5 twisted pair or fiber optic cable is

necessary. Fast Ethernet is becoming common in schools that have been

recently wired. There are three types of Fast Ethernet.

 

1. 100BaseTX for use with Category 5 UTP cable

2. 100BaseFX for use with fiber optic cables

3. 100BaseT4, which uses two extra wires, for use with Category

3 UTP.

 

Most Fast Ethernet networks use the star topology, in which access

is controlled by a central interface. Two types of star topologies are

possible: broadcast star and switched. In a broadcast star, the central

interface is a hub that sends messages to all the hosts, while in the

switched type, the central interface is a switch that sends

messages only to their destination hosts.

 

A.2. Gigabit Ethernet

The most recent development in the Ethernet standard is a protocol

that has a transmission speed of 1 Gbps. Gigabit Ethernet is primarily

used for backbones on a network at this time. In the future, it will

probably be used for workstation and server connections also. It can be

used with both fiber optic cabling and copper. 1000BaseT, the

copper-cable version of Gigabit Ethernet, is expected to become a formal

standard in 1999.

 

(B) LocalTalk

LocalTalk is a network protocol that was developed by Apple

Computer, Inc. for Macintosh computers. The method used by LocalTalk is

called CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance).

It is similar to CSMA/CD except that a computer signals its intent to

transmit before it actually does so. LocalTalk adapters and special twisted

pair cable can be used to connect a series of computers through the serial

port. The Macintosh operating system allows the establishment of a

peer-to-peer network without the need for additional software. With the

addition of the server version of AppleShare software, a client/server

network can be established.

The LocalTalk protocol allows for linear bus, star, or tree topologies

using twisted pair cable. A primary disadvantage of LocalTalk is speed. Its

speed of transmission is only 230 Kbps.

 

(C) Token Ring

The Token Ring protocol was developed by IBM in the mid-1980s.

The access method used involves token-passing. In Token Ring, the

computers are connected so that the signal travels around the network

from one computer to another in a logical ring. A single electronic token

moves around the ring from one computer to the next. If a computer does

not have information to transmit, it simply passes the token on to the next

workstation. If a computer wishes to transmit and receives an empty

token, it attaches data to the token. The token then proceeds around the

ring until it comes to the computer for which the data is meant. At this

point, the data is captured by the receiving computer. The Token Ring

protocol requires a star-wired ring using twisted pair or fiber optic cable.

It can operate at transmission speeds of 4 Mbps or 16 Mbps. Due to the

increasing popularity of Ethernet, the use of Token Ring in school

environments has decreased.

 

Token Ring networks consist of stations directly linked to each other

by a single communication line. Messages travel from host to host

around the ring until they reach their correct destination. As with a bus

network, each interface must be capable of recognizing its own address to

receive a message. If a message is passed to a host that is not the

correct destination, the message is retransmitted to the next host in the

ring. To avoid collisions, a method called token passing is usually used.

 

A token is a frame of bits which is passed from one host to the next.

A token may be empty or it may contain a message. If an empty token is

received and the station wishes to transmit data, it holds the token and

writes into it the destination address, its own address, and the message.

The token is then passed to the next host. As the token is no longer

marked empty, no other host can transmit a message until

this token becomes empty again. When the token finally reaches its

destination, the destination host reads the message and then marks the

message as read. Then it passes the token to the next host. The passing

continues until the message reaches the sender. The sender then marks

the token empty. The same token is thus used both to send the message

and, at the same time, to serve as an acknowledgement that the message

was received.
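The walk-through above can be traced in a few lines of Python; the host names and the boolean "read" flag are illustrative stand-ins, not the actual Token Ring frame layout:

```python
def ring_deliver(ring, sender, dest, message):
    """Trace one message around the ring: the sender fills the empty
    token; each host passes it on; the destination marks it read; when
    the token returns to the sender, the read mark is the
    acknowledgement and the sender marks the token empty again."""
    token = {"empty": False, "dest": dest, "src": sender,
             "msg": message, "read": False}
    n = len(ring)
    i = (ring.index(sender) + 1) % n
    while ring[i] != sender:            # token travels host to host
        if ring[i] == dest:
            token["read"] = True        # destination reads the message
        i = (i + 1) % n
    acknowledged = token["read"]        # token is back at the sender
    token = {"empty": True}             # sender frees the token
    return acknowledged
```

If the destination never marks the token, the sender learns the message was not delivered, which is exactly the acknowledgement role the text describes.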

 

At the implementation level, the token may be a special 8-bit pattern

– for example, 11111111, which means that the token is empty. Bit stuffing is

used to prevent this pattern from appearing in the data being passed.

When a station wants to transmit a packet, it is required to seize the token

and remove it from the ring before transmitting. To remove the token, the

ring interface, which connects the host to the ring, must monitor all bits

that pass by. As the last bit of the token passes by, the ring interface

inverts it, changing the pattern to 11111110, known as the connector.

This will be interpreted as: what follows is a message. Those monitoring

the channel will not seize the token. Immediately after the token has been

so transformed, the host making the transformation is permitted to begin

transmitting.
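Under the 8-bit scheme just described, seizing the token amounts to inverting its final bit:

```python
EMPTY_TOKEN = 0b11111111   # empty-token pattern from the text
CONNECTOR   = 0b11111110   # last bit inverted: "what follows is a message"

def seize_token(byte_on_ring):
    """The ring interface monitors passing bits; on seeing the empty
    token it inverts the final bit, turning the token into the
    connector, after which the host may begin transmitting. Any other
    pattern passes through unchanged."""
    if byte_on_ring == EMPTY_TOKEN:
        return byte_on_ring ^ 0b00000001   # invert only the last bit
    return byte_on_ring
```

Other stations monitoring the channel see the connector pattern rather than the token, so they will not attempt to seize it.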

 

(D) FDDI

Fiber Distributed Data Interface (FDDI) is a network protocol that is

used primarily to interconnect two or more local area networks, often over

large distances. The access method used by FDDI involves token-passing.

FDDI uses a dual ring physical topology. Transmission normally occurs on

one of the rings; however, if a break occurs, the system keeps information

moving by automatically using portions of the second ring to create a new

complete ring. A major advantage of FDDI is speed. It operates over fiber

optic cable at 100 Mbps. FDDI supports 500 hosts on a single network.

 

FDDI is highly reliable because it consists of two counter-rotating

rings. A secondary ring provides an alternate data path in the event a

fault occurs on the primary ring. FDDI hosts incorporate this secondary

ring into the data path to route traffic around the fault. A dual-attached

host on the network is attached to both of these rings.

 

A dual-attached host on the ring has at least two ports – an A port,

where the primary ring comes in and the secondary ring goes out, and a B

port where the secondary ring comes in and the primary ring goes out. A

station may also have a number of M ports, which are attachments for

single-attached hosts. Hosts with at least one M port are called concentrators.

 

The sequence in which hosts gain access to the medium is

predetermined. A host generates a special signaling sequence called a

token that controls the right to transmit. This token is continually passed

around the network from one node to the next. When a station has

something to send, it captures the token, sends the information in

well-formatted FDDI frames, and then releases the token. The header of these

frames includes the address of the host(s) that will copy the frame. All

nodes read the frame as it is passed around the ring to determine if they are

the recipients of the frame. If they are, they extract the data while

retransmitting the frame to the next host on the ring. When the frame

returns to the originating host, the originating host strips the frame.

 

(E) ATM

Asynchronous Transfer Mode (ATM) is a network protocol that

transmits data at a speed of 155 Mbps and higher. ATM works by

transmitting all data in small packets of a fixed size; whereas, other

protocols transfer variable length packets. ATM supports a variety of

media such as video, CD-quality audio, and imaging. ATM employs a star

topology, which can work with fiber optic as well as twisted pair cable.

 

ATM is most often used to interconnect two or more local area

networks. It is also frequently used by Internet Service Providers to provide

high-speed access to the Internet for their clients. As ATM technology

becomes more cost-effective, it will provide another solution for

constructing faster local area networks.

 

ATM evolved from the standardization efforts for Broadband

Integrated Services Digital Networks (BISDN), which began in the

Consultative Committee on International Telephone and Telegraphy

(CCITT) in the mid-1980s. It was originally intimately bound up with the

emerging Synchronous Digital Hierarchy (SDH) standards, and was

conceived as a way in which arbitrary-bandwidth communication channels

could be provided within a multiplexing hierarchy consisting of a defined set

of fixed-bandwidth channels.

 

The basic principles of ATM as formulated by CCITT are:

 

1. ATM is considered as a specific packet oriented transfer mode based on

fixed length cells. Each cell consists of an information field and a header,

used mainly to determine the virtual channel and to perform the

appropriate routing. Cell sequence integrity is preserved per virtual

channel.

 

2. ATM is connection-oriented. The header values are assigned to each

section of a connection for the complete duration of the connection.

Signaling and user information are carried on separate virtual channels.

 

3. The information field of the ATM cells is carried transparently through the

network. No processing like error control is performed on it inside the

network.

 

4. All services (voice, video, data, etc.) can be transported via ATM, including

connectionless services. To accommodate various services, an adaptation

function for fitting information on all services into ATM cells is provided.

 

In the ATM model, the sender first establishes a connection (i.e., a virtual

circuit) to the receiver or receivers. During connection establishment, a route is

determined from the sender to the receiver(s) and routing information is stored

in the switches along the way. Using this connection, packets can be sent, but

they are chopped up by the hardware into small, fixed-size units called cells. The

cells for a given virtual circuit all follow the path stored in the switches. When

the connection is no longer needed, it is released and the routing information

purged from the switches.
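The chopping step can be sketched as follows. Real ATM cells carry a 48-byte payload behind a 5-byte header, but the zero-padding of the final cell and the (vci, payload) tuple format here are simplifying assumptions:

```python
CELL_PAYLOAD = 48   # standard ATM cell payload size in bytes (5-byte header not modelled)

def segment(packet, vci):
    """Chop a variable-length packet into fixed-size cells tagged with
    the virtual-channel identifier, padding the last cell with zeros;
    a minimal sketch of the virtual-circuit model described above."""
    cells = []
    for i in range(0, len(packet), CELL_PAYLOAD):
        chunk = packet[i:i + CELL_PAYLOAD]
        chunk += b"\x00" * (CELL_PAYLOAD - len(chunk))   # pad the final cell
        cells.append((vci, chunk))
    return cells
```

Every cell of the packet carries the same identifier, so the switches along the stored route can forward all of them over the same virtual circuit.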

 

With this scheme, it is now possible for a single network to be used to

efficiently transport an arbitrary mix of voice, data, broadcast television,

videotapes, radio and other information, replacing what were previously separate

networks (telephone, X.25, cable TV, etc.). In all cases, what the network sees is

cells; it does not care what is in them.

 

Cell switching lends itself to multicasting (one cell going to many

destinations), a technique needed for transmitting broadcast television to

thousands of houses at the same time. Conventional circuit switching, as used in

telephone networks, cannot handle this. Broadcast media such as cable TV can,

but they cannot handle point-to-point traffic without wasting bandwidth. The

advantage of cell switching is that it can handle both point-to-point and

multicasting efficiently.

 

Fixed-size cells allow rapid switching, which is much more difficult to achieve

with current store-and-forward packet switches. They also eliminate the danger

of a small packet being delayed because a big one is hogging a needed line.

With cell switching, after each cell is transmitted, a new one can be sent, even a

new one belonging to a different packet.

 

ATM has its own protocol hierarchy composed of the (lowest to highest)

physical layer, ATM layer, adaptation layer, and several upper layers. The

physical layer has the same functionality as layer 1 in the OSI model. The ATM

layer deals with cells and cell transport, including routing, so it covers OSI

layer 2 and part of layer 3. However, unlike OSI layer 2, the ATM layer does not

recover lost or damaged cells. The adaptation layer handles breaking packets into

cells and reassembling them at the other end, which does not appear explicitly in

the OSI model until layer 4.

 

(F) Token Bus

Token-bus systems provide a horizontal bus channel (bus)

while providing access to this bus channel as if it were a ring. The

protocol eliminates the collisions found in carrier-sense channels and

allows the use of a non-ring channel.

 

The protocol uses a control frame called an access token or

access right. Once held by a host, this token gives it exclusive

use of the bus. The token-holding host uses the bus for a period of

time to send and receive data, then passes the token to the next

designated host. In the bus topology, all hosts listen to and receive

the access token, but only the host allowed to seize the channel is

the host designated in the access token. All other hosts must wait

their turn to receive the token.

 

Hosts receive the token in a cyclic sequence,

which forms a logical ring on the physical bus. This form of token

passing is called an explicit token system, because the ordering of the

hosts’ use of the channel is set explicitly rather than by the physical topology.
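The predetermined cyclic sequence can be represented as a successor table; the descending-address ordering used here is one common token-bus convention and is assumed purely for illustration:

```python
def logical_ring_order(station_ids):
    """Build the logical ring for a token bus: every station hears the
    token on the shared bus, but only the designated successor may
    seize it. Stations are ordered by descending address, wrapping
    from the lowest back to the highest (assumed convention)."""
    order = sorted(station_ids, reverse=True)
    successor = {order[i]: order[(i + 1) % len(order)]
                 for i in range(len(order))}
    return successor
```

A station passes the token by addressing it to its successor in this table, which is how a ring-like ordering is imposed on a bus topology.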

 

(G) ALOHA

ALOHA was a method developed in the early 1970s by Norman

Abramson of the University of Hawaii. The idea behind ALOHA is for

uncoordinated users to compete for a channel. Although the original

technique used a ground-based packet radio system, the method may be

applied to any channel medium where users are contending for its use. The

method has already been used in satellite and bus networks.

 

The premise in ALOHA is that users are acting on a peer-to-peer

basis and all users have equal access to the channel. A user station

transmits whenever there is data to send. We refer to the data as data

packets. It is therefore possible that several users will send data packets

at the same time or that the time when they send their data packets

overlap. Simultaneous transmission will result in the data packets being

distorted. This situation is called packet collision. The receiving host will

have a way of detecting that a packet has collided with other packets

from other users. The packet collision necessitates the retransmission of

the damaged packets. Since the users of a satellite link know exactly what

was transmitted to the up-link channel and when it was transmitted, then

all the sender has to do is to listen to the down-link channel for

acknowledgement one up-down delay time after the packet was sent. If

no acknowledgement is received after one up-down time, then this will be

interpreted as the packet colliding with other packets, requiring

retransmission. However, before retransmitting, the sender will have to

delay a random amount of time to avoid colliding again with the same

user. This method is called Random ALOHA.
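The Random ALOHA send-and-listen loop might be sketched like this; `channel` is a hypothetical callable that reports whether the acknowledgement was heard one up-down delay after transmitting, and the retry cap and backoff range are illustrative:

```python
import random

def aloha_send(channel, max_tries=10, max_backoff=5.0):
    """Random ALOHA sender: transmit, listen one up-down delay for the
    acknowledgement, and treat silence as a collision: wait a random
    time, then retransmit. Returns the number of transmissions it
    took, or None if the packet kept colliding."""
    for attempt in range(1, max_tries + 1):
        if channel():                   # ack heard after one up-down time
            return attempt
        backoff = random.uniform(0.0, max_backoff)
        # a real sender would now sleep for `backoff` before retrying
    return None
```

The random delay is what keeps two senders whose packets collided from colliding again on their very next attempt.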

 

When the channel is heavily utilized, Random ALOHA will suffer from

degradation of throughput due to the fact that there will be numerous collisions.

Hence the introduction of Slotted ALOHA.

 

Slotted ALOHA requires a synchronized clock to be present at the earth

and satellite stations. The clocks are synchronized to send traffic at specific

periods. For example, the clocks may be set to send packets at 30 millisecond

(ms) intervals. This 30 ms is the packet duration, which is the time needed to

send one packet to the channel. All stations are required to send packets at the

start of each slot period. This way, if there is a collision, it is a complete overlap.

The same method as in Random ALOHA is used to decide when to transmit

packets.
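Deferring a packet to the next slot boundary is simple ceiling arithmetic, using the 30 ms packet duration from the example above:

```python
SLOT_MS = 30   # packet duration / slot length from the example above

def next_slot_start(now_ms):
    """Slotted ALOHA defers every transmission to the start of the next
    slot, so any collision is a complete overlap. A station that is
    exactly on a boundary may send immediately (ceiling division)."""
    return -(-now_ms // SLOT_MS) * SLOT_MS
```

Because every station rounds up to the same boundaries, partial overlaps between packets cannot occur, which is exactly why Slotted ALOHA wastes fewer transmissions than Random ALOHA under load.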
