The data link layer can be characterized by two sublayers:
- Medium Access Control (MAC) sublayer
- Logical Link Control (LLC) sublayer
Aloha Protocols:
History:
The Aloha protocol was designed as part of a project at the
University of Hawaii. It provided data transmission between computers on
several of the Hawaiian Islands using radio transmissions.
- Communication was typically between remote stations and a central site named Menehune, or vice versa.
- All messages to the Menehune were sent on the same frequency.
- A second, outgoing frequency was used for messages from the central site to the remote computers.
- All stations listened for messages on this second frequency.
Pure Aloha:
Pure Aloha is an unslotted, fully decentralized protocol. It
is extremely simple and trivial to implement. The ground rule is: "when
you want to talk, just talk!". So a node that wants to transmit will
go ahead and send its packet on the broadcast channel, with no consideration
whatsoever of whether anybody else is transmitting.
One serious drawback here is that the sender does not know whether
what it is sending has been received properly (so to speak,
"have you been heard and understood?"). To resolve this, in
Pure Aloha, when a node finishes transmitting, it expects an acknowledgement within a
finite amount of time; otherwise it simply retransmits the data. This scheme
works well in small networks where the load is low. But in large, load-intensive
networks where many nodes may want to transmit at the same time, this
scheme fails miserably. This led to the development of Slotted Aloha.
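The Pure Aloha rule described above (transmit immediately, wait for an acknowledgement, retransmit on timeout) can be sketched as follows. This is a minimal illustration, not a real radio driver: `channel_send` and `ack_received` are hypothetical callbacks standing in for the broadcast channel and the acknowledgement timer.

```python
import random

def pure_aloha_send(frame, channel_send, ack_received, max_attempts=5,
                    rng=random.Random(42)):
    """Sketch of a Pure Aloha sender: transmit at once, wait for an ACK
    within a finite time, and retransmit after a random backoff if none
    arrives. Returns the number of attempts used, or None on give-up."""
    for attempt in range(max_attempts):
        channel_send(frame)            # "when you want to talk, just talk!"
        if ack_received():             # ACK arrived within the timeout
            return attempt + 1
        # no ACK: assume the frame was lost in a collision and back off
        backoff = rng.uniform(0, 2 ** attempt)
        # (a real node would sleep for `backoff` seconds before retrying)
    return None
```

The random backoff is what keeps two colliding senders from retransmitting in lockstep and colliding forever.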
Slotted Aloha:
This is quite similar to Pure Aloha, differing only in the
way transmissions take place. Instead of transmitting right at demand time, the
sender waits for some time. This delay is specified as follows: the timeline
is divided into equal slots, and transmission may
take place only at slot boundaries. To be more precise, Slotted Aloha makes
the following assumptions:
- All frames consist of exactly L bits.
- Time is divided into slots of size L/R seconds (i.e., a slot equals the time to transmit one frame).
- Nodes start to transmit frames only at the beginnings of slots.
- The nodes are synchronized so that each node knows when the slots begin.
- If two or more frames collide in a slot, then all the nodes detect the collision event before the slot ends.
In this way, the number of collisions that can possibly take
place is reduced by a huge margin, and hence the performance becomes much
better than Pure Aloha's. Collisions may still take place, but only between nodes that
become ready to transmit in the same slot. Nevertheless, this is a substantial
reduction.
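The "transmit only at slot boundaries" rule amounts to rounding the demand time up to the next multiple of the slot length L/R. A small sketch, using the L and R names from the assumptions above:

```python
def next_slot_start(now, L_bits, R_bps):
    """Return the earliest time at or after `now` (seconds) when a
    Slotted Aloha node may begin transmitting. Each slot lasts
    L/R seconds, i.e. exactly one frame transmission time."""
    slot = L_bits / R_bps
    slots_elapsed = int(now // slot)
    if now == slots_elapsed * slot:   # already on a slot boundary
        return now
    return (slots_elapsed + 1) * slot  # otherwise wait for the next one
```

For example, with L = 1000 bits and R = 1000 bps each slot is one second long, so a node that becomes ready at t = 2.5 s holds its frame until t = 3.0 s.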
Carrier Sense Multiple Access Protocols:
In both slotted and pure ALOHA, a node's decision to
transmit is made independently of the activity of the other nodes attached to
the broadcast channel. In particular, a node neither pays attention to whether
another node happens to be transmitting when it begins to transmit, nor stops
transmitting if another node begins to interfere with its transmission. As
humans, we have human protocols that allow us not only to behave with
more civility, but also to decrease the amount of time spent
"colliding" with each other in conversation, and consequently to
increase the amount of information we exchange in our conversations. Specifically,
there are two important rules for polite human conversation:
- Listen before speaking: If someone else is speaking, wait until they are done. In the networking world, this is termed carrier sensing - a node listens to the channel before transmitting. If a frame from another node is currently being transmitted into the channel, a node then waits ("backs off") a random amount of time and then again senses the channel. If the channel is sensed to be idle, the node then begins frame transmission. Otherwise, the node waits another random amount of time and repeats this process.
- If someone else begins talking at the same time, stop talking. In the networking world, this is termed collision detection - a transmitting node listens to the channel while it is transmitting. If it detects that another node is transmitting an interfering frame, it stops transmitting and uses some protocol to determine when it should next attempt to transmit.
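The first rule, listen before speaking, can be sketched as a sense-and-backoff loop. As with the earlier sketch, `channel_busy` and `send_frame` are hypothetical callbacks, not part of any real networking API:

```python
import random

def csma_transmit(channel_busy, send_frame, rng=random.Random(0),
                  max_backoffs=10):
    """Sketch of carrier sensing: sense the channel, back off a random
    amount of time while it is busy, and transmit once it is idle."""
    for _ in range(max_backoffs):
        if not channel_busy():      # carrier sensing: is the channel idle?
            send_frame()            # idle: begin the frame transmission
            return True
        wait = rng.uniform(0, 1)    # busy: pick a random backoff
        # (a real node would sleep for `wait` seconds, then sense again)
    return False                    # channel never became idle; give up
```

The second rule, collision detection, would additionally require listening *while* `send_frame` runs and aborting on interference; that is the refinement CSMA/CD adds below.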
It is
evident that the end-to-end propagation delay of a broadcast channel -
the time it takes for a signal to propagate from one end of the channel to the
other - will play a crucial role in determining its performance. The longer
this propagation delay, the larger the chance that a carrier-sensing node is
not yet able to sense a transmission that has already begun at another node in
the network.
CSMA - Carrier Sense Multiple Access
This is the simplest version of the CSMA protocol, as described
above. It does not specify any collision detection or handling. So collisions
might and WILL occur, and clearly, then, this is not a very good protocol for
large, load-intensive networks.
So we need an improvement over CSMA - this led to the development of CSMA/CD.
CSMA/CD - CSMA with Collision Detection
In this protocol, while transmitting the data, the sender
simultaneously tries to receive it. So, as soon as it detects a collision (it
does not receive its own data back), it stops transmitting. Thereafter, the node waits
for some time interval before attempting to transmit again. Simply put, "listen
while you talk". But how long should one wait for the carrier to be
freed? There are three schemes to handle this:
- 1-Persistent: In this scheme, transmission proceeds immediately if the carrier is idle. However, if the carrier is busy, the sender continues to sense the carrier until it becomes idle and then transmits at once. The main problem here is that if more than one transmitter is ready to send, a collision is GUARANTEED!!
- Non-Persistent: In this scheme, the broadcast channel is not monitored continuously. The sender polls it at random time intervals and transmits whenever the carrier is idle. This decreases the probability of collisions. But it is not efficient in a low-load situation, where the number of collisions is anyway small. The problems it entails are:
- If the back-off time is too long, the idle time of the carrier is wasted in some sense
- It may result in long access delays
- p-Persistent: Even if a sender finds the carrier to be idle, it uses a probabilistic rule to determine whether to transmit or not. Put simply, "toss a coin to decide". If the carrier is idle, then transmission takes place with probability p and the sender defers with probability 1-p. This scheme is a good trade-off between the non-persistent and 1-persistent schemes. For low-load situations a high p works well (p = 1 reduces to 1-persistent), while for high-load situations p should be lower. Clearly, the value of p plays an important role in determining the performance of this protocol, and the same p is likely to give different performance at different loads.
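The three persistence rules above differ only in what a node does after sensing the carrier. A minimal sketch of that decision, where the return values ("transmit", "keep_sensing", "wait_random") are illustrative names chosen here, not standard terms:

```python
import random

def persistence_decision(scheme, carrier_idle, p=0.5, rng=random.Random(7)):
    """Decide what a CSMA node does after one carrier-sense, under each
    of the three persistence schemes described above."""
    if scheme == "1-persistent":
        # transmit the instant the carrier is idle, else keep sensing
        return "transmit" if carrier_idle else "keep_sensing"
    if scheme == "non-persistent":
        # if busy, do not monitor continuously: re-poll after a random delay
        return "transmit" if carrier_idle else "wait_random"
    if scheme == "p-persistent":
        if not carrier_idle:
            return "keep_sensing"
        # idle: toss a coin - transmit with probability p, else defer
        return "transmit" if rng.random() < p else "wait_random"
    raise ValueError("unknown scheme: " + scheme)
```

Setting p = 1 makes the p-persistent branch behave exactly like the 1-persistent one, which is why the text calls 1-persistent a special case.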
CSMA/CD does not work in some wireless
scenarios, called "hidden node" problems. Consider a situation
where there are three nodes - A, B and C - communicating with each other using a
wireless protocol. B can communicate with both A and C, but A and C
lie outside each other's range and hence cannot communicate directly with each
other. Now suppose both A and C want to communicate with B simultaneously.
Both will sense the carrier to be idle and hence will begin transmitting,
and even if there is a collision, neither A nor C will ever detect it. B, on the
other hand, will receive two packets at the same time and might not be able to
understand either of them. To get around this problem, a better version called
CSMA/CA was developed, especially for wireless applications.
CSMA with Collision Avoidance
We have observed that CSMA/CD breaks down in wireless networks because of the hidden node and exposed node problems. We will have a quick recap of these two problems through examples.
Hidden Node Problem:
In a wireless network it is possible that A is
sending a message to B while C is out of A's range; hence, while
"listening" on the network, C will find it to be free and
might try to send packets to B at the same time as A. So there will be a
collision at B. The problem can be viewed as A and C being hidden from
each other; hence it is called the "hidden node problem".
Exposed Node Problem:
If C is transmitting a message to D and B wants to transmit
a message to A, B will find the network to be busy, as B hears C transmitting.
Yet even if B had transmitted to A, it would not have caused a problem at A or
D. Carrier sensing thus does not allow B to transmit to A, even though the two
transmissions could have proceeded in parallel.
Addressing hidden node problem (CSMA/CA)
Consider the figure above. Suppose A wants to send a packet
to B. It will first send a small packet to B called "Request to
Send" (RTS). In response, B sends a small packet to A called "Clear
to Send" (CTS). Only after A receives the CTS does it transmit the actual
data. Now, any node that can hear either the RTS or the CTS assumes the network
to be busy. Hence, even if some node D that is out of range of both A and B
sends an RTS to C (which can hear at least one of the RTS or CTS exchanged between A and
B), C will not reply with a CTS, and hence communication will not be
established between C and D.
One issue that needs to be addressed is how long the rest of the nodes
should wait before they can transmit data over the network. The answer is that
the RTS and CTS carry some information about the size of the data that A
intends to transfer. So the other nodes can calculate the time that the
transmission will take and assume the network to be free after that.
Another interesting issue is what a node should do if it hears an RTS but no
corresponding CTS. One possibility is to assume that the recipient node has
not responded and hence no transmission is going on, but there is a catch in
this. It is possible that the node hearing the RTS is just on the boundary of the
range of the node sending the CTS. It then does hear the CTS, but the signal is so deteriorated
that it fails to recognize it as a CTS. Hence, to be on the safe side, a node
will not start transmitting if it hears either an RTS or a CTS.
The assumption made in this whole discussion is that if a node X can send packets to a node Y, it can also receive packets from Y, which is a fair assumption given that we are talking about a local network where standard equipment would be used. If that is not the case, additional complexities are introduced into the system.
Does CSMA/CD work universally in wired networks?
The problem of range exists in wired networks as well, in
the form of deterioration of signals. Normally, to counter this, we use
repeaters, which can regenerate the original signal from a deteriorated one.
But does that mean we can build networks as long as we want using
repeaters? The answer, unfortunately, is NO! The reason is that beyond a certain
length CSMA/CD will break down.
The mechanism of collision detection that CSMA/CD follows is
listening while talking. What this means is that as long as a node is transmitting
a packet, it is also listening on the cable. If the data it hears is
different from the data it is transmitting, it assumes a collision. Once it has
finished transmitting the packet without detecting a collision while the
transmission was going on, it assumes the transmission was successful. The
problem arises when the distance between the two nodes is too large. Suppose A
wants to transmit some packet to B, which is at a very large distance from A.
Data can travel on a cable only at a finite speed (usually 2/3 c, c being the
speed of light). So it is possible that the whole packet has been transmitted by A
onto the cable before its first bit has even reached B. In that
case, if a collision occurs, A will be unaware of it. Therefore
too long a network is a problem.
Let us try to parametrize the above problem. Suppose "t" is the time taken by node A to transmit the packet onto the cable, and "T" is the time the packet takes to travel from A to B. Suppose transmission at A starts at time t0. In the worst case, the collision takes place just as the first bit is about to reach B, say at t0 + T - e (e being very small). The collision information then takes T - e time to propagate back to A, so at t0 + 2(T - e) A should still be transmitting. Hence, for correct detection of the collision (ignoring e), we need t >= 2T.
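The condition t >= 2T translates directly into a minimum frame length: since t = L/R and T = d/v, we need L >= 2dR/v. A numeric sketch, using the 2/3 c cable speed mentioned above and otherwise illustrative numbers (not taken from any particular standard):

```python
def min_frame_bits(distance_m, rate_bps, signal_speed_mps=2e8):
    """From t >= 2T: the frame's transmission time L/R must be at least
    twice the one-way propagation delay d/v, so L >= 2*d*R/v.
    2e8 m/s is roughly 2/3 the speed of light, as stated in the text."""
    T = distance_m / signal_speed_mps   # one-way propagation delay (s)
    return 2 * T * rate_bps             # minimum frame length in bits

# e.g. a 2500 m cable at 10 Mbps needs frames of about 250 bits or more
```

Doubling either the cable length or the bit rate doubles the required minimum frame size, which is why CSMA/CD imposes a maximum network length for a given rate and frame format.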


