"In October of '86, the Internet had the first of what became a series of congestion collapses.
..., the data throughput from LBL to UC Berkeley (sites separated by 400 yards and 2 IMP hops, i.e., two routers) dropped from 32 Kbps to 40 bps."
In other words, in the analysis of the TCP congestion control scheme, we always assume that:
Every TCP source is sending at its maximum data rate.
This is achieved by sending packets that are as large as possible, i.e., TCP always transmits packets of size equal to MSS bytes.
The dependency between transmission rate and window size is pretty complicated and very dynamic in nature.
The following examples derive a simple relationship between the data transmission rate and the transmit window size.
But do not conclude that the data rate is proportional to the window size. The examples are "idealized": network delays, route changes, and other factors can make the relationship very unpredictable and dynamic.
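The idealized relationship can be sketched with a tiny throughput model (the function name and the 8 KB / 100 ms numbers are illustrative, not from the notes):

```python
# Idealized model: with a transmit window of W bytes and a round-trip time
# of RTT seconds, the sender can push at most one full window per RTT,
# so the data rate is at most W / RTT bytes per second.
def ideal_rate_bps(window_bytes: int, rtt_sec: float) -> float:
    return 8 * window_bytes / rtt_sec  # bits per second

# e.g. an 8 KB window over a 100 ms round trip gives about 655 Kbps:
rate = ideal_rate_bps(8 * 1024, 0.100)
```

In reality queuing delay, route changes, and loss make the achieved rate far less predictable than this upper bound suggests.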
Transmit Window
The Congestion Window size (CWND) changes constantly over time; in fact, it changes faster than the weather, and it is just as unpredictable...
AWS is negotiated at connection establishment and remains unchanged afterwards
CWND changes over time !!!
TWS = min(AWS, CWND)
(We have not yet discussed HOW TCP changes the value of CWND; that comes next.)
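The rule above can be sketched as a one-line helper (names are illustrative):

```python
def transmit_window(aws: int, cwnd: int) -> int:
    # The sender may never have more unacknowledged bytes in flight than
    # either the receiver (AWS) or the network (CWND) is believed to allow.
    return min(aws, cwnd)

# Whichever limit is smaller wins:
receiver_limited = transmit_window(aws=8_000, cwnd=100_000)   # 8000
network_limited = transmit_window(aws=65_535, cwnd=4_000)     # 4000
```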
In the remainder of the discussion, we will discuss how TCP updates the value of CWND
Rather, TCP will try to reach this maximum transmission rate in a piecemeal fashion.
The start-up phase ends when TCP has reached the maximum transmission rate that it "believes" to be safe.
Because TCP has reached the maximum safe level, it would appear that there is still some more capacity available; it would be a shame NOT to use the available capacity !!!
If the network can handle this transmission rate, TCP will not need to do any congestion control !!! (Because the bottleneck is at the receiver...)
The picture above shows a scenario where the network capacity is less than what the receiver can handle, i.e., the network is the bottleneck.
Because the packet drop happens at the moment when the sender was transmitting 50 Kbps, the new target congestion rate is set to 25 Kbps.
(In the figure, the next drop happens when the sender is transmitting 30 Kbps.)
Because that packet drop happens at the moment when the sender was transmitting 30 Kbps, the new target congestion rate is set to 15 Kbps.
And so on....
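The halving rule in the example above can be checked with a trivial sketch (the helper name is hypothetical):

```python
# Multiplicative decrease, as in the scenario above: each packet drop
# halves the target rate relative to the rate at the moment of the drop.
def target_after_drop(rate_kbps: float) -> float:
    return rate_kbps / 2

# Drops while sending at 50 Kbps and then at 30 Kbps
# give new targets of 25 Kbps and 15 Kbps:
targets = [target_after_drop(r) for r in (50.0, 30.0)]
```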
In the slow start phase, transmission rate increases exponentially in time.
In the congestion avoidance phase, transmission rate increases linearly in time.
We will look at each mechanism separately and indicate when each mechanism is appropriate.
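A minimal sketch contrasting the two growth patterns, with CWND counted in units of MSS and one step per RTT (the ssthresh value of 16 is illustrative):

```python
MSS = 1.0  # count CWND in units of MSS

def next_cwnd(cwnd: float, ssthresh: float) -> float:
    # Slow start: CWND doubles every RTT (exponential growth in time),
    # capped at ssthresh.  Congestion avoidance: CWND grows by one MSS
    # per RTT (linear growth in time).
    if cwnd < ssthresh:
        return min(2 * cwnd, ssthresh)
    return cwnd + MSS

cwnd, trace = 1.0, []
for _ in range(8):
    trace.append(cwnd)
    cwnd = next_cwnd(cwnd, ssthresh=16.0)
# trace grows 1, 2, 4, 8, 16 (exponential), then 17, 18, 19 (linear)
```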
Initialization:
Slow Start:
Why not just set CWND to SSThresh and be done with it ???
(i.e., CWND > SSThresh)
Example of TCP operation in the congestion avoidance phase:
TCP sends out 4 packets (each containing MSS bytes) to the receiver.
CWND = CWND + MSS * MSS/CWND   // CWND = 4 MSS:     4 MSS + MSS * MSS/(4 MSS)     = 4 MSS + MSS * 1/4     = 4.25 MSS
CWND = CWND + MSS * MSS/CWND   // CWND = 4.25 MSS:  4.25 MSS + MSS * MSS/(4.25 MSS)  = 4.25 MSS + MSS * 1/4.25  = 4.485 MSS
CWND = CWND + MSS * MSS/CWND   // CWND = 4.485 MSS: 4.485 MSS + MSS * MSS/(4.485 MSS) = 4.485 MSS + MSS * 1/4.485 = 4.708 MSS
CWND = CWND + MSS * MSS/CWND   // CWND = 4.708 MSS: 4.708 MSS + MSS * MSS/(4.708 MSS) = 4.708 MSS + MSS * 1/4.708 = 4.92 MSS
(In the slow start phase, CWND DOUBLES after each RTT seconds)
CWND = CWND + MSS * MSS/CWND + MSS/8
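The per-ACK update (without the MSS/8 variant) can be checked numerically; this reproduces the worked example above, starting from CWND = 4 MSS:

```python
MSS = 1.0  # measure CWND in units of MSS

def on_ack(cwnd: float) -> float:
    # Congestion-avoidance increment applied on every ACK; after a full
    # window's worth of ACKs this adds roughly one MSS per RTT.
    return cwnd + MSS * MSS / cwnd

cwnd, steps = 4 * MSS, []
for _ in range(4):
    cwnd = on_ack(cwnd)
    steps.append(round(cwnd, 3))
# steps: 4.25, 4.485, 4.708, 4.921 -- matching the worked example
```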
So why is TCP so foolish ???
If TCP stopped increasing CWND, it would not be true to its goal.
(This technique is similar to kids testing their boundaries by asking their parents for favors over and over again... the boundary may have moved :-))
This is the new "safe" operation level...
Fast Retransmit
export PATH=/usr/local/gnu/gcc/4.1.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/gnu/gcc/4.1.0/lib:$LD_LIBRARY_PATH
/home/cheung/NS/run-ns Tahoe.tcl
The NAM (Network Animator) output file is Tahoe.nam; to view it:
/home/cheung/NS/bin/nam Tahoe.nam
In gnuplot, issue the command:
plot "WinFile" using 1:2 title "Flow 1" with lines 1
You should see this plot:
You can see the operation of TCP Tahoe clearly from the above figure:
TCP marks SSThresh = 25 (approximately) and begins another slow start
SSThresh is approximately 22.
Most of the retransmissions are Fast Retransmit actions.
When TCP performs a fast retransmit (so TCP did not time out):
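A rough sketch of the difference in loss reactions between Tahoe and Reno as described here (a simplification for illustration; real implementations track more state, such as fast-recovery window inflation):

```python
def on_loss(cwnd: float, timeout: bool, variant: str) -> tuple[float, float]:
    # Returns (new_cwnd, new_ssthresh) in units of MSS.
    ssthresh = max(cwnd / 2, 2.0)
    if timeout or variant == "tahoe":
        # Tahoe (and any variant on timeout): restart from slow start.
        return 1.0, ssthresh
    # Reno on fast retransmit: drop to ssthresh, skipping slow start.
    return ssthresh, ssthresh
```

The performance gain of Reno comes from that last line: after three duplicate ACKs it resumes near half the old window instead of crawling back up from 1 MSS.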
|
export PATH=/usr/local/gnu/gcc/4.1.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/gnu/gcc/4.1.0/lib:$LD_LIBRARY_PATH
/home/cheung/NS/run-ns Reno.tcl
The NAM (Network Animator) output file is Reno.nam; to view it:
/home/cheung/NS/bin/nam Reno.nam
In gnuplot, issue the command:
plot "Reno-Window" using 1:2 title "Flow 1" with lines 1
You should see this plot:
You can see that this small change in TCP Reno has resulted in a huge performance improvement:
Many more problems and issues with TCP surfaced after TCP Reno was introduced.
(There is a paper that points out the phenomenon.)
Example that illustrates TCP synchronization:
/home/cheung/NS/run-ns Reno.tcl
To see the plot of the CWND of TCP, save the Congestion Window CWND plot file in your directory and run gnuplot
In gnuplot, issue the command:
plot "WinFile" using 1:2 title "Flow 1" with lines 1, "WinFile2" using 1:2 title "Flow 2" with lines 2
You should see this plot:
This kind of behavior is not good, because the best way to utilize all network capacity is for only one of the flows to cut back.
(But it should NOT always be the same flow; otherwise you have unfairness.)
/home/cheung/NS/run-ns Reno.tcl
(The NAM file is too big and I deleted it...)
To see the plot of the CWND of TCP, save the Congestion Window CWND plot file in your directory and run gnuplot
In gnuplot, issue the command:
plot "WinFile" using 1:2 title "Flow 1" with lines 1, "WinFile2" using 1:2 title "Flow 2" with lines 2
You should see this plot:
In these networks, the usable window size is huge... hundreds of thousands of packets.
TCP cannot afford the luxury of increasing its window size by 1 in each RTT.
In order to reach the full capacity of the network, TCP must increase faster...
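The claim about huge windows can be checked with a bandwidth-delay-product calculation (the 10 Gbps / 100 ms / 1500-byte figures are illustrative, not from the notes):

```python
# The window needed to keep a fast, long path full is the
# bandwidth-delay product, expressed here in packets.
def window_in_packets(bandwidth_bps: float, rtt_sec: float,
                      packet_bytes: int) -> float:
    return bandwidth_bps * rtt_sec / (8 * packet_bytes)

# e.g. a 10 Gbps path with a 100 ms RTT and 1500-byte packets
# needs a window of roughly 83,000 packets:
w = window_in_packets(10e9, 0.100, 1500)
```

Growing such a window by one packet per RTT would take tens of thousands of round trips, which is why faster-growing variants were developed.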
/home/cheung/NS/run-ns Reno.tcl
(The NAM file is too big and I deleted it...)
To see the plot of the CWND of TCP, save the Congestion Window CWND plot file in your directory and run gnuplot
In gnuplot, issue the command:
plot "WinFile" using 1:2 title "Flow 1" with lines 1
You should see this plot:
http://www.mathcs.emory.edu/~cheung/Courses/558-old/Syllabus/6-transport/TCP.html
Original post: http://www.cnblogs.com/forcheryl/p/4053288.html