4.2.2. Simulation Results and Algorithm Effectiveness Analysis
(1) Throughput
Figure 7 shows the curves of the average saturation throughput at node5 versus network load for the DCF, HBCC, HSCC, CA-MAC and HRCC algorithms under a simple tree topology. As can be seen from
Figure 7, when the network load is small, the throughput of all five protocols increases linearly with the input load. As the network load continues to increase, the network gradually becomes congested and the throughput stops growing: for the HRCC algorithm at a load of 15 packets/s, for the CA-MAC algorithm at 15 packets/s, for the DCF algorithm at 25 packets/s, for the HSCC algorithm at 35 packets/s, and for the HBCC algorithm at 40 packets/s. The average saturation throughputs of the DCF, HRCC, HSCC, CA-MAC and HBCC algorithms are approximately 29 packets/s, 34 packets/s, 49 packets/s, 34 packets/s and 55 packets/s, respectively.
It can be seen from the simulation results in
Figure 7 that the throughput performance of HBCC, which uses hop-by-hop bidirectional congestion control, is better than that of the HSCC, HRCC, CA-MAC and DCF algorithms. The average saturation throughput of HBCC is about 12% higher than that of the HSCC algorithm, about 62% higher than that of the HRCC algorithm, about 62% higher than that of the CA-MAC algorithm, and about 90% higher than that of the DCF algorithm. Because HBCC adopts bidirectional congestion control, a congested node can react quickly according to the congestion condition it detects, so that its buffered packets are sent out as soon as possible. Congestion is thereby relieved, the waiting time of packets in the buffer queue is reduced, and packets reach the destination node5 sooner. Therefore, the throughput of the network is greatly improved.
(2) Buffer overflow packet loss ratio
Figure 8 shows the curves of the buffer overflow packet loss ratio versus network load for the DCF, HBCC, HSCC, CA-MAC and HRCC algorithms under a simple tree topology. As can be seen from
Figure 8, the buffer overflow packet loss ratio of all five algorithms increases with increasing network load. The network begins to lose data packets at a load of 10 packets/s under the DCF algorithm, 15 packets/s under the HRCC algorithm, 15 packets/s under the CA-MAC algorithm, 25 packets/s under the HSCC algorithm, and 30 packets/s under the HBCC algorithm. As the load increases further, the buffer overflow packet loss ratio of the HBCC algorithm remains lower than that of the other four algorithms.
It can be seen from the simulation results in
Figure 8 that the buffer overflow packet loss ratio of HBCC, with bidirectional congestion control, is lower than that of the HSCC, HRCC, CA-MAC and DCF algorithms. When the load is 40 packets/s, the buffer overflow packet loss ratio of the HBCC algorithm is about 44% lower than that of the HSCC algorithm, about 79% lower than that of the HRCC algorithm, about 79% lower than that of the CA-MAC algorithm, and about 80% lower than that of the DCF algorithm. Because HBCC adopts bidirectional congestion control, when a node is congested, it can adaptively adjust the
CW of the previous-hop node so that it obtains a lower priority of access to the channel, which suppresses the data receiving rate of the congested node. It can also adaptively adjust its own contention window to obtain a higher priority of access to the channel and increase its sending rate. In this way, the congested node reduces the length of its buffer queue more quickly, the congestion of the network is alleviated, and buffer overflow packet loss is avoided.
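To make this decision logic concrete, the following is a minimal Python sketch of how a node could map the (previous-hop, current-node) congestion pair to a contention-window adjustment. It is not the authors' implementation: the congestion threshold, the CW bounds, and the bodies of eq2, eq5 and eq7 are illustrative placeholders standing in for Equations (2), (5) and (7), whose exact forms are given elsewhere in the paper.

```python
# Minimal sketch of the HBCC decision logic described above (not the authors' code).
# A link is viewed as (previous-hop node -> current node); a node is judged
# congested when its buffer queue exceeds a threshold, and the contention
# windows (CW) of the two nodes are then adjusted.

CONGESTION_THRESHOLD = 25      # assumed queue-length threshold (packets)
CW_MIN, CW_MAX = 32, 1024      # assumed 802.11-style CW bounds


def congested(queue_len: int) -> bool:
    return queue_len > CONGESTION_THRESHOLD


def eq2(prev_cw: int, cur_queue: int) -> int:
    # Placeholder for Equation (2): enlarge the previous hop's CW so it gets a
    # lower channel-access priority; the real rule depends on the queue length.
    return prev_cw * 2


def eq5(prev_cw: int, prev_queue: int) -> int:
    # Placeholder for Equation (5): shrink the congested sender's CW so it gets
    # a higher channel-access priority and drains its queue faster.
    return prev_cw // 2


def eq7(prev_cw: int, cur_cw: int):
    # Placeholder for Equation (7): when both nodes are congested, the current
    # (downstream) node is given priority to drain first.
    return prev_cw * 2, cur_cw // 2


def clamp(cw: int) -> int:
    return max(CW_MIN, min(CW_MAX, cw))


def hbcc_adjust(prev_queue: int, cur_queue: int, prev_cw: int, cur_cw: int):
    """Return the adjusted (previous-hop CW, current-node CW) for one link."""
    state = (congested(prev_queue), congested(cur_queue))
    if state == (False, True):      # 0-1: throttle the previous hop (Equation (2))
        prev_cw = eq2(prev_cw, cur_queue)
    elif state == (True, False):    # 1-0: congested sender drains faster (Equation (5))
        prev_cw = eq5(prev_cw, prev_queue)
    elif state == (True, True):     # 1-1: both congested (Equation (7))
        prev_cw, cur_cw = eq7(prev_cw, cur_cw)
    return clamp(prev_cw), clamp(cur_cw)


# Example: node1 -> node3 with only node3 congested (condition 0-1).
print(hbcc_adjust(prev_queue=10, cur_queue=45, prev_cw=32, cur_cw=32))  # (64, 32)
```

Only the mapping from the congestion pair to the equation applied is taken from the text; the direction and magnitude of each placeholder adjustment should be read as an assumption.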
(3) Average end-to-end delay
Figure 9 shows the curves of the average end-to-end delay versus network load for the DCF, HBCC, HSCC, CA-MAC and HRCC algorithms under a simple tree topology. As can be seen from
Figure 9, the average end-to-end delay of all the algorithms increases with increasing network load. It can also be seen from the figure that, among HRCC, DCF, HBCC and HSCC, the average end-to-end delay from high to low is: HRCC, DCF, HBCC, HSCC.
The HRCC algorithm trades an increase in the time that the previous-hop node waits to access the channel for congestion mitigation, which inevitably increases the delay somewhat, but in exchange achieves a lower packet loss ratio and a higher throughput. This is worthwhile for the whole network, because the packet loss ratio is the most important network performance evaluation index. The average end-to-end delay of the HSCC algorithm is the lowest, because when a node is congested, increasing the access priority of the congested node reduces the time the node waits for access to the channel and the time its data packets spend in the buffer queue, so the end-to-end delay is lower. The CA-MAC algorithm, however, does not include the delay optimization added in this paper, so, like HRCC, its delay remains large. The HBCC algorithm combines the two mechanisms, so its average end-to-end delay curve lies between those of the two algorithms, and its congestion mitigation effect is better. From the analysis of the above three network performance indicators, we can see that the HBCC algorithm greatly improves the average saturation throughput of the network, reduces the average end-to-end delay, and greatly reduces the buffer overflow packet loss ratio.
(4) Analysis of the congestion control process of the HBCC algorithm
Here, we analyze how the HBCC algorithm works by tracking the length of the buffer queue.
Figure 10 shows the distributions of the buffer queues of node0, node1 (the buffer queue of node2 is similar to that of node1), node3 and node4 over the simulation time when the network load is 20 packets/s and 25 packets/s.
Figure 11 shows the distributions of the buffer queues of node0, node1, node3 and node4 over the simulation time when the network load is 30 packets/s and 35 packets/s. In
Figure 10 and
Figure 11, the green histogram is the DCF algorithm, the red histogram is the HBCC algorithm, the brown areas are where the two histograms overlap, the horizontal axis is the buffer queue length, and the vertical axis is the number of occurrences.
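As a side note, histograms of this kind could be reproduced from per-node queue-length traces roughly as sketched below; the trace file names, the one-sample-per-line format, and the 50-packet buffer capacity are assumptions made here for illustration, not details taken from the paper's simulation scripts.

```python
# Sketch of how the overlaid queue-length histograms could be produced, assuming
# each simulation run exports one queue-length sample per line for a given node.
import numpy as np
import matplotlib.pyplot as plt

dcf_trace = np.loadtxt("dcf_node3_load20.txt")    # hypothetical DCF trace for node3
hbcc_trace = np.loadtxt("hbcc_node3_load20.txt")  # hypothetical HBCC trace for node3

bins = np.arange(0, 52)  # queue lengths from 0 up to an assumed full queue of 50 packets
plt.hist(dcf_trace, bins=bins, color="green", alpha=0.6, label="DCF")
plt.hist(hbcc_trace, bins=bins, color="red", alpha=0.6, label="HBCC")
# Where the two semi-transparent histograms overlap, the area appears brownish.
plt.xlabel("Buffer queue length (packets)")
plt.ylabel("Number of occurrences")
plt.title("node3, load = 20 packets/s")
plt.legend()
plt.show()
```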
As can be seen from
Figure 8, when the network load is 20 packets/s, packet loss has already occurred under the DCF algorithm, while no packet loss has occurred under the HBCC algorithm. As shown in
Figure 10c, the buffer queue length of node3 under the DCF algorithm has reached the full queue, while the buffer queue of node3 under the HBCC algorithm has not. When the network load increases to 25 packets/s, the full queue appears at node0 under the DCF algorithm because of the larger number of data packets generated, and the buffer queue length of node3 continues to increase. Under the HBCC algorithm, however, the buffer queues of node0 and node3 do not reach the full queue. When the load becomes heavier, the packet loss of the DCF algorithm is aggravated, while no packet loss occurs under the HBCC algorithm. These results are also confirmed by the packet loss ratio curves in
Figure 8.
From the changes in queue length, we can see that network congestion has been greatly alleviated. This is because, when the buffer queue is detected to be longer than the congestion threshold, the contention window is adaptively adjusted according to the queue length, and the adjusted contention window changes the node's data transmission rate. The congestion condition of node1 and node3 is 0–1, and the
CW is adjusted using Equation (2). As a result, the buffer queue length of node1 increases slightly, and the data reception rate of node3 is reduced. It can be seen from
Figure 10b that the buffer queue of node1 is slightly increased, and the buffer queue of node3 is below the full queue. The congestion condition of node3 and node4 is 1–0. At this time, the
CW is adjusted by Equation (5). The adjusted
CW increases the data transmission rate of node3, so that more data packets are sent on to node4. It can be seen from
Figure 10d that the buffer queue of node4 increases.
Figure 10c shows that the buffer queue of node3 is reduced to below the full queue. When the load is 25 packets/s, the congestion condition of node0 and node3 is 1–1, and the
CW is adjusted according to Equation (7). Node3 preferentially handles the congestion, and its buffer queue decreases under the HBCC algorithm, so the congestion of node0 can also be relieved.
Figure 10a shows that the buffer queue of node0 is also reduced.
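The per-link cases just walked through can be summarized compactly. The sketch below uses invented queue lengths and an assumed 25-packet congestion threshold; only the mapping from each link's congestion pair to the equation applied follows the text above.

```python
# Standalone illustration of which HBCC rule fires on each link of the tree at the
# 25 packets/s snapshot discussed above. The queue lengths and the threshold are
# invented for illustration; only the pair-to-equation mapping follows the paper.
THRESHOLD = 25  # assumed congestion threshold (packets)

# (previous hop, current node, prev queue length, current queue length) -- invented values
links = [
    ("node0", "node3", 40, 45),  # 1-1: both nodes congested
    ("node1", "node3", 10, 45),  # 0-1: only the downstream node congested
    ("node3", "node4", 45, 5),   # 1-0: only the upstream node congested
]

RULES = {
    (False, True): "Equation (2): enlarge the previous hop's CW",
    (True, False): "Equation (5): shrink the congested sender's CW",
    (True, True): "Equation (7): the current node drains first",
}

for prev, cur, prev_q, cur_q in links:
    state = (prev_q > THRESHOLD, cur_q > THRESHOLD)
    rule = RULES.get(state, "no adjustment (no congestion)")
    print(f"{prev} -> {cur}: congestion pair {int(state[0])}-{int(state[1])} -> {rule}")
```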
The queues analyzed in
Figure 10 are all cases where HBCC does not experience congestion. It can be seen from
Figure 8 that the HBCC algorithm becomes congested when the load reaches 35 packets/s. Therefore, based on
Figure 11, this paper analyses the change in the nodes' buffer queue lengths from the non-congested to the congested state. As shown in
Figure 11, under the DCF algorithm, as the load increases, the buffer queue length of node0 increases, and a large number of samples are clustered near the full queue. The buffer queue of node3 is saturated (this is also reflected in the delay curve of
Figure 9), and it can be seen from
Figure 8 that the network packet loss ratio is increasing. The buffer queue lengths of node1 and node4 are very low. Under the HBCC algorithm, the buffer queues of node0 and node3 are greatly reduced, while the buffer queue lengths of node1 and node4 both increase.
Comparing the changes in the buffer queues of the nodes under the two protocols, it can be seen that as congestion is aggravated, the congestion mitigation strength of the HBCC algorithm gradually increases, and the buffer queue lengths of the congested nodes decrease to a great extent. This is because, under the HBCC algorithm, as the buffer queue length increases, the adaptive adjustment of the contention window becomes stronger. Under the DCF algorithm, when the network load is 30 packets/s or 35 packets/s, node0 has reached the full queue and the buffer queue length of node3 is also clustered around 50, while the buffer queue lengths of node1 and node4 are very low. Under the HBCC algorithm, the congestion condition of node0 and node3 is 1–1. The HBCC algorithm adjusts the
CW of node0 according to Equation (7) and changes its data sending rate. After the congestion of node3 is alleviated, node0 also experiences an alleviation of congestion to a certain extent. As can be seen from
Figure 11a, the buffer queue of node0 is basically below the full queue. Under the HBCC algorithm, the congestion condition of node1 and node3 is 0–1. The HBCC algorithm adaptively adjusts the
CW of node1 according to Equation (2). The adjusted contention window reduces the data transmission rate of node1; the buffer queue of node1 shown in
Figure 11b increases noticeably, which reduces the data receiving rate of node3. The congestion condition of node3 and node4 is 1–0. The HBCC algorithm adjusts the
CW of node3 according to Equation (5) to increase its data sending rate. As can be seen from
Figure 11c, the buffer queue length of node3 is basically below the full queue. The buffer queue length of node4 has increased, as shown in
Figure 11d.
The node queue status under the two algorithms at different loads is shown in
Table 3. The adjustment effect of the HBCC algorithm is shown in
Table 4.
Table 4 shows that when the load is 20 packets/s, the average buffer queue length of each node under the HBCC algorithm decreases compared with the DCF algorithm. Because the degree of congestion is low at this time, the nodes under the HBCC algorithm are mostly in a non-congested state; therefore, compared with the DCF algorithm, the buffer queue lengths of the non-congested nodes are basically unchanged. When the load is 25 packets/s, the average buffer queue length of node0 under the HBCC algorithm is higher than that under the DCF algorithm, but node0 never reaches the full queue and no packet loss occurs. The other buffer queue changes are consistent with the principle of the HBCC algorithm.
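For completeness, the per-node averages compared in Table 4 could be derived from the same queue-length traces used for the histograms above; the file naming and trace format below are hypothetical.

```python
# Sketch of how the average buffer queue lengths in Table 4 could be computed from
# per-node queue-length traces; file names and format are assumptions, not the
# paper's actual simulation output.
import numpy as np

NODES = ["node0", "node1", "node3", "node4"]
LOADS = (20, 25)  # packets/s, the loads discussed for Table 4 in the text

for load in LOADS:
    for node in NODES:
        dcf_avg = np.loadtxt(f"dcf_{node}_load{load}.txt").mean()    # hypothetical file
        hbcc_avg = np.loadtxt(f"hbcc_{node}_load{load}.txt").mean()  # hypothetical file
        print(f"load {load} packets/s, {node}: DCF {dcf_avg:.1f}, HBCC {hbcc_avg:.1f}")
```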
According to the above analysis, when the network is congested, the HBCC algorithm improves the average saturation throughput and packet loss ratio performance of the network without substantially increasing the delay. To verify that the algorithm has good practicability and robustness, this paper next performs verification experiments in more complex network environments with convergent and stochastic topologies.