next up previous contents
Next: Measuring link failure and Up: Experiments Previous: Hardware used for the   Contents


Experiment 1: Measuring MPLS multicast throughput

Figure 6.3: Topology of the network used to measure the performance of the MPLS-Linux multicast implementation. We compare the throughput achieved with IP routing, MPLS unicast, and MPLS multicast with groups of two, three, and four members.
\includegraphics[width=\textwidth]{figures/testing_configuration_perf}
The goal of the first set of experiments is to evaluate the throughput achievable by our multicast MPLS forwarding engine and compare it with the throughput of a unicast path. In each experiment, a source sends data to one or several receivers using a modified version of the ttcp tool [52] [66] that supports multicast traffic. The ttcp tool is a traffic generator that lets us choose the packet size, the total amount of data sent, and the packet sending rate of the generated traffic. At the sender, we configure ttcp to send 100,000 UDP packets of 8192 bytes at the maximum speed offered by the hardware. Since we use UDP packets to measure throughput, packets may be dropped between the source and the receivers, and a receiver does not necessarily receive every packet. The throughput seen by a receiver is the amount of data received divided by the time taken to receive it; a program running on each receiver computes this value. We run each experiment five times, so for each experiment we collect five throughput values per receiver.
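The per-receiver measurement described above can be sketched as follows. This is not the actual ttcp code; the run data and the helper name `throughput_mbits` are illustrative, and only the computation (data received divided by reception time, then mean and standard deviation over five runs) follows the text.

```python
# Sketch of the receiver-side throughput computation and the
# five-run summary reported in the tables. Numbers are hypothetical.
import statistics

def throughput_mbits(bytes_received: int, elapsed_s: float) -> float:
    """Throughput seen by a receiver: data received / time to receive it."""
    return bytes_received * 8 / elapsed_s / 1e6

# Five hypothetical runs of one experiment: (bytes received, seconds).
# 819,200,000 bytes corresponds to 100,000 UDP packets of 8192 bytes.
runs = [(819_200_000, 70.1), (819_200_000, 70.2), (818_000_000, 70.1),
        (819_200_000, 70.3), (817_500_000, 70.0)]

values = [throughput_mbits(b, t) for b, t in runs]
mean = statistics.mean(values)    # average reported in the third column
stdev = statistics.stdev(values)  # standard deviation, in parentheses
print(f"{mean:.3f} Mbits/s ({stdev:.4f})")
```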

In Experiment 1.1.1 (see Figure 6.3(a)), the source PC2 sends IP packets to the receiver PC4. All three PCs involved in the experiments, PC2, PC3 and PC4, exclusively use IP forwarding; MPLS forwarding is disabled. In Experiment 1.1.2 (see Figure 6.3(b)), we set up a unicast LSP between the source PC2 and the receiver PC4. PC2 is set up as an ingress LER, PC3 is an LSR and PC4 is an egress LER. In Experiments 1.1.3 to 1.1.8, we set up multicast LSPs for multicast routing trees with two (Figure 6.3(c)), three (Figure 6.3(d)) and four (Figure 6.3(e)) group members. The core of the multicast routing tree is PC3, and PC2 is the sender. In Experiments 1.1.3 and 1.1.6, PC4 is the only receiver. In Experiments 1.1.4 and 1.1.7, both PC4 and PC5 are receivers, and in Experiments 1.1.5 and 1.1.8, PC4, PC5 and PC6 are receivers. We perform Experiments 1.1.1 and 1.1.2 with the probing mechanism, which detects and repairs link failures, deactivated. We perform the experiments that involve MPLS multicast first with the probing mechanism deactivated (Experiments 1.1.3, 1.1.4 and 1.1.5), and then with the probing mechanism activated (Experiments 1.1.6, 1.1.7 and 1.1.8).

Table 6.2: Multicast MPLS forwarding engine performance with UDP packets of 8192 bytes. Multicast MPLS achieves throughputs comparable with MPLS unicast and IP unicast. The probing mechanism has little influence on the maximum throughputs achieved. The size of the group has a limited impact on the performance of multicast MPLS.
    Exp.    Description             Throughput $th^1_a$             Difference to MPLS
    number                          (standard deviation), Mbits/s   unicast ($rdiff^1_a$)
    Probing mechanism deactivated
    1.1.1 IP unicast 93.552 (0.0039) +0.279 %
    1.1.2 MPLS Unicast $th^1_0=$93.292 (0.0001) (reference)
    1.1.3 Group of 2 members 93.286 (0.0106) -0.006 %
    1.1.4 Group of 3 members:
      PC4
      PC5
    1.1.5 Group of 4 members:
      PC4
      PC5
      PC6
    Probing mechanism activated, $T_p$=10 ms
    1.1.6 Group of 2 members 93.215 (0.0090) -0.083 %
    1.1.7 Group of 3 members:
      PC4
      PC5
    1.1.8 Group of 4 members:
      PC4
      PC5
      PC6


Consider Table 6.2. The first and second columns contain the experiment number and a short description of each experiment. Each experiment consists of five runs. In the third column, we give the average throughput seen by each receiver over the five runs of each experiment, and the standard deviation of the five throughput values. In the fourth column, we give the relative difference $rdiff^1_a$ between the throughput $th^1_a$ seen by a receiver $a$ and the throughput $th^1_0$ achieved with MPLS unicast when the probing mechanism is not activated; therefore, $rdiff^1_a = \frac{th^1_a-th^1_0}{th^1_0}$.
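The relative-difference column can be checked numerically from the definition above, using the values of Table 6.2:

```python
# Numerical check of the fourth column of Table 6.2:
# rdiff^1_a = (th^1_a - th^1_0) / th^1_0, where th^1_0 is the
# MPLS-unicast throughput with the probing mechanism deactivated.
def rdiff(th_a: float, th_0: float) -> float:
    return (th_a - th_0) / th_0

th_0 = 93.292                          # Experiment 1.1.2, MPLS unicast
print(f"{rdiff(93.552, th_0):+.3%}")   # Experiment 1.1.1, IP unicast: +0.279%
print(f"{rdiff(93.286, th_0):+.3%}")   # Experiment 1.1.3, group of 2: -0.006%
```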

First, we notice that IP unicast is faster than MPLS unicast (Experiments 1.1.1 and 1.1.2). For each incoming packet, an ingress LER performs a lookup in the IP routing table to find the FTN of the packet, and then a lookup in the MPLS output table to find the NHLFE pointed to by the FTN. The LER then pushes a label according to the information contained in the NHLFE. With IP routing, only one lookup is performed, so IP forwarding is faster than MPLS forwarding. Nevertheless, routers perform these lookups quickly, and the difference between the throughputs achieved by IP and MPLS unicast is small (0.279 %).
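The two-lookup ingress path can be sketched as follows. This is a hedged illustration, not code from the MPLS-Linux sources: the table contents, label values, and function names are invented, and only the lookup sequence (routing table to FTN, FTN to NHLFE, then label push) follows the text.

```python
# Illustrative tables; entries and labels are hypothetical.
ip_routing_table = {"10.0.4.0/24": {"ftn": 7, "ip_next_hop": "PC3"}}
mpls_output_table = {7: {"op": "push", "label": 1000, "next_hop": "PC3"}}  # NHLFEs keyed by FTN

def ip_forward(dst_prefix):
    # IP routing: a single routing-table lookup yields the next hop.
    return ip_routing_table[dst_prefix]["ip_next_hop"]

def ler_ingress_forward(dst_prefix):
    # Lookup 1: the IP routing table gives the FTN of the packet.
    ftn = ip_routing_table[dst_prefix]["ftn"]
    # Lookup 2: the MPLS output table gives the NHLFE the FTN points to.
    nhlfe = mpls_output_table[ftn]
    # Push the label contained in the NHLFE and send the labeled packet.
    return ("label", nhlfe["label"], nhlfe["next_hop"])

print(ip_forward("10.0.4.0/24"))           # one lookup
print(ler_ingress_forward("10.0.4.0/24"))  # two lookups plus a label push
```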

Second, in all experiments, the throughput is the same for all members of a multicast group. For example, in the experiments with the group of four members (Experiments 1.1.5 and 1.1.8), the throughputs at PC4, PC5 and PC6 are the same (93.3 Mbits/s when the probing mechanism is deactivated, 92.1 Mbits/s when it is activated). However, the throughput decreases with the number of group members. In the group of four members, when the probing mechanism is activated, the throughput is 1.3 % lower than with MPLS unicast (Experiment 1.1.8), while this throughput loss is only 0.8 % in groups of two or three members (Experiments 1.1.6 and 1.1.7). Packet duplication is a time-consuming operation and has a negative impact on the throughput in the network. We could not test the performance of the duplication mechanism for a larger number of duplications.
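The cost of duplication can be illustrated with a minimal sketch, assuming a forwarding engine that copies the packet once per outgoing branch; the function and branch names are hypothetical, not taken from the implementation.

```python
# Hedged sketch of per-branch packet duplication at a multicast LSR:
# the per-packet work grows with the number of downstream branches.
def multicast_forward(payload: bytes, branches):
    """branches: list of (out_label, out_interface) pairs for this multicast NHLFE."""
    sent = []
    for out_label, out_if in branches:
        copy = bytes(payload)  # duplication: one copy per outgoing branch
        sent.append((out_label, out_if, copy))
    return sent

# A three-branch tree (e.g. toward PC4, PC5 and PC6) costs three copies per packet.
frames = multicast_forward(b"x" * 1024, [(101, "eth1"), (102, "eth2"), (103, "eth3")])
print(len(frames))
```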

Third, the probing mechanism has a limited impact on the maximum throughput in the network (compare Experiments 1.1.3 and 1.1.6, 1.1.4 and 1.1.7, and 1.1.5 and 1.1.8). Throughputs are lower when the probing mechanism is activated; the decrease due to the probing mechanism reaches 1.3 % (Experiment 1.1.8). The probes consume bandwidth on the links, and this bandwidth is not available for the data that the sender has to transmit.

In summary, adding the multicast capability to the routers has a limited impact on the maximum throughput of the network when using UDP packets of 8192 bytes.

Table 6.3: Multicast MPLS forwarding engine performance with UDP packets of 1024 bytes. The limits of the hardware are reached and the performance of multicast MPLS is severely degraded when the traffic consists of small packets.
    Exp.    Description             Throughput $th^2_a$             Difference to MPLS
    number                          (standard deviation), Mbits/s   unicast ($rdiff^2_a$)
    Probing mechanism deactivated
    1.2.1 IP Unicast 91.694 (0.0025) +0.64 %
    1.2.2 MPLS Unicast $th^2_0=$91.114 (0.2217) (reference)
    1.2.3 Group of 2 members 88.809 (0.2215) -2.53 %
    1.2.4 Group of 3 members:
      PC4
      PC5
    1.2.5 Group of 4 members:
      PC4
      PC5
      PC6
    Probing mechanism activated, $T_p$=10 ms
    1.2.6 Group of 2 members 80.609 (1.4546) -11.53 %
    1.2.7 Group of 3 members:
      PC4
      PC5
    1.2.8 Group of 4 members:
      PC4
      PC5
      PC6


Now, we repeat the experiments with the UDP packet size set to 1024 bytes (instead of 8192 bytes) in Experiments 1.2.1 to 1.2.8. Consider Table 6.3. In the third column, we give the average throughput seen by each receiver over the five runs of each experiment, and the standard deviation of the five throughput values. In the fourth column, we give the relative difference $rdiff^2_a$ between the throughput $th^2_a$ seen by a receiver $a$ and the throughput $th^2_0$ achieved with MPLS unicast when the probing mechanism is not activated; therefore, $rdiff^2_a = \frac{th^2_a-th^2_0}{th^2_0}$.

The throughputs achieved in the network are lower than with 8192-byte packets. With IP unicast and MPLS unicast (Experiments 1.2.1 and 1.2.2), the throughput is now about 91 Mbits/s. With MPLS multicast, the throughput ranges between 80 Mbits/s (Experiment 1.2.8) and 91 Mbits/s (Experiments 1.2.3 and 1.2.4). With smaller packets, routers must process more packets per second than with large packets, which decreases their performance. In this experiment, we reach the processing capacity limit of the PC routers: they are not able to forward UDP packets of 1024 bytes at the maximum speed allowed by the network hardware. In the following experiments, we therefore use only 8192-byte UDP packets, in order to use the full capacity of the links without overloading the CPUs of the PC routers.
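A back-of-the-envelope calculation makes the per-packet load explicit. Ignoring per-packet headers, dividing the payload size by eight multiplies the number of packets the routers must process per second by eight:

```python
# Approximate packet rates at the observed throughputs, ignoring
# link-layer and IP/UDP header overhead.
def packets_per_second(throughput_mbits: float, packet_bytes: int) -> float:
    return throughput_mbits * 1e6 / (packet_bytes * 8)

print(round(packets_per_second(93.3, 8192)))  # ~1400 pkts/s with 8192-byte packets
print(round(packets_per_second(91.0, 1024)))  # ~11000 pkts/s with 1024-byte packets
```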


Yvan Pointurier 2002-08-11