Multicast Routing with PIM-SM

To get a better understanding of multicast PIM-SM routing, I created a case study. The picture below shows the GNS3 topology I used.

topology

I used EIGRP between the routers for full unicast connectivity. On all the routers I configured “ip multicast-routing” and on the connected interfaces I configured “ip pim sparse-mode”. R3 serves as the Rendezvous Point (RP) and R4 serves as the Mapping Agent (MA).

First configure full unicast connectivity. I will show one sample; the configuration of the other routers is pretty straightforward.

R1
!
interface FastEthernet1/0
ip address 10.0.0.5 255.255.255.252
duplex auto
speed auto
!
interface FastEthernet2/0
ip address 10.0.0.1 255.255.255.252
duplex auto
speed auto
!
interface FastEthernet3/0
ip address 172.18.100.254 255.255.255.0
duplex auto
speed auto
!
router eigrp 10
network 1.1.1.1 0.0.0.0
network 10.0.0.0 0.0.0.3
network 10.0.0.4 0.0.0.3
network 172.18.100.0 0.0.0.255
no auto-summary
eigrp router-id 1.1.1.1
!

Make sure there is full unicast connectivity from 172.18.100.254 to 192.168.157.254. Check the routing table on router R1 and use the ping command. The routing table will look like the picture below.
R1_RT

Now that we have full unicast connectivity we can start with the multicast basics. First configure multicast routing on all the routers. This can be done with the following command: “ip multicast-routing”. After that, configure the “ip pim sparse-mode” command on all the interfaces that participate in the multicast routing. In my topology this means all physical interfaces and the Loopback interfaces on routers R3 and R4.
After this we have basic multicast connectivity.
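As a sketch, this is what the basic multicast configuration looks like on R1, using the interfaces shown earlier (the other routers follow the same pattern on their own interfaces):

R1
!
ip multicast-routing
!
interface FastEthernet1/0
 ip pim sparse-mode
!
interface FastEthernet2/0
 ip pim sparse-mode
!
interface FastEthernet3/0
 ip pim sparse-mode
!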

Now it’s time to configure the Rendezvous Point, but first a little explanation about PIM-SM, Rendezvous Points and Mapping Agents.

PIM-SM*
PIM-SM is a protocol for efficiently routing IP packets to multicast groups that may span wide-area and inter-domain internets. The protocol is named protocol-independent because it does not depend on any particular unicast routing protocol for topology discovery, and sparse-mode because it is suitable for groups where only a very low percentage of the nodes will subscribe to the multicast session. Unlike earlier dense-mode multicast routing protocols such as DVMRP, which flooded packets across the network and then pruned off branches where there were no receivers, PIM-SM explicitly builds unidirectional shared trees rooted at a rendezvous point (RP) per group, and optionally creates shortest-path trees per source. PIM-SM generally scales fairly well for wide-area usage.

Rendezvous Point*
A Rendezvous Point (RP) is used as a temporary way to connect a would-be multicast receiver to an existing shared multicast tree passing through the rendezvous point. When the volume of traffic crosses a threshold, the receiver is joined to a source-specific tree, and the feed through the RP is dropped. You can think of this as obtaining copies of something through a friend who already subscribes, and when it proves useful or interesting, it’s worth the bother to become a direct subscriber.
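On Cisco IOS, the moment of this switchover from the shared tree to the source-specific tree can be tuned with the “ip pim spt-threshold” command. For example, to never switch over and keep all traffic flowing through the RP, you could configure:

!
ip pim spt-threshold infinity
!

By default the switchover happens immediately (a threshold of 0 kbps).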

I used Auto-RP to configure the RP, so here is a little more about Auto-RP.

Auto-RP
Auto-RP automatically distributes information to routers about which RP address serves which multicast groups. It simplifies the use of multiple RPs for different multicast group ranges, avoids manual configuration inconsistencies, and allows multiple RPs to act as backups to each other. Cisco routers automatically listen for this information.

Auto-RP relies on a router designated as the RP mapping agent. Potential RPs announce themselves to the mapping agent, and it resolves any conflicts. The mapping agent then sends the multicast group-to-RP mapping information to the other routers.
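For example, a candidate RP can be limited to a subset of the multicast group range with the group-list option of “ip pim send-rp-announce”; the access list number 10 below is just an illustration:

!
ip pim send-rp-announce Loopback0 scope 15 group-list 10
!
access-list 10 permit 239.0.0.0 0.255.255.255
!

The mapping agent would then advertise this RP only for the 239.0.0.0/8 group range.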

Mapping-Agent
The RP mapping agent listens to the announced packets from the RPs, then sends RP-to-group mappings in a discovery message that is sent to 224.0.1.40. These discovery messages are used by the remaining routers for their RP-to-group map. You can use one RP that also serves as the mapping agent, or you can configure multiple RPs and multiple mapping agents for redundancy purposes.

If you want to know more about the theory behind multicast routing, check out the book “Routing TCP/IP, Volume II” by Jeff Doyle and Jennifer DeHaven Carroll, or check out this link: http://www.cisco.com/c/en/us/support/docs/ip/ip-multicast/9356-48.html

Let’s start configuring. To configure the RP on R3 use the commands as in the below configuration:

R3
!
interface Loopback0
ip address 3.3.3.3 255.255.255.255
ip pim sparse-mode
!
ip pim send-rp-announce Loopback0 scope 15
!

And configure the Mapping Agent on router R4. The RP and MA can be configured on the same router, but for the sake of the example I configured them on different routers.

R4
!
interface Loopback0
ip address 4.4.4.4 255.255.255.255
ip pim sparse-mode
!
ip pim send-rp-discovery Loopback0 scope 15
!

Basically, this is all that is needed to make PIM-SM, the RP and the MA function. To check if everything works as expected, use the following commands:

show ip pim rp mapping
pim rp map
As can be seen in the above picture, the 224.0.0.0/4 group range is mapped to the RP on router R3, and the RP was chosen via Auto-RP.
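For reference, the output of “show ip pim rp mapping” in this topology should look roughly like this (the timers here are illustrative):

R1#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 3.3.3.3 (?), v2v1
    Info source: 4.4.4.4 (?), elected via Auto-RP
         Uptime: 00:12:36, expires: 00:02:51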

To check if there is multicast connectivity check the multicast routing table:

show ip mroute
sh ip mrou

Now we’re up to the next part: how to stream a video file over the multicast network to a multicast receiver.

In my example I used three virtual machines with Windows 7 installed, running in VMware Workstation. To stream the video file I installed VLC on C1. See below the configuration I used to stream the file.
C1
VLC sets the TTL to 1 by default. In the topology used here this won’t work, because there are at least 4 hops between sender and receiver, so I set the TTL to 12. You can verify this with a Wireshark trace like the one in the picture below.
ttl
Because I set the multicast (group) address to 239.255.255.250 on C1, check on router R1 whether the correct multicast group has been created, using the “show ip igmp groups” command.
sh ip igmp grou

 

Now configure the client, as shown in the picture below

C2

And there we go! We are looking at a UDP multicast video stream. To make C3 work, follow the same steps.

This is a fairly simple example of PIM-SM and multicast, but it nevertheless gives a good impression of the multicast capabilities!

If you run into any trouble, there are some handy multicast troubleshooting tools like mtrace and mstat.

Mtrace is the multicast equivalent of traceroute.
mtrace
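As a sketch, mtrace could be used like this in this topology, tracing the multicast path between the source and receiver subnets for the group used above (the addresses are the router interface addresses from the topology):

R1#mtrace 172.18.100.254 192.168.157.254 239.255.255.250

The output lists each hop from the destination back toward the source, along with the incoming interface and the multicast protocol in use.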

The mstat command gives you an overview of the multicast path in ASCII style. It shows drops, duplicates, TTLs and delays between the nodes in the path.
mstat

 

 

*Source of this information is http://www.cisco.com

 
