12 September 2014

Multicast: When everyone gets it but not everyone understands

 Got to play with multicast.  When broken down step by step like the INE workbook does it, it wasn't so bad.  Hopefully I can maintain that understanding as I move forward.  Now, I did notice something interesting before moving on.  For the switch portion of this lab, I used IOU.  I got snooping working, but I could not get profiles or MVR to work.  Really hoping those are not on the test, since the test devices are virtual as well.

A big gotcha for this multicast section is that I cannot open the PIM section of the 15 code configuration guide.  Any release.  To do the multicast section, I had to revert back to the 12.4T configuration guide.  The 15 code guide just returns an HTTP 404.

Interesting.  There is a command to check the RPF for a source.  It is show ip rpf <source_ip>.  It shows the following:
R5#show ip rpf 155.1.146.6
RPF information for ? (155.1.146.6)
  RPF interface: GigabitEthernet0/0.45
  RPF neighbor: ? (155.1.45.4)
  RPF route/mask: 155.1.146.0/24
  RPF type: unicast (eigrp 100)
  Doing distance-preferred lookups across tables
  RPF topology: ipv4 multicast base, originated from ipv4 unicast base

To debug RPF, you can disable CEF switching on the interfaces that multicast packets are received on and sent out of with no ip mfib cef input and no ip mfib cef output.  Then debug the process-switched multicast packets to see any RPF issues.  This will let you know what interface the multicast packets are coming in on and what happened to them.  If a packet is accepted (sent on), it will show that too, along with the interface it is sent out of.  The no ip mroute-cache command is deprecated.  Cisco wants us to use the no ip mfib commands now.  Good to know.
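A rough sketch of that setup, reusing an interface from this lab as a placeholder:

R5(config)#interface GigabitEthernet0/0.45
R5(config-if)#no ip mfib cef input
R5(config-if)#no ip mfib cef output
R5(config-if)#end
R5#debug ip mpacket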

When using ip mroute, be as specific as possible for the source.  Too loose and you can mess up other multicast sources.  It will cause a source to fail RPF by having it look to be coming from the interface in the mroute.  Tighter is always better.
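For example, a tight static mroute for the source network from the RPF output above (addresses reused from this lab):

ip mroute 155.1.146.0 255.255.255.0 155.1.45.4

versus the loose version that can break RPF for every other source:

ip mroute 0.0.0.0 0.0.0.0 155.1.45.4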

So in dynamic routing protocols, the static entries are king.  In multicast, static RP entries lose to dynamic entries.  The way to beat that is the override option when configuring a static RP.  Using override will, you guessed it, override the dynamic entry and force the use of the static one.
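A quick sketch, using the RP address from this lab:

ip pim rp-address 150.1.5.5 override

An optional standard ACL can also sit between the address and override to limit which groups the static RP covers.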

PIM sparse mode uses tunnels for the first-hop router to talk with the RP.  The first-hop router uses its tunnel to encapsulate Register messages toward the RP.  The RP gets two tunnels: one for encapsulating and one for decapsulating.  These tunnels can mess up configs too.  If you paste a config into a router and the multicast tunnel gets made before a manual tunnel, the manual tunnel will fail if they share the same number.  The tunnels are not configurable and can be seen with show derived-config interface tunnel#.  An example is below.
R6#show derived-config int tun0
Building configuration...

Derived configuration : 205 bytes
!
interface Tunnel0
 description Pim Register Tunnel (Encap) for RP 150.1.5.5
 ip unnumbered GigabitEthernet0/0.146
 tunnel source GigabitEthernet0/0.146
 tunnel destination 150.1.5.5
 tunnel tos 192
end

To prevent the fallback from sparse mode to dense mode in PIM, use no ip pim dm-fallback.  Good for when you have only a couple of groups that you want to be dense without any way for the others to fall back to it.  Helps keep traffic from going all over.

PIM Assert is used so that only one router on a broadcast segment sends multicast traffic.  To determine the winner, the routers let each other know the AD and metric of the routing protocol to get to the source.  Best AD wins, followed by best metric.  If both are a tie, then highest IP wins.  An ip mroute with no distance specified beats EIGRP in AD for example. 

When assigning a router to be the RP for a network via AutoRP, I have to remember to enable it for PIM as well.  And since two multicast addresses have to propagate throughout the network, sparse-dense-mode is the best choice.  Just remember that this can be a problem if a deny-any group list is used for the RP anywhere in the network.  That will cause all of those groups to fall into dense-mode operation.
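A minimal AutoRP sketch (Loopback0, the scope, and the group range are all placeholders):

On the candidate RP:
access-list 1 permit 239.0.0.0 0.255.255.255
ip pim send-rp-announce Loopback0 scope 16 group-list 1

On the mapping agent:
ip pim send-rp-discovery Loopback0 scope 16

Plus ip pim sparse-dense-mode on the transit interfaces so 224.0.1.39 and 224.0.1.40 can flood.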

With the ip pim rp-announce-filter command, if you omit the rp-list option, all announcements with groups matching the group-list are matched.  If you don't use the group-list option, then all updates from the RPs in the rp-list are matched.  Good way to deny or allow all based on RP or group.
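On the mapping agent it might look like this (ACL numbers are arbitrary, RP address reused from this lab):

access-list 10 permit 150.1.5.5
access-list 20 permit 239.0.0.0 0.255.255.255
ip pim rp-announce-filter rp-list 10 group-list 20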

Now I get the use of ip pim autorp listener.  If all the interfaces are in sparse mode, it allows the router to still flood 224.0.1.39 and 224.0.1.40 in dense mode.  It makes sure that there is no fallback to dense mode for any other groups.
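A sketch, with the interface as a placeholder from this lab:

ip pim autorp listener
!
interface GigabitEthernet0/0.45
 ip pim sparse-mode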

PIM NBMA mode only works with sparse mode.  Good to know.  And since AutoRP's groups flood in dense mode, you have to make changes to the network.  One way is to create a tunnel from spoke to spoke so that if the candidate RP is behind one spoke and the mapping agent behind another, they can talk.

When setting a multicast boundary and wanting to filter AutoRP messages, you have to use a standard ACL.  You cannot base it on the source or the RP.  It looks at any incoming IGMP and PIM messages to see if it needs to drop or allow the traffic.  Unicast PIM Register messages are not affected by this.
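An example boundary, with a made-up scope range:

access-list 10 deny 239.0.0.0 0.255.255.255
access-list 10 permit any
!
interface GigabitEthernet0/0.45
 ip multicast boundary 10 filter-autorp

The filter-autorp keyword is what makes the boundary also scrub AutoRP announcements for the denied groups.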

Where AutoRP floods information about who the RP is or who wants to be it, BSR just goes hop by hop.  Should make boundaries easier to configure.  No multicast out an interface and done.  If only it were that easy.  Also have to remember that BSR messages are subject to RPF.  Yep.  Another one of those bite-ya-in-the-butt things.

The highest hash-mask value that you can apply to ip pim bsr-candidate is 31.  Good to know if it comes up.  Using that value with more than one RP will cause them to load balance.  A you-take-one, I-take-one scenario.

Multicast stub routing is a way to help out smaller sites.  It limits what PIM and IGMP information is sent to the stub router, cutting down traffic.  You configure it on the border router facing the stub, and then on the stub's client-facing interface you configure ip igmp helper-address <main router>.  The stub router can be set completely to dense mode as well, just to make sure that all traffic gets to the distro router and clients.  No PIM adjacency is ever formed from the hub to the client router, thanks to ip pim neighbor-filter <ACL>, which controls who can form a neighborship.
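A sketch of both halves (the stub's client-facing interface number is made up; the hub address comes from this lab):

On the hub, facing the stub:
access-list 10 deny any
interface GigabitEthernet0/0.45
 ip pim neighbor-filter 10

On the stub, facing the clients:
interface GigabitEthernet0/1
 ip pim dense-mode
 ip igmp helper-address 155.1.45.4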

Yep.  It's another Cisco-ism.  Multiple ways to do things.  You can do ip multicast boundary to filter multicast traffic, of course.  But there is also ip igmp access-group.  This filters based on the multicast groups that receivers are trying to join.  According to INE, it is the more common method.  When doing the filter, you can use either standard or extended ACLs.  Standard ACLs are used to filter IGMPv1, v2, and v3 receivers.  Extended ACLs allow you to filter IGMPv3 reports.
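For example, letting receivers on an interface join only one group (group address made up, interface reused from this lab):

access-list 5 permit 239.1.1.1
!
interface GigabitEthernet0/0.146
 ip igmp access-group 5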

To limit the aggregate number of multicast groups that are joined by receivers that are directly connected, use the ip igmp limit command.  This can be done globally or per interface.  Basically it limits the amount of mroute states created due to IGMP reports.
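A sketch (limits are arbitrary):

ip igmp limit 10
!
interface GigabitEthernet0/0.146
 ip igmp limit 2

The global form caps the whole router; the interface form caps just that segment.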

The designated querier, one that makes sure someone is still listening, is based on the lowest IP address.  The PIM DR is based on the highest IP address.  Can't overload one router on the segment.  Share the wealth.

Periodic IGMP queries are sent based on the ip igmp query-interval command.  If a non-designated router running multicast doesn't hear any membership queries within the time set by ip igmp querier-timeout, it will try to become the new designated querier.  Without that command, the timeout is two times the query-interval of that same interface, with the query-interval default being 60 seconds.  To shorten leave times, you can configure ip igmp query-max-response-time so that everyone knows to send responses in a timely manner.  The ip igmp last-member-query-interval sets how quickly the router sends group-specific queries after a Leave, to check for any remaining members.  Nothing else: just put ip igmp immediate-leave with an ACL on the interface, and that interface no longer cares about the group once it gets a Leave message.
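All of those knobs on one interface (the values are just examples):

interface GigabitEthernet0/0.146
 ip igmp query-interval 60
 ip igmp querier-timeout 120
 ip igmp query-max-response-time 10
 ip igmp last-member-query-interval 100
 ip igmp immediate-leave group-list 5

Note that last-member-query-interval is in milliseconds while the others are in seconds.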

Steps to make a multicast helper map
1) Set up a multicast network between the two broadcast domains
2) Enable broadcast forwarding on the ingress router to the multicast network with ip forward-protocol
3) On the ingress router to the multicast network, at the interface connecting to the broadcast domain, enter the ip multicast helper-map broadcast command.  The ACL for this command has to be extended for the UDP matching.
4) Enable broadcast forwarding on the multicast network egress router with ip forward-protocol as well as put ip multicast helper-map on the interface connected to the multicast network.  Again the ACL has to be extended.
5) Enable directed-broadcast on the interface connected to the broadcast network on the egress router.  Can also specify a different broadcast address with ip broadcast-address
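The steps above might look like this, with UDP port 5000, the group, and the interface numbers all as placeholders:

Ingress router (broadcast to multicast):
ip forward-protocol udp 5000
access-list 100 permit udp any any eq 5000
!
interface GigabitEthernet0/1
 ip multicast helper-map broadcast 239.1.1.1 100

Egress router (multicast back to broadcast):
ip forward-protocol udp 5000
access-list 100 permit udp any any eq 5000
!
interface GigabitEthernet0/0.45
 ip multicast helper-map 239.1.1.1 155.1.200.255 100
!
interface GigabitEthernet0/2
 ip directed-broadcast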

You can test helper maps with DNS.  Enable DNS name resolution on the first broadcast network and don't enter a DNS server.  The router will eventually broadcast to 255.255.255.255, and if the ACL for the helper map is an any-any or matches that traffic specifically, it gets hit.  You can also use an extended traceroute.

When setting up bidirectional PIM, make sure that you enable it globally with ip pim bidir-enable.  Also learned that the rp-candidate command is particular about its options.  Contrary to what the documentation says about the placement of the group-list option in relation to the bidir option, the bidir option has to come last.

When setting up source specific multicast, you don't need an RP if that is all you are doing.  The receiver (ip igmp join-group <group> source <source>) and everyone in between builds the shortest path to the source based on the source specified.  SSM also uses either the default range (232.0.0.0/8) or can be given a range with an ACL.  No shared trees are used for either range and (*,G) joins are dropped.  IGMP version 3 only has to be enabled on the receiving interface.  Not on all.
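A minimal SSM sketch using the default range (addresses pulled from this lab's RPF output):

On every multicast router:
ip pim ssm default

On the receiver's interface:
interface GigabitEthernet0/0.146
 ip igmp version 3
 ip igmp join-group 232.1.1.1 source 155.1.146.6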

To be able to exchange multicast traffic between two different ASs, you need to do the following:
1)  Turn on PIM between the two ASs.  PIM SM is most common.  Limit BSR/AutoRP leaks.
2)  Exchange route information using a routing protocol.  BGP is most common since it has an extension for this.

When applying the multicast address-family to BGP, make sure to activate all the neighbors.  Without this, not everyone is going to know about the multicast routes.  You can also use AS-path prepending to manipulate routes.  If allowed, redistribute the IGP into the multicast address-family to help with RPF checks and routing.
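A sketch of that BGP side (the AS number is made up; the neighbor and EIGRP AS are reused from this lab):

router bgp 100
 address-family ipv4 multicast
  neighbor 155.1.45.4 activate
  redistribute eigrp 100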

After some initial headaches with finding a Layer 2 image in IOU that does IGMP snooping, I just hit the same problem with MVR (Multicast VLAN Registration).  Just beautiful.  Anyways, on to MVR.  There are four basic steps to configure MVR.
1)  Enable it with mvr and mvr group <multicast-group>
2)  Set the MVR VLAN with mvr vlan <vlan-id>.  This is the VLAN that carries all the multicast traffic and spans all the switches.  Feel free to define the mode here as well.
3)  Tell the switch what the sending and receiving interfaces are with mvr type <source|receiver> at the interface level.
4)  Optionally, create a static group join with mvr vlan <vlan-id> group <ip-address>.  This is done on the receiving ports. 
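Putting those four steps together (VLAN, group, and port numbers are all made up):

mvr
mvr group 239.1.1.1
mvr vlan 100
mvr mode compatible
!
interface FastEthernet0/10
 mvr type source
!
interface FastEthernet0/11
 mvr type receiver
 mvr vlan 100 group 239.1.1.1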

IGMP profiles are another one that I cannot do in IOU/IOL.  That is fine.  I think that I have it figured out.  IGMP profiles are for when you want to permit and deny groups at the switch level.  It looks a lot like a named ACL but with a number instead of a name.  I do like the hierarchical way of configuring things.  You apply the profile to the interface with ip igmp filter #.
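A sketch of a profile that blocks one range (profile number, range, and port are made up):

ip igmp profile 1
 deny
 range 239.1.1.1 239.1.1.255
!
interface FastEthernet0/11
 ip igmp filter 1

With deny as the profile action, the range lists what gets blocked; flip it to permit and the range becomes the only thing allowed.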
