multicast


IP addressing, layer 3

  • 224.0.0.0/24
    • local network control block
    • operate within a link-local scope
    • TTL of 1 or 2
    • used by OSPF (224.0.0.5 and 224.0.0.6), EIGRP (224.0.0.10), etc.
  • 232.0.0.0/8
    • source-specific multicast block
    • for multicast streams whose sources are already known
  • 239.0.0.0/8
    • organization-local scope, private IPs
    • comparable to RFC 1918 private addressing
    • should be limited to use within an AS

MAC addressing, layer 2

  • for unicast, ARP resolves the destination IP to a MAC address so a legal L2 frame can be built
  • the source IP is unicast, with a one-to-one correlation between IP address and MAC address
  • the destination IP is multicast, so a suitable destination MAC address must be derived for the layer 2 frame
  • the goal is to have the sender and receiver agree on a single, suitable destination MAC address
    • specifically, the interested receivers should accept frames with this destination MAC address

Mapping layer 3 to layer 2

  • Receiver already knows which L3 multicast address it's listening on
  • The layer 2 frame must reach the NIC, and it's fairly simple for unicast with ARP
  • The layer 2 address must be dynamic and well-known
    • dynamic because the layer 3 multicast address is highly arbitrary
    • well-known because a unique layer 3 multicast address must map to a consistent layer 2 address
  • A simple mapping procedure is used to generate a well-known multicast MAC address
  • MAC OUI for IP multicast is 01-00-5E
    • of the remaining 24 bits, the most significant bit is fixed to 0, leaving 23 bits to map to the layer 3 IP address
    • the low-order 23 bits of the IP address are simply copied into the low-order 23 bits of the MAC
    • since 5 of a multicast address's 28 significant bits are left unmapped, 32 (2^5) IPs map to the same MAC
      • e.g., 230.1.2.3, 230.129.2.3, and 239.1.2.3 all map to 0100-5E01-0203 (see the sketch after this list)
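
A minimal Python sketch of the mapping described above; the function name and output formatting are illustrative, not from the source:

    def multicast_mac(group_ip: str) -> str:
        """Map an IPv4 multicast group address to its multicast MAC address."""
        octets = [int(o) for o in group_ip.split(".")]
        # keep only the low-order 23 bits of the IP address
        low23 = ((octets[1] & 0x7F) << 16) | (octets[2] << 8) | octets[3]
        # prefix with the IANA multicast OUI 01-00-5E (the next bit stays 0)
        mac = (0x01005E << 24) | low23
        return "-".join(f"{(mac >> s) & 0xFF:02X}" for s in range(40, -8, -8))

    # 230.1.2.3, 230.129.2.3, and 239.1.2.3 share the same low-order 23 bits,
    # so each maps to 01-00-5E-01-02-03
    for ip in ("230.1.2.3", "230.129.2.3", "239.1.2.3"):
        print(ip, "->", multicast_mac(ip))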

Multicast forwarding basics

  • in unicast, the packets are guided towards a destination
  • in multicast, the packets are guided away from a source
  • two major interface types per multicast source
    • upstream interface
      • metrically closest to the source
    • downstream interfaces
      • interfaces with interested receivers
  • this entire concept is termed RPF, reverse path forwarding

Multicast routes

  • multicast routing table stores and organizes information by multicast groups (G)
    • called forwarding state, not really routes
  • each multicast group has a source
    • specific source (S as in the IP)
    • any/unknown source (* as in wildcard)
  • the table is further organized by source/group combinations
    • (S, G) - forwarding state with specific source
    • (*, G) - forwarding state with any/unknown source
  • each forwarding state has an associated set of upstream and downstream interfaces
    • IIF, incoming interface
    • OIL, outgoing interface list
  • CREATING CORRECT FORWARDING STATES WITH ALL LISTED COMPONENTS IS THE MAJOR TASK OF A MULTICAST ROUTING PROTOCOL
  • the OIL is updated as new receivers join groups (see the sketch below)
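
As a rough illustration only (not any vendor's actual data structure), the forwarding state described above could be modeled like this in Python; addresses and interface names are made up:

    # keyed by (source, group): an IP string for (S, G) state, "*" for (*, G) state
    forwarding_state = {
        ("*", "239.1.2.3"):        {"iif": "Eth0", "oil": {"Eth1"}},          # shared tree
        ("10.0.0.5", "239.1.2.3"): {"iif": "Eth0", "oil": {"Eth1", "Eth2"}},  # source tree
    }

    def add_receiver(source: str, group: str, interface: str) -> None:
        """Add an interested receiver's interface to the OIL of a forwarding state."""
        entry = forwarding_state.setdefault((source, group), {"iif": None, "oil": set()})
        entry["oil"].add(interface)

    add_receiver("*", "239.9.9.9", "Eth3")  # a new receiver joins a new group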

Multicast routing protocol

Three primary responsibilities:

  • identify upstream interface, IIF
    • unicast forwarding table (FIB) information is used
    • can be overridden with multicast-specific information
  • identify downstream interfaces, OIL
    • the OIL is built as join requests come in to the router
  • maintain dynamic multicast trees

Multicast forwarding, data flow

  • multicast packets arrive at a router on IIF for a group
  • multicast packets are forwarded on the OIL for the group
  • keep this consistent to avoid dangerous multicast loops, which would involve packet replication at every router

Multicast trees

  • defined root of the tree with branches flowing in via IIF and out via OIL
  • (S, G) trees are rooted at the source
  • (*, G) trees are shared trees (to be covered later)

Multicast loop prevention

RPF check

  • multicast packets go through a reverse path forwarding (RPF) check
  • logic
    • the router inspects the source IP of the packet
    • the expected IIF for that source is identified (normally from the unicast routing table)
    • if the packet was not received on that IIF, it is dropped (see the sketch below)
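
A minimal sketch of the RPF check, assuming a flat dictionary stands in for the unicast routing table (a real router would do a longest-prefix match); names and addresses are illustrative:

    def rpf_check(source_ip: str, arrival_interface: str, unicast_rib: dict) -> bool:
        """Pass only packets that arrived on the interface used to reach their source."""
        expected_iif = unicast_rib.get(source_ip)
        return expected_iif == arrival_interface

    rib = {"10.0.0.5": "Eth0"}                 # hypothetical unicast RIB entry
    print(rpf_check("10.0.0.5", "Eth0", rib))  # True  -> forward on the OIL
    print(rpf_check("10.0.0.5", "Eth1", rib))  # False -> drop, fails the RPF check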

RPF check override

  • there can be cases where the multicast topology does not match the unicast topology (the RPF information source)
  • most vendor implementations allow for RPF overrides
    • separate routing table for RPF checks
    • can be populated statically or via multi-topology IGP implementations
    • MBGP can also be used
  • static RPF override
    • static multicast route
    • ip mroute {network} {mask} {next-hop}
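      • e.g., ip mroute 10.0.0.0 255.0.0.0 192.0.2.1 would force RPF for sources in 10.0.0.0/8 toward 192.0.2.1 (hypothetical addresses)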

Multicast receiver signaling within a VLAN

  • mostly automatic
  • basic switches treat multicast as broadcast
    • send it out on all ports except the ingress
    • interested receivers accept the packets, others drop
  • multicast-aware switches
    • inspect the multicast signaling between receivers and routers
    • create proper multicast state for each switchport
    • multicast packets are forwarded only to the ports with interested receivers

Multicast receiver signaling using IGMP

Internet Group Management Protocol

  • once a receiver knows which multicast group to receive, it must inform a local, multicast-enabled router
    • receiver acquires the information from the application layer
    • the router is called the last hop router, LHR
    • reachable at layer 2
    • this is the step that separates multicast from broadcast
  • receiver sends an IGMP message to join the group
  • a multicast router on the segment listens for all IGMP messages
    • once it receives a relevant IGMP message on an interface, the interface is added to the OIL for this group
    • show ip igmp groups
      • group, egress interface, reporter/receiver

IGMP v1 and v2

  • v2 compared with v1
    • mechanism for receivers to leave a certain group
      • v1 had no leave message, so hosts left silently and the router relied on timeouts
      • router could potentially forward unwanted MC packets for a period of time after the last interested host had left the group
    • election process for a querier
      • queries are essentially status checks
      • v1 depended on the multicast routing protocol

IGMP messages in v1 and v2

  • membership report
    • how receivers signal their interest in a multicast group to multicast routers
    • IGMP join, informally
  • leave group (v2 only)
    • how receivers inform the multicast routers when they are no longer interested in a group
  • membership query
    • how multicast routers confirm continued interest of receivers in a multicast group
    • two forms:
      • generated periodically to update status of all multicast groups
      • group specific, event-driven to update status of a particular multicast group where the event is a host leaving a group (v2 only)

IGMP membership reports

  • membership report is how receivers join multicast groups
    • the multicast router (LHR) on the local segment is being signaled
      • listens to all multicast traffic
  • IP packet
    • source IP address is the receiver's own address
    • destination IP address is the group's multicast IP address
    • IP options include router alert
    • TTL=1
    • the LHR accepts the packet because it is listening on all multicast traffic
    • the LHR processes it because of the router alert option (a packet-level sketch follows below)
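
A rough Python sketch of just the IGMP payload of a v2 membership report (field layout per RFC 2236: type 0x16, max response time, checksum, group address); the surrounding IP header with TTL=1 and the router alert option is not shown:

    import socket
    import struct

    def igmp_checksum(data: bytes) -> int:
        """Standard Internet checksum over the IGMP message."""
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack(f"!{len(data) // 2}H", data))
        total = (total >> 16) + (total & 0xFFFF)
        total += total >> 16
        return ~total & 0xFFFF

    def v2_membership_report(group: str) -> bytes:
        """Build the 8-byte IGMPv2 membership report for a group."""
        body = struct.pack("!BBH4s", 0x16, 0, 0, socket.inet_aton(group))
        return struct.pack("!BBH4s", 0x16, 0, igmp_checksum(body),
                           socket.inet_aton(group))

    print(v2_membership_report("239.1.2.3").hex())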

IGMP leave group, v2 enhancement

  • the leave group message is how receivers leave multicast groups
    • LHR is signaled
  • IP packet
    • source IP address
    • destination IP address is always 224.0.0.2 meaning ALL-ROUTERS
    • IP options - router alert
    • group address is included in the packet

IGMP membership query

  • the membership query is how the LHR updates IGMP state
    • sent to all multicast hosts on a segment
    • one host per group must respond for the group to remain active
  • IP packet
    • source IP address is querier's IP, LHR
    • destination IP address
      • general query on 224.0.0.1 meaning ALL-SYSTEMS
        • group address in this case is 0.0.0.0
      • group-specific query on multicast IP address of the group being queried
    • IP options - router alert

IGMP receiver joins a new group, v1 and v2

  • a receiver sends a membership report (join)
  • recipients:
    • multicast router
    • other group members
  • actions:
    • group members - no action
    • multicast router
      • create new state for the group if none existed
      • refresh timers on current state if group exists

IGMPv3

Receiver signaling

  • v3 allows for signaling source information along with the group
    • enables the use of SSM, source-specific multicast
    • it provides a mechanism for excluding certain sources as well
  • new multicast address for membership reports in v3
    • 224.0.0.22
  • backwards compatibility with v1 and v2

V3 "join", membership reports

  • membership report is how receivers join multicast groups
  • its IP packet can now hold multiple group records
    • record type, a new field in IGMPv3

IGMPv3 record type for membership reports (join)

  • record type is the new field
  • ability to signal SSM and ASM groups
  • allows the report to also carry the leave messages
  • filter mode is available
    • "include"
      • multicast stream is requested only from the sources listed in the record
    • "exclude"
      • multicast stream is requested from all sources except those listed in the record
  • record types (numeric values are sketched after this list)
    • current state record
      • MODE_IS_INCLUDE
      • MODE_IS_EXCLUDE
    • filter-mode-change (when the mode of a group changes)
      • CHANGE_TO_INCLUDE_MODE
      • CHANGE_TO_EXCLUDE_MODE
    • source-list-change (when sources for a group change)
      • ALLOW_NEW_SOURCES
      • BLOCK_OLD_SOURCES
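
The record types above correspond to numeric values defined in RFC 3376; a small Python reference table (the enum wrapper itself is just for illustration):

    from enum import IntEnum

    class IGMPv3RecordType(IntEnum):
        """Group record types in IGMPv3 membership reports (values per RFC 3376)."""
        MODE_IS_INCLUDE        = 1  # current-state record
        MODE_IS_EXCLUDE        = 2  # current-state record
        CHANGE_TO_INCLUDE_MODE = 3  # filter-mode-change record
        CHANGE_TO_EXCLUDE_MODE = 4  # filter-mode-change record
        ALLOW_NEW_SOURCES      = 5  # source-list-change record
        BLOCK_OLD_SOURCES      = 6  # source-list-change record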

IGMPv3 membership query

  • membership query is how multicast routers update IGMP states
    • one host per group must respond for the group to remain active
  • IP packet
    • src: querier
    • dst:
      • 224.0.0.1 for a general query
      • the group's multicast IP address for a group-specific query
      • the group's multicast IP address for a group-and-source-specific query (the sources are listed inside the query)

IGMPv3 join ASM group

  • sent to 224.0.0.22
  • record type in most cases
    • CHANGE_TO_EXCLUDE_MODE
    • default mode is include in most cases
  • number of sources
    • 0 to indicate ASM
  • recipients
    • multicast routers listening for 224.0.0.22
  • actions:
    • create new state for the group if none existed
    • refresh timers for current state if group exists

IGMPv3 join SSM group

  • sent to 224.0.0.22
  • record type
    • ALLOW_NEW_SOURCES
    • the default mode is already include, so the report simply allows the new specific source(s)
  • number of sources
    • = 1 to indicate SSM
  • recipients
    • multicast routers listening for 224.0.0.22
  • actions:
    • create new SSM state for the group if none existed
    • refresh timers on current SSM state if one exists

IGMPv3 periodic general queries and reports

  • queries are sent to 224.0.0.1; the v3 reports they trigger go to 224.0.0.22

IGMPv3 leave ASM group

In short, "include zero, which means exclude everything".

  • sent to 224.0.0.22
  • record type
    • CHANGE_TO_INCLUDE_MODE
    • the mode for active ASM groups is exclude mode
  • number of sources
    • 0
  • LHR to check if other existing members are still interested

IGMPv3 leave SSM group

In short, "block the old sources that were previously allowed".

  • sent to 224.0.0.22
  • record type
    • BLOCK_OLD_SOURCES
    • mode for ssm groups is include mode
  • number of sources
    • 1, listing the specific source being left (the group itself is carried in the group record)
  • LHR to check if other existing members are still interested

Source signaling

  • the router connected to the multicast source is called a first hop router, FHR
  • when the FHR receives a multicast packet on a multicast-enabled interface, it creates an (S, G) entry
    • the router becomes the root of the source tree, also called the SPT, shortest path tree
  • after this state is created, the multicast routing protocol takes over

PIM, protocol independent multicast

  • the most popular multicast routing protocol
  • PIM does not maintain a multicast routing database
    • PIM uses the unicast routing table to perform its duties
    • a major duty is to perform reliable RPF checks on the multicast traffic
  • this definition applies to both PIM Dense Mode and Sparse Mode

PIM neighbor discovery hello message

  • Both PIM modes require the establishment of neighbor adjacencies throughout the network
    • leads to the creation of the multicast tree, topology of links
    • routed multicast traffic will only flow between two directly connected PIM nodes
    • can be radically different from the unicast routing topology
  • PIM uses hello messages to discover directly connected neighbors
    • destination 224.0.0.13 meaning ALL-PIM-Routers
    • IP protocol 103
    • TTL 1
    • Options carried as TLVs
    • note) PIM SM routers can become adjacent with PIM DM routers
  • PIM hello packet
    • version 2
    • type 0
    • relevant options:
      • 1 - hold time
      • 19 - DR priority
      • 20 - generation ID
      • 21 - state refresh (DM only)

PIM dense mode, PIM DM

  • PIM-DM elects a designated router (DR) for every multi-access segment (ethernet segment)
  • the PIM-DM implementation itself has no need for a DR
    • PIM-SM does rely on it...
    • IGMPv1 has no querier election of its own, so the PIM-DM DR serves as the IGMPv1 DQ, designated querier
  • PIM-DM uses the DR priority option in the hello message
    • when multiple routers have the same priority, the highest IP address breaks the tie (see the sketch below)
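
A minimal sketch of the tie-break, assuming only DR priority and IP address matter; the addresses are made up:

    from ipaddress import IPv4Address

    # hypothetical neighbors on one segment, both advertising the same DR priority
    neighbors = [
        {"ip": IPv4Address("192.0.2.1"), "dr_priority": 1},
        {"ip": IPv4Address("192.0.2.2"), "dr_priority": 1},
    ]

    # highest DR priority wins; the highest IP address breaks a priority tie
    dr = max(neighbors, key=lambda n: (n["dr_priority"], n["ip"]))
    print("DR:", dr["ip"])  # 192.0.2.2 wins on the higher address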

PIM-DM behavior

  • PIM-DM creates multicast trees known as source distribution trees, also called shortest path trees, SPT in PIM nomenclature
    • SPTs are (S,G) trees rooted at the multicast source
  • a very simple assumption...
    • every router needs every multicast feed, meaning all (S,G) traffic must be forwarded on every non-RPF multicast interface
    • also known as push or the implicit join model
  • it is the receiving router's job to "prune" itself from the tree
    • inform the upstream router to stop sending an (S,G) feed
  • automatic SPT formation
    • a router receives a multicast packet on the IIF
      • router does the RPF check
      • the router adds all other (DM-enabled) multicast interfaces to the OIL
    • this process starts at the FHR and repeats until the packets have reached the LHR/leaves in the topology

PIM join/prune messages

  • there is no separate prune message type; joins and prunes share a single message format
  • used in both PIM-DM and SM
  • capable of signaling both "Joins" as well as "Prunes"
    • Joins in PIM-DM are implicit

PIM-DM reactive SPT pruning

  • if the LHRs have no receivers for the group, they send a prune message on the IIF
    • the RPF neighbor receives this msg
    • prunes the interface for the group by removing it from the OIL of the (S,G)
  • if there are no interfaces left in the OIL, a router will send a prune on the IIF
  • if a packet fails the RPF check, a prune message is sent on the interface

Basically, the tree is pruned back wherever it is not needed; a minimal sketch of the prune decision follows.
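
A minimal sketch of that decision for a single (S,G) entry; names are illustrative:

    from typing import Optional

    def maybe_prune(entry: dict) -> Optional[str]:
        """Return the interface an (S, G) prune should be sent on, or None."""
        if not entry["oil"]:      # no downstream interfaces or receivers left
            return entry["iif"]   # prune upstream, toward the RPF neighbor
        return None

    state = {"iif": "Eth0", "oil": set()}  # every downstream interface already pruned
    print(maybe_prune(state))              # -> Eth0: send the (S, G) prune upstream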

skipping a lot of sections on PIM-DM

PIM-SM

ask and ye shall receive

  • PIM-SM addresses scaling issues in PIM-DM
  • the major difference is that the idea of the "implicit join" is discarded
    • PIM-DM is push model
    • PIM-SM is pull model

PIM-SM initial signaling

  • the end goal of PIM-SM is to bring the source's traffic to the interested receivers
    • connect the FHRs to the LHRs
  • sources
    • signal to the FHRs about their existence
    • occurs completely in data plane
  • receivers
    • signal to the LHRs about their existence
    • either dynamically via IGMP or statically via interface configuration

PIM-SM Rendezvous Point

  • the FHRs know about the sources, the LHRs know about the receivers
    • one option would be for them to signal each other directly, but there is a simpler solution
  • simpler solution
    • "central registry" or rendezvous point, RP
    • pick a special multicast-enabled router and assign it special duties
      • receive multicast state information from both FHRs and LHRs
      • compare this information and work to connect them if need be
    • FHRs, LHRs, and the RP should all agree on who is the RP

PIM-SM, LHR with interested receivers

  • once an LHR creates a (*,G) state, it can start the PIM signaling process
  • PIM trees are built from the leaves towards the root
    • RP is considered to be the root for a (*,G) tree
      • RP tree or shared tree
  • LHR sends a join/prune message (join version)
  • sent upstream on the RPF interface for the RP IP
    • processed hop-by-hop, directed towards the RP by each router between the LHR and RP
  • each router also creates (*,G) state (if it does not already exist)
    • the tree is being built in reverse, from the leaves to the root
      • starting from the LHR, through each router on the RPF path towards the RP, and finally to the RP as root
      • the interface receiving the "join" is added to the OIL
      • the RPF interface towards the RP becomes the IIF (see the sketch below)
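
A rough sketch of the hop-by-hop (*,G) join processing on a made-up three-router chain (LHR, one middle router, RP); the topology, interfaces, and group address are purely illustrative:

    # each router knows its upstream PIM neighbor and its RPF interface toward the RP
    routers = {
        "LHR": {"upstream": "MHR", "iif_to_rp": "Eth0", "state": {}},
        "MHR": {"upstream": "RP",  "iif_to_rp": "Eth0", "state": {}},
        "RP":  {"upstream": None,  "iif_to_rp": None,   "state": {}},
    }

    def send_join(router_name: str, group: str, arrival_interface: str) -> None:
        """Create (*, G) state on one router, then forward the join toward the RP."""
        r = routers[router_name]
        entry = r["state"].setdefault(("*", group),
                                      {"iif": r["iif_to_rp"], "oil": set()})
        entry["oil"].add(arrival_interface)  # interface the join arrived on enters the OIL
        if r["upstream"] is not None:
            # the upstream neighbor sees the join arrive on its downstream interface
            send_join(r["upstream"], group, "Eth1")

    send_join("LHR", "239.1.2.3", "Eth2")  # receiver-facing interface on the LHR
    print(routers["MHR"]["state"])         # MHR now holds (*, G) state: IIF Eth0, OIL {Eth1}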

PIM-SM, FHR

  • LHR's method to join RP is the Join/Prune msg
  • this is not a good solution for disseminating information about sources
    • join/prune was for building tree from leaves up to root, in reverse...
  • what if the FHR simply informed the RP about the stream, (S,G)?
    • then the RP can use the join/prune to build the tree towards the source provided by the FHR
    • and the RP only needs to build this tree if there is an interested receiver present
  • PIM register message is sent from FHR to RP
    • unicast
    • src: FHR
    • dst: RP
    • contents - information about the (S,G)

PIM-SM, RP duties

  • the RP is the root for the receivers that gave it (*,G) state
  • the RP received (S,G) information from the FHR via the PIM register message
  • RP can correlate the two

PIM-SM, RP on-demand join

  • RP now has to join the (S,G) SPT to first receive the multicast packets
  • PIM trees are built from the leaves towards the root, this time starting from the RP
    • the RP sends a join/prune msg toward the FHR, following the RPF IIF
    • each middle hop router (MHR) on the way creates (S,G) state
      • the tree is being built in reverse from RP to MHR to FHR

PIM neighbor discovery hello msg

  • both PIM modes require the establishment of neighbor adjacencies throughout the network
    • creates the base multicast tree, where multicast traffic can potentially flow
  • PIM uses hello messages to discover directly connected neighbors
    • dst: 224.0.0.13 meaning ALL-PIM-Routers
    • IP protocol: 103
    • TTL: 1
    • options carried as TLVs

PIM join/prune msg

  • used in both PIM-DM, PIM-SM
  • capable of signaling both joins and prunes
  • join messages are processed hop-by-hop in the PIM topology
    • dst: 224.0.0.13
    • IP protocol: 103
    • TTL: 1