A border router in the PfR world is a router that has an exit interface to an ISP or other attached networks. The border router is where policy decisions from the MC are enforced.
Border routers monitor prefixes and transit links on behalf of the MC and report their data to the MC, which computes it and matches it against configured policies.
The MC will then instruct the border routers to alter packet flows in the network as necessary. Keep in mind that the border router process can be housed within the MC in some setups.
In any PfR environment there will always be a single device that manages all aspects of PfR operations. The Master Controller, or MC, monitors outbound traffic flows and applies policies to optimize routing for specific subnets and exit links. While the MC makes all of the routing decisions in the network, it's important to understand that this device does not have to be in-line with the traffic flows it's controlling; it just needs to be reachable by the other participating routers. The MC can support up to 20 managed exit interfaces.
The MC and border router can run on a single device. This model is used primarily in SOHO environments, where only a limited number of routing devices is needed.
In branch office networks, the MC typically runs co-resident on one of the multiple border routers.
The MC can also be standalone, sitting in a customer datacenter; the border routers communicate with it over a private VPN to exchange traffic data and routing decisions.
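As a minimal sketch of the combined single-device model described above, one router can run both the MC and border router processes and simply point the border process at its own address (the loopback name, key-chain name, and 10.0.0.1 address here are illustrative assumptions, not taken from the text):

```
! One router acting as both Master Controller and border router
pfr master
 border 10.0.0.1 key-chain PFR-KEY   ! points at this router's own Loopback0
!
pfr border
 local Loopback0                     ! source interface for the MC session
 master 10.0.0.1 key-chain PFR-KEY   ! the MC is local to this same device
```

On older IOS releases the same configuration uses the `oer master` and `oer border` keywords instead of `pfr`.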
The Master Controller is the device in the PfR domain that checks metrics against configured routing policy thresholds for the edge devices participating in the domain. Given the Master Controller's significance in determining network routing decisions, the device introduces a risk of being leveraged by unauthorized parties to take down the network.
To mitigate that risk, PfR incorporates mandatory authentication. Edge devices that send data to the Master Controller are referred to as slaves, and communication between slave and master must be authenticated using an authentication key and key chain. This key must be configured on all PfR devices in order for PfR to function.
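A sketch of the mandatory authentication described above, using a shared key chain on both the MC and a border router (the key-chain name, key string, and 10.0.0.x addresses are assumptions for illustration):

```
! Configured identically on the MC and every border router
key chain PFR-KEY
 key 1
  key-string S3cr3tKey
!
! On the Master Controller: authenticate each border router
pfr master
 border 10.0.0.2 key-chain PFR-KEY
!
! On the border router: authenticate back to the MC
pfr border
 local Loopback0
 master 10.0.0.1 key-chain PFR-KEY
```

If the key strings do not match, the border router never registers with the MC and no PfR computation takes place.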
There are three interface types, defined by their roles in the PfR environment. These roles identify whether an interface forwards packets into or out of the network:
- Internal Interfaces: These interfaces connect to the internal network and are used for communication with the device designated as the control plane manager for the performance routing environment, the Master Controller.
- External Interfaces: These interfaces transmit packets out of the network. At least two interfaces must be designated as external to successfully deploy OER.
- Local Interfaces: These interfaces are used to form the control plane sessions that drive OER. The local interface defines the source interface used to communicate with the Master Controller.
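The three interface roles above can be sketched in configuration. Internal and external roles are assigned on the MC under each border router definition, while the local interface is set on the border router itself (interface names and the 10.0.0.2 address are illustrative assumptions):

```
! On the Master Controller: assign roles per border router
pfr master
 border 10.0.0.2 key-chain PFR-KEY
  interface GigabitEthernet0/0 internal    ! faces the internal network
  interface GigabitEthernet0/1 external    ! exit link toward ISP A
  interface GigabitEthernet0/2 external    ! exit link toward ISP B (two external required)
!
! On the border router: the local interface sources MC communication
pfr border
 local Loopback0
```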
The PfR Phase Wheel helps illustrate the operation of the various phases PfR uses to calculate the best path. PfR consists of five unique phases, each of which runs in a specific order and repeats in a cycle, like a 'phase wheel' so to speak. This wheel runs constantly, going through all the phases in order:
- Profile Phase – Referred to as the learning phase; the PfR router learns the flows that have the highest delay or throughput. The specific traffic being 'profiled' or 'learned' is referred to as a 'traffic class', and the list of all monitored traffic classes (MTC) is referred to as the MTC list.
- Measure Phase – Collects and computes the performance metrics for the specific MTC traffic identified by objects in the MTC list.
- Apply Policy Phase – Low and high thresholds are configured, defining the in-policy and out-of-policy (OOP) performance categories for each traffic class.
- Control Phase – Manipulates the traffic flow by injecting Policy-Based Routing (PBR) entries or route changes.
- Verify Phase – After controls are introduced, OER verifies OOP event performance and makes adjustments as needed to bring the traffic class back within normal performance criteria.
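The phases above map directly onto MC configuration. As a sketch, the `learn` section drives the profile and measure phases, the threshold commands drive the apply-policy phase, and `mode route control` permits the control phase to act (the specific threshold values chosen here are illustrative assumptions, not recommendations from the text):

```
pfr master
 learn
  throughput             ! profile: learn the highest-throughput traffic classes
  delay                  ! profile: learn the highest-delay traffic classes
  monitor-period 1       ! measure: collect metrics for 1 minute per cycle
  periodic-interval 0    ! restart learning immediately after each cycle
 !
 delay threshold 150     ! apply policy: OOP if one-way delay exceeds 150 ms
 loss threshold 100000   ! apply policy: OOP if loss exceeds 100000 per million packets (10%)
 mode route control      ! control: allow PfR to actively reroute OOP traffic classes
```

With `mode route control` set, the verify phase then confirms whether the injected changes brought the measured metrics back under the configured thresholds.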
Cisco began experimenting with routing protocols that would make best route selections based on variable parameters like link load and bandwidth. EIGRP was the first attempt at this and is where the K values came from. Cisco eventually came out with Optimized Edge Routing (OER) which gave us the capability to perform prefix-based route optimizations.
OER in its first iteration was limited in how it could optimally route traffic. The criteria used to manipulate traffic were narrow: OER relied on packet loss, response time, path availability, and traffic load distribution to make routing decisions.
Modern networks needed more than prefix-based route optimizations; they also needed application-specific requirements to be accounted for, and so Performance Routing (PfR) was created.
PfR was built on OER's foundation and extended its capabilities to include criteria based on application type and application performance requirements, in addition to the traditional network performance criteria.