Category: Review

Review/Perspective: Cloud Deployment Models

7.1.a Compare and contrast Cloud deployment models

  • 7.1.a [i] Infrastructure, platform, and software services [XaaS]
  • 7.1.a [ii] Performance and reliability
  • 7.1.a [iii] Security and privacy
  • 7.1.a [iv] Scalability and interoperability


XaaS consists of the following:

Software as a Service (SaaS) – application services are delivered over a network on a subscription and on-demand basis.

Platform as a Service (PaaS) – run-time environments and software development frameworks delivered over the network on a pay-as-you-go basis.  Typically presented as APIs to customers.

Infrastructure as a Service (IaaS) – compute, network, and storage are delivered over the network on a pay-as-you-go basis.

Performance and reliability:

Cloud deployments require high availability to maintain network services to customers. This demands careful consideration of fault tolerance: network design engineers must account for it when developing redundancy plans for datacenter environments.

Automation reduces TCO and simplifies routine engineering tasks such as creating VLANs, testing MPLS traffic engineering, or creating backups.  The tradeoff is software maintenance, in the form of in-house development and upkeep of the automation services being run.  The benefit of automation becomes more pronounced in larger network deployments, where the cost of maintaining the automation is typically less than the cost of a large IT staff.

Automation should be deployed where it makes sense, and where it can be maintained with a reasonable amount of effort; this is how performance and reliability are maximized.  Accessibility also needs to be considered, to ensure sufficient bandwidth is available to reach the cloud environment.
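On-box automation can be as small as an Embedded Event Manager (EEM) applet that schedules the configuration backups mentioned above. A sketch, assuming an IOS image with EEM support; the TFTP server address, applet name, and schedule are placeholders:

```
Router(config)# file prompt quiet                      ! suppress copy confirmation prompts for EEM
Router(config)# event manager applet DAILY-BACKUP
Router(config-applet)# event timer cron cron-entry "0 2 * * *"
Router(config-applet)# action 1.0 cli command "enable"
Router(config-applet)# action 2.0 cli command "copy running-config tftp://192.0.2.50/backup.cfg"
```

This keeps a nightly backup running with no external tooling, which is the kind of low-maintenance automation the tradeoff above favors.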

Security and Privacy

Opinions on cloud security differ.  Many consider the public cloud more secure than a private cloud where digital security is strong and all communications are encrypted over the public internet; the counterargument is that the physical security of provider facilities can be harder to verify.  Geographic diversity in the event of natural disasters is something to consider, in addition to the specific region where cloud data is actually stored; some regions of the world known to have unfriendly relations toward the home country also warrant thought.  These uncertainties can be accounted for by using regions and availability zones, where cloud providers will ensure data is confined within a specific geographic area.

Scalability and Interoperability

Achieving cloud scalability relies on a few components supporting cloud architecture such as network fabric, application design, and virtualization/segmentation design.

  • Public cloud
    • Scalability – Appears to be infinite, which allows customers to provision new services quickly
    • Interoperability – Developers choose which cloud provider APIs to use; these are typically offered as part of the cloud offering.
  • Private cloud
    • Scalability – High capital and operating expenses to expand, which limits scale
    • Interoperability – Works with the underlying platform (e.g., an OpenStack application should be deployable to another OpenStack instance)
  • Virtual private cloud
    • Scalability – Scales well with public cloud resources
    • Interoperability – A combination of public/private depending on where resources are located.  Migration between the two could limit interoperability depending on where APIs are located
  • Inter-cloud
    • Scalability – Highest scalability; massively distributed architecture
    • Interoperability – Up to the developer to use cloud provider APIs; assumes consistent API presentation between different cloud providers' autonomous systems

Review/Perspective: Virtualization Technologies

2.1.h Describe chassis virtualization and aggregation technologies

  • 2.1.h [i] Multichassis
  • 2.1.h [ii] VSS concepts
  • 2.1.h [iii] Alternative to STP
  • 2.1.h [iv] Stackwise
  • 2.1.h [v] Excluding specific platform implementation

Multi-chassis virtualization allows multiple networking devices to be aggregated into a single logical device, increasing total forwarding capacity and providing inherent hardware redundancy.

VSS – Virtual Switching System is a virtualization technology that allows two Cisco 6500 switches to act as a single logical virtual switch.  This increases operational efficiency and scales bandwidth up to 1.4 Tb/s.  It is similar to StackWise, however VSS is limited to two physical chassis connected together.

Alternatives to STP

All of the following technologies allow aggregated links between two or more devices to remain in a forwarding state, negating or replacing the effects of traditional spanning tree and removing the possibility of blocked ports.

Virtual Port Channel (vPC) – enables loop-free topologies where STP is no longer involved in maintaining the network topology and no links between access and aggregation are blocked.  STP does still run, however it runs only on the primary vPC device; the secondary device transparently forwards BPDUs back and forth between the primary and the rest of the network.
TRILL – Transparent Interconnection of Lots of Links – TRILL replaces STP as a mechanism to find loop-free trees within Layer 2 broadcast domains to enable scaling.
FabricPath – also known as Layer 2 multipathing, allows bandwidth between the access and aggregation layers to be scaled beyond what is possible with traditional spanning tree or vPC.

StackWise – a switching technology that allows multiple Cisco switches to aggregate into a single switching backplane.  StackWise cables connect the individual switches, joining each device to the shared backplane and allowing for increased port density.  The switches are viewed logically as a single device in the CLI as far as management is concerned, though you can still access the switches individually if needed.  StackWise elects a master switch, where all management and configuration are defined; the other connected switches take on a slave role and inherit the configuration defined on the master.
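A couple of commands for verifying stack state; a sketch, with output details varying by platform:

```
Switch# show switch               ! stack member numbers, roles (master/member), and state
Switch# show switch stack-ports  ! status of the StackWise cable connections
```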

Excluding specific platform implementation means that when testing new feature functionality, underlying platform-specific limitations are not taken into account.  For example, an IOS feature being tested may function differently on a 3560 than on a 2960; hardware differences and limitations are not considered when platform implementations are excluded – only the functionality of the feature within the IOS image itself is tested.

Review/Perspective: Layer 2 protocols

2.1.b Implement and troubleshoot layer 2 protocols

  • 2.1.b [i] CDP, LLDP
  • 2.1.b [ii] UDLD



Cisco Discovery Protocol (CDP) is a proprietary protocol that allows two Cisco devices to communicate device-specific information to each other on the attached port.  This is useful for navigating Cisco networks and seeing which devices are connected to each port, in addition to other information.

Link Layer Discovery Protocol (LLDP) is the standards-based equivalent of CDP and can be used in multi-vendor network environments.
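A few commands for working with both protocols; a sketch, assuming a Catalyst-style platform where CDP is on and LLDP is off by default (interface names are placeholders):

```
Switch(config)# lldp run                     ! enable LLDP globally
Switch# show cdp neighbors detail            ! neighbor platform, IOS version, and IP
Switch# show lldp neighbors
Switch(config)# interface GigabitEthernet1/0/1
Switch(config-if)# no cdp enable             ! disable CDP on untrusted edge ports
```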




Unidirectional Link Detection (UDLD) is a Layer 2 messaging protocol that serves as an echo mechanism to detect a transmit or receive failure between a pair of devices.  There are two failure-handling modes, normal and aggressive.  If a failure is detected and normal mode is configured, no action is taken.  If aggressive mode is configured, the device will try to reconnect to the UDLD neighbor 8 times and, if unsuccessful, will put the port into an err-disable state.
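A minimal UDLD configuration sketch (interface name is a placeholder; the global command applies to fiber ports, while the interface command can also cover copper):

```
Switch(config)# udld aggressive                  ! aggressive mode globally on fiber ports
Switch(config)# interface GigabitEthernet1/0/1
Switch(config-if)# udld port aggressive          ! or per-interface
Switch# show udld GigabitEthernet1/0/1           ! verify neighbor state and mode
```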


Review/Perspective: Switch Administration

2.1.a Implement and troubleshoot switch administration

  • 2.1.a [i] Managing MAC address table
  • 2.1.a [ii] errdisable recovery
  • 2.1.a [iii] L2 MTU


Managing MAC address table

The MAC address table in a Layer 2 IOS switch contains the MAC address of every known device on the network and the port its traffic arrives on.  As a frame enters the switch, the switch examines the Ethernet frame, records the source MAC address and ingress port, and maintains a database of MAC-to-port mappings so it knows where to forward frames as needed.  If a destination MAC is missing from the table, the switch floods the frame out all ports in the VLAN (except the one it arrived on) to reach the intended host.
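A few common commands for inspecting and managing the table; a sketch, with the MAC address, VLAN, and interface as placeholders:

```
Switch# show mac address-table
Switch# show mac address-table dynamic interface GigabitEthernet1/0/1
Switch# clear mac address-table dynamic
Switch(config)# mac address-table static 0000.1111.2222 vlan 10 interface GigabitEthernet1/0/1
Switch(config)# mac address-table aging-time 600   ! seconds; default is 300
```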

Commands to view the MAC address table can be found in the BeThePackets Wiki at:


errdisable recovery

Should a port be placed into errdisable, you can typically recover it by addressing the underlying cause and bouncing the port with the shutdown / no shutdown commands.  IOS can also re-enable err-disabled ports automatically via the errdisable recovery feature.
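A sketch of enabling automatic recovery for a couple of common causes (the cause keywords and interval are examples; available causes vary by platform):

```
Switch(config)# errdisable recovery cause udld
Switch(config)# errdisable recovery cause bpduguard
Switch(config)# errdisable recovery interval 300   ! retry every 300 seconds
Switch# show errdisable recovery                   ! verify enabled causes and timers
```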



MTU is the Maximum Transmission Unit, which defines the largest frame an interface can transmit without the need to fragment.  There are three contexts for MTU that are recognized when configuring a switch:

  • 10/100Mbps switch interfaces
  • 1000Mbps switch interfaces
  • Routed and SVI interfaces

MTU mismatches can occur if the interface MTU configuration is lower than the size of the arriving Ethernet frame.  You can correct this by configuring the offending interface with the correct MTU.
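The three contexts above map to distinct commands on Catalyst-style switches; a sketch, with values as placeholders and behavior varying by platform (system mtu changes typically require a reload):

```
Switch(config)# system mtu 1504              ! 10/100 interfaces; takes effect after reload
Switch(config)# system mtu jumbo 9000        ! Gigabit interfaces
Switch(config)# interface Vlan10
Switch(config-if)# ip mtu 1400               ! routed/SVI MTU
Switch# show system mtu                      ! verify current settings
```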


Review/Perspective: Troubleshooting Methodologies

1.3.b Apply troubleshooting methodologies

  • 1.3.b [i] Diagnose the root cause of networking issue [analyze symptoms, identify and describe root cause]
  • 1.3.b [ii] Design and implement valid solutions according to constraints
  • 1.3.b [iii] Verify and monitor resolution


Diagnose the root cause of networking issue

There are two basic approaches to troubleshooting:

  • Climb the Stack approach – begin at Layer 1 and work your way up until you find a problem.  You can also do it the other way around: start at Layer 7 and work your way down until you find something wrong.
  • Divide and Conquer method – usually faster at finding the problem because you start at the layer that is exhibiting the issue.  You then move up or down the OSI stack based on the symptoms observed at that layer.

Design and implement valid solutions according to constraints and Verify and monitor resolution

Before coming up with a plan for a network it's important to understand the design requirements.  These include organizational and technical goals, and organizational and technical constraints.

Organizational constraints include budget, personnel, policy, and schedule.

Technical constraints include parameters that may limit the solution, such as legacy applications or protocols that must be supported.  Other examples include existing wiring that does not support new technology, or bandwidth that doesn't support new applications.

PPDIOO is the Cisco method of implementing new network designs based on organizational goals and provides a change process to apply to an existing network.

Prepare – this phase establishes business requirements, develops a network strategy, and proposes a high-level conceptual architecture to support the strategy.

Plan – this phase identifies network requirements based on goals, facilities, and user needs.  It also characterizes sites, analyzes the network, performs gap analysis against best-practice architectures, and looks at the operational environment.  A project plan is developed to manage the tasks and responsible parties for the design and implementation.  The project plan aligns with the scope, cost, and resources defined in the original business requirements (Prepare phase).

Design – The network design is developed based on the technical and business requirements from the previous two phases.  This is a comprehensive and detailed design that meets business and technical requirements.  Design includes plans for redundancy and high availability.

Implement – equipment is installed and configured according to detailed design specifications.  Devices can be new or can replace current infrastructure in this phase, supporting the project plan's requirements based on business needs.  Each step in the implementation phase should be documented, including plans for rollback in case of failure and any additional reference information as needed.

Operate – this phase maintains the network's day-to-day operations.  Network management applications that monitor the network's health are used to gauge how well the implemented design is handling the real production traffic planned in the project.  Any problems, traps, or network events must be documented for review.

Optimize – this involves proactive network management: identifying issues seen on the existing network and modifying the network design to correct those issues and improve overall network performance.  This ultimately feeds the network life cycle, where the entire process starts over again for each change the network requires.

Review/Perspective: IOS Troubleshooting

1.3.a Use IOS troubleshooting tools

  • 1.3.a [i] debug, conditional debug
  • 1.3.a [ii] ping, traceroute with extended options
  • 1.3.a [iii] Embedded packet capture
  • 1.3.a [iv] Performance monitor

Debug, Conditional Debugs

All CCIE relevant debug commands are compiled on the BeThePackets Wiki at:
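As a quick illustration, a conditional debug restricts debug output to traffic matching a condition, which keeps a busy router usable; a sketch, with the interface name as a placeholder:

```
Router# debug condition interface GigabitEthernet0/0   ! limit debugs to this interface
Router# debug ip packet                                 ! now scoped by the condition
Router# show debug condition                            ! verify active conditions
Router# undebug all                                     ! stop all debugging when done
```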

Ping and Traceroute and Embedded packet capture

Ping and Traceroute configurations can be found on the BeThePackets Wiki at:
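For reference, a sketch of extended ping/traceroute options and an embedded packet capture; addresses and interface names are placeholders, and the capture syntax shown is the IOS-XE style (classic IOS uses the older monitor capture buffer/point syntax instead):

```
Router# ping 10.1.1.1 source Loopback0 size 1500 df-bit repeat 100
Router# traceroute 10.1.1.1 source Loopback0 numeric

Router# monitor capture CAP interface GigabitEthernet0/0 both
Router# monitor capture CAP match any
Router# monitor capture CAP start
Router# monitor capture CAP stop
Router# show monitor capture CAP buffer brief
```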

Performance monitor

Performance Monitoring allows you to monitor the status of network flows to inform you of potential issues that could cause problems with other specific network applications such as Voice and Video.
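A rough configuration sketch of the Medianet-style Performance Monitor, assuming an IOS release that supports it; the monitor, policy, and interface names are placeholders:

```
Router(config)# flow monitor type performance-monitor PERF-MON
Router(config-flow-monitor)# record default-rtp
Router(config)# policy-map type performance-monitor PERF-POLICY
Router(config-pmap)# class class-default
Router(config-pmap-c)# flow monitor PERF-MON
Router(config)# interface GigabitEthernet0/0
Router(config-if)# service-policy type performance-monitor input PERF-POLICY
```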

Configuration can be found on the BeThePackets Wiki at:

Review/Perspective: Changes to a Network

1.2.a Evaluate proposed changes to a network

  • 1.2.a [i] Changes to routing protocol parameters
  • 1.2.a [ii] Migrate parts of a network to IPv6
  • 1.2.a [iii] Routing protocol migration
  • 1.2.a [iv] Adding multicast support
  • 1.2.a [v] Migrate spanning tree protocol
  • 1.2.a [vi] Evaluate impact of new traffic on existing QoS design


Evaluation of network changes should always be done to fully understand the scope of work, the impact the changes can have on existing network routing, and how to plan so that changes cause the least amount of downtime, should downtime be required.

Changes to routing protocol parameters

Each routing protocol IOS supports can be configured with very high granularity: you can manipulate which routes are advertised, which devices are allowed to become neighbors, and the metrics and weights of prefixes to make them better or worse, and you can even advertise equal-cost routes.  Routing protocol changes usually require an overall understanding of the routing of the entire network, so that if changes are imposed there is little to no impact on existing routing.

Migrate parts of a network to IPv6

If you intend to service IPv6 in only portions of a network you can do so; the question then becomes whether you intend those IPv6 networks to be able to communicate with your other IPv4 networks.

If not, then configuration becomes easy, as IPv6 can be configured on top of IPv4 in what's known as a dual-stack environment.  This makes the transition to IPv6 very smooth, since it can be built and tested on top of the existing IPv4 network, with the underlying IPv4 networks being phased out or removed over time as needed.

If you do intend for the IPv6 networks to communicate with the IPv4 prefixes on your network, then some method of translation or tunneling (such as NAT64 translation or 6to4 tunneling) is required to facilitate inter-protocol communication.
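Dual-stacking an interface is a small change; a sketch, with the interface name and addresses as placeholders:

```
Router(config)# ipv6 unicast-routing                 ! enable IPv6 routing globally
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip address 192.0.2.1 255.255.255.0
Router(config-if)# ipv6 address 2001:db8:0:1::1/64   ! IPv6 runs alongside IPv4
```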

Routing protocol migration

This can typically be done in a number of different ways.  If two separate networks need to be merged into one and each uses its own IGP for routing, temporarily redistributing routes into each network is an option, provided no subnet overlap exists.  If overlap does exist, some form of network address translation is needed for the two networks to communicate until a new IP addressing scheme can be devised and implemented.  If mutual redistribution is used across multiple interconnects into the same network, careful consideration is needed so that routing loops aren't introduced at the interconnect points.  This allows the initial communication between the two separate networks.

As for migrating to another protocol, you can systematically configure the new IGP on top of the old one, provided you've raised the new protocol's administrative distance so its routes are not yet installed in the IPv4 RIB.  Once all routes have been advertised and learned properly by the new IGP process, you can lower its administrative distance to a preferred value so it begins to take over and populates the IPv4 RIB with the new IGP's best paths.  At that point you can remove the old IGP configuration completely.
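The administrative-distance approach can be sketched as follows for a hypothetical EIGRP-to-OSPF migration; process IDs, networks, and distances are placeholders:

```
! Bring OSPF up without letting it install routes
Router(config)# router ospf 1
Router(config-router)# network 10.0.0.0 0.255.255.255 area 0
Router(config-router)# distance 255           ! OSPF routes not installed in the RIB
! Verify adjacencies and the LSDB (show ip ospf neighbor / database), then:
Router(config-router)# distance 89            ! now preferred over EIGRP internal (AD 90)
! Finally: no router eigrp <as> once the RIB shows OSPF best paths
```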

Adding multicast support

When considering multicast functionality it's useful to determine which method of multicast is ideal for the network.  If it's a small network with no congestion, PIM dense mode (PIM-DM) can be used between routers to send multicast traffic.  If there are congestion issues, or you prefer a more efficient multicast process, PIM sparse mode (PIM-SM) can be used between routers.  IGMP is typically used on LANs for hosts to signal multicast group membership.
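Enabling PIM-SM can be sketched as below; the interface name and RP address are placeholders, and a static RP is only one of several RP-selection options:

```
Router(config)# ip multicast-routing
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip pim sparse-mode
Router(config)# ip pim rp-address 10.0.0.1   ! static rendezvous point for PIM-SM
```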

Migrate spanning tree protocol

Spanning tree protocol migration is simple if it's a migration from PVST+ to RSTP.  RSTP is compatible and works with PVST+; however, you can still encounter convergence issues if a link error occurs and you haven't migrated all devices at once.  Also, all VLANs will essentially bounce all port connectivity once switched over, while the devices go through root bridge determination and learn which designated network paths to forward or block.

Transitioning from PVST or RSTP to MST can be simple if the device you're implementing MST on is a transport device between other PVST or RSTP devices.  All that's needed is to map MST instances to the appropriate ingress/egress interfaces to reconnect them to the rest of the PVST/RSTP network.  This method can also be used as a systematic approach to replacing or consolidating VLANs throughout the network into an MST-only solution when migrating to MST.
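A minimal MST region configuration sketch; the region name, revision, and VLAN-to-instance mappings are placeholders and must match on every switch in the region:

```
Switch(config)# spanning-tree mode mst
Switch(config)# spanning-tree mst configuration
Switch(config-mst)# name REGION1
Switch(config-mst)# revision 1
Switch(config-mst)# instance 1 vlan 10,20    ! map VLANs to MST instance 1
Switch# show spanning-tree mst configuration
```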

Evaluate impact of new traffic on existing QoS design

If new traffic classes are introduced to a network that already has a pre-defined QoS design, the impact on existing QoS depends on how the traffic is introduced.

If the new traffic is plentiful and saturates the network then a decision would need to be made to adjust the QoS design to accommodate the new traffic so it can pass, or rate limit it in a low queue so it has least priority.

You can use NetFlow and Top Talkers on routers to determine exactly what kind of traffic is traversing an interface in either direction.  You can also use NetFlow exporters to send this data to applications that read it and track it historically, so network flows can be reported on and analyzed, and network engineers can make informed decisions on how to service the traffic.
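A classic-NetFlow sketch covering both uses; the interface, collector address, and port are placeholders:

```
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip flow ingress                       ! collect flows on the interface
Router(config)# ip flow-top-talkers
Router(config-flow-top-talkers)# top 10
Router(config-flow-top-talkers)# sort-by bytes
Router(config)# ip flow-export destination 192.0.2.100 9996
Router(config)# ip flow-export version 9
Router# show ip flow top-talkers                          ! view heaviest flows on-box
```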

Review/Perspective: UDP Operations

1.1.f Explain UDP operations

  • 1.1.f [i] Starvation
  • 1.1.f [ii] Latency
  • 1.1.f [iii] RTP/RTCP concepts



Starvation occurs primarily with TCP traffic when it is combined in the same class of service as UDP traffic.  If UDP traffic is the source of congestion, TCP traffic in the same class is essentially starved because of the way TCP behaves during congestion: TCP slows its transmission rate when it detects congestion, while UDP does not back off and consumes the freed bandwidth, exacerbating the problem even further.


UDP Latency

Aside from the typical causes of latency, UDP can be sent without acknowledgement, which sets it apart from TCP in that transmissions don't have to wait for an acknowledgement before more data is sent.  This allows UDP to be considered 'faster' than TCP in terms of transmitting data, though it lacks the reliability that TCP provides.


RTP/RTCP Concepts

RTP is an IETF standard protocol designed to manage real-time streams of data as they're sent.  RTP typically carries real-time media streams such as audio and video and is carried over UDP.

RTCP is an out-of-band control protocol used alongside RTP to monitor quality of service, provide transmission statistics, and aid in synchronization of multiple media streams.

Review/Perspectives: TCP Operations

1.1.e Explain TCP operations

  • 1.1.e [i] IPv4 and IPv6 PMTU
  • 1.1.e [ii] MSS
  • 1.1.e [iii] Latency
  • 1.1.e [iv] Windowing
  • 1.1.e [v] Bandwidth delay product
  • 1.1.e [vi] Global synchronization
  • 1.1.e [vii] Options


IPv4 and IPv6 PMTU

In IPv4, the path MTU is not necessarily discovered before transmission as it is with IPv6 PMTUD.  Instead, packets that arrive at a chokepoint where the MTU is smaller than the transmitted packet size are fragmented mid-transit, or dropped altogether if the DF bit is set.  IPv4 can optionally use path MTU discovery to verify the largest packet size that will traverse a given path before transmitting; IPv6 PMTUD does this automatically.  In IPv6, fragmentation occurs only at the source when the path MTU is smaller than the packet size, since IPv6 routers never fragment in transit.  It is therefore strongly suggested to utilize PMTUD to avoid fragmentation of IPv6 packets.



TCP segments carry the actual data between two TCP endpoints; their size is governed by the Maximum Segment Size (MSS) setting, if one is configured.  Even without an explicit MSS setting there is still an inherent maximum: the space left within the MTU after all transport headers are accounted for is effectively the MSS.  For standard Ethernet this is 1500 − 20 (IP header) − 20 (TCP header) = 1460 bytes.  You would typically manipulate the MSS when you know traffic will encounter paths that require additional transport headers not considered locally before transmission.  This makes the segment size smaller, reserving space for the anticipated transport headers to be added along the path.



Latency is the amount of time it takes from an originating source to send data to the receiving destination.  Factors that influence the amount of time it takes for data to get from start to finish include:

  • Propagation Delay – This is the amount of time it physically takes for transmitted signals to get from source to destination.
  • Serialization – This is the amount of time it takes for the conversion of bytes to be formed into a bit stream and placed onto an interface to be transmitted.
  • Data Protocols – Some protocols require handshakes, such as TCP's three-way handshake, before data can be sent; waiting for the acknowledgements from these handshakes takes time
  • Routing and Switching – the amount of time it takes for a router or switch to determine the direction and interface the packet must exit out of takes time
  • Queuing and buffering – this happens only on a congested link, where packets can be held in a buffer or queue before being serialized onto an interface for transmission; the time the traffic spends in the queue adds to latency



Windowing is a TCP operation that allows more than one segment to be transmitted before an acknowledgement is required.  This allows faster, more efficient transmission of data across an unreliable network path.  During slow start, the window of unacknowledged segments effectively doubles for every successful round of acknowledgements received, provided no segments are lost in transit.  If there is an interruption in the sequencing of received segments, the window shrinks back down and the sender ramps the window up again, repeating the growth process.


Bandwidth delay product

The bandwidth-delay product (BDP) of a transmission path defines the amount of data TCP should have in flight within the path at any one time in order to fully utilize the available channel capacity.

Bandwidth-delay product is defined as the capacity of the pipe:

BDP (bytes) = bandwidth (bytes/sec) × round-trip time (sec)

For example, a 100 Mb/s path with a 40 ms RTT gives 12,500,000 bytes/sec × 0.04 s = 500,000 bytes, so roughly 500 KB must be in flight to keep the pipe full.

If the BDP is ever larger than the TCP receive window, the receive window becomes the limiting factor for throughput; enlarging the TCP window would be needed to fix this.


Global synchronization

TCP global synchronization occurs with TCP flows during times of high network congestion.  When enough traffic fills up an interface's hardware transmit queues, additional packets attempting to be sent are subject to tail drop.

TCP automatically recovers from dropped packets: it reduces the rate at which it sends traffic for a certain period of time, then probes whether the network is still congested by slowly ramping its transmission speed back up.  This is known as the slow-start algorithm.

When all senders experience this problem at the same time, slowing down and speeding back up in lockstep, it is referred to as global synchronization.  It leads to inefficient use of bandwidth due to large numbers of dropped packets, which must be retransmitted at a reduced sending rate.

Tail drop is the leading cause of this problem.  Features such as RED or WRED can reduce the likelihood of it occurring, as well as keep queue sizes down to a manageable level.



The TCP options field was introduced to allow the protocol to gain new functionality for evolving networks, present and future.  For example, window scaling is accomplished via an option that extends the effective window size beyond what the original 16-bit window field allowed, letting TCP keep more segments in flight.

Review/Perspectives: IP Operations part 2

1.1.d Explain IP operations

  • 1.1.d [iii] IPv4 and IPv6 fragmentation
  • 1.1.d [iv] TTL
  • 1.1.d [v] IP MTU

IP fragmentation – is an IP process that breaks datagrams into smaller pieces, so that packets can be formed small enough to pass through a link with a smaller MTU than the original datagram size.  Fragments are reassembled by the receiving host.

In IPv4, if the size of the PDU is larger than the next hop's MTU, the device has two options:

  • Drop the packet and send back an ICMP message indicating the packet is too big
  • Fragment the packet and send it over the link with the smaller MTU

IPv6 hosts are expected to determine the optimal path MTU before sending packets, and IPv6 guarantees that any packet of 1280 bytes or less is deliverable on any link.

IPv4 routers will fragment data where IPv6 routers do not fragment, but rather drop packets larger than their MTU.

Even though the IPv4 and IPv6 headers differ (IPv6 carries its fragmentation fields in a Fragment extension header rather than in the base header), both protocols contain the fields necessary to perform and reassemble fragmentation.

TTL – The IP header contains a field for the TTL counter.  The sender initializes the TTL (up to a maximum of 255), and the value decrements by 1 for each successive hop the packet traverses.  Once it reaches 0 the packet is dropped, and the routing device that dropped it sends an ICMP Time Exceeded message back to the sender, informing it that the destination was not reached due to TTL expiry.

TTL behavior is handled slightly differently on MPLS label-switched networks.  When a packet enters an MPLS cloud, the IP TTL value is decremented and then copied into the MPLS TTL field of the labels pushed onto the traffic.  Once the traffic reaches the edge of the MPLS network, the label TTL is decremented by 1 as the label is taken off and the result is reflected back into the IP header.


If the label on the packet must be swapped in transit, the TTL of the incoming label is carried over to the swapped label, and the TTL is copied to all top-level labels pushed onto the packet.  If the operation is to pop a label, the TTL is decremented by 1 and copied to the newly exposed label, unless the resulting value is greater than the TTL already in that label, in which case the copy does not happen.


TTL expiry and labels – when an LSR receives a label-switched packet whose TTL decrements to 0, it discards the packet and sends back an ICMP message just as a normal router does; however, the LSR may not have an IP path to the source of the packet.  In that case the ICMP message is forwarded along the LSP the original packet was following.  In general, P routers on an MPLS backbone do not hold all VPN routing information, which is why the message is forwarded along the same LSP the original packet was taking, in the hope that the egress router, upon receiving the ICMP packet, will forward the message back to the original sender.

This operation is performed only if the MPLS payload is IPv4 or IPv6 traffic; for any other transport protocol, the packet is simply dropped.



IP MTU – The MTU is the maximum length of data that can be transmitted by a protocol in one instance.  For Ethernet it is typically set to 1500 bytes by default, which is the largest payload that can be carried within a standard Ethernet frame.


You can set this value globally (which may require a restart of the device) or per interface.  Jumbo frames with an MTU up to around 9000 bytes are supported on Gigabit and faster links.

TCP MSS is the MTU minus the number of bytes required for the IP, TCP, and any other headers where needed.  To manipulate the MSS value, use the ip tcp adjust-mss interface configuration command and set the desired value.
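A sketch of the adjustment; the interface name is a placeholder, and 1360 is just an example value that leaves headroom for tunnel overhead such as GRE or IPsec:

```
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip tcp adjust-mss 1360   ! clamp MSS in TCP SYNs crossing this interface
```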