Q1. DRAG DROP
Drag and drop the Cisco PfR adjacency types.
Answer:
Q2. Refer to the exhibit.
R1, R2, and R3 have full network connectivity to each other, but R2 prefers the path through R3 to reach network 172.17.1.0/24. Which two actions can you take so that R2 prefers the path through R1 to reach 172.17.1.0/24? (Choose two.)
A. Set the reference bandwidth to 10000 on R1, R2, and R3.
B. Configure the cost on the link between R1 and R3 to be greater than 100 Mbps.
C. Set the reference bandwidth on R2 only.
D. Configure a manual bandwidth statement with a value of 1 Gbps on the link between R1 and R3.
E. Modify the cost on the link between R1 and R2 to be greater than 10 Gbps.
F. Configure a manual bandwidth statement with a value of 100 Mbps on the link between R1 and R2.
Answer: A,B
Explanation:
By default, the OSPF reference bandwidth on Cisco routers is 100 Mbps, so FastEthernet and anything faster all receive a cost of 1; a Gigabit Ethernet or 10 Gigabit Ethernet interface is therefore treated the same as FastEthernet, which is not ideal. If the reference bandwidth is raised to 10000 (as in answer A), the faster links receive lower costs and are preferred. The reference bandwidth must be changed on all routers in the OSPF domain so that costs stay consistent. Increasing the cost on the R1-R3 link likewise steers traffic onto the more direct route through R1.
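A minimal configuration sketch of the two answers follows (the OSPF process ID, the interface on the R1-R3 link, and the cost value are assumptions, since the exhibit is not reproduced here):
! Answer A: configure on R1, R2, and R3
router ospf 1
 auto-cost reference-bandwidth 10000
!
! Answer B: raise the OSPF cost on the R1-R3 link
interface GigabitEthernet0/1
 ip ospf cost 200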
Q3. Which two statements about route summarization are true? (Choose two.)
A. RIP, IGRP, and EIGRP can automatically summarize routing information at network address boundaries.
B. EIGRP can automatically summarize external routes.
C. The area range command can aggregate addresses on the ASBR.
D. The summary-address command under the router process configures manual summarization on RIPv2 devices.
E. The ip classless command enables classful protocols to select a default route to an unknown subnet on a network with other known subnets.
Answer: A,E
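For reference, a small sketch of the commands behind answers A and E (the EIGRP AS number is an assumption): automatic summarization at classful boundaries is controlled by auto-summary, and ip classless lets the router fall back to a default route for unknown subnets of an otherwise known network:
router eigrp 100
 auto-summary   ! automatic summarization at classful network boundaries (answer A)
!
ip classless    ! use a default route for unknown subnets of a known major network (answer E)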
Q4. Which two mechanisms can be used to eliminate Cisco Express Forwarding polarization? (Choose two.)
A. alternating cost links
B. the unique-ID/universal-ID algorithm
C. Cisco Express Forwarding antipolarization
D. different hashing inputs at each layer of the network
Answer: B,D
Explanation:
This document describes how Cisco Express Forwarding (CEF) polarization can cause suboptimal use of redundant paths to a destination network. CEF polarization is the effect when a hash algorithm chooses a particular path and the redundant paths remain completely unused.
How to Avoid CEF Polarization
- Alternate between the default (SIP and DIP) and full (SIP + DIP + Layer 4 ports) hashing inputs configuration at each layer of the network.
- Alternate between an even and an odd number of ECMP links at each layer of the network.
CEF load balancing does not depend on how the protocol routes are inserted in the routing table, so OSPF routes exhibit the same behavior as EIGRP. In a hierarchical network where several routers in a row perform load sharing, they all use the same algorithm to load-share.
The hash algorithm load-balances this way by default:
1: 1
2: 7-8
3: 1-1-1
4: 1-1-1-2
5: 1-1-1-1-1
6: 1-2-2-2-2-2
7: 1-1-1-1-1-1-1
8: 1-1-1-2-2-2-2-2
The number before the colon represents the number of equal-cost paths. The number after the colon represents the proportion of traffic which is forwarded per path.
This means that:
For two equal cost paths, load-sharing is 46.666%-53.333%, not 50%-50%.
For three equal cost paths, load-sharing is 33.33%-33.33%-33.33% (as expected).
For four equal cost paths, load-sharing is 20%-20%-20%-40% and not 25%-25%-25%-25%.
This illustrates that, when there is an even number of ECMP links, the traffic is not load-balanced evenly.
Cisco IOS also introduced a concept called unique-ID/universal-ID, which helps avoid CEF polarization. This algorithm, called the universal algorithm (the default in current Cisco IOS versions), adds a 32-bit router-specific value to the hash function (called the universal ID; this is a randomly generated value at boot-up that can be manually controlled). This seeds the hash function on each router with a unique ID, which ensures that the same source/destination pair hashes to a different value on different routers along the path. This provides better network-wide load sharing and circumvents the polarization issue. The unique-ID concept does not work for an even number of equal-cost paths due to a hardware limitation, but it works perfectly for an odd number of equal-cost paths. To overcome this problem, Cisco IOS adds one link to the hardware adjacency table when there is an even number of equal-cost paths, in order to make the system believe that there is an odd number of equal-cost links.
Reference: http://www.cisco.com/c/en/us/support/docs/ip/express-forwarding-cef/116376-technote-cef-00.html
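A hedged sketch of the corresponding global configuration (the include-ports variant is shown as one way to vary the hashing inputs; exact availability depends on platform and release):
! Answer B: seed the CEF hash with a router-specific universal ID
ip cef load-sharing algorithm universal
!
! Answer D: on other layers of the network, include Layer 4 ports so the hashing inputs differ
ip cef load-sharing algorithm include-ports source destination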
Q5. Consider a network that mixes link bandwidths from 128 kb/s to 40 Gb/s. Which value should be set for the OSPF reference bandwidth?
A. Set a value of 128.
B. Set a value of 40000.
C. Set a manual OSPF cost on each interface.
D. Use the default value.
E. Set a value of 40000000.
F. Set a value of 65535.
Answer: C
Explanation:
Unlike the RIP metric, which is a simple hop count, or EIGRP's composite metric formula, OSPF is fairly simple. The default formula for the OSPF cost is 10^8/BW. By default the reference bandwidth is 100 Mbps, so any link of 100 Mbps or faster has a cost of 1. A T1 interface has a cost of 64, so if a router reaches a FastEthernet network across a T1 link, the total metric is 65 (64 + 1). You do, however, have the ability to statically set the cost on a per-interface basis with the ip ospf cost command, where the cost is an integer between 1 and 65535.
So why would you want to statically configure a cost? The biggest advantage of statically configuring an OSPF cost on an interface is to control which route OSPF selects; in a nutshell, it is like statically steering a dynamic protocol toward a specific route. It should also be used when interface bandwidths vary greatly (some very low-bandwidth and some very high-speed interfaces on the same router), as in this question.
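A minimal sketch of answer C (interface names and cost values are assumptions): the cost is set by hand so that links ranging from 128 kb/s to 40 Gb/s are still ranked sensibly:
interface Serial0/0
 ip ospf cost 1000   ! 128 kb/s link: high cost
!
interface FortyGigabitEthernet1/0/1
 ip ospf cost 1      ! 40 Gb/s link: lowest cost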
Q6. Which option is the default number of routes over which EIGRP can load balance?
A. 1
B. 4
C. 8
D. 16
Answer: B
Explanation:
By default, EIGRP load-shares over four equal-cost paths. For load sharing to happen, the routes to load-share over must appear in the IP forwarding table (visible with the show ip route command). Only when a route shows up in the forwarding table with multiple paths to it will load sharing occur.
Reference: http://www.informit.com/library/content.aspx?b=CCIE_Practical_Studies_I&seqNum=126
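The default can be confirmed or changed with the maximum-paths router subcommand (the AS number below is an assumption):
router eigrp 100
 maximum-paths 4   ! the default; EIGRP installs up to four equal-cost routes unless this is changed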
Q7. Refer to the exhibit.
Switch DSW1 should share the same MST region with switch DSW2. Which statement is true?
A. Configure DSW1 with the same version number, and VLAN-to-instance mapping as shown on DSW2.
B. Configure DSW1 with the same region name, revision number, and VLAN-to-instance mapping as shown on DSW2.
C. DSW2 uses the VTP server mode to automatically propagate the MST configuration to DSW1.
D. DSW1 is in VTP client mode with a lower configuration revision number, therefore, it automatically inherits MST configuration from DSW2.
E. DSW1 automatically inherits MST configuration from DSW2 because they have the same domain name.
Answer: B
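As answer B implies, the region name, revision number, and VLAN-to-instance mapping must all match before two switches share an MST region; a sketch with assumed values (the exhibit is not reproduced here):
spanning-tree mode mst
spanning-tree mst configuration
 name REGION1      ! must match on DSW1 and DSW2
 revision 10       ! must match
 instance 1 vlan 10,20
 instance 2 vlan 30,40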
Q8. Refer to the exhibit.
Which two actions can you take to enable CE-1 at site A to access the Internet? (Choose two.)
A. Create a default route for site A on PE-1 with the next hop set to the PE-2 interface to the Internet.
B. Originate a default route in site B with the next hop set to the PE-2 Internet interface, and import the default route into site A.
C. Create a default route on CE-1 with the next hop set to the PE-1 upstream interface.
D. Originate a default route in site A with the next hop set to the PE-2 interface to CE-1.
E. Create a static default route on CE-1 with the next hop set to the PE-2 interface to the Internet.
Answer: A,B
Q9. Refer to the exhibit.
If the route to 10.1.1.1 is removed from the R2 routing table, which server becomes the master NTP server?
A. R2
B. the NTP server at 10.3.3.3
C. the NTP server at 10.4.4.4
D. the NTP server with the lowest stratum number
Answer: D
Explanation:
NTP uses a concept called “stratum” that defines how many NTP hops away a device is from an authoritative time source. For example, a device with stratum 1 is a very accurate device and might have an atomic clock attached to it. Another NTP server that is using this stratum 1 server to sync its own time would be a stratum 2 device because it’s one NTP hop further away from the source. When you configure multiple NTP servers, the client will prefer the NTP server with the lowest stratum value.
Reference: https://networklessons.com/network-services/cisco-network-time-protocol-ntp/
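A minimal sketch of the relevant R2 configuration, using the addresses from the question (the exhibit itself is not reproduced here); once 10.1.1.1 becomes unreachable, R2 synchronizes to whichever remaining server reports the lowest stratum:
ntp server 10.1.1.1
ntp server 10.3.3.3
ntp server 10.4.4.4
! Verify the selected peer and its stratum with "show ntp associations"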
Q10. Which two options are reasons for TCP starvation? (Choose two.)
A. The use of tail drop
B. The use of WRED
C. Mixing TCP and UDP traffic in the same traffic class
D. The use of TCP congestion control
Answer: C,D
Explanation:
It is a general best practice to not mix TCP-based traffic with UDP-based traffic (especially Streaming-Video) within a single service-provider class because of the behaviors of these protocols during periods of congestion. Specifically, TCP transmitters throttle back flows when drops are detected. Although some UDP applications have application-level windowing, flow control, and retransmission capabilities, most UDP transmitters are completely oblivious to drops and, thus, never lower transmission rates because of dropping. When TCP flows are combined with UDP flows within a single service-provider class and the class experiences congestion, TCP flows continually lower their transmission rates, potentially giving up their bandwidth to UDP flows that are oblivious to drops. This effect is called TCP starvation/UDP dominance. TCP starvation/UDP dominance likely occurs if (TCP-based) Mission-Critical Data is assigned to the same service-provider class as (UDP-based) Streaming-Video and the class experiences sustained congestion. Even if WRED or other TCP congestion control mechanisms are enabled on the service-provider class, the same behavior would be observed because WRED (for the most part) manages congestion only on TCP-based flows.
Reference: http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/WAN_and_MAN/QoS_SRND/QoS-SRND-Book/VPNQoS.html
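A hedged MQC sketch of the underlying best practice, keeping the UDP-based video and the TCP-based data in separate classes (class names, ACL names, and bandwidth values are assumptions for illustration):
class-map match-all STREAMING-VIDEO
 match access-group name UDP-VIDEO
class-map match-all MISSION-CRITICAL-DATA
 match access-group name TCP-DATA
!
policy-map WAN-EDGE
 class STREAMING-VIDEO
  bandwidth percent 20
 class MISSION-CRITICAL-DATA
  bandwidth percent 25
  random-detect   ! WRED manages congestion only for the TCP-based class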
Q11. Refer to the exhibit.
ASN 64523 has a multihomed BGP setup to ISP A and ISP B. Which BGP attribute can you set to allow traffic that originates in ASN 64523 to exit the ASN through ISP B?
A. origin
B. next-hop
C. weight
D. multi-exit discriminator
Answer: D
Explanation:
MED is an optional nontransitive attribute. MED is a hint to external neighbors about the preferred path into an autonomous system (AS) that has multiple entry points. The MED is also known as the external metric of a route. A lower MED value is preferred over a higher value. Example at reference link below:
Reference: http://www.cisco.com/c/en/us/support/docs/ip/border-gateway-protocol-bgp/13759-37.html
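A minimal sketch of how MED is applied with a route-map (the neighbor address, AS numbers, and metric value are assumptions for illustration; a lower MED is preferred):
route-map SET-MED permit 10
 set metric 50   ! the MED value advertised to the neighbor
!
router bgp 64523
 neighbor 192.0.2.1 remote-as 65001
 neighbor 192.0.2.1 route-map SET-MED out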
Q12. External EIGRP route exchange on routers R1 and R2 was failing because the routers had duplicate router IDs. You changed the eigrp router-id command on R1, but the problem persists. Which additional action must you take to enable the routers to exchange routes?
A. Change the corresponding loopback address.
B. Change the router ID on R2.
C. Reset the EIGRP neighbor relationship.
D. Clear the EIGRP process.
Answer: D
Q13. RIPv2 is enabled on a router interface. The "neighbor" command is also configured with a specific IP address. Which statement describes the effect of this configuration?
A. RIP stops sending multicast packets on that interface.
B. RIP starts sending only unicast packets on that interface.
C. RIP starts ignoring multicast packets on that interface.
D. RIP starts sending unicast packets to the specified neighbor, in addition to multicast packets.
Answer: D
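A minimal RIPv2 sketch (network and neighbor addresses are assumptions): the neighbor statement adds unicast updates toward the listed peer while the normal 224.0.0.9 multicast updates continue, which is exactly the behavior described in answer D:
router rip
 version 2
 network 192.168.1.0
 neighbor 192.168.1.2   ! unicast updates to this peer, in addition to the multicasts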
Q14. Which three types of address-family configurations are supported in EIGRP named mode? (Choose three.)
A. address-family ipv4 unicast
B. address-family vpnv4
C. address-family ipv6 unicast
D. address-family ipv6 multicast
E. address-family vpnv6
F. address-family ipv4 multicast
Answer: A,C,F
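A sketch of EIGRP named mode with two of the supported address families (the instance name and AS number are assumptions); the ipv4 multicast address family follows the same pattern:
router eigrp NAMED
 address-family ipv4 unicast autonomous-system 100
  network 10.0.0.0
 exit-address-family
 !
 address-family ipv6 unicast autonomous-system 100
 exit-address-family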
Q15. Which statement describes the BGP add-path feature?
A. It allows for installing multiple IBGP and EBGP routes in the routing table.
B. It allows a network engineer to override the selected BGP path with an additional path created in the config.
C. It allows BGP to provide backup paths to the routing table for quicker convergence.
D. It allows multiple paths for the same prefix to be advertised.
Answer: D
Explanation:
BGP routers and route reflectors (RRs) propagate only their best path over their sessions. The advertisement of a prefix replaces the previous announcement of that prefix (this behavior is known as an implicit withdraw). The implicit withdraw can achieve better scaling, but at the cost of path diversity. Path hiding can prevent efficient use of BGP multipath, prevent hitless planned maintenance, and can lead to MED oscillations and suboptimal hot-potato routing. Upon nexthop failures, path hiding also inhibits fast and local recovery because the network has to wait for BGP control plane convergence to restore traffic. The BGP Additional Paths feature provides a generic way of offering path diversity; the Best External or Best Internal features offer path diversity only in limited scenarios. The BGP Additional Paths feature provides a way for multiple paths for the same prefix to be advertised without the new paths implicitly replacing the previous paths. Thus, path diversity is achieved instead of path hiding.
Reference: http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/iproute_bgp/configuration/xe-3s/irg-xe-3s-book/irg-additional-paths.html
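A hedged sketch of enabling Additional Paths under the IPv4 address family (neighbor address and AS number are assumptions; the exact options vary by release, see the reference above):
router bgp 65000
 neighbor 10.0.0.2 remote-as 65000
 address-family ipv4
  bgp additional-paths select all
  bgp additional-paths send receive
  neighbor 10.0.0.2 activate
  neighbor 10.0.0.2 advertise additional-paths all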