Saturday, February 2, 2008

CCNP Semester 1, Module 1

Module 1: Overview of Scalable Internetworks

Overview

Initially, TCP/IP networks relied on simple distance vector routing protocols and classful 32-bit IP addressing. These technologies offered a limited capacity for growth. Network designers must now modify, redesign, or abandon these early technologies to build modern networks that can scale to handle fast growth and constant change. This module explores networking technologies that have evolved to meet this demand for scalability.

Scalability is the capability of a network to grow and adapt without major redesign or reinstallation. It seems obvious to allow for growth in a network, but growth can be difficult to achieve without redesign. This redesign may be significant and costly. For example, a network may provide a small company with access to e-mail, the Internet, and shared files. If the company tripled in size and demanded streaming video or e-commerce services, could the original networking media and devices adequately serve these new applications? Most organizations cannot afford to recable or redesign their networks when users are relocated, new nodes are added, or new applications are introduced.

Good design is the key to the capability of a network to scale. Poor design, not an outdated protocol or router, will prevent a network from scaling properly. A network design should follow a hierarchical model to be scalable. This module discusses the components of the hierarchical network design model and the key characteristics of scalable internetworks.

NOTE:

It is required that the student study the commands covered in the module using the labs and the Command Reference. Not all required commands are covered in sufficient detail in the text alone. Successful completion of this course requires a thorough knowledge of command syntax and application.

The Command Reference can be found on the Cisco.com website at the following URL:

http://www.cisco.com/univercd/cc/td/doc/product/software/ios123/123mindx/index.htm

1.1 The Hierarchical Design Model

1.1.1 The three-layer hierarchical design model

A hierarchical network design model breaks the complex problem of network design into smaller, more manageable problems. Each level, or tier, in the hierarchy addresses a different set of problems. This helps the designer optimize network hardware and software to perform specific roles. For example, devices at the lowest tier are optimized to accept traffic into a network and pass that traffic to the higher layers. Cisco offers a three-tiered hierarchy as the preferred approach to network design.

In the three-layer network design model, network devices and links are grouped according to three layers:

* Core
* Distribution
* Access

The three-layer model is a conceptual framework. It is an abstract picture of a network similar to the concept of the Open Systems Interconnection (OSI) reference model.

Layered models are useful because they facilitate modularity. Devices at each layer have similar and well-defined functions. This allows administrators to easily add, replace, and remove individual pieces of the network. This kind of flexibility and adaptability makes a hierarchical network design highly scalable.

At the same time, layered models can be difficult to comprehend because the exact composition of each layer varies from network to network. Each layer of the three-tiered design model may include the following:

* A router
* A switch
* A link
* A combination of these

Some networks may combine the function of two layers into a single device or omit a layer entirely.

The following sections discuss each of the three layers in detail.

The Core Layer
The core layer provides an optimized and reliable transport structure by forwarding traffic at very high speeds. In other words, the core layer switches packets as fast as possible. Devices at the core layer should not be burdened with any processes that stand in the way of switching packets at top speed. Such processes include the following:

* Access-list checking
* Data encryption
* Address translation

The Distribution Layer
The distribution layer is located between the access and core layers and helps differentiate the core from the rest of the network. The purpose of this layer is to provide boundary definition using access lists and other filters to limit what gets into the core. Therefore, this layer defines policy for the network. A policy is an approach to handling certain kinds of traffic, including the following:

* Routing updates
* Route summaries
* VLAN traffic
* Address aggregation

Use these policies to secure networks and to preserve resources by preventing unnecessary traffic.

If a network has two or more routing protocols, such as Routing Information Protocol (RIP) and Interior Gateway Routing Protocol (IGRP), information between the different routing domains is shared, or redistributed, at the distribution layer.
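The following sketch shows one way such redistribution might be configured on a distribution layer router running both RIP and IGRP. The autonomous system number, network addresses, and seed metric values are illustrative assumptions, not values taken from this module.

router igrp 100
 ! Inject RIP-learned routes into the IGRP domain.
 ! IGRP requires a seed metric: bandwidth, delay, reliability, load, and MTU.
 redistribute rip metric 10000 100 255 1 1500
 network 172.16.0.0
!
router rip
 ! Inject IGRP-learned routes back into the RIP domain with a hop count of 3.
 redistribute igrp 100 metric 3
 network 10.0.0.0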

The Access Layer
The access layer supplies traffic to the network and performs network entry control. End users access network resources by way of the access layer. Acting as the front door to a network, the access layer employs access lists designed to prevent unauthorized users from gaining entry. The access layer can also give remote sites access to the network by way of a wide-area technology, such as Frame Relay, ISDN, or leased lines.

1.1.2 Router function in the hierarchy

The core, distribution, and access layers each have clearly defined functions. For this reason, each layer demands a different set of features from routers, switches, and links. Routers that operate in the same layer can be configured in a consistent way because they all must perform similar tasks. The router is the primary device that maintains logical and physical hierarchy in a network. Therefore, proper and consistent configurations are imperative. Cisco offers several router product lines. Each product line has a particular set of features for one of the three layers:

* Core layer – 12000, 7500, 7200, and 7000 series routers.
* Distribution layer – 4500, 4000, and 3600 series routers.
* Access layer – 2600, 2500, 1700, and 1600 series routers.

The following sections revisit each layer and examine the specific routers and other devices used.

1.1.3 Core layer example

The core layer is the center of the network and is designed to be fast and reliable. Access lists should be avoided in the core layer because they add latency, and end users should not have direct access to the core. In a hierarchical network, end user traffic should reach core routers only after those packets have passed through the distribution and access layers. Access lists may exist in those two lower layers.

Core routing is done without access lists, address translation, or other packet manipulation. Because of this, it may seem as though the least powerful routers would work well for so simple a task. However, the opposite is true. The most powerful Cisco routers serve the core because they have the fastest switching technologies and the largest capacity for physical interfaces.

The 7000, 7200, and 7500 series routers feature the fastest switching modes available. These are the Cisco enterprise core routers. The 12000 series router is also a core router designed to meet the core routing needs of Internet Service Providers (ISPs). Unless the company is in the business of providing Internet access to other companies, it is unlikely a 12000 series router will be found in the telecommunications closet.

The Cisco 7000, 7200, and 7500 series routers are modular. This provides scalability, since administrators can add interface modules when needed. The large chassis of these routers can accommodate dozens of interfaces on multiple modules for virtually any media type. This makes these routers scalable and reliable core solutions.

Core routers achieve reliability through the use of redundant links, usually to all other core routers. When possible, these redundant links should be symmetrical, having equal throughput, so that equal-cost load balancing may be used. Core routers need a relatively large number of interfaces to enable this configuration. Core routers also achieve reliability through redundant power supplies. They usually feature two or more hot-swappable power supplies, which may be removed and replaced individually without shutting down the router.

The figure presents a simple core topology using 7507 routers at three key sites in an enterprise. Each Cisco 7507 is directly connected to every other router. This type of configuration is a full mesh. There are also two links between each pair of routers to provide redundancy. Core links should be the fastest and most reliable leased lines in the WAN:

* T1
* T3
* OC3
* Anything better

If redundant T1s are used for this WAN core, each router needs four serial interfaces for two point-to-point connections to each site. Ultimately, the design requires even more than this because other routers at the distribution layer will also need to connect to the core routers. Fortunately, interfaces can be added to the 7507 due to modularity.

With the high-end routers and WAN links involved, the core can become a huge expense, even in a simple example such as this. Some designers will choose not to use symmetrical links in the core to reduce cost. In place of redundant lines, packet-switched and dial-on-demand technologies, such as Frame Relay and ISDN, may be used as backup links. The trade-off for saving money by using such technologies is performance. Using ISDN BRIs as backup links can eliminate the capability of equal-cost load balancing.

The core of a network does not have to exist in the WAN. A LAN backbone may also be considered part of the core layer. Campus networks, or large networks that span an office complex or adjacent buildings, might have a LAN-based core. Switched Fast Ethernet and Gigabit Ethernet are the most common core technologies, usually run over fiber. Enterprise switches, such as the Catalyst 4000, 5000, and 6000 series, shoulder the load in LAN cores because they switch frames at Layer 2 much faster than routers can switch packets at Layer 3. These switches are modular devices and can be equipped with route switch modules (RSMs), which add Layer 3 routing functionality to the switch chassis.

1.1.4 Distribution layer example

The distribution layer enforces policies to limit traffic to and from the core. Distribution layer routers handle less traffic than core layer routers, so they need fewer interfaces and less switching speed. However, a fast core is useless if a slowdown of data transfer at the distribution layer prevents user traffic from reaching core links. For this reason, Cisco offers robust, powerful distribution routers, such as the 4000, 4500, and 3600 series routers. These routers are modular, allowing interfaces to be added and removed as needed. However, the smaller chassis of these series are much more limiting than those of the 7000, 7200, and 7500 series.

Distribution layer routers bring policy to the network by using a combination of the following:

* Access lists
* Route summarization
* Distribution lists
* Route maps
* Other rules to define how a router should deal with traffic and routing updates

Many of these techniques are covered later in the course.

The figure shows that two 3620 routers have been added at Core A, in the same wiring closet as the 7507. In this example, high-speed LAN links connect the distribution routers to the core router. Depending on the size of the network, these links may be part of the campus backbone and will most likely be fiber running at 100 or 1000 Mbps. In this example, Dist-1A and Dist-2A are part of the Core A campus backbone. Dist-1A serves remote sites, while Dist-2A serves access routers at Site A. If Site A uses VLANs, Dist-2A may be responsible for routing between the VLANs.

Both Dist-1A and Dist-2A use access lists to prevent unwanted traffic from reaching the core. In addition, these routers summarize their routing tables in updates to Core A. This keeps the Core A routing table small and efficient.
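As an illustrative sketch only, a distribution router such as Dist-2A running RIP might combine a standard access list with a distribute-list to control which routes it advertises toward Core A. The addresses, list number, and interface name below are hypothetical.

! Permit only the summarized campus prefix in outbound routing updates.
access-list 10 permit 172.16.0.0 0.0.255.255
!
router rip
 network 172.16.0.0
 ! Filter routing updates sent out the interface that faces Core A.
 distribute-list 10 out serial 0/0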

1.1.5 Access layer example

Routers at the access layer permit users at Site A access to the network. Routers at remote site Y and remote site Z also permit users access to the network.

Access routers generally offer fewer physical interfaces than distribution and core routers. For this reason, Cisco access routers feature a small, streamlined chassis that may or may not support modular interfaces. This includes the 1600, 1700, 2500, and 2600 series routers.

Two 2621 routers have been added to the access layer of the network at Site A. These 2621 routers have two FastEthernet interfaces. User end stations connect through a workgroup switch or hub to one FastEthernet interface. The other FastEthernet interface connects to the high-speed campus backbone of Site A.

Each remote site in the example requires only one Ethernet interface for the LAN side and one serial interface for the WAN side. The WAN interface connects by way of Frame Relay or ISDN to the distribution router in the wiring closet of Site A. For this application, the 2610 router provides a single 10-Mbps Ethernet port and will work well at these locations. These remote sites, Y and Z, are small branch offices that must access the core through Site A. Therefore, Dist-1A is a WAN hub for the organization. As the network scales, more remote sites may access the core with a connection to the distribution routers at the WAN hub.

1.2.1 Five characteristics of a scalable network

Although every large internetwork has unique features, all scalable networks have essential attributes in common. A scalable network has five key characteristics:

* Reliable and available – A reliable and available network provides dependable access 24 hours a day, seven days a week. Fault tolerance and redundancy make outages and failures invisible to the end user.
* Responsive – A responsive network should provide Quality of Service (QoS) for various applications and protocols without degrading response times at the desktop. The internetwork must be capable of responding to the latency issues common for Systems Network Architecture (SNA) traffic while still routing desktop traffic without compromising QoS.
* Efficient – Large internetworks must optimize the use of resources, especially bandwidth. It is possible to increase data throughput without adding hardware or buying more WAN services. To do this, reduce unnecessary broadcasts, service location requests, and routing updates.
* Adaptable – An adaptable network can accommodate different protocols, applications, and hardware technologies.
* Accessible but secure – An accessible network allows for connections using dedicated, dialup, and switched services while maintaining network integrity.

The Cisco IOS offers a rich set of features that support network scalability. The remainder of this module discusses IOS features that support these five characteristics of a scalable network.

1.2.2 Making the network reliable and available

A reliable and available network provides users with access 24 hours a day, seven days a week. In a highly reliable and available network, fault tolerance and redundancy make outages and failures invisible to the end user. The high-end devices and telecommunication links that ensure this kind of performance come with a high price tag. Network designers constantly have to balance the needs of users with the resources at hand.

When choosing between high performance and low cost at the core layer, the network administrator should choose the best available routers and dedicated WAN links. The core must be designed to be the most reliable and available layer. If a core router fails or if a core link becomes unstable, routing for the entire internetwork might be adversely affected.

Core routers maintain reliability and availability by rerouting traffic in the event of a failure. Robust networks can adapt to failures quickly and effectively. To build robust networks, the Cisco IOS offers several features that enhance reliability and availability. These features include the following:

* Support for scalable routing protocols
* Alternate paths
* Load balancing
* Protocol tunnels
* Dial backup

The following sections describe these features.

Scalable Routing Protocols
Routers in the core of a network should converge rapidly and maintain reachability to all networks and subnetworks within an Autonomous System (AS). Simple distance vector routing protocols, such as RIP, take too long to update and adapt to topology changes to be viable core solutions. Compatibility issues may require that some areas of a network run simple distance vector protocols such as RIP or Routing Table Maintenance Protocol (RTMP), the Apple proprietary routing protocol for AppleTalk. However, it is best to use a scalable routing protocol in the core layer. Good choices include Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), and Enhanced Interior Gateway Routing Protocol (EIGRP).
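As a minimal sketch, assuming a core router whose interfaces all fall within network 10.0.0.0 (the process ID, autonomous system number, and addressing are invented for illustration), enabling a scalable routing protocol takes only a few commands:

! OSPF, with all matching interfaces placed in backbone area 0
router ospf 1
 network 10.0.0.0 0.255.255.255 area 0

! Or, alternatively, EIGRP in autonomous system 100
router eigrp 100
 network 10.0.0.0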

Alternate Paths
Redundant links maximize network reliability and availability, but they are expensive to deploy throughout a large internetwork. Core links should always be redundant. Other areas of a network may also need redundant telecommunication links. If a remote site exchanges mission-critical information with the rest of the internetwork, that site is a candidate for redundant links. To provide another dimension of reliability, an organization may even invest in redundant routers to connect to these links. A network that consists of multiple links and redundant routers will contain several paths to a given destination. If a network uses a scalable routing protocol, each router maintains a map of the entire network topology. This map helps routers select an alternate path quickly if a primary path fails. EIGRP, for example, maintains a database of alternate paths that can be used if the primary route is lost.

Load Balancing
Redundant links do not necessarily remain idle until a link fails. Routers can distribute the traffic load across multiple links to the same destination. This process is called load balancing. Load balancing can be implemented using alternate paths with the same cost or metric, which is called equal-cost load balancing. It can also be implemented over alternate paths with different metrics, which is referred to as unequal-cost load balancing. When routing IP, the Cisco IOS offers two methods of load balancing, known as per packet and per destination load balancing. If fast switching is enabled, only one of the alternate routes is cached for a given destination address. All packets in the packet stream bound for a specific host take the same path, while packets bound for a different host on the same network may use an alternate route. In this way, traffic is load balanced on a per destination basis.

Per packet load balancing requires more CPU time than per destination load balancing. However, per packet load balancing distributes traffic in proportion to the metrics of unequal paths, which helps utilize bandwidth more efficiently. This proportional distribution is the main advantage of per packet load balancing over per destination load balancing.
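In the classic IOS of this era, the switching path configured on an interface determines which behavior applies. The following is a hedged sketch only; the interface numbers, network, and autonomous system value are hypothetical.

interface serial 0
 ! Fast switching caches one route per destination: per destination load balancing.
 ip route-cache
!
interface serial 1
 ! Disabling the route cache forces process switching: per packet load balancing.
 no ip route-cache
!
router igrp 100
 network 10.0.0.0
 ! variance allows unequal-cost load balancing over paths up to twice the best metric.
 variance 2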

Protocol Tunnels
An IP network with Novell NetWare running Internetwork Packet Exchange (IPX) at a handful of remote sites may provide IPX connectivity between those sites by routing IPX in the core. Even if only two or three offices use NetWare sparingly, this creates the additional overhead of routing a second routed protocol, IPX, in the core. It would also require that all routers in the data path have the appropriate IOS and hardware to support IPX. For this reason, many organizations have adopted "IP only" policies at the network core, because IP has become the dominant routed protocol.

Tunneling gives an administrator a second and more agreeable option. The administrator can configure a point-to-point link through the core between the two routers using IP. When this link is configured, IPX packets can be encapsulated inside IP packets. IPX can then traverse the core over IP links and the core can be spared the additional burden of routing IPX. Using tunnels, the administrator increases the availability of network services.
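A hedged sketch of such a tunnel follows, using a generic routing encapsulation (GRE) tunnel interface. The IPX network number, source interface, and destination address are invented for illustration.

! Enable IPX routing on the router before configuring IPX on any interface.
ipx routing
!
interface tunnel 0
 ! IPX runs over the tunnel, so the IP-only core never carries native IPX packets.
 ipx network 100
 tunnel source serial 0
 tunnel destination 192.168.50.2
 ! GRE over IP is the default tunnel mode on Cisco routers.
 tunnel mode gre ip

The same configuration, with the source and destination reversed, is applied on the router at the far end of the tunnel.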

Dial Backup
Sometimes two redundant WAN links are not enough, or a single link needs to be fault tolerant but purchasing a full-time redundant link is too expensive. In these cases, a backup link can be configured over a dialup technology, such as ISDN, or even an ordinary analog phone line. These relatively low-bandwidth links remain idle until the primary link fails.

Dial backup can be a cost-effective insurance policy, but it is not a substitute for redundant links that can effectively double throughput by using equal-cost load balancing.
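For example, a primary serial circuit can be protected with an ISDN BRI configured as its backup interface. The interface numbers and timer values below are hypothetical.

interface serial 0
 ! Use BRI 0 as the backup link for this primary circuit.
 backup interface bri 0
 ! Bring up the backup 10 seconds after the primary fails,
 ! and tear it down 60 seconds after the primary recovers.
 backup delay 10 60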

1.2.3 Making the network responsive

End users notice network responsiveness as they use the network to perform routine tasks. Users expect network resources to respond quickly, as if network applications were running from a local hard drive. Networks must be configured to meet the needs of all applications, especially time delay sensitive applications such as voice and video. The IOS offers traffic prioritization features to tune responsiveness in a congested network. Routers may be configured to prioritize certain kinds of traffic based on protocol information, such as TCP port numbers. Traffic prioritization ensures that packets carrying mission-critical data take precedence over less important traffic.

If the router schedules these packets for transmission on a first-come, first-served basis, users could experience an unacceptable lack of responsiveness. For example, an end user sending delay-sensitive voice traffic may be forced to wait too long while the router empties its buffer of queued packets.

The IOS addresses priority and responsiveness issues through queuing. Routers that maintain a slow WAN connection often experience congestion. These routers need a method to give certain traffic priority. Queuing refers to the process that the router uses to schedule packets for transmission during periods of congestion. By using the queuing feature, a congested router may be configured to reorder packets so that mission-critical and delay sensitive traffic is processed first. These higher priority packets are sent first even if other low priority packets arrive ahead of them. The IOS supports four methods of queuing, as described in the following sections:

* First-in, first-out (FIFO) queuing
* Priority queuing
* Custom queuing
* Weighted fair queuing (WFQ)

Only one of these queuing methods can be applied per interface because each method handles traffic in a unique way.
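As one hedged illustration of these tools, priority queuing could be configured to move interactive Telnet traffic ahead of all other traffic on a congested serial link. The list number, port value, and interface are arbitrary choices for this sketch.

! Classify Telnet (TCP port 23) as high priority; all other traffic defaults to normal.
priority-list 1 protocol ip high tcp 23
priority-list 1 default normal
!
interface serial 0
 ! Apply the priority list to the congested WAN interface.
 priority-group 1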

1.2.4 Making the network efficient

An efficient network should not waste bandwidth, especially over costly WAN links. To be efficient, routers should prevent unnecessary traffic from traversing the WAN and minimize the size and frequency of routing updates. The IOS includes several features designed to optimize a WAN connection:

* Access lists
* Snapshot routing
* Compression over WANs

The following sections describe each of these features.

Access Lists
Access lists (see figure), also called Access Control Lists (ACLs), can be used to do all of the following:

* Prevent traffic that the administrator defines as unnecessary, undesirable, or unauthorized
* Control routing updates
* Apply route maps
* Implement other network policies that improve efficiency by curtailing traffic

One access list may be applied on an interface for each protocol, per direction, in or out. Different filtering policies can be defined for IP, IPX, and AppleTalk.
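A hedged example follows: an extended IP access list that permits web traffic to a single server and blocks all other inbound IP traffic on a WAN interface. The addresses, list number, and interface are hypothetical.

! Allow HTTP to the server at 172.16.1.10; deny and log everything else.
access-list 101 permit tcp any host 172.16.1.10 eq www
access-list 101 deny ip any any log
!
interface serial 0
 ! One access list per protocol, per direction: this one filters inbound IP traffic.
 ip access-group 101 in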

Snapshot Routing
Distance vector routing protocols typically update neighbor routers with their complete routing table at regular intervals. These timed updates occur even when there have been no changes in the network topology since the last update. If a remote site relies on a dialup technology, such as ISDN, it would be cost prohibitive to maintain the WAN link in an active state 24 hours a day. RIP routers expect updates every 30 seconds by default. This would cause the ISDN link to reestablish twice a minute to maintain the routing tables. It is possible to adjust the RIP timers, but snapshot routing provides a better solution to maximize network efficiency.

Snapshot routing allows routers using distance vector protocols to exchange their complete tables during an initial connection. Snapshot routing then waits until the next active period on the line before again exchanging routing information. The router takes a snapshot of the routing table. The router then uses this picture for routing table entries while the dialup link is down. The result is that the routing table is kept unchanged so that routes will not be lost because a routing update was not received. When the link is re-established, usually because the router has identified interesting traffic that needs to be routed over the WAN, the router again updates its neighbors.
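A minimal sketch of snapshot routing on both ends of the dialup link follows. The timer values, a 5-minute active period and a 480-minute (8-hour) quiet period, are assumptions chosen for illustration rather than values from this module.

! Remote router (the calling side)
interface bri 0
 ! Exchange full routing updates for 5 minutes, then stay quiet for 480 minutes.
 snapshot client 5 480 dialer
!
! Central router (the called side)
interface bri 0
 snapshot server 5 dialer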

Compression over WANs
The IOS supports several compression techniques that can maximize bandwidth by reducing the number of bits in all or part of a frame. Compression is accomplished through mathematical formulas or compression algorithms. Unfortunately, routers must dedicate a significant amount of processor time to compress and decompress traffic, increasing latency. Therefore, compression tends to be an efficient measure only on links with extremely limited bandwidth.
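As a brief sketch, compression is enabled per interface. PPP encapsulation with the Stacker algorithm is just one possible combination; the interface number is hypothetical.

interface serial 0
 encapsulation ppp
 ! Stacker (STAC) compression trades router CPU cycles for reduced link utilization.
 compress stac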

The IOS also supports the following bandwidth optimization features:

* Dial-on-demand routing (DDR)
* Route summarization
* Incremental updates

Dial-on-Demand Routing
Dedicated WAN circuits, even Frame Relay, may be cost prohibitive for every remote site. DDR offers an efficient, economical alternative for sites that require only occasional WAN connectivity. A DDR WAN link is not a dedicated link that is always on. Instead, the link activates and establishes the WAN connection only when the router detects interesting traffic bound for the WAN. The administrator defines what counts as interesting traffic. As shown in the figure, when a router configured for DDR receives traffic that meets the criteria, the link is activated. DDR is most commonly used with ISDN circuits.

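A hedged DDR sketch for an ISDN BRI follows. The dialer list number, address, phone number, and idle timer are invented for illustration.

! Define all IP traffic as interesting for dialer group 1.
dialer-list 1 protocol ip permit
!
interface bri 0
 ip address 192.168.100.1 255.255.255.0
 encapsulation ppp
 ! Tie this interface to the interesting-traffic definition above.
 dialer-group 1
 ! Number to dial when interesting traffic arrives.
 dialer string 5551234
 ! Drop the call after 120 seconds with no interesting traffic.
 dialer idle-timeout 120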

Route Summarization
The number of entries in a routing table can be reduced if the router uses one network address and mask to represent multiple networks or subnetworks. This technique is called route aggregation or route summarization. Some routing protocols automatically summarize subnet routes based on the major network number. Other routing protocols, such as OSPF and EIGRP, allow manual summarization. Route summarization is presented in Module 2.
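For instance, manual summarization might look like the following with EIGRP and OSPF. The prefixes, autonomous system number, and area are hypothetical.

! EIGRP: advertise a single summary for 172.16.0.0/16 out this interface.
interface serial 0
 ip summary-address eigrp 100 172.16.0.0 255.255.0.0
!
! OSPF: summarize the subnets of area 1 at an area border router.
router ospf 1
 area 1 range 172.16.0.0 255.255.0.0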

Incremental Updates
Routing protocols such as OSPF, IS-IS, and EIGRP send routing updates that contain information only about routes that have changed. These incremental routing updates use the bandwidth more efficiently than simple distance vector protocols that transmit their complete routing table at fixed intervals, whether a change has occurred or not.

1.2.5 Making the network adaptable

An adaptable network will handle the addition and coexistence of multiple routed and routing protocols. EIGRP is an exceptionally adaptable protocol because it supports routing information for three routed protocols:

* IP
* IPX
* AppleTalk

The IOS also supports route redistribution, which is described in Module 8. Route redistribution allows routing information to be shared among two or more different routing protocols. For example, RIP routes could be redistributed, or injected, into an OSPF area.

Mixing Routable and Non-Routable Protocols
A network delivering both routable and non-routable traffic has some unique problems. Routable protocols, such as IP, can be forwarded from one network to another based on a network layer address. Non-routable protocols, such as SNA, do not contain a network layer address and cannot be forwarded by routers. Most non-routable protocols also lack a mechanism for flow control and are sensitive to delays in delivery. Delays that cause packets to arrive out of order can result in the session being dropped. An adaptable network must accommodate both routable and non-routable protocols.

1.2.6 Making the network accessible but secure

Accessible networks let users connect easily over a variety of technologies. Campus LAN users typically connect to routers at the access layer through Ethernet or Token Ring. Remote users and sites may have access to several types of WAN services. Cost and geography play a significant role in determining what type of WAN services an organization can deploy. Therefore, Cisco routers support all major WAN connection types. As shown in the figure, these services include all of the following:

* Circuit-switched networks that use dialup lines
* Dedicated networks that use leased lines
* Packet-switched networks

Circuit-switched networks are dialup, and leased lines are dedicated.

* Dialup and dedicated access – Cisco routers can be directly connected to basic telephone service or digital services such as T1/E1. Dialup links can be used for backup or remote sites that need occasional WAN access, while dedicated leased lines provide a high-speed, high capacity WAN core between key sites.
* Packet-switched – Cisco routers support Frame Relay, X.25, Switched Multi-megabit Data Service (SMDS), and ATM. With this variety of support, the WAN service, or combination of WAN services, to deploy can be determined based on cost, location, and need.

Often, the easier it is for legitimate remote users to access the network, the easier it is for unauthorized users to break in. An access strategy must be carefully planned so that resources, such as remote access routers and servers, remain secure. If a company enables users to telecommute through dialup modems, the network administrator must secure that access. The routers can be secured with access lists. Routers can also be secured with an authentication protocol, such as the Password Authentication Protocol (PAP) or the Challenge Handshake Authentication Protocol (CHAP). These protocols require the user to provide a valid name and password before the router permits access to other network resources.
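A hedged CHAP example follows; the remote router name and password are placeholders, not values from this module.

! The username must match the remote router's hostname, and both sides share the password.
username RemoteRtr password letmein
!
interface bri 0
 encapsulation ppp
 ! Require CHAP authentication before the remote router can bring up the link.
 ppp authentication chap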

1.3.1 The International Travel Agency

The labs in this course reference the fictitious International Travel Agency (ITA), which maintains a global data network. The ITA business scenario provides a tangible, real-world application of the concepts introduced in the labs. Use the diagram of the ITA WAN topology in the figure to become familiar with the company and its network.

1.4.1 Getting started and building Start.txt

1.4.2 Capturing HyperTerminal and Telnet sessions

1.4.3 Access control list basics and extended ping

1.5.1 Equal-cost load balancing with RIP

1.5.2 Unequal-cost load balancing with IGRP

Summary

This module has defined scalability and provided examples of Cisco IOS features that enable successful network expansion. This module also explained the three-layer design model. This conceptual model helps administrators configure routers to meet the specific needs of each layer. Recall that scalable networks also have the following characteristics:

* Reliable and available
* Responsive
* Efficient
* Adaptable
* Accessible but secure

These concepts apply throughout the CCNP 1 curriculum.

by sdominguez.com

