
Monday, September 12, 2011

ATM Internetworking Design

Asynchronous Transfer Mode (ATM) is the first networking architecture developed specifically for supporting multiple services. ATM networks are capable of supporting audio (voice), video, and data simultaneously. ATM is currently architected to support up to 2.5 Gbps of bandwidth. Data networks immediately get a performance enhancement when moving to ATM due to the increased bandwidth over a WAN. Voice networks realize a cost savings due in part to sharing the same network with data and through voice compression, silence compression, repetitive pattern suppression, and dynamic bandwidth allocation. The ATM fixed-size 53-byte cell enables ATM to support the isochronicity of a time-division multiplexed (TDM) private network with the efficiencies of public switched data networks (PSDN).
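As a quick aside on the cell format: each 53-byte cell carries a 5-byte header and a 48-byte payload, so the fixed overhead can be computed directly. A small illustrative sketch (it ignores AAL framing overhead):

```python
# Illustrative arithmetic only: the 53-byte ATM cell is a 5-byte header
# plus a 48-byte payload, so the raw cell-level efficiency follows directly.

CELL_SIZE = 53      # total ATM cell size in bytes
HEADER_SIZE = 5     # cell header in bytes
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE  # 48 bytes of payload

def cells_needed(frame_bytes: int) -> int:
    """Number of 48-byte cell payloads needed to carry a frame (ignoring AAL overhead)."""
    return -(-frame_bytes // PAYLOAD_SIZE)  # ceiling division

efficiency = PAYLOAD_SIZE / CELL_SIZE
print(f"payload efficiency: {efficiency:.1%}")           # ~90.6%
print(f"cells for a 1500-byte frame: {cells_needed(1500)}")  # 32
```

The constant ~9.4% cell overhead is the price paid for the predictable, TDM-like timing that the fixed cell size buys.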

Most network designers are first challenged by the integration of ATM with the data network. Data network integration requires legacy network protocols to traverse a cell-based switched network. ATM can accomplish this in several ways. The first of these is LAN emulation.

LAN Emulation (LANE)

ATM employs a standards-based specification for enabling the installed base of legacy LANs, and the legacy network protocols used on these LANs, to communicate over an ATM network. This standard is known as LAN emulation (LANE). LANE uses the Media Access Control (MAC) sublayer of the OSI data link control Layer 2. Using MAC encapsulation techniques enables ATM to address the majority of Layer 2 and Layer 3 networking protocols. ATM LANE logically extends the appearance of a LAN, thereby providing legacy protocols with performance characteristics equivalent to those found in traditional LAN environments. The accompanying figure illustrates a typical ATM topology with LANE support.

LAN Emulation Client (LEC)

The LEC performs the following functions:

  • Data forwarding
  • Address resolution
  • Registering MAC addresses with the LANE server
  • Communication with other LECs using ATM virtual channel connections (VCCs).

End systems that support the LEC functions are:

  • ATM-attached workstations
  • ATM-attached servers
  • ATM LAN switches (Cisco Catalyst family)
  • ATM attached routers (Cisco 12000, 7500, 7000, 4700, 4500 and 4000 series)

LAN Emulation Configuration Server (LECS)

The ELAN database is maintained by the LAN emulation configuration server (LECS). In addition, the LECS builds and maintains an ATM address database of LAN Emulation Servers (LES). The LECS maps an ELAN name to a LES ATM address. The LECS performs the following LANE functions:

  • Accepts queries from a LEC
  • Responds to LEC query with an ATM address of the LES for the ELAN/VLAN
  • Serves multiple emulated LANs
  • Is manually defined and maintained

The LECS assigns individual clients to an ELAN by directing them to the LES that corresponds to the ELAN.

LAN Emulation Server (LES)

LECs are controlled from a central control point called a LAN Emulation Server (LES). LECs communicate with the LES using a Control Direct Virtual Channel Connection (VCC). The Control Direct VCC is used for forwarding registration and control information. The LES uses a Control Distribute VCC, a point-to-multipoint VCC, enabling the LES to forward control information to all the LECs. The LES services LAN Emulation Address Resolution Protocol (LE_ARP) requests, which it uses to build and maintain a list of LAN destination MAC addresses.
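The LE_ARP exchange amounts to the LES maintaining a table that maps registered MAC addresses to the ATM addresses of the owning LECs. A minimal sketch of that bookkeeping (hypothetical class and method names, not a real LANE implementation):

```python
# Hypothetical sketch of the LES's LE_ARP table: LECs register the MAC
# addresses they represent over the Control Direct VCC, and LE_ARP requests
# resolve a destination MAC to the ATM address of the LEC that registered it.

class LaneServer:
    def __init__(self):
        self.le_arp_table = {}  # MAC address -> ATM address of the owning LEC

    def register(self, mac: str, atm_addr: str) -> None:
        """A LEC registers a MAC it represents (sent over the Control Direct VCC)."""
        self.le_arp_table[mac] = atm_addr

    def le_arp_request(self, mac: str):
        """Resolve a destination MAC; None means unknown (traffic goes via the BUS)."""
        return self.le_arp_table.get(mac)

les = LaneServer()
les.register("00:00:0C:12:34:56", "47.0091.8100.0000.0060.3e5a.0001.00000c123456.01")
print(les.le_arp_request("00:00:0C:12:34:56"))  # resolves to the registered ATM address
print(les.le_arp_request("00:00:0C:FF:FF:FF"))  # unknown -> None
```

An unresolved destination is exactly the case the Broadcast and Unknown Server (discussed next) exists to handle.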

Broadcast and Unknown Server (BUS)

ATM is based on the notion that the network is point-to-point. Therefore, there is no inherent support for broadcast or any-to-any services. LANE provides this type of support over ATM by centralizing broadcast and multicast functions on a Broadcast and Unknown Server (BUS). Each LEC communicates with the BUS using a Multicast Send VCC. The BUS communicates with all LECs using a point-to-multipoint VCC known as the Multicast Forward VCC. The BUS reassembles the cells received on each Multicast Send VCC in sequence to create the complete frame. Once a frame is complete, it is sent to all the LECs on the Multicast Forward VCC. This ensures the proper sequencing of data between LECs.

LANE Design Considerations

The following are guidelines for designing LANE services on Cisco routers:

  • The AIP has a bi-directional limit of 60 thousand packets per second (pps).
  • The ATM interface on a Cisco router has the capability of supporting up to 255 subinterfaces.
  • Only one active LECS can support all the ELANs. Other LECS operate in backup mode.
  • Each ELAN has one LES/BUS pair and one or more LECs.
  • LES and BUS must be defined on the same subinterface of the router AIP.
  • Only one LES/BUS pair per ELAN is permitted.
  • Only one active LES/BUS pair per subinterface is allowed.
  • LANE Phase 1 standard does not provide for LES/BUS redundancy.
  • The LECS can reside on a different router than the LES/BUS pair.
  • VCCs are supported over switched virtual circuits (SVCs) or permanent virtual circuits (PVCs).
  • A subinterface supports only one LEC.
  • Protocols such as AppleTalk, IP, and IPX are routable over a LEC if they are defined on the AIP subinterface.
  • An ELAN should be in only one subnet for IP.

Network Support

The LANE support in Cisco IOS enables legacy LAN protocols to utilize ATM as the transport mechanism for inter-LAN communications. The following features highlight the Cisco IOS support for LANE:

  • Support for Ethernet-emulated LANs only. There is currently no token-ring LAN emulation support.
  • Support for routing between ELANs using IP, IPX or AppleTalk.
  • Support for bridging between ELANs
  • Support for bridging between ELANs and LANs
  • LANE server redundancy support through simple server redundancy protocol (SSRP)
  • IP gateway redundancy support using hot standby routing protocol (HSRP)
  • Support for DECnet, Banyan VINES, and XNS routed protocols

Addressing

LANE requires a MAC address for every client. LANE clients defined on the same interface or subinterface automatically have the same MAC address. This MAC address is used as the end system identifier (ESI) value of the ATM address. Though the MAC address is duplicated, the resulting ATM address representing each LANE client is unique. All ATM addresses must be unique for proper ATM operation. Each LANE services component has an ATM address unique from all other ATM addresses.

LANE ATM Addresses

LANE uses the NSAP ATM address syntax; however, it is not a Layer 3 network address. The address format used by LANE is:

  • A 13-byte prefix that includes the following fields defined by the ATM Forum:
      • AFI (Authority and Format Identifier) field (1 byte)
      • DCC (Data Country Code) or ICD (International Code Designator) field (2 bytes)
      • DFI (Domain Specific Part Format Identifier) field (1 byte)
      • Administrative Authority field (3 bytes)
      • Reserved field (2 bytes)
      • Routing Domain field (2 bytes)
      • Area field (2 bytes)
  • A 6-byte end-system identifier (ESI)
  • A 1-byte selector field
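Putting the pieces together, a complete LANE ATM address is 20 bytes: the 13-byte prefix, the 6-byte ESI, and the 1-byte selector. The following sketch simply assembles and length-checks those components (the prefix and ESI values are made up for illustration):

```python
# Illustrative only: assemble a 20-byte NSAP-format ATM address from its
# three LANE components (13-byte prefix, 6-byte ESI, 1-byte selector).

def make_atm_address(prefix: bytes, esi: bytes, selector: int) -> bytes:
    if len(prefix) != 13:
        raise ValueError("prefix must be 13 bytes (AFI+DCC/ICD+DFI+AA+reserved+RD+area)")
    if len(esi) != 6:
        raise ValueError("ESI must be 6 bytes (typically a MAC address)")
    return prefix + esi + bytes([selector])

prefix = bytes.fromhex("47009181000000603e5a0001ff")  # hypothetical switch prefix
esi = bytes.fromhex("00000c123456")                   # MAC address used as the ESI
addr = make_atm_address(prefix, esi, selector=0x01)

print(len(addr))   # 20 bytes total
print(addr.hex())
```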

Cisco's Method of Automatically Assigning ATM Addresses

The Cisco IOS supports an automated function for defining ATM and MAC addresses. These addresses are used in the LECS database. The automation process uses a pool of eight MAC addresses assigned to each router ATM interface. The Cisco IOS applies the addresses to the LANE components using the following methodology:

  • All LANE components on the router use the same prefix value. The prefix value identifies a switch and must be defined within the switch.
  • The first address in the MAC address pool becomes the ESI field value for every LANE client on the interface.
  • The second address in the MAC address pool becomes the ESI field value for every LANE server on the interface.
  • The third address in the MAC address pool becomes the ESI field value for the LANE broadcast-and-unknown server on the interface.
  • The fourth address in the MAC address pool becomes the ESI field value for the LANE configuration server on the interface.
  • The selector field for the LANE configuration server is set to a value of 0. All other components use the subinterface number of the interface on which they are defined as the selector field value.
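The assignment rules above can be summarized as a small lookup: the pool index selects the ESI, and the selector is 0 for the LECS or the subinterface number for everything else. This is only an illustration of the methodology, not actual IOS behavior (the MAC values are invented):

```python
# Sketch of the address-assignment rules described above (illustrative, not IOS).
# The first four addresses of the interface's MAC pool map to the four LANE
# components; the selector is 0 for the LECS and the subinterface number otherwise.

MAC_POOL_INDEX = {"client": 0, "server": 1, "bus": 2, "config_server": 3}

def esi_for(component: str, mac_pool: list) -> str:
    """ESI value: the component's slot in the interface's MAC address pool."""
    return mac_pool[MAC_POOL_INDEX[component]]

def selector_for(component: str, subinterface: int) -> int:
    """Selector byte: 0 for the LECS, otherwise the subinterface number."""
    return 0 if component == "config_server" else subinterface

mac_pool = ["0000.0c12.3450", "0000.0c12.3451", "0000.0c12.3452", "0000.0c12.3453"]
print(esi_for("bus", mac_pool))          # third address in the pool
print(selector_for("client", 2))         # selector = subinterface number
print(selector_for("config_server", 2))  # selector = 0 for the LECS
```

Because the prefix identifies the switch and the selector varies per subinterface, the duplicated per-component ESIs still yield unique ATM addresses, as the Addressing section noted.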

Understanding NPIV and NPV

Two technologies that seem to have come to the fore recently are NPIV (N_Port ID Virtualization) and NPV (N_Port Virtualization). Judging just by the names, you might think that these two technologies are the same thing. While they are related in some aspects and can be used in a complementary way, they are quite different. What I’d like to do in this post is help explain these two technologies, how they are different, and how they can be used. I hope to follow up in future posts with some hands-on examples of configuring these technologies on various types of equipment.

First, though, I need to cover some basics. This is unnecessary for those of you that are Fibre Channel experts, but for the rest of the world it might be useful:

  • N_Port: An N_Port is an end node port on the Fibre Channel fabric. This could be an HBA (Host Bus Adapter) in a server or a target port on a storage array.
  • F_Port: An F_Port is a port on a Fibre Channel switch that is connected to an N_Port. So, the port into which a server’s HBA or a storage array’s target port is connected is an F_Port.
  • E_Port: An E_Port is a port on a Fibre Channel switch that is connected to another Fibre Channel switch. The connection between two E_Ports forms an Inter-Switch Link (ISL).

There are other types of ports as well—NL_Port, FL_Port, G_Port, TE_Port—but for the purposes of this discussion these three will get us started. With these definitions in mind, I’ll start by discussing N_Port ID Virtualization (NPIV).

N_Port ID Virtualization (NPIV)

Normally, an N_Port would have a single N_Port_ID associated with it; this N_Port_ID is a 24-bit address assigned by the Fibre Channel switch during the FLOGI process. The N_Port_ID is not the same as the World Wide Port Name (WWPN), although there is typically a one-to-one relationship between WWPN and N_Port_ID. Thus, for any given physical N_Port, there would be exactly one WWPN and one N_Port_ID associated with it.

What NPIV does is allow a single physical N_Port to have multiple WWPNs, and therefore multiple N_Port_IDs, associated with it. After the normal FLOGI process, an NPIV-enabled physical N_Port can subsequently issue additional commands to register more WWPNs and receive more N_Port_IDs (one for each WWPN). The Fibre Channel switch must also support NPIV, as the F_Port on the other end of the link would “see” multiple WWPNs and multiple N_Port_IDs coming from the host and must know how to handle this behavior.

Once all the applicable WWPNs have been registered, each of these WWPNs can be used for SAN zoning or LUN presentation. There is no distinction between the physical WWPN and the virtual WWPNs; they all behave in exactly the same fashion and you can use them in exactly the same ways.
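Conceptually, an NPIV-enabled N_Port is just a table of WWPN-to-N_Port_ID logins that grows beyond the initial FLOGI. A toy model (the WWPNs and 24-bit IDs are made-up values, and a real fabric assigns IDs quite differently):

```python
# Hypothetical data model of an NPIV-enabled N_Port: after the initial FLOGI,
# additional WWPNs can be registered and each receives its own N_Port_ID.

class NPort:
    def __init__(self, physical_wwpn: str):
        self.logins = {}          # WWPN -> 24-bit N_Port_ID
        self._next_id = 0x010001  # fabric-assigned IDs (invented values)
        self.flogi(physical_wwpn)

    def flogi(self, wwpn: str) -> int:
        """Fabric login: assign a 24-bit N_Port_ID to this WWPN."""
        nport_id = self._next_id
        self._next_id += 1
        self.logins[wwpn] = nport_id
        return nport_id

port = NPort("50:01:43:80:11:22:33:44")  # physical WWPN
port.flogi("50:01:43:80:aa:bb:cc:01")    # virtual WWPN, e.g. for VM 1
port.flogi("50:01:43:80:aa:bb:cc:02")    # virtual WWPN, e.g. for VM 2

print(len(port.logins))                  # one physical + two virtual = 3
for wwpn, nport_id in port.logins.items():
    print(f"{wwpn} -> {nport_id:06x}")
```

Each entry in that table is independently usable for zoning and LUN presentation, which is the whole point of the feature.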

So why might this functionality be useful? Consider a virtualized environment, where you would like to be able to present a LUN via Fibre Channel to a specific virtual machine only:

  • Without NPIV, it's not possible because the N_Port on the physical host would have only a single WWPN (and N_Port_ID). Any LUNs would have to be zoned and presented to this single WWPN, and because all VMs share that one physical N_Port, WWPN, and N_Port_ID, any LUNs zoned to the WWPN would be visible to all VMs on that host.
  • With NPIV, the physical N_Port can register additional WWPNs (and N_Port_IDs). Each VM can have its own WWPN. When you build SAN zones and present LUNs using the VM-specific WWPN, then the LUNs will only be visible to that VM and not to any other VMs.

Virtualization is not the only use case for NPIV, although it is certainly one of the easiest to understand.

Now that I’ve discussed NPIV, I’d like to turn the discussion to N_Port Virtualization (NPV).

N_Port Virtualization

While NPIV is primarily a host-based solution, NPV is primarily a switch-based technology. It is designed to reduce switch management and overhead in larger SAN deployments. Consider that every Fibre Channel switch in a fabric needs a different domain ID, and that the total number of domain IDs in a fabric is limited. In some cases, this limit can be fairly low depending upon the devices attached to the fabric. The problem, though, is that you often need to add Fibre Channel switches in order to scale the size of your fabric. There is therefore an inherent conflict between trying to reduce the overall number of switches in order to keep the domain ID count low while also needing to add switches in order to have a sufficiently high port count. NPV is intended to help address this problem.

NPV introduces a new type of Fibre Channel port, the NP_Port. The NP_Port connects to an F_Port and acts as a proxy for other N_Ports on the NPV-enabled switch. Essentially, the NP_Port “looks” like an NPIV-enabled host to the F_Port on the other end. An NPV-enabled switch will register additional WWPNs (and receive additional N_Port_IDs) via NPIV on behalf of the N_Ports connected to it. The physical N_Ports don’t have any knowledge this is occurring and don’t need any support for it; it’s all handled by the NPV-enabled switch.

Obviously, this means that the upstream Fibre Channel switch must support NPIV, since the NP_Port “looks” and “acts” like an NPIV-enabled host to the upstream F_Port. Additionally, because the NPV-enabled switch now looks like an end host, it no longer needs a domain ID to participate in the Fibre Channel fabric. Using NPV, you can add switches and ports to your fabric without adding domain IDs.

So why is this functionality useful? There is the immediate benefit of being able to scale your Fibre Channel fabric without having to add domain IDs, yes, but in what sorts of environments might this be particularly useful? Consider a blade server environment, like an HP c7000 chassis, where there are Fibre Channel switches in the back of the chassis. By using NPV on these switches, you can add them to your fabric without having to assign a domain ID to each and every one of them.

Here’s another example. Consider an environment where you are mixing different types of Fibre Channel switches and are concerned about interoperability. As long as there is NPIV support, you can enable NPV on one set of switches. The NPV-enabled switches will then act like NPIV-enabled hosts, and you won’t have to worry about connecting E_Ports and creating ISLs between different brands of Fibre Channel switches.

I hope you’ve found this explanation of NPIV and NPV helpful and accurate. In the future, I hope to follow up with some additional posts—including diagrams—that show how these can be used in action. Until then, feel free to post any questions, thoughts, or corrections in the comments below. Your feedback is always welcome!

Security: Operating an Incident Response Team

After an IRT is established, your next concern is how to successfully operate your team. This chapter covers the following topics to help you improve the operation of your IRT:

  • Team size and working hours
  • New team member profile
  • Advertising the team's existence
  • Acknowledging incoming messages
  • Cooperation with internal groups
  • Preparing for incidents
  • Measuring success

Team Size and Working Hours

One of the more common questions that organizations setting up their incident response team ask is, "How large should an IRT be?" Provided that budget is not a constraint, the team's size is a function of the services the IRT wants to provide, the size and distribution of the constituency, and the planned working hours. (That is, will the IRT operate only during office hours or around the clock?) In practice, if you are starting from scratch and the IRT's task is defined as "go and deal with the incidents," a small team should be sufficient for the start. The first 12 to 18 months of the IRT's operation will show whether you need more people on the team.

Many teams, after their formation, operate only during office hours. (For example, if you are in western Europe, that would be Monday through Friday from 09:00 to 17:00.) For this kind of coverage, a two-person team should suffice. Although office-hours coverage is fine for a start, the IRT should look into extending its working hours to be active around the clock.

The main reason for extending the working hours is that some services (for example, a public website) are available at all times. If someone compromises the computers providing these services, the IRT must be able to respond swiftly, not two days later once the weekend is over. Miscreants do not keep standard office hours, so neither can the IRT.

One of the standard ways to extend working hours is to have someone on call. This person can answer telephone calls and check incoming emails after hours and over the weekend. This setup can be augmented by cooperating with other teams. If, for example, IT has someone on site outside office hours, the IT person might be the one who accepts telephone calls, monitors emails, and alerts the IRT only when needed.

From a technical perspective, it is easy to have someone on call. It is not necessary to have someone in the office, because modern smart mobile telephones can receive and send emails, surf the Internet, and can even be used to talk. Smart telephones generally cannot do encryption, so you would need to devise a way to decrypt and encrypt messages. From a staffing perspective, if you want around-the-clock and weekend coverage, the number of people in the IRT depends on whether the duties can be shared with other teams in the organization. If the duties can be shared, you might not need to increase the size of the IRT. If not, increasing the team size should be considered. A three-member team might be a good size, given that one person might be on vacation and another might be sick, which would leave only one active person. Although two people can also provide around-the-clock coverage, it would be a stretch and might burn them out if they operated that way for a prolonged period of time.

If the host organization is within the EU, it must pay attention to the European Working Time Directive (Council Directive 93/104/EC and subsequent amendments), which regulates that the working week must not be longer than 48 hours, which also includes overtime. On the other hand, people might opt-out from the directive and work as long as required. The host’s human resources department must investigate this and set up proper guidelines.

Irrespective of what hours the IRT operates, that fact must be clearly stated and communicated to other teams and the constituency. Do not bury it somewhere deep in the documentation but state it prominently close to the place containing the team’s contact details. Setting the right expectations is important.

When the IRT operates only during office working hours, the team must not forget that it is living in a round and multicultural world. Living in a round world means that the team must state its time zone. Do not assume that people will automatically know in which time zone the team operates based just on the city and the country. It is possible that the constituency, or larger part of it, is actually situated in a different time zone from the one in which the IRT physically operates.

A multicultural world means that people in one country have different customs from people in other countries. We do not necessarily have weekends or holidays on the same days. Take an example of an IRT that operates from Israel and a large part of its constituency is in Europe. Will it operate on Saturdays? What are its office hours? Will it work on December 25th? The people who report an incident to the team might not know these details in advance. It might be the first time they are reporting something to the team, and they do not know what to expect. The point is that all the information related to your working hours must be visibly and clearly stated on your team’s website.

Digression on Date and Time

While on the topic of a multicultural world, we must mention date and time formats. You must always use an unambiguous format for the date and time. To that end, the IRT is strongly recommended to adopt ISO 8601. In short, according to ISO 8601, a date should be written in YYYY-MM-DD format and a time in hh:mm:ss format. The ISO format is suitable when the data is automatically processed. Because not all people are familiar with the ISO 8601 standard, it is highly recommended to use the month's name (for example, October or Oct) instead of its number in all correspondence. That way, you can eliminate any possible ambiguity about a date. When sending data that is the result of some automated process, or that will be processed automatically, you should also add a note that all dates are in the ISO format so that recipients know how to interpret them.

As far as the time is concerned, do not forget to include the time zone. This is especially important if recipients are in different time zones. You can either use the time zone’s name (for example, “GMT” or “Greenwich Mean Time”) or the offset from the GMT (for example, GMT + 0530—GMT plus 5 hours and 30 minutes). The preference should be to include the offset rather than the time zone’s name because the names can be ambiguous. For example, EST can mean either Eastern Summer Time or Eastern Standard Time. Eastern Summer Time is used in Australia during the summer, and its offset from GMT is 11 hours (GMT + 1100). On the other hand, Eastern Standard Time can be in either Australia or North America. The Eastern Standard Time in Australia is used during the winter and is 10 hours ahead of GMT (GMT + 1000), whereas the same Eastern Standard Time in North America has an offset of –5 hours from GMT (GMT – 0500).
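Both recommendations are easy to follow with standard library support; for example, Python's datetime can emit an ISO 8601 timestamp with an explicit UTC offset, or a human-friendly form using the month's name:

```python
# Formatting timestamps unambiguously, per the advice above: ISO 8601 with an
# explicit offset for machine processing, and a month name for human readers.

from datetime import datetime, timezone, timedelta

ist = timezone(timedelta(hours=5, minutes=30))  # a GMT+05:30 zone, e.g. India
t = datetime(2011, 10, 3, 14, 45, 0, tzinfo=ist)

print(t.isoformat())                         # 2011-10-03T14:45:00+05:30
print(t.strftime("%d %b %Y %H:%M:%S %z"))    # 03 Oct 2011 14:45:00 +0530
```

Note that both forms carry the offset rather than a zone name, avoiding exactly the EST-style ambiguity described above.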


Wireless Application Protocol

WAP stands for Wireless Application Protocol, a secure specification that allows users to access information instantly via handheld wireless devices such as mobile phones, pagers, two-way radios, smartphones, and communicators. The idea comes from the wireless industry, from companies such as Phone.com, Nokia, and Ericsson. The point of this standard is to serve Internet content and Internet services to wireless clients (WAP devices), such as mobile phones and terminals. The authoritative source for WAP is http://www.wapforum.org/.


WAP supports most wireless networks. These include CDPD, CDMA, GSM, PDC, PHS, TDMA, FLEX, ReFLEX, iDEN, TETRA, DECT, DataTAC, and Mobitex.

WAP is supported by all operating systems. Ones specifically engineered for handheld devices include Palm OS, EPOC, Windows CE, FLEXOS, OS/9, and Java OS.

WAP devices that have displays and access the Internet run what are called microbrowsers--browsers with small file sizes that can accommodate the low memory constraints of handheld devices and the low-bandwidth constraints of a wireless handheld network.

Although WAP supports HTML and XML, the WML language (an XML application) is specifically devised for small screens and one-hand navigation without a keyboard. WML is scalable from two-line text displays up through graphic screens found on items such as smart phones and communicators. WAP also supports WMLScript. It is similar to JavaScript, but makes minimal demands on memory and CPU power because it does not contain many of the unnecessary functions found in other scripting languages.

Because WAP is fairly new, it is not yet a formal standard. It is still an initiative, started by Unwired Planet, Motorola, Nokia, and Ericsson.

People on the move need services, information and entertainment that can keep up with them. With access to mobile services, decisions and interactions happen here and now. The value of mobile services to end-users is boosted by three separate elements: personalization, time-sensitivity and location awareness. Combining these three effectively adds even more value.

Wireless Application Protocol (WAP) is a protocol that has successfully established a de facto standard for the way in which wireless technology is used for Internet access. WAP technology has been optimized for information delivery to thin-client devices, such as mobile phones.

WAP Server

"WAP server" is a frequently misused term. A WAP server by itself is really nothing more than an HTTP server, i.e., a web server. To confuse everyone, Nokia has a product that they call a WAP server, which is a WAP gateway and an HTTP server all in one; i.e., it is actually a content-providing server and a gateway. The gateway takes care of the gateway functions, and the web server provides the content.

WAP Gateway

A WAP gateway is a two-way device (as any gateway is). Looking at it from the WAP device's side: since a WAP device can only understand WML in its tokenized/compiled/binary format, the function of the WAP gateway is to convert content into this format. Looking at it from the HTTP server's side: the WAP gateway can provide additional information about the WAP device through the HTTP headers, for instance the subscriber number of a WAP-capable cellular phone, its cell ID, and even things like location information (whenever that becomes available).

WAE Wireless Application Environment

The Wireless Application Environment specifies a general-purpose application environment based fundamentally on World Wide Web technologies and philosophies. WAE specifies an environment that allows operators and service providers to build applications and services that can reach a wide variety of different platforms. WAE is part of the Wireless Application Protocol.

WSP Wireless Session Protocol

The Wireless Session Protocol provides the upper-level application layer of WAP with a consistent interface for two session services. The first is connection-mode service that operates above a transaction layer protocol, and the second is a connectionless service that operates above a secure or non-secure datagram transport service.

How long will WAP last?

First of all, let me remind you that these are my personal views, and the bottom line is that it is the consumer who holds the fate of WAP in his hands. Good technology has been wasted before just because the market chose something else to support. Take the VHS, Beta, and Video2000 home video standards of some years back. Technically speaking, Video2000 offered the best quality, but the market chose VHS, which is probably the worst of the three.

Anyway, on to the future of WAP. Unfortunately WAP is currently being marketed as "the internet on your phone". I'm sure that most WAP devices will be mobile phones, but WAP is not in any way limited to phones. Further, anyone who has worked with WAP knows that it's wrong to say that WAP is a "web" browser as such.

WAP can on the other hand offer services and applications similar to the ones you find on the Internet in a very thin client environment. Thin meaning virtually no processor power, very limited display rendering capabilities and so on. How well these applications work are up to the developers. It's true that WAP currently limits the developers in many ways, but the technology is new, and there are ways around almost every obstacle.

Many see the death of WAP when they are shown handheld micro PCs and PDAs, arguing that the limited display size and lack of a proper keyboard will mean the end of WAP. Personally, I think this is wrong. First, there's the number of devices you'd end up with: most people would need to carry both their mobile phone and their micro PC/PDA. My opinion is that the consumer will think: the more I can do with just one device, the better. Then there's the question of cost. Two devices cost more than one. The majority of WAP users should be normal people, and they'll want to spend as little as possible.

Manufacturers have tried, and will keep trying, to solve these problems by combining the PC and the mobile phone. The problem then becomes size. For a device like this to be usable by a human, there are certain restrictions. First, the input interface. Currently the best input interface available is the common QWERTY keyboard. For this keyboard to work, the keys and the space between them must have a minimum size, or only very small children will be able to use it. Second, the output interface. The human eye, I would guess, is best suited to a display down to about five inches. Anything smaller than this and you'll need to move the device closer to your eyes. A display like this will make a handheld device very large and impossible to put in a normal-sized pocket. The typical mobile phone display is about two inches. If you wanted to present a normal 640 by 480 image on a two-inch display, you'd have to have the display surgically attached to your cornea. I doubt this would sell.

The typical combination of PDA and mobile phone today is something like the Nokia Communicator. The drawback is that you cannot comfortably use the device unless you have one hand free to hold it or the device is firmly seated. A normal mobile phone can be operated with just one hand, both holding and "typing". Some argue that it is impossible to type using the numeric keypad of a mobile phone. It's true that it's more complicated than using a normal keyboard, but then again, you're not meant to be writing an essay on a WAP device. And the billions of SMS messages sent from mobile phones every day at least prove that it's not impossible.

The bottom line is that WAP is not "the web" on your mobile phone, and that WAP should have all the prospects of a long life as long as developers understand that it's what's inside the applications that matter, and not necessarily how it is packaged.

Improving Performance Over Wireless Networks

TCP is a common transport protocol used in almost all Internet applications. With the advent of PDAs and many wireless data applications, TCP has become a major transport protocol for wireless as well. Because of the unreliability of the wireless channel, much work has to be done to make data transfer reliable, and this reliability is built into TCP. But the other major challenge is the speed of the data, so there is a compromise between speed and reliability. Much research has been done in this area, such as changing the semantics of TCP and adding extra protocols at the data link layer.

The main problem with TCP is that it falsely interprets packet loss as congestion. The TCP sender detects a packet loss when a timeout occurs or duplicate acknowledgements arrive.

TCP cannot recover from a loss without timing out unless:
1) The connection has a large number of outstanding packets
2) Enough ACKs flow back from the receiver

There is a situation where a packet cannot be recovered by a fast retransmission, because the reduced window size does not produce enough outstanding packets to return duplicate ACKs.
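The trigger condition can be sketched as a simple duplicate-ACK counter: fast retransmit fires only after three duplicate ACKs, and a small window may never generate that many. An illustrative sketch (not a real TCP implementation):

```python
# Sketch of the fast-retransmit trigger: the sender retransmits without waiting
# for a timeout only after three duplicate ACKs. With a small window there may
# simply not be enough packets in flight to generate three duplicates.

DUP_ACK_THRESHOLD = 3

def fast_retransmit_fires(acks: list) -> bool:
    """Return True if any ACK number repeats enough times to trigger fast retransmit."""
    dup_count = 0
    last_ack = None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count >= DUP_ACK_THRESHOLD:
                return True
        else:
            last_ack, dup_count = ack, 0
    return False

# Packet 2 lost with a large window: later packets each elicit a duplicate ACK for 2.
print(fast_retransmit_fires([1, 2, 2, 2, 2, 2]))  # True
# Packet 2 lost with a small window: too few packets in flight after the loss.
print(fast_retransmit_fires([1, 2, 2]))           # False
```

The second case is exactly the scenario described above: the sender is left waiting for a timeout.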


One of the common methods of improving performance over wireless operates at the physical layer using forward error correction (FEC). This method has many disadvantages, as it does not solve the problem entirely. The solutions to the problem fall into three categories: link layer, split connection, and proxy. A protocol called AIRMAIL makes the link layer reliable using retransmissions in addition to FEC: an entire window is sent by the base station, and the advantage of doing this is that we need not bother with acknowledgements for each packet. Unfortunately, the main issue ignored here is the worst-case scenario: if the error rate is high, we have no idea of the errors until the end of the window.

In the case of split connection, the TCP connection is split between the source and the base station and between the base station and the receiver. This approach fails on the basic principle that it violates TCP's end-to-end semantics.

In the proxy approach, a proxy is inserted between the sender and the receiver. The Snoop protocol uses this approach. Its main disadvantages are that it takes into account only cumulative acknowledgments and that it makes inappropriate assumptions about the pattern of losses.

The throughput of a TCP connection is the usual measure of its performance. Maximum throughput occurs when the TCP congestion window equals the bandwidth-delay product of the link; at that point we are using the full capacity of the link. To achieve high throughput we need to either:

  • Use mechanisms that avoid loss, or
  • Use mechanisms that recover quickly when loss occurs.
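The relationship between window size and link capacity can be made concrete with a short sketch; the bandwidth and round-trip-time figures below are hypothetical.

```python
def bandwidth_delay_product(bandwidth_bps, rtt_s):
    """Bytes 'in flight' needed to keep the link full."""
    return int(bandwidth_bps * rtt_s / 8)

# Hypothetical wireless link: 2 Mbps with a 100 ms round-trip time.
bdp = bandwidth_delay_product(2_000_000, 0.100)
print(bdp)  # 25000 bytes: the congestion window that saturates this link
```

A congestion window smaller than this value leaves the link idle part of the time; a larger one only adds queueing delay.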

Problems in Wireless Links

On wireless networks, most packet losses are due to poor link quality and intermittent connectivity. The random characteristics of the channel make it difficult to predict end-to-end data rates, delay statistics, and packet loss probabilities.

The Link Layer

Link layer protocols operate independently of the higher layer protocols. TCP implementations have large retransmission timeouts, in multiples of 500 ms, whereas link layer retransmission timers are on the order of 200 ms. Earlier link layer protocols did not attempt in-order delivery across the link and caused packets to arrive out of order at the receiver. There is currently no single standard for link layer protocols; most use stop-and-wait, go-back-N, selective repeat, or forward error correction to provide reliability. Various research simulations have shown that correcting errors at the link layer increases overall performance. However, link layer retransmissions do not always improve performance, because TCP performs poorly when local retransmissions reorder packets. A link layer solution must therefore ensure in-order delivery of packets.
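Why out-of-order link layer delivery hurts TCP can be illustrated with a toy cumulative-ACK receiver; the segment numbering here is hypothetical.

```python
def duplicate_acks(arrival_order):
    """Count the duplicate ACKs a cumulative-ACK TCP receiver produces
    when the link layer delivers segments in the given order."""
    expected, dups = 0, 0
    buffered = set()
    for seg in arrival_order:
        if seg == expected:
            expected += 1
            while expected in buffered:  # drain segments buffered out of order
                buffered.discard(expected)
                expected += 1
        else:
            buffered.add(seg)
            dups += 1  # receiver re-ACKs the last in-order segment
    return dups

print(duplicate_acks([0, 1, 2, 3]))     # 0: in-order delivery, no duplicates
print(duplicate_acks([0, 2, 3, 4, 1]))  # 3: enough to trigger fast retransmit
```

In the second case nothing was actually lost, yet three duplicate ACKs arrive at the sender, which spuriously halves its congestion window.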

TULIP (Transport Unaware Link Improvement Protocol)

  • It maintains local recovery of all lost packets, preventing unnecessary and delayed retransmissions and the consequent reduction of TCP's congestion window, with no modification to network or transport layer software.
  • It is an efficient link layer protocol that takes advantage of opposing flows by piggybacking link layer ACKs on transport layer ACKs, achieving throughput up to three times higher than unmodified TCP.
  • TULIP does not depend on TCP state information (TCP headers, etc.), so it can adapt to different versions of TCP. Because it piggybacks TCP ACKs with link layer ACKs, it needs no extra bandwidth.

The solution is to provide a link layer below TCP that can handle the losses and recover before TCP notices them (that is, before a TCP timeout). It attempts to hide wireless losses by using local retransmissions and forward error correction over the wireless link, making the most of TCP's long timeout. TULIP uses a selective repeat retransmission strategy together with a packet interleaving strategy. TULIP lies between the MAC layer and the TCP/IP layers. It maintains no TCP state information and makes decisions not on a per-TCP-session basis but solely on a per-destination basis, which greatly reduces its overhead when multiple TCP sessions are active for a particular destination.

TULIP passes one packet at a time to the MAC layer and uses two additional signals, TRANS and WAIT. TULIP needs the MAC layer to inform it when transmission of the packet passed to it has started; the MAC layer signals this with TRANS. After receiving the TRANS signal, TULIP starts a timer and waits t1 seconds before sending the next packet. Because either side may have variable-length data to send, and because such packets are longer than link layer ACKs, the MAC layer must be able to inform TULIP (via WAIT) that it should wait longer than t1. This packet interleaving procedure allows the two sources to keep clocking each other during the transfer of bidirectional data.

The basic principle behind TULIP is MAC-level acceleration, a mechanism for reducing link delay. TULIP includes a MAC acceleration feature and uses the three-way handshake implemented by the FAMA-NCS protocol. In this handshake the sender uses non-persistent carrier sensing to transmit an RTS (request to send) to the receiver, and the receiver sends back a CTS (clear to send) that lasts longer than an RTS. The CTS is a tone that forces all other nodes to back off long enough to allow the data packet to arrive collision-free at the receiver. MAC acceleration works as follows:
1) The sender transmits a TULIP packet (containing data) after an RTS-CTS handshake; the receiver immediately sends a TULIP ACK back to the sender.
2) If the receiver wants to send back a data packet of 40 bytes or less when it has received a TULIP packet, that packet is piggybacked on the TULIP ACK and sent to the sender, with no RTS-CTS handshake for the data packet.
3) Only if the receiver's data packet is larger than 40 bytes is there an RTS-CTS handshake for that packet.
TULIP uses a cumulative ACK feature. Whenever a packet fails to arrive from the sender, the receiver sends back an ACK with a bit vector indicating that the corresponding packet has not been received. The receiver does not stop accepting subsequent packets from the sender; it receives them and stores them in a buffer, prepares a retransmission list of the missing packets, and forwards an ACK carrying that information. Only when the receiver has received the missing packets, and everything is in the correct order, does it pass them together to the next higher layer.
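The cumulative-plus-bit-vector ACK idea can be sketched as follows; the field layout and helper name are illustrative, not the actual TULIP frame format.

```python
def build_ack(received, window_base, window_size):
    """Build a TULIP-style ACK: a cumulative sequence number plus a
    list (conceptually a bit vector) of later packets still missing
    from the current window."""
    seq = window_base
    while seq in received:      # advance past the in-order prefix
        seq += 1
    missing = [s for s in range(seq, window_base + window_size)
               if s not in received]
    return seq, missing

received = {0, 1, 3, 5}          # packets 2 and 4 were lost on the link
ack, retransmit = build_ack(received, 0, 6)
print(ack, retransmit)  # 2 [2, 4]
```

The sender can retransmit packets 2 and 4 immediately from this single ACK, instead of discovering each hole one timeout at a time.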

Performance

The properties of wireless channels are entirely different from those of wired channels. Wireless channels have high bit-error rates (BER), and they can cause burst errors, especially when the channel is in a deep fade for a significant amount of time. The various strategies that have been proposed fall into three categories:
1) End-to-end
2) Split connection
3) Link layer
An end-to-end scheme must handle all kinds of losses by itself. An optimal end-to-end scheme can employ the following strategies:

  • The optimal error-detection scheme depends on the type of network. Analyzed from the sender's point of view, this method can be used if the accuracy of the detection scheme can be sacrificed in exchange for minimal changes at the intermediate nodes.
  • It is better to employ a selective acknowledgment scheme, because it allows the TCP sender to recover more efficiently from multiple packet drops in a given window.

Throughput

The bit error rates in the simulations vary from 0 to 15 errors per million bits. The receiver window size is 42 Kbytes.

Upon receiving the first ACK, the TULIP sender builds a retransmission list and knows exactly which packets are missing, because this information is sent back by the receiver. It retransmits those packets as soon as it receives the ACK, so errors further down the window are recovered before the first error. Other protocols must rely heavily on timers and cumulative ACKs and get stuck trying to retransmit packets one at a time in a series of losses. End-to-end delays are drastically reduced with TULIP.

Developing Network Security Strategies

Developing security strategies that can protect all parts of a complicated network while having a limited effect on ease of use and performance is one of the most important and difficult tasks related to network design. Security design is challenged by the complexity and porous nature of modern networks that include public servers for electronic commerce, extranet connections for business partners, and remote-access services for users reaching the network from home, customer sites, hotel rooms, Internet cafes, and so on. To help you handle the difficulties inherent in designing network security for complex networks, this chapter teaches a systematic, top-down approach that focuses on planning and policy development before the selection of security products.

The goal of this chapter is to help you work with your network design customers in the development of effective security strategies, and to help you select the right techniques to implement the strategies. The chapter describes the steps for developing a security strategy and covers some basic security principles. The chapter presents a modular approach to security design that will let you apply layered solutions that protect a network in many ways. The final sections describe methods for securing the components of a typical enterprise network that are most at risk, including Internet connections, remote-access networks, network and user services, and wireless networks.

Security should be considered during many steps of the top-down network design process. This isn't the only chapter that covers security. Chapter 2, "Analyzing Technical Goals and Tradeoffs," discussed identifying network assets, analyzing security risks, and developing security requirements. Chapter 5, "Designing a Network Topology," covered secure network topologies. This chapter focuses on security strategies and mechanisms.

Network Security Design

Following a structured set of steps when developing and implementing network security will help you address the varied concerns that play a part in security design. Many security strategies have been developed in a haphazard way and have failed to actually secure assets and to meet a customer's primary goals for security. Breaking down the process of security design into the following steps will help you effectively plan and execute a security strategy:

  1. Identify network assets.
  2. Analyze security risks.
  3. Analyze security requirements and tradeoffs.
  4. Develop a security plan.
  5. Define a security policy.
  6. Develop procedures for applying security policies.
  7. Develop a technical implementation strategy.
  8. Achieve buy-in from users, managers, and technical staff.
  9. Train users, managers, and technical staff.
  10. Implement the technical strategy and security procedures.
  11. Test the security and update it if any problems are found.
  12. Maintain security.

Chapter 2 covered steps 1 through 3 in detail. This chapter quickly revisits steps 1 through 3 and also addresses steps 4, 5, 6, and 12. Steps 7 through 10 are outside the scope of this book. Chapter 12, "Testing Your Network Design," addresses Step 11.

Identifying Network Assets

Chapter 2 discussed gathering information on a customer's goals for network security. As discussed in Chapter 2, analyzing goals involves identifying network assets and the risk that those assets could be sabotaged or inappropriately accessed. It also involves analyzing the consequences of risks.

Network assets can include network hosts (including the hosts' operating systems, applications, and data), internetworking devices (such as routers and switches), and network data that traverses the network. Less obvious, but still important, assets include intellectual property, trade secrets, and a company's reputation.

Analyzing Security Risks

Risks can range from hostile intruders to untrained users who download Internet applications that have viruses. Hostile intruders can steal data, change data, and cause service to be denied to legitimate users. Denial-of-service (DoS) attacks have become increasingly common in the past few years. See Chapter 2 for more details on risk analysis.

Analyzing Security Requirements and Tradeoffs

Chapter 2 covers security requirements analysis in more detail. Although many customers have more specific goals, in general, security requirements boil down to the need to protect the following assets:

  • The confidentiality of data, so that only authorized users can view sensitive information
  • The integrity of data, so that only authorized users can change sensitive information
  • System and data availability, so that users have uninterrupted access to important computing resources

According to RFC 2196, "Site Security Handbook:"

  • One old truism in security is that the cost of protecting yourself against a threat should be less than the cost of recovering if the threat were to strike you. Cost in this context should be remembered to include losses expressed in real currency, reputation, trustworthiness, and other less obvious measures.

As is the case with most technical design requirements, achieving security goals means making tradeoffs. Tradeoffs must be made between security goals and goals for affordability, usability, performance, and availability. Also, security adds to the amount of management work because user login IDs, passwords, and audit logs must be maintained.

Security also affects network performance. Security features such as packet filters and data encryption consume CPU power and memory on hosts, routers, and servers. Encryption can use upward of 15 percent of available CPU power on a router or server. Encryption can be implemented on dedicated appliances instead of on shared routers or servers, but there is still an effect on network performance because of the delay that packets experience while they are being encrypted or decrypted.

Another tradeoff is that security can reduce network redundancy. If all traffic must go through an encryption device, for example, the device becomes a single point of failure. This makes it hard to meet availability goals.

Security can also make it harder to offer load balancing. Some security mechanisms require traffic to always take the same path so that security mechanisms can be applied uniformly. For example, a mechanism that randomizes TCP sequence numbers (so that hackers can't guess the numbers) won't work if some TCP segments for a session take a path that bypasses the randomizing function due to load balancing.

Developing a Security Plan

One of the first steps in security design is developing a security plan. A security plan is a high-level document that proposes what an organization is going to do to meet security requirements. The plan specifies the time, people, and other resources that will be required to develop a security policy and achieve technical implementation of the policy. As the network designer, you can help your customer develop a plan that is practical and pertinent. The plan should be based on the customer's goals and the analysis of network assets and risks.

A security plan should reference the network topology and include a list of network services that will be provided (for example, FTP, web, email, and so on). This list should specify who provides the services, who has access to the services, how access is provided, and who administers the services.

As the network designer, you can help the customer evaluate which services are definitely needed, based on the customer's business and technical goals. Sometimes new services are added unnecessarily, simply because they are the latest trend. Adding services might require new packet filters on routers and firewalls to protect the services, or additional user-authentication processes to limit access to the services, adding complexity to the security strategy. Overly complex security strategies should be avoided because they can be self-defeating. Complicated security strategies are hard to implement correctly without introducing unexpected security holes.

One of the most important aspects of the security plan is a specification of the people who must be involved in implementing network security:

  • Will specialized security administrators be hired?
  • How will end users and their managers get involved?
  • How will end users, managers, and technical staff be trained on security policies and procedures?

For a security plan to be useful, it needs to have the support of all levels of employees within the organization. It is especially important that corporate management fully support the security plan. Technical staff at headquarters and remote sites should buy into the plan, as should end users.

Developing a Security Policy

According to RFC 2196, "Site Security Handbook:"

  • A security policy is a formal statement of the rules by which people who are given access to an organization's technology and information assets must abide.

A security policy informs users, managers, and technical staff of their obligations for protecting technology and information assets. The policy should specify the mechanisms by which these obligations can be met. As was the case with the security plan, the security policy should have buy-in from employees, managers, executives, and technical personnel.

Developing a security policy is the job of senior management, with help from security and network administrators. The administrators get input from managers, users, network designers and engineers, and possibly legal counsel. As a network designer, you should work closely with the security administrators to understand how policies might affect the network design.

After a security policy has been developed, with the engagement of users, staff, and management, it should be explained to all by top management. Many enterprises require personnel to sign a statement indicating that they have read, understood, and agreed to abide by a policy.

A security policy is a living document. Because organizations constantly change, security policies should be regularly updated to reflect new business directions and technological shifts. Risks change over time also and affect the security policy.

Components of a Security Policy

In general, a policy should include at least the following items:

  • An access policy that defines access rights and privileges. The access policy should provide guidelines for connecting external networks, connecting devices to a network, and adding new software to systems. An access policy might also address how data is categorized (for example, confidential, internal, and top secret).
  • An accountability policy that defines the responsibilities of users, operations staff, and management. The accountability policy should specify an audit capability and provide incident-handling guidelines that specify what to do and whom to contact if a possible intrusion is detected.
  • An authentication policy that establishes trust through an effective password policy and sets up guidelines for remote-location authentication.
  • A privacy policy that defines reasonable expectations of privacy regarding the monitoring of electronic mail, logging of keystrokes, and access to users' files.
  • Computer-technology purchasing guidelines that specify the requirements for acquiring, configuring, and auditing computer systems and networks for compliance with the policy.

Developing Security Procedures

Security procedures implement security policies. Procedures define configuration, login, audit, and maintenance processes. Security procedures should be written for end users, network administrators, and security administrators. Security procedures should specify how to handle incidents (that is, what to do and who to contact if an intrusion is detected). Security procedures can be communicated to users and administrators in instructor-led and self-paced training classes.

7 Layers of the OSI Model

OSI, short for Open Systems Interconnection, is an ISO standard for worldwide communications that defines a networking framework for implementing protocols in seven layers. ISO is short for the International Organization for Standardization. Founded in 1946, ISO is an international organization composed of national standards bodies from over 75 countries; for example, ANSI (the American National Standards Institute) is a member of ISO. ISO has defined a number of important computer standards, the most significant of which is perhaps OSI, a standardized architecture for designing networks.


In the OSI model, control is passed from one layer to the next, starting at the application layer in one station, proceeding down to the bottom layer, over the channel to the next station, and back up the hierarchy.

In communications the term channel refers to a communications path between two computers or devices. It can refer to the physical medium (the wires) or to a set of properties that distinguishes one channel from another. For example, TV channels refer to particular frequencies at which radio waves are transmitted. IRC channels refer to specific discussions.

Most of the functionality in the OSI model exists in all communications systems, although two or three OSI layers may be incorporated into one.

OSI is also referred to as the OSI Reference Model or just the OSI Model.

Physical Layer

The physical layer is concerned with transmitting raw bits over a communication channel. The design issues have to do with making sure that when one side sends a 1 bit, the other side receives it as a 1 bit, not as a 0 bit. Typical questions here are how many volts should be used to represent a 1 and how many for a 0, how many microseconds a bit lasts, whether transmission may proceed simultaneously in both directions, how the initial connection is established and how it is torn down when both sides are finished, and how many pins the network connector has and what each pin is used for. The design issues here deal largely with mechanical, electrical, and procedural interfaces, and with the physical transmission medium, which lies below the physical layer. Physical layer design can properly be considered to be within the domain of the electrical engineer.

Data Link Layer

The main task of the data link layer is to take a raw transmission facility and transform it into a line that appears free of transmission errors to the network layer. It accomplishes this task by having the sender break the input data up into data frames (typically a few hundred bytes), transmit the frames sequentially, and process the acknowledgment frames sent back by the receiver. Since the physical layer merely accepts and transmits a stream of bits without any regard to meaning or structure, it is up to the data link layer to create and recognize frame boundaries. This can be accomplished by attaching special bit patterns to the beginning and end of the frame. If there is a chance that these bit patterns might occur in the data, special care must be taken to avoid confusion. The data link layer should provide error control between adjacent nodes.

A noise burst on the line can destroy a frame completely. In this case, the data link layer software on the source machine must retransmit the frame. However, multiple transmissions of the same frame introduce the possibility of duplicate frames. A duplicate frame could be sent, for example, if the acknowledgment frame from the receiver back to the sender was destroyed. It is up to this layer to solve the problems caused by damaged, lost, and duplicate frames. The data link layer may offer several different service classes to the network layer, each of a different quality and with a different price.

Another issue that arises in the data link layer (and most of the higher layers as well) is how to keep a fast transmitter from drowning a slow receiver in data. Some traffic regulation mechanism must be employed in order to let the transmitter know how much buffer space the receiver has at the moment. Frequently, flow regulation and error handling are integrated, for convenience.

If the line can be used to transmit data in both directions, this introduces a new complication that the data link layer software must deal with. The problem is that the acknowledgment frames for A-to-B traffic compete for the use of the line with the data frames for B-to-A traffic. A clever solution, piggybacking, has been devised.

In most practical situations, there is a need for transmitting data in both directions. One way of achieving full-duplex data transmission would be to have two separate communication channels, and use each one for simplex data traffic (in different directions). If this were done, we would have two separate physical circuits, each with a "forward" channel (for data) and a "reverse" channel (for acknowledgment). In both cases the bandwidth of the reverse channel would be almost entirely wasted. In effect, the user would be paying the cost of two circuits but only using the capacity of one.

A better idea is to use the same circuit for data in both directions. In this model the data frames from A to B are intermixed with the acknowledgment frames from B to A. By looking at the "kind" field in the header of an incoming frame, the receiver can tell whether the frame is data or acknowledgment.

Although interweaving data and control frames on the same circuit is an improvement over having two separate physical circuits, yet another improvement is possible. When a data frame arrives, instead of immediately sending a separate control frame, the receiver restrains itself and waits until the network layer passes it the next packet. The acknowledgment is attached to the outgoing data frame. In effect, the acknowledgment gets a free ride on the next outgoing data frame. The technique of temporarily delaying outgoing acknowledgments so that they can be hooked onto the next outgoing data frame is widely known as piggybacking.
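Piggybacking can be sketched as follows; the class name and frame fields are illustrative, not a real link layer implementation.

```python
class PiggybackEndpoint:
    """Toy endpoint that delays an acknowledgment so it can ride on
    the next outgoing data frame instead of a standalone ACK frame."""
    def __init__(self):
        self.pending_ack = None

    def on_frame_received(self, seq):
        self.pending_ack = seq          # hold the ACK instead of sending it now

    def send_data(self, payload):
        """Reverse-direction data is ready: attach the held ACK to it."""
        frame = {"kind": "data", "payload": payload, "ack": self.pending_ack}
        self.pending_ack = None         # the ACK rode for free
        return frame

    def ack_timeout(self):
        """No reverse data appeared in time: send a bare ACK frame."""
        frame = {"kind": "ack", "ack": self.pending_ack}
        self.pending_ack = None
        return frame

b = PiggybackEndpoint()
b.on_frame_received(7)                  # B has just received A's frame 7
print(b.send_data("reply"))             # {'kind': 'data', 'payload': 'reply', 'ack': 7}
```

The `ack_timeout` path matters in practice: without it, a one-way transfer would never acknowledge anything.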

Network Layer

This layer provides switching and routing technologies, creating logical paths, known as virtual circuits, for transmitting data from node to node. Routing and forwarding are functions of this layer, as are addressing, internetworking, error handling, congestion control, and packet sequencing.

The network layer is concerned with controlling the operation of the subnet. A key design issue is determining how packets are routed from source to destination. Routes could be based on static tables that are "wired into" the network and rarely changed. They could also be determined at the start of each conversation, for example a terminal session. Finally, they could be highly dynamic, being determined anew for each packet, to reflect the current network load.

If too many packets are present in the subnet at the same time, they will get in each other's way, forming bottlenecks. The control of such congestion also belongs to the network layer.

Since the operators of the subnet may well expect remuneration for their efforts, there is often some accounting function built into the network layer. At the very least, the software must count how many packets, characters, or bits each customer sends, to produce billing information. When a packet crosses a national border, with different rates on each side, the accounting can become complicated.

When a packet has to travel from one network to another to get to its destination, many problems can arise. The addressing used by the second network may be different from the first one. The second one may not accept the packet at all because it is too large. The protocols may differ, and so on. It is up to the network layer to overcome all these problems to allow heterogeneous networks to be interconnected. In broadcast networks, the routing problem is simple, so the network layer is often thin or even nonexistent.

NFS uses the Internetwork Protocol (IP) as its network layer interface. IP is responsible for routing, directing datagrams from one network to another. The network layer may have to break datagrams larger than the MTU into smaller packets, and the host receiving the packet has to reassemble the fragmented datagram. The Internetwork Protocol identifies each host with a 32-bit IP address. IP addresses are written as four dot-separated decimal numbers between 0 and 255, e.g., 129.79.16.40. The leading 1-3 bytes of the IP identify the network and the remaining bytes identify the host on that network. The network portion of the IP is assigned by InterNIC Registration Services, under contract to the National Science Foundation, and the local network administrators assign the host portion of the IP (locally, by noc@indiana.edu). For large sites, usually subnetted like ours, the first two bytes represent the network portion of the IP, and the third and fourth bytes identify the subnet and host respectively. Even though IP packets are addressed using IP addresses, hardware addresses must be used to actually transport data from one host to another. The Address Resolution Protocol (ARP) is used to map the IP address to its hardware address.
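The network/host split described above can be sketched in a few lines; the helper name is hypothetical, and the two-byte network prefix matches the subnetted example.

```python
def split_ip(addr, network_bytes=2):
    """Split a dotted-decimal IPv4 address into network and host
    portions. network_bytes=2 matches a site whose first two bytes
    name the network and whose last two name the subnet and host."""
    octets = [int(o) for o in addr.split(".")]
    if len(octets) != 4 or not all(0 <= o <= 255 for o in octets):
        raise ValueError("not a valid dotted-decimal IPv4 address")
    return octets[:network_bytes], octets[network_bytes:]

net, host = split_ip("129.79.16.40")
print(net, host)  # [129, 79] [16, 40]
```

For the example address, 129.79 identifies the network while 16 and 40 identify the subnet and host respectively.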

Transport Layer

This layer provides transparent transfer of data between end systems, or hosts, and is responsible for end-to-end error recovery and flow control. It ensures complete data transfer.

The basic function of the transport layer is to accept data from the session layer, split it up into smaller units if need be, pass these to the network layer, and ensure that the pieces all arrive correctly at the other end. Furthermore, all this must be done efficiently, and in a way that isolates the session layer from the inevitable changes in the hardware technology.

Under normal conditions, the transport layer creates a distinct network connection for each transport connection required by the session layer. If the transport connection requires a high throughput, however, the transport layer might create multiple network connections, dividing the data among the network connections to improve throughput. On the other hand, if creating or maintaining a network connection is expensive, the transport layer might multiplex several transport connections onto the same network connection to reduce the cost. In all cases, the transport layer is required to make the multiplexing transparent to the session layer.

The transport layer also determines what type of service to provide to the session layer and, ultimately, to the users of the network. The most popular type of transport connection is an error-free point-to-point channel that delivers messages in the order in which they were sent. However, other kinds of transport service are possible, such as the transport of isolated messages with no guarantee about the order of delivery, and the broadcasting of messages to multiple destinations. The type of service is determined when the connection is established.

The transport layer is a true source-to-destination or end-to-end layer. In other words, a program on the source machine carries on a conversation with a similar program on the destination machine, using the message headers and control messages.

Many hosts are multi-programmed, which implies that multiple connections will be entering and leaving each host. There needs to be some way to tell which messages belong to which connection. The transport header is one place this information could be put.

In addition to multiplexing several message streams onto one channel, the transport layer must take care of establishing and deleting connections across the network. This requires some kind of naming mechanism, so that a process on one machine has a way of describing with whom it wishes to converse. There must also be a mechanism to regulate the flow of information, so that a fast host cannot overrun a slow one. Flow control between hosts is distinct from flow control between switches, although similar principles apply to both.

Session Layer

This layer establishes, manages and terminates connections between applications. The session layer sets up, coordinates, and terminates conversations, exchanges, and dialogues between the applications at each end. It deals with session and connection coordination.

The session layer allows users on different machines to establish sessions between them. A session allows ordinary data transport, as does the transport layer, but it also provides some enhanced services useful in some applications. A session might be used to allow a user to log into a remote time-sharing system or to transfer a file between two machines.

One of the services of the session layer is to manage dialogue control. Sessions can allow traffic to go in both directions at the same time, or in only one direction at a time. If traffic can only go one way at a time, the session layer can help keep track of whose turn it is.

A related session service is token management. For some protocols, it is essential that both sides do not attempt the same operation at the same time. To manage these activities, the session layer provides tokens that can be exchanged. Only the side holding the token may perform the critical operation.

Another session service is synchronization. Consider the problems that might occur when trying to do a two-hour file transfer between two machines on a network with a 1-hour mean time between crashes. After each transfer was aborted, the whole transfer would have to start over again, and would probably fail again with the next network crash. To eliminate this problem, the session layer provides a way to insert checkpoints into the data stream, so that after a crash, only the data after the last checkpoint has to be repeated.
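The checkpointing idea can be sketched in a few lines of Python. The checkpoint file name and chunk size below are invented for illustration; a real session protocol would carry checkpoints inside the data stream itself rather than in a side file:

```python
import os

CKPT = "transfer.ckpt"   # hypothetical record of the last good checkpoint

def load_checkpoint() -> int:
    try:
        with open(CKPT) as f:
            return int(f.read())
    except (FileNotFoundError, ValueError):
        return 0                               # no checkpoint: start from zero

def save_checkpoint(offset: int) -> None:
    with open(CKPT, "w") as f:
        f.write(str(offset))

def transfer(src: str, dst: str, chunk: int = 4096) -> None:
    offset = load_checkpoint()                 # resume after the last checkpoint
    mode = "r+b" if os.path.exists(dst) else "wb"
    with open(src, "rb") as fin, open(dst, mode) as fout:
        fin.seek(offset)
        fout.seek(offset)
        while data := fin.read(chunk):
            fout.write(data)
            offset += len(data)
            save_checkpoint(offset)            # after a crash, only data past
                                               # this point must be resent
    os.remove(CKPT)                            # transfer complete
```

If the process crashes mid-transfer, running `transfer` again picks up at the last saved offset instead of restarting the whole two-hour job.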

Presentation Layer

This layer provides independence from differences in data representation (e.g., encryption) by translating from application format to network format, and vice versa. The presentation layer works to transform data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

The presentation layer performs certain functions that are requested sufficiently often to warrant finding a general solution for them, rather than letting each user solve the problems. In particular, unlike all the lower layers, which are just interested in moving bits reliably from here to there, the presentation layer is concerned with the syntax and semantics of the information transmitted.

A typical example of a presentation service is encoding data in a standard, agreed upon way. Most user programs do not exchange random binary bit strings. They exchange things such as people's names, dates, amounts of money, and invoices. These items are represented as character strings, integers, floating point numbers, and data structures composed of several simpler items.

Different computers have different codes for representing character strings, integers and so on. In order to make it possible for computers with different representations to communicate, the data structures to be exchanged can be defined in an abstract way, along with a standard encoding to be used "on the wire". The presentation layer handles the job of managing these abstract data structures and converting from the representation used inside the computer to the network standard representation.
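A small Python sketch of encoding data in a standard, agreed-upon "on the wire" form. The record layout here (a 16-byte name field, a 32-bit amount, a 16-bit year, all in network byte order) is an invented example, not any real standard:

```python
import struct

# '!' selects network (big-endian) byte order with no padding, so every
# machine produces and expects exactly the same 22-byte layout.
RECORD = "!16s i H"   # 16-byte name, 32-bit signed amount, 16-bit year

def encode(name: str, cents: int, year: int) -> bytes:
    """Convert the machine's internal values to the wire representation."""
    return struct.pack(RECORD, name.encode("utf-8"), cents, year)

def decode(wire: bytes) -> tuple:
    """Convert the wire representation back to internal values."""
    raw_name, cents, year = struct.unpack(RECORD, wire)
    return raw_name.rstrip(b"\x00").decode("utf-8"), cents, year
```

Both ends agree on `RECORD`, so a big-endian and a little-endian machine can exchange these records without confusion.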

The presentation layer is also concerned with other aspects of information representation. For example, data compression can be used here to reduce the number of bits that have to be transmitted and cryptography is frequently required for privacy and authentication.

Application Layer

This layer supports application and end-user processes. Communication partners are identified, quality of service is identified, user authentication and privacy are considered, and any constraints on data syntax are identified. Everything at this layer is application-specific. This layer provides application services for file transfers, e-mail and other network software services. Telnet and FTP are applications that exist entirely in the application level. Tiered application architectures are part of this layer.

The application layer contains a variety of protocols that are commonly needed. For example, there are hundreds of incompatible terminal types in the world. Consider the plight of a full screen editor that is supposed to work over a network with many different terminal types, each with different screen layouts, escape sequences for inserting and deleting text, moving the cursor, etc.

One way to solve this problem is to define an abstract network virtual terminal for which editors and other programs can be written to deal with. To handle each terminal type, a piece of software must be written to map the functions of the network virtual terminal onto the real terminal. For example, when the editor moves the virtual terminal's cursor to the upper left-hand corner of the screen, this software must issue the proper command sequence to the real terminal to get its cursor there too. All the virtual terminal software is in the application layer.
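The virtual-terminal mapping can be sketched in Python; here ANSI escape sequences stand in for one concrete terminal type, and the class names are illustrative:

```python
class NetworkVirtualTerminal:
    """Abstract operations an editor calls, independent of any real terminal."""
    def cursor_home(self) -> str:
        raise NotImplementedError
    def move_cursor(self, row: int, col: int) -> str:
        raise NotImplementedError

class AnsiTerminal(NetworkVirtualTerminal):
    """Maps virtual-terminal operations onto ANSI escape sequences."""
    def cursor_home(self) -> str:
        return "\x1b[H"                   # ANSI: cursor to upper-left corner
    def move_cursor(self, row: int, col: int) -> str:
        return f"\x1b[{row};{col}H"       # ANSI: cursor position (1-based)
```

The editor calls only the abstract operations; supporting another terminal type means writing one more subclass, not changing the editor.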

Set Up Multiple SSIDs and VLANs on a DD-WRT Router

DD-WRT is arguably the most popular firmware replacement or upgrade for select wireless routers. Among many other things, it gives you the ability to create virtual wireless networks (multiple SSIDs) and configure virtual LANs (VLANs). These features let you offer public or separated access, and are usually found only in more expensive enterprise-level gear. With DD-WRT, you get them and much more for the cost of a cheap home router.

In this tutorial, we’ll create a second SSID, segregate it from the main SSID, make two of the LAN ports on the back of the router connect to just the new SSID, and leave the other two LAN ports connected to the main SSID.

You might want to, for example, use this second SSID to offer your visitors wireless Internet access, or encrypt it for use by another department in your organization. You can also plug computers into the individual networks and/or expand each with more access points. We'll make it so users can't snoop on or communicate with users on the other SSID or LAN ports, to protect your shared folders and resources.

For the record, this tutorial is based on the standard DD-WRT version 24 Service Pack 1 (more specifically, Build 10011).

Before continuing, flash your compatible wireless router with the DD-WRT firmware.

Creating the Virtual Wireless Network

Let’s get started! Bring up the web-based GUI by typing the IP address (192.168.1.1) into a browser and logging in with the username and password you created at the first login. Then follow these steps to create the new virtual SSID:

  1. Select the Wireless tab.
  2. Under the Virtual Interfaces section, click the Add button to add a new virtual interface.
  3. Specify the basic wireless settings.
  4. For the Network Configuration, choose Unbridged.
  5. Input an IP address that’s in a different subnet, such as 192.168.2.1. Just make sure the third octet differs from your main network’s (for example, 2 rather than 1).
  6. For the subnet mask, you’ll probably want to use the usual one: 255.255.255.0.
  7. Click Apply Settings to save and apply the changes.

Now create a new bridge and assign the new SSID to it:

  1. Select Setup > Networking.
  2. In the Create Bridge section, click the Add button, type br1 into the first (blank) field on the left, and click Apply Settings.
  3. In the new fields, input the same IP address and subnet mask that you did earlier in the Wireless settings, and click Apply Settings.
  4. In the Assign to Bridge section, click the Add button, select br1 in the left drop-down menu, select wl0.1 for the Interface, and click Apply Settings.
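For reference, the bridge created in the GUI steps above corresponds roughly to the following commands from the DD-WRT command shell (reachable over telnet or SSH). Interface names such as wl0.1 vary by router model and firmware build, so treat this as a sketch of what the GUI is doing rather than a verified recipe:

```shell
brctl addbr br1                                    # create the new bridge
brctl addif br1 wl0.1                              # attach the virtual SSID
ifconfig br1 192.168.2.1 netmask 255.255.255.0 up  # give it the new subnet
```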

How To Resolve Limited Or No Connectivity Errors in Windows

When attempting to set up or make network connections on a Windows computer, you may encounter a Limited Or No Connectivity error message similar to the following:

Limited or no connectivity: The connection has limited or no connectivity. You might be unable to access the Internet or some network resources.

This message can result from any of several different technical glitches or configuration problems. Follow these steps to resolve Limited Or No Connectivity errors in Windows.

Here's How:
  1. Determine whether your network access is functioning properly (that is, whether you can reach local network resources and the Internet). If you are using a broadband Internet connection and Windows XP Service Pack 2, this message is often a false error report.

    If your network access is non-functional, continue to the following steps.

  2. If your computer connects to the network through a broadband router, resetting (powering off and on) the router may resolve the issue. If not using a broadband router, or if resetting your router only temporarily resolves the issue and the error message re-appears later, continue to the following steps.

  3. If connecting to your network over Wi-Fi with wireless security enabled, your WEP or other security key may not be set properly. Check the wireless security configuration on your computer's network adapter and update it if necessary.

  4. If connecting to your network using a cable, the cable may have failed. Temporarily replace your network cable with a new one to determine whether this resolves the issue.

  5. If using a broadband router on your network, check your computer's IP address to verify it is valid and not an Automatic Private IP Addressing (APIPA) address that starts with 169.254. An address of the form 169.254.x.x signifies that your computer was unable to obtain a usable IP address from your router.

    To resolve DHCP configuration problems, proceed to the following steps.

  6. Reboot your computer, router (if present) and broadband modem together, then re-test your connection.

  7. If your connection remains non-functional, run the Windows Network Repair utility on your computer.

  8. If your connection remains non-functional, update your router settings to change from dynamic to static IP address configuration, and set an IP address on the computer appropriately.

  9. If your connection remains non-functional, unplug your router and connect the computer directly to your broadband modem. If this configuration is functional, contact the manufacturer of your router for additional support.

  10. If your computer is connecting to your network directly through a modem, or if your Internet access remains non-functional after following the instructions above, contact your Internet provider for support.
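The address check in step 5 can be automated. Here is a small Python sketch using the standard ipaddress module; 169.254.0.0/16 is the link-local (APIPA) range that Windows falls back to when no DHCP server answers:

```python
import ipaddress

# 169.254.0.0/16 is the IPv4 link-local (APIPA) range that Windows
# self-assigns when it cannot obtain an address from a DHCP server.
APIPA = ipaddress.ip_network("169.254.0.0/16")

def is_apipa(addr: str) -> bool:
    """Return True if addr indicates a failed DHCP lease."""
    return ipaddress.ip_address(addr) in APIPA
```

Run `ipconfig` to find your current address, then pass it to `is_apipa`; a True result means DHCP configuration failed and the router-focused steps above apply.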

Ten Reasons to Become a Network Administrator

As Information Technology (IT) has become an indispensable part of contemporary life, the position of network administrator has become more and more significant. A network administrator installs, supports, and even designs computer systems for an organization or business. They are in charge of the organization’s intranet, WANs, Internet access, LANs, and network segments. They also make sure that the IT systems run correctly and that everything, from the software programs to the computers themselves, is working. A network administrator must be industrious, skilled in many aspects of computer science, and patient. The position offers the possibility of a very good salary, and all types of employers, from large corporations to the government, hire network administrators, making the job outlook a very good one. If you enjoy troubleshooting computers and like working with networks, this could be a very good career choice for you. Continue reading to learn ten reasons to become a network administrator and what incentives the field might hold for you.

1. Good job outlook- Employment is anticipated to grow quicker than average for all occupations within the next ten years.

2. A variety of educational backgrounds are accepted- Computer skills are necessary, but individuals can normally enter this profession with many different levels of formal education, from bachelor’s degrees to master’s degrees and even associate degrees in some instances.

3. Comfortable work environment- Network administrators usually work in well-lighted laboratories or offices that are modern and comfortable.

4. The ability to work from home- In some instances, a network administrator can work from home, especially since computer networks are expanding and workers are therefore able to carry out their responsibilities from a remote location.

5. Advancement opportunities- Network administrators can often advance to more senior-level and supervisory positions once they have gained experience within their company.

6. Potential to earn a good salary- The median annual salary for a network administrator was around $65,000 in 2008. Some administrators were able to earn more than $100,000, however, depending on who they worked for and how long they had been with the company.

7. A variety of work environments- Network administrators work in a variety of environments, from small businesses to large corporations and even for the government.

8. You enjoy working with computers- A network administrator has a range of job duties, from installing and maintaining network software and hardware to monitoring networks to make sure that they are working properly. If you enjoy working with computers and troubleshooting then you could do very well at this job.

9. Good benefits- Most companies have good benefits for their network administrators. Benefits can include, but are not limited to: retirement plans, life and health insurance, vacation and sick leave, and paid training.

10. Relevant experience and skill can sometimes be enough- Some companies will accept those who have relevant work experience and demonstrated skills in the field as an alternative to having a strong educational background in computer sciences.