Network Overhead: What It Is and How It Affects the Overall Performance of the Entire Network System

DATA COMMUNICATIONS & NETWORKING PROJECT II
Network Overhead: What It Is and How It Affects the Overall Performance of the Entire Network System
SUBMITTED BY: ASM A. KARIM
SPRING 2012, CIT-370-25
PENNSYLVANIA COLLEGE OF TECHNOLOGY

ABSTRACT

In general terms, overhead means anything extra that shouldn't be there. So what is overhead in networking, and how does it affect overall network performance? It is worth understanding something that most of us deal with in everyday life, knowingly or unknowingly.

Ethernet, the technology we use most to transfer data over the network, is the main focus of this paper for measuring overhead. We all know the bandwidth we get from our ISP is what we paid for, but are we able to use that full bandwidth? No, we can't, but why not? This paper will give a general idea of some networking terms along with details about network overhead. It will also answer the questions raised here by providing some analysis and experiments. This paper may help us understand some critical factors that need to be accounted for during network design in order to achieve optimal network performance.

INTRODUCTION

Overhead is an important term in networking for design, implementation, and performance. Understanding overhead properly is basic to understanding the methodology employed by various technologies to get information from one place to another and the cost involved. According to PC Magazine, overhead is the amount of processing or transmitting time used by the system software, database manager, or network protocols that transmit additional codes in order to control and manage the data transfer over the network (PC Magazine).

Keep in mind that when assessing the performance of networks there is always a difference between theoretical speed ratings and real-world throughput. If we are able to design a seamless network, the difference should be relatively small yet still noticeable; otherwise it can be extremely large and far from negligible from the perspective of network performance. As a networking student, my job in the real world should involve measuring network performance by calculating overhead more precisely and extensively.

This paper will cover the basics of overhead associated with TCP/IP-based networks by briefly explaining the theoretical and analytical performance analysis of a real-world scenario that was conducted in a lab environment. It briefly describes some networking terminology, including networking performance requirements and performance impact factors, in both theoretical and real-world scenarios. The complete paper is broken down into five sections with a few subsections.

The first section presents some critical issues that establish why overhead is a concern in network design. Section two briefly describes some networking terminology that may help us better understand the different terms and analysis provided in this research paper. Section three distinguishes the relationship between overhead and network application types and their behavior by providing some details and simple analysis. Section four provides a brief description of the conducted experiment's design procedure, experimental results, and data analysis.

Finally, section five covers various limitations of the experiment, opportunities for future improvement, and what we learned, along with a short summary of the complete paper.

1: WHY IS OVERHEAD AN IMPORTANT FACTOR IN NETWORK DESIGN?

Overhead is simply the difference between what a network or communication method is supposed to be able to do and what it actually does. In a connection-oriented TCP/IP-based packet networking system, during the course of an IP packet's journey from the transmitting end to the receiving end, IP packets are encapsulated into and de-encapsulated out of framing headers and trailers that define how the packet will make its way to the next hop in the path. So every network has to carry some degree of normal network overhead in order to establish and maintain the connection, which guarantees that we will never be able to use all of the bandwidth of any connection for data transmission. For example, on a 10Mbps Ethernet connection the line may be able to transmit 10 million bits every second, but not all of those bits are data! Some of those bits are used for addressing and control purposes, because we can't just throw data onto the network in raw form.

Also, many of those bits are used for general overhead activities: dealing with SYN and ACK exchanges, collisions on transmission, error checking, re-transmission, and so on. Beyond those, there are several other issues that greatly impact network performance, such as hardware and software; overhead exists at each layer, from the application and operating system down to the hardware configuration. Some of the visible and familiar issues are the ability of the hardware to process the data and the bandwidth limitations that exist in the chain of data transmission.

Bandwidth limitations cause network throughput issues because the entire network can only run as fast as its slowest link. These bottlenecks reduce network performance. Another significant factor is asymmetry, which offers higher bandwidth in one direction than the other for Internet access. This was developed around common user behavior: people download more from the Internet than they upload to it. From a network design perspective it is important to know the speed rating for both directions. To give a practical example, I used the web-based speed test application from www.speedtest.org and conducted five different tests from my home router to five different servers located in five different states. The bar graph in screen shot-1 displays the download speed in blue and the upload speed in yellow.

Screen shot-1: Download and Upload speed comparisons.

Screen shot-2 displays the distance between my home router and the connected servers located in five different states. This screen shot also establishes the point of why and how overhead affects networks. If we analyze the data in that image, we would expect the server located at the greatest distance to have the highest latency and the lowest download and upload speeds. But that's not what we see in the highlighted part of the image, right? The server located in PA is about 100 miles closer than the server located in NY, yet the PA server shows higher latency and lower download and upload speeds.

Screen shot-2: Bandwidth and Latency comparison in terms of distance

What is causing the speed to drop? Why is it behaving completely opposite to the theory?

Those are the questions that will be answered in the next few sections of this paper.

2: SOME COMMON NETWORK TERMINOLOGY

Metrics are used to measure aspects of network and protocol performance. The values of such metrics in various scenarios indicate the level of performance of a network application. This section defines terms and metrics used industry-wide for measuring network application performance. These terms and metrics are used throughout this paper.

2.1: BANDWIDTH

Bandwidth is simply a measure of how fast data is transferred on our network.

Think about water flowing through a pipe: the wider the inside of the pipe, the more water can get through it. Bandwidth is a measure of the diameter of this pipe, and it represents the overall capacity of the connection, measured in bits per second (bps). Bandwidth can refer to either actual or theoretical throughput. For example, traditional Ethernet networks can theoretically support 10Mbps or more, but that throughput can't actually be achieved due to overhead in the computer hardware and operating system.

For easy understanding, I used a simple calculation after manually configuring the NIC speed to 10Mbps in full-duplex mode to reduce collisions. From screen shot-3 below, we can see that the computer measures data in bytes rather than bits. To estimate the time it takes to transfer 110 KB of data over the network, I first added 20% overhead, which means 10 bits/byte instead of 8 bits/byte, and then converted bytes into bits (10 x 110 KB = 1,100 Kbits). My NIC can transmit 10Mbps, or 10,000 Kbps, of data (assuming only this transfer is occurring), so it will take 1,100 Kbits / 10,000 Kbps = 0.11 seconds to transfer the document.
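The same back-of-the-envelope estimate can be expressed in a few lines of code. This is a minimal sketch assuming the 20% framing overhead (10 bits per byte) used above; the function name and constants are illustrative, not part of the experiment.

```python
# Minimal sketch of the transfer-time estimate above; the 10-bits-per-byte factor
# is the assumed ~20% framing overhead, and KB is treated as 1,000 bytes.
def transfer_time_seconds(payload_kb, link_rate_bps, wire_bits_per_byte=10):
    bits_on_wire = payload_kb * 1000 * wire_bits_per_byte
    return bits_on_wire / link_rate_bps

# 110 KB over a 10 Mbps NIC, as in the example above:
print(transfer_time_seconds(110, 10_000_000))  # ~0.11 seconds
```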

Screen shot-3: LAN connection status

2.2: LATENCY

According to About.com, the term latency refers to any of several kinds of delays typically incurred in the processing of network data. A low-latency network connection such as Ethernet generally experiences small delays, unlike a high-latency connection such as satellite Internet (Mitchell). Excessive latency creates bottlenecks that prevent data from filling the network pipe, thus decreasing effective bandwidth.

2.3: ROUND TRIP TIME (RTT)

The time in milliseconds for a request to make a trip from a source host to a destination host and back again is the RTT.

Lower values indicate better performance. Forward and return path times are not necessarily equal. Ping is another term commonly used for round-trip time. RTT values are affected by network infrastructure, distance between nodes, network conditions, and packet size. Packet size, congestion, and payload compressibility impact RTT when measured on slow links, such as dial-up connections. Other factors also affect RTT, including forward error correction and data compression, which introduce buffers and queues that increase RTT and decrease performance.

2.4: THROUGHPUT

Network throughput refers to the volume of data that can flow through a network. Network throughput is constrained by factors such as the network protocols used, the capabilities of routers and switches, and the type of cabling, such as Ethernet or fiber optic. Network throughput in wireless networks is constrained further by the capabilities of the NICs on client systems.

3: RELATIONSHIP BETWEEN APPLICATION TYPES AND NETWORK OVERHEAD

This section will describe the different application types that are used most for network data transmission.

It also explains TCP/IP's relationship with overhead. There are two fundamental types of network applications: transactional and streaming. These application types are also called interactive and batch-processing applications, respectively. According to the Windows Development Center, transactional applications are stop-and-go applications. They usually perform request/reply operations, often ordered. Examples of transactional applications include synchronous remote procedure calls (RPC), as well as some HTTP and Domain Name System (DNS) implementations (Recognizing Slow Applications).

Streaming applications move data. To put it in parallel terms, streaming applications adhere to a pedal-to-the-metal data transmission philosophy, usually with little concern for data ordering. Examples of streaming applications include network backup and the File Transfer Protocol (FTP). Transactional applications are affected by the overhead required for connection establishment and termination. For example, each time a connection is established on an Ethernet network, three packets of approximately 60 bytes each must be sent, and approximately one RTT is required for the exchange.

When a connection is terminated, four packets are exchanged. This happens for each connection, so an application that opens and closes connections often generates this overhead on every occurrence. Another aspect that introduces overhead is TCP/IP itself, which has characteristics that enable the protocol to operate as its standardized implementation requirements dictate. A TCP/IP optimization called the Nagle algorithm can also limit data transfer speed on a connection. The Nagle algorithm is designed to reduce protocol overhead for applications that send small amounts of data, such as Telnet, which sends a single character at a time.
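As an aside, applications can trade this overhead reduction for lower latency. The short sketch below is illustrative and not part of the experiment; it shows how a TCP socket can opt out of the Nagle algorithm using the standard TCP_NODELAY option.

```python
import socket

# Illustrative sketch: the Nagle algorithm is enabled by default on TCP sockets.
# Latency-sensitive applications that send many small messages (e.g., Telnet-style
# traffic) can disable it with TCP_NODELAY, at the cost of more small packets.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # send small writes immediately
# sock.connect(("example.com", 23))  # hypothetical destination
```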

Another aspect of TCP/IP is slow-start, which takes place whenever a connection is established. When a connection is established, regardless of the receiver's window size, a 4 KB transmission can take up to 3-4 RTTs due to slow-start. When a TCP connection is closed, connection resources at the node that initiated the close are put into a wait state, called TIME-WAIT, to guard against data corruption if duplicate packets linger in the network. This can deplete per-connection resources, such as RAM and ports, when applications open and close connections frequently.

4: LAB DESIGN FOR MEASURING OVERHEAD IN DATA COMMUNICATION

STEPS FOLLOWED:
- DESIGN PHASE
- IMPLEMENTATION PHASE
- FILE TRANSFERS AND DATA COLLECTION
- GRAPHICAL REPRESENTATION AND ANALYSIS

4.1: DESIGN PHASE

During the design phase, I had to do some homework to identify suitable file transfer tools, the right file sizes, and naming conventions in order to conduct the experiment. A graphical representation of the experimental setup is shown in the screen shot below. Details about the PCs and networking devices used in the experiment are given in the table.

ROUTER SETTINGS
Brand: Linksys WRT54G2
IP Configuration: Static
IP Address: 192.168.1.1
Wireless Mode: Disabled
DHCP Mode: Disabled
Security Mode: Disabled

FTP SERVER/CLIENT PC
Brand: Dell PC
Processor Speed: 2.5GHz
Server Encryption: Disabled
IP Configuration: Static
NIC Speed: 10Mbps
Mode: Half Duplex

The table below provides detailed information about the files I used in the experiment and the IP address configuration for the individual devices.

Name    Size (KB)   Size (Bytes)
File-1  2221 KB     2274304
File-2  1036 KB     1060864

Device      IP Address
Server PC   192.168.1.10
Client PC   192.168.1.5

4.2: IMPLEMENTATION

In this experiment, I used a Cisco-owned Linksys router, two Dell PCs with Gigabit Ethernet cards, and two twisted-pair CAT5 cables. Both PCs' network interface cards (NICs) were set to 10Mbps half-duplex mode in order to increase collisions, similar to what we see in many Internet access networks. I then configured the Linksys router as described in the table above and disabled services such as DHCP server mode, wireless mode, and security mode. I also installed the FileZilla server software on the server PC and the client software on the client PC.

After that I assigned the static IPs on both PCs as shown in the table above. After installing and running the software on the PCs, I made some changes to the default configuration to satisfy my requirements, such as setting the upload speed to 100 Kbps and the download speed to 1024 Kbps, as shown in the screen shot below. Finally, I created the user “asm” with a password and a shared folder named “files”, and stored both File-1 and File-2 there so they could be accessed and downloaded from the client PC.

Screen shot-4: Server and Client software setting details

Screen shot-5: Connection details information from the client PC.

4.3: FILE TRANSFER AND DATA COLLECTION

At this stage of the experiment, the connection between server and client is established and the client PC is ready to download the files from the server PC. First, I launched Wireshark on the client PC to capture the packets and then started downloading the files at two different times. After each successful download, I collected some necessary data from Wireshark, such as frame details, the summary, IO graphs, and Protocol Hierarchy Statistics, and I use those data in later sections to calculate the experimental overhead and compare it with the theoretical overhead.

The screen shot below displays part of the data transmission for File-1 that was captured using Wireshark.

Screen shot-6: File-1 partial transfer frame.

After transferring each file, I used the Wireshark summary feature to collect details about the downloaded file. Both summaries are shown in the screen shots below. If we compare and analyze the summaries for both files, we will see that there is a significant difference between the two files' statistics, and that proves the existence of overhead.

Screen shot-7: File-1 Summary

Screen shot-8: File-2 summary

4.4: GRAPHICAL REPRESENTATION AND ANALYSIS

In the Wireshark summaries above, we can see that 379KB (2600KB - 2221KB) of extra data was required to send 2221KB of data (File-1). On the other hand, 154KB of extra data was required to send 1036KB of data (File-2). We also see that File-2 (10 packets/sec) sends more packets each second than File-1 (6 packets/sec), and File-2 has a higher average packet size (614 bytes) compared to File-1's average packet size (583 bytes), because the larger the packets are, the fewer of them it takes to fill the pipe.
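To make the arithmetic explicit, here is a small, hypothetical check of the figures quoted from the Wireshark summaries above. It simply expresses the extra bytes on the wire as a share of each file's payload, which is one of several ways an overhead percentage could be defined; it was not produced by the capture tooling itself.

```python
# Hypothetical arithmetic check of the Wireshark summary figures quoted above.
captures = {
    "File-1": {"file_kb": 2221, "wire_kb": 2600},        # 379 KB extra
    "File-2": {"file_kb": 1036, "wire_kb": 1036 + 154},  # 154 KB extra
}
for name, c in captures.items():
    extra = c["wire_kb"] - c["file_kb"]
    print(f"{name}: {extra} KB extra, {100 * extra / c['file_kb']:.1f}% of the payload")
# File-1: 379 KB extra, 17.1% of the payload
# File-2: 154 KB extra, 14.9% of the payload
```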

Remember, file size and packet size are different things. Now I will analyze some of the network protocols involved and describe how they affect data transmission, based on the IO graph for the files that I created using Wireshark. The color-coded graph below represents different protocols that are widely used in networking. As you can see, the black line represents the Transmission Control Protocol (TCP), the green line FTP data transmission, the blue line UDP, and the red line the File Transfer Protocol (FTP).

Screen shot-9: Graphical representation of bandwidth use by different protocols for File-1

As I mentioned in section 1 (why overhead is an important factor in network design), every network has to carry some degree of normal network overhead in order to establish and maintain the connection for data transmission. That's what we see here in the line graph. At the start, the 0-5 second interval is used by UDP packets to establish the connection, and then TCP and FTP packets transfer the data at an almost constant rate until the end of the transfer.

Notice here that TCP sends more packets than FTP, because in addition to the SYN packets used to set up the connection, TCP has to send ACK packets for the data it receives in order to avoid data loss. As you can imagine, data from some other protocols is also being transmitted, even though I tried to avoid some of the common protocols that you would see in the real world.

4.5: DATA COMPARISON AND ANALYSIS

In this section, I will calculate the theoretical values for the experimental files and compare them with the experimental output shown in the two screen shots below. Since I am using Ethernet to transfer the files, let me explain some details about it.

Ethernet is the world's most popular LAN technology and supports 10/100/1000Mbps transfer rates over different media. Its frame size can range from 64 to 1518 bytes, and each frame requires 20 extra bytes on the wire: 12 bytes are used as the inter-packet gap and 8 bytes as the preamble, spaces between packets that cannot be used for data. This means, in effect, that when a 64-byte Ethernet frame is sent, 84 bytes are actually allocated for the transmission of the frame. Another drawback of a 64-byte Ethernet frame is that only 46 bytes are actually available for the IP packet.

Let's calculate the theoretical values for File-1 using 64-byte Ethernet frames, pretending there is no other communication occurring. As I mentioned previously, I am using a 10Mbps Ethernet connection, so 10Mbps = 10 x 1000 x 1000 = 10,000,000 bits per second (bps). Let's see how many packets I can actually send over the Ethernet medium. First, I add the preamble and inter-packet gap, which brings each frame to 84 (64 + 20) bytes, and then convert that into bits. From this I can calculate the packet rate, which is 14,880 packets per second (10,000,000 / (84 x 8)). Converting that back into bits per second of frame data to calculate the overhead gives 7,618,560 bps (14,880 x 64 x 8), which is not equal to 10Mbps.

I then calculated the number of bits allocated to the inter-packet gap and preamble, which comes to 2,380,800 bps (14,880 x 20 x 8) that we cannot use. Finally, I added these two bps values and got approximately 10Mbps (7,618,560 + 2,380,800 = 9,999,360). From this calculation, you can see that roughly 24% of the total bandwidth is overhead incurred by the gap and preamble. If we also calculate the overhead inside the 64-byte Ethernet frame, we will see that 38 bytes ((64 + 20) - 46) are consumed as overhead, which means only about 55% of the bits carry the IP packet and roughly 45% of my connection is consumed by overhead.
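The frame-level arithmetic above can be reproduced with a short script. This is a minimal sketch using the same assumed constants (64-byte minimum frames, 20 bytes of gap plus preamble, 46 bytes of IP payload per frame); it is not a general Ethernet model.

```python
# Minimal sketch of the 64-byte-frame overhead arithmetic above.
LINK_BPS = 10_000_000     # 10 Mbps link
FRAME = 64                # minimum Ethernet frame size, bytes
GAP_PREAMBLE = 20         # 12-byte inter-packet gap + 8-byte preamble
IP_PAYLOAD = 46           # bytes of a 64-byte frame left for the IP packet

wire_bytes = FRAME + GAP_PREAMBLE                  # 84 bytes on the wire per frame
pps = LINK_BPS // (wire_bytes * 8)                 # 14,880 frames per second
frame_bps = pps * FRAME * 8                        # 7,618,560 bps of frame bits
gap_bps = pps * GAP_PREAMBLE * 8                   # 2,380,800 bps lost to gap/preamble
print(frame_bps, gap_bps, frame_bps + gap_bps)     # sums to ~10 Mbps (9,999,360)
print(f"IP payload share: {IP_PAYLOAD / wire_bytes:.1%}")  # ~54.8% of each 84-byte slot
```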

After calculating these theoretical values, we can now compare them with the experimental values. First, I will analyze and explain the experimental results, and then compare them. From the screen shots, we can see that the "FTP Data" transfer rate for File-2 (0.046 Mbit/s) is higher than for File-1 (0.026 Mbit/s), which creates a significant difference in the time it takes to download each file. We already saw that from the summary images in the earlier section: where File-2 transfers 10.021 packets per second, File-1 manages only 6.026 packets.

Screen shot-10: Protocol hierarchy statistics for File-1

Screen shot-11: Protocol Hierarchy Statistics for File-2

Screen shots 10 and 11 provide a detailed percentage-wise analysis for each individual protocol type, along with its usage details per packet and in bytes. For File-1, more than 51% of packets are used to transfer the FTP data out of 86% TCP packets, with the rest used by other protocols; for File-2, more than 54% of packets are used for FTP data transfer out of 92% TCP packets. This is because File-2 (0.046 Mbit/s) is transferring larger packets than File-1 (0.026 Mbit/s), which fully supports the point I discussed earlier: the larger the packets, the fewer of them it takes to fill the pipe.

Fewer frames mean fewer gaps and fewer preambles, and thus less overhead, which is true in the case of the File-2 transmission. It does prove that big frames are more efficient than small ones, and that's why we get 14% total overhead for File-2 versus 17% total overhead for File-1. To finalize our analysis, the statement about the existence of overhead in Ethernet networks is true, and it varies as a function of the packet size distribution. The smaller the packets, the less efficient Ethernet will be, and the larger the packets, the more efficient the Ethernet network.
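The same frame-size-versus-efficiency relationship can be illustrated with a quick, hypothetical calculation. It reuses the 20 bytes of gap plus preamble from above and the 18 bytes of Ethernet header and trailer implied by the 46-byte payload figure; the frame sizes chosen are just examples.

```python
# Hypothetical illustration of Ethernet efficiency as a function of frame size,
# using the 20-byte gap+preamble and the implied 18 bytes of header/trailer.
def efficiency(frame_bytes, gap_preamble=20, header_trailer=18):
    payload = frame_bytes - header_trailer            # bytes left for the IP packet
    return payload / (frame_bytes + gap_preamble)     # share of wire time carrying payload

for size in (64, 512, 1518):
    print(size, f"{efficiency(size):.1%}")
# 64 -> 54.8%, 512 -> 92.9%, 1518 -> 97.5%
```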

If we consider a real-world network scenario, we will see a significant lack of efficiency due to other factors that greatly impact the network; some of those issues I already discussed in earlier sections. The experiment I conducted and discussed here is not sufficient to capture the complex and numerous data transactions that occur in an enterprise-level network across multiple levels of communication, such as the application, hardware, transmission, and various protocol levels.

5: LIMITATIONS

In this experiment, the lack of sophisticated, more advanced tools to collect data and the inability to access a real enterprise network do not significantly degrade the experiment, but having access to those would make the experiment more realistic and professional. The data were collected only once for each file because there are so many things that need to be considered and described in the analysis section of the experiment.

The sole purpose of transferring FTP data over the Ethernet medium and measuring the overhead was met, even though we could not analyze the overhead that depends on different application and protocol types.

CONCLUSIONS:

After defining overhead, this paper explains how overhead is related to the network and how it affects network performance. Throughout the paper, a few examples are used to demonstrate the existence of network overhead and explain why overhead is an important factor in network performance.

In short, several different applications and protocols are discussed to demonstrate their relationship with overhead, and the relevant OSI layer context is explained as well. Important networking terminology such as bandwidth and latency is explained and demonstrated to establish its relationship with overhead. At the end of the paper, a brief but detailed analysis provides the most important and vital information about the Ethernet network, using a real-world setup and analytical tools. The results of this experiment demonstrate that overhead is a vital factor in network design, implementation, and performance.

It also helps us understand the different methodologies employed by various technologies for data transmission over the network. The details of this experiment point to the core factors, as well as the general factors, that need to be considered in order to design a network properly.

CITATIONS:

Is there enough bandwidth. (n.d.). Retrieved April 2012, from www.imakenews.com: http://www.imakenews.com/kin2/e_article000345313.cfm?x=b11,0,w

Mitchell, B. (n.d.). Network Bandwidth and Latency. Retrieved April 2012, from About.com: http://compnetworking.about.com/od/speedtests/a/network_latency.htm

Network Switching Tutorial. (n.d.). Retrieved April 2012, from www.technick.net: http://www.technick.net/public/code/cp_dpage.php?aiocp_dp=guide_networking_switching

PC Magazine. (n.d.). Retrieved from www.pcmag.com: http://www.pcmag.com/encyclopedia_term/0,1233,t=overhead&i=48685,00.asp

Theoretical speed vs practical throughput. (n.d.). Retrieved April 2012, from www.appleinsider.com: http://www.appleinsider.com/articles/08/03/28/exploring_time_capsule_theoretical_speed_vs_practical_throughput.html
