Network Protocols and Standards: The Quick Overview

Network Protocols

Network protocols, in the simplest sense, are the policies and standards, including the formats, procedures, and rules, that define how two or more devices communicate within a network. These policies and standards govern the end-to-end processes involved in communication and ensure the timely, secure delivery of data. Protocols incorporate the processes, constraints, and requirements for accomplishing communication between servers, routers, computers, and other network-enabled devices. Network protocols must be installed and agreed upon by both senders and receivers before data communication can take place, and they apply to both the hardware and software nodes that communicate with each other on a network.

As such, a protocol can be likened to the language through which communication happens on the internet: a set of mutually accepted rules implemented at both ends of the communication channel to ensure a proper exchange. Two devices can exchange information only if they adopt the same rules, so we cannot communicate over the internet without network protocols. Every protocol is defined in its own terms and has a distinct name. As in ordinary communication, messages travel from the sender to the receiver through a medium, the physical path over which the information travels once it is sent, and the exchange follows a protocol.

These formats are used among communicating systems to exchange messages, where each message has a precise meaning and is intended for a particular recipient. The recipient then produces a single response from the pool of possible responses predetermined for the specific situation. This behavior is typically independent of how the protocol is implemented, so communication protocols must be agreed to by the parties involved, and network protocols are therefore developed according to technical standards. The same arrangement holds for a programming language, so it can be said that protocols are to communication what programming languages are to computations.

Different network protocols describe different aspects of communication. A group of network protocols designed to work together is known as a protocol suite; when a protocol suite is implemented in software, it is known as a protocol stack. The Internet Engineering Task Force (IETF) publishes internet communication protocols and hence handles both the wired and wireless networking that has become a prominent part of present-day networking.

The International Organization for Standardization (ISO), on the other hand, handles other types of networking. Yet another organization, the ITU-T, handles network protocols and formats for telecommunications and the public switched telephone network (PSTN). As PSTN and internet coverage converge over time, their standards are moving toward convergence as well.

Communicating Systems

In a network, communication is exchanged between media and devices at every instant. This exchange is administered by predetermined agreements set out in communication protocol specifications, which define the nature of the communication, the actual data exchanged, and any state-dependent behaviors. In digital computing systems, the rules are expressed as data structures and algorithms, while in communication they are expressed as network protocols.

Operating systems usually contain cooperating processes that manipulate shared data to communicate with each other, and this communication is governed by protocols embedded in the process code. Communicating systems, by contrast, have no shared memory and must use a shared transmission medium to communicate with each other. Transmission is not necessarily reliable, and the individual systems may use different operating systems and different hardware.

To implement a network protocol, protocol software modules are interfaced with a framework implemented within the machine's operating system. The framework implements the operating system's networking functionality. When protocols are expressed in a portable programming language, the protocol software can be made independent of the operating system. The TCP/IP and OSI models are the best-known frameworks.

Abstraction layering proved to be a successful design approach for both compiler and operating system design from as early as the days when internet development was taking place. Given the similarities between programming languages and communication protocols, the originally monolithic networking programs could be broken down into protocols that work together, giving rise to the concept of layered network protocols.

As a result of such developments, layered protocols are today the basis of protocol design. A single protocol is generally not enough for systems to transmit information; instead, there are sets of protocols that cooperate to ensure transmission, known as protocol suites. Protocols are arranged into groups based on the way they function.

To illustrate, take transport protocols as an example of such a group. Groups of layers pertain to certain functionalities, with each layer solving a particular class of problems relating to, for instance, application, transport, internet, and network interface functions. To transmit a message, a protocol has to be selected from each layer; the selection of the next protocol is achieved by extending the message with a protocol selector for each layer.

Basic Requirements for Network Protocols

Getting data across a network is only part of the problem of transmission. Once the data is received, more has to happen: for instance, the data has to be evaluated in the context of how far the conversation has progressed. Network protocols must therefore include rules of engagement that describe the context. These rules express the syntax of the communication.

Other rules determine whether the transmitted data is meaningful in the context of the exchange; these rules express the semantics of the communication. Communication therefore involves both syntax and semantics.

Communication systems send and receive data, and protocols define and specify the rules that govern the transmission. The aspects described below must therefore be addressed.

Data Formats for Exchange of Data

Information is exchanged as bit-strings. The bit-strings are divided into fields, and every field carries information relevant to the protocol in question. Conceptually, a bit-string consists of two parts: the header and the payload. The payload carries the actual message, while the header carries the fields relevant to the protocol's operation. When a bit-string is longer than the maximum transmission unit, it is divided into appropriately sized smaller pieces.
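
To make the header/payload split concrete, here is a minimal sketch in Python of an invented protocol whose header carries a version, a message type, and the payload length. The field layout is purely illustrative and not taken from any real protocol.

```python
import struct

# Header layout for a made-up protocol, in network byte order:
# version (1 byte), message type (1 byte), payload length (2 bytes).
HEADER_FORMAT = "!BBH"

def encode(version: int, msg_type: int, payload: bytes) -> bytes:
    """Prefix the payload with a header describing it."""
    header = struct.pack(HEADER_FORMAT, version, msg_type, len(payload))
    return header + payload

def decode(bitstring: bytes) -> tuple[int, int, bytes]:
    """Split a received bit-string back into header fields and payload."""
    header_size = struct.calcsize(HEADER_FORMAT)
    version, msg_type, length = struct.unpack(HEADER_FORMAT, bitstring[:header_size])
    payload = bitstring[header_size:header_size + length]
    return version, msg_type, payload

packet = encode(1, 2, b"hello")
print(decode(packet))  # (1, 2, b'hello')
```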

Address Formats for the Exchange of Data

Addresses in networking are like postal addresses in real life: they identify who the sender of the information is and who the intended receiver is. The header of the bit-string described above carries this information, which allows recipients to determine whether a bit-string is of use to them, and to process or ignore the message accordingly. A connection between a sender and a receiver can be identified using what is known as an address pair, whose values have meanings for both parties. Some addresses carry special values with an agreed meaning; a special destination address, for instance, results in a broadcast to every station on a local network. The set of rules describing the meanings of the address values in an address pair is called an addressing scheme.

Mapping of Addresses

Address mapping happens when protocols need to translate addresses from one scheme into another. For example, to translate a logical IP address specified by an application into an Ethernet MAC address, address mapping must take place so that an address in the first scheme can be understood in the second.
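
As a hedged illustration, the sketch below models an ARP-style mapping as a simple dictionary from IPv4 addresses to MAC addresses. The entries are invented; a real ARP cache is populated by ARP request/reply traffic on the link.

```python
# Toy address-mapping table in the spirit of ARP: logical IPv4 addresses
# resolved to Ethernet MAC addresses. All entries are invented.
arp_table = {
    "192.168.1.10": "aa:bb:cc:dd:ee:01",
    "192.168.1.11": "aa:bb:cc:dd:ee:02",
}

def resolve(ip: str) -> str:
    mac = arp_table.get(ip)
    if mac is None:
        # A real stack would broadcast an ARP request here and wait for a reply.
        raise LookupError(f"no mapping for {ip}; ARP request needed")
    return mac

print(resolve("192.168.1.10"))  # aa:bb:cc:dd:ee:01
```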

Detection of Errors That Occur During Transmission

Error detection is a necessary and important part of data transmission in networks, especially where data corruption can occur. The most common approach is to attach a cyclic redundancy check (CRC) to the end of each packet. With the CRC in place, the receiver of the data can detect differences introduced by corruption. This gives the receiver a basis for rejecting the packet and arranging for retransmission.
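
The following sketch shows the idea using Python's standard zlib.crc32: the sender appends a CRC-32 trailer and the receiver recomputes it to detect corruption. The framing (a four-byte big-endian trailer) is an assumption for illustration, not any specific protocol's format.

```python
import zlib

def attach_crc(packet: bytes) -> bytes:
    """Append a CRC-32 of the packet so the receiver can detect corruption."""
    crc = zlib.crc32(packet)
    return packet + crc.to_bytes(4, "big")

def verify_crc(data: bytes) -> bool:
    """Recompute the CRC over the payload and compare it with the trailer."""
    payload, trailer = data[:-4], data[-4:]
    return zlib.crc32(payload) == int.from_bytes(trailer, "big")

sent = attach_crc(b"some payload")
print(verify_crc(sent))        # True
corrupted = b"x" + sent[1:]    # first byte flipped in transit
print(verify_crc(corrupted))   # False -> reject and arrange retransmission
```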

Routing

Sometimes systems are not connected to each other directly. In such cases, intermediary systems are employed to connect the sender with the intended receiver. These intermediaries forward the message on behalf of the sender and make it possible for the receiver to get the intended message. The term "router" is used for these intermediaries when the connections happen on the internet, and the resulting interconnection of networks is referred to as internetworking.

Acknowledging

When communication is expected, an acknowledgment that the correct data was received is necessary. The receiver usually sends the acknowledgment back to the original sender of the message.

Timeouts and Retries

Despite all the necessary precautions, packets are sometimes lost in networks, and at other times their delivery is simply delayed. This is where acknowledgment plays a role: the sender expects an acknowledgment from the receiver as assurance that the message was received. The acknowledgment is expected within a set amount of time, which gives rise to the concept of a timeout. When the time lapses without an acknowledgment, the sender takes it as a cue to retransmit the information. In other cases, a link may be permanently broken, in which case retransmission has no effect, so the number of retries is limited. When the number of retries exceeds the limit, an error is reported.
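
A minimal sketch of the timeout-and-retry loop described above, assuming a UDP-style socket and an acknowledgment message sent back by the receiver. The retry limit and timeout value are illustrative.

```python
import socket

MAX_RETRIES = 5   # illustrative limit; real protocols tune this carefully
TIMEOUT_S = 2.0   # seconds to wait for an acknowledgment

def send_with_retries(sock: socket.socket, packet: bytes, dest) -> bytes:
    """Send a packet and wait for an acknowledgment, retransmitting on timeout."""
    sock.settimeout(TIMEOUT_S)
    for attempt in range(MAX_RETRIES):
        sock.sendto(packet, dest)
        try:
            ack, _ = sock.recvfrom(1024)
            return ack  # acknowledgment arrived in time
        except socket.timeout:
            continue    # no acknowledgment: retransmit
    # Retries exhausted: the link may be permanently broken.
    raise TimeoutError(f"no acknowledgment after {MAX_RETRIES} attempts")
```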

The Direction in Which Information Flows

Sometimes transmissions can only occur in one direction at a time, as on a half-duplex link where only one sender may transmit at once. This creates a problem that needs to be addressed, and it is the root of the concept of media access control: arrangements have to be made for the cases of contention and collision. Contention happens when two senders both wish to transmit data, and a collision happens when they actually send out information simultaneously.

Control of Sequences

Sometimes bit-strings must be divided into smaller pieces before transmission. Because the pieces are sent individually and may take different routes to their destination, they may be delayed or lost, and can reach the destination out of sequence; retransmissions, meanwhile, can introduce duplicate pieces. To cope with this, the sender marks the pieces with sequence information before transmission. If they reach the receiver out of sequence, the receiver can determine what was duplicated and what was lost, and can reassemble the message or ask for retransmission as needed.
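
The sketch below shows how sequence numbers let a receiver cope with out-of-order and duplicated pieces. The piece format, a (sequence number, data) pair, is an assumption made for the illustration.

```python
def reassemble(pieces: list[tuple[int, bytes]], total: int) -> bytes | None:
    """Reassemble pieces tagged with sequence numbers by the sender.

    `pieces` may arrive out of order or duplicated; `total` is the expected
    number of pieces. Returns the message, or None if a piece is missing.
    """
    by_seq = {}
    for seq, data in pieces:
        by_seq[seq] = data          # duplicates simply overwrite themselves
    if len(by_seq) < total:
        return None                 # a piece was lost: ask for retransmission
    return b"".join(by_seq[i] for i in range(total))

# Pieces arrive out of order, with piece 1 duplicated.
arrived = [(2, b"C"), (0, b"A"), (1, b"B"), (1, b"B")]
print(reassemble(arrived, 3))  # b'ABC'
```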

Control of Flow

The flow needs to be controlled when the sender transmits packets faster than the intermediate network or the receiver can receive and process them. Flow control is implemented by messaging from the receiver back to the sender.
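
Here is a minimal sketch of one common form of such messaging, window-based flow control: the receiver grants the sender a number of credits, and the sender keeps no more than that many packets in flight. The window size and packet contents are invented.

```python
# Window-based flow control sketch: acknowledgments return credits to the sender.
window = 4            # credits granted by the receiver
in_flight = 0         # packets sent but not yet acknowledged
queue = [f"packet-{i}".encode() for i in range(10)]

while queue or in_flight:
    # Send while the receiver's window still has credit.
    while queue and in_flight < window:
        pkt = queue.pop(0)
        in_flight += 1
        print("sent", pkt)
    # Receiving one acknowledgment returns one credit to the sender.
    in_flight -= 1
    print("ack received, credits free:", window - in_flight)
```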

Design of Protocols

Systems engineering principles have been applied to generate a set of common principles governing network protocol design. Designing complex protocols therefore means decomposing them into simpler, cooperating protocols within a conceptual framework. Communicating systems operate concurrently, and an essential part of this kind of programming is synchronizing the software so that messages are sent and received in the proper sequence.

Traditionally, concurrent programming has been discussed in the theory of operating systems. Formal verification is important because concurrent programs are notorious for the hidden bugs they contain. Communicating sequential processes (CSP) is a mathematical approach to studying communication and concurrency. Alternatively, concurrency can be modeled using finite-state machines, such as Mealy and Moore machines, which are used as design tools in digital electronic systems and are encountered in telecommunication and in the electronic hardware used in devices.
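
As an illustration of the finite-state-machine view, here is a Mealy-style transition table for a hypothetical stop-and-wait sender; the states, events, and outputs are invented for the sketch.

```python
# Mealy-style finite-state machine: (state, event) -> (next_state, output).
TRANSITIONS = {
    ("idle",    "send_request"): ("waiting", "transmit packet"),
    ("waiting", "ack_received"): ("idle",    "deliver next packet"),
    ("waiting", "timeout"):      ("waiting", "retransmit packet"),
}

def step(state: str, event: str) -> tuple[str, str]:
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")

state = "idle"
for event in ["send_request", "timeout", "ack_received"]:
    state, output = step(state, event)
    print(event, "->", state, "/", output)
```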

There are many analogies between computer communication and programming. A protocol's transfer mechanism, in this view, is comparable to a central processing unit, and there are rules that allow programmers to design protocols that cooperate with one another while remaining independent of each other.

Usually, protocols are layered to form what is known as a protocol stack. As a design principle, layering involves breaking a protocol into smaller pieces, each of which accomplishes a specific task and interacts with the other parts of the protocol only in a small number of well-defined ways. Layering allows the individual parts of a protocol to be designed and tested without facing a combinatorial explosion of cases, keeping each design relatively simple.

Internet communication protocols are made for complex and diverse settings, yet their design is simple and modular, fitting into the coarse hierarchy of functions defined within the internet protocol suite. The first two cooperating protocols, TCP and IP, resulted from the decomposition of the original Transmission Control Program into a layered communication suite. Another model is the OSI model, which consists of seven layers. It was conceived as a guide for general communication, with strict rules of protocol interaction and rigorous notions of layering as a functionality concept.

Application software is constructed on top of a data transport layer; beneath the data transport layer is the datagram delivery and routing mechanism, which on the internet is typically connectionless. Packet relaying happens in a layer involving network link technologies such as Ethernet. Layering provides the opportunity to exchange technologies whenever the need arises, so protocols can sometimes be stacked in tunneling arrangements that allow the connection of dissimilar networks. For example, IP can be tunneled across an Asynchronous Transfer Mode (ATM) network.

Protocol Layering

Protocol layering forms the basis of protocol design. It allows the decomposition of single, complex protocols into simpler, cooperating protocols, but it is also a functional decomposition: each protocol belongs to a protocol layer, which is essentially a class of functionality. The internet protocol suite, for instance, defines the application, transport, internet, and network interface functions. The diagram below expresses protocol layering:

Protocol Layering Diagram

In networking, computations involve algorithms and data, while communication involves protocols and messages. A message flow diagram for two communicating systems therefore shows vertical flows within each system and horizontal message flows between the systems, with the horizontal flows governed by the rules and data formats specified by protocols. The vertical protocols are not layered, because they do not obey the protocol layering principle, which states that the layer at the destination must receive exactly the same object sent by the corresponding layer at the source. The horizontal protocols, being members of a protocol suite, are layered and obey this principle. Protocol layering allows the protocol designer to focus on one layer at a time, without having to worry about how the other layers perform.

The vertical protocols need not be identical in the two systems. There is, however, a need to satisfy some minimal assumptions so that the protocol layering principle holds for the layered protocols. How? Mostly by encapsulation. A message is divided into pieces, which may be called messages, packets, streams, frames, or datagrams, depending on the layer they belong to. The header area of each piece contains information identifying its source, as well as its final destination on the network.

The rule for transmission is that each piece is encapsulated in the data area of the protocol immediately below it. The data is encapsulated in this way on the source side, and the opposite takes place on the destination side. The rule of encapsulation thus ensures that the layering principle holds on every transmission line except the lowest layer. To ensure that both sides interpret a transmission with the same set of protocols, the messages carry information identifying the protocol in their headers.
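
The sketch below illustrates encapsulation down a toy three-layer stack: each layer wraps the piece handed down from above in its own header, and the destination strips the headers in reverse order, so each layer receives exactly the object its peer sent. The text "headers" are placeholders; real headers are binary fields, not tags.

```python
def encapsulate(message: bytes) -> bytes:
    segment = b"TCP-HDR|" + message        # transport layer wraps the message
    datagram = b"IP-HDR|" + segment        # internet layer wraps the segment
    frame = b"ETH-HDR|" + datagram         # link layer wraps the datagram
    return frame

def decapsulate(frame: bytes) -> bytes:
    # The destination strips headers in the opposite order, so each layer
    # receives exactly the same object sent by its peer at the source.
    datagram = frame.removeprefix(b"ETH-HDR|")
    segment = datagram.removeprefix(b"IP-HDR|")
    return segment.removeprefix(b"TCP-HDR|")

frame = encapsulate(b"hello")
print(frame)               # b'ETH-HDR|IP-HDR|TCP-HDR|hello'
print(decapsulate(frame))  # b'hello'
```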

The design of the network architecture and that of protocol layering are interrelated; one cannot function without the other. The features that define the relationship between network services and the internet architecture are described below.

The internet offers universal interconnection: all physically interconnected networks appear to the user as a single large network, the internet or internetwork.

An internet address consists of two main components: the net-id and the host-id. The net-id identifies the network, and the host-id identifies the host. The internet address therefore identifies a connection to the network rather than an individual computer. The net-id is what routers use when deciding where a packet should be sent.
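
Using Python's standard ipaddress module, the split is easy to see for an illustrative address on a /24 network:

```python
import ipaddress

# Split 192.168.1.10/24 into its net-id and host-id parts (example address).
iface = ipaddress.ip_interface("192.168.1.10/24")
print(iface.network)                                # 192.168.1.0/24 -> net-id
print(int(iface.ip) & int(iface.network.hostmask))  # 10 -> host-id
```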

Independence from the underlying network technology is achieved using ARP, a low-level Address Resolution Protocol. ARP maps internet addresses to physical addresses in a process called address resolution, and the physical addresses are then used by the protocols of the network interface layer. The TCP/IP protocols can therefore make use of almost any underlying network technology.

Physical networks are connected through routers that forward packets between them, making it possible for a host on one physical network to reach a host on another. A message flows between two communicating systems as datagrams passed from router to router until it reaches a router attached to the same physical network as the intended recipient.

To decide whether a datagram can be delivered directly or must be sent to a nearby router, an IP routing table comes into the picture. An IP routing table typically consists of pairs of network-ids and the paths that can be taken to reach each destination. A path can indicate direct delivery, or it can name the address of another router through which the destination can be reached. A special entry can specify a default path to be used when no other known path applies.
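
Here is a toy routing table in that spirit: network-id entries mapped to paths, with a default entry used when nothing else matches. All the addresses are invented for illustration.

```python
import ipaddress

routing_table = {
    ipaddress.ip_network("192.168.1.0/24"): "deliver directly",
    ipaddress.ip_network("10.0.0.0/8"):     "via router 192.168.1.1",
}
DEFAULT_PATH = "via router 192.168.1.254"  # used when no other path is known

def route(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    for network, path in routing_table.items():
        if addr in network:
            return path
    return DEFAULT_PATH

print(route("192.168.1.42"))  # deliver directly
print(route("8.8.8.8"))       # via router 192.168.1.254
```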

All networks are treated equally: a point-to-point link, a LAN, and a WAN are each counted as one network, with no special privilege allotted to any of them.

The internet offers a packet-switched service, preferred because it adapts well to different hardware, including Ethernet. Connectionless delivery means that messages or streams are divided into pieces that are multiplexed separately onto the high-speed connections between machines, allowing those connections to be used concurrently.

Every piece of information therefore identifies its destination. Packet delivery is sometimes unreliable: aside from losses and delays, packets can be duplicated or delivered out of order, usually as a result of failures in the underlying networks. This unreliable, connectionless delivery system is defined by IP.

IP is also responsible for the routing function, as it chooses the path over which a set of data will be sent. TCP/IP protocols can also be used on connection-oriented systems, which build exclusive-use virtual circuits between senders and receivers. Once these virtual circuits are set up, IP datagrams are sent over them as if they were ordinary data and are forwarded to IP protocol modules, a technique called tunneling. Tunneling is used on ATM networks and X.25 networks.

TCP defines a reliable stream transport service on top of the connectionless packet delivery system. The services and application programs layered above it are called application services, and they make use of TCP. A program that wishes to interact with the packet delivery system directly does so using the User Datagram Protocol (UDP).
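
As a small sketch, an application can hand a single datagram to the packet delivery system through a UDP socket; the address and port below are placeholders.

```python
import socket

# One datagram, no connection setup: the essence of direct UDP access.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello via UDP", ("127.0.0.1", 9999))
sock.close()
```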

Software Layering

After protocols and protocol layering are established, software design can follow. The software is also organized in layers, in a way that corresponds to the protocol layering. To send a message, the top module of a system interacts with the module directly below it and hands over the message to be encapsulated. The lower module encapsulates the message in its data area and fills its header with information regarding the protocol it implements.

That module then interacts with the module below it, handing the newly wrapped message onward in turn. The module at the bottom interacts directly with the bottom module of the next system, so the message is sent across to the other system. On the receiving system, the reverse happens, so the message sent from one system is ultimately delivered in its original form to the top module of the receiving system.

Sometimes protocol errors occur. When this happens, the receiver discards the received piece and sends a message back to the original source reporting the error condition, either within the system or across the network if the error happens at the bottom layer. Where a message is divided, it is reassembled at the layer that introduced the division.

The translation of programs is divided into subproblems, each handled by its own tool:

  • Compiler
  • Assembler
  • Link editor
  • Loader

Translation software is likewise layered, which allows the software layers to be designed independently. The same analogy between programming languages and protocols holds here, and the designers of the TCP/IP protocols were keen on this fact, using layering to ease the complexity in the same way it eases the complexity of translating programs.

Take the example of translating a Pascal program: the compiler turns it into an assembly program, the assembler turns the assembly program into object code, and the link editor links the object code with library object code. The product is executable code that the loader places into physical memory. The modules below the application layer are generally considered part of the operating system, and passing data between these modules is much less expensive than passing data between an application program and the transport layer. The boundary between the application layer and the transport layer is the operating system boundary.

Strict layering

Strict layering means adhering strictly to a layered model. This practice, however, is not always the best approach to networking, because it can have a serious impact on performance, so there has to be a trade-off between performance and simplicity within the network.

Protocol layering is universal in computer networking today. That does not mean it is free from criticism, however: researchers have pointed out that abstracting the protocol stack can cause a higher layer to duplicate the functionality of a lower layer.

Development of Protocols

The selection of network protocols precedes communication. The rules that govern the selection can be expressed by algorithms and data structures, and expressing the algorithms in a portable programming language provides independence from the operating system and hardware. Protocol specifications vary widely, and even source code can be considered a protocol specification, but only independence of the specification from a particular source provides wider interoperability.

Network protocol standards are created by obtaining the support of a standards organization, which initiates the standardization process; this process is part of what is commonly referred to as protocol development. The members of the standards organization voluntarily agree to adhere to the resulting work. The members are often in control of large market shares relevant to the protocol, and in many cases the standards are enforced by the government or by law, because they are deemed to serve an important public interest. In other cases, a protocol standard may not be sufficient for widespread acceptance, and the source code may need to be disclosed, and even enforced by law, as well.

To understand the need for protocol standards fully, consider what happened to IBM's bi-sync protocol (BSC). BSC is a link-level protocol used to connect two separate nodes. It was not originally intended for use in a multinode network, but such use revealed deficiencies in the protocol. Since there was no standardization, manufacturers and organizations felt free to create versions of the protocol that were incompatible with one another's networks, with many motives, including discouraging customers from using equipment designed by other manufacturers. Today, there are more than 50 variants of the original BSC protocol. A standard would have prevented these eventualities.

Conversely, some network protocols can indeed gain market dominance without standardization. Such protocols are referred to as de facto standards, and they are most common in developing niches and emerging markets, as well as in monopolized markets. De facto standards can hold a market in a generally negative grip, especially when the intention is to scare away competition.

Historically, standardization can be viewed as a measure to counteract the ills of de facto standards. There are, however, positive exceptions: operating systems such as GNU/Linux, for instance, hold no negative grasp on the market. The sources for this operating system are published and maintained in the open, thereby inviting competition. Standardization, then, is not the only solution for open systems interconnection.

The Process of Standardization

The standardization process is not really complex, but it involves a series of steps. It starts with ISO commissioning a sub-committee workgroup. The workgroup issues working drafts and discussion documents about the protocol to interested parties, including other standards bodies. The discussion inevitably raises debate, many questions, and even disagreement about what the provisions of the standard should be and which needs it should satisfy. All these conflicting views are taken into consideration, and a balance is sought. After a compromise is reached, the working group produces a draft proposal.

The draft proposal is then discussed by the standards bodies of the member countries, with further discussion within each country. Comments and suggestions are collated, and eventually national views are formulated before the proposal is taken to the members of ISO, who vote on it. If a proposal is rejected, the draft has to address every counter-proposal and objection, and the revised proposal is taken for another vote. Before the end of this process, there is usually a great deal of feedback, compromise, and modification. The final draft reaches a status called draft international standard and, once ratified, becomes an international standard.

The journey from draft proposal to international standard status can often take years to complete. The original draft created by the designer will differ significantly from the version that becomes the standard, which will have some of the features outlined below:

  • Various modes of operation, for instance allowing different packet sizes to be set up at startup time. This is usually the result of the parties being unable to reach a consensus on the optimum packet size.
  • Undefined parameters, or parameters allowed to take values set at the discretion of the implementer. Like the multiple modes of operation above, this usually reflects how much the views of the members conflicted.
  • Parameters reserved for future use, reflecting a consensus that a facility had to be provided, but without agreement, in the time available, on how it should be provided.
  • Ambiguities and inconsistencies, which will be found as the standard continues to be implemented.

OSI Standardization

Before the internet, there was the ARPANET, and ARPANET protocols were standardized. Sometimes, however, standardization is not enough. The reason here is different from the de facto standards case: what made standardization insufficient was that the protocols also needed a framework to operate in. There was, hence, a need to develop a general-purpose, future-proof framework suitable for structured network protocols.

Such a framework is important because it not only allows clear definitions of protocol responsibilities at the different levels but also prevents overlapping functionality. These needs resulted in the Open Systems Interconnection (OSI) reference model, a vital framework for designing standard services and protocols that conform to the specifications of each layer.

In the OSI model, the communicating systems are presumed to be connected by an underlying medium that provides a basic transmission mechanism. The layers above the medium are numbered one through seven, and each layer provides service to the layer above it using the services of the layer below. The interfaces through which adjacent layers communicate are called service access points, and the corresponding layers in the communicating systems are known as peer entities.

To communicate, the peer entities at a given layer use a protocol that is implemented using the services of the layer below. If the systems are not directly connected, intermediate peer entities, called relays, are used. Addresses identify the service access points, and the address naming domains are not necessarily restricted to one layer, which makes it possible to use the same naming domain for all layers. Each layer has two kinds of standards: service standards, which define how the layer communicates with the layer above it, and protocol standards, which define how peer entities at the same layer communicate.

Below is an explanation of the layers and functionalities of the original RM/OSI model, given in order from the lowest layer to the highest.

The physical layer

This layer describes the physical connections: the electrical characteristics and the transmission techniques used, as well as the setup, clearing, and maintenance of physical connections.

The data link layer

The data link layer is responsible for setting up, maintaining, and releasing data-link connections. Errors that occur in the physical layer can be detected and, where possible, corrected here, and errors are reported to the network layer above. This layer also defines the exchange of data-link units.

The network layer

The network layer is responsible for setting up, maintaining, and releasing network paths between transport-layer peer entities. It provides relay and routing functions as needed, and the quality of service is negotiated with the transport layer when the connection is set up. The layer is also responsible for controlling network congestion.

The transport layer

The transport layer provides transparent yet reliable transfer of data in a cost-effective way, as described by the quality of service selected. It supports the multiplexing of several transport connections onto one network connection, and it may also support splitting one transport connection across several network connections.

The session layer

The session layer provides a variety of services to the presentation layer: it establishes and releases session connections; it offers quarantine services, allowing a sending presentation entity to instruct the receiving session entity not to release data to its presentation entity without permission; it supports normal and expedited exchange of data; it performs interaction management so that presentation entities can determine whose turn it is to perform certain control functions; it resynchronizes a session connection; and it reports unrecoverable exceptions to the presentation entity.

Presentation layer

The presentation layer provides services to the application layer, including requesting the establishment of a session and the transfer of data. It allows negotiation of the syntax to be used between the application layers, and it performs special-purpose transformations, for instance data encryption and data compression.

Application layer

The application layer provides services to application processes: it identifies the intended communication partners, establishes the necessary authority to communicate, determines the availability and authentication of the partners, agrees on the privacy mechanisms necessary for the communication, agrees on responsibility for error recovery, ensures data integrity, and allows synchronization between cooperating application processes. It also identifies any constraints on syntax, including data and character set constraints. Lastly, it provides services dealing with cost determination and acceptable quality of service, and it selects the dialogue discipline, such as the logon and logoff procedures to be followed.

The table below represents the seven layers.

OSI Layers Diagram

The RM/OSI layering scheme differs from the TCP/IP layering scheme in that it does not assume a connectionless network: RM/OSI assumed a connection-oriented network, the kind better suited to wide area networks. The use of connections for communication implies that virtual circuits and sessions are used, hence the session layer in RM/OSI and the lack of one in the TCP/IP layering model. The constituent members of ISO were mainly concerned with wide area networks, so the development of RM/OSI reflects this and concentrates on connection-oriented networks; connectionless networks were only mentioned in an addendum to RM/OSI. Today, however, the RM/OSI model includes connectionless services, and this development allowed TCP and IP to develop into international standards.

Cable Infrastructure

Structured cabling is an open network cabling structure usable by data, access control, telephony, and building automation systems, among others. It is a source of flexibility and economical operation.

Concepts of data rate, bandwidth, and throughput

All network connections have what is known as a data rate: the rate at which bits are transmitted. In some networks, for instance LANs, the data rate a host sees varies with time. Throughput is a related concept referring to the effective rate of transmission once factors such as protocol inefficiencies, transmission overheads, and competing traffic are taken into account. Throughput is usually measured at a higher network layer than the data rate.

Bandwidth can refer to either throughput or data rate but is mostly used in relation to the data rate. The term is also commonly used in relation to radios, where bandwidth refers to the width of the available frequency band, which is proportional or equal to the achievable data rate. When referring to TCP, an alternative term, 'goodput,' is used for the throughput of the application layer; when goodput is calculated, retransmitted data is counted only once. Data rates are measured in kilobits or megabits per second (bps). When calculating data rates, remember that a kilobit is 10^3 bits while a megabit is 10^6 bits.
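
A quick worked example of those units: the time to transmit one maximum-size Ethernet packet at 100 Mbps.

```python
# Time to transmit a 1500-byte packet at 100 Mbps.
packet_bits = 1500 * 8         # 1500 bytes -> 12,000 bits
rate_bps = 100 * 10**6         # 100 megabits per second = 100 * 10^6 bits/s
print(packet_bits / rate_bps)  # 0.00012 s, i.e. 120 microseconds per packet
```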

Concept of packets

The concept of packets is the brainchild of Paul Baran, who in 1962 considered how networks might survive node failure; such failures mattered because networks then were centrally switched. Donald Davies developed the same concept independently in 1964 and gave it the names it still uses today: packets and packet switching. Simply put, packets are modest-sized data buffers that get transmitted through shared links as a unit.

Packets usually come prefixed with a header containing delivery information, just as every envelope carries a name and address to ensure delivery to the right place. In datagram forwarding, for instance, headers contain a destination address, while in virtual-circuit networks headers carry an identifier for the virtual circuit. Most networking today is based on the use of packets. Packets are called frames at the LAN layer and segments at the transport layer.

LANs have an intrinsic maximum packet size that they can support; for Ethernet this usually comes to around 1500 bytes of data, while the original TCP default was 512 bytes. Larger blocks of data are divided down to these sizes, and every layer then adds its own header: typically 14 bytes for the Ethernet header, 20 bytes for the IP header, and 20 bytes for the TCP header. In datagram-forwarding networks, the header contains the delivery information, including the destination address, and the internal network nodes, called switches or routers, ensure that the packet is delivered to that address.
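
The header arithmetic works out as follows, using the typical sizes quoted above:

```python
# Header overhead on one 1500-byte Ethernet payload: after a 20-byte IP
# header and a 20-byte TCP header, 1460 bytes remain for application data.
MTU = 1500
IP_HEADER = 20
TCP_HEADER = 20
print(MTU - IP_HEADER - TCP_HEADER)  # 1460 bytes of application data
```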

Concept of datagram forwarding

To be delivered, a packet carries a destination address in its header, and the switches and routers along the way examine that address to move the packet toward its destination. A packet can only be delivered to the right destination if each router is provided with a forwarding table of pairs, typically ⟨destination, next_hop⟩ pairs. When a packet arrives, the switch or router looks up the packet's destination address in the forwarding table.

The lookup yields the next_hop information: the immediate next address to which the packet should be forwarded so that it is one more step closer to its destination. Every router or switch is responsible for only one step in the path that delivers the packet. When all is well within the layer, the packet is delivered to its destination one hop at a time. The destination entries live in the forwarding table.

These entries, however, do not usually correspond to complete destination addresses, except in the forwarding of Ethernet datagrams. In IP routing, the destination entries in the table correspond to prefixes of IP addresses, a strategy that saves table space. The requirement is simply that a switch can use the destination address in an arriving packet, together with its forwarding table, to perform a lookup that determines the next hop.
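
The sketch below shows prefix-based lookup with a longest-prefix-match rule, which is how IP forwarding chooses among overlapping entries; the table entries are invented.

```python
import ipaddress

# Forwarding table keyed by address prefixes; the most specific match wins.
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "next_hop A",
    ipaddress.ip_network("10.1.0.0/16"): "next_hop B",
    ipaddress.ip_network("10.1.2.0/24"): "next_hop C",
}

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [net for net in forwarding_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return forwarding_table[best]

print(next_hop("10.1.2.3"))  # next_hop C (the /24 is most specific)
print(next_hop("10.9.9.9"))  # next_hop A (only the /8 matches)
```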

LANs and Ethernet

Below is a simple diagram that captures the components of a local area network.

Local Area Network Diagram

LAN stands for local area network. A LAN consists of physical links (such as serial lines), common interfacing software connecting the hosts to the links, and network protocols that tie everything together.

Ethernet is a physical and data link layer technology. As a physical-layer technology, Ethernet concerns hardware elements such as cables, network interface cards, and repeaters, and it is the most widely used protocol at the physical layer. The Ethernet specifications define, for instance, the types of cables that can be used, the topology, and the maximum cable lengths.

The data link layer addresses how data packets are sent from node to node, and Ethernet makes use of CSMA/CD access. CSMA/CD stands for carrier sense multiple access with collision detection: a system in which a computer must listen to the cable before sending information through the network.

If the way is clear, the computer proceeds with the transmission. If another node is already transmitting, the computer waits and tries to transmit again once the line is clear. Sometimes, two computers attempt to transmit at the same instant, which causes what is known as a collision.

In such a case, both computers back off and wait random amounts of time before trying to retransmit. Collisions are common with this type of access method, but the resulting delay is small and does not noticeably affect transmission speeds on the network.

Ethernet was originally standardized in 1983 with a speed of 10 Mbps. While that may not look like much now, it was good speed in the early days. Ethernet originally used coaxial cable. The Ethernet protocol allows bus, star, and tree topologies, depending on the type of cables used. The original Ethernet cabling was heavy and expensive to purchase and install, maintenance was an issue, and there was no easy way to retrofit the coaxial cable into existing facilities. Modern Ethernet cabling uses twisted-pair wire and can transmit at speeds of 10, 100, and 1000 megabits per second.

The Fast Ethernet protocol transmits at speeds of up to 100 Mbps and requires not only different but also more expensive network hubs and interface cards, running over Category 5 twisted pair or optical fiber. There is also Gigabit Ethernet, with transmission speeds of 1 Gbps (the same as 1000 Mbps), used with both copper and fiber-optic cabling. A summary of Ethernet cabling and speeds is shown below:

Ethernet Protocol Cable Speed Diagram