
QoS Meaning in Networking: What is QoS? (Tutorial)

Quality of Service, or QoS, is a complex subject. But its use is so common these days that every network administrator should know about it. QoS became popular as more and more networks started to carry data that needs to be prioritized while, at the same time, recreational network usage was becoming more and more commonplace.

Our intention is not to make you QoS experts but, instead, to shed some light on the subject in as non-technical a way as possible.

To put it simply, our goal with this is to answer the following question: What is the QoS meaning in networking, and what is it good for?

QoS Meaning in Networking: What is QoS?

This is not a course on QoS theory and implementation. We won’t show you a single switch or router command. Our goal is to allow you to simply grasp the essence of QoS.

We’ll begin by clarifying what QoS is–and isn’t. After that, we’ll pause briefly to discuss a few tools from SolarWinds that you might want to try. Then, we’ll discuss the different factors that can affect network performance. This will take us to the core of our matter: how QoS works. As you’ll see, it is much simpler than it appears. And before we conclude, we’ll discuss what happens when you don’t use QoS and what QoS can’t help you with.

What is QoS?

As network usage grew to include more and more traffic of different types, and as network congestion became more frequent and more severe, engineers soon realized that they needed a way to organize and prioritize traffic. QoS is not one thing but a combination of features and technologies that work together to accomplish that.

Through a lot of trial and error, we now have a relatively universal QoS system that can be used to reliably ensure that important traffic gets the attention it needs.

An important aspect of QoS is that it has to be implemented from end to end to be of any use. QoS is set up on the devices–such as switches and routers–that handle traffic. Any such device in the data path must have the correct QoS configuration or else it won't have the expected effect.

Also, each device must have a QoS configuration that is compatible with the others'. QoS uses priority markings to accomplish its magic. You can easily imagine what would happen if one device considered a higher priority value as more important while another did the opposite.

QoS Meaning in Networking

We often compare network traffic to vehicular traffic, where highways represent network links and vehicles represent data packets. It is a rather good analogy, as there are many similarities between the two types of traffic–probably more than there are differences. We'll use that same analogy to try to concretely explain what QoS is.

So, let’s imagine a busy highway. It’s Friday afternoon at rush hour and there are lots of cars and trucks. Traffic is already moving quite slowly but, to make things worse, we’re approaching an intersection and, on the other side of that intersection, there’s some road work going on, doing nothing but adding to the problem. Most of you have more than likely been in such a situation.

Highway Congestion

To try to help traffic move a little better, there’s a traffic cop at the upcoming intersection. He does his best to try to give each and every motorist his fair share of the road. But even with his assistance, things are not moving much and, like it or not, you’re stuck in traffic.

Then, in the distance, you hear an ambulance's siren coming from behind you. This is when the traffic cop at the intersection shifts into high gear.

Recognizing that the ambulance really needs to go through, he makes sure to let the traffic in front of the ambulance go through and to stop opposing traffic, ensuring that it can continue its route with as little delay as possible. Meanwhile, other motorists have to wait for their turn before they can resume their route once the priority vehicle has passed.

SolarWinds QoS: The best tools!

Before we go any further, I’d like to discuss a few tools from SolarWinds. Although they are not directly related to QoS, both are very useful in identifying where there are bottlenecks in your networks and what is causing them.

They will help you assess the current situation, which is the first step in correcting problems in general and implementing QoS.

1. Network Performance Monitor

SolarWinds' flagship product, the Network Performance Monitor is possibly one of the best SNMP bandwidth monitoring tools. This is a tool that uses the Simple Network Management Protocol to graph the evolution of network circuits' bandwidth utilization over time. The software's dashboard, its views, and its charts are fully customizable. The tool can be set up with minimal effort and can start monitoring almost immediately after installation. NPM can scale from the smallest networks to huge ones with hundreds of devices spread over multiple sites.

SolarWinds QoS: NPM Network Summary

The SolarWinds Network Performance Monitor uses SNMP to poll devices at regular intervals–typically five minutes–and read their interface counters.

It then computes the bandwidth utilization, stores it in a database for future reference, and displays graphs showing how bandwidth usage evolves over time. NPM is a huge tool with several extra features. For instance, it can build network maps and display the critical path between two devices.
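To make the polling mechanism concrete, here is a minimal sketch–in Python, and not taken from any SolarWinds product–of how bandwidth utilization can be derived from two successive readings of an interface's octet counter. The counter values and link speed below are made-up sample data.

```python
# Derive link utilization from two SNMP interface counter polls.
# 32-bit ifInOctets/ifOutOctets counters wrap around at 2**32.

COUNTER_MAX = 2**32

def utilization_percent(prev_octets, curr_octets, interval_s, link_bps):
    """Percent utilization of a link between two counter polls."""
    delta = (curr_octets - prev_octets) % COUNTER_MAX  # handles counter wrap
    bits = delta * 8
    return 100.0 * bits / (interval_s * link_bps)

# Two polls five minutes (300 s) apart on a 100 Mbit/s link:
print(utilization_percent(1_000_000, 151_000_000, 300, 100_000_000))  # 4.0
```

The modulo on the delta is what keeps the computation correct when the 32-bit counter rolls over between two polls, which happens quickly on fast links.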

Pricing for the Network Performance Monitor starts at around $3,000. A 30-day FREE trial is available should you prefer to try the product before buying it.

2. NetFlow Traffic Analyzer (FREE Trial)

The SolarWinds NetFlow Traffic Analyzer gives the administrator a more detailed view of network traffic. It doesn’t just show bandwidth usage in bits per second.

SolarWinds NTA Dashboard Summary

The tool provides detailed information on the observed traffic. It will tell you what type of traffic is most prevalent or which users are consuming the most bandwidth. It will also provide invaluable information on the different types of traffic–such as web browsing, business apps, telephony or streaming video–that are carried on your network.

The NetFlow Traffic Analyzer uses the NetFlow protocol to gather detailed usage information from your network devices. The NetFlow protocol is built into many networking devices from various vendors. When configured, networking devices send detailed information about each network “conversation”, or flow, to a NetFlow collector and analyzer. The SolarWinds NetFlow Traffic Analyzer is one such collector and analyzer.
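To illustrate what a flow-level view adds over raw bit counts, here is a small sketch of a simplified flow record–loosely modeled on the fields a NetFlow export carries–and a "top talkers" aggregation like the one a collector performs. All the records below are made-up data, not output from any real device.

```python
# A simplified flow record and a top-talkers aggregation over it.
from dataclasses import dataclass
from collections import Counter

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int   # 6 = TCP, 17 = UDP
    octets: int     # bytes carried by the flow

flows = [
    FlowRecord("10.0.0.5", "93.184.216.34", 51522, 443, 6, 1_500_000),
    FlowRecord("10.0.0.7", "10.0.0.9", 16384, 16385, 17, 40_000),
    FlowRecord("10.0.0.5", "93.184.216.34", 51523, 443, 6, 2_500_000),
]

# Sum bytes per source address to find the heaviest talker.
top_talkers = Counter()
for f in flows:
    top_talkers[f.src_ip] += f.octets

print(top_talkers.most_common(1))  # [('10.0.0.5', 4000000)]
```

Because each record identifies the conversation's endpoints, ports, and protocol, the same data can just as easily be aggregated by application or destination instead of by source.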

If you want to try the product before committing to purchasing it, a free 30-day trial version can be downloaded from SolarWinds. This is a fully-featured version that has no limitation but time.

Factors Affecting Network Performance

In a typical network, data delivery can be affected by several factors. We’ve compiled a list of the primary factors that can affect network performance.

Low Throughput

This has to do with a network link's capacity. Some links can handle more traffic than others. Capacity is usually measured in bits–or, more often, kilobits or megabits–per second. If you exceed the link's capacity, congestion will occur and performance will be degraded.

Dropped Packets

Packets can be dropped by networking devices for several reasons. Perhaps they got corrupted in transit and can't be recognized anymore. But more commonly, packets are dropped when they arrive at a device whose buffers are already full. The receiving application will usually realize that some data is missing and request its retransmission, which will cause additional delays and performance degradation.


Errors

Noise and interference can corrupt data. This is especially true in wireless communications and over long copper wires. When errors are detected, the receiving application will ask for the missing data to be retransmitted, again degrading performance.


Latency

Latency has to do with network devices queuing data before sending it out. It can also happen when longer routes are used to avoid congestion. It should not be confused with throughput: with latency, delays can build up over time even if the throughput is sufficient.


Jitter

Jitter is defined as a variation in the delay it takes for each data packet to reach its destination. It happens for various reasons. For instance, two packets may take different routes. The consequence is that, when jitter gets too high, packets can arrive out of sequence at their destination. If the packets are part of a Word document, they will be correctly reordered and no one will be affected, but if we're talking about voice or streaming video data, it can cause all sorts of issues.
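Jitter can be quantified. The sketch below uses the smoothed interarrival-jitter estimator described in RFC 3550 (the RTP specification); the delay samples, in milliseconds, are made-up examples.

```python
# Smoothed jitter estimate over a sequence of per-packet delays (RFC 3550).

def interarrival_jitter(delays_ms):
    """Estimate jitter from successive one-way delay samples."""
    jitter = 0.0
    for prev, curr in zip(delays_ms, delays_ms[1:]):
        d = abs(curr - prev)
        jitter += (d - jitter) / 16.0  # 1/16 smoothing factor from RFC 3550
    return jitter

steady = [20, 20, 20, 20, 20]      # constant delay: no jitter
bursty = [20, 35, 18, 50, 22, 41]  # varying delay: noticeable jitter

print(interarrival_jitter(steady))  # 0.0
print(interarrival_jitter(bursty))
```

A constant delay, however large, produces zero jitter; it is only the variation between consecutive packets that registers, which is exactly why voice and video are so sensitive to it.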

As we've just seen, some types of traffic–such as voice or streaming video–will be more affected by performance issues. This is why different traffic needs different handling and why QoS exists.

How QoS works

Before we begin, I’d like to state a few things. First, I’m not a networking engineer. Second, the goal of this explanation is not to be absolutely accurate. I’m knowingly oversimplifying things and even perhaps twisting the reality to a certain extent to make this section easier to digest. My goal is to give you a general idea of how it works, not to train you on QoS configuration.

QoS works by identifying what traffic is more “important” and by prioritizing that traffic throughout the network. There's no “golden rule” as to what traffic is more important than another. Obviously, some traffic–such as voice or streaming video–will normally be considered important simply because it won't work properly when suffering from performance degradation. Other traffic–such as web browsing in many organizations–is considered unimportant and will therefore not be prioritized.

There are two components to QoS. First, the traffic must be classified and marked. Although there are several ways traffic can be marked, Differentiated Services is the most prevalent today. This is the one we will detail in a short while. The second component is queuing, which ensures that priority data is transmitted with as little delay as possible. Queuing is done at the network devices according to the Differentiated Services markings.

Differentiated Services, or DiffServ, uses a six-bit code in the header of each packet to mark it according to several classes of increasing priority. This marking is referred to as the Differentiated Services Code Point, or DSCP. A six-bit field allows DSCP values from 0 to 63, with 0 marking the least important traffic and values such as 46 and 48 marking the most important.
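An application can apply a DSCP marking itself. The sketch below–an illustration, not a recommendation for any particular platform–sets DSCP 46 (Expedited Forwarding) on a UDP socket by writing the IP TOS byte, in which the six DSCP bits occupy the upper positions and are therefore shifted left by two.

```python
# Mark outbound UDP traffic with DSCP 46 (Expedited Forwarding)
# by setting the IP TOS byte on the socket.
import socket

DSCP_EF = 46          # Expedited Forwarding, used for voice traffic
tos = DSCP_EF << 2    # DSCP occupies bits 7..2 of the TOS byte -> 184

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

Keep in mind that, as the article notes, such end-host markings are only honored if the switches and routers along the path are configured to trust them; many networks deliberately re-mark traffic at the access switch instead.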

Classification And Marking

For network traffic to be correctly handled according to its priority, it must first be classified and marked appropriately. Marking can be done right at the source. For instance, it is not uncommon for IP telephone sets to mark their traffic as DSCP 46, a high-priority value. For traffic that is not marked at the source, things are a tad more complicated.

Unmarked traffic doesn’t actually exist with DiffServ. By default, all traffic is marked DSCP 0, the lowest priority. It is up to the first network device handling the traffic–usually a switch–to mark it. How is it done? Mostly through ACLs.

ACLs, or Access Control Lists, are a feature of most networking equipment that can be used to identify traffic. As their name implies, they were originally used as a means of controlling access. ACLs identify traffic based on several criteria. Among them, the most common are the source and destination IP addresses and the source and destination port numbers. Throughout the years, ACLs have become more and more refined and can now be used to precisely select very specific traffic.

In the case of ACLs used to insert QoS markings, the rules not only specify how to recognize traffic but also what DSCP value to mark it with.
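The logic of such a marking ACL can be sketched as a simple rule table: each rule matches on packet attributes and names the DSCP value to apply. The rules and values below are made-up examples for illustration, not any vendor's syntax or defaults.

```python
# ACL-style classification: match on protocol and destination port,
# return the DSCP value to mark the packet with.

ACL_RULES = [
    # (protocol, dst_port, dscp)
    ("udp", 5060, 26),   # SIP signalling -> AF31
    ("udp", 16384, 46),  # RTP voice      -> EF
    ("tcp", 443, 0),     # web browsing   -> best effort
]

def classify(protocol, dst_port):
    """Return the DSCP for a packet; unmatched traffic stays at 0."""
    for proto, port, dscp in ACL_RULES:
        if protocol == proto and dst_port == port:
            return dscp
    return 0  # the DiffServ default: everything is at least DSCP 0

print(classify("udp", 16384))  # 46
print(classify("tcp", 22))     # 0
```

Note how the fall-through to 0 mirrors the point made above: with DiffServ there is no truly unmarked traffic, only traffic left at the lowest-priority default.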


Queuing

Now that traffic is marked, all that's left is to prioritize it according to its marking. This is normally accomplished by using multiple queues of increasing priority. Although DSCP values are six bits wide and can, therefore, range from 0 to 63, networking equipment rarely uses that many queues. It is typical for equipment to use from three to seven queues, with five being the most common number. With five queues and 64 possible markings, you've certainly figured out that more than one DSCP value goes into each queue.
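The mapping of many DSCP values onto a few queues, and strict-priority dequeuing, can be sketched as follows. The five queues and the even split of DSCP ranges are illustrative choices, not any particular vendor's defaults.

```python
# Map 64 DSCP values onto five queues and dequeue by strict priority.
from collections import deque

NUM_QUEUES = 5

def queue_for(dscp):
    """Map a 6-bit DSCP value (0-63) onto one of five queues."""
    return min(dscp // 13, NUM_QUEUES - 1)  # roughly 13 DSCP values per queue

queues = [deque() for _ in range(NUM_QUEUES)]

def enqueue(packet, dscp):
    queues[queue_for(dscp)].append(packet)

def dequeue():
    """Strict priority: always serve the highest non-empty queue first."""
    for q in reversed(queues):
        if q:
            return q.popleft()
    return None  # all queues empty

enqueue("web page", 0)   # best effort -> queue 0
enqueue("voice", 46)     # EF -> a high-priority queue
print(dequeue())         # 'voice' is served first despite arriving second
```

Strict priority is the simplest scheme; real devices usually temper it with weighted schedulers so the lower queues cannot be starved completely, which is exactly the risk the next paragraph describes.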

The lowest priority queue, often called the best-effort or BE queue, is the one that gets the least attention from the routing engine. Conversely, the highest priority queue, often called real-time or RT, gets the most attention. This ensures that “important” traffic will be routed or switched first. Of course, this also means that best-effort traffic might be seriously delayed and perhaps never delivered at all. This is something to keep in mind when classifying and marking best-effort traffic.

Consequences Of Not Using QoS

The consequences of not using QoS vary widely. For instance, if your network carries no highly sensitive traffic such as IP telephony or streaming video, not using QoS might make no difference. This is especially true when your current traffic levels are low. In fact, in a situation of low traffic, QoS brings almost no benefit. If we go back to our highway analogy: if the ambulance is alone on a five-lane highway, it won't need to be prioritized.

But in situations where your network suffers from any–or many–issues such as overutilization and congestion, the absence of QoS will lead to all sorts of problems. For traffic that requires real-time or near-real-time transmission–such as IP telephony–it could, for example, be the cause of garbled, chopped, or unintelligible audio. Video streaming would also be affected, resulting in excessive buffering during playback.

But even other services could suffer from the absence of QoS. Imagine that a corporate network user is trying to access an important web-based accounting system while at the same time, hundreds of users are on their lunch break, heavily browsing the Internet. This could render the accounting application unusable unless its traffic is correctly prioritized using QoS.

QoS Won’t Fix Everything

But as good as it is, implementing QoS is not the solution to every problem. Network administrators tend to think that implementing QoS will relieve them of the need to add bandwidth. While it is true that implementing QoS will cause an immediate and very apparent improvement in the operation of high-priority traffic, it will also degrade lower-priority traffic.

QoS will take care of temporary network congestion, and it will ensure that business-critical services continue to operate correctly while there is congestion, but it won't stop congestion from happening. You still need to monitor network usage and have a capacity planning program in place.


QoS should be part of any organization's network strategy, but it should not be the only item. More than anything, extreme care should be applied in planning and setting up QoS. While it can work small miracles when correctly applied, it could make the situation much worse for certain users. And before you implement QoS, monitoring tools should be put in place to assess the situation. Those same tools will prove invaluable after implementation as well.
