Network Speed, Bandwidth, and Throughput
In networking it is important to distinguish between speed, bandwidth, and throughput: although the terms are often used interchangeably in casual conversation, each has a distinct meaning.
- Bandwidth refers to the theoretical maximum capacity of a network link or interface, that is, the maximum amount of data that can be transmitted per second. For instance, in Ethernet, a Gigabit Ethernet interface has a bandwidth of 1000 Mbps (1 Gbps). Bandwidth represents the potential of the link but does not guarantee that all of this capacity will be usable at all times (a quick calculation follows this list).
- Speed refers to the effective data transfer rate over a network, typically measured as an average over time. It is influenced by various factors, including the available bandwidth, latency, packet loss, protocol overhead, and Quality of Service (QoS) configurations (see the latency sketch after this list). While "speed" is not a precise technical term, it is commonly used to describe the user experience of data transfer rates.
- Throughput refers to the amount of usable data actually delivered over a network in a given time. It reflects real-world factors such as protocol overhead, network congestion, and retransmissions caused by errors, and the figure typically counts only the payload, not the protocol headers and control-plane traffic that accompany it. Unlike bandwidth, which is a theoretical maximum, throughput indicates the real data delivery capability of the network under current conditions (a worked overhead example also follows the list).
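As a rough illustration of bandwidth as a theoretical ceiling, the short Python sketch below computes the best-case transfer time for a file over a link, ignoring every real-world factor. The 1 GB file size and 1 Gbps rate are illustrative assumptions, not values from any particular device.

```python
# Back-of-the-envelope sketch: how long a transfer would take if the full
# (theoretical) bandwidth of a link were usable. File size and link rate
# are illustrative values only.

def ideal_transfer_seconds(file_size_bytes: float, bandwidth_bps: float) -> float:
    """Time to move file_size_bytes over a link of bandwidth_bps, ignoring
    all overhead, latency, and congestion (i.e., the best possible case)."""
    return (file_size_bytes * 8) / bandwidth_bps

gigabit = 1_000_000_000          # 1 Gbps expressed in bits per second
one_gb_file = 1_000_000_000      # 1 GB file expressed in bytes

print(f"{ideal_transfer_seconds(one_gb_file, gigabit):.1f} s")  # -> 8.0 s
```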
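The next sketch illustrates one of the factors listed above, latency: a single TCP flow cannot move data faster than its window size divided by the round-trip time, which is one reason an increase in bandwidth does not automatically translate into higher speed. The 64 KB window and the RTT values are illustrative assumptions.

```python
# Sketch of why more bandwidth does not automatically mean more speed:
# a single TCP flow is bounded by (window size / round-trip time), so on a
# high-latency path the effective rate can sit far below the link bandwidth.

def tcp_rate_limit_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on a single TCP flow's rate: window / RTT, in bits/s."""
    return (window_bytes * 8) / rtt_seconds

window = 64 * 1024               # assumed 64 KB receive window
for rtt_ms in (1, 20, 100):
    limit = tcp_rate_limit_bps(window, rtt_ms / 1000)
    print(f"RTT {rtt_ms:>3} ms -> at most {limit / 1_000_000:.1f} Mbps")
# Even on a 1 Gbps link, a 100 ms RTT caps this flow at roughly 5 Mbps.
```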
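Finally, a sketch of the gap between bandwidth and throughput: on Gigabit Ethernet with a standard 1500-byte MTU, part of every frame is consumed by Ethernet, IP, and TCP headers plus per-frame overhead on the wire, so even a perfectly clean link delivers less usable data than its nominal bandwidth. The header sizes below assume no IP or TCP options.

```python
# Per full-size Ethernet frame, only the TCP payload counts as "usable" data;
# the rest is overhead (Ethernet preamble/header/FCS, inter-frame gap,
# IP and TCP headers). Standard 1500-byte MTU assumed.

LINK_BPS = 1_000_000_000   # Gigabit Ethernet bandwidth

PAYLOAD = 1460             # TCP payload in a 1500-byte IP packet
ON_WIRE = 1460 + 20 + 20 + 14 + 4 + 8 + 12
# payload + TCP + IP + Ethernet header + FCS + preamble + inter-frame gap = 1538 bytes

max_throughput = LINK_BPS * PAYLOAD / ON_WIRE
print(f"Usable TCP throughput: about {max_throughput / 1_000_000:.0f} Mbps")
# -> about 949 Mbps: the link's bandwidth is 1000 Mbps, but the usable
#    throughput is lower even before congestion or retransmissions.
```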
An increase in bandwidth does not necessarily mean an increase in speed. Also, the concepts of speed and bandwidth described above should not be confused with the speed and bandwidth parameters that can be set on an interface of a Cisco network device; the interface bandwidth value, for instance, feeds routing-protocol metrics and QoS calculations rather than changing how fast the interface actually transmits.
Links
https://networklessons.com/cisco/ccnp-tshoot/cisco-ios-show-interface-explained
https://networklessons.com/cisco/ccna-routing-switching-icnd1-100-105/introduction-to-ethernet