Answer» Could you please tell me what the factors that determine network latency are? And what exactly is bandwidth measured in? Some say Hertz, others say (giga/mega/kilo)bit/s. Also, the TCP/IP Guidebook uses latency × bandwidth to calculate the capacity of, say, an ADSL line. How can that be, if bandwidth is measured in Hertz?
By the way, I also want to ask about the theoretical data rate versus the real one (throughput). For example, the FPT Mega Style ADSL pack claims an ideal download speed of 1.5 Mbps, which means the real data rate should never exceed 1.5 Mbps. But in reality the number even reaches more than 200 KB/s (1.6 Mbps), admittedly with the assistance of a download accelerator.

Network latency is influenced by: distance; the electrical resistance (or rather, another physical property of the transmission medium whose exact name escapes me) when we talk about transmission through metal cables (so we leave aside optic fiber and wireless transmission here); the number of hops between source and destination; and the data transformations that occur in different equipment (routers, switches, repeaters and so on). In short, everything that can cause delay counts (leaving aside signal alteration and similar effects).
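Since the question invokes latency × bandwidth, here is a minimal worked sketch of that formula, known as the bandwidth-delay product. The figures are illustrative assumptions, not real ADSL parameters:

```python
# Bandwidth-delay product: how many bits can be "in flight" on a link at once.
# The figures below are illustrative assumptions, not real ADSL parameters.

bandwidth_bps = 1_500_000   # link bandwidth: 1.5 Mbit/s
latency_s = 0.030           # one-way latency: 30 ms

bdp_bits = bandwidth_bps * latency_s   # capacity of the "pipe"
print(f"bandwidth-delay product: {bdp_bits:.0f} bits "
      f"({bdp_bits / 8 / 1024:.1f} KiB)")
# -> bandwidth-delay product: 45000 bits (5.5 KiB)
```

Note that bandwidth enters this formula in bits per second, not Hertz, which is exactly why the mixed terminology can be confusing.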
Bandwidth is defined relative to the transmission medium, but is usually expressed as a quantity of data per unit of time. If we are talking about wireless transmission, then network bandwidth can mean the range of frequencies used for data transmission. But in wired transmission, bits are also periodic signals: they are sent at a specific frequency. So there is not such a big difference between Hertz (the unit of frequency) and bits per second (the unit of data rate): Hertz is a rough estimate of how many alternations of a signal occur per unit of time, while bits per unit of time measures the quantity of data. You may need a few signal cycles to send one bit, or a single cycle may suffice, and you clearly need some "blank" signals to mark the start and end of a bit, and so on. Network data is sent according to a certain frequency, but a fair chunk of the transmitted data consists of control bits and other control elements necessary for proper communication. It is more natural to speak of network bandwidth as the quantity of data processed per unit of time; you talk about frequency only in special cases, such as when you develop or study a communication protocol or a piece of communication equipment, or when you manage or modify such things. It is not common language to express network bandwidth as a frequency.
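To make the Hertz-versus-bits relation concrete, here is a small sketch. The modulation parameters and the 10% overhead figure are assumptions chosen for illustration, not values from any real line:

```python
# Sketch: relation between signal rate (Hz / symbols per second), raw bit
# rate, and usable data rate once control overhead is subtracted.
# All figures are illustrative assumptions, not a real line's parameters.

symbol_rate_hz = 1_000_000   # 1 MHz: one million signal changes per second
bits_per_symbol = 4          # e.g. a modulation scheme encoding 4 bits/symbol

raw_bit_rate = symbol_rate_hz * bits_per_symbol   # bits actually on the wire

# Assume ~10% of transmitted bits are framing/control (headers, checksums...)
control_overhead = 0.10
goodput = raw_bit_rate * (1 - control_overhead)

print(f"raw bit rate: {raw_bit_rate / 1e6:.1f} Mbit/s")    # -> 4.0 Mbit/s
print(f"usable data rate: {goodput / 1e6:.1f} Mbit/s")     # -> 3.6 Mbit/s
```

The same physical frequency can thus yield different data rates depending on the modulation and the protocol overhead, which is why the two units measure related but distinct things.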
On your last point: some ISPs allow a client to exceed a certain limit temporarily when the client has such needs; it depends on their overall bandwidth usage. Another consideration: are you sure there is no rounding involved? Maybe your reporting program is not as exact as it claims. And one last comment: between you and the servers you transfer from sit the ISP's equipment. Sometimes that equipment buffers transferred data, so even when there are "glitches", your transfer seems to run at a higher speed: you do not see the transfer between the ISP's equipment and the remote server, only the transfer (which you requested from the remote server) between your equipment and the ISP's equipment.
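On the rounding question, the unit arithmetic is worth checking explicitly, since download tools usually report kilobytes per second while ISPs advertise megabits per second, and "KB" sometimes means 1024 bytes rather than 1000. A small sketch of both conversions:

```python
# Unit arithmetic behind "200 KB/s on a 1.5 Mbit/s line".

reported_kb_per_s = 200

mbps_decimal = reported_kb_per_s * 1000 * 8 / 1_000_000   # KB = 1000 bytes
mbps_binary  = reported_kb_per_s * 1024 * 8 / 1_000_000   # KB = 1024 bytes

print(f"200 KB/s = {mbps_decimal:.2f} Mbit/s (decimal kilobytes)")   # 1.60
print(f"200 KB/s = {mbps_binary:.3f} Mbit/s (binary kilobytes)")     # 1.638
```

Either way the figure lands above 1.5 Mbit/s, so unit conventions alone may not explain the excess; the ISP buffering effect described above is the more likely culprit.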
If I explained something in a fuzzy way, please tell me and I will try to refine it.