Network Latencies

Description:

What are the driving forces behind network latency? What is latency?

Latency is the amount of time needed for a packet to travel from place A to place B and for a response packet to return from B to A (the round-trip time).

[t=0, place=A] send packet to B ==> [t=25, place=B] send response to A ==> [t=50, place=A] response received
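
As a rough illustration of how round-trip latency can be measured in practice, the sketch below (Python) times a TCP connection handshake, which takes approximately one network round trip; the host name is only a placeholder, and a dedicated echo exchange would give a cleaner number.

  import socket
  import time

  def measure_rtt_ms(host, port=80, timeout=5.0):
      """Approximate round-trip latency in milliseconds, taken as the
      time needed to complete a TCP handshake with host:port."""
      start = time.perf_counter()
      with socket.create_connection((host, port), timeout=timeout):
          pass  # handshake finished; connection is closed on exit
      return (time.perf_counter() - start) * 1000.0

  # Example usage (example.org is just a placeholder host):
  for _ in range(3):
      print(f"RTT ~ {measure_rtt_ms('example.org'):.1f} ms")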

Network latency and bandwidth are the two major hardware inhibitors for parallel computing. This holds for clusters and grids, but also for system-on-a-chip solutions such as the Cell architecture, which is in effect a cluster/grid on a chip.
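
To see why both latency and bandwidth matter, a standard first-order cost model for moving a message of n bytes is: time = latency + n / bandwidth. For the small messages typical of tightly coupled parallel programs, the latency term dominates. A minimal sketch with purely illustrative numbers:

  def transfer_time_s(n_bytes, latency_s, bandwidth_bytes_per_s):
      """First-order model: fixed per-message latency plus serialization time."""
      return latency_s + n_bytes / bandwidth_bytes_per_s

  # Illustrative values only: 50 ms one-way latency, 1 Gbit/s link.
  latency = 0.050        # seconds
  bandwidth = 125e6      # bytes per second (= 1 Gbit/s)

  for size in (1e3, 1e6, 1e9):  # 1 KB, 1 MB, 1 GB
      ms = transfer_time_s(size, latency, bandwidth) * 1000
      print(f"{int(size):>10} bytes -> {ms:10.2f} ms")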

Forces Stimulating the Development of Low Latency Infrastructure:

  • Demand for latency-sensitive applications:
    • Online computer games
    • High-performance computing (universities, pharmaceutical companies, other R&D departments)
    • Video conferencing
    • Voice over IP
    • Lots of new market opportunities (high-performance computing for everybody):
      • Complex stock market analysis on demand
      • Remote control of equipment
      • Other complex on-demand computations for consumers
  • Optical fibres
  • Increasingly faster integrated circuits (for network equipment)


Forces Inhibiting Low Latency Infrastructure Development:

  • Speed of light
  • Network switching technology
  • Cost of placing new switches and cables, or replacing existing ones
  • Lack of business/consumer applications: parallelism is hard to exploit


Paradigms:

Speed of light ultimately limits network latency. A round trip to the other side of the world and back covers roughly 40,000 km; at the speed of light in vacuum (about 300,000 km/s) that takes 40,000 / 300,000 ≈ 0.133 s ≈ 133 ms. In optical fibre, light travels at only about two thirds of that speed (roughly 200,000 km/s), and switching and other hardware add further delay, so real round trips take longer (280 ms - 350 ms).
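
The same arithmetic as a small sketch, using round numbers; the 200,000 km/s figure for fibre is an approximation based on a refractive index of about 1.5:

  # Round-trip distance to the far side of the world and back, in km.
  distance_km = 40_000

  c_vacuum_km_s = 300_000  # speed of light in vacuum (approx.)
  c_fibre_km_s = 200_000   # speed of light in optical fibre (approx.)

  rtt_vacuum_ms = distance_km / c_vacuum_km_s * 1000  # ~133 ms
  rtt_fibre_ms = distance_km / c_fibre_km_s * 1000    # ~200 ms

  print(f"Theoretical minimum round trip (vacuum): {rtt_vacuum_ms:.0f} ms")
  print(f"Theoretical minimum round trip (fibre):  {rtt_fibre_ms:.0f} ms")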

Experts:

Google

Timing:

The creation of the Internet, and growing consumer and business interest in it, stimulates faster development of high-bandwidth, low-latency networks.

Web Resources:

[1] Air Distances: http://www.abo.fi/~oholm/distance/AIRdistance.shtml