What is Latency?

KB ID 0001874

I hear people use the word ‘latency‘ a lot, mostly without ever really understanding what it is. Unlike its close relations bandwidth and throughput*, which are measurements of data, latency is a measurement of TIME, and in a lot of scenarios it is variable depending on what’s happening.

*Note: Too little bandwidth or throughput can also increase latency.

There will always be latency, because we are bound by the laws of physics: passing a ‘light pulse’ down a fibre optic cable from London to Paris will take less time than passing that same light pulse from London to New York. We call this propagation delay.

  1. Propagation Delay: This is the time it takes for a signal to travel from the sender to the receiver through the physical medium (such as fiber optics or copper cables). The speed of propagation is close to the speed of light but can vary slightly depending on the medium.
  2. Transmission Delay: This is the time required to push all the packet’s bits onto the wire. It is influenced by the size of the packet and the transmission rate of the network.
  3. Processing Delay: This is the time taken by network devices like routers and switches to process the packet header and make forwarding decisions. Processing delays are generally very small but can add up across multiple devices.
  4. Queuing Delay: This occurs when a packet waits in a queue before it can be transmitted. Queuing delays can vary significantly depending on the network congestion and the configuration of the network devices.
  5. Propagation Distance: The physical distance between the source and destination plays a critical role in latency. Longer distances naturally result in higher latency due to the increased time it takes for signals to travel.
  6. Network Congestion: High traffic volumes can cause congestion in the network, leading to increased queuing delays and, consequently, higher overall latency.
  7. Bandwidth and Throughput: Although bandwidth is the maximum rate of data transfer, actual throughput can be lower due to various factors, including network congestion and overheads. Lower throughput can contribute to higher latency.
  8. Protocol Overheads: Different network protocols have various overheads associated with them. For instance, the Transmission Control Protocol (TCP) has higher overhead due to its error-checking and recovery features compared to the User Datagram Protocol (UDP).
  9. Hardware and Software Limitations: The performance of network hardware (like routers, switches, and network interface cards) and software (such as drivers and network stacks) can impact latency. Faster and more efficient hardware and software reduce latency.
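Two of the delays above lend themselves to back-of-the-envelope arithmetic. A minimal sketch (the packet size, link rate, and distance are illustrative figures, not measurements):

```shell
# Transmission delay = packet size / link rate.
# A 1500-byte packet onto a 100 Mbps link:
awk 'BEGIN { printf "%.3f ms\n", (1500 * 8) / 100e6 * 1000 }'

# Propagation delay: light in fibre travels at roughly 2/3 the speed of light,
# i.e. about 5 microseconds per kilometre of cable. London to New York is
# roughly 5,570 km as the crow flies:
awk 'BEGIN { printf "%.0f ms one way\n", 5570 * 5 / 1000 }'
```

Note how the propagation figure dwarfs the transmission figure here, which is why distance dominates latency over wide-area links.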

Latency is typically measured in milliseconds (ms) and can be assessed using various tools and techniques, such as ping tests and traceroute commands. Lower latency is especially crucial for applications requiring real-time interaction, such as online gaming, video conferencing, and financial trading systems.
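For example, a quick look with the tools mentioned above (the hostname is a placeholder; on Windows the flags are `ping -n` and the second command is `tracert`):

```shell
# Round-trip latency: send five ICMP echo requests and read the 'time=' values
ping -c 5 example.com

# Per-hop latency along the path to the destination
traceroute example.com
```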

Minimizing network latency involves optimizing network infrastructure, improving hardware and software efficiency, and ensuring adequate bandwidth and throughput to handle the expected traffic load.

What is Latency and Why is this Important?

Well, the complaint is nearly always “We are experiencing latency issues“, usually when the ‘users’ are having performance issues with ‘something’. Now sometimes the problem IS the network (shock and horror). But all the bandwidth/throughput and low latency in the world will not help you if you have a poorly coded application, or your DNS is not set up correctly.

But it’s not just old and poorly coded applications that require low latency. Some application platforms we take for granted can suffer, for example:

  1. Online Gaming: Real-time multiplayer online games require low latency to ensure smooth gameplay and quick reactions. High latency can result in lag, making the gaming experience frustrating and uncompetitive.
  2. Video Conferencing: Applications like Zoom, Microsoft Teams, and Skype require low latency to facilitate real-time communication. High latency can cause delays, leading to awkward conversations and reduced communication quality.
  3. Voice over IP (VoIP): Services like Skype, WhatsApp, and other internet-based telephony services need low latency to provide clear and immediate voice communication. High latency can cause echo and delays, making conversations difficult.
  4. Financial Trading: Stock trading platforms and high-frequency trading systems rely on low latency to execute trades in milliseconds. Even minor delays can result in significant financial losses or missed trading opportunities.
  5. Telemedicine: Remote medical consultations, surgeries, and other healthcare services often require low latency to ensure accurate diagnostics and timely intervention.
  6. Augmented Reality (AR) and Virtual Reality (VR): AR and VR applications need low latency to provide immersive and responsive experiences. High latency can cause motion sickness and degrade the user experience.
  7. Industrial Automation and Control Systems: Manufacturing processes, robotics, and other industrial applications require low latency for precise control and real-time monitoring to ensure safety and efficiency.
  8. Autonomous Vehicles: Self-driving cars and drones rely on low latency for real-time data processing and decision-making to navigate safely and respond to dynamic environments.
  9. Cloud Gaming: Services like Google Stadia, NVIDIA GeForce Now, and Xbox Cloud Gaming stream games from the cloud to users’ devices. Low latency is critical to provide a responsive gaming experience comparable to playing on a local console or PC.
  10. Smart Grids: Advanced electrical grid systems require low latency for real-time monitoring and control to manage power distribution efficiently and respond to fluctuations in demand and supply.
  11. Remote Desktop Applications: Tools like Remote Desktop Protocol (RDP) and Virtual Network Computing (VNC) require low latency to provide a seamless and responsive experience when accessing and controlling a remote computer.
  12. Live Streaming: Interactive live streaming platforms like Twitch and YouTube Live require low latency to ensure minimal delay between the broadcaster and viewers, enabling real-time interaction through chat and other features.

Ensuring low latency for these applications often involves optimizing network infrastructure, using efficient communication protocols, and sometimes deploying edge computing to process data closer to the source.

Related Articles, References, Credits, or External Links

NA

 

VMware Converter Slow!

KB ID 0001584

Problem

I was P2Ving a server for a client this week. I did a ‘trial run’ just to make sure everything would be OK, and got this;

Yes, that says 13 days and 29 minutes! Suddenly doing this at 1700hrs on a Friday became a moot point! (Note: I was using VMware vCenter Converter Standalone version 6.2)

Solution

At first I assumed this was a network problem, so I moved everything onto the same Gigabit switch, and made sure all the NICs were connected at 1Gbps. Still no improvement. I then shut down as many services on the source machine as I could, still it was terribly slow 🙁

Firstly, make sure Concurrent Tasks and Connections per Task are set to ‘Maximum’.

Then locate the converter-worker.xml file and edit it;

Usually located at “C:\ProgramData\VMware\VMware vCenter Converter Standalone”

Note: ProgramData is, (by default) a hidden folder!

Locate the section, <useSsl>true</useSsl>, change it to <useSsl>false</useSsl> then save and exit the file.
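If you prefer to script the edit, here is a minimal sketch of the substitution. It runs against a scratch stand-in file so the effect is visible; on the real server, point sed at converter-worker.xml itself. This assumes a POSIX-style shell (e.g. Git Bash) is available; otherwise just edit the file in Notepad as described above.

```shell
# Stand-in for the real converter-worker.xml, just to demonstrate the edit:
printf '<useSsl>true</useSsl>\n' > converter-worker-demo.xml

# Flip the flag in place:
sed -i 's|<useSsl>true</useSsl>|<useSsl>false</useSsl>|' converter-worker-demo.xml

cat converter-worker-demo.xml    # now reads <useSsl>false</useSsl>
```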

Then restart the ‘VMware vCenter Converter Standalone Worker‘ service.

Boom! That’s better.

Related Articles, References, Credits, or External Links

NA

Backup Exec – Using RDX Drives

KB ID 0000578

Problem

While I like RDX drives (they have advantages over magnetic tape), they do have a drawback: throughput.

As you can see the removable drive/cartridges are just 1TB SATA Drives in a protective jacket, with a “write protection switch” on them.

So they should be perfect as a backup medium. The problem is that the drive carrier itself runs off the USB bus, so they can’t run faster than 48MB a second (I’ve not seen a server that has USB 3 on it yet). HP literature says its backup rate is 108GB an hour. However, for a small business that can be more than acceptable, and it has one big advantage: it keeps the client who wants to take their backups home with them on a “tape” happy (because that’s what they have always done).
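As a sanity check, HP’s quoted 108GB-an-hour figure sits comfortably inside that USB bus ceiling (a back-of-the-envelope conversion, using 1GB = 1024MB):

```shell
# 108 GB/hour expressed as MB/second; well under the ~48 MB/s bus limit
awk 'BEGIN { printf "%.1f MB/s\n", 108 * 1024 / 3600 }'
```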

So the other week I found myself with a shiny new RDX Drive and an old SBS 2003 Server running Backup Exec 11d.

Solution

Note: If you are running Backup Exec versions 10 or 11 you CANNOT perform backups with GRT. If you want this functionality then you need to upgrade to a newer version (GRT to RDX drive works fine with Backup Exec 2010 R3).

1. Once you have physically installed the drive and connected it to the server’s internal USB interface, you should see the drive listed under disk drives.

2. With an RDX Cartridge loaded it behaves just like a 1TB Drive (because that’s exactly what it is).

3. To use the drive in Backup Exec you need to create a new “Removable Backup-to-Disk Folder”.

4. Give the removable folder a sensible name; I set the maximum size to 1023GB to make sure it can’t try and outgrow the drive.

5. Once complete it will create “Media” in the removable folder that it names incrementally as it sees new cartridges, in the FLDR000001, FLDR000002, etc, format. Treat these the same as any other backup media, i.e. you can add them to media groups for different backup jobs.

Related Articles, References, Credits, or External Links

NA

HP Networking ‘ProCurve’ – Trunking / Aggregating Ports

KB ID 0000638 

Problem

I was lending a hand this week, while my colleague swapped out a lot of switches. I don’t usually deploy a large number of HP switches, so I was surprised when we installed a chassis switch and after patching the fiber links, the Cisco Catalyst switches all got upset and we lost three out of four ping packets.

I (wrongly) assumed that STP would be enabled, so I wandered back and pulled the second fiber link. I knew from conversations I’d had before that HP call using multiple uplinks between the same pair of switches to increase throughput “Trunking”. (Note: for people like me, who think that switch trunks are links that carry traffic for multiple VLANs: in “HP Land”, trunking means aggregating switch uplinks.)

Solution

Note: Up to four uplinks can be aggregated into one trunk.

Option 1 Configure a Trunk via Telnet/Console Cable

1. Connect to the switch either by Telnet or via the console cable > Log in > type menu {Enter} > The Switch menu will load > Select “2. Switch Configuration…”.

2. Port/Trunk Settings.

3. Press {Enter} > Edit >Scroll to the first port you want to add to the trunk > Use the arrow keys to navigate to the “Group” column > Press {Space} > Select the first unused trunk > Arrow to the “Type” column > Change to “Trunk” > Press Enter > Save.

4. Repeat to add the additional “Links”, then configure the mirror image on the switch at the other end.

Option 2 Configure a Trunk via the Web / GUI Console

1. Log into the web console > Interface > Port Info/Config > Select the first link you want to trunk > Change.

2. Set the Trunk Type to “Trunk” > Change the Trunk Group to the next available trunk > Save.

3. Repeat to add the additional “Links”, then configure the mirror image on the switch at the other end.
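Both options can also be done from the ProCurve CLI in config mode; a sketch (the port numbers and trunk group here are examples, so adjust them for your switch, and substitute ‘lacp’ for ‘trunk’ if you want LACP negotiation rather than a static trunk):

```
ProCurve(config)# trunk 23-24 trk1 trunk
ProCurve(config)# show trunks
```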

 

Related Articles, References, Credits, or External Links

NA