How Do You Measure 5G Latency? Expert Q & A Session, Part 3
Recently, we hosted our webinar, “Is Your 5G Data Architecture Ready for Microservices?”, in conjunction with FierceWireless. In this session, VoltDB’s Chief Technologist, Dheeraj Remella, engaged in a roundtable discussion with Iain Gillott, Founder and President of iGR, about a variety of topics related to 5G and how to prepare your architecture for the changes this new network will bring.
We received an outpouring of questions from attendees throughout the webinar, and we are highlighting some of the most pressing inquiries in a blog series. In this third installment, Iain shares his thoughts on 5G latency and how it’s measured. You may view the webinar in its entirety on-demand here.
Editor’s Note: Unless otherwise noted, the questions come from webinar attendees via our chat functionality during the session. These answers have been edited for clarity and grammar.
Q: How Do You Measure 5G Latency?
Iain Gillott: There’s one more question that I do want to take right now. It relates to something I said about 5G performance. I mentioned that the latency we’re aiming for is less than 10 milliseconds, and somebody asked, “Where’s that from? How are you measuring the latency?”
Typically, network latency is measured from the device, up to the radio, down through the fronthaul to the baseband processor, back out into the core, and then to the application itself. A typical ping test, if you like. LTE today is 60 to 70 milliseconds. The fastest I’ve ever seen is 41 milliseconds – I was actually standing in one of the operator’s headquarters, so you’d hope it was good there – but that was from my device to the radio, down through the network, into the core, and then turned around and came back. That doesn’t include any processing time for the application itself. So, if I’m streaming a movie, for example, and it takes a while for that stream to start, that wait wouldn’t be included in the 60 to 70 milliseconds.
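To make the distinction concrete, here is a minimal sketch of a ping-style round-trip measurement. It times TCP handshakes rather than ICMP (which needs elevated privileges), so it captures transport latency to a host but, just as Iain notes, none of the application’s own processing time. The function name and parameters are our own illustration, not anything from the webinar.

```python
import socket
import time


def tcp_rtt_ms(host, port=443, samples=5):
    """Estimate round-trip time (ms) by timing TCP handshakes.

    This measures transport-level RTT only -- the time for the
    connection to go out and come back -- not how long the
    application takes to respond once connected.
    """
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # The handshake completes when create_connection returns.
        with socket.create_connection((host, port), timeout=2):
            pass
        timings.append((time.perf_counter() - start) * 1000.0)
    # Report the best case, like the 41 ms figure quoted above.
    return min(timings)
```

Usage would be something like `tcp_rtt_ms("example.com")`; over LTE you would expect a result broadly in the 60–70 ms range discussed here.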
With 5G, the 10-millisecond target is for that same round trip, and the way we get there is by moving the processing to the edge. They actually move it closer to the base station and remove the transport in the middle. You’ll also hear of a one-millisecond target that allows for a lot of industrial-type applications – especially for control of robots and things like that in industrial manufacturing. That’s actually a radio latency, so it goes from device to radio, is processed, and goes straight back in one millisecond. There are different expectations for latency, but that’s typically how we’re looking at it from an architecture point of view.
Missed the previous two posts in this series?
Let us know what you think — add a comment below.