What is the latency of TICO technology?

07.05.19 09:03 AM By Nils Finger

What is latency anyway?

The terms "latency", "low-latency", "ultra-low latency", "zero latency" or even "no latency" are used by many vendors providing video streaming solutions. However, without clear numbers, it is difficult to measure these statements.

"Latency refers to time interval or delay when a system component is waiting for another system component to do something. This duration of time is called latency." 

Techopedia.com

While latency can in fact be measured precisely in units of time, we as humans perceive it differently depending on the situation in which we find ourselves.


When live-streaming TV over the internet, a latency of one to several seconds is not uncommon. In fact, the average broadcast latency for streamed video is around 6 seconds, a delay we can usually all cope with. Except for those extreme situations where your cable-TV neighbour cheers for the goal of your favourite football team while, on your side, the ball is nowhere near leaving Ronaldo's foot. Or even worse, your satellite-TV friends already send you a text commenting on the latest outrageous death on Game of Thrones before you even see Jaime Lannister pulling his sword. Still, we can all agree that anything below one second counts as “very low latency”.


Looking at live conferencing, we are already more sensitive. A conversation with your Korean sales representative over a link with just a single second of latency can lead to tedious meetings and serious frustration. Nobody would call this “low latency” anymore.


An even worse situation can occur in the meeting room when using KVM extension. Have you ever tried operating a computer mouse on a second screen while the connection is just slightly off real time? With a full second of latency, a desktop icon suddenly becomes an almost unreachable target for your cursor. Here, one second is perceptually far from "low latency".

At intoPIX, although we are not present in all of the applications noted above, we have a simple way of avoiding this problem of perceived latency: we always aim to build image processing and video compression that offers the very lowest latency, in order to improve visual communication and interactivity in a multitude of applications. Our customers greatly value our technologies precisely because they deliver bandwidth savings without the trouble of added latency.

According to a study from MIT*, the human eye starts to perceive latency at around 13 milliseconds; all of intoPIX's technologies meet this requirement.

But let's skip pure perception and get down to the numbers. When we speak about the TICO SMPTE RDD35 lightweight, low-latency technology, we speak about MICROSECONDS of latency. But how much, or rather how fast, is that actually?


Here are the basics:

  1 second      = 1,000 milliseconds
  1 second      = 1,000,000 microseconds
  1 millisecond = 1,000 microseconds
  1 microsecond = 0.001 milliseconds
  1 microsecond = 0.000001 seconds

How can we calculate the final latency?

Since TICO is a line-based compression technology, let's first correlate latency in frames and lines with latency in (milli)seconds for a video stream running at 60 frames per second (a short worked sketch follows the table below):

Latency in ...

... seconds               ... frames
________________________________
 1 second            =    60 frames
 333 milliseconds    =    20 frames
 50 milliseconds     =    3 frames
 16.66 milliseconds  =    1 frame
 8.33 milliseconds   =    0.5 frames
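
To double-check these figures, here is a minimal Python sketch (not from the original article; frames_to_ms is just an illustrative helper name) that converts a latency expressed in frames into milliseconds at a given frame rate:

```python
# Convert a latency expressed in frames into milliseconds at a given frame rate.
def frames_to_ms(frames: float, fps: float = 60.0) -> float:
    return frames / fps * 1000.0

for frames in (60, 20, 3, 1, 0.5):
    print(f"{frames} frame(s) at 60 fps = {frames_to_ms(frames):.2f} ms")
# 60 frame(s) at 60 fps = 1000.00 ms
# 20 frame(s) at 60 fps = 333.33 ms
# 3 frame(s) at 60 fps = 50.00 ms
# 1 frame(s) at 60 fps = 16.67 ms
# 0.5 frame(s) at 60 fps = 8.33 ms
```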


TICO technology can reach a latency as low as 6.5 lines of pixels at the encoder and 6 lines of pixels at the decoder, adding up to a total latency of just 12.5 lines.** If we now consider a UHDTV 4K video stream, where each frame has a vertical size of 2160 lines, this means in practice:

12.5 lines / 2160 lines × 16.66 ms = 0.0964 milliseconds



TICO 4K60P real-time encoding & decoding only takes 96 MICROSECONDS.


**TICO profile 1 when implemented on a chip (FPGA/ASIC).
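
For readers who want to reproduce the arithmetic, here is a short Python sketch of the same calculation (the line counts and frame geometry come from the figures above; line_latency_us is just an illustrative helper name):

```python
# Latency of a line-based codec: the number of buffered lines as a fraction
# of the frame height, multiplied by the frame period.
def line_latency_us(lines: float, frame_height: int = 2160, fps: float = 60.0) -> float:
    frame_period_us = 1_000_000.0 / fps  # one frame at 60 fps lasts ~16,666 microseconds
    return lines / frame_height * frame_period_us

# 6.5 encoder lines + 6 decoder lines = 12.5 lines end to end (TICO profile 1 on FPGA/ASIC)
print(f"{line_latency_us(12.5):.1f} microseconds")  # ~96 microseconds for UHDTV 4K at 60 fps
```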

So, could we say "zero"?

It is less than one frame, so is it "zero frame latency"? It is less than a millisecond, so is it "zero millisecond latency"? It all depends on your perspective. What is clear, however, is that at this level latency is no longer a concern. It is so low that you cannot notice it with the naked eye and cannot capture it in any application.

When you take a picture of both the source and the destination, with the signal passing through our encoder and decoder, you will not capture any latency. Even when timecode information is displayed!


Thus, can we say "zero"? We believe we can!

Not yet convinced? Request an evaluation.