SMPTE 2110 Professional Media over IP Infrastructure
with added -22 for compressed video essence
The long-awaited SMPTE ST 2110 standards for professional media over IP infrastructures have now been published for about a year and are a major contributor to the industry's move toward IP-based infrastructures. The suite of standards specifies the carriage, synchronization, and description of separate video, audio, and ancillary data streams over IP for live production, playout, and other professional media applications. Because every element carries timestamps, all elements can be routed separately and brought back together at any endpoint. This synchronized separation of streams, in contrast to SMPTE ST 2022, promises to simplify the addition of metadata such as captions, subtitles, Teletext, and time codes, to simplify video editing, and to ease tasks such as the processing of multiple audio languages and types.
Today, the standards suite is already being embraced by the industry, and many vendors offer equipment and solutions based on SMPTE ST 2110. For a list of vendors already offering ST 2110 products, check out the members of the AIMS Alliance.
To shed light on all parts of the suite, we have listed and explained them in the following:
ST 2110-10 uses SMPTE ST 2059 (PTP) to distribute time and a common timebase to each device within the system, giving timestamps to the separate streams. It specifies the various system clocks and how the RTP timestamps are calculated for video, audio, and ANC signals. This enables each component flow (audio, video, metadata) to be synchronized with the others while remaining an independent stream.
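As a non-normative illustration of this idea, an RTP timestamp in ST 2110 is essentially the PTP time counted at the media clock rate (90 kHz for video; 48 kHz is typical for audio) and truncated to 32 bits. The function name and the example PTP instant below are our own, for illustration only:

```python
# Hedged sketch: deriving an RTP timestamp from PTP time.
# The media clock counts at a fixed rate from the PTP epoch,
# and the RTP timestamp is that count truncated to 32 bits.
def rtp_timestamp(ptp_seconds: float, clock_rate: int) -> int:
    return int(ptp_seconds * clock_rate) % 2**32

# A video packet and an audio packet stamped at the same PTP instant
# carry timestamps on different clocks but the same underlying time,
# which is what lets the separate streams be re-aligned at any endpoint.
t = 1_700_000_000.0          # illustrative PTP time (seconds since epoch)
video_ts = rtp_timestamp(t, 90_000)   # video media clock
audio_ts = rtp_timestamp(t, 48_000)   # audio media clock
```

This is why a receiver can pair, say, a caption packet with the exact video frame it belongs to, even when the two streams took different network paths.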
ST 2110-20 specifies the real-time, RTP-based transport of uncompressed active video essence over IP networks. An SDP-based signalling method is defined for the image technical metadata necessary to receive and interpret the stream.
It supports resolutions up to 32K x 32K, comfortably covering today's trending UHD formats; the Y'Cb'Cr', RGB, XYZ, and I'Ct'Cp' color spaces; HDR and HFR content; and samplings such as 4:2:2/10-bit, 4:2:2/12-bit, and 4:4:4/16-bit.
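To give a flavor of the SDP-based signalling, a sender description for an HD stream might look like the sketch below. All addresses, ports, and identifiers here are illustrative placeholders, not values from the standard; the `fmtp` attribute is where the image technical metadata (sampling, dimensions, depth, frame rate, colorimetry) is carried:

```
v=0
o=- 123456 1 IN IP4 192.0.2.10
s=Example ST 2110-20 video stream
t=0 0
m=video 5004 RTP/AVP 96
c=IN IP4 239.100.1.1/64
a=rtpmap:96 raw/90000
a=fmtp:96 sampling=YCbCr-4:2:2; width=1920; height=1080; depth=10; exactframerate=30000/1001; colorimetry=BT709; PM=2110GPM; SSN=ST2110-20:2017; TP=2110TPN
a=mediaclk:direct=0
a=ts-refclk:ptp=IEEE1588-2008:AA-BB-CC-FF-FE-DD-EE-FF:127
```

A receiver uses these parameters to interpret the raw payload; without them, the packets are just opaque pixel data.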
ST 2110-21 defines a timing model for ST 2110-10 video RTP streams as measured leaving the RTP sender, and defines the SDP parameters the sender uses to signal the timing properties of such streams.
With the introduction of the upcoming ST 2110-22, the suite specifically and officially defines a standardized way to transport compressed video in IP workflows, using codecs such as TICO (SMPTE RDD 35) or TICO-XS (an implementation of the new JPEG XS standard).
The IETF RTP payload format for JPEG XS is now defined. The introduction of compressed video to ST 2110 amplifies the existing advantages of moving to IP-based workflows (flexibility, scalability, broad accessibility) by allowing users to transport high-bandwidth video such as 4K and 8K over cost-effective COTS 1GbE/10GbE networks. The ultra-low-latency, lossless-quality TICO and TICO-XS codecs position compression as a solid, sustainable solution for creating cost-effective, bandwidth-efficient, high-quality live production workflows: compared with uncompressed video, they give up essentially nothing in quality or latency, while using far less bandwidth and allowing COTS equipment such as 1GbE and 10GbE networks to carry multiple streams in HD, 4K, and 8K.
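The bandwidth argument is easy to see with rough numbers. The sketch below counts only active-video bits (ignoring RTP/IP overhead), and the 10:1 ratio is an assumed, typical mezzanine setting rather than anything mandated by ST 2110-22:

```python
# Hedged sketch: uncompressed ST 2110-20 vs. JPEG XS over ST 2110-22.
def active_video_bps(width, height, fps, bit_depth=10, samples_per_pixel=2):
    """samples_per_pixel=2 models 4:2:2 (one luma + one alternating chroma)."""
    return width * height * samples_per_pixel * bit_depth * fps

uhd_uncompressed = active_video_bps(3840, 2160, 60)  # ~9.95 Gb/s: fills a 10GbE link
uhd_jpeg_xs = uhd_uncompressed / 10                  # ~1 Gb/s at an assumed 10:1 ratio
print(uhd_uncompressed / 1e9, uhd_jpeg_xs / 1e9)
```

At 10:1, a single 10GbE link that could barely carry one uncompressed UHD stream can instead carry several compressed ones, which is the core economic case for -22.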
ST 2110-30 deals only with the real-time, RTP-based transport of PCM digital audio streams over IP networks. An SDP-based signalling method is defined for the metadata necessary to receive and interpret the stream. Non-PCM digital audio signals, including compressed audio, are beyond the scope of this standard.
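The packetization arithmetic for such a PCM stream is straightforward. The sketch below assumes the common AES67-style conformance point of 48 kHz sampling, 1 ms packet time, and 24-bit ("L24") samples; the function and its defaults are ours, for illustration:

```python
# Hedged sketch: RTP payload size for a PCM audio stream
# (48 kHz, 1 ms packets, 24-bit samples assumed).
def payload_bytes(sample_rate=48_000, packet_time_ms=1.0,
                  channels=2, bytes_per_sample=3):
    samples_per_packet = int(sample_rate * packet_time_ms / 1000)
    return samples_per_packet * channels * bytes_per_sample

print(payload_bytes())            # stereo: 48 samples x 2 ch x 3 bytes
print(payload_bytes(channels=8))  # the same math scales with channel count
```

With 1 ms packets the stream runs at exactly 1000 RTP packets per second per flow, regardless of channel count, which keeps receiver buffering simple.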
ST 2110-31 can handle non-PCM audio: it specifies the real-time, RTP-based transport of AES3 signals over IP networks, referenced to a network reference clock. As with AES3 itself, each signal always carries a two-channel (stereo) pair.
ST 2110-40 essentially describes how to use IETF RFC 8331 with ST 2110 to generically wrap ancillary data items in IP. It specifies the transport of SMPTE ST 291-1 Ancillary (ANC) data packets related to digital video streams over IP networks. In this way, it enables break-away routing of audio and VANC.
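One small but characteristic detail of carrying ST 291-1 data is that the 8-bit DID and SDID identifiers travel as 10-bit words: bit 8 is even parity over bits 0-7, and bit 9 is the inverse of bit 8. The helper name below is our own, but the encoding rule is the one from ST 291-1 that RFC 8331 payloads use:

```python
# Hedged sketch: expanding an 8-bit ANC DID/SDID value into the
# 10-bit word used on the wire (ST 291-1 parity encoding).
def to_10bit(value: int) -> int:
    b8 = bin(value & 0xFF).count("1") & 1  # even parity over bits 0-7
    b9 = b8 ^ 1                            # bit 9 is the inverse of bit 8
    return (b9 << 9) | (b8 << 8) | (value & 0xFF)

# Example: DID 0x61 (CEA-708 captions) becomes the 10-bit word 0x161.
print(hex(to_10bit(0x61)))
```

The parity bits let a receiver detect single-bit corruption in these identifiers without any further framing.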
For more than a century, the people of the Society of Motion Picture and Television Engineers® (SMPTE®, pronounced “simp-tee”) have sorted out the details of many significant advances in media and entertainment technology, from the introduction of “talkies” and color television to HD and UHD (4K, 8K) TV. Since its founding in 1916, the Society has received an Oscar® and multiple Emmy® Awards for its work in advancing moving-imagery engineering across the industry. SMPTE has developed thousands of standards, recommended practices, and engineering guidelines, more than 800 of which are in force today.
For more information visit: https://www.smpte.org/st-2110
Adding JPEG XS mezzanine compression on ST 2110-22
intoPIX has released accelerated encoder/decoder IP-cores for FPGAs, as well as accelerated SDKs running on Nvidia GPUs and Intel x86 CPUs:
- TICO-XS IP-cores - whose first release delivers 8-, 10-, and 12-bit capability, 4:2:2 and 4:4:4 color sampling, and HD to 4K at up to 60 fps. Thanks to their extremely small footprint, the TICO-XS encoder and decoder IP-cores fit onto the smallest Intel and Xilinx FPGAs, require no additional memory, and enable a firmware upgrade of existing FPGA-based systems.
- FastTICO-XS SDKs - which have demonstrated live streaming of HD and 4K on Intel x86 CPUs and 8K60p decoding on Nvidia GPUs.