Prospective customers often ask us:
- What end-to-end latency is actually achievable using the Ceeblue Media Fabric?
- How will choosing one player or protocol over another affect latency?
- Is there a hidden asterisk with “actual results may vary” somewhere in the fine print?
The purpose of this article is to provide answers to these questions with as few asterisks as possible.
What end-to-end latency is actually achievable using the Ceeblue Media Fabric?
The Ceeblue Media Fabric can deliver end-to-end latencies below 200 milliseconds. Of course, the latency of real-time workflows depends on the successful optimization of many different factors at different stages. Each of our customers’ integrations is unique, with its own benefits and challenges, and none will obtain the exact results shown here. We are sharing these anecdotal results so that our customers can get a sense of what to expect when using the most bleeding-edge real-time solution available today.
How will choosing one protocol over another affect latency?
In January 2024, Ceeblue performed a series of real-world tests using different combinations of players, ingest protocols and delivery protocols.
This testing is not exhaustive, and there are protocols and combinations of protocols that we happily support but which were excluded from this process so that we could focus on our most-requested formats.
These realistic benchmarks will not only provide invaluable guidance for our customers, based on real-world results, but will also contribute to a comprehensive report being drafted by the CDN Alliance’s Low Latency Workgroup, which Ceeblue co-chairs, regarding the state of low-latency solutions.
The Testing Environment: True End-to-End Latency
- Browser: Windows 10, Chrome 120.0.6099.224
- Broadcaster Location: Tulle, France
- Ceeblue Node: Fokkerweg, Oude Meer, Netherlands
- Viewer Location: Tulle, France
- Source: GStreamer 1.20.4 with the epochtime plugin
- Stream configuration: 1 video track, H.264, 720p, 30 fps, 2 s segmentation
GStreamer was used to push the stream from the residence of one of our engineers, using a residential internet connection, with a standard, ISP-provided WiFi router.
The engineer then opened the webpage that loads our demo environment. This environment embeds epoch timestamps on the video canvas, recording the exact moment each frame leaves the source.
When playback occurs in the viewer’s browser, the embedded timestamp is compared with the current time, and the difference between the two is displayed as the end-to-end latency of the stream.
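In code, that comparison is a simple wall-clock subtraction. The sketch below is a minimal illustration, not Ceeblue’s actual demo code; it assumes the on-canvas timestamp has already been decoded from the frame into a millisecond epoch value (here called `embeddedEpochMs`):

```javascript
// End-to-end latency = viewer wall-clock time minus the epoch
// timestamp that was stamped into the frame at the source.
function endToEndLatencyMs(embeddedEpochMs, nowMs = Date.now()) {
  return nowMs - embeddedEpochMs;
}

// Example: a frame stamped 240 ms before "now" measures 240 ms of latency.
console.log(endToEndLatencyMs(1_700_000_000_000, 1_700_000_000_240)); // → 240
```

Note that this only gives a meaningful number when the source and viewer clocks are synchronized (e.g. via NTP); any clock skew between the two machines shows up directly in the measured latency.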
Ceeblue provides incredibly fast start times. In the first second or two a stream will show slightly higher latency (around 1 second for WebRTC, for example), but it quickly drops to a much lower, stable latency, which our engineer duly recorded for every combination of ingest protocol, output protocol, and player in the table below.
End-to-end latency in milliseconds, by input protocol:

| Output Protocol | Player | Passthrough / Transcode | RTMP | SRT | WebRTC¹ |
|---|---|---|---|---|---|
| WebRTC | Ceeblue | Passthrough | 250 | 400 | 240 |
| WebRTC | Ceeblue | With Ceeblue transcode | 300 | 500 | 300 |
| HESP | THEOplayer | Passthrough | 700 | 960 | 700 |
| HESP | THEOplayer | With Ceeblue transcode | 870 | 1,130 | 760 |
| LL-HLS² (CMAF) | hls.js | Passthrough | 3,220 | 2,800 | 2,900 |
| LL-HLS² (CMAF) | hls.js | With Ceeblue transcode | 3,330 | 3,100 | 3,160 |
| DASH (CMAF) | dash.js | Passthrough | 4,900 | 4,920 | 4,900 |
| DASH (CMAF) | dash.js | With Ceeblue transcode | 5,500 | 5,500 | 5,360 |
| HLS (MPEG-TS) | hls.js | Passthrough | 10,000 | 10,100 | 9,800 |
| HLS (MPEG-TS) | hls.js | With Ceeblue transcode | 10,800 | 11,200 | 10,800 |
These measurements reflect the full end-to-end (round-trip) path from Tulle, France → Oude Meer, Netherlands → Tulle, France, an approximate distance of 1,628 km (1,012 mi) as the crow flies.
- ¹ Using https://github.com/meetecho/simple-whip-client
- ² HLS and LL-HLS tests were run with the default hls.js configuration, which is not optimized for low latency, and without seeking in the timeline.
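For readers who want to reproduce the LL-HLS rows with a tuned player, hls.js exposes low-latency options in its configuration API. The keys below are real hls.js config options, but the specific values are illustrative assumptions, not the settings used in these tests:

```javascript
// Illustrative hls.js low-latency tuning (values are assumptions):
const lowLatencyConfig = {
  lowLatencyMode: true,         // enable LL-HLS partial-segment loading
  liveSyncDuration: 1.5,        // target distance from the live edge, in seconds
  maxLiveSyncPlaybackRate: 1.5, // allow sped-up playback to catch back up to the edge
};

// In a browser:
//   const hls = new Hls(lowLatencyConfig);
//   hls.loadSource(manifestUrl);
//   hls.attachMedia(videoElement);
```

Tightening `liveSyncDuration` trades rebuffering risk for lower glass-to-glass latency, which is why the defaults used in this benchmark sit further from the live edge.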
Breaking Down the Results of the Protocol / Player Benchmarking
Right off the bat, these observations stand out:
Input Protocol
- WebRTC with WHIP ingest was the lowest-latency option (or tied for lowest) in 8 out of 10 configurations, and trailed the fastest configuration by no more than 100 ms in the two it did not win.
- RTMP ingest was a very close second.
- SRT, due to its inherent buffering, places third among Input Protocols.
- Setting “is-live=true” on GStreamer’s videotestsrc/audiotestsrc elements is important for achieving the best latency.
Output Protocol
- The fragmented protocols result in higher latencies, in part because of additional buffering (which may also make the measurements less precise).
- WebRTC and HESP both deliver dramatically lower latency than HLS, DASH, and even LL-HLS.
- The Ceeblue transcoder adds as little as 50 milliseconds of latency.
These Are Actual Results, But Yours Still May Vary
After reviewing these results, another of our engineers replicated the WebRTC testing from another country, and he obtained higher latencies for SRT and lower latencies for both RTMP and WebRTC ingest:
End-to-end latency in milliseconds, by input protocol:

| Output Protocol | Player | Passthrough / Transcode | RTMP | SRT | WebRTC |
|---|---|---|---|---|---|
| WebRTC | Ceeblue | Passthrough | 208 | 431 | 178 |
| WebRTC | Ceeblue | With Ceeblue transcode | 281 | 528 | 284 |
Based on these real-world results, as well as our experience with dozens of customers, we can generalize and provide the following ballpark guidance:
| Protocol | Expected Latency |
|---|---|
| WebRTC | 300 ms |
| HESP | 800 ms |
| LL-HLS | 3,000 ms |
| DASH | 5,000 ms |
| HLS | 10,000 ms |
As we mentioned above, each workflow has different components and requirements, and these will ultimately determine the tools used, the protocols selected, and, as a result, the achievable latency. That being said, the chart above represents reasonable ballpark latencies that customers can expect.
HESP Optimization Since Testing
These tests were performed four months ago, and since then one of our engineering teams has been hard at work optimizing our HESP implementation in anticipation of new services we will be launching and announcing at NAB Las Vegas.
We are incredibly excited to report that our current HESP implementation achieves latencies previously possible only with WebRTC, cutting HESP latencies by more than half!
End-to-end latency in milliseconds, by input protocol:

| Output Protocol | Player | Passthrough / Transcode | RTMP | SRT | WebRTC |
|---|---|---|---|---|---|
| HESP | Ceeblue | Passthrough | 240 | 455 | 205 |
| HESP | Ceeblue | With Ceeblue transcode | 292 | 516 | 266 |
HESP is an HTTP-based, CDN- and DRM-friendly, cacheable protocol that will soon be taking the sub-500-millisecond market by storm. Ceeblue, as the only provider of both WebRTC and HESP, is doubling down on real time, and looks forward to providing the integral real-time component to even more innovative sub-second solutions that bring people closer together every day.
Keep your eyes peeled in the coming weeks for future Ceeblue updates regarding our HESP offerings. We have exciting open-source news that will make HESP integration a matter of hours instead of days, just as we have done for WebRTC integration… Stay tuned.
EXPLORE THE CEEBLUE WEBRTC CLIENT SDK
Source Code
github.com/CeeblueTV/webrtc-client
Prebuilt NPM Package
npmjs.com/package/@ceeblue/webrtc-client
TRY OUT THE WEBRTC VIDEO.JS PLUGIN
github.com/CeeblueTV/videojs-plugins