Building What Was Missing: How WebRTS Reinvents Real-Time Streaming at Scale


I’m Danny and I’m the Founder and CTO of Ceeblue. I’ve been building networks and systems since before most of the people reading this were out of school—or even born, if we’re being fully transparent. I’ve worked for British banks, ran infrastructure for European telcos, co-founded a successful streaming company and then sold it to build something new (which turned out to be Ceeblue). And yet, out of all the years of jumping between racks and routers, what excites me most is what we’ve done this time. That’s because, with WebRTS, we’ve finally filled a gap that’s been staring this industry in the face for far too long.

Let me tell you a bit about how we got here, what we built, and why I think it’s going to change the face of real-time video streaming as we know it.


The Problem with “Real-Time”

When You Know There’s a Hole in the Bucket

Back when I was still running my previous startup, we were helping clients get video out to the world using some of the best tools available at the time: RTMP, HLS, you name it. But latency was always a devil on our shoulder. And as devices, audiences, and expectations evolved, we started to notice more clearly: real-time wasn’t really real-time.

For a while the RTMP/Flash combo was the only real-time game in town. It was clunky but good enough. Then Adobe killed Flash, and with it, the only real-time option we had.

We looked at WebRTC to fill the gap, and I was optimistic. Sub-second latency in the browser sounded like the stuff of dreams. But it wasn’t built for one-to-many. We figured out how to make it scale, as did a couple of other companies in this space, and after plenty of refinement it now works great. But for one-to-many applications where the “many” is a huge number, WebRTC has its technical and economic limitations: it doesn’t run over traditional CDNs and it doesn’t play nice with corporate firewalls.

So I had only half-solved the goal I set out to achieve when I started Ceeblue: we created a scalable real-time streaming solution, but with an asterisk or two. We were innovating on top of the best technologies out there, but something was missing. There was a hole, and we were going to have to fill it.


The Limitations of WebRTC and HESP

From Patchwork to Purpose-Built

Before deciding to create something new, we tried one more existing technology that seemed to have a lot of potential to fill the gap. We joined the HESP Alliance and within a few months integrated HESP support into our platform alongside WebRTC. We were the first provider to offer customers the choice of two sub-second technologies: WebRTC and HESP. One other company recently started offering both like we do: Dolby.io. But that’s because they gobbled up Millicast, a top-notch WebRTC shop, and then acquired Theo Technologies, the team behind HESP. So yeah, kinda the same but also not.

We opened HESP up a bit, allowing it to run on third-party CDNs, and we created a client library that gave Ceeblue customers the freedom to choose from more than just the one player. But in the end, HESP wasn’t the solution we were really looking for, either.

Having two or three or five different sub-second protocols wasn’t the endgame. It turns out that our HLS and DASH and LL-HLS and WebRTC and HESP were just stepping stones. What we really needed was a new way forward, something built for boots-on-the-ground live streaming as it is today: fragmented, congested, and full of viewers watching from offices, mobile networks, and coffee shop Wi-Fi. So we went back to the drawing board, described the least optimal streaming scenario, and worked to build something that could survive it. That’s the origin of WebRTS.

Let me be super-clear about one thing: WebRTS is not a “better WebRTC.” It’s a new framework, different in every way except the low latency they both achieve, built from the ground up for sub-500ms video delivery over standard CDNs, without proprietary players or vendor lock-in. Think of it as what real-time should be: secure, scalable, stable, and simple enough that a developer doesn’t have to rip their hair out trying to get it to work.


WebRTS workflows, with sub-500ms delivery and live streaming over CDNs.

Why WebRTS Had to Be Built

The Bits That Matter

Let’s get into what actually makes WebRTS different.

Built for CDNs, Not Against Them

We didn’t build WebRTS to run on some exotic proprietary CDN. It scales using the same CDN infrastructure you’re already paying for. Akamai, CloudFront, Fastly: it doesn’t care. And it works in plain old HTML5 players like video.js. No apps to install. No “please update your plugin” warnings.

Low-Latency Without Lock-In

We’ve squeezed performance, compatibility, and efficiency into one tidy framework. And we’ve open-sourced the client too, because progress doesn’t happen in silos. The hope is that others will take it, build on it, and help drive the industry forward.

What Makes WebRTS Different

Adaptive Live-Point Recovery

For starters, it’s adaptive in a way that most real-time protocols just aren’t. We designed a skipping mechanism that’s completely non-invasive. It doesn’t stall or stutter—it just skips past network congestion imperceptibly and keeps the stream on the live point. No buffering. No frozen presenters. No spinning circles of doom.
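The mechanism itself lives inside the WebRTS client, but the core idea can be sketched in a few lines. The function below is a hypothetical illustration of the concept, not Ceeblue’s actual code, and the threshold name and value are assumptions for the sketch: when congestion leaves playback too far behind the live edge, the player skips forward to the live point instead of buffering.

```typescript
// Illustrative sketch only: models live-point recovery, where a player that
// has drifted too far behind the live edge jumps forward rather than stalling.
// MAX_DRIFT_SECONDS is a made-up tuning knob, not a WebRTS parameter.
const MAX_DRIFT_SECONDS = 0.5;

/**
 * Decide where playback should resume after congestion.
 * Returns the current position when drift is within tolerance,
 * or the live edge when the player has fallen too far behind.
 */
function recoverLivePoint(currentTime: number, liveEdge: number): number {
  const drift = liveEdge - currentTime;
  return drift > MAX_DRIFT_SECONDS ? liveEdge : currentTime;
}
```

With a 0.5-second tolerance, `recoverLivePoint(10.0, 10.3)` keeps playing from 10.0, while `recoverLivePoint(10.0, 12.0)` skips straight to 12.0. The real client does this imperceptibly at the frame level; the point of the sketch is only the decision: never let the viewer drift off the live point.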

Sub-second live streaming with DRM

It’s also got full MultiDRM baked in. That means we support PlayReady, FairPlay, and Widevine out of the box. If you’re broadcasting a Premier League match or a high-stakes poker tournament, your content is safe—and still arriving in under 500ms.
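In practice, supporting all three systems matters because each browser family ships a different one. The sketch below is a hypothetical illustration of that conventional mapping, not part of the WebRTS API; the function and type names are invented for this example.

```typescript
// Illustrative sketch: the conventional pairing of browsers to DRM key
// systems. Function and type names are hypothetical, not WebRTS's API.
type KeySystem =
  | "com.microsoft.playready" // PlayReady
  | "com.apple.fps"           // FairPlay Streaming
  | "com.widevine.alpha";     // Widevine

/** Pick the key system conventionally supported by a given browser engine. */
function pickKeySystem(
  engine: "edge-legacy" | "safari" | "chromium" | "firefox"
): KeySystem {
  switch (engine) {
    case "safari":
      return "com.apple.fps"; // Safari ships FairPlay
    case "edge-legacy":
      return "com.microsoft.playready"; // legacy Edge ships PlayReady
    default:
      return "com.widevine.alpha"; // Chromium browsers and Firefox ship Widevine
  }
}
```

A real player would probe support at runtime (the browser’s Encrypted Media Extensions expose `navigator.requestMediaKeySystemAccess` for exactly this) rather than hard-coding by engine, but the mapping above is why “out of the box” MultiDRM saves integrators real work.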


Real-World Use Cases

Built for the Tough Stuff

Where WebRTS really shines is under pressure.

Whether it’s live sports with second-screen spoiler risk, real-money betting with every fraction of a second worth dollars, or corporate town halls in secure office networks—WebRTS performs where others flinch.

Let’s say you’re watching a tennis match on your phone while your mate texts you “Ace!” before you see it. That’s a broken experience. WebRTS fixes it. We’ve tested it at global scale, streaming interactive events with partners like Sparkup, in collaboration with P2P partners like Quanteec, and we’ve integrated EZDRM’s MultiDRM solution.

You know, all the things you’re obviously going to want.


Why Ceeblue Was the One to Build It

Some folks ask why this didn’t come out of one of the big players. Well, here’s the truth: most of the big names have too much riding on their current stack to take the risk. They’ve built empires on HLS and DASH, and they’ll keep selling you “low latency” solutions that are still seconds behind.

We’re also not afraid of getting our hands dirty. A lot of what makes WebRTS possible comes down to years of tackling gnarly edge cases and wading into messy territory others avoided. We didn’t build WebRTS to be flashy; we built it to solve real problems for real people delivering real video to the world. We’ve dealt with the real problems: dodgy Wi-Fi networks, corporate firewall issues, and every kind of packet-loss horror show you can imagine. And we’ve designed WebRTS to handle all of it with grace.


Now, I won’t pretend this was easy. Building WebRTS meant challenging years of assumptions, ditching legacy thinking, and testing relentlessly. I’ve been in meetings where people looked at us sideways—“You’re trying to do sub-second over TCP with DRM? Good luck, mate.”

But that’s where the team really shone. We’ve got engineers who live and breathe streaming, partners who believe in the mission, and a culture of collaboration that I’m genuinely proud of. I’ve worked at companies where every sprint feels like a battle. At Ceeblue, it feels like a jam session—everyone riffing, improving, building.

We’re still a tight-knit bunch. We meditate, we share ideas, and we debate fiercely—but always with the goal of making the product better. That kind of culture doesn’t show up on a spec sheet, but it makes all the difference when you’re trying to build something this ambitious.

And it’s paid off.


Looking Ahead: webrts.org and the Open Ecosystem

We’re not done. Far from it. We’re still optimizing, still integrating, still pushing the boundaries.

Next up is the community. We’re spinning up an open ecosystem around WebRTS—starting with webrts.org. It’ll be a place where developers, broadcasters, researchers, and curious minds can come together to experiment, innovate, and make real-time streaming actually real-time.

Because the truth is, there’s no single finish line here. Viewers keep expecting more. Providers keep pushing higher and higher qualities—4K video, surround-sound immersive audio experiences. We need to keep adapting, pushing more bytes through unreliable internet connections.

And that’s why we’re opening the doors. We want folks to build with us, test with us, argue with us, and make WebRTS better.


Final Takeaway: Real-Time That’s Ready Now

If you’re still relying on traditional or Low-Latency HLS for your live stream—just stop. You can’t fake real-time with tricks and caching optimizations. And if you’re investing in solutions that lock you into proprietary players and specialty CDNs, you’re not scaling. You’re stalling.

You don’t need to throw away everything. WebRTS plugs in. It works with your existing CDNs. It doesn’t require a forklift upgrade.

And it’s ready now.



Final Thought

I’m 68 now. Been in this game a long time. I’ve seen protocols rise and fall, watched companies flame out because they chased flash instead of function.

But every once in a while, you get to build something that actually fixes what’s broken, in a way no one’s ever tried before—and does it elegantly.

That’s what WebRTS is for me. Not just a product, but an actual path forward that’s not just spinning the same old wheels. A way to get back to what this work is really about: connection, performance, and the joy of knowing it’s all running like clockwork behind the scenes.

So here’s to what’s next. We built the thing we always wished we had. Now it’s your turn to see what it can do.


Streaming Media Interview (and more)

If you want the more formal version of this story with fewer tangents, I had a great chat recently with the folks at Streaming Media. We covered a lot of ground: how Ceeblue got started, why WebRTS needed to exist, and where I think the industry’s headed. Worth a read if you want to hear it in a slightly tidier format. Here’s the interview.

For another recent post about WebRTS, click here.

Contact us for more info.
