Have you ever been told by a salesperson, tech or industry expert that their platform can automatically scale? Ever heard people in the industry keep referencing “cloud-native” technology with such passion that you’d think it’d solve world hunger, war, and climate change all in one go?
Are you asking yourself if this is just another craze from the front line of tech evangelists, or if you’re missing out on something here?
Well, you’re not alone. A lot of people are too vague when trying to explain these topics, and you can easily be left wondering what they’re even talking about.
The streaming industry is too fragmented and complex to just default to buzzwords such as “automatic scaling” and “cloud-native” when highlighting value propositions.
I’ve been guilty of this myself in the past, so, in this think piece, I aim to explain – in plain “layman” terms – what these buzzwords mean, how they relate to the service we offer at Vimond, and why some of the biggest media organizations in the world are adopting these technologies at a fast pace.
Vimond
Ok, so to start – for all those who don’t know Vimond – let me explain what we do:
At Vimond, we provide backend video software that powers streaming services offered by broadcasters, telcos, cable operators, and other media organizations. Some of these names include companies such as Comcast, Kayo Sports, Thomson Reuters, and many others.
So, what’s “backend video software” supposed to mean?
Well, in essence, it means that we take care of all the backend infrastructure needed to stream video online. We handle and store all the digital video and its metadata, provide video distribution, enable video editing and live clipping, offer encoding and transcoding services where needed, and much more.
There are many parts to an OTT service, but for now, and in the context of this Think Piece, the important thing to understand is that if the Vimond service is not functioning properly, the end-user will not receive any video when they lean back and click play on their favourite show.
(For anyone who’d like to read more about the backend side of a streaming service and how we solve this at Vimond, check out our video CMS VIA.)
Cloud-Native Streaming Technology
Ok, so with that out of the way, let’s talk about one of the previously highlighted buzzwords – cloud-native – and why it matters. To explain this, let me start by explaining the old ways of doing things.
In the past, before Vimond’s platform was fundamentally changed to a cloud-native platform, we used to provide our services in a very different way. Vimond would serve our platform as one big installation (big bang approach) per customer, made up of several smaller applications and services that we had created utilizing cloud infrastructure.
These services were great, and resulted from some very smart people developing very clever ways of solving complex problems. But:
- Were these applications and services that made up the backbone of our platform made in an agile and flexible way?
- Could you quickly change one service out, or bring one in?
- Were processes, code, and infrastructure streamlined so we made the most out of the true potential of the cloud?
The answer is a deafening no. Even though our platform historically offered superior flexibility, over time it no longer matched our own expectations for an ever-growing, rapidly expanding industry.
This is because these services were built and coded in unique ways, with many different components being integrated, customized, and intertwined with each other – essentially creating a monolithic application with tons of dependencies between components.
We were definitely not alone in having this challenge. It was how most software was written and run.
The result? A platform that takes significant time to set up for each customer, a platform that is complicated to change when first deployed, and a platform that needs individual client maintenance for every new feature release or version upgrade.
The dependencies between components also created unpredictability, as changing one part of the monolith could impact a completely different part – without the developer even realizing it.
You can compare it to your garage door breaking down because you tried to fix the kitchen sink.
Built for the future? Not even close. So, if you’re still with me – let me try and explain how we solved these issues by utilizing a cloud-native approach.
So, here we are:
What does cloud-native mean?
Cloud-native means creating solutions that are developed to run in the cloud. It is an approach to building services that utilize the unique benefits of the cloud, instead of transferring old software architecture into a cloud-hosted solution.
Simply put, instead of moving historic on-premise solutions onto virtualized computers, cloud-native means rewriting your technology to fit a cloud-hosted architecture.
“Ok, but what does that actually mean?” you might ask. Very fair question – let me give you an example:
If you’re building an electric car, you wouldn’t necessarily want to reuse the old frameworks designed for cars with a petrol engine.
Freed from having to carry a traditional engine, you can build the car differently to exploit the unique features of an electric motor – and build for that purpose.
This philosophy is exactly what a cloud-native approach entails.
Alright, so how did we do this, in practice?
Remember how I mentioned that our monolithic platform structure was at the core of our earlier problems? We needed a way to break down this monolith. The answer?
Microservices
So, what are microservices? Well, this does get technical, but here is the “official” rundown:
Microservices is an architectural approach to developing an application as a collection of services, where each service runs its specific task independently of the other services.
Each microservice can be turned on, turned off, scaled up or down, and changed – without ever interfering with anything other than that service.
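To make that independence concrete, here is a minimal, purely illustrative sketch. The service names and instance counts are my own assumptions for the example, not Vimond’s actual components:

```python
# Hypothetical sketch: two toy "services" that can be started, stopped,
# and scaled independently. Names and numbers are illustrative only.

class Service:
    """A stand-in for one microservice: it owns one task and its own capacity."""

    def __init__(self, name, instances=1):
        self.name = name
        self.instances = instances
        self.running = True

    def scale(self, instances):
        # Changing this service's capacity touches nothing else.
        self.instances = instances

    def stop(self):
        # Taking this service down leaves every other service running.
        self.running = False


metadata = Service("metadata-api")
transcoder = Service("transcoder", instances=4)

transcoder.scale(8)   # scale transcoding up for a live event
metadata.stop()       # take the metadata service down for maintenance

# The transcoder is completely unaffected by what happened to metadata.
assert transcoder.running and transcoder.instances == 8
assert not metadata.running
```

The point of the toy model is simply that each service carries its own lifecycle: stopping, changing, or scaling one never reaches into another.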
To break down what that means for us at Vimond: it meant a complete rewrite of our platform’s infrastructure. We moved away from the unique applications we had built in the past, and instead rebuilt our platform and modules as microservices in the cloud (AWS).
The Benefits of using Microservices
Instead of having a platform where everything is intertwined and entangled with each other, we are now able to make quick changes, targeted deployments, and maintenance to individual components – without running the risk of impacting any of our other services.
Picture a train – there’s a reason it has cars. The cars are easily interchangeable and can be removed or added to fit your needs at any given time. You can also extend or shorten your train whenever you need to.
In this analogy, the cars serve the same purpose as the microservices. Instead of one big train that we previously had to chop up and glue back together to make changes, we can now simply swap and “deploy” cars when appropriate.
This meant a reduction in customer risk and a great improvement in delivery speed. It let us scale down rigorous testing and QA processes, because we now know that every time we change something, we’re only changing that one piece – limiting the scope of QA to that piece alone.
Instead of carefully watching over legacy infrastructure, our engineers can now focus on architecting for resilience. We’re able to launch new features much faster, and we can launch them to all our customers at the same time.
Instead of painting with a single brush, we’re now composing the picture from hundreds of small individual pieces stuck to the canvas – still painting the full picture, but with pieces that are easy to replace should anything go wrong.
Automatic Scaling
Alright – hopefully that was useful and helped you understand better what a cloud-native approach has meant for Vimond in terms of resilience, increased speed of delivery, and more streamlined architecture.
But what about the other buzzword mentioned at the beginning of this piece? What about Automatic Scaling?
Now, let’s again go back to look at how things used to work because there’s one issue that I haven’t covered yet about the monolithic approach – and it’s an important one: Scaling.
For those who aren’t aware – distributing video online requires computing power. The more viewers you have, the more computing power you need for your distribution platform.
Think of it as going uphill with a car – the steeper the slope, the more gas you’ll need. If you run out of gas, the car stops. And here, that means your streaming service stops, and your customer service centre is going to get a lot of traffic in the days to come.
The traditional way of achieving scale was to ensure that, at any given point in time, you had more computing power available than you expected to use. This can be done in many different ways, but the key is that you had to scale resources up before an increase in viewership.
This is crucial, as it means you would almost always operate with oversized capacity, because you were leaving yourself a safety margin.
In practice, this meant that in everyday operations, your streaming service would need to be scaled up to whatever level your product owners and engineers saw fit.
They would look at viewing patterns, make predictions and leave a good safety margin to ensure an unexpected peak isn’t going to take the service down.
This preparation is absolutely necessary – if the viewership went above your capacity limits during a big event… well, let’s just say your press officer would get a great chance to prove their worth.
Now, if you did your job correctly, you would be left with additional capacity 100% of the time. But, this has a cost.
Running at overcapacity means in essence that you’re paying for a safety margin, and that if everything goes as planned, you’re continuously spending money on extra capacity you never needed. That’s a lot of cash!
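To put rough numbers on this, here is a back-of-the-envelope calculation. Every figure – server counts, the hourly rate – is entirely hypothetical and chosen only to illustrate the shape of the cost, not to reflect any real pricing:

```python
# Hypothetical, illustrative numbers only -- not real pricing or capacity data.
peak_servers = 100        # capacity provisioned for the worst-case peak
avg_servers_needed = 30   # what average traffic actually requires
cost_per_server_hour = 0.50
hours_per_month = 730

# Always-on peak capacity vs. paying only for what average traffic uses.
fixed_cost = peak_servers * cost_per_server_hour * hours_per_month
elastic_cost = avg_servers_needed * cost_per_server_hour * hours_per_month

print(f"Always-on peak capacity:      ${fixed_cost:,.0f}/month")
print(f"Pay-for-what-you-use:         ${elastic_cost:,.0f}/month")
print(f"Safety margin you never used: ${fixed_cost - elastic_cost:,.0f}/month")
```

Even with made-up numbers, the pattern holds: the gap between peak provisioning and average need is money spent on capacity that, in a good month, never did anything.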
It also increases the chances of something going wrong. Incorrect viewership forecasts, human error, faulty alarms – there are many reasons issues can arise.
So, how to solve this problem?
The answer again lies with utilizing a cloud-native architectural approach. To recap, this means we’re utilizing cloud services and applications as our foundation – and building for that purpose.
Now, these cloud services and applications can automatically allocate more computing power when they need it. Each microservice analyzes its current status and the viewership trend, and takes appropriate action when needed.
This means that your system will “order” resources in the cloud as it sees fit, whenever viewer load increases or decreases.
If a microservice or component sees that it’s getting close to its load limit, it’ll automatically increase its capacity – and then decrease it again once the load on the component is lower.
This enables what we call on-demand access to cloud capacity.
Full disclaimer: not every single microservice or component can automatically scale up. Some third-party components just aren’t there quite yet, but for our services that are (the large majority), it’s a revolutionary way of utilizing the true potential of the cloud.
So what does it actually mean for you?
Well, first of all, it means you no longer have to pay for extra capacity as a safety blanket. You pay only for the computing power you actually require, and trust that the system will have your back if needed.
And you no longer have to scale every part of your platform, as the available computing power will be dynamically allocated between microservices as needed – saving you even more computing power.
But, what it also means – and this might be even more important – is that it decreases your risk of anything going wrong. Now, that might seem illogical:
“How can reducing my extra capacity – my safety blanket – mean less risk to my operations?”
I see your point – but it actually does… and in many cases, by a great margin!
Since a cloud-native architecture automatically scales resources up and down, it removes the need for human intervention in this process. The relevant components no longer need to be scaled manually, unpredictable dependencies no longer have to be touched, and you’re no longer left guessing how much capacity you’re going to need. Therefore, you no longer need that extra capacity.
With the risk of human error greatly reduced, these improvements together make for a much more cost-effective, scalable, and flexible way of managing your available computing resources and capacity.
Conclusion
So, there we are. That’s cloud-native and auto-scaling in the world of the Vimond OTT platform.
I hope this was useful and helped you understand what these topics actually cover.
Now, there are many real-life examples of how this is being used today. If you’ve come this far in the article, and find yourself thinking about how you could benefit from this in your operations, give us a call and we’re happy to tell you all about them.