Riding the Wave of Over The Top Video Streaming


Sky Technology Conference 2017 – Why we should all care about video quality?

As a broadcaster operating in both satellite and OTT spaces, we have exceptionally talented developers who create amazing software that millions of customers use every day (Now TV, Sky Q, Sky Go, Sky Sports). Whilst the OTT software stack is critical to success, we are reminded that we effectively give it away for free, so that customers can enjoy the industry’s best sports, movies, entertainment and news content on their devices.

I was invited to speak at the Sky Technology Developer conference on the topic of “Why we should all care about Video Quality?”. The main focus of my talk was to demonstrate that you need an end-to-end view of the whole ecosystem to achieve the quality of experience that our customers expect.

At Sky, we use Conviva to record millions of streaming sessions every day across our OTT clients. This highly valuable data is used to measure the viewing experience and make automatic changes, such as swapping CDN A with CDN B when conditions dictate.

With any OTT streaming service, the ultimate measure of success is to have the technology get out of the way of the customer experience. If we can achieve this, then our business will continue to grow market share and prosper. Conviva is helping us with this goal, and good software integration is critical to ensuring all devices get a consistent experience with four key performance metrics as a minimum (a simple sketch of how these might drive decisions follows the list);

  • Buffering ratio.
  • Average bitrate.
  • Video start failure.
  • Exit before video start.
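
As a rough illustration of how these metrics can drive automated decisions such as the CDN switching mentioned above, here is a minimal sketch in Python. The field names, thresholds and the switching rule are all assumptions for illustration; they are not Conviva’s data model or our production logic.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Session:
    cdn: str                 # e.g. "CDN-A" or "CDN-B"
    play_seconds: float      # time spent playing video
    buffer_seconds: float    # time spent rebuffering
    avg_bitrate_kbps: float  # average delivered bitrate
    started: bool            # did playback ever start?
    exited_early: bool       # viewer left before the first frame

def qoe_metrics(sessions: List[Session]) -> dict:
    """Aggregate the four key QoE metrics for a set of sessions."""
    n = max(len(sessions), 1)
    play = sum(s.play_seconds for s in sessions)
    buff = sum(s.buffer_seconds for s in sessions)
    return {
        "buffering_ratio": buff / max(play + buff, 1e-9),
        "average_bitrate_kbps": sum(s.avg_bitrate_kbps for s in sessions) / n,
        "video_start_failure": sum(not s.started and not s.exited_early for s in sessions) / n,
        "exit_before_video_start": sum(s.exited_early for s in sessions) / n,
    }

def should_switch_cdn(current: dict, alternative: dict,
                      max_buffering_ratio: float = 0.02) -> bool:
    """Very simplified policy: move traffic if the current CDN is
    rebuffering noticeably more than the alternative."""
    return (current["buffering_ratio"] > max_buffering_ratio
            and alternative["buffering_ratio"] < current["buffering_ratio"])
```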

Below, I present what I believe to be the formula for running a successful streaming service:

Amazing Content + Super Developers + Great Network + Customer Service = Quality.

Thank you for reading and I hope you enjoy the talk.

If you have any questions, please leave them in the comments section below.

BVE Conference 2017 – The Future of OTT

I was asked to speak on a panel discussion at the BVE conference in London on March 2nd. As you can imagine from the topic, this was a very wide ranging discussion covering all areas of our industry with the following agenda;

Consumers today can view more content on more devices than ever before, and the OTT revolution shows no signs of slowing down. What will OTT content and delivery look like in 5 or 10 years? This panel discussion will look at OTT today and tomorrow, focusing on the following challenges and opportunities:

  • Multiplatform viewing: mobile, connected device, and smart TV
  • Rights acquisition and DRM
  • Monetisation models: SVOD, AVOD, and TVOD
  • 4K and HDR
  • Data-driven advertising
  • Live events
  • The impact of VR and AR

Streaming Forum 2017 – Virtualisation for media publishers, CDNs and operators

My second public speaking engagement was a panel discussion at Streaming Forum 2017, running alongside the BVE conference at ExCeL in London on March 1st. The topic was “Virtualisation for media publishers, CDNs and operators”. I’d like to personally thank Dom, Andy and Mike for taking part in a very interesting debate and hope you enjoy watching the video.

We examine the evolution of technology trends in the streaming space over the past 20 years, and why the trend is towards virtualisation at all levels – from media encoding and transcoding to delivery – leading to higher availability, stronger security, and higher service velocity. The increasing availability of graphics processors in commodity chipsets is changing the dynamic of where and how media is treated. This session will present a case study showing how this can provide for ‘carrier-grade’ availability even if the underlying fabric is only offering commodity-grade SLAs. Panellists will discuss how virtualisation is evolving in the operator space, and how it is changing the strategy of telcos and media distributors as they seek to bring these capabilities to market.

My comments can be heard at;

25:45 Thoughts on virtualisation and the power of software and microservices. What happens when a Premier League event happens?

36:25 Thoughts on why we need low latency video streaming to be closer to live. Challenges with standard protocols and why metrics are important. Taking your colleagues on the journey with you. “Why do you need to touch the Sky 1 channel? Hey there’s this IP thing, it’s the future and you need to get on board.”

47:20 Thoughts on the opportunities provided by virtualisation and microservices to scale streaming and VOD services.

56:49 Thoughts on storage and high performance storage.

1:00:50 “Satellite is a cost effective mass market distribution system”. “People are time poor and we all have this challenge and if anything, this pace is accelerating and this mantra about doing less with more, also applies to people and to time.”

Moderator: Dom Robinson, Director and Creative Firestarter – id3as & Contributing Editor, StreamingMedia.com, UK
Andy Conway, Key Account Manager – Kontron, UK
Jeff Webb, Principal Streaming Architect – Sky, UK
Mike Ory, Engineering Manager, Digital Platforms – Verizon Digital Media Platforms, USA

Content Delivery World 2016

My first public speaking event as Principal Streaming Architect came at the Content Delivery World conference in November 2016, where I presented on “Innovations in Live Streaming to Multiple Platforms”.

My impression of the conference was that it was very well attended and the whole day was extremely interesting. As you can see from its title, the conference has a strong focus on content delivery and is especially relevant for broadcasters. The conference will return in 2017 and I look forward to attending.

What will you learn about Streaming at Scale?

The presentation addresses some of the most important questions for OTT providers that want to offer customers a premium experience;

  • The challenges of Live vs VOD?
  • How do we measure at scale?
  • How do we protect the customer experience?
  • How do we stream to millions of customers?

One of the most noteworthy streaming challenges is live sporting events such as Premier League football. In the presentation, I describe the perfect storm that occurs when Monday Night Football meets Game of Thrones.

I hope you find the presentation interesting, and it would be great to hear your comments.

Surfing the Video Tidal Wave

Scaling the Internet Infrastructure

Following on from my previous article about the oncoming tidal wave of online HTTP video and its impact on the Internet infrastructure, I reviewed a software solution that can help address this challenge.

One area that’s really important to understand is the direct correlation between video quality and content popularity: think of watching NFL football or a film and, just as you reach a good part, your client starts buffering. As a consumer you have limited choices except to vote with your feet. As a content provider, you need to ensure your infrastructure meets both current peak and future demands, because when a flash crowd occurs the customer will blame you rather than the CDN, which is an essential part of the video delivery chain.

What are the requirements for a good video delivery solution?

With any online HTTP video delivery solution, there are three essential things required to minimise video buffering.

  • Foremost, it must be really fast and run at network wire rate to leverage the available server hardware.
  • Furthermore, it must scale linearly so that we can increase performance with additional units.
  • Lastly, it must be a software-only solution that can be deployed quickly in the cloud or on premises.

Introducing aiScaler

aiScaler is a commercial high-performance HTTP caching proxy server. It achieves this performance by operating as a memory-based cache, using asynchronous polling within the Linux kernel, and avoiding disk access except for logs and configuration. This makes it ideal for live video streaming applications such as Apple’s HTTP Live Streaming, MPEG-DASH, Microsoft Smooth Streaming and Adobe Flash. To prove the marketing and validate the performance claims, I prepared a set of live video benchmarks.
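
aiScaler itself is proprietary, but the core idea of serving short-lived HLS playlists and segments from memory can be sketched in a few lines. The example below is a deliberately simplified illustration using Python and aiohttp, with an assumed origin URL and assumed TTLs; it is not aiScaler’s implementation. Responses are cached in RAM with short lifetimes, and concurrent misses for the same object are collapsed into a single origin fetch.

```python
import asyncio
import time

from aiohttp import ClientSession, web  # pip install aiohttp

ORIGIN = "http://origin.example.com"               # assumed origin host
TTL = {".m3u8": 2.0, ".ts": 60.0}                  # assumed cache lifetimes in seconds
cache: dict[str, tuple[float, bytes, str]] = {}    # path -> (expiry, body, content type)
locks: dict[str, asyncio.Lock] = {}

def ttl_for(path: str) -> float:
    return next((t for ext, t in TTL.items() if path.endswith(ext)), 10.0)

async def handle(request: web.Request) -> web.Response:
    path = request.path
    entry = cache.get(path)
    if entry and entry[0] > time.monotonic():      # fresh in memory: no origin hit, no disk
        return web.Response(body=entry[1], content_type=entry[2])

    lock = locks.setdefault(path, asyncio.Lock())
    async with lock:                               # collapse concurrent misses for the same object
        entry = cache.get(path)
        if not entry or entry[0] <= time.monotonic():
            # A real proxy would reuse one client session; this keeps the sketch short.
            async with ClientSession() as session:
                async with session.get(ORIGIN + path) as resp:
                    body = await resp.read()
                    ctype = resp.content_type or "application/octet-stream"
            cache[path] = (time.monotonic() + ttl_for(path), body, ctype)
        _, body, ctype = cache[path]
    return web.Response(body=body, content_type=ctype)

app = web.Application()
app.add_routes([web.get("/{tail:.*}", handle)])

if __name__ == "__main__":
    web.run_app(app, port=8080)                    # serve playlist and segment requests
```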

Recommended Architecture

My recommendation is to deploy aiScaler directly in front of your origin server platform, assuming that you’re already using a CDN to serve video content to customers, as this provides the best possible performance with added DDoS security protection. The high level architecture diagram below provides an example of this:

[Diagram: High-level architecture block diagram]
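
To make the benefit of this placement concrete, here is a back-of-the-envelope calculation with entirely assumed numbers, showing how a shared cache in front of the origin collapses requests arriving from many CDN edge locations into a handful of origin fetches.

```python
# Hypothetical numbers purely for illustration -- not measured values.
cdn_providers = 2               # e.g. CDN A and CDN B
edge_locations_per_cdn = 80     # points of presence that may each miss independently
segment_duration_s = 6          # each live HLS segment covers 6 seconds

# Without an origin-side cache, every edge location that misses fetches
# each new segment from the origin itself.
requests_per_segment_no_shield = cdn_providers * edge_locations_per_cdn

# With a memory cache in front of the origin, concurrent misses for the
# same segment are served from RAM after a single origin fetch.
requests_per_segment_with_shield = 1

print("Origin requests per segment, no shield:  ", requests_per_segment_no_shield)    # 160
print("Origin requests per segment, with shield:", requests_per_segment_with_shield)  # 1
print("Origin requests per hour, no shield:     ",
      requests_per_segment_no_shield * 3600 // segment_duration_s)                    # 96000
```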

Test Architecture

The aim of the benchmark testing is maximum throughput without errors, which can cause buffering, so I removed CDNs from the test scenario. The following high-level architecture diagram shows the test scenario for mobile HLS video:

[Diagram: High-level test architecture block diagram]
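
For readers curious what a “synthetic mobile HLS user” actually does, the sketch below shows the basic loop: poll the media playlist, download any segments not seen before, and record how long each download takes. The URL and timings are hypothetical, and the real test used a commercial load-testing platform rather than this script.

```python
import time
import urllib.request  # standard library only, to keep the sketch self-contained

PLAYLIST_URL = "http://test-origin.example.com/live/stream_1200k.m3u8"  # assumed URL

def fetch(url: str) -> tuple[bytes, float]:
    """Download a URL and return (body, response_time_seconds)."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
    return body, time.monotonic() - start

def synthetic_hls_client(duration_s: int = 1800) -> list[float]:
    """One synthetic viewer: poll the playlist and pull new segments for 30 minutes."""
    seen: set[str] = set()
    response_times: list[float] = []
    deadline = time.monotonic() + duration_s
    base = PLAYLIST_URL.rsplit("/", 1)[0]
    while time.monotonic() < deadline:
        playlist, rt = fetch(PLAYLIST_URL)
        response_times.append(rt)
        for line in playlist.decode().splitlines():
            if line and not line.startswith("#") and line not in seen:
                seen.add(line)
                _, seg_rt = fetch(f"{base}/{line}")
                response_times.append(seg_rt)
        time.sleep(6)   # roughly one target segment duration between playlist polls
    return response_times
```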

aiScaler Test Results

The test scenario was made up of 2500 synthetic mobile HLS users based in North America and Europe. Notice that I am not using a CDN but directly testing aiScaler as the origin cache layer, with public cloud provider CenturyLink.

  • Total CenturyLink bandwidth consumed in 30 minutes was 973GB.
  • aiScaler reduces client buffering, resulting in a smoother customer video experience, as shown in the average response graph below.
  • DNS Time To Live was set to 1 minute for fast failure detection.

[Graph: HLS average bit rates]

Average response time is a way of measuring how long the live video takes to download; it should remain relatively flat to avoid client buffering issues. The graph above shows that during the test the variation was only around 1.5 seconds. The test results demonstrate that aiScaler scales linearly to support multiple CDN partners concurrently, which provides greater resilience.
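
If you want to check the same “flatness” property on your own measurements, a snippet along these lines (illustrative only) buckets recorded response times into one-minute intervals and reports the spread between the slowest and fastest interval averages.

```python
from statistics import mean

def response_time_spread(samples: list[tuple[float, float]], bucket_s: int = 60) -> float:
    """samples: (timestamp_seconds, response_time_seconds) pairs.
    Returns max - min of the per-bucket average response times."""
    buckets: dict[int, list[float]] = {}
    for ts, rt in samples:
        buckets.setdefault(int(ts) // bucket_s, []).append(rt)
    averages = [mean(v) for v in buckets.values()]
    return max(averages) - min(averages)

# A spread of about 1.5 seconds over a 30-minute run would correspond
# to the variation reported in the graph above.
```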

Test Results Explained

During the test runs I captured a lot of data from both the clients and aiScaler; the following results are the most interesting.

[Graph: CTL Server Stats Combined]

  • All data was captured at the end of a 30-minute test run.
  • Tests were run three times and the results averaged.
  • All HTTP 200 OK responses were served at between 120 and 140 requests per second.
  • No HTTP 403/404 errors were observed.
  • No HTTP 5xx errors were observed.
  • Each aiScaler instance achieved a wire rate of 1Gbps error free, which proves CenturyLink have a great cloud platform.
  • CPU usage on the aiScaler instances reached 50% with 2 CPUs and 4GB of memory.
  • We could have achieved higher throughput, as the only bottleneck was the instance type limiting throughput to 1Gbps.
  • aiScaler has been independently tested in excess of 9Gbps on a single Intel Xeon based server.

Test Conclusions

The results were encouraging as they proved that multi-gigabit throughput could be achieved across multiple CenturyLink data centers with no errors. Delivering over-the-top (OTT) video content at scale is challenging without a combination of excellent caching software and a good cloud platform. Security is also a major factor in any online service, and I enabled aiScaler’s automatic DDoS protection during the testing period.

Summary

I was able to prove that by deploying aiScaler on a public cloud provider, wire rate performance of 1Gbps can be achieved on a single instance. If you require more than 1Gbps, simply choose a different instance type, or run multiple geographically dispersed instances for resilience.

Deploying aiScaler was a straightforward process and took less than 15 minutes. In testing it has proven to be very capable of serving online video at very high throughput and I’d recommend you seriously evaluate it for your business needs.

Jeff
