Red5 performance: number of users

I need to build infrastructure for a video streaming service that will be able to handle >100 live streams with an average of 50 viewers each, where the top stream can have up to 5000 viewers. All streams will be served as multicast, no extra transcoding will be required (input and output will be H.264), and no recording will be made. I'm curious how many streams a simple Red5 server can handle and how to calculate the maximum number of users.
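The usual back-of-the-envelope calculation is bandwidth-bound: each viewer receives its own copy of the stream (RTMP/HTTP delivery from Red5 is unicast per client), so concurrent viewers per server are roughly the usable uplink divided by the stream bitrate. A rough sketch of that math, where the 2 Mbps bitrate and 1 Gbps uplink are assumptions, not measured Red5 figures:

```typescript
// Rough capacity estimate for a single streaming server (assumed figures, not Red5 benchmarks).
const streamBitrateMbps = 2;   // assumed average H.264 stream bitrate
const uplinkMbps = 1000;       // assumed 1 Gbps NIC/uplink on the server
const usableFraction = 0.7;    // headroom for protocol overhead and bursts

const maxConcurrentViewers = Math.floor((uplinkMbps * usableFraction) / streamBitrateMbps);
console.log(`~${maxConcurrentViewers} concurrent viewers per server`); // ~350 at these figures

// 100 streams * 50 viewers = 5000 viewers => ~5000 * 2 Mbps = 10 Gbps total egress,
// so a single 1 Gbps box cannot serve the whole load on its own.
console.log(`total egress needed: ${100 * 50 * streamBitrateMbps} Mbps`);
```

Connection handling and CPU also matter, but for pass-through delivery with no transcoding the network interface is usually the first limit you hit.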

Related

Does video.js support MPEG2-TS/UDP streams?

I am just starting to play around with video.js and really like it. I currently have some code where I have two players showing two different HLS streams in a single browser page.
However, HLS inherently has high latency and that may not work for my project. So I am wondering if video.js can receive and play MPEG2-TS/UDP streams, which would have less latency (I can easily change the format of all of my source video streams).
My basic requirement is to have 2 players in a single browser page: one player showing the video stream sent from a particular network node, and the second showing how a different network node received that same stream. So the two video.js players on the page are showing 2 video streams that are actually the same video, so they are highly correlated. This is why latency is a critical requirement for this project.
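For reference, the current two-player setup is roughly like this (element IDs and stream URLs are placeholders, not my real ones):

```typescript
// Two side-by-side video.js players, each playing its own HLS stream.
// Assumes two <video id="player-a"> and <video id="player-b"> elements
// with the "video-js" class exist in the page.
import videojs from 'video.js';

const playerA = videojs('player-a', { autoplay: true, muted: true });
playerA.src({ src: 'https://example.com/nodeA/stream.m3u8', type: 'application/x-mpegURL' });

const playerB = videojs('player-b', { autoplay: true, muted: true });
playerB.src({ src: 'https://example.com/nodeB/stream.m3u8', type: 'application/x-mpegURL' });
```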
Thanks,
-Andres

Youtube API's maximum number of video uploads per day

We are building an app with a video upload functionality. We were wondering if we could use a Youtube account to upload all of our user videos. They should only be accessible via our app... we don't mind if ads show up while viewing them.
If the app grows, we're looking at potentially thousands of uploads per day.
Does Youtube support this? If a few videos get flagged, will the "master" account be shut down?
Finally, if YouTube is not the right choice, do you have any recommendation? We would like to avoid hosting them ourselves as much as possible... since streaming large amounts of video is an enormous challenge for a startup.
Thank you!
Some information on the video uploads:
https://developers.google.com/youtube/v3/docs/videos/insert
This method supports media upload. Uploaded files must conform to
these constraints: Maximum file size: 128GB Accepted Media MIME types:
video/*, application/octet-stream
You can get the quota information here: https://developers.google.com/youtube/v3/getting-started#quota
Projects that enable the YouTube Data API have a default quota
allocation of 1 million units per day, an amount sufficient for the
overwhelming majority of our API users.
...
Different types of operations have different quota costs.
A simple read operation that only retrieves the ID of each returned
resource has a cost of approximately 1 unit. A write operation has a
cost of approximately 50 units. A video upload has a cost of
approximately 1600 units.
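Taking those numbers at face value, the default allocation caps uploads well below "thousands per day":

```typescript
// Back-of-the-envelope check using the costs quoted above.
const dailyQuotaUnits = 1_000_000; // default daily allocation
const costPerUpload = 1_600;       // approximate cost of one videos.insert call

const maxUploadsPerDay = Math.floor(dailyQuotaUnits / costPerUpload);
console.log(maxUploadsPerDay); // 625 uploads per day, before any reads or other calls
```

So with the default quota you are limited to roughly 625 uploads per day; anything beyond that would need a quota increase from Google.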
Yes, YouTube can block API access, not only for flagged videos but at any time, as described here: https://developers.google.com/youtube/terms/api-services-terms-of-service#termination
24.2 Termination by YouTube. Notwithstanding anything to the contrary, YouTube reserves the right to (i) suspend or terminate access to, or
use of, any aspects of the YouTube API Services by you, your API
Client(s) and those acting on your behalf), and (ii) terminate the
Agreement (or any portion thereof), as applied to any specific user or
API Client, category of users or API Clients, or all users or API
Clients at any time. For example, we may need to exercise such rights
in instances of your breach of this Agreement, court order, when we
believe there to have been misconduct or conduct which may create
potential liability for YouTube or its Affiliates. Although we will
try to give you reasonable notice, we have no obligation to do so.

Do WebRTC's APIs have any limitations?

What are the limitations of WebRTC's APIs, if any?
What is the maximum number of peers that can be connected at a time? I mean, since WebRTC is peer-to-peer, hardware and bandwidth resources are divided among the peers and no single server is responsible for handling the media streams, so what is the maximum number of users we can connect using WebRTC?
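There is no fixed number in the API itself; in a full-mesh call the practical ceiling is each peer's upload bandwidth, because every peer sends a separate copy of its stream to every other peer. A small sketch of that scaling, where the bitrate and uplink figures are assumptions:

```typescript
// Full-mesh WebRTC: each of n peers uploads (n - 1) copies of its own stream.
const streamBitrateMbps = 1.5; // assumed per-peer video bitrate
const peerUplinkMbps = 10;     // assumed upload capacity of a typical home connection

for (const n of [2, 4, 6, 8, 10]) {
  const uploadNeeded = (n - 1) * streamBitrateMbps;
  const fits = uploadNeeded <= peerUplinkMbps ? 'fits' : 'exceeds';
  console.log(`${n} peers: each uploads ${uploadNeeded} Mbps (${fits} a ${peerUplinkMbps} Mbps uplink)`);
}
// Downloads scale the same way, so small meshes top out quickly;
// larger calls typically route media through an SFU/MCU instead.
```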

How to measure the performance of my site's video streaming and playback?

I have developed a site that hosts user videos. I store the video files in AWS S3, deliver them through AWS CloudFront, and use video.js as the site's player, with HTML5 as the default and Flash as a fallback.
Generally the video streaming seems to work fine, but in some cases I receive complaints from users about slow or choppy video playback. I want to create some tests that measure streaming performance so I can distinguish problems on the user's side (e.g. a slow connection) from problems with my service.
Are there any best practices or tools to collect video delivery metrics? I'm interested in open source solutions or something I can implement myself because it's just a personal project, but I don't want to reinvent the wheel.
Testing progressive download means checking the transmission bandwidth and its continuity. For example, with a high transmission rate the initial client buffer fills faster and playback starts sooner; losing that transmission capacity later, however, can cause re-buffering. The total transmission time of your file must be lower than the video duration.
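That last condition gives a simple pass/fail check: the sustained download rate has to be at least the file size divided by the video duration. A tiny sketch with placeholder numbers:

```typescript
// Minimum sustained throughput so that download time <= playback duration.
const fileSizeMB = 100;      // placeholder: size of the progressive-download file
const durationSeconds = 600; // placeholder: 10-minute video

const minThroughputMbps = (fileSizeMB * 8) / durationSeconds;
console.log(`need >= ${minThroughputMbps.toFixed(2)} Mbps sustained`); // ~1.33 Mbps

// If the measured rate dips below this long enough to drain the client buffer,
// the player will re-buffer even though the overall average looks fine.
```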
To identify potential issues you can start with the S3 bucket logs and the CloudFront cache statistics and access logs.
There's a load-testing tool written in Java called Apache JMeter. It cannot execute JavaScript, so it must be configured to request the video files directly.
The disadvantage of running a load test from a single location is pretty evident: different geographical areas and carriers have different characteristics, so the results will differ.
There are online, non-open-source tools that can load test from multiple locations, but they are generally paid, though some offer free trials.
Here's another way to look at this.
but in some cases I receive complaints from users for slow or choppy video playback.
If you're serving an adaptive HLS stream through CloudFront and the video is still choppy for some users, that's probably because of their own internet connection speeds.
In that case, you can encode your video in multiple resolutions (using just one AWS MediaConvert job, btw) - like 1080p, 720p, 360p, 240p, 144p etc.
And then Videojs has a stream switcher plugin that will 1) automatically start playing the highest resolution that's right for the viewer's connection, and no higher, and 2) give the user a "Settings" (gear) icon in the control bar that they can use to switch resolutions manually.
That way, even those with really poor internet connections should be able to watch your video.
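Wiring that up looks roughly like the sketch below. It uses the videojs-resolution-switcher plugin; the option names are from its README as I remember them and the URLs are placeholders, so verify against the plugin docs:

```typescript
// Hedged sketch: multi-resolution sources plus the videojs-resolution-switcher plugin.
import videojs from 'video.js';
import 'videojs-resolution-switcher'; // registers the plugin with video.js

const player = videojs('my-video', {
  controls: true,
  plugins: {
    videoJsResolutionSwitcher: {
      default: 'high',    // start on the highest available rendition
      dynamicLabel: true, // show the current quality in the control bar
    },
  },
});

// One source per rendition; the gear menu lets viewers switch manually.
(player as any).updateSrc([
  { src: 'https://example.com/video-1080.mp4', type: 'video/mp4', label: '1080p', res: 1080 },
  { src: 'https://example.com/video-720.mp4',  type: 'video/mp4', label: '720p',  res: 720 },
  { src: 'https://example.com/video-360.mp4',  type: 'video/mp4', label: '360p',  res: 360 },
]);
```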
Of course, the other alternative is to use progressive download videos: the viewer can simply click play, then immediately click pause, wait for the video to buffer, and then watch it after it's fully downloaded.
Check out the Videojs Resolution Switcher demo here.
-- Ravi Jayagopal

apache webserver adaptive video streaming

While looking for a video streaming server with adaptive bitrate over HTTP, I came across some proprietary servers/implementations, namely Adobe Dynamic Streaming for Flash, Apple HTTP adaptive streaming, and a similar one from Microsoft.
What I am looking for is ABR streaming from an Apache web server. I found out that MPEG-DASH is the standard for this, and it looks like Apache supports it, but I am not able to get started with it.
Can someone point me to an example or steps to achieve this?
Also, I understand that such streaming requires a bunch of video files acting as segments of the source video at different bitrates, plus some metadata file.
I am not able to understand how I can provide this to Apache to make it stream to the client (browser).
Appreciate help or directions on this.
Thanks.
Using MPEG-DASH, streaming becomes very simple. The video is stored at different quality levels (in terms of bitrate, resolution, etc.) on any HTTP server, each version divided into segments of a few seconds in length. The (intelligent) client application requests the segment for a specific time and quality (depending on the current network capacity) via standard HTTP GET requests. So you can use your "standard" Apache or any other web server.
To get started, I suggest getting some DASH content, either from DASHIF, or generating content yourself the easy way using a transcoding platform such as bitcodin.
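On the playback side, once the .mpd manifest and the segments are just static files under Apache's document root (you may need to make sure .mpd is served with the application/dash+xml MIME type), all the adaptive logic lives in the client. A minimal sketch using the dash.js reference player, with a placeholder manifest URL:

```typescript
// Minimal dash.js setup: the MPD and segments are plain static files served by Apache.
import * as dashjs from 'dashjs';

const video = document.querySelector('#videoPlayer') as HTMLVideoElement;
const player = dashjs.MediaPlayer().create();

// The player parses the MPD, then picks and fetches segments at the bitrate
// that fits the measured network throughput, using ordinary HTTP GETs.
player.initialize(video, 'https://example.com/dash/manifest.mpd', /* autoplay */ true);
```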