Parsing HLS manifest of live stream in Safari to retrieve time-based metadata

I am using the native Safari player implementation to stream video with the HLS streaming protocol.
My goal is to get time-based metadata (such as EXT-X-DATERANGE) from a live stream manifest.
As far as I know, it is not possible to retrieve this data directly, because the streaming logic is fully controlled by the Safari player, which does not expose it.
For now, I have come up with two possible solutions (sketches for both follow below):
Manually download the manifests and parse out the EXT-X-DATERANGE tag. With this approach, the refresh timing has to be managed manually too, and, of course, the number of requests for the playlists will increase.
Desktop Safari supports MSE, which makes it possible to have full control over manifest retrieval and parsing. There are excellent libraries that already provide this functionality, such as shaka-player or hls.js. It is possible to implement a custom response filter for segments (shaka-player) or listen to the Hls.Events.FRAG_CHANGED event (hls.js) in order to get access to the playlist. The problem is that Safari on iOS still does not support MSE, so this solution cannot be applied on mobile.
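For illustration, a minimal sketch of the first option: polling the live media playlist alongside the native player and parsing EXT-X-DATERANGE ourselves. The playlist URL is a placeholder, and the attribute parsing is deliberately simplistic:

```js
// Minimal sketch of option 1: poll the live media playlist next to the
// native Safari player and parse EXT-X-DATERANGE ourselves.
// PLAYLIST_URL is a placeholder; the parsing below assumes well-formed
// KEY=VALUE attribute pairs.
const PLAYLIST_URL = 'https://example.com/live/playlist.m3u8';

async function pollPlaylist() {
  const text = await (await fetch(PLAYLIST_URL)).text();

  // Collect every EXT-X-DATERANGE line and split its attribute list.
  const dateRanges = text
    .split('\n')
    .filter((line) => line.startsWith('#EXT-X-DATERANGE:'))
    .map((line) => {
      const attrs = {};
      for (const m of line.slice(17).matchAll(/([A-Z0-9-]+)=("[^"]*"|[^,]*)/g)) {
        attrs[m[1]] = m[2].replace(/^"|"$/g, ''); // strip surrounding quotes
      }
      return attrs;
    });
  console.log('date ranges:', dateRanges);

  // Re-poll roughly once per target duration, as the HLS spec suggests
  // for live playlists.
  const target = /#EXT-X-TARGETDURATION:(\d+)/.exec(text);
  setTimeout(pollPlaylist, (target ? Number(target[1]) : 6) * 1000);
}

pollPlaylist();
```

And a sketch of the second, MSE-based option with hls.js. LEVEL_LOADED is used here instead of FRAG_CHANGED because it hands over the parsed playlist directly; which fields the details object contains (e.g. date ranges) depends on the hls.js version, so inspect it for the version you ship:

```js
// Minimal sketch of option 2 on MSE-capable browsers (desktop Safari,
// Chrome, Firefox). The playlist URL is a placeholder.
const video = document.querySelector('video');
if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource('https://example.com/live/playlist.m3u8');
  hls.attachMedia(video);
  hls.on(Hls.Events.LEVEL_LOADED, (event, data) => {
    console.log('parsed playlist details:', data.details);
  });
}
```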
Are there any other ways to retrieve time-based metadata (such as EXT-X-DATERANGE) using the native Safari player implementation?
Thanks a lot in advance!

Related

How to measure the performance of my site's video streaming and playback?

I have developed a site that hosts user videos. I store the video files in AWS S3, deliver them through AWS CloudFront, and use video.js as the site's player, with HTML5 as the default and Flash as a fallback.
Generally the video streaming seems to work fine, but in some cases I receive complaints from users about slow or choppy video playback. I want to create some tests to measure streaming performance so I can distinguish problems on the user's side (e.g. a slow connection) from problems with my service.
Are there any best practices or tools to collect video delivery metrics? I'm interested in open-source solutions or something that I can implement myself, because it's just a personal project, but I don't want to reinvent the wheel.
Testing progressive download means checking the transmission bandwidth and its continuity. For example, with a high transmission rate the initial client buffer fills faster and playback starts sooner; losing that transmission capacity at some later time can cause re-buffering. Overall, the total transmission time of your file must be lower than the video duration.
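On the client side, a minimal sketch of collecting such metrics with video.js: startup time and stall counts, beaconed to a hypothetical /metrics endpoint so they can be correlated with the server-side logs mentioned next:

```js
// Sketch: client-side playback metrics with video.js. Event names are
// standard video.js/HTML5 media events; the /metrics endpoint and
// element id are placeholders.
var player = videojs('my-video');
var metrics = { startupMs: null, stalls: 0 };
var requested = performance.now();

// The first 'playing' event approximates time-to-first-frame.
player.one('playing', function () {
  metrics.startupMs = Math.round(performance.now() - requested);
});

// 'waiting' fires when playback stalls because the buffer ran dry.
player.on('waiting', function () {
  metrics.stalls += 1;
});

// Beacon the metrics on exit so slow/choppy sessions can be correlated
// with the CloudFront and S3 logs.
window.addEventListener('beforeunload', function () {
  navigator.sendBeacon('/metrics', JSON.stringify(metrics));
});
```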
To identify potential issues you can start with the S3 bucket logs and the CloudFront cache statistics and access logs.
There's a load-testing tool written in Java called Apache JMeter. It cannot execute JavaScript, so it must be configured to request the video files directly.
The disadvantage of using a load-testing tool from a single location is pretty evident: different geographical areas and carriers have different characteristics, so test results will differ.
There are online, non-open-source tools that can load-test from multiple locations, but they are generally paid, though some offer free trials.
Here's another way to look at this.
"but in some cases I receive complaints from users for slow or choppy video playback."
If you're using an adaptive HLS stream, and you're using CloudFront, and the video is still choppy for some users, that's probably because of their own internet connection speeds.
In that case, you can encode your video in multiple resolutions (using just one AWS MediaConvert job, btw), like 1080p, 720p, 360p, 240p, 144p, etc.
And then Videojs has a stream-switcher plugin that will 1) automatically start playing the highest resolution (and no higher) that's right for the viewer's connection, and 2) give the user a "Settings" (gear) icon in the control bar that they can use to switch resolutions manually.
That way, even those with really poor internet connections should be able to watch your video.
Of course, the other alternative is to use progressive-download videos: the viewer can simply click play, then immediately click pause, wait for the video to buffer, and play it after it's fully downloaded.
Check out the Videojs Resolution Switcher demo here.
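For illustration, a configuration sketch assuming the videojs-resolution-switcher plugin's documented options; the URLs, labels and element id are placeholders, so check the plugin README for the exact API of the version you install:

```js
// Sketch using the videojs-resolution-switcher plugin mentioned above.
var player = videojs('my-video', {
  plugins: {
    videoJsResolutionSwitcher: {
      default: 'high',    // start with the highest-quality source
      dynamicLabel: true  // show the current label on the gear control
    }
  }
});

// One source entry per encoded rendition.
player.updateSrc([
  { src: 'video-1080.mp4', type: 'video/mp4', label: '1080p', res: 1080 },
  { src: 'video-720.mp4',  type: 'video/mp4', label: '720p',  res: 720 },
  { src: 'video-360.mp4',  type: 'video/mp4', label: '360p',  res: 360 }
]);
```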
-- Ravi Jayagopal

SMIL adaptive streaming in Videojs

What is required to use a SMIL file to get adaptive streaming in a videojs player? I have created the SMIL file in my Wowza application and it is creating my 4 separate streams and making them available. However, I cannot get my webpage, which uses videojs, to correctly play the SMIL file. Hints on that coding, or pointers to the correct documentation, would be greatly appreciated.
There aren't many implementations of SMIL players. I'm sure I've seen Wowza URLs suggesting it will output the SMIL as other formats, something like whatever.smil/manifest.m3u8. That's HLS, which can be played natively on mobile and Safari, and with videojs-contrib-hls elsewhere.
I know the question is old, but I've been struggling with this recently, so I want to share my experience in case anyone is interested. My scenario is very similar: I want to deliver adaptive bitrate streaming from Wowza to clients using videojs.
There is a master link that explains how to set up and run Wowza Transcoder for live streaming, and how to set up your adaptive bitrate streams using a SMIL file. Following the video there, you can get a stream that uses ABS, but the SMIL file is attached to the stream name, so it is not a solution if you have streams that come to Wowza from another media-server origin and need to be transcoded before being served to the clients. The article mentions a few key things (like Stream Name Groups), but somehow they don't seem entirely clear, at least to me. So here is some clarification of what I understood from all the articles I read, and what I did to achieve ABS:
You can achieve ABS in Wowza either with SMIL files or with Stream Name Groups (NGRP). An NGRP refers to a block of streams defined in the Transcoder template that can be played back using multi-bitrate streaming dynamically (this is what I used). SMIL files are used to create a "static" list of streams for multi-bitrate VOD streaming. If you are using Wowza origin-edge delivery, you'll need the .smil file, because NGRPs do not get forwarded to the edge. (Source for all this information: here.)
In case you need the SMIL file, you probably need to generate a new one for every stream, and you probably want to do that in an automated way, so the best approach would be an HTTP request (the link above explains how to achieve this).
In case you can live with NGRP, things are a bit easier:
You need to enable Wowza Transcoder (this is pretty easy and steps are in the video I mention above).
You should create your own Transcoder template with the different stream presets you want to deliver; as an example, you can check the default ones that are already there. The more presets you add, the more work Wowza will need to do whenever a stream comes in, since it will generate a new stream for every preset you have defined.
Now is when we generate the NGRPs. In your Transcoder template you can generate as many NGRPs as you want. (To clarify: these are like groups of streams that you set in your clients' video player. Each NGRP contains the streams the player will be able to use when doing adaptive bitrate streaming.) For instance, the default template defines NGRPs such as _all and _mobile.
If you play the "_mobile" NGRP in the client's video player, the ABS algorithm in the player will be able to adapt itself to play either the 240p or the 160p stream, based on the client's capabilities.
So imagine you have these two NGRPs. In order to play them in videojs, you will need to set the source to:
http://[wowza-ip-address]:1935/<name-of-your-application>/ngrp:myStream_all/playlist.m3u8
or
http://[wowza-ip-address]:1935/<name-of-your-application>/ngrp:myStream_mobile/playlist.m3u8
... based on how many options you want the client player to have for ABS. (For instance, if your targets are old mobile devices, you probably just want to offer a couple of low-bitrate streams.)
(This is for the case where you're delivering an HLS stream. For other formats the path ending changes; for instance, for a DASH stream you would have "/manifest.mpd" instead of "/playlist.m3u8".)
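For illustration, a minimal videojs sketch pointing at one of the NGRP playlists above. The host, application and stream names are placeholders, and on browsers without native HLS support you'd also need a plugin such as videojs-contrib-hls:

```js
// Sketch: point video.js at an NGRP HLS playlist (placeholder names).
var player = videojs('my-video');
player.src({
  src: 'http://wowza.example.com:1935/live/ngrp:myStream_all/playlist.m3u8',
  type: 'application/x-mpegURL'
});
player.play();
```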
That is all. There is also a very helpful link in the video.js documentation explaining how it does the bitrate switching: here.
I hope it helps someone! At least clarifying things! :)

Create a custom desktop YouTube player

I want to create an application capable of playing the audio of YouTube videos and of saving the downloaded content in a local cache, so that when the user decides to resume or replay a video, it doesn't have to download that part of the video again but only the remaining part. (The user can decide what to do with the cache, and how to organize it.)
It is also very convenient on mobile (my main focus), but I'd like to create a desktop version too, for experimental purposes.
So, my question is: does YouTube provide any API for this? To cache the downloaded content, my application has to download the content itself rather than use any embedded player (remember it is a native application). I have a third-party application on my Android system that plays YouTube videos, so I think it's possible, unless the developers used some sort of hack; again, this is what I don't know.
Don't confuse this with the web gdata info API and the embed API; that is not what I want. What I want is to handle the video transfer myself.
As far as I know, there is no official API for that. However, you could use libquvi to look up the URLs of the real video data, or you could have a look at how they do it and reimplement it yourself (see here).

Streaming music on your website through custom player / application (iTunes)

I was doing some research to find ways that would allow me to stream music on my website legally. I came across the iTunes partner program, which allows streaming music on a website through their embedded players. I was wondering: is it possible to stream iTunes music through your own custom player? If that is not possible via iTunes, what other methods are available?
You could do this with server software like Icecast; there are some good tutorials on setting it up here: http://www.icecast.org/docs.php
Depending on how many browsers you want to support, you might want to set up two streams: one in MP3/OGG and a "backup" stream in Flash. Then add some detection of what the browser supports and present the correct stream (i.e. use the HTML5 <audio> tag to play MP3/OGG in browsers that support it, and use your Flash stream for the rest), as sketched below.
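A minimal sketch of that detection, assuming hypothetical stream mount points on the Icecast server:

```js
// Probe what the browser's <audio> element can play and fall back to
// Flash otherwise. The stream URLs are hypothetical Icecast mounts.
var audio = document.createElement('audio');

function pickStream() {
  if (audio.canPlayType && audio.canPlayType('audio/mpeg')) {
    return { kind: 'html5', url: '/stream.mp3' };
  }
  if (audio.canPlayType && audio.canPlayType('audio/ogg; codecs="vorbis"')) {
    return { kind: 'html5', url: '/stream.ogg' };
  }
  return { kind: 'flash', url: '/fallback.swf' }; // hand off to your Flash player
}

var choice = pickStream();
if (choice.kind === 'html5') {
  audio.src = choice.url;
  document.body.appendChild(audio);
  audio.play();
}
```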
Their program allowing playback of music from the iTunes Store is likely only for those with the intention to sell music; without providing a commerce business, you'd be breaking their partner program T&Cs.

Does WebRTC allow to create audio, video and text chat?

I want to create an audio, video and text message chat. Is that possible using WebRTC? Or does it only allow audio and video chats?
One side of my app will be implemented in the browser. The other one using the native C++ API.
Does anyone have examples for the native C++ API and/or JavaScript?
The WebRTC specification is still very much in flux, but there's a DataChannel API in the spec that is implemented in an early form in both Firefox and Chrome. DataChannels are intended to allow you to send arbitrary bytes between peers, and the spec provides for both reliable (TCP-like) and unreliable (UDP-like) channels.
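For illustration, a minimal DataChannel sketch on the JavaScript side. Signaling (exchanging the offer/answer and ICE candidates between peers) is application-specific and omitted here:

```js
// Minimal RTCDataChannel sketch for the text part of the chat.
const pc = new RTCPeerConnection();

// Reliable, ordered channel by default (the TCP-like mode).
const chat = pc.createDataChannel('chat');
chat.onopen = () => chat.send('hello from the browser side');
chat.onmessage = (event) => console.log('peer:', event.data);

// For unreliable, UDP-like delivery, relax the defaults:
const lossy = pc.createDataChannel('telemetry', {
  ordered: false,
  maxRetransmits: 0
});
```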
I am not sure if WebRTC on its own allows for text chatting. I was able to create an Android application that performed all of this, but only with the combination of Google's Libjingle and WebRTC libraries. Within the Libjingle library there are several example programs/pieces of code that demonstrate the library's functionality. The call example in Libjingle sounds very similar to what you want to do, and is what I built my Android application from. The only thing is I have not yet ported it to a web browser, so I am not sure whether Libjingle will work there.
I have begun looking into it, and I have found some people on the WebRTC discussion group who have developed a very nice multi-user video chat application for the browser, built using WebRTC. It is capable of video (along with voice) communication as well as text chatting. I do not know if this matters, but it all occurs within a single interface (meaning it does not seem to allow separate, single-mode communication: text only, voice only, or video only). I am sure it would not be too difficult to separate them if you wanted or needed to. They have posted all of their code on GitHub and seem to be actively updating and improving it.