Infinite live HLS (handle EXT-X-MEDIA-SEQUENCE overflow) - live-streaming

I want to simulate an infinite live stream using HLS. Currently I am writing the .m3u8 file manually, and the .ts files are loaded from an external service that provides an endless supply of fragments.
This is an example of an m3u8 file:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:22730
#EXT-X-ALLOW-CACHE:YES
#EXT-X-TARGETDURATION:7
#EXTINF:6,
asd5.ts
#EXTINF:3,
asd6.ts
#EXT-X-DISCONTINUITY
#EXTINF:6,
xyz1.ts
I am increasing #EXT-X-MEDIA-SEQUENCE with a counter, but I am wondering what happens when it reaches its maximum value.

The spec (RFC 8216) defines decimal-integer values such as EXT-X-MEDIA-SEQUENCE as unsigned integers up to 2^64-1, but it says nothing about what a client should do on overflow, so every player will respond differently.
Try setting it to plausible maximums (65535, 4294967295, etc.) and see what happens.
In the real world, however, you will reach practical limits long before you reach technical ones (e.g. there is no practical reason to have a stream that lasts 100 years).
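For what it's worth, a rolling playlist only needs a monotonically increasing counter. Here is a minimal sketch in TypeScript (segment names and durations are invented), using BigInt so the counter itself can cover the full 0..2^64-1 range the spec allows:

// Sketch: write a rolling live playlist with a monotonically
// increasing media sequence. Segment names/durations are invented.
import { writeFileSync } from "fs";

let mediaSequence = 22730n; // BigInt: covers the full 0..2^64-1 range
const segments: { uri: string; duration: number }[] = [];

function addSegment(uri: string, duration: number): void {
  segments.push({ uri, duration });
  if (segments.length > 3) {
    segments.shift();        // drop the oldest segment...
    mediaSequence += 1n;     // ...and bump the sequence number with it
  }
  writePlaylist();
}

function writePlaylist(): void {
  const target = Math.ceil(Math.max(...segments.map(s => s.duration)));
  const lines = [
    "#EXTM3U",
    "#EXT-X-VERSION:3",
    `#EXT-X-MEDIA-SEQUENCE:${mediaSequence}`,
    `#EXT-X-TARGETDURATION:${target}`,
    ...segments.flatMap(s => [`#EXTINF:${s.duration},`, s.uri]),
  ];
  writeFileSync("live.m3u8", lines.join("\n") + "\n");
}

addSegment("asd5.ts", 6);
addSegment("asd6.ts", 3);
addSegment("xyz1.ts", 6);

The sliding window of three segments mirrors the example playlist above: each time the oldest segment drops off the front, the media sequence goes up by one.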

Related

HTML video performance in Safari - repeated byte range requests

I have a page that displays a looping video and the performance of the playback appears less than ideal - the video stutters and lags instead of playing smoothly. I learned through a bit of searching that Safari handles streaming video differently than other browsers because it makes byte range requests and expects the server to respond with status 206. Safari makes a series of requests with a range header set, while Chrome is able to make a single request.
When I view the network requests in Safari dev tools, I see the series of byte range requests happening as expected. But when the video loops back and starts from the beginning, I see the same series of requests happening a second time and continuously.
JSFiddle to reproduce in Safari.
<video src="https://jsoncompare.org/LearningContainer/SampleFiles/Video/MP4/Sample-MP4-Video-File-Download.mp4" autoplay loop muted playsinline preload="auto" controls></video>
Question is: is this by design? It seems inefficient that the browser is re-downloading the pieces of the video every time it plays. Performance wise, I suspect this is what is causing the non-smooth playback. Is caching supported for byte range requests in Safari?
I also suspect this behavior may have to do with the size of the asset. I see the described behavior for a video that’s ~40 MB but smaller videos are downloaded in two requests and the requests don’t repeat.
Helpful resources I came across
https://blog.logrocket.com/streaming-video-in-safari/
https://developer.apple.com/library/archive/documentation/AppleApplications/Reference/SafariWebContent/CreatingVideoforSafarioniPhone/CreatingVideoforSafarioniPhone.html#//apple_ref/doc/uid/TP40006514-SW6
The re-requesting of byte ranges is by design, or at least a product of the current implementation.
It seems that the mechanism Safari uses to cache requests does not currently allow for byte ranges - i.e. in simplistic terms, it looks only at the URL, so it would respond with whatever happened to be in the cache for that URL, ignoring the byte range.
It seems this is a limitation (or maybe a very 'pure' interpretation of the specs, not sure...), but the current advice is definitely not to cache when using byte-range requests on Apple-based solutions:
NSURLRequestReloadIgnoringLocalCacheData = 1
This policy specifies that no existing cache data should be used to satisfy a URL load request.
Important
Always use this policy if you are making HTTP or HTTPS byte-range requests.
(https://developer.apple.com/documentation/foundation/nsurlrequestcachepolicy/nsurlrequestreloadignoringlocalcachedata)
You can see more discussion on this here also in the Apple Developer forum: https://developer.apple.com/forums/thread/92119
I also think this is by design.
Recently I implemented video streaming for a website and saw this behaviour too. In Chrome and Firefox everything works fine, and even with the byte-range headers they always request small chunks.
The Safari devtools show that it downloads big chunks and often aborts these requests. This is very strange behaviour, especially when you proxy a video from AWS S3 or something like that: because Safari requests a large chunk, the server loads this chunk from S3 and sends it back, but Safari only needs a few bytes.
Here is a good article which goes into detail of this behaviour:
https://www.stevesouders.com/blog/2013/04/21/html5-video-bytes-on-ios/
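For reference, this is roughly the shape of the 206 Partial Content exchange Safari expects from the server - a minimal Node/TypeScript sketch (file name, port, and content type are assumptions):

// Minimal byte-range responder (sketch): answers Safari's
// "Range: bytes=start-end" requests with 206 Partial Content.
import { createServer } from "http";
import { statSync, createReadStream } from "fs";

const FILE = "video.mp4"; // hypothetical local file

createServer((req, res) => {
  const size = statSync(FILE).size;
  const match = /^bytes=(\d+)-(\d*)$/.exec(req.headers.range ?? "");
  if (!match) {
    // No Range header: send the whole file, but advertise range support.
    res.writeHead(200, { "Content-Length": size, "Accept-Ranges": "bytes" });
    createReadStream(FILE).pipe(res);
    return;
  }
  const start = Number(match[1]);
  const end = match[2] ? Number(match[2]) : size - 1;
  res.writeHead(206, {
    "Content-Range": `bytes ${start}-${end}/${size}`,
    "Content-Length": end - start + 1,
    "Accept-Ranges": "bytes",
    "Content-Type": "video/mp4",
  });
  // createReadStream treats "end" as inclusive, matching Content-Range.
  createReadStream(FILE, { start, end }).pipe(res);
}).listen(8080);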

Reduce Freeswitch video conference latency

We're experimenting with a Freeswitch based multiparty video conferencing solution (Zoom like). The users connect via WebRTC (Verto clients) and the streams are all muxed and displayed on the canvas (mod_conference in mux mode).
It works OK, but we notice high media latency on the mixed output, and this makes it very difficult to have a real-time dialogue. This is not load related: even with only 1 caller watching himself on the canvas (the muxed conference output), it takes almost 1 second to see a local move reflected on the screen (e.g. if I raise my hand I can see it happening on the screen after almost 1 second). This is obviously the roundtrip delay, but after discarding the intrinsic network latency (measured to be about 100 ms roundtrip) there seem to be around 800-900 ms of added latency. There's no TURN relaying involved, so it seems this is being introduced along the buffering/transcoding/muxing pipeline.
Any suggestions on what to try to reduce the latency? What sort of latency should we expect, what's your experience? Has anyone deployed a Freeswitch video conference with acceptable latency for bidirectional, real-time conversations? Ultimately I'm trying to understand whether Freeswitch can be used for multiparty real-time video conversation, or whether I should give up and look for something else. Thanks!

Getting HLS livestream in sync across devices

We are currently using ExoPlayer for one of our applications, which is very similar to the HQ Trivia app, and we use HLS as the streaming protocol.
Due to the nature of the game, we are trying to keep all the viewers of this stream to have the same latency, basically to keep them in sync.
We noticed that with the current backend configuration the latency is somewhere between 6 and 10 seconds. Based on this fact, we assumed that it would be safe to "force" the player to play at a bigger delay (15 seconds, further off the live edge), thereby achieving the same (constant) delay across all devices.
We’re using EXT-X-PROGRAM-DATE-TIME tag to get the server time of the currently playing content and we also have a master clock with the current time (NTP). We’re constantly comparing the 2 clocks to check the current latency. We’re pausing the player until it reaches the desired delay, then we’re resuming the playback.
The problem with this solution is that the latency might get worse (accumulating delay) over time, and if the delay gets too big (steps over a specified threshold) we have no choice but to restart the playback and redo the steps described above. Before restarting the player we also try to slightly increase the playback speed until it reaches the specified delay.
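To make the drift-correction idea concrete, here is the arithmetic as a browser-style TypeScript sketch (constants and names are invented; in ExoPlayer the same logic would drive PlaybackParameters instead of playbackRate):

// Sketch of the drift-correction loop (names and thresholds invented).
const TARGET_DELAY_MS = 15000; // desired constant latency behind live
const TOLERANCE_MS = 500;

function correctDrift(
  video: HTMLVideoElement,
  programDateTimeMs: number, // from EXT-X-PROGRAM-DATE-TIME
  ntpNowMs: number           // from the shared NTP master clock
): void {
  // How far behind "now" the frame being shown actually is.
  const latency = ntpNowMs - programDateTimeMs;
  const error = latency - TARGET_DELAY_MS;
  if (Math.abs(error) <= TOLERANCE_MS) {
    video.playbackRate = 1.0;  // in sync: play normally
  } else if (error > 0) {
    video.playbackRate = 1.05; // too far behind: catch up gently
  } else {
    video.playbackRate = 0.95; // too close to live: slow down
  }
}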
The ExoPlayer instance is set up with a DefaultLoadControl, DefaultRenderersFactory, DefaultTrackSelector, and the media source uses a DefaultDataSourceFactory.
The server-side configuration is as follows:
cupertinoChunkDurationTarget: 2000 (default: 10000)
cupertinoMaxChunkCount: 31 (default: 10)
cupertinoPlaylistChunkCount: 15 (default: 3)
My first question would be whether this is even achievable with a protocol like HLS. Why is the player drifting away, accumulating more and more delay?
Is there a better setup for the exoPlayer instance considering our specific use case?
Is there a better way to achieve a constant playback delay across all the playing devices? How important are the parameters on the server side in trying to achieve such a behaviour?
I would really appreciate any kind of help because I have reached a dead-end. :)
Thanks!
The only solution for this is provided by:
https://netinsight.net/product/sye/
Their solution includes frame-accurate sync with no drift and stateful ABR. This probably can't be done with HTTP-based protocols, hence their solution is built on UDP transport.

Play audio stream using WebAudio API

I have a client/server audio synthesizer where the server (java) dynamically generates an audio stream (Ogg/Vorbis) to be rendered by the client using an HTML5 audio element. Users can tweak various parameters and the server immediately alters the output accordingly. Unfortunately the audio element buffers (prefetches) very aggressively so changes made by the user won't be heard until minutes later, literally.
Trying to disable preload has no effect, and apparently this setting is only 'advisory', so there's no guarantee that its behavior would be consistent across browsers.
I've been reading everything that I can find on WebRTC and the evolving WebAudio API and it seems like all of the pieces I need are there but I don't know if it's possible to connect them up the way I'd like to.
I looked at RTCPeerConnection, it does provide low latency but it brings in a lot of baggage that I don't want or need (STUN, ICE, offer/answer, etc) and currently it seems to only support a limited set of codecs, mostly geared towards voice. Also since the server side is in java I think I'd have to do a lot of work to teach it to 'speak' the various protocols and formats involved.
AudioContext.decodeAudioData works great for a static sample, but not for a stream since it doesn't process the incoming data until it's consumed the entire stream.
What I want is the exact functionality of the audio tag (i.e. HTMLAudioElement) without any buffering. If I could somehow create a MediaStream object that uses the server URL for its input then I could create a MediaStreamAudioSourceNode and send that output to context.destination. This is not very different than what AudioContext.decodeAudioData already does, except that method creates a static buffer, not a stream.
I would like to keep the Ogg/Vorbis compression and eventually use other codecs, but one thing that I may try next is to send raw PCM and build audio buffers on the fly, just as if they were being generated programmatically by JavaScript code. But again, I think all of the parts already exist, and if there's any way to leverage that I would be most thrilled to know about it!
Thanks in advance,
Joe
How are you getting on? Did you resolve this question? I am solving a similar challenge. On the browser side I'm using the Web Audio API, which has nice ways to render streaming input audio data, and Node.js on the server side, using WebSockets as the middleware to send the browser streaming PCM buffers.
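In case it helps, the browser side of that approach looks roughly like this (a sketch; it assumes the server pushes mono 32-bit float PCM at 44.1 kHz over a hypothetical WebSocket endpoint):

// Sketch: play raw PCM chunks arriving over a WebSocket.
// Assumes mono Float32 samples at 44.1 kHz; adjust to your server.
const ctx = new AudioContext();
const ws = new WebSocket("ws://localhost:8081"); // hypothetical endpoint
ws.binaryType = "arraybuffer";

let playhead = 0; // when the next chunk should start, in AudioContext time

ws.onmessage = (event: MessageEvent<ArrayBuffer>) => {
  const samples = new Float32Array(event.data);
  const buffer = ctx.createBuffer(1, samples.length, 44100);
  buffer.copyToChannel(samples, 0);

  const node = ctx.createBufferSource();
  node.buffer = buffer;
  node.connect(ctx.destination);

  // Schedule chunks back to back; keep a small safety margin on start.
  playhead = Math.max(playhead, ctx.currentTime + 0.05);
  node.start(playhead);
  playhead += buffer.duration;
};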

mp3 snippets on s3

I need a solution to play a segment of an mp3. I have a few thousand audio files which are currently stored on Amazon S3, and I would like to allow users to play them; however, I would like to limit the play length to 30 seconds or so in the middle of the recording.
I'm not sure if I need to create an entirely new file (snippet), as I would for a thumbnail if it were an image, or if it's possible to use some player/stream to safely limit it that way so they cannot access the whole song.
I'm coming from a Rails environment and using Paperclip to handle the files and JPlayer to play them if it matters.
Any pointers or best practices?
This is possible by using the HTTP Range request header. This header says 'please just give me the bytes from here to here and ignore the rest'. If the web server is set up to handle it (Apache is, for instance), then you get a 206 response with a body of just those bytes.
You must create a small proxy application that effectively acts as a gateway between the listener and Amazon.
To see if your host will respond try this from the command line:
curl -v -I http://www.mfiles.co.uk/mp3-downloads/01-Tartaros%20of%20light.mp3
Where the url is one of yours. If you are lucky you will see:
Accept-Ranges: bytes
Content-Length: 5284483
This means that the server does accept the Range header and that the full file is 5284483 bytes long.
Let's request the first third of the file:
curl -H'Range: bytes=0-1761494' http://www.mfiles.co.uk/mp3-downloads/01-Tartaros%20of%20light.mp3 > /tmp/test1.mp3
You should now be able to play /tmp/test1.mp3 and hear the first third of the track.
The next step is to create a proxy application. A good approach would be to use https://github.com/aniero/rack-streaming-proxy but you would probably need to fork the project to send the 'Range: bytes=0-1761494' header. Alternatively have a look at Sinatra.
A bonus here is that because you are proxying the remote server, you could obfuscate the actual URL of the file by having a simple database table with an ID for each file. I would suggest writing a small script that also stores the byte length of each file, so that you don't have to calculate the range for each request.
Thus a GET to "/preview/12345" would proxy "http://amazon.com/my_long_url" and give you just the first third of the file.
On top of that, you could put Varnish in front of your own server, which would cache these partial MP3 files and mean that you are not having to constantly go back to Amazon to get the files.
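Sketched out (in Node 18+/TypeScript rather than Rack, just to show the shape; the lookup table, URL, and byte counts are placeholders):

// Sketch: proxy the first third of a remote MP3 as a "preview".
// The id -> URL/length lookup stands in for your database table.
import { createServer } from "http";

const files: Record<string, { url: string; length: number }> = {
  "12345": { url: "https://example.com/my_long_url.mp3", length: 5284483 },
};

createServer(async (req, res) => {
  const id = req.url?.replace("/preview/", "") ?? "";
  const file = files[id];
  if (!file) { res.writeHead(404); res.end(); return; }

  const end = Math.floor(file.length / 3) - 1;
  const upstream = await fetch(file.url, {
    headers: { Range: `bytes=0-${end}` }, // ask S3 for just this slice
  });
  res.writeHead(200, { "Content-Type": "audio/mpeg" });
  res.end(Buffer.from(await upstream.arrayBuffer()));
}).listen(8080);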
Unfortunately, you'll need to make new snippets - there isn't really a way to tell a user's browser "download this entire mp3 file, but only play and allow access to the middle 30 seconds".
I think it is simpler to solve the problem on the client side.
Are you using Flash to play the audio files?
If yes, I have done something similar (but with videos) using JWPlayer (it also supports audio files).
You can develop a custom plugin to control the snippet you want to play, then stop the audio file and show a message or something like that.
This solution, combined with signed URLs and/or RTMP streaming with CloudFront, can be very safe.
Due to a limitation of the mp3 format, you cannot seek to an arbitrary frame in the middle of the song and start transmission from that point.
So, there are basically three options:
1. Create new files offline. Very easy, but space consuming.
2. Transcode files on the fly. CPU consuming, degrades quality.
3. Limit playback to the first X seconds: just peek into the song's header, get its bitrate, and calculate the size of the byte chunk to serve (see the sketch after this list).
And don't ever transmit more than you need: people will manage to intercept the stream and save it to disk (business side); save your users' traffic (good karma).
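For option 3, the arithmetic is just bitrate times seconds (this assumes constant-bitrate MP3s; VBR files would need a frame-accurate index instead). The same function covers a "first X seconds" or a "middle 30 seconds" slice:

// Sketch: map a preview duration to a byte range (CBR assumption!).
function previewRange(fileBytes: number, bitrateKbps: number, seconds = 30) {
  const bytesPerSecond = (bitrateKbps * 1000) / 8;
  const chunk = Math.round(bytesPerSecond * seconds);
  // Centre the chunk in the file; use start = 0 for "first X seconds".
  const start = Math.max(0, Math.round((fileBytes - chunk) / 2));
  return { start, end: Math.min(fileBytes - 1, start + chunk - 1) };
}

// e.g. the 5284483-byte file above at 128 kbps:
console.log(previewRange(5284483, 128)); // 30 s is roughly 480000 bytes from the middle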