I have a page that displays a looping video, and playback performance appears less than ideal: the video stutters and lags instead of playing smoothly. I learned through a bit of searching that Safari handles streaming video differently than other browsers: it makes byte-range requests and expects the server to respond with status 206. Safari makes a series of requests with a Range header set, while Chrome is able to make a single request.
When I view the network requests in Safari's dev tools, I see the series of byte-range requests happening as expected. But when the video loops back and starts from the beginning, I see the same series of requests happening again, and again on every loop.
JSFiddle to reproduce in Safari:
<video src="https://jsoncompare.org/LearningContainer/SampleFiles/Video/MP4/Sample-MP4-Video-File-Download.mp4" autoplay loop muted playsinline preload="auto" controls></video>
The question is: is this by design? It seems inefficient that the browser re-downloads the pieces of the video every time it plays. Performance-wise, I suspect this is what is causing the non-smooth playback. Is caching supported for byte-range requests in Safari?
I also suspect this behavior may have to do with the size of the asset: I see the described behavior for a video that's ~40 MB, but smaller videos are downloaded in two requests and the requests don't repeat.
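For context, the kind of range handling Safari expects from a server looks roughly like this minimal Node/Express sketch; the route and file name are made up, and this is illustrative only, not my actual setup:

import express from 'express';
import fs from 'fs';

const app = express();
const VIDEO_PATH = './sample.mp4'; // hypothetical local file

app.get('/video', (req, res) => {
  const size = fs.statSync(VIDEO_PATH).size;
  const range = req.headers.range;
  if (!range) {
    res.writeHead(200, { 'Content-Type': 'video/mp4', 'Content-Length': size });
    fs.createReadStream(VIDEO_PATH).pipe(res);
    return;
  }
  // Safari probes with e.g. "Range: bytes=0-1" first to check for range support.
  const [startStr, endStr] = range.replace('bytes=', '').split('-');
  const start = Number(startStr);
  const end = endStr ? Number(endStr) : size - 1;
  res.writeHead(206, {
    'Content-Type': 'video/mp4',
    'Content-Range': `bytes ${start}-${end}/${size}`,
    'Accept-Ranges': 'bytes',
    'Content-Length': end - start + 1,
  });
  fs.createReadStream(VIDEO_PATH, { start, end }).pipe(res);
});

app.listen(8080);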
Helpful resources I came across:
https://blog.logrocket.com/streaming-video-in-safari/
https://developer.apple.com/library/archive/documentation/AppleApplications/Reference/SafariWebContent/CreatingVideoforSafarioniPhone/CreatingVideoforSafarioniPhone.html#//apple_ref/doc/uid/TP40006514-SW6
The re-requesting of the byte ranges is by design, or at least a consequence of the current implementation.
It seems that the mechanism Safari uses to cache requests does not currently allow for byte ranges; in simplistic terms, it looks just at the URL, so it would respond with whatever happened to be in the cache for that URL, ignoring the byte range.
It seems this is a limitation (or maybe a very 'pure' interpretation of the specs, not sure...), but the current advice is definitely not to cache when using byte-range requests on Apple-based solutions:
NSURLRequestReloadIgnoringLocalCacheData = 1
This policy specifies that no existing cache data should be used to satisfy a URL load request.
Important
Always use this policy if you are making HTTP or HTTPS byte-range requests.
(https://developer.apple.com/documentation/foundation/nsurlrequestcachepolicy/nsurlrequestreloadignoringlocalcachedata)
You can also see more discussion on this in the Apple Developer forum: https://developer.apple.com/forums/thread/92119
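On the web server side, the practical takeaway from that advice is to opt byte-range responses out of caching explicitly; in an Express-style handler (illustrative, matching the sketch above) that is a single extra header on the 206 response:

res.setHeader('Cache-Control', 'no-store'); // per the advice above: don't cache byte-range responses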
I also think this is by design.
I recently implemented video streaming for a website and saw this behaviour too. In Chrome and Firefox everything works fine; even with byte-range headers they always request small chunks.
Safari's dev tools show that it downloads big chunks and often aborts those requests. This is very strange behaviour, especially when you proxy a video from AWS S3 or something like that: Safari requests a large chunk, the server loads that chunk from S3 and sends it back, but Safari only needed a few bytes.
Here is a good article that goes into the details of this behaviour:
https://www.stevesouders.com/blog/2013/04/21/html5-video-bytes-on-ios/
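If you proxy from S3, one mitigation is to forward Safari's Range header upstream and stream the response through, so an aborted request doesn't leave the proxy buffering a whole chunk. A rough Node 18+ sketch (the bucket URL and route are made up):

import express from 'express';
import { Readable } from 'stream';

const app = express();
const S3_URL = 'https://example-bucket.s3.amazonaws.com/video.mp4'; // hypothetical

app.get('/video', async (req, res) => {
  // Forward exactly the byte range Safari asked for instead of fetching more.
  const upstream = await fetch(S3_URL, {
    headers: req.headers.range ? { Range: req.headers.range } : {},
  });
  res.status(upstream.status); // 206 for a partial response
  for (const name of ['content-type', 'content-range', 'content-length', 'accept-ranges']) {
    const value = upstream.headers.get(name);
    if (value) res.setHeader(name, value);
  }
  // Stream the body through; an aborted Safari request tears the pipe down early.
  Readable.fromWeb(upstream.body as any).pipe(res);
});

app.listen(8080);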
I've built an API application with ASP.NET Core 2.2.
Everything has been fine, except one PATCH API, which takes an ID and a list and replaces the corresponding item's list.
This API works fine with Postman too: simple and fast, it works just as expected.
However, when run from a browser, the request stalls for one minute before being sent.
I've tried to simplify things by rewriting the app as a single jQuery function, to check whether the problem is in my frontend app; however, it still stalls for one minute.
I've looked up "stalled"; people say it can be Chrome's policy of loading at most six requests at the same time, but that's not my case. There's only this one request at that time, and every other API works fine except this one.
Also, I've tried other browsers, Firefox and Edge, but it's still the same.
According to the documentation Chrome provides:
Queueing. The browser queues requests when:
There are higher priority requests.
There are already six TCP connections open for this origin, which is the limit. Applies to HTTP/1.0 and HTTP/1.1 only.
The browser is briefly allocating space in the disk cache.
Stalled. The request could be stalled for any of the reasons described in Queueing.
It seems that being "stalled" for a long time means the request wasn't even sent. Does that mean I can rule out fixing the backend API?
And also, since there's no other request at the same time, does that mean the most likely reason is "The browser is briefly allocating space in the disk cache", or could there be some other reason?
I also wonder why only this API has this issue. Is there anything special about the PATCH method?
First, use a stopwatch to measure the response time of your code in the browser and in Postman, and see how long each takes.
If both are the same, don't touch your code, because your problem isn't in your method.
If you can, test the endpoint with the [HttpPost] attribute to find out whether the problem is caused by the PATCH verb or not.
My guess, however, is that the cause is your environment.
It's also possible your problem can be resolved by changing the pipeline (Startup.cs). There are also problems, such as CORS, that occur only in browsers and not in Postman.
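For the browser-side measurement, something as small as this (hypothetical endpoint and payload) shows whether the minute is spent in the request itself or before it is even sent:

const started = performance.now();
const response = await fetch('/api/items/42/list', { // hypothetical endpoint
  method: 'PATCH',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify([1, 2, 3]),
});
console.log(`status ${response.status} after ${Math.round(performance.now() - started)} ms`);

Compare that number with Postman's, and check the Timing tab in DevTools to see how much of it is the "stalled" phase.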
I want to simulate an infinite live stream using HLS. Currently I am writing the .m3u8 file manually, and the .ts files are loaded from an external service that provides an infinite supply of fragments.
This is an example of the m3u8 file:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:22730
#EXT-X-ALLOW-CACHE:YES
#EXT-X-TARGETDURATION:7
#EXTINF:6,
asd5.ts
#EXTINF:3,
asd6.ts
#EXT-X-DISCONTINUITY
#EXTINF:6,
xyz1.ts
I am incrementing #EXT-X-MEDIA-SEQUENCE with a counter, but I am wondering what happens when it reaches its maximum value.
There is nothing in the spec that specifies a limit, so every player will respond differently.
Try setting it to possible maximums (65535, 4294967295, etc.) and see what happens.
In the real world, however, you will reach practical limits before you reach technical limits (e.g. there is no practical reason to have a stream that lasts 100 years).
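For what it's worth, if the playlist is generated in JavaScript, plain numbers stay exact up to Number.MAX_SAFE_INTEGER (2^53 - 1), so at one fragment every few seconds the counter alone outlasts any practical stream. A sketch of a rolling playlist generator (fragment names are illustrative):

// Rolling window of fragments; mediaSequence counts fragments dropped so far.
function buildPlaylist(mediaSequence: number, fragments: { uri: string; duration: number }[]): string {
  const target = Math.ceil(Math.max(...fragments.map(f => f.duration)));
  return [
    '#EXTM3U',
    '#EXT-X-VERSION:3',
    `#EXT-X-MEDIA-SEQUENCE:${mediaSequence}`,
    `#EXT-X-TARGETDURATION:${target}`,
    ...fragments.flatMap(f => [`#EXTINF:${f.duration},`, f.uri]),
  ].join('\n');
}

// Example: the window slides forward by one fragment, so the sequence grows by one.
console.log(buildPlaylist(22731, [
  { uri: 'asd6.ts', duration: 3 },
  { uri: 'xyz1.ts', duration: 6 },
  { uri: 'xyz2.ts', duration: 6 }, // hypothetical next fragment in the window
]));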
We are currently using ExoPlayer for one of our applications, which is very similar to the HQ Trivia app, and we use HLS as the streaming protocol.
Due to the nature of the game, we are trying to keep all the viewers of this stream to have the same latency, basically to keep them in sync.
We noticed that with the current backend configuration the latency is somewhere between 6 and 10 seconds. Based on this fact, we assumed that it would be safe to “force” the player to play at a bigger delay (15 seconds, further off the live edge), this way achieving the same (constant) delay across all the devices.
We're using the EXT-X-PROGRAM-DATE-TIME tag to get the server time of the currently playing content, and we also have a master clock with the current time (NTP). We're constantly comparing the two clocks to check the current latency. We pause the player until it reaches the desired delay, then resume playback.
The problem with this solution is that the latency can get worse (accumulate delay) over time, and we have no choice but to restart the playback and redo the steps described above when the delay gets too big (steps over a specified threshold). Before restarting the player we also try to slightly increase the playback speed until it reaches the specified delay.
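Sketched against a hypothetical player interface (TypeScript here purely for illustration; ExoPlayer's actual API differs), the control loop is roughly:

// Hypothetical interface; stands in for the ExoPlayer calls we actually use.
interface Player {
  programDateTimeMs(): number;          // derived from EXT-X-PROGRAM-DATE-TIME
  pause(): void;
  play(): void;
  setPlaybackSpeed(speed: number): void;
  restartAtLiveEdge(): void;            // the "restart playback" fallback
}

const TARGET_DELAY_MS = 15_000;
const SLACK_MS = 1_000;                 // hypothetical tolerance around the target
const RESTART_THRESHOLD_MS = 25_000;    // hypothetical "delay got too big" limit

function tick(player: Player, ntpNowMs: number) {
  const latencyMs = ntpNowMs - player.programDateTimeMs();
  if (latencyMs < TARGET_DELAY_MS) {
    player.pause();                     // wall clock keeps running, so latency grows toward the target
  } else if (latencyMs > RESTART_THRESHOLD_MS) {
    player.restartAtLiveEdge();         // last resort: re-sync from scratch
  } else if (latencyMs > TARGET_DELAY_MS + SLACK_MS) {
    player.setPlaybackSpeed(1.05);      // gently catch back up toward the target delay
    player.play();
  } else {
    player.setPlaybackSpeed(1.0);
    player.play();
  }
}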
The ExoPlayer instance is set up with a DefaultLoadControl, DefaultRenderersFactory, and DefaultTrackSelector, and the media source uses a DefaultDataSourceFactory.
The server-side configuration is as follows:
cupertinoChunkDurationTarget: 2000 (default: 10000)
cupertinoMaxChunkCount: 31 (default: 10)
cupertinoPlaylistChunkCount: 15 (default: 3)
My first question would be whether this is even achievable with a protocol like HLS. Why is the player drifting away, accumulating more and more delay?
Is there a better setup for the exoPlayer instance considering our specific use case?
Is there a better way to achieve a constant playback delay across all the playing devices? How important are the parameters on the server side in trying to achieve such a behaviour?
I would really appreciate any kind of help because I have reached a dead-end. :)
Thanks!
The only solution for this is provided by:
https://netinsight.net/product/sye/
Their solution includes frame-accurate sync with no drift and stateful ABR. This probably can't be done with HTTP-based protocols, hence their solution is built on UDP transport.
I have been using the SimpleWebRTC library for my project.
How can I change the remote video resolution dynamically during a call (like Google Hangouts does when you resize the browser)?
Resizing the browser in Hangouts changes the remote video's resolution (.videoWidth / .videoHeight).
Is this associated with WebRTC Plan B?
I would also like to know how it is implemented across many peer connections.
Tell the sending end (say, via DataChannels) to change resolution to NxM. At the sending end, until new APIs are available to change a getUserMedia/MediaStream capture size on the fly, you can request a second camera/mic stream and replace the existing streams with it. (Note: this will cause onnegotiationneeded, i.e. renegotiation, and the far side will see a new output stream.)
Smoother (but only in Firefox thus far -- in the standardization process) would be to use RTPSender.replaceTrack() to change the video track without touching audio or renegotiating.
Another option that will exist (though doesn't yet in either browser) is to use RTPSender.width/height (or whatever syntax gets agreed) to scale outgoing video before encoding.
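For reference, the replaceTrack() approach is now standardized on RTCRtpSender, and a track's capture size can also be changed in place with applyConstraints(); a rough sketch of both (not SimpleWebRTC-specific, names illustrative):

// Ask the capturer to rescale the outgoing track in place; no renegotiation needed.
async function scaleOutgoingVideo(pc: RTCPeerConnection, width: number, height: number) {
  const sender = pc.getSenders().find(s => s.track?.kind === 'video');
  await sender?.track?.applyConstraints({ width, height });
}

// Or capture a fresh track at the new resolution and swap it in with replaceTrack().
async function swapOutgoingVideo(pc: RTCPeerConnection, width: number, height: number) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: { width, height } });
  const sender = pc.getSenders().find(s => s.track?.kind === 'video');
  await sender?.replaceTrack(stream.getVideoTracks()[0]);
}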
Plan B for multistream/BUNDLE (which Chrome implements) was not adopted; Firefox has now implemented Unified Plan (in Fx38, which goes out in a few days); expect to see a blog post soon from someone on how to force the two to work together (until Chrome gets around to implementing Unified Plan).
I have a client/server audio synthesizer where the server (java) dynamically generates an audio stream (Ogg/Vorbis) to be rendered by the client using an HTML5 audio element. Users can tweak various parameters and the server immediately alters the output accordingly. Unfortunately the audio element buffers (prefetches) very aggressively so changes made by the user won't be heard until minutes later, literally.
Trying to disable preload has no effect, and apparently this setting is only 'advisory', so there's no guarantee that its behavior will be consistent across browsers.
I've been reading everything that I can find on WebRTC and the evolving WebAudio API and it seems like all of the pieces I need are there but I don't know if it's possible to connect them up the way I'd like to.
I looked at RTCPeerConnection; it does provide low latency, but it brings in a lot of baggage that I don't want or need (STUN, ICE, offer/answer, etc.), and currently it seems to support only a limited set of codecs, mostly geared towards voice. Also, since the server side is in Java, I think I'd have to do a lot of work to teach it to 'speak' the various protocols and formats involved.
AudioContext.decodeAudioData works great for a static sample, but not for a stream, since it doesn't process the incoming data until it has consumed the entire stream.
What I want is the exact functionality of the audio tag (i.e. HTMLAudioElement) without any buffering. If I could somehow create a MediaStream object that uses the server URL for its input, then I could create a MediaStreamAudioSourceNode and send its output to context.destination. This is not very different from what AudioContext.decodeAudioData already does, except that method creates a static buffer, not a stream.
I would like to keep the Ogg/Vorbis compression and eventually use other codecs, but one thing that I may try next is to send raw PCM and build audio buffers on the fly, just as if they were being generated programmatically by JavaScript code. But again, I think all of the parts already exist, and if there's any way to leverage them I would be most thrilled to know about it!
Thanks in advance,
Joe
How are you getting on? Did you resolve this? I am solving a similar challenge. On the browser side I'm using the Web Audio API, which has nice ways to render streaming audio input, with Node.js on the server side using WebSockets as the middleware to send the browser streaming PCM buffers.
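In case it helps, here is a minimal sketch of the browser side: playing mono Float32 PCM chunks as they arrive over a WebSocket (the endpoint and sample rate are assumptions; the server must send raw Float32 samples at that rate):

const sampleRate = 44100;                  // must match what the server sends
const ctx = new AudioContext({ sampleRate });
let playhead = ctx.currentTime;

const ws = new WebSocket('wss://example.com/pcm'); // hypothetical endpoint
ws.binaryType = 'arraybuffer';
ws.onmessage = (event: MessageEvent<ArrayBuffer>) => {
  const samples = new Float32Array(event.data);
  const buffer = ctx.createBuffer(1, samples.length, sampleRate);
  buffer.copyToChannel(samples, 0);
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  // Schedule chunks back-to-back so playback stays gapless.
  playhead = Math.max(playhead, ctx.currentTime);
  source.start(playhead);
  playhead += buffer.duration;
};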