I am using video.js to play m3u8 links.
I found that it continuously downloads .ts segments during playback.
I would like to show a loading overlay during this preparation period; when everything is ready, it disappears and the user can watch the video as if it were a local file.
So, is it possible to preload all segments during the loading period, before playback starts?
Update
I found that the m3u8 file contains .ts links. Is it possible to download those segments as blobs in advance, and intercept the player's fetch requests so they are answered with the downloaded blobs?
#EXTM3U
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-TARGETDURATION:60
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-START-TIME:3516
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:55:59+08:00
#EXTINF:2.01,
2758527764_1103365203_1.ts?start=0&end=91931&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:01+08:00
#EXTINF:1.979,
2758527764_1103365203_1.ts?start=91932&end=171643&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:02+08:00
#EXTINF:1.932,
2758527764_1103365203_1.ts?start=171644&end=248159&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:04+08:00
#EXTINF:2.002,
2758527764_1103365203_1.ts?start=248160&end=318659&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:06+08:00
#EXTINF:2.064,
2758527764_1103365203_1.ts?start=318660&end=393295&type=mpegts&resolution=320x240
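Concretely, what I have in mind is something like the sketch below (the helper names are made up, and it assumes the segments are same-origin so a service worker is allowed to intercept them):

```javascript
// Step 1: parse the VOD playlist into segment URLs (lines that are not
// tags). Step 2: prefetch each one into the Cache API so a service
// worker can answer the player's requests from cache.
function parseSegmentUris(m3u8Text, baseUrl) {
  return m3u8Text
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line && !line.startsWith('#'))
    .map((uri) => new URL(uri, baseUrl).href);
}

// Run on the page: fetch the playlist, cache every segment, then hide
// the loading overlay once the returned promise resolves.
async function prefetchSegments(playlistUrl) {
  const res = await fetch(playlistUrl);
  const uris = parseSegmentUris(await res.text(), playlistUrl);
  const cache = await caches.open('hls-prefetch');
  await Promise.all(uris.map((uri) => cache.add(uri)));
  return uris.length;
}

// In the service worker script: answer segment requests from the cache,
// falling back to the network for anything not prefetched.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('fetch', (event) => {
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```

Would this kind of Cache API + service worker interception work, or is there a cleaner hook inside video.js itself?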
I think that what you want to do isn't really a use case for adaptive streaming, so you shouldn't use HLS or DASH.
You could probably achieve this type of experience using simple MP4 playback.
You can set the "videojs.Vhs.GOAL_BUFFER_LENGTH" value to a high figure. It is the number of seconds that will be preloaded. However, there is a playback issue when too much is buffered: all of those buffered segments end up eating RAM. On mid-range mobile devices, more than a few minutes of preload makes the video unusable.
I use it with a few thousand students who can preload a larger chunk (10 minutes), so they are not continuously interrupted when on a low-bandwidth connection.
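As a sketch, something like this before creating the player (GOAL_BUFFER_LENGTH and MAX_GOAL_BUFFER_LENGTH are the actual VHS config properties; the helper function itself is just for illustration):

```javascript
// Raise both the goal and its cap: MAX_GOAL_BUFFER_LENGTH limits how far
// GOAL_BUFFER_LENGTH is actually allowed to grow.
function applyBufferGoal(vhs, seconds) {
  vhs.GOAL_BUFFER_LENGTH = seconds;
  vhs.MAX_GOAL_BUFFER_LENGTH = seconds;
  return vhs;
}

// Before creating the player:
//   applyBufferGoal(videojs.Vhs, 600); // ten minutes of look-ahead
//   const player = videojs('my-video');
```

(In older video.js builds the namespace is videojs.Hls rather than videojs.Vhs.)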
Related
I want to simulate an infinite live stream using HLS. Currently I am writing a .m3u8 file by hand, and the .ts files are loaded from an external service that provides an endless supply of fragments.
This is an example of a m3u8 file:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:22730
#EXT-X-ALLOW-CACHE:YES
#EXT-X-TARGETDURATION:7
#EXTINF:6,
asd5.ts
#EXTINF:3,
asd6.ts
#EXT-X-DISCONTINUITY
#EXTINF:6,
xyz1.ts
I am increasing #EXT-X-MEDIA-SEQUENCE with a counter, but I am wondering what happens when it reaches its maximum value.
There is nothing in the spec that specifies a limit, so every player will respond differently.
Try setting it to possible maximums (65535, 4294967295, etc.) and see what happens.
In the real world, however, you will reach practical limits before you reach technical limits (e.g. there is no practical reason to have a stream that lasts 100 years).
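One quick way to probe a player is to generate a test playlist with the candidate value (a sketch; buildLivePlaylist is a made-up helper, not part of any library):

```javascript
// Build a minimal sliding-window live playlist with an arbitrary media
// sequence number, to see how a given player reacts to large values.
function buildLivePlaylist(mediaSequence, segments, targetDuration) {
  const lines = [
    '#EXTM3U',
    '#EXT-X-VERSION:3',
    `#EXT-X-TARGETDURATION:${targetDuration}`,
    `#EXT-X-MEDIA-SEQUENCE:${mediaSequence}`,
  ];
  for (const seg of segments) {
    lines.push(`#EXTINF:${seg.duration},`, seg.uri);
  }
  return lines.join('\n') + '\n';
}

// Probe one of the candidate maximums mentioned above.
const playlist = buildLivePlaylist(4294967295, [
  { duration: 6, uri: 'asd5.ts' },
  { duration: 3, uri: 'asd6.ts' },
], 7);
```

Serve the generated text as the live playlist and watch whether the player keeps requesting segments once the counter passes the value you chose.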
Is there an easy way to play back video data stored in a stream object (http://picamera.readthedocs.io/en/release-1.13/api_streams.html) (e.g. a PiCameraCircularIO with h264 encoding) using one of the PiRenderers?
The mmalobj API (http://picamera.readthedocs.io/en/release-1.13/api_mmalobj.html#id3) seems to support playback of buffers, though it is hard to understand, and everything I tried using an MMALVideoDecoder and setting its input to the data of a PiCameraCircularIO buffer failed.
I'm using the circular stream advanced recipe (http://picamera.readthedocs.io/en/release-1.13/recipes2.html#splitting-to-from-a-circular-stream), but rather than saving the last n seconds to a file, I want to play them back.
Is it possible to stream video and audio to a rtmp://-server with GPUImage?
I'm using the GPUImageVideoCamera and would love to stream (video + audio) directly to a rtmp-server.
I tried VideoCore, which streams perfectly to e.g. YouTube, but whenever I try to overlay the video with different images I get performance problems.
It seems as if GPUImage does a really great job there, but I don't know how to stream with it. I found issues on VideoCore talking about feeding VideoCore with GPUImage output, but I don't have a starting point on how that's implemented...
I am trying to split the audio from the video in an HLS TS stream; the audio is in AAC format.
The goal is to end up with some sort of AVAsset that I can later manipulate and then mux back into the video.
After searching for a while I can't find a solid lead. Can someone give me an educated direction to take on this issue?
You can use the ffmpeg/libav library for demuxing the TS. To load the audio back as an AVAsset, it might be necessary to load it from a URL, either by writing it temporarily to disk or by serving it with a local HTTP server within your program.
I think you might run into some trouble in manipulating the audio stream, assuming you want to manipulate the raw audio data. That will require decoding the AAC, modifying it, re-encoding, and re-muxing with the video. That's all possible with ffmpeg/libav, but it's not really that easy.
I would like to control the playback rate of a song while it is playing. Basically I want to make it play a little faster or slower, when I tell it to do so.
Also, is it possible to play back two different tracks at the same time? Imagine a recording with the instruments in one track and the vocals in another. One of these tracks should then be able to have its playback rate changed in "realtime".
Is this possible on Symbian/S60?
It's possible, but you would have to:
Convert the audio data into PCM, if it is not already in this format
Process this PCM stream in the application, in order to change its playback rate
Render the audio via CMdaAudioOutputStream or CMMFDevSound (or QAudioOutput, if you are using Qt)
In other words, the platform itself does not provide any APIs for changing the audio playback rate - your application would need to process the audio stream directly.
As for playing multiple tracks together: depending on the device, the audio subsystem may let you play two or more streams simultaneously using the APIs above. The problem, however, is that the streams are unlikely to be synchronised, so your app would probably have to mix all of the individual tracks into one stream before rendering.
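As a sketch of the PCM processing step (plain linear-interpolation resampling, in pseudocode-style JavaScript since the technique is the same on any platform; note that this changes pitch along with speed, so a tempo-only change would need a proper time-stretching algorithm instead):

```javascript
// Change playback rate by resampling PCM samples.
// rate > 1 plays faster (fewer output samples), rate < 1 plays slower.
function resamplePcm(samples, rate) {
  const outLength = Math.floor(samples.length / rate);
  const out = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = i * rate;          // fractional read position
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, samples.length - 1);
    const frac = pos - i0;
    out[i] = samples[i0] * (1 - frac) + samples[i1] * frac;
  }
  return out;
}

// Mix two PCM tracks sample-by-sample, for the multi-track case
// (clip or normalise the sum before rendering in real use).
function mixTracks(a, b) {
  const n = Math.max(a.length, b.length);
  const out = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    out[i] = (a[i] || 0) + (b[i] || 0);
  }
  return out;
}
```

On Symbian the mixed, resampled buffer is what you would then hand to CMdaAudioOutputStream or CMMFDevSound for rendering.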