picamera mmalobj - Render last n seconds from Ringbuffer

Is there an easy way to play back video data stored in a stream object (http://picamera.readthedocs.io/en/release-1.13/api_streams.html) (e.g. a PiCameraCircularIO with H.264 encoding) using one of the PiRenderers?
The mmalobj API (http://picamera.readthedocs.io/en/release-1.13/api_mmalobj.html#id3) seems to support the playback of buffers, though it is hard to understand, and everything I tried with an MMALVideoDecoder, setting its input to the data of a PiCameraCircularIO buffer, failed.
I'm using the circular stream advanced recipe (http://picamera.readthedocs.io/en/release-1.13/recipes2.html#splitting-to-from-a-circular-stream), but rather than saving the last n seconds to a file, I want to play them back.
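For reference, this is roughly what I have now: a minimal sketch of that recipe with the file-based detour I would like to avoid (something_happened is a hypothetical stand-in for my trigger, and omxplayer stands in for the PiRenderer-based playback I am actually after):

import subprocess

import picamera

camera = picamera.PiCamera()
stream = picamera.PiCameraCircularIO(camera, seconds=20)
camera.start_recording(stream, format='h264')
try:
    while True:
        camera.wait_recording(1)
        if something_happened():  # hypothetical trigger
            # Dump the last ~10 seconds of H.264 data to a file...
            stream.copy_to('last.h264', seconds=10)
            # ...and play the file with an external player instead of
            # decoding the buffer in-process via mmalobj.
            subprocess.call(['omxplayer', 'last.h264'])
finally:
    camera.stop_recording()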

Related

Is it possible to stream the output of an ffmpeg command to a client with dot net core?

I'm trying to take two videos and transform them with ffmpeg into a single video. It works great if you take the two videos, run them through ffmpeg and then serve that file up via an API. Unfortunately the upper range for these videos is ~20 minutes, and this method takes too long to create the full video (~30 seconds w/ ultrafast).
I had an idea to stream the output of the ffmpeg command to the client, which would eliminate the need to wait for ffmpeg to create the whole video. I've tried to prove this out myself and haven't had much success. It could be my inexperience with streams, or this could be impossible.
Does anyone know if my idea to stream the in-progress output of ffmpeg is possible / feasible?
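To make the idea concrete, here is the shape of what I am attempting, sketched in Python rather than .NET for brevity (the input files and the concat filter are stand-ins for my real command; the fragmented-MP4 flags matter because a normal MP4 writes its moov atom at the end and cannot be played before ffmpeg finishes):

import subprocess

from flask import Flask, Response

app = Flask(__name__)

@app.route('/video')
def video():
    # Fragmented MP4 so the output is playable while ffmpeg is still
    # running; 'pipe:1' sends the muxed output to stdout.
    cmd = [
        'ffmpeg', '-i', 'first.mp4', '-i', 'second.mp4',  # hypothetical inputs
        '-filter_complex', '[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]',
        '-map', '[v]', '-map', '[a]',
        '-preset', 'ultrafast',
        '-movflags', 'frag_keyframe+empty_moov',
        '-f', 'mp4', 'pipe:1',
    ]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)

    def generate():
        # Relay ffmpeg's stdout to the HTTP client chunk by chunk.
        while True:
            chunk = proc.stdout.read(64 * 1024)
            if not chunk:
                break
            yield chunk

    return Response(generate(), mimetype='video/mp4')

The same pattern in ASP.NET Core would mean returning a response stream fed from the ffmpeg process's standard output instead of waiting for a finished file.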
You should check out Hangfire. I used it for running the process in the background, and if you need a notification, SignalR will help you.
What do you mean by "streaming"? Serving the result of your command to an HTTP client on the fly? Or is your client some video player that plays the video (like a VLC player receiving a TCP stream of 4 IP cameras)?
Dealing with video isn't a simple task, and you need to choose your protocols, tools and even hardware carefully.
Based on the command that you sent as an example, you probably need some jobs that convert your videos.
Here's a complete article on how to use Azure Batch to process videos using ffmpeg. You can use any batching solution if you want (another answer suggests Hangfire, and that's OK too).

How to preload all .ts of m3u8 with video.js

I am using video.js to play m3u8 links.
I found that it continuously downloads .ts segments during playback.
I would like to show a loading overlay during a preparation period; when everything is done, it disappears and the user can watch the video as if it were local.
So, is it possible to preload all segments during the loading period, before playing?
Update
I found that the m3u8 file contains .ts links. Is it possible to predownload those blobs and intercept fetch requests to return the downloaded blobs as responses? (A sketch of the download half follows the playlist below.)
#EXTM3U
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-TARGETDURATION:60
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:0
#EXT-START-TIME:3516
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:55:59+08:00
#EXTINF:2.01,
2758527764_1103365203_1.ts?start=0&end=91931&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:01+08:00
#EXTINF:1.979,
2758527764_1103365203_1.ts?start=91932&end=171643&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:02+08:00
#EXTINF:1.932,
2758527764_1103365203_1.ts?start=171644&end=248159&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:04+08:00
#EXTINF:2.002,
2758527764_1103365203_1.ts?start=248160&end=318659&type=mpegts&resolution=320x240
#EXT-X-PROGRAM-DATE-TIME:2021-02-19T14:56:06+08:00
#EXTINF:2.064,
2758527764_1103365203_1.ts?start=318660&end=393295&type=mpegts&resolution=320x240
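To illustrate the download half (sketched in Python just to show the parsing; the real version would run in the browser, with a service worker answering fetches from the cached blobs):

from urllib.parse import urljoin
from urllib.request import urlopen

def prefetch_segments(m3u8_url):
    playlist = urlopen(m3u8_url).read().decode('utf-8')
    blobs = {}
    for line in playlist.splitlines():
        line = line.strip()
        if line and not line.startswith('#'):
            # Non-comment lines in a media playlist are segment URIs,
            # possibly relative to the playlist's own URL.
            seg_url = urljoin(m3u8_url, line)
            blobs[seg_url] = urlopen(seg_url).read()
    return blobs

# segments = prefetch_segments('https://example.com/stream.m3u8')  # hypothetical URL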
I think that what you want to do isn't a use case for adaptive streaming, so you shouldn't use HLS or DASH.
Maybe you could achieve this type of experience using simple MP4 playback.
You can set the "videojs.Vhs.GOAL_BUFFER_LENGTH" value to a high figure. It's the number of seconds which will be pre-loaded. However, there is a playback issue when too much is buffered: all those buffered segments end up eating RAM. On mid-range mobile devices, more than a few minutes of pre-load makes the video unusable.
I use it with a few thousand students who can pre-load a larger chunk (10 min), so they are not interrupted continuously when on a low-bandwidth connection.

Infinite live HLS (handle EXT-X-MEDIA-SEQUENCE overflow)

I want to simulate an infinite live stream using HLS. Currently I am manually writing a .m3u8 file, and the .ts files are loaded from an external service that provides infinite fragments.
This is an example of a m3u8 file:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:22730
#EXT-X-ALLOW-CACHE:YES
#EXT-X-TARGETDURATION:7
#EXTINF:6,
asd5.ts
#EXTINF:3,
asd6.ts
#EXT-X-DISCONTINUITY
#EXTINF:6,
xyz1.ts
I am increasing #EXT-X-MEDIA-SEQUENCE with a counter, but I am wondering what happens when it reaches its maximum value.
There is nothing in the spec that specifies a limit, so every player will respond differently.
Try setting it to possible maximums (65535, 4294967295, etc.) and see what happens.
In the real world, however, you will reach practical limits before you reach technical limits (e.g. there is no practical reason to have a stream that lasts 100 years).
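If you want to actually test it, here is a quick sketch that writes a live-style playlist with the counter pinned near a candidate maximum (segment names are the placeholders from the question):

def write_playlist(path, media_sequence, segments, target_duration=7):
    # Minimal live-style media playlist: no EXT-X-ENDLIST, so players keep
    # polling; only MEDIA-SEQUENCE and the segment window change per write.
    lines = [
        '#EXTM3U',
        '#EXT-X-VERSION:3',
        '#EXT-X-MEDIA-SEQUENCE:%d' % media_sequence,
        '#EXT-X-TARGETDURATION:%d' % target_duration,
    ]
    for duration, uri in segments:
        lines.append('#EXTINF:%s,' % duration)
        lines.append(uri)
    with open(path, 'w') as f:
        f.write('\n'.join(lines) + '\n')

# Pin the counter just below an unsigned 32-bit rollover and watch the player.
write_playlist('live.m3u8', 4294967294, [(6, 'asd5.ts'), (3, 'asd6.ts')])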

Play audio stream using WebAudio API

I have a client/server audio synthesizer where the server (java) dynamically generates an audio stream (Ogg/Vorbis) to be rendered by the client using an HTML5 audio element. Users can tweak various parameters and the server immediately alters the output accordingly. Unfortunately the audio element buffers (prefetches) very aggressively so changes made by the user won't be heard until minutes later, literally.
Trying to disable preload has no effect, and apparently this setting is only 'advisory', so there's no guarantee that its behavior would be consistent across browsers.
I've been reading everything that I can find on WebRTC and the evolving WebAudio API and it seems like all of the pieces I need are there but I don't know if it's possible to connect them up the way I'd like to.
I looked at RTCPeerConnection; it does provide low latency, but it brings in a lot of baggage that I don't want or need (STUN, ICE, offer/answer, etc.), and currently it seems to support only a limited set of codecs, mostly geared towards voice. Also, since the server side is in Java, I think I'd have to do a lot of work to teach it to 'speak' the various protocols and formats involved.
AudioContext.decodeAudioData works great for a static sample, but not for a stream since it doesn't process the incoming data until it's consumed the entire stream.
What I want is the exact functionality of the audio tag (i.e. HTMLAudioElement) without any buffering. If I could somehow create a MediaStream object that uses the server URL for its input then I could create a MediaStreamAudioSourceNode and send that output to context.destination. This is not very different than what AudioContext.decodeAudioData already does, except that method creates a static buffer, not a stream.
I would like to keep the Ogg/Vorbis compression and eventually use other codecs, but one thing that I may try next is to send raw PCM and build audio buffers on the fly, just as if they were being generated programmatically by JavaScript code. But again, I think all of the parts already exist, and if there's any way to leverage them I would be most thrilled to know about it!
Thanks in advance,
Joe
How are you getting on? Did you resolve this question? I am solving a similar challenge. On the browser side I'm using the Web Audio API, which has nice ways to render streaming input audio data, and Node.js on the server side, using WebSockets as the middleware to send the browser streaming PCM buffers.
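The server half of my setup boils down to something like the following, sketched here with Python's websockets package instead of Node.js, and a sine wave standing in for the real synth; the browser side copies each chunk into an AudioBuffer as it arrives:

import asyncio
import math
import struct

import websockets  # pip install websockets

RATE = 44100
CHUNK = 2048  # frames per message

async def stream_pcm(ws, path=None):  # 'path' kept for older websockets versions
    # Push an endless stream of 16-bit mono PCM chunks to the browser.
    phase = 0
    while True:
        samples = [
            int(32767 * 0.2 * math.sin(2 * math.pi * 440 * (phase + i) / RATE))
            for i in range(CHUNK)
        ]
        phase += CHUNK
        await ws.send(struct.pack('<%dh' % CHUNK, *samples))
        await asyncio.sleep(CHUNK / RATE)  # pace roughly in real time

async def main():
    async with websockets.serve(stream_pcm, 'localhost', 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())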

Symbian/S60 audio playback rate

I would like to control the playback rate of a song while it is playing. Basically I want to make it play a little faster or slower, when I tell it to do so.
Also, is it possible to play back two different tracks at the same time? Imagine a recording with the instruments in one track and the vocals in a different track. One of these tracks should then be able to change its playback rate in "real time".
Is this possible on Symbian/S60?
It's possible, but you would have to:
Convert the audio data into PCM, if it is not already in this format
Process this PCM stream in the application, in order to change its playback rate (a naive sketch of this step follows at the end of this answer)
Render the audio via CMdaAudioOutputStream or CMMFDevSound (or QAudioOutput, if you are using Qt)
In other words, the platform itself does not provide any APIs for changing the audio playback rate - your application would need to process the audio stream directly.
As for playing multiple tracks together, depending on the device, the audio subsystem may let you play two or more streams simultaneously using either of the above APIs. The problem you may have however is that they are unlikely to be synchronised. Your app would probably therefore have to mix all of the individual tracks into one stream before rendering.
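To illustrate the processing step, the crudest rate change is plain resampling with linear interpolation; note that it shifts pitch along with speed (a pitch-preserving change needs a time-stretch algorithm such as WSOLA instead). Sketched in Python for clarity; on S60 you would write the equivalent in C++:

def change_rate(pcm, rate):
    # Naive playback-rate change on a list of PCM samples.
    # rate > 1.0 plays faster (fewer output samples), rate < 1.0 slower.
    out = []
    pos = 0.0
    while pos < len(pcm) - 1:
        i = int(pos)
        frac = pos - i
        # Linear interpolation between neighbouring input samples.
        out.append(int((1 - frac) * pcm[i] + frac * pcm[i + 1]))
        pos += rate
    return out

# 10% faster: feed the result to the output stream instead of the original.
faster = change_rate([0, 1000, 2000, 3000, 2000, 1000], 1.1)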