I'm trying to decode video buffer data using ffmpeg tools.
I can execute an ffmpeg command in React Native, but I don't know whether I can decode a live buffer with ffmpeg.
I'm using ffmpeg-kit:
FFmpegKit.execute('ffmpeg command here');
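For example, a plain file-to-file transcode like this works (the paths are just placeholders):
FFmpegKit.execute('-i input.mp4 -c:v mpeg4 output.mp4');
But I don't see how to point the -i input at a live buffer instead of a file.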
If anyone knows more about doing this with ffmpeg, please share.
Thank you!
Related
I'm trying to take two videos and transform them with ffmpeg into a single video. It works great if you take the two videos, run them through ffmpeg and then serve that file up via an API. Unfortunately the upper range for these videos is ~20 minutes, and this method takes too long to create the full video (~30 seconds w/ ultrafast).
I had an idea to stream the output of the ffmpeg command to the client, which would eliminate the need to wait for ffmpeg to create the whole video. I've tried to prove this out myself and haven't had much success. It could be my inexperience with streams, or this could be impossible.
Does anyone know if my idea to stream the in-progress output of ffmpeg is possible / feasible?
You should check out Hangfire. I used it for running the process in the background, and if you need a notification when it finishes, SignalR will help you.
What do you mean by "streaming"? Serving the result of your command to an HTTP client on the fly? Or is your client some video player that plays the video (like a VLC player receiving a TCP stream from 4 IP cameras)?
Dealing with video isn't a simple task, and you need to choose your protocols, tools and even hardware carefully.
Based on the command that you sent as an example, you probably need some jobs that convert your videos.
Here's a complete article on how to use Azure Batch to do the processing with ffmpeg. You can use any batching solution you want (another answer suggests Hangfire, and that's fine too).
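If you do want to serve the in-progress output over HTTP rather than batching it, one rough sketch is to have ffmpeg write fragmented MP4 to stdout and pipe that straight into the HTTP response (the file names and the concat filter here are assumptions, since your actual command isn't shown):
ffmpeg -i first.mp4 -i second.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" -c:v libx264 -preset ultrafast -c:a aac -movflags frag_keyframe+empty_moov -f mp4 pipe:1
A regular MP4 can't be streamed while it's being written because its moov atom is only written at the end, which is why the frag_keyframe+empty_moov flags matter here.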
I am using the following command to try to take a single JPEG picture from an MJPEG-over-UDP stream with GStreamer:
gst-launch-1.0 udpsrc port=53247 ! jpegdec ! jpegenc ! filesink location=test.jpeg
The problem is that even if I manage to get a snapshot of the stream as a JPEG image, the pipeline doesn't stop and the size of the output file keeps growing until I manually stop the pipeline.
I also tried the option num-buffers=1, but then I only get a completely black image.
Is there a command that would allow me to take a JPEG format snapshot from the stream properly?
I found a solution that partially answers my question.
I empirically set num-buffers to 75, which is enough in my case to get a full image and gives me JPEG files of a reasonable size.
The command is the following:
gst-launch-1.0 -e udpsrc port=53247 num-buffers=75 ! jpegdec ! jpegenc ! filesink location=test.jpeg
But since num-buffers is set empirically, I don't think this solution is the best suited.
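A less empirical variant that might work (I haven't tested it on this exact stream) is to let the encoder end the pipeline itself after one good frame, e.g. with pngenc, whose snapshot property sends EOS after the first encoded image:
gst-launch-1.0 udpsrc port=53247 ! jpegdec ! videoconvert ! pngenc snapshot=true ! filesink location=test.png
That produces a PNG instead of a JPEG, but it stops on its own as soon as one complete frame has been decoded. The black image with num-buffers=1 is probably because a single UDP packet doesn't contain a complete JPEG frame.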
I'm trying to split the audio from the video in an HLS TS stream; the audio is in AAC format.
The goal is to end up with some sort of AVAsset that I can later manipulate and then mux back into the video.
After searching for a while I can't find a solid lead. Can someone give me an educated direction to take on this issue?
You can use the ffmpeg/libav library for demuxing the ts. To load the audio back as an AVAsset, it might be necessary to load it from a URL, either by writing temporarily to disk or serving with a local http server within your program.
I think you might run into some trouble in manipulating the audio stream, assuming you want to manipulate the raw audio data. That will require decoding the AAC, modifying it, re-encoding, and re-muxing with the video. That's all possible with ffmpeg/libav, but it's not really that easy.
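As a concrete starting point on the command line (the stream indexes and file names here are assumptions about your particular TS), you can pull the AAC out without re-encoding and later put the edited track back:
ffmpeg -i segment.ts -map 0:a:0 -c copy -bsf:a aac_adtstoasc audio.m4a
ffmpeg -i segment.ts -i edited_audio.m4a -map 0:v:0 -map 1:a:0 -c copy remuxed.mp4
Wrapping the AAC in an .m4a container also makes it easy to load as an AVAsset from a plain file URL.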
I have an RTSP stream (Axis 211 IP camera). gst-launch playbin2 uri=... can show it just fine, but I cannot figure out the right pipeline to duplicate what playbin2 is doing. Is there a way to dump a description of the pipeline playbin2 creates?
You should first identify the type of streams output by the camera. For example, I have an Axis 1054 camera transmitting H.264 video and MPEG-4 AAC audio (.m4a) elementary streams.
So my pipeline for displaying the video is as follows:
gst-launch rtspsrc location=rtsp://192.x.x.x:555/media ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! autovideosink
If you are identifying the format of the streams correctly then you should have no problem.
Use the -v argument to gst-launch. You can figure out what pieces to put together from the output.
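For example (using the RTSP URL from the other answer, so adjust it for your camera):
gst-launch -v playbin2 uri=rtsp://192.x.x.x:555/media
The verbose output prints each element and the caps negotiated on its pads, which is usually enough to reconstruct an explicit pipeline by hand.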
The other answers were useful for sure, but in the end I found the best way is to use the DOT file dump.
http://gstreamer.freedesktop.org/wiki/DumpingPipelineGraphs
You can see all the details of what playbin constructed. Very useful.
In a C program you can call
GST_DEBUG_BIN_TO_DOT_FILE()
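If you'd rather stay with gst-launch, setting GST_DEBUG_DUMP_DOT_DIR to a writable directory makes it drop .dot files of the pipeline on every state change, which you can then render with Graphviz (the exact file names vary by state transition):
GST_DEBUG_DUMP_DOT_DIR=/tmp gst-launch playbin2 uri=rtsp://192.x.x.x:555/media
dot -Tpng /tmp/*PAUSED_PLAYING.dot -o playbin2-pipeline.png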
Hi
I am using the NAudio library at http://naudio.codeplex.com/
I have this hardware made by some manufacturer which claims to send audio with the following characteristics:
aLaw 8kHz, AUD:11,0,3336,0
Not sure what it all means at this stage.
I receive a bunch of bytes from this device when a user speaks into the equipment, so I am constantly receiving a stream of bytes at particular times.
At this stage I have been unable to decode the audio so that I can hear what is spoken into the device through my headphones.
I have tried writing the audio to a file with code like:
FWaveFileWriter = new WaveFileWriter(@"C:\Test4.wav",
    WaveFormat.CreateALawFormat(8000, 1));
And I have been unable to play back the sound using the sample demo apps.
I have tried similar code from
http://naudio.codeplex.com/Thread/View.aspx?ThreadId=231245 and
http://naudio.codeplex.com/Thread/View.aspx?ThreadId=83270
and still have not been able to achieve much.
Any information is appreciated.
Thanks
Allen
If you are definitely receiving raw a-law audio (mono 8kHz) then your code to create a WAV file should work correctly and result in a file that can play in Windows Media Player.
I suspect that maybe your incoming byte stream is wrapped in some other kind of protocol. I'm afraid I don't know what "AUD:11,0,3336,0" means, but that might be a place to start investigating. Do you hear anything intelligible at all when you play back the file?
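One quick sanity check, if you can dump the incoming bytes to a file (say capture.raw), is to play them back with ffmpeg's ffplay while forcing the format, which takes NAudio out of the picture entirely:
ffplay -f alaw -ar 8000 -ac 1 capture.raw
If that sounds right, the bytes really are raw a-law and the problem is on the playback side; if it's garbled, there is probably framing or protocol data (perhaps that AUD header) mixed into the stream.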