GStreamer extract JPEG image from MJPEG UDP stream

I am using the following command to try to take a single JPEG picture from an MJPEG-over-UDP stream with GStreamer:
gst-launch-1.0 udpsrc port=53247 ! jpegdec ! jpegenc ! filesink location=test.jpeg
The problem is that even though I do manage to get a snapshot of the stream as a JPEG image, the pipeline doesn't stop and the size of the output file keeps growing until I stop the pipeline manually.
I also tried the option num-buffers=1, but then I only get a completely black image.
Is there a command that would allow me to take a JPEG format snapshot from the stream properly?

I found a solution that partially answers my question.
I empirically set num-buffers to 75, which in my case is enough to get a full image and gives me JPEG files of a reasonable size.
The command is the following:
gst-launch-1.0 -e udpsrc port=53247 num-buffers=75 ! jpegdec ! jpegenc ! filesink location=test.jpeg
But since num-buffers is set empirically, I don't think this is the cleanest solution. (num-buffers on udpsrc counts UDP packets rather than frames, which is presumably also why num-buffers=1 produced a completely black image: a single datagram does not contain a whole JPEG frame.)
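A version-dependent alternative: recent GStreamer releases give jpegenc a snapshot property (similar to pngenc's), which makes the encoder send EOS by itself after the first encoded frame, so the pipeline stops cleanly without guessing a buffer count. You can check whether your version has it with gst-inspect-1.0 jpegenc. A sketch, assuming that property is available:
gst-launch-1.0 -e udpsrc port=53247 ! jpegdec ! jpegenc snapshot=true ! filesink location=test.jpeg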

Related

gstreamer WebRTC unidirectional broadcasting?

I'm a beginner with WebRTC and GStreamer and I have a question.
I've been struggling with the GStreamer WebRTC example, webrtc-unidirectional-h264.c, to broadcast an IP camera.
I changed the pipeline as shown below; at first it was just v4l2src.
receiver_entry->pipeline =
gst_parse_launch ("webrtcbin name=webrtcbin stun-server=stun://"
STUN_SERVER " "
"rtspsrc location=rtsp://192.168.0.5:554/stream1 tune=zerolatency latency=0 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! "
"x264enc bitrate=12288 speed-preset=ultrafast tune=zerolatency ! h264parse ! rtph264pay config-interval=-1 name=payloader ! "
"application/x-rtp,media=video,encoding-name=H264,payload=96"
" ! webrtcbin. ", &error);
To show it in Chrome in fullscreen I edited some flags in the HTML description, as shown below.
<body> \n \
<div> \n \
<video id=\"stream\" autoplay muted></video> \n \
</div> \n \
</body> \n \
As far as I know, Chrome doesn't play the stream automatically if I don't set the muted option.
With this code I could make a web-view stream page.
But if multiple users enter the page, the latency of the stream gets higher and higher.
For one user the latency is about 300 ms, but if four users enter the page the latency goes up to about 3 seconds.
I've been searching for the reason, but it's hard to find.
I suspect some possible reasons, listed below.
1. Every time another user enters the page, it opens another pipeline, so it might be a burden on the camera.
I set up a multicast server to check whether this is true, but I don't think it is: even though I opened the multicast server and used it as the source, the latency still went up as the number of users increased.
2. Every time another user enters the page, it opens another pipeline, so it might be a burden on the soup server.
I'm a beginner with WebRTC, so I've been searching for this but I didn't find a related issue.
3. Every time another user enters the page, the GStreamer WebRTC pipeline needs a tee to work properly.
I'm trying to make this work now; if I make some progress I'll update it here.
I tried changing some options of the GStreamer pipeline, like the encoder bitrate and the decoder/encoder types.
And I tried changing the source: IP camera (RTSP), multicast server (via RTP), media server (RTSP).
But it didn't help.
What I expect is that multiple users can see the same camera stream without added latency.
Can you give me any advice?
Thank you!
Your pipeline is fine, but each time a new user connects:
you connect to the camera (bandwidth consumption),
you re-encode the stream you receive (CPU-intensive),
and then you send the video to that user.
As you said previously, a tee could be a good idea. You can also, if you want, reuse the encoded stream coming from your camera (only if it is constrained-baseline profile; higher profiles are not supported by all browsers).
Your main pipeline should look like this:
rtspsrc location=rtsp://192.168.0.5:554/stream1 tune=zerolatency latency=0 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! x264enc bitrate=12288 speed-preset=ultrafast tune=zerolatency key-int-max=50 ! h264parse ! tee name=m
Then on each connection:
m. ! queue ! h264parse ! rtph264pay ! 'application/x-rtp,media=video,encoding-name=H264,payload=96' ! webrtcbin
(I think you can also put rtph264pay ! 'application/x-rtp,media=video,encoding-name=H264,payload=96' before the tee.)
If you want to use the camera's encoder:
rtspsrc location=rtsp://192.168.0.5:554/stream1 tune=zerolatency latency=0 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! rtph264depay ! h264parse ! tee name=m
The per-connection branch off the tee is the same as above.
You can do that only if the camera's profile is constrained-baseline.
If you share the encoder you need to ensure that the key-frame interval is regular (key-int-max for x264enc).
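As a minimal, self-contained illustration of one shared encoder feeding several branches through a tee (using videotestsrc and local video sinks here in place of the camera and the per-connection webrtcbin, so it can be run standalone):
gst-launch-1.0 videotestsrc is-live=true ! x264enc tune=zerolatency key-int-max=50 ! h264parse ! tee name=m m. ! queue ! avdec_h264 ! videoconvert ! autovideosink m. ! queue ! avdec_h264 ! videoconvert ! autovideosink
Each m. ! queue ! ... branch corresponds to one connected viewer; the encoder runs only once no matter how many branches are attached.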
One other thing: you seem to have NVIDIA hardware, so an NVENC-based hardware encoder element might reduce your CPU usage.
You can contact me if you need a concrete implementation.
Best regards.

Decode buffer to video format

I'm trying to decode buffered video data using ffmpeg tools.
I can execute an ffmpeg command in React Native, but I don't know if I can decode a live buffer with ffmpeg.
I'm using ffmpeg-kit:
FFmpegKit.execute('ffmpeg command here');
If anyone knows more about doing this with ffmpeg, please share.
Thank you!

Issues with WebRTC/Gstreamer video quality

I'm pretty new to streaming and real-time communication. I need to work on a service to play back a camera feed from the browser (and probably use GStreamer to process the video in the future).
So I follow a helloworld example here: https://github.com/centricular/gstwebrtc-demos/blob/master/sendrecv/gst-java/src/main/java/WebrtcSendRecv.java
This works well and I get my camera video for the first ten seconds or so. After that, the video quality starts to get worse.
By the way, here is my current GStreamer pipeline description (after webrtcbin):
videoconvert ! queue max-size-buffers=1 leaky=downstream ! vp8enc deadline=1 ! rtpvp8pay mtu=1024 ! queue max-size-buffers=1 leaky=downstream ! capsfilter caps=application/x-rtp,media=video,encoding-name=VP8,payload=120
What could be the reason for that in WebRTC? Could it be latency, or just network congestion? Any clue is appreciated!

What is the fastest method to compress videos before sending in React Native Chatting app

So my question is mainly in the title: I'm working on a chat app and I have to compress videos before uploading them to the database (Firebase Storage). All I could find so far is ffmpeg, but the issue is that it takes a tremendous amount of time to compress videos; a 10-second video takes around a minute, and I was astonished at how fast it's done in WhatsApp. So is there any other method to compress videos faster? Or does changing the ffmpeg command make an acceptable difference? The currently used command is "-y -i ${rVideoUrl} -c:v libx264 -crf 28 -preset ultrafast ${finalVideo}"
One of the methods that decreases the compression time by a good margin is to set the output video to a low resolution with "-vf scale=426:240", so the whole command becomes "-y -i ${inputVideo} -c:v libx264 -crf 28 -vf scale=426:240 -preset ultrafast ${outputVideo}". You can also use "-1" instead of 426 or 240: -1 tells ffmpeg to automatically choose the height that matches the provided width so the aspect ratio is preserved, and it can likewise be used for the width if you provide a given height. You can check more details regarding "-1" here.
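For example, to keep a 240-pixel height and let ffmpeg pick the matching width, a plain command-line equivalent would look like this (the file names are just placeholders; note that libx264 needs even dimensions, so -2 is commonly used instead of -1 to force the auto-computed side to be even):
ffmpeg -y -i input.mp4 -c:v libx264 -crf 28 -vf scale=-2:240 -preset ultrafast output.mp4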

gstreamer axis camera

I have an RTSP stream (Axis 211 IP camera). gst-launch playbin2 uri=... can show it just fine, but I cannot figure out the right pipeline to duplicate what playbin2 is doing. Is there a way to dump a description of the pipeline playbin2 creates?
You should first identify the types of streams output by the camera. For example, I have an Axis 1054 camera transmitting H.264 video and MPEG-4 AAC audio (.m4a) elementary streams.
So my pipeline for displaying the video is as follows:
gst-launch rtspsrc location=rtsp://192.x.x.x:555/media ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! autovideosink
If you are identifying the format of the streams correctly then you should have no problem.
Use the -v argument to gst-launch. You can figure out what pieces to put together from the output.
The other answers were useful for sure, but in the end I found the best way is to use the DOT file dump:
http://gstreamer.freedesktop.org/wiki/DumpingPipelineGraphs
You can see all the details of what playbin constructed. Very useful.
In a C program you can call
GST_DEBUG_BIN_TO_DOT_FILE()
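For a gst-launch pipeline (where you cannot call that function yourself), you can instead set the GST_DEBUG_DUMP_DOT_DIR environment variable; GStreamer then writes .dot files on state changes, which you can render with Graphviz. Roughly, substituting your camera's RTSP URL and picking one of the dumped files (the exact filename varies):
GST_DEBUG_DUMP_DOT_DIR=/tmp gst-launch-1.0 playbin uri=rtsp://192.x.x.x:555/media
dot -Tpng /tmp/<one-of-the-dumped-files>.dot -o playbin-pipeline.png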