gstreamer WebRTC unidirectional broadcasting?

I'm a beginner to WebRTC and GStreamer and I have a question.
I've been struggling with the GStreamer WebRTC example, webrtc-unidirectional-h264.c, trying to broadcast an IP camera.
I changed the pipeline as shown below; at first it was just v4l2src.
receiver_entry->pipeline =
gst_parse_launch ("webrtcbin name=webrtcbin stun-server=stun://"
STUN_SERVER " "
"rtspsrc location=rtsp://192.168.0.5:554/stream1 latency=0 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! "
"x264enc bitrate=12288 speed-preset=ultrafast tune=zerolatency ! h264parse ! rtph264pay config-interval=-1 name=payloader ! "
"application/x-rtp,media=video,encoding-name=H264,payload=96"
" ! webrtcbin. ", &error);
To show it in Chrome fullscreen I edited some attributes in the HTML description, like below.
<body> \n \
<div> \n \
<video id=\"stream\" autoplay muted></video> \n \
</div> \n \
</body> \n \
As far as I know, Chrome doesn't play the stream automatically unless I give the muted option.
With this code I could serve a web page showing the stream.
But when multiple users enter the page, the latency of the stream grows higher and higher.
For one user the latency is about 300 ms, but with four users on the page it reaches about 3 seconds.
I've been searching for the reason, but it's hard to find.
I suspect a few possible causes, listed below.
1. Every time another user enters the page, another pipeline is opened, which might be a burden on the camera.
I set up a multicast server to check whether this is true, but I don't think it is: even when I ran the multicast server and used it as the source, latency still grew as the number of users increased.
2. Every time another user enters the page, another pipeline is opened, which might be a burden on the soup server.
I'm a beginner to WebRTC, so I've been searching for this but didn't find a related issue.
3. Every time another user enters the page, the GStreamer WebRTC pipeline needs a tee to work properly.
I'm now trying to make this work; if I make progress I'll update this issue.
I tried changing some options of the GStreamer pipeline, such as the encoder bitrate and the type of decoder and encoder.
I also tried changing the source: IP camera (RTSP), multicast server (via RTP), media server (RTSP).
But it didn't help.
What I expect is that multiple users can watch the same camera stream without added latency.
Can you give me any advice?
Thank you!

Your pipeline is fine, but each time a new user connects:
you connect to the camera again (bandwidth consumption),
you encode the received stream again (CPU-intensive),
and then you send the video to that connected user.
As you said, a tee could be a good idea; you can also, if you want, reuse the encoded stream straight from your camera (only if it is constrained-baseline profile, since higher profiles are not supported by all browsers).
Your main pipeline should look like this:
rtspsrc location=rtsp://192.168.0.5:554/stream1 latency=0 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! x264enc bitrate=12288 speed-preset=ultrafast tune=zerolatency key-int-max=50 ! h264parse ! tee name=m
Then, on each connection:
m. ! queue ! h264parse ! rtph264pay ! 'application/x-rtp,media=video,encoding-name=H264,payload=96' ! webrtcbin
(I think you can also put rtph264pay ! 'application/x-rtp,media=video,encoding-name=H264,payload=96' before the tee.)
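In C, the per-connection hookup could look like the minimal sketch below; the function name and surrounding structure are my own illustration, not the demo's exact code. Each new viewer requests a src pad from the shared tee and gets its own queue ! h264parse ! rtph264pay ! capsfilter ! webrtcbin branch.

#include <gst/gst.h>

/* Sketch: attach a new webrtcbin branch to the shared "tee name=m"
 * each time a viewer connects. `pipeline` is the running pipeline. */
static GstElement *
add_viewer_branch (GstElement * pipeline)
{
  GstElement *tee = gst_bin_get_by_name (GST_BIN (pipeline), "m");
  GstElement *queue = gst_element_factory_make ("queue", NULL);
  GstElement *parse = gst_element_factory_make ("h264parse", NULL);
  GstElement *pay = gst_element_factory_make ("rtph264pay", NULL);
  GstElement *capsf = gst_element_factory_make ("capsfilter", NULL);
  GstElement *webrtc = gst_element_factory_make ("webrtcbin", NULL);

  /* Resend SPS/PPS periodically so viewers joining mid-stream can decode. */
  g_object_set (pay, "config-interval", -1, NULL);

  GstCaps *caps = gst_caps_from_string
      ("application/x-rtp,media=video,encoding-name=H264,payload=96");
  g_object_set (capsf, "caps", caps, NULL);
  gst_caps_unref (caps);

  gst_bin_add_many (GST_BIN (pipeline), queue, parse, pay, capsf, webrtc, NULL);
  gst_element_link_many (queue, parse, pay, capsf, webrtc, NULL);

  /* Request a new output pad on the tee and link it to this branch
   * (gst_element_request_pad_simple() on GStreamer >= 1.20). Keep
   * `teepad` around if you want to release it when the viewer leaves. */
  GstPad *teepad = gst_element_get_request_pad (tee, "src_%u");
  GstPad *sinkpad = gst_element_get_static_pad (queue, "sink");
  gst_pad_link (teepad, sinkpad);
  gst_object_unref (sinkpad);
  gst_object_unref (tee);

  /* Bring the new elements up to the running pipeline's state. */
  gst_element_sync_state_with_parent (queue);
  gst_element_sync_state_with_parent (parse);
  gst_element_sync_state_with_parent (pay);
  gst_element_sync_state_with_parent (capsf);
  gst_element_sync_state_with_parent (webrtc);

  return webrtc; /* hook the usual webrtcbin signalling onto this */
}

When a viewer disconnects, you would release the requested tee pad and remove that branch again.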
If you want to use the camera's encoder:
rtspsrc location=rtsp://192.168.0.5:554/stream1 latency=0 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! rtph264depay ! h264parse ! tee name=m
On each connection, the branch off the tee is the same as above.
You can do that only if the camera encodes constrained-baseline profile.
If you share the encoder, you need to ensure that the key-frame interval is regular (key-int-max for x264enc), so that late joiners receive a keyframe quickly.
One other thing: you seem to have NVIDIA hardware, so an NVENC encoder element might reduce your CPU usage.
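For example, something like the following (a sketch: since nvv4l2decoder and nvvidconv are Jetson elements, this assumes the Jetson hardware encoder nvv4l2h264enc, whose bitrate property is in bits per second; on desktop NVIDIA the element is nvh264enc with different property names):
rtspsrc location=rtsp://192.168.0.5:554/stream1 latency=0 ! queue ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! nvv4l2h264enc bitrate=12000000 iframeinterval=50 insert-sps-pps=true ! h264parse ! tee name=m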
You can contact me if you need a concrete implementation.
Best regards.

Related

Issues with WebRTC/Gstreamer video quality

I'm pretty new to streaming and real-time communication. I need to work on a service to play back a camera feed in the browser (and probably use GStreamer to process the video in the future).
So I followed a hello-world example here: https://github.com/centricular/gstwebrtc-demos/blob/master/sendrecv/gst-java/src/main/java/WebrtcSendRecv.java
It looked good, and I got my camera video for the first ten seconds or so. After that, the video quality started to become worse.
By the way, here is my current GStreamer pipeline description (the part feeding WebRTCBin):
videoconvert ! queue max-size-buffers=1 leaky=downstream ! vp8enc deadline=1 ! rtpvp8pay mtu=1024 ! queue max-size-buffers=1 leaky=downstream ! capsfilter caps=application/x-rtp,media=video,encoding-name=VP8,payload=120
What could be the reason for that in WebRTC? Could it be latency or just network congestion? Any clue is appreciated!

Why does the bitrate decrease when streaming via WebRTCBin?

I have been experimenting with WebRTC video streaming via GStreamer's WebRTCBin element to a web browser (Google Chrome). I noticed that the bitrate sometimes decreases, to the point of reaching zero when the source video being streamed doesn't change.
In this example, between 60 and ~90 seconds the bitrate of the received video falls and stays close to 0. In that time window the video at the source is a game's loading screen, which doesn't change until the loading bar moves. Also, when the game finishes loading and starts again, the bitrate goes back up and then starts decreasing again.
My pipeline uses NVENC as the encoder:
"dxgiscreencapsrc cursor=true ! capsfilter caps=\"video/x-raw,framerate=60/1\" ! queue ! nvh264enc bitrate=2250 rc-mode=vbr gop-size=-1 qos=true preset=low-latency-hq ! capsfilter caps=\"video/x-h264,profile=high\" ! queue ! rtph264pay ! capsfilter caps=\"application/x-rtp,media=video,encoding-name=H264,width=1280,height=720,payload=123\""
I was wondering if there is some kind of optimization, either in WebRTCBin, NVENC, or the web browser, that prevents repeated bits from being sent to the client browser. Is that correct? Which one is the culprit?
I did some searching online (for WebRTC and WebRTCBin related material) and could not find anything that explains the data I have been seeing.

GStreamer extract JPEG image from MJPEG UDP stream

I am using the following command to try to grab a single JPEG picture from an MJPEG-over-UDP stream with GStreamer:
gst-launch-1.0 udpsrc port=53247 ! jpegdec ! jpegenc ! filesink location=test.jpeg
The problem is that even though I manage to get a snapshot of the stream as a JPEG image, the pipeline doesn't stop, and the size of the output image keeps growing until I manually stop the pipeline.
I also tried the option num-buffers=1, but then I only get a completely black image.
Is there a command that would let me take a JPEG snapshot of the stream properly?
I found a solution that partially answers my question.
I empirically set num-buffers to 75, which in my case is enough to get a full image and gives me JPEG files of reasonable size.
The command is the following:
gst-launch-1.0 -e udpsrc port=53247 num-buffers=75 ! jpegdec ! jpegenc ! filesink location=test.jpeg
But since num-buffers is set empirically, I don't think this solution is the best fit.
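A less empirical alternative (a sketch, untested against this exact stream): some image encoders can post EOS themselves after encoding one frame via a snapshot property, so the pipeline stops cleanly without guessing a buffer count. pngenc has had snapshot for a long time (note the output becomes PNG rather than JPEG), and newer GStreamer releases added a similar snapshot property to jpegenc:
gst-launch-1.0 -e udpsrc port=53247 ! jpegdec ! pngenc snapshot=true ! filesink location=test.png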

VLC streaming with ip camera - reconnect when camera resets

I am using VLC to get the video stream from my IP cam and restream it to the network, in order to save the limited Wi-Fi bandwidth that reaches the cam. The command I use to do this is as follows:
cvlc [cam stream] --sout "#standard{access=http{mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:[chosen port]}"
The problem is that when the camera restarts, VLC neither quits nor reconnects to it, so I cannot run it again. Does anyone have an idea on how to solve this? Any help will be very appreciated.
Got it working by writing a script that periodically connects to the VLC instance via telnet, checks the amount of bytes received, and saves it into a log file. If the amount of bytes is the same as the last time it checked, the script sends a stop and then a play command to VLC, and if the camera is back online the stream works again. If anyone wants more details about the implementation, just ask!

gstreamer axis camera

I have an RTSP stream (Axis 211 IP camera). gst-launch playbin2 uri=... can show it just fine. I cannot figure out the right pipeline to duplicate what playbin2 is doing. Is there a way to dump a description of the pipeline playbin2 creates?
You should first identify the types of streams output by the camera. For example, I have an Axis 1054 camera transmitting H.264 video and MPEG-4 AAC audio (.m4a) elementary streams.
So my pipeline for displaying the video is as follows:
gst-launch rtspsrc location=rtsp://192.x.x.x:555/media ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! autovideosink
If you identify the format of the streams correctly, you should have no problem.
Use the -v argument to gst-launch. You can figure out what pieces to put together from the output.
The other answers were useful for sure, but in the end I found the best way is to use the DOT file dump:
http://gstreamer.freedesktop.org/wiki/DumpingPipelineGraphs
You can see all the details of what playbin constructed. Very useful.
In a C program you can call GST_DEBUG_BIN_TO_DOT_FILE().
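A minimal sketch of that call, using the 0.10-era playbin2 and the camera URI from this question. GST_DEBUG_DUMP_DOT_DIR must point at an existing directory before the program starts (e.g. GST_DEBUG_DUMP_DOT_DIR=/tmp ./myapp), otherwise no file is written:

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  gst_init (&argc, &argv);

  /* Same kind of playback the question describes. */
  GstElement *play = gst_element_factory_make ("playbin2", "play");
  g_object_set (play, "uri", "rtsp://192.x.x.x:555/media", NULL);
  gst_element_set_state (play, GST_STATE_PLAYING);

  /* Wait until the pipeline has prerolled so the dumped graph
   * shows the fully negotiated elements playbin2 created. */
  GstBus *bus = gst_element_get_bus (play);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, 10 * GST_SECOND,
      GST_MESSAGE_ASYNC_DONE | GST_MESSAGE_ERROR);
  if (msg)
    gst_message_unref (msg);
  gst_object_unref (bus);

  /* Writes $GST_DEBUG_DUMP_DOT_DIR/playbin.dot;
   * render it with: dot -Tpng playbin.dot -o playbin.png */
  GST_DEBUG_BIN_TO_DOT_FILE (GST_BIN (play),
      GST_DEBUG_GRAPH_SHOW_ALL, "playbin");

  gst_element_set_state (play, GST_STATE_NULL);
  gst_object_unref (play);
  return 0;
}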