gstreamer axis camera

I have an RTSP stream (Axis 211 IP camera). gst-launch playbin2 uri=... can show it just fine, but I cannot figure out the right pipeline to duplicate what playbin2 is doing. Is there a way to dump a description of the pipeline playbin2 creates?

You should first identify the types of streams output by the camera. For example, I have an Axis 1054 camera transmitting H.264 video and MPEG-4 AAC audio (.m4a) elementary streams.
So my pipeline for displaying the video is as follows:
gst-launch rtspsrc location=rtsp://192.x.x.x:555/media ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! autovideosink
If you identify the format of the streams correctly, you should have no problem.
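For reference, here is a minimal C sketch that runs the same pipeline programmatically with gst_parse_launch (the RTSP URL is the placeholder from the command above; error handling is reduced to a bus wait):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* Same chain as the gst-launch line above; adjust the URL for your camera. */
  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "rtspsrc location=rtsp://192.x.x.x:555/media ! rtph264depay ! "
      "ffdec_h264 ! ffmpegcolorspace ! autovideosink", &error);
  if (!pipeline) {
    g_printerr ("Parse error: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Block until an error or end-of-stream message arrives. */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg)
    gst_message_unref (msg);

  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}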

Use the -v argument to gst-launch. You can figure out which pieces to put together from the output.

The other answers were useful for sure, but in the end I found the best way is to use the DOT file dump:
http://gstreamer.freedesktop.org/wiki/DumpingPipelineGraphs
You can see all the details of what playbin constructed. Very useful.
In a C program you can call
GST_DEBUG_BIN_TO_DOT_FILE()
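As a rough sketch of that call (assuming the GST_DEBUG_DUMP_DOT_DIR environment variable points at the directory where the .dot file should be written, which the macro requires):

#include <gst/gst.h>

/* Dump the current topology of a pipeline (e.g. a playbin2 instance) to
 * $GST_DEBUG_DUMP_DOT_DIR/playbin-graph.dot, which can then be rendered
 * with Graphviz: dot -Tpng playbin-graph.dot -o playbin-graph.png */
static void dump_pipeline_graph (GstElement *pipeline)
{
  GST_DEBUG_BIN_TO_DOT_FILE (GST_BIN (pipeline),
      GST_DEBUG_GRAPH_SHOW_ALL, "playbin-graph");
}

Calling this once the pipeline has reached PLAYING gives the fully autoplugged graph.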

Related

gstreamer WebRTC unidirectional broadcasting?

I'm a beginner with WebRTC and GStreamer, and I have a question.
I've been struggling with the GStreamer WebRTC example, webrtc-unidirectional-h264.c, to broadcast an IP camera.
I changed the pipeline as shown below; at first it was just v4l2src.
receiver_entry->pipeline =
gst_parse_launch ("webrtcbin name=webrtcbin stun-server=stun://"
STUN_SERVER " "
"rtspsrc location=rtsp://192.168.0.5:554/stream1 tune=zerolatency latency=0 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! "
"x264enc bitrate=12288 speed-preset=ultrafast tune=zerolatency ! h264parse ! rtph264pay config-interval=-1 name=payloader ! "
"application/x-rtp,media=video,encoding-name=H264,payload=96"
" ! webrtcbin. ", &error);
To show it in Chrome full screen, I edited some attributes in the HTML description, as below.
<body> \n \
<div> \n \
<video id=\"stream\" autoplay muted></video> \n \
</div> \n \
</body> \n \
As far as I know, Chrome doesn't play the stream automatically if I don't add the muted option.
With that code I could make a web page showing the stream.
But if multiple users enter the page, the latency of the stream gets higher and higher.
For one user the latency is about 300 ms, but if four users enter the page it goes up to about 3 seconds.
I've been searching for the reason, but it's hard to find.
I suspect some possible causes, listed below.
1. Every time another user enters the page, it opens another pipeline, so it might be a burden on the camera.
I set up a multicast server to check whether this is true, but I don't think it is: even when I used the multicast server as the source, the latency still grew as the number of users increased.
2. Every time another user enters the page, it opens another pipeline, so it might be a burden on the soup server.
I'm a beginner with WebRTC, so I've been searching for this but haven't found a related issue.
3. Every time another user enters the page, the GStreamer WebRTC pipeline needs a tee to work properly.
I'm now trying to make this work; if I make some progress I'll update this issue.
I tried changing some options of the GStreamer pipeline, like the encoder bitrate and the type of decoder/encoder.
And I tried changing the source: IP camera (RTSP), multicast server (via RTP), media server (RTSP).
But it doesn't help.
What I expect is that multiple users can watch the same camera stream without added latency.
Can you give me advice?
Thank you!
Your pipeline is good, but each time a new user connects:
You connect to the camera (bandwidth consumption).
You encode the stream you receive (CPU-intensive).
Then you send the video to the connected user.
As you said, a tee could be a good idea; you can also reuse the encoded stream from your camera (only if it is constrained baseline profile, as higher profiles are not supported by all browsers).
Your main pipeline should be like this:
rtspsrc location=rtsp://192.168.0.5:554/stream1 tune=zerolatency latency=0 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! x264enc bitrate=12288 speed-preset=ultrafast tune=zerolatency key-int-max=50 ! h264parse ! tee name=m
Then on each connection:
m. ! queue ! h264parse ! rtph264pay ! 'application/x-rtp,media=video,encoding-name=H264,payload=96' ! webrtcbin
(I think you can also put rtph264pay ! 'application/x-rtp,media=video,encoding-name=H264,payload=96' before the tee.)
If you want to use the camera's encoder:
rtspsrc location=rtsp://192.168.0.5:554/stream1 tune=zerolatency latency=0 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! rtph264depay ! h264parse ! tee name=m
On each connection the tee branch is the same.
You can do that only if the camera profile is constrained baseline.
If you share the encoder, you need to ensure that the key frame interval is regular (key-int-max for x264enc).
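As a rough C sketch of the shared-pipeline approach (assuming one global pipeline built at startup and a helper called per viewer; create_viewer_branch and the fakesink branch are illustrative additions, and STUN_SERVER comes from the original example):

/* Built once at startup: decode the camera, re-encode, and end in a tee. */
static GstElement *shared_pipeline;

static void build_shared_pipeline (void)
{
  GError *error = NULL;
  shared_pipeline = gst_parse_launch (
      "rtspsrc location=rtsp://192.168.0.5:554/stream1 latency=0 ! queue ! "
      "rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! "
      "x264enc bitrate=12288 speed-preset=ultrafast tune=zerolatency key-int-max=50 ! "
      "h264parse ! tee name=m ! queue ! fakesink sync=false", &error);
  /* The fakesink branch keeps data flowing while no viewer is connected. */
  gst_element_set_state (shared_pipeline, GST_STATE_PLAYING);
}

/* Called for each new WebRTC session: adds m. ! queue ! rtph264pay ! webrtcbin. */
static GstElement *create_viewer_branch (void)
{
  GstElement *bin = gst_parse_bin_from_description (
      "queue ! rtph264pay config-interval=-1 ! "
      "application/x-rtp,media=video,encoding-name=H264,payload=96 ! "
      "webrtcbin name=webrtcbin stun-server=stun://" STUN_SERVER,
      TRUE, NULL);
  GstElement *tee = gst_bin_get_by_name (GST_BIN (shared_pipeline), "m");

  gst_bin_add (GST_BIN (shared_pipeline), bin);
  gst_element_link (tee, bin);              /* requests a new src pad on the tee */
  gst_element_sync_state_with_parent (bin);

  gst_object_unref (tee);
  return bin;   /* connect the usual webrtcbin signal handlers on this branch */
}

On disconnect the branch would be unlinked, its tee request pad released, and the bin removed and set to NULL.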
One other thing: since you seem to have NVIDIA hardware, the nvenc element might be better for reducing your CPU usage.
You can contact me if you need a concrete implementation.
Best regards.

Issues with WebRTC/Gstreamer video quality

I'm pretty new to streaming and real-time communication. I need to work on a service to play back a camera feed in the browser (and probably use GStreamer to process the video in the future).
So I followed a hello-world example here: https://github.com/centricular/gstwebrtc-demos/blob/master/sendrecv/gst-java/src/main/java/WebrtcSendRecv.java
This looked good and I got my camera video for the first ten seconds or so. After 10 seconds, the video quality starts to become worse, like this:
BTW, here is my current GStreamer pipeline description (after webrtcbin):
videoconvert ! queue max-size-buffers=1 leaky=downstream ! vp8enc deadline=1 ! rtpvp8pay mtu=1024 ! queue max-size-buffers=1 leaky=downstream ! capsfilter caps=application/x-rtp,media=video,encoding-name=VP8,payload=120
What could be the reason for that in WebRTC? Could it be latency or just network congestion? Any clue is appreciated!
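For reference, this is roughly how that branch looks when built from C, with the vp8enc rate-control properties written out explicitly (the numeric values here are illustrative assumptions, not taken from the original code):

#include <gst/gst.h>

/* Build the encoder branch with explicit vp8enc settings so the
 * target bitrate and key-frame spacing are not left to defaults. */
static GstElement *build_vp8_branch (void)
{
  return gst_parse_bin_from_description (
      "videoconvert ! queue max-size-buffers=1 leaky=downstream ! "
      "vp8enc deadline=1 target-bitrate=2000000 keyframe-max-dist=30 cpu-used=4 ! "
      "rtpvp8pay mtu=1024 ! queue max-size-buffers=1 leaky=downstream ! "
      "application/x-rtp,media=video,encoding-name=VP8,payload=120",
      TRUE, NULL);
}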

Is it possible to stream the output of an ffmpeg command to a client with dot net core?

I'm trying to take two videos and transform them with ffmpeg into a single video. It works great if you take the two videos, run them through ffmpeg, and then serve that file up via an API. Unfortunately the upper range for these videos is ~20 minutes, and this method takes too long to create the full video (~30 seconds with ultrafast).
I had an idea to stream the output of the ffmpeg command to the client, which would eliminate the need to wait for ffmpeg to create the whole video. I've tried to prove this out myself and haven't had much success. It could be my inexperience with streams, or this could be impossible.
Does anyone know if my idea to stream the in-progress output of ffmpeg is possible/feasible?
You should check Hangfire. I used it for running the process in the background, and if you need a notification, SignalR will help you.
What do you mean by "streaming"? Serving the result of your command to an HTTP client on the fly? Or is your client some video player that plays the video (like a VLC player receiving a TCP stream of 4 IP cameras)?
Dealing with video isn't a simple task, and you need to choose your protocols, tools and even hardware carefully.
Based on the command that you sent as an example, you probably need some jobs that convert your videos.
Here's a complete article on how to use Azure Batch to process videos with ffmpeg. You can use any batching solution if you want (another answer suggests Hangfire, and that's OK too).
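To illustrate the "on the fly" idea in the abstract, a minimal C sketch that spawns ffmpeg writing a fragmented MP4 to stdout and forwards the bytes as they are produced (the input names, concat filter, and flags are assumptions, not the original command; a .NET Core service would do the equivalent with Process and the response stream):

#include <stdio.h>

int main (void)
{
  /* Fragmented MP4 (-movflags frag_keyframe+empty_moov) can be written to a
   * pipe, so the client can start consuming before encoding finishes. */
  const char *cmd =
      "ffmpeg -i first.mp4 -i second.mp4 "
      "-filter_complex \"[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]\" "
      "-map \"[v]\" -map \"[a]\" -preset ultrafast "
      "-movflags frag_keyframe+empty_moov -f mp4 pipe:1";

  FILE *ffmpeg = popen (cmd, "r");
  if (!ffmpeg)
    return 1;

  char buf[64 * 1024];
  size_t n;
  while ((n = fread (buf, 1, sizeof buf, ffmpeg)) > 0) {
    /* In a real service these chunks would be written to the HTTP response body. */
    fwrite (buf, 1, n, stdout);
    fflush (stdout);
  }
  return pclose (ffmpeg) == 0 ? 0 : 1;
}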

GStreamer extract JPEG image from MJPEG UDP stream

I am using the following command to try to take a single JPEG picture from an MJPEG-over-UDP stream with GStreamer:
gst-launch-1.0 udpsrc port=53247 ! jpegdec ! jpegenc ! filesink location=test.jpeg
The problem is that even if I manage to get a snapshot of the stream as a JPEG image, the pipeline doesn't stop, and the size of the output image keeps growing until I manually stop the pipeline.
I also tried the option num-buffers=1, but then I only get a completely black image.
Is there a command that would allow me to take a JPEG snapshot from the stream properly?
I found a solution that partially answers my question.
I empirically set the variable num-buffers to 75, which is enough in my case to get a full image and gives me JPEG files of a reasonable size.
The command is the following:
gst-launch-1.0 -e udpsrc port=53247 num-buffers=75 ! jpegdec ! jpegenc ! filesink location=test.jpeg
But since num-buffers is set empirically, I think this solution is not the best suited.
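A possible alternative (a sketch only, not tested against this stream): end the pipeline in an appsink and pull a single sample from C, so the process can shut down after exactly one complete re-encoded frame. The port and output file name are taken from the command above; the first frame can still be incomplete if capture starts mid-frame, the same caveat as num-buffers=1.

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* Same chain as the gst-launch command, but ending in an appsink. */
  GstElement *pipeline = gst_parse_launch (
      "udpsrc port=53247 ! jpegdec ! jpegenc ! appsink name=sink", NULL);
  GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Block until one complete JPEG frame has been produced. */
  GstSample *sample = gst_app_sink_pull_sample (GST_APP_SINK (sink));
  if (sample) {
    GstBuffer *buffer = gst_sample_get_buffer (sample);
    GstMapInfo map;
    if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
      FILE *f = fopen ("test.jpeg", "wb");
      fwrite (map.data, 1, map.size, f);
      fclose (f);
      gst_buffer_unmap (buffer, &map);
    }
    gst_sample_unref (sample);
  }

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (sink);
  gst_object_unref (pipeline);
  return 0;
}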

zte voice modem problem

We are using a ZTE USB modem. We can place a call with the AT command (ATD) successfully, but there is no sound when the remote device answers.
Does anyone have any idea?
My problem was associated with the ZTE USB modem.
I solved the problem.
I can receive and send voice separately through the voice port now, but I cannot get clean sound like the WCDMA UI does.
How can I receive and send data with high quality?
Please look at my source code. [http://serv7.boxca.com/files/0/z9g2d59a8rtw6n/ModemDial.zip]
Does anyone know where my error is?
Thank you for your time.
a) Not all ZTE USB modems support voice. To detect whether a modem supports it, check for a "ZTE voUSB Device" in your ports list.
b) If the port is present, voice will go through it in PCM format at 64 kbps (8000 samples per second, 8-bit samples).
In your own program, you should read the audio stream from there.
The stream is additionally encoded with G.711, so you need to decode it before sending it to the audio device.
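For reference, a minimal C sketch of G.711 mu-law decoding (assuming the modem uses the mu-law variant, as the chan_dongle answer below also suggests; A-law would need the companion routine):

#include <stddef.h>
#include <stdint.h>

/* Decode one G.711 mu-law byte to a 16-bit linear PCM sample
 * (classic CCITT reference algorithm). */
static int16_t ulaw_to_linear (uint8_t u)
{
  int t;
  u = ~u;
  t = ((u & 0x0F) << 3) + 0x84;   /* mantissa plus bias */
  t <<= (u & 0x70) >> 4;          /* apply the segment (exponent) */
  return (u & 0x80) ? (0x84 - t) : (t - 0x84);
}

/* Decode a buffer read from the voice port (8000 bytes per second). */
static void decode_ulaw_buffer (const uint8_t *in, int16_t *out, size_t n)
{
  size_t i;
  for (i = 0; i < n; i++)
    out[i] = ulaw_to_linear (in[i]);
}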
It is fairly common to shut off the speaker after connection. Try sending ATM2; that should keep the speaker always on.
From the basic Hayes command set:
M2
Speaker always on (data sounds are heard after CONNECT)
I'm trying to use Asterisk's chan_dongle module on a ZTE MF180 datacard model with activated voice abilities.
Originally chan_dongle uses raw PCM format for voice data.
But I discovered that ZTE uses the ulaw format for sending and receiving voice data.
You can get voice data and save a file in this format to study by using the standard Asterisk Record(filename:ulaw) command in the dialplan.
My voice data, dumped from the ZTE modem, is in the same format.
I checked it: the ZTE dumped data was successfully played by Asterisk's Playback(dumped) command.