Not able to receive EOS when streaming a .ts file over UDP

I am streaming a .ts file that contains both video and audio over UDP, using the pipeline below:
GST_DEBUG=6 gst-launch-1.0 filesrc location=vafpd.ts ! tsdemux program-number=10 name=demux \
  demux. ! queue ! h264parse ! muxer.sink_300 \
  mpegtsmux name=muxer prog-map=program_map,sink_300=10,sink_301=10 ! rtpmp2tpay \
  ! udpsink host=192.168.1.139 port=8765 sync=true async=true qos=true \
  demux. ! queue ! faad ! faac ! aacparse ! muxer.sink_301
It works fine and receives EOS whenever the audio and video durations are the same.
But when I stream a .ts file whose audio duration is longer than its video duration, the pipeline never ends and EOS is never received.
Whenever the video ends, the INFO messages below stop printing:
0:00:03.392743787 3358 0xa90050 INFO h264parse gsth264parse.c:1335:gst_h264_parse_update_src_caps:<h264parse0> PAR 1/1
0:00:04.059453486 3358 0xa90050 INFO baseparse gstbaseparse.c:3644:gst_base_parse_set_latency:<h264parse0> min/max latency 0:00:00.033333333, 0:00:00.033333333
These are the log messages after the end of the video duration:
0:05:19.714110763 3323 0x10ef990 LOG baseparse gstbaseparse.c:2919:gst_base_parse_chain:<h264parse0> chain leaving
0:05:19.714128717 3323 0x10ef990 LOG GST_SCHEDULING gstpad.c:3834:gst_pad_chain_data_unchecked:<h264parse0:sink> called chainfunction &gst_base_parse_chain with buffer 0x7f92a8013c00, returned ok
0:05:19.714152324 3323 0x10ef990 DEBUG queue_dataflow gstqueue.c:1277:gst_queue_loop:<queue0> queue is empty
0:05:19.714172126 3323 0x10ef990 LOG queue_dataflow gstqueue.c:1286:gst_queue_loop:<queue0> (queue0:src) wait for ADD: 0 of 0-200 buffers, 0 of 0-10485760 bytes, 0 of 0-1000000000 ns, 0 items
With GST_DEBUG=3:
0:00:00.031284464 3365 0x18ba920 WARN basesrc gstbasesrc.c:3483:gst_base_src_start_complete:<filesrc0> pad not activated yet
Pipeline is PREROLLING ...
0:00:00.033063992 3365 0x18bc450 WARN h264parse gsth264parse.c:1025:gst_h264_parse_handle_frame:<h264parse0> broken/invalid nal Type: 9 AU delimiter, Size: 2 will be dropped
0:00:00.047101000 3365 0x18bc4f0 FIXME basesink gstbasesink.c:3064:gst_base_sink_default_event:<udpsink0> stream-start event without group-id. Consider implementing group-id handling in the upstream elements
How can I resolve this issue?
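One way to check whether an EOS message is ever actually posted on the bus, rather than inferring it from element logs, is gst-launch's -m option, which prints all bus messages; the pipeline itself is unchanged:
gst-launch-1.0 -m filesrc location=vafpd.ts ! tsdemux program-number=10 name=demux \
  demux. ! queue ! h264parse ! muxer.sink_300 \
  mpegtsmux name=muxer prog-map=program_map,sink_300=10,sink_301=10 ! rtpmp2tpay \
  ! udpsink host=192.168.1.139 port=8765 sync=true async=true qos=true \
  demux. ! queue ! faad ! faac ! aacparse ! muxer.sink_301
If the video branch finishes but no eos message ever appears in the -m output, that confirms EOS is being held up inside the pipeline rather than being lost on the receiving side.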

Related

Fix audio/video sync on AWS Kinesis stream (GStreamer on Raspberry Pi)

I wonder if anyone can help with this conundrum. On an RPi 4, running the AWS Labs WebRTC SDK's sample app (https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/blob/master/samples/kvsWebRTCClientMasterGstreamerSample.c), I have edited the GStreamer pipeline to send three video streams from USB webcams/HDMI capture and an audio stream from one of the cameras' mics. It's working really nicely... except:
When testing only the video streams (via https://matwerber1.github.io/aws-kinesisvideo-webrtc-react/), latency is very low, but once I add the audio, it starts off in sync and then the video gradually becomes delayed by about 2 seconds. One alternative pipeline setup with a single camera had the opposite effect: the audio gradually slipped out of sync to about 2 seconds of latency.
This is my pipeline as added to the sample app:
"v4l2src do-timestamp=TRUE device=/dev/video0 ! "
"video/x-raw,width=720,height=480 ! "
"videomixer name=mix sink_1::ypos=10 sink_1::xpos=10 sink_2::ypos=10 sink_2::xpos=180 ! "
"queue ! videoconvert ! "
"x264enc bframes=0 speed-preset=veryfast bitrate=1024 byte-stream=TRUE tune=zerolatency ! "
"video/x-h264,stream-format=byte-stream,alignment=au,profile=high,framerate=30/1 ! "
"appsink sync=TRUE emit-signals=TRUE name=appsink-video "
"v4l2src device=/dev/video2 ! "
"queue ! videoconvert ! video/x-raw,width=160,height=120 ! mix.sink_1 "
"v4l2src device=/dev/video4 ! "
"queue ! videoconvert ! video/x-raw,width=160,height=120 ! mix.sink_2 "
"alsasrc device=hw:2,0 !"
"queue ! audioconvert ! audioresample ! opusenc ! "
"audio/x-opus,rate=48000,channels=1 ! appsink sync=TRUE emit-signals=TRUE name=appsink-audio"
I have tried adjusting nearly all parameters with no improvement. Do I have the queue elements in the right places? Do I need to employ some buffering, and if so, where? I have tried adding framerates to the caps, but that stops all streams working completely.
Any recommendations or suggestions appreciated.
Thanks
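On the framerate point, for reference: a framerate belongs inside the raw caps string, as in the one-line sketch below, and it will only negotiate if the device can deliver exactly that rate at that resolution, which may be why adding one stalled the streams (this assumes the first camera actually supports 30 fps):
"video/x-raw,width=720,height=480,framerate=30/1 ! "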

ALSA - setting input volume for line-in on the command line - input not recognized but available for programs

I can't see the input device for the USB board in alsamixer, but it is listed when I do arecord -l (also, why does it list the second sound card by default?)
arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: CODEC [USB Audio CODEC], device 0: USB Audio [USB Audio]
Subdevices: 1/1
Subdevice #0: subdevice #0
I can see a lot of information, but I can't figure out from the man page how to change the level of the line input, either on the command line or in the GUI.
amixer info
Card default 'CODEC'/'Burr-Brown from TI USB Audio CODEC at usb-3f980000.usb-1.3, full speed'
Mixer name : 'USB Mixer'
Components : 'USB08bb:2902'
Controls : 4
Simple ctrls : 1
amixer contents
numid=3,iface=MIXER,name='PCM Playback Switch'
; type=BOOLEAN,access=rw------,values=1
: values=on
numid=4,iface=MIXER,name='PCM Playback Volume'
; type=INTEGER,access=rw---R--,values=2,min=0,max=128,step=0
: values=128,128
| dBminmax-min=-128.00dB,max=0.00dB
numid=2,iface=PCM,name='Capture Channel Map'
; type=INTEGER,access=r----R--,values=2,min=0,max=36,step=0
: values=0,0
| container
| chmap-fixed=FL,FR
| chmap-fixed=MONO
numid=1,iface=PCM,name='Playback Channel Map'
; type=INTEGER,access=r----R--,values=2,min=0,max=36,step=0
: values=0,0
| container
| chmap-fixed=FL,FR
| chmap-fixed=MONO
I am actually not sure how to achieve this, since everyone seems to say something different.
Here I try to set the volume in a clumsy way, and it does not work:
amixer -c 1 sset numid=2 0
amixer: Unable to find simple control 'numid=2',0
amixer -c 1 cset numid=2 0
amixer: Control hw:1 element write error: Operation not permitted
sudo amixer -c 1 cset numid=2 0
amixer: Control hw:1 element write error: Operation not permitted
amixer does not list any record device, although I can clearly record audio with Audacity or Pure Data.
amixer -c 1
Simple mixer control 'PCM',0
Capabilities: pvolume pswitch pswitch-joined
Playback channels: Front Left - Front Right
Limits: Playback 0 - 128
Mono:
Front Left: Playback 128 [100%] [0.00dB] [on]
Front Right: Playback 128 [100%] [0.00dB] [on]
amixer -c 0
Simple mixer control 'PCM',0
Capabilities: pvolume pvolume-joined pswitch pswitch-joined
Playback channels: Mono
Limits: Playback -10239 - 400
Mono: Playback 0 [96%] [0.00dB] [on]
Please help me out; this makes no sense at all to me.
Thanks, guys.
amixer and alsamixer show only those mixer controls that the hardware actually has.
As figure 31 in the PCM2902 datasheet shows, this device indeed has no mechanism to change the capture volume.
It might be possible to add a softvol plugin, but it would be easier to just use PulseAudio.
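If PulseAudio is not an option, a softvol entry in ~/.asoundrc is a possible sketch; the PCM name usbmic and the control name "Mic Gain" here are made up, and card 1 matches the arecord -l output above:
# ~/.asoundrc - software capture gain in front of the USB codec
pcm.usbmic {
    type softvol
    slave.pcm "hw:1,0"        # card 1, device 0, as listed by arecord -l
    control {
        name "Mic Gain"       # hypothetical name; the control is created on first use
        card 1
    }
    max_dB 20.0
}
After recording through it once (e.g. arecord -D usbmic -f cd test.wav), the "Mic Gain" control should show up in alsamixer for card 1.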

What exactly is the use of the GStreamer filter in Kurento Media Server?

According to Kurento documentation: http://doc-kurento.readthedocs.io/en/stable/mastering/kurento_API.html
GstreamerFilter is a generic filter interface that allows using a GStreamer filter in Kurento Media Pipelines.
I was trying to find GStreamer filters on Google; all I found was GStreamer plugins (https://gstreamer.freedesktop.org/documentation/plugin-development/advanced/).
Does this mean I can use the Kurento GStreamer filter to add plugins such as rtph264depay and rtmpsink?
e.g.
WebRTC endpoint > RTP Endpoint > (rtph264depay) Gstreamer filter (rtmpsink) > RTMP server.
All without installing Gstreamer separately?
GStreamerFilter allows you to configure a filter using a native GStreamer element (the same way as when you are using gst-launch-1.0). For example, the following Kurento filter flips your media horizontally within KMS:
GStreamerFilter filter = new GStreamerFilter.Builder(pipeline, "videoflip method=horizontal-flip").build();
That said, and regarding your question, to the best of my knowledge you should be able to use GStreamerFilter with rtph264depay and rtmpsink.
Boni Garcia's code is right.
But if you replace "videoflip method=horizontal-flip" with "rtmpsink location=rtmp://deque.me/live/test01", you will get an error message: "Given command is not valid, pad templates does not match".
You can dig into the kms-filters source code at https://github.com/Kurento/kms-filters; in kms-filters/src/server/implementation/objects/GStreamerFilterImpl.cpp there are these lines:
throw KurentoException (MARSHALL_ERROR,
    "Given command is not valid, pad templates does not match");
I'm afraid you can't use GStreamerFilter to send data to an RTMP server; maybe you should modify the source code a bit.
Kurento
Just looking at the source: GStreamerFilter is limited to simple GStreamer plugins. It rejects bins, and I don't see how you would specify/isolate multiple pads, so it probably won't do it.
(EDIT: Maybe I'm wrong here - I'm still learning. I see the mixer example isolating media types and that makes me think it may be possible)
gstreamer
On the other hand, installing GStreamer shouldn't really be that much overhead; you can then link the output RTP connection to a gst-launch pipeline that outputs RTMP. It just sucks that you can't manage the full pipeline using Kurento.
(I don't know what that pipeline would look like - investigating it myself. It's something like this:
gst-launch-1.5 -v \
udpsrc port=9999 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! mux. \
multifilesrc location=sample.aac loop=1 ! aacparse ! mux. \
mpegtsmux name=mux mux. ! rtpmp2tpay ! queue ! udpsink host=10.20.20.20 port=5000
But I'm faking audio in this and haven't gotten the full stream working)
back to kurento
Further exploration suggested maybe the Composite MediaElement would work (tl;dr: no):
Composite composite = new Composite.Builder(pipeline).build();
HubPort in_audio = new HubPort.Builder(composite).build();
HubPort in_video = new HubPort.Builder(composite).build();
HubPort out_composite = new HubPort.Builder(composite).build();
GStreamerFilter filter = new GStreamerFilter.Builder(pipeline, "rtmpsink location=rtmp://127.0.0.1/live/live_stream_720p").build();
webRtcEndpoint.connect(in_audio, MediaType.AUDIO);
webRtcEndpoint.connect(in_video, MediaType.VIDEO);
out_composite.connect(filter);
results in (kurento logs):
...15,011560 21495 [0x4f01700] debug KurentoWebSocketTransport WebSocketTransport.cpp:422 processMessage() Message: >{"id":28,"method":"create","params":{"type":"GStreamerFilter","constructorParams":{"mediaPipeline":"5751ec53_kurento.MediaPipeline","command":"rtmpsink location=rtmp://127.0.0.1/live/live_stream_720p"},"properties":{},"sessionId":"d8abb1d8"},"jsonrpc":"2.0"}<
...15,011862 21495 [0x4f01700] debug KurentoGStreamerFilterImpl GStreamerFilterImpl.cpp:47 GStreamerFilterImpl() Command rtmpsink location=rtmp://127.0.0.1/live/live_stream_720p
...15,015698 21495 [0x4f01700] error filterelement kmsfilterelement.c:148 kms_filter_element_set_filter() <kmsfilterelement0> Invalid factory "rtmpsink", unexpected pad templates
...15,016841 21495 [0x4f01700] debug KurentoWebSocketTransport WebSocketTransport.cpp:424 processMessage() Response: >{"error":{"code":40001,"data":{"type":"MARSHALL_ERROR"},"message":"Given command is not valid, pad templates does not match"},"id":28,"jsonrpc":"2.0"}
I.e. failure.

Using udpsink to stream H.264 with GStreamer (C++)

I'm trying to stream H.264 video over the network using GStreamer (on Windows) over UDP.
First, if I use a pipeline like this, everything appears to be OK and I see the test pattern:
videotestsrc, ffmpegcolorspace, x264enc, rtph264pay, rtph264depay, ffdec_h264, ffmpegcolorspace, autovideosink
Now I have decided to divide this pipeline into client and server parts, transmitting the stream over UDP using udpsink and udpsrc.
Server: videotestsrc, ffmpegcolorspace, x264enc, rtph264pay, udpsink
Client: udpsrc, rtph264depay, ffdec_h264, ffmpegcolorspace, autovideosink
On the server I use something like this:
source = gst_element_factory_make ("videotestsrc", "source");
ffmpegcolortoYUV = gst_element_factory_make ("ffmpegcolorspace", "ffmpegcolortoYUV");
encoder = gst_element_factory_make ("x264enc", "encoder");
rtppay = gst_element_factory_make ("rtph264pay", "rtppay");
udpsink = gst_element_factory_make ("udpsink", "sink");
g_object_set (source, "pattern", 0, NULL);
g_object_set( udpsink, "host", "127.0.0.1", NULL );
g_object_set( udpsink, "port", 5555, NULL );
Then I add the elements to the pipeline and run it; there are no errors anywhere.
Now if I check UDP port 5555, it's not listening!
The client part also runs, but if no UDP port is listening on the server side it won't work.
EDIT: In fact I was very close to the solution... If I start the client, it works, but with some problems in the visualization. I think the problem is the x264enc configuration. Does anybody know how to change x264enc parameters like speed-preset or tune?
I tried to instantiate GstX264EncPreset or GstX264EncTune, but I don't have the declarations of these structures.
Does anybody know another way to set up x264enc, like parsing a string or something like that?
I know this is an older post, but you can set the GstX264EncPreset value using a simple integer that corresponds to the preset value.
g_object_set(encoder, "speed-preset", 2, NULL); works for me. The values can be found using gst-inspect-1.0 x264enc and are as follows:
speed-preset : Preset name for speed/quality tradeoff options (can affect decode compatibility - impose restrictions separately for your target decoder)
flags: readable, writable
Enum "GstX264EncPreset" Default: 6, "medium"
(0): None - No preset
(1): ultrafast - ultrafast
(2): superfast - superfast
(3): veryfast - veryfast
(4): faster - faster
(5): fast - fast
(6): medium - medium
(7): slow - slow
(8): slower - slower
(9): veryslow - veryslow
(10): placebo - placebo
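If you would rather configure it from a string, as the EDIT asks, gst_util_set_object_arg() deserializes enum nicks (and flag names) the same way gst-launch does; a minimal sketch, reusing the encoder variable from the question:
/* No GstX264EncPreset/GstX264EncTune declarations needed:
 * the value is parsed from its string nick. */
gst_util_set_object_arg (G_OBJECT (encoder), "speed-preset", "veryfast");
gst_util_set_object_arg (G_OBJECT (encoder), "tune", "zerolatency");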
Try setting the caps on the udpsrc element to "application/x-rtp".
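In code, that could look like the following on the client side (a sketch matching the server's port; more specific RTP caps, e.g. with clock-rate and encoding-name, also work):
GstElement *udpsrc = gst_element_factory_make ("udpsrc", "source");
GstCaps *caps = gst_caps_from_string ("application/x-rtp");
g_object_set (udpsrc, "port", 5555, "caps", caps, NULL);
gst_caps_unref (caps);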

How to list the available encodings of a webcam?

Is there a way to discover all the available encodings of a certain webcam (e.g. x-raw-rgb, x-raw-yuv)?
Moreover, I would also like to discover the available resolutions.
Thanks!
Yes: set the v4l2src element to READY and check the caps on its src pad. The element will narrow the list of caps down to the ones actually supported once it has opened and queried an actual device, which happens in the READY state.
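Programmatically, that check might look like this minimal sketch (GStreamer 1.0 API; on 0.10, gst_pad_get_caps() replaces gst_pad_query_caps()):
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *src = gst_element_factory_make ("v4l2src", NULL);
  gst_element_set_state (src, GST_STATE_READY);    /* device is opened and probed here */

  GstPad *pad = gst_element_get_static_pad (src, "src");
  GstCaps *caps = gst_pad_query_caps (pad, NULL);  /* now narrowed to what the device supports */
  gchar *str = gst_caps_to_string (caps);
  g_print ("%s\n", str);

  g_free (str);
  gst_caps_unref (caps);
  gst_object_unref (pad);
  gst_element_set_state (src, GST_STATE_NULL);
  gst_object_unref (src);
  return 0;
}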
What I do is the following (command line):
GST_DEBUG=v4l2src:3 gst-launch v4l2src ! decodebin2 ! xvimagesink
This assumes an onboard video source; otherwise change "v4l2src" accordingly. It will show a LOT of info; after "probed caps:" there will be a long line listing the possible formats the video source supports.
Here is a sample copy/paste from my machine:
probed caps: video/x-raw-yuv, format=(fourcc)YUY2, width=(int)1280,
height=(int)720, interlaced=(boolean)false,
pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1 };
video/x-raw-yuv, format=(fourcc)YUY2, width=(int)640, height=(int)480,
interlaced=(boolean)false, pixel-aspect-ratio=(fraction)1/1,
framerate=(fraction){ 30/1 };
So the info you're looking for is:
! video/x-raw-yuv, framerate=30/1, width=640, height=480, interlaced=false !
Anything NOT from the probed list will result in the error:
could not negotiate format