I'm trying to stream H.264 video over UDP using GStreamer (on Windows).
First, if I use a pipeline like this, everything appears to be OK and I see the test pattern:
videotestsrc, ffmpegcolorspace, x264enc, rtph264pay, rtph264depay, ffdec_h264, ffmpegcolorspace, autovideosink
Now I decided to split this pipeline into client and server parts, transmitting the stream over UDP using udpsink and udpsrc.
Server: videotestsrc, ffmpegcolorspace, x264enc, rtph264pay, udpsink
Client: udpsrc, rtph264depay, ffdec_h264, ffmpegcolorspace, autovideosink
On the server I use something like this:
source = gst_element_factory_make ("videotestsrc", "source");
ffmpegcolortoYUV = gst_element_factory_make ("ffmpegcolorspace", "ffmpegcolortoYUV");
encoder = gst_element_factory_make ("x264enc", "encoder");
rtppay = gst_element_factory_make ("rtph264pay", "rtppay");
udpsink = gst_element_factory_make ("udpsink", "sink");
g_object_set (source, "pattern", 0, NULL);
g_object_set( udpsink, "host", "127.0.0.1", NULL );
g_object_set( udpsink, "port", 5555, NULL );
Then I add the elements to the pipeline and run it; there are no errors anywhere.
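The add/link step is the usual one (a minimal sketch; pipeline is the GstPipeline created earlier):
/* Add all elements to the pipeline, link them in order, and start it. */
gst_bin_add_many (GST_BIN (pipeline), source, ffmpegcolortoYUV,
                  encoder, rtppay, udpsink, NULL);
gst_element_link_many (source, ffmpegcolortoYUV, encoder,
                       rtppay, udpsink, NULL);
gst_element_set_state (pipeline, GST_STATE_PLAYING);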
Now if I check UDP port 5555, nothing is listening!
The client part also runs, but if there is no UDP port open on the server side it won't work.
EDIT: In fact I was very close to the solution. If I start the client it works, but with some problems in the visualization. I think the problem is the x264enc configuration. Does anybody know how to change x264enc parameters like speed-preset or tune?
I tried to instantiate GstX264EncPreset or GstX264EncTune, but I don't have the declarations of these structures.
Does anybody know another way to set up x264enc, like parsing a string or something similar?
I know this is an older post, but you can set the GstX264EncPreset value using a simple integer that corresponds to the preset value.
g_object_set(encoder, "speed-preset", 2, NULL); works for me. The values can be found using gst-inspect-1.0 x264enc and are as follows:
speed-preset : Preset name for speed/quality tradeoff options (can affect decode compatibility - impose restrictions separately for your target decoder)
flags: readable, writable
Enum "GstX264EncPreset" Default: 6, "medium"
(0): None - No preset
(1): ultrafast - ultrafast
(2): superfast - superfast
(3): veryfast - veryfast
(4): faster - faster
(5): fast - fast
(6): medium - medium
(7): slow - slow
(8): slower - slower
(9): veryslow - veryslow
(10): placebo - placebo
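The tune property works the same way; it is a flags type (GstX264EncTune) whose values gst-inspect-1.0 also lists, e.g. zerolatency is 0x00000004. A minimal sketch (encoder being your x264enc element):
/* Both properties accept the plain integer values reported by
 * gst-inspect-1.0 x264enc: speed-preset is an enum, tune is a flags field. */
g_object_set (encoder,
              "speed-preset", 2,   /* superfast */
              "tune", 4,           /* zerolatency */
              NULL);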
Try setting the caps on the udpsrc element to "application/x-rtp".
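For example (a minimal sketch; udpsrc is the client's source element, and the media/clock-rate/encoding-name fields shown are the usual ones for an H264 RTP stream, so that rtph264depay can negotiate):
/* RTP caps cannot be detected from the raw UDP packets,
 * so they have to be set on udpsrc explicitly. */
GstCaps *caps = gst_caps_new_simple ("application/x-rtp",
    "media", G_TYPE_STRING, "video",
    "clock-rate", G_TYPE_INT, 90000,
    "encoding-name", G_TYPE_STRING, "H264",
    NULL);
g_object_set (udpsrc, "caps", caps, NULL);
gst_caps_unref (caps);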
On Linux kernel version 3.2.48:
In a UDP server running in kernel mode, skb_tail_pointer(skb) is not what I expect; it points at the tail of the UDP header, losing the payload size. udphdr->len is right.
It is strange.
It is possible that the tail and data pointers point to the same location; skb_tail_pointer() returns the starting address of the tail room.
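If you need the payload size, computing it from the UDP header is the reliable approach. A minimal sketch (assuming the packet's transport header has been set, as it is for packets delivered to a UDP socket):
#include <linux/skbuff.h>
#include <linux/udp.h>

/* udphdr->len covers the UDP header plus payload, in network byte
 * order, so the payload size is that value minus the header size. */
static unsigned int udp_payload_len(const struct sk_buff *skb)
{
        const struct udphdr *uh = udp_hdr(skb);

        return ntohs(uh->len) - sizeof(struct udphdr);
}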
For my task I need to load a bulk of data into Redis as fast as possible. It looks like this article describes my case: https://redis.io/topics/mass-insert
The article starts by giving an example of using multiple inline SET commands with redis-cli. Then it proceeds to generating the Redis protocol directly, again feeding it to redis-cli, without explaining the reasons or benefits of using the Redis protocol.
Using the Redis protocol is a bit harder and it generates a bit more traffic. I wonder: what are the reasons to use the Redis protocol rather than simple one-line commands? Perhaps, despite the data being larger, it is easier (and faster) for Redis to parse?
Good point.
Only a small percentage of clients support non-blocking I/O, and not
all the clients are able to parse the replies in an efficient way in
order to maximize throughput. For all this reasons the preferred way
to mass import data into Redis is to generate a text file containing
the Redis protocol, in raw format, in order to call the commands
needed to insert the required data.
What I understood is that you emulate a client when you use the Redis protocol directly, which benefits from the highlighted points.
Based on the docs you provided, I tried these scripts:
test.rb
def gen_redis_proto(*cmd)
proto = ""
proto << "*"+cmd.length.to_s+"\r\n"
cmd.each{|arg|
proto << "$"+arg.to_s.bytesize.to_s+"\r\n"
proto << arg.to_s+"\r\n"
}
proto
end
(0...100000).each{|n|
STDOUT.write(gen_redis_proto("SET","Key#{n}","Value#{n}"))
}
test_no_protocol.rb
(0...100000).each{|n|
STDOUT.write("SET Key#{n} Value#{n}\r\n")
}
ruby test.rb > 100k_prot.txt
ruby test_no_protocol.rb > 100k_no_prot.txt
time cat 100k_prot.txt | redis-cli --pipe
time cat 100k_no_prot.txt | redis-cli --pipe
I've got these results:
teixeira: ~/stackoverflow $ time cat 100k_prot.txt | redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 100000
real 0m0.168s
user 0m0.025s
sys 0m0.015s
(5 file(s), 6.6Mb)
teixeira: ~/stackoverflow $ time cat 100k_no_prot.txt | redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 100000
real 0m0.433s
user 0m0.026s
sys 0m0.012s
So despite the raw-protocol file being larger, it ran about 2.5x faster here (0.168s vs 0.433s).
According to the Kurento documentation (http://doc-kurento.readthedocs.io/en/stable/mastering/kurento_API.html):
GstreamerFilter is a generic filter interface that allows using GStreamer filters in Kurento Media Pipelines.
I was trying to find GStreamer filters on Google, but all I found was GStreamer plugins (https://gstreamer.freedesktop.org/documentation/plugin-development/advanced/).
Does this mean I can use the Kurento GStreamer filter to add plugins such as rtph264depay and rtmpsink?
e.g.
WebRTC endpoint > RTP Endpoint > (rtph264depay) Gstreamer filter (rtmpsink) > RTMP server.
All without installing Gstreamer separately?
GstreamerFilter allows you to configure a filter using a native GStreamer element (the same way as when you are using gst-launch-1.0). For example, the following Kurento filter flips your media horizontally within KMS:
GStreamerFilter filter = new GStreamerFilter.Builder(pipeline, "videoflip method=horizontal-flip").build();
That said, and regarding your question: to the best of my knowledge, yes, you can use GstreamerFilter to use rtph264depay and rtmpsink.
Boni Garcia's code is right.
But if you replace "videoflip method=horizontal-flip" with "rtmpsink location=rtmp://deque.me/live/test01", you will get an error message: "Given command is not valid, pad templates does not match".
You can dig deeper into the kms-filters source code at https://github.com/Kurento/kms-filters; in kms-filters/src/server/implementation/objects/GStreamerFilterImpl.cpp there are these lines:
throw KurentoException (MARSHALL_ERROR,
    "Given command is not valid, pad templates does not match");
I'm afraid you can't use GstreamerFilter to send data to an RTMP server; maybe you should modify the source code a bit.
Kurento
Just looking at the source - the GStreamerFilter is limited to simple GStreamer plugins. They reject bins and I don't see how you would specify/isolate multiple pads so it probably won't do it.
(EDIT: Maybe I'm wrong here - I'm still learning. I see the mixer example isolating media types and that makes me think it may be possible)
gstreamer
On the other hand installing gstreamer shouldn't really be that much overhead - then link the output RTP connection to a gst-launch pipeline that can output RTMP. It just sucks you can't manage the full pipeline using kurento.
(I don't know what that pipeline would look like - investigating it myself. It's something like this:
gst-launch-1.5 -v \
udpsrc port=9999 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! mux. \
multifilesrc location=sample.aac loop=1 ! aacparse ! mux. \
mpegtsmux name=mux mux. ! rtpmp2tpay ! queue ! udpsink host=10.20.20.20 port=5000
But I'm faking audio in this and haven't gotten the full stream working)
back to kurento
Further exploration suggested maybe the Composite MediaElement would work (tl;dr: no):
Composite composite = new Composite.Builder(pipeline).build();
HubPort in_audio = new HubPort.Builder(composite).build();
HubPort in_video = new HubPort.Builder(composite).build();
HubPort out_composite = new HubPort.Builder(composite).build();
GStreamerFilter filter = new GStreamerFilter.Builder(pipeline, "rtmpsink location=rtmp://127.0.0.1/live/live_stream_720p").build();
webRtcEndpoint.connect(in_audio, MediaType.AUDIO);
webRtcEndpoint.connect(in_video, MediaType.VIDEO);
out_composite.connect(filter);
results in (kurento logs):
...15,011560 21495 [0x4f01700] debug KurentoWebSocketTransport WebSocketTransport.cpp:422 processMessage() Message: >{"id":28,"method":"create","params":{"type":"GStreamerFilter","constructorParams":{"mediaPipeline":"5751ec53_kurento.MediaPipeline","command":"rtmpsink location=rtmp://127.0.0.1/live/live_stream_720p"},"properties":{},"sessionId":"d8abb1d8"},"jsonrpc":"2.0"}<
...15,011862 21495 [0x4f01700] debug KurentoGStreamerFilterImpl GStreamerFilterImpl.cpp:47 GStreamerFilterImpl() Command rtmpsink location=rtmp://127.0.0.1/live/live_stream_720p
...15,015698 21495 [0x4f01700] error filterelement kmsfilterelement.c:148 kms_filter_element_set_filter() <kmsfilterelement0> Invalid factory "rtmpsink", unexpected pad templates
...15,016841 21495 [0x4f01700] debug KurentoWebSocketTransport WebSocketTransport.cpp:424 processMessage() Response: >{"error":{"code":40001,"data":{"type":"MARSHALL_ERROR"},"message":"Given command is not valid, pad templates does not match"},"id":28,"jsonrpc":"2.0"}
I.e. failure.
I have installed ARToolKit on Ubuntu 12.10 on a 64-bit Asus. The install gave no errors, so I think I'm OK. But when I want to try one of the examples, it can't find the camera. If I don't fill anything in at char *vconf = ""; I get
No video config string supplied, using defaults.
ioctl failed
The most commonly found solution suggests
char *vconf = "v4l2src device=/dev/video0 use-fixed-fps=false ! ffmpegcolorspace ! capsfilter caps=video/x-raw-rgb,width=640,height=480 ! identity name=artoolkit ! fakesink";
But this doesn't work for me. I get
r#r-K55VD:~/Downloads/Artoolkit-on-Ubuntu-12.04-master/bin$ ./simpleTest
Using supplied video config string [v4l2src device=/dev/video0 use-fixed-fps=false ! ffmpegcolorspace ! capsfilter caps=video/x-raw-rgb,width=640,height=480 ! identity name=artoolkit ! fakesink].
ARVideo may be configured using one or more of the following options,
separated by a space:
DEVICE CONTROLS:
-dev=filepath
specifies device file.
-channel=N
specifies source channel.
-noadjust
prevent adjusting the width/height/channel if not suitable.
-width=N
specifies expected width of image.
-height=N
specifies expected height of image.
-palette=[RGB|YUV420P]
specifies the camera palette (WARNING:all are not supported on each camera !!).
IMAGE CONTROLS (WARNING: every options are not supported by all camera !!):
-brightness=N
specifies brightness. (0.0 <-> 1.0)
-contrast=N
specifies contrast. (0.0 <-> 1.0)
-saturation=N
specifies saturation (color). (0.0 <-> 1.0) (for color camera only)
-hue=N
specifies hue. (0.0 <-> 1.0) (for color camera only)
-whiteness=N
specifies whiteness. (0.0 <-> 1.0) (REMARK: gamma for some drivers, otherwise for greyscale camera only)
-color=N
specifies saturation (color). (0.0 <-> 1.0) (REMARK: obsolete !! use saturation control)
OPTION CONTROLS:
-mode=[PAL|NTSC|SECAM]
specifies TV signal mode (for tv/capture card).
What is a methodical way of finding out exactly what to put in char *vconf = ""? I feel I have tried a lot of variations at random, but nothing works. I know it needs a path like /dev/video0, but everything else seems up in the air to me.
char *vconf = "v4l2src device=/dev/video0 use-fixed-fps=false !
ffmpegcolorspace ! capsfilter
caps=video/x-raw-rgb,width=640,height=480 ! identity name=artoolkit !
fakesink";
The configuration you tried above is for the GStreamer driver.
Since you are using VideoLinuxV4L, use this instead:
char *vconf = "-dev=/dev/video0 ";
For more you can refer to "{ARtoolkit Folder}/doc/video/index.html"
Is there a way to discover all the available encodings of a certain webcam (e.g. x-raw-rgb, x-raw-yuv)?
Moreover, I would also like to discover the available resolutions.
Thanks!
Yes, set the v4l2src element to READY and check the caps on the src pad. The element will narrow the list of caps down to the ones actually supported once it has opened and queried an actual device; that happens in the READY state.
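A minimal sketch of that approach (GStreamer 1.0 API assumed; on 0.10 use gst_pad_get_caps() instead of gst_pad_query_caps()):
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* In READY state v4l2src has opened and queried the device, so the
   * src pad caps are narrowed down to what the hardware supports. */
  GstElement *src = gst_element_factory_make ("v4l2src", NULL);
  gst_element_set_state (src, GST_STATE_READY);

  GstPad *pad = gst_element_get_static_pad (src, "src");
  GstCaps *caps = gst_pad_query_caps (pad, NULL);

  gchar *desc = gst_caps_to_string (caps);
  g_print ("%s\n", desc);

  g_free (desc);
  gst_caps_unref (caps);
  gst_object_unref (pad);
  gst_element_set_state (src, GST_STATE_NULL);
  gst_object_unref (src);
  return 0;
}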
What I do is the following (command line):
GST_DEBUG=v4l2src:3 gst-launch v4l2src ! decodebin2 ! xvimagesink
If the video source is the onboard one, keep "v4l2src"; otherwise change it. This will show a LOT of info; after "probed caps:" there will be a long line of the possible formats the video source supports.
Here is a sample copy/paste from my machine:
probed caps: video/x-raw-yuv, format=(fourcc)YUY2, width=(int)1280,
height=(int)720, interlaced=(boolean)false,
pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1 };
video/x-raw-yuv, format=(fourcc)YUY2, width=(int)640, height=(int)480,
interlaced=(boolean)false, pixel-aspect-ratio=(fraction)1/1,
framerate=(fraction){ 30/1 };
So the info you're looking for is:
! video/x-raw-yuv, framerate=30/1, width=640, height=480, interlaced=false !
Using anything NOT from the probed list will result in the error:
could not negotiate format