I have searched all over and have not found anyone using GStreamer's videomixer element with the Raspberry Pi's raspivid.
I am trying to duplicate the raspivid output, merge the two copies side by side, and then eventually send the stream over TCP. For right now, though, I am just looking for some help getting the video mixing to work.
The resulting video should be 1280x568 for my specific application, and I do not need any angle between the videos to create a "3d effect", because that is not required for the application I'm making.
I am using GStreamer 1.2, so the command is gst-launch-1.0, and I cannot use ffmpeg because I believe it has been deprecated, so I assume I would just use videoconvert to achieve the same result.
I'm not sure if I should be using h264parse instead of decodebin. So here is what I've got so far:
gst-launch-1.0 fdsrc | raspivid -t 0 -h 568 -w 640 -fps 25 -hf -b 2000000 -o - \
  ! decodebin ! queue ! videoconvert ! videobox border-alpha=0 right=-640 \
  ! videomixer name=mix ! videoconvert ! autovideosink \
  fdsrc | raspivid -t 0 -h 568 -w 640 -fps 25 -hf -b 2000000 -o - \
  ! decodebin ! queue ! videoconvert ! videobox border-alpha=0 left=-640 \
  ! mix.
I'm trying to model this on these two sources (the raspivid command in the first link works for me):
http://www.raspberry-projects.com/pi/pi-hardware/raspberry-pi-camera/streaming-video-using-gstreamer
http://www.technomancy.org/gstreamer/playing-two-videos-side-by-side/
I know I am probably way off, but I am having a lot of difficulty finding examples of how to do this, especially with raspivid. I would greatly appreciate any help. Thank you.
You can find below an example, with some explanation, of how to use videomixer.
Example of using videomixer to combine 3 videos
Note: UNIX paths are used in this example
gst-launch-1.0 -e \
videomixer name=mix background=0 \
sink_1::xpos=0 sink_1::ypos=0 \
sink_2::xpos=200 sink_2::ypos=0 \
sink_3::xpos=100 sink_3::ypos=100 \
! autovideosink \
uridecodebin uri='file:///data/big_buck_bunny_trailer-360p.mp4' \
! videoscale \
! video/x-raw,width=200,height=100 \
! mix.sink_1 \
uridecodebin uri='file:///data/sintel_trailer-480p.webm' \
! videoscale \
! video/x-raw,width=200,height=100 \
! mix.sink_2 \
uridecodebin uri='file:///data/the_daily_dweebs-720p.mp4' \
! videoscale \
! video/x-raw,width=200,height=100 \
! mix.sink_3
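To adapt this to your raspivid case: since both halves come from the same camera, you can decode the stream once and duplicate it with tee, instead of trying to read stdin twice. Here is a rough, untested sketch along those lines; the videomixer xpos pad offsets replace the videobox trick, and the sink_0/sink_1 pad names are requested explicitly so the offsets apply to the right inputs:
raspivid -t 0 -w 640 -h 568 -fps 25 -hf -b 2000000 -o - | gst-launch-1.0 -e \
  fdsrc ! h264parse ! decodebin ! videoconvert ! tee name=t \
  videomixer name=mix sink_0::xpos=0 sink_1::xpos=640 \
  ! videoconvert ! autovideosink \
  t. ! queue ! mix.sink_0 \
  t. ! queue ! mix.sink_1
With two 640x568 inputs and the second one offset by 640 pixels, the mixer's output canvas works out to the 1280x568 you are after.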
I have a Janus (WebRTC) server, and I am using VP8/OPUS. Janus forwards the RTP packets to GStreamer. I have two questions.
Do I have to run one GStreamer pipeline (with multiple threads), or multiple GStreamer processes? Janus sends GStreamer multiple RTP streams. For example, with two peers in a WebRTC room, Janus sends 4 RTP streams to GStreamer: peer1 video/audio and peer2 video/audio. If I ran just one GStreamer, it would not be possible to ascertain who each stream is from, so to classify them I have to use separate ports with multiple GStreamer processes.
Like this:
Process1:
gst-launch-1.0 \
  rtpbin name=rtpbin \
  udpsrc name=videoRTP port=5000 \
    caps="application/x-rtp, media=(string)video, payload=98, encoding-name=(string)VP8-DRAFT-IETF-01, clock-rate=90000" \
  ! rtpvp8depay ! webmmux ! queue \
  ! filesink location=track1.webm \
  udpsrc port=5002 \
    caps="application/x-rtp, media=audio, payload=111, encoding-name=(string)OPUS, clock-rate=48000" \
  ! rtpopusdepay ! opusparse ! oggmux \
  ! filesink location=audio.ogg
process2:
gst-launch-1.0 \
  rtpbin name=rtpbin \
  udpsrc name=videoRTP port=5003 \
    caps="application/x-rtp, media=(string)video, payload=98, encoding-name=(string)VP8-DRAFT-IETF-01, clock-rate=90000" \
  ! rtpvp8depay ! webmmux ! queue \
  ! filesink location=track1.webm \
  udpsrc port=5005 \
    caps="application/x-rtp, media=audio, payload=111, encoding-name=(string)OPUS, clock-rate=48000" \
  ! rtpopusdepay ! opusparse ! oggmux \
  ! filesink location=audio.ogg
So I am confused: multiple threads, or multiple processes? Please tell me the details!
How do I mux VP8/OPUS into an mp4 container in real time? I have searched for this for a long time, but I can't manage it yet. GStreamer has so many options for each version.
I am waiting for your advice! Thank you.
I've tried as much as I can. What I expect is a way to do this, and mp4 files as the result.
Hi, one solution may be the tee plugin, as found on the help pages:
Description
Split data to multiple pads. Branching the data flow is useful when e.g. capturing a video where the video is shown on the screen and also encoded and written to a file. Another example is playing music and hooking up a visualisation module.
One needs to use separate queue elements (or a multiqueue) in each branch to provide separate threads for each branch. Otherwise a blocked dataflow in one branch would stall the other branches.
Example launch line
gst-launch-1.0 filesrc location=song.ogg ! decodebin ! tee name=t ! queue ! audioconvert ! audioresample ! autoaudiosink t. ! queue ! audioconvert ! goom ! videoconvert ! autovideosink
Play the song.ogg audio file, which must be in the current working directory, and render visualisations using the goom element (this can be done more easily using the playbin element; this is just an example pipeline).
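On the threads-versus-processes question: you should not need separate processes just to tell the streams apart. Each udpsrc is bound to its own port inside a single pipeline, so the port still identifies the peer, and each branch gets its own streaming thread. A rough, untested sketch that handles both peers' video in one gst-launch process, using the same ports and caps as your two commands (the output filenames are made up for illustration):
gst-launch-1.0 -e \
  udpsrc port=5000 caps="application/x-rtp, media=(string)video, payload=98, encoding-name=(string)VP8-DRAFT-IETF-01, clock-rate=90000" \
  ! rtpvp8depay ! queue ! webmmux ! filesink location=peer1_video.webm \
  udpsrc port=5003 caps="application/x-rtp, media=(string)video, payload=98, encoding-name=(string)VP8-DRAFT-IETF-01, clock-rate=90000" \
  ! rtpvp8depay ! queue ! webmmux ! filesink location=peer2_video.webm
As for the mp4 question: MP4 typically carries H.264/AAC rather than VP8/OPUS, so muxing in real time would generally mean transcoding first (for example avdec_vp8 ! x264enc for video and opusdec ! avenc_aac for audio, feeding mp4mux), which is CPU-heavy; that is a suggestion, not something I have tested.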
Hi guys, I'm trying to set up GStreamer between my Pi and a Windows computer. My commands are:
Pi:
~ raspivid -n -w 1280 -h 720 -b 1000000 -fps 15 -t 0 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=10 pt=96 ! udpsink host=[IP] port=9000
PC:
gst-launch-1.0 -v udpsrc port=9000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264" ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=f
I get the error:
sudo: /home/pi: command not found
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstH264Parse:h264parse0: No valid frames found before end of stream
Additonal debug info:
gst_base_parse_sink_event_default (): /GstPipeline:pipeline0/GstH264Parse:h264parse0
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
Any help would be great thanks!!
(He just omitted the sudo from the posted line, but he actually typed it.)
His home directory is /home/pi, and ~ expands to /home/pi.
~ raspivid -n -w 1280 -h 720 -b 1000000 -fps 15 -t 0 -o -
expands to
/home/pi raspivid -n -w 1280 -h 720 -b 1000000 -fps 15 -t 0 -o -
Thus the "command not found", since /home/pi is not an executable. The result of this erroneous command is piped to gst-launch-1.0 and, of course, there are no valid frames!
I would like to capture a video stream (+ audio) in MJPEG from my webcam into an .mts container using this pipeline:
gst-launch-1.0 v4l2src do-timestamp=true device=/dev/video0 \
  ! 'image/jpeg,framerate=30/1,width=1280,height=720' ! videorate \
  ! queue ! mux2. \
  pulsesrc do-timestamp=true device="alsa_input.pci-0000_00_1b.0.analog-stereo" \
  ! 'audio/x-raw,rate=88200,channels=1,depth=24' ! audioconvert \
  ! avenc_aac compliance=experimental ! queue ! mux2. \
  mpegtsmux name="mux2" ! filesink location=/home/sina/Webcam.mts
It seems that my pipeline doesn't recognize mpegtsmux (?).
When I use avimux or even matroskamux it works, but as far as I know, for MPEG-TS I need to use the correct muxer, which is mpegtsmux.
This is the warning:
WARNING: erroneous pipeline: could not link queue0 to mux2
Can you please tell me which part of my pipeline is wrong, or what I should change in order to get a timestamped video stream at the end (the duration of the video must be shown when I play it in kdenlive or VLC)?
Best,
Sina
I think you are missing an encoder before the mux.
Just try this without audio (x264enc added):
gst-launch-1.0 v4l2src device=/dev/video0 ! videorate ! queue ! x264enc ! mpegtsmux name="mux2" mux2. ! filesink location=bla.mts
The warning you are getting says it clearly: it cannot link the queue to the mux because the mux does not support the image/jpeg capabilities. Just check the Capabilities section of the sink pad with the command:
gst-inspect-1.0 mpegtsmux
But it supports, for example, video/x-h264, hence the need for x264enc.
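If you want to keep the MJPEG capture and the audio branch from your original pipeline, one untested variant is to decode the JPEG frames and re-encode them to H.264 in front of the mux (jpegdec, videoconvert, x264enc, and aacparse added; everything else follows your question):
gst-launch-1.0 -e v4l2src do-timestamp=true device=/dev/video0 \
  ! 'image/jpeg,framerate=30/1,width=1280,height=720' \
  ! jpegdec ! videoconvert ! x264enc tune=zerolatency \
  ! queue ! mux2. \
  pulsesrc do-timestamp=true device="alsa_input.pci-0000_00_1b.0.analog-stereo" \
  ! audioconvert ! avenc_aac compliance=experimental ! aacparse \
  ! queue ! mux2. \
  mpegtsmux name=mux2 ! filesink location=/home/sina/Webcam.mts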
I want to stream raw video from a Logitech C920 webcam while both displaying and saving the video to file using GStreamer 1.0.
This works if I stream h264-encoded video from the camera (the camera provides hardware-encoded h264), but it fails if I stream raw video from the camera. However, if I only display, or only save to file, streaming raw video works.
Why does it work with an h264 video stream but not with a raw video stream?
h264 encoded video stream from camera to BOTH display and file (WORKS):
gst-launch-1.0 -v v4l2src device=/dev/video0 \
! video/x-h264,width=640,height=480,framerate=15/1 ! tee name=t \
t. ! queue ! h264parse ! avdec_h264 ! xvimagesink sync=false \
t. ! queue ! h264parse ! matroskamux \
! filesink location='h264_dual.mkv' sync=false
raw video stream from camera to ONLY display (WORKS):
gst-launch-1.0 -v v4l2src device=/dev/video0 \
! video/x-raw,format=YUY2,width=640,height=480,framerate=15/1 \
! xvimagesink sync=false
raw video stream from camera to ONLY file (WORKS):
gst-launch-1.0 -v v4l2src device=/dev/video0 \
! video/x-raw,format=YUY2,width=640,height=480,framerate=15/1 \
! videoconvert ! x264enc ! matroskamux \
! filesink location='raw_single.mkv' sync=false
raw video stream from camera to BOTH display and file (FAILS):
gst-launch-1.0 -v v4l2src device=/dev/video0 \
! video/x-raw,format=YUY2,width=640,height=480,framerate=15/1 \
! tee name=t \
t. ! queue ! xvimagesink sync=false \
t. ! queue ! videoconvert ! x264enc ! h264parse ! matroskamux \
! filesink location='raw_dual.mkv' sync=false
The last command (raw video to both display and file) fails without any warnings or errors. The gst-launch terminal output is exactly the same as when only writing to file. The xvimage window appears and displays an image from the camera, but the image does not change (i.e. it is frozen). A zero bytes file appears too.
I have tried multiple versions of the above commands, but I think those are the minimal commands that can reproduce the problem.
Does anyone understand what I am doing wrong?
Streaming raw video from a webcam (not specific to the C920) to both a display and an h.264-encoded file can be done. The x264enc property tune needs to be set to zerolatency.
h.264 example:
gst-launch-1.0 -v v4l2src device=/dev/video0 \
! video/x-raw,format=YUY2,width=640,height=480,framerate=15/1 \
! tee name=t t. ! queue ! xvimagesink sync=false t. ! queue ! \
videoconvert ! x264enc tune=zerolatency ! h264parse ! \
matroskamux ! filesink location='raw_dual.mkv' sync=false
Alternatively, one can skip h.264 altogether and encode to theora or vp8 instead.
theora example:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! \
video/x-raw,format=YUY2,width=640,height=480,framerate=15/1 ! \
tee name=t t. ! queue ! xvimagesink sync=false t. ! queue ! \
videoconvert ! theoraenc ! theoraparse ! \
matroskamux ! filesink location='raw_dual.mkv' sync=false
vp8 example:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! \
video/x-raw,format=YUY2,width=640,height=480,framerate=15/1 ! \
tee name=t t. ! queue ! xvimagesink sync=false t. ! queue ! \
videoconvert ! vp8enc ! \
matroskamux ! filesink location='raw_dual.mkv' sync=false
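For what it's worth, the freeze without tune=zerolatency is consistent with x264enc's default multi-frame lookahead: the encoder holds back more frames than the default queue can buffer, the queue blocks the tee, and the display branch stalls with it. Assuming that is the cause, an untested alternative to changing the tune is to let the file-branch queue grow without limit:
gst-launch-1.0 -v v4l2src device=/dev/video0 \
  ! video/x-raw,format=YUY2,width=640,height=480,framerate=15/1 \
  ! tee name=t t. ! queue ! xvimagesink sync=false \
  t. ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 \
  ! videoconvert ! x264enc ! h264parse ! matroskamux \
  ! filesink location='raw_dual.mkv' sync=false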
Thanks a lot to Jan Spurny and Tim.
I'm trying to stream video from a Raspberry Pi (on Raspbian) to a Windows 7 PC, like in this video: https://www.youtube.com/watch?v=lNvYanDLHZA
I have a Logitech C270 connected to the Raspberry Pi, and I have managed to stream webcam video over TCP using:
gst-launch v4l2src device=/dev/video0 ! \
'video/x-raw-yuv,width=640,height=480' ! \
x264enc pass=qual quantizer=20 tune=zerolatency ! \
rtph264pay ! tcpsink host=$pi_ip port=5000
from my Pi. Receiving this with VLC works, but with a 3 second delay.
I want to do this over UDP to get a shorter delay (correct me if I'm wrong), but I cannot for the life of me figure it out. I have tried the following:
gst-launch-1.0 v4l2src device=/dev/video0 ! \
'video/x-raw-yuv,width=640,height=480' ! \
x264enc pass=qual quantizer=20 tune=zerolatency ! \
rtph264pay ! udpsink host=$pc_ip port=1234
and
gst-launch-1.0 udpsrc port=1234 ! \
"application/x-rtp, payload=127" ! \
rtph264depay ! ffdec_h264 ! fpsdisplaysink sync=false text-overlay=false
for the Pi and PC sides, respectively (taken from
Webcam streaming using gstreamer over UDP),
but with no luck. (I tried to change video/x-raw-yuv to fit the 1.0 version, but still without luck.)
Any hints would be highly appreciated!
Edit
With the raspi camera (not the webcam) the following works:
Windows batch script:
@echo off
cd C:\gstreamer\1.0\x86_64\bin
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp, payload=96 ! ^
rtpjitterbuffer ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false ^
text-overlay=false
Raspberry Pi Bash Script:
#!/bin/bash
clear
raspivid -n -t 0 -rot 270 -w 960 -h 720 -fps 30 -b 6000000 -o - | \
gst-launch-1.0 -e -vvvv fdsrc ! h264parse ! rtph264pay pt=96 config-interval=5 ! \
udpsink host=***YOUR_PC_IP*** port=5000
But I cannot figure out how to use the webcam instead of the Raspberry Pi camera (i.e. v4l2src instead of raspivid) in the same manner.
Edit 2
The following works, but is very slow and has a huge delay:
RPi
gst-launch-1.0 -vv -e v4l2src device=/dev/video0 \
! videoscale \
! "video/x-raw,width=400,height=200,framerate=10/1" \
! x264enc pass=qual quantizer=20 tune=zerolatency \
! h264parse \
! rtph264pay config-interval=5 pt=96 \
! udpsink host=$myip port=$myport
PC:
gst-launch-1.0 -e -v udpsrc port=5001 ! ^
application/x-rtp, payload=96 ! ^
rtpjitterbuffer ! ^
rtph264depay ! ^
avdec_h264 ! ^
autovideosink sync=false text-overlay=false
I now suspect (thanks to a hint from Mustafa Chelik) that the huge lag is because the Raspberry Pi has to encode the webcam video, whereas the Pi camera's video is already encoded. Not sure if this makes sense, though?
Found hints towards the solution at http://www.z25.org/static/rd/videostreaming_intro_plab/
The following worked very well for streaming video from a Logitech C270 on a Raspberry Pi to a Windows 7 PC:
PC side:
gst-launch-1.0 -e -v udpsrc port=5001 ! ^
application/x-rtp, encoding-name=JPEG,payload=26 ! ^
rtpjpegdepay ! jpegdec ! ^
autovideosink
RPi side:
gst-launch-1.0 -v v4l2src device=/dev/video0 \
! "image/jpeg,width=1280, height=720,framerate=30/1" \
! rtpjpegpay \
! udpsink host=$myip port=$myport
I suspect that it was the encoding of the webcam video to h264 that was too slow on the Raspberry Pi; the webcam already delivers JPEG frames, so no encoding was necessary when requesting image/jpeg.
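If you want to confirm what the camera can deliver natively (the C270 lists an MJPG mode, for example), v4l2-ctl from the v4l-utils package prints the supported pixel formats, resolutions, and frame rates. Assuming it is installed on the Pi:
v4l2-ctl --device=/dev/video0 --list-formats-ext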
For my webcam stream I have used MJPG-Streamer and get a 0.2 second delay.
http://wiki.ubuntuusers.de/MJPG-Streamer
And the advantage is that you can watch it in a web browser.