Sony Camera Remote API, How can I show/use liveview-stream data with VB.net (use of Sony QX1) - vb.net

I'm writing a small program in VB.net for remote control of a Sony camera (I use a QX1, but the model should be irrelevant). I can take pictures by sending the JSON commands to the camera, and I can also start the liveview stream with the "startLiveview" method wrapped in a JSON command. In return I get the address for downloading the livestream, like http://192.168.122.1:8080/liveview/liveviewstream (wrapped in a JSON answer).
According to the Sony Camera Remote API reference, this is a stream that contains some header data and the individual JPEG data, but it does not seem to be an MJPEG stream. If I paste the livestream link into my browser, it starts downloading the stream indefinitely, and I could not play the stream with an MJPEG player such as VLC.
My question is: how can I extract the JPEG data with VB.net, or how can I display the livestream?
A similar question was posted some time ago but got no reply, so I'm trying again.
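For reference, a minimal VB.net sketch of the frame extraction could look like this, assuming the layout described in the API reference (an 8-byte common header followed by a 128-byte payload header, where the three bytes at offsets 4 to 6 hold the JPEG size and the byte at offset 7 the padding size). The offsets and the helper routine are assumptions to verify against the reference; this is untested on the QX1:
Imports System.IO
Imports System.Net

Module LiveviewReader

    Sub Main()
        ' Liveview URL returned by startLiveview (from the question).
        Dim url As String = "http://192.168.122.1:8080/liveview/liveviewstream"
        Dim request As WebRequest = WebRequest.Create(url)

        Using response As WebResponse = request.GetResponse()
            Using stream As Stream = response.GetResponseStream()
                Dim frameIndex As Integer = 0
                While True
                    ' 8-byte common header: start byte, payload type, sequence number, timestamp.
                    ' commonHeader(1) should be 1 for liveview images (not checked here).
                    Dim commonHeader(7) As Byte
                    ReadExactly(stream, commonHeader)

                    ' 128-byte payload header: assumed JPEG size as a 3-byte big-endian value
                    ' at offset 4, padding size at offset 7 (per the API reference).
                    Dim payloadHeader(127) As Byte
                    ReadExactly(stream, payloadHeader)
                    Dim jpegSize As Integer = (CInt(payloadHeader(4)) << 16) Or (CInt(payloadHeader(5)) << 8) Or CInt(payloadHeader(6))
                    Dim paddingSize As Integer = payloadHeader(7)

                    ' Read the JPEG itself, then skip the padding.
                    Dim jpegData(jpegSize - 1) As Byte
                    ReadExactly(stream, jpegData)
                    If paddingSize > 0 Then
                        Dim padding(paddingSize - 1) As Byte
                        ReadExactly(stream, padding)
                    End If

                    ' Do something with the frame, e.g. show it in a PictureBox or save it to disk.
                    File.WriteAllBytes("frame" & frameIndex.ToString() & ".jpg", jpegData)
                    frameIndex += 1
                End While
            End Using
        End Using
    End Sub

    ' Stream.Read may return fewer bytes than requested, so loop until the buffer is full.
    Private Sub ReadExactly(stream As Stream, buffer As Byte())
        Dim offset As Integer = 0
        While offset < buffer.Length
            Dim bytesRead As Integer = stream.Read(buffer, offset, buffer.Length - offset)
            If bytesRead = 0 Then Throw New EndOfStreamException("Liveview stream ended")
            offset += bytesRead
        End While
    End Sub

End Module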

This is my approach: I use ffserver to make the video streamable.
This is my ffserver config (server.conf):
Port 8090
BindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 10000
CustomLog -
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 1G
ACL allow 127.0.0.1
</Feed>
<Stream cam.webm>
Feed feed1.ffm
Format webm
VideoCodec libvpx
VideoSize vga
VideoFrameRate 25
AVOptionVideo flags +global_header
StartSendOnKey
NoAudio
preroll 5
VideoBitRate 400
</Stream>
<Stream status.html>
Format status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Stream>
Then I run ffserver with that config:
ffserver -f server.conf
Then I encode the video from the Sony liveview stream and broadcast it via ffserver:
ffmpeg -i http://192.168.122.1:8080/liveview/liveviewstream -vcodec libvpx -fflags nobuffer -an http://127.0.0.1:8090/feed1.ffm
After that you can watch the liveview stream at this address:
localhost:8090/cam.webm
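To check the result, you can open that address in any WebM-capable player, for example with ffplay (part of ffmpeg):
ffplay http://localhost:8090/cam.webm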

(I use my Linux laptop, working in a terminal.)
Install GSTREAMER:
sudo apt-get install libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio
Set the parameters of your camera to enable control via smartphone; for example, the SSID of my camera on my network is DIRECT-dpC3:DSC-RX100M5A
Use Wi-Fi to connect your computer directly to your camera.
Tell your camera to begin liveview with this command:
curl http://192.168.122.1:10000/sony/camera -X POST -H 'Content-Type: application/json' --data '{ "method": "startLiveview", "params": [], "id": 1, "version": "1.0"}'
Note that the camera's response is a URL; mine is:
{"id":1,"result":["http://192.168.122.1:60152/liveviewstream?%211234%21%2a%3a%2a%3aimage%2fjpeg%3a%2a%21%21%21%21%21"]}
Tell gstreamer to use this URL:
gst-launch-1.0 souphttpsrc location=http://192.168.122.1:60152/liveviewstream?%211234%21%2a%3a%2a%3aimage%2fjpeg%3a%2a%21%21%21%21%21 ! jpegdec ! autovideosink
Enjoy ;-)
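When you are done, the liveview can be stopped the same way (stopLiveview is the corresponding method in the API reference):
curl http://192.168.122.1:10000/sony/camera -X POST -H 'Content-Type: application/json' --data '{ "method": "stopLiveview", "params": [], "id": 1, "version": "1.0"}'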

I tried using ffmpeg to process the stream and succeeded in saving it as an FLV file.
I used this command in a terminal (on UNIX) and successfully saved the stream as an FLV file:
ffmpeg -i http://192.168.122.1:8080/liveview/liveviewstream -vcodec flv -qscale 1 -an output.flv
Maybe you can modify or optimize it as needed.
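If you only want to watch the stream live instead of recording it, ffplay (also part of ffmpeg) can read the same URL; turning off buffering should reduce latency:
ffplay -fflags nobuffer http://192.168.122.1:8080/liveview/liveviewstream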

In VLC it works for me after adding .mjpg to the URL; try this, wait a second, and it should play: http://192.168.122.1:8080/liveview/liveviewstream.mjpg
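The same trick should also work from the command line, e.g.:
vlc http://192.168.122.1:8080/liveview/liveviewstream.mjpg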

Related

How to send one line per packet using the GStreamer command line

I am trying to stream raw video over Ethernet via RTP (RFC 4175), using GStreamer 1.0 on Windows.
I don't want my data to be compressed, so I use the rtpvrawpay element.
I have the following GStreamer pipeline:
gst-launch-1.0 -v filesrc location=%FILENAME% ! videoparse width=%WIDTH% height=%HEIGHT% framerate=50/1 format=GST_VIDEO_FORMAT_GRAY16_BE ! videoconvert ! video/x-raw,media=(string)video,encoding-name=(string)RAW,sampling=(string)YCbCr-4:2:2,width=640,height=512 ! rtpvrawpay pt=96 ! udpsink async=true host=%HOST% port=%PORT%
I have another system decoding this RTP video. However, that system is restricted to processing one line of video per UDP packet. Moreover, it drops any packet whose length differs from 1342 bytes.
(1 line: 640 (width) x 2 bytes + 20 bytes of RTP header + 42 bytes of Ethernet/IP/UDP headers = 1342 bytes)
So I have to tell the GStreamer pipeline to send one line per packet. My first attempt was to set the "mtu" property of the rtpvrawpay element. When I set mtu to 1300, my UDP packets are 1400 bytes long (?)
Then I set it to 1302, and the UDP packets are 1403 bytes. There has to be a way to tell GStreamer never to use continuation packets in RTP.
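For reference, the property in question sits on the payloader itself, so the relevant part of the pipeline becomes (1300 is just the value tried above; the right number depends on the per-packet RTP and payload-header overhead):
... ! rtpvrawpay pt=96 mtu=1300 ! udpsink async=true host=%HOST% port=%PORT%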
Some things to do: first, upload the video to an FTP server. Then, in JavaScript/HTML:
<embed src="myftpsite/mycoolvideo.mp4"></embed>
Make sure it's in a format the HTML can comprehend.

Why won't chrony sync to GPS when serial data flows through socat?

I'm using gpsd to sync time to a GPS. When I connect my GPS to /dev/ttyUSB0, and tell gpsd to listen on that port, chrony is happy to use it as a time source.
gpsd -D 5 -N -n /dev/ttyUSB0
However, as soon as I try to pipe that data through socat (which is how it needs to work in our production system), chrony won't use it as a source. This is the case even though gpsd, cgps, and gpsmon all seem perfectly happy with the GPS data they are getting.
Here's my socat:
socat -d -d pty,rawer,echo=0,link=/tmp/ttyVSP0 /dev/ttyUSB0,b4800
(my gpsd command is the same as above but with /tmp/ttyVSP0 as the port to listen to in this case).
I'm using chronyc sources to confirm when GPS is a chrony source.
My refclock line in my /etc/chrony/chrony.conf looks like this:
refclock SHM 0 refid GPS
Pty ports are prevented from talking to NTP (and thus chrony) by an early return meant to keep the code from running inside the test harness.
void ntpshm_link_activate(struct gps_device_t *session)
/* set up ntpshm storage for a session */
{
    /* don't talk to NTP when we're running inside the test harness */
    if (session->sourcetype == source_pty)
        return;

    if (session->sourcetype != source_pps) {
        /* allocate a shared-memory segment for "NMEA" time data */
        session->shm_clock = ntpshm_alloc(session->context);
        if (session->shm_clock == NULL) {
            gpsd_log(&session->context->errout, LOG_WARN,
                     "NTP: ntpshm_alloc() failed\n");
            return;
        }
    }
    /* ... rest of the function ... */
Discovered thanks to this bug report
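As a quick sanity check of whether gpsd created the shared-memory segments at all, you can list SysV shared memory; the NTP/chrony SHM segments use keys beginning with 0x4e5450 (ASCII "NTP"):
ipcs -m | grep -i 4e5450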

How do we send canvas image data as an attachment to a server in Pharo?

How do we send or upload a data file to a server in Pharo? I saw some examples of sending a file from a directory on the machine.
It works fine.
ZnClient new
    url: MyUrl;
    uploadEntityFrom: FileLocator home / 'path to the file';
    put
In my case I don't want to send/upload a file stored on the machine; instead I want to send/upload a file hosted somewhere else, or data I retrieved over the network, and attach it to a request to another server.
How can we do that?
Based on your previous questions I presume you are using Linux. The issue here is not within Smalltalk/Pharo, but in the network mapping.
FTP
If you want to use FTP, don't forget that it sends the password in plaintext. Set it up in a way that lets you mount it. There are probably plenty of ways to do this, but you can try curlftpfs. You need the kernel module fuse for that, so make sure you have it loaded. If it is not loaded you can do so via modprobe fuse.
The usage would be:
curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other
where you fill in username/password. The option allow_other allows other users on the system to use your mount.
(For more details see the Arch wiki and its curlftpfs page.)
Webdav
For WebDAV I would use the same approach, this time using davfs.
You would manually mount it via the mount command:
mount -t davfs https://yoursite.net:<port>/path /mnt/webdav
There are two reasonable ways to set it up: systemd or fstab. The information below is taken from the davfs2 Arch wiki:
For systemd:
/etc/systemd/system/mnt-webdav-service.mount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Mount]
What=http(s)://address:<port>/path
Where=/mnt/webdav/service
Options=uid=1000,file_mode=0664,dir_mode=2775,grpid
Type=davfs
TimeoutSec=15
[Install]
WantedBy=multi-user.target
You can create a systemd automount unit to set a timeout:
/etc/systemd/system/mnt-webdav-service.automount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Automount]
Where=/mnt/webdav
TimeoutIdleSec=300
[Install]
WantedBy=remote-fs.target
The fstab way is easy if you have edited fstab before (it behaves the same as any other fstab entry):
/etc/fstab
https://webdav.example/path /mnt/webdav davfs rw,user,uid=username,noauto 0 0
For WebDAV you can even store the credentials securely.
Create a secrets file to store credentials for a WebDAV service, using ~/.davfs2/secrets for a user and /etc/davfs2/secrets for root:
/etc/davfs2/secrets
https://webdav.example/path davusername davpassword
Make sure the secrets file has the correct permissions; for root mounting:
# chmod 600 /etc/davfs2/secrets
# chown root:root /etc/davfs2/secrets
And for user mounting:
$ chmod 600 ~/.davfs2/secrets
Back to your Pharo/Smalltalk code:
I presume you read the above and have either /mnt/ftp or /mnt/webdav mounted.
For e.g. FTP your code would then simply take the file from the mounted directory:
ZnClient new
    url: MyUrl;
    uploadEntityFrom: '/mnt/ftp/your_file_to_upload' asFileReference;
    put
Edit: Based on the comments.
The issue is that the configuration of the ZnClient is in Pharo itself and the JSON file is also generated there.
One quick and dirty solution would be to use the mounts mentioned above together with a shell command:
With FTP, for example:
| commandOutput |
commandOutput := (PipeableOSProcess command: 'curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other') output.
Transcript show: commandOutput.
The other approach is more sensible: use Pharo's FTP or WebDAV support via FileSystemNetwork.
To load ftp only:
Gofer it
smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
configuration;
load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'FTP'
To load WebDAV only:
Gofer it
smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
configuration;
load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'Webdav'
To get everything including tests:
Gofer it
smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
configuration;
loadStable.
With that you should be able to get a file, for example over FTP:
| ftpConnection wDir file |
"Open a connection"
ftpConnection := FileSystem ftp: 'ftp://ftp.sh.cvut.cz/'.
"Getting working directory"
wDir := ftpConnection workingDirectory.
file := '/Arch/lastsync' asFileReference.
"Close connection - do always!"
ftpConnection close.
Then your upload (via FTP) would look like this:
| ftpConnection wDir file |
"Open connection"
ftpConnection := FileSystem ftp: 'ftp://your_ftp'.
"Get the working directory"
wDir := ftpConnection workingDirectory.
file := '/<your_file_path>' asFileReference.
ZnClient new
    url: MyUrl;
    uploadEntityFrom: file;
    put.
"Close connection - always do this!"
ftpConnection close.
The WebDAV case would be similar.
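If you want to skip the filesystem entirely and forward data fetched over HTTP straight to the other server, an untested sketch using only Zinc could look like this (the #response and #entity: messages are my reading of the ZnClient API, and the source URL is just a placeholder):
| downloaded |
"Fetch the data from wherever it is hosted"
downloaded := (ZnClient new
    url: 'http://example.com/source/data.json';
    get;
    response) entity.
"Re-upload it as the body of a PUT to the target server"
ZnClient new
    url: MyUrl;
    entity: downloaded;
    put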

How to solve: UDP send of xxx bytes failed with error 11 in Ubuntu?

UDP send of XXXX bytes failed with error 11
I am running a WebRTC streaming app on Ubuntu 16.04.
It streams video and audio from a Logitech HD Webcam C930e within an Electron desktop app.
It all works fine and smoothly on my other machine, a MacBook Pro. But on my Ubuntu machine I receive errors after 10-20 seconds once the peer connection is established:
[2743:0513/193817.691636:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1019 bytes failed with error 11
[2743:0513/193817.691775:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1020 bytes failed with error 11
[2743:0513/193817.696615:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1020 bytes failed with error 11
[2743:0513/193817.696777:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1020 bytes failed with error 11
[2743:0513/193817.712369:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1029 bytes failed with error 11
[2743:0513/193817.712952:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1030 bytes failed with error 11
[2743:0513/193817.713086:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1030 bytes failed with error 11
[2743:0513/193817.717713:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1030 bytes failed with error 11
By the way, if I do NOT stream audio but only video, I get the same errors, just with "video" in the log lines instead...
Somewhere in between those lines I also get one line that says:
[3441:0513/195919.377887:ERROR:stunport.cc(506)] sendto: [0x0000000b] Resource temporarily unavailable
I also looked into sysctl.conf and increased the values there. My current sysctl.conf looks like this:
fs.file-max=1048576
fs.inotify.max_user_instances=1048576
fs.inotify.max_user_watches=1048576
fs.nr_open=1048576
net.core.netdev_max_backlog=1048576
net.core.rmem_max=16777216
net.core.somaxconn=65535
net.core.wmem_max=16777216
net.ipv4.tcp_congestion_control=htcp
net.ipv4.ip_local_port_range=1024 65535
net.ipv4.tcp_fin_timeout=5
net.ipv4.tcp_max_orphans=1048576
net.ipv4.tcp_max_syn_backlog=20480
net.ipv4.tcp_max_tw_buckets=400000
net.ipv4.tcp_no_metrics_save=1
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_synack_retries=2
net.ipv4.tcp_syn_retries=2
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_wmem=4096 65535 16777216
vm.max_map_count=1048576
vm.min_free_kbytes=65535
vm.overcommit_memory=1
vm.swappiness=0
vm.vfs_cache_pressure=50
As suggested here: https://gist.github.com/cdgraff/7920db287988463aafd7ea09eef6f9f0
It does not seem to help. I am still getting these errors and I experience lag on the other side.
Additional info: on Ubuntu the Electron app connects to a Heroku server (Node.js), and the other side of the peer connection (a Chrome browser) also connects to it. The Heroku server acts as the handshaking (signaling) server to establish the WebRTC connection. Both use this configuration:
{'urls': 'stun:stun1.l.google.com:19302'},
{'urls': 'stun:stun2.l.google.com:19302'},
and also an additional TURN server from numb.viagenie.ca.
The connection is established, and within the first 10 seconds the quality is very high and there is no lag at all. But after 10-20 seconds there is lag, and on the Ubuntu console I am getting these UDP errors.
The PC that Ubuntu is running on:
PROCESSOR / CHIPSET:
CPU Intel Core i3 (2nd Gen) 2310M / 2.1 GHz
Number of Cores: Dual-Core
Cache: 3 MB
64-bit Computing: Yes
Chipset Type: Mobile Intel HM65 Express
RAM:
Memory Speed: 1333 MHz
Memory Specification Compliance: PC3-10600
Technology: DDR3 SDRAM
Installed Size: 4 GB
Rated Memory Speed: 1333 MHz
Graphics
Graphics Processor Intel HD Graphics 3000
Could anyone please give me some hints or anything that could solve this problem?
Thank you
==============EDIT=============
Somewhere in my very large strace log I found these two lines:
7671 sendmsg(17, {msg_name(0)=NULL, msg_iov(1)=[{"CHILD_PING\0", 11}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 11
7661 <... recvmsg resumed> {msg_name(0)=NULL, msg_iov(1)=[{"CHILD_PING\0", 12}], msg_controllen=32, [{cmsg_len=28, cmsg_level=SOL_SOCKET, cmsg_type=SCM_CREDENTIALS, {pid=7671, uid=0, gid=0}}], msg_flags=0}, 0) = 11
On top of that, near where the error happens (at the end of the log file, just before I quit the application), I see the following:
https://gist.github.com/Mcdane/2342d26923e554483237faf02cc7cfad
First, to get an impression of what is happening, I'd look with strace. Start your application with
strace -e network -o log.strace -f YOUR_APPLICATION
If your application looks for another running process to hand the work to, start it with parameters so it doesn't do that. For instance, for Chrome, pass a --user-data-dir value that is different from your default.
Afterwards, look for = 11 in the output file log.strace and see what happened before and after. This will give you a rough picture of what is happening, and you can rule out silly mistakes like sendto calls to 0.0.0.0 (for this reason, this is also very important information to include in a Stack Overflow question, for instance by uploading the output to a gist).
It may also be helpful to use Wireshark or another packet capture program to get a rough overview of what is being sent.
Assuming you can confirm with strace that a valid send call is taken place, you can then further analyze the error conditions.
Error 11 is EAGAIN. The documentation of send says when this error is supposed to happen:
EAGAIN (...) The socket is marked nonblocking and the requested operation would block. (...)
EAGAIN (Internet domain datagram sockets) The socket referred to by sockfd had not previously been bound to an address and, upon attempting to bind it to an ephemeral port, it was determined that all port numbers in the ephemeral port range are currently in use. See the discussion of /proc/sys/net/ipv4/ip_local_port_range in ip(7).
Both conditions could apply.
The first will be obvious by the strace log if you trace the creation of the socket involved.
To exclude the second, you can run netstat -una (or, if you want to know the programs involved, sudo netstat -unap) to see which ports are open (if you want Stack Overflow users to look into it, post the output on a gist or similar and link it here). Your port range net.ipv4.ip_local_port_range=1024 65535 is not the standard 32768 60999; it looks like you already attempted to do something about a lack of port numbers. It would help to trace back the reason why you changed that parameter, and the conditions that convinced you to do so.
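A quick way to count the UDP sockets currently in use (ss is the modern replacement for netstat) would be, for example:
ss -uan | tail -n +2 | wc -l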

How to setup gstreamer on raspberry Pi and client for rtp with H264-capable webcam?

In the context of an underwater ROV project, I'm trying to stream (via RTP) an HD video feed from a Raspberry Pi model 2. The webcam is a Logitech C920, which I bought a while ago because it was the only H264-capable cam at that time.
It is also essential that I obtain the lowest possible latency, since the video will be used to pilot the ROV.
So I worked out some gstreamer-1.0 pipelines on my desktop computer with the C920 (a Dell workstation running Ubuntu 14.04), which worked fine, but I ran into problems when I tried to use the Raspberry Pi instead.
First, I tried (on the RPi) to capture the H264 camera stream to a Matroska file:
#this sets the C920 cam to H264 encoding, framerate 30/1:
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1
gst-launch-1.0 -vvv v4l2src \
! video/x-h264, width=1920, height=1080, framerate=30/1 \
! queue max-size-buffers=1 \
! matroskamux \
! filesink location=/tmp/video.mkv
This worked perfectly. A little choppy, I guess due to the size of the buffer, but OK.
Then I tried to send the stream over RTP to my laptop (MacBook Pro, Yosemite, gstreamer installed via brew).
# on the server (RPi):
gst-launch-1.0 -vvv v4l2src \
! video/x-h264,width=1920,height=1080,framerate=30/1 \
! rtph264pay \
! udpsink host=192.168.0.168 port=5000
# on the client (MacBookPro)
gst-launch-1.0 -vvv udpsrc port=5000 \
caps="application/x-rtp, media=(string)video, \
clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" \
! rtpjitterbuffer drop-on-latency=true latency=300 \
! rtph264depay \
! queue max-size-buffers=1 \
! matroskamux \
! filesink location=/tmp/video.mkv
There, I get nothing. I checked on the client with sudo tcpdump (port 5000 and udp) that it does receive UDP packets on port 5000, but that's all. Nothing gets recorded in video.mkv, which is "touched" but stays at 0 bytes.
After reading some related questions here, I tried many variations, including:
streaming to the Pi itself. In that case, I sometimes get some weird output in the client window, which "tends to" disappear if I increase the size of the buffer:
** (gst-launch-1.0:2832): CRITICAL **: gst_rtp_buffer_map: assertion 'GST_IS_BUFFER (buffer)' failed
** (gst-launch-1.0:2832): CRITICAL **: gst_rtp_buffer_unmap: assertion 'rtp->buffer != NULL' failed
but still no output whatsoever.
other sinks: xvimagesink, autovideosink
rtpjitterbuffer: toggled drop-on-latency, changed latency value
queue: changed buffer size
Here's the output of the client:
gst-launch-1.0 -vvv udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtpjitterbuffer drop-on-latency=true latency=300 ! rtph264depay ! queue max-size-buffers=10 ! matroskamux ! filesink location=/tmp/movie.mkv
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
/GstPipeline:pipeline0/GstUDPSrc:udpsrc0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
/GstPipeline:pipeline0/GstRtpJitterBuffer:rtpjitterbuffer0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstRtpJitterBuffer:rtpjitterbuffer0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
/GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:16:23.292637000
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
I hope that someone here can give me some clues about this problem: I should point out (if needed) that I'm still largely a beginner in gstreamer...
EDIT (12/11/16)
Following ensonic's advice I used GST_DEBUG="*:3". The client now reports its problem: it can't find the type of the video:
0:00:35.185377000 12349 0x7f878904bb20 WARN typefind gsttypefindelement.c:983:GstFlowReturn gst_type_find_element_chain_do_typefinding(GstTypeFindElement *, gboolean, gboolean):<typefind> error: The stream doesn't contain enough data.
0:00:35.185416000 12349 0x7f878904bb20 WARN typefind gsttypefindelement.c:983:GstFlowReturn gst_type_find_element_chain_do_typefinding(GstTypeFindElement *, gboolean, gboolean):<typefind> error: Can't typefind stream
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind: The stream doesn't contain enough data.
Additional debug info:
gsttypefindelement.c(983): GstFlowReturn gst_type_find_element_chain_do_typefinding(GstTypeFindElement *, gboolean, gboolean) (): /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind:
Can't typefind stream
So the client doesn't have enough data in the stream to determine its type...
How should I change that? I don't understand what's missing!
A few comments:
1) On the client, use "gst-launch-1.0 -e ..." to make Ctrl-C send an EOS so that the file gets finalized.
2) On the RasPi, add "gdppay" before udpsink, and on the client add "gdpdepay" after udpsrc. This will transport events and queries, since you don't use RTSP.
3) On the client, try running with GST_DEBUG="*:3" to see if there are any warnings. Also try running with " ! decodebin ! autovideosink" to see if you get any images.
Following ensonic's comments (see above), I finally managed to get both pipelines working.
The trick was to use gdppay/gdpdepay elements instead of rtph264pay/rtph264depay.
On the server-side (Raspberry Pi)
#set the Logitech C920 cam properly (1920x1080, 30 fps)
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1 --set-parm=30
# exec the following pipeline (only after gstreamer runs on the client!):
gst-launch-1.0 -vvv -e v4l2src \
! video/x-h264,width=1920,height=1080,framerate=30/1 \
! gdppay \
! udpsink host=192.168.0.168 port=5000
On the client side (MacBookPro)
# launch the following command before executing server pipeline:
gst-launch-1.0 -e -vvv udpsrc port=5000 \
caps="application/x-gdp, streamheader=(buffer)< [insert long header here] >" \
! gdpdepay \
! video/x-h264, width=1920, height=1080, pixel-aspect-ratio=1/1, framerate=30/1 \
! decodebin \
! queue max-size-buffers=10 \
! autovideosink sync=false async=false
Results
CPU load
The performance of the C920 on the Raspberry Pi is remarkable. At 1920x1080 resolution and 30 fps, the total CPU load is less than 3%. For comparison, when I encode the equivalent raw YUV stream on the Raspberry Pi, the load climbs to 96%. The load on the client side (my 2011 Intel i5 MacBook Pro) is about 25%.
video latency
I've tested the previous pipelines once for 20 minutes continuously and more than 10 times overall. Each time I get a very reproducible latency of ~250 ms, whether over LAN or WLAN. Changing the size of the queues' buffers doesn't help much. Considering what one can read on the net regarding streaming latencies, I think it's quite acceptable, and sufficient for piloting a vehicle at low speed.
start of the stream
Sometimes, just after launching the pipeline on the server side, many packets get lost due to the following error:
gst_h264_parse_handle_frame:<h264parse0> broken/invalid nal Type: 1 Slice, Size: xxxx will be dropped
but these errors disappear very quickly, maybe once the next key frame is received (one problem is that I can't easily change the encoding parameters of the cam).
Other tips and remarks
launching order
As stated above, be sure to launch the server's pipeline after the client's one. Otherwise the negotiation fails.
getting the header buffer
To get the (very long) header buffer of the server's stream, execute the server's pipeline once with the -vvv option, kill it, then copy/paste the buffer into the caps of the client's pipeline.
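For example, on the server you can capture just that caps line (an untested one-liner; gst-launch prints the negotiated caps on stdout when run with -v):
gst-launch-1.0 -v -e v4l2src ! video/x-h264,width=1920,height=1080,framerate=30/1 ! gdppay ! udpsink host=192.168.0.168 port=5000 | grep -m 1 streamheader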
gstreamer and OS versions used
Raspberry Pi 2, Raspbian, 4.1.19-v7+, gstreamer 1.2.0 (http://packages.qa.debian.org/gstreamer1.0)
Client :
MacBook Pro 2011, i5, Apple OSX Yosemite, gstreamer 1.10.1 (installed via brew)
Many thanks again to ensonic, who had the idea to switch to gdp!