How to send one line per packet using the GStreamer command line - UDP

I am trying to stream raw video over Ethernet via RTP (RFC 4175), using GStreamer 1.0 on Windows.
I don't want my data to be compressed, so I use the rtpvrawpay element.
I have the following GStreamer pipeline:
gst-launch-1.0 -v filesrc location=%FILENAME% ! videoparse width=%WIDTH% height=%HEIGHT% framerate=50/1 format=GST_VIDEO_FORMAT_GRAY16_BE ! videoconvert ! video/x-raw,media=(string)video,encoding-name=(string)RAW,sampling=(string)YCbCr-4:2:2,width=640,height=512 ! rtpvrawpay pt=96 ! udpsink async=true host=%HOST% port=%PORT%
I have another system decoding this RTP video. However, that system is restricted to processing one line of video per UDP packet. Moreover, it discards any packet whose length differs from 1342 bytes.
(1 line: 640 (width) × 2 bytes per pixel = 1280 bytes of pixel data, plus 20 bytes of RTP headers (12-byte RTP header + 2-byte extended sequence number + 6-byte RFC 4175 line header), plus 42 bytes of Ethernet/IP/UDP headers = 1342 bytes.)
So I have to tell the GStreamer pipeline to send one line per packet. My first attempt was to set the "mtu" property of the rtpvrawpay element. When I set mtu to 1300, my UDP packets are 1400 bytes long (?).
Then I set it to 1302, and the UDP packets are 1403 bytes. There has to be a way to tell GStreamer never to use any packet as a continuation packet in RTP.
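For reference, here is the same pipeline with the caps typo fixed and mtu=1300 as a starting point. By the arithmetic above, one line should fit exactly: a 12-byte RTP header + 2-byte extended sequence number + 6-byte line header + 1280 bytes of pixels = 1300 bytes, which becomes 1342 bytes on the wire after the 8-byte UDP, 20-byte IP and 14-byte Ethernet headers. Note that media, encoding-name and sampling are fields of the application/x-rtp caps produced by rtpvrawpay, not of video/x-raw, so they are dropped here, and format=UYVY is an assumption standing in for the YCbCr-4:2:2 sampling named in the original caps. Whether rtpvrawpay actually aligns its fragments on line boundaries at this mtu is exactly what is in question, so treat this as a sketch rather than a confirmed fix:
gst-launch-1.0 -v filesrc location=%FILENAME% ! videoparse width=%WIDTH% height=%HEIGHT% framerate=50/1 format=GST_VIDEO_FORMAT_GRAY16_BE ! videoconvert ! video/x-raw,format=UYVY,width=640,height=512 ! rtpvrawpay pt=96 mtu=1300 ! udpsink async=true host=%HOST% port=%PORT%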

Some things to do: first, upload the video to an FTP server. Then, in JavaScript/HTML:
<embed src="myftpsite/mycoolvideo.mp4"></embed>
Make sure it is in a format the HTML can comprehend.

Related

How to solve: UDP send of xxx bytes failed with error 11 in Ubuntu?

UDP send of XXXX bytes failed with error 11
I am running a WebRTC streaming app on Ubuntu 16.04.
It streams video and audio from a Logitech HD Webcam C930e within an Electron desktop app.
It all works fine and runs smoothly on my other machine, a MacBook Pro. But on my Ubuntu machine I receive errors 10-20 seconds after the peer connection is established:
[2743:0513/193817.691636:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1019 bytes failed with error 11
[2743:0513/193817.691775:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1020 bytes failed with error 11
[2743:0513/193817.696615:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1020 bytes failed with error 11
[2743:0513/193817.696777:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1020 bytes failed with error 11
[2743:0513/193817.712369:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1029 bytes failed with error 11
[2743:0513/193817.712952:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1030 bytes failed with error 11
[2743:0513/193817.713086:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1030 bytes failed with error 11
[2743:0513/193817.717713:ERROR:stunport.cc(282)] Jingle:Port[0xa5faa3df800:audio:1:0:local:Net[wlx0013ef503b67:192.168.0.x/24:Wifi]]: UDP send of 1030 bytes failed with error 11
==> By the way, if I do NOT stream audio but video only, I get the same errors, just with "video" in the log lines instead...
Somewhere in between those lines I also got one that says:
[3441:0513/195919.377887:ERROR:stunport.cc(506)] sendto: [0x0000000b] Resource temporarily unavailable
I also looked into sysctl.conf and increased the values there. My current sysctl.conf looks like this:
fs.file-max=1048576
fs.inotify.max_user_instances=1048576
fs.inotify.max_user_watches=1048576
fs.nr_open=1048576
net.core.netdev_max_backlog=1048576
net.core.rmem_max=16777216
net.core.somaxconn=65535
net.core.wmem_max=16777216
net.ipv4.tcp_congestion_control=htcp
net.ipv4.ip_local_port_range=1024 65535
net.ipv4.tcp_fin_timeout=5
net.ipv4.tcp_max_orphans=1048576
net.ipv4.tcp_max_syn_backlog=20480
net.ipv4.tcp_max_tw_buckets=400000
net.ipv4.tcp_no_metrics_save=1
net.ipv4.tcp_rmem=4096 87380 16777216
net.ipv4.tcp_synack_retries=2
net.ipv4.tcp_syn_retries=2
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_wmem=4096 65535 16777216
vm.max_map_count=1048576
vm.min_free_kbytes=65535
vm.overcommit_memory=1
vm.swappiness=0
vm.vfs_cache_pressure=50
These values follow the suggestions here: https://gist.github.com/cdgraff/7920db287988463aafd7ea09eef6f9f0
It does not seem to help. I am still getting these errors, and I experience lagging on the other side.
Additional info: on Ubuntu, the Electron app connects to a Heroku server (Node.js), and the other side of the peer connection (a Chrome browser) also connects to it. The Heroku server acts as the signaling server to establish the WebRTC connection. Both have this configuration:
{'urls': 'stun:stun1.l.google.com:19302'},
{'urls': 'stun:stun2.l.google.com:19302'},
and also an additional TURN server from numb.viagenie.ca.
The connection is established, and within the first 10 seconds the quality is very high with no lagging at all. But after 10-20 seconds the lagging starts, and on the Ubuntu console I get these UDP errors.
The PC that Ubuntu is running on:
PROCESSOR / CHIPSET:
CPU Intel Core i3 (2nd Gen) 2310M / 2.1 GHz
Number of Cores: Dual-Core
Cache: 3 MB
64-bit Computing: Yes
Chipset Type: Mobile Intel HM65 Express
RAM:
Memory Speed: 1333 MHz
Memory Specification Compliance: PC3-10600
Technology: DDR3 SDRAM
Installed Size: 4 GB
Rated Memory Speed: 1333 MHz
Graphics
Graphics Processor Intel HD Graphics 3000
Could anyone please give me some hints or anything that could solve this problem?
Thank you
==============EDIT=============
Somewhere in my very large strace log I found these two lines:
7671 sendmsg(17, {msg_name(0)=NULL, msg_iov(1)=[{"CHILD_PING\0", 11}], msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 11
7661 <... recvmsg resumed> {msg_name(0)=NULL, msg_iov(1)=[{"CHILD_PING\0", 12}], msg_controllen=32, [{cmsg_len=28, cmsg_level=SOL_SOCKET, cmsg_type=SCM_CREDENTIALS, {pid=7671, uid=0, gid=0}}], msg_flags=0}, 0) = 11
On top of that, near where the error happens (at the end of the log file, just before I quit the application), I see the following in the log file:
https://gist.github.com/Mcdane/2342d26923e554483237faf02cc7cfad
First, to get an impression of what is happening in the first place, I'd look with strace. Start your application with
strace -e network -o log.strace -f YOUR_APPLICATION
If your application looks for another running process to hand the work over to, start it with parameters so it doesn't do that. For instance, for Chrome, pass in a --user-data-dir value that is different from your default.
Look for = 11 in the output file log.strace afterwards, and look at what happened before and after. This will give you a rough picture of what is happening, and you can exclude silly mistakes like sendto calls to 0.0.0.0 or the like. (For this reason, this is also very important information to include in a Stack Overflow question, for instance by uploading the output to a gist.)
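A concrete way to do that search (a sketch; note that failed calls typically show up in strace output as = -1 EAGAIN rather than a bare = 11, so searching for both is safer):
# show each matching call with two lines of context around it
grep -n -B2 -A2 -E '= 11|EAGAIN' log.strace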
It may also be helpful to use Wireshark or another packet capture program to get a rough overview of what is being sent.
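For instance, a minimal capture of just the UDP traffic could look like this (the interface name is taken from the error messages above; adjust it to your machine):
# capture all UDP traffic on the Wi-Fi interface into a file Wireshark can open
sudo tcpdump -i wlx0013ef503b67 udp -w webrtc.pcap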
Assuming you can confirm with strace that a valid send call has taken place, you can then analyze the error conditions further.
Error 11 is EAGAIN. The documentation of send says when this error is supposed to happen:
EAGAIN (...) The socket is marked nonblocking and the requested operation would block. (...)
EAGAIN (Internet domain datagram sockets) The socket referred to by sockfd had not previously been bound to an address and, upon attempting to bind it to an ephemeral port, it was determined that all port numbers in the ephemeral port range are currently in use. See the discussion of /proc/sys/net/ipv4/ip_local_port_range in ip(7).
Both conditions could apply.
The first will be obvious from the strace log if you trace the creation of the socket involved.
To exclude the second, you can run netstat -una (or, if you want to know the programs involved, sudo netstat -unap) to see which ports are in use (if you want Stack Overflow users to look into it, post the output on a gist or similar and link to it here). Your port range net.ipv4.ip_local_port_range=1024 65535 is not the standard 32768 60999; it looks like you have already attempted to do something about a lack of port numbers. It would help to trace back why you changed that parameter, and which conditions convinced you to do so.
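A quick way to compare the configured range against actual usage (a sketch; ss is the modern replacement for netstat from the iproute2 package):
# the configured ephemeral port range
sysctl net.ipv4.ip_local_port_range
# roughly count the UDP sockets currently open (includes one header line)
ss -u -a | wc -l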

How to set up gstreamer on a Raspberry Pi and a client for RTP with an H264-capable webcam?

In the context of an underwater ROV project, I'm trying to stream (via RTP) an HD video feed from a Raspberry Pi model 2. The camera is a Logitech C920 webcam, which I bought a while ago because it was the only H264-capable cam at the time.
It is also essential that I obtain the lowest possible latency, since the video will be used to pilot the ROV.
So I worked out some gstreamer-1.0 pipelines on my desktop computer with the C920 (a Dell workstation running Ubuntu 14.04), which worked fine, but I ran into problems when I tried to use the Raspberry Pi instead.
First, I tried (on the RPi) to capture the H264 camera flow to a matroska file:
#this sets the C920 cam to H264 encoding, framerate 30/1:
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1
gst-launch-1.0 -vvv v4l2src \
! video/x-h264, width=1920, height=1080, framerate=30/1 \
! queue max-size-buffers=1 \
! matroskamux \
! filesink location=/tmp/video.mkv
This worked perfectly. A little choppy, I guess due to the size of the buffer, but OK.
Then, I tried to put the flow on an rtp stream pointed at my laptop (MacBook Pro, Yosemite, gstreamer installed via brew).
# on the server (RPi):
gst-launch-1.0 -vvv v4l2src \
! video/x-h264,width=1920,height=1080,framerate=30/1 \
! rtph264pay \
! udpsink host=192.168.0.168 port=5000
# on the client (MacBookPro)
gst-launch-1.0 -vvv udpsrc port=5000 \
caps="application/x-rtp, media=(string)video, \
clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" \
! rtpjitterbuffer drop-on-latency=true latency=300 \
! rtph264depay \
! queue max-size-buffers=1 \
! matroskamux \
! filesink location=/tmp/video.mkv
There, I get nothing. I checked on the client with sudo tcpdump (port 5000 and udp) that it does receive UDP packets on port 5000, but that's all. Nothing gets recorded in video.mkv, which is "touched" but stays at 0 bytes.
After reading some related questions here, I tried many variations, including:
streaming the feed to the Pi itself. In that case, I sometimes get some weird output on the client window, which "tends to" disappear if I increase the size of the buffer:
** (gst-launch-1.0:2832): CRITICAL **: gst_rtp_buffer_map: assertion 'GST_IS_BUFFER (buffer)' failed
** (gst-launch-1.0:2832): CRITICAL **: gst_rtp_buffer_unmap: assertion 'rtp->buffer != NULL' failed
but still no output whatsoever.
other sinks: xvimagesink, autovideosink
rtpjitterbuffer: toggled drop-on-latency, changed latency value
queue: changed buffer size
Here's the output of the client:
gst-launch-1.0 -vvv udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtpjitterbuffer drop-on-latency=true latency=300 ! rtph264depay ! queue max-size-buffers=10 ! matroskamux ! filesink location=/tmp/movie.mkv
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
/GstPipeline:pipeline0/GstUDPSrc:udpsrc0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
/GstPipeline:pipeline0/GstRtpJitterBuffer:rtpjitterbuffer0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstRtpJitterBuffer:rtpjitterbuffer0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
/GstPipeline:pipeline0/GstRtpH264Depay:rtph264depay0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96
^C handling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:16:23.292637000
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
I hope that someone here can give me some clues about this problem. I should point out (if needed) that I'm still largely a beginner with gstreamer...
EDIT (12/11/16)
Following ensonic's advice I used GST_DEBUG="*:3". The client now reports its problem: it can't find the type of the video:
0:00:35.185377000 12349 0x7f878904bb20 WARN typefind gsttypefindelement.c:983:GstFlowReturn gst_type_find_element_chain_do_typefinding(GstTypeFindElement *, gboolean, gboolean):<typefind> error: The stream doesn't contain enough data.
0:00:35.185416000 12349 0x7f878904bb20 WARN typefind gsttypefindelement.c:983:GstFlowReturn gst_type_find_element_chain_do_typefinding(GstTypeFindElement *, gboolean, gboolean):<typefind> error: Can't typefind stream
ERROR: from element /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind: The stream doesn't contain enough data.
Additional debug info:
gsttypefindelement.c(983): GstFlowReturn gst_type_find_element_chain_do_typefinding(GstTypeFindElement *, gboolean, gboolean) (): /GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind:
Can't typefind stream
So, the client doesn't have enough data in the stream to determine its type...
How should I change that? I don't understand what's lacking!
A few comments:
1) On the client, use "gst-launch-1.0 -e ..." so that Ctrl-C sends an EOS and the file gets finalized.
2) On the RasPi, add "gdppay" before udpsink, and on the client add "gdpdepay" after udpsrc. This will transport events and queries, since you don't use RTSP.
3) On the client, try running with GST_DEBUG="*:3" to see if there are any warnings. Also try running with " ! decodebin ! autovideosink" to see if you get any images (a full test pipeline along these lines is sketched below).
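Putting point 3 together with the original client command, such a test pipeline could look like this (a sketch: the caps are copied from the question, and h264parse before decodebin is an extra safety net rather than something the comment prescribes):
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtpjitterbuffer latency=300 ! rtph264depay ! h264parse ! decodebin ! autovideosink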
Following ensonic's comments (see above), I finally managed to get both pipelines working.
The trick was to use the gdppay/gdpdepay elements instead of rtph264pay/rtph264depay.
On the server-side (Raspberry Pi)
#set the Logitech C920 cam properly (1920x1080, 30 fps)
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1 --set-parm=30
# exec the following pipeline (only after gstreamer runs on the client!):
gst-launch-1.0 -vvv -e v4l2src \
! video/x-h264,width=1920,height=1080,framerate=30/1 \
! gdppay \
! udpsink host=192.168.0.168 port=5000
On the client side (MacBookPro)
# launch the following command before executing server pipeline:
gst-launch-1.0 -e -vvv udpsrc port=5000 \
caps="application/x-gdp, streamheader=(buffer)< [insert long header here] >" \
! gdpdepay \
! video/x-h264, width=1920, height=1080, pixel-aspect-ratio=1/1, framerate=30/1 \
! decodebin \
! queue max-size-buffers=10 \
! autovideosink sync=false async=false
Results
CPU load
The performance of the C920 on the Raspberry Pi is remarkable. At 1920x1080 resolution and 30 fps, the total CPU load is less than 3%. For comparison, when I encode the equivalent raw YUV stream on the Raspberry Pi, the load climbs to 96%. The load on the client side (my 2011 Intel i5 MacBook Pro) is about 25%.
video latency
I've tested the previous pipelines over 20 minutes continuously and more than 10 times overall. Each time I get a very reproducible latency of ~250 ms, be it over LAN or WLAN. Changing the size of the queues' buffers doesn't help much. Considering what one can read on the net regarding streaming latencies, I think that's quite acceptable, and sufficient for piloting a vehicle at low speed.
start of the stream
Sometimes, just after launching the pipeline on the server-side, many packets get lost due to the following error:
gst_h264_parse_handle_frame:<h264parse0> broken/invalid nal Type: 1 Slice, Size: xxxx will be dropped
but these errors disappear very quickly, maybe once the next key frame is received (one problem: I can't easily change the encoding parameters of the cam).
Other tips and remarks
launching order
As stated above, be sure to launch the server's pipeline after the client's one. Otherwise the negotiation fails.
getting the header buffer
To get the (very long) header buffer of the server's stream, execute the server's pipeline once with the -vvv option, kill it, then copy/paste the header into the caps of the client's pipeline.
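One way to fish that header out of the verbose output (a sketch: fakesink stands in for udpsink so the pipeline runs without a client, and grepping for streamheader assumes the printed application/x-gdp caps contain that field, as they do in the client caps above):
gst-launch-1.0 -v v4l2src ! video/x-h264,width=1920,height=1080,framerate=30/1 ! gdppay ! fakesink 2>&1 | grep streamheader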
gstreamer and OS versions used
Raspberry Pi 2, Raspbian, 4.1.19-v7+, gstreamer 1.2.0 (http://packages.qa.debian.org/gstreamer1.0)
Client :
MacBook Pro 2011, i5, Apple OSX Yosemite, gstreamer 1.10.1 (installed via brew)
Many thanks again to ensonic, who had the idea to switch to gdp!

Sony Camera Remote API: how can I show/use liveview stream data with VB.net (using a Sony QX1)?

I'm programming a small piece of software in VB.net for the remote use of a Sony camera (I use a QX1, but the model should be irrelevant). I can take pictures by sending JSON commands to the camera, and I can also start the liveview stream with the "startLiveview" method wrapped in a JSON command. In return I get the address to download the livestream from, like http://192.168.122.1:8080/liveview/liveviewstream (wrapped in a JSON answer).
According to the Sony CameraRemote API reference, this is a stream which contains some header data and the individual JPEG data. But it does not seem to be an MJPEG stream. I can paste the livestream link into my browser and it starts downloading the livestream indefinitely. I could not show the stream with an MJPEG stream player like VLC.
My question is: how can I filter out the JPEG data with VB.net, or how can I show the livestream?
A similar question was posted before, but without any reply. Therefore I'm trying it again.
This is my way: I use ffserver to make the video streamable.
This is my config for ffserver (server.conf):
Port 8090
BindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 10000
CustomLog -
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 1G
ACL allow 127.0.0.1
</Feed>
<Stream cam.webm>
Feed feed1.ffm
Format webm
VideoCodec libvpx
VideoSize vga
VideoFrameRate 25
AVOptionVideo flags +global_header
StartSendOnKey
NoAudio
preroll 5
VideoBitRate 400
</Stream>
<Stream status.html>
Format status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Stream>
And then I run the ffserver with that config:
ffserver -f server.conf
Then encode the video from the Sony liveview and broadcast it via ffserver:
ffmpeg -i http://192.168.122.1:8080/liveview/liveviewstream -vcodec libvpx -fflags nobuffer -an http://127.0.0.1:8090/feed1.ffm
After that, you can play the liveview stream from this address:
localhost:8090/cam.webm
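For example (assuming VLC is installed; any WebM-capable player or browser should work):
vlc http://localhost:8090/cam.webm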
(I use my laptop with Linux, in a terminal.)
Install GStreamer:
sudo apt-get install libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio
Set the parameters of your camera to enable control via smartphone; for example, the SSID of my camera on my network is DIRECT-dpC3:DSC-RX100M5A.
Use Wi-Fi to connect your computer directly to your camera.
Tell your camera to begin liveview with this command:
curl http://192.168.122.1:10000/sony/camera -X POST -H 'Content-Type: application/json' --data '{ "method": "startLiveview", "params": [], "id": 1, "version": "1.0"}'
Note that the camera's response is a URL; mine is:
{"id":1,"result":["http://192.168.122.1:60152/liveviewstream?%211234%21%2a%3a%2a%3aimage%2fjpeg%3a%2a%21%21%21%21%21"]}
Tell gstreamer to use this URL:
gst-launch-1.0 souphttpsrc location=http://192.168.122.1:60152/liveviewstream?%211234%21%2a%3a%2a%3aimage%2fjpeg%3a%2a%21%21%21%21%21 ! jpegdec ! autovideosink
Enjoy ;-)
I tried to use ffmpeg to process the streaming, and succeeded in saving it as an FLV file.
I use this command in a terminal (I use UNIX):
ffmpeg -i http://192.168.122.1:8080/liveview/liveviewstream -vcodec flv -qscale 1 -an output.flv
Maybe you can modify or optimize it as needed.
In VLC, adding .mjpg to the URL works for me; try it, wait a second, and it should play: http://192.168.122.1:8080/liveview/liveviewstream.mjpg

VLC does not play RTP video stream if the UDP(RTP) packet has been forwarded

I've tried everything and failed. It could not be more weird. My setup is like this:
An rtsp streaming server. (Server A)
A forward server (Server B)
PJNATH Lib and RTP.NET Lib
The final Client (Client C)
A, B, and C are on the same LAN.
B requests an RTP stream from A by sending an RTSP request, and gets the stream. VLC on B is able to play the stream via an SDP file. Now, playing the same stream on C:
will SUCCEED when using the RTP.NET lib on B to receive the stream from A (by letting RTP.NET listen on the UDP port on localhost) and forward it (by setting the RTP.NET destination) to C
will FAIL when using PJNATH between B and C (C sends the UDP packets received in the PJNATH recvdata callback to VLC without modifying a bit). I'm sure of the following facts from sniffing packets with Wireshark (example captures to compare are sketched after the log below):
PJNATH identifies that B and C are on the same LAN and sends data directly from B to C
Randomly picked UDP packets show that the content of A->B is exactly the same as that of C->(VLC on C). This content CAN be played on B but CANNOT be played on C with the same version of VLC.
VLC on C is able to receive data but displays nothing. The log sticks at "Decoder buffering done in 0 ms"
main debug: Buffering 80%
main debug: Buffering 83%
main debug: Buffering 86%
main debug: Buffering 91%
main debug: Buffering 95%
main debug: Stream buffering done (1002 ms in 1195 ms)
main debug: Decoder buffering done in 0 ms
(nothing more...)
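To make that comparison systematic, captures like these could be recorded and compared packet by packet (a sketch; the interface names and port 5000 are assumptions, substitute the port from your SDP):
# on B: record the stream as it arrives from A
sudo tcpdump -i eth0 udp port 5000 -w a_to_b.pcap
# on C: record what is handed to VLC on the loopback interface
sudo tcpdump -i lo udp port 5000 -w c_to_vlc.pcap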
My goal is to make VLC play the stream on C by forwarding the stream from B to C via PJNATH.
I'm almost breaking my fingers over this problem.
And I don't understand why VLC on B and on C reads the same SDP (identical except for the UDP port to listen on) and receives the same UDP data, yet acts differently.

tftp retry timeout exceeded

My issue is that the retry count is exceeded when I download a kernel image to an Econa processor board (Econa is an ARM-based processor) via TFTP, as shown below:
CNS3000 # tftp 0x4000000 bootpImage.cns3420.uclibc
MAC PORT 0 : Initialize bcm53115M
MAC PORT 2 : Initialize RTL8211
TFTP from server 192.168.0.219; our IP address is 192.168.0.112
Filename 'bootpImage.cns3420.uclibc'.
Load address: 0x4000000
Loading: T T T T T T T T T T
Retry count exceeded; starting again
The following points may help you find the cause of this error.
The ping response is OK:
CNS3000 # ping 192.168.0.219
MAC PORT 0 : Initialize bcm53115M
MAC PORT 2 : Initialize RTL8211
host 192.168.0.219 is alive
When I tried to verify that TFTP is running, I did the following; it seems the TFTP server is working. I placed a small file in /tftpboot:
# echo "Hello, embedded world" > /tftpboot/hello.txt
Then I fetched it from localhost:
# tftp localhost
tftp> get hello.txt
Received 23 bytes in 0.1 seconds
tftp> quit
Please note that there is no firewall or SELinux on my machine.
Please verify that the locations of these files are OK: I have placed the kernel image file bootpImage.cns3420.uclibc in /tftpboot, and the TFTP service file is located at /etc/xinetd.d/tftp.
My TFTP service file is:
service tftp
{
socket_type =dgram
protocol=udp
wait=yes
user=root
server=/usr/sbin/in.tftpd
server_args=-s /tftpboot -b 512
disable=no
per_source=11
cps=100 2
flags=ipv4
}
The printenv response in U-Boot is:
CNS3000 # printenv
bootargs=root=/dev/mtdblock0 mem=256M console=ttyS0
baudrate=38400
ethaddr=00:53:43:4F:54:54
netmask=255.255.0.0
tftp_bsize=512
udp_frag_size=512
mmc_init=mmcinit
loading=fatload mmc 0 0x4000000 bootpimage-82511
running=go 0x4000000
bootcmd=run mmc_init;run loading;run running
serverip=192.168.0.219
ipaddr=192.168.0.112
bootdelay=5
port=1
bootfile=/tftpboot/bootpImage.cns3420.uclibcl
stdin=serial
stdout=serial
stderr=serial
verify=n
Environment size: 437/4092 bytes
Regards
Waqas
Loading: T T T T T T T T T T
means there is no transfer at all. This can be caused by a wrong interface setting, i.e. U-Boot is configured for 100 Mbit full duplex and you try to connect via half duplex or 10 Mbit (or some mix of the two). Another point is the MTU size, which should be 1500 (U-Boot cannot handle packet fragmentation). A way to pin the server's link settings is sketched below.
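On the Linux server side, the link can be pinned with ethtool to match U-Boot (a sketch; eth0 and 100 Mbit full duplex are assumptions, match them to your board's configuration):
# show the currently negotiated speed and duplex
sudo ethtool eth0
# force 100 Mbit full duplex with autonegotiation off
sudo ethtool -s eth0 speed 100 duplex full autoneg off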
Hint for Windows/VMware users:
TFTP timeouts from U-Boot are caused by Windows IP forwarding.
1) If you have a home network: switch it off.
2) If you are running the Routing and Remote Access service: shut down the service.
3) Check the registry for IP forwarding:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\IPEnableRouter
Set the value to 0 (and maybe reboot).