I need to replay GPS data collected by gpsd for testing. I know that I can feed gpsd over TCP, for example: gpsd -N tcp://127.0.0.1:6000, but I can't find the right format for feeding it. What format should I use?
If you want to replay GPS data for testing, you can use the gpsfake tool from the gpsd suite: https://gpsd.gitlab.io/gpsd/gpsfake.html
Regarding the log format that it can use, the documentation says:
Logfiles for the use with gpsfake can be retrieved using gpspipe, gpscat, or cgps from the gpsd distribution, or any other application which is able to create a compatible output.
So I would record the log from a real GNSS receiver with
gpspipe -R > gps.log
and later use gpsfake to replay it during testing without a GNSS receiver.
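A minimal replay sketch (the log file name just matches the recording command above; gpsfake starts its own gpsd instance fed from the log):
gpsfake gps.log
# in another terminal, point any gpsd client at it, e.g.:
gpspipe -w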
edit: The gpspipe -R command will save an NMEA log if the receiver outputs NMEA messages. It can later be used for testing with gpsfake or another tool. Of course, you can also kill gpsd and listen directly to the serial port to record the NMEA.
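If you go the direct-serial route, a rough sketch (the device path and baud rate are assumptions for a typical USB receiver) would be:
stty -F /dev/ttyUSB0 raw 9600
cat /dev/ttyUSB0 > gps.log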
The problem is: sometimes tcpdump sees the reception of a UDP packet held back until the next UDP packet arrives, although the network tap device shows that it went through the cable without delay.
Scenario: My Profinet stack on Linux (running in user space) has a cyclic connection on which it receives and sends Profinet protocol packets every 4 ms (via raw sockets). About every 30 ms it also receives UDP packets in another thread on a UDP socket and replies to them immediately, as required by that protocol. CPU load is around 10%. Sometimes such received UDP packets seem to get stuck in the network driver. After about 2 seconds the next UDP packet comes in, and then both the missed UDP packet and the new one are received. There are no dropped packets.
My measurement setup:
I use tcpdump -i eth0 --time-stamp-precision=nano --time-stamp-type=adapter_unsynced -w /tmp/tcpdump.pcap to record the UDP traffic to a RAM disk file.
At the same time I use a network tap device to record the traffic.
Questions:
1. How can I find out where the delay comes from (or is it a known effect)?
2. What does the timestamp that tcpdump sets on each packet tell me? Which OSI layer does it refer to; in other words, when exactly is it taken?
Topology: "embedded device with Linux and eth0" <---> tap-device <---> PLC. The tcpdump program is running on the embedded device. The tap device listens on the cable. The actual Profinet connection is between the PLC and the embedded device. A PC is connected to the tap device to record what it captures.
Wireshark (via tap and tcpdump): see here (packet no. 3189 in tcpdump.pcap)
It was a bug in the Freescale Fast Ethernet driver (fec_main.c), which NXP's support has now fixed.
The actual answer (to the question "How can I find out where the delay comes from?") is: one has to build a Linux kernel with tracing enabled, patch the driver code with tracepoints, and then analyse the resulting trace with the Linux developer tool trace-cmd. It is a very complicated thing, but I am very happy it is fixed now:
trace-cmd record -o /tmp/trace.dat -p function -l fec_enet_interrupt -l fec_enet_rx_napi -e 'fec:fec_rx_tp' tcpdump -i eth0 --time-stamp-precision=nano --time-stamp-type=adapter_unsynced -w /tmp/tcpdump.pcap
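For completeness, the recorded trace can then be inspected with trace-cmd report (the path matches the recording command above; the grep pattern is only an illustration):
trace-cmd report -i /tmp/trace.dat | less
# or narrow it down to the RX path:
trace-cmd report -i /tmp/trace.dat | grep -E 'fec_enet_rx_napi|fec_rx'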
I can capture packets fine using tcpdump, as the source eth1 port is connected to a Cisco switch SPAN port, and I can filter using tcpdump options (at this stage I am only interested in DNS packets to and from a particular IP). Rather than writing to a file, I want to simply dump the filtered raw (DNS) packets onto eth2 (which could be unnumbered or numbered). The reason for this is that a third party needs access to the raw data, but I need to filter out non-DNS traffic (otherwise I'd just let them connect to the switch SPAN port).
Preferably I would also like to run the process continuously. Is there an easy way to direct the tcpdump output to an unnumbered eth interface, or is there a better way of achieving this?
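One possible sketch, assuming your tcpreplay build accepts a capture stream on stdin (the filter and the placeholder IP 192.0.2.10 are only illustrations), is to pipe a packet-buffered live capture straight onto eth2:
tcpdump -i eth1 -U -w - 'port 53 and host 192.0.2.10' | tcpreplay -i eth2 -
# Wrap this in a systemd unit or similar supervisor if it has to run continuously.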
Is there a way for a client to get notified about failover events in the Redis cluster? If so, which client library would support this? I am currently using Jedis but have the flexibility to switch to any other Java client.
There are two ways I can think of to check this. One is to grep for the master nodes in the cluster, keeping track of their IDs and ports; if the port changes for any of them, a failover has happened.
$ redis-cli -p {PORT} cluster nodes | grep master
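A rough polling sketch of that first idea (the port and interval are placeholders; it just diffs the set of master node IDs and addresses):
PORT=7000
prev=$(redis-cli -p "$PORT" cluster nodes | awk '/master/ {print $1, $2}' | sort)
while sleep 5; do
  cur=$(redis-cli -p "$PORT" cluster nodes | awk '/master/ {print $1, $2}' | sort)
  if [ "$cur" != "$prev" ]; then
    echo "master layout changed - possible failover"
    prev=$cur
  fi
done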
Another, less robust, way is to use the consistency-checker Ruby script, which starts showing write errors in its output when a replica is trying to take over its master's role; you can monitor that output and send notifications based on it.
Sentinel (http://redis.io/topics/sentinel) can monitor the cluster members and send a publish/subscribe notification upon failure. The link contains a more in-depth explanation and a tutorial.
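As a quick illustration (assuming Sentinel is running on its default port 26379), you can watch the relevant events yourself before wiring them into a client:
redis-cli -p 26379 subscribe +switch-master +sdown +odown
# +switch-master is published when a failover promotes a new master.
Jedis's JedisSentinelPool, for instance, listens for +switch-master internally and repoints the pool at the newly promoted master.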
So we have an application that makes UDP calls and sends packets. However, since no responses come back for UDP calls, how can we ensure that the service is up, that the port is open, and that things are getting stored?
The only thought we have right now is to send in test packets and check that they are getting saved to the db.
So my overall question is: is there a better, easier way to ensure that our UDP calls are succeeding?
On the listening host, you can validate that the port is open with netstat. For example, if your application uses UDP port 68, you could run:
# Grep for :<port> from netstat output.
$ netstat -lnu | grep :68
udp 0 0 0.0.0.0:68 0.0.0.0:*
You could also send some test data to your application and then check your database to verify that the fixture data made it in. That doesn't mean it always will, just that things are working at the time of the test.
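For example, a minimal sketch of such a test (host, port, and payload are placeholders; this assumes netcat is available):
echo -n "probe-$(date +%s)" | nc -u -w1 127.0.0.1 68
# ...then query the database for that payload to confirm it was stored.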
Ultimately, the problem is that UDP packets are best-effort, and not guaranteed. So unless you can configure your logging platform to send some sort of acknowledgment after data is received and/or written, then you can't guarantee anything. The very nature of UDP is that it leaves acknowledgments (if any) to the application layer.
We took a different approach and are checking to make sure the calls made it to the db. It's easy enough to query a table and ensure records are in there; if there are no recent ones, we know something is wrong. CodeGnome had a good idea, just not the route we went. Thanks!
I'm running an SSH tunnel with OpenSSH on Linux using the Python subprocess module.
I want to find out how many bytes were sent and received from that SSH tunnel.
How can I find it out?
ssh(1) provides no mechanism for this, and neither does the OS. Packet sniffing with e.g. tcpdump(1) is an option, but it would probably require root privileges, and would only be approximate if ssh(1) connections to the remote peer are also made outside of your application. iptables accounting has similar tradeoffs, but would probably incur much less overhead than tcpdump(1).
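A sketch of the iptables accounting idea (the peer address 203.0.113.10 and port 22 are assumptions; rules with no -j target simply match and count packets without affecting them):
iptables -I OUTPUT -p tcp -d 203.0.113.10 --dport 22
iptables -I INPUT  -p tcp -s 203.0.113.10 --sport 22
# Later, read the byte counters:
iptables -L -n -v -x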
If you don't mind being very approximate, you could keep track of all the data you send to and read from your subprocess. ssh(1) will compress data before encrypting it, so you might over-estimate the amount of data sent, but ssh(1) will also have some overhead for re-keying, channel control, message authenticity, and so on, so it might even come close for 'average' data.
Of course, if a router along the way decides to drop every other packet, your TCP stack will send twice the data, maybe more.
Very approximate indeed.
You could measure the raw ssh transfer with something like pv:
ssh user#remote -t "cat /dev/urandom" | pv > /dev/null
ssh user#remote -t "pv > /dev/null" < /dev/urandom
(You could try /dev/zero instead, but if you are using ssh compression you'd get a very unrealistic transfer rate.)
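If you want numbers for real data rather than a synthetic stream, the same trick works on an actual transfer (the file name and destination are placeholders):
pv some-large-file | ssh user@remote 'cat > /dev/null'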