How to feed NMEA data to gpsfake to run with gpsd

I have an NMEA test data file that contains a few GPS points:
$GPGGA,170358.132,5230.704,N,01324.262,E,1,12,1.0,0.0,M,0.0,M,,*63
$GPGSA,A,3,01,02,03,04,05,06,07,08,09,10,11,12,1.0,1.0,1.0*30
$GPRMC,170358.132,A,5230.704,N,01324.262,E,4152.6,050.1,150822,000.0,W*4B
$GPGGA,170359.132,5231.634,N,01325.380,E,1,12,1.0,0.0,M,0.0,M,,*6D
$GPGSA,A,3,01,02,03,04,05,06,07,08,09,10,11,12,1.0,1.0,1.0*30
$GPRMC,170359.132,A,5231.634,N,01325.380,E,2369.6,047.3,150822,000.0,W*4D
$GPGGA,170400.132,5232.183,N,01325.977,E,1,12,1.0,0.0,M,0.0,M,,*6C
$GPGSA,A,3,01,02,03,04,05,06,07,08,09,10,11,12,1.0,1.0,1.0*30
$GPRMC,170400.132,A,5232.183,N,01325.977,E,1313.3,288.5,150822,000.0,W*40
I am trying to feed these to gpsfake to test it with a gpsd demo script I have. I have tried running gpsfake alone, piping it into gpsd, and gpsd alone, but no luck. Mind you, this is the first time I have used gpsd.
I have tried running gpsd with my script and a physical GPS dongle; it runs perfectly fine, and I get the following output from my code:
~$ /usr/sbin/gpsd -n -G -b /dev/ttyUSB0 &
[1] 4514
~$ ./demo
WARNING: Opening interface localhost:gpsd
WARNING: Entering GPS poll loop (2000000us)
Speed: 0.017
Speed: 0.018
Speed: 0.018
Speed: 0.021
speed: 0.046976
{
"timestamp": 1660331437857,
"latitude": 33.6973963,
"longitude": -117.7707148,
"eph": 4.563,
"speed": 0.047,
"eps": 0.460,
"track": 0.000,
"satellites_used": 8,
"hdop": 0.000000,
"vdop": 0.000000
}
Speed: 0.021
[1]+ Done /usr/sbin/gpsd -n -G -b /dev/ttyUSB0
Can someone please tell me how I can feed the fake data to gpsfake instead of using the GPS dongle? Thanks!
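The usual approach is to pass the log file directly to gpsfake (for example `gpsfake test.nmea`; the file name is an assumption here): gpsfake spawns its own gpsd instance, feeds it the sentences through a pseudo-tty, and the demo script can then connect to localhost as before. One common gotcha is that gpsd discards sentences whose checksum is wrong, which makes a bad log look like "no data". A minimal sketch for sanity-checking the log before replaying it:

```python
# Sketch: validate NMEA checksums before replaying a log with gpsfake.
# gpsd typically discards sentences with a wrong checksum, so a silent
# demo is often just a bad log file.  The checksum is the XOR of all
# characters between '$' and '*', written as two uppercase hex digits.

def nmea_checksum(payload: str) -> str:
    """Checksum of the sentence body (text between '$' and '*')."""
    csum = 0
    for ch in payload:
        csum ^= ord(ch)
    return f"{csum:02X}"

def validate_log(path: str) -> list[str]:
    """Return the sentences whose stated checksum does not match."""
    bad = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line.startswith("$") or "*" not in line:
                continue
            payload, stated = line[1:].rsplit("*", 1)
            if nmea_checksum(payload) != stated.upper():
                bad.append(line)
    return bad
```

Running `validate_log("test.nmea")` on the file above and fixing any flagged lines first tends to save a lot of head-scratching.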

Related

How to call AAC audio streaming functionality via ssh

I have a USB audio dongle connected to the USB port of a QNAP NAS. On the NAS I have a script called "radio" that streams internet radio through the USB audio dongle to a soundbar. The whole thing is controlled by a Raspberry Pi (running the Domoticz home automation system): the RPi sends ssh commands to the NAS to run the "radio" script. Everything works fine as long as it is an HTTP MP3 stream; in that case I use mpg123, which converts MP3 to WAV. For an AAC stream, I have to use ffmpeg to convert AAC to the WAV format needed by aplay. Unfortunately, the set of commands available on the NAS is very limited, and I can only use ffmpeg and aplay. If I run the "radio" script directly from the console on the NAS, everything works fine. However, when I run it remotely from the RPi, MP3 streams play correctly but AAC does not. Below is the command I am currently using in the "radio" script on the NAS (after many attempts). When I run it from the NAS console, everything works fine; when I run it remotely over SSH from the RPi, both ffmpeg and aplay are launched, but nothing is played on the NAS.
....
[ ! -e /dev/shm/pipe ] && $path_bin/mkfifo /dev/shm/pipe
....
ffmpeg -y -i "$url" -vn -acodec pcm_s16le -ar 44100 -f wav /dev/shm/pipe & $path_bin/aplay -D sysdefault:Device --file-type raw --format=cd /dev/shm/pipe
....
If I run the "radio" script from the NAS console, ffmpeg starts to display a counter of received/transcoded kbits. When I call it remotely, the counter does not start on the RPi console. ffmpeg is probably running, but it does not transcode the stream. Any idea how to get the radio streaming to run properly?
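One property of the `/dev/shm/pipe` FIFO used here is worth keeping in mind: opening a FIFO for writing blocks until some process opens it for reading, and vice versa. So if either ffmpeg or aplay fails to start in the non-interactive ssh environment, the surviving process hangs silently on `open()` rather than erroring out. A minimal Python sketch of that semantics:

```python
# Sketch: FIFO open() blocks until both a reader and a writer attach.
# This is why the ffmpeg & aplay pair above must both start successfully:
# if one of them dies under ssh, the survivor blocks forever on open().
import os
import tempfile
import threading

fifo = os.path.join(tempfile.mkdtemp(), "pipe")
os.mkfifo(fifo)

received = []

def reader():
    # open() here blocks until the writer below opens its end
    with open(fifo, "rb") as f:
        received.append(f.read())

t = threading.Thread(target=reader)
t.start()

with open(fifo, "wb") as f:  # blocks until the reader has opened the FIFO
    f.write(b"audio bytes")

t.join()
print(received[0])  # b'audio bytes'
```

Checking (e.g. with `ps` on the NAS) whether both ends of the pipe are actually open when launched over ssh is a good first diagnostic.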
EDIT-1
stderr output:
ffmpeg version 3.3.6 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 4.9.2 (Debian 4.9.2-10)
configuration: --enable-cross-compile --arch=i686 --target-os=linux --disable-yasm --disable-static --enable-shared --enable-gpl --enable-libmp3lame --disable-libx264 --enable-libsoxr --enable-version3 --enable-nonfree --enable-openssl --disable-decoder=ac3 --disable-decoder=ac3_fixed --disable-decoder=eac3 --disable-decoder=dca --disable-decoder=truehd --disable-encoder=ac3 --disable-encoder=ac3_fixed --disable-encoder=eac3 --disable-encoder=dca --disable-decoder=hevc --disable-decoder=hevc_cuvid --disable-encoder=hevc_nvenc --disable-encoder=nvenc_hevc --disable-decoder=h264 --disable-decoder=h264_cuvid --disable-encoder=libx264 --disable-encoder=libx264rgb --disable-encoder=h264_nvenc --disable-encoder=nvenc --disable-encoder=nvenc_h264 --disable-decoder=mpeg2video --disable-decoder=mpegvideo --disable-decoder=mpeg2_cuvid --disable-encoder=mpeg2video --disable-decoder=mpeg4 --disable-decoder=mpeg4_cuvid --disable-decoder=msmpeg4v1 --disable-decoder=msmpeg4v2 --disable-decoder=msmpeg4v3 --disable-encoder=mpeg4 --disable-encoder=msmpeg4v2 --disable-encoder=msmpeg4v3 --disable-decoder=mvc1 --disable-decoder=vc1 --disable-decoder=vc1_cuvid --disable-decoder=vc1image --disable-decoder=aac --disable-decoder=aac_fixed --disable-decoder=aac_latm --disable-encoder=aac --extra-ldflags='-L/root/daily_build/64_41/4.5.1/LinkFS/usr/lib -L/root/daily_build/64_41/4.5.1/Model/TS-X72/build/RootFS/usr/local/medialibrary/lib -Wl,--rpath -Wl,/usr/local/medialibrary/lib' --extra-cflags='-I/root/daily_build/64_41/4.5.1/LinkFS/usr/include -I/root/daily_build/64_41/4.5.1/Model/TS-X72/build/RootFS/usr/local/medialibrary/include -D_GNU_SOURCE -DQNAP' --prefix=/root/daily_build/64_41/4.5.1/Model/TS-X72/build/RootFS/usr/local/medialibrary
libavutil 55. 58.100 / 55. 58.100
libavcodec 57. 89.100 / 57. 89.100
libavformat 57. 71.100 / 57. 71.100
libavdevice 57. 6.100 / 57. 6.100
libavfilter 6. 82.100 / 6. 82.100
libswscale 4. 6.100 / 4. 6.100
libswresample 2. 7.100 / 2. 7.100
libpostproc 54. 5.100 / 54. 5.100
Finally I found a workaround. It's not an elegant solution, but it works.
I used expect and created a script on the RPi that triggers the SSH streaming to the NAS. Script content below.
#!/usr/bin/expect -f
set channel [lindex $argv 0]
set timeout 5
spawn /usr/bin/ssh -p669 -x -q -i /home/debian/.ssh/id_debian admin@192.168.0.7
expect "*# "
send -- "/share/homes/media/Pobrane/RADIO/radio $channel\r"
expect "*# "
send -- "exit\r"
expect eof
That's all.

ffmpeg always gives an input/output error the first time I run the command

I'm using ffmpeg to push a Raspberry Pi video feed (CSI camera) to an nginx RTMP server, and nginx then pushes it to YouTube.
My problem is that every time I run the ffmpeg command, it gives me an input/output error. It then works fine when I run the exact same ffmpeg command a second time.
How do I resolve this problem?
I want to start the ffmpeg command from a script file and put the script in crontab so that it can start the live stream automatically, but this error makes that impossible.
My ffmpeg command is below (with the real domain name changed to mydomain.com):
ffmpeg -thread_queue_size 512 -f v4l2 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -input_format yuyv422 -video_size 1280x720 -framerate 30 -i /dev/video0 -vf eq=gamma=1.5:saturation=1.3 -c:v h264_omx -b:v 20480K -vsync 1 -g 16 -f flv rtmp://mydomain.com:1935/live/
the error log:
Input #1, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 474200.421802, bitrate: 442368 kb/s
Stream #1:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 1280x720, 442368 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
rtmp://rtmp.simonliu.space:1935/live/: Input/output error
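Until the root cause of the first-run I/O error is found, a pragmatic workaround for the crontab use case is simply to retry the command when it exits non-zero. A small Python wrapper sketch (the retry count and delay are arbitrary choices, not tuned values):

```python
# Sketch: retry a flaky command a few times before giving up.
# Intended to wrap the ffmpeg invocation above in the crontab script.
import subprocess
import time

def run_with_retries(cmd, attempts=3, delay=2.0):
    """Run cmd until it exits 0 or attempts are exhausted.
    Returns the final exit code."""
    rc = 1
    for _ in range(attempts):
        rc = subprocess.call(cmd)
        if rc == 0:
            return 0
        time.sleep(delay)
    return rc

# Usage idea (flags taken verbatim from the question, not re-verified):
# run_with_retries(["ffmpeg", "-thread_queue_size", "512", "-f", "v4l2", ...])
```

The same idea works as a shell `until` loop if Python is not available on the Pi image.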

TensorFlow Serving batching configuration is ineffective

docker run command
docker run -t --rm -p 8500:8500 -p 8501:8501 \
  -v /home/zhi.wang/tensorflow-serving/model:/models \
  -e MODEL_NAME=beidian_cart_ctr_wdl_model tensorflow/serving:1.12.0 \
  --enable_batching=true --batching_parameters_file=/models/batching_parameters.txt &
batching_parameters.txt
num_batch_threads { value: 40 }
batch_timeout_micros { value: 5000 }
max_batch_size { value: 20000000 }
server configuration
40 CPUs and 64 GB memory
test result
with 1 thread, one predict call costs ~30 ms
with 40 threads, one predict call costs ~300 ms
cpu usage
CPU usage inside the Docker container only reaches about 300%, while host CPU usage stays low
java test script
TensorProto.Builder tensor = TensorProto.newBuilder();
tensor.setTensorShape(shapeProto);
tensor.setDtype(DataType.DT_STRING);
// batch size is 200
for (int i = 0; i < 200; i++) {
    tensor.addStringVal(example.toByteString());
}
I also faced the same problem, and I found it may be a network I/O issue; you can use dstat to monitor your network interface.
I also found that example.toByteString() costs a lot of time.
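The last observation suggests an easy fix: since the loop above adds the same value 200 times, the serialization can be hoisted out of the loop and the result reused. A sketch of the pattern in Python, with `json.dumps` standing in for protobuf serialization (the names and payload are illustrative, not from the asker's model):

```python
# Sketch: hoist an invariant, expensive serialization out of a hot loop.
# Mirrors the Java snippet above, where example.toByteString() is called
# 200 times with the same result; "serialize" stands in for protobuf.
import json

def serialize(obj) -> bytes:
    return json.dumps(obj, sort_keys=True).encode()

example = {"feature": [1, 2, 3]}

# Before: serialize on every iteration (what the Java loop does)
batch_slow = [serialize(example) for _ in range(200)]

# After: serialize once, append the cached bytes
cached = serialize(example)
batch_fast = [cached for _ in range(200)]

assert batch_slow == batch_fast  # same payload, 1 serialization instead of 200
```

In the Java snippet, this means computing `ByteString bs = example.toByteString();` once before the loop and calling `tensor.addStringVal(bs)` inside it.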

How to get graphical GUI output and user touch / keyboard / mouse input in a full system gem5 simulation?

Hopefully with fs.py, but not necessarily.
For example, I have an x86 BIOS example that draws a line on the screen in QEMU, and I'd like to see that work on gem5 too.
Interested in all archs.
https://www.mail-archive.com/gem5-users@gem5.org/msg15455.html
arm
I have managed to get an image on the screen for ARM.
Here is a highly automated setup which does the following steps:
grab the ARM gem5 Linux kernel v4.15 fork from https://gem5.googlesource.com/arm/linux/ and use the config file arch/arm/configs/gem5_defconfig from there.
The fork is required, I believe, for the commit "drm: Add component-aware simple encoder" (https://gem5.googlesource.com/arm/linux/), which adds the required option CONFIG_DRM_VIRT_ENCODER=y.
The other required option is CONFIG_DRM_HDLCD=y, which enables the HDLCD, the ARM IP block that manages the display: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0541c/CHDBAIDI.html
run gem5 at 49f96e7b77925837aa5bc84d4c3453ab5f07408e
with a command of type:
M5_PATH='/data/git/linux-kernel-module-cheat/out/common/gem5/system' \
'/data/git/linux-kernel-module-cheat/out/common/gem5/build/ARM/gem5.opt' \
--debug-file=trace.txt \
-d '/data/git/linux-kernel-module-cheat/out/arm/gem5/m5out' \
'/data/git/linux-kernel-module-cheat/gem5/gem5/configs/example/fs.py' \
--disk-image='/data/git/linux-kernel-module-cheat/out/arm/buildroot/images/rootfs.ext2' \
--kernel='/data/git/linux-kernel-module-cheat/out/arm/buildroot/build/linux-custom/vmlinux' \
--mem-size='256MB' \
--num-cpus='1' \
--script='/data/git/linux-kernel-module-cheat/data/readfile' \
--command-line='earlyprintk=pl011,0x1c090000 console=ttyAMA0 lpj=19988480 rw loglevel=8 mem=256MB root=/dev/sda console_msg_format=syslog nokaslr norandmaps printk.devkmsg=on printk.time=y' \
--dtb-file='/data/git/linux-kernel-module-cheat/out/common/gem5/system/arm/dt/armv7_gem5_v1_1cpu.dtb' \
--machine-type=VExpress_GEM5_V1 \
connect to the VNC server gem5 provides with your favorite client.
On Ubuntu 18.04, I like:
sudo apt-get install vinagre
vinagre localhost:5900
The port shows up on a gem5 message of type:
system.vncserver: Listening for connections on port 5900
and it takes up the first free port starting from 5900.
Only raw connections are supported currently.
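Since only raw RFB connections are supported, a quick way to confirm the server is up before launching a full client is to read the 12-byte "RFB xxx.yyy" greeting that an RFB server sends on connect. Sketch below; the dummy server thread stands in for gem5's `system.vncserver` so the example is self-contained (in practice you would point it at localhost:5900):

```python
# Sketch: check a raw VNC/RFB server by reading the 12-byte protocol
# greeting ("RFB xxx.yyy\n") it sends first.  The fake server thread
# stands in for gem5's system.vncserver on port 5900.
import socket
import threading

def read_rfb_greeting(host: str, port: int) -> bytes:
    """Read exactly the 12-byte RFB version greeting from the server."""
    with socket.create_connection((host, port), timeout=5) as s:
        data = b""
        while len(data) < 12:
            chunk = s.recv(12 - len(data))
            if not chunk:
                break
            data += chunk
        return data

# --- self-contained demo with a dummy server ---
srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # pick any free port
srv.listen(1)
port = srv.getsockname()[1]

def fake_vnc():
    conn, _ = srv.accept()
    conn.sendall(b"RFB 003.008\n")  # greeting a real RFB server sends
    conn.close()

threading.Thread(target=fake_vnc, daemon=True).start()
greeting = read_rfb_greeting("127.0.0.1", port)
assert greeting.startswith(b"RFB ")
```

If the greeting comes back, the port is the right one and any display problem lies further down the stack (kernel config, HDLCD, etc.).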
Outcome:
after a few seconds, the VNC client shows a little penguin on the screen! This is because our kernel was compiled with CONFIG_LOGO=y.
the latest frame gets dumped to system.framebuffer.png, and it also contains the little penguin.
the Linux kernel dmesg, visible on the telnet 3456 terminal, shows messages like:
[ 0.152755] [drm] found ARM HDLCD version r0p0
[ 0.152790] hdlcd 2b000000.hdlcd: bound virt-encoder (ops 0x80935f94)
[ 0.152795] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[ 0.152799] [drm] No driver support for vblank timestamp query.
[ 0.215179] Console: switching to colour frame buffer device 240x67
[ 0.230389] hdlcd 2b000000.hdlcd: fb0: frame buffer device
[ 0.230509] [drm] Initialized hdlcd 1.0.0 20151021 for 2b000000.hdlcd on minor 0
which shows that the HDLCD was enabled.
when we connect, gem5 shows on stdout:
info: VNC client attached
TODO: also get a shell working. Currently I only have the little penguin, and my keystrokes do nothing. I likely have to tweak the console= kernel parameter or set up a tty console there on init? CONFIG_FRAMEBUFFER_CONSOLE=y is set. Maybe the answer is contained in: https://www.kernel.org/doc/Documentation/fb/fbcon.txt
aarch64
The aarch64 gem5 defconfig does not come with all required options, e.g. CONFIG_DRM_HDLCD=y.
Adding the following options, either by hand or with a config fragment, made it work:
CONFIG_DRM=y
CONFIG_DRM_HDLCD=y
CONFIG_DRM_VIRT_ENCODER=y

Can ARM qemu system emulator boot from card image without kernel param?

I've seen a lot of examples of how to run a QEMU ARM board emulator. In every case, besides the SD card image parameter, QEMU was also always supplied with a kernel parameter, e.g.:
qemu-system-arm -M versatilepb \
-kernel vmlinuz-2.6.18-6-versatile \ #KERNEL PARAM HERE
-initrd initrd.gz \
-hda hda.img -append "root=/dev/ram"
I am playing with bootloaders and want to create my own bootable SD card, but I don't have a real board yet and want to learn with an emulated one. However, if run as described above, QEMU skips the bootloader stage and runs the kernel directly.
So what should I do to emulate a full boot sequence on QEMU so that it executes bootloader? Should I get a ROM dump and pass it as a -bios param?
You can do that by feeding it a U-Boot image; I have never used a ROM dump.
QEMU BOOT SEQUENCE:
On real, physical boards the boot process usually involves a non-volatile memory (e.g. a Flash) containing a boot-loader and the operating system. On power on, the core loads and runs the boot-loader, that in turn loads and runs the operating system.
QEMU can emulate Flash memory on many platforms, but not on the VersatilePB. There are patches and procedures available that can add flash support, but for now I wanted to leave QEMU as it is.
QEMU can load a Linux kernel using the -kernel and -initrd options; at a low level, these options have the effect of loading two binary files into the emulated memory: the kernel binary at address 0x10000 (64KiB) and the ramdisk binary at address 0x800000 (8MiB).
Then QEMU prepares the kernel arguments and jumps at 0x10000 (64KiB) to execute Linux. I wanted to recreate this same situation using U-Boot, and to keep the situation similar to a real one I wanted to create a single binary image containing the whole system, just like having a Flash on board. The -kernel option in QEMU will be used to load the Flash binary into the emulated memory, and this means the starting address of the binary image will be 0x10000 (64KiB).
This example is based on the ARM versatilepb board.
Creating the Flash image
* download the u-boot-xxx.x source tree and extract it
* cd into the source tree directory and build it:
make CROSS_COMPILE=arm-none-eabi- versatilepb_config
make CROSS_COMPILE=arm-none-eabi- all
mkimage -A arm -C none -O linux -T kernel -d zImage -a 0x00010000 -e 0x00010000 zImage.uimg
mkimage -A arm -C none -O linux -T ramdisk -d rootfs.img.gz -a 0x00800000 -e 0x00800000 rootfs.uimg
dd if=/dev/zero of=flash.bin bs=1 count=6M
dd if=u-boot.bin of=flash.bin conv=notrunc bs=1
dd if=zImage.uimg of=flash.bin conv=notrunc bs=1 seek=2M
dd if=rootfs.uimg of=flash.bin conv=notrunc bs=1 seek=4M
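The four dd commands lay out a 6 MiB image with u-boot at offset 0, the kernel uimage at 2 MiB, and the ramdisk at 4 MiB. The same layout can be sketched with plain file seeks, which also makes the offsets easy to verify; the dummy payloads below stand in for the real u-boot.bin / zImage.uimg / rootfs.uimg binaries:

```python
# Sketch: reproduce the dd-based flash layout with file seeks.
# u-boot at 0, zImage.uimg at 2 MiB, rootfs.uimg at 4 MiB, inside a
# 6 MiB zero-filled image.  Dummy bytes stand in for the real files.
MIB = 1024 * 1024

parts = {
    0 * MIB: b"UBOOT---",   # dd if=u-boot.bin ... (no seek)
    2 * MIB: b"KERNEL--",   # dd if=zImage.uimg ... seek=2M
    4 * MIB: b"ROOTFS--",   # dd if=rootfs.uimg ... seek=4M
}

with open("flash.bin", "wb") as img:
    img.truncate(6 * MIB)            # like dd if=/dev/zero count=6M
    for offset, blob in parts.items():
        img.seek(offset)
        img.write(blob)

with open("flash.bin", "rb") as img:
    img.seek(2 * MIB)
    assert img.read(8) == b"KERNEL--"
```

The 2 MiB and 4 MiB offsets must of course match the load addresses passed to mkimage above, or U-Boot will not find the images.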
Booting Linux
To boot Linux we can finally call:
qemu-system-arm -M versatilepb -m 128M -kernel flash.bin -serial stdio
You will need to pass it some kind of bootloader image via -bios (or a pflash option), yes. I doubt that a ROM dump would work though -- typically the ROM will assume much closer fidelity to the real hardware than QEMU provides. You'd want a bootloader written and tested to work with QEMU. One example of that is if you use the 'virt' board and a UEFI image which is built for QEMU.
Otherwise QEMU will use its "built in bootloader" which is a handful of instructions that are capable of booting the kernel you pass it with -kernel.