How to extract camera info and images from a ROS bag?

I have a ROS bag with the following information:
path: zed.bag
version: 2.0
duration: 3:55s (235s)
start: Nov 12 2014 04:28:20.90 (1415737700.90)
end: Nov 12 2014 04:32:16.65 (1415737936.65)
size: 668.3 MB
messages: 54083
compression: none [848/848 chunks]
types: sensor_msgs/CameraInfo [c9a58c1b0b154e0e6da7578cb991d214]
sensor_msgs/CompressedImage [8f7a12909da2c9d3332d540a0977563f]
tf2_msgs/TFMessage [94810edda583a504dfda3829e70d7eec]
topics: /stereo_camera/left/camera_info_throttle 3741 msgs : sensor_msgs/CameraInfo
/stereo_camera/left/image_raw_throttle/compressed 3753 msgs : sensor_msgs/CompressedImage
/stereo_camera/right/camera_info_throttle 3741 msgs : sensor_msgs/CameraInfo
/stereo_camera/right/image_raw_throttle/compressed 3745 msgs : sensor_msgs/CompressedImage
/tf 39103 msgs : tf2_msgs/TFMessage (2 connections)
I can extract images by following
http://wiki.ros.org/rosbag/Tutorials/Exporting%20image%20and%20video%20data
but an issue occurs when I try to get the camera info. Does anyone know how to solve it?

You can solve it by echoing the text-based information into a file using rostopic:
rostopic echo -b zed.bag /stereo_camera/left/camera_info_throttle > data.txt
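If you would rather handle the messages programmatically (for example to save the calibration alongside the exported images), here is a minimal sketch using the rosbag Python API; the bag and topic names are taken from the bag info above, and the output file name is just an example:
# Minimal sketch (ROS 1, rosbag Python API): dump every CameraInfo message
# from the left camera_info topic of zed.bag into a text file.
import rosbag

bag = rosbag.Bag('zed.bag')
with open('left_camera_info.txt', 'w') as out:
    for topic, msg, t in bag.read_messages(
            topics=['/stereo_camera/left/camera_info_throttle']):
        out.write(str(msg) + '\n---\n')
bag.close()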

Related

Using an awk while-loop in Conky

I'm struggling with a while() loop in a Conky script.
Here's what I want to do:
I'm piping a command's output to awk, extracting and formatting data.
The problem is: the output can contain 1 to n sections, and I want to get values from each of them.
Here's the output sent to awk:
1) -----------
name: wu_1664392603_228876_0
WU name: wu_1664392603_228876
project URL: https://boinc.loda-lang.org/loda/
received: Thu Oct 6 15:31:40 2022
report deadline: Thu Oct 13 15:31:40 2022
ready to report: no
state: downloaded
scheduler state: scheduled
active_task_state: EXECUTING
app version num: 220917
resources: 1 CPU
estimated CPU time remaining: 1379.480287
elapsed task time: 5858.009798
slot: 1
PID: 2221366
CPU time at last checkpoint: 5690.500000
current CPU time: 5712.920000
fraction done: 0.809000
swap size: 1051 MB
working set size: 973 MB
2) -----------
name: wu_1664392603_228908_0
WU name: wu_1664392603_228908
project URL: https://boinc.loda-lang.org/loda/
received: Thu Oct 6 15:31:53 2022
report deadline: Thu Oct 13 15:31:53 2022
ready to report: no
state: downloaded
scheduler state: scheduled
active_task_state: EXECUTING
app version num: 220917
resources: 1 CPU
estimated CPU time remaining: 1393.925106
elapsed task time: 5849.961764
slot: 7
PID: 2221367
CPU time at last checkpoint: 5654.640000
current CPU time: 5682.160000
fraction done: 0.807000
swap size: 802 MB
working set size: 728 MB
...
And here's the final output I want:
boinc.loda wu_1664392603_2288 80.9 07/10 01h37
boinc.loda wu_1664392603_2289 80.7 07/10 02h38
I managed to get the data I want ("WU name", "project URL", "estimated CPU time remaining" AND "fraction done") from one particular section using this code:
${execi 60 boinccmd --get_tasks | awk -F': |://|/' '\
/URL/ && ++i==1 {u=$3}\
/WU/ && ++j==1 {w=$2}\
/fraction/ && ++k==1 {p=$2}\
/estimated/ && ++l==1 {e=strftime("%d/%m %Hh%M",$2+systime())}\
END {printf "%.10s %.18s %3.1f %s", u, w, p*100, e}\
'}
This is quite inelegant, as I must repeat this code n times, increasing the i, j, k, l values to get the whole dataset (n is related to CPU threads; my PC has 8 threads, so I repeat the code 8 times).
I'd like the script to adapt to other CPUs, where n could be anything from 1 to ...
The obvious solution is to use a while() loop, parsing the whole dataset.
But nesting a conditional loop into an awk sequence calling an external command seems too tricky for me, and Conky scripts aren't really easy to debug, as Conky may hang without any error output or log if the script's syntax is bad.
Any help will be appreciated :)
Assumptions:
the sample input shows 2 values for estimated that are ~14.5 seconds apart (1379.480287 and 1393.925106), but the expected output shows the estimated values as being ~61 minutes apart (07/10 01h37 and 07/10 02h38); for now I'm going to assume this is due to the OP's repeated runs of execi returning widely varying values for the estimated lines
each section of execi output always contains 4 matching lines (URL, WU, fraction, estimated) and these 4 strings only occur once within a section of execi output
I don't have execi installed on my system, so to emulate the OP's execi I've cut-and-pasted the OP's sample execi results into a local file named execi.dat.
Tweaking the OP's current awk script also allows us to eliminate the need for a bash loop that repeatedly calls execi | awk:
cat execi.dat | awk -F': |://|/' '
FNR==NR { st=systime() }
/URL/ { found++; u=$3 }
/WU/ { found++; w=$2 }
/fraction/ { found++; p=$2 }
/estimated/ { found++; e=strftime("%d/%m %Hh%M",$2+st) }
found==4 { printf "%.10s %.18s %3.1f %s\n", u, w, p*100, e; found=0 }
'
This generates:
boinc.loda wu_1664392603_2288 80.9 06/10 17h47
boinc.loda wu_1664392603_2289 80.7 06/10 17h47
NOTE: the last value appears to be duplicated but that's due to the sample estimated values only differing by ~14.5 seconds
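Also note that systime() and strftime() are GNU awk (gawk) extensions rather than POSIX awk features, so the script needs gawk. Once it behaves as expected from the shell, the same awk body can be dropped back into the Conky ${execi 60 boinccmd --get_tasks | awk ...} call in place of the cat execi.dat pipeline.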

How do I feed CD audio tracks into an ALSA-driven sound output device?

I'm using a USB CD/DVD drive without a built-in sound decoder, and I'm controlling the sound output via ALSA, which already works. The host is a Raspberry Pi 3B running the current Raspbian. Here is the corresponding config file:
pi#autoradio:/etc $ cat asound.conf
pcm.dmixer {
    type dmix
    ipc_key 1024
    ipc_perm 0666
    slave {
        pcm "hw:0,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 192000
        format S32_LE
        channels 2
    }
    bindings {
        0 0
        1 1
    }
}
pcm.dsnooper {
    type dsnoop
    ipc_key 2048
    ipc_perm 0666
    slave {
        pcm "hw:0,0"
        period_time 0
        period_size 1024
        buffer_size 4096
        rate 192000
        format S32_LE
        channels 2
    }
    bindings {
        0 0
        1 1
    }
}
pcm.duplex {
    type asym
    playback.pcm "dmixer"
    capture.pcm "dsnooper"
}
pcm.!default {
    type plug
    slave.pcm "duplex"
}
ctl.!default {
    type hw
    card 0
}
To read the music from the CD-DA, I'm going to use the CDIO++ library. Its cd-info utility recognises both the drive and the audio CD:
pi#autoradio:/etc $ cd-info
cd-info version 2.1.0 armv7l-unknown-linux-gnueabihf
CD location : /dev/cdrom
CD driver name: GNU/Linux
access mode: IOCTL
Vendor : MATSHITA
Model : CD-RW CW-8124
Revision : DA0D
Hardware : CD-ROM or DVD
Can eject : Yes
Can close tray : Yes
Can disable manual eject : Yes
Can select juke-box disc : No
Can set drive speed : No
Can read multiple sessions (e.g. PhotoCD) : Yes
Can hard reset device : Yes
Reading....
Can read Mode 2 Form 1 : Yes
Can read Mode 2 Form 2 : Yes
Can read (S)VCD (i.e. Mode 2 Form 1/2) : Yes
Can read C2 Errors : Yes
Can read IRSC : Yes
Can read Media Channel Number (or UPC) : Yes
Can play audio : Yes
Can read CD-DA : Yes
Can read CD-R : Yes
Can read CD-RW : Yes
Can read DVD-ROM : Yes
Writing....
Can write CD-RW : Yes
Can write DVD-R : No
Can write DVD-RAM : No
Can write DVD-RW : No
Can write DVD+RW : No
__________________________________
Disc mode is listed as: CD-DA
I've already got some code to send the PCM data to the sound card and some insight into the (rather poorly documented) CDIO API (I know that the readSectors() method is used for reading sound data from the CD sector by sector), but not really a clue on how to hand the data from the CD-DA input over to the ALSA output routine correctly.
Please note that mplayer is off-limits to me, as this routine will be part of a larger solution.
Any help would be greatly appreciated.
UPDATE: Do the different block sizes of an audio CD (2,352 bytes) and of the sound output (910 bytes, at least in my particular case) matter?
CD audio data is just two channels of little-endian 16-bit samples at 44.1 kHz.
If you output the data to the standard output, you can pipe it into your sound-playing program, or aplay:
./my-read-cdda | ./play 44100 2 99999
./my-read-cdda | aplay --file-type raw --format cd
If you want to do everything in a single program, replace the read(0, ...) with readSectors(). (The buffer size does not need to have any relation with ALSA's period size or buffer size.)
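To make the data path concrete, here is a minimal sketch of the playback side in Python, assuming the pyalsaaudio package (your own code is C++, so treat this purely as an illustration of the parameters involved): it reads raw CD-DA from stdin, e.g. from my-read-cdda above, and hands it to the default ALSA device one sector at a time.
# Sketch (assumes pyalsaaudio): ./my-read-cdda | python3 play_cdda.py
import sys
import alsaaudio

SECTOR_BYTES = 2352                          # one CD-DA sector = 588 frames * 2 ch * 2 bytes

pcm = alsaaudio.PCM(alsaaudio.PCM_PLAYBACK)  # opens the "default" PCM (the plug/duplex chain above)
pcm.setchannels(2)
pcm.setrate(44100)
pcm.setformat(alsaaudio.PCM_FORMAT_S16_LE)
pcm.setperiodsize(588)                       # the ALSA period need not match the CD sector size

while True:
    chunk = sys.stdin.buffer.read(SECTOR_BYTES)
    if not chunk:
        break
    pcm.write(chunk)                         # ALSA buffers and splits this internally
The same holds in C: the buffer you pass to snd_pcm_writei() can be one 2,352-byte sector (588 frames); ALSA's period and buffer sizes are independent of it.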

Showing results of tcp-variants-comparison.cc under ns3 3.28

I am looking for a way to show the results of the file "tcp-variants-comparison.cc" under ns3 (3.28) used with Ubuntu 18.04.
I found an old topic from 2013 here, but it does not seem to work correctly in my current environment.
P.S.: I am a newbie in ns3, so I will appreciate any help.
regards
cedkhader
Running ./waf --run "tcp-variants-comparison --tracing=1" yields the following files:
-rw-rw-r-- 1 112271415 Aug 5 15:52 TcpVariantsComparison-ascii
-rw-rw-r-- 1 401623 Aug 5 15:52 TcpVariantsComparison-cwnd.data
-rw-rw-r-- 1 1216177 Aug 5 15:52 TcpVariantsComparison-inflight.data
-rw-rw-r-- 1 947619 Aug 5 15:52 TcpVariantsComparison-next-rx.data
-rw-rw-r-- 1 955550 Aug 5 15:52 TcpVariantsComparison-next-tx.data
-rw-rw-r-- 1 38 Aug 5 15:51 TcpVariantsComparison-rto.data
-rw-rw-r-- 1 482134 Aug 5 15:52 TcpVariantsComparison-rtt.data
-rw-rw-r-- 1 346427 Aug 5 15:52 TcpVariantsComparison-ssth.data
You can use other command-line arguments to generate the desired output; see the list below.
Program Arguments:
--transport_prot: Transport protocol to use: TcpNewReno, TcpHybla, TcpHighSpeed, TcpHtcp, TcpVegas, TcpScalable, TcpVeno, TcpBic, TcpYeah, TcpIllinois, TcpWestwood, TcpWestwoodPlus, TcpLedbat [TcpWestwood]
--error_p: Packet error rate [0]
--bandwidth: Bottleneck bandwidth [2Mbps]
--delay: Bottleneck delay [0.01ms]
--access_bandwidth: Access link bandwidth [10Mbps]
--access_delay: Access link delay [45ms]
--tracing: Flag to enable/disable tracing [true]
--prefix_name: Prefix of output trace file [TcpVariantsComparison]
--data: Number of Megabytes of data to transmit [0]
--mtu: Size of IP packets to send in bytes [400]
--num_flows: Number of flows [1]
--duration: Time to allow flows to run in seconds [100]
--run: Run index (for setting repeatable seeds) [0]
--flow_monitor: Enable flow monitor [false]
--pcap_tracing: Enable or disable PCAP tracing [false]
--queue_disc_type: Queue disc type for gateway (e.g. ns3::CoDelQueueDisc) [ns3::PfifoFastQueueDisc]
--sack: Enable or disable SACK option [true]
In ns-3.36.1 I used this command:
./ns3 run examples/tcp/tcp-variants-comparison.cc -- --tracing=1
and the output looks like this:
TcpVariantsComparison-ascii
TcpVariantsComparison-cwnd.data
TcpVariantsComparison-inflight.data
TcpVariantsComparison-next-rx.data
TcpVariantsComparison-next-tx.data
TcpVariantsComparison-rto.data
TcpVariantsComparison-rtt.data
TcpVariantsComparison-ssth.data
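To actually visualize one of these traces, here is a minimal plotting sketch (assuming matplotlib is available and that each .data file holds whitespace-separated time/value columns; check the column layout with head first and adjust the index if your version also logs the old value):
# Plot the congestion-window trace produced by tcp-variants-comparison.
import matplotlib.pyplot as plt

times, values = [], []
with open("TcpVariantsComparison-cwnd.data") as f:
    for line in f:
        fields = line.split()
        if len(fields) >= 2:
            times.append(float(fields[0]))
            values.append(float(fields[-1]))   # last column = current cwnd value

plt.plot(times, values)
plt.xlabel("Time (s)")
plt.ylabel("cwnd")
plt.savefig("cwnd.png")
The other *.data files (rtt, rto, ssth, inflight, next-tx, next-rx) can be plotted the same way by changing the file name and label.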

Only grep sections of file containing a specific string

I want to make it easier to find logs for a specific user, and I want to grep/filter only the sections which contain the string 15120000000. A section starts with a line beginning with a timestamp (Sep 16 19:31:46 in this example; or it could be every line starting with "Sep 16"). Is it possible to use grep or awk for this? Thanks in advance for all the help.
Here is the log sample.
Sep 16 19:31:46 da1psbc05pev kamailio[31135]: INFO: <script>: onreply_route Rcvd [487] response from[25.11.214.107:5061] MsgId[6384483] From[sip:15120000000#3bc.cloud.comptel.com] callid[f91a7279-15751118-6c5f65af#192.168.1.58]
SIP/2.0 487 Request Terminated
From: "TAC Offnet"<sip:15120000000#3bc.cloud.comptel.com>;tag=B251F2AD-C3A0DEEC
To: <sip:0001#3bc.cloud.comptel.com;user=phone>;tag=dc25d928-0-13c4-6006-cb98c-2310cd2b-cb98c
Call-ID: f91a7279-15751118-6c5f65af#192.168.1.58
CSeq: 1 INVITE
Via: SIP/2.0/TLS 25.11.214.72:5061;alias;rport=54928;branch=z9hG4bKbbd9.62d12aaf57f6574fd99b719735009993.0;i=4e0f7
Via: SIP/2.0/TLS 192.168.1.58:36304;received=199.199.199.122;rport=36304;branch=z9hG4bK47084e2320E46012
Supported: timer,replaces,info
User-Agent: compGear/21.79.9310.0 (compTel 15)
Content-Length: 0
Sep 16 19:31:46 DaHostname kamailio[31135]: : <core> [msg_translator.c:553]: lump_check_opt(): ERROR: lump_check_opt: null send socket
Sep 16 19:31:46 DaHostname kamailio[31135]: : <core> [msg_translator.c:553]: lump_check_opt(): ERROR: lump_check_opt: null send socket
Sep 16 19:31:46 DaHostname kamailio[31135]: INFO: <script>: onsend_route Dumping MsgId[6384483] sending to[199.199.199.122:36304] size[556] callid[f91a7279-15751118-6c5f65af#192.168.1.58]
SIP/2.0 487 Request Terminated
Record-Route: <sip:25.11.214.72:5061;transport=tls;transport=tls;lr=on>
From: "TAC Offnet"<sip:15120000000#3bc.cloud.Comptel.com>;tag=B251F2AD-C3A0DEEC
To: <sip:0000#3bc.cloud.Comptel.com;user=phone>;tag=dc25d928-0-13c4-6006-cb98c-2310cd2b-cb98c
Call-ID: f91a7279-15751118-6c5f65af#192.168.1.58
CSeq: 1 INVITE
Via: SIP/2.0/TLS 192.168.1.58:36304;received=199.199.199.122;rport=36304;branch=z9hG4bK47084e2320E46012
Supported: timer,replaces,info
User-Agent: CompGear/21.79.9310.0 (CompTel 15)
Content-Length: 0
Sep 16 19:31:46 DaHostname kamailio[31141]: INFO: <script>: onreply_route Rcvd OPTIONS [200] response from[201.125.123.125:46518] MsgId[6331041] From[sip:sips:3] callid[CompTel_1474068706-1077690787]
Assuming sections are separated by one or more blank lines,
$ awk -v RS= '/15120000000/' file
should do it.
Thanks for the answer. This gave me exactly what I wanted.
awk 'BEGIN{RS="Sep 16"; ORS="Sep"} /5120000000/ {print}' /var/log/log
I used the date part of the timestamp as the record separator (Sep 16) and then selected records based on the number 5120000000. So that part is working pretty nicely.
Now, is there a way I can make it dynamic? How can I make RS select any value from Jan to Dec? Thanks.
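One way to make the month dynamic, sketched here in Python under the assumption that a new section starts at any line beginning with a three-letter month abbreviation, a day and a time (GNU awk users could get a similar effect by giving RS a regular expression):
#!/usr/bin/env python3
# Print only the log sections containing NEEDLE; a section starts at any
# syslog-style timestamp such as "Sep 16 19:31:46" (any month Jan-Dec).
# Usage: python3 filter_sections.py < /var/log/log
import re
import sys

SECTION_START = re.compile(
    r'^(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) +\d{1,2} +\d{2}:\d{2}:\d{2}\b')
NEEDLE = "15120000000"

def flush(lines):
    if any(NEEDLE in line for line in lines):
        sys.stdout.writelines(lines)

section = []
for line in sys.stdin:
    if SECTION_START.match(line) and section:
        flush(section)
        section = []
    section.append(line)
flush(section)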

Error loading a local file to a BigQuery table

I'm trying to load a local file into BigQuery via the API, and it is failing. The file size is 98 MB and it has a bit over 5 million rows. Note that I have loaded tables with the same number of rows and a slightly bigger file size without problems in the past.
The code I am using is exactly the same as the one in the API documentation, which I have used successfully to upload several other tables. The error I get is the following:
Errors:
Line:2243530, Too few columns: expected 5 column(s) but got 3 column(s)
Too many errors encountered. Limit is: 0.
Job ID: job_6464fc24a4414ae285d1334de924f12d
Start Time: 9:38am, 7 Aug 2012
End Time: 9:38am, 7 Aug 2012
Destination Table: 387047224813:pos_dw_api.test
Source URI: uploaded file
Schema:
tbId: INTEGER
hdId: INTEGER
vtId: STRING
prId: INTEGER
pff: INTEGER
Note that the same file loads just fine from CloudStorage (dw_tests/TestCSV/test.csv), so the problem cannot be the reported one about a line having fewer columns, as it would fail from CloudStorage too; I have also checked that all the rows have the correct format.
The following jobs have the same problem; the only differences are the table name and the names of the fields in the schema (it is the same data file, fields and types). In those attempts it claimed a different row was in trouble:
Line:4288253, Too few columns: expected 5 column(s) but got 4 column(s)
The jobs are the following:
job_cbe54015b5304785b874baafd9c7e82e load FAILURE 07 Aug 08:45:23 0:00:34
job_f634cbb0a26f4404b6d7b442b9fca39c load FAILURE 06 Aug 16:35:28 0:00:30
job_346fdf250ae44b618633ad505d793fd1 load FAILURE 06 Aug 16:30:13 0:00:34
The error that the Python script returns is the following:
{'status': '503', 'content-length': '177', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jul 27 2012 15:58:36 (1343429916)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Tue, 07 Aug 2012 08:36:40 GMT', 'content-type': 'application/json'}
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "backendError",
        "message": "Backend Error"
      }
    ],
    "code": 503,
    "message": "Backend Error"
  }
}
It looks like there may be an issue at BigQuery. How can I fix this problem?
The temporary files were still around for this import, so I was able to check out the file we tried to import. For job job_6464fc24a4414ae285d1334de924f12d, the last lines were:
222,320828,bot,2,0
222,320829,bot,4,3
222,320829,
It looks like we dropped part of the input file at some point... The input specification says that the MD5 hash should be 58eb7c2954ddfa96d109fa1c60663293 but our hash of the data is 297f958bcf94959eae49bee32cc3acdc, and file size should be 98921024, but we only have 83886080 bytes.
I'll look into why this is occurring. In the meantime, imports through Google Storage use a much simpler path and should be fine.
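If you want to rule out truncation on the client side before retrying, a quick sanity check is to compare the local file's size and MD5 with the values reported above (the file name below is illustrative):
# Compare the local CSV's size and MD5 against the values BigQuery reported
# (expected 98921024 bytes / md5 58eb7c2954ddfa96d109fa1c60663293).
import hashlib
import os

path = "test.csv"   # illustrative file name
md5 = hashlib.md5()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        md5.update(chunk)

print("size:", os.path.getsize(path), "bytes")
print("md5 :", md5.hexdigest())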