Publishing a prerecorded lidar PCAP file and IMU measurements as ROS messages

I have prerecorded lidar point clouds as a PCAP file, along with IMU and GPS records. All sensors were synchronized to GPS time. I would like to publish them as ROS messages in real time, which means reading each point-cloud frame and IMU measurement at its own sample rate (IMU = 125 Hz, lidar = 10 Hz) and publishing them as ROS messages.
Is this possible, and what is the best way to achieve it?
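For illustration, here is a minimal sketch of the kind of node I have in mind, in Python with rospy. The parse_pcap_frames and parse_imu_log helpers are hypothetical stand-ins for reading my files; two rospy.Timer callbacks replay each stream at its own rate:

#!/usr/bin/env python
# Minimal sketch: replay pre-parsed lidar frames and IMU samples at their
# native rates (lidar 10 Hz, IMU 125 Hz) as live ROS messages.
import rospy
from sensor_msgs.msg import Imu, PointCloud2

def main():
    rospy.init_node('replay_publisher')
    pc_pub = rospy.Publisher('points', PointCloud2, queue_size=10)
    imu_pub = rospy.Publisher('imu/data', Imu, queue_size=200)

    # Hypothetical helpers that parse the recordings into message lists.
    frames = iter(parse_pcap_frames('lidar.pcap'))
    imu_samples = iter(parse_imu_log('imu.csv'))

    def publish_frame(event):
        msg = next(frames, None)
        if msg is not None:
            pc_pub.publish(msg)

    def publish_imu(event):
        msg = next(imu_samples, None)
        if msg is not None:
            imu_pub.publish(msg)

    # One timer per sensor, each firing at that sensor's sample rate.
    rospy.Timer(rospy.Duration(1.0 / 10), publish_frame)
    rospy.Timer(rospy.Duration(1.0 / 125), publish_imu)
    rospy.spin()

if __name__ == '__main__':
    main()

Since everything is synchronized to GPS time, an alternative would be to write all messages into a bag with their recorded timestamps and replay it with rosbag play.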
Thanks

Related

Sensor data timestamps using VictoriaMetrics

I am trying to figure out how to record timestamped sensor data to an instance of VictoriaMetrics. I have an embedded controller with a sensor that is read once per second. I would like VictoriaMetrics to poll the controller once a minute, and log all 60 readings with their associated timestamps into the TSDB.
I have the server and client running, and measuring system metrics is easy, but I can't find an example of how to get a batch of sensor readings to be reported by the embedded client, nor have I been able to figure it out from the docs.
Any insights are welcome!
VictoriaMetrics supports data ingestion via various protocols. All of these protocols support batching, i.e. multiple measurements can be sent in a single request, so you can choose whichever protocol is best suited for inserting batches of collected measurements into VictoriaMetrics. For example, if the Prometheus text exposition format is used for data ingestion, a batch of metrics could look like the following:
measurement_name{optional="labels"} value1 timestamp1
...
measurement_name{optional="labels"} valueN timestampN
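A minimal Python sketch of sending such a batch could use the /api/v1/import/prometheus endpoint. It assumes a VictoriaMetrics instance listening on localhost:8428; the metric name and label are illustrative:

import urllib.request

def push_batch(readings):
    # readings: list of (value, unix_timestamp_ms) tuples, e.g. 60 per minute.
    # Each line carries its own timestamp, so the original sample times are
    # preserved no matter when the batch is sent.
    lines = ['sensor_temperature{sensor="A1"} %s %d' % (v, ts) for v, ts in readings]
    body = ('\n'.join(lines) + '\n').encode()
    req = urllib.request.Request(
        'http://localhost:8428/api/v1/import/prometheus',
        data=body, method='POST')
    urllib.request.urlopen(req)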
VictoriaMetrics can also poll (scrape) metrics from a configured address via HTTP. It expects the application to return metric values in the text exposition format. This format is compatible with Prometheus, so Prometheus client libraries for different languages work with VictoriaMetrics as well.
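As a sketch of the scrape side, a tiny Python endpoint exposing the latest reading in the text exposition format might look like the following (metric name and port are illustrative). Note that a scrape naturally reports only the current value; to keep all 60 per-second timestamps from the last minute, pushing a batch as shown above is the better fit:

from http.server import BaseHTTPRequestHandler, HTTPServer

latest_value = 23.5  # updated elsewhere by the once-per-second sensor loop

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != '/metrics':
            self.send_error(404)
            return
        body = ('sensor_temperature{sensor="A1"} %s\n' % latest_value).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('', 8080), MetricsHandler).serve_forever()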
There is also a how-to guide on instrumenting a Go application to expose metrics and scraping them with VictoriaMetrics here. It covers the monitoring basics for any service or application.

Stream HTML5 camera output

Does anyone know how to stream HTML5 camera output to other users?
If that's possible, should I use sockets and stream images to the users, or some other technology?
Is there a video tutorial where I can take a look at this?
Many thanks.
The two most common approaches now are most likely:
Stream from the source to a server, and let users connect to the server to stream to their devices, typically using some form of Adaptive Bit Rate (ABR) streaming protocol. ABR creates multiple bit-rate versions of your content and splits them into chunks, so the client can fetch each next chunk at the bit rate best suited to the device and current network conditions.
Stream peer to peer, or via a conferencing hub, using WebRTC
In general, the latter is geared towards real time: any delay should stay below the threshold that would interfere with audio and video conferencing, usually less than 200 ms for audio, for example. To achieve this it may sometimes have to sacrifice quality, especially video quality.
There are some good WebRTC samples available online (here at the time of writing): https://webrtc.github.io/samples/
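In the browser, capture and transport would go through getUserMedia and RTCPeerConnection in JavaScript; as a rough sketch of the same peer-to-peer flow, here is the offer side using the Python aiortc library (signaling is deliberately left out, and the webcam device path is platform-specific):

import asyncio
from aiortc import RTCPeerConnection
from aiortc.contrib.media import MediaPlayer

async def main():
    pc = RTCPeerConnection()
    # Grab the local camera; '/dev/video0' and 'v4l2' are Linux-specific.
    player = MediaPlayer('/dev/video0', format='v4l2')
    pc.addTrack(player.video)
    # Create the SDP offer that would be sent to the remote peer over your
    # signaling channel (e.g. a WebSocket); their answer would then be fed
    # into pc.setRemoteDescription(...).
    await pc.setLocalDescription(await pc.createOffer())
    print(pc.localDescription.sdp)

asyncio.run(main())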

NAudio: Recording Audio-Card's Actual Output

I successfully use WasapiLoopbackCapture() for recording audio played on the system, but I'm looking for a way to record what the user would actually hear through the speakers.
I'll explain: if a certain application plays music, WASAPI loopback will capture the music samples even if the Windows main volume control is set to 0, i.e. even if no sound is actually heard at the audio card's output jack (speakers/headphones/etc.).
I'd like to capture the audio actually "reaching" the output jack (after ALL mixers on the audio path have done their job).
Is this possible using NAudio (or some other framework)?
A code sample, or a link to one, would come in handy.
Thanks much.
No, this is not directly possible. The loopback capture provided by WASAPI is the stream of data being sent to the audio hardware. It is the hardware that controls the actual output sound, and this is where the volume level is applied to change the output signal strength. Apart from some hardware- and driver-specific options - or some interesting hardware solutions like loopback cables or external ADC - there is no direct method to get the true output data.
One option is to get the volume level from the mixer and apply it as a scaling factor on any data you receive from the loopback stream. This is not a perfect solution, but possibly the best you can do without specific hardware support.
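NAudio is C#, but the scaling step itself is language-agnostic; here is a rough sketch in Python, assuming raw 16-bit PCM buffers from the loopback stream and a volume factor already read from the mixer:

import array

def apply_volume(pcm_bytes, volume):
    # pcm_bytes: raw little-endian 16-bit PCM from the loopback stream;
    # volume: 0.0..1.0 factor obtained from the system mixer.
    samples = array.array('h', pcm_bytes)
    for i, s in enumerate(samples):
        # scale and clamp to the valid 16-bit range
        samples[i] = max(-32768, min(32767, int(s * volume)))
    return samples.tobytes()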

Circular Buffer Audio Recording iOS: Possible?

A client of mine wants to record audio continually, and when he clicks Submit, only the last 10 seconds should be submitted. So he wants a continuous recording that keeps only the last x seconds.
I would think this requires something like a circular buffer, but (as something of an iOS newbie) it looks like AVAudioRecorder can only write to a file.
Are there any options for me to implement this?
Can anyone give a pointer?
I would use Audio Queue Services. This will allow you to isolate certain parts of the buffer. Here is the guide to it: http://developer.apple.com/library/ios/#documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AQRecord/RecordingAudio.html#//apple_ref/doc/uid/TP40005343-CH4-SW1
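The retention logic itself is straightforward; as a language-agnostic sketch in Python, here is a buffer that keeps only the most recent N seconds (the real capture callbacks would come from Audio Queue Services):

from collections import deque

class LastNSecondsBuffer:
    def __init__(self, seconds, sample_rate, channels=1):
        # a deque with maxlen drops the oldest samples automatically
        self.samples = deque(maxlen=seconds * sample_rate * channels)

    def append(self, new_samples):
        # call this from the audio capture callback
        self.samples.extend(new_samples)

    def snapshot(self):
        # call this when the user hits Submit: the last N seconds only
        return list(self.samples)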

Symbian/S60 audio playback rate

I would like to control the playback rate of a song while it is playing. Basically, I want to make it play a little faster or slower when I tell it to do so.
Also, is it possible to play back two different tracks at the same time? Imagine a recording with the instruments in one track and the vocals in another. One of these tracks should then be able to change its playback rate in "real time".
Is this possible on Symbian/S60?
It's possible, but you would have to:
Convert the audio data into PCM, if it is not already in this format
Process this PCM stream in the application, in order to change its playback rate (a sketch of this step follows below)
Render the audio via CMdaAudioOutputStream or CMMFDevSound (or QAudioOutput, if you are using Qt)
In other words, the platform itself does not provide any APIs for changing the audio playback rate - your application would need to process the audio stream directly.
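As a language-agnostic illustration of that processing step, here is a naive rate change in Python: resample the PCM with linear interpolation, so that playing the result at the original sample rate sounds faster or slower (with a matching pitch shift):

def change_rate(samples, rate):
    # rate > 1.0 plays faster (and higher-pitched), rate < 1.0 slower.
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # linear interpolation between neighbouring samples
        out.append(int(samples[i] * (1 - frac) + samples[i + 1] * frac))
        pos += rate
    return out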
As for playing multiple tracks together: depending on the device, the audio subsystem may let you play two or more streams simultaneously using either of the above APIs. The problem, however, is that they are unlikely to be synchronised, so your app would probably have to mix all of the individual tracks into one stream before rendering.
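Mixing down to a single stream is then essentially a sample-wise sum with clamping, e.g. for two equal-length 16-bit PCM tracks:

def mix(track_a, track_b):
    # sum corresponding samples and clamp to the 16-bit range
    return [max(-32768, min(32767, a + b)) for a, b in zip(track_a, track_b)]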