GPS coordinates not updated every time

I'm using a SIM808 to get GPS coordinates, fetching GPS data every 10 seconds. But I'm getting the same GPS coordinate 2-3 times: the AT command (AT+CGPSINF) does not yield a new coordinate on every call, so I repeatedly get the same coordinate.
Please help me figure out why I'm not getting an updated GPS coordinate on every AT command call. Previously I worked with a u-blox GPS receiver and the same thing happened there: that module also returned repeated coordinates when I used it with a Python library on a Raspberry Pi.

There may be stale data queued in the serial input buffer, and you have to discard it before reading; otherwise readline() returns the oldest line in the backlog, not the latest fix.
Suppose your serial port is opened like this:
ab = serial.Serial('/dev/ttyUSB0')
Before reading, discard the buffered input. Note that in pySerial, flush() only waits for the output buffer to drain; reset_input_buffer() (flushInput() in older versions) is what discards pending input:
ab.reset_input_buffer()
Then read the data from it:
ab.readline()
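To see why the backlog causes repeated readings, here is a minimal sketch with the serial port simulated by an in-memory queue (in real pySerial, flush() only drains the output buffer; reset_input_buffer() is the call that discards pending input):

```python
from collections import deque

class FakeSerial:
    """Stand-in for serial.Serial: the device pushes a new line every
    second, but the application only polls every 10 seconds."""
    def __init__(self):
        self.buffer = deque()
    def device_sends(self, line):
        self.buffer.append(line)   # data accumulates between reads
    def readline(self):
        return self.buffer.popleft() if self.buffer else b""
    def reset_input_buffer(self):
        self.buffer.clear()        # discard the stale backlog

port = FakeSerial()

# Ten fixes arrive while the application sleeps.
for i in range(10):
    port.device_sends(f"fix-{i}".encode())

# Without discarding the backlog, readline() returns the OLDEST fix.
stale = port.readline()            # b"fix-0"

# Discard the backlog first, then wait for the next fix: fresh data.
port.reset_input_buffer()
port.device_sends(b"fix-10")       # the next fix to arrive
fresh = port.readline()            # b"fix-10"

print(stale, fresh)
```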

Related

How to code the Adafruit Clue to store sensor data in flash memory or RAM?

I'm using the Adafruit Clue, coding it via the Arduino IDE, to read and print acceleration readings from the accel+gyro sensor on the board.
We can print the instantaneous acceleration readings on the Clue screen and also on the serial monitor.
What if we want to store 30 s of readings in flash or RAM?
What if we want to start the program on an external command and stop it after 30 seconds of running?
A few more questions related to data storage:
Should we use flash memory or RAM to store the data, and why?
Will the available memory (1 MB flash or 256 kB RAM) be enough to record and store 30 seconds of data?
How do we export the stored data to another device?
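For the 30 s storage question, the RAM route is a straightforward ring buffer; a back-of-the-envelope sketch in Python (the Clue also runs CircuitPython, so the same logic applies on-device; the 100 Hz sample rate is an assumption):

```python
from collections import deque

SAMPLE_HZ = 100          # assumed accelerometer polling rate
SECONDS = 30
MAX_SAMPLES = SAMPLE_HZ * SECONDS

# Ring buffer: once full, the oldest sample is dropped automatically,
# so it always holds the most recent 30 s of readings.
buffer = deque(maxlen=MAX_SAMPLES)

def record(sample):
    buffer.append(sample)

# Simulate 45 s of (x, y, z) readings; only the last 30 s are kept.
for t in range(45 * SAMPLE_HZ):
    record((0.0, 0.0, 9.81))

# Memory estimate: 3 floats * 4 bytes per sample on a microcontroller,
# so 30 s easily fits in the Clue's 256 kB of RAM.
bytes_needed = MAX_SAMPLES * 3 * 4
print(len(buffer), bytes_needed)
```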

C++: most efficient way to get waveform data into a stream

I am attempting to get waveform data into a Redis stream.
The waveform consists of 250,000 floats and is being ingested at 1 Hz.
What I have attempted so far is a single XADD with the waveform as a byte string. This fits nicely within my time constraint, but makes it difficult to digest in applications reading from Redis, since they have to read the entire waveform.
This XADD with a ~1 MB payload takes ~5 ms.
What I'd like to do is a single XADD per element; that way I could leverage XRANGE when digesting the data.
This is taking a long time, even on the loopback interface. Using a pipeline, I can get each XADD to average out around 50 microseconds; however, I am still off by a factor of 10 from fitting into 1 second.
Is this even possible with Redis / redis-plus-plus? Am I using the wrong data structure?
Thanks in advance
Side note:
I can get roughly 10% better by providing a timestamp to XADD compared to using "*".
Environment:
Ubuntu-latest Docker container
GCC 8.1
redis-stable
hiredis-master
redis-plus-plus-master
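The arithmetic behind the question, plus a hedged middle ground: packing a fixed chunk of floats per stream entry instead of one element per XADD, which keeps XRANGE usable while cutting the entry count. The 50 µs per-XADD cost is the figure measured in the post; the candidate chunk sizes are illustrative:

```python
WAVEFORM = 250_000        # floats per capture
XADD_US = 50              # measured per-XADD cost inside a pipeline
BUDGET_US = 1_000_000     # 1 Hz ingest budget

# One XADD per element: 250,000 * 50 us = 12.5 s, which overshoots the
# 1 s budget -- matching the "off by a factor of 10" observation.
per_element = WAVEFORM * XADD_US

def time_for_chunk_size(n):
    """Estimated total XADD time if n floats are packed per entry."""
    entries = -(-WAVEFORM // n)          # ceiling division
    return entries * XADD_US

# Smallest candidate chunk size that fits the budget at the measured cost.
chunk = next(n for n in (16, 32, 64, 128) if time_for_chunk_size(n) <= BUDGET_US)
print(per_element, chunk)
```

With 16 floats per entry, the 250,000-float waveform becomes 15,625 XADDs, which at 50 µs each fits under the 1 s budget; readers can still XRANGE over chunks and slice within them.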

Partial update of device local buffer in Vulkan

I'm generating vertex data into memory (from voxel data), creating a host-visible staging buffer (vkCreateBuffer), copying the vertex data into the staging buffer, creating a device-local buffer (vkCreateBuffer), and copying from the host-visible buffer to the device-local one (vkCmdCopyBuffer).
From what I understand there is a limit to how many buffers I can have, so I probably can't create one buffer per model.
For static models this is fine: just mash them together and upload. But I want to modify a few random vertices "regularly". For this I'm thinking of doing differential updates of device-local buffers, so that in a big buffer I only update the data that actually changed. Can this be done?
Also, if I don't render anything from the host-visible buffers, will they take up any resources on the GPU? Could I keep the host-visible buffers around so I don't have to recreate and fill them?
Yes, you should be able to do what you want. Essentially you are sending your updated data without a staging buffer and copy command (similar to how we generally populate uniform buffers, for example).
In pseudo-code:
update the data in your application
map the buffer
copy the changed data
unmap the buffer
synchronize
The last part is the tricky aspect. You could simply allocate the buffer's memory with VK_MEMORY_PROPERTY_HOST_COHERENT_BIT, which means host writes become visible to the device without an explicit flush, by the next call to vkQueueSubmit. So basically you would want to do the above before the next frame is rendered; see the spec. Note that the memory will also need VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT to be mappable.
Whether you make this 'dynamic' data part of your uber-buffer or a completely separate buffer is up to you.
This approach is used for things like particle systems that are managed by the application. Obviously, copying directly to a buffer that is visible to both the host and the GPU is (presumably) slower than the staging/copy approach.
If, however, you want to send some data via a staging buffer and copy command to a buffer that is only visible to the device, and then periodically modify some or all of that data (e.g. for deformable terrain), then that might be trickier; it's not something I have looked into.
EDIT: I've just seen the following post, which might be related: Best way to store animated vertex data
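One practical detail for the differential update: when only a few scattered vertices change, it helps to coalesce their indices into contiguous byte ranges first, so each frame maps and flushes (or vkCmdCopyBuffer-copies) a handful of ranges rather than one tiny range per vertex. A language-agnostic sketch of that coalescing in Python (the 32-byte vertex size and the helper name are assumptions):

```python
VERTEX_SIZE = 32   # assumed bytes per vertex

def dirty_ranges(changed_indices, merge_gap=0):
    """Coalesce changed vertex indices into contiguous (offset, size)
    byte ranges, suitable for per-range flush/copy operations.
    merge_gap allows fusing nearly-adjacent runs to reduce range count."""
    if not changed_indices:
        return []
    idx = sorted(set(changed_indices))
    ranges = []
    start = prev = idx[0]
    for i in idx[1:]:
        if i - prev <= 1 + merge_gap:
            prev = i               # extend the current run
        else:
            ranges.append((start * VERTEX_SIZE, (prev - start + 1) * VERTEX_SIZE))
            start = prev = i       # begin a new run
    ranges.append((start * VERTEX_SIZE, (prev - start + 1) * VERTEX_SIZE))
    return ranges

# Vertices 3-5 collapse into one 96-byte range; vertex 42 stays separate.
print(dirty_ranges([3, 4, 5, 42]))
```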

Aerospike: Device Overload Error when size of map is too big

We got a "device overload" error after the program had run successfully in production for a few months. We also found that some maps are very big, possibly holding more than 1,000 entries.
After inspecting the source code, I found that the cause of "device overload" is the write queue exceeding its limit, and the length of the write queue is related to the efficiency of processing.
So I checked the "particle_map" file, and I suspect that the whole map is rewritten even if we just want to insert one key-value pair into the map.
But I am not so sure about this. Any advice?
So I checked the "particle_map" file, and I suspect that the whole map is rewritten even if we just want to insert one key-value pair into the map.
You are correct. When using persistence, Aerospike does not update records in place. Each update/insert is buffered into an in-memory write block which, when full, is queued to be written to disk. This queue allows for short bursts that exceed your disk's max IO, but if the burst is sustained for too long, the server will begin to fail the writes with the 'device overload' error you have mentioned. How far behind the disk is allowed to get is controlled by the max-write-cache namespace storage-engine parameter.
You can find more about our storage layer at https://www.aerospike.com/docs/architecture/index.html.
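If sustained bursts are expected, the queue depth can be raised from its default in the namespace's storage-engine block; an illustrative aerospike.conf fragment (max-write-cache and write-block-size are real parameters, but the values here are examples to tune against your workload and available memory):

```
namespace production {
    storage-engine device {
        device /dev/sdb
        write-block-size 128K
        # How much write data may queue ahead of the disk before writes
        # start failing with 'device overload'; raising it trades RAM
        # for burst headroom but does not fix a sustained shortfall.
        max-write-cache 256M
    }
}
```

Note that a larger cache only absorbs bursts; if the device is slower than the steady-state write load, the queue will eventually fill regardless.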

What is a typical time for a USB write on an STM32?

I have an STM32F042 and I have loaded the example Custom HID firmware from the STM32F0x2_USB-FS-Device_Lib V1.0.0.
I then did some simple write transfers, sending just one or two bytes, and watched the response using Wireshark.
After doing about ten transfers, it looks like the time for a transfer to complete ranges between 15 ms and 31 ms, with the average somewhere around 25 ms.
I've been told in the past that a single fast USB transaction should take around 1 ms, so this feels to me about an order of magnitude too slow.
Is this a normal time for this chip? (And how would I go about figuring out what "normal" is?) Or is this abnormally slow?
Please check the configuration descriptor in the usbd_customhid.c file. The polling interval for each endpoint is set by the bInterval parameter; the default value in the examples (as I remember) is 0x20 (32 ms). Try changing it!
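For context on the numbers: on a full-speed device like the STM32F042, an interrupt endpoint's bInterval is the polling period expressed in 1 ms frames, so the default 0x20 gives 32 ms between host polls, which lines up with the observed 15-31 ms transfer times. A small sketch of that mapping:

```python
# For USB full-speed interrupt endpoints, bInterval is the polling
# period in 1 ms frames (valid range 1..255).
def fs_interrupt_period_ms(b_interval):
    if not 1 <= b_interval <= 255:
        raise ValueError("bInterval out of range for a full-speed endpoint")
    return b_interval

default_ms = fs_interrupt_period_ms(0x20)   # example firmware's default
fastest_ms = fs_interrupt_period_ms(0x01)   # fastest allowed polling
print(default_ms, fastest_ms)
```

With bInterval set to 0x01 the host polls every frame, which is where the "around 1 ms per transaction" expectation comes from.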