Triggering Lucid cameras with a Lidar sensor in a drone-mounted setup

I am working on a drone-mounted setup that consists of one Lidar sensor and two Lucid cameras. Currently, the sensors are triggered by a CPU that is also mounted on the drone. However, triggering the sensors this way takes computational power away from the CPU, which is also needed to process the data captured by the sensors.
I am looking for a solution to trigger the Lucid cameras with the Lidar sensor, in order to free up the computational power of the CPU. I have no prior experience in this area, and I would be grateful for any guidance or advice on how to resolve this issue.
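From what I have read so far, the camera side of a hardware trigger might be configured roughly like this. This is only a sketch: it assumes Lucid's Arena SDK Python bindings (arena_api) and the standard GenICam trigger nodes, and I have not verified the exact node or line names on our cameras.
```python
# Sketch: put the Lucid cameras into hardware-trigger mode so that a pulse on a
# GPIO line (e.g. a sync output from the lidar) starts each exposure.
# Assumes the arena_api package and standard GenICam node names; the exact
# node values and line wiring depend on the camera model.
from arena_api.system import system

devices = system.create_device()                     # enumerate connected Lucid cameras
for device in devices:
    nodemap = device.nodemap
    nodemap.get_node('TriggerSelector').value = 'FrameStart'
    nodemap.get_node('TriggerMode').value = 'On'
    nodemap.get_node('TriggerSource').value = 'Line0'        # line wired to the lidar pulse
    nodemap.get_node('TriggerActivation').value = 'RisingEdge'

# From here on, each rising edge from the lidar would expose one frame, and the
# CPU only reads out the images instead of generating the triggers itself.
```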
Thank you in advance for your time and help.
Sincerely,
Anh.

Related

Overhead of changing sensor sampling time

I am looking for the potential overhead of changing the sampling time (not the sampling rate) of sensors in embedded systems/robotics/IoT. For example, say the sensor is a camera connected to a Raspberry Pi capturing pictures every 100 ms, at 0 ms, 100 ms, 200 ms, 300 ms, ... What will happen if I change the sampling times to 50 ms, 150 ms, 250 ms, ...? Do I have to initialize the sensor again? Or does it reduce the performance of applications that are using the sensor? Any example is appreciated; you can consider any other type of sensor as well if you have a sensor/system in mind. Again, I am looking for the potential overhead of changing the sampling time.
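To make the scenario concrete, here is a minimal software-timed sketch of what I mean (illustrative only; capture_frame() stands in for whatever camera call is actually used): the period stays 100 ms and only the phase shifts by 50 ms.
```python
import time

PERIOD_S = 0.100      # sampling period stays 100 ms
PHASE_S = 0.050       # shift sampling instants from 0,100,200,... ms to 50,150,250,... ms

def capture_frame():
    """Placeholder for the real capture call (e.g. a picamera/OpenCV grab)."""
    pass

next_deadline = time.monotonic() + PHASE_S           # the phase shift is applied once here
while True:
    time.sleep(max(0.0, next_deadline - time.monotonic()))
    capture_frame()
    next_deadline += PERIOD_S                        # the period itself never changes
```
In a purely software-timed loop like this, the change is only a scheduling change; whether the sensor itself must be re-initialized, or whether in-flight frames get dropped, depends on the driver and hardware, which is exactly the overhead I am asking about.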

Can I use sensor fusion for multiple GPS receivers to improve my position estimate?

I am wondering if it makes sense to fuse multiple GPS signals to improve my position estimate. This works fine, for example, for acceleration sensors, but those sensors have white Gaussian noise.
GPS receivers mounted on the same board probably suffer from the same errors, such as drift or multi-path effects, which cannot be corrected by merely fusing the readings of these sensors. I imagine it like a constant offset in the same direction, which won't be corrected and just stays nearly the same.
Furthermore, I have different sensors I can mount on my drone, even an RTK sensor. In my opinion, it makes no sense to fuse a D-GPS with readings from an RTK GPS.
Please correct me if I am wrong.
Thank you in advance and I hope this forum is the right spot to ask that question.
Yes, you can. Use an EKF-based approach with onboard multi-GPS and multi-IMU.
DJI is doing this, but it can only protect against the failure of one sensor, not a systematic drift pattern. To avoid that, you need additional sources, such as visual odometry or lidar odometry, to fuse into the EKF. The GPS satellite count is a good measure of how bad the position is; it ranges from 0 to 15. So when everyone is at 15, trust GPS more (lower variance); when everyone is lower than 6, add a very high variance to the GPS source.
Yes, RTK might be better when you have a direct line of sight, but once out of sight, other GPS receivers might be better. So it totally depends on your use case.
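Not DJI's actual code, just a minimal sketch of that variance-scheduling idea: a scalar Kalman update on a single position axis, where the thresholds and variance values are purely illustrative.
```python
def gps_variance(sat_count, base_var=1.0):
    """Map satellite count to a measurement variance (illustrative numbers):
    low variance (high trust) with many satellites, huge variance below 6."""
    if sat_count >= 15:
        return base_var
    if sat_count < 6:
        return 1e6                        # GPS barely influences the filter
    return base_var + (15 - sat_count) * 10.0

def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update for one position axis."""
    K = P / (P + R)                       # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

# Fuse two receivers reporting different satellite counts into one estimate.
x, P = 0.0, 25.0                          # prior position (m) and variance (m^2)
for z, sats in [(1.2, 15), (3.5, 5)]:
    x, P = kalman_update(x, P, z, gps_variance(sats))
print(x, P)                               # the 5-satellite fix barely moves the estimate
```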

I have a project idea for a Smart Lighting System. How can I do the simulation for this?

Currently I am working on this project to provide the layout of a smart street light system with an energy-saving function based on a sensor network for energy management. The proposal is an autonomous, distributed-control lighting system in which, by means of a distributed sensor network, the lights turn on before pedestrians arrive and turn off or reduce power when no one is present.
I will be adding a few things to the project for energy reduction, but what I need to know is: how do I perform a simulation to show that this approach would reduce energy consumption?
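One simple way to start, before reaching for a full network simulator, is a discrete-time model that compares an always-on baseline against the sensor-driven scheme. All numbers below (lamp power, dim level, pedestrian probability) are made-up placeholders you would replace with your own data.
```python
import random

LAMP_W, DIM_W = 100.0, 20.0     # full and dimmed lamp power in watts (placeholders)
STEP_S = 1.0                    # one-second simulation steps
HOURS = 8                       # one night of operation
P_PEDESTRIAN = 0.02             # chance someone is near the lamp in any given second

def simulate(seconds, seed=0):
    random.seed(seed)
    baseline_j = smart_j = 0.0
    for _ in range(int(seconds)):
        occupied = random.random() < P_PEDESTRIAN
        baseline_j += LAMP_W * STEP_S                         # always on at full power
        smart_j += (LAMP_W if occupied else DIM_W) * STEP_S   # dim when nobody is present
    return baseline_j, smart_j

baseline_j, smart_j = simulate(HOURS * 3600)
print(f"baseline: {baseline_j / 3.6e6:.2f} kWh, smart: {smart_j / 3.6e6:.2f} kWh, "
      f"saving: {100 * (1 - smart_j / baseline_j):.0f}%")
```
From there you could replace the random pedestrian model with recorded traffic data, add the power draw of the sensors and radios, and sweep the dim level; a dedicated network simulator is only needed if you also want to study the communication between the lights.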

When is it needed to fuse IMU sensor data with GPS-RTK, and when is it not?

I'm using a high-accuracy GPS RTK setup to precisely locate a mobile robotic platform in the field (down to 10 cm accuracy). I also have a 9DOF IMU mounted on the platform (SparkFun 9DOF Razor IMU).
The question is: do I really need to perform sensor fusion between the IMU and GPS, like this ROS node does (http://wiki.ros.org/robot_localization), to estimate the robot pose? Or is it enough to read the pitch, yaw, and roll data from the IMU to get the heading, along with the GPS longitude, latitude, and altitude?
What cases make it essential to perform this type of fusion?
Thanks in advance
It is essential to perform fusion because:
1) Roll, pitch, and yaw data from the IMU are not perfect; they will drift over time due to gyro errors. The magnetometer in the IMU module limits this, but only crudely. Fusion allows the GPS RTK measurements to be used to continuously estimate the dominant error sources in the IMU and maintain better attitude information (see the sketch after this list).
2) The IMU supports position estimation when GPS-RTK is lost through signal blockage or any other outage, so that the robotic platform is not lost if and when GPS signals are interrupted.
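A toy illustration of point 1 (not what robot_localization does internally, just a complementary filter on heading with made-up numbers): integrating a biased gyro alone drifts without bound, while blending in an occasional absolute heading reference keeps the error small.
```python
import math

DT = 0.01               # 100 Hz IMU samples
GYRO_BIAS = 0.01        # rad/s gyro bias, the kind of error that causes drift
ALPHA = 0.5             # how strongly the 1 Hz absolute heading pulls the estimate back

true_yaw = gyro_only = fused = 0.0
for step in range(60 * 100):                       # simulate one minute
    true_rate = 0.1 * math.sin(step * DT)          # arbitrary true yaw motion
    gyro_meas = true_rate + GYRO_BIAS              # biased gyro reading
    true_yaw += true_rate * DT
    gyro_only += gyro_meas * DT                    # dead reckoning only: drifts ~0.6 rad/min
    fused += gyro_meas * DT
    if step % 100 == 0:                            # 1 Hz absolute heading (e.g. from the RTK track)
        fused += ALPHA * (true_yaw - fused)        # assume a perfect reference for simplicity

print(f"gyro-only error: {gyro_only - true_yaw:+.3f} rad, fused error: {fused - true_yaw:+.3f} rad")
```
A real fusion filter (EKF) does the same thing in a principled way: the absolute measurement continuously estimates and removes the gyro bias instead of just pulling the heading back.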

How to meter the power (watts) of PC components (CPU, memory, disk, etc.) in real time?

As the title says, I want to monitor the power (in watts) that some components consume, especially the CPU, memory, and disk.
When I use AIDA64, I see that under Computer/Sensor there is some data about power consumption. I want to know how it gets this data.
I already have some ideas, but I am not sure which is the best way to solve this:
1) There are sensors on the motherboard; we could use the values of those sensors to calculate the real-time power.
2) Depending on the OS, there are APIs that report CPU utilization, memory throughput, and disk I/O rates. Using this data, we could build a power-consumption model of the PC. If such APIs exist, where can I find them?
3) Maybe hardware manufacturers like Intel already record the power value in real time and put it into some special register in the hardware, and we could read it by mapping it into a special memory location.
In my opinion, the second way may be the approach that most monitoring software uses, but I just don't know where I can get those APIs.
What's more, our aim is to design OS-independent real-time power monitoring software. So if there are any better solutions to this problem, I would appreciate your help.
Hmmm. I wasn't sure if I should post this as a comment or an answer. It is an answer but in the negative.
At this time, you can't create an OS independent software-based non-intrusive power monitor. By non-intrusive, I mean that you are not putting special instrumentation on the motherboard and other hardware. This is because the power technology being used by modern processors is in rapid flux, each new generation making significant advances. Additionally, the amount of power related information available to software from the hardware (via PMU events and the like) is continually increasing as more silicon real estate becomes available. For example, I believe that in the most current processors, you can get direct thermal information for key parts of the processor silicon, and temperature, power and current readings from various parts of the core and uncore.
The best you can do is to abstract the top layer of your monitor from the lower layers. Then the top becomes OS / HW independent while the lower levels need to be platform dependent.
Check out the PAPI APIs. Note that the APIs appear to give you the world, but are really just an API set. Someone still has to implement what's on the other side of the API.
Now if you can do your own special instrumentation, many (most?) motherboards and other hardware have measurement points (some undocumented) that provide thermal, current (and so power) information. This information is important for debugging devices and platforms.
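As a concrete example of the platform-dependent lower layer: on Linux with a reasonably recent Intel CPU, the package energy counters (Intel RAPL) are exposed through the powercap sysfs interface, so a crude software-only power reading for the CPU package can be done by differencing the energy counter. The path and domain layout vary by kernel and CPU, so treat this as a sketch.
```python
import time

RAPL = "/sys/class/powercap/intel-rapl:0"     # package-0 RAPL domain (Linux + Intel only)

def read_energy_uj():
    """Cumulative package energy in microjoules since the counter last wrapped."""
    with open(f"{RAPL}/energy_uj") as f:
        return int(f.read())

e0, t0 = read_energy_uj(), time.monotonic()
time.sleep(1.0)
e1, t1 = read_energy_uj(), time.monotonic()

# Average package power over the interval (ignores counter wrap-around).
print(f"CPU package power: {(e1 - e0) / 1e6 / (t1 - t0):.1f} W")
```
On many systems DRAM has its own RAPL sub-domain, but disks and the rest of the board are not covered, which is why board-level measurement points or a model built from utilization counters are still needed for those components.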