Limiting a robotic arm's joint accelerations using a low-pass filter

I have implemented closed-loop orientation control of a 7-degree-of-freedom robotic arm. I have accomplished velocity limiting within the closed-loop system, but I also need to implement a low-pass filter on the joint velocities in MATLAB in order to limit the joint accelerations. As my knowledge of this type of filter is limited, does anyone have any insight into how to approach it?
Thank you for your time
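Not a full answer, but as a minimal sketch of the idea (in Python/NumPy for illustration, straightforward to port to MATLAB): a discrete first-order low-pass filter on the commanded joint velocities, plus a plain rate limiter that bounds the acceleration directly. The loop rate, time constant and acceleration limit below are made-up example values, not taken from the question.

    import numpy as np

    def lowpass_step(v_filt, v_cmd, dt, tau):
        """One step of a discrete first-order low-pass filter on joint velocities.
        v_filt: previously filtered velocities, v_cmd: raw commanded velocities,
        tau: time constant [s]; a larger tau gives smoother output but more lag."""
        alpha = dt / (tau + dt)
        return v_filt + alpha * (v_cmd - v_filt)

    def rate_limit_step(v_prev, v_cmd, dt, a_max):
        """Alternative: clamp the per-step velocity change so |dv/dt| <= a_max."""
        dv = np.clip(v_cmd - v_prev, -a_max * dt, a_max * dt)
        return v_prev + dv

    # illustrative values only
    dt, tau, a_max = 0.001, 0.05, 2.0   # 1 kHz loop, 50 ms time constant, 2 rad/s^2
    v_filt = np.zeros(7)
    v_cmd = np.array([0.5, -0.2, 0.1, 0.0, 0.3, -0.4, 0.2])
    v_filt = lowpass_step(v_filt, v_cmd, dt, tau)

The low-pass filter only attenuates fast changes (it does not enforce a hard bound), while the rate limiter guarantees the acceleration limit at the cost of distorting the command.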

Related

Is programming a voxel based graphics API theoretically possible?

This is entirely a theoretical question, because I understand that the time it would take to do such a thing would be ridiculous.
I've been working with "voxels" a lot lately, and the only way I can display them to a user is to either triangulate the visible surfaces or write a CPU ray tracer, but both come with their own problems.
Simply put: if we dismiss the storage space needed for voxel meshes and targeted a very specific GPU, would someone who wanted to create a graphics API like OpenGL, but with "true" voxel primitives that don't need to be converted, be able to make such a thing? Or are GPUs designed specifically for triangles, with no way to introduce a new base primitive?
It's possible, and it has already been done many times:
games like Minecraft, Space Engineers, ...
3D printing tools and slicers
MRI/PET scan tools
Yes, rendering on the GPU is possible with the two base methods you mention. Games usually transform the voxels into boundary-representation 3D geometry. With the rise of shaders, even ray tracers are now possible; here is mine:
simple GLSL voxel ray tracer
It uses the native OpenGL architecture and passes the geometry as a 3D texture. In order to obtain speed you need to add a BVH or similar spatial subdivision of the geometry...
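For illustration only (this is not the linked GLSL shader): a CPU sketch in Python of the core traversal idea, marching a ray through a boolean voxel grid until it hits a solid cell. The grid, step size and test scene are assumptions made up for this example; a real renderer would use a DDA traversal and a BVH/octree as mentioned above.

    import numpy as np

    def march_ray(grid, origin, direction, step=0.5, max_t=200.0):
        """Naive fixed-step ray march through a boolean voxel grid (True = solid).
        Returns the index of the first solid voxel hit, or None."""
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        p = np.asarray(origin, dtype=float)
        t = 0.0
        while t < max_t:
            i = np.floor(p).astype(int)
            if np.all(i >= 0) and np.all(i < grid.shape) and grid[tuple(i)]:
                return tuple(i)
            p += d * step
            t += step
        return None

    # tiny test scene: one solid voxel at (8, 8, 8)
    scene = np.zeros((16, 16, 16), dtype=bool)
    scene[8, 8, 8] = True
    print(march_ray(scene, origin=(0.5, 0.5, 0.5), direction=(1, 1, 1)))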
However, voxel-based tools have been around for quite some time. For example, many isometric games/engines are voxel based (a tile is a voxel), like this one:
Improving performance of click detection on a staggered column isometric grid
Also, do you remember UFO? It was playable on a 286 and it was also "voxel/tile"-based and isometric.

Forward and backward movement detection with IMU

We have an embedded device mounted in a vehicle. It has accelerometer, gyroscope and GPS sensors on board. The goal is to distinguish when the vehicle is moving forward and when it is moving backward (in reverse gear). The sensor's axes are aligned with the vehicle's axes.
Here are our observations:
It's not enough to check the direction of acceleration, because going backwards and braking while moving forward produce acceleration in the same direction.
We could say that if the GPS speed decreased from 70 to 60 km/h it was a braking event. But it becomes tricky when the speed is below 20 km/h: a decrease from 20 to 10 km/h is possible when going in either direction.
We can't rely on the GPS heading angle at low speeds.
How could we approach this problem? Any ideas, articles or research would be helpful.
You are looking for an Attitude and Heading Reference System (AHRS) implementation. Here's an open-source code library. It works by fusing the two data sources (IMU and GPS) to determine the location and the heading.
AHRS provides you with roll, pitch and yaw, which are the angles around the X, Y and Z axes of the IMU device.
There are different algorithms to do that; examples are the Madgwick and Mahony algorithms. They will provide you with a quaternion and Euler angles, which can easily help you identify the orientation of the vehicle at any time.
This is a cool video of an AHRS algorithm running in real time.
Similar question is here.
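As a small illustration of the quaternion output mentioned above (not specific to any one library), here is a sketch in Python of converting a unit quaternion, such as the one a Madgwick/Mahony filter produces, into roll/pitch/yaw, assuming the common ZYX (aerospace) convention:

    import math

    def quat_to_euler(w, x, y, z):
        """Unit quaternion (w, x, y, z) -> roll, pitch, yaw in radians (ZYX convention)."""
        roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
        pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
        yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
        return roll, pitch, yaw

    print(quat_to_euler(1.0, 0.0, 0.0, 0.0))   # identity orientation -> (0.0, 0.0, 0.0)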
EDIT
Without magnetometer data, you won't get high accuracy and your drift will increase over time.
Still, you can perform AHRS on 6 DoF (accelerometer XYZ and gyroscope XYZ) using the Madgwick algorithm. You can find an implementation here. If you want to dive into the theory of things, have a look at Madgwick's internal report.
A Kalman filter could be an option for merging your 6-DoF IMU data with the GPS data, which could dramatically reduce the drift over time. But that requires a good understanding of Kalman filters and probably a custom implementation.
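To make that last point concrete, here is a toy 1-D sketch in Python (not a full multi-state Kalman filter) of fusing integrated longitudinal acceleration with GPS speed to estimate a signed velocity, whose sign hints at forward vs. reverse. It assumes gravity/pitch compensation has already been done, and the noise variances are invented example values.

    import numpy as np

    def fuse_velocity(acc_long, gps_speed, dt, q=0.5, r=1.0):
        """Toy 1-D Kalman-style fusion.
        acc_long:  longitudinal acceleration samples [m/s^2], gravity/pitch compensated
        gps_speed: GPS speed magnitude per sample [m/s], np.nan when there is no fix
        q, r:      process / measurement noise variances (illustrative values)
        Returns the estimated signed velocity per sample."""
        v, p = 0.0, 1.0
        out = []
        for a, s in zip(acc_long, gps_speed):
            v += a * dt          # predict: integrate acceleration
            p += q * dt
            if not np.isnan(s):  # update: GPS measures |v|, so give it the predicted sign
                z = np.sign(v) * s if v != 0.0 else s
                k = p / (p + r)
                v += k * (z - v)
                p *= (1.0 - k)
            out.append(v)
        return np.array(out)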

How to implement lowpass filter to reduce noise in gyroscope values?

I am new to LabVIEW and I need help.
I am using a myRIO with a gyroscope, and when I display the gyroscope values I get noise.
My question is: how can I implement a low-pass filter to reduce the noise in the X, Y and Z rates of the gyroscope?
I searched a lot, but I did not understand how to determine the sampling frequency and the low and high cutoff frequencies.
Thank you so much.
If your data is noisy, you should try to fix the problem before you digitize it. If a physical low-pass filter will do the trick, install one. The better the signal before the DAQ, the better the data will be once it's digitized.
Some other signal-conditioning considerations: reduce the length of wire from the gyroscope to the DAQ to only what's necessary; if possible, eliminate any sources of noise from the environment (like any large rotating magnets; seriously, I once helped someone who was complaining about noise when they were using an unshielded wire next to an MRI machine); and if you're going to add any signal conditioning, try to amplify close to your sensor.
If you would still like to filter in software, there's an example included with LabVIEW that demonstrates both the point-by-point VIs and the array-based VIs. It's called PtByPt and Array Based Filter.vi and can be found in the Example Finder under Analysis, Signal Processing and Mathematics >> Filtering and Conditioning.
Please install this FREE toolkit from ni.com: http://sine.ni.com/nips/cds/view/p/lang/en/nid/212733
There are examples and good, ready-to-use applications showing how to use the myRIO gyroscope and how to do proper DSP.
Sampling frequency is how fast you sample; look for this value in the ADC settings. For the low and high cutoffs, experiment with the values. Doing an FFT on your signal may help you determine the spectral density and decide where to cut.
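Not LabVIEW, but here is a rough sketch in Python/SciPy of that workflow (inspect the spectrum with an FFT, then low-pass below a chosen cutoff). The sampling rate, cutoff and signal below are invented example values; use your actual ADC settings and recorded data.

    import numpy as np
    from scipy import signal

    fs = 100.0                       # sampling frequency [Hz]; check your ADC settings
    t = np.arange(0, 5, 1 / fs)
    gyro_x = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.random.randn(t.size)  # fake noisy rate

    # 1) look at the spectrum to see where the useful signal ends
    freqs = np.fft.rfftfreq(gyro_x.size, d=1 / fs)
    spectrum = np.abs(np.fft.rfft(gyro_x))   # plot freqs vs. spectrum to pick a cutoff

    # 2) low-pass Butterworth filter below the chosen cutoff
    cutoff = 5.0                              # [Hz], chosen from the spectrum
    b, a = signal.butter(2, cutoff, btype="low", fs=fs)
    gyro_x_filtered = signal.filtfilt(b, a, gyro_x)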

What frameworks for depth cameras are out there?

I want to evaluate the performance of several SDKs / frameworks for depth cameras. These cameras can use either time-of-flight or structured light.
The framework should be capable (at least) of person tracking / blob detection and gesture recognition.
So far I found the following frameworks:
OpenNI (structured light only)
Microsoft Kinect SDK (Kinect only)
Beckon SDK by Omek Interactive (ToF and structured light)
iisu by SoftKinetic (ToF and structured light)
Are there any other frameworks I should be aware of?
EDIT: I found this article by Techradar that seems to indicate that these are indeed the only options currently available.
Any feedback would be very much appreciated!
I have found some interesting links on this. You can take MIT's approach using CoDAC. They list lots of facts in this post; the most important ones I will post here.
9. What are limitations of this technique?
The main limitation of our framework is inapplicability to scenes with curvilinear objects, which would require extensions of the current mathematical model. Another limitation is that a periodic light source creates a wrap-around error as it does in other TOF devices. For scenes in which surfaces have high reflectance or texture variations, availability of a traditional 2D image prior to our data acquisition allows for improved depth map reconstruction as discussed in our paper.
10. What are advantages of this technique/device and how does it compare with existing TOF-based range sensing techniques?
In laser scanning, spatial resolution is limited by the scanning time. TOF cameras do not provide high spatial resolution because they rely on a low-resolution 2D pixel array of range-sensing pixels. CoDAC is a single-sensor, high spatial resolution depth camera which works by exploiting the sparsity of natural scene structure.
11. What is the range resolution and spatial resolution of the CoDAC system?
We have demonstrated sub-centimeter range resolution in our experiments. This is significantly better than the fundamental limit of about 10 cm that would arise from using a detector with a 0.7 nanosecond rise time if we were not using parametric signal modeling. The improvement in range resolution comes from the parametric modeling and deconvolution in our framework. We refer the reader to our publications for complete details and analysis.
We have demonstrated 64-by-64 pixel spatial resolution, as this is the spatial resolution of our spatial light modulator. Spatially patterning with a digital micromirror device (DMD) will enable much higher spatial resolution. Our experiments use only 205 projection patterns, which correspond to just 5% of the number of pixels in the reconstructed depth map. This is a significant improvement over raster scanning in LIDAR, and it is obtained without the 2D sensor array used in TOF cameras.
Also, another interesting project I found on YouTube uses libfreenect and libusb.
There is also dSensingNI which is described as
This work presents an approach to overcome the disadvantages of existing interaction frameworks and technologies for touch detection and object interaction. The robust and easy-to-use framework dSensingNI (Depth Sensing Natural Interaction) is described, which supports multitouch and tangible interaction with arbitrary objects. It uses images from a depth-sensing camera and provides tracking of users' fingers or palms of hands, and combines this with object interaction, such as grasping, grouping and stacking, which can be used for advanced interaction techniques.
So you have hit most of them out there, especially those that use the Kinect, but there are a few other options! Hope this helps!

Applying scrolling physics to an app

I believe this is a fairly simple question but I have no idea where to start.
I'm trying to implement a feature where an entity (such as an image) can be flicked across the screen so that it decelerates over time based on an initial (non-zero) speed and a coefficient of friction.
In other words, given an initial velocity and constant friction, how can I programmatically determine where an object will be at time t?
Feel free reply using pseudo-code or any programming language you're comfortable with.
Thanks guys
The equation is
s = u*t + 0.5*a*t*t
where,
s is displacement (i.e. position)
u is the initial speed (can be zero too actually)
a is the acceleration (if you want deceleration use a negative value instead)
t is the time elapsed
To account for friction, your a (on a horizontal surface) will be
a = -μg
where,
μ is the coefficient of friction
g is gravitational acceleration
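One thing the formula leaves implicit: friction only decelerates the object until its velocity reaches zero, at t_stop = u / (μ*g), after which it stays put. A small sketch in Python of the whole thing, with made-up pixel-unit values for the example:

    import math

    def flick_position(s0, u, mu, g, t):
        """Position of a flicked object at time t under constant friction.
        s0: initial position, u: initial signed speed, mu: coefficient of friction,
        g: gravitational acceleration in the same units as the positions."""
        if mu <= 0:
            return s0 + u * t                 # no friction: constant velocity
        a = -math.copysign(mu * g, u)         # deceleration opposes the motion
        t = min(t, abs(u) / (mu * g))         # object stops at t_stop = |u| / (mu*g)
        return s0 + u * t + 0.5 * a * t * t

    # example flick: 800 px/s, mu = 0.5, with g expressed in px/s^2 for screen units
    for t in (0.0, 0.1, 0.2, 0.5, 1.0):
        print(t, flick_position(0.0, 800.0, 0.5, g=2000.0, t=t))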