Change the range of the Kinect depth camera

What is the range of the depth camera in the Kinect, and can I change it?
I want to change the range of the depth data. Currently I can receive and process the data. I did some research on this subject, but I could not reach a conclusion.
Thanks...

The Kinect itself has limitations. If you are looking to track beyond what it is capable of, the wiki says: "The Kinect sensor has a practical ranging limit of 1.2–3.5 m (3.9–11.5 ft) distance when used with the Xbox software". Also, I don't think you can change the capability of the device at the hardware level. So if you are looking for a sub-range of this it is possible, but if the range lies outside of this it is impossible. If a sub-range is what you want, you can limit the Z values with a threshold, like:
if (kinectRealZ > yourClosestRange && kinectRealZ < yourFarthestRange)
{
    int zInRange = kinectRealZ;
}
Hope that makes my point clear.
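To make the thresholding concrete, here is a minimal C-style sketch that masks a whole depth frame to a sub-range. The function name, buffer layout, and millimetre units are my own assumptions for illustration; adapt them to whatever your Kinect SDK actually delivers.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical helper: zero out depth pixels outside [nearMM, farMM].
   Assumes a 16-bit depth buffer in millimetres, where 0 means
   "no reading", as in the raw Kinect stream. */
void maskDepthRange(uint16_t *depthMM, size_t pixelCount,
                    uint16_t nearMM, uint16_t farMM)
{
    for (size_t i = 0; i < pixelCount; i++) {
        if (depthMM[i] < nearMM || depthMM[i] > farMM)
            depthMM[i] = 0;
    }
}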

Related

Finding the Stokes Number of a Microcarrier Particle

I'm trying to model the flow and suspension of microcarriers (particles that are used as surfaces for cells to attach to and grow on) in a CFD application. I know some basic characteristics of the particles (they're called "Cytodex", about 180 µm in diameter, density 1.03 g/cm^3), but I'd like to find the Stokes number to determine how strongly they are affected by turbulence and movement of the fluid. Can somebody point me to how to approach this (or at least how to approximate it)? It's surprisingly hard to find any information for somebody like me who hasn't got a very strong background in fluid mechanics.
Here is the manufacturer's microcarrier manual. See page 62, Table 12 for Cytodex 1 physical properties.
https://www.gelifesciences.co.kr/wp-content/uploads/2016/07/023.8_Microcarrier-Cell-Culture.pdf
See this SlideShare, slide 15, for how to calculate the Stokes # for Cytodex 1 microcarriers: https://www.slideshare.net/rjrishabhjain/bs-4sedimentation?from_action=save
but for Cytodex 1 correct the values to d = 180 µm, cell culture media (nutrient broth) viscosity = 0.96 cP, media density ≈ 1.007 g/mL, and microcarrier density = 1.03 g/mL, to get a settling velocity of 0.062 cm/s = 3.72 cm/min. However, per the manufacturer's manual the settling velocity is 12-16 mL/min, so one of the two might be in error. I am still seeking an answer.
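For what it's worth, here is a minimal sketch of the arithmetic involved, using the property values quoted above: Stokes' law gives the settling velocity, and the particle relaxation time divided by a fluid time scale gives the Stokes number. The characteristic fluid velocity and length below are placeholders you would take from your own CFD setup.

#include <stdio.h>

/* Minimal sketch: Stokes' law settling velocity and Stokes number for
   a Cytodex 1 microcarrier. Material values are from the question and
   the manufacturer's manual; U and L are assumed placeholders. */
int main(void)
{
    const double g     = 9.81;      /* gravity, m/s^2 */
    const double d     = 180e-6;    /* particle diameter, m */
    const double rho_p = 1030.0;    /* particle density, kg/m^3 */
    const double rho_f = 1007.0;    /* media density, kg/m^3 */
    const double mu    = 0.96e-3;   /* media dynamic viscosity, Pa*s */

    /* Particle relaxation time: tau_p = rho_p * d^2 / (18 * mu) */
    double tau_p = rho_p * d * d / (18.0 * mu);

    /* Stokes' law terminal settling velocity */
    double v_settle = g * d * d * (rho_p - rho_f) / (18.0 * mu);

    /* Fluid time scale tau_f = L / U; replace these with the
       characteristic velocity and length of your bioreactor. */
    double U = 0.5;                 /* characteristic velocity, m/s (assumed) */
    double L = 0.05;                /* characteristic length, m (assumed) */
    double stokesNumber = tau_p * U / L;

    printf("tau_p    = %.3e s\n", tau_p);
    printf("v_settle = %.3e m/s (%.3f cm/min)\n",
           v_settle, v_settle * 100.0 * 60.0);
    printf("Stk      = %.3e\n", stokesNumber);
    return 0;
}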
For CFD modeling of microcarriers in bioreactors see: Loubière
https://pdfs.semanticscholar.org/d955/75b5c640c8268fd1ec51b2ce46862e7bbfbd.pdf
I have more related literature if you have interest.
DApple

How to get user location using accelerometer, gryoscope, and magnetometer in iPhone?

The simple equation for user location using the built-in inertial measurement unit (IMU), an approach also called pedestrian dead reckoning (PDR), is:
x = x(previous) + step_length * sin(heading direction)
y = y(previous) + step_length * cos(heading direction)
We can use an instance of the CMMotionManager class to access raw values from the accelerometer, gyroscope, and magnetometer. We can also get the attitude values as roll, pitch, and yaw. The step length can be calculated as the double square root of the acceleration. However, I'm confused about the heading direction. Some of the published literature has used a combination of magnetometer and gyroscope data to estimate the heading direction. I can see that CLHeading also gives heading information. There are some online tutorials, especially for the Android platform, like this one, that estimate user location. However, they do not give any proper mathematical explanation.
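For concreteness, here is a minimal sketch of the update step I mean. The fourth-root ("double square root") step-length form is the Weinberg model, and the gain K is an assumed calibration constant, not a value from any particular paper.

#include <math.h>

typedef struct { double x; double y; } PDRPosition;

/* One PDR update: heading is in radians, measured clockwise from
   north, so sin() advances x (east) and cos() advances y (north).
   aMax and aMin are the acceleration extremes over one detected step. */
PDRPosition pdrUpdate(PDRPosition previous, double aMax, double aMin,
                      double headingRadians)
{
    const double K = 0.5;   /* calibration gain (assumed) */
    double stepLength = K * pow(aMax - aMin, 0.25);

    PDRPosition next;
    next.x = previous.x + stepLength * sin(headingRadians);
    next.y = previous.y + stepLength * cos(headingRadians);
    return next;
}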
I've followed many online resources like this, this, this, and this to make a PDR app. My app can detect the steps and gives the step length properly; however, its output is full of errors. I think the error is due to the lack of a proper heading direction. I've used the following relation to get the heading direction from the magnetometer.
magnetometerHeading = atan2(-self.motionManager.magnetometerData.magneticField.y, self.motionManager.magnetometerData.magneticField.x);
Similarly, from gyroscope:
gyroscopeHeading += -self.motionManager.gyroData.rotationRate.z*180/M_PI;
Finally, I give proportional weight to the previous heading direction, the gyroscope heading, and the magnetometer heading as follows:
headingDirection = (2*headingDirection/5)+(magnetometerHeading/5)+(2*gyroscopeHeading/5);
I followed this method from a published journal paper. However, I'm getting a lot of error in my results. Is my approach wrong? What exactly should I do to get a proper heading direction so that the localization error is minimized?
Any help would be appreciated.
Thank you.
EDIT
I noticed that while calculating the heading direction from the gyroscope data, I didn't multiply the rotation rate (which is in radians/sec) by the delta time. To fix this, I added the following code:
CMDeviceMotion *motion = self.motionManager.deviceMotion;
[_motionManager startDeviceMotionUpdates];
if (!previousTime)
    previousTime = motion.timestamp;
double deltaTime = motion.timestamp - previousTime;
previousTime = motion.timestamp;
Then I updated the gyroscope heading with:
gyroscopeHeading += -self.motionManager.gyroData.rotationRate.z*deltaTime*180/M_PI;
The localization result is still not close to the real location. Is my approach correct?
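For comparison, here is a minimal sketch of a complementary-filter fusion of the two headings, which is one common way to combine a drifting gyroscope with a noisy magnetometer. The method name and the weight alpha are my own assumptions for illustration, not values from the paper.

// Complementary-filter heading sketch (Objective-C). Assumes a
// CMMotionManager property `motionManager` with device-motion and
// magnetometer updates already started; dt is the elapsed time in
// seconds since the previous call. Heading values are in degrees.
- (double)fusedHeadingFrom:(double)previousHeading deltaTime:(double)dt
{
    CMDeviceMotion *motion = self.motionManager.deviceMotion;
    CMMagnetometerData *mag = self.motionManager.magnetometerData;

    // Integrate the (bias-corrected) gyro rate about z over dt.
    double gyroDelta = -motion.rotationRate.z * dt * 180.0 / M_PI;

    // Absolute heading from the magnetometer.
    double magHeading = atan2(-mag.magneticField.y, mag.magneticField.x)
                        * 180.0 / M_PI;

    // Trust the gyro short-term and the magnetometer long-term;
    // alpha close to 1 favours the gyro (assumed tuning constant).
    const double alpha = 0.95;
    return alpha * (previousHeading + gyroDelta)
         + (1.0 - alpha) * magHeading;
}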

measuring time between two rising edges in beaglebone

I am reading a sensor's output, a 0-5 V square wave, on an oscilloscope. Now I want to measure the frequency of one period with a BeagleBone, so I need to measure the time between two rising edges. However, I don't have any experience working with the BeagleBone. Can you give me some advice or sample code for measuring the time between rising edges?
How deterministic do you need this to be? If you can tolerate some inaccuracy, you can probably do it on the main Linux OS; if you want to be fancy pants, this seems like a potential use case for the BBB's PRUs (which I unfortunately haven't used, so take this with substantial amounts of salt). I would expect you could write PRU code that sits in an infinite outer loop; inside it, one loop spins until the pin reads 0, the next spins until the pin reads 1 (this is the first rising edge), and then it counts until either the pin reads 0 again (the falling edge) or, with another loop, until the next rising edge. Either way, you can take the counter value and convert it directly into time: the PRU is stated as having a fixed frequency per instruction, 200 MHz (5 ns/instruction). Assuming your loop is something like
# starting with pin low
inner loop 1:
    registerX = loadPin
    increment counter
    jump if zero registerX to inner loop 1
# pin is now high
inner loop 2:
    registerX = loadPin
    increment counter
    jump if one registerX to inner loop 2
# pin is now low again
That should take 3 instructions per counter increment, so you can get the time as 3 * counter * 5 ns.
As suggested by Foon in his answer, the PRUs are a good fit for this task (although depending on your requirements it may be fine to use the ARM processor and standard GPIO). Please note that (as far as I know) both the regular GPIOs and the PRU inputs are based on 3.3V logic, and connecting a 5V signal might fry your board! You will need an additional component or circuit to convert from 5V to 3.3V.
I've written a basic example that measures timing between rising edges on the header pin P8.15 for my own purpose of measuring an engine's rpm. If you decide to use it, you should check the timing results against a known reference. It's about right but I haven't checked it carefully at all. It is implemented using PRU assembly and uses the pypruss python module to simplify interfacing.
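If the ARM-plus-GPIO route is accurate enough for your purposes, here is a minimal userspace sketch using the sysfs GPIO interface and poll(2). The GPIO number 48 is a placeholder (map it to your actual header pin), you must level-shift the 5 V signal down to 3.3 V first, and userspace timing carries scheduling jitter, so treat the result as approximate.

/* Sketch: time two rising edges on a sysfs GPIO. Assumes the pin is
   already exported and configured, e.g.:
     echo 48 > /sys/class/gpio/export
     echo in > /sys/class/gpio/gpio48/direction
     echo rising > /sys/class/gpio/gpio48/edge */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <poll.h>
#include <time.h>

int main(void)
{
    int fd = open("/sys/class/gpio/gpio48/value", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t[2];
    char buf[8];

    for (int i = 0; i < 2; i++) {
        /* Dummy read to clear any pending edge, then block on the next. */
        lseek(fd, 0, SEEK_SET);
        read(fd, buf, sizeof(buf));

        struct pollfd pfd = { .fd = fd, .events = POLLPRI | POLLERR };
        if (poll(&pfd, 1, -1) < 0) { perror("poll"); return 1; }
        clock_gettime(CLOCK_MONOTONIC, &t[i]);
    }

    double period = (t[1].tv_sec - t[0].tv_sec)
                  + (t[1].tv_nsec - t[0].tv_nsec) / 1e9;
    printf("period = %.6f s, frequency = %.2f Hz\n", period, 1.0 / period);
    close(fd);
    return 0;
}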

Different distance between two points on iOS and Android

I'm trying to measure the distance between two points (longitude, latitude). My problem is that I get different results on iOS than on Android.
I've checked it with this site and the result was that the Android values are correct.
I'm using this Core Location method to get the distance on iOS: distanceFromLocation:
Here are my test locations:
P1: 48.643798, 9.453735
P2: 49.495150, 9.782150
Distance iOS: 97717 m
Distance Android: 97673 m
How is this possible and how can I fix this?
So I was having a different issue and stumbled upon the answer to both of our questions:
On iOS you can do the following:
meters1 = [P1 distanceFromLocation:P2]
// meters1 is 97,717
meters2 = [P2 distanceFromLocation:P1]
// meters2 is 97,630
I've searched and searched but haven't been able to find a reason for the difference. Since they are the exact same points, it should show the same distance no matter which way you are traveling. I submitted it to Apple as a bug; they closed it as a duplicate but have still not fixed it. I would suggest that anyone who wants this fixed also submit it as a bug.
In the meantime, the average of the two is actually the correct value:
meters = (meters1 + meters2)/2
// meters (the average of the first two) is 97,673
Apparently Android does not have this problem.
The longitude and latitude are not all you need. You also have to use the same reference model, such as WGS84 or ETRS89.
The earth is not an exact ellipsoid, so you need a model; no model is entirely exact, and depending on which one you use, distances come out somewhat different.
Please make sure you use the same reference for iOS and Android.
There is more than one way to calculate the distance between longitude/latitude coordinates, depending on how you compensate for the curvature of the earth, and there's no right or wrong approach. Most likely the two platforms use slightly different models.
Here are some formulae for calculating it yourself: http://www.movable-type.co.uk/scripts/latlong.html
If you absolutely need the results to be the same, implement your own calculation using one of these formulae; then you can ensure you get the same result on both platforms.
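For instance, here is a minimal sketch of the haversine formula on a spherical Earth. The mean radius is the usual 6371 km constant; results will differ slightly from ellipsoidal (WGS84) distances, but they will at least match across platforms.

#include <math.h>

// Haversine great-circle distance in metres between two
// latitude/longitude pairs given in degrees.
static double haversineMeters(double lat1, double lon1,
                              double lat2, double lon2)
{
    const double R = 6371000.0;        // mean Earth radius, m
    const double toRad = M_PI / 180.0;
    double dLat = (lat2 - lat1) * toRad;
    double dLon = (lon2 - lon1) * toRad;
    double a = sin(dLat / 2) * sin(dLat / 2)
             + cos(lat1 * toRad) * cos(lat2 * toRad)
             * sin(dLon / 2) * sin(dLon / 2);
    return 2.0 * R * atan2(sqrt(a), sqrt(1.0 - a));
}

// haversineMeters(48.643798, 9.453735, 49.495150, 9.782150)
// gives roughly 97.6 km, close to both platforms' values.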

Objective C - Cross-correlation for audio delay estimation

I would like to know if anyone knows how to perform a cross-correlation between two audio signals on iOS.
I would like to align the FFT windows that I get at the receiver (I am receiving the signal from the mic) with the ones at the transmitter (which is playing the audio track), i.e. make sure that the first sample of each window (besides a "sync" period) at the transmitter will also be the first window at the receiver.
I injected into every chunk of the transmitted audio a known waveform (known in the frequency domain). I want to estimate the delay through cross-correlation between the known waveform and the received signal (over several consecutive chunks), but I don't know how to do it.
It looks like the function vDSP_convD could do it, but I have no idea how to use it, or whether I first have to perform a real FFT of the samples (probably yes, because I have to pass a double[]).
void vDSP_convD(
    const double __vDSP_signal[],
    vDSP_Stride  __vDSP_signalStride,
    const double __vDSP_filter[],
    vDSP_Stride  __vDSP_strideFilter,
    double       __vDSP_result[],
    vDSP_Stride  __vDSP_strideResult,
    vDSP_Length  __vDSP_lenResult,
    vDSP_Length  __vDSP_lenFilter
);
The vDSP_convD() function calculates the convolution of the two input vectors to produce a result vector. It's unlikely that you want to convolve in the frequency domain, since you are looking for a time-domain result; though if you already have FFTs for some other reason, you might choose to multiply them together rather than convolving the time-domain sequences (but in that case, to get your result, you will need to perform an inverse DFT to get back to the time domain).
Assuming, of course, I understand you correctly.
Then, once you have the result from vDSP_convD(), you would want to look for the highest value, which tells you where the signals are most strongly correlated. You might also need to cope with the case where the input signal does not contain enough of your reference signal; in that case you may wish, for example, to ignore values in the result vector below a certain level.
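To illustrate, here is a minimal sketch of that time-domain search built on vDSP_convD; the function name findDelay and the buffer handling are my own assumptions. Note that with a positive filter stride vDSP_convD performs correlation rather than convolution, which is what you want here, and the signal buffer must contain lenResult + lenFilter - 1 samples.

#include <Accelerate/Accelerate.h>
#include <stdlib.h>
#include <stdio.h>

// Hypothetical helper: estimate the delay (in samples) of `reference`
// within `received` via time-domain cross-correlation.
void findDelay(const double *received, vDSP_Length receivedLen,
               const double *reference, vDSP_Length referenceLen)
{
    // result[k] is the correlation of `reference` against the
    // received samples starting at offset k.
    vDSP_Length resultLen = receivedLen - referenceLen + 1;
    double *result = malloc(resultLen * sizeof(double));
    if (!result) return;

    // Positive filter stride (+1) selects correlation mode.
    vDSP_convD(received, 1, reference, 1, result, 1,
               resultLen, referenceLen);

    // The lag of the peak value is the estimated delay in samples.
    double peak;
    vDSP_Length peakIndex;
    vDSP_maxviD(result, 1, &peak, &peakIndex, resultLen);

    printf("estimated delay: %lu samples (peak %f)\n",
           (unsigned long)peakIndex, peak);
    free(result);
}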
Cross-correlation is the solution, yes, but there are many obstacles to handle. If you take samples straight from the audio files, they contain padding, which the cross-correlation function does not like. It is also very inefficient to correlate over all those samples; it takes a huge amount of time. I have made sample code that demonstrates the time shift between two audio files. If you are interested in the sample, look at my GitHub project.