Fall detection in LabVIEW with quaternions - labview

I have a wearable device that outputs quaternions, which I can read serially via LabVIEW. My task is to develop a threshold-based fall detection system based on these values, which I am not familiar with. The platform is LabVIEW.
Could someone guide me on where I should start? FYI, I don't have access to accelerometer values.
Any help is appreciated.
Here is a sample data I read from the device
id: 4 distance: 1048 q0: 646 q1: -232 q2: -119 q3: 717
id: 4 distance: 1067 q0: 645 q1: -232 q2: -80 q3: 722
id: 4 distance: 1109 q0: 645 q1: -232 q2: -81 q3: 722
id: 4 distance: 1036 q0: 645 q1: -232 q2: -80 q3: 722
Actually, it has become more of a mathematical question now. I was able to compute the Euler angles from the quaternions. I'm using the left-handed, or North-East-Down (NED), coordinate frame, and the device is fixed on the shoe. I'm assuming a forward or backward fall could be determined from the yaw angle and a lateral fall from the pitch angle. Is there a combination of roll and pitch that could be used to detect a fall?
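As a starting point for the quaternion-to-Euler step and the thresholding, here is a minimal sketch of the math (written in Python rather than LabVIEW, since it is only meant to show the formulas). The sample values come from the data above; the 60° threshold and the choice of angles to test are assumptions that would need tuning on real data.

import math

def quat_to_euler(q0, q1, q2, q3):
    # Convert a quaternion (w, x, y, z) to roll, pitch, yaw in degrees (ZYX convention).
    # Normalizing first also handles the device's fixed-point integer scaling.
    n = math.sqrt(q0*q0 + q1*q1 + q2*q2 + q3*q3)
    q0, q1, q2, q3 = q0/n, q1/n, q2/n, q3/n
    roll  = math.atan2(2*(q0*q1 + q2*q3), 1 - 2*(q1*q1 + q2*q2))
    pitch = math.asin(max(-1.0, min(1.0, 2*(q0*q2 - q3*q1))))  # clamp against rounding
    yaw   = math.atan2(2*(q0*q3 + q1*q2), 1 - 2*(q2*q2 + q3*q3))
    return math.degrees(roll), math.degrees(pitch), math.degrees(yaw)

# First sample from the question: q0: 646 q1: -232 q2: -119 q3: 717
roll, pitch, yaw = quat_to_euler(646, -232, -119, 717)

FALL_TILT_DEG = 60  # assumed threshold; must be tuned experimentally
fall_detected = abs(pitch) > FALL_TILT_DEG or abs(roll) > FALL_TILT_DEG
print(roll, pitch, yaw, fall_detected)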

Related

How to get NED velocity from GPS?

I have an Adafruit Ultimate GPS module which I am trying to fuse with a BNO055 IMU sensor, following this Kalman filtering example: https://github.com/slobdell/kalman-filter-example. Although most of his code is pretty clear, I looked at his input JSON file (https://github.com/slobdell/kalman-filter-example/blob/master/pos_final.json) and saw that he's getting velocity north, velocity east and velocity down from the GPS module. I looked at the NMEA messages and none seem to give me that. What am I missing? How do I get these directional velocities?
Thanks!
pos_final.json is not the input file, but the output file. The input file is taco_bell_data.json and is found in the tar.gz archive. It contains the following variables:
"timestamp": 1.482995526836e+09,
"gps_lat": 0,
"gps_lon": 0,
"gps_alt": 0,
"pitch": 13.841609,
"yaw": 225.25635,
"roll": 0.6795258,
"rel_forward_acc": -0.014887575,
"rel_up_acc": -0.025188839,
"abs_north_acc": -0.0056906715,
"abs_east_acc": 0.00010974275,
"abs_up_acc": 0.0040153866
He measures position with a GPS and orientation/acceleration with an accelerometer. The NED velocities that are found in pos_final.json are estimated by the Kalman filter. That's one of the main tasks of a Kalman filter (and other observers): to estimate unknown quantities.
A GPS will often output velocities, but they will be relative to the body of the object. You can convert the body-relative velocities to NED velocities if you know the orientation of the body (roll, pitch and yaw). Let's say you have a drone moving at heading 030° and the GPS says the forward velocity is 1 m/s; the drone will then have the following North velocity:
vel_north = 1 m/s * cos(30°) = 0.87 m/s
and the following East velocity:
vel_east = 1 m/s * sin(30°) = 0.5 m/s
This doesn't take into account roll and pitch. To take roll and pitch into account you can take a look at rotation matrices or quaternions on Wikipedia.
The velocities are usually found in the VTG telegram the GPS outputs, but it is not always output: the GPS has to have that feature and it has to be enabled. The RMC telegram can also be used.
The velocities from the GPS are often very noisy, which is why a Kalman filter is typically used instead of converting the body-relative velocities to NED velocities with the method above. The GPS velocities will work fine at higher speeds, though.
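To illustrate the heading-based conversion described above (ignoring roll and pitch), here is a minimal sketch; the function name and example values are only illustrations.

import math

def speed_heading_to_ned(speed_ms, heading_deg):
    # Resolve a forward/ground speed and heading (degrees clockwise from north)
    # into north and east velocity components.
    h = math.radians(heading_deg)
    return speed_ms * math.cos(h), speed_ms * math.sin(h)

# The drone example from the answer: 1 m/s at heading 030°
vel_north, vel_east = speed_heading_to_ned(1.0, 30.0)
print(round(vel_north, 2), round(vel_east, 2))  # 0.87, 0.5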

How would you plot a pandas series of floats which really stand for a categorical variable?

I am learning Pandas exploring a Google Play installs dataset on kaggle:
https://www.kaggle.com/lava18/google-play-store-apps
One of the columns is "Installs". I have converted the values from the original object dtype to float to perform basic descriptive statistics, but when I look at the content:
0.000000e+00 15
1.000000e+00 67
5.000000e+00 82
1.000000e+01 386
5.000000e+01 205
1.000000e+02 719
5.000000e+02 330
1.000000e+03 907
5.000000e+03 477
1.000000e+04 1054
5.000000e+04 479
1.000000e+05 1169
5.000000e+05 539
1.000000e+06 1579
5.000000e+06 752
1.000000e+07 1252
5.000000e+07 289
1.000000e+08 409
5.000000e+08 72
1.000000e+09 58
Name: Installs, dtype: int64
It is clear that Google does not give an exact number but rather a "bin".
Plotting it with this basic command:
apps['Installs'].plot.bar()
yields an almost unintelligible image.
Suggestions for a more readable presentation?
Suggestions to graphically show the different distribution of a subset of the data (e.g. only the "Medical" app category data)?
Thank you very much.
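For reference, here is a minimal sketch of the conversion and of a bar plot of the binned counts, assuming the column names from the Kaggle CSV ("Installs", "Category") and the file name googleplaystore.csv; the cleaning step is an assumption about how the original strings look (e.g. "10,000+").

import pandas as pd
import matplotlib.pyplot as plt

apps = pd.read_csv("googleplaystore.csv")

# Strip '+' and ',' and coerce the install "bins" to floats (malformed rows become NaN).
apps["Installs"] = pd.to_numeric(
    apps["Installs"].str.replace("[+,]", "", regex=True), errors="coerce")

# Treat the bins as ordered categories: count each bin and give the ticks readable labels.
counts = apps["Installs"].value_counts().sort_index()
counts.index = counts.index.map(lambda x: f"{int(x):,}")
counts.plot.bar(figsize=(10, 4), title="Install bins (all apps)")
plt.tight_layout()
plt.show()

# Same idea for a subset, e.g. the medical apps (category labels assumed upper-case in this CSV):
medical = apps.loc[apps["Category"] == "MEDICAL", "Installs"]
medical.value_counts().sort_index().plot.bar(title="Install bins (Medical)")
plt.show()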

GTX 970 bandwidth calculation

I am trying to calculate the theoretical bandwidth of the GTX 970. As per the specs given at:
http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-970/specifications
Memory clock is 7 Gbps
Memory bus width = 256 bits
Bandwidth = 7 * 256 * 2 / 8 (*2 because it is DDR)
= 448 GB/s
However, in the specs it is given as 224 GB/s.
Why is there a factor-of-2 difference? Am I making a mistake? If so, please correct me.
Thanks
The 7 Gbps seems to be the effective clock, i.e. including the data rate. Also note that the field explanation for this Wikipedia list says that "All DDR/GDDR memories operate at half this frequency, except for GDDR5, which operates at one quarter of this frequency", which suggests that all GDDR5 chips are in fact quad data rate, despite the DDR abbreviation.
Finally, let me point out this note from Wikipedia, which disqualifies the trivial effective clock * bus width formula:
For accessing its memory, the GTX 970 stripes data across 7 of its 8 32-bit physical memory lanes, at 196 GB/s. The last 1/8 of its memory (0.5 GiB on a 4 GiB card) is accessed on a non-interleaved solitary 32-bit connection at 28 GB/s, one seventh the speed of the rest of the memory space. Because this smaller memory pool uses the same connection as the 7th lane to the larger main pool, it contends with accesses to the larger block reducing the effective memory bandwidth not adding to it as an independent connection could.
The clock rate reported is an "effective" clock rate and already takes the transfer on both the rising and falling edges into account, so the extra factor of 2 for DDR in your formula double-counts it.
Some discussion on devtalk here: https://devtalk.nvidia.com/default/topic/995384/theoretical-bandwidth-vs-effective-bandwidth/
In fact, your formula is correct, but the memory clock is wrong: the GeForce GTX 970's actual memory clock is 1753 MHz (see https://www.techpowerup.com/gpu-specs/geforce-gtx-970.c2620), and the 7 Gbps in the spec is already the effective (quad-pumped) per-pin data rate for GDDR5.
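Putting the numbers together as a quick sanity check (figures taken from the question and the TechPowerUp page):

actual_clock_hz  = 1753e6   # real memory clock
data_rate_factor = 4        # GDDR5 transfers 4 bits per pin per clock
bus_width_bits   = 256

effective_rate_bps = actual_clock_hz * data_rate_factor           # ~7.0 Gbps per pin
bandwidth_gb_s = effective_rate_bps * bus_width_bits / 8 / 1e9     # bits -> bytes

print(round(effective_rate_bps / 1e9, 2), "Gbps per pin")  # 7.01
print(round(bandwidth_gb_s, 1), "GB/s")                    # 224.4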

Physics - motion in a straight line: body dropped

A tennis ball is dropped onto the floor from a height of 4 m. It rebounds to a height of 3 m. If the ball was in contact with the floor for 0.010 s, what was its average acceleration during contact?
acceleration = [√(2*g*h_rebound) + √(2*g*h_drop)] / t = [√(2*9.8*3) + √(2*9.8*4)] / 0.01 ≈ 1652 m/s²
I have a doubt about the time used. In the expression for acceleration, "t" is the time required to travel the distance, so in that case
the acceleration should be = (velocity of fall + velocity of rebound) / (time required for falling + rebounding),
but here we used the time the ball was in contact with the floor instead of the total time. What is the concept/logic behind it?
Please guide me on the correct way to approach this.
For the falling ball V^2 = 2*g*h so V at first contact with ground is sqrt(78.4) = 8.85 m/s downwards.
For the rising ball, the same argument can be used to show V = 7.67 m/s upwards.
So deltaV is 8.85+7.67 = 16.52 m/s.
But deltaV = a*t, so a = deltaV/t = 16.52/0.01 = 1652 m/s² (roughly 168.6 times the gravitational acceleration).
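As a quick numeric check of that arithmetic:

import math

g, h_drop, h_rebound, t_contact = 9.8, 4.0, 3.0, 0.010

v_down = math.sqrt(2 * g * h_drop)     # speed just before impact, ~8.85 m/s
v_up   = math.sqrt(2 * g * h_rebound)  # speed just after rebound, ~7.67 m/s

# The change in velocity happens during the contact time, so that is the t to divide by.
a_avg = (v_down + v_up) / t_contact
print(round(a_avg), "m/s^2")  # ~1652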
I would suggest that if you want to understand this a little better, you should begin by searching for a tutorial explaining kinematics.

How to convert 1998 USGS map coordinates to current Google Earth coordinates (per changes to North American datums, etc)?

How to convert 1998 USGS map coordinates to current Google Earth coordinates (per changes to North American datums, etc)? I looked at other possible questions/answers put forth, but I'm not exactly sure what I'm looking for....
I've got 1998 USGS coordinates for a location/specific object that are considerably off the mark/object if you look at these coordinates on Google Earth - so I know there is some conversion/correction to be done...
Any help is appreciated.
Google Earth uses WGS84 coordinates, so in future please call them WGS84 (coordinates).
USGS 1998 maps, I think, use NAD83 (1998) coordinates. Make sure this is correct: find out which map datum the coordinates you got are based on.
Ask your data provider: "Which map datum are the coordinates based on?"
According to the link below, and my knowledge, there should be only a few centimeters of difference between NAD83 and WGS84:
http://en.wikipedia.org/wiki/North_American_Datum#North_American_Datum_1983_and_WGS84
So I think your coordinates could be in the NAD27 datum. (Check with an online NAD27-to-WGS84 conversion and see if the coordinates now match when you use the WGS84 values in Google Earth.)
In that case you need a NAD27-to-WGS84 conversion. This is not easy; it needs so-called grid files.
You will not be able to calculate that by hand; you need a library for coordinate transformations.
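For example, a minimal sketch using the pyproj library (EPSG:4267 is NAD27, EPSG:4326 is WGS84); the sample point is the NADCON test coordinate quoted in the update below, and the exact shift you get depends on which transformation grids PROJ has installed.

from pyproj import Transformer

# always_xy=True means arguments are given as (longitude, latitude).
transformer = Transformer.from_crs("EPSG:4267", "EPSG:4326", always_xy=True)

# NAD27 point: 43 01 32.28 N, 72 25 44.46 W, converted to decimal degrees.
lat_nad27 = 43 + 1/60 + 32.28/3600
lon_nad27 = -(72 + 25/60 + 44.46/3600)

lon_wgs84, lat_wgs84 = transformer.transform(lon_nad27, lat_nad27)
print(lat_wgs84, lon_wgs84)  # shifted by roughly 40 m from the NAD27 input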
Update
The coordinate that you are using is simply wrong.
The error is not caused by any map datum conversion:
for that location the difference between NAD27 and NAD83/WGS84 is only 39 meters (= 128 ft).
See the output of NADCON:
NAD 27 datum values: 43 01 32.28000 72 25 44.46000
NAD 83 datum values: 43 01 32.56771 72 25 42.77644
NAD 83 - NAD 27 shift values: 0.28771 -1.68356(secs.)
8.879 -38.117 (meters)
Magnitude of total shift: 39.138(meters)