Quantity of motion from quaternions

I made some recordings from a head tracker that provides the 4 components of a quaternion, which are saved in a CSV file (each row is one quaternion plus a timestamp).
I need to calculate, for the whole recording, how much the head moved. This is for an experiment in which I would like to see whether the head moved more or less under one condition compared to another.
What is the best way to get a single quantity for each recording?
I have some proposals, but I do not know how appropriate they are:
PROPOSAL 1) I calculate the cumulative sum of the absolute value of the derivatives for each quaternion value, then I sum the 4 sums together to get a single value
PROPOSAL 2) I calculate the cumulative sum of the absolute value of the derivatives of the norm

Sounds like you just want a rough estimate of total angular movement as a single value. One way is to take the minimum rotation angle between consecutive quaternion samples and then just add up those angles. E.g., suppose two consecutive quaternion samples are q1 and q2. Then compute the quaternion product q = q1 * inv(q2); your delta-angle for that step is 2*acos(abs(qw)). Do this for each step and add up all the delta angles.
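For concreteness, here is a minimal Python sketch of that procedure, assuming the CSV has columns named timestamp, qw, qx, qy, qz (the column names are an assumption; adjust them to your file):

import numpy as np
import pandas as pd

def total_rotation_angle(csv_path):
    # Assumed column names; adjust to the actual header of your recording.
    q = pd.read_csv(csv_path)[["qw", "qx", "qy", "qz"]].to_numpy()
    q /= np.linalg.norm(q, axis=1, keepdims=True)   # make sure quaternions are unit length

    total = 0.0
    for q1, q2 in zip(q[:-1], q[1:]):
        # w component of q1 * inv(q2) (= conjugate for unit quaternions) is just the dot product
        w = np.dot(q1, q2)
        # delta angle = 2*acos(|w|); the clip guards against values slightly above 1 from rounding
        total += 2.0 * np.arccos(np.clip(abs(w), 0.0, 1.0))
    return total   # total angular movement in radians

This gives one number per recording (in radians), which is comparable across conditions as long as the recordings have similar durations or you normalise by duration.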

Related

How to aggregate edge-based simulation output to grid cells?

I have a network that has to be divided into a grid of square cells (200 x 200 m). Each cell includes sub-segments of the edges.
I have generated the simulation data output and used sumolib to extract edge-based output. I have to calculate the average traffic volume in each cell (not edge), measured in vehicles/second.
This is part of the script I have written.
Extract edge-based density and speed values:

import sumolib

edgeDataOutput = {}
for interval in sumolib.output.parse('cairo.edgeDataOutput.xml', 'interval'):
    for edge in interval.edge:
        edgeDataOutput[edge.id] = (edge.density, edge.speed)
After saving density and speed into edgeDataOutput, I have to aggregate into cells and calculate the average traffic volume in each cell:
for cellID in ids:
    density = 0
    speed = 0
    n = 0  # avg. traffic volume
    for edgeID in cell_edgeMap[cellID].keys():
        if edgeID in edgeDataOutput:
            # edgeDataOutput stores (density, speed), so index 0 is density and 1 is speed
            density += float(edgeDataOutput[edgeID][0])
            speed += float(edgeDataOutput[edgeID][1])
            n += (float(edgeDataOutput[edgeID][0]) / 1000) * float(edgeDataOutput[edgeID][1])  # traffic vol = (density/1000)*speed
    densities.append(int((density / len(cell_edgeMap[cellID])) + 0.5))
    speeds.append(int((speed / len(cell_edgeMap[cellID])) + 0.5))
    numOfVehicles.append(int(n / len(cell_edgeMap[cellID])))
As you can see from the code, I sum up the density and speed values of each edge that is in the cell, then divide by the number of edges inside the cell to get the mean value:
density at cell (veh/km) = sum(density at each edge inside cell) / number of edges inside cell
speed at cell (m/s) = sum(speed at each edge inside cell) / number of edges inside cell
And I am using the following formulas to calculate the traffic volume at each cell:
avg. traffic volume at cell (veh/s) = sum(avg. traffic volume at each edge inside cell) / number of edges inside cell
avg. traffic volume at edge (veh/s) = density at edge (veh/km) * speed at edge (m/s) / 1000
I just want to make sure that I am using the right formula.
It is not easy to answer whether you are doing the right thing, because averaging over both time and space is always hard. Furthermore, it is not clear what you are trying to measure. The traffic volume usually denotes the total (or average) number of vehicles and is measured without a unit (so just a number of vehicles). The traffic flow is measured in vehicles per time unit but is usually applied only to a cross-section, not to an area.
If you want the average number of cars in the cell, it should suffice to divide the sum of the sampleSeconds values by the length of the interval. The second value needs a more in-depth discussion, but I would probably at least multiply it by the edge length when summing up.
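As a rough sketch of the first suggestion (average number of vehicles per cell), assuming the edgeData output contains the standard sampleSeconds, begin and end attributes, and reusing the ids and cell_edgeMap structures from the question:

import sumolib

# Average number of vehicles per cell = sum of edge sampleSeconds / interval length.
# This assumes a single aggregation interval, as in the question's snippet.
edgeSampleSeconds = {}
for interval in sumolib.output.parse('cairo.edgeDataOutput.xml', 'interval'):
    intervalLength = float(interval.end) - float(interval.begin)
    for edge in interval.edge:
        edgeSampleSeconds[edge.id] = float(edge.sampleSeconds)

avgVehiclesPerCell = {}
for cellID in ids:
    seconds = sum(edgeSampleSeconds.get(edgeID, 0.0) for edgeID in cell_edgeMap[cellID])
    avgVehiclesPerCell[cellID] = seconds / intervalLength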

Finding the transition point of a data slope

I am wondering if there is a method to approach this problem.
The reason I need this is because for a certain trend of data I need to use a specific formula and for the next trend of the data I need to use a different formula.
Also, the data is not simple but there are two distinct slopes.
All data points are in Excel cells. I haven't started the code yet. I am thinking about taking data points (0,1,2,3,4) and finding the slope, then moving the window by 1 (1,2,3,4,5), somehow calculating the difference between the two slopes, and, when the difference is significant, calling that the transition point.
You may be able to reduce the problem to finding inflection points. These can be defined as points where the data flattens briefly to either resume a trend, change it (but in the same direction), or reverse it. You can do this by finding small time clusters with a slope of zero. A better idea might be to divide your y data into horizontal bins: if a certain threshold number of data points in a bin is reached, a change in trend is in progress. You can vary the inflection sensitivity by varying the bin size and/or the minimum number of points in a bin.
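A rough Python sketch of the binning idea, where bin_size and min_points are illustrative parameters you would tune for your data:

import numpy as np

def find_flat_regions(y, bin_size=0.5, min_points=5):
    # Assign each y value to a horizontal bin; a bin that collects many samples
    # suggests the data has flattened near that y level.
    y = np.asarray(y, dtype=float)
    bins = np.floor(y / bin_size).astype(int)
    flat = []
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        if len(idx) >= min_points:
            flat.append((idx[0], idx[-1]))   # first and last sample index falling in the bin
    return flat

The returned index ranges are candidate transition regions; you may also want to require the indices within a bin to be contiguous in time, so that points from two different trends landing in the same bin are not counted together.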

Is there any available DM script that can compare two images and tell the difference

Is there any available DM script that can compare two images and tell the difference?
I mean a script that can compare two or more images and determine their similarity; for example, if 95% of the area of one image is the same as in another image, then the similarity of these two images is 95%.
The script should be able to compare the brightness and contrast distribution of the images.
Thanks,
This question is a bit ill-defined, as "similarity" between images depends a lot on what you want.
If by "95% of the area is the same" you mean that 95% of the pixels are of identical value in images A & B, you can simply create a mask and sum() it to count the number of pixels, i.e.:
sum( abs(A-B)==0 ? 1 : 0 )
However, this will utterly fail if the images A & B are shifted with respect to each other, even by a single pixel. It will also fail if A & B have the same contrast but different absolute values.
I guess the intended question was how to find the similarity of two images in a fuzzy way.
For this, one way is to do a cross-correlation. DM has a function for it:
image xcorr = CrossCorrelate(ref, img)
In xcorr, the peak position gives the x- and y-shift between the two images, and the peak intensity gives the "similarity" of the two.
If you know there is no shift between the two, you can just do the sum of the multiplication:
number similarity1=sum(img1*img2)
Another way to measure similarity is to calculate the Euclidean distance between the two:
number similarity2 = sqrt(sum((img1-img2)**2))
"similarity2" calculates the "pure" similarity. "similarity1" is the pure similarity plus the mean intensity of img1 and img2. The difference is essentially this,
(a-b)**2=a**2+b**2-2*a*b.
The left term is "similarity2", the last term on the right is the "crosscorrelation" or "similarity1".
I think "similarity1" is called cross-correlation, "similarity2" is called correlation coefficient.
In example comparing two diffraction patterns, if you want to compute the degree of similarity, use "similarity2". If you want to compute the degree of similarity plus a certain character of the diffraction pattern, use "similarity1".
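The algebra behind that comparison is easy to check numerically; here is a short numpy sketch (not DM script) confirming that the sum of squared differences splits into the two self terms minus twice the cross term:

import numpy as np

a = np.random.rand(256, 256)
b = np.random.rand(256, 256)

ssd   = np.sum((a - b) ** 2)   # squared Euclidean distance ("similarity2" squared)
cross = np.sum(a * b)          # cross term ("similarity1")
assert np.isclose(ssd, np.sum(a ** 2) + np.sum(b ** 2) - 2 * cross)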

Kinect normalize depth

I have some Kinect data of somebody standing (reasonably) still and performing sets of punches. It is given to me as an x, y, z co-ordinate for each joint, of which there are 20, so I have 60 data points per frame.
I'm trying to perform a classification task on the punches, but I'm having some problems normalising my data. As you can see from the graph, there are sections with much higher 'amplitude' than the others; my belief is that this is due to how close the person was to the Kinect sensor when the readings were taken. (The graph is actually the first principal coefficient obtained by PCA for each frame; multiple sequences of the same punch are strung together in this graph.)
Looking back at the data files, it looks like those that are 'out' have a z co-ordinate (depth from sensor) of ~2.7, whereas the others tend to hover around 3.3-3.6.
How can I perform a normalization with the depth values to make them closer to each other for each sequence? I've already tried differentiation to get the velocity; although it helps to normalise, the output actually ends up too similar and makes it very hard to classify.
Edit: I should mention I am already using a normalization method by subtracting the hip position from each joint in an attempt to make the co-ordinates relative.
The Kinect can output some strange values when the tracked person is standing near the edges of the Kinect's field of view. I would either completely ignore these data points or replace them with an average of the previous 2 and next 2 values.
For example:
1,2,1,12,1,2,3
Replace 12 with (2 + 1 + 1 + 2) / 4 = 1.5
You can basically do this with the whole array of values you have; this way you get a more normalised line/graph.
You can also use the clippedEdges value to determine whether one or more joints are outside the view.
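A minimal Python sketch of that neighbour-averaging idea for a single coordinate stream; the spike threshold is an assumption and would need tuning for real joint data:

import numpy as np

def replace_spikes(values, threshold=3.0):
    # Replace samples that jump far away from the average of the
    # two previous and two next samples (assumed spike criterion).
    v = np.asarray(values, dtype=float).copy()
    for i in range(2, len(v) - 2):
        local_avg = np.concatenate([v[i-2:i], v[i+1:i+3]]).mean()
        if abs(v[i] - local_avg) > threshold:
            v[i] = local_avg
    return v

# Example from the answer: 1,2,1,12,1,2,3 -> the 12 becomes (2+1+1+2)/4 = 1.5
print(replace_spikes([1, 2, 1, 12, 1, 2, 3]))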

Efficient computation of "variable (number of points included)" moving average in R

I'm trying to implement a variable exponential moving average on a time series of intraday data (i.e. 10-second bars). By variable, I mean that the size of the window included in the moving average depends on another factor (e.g. volatility). I was thinking of the following:
MA(t) = alpha(t) * price(t) + (1 - alpha(t)) * MA(t-1),
where alpha corresponds, for example, to a changing volatility index.
In a backtest on huge series (more than 100,000 points), this computation causes me trouble. I have the complete vectors alpha and price, but for the current value of MA I always need the value just calculated before. Thus, so far I do not see a vectorized solution.
Another idea I had was to directly apply the implemented EMA(.., n=f()) function to every data point, always with a different value for f(). But I have not found a fast solution for that either.
It would be very kind if somebody could help me with my problem. Other suggestions on how to construct a variable moving average would also be great.
Thx a lot in advance
Martin
A very efficient moving average operation is also possible via filter():
## create a weight vector -- this one has equal weights; other schemes are possible
## (nobs is the number of observations in the averaging window)
weights <- rep(1/nobs, nobs)
## and apply it as a one-sided moving average calculation, see help(filter)
movavg <- as.vector(filter(somevector, weights, method="convolution", sides=1))
That was left-sided only; other choices are possible.
For timeseries, see the function rollmean in the zoo package.
You actually don't calculate a moving average, but some kind of weighted cumulative average. A (weighted) moving average would be something like:
price <- runif(100, 10, 1000)
alpha <- rbeta(100, 1, 0.5)

tp <- embed(price, 2)
ta <- embed(alpha, 2)

MA1 <- apply(cbind(tp, ta), 1, function(x){
  # x[1:2] are the two prices, x[3:4] the two alphas; rescale the weights so they sum to 2
  weighted.mean(x[1:2], w = 2 * x[3:4] / sum(x[3:4]))
})
Make sure you rescale the weights so they sum to the amount of observations.
For your own calculation, you could try something like:
n <- length(price)
MAt <- price * alpha

ma.MAt <- matrix(rep(MAt, each = n), nrow = n)
ma.MAt[upper.tri(ma.MAt)] <- 0

tt1 <- sapply(1:n, function(x){
  tmp <- rev(c(rep(0, n - x), 1, cumprod(rev(alpha[1:(x - 1)])))[1:n])
  sum(ma.MAt[x, ] * tmp)
})
This calculates the averages as linear combinations of MAt, with weights defined by the cumulative product of alpha.
On a side note: I assumed the index to lie somewhere between 0 and 1.
I just added a VMA function to the TTR package to do this. For example:
library(quantmod) # loads TTR
getSymbols("SPY")
SPY$absCMO <- abs(CMO(Cl(SPY),20))/100
SPY$vma <- VMA(Cl(SPY), SPY$absCMO)
chartSeries(SPY,TA="addTA(SPY$vma,on=1,col='blue')")
x <- xts(rnorm(1e6),Sys.time()-1e6:1)
y <- xts(runif(1e6),Sys.time()-1e6:1)
system.time(VMA(x,y)) # < 0.5s on a 2.2GHz Centrino
A couple of notes from the documentation:
‘VMA’ calculate a variable-length moving average based on the absolute value of ‘w’. Higher (lower) values of ‘w’ will cause ‘VMA’ to react faster (slower).
The pre-compiled binaries should be on R-forge within 24 hours.