How to calculate average velocity for different acceleration? - gps

I want to calculate the average speed over the distance traveled using GPS signals.
Does this formula calculate the correct average speed?
avgspeed = totalspeed / count
where count is the number of GPS signals.
If it is wrong, please tell me the correct formula.

While that should work, remember that GPS signals can be confused easily if you're in diverse terrain. Therefore, I would not use an arithmetic mean, but compute the median, so outliers (quick jumps) would not have such a big effect on the result.
From Wikipedia (n being the number of signals):
If n is odd, then Median (M) = value of the ((n + 1)/2)th item.
If n is even, then Median (M) = [value of the (n/2)th item + value of the (n/2 + 1)th item] / 2.
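As a sketch, here are both statistics in Python (the sample numbers are made up; note how a single bad fix of 120 drags the mean up but barely moves the median):

```python
import statistics

def average_speed(speeds):
    """Arithmetic mean of the per-signal speeds (the formula in the question)."""
    return sum(speeds) / len(speeds)

def robust_speed(speeds):
    """Median of the per-signal speeds; outlier jumps have little effect."""
    return statistics.median(speeds)

# One bad GPS fix (120.0) among otherwise steady readings:
speeds = [10.2, 10.5, 9.8, 120.0, 10.1, 10.3]
print(average_speed(speeds))  # ~28.48
print(robust_speed(speeds))   # 10.25
```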

Related

The mass of components in a binary system for given largest and smallest distance

I am trying to calculate the masses of the component stars in a binary system.
I only have the period and the largest and smallest distance between them, and I know how to use these to get the total mass.
I think I would also need the distance from one of the stars to the barycenter.
Is it possible to calculate the mass of each component with this information?
Thank you for your help!
If you only have the period T and the largest a_max and smallest a_min distance between the two stars then, as you pointed out, you can calculate the total mass from Kepler's third law, where the semi-major axis of the relative orbit is a = (a_min + a_max)/2:
mass_1 + mass_2 = (2*pi / T)^2 * (((a_min + a_max)/2)^3 / G)
However, you cannot calculate the individual masses solely from this information, because the prescribed data, the period T and the largest a_max and smallest a_min distance, describe the relative position of the stars, not the individual positions.
What do I mean? Assume you have two stars whose motion has the parameters given above. By Newtonian mechanics, place your coordinate system at the barycenter and denote by r1 and r2 the position vectors pointing from the barycenter to the respective stars. Then the equations of motion are
(d/dt)^2 r1 = - ( mass_2*G / |r2 - r1|^3 )*(r1 - r2)
(d/dt)^2 r2 = - ( mass_1*G / |r2 - r1|^3 )*(r2 - r1)
If you subtract the first vector differential equation from the second, and you set r = r2 - r1, you obtain the vector differential equation (3 scalar differential equations and 3 scalar variables, the 3D coordinates of the relative position vector r)
(d/dt)^2 r = - ( (mass_1 + mass_2)*G / |r|^3 ) * r
This is the classical vector differential equation that describes the time-evolution of the relative position vector r between the two stars. The information you have, the period T and the largest a_max and smallest a_min, can be used to find a specific solution to the last equation above, the one for r, which gives you the relative motion r = r(t) between the two stars with the prescribed properties. However, the motion of any pair of stars with arbitrary masses mass_1 and mass_2, that sum up to the same value mass_1 + mass_2, will provide a solution to the vector differential equation
(d/dt)^2 r = - ( (mass_1 + mass_2)*G / |r|^3 ) * r
and among all such solutions there will be some that possess the desired properties: the period T and the largest a_max and smallest a_min distance. Observe that T, a_min and a_max are properties of the relative vector r, not of the individual vectors r1 and r2, which tells you that you cannot find the individual masses.
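As an illustration of the part you can compute, here is the total-mass formula in Python, taking the semi-major axis of the relative orbit as half of a_min + a_max (the constant G and the Sun-Earth test numbers are standard values, not from the question):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def total_mass(T, a_min, a_max):
    """Total mass m1 + m2 from the orbital period and the extreme separations.
    The semi-major axis of the relative orbit is a = (a_min + a_max) / 2."""
    a = (a_min + a_max) / 2.0
    return (2.0 * math.pi / T) ** 2 * a ** 3 / G

# Sanity check with Sun-Earth-like numbers: T = 1 year, a_min = a_max = 1 AU
# should give roughly one solar mass (~2e30 kg).
M = total_mass(T=3.156e7, a_min=1.496e11, a_max=1.496e11)
print(M)  # ~2e30
```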

How To Calculate Exact 99.9th Percentile in Splunk

Does anyone know how to exactly calculate the 99.9th percentile in Splunk?
I have tried a variety of methods as below, such as exactperc (but this only takes integer percentiles) and perc (but this approximates the result heavily).
base | stats exactperc99(latency) as "99th Percentile", p99.9(latency) as "99.9th Percentile"
Thanks,
James
From the Splunk documentation:
There are three different percentile functions:
perc<X>(Y) (or the abbreviation p<X>(Y))
upperperc<X>(Y)
exactperc<X>(Y)
Each returns the X-th percentile value of the numeric field Y. Valid values of X are floating point numbers from 1 to 99, such as 99.95.
Use the perc<X>(Y) function to calculate an approximate threshold,
such that of the values in field Y, X percent fall below the
threshold.
The perc and upperperc functions give approximate values for the
integer percentile requested. The approximation algorithm that is
used, which is based on dynamic compression of a radix tree, provides
a strict bound of the actual value for any percentile. The perc
function returns a single number that represents the lower end of that
range. The upperperc function gives the approximate upper bound. The
exactperc function provides the exact value, but will be very
expensive for high cardinality fields. The exactperc function could
consume a large amount of memory in the search head.
Processes field values as strings.
Examples:
p99.999(response_ms)
p99(bytes_received)
p50(salary) # median

How is the Bollinger Oscillator Calculated?

The Upperband is calculated: Middleband + D * sqrt( sum((close - Middleband)^2) / n ), i.e. the middle band plus D standard deviations.
And I know how to calculate the lower bollinger band and middle bollinger bands.
But there is an elusive indicator called the bollinger oscillator which I find combines the bollinger bands into a single oscillating indicator. Please explain how to calculate it.
Use SQL if possible assume fields contain relevant values.
Find the 9-day moving average: (n1 + n2 + ... + n9)/9
Find the standard deviation over the same 9 days
Subtract the 9-day moving average from the current ruling price
Divide the result by the standard deviation
The answer is the BOS (Bollinger Oscillator)
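The exact SQL depends on your schema, but as an illustration of the steps above (the window length of 9 and the prices are made up), here is a Python sketch:

```python
import math

def bollinger_oscillator(closes, n=9):
    """Oscillator per the steps above:
    (current close - n-day SMA) / n-day standard deviation."""
    window = closes[-n:]
    sma = sum(window) / n
    # population standard deviation over the window
    std = math.sqrt(sum((c - sma) ** 2 for c in window) / n)
    return (closes[-1] - sma) / std

closes = [10, 11, 12, 11, 10, 11, 12, 13, 14]
print(bollinger_oscillator(closes))  # ~1.9445
```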

Why is average disk seek time one-third of the full seek time?

I have read in many books and papers, considering disk performance, that the average seek time is roughly one-third of the full seek time, but no one really offers any explanation about that. Where does this come from?
The average is calculated mathematically using calculus.
We use the very basic formula for calculation of average.
Average seek time = (Sum of all possible seek times)/(Total no. of possible seek times)
The disk is assumed to have tracks numbered 0 to N, so the position of the head at any point in time can be anywhere from track 0 to track N (inclusive).
Let us say the initial position of the disk head is track x and the final position is track y, so that both x and y can vary from 0 to N.
On similar lines as we defined average seek time, we can say that,
Average seek distance = (Sum of all possible seek distances)/(total no. of possible seek distances)
By definition of x and y,
Total no. of possible seek distances = N*N (approximately; there are (N+1)^2 pairs, but N*N is close enough for large N)
and
Sum of all possible seek distances = SIGMA(x=0,N) SIGMA(y=0,N) |x-y|
which, treating the track number as continuous for large N, is approximated by
INTEGRAL(x=0,N) INTEGRAL(y=0,N) |x-y| dy dx
To solve this, split the modulus: integrate |x-y| as (x-y) for y from 0 to x and as (y-x) for y from x to N, then integrate over x from 0 to N.
This comes out to be (N^3)/3.
Avg seek distance = (N^3/3) / (N*N) = N/3
Average seek time = Avg seek distance / seek rate
If a full seek, from track 0 to track N, takes t seconds, then the seek rate = N/t.
Therefore, avg seek time = (N/3) / (N/t) = t/3
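A quick numerical sanity check of the N/3 result, averaging |x - y| over all discrete head positions:

```python
# Average seek distance over all (initial, final) track pairs 0..N,
# compared against the analytic result N/3.
N = 1000
total = sum(abs(x - y) for x in range(N + 1) for y in range(N + 1))
avg = total / (N + 1) ** 2
print(avg, N / 3)  # discrete average ~333.67 vs N/3 ~333.33
```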
Reference:
http://pages.cs.wisc.edu/~remzi/OSFEP/file-disks.pdf
Page-9 gives a very good answer to this.

Lucene SweetSpotSimilarity lengthNorm

http://lucene.apache.org/java/2_3_0/api/org/apache/lucene/misc/SweetSpotSimilarity.html
Implemented as: 1/sqrt( steepness * (abs(x-min) + abs(x-max) - (max-min)) + 1 ) .
This degrades to 1/sqrt(x) when min and max are both 1 and steepness is 0.5
Can anyone explain this formula to me? How is steepness decided, and what exactly does x refer to?
Any help is appreciated.
With the DefaultSimilarity, the shorter the field in terms of number of tokens, the higher the score.
e.g. if you have two docs, with indexed field values of "the quick brown fox" and "brown fox", respectively, the latter would score higher in a query for "fox".
SweetSpotSimilarity lets you define a "sweet spot" for the length of a field in terms of a range defined by min and max. Field lengths within the range will score equally, and field lengths outside the range will score lower, depending on how far the length is from the range boundary. "steepness" determines how quickly the score degrades as a function of that distance.
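A small Python sketch of the documented formula makes this concrete (x is the field length in tokens; the min, max, and steepness values below are arbitrary examples):

```python
import math

def sweet_spot_length_norm(x, min_len=1, max_len=5, steepness=0.5):
    """Length norm from the SweetSpotSimilarity javadoc:
    1/sqrt(steepness * (|x - min| + |x - max| - (max - min)) + 1).
    Inside [min, max] the parenthesized term is 0, so the norm is 1."""
    return 1.0 / math.sqrt(
        steepness * (abs(x - min_len) + abs(x - max_len) - (max_len - min_len)) + 1.0
    )

# Lengths inside the sweet spot score equally; outside, the score decays.
for x in (1, 3, 5, 9):
    print(x, sweet_spot_length_norm(x))

# With min = max = 1 and steepness = 0.5 this reduces to 1/sqrt(x):
print(sweet_spot_length_norm(4, min_len=1, max_len=1))  # 0.5 == 1/sqrt(4)
```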