Is there a formula for rating WebRTC audio quality as Excellent, Good, Fair, or Poor?

I have been able to get various stats (jitter, RTT, packets lost, etc.) for a WebRTC audio call using the RTCPeerConnection.getStats() API.
I need to rate the overall call quality as Excellent, Good, Fair, or Poor.
Is there a formula that uses WebRTC stats to give an overall rating? If not, which WebRTC stats should I give more weight to?

We ended up using MOS (mean opinion score) algorithm to calculate voice call quality metric.
Here is the formula we used -
Take the average latency, add jitter (doubling its impact relative to latency), then add 10 for protocol latencies:
EffectiveLatency = ( AverageLatency + Jitter * 2 + 10 )
Implement a basic curve - deduct 4 for the R value at 160 ms of latency (round trip). Anything over that gets a much more aggressive deduction:
if EffectiveLatency < 160 then
R = 93.2 - (EffectiveLatency / 40)
else
R = 93.2 - (EffectiveLatency - 120) / 10
Now, let's deduct 2.5 R values per percentage point of packet loss:
R = R - (PacketLoss * 2.5)
Convert the R value into an MOS value (this is a known formula):
MOS = 1 + (0.035) * R + (0.000007) * R * (R - 60) * (100 - R)
We found the formula from https://www.pingman.com/kb/article/how-is-mos-calculated-in-pingplotter-pro-50.html
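For anyone who wants to drop this straight into code, here is a minimal Python sketch of the calculation above. The function names are mine, and the thresholds mapping MOS to Excellent/Good/Fair/Poor are illustrative conventions rather than part of any standard, so tune them for your application:

def mos_from_stats(average_latency_ms, jitter_ms, packet_loss_percent):
    # Jitter is weighted double, plus 10 ms for protocol overhead.
    effective_latency = average_latency_ms + jitter_ms * 2 + 10
    # Basic curve: gentle slope below 160 ms, much steeper above it.
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40
    else:
        r = 93.2 - (effective_latency - 120) / 10
    # Deduct 2.5 R points per percentage point of packet loss.
    r -= packet_loss_percent * 2.5
    # Standard R-factor-to-MOS conversion.
    return 1 + 0.035 * r + 0.000007 * r * (r - 60) * (100 - r)

def rating(mos):
    if mos >= 4.3:
        return "Excellent"
    if mos >= 4.0:
        return "Good"
    if mos >= 3.6:
        return "Fair"
    return "Poor"

# Example: 80 ms latency, 15 ms jitter, 0.5% loss -> MOS ~4.31, "Excellent".
print(rating(mos_from_stats(80, 15, 0.5)))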


Missing data in longitudinal data and fitted by linear mixed model

I have a question about handling the following missing data scenario using the linear mixed effect model.
Suppose I have a closed longitudinal cohort followed for six years. There are 1500 individuals at the initial wave.
Available observations at each wave are as follows:
Wave 1: 1500
Wave 2: 1400
Wave 3: 1000
Wave 4: 800
Wave 5: 500
Wave 6: 67
There are two reasons for the missing observations. First, people dropped out. Second, the data collection process is ongoing, and not all individuals have been interviewed yet (this is more likely in the later waves).
I know the linear mixed effect model can handle missingness via maximum likelihood under MAR or MCAR. My question is: if I assume all missingness happens at random, should I drop the observations from wave 6 to avoid biased estimates? In other words, if I assume the missingness in my data set happened at random, should I drop a specific wave with a substantial amount of missingness to avoid biased estimates?
The model I would like to run is as follows:
library(lme4)  # lmer() comes from lme4

m_Kunkle_exe <- lmer(cs_exec_fn ~ PRS_Kunkle*AgeAtVisit*APOE_score +
                       PRS_Kunkle*I(AgeAtVisit^2)*APOE_score +
                       gender + EdYears_Coded_Max20 + VisNo + famhist +
                       X1 + X2 + X3 + X4 + X5 +
                       (1 | family/DBID),
                     data = WRAP_all, REML = FALSE)
Many thanks
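One practical way to probe the question empirically is to fit the model by ML with and without wave 6 and compare the fixed-effect estimates; under MAR, the estimates should not shift systematically. A simplified sketch (in Python with statsmodels purely for illustration; the same comparison works in lme4, and the single random intercept per DBID here is only a stand-in for the full (1 | family/DBID) nesting):

import pandas as pd
import statsmodels.formula.api as smf

# "WRAP_all.csv" is a placeholder; column names are taken from the question.
df = pd.read_csv("WRAP_all.csv")

formula = ("cs_exec_fn ~ PRS_Kunkle * AgeAtVisit * APOE_score "
           "+ PRS_Kunkle * I(AgeAtVisit ** 2) * APOE_score "
           "+ gender + EdYears_Coded_Max20 + VisNo + famhist "
           "+ X1 + X2 + X3 + X4 + X5")

# Fit by maximum likelihood (reml=False) on all waves, then without wave 6.
fit_all = smf.mixedlm(formula, df, groups=df["DBID"]).fit(reml=False)
df_no6 = df[df["VisNo"] < 6]
fit_no6 = smf.mixedlm(formula, df_no6, groups=df_no6["DBID"]).fit(reml=False)

# Under MAR, the fixed effects should agree to within sampling noise.
print(pd.concat([fit_all.params, fit_no6.params], axis=1, keys=["all", "no_wave6"]))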

Confusing Labels for Function Generators and Oscilloscopes in Tinkercad

In Tinkercad, the amplitude definition for function generators and the scale definition for oscilloscopes are quite confusing. Here is a screenshot of Tinkercad's function generator:
On the device, 6.20 V is represented as the peak-to-peak voltage (see the red lines I've marked). But on the panel at the right-hand side, we input it as the amplitude (see the green line I've marked). Which one is correct?
And I cannot deduce the answer using an oscilloscope, because there is not enough information about the oscilloscope (at least, I couldn't find enough). Here is the input signal from the function generator above:
The answer is not obvious, because the meaning of the 10 V placed on the y-axis is ambiguous. Is it +/- 10 V, 20 V in total, i.e. the voltage per division is 2 V (first explanation)? Or is it +/- 5 V, 10 V in total, i.e. the voltage per division is 1 V (second explanation)? In some YouTube lectures the explanation is the first one, but I'm not quite sure: if 6.2 V is the amplitude and the voltage per division is 2 V, the readings are consistent; but if 6.2 V is the peak-to-peak voltage and the voltage per division is 1 V, that is also consistent. Again, which one is correct?
Also, while studying, I realized that a real-life experiment indicates that the second explanation should be true. Let me explain the experiment step by step.
Theory: Full Wave Rectifier Circuits
Assume we apply V_in as the amplitude; then the peak-to-peak voltage is V_peaktopeak = 2 * V_in. For the output signal we have
V_out = (V_in - n * V_diode) * R_L / (R_L + r_d),
where n is the number of diodes in conduction, V_diode is the forward voltage of a diode, and R_L is the load resistor. The load resistor is chosen big enough that R_L >> r_d, and we get
V_out = V_in - n * V_diode.
In a real experiment r_d is between 1 Ω and 25 Ω, and we choose R_L on the order of kiloohms, so we can safely approximate the factor R_L / (R_L + r_d) as 1.
And for the DC voltage corresponding to the output signal we have
V_DC = 2 * V_out / π ≈ 0.637 * V_out.
Scheme of the Circuit in the Experiment
Here is the circuit schematic:
As you may see, during the positive half-period only two of the four diodes conduct, and during the negative half-period the other two conduct. Thus n is 2 for this circuit. Let's build this experiment in Tinkercad. I didn't use a breadboard, to make the similarity between the schematic and the circuit built in Tinkercad easier to see.
Scenario #1 - Theoretical Expectations
Let's assume 6.2 V is the amplitude. Then V_in = 6.2 V and V_peaktopeak is 12.4 V. For the output signal we calculate
V_out = V_in - n * V_diode = 6.2 V - 2 * 0.7 V = 4.8 V.
And for the DC equivalent we theoretically get
V_DC = 0.637 * V_out = 3.06 V.
But on the multimeter we see 1.06 V. That is nearly a 60% error.
Scenario #2 - Theoretical Expectations
Let's assume 6.2 V is the peak-to-peak voltage. Then V_in = 3.1 V and V_peaktopeak is 6.2 V. For the output signal we calculate
V_out = V_in - n * V_diode = 3.1 V - 2 * 0.7 V = 1.7 V.
And for the DC equivalent we theoretically get
V_DC = 0.637 * V_out = 1.08 V.
And on the multimeter we see 1.06 V. These values are pretty close to each other.
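For reference, here is a minimal Python sketch (my own, using the question's component values and the assumed 0.7 V diode drop) that reproduces both scenario calculations:

import math

def full_wave_outputs(v_in, n_diodes=2, v_diode=0.7):
    # Full-wave bridge rectifier, assuming R_L >> r_d so the loading factor is ~1.
    v_out = v_in - n_diodes * v_diode
    v_dc = 2 * v_out / math.pi  # average value of a full-wave rectified sine
    return v_out, v_dc

# Scenario 1: 6.2 V is the amplitude.
print(full_wave_outputs(6.2))  # (4.8, ~3.06) -- far from the measured 1.06 V
# Scenario 2: 6.2 V is the peak-to-peak voltage, so the amplitude is 3.1 V.
print(full_wave_outputs(3.1))  # (1.7, ~1.08) -- matches the measured 1.06 V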
Conclusion
Based on these results, we may conclude that 6.2 V is the peak-to-peak voltage, that the waveform drawn on the function generator is correct, that the "Amplitude" label in the function generator's panel is wrong, and that the y-scale of an oscilloscope represents the total voltage, half of which is positive and half negative.
BUT
I cannot be sure, and since I'll teach this material in my electronics laboratory class, I really need to be certain about this conclusion. Therefore, I'm asking here for your opinions, conclusions, or any references that I've missed.
TinkerCAD refers to the peak-to-peak voltage as amplitude for some reason. I believe the second explanation (+/- 5 V, 10 V in total) is correct, based on the x-axis and the frequency value.

Why do two almost identical versions of Julia code for the same optimization problem produce different results?

I am trying to learn Julia for educational purposes. Specifically, I am trying to use Julia and the JuMP package to solve operations research problems.
I was watching this great video on YouTube in which Philip Thomas works through a didactic example. However, the video was produced in 2014, and Julia has evolved since then.
He used this code:
#=
We are going to the thrift store and need 99 cents. What is the least amount of
weight we need to carry?
i.e. a knapsack problem
We specify that you need at least 99 cents - does the answer change if you need exact change?
=#
using JuMP
using Cbc # Open source solver. Must support integer programming.
m = Model(solver=CbcSolver())
# Variables represent how many of each coin we want to carry
@defVar(m, pennies >= 0, Int)
@defVar(m, nickels >= 0, Int)
@defVar(m, dimes >= 0, Int)
@defVar(m, quarters >= 0, Int)
# We need at least 99 cents
@addConstraint(m, 1 * pennies + 5 * nickels + 10 * dimes + 25 * quarters >= 99)
# Minimize mass (Grams)
# (source: US Mint)
@setObjective(m, Min, 2.5 * pennies + 5 * nickels + 2.268 * dimes + 5.670 * quarters)
# Solve
status = solve(m)
println("Minimum weight: ", getObjectiveValue(m), " grams")
println("using:")
println(round(getValue(pennies)), " pennies") # "round" to cast as integer
println(round(getValue(nickels)), " nickels")
println(round(getValue(dimes)), " dimes")
println(round(getValue(quarters)), " quarters")
His code returns this result:
Minimum weight: 22.68 grams
using:
0.0 pennies
0.0 nickels
10.0 dimes
0.0 quarters
I am using the current version of Julia (1.0) and the current version of JuMP. There are syntactic differences between current Julia and the code above. After some trial and error, I was able to translate the code properly so that it runs on Julia 1.0:
#=
We are going to the thrift store and need 99 cents. What is the least amount of
weight we need to carry?
i.e. a knapsack problem
We specify that you need at least 99 cents - does the answer change if you need exact change?
=#
using JuMP
using GLPK
using Cbc # Open source solver. Must support integer programming.
model = Model(with_optimizer(GLPK.Optimizer))
# Variables represent how many of each coin we want to carry
@variable(model, pennies >= 0, Int)
@variable(model, nickels >= 0, Int)
@variable(model, dimes >= 0, Int)
@variable(model, quarters >= 0, Int)
# We need at least 99 cents
@constraint(model, 1 * pennies + 5 * nickels + 10 * dimes + 25 * quarters >= 99)
# Minimize mass (Grams)
# (source: US Mint)
@objective(model, Min, 2.5 * pennies + 5 * nickels + 2.268 * dimes + 5.670 * quarters)
# Solve
optimize!(model)
println("Minimum weight: ", objective_value(model), " grams")
println("using:")
println(round(value(pennies)), " pennies") # "round" to cast as integer
println(round(value(nickels)), " nickels")
println(round(value(dimes)), " dimes")
println(round(value(quarters)), " quarters")
The interesting thing is the result the terminal returned:
Minimum weight: 22.68 grams
using:
0.0 pennies
0.0 nickels
0.0 dimes
4.0 quarters
As you can see, the final result for the minimum weight remains the same. However, the choice of coins changes from 10 dimes to 4 quarters.
Besides the syntactic changes, I also changed the solver because, initially, I was not able to get Cbc running.
After that, I switched back to Cbc with this simple modification:
model = Model(with_optimizer(Cbc.Optimizer))
With the modification above, the "code translation" was closer to the original, with Cbc as the chosen solver. Curiously, the program now returns the expected result:
Minimum weight: 22.68 grams
using:
0.0 pennies
0.0 nickels
10.0 dimes
0.0 quarters
However, I am still confused. According to the documentation, both solvers can handle MILPs (mixed-integer linear programs).
Why does this happen? Why do different solvers return different results if they both have a similar profile? Did I miss some detail during the code translation?
Thanks in advance.
As you already discovered, the value of the objective function is the same for both solutions. Which solution is reached depends on the path the solver took to get there.
Differences in the path may arise in the simplex optimizer used to solve the individual LP subproblems. Switching the order of variables or rows may be enough to end up at a different point of the solution set. Some solvers even use a random number generator to decide which variable enters the basis first in the simplex algorithm (but not GLPK).
Another reason for reaching a different solution may be the order in which the search tree over the integer variables is explored. This is influenced, among other things, by the search strategy (e.g. depth-first vs. breadth-first).
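To confirm that this is a tie between multiple optima rather than a solver error, here is a quick check (in Python for brevity; the coin values and weights are the ones from the question) that both reported bundles are feasible and weigh exactly the same:

# Coin values in cents and weights in grams (US Mint figures from the question).
VALUES  = {"penny": 1,   "nickel": 5, "dime": 10,    "quarter": 25}
WEIGHTS = {"penny": 2.5, "nickel": 5, "dime": 2.268, "quarter": 5.670}

def check(bundle):
    cents = sum(VALUES[c] * n for c, n in bundle.items())
    grams = sum(WEIGHTS[c] * n for c, n in bundle.items())
    return cents, grams

print(check({"dime": 10}))    # (100, 22.68) -- Cbc's solution
print(check({"quarter": 4}))  # (100, 22.68) -- GLPK's solution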

Orbital Mechanics: Estimated Time To Form Specific Angle Between Two Planets

I'm trying to come up with a formula to estimate the recurring times at which two orbiting planets will form a target angle. I've made some very important assumptions for the sake of simplicity:
Pretend Kepler's laws do not exist
Pretend the speeds are constant
Pretend both planets are orbiting along the same path
Pretend this path is a circle, NOT an ellipse
Here is a diagram to assist in understanding my challenge (Google Docs):
https://docs.google.com/drawings/d/1Z6ziYEKLgc_tlhvJrC93C91w2R9_IGisf5Z3bw_Cxsg/edit?usp=sharing
I ran a simulation and stored data in a spreadsheet (Google Docs):
https://docs.google.com/spreadsheet/ccc?key=0AgPx8CZl3CNAdGRRTlBUUFpnbGhOdnAwYmtTZWVoVVE&usp=sharing
Using the stored data from the simulation, I was able to determine a way to estimate the FIRST time the two orbiting planets form a specific angle:
Initial State
Planet 1: position=0 degrees; speed=1 degree/day
Planet 2: position=30 degrees; speed=6 degrees/day
Target Angle: 90 degrees
I performed these steps:
Speed Difference: s2 - s1 ; 6 - 1 = 5 degrees / day
Angle Formed: p2 - p1 ; 30 - 0 = 30 degrees
Find Days Required
Target = Angle + (Speed Diff * Days)
Days (d) = (Target - Angle) / Speed Diff
90 = 30 + 5d
60 = 5d
d = 12 days
Prove:
Position of Planet 1: 0 + (1 * 12) = 12 degrees
Position of Planet 2: 30 + (6 * 12) = 30 + 72 = 102 degrees
Angle: 102 - 12 = 90 degrees
Using this logic, I then returned to an astronomy program that uses Astro's Swiss Ephemeris. The estimated days got me close enough to comfortably pinpoint the date and time when two planets reached the desired angle without affecting application performance.
Here is where my problem lies: given the information I have, what approach should I take to estimate the recurring times at which the 90 degree angle will be reached again?
Thank you for taking the time to read this in advance.
There is not a simple formula as such, but there is an algorithm you could program to determine the results. Pentadecagon is also correct that you need to take n*360 into account. You are also right that you can stop one of the planets and work with the difference of the speeds.
After d days the difference in degrees between the planets is 30 + d*5.
Since we are only interested in degrees between 0 and 360 then the difference of the angle between planets is (30 + d*5) mod 360.
In case you do not know a mod b gives the remainder when a is divided by b and most programming languages have this operation built in (as do spreadsheets).
You have spotted that you want the values of d when the difference is 90 degrees or 270 degrees.
So you need to find the values of d for which
(30 + d*5) mod 360 = 90 or (30 + d*5) mod 360 = 270
Pseudocode algorithm (scanning two full 72-day cycles of the relative motion, since the difference repeats every 360/5 = 72 days):
FOR (d = 0; d < 144; d = d + 1)
    IF ((30 + d*5) MOD 360 = 90 OR (30 + d*5) MOD 360 = 270)
        PRINT d
NEXT
The funny thing about angles is that there are different ways to represent the same angle. So you currently set
Target = 90
One revolution later, the same angle could be written as
Target = 90 + 360 = 450
Or generally, n revolutions later
Target = 90 + n * 360
If you also want the same angle with opposite orientation you can set
Target = -90 + n * 360
If you solve your equation for each of those target angles, you will find all the events you are looking for.
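Putting both answers together, a small Python sketch (the names are mine) that enumerates the recurring days directly from the closed form, instead of scanning day by day:

def angle_event_days(angle0, speed_diff, target, revolutions=5):
    # angle0: initial separation in degrees (p2 - p1)
    # speed_diff: relative speed in degrees/day (s2 - s1)
    # target: desired angle in degrees
    days = []
    for n in range(revolutions):
        # The same geometric angle recurs at target + n*360 and -target + n*360.
        for t in (target + 360 * n, -target + 360 * n):
            d = (t - angle0) / speed_diff
            if d >= 0:
                days.append(d)
    return sorted(days)

# Planet 1 at 0 deg moving 1 deg/day, planet 2 at 30 deg moving 6 deg/day:
print(angle_event_days(30, 5, 90))  # [12.0, 48.0, 84.0, 120.0, ...]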

Is there an iterative way to calculate radii along a scanline?

I am processing a series of points which all have the same Y value, but different X values. I go through the points by incrementing X by one. For example, I might have Y = 50 and X is the integers from -30 to 30. Part of my algorithm involves finding the distance to the origin from each point and then doing further processing.
After profiling, I've found that the sqrt call in the distance calculation is taking a significant amount of my time. Is there an iterative way to calculate the distance?
In other words:
I want to efficiently calculate r[n] = sqrt(x[n]*x[n] + y*y). I can save information from the previous iteration. Each iteration changes by incrementing x, so x[n] = x[n-1] + 1. I cannot use sqrt or trig functions because they are too slow, except at the beginning of each scanline.
I can use approximations as long as they are good enough (less than 0.1% error) and the errors introduced are smooth (I can't bin to a pre-calculated table of approximations).
Additional information:
x and y are always integers between -150 and 150
I'm going to try a couple ideas out tomorrow and mark the best answer based on which is fastest.
Results
I did some timings
Distance formula: 16 ms / iteration
Pete's interpolating solution: 8 ms / iteration
wrang-wrang's pre-calculation solution: 8 ms / iteration
I was hoping the test would decide between the two, because I like both answers. I'm going to go with Pete's because it uses less memory.
Just to get a feel for it: for your range, y = 50, x = 0 gives r = 50, and y = 50, x = +/- 30 gives r ≈ 58.3. You want an approximation good to +/- 0.1%, or +/- 0.05 absolute. That's much lower accuracy than most library sqrt implementations provide.
There are two approximate approaches: calculate r by interpolating from the previous value, or use a few terms of a suitable series.
Interpolating from previous r
r = (x² + y²)^(1/2)
dr/dx = (1/2) · 2x · (x² + y²)^(-1/2) = x/r
double r = 50;
for ( int x = 0; x <= 30; ++x ) {
    double r_true = Math.sqrt ( 50*50 + x*x );
    System.out.printf ( "x: %d r_true: %f r_approx: %f error: %f%%\n",
                        x, r_true, r, 100 * Math.abs ( r_true - r ) / r );
    // Step the approximation using dr/dx = x/r evaluated at the midpoint x + 0.5.
    r = r + ( x + 0.5 ) / r;
}
Gives:
x: 0 r_true: 50.000000 r_approx: 50.000000 error: 0.000000%
x: 1 r_true: 50.009999 r_approx: 50.010000 error: 0.000002%
....
x: 29 r_true: 57.801384 r_approx: 57.825065 error: 0.040953%
x: 30 r_true: 58.309519 r_approx: 58.335225 error: 0.044065%
which seems to meet the requirement of 0.1% error, so I didn't bother coding the next one, as it would require quite a few more calculation steps.
Truncated Series
The Taylor series for sqrt(1 + x) for x near zero is
sqrt(1 + x) = 1 + (1/2)x - (1/8)x² + ... + (-1/2)^(n+1) xⁿ
Using r = y · sqrt(1 + (x/y)²), you're looking for a term t = (-1/2)^(n+1) · 0.36ⁿ with magnitude less than 0.001; log(0.002) > n · log(0.18) gives n > 3.6, so taking terms up to the fourth power should be OK.
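For completeness, here is a sketch (my own, not code from the thread) of what that truncated-series version might look like, keeping terms up to the fourth power of u = (x/y)² as estimated above:

import math

def approx_radius(x, y):
    # r = |y| * sqrt(1 + (x/y)^2) via Taylor terms up to u^4.
    # Assumes |x| <= |y| (swap the arguments otherwise), so u <= 1;
    # for the question's range (|x| <= 30, y = 50), u <= 0.36.
    u = (x / y) ** 2
    s = 1 + u / 2 - u * u / 8 + u ** 3 / 16 - 5 * u ** 4 / 128
    return abs(y) * s

for x in range(0, 31):
    r_true = math.hypot(x, 50)
    assert abs(approx_radius(x, 50) - r_true) / r_true < 0.001  # within the 0.1% budget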
Y = 10000
Y2 = Y*Y
for x = 0..Y do
    D[x] = sqrt(Y2 + x*x)

norm(x, y) =
    if (y == 0) x
    else if (x > y) norm(y, x)
    else {
        s = Y/y
        D[round(x*s)] / s
    }
If your coordinates are smooth, then the idea can be extended with linear interpolation. For more precision, increase Y.
The idea is that s*(x,y) is on the line y=Y, which you've precomputed distances for. Get the distance, then divide it by s.
I assume you really do need the distance and not its square.
You may also be able to find a general sqrt implementation that sacrifices some accuracy for speed, but I have a hard time imagining that beating what the FPU can do.
By linear interpolation, I mean to change D[round(x*s)] to:
f = floor(x*s)
a = x*s - f
D[f]*(1-a) + D[f+1]*a
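A runnable Python version of this idea (the table size and names are mine) might look like:

import math

Y = 10000                # table resolution; increase Y for more precision
Y2 = Y * Y
# Distances from the origin to integer points along the line y = Y.
D = [math.sqrt(Y2 + x * x) for x in range(Y + 1)]

def norm(x, y):
    # Approximates sqrt(x*x + y*y) using the precomputed table.
    x, y = abs(x), abs(y)
    if y == 0:
        return x
    if x > y:
        x, y = y, x      # ensure x <= y so the scaled x stays within the table
    s = Y / y            # scale factor that maps (x, y) onto the line y = Y
    xs = x * s
    f = int(xs)          # linear interpolation between adjacent entries
    a = xs - f
    d = D[f] * (1 - a) + D[min(f + 1, Y)] * a
    return d / s         # un-scale the looked-up distance

print(norm(30, 50), math.hypot(30, 50))  # both ~58.3095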
This doesn't really answer your question, but may help...
The first questions I would ask would be:
"do I need the sqrt at all?".
"If not, how can I reduce the number of sqrts?"
then yours: "Can I replace the remaining sqrts with a clever calculation?"
So I'd start with:
Do you need the exact radius, or would the radius squared be acceptable? There are fast approximations to sqrt, but probably not accurate enough for your spec.
Can you process the image using mirrored quadrants or eighths? By processing all pixels at the same radius value in a batch, you can reduce the number of calculations by 8x.
Can you precalculate the radius values? You only need a table that is a quarter (or possibly an eighth) of the size of the image you are processing, and the table would only need to be precalculated once and then re-used for many runs of the algorithm.
So clever maths may not be the fastest solution.
Well, there's always trying to optimize your sqrt; the fastest one I've seen is the old Carmack Quake 3 sqrt:
http://betterexplained.com/articles/understanding-quakes-fast-inverse-square-root/
That said, since sqrt is non-linear, you're not going to be able to do simple linear interpolation along your line to get your result. The best idea is to use a table lookup, since that will give you blazing fast access to the data. And, since you appear to be iterating by whole integers, a table lookup should be exceedingly accurate.
Well, you can mirror around x = 0 to start with (you only need to compute n >= 0, then dupe those results to the corresponding n < 0). After that, I'd take a look at using the derivative of sqrt(a^2 + b^2) (or the corresponding sine) to take advantage of the constant dx.
If that's not accurate enough, may I point out that this is a pretty good job for SIMD, which provides a reciprocal square root op on both SSE and VMX (and shader model 2).
This is sort of related to a HAKMEM item:
ITEM 149 (Minsky): CIRCLE ALGORITHM
Here is an elegant way to draw almost circles on a point-plotting display:
NEW X = OLD X - epsilon * OLD Y
NEW Y = OLD Y + epsilon * NEW(!) X
This makes a very round ellipse centered at the origin with its size determined by the initial point. epsilon determines the angular velocity of the circulating point, and slightly affects the eccentricity. If epsilon is a power of 2, then we don't even need multiplication, let alone square roots, sines, and cosines! The "circle" will be perfectly stable because the points soon become periodic.
The circle algorithm was invented by mistake when I tried to save one register in a display hack! Ben Gurley had an amazing display hack using only about six or seven instructions, and it was a great wonder. But it was basically line-oriented. It occurred to me that it would be exciting to have curves, and I was trying to get a curve display hack with minimal instructions.
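A few lines of Python make the behavior easy to see (epsilon and the starting point are arbitrary choices):

import math

# Minsky's HAKMEM circle: each step rotates the point slightly with no
# sqrt, sine, or cosine. Note that the NEW x is used when updating y.
eps = 1 / 16            # a power of two: in fixed point the multiplies become shifts
x, y = 100.0, 0.0
radii = []
for _ in range(200):
    x = x - eps * y
    y = y + eps * x     # deliberately uses the freshly updated x
    radii.append(math.hypot(x, y))

# The orbit stays bounded on a slightly eccentric, near-circular ellipse.
print(min(radii), max(radii))  # both stay near 100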