Simple question: what is the way to use bankers' rounding (round half to even) in BigQuery?
The only thing I can find is the following.
A bad way to do it, but it still works:
CREATE TEMP FUNCTION test(num FLOAT64, decimalPlaces INT64)
RETURNS FLOAT64
LANGUAGE js AS """
  var d = decimalPlaces || 0;
  var m = Math.pow(10, d);
  var n = +(d ? num * m : num).toFixed(8); // Avoid rounding errors
  var i = Math.floor(n), f = n - i;
  var e = 1e-8; // Allow for rounding errors in f
  var r = (f > 0.5 - e && f < 0.5 + e) ?
          ((i % 2 == 0) ? i : i + 1) : Math.round(n);
  return d ? r / m : r;
""";
SELECT test(1.525, 2)
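For reference, Python 3's built-in round() also uses round half to even, so it is a quick way to sanity-check what a bankers' rounding UDF should return (a Python sketch, not BigQuery code):
# Python 3's round() does banker's rounding: ties go to the even neighbour.
print(round(0.5), round(1.5), round(2.5))  # 0 2 2
print(round(3.5), round(4.5))              # 4 4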
There is a simpler way of calculating it:
CREATE TEMP FUNCTION bankersRound(num FLOAT64, decimals INT64)
RETURNS FLOAT64
LANGUAGE js AS """
  var scale = Math.pow(10, decimals);
  // Rounds num * scale to the nearest even integer, then scales back down.
  var result = (Math.round((num * scale) / 2) * 2) / scale;
  return result;
""";
First, a picture:
Column A is my source data, 50 points.
Columns C and D are the SMA calculated with numpy and Math.NET (mathdotnet.com), respectively, with a window of 15.
Column F is the delta.
As we can see, from about halfway on the two columns become identical, but the first half does not match. I do not understand why and, more importantly, I do not know which one to trust.
So I took an optimized version of the SMA from SO and ran the data through it.
The code is here:
private static NDArray SMA(this NDArray Data, int Period)
{
    var Length = Data.len;
    // Calculate the moving average with a ring buffer of size Period.
    var Buffer = new double[Period];
    var Output = new double[Length];
    var CurrentIndex = 0;
    for (var i = 0; i < Length; i++)
    {
        Buffer[CurrentIndex] = Data.GetDouble(i) / Period;
        var MA = 0.0;
        for (var j = 0; j < Period; j++)
        {
            MA += Buffer[j];
        }
        Output[i] = MA;
        CurrentIndex = (CurrentIndex + 1) % Period;
    }
    // Drop the first Period - 1 values, which come from a partially filled buffer.
    var R = new ArraySegment<double>(Output, Period - 1, Length - Period + 1);
    return new NDArray(R.ToArray());
}
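For a direct comparison with the numpy versions further down, this is roughly the same ring-buffer scheme in Python (a sketch; the function name is mine):
import numpy as np

def sma_ring_buffer(data, period):
    # Mirror of the C# version above: each sample is pre-divided by the window
    # size and the whole buffer is re-summed at every step.
    buf = np.zeros(period)
    output = np.zeros(len(data))
    current = 0
    for i, value in enumerate(data):
        buf[current] = value / period
        output[i] = buf.sum()
        current = (current + 1) % period
    # Drop the warm-up values computed from a partially filled buffer.
    return output[period - 1:]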
The C# code uses NumSharp, the .NET port of numpy, to hold the source array.
While it is all different code, the C# version and the Python numpy version output the same results (differences only appear after the 12th decimal place, so we can consider them identical).
This points to Math.NET being the one that differs, so I guess I can trust the numpy / C# versions more.
Are there different variations of the SMA that could cause this, or is there something obvious I don't see?
I have put all the data here: https://pastebin.com/WgYJUUJF
Edit:
Here is the numpy code:
import numpy as np
def calcSma(data, smaPeriod):
    j = next(i for i, x in enumerate(data) if x is not None)
    our_range = range(len(data))[j + smaPeriod - 1:]
    empty_list = [None] * (j + smaPeriod - 1)
    sub_result = [np.mean(data[i - smaPeriod + 1: i + 1]) for i in our_range]
    return np.array(empty_list + sub_result)
def calcSma2(data_set, periods=3):
    weights = np.ones(periods) / periods
    return np.convolve(data_set, weights, mode='valid')
a = np.array([1.1282553063375, 1.13157696082132, 1.13275406120136, 1.1332879715733, 1.12761933580452, 1.12621836040801, 1.12282485875706, 1.12265572041877, 1.13094386506532, 1.12320520490577, 1.12427293064877, 1.1328332027022, 1.13099445663901, 1.12843355605048, 1.13002750724853, 1.12843355605048, 1.13099445663901, 1.12709476494142, 1.12684879712348, 1.12672349888807, 1.12600933402474, 1.13112070248549, 1.12985951088976, 1.12822416032659, 1.12471789559362, 1.12651004224413, 1.12442669033881, 1.12334638977164, 1.12714333124378, 1.1312233808195, 1.12713229372575, 1.128255040952, 1.12585669781931, 1.12763457442902, 1.12470631424376, 1.12223443223443, 1.12506842815956, 1.12691187181355, 1.12385654130971, 1.13026344596074, 1.12237927400894, 1.1245915922457, 1.13088395780284, 1.13211944646759, 1.12590649028825, 1.12829127560895, 1.11876736364966, 1.12222667492441, 1.12169543369019, 1.12199031071285])
b = calcSma(a, 15)
c = calcSma2(a, 15)
print(b)
print("----------------------------------")
print(c)
And here is the Math.NET one:
var data = Vector<double>.Build.Dense(new[] { 1.1282553063375, 1.13157696082132, 1.13275406120136, 1.1332879715733, 1.12761933580452, 1.12621836040801, 1.12282485875706, 1.12265572041877, 1.13094386506532, 1.12320520490577, 1.12427293064877, 1.1328332027022, 1.13099445663901, 1.12843355605048, 1.13002750724853, 1.12843355605048, 1.13099445663901, 1.12709476494142, 1.12684879712348, 1.12672349888807, 1.12600933402474, 1.13112070248549, 1.12985951088976, 1.12822416032659, 1.12471789559362, 1.12651004224413, 1.12442669033881, 1.12334638977164, 1.12714333124378, 1.1312233808195, 1.12713229372575, 1.128255040952, 1.12585669781931, 1.12763457442902, 1.12470631424376, 1.12223443223443, 1.12506842815956, 1.12691187181355, 1.12385654130971, 1.13026344596074, 1.12237927400894, 1.1245915922457, 1.13088395780284, 1.13211944646759, 1.12590649028825, 1.12829127560895, 1.11876736364966, 1.12222667492441, 1.12169543369019, 1.12199031071285 });
var sma = Vector<double>.Build.Dense(data.MovingAverage(15).Skip(14).ToArray());
var s = sma.Aggregate(string.Empty, (Current, v) => Current + $"{v}, ");
Console.WriteLine(s);
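As yet another cross-check, the same 15-point window can be computed from a cumulative sum; a small numpy sketch (my own function name) that should agree with calcSma2 up to floating-point noise:
import numpy as np

def sma_cumsum(data, period):
    # Rolling mean via cumulative sums, so the window is never re-summed.
    c = np.cumsum(np.insert(np.asarray(data, dtype=float), 0, 0.0))
    return (c[period:] - c[:-period]) / period

# e.g. print(sma_cumsum(a, 15)) with the array a defined above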
I have a space ship that I want to turn to a destination angle. Currently it works about 90% of the time, but sometimes it 'jumps' to the destination angle rather than turning smoothly. Here is my code:
a = System.Math.Sin(.destStoppingAngle + System.Math.PI)
b = System.Math.Cos(.destStoppingAngle + System.Math.PI)
c = System.Math.Sin(.msngFacing)
d = System.Math.Cos(.msngFacing)
det = a * d - b * c
If det > 0 Then
    .msngFacing = .msngFacing - .ROTATION_RATE * TV.TimeElapsed
    If det < 0.1 Then
        .msngFacing = .destStoppingAngle
        .turning = False
    End If
Else
    .msngFacing = .msngFacing + .ROTATION_RATE * TV.TimeElapsed
    If det > 0.1 Then
        .msngFacing = .destStoppingAngle
        .turning = False
    End If
End If
I would do it like this. First you need a function to lerp an angle (C code, port it yourself):
float lerpangle(float from, float to, float frac) {
    float a;

    if ( to - from > 180 ) {
        to -= 360;
    }
    if ( to - from < -180 ) {
        to += 360;
    }
    a = from + frac * (to - from);
    return a;
}
Then, when starting the rotation you have the duration and stoppingangle as your own parameters. Get the startingangle from your object and startingtime (in something decently precise, milliseconds) and save them. The rotation then goes like this:
current_rotation = lerpangle(startingangle, stoppingangle,
(time.now - startingtime) / duration)
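If it helps, this is the same idea as a small self-contained Python sketch (the names and the clamp on the fraction are mine; the clamp makes the ship land exactly on the target at the end of the turn):
import time

def lerp_angle(from_deg, to_deg, frac):
    # Interpolate along the shortest arc, in degrees, like the C helper above.
    if to_deg - from_deg > 180:
        to_deg -= 360
    if to_deg - from_deg < -180:
        to_deg += 360
    return from_deg + frac * (to_deg - from_deg)

# Example turn: from 350 degrees to 10 degrees over 2 seconds.
starting_angle, stopping_angle, duration = 350.0, 10.0, 2.0
starting_time = time.monotonic()

# Run this part once per frame:
frac = min((time.monotonic() - starting_time) / duration, 1.0)
current_rotation = lerp_angle(starting_angle, stopping_angle, frac)
print(current_rotation)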
How do you use the WaveShaperNode in the Web Audio API? In particular, the curve Float32Array attribute?
Feel free to look at an example here.
In detail, I create a waveshaper curve with this function:
WAAMorningStar.prototype.createWSCurve = function (amount, n_samples) {
    // this.wsCurve is the Float32Array that later gets assigned to the node's curve attribute.
    if ((amount >= 0) && (amount < 1)) {
        ND.dist = amount;
        var k = 2 * ND.dist / (1 - ND.dist);
        for (var i = 0; i < n_samples; i += 1) {
            // LINEAR INTERPOLATION: x := (c - a) * (z - y) / (b - a) + y
            // a = 0, b = 2048, z = 1, y = -1, c = i
            var x = (i - 0) * (1 - (-1)) / (n_samples - 0) + (-1);
            this.wsCurve[i] = (1 + k) * x / (1 + k * Math.abs(x));
        }
    }
};
Then "load" it in a waveshaper node like this:
this.createWSCurve(ND.dist, this.nSamples);
this.sigmaDistortNode = this.context.createWaveShaper();
this.sigmaDistortNode.curve = this.wsCurve;
Every time I need to change the distortion parameter, I re-create the waveshaper curve:
WAAMorningStar.prototype.setDistortion = function (distValue) {
    var distCorrect = distValue;
    if (distValue < -1) {
        distCorrect = -1;
    }
    if (distValue >= 1) {
        distCorrect = 0.985;
    }
    this.createWSCurve(distCorrect, this.nSamples);
};
(I use distCorrect to make the distortion sound nicer; the values were found heuristically.)
You can find the algorithm I use to create the waveshaper curve here.
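For what it is worth, the same shaping formula can be tabulated outside the browser to inspect the curve; a numpy sketch (the function name is mine, 2048 is just the sample count from the comment in createWSCurve):
import numpy as np

def ws_curve(amount, n_samples=2048):
    # x runs linearly over [-1, 1) and is pushed through (1 + k) * x / (1 + k * |x|),
    # the same formula as createWSCurve.
    k = 2 * amount / (1 - amount)
    x = np.arange(n_samples) * 2.0 / n_samples - 1.0
    return ((1 + k) * x / (1 + k * np.abs(x))).astype(np.float32)

print(ws_curve(0.5)[:5])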
I have 2 ints. How do I divide one by the other and then round up afterwards?
If your ints are A and B and you want to have ceil(A/B), just calculate (A + B - 1) / B.
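The same arithmetic written out in Python for a quick check (note that the integer-only trick assumes A and B are positive):
import math

a, b = 7, 3
print((a + b - 1) // b)  # 3 -- integer-only trick, for positive a and b
print(math.ceil(a / b))  # 3 -- same result via floating-point division
print(-(-a // b))        # 3 -- another integer-only form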
What about:
float A, B; // these variables have to be floats!
int result_down = floor(A / B); // rounded down
int result_up = ceil(A / B);    // rounded up
- (NSInteger)divideAndRoundUp:(NSInteger)a with:(NSInteger)b
{
    if (a % b != 0)
    {
        return a / b + 1;
    }
    return a / b;
}
As in C, you can cast both to float and then round the result using a rounding function that takes a float as input.
int a = 1;
int b = 2;
float result = (float)a / (float)b;
int rounded = (int)ceilf(result); // always rounds up; (int)(result + 0.5f) would round to nearest instead
If you are looking for 2.1 to round up to 3:
double row = (double)_datas.count / 3; // cast first, otherwise the division happens on integers
double rounded = ceil(row);            // 2.1 -> 3