Moving average difference between numpy and mathdotnet.com

First, a picture:
Column A is my source data, 50 points.
Columns C and D are the SMA calculated with numpy and mathdotnet.com, respectively, with a window of 15.
Column F is the delta.
As we can see, from about halfway on the two series become identical, but the first half does not match. I do not understand why and, more importantly, do not know which version to trust.
So I took an optimized version of the SMA from SO and ran the data through it.
The code is here:
private static NDArray SMA(this NDArray Data, int Period)
{
    var Length = Data.len;
    // calculate the moving average with a circular buffer that holds
    // the last `Period` samples, each already divided by Period
    var Buffer = new double[Period];
    var Output = new double[Length];
    var CurrentIndex = 0;
    for (var i = 0; i < Length; i++)
    {
        Buffer[CurrentIndex] = Data.GetDouble(i) / Period;
        var MA = 0.0;
        for (var j = 0; j < Period; j++)
        {
            MA += Buffer[j];
        }
        Output[i] = MA;
        CurrentIndex = (CurrentIndex + 1) % Period;
    }
    // drop the first Period - 1 entries, which were computed over incomplete windows
    var R = new ArraySegment<double>(Output, Period - 1, Length - Period + 1);
    return new NDArray(R.ToArray());
}
It uses NumSharp, the .NET port of numpy, to hold the source array.
Although the implementations are completely different, the C# code and the Python numpy code produce the same results (differences only appear after the 12th decimal place, so we can consider them identical).
This points to mathdotnet.com being the odd one out, so I guess I can trust the numpy / C# versions more.
Are there different variations of the SMA that could cause this, or is there something obvious I'm not seeing?
I have put all the data here: https://pastebin.com/WgYJUUJF
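To illustrate what such a variation could look like, here is a minimal numpy sketch of two common SMA conventions (made-up data and illustrative function names, not the series from the pastebin): one only emits averages over complete windows, the other also emits partial averages for the first Period - 1 points, so the two only differ at the start:
import numpy as np

def sma_full_windows(data, period):
    # Convention 1: only averages over complete windows ("valid"),
    # so the output has len(data) - period + 1 points.
    weights = np.ones(period) / period
    return np.convolve(data, weights, mode='valid')

def sma_partial_windows(data, period):
    # Convention 2: also average over however many samples are available
    # so far; the output has the same length as the input and the first
    # period - 1 values are computed over shorter windows.
    out = np.empty(len(data))
    for i in range(len(data)):
        start = max(0, i - period + 1)
        out[i] = data[start:i + 1].mean()
    return out

np.random.seed(0)
data = 1.12 + 0.01 * np.random.rand(50)  # hypothetical 50-point series
full = sma_full_windows(data, 15)
partial = sma_partial_windows(data, 15)
print(np.allclose(full, partial[14:]))   # True: from index 14 on they agree
print(partial[:14])                      # these leading values have no counterpart in `full`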
Edit:
Here is the numpy code:
import numpy as np
def calcSma(data, smaPeriod):
    j = next(i for i, x in enumerate(data) if x is not None)
    our_range = range(len(data))[j + smaPeriod - 1:]
    empty_list = [None] * (j + smaPeriod - 1)
    sub_result = [np.mean(data[i - smaPeriod + 1: i + 1]) for i in our_range]
    return np.array(empty_list + sub_result)

def calcSma2(data_set, periods=3):
    weights = np.ones(periods) / periods
    return np.convolve(data_set, weights, mode='valid')
a = np.array([1.1282553063375, 1.13157696082132, 1.13275406120136, 1.1332879715733, 1.12761933580452, 1.12621836040801, 1.12282485875706, 1.12265572041877, 1.13094386506532, 1.12320520490577, 1.12427293064877, 1.1328332027022, 1.13099445663901, 1.12843355605048, 1.13002750724853, 1.12843355605048, 1.13099445663901, 1.12709476494142, 1.12684879712348, 1.12672349888807, 1.12600933402474, 1.13112070248549, 1.12985951088976, 1.12822416032659, 1.12471789559362, 1.12651004224413, 1.12442669033881, 1.12334638977164, 1.12714333124378, 1.1312233808195, 1.12713229372575, 1.128255040952, 1.12585669781931, 1.12763457442902, 1.12470631424376, 1.12223443223443, 1.12506842815956, 1.12691187181355, 1.12385654130971, 1.13026344596074, 1.12237927400894, 1.1245915922457, 1.13088395780284, 1.13211944646759, 1.12590649028825, 1.12829127560895, 1.11876736364966, 1.12222667492441, 1.12169543369019, 1.12199031071285])
b = calcSma(a, 15)
c = calcSma2(a, 15)
print b
print "----------------------------------"
print c
and here is the mathdotnet one:
var data = Vector<double>.Build.Dense(new[] { 1.1282553063375, 1.13157696082132, 1.13275406120136, 1.1332879715733, 1.12761933580452, 1.12621836040801, 1.12282485875706, 1.12265572041877, 1.13094386506532, 1.12320520490577, 1.12427293064877, 1.1328332027022, 1.13099445663901, 1.12843355605048, 1.13002750724853, 1.12843355605048, 1.13099445663901, 1.12709476494142, 1.12684879712348, 1.12672349888807, 1.12600933402474, 1.13112070248549, 1.12985951088976, 1.12822416032659, 1.12471789559362, 1.12651004224413, 1.12442669033881, 1.12334638977164, 1.12714333124378, 1.1312233808195, 1.12713229372575, 1.128255040952, 1.12585669781931, 1.12763457442902, 1.12470631424376, 1.12223443223443, 1.12506842815956, 1.12691187181355, 1.12385654130971, 1.13026344596074, 1.12237927400894, 1.1245915922457, 1.13088395780284, 1.13211944646759, 1.12590649028825, 1.12829127560895, 1.11876736364966, 1.12222667492441, 1.12169543369019, 1.12199031071285 });
var sma = Vector<double>.Build.Dense(data.MovingAverage(15).Skip(14).ToArray());
var s = sma.Aggregate(string.Empty, (Current, v) => Current + $"{v}, ");
Console.WriteLine(s);

Related

How do I get the complexity of bilinear/nearest neighbour interpolation algorithm? (calculate the big O)

I want to calculate the big O of the following algorithms for resizing binary images:
Bilinear interpolation:
double scale_x = (double)new_height/(height-1);
double scale_y = (double)new_width/(width-1);
for (int i = 0; i < new_height; i++)
{
    int ii = i / scale_x;
    for (int j = 0; j < new_width; j++)
    {
        int jj = j / scale_y;
        double v00 = matrix[ii][jj], v01 = matrix[ii][jj + 1],
               v10 = matrix[ii + 1][jj], v11 = matrix[ii + 1][jj + 1];
        double fi = i / scale_x - ii, fj = j / scale_y - jj;
        double temp = (1 - fi) * ((1 - fj) * v00 + fj * v01) +
                      fi * ((1 - fj) * v10 + fj * v11);
        if (temp >= 0.5)
            result[i][j] = 1;
        else
            result[i][j] = 0;
    }
}
Nearest neighbour interpolation:
double scale_x = (double)height/new_height;
double scale_y = (double)width/new_width;
for (int i = 0; i < new_height; i++)
{
    int srcx = floor(i * scale_x);
    for (int j = 0; j < new_width; j++)
    {
        int srcy = floor(j * scale_y);
        result[i][j] = matrix[srcx][srcy];
    }
}
I assumed that the complexity of both of them is given by the loop dimensions, i.e. O(new_height*new_width). However, the bilinear interpolation surely runs much slower than the nearest neighbour. Could you please explain how to correctly compute the complexity?
Both run in Theta(new_height*new_width) time because, apart from the loop iterations, all operations are constant time.
This doesn't in any way imply that the two programs will execute equally fast. It merely means that if you increase new_height and/or new_width to infinity, the ratio of execution time between the two programs will neither go to infinity nor to zero.
(This is making the assumption that the integer types are unbounded and that all arithmetic operations are constant time operations independent of the length of the operands. Otherwise there will be another relevant factor accounting for the cost of the arithmetic.)
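To put the counting argument in symbols (a sketch only; H, W, c_b and c_n are notation introduced here, with H and W standing for new_height and new_width and c_b, c_n for the per-output-pixel cost of each inner loop body):
$$T_{\mathrm{bilinear}}(H, W) \approx c_b \, H W, \qquad T_{\mathrm{nn}}(H, W) \approx c_n \, H W, \qquad c_b > c_n.$$
Both are \Theta(H W); the bilinear version is slower only by the constant factor c_b / c_n, since it performs four array reads and roughly a dozen arithmetic operations per output pixel, versus a single read for nearest neighbour.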

How to fix "submatrix incorrectly defined" in Scilab?

I am trying to find three parameters (a, b, c) to fit my experimental data using ODE solver and optimization by least squares using Scilab in-built functions.
However, I keep getting the message "submatrix incorrectly defined" at the line "y_exp(:,1) = [0.135 ...".
When I try another series of data (t, yexp), such as the one used in the original template, I get no error messages. The template I use was found here: https://wiki.scilab.org/Non%20linear%20optimization%20for%20parameter%20fitting%20example
function dy = myModel ( t , y , a , b , c )
    // The right-hand side of the Ordinary Differential Equation.
    dy(1) = -a*y(1) - b*y(1)*y(2)
    dy(2) = a*y(1) - b*y(1)*y(2) - c*y(2)
endfunction
function f = myDifferences ( k )
    // Returns the difference between the simulated differential
    // equation and the experimental data.
    global MYDATA
    t = MYDATA.t
    y_exp = MYDATA.y_exp
    a = k(1)
    b = k(2)
    c = k(3)
    y0 = y_exp(1,:)
    t0 = 0
    y_calc = ode(y0', t0, t, list(myModel, a, b, c))
    diffmat = y_calc' - y_exp
    // Make a column vector
    f = diffmat(:)
    MYDATA.funeval = MYDATA.funeval + 1
endfunction
// Experimental data
t = [0,20,30,45,75,105,135,180,240]';
y_exp(:,1) = [0.135,0.0924,0.067,0.0527,0.0363,0.02445,0.01668,0.012,0.009]';
y_exp(:,2) = [0,0.00918,0.0132,0.01835,0.0261,0.03215,0.0366,0.0393,0.0401]';
// Store data for future use
global MYDATA;
MYDATA.t = t;
MYDATA.y_exp = y_exp;
MYDATA.funeval = 0;
function val = L_Squares ( k )
    // Computes the sum of squares of the differences.
    f = myDifferences ( k )
    val = sum(f.^2)
endfunction
// Initial guess
a = 0;
b = 0;
c = 0;
x0 = [a;b;c];
[fopt ,xopt]=leastsq(myDifferences, x0)
Does anyone know how to approach this problem?
Just rewrite lines 28-29 (the two y_exp assignments) as
y_exp = [0.135,0.0924,0.067,0.0527,0.0363,0.02445,0.01668,0.012,0.009
0,0.00918,0.0132,0.01835,0.0261,0.03215,0.0366,0.0393,0.0401]';
or insert a clear at line 1 (you may have defined y_exp before with a different size).

Script interface for the Fit image Palette introduced in GMS 2.3?

The Fit Image Palette is quite nice and powerful. Is there a script interface through which we can access it directly?
There is a script interface, and the example script below will get you started. However, the script interface is not officially supported, so it might be buggy or change in future GMS versions.
For GMS 2.3 the following script works:
// create the input image:
Image input := NewImage("formula test", 2, 100)
input = 500.5 - icol*11.1 + icol*icol*0.11
// add some random noise:
input += (random()-0.5)*sqrt(abs(input))
// create image with error data (not required)
Image errors := input.ImageClone()
errors = tert(input > 1, sqrt(input), 1)
// setup fit:
Image pars := NewImage("pars", 2, 3)
Image parsToFit := NewImage("pars to fit", 2, 3)
pars = 10; // starting values
parsToFit = 1;
Number chiSqr = 1e6
Number conv_cond = 0.00001
Result("\n starting pars = {")
Number xSize = pars.ImageGetDimensionSize(0)
Number i = 0
for (i = 0; i < xSize; i++)
{
    Result(GetPixel(pars, i, 0))
    if (i < (xSize-1)) Result(", ")
}
Result("}")
// fit:
String formulaStr = "p0 + p1*x + p2*x**2"
Number ok = FitFormula(formulaStr, input, errors, pars, parsToFit, chiSqr, conv_cond)
Result("\n results pars = {")
for (i = 0; i < xSize; i++)
{
    Result(GetPixel(pars, i, 0))
    if (i < (xSize-1)) Result(", ")
}
Result("}")
Result(", chiSqr ="+ chiSqr)
// plot results of fit:
Image plot := PlotFormula(formulaStr, input, pars)
// compare the plot and original data:
Image compare := NewImage("Compare Fit", 2, 100, 3)
compare[icol, 0] = input // original data
compare[icol, 1] = plot // fit function
compare[icol, 2] = input - plot // residuals
ImageDocument linePlotDoc = CreateImageDocument("Test Fitting")
ImageDisplay linePlotDsp = linePlotDoc.ImageDocumentAddImageDisplay(compare, 3)
linePlotDoc.ImageDocumentShow()

Removing the spacing between tiles in tilesheet

So I have an image which contains a tile-sheet, where each tile is approximately 16 pixels wide and high, but the tiles are spaced out with a transparent spacer between them.
Like so:
But this is ugly, makes displaying the sprites in the program annoying, and wastes valuable image space. Is there an easy way (besides manually moving each individual tile in Photoshop) to make it look like this?
I looked through Photoshop macros as well as other programs, and I didn't find anything that would do this directly.
Google also suggests I go to Home Depot and get tile caulk remover.
Try this snippet. As you said, it assumes the tiles are always going to be 16 pixels, that the top-left one is already in the correct position, and that there is a single-pixel gap. The script also assumes the document will be opened with the layer containing your tiles set as the active layer.
#target photoshop
app.preferences.rulerUnits = Units.PIXELS;
app.preferences.typeUnits = TypeUnits.PIXELS;
var gap = 1;
var tileSize = 16;
var doc = app.activeDocument.duplicate();
var sourceLyr = doc.activeLayer;
var xTilePosition = 0;
var yTilePosition = 0;
for (var x = 0; x < sourceLyr.bounds[2]; x = x + tileSize + 1) {
    for (var y = 0; y < sourceLyr.bounds[3]; y = y + tileSize + 1) {
        if (x > 0 || y > 0) {
            app.activeDocument = doc;
            doc.activeLayer = sourceLyr;
            selRegion = Array(Array(x, y),
                              Array(x + tileSize, y),
                              Array(x + tileSize, y + tileSize),
                              Array(x, y + tileSize),
                              Array(x, y));
            doc.selection.select(selRegion);
            var dx = x - (xTilePosition * tileSize);
            var dy = y - (yTilePosition * tileSize);
            doc.selection.translate(0 - dx, 0 - dy);
        }
        yTilePosition++;
    }
    xTilePosition++;
    yTilePosition = 0;
}

Web Audio API WaveShaperNode

How do you use the WaveShaperNode in the Web Audio API? In particular, the curve Float32Array attribute?
Feel free to look at an example here.
In detail, I create a waveshaper curve with this function:
WAAMorningStar.prototype.createWSCurve = function (amount, n_samples) {
    if ((amount >= 0) && (amount < 1)) {
        ND.dist = amount;
        var k = 2 * ND.dist / (1 - ND.dist);
        for (var i = 0; i < n_samples; i += 1) {
            // LINEAR INTERPOLATION: x := (c - a) * (z - y) / (b - a) + y
            // a = 0, b = n_samples (2048 here), z = 1, y = -1, c = i
            var x = (i - 0) * (1 - (-1)) / (n_samples - 0) + (-1);
            this.wsCurve[i] = (1 + k) * x / (1 + k * Math.abs(x));
        }
    }
};
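In formula form (just restating what the loop above computes, with a = amount and N = n_samples):
$$k = \frac{2a}{1 - a}, \qquad x_i = \frac{2i}{N} - 1, \qquad f(x_i) = \frac{(1 + k)\, x_i}{1 + k \, \lvert x_i \rvert}.$$
As a approaches 1, k grows without bound and f approaches a hard clipper, which is why larger amounts sound more distorted.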
Then "load" it in a waveshaper node like this:
this.createWSCurve(ND.dist, this.nSamples);
this.sigmaDistortNode = this.context.createWaveShaper();
this.sigmaDistortNode.curve = this.wsCurve;
Every time I need to change the distortion parameter, I re-create the waveshaper curve:
WAAMorningStar.prototype.setDistortion = function (distValue) {
    var distCorrect = distValue;
    if (distValue < -1) {
        distCorrect = -1;
    }
    if (distValue >= 1) {
        distCorrect = 0.985;
    }
    this.createWSCurve(distCorrect, this.nSamples);
};
(I use distCorrect to make the distortion sound nicer; the values were found heuristically.)
You can find the algorithm I use to create the waveshaper curve here.