Define a matrix in VB2012 with a Math.NET Numerics matrix - vb.net

I am switching from MATLAB to VS2012. I want to use Math.NET Numerics to solve my matrix-based equations. I am having a hard time defining a simple matrix in VS2012 in the VB environment using a Math.NET matrix. I have found many articles on how to define a matrix in F#, but no luck in VB. I tried Public MAT1 As Matrix(Of but I don't know how to finish the declaration. Does anyone know? Thank you.

Math.NET Numerics has predefined matrix classes for single, double, and complex element types. The generic base type is Matrix(Of T) in the MathNet.Numerics.LinearAlgebra namespace, so the declaration in the question is completed as Public MAT1 As Matrix(Of Double).
For example, to instantiate a 3x3 matrix of doubles, use:
Dim m = MathNet.Numerics.LinearAlgebra.Double.Matrix.Build.DenseOfArray(New Double(,) {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}})
Each operation on the matrix returns a transformed matrix:
Dim m2 = m.Multiply(1.5)

Related

Calculating covariance of a 3-dim matrix using einsum

I have an array of time-series data of shape (2466, 2498, 9), i.e. (asset, date, feature).
There are 9 features, and I want to run PCA on that axis to reduce its dimensionality.
I'm struggling to calculate the covariance matrix, Z = X.T @ X.
I think I want to express this as an einsum, but I'm not sure how. I'm certainly interested in other methods as well, as the purpose of this is to learn numpy, rather than actually solve a problem.
Edit: This is my (apparently wrong) attempt so far:
np.einsum('ijk,ijl->ijkl', myData, myData)
(This just hangs my system.)
Edit 2:
I've come to understand that I should be using np.linalg.svd for this problem.
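For reference: the 'ijk,ijl->ijkl' attempt hangs because it keeps both the asset and date axes, so the output has 2466 x 2498 x 9 x 9 ≈ 5 * 10^8 elements (roughly 4 GB as float64). Contracting over both of those axes instead yields the 9 x 9 feature matrix directly. A minimal sketch, with a small random array standing in for the real data:
import numpy as np

X = np.random.rand(24, 25, 9)  # small stand-in for the (asset, date, feature) array

Xc = X - X.mean(axis=(0, 1))   # center each feature over all (asset, date) pairs

# Contract over both the asset (i) and date (j) axes, leaving features (k, l).
Z = np.einsum('ijk,ijl->kl', Xc, Xc) / (Xc.shape[0] * Xc.shape[1] - 1)

# Same result via reshape and matmul:
flat = Xc.reshape(-1, Xc.shape[2])
assert np.allclose(Z, flat.T @ flat / (flat.shape[0] - 1))
This treats every (asset, date) pair as one observation of the 9 features; whether that pooling is appropriate depends on the PCA you have in mind.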

Fastest way to apply arithmetic operations to System.Array in IronPython

I would like to arithmetically add two large System.Arrays element-wise in IronPython and store the result in the first array, like this:
for i in range(arrA.Length):
    arrA.SetValue(arrA.GetValue(i) + arrB.GetValue(i), i)
However, this seems very slow. Coming from a C background, I would reach for pointers or iterators, but I do not know the corresponding fast IronPython idiom. I cannot use Python lists, as my objects are strictly of type System.Array. The element type is float and the array is 3D.
What is the fastest (or at least a fast) way to perform this computation?
Edit:
The number of elements is approximately 256^3.
3d float means that the array can be accessed like this: array.GetValue(indexX, indexY, indexZ). I am not sure how the respective memory is organized in IronPython's System.Array.
Background: I wrote an interface to an IronPython API, which gives access to data in a simulation software tool. I retrieve 3D scalar data and accumulate it over time into an array in my IronPython script. The accumulation is performed 10,000 times, and it should be fast so that the simulation does not take ages.
Is it possible to use the numpy library developed for IronPython?
https://pytools.codeplex.com/wikipage?title=NumPy%20and%20SciPy%20for%20.Net
It appears to be supported, and as far as I know it is as close as you can get in Python to C-style pointer functionality with arrays and the like.
Create an array:
x = np.array([[1, 2, 3], [4, 5, 6]], np.float32)
Multiply all elements by 3.0:
x *= 3.0
(With an integer dtype such as np.int32, recent NumPy versions raise a casting error on the in-place multiply by 3.0; use a float dtype as above, or x = x * 3.0 to get a new float array.)
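If the NumPy-for-IronPython package works in your setup, the per-element loop in the question collapses to one vectorized statement. A minimal sketch, with hypothetical arrays standing in for the simulation buffers:
import numpy as np

# Hypothetical stand-ins for the accumulation buffer and one retrieved frame.
accum = np.zeros((256, 256, 256), dtype=np.float32)
frame = np.ones((256, 256, 256), dtype=np.float32)

# Element-wise in-place accumulation: one call per time step instead of
# 256**3 GetValue/SetValue round trips through the .NET object layer.
accum += frame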

Lua: Dimensions of a table

This seems like a really easy, "google it for me" kind of question, but I can't seem to find an answer. How do I find the dimensions of a table in Lua, similar to NumPy's .shape attribute?
E.g. blah = '2 x 3 table'; blah.lua_equivalent_of_shape = {2,3}
Tables in Lua are sets of key-value pairs and do not have dimensions.
You can implement 2d-arrays with Lua tables. In this case, the dimension is given by #t x #t[1], as in the example below:
t = {
  {11, 12, 13},
  {21, 22, 23},
}
print(#t, #t[1])
NumPy's arrays are contiguous in memory, while Lua's tables are hash tables, so they don't always have a notion of shape. Tables can be used to implement ragged arrays, sets, objects, etc.
That being said, to find the length of a table t that uses consecutive integer indices 1..n, use #t:
t = {1, 2, 3}
print(#t) -- prints 3
You could implement an object to behave more like a numpy array and add a shape attribute, or implement it in C and make bindings for Lua.
t = {{1, 0}, {2, 3}, {3, 1}, shape={3, 2}}
print(t.shape[1], t.shape[2])
print("dims", #t.shape)
If you really miss NumPy's functionality, you can use torch.Tensor for efficient NumPy-like functionality in Lua.

How to maximize the log-likelihood for a Gaussian Process in Mathematica

I am currently trying to implement a Gaussian Process in Mathematica and am stuck with the maximization of the log-likelihood. I tried to use FindMaximum on my log-likelihood function, but this does not seem to work on this function.
gpdata = {{-1.5, -1.8}, {-1., -1.2}, {-0.75, -0.4}, {-0.4, 0.1}, {-0.25, 0.5}, {0., 0.8}};
kernelfunction[i_, j_, h0_, h1_] :=
  h0*h0*Exp[-(gpdata[[i, 1]] - gpdata[[j, 1]])^2/(2*h1^2)] +
  KroneckerDelta[i, j]*0.09;
covariancematrix[h0_, h1_] =
  ParallelTable[kernelfunction[i, j, h0, h1], {i, 1, 6}, {j, 1, 6}];
loglikelihood[h0_, h1_] :=
  -0.5*gpdata[[All, 2]].LinearSolve[covariancematrix[h0, h1], gpdata[[All, 2]], Method -> "Cholesky"] -
  0.5*Log[Det[covariancematrix[h0, h1]]] - 3*Log[2*Pi];
FindMaximum[loglikelihood[a, b], {{a, 1}, {b, 1.1}}, MaxIterations -> 500, Method -> "QuasiNewton"]
In the log-likelihood I would usually have the product of the inverse covariance matrix with the gpdata[[All, 2]] vector, but because the covariance matrix is always positive semidefinite I wrote it with LinearSolve instead. The evaluation also fails to finish if I use
gpdata[[All, 2]].Inverse[covariancematrix[h0, h1]].gpdata[[All, 2]]
Does anyone have an idea? I am actually working on a far more complicated problem, where I have 6 parameters to optimize, but I already have problems with 2.
In my experience, second-order methods fail with hyper-parameter optimization more often than gradient-based methods. I think this is because (most?) second-order methods rely on the function being close to a quadratic near the current estimate.
Using conjugate gradient, or even Powell's derivative-free conjugate direction method, has proved successful in my experiments. For the two-parameter case, I would suggest making a contour plot of the hyper-parameter surface for some intuition.
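For illustration, here is a minimal sketch of that suggestion in Python/SciPy (my port of the question's data and kernel, not the original poster's code), minimizing the negative log-likelihood with Powell's method:
import numpy as np
from scipy.optimize import minimize

# Data ported from the Mathematica code above.
gpdata = np.array([[-1.5, -1.8], [-1.0, -1.2], [-0.75, -0.4],
                   [-0.4, 0.1], [-0.25, 0.5], [0.0, 0.8]])
x, y = gpdata[:, 0], gpdata[:, 1]

def neg_log_likelihood(theta):
    h0, h1 = theta
    # Squared-exponential kernel plus the 0.09 diagonal noise term.
    K = h0**2 * np.exp(-(x[:, None] - x[None, :])**2 / (2 * h1**2))
    K += 0.09 * np.eye(len(x))
    L = np.linalg.cholesky(K)  # raises LinAlgError if K is not positive definite
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^-1 y via Cholesky
    # 0.5 y' K^-1 y + 0.5 log det K + (n/2) log 2 pi, with n = 6
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 3 * np.log(2 * np.pi)

res = minimize(neg_log_likelihood, x0=[1.0, 1.1], method='Powell')
print(res.x, -res.fun)
Within Mathematica itself, a derivative-free option worth trying is, if I recall the option name correctly, Method -> "PrincipalAxis".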

Data Compression Through Polynomial Equations

I am trying to create a new method of compressing data. I want to take, say, a massive set of numbers and fit polynomial functions to the data; I would then store the polynomial with its starting parameters rather than each individual bit of data. At runtime, the opening program would just evaluate the polynomial to recreate the data.
My question: Are there any libraries or functions in Visual Basic .NET that will take, say, an array of data {0, 2, 4, 6, ..., Large Even} and return a "trendline" for it, much like Excel's chart trendline option?
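No VB.NET answer is recorded here, but Math.NET Numerics (used in the first question above) provides polynomial least-squares fitting; if I recall correctly, Fit.Polynomial(x, y, order) returns the coefficients. The round trip itself is language-agnostic; here is a minimal sketch in Python/NumPy of fitting the example sequence and recreating it from the stored coefficients:
import numpy as np

data = np.arange(0, 200, 2.0)  # the {0, 2, 4, ...} example, 100 samples
idx = np.arange(len(data), dtype=float)

# Fit a degree-1 polynomial; this sequence is exactly linear, data = 2*idx.
coeffs = np.polyfit(idx, data, deg=1)

# "Decompress": rebuild the data from the two stored coefficients.
rebuilt = np.polyval(coeffs, idx)
assert np.allclose(rebuilt, data)
Note that this only compresses data that really is well modeled by a low-degree polynomial; for arbitrary data, an exact polynomial fit needs as many coefficients as there are points.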