Solving Ax = b using the inverse matrix in Maple

I am trying to solve a system of linear equations using the inverse matrix, but I am having trouble with my last command, where I try to multiply the inverse matrix by B. Can anyone offer advice on what I am doing wrong?
restart; with(linalg):
sys := {a+.9*h+.8*c+.4*d+.1*e+0*f = 1, .1*a+.2*h+.4*c+.6*d+.5*e+.6*f = .6, .4*a+.5*h+.7*c+d+.6*e+.3*f = .7, .6*a+.1*h+.2*c+.3*d+.5*e+f = .5, .8*a+.8*h+c+.7*d+.4*e+.2*f = .8, .9*a+h+.8*c+.5*d+.2*e+.1*f = .9}:
solve(sys, {a, c, d, e, f, h});
{a = 0.08191850594, c = 0.7504244482, d = 3.510186757,
e = -6.474108659, f = 2.533531409, h = -0.4876910017}
Z := genmatrix(sys, [a, h, c, d, e, f], 'b');
evalm(b);
linsolve(Z, b);
inverse(Z);
B := {`<|>`(`<,>`(1, .6, .7, .5, .8, .9))};
evalm(inverse(Z)&*B);
Maple's response is indented below each line where possible. I don't have enough reputation points to include pictures of the matrix results, so they have been left blank.

As a previous poster suggests, removing the curly braces will fix your code. However, it is also worth noting that if you are using Maple 6 or newer, the linalg package has been deprecated in favor of the newer LinearAlgebra package.
Here is equivalent code that uses the LinearAlgebra package:
with(LinearAlgebra):
sys := [a+.9*h+.8*c+.4*d+.1*e+0*f = 1, .1*a+.2*h+.4*c+.6*d+.5*e+.6*f = .6, .4*a+.5*h+.7*c+d+.6*e+.3*f = .7, .6*a+.1*h+.2*c+.3*d+.5*e+f = .5, .8*a+.8*h+c+.7*d+.4*e+.2*f = .8, .9*a+h+.8*c+.5*d+.2*e+.1*f = .9];
solve(sys, {a, c, d, e, f, h});
Z,b := GenerateMatrix(sys, [a, h, c, d, e, f]);
LinearSolve( Z, b );
MatrixInverse( Z );
MatrixInverse( Z ) . b;
One minor difference is that here the GenerateMatrix command returns both the coefficient matrix and the right-hand-side Vector. Also note that I suppressed the output of the with command using the : operator.

Just remove the curly brackets from B.
B := `<|>`(`<,>`(1, .6, .7, .5, .8, .9));
evalm(inverse(Z)&*B);

Related

sympy lambdify with scipy curve_fit

I constructed a sympy expression
and I used lambdify to convert to a numpy function as follow:
import sympy
from sympy.parsing.sympy_parser import parse_expr
x0,x1 = sympy.symbols('x0 x1')
a,b,c = sympy.symbols('a b c')
func=parse_expr('a*x0 + b*x1 + c*x0*x1')
p = [x0,x1,a,b,c]
npFunc = sympy.lambdify(p,func,'numpy')
but when I use scipy's curve_fit to fit npFunc for (a,b,c) with the two independent variables x0 and x1, it fails. I can't figure out how to use lambdify to make npFunc work like this (with the unpacking):
def npFunc(X, a, b, c):
    x0, x1 = X
    return a*x0 + b*x1 + c*x0*x1
How should I do it?
The docs for your function (using the IPython ? shortcut):
In [22]: npFunc?
Signature: npFunc(x0, x1, a, b, c)
Docstring:
Created with lambdify. Signature:
func(x0, x1, a, b, c)
Expression:
a*x0 + b*x1 + c*x0*x1
Source code:
def _lambdifygenerated(x0, x1, a, b, c):
    return (a*x0 + b*x1 + c*x0*x1)
With the suggested alternative:
In [23]: p = [[x0, x1], a, b, c]
In [24]: npFunc = lambdify(p,func,'numpy')
In [25]: npFunc?
Signature: npFunc(_Dummy_22, a, b, c)
Docstring:
Created with lambdify. Signature:
func(arg_0, a, b, c)
Expression:
a*x0 + b*x1 + c*x0*x1
Source code:
def _lambdifygenerated(_Dummy_22, a, b, c):
    [x0, x1] = _Dummy_22
    return (a*x0 + b*x1 + c*x0*x1)
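To close the loop with curve_fit, here is a minimal end-to-end sketch (the synthetic data, the seed, and the true coefficients (2, 3, 0.5) are illustrative assumptions, not from the original post). Grouping [x0, x1] in the argument list makes lambdify generate exactly the unpacking signature that curve_fit expects.
import numpy as np
import sympy
from scipy.optimize import curve_fit
from sympy.parsing.sympy_parser import parse_expr

x0, x1 = sympy.symbols('x0 x1')
a, b, c = sympy.symbols('a b c')
func = parse_expr('a*x0 + b*x1 + c*x0*x1')

# Grouping [x0, x1] makes the generated function unpack its first argument.
npFunc = sympy.lambdify([[x0, x1], a, b, c], func, 'numpy')

# Synthetic data from known coefficients (a, b, c) = (2, 3, 0.5).
rng = np.random.default_rng(0)
X = rng.random((2, 50))            # row 0: x0 samples, row 1: x1 samples
ydata = npFunc(X, 2.0, 3.0, 0.5)

popt, pcov = curve_fit(npFunc, X, ydata, p0=[1.0, 1.0, 1.0])
print(popt)                        # approximately [2.0, 3.0, 0.5]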

Numpy dot product with 3d array

I've got two arrays:
data of shape (2466, 2498, 9), where the dimensions are (asset, date, returns).
correlation_matrix of shape (2466, 2466) (with 0's on the diagonal)
I want to get the dot product that equates to the expected returns, which is the returns of each asset multiplied by the correlation_matrix. It should give a shape the same as data.
I've tried:
data.transpose([1, 2, 0]) @ correlation_matrix
but this just hangs my PC (been going 10 minutes and counting).
I also tried:
np.einsum('ijk,lm->ijk', data, correlation_matrix)
but I'm less familiar with einsum, and this also hangs.
What am I doing wrong?
With your .transpose((1, 2, 0)) data, the correct form is:
"ijs,sk" # -> ijk
Since for a tensor A and B, we can write:
C_{ijk} = Σ_s A_{ijs} * B_{sk}
If you want to avoid transposing your data beforehand, you can just permute the indices:
"sij,sk" # -> ijk
To verify:
import numpy as np

p, q, r = 2466, 2498, 9
a = np.random.randint(255, size=(p, q, r))
b = np.random.randint(255, size=(p, p))
c1 = a.transpose((1, 2, 0)) @ b
c2 = np.einsum("sij,sk", a, b)
>>> np.all(c1 == c2)
True
The number of multiplications needed to compute this for (p, q, r)-shaped data is p * np.prod(c1.shape) == p * (q * r * p) == p**2 * q * r. In your case, that is 136_716_549_192 multiplications. You also need approximately the same number of additions, which puts us somewhere close to 270 billion operations. If you want more speed, you could consider using a GPU for your computations via cupy.
from timeit import timeit

import cupy as cp
import numpy as np

def with_np():
    p, q, r = 2466, 2498, 9
    a = np.random.randint(255, size=(p, q, r))
    b = np.random.randint(255, size=(p, p))
    c1 = a.transpose((1, 2, 0)) @ b
    c2 = np.einsum("sij,sk", a, b)

def with_cp():
    p, q, r = 2466, 2498, 9
    a = cp.random.randint(255, size=(p, q, r))
    b = cp.random.randint(255, size=(p, p))
    c1 = a.transpose((1, 2, 0)) @ b
    c2 = cp.einsum("sij,sk", a, b)
>>> timeit(with_np, number=1)
513.066
>>> timeit(with_cp, number=1)
0.197
That's a speedup of 2600, including memory allocation, initialization, and CPU/GPU copy times! (A more realistic benchmark would give an even larger speedup.)
There are different ways to do this product:
# as you already suggested:
data.transpose([1, 2, 0]) @ correlation_matrix
# using einsum
np.einsum('ijk,il', data, correlation_matrix)
# using tensordot to explicitly specify the axes to sum over
np.tensordot(data, correlation_matrix, axes=(0,0))
All of them should give the same result. The timing for some small matrices was more or less the same for me. So your problem is the large amount of data, not an inefficient implementation.
from timeit import timeit

import numpy as np

A = np.arange(100*120*9).reshape((100, 120, 9))
B = np.arange(100**2).reshape((100, 100))
timeit('A.transpose([1,2,0]) @ B', globals=globals(), number=100)
# 0.747475513999234
timeit("np.einsum('ijk,il', A, B)", globals=globals(), number=100)
# 0.4993825999990804
timeit('np.tensordot(A, B, axes=(0,0))', globals=globals(), number=100)
# 0.5872082839996438
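For completeness, here is a small sanity check (reusing the small shapes from the timing snippet above; a fresh sketch, not part of the original answer) that the three formulations really do agree:
import numpy as np

A = np.arange(100*120*9).reshape((100, 120, 9))
B = np.arange(100**2).reshape((100, 100))

c1 = A.transpose([1, 2, 0]) @ B          # (120, 9, 100)
c2 = np.einsum('ijk,il', A, B)           # implicit output 'jkl' -> (120, 9, 100)
c3 = np.tensordot(A, B, axes=(0, 0))     # sum over axis 0 of both -> (120, 9, 100)

print(np.allclose(c1, c2) and np.allclose(c1, c3))   # True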

Complex matrix multiplication with tensorflow-backend of Keras

Let tensor F1 have shape (a, h, w, m), tensor F2 have shape (a, h, w, n), and tensor G have shape (a, m, n).
I want to implement the following formula, which computes each entry of G from the entries of F1 and F2, using the TensorFlow backend of Keras. However, I am confused by the various backend functions, especially K.dot() and K.batch_dot().
$$ G_{k,i,j} = \frac{1}{h w} \sum_{s=1}^{h} \sum_{t=1}^{w} F^1_{k,s,t,i} \, F^2_{k,s,t,j} $$
Is there any way to implement the above formula? Thank you in advance.
Using TensorFlow's tf.einsum() (which you could wrap in a Lambda layer for Keras):
import tensorflow as tf
import numpy as np
a, h, w, m, n = 1, 2, 3, 4, 5
F1 = tf.random_uniform(shape=(a, h, w, m))
F2 = tf.random_uniform(shape=(a, h, w, n))
G = tf.einsum('ahwm,ahwn->amn', F1, F2) / (h * w)
with tf.Session() as sess:
    f1, f2, g = sess.run([F1, F2, G])

# Manually computing G to check our operation, reproducing your equation naively:
g_check = np.zeros(shape=(a, m, n))
for k in range(a):
    for i in range(m):
        for j in range(n):
            for s in range(h):
                for t in range(w):
                    g_check[k, i, j] += f1[k, s, t, i] * f2[k, s, t, j] / (h * w)

# Checking for equality:
print(np.allclose(g, g_check))
# > True
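Since the answer mentions wrapping this in a Lambda layer, here is a minimal sketch of what that could look like. The layer wiring, the fixed batch size, and the standalone Keras imports are my assumptions to match the TF 1.x-style session code above, not part of the original answer.
import tensorflow as tf
from keras.layers import Input, Lambda
from keras.models import Model

a, h, w, m, n = 1, 2, 3, 4, 5
f1_in = Input(batch_shape=(a, h, w, m))
f2_in = Input(batch_shape=(a, h, w, n))

# The Lambda layer applies the same einsum contraction to the pair of inputs.
g_out = Lambda(lambda F: tf.einsum('ahwm,ahwn->amn', F[0], F[1]) / (h * w))([f1_in, f2_in])
model = Model(inputs=[f1_in, f2_in], outputs=g_out)
model.summary()    # output shape should be (a, m, n)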

Maple genmatrix command

Does anyone know what I am doing wrong here? I am trying to generate a matrix from the linear system defined below and then solve the system using the matrix's inverse. For some reason it won't create the matrix.
sys := {a+.9*h+.8*c+.4*d+.1*e+0*f = 1, .1*a+.2*h+.4*c+.6*d+.5*e+.6*f = .6,
.4*a+.5*h+.7*c+d+.6*e+.3*f = .7, .6*a+.1*h+.2*c+.3*d+.5*e+f = .5,
.8*a+.8*h+c+.7*d+.4*e+.2*f = .8, .9*a+h+.8*c+.5*d+.2*e+.1*f = .9}:
solve(sys, {a, c, d, e, f, h});
Z := genmatrix(sys, [a, h, c, d, e, f], 'b');
Indented values are Maple's responses. The third line of code should have generated a matrix, but instead it just echoes my input.
You need to load the linear-algebra package by calling with(linalg):
restart:with(linalg):
Z := genmatrix(sys, [a, h, c, d, e, f], 'b');

How do I sum the coefficients of a polynomial in Maxima?

I came up with this nice thing, which I am calling 'partition function for symmetric groups'
Z[0]:1;
Z[n]:=expand(sum((n-1)!/i!*z[n-i]*Z[i], i, 0, n-1));
Z[4];
6*z[4]+8*z[1]*z[3]+3*z[2]^2+6*z[1]^2*z[2]+z[1]^4
The sum of the coefficients for Z[4] is 6+8+3+6+1 = 24 = 4!
which I am hoping corresponds to the fact that the group S4 has 6 elements like (abcd), 8 like (a)(bcd), 3 like (ab)(cd), 6 like (a)(b)(cd), and 1 like (a)(b)(c)(d)
So I thought to myself, the sum of the coefficients of Z[20] should be 20!
But life being somewhat on the short side, and fingers giving trouble, I was hoping to confirm this automatically. Can anyone help?
This sort of thing points a way:
Z[20],z[1]=1,z[2]=1,z[3]=1,z[4]=1,z[5]=1,z[6]=1,z[7]=1,z[8]=1;
But really...
I don't know a straightforward way to do that; coeff seems to handle only a single variable at a time. But here's a way to get the list you want. The basic idea is to extract the terms of Z[20] as a list, and then evaluate each term with z[1] = 1, z[2] = 1, ..., z[20] = 1.
(%i1) display2d : false $
(%i2) Z[0] : 1 $
(%i3) Z[n] := expand (sum ((n - 1)!/i!*z[n - i]*Z[i], i, 0, n-1)) $
(%i4) z1 : makelist (z[i] = 1, i, 1, 20);
(%o4) [z[1] = 1,z[2] = 1,z[3] = 1,z[4] = 1,z[5] = 1,z[6] = 1,z[7] = 1, ...]
(%i5) a : args (Z[20]);
(%o5) [121645100408832000*z[20],128047474114560000*z[1]*z[19],
67580611338240000*z[2]*z[18],67580611338240000*z[1]^2*z[18],
47703960944640000*z[3]*z[17],71555941416960000*z[1]*z[2]*z[17], ...]
(%i6) a1 : ev (a, z1);
(%o6) [121645100408832000,128047474114560000,67580611338240000, ...]
(%i7) apply ("+", a1);
(%o7) 2432902008176640000
(%i8) 20!;
(%o8) 2432902008176640000