Question about methodology to solve an equation in Sage - variables

So I'm trying to solve this equation in Sage: 3g - 2h = 1,
by finding values for both variables so that the equation holds.
By hand, one solution is g = 1, h = 1.
This is what I have so far:
var('g,h')
E = 3*g-2*h
1 == 3*g - 2*h
solve(E,c,d)

One way to go is as follows.
Define the symbolic variables g and h:
sage: g, h = SR.var('g, h')
Define the equation E as
sage: E = 3*g - 2*h == 1
Solve E for g and h:
sage: solve([E], [g, h])
[[g == 2/3*r1 + 1/3, h == r1]]
This returns a list of solutions
containing a single solution, which is
a list of possible values for g and h.
The solution returned involves a real parameter
(denoted by r1 here), and g and h are
expressed in terms of this parameter.
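For instance, taking r1 = 1 in this solution gives g = 1 and h = 1, the by-hand solution from the question; explicit substitutions like this are carried out below.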
One can also get solutions as dictionaries:
sage: sols = solve([E], [g, h], solution_dict=True)
sage: sols
[{g: 2/3*r3 + 1/3, h: r3}]
sage: sol = sols[0]
sage: sol
{g: 2/3*r3 + 1/3, h: r3}
Recover the values of g and h in the solution:
sage: sol[g]
2/3*r3 + 1/3
sage: sol[h]
r3
Give a Python name to that parameter:
sage: r3, = sol[h].variables()
sage: r3
r3
Substitute some values:
sage: sol[g].subs({r3: 1}), sol[h].subs({r3: 1})
(1, 1)
sage: sol[g].subs({r3: -1}), sol[h].subs({r3: -1})
(-1/3, -1)
sage: sol[g].subs({r3: -2}), sol[h].subs({r3: -2})
(-1, -2)
sage: sol[g].subs({r3: 3}), sol[h].subs({r3: 3})
(7/3, 3)
sage: sol[g].subs({r3: 4}), sol[h].subs({r3: 4})
(3, 4)

Related

SAGEMATH: SyntaxError: keyword can't be an expression

At the end I wanted to substitute qij^2 = qij for i = 1, 2 and j = 1, 2, 3, 4,
and got the error:
SyntaxError: keyword can't be an expression
The code:
sage: var('x1, x2')
(x1, x2)
sage: f1 = 2*x1 + x2 + 1
sage: f2 = -x1 + 5*x2 - 2
sage: e1 = expand(f1^2)
sage: e2 = expand(f2^2)
sage: var('q11, q12, q13, q14, q21, q22, q23, q24')
(q11, q12, q13, q14, q21, q22, q23, q24)
sage: E1 = e1.substitute(x1=q11+2*q12-q13-2*q14, x2=q21+2*q22-q23-2*q24)
sage: E2 = e2.substitute(x1=q11+2*q12-q13-2*q14, x2=q21+2*q22-q23-2*q24)
sage: E1a = expand(E1)
sage: E2a = expand(E2)
sage: E1aa = E1a.substitute(q11^2=q11, q12^2=q12, q13^2=q13, q14^2=q14,
....: q21^2=q21, q22^2=q22, q23^2=q23, q24^2=q24)
sage: E2aa = E2a.substitute(q11^2=q11, q12^2=q12, q13^2=q13, q14^2=q14,
....: q21^2=q21, q22^2=q22, q23^2=q23, q24^2=q24)
sage: E1aa, E2aa
If you just want to substitute for a variable, you can use the = syntax: E1a.substitute(q11=(whatever)). If, however, you want to substitute for a more complicated expression, you need a different syntax:
E1a.substitute(q11^2=q1) # will raise an error
E1a.substitute(q11^2==q1) # should work
E1a.substitute({q11^2: q1}) # should work
The help message produced by E1a.substitute? gives some examples.
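For the substitution in the question, the dictionary form handles all eight variables at once (a sketch along the lines of the third syntax above):
sage: subs_dict = {v^2: v for v in [q11, q12, q13, q14, q21, q22, q23, q24]}
sage: E1aa = E1a.substitute(subs_dict)
sage: E2aa = E2a.substitute(subs_dict)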

Probability of a sample of a distribution

I am trying to generate a sample of 100 scenarios (X, Y) where X and Y are normally distributed, X ~ N(50, 5^2), Y ~ N(30, 2^2), and correlated with Cov(X, Y) = 0.4.
I have been able to generate the scenarios with a Cholesky decomposition:
# We do a Cholesky decomposition to generate correlated scenarios
using Distributions, LinearAlgebra  # Normal and cholesky live here
nScenarios = 10
nBreadTypes = 2  # undefined in the original snippet; two correlated variables here
Σ = [25 0.4; 0.4 4]
μ = [50, 30]
L = cholesky(Σ)
v = [rand(Normal(0, 1), nScenarios), rand(Normal(0, 1), nScenarios)]
X = reshape(zeros(nScenarios), 1, nScenarios)
Y = reshape(zeros(nScenarios), 1, nScenarios)
for i = 1:nScenarios
    X[1, i] = sum(L.U[1, j] * v[j][i] for j = 1:nBreadTypes) + μ[1]
    Y[1, i] = sum(L.U[2, j] * v[j][i] for j = 1:nBreadTypes) + μ[2]
end
However, I need the probability of each scenario, i.e. P(X = k and Y = p). My question is: how can we draw a sample from a given distribution together with the probability of each scenario?
Following the BatWannaBe explanation, normally I would do it like this:
julia> using Distributions
julia> d = MvNormal([50.0, 30.0], [25.0 0.4; 0.4 4.0])
FullNormal(
dim: 2
μ: [50.0, 30.0]
Σ: [25.0 0.4; 0.4 4.0]
)
julia> point = rand(d)
2-element Vector{Float64}:
52.807189619051485
32.693811008760676
julia> pdf(d, point)
0.0056519503173830515
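A whole batch of scenarios and their densities can be obtained in one call (a sketch using the same Distributions.jl API; note that for a continuous distribution the probability of any exact point is zero, so the pdf value is a density, not a probability):
julia> points = rand(d, 100)       # 2×100 matrix, one scenario per column
julia> densities = pdf(d, points)  # 100-element vector of density values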

Numpy dot product with 3d array

I've got two arrays:
data of shape (2466, 2498, 9), where the dimensions are (asset, date, returns).
correlation_matrix of shape (2466, 2466) (with 0's on the diagonal)
I want the dot product that gives the expected returns: the returns of each asset multiplied by the correlation_matrix. The result should have the same shape as data.
I've tried:
data.transpose([1, 2, 0]) @ correlation_matrix
but this just hangs my PC (been going 10 minutes and counting).
I also tried:
np.einsum('ijk,lm->ijk', data, correlation_matrix)
but I'm less familiar with einsum, and this also hangs.
What am I doing wrong?
With your .transpose((1, 2, 0)) data, the correct form is:
"ijs,sk" # -> ijk
Since for tensors A and B we can write:
C_{ijk} = Σ_s A_{ijs} * B_{sk}
If you want to avoid transposing your data beforehand, you can just permute the indices:
"sij,sk" # -> ijk
To verify:
p, q, r = 2466, 2498, 9
a = np.random.randint(255, size=(p, q, r))
b = np.random.randint(255, size=(p, p))
c1 = a.transpose((1, 2, 0)) @ b
c2 = np.einsum("sij,sk", a, b)
>>> np.all(c1 == c2)
True
The number of multiplications needed to compute this for (p, q, r)-shaped data is p * np.prod(c1.shape) == p * (q * r * p) == p**2 * q * r. In your case, that is 136_716_549_192 multiplications. You also need approximately the same number of additions, so that gives us close to 270 billion operations. If you want more speed, you could consider running the computation on a GPU via cupy.
import numpy as np
import cupy as cp  # drop-in GPU replacement for numpy
from timeit import timeit

def with_np():
    p, q, r = 2466, 2498, 9
    a = np.random.randint(255, size=(p, q, r))
    b = np.random.randint(255, size=(p, p))
    c1 = a.transpose((1, 2, 0)) @ b
    c2 = np.einsum("sij,sk", a, b)

def with_cp():
    p, q, r = 2466, 2498, 9
    a = cp.random.randint(255, size=(p, q, r))
    b = cp.random.randint(255, size=(p, p))
    c1 = a.transpose((1, 2, 0)) @ b
    c2 = cp.einsum("sij,sk", a, b)
>>> timeit(with_np, number=1)
513.066
>>> timeit(with_cp, number=1)
0.197
That's a speedup of 2600, including memory allocation, initialization, and CPU/GPU copy times! (A more realistic benchmark would give an even larger speedup.)
There are different ways to do this product:
# as you already suggested:
data.transpose([1, 2, 0]) @ correlation_matrix
# using einsum
np.einsum('ijk,il', data, correlation_matrix)
# using tensordot to explicitly specify the axes to sum over
np.tensordot(data, correlation_matrix, axes=(0,0))
All of them should give the same result. The timing for some small matrices was more or less the same for me. So your problem is the large amount of data, not an inefficient implementation.
import numpy as np
from timeit import timeit

A = np.arange(100*120*9).reshape((100, 120, 9))
B = np.arange(100**2).reshape((100, 100))
timeit('A.transpose([1,2,0]) @ B', globals=globals(), number=100)
# 0.747475513999234
timeit("np.einsum('ijk,il', A, B)", globals=globals(), number=100)
# 0.4993825999990804
timeit('np.tensordot(A, B, axes=(0,0))', globals=globals(), number=100)
# 0.5872082839996438
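As a quick consistency check (a sketch, not from the original answers), the three variants can be compared on the small arrays above:
c1 = A.transpose([1, 2, 0]) @ B
c2 = np.einsum('ijk,il', A, B)
c3 = np.tensordot(A, B, axes=(0, 0))
print(np.allclose(c1, c2) and np.allclose(c1, c3))
# True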

Complex matrix multiplication with tensorflow-backend of Keras

Let matrix F1 have a shape of (a * h * w * m), matrix F2 a shape of (a * h * w * n), and matrix G a shape of (a * m * n).
I want to implement the following formula, which calculates each entry of G from entries of F1 and F2, using the tensorflow backend of Keras. However, I am confused by the various backend functions, especially K.dot() and K.batch_dot().
$$ G_{k, i, j} = \sum^h_{s=1} \sum^w_{t=1} \dfrac{F^1_{k, s, t, i} \cdot F^2_{k, s, t, j}}{h \cdot w} $$
Is there any way to implement the above formula? Thank you in advance.
Using Tensorflow tf.einsum() (which you could wrap in a Lambda layer for Keras):
import tensorflow as tf
import numpy as np

a, h, w, m, n = 1, 2, 3, 4, 5
F1 = tf.random_uniform(shape=(a, h, w, m))
F2 = tf.random_uniform(shape=(a, h, w, n))
G = tf.einsum('ahwm,ahwn->amn', F1, F2) / (h * w)

with tf.Session() as sess:
    f1, f2, g = sess.run([F1, F2, G])

# Manually computing G to check our operation, reproducing naively your equation:
g_check = np.zeros(shape=(a, m, n))
for k in range(a):
    for i in range(m):
        for j in range(n):
            for s in range(h):
                for t in range(w):
                    g_check[k, i, j] += f1[k, s, t, i] * f2[k, s, t, j] / (h * w)

# Checking for equality:
print(np.allclose(g, g_check))
# > True
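If you specifically want the Keras backend functions from the question, an equivalent formulation (a sketch under stated assumptions, not tested here) flattens the h and w axes and contracts them with K.batch_dot:
from keras import backend as K
F1r = K.reshape(F1, (-1, h * w, m))               # (a, h*w, m)
F2r = K.reshape(F2, (-1, h * w, n))               # (a, h*w, n)
G = K.batch_dot(F1r, F2r, axes=(1, 1)) / (h * w)  # (a, m, n)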

Fastest way to create a sparse matrix of the form A.T * diag(b) * A + C?

I'm trying to optimize a piece of code that solves a large sparse nonlinear system using an interior point method. During the update step, this involves computing the Hessian matrix H, the gradient g, then solving for d in H * d = -g to get the new search direction.
The Hessian matrix has a symmetric tridiagonal structure of the form:
A.T * diag(b) * A + C
I've run line_profiler on the particular function in question:
Line # Hits Time Per Hit % Time Line Contents
==================================================
386 def _direction(n, res, M, Hsig, scale_var, grad_lnprior, z, fac):
387
388 # gradient
389 44 1241715 28220.8 3.7 g = 2 * scale_var * res - grad_lnprior + z * np.dot(M.T, 1. / n)
390
391 # hessian
392 44 3103117 70525.4 9.3 N = sparse.diags(1. / n ** 2, 0, format=FMT, dtype=DTYPE)
393 44 18814307 427597.9 56.2 H = - Hsig - z * np.dot(M.T, np.dot(N, M)) # slow!
394
395 # update direction
396 44 10329556 234762.6 30.8 d, fac = my_solver(H, -g, fac)
397
398 44 111 2.5 0.0 return d, fac
Looking at the output it's clear that constructing H is by far the most costly step - it takes considerably longer than actually solving for the new direction.
Hsig and M are both CSC sparse matrices, n is a dense vector and z is a scalar. The solver I'm using requires H to be either a CSC or CSR sparse matrix.
Here's a function that produces some toy data with the same formats, dimensions and sparseness as my real matrices:
import numpy as np
from scipy import sparse
def make_toy_data(nt=200000, nc=10):
    d0 = np.random.randn(nc * (nt - 1))
    d1 = np.random.randn(nc * (nt - 1))
    M = sparse.diags((d0, d1), (0, nc), shape=(nc * (nt - 1), nc * nt),
                     format='csc', dtype=np.float64)
    d0 = np.random.randn(nc * nt)
    Hsig = sparse.diags(d0, 0, shape=(nc * nt, nc * nt), format='csc',
                        dtype=np.float64)
    n = np.random.randn(nc * (nt - 1))
    z = np.random.randn()
    return Hsig, M, n, z
And here's my original approach for constructing H:
def original(Hsig, M, n, z):
    N = sparse.diags(1. / n ** 2, 0, format='csc')
    H = - Hsig - z * np.dot(M.T, np.dot(N, M))  # slow!
    return H
Timing:
%timeit original(Hsig, M, n, z)
# 1 loops, best of 3: 483 ms per loop
Is there a faster way to construct this matrix?
I get close to a 4x speed-up in computing the product M.T * D * M out of the three diagonal arrays. If d0 and d1 are the main and upper diagonal of M, and d is the main diagonal of D, then the following code creates M.T * D * M directly:
def make_tridi_bis(d0, d1, d, nc=10):
    d00 = d0*d0*d
    d11 = d1*d1*d
    d01 = d0*d1*d
    len_ = d0.size
    data = np.empty((3*len_ + nc,))
    indices = np.empty((3*len_ + nc,), dtype=np.int)
    # Fill main diagonal
    data[:2*nc:2] = d00[:nc]
    indices[:2*nc:2] = np.arange(nc)
    data[2*nc+1:-2*nc:3] = d00[nc:] + d11[:-nc]
    indices[2*nc+1:-2*nc:3] = np.arange(nc, len_)
    data[-2*nc+1::2] = d11[-nc:]
    indices[-2*nc+1::2] = np.arange(len_, len_ + nc)
    # Fill top diagonal
    data[1:2*nc:2] = d01[:nc]
    indices[1:2*nc:2] = np.arange(nc, 2*nc)
    data[2*nc+2:-2*nc:3] = d01[nc:]
    indices[2*nc+2:-2*nc:3] = np.arange(2*nc, len_ + nc)
    # Fill bottom diagonal
    data[2*nc:-2*nc:3] = d01[:-nc]
    indices[2*nc:-2*nc:3] = np.arange(len_ - nc)
    data[-2*nc::2] = d01[-nc:]
    indices[-2*nc::2] = np.arange(len_ - nc, len_)
    indptr = np.empty((len_ + nc + 1,), dtype=np.int)
    indptr[0] = 0
    indptr[1:nc+1] = 2
    indptr[nc+1:len_+1] = 3
    indptr[-nc:] = 2
    np.cumsum(indptr, out=indptr)
    return sparse.csr_matrix((data, indices, indptr), shape=(len_+nc, len_+nc))
If your matrix M is in CSR format, you can extract d0 and d1 as d0 = M.data[::2] and d1 = M.data[1::2]. I modified your toy data routine to return those arrays as well, and here's what I get:
In [90]: np.allclose((M.T * sparse.diags(d, 0) * M).A, make_tridi_bis(d0, d1, d).A)
Out[90]: True
In [92]: %timeit make_tridi_bis(d0, d1, d)
10 loops, best of 3: 124 ms per loop
In [93]: %timeit M.T * sparse.diags(d, 0) * M
1 loops, best of 3: 501 ms per loop
The whole purpose of the above code is to take advantage of the structure of the non-zero entries. If you draw a diagram of the matrices you are multiplying together, it is relatively easy to convince yourself that the main (d_0) and top and bottom (d_1) diagonals of the resulting tridiagonal matrix are simply:
d_0 = np.zeros((len_ + nc,))
d_0[:len_] = d00
d_0[-len_:] += d11
d_1 = d01
The rest of the code in that function is simply building the tridiagonal matrix directly, as calling sparse.diags with the above data is several times slower.
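Tying this back to the original problem (a sketch; it assumes M keeps exactly the two-diagonal layout produced by make_toy_data, so the CSR data array interleaves the two diagonals):
Mr = M.tocsr()
d0, d1 = Mr.data[::2], Mr.data[1::2]  # main and upper diagonals of M
d = 1. / n ** 2                       # diagonal of N
H = -Hsig - z * make_tridi_bis(d0, d1, d)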
I tried running your test case and had problems with np.dot(N, M). I didn't dig into it, but I think my numpy/scipy combination (both pretty new) has problems using np.dot on sparse matrices.
But H = -Hsig - z*M.T.dot(N.dot(M)) runs just fine; this uses the sparse dot method.
I haven't run a profile, but here are IPython timings for several parts. It takes longer to generate the data than to do that double dot.
In [37]: timeit Hsig,M,n,z=make_toy_data()
1 loops, best of 3: 2 s per loop
In [38]: timeit N = sparse.diags(1. / n ** 2, 0, format='csc')
1 loops, best of 3: 377 ms per loop
In [39]: timeit H = -Hsig - z*M.T.dot(N.dot(M))
1 loops, best of 3: 1.55 s per loop
H is a
<2000000x2000000 sparse matrix of type '<type 'numpy.float64'>'
with 5999980 stored elements in Compressed Sparse Column format>