SAGEMATH: SyntaxError: keyword can't be an expression

At the end I wanted to substitute qij^2 = qij for i = 1, 2 and j = 1, 2, 3, 4,
and got the error:
SyntaxError: keyword can't be an expression
The code:
sage: f1 = 2*x1 + x2 + 1
sage: f2 = -x1 + 5*x2 - 2
sage: e1 = expand(f1^2)
sage: e2 = expand(f2^2)
sage: var('q11, q12, q13, q14, q21, q22, q23, q24')
(q11, q12, q13, q14, q21, q22, q23, q24)
sage: E1 = e1.substitute(x1=q11+(2*q12)-q13-(2*q14), x2=q21+(2*q22)-q23-(2*q24))
sage: E2 = e2.substitute(x1=q11+(2*q12)-q13-(2*q14), x2=q21+(2*q22)-q23-(2*q24))
sage: E1a = expand(E1)
sage: E2a = expand(E2)
sage: E1aa = E1a.substitute(q11^2=q11, q12^2=q12, q13^2=q13, q14^2=q14,
....: q21^2=q21, q22^2=q22, q23^2=q23, q24^2=q24)
sage: E2aa = E2a.substitute(q11^2=q11, q12^2=q12, q13^2=q13, q14^2=q14,
....: q21^2=q21, q22^2=q22, q23^2=q23, q24^2=q24)
sage: E1aa, E2aa

If you want to just substitute for a variable, you can use the = syntax: E1a.substitute(q11=(whatever)). If, though, you want to substitute for a more complicated expression, you need to use a different syntax:
E1a.substitute(q11^2=q1) # will raise an error
E1a.substitute(q11^2==q1) # should work
E1a.substitute({q11^2: q1}) # should work
The help message produced by E1a.substitute? gives some examples.
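For example, the failing substitution above can be rewritten with the dictionary syntax (a sketch; E2aa is obtained the same way from E2a):
sage: E1aa = E1a.substitute({q11^2: q11, q12^2: q12, q13^2: q13, q14^2: q14,
....:                        q21^2: q21, q22^2: q22, q23^2: q23, q24^2: q24})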


divide by zero encountered in true_divide error without having zeros in my data

Here are my code, my data, and the output of the code. I've tried adding one to the values on the x axis, thinking that maybe such small values could be interpreted as zeros. I have no idea what true_divide could be, and I cannot explain this divide-by-zero error since there is not a single zero in my data; I checked all of my 2500 data points. Hoping that some of you could provide some clarification. Thanks in advance.
import pandas as pd
import matplotlib.pyplot as plt
from iminuit import cost, Minuit
import numpy as np
frame = pd.read_excel('/Users/lorenzotecchia/Desktop/Analisi Laboratorio/Analisi dati/Quinta Esperienza/500Hz/F0000CH2.xlsx', 'F0000CH2')
data = pd.read_excel('/Users/lorenzotecchia/Desktop/Analisi Laboratorio/Analisi dati/Quinta Esperienza/500Hz/F0000CH1.xlsx', 'F0000CH1')
# tempi_500Hz = pd.DataFrame(frame,columns=['x'])
# Vout_500Hz = pd.DataFrame(frame,columns=['y'])
tempi_500Hz = pd.DataFrame(frame,columns=['x1'])
Vout_500Hz = pd.DataFrame(frame,columns=['y1'])
# Vin_500Hz = pd.DataFrame(data,columns=['y'])
def fit_esponenziale(x, α, β):
    return α * (1 - np.exp(-x / β))
plt.xlabel('ω(Hz)')
plt.ylabel('Attenuazioni')
plt.title('Fit Parabolico')
plt.scatter(tempi_500Hz, Vout_500Hz)
least_squares = cost.LeastSquares(tempi_500Hz, Vout_500Hz, np.sqrt(Vout_500Hz), fit_esponenziale)
m = Minuit(least_squares, α=0, β=0)
m.migrad()
m.hesse()
plt.errorbar(tempi_500Hz, Vout_500Hz, fmt="o", label="data")
plt.plot(tempi_500Hz, fit_esponenziale(tempi_500Hz, *m.values), label="fit")
fit_info = [
    f"$\\chi^2$ / $n_\\mathrm{{dof}}$ = {m.fval:.1f} / {len(tempi_500Hz) - m.nfit}",
]
for p, v, e in zip(m.parameters, m.values, m.errors):
    fit_info.append(f"{p} = ${v:.3f} \\pm {e:.3f}$")
plt.legend()
plt.show()
(Images attached to the original post: the output and an example of the data.)
There is actually a way to fit this completely linearly.
See e.g. here
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import cumtrapz
def fit_exp(x, a, b, c):
    return a * (1 - np.exp(-b * x)) + c
nn = 170
xl = np.linspace( 0, 0.001, nn )
yl = fit_exp( xl, 15, 5300, -8.1 ) + np.random.normal( size=nn, scale=0.05 )
"""
with y = a( 1- exp(-bx) ) + c
we have Y = int y = -1/b y + d x + h ....try it out or see below
so we get a linear equation for b (actually 1/b ) to optimize
this goes as:
"""
Yl = cumtrapz( yl, xl, initial=0 )
ST = [xl, yl, np.ones( nn ) ]
S = np.transpose( ST )
eta = np.dot( ST, Yl )
A = np.dot( ST, S )
sol = np.linalg.solve( A, eta )
bFit = -1/sol[1]
print( bFit )
"""
now we can do a normal linear fit
"""
ST = [ fit_exp(xl, 1, bFit, 0), np.ones( nn ) ]
S = np.transpose( ST )
A = np.dot( ST, S )
eta = np.dot( ST, yl )
aFit, cFit = np.linalg.solve( A, eta )
print( aFit, cFit )
print(aFit + cFit, sol[0] ) ### consistency check
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1 )
ax.plot(xl, yl, marker ='+', ls='' )
## at best a sufficient fit, at worst a good start for a non-linear fit
ax.plot(xl, fit_exp( xl, aFit, bFit, cFit ) )
plt.show()
"""
a ( 1 - exp(-b x)) + c = a + c - a exp(-b x) = d - a exp( -b x )
int y = d x + a/b exp( -b x ) + g
= d x +a/b exp( -b x ) + a/b - a/b + c/b - c/b + g
= d x - 1/b ( a - a exp( -b x ) + c ) + c/b + a/b + g
= d x + k y + h
with k = -1/b and h = g + c/b + a/b.
d and h are fitted but not used, but as a+c = d we can check
for consistency
"""
Here is a working Minuit vs. curve_fit example. I scaled the function such that the decay constant in the exponential is of order 1 (generally a good idea for non-linear fits). In the end, both methods give very similar results.
Note: I leave it open whether the error makes sense like this or not. Starting values equal to zero were definitely a bad idea: with β = 0, the model evaluates -x / β, which is where the true_divide division-by-zero comes from.
import pandas as pd
import matplotlib.pyplot as plt
from iminuit import cost, Minuit
from scipy.optimize import curve_fit
import numpy as np
def fit_exp(x, a, b, c):
    return a * (1 - np.exp(-1000 * b * x)) + c
nn = 170
xl = np.linspace( 0, 0.001, nn )
yl = fit_exp( xl, 15, 5.3, -8.1 ) + np.random.normal( size=nn, scale=0.05 )
#######################
### Minuit
#######################
least_squares = cost.LeastSquares(xl, yl, np.sqrt( np.abs( yl ) ), fit_exp )
print(least_squares)
m = Minuit(least_squares, a=1, b=5, c=-7)
print( "grad: ")
print( m.migrad() ) ### needs to be called to get fit values
print( m.values )### gives slightly different output
print("Hesse:")
print( m.hesse() )
#######################
### curve_fit
#######################
opt, cov = curve_fit(
    fit_exp, xl, yl, sigma=np.sqrt(np.abs(yl)),
    absolute_sigma=True
)
print( " curve_fit: ")
print( opt )
print( " covariance ")
print( cov )
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1 )
ax.plot(xl, yl, marker ='+', ls='' )
ax.plot(xl, fit_exp(xl, *m.values), ls="--")
ax.plot(xl, fit_exp(xl, *opt), ls=":")
plt.show()

probability of sample of distribution

I am trying to generate a sample of 100 scenarios (X, Y) where X and Y are both normally distributed, X ~ N(50, 5^2) and Y ~ N(30, 2^2), and X and Y are correlated with Cov(X, Y) = 0.4.
I have been able to generate the 100 scenarios with a Cholesky decomposition:
using Distributions, LinearAlgebra
# We do a Cholesky decomposition to generate correlated scenarios
nScenarios = 100
Σ = [25 0.4; 0.4 4]
μ = [50, 30]
L = cholesky(Σ)
v = [rand(Normal(0, 1), nScenarios), rand(Normal(0, 1), nScenarios)]
X = reshape(zeros(nScenarios), 1, nScenarios)
Y = reshape(zeros(nScenarios), 1, nScenarios)
for i = 1:nScenarios
    # use the lower factor L.L (Σ = L.L * L.L') so the sample covariance matches Σ
    X[1, i] = sum(L.L[1, j] * v[j][i] for j = 1:2) + μ[1]
    Y[1, i] = sum(L.L[2, j] * v[j][i] for j = 1:2) + μ[2]
end
However, I need the probability of each scenario, i.e. P(X = k and Y = p). My question would be: how can we get a sample of a certain distribution together with the probability of each scenario?
Following BatWannaBe's explanation (for a continuous distribution the probability of any single point is zero, so what one evaluates is the density), normally I would do it like this:
julia> using Distributions
julia> d = MvNormal([50.0, 30.0], [25.0 0.4; 0.4 4.0])
FullNormal(
dim: 2
μ: [50.0, 30.0]
Σ: [25.0 0.4; 0.4 4.0]
)
julia> point = rand(d)
2-element Vector{Float64}:
52.807189619051485
32.693811008760676
julia> pdf(d, point)
0.0056519503173830515
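As an aside, the same density evaluation can be sketched in Python with scipy.stats (an assumption that SciPy is acceptable here; multivariate_normal is its multivariate-normal interface):
from scipy.stats import multivariate_normal

# same mean vector and covariance matrix as the Julia MvNormal above
d = multivariate_normal(mean=[50.0, 30.0], cov=[[25.0, 0.4], [0.4, 4.0]])
point = d.rvs()      # draw one scenario (X, Y)
print(d.pdf(point))  # density at that point, not a probability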

question about methodology to solve equation in sage

So I'm trying to solve this equation in Sage: 3g - 2h = 1,
by finding a value for both variables so that the equation equals 1.
By hand it's g = 1, h = 1.
This is what I have so far:
var('g,h')
E = 3*g-2*h
1 == 3*g - 2*h
solve(E,c,d)
One way to go is as follows.
Define the symbolic variables g and h:
sage: g, h = SR.var('g, h')
Define the equation E as
sage: E = 3*g - 2*h == 1
Solve E for g and h:
sage: solve([E], [g, h])
[[g == 2/3*r1 + 1/3, h == r1]]
This returns a list of solutions
containing a single solution, which is
a list of possible values for g and h.
The solution returned involves a real parameter
(denoted by r1 here), and g and h are
expressed in terms of this parameter.
One can also get solutions as dictionaries:
sage: sols = solve([E], [g, h], solution_dict=True)
sage: sols
[{g: 2/3*r3 + 1/3, h: r3}]
sage: sol = sols[0]
sage: sol
{g: 2/3*r3 + 1/3, h: r3}
Recover the values of g and h in the solution:
sage: sol[g]
2/3*r3 + 1/3
sage: sol[h]
r3
Give a Python name to that parameter:
sage: r3, = sol[h].variables()
sage: r3
r3
Substitute some values:
sage: sol[g].subs({r3: 1}), sol[h].subs({r3: 1})
(1, 1)
sage: sol[g].subs({r3: -1}), sol[h].subs({r3: -1})
(-1/3, -1)
sage: sol[g].subs({r3: -2}), sol[h].subs({r3: -2})
(-1, -2)
sage: sol[g].subs({r3: 3}), sol[h].subs({r3: 3})
(7/3, 3)
sage: sol[g].subs({r3: 4}), sol[h].subs({r3: 4})
(3, 4)

Get the corner of a rectangle nearest to the origin in a batch of tensors, given two diagonal coordinates, in PyTorch

Let's say I have a PyTorch tensor holding a batch of rectangle coordinates, each given as two diagonally opposite corners, and I want to get the coordinate of the corner nearest to the origin. Coordinates are in (x1, y1, x2, y2) form.
a = torch.tensor([[3, 2, 2, 3], [1, 1, 2, 2]])
# expected output
[[2, 2], [1, 1]]
You can just iterate over all rows and, for each of them, calculate the squared distance to the four corners and take the corner with minimum distance.
import torch
a = torch.tensor([[3,2,2,3], [1,1,2,2]])
c = torch.zeros(a.shape[0], 2)
for idx, x in enumerate(a):
    d1 = x[0] ** 2 + x[1] ** 2
    d2 = x[2] ** 2 + x[3] ** 2
    d3 = x[0] ** 2 + x[3] ** 2
    d4 = x[2] ** 2 + x[1] ** 2
    dmin = min(d1, d2, d3, d4)
    if d1 == dmin:
        c[idx] = torch.tensor([x[0], x[1]])
    elif d2 == dmin:
        c[idx] = torch.tensor([x[2], x[3]])
    elif d3 == dmin:
        c[idx] = torch.tensor([x[0], x[3]])
    elif d4 == dmin:
        c[idx] = torch.tensor([x[2], x[1]])
print(c) # tensor([[2., 2.], [1., 1.]])
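A vectorized alternative (a sketch, not part of the original answer) builds all four candidate corners per row and picks the closest one with argmin:
import torch

a = torch.tensor([[3, 2, 2, 3], [1, 1, 2, 2]])
x1, y1, x2, y2 = a.unbind(dim=1)
# all four candidate corners per rectangle: shape (batch, 4, 2)
corners = torch.stack([
    torch.stack([x1, y1], dim=1),
    torch.stack([x2, y2], dim=1),
    torch.stack([x1, y2], dim=1),
    torch.stack([x2, y1], dim=1),
], dim=1)
# squared distance of each corner to the origin: shape (batch, 4)
d2 = (corners ** 2).sum(dim=2)
nearest = corners[torch.arange(a.shape[0]), d2.argmin(dim=1)]
print(nearest)  # tensor([[2, 2], [1, 1]])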

Fastest way to create a sparse matrix of the form A.T * diag(b) * A + C?

I'm trying to optimize a piece of code that solves a large sparse nonlinear system using an interior point method. During the update step, this involves computing the Hessian matrix H, the gradient g, then solving for d in H * d = -g to get the new search direction.
The Hessian matrix has a symmetric tridiagonal structure of the form:
A.T * diag(b) * A + C
I've run line_profiler on the particular function in question:
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
   386                                           def _direction(n, res, M, Hsig, scale_var, grad_lnprior, z, fac):
   387
   388                                               # gradient
   389        44      1241715  28220.8      3.7      g = 2 * scale_var * res - grad_lnprior + z * np.dot(M.T, 1. / n)
   390
   391                                               # hessian
   392        44      3103117  70525.4      9.3      N = sparse.diags(1. / n ** 2, 0, format=FMT, dtype=DTYPE)
   393        44     18814307 427597.9     56.2      H = - Hsig - z * np.dot(M.T, np.dot(N, M))  # slow!
   394
   395                                               # update direction
   396        44     10329556 234762.6     30.8      d, fac = my_solver(H, -g, fac)
   397
   398        44          111      2.5      0.0      return d, fac
Looking at the output it's clear that constructing H is by far the most costly step - it takes considerably longer than actually solving for the new direction.
Hsig and M are both CSC sparse matrices, n is a dense vector and z is a scalar. The solver I'm using requires H to be either a CSC or CSR sparse matrix.
Here's a function that produces some toy data with the same formats, dimensions and sparseness as my real matrices:
import numpy as np
from scipy import sparse
def make_toy_data(nt=200000, nc=10):
    d0 = np.random.randn(nc * (nt - 1))
    d1 = np.random.randn(nc * (nt - 1))
    M = sparse.diags((d0, d1), (0, nc), shape=(nc * (nt - 1), nc * nt),
                     format='csc', dtype=np.float64)
    d0 = np.random.randn(nc * nt)
    Hsig = sparse.diags(d0, 0, shape=(nc * nt, nc * nt), format='csc',
                        dtype=np.float64)
    n = np.random.randn(nc * (nt - 1))
    z = np.random.randn()
    return Hsig, M, n, z
And here's my original approach for constructing H:
def original(Hsig, M, n, z):
    N = sparse.diags(1. / n ** 2, 0, format='csc')
    H = - Hsig - z * np.dot(M.T, np.dot(N, M))  # slow!
    return H
Timing:
%timeit original(Hsig, M, n, z)
# 1 loops, best of 3: 483 ms per loop
Is there a faster way to construct this matrix?
I get close to a 4x speed-up in computing the product M.T * D * M out of the three diagonal arrays. If d0 and d1 are the main and upper diagonal of M, and d is the main diagonal of D, then the following code creates M.T * D * M directly:
def make_tridi_bis(d0, d1, d, nc=10):
    d00 = d0*d0*d
    d11 = d1*d1*d
    d01 = d0*d1*d
    len_ = d0.size
    data = np.empty((3*len_ + nc,))
    indices = np.empty((3*len_ + nc,), dtype=np.int64)  # np.int is gone in modern NumPy
    # Fill main diagonal
    data[:2*nc:2] = d00[:nc]
    indices[:2*nc:2] = np.arange(nc)
    data[2*nc+1:-2*nc:3] = d00[nc:] + d11[:-nc]
    indices[2*nc+1:-2*nc:3] = np.arange(nc, len_)
    data[-2*nc+1::2] = d11[-nc:]
    indices[-2*nc+1::2] = np.arange(len_, len_ + nc)
    # Fill top diagonal
    data[1:2*nc:2] = d01[:nc]
    indices[1:2*nc:2] = np.arange(nc, 2*nc)
    data[2*nc+2:-2*nc:3] = d01[nc:]
    indices[2*nc+2:-2*nc:3] = np.arange(2*nc, len_+nc)
    # Fill bottom diagonal
    data[2*nc:-2*nc:3] = d01[:-nc]
    indices[2*nc:-2*nc:3] = np.arange(len_ - nc)
    data[-2*nc::2] = d01[-nc:]
    indices[-2*nc::2] = np.arange(len_ - nc, len_)
    indptr = np.empty((len_ + nc + 1,), dtype=np.int64)
    indptr[0] = 0
    indptr[1:nc+1] = 2
    indptr[nc+1:len_+1] = 3
    indptr[-nc:] = 2
    np.cumsum(indptr, out=indptr)
    return sparse.csr_matrix((data, indices, indptr), shape=(len_+nc, len_+nc))
If your matrix M is in CSR format, you can extract d0 and d1 as d0 = M.data[::2] and d1 = M.data[1::2]. I modified your toy-data-making routine to return those arrays as well, and here's what I get:
In [90]: np.allclose((M.T * sparse.diags(d, 0) * M).A, make_tridi_bis(d0, d1, d).A)
Out[90]: True
In [92]: %timeit make_tridi_bis(d0, d1, d)
10 loops, best of 3: 124 ms per loop
In [93]: %timeit M.T * sparse.diags(d, 0) * M
1 loops, best of 3: 501 ms per loop
The whole purpose of the above code is to take advantage of the structure of the non-zero entries. If you draw a diagram of the matrices you are multiplying together, it is relatively easy to convince yourself that the main (d_0) and top and bottom (d_1) diagonals of the resulting tridiagonal matrix are simply:
d_0 = np.zeros((len_ + nc,))
d_0[:len_] = d00
d_0[-len_:] += d11
d_1 = d01
The rest of the code in that function is simply building the tridiagonal matrix directly, as calling sparse.diags with the above data is several times slower.
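As a quick sanity check (a sketch with small hypothetical sizes, independent of the toy data above), the claimed diagonals can be compared against the full sparse product:
import numpy as np
from scipy import sparse

nc, len_ = 3, 12
d0, d1, d = np.random.randn(len_), np.random.randn(len_), np.random.randn(len_)
M = sparse.diags((d0, d1), (0, nc), shape=(len_, len_ + nc), format='csr')
P = (M.T * sparse.diags(d, 0) * M).toarray()

d00, d11, d01 = d0 * d0 * d, d1 * d1 * d, d0 * d1 * d
d_0 = np.zeros(len_ + nc)
d_0[:len_] = d00
d_0[-len_:] += d11
print(np.allclose(np.diag(P), d_0))      # main diagonal matches
print(np.allclose(np.diag(P, nc), d01))  # upper diagonal matches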
I tried running your test case and had problems with the np.dot(N, M). I didn't dig into it, but I think my numpy/sparse combo (both pretty new) had problems using np.dot on sparse arrays.
But H = -Hsig - z*M.T.dot(N.dot(M)) runs just fine. This uses the sparse dot.
I haven't run a profile, but here are IPython timings for several parts. It takes longer to generate the data than to do that double dot product.
In [37]: timeit Hsig,M,n,z=make_toy_data()
1 loops, best of 3: 2 s per loop
In [38]: timeit N = sparse.diags(1. / n ** 2, 0, format='csc')
1 loops, best of 3: 377 ms per loop
In [39]: timeit H = -Hsig - z*M.T.dot(N.dot(M))
1 loops, best of 3: 1.55 s per loop
H is a
<2000000x2000000 sparse matrix of type '<type 'numpy.float64'>'
with 5999980 stored elements in Compressed Sparse Column format>