Odd behavior of ode in Scilab: the line equation dy/dx = A is not solved properly

I am still learning Scilab (5.5.2), so I am writing and running test codes to familiarize myself with the software.
To test the numerical differential equation solver, I started with the simple equation dy/dx = A, whose solution is y = Ax + c (a straight line).
This is the code I wrote:
// Function y = A*x + 1
function ydot=fn(x, A)
    ydot = A
endfunction
A = 2;
// Initial conditions
x0 = 0;
y0 = A*x0 + 1;
// Numerical solution
x = [0:5];
y = ode(y0, x0, x, fn);
// Analytical solution
y2 = A*x + 1;
clf(); plot(x, y); plot(x, y2, '-k');
//End
And these are the unexpected results:
y  = 1.   2.7182824   7.3890581   20.085545   54.598182   148.41327
y2 = 1. 3. 5. 7. 9. 11.
It appears that y = e^x. Can someone explain what is going wrong, or what I did wrong?

Just renaming the variables does not change how they are used internally by the ODE solver. Since the solver expects a function with the signature (time, state), it interprets the arguments of the provided function that way, regardless of what they are called.
Renaming the variables back, what you programmed is equivalent to
function ydot=fn(t, y)
    ydot = y
endfunction
which indeed has the exponential function as its solution.
From the manual you can see that the way to include parameters is to pass the function as a list:
The f argument can also be a list with the following structure: lst=list(realf,u1,u2,...un) where realf is a Scilab function with syntax: ydot = f(t,y,u1,u2,...,un)
function ydot=fn(t, y, A)
    ydot = A
endfunction
y = ode(y0, x0, x, list(fn, A));

Related

Getting "ValueError: data type <class 'numpy.object_'> not inexact" error while trying to linear fit a dataset using uncertainities

I am very new to Python, so I am struggling a lot with what I want to do, and I figured I could ask.
I have an Excel sheet with data columns like period, pdot, flux values, etc., along with error columns associated with them. I want to plot these in Python and do a linear fit that takes the errors into account, then obtain quantities like the standard deviation or p-value to judge the goodness of the fit, and finally use the fit to predict values based on a missing parameter. I managed to do this without the errors, but now that I am trying to propagate my errors through the calculation it is causing me some problems.
My working code, which does not take the errors into account, is as follows:
dist_array1= np.multiply(3.08567758128*10**21,dist_array)
dist_array2 = np.multiply(dist_array1,dist_array1)
e1=np.multiply(4*math.pi,dist_array2)
L_gamma = np.multiply(e1,flux_array)
Gamma_Eff = np.divide(L_gamma,edot_array)
Tau = np.divide(period_array,pdot_array)
constant = 2.94*10**8
t1=np.power(period_array,-5)
t2=np.multiply(t1,pdot_array)
t3=np.power(t2,1/2)
B_LC = np.multiply(constant,t3)
c1=np.multiply(10**15,pdot_array)
c2=np.log(c1)
c3=np.log(period_array)
c4=1-np.multiply(11/7,c3)+np.multiply(4/7,c2)
c5=3.56-c3-c2
Zeta1=1+np.divide(c4,c5)
c6=0.8-np.multiply(2/7,c3)+np.multiply(2/7,c2)
Zeta2=1+np.divide(c6,1.3)
c8=0.6-np.multiply(11/14,c3)+np.multiply(2/7,c2)
Zeta3=1+np.divide(c8,1.3)
#Here i defined my variables that i will work with, now i will try to fit it.
x1 = np.log(period_array)
y1 = np.log(Gamma_Eff)
coef1, V1 = np.polyfit(x1,y1,1, cov=True)
poly1d_fn1 = np.poly1d(coef1)
fig, (ax1, ax2, ax3) = plt.subplots(1, 3,figsize=(30,10))
fig.suptitle('Figure 1')
ax1.plot(x1,y1, 'yo', x1, poly1d_fn1(x1), '-k')
x2 = np.log(Tau)
coef2, V2 = np.polyfit(x2,y1,1, cov=True)
poly1d_fn2 = np.poly1d(coef2)
ax2.plot(x2,y1, 'yo', x2, poly1d_fn2(x2), '-k')
x3= np.log(B_LC)
coef3, V3 = np.polyfit(x3,y1,1, cov=True)
poly1d_fn3 = np.poly1d(coef3)
ax3.plot(x3,y1, 'yo', x3, poly1d_fn3(x3), '-k')
ax1.set(xlabel='log P (s)', ylabel='log η')
ax2.set(xlabel='log τ (yr)', ylabel='log η')
ax3.set(xlabel='log B_LC (G)', ylabel='log η')
#And then obtain the uncertainities
sigma_period_1=np.sqrt(V1[0][0])
sigma_period_2=np.sqrt(V1[1][1])
sigma_Tau_1=np.sqrt(V2[0][0])
sigma_Tau_2=np.sqrt(V2[1][1])
sigma_B_LC_1=np.sqrt(V3[0][0])
sigma_B_LC_2=np.sqrt(V3[1][1])
Now this works well and I can do the fits; the problem is that I cannot get things like the p-value or the standard deviation out of the fit, and I think I need statsmodels for that. I also need to put the errors into the formulas to be more accurate. What I have changed so far to do this is as follows:
period_array = unumpy.uarray(period_array, perioderr_array)  # Combine the value and its error so the error gets propagated
pdot_array = unumpy.uarray(pdot_array, pdoterr_array)  # Same thing for the second quantity with an error
flux_array = unumpy.uarray(flux_array, flux_err_array)  # Same thing for the third
c2 = unumpy.log(c1)  # Had to use unumpy instead of np here because the log function raised errors otherwise
c3 = unumpy.log(period_array)  # Same thing
Then I tried to fit using polyfit to see if it works; after that I will try to get the same fit with statsmodels.
x1 = unumpy.log(period_array) #log issue again
y1 = unumpy.log(Gamma_Eff)
coef1, V1 = np.polyfit(x1,y1,1, cov=True)
The last line gives me the error "ValueError: data type <class 'numpy.object_'> not inexact". I did some digging and understood the problem as "my values are not floats, which is why I am getting the error, so I need to turn them into floats". To do this I tried many things, including x = list(x), but to no avail.
So what am I doing wrong?
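One possible way around the dtype error (a sketch, not a verified fix for this dataset): np.polyfit cannot handle object arrays, but the uncertainties package can hand back plain float arrays via unumpy.nominal_values and unumpy.std_devs, which polyfit (optionally weighted by the inverse errors) and statsmodels can then digest. The small arrays below are made-up stand-ins for the real period and Gamma_Eff columns.
import numpy as np
from uncertainties import unumpy
import statsmodels.api as sm
# Made-up stand-ins for the real data columns
period_array = unumpy.uarray([1.2, 2.3, 3.1, 4.0], [0.1, 0.2, 0.1, 0.3])
Gamma_Eff = unumpy.uarray([0.5, 1.9, 4.2, 7.5], [0.05, 0.1, 0.2, 0.4])
x1 = unumpy.log(period_array)  # object arrays carrying value +/- error
y1 = unumpy.log(Gamma_Eff)
# Split into plain float arrays that polyfit understands
x_nom = unumpy.nominal_values(x1)
y_nom = unumpy.nominal_values(y1)
y_err = unumpy.std_devs(y1)
# Weight each point by 1/sigma so points with larger errors count less
coef1, V1 = np.polyfit(x_nom, y_nom, 1, w=1.0/y_err, cov=True)
# statsmodels gives standard errors and p-values for the same linear fit
ols_res = sm.OLS(y_nom, sm.add_constant(x_nom)).fit()
print(ols_res.params, ols_res.bse, ols_res.pvalues)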

Is there a way to vectorize linalg.expm

To follow up on this question
I am new to Python, and I am trying to calculate the matrix exponential of a matrix-scalar product using vectorization (if possible).
What I did:
import numpy as np
import scipy.linalg

n = 10
t_ = np.arange(1, n+1)*5*np.pi/n
a_11, a_12, a_21, a_22 = 0, 1, -1, -1
x_0, v_0 = 1, 1
A = np.array([[a_11, a_12], [a_21, a_22]])
A_ = np.array([A for k in range(1, n+1, 1)])
X_0 = np.array([[x_0], [v_0]])  # build X_0
print(A)
x_ = scipy.linalg.expm(t_[:, None, None]*A[None, :, :])*X_0
I get the following error within linalg.expm:
ValueError: expected a square matrix
Any help is much appreciated.
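A straightforward workaround (just a sketch, trading vectorization for a plain Python loop): scipy.linalg.expm expects a single square matrix, so you can build the stack one time point at a time, and use @ rather than elementwise * if the goal is the matrix-vector product with X_0.
import numpy as np
from scipy.linalg import expm
# One 2x2 matrix exponential per time point, applied to the initial state X_0
x_ = np.array([expm(t * A) @ X_0 for t in t_])  # shape (n, 2, 1)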

Julia PyPlot can't create quadratic function

I'm trying to learn to plot things with Julia using PyPlot, and I tried to plot a quadratic function. It does not like how I'm squaring x. I tried using x**2 and x*x, and the compiler did not accept those either. What should I be using to square x?
Thanks
Code (line 7):
x1 = linspace(0,4*pi, 500); y1 = x^2
Error:
LoadError: MethodError: `*` has no method matching *(::LinSpace{Float64}, ::LinSpace{Float64})
Closest candidates are:
  *(::Any, ::Any, !Matched::Any, !Matched::Any...)
  *{T}(!Matched::Bidiagonal{T}, ::AbstractArray{T,1})
  *(!Matched::Number, ::AbstractArray{T,N})
  ...
 in power_by_squaring at intfuncs.jl:80
 in ^ at intfuncs.jl:108
 in include_string at loading.jl:282
 in include_string at C:\Users\User\.julia\v0.4\CodeTools\src\eval.jl:32
 in anonymous at C:\Users\User\.julia\v0.4\Atom\src\eval.jl:84
 in withpath at C:\Users\User\.julia\v0.4\Requires\src\require.jl:37
 in withpath at C:\Users\User\.julia\v0.4\Atom\src\eval.jl:53
 [inlined code] from C:\Users\User\.julia\v0.4\Atom\src\eval.jl:83
 in anonymous at task.jl:58
while loading C:\Users\User\Desktop\Comp Sci\Class\plotTest, in expression starting on line 7
You are trying to square all of the elements of an array, so you need the element-wise version x.^2; in the code above that would be y1 = x1.^2.

Numpy - AttributeError: 'Zero' object has no attribute 'exp'

I'm having trouble with a discrepancy: something breaks at runtime, but running the exact same data and operations in the Python console afterwards works fine.
# f_err - currently has value 1.11819388872025
# l_scales - currently a numpy array [1.17840183376334 1.13456764589809]
sq_euc_dists = self.se_term(x1, x2, l_scales) # this is fine. It calls cdists on x1/l_scales, x2/l_scales vectors
return (f_err**2) * np.exp(-0.5 * sq_euc_dists) # <-- errors on this line
The error that I get is
AttributeError: 'Zero' object has no attribute 'exp'
However, calling those exact same lines, with the same f_err, l_scales, and x1, x2 in the console right after it errors out, somehow does not produce errors.
I was not able to find a post referring to the 'Zero' object error specifically, and the non-'Zero' ones I found didn't seem to apply to my case here.
EDIT: The question was a bit lacking in info, so here is an actual (extracted) runnable example with sample data taken straight out of a failed run. When run in isolation it works fine; I can't reproduce the error outside of the actual runtime.
Note that the sqeucl_dist function below is quite bad and I should be using scipy's cdist instead. However, because I'm using sympy symbols for element-wise matrix gradients with over 15 partial derivatives in my real data, cdist is not an option, as it doesn't deal with arbitrary objects.
import numpy as np

def se_term(x1, x2, l):
    return sqeucl_dist(x1/l, x2/l)

def sqeucl_dist(x, xs):
    return np.sum([(i-j)**2 for i in x for j in xs], axis=1).reshape(x.shape[0], xs.shape[0])

x = np.array([[-0.29932052, 0.40997373], [0.40203481, 2.19895326], [-0.37679417, -1.11028267], [-2.53012051, 1.09819485], [0.59390005, 0.9735], [0.78276777, -1.18787904], [-0.9300892, 1.18802775], [0.44852545, -1.57954101], [1.33285028, -0.58594779], [0.7401607, 2.69842268], [-2.04258086, 0.43581565], [0.17353396, -1.34430191], [0.97214259, -1.29342284], [-0.11103534, -0.15112815], [0.41541759, -1.51803154], [-0.59852383, 0.78442389], [2.01323359, -0.85283772], [-0.14074266, -0.63457529], [-0.49504797, -1.06690869], [-0.18028754, -0.70835799], [-1.3794126, 0.20592016], [-0.49685373, -1.46109525], [-1.41276934, -0.66472598], [-1.44173868, 0.42678815], [0.64623684, 1.19927771], [-0.5945761, -0.10417961]])

f_err = 1.11466725760716
l = [1.18388412685279, 1.02290811104357]

result = (f_err**2) * np.exp(-0.5 * se_term(x, x, l))  # This runs fine, but fails with the exact same calls and data during runtime
Any help greatly appreciated!
Here is how to reproduce the error you are seeing:
import sympy
import numpy
zero = sympy.sympify('0')
numpy.exp(zero)
You will see the same exception you are seeing.
You can fix this (inefficiently) by changing your code to the following to make things floating point.
def sqeucl_dist(x, xs):
    return np.sum([np.vectorize(float)(i-j)**2 for i in x for j in xs],
                  axis=1).reshape(x.shape[0], xs.shape[0])
It would be better to fix your gradient function using lambdify.
Here's an example of how lambdify can be used on partial derivatives:
import sympy
from sympy.abc import x, y, z

expression = x**2 + sympy.sin(y) + z
derivatives = [expression.diff(var, 1) for var in [x, y, z]]
derivatives is now [2*x, cos(y), 1], a list of Sympy expressions. To create a function which will evaluate this numerically at a particular set of values, we use lambdify as follows (passing 'numpy' as an argument like that means to use numpy.cos rather than sympy.cos):
derivative_calc = sympy.lambdify((x, y, z), derivatives, 'numpy')
Now derivative_calc(1, 2, 3) will return [2, -0.41614683654714241, 1]. These are ints and numpy.float64s.
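A small follow-up sketch (arbitrary sample values): because the 'numpy' backend was requested, the lambdified function also accepts NumPy arrays directly, so the whole gradient evaluation stays in floating point instead of producing sympy objects like the Zero above.
import numpy as np
xs = np.array([0.5, 1.0, 1.5])
ys = np.array([0.1, 0.2, 0.3])
zs = np.array([1.0, 2.0, 3.0])
# Roughly [array([1., 2., 3.]), array([0.995, 0.980, 0.955]), 1]
print(derivative_calc(xs, ys, zs))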
A side note: np.exp(M) calculates the element-wise exponential of each of the elements of M. If you are trying to compute a matrix exponential, you need scipy.linalg.expm.

Is there a way to easily get the logarithm of an np.ndarray containing errors

If I have an np.array of values, Y, with an np.array of corresponding errors, Err, the error on the log scale will be
Err_log = log(Y + Err) - log(Y) = log((Y + Err)/Y)
While I can put this directly in my code, it isn't very readable. Is there a function that does this?
NumPy has the function log1p(x) that computes the log of 1+x. So you could write:
Err_log = np.log1p(Err/Y)
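For concreteness, a tiny usage sketch with made-up numbers; log1p also avoids the loss of precision that plain log((Y + Err)/Y) would suffer when Err/Y is very small.
import numpy as np
Y = np.array([10.0, 100.0, 1000.0])  # made-up values
Err = np.array([1.0, 5.0, 50.0])     # their absolute errors
Err_log = np.log1p(Err / Y)          # same as np.log((Y + Err) / Y)
print(Err_log)                       # [0.09531018 0.04879016 0.04879016]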