Time-Dependent 1D Schroedinger Equation using NumPy and SciPy solve_ivp

I am trying to solve the 1D time-dependent Schroedinger equation using finite-difference methods; this is how the equation looks and how it is discretized.
Say I have N spatial points (the index i of x_i runs from 0 to N-1), and suppose my time span contains K time points.
I want to end up with a K-by-N matrix, where each row j is the wavefunction at time t_j.
I suspect that my issue is that I am defining the system of coupled equations in the wrong way.
My boundary conditions are psi = 0 (or some constant) at the walls of the box, so I set the ODEs at the two ends of my x span to zero.
My Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Defining the length and the resolution of our x vector
length = 2*np.pi
delta_x = .01

# create a vector of X values, and the number of X values
def create_x_vector(length, delta_x):
    x = np.arange(-length, length, delta_x)
    N = len(x)
    return x, N

# create initial condition vector
def create_initial_cond(x, x0, Gausswidth):
    psi0 = np.exp((-(x-x0)**2)/Gausswidth)
    return psi0

# create the system of ODEs
def ode_system(psi, t, delta_x, N):
    psi_t = np.zeros(N)
    psi_t[0] = 0
    psi_t[N-1] = 0
    for i in range(1, N-1):
        psi_t[i] = (psi[i+1]-2*psi[i]+psi[i-1])/(delta_x)**2
    return psi_t

# Create the actual time, x and initial condition vectors using the functions
t = np.linspace(0, 15, 5000)
x, N = create_x_vector(length, delta_x)
psi0 = create_initial_cond(x, 0, 1)
psi = np.zeros(N)
psi= solve_ivp(ode_system(psi,t,delta_x,N),[0,15],psi0,method='Radau',max_step=0.1)
After running I get an error:
runfile('D:/Studies/Project/Simulation Test/Test2.py', wdir='D:/Studies/Project/Simulation Test')
Traceback (most recent call last):
File "<ipython-input-16-bff0a1fd9937>", line 1, in <module>
runfile('D:/Studies/Project/Simulation Test/Test2.py', wdir='D:/Studies/Project/Simulation Test')
File "C:\Users\Pasha\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 704, in runfile
execfile(filename, namespace)
File "C:\Users\Pasha\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "D:/Studies/Project/Simulation Test/Test2.py", line 35, in <module>
psi= solve_ivp(ode_system(psi,t,delta_x,N),[0,15],psi0,method='Radau',max_step=0.1)
File "C:\Users\Pasha\Anaconda3\lib\site-packages\scipy\integrate\_ivp\ivp.py", line 454, in solve_ivp
solver = method(fun, t0, y0, tf, vectorized=vectorized, **options)
File "C:\Users\Pasha\Anaconda3\lib\site-packages\scipy\integrate\_ivp\radau.py", line 288, in __init__
self.f = self.fun(self.t, self.y)
File "C:\Users\Pasha\Anaconda3\lib\site-packages\scipy\integrate\_ivp\base.py", line 139, in fun
return self.fun_single(t, y)
File "C:\Users\Pasha\Anaconda3\lib\site-packages\scipy\integrate\_ivp\base.py", line 21, in fun_wrapped
return np.asarray(fun(t, y), dtype=dtype)
TypeError: 'numpy.ndarray' object is not callable
On a more general note, how can I make Python solve N ODEs without manually defining each and every one of them?
I want to have a big vector called xdot, where each cell in the vector is a function of some of the X[i]'s, and I can't seem to get that to work. Or maybe my approach is completely wrong?
Also, I suspect the vectorized argument of solve_ivp may be relevant here, but I do not understand the explanation of it in the SciPy documentation.

The problem is that solve_ivp expects a function as its first parameter, while you passed ode_system(psi,t,delta_x,N), which calls the function and hands solve_ivp the resulting array instead (hence the TypeError: an ndarray is not callable).
You need to give solve_ivp a function that accepts two arguments, t and y (which in your case is psi). It can be done like this:
def temp_function(t, psi):
    return ode_system(psi, t, delta_x, N)
and then, your last line should be:
psi= solve_ivp(temp_function,[0,15],psi0,method='Radau',max_step=0.1)
This code solved the problem for me.
For a shorter way of doing this, you can also just write the function inline using a lambda:
psi= solve_ivp(lambda t,psi : ode_system(psi,t,delta_x,N),[0,15],psi0,method='Radau',max_step=0.1)
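If you also want to avoid the explicit Python loop over grid points (the "solve N ODEs without defining each one" part of the question), the same right-hand side can be written with NumPy slicing. Below is a minimal, self-contained sketch of that idea; the grid spacing, time span and Gaussian width are illustrative stand-ins for the values above, and the RHS mirrors the question's real-valued finite-difference expression rather than the full complex Schroedinger operator:

import numpy as np
from scipy.integrate import solve_ivp

delta_x = 0.01
x = np.arange(-2*np.pi, 2*np.pi, delta_x)
psi0 = np.exp(-x**2)                      # Gaussian initial condition

def rhs(t, psi):
    dpsi = np.zeros_like(psi)
    # second-order central difference on the interior points only
    dpsi[1:-1] = (psi[2:] - 2*psi[1:-1] + psi[:-2]) / delta_x**2
    # dpsi[0] and dpsi[-1] stay zero, keeping the boundary values fixed
    return dpsi

sol = solve_ivp(rhs, [0, 15], psi0, method='Radau', max_step=0.1)
# sol.y has shape (N, K), one column per accepted time step,
# so sol.y.T gives the K-by-N layout described in the question.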

Related

Optimization (scipy.optimize) L-BFGS-B wrapper args treating array elements as one variable

I am unable to understand the source of this error:
line 327, in function_wrapper
return function(*(wrapper_args + args))
TypeError: SSVOptionPriceObjFunc() missing 1 required positional argument: 'marketVolSurface'
The relevant code is below:
x0 = [1.0, 0.0]  # (lambda0, rho)
x0 = np.asarray(x0)
args = (spot, 0.01*r, daysInYear, mktPrices, volSurface)
# constraints: lambda0 > 0, -1 <= rho <= 1
boundsHere = ((0, None), (-1, 1))
res = minimize(SSVOptionPriceObjFunc, x0, args, method='L-BFGS-B', jac=None,
               bounds=boundsHere, options={'xtol': 1e-8, 'disp': True})
The function to be minimized is below. The first two arguments are the free variables, while the other five are fixed as parameters.
def SSVOptionPriceObjFunc(lambda0, rho, spot, spotInterestRate, daysInYear, marketPrices,
                          marketVolSurface):
My intention is to find the (lambda0, rho) giving a minimum. From the debugger, it seems that my initial guess x0 is interpreted as a single variable, not as a vector, giving the error about a missing positional argument. I have tried passing x0 as a list, a tuple, and an ndarray; all fail. Can someone spot an error, or suggest a workaround? Thank you in advance.
Update: I have found a solution: use a wrapper function from the functools package to set the parameters.
import functools as ft
SSVOptionPriceObjFuncWrapper = ft.partial(SSVOptionPriceObjFunc, spot=spot,
                                          spotInterestRate=0.01 * r, daysInYear=daysInYear,
                                          marketPrices=mktPrices, marketVolSurface=volSurface)
Then pass SSVOptionPriceObjFuncWrapper to the minimizer with args = None
Thank you for the replies.
Take the documented minimize inputs seriously. It's your job to write the function to fit what minimize does, not the other way around.
scipy.optimize.minimize(fun, x0, args=(), ...)

fun : callable
    The objective function to be minimized.
        fun(x, *args) -> float
    where x is an 1-D array with shape (n,) and args is a tuple of the fixed
    parameters needed to completely specify the function.
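Concretely, this means the free variables must arrive packed into a single 1-D array that the objective unpacks itself, with the fixed parameters passed through args. Here is a minimal runnable sketch of that calling convention, using a made-up quadratic objective rather than the pricing function from the question:

import numpy as np
from scipy.optimize import minimize

def objective(x, shift, scale):
    lambda0, rho = x                  # unpack the two free variables from the 1-D array
    return scale * ((lambda0 - shift)**2 + (rho - 0.5)**2)

x0 = np.asarray([1.0, 0.0])
res = minimize(objective, x0, args=(2.0, 3.0), method='L-BFGS-B',
               bounds=((0, None), (-1, 1)))
print(res.x)                           # approximately [2.0, 0.5]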

Tensorflow AssertionError "gradients list should have been aggregated by now"

I have a function f that is internally using some tf.while_loops and tf.gradients to compute the value y = f(x). Something like this
def f( x ):
    ...
    def body( g, x ):
        # Compute the gradient here
        grad = tf.gradients( g, x )[0]
        ...
        return ...
    return tf.while_loop( cond, body, parallel_iterations=1 )
There are a few hundred lines of code, but I believe those are the important points...
Now when I evaluate f(x), I get exactly the value I expect:
y = known output of f(x)
with tf.Session() as sess:
    fx = f(x)
    print("Error = ", y - sess.run(fx, feed_dict))  # Prints 0
However, when I try to evaluate the gradient of f(x) with respect to x, that is,
grads = tf.gradients( fx, x )[0]
I get the error
AssertionError: gradients list should have been aggregated by now.
Here is the full trace:
File "C:/Dropbox/bob/tester.py", line 174, in <module>
grads = tf.gradients(y, x)[0]
File "C:\Anaconda36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 649, in gradients
return [_GetGrad(grads, x) for x in xs]
File "C:\Anaconda36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 649, in <listcomp>
return [_GetGrad(grads, x) for x in xs]
File "C:\Anaconda36\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 727, in _GetGrad
"gradients list should have been aggregated by now.")
AssertionError: gradients list should have been aggregated by now.
Could somebody please outline likely causes for this error? I have no idea where to even start looking for the issue...
Some observations:
Note that I have set parallel_iterations for the while loop to 1. This should mean that there are no errors due to reading and writing from multiple threads.
If I discard the while loop and just have f return body(), then the code runs:
# The following does not crash, but we removed the while_loop, so the output is incorrect
def f( x ):
    ...
    def body( g, x ):
        # Compute the gradient here
        grad = tf.gradients( g, x )[0]
        ...
        return ...
    return body(...)
Obviously, the output is incorrect, but at least the gradients are computed.
I came across a similar issue. Some patterns I noted:
If the x used in tf.gradients was used in a manner that required dimension broadcasting in body, I got this error. If I changed it to one that didn't require broadcasting, tf.gradients returned [None]. I didn't test this extensively, so the pattern may not hold across all examples.
Both cases (returning [None] and raising this assertion error) can be resolved by differentiating tf.identity(y) rather than y itself: grads = tf.gradients(tf.identity(y), xs). I have absolutely no idea why this works.
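For reference, here is a minimal TF 1.x graph-mode sketch of that workaround applied to a toy while_loop; the loop body is a stand-in, not the question's f, and only illustrates where tf.identity goes:

import tensorflow as tf  # TensorFlow 1.x graph mode

x = tf.placeholder(tf.float32, shape=())
# y = x**4 computed through a while_loop, loosely mimicking the setup above
_, y = tf.while_loop(lambda i, v: i < 3,
                     lambda i, v: (i + 1, v * x),
                     (tf.constant(0), x),
                     parallel_iterations=1)

# differentiate tf.identity(y) instead of y itself
grads = tf.gradients(tf.identity(y), [x])[0]

with tf.Session() as sess:
    print(sess.run(grads, feed_dict={x: 2.0}))  # d(x^4)/dx at x=2 -> 32.0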

Why can't I access the variable I create using the variable name plus scope path in TensorFlow?

I was trying to get a variable I created in a simple function, but I keep getting errors. I am doing:
x = tf.get_variable('quadratic/x')
but Python complains as follows:
python qm_tb_scopes.py
quadratic/x:0
Traceback (most recent call last):
File "qm_tb_scopes.py", line 24, in <module>
x = tf.get_variable('quadratic/x')
File "/Users/my_username/path/tensor_flow_experiments/venv/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 732, in get_variable
partitioner=partitioner, validate_shape=validate_shape)
File "/Users/my_username/path/tensor_flow_experiments/venv/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 596, in get_variable
partitioner=partitioner, validate_shape=validate_shape)
File "/Users/my_username/path/tensor_flow_experiments/venv/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 161, in get_variable
caching_device=caching_device, validate_shape=validate_shape)
File "/Users/my_username/path/tensor_flow_experiments/venv/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 457, in _get_single_variable
"but instead was %s." % (name, shape))
ValueError: Shape of a new variable (quadratic/x) must be fully defined, but instead was <unknown>.
It seems it's trying to create a new variable, but I am simply trying to get one that is already defined. Why is it doing this?
The whole code is:
import tensorflow as tf

def get_quaratic():
    # x variable
    with tf.variable_scope('quadratic'):
        x = tf.Variable(10.0, name='x')
        # b placeholder (simulates the "data" part of the training)
        b = tf.placeholder(tf.float32, name='b')
        # make model (1/2)(x-b)^2
        xx_b = 0.5*tf.pow(x-b, 2)
        y = xx_b
    return y, x

y, x = get_quaratic()
learning_rate = 1.0
# get optimizer
opt = tf.train.GradientDescentOptimizer(learning_rate)
# gradient variable list = [ (gradient,variable) ]
print x.name
x = tf.get_variable('quadratic/x')
x = tf.get_variable(x.name)
You need to pass the option reuse=True to tf.variable_scope() if you want to get the same variable twice.
See the documentation (https://www.tensorflow.org/versions/r0.9/how_tos/variable_scope/index.html) for more details.
Alternatively, you could get the variable once, outside your Python function, and pass it in as an argument in Python. I find that a bit cleaner, since it makes it explicit which variables the code uses.
I hope that helps!
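A minimal TF 1.x sketch of that reuse=True suggestion (note that, as the next answer shows, the variable also has to be created with tf.get_variable rather than tf.Variable for the variable store to find it again):

import tensorflow as tf

with tf.variable_scope('quadratic'):
    x = tf.get_variable('x', shape=(), initializer=tf.constant_initializer(10.0))

# Reopen the same scope with reuse=True to retrieve the existing variable
with tf.variable_scope('quadratic', reuse=True):
    x_again = tf.get_variable('x', shape=())

print(x_again.name)  # quadratic/x:0 -- the same underlying variable as x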
This is not the best solution, but try creating the variable through tf.get_variable() with reuse=False to ensure a new variable is created. Then, when retrieving the variable, use tf.get_variable() with reuse=True to get the existing variable. Setting reuse to tf.AUTO_REUSE risks creating a new variable if the exact variable is not present. Also make sure to specify the shape of the variable in tf.get_variable().
import tensorflow as tf

def get_quaratic():
    # x variable
    with tf.variable_scope('quadratic', reuse=False):
        x = tf.get_variable('x', ())
        tf.assign(x, 10)
        # b placeholder (simulates the "data" part of the training)
        b = tf.placeholder(tf.float32, name='b')
        # make model (1/2)(x-b)^2
        xx_b = 0.5*tf.pow(x-b, 2)
        y = xx_b
    return y, x

y, x = get_quaratic()
learning_rate = 1.0
# get optimizer
opt = tf.train.GradientDescentOptimizer(learning_rate)
# gradient variable list = [ (gradient,variable) ]
print(x.name)

with tf.variable_scope('', reuse=True):
    x = tf.get_variable('quadratic/x', shape=())
print(tf.global_variables())  # there is only 1 variable

Computing Edit Distance (feed_dict error)

I've written some code in Tensorflow to compute the edit-distance between one string and a set of strings. I can't figure out the error.
import tensorflow as tf
sess = tf.Session()

# Create input data
test_string = ['foo']
ref_strings = ['food', 'bar']

def create_sparse_vec(word_list):
    num_words = len(word_list)
    indices = [[xi, 0, yi] for xi, x in enumerate(word_list) for yi, y in enumerate(x)]
    chars = list(''.join(word_list))
    return tf.SparseTensor(indices, chars, [num_words, 1, 1])

test_string_sparse = create_sparse_vec(test_string*len(ref_strings))
ref_string_sparse = create_sparse_vec(ref_strings)

sess.run(tf.edit_distance(test_string_sparse, ref_string_sparse, normalize=True))
This code works and when run, it produces the output:
array([[ 0.25],
       [ 1.  ]], dtype=float32)
But when I attempt to do this by feeding the sparse tensors in through sparse placeholders, I get an error.
test_input = tf.sparse_placeholder(dtype=tf.string)
ref_input = tf.sparse_placeholder(dtype=tf.string)
edit_distances = tf.edit_distance(test_input, ref_input, normalize=True)

feed_dict = {test_input: test_string_sparse,
             ref_input: ref_string_sparse}

sess.run(edit_distances, feed_dict=feed_dict)
Here is the error traceback:
Traceback (most recent call last):
File "<ipython-input-29-4e06de0b7af3>", line 1, in <module>
sess.run(edit_distances, feed_dict=feed_dict)
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 372, in run
run_metadata_ptr)
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 597, in _run
for subfeed, subfeed_val in _feed_fn(feed, feed_val):
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 558, in _feed_fn
return feed_fn(feed, feed_val)
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 268, in <lambda>
[feed.indices, feed.values, feed.shape], feed_val)),
TypeError: zip argument #2 must support iteration
Any idea what is going on here?
TL;DR: For the return type of create_sparse_vec(), use tf.SparseTensorValue instead of tf.SparseTensor.
The problem here comes from the return type of create_sparse_vec(), which is a tf.SparseTensor and is not understood as a feed value in the call to sess.run().
When you feed a (dense) tf.Tensor, the expected value type is a NumPy array (or certain objects that can be converted to an array). When you feed a tf.SparseTensor, the expected value type is a tf.SparseTensorValue, which is similar to a tf.SparseTensor but whose indices, values, and shape properties are NumPy arrays (or certain objects that can be converted to arrays, like the lists in your example).
The following code should work:
def create_sparse_vec(word_list):
    num_words = len(word_list)
    indices = [[xi, 0, yi] for xi, x in enumerate(word_list) for yi, y in enumerate(x)]
    chars = list(''.join(word_list))
    return tf.SparseTensorValue(indices, chars, [num_words, 1, 1])
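With that change, the sparse placeholders can be fed exactly as in the question; a short usage sketch, reusing the session and the test_string/ref_strings lists defined above:

test_string_sparse = create_sparse_vec(test_string*len(ref_strings))
ref_string_sparse = create_sparse_vec(ref_strings)

test_input = tf.sparse_placeholder(dtype=tf.string)
ref_input = tf.sparse_placeholder(dtype=tf.string)
edit_distances = tf.edit_distance(test_input, ref_input, normalize=True)

feed_dict = {test_input: test_string_sparse,
             ref_input: ref_string_sparse}
print(sess.run(edit_distances, feed_dict=feed_dict))  # -> [[0.25], [1.]] as above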

TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')

Strange error from numpy via matplotlib when trying to get a histogram of a tiny toy dataset. I'm just not sure how to interpret the error, which makes it hard to see what to do next.
Didn't find much related, though this nltk question and this gdsCAD question are superficially similar.
I intend the debugging info at bottom to be more helpful than the driver code, but if I've missed something, please ask. This is reproducible as part of an existing test suite.
    if n > 1:
        return diff(a[slice1]-a[slice2], n-1, axis=axis)
    else:
>       return a[slice1]-a[slice2]
E       TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')
../py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py:1567: TypeError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) bt
[...]
py2.7.11-venv/lib/python2.7/site-packages/matplotlib/axes/_axes.py(5678)hist()
-> m, bins = np.histogram(x[i], bins, weights=w[i], **hist_kwargs)
py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(606)histogram()
-> if (np.diff(bins) < 0).any():
> py2.7.11-venv/lib/python2.7/site-packages/numpy/lib/function_base.py(1567)diff()
-> return a[slice1]-a[slice2]
(Pdb) p numpy.__version__
'1.11.0'
(Pdb) p matplotlib.__version__
'1.4.3'
(Pdb) a
a = [u'A' u'B' u'C' u'D' u'E']
n = 1
axis = -1
(Pdb) p slice1
(slice(1, None, None),)
(Pdb) p slice2
(slice(None, -1, None),)
(Pdb)
I got the same error, but in my case I was subtracting a dict key from a dict value. I fixed it by subtracting the dict value for the corresponding key from the other dict value.
cosine_sim = cosine_similarity(e_b-e_a, w-e_c)
Here I got the error because e_b, e_a and e_c are the embedding vectors for the words b, a and c respectively. I didn't know that w was a string; once I figured out that w is a string, I fixed it with the following line:
cosine_sim = cosine_similarity(e_b-e_a, word_to_vec_map[w]-e_c)
Instead of subtracting the dict key, I now subtract the corresponding value for that key.
I had a similar issue where an integer in a row of a DataFrame I was iterating over was of type numpy.int64. I got the
TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U1') dtype('<U1') dtype('<U1')
error when trying to subtract a float from it.
The easiest fix for me was to convert the row using pd.to_numeric(row).
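A tiny sketch of that pd.to_numeric fix, using string-typed values as a stand-in for the problematic data:

import numpy as np
import pandas as pd

row = pd.Series(['3', '7'])       # values stored as strings
# np.array(['3', '7']) - 1.5      # raises the ufunc 'subtract' TypeError
row = pd.to_numeric(row)          # convert to a numeric dtype
print(row - 1.5)                  # 0 -> 1.5, 1 -> 5.5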
Why is it applying diff to an array of strings?
I get an error at the same point, though with a different message
In [23]: a=np.array([u'A' u'B' u'C' u'D' u'E'])
In [24]: np.diff(a)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-9d5a62fc3ff0> in <module>()
----> 1 np.diff(a)
C:\Users\paul\AppData\Local\Enthought\Canopy\User\lib\site-packages\numpy\lib\function_base.pyc in diff(a, n, axis)
1112 return diff(a[slice1]-a[slice2], n-1, axis=axis)
1113 else:
-> 1114 return a[slice1]-a[slice2]
1115
1116
TypeError: unsupported operand type(s) for -: 'numpy.ndarray' and 'numpy.ndarray'
Is this array the bins parameter? What do the docs say bins should be?
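For illustration, here is a tiny sketch of that point (the numeric bin edges are made up): np.diff, which np.histogram calls on the bins array, fails on a string-dtype array but works once the bin edges are numeric.

import numpy as np

str_bins = np.array([u'A', u'B', u'C'])      # string dtype, as in the pdb dump above
# np.diff(str_bins)                          # raises a TypeError like the one quoted
num_bins = np.array([0.0, 1.0, 2.5, 4.0])    # numeric bin edges
print(np.diff(num_bins))                     # -> [1.  1.5 1.5]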
I am fairly new to this myself, but I had a similar error and found that it is due to a type-casting issue. I was trying to concatenate rather than take the difference, but I think the principle is the same here. I provided a similar answer on another question, so I hope that is OK.
In essence you need to use a different data-type cast; in my case I needed str, not float. I suspect yours is the same, so my suggested solution is:
return diff(str(a[slice1])-str(a[slice2]), n-1, axis=axis)
I am sorry I cannot test it before suggesting it, as I am unclear from your example what exactly you were doing.
Please see my example code below for the fix to my own code; the change occurs on the third-to-last line. The code produces a basic random forest model.
import scipy
import math
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn import preprocessing, metrics, cross_validation

Data = pd.read_csv("Free_Energy_exp.csv", sep=",")
Data = Data.fillna(Data.mean())  # replace the NA values with the mean of the descriptor
header = Data.columns.values  # Use the column headers as the descriptor labels
Data.head()

test_name = "Test.csv"

npArray = np.array(Data)
print header.shape
npheader = np.array(header[1:-1])
print("Array shape X = %d, Y = %d " % (npArray.shape))
datax, datay = npArray.shape
names = npArray[:,0]
X = npArray[:,1:-1].astype(float)
y = npArray[:,-1].astype(float)
X = preprocessing.scale(X)

XTrain, XTest, yTrain, yTest = cross_validation.train_test_split(X, y, random_state=0)

# Predictions results initialised
RFpredictions = []

RF = RandomForestRegressor(n_estimators=10, max_features=5, max_depth=5, random_state=0)
RF.fit(XTrain, yTrain)  # Train the model
print("Training R2 = %5.2f" % RF.score(XTrain, yTrain))
RFpreds = RF.predict(XTest)

with open(test_name, 'a') as fpred:
    lenpredictions = len(RFpreds)
    lentrue = yTest.shape[0]
    if lenpredictions == lentrue:
        fpred.write("Names/Label,, Prediction Random Forest,, True Value,\n")
        for i in range(0, lenpredictions):
            fpred.write(RFpreds[i]+",,"+yTest[i]+",\n")
    else:
        print "ERROR - names, prediction and true value array size mismatch."
This leads to the error:
Traceback (most recent call last):
File "min_example.py", line 40, in <module>
fpred.write(RFpreds[i]+",,"+yTest[i]+",\n")
TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('S32') dtype('S32') dtype('S32')
The solution is to wrap each variable in str() on the third-to-last line before writing to the file. No other changes to the code have been made from the above.
import scipy
import math
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn import preprocessing, metrics, cross_validation

Data = pd.read_csv("Free_Energy_exp.csv", sep=",")
Data = Data.fillna(Data.mean())  # replace the NA values with the mean of the descriptor
header = Data.columns.values  # Use the column headers as the descriptor labels
Data.head()

test_name = "Test.csv"

npArray = np.array(Data)
print header.shape
npheader = np.array(header[1:-1])
print("Array shape X = %d, Y = %d " % (npArray.shape))
datax, datay = npArray.shape
names = npArray[:,0]
X = npArray[:,1:-1].astype(float)
y = npArray[:,-1].astype(float)
X = preprocessing.scale(X)

XTrain, XTest, yTrain, yTest = cross_validation.train_test_split(X, y, random_state=0)

# Predictions results initialised
RFpredictions = []

RF = RandomForestRegressor(n_estimators=10, max_features=5, max_depth=5, random_state=0)
RF.fit(XTrain, yTrain)  # Train the model
print("Training R2 = %5.2f" % RF.score(XTrain, yTrain))
RFpreds = RF.predict(XTest)

with open(test_name, 'a') as fpred:
    lenpredictions = len(RFpreds)
    lentrue = yTest.shape[0]
    if lenpredictions == lentrue:
        fpred.write("Names/Label,, Prediction Random Forest,, True Value,\n")
        for i in range(0, lenpredictions):
            fpred.write(str(RFpreds[i])+",,"+str(yTest[i])+",\n")
    else:
        print "ERROR - names, prediction and true value array size mismatch."
These examples are from a larger piece of code, so I hope they are clear enough.
I think James is right. I got stuck on the same error while working with polyval(). And yes, the solution is to use the same type of variables; you can typecast all the variables to the same type. Below is an example:
import numpy
P = numpy.array(input().split(), float)
x = float(input())
print(numpy.polyval(P,x))
Here I used float as the output type, so even if the user inputs an int value (a whole number), the final answer will be typecast to float.
I ran into the same issue, but in my case it was just a Python list being used instead of a NumPy array. Using two NumPy arrays solved the issue for me.