Tuple indices must be integers, not tuple (matplotlib / numpy)

I'm trying to code a program that will integrate a function using different methods (Euler, Runge-Kutta, ...) and using the built-in function scipy.integrate.odeint.
Everything works and I'm getting the right results, but I also need to create a graph with the results, and that's where everything goes wrong: for the odeint function I can't draw the graph.
Here is my code and the error; I hope someone will be able to help me.
def odeint(phi, t0tf, Y0, N):
    T6 = numpy.zeros(N + 1)
    T6[0] = t0tf[0]
    h = (t0tf[1] - t0tf[0]) / N
    for i in range(N):
        T6[i+1] = T6[i] + h

    def f(t, x):
        return phi(x, t)

    Y6 = scipy.integrate.odeint(f, Y0, T6, full_output=True)
    return Y6
Y6 = edo.odeint(phi, t0tf, Y0, N)
T6Y6 = numpy.hstack([Y6])
print("Solutions Scipy :")
print()
print(T6Y6)
print()
mpl.figure("Courbes")
mpl.plot(Y6[0:N,0],Y6[0:N,1],color="yellow",label="Isoda")
mpl.show()
And the error is :
mpl.plot(Y6[0:N,0],Y6[0:N,1],color="yellow",label="Isoda")
TypeError: tuple indices must be integers, not tuple
Thanks in advance. (PS: I'm French, so my sentences might be kinda shaky.)

Y6 seems to be a tuple that you are indexing in an incorrect way. It's difficult to point out exactly what is wrong since you didn't provide the data, but the following example shows you how to index into a tuple:
y = ((1,2,3,4,5),)
print('This one works: ',y[0][1:])
print(y[1:,0])
The result is this:
This one works: (2, 3, 4, 5)
Traceback (most recent call last):
File "E:\armatita\stackoverflow\test.py", line 9, in <module>
print(y[1:,0])
TypeError: tuple indices must be integers, not tuple
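In this particular case, the tuple almost certainly comes from full_output=True: with that flag, scipy.integrate.odeint returns a pair (y, infodict) rather than the solution array alone. A minimal sketch of the fix, unpacking the pair before indexing (keeping the variable names from the question):
Y6, info = scipy.integrate.odeint(f, Y0, T6, full_output=True)
# Y6 is now the (N+1, len(Y0)) solution array, so 2D indexing works:
mpl.plot(Y6[0:N, 0], Y6[0:N, 1], color="yellow", label="Isoda")
Alternatively, drop full_output=True and odeint returns the array directly.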


Use of plt.plot vs plt.scatter with two variables (x and f(x,y))

I am new to Python and Stack Overflow, so please bear with me.
I was trying to plot using plt.plot and plt.scatter. The former works perfectly fine while the latter does not. Below is the relevant part of the code:
import numpy as np
import matplotlib.pyplot as plt

def vis_cal(u, a):
    return np.exp(2*np.pi*1j*u*np.cos(a))

u = np.array([[1, 2, 3, 4]])
u = u.reshape((4, 1))
a = np.array([[-np.pi, -np.pi/6]])

plt.figure(figsize=(10, 8))
plt.xlabel("Baseline")
plt.ylabel("Vij (Visibility)")
plt.scatter(u, vis_cal(u, a), 'o', color='blue', label="Vij_ind")
plt.legend(loc="lower left")
plt.show()
This returns an error: ValueError: x and y must be the same size
My questions are:
Why does the difference in array sizes not matter to plt.plot, but matter to plt.scatter?
Does this mean that if I want to use plt.scatter I always need to make sure the arrays have the same size, and otherwise I need to use plt.plot?
Thank you very much
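Printing the shapes involved shows where the sizes diverge: broadcasting u (shape (4, 1)) against a (shape (1, 2)) inside vis_cal produces a (4, 2) result, so scatter receives 4 x-values but 8 y-values. A quick check, reusing the arrays from the question:
import numpy as np

u = np.array([[1, 2, 3, 4]]).reshape((4, 1))
a = np.array([[-np.pi, -np.pi/6]])
vis = np.exp(2*np.pi*1j*u*np.cos(a))
print(u.shape, vis.shape)  # (4, 1) (4, 2)
plt.plot cycles through the columns of 2D inputs, drawing one line per column, while plt.scatter flattens x and y and requires them to have the same total number of elements, which is why only scatter raises the ValueError.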

Inconsistencies between latest numpy and scikit-learn versions?

I just upgraded my versions of numpy and scikit-learn to the latest versions, i.e. numpy-1.16.3 and sklearn-0.21.0 (for Python 3.7). A lot is crashing, e.g. a simple PCA on a numeric matrix will not work anymore. For instance, consider this toy matrix:
Xt
Out[3561]:
matrix([[-0.98200559, 0.80514289, 0.02461868, -1.74564111],
[ 2.3069239 , 1.79912014, 1.47062378, 2.52407335],
[-0.70465054, -1.95163302, -0.67250316, -0.56615338],
[-0.75764211, -1.03073475, 0.98067997, -2.24648769],
[-0.2751523 , -0.46869694, 1.7917171 , -3.31407694],
[-1.52269241, 0.05986123, -1.40287416, 2.57148354],
[ 1.38349325, -1.30947483, 0.90442436, 2.52055143],
[-0.4717785 , -1.46032344, -1.50331841, 3.58598692],
[-0.03124986, -3.52378987, 1.22626145, 1.50521572],
[-1.01453403, -3.3211243 , -0.00752532, 0.56538522]])
Then run PCA on it:
import sklearn.decomposition as skd
est2 = skd.PCA(n_components=4)
est2.fit(Xt)
This fails:
Traceback (most recent call last):
File "<ipython-input-3563-1c97b7d5474f>", line 2, in <module>
est2.fit(Xt)
File "/home/sven/anaconda3/lib/python3.7/site-packages/sklearn/decomposition/pca.py", line 341, in fit
self._fit(X)
File "/home/sven/anaconda3/lib/python3.7/site-packages/sklearn/decomposition/pca.py", line 407, in _fit
return self._fit_full(X, n_components)
File "/home/sven/anaconda3/lib/python3.7/site-packages/sklearn/decomposition/pca.py", line 446, in _fit_full
total_var = explained_variance_.sum()
File "/home/sven/anaconda3/lib/python3.7/site-packages/numpy/core/_methods.py", line 36, in _sum
return umr_sum(a, axis, dtype, out, keepdims, initial)
TypeError: float() argument must be a string or a number, not '_NoValueType'
My impression is that numpy has been restructured at a very fundamental level, including single column matrix referencing, such that functions such as np.sum, np.sqrt etc don't behave as they did in older versions.
Does anyone know what the path forward with numpy is and what exactly is going on here?
At this point your code's fit has run scipy.linalg.svd on your Xt, and is looking at the singular values S.
self.mean_ = np.mean(X, axis=0)
X -= self.mean_
U, S, V = linalg.svd(X, full_matrices=False)
# flip eigenvectors' sign to enforce deterministic output
U, V = svd_flip(U, V)
components_ = V
# Get variance explained by singular values
explained_variance_ = (S ** 2) / (n_samples - 1)
total_var = explained_variance_.sum()
In my working case:
In [175]: est2.explained_variance_
Out[175]: array([6.12529695, 3.20400543, 1.86208619, 0.11453425])
In [176]: est2.explained_variance_.sum()
Out[176]: 11.305922832602981
The np.sum documentation explains that, as of v1.15, it takes an initial parameter (see ufunc.reduce), and the default is initial=np._NoValue:
In [178]: np._NoValue
Out[178]: <no value>
In [179]: type(np._NoValue)
Out[179]: numpy._globals._NoValueType
So that explains, in part, the _NoValueType reference in the error.
What's your scipy version?
In [180]: import scipy
In [181]: scipy.__version__
Out[181]: '1.2.1'
I wonder if your scipy.linalg.svd is returning an S array that is an 'old' ndarray which doesn't fully implement this initial parameter. I can't explain why that would happen, but I can't otherwise explain why the array sum is having problems with np._NoValue.
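One more thing worth checking, as a hedged guess rather than a confirmed diagnosis: Xt above is an np.matrix rather than a plain ndarray, and np.matrix overrides sum with an older signature (axis, dtype, out only) that predates the initial parameter. Converting to a regular array before fitting sidesteps any such mismatch:
import numpy as np
import sklearn.decomposition as skd

est2 = skd.PCA(n_components=4)
est2.fit(np.asarray(Xt))  # plain ndarray instead of np.matrix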

TypeError: 'TensorShape' object is not callable

I am new to Tensorflow programming; I was digging through some functions and got this error in the snippet:
with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))
    print(sess_1.run(e.shape()))
The error found:
Traceback (most recent call last):
File "C:/Users/Ashu/PycharmProjects/untitled/Bored.py", line 15, in
print(sess_1.run(e.shape()))
TypeError: 'TensorShape' object is not callable
I didn't find it here, so can anyone please clarify this silly doubt, as I am a new learner. Sorry for any typing mistakes!
I have one more doubt: when I simply use the eval() function it doesn't print anything in PyCharm; I had to use it along with the print() method. So my doubt is that when the print() method is used it doesn't print the dtype of the tensor, it simply prints the tensor or Python object value in PyCharm. (Why am I not getting output in a format like array([1., 1.], dtype=float32)?) Is this the PyCharm way of printing tensors in the new version, or is it something I am doing wrong? So excited to know the reason behind this; please help, and pardon me if I am wrong anywhere.
One confusing aspect of tensorflow for beginners is that there are two types of shape: dynamic shape, given by tf.shape(x), and static shape, given by x.shape (assuming x is a tensor). While they represent the same concept, they are used very differently.
Static shape is the shape of a tensor known at graph-construction time. It's a data type in its own right, and it can be converted to a list using as_list().
x = tf.placeholder(tf.float32, shape=(None, 3, 4))
static_shape = x.shape
shape_list = x.shape.as_list()
print(shape_list) # [None, 3, 4]
y = tf.reduce_sum(x, axis=1)
print(y.shape.as_list()) # [None, 4]
During operations, tensorflow tracks static shapes as best it can. In the above example, y's shape was calculated based on the partially known shape of x. Note we haven't even created a session, but the static shape is still known.
Since the batch size is not known, you can't use the static first entry in calculations.
z = tf.reduce_sum(x) / tf.cast(x.shape.as_list()[0], tf.float32) # ERROR
(we could have divided by x.shape.as_list()[1], since that dimension is known statically - but that wouldn't demonstrate anything here)
If we need to use a value which is not known statically - i.e. at graph construction time - we can use the dynamic shape of x. The dynamic shape is a tensor - like other tensors in tensorflow - which is evaluated using a session.
z = tf.reduce_sum(x) / tf.cast(tf.shape(x)[0], tf.float32) # all good!
You can't call as_list on the dynamic shape, nor can you inspect its values without going through a session evaluation.
As stated in the documentation, you can only call a session's run method with tensors, operations, or lists of tensors/operations. But your last line never gets that far: e.shape is a TensorShape object, not a function, so the call e.shape() itself raises the TypeError. Pass e.shape to print directly, or use tf.shape(e) if you want the shape as a tensor the session can run.
When you call print with a tensor, the system prints the tensor's content. If you want to print the tensor's type, use code like print(type(tensor)).
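Putting that together, a corrected version of the original snippet (TF 1.x API) could look like this:
import tensorflow as tf

with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))            # 11
    print(e.shape)                  # static shape, no session needed: ()
    print(sess_1.run(tf.shape(e)))  # dynamic shape, evaluated in the session: []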

Numpy - AttributeError: 'Zero' object has no attribute 'exp'

I'm having trouble with a discrepancy: something breaks at runtime, but running the exact same data and operations in the Python console works fine.
# f_err - currently has value 1.11819388872025
# l_scales - currently a numpy array [1.17840183376334 1.13456764589809]
sq_euc_dists = self.se_term(x1, x2, l_scales) # this is fine. It calls cdists on x1/l_scales, x2/l_scales vectors
return (f_err**2) * np.exp(-0.5 * sq_euc_dists) # <-- errors on this line
The error that I get is
AttributeError: 'Zero' object has no attribute 'exp'
However, calling those exact same lines, with the same f_err, l_scales, and x1, x2 in the console right after it errors out, somehow does not produce errors.
I was not able to find a post referring to the 'Zero' object error specifically, and the non-'Zero' ones I found didn't seem to apply to my case here.
EDIT: It was a bit lacking in info, so here's an actual (extracted) runnable example with sample data I took straight out of a failed run, which when run in isolation works fine/I can't reproduce the error except in runtime.
Note that the sqeucl_dist function below is quite bad and I should be using scipy's cdist instead. However, because I'm using sympy's symbols for matrix element-wise gradients, with over 15 partial derivatives in my real data, cdist is not an option as it doesn't deal with arbitrary objects.
import numpy as np

def se_term(x1, x2, l):
    return sqeucl_dist(x1/l, x2/l)

def sqeucl_dist(x, xs):
    return np.sum([(i-j)**2 for i in x for j in xs], axis=1).reshape(x.shape[0], xs.shape[0])
x = np.array([[-0.29932052, 0.40997373], [0.40203481, 2.19895326], [-0.37679417, -1.11028267], [-2.53012051, 1.09819485], [0.59390005, 0.9735], [0.78276777, -1.18787904], [-0.9300892, 1.18802775], [0.44852545, -1.57954101], [1.33285028, -0.58594779], [0.7401607, 2.69842268], [-2.04258086, 0.43581565], [0.17353396, -1.34430191], [0.97214259, -1.29342284], [-0.11103534, -0.15112815], [0.41541759, -1.51803154], [-0.59852383, 0.78442389], [2.01323359, -0.85283772], [-0.14074266, -0.63457529], [-0.49504797, -1.06690869], [-0.18028754, -0.70835799], [-1.3794126, 0.20592016], [-0.49685373, -1.46109525], [-1.41276934, -0.66472598], [-1.44173868, 0.42678815], [0.64623684, 1.19927771], [-0.5945761, -0.10417961]])
f_err = 1.11466725760716
l = [1.18388412685279, 1.02290811104357]
result = (f_err**2) * np.exp(-0.5 * se_term(x, x, l)) # This runs fine, but fails with the exact same calls and data during runtime
Any help greatly appreciated!
Here is how to reproduce the error you are seeing:
import sympy
import numpy
zero = sympy.sympify('0')
numpy.exp(zero)
This raises the same exception you are seeing.
You can fix this (inefficiently) by changing your code to the following to make things floating point.
def sqeucl_dist(x, xs):
    return np.sum([np.vectorize(float)(i-j)**2 for i in x for j in xs],
                  axis=1).reshape(x.shape[0], xs.shape[0])
It will be better to fix your gradient function using lambdify.
Here's an example of how lambdify can be used on partial derivatives:
import sympy
from sympy.abc import x, y, z

expression = x**2 + sympy.sin(y) + z
derivatives = [expression.diff(var, 1) for var in [x, y, z]]
derivatives is now [2*x, cos(y), 1], a list of Sympy expressions. To create a function which will evaluate this numerically at a particular set of values, we use lambdify as follows (passing 'numpy' as an argument like that means to use numpy.cos rather than sympy.cos):
derivative_calc = sympy.lambdify((x, y, z), derivatives, 'numpy')
Now derivative_calc(1, 2, 3) will return [2, -0.41614683654714241, 1]. These are ints and numpy.float64s.
A side note: np.exp(M) will calculate the element-wise exponential of each of the elements of M. If you are trying to do a matrix exponential, you need scipy.linalg.expm.
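A toy comparison of the two, using a nilpotent matrix where the difference is easy to see:
import numpy as np
from scipy.linalg import expm

M = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(np.exp(M))  # element-wise: [[1., 2.718...], [1., 1.]]
print(expm(M))    # matrix exponential: [[1., 1.], [0., 1.]]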

Clustering of sparse matrix in python and scipy

I'm trying to cluster some data with python and scipy, but the following code does not work, for a reason I do not understand:
from scipy.sparse import *

matrix = dok_matrix((en, en), int)
for pub in pubs:
    authors = pub.split(";")
    for auth1 in authors:
        for auth2 in authors:
            if auth1 == auth2:
                continue
            id1 = e2id[auth1]
            id2 = e2id[auth2]
            matrix[id1, id2] += 1

from scipy.cluster.vq import vq, kmeans2, whiten
result = kmeans2(matrix, 30)
print result
It says:
Traceback (most recent call last):
File "cluster.py", line 40, in <module>
result = kmeans2(matrix, 30)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 683, in kmeans2
clusters = init(data, k)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 576, in _krandinit
return init_rankn(data)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 563, in init_rankn
mu = np.mean(data, 0)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 2374, in mean
return mean(axis, dtype, out)
TypeError: mean() takes at most 2 arguments (4 given)
When I use kmeans instead of kmeans2 I get the following error:
Traceback (most recent call last):
File "cluster.py", line 40, in <module>
result = kmeans(matrix, 30)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 507, in kmeans
guess = take(obs, randint(0, No, k), 0)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 103, in take
return take(indices, axis, out, mode)
TypeError: take() takes at most 3 arguments (5 given)
I think I'm having these problems because I'm using sparse matrices, but my matrices are too big to fit in memory otherwise. Is there a way to use the standard clustering algorithms from scipy with sparse matrices? Or do I have to re-implement them myself?
I created a new version of my code to work with a vector space:
el = len(experts)
pl = len(pubs)
print el, pl

from scipy.sparse import *

P = dok_matrix((pl, el), int)
p_id = 0
for pub in pubs:
    authors = pub.split(";")
    for auth1 in authors:
        if len(auth1) < 2:
            continue
        id1 = e2id[auth1]
        P[p_id, id1] = 1

from scipy.cluster.vq import kmeans, kmeans2, whiten
result = kmeans2(P, 30)
print result
But I'm still getting the error:
TypeError: mean() takes at most 2 arguments (4 given)
What am I doing wrong?
K-means cannot be run on distance matrices.
It needs a vector space to compute means in; that is why it is called k-means. If you want to use a distance matrix, you need to look into purely distance-based algorithms such as DBSCAN and OPTICS (both on Wikipedia).
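That said, your second version clusters P, which really is vector-space data (publications x authors); there the limitation is only that scipy.cluster.vq does not understand scipy sparse matrices. A hedged alternative, assuming scikit-learn is available: its KMeans accepts CSR input directly, so nothing has to be densified. A minimal sketch with a random stand-in for your matrix:
from scipy.sparse import random as sparse_random
from sklearn.cluster import KMeans

P = sparse_random(1000, 200, density=0.01, format='csr')  # stand-in for your P
labels = KMeans(n_clusters=30).fit_predict(P)
print(labels[:10])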
May I suggest "Affinity Propagation" from scikit-learn? In the work I've been doing with it, I find that it has generally been able to find the 'naturally' occurring clusters within my data set. The input to the algorithm is an affinity matrix, or similarity matrix, using any arbitrary similarity measure.
I don't have a good handle on the kind of data you have on hand, so I can't speak to the exact suitability of this method to your data set, but it may be worth a try, perhaps?
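As an illustration only (a toy similarity matrix, not your data), affinity='precomputed' lets you hand AffinityPropagation a similarity matrix directly:
import numpy as np
from sklearn.cluster import AffinityPropagation

# Toy similarity matrix: higher values mean "more similar".
S = np.array([[ 0.0, -0.1, -9.0],
              [-0.1,  0.0, -9.0],
              [-9.0, -9.0,  0.0]])
labels = AffinityPropagation(affinity='precomputed').fit_predict(S)
print(labels)  # e.g. [0 0 1]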
Alternatively, if you're looking to cluster graphs, I'd take a look at NetworkX. That might be a useful tool for you. The reason I suggest this is because it looks like you're working with networks of authors. Hence, with NetworkX, you can put in an adjacency matrix and find out which authors are clustered together.
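For instance, a hedged sketch that treats the co-authorship counts as a weighted adjacency matrix and reads off the connected components (recent NetworkX calls this from_numpy_array; older releases used from_numpy_matrix):
import networkx as nx
import numpy as np

# Toy co-authorship adjacency matrix (a densified stand-in for the dok_matrix).
adj = np.array([[0, 2, 0],
                [2, 0, 0],
                [0, 0, 0]])
G = nx.from_numpy_array(adj)
print(list(nx.connected_components(G)))  # [{0, 1}, {2}]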
For a further elaboration on this, you can see a question that I had asked earlier for inspiration here.