Clustering a sparse matrix in Python with SciPy and NumPy

I'm trying to cluster some data with Python and SciPy, but the following code does not work, for reasons I do not understand:
from scipy.sparse import *

matrix = dok_matrix((en, en), int)
for pub in pubs:
    authors = pub.split(";")
    for auth1 in authors:
        for auth2 in authors:
            if auth1 == auth2: continue
            id1 = e2id[auth1]
            id2 = e2id[auth2]
            matrix[id1, id2] += 1

from scipy.cluster.vq import vq, kmeans2, whiten

result = kmeans2(matrix, 30)
print result
It says:
Traceback (most recent call last):
File "cluster.py", line 40, in <module>
result = kmeans2(matrix, 30)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 683, in kmeans2
clusters = init(data, k)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 576, in _krandinit
return init_rankn(data)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 563, in init_rankn
mu = np.mean(data, 0)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 2374, in mean
return mean(axis, dtype, out)
TypeError: mean() takes at most 2 arguments (4 given)
When I use kmeans instead of kmeans2, I get the following error:
Traceback (most recent call last):
File "cluster.py", line 40, in <module>
result = kmeans(matrix, 30)
File "/usr/lib/python2.7/dist-packages/scipy/cluster/vq.py", line 507, in kmeans
guess = take(obs, randint(0, No, k), 0)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 103, in take
return take(indices, axis, out, mode)
TypeError: take() takes at most 3 arguments (5 given)
I think I have these problems because I'm using sparse matrices, but my matrices are too big to fit in memory otherwise. Is there a way to use the standard clustering algorithms from SciPy with sparse matrices? Or do I have to re-implement them myself?
I created a new version of my code to work in a vector space:
el = len(experts)
pl = len(pubs)
print el, pl

from scipy.sparse import *

P = dok_matrix((pl, el), int)
p_id = 0
for pub in pubs:
    authors = pub.split(";")
    for auth1 in authors:
        if len(auth1) < 2: continue
        id1 = e2id[auth1]
        P[p_id, id1] = 1
    p_id += 1

from scipy.cluster.vq import kmeans, kmeans2, whiten

result = kmeans2(P, 30)
print result
But I'm still getting the error:
TypeError: mean() takes at most 2 arguments (4 given)
What am I doing wrong?

K-means cannot be run on distance matrices.
It needs a vector space to compute means in; that is why it is called k-means. If you want to use a distance matrix, you need to look into purely distance-based algorithms such as DBSCAN and OPTICS (both on Wikipedia).
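For example, here is a minimal sketch with scikit-learn's DBSCAN, assuming you have already built a pairwise distance matrix D (the name D and the parameter values are placeholders, not from the question):

from sklearn.cluster import DBSCAN

# metric='precomputed' tells DBSCAN that D already holds pairwise distances
db = DBSCAN(eps=0.5, min_samples=5, metric='precomputed')
labels = db.fit_predict(D)  # label -1 marks noise points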

May I suggest "Affinity Propagation" from scikit-learn? In the work I've been doing with it, I find that it has generally been able to find the 'naturally' occurring clusters within my data set. The input to the algorithm is an affinity matrix, or similarity matrix, built from any arbitrary similarity measure.
I don't have a good handle on the kind of data you have on hand, so I can't speak to the exact suitability of this method for your data set, but it may be worth a try.
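If you want to try it, here is a hedged sketch, assuming S is a similarity matrix built from your co-authorship counts. Note that AffinityPropagation with affinity='precomputed' expects a dense matrix, which may defeat your memory constraint:

from sklearn.cluster import AffinityPropagation

# densifies the dok_matrix from the question; only feasible if it fits in memory
S = matrix.toarray().astype(float)
ap = AffinityPropagation(affinity='precomputed')
labels = ap.fit_predict(S)  # labels[i] is the cluster id of author i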

Alternatively, if you're looking to cluster graphs, I'd take a look at NetworkX. That might be a useful tool for you. The reason I suggest this is that the data you're working with looks like a network of authors. Hence, with NetworkX, you can put in an adjacency matrix and find out which authors are clustered together.
For further elaboration, you can see a question that I asked earlier for inspiration here.
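As a rough sketch of what that could look like, assuming the co-occurrence matrix from the question (from_scipy_sparse_matrix is the graph constructor in NetworkX versions of that era, and connected components are only one coarse notion of clustering):

import networkx as nx

# build an undirected graph whose edges are nonzero co-authorship counts
G = nx.from_scipy_sparse_matrix(matrix.tocsr())
clusters = list(nx.connected_components(G))  # each entry is a set of author ids
print(len(clusters), 'components')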

Related

pandas value_counts() with IntEnum raises RecursionError

I wrote the following code to illustrate my problem. I'm using Python 3.6 with pandas==0.25.3.
import pandas as pd
from enum import Enum, IntEnum

class BookType(Enum):
    DRAMA = 5
    ROMAN = 3

class AuthorType(IntEnum):
    UNKNOWN = 0
    GROUP = 1
    MAN = 2

def print_num_type(df: pd.DataFrame, col_name: str, enum_type: Enum) -> int:
    counts = df[col_name].value_counts()
    val = counts[enum_type]
    print('value counts:', counts)
    print(f'Found "{val}" of type {enum_type}')

d = {'title': ['Charly Morry', 'James', 'Watson', 'Marry L.'], 'isbn': [21412412, 334764712, 12471021, 124141111], 'book_type': [BookType.DRAMA, BookType.ROMAN, BookType.ROMAN, BookType.ROMAN], 'author_type': [AuthorType.UNKNOWN, AuthorType.UNKNOWN, AuthorType.MAN, AuthorType.UNKNOWN]}
df = pd.DataFrame(data=d)
df.set_index(['title', 'isbn'], inplace=True)
df['book_type'] = df['book_type'].astype('category')
df['author_type'] = df['author_type'].astype('category')
print(df)
print(df.dtypes)
print_num_type(df, 'book_type', BookType.DRAMA)
print_num_type(df, 'author_type', AuthorType.UNKNOWN)
My pandas.DataFrame consists of two categorical columns (book_type and author_type).
Furthermore, book_type is a class inheriting from Enum and author_type from IntEnum. When calling print_num_type(df, 'book_type', BookType.DRAMA) everything works out as expected and the number of books of this type is printed, whereas print_num_type(df, 'author_type', AuthorType.UNKNOWN) raises the error:
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\_weakrefset.py", line 72, in __contains__
wr = ref(item)
RecursionError: maximum recursion depth exceeded while calling a Python object
Exception ignored in: 'pandas._libs.lib.c_is_list_like'
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\abc.py", line 182, in __instancecheck__
if subclass in cls._abc_cache:
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\_weakrefset.py", line 72, in __contains__
wr = ref(item)
RecursionError: maximum recursion depth exceeded while calling a Python object
What am I doing wrong here?
Is there a workaround to fix this error? I can't change the IntEnum type of AuthorType since it's provided by another library.
Thanks in advance!
See answer here
The main idea is that since x.value_counts() (counts in your function) is itself a pandas Series, it's best to use .iat or .iloc when indexing into it; see, e.g., the iat docs.
I think the easiest solution is to just use (x==0).sum(), or in your syntax:
val = (df[col_name]==enum_type).sum()
I put a minimal working example in the comments under your question so you can reproduce the problem/fix easily with the "x" notation.
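For completeness, a minimal sketch of your function with that fix applied (same signature as yours, just without indexing the Series by the enum member):

def print_num_type(df: pd.DataFrame, col_name: str, enum_type: Enum) -> None:
    # comparing against the enum member avoids the recursive isinstance check
    val = (df[col_name] == enum_type).sum()
    print(f'Found "{val}" of type {enum_type}')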
What version of pandas are you using? I realized after reproducing the error that upgrading pandas (now on pandas-1.4.2) fixes it, and value_counts()[0] then worked as expected.
Run: pip install --upgrade pandas

Inconsistencies between latest numpy and scikit-learn versions?

I just upgraded my versions of numpy and scikit-learn to the latest versions, i.e. numpy-1.16.3 and sklearn-0.21.0 (for Python 3.7). A lot is crashing, e.g. a simple PCA on a numeric matrix will not work anymore. For instance, consider this toy matrix:
Xt
Out[3561]:
matrix([[-0.98200559, 0.80514289, 0.02461868, -1.74564111],
[ 2.3069239 , 1.79912014, 1.47062378, 2.52407335],
[-0.70465054, -1.95163302, -0.67250316, -0.56615338],
[-0.75764211, -1.03073475, 0.98067997, -2.24648769],
[-0.2751523 , -0.46869694, 1.7917171 , -3.31407694],
[-1.52269241, 0.05986123, -1.40287416, 2.57148354],
[ 1.38349325, -1.30947483, 0.90442436, 2.52055143],
[-0.4717785 , -1.46032344, -1.50331841, 3.58598692],
[-0.03124986, -3.52378987, 1.22626145, 1.50521572],
[-1.01453403, -3.3211243 , -0.00752532, 0.56538522]])
Then run PCA on it:
import sklearn.decomposition as skd
est2 = skd.PCA(n_components=4)
est2.fit(Xt)
This fails:
Traceback (most recent call last):
File "<ipython-input-3563-1c97b7d5474f>", line 2, in <module>
est2.fit(Xt)
File "/home/sven/anaconda3/lib/python3.7/site-packages/sklearn/decomposition/pca.py", line 341, in fit
self._fit(X)
File "/home/sven/anaconda3/lib/python3.7/site-packages/sklearn/decomposition/pca.py", line 407, in _fit
return self._fit_full(X, n_components)
File "/home/sven/anaconda3/lib/python3.7/site-packages/sklearn/decomposition/pca.py", line 446, in _fit_full
total_var = explained_variance_.sum()
File "/home/sven/anaconda3/lib/python3.7/site-packages/numpy/core/_methods.py", line 36, in _sum
return umr_sum(a, axis, dtype, out, keepdims, initial)
TypeError: float() argument must be a string or a number, not '_NoValueType'
My impression is that numpy has been restructured at a very fundamental level, including single-column matrix referencing, such that functions like np.sum and np.sqrt don't behave as they did in older versions.
Does anyone know what the path forward with numpy is and what exactly is going on here?
At this point your fit call has run scipy.linalg.svd on your Xt and is looking at the singular values S:
self.mean_ = np.mean(X, axis=0)
X -= self.mean_
U, S, V = linalg.svd(X, full_matrices=False)
# flip eigenvectors' sign to enforce deterministic output
U, V = svd_flip(U, V)
components_ = V
# Get variance explained by singular values
explained_variance_ = (S ** 2) / (n_samples - 1)
total_var = explained_variance_.sum()
In my working case:
In [175]: est2.explained_variance_
Out[175]: array([6.12529695, 3.20400543, 1.86208619, 0.11453425])
In [176]: est2.explained_variance_.sum()
Out[176]: 11.305922832602981
The np.sum documentation explains that, as of v1.15, it takes an initial parameter (ref. ufunc.reduce), and the default is initial=np._NoValue:
In [178]: np._NoValue
Out[178]: <no value>
In [179]: type(np._NoValue)
Out[179]: numpy._globals._NoValueType
So that explains, in part, the _NoValueType reference in the error.
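To illustrate the keyword itself, a small sketch (nothing sklearn-specific; the values are made up):

import numpy as np

a = np.array([1.0, 2.0, 3.0])
print(np.sum(a))              # 6.0, the default is initial=np._NoValue
print(np.sum(a, initial=10))  # 16.0, the reduction is seeded with 10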
What's your scipy version?
In [180]: import scipy
In [181]: scipy.__version__
Out[181]: '1.2.1'
I wonder if your scipy.linalg.svd is returning an S array that is an 'old' ndarray and doesn't fully implement this initial parameter. I can't explain why that would happen, but I can't otherwise explain why the array sum is having problems with np._NoValue.

Why does the weight matrix of the mxnet.gluon.nn.Dense object have no shape?

I'm trying to follow this nice MXNet tutorial. I create an extremely simple neural network (two input units, no hidden units and one output unit) like this:
from mxnet import gluon
net = gluon.nn.Dense(1, in_units=2)
After that I try to take a look at the shape of the weight matrix (the same way as it is described in the tutorial):
print(net.weight)
As a result I expect to see this:
Parameter dense4_weight (shape=(1, 2), dtype=None)
However, I see the following error message:
Traceback (most recent call last):
File "tmp.py", line 5, in <module>
print(net.weight)
File "/usr/local/lib/python3.6/site-packages/mxnet/gluon/parameter.py", line 120, in __repr__
return s.format(**self.__dict__)
KeyError: 'shape'
Am I doing something wrong?
This is a regression that was introduced here and has since been fixed on the master branch here. Expect it to be fixed in the next MXNet release.
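Until then, a possible interim workaround, assuming the regression only breaks Parameter.__repr__, is to read the attributes directly instead of printing the Parameter object:

from mxnet import gluon

net = gluon.nn.Dense(1, in_units=2)
# bypass the broken __repr__ by printing the attributes themselves
print(net.weight.name, net.weight.shape)  # e.g. dense4_weight (1, 2)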

Numpy dot product MemoryError for small matrices

I wanted to implement singular value decomposition (SVD) as a collaborative filtering method for recommendation systems. I have this sparse_matrix, with rows representing users and columns representing items, and each matrix entry is the user-item rating.
>>> type(sparse_matrix)
scipy.sparse.csr.csr_matrix
First I factorized this matrix using SVD:
from scipy.sparse.linalg import svds
u, s, vt = svds(sparse_matrix.asfptype(), k = 2)
s_diag = np.diag(s)
Then I make the prediction by taking the dot product of u, s_diag, and vt:
>>> tmp = np.dot(u, s_diag)
>>> pred = np.dot(tmp, vt)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
MemoryError
I got a MemoryError. However, I checked the size and memory usage of tmp and vt:
>>> tmp.shape
(686556, 2)
>>> tmp.nbytes
10984896
>>> vt.shape
(2, 85539)
>>> vt.nbytes
1368624
which means that tmp is around 11 MB and vt is 1.4 MB. But at the time of np.dot(tmp, vt), my system has over 50 GB of free memory available, which seems sufficient for this computation. So why am I getting this MemoryError? Is there something wrong with my code? Or is np.dot super expensive in terms of memory usage?
I think you get this error because np.dot is not able to handle sparse matrices.
As a check, please try converting the matrices to full, and check the sparse documentation (https://docs.scipy.org/doc/scipy/reference/sparse.html).
Try:
np.dot(u.toarray(), s_diag.toarray())
or use:
u.dot(s_diag)
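For instance, a hedged sketch of the chained version; note that u, s, vt as returned by svds are dense arrays, and scaling the columns of u by s avoids building s_diag at all. Be aware that the final product is a dense (686556, 85539) array, which at float64 is on the order of hundreds of gigabytes:

from scipy.sparse.linalg import svds

u, s, vt = svds(sparse_matrix.asfptype(), k=2)  # sparse_matrix as in the question
# (u * s) scales each column of u by the matching singular value,
# i.e. it is equivalent to u.dot(np.diag(s))
pred = (u * s).dot(vt)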

Tuple indices must be integers, not tuple (matplotlib)

I'm trying to write a program that integrates a function in different ways (Euler, Runge-Kutta, ...) and also with the built-in function scipy.integrate.odeint.
Everything works and I'm getting the right results, but I also need to create a graph of the results, and that's where everything goes wrong.
For the odeint function I can't draw the graph.
Here is my code and the error; I hope someone will be able to help me.
def odeint(phi, t0tf, Y0, N):
    T6 = numpy.zeros((N+1))
    T6[0] = t0tf[0]
    h = (t0tf[1]-t0tf[0])/N
    for i in range(N):
        T6[i+1] = T6[i]+h
    def f(t, x):
        return phi(x, t)
    Y6 = scipy.integrate.odeint(f, Y0, T6, full_output=True)
    return Y6

Y6 = edo.odeint(phi, t0tf, Y0, N)
T6Y6 = numpy.hstack([Y6])
print("Solutions Scipy :")
print()
print(T6Y6)
print()
mpl.figure("Courbes")
mpl.plot(Y6[0:N,0], Y6[0:N,1], color="yellow", label="Isoda")
mpl.show()
And the error is :
mpl.plot(Y6[0:N,0],Y6[0:N,1],color="yellow",label="Isoda")
TypeError: tuple indices must be integers, not tuple
Thanks in advance. (PS: I'm French, so my sentences might be kinda shaky.)
Y6 seems to be a tuple that you are indexing in an incorrect way. It's difficult to point out exactly what is wrong since you didn't provide the data, but the following example shows how to index elements of a tuple:
y = ((1,2,3,4,5),)
print('This one works: ',y[0][1:])
print(y[1:,0])
The result is this:
This one works: (2, 3, 4, 5)
Traceback (most recent call last):
File "E:\armatita\stackoverflow\test.py", line 9, in <module>
print(y[1:,0])
TypeError: tuple indices must be integers, not tuple
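Applied to the question, a likely fix, assuming the tuple comes from full_output=True: with that flag, scipy.integrate.odeint returns a (y, infodict) pair, so unpack it before plotting:

# unpack the (solution, info) tuple returned with full_output=True
Y6, info = scipy.integrate.odeint(f, Y0, T6, full_output=True)
mpl.plot(Y6[0:N, 0], Y6[0:N, 1], color="yellow", label="Isoda")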