Python: AttributeError: "'numpy.float64' object has no attribute 'tanh'" - numpy
I have seen a couple of questions with similar titles, but I'm afraid none of them satisfactorily answers my question, which is: how do I take the tan inverse, or say exp, of a numpy ndarray? For instance, a piece of my code looks similar to this:
import numpy as np
from numpy import ndarray,zeros,array,dot,exp
from functools import reduce   # reduce is not a builtin in Python 3
import itertools
def zetta_G(x,spr_g,theta_g,c_g):
    #this function computes estimated g:
    #c_g is basically a matrix of dim equal to g, whose elements contain the list of centers that describe the fuzzy system for each element of g:
    m,n=c_g.shape[0],c_g.shape[1]
    #creating an empty matrix of dim mxn to hold regressors:
    zetta_g=zeros((m,n),dtype=ndarray)
    #creating an empty matrix of dim mxn to hold estimated g:
    z_g=np.zeros((m,n),dtype=ndarray)
    #for filling rows
    for k in range(m):
        #for filling columns
        for p in range(n):
            #container to hold memberships - length equal to the number of inputs (e1,e2,e3 etc)
            Mu=[[] for i in range(len(x))]
            for i in range(len(x)):
                #filling that with a number of zeros equal to the length of the center list
                Mu[i]=np.zeros(len(c_g[k][p]))
            #creating an empty list for holding rules
            M=[]
            #piece of code for creating rules - all possible combinations
            for i in range(len(x)):
                for j in range(len(c_g[k][p])):
                    Mu[i][j]=exp(-.5*((x[i]-c_g[k][p][j])/spr_g[k][p])**2)
            b=list(itertools.product(*Mu))
            for i in range(len(b)):
                M.append(reduce(lambda x,y:x*y,b[i]))
            M=np.array(M)
            S=np.sum(M)
            #import pdb;pdb.set_trace()
            zetta_g[k][p]=M/S
            z_g[k][p]=dot(M/S,theta_g[k][p])
    return zetta_g,z_g
if __name__=='__main__':
    x=[1.2,.2,.4]
    cg11,cg12,cg13,cg21,cg22,cg23,cg31,cg32,cg33=[-10,-8,-6,-4,-2,0,2,4,6,8,10],[-10,-8,-6,-4,-2,0,2,4,6,8,10],[-10,-8,-6,-4,-2,0,2,4,6,8,10],[-10,-8,-6,-4,-2,0,2,4,6,8,10],[-10,-8,-6,-4,-2,0,2,4,6,8,10],[-12,-9,-6,-3,0,3,6,9,12],[-6.5,-4.5,-2.5,0,2.5,4.5,6.5],[-5,-4,-3,-2,-1,0,1,2,3,4,5],[-3.5,-2.5,-1.5,0,1.5,2.5,3.5]
    C,spr_f=array([[-10,-8,-6,-4,-2,0,2,4,6,8,10],[-10,-8,-6,-4,-2,0,2,4,6,8,10],[-10,-8,-6,-4,-2,0,2,4,6,8,10]]),[2.2,2,2.1]
    c_g=array([[cg11,cg12,cg13],[cg21,cg22,cg23],[cg31,cg32,cg33]])
    spr_g=array([[2,2.1,2],[2.1,2.2,3],[2.5,1,1.5]])
    theta_g=np.zeros((c_g.shape[0],c_g.shape[1]),dtype=ndarray)
    #import pdb;pdb.set_trace()
    N=0
    for i in range(c_g.shape[0]):
        for j in range(c_g.shape[1]):
            length=len(c_g[i][j])**len(x)
            theta_g[i][j]=np.random.sample(length)
            N=N+(len(c_g[i][j]))**len(x)
    zetta_g,z_g=zetta_G(x,spr_g,theta_g,c_g)
    #zetta is a function that accepts the following args -- x: a list of certain dim; spr_g: a matrix of dimension similar to theta_g and c_g. theta_g and c_g are numpy matrices with lists as individual elements
    print(zetta_g)
    print(z_g)
    inv=np.tanh(z_g)
    print(inv)
In [89]: a=np.array([[1],[3],[2]],dtype=np.ndarray)
In [90]: a
Out[90]:
array([[1],
       [3],
       [2]], dtype=object)
Note that the dtype is object, not ndarray. If the dtype isn't one of the recognized numeric or string types, it is object, a generic pointer, just like the elements of a list.
In [91]: np.tanh(a)
AttributeError: 'int' object has no attribute 'tanh'
np.tanh is trying to delegate the task to the elements of the array. Math on object-dtype arrays is commonly performed by list-like iteration over the elements; it does not use the fast compiled numeric numpy routines.
If a is an ordinary numeric array:
In [95]: np.tanh(np.array([[1],[3],[2]]))
Out[95]:
array([[0.76159416],
       [0.99505475],
       [0.96402758]])
With object dtype arrays, your ability to do numeric calculations is limited. Some things work, others don't. It's hit-or-miss.
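For instance, here is a small illustration (mine, not part of the original answer) of what works and what doesn't: operator math falls back to the elements' own methods, while a ufunc like np.tanh looks for a tanh method on each element and gives up when the element doesn't have one.

import numpy as np

a = np.array([[1], [3], [2]], dtype=object)

print(a * 2)                      # works: delegates to each int's __mul__

try:
    np.tanh(a)                    # looks for a .tanh() method on each element
except AttributeError as e:
    print(e)                      # 'int' object has no attribute 'tanh'

print(np.tanh(a.astype(float)))   # fine once the dtype is a real numeric type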
Here's a first stab at cleaning up your code; it's not tested.
def zetta_G(x,spr_g,theta_g,c_g):
    m,n=c_g.shape[0],c_g.shape[1]
    #creating an empty matrix of dim mxn to hold regressors:
    zetta_g=zeros((m,n),dtype=object)
    #creating an empty matrix of dim mxn to hold estimated g:
    z_g=np.zeros((m,n),dtype=object)
    #for filling rows
    for k in range(m):
        #for filling columns
        for p in range(n):
            #container to hold memberships - length equal to the number of inputs (e1,e2,e3 etc)
            Mu = np.zeros((len(x), len(c_g[k,p])))
            #creating an empty list for holding rules
            for i in range(len(x)):
                Mu[i,:]=exp(-.5*((x[i]-c_g[k,p,:])/spr_g[k,p])**2)
            # probably can calc Mu without any loop
            M = []
            b=list(itertools.product(*Mu))
            for i in range(len(b)):
                M.append(reduce(lambda x,y:x*y,b[i]))
            M=np.array(M)
            S=np.sum(M)
            zetta_g[k,p]=M/S
            z_g[k,p]=dot(M/S,theta_g[k,p])
    return zetta_g,z_g
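A side note from me (a sketch, not something the answer above tested): the itertools.product / reduce loop that builds M is just a chained outer product of the rows of Mu, so it can be written without the explicit Python loop:

import itertools
from functools import reduce
import numpy as np

# toy memberships: 3 inputs, 4 centres each (values are arbitrary)
Mu = np.random.rand(3, 4)

# M as the answer builds it: product over every combination of one value per row
M_loop = np.array([reduce(lambda a, b: a * b, combo)
                   for combo in itertools.product(*Mu)])

# the same thing as a chained outer product, raveled in the same (C) order
M_outer = reduce(np.multiply.outer, Mu).ravel()

print(np.allclose(M_loop, M_outer))   # True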
Running your code and adding some .shape displays, I see that
z_g is (3,3) and contains just single numbers, so it can be initialized as a plain 2d float array:
z_g=np.zeros((m,n))
theta_g is (3,3), but with variable length array elements
print([i.shape for i in theta_g.flat])
[(1331,), (1331,), (1331,), (1331,), (1331,), (729,), (343,), (1331,), (343,)]
zetta_g matches those shapes.
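Those lengths are just the number of rule combinations, len(c_g[i][j]) ** len(x), which is exactly how theta_g is sized in the question; a quick check of the arithmetic:

# centre-list lengths in c_g, in row-major order, with len(x) == 3
centre_lengths = [11, 11, 11, 11, 11, 9, 7, 11, 7]
print([n ** 3 for n in centre_lengths])
# [1331, 1331, 1331, 1331, 1331, 729, 343, 1331, 343]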
If I change:
x=np.array([1.2,.2,.4])
I can calculate Mu without a loop with:
Mu = exp(-.5*((x[:,None]-np.array(c_g[k,p])[None,:])/spr_g[k,p])**2)
c_g is a (3,3) array with variable-length lists; I can vectorize the
(x[i]-c_g[k,p][j])
expression with:
x[:,None]-np.array(c_g[k,p])[None,:]
Not a big time saver here, since x has only 3 elements and the c_g lists are only 7-11 elements long, but it is cleaner.
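As a quick sanity check (my own sketch, using the k = p = 0 values from the question), the broadcast form matches the double loop:

import numpy as np

x = np.array([1.2, .2, .4])
centers = np.array([-10, -8, -6, -4, -2, 0, 2, 4, 6, 8, 10], dtype=float)  # c_g[0][0]
spread = 2.0                                                               # spr_g[0][0]

# loop version, as in the question
Mu_loop = np.zeros((len(x), len(centers)))
for i in range(len(x)):
    for j in range(len(centers)):
        Mu_loop[i, j] = np.exp(-.5 * ((x[i] - centers[j]) / spread) ** 2)

# broadcast version, as suggested above
Mu_bcast = np.exp(-.5 * ((x[:, None] - centers[None, :]) / spread) ** 2)

print(np.allclose(Mu_loop, Mu_bcast))   # True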
In this running code I don't see a tanh call, so I don't know what kind of arrays you are applying it to.
You set the dtype of the array's elements to dtype=np.ndarray. Replace it with, say, dtype=np.float64 or any other numeric type.
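A minimal sketch of that suggestion (mine, not from this answer): a numeric dtype works wherever the entries really are single numbers, as they are in z_g, but ragged entries such as the theta_g rows of length 1331/729/343 can only live in an object array.

import numpy as np

z_g = np.zeros((3, 3), dtype=np.float64)   # numeric dtype instead of np.ndarray
print(np.tanh(z_g))                        # compiled ufunc, no AttributeError

# variable-length entries cannot be packed into a float array; they
# force dtype=object, which brings the original problem back:
ragged = np.empty(3, dtype=object)
ragged[0], ragged[1], ragged[2] = np.zeros(1331), np.zeros(729), np.zeros(343)
print(ragged.dtype)                        # object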