Can a numpy matrix be converted to a database table? - sql

Can a numpy matrix created within a plpython function be converted into a database table?

You can use the dbTable library to store a numpy array as a table. Please refer to the complete documentation - https://pypi.org/project/dbTable/
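If you'd rather not depend on an extra library, PL/Python itself can write the matrix into an existing table through the plpy module. This is only a minimal sketch, assuming PostgreSQL with plpython3u installed and a pre-created table matrix_data(row_id int, col_id int, value float8) -- the function name, table name, and column layout are illustrative, not anything from the thread above:

CREATE OR REPLACE FUNCTION store_matrix() RETURNS void AS $$
import numpy as np
m = np.random.randn(3, 3)  # matrix created inside the PL/Python function
# Prepare the insert once, then run it per element.
plan = plpy.prepare(
    "INSERT INTO matrix_data (row_id, col_id, value) VALUES ($1, $2, $3)",
    ["int", "int", "float8"])
for i, row in enumerate(m.tolist()):  # tolist() yields plain Python floats
    for j, value in enumerate(row):
        plpy.execute(plan, [i, j, value])
$$ LANGUAGE plpython3u;

Calling SELECT store_matrix(); then leaves one row per matrix element in matrix_data.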

Related

Only size 1 arrays can be converted to python scalars

I created a 3 dimensional object using numpy.random module such as
import numpy as np
b = np.random.randn(4,4,3)
Why can't we cast b to type float? Calling float(b) raises:
TypeError: only size-1 arrays can be converted to Python scalars
You can't call float(b) because b isn't a single number; it's a multidimensional array. If you're trying to convert every element to a Python float, that's rarely necessary, but if you really want to do it, you can call b.tolist(), which returns a nested Python list of floats. Note that a numpy array of native Python floats isn't really a thing: numpy stores its own numeric dtypes, which is what makes it efficient in the first place.
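For illustration, a short sketch of the difference (the error text is the one quoted in the question title):

import numpy as np

b = np.random.randn(4, 4, 3)
# float(b)  # raises TypeError: only size-1 arrays can be converted to Python scalars
lst = b.tolist()           # nested 4x4x3 list of Python floats
print(type(lst[0][0][0]))  # <class 'float'>
print(float(b[0, 0, 0]))   # a single element converts fine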

Image in the form of Numpy array in a cell in Pyspark data frame

I would like to store an image, represented as a numpy array, in a PySpark data frame.
When I try, I get an error saying the data type is not supported.
Looking at the data types supported in PySpark, I don't see numpy, so I'm wondering if there is a way to store the array.
I also tried storing the numpy array as a string, but for some reason the string gets truncated and contains ...
Any suggestions or solutions?
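One workaround (not from the thread above, just a sketch of a common approach) is to serialize the array to raw bytes and keep its shape and dtype alongside it, since PySpark has no native numpy column type. The column names here are chosen arbitrarily:

import numpy as np
from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, StringType,
                               BinaryType, ArrayType, IntegerType)

spark = SparkSession.builder.getOrCreate()
img = np.random.randint(0, 255, size=(4, 4, 3)).astype(np.uint8)

# Store the raw bytes plus enough metadata to rebuild the array later.
schema = StructType([
    StructField("name", StringType()),
    StructField("shape", ArrayType(IntegerType())),
    StructField("dtype", StringType()),
    StructField("data", BinaryType()),
])
df = spark.createDataFrame(
    [("img0", list(img.shape), str(img.dtype), bytearray(img.tobytes()))],
    schema=schema)

# Rebuild the numpy array on the way out.
row = df.first()
restored = np.frombuffer(bytes(row["data"]), dtype=row["dtype"]).reshape(row["shape"])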

check if a numpy array is a numpy masked array

As output of a script, I have a numpy masked array and a standard numpy array. How do I easily check, while running the script, whether an array is a masked one (has data and mask attributes) or not?
You can check explicitly whether it is a masked array with isinstance(arr, np.ma.MaskedArray), or you can check for the attribute with hasattr(arr, 'mask'). I'd generally recommend the first approach.
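A quick check of both variants:

import numpy as np

plain = np.array([1.0, 2.0, 3.0])
masked = np.ma.masked_array(plain, mask=[False, True, False])

print(isinstance(plain, np.ma.MaskedArray))   # False
print(isinstance(masked, np.ma.MaskedArray))  # True
print(hasattr(masked, 'mask'))                # True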

How to create a pandas object from array with unknown rank?

Given any Numpy array, I want to create the best matching pandas data structure out of pd.Series, pd.DataFrame, etc. Is there a built-in function for this? I guess so but I couldn't find anything in the documentation.
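I'm not aware of a single built-in that does this for every rank, but a small dispatch on ndim is easy to sketch (to_pandas is a hypothetical helper name, not a pandas function):

import numpy as np
import pandas as pd

def to_pandas(arr):
    # Pick the pandas container that matches the array's rank.
    arr = np.asarray(arr)
    if arr.ndim == 1:
        return pd.Series(arr)
    if arr.ndim == 2:
        return pd.DataFrame(arr)
    raise ValueError("no pandas structure for %d-dimensional data" % arr.ndim)

print(type(to_pandas(np.zeros(3))))       # pandas Series
print(type(to_pandas(np.zeros((3, 2)))))  # pandas DataFrame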

Save a numpy sparse matrix into file

I want to save the result of TfidfVectorizer from sklearn.feature_extraction.text into a text file for future use. As I found, it is a sparse matrix of type ''. However, when I try to save it using the following code
np.savetxt('Feature_TfIdf.txt', X_Tfidf, fmt='%2.6f')
I get an error like this
IndexError: tuple index out of range
Use joblib.dump (or sklearn.externals.joblib.dump) for this; NumPy's save routines don't understand SciPy sparse matrices.
Simple example:
joblib.dump(tfidf, 'TfIdf.pkl')
I managed to solve the problem by converting the sparse matrix to a full matrix and then saving the result. This approach, however, is not useful for large arrays, so it is better to save the matrix in .pkl format.
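As an alternative to pickling, SciPy also ships scipy.sparse.save_npz / load_npz, which keep the matrix in its compressed sparse form. A small self-contained sketch (the tiny csr_matrix here just stands in for the matrix returned by TfidfVectorizer().fit_transform(docs)):

from scipy import sparse

X_Tfidf = sparse.csr_matrix([[0.0, 1.5], [2.0, 0.0]])  # placeholder sparse matrix
sparse.save_npz('Feature_TfIdf.npz', X_Tfidf)           # stores the sparse structure as-is
X_loaded = sparse.load_npz('Feature_TfIdf.npz')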