How to get values from an array with a specific rule in numpy

For example, I have an array with the values [1,2,4,3,6,7,33,2]. I want to get all the values which are bigger than 6. As far as I know, numpy.take can only get values by indices.
Which function should I use?

You can index into the array with a boolean array index:
>>> a = np.array([1,2,4,3,6,7,33,2])
>>> a > 6
array([False, False, False, False, False, True, True, False], dtype=bool)
>>> a[a > 6]
array([ 7, 33])
If you want the indices where this occurs, you can use np.where:
>>> np.where(a>6)
(array([5, 6]),)
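And if you do want to stick with np.take, the indices returned by np.where can be fed straight into it; a minimal sketch with the same array:
>>> np.take(a, np.where(a > 6)[0])
array([ 7, 33])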

Related

Fill nan in numpy array

Is there a straightforward way of filling NaN values in a numpy array when the left and right non-NaN values match?
For example, if I have an array containing False, False, NaN, NaN, False, I want the NaN values to also become False. If the left and right values do not match, I want to keep the NaN.
Your first task is to reliably identify the np.nan elements. Because it's a unique float value, testing isn't trivial. np.isnan is the best numpy tool.
To mix boolean and float (np.nan) you have to use object dtype:
In [68]: arr = np.array([False, False, np.nan, np.nan, False],object)
In [69]: arr
Out[69]: array([False, False, nan, nan, False], dtype=object)
Converting to float changes the False to 0 (and True to 1):
In [70]: arr.astype(float)
Out[70]: array([ 0., 0., nan, nan, 0.])
np.isnan is a good test for nan, but it only works on floats:
In [71]: np.isnan(arr)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-71-25d2f1dae78d> in <module>
----> 1 np.isnan(arr)
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
In [72]: np.isnan(arr.astype(float))
Out[72]: array([False, False, True, True, False])
You could test the object array (or a list) with a helper function and a list comprehension:
In [73]: def foo(x):
    ...:     try:
    ...:         return np.isnan(x)
    ...:     except TypeError:
    ...:         return x
    ...:
In [74]: [foo(x) for x in arr]
Out[74]: [False, False, True, True, False]
Having reliably identified the nan values, you can then apply the before/after logic. I'm not sure whether it's easier with lists or arrays (your logic isn't entirely clear).
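As a rough illustration of that before/after logic, here is a minimal sketch that fills each run of NaNs only when its nearest non-NaN neighbours on both sides are equal. It assumes the data stays in an object array (or list); the helper name fill_matching_nans is made up for this example:
import numpy as np

def fill_matching_nans(arr):
    # Work on a plain list; the object array mixes bools and the float nan.
    vals = list(arr)
    is_nan = [isinstance(x, float) and np.isnan(x) for x in vals]
    i = 0
    while i < len(vals):
        if is_nan[i]:
            # Find the end of this run of NaNs.
            j = i
            while j < len(vals) and is_nan[j]:
                j += 1
            left = vals[i - 1] if i > 0 else None
            right = vals[j] if j < len(vals) else None
            # Fill only when both neighbours exist and agree.
            if left is not None and right is not None and left == right:
                for k in range(i, j):
                    vals[k] = left
            i = j
        else:
            i += 1
    return np.array(vals, dtype=object)

fill_matching_nans(np.array([False, False, np.nan, np.nan, False], dtype=object))
# -> array([False, False, False, False, False], dtype=object)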

Numpy broadcasting inequality operator

Numpy broadcasting question. I have two arrays similar to these:
>my_array = np.array([[3,1,2,0] , [4,5,2,1]])
>my_array
array([[3, 1, 2, 0],
       [4, 5, 2, 1]])
>second_array = np.array([2,5])
>second_array
array([2, 5])
What I want to do is transpose second_array and test, by column, whether my_array is >= second_array. So the result would be like this:
>final_array = np.array([ [ (3 >= 2), (1>= 2), (2>=2), (0>=2)] , [(4 >=5),(5>=5),(2>=5),(1>=5)]])
>final_array
array([[ True, False,  True, False],
       [False,  True, False, False]], dtype=bool)
I'm pretty new to matrix operations in Numpy (been doing them in R for a long time) so thanks for helping with such an introductory question.
You just need to reshape second_array so that it has appropriate dimensions:
my_array >= second_array.reshape(2,1) # or (-1,1) if height is unknown
Or equivalently:
my_array >= second_array[:,np.newaxis]
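Putting it together with the arrays from the question, a quick check (nothing here beyond the two one-liners above):
import numpy as np

my_array = np.array([[3, 1, 2, 0], [4, 5, 2, 1]])
second_array = np.array([2, 5])

# A (2, 1) column broadcast against the (2, 4) matrix compares row by row.
result = my_array >= second_array[:, np.newaxis]
# array([[ True, False,  True, False],
#        [False,  True, False, False]])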

Row wise contain in numpy [duplicate]

This question already has an answer here: Numpy element-wise in operation (1 answer). Closed 5 years ago.
I was wondering: what is the best way to check row-wise containment in numpy?
Suppose we have a vector V = [1, 2, 3, 4] and a matrix M = [[2, 3, 4], [3, 5, 6], [4, 1, 3], [5, 4, 2]] (the number of rows in M is equal to the length of V). After performing the row-wise containment check, I should get (False, False, True, True), because 1 is not in [2, 3, 4], 2 is not in [3, 5, 6], 3 is in [4, 1, 3], and 4 is in [5, 4, 2].
What would be the best way to do this operation in python numpy?
Actually, I do not want to use a for loop. That obviously could work, but it is not the best way to do it. I myself came up with the idea of doing a subtraction and then counting the number of zeros in the result, which is much faster than using for loops. However, I wanted to know if there is a better way to do it.
What you're looking for is the in operator, e.g. 1 in [1, 2, 3] returns True.
So given your values of v and m as numpy arrays as follows:
import numpy as np
v = np.array([1,2,3,4])
m = np.array([np.array([2,3,4]), np.array([3,5,6]), np.array([4,1,3]), np.array([5,4,2])])
# Checking row wise contain
result = [(v[i] in m[i]) for i in range(len(v))]
The result is:
>>> [(v[i] in m[i]) for i in range(len(v))]
[False, False, True, True]
Another solution, as Divakar pointed out, would be to use
>>> (m==v[:,None]).any(1)
array([False, False, True, True], dtype=bool)
However, doing some rough timing checks:
>>> start_time=time.time(); (m==v[:,None]).any(1); print(time.time()-start_time)
array([False, False, True, True], dtype=bool)
0.000586032867432
>>> start_time=time.time(); [(v[i] in m[i]) for i in range(len(v))]; print(time.time()-start_time)
[False, False, True, True]
7.20024108887e-05
The initial solution seems to be faster.
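For what it's worth, a single time.time() delta is noisy; here is a sketch of a steadier comparison with timeit. Results will vary with machine and array size, and for tiny arrays like these the per-call overhead of the vectorized version tends to dominate, which matches the rough numbers above:
import timeit
import numpy as np

v = np.array([1, 2, 3, 4])
m = np.array([[2, 3, 4], [3, 5, 6], [4, 1, 3], [5, 4, 2]])

# Repeat each approach many times so per-call noise averages out.
print(timeit.timeit(lambda: [(v[i] in m[i]) for i in range(len(v))], number=10000))
print(timeit.timeit(lambda: (m == v[:, None]).any(1), number=10000))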

Numpy creating logical array in the presence of NaNs

I have an array x, from which I would like to extract a logical mask. x contains nan values, and the mask operation raises a warning, which is what I am trying to avoid.
Here is my code:
import numpy as np
x = np.array([[0, 1], [2.0, np.nan]])
mask = np.isfinite(x) & (x > 0)
The resulting mask is correct (array([[False, True], [ True, False]], dtype=bool)), but a warning is raised:
__main__:1: RuntimeWarning: invalid value encountered in greater
How can I construct the mask in a way that avoids comparing against NaNs? I am not trying to suppress the warning (which I know how to do).
We could do it in two steps: create the mask of finite elements, then use that mask both to index into itself and to select the finite elements of x for the > 0 test, writing the test results back into those positions of the mask. So we would have an implementation like so:
In [35]: x
Out[35]:
array([[ 0., 1.],
       [ 2., nan]])
In [36]: mask = np.isfinite(x)
In [37]: mask[mask] = x[mask]>0
In [38]: mask
Out[38]:
array([[False,  True],
       [ True, False]], dtype=bool)
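If this comes up repeatedly, the two-step trick can be wrapped in a small helper; a minimal sketch (the name finite_and_positive is just for illustration):
import numpy as np

def finite_and_positive(x):
    # Only the finite entries are ever compared, so no warning is raised.
    mask = np.isfinite(x)
    mask[mask] = x[mask] > 0
    return mask

finite_and_positive(np.array([[0, 1], [2.0, np.nan]]))
# -> array([[False,  True],
#           [ True, False]])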
Looks like masked arrays work for this case:
In [214]: x = np.array([[0, 1], [2.0, np.nan]])
In [215]: xm = np.ma.masked_invalid(x)
In [216]: xm
Out[216]:
masked_array(data =
[[0.0 1.0]
[2.0 --]],
mask =
[[False False]
[False True]],
fill_value = 1e+20)
In [217]: xm>0
Out[217]:
masked_array(data =
[[False True]
[True --]],
mask =
[[False False]
[False True]],
fill_value = 1e+20)
In [218]: _.data
Out[218]:
array([[False,  True],
       [ True, False]], dtype=bool)
But other than propagating the mask, I don't know how it handles element-by-element operations like this. The usual fill and compressed steps don't seem relevant here.
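One follow-up worth noting: rather than reaching for .data (which just happens to hold False in the masked slot here), the masked comparison can be collapsed to a plain boolean array with .filled, which is guaranteed to put the value you ask for in the masked positions:
import numpy as np

x = np.array([[0, 1], [2.0, np.nan]])
xm = np.ma.masked_invalid(x)

# Replace the masked (NaN) positions of the comparison with False.
mask = (xm > 0).filled(False)
# array([[False,  True],
#        [ True, False]])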

Creating matrix out of an array of categories in numpy

I have a length-n numpy array, y, of integers in the range [0...k-1]. From this, I would like to create an n-by-k numpy matrix M, where M[i,j] is 1 if y[i]==j, and 0 else.
What is the best way to do this in numpy?
Use broadcasting:
a = np.array([1, 2, 3, 1, 2, 2, 3, 0])
m = a[:, None] == np.arange(max(a)+1)
the result is:
array([[False,  True, False, False],
       [False, False,  True, False],
       [False, False, False,  True],
       [False,  True, False, False],
       [False, False,  True, False],
       [False, False,  True, False],
       [False, False, False,  True],
       [ True, False, False, False]], dtype=bool)
Or create a zero array and fill it in; I think this is faster:
m2 = np.zeros((len(a), a.max() + 1), dtype=bool)
m2[np.arange(len(a)), a] = True
print(m2)
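Another common numpy idiom for the same thing (a quick sketch, not timed here) is to index the rows of an identity matrix with the category labels:
import numpy as np

a = np.array([1, 2, 3, 1, 2, 2, 3, 0])

# Row j of the identity matrix is the one-hot vector for category j.
m3 = np.eye(a.max() + 1, dtype=bool)[a]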
This is maybe a bit out there, but it's a pretty extensible solution and at least worth noting. If you've already got scikit-learn, the DictVectorizer class is used to transform categorical features in a dataset into column-wise binary representations, just like you described:
import numpy as np
from sklearn.feature_extraction import DictVectorizer
# starting with your numpy array
y = np.array([1, 2, 3, 1, 2, 2, 3, 0])
# transform the array to a list of dicts, with original
# int values now as strings, and a throw-away key ''
y_dict = [{'':str(x)} for x in y.tolist()]
# create the vectorizer and transform the list of dicts
vec = DictVectorizer(sparse=False, dtype=int)
M = vec.fit_transform(y_dict)
print(M)
[[0 1 0 0]
 [0 0 1 0]
 [0 0 0 1]
 [0 1 0 0]
 [0 0 1 0]
 [0 0 1 0]
 [0 0 0 1]
 [1 0 0 0]]
Again, probably overkill but it's kind of cute and I thought I'd throw it out there.