NumPy: Check if field exists

I have a structured numpy array:
>>> import numpy
>>> a = numpy.zeros(1, dtype = [('field0', 'i2'), ('field1', 'f4')])
Then I start to retrieve some values. However, I do not know in advance whether my array contains a certain field. If I try to access a non-existent field, I expectedly get an IndexError:
>>> a[0]['field0']
0
>>> a[0]['field2']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: invalid index
I could of course go with try-except; however, this can potentially mask other errors, as the IndexError does not specify at which level the non-existing index was hit:
>>> try:
...     a[9999]['field2']['subfield3']
... except IndexError:
...     print('Some index does not exist')
...
Some index does not exist
I also tried a membership test, treating the record like a list, but this does not work:
>>> if 'field0' in a[0]:
...     print('yes')
... else:
...     print('no')
...
no
Therefore, question: Is there a way to check if a given field exists in a structured numpy array?

You could check .dtype.names or .dtype.fields:
>>> a.dtype.names
('field0', 'field1')
>>> 'field0' in a.dtype.names
True
>>> a.dtype.fields
mappingproxy({'field0': (dtype('int16'), 0), 'field1': (dtype('float32'), 2)})
>>> 'field0' in a.dtype.fields
True

Related

What is the meaning of `numpy.array(value)`?

numpy.array(value) evaluates to true if value is an int, float, or complex number. The result seems to be a shapeless array (numpy.array(value).shape returns ()).
Reshaping it, like so: numpy.array(value).reshape(1), works fine, and numpy.array(value).reshape(1).squeeze() reverses this, again resulting in a shapeless array.
What is the rationale behind this behavior? Which use cases exist for it?
When you create a zero-dimensional array like np.array(3), you get an object that behaves as an array in 99.99% of situations. You can inspect the basic properties:
>>> x = np.array(3)
>>> x
array(3)
>>> x.ndim
0
>>> x.shape
()
>>> x[None]
array([3])
>>> type(x)
numpy.ndarray
>>> x.dtype
dtype('int32')
So far so good. The logic behind this is simple: you can process any array-like object the same way, regardless of whether it is a number, a list, or an array, just by wrapping it in a call to np.array.
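For instance, a minimal sketch of a function that handles all three inputs identically (total is a hypothetical helper, not part of numpy):
import numpy as np

def total(values):
    arr = np.array(values)  # a number, list, or array all become an ndarray
    return arr.sum()

total(3)             # 3  (a 0-d array sums to its single value)
total([1, 2, 3])     # 6
total(np.arange(4))  # 6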
One thing to keep in mind is that when you index an array, the index tuple must have ndim or fewer elements. So you can't do:
>>> x[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: too many indices for array
Instead, you have to use an empty tuple (since x[] is invalid syntax):
>>> x[()]
3
You can also use the array as a scalar instead:
>>> y = x + 3
>>> y
6
>>> type(y)
numpy.int32
Adding a 0-d array and a number produces a scalar instance of the dtype, not another array. That being said, you can use y from this example in exactly the same way you would x, 99.99% of the time, since numpy scalars support virtually the entire array interface. It does not matter that 3 is a Python int, since np.add will wrap it in an array regardless; y = x + x yields an identical result.
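You can check that the scalar still exposes the usual array attributes (continuing the session above):
>>> y.ndim, y.shape
(0, ())
>>> y + 1
7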
One difference between x and y in these examples is that x is not officially considered to be a scalar:
>>> np.isscalar(x)
False
>>> np.isscalar(y)
True
The indexing issue can potentially throw a monkey wrench into your plans to index any array-like object. You can easily get around it by supplying ndmin=1 as an argument to the constructor, or by using a reshape:
>>> x1 = np.array(3, ndmin=1)
>>> x1
array([3])
>>> x2 = np.array(3).reshape(-1)
>>> x2
array([3])
I generally recommend the former method, as it requires no prior knowledge of the dimensionality of the input.
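If the array already exists, np.atleast_1d achieves the same thing after the fact:
>>> np.atleast_1d(np.array(3))
array([3])
>>> np.atleast_1d(np.array([1, 2]))  # anything with ndim >= 1 passes through unchanged
array([1, 2])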
Further reading:
Why are 0d arrays in Numpy not considered scalar?

Pandas error with multidimensional key using .loc and a boolean

Been running into this same error for two weeks, even though the code worked before. Not sure if I updated pandas as part of another library install; maybe something changed there. Currently on version 0.23.4. The expected outcome is returning just the row with that identifier value.
In [42]: df.head()
Out[42]:
index Identifier ...
0 51384710 ...
1 74838J10 ...
2 80589M10 ...
3 67104410 ...
4 50241310 ...
[5 rows x 14 columns]
In [43]: df.loc[df.Identifier.isin(['51384710'])].head()
Traceback (most recent call last):
File "C:\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-44-a3dbf43451ef>", line 1, in <module>
df.loc[df.Identifier.isin(['51384710'])].head()
File "C:\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1478, in __getitem__
return self._getitem_axis(maybe_callable, axis=axis)
File "C:\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1899, in _getitem_axis
raise ValueError('Cannot index with multidimensional key')
ValueError: Cannot index with multidimensional key
Fixed it. I'd done df.columns = [column_list] where column_list = [...], which caused df to be treated as if it had a MultiIndex, even though there was only one level. Removing the brackets from the df.columns assignment fixed it, as sketched below.
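A minimal sketch of the mistake and the fix (column_list stands in for the real column names, which the question doesn't show):
import pandas as pd

df = pd.DataFrame([[0, '51384710'], [1, '74838J10']])
column_list = ['index', 'Identifier']

df.columns = [column_list]  # wrapped in brackets: columns become a one-level MultiIndex
type(df.columns)            # pandas.core.indexes.multi.MultiIndex

df.columns = column_list    # plain list: a regular Index, and .loc works again
type(df.columns)            # pandas.core.indexes.base.Index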
Try changing
df.loc[df.Identifier.isin(['51384710'])].head()
to
df[df.Identifier.isin(['51384710'])].head()

Tensorflow tf.split() list index out of range?

Here's the code:
a = tf.constant([1,2,3,4])
b = tf.constant([4])
c = tf.split(a, tf.squeeze(b))
Then it raises an error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jeff/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1203, in split
num = size_splits_shape.dims[0]
IndexError: list index out of range
But why?
The docs state,
If num_or_size_splits is a tensor, size_splits, then splits value into len(size_splits) pieces. The shape of the i-th piece has the same size as the value except along dimension axis where the size is size_splits[i].
Note that size_splits needs to be sliceable.
However, when you squeeze b, since it has only one element in your example, the result is a scalar with no dimensions, and a scalar cannot be sliced:
b_ = tf.squeeze(b)
b_[0] # error
Hence your error.
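A few ways to avoid the error, assuming the same a and b as above:
c = tf.split(a, b)       # pass the 1-D tensor as-is: one piece of size 4
d = tf.split(a, [2, 2])  # explicit sizes: two pieces of size 2
e = tf.split(a, 2)       # a Python int: two equal pieces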

Replace NaN values in all levels of a Pandas MultiIndex

After reading in an Excel sheet with a MultiIndex, I am getting np.nan in the index because some of the values are 'N/A' and pd.read_excel converts them to NaN by default. However, I want to keep them as 'N/A' to preserve the MultiIndex. I thought it would be easy to change them back using MultiIndex.fillna, but I get this error:
import numpy as np
import pandas as pd

index = pd.MultiIndex(levels=[[u'foo', u'bar'], [u'one', np.nan]],
                      codes=[[0, 0, 1, 1], [0, 1, 0, 1]],
                      names=[u'first', u'second'])
df = pd.DataFrame(index=index, columns=['A', 'B'])
df
df.index.fillna("N/A")
Output:
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-17-09e14dcdc74f> in <module>
----> 1 df.index.fillna("N/A")
/anaconda3/envs/torch/lib/python3.7/site-packages/pandas/core/indexes/multi.py in fillna(self, value, downcast)
1456 fillna is not implemented for MultiIndex
1457 """
-> 1458 raise NotImplementedError("isna is not defined for MultiIndex")
1459
1460 @Appender(_index_shared_docs["dropna"])
NotImplementedError: isna is not defined for MultiIndex
Update:
Code updated to reflect Pandas 1.0.2. Prior to version 0.24.0 the codes attribute of pd.MultiIndex was called labels. Also, the traceback details changed from isnull is not defined to isna is not defined as above.
The accepted solution did not work for me either. It still left NA values in the index, even though inspecting df.index.levels individually did not show any NA values.
Jorge's solution pointed me in the right direction but also wasn't quite right for my case. Here is my approach, including handling of the single Index case as discussed in the comments of the accepted answer.
if isinstance(df.index, pd.MultiIndex):
    df.index = pd.MultiIndex.from_frame(
        df.index.to_frame().fillna(my_fillna_value)
    )
else:
    df.index = df.index.fillna(my_fillna_value)
Use set_levels
df.index.set_levels([l.fillna('N/A') for l in df.index.levels], inplace=True)
df
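With the example index from the question, the levels then read as follows (the display varies slightly across pandas versions):
>>> df.index
MultiIndex([('foo', 'one'),
            ('foo', 'N/A'),
            ('bar', 'one'),
            ('bar', 'N/A')],
           names=['first', 'second'])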
The current solution didn't work for me with multi-level columns. What I did, and what worked, was the following:
df.columns = pd.MultiIndex.from_frame(df.columns.to_frame().fillna(''))

numpy: change elements matching a condition

For two numpy arrays a and b,
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
I want to replace the elements of a where a < 2.5 with the corresponding values from b. So I tried
a[a < 2.5] = b
hoping a would become [4, 5, 3], but this raises an error:
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
a[a<2.5]=b
ValueError: NumPy boolean array indexing assignment cannot assign 3 input values to the 2 output values where the mask is true
What is the problem?
The issue you're seeing is a result of how masks work on numpy arrays.
When you write
a[a < 2.5]
you get back the elements of a which match the mask a < 2.5. In this case, that will be the first two elements only.
Attempting to do
a[a < 2.5] = b
is an error because b has three elements, but a[a < 2.5] has only two.
An easy way to achieve the result you're after in numpy is to use np.where.
The syntax of this is np.where(condition, valuesWhereTrue, valuesWhereFalse).
In your case, you could write
newArray = np.where(a < 2.5, b, a)
Alternatively, if you don't want the overhead of a new array, you could perform the replacement in-place (as you're trying to do in the question). To achieve this, you can write:
idxs = a < 2.5
a[idxs] = b[idxs]
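Running the in-place version on the arrays from the question, where the mask is [True, True, False]:
>>> import numpy as np
>>> a = np.array([1, 2, 3])
>>> b = np.array([4, 5, 6])
>>> idxs = a < 2.5
>>> a[idxs] = b[idxs]
>>> a
array([4, 5, 3])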