I want to have a function that can operate on either a row or a column of a 2D ndarray. Assume the array has C order. The function changes values in the 2D data.
Inside the function I want to have identical index syntax whether it is called with a row or a column. A row slice is [n,:] and a column slice is [:,n], so they have different shapes; inside the function this requires different indexing expressions.
Is there a way to do this that does not require moving or allocating memory? I am under the impression that using reshape will force a copy to make the data contiguous. Is there a way to use nditer in the function?
Do you mean like this:
In [74]: def foo(arr, n):
    ...:     arr += n
    ...:
In [75]: arr = np.ones((2,3), int)
In [76]: foo(arr[0,:], 1)
In [77]: arr
Out[77]:
array([[2, 2, 2],
       [1, 1, 1]])
In [78]: foo(arr[:,1], [100,200])
In [79]: arr
Out[79]:
array([[  2, 102,   2],
       [  1, 201,   1]])
In the first case I'm adding 1 to one row of the array, i.e. a row slice. In the second case I'm adding an array (list) to a column. In that case n has to have the right length.
Usually we don't worry about whether the values are C contiguous. Striding takes care of access either way.
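To see why, here is a minimal sketch (variable names are mine): both slices are 1-D views into the same buffer, differing only in their strides, so the in-place += writes straight back into the parent array.

import numpy as np

arr = np.ones((2, 3), int)
row = arr[0, :]   # contiguous view along the last axis
col = arr[:, 1]   # strided view - each step jumps a whole row

# both are views of arr; no data was copied
print(row.base is arr, col.base is arr)   # True True
print(row.strides, col.strides)           # e.g. (8,) (24,) on 64-bit ints

col += 100        # the in-place update lands in arr itself
print(arr)
# [[  1 101   1]
#  [  1 101   1]]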
Related
I would like to sort an array based on one column and then, for all rows whose values in that column are equal, sort them based on a second column. For example, suppose that I have the array:
a = np.array([[0,1,1],[0,3,1],[1,7,2],[0,2,1]])
I can sort it by column 0 using:
sorted_array = a[np.argsort(a[:, 0])]
however, I want rows that have the same value in column [0] to be sorted by column [1], so my result would look like:
desired_result = np.array([[0,1,1],[0,2,1],[0,3,1],[1,7,2]])
What is the best way to achieve that? Thanks.
You can sort the rows as tuples, then convert back to a numpy array:
out = np.array(sorted(map(tuple,a)))
Output:
array([[0, 1, 1],
       [0, 2, 1],
       [0, 3, 1],
       [1, 7, 2]])
First sort the array on the secondary column, then sort on the primary column, making sure to use a stable sorting method so the secondary order survives:
sorted_array = a[np.argsort(a[:, 1])]
sorted_array = sorted_array[np.argsort(sorted_array[:, 0], kind='stable')]
Or you can use lexsort, where the last key in the sequence is the primary sort key:
sorted_array = a[np.lexsort((a[:,1], a[:, 0])), :]
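A quick check of the lexsort route on the example data (a minimal sketch):

import numpy as np

a = np.array([[0, 1, 1], [0, 3, 1], [1, 7, 2], [0, 2, 1]])

# the last key (a[:, 0]) is the primary sort key; a[:, 1] breaks ties
sorted_array = a[np.lexsort((a[:, 1], a[:, 0])), :]
print(sorted_array)
# [[0 1 1]
#  [0 2 1]
#  [0 3 1]
#  [1 7 2]]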
I have an array:
>>> arr1 = np.array([[1,2,3], [4,5,6], [7,8,9]])
>>> arr1
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])
I want to retrieve a list (or 1d-array) of elements of this array by giving a list of their indices, like so:
indices = [[0,0], [0,2], [2,0]]
print(arr1[indices])
# result
[1,6,7]
But this does not work. I have been looking for a solution for a while, but I have only found ways to select per row and/or per column (not by specific indices).
Does anyone have an idea?
Cheers,
Aymeric
First make indices an array instead of a nested list:
indices = np.array([[0,0], [0,2], [2,0]])
Then, index the first dimension of arr1 using the first values of indices, likewise the second:
arr1[indices[:,0], indices[:,1]]
It gives array([1, 3, 7]) (which is correct, your [1, 6, 7] example output is probably a typo).
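Equivalently, you can transpose the index array and unpack it into per-axis index arrays (a small sketch of the same idea):

# indices.T has shape (2, 3): row indices first, then column indices
arr1[tuple(indices.T)]   # array([1, 3, 7])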
I have a calculated matrix
from numpy import matrix
vec = matrix([[ 4.79263398e-01+0.j        , -2.94883960e-14+0.34362808j,
                5.91036823e-01+0.j        , -2.06730654e-14+0.41959935j,
               -3.20298698e-01+0.08635809j, -5.97136351e-02+0.22325523j],
              [ 9.45394208e-14+0.34385164j,  4.78941900e-01+0.j        ,
                1.07732017e-13+0.41891016j,  5.91969770e-01+0.j        ,
               -6.06877417e-02-0.2250884j ,  3.17803028e-01+0.08500215j],
              [ 4.63795513e-01-0.00827114j, -1.15263719e-02+0.33287485j,
               -2.78282097e-01-0.20137267j, -2.81970922e-01-0.1980647j ,
                9.26109539e-02-0.38428445j,  5.12483437e-01+0.j        ],
              [-1.15282610e-02+0.33275927j,  4.63961516e-01-0.00826978j,
               -2.84077490e-01-0.19723838j, -2.79429184e-01-0.19984041j,
               -4.42104809e-01+0.25708681j, -2.71973825e-01+0.28735795j],
              [ 4.63795513e-01+0.00827114j,  1.15263719e-02+0.33287485j,
               -2.78282097e-01+0.20137267j,  2.81970922e-01-0.1980647j ,
                2.73235786e-01+0.28564581j, -4.44053596e-01-0.25584307j],
              [ 1.15282610e-02+0.33275927j,  4.63961516e-01+0.00826978j,
                2.84077490e-01-0.19723838j, -2.79429184e-01+0.19984041j,
                5.11419878e-01+0.j        , -9.22028113e-02-0.38476356j]])
I want to get the element in the 2nd row, 3rd column:
vec[1][2]
IndexError: index 1 is out of bounds for axis 0 with size 1
and slicing works well
vec[1,2]
(1.07732017e-13+0.41891015999999998j)
My first question: why does the first way not work in this case? It worked before when I used it.
Second question: the result of the slicing is an array; how do I make it a complex value without the brackets? My experience was using
vec[1,2][0]
but again it is not working here.
I tried to do everything on a numpy array at the beginning; the methods that do not work on a numpy matrix do work on a numpy array. Why are there such differences?
The key difference is that a matrix is always 2d, always. (This is supposed to be familiar to MATLAB users.)
In [85]: mat = np.matrix('1,2;3,4')
In [86]: mat
Out[86]:
matrix([[1, 2],
        [3, 4]])
In [87]: mat.shape
Out[87]: (2, 2)
In [88]: mat[1]
Out[88]: matrix([[3, 4]])
In [89]: _.shape
Out[89]: (1, 2)
Selecting a row of mat returns a matrix - a 1-row one, with shape (1, 2). It cannot be indexed again with [1], since its first axis only has size 1.
Indexing with the tuple returns a scalar:
In [90]: mat[1,1]
Out[90]: 4
In [91]: type(_)
Out[91]: numpy.int32
As a general rule, operations on a np.matrix return a matrix or a scalar, not a np.ndarray.
The other key point is that mat[1][1] is not one numpy operation. It is two, a mat[1] followed by another [1]. Imagine yourself to be a Python interpreter without any special knowledge of numpy. How would you evaluate that expression?
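Played out step by step (a small sketch):

tmp = mat[1]   # first [1]: returns matrix([[3, 4]]), shape (1, 2)
tmp[1]         # second [1]: IndexError - axis 0 of tmp has size 1
tmp[0, 1]      # indexing the 1-row matrix explicitly: returns 4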
Now for the complex question:
In [92]: mat = np.matrix('1+3j, 2;-2, 2+1j')
In [93]: mat
Out[93]:
matrix([[ 1.+3.j,  2.+0.j],
        [-2.+0.j,  2.+1.j]])
In [94]: mat[1,1]
Out[94]: (2+1j)
In [95]: type(_)
Out[95]: numpy.complex128
As expected, the tuple index has returned a scalar numpy element. The () is just part of numpy's way of displaying a complex number.
We can use item to extract the Python equivalent, but the display still uses ():
In [96]: __.item()
Out[96]: (2+1j)
In [97]: type(_)
Out[97]: complex
In [98]: 1+3j
Out[98]: (1+3j)
mat has an A property that gives the array equivalent. But notice the shapes.
In [99]: mat.A # a 2d array
Out[99]:
array([[ 1.+3.j,  2.+0.j],
       [-2.+0.j,  2.+1.j]])
In [100]: mat.A1 # a 1d array
Out[100]: array([ 1.+3.j, 2.+0.j, -2.+0.j, 2.+1.j])
In [101]: mat[1].A
Out[101]: array([[-2.+0.j, 2.+1.j]])
In [102]: mat[1].A1
Out[102]: array([-2.+0.j, 2.+1.j])
Sometimes this behavior of matrix is handy. For example, np.sum on a matrix acts like the array version with keepdims=True:
In [108]: np.sum(mat,1)
Out[108]:
matrix([[ 3.+3.j],
        [ 0.+1.j]])
In [110]: np.sum(mat.A,1, keepdims=True)
Out[110]:
array([[ 3.+3.j],
       [ 0.+1.j]])
How do I remove rows from ndarray arrays which have the same nth column value?
For eg,
a = np.array([[1, 3, 4],
              [1, 3, 4],
              [1, 3, 5]])
And I want the rows to be unique by the third column, so that just the [1, 3, 5] row is left.
numpy.unique does not do it: it checks for uniqueness in every column, and I can't specify the column by which to check uniqueness.
How can I do this efficiently for thousand + rows?
Thank you.
You could try a combination of bincount, nonzero and in1d:
import numpy as np

a = np.array([[1, 3, 4],
              [1, 3, 4],
              [1, 3, 5]])

# the column-3 values that occur exactly once
unique_in_column = (np.bincount(a[:, 2]) == 1).nonzero()

# boolean mask of the rows carrying those values
unique_index = np.in1d(a[:, 2], unique_in_column[0])
unique_a = a[unique_index]
This should do the trick. However, I'm not sure how this method scales with 1000+ rows.
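Note that bincount only works on non-negative integers. A variant sketch using np.unique avoids that restriction (return_counts requires numpy >= 1.9):

import numpy as np

a = np.array([[1, 3, 4],
              [1, 3, 4],
              [1, 3, 5]])

# each distinct column-3 value and how often it occurs
vals, counts = np.unique(a[:, 2], return_counts=True)

# keep only the rows whose column-3 value occurs exactly once
unique_a = a[np.in1d(a[:, 2], vals[counts == 1])]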
I had done this finally:
repeatdict = {}
todel = []
for i, row in enumerate(kplist):
    if repeatdict.get(row[2], 0):
        todel.append(i)
    else:
        repeatdict[row[2]] = 1
kplist = np.delete(kplist, todel, axis=0)
Basically, I iterated over the list, storing the values of the third column; if the same value is found again in the repeatdict dict on a later iteration, that row is marked for deletion by storing its index in the todel list.
Then we can get rid of the unwanted rows by calling np.delete with the list of all the row indexes we want to delete.
Also, I'm not picking my answer as the picked answer, because I know there's probably a better way to do this with just numpy magic.
I'll wait.
Are there built-in ways to construct/deconstruct a dataframe from/to a Python list-of-Python-lists?
As far as the constructor (let's call it make_df for now) that I'm looking for goes, I want to be able to write the initialization of a dataframe from literal values, including columns of arbitrary types, in an easily-readable form, like this:
df = make_df([[9.75, 1],
              [6.375, 2],
              [9., 3],
              [0.25, 1],
              [1.875, 2],
              [3.75, 3],
              [8.625, 1]],
             ['d', 'i'])
For the deconstructor, I want to essentially recover from a dataframe df the arguments one would need to pass to such make_df to re-create df.
AFAIK,
officially at least, the pandas.DataFrame constructor accepts only a numpy ndarray, a dict, or another DataFrame (and not a simple Python list-of-lists) as its first argument;
the pandas.DataFrame.values property does not preserve the original data types.
I can roll my own functions to do this (e.g., see below), but I would prefer to stick to built-in methods, if available. (The Pandas API is pretty big, and some of its names not what I would expect, so it is quite possible that I have missed one or both of these functions.)
FWIW, below is a hand-rolled version of what I described above, minimally tested. (I doubt that it would be able to handle every possible corner-case.)
import pandas as pd
import collections as co
import pandas.util.testing as pdt

def make_df(values, columns):
    return pd.DataFrame(co.OrderedDict([(columns[i],
                                         [row[i] for row in values])
                                        for i in range(len(columns))]))

def unmake_df(dataframe):
    columns = list(dataframe.columns)
    return ([[dataframe[c][i] for c in columns] for i in dataframe.index],
            columns)
values = [[9.75, 1],
          [6.375, 2],
          [9., 3],
          [0.25, 1],
          [1.875, 2],
          [3.75, 3],
          [8.625, 1]]
columns = ['d', 'i']
df = make_df(values, columns)
Here's the output of the call to make_df above:
>>> df
       d  i
0  9.750  1
1  6.375  2
2  9.000  3
3  0.250  1
4  1.875  2
5  3.750  3
6  8.625  1
A simple check of the round-trip[1]:
>>> df == make_df(*unmake_df(df))
True
>>> (values, columns) == unmake_df(make_df(*(values, columns)))
True
BTW, this is an example of the loss of the original values' types:
>>> df.values
array([[ 9.75 ,  1.   ],
       [ 6.375,  2.   ],
       [ 9.   ,  3.   ],
       [ 0.25 ,  1.   ],
       [ 1.875,  2.   ],
       [ 3.75 ,  3.   ],
       [ 8.625,  1.   ]])
Notice how the values in the second column are no longer integers, as they were originally.
Hence,
>>> df == make_df(df.values, columns)
False
[1] In order to be able to use == to test for equality between dataframes above, I resorted to a little monkey-patching:
def pd_DataFrame___eq__(self, other):
    try:
        pdt.assert_frame_equal(self, other,
                               check_index_type=True,
                               check_column_type=True,
                               check_frame_type=True)
    except AssertionError:
        return False
    else:
        return True

pd.DataFrame.__eq__ = pd_DataFrame___eq__
Without this hack, expressions of the form dataframe_0 == dataframe_1 would have evaluated to dataframe objects, not simple boolean values.
I'm not sure what documentation you are reading, because the link you give explicitly says that the default constructor accepts other list-like objects (one of which is a list of lists).
In [6]: pandas.DataFrame([['a', 1], ['b', 2]])
Out[6]:
   0  1
0  a  1
1  b  2
[2 rows x 2 columns]
In [7]: t = pandas.DataFrame([['a', 1], ['b', 2]])
In [8]: t.to_dict()
Out[8]: {0: {0: 'a', 1: 'b'}, 1: {0: 1, 1: 2}}
Notice that I use to_dict at the end, rather than trying to get back the original list of lists. This is because it is an ill-posed problem to get the list arguments back (unless you make an overkill decorator or something to actually store the ordered arguments that the constructor was called with).
The reason is that a pandas DataFrame, by default, is not an ordered data structure, at least in the column dimension. You could have permuted the order of the column data at construction time, and you would get the "same" DataFrame.
Since there can be many differing notions of equality between two DataFrames (e.g. the same columns including type, just the same named columns, the same columns in the same order, or the same columns in mixed order), pandas defaults to being the least specific about it (Python's principle of least astonishment).
So it would not be good design for the default or built-in constructors to choose an overly specific idea of equality for the purposes of returning the DataFrame back down to its arguments.
For that reason, using to_dict is better since the resulting keys will encode the column information, and you can choose to check for column types or ordering however you want to for your own application. You can even discard the keys by iterating the dict and simply pumping the contents into a list of lists if you really want to.
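For instance (a minimal sketch, reusing t from above): the dict round-trips back into a DataFrame, with the column labels carried by the keys rather than by position.

d = t.to_dict()            # {0: {0: 'a', 1: 'b'}, 1: {0: 1, 1: 2}}
t2 = pandas.DataFrame(d)   # rebuilt from the dict, columns keyed by label
t.equals(t2)               # True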
In other words, because order might not matter among the columns, the "inverse" of the list-of-list constructor maps backwards into a bigger set, namely all the permutations of the same column data. So the inverse you're looking for is not well-defined without assuming more structure -- and casual users of a DataFrame might not want or need to make those extra assumptions to get the invertibility.
As mentioned elsewhere, you should use DataFrame.equals to do equality checking among DataFrames, or pandas.util.testing.assert_frame_equal when you need finer control. The latter has many options that let you specify the kind of equality testing that makes sense for your application, while equals stays a reasonably generic default.
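For example (a minimal sketch; pdt is the pandas.util.testing module imported earlier):

df.equals(df.copy())                    # True - strict value and dtype comparison
pdt.assert_frame_equal(df, df.copy())   # passes silently; raises AssertionError on mismatch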