Hide labels from pandas.cut with a customised IntervalIndex

I applied pandas.cut on a Series. If I don't use a customised IntervalIndex, labels=False works as expected (it returns only integer indicators of the bins). However, when I use a customised IntervalIndex, even if I set labels=False, it returns the interval of each bin. I guess this is probably because I used an IntervalIndex instead of a number of bins.
Is there any way to use a customised IntervalIndex and still return only the integer indicators of the bins?
bins = pd.interval_range(start=0, end=10, periods=5, closed='left')
pd.cut([1, 3, 5, 7, 9], bins, labels=False)

codes attribute
pd.cut([1, 3, 5, 7, 9], bins, labels=False).codes
array([0, 1, 2, 3, 4], dtype=int8)
The pd.cut function returns a Categorical object. What gets displayed are the categories for each element. However, the Categorical object has two attributes, codes and categories. The categories are what you'd expect: an array of your unique categories in the proper order. The codes are the positions within that categories array that each element of the Categorical object references.
You can produce the Categorical values by slicing the categories array with the codes array like so:
mycut = pd.cut([1, 3, 5, 7, 9], bins, labels=False)
mycut.categories[mycut.codes]
IntervalIndex([[0, 2), [2, 4), [4, 6), [6, 8), [8, 10)],
              closed='left',
              dtype='interval[int64]')
However, the codes are the exact thing you were looking for... so take it.
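A minimal sketch of that (the extra value 42 is my own addition, not from the question): values falling outside every interval become NaN, whose code is -1:
import pandas as pd

bins = pd.interval_range(start=0, end=10, periods=5, closed='left')
cut = pd.cut([1, 3, 5, 7, 9, 42], bins)  # 42 falls in no interval
print(cut.codes)  # [ 0  1  2  3  4 -1] -- one integer indicator per value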

Related

Numpy Interpolation for Array of Arrays

I have an array of arrays that I want to interpolate based on each array's min and max.
For a simple m x n array with values ranging from 0 to 1, I can do this as follows:
x_inp=np.interp(x,(x.min(),x.max()),(0,0.7))
This rescales every existing value into the range 0 to 0.7. However, if I have an array of dimension 100 x m x n, the above method considers the global min/max and not the individual min/max of each m x n array.
Edit:
For example
x1=np.random.randint(0,5, size=(2, 4))
x2=np.random.randint(6,10, size=(2, 4))
my_list=[x1,x2]
my_array=np.asarray(my_list)
print(my_array)
>> array([[[1, 4, 3, 4],
           [3, 2, 0, 0]],

          [[9, 6, 8, 6],
           [8, 7, 6, 7]]])
my_array is now of dimension 2x2x4, and my_array.min() and my_array.max() would give me 0 and 9. So if I interpolate, it won't work based on the min/max of the individual 2x4 arrays. What I want is for the interpolation to work based on a min/max of 0/4 for the first array and 6/9 for the second.
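One possible approach (a sketch, not from the original thread): take the per-slice min/max with keepdims=True so broadcasting rescales each 2x4 slice independently:
import numpy as np

# Per-slice min/max over the last two axes, kept as shape (2, 1, 1)
# so they broadcast against my_array's shape (2, 2, 4).
mins = my_array.min(axis=(1, 2), keepdims=True)
maxs = my_array.max(axis=(1, 2), keepdims=True)

# Rescale each slice to [0, 0.7], mirroring np.interp(x, (min, max), (0, 0.7)).
x_inp = (my_array - mins) / (maxs - mins) * 0.7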

Assign numpy matrix to pandas columns

I have a dataframe with 48870 rows and calculated embeddings with shape (48870, 768).
I want to assign these embeddings to a pandas column.
When I try
test['original_text_embeddings'] = embeddings
I get an error: Wrong number of items passed 768, placement implies 1
I know that something like df.loc['original_text_embeddings'] = embeddings[0] will work, but I need to automate this process.
A dataframe/column needs a 1d list/array:
In [84]: x = np.arange(12).reshape(3,4)
In [85]: pd.Series(x)
...
ValueError: Data must be 1-dimensional
Splitting the array into a list (of arrays):
In [86]: pd.Series(list(x))
Out[86]:
0     [0, 1, 2, 3]
1     [4, 5, 6, 7]
2    [8, 9, 10, 11]
dtype: object
In [87]: _.to_numpy()
Out[87]:
array([array([0, 1, 2, 3]), array([4, 5, 6, 7]), array([ 8,  9, 10, 11])],
      dtype=object)
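Applied to the question's data (a sketch with stand-in shapes for test and embeddings), wrapping the 2-D embeddings into a list of 1-D arrays makes each row a single cell:
import numpy as np
import pandas as pd

test = pd.DataFrame({'original_text': ['a', 'b', 'c']})  # stand-in df
embeddings = np.random.rand(3, 768)                      # stand-in embeddings

# One 768-element array per cell; the column dtype becomes object.
test['original_text_embeddings'] = list(embeddings)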
Your embeddings have 768 columns, which would translate to 768 columns in a data frame. You are trying to assign all of those columns to just one column in the data frame, which is not possible.
What you could do is generate a new data frame from the embeddings and concatenate the test df with the embedding df:
embedding_df = pd.DataFrame(embeddings)
test = pd.concat([test, embedding_df], axis=1)
Have a look at the documentation for handling indexes and concatenating on different axes:
https://pandas.pydata.org/docs/reference/api/pandas.concat.html
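If test happens to have a non-default index (an assumption about your data), concatenating by position first avoids NaN-filled misalignment; a small sketch:
embedding_df = pd.DataFrame(embeddings).add_prefix('emb_')
test = pd.concat([test.reset_index(drop=True), embedding_df], axis=1)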

Rearranging numpy arrays

I was not able to find a duplicate of my question, unfortunately, although I am sure that this is a problem which has been solved before.
I have a numpy array with a certain set of indices, eg.
ind1 = np.array([1, 3, 5, 7])
With these indices, I can filter some values from another array. Let's call this other array rows. As an example, I can retrieve
rows[ind1] = [1, 10, 20, 15]
The order of rows[ind1] must not be changed in the following.
I have another index array, ind2
ind2 = np.array([4, 5, 6, 7])
I also have an array cols, from which I can filter values using ind2. I know that cols[ind2] results in an array which has the size of rows[ind1] and contains the same entries, but in a different order. An example:
cols[ind2] = [15, 20, 10, 1]
I would like to rearrange the order of cols[ind2], so that it corresponds to rows[ind1]. I am interested in the corresponding order of ind2.
In the example, the result should be
cols[ind2] = [1, 10, 20, 15]
ind2 = [7, 6, 5, 4]
Using numpy, I did not find a way to do this. Any ideas would be helpful. Thanks in advance.
There may be a better way, but you can do this using argsorts.
Let's call your "reordered ind2" ind3.
If you are sure that rows[ind1] and cols[ind2] will have the same length and all of the same elements, then the sorted versions of both will be the same, i.e. np.sort(rows[ind1]) == np.sort(cols[ind2]).
If this is the case, and you don't run into any problems with repeated elements (unsure of your exact use case), then what you can do is find the indices to put cols[ind2] in order, and then from there, find the indices to put np.sort(cols[ind2]) into the order of rows[ind1].
So, if
p1 = np.argsort(rows[ind1])
and
p2 = np.argsort(cols[ind2])
and
p3 = np.argsort(p1)
Then
ind3 = ind2[p2][p3]
The reason this works is that an argsort of an argsort gives you the indices needed to undo the first sort: p2 sorts cols[ind2] (that's the definition of argsort), and p3 un-sorts the result of that back into the order of rows[ind1].
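Putting it together with the question's numbers (rows and cols below are made-up arrays, chosen only so that rows[ind1] and cols[ind2] match the example):
import numpy as np

ind1 = np.array([1, 3, 5, 7])
ind2 = np.array([4, 5, 6, 7])
rows = np.array([0, 1, 0, 10, 0, 20, 0, 15])   # rows[ind1] = [1, 10, 20, 15]
cols = np.array([0, 0, 0, 0, 15, 20, 10, 1])   # cols[ind2] = [15, 20, 10, 1]

p1 = np.argsort(rows[ind1])  # order that sorts rows[ind1]
p2 = np.argsort(cols[ind2])  # order that sorts cols[ind2]
p3 = np.argsort(p1)          # order that undoes the sort of rows[ind1]

ind3 = ind2[p2][p3]
print(ind3)        # [7 6 5 4]
print(cols[ind3])  # [ 1 10 20 15], same order as rows[ind1]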

Determine number of preceding equal elements

Using numpy, given a sorted 1D array, how can I efficiently obtain a 1D array of equal size where the value at each position is the number of preceding equal elements? I have very large arrays, and processing each element in Python code one way or another is not acceptable.
Example:
input = [0, 0, 4, 4, 4, 5, 5, 5, 5, 6]
output = [0, 1, 0, 1, 2, 0, 1, 2, 3, 0]
import numpy as np
A=np.array([0, 0, 4, 4, 4, 5, 5, 5, 5, 6])
uni,counts=np.unique(A, return_counts=True)
out=np.concatenate([np.arange(n) for n in counts])
print(out)
Not certain about the efficiency (there is probably a better way to form the out array than concatenating), but this is a very straightforward way to get the result you are looking for: count the unique elements, apply np.arange to each count to get the ascending sequences, then concatenate those arrays together.
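For very large arrays you can avoid the Python-level loop over unique values entirely; a sketch of a fully vectorized variant (same result, assuming the input is sorted):
import numpy as np

A = np.array([0, 0, 4, 4, 4, 5, 5, 5, 5, 6])
idx = np.arange(len(A))
# Index at which each run of equal values starts, repeated across the run.
run_starts = np.r_[0, np.flatnonzero(A[1:] != A[:-1]) + 1]
counts = np.diff(np.r_[run_starts, len(A)])
out = idx - np.repeat(run_starts, counts)
print(out)  # [0 1 0 1 2 0 1 2 3 0]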

Numpy Indexing Behavior

I am having a lot of trouble understanding numpy indexing for multidimensional arrays. In this example that I am working with, let's say that I have a 2D array, A, which is 100x10. Then I have another array, B, which is a 100x1 1D array of values between 0-9 (indices for A). In MATLAB, I would use A(sub2ind(size(A), (1:size(A,1))', B)) to return, for each row of A, the value at the index stored in the corresponding row of B.
So, as a test case, let's say I have this:
A = np.random.rand(100,10)
B = np.int32(np.floor(np.random.rand(100)*10))
If I print their shapes, I get:
print A.shape returns (100L, 10L)
print B.shape returns (100L,)
When I try to index into A using B naively (incorrectly)
Test1 = A[:,B]
print Test1.shape returns (100L, 100L)
but if I do
Test2 = A[range(A.shape[0]),B]
print Test2.shape returns (100L,)
which is what I want. I'm having trouble understanding the distinction being made here. In my mind, A[:,5] and A[range(A.shape[0]),5] should return the same thing, but they don't here. How is : different from using range(A.shape[0]), which just creates the list of indices from 0 to A.shape[0] - 1?
Let's look at a simple array:
In [654]: X=np.arange(12).reshape(3,4)
In [655]: X
Out[655]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
With a slice for the rows and a list for the columns, we can pick 3 columns of X, in any order (and even repeated). In other words, take all the rows, but only the selected columns.
In [656]: X[:,[3,2,1]]
Out[656]:
array([[ 3,  2,  1],
       [ 7,  6,  5],
       [11, 10,  9]])
If instead I use a list (or array) of 3 values, it pairs them up with the column values, effectively picking 3 values, X[0,3],X[1,2],X[2,1]:
In [657]: X[[0,1,2],[3,2,1]]
Out[657]: array([3, 6, 9])
If instead I gave it a column vector to index rows, I get the same thing as with the slice:
In [659]: X[[[0],[1],[2]],[3,2,1]]
Out[659]:
array([[ 3,  2,  1],
       [ 7,  6,  5],
       [11, 10,  9]])
This amounts to picking 9 individual values, as generated by broadcasting:
In [663]: np.broadcast_arrays(np.arange(3)[:,None],np.array([3,2,1]))
Out[663]:
[array([[0, 0, 0],
        [1, 1, 1],
        [2, 2, 2]]),
 array([[3, 2, 1],
        [3, 2, 1],
        [3, 2, 1]])]
numpy indexing can be confusing. But a good starting point is this page: http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
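As a footnote to the original question: pairing a row index per element with a column index per element is exactly the sub2ind pattern, and np.take_along_axis offers another way to spell it (a sketch; it expects the index array to have the same number of dimensions as A):
import numpy as np

A = np.random.rand(100, 10)
B = np.random.randint(0, 10, size=100)

picked = A[np.arange(A.shape[0]), B]                       # pair row i with column B[i]
picked2 = np.take_along_axis(A, B[:, None], axis=1)[:, 0]  # same values
assert np.allclose(picked, picked2)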