I have a dataset which has data of events including various elements with positional data of these elements included at various points in time. The total dataset is very large covering many of these events.
For each element at each point in time, I want to find the closest other element. To start this I was going to return an array of the positional data of all other elements at a specific time period and include this in the same row of the original dataframe (to perform further calculations on later).
I had two attempts at coding this, which I have included below. Both take too long on such a large dataset. Any ways that I can make it more efficient would be greatly appreciated.
import pandas as pd
import numpy as np
Approach 1:
def func1(db, val, frame):
    return db.loc[(db['val'] == val) & (db['frameId'] == frame),
                  ['displayName', 'x', 'y']].reset_index(drop=True).values.tolist()
d = pd.DataFrame({'displayName': ['Bob', 'Jane', 'Alice',
'Bob', 'Jane', 'Alice'],
'x': [90, 88, 86, 94, 91, 92],
'y': [24, 13, 18, 20, 15, 16],
'val': [201801, 201801, 201801, 201801, 201801, 201801],
'frameId': [1, 1, 1, 2, 2, 2]})
res = d.apply(lambda row: func1(d, row['val'], row['frameId']), axis=1)
Approach 2:
def func2(db, val, frame):
    return [l[[0, 1, 2]] for l in db if l[3] == val and l[4] == frame]
res = d.apply(lambda row: func2(np.array(d), row['val'], row['frameId']), axis=1)
The result (res) will thus be an array that looks like this:
[[['Bob', 90, 24], ['Jane', 88, 13], ['Alice', 86, 18]],
[['Bob', 90, 24], ['Jane', 88, 13], ['Alice', 86, 18]],
[['Bob', 90, 24], ['Jane', 88, 13], ['Alice', 86, 18]],
[['Bob', 94, 20], ['Jane', 91, 15], ['Alice', 92, 16]],
[['Bob', 94, 20], ['Jane', 91, 15], ['Alice', 92, 16]],
[['Bob', 94, 20], ['Jane', 91, 15], ['Alice', 92, 16]]]
However over the large dataset this is very time consuming to produce under both methods so any way to reduce time complexity would be welcomed.
If the order of the first dimension of the 3D array does not matter, then use the following (if it does matter, you will have to create a series that groups by displayName or index and takes the cumcount, then sort by that and drop it; let me know):
import pandas as pd
import numpy as np
d = pd.DataFrame({'displayName': ['Bob', 'Jane', 'Alice',
'Bob', 'Jane', 'Alice'],
'x': [90, 88, 86, 94, 91, 92],
'y': [24, 13, 18, 20, 15, 16],
'val': [201801, 201801, 201801, 201801, 201801, 201801],
'frameId': [1, 1, 1, 2, 2, 2]})
n = d['frameId'].max() + 1
x = d['displayName'].nunique()
pd.concat([d.iloc[:, 0:3]] * n).to_numpy().reshape(d.shape[0], x, 3)
Out[1]:
array([[['Bob', 90, 24],
['Jane', 88, 13],
['Alice', 86, 18]],
[['Bob', 94, 20],
['Jane', 91, 15],
['Alice', 92, 16]],
[['Bob', 90, 24],
['Jane', 88, 13],
['Alice', 86, 18]],
[['Bob', 94, 20],
['Jane', 91, 15],
['Alice', 92, 16]],
[['Bob', 90, 24],
['Jane', 88, 13],
['Alice', 86, 18]],
[['Bob', 94, 20],
['Jane', 91, 15],
['Alice', 92, 16]]], dtype=object)
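A different way to cut the per-row cost, sketched below under the assumption that each row only needs the list for its own (val, frameId) pair: build each group's [name, x, y] list once with groupby, then look it up per row instead of re-filtering the whole frame for every row.

```python
import pandas as pd

d = pd.DataFrame({'displayName': ['Bob', 'Jane', 'Alice',
                                  'Bob', 'Jane', 'Alice'],
                  'x': [90, 88, 86, 94, 91, 92],
                  'y': [24, 13, 18, 20, 15, 16],
                  'val': [201801] * 6,
                  'frameId': [1, 1, 1, 2, 2, 2]})

# Build each (val, frameId) group's [[name, x, y], ...] list exactly once.
groups = (d.groupby(['val', 'frameId'])[['displayName', 'x', 'y']]
            .apply(lambda g: g.values.tolist()))

# Look the precomputed list up per row instead of filtering d repeatedly.
res = [groups[key] for key in zip(d['val'], d['frameId'])]
```

This does one grouping pass plus a cheap lookup per row, instead of one full scan of d per row as in func1/func2.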
Here is my problem:
I'm trying to do an operation on a numpy array after reshaping it.
But after this operation, I want to reshape my array again to get back my original shape, with the same indexing.
So I want to find the appropriate "inverse reshape" so that inverse_reshape(reshape(a)) == a:
length = 10
a = np.arange(length ** 2).reshape((length, length))
# a.shape = (10, 10)
b = (a.reshape((length // 2, 2, -1, 2))
      .swapaxes(1, 2)
      .reshape(-1, 2, 2))
#b.shape = (25,2,2)
b = my_function(b)
#b.shape = (25,2,2) still the same shape
# b --> a ?
I know that the numpy reshape function doesn't copy the array, but the swapaxes one does.
How can I get the appropriate reshaping?
Simply reverse the order of the a => b conversion.
The original made:
In [53]: a.reshape((length//2, 2, -1, 2)).shape
Out[53]: (5, 2, 5, 2)
In [54]: a.reshape((length//2, 2, -1, 2)).swapaxes(1,2).shape
Out[54]: (5, 5, 2, 2)
In [55]: b.shape
Out[55]: (25, 2, 2)
So we need to get b back to the 4d shape, swap the axes back, and reshape to original a shape:
In [56]: b.reshape(5,5,2,2).swapaxes(1,2).reshape(10,10)
Out[56]:
array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
[40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
[50, 51, 52, 53, 54, 55, 56, 57, 58, 59],
[60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
[70, 71, 72, 73, 74, 75, 76, 77, 78, 79],
[80, 81, 82, 83, 84, 85, 86, 87, 88, 89],
[90, 91, 92, 93, 94, 95, 96, 97, 98, 99]])
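Putting the whole round trip together as a sketch (assuming length is even), the inverse simply replays the forward steps in reverse order:

```python
import numpy as np

length = 10
a = np.arange(length ** 2).reshape(length, length)

# Forward: carve a into 2x2 blocks.
b = (a.reshape(length // 2, 2, -1, 2)
      .swapaxes(1, 2)
      .reshape(-1, 2, 2))

# Inverse: restore the 4D shape, swap the axes back, flatten to the original.
a_back = (b.reshape(length // 2, length // 2, 2, 2)
           .swapaxes(1, 2)
           .reshape(length, length))

assert np.array_equal(a, a_back)
```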
I have the following dataframe:
# List of Tuples
matrix = [([22, 23], [34, 35, 65], [23, 29, 31]),
([33, 34], [31, 44], [11, 16, 18]),
([44, 56, 76], [16, 34, 76], [21, 34]),
([55, 34], [32, 35, 38], [22, 24, 26]),
([66, 65, 67], [33, 38, 39], [27, 32, 34]),
([77, 39, 45], [35, 36, 38], [11, 21, 34])]
# Create a DataFrame object
df = pd.DataFrame(matrix, columns=list('xyz'), index=list('abcdef'))
I'm able to apply my custom function to output start, end items in list like below for all columns:
def fl(x):
    return [x[0], x[len(x) - 1]]
df.apply(lambda x : [fl(i) for i in x])
But I want to apply the function only to the selected columns x and z.
I'm trying like below, referring to this link:
df.apply(lambda x: fl(x) if x in ['x', 'y'] else x)
and like this:
df[['x', 'y']].apply(fl)
How do I get the output with the function applied only to the x and z columns, with the y column unchanged?
Use DataFrame.applymap for elementwise processing; for the last value, [-1] indexing is possible:
def fl(x):
    return [x[0], x[-1]]
df[['x', 'z']] = df[['x', 'z']].applymap(fl)
print (df)
x y z
a [22, 23] [34, 35, 65] [23, 31]
b [33, 34] [31, 44] [11, 18]
c [44, 76] [16, 34, 76] [21, 34]
d [55, 34] [32, 35, 38] [22, 26]
e [66, 67] [33, 38, 39] [27, 34]
f [77, 45] [35, 36, 38] [11, 34]
Or, for a solution with DataFrame.apply, use zip with mapping tuples to lists, selecting by .str:
def fl(x):
    return list(map(list, zip(x.str[0], x.str[-1])))
df[['x', 'z']] = df[['x', 'z']].apply(fl)
print (df)
x y z
a [22, 23] [34, 35, 65] [23, 31]
b [33, 34] [31, 44] [11, 18]
c [44, 76] [16, 34, 76] [21, 34]
d [55, 34] [32, 35, 38] [22, 26]
e [66, 67] [33, 38, 39] [27, 34]
f [77, 45] [35, 36, 38] [11, 34]
Found out the mistake I was making.
Thanks for the reply.
I changed the function like below:
def fl(x):
    new = []
    for i in x:
        new.append([i[0], i[-1]])
    return new
Then I applied the function like this:
df.apply(lambda x : fl(x) if x.name in ['x', 'z'] else x)
Then I'm able to get the expected output.
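The same idea can be distilled into a self-contained sketch (using a trimmed two-row version of the sample frame, and a hypothetical helper name first_last): apply hands each column to the lambda as a Series, so checking col.name routes x and z through the helper while y passes through untouched.

```python
import pandas as pd

matrix = [([22, 23], [34, 35, 65], [23, 29, 31]),
          ([33, 34], [31, 44], [11, 16, 18])]
df = pd.DataFrame(matrix, columns=list('xyz'), index=list('ab'))

def first_last(col):
    # col is a whole column (Series of lists); trim each list to [first, last].
    return col.map(lambda lst: [lst[0], lst[-1]])

out = df.apply(lambda col: first_last(col) if col.name in ['x', 'z'] else col)
```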
I've got a strange situation.
I have a 2D Numpy array, x:
x = np.random.random_integers(0,5,(20,8))
And I have two indexers: one with indices for the rows, and one with indices for the columns. In order to index x, I have to do the following:
row_indices = [4,2,18,16,7,19,4]
col_indices = [1,2]
x_rows = x[row_indices,:]
x_indexed = x_rows[:,col_indices]
Instead of just:
x_new = x[row_indices,col_indices]
(which fails with a broadcast error: the index shapes (7,) and (2,) cannot be broadcast together)
I'd like to be able to do the indexing in one line using broadcasting, since that would keep the code clean and readable. Also, as I understand it, it should be faster to do it in one line (and I'll be working with pretty big arrays).
Test Case:
x = np.random.random_integers(0,5,(20,8))
row_indices = [4,2,18,16,7,19,4]
col_indices = [1,2]
x_rows = x[row_indices,:]
x_indexed = x_rows[:,col_indices]
x_doesnt_work = x[row_indices,col_indices]
Selections or assignments with np.ix_ using indexing or boolean arrays/masks
1. With indexing-arrays
A. Selection
We can use np.ix_ to get a tuple of indexing arrays that are broadcastable against each other, resulting in a higher-dimensional combination of indices. When that tuple is used for indexing into the input array, it gives us the corresponding higher-dimensional block. Hence, to make a selection based on two 1D indexing arrays, it would be -
x_indexed = x[np.ix_(row_indices,col_indices)]
B. Assignment
We can use the same notation for assigning scalar or a broadcastable array into those indexed positions. Hence, the following works for assignments -
x[np.ix_(row_indices,col_indices)] = # scalar or broadcastable array
2. With masks
We can also use boolean arrays/masks with np.ix_, similar to how indexing arrays are used. This can again be used to select a block of the input array, and also for assignments into it.
A. Selection
Thus, with row_mask and col_mask boolean arrays as the masks for row and column selections respectively, we can use the following for selections -
x[np.ix_(row_mask,col_mask)]
B. Assignment
And the following works for assignments -
x[np.ix_(row_mask,col_mask)] = # scalar or broadcastable array
Sample Runs
1. Using np.ix_ with indexing-arrays
Input array and indexing arrays -
In [221]: x
Out[221]:
array([[17, 39, 88, 14, 73, 58, 17, 78],
[88, 92, 46, 67, 44, 81, 17, 67],
[31, 70, 47, 90, 52, 15, 24, 22],
[19, 59, 98, 19, 52, 95, 88, 65],
[85, 76, 56, 72, 43, 79, 53, 37],
[74, 46, 95, 27, 81, 97, 93, 69],
[49, 46, 12, 83, 15, 63, 20, 79]])
In [222]: row_indices
Out[222]: [4, 2, 5, 4, 1]
In [223]: col_indices
Out[223]: [1, 2]
Tuple of indexing arrays with np.ix_ -
In [224]: np.ix_(row_indices,col_indices) # Broadcasting of indices
Out[224]:
(array([[4],
[2],
[5],
[4],
[1]]), array([[1, 2]]))
Make selections -
In [225]: x[np.ix_(row_indices,col_indices)]
Out[225]:
array([[76, 56],
[70, 47],
[46, 95],
[76, 56],
[92, 46]])
As suggested by the OP, this is in effect the same as old-school broadcasting with a 2D version of row_indices whose elements/indices are moved to axis=0, creating a singleton dimension at axis=1 and thus allowing broadcasting with col_indices. Thus, we would have an alternative solution like so -
In [227]: x[np.asarray(row_indices)[:,None],col_indices]
Out[227]:
array([[76, 56],
[70, 47],
[46, 95],
[76, 56],
[92, 46]])
As discussed earlier, assignments work the same way.
Row, col indexing arrays -
In [36]: row_indices = [1, 4]
In [37]: col_indices = [1, 3]
Make assignments with scalar -
In [38]: x[np.ix_(row_indices,col_indices)] = -1
In [39]: x
Out[39]:
array([[17, 39, 88, 14, 73, 58, 17, 78],
[88, -1, 46, -1, 44, 81, 17, 67],
[31, 70, 47, 90, 52, 15, 24, 22],
[19, 59, 98, 19, 52, 95, 88, 65],
[85, -1, 56, -1, 43, 79, 53, 37],
[74, 46, 95, 27, 81, 97, 93, 69],
[49, 46, 12, 83, 15, 63, 20, 79]])
Make assignments with a 2D block (broadcastable array) -
In [40]: rand_arr = -np.arange(4).reshape(2,2)
In [41]: x[np.ix_(row_indices,col_indices)] = rand_arr
In [42]: x
Out[42]:
array([[17, 39, 88, 14, 73, 58, 17, 78],
[88, 0, 46, -1, 44, 81, 17, 67],
[31, 70, 47, 90, 52, 15, 24, 22],
[19, 59, 98, 19, 52, 95, 88, 65],
[85, -2, 56, -3, 43, 79, 53, 37],
[74, 46, 95, 27, 81, 97, 93, 69],
[49, 46, 12, 83, 15, 63, 20, 79]])
2. Using np.ix_ with masks
Input array -
In [19]: x
Out[19]:
array([[17, 39, 88, 14, 73, 58, 17, 78],
[88, 92, 46, 67, 44, 81, 17, 67],
[31, 70, 47, 90, 52, 15, 24, 22],
[19, 59, 98, 19, 52, 95, 88, 65],
[85, 76, 56, 72, 43, 79, 53, 37],
[74, 46, 95, 27, 81, 97, 93, 69],
[49, 46, 12, 83, 15, 63, 20, 79]])
Input row, col masks -
In [20]: row_mask = np.array([0,1,1,0,0,1,0],dtype=bool)
In [21]: col_mask = np.array([1,0,1,0,1,1,0,0],dtype=bool)
Make selections -
In [22]: x[np.ix_(row_mask,col_mask)]
Out[22]:
array([[88, 46, 44, 81],
[31, 47, 52, 15],
[74, 95, 81, 97]])
Make assignments with scalar -
In [23]: x[np.ix_(row_mask,col_mask)] = -1
In [24]: x
Out[24]:
array([[17, 39, 88, 14, 73, 58, 17, 78],
[-1, 92, -1, 67, -1, -1, 17, 67],
[-1, 70, -1, 90, -1, -1, 24, 22],
[19, 59, 98, 19, 52, 95, 88, 65],
[85, 76, 56, 72, 43, 79, 53, 37],
[-1, 46, -1, 27, -1, -1, 93, 69],
[49, 46, 12, 83, 15, 63, 20, 79]])
Make assignments with a 2D block (broadcastable array) -
In [25]: rand_arr = -np.arange(12).reshape(3,4)
In [26]: x[np.ix_(row_mask,col_mask)] = rand_arr
In [27]: x
Out[27]:
array([[ 17, 39, 88, 14, 73, 58, 17, 78],
[ 0, 92, -1, 67, -2, -3, 17, 67],
[ -4, 70, -5, 90, -6, -7, 24, 22],
[ 19, 59, 98, 19, 52, 95, 88, 65],
[ 85, 76, 56, 72, 43, 79, 53, 37],
[ -8, 46, -9, 27, -10, -11, 93, 69],
[ 49, 46, 12, 83, 15, 63, 20, 79]])
What about:
x[row_indices][:,col_indices]
For example,
x = np.random.random_integers(0,5,(5,5))
## array([[4, 3, 2, 5, 0],
## [0, 3, 1, 4, 2],
## [4, 2, 0, 0, 3],
## [4, 5, 5, 5, 0],
## [1, 1, 5, 0, 2]])
row_indices = [4,2]
col_indices = [1,2]
x[row_indices][:,col_indices]
## array([[1, 5],
## [2, 0]])
import numpy as np
x = np.random.random_integers(0,5,(4,4))
x
array([[5, 3, 3, 2],
[4, 3, 0, 0],
[1, 4, 5, 3],
[0, 4, 3, 4]])
# This indexes the elements 1,1 and 2,2 and 3,3
indexes = (np.array([1,2,3]),np.array([1,2,3]))
x[indexes]
# returns array([3, 5, 4])
Notice that numpy has very different rules depending on what kind of indexes you use. Indexing several individual elements should be done with a tuple of np.ndarray (see the indexing manual).
So you only need to convert your lists to np.ndarray and it should work as expected.
I think you are trying to do one of the following (equivalent) operations:
x_does_work = x[row_indices,:][:,col_indices]
x_does_work = x[:,col_indices][row_indices,:]
This will actually create a subset of x with only the selected rows, then select the columns from that, or vice versa in the second case. The first case can be thought of as
x_does_work = (x[row_indices,:])[:,col_indices]
Your first try would work if you write it with np.newaxis (converting the list to an array first):
x_new = x[np.asarray(row_indices)[:, np.newaxis], col_indices]
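As a quick sanity check on a small array (a sketch, not taken from the original posts), the three styles shown above select the same block:

```python
import numpy as np

x = np.arange(40).reshape(5, 8)
rows = [4, 2, 1]
cols = [1, 2]

a = x[np.ix_(rows, cols)]               # np.ix_ builds the open grid of indices
b = x[np.asarray(rows)[:, None], cols]  # manual broadcasting with a new axis
c = x[rows][:, cols]                    # chained indexing (copies twice)

assert np.array_equal(a, b) and np.array_equal(a, c)
```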
Does it shuffle once per epoch, or something else?
What is the difference between tf.train.shuffle_batch and tf.train.batch?
Could someone explain it? Thanks.
First take a look at the documentation (https://www.tensorflow.org/api_docs/python/tf/train/shuffle_batch and https://www.tensorflow.org/api_docs/python/tf/train/batch). Internally, batch is built around a FIFOQueue and shuffle_batch is built around a RandomShuffleQueue.
Consider the following toy example: it puts 1 to 100 in a constant, feeds it through both tf.train.shuffle_batch and tf.train.batch, and then prints the results.
import tensorflow as tf
import numpy as np
data = np.arange(1, 100 + 1)
data_input = tf.constant(data)
batch_shuffle = tf.train.shuffle_batch([data_input], enqueue_many=True, batch_size=10, capacity=100, min_after_dequeue=10, allow_smaller_final_batch=True)
batch_no_shuffle = tf.train.batch([data_input], enqueue_many=True, batch_size=10, capacity=100, allow_smaller_final_batch=True)
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    for i in range(10):
        print(i, sess.run([batch_shuffle, batch_no_shuffle]))
    coord.request_stop()
    coord.join(threads)
Which yields:
0 [array([23, 48, 15, 46, 78, 89, 18, 37, 88, 4]), array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])]
1 [array([80, 10, 5, 76, 50, 53, 1, 72, 67, 14]), array([11, 12, 13, 14, 15, 16, 17, 18, 19, 20])]
2 [array([11, 85, 56, 21, 86, 12, 9, 7, 24, 1]), array([21, 22, 23, 24, 25, 26, 27, 28, 29, 30])]
3 [array([ 8, 79, 90, 81, 71, 2, 20, 63, 73, 26]), array([31, 32, 33, 34, 35, 36, 37, 38, 39, 40])]
4 [array([84, 82, 33, 6, 39, 6, 25, 19, 19, 34]), array([41, 42, 43, 44, 45, 46, 47, 48, 49, 50])]
5 [array([27, 41, 21, 37, 60, 16, 12, 16, 24, 57]), array([51, 52, 53, 54, 55, 56, 57, 58, 59, 60])]
6 [array([69, 40, 52, 55, 29, 15, 45, 4, 7, 42]), array([61, 62, 63, 64, 65, 66, 67, 68, 69, 70])]
7 [array([61, 30, 53, 95, 22, 33, 10, 34, 41, 13]), array([71, 72, 73, 74, 75, 76, 77, 78, 79, 80])]
8 [array([45, 52, 57, 35, 70, 51, 8, 94, 68, 47]), array([81, 82, 83, 84, 85, 86, 87, 88, 89, 90])]
9 [array([35, 28, 83, 65, 80, 84, 71, 72, 26, 77]), array([ 91, 92, 93, 94, 95, 96, 97, 98, 99, 100])]
tf.train.shuffle_batch() shuffles every epoch: elements are drawn at random from a buffer that always holds at least min_after_dequeue items, so each pass over the data comes out in a different order.
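The buffer-based shuffling behind RandomShuffleQueue can be modeled in plain Python (a simplified sketch, not the TensorFlow implementation): dequeue a random element while keeping at least min_after_dequeue elements buffered, which is why the early shuffled batches above only mix numbers from near the start of the range.

```python
import random

def buffered_shuffle(items, min_after_dequeue, seed=0):
    """Toy model of a RandomShuffleQueue's dequeue order."""
    rng = random.Random(seed)
    buf, out = [], []
    for item in items:
        buf.append(item)
        # Only dequeue while more than min_after_dequeue items are buffered.
        if len(buf) > min_after_dequeue:
            out.append(buf.pop(rng.randrange(len(buf))))
    # Input exhausted: drain whatever is left in the buffer.
    while buf:
        out.append(buf.pop(rng.randrange(len(buf))))
    return out

shuffled = buffered_shuffle(range(1, 101), min_after_dequeue=10)
```

Every element appears exactly once, but output position i can only hold an input read within the first i + min_after_dequeue + 1 items, so the shuffle is local rather than a uniform permutation of the whole epoch.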