How to use Keras and pandas to duplicate similar arrays

I have an array from my teacher. He gave me an array like the one below, containing 0, 1, and None:
[[1, 1, 0, 0, None, 0, 1], [1, 0, 0, 0, None, 0, 1], [1, 1, None, 0, 1, 0, None], [1, 1, 1, 0, None, 0, 0], [1, 1, 0, None, 0, 0, 1]]
He asked me to duplicate the array ten times, but each column must keep a similar percentage distribution: each column of a copy should differ by no more than 8% from the corresponding column of the original array.
How should I achieve this goal?
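One simple way to satisfy the constraint, sketched here with NumPy (the object-array representation and the shuffling strategy are assumptions, not part of the original question): shuffle each column independently for every copy. A per-column permutation preserves that column's 0/1/None proportions exactly (0% deviation, well within the 8% tolerance).

import numpy as np

# Hedged sketch: the dtype=object array is an assumed representation
# of the 0/1/None array from the question.
original = np.array([[1, 1, 0, 0, None, 0, 1],
                     [1, 0, 0, 0, None, 0, 1],
                     [1, 1, None, 0, 1, 0, None],
                     [1, 1, 1, 0, None, 0, 0],
                     [1, 1, 0, None, 0, 0, 1]], dtype=object)

rng = np.random.default_rng(0)
replicas = []
for _ in range(10):
    replica = original.copy()
    for col in range(replica.shape[1]):
        # Permuting a column leaves its value distribution unchanged.
        replica[:, col] = rng.permutation(replica[:, col])
    replicas.append(replica)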

Related

PyTorch indexing by argmax

Dear community, I have a challenge with tensor indexing in PyTorch. The problem is very simple: given a tensor, create an index tensor to index its maximum values per column.
x = T.tensor([[0, 3, 0, 5, 9, 8, 2, 0],
              [0, 4, 9, 6, 7, 9, 1, 0]])
Given this tensor, I would like to build a boolean mask for indexing its maximum values per column. To be specific, I do not need its maximum values, torch.max(x, dim=0), nor its indices, torch.argmax(x, dim=0), but a boolean mask for indexing another tensor based on this tensor's max values. My ideal output would be:
# Input tensor
x
tensor([[0, 3, 0, 5, 9, 8, 2, 0],
        [0, 4, 9, 6, 7, 9, 1, 0]])
# Ideal output bool mask tensor
idx
tensor([[1, 0, 0, 0, 1, 0, 1, 1],
        [0, 1, 1, 1, 0, 1, 0, 0]])
I know that values_max = x[idx] and values_max = x.max(dim=0).values are equivalent, but I am not looking for values_max but for idx.
I have built a solution around it, but it just seems too complex, and I am sure torch has an optimized way to do this. I have tried to use torch.index_select with the output of x.argmax(dim=0) but failed, so I built a custom solution that seems too cumbersome to me. I am asking for help to do this in a vectorized / tensorial / torch way.
You can perform this operation by first extracting the column-wise index of the maximum value of your tensor with torch.argmax, setting keepdim to True:
>>> x.argmax(0, keepdim=True)
tensor([[0, 1, 1, 1, 0, 1, 0, 0]])
Then you can use torch.scatter to place 1s in a zero tensor at the designated indices:
>>> torch.zeros_like(x).scatter(0, x.argmax(0, True), value=1)
tensor([[1, 0, 0, 0, 1, 0, 1, 1],
        [0, 1, 1, 1, 0, 1, 0, 0]])
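For completeness, a hedged usage sketch: the point of idx is to index another tensor of the same shape. The tensor y below is a made-up stand-in.

import torch as T

x = T.tensor([[0, 3, 0, 5, 9, 8, 2, 0],
              [0, 4, 9, 6, 7, 9, 1, 0]])

# Build the mask as in the answer above, then cast it to bool for indexing.
idx = T.zeros_like(x).scatter(0, x.argmax(0, True), value=1).bool()

y = T.arange(16).reshape(2, 8)  # hypothetical second tensor
print(y[idx])  # y's values at the column-wise argmax positions of x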

Find the maximum values of a matrix in rows axis and replace other values to zero

A = [[2, 2, 4, 2, 2, 2],
     [2, 6, 2, 2, 2, 2],
     [2, 2, 2, 2, 8, 2]]
I want matrix B to be equal to:
B = [[0, 0, 4, 0, 0, 0],
     [0, 6, 0, 0, 0, 0],
     [0, 0, 0, 0, 8, 0]]
So I want to find the maximum value of each row and replace other values with 0. Is there any way to do this without using for loops?
Thanks in advance for your comments.
Instead of looking at the argmax, you could take the max value of each row directly, then mask the elements that are lower and replace them with zeros.
In-place (assuming A is a NumPy array; note that keepdims must be passed by keyword to ndarray.max), this would look like:
>>> A[A < A.max(1, keepdims=True)] = 0
>>> A
array([[0, 0, 4, 0, 0, 0],
       [0, 6, 0, 0, 0, 0],
       [0, 0, 0, 0, 8, 0]])
An out-of-place alternative is to use np.where:
>>> np.where(A == A.max(1, keepdims=True), A, 0)
array([[0, 0, 4, 0, 0, 0],
       [0, 6, 0, 0, 0, 0],
       [0, 0, 0, 0, 8, 0]])
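Put together as a runnable snippet (a sketch assuming NumPy and the arrays from the question):

import numpy as np

A = np.array([[2, 2, 4, 2, 2, 2],
              [2, 6, 2, 2, 2, 2],
              [2, 2, 2, 2, 8, 2]])

# Keep only each row's maximum; if a row has ties, all tied maxima are kept.
B = np.where(A == A.max(axis=1, keepdims=True), A, 0)
print(B)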

Standard implementation of vectorize_sequences

In François Chollet's Deep Learning with Python, this function appears:
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results
I understand what this function does. It has been asked about in several other questions and is mentioned in many posts. Despite being so widespread, this vectorization is, according to Chollet's book, done "manually for maximum clarity." I am interested in whether there is a standard, non-"manual" way of doing it.
Is there a standard Keras / TensorFlow / scikit-learn / pandas / NumPy implementation of a function which behaves very similarly to the function above?
Solution with MultiLabelBinarizer
Assuming sequences is an array of integers with maximum possible value up to dimension-1, we can use MultiLabelBinarizer from sklearn.preprocessing to replicate the behaviour of vectorize_sequences:
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer(classes=range(dimension))
mlb.fit_transform(sequences)
Solution with NumPy broadcasting
Assuming sequences is an array of integers with maximum possible value up to dimension-1, and that all sequences have the same length (the comparison below needs a rectangular array):
(np.array(sequences)[:, :, None] == range(dimension)).any(1).view('i1')
Worked out example
>>> sequences
[[4, 1, 0],
 [4, 0, 3],
 [3, 4, 2]]
>>> dimension = 10
>>> mlb = MultiLabelBinarizer(classes=range(dimension))
>>> mlb.fit_transform(sequences)
array([[1, 1, 0, 0, 1, 0, 0, 0, 0, 0],
       [1, 0, 0, 1, 1, 0, 0, 0, 0, 0],
       [0, 0, 1, 1, 1, 0, 0, 0, 0, 0]])
>>> (np.array(sequences)[:, :, None] == range(dimension)).any(1).view('i1')
array([[1, 1, 0, 0, 1, 0, 0, 0, 0, 0],
       [1, 0, 0, 1, 1, 0, 0, 0, 0, 0],
       [0, 0, 1, 1, 1, 0, 0, 0, 0, 0]], dtype=int8)
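For ragged sequences, where the broadcasting trick does not apply, a hedged pure-NumPy sketch using advanced indexing (essentially a vectorized form of Chollet's loop; the example data below is made up):

import numpy as np

sequences = [[4, 1, 0], [4, 0, 3, 2], [3, 4]]  # ragged, hypothetical
dimension = 10

# One row index per element of each sequence, paired with its column index.
rows = np.repeat(np.arange(len(sequences)), [len(s) for s in sequences])
cols = np.concatenate(sequences)
results = np.zeros((len(sequences), dimension), dtype='i1')
results[rows, cols] = 1  # multi-hot row per sequence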

How to get row indices where a row slice contains only a single value (0)

With the numpy array
arr = np.array([[1, 1, 0, 0, 0, 1], [1, 1, 0, 0, 1, 1], [1, 1, 0, 0, 0, 1]])
I would like to get the indices of all rows where the row slice 2:5 contains all zeros.
In the above example, it should return rows 0 and 2.
I tried:
zero_indices = np.where(not np.any(arr[:,2:5]))
but it doesn't seem to work.
I'm trying to do this over a large array with several million rows.
Try this:
np.nonzero((~arr[:, 2:5].astype(bool)).all(1))[0]
Out[133]: array([0, 2], dtype=int32)
Or:
np.nonzero((arr[:, 2:5] == 0).all(1))[0]
Out[139]: array([0, 2], dtype=int32)
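Both expressions are fully vectorized and should scale to millions of rows. As a self-contained sketch of the second form (np.flatnonzero is shorthand for np.nonzero(...)[0]):

import numpy as np

arr = np.array([[1, 1, 0, 0, 0, 1],
                [1, 1, 0, 0, 1, 1],
                [1, 1, 0, 0, 0, 1]])

# Indices of rows whose columns 2:5 are all zero.
zero_indices = np.flatnonzero((arr[:, 2:5] == 0).all(axis=1))
print(zero_indices)  # [0 2]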

scipy: Adding a sparse vector to a specific row of a sparse matrix

In Python, what is the best way to add a CSR vector to a specific row of a CSR matrix? I found one workaround here, but am wondering if there is a better / more efficient way to do this. Would appreciate any help.
Given an NxM CSR matrix A, a 1xM CSR matrix B, and a row index i, the goal is to add B to the i-th row of A efficiently.
The obvious indexed addition does work. It gives an efficiency warning, but that doesn't mean it is the slowest way, just that you shouldn't count on doing this repeatedly. It suggests working with the lil format, but conversion to that and back probably takes more time than performing the addition on the csr matrix.
In [1049]: B.A
Out[1049]:
array([[0, 9, 0, 0, 1, 0],
       [2, 0, 5, 0, 0, 9],
       [0, 2, 0, 0, 0, 0],
       [2, 0, 0, 0, 0, 0],
       [0, 9, 5, 3, 0, 7],
       [1, 0, 0, 8, 9, 0]], dtype=int32)
In [1051]: B[1,:] += np.array([1,0,1,0,0,0])
/usr/local/lib/python3.5/dist-packages/scipy/sparse/compressed.py:730: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
SparseEfficiencyWarning)
In [1052]: B
Out[1052]:
<6x6 sparse matrix of type '<class 'numpy.int32'>'
with 17 stored elements in Compressed Sparse Row format>
In [1053]: B.A
Out[1053]:
array([[0, 9, 0, 0, 1, 0],
       [3, 0, 6, 0, 0, 9],
       [0, 2, 0, 0, 0, 0],
       [2, 0, 0, 0, 0, 0],
       [0, 9, 5, 3, 0, 7],
       [1, 0, 0, 8, 9, 0]])
As your linked question shows, it is possible to act directly on the attributes of the sparse matrix. The code there shows why there's an efficiency warning: in the general case, it has to rebuild the matrix attributes.
lil is more efficient for row replacement because it just has to change a sublist in the matrix's .data and .rows attributes. A change in one row doesn't change the attributes of any of the others.
That said, if your addition has the same sparsity as the original row, it is possible to change specific elements of the .data attribute without reworking .indices or .indptr. Drawing on the linked code,
A.data[idx_start_row : idx_end_row]
is the slice of A.data that will be changed. You of course need the corresponding slice from the 'vector'.
Starting over with the B shown at In [1049]:
In [1085]: B.indptr
Out[1085]: array([ 0, 2, 5, 6, 7, 11, 14], dtype=int32)
In [1086]: B.data
Out[1086]: array([9, 1, 2, 5, 9, 2, 2, 9, 5, 3, 7, 1, 8, 9], dtype=int32)
In [1087]: B.indptr[[1,2]] # row 1
Out[1087]: array([2, 5], dtype=int32)
In [1088]: B.data[2:5]
Out[1088]: array([2, 5, 9], dtype=int32)
In [1089]: B.indices[2:5] # row 1 column indices
Out[1089]: array([0, 2, 5], dtype=int32)
In [1090]: B.data[2:5] += np.array([1,2,3])
In [1091]: B.A
Out[1091]:
array([[ 0,  9,  0,  0,  1,  0],
       [ 3,  0,  7,  0,  0, 12],
       [ 0,  2,  0,  0,  0,  0],
       [ 2,  0,  0,  0,  0,  0],
       [ 0,  9,  5,  3,  0,  7],
       [ 1,  0,  0,  8,  9,  0]], dtype=int32)
Notice where the changed values, [3,7,12], are in the lil format:
In [1092]: B.tolil().data
Out[1092]: array([[9, 1], [3, 7, 12], [2], [2], [9, 5, 3, 7], [1, 8, 9]], dtype=object)
csr / csc matrices are efficient for most operations, including addition (O(nnz)). However, small changes that affect the sparsity structure, such as your example, or even switching a single position from 0 to 1, are not, because they require an O(nnz) reorganisation of the representation. Values and indices are packed; insert one, and everything after it has to move.
If you do just a single such operation, my guess would be that you can't easily beat scipy's implementation. However, if you are adding multiple rows, for example, it may be worthwhile to first make a sparse matrix of them and then add that in one go.
Creating a csr matrix by hand from rows, say, is not that difficult. For example, if your rows are dense and in order:
row_numbers, indices = np.where(rows)
data = rows[row_numbers, indices]
indptr = np.searchsorted(np.r_[true_row_numbers[row_numbers], N], np.arange(N+1))
If you have a collection of sparse rows and their row numbers:
data = np.r_[tuple(r.data for r in rows)]
indices = np.r_[tuple(r.indices for r in rows)]
jumps = np.add.accumulate([0] + [r.nnz for r in rows])
indptr = np.repeat(jumps, np.diff(np.r_[-1, true_row_numbers, N]))
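As a concrete, hedged sketch of the "add it in one go" idea (a single row here, but the same construction extends to several rows): build a same-shaped CSR update matrix directly from the row vector's attributes, then perform one sparse addition. The values below reuse the matrices from the session above.

import numpy as np
from scipy import sparse

A = sparse.csr_matrix(np.array([[0, 9, 0, 0, 1, 0],
                                [2, 0, 5, 0, 0, 9],
                                [0, 2, 0, 0, 0, 0],
                                [2, 0, 0, 0, 0, 0],
                                [0, 9, 5, 3, 0, 7],
                                [1, 0, 0, 8, 9, 0]]))
b = sparse.csr_matrix(np.array([[1, 0, 1, 0, 0, 0]]))
i = 1  # target row
N = A.shape[0]

# indptr: rows before i store nothing; row i stores all of b's values.
indptr = np.concatenate([np.zeros(i + 1, dtype=int),
                         np.full(N - i, b.nnz, dtype=int)])
update = sparse.csr_matrix((b.data, b.indices, indptr), shape=A.shape)

A = A + update  # one O(nnz) addition, no SparseEfficiencyWarning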