Equivalent of np.isin for TensorFlow

I have categories as a list of lists of integers, as shown below:
categories = [
    [0, 2, 4, 6, 8],
    [1, 3, 5, 7, 9]
]
I have a label tensor y with num_batches integers (the class labels):
y = tf.constant([0, 1, 1, 2, 5, 4, 7, 9, 3, 3])
I want to replace each value in y with the index of the category that contains it (say 0 for even, 1 for odd), so that the final result would be:
cat_labels = tf.constant([0, 1, 1, 0, 1, 0, 1, 1, 1, 1])
I can get it by iterating through each value in y like below:
cat_labels = tf.Variable(tf.identity(y))
for idx in range(len(categories)):
    for i, _y in enumerate(y):
        if _y in categories[idx]:      # if _y value is in categories[idx]
            cat_labels[i].assign(idx)  # replace all of them with idx
But iterating like this is not allowed when the block is wrapped in a @tf.function parent function.
Is there a way to apply this logic without iterating or converting to NumPy and applying np.isin, while still getting the speedups of tf.function?
Edit: There seem to be workarounds for this like here, but any help explaining them in the context of this use case would be appreciated.

You can try this:
import tensorflow as tf

y = tf.constant([0., 1., 1., 2., 5., 4., 7., 9., 3., 3.], dtype=tf.float32)
categories = [[0, 2, 4, 6, 8], [1, 3, 5, 7, 9]]
c = tf.convert_to_tensor(categories, dtype=tf.float32)

cat_labels = tf.map_fn(              # apply an operation to every element of y
    lambda x: tf.gather_nd(          # pick the category index: 0, 1, or anything else
        tf.cast(                     # cast the dtype of the result of the inner function
            tf.where(                # get the index of the element of y in categories
                tf.equal(c, x)),     # search for an element of y within categories
            dtype=tf.float32),
        [0, 0]),
    y)

tf.print(cat_labels, summarize=-1)
# [0 1 1 0 1 0 1 1 1 1]
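If you would rather avoid tf.map_fn, here is a broadcast-based sketch of the same mapping (my own alternative, separate from the answer above; it assumes every value in y appears in exactly one row of categories):
import tensorflow as tf

y = tf.constant([0, 1, 1, 2, 5, 4, 7, 9, 3, 3])
categories = tf.constant([[0, 2, 4, 6, 8],
                          [1, 3, 5, 7, 9]])

# hits[i, j, k] is True where y[i] == categories[j, k]
hits = tf.equal(y[:, None, None], categories[None, :, :])
# collapse the within-category axis, then take the index of the matching row per label
cat_labels = tf.argmax(tf.cast(tf.reduce_any(hits, axis=-1), tf.int32), axis=-1)
tf.print(cat_labels, summarize=-1)  # [0 1 1 0 1 0 1 1 1 1]
Because it is pure tensor ops, it should also work unchanged inside a @tf.function.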

Related

Conditional value on tensor relative to element neighbors

I am implementing the Canny algorithm using Tensorflow (this is needed to use the borders as an evaluation metric, but this is off topic). One of the steps is to calculate the "Non-maximum Suppression", which consists in zeroing the center element in a 3x3 region, unless two specific neighbors are smaller. More details here.
How can I achieve this operation using Tensorflow?
I am actually using Keras, but a TensorFlow solution will work as well. For reference, my code so far looks like this:
import numpy as np
from keras import backend as K

def canny(img):
    '''Canny border detection. The input should be a grayscale image.'''
    gauss_kernel = np.array([[2,  4,  5,  4, 2],
                             [4,  9, 12,  9, 4],
                             [5, 12, 15, 12, 5],
                             [4,  9, 12,  9, 4],
                             [2,  4,  5,  4, 2]]).reshape(5, 5, 1, 1)
    gauss_kernel = K.variable(1. / 159 * gauss_kernel)
    Gx = K.variable(np.array([[-1., 0., 1.],
                              [-2., 0., 2.],
                              [-1., 0., 1.]]).reshape(3, 3, 1, 1))
    Gy = K.variable(np.array([[-1., -2., -1.],
                              [ 0.,  0.,  0.],
                              [ 1.,  2.,  1.]]).reshape(3, 3, 1, 1))
    # Smooth image
    smoothed = K.conv2d(img, gauss_kernel, padding='same')
    # Derivative in x
    Dx = K.conv2d(smoothed, Gx, padding='same')
    # Derivative in y
    Dy = K.conv2d(smoothed, Gy, padding='same')
    # Take gradient strength
    G = K.sqrt(K.square(Dx) + K.square(Dy))
    # TODO: Non-maximum Suppression & Hysteresis Thresholding
    return G
You could use convolutional filters to segregate the two target pixels and make them "concentric" with the central pixel.
For comparing with two target pixels, for instance, we could use this filter, shaped as (3, 3, 1, 2) -- One input channel, two output channels. Each channel will return one of the target pixels.
The filter should have 1 at the target pixels. And the rest are zeros:
# taking two diagonal pixels
filter = np.zeros((3, 3, 1, 2))
filter[0, 0, 0, 0] = 1  # first pixel is top/left, passed to the first channel
filter[2, 2, 0, 1] = 1  # second pixel is bottom/right, passed to the second channel
# which ones are really bottom or top, left or right depends on your preprocessing,
# but they should be consistent with the rest of your modeling

filter = K.variable(filter)
If you're taking top and bottom, or left and right, you can make smaller filters. They don't need to be 3x3 (that's no problem either); 1x3 or 3x1 is enough:
filter1 = np.zeros((1, 3, 1, 2))  # horizontal filter
filter2 = np.zeros((3, 1, 1, 2))  # vertical filter

filter1[0, 0, 0, 0] = 1  # left pixel   - if filter is 3x3: [1, 0, 0, 0]
filter1[0, 2, 0, 1] = 1  # right pixel  - if filter is 3x3: [1, 2, 0, 1]
filter1 = K.variable(filter1)

filter2[0, 0, 0, 0] = 1  # top pixel    - if filter is 3x3: [0, 1, 0, 0]
filter2[2, 0, 0, 1] = 1  # bottom pixel - if filter is 3x3: [2, 1, 0, 1]
filter2 = K.variable(filter2)
Then you apply these as convolutions. You will get one channel for one pixel, and the other channel for the other pixel. You can then compare them as if they were all in the same place, just in different channels:
targetPixels = K.conv2d(originalImages, kernel=filter, padding='same')

# two channels telling if the center pixel is greater than the pixel in that channel
isGreater = K.greater(originalImages, targetPixels)

# merging the two channels, considering they're 0 for false and 1 for true
isGreater = K.cast(isGreater, K.floatx())
isGreater = isGreater[:, :, :, :1] * isGreater[:, :, :, 1:]

# now, the center pixel will remain if isGreater = 1 at that position:
result = originalImages * isGreater
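As a rough illustration of how this could plug into the canny() function above, here is a sketch for the horizontal case only (an assumption on my part; full Canny non-maximum suppression would pick the neighbor pair per pixel from the gradient direction computed from Dx and Dy):
import numpy as np
from keras import backend as K

def suppress_horizontal_non_maxima(G):
    # Keep a pixel of the gradient magnitude G only if it is greater than
    # both its left and right neighbors (sketch for one direction only).
    filter1 = np.zeros((1, 3, 1, 2))
    filter1[0, 0, 0, 0] = 1  # left neighbor  -> channel 0
    filter1[0, 2, 0, 1] = 1  # right neighbor -> channel 1
    neighbors = K.conv2d(G, K.variable(filter1), padding='same')
    is_greater = K.cast(K.greater(G, neighbors), K.floatx())
    mask = is_greater[:, :, :, :1] * is_greater[:, :, :, 1:]
    return G * mask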

Numpy compare values inside to return greater index

I have a list of numpy arrays:
[array([-1.67397643, -2.77258872]), array([-1.67397643, -2.77258872]), array([-2.77258872, -1.67397643]), array([-2.77258872, -1.67397643])]
Which index position inside each array "wins", i.e. holds the greater value? Here -1.67397643 > -2.77258872, so for the first array the answer would be 0.
The final output would be [0, 0, 1, 1] (a list is fine too).
How can I do that?
It seems you have a list of arrays, so I would start by making them a proper numpy array:
from numpy import array
import numpy as np

a = [array([-1.67397643, -2.77258872]), array([-1.67397643, -2.77258872]),
     array([-2.77258872, -1.67397643]), array([-2.77258872, -1.67397643])]
b = np.array(a).T  # .T transposes it
c = b[0] < b[1]
c is now an array([False, False, True, True], dtype=bool), and probably serves your purpose. If you must have [0,0,1,1] instead, then:
d = np.zeros(len(c))
d[c] = 1
d is now an array([ 0., 0., 1., 1.])
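If integer indices are the goal, a shorter route (a sketch, assuming each inner array holds exactly the values to compare) is np.argmax along axis 1:
import numpy as np

a = np.array([[-1.67397643, -2.77258872], [-1.67397643, -2.77258872],
              [-2.77258872, -1.67397643], [-2.77258872, -1.67397643]])
winners = np.argmax(a, axis=1)  # index of the larger value in each row
print(winners)           # [0 0 1 1]
print(winners.tolist())  # [0, 0, 1, 1] as a plain list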

How can I find a basis for the column space of a rectangular matrix?

Given a numpy ndarray with dimensions m by n (where n>m), how can I find the linearly independent columns?
One way is to use the LU decomposition. The factor U will be of the same size as your matrix, but will be upper-triangular. In each row of U, pick the first nonzero element: these are pivot elements, which belong to linearly independent columns. A self-contained example:
import numpy as np
from scipy.linalg import lu
A = np.array([[1, 2, 3], [2, 4, 2]]) # example for testing
U = lu(A)[2]
lin_indep_columns = [np.flatnonzero(U[i, :])[0] for i in range(U.shape[0])]
Output: [0, 2], which means the 0th and 2nd columns of A form a basis for its column space.
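As a quick sanity check (a sketch, using the example A above), the selected columns should have the same rank as A itself:
import numpy as np

A = np.array([[1, 2, 3], [2, 4, 2]])
basis = A[:, [0, 2]]  # columns picked from the pivots of U
assert np.linalg.matrix_rank(basis) == np.linalg.matrix_rank(A) == 2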
@user6655984's answer inspired this code; I developed a function to replace the author's last line (finding the pivot columns of U) so that it can handle a wider range of matrices A.
Here it is:
import numpy as np
from scipy import linalg as LA

np.set_printoptions(precision=1, suppress=True)

A = np.array([[1, 4, 1, -1],
              [2, 5, 1, -2],
              [3, 6, 1, -3]])
P, L, U = LA.lu(A)
print('P', P, '', 'L', L, '', 'U', U, sep='\n')
Output:
P
[[0. 1. 0.]
 [0. 0. 1.]
 [1. 0. 0.]]

L
[[1.  0.  0. ]
 [0.3 1.  0. ]
 [0.7 0.5 1. ]]

U
[[ 3.   6.   1.  -3. ]
 [ 0.   2.   0.7 -0. ]
 [ 0.   0.  -0.  -0. ]]
I came up with this function:
def get_indices_for_linearly_independent_columns_of_A(U: np.ndarray) -> list:
    # I should first convert all "-0."s to "0." so that nonzero() can find them.
    U_copy = U.copy()
    U_copy[abs(U_copy) < 1.e-7] = 0
    # Because some rows in U may not have even one nonzero element,
    # I have to find the index of the first one in two steps.
    index_of_all_nonzero_cols_in_each_row = (
        [U_copy[i, :].nonzero()[0] for i in range(U_copy.shape[0])]
    )
    index_of_first_nonzero_col_in_each_row = (
        [indices[0] for indices in index_of_all_nonzero_cols_in_each_row
         if len(indices) > 0]
    )
    # Because two rows or more may have the same index
    # for their first nonzero element, I should remove duplicates.
    unique_indices = sorted(list(set(index_of_first_nonzero_col_in_each_row)))
    return unique_indices
Finally:
col_sp_A = A[:, get_indices_for_linearly_independent_columns_of_A(U)]
print(col_sp_A)
Output:
[[1 4]
[2 5]
[3 6]]
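For comparison, a sketch of an alternative route (an assumption that SymPy is acceptable here): Matrix.rref() returns the pivot column indices directly, and it agrees with the result above for this A.
import numpy as np
from sympy import Matrix

A = np.array([[1, 4, 1, -1],
              [2, 5, 1, -2],
              [3, 6, 1, -3]])
_, pivot_cols = Matrix(A).rref()  # rref() returns (reduced matrix, pivot column indices)
print(pivot_cols)                 # (0, 1)
print(A[:, list(pivot_cols)])     # same basis columns as the output above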
Try this one:
def LU_decomposition(A):
    """
    Perform LU decomposition of a given matrix
    Args:
        A: the given matrix
    Returns: P, L and U, s.t. PA = LU
    """
    assert A.shape[0] == A.shape[1]
    N = A.shape[0]
    P_idx = np.arange(0, N, dtype=np.int16).reshape(-1, 1)
    for i in range(N - 1):
        pivot_loc = np.argmax(np.abs(A[i:, [i]])) + i
        if pivot_loc != i:
            A[[i, pivot_loc], :] = A[[pivot_loc, i], :]
            P_idx[[i, pivot_loc], :] = P_idx[[pivot_loc, i], :]
        A[i + 1:, i] /= A[i, i]
        A[i + 1:, i + 1:] -= A[i + 1:, [i]] * A[[i], i + 1:]
    U, L, P = np.zeros_like(A), np.identity(N), np.zeros((N, N), dtype=np.int16)
    for i in range(N):
        L[i, :i] = A[i, :i]
        U[i, i:] = A[i, i:]
        P[i, P_idx[i][0]] = 1
    return P.astype(np.float64), L, U

def get_bases(A):
    assert A.ndim == 2
    # gaussian_elimination is assumed to return a row-echelon form of A,
    # e.g. the U factor from a decomposition like the one above.
    Q = gaussian_elimination(A)
    M, N = Q.shape
    pivot_idxs = []
    for i in range(M):
        j = i
        while j < N and abs(Q[i, j]) < 1e-5:
            j += 1
        if j < N:
            pivot_idxs.append(j)
    return A[:, list(set(pivot_idxs))]
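Note that get_bases calls a gaussian_elimination helper that is not shown in the answer. A minimal sketch of what it could look like (my assumption: forward elimination with partial pivoting that returns a row-echelon form and also accepts rectangular matrices):
import numpy as np

def gaussian_elimination(A, eps=1e-12):
    """A sketch, not the original helper: forward elimination with partial
    pivoting, returning a row-echelon form; works for rectangular A too."""
    Q = A.astype(np.float64)
    M, N = Q.shape
    row = 0
    for col in range(N):
        if row >= M:
            break
        pivot = row + np.argmax(np.abs(Q[row:, col]))
        if abs(Q[pivot, col]) < eps:
            continue                         # no pivot in this column
        Q[[row, pivot]] = Q[[pivot, row]]    # partial pivoting: swap rows
        Q[row + 1:] -= (Q[row + 1:, [col]] / Q[row, col]) * Q[[row]]
        row += 1
    return Q
With such a helper in place, get_bases(A) returns the columns of A at the pivot positions of the echelon form.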

How to find an index of the first matching element in TensorFlow

I am looking for a TensorFlow way of implementing something similar to Python's list.index() function.
Given a matrix and a value to find, I want to know the first occurrence of the value in each row of the matrix.
For example,
m is a <batch_size, 100> matrix of integers
val = 23

result = [0] * batch_size
for i, row_elems in enumerate(m):
    result[i] = row_elems.index(val)
I cannot assume that 'val' appears only once in each row, otherwise I would have implemented it using tf.argmax(m == val). In my case, it is important to get the index of the first occurrence of 'val' and not any.
It seems that tf.argmax works like np.argmax (according to the test), which will return the first index when there are multiple occurrences of the max value.
You can use tf.argmax(tf.cast(tf.equal(m, val), tf.int32), axis=1) to get what you want. However, currently the behavior of tf.argmax is undefined in case of multiple occurrences of the max value.
If you are worried about the undefined behavior, you can apply tf.reduce_min on the return value of tf.where, as @Igor Tsvetkov suggested.
For example,
# test with tensorflow r1.0
import tensorflow as tf

val = 3
m = tf.placeholder(tf.int32)
m_feed = [[0,   0, val,   0, val],
          [val, 0, val, val,   0],
          [0, val,   0,   0,   0]]

tmp_indices = tf.where(tf.equal(m, val))
result = tf.segment_min(tmp_indices[:, 1], tmp_indices[:, 0])

with tf.Session() as sess:
    print(sess.run(result, feed_dict={m: m_feed}))  # [2, 0, 1]
Note that tf.segment_min will raise InvalidArgumentError when some row contains no val. In your code, row_elems.index(val) raises an exception too when row_elems does not contain val.
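For reference, roughly the same approach in TF 2.x eager mode (a sketch, assuming placeholders and sessions can be dropped; tf.segment_min now lives under tf.math.segment_min), still assuming every row contains val:
import tensorflow as tf

val = 3
m = tf.constant([[0,   0, val,   0, val],
                 [val, 0, val, val,   0],
                 [0, val,   0,   0,   0]])

hits = tf.where(tf.equal(m, val))                    # [row, col] pairs of matches
first = tf.math.segment_min(hits[:, 1], hits[:, 0])  # smallest matching column per row
print(first.numpy())  # [2 0 1]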
Looks a little ugly but works (assuming m and val are both tensors):
idx = list()
for t in tf.unpack(m, axis=0):
    idx.append(tf.reduce_min(tf.where(tf.equal(t, val))))
idx = tf.pack(idx, axis=0)
EDIT:
As Yaroslav Bulatov mentioned, you could achieve the same result with tf.map_fn:
def index1d(t):
    return tf.reduce_min(tf.where(tf.equal(t, val)))

idx = tf.map_fn(index1d, m, dtype=tf.int64)
Here is another solution to the problem, assuming there is a hit on every row.
import tensorflow as tf

val = 3
m = tf.constant([
    [0,   0, val,   0, val],
    [val, 0, val, val,   0],
    [0, val,   0,   0,   0]])

# replace all entries in the matrix either with their column index, or with an out-of-range number
match_indices = tf.where(                                 # [[5, 5, 2, 5, 4],
    tf.equal(val, m),                                     #  [0, 5, 2, 3, 5],
    x=tf.range(tf.shape(m)[1]) * tf.ones_like(m),         #  [5, 1, 5, 5, 5]]
    y=(tf.shape(m)[1]) * tf.ones_like(m))

result = tf.reduce_min(match_indices, axis=1)

with tf.Session() as sess:
    print(sess.run(result))  # [2, 0, 1]
Here is a solution which also handles the case where the element is not contained in a row (from a DeepMind GitHub repository):
def get_first_occurrence_indices(sequence, eos_idx):
    '''
    args:
        sequence: [batch, length]
        eos_idx: scalar
    '''
    batch_size, maxlen = sequence.get_shape().as_list()
    eos_idx = tf.convert_to_tensor(eos_idx)
    tensor = tf.concat(
        [sequence, tf.tile(eos_idx[None, None], [batch_size, 1])], axis=-1)
    index_all_occurrences = tf.where(tf.equal(tensor, eos_idx))
    index_all_occurrences = tf.cast(index_all_occurrences, tf.int32)
    index_first_occurrences = tf.segment_min(index_all_occurrences[:, 1],
                                             index_all_occurrences[:, 0])
    index_first_occurrences.set_shape([batch_size])
    index_first_occurrences = tf.minimum(index_first_occurrences + 1, maxlen)
    return index_first_occurrences
And:
import tensorflow as tf

mat = tf.Variable([[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7], [0, 0, 0, 0, 0]],
                  dtype=tf.int32)
idx = 3
first_occurrences = get_first_occurrence_indices(mat, idx)

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
sess.run(first_occurrences)  # [3, 2, 1, 5]  (the function returns first index + 1, capped at maxlen)

bad result from numpy corrcoef and minimum spanning tree

I have this code:
import math
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

mm = np.array([[1, 4, 7, 8], [2, 2, 8, 4], [1, 13, 1, 5]])
mm = np.column_stack(mm)
mmCov = np.cov(mm, rowvar=0)
print("covariance\n", mmCov)

# my code to get correlations
mmResCor = np.zeros(shape=(3, 3))
for i in range(len(mmCov)):
    for j in range(len(mmCov[i])):
        mmResCor[i][j] = mmCov[i][j] / (math.sqrt(mmCov[i][i] * mmCov[j][j]))
print("correlations by hand\n", mmResCor)

mmCor = np.corrcoef(mmCov, rowvar=0)
print("correlations\n", mmCor)

X = csr_matrix(mmCor)
XX = minimum_spanning_tree(X)
print("minimum spanning tree\n", XX)
first: each column represents a variable, with observations in the rows
numpy corrcoef uses this relation with the covariance matrix:
R_{ij} = \frac{C_{ij}}{\sqrt{C_{ii} \, C_{jj}}}
When I use numpy corrcoef I get this matrix:
correlations
[[ 1.          0.8660254  -0.82603319]
 [ 0.8660254   1.         -0.99717646]
 [-0.82603319 -0.99717646  1.        ]]
but when I apply "my code" to get the same result...
mmResCor = np.zeros(shape=(3, 3))
for i in range(len(mmCov)):
    for j in range(len(mmCov[i])):
        mmResCor[i][j] = mmCov[i][j] / (math.sqrt(mmCov[i][i] * mmCov[j][j]))
I get this matrix
correlations by hand
[[ 1.          0.67082039  0.        ]
 [ 0.67082039  1.         -0.5       ]
 [ 0.         -0.5         1.        ]]
Why do I get different results if I am supposedly doing the same thing?
One more question:
When I apply minimum_spanning_tree I get this:
minimum spanning tree
(0, 2) -0.826033187631
(1, 2) -0.997176464953
Is there any way to represent these, or can I save this result in some variables?
np.corrcoef should take the data as the input. You're passing the covariance matrix instead. If you pass the data, you get the same result as your manual computation:
>>> np.corrcoef(mm, rowvar=0)
array([[ 1.        ,  0.67082039,  0.        ],
       [ 0.67082039,  1.        , -0.5       ],
       [ 0.        , -0.5       ,  1.        ]])
Regarding the minimum spanning tree, I'm not sure what your question is, but the output XX is a sparse matrix which stores a matrix representation of the tree.
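To answer the second part: if the goal is to keep or print the tree, a small sketch (assuming XX is the csr_matrix returned by minimum_spanning_tree in the question, and reusing the correlation matrix printed above) could densify it or pull out the edge list:
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

mmCor = np.array([[ 1.        ,  0.8660254 , -0.82603319],
                  [ 0.8660254 ,  1.        , -0.99717646],
                  [-0.82603319, -0.99717646,  1.        ]])
XX = minimum_spanning_tree(csr_matrix(mmCor))

dense = XX.toarray()   # plain ndarray you can store like any other variable
coo = XX.tocoo()       # COO view exposes the row/col/data arrays of the edges
edges = list(zip(coo.row, coo.col, coo.data))
print(edges)           # the two edges of the tree: (0, 2) and (1, 2) with their weights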