I have two embedding tensors A and B, which look like
[
[1,1,1],
[1,1,1]
]
and
[
[0,0,0],
[1,1,1]
]
what I want to do is calculate the (squared) L2 distance d(A, B) row by row.
First I did a tf.square(tf.sub(lhs, rhs)) to get
[
[1,1,1],
[0,0,0]
]
and then I want to do a row-wise reduction which returns
[
3,
0
]
but tf.reduce_sum does not seem to let me reduce by row. Any input would be appreciated. Thanks.
Add the reduction_indices argument with a value of 1, e.g.:
tf.reduce_sum(tf.square(tf.sub(lhs, rhs)), 1)
That should produce the result you're looking for. Here is the documentation on reduce_sum().
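For reference, a minimal self-contained sketch using the tensors from the question (this assumes a TF 1.x graph session; note that tf.sub was later renamed tf.subtract and reduction_indices became axis):
import tensorflow as tf

lhs = tf.constant([[1., 1., 1.], [1., 1., 1.]])
rhs = tf.constant([[0., 0., 0.], [1., 1., 1.]])
# element-wise squared differences, summed over each row
dist = tf.reduce_sum(tf.square(tf.subtract(lhs, rhs)), 1)

with tf.Session() as sess:
    print(sess.run(dist))  # [3. 0.]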
According to the TensorFlow documentation, reduce_sum takes the following arguments:
tf.reduce_sum(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)
However, reduction_indices has been deprecated, so it is better to use axis instead. If axis is not set, the tensor is reduced over all of its dimensions.
As an example (taken from the documentation):
# 'x' is [[1, 1, 1]
# [1, 1, 1]]
tf.reduce_sum(x) ==> 6
tf.reduce_sum(x, 0) ==> [2, 2, 2]
tf.reduce_sum(x, 1) ==> [3, 3]
tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
tf.reduce_sum(x, [0, 1]) ==> 6
The above requirement can then be written like this:
import numpy as np
import tensorflow as tf

a = np.array([[1, 1, 1], [1, 1, 1]])
b = np.array([[0, 0, 0], [1, 1, 1]])

xtr = tf.placeholder("float", [None, 3])
xte = tf.placeholder("float", [None, 3])

# squared element-wise differences, summed over axis 1 (per row)
pred = tf.reduce_sum(tf.square(tf.subtract(xtr, xte)), 1)

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    nn_index = sess.run(pred, feed_dict={xtr: a, xte: b})
    print(nn_index)  # [3. 0.]
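If you are on TensorFlow 2.x, placeholders and sessions are no longer available; a sketch of roughly the same computation in eager mode (an adaptation, not part of the original answer) would be:
import numpy as np
import tensorflow as tf

a = np.array([[1, 1, 1], [1, 1, 1]], dtype=np.float32)
b = np.array([[0, 0, 0], [1, 1, 1]], dtype=np.float32)

# row-wise sum of squared differences
pred = tf.reduce_sum(tf.square(tf.subtract(a, b)), axis=1)
print(pred.numpy())  # [3. 0.]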
I have a dataset which contains many snapshot observations in time, with a 1 or 0 as the label for each observation. Let's say each observation contains 3 features. I want to train an LSTM that will take a sequence of n observations and attempt to classify the nth observation as a 1 or 0.
So if we have a dataset that looks like this:
# X = [[0, 1, 1], [1, 0, 0], [1, 1, 1], [1, 1, 0]]
# y = [1, 0, 1, 0]
# so X[0] corresponds to y[0], X[1] to y[1], and so on
# I would like to input X[0] + X[1] to classify X[1] as y[1]
# How would I need to structure this below?
X = [[0, 1, 1], [1, 0, 0], [1, 1, 1], [1, 1, 0]]
y = [1, 0, 1, 0]
def create_model():
    model = Sequential()
    # input_shape[0] is equal to 2 timesteps?
    # input_shape[1] is equal to the 3 features per row?
    model.add(LSTM(20, input_shape=(2, 3)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model

m = create_model()
m.fit(X, y)
So I want X[0] and X[1] to be the input for one training sample, and that sample should be classified as y[1].
My question is this. How do I structure the model in order to take this input properly? I am very confused by input_shape, features, input_length, batches etc ...
The below code snippet might help clarify:
from keras.models import Sequential
from keras.layers import LSTM, Dense
import numpy as np
# Number of samples = 4, sequence length = 3, features = 2
X = np.array([[[0, 1], [1, 0], [1, 1]],
              [[1, 1], [1, 1], [1, 0]],
              [[0, 1], [1, 0], [0, 0]],
              [[1, 1], [1, 1], [1, 1]]])
y = np.array([[1], [0], [1], [0]])
print(X)
print(X.shape)
print(y.shape)
model = Sequential()
model.add(LSTM(20, input_shape=(3, 2)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(X, y)
Also, on the Keras documentation page: https://keras.io/getting-started/sequential-model-guide/ look at the example for "Stacked LSTM for sequence classification" near the bottom. It might help.
In general, when using Keras, the batch/sample dimension is not specified in layers; it is automatically inferred from the input data.
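To connect this back to the flat X and y from the question, here is a rough sketch of how one might build the overlapping two-step windows (my own illustration, assuming each window is labelled with the label of its last observation):
import numpy as np

X = np.array([[0, 1, 1], [1, 0, 0], [1, 1, 1], [1, 1, 0]], dtype=np.float32)
y = np.array([1, 0, 1, 0], dtype=np.float32)

timesteps = 2
# windows[i] stacks X[i] and X[i+1]; its label is y[i+1]
windows = np.stack([X[i:i + timesteps] for i in range(len(X) - timesteps + 1)])
labels = y[timesteps - 1:]

print(windows.shape)  # (3, 2, 3) -> (samples, timesteps, features)
print(labels.shape)   # (3,)
# these shapes match input_shape=(2, 3) in the model from the question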
I hope this helps.
You have the input shape correct.
I would reshape the input data to be (batch_size, timesteps, features)
m = create_model()
# reshape() returns a new array, so assign the result before fitting
X = np.asarray(X).reshape((-1, 2, 3))  # (batch_size, timesteps, features)
m.fit(X, y)
Common batch sizes are 4, 8, 16 and 32, but for a small dataset the impact of the batch size is less important.
And when you want to predict, use batch_size = 1.
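As a rough illustration of that last point (assuming m is the model trained above on windows of shape (2, 3)):
import numpy as np

# one window of 2 timesteps with 3 features each, i.e. a batch of size 1
x_new = np.array([[[1, 0, 0], [1, 1, 1]]], dtype=np.float32)  # shape (1, 2, 3)
prob = m.predict(x_new, batch_size=1)
print(prob)  # something like [[0.53]]: the predicted probability of class 1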
I have a tensor X of shape (N,...) and a boolean index mask mask of shape N. I want to shuffle the subarray of X given by mask along the first axis.
How can this be done non-eagerly and, if possible, in place?
Note: I do not need gradients.
You can do that like this:
import tensorflow as tf

def shuffle_mask(x, mask, seed=None):
    n = tf.size(mask)
    # Get masked indices
    idx_masked = tf.cast(tf.where(mask), n.dtype)
    # Shuffle masked indices
    idx_masked_shuffled = tf.random.shuffle(tf.squeeze(idx_masked, 1), seed=seed)
    # Scatter shuffled indices into place
    idx_masked_shuffled_scat = tf.scatter_nd(idx_masked, idx_masked_shuffled, [n])
    # Combine shuffled and non-shuffled indices
    idx_shuffled = tf.where(mask, idx_masked_shuffled_scat, tf.range(n))
    # Gather using resulting indices
    return tf.gather(x, idx_shuffled)

# Test
with tf.Graph().as_default(), tf.Session() as sess:
    tf.random.set_random_seed(0)
    x = tf.constant([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]])
    mask = tf.constant([True, False, True, True, False])
    y = shuffle_mask(x, mask)
    print(sess.run(y))
    # [[6 7]
    #  [2 3]
    #  [0 1]
    #  [4 5]
    #  [8 9]]
You cannot do the operation "in place", as there are no in-place operations at all in TensorFlow. Tensors are constant, so you will always be replacing one tensor with another.
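If it helps, the same function should also work eagerly under TF 2.x, where the test simply becomes (a sketch; shuffle_mask is the function defined above):
import tensorflow as tf

x = tf.constant([[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]])
mask = tf.constant([True, False, True, True, False])
# rows 0, 2 and 3 are permuted among themselves; rows 1 and 4 stay in place
print(shuffle_mask(x, mask, seed=0).numpy())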
Suppose I have a tensor A of shape (m, n). I would like to randomly sample k elements (without replacement) from each row, resulting in a tensor B of shape (m, k). How can I do that in TensorFlow?
An example would be:
A: [[1,2,3], [4,5,6], [7,8,9], [10,11,12]]
k: 2
B: [[1,3],[5,6],[9,8],[12,10]]
This is a way to do that:
import tensorflow as tf

with tf.Graph().as_default(), tf.Session() as sess:
    tf.random.set_random_seed(0)
    a = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], tf.int32)
    k = tf.constant(2, tf.int32)
    # Transpose, shuffle, slice, undo transpose
    aT = tf.transpose(a)
    aT_shuff = tf.random.shuffle(aT)
    aT_shuff_k = aT_shuff[:k]
    result = tf.transpose(aT_shuff_k)
    print(sess.run(result))
    # [[ 3  1]
    #  [ 6  4]
    #  [ 9  7]
    #  [12 10]]
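Under TF 2.x the same transpose / shuffle / slice idea can be written eagerly (a sketch of the same approach, not part of the original answer). Note that, as above, this picks the same shuffled set of columns for every row:
import tensorflow as tf

a = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], tf.int32)
k = 2

# shuffle the rows of the transpose (i.e. the columns of a), keep k of them
result = tf.transpose(tf.random.shuffle(tf.transpose(a))[:k])
print(result.numpy())  # e.g. [[ 2  1] [ 5  4] [ 8  7] [11 10]]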
As above. I tried those to no avail:
tf.random.shuffle( (a,b) )
tf.random.shuffle( zip(a,b) )
I used to concatenate them, do the shuffling, and then un-concatenate / unpack. But now I'm in a situation where (a) is a rank-4 tensor while (b) is 1-D, so there is no way to concatenate them.
I also tried giving the seed argument to the shuffle method so that it reproduces the same shuffling when I use it twice => failed. I also tried doing the shuffling myself with a randomly shuffled range of numbers, but TF is not as flexible as NumPy with fancy indexing and the like => failed.
What I'm doing now is converting everything back to NumPy, using shuffle from sklearn, and then going back to tensors by recasting. That is a plainly stupid way to do it; this is supposed to happen inside the graph.
You could just shuffle the indices and then use tf.gather() to extract values corresponding to those shuffled indices:
TF2.x (UPDATE)
import tensorflow as tf
import numpy as np
x = tf.convert_to_tensor(np.arange(5))
y = tf.convert_to_tensor(['a', 'b', 'c', 'd', 'e'])
indices = tf.range(start=0, limit=tf.shape(x)[0], dtype=tf.int32)
shuffled_indices = tf.random.shuffle(indices)
shuffled_x = tf.gather(x, shuffled_indices)
shuffled_y = tf.gather(y, shuffled_indices)
print('before')
print('x', x.numpy())
print('y', y.numpy())
print('after')
print('x', shuffled_x.numpy())
print('y', shuffled_y.numpy())
# before
# x [0 1 2 3 4]
# y [b'a' b'b' b'c' b'd' b'e']
# after
# x [4 0 1 2 3]
# y [b'e' b'a' b'b' b'c' b'd']
TF1.x
import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, (None, 1, 1, 1))
y = tf.placeholder(tf.int32, (None))
indices = tf.range(start=0, limit=tf.shape(x)[0], dtype=tf.int32)
shuffled_indices = tf.random.shuffle(indices)
shuffled_x = tf.gather(x, shuffled_indices)
shuffled_y = tf.gather(y, shuffled_indices)
Make sure that you compute shuffled_x, shuffled_y in the same session run. Otherwise they might get different index orderings.
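To make that concrete, this is the pattern to avoid (a sketch; each separate run re-evaluates tf.random.shuffle, so the two results may come from different permutations):
# Don't do this: two separate session runs may use two different permutations
# x_res = sess.run(shuffled_x, feed_dict={x: x_data, y: y_data})
# y_res = sess.run(shuffled_y, feed_dict={x: x_data, y: y_data})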
# Testing
x_data = np.concatenate([np.zeros((1, 1, 1, 1)),
                         np.ones((1, 1, 1, 1)),
                         2 * np.ones((1, 1, 1, 1))]).astype('float32')
y_data = np.arange(4, 7, 1)
print('Before shuffling:')
print('x:')
print(x_data.squeeze())
print('y:')
print(y_data)
with tf.Session() as sess:
    x_res, y_res = sess.run([shuffled_x, shuffled_y],
                            feed_dict={x: x_data, y: y_data})
print('After shuffling:')
print('x:')
print(x_res.squeeze())
print('y:')
print(y_res)
Before shuffling:
x:
[0. 1. 2.]
y:
[4 5 6]
After shuffling:
x:
[1. 2. 0.]
y:
[5 6 4]
I am reading the tests in the TensorFlow MNIST official model. Line 49 has:
self.assertEqual(loss.shape, ())
and selected lines leading up to it are:
BATCH_SIZE = 100

def dummy_input_fn():
    image = tf.random_uniform([BATCH_SIZE, 784])
    labels = tf.random_uniform([BATCH_SIZE, 1], maxval=9, dtype=tf.int32)
    return image, labels

def make_estimator():
    return tf.estimator.Estimator(
        model_fn=mnist.model_fn, params={
            'data_format': 'channels_last'
        })
class Tests(tf.test.TestCase):
    """Run tests for MNIST model."""

    def test_mnist(self):
        classifier = make_estimator()
        classifier.train(input_fn=dummy_input_fn, steps=2)
        eval_results = classifier.evaluate(input_fn=dummy_input_fn, steps=1)
        loss = eval_results['loss']
        self.assertEqual(loss.shape, ())
but the TensorFlow documentation suggests that a shape is an array of numbers:
t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
tf.shape(t) # [2, 2, 3]
These two statements that print the shape of the object don't help much:
print(loss.shape)
# prints `()`
print(tf.shape(loss))
# prints `Tensor("Shape:0", shape=(0,), dtype=int32)`
What is the meaning of a () shape?
Your loss is a NumPy object and not a TensorFlow object:
print(type(loss))
# prints <class 'numpy.float32'>
print(loss)
# prints 2.2745261
I assume that a shape of () in NumPy means a scalar, though I could not find documentation for it. You can see the list of the object's attributes (fields and methods) with:
print(dir(loss))
# prints `['T', '__abs__', '__add__', '__and__',
# ... 'shape', 'size', 'sort', ... ]`
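A quick NumPy-only sketch (independent of the MNIST test) that illustrates the point:
import numpy as np

loss = np.float32(2.2745261)   # a NumPy scalar, like the value returned for 'loss'
print(type(loss))              # <class 'numpy.float32'>
print(loss.shape)              # () -> zero dimensions, i.e. a scalar
print(np.array(5).shape)       # () as well: a 0-d array is also scalar-shaped
print(np.array([5]).shape)     # (1,): a 1-d array with one element is different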