Are TensorFlow operations pointwise?

If I do the following:
r = (x - mn) / std
where x is of shape (batchSize, 100), and mn and std are both of shape (1, 100).
Are the subtraction and division done pointwise? I would expect r to be of shape (batchSize, 100).
I cannot examine the shapes directly because using tf.keras.backend.batch_flatten obliterates the shapes.
For example:
x.shape
TensorShape([Dimension(None), Dimension(314), Dimension(314), Dimension(8)])
x = K.batch_flatten(x)
<tf.Tensor 'conv2d_1/activity_regularizer/Reshape_2:0' shape=(?, ?) dtype=float32>
x.shape
TensorShape([Dimension(None), Dimension(None)])

Everything concerning Keras and TensorFlow is about as NumPy-compatible as it could be. So let's have a look.
x = np.array([1,2,3,4,5])
m = np.array([1,1,1,1,1])
n = np.array([5,4,3,2,1])
std = 10
m_times_n = m * n
# [5 4 3 2 1]
x_minus_mn = x - m_times_n
# [-4 -2 0 2 4]
r = x_minus_mn / std
# [-0.4 -0.2 0. 0.2 0.4]
So they are pointwise. Now let's see what happens in TensorFlow:
tf.enable_eager_execution()
x = tf.constant([1,2,3,4,5])
m = tf.constant([1,1,1,1,1])
n = tf.constant([5,4,3,2,1])
std = tf.constant(10)
m_times_n = m * n
# tf.Tensor([5 4 3 2 1], shape=(5,), dtype=int32)
x_minus_mn = x - m_times_n
# tf.Tensor([-4 -2 0 2 4], shape=(5,), dtype=int32)
r = x_minus_mn / std
# tf.Tensor([-0.4 -0.2 0. 0.2 0.4], shape=(5,), dtype=float64)
Pointwise as well.
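To mirror the exact shapes from your question, here is a minimal sketch (batchSize is arbitrarily set to 4, and mn/std are filled with dummy values):
x = tf.ones((4, 100))    # stands in for (batchSize, 100)
mn = tf.zeros((1, 100))
std = tf.ones((1, 100))
r = (x - mn) / std
r.shape
# TensorShape([4, 100]): the (1, 100) tensors are broadcast over the batch dimension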
You also mentioned in your post that you have issues with tf.keras.backend.batch_flatten. The resulting (?, ?) shape is a consequence of the way batch_flatten works. Let's have a look:
# Assuming we have 5 images, with 320x320 size, and 3 channels
X = tf.ones((5, 320,320, 3))
flatten = tf.keras.backend.batch_flatten(X)
flatten.shape
# (5, 307200)
Taken from the documentation:
Turn a nD tensor into a 2D tensor with same 0th dimension.
And that is exactly what we see: the 0th dimension (the batch size) has been kept, while all other dimensions were flattened into a single one, so the resulting tensor is 2D.
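If you do need the non-batch dimension to stay statically known (for example, to inspect shapes as in your question), one option is to compute the flattened size yourself and reshape explicitly. A minimal sketch, assuming the (None, 314, 314, 8) tensor from the question:
# assuming x has static shape (None, 314, 314, 8), as in the question
flat_dim = 314 * 314 * 8  # 788768, computed from the known static dimensions
x = tf.keras.backend.reshape(x, (-1, flat_dim))
x.shape
# TensorShape([Dimension(None), Dimension(788768)])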

Related

How to implement tf.gather_nd in Pytorch with the argument batch_dims?

I have been doing a project on image matching, so I need to find correspondences between 2 images. To get descriptors, I need an interpolate function. However, even after reading an equivalent function written in TensorFlow, I still don't get how to implement tf.gather_nd(params, indices, batch_dims) in PyTorch, especially when there is the batch_dims argument. I have gone through Stack Overflow and there is no perfect equivalent yet.
The TensorFlow interpolate function I am referring to is below, and I have been trying to implement it in PyTorch. Information about the arguments:
inputs is a dense feature map taken from a for loop over the batch, which means it is 3D with shape [H, W, C] (in PyTorch it would be [C, H, W])
pos is a set of random point coordinates shaped like [[i, j], [i, j], ..., [i, j]], so it is 2D when it goes into the interpolate function (in PyTorch it would be [[i, i, ..., i], [j, j, ..., j]])
the function then expands both of their dimensions when they get into it
I just want an exact implementation of tf.gather_nd with the batch_dims argument. Thank you!
And here's a simple example of using it:
pos = tf.ones((12, 2)) ## stands for a set of 12 coordinates [[i, j], [i, j], ..., [i, j]]
inputs = tf.ones((4, 4, 128)) ## stands for [H, W, C] of dense feature map
outputs = interpolate(pos, inputs, nd=True)
print(outputs.get_shape()) # We get (12, 128) here
interpolate function (tf version):
def interpolate(pos, inputs, nd=True):
    pos = tf.expand_dims(pos, 0)
    inputs = tf.expand_dims(inputs, 0)

    h = tf.shape(inputs)[1]
    w = tf.shape(inputs)[2]

    i = pos[:, :, 0]
    j = pos[:, :, 1]

    i_top_left = tf.clip_by_value(tf.cast(tf.math.floor(i), tf.int32), 0, h - 1)
    j_top_left = tf.clip_by_value(tf.cast(tf.math.floor(j), tf.int32), 0, w - 1)

    i_top_right = tf.clip_by_value(tf.cast(tf.math.floor(i), tf.int32), 0, h - 1)
    j_top_right = tf.clip_by_value(tf.cast(tf.math.ceil(j), tf.int32), 0, w - 1)

    i_bottom_left = tf.clip_by_value(tf.cast(tf.math.ceil(i), tf.int32), 0, h - 1)
    j_bottom_left = tf.clip_by_value(tf.cast(tf.math.floor(j), tf.int32), 0, w - 1)

    i_bottom_right = tf.clip_by_value(tf.cast(tf.math.ceil(i), tf.int32), 0, h - 1)
    j_bottom_right = tf.clip_by_value(tf.cast(tf.math.ceil(j), tf.int32), 0, w - 1)

    dist_i_top_left = i - tf.cast(i_top_left, tf.float32)
    dist_j_top_left = j - tf.cast(j_top_left, tf.float32)
    w_top_left = (1 - dist_i_top_left) * (1 - dist_j_top_left)
    w_top_right = (1 - dist_i_top_left) * dist_j_top_left
    w_bottom_left = dist_i_top_left * (1 - dist_j_top_left)
    w_bottom_right = dist_i_top_left * dist_j_top_left

    if nd:
        w_top_left = w_top_left[..., None]
        w_top_right = w_top_right[..., None]
        w_bottom_left = w_bottom_left[..., None]
        w_bottom_right = w_bottom_right[..., None]

    interpolated_val = (
        w_top_left * tf.gather_nd(inputs, tf.stack([i_top_left, j_top_left], axis=-1), batch_dims=1) +
        w_top_right * tf.gather_nd(inputs, tf.stack([i_top_right, j_top_right], axis=-1), batch_dims=1) +
        w_bottom_left * tf.gather_nd(inputs, tf.stack([i_bottom_left, j_bottom_left], axis=-1), batch_dims=1) +
        w_bottom_right * tf.gather_nd(inputs, tf.stack([i_bottom_right, j_bottom_right], axis=-1), batch_dims=1)
    )

    interpolated_val = tf.squeeze(interpolated_val, axis=0)
    return interpolated_val
As far as I'm aware there is no direct equivalent of tf.gather_nd in PyTorch, and implementing a generic version with batch_dims is not that simple. However, you likely don't need a generic version, and given the context of your interpolate function, a version for [C, H, W] would suffice.
At the beginning of interpolate you add a singular dimension to the front, which is the batch dimension. Setting batch_dims=1 in tf.gather_nd means there is one batch dimension at the beginning, so it is applied per batch, i.e. it indexes inputs[0] with pos[0] etc. There is no benefit to adding a singular batch dimension, because you could have just used the direct computation.
# Adding singular batch dimension
# Shape: [1, num_pos, 2]
pos = tf.expand_dims(pos, 0)
# Shape: [1, H, W, C]
inputs = tf.expand_dims(inputs, 0)
batched_result = tf.gather_nd(inputs, pos, batch_dims=1)
single_result = tf.gather_nd(inputs[0], pos[0])
# The first element in the batched result is the same as the single result
# Hence there is no benefit to adding a singular batch dimension.
tf.reduce_all(batched_result[0] == single_result) # => True
Single version
In PyTorch the implementation for [H, W, C] can be done with Python's indexing. While PyTorch usually uses [C, H, W] for images, it's only a matter of what dimension to index, but let's keep them the same as in TensorFlow for the sake of comparison. If you were to index them manually, you would do it as such: inputs[pos_h[0], pos_w[0]], inputs[pos_h[1], pos_w[1]] and so on. PyTorch allows you to do that automatically by providing the indices as lists: inputs[pos_h, pos_w], where pos_h and pos_w have the same length. All you need to do is split your pos into two separate tensors, one for the indices along the height dimension and the other along the width dimension, which you also did in the TensorFlow version.
import tensorflow as tf
import torch

inputs = torch.randn(4, 4, 128)
# Random positions 0-3, shape: [12, 2]
pos = torch.randint(4, (12, 2))
# Positions split by dimension
pos_h = pos[:, 0]
pos_w = pos[:, 1]
# Index the inputs with the indices per dimension
gathered = inputs[pos_h, pos_w]
# Verify that it's identical to TensorFlow's output
inputs_tf = tf.convert_to_tensor(inputs.numpy())
pos_tf = tf.convert_to_tensor(pos.numpy())
gathered_tf = tf.gather_nd(inputs_tf, pos_tf)
gathered_tf = torch.from_numpy(gathered_tf.numpy())
torch.equal(gathered_tf, gathered) # => True
If you want to apply it to a tensor of size [C, H, W] instead, you only need to change the dimensions you want to index:
# For [H, W, C]
gathered = inputs[pos_h, pos_w]
# For [C, H, W]
gathered = inputs[:, pos_h, pos_w]
Batched version
Making it a batched version (for [N, H, W, C] or [N, C, H, W]) is not that difficult, and using that is more appropriate, since you're dealing with batches anyway. The only tricky part is that each element in the batch should only be applied to the corresponding batch. For this the batch dimension needs to be enumerated, which can be done with torch.arange. The batch enumeration is just the list of the batch indices, which will be combined with the pos_h and pos_w indices, resulting in inputs[0, pos_h[0, 0], pos_w[0, 0]], inputs[0, pos_h[0, 1], pos_w[0, 1]] ... inputs[1, pos_h[1, 0], pos_w[1, 0]] etc.
batch_size = 3
inputs = torch.randn(batch_size, 4, 4, 128)
# Random positions 0-3, different for each batch, shape: [3, 12, 2]
pos = torch.randint(4, (batch_size, 12, 2))
# Positions split by dimension
pos_h = pos[:, :, 0]
pos_w = pos[:, :, 1]
batch_enumeration = torch.arange(batch_size) # => [0, 1, 2]
# pos_h and pos_w have shape [3, 12], so the batch enumeration needs to be
# repeated 12 times per batch.
# Unsqueeze to get shape [3, 1], now the 1 could be repeated to 12, but
# broadcasting will do that automatically.
batch_enumeration = batch_enumeration.unsqueeze(1)
# Index the inputs with the indices per dimension
gathered = inputs[batch_enumeration, pos_h, pos_w]
# Again, verify that it's identical to TensorFlow's output
inputs_tf = tf.convert_to_tensor(inputs.numpy())
pos_tf = tf.convert_to_tensor(pos.numpy())
# This time with batch_dims=1
gathered_tf = tf.gather_nd(inputs_tf, pos_tf, batch_dims=1)
gathered_tf = torch.from_numpy(gathered_tf.numpy())
torch.equal(gathered_tf, gathered) # => True
Again, for [N, C, H, W], only the dimensions that are indexed need to be changed:
# For [N, H, W, C]
gathered = inputs[batch_enumeration, pos_h, pos_w]
# For [N, C, H, W]
gathered = inputs[batch_enumeration, :, pos_h, pos_w]
Just a little side note on the interpolate implementation: rounding the positions (floor and ceil respectively) doesn't make sense, because indices must be integers, so it has no effect as long as your positions are actual indices. That also results in i_top_left and i_bottom_left being the same value, but even if they were rounded differently, they would always be 1 position apart. Furthermore, i_top_left and i_top_right are literally the same. I don't think that this function produces a meaningful output. I don't know what you're trying to achieve, but if you're looking for image interpolation you could have a look at torch.nn.functional.interpolate, sketched below.
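A minimal usage sketch of torch.nn.functional.interpolate (the target size and mode below are arbitrary choices, not taken from your code):
import torch
import torch.nn.functional as F

feat = torch.randn(1, 128, 4, 4)  # [N, C, H, W] feature map
up = F.interpolate(feat, size=(8, 8), mode='bilinear', align_corners=False)
up.shape  # torch.Size([1, 128, 8, 8])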
This is just an extension of Michael Jungo's batched version for the case when pos is a 2D array instead of a 1D array (excluding the batch dimension).
bs = 2
H = 4
W = 6
C = 3
inputs = torch.randn(bs, H, W, C)
pos_h = torch.randint(H, (bs, H, W))
pos_w = torch.randint(W, (bs, H, W))
batch_enumeration = torch.arange(bs)
batch_enumeration = batch_enumeration.unsqueeze(1).unsqueeze(2)
inputs.shape
Out[34]: torch.Size([2, 4, 6, 3])
pos_h.shape
Out[35]: torch.Size([2, 4, 6])
pos_w.shape
Out[36]: torch.Size([2, 4, 6])
batch_enumeration.shape
Out[37]: torch.Size([2, 1, 1])
gathered = inputs[batch_enumeration, pos_h, pos_w]
For a channels-first layout, we also need to enumerate the channels:
inputs = torch.randn(bs, C, H, W)
pos_h = torch.randint(H, (bs, 1, H, W))
pos_w = torch.randint(W, (bs, 1, H, W))
batch_enumeration = torch.arange(bs)
batch_enumeration = batch_enumeration.unsqueeze(1).unsqueeze(2).unsqueeze(3)
channel_enumeration = torch.arange(C)
channel_enumeration = channel_enumeration.unsqueeze(0).unsqueeze(2).unsqueeze(3)
inputs.shape
Out[49]: torch.Size([2, 3, 4, 6])
pos_h.shape
Out[50]: torch.Size([2, 1, 4, 6])
pos_w.shape
Out[51]: torch.Size([2, 1, 4, 6])
batch_enumeration.shape
Out[52]: torch.Size([2, 1, 1, 1])
channel_enumeration.shape
Out[57]: torch.Size([1, 3, 1, 1])
gathered = inputs[batch_enumeration, channel_enumeration, pos_h, pos_w]
gathered.shape
Out[59]: torch.Size([2, 3, 4, 6])
Let's verify
inputs_np = inputs.numpy()
pos_h_np = pos_h.numpy()
pos_w_np = pos_w.numpy()
gathered_np = gathered.numpy()
pos_h_np[0,0,0,0]
Out[68]: 0
pos_w_np[0,0,0,0]
Out[69]: 3
inputs_np[0,:,0,3]
Out[71]: array([ 0.79122806, -2.190181 , -0.16741803], dtype=float32)
gathered_np[0,:,0,0]
Out[72]: array([ 0.79122806, -2.190181 , -0.16741803], dtype=float32)
pos_h_np[1,0,3,4]
Out[73]: 1
pos_w_np[1,0,3,4]
Out[74]: 2
inputs_np[1,:,1,2]
Out[75]: array([ 0.9282498 , -0.34945545, 0.9136222 ], dtype=float32)
gathered_np[1,:,3,4]
Out[77]: array([ 0.9282498 , -0.34945545, 0.9136222 ], dtype=float32)
I improved on Michael Jungo's implementation. It now supports arbitrary leading batch dimensions.
import numpy as np
import torch

def gather_nd_torch(params, indices, batch_dim=1):
    """ A PyTorch port of tensorflow.gather_nd.
    This implementation can handle leading batch dimensions in params; see below for a detailed explanation.
    The majority of this implementation is from Michael Jungo: https://stackoverflow.com/a/61810047/6670143
    I just ported it to be compatible with leading batch dimensions.

    Args:
      params: a tensor of dimension [b1, ..., bn, g1, ..., gm, c].
      indices: a tensor of dimension [b1, ..., bn, x, m].
      batch_dim: indicates how many batch dimensions you have; in the above example, batch_dim = n.

    Returns:
      gathered: a tensor of dimension [b1, ..., bn, x, c].

    Example:
    >>> batch_size = 5
    >>> inputs = torch.randn(batch_size, batch_size, batch_size, 4, 4, 4, 32)
    >>> pos = torch.randint(4, (batch_size, batch_size, batch_size, 12, 3))
    >>> gathered = gather_nd_torch(inputs, pos, batch_dim=3)
    >>> gathered.shape
    torch.Size([5, 5, 5, 12, 32])

    >>> inputs_tf = tf.convert_to_tensor(inputs.numpy())
    >>> pos_tf = tf.convert_to_tensor(pos.numpy())
    >>> gathered_tf = tf.gather_nd(inputs_tf, pos_tf, batch_dims=3)
    >>> gathered_tf.shape
    TensorShape([5, 5, 5, 12, 32])
    >>> gathered_tf = torch.from_numpy(gathered_tf.numpy())
    >>> torch.equal(gathered_tf, gathered)
    True
    """
    batch_dims = params.size()[:batch_dim]  # [b1, ..., bn]
    batch_size = np.cumprod(list(batch_dims))[-1]  # b1 * ... * bn
    c_dim = params.size()[-1]  # c
    grid_dims = params.size()[batch_dim:-1]  # [g1, ..., gm]
    n_indices = indices.size(-2)  # x
    n_pos = indices.size(-1)  # m

    # reshape leading batch dims to a single batch dim
    params = params.reshape(batch_size, *grid_dims, c_dim)
    indices = indices.reshape(batch_size, n_indices, n_pos)

    # build gather indices
    # gather for each of the data points in this "batch"
    batch_enumeration = torch.arange(batch_size).unsqueeze(1)
    gather_dims = [indices[:, :, i] for i in range(len(grid_dims))]
    gather_dims.insert(0, batch_enumeration)
    # index with a tuple of index tensors (advanced indexing)
    gathered = params[tuple(gather_dims)]

    # reshape back to the shape with leading batch dims
    gathered = gathered.reshape(*batch_dims, n_indices, c_dim)
    return gathered
I have also made a demo Colab notebook; you can check it here. This implementation is way faster than TF's original implementation, according to my rough speed test on a Colab server with a GPU instance.

Implementing the cosine similarity in TensorFlow

My question is about the cosine similarity equation,
cos(X, Y) = X^T Y / (||X|| ||Y||)
The equation above is for a single pair of vectors. But if I have batches of vectors, like my X and Y having shape (None, 32), then there is an issue.
Also remember that in the coding environment, one example inside the batch is already in transposed shape. My problem is that when we need to transpose a [None, 32] tensor, the code will not accept a transpose over the None dimension. So I solved it in the following way:
def Cosine_similarity(X, Y, feature_dim):
    L = tf.compat.v1.initializers.glorot_normal()(shape=[feature_dim, feature_dim])
    out1 = tf.matmul(X, L)
    out2 = tf.matmul(Y, L)
    out_numerator = tf.reduce_sum(tf.multiply(out1, out2), axis=1)
    out3 = tf.reduce_sum(tf.multiply(out1, out1), axis=1)
    out3 = tf.sqrt(out3)
    out4 = tf.reduce_sum(tf.multiply(out2, out2), axis=1)
    out4 = tf.sqrt(out4)
    out_denominator = tf.multiply(out3, out4)
    final_out = tf.divide(out_numerator, out_denominator)
    return final_out
And this is coming from the following:
<XA, YA> = (XA)^T (YA)
         = tf.reduce_sum(tf.multiply((X A), (Y A)), axis=1)
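As a quick numerical sanity check of that identity, here is a small NumPy sketch with arbitrary shapes (not my actual data):
import numpy as np

X = np.random.rand(3, 32)
Y = np.random.rand(3, 32)
A = np.random.rand(32, 32)
XA, YA = X @ A, Y @ A
# the row-wise inner product <XA[i], YA[i]> equals the sum over the elementwise product
np.allclose(np.sum(XA * YA, axis=1), np.einsum('ij,ij->i', XA, YA))  # True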
So I just want to know if this implementation is right, or you can correct me if I am missing something.
Not sure I understand your concern about the (None) dimension.
If I understand correctly, the cosine similarity between two identically shaped matrices X and Y ([batch, target_dim]) is just the matrix multiplication X * Y^T with some L2 normalization. Note that X would be your out1 and Y would be your out2.
def Cosine_similarity(x, y, A):
    """Pair-wise cosine similarity.

    First `x` and `y` are transformed by A.
    `X = xA^T` with shape [batch, target_dim],
    `Y = yA^T` with shape [batch, target_dim].

    Args:
      x: shaped [batch, feature_dim].
      y: shaped [batch, feature_dim].
      A: shaped [target_dim, feature_dim]. Transformation matrix to project
        from `feature_dim` to `target_dim`.

    Returns:
      A cosine similarity matrix shaped [batch, batch]. The entry
      at (i, j) is the cosine similarity value between vector `X[i, :]` and
      `Y[j, :]` where `X`, `Y` are the transformed `x` and `y` by `A`
      respectively. In other words, the entry at (i, j) is the pair-wise
      cosine similarity value between the i-th example of `x` and the j-th
      example of `y`.
    """
    x = tf.matmul(x, A, transpose_b=True)
    y = tf.matmul(y, A, transpose_b=True)
    x_norm = tf.nn.l2_normalize(x, axis=-1)
    y_norm = tf.nn.l2_normalize(y, axis=-1)
    y_norm_trans = tf.transpose(y_norm, [1, 0])
    sim = tf.matmul(x_norm, y_norm_trans)
    return sim
import numpy as np

feature_dim = 8
target_dim = 4
batch_size = 2

x = tf.placeholder(tf.float32, shape=(None, feature_dim))
y = tf.placeholder(tf.float32, shape=(None, feature_dim))
A = tf.placeholder(tf.float32, shape=(target_dim, feature_dim))

sim = Cosine_similarity(x, y, A)

with tf.Session() as sess:
    x, y, sim = sess.run([x, y, sim], feed_dict={
        x: np.ones((batch_size, feature_dim)),
        y: np.random.rand(batch_size, feature_dim),
        A: np.random.rand(target_dim, feature_dim)})
    print('x=\n', x)
    print('y=\n', y)
    print('sim=\n', sim)
Result:
x=
[[ 1. 1. 1. 1. 1. 1. 1. 1.]
[ 1. 1. 1. 1. 1. 1. 1. 1.]]
y=
[[ 0.01471654 0.76577073 0.97747731 0.06429122 0.91344446 0.47987637
0.09899797 0.773938 ]
[ 0.8555786 0.43403915 0.92445409 0.03393625 0.30154493 0.60895061
0.1233703 0.58597666]]
sim=
[[ 0.95917791 0.98181278]
[ 0.95917791 0.98181278]]

TensorFlow value error when changing content of data - Cannot feed value of shape (1, 1) for Tensor 'Placeholder_1:0'

This post is related to the following question. The code below is taken from the accepted answer.
The program itself works fine as is, but if I change only the values of the data provided from
df = pd.DataFrame({'Temperature': [183, 10.7, 24.3, 10.7],
                   'Weight': [8, 11.2, 14, 11.2],
                   'Size': [3.97, 7.88, 11, 7.88],
                   'Property': [0,1,2,0]})
to
df = pd.DataFrame({'Temperature': [0,0,0,0],
                   'Weight': [1,2,3,4],
                   'Size': [1,2,3,4],
                   'Property': [1,1,1,1]})
I receive the following error while executing the code
ValueError: Cannot feed value of shape (1, 1) for Tensor
'Placeholder_1:0', which has shape '(?, 3)'
Nothing really changed structurally, so I am really confused by this error. The odd thing is that changing the values of the data may or may not trigger this issue. I've tried various TF versions including the latest and the same issue always occurs.
Does anybody know what I am missing? The full code example follows.
import tensorflow as tf
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

df = pd.DataFrame({'Temperature': [183, 10.7, 24.3, 10.7],
                   'Weight': [8, 11.2, 14, 11.2],
                   'Size': [3.97, 7.88, 11, 7.88],
                   'Property': [0,1,2,0]})

df.Property = df.Property.shift(-1)
print(df.head())

# parameters
time_steps = 1
inputs = 3
outputs = 3

df = df.iloc[:-1,:]
df = df.values

train_X = df[:, 1:]
train_y = df[:, 0]

scaler = MinMaxScaler(feature_range=(0, 1))
train_X = scaler.fit_transform(train_X)
train_X = train_X[:,None,:]

onehot_encoder = OneHotEncoder()
encode_categorical = train_y.reshape(len(train_y), 1)
train_y = onehot_encoder.fit_transform(encode_categorical).toarray()

learning_rate = 0.001
epochs = 500
batch_size = int(train_X.shape[0]/2)
length = train_X.shape[0]
display = 100
neurons = 100

tf.reset_default_graph()

X = tf.placeholder(tf.float32, [None, time_steps, inputs])
y = tf.placeholder(tf.float32, [None, outputs])

cell = tf.contrib.rnn.BasicLSTMCell(num_units=neurons, activation=tf.nn.relu)
cell_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)

stacked_outputs = tf.reshape(cell_outputs, [-1, neurons])
out = tf.layers.dense(inputs=stacked_outputs, units=outputs)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(
    labels=y, logits=out))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)

accuracy = tf.metrics.accuracy(labels=tf.argmax(y, 1),
                               predictions=tf.argmax(out, 1),
                               name="accuracy")
precision = tf.metrics.precision(labels=tf.argmax(y, 1),
                                 predictions=tf.argmax(out, 1),
                                 name="precision")
recall = tf.metrics.recall(labels=tf.argmax(y, 1),
                           predictions=tf.argmax(out, 1),
                           name="recall")
f1 = 2 * accuracy[1] * recall[1] / ( precision[1] + recall[1] )

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    tf.local_variables_initializer().run()

    for steps in range(epochs):
        mini_batch = zip(range(0, length, batch_size),
                         range(batch_size, length+1, batch_size))

        for (start, end) in mini_batch:
            sess.run(training_op, feed_dict = {X: train_X[start:end,:,:],
                                               y: train_y[start:end,:]})

        if (steps+1) % display == 0:
            loss_fn = loss.eval(feed_dict = {X: train_X, y: train_y})
            print('Step: {} \tTraining loss: {}'.format((steps+1), loss_fn))

    acc, prec, recall, f1 = sess.run([accuracy, precision, recall, f1],
                                     feed_dict = {X: train_X, y: train_y})

    print('\nEvaluation on training set')
    print('Accuracy:', acc[1])
    print('Precision:', prec[1])
    print('Recall:', recall[1])
    print('F1 score:', f1)
As @Lescurel rightly pointed out, in a classification setting, the variable outputs should reflect the number of classes in the target variable.
Whereas in a regression setting, it would reflect the number of columns of the target variable (assuming we are predicting more than one variable).
So given the sample input data:
df = pd.DataFrame({'Temperature': [1,2,3,4,5],
                   'Weight': [2,4,6,8,10],
                   'Size': [9,24,9,9,9],
                   'Property': [0,0,0,0,1]})
The number of target classes is 2, hence outputs = 2.
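If you prefer not to hard-code that number, it can be derived from the data. A small sketch, assuming Property holds the class labels (compute it after any shifting or row-dropping you apply to the target):
outputs = df['Property'].nunique()  # number of distinct classes; 2 for the sample data above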
Note: Your posted code in https://paste.ubuntu.com/p/tmXgQfm8GB/ works well for me.
Just observed that your target variable Property is the last column of the DataFrame.
   Temperature  Weight  Size  Property
0            1       2     9       0.0
1            2       4    24       0.0
2            3       6     9       0.0
3            4       8     9       1.0
4            5      10     9       NaN
Modify your code as follows, instead of having:
# X_y_split
train_X = df[:, 1:]
train_y = df[:, 0]
change it to:
# X_y_split
train_X = df[:, :-1]
train_y = df[:, -1]
What you have here is a classification network: it takes inputs, or features (Temperature, Weight and Size), and classifies them into one of your classes: 0, 1 or 2 (the Property field).
When you modified the original dataset, you changed the number of classes: from 3 (0, 1, 2) you went down to 1 (just the class 1).
For the code to work, you just need to modify the parameters section of your code so it fits your dataset.
# parameters
time_steps = 1
inputs = 3
outputs = 1
Note: in this case, I find the term outputs a bit vague. I would have used something like nb_classes.

How to make a checkerboard matrix in tensorflow?

I need to initialize a checkerboard matrix to merge two feature maps in my TensorFlow graph. I was able to do it for a known shape using NumPy alongside TF, like this:
def checkerboard_concat(x1, x2):
    mask1 = np.ones((10,10,3))
    mask1[1::2,::2] = 0
    mask1[::2,1::2] = 0
    mask2 = np.zeros((10,10,3))
    mask2[1::2,::2] = 1
    mask2[::2,1::2] = 1
    return x1 * mask1 + x2 * mask2
But I was not able to do it with a dynamic shape. I used tf.shape(), which returns an output of shape (N,), but I don't know how to evaluate it dynamically.
Also, I tried using tf.ones_like(x1), but I couldn't use subscripts to modify it like a NumPy array.
Here is a solution based on modulo and XOR operations:
import tensorflow as tf

def make_checkerboard(N):
    """
    Return a NxN checkerboard matrix M, i.e. with M(i,j) == True if (i+j) mod 2 == 1
    :param N: Length of the checkerboard (can be dynamic)
    :return: Boolean tensor of shape NxN
    """
    range_n = tf.range(N)
    odd_ind = tf.cast(tf.mod(range_n, 2), dtype=tf.bool)

    odd_rows = tf.tile(tf.expand_dims(odd_ind, -1), [1, N])
    odd_cols = tf.tile(tf.expand_dims(odd_ind, 0), [N, 1])

    checker = tf.logical_xor(odd_rows, odd_cols)
    return checker

def checkerboard_concat(x1, x2, is_batch=True):
    dynamic_n = tf.shape(x1)[1 if is_batch else 0]
    mask2 = make_checkerboard(dynamic_n)
    mask2 = tf.expand_dims(mask2, -1)  # Expand masks to cover channels
    mask1 = tf.logical_not(mask2)

    return x1 * tf.cast(mask1, dtype=x1.dtype) + x2 * tf.cast(mask2, dtype=x2.dtype)

# Example:
tf.reset_default_graph()
sess = tf.InteractiveSession()

x1 = tf.ones((4,4,3), dtype=tf.int32)
x2 = tf.ones((4,4,3), dtype=tf.int32) * 2
x = checkerboard_concat(x1, x2, is_batch=False)

res = sess.run(x)
print(res[...,0])
# [[1 2 1 2]
#  [2 1 2 1]
#  [1 2 1 2]
#  [2 1 2 1]]
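The same checkerboard_concat also works on batched inputs with is_batch=True, since the [N, N, 1] mask broadcasts over the leading batch dimension. A quick sketch continuing the session above (batch size 2 chosen arbitrarily):
x1_b = tf.ones((2, 4, 4, 3), dtype=tf.int32)
x2_b = tf.ones((2, 4, 4, 3), dtype=tf.int32) * 2
x_b = checkerboard_concat(x1_b, x2_b, is_batch=True)

res_b = sess.run(x_b)
print(res_b.shape)       # (2, 4, 4, 3)
print(res_b[0, ..., 0])  # the same pattern of 1s and 2s for each batch element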

Multiplying a rank 3 tensor with a rank 2 tensor in Tensorflow

Given a rank 3 tensor:
sentence_max_length = 5
batch_size = 3
n_hidden = 10
n_classes = 2
x = tf.constant(np.reshape(np.arange(150),(batch_size,sentence_max_length, n_hidden)), dtype = tf.float32)
And a rank 2 tensor:
W = tf.constant(np.reshape(np.arange(20), (n_hidden, n_classes)), dtype = tf.float32)
And a rank 1 bias tensor:
b = tf.constant(np.reshape(np.arange(n_classes), (n_classes,)), dtype = tf.float32)
I was wondering how one would multiply the last two axes of x by W such that the resulting tensor Z would be of shape (batch_size, sentence_max_length, n_classes), though batch_size would not be known during graph creation; I've just given it a value here for demonstration purposes.
So to clarify:
Z[0] = tf.matmul(x[0,:,:], W) + b
So that the W and b are shared across all the batches. The reason for this is that I am trying to use the output of tf.nn.dynamic_rnn, whose output is of shape (batch_size, sentence_max_length, n_hidden), and build another layer on top of that output which has shared weights W and b.
One approach could be ...
import tensorflow as tf
import numpy as np
from tensorflow.python.layers.core import Dense
sentence_max_length = 5
batch_size = 3
n_hidden = 10
n_classes = 2
x = tf.constant(np.reshape(np.arange(150),(batch_size,sentence_max_length, n_hidden)), dtype = tf.float32)
linear_layer = Dense(n_classes, use_bias=True) #required projection value
z = linear_layer(x)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    res = sess.run(z)

res.shape
# (3, 5, 2)
Internally, the Dense layer creates trainable W and b variables, and it uses the standard_ops.tensordot operation to transform the last dimension to the projected value. For further details, refer to the source code here.
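If you would rather see the projection spelled out, the same thing can be written directly with tf.tensordot. A minimal sketch under the definitions above (the W and b variables here are created manually for illustration; the names are mine, not ones exposed by Dense):
W = tf.get_variable('W', shape=(n_hidden, n_classes), dtype=tf.float32)
b = tf.get_variable('b', shape=(n_classes,), dtype=tf.float32,
                    initializer=tf.zeros_initializer())

# contract the last axis of x (n_hidden) against the first axis of W, then add the bias
z_manual = tf.tensordot(x, W, axes=[[2], [0]]) + b  # shape (batch_size, sentence_max_length, n_classes)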