I need to train a MobileNet using TensorFlow, but the tf.squeeze layer is not supported. Can I replace it with tf.reshape?
Is the operation:
tf.squeeze(net, [1, 2], name='squeeze')
the same as:
tf.reshape(net, [50, 1000], name='reshape')
where net has shape [50, 1, 1, 1000]?
Why do you say tf.squeeze is not supported? To remove size-1 axes from a tensor, tf.squeeze is the correct operation. You can achieve the same result with tf.reshape as well, though I would still suggest using tf.squeeze.
In tf 2.0 you can easily check that these ops produce the same result. The only difference is that tf.squeeze can remove all axes with dim == 1 without you specifying them, so in the last line you could use tf.squeeze(x_resh) instead of tf.squeeze(x_resh, [1, 2]).
size = [2, 3]
tf.random.set_seed(42)
x = tf.random.normal(size)
x
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.3274685, -0.8426258,  0.3194337],
       [-1.4075519, -2.3880599, -1.0392479]], dtype=float32)>
x_resh = tf.reshape(x, [2, 1, 1, 3])
x_resh
<tf.Tensor: shape=(2, 1, 1, 3), dtype=float32, numpy=
array([[[[ 0.3274685, -0.8426258,  0.3194337]]],

       [[[-1.4075519, -2.3880599, -1.0392479]]]], dtype=float32)>
tf.reshape(x_resh, [2, 3])
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.3274685, -0.8426258,  0.3194337],
       [-1.4075519, -2.3880599, -1.0392479]], dtype=float32)>
tf.squeeze(x_resh, [1, 2])
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[ 0.3274685, -0.8426258,  0.3194337],
       [-1.4075519, -2.3880599, -1.0392479]], dtype=float32)>
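One practical difference worth noting: tf.reshape(net, [50, 1000]) hardcodes the batch size, while tf.squeeze does not. A minimal sketch of a batch-size-agnostic alternative (the tensor here is random, just to illustrate shapes):

```python
import tensorflow as tf

# Stand-in for the network output described in the question.
net = tf.random.normal([50, 1, 1, 1000])

# tf.squeeze removes the size-1 axes regardless of batch size.
squeezed = tf.squeeze(net, [1, 2])       # shape (50, 1000)

# With tf.reshape, -1 lets TensorFlow infer the batch dimension,
# avoiding the hardcoded 50 in tf.reshape(net, [50, 1000]).
reshaped = tf.reshape(net, [-1, 1000])   # shape (50, 1000)

# Both produce identical tensors.
same = tf.reduce_all(tf.equal(squeezed, reshaped))
```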
I'm facing an issue after upgrading TensorFlow 2.3 to 2.5 while extracting features from a trained CNN model. In tf 2.3 the following code was working perfectly:
get_last_layer_output = K.function([model.layers[0].input],
                                   [model.layers[-2].output])

trainFeatures = np.zeros((len(x_train), 512))
for i in range(len(x_train)):
    trainFeatures[i, :] = get_last_layer_output([x_train[i:i+1, :, :, :], 1])[0]

testFeatures = np.zeros((len(x_test), 512))
for i in range(len(x_test)):
    testFeatures[i, :] = get_last_layer_output([x_test[i:i+1, :, :, :], 0])[0]
But in tf 2.5, the above code gives the following error:
Layer "model_5" expects 1 input(s), but it received 2 input tensors. Inputs received: [<tf.Tensor: shape=(1, 224, 224, 3), dtype=uint8, numpy=
array([[[[2, 2, 2],
         [2, 2, 2],
         [2, 2, 2],
         ...,
         [1, 1, 1],
         [1, 1, 1],
         [2, 2, 2]]]], dtype=uint8)>, <tf.Tensor: shape=(), dtype=int32, numpy=1>]
Kindly help.
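The error message points at the likely cause: the model receives two inputs because of the trailing 1/0 learning-phase flag, which the tf 2.x K.function path no longer accepts. A hedged sketch of one possible fix is to build a feature-extractor sub-model instead; the toy CNN below is hypothetical, standing in for the trained model from the question (its pooled layer has 8 features rather than 512):

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the trained CNN (model.layers[-2] is the
# pooling layer whose output we want as features).
inp = tf.keras.Input(shape=(224, 224, 3))
h = tf.keras.layers.Conv2D(8, 3, activation="relu")(inp)
h = tf.keras.layers.GlobalAveragePooling2D()(h)
out = tf.keras.layers.Dense(10)(h)
model = tf.keras.Model(inp, out)

# Instead of K.function([...input], [...output]) with a learning-phase
# flag, wrap the layers of interest in a sub-model and call predict.
feature_extractor = tf.keras.Model(model.input, model.layers[-2].output)

x_train = np.random.rand(4, 224, 224, 3).astype("float32")
trainFeatures = feature_extractor.predict(x_train)  # one row of features per sample
```

Dropout/BatchNorm layers then run in inference mode by default, which matches the old learning-phase flag of 0; for training-mode behavior, `feature_extractor(x, training=True)` can be used instead.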
Using tensorflow.stack, what does it mean to have axis=-1?
I'm using tensorflow==1.14
Using axis=-1 simply means to stack the tensors along the last axis (as per Python's negative list-indexing syntax).
Let's take a look at what this looks like using these tensors of shape (2, 2):
>>> x = tf.constant([[1, 2], [3, 4]])
>>> y = tf.constant([[5, 6], [7, 8]])
>>> z = tf.constant([[9, 10], [11, 12]])
The default behavior for tf.stack, as described in the documentation, is to stack the tensors along the first axis (index 0), resulting in a tensor of shape (3, 2, 2):
>>> tf.stack([x, y, z], axis=0)
<tf.Tensor: shape=(3, 2, 2), dtype=int32, numpy=
array([[[ 1,  2],
        [ 3,  4]],

       [[ 5,  6],
        [ 7,  8]],

       [[ 9, 10],
        [11, 12]]], dtype=int32)>
Using axis=-1, the three tensors are stacked along the last axis instead, resulting in a tensor of shape (2, 2, 3):
>>> tf.stack([x, y, z], axis=-1)
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[ 1,  5,  9],
        [ 2,  6, 10]],

       [[ 3,  7, 11],
        [ 4,  8, 12]]], dtype=int32)>
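To make the negative-index rule concrete: the stacked result has rank 3, so for these rank-2 inputs axis=-1 addresses the same position as axis=2. A quick check (same example tensors as above):

```python
import tensorflow as tf

x = tf.constant([[1, 2], [3, 4]])
y = tf.constant([[5, 6], [7, 8]])
z = tf.constant([[9, 10], [11, 12]])

# The output has rank 3 (one more than the inputs), so its last axis
# is index 2; axis=-1 and axis=2 are therefore equivalent here.
a = tf.stack([x, y, z], axis=-1)
b = tf.stack([x, y, z], axis=2)
same = tf.reduce_all(tf.equal(a, b))
```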
I want to apply tf.tile, e.g. tf.tile(A, [1, 1, b]), where A has shape [5, 4, 3]. How do I generate [1, 1, 1] according to A? I then want to set the third element of [1, 1, 1] to b, where b is a placeholder.
This is my code, but it doesn't work. How can I fix it?
d = tf.shape(A)
for i in range(tf.rank(A)):  # wrong: tf.rank(A) returns a tensor, which can't be used here
    d[i] = 1
d[2] = b
result = tf.tile(A, d)
The easiest solution is probably to use tf.one_hot to build your multiples tensor directly. Note that indices is the position of the axis you want to change (here the last axis, index 2), while on_value is b:
>>> b = 2
>>> tf.one_hot(indices=2, depth=tf.rank(A), on_value=b, off_value=1)
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 1, 2], dtype=int32)>
Alternatively, you can use tf.ones_like to generate a tensor of ones with the same shape as the tensor passed as an argument.
>>> A = tf.random.uniform((5,4,3))
>>> tf.shape(A)
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([5, 4, 3], dtype=int32)>
>>> tf.ones_like(tf.shape(A))
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 1, 1], dtype=int32)>
Note that in TensorFlow you can't do item assignment on a tensor (so d[2] = b won't work, for example). To generate your tensor [1, 1, b] you can use tf.concat:
>>> b = 2
>>> tf.concat([tf.ones_like(tf.shape(A)[:-1]),[b]],axis=0)
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 1, 2], dtype=int32)>
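Putting the pieces together, a minimal sketch of the full operation (b fixed to 2 for illustration; in graph mode it could equally be a placeholder-style input):

```python
import tensorflow as tf

A = tf.random.uniform((5, 4, 3))
b = 2

# Build the multiples tensor [1, 1, b] dynamically from A's shape:
# ones for every axis except the last, then b appended via tf.concat.
multiples = tf.concat([tf.ones_like(tf.shape(A)[:-1]), [b]], axis=0)

# Tile A along its last axis: shape (5, 4, 3) -> (5, 4, 3 * b).
result = tf.tile(A, multiples)
```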
Is there a way to directly update the elements of a tf.Variable X at given indices without creating a new tensor having the same shape as X?
tf.tensor_scatter_nd_update creates a new tensor, so it appears not to update the original tf.Variable. From its documentation:
This operation creates a new tensor by applying sparse updates to the input tensor.
tf.Variable.assign apparently needs a new tensor value with the same shape as X to update the tf.Variable X:
assign(
    value, use_locking=False, name=None, read_value=True
)
value: A Tensor. The new value for this variable.
About tf.tensor_scatter_nd_update, you're right that it returns a new tf.Tensor (and not a tf.Variable). But regarding assign, which is a method of tf.Variable, I think you somewhat misread the documentation; the value is just the new item that you want to assign at particular indices of your old variable.
AFAIK, in TensorFlow all tensors are immutable, like Python numbers and strings; you can never update the contents of a tensor, only create a new one (source). And directly updating a tf.Tensor or tf.Variable with NumPy-style item assignment (x[i] = v) is still not supported. Check the following GitHub issues to follow the discussions: #33131, #14132.
In NumPy, we can do the in-place item assignment that you showed in the comment box.
import numpy as np
a = np.array([1,2,3])
print(a) # [1 2 3]
a[1] = 0
print(a) # [1 0 3]
A similar result can be achieved with a tf.Variable using the assign method.
import tensorflow as tf
b = tf.Variable([1,2,3])
b.numpy() # array([1, 2, 3], dtype=int32)
b[1].assign(0)
b.numpy() # array([1, 0, 3], dtype=int32)
Later, we can convert it to a tf.Tensor as follows.
b_ten = tf.convert_to_tensor(b)
b_ten.numpy() # array([1, 0, 3], dtype=int32)
We can do such item assignment on a tf.Tensor too, but we need to convert it to a tf.Variable first (I know, not very intuitive).
tensor = [[1, 1], [1, 1], [1, 1]]  # tf.rank(tensor) == 2
indices = [[0, 1], [2, 0]]         # num_updates == 2, index_depth == 2
updates = [5, 10]                  # num_updates == 2
x = tf.tensor_scatter_nd_update(tensor, indices, updates)
x
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 1,  5],
       [ 1,  1],
       [10,  1]], dtype=int32)>
x = tf.Variable(x)
x
<tf.Variable 'Variable:0' shape=(3, 2) dtype=int32, numpy=
array([[ 1,  5],
       [ 1,  1],
       [10,  1]], dtype=int32)>
x[0].assign([5, 1])
x
<tf.Variable 'Variable:0' shape=(3, 2) dtype=int32, numpy=
array([[ 5,  1],
       [ 1,  1],
       [10,  1]], dtype=int32)>
x = tf.convert_to_tensor(x)
x
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 5,  1],
       [ 1,  1],
       [10,  1]], dtype=int32)>
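It is also worth knowing that tf.Variable itself exposes a scatter_nd_update method, which applies the sparse update to the variable in place rather than returning a new tensor; a minimal sketch using the same indices and updates as above:

```python
import tensorflow as tf

v = tf.Variable([[1, 1], [1, 1], [1, 1]])

# Unlike tf.tensor_scatter_nd_update, this mutates v directly:
# v[0, 1] becomes 5 and v[2, 0] becomes 10.
v.scatter_nd_update(indices=[[0, 1], [2, 0]], updates=[5, 10])
```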
TensorFlow supports the stack operation, documented as follows:
"Stacks a list of rank-R tensors into one rank-(R+1) tensor."
My question is: can we use other operations (like tf.concat or tf.expand_dims) or anything else to emulate the behavior of tf.stack? My intention is to avoid using tf.stack.
You can achieve this using the tf.concat operation together with tf.expand_dims; below is an example.
Using Stack:
t1 = tf.constant([1, 2, 3])
t2 = tf.constant([4, 5, 6])
tf.stack((t1, t2), axis=0)
Result:
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [4, 5, 6]], dtype=int32)>
Using concat:
tf.concat((tf.expand_dims(t1, 0), tf.expand_dims(t2, 0)), axis=0)
Result:
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [4, 5, 6]], dtype=int32)>
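The same idea generalizes to any stacking axis: expand each tensor at the target axis, then concatenate along it. A small sketch of a generic helper (the function name is illustrative):

```python
import tensorflow as tf

def stack_via_concat(tensors, axis=0):
    """Emulate tf.stack: insert a new size-1 axis in each tensor,
    then concatenate all of them along that axis."""
    return tf.concat([tf.expand_dims(t, axis) for t in tensors], axis=axis)

t1 = tf.constant([1, 2, 3])
t2 = tf.constant([4, 5, 6])

a = stack_via_concat([t1, t2], axis=0)  # shape (2, 3), like tf.stack(..., axis=0)
b = stack_via_concat([t1, t2], axis=1)  # shape (3, 2), like tf.stack(..., axis=1)
```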