The result of "tf.subtract" is not the same as expected - tensorflow

import tensorflow as tf
a = tf.random_normal([3, 2], mean=6, stddev=0.1, seed=1)
b = tf.random_normal([3, 2], mean=1, stddev=1, seed=1)
sess = tf.Session()
ra = sess.run(a)
rb = sess.run(b)
r1 = ra - rb
r2 = sess.run(tf.subtract(a, b))
Why are r1 and r2 not equal?
Shouldn't they be the same in theory?
TensorFlow version: 1.15.0

In TensorFlow 1.x, every sess.run() call re-executes the tf.random_normal op and draws a fresh set of numbers, which is why the results differ, as @xdurch0 and @Addy rightly pointed out in the comments.
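To see the re-sampling directly, here is a minimal sketch (my addition, reusing the op from the question); each run() produces a different draw even though the op-level seed is fixed:
import tensorflow as tf
a = tf.random_normal([3, 2], mean=6, stddev=0.1, seed=1)
sess = tf.Session()
# The op-level seed fixes the sequence of draws, not each individual
# draw, so every run() returns the next values in that sequence.
print(sess.run(a))  # first draw
print(sess.run(a))  # second draw: different values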
To get comparable results instead, you can fix the values with tf.constant and compare them.
TensorFlow 1.x:
import tensorflow as tf
a = tf.constant([[5.918868  , 6.14846   ],
                 [6.006533  , 5.7557297 ],
                 [6.009925  , 6.0591226 ]])
b = tf.constant([[0.32409406, 1.2866583 ],
                 [1.3215888 , 2.2124639 ],
                 [0.19414288, 0.86650544]])
sess = tf.Session()
ra = sess.run(a)
rb = sess.run(b)
r1 = ra - rb
r2 = sess.run(tf.subtract(a, b))
print(r1)
print(r2)
Result:
[[5.5947742 4.8618016]
[4.684944 3.5432658]
[5.815782 5.192617 ]]
[[5.5947742 4.8618016]
[4.684944 3.5432658]
[5.815782 5.192617 ]]
TensorFlow 2.x:
In TensorFlow 2.x, eager execution is enabled by default, so tf.random.normal executes immediately and the resulting tensor keeps its values for the rest of the program.
import tensorflow as tf
a = tf.random.normal([3, 2], mean=6, stddev=0.1, seed=1)
b = tf.random.normal([3, 2], mean=1, stddev=1, seed=1)
r1 = a - b
r2 = tf.subtract(a, b)
print(r1)
print(r2)
Result:
tf.Tensor(
[[5.5947742 4.8618016]
[4.684944 3.5432658]
[5.815782 5.192617 ]], shape=(3, 2), dtype=float32)
tf.Tensor(
[[5.5947742 4.8618016]
[4.684944 3.5432658]
[5.815782 5.192617 ]], shape=(3, 2), dtype=float32)
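As a side note (not part of the original answer): if you also need the draws themselves to be reproducible across separate program runs in TF 2.x, you can combine the global seed with the op-level seed:
import tensorflow as tf
# With both a global seed and an op-level seed set, the generated
# sequence is deterministic across program runs.
tf.random.set_seed(1)
a = tf.random.normal([3, 2], mean=6, stddev=0.1, seed=1)
print(a)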

Related

How can I use TensorFlow's GlorotUniform initializer with stateless semantics?

How can I use the GlorotUniform initializer with stateless semantics? In other words, I would like GlorotUniform to produce the same result on different calls. The following code does not work:
import tensorflow as tf
tf.random.set_seed(1234)
initializer = tf.keras.initializers.GlorotUniform(seed=3)
print(initializer(shape=(2, 2)))
initializer = tf.keras.initializers.GlorotUniform(seed=3)
print(initializer(shape=(2, 2)))
which produces
tf.Tensor(
[[1.1279136 0.19878006]
[0.34682322 1.1320969 ]], shape=(2, 2), dtype=float32)
tf.Tensor(
[[ 0.9531394 0.22104084]
[ 0.41438842 -1.1447294 ]], shape=(2, 2), dtype=float32)
I understand one can use tf.random.stateless_uniform, but that is not Glorot uniform.
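One possible workaround (my sketch, not an answer from the original thread): Glorot uniform samples from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)), so for a 2-D weight matrix you can rebuild it on top of the stateless op:
import tensorflow as tf
def stateless_glorot_uniform(shape, seed):
    # Glorot/Xavier uniform: U(-limit, limit) with
    # limit = sqrt(6 / (fan_in + fan_out)); for a 2-D kernel the
    # fans are simply the two dimensions.
    fan_in, fan_out = shape
    limit = (6.0 / (fan_in + fan_out)) ** 0.5
    # tf.random.stateless_uniform is a pure function of (shape, seed),
    # so the same seed pair always yields the same values.
    return tf.random.stateless_uniform(
        shape, seed=seed, minval=-limit, maxval=limit)
print(stateless_glorot_uniform((2, 2), seed=[3, 0]))
print(stateless_glorot_uniform((2, 2), seed=[3, 0]))  # identical output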

tensorflow:Model was constructed with shape (None, 4, 1), but it was called on an input with incompatible shape (4, 1, 1)

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
train_data = np.array(
    [[ 0.045964252,  0.08585282,   0.056468535,  0.087974496],
     [ 0.06128449,   0.027692182,  0.01929527,   0.027361592],
     [ 0.076604135,  0.,           0.,           0.         ],
     [-0.15014096,  -0.6869674,   -0.6869674,    0.         ]], np.float32)
train_label = np.array(
    [[0.08585282 ],
     [0.027692182],
     [0.         ],
     [0.036714412]], np.float32)
mydataset = tf.data.Dataset.from_tensor_slices((train_data, train_label))
myinput = tf.keras.layers.Input(shape=(4, 1), ragged=True)
output = tf.keras.layers.Dense(1)(myinput)
model = tf.keras.models.Model(inputs=myinput, outputs=output)
model.compile(
    optimizer='sgd',
    loss='mse',
    metrics=[tf.keras.metrics.MeanSquaredError()])
print("model.fit mydataset element_spec:\n", mydataset.element_spec)
# (TensorSpec(shape=(4,), dtype=tf.float32, name=None), TensorSpec(shape=(1,), dtype=tf.float32, name=None))
history = model.fit(
    mydataset,
    epochs=4,
    steps_per_epoch=4,
    verbose=0)
How can I eliminate the warning by correcting the model input layer?
WARNING:tensorflow:Model was constructed with shape (None, 4, 1) for
input Tensor("Placeholder_1:0", shape=(None, 4, 1), dtype=float32),
but it was called on an input with incompatible shape (4, 1, 1)
I cannot seem to get tf.keras.layers.Input to accept the input from model.fit without throwing the warning. I don't want to change my data (reshape, squeeze etc.). I want to keep the input as a dataset with features and labels. I want to adapt the model to accept the input of my data.
You can fix it by changing the input layer to:
myinput = tf.keras.layers.Input(shape=(1,), ragged=True)
The un-batched dataset yields feature tensors of shape (4,), and fit() treats each of them as a batch of 4 scalar samples, so the model should expect a single input feature. Note that a Dense layer's input shape should have the form (batch_size, input_size).
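For completeness, here is a minimal end-to-end sketch of that fix (my code, with random stand-in data of the same shapes as in the question, and ragged=True kept from the original snippet):
import numpy as np
import tensorflow as tf
# Stand-in data with the question's shapes: (4, 4) features, (4, 1) labels.
train_data = np.random.rand(4, 4).astype(np.float32)
train_label = np.random.rand(4, 1).astype(np.float32)
mydataset = tf.data.Dataset.from_tensor_slices((train_data, train_label))
# Each un-batched element is a (4,) vector, which fit() treats as a
# batch of 4 scalar samples: one input feature per sample.
myinput = tf.keras.layers.Input(shape=(1,), ragged=True)
output = tf.keras.layers.Dense(1)(myinput)
model = tf.keras.models.Model(inputs=myinput, outputs=output)
model.compile(optimizer='sgd', loss='mse')
history = model.fit(mydataset, epochs=4, steps_per_epoch=4, verbose=0)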

How to access embedding layer's variables in tensorflow?

Suppose I have the embedding layer e like this:
import tensorflow as tf
e = tf.keras.layers.Embedding(5,3)
How can I print its numpy values?
You need to build the embedding layer before you can access the embedding matrix:
import tensorflow as tf
emb = tf.keras.layers.Embedding(5, 3)
emb.build(())  # build the layer so the weights exist
emb.trainable_variables[0].numpy()
# array([[-0.00595363,  0.03049802,  0.01821234],
#        [ 0.01515153, -0.01006874,  0.02568189],
#        [-0.01845006,  0.02135053, -0.03916124],
#        [-0.00822829,  0.00922295,  0.00091892],
#        [-0.00727308, -0.03537174, -0.01419405]], dtype=float32)
Thanks to @vald for his answer. I think e.embeddings is more Pythonic and possibly more efficient.
import tensorflow as tf
e = tf.keras.layers.Embedding(5, 3)
e.build(())  # you should build it before using
print(e.embeddings)
>>>
<tf.Variable 'embeddings:0' shape=(5, 3) dtype=float32, numpy=
array([[ 0.02099125,  0.01865673,  0.03652272],
       [ 0.02714007, -0.00316695, -0.00252246],
       [-0.02411103,  0.02043924, -0.01297874],
       [ 0.00766286, -0.03511617,  0.03460207],
       [ 0.00256425, -0.03659264, -0.01796588]], dtype=float32)>
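As another option (my addition): calling the layer on some input also builds it, and get_weights() returns the matrix directly as a NumPy array:
import tensorflow as tf
e = tf.keras.layers.Embedding(5, 3)
_ = e(tf.constant([[0, 1, 2]]))  # calling the layer builds it implicitly
print(e.get_weights()[0])  # the same matrix as e.embeddings, as an ndarray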

ValueError: Shapes must be equal rank in assign_add()

I am reading about tf.Variable in the TensorFlow r2.0 documentation:
import tensorflow as tf
# Create a variable.
w = tf.constant([1, 2, 3, 4], tf.float32, shape=[2, 2])
# Use the variable in the graph like any Tensor.
y = tf.matmul(w, tf.constant([7, 8, 9, 10], tf.float32, shape=[2, 2]))
v = tf.Variable(w)
# The overloaded operators are available too.
z = tf.sigmoid(w + y)
tf.shape(z)
# Assign a new value to the variable with `assign()` or a related method.
v.assign(w + 1)
v.assign_add(tf.constant([1.0, 21]))
ValueError: Shapes must be equal rank, but are 2 and 1 for
'AssignAddVariableOp_4' (op: 'AssignAddVariableOp') with input shapes:
[], 2.
Also, how come the following returns False?
tf.shape(v) == tf.shape(tf.constant([1.0, 21], tf.float32))
My other question: now that we are in TF 2, we should not use tf.Session() anymore, correct? It seems we should never call session.run(), yet the API documentation keeps doing it via tf.compat.v1, etc. So why is it still used in the TF2 docs?
Any help would be appreciated.
CS
As the error clearly says, assign_add expects a value of shape [2, 2], because v has shape [2, 2].
If you pass a value with any shape other than the initial shape of the variable to assign_add, you will get this error.
Below is the modified code with the expected shape for the operation.
import tensorflow as tf
# Create a variable.
w = tf.constant([1, 2, 3, 4], tf.float32, shape=[2, 2])
# Use the variable in the graph like any Tensor.
y = tf.matmul(w, tf.constant([7, 8, 9, 10], tf.float32, shape=[2, 2]))
v = tf.Variable(w)
# The overloaded operators are available too.
z = tf.sigmoid(w + y)
tf.shape(z)
# Assign a new value to the variable with `assign()` or a related method.
v.assign(w + 1)
print(v)
v.assign_add(tf.constant([1, 2, 3, 4], tf.float32, shape=[2, 2]))
Output for v:
<tf.Variable 'UnreadVariable' shape=(2, 2) dtype=float32, numpy=
array([[3., 5.],
[7., 9.]], dtype=float32)>
Now the following tensor comparison returns True (elementwise):
tf.shape(v) == tf.shape(tf.constant([1.0, 21],tf.float32))
<tf.Tensor: shape=(2,), dtype=bool, numpy=array([ True, True])>
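A small side note (my addition): == on eager tensors compares elementwise, so to collapse a shape comparison into a single boolean you can use tf.reduce_all:
import tensorflow as tf
v = tf.Variable(tf.ones([2, 2]))
w = tf.constant([[1.0, 21.0], [3.0, 4.0]])
# tf.reduce_all collapses the elementwise comparison to one scalar bool.
print(tf.reduce_all(tf.shape(v) == tf.shape(w)).numpy())  # True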
Coming to your tf.Session() question: in TensorFlow 2.0, eager execution is enabled by default; still, if you need to, you can disable eager execution and use tf.Session as below.
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
hello = tf.constant('Hello, TensorFlow!')
sess = tf.compat.v1.Session()
print(sess.run(hello))

How to print the value in the official version of tf2.0.0?

I found that I can't display the value of a tensor in the official release of TF 2.0.0. What should I do? numpy()? eval()?
print(tf.random.uniform((3, 3)))
print(tf.keras.layers.LayerNormalization()(tf.random.uniform((3, 3))))
The result:
Tensor("random_uniform:0", shape=(3, 3), dtype=float32)
Tensor("layer_normalization/batchnorm/add_1:0", shape=(3, 3), dtype=float32)
Are you sure of your TF version? The output you show is a symbolic graph-mode tensor, which indicates eager execution is not active (i.e. TF 1.x behavior). Here is my result for your code:
import tensorflow as tf
def main():
    print("Version: ", tf.version.VERSION)
    print(tf.random.uniform((3, 3)))
    print(tf.keras.layers.LayerNormalization()(tf.random.uniform((3, 3))))
if __name__ == '__main__':
    main()
Version: 2.0.0
tf.Tensor(
[[0.4394927 0.44767535 0.02136886]
[0.7118287 0.65160227 0.47469318]
[0.7066748 0.130373 0.09051967]], shape=(3, 3), dtype=float32)
tf.Tensor(
[[ 0.8090544 -1.4032681 0.5942137 ]
[-1.3625047 0.38342142 0.9790828 ]
[-1.2024965 0.00880218 1.1936939 ]], shape=(3, 3), dtype=float32)
You can also use tf.print instead of print; it displays only the values (not the shape or the dtype), which is the same as calling print(tensor.numpy()).
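For example (a minimal sketch):
import tensorflow as tf
t = tf.random.uniform((3, 3))
tf.print(t)       # values only, no shape/dtype wrapper
print(t.numpy())  # equivalent: convert to a NumPy array first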