Trying debug statements in Python/TensorFlow 1.0 using Jupyter, but I don't get any output printed from tf.Print.
I thought sess.run (during training in the code below) should have evaluated the db1 tensor and printed the output, but that did not happen.
However, db1.eval in the evaluate phase prints the entire tensor X, without the "X:" message.
def combine_inputs(X):
    db1 = tf.Print(X, [X], message='X:')
    return (tf.matmul(X, W) + b, db1)
<<training code>>
_, summary = sess.run([train_op, merged_summaries])
## The merged_summaries tensor triggers the combine_inputs function. There is
## other tensor code in between; I am not including the entire code, to keep
## things simple. The code works as expected except for tf.Print.

<<evaluate code>>
print(db1.eval())
I am confused about the following:
a) Why is tf.Print not printing during sess.run during training?
b) Why is an explicit db1.eval necessary? I expected tf.Print to fire with
sess.run. If eval is required, I could just copy tensor X to db1 in my code
and evaluate it without tf.Print. Correct?
I tried going through other questions (like the one below). They suggest implementing memory_util or a predefined function. As a learner, I could not understand why tf.Print does not work in my scenario.
If anyone has encountered similar issues, please assist. Thanks!
Similar question on Stack Overflow
According to the documentation, tf.Print prints to standard error (as of version 1.1), and it's not compatible with jupyter notebook. That's why you can't see any output.
Check here:
https://www.tensorflow.org/api_docs/python/tf/Print
You can check the terminal where you launched the jupyter notebook to see the message.
import tensorflow as tf
tf.InteractiveSession()
a = tf.constant(1)
b = tf.constant(2)
opt = a + b
opt = tf.Print(opt, [opt], message="1 + 2 = ")
opt.eval()
In the terminal, I can see:
2018-01-02 23:38:07.691808: I tensorflow/core/kernels/logging_ops.cc:79] 1 + 2 = [3]
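If you want to see the message inside the notebook itself rather than in the terminal, one workaround (just a sketch, assuming TF 1.x graph mode) is to route the value through tf.py_func, so the printing happens in ordinary Python and is captured by the notebook's cell output:

import sys
import tensorflow as tf

a = tf.constant(1)
b = tf.constant(2)
opt = a + b

# Print from Python instead of the C++ logging op, so the output goes to the
# notebook kernel's stdout rather than the server's stderr.
def print_and_pass(x):
    print("1 + 2 =", x, file=sys.stdout)
    return x

opt = tf.py_func(print_and_pass, [opt], opt.dtype)

with tf.Session() as sess:
    print(sess.run(opt))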
I would like to convert the ActorDistributionModel from a trained PPOClipAgent into a Tensorflow Lite model for deployment. How should I accomplish this?
I have tried following this tutorial (see the section at the bottom on converting a policy to TFLite), but the network outputs a single action (the policy) rather than the density function over actions that I desire.
I think perhaps something like this could work:
tf.compat.v2.saved_model.save(actor_net, saved_model_path, signature=?)
... if I knew how to set the signature parameter. That line of code executes without error when I omit the signature parameter, but I get the following error on load (I assume because the signature is not set up correctly):
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
  File "/home/ais/salesmentor.ai/MDPSolver/src/solver/ppo_budget.py", line 336, in train_eval
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
  File "/home/ais/.local/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 1275, in from_saved_model
    raise ValueError("Only support a single signature key.")
ValueError: Only support a single signature key.
This appears to work. I won't accept the answer until I have completed an end-to-end test, though.
import os
import tensorflow as tf

def export_model(actor_net, observation_spec, saved_model_path):
    predict_signature = {
        'action_pred':
            tf.function(func=lambda x: actor_net(x, None, None)[0].logits,
                        input_signature=(tf.TensorSpec(shape=observation_spec.shape),))
    }
    tf.saved_model.save(actor_net, saved_model_path, signatures=predict_signature)

    # Convert to TensorFlow Lite model.
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path,
                                                         signature_keys=["action_pred"])
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
        tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
    ]
    tflite_policy = converter.convert()
    with open(os.path.join(saved_model_path, 'policy.tflite'), 'wb') as f:
        f.write(tflite_policy)
The solution wraps the actor_net in a lambda because I was unable to figure out how to specify the signature with all three expected arguments. Through the lambda, I convert the function into one that takes a single argument (a tensor). I expect to pass None for the other two arguments in my use case, so nothing is lost in this approach.
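For reference, a minimal sketch of loading and invoking the exported policy.tflite with the TFLite interpreter (saved_model_path is the same directory used in export_model above, and the all-zeros observation is only a placeholder input):

import os
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path=os.path.join(saved_model_path, 'policy.tflite'))
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy observation with the expected shape and read back the logits.
observation = np.zeros(input_details[0]['shape'], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], observation)
interpreter.invoke()

logits = interpreter.get_tensor(output_details[0]['index'])
print(logits)  # the distribution parameters exported under 'action_pred'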
I see you are using CartPole as the simulated environment, a DQN agent, and the model training and evaluation from the TF-Agent Checkpointer links provided. For a simple understanding, you need to understand the distributions involved and your model's limits (fewer than 6 actions determined at a time).
Discrete distributions answer the question to the point, but the links show how they implement a DQN agent on TF-Agents.
temp = tf.random.normal([10], 1, 0.2, tf.float32) draws one score per action, with mean 1 and standard deviation 0.2. The overall sum of the results is near one and its variance is 0.2; with 10 actions to determine, the possibility that the result is the same action is 1 in 5, or 0.5.
The coefficients are ladder steps; you can think of them as IF/ELSE or SWITCH conditions over gaps such as 0 to 5, 5 to 10, 10 to 15, and so on.
The matrix product of the coefficients and the random scores selects 4-5 actions sorted by priority and significance, keeping the rows with the most effect.
The argmax is 0 to 9, i.e. actions 0-9, which respond to the environment input covariances.
Sample: to the point, a random distribution plus a selective agent (we call it a selective agent; the questioner may have confused it with an NN DQN):
import numpy as np
import tensorflow as tf

temp = tf.random.normal([10], 1, 0.2, tf.float32)   # 10 random scores, mean 1, std 0.2
temp = np.asarray(temp) * np.asarray([coefficient_0, coefficient_1, coefficient_2, coefficient_3, coefficient_4,
                                      coefficient_5, coefficient_6, coefficient_7, coefficient_8, coefficient_9])
temp = tf.nn.softmax(temp)                           # normalize the weighted scores to probabilities
action = int(np.argmax(temp))                        # pick the most likely action (0-9)
I used the tensorflow tf.keras.metrics.MeanSquaredError() metric to evaluate the mean squared error between two numpy arrays, but each time I call mse() it gives a different result.
a = np.random.random(size=(100,2000))
b = np.random.random(size=(100,2000))
for i in range(100):
    v = mse(a, b).numpy()
    plt.scatter(i, v)
    print(v)
where I had previously defined mse = tf.keras.metrics.MeanSquaredError(). Here is the output. Any idea what is going wrong?
np.random.random generates random data on every run, so your code should result in a different mse each run, shouldn't it?
Run 1:
[0.87148841 0.50221413 0.49858526 ... 0.22311888 0.71320089 0.36298912]
Run 2:
[0.14941241 0.78560523 0.62436783 ... 0.1865485 0.2730567 0.49300401]
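If you want the values to match across runs, you can fix the seed before generating the data (a minimal sketch):

import numpy as np
import tensorflow as tf

np.random.seed(0)  # fix the seed so a and b are identical on every run
a = np.random.random(size=(100, 2000))
b = np.random.random(size=(100, 2000))

mse = tf.keras.metrics.MeanSquaredError()
print(mse(a, b).numpy())  # the same value is printed on every run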
I'm a Tensorflow beginner.
I've tried to print Hello World using the TensorFlow 1.15.0 code below.
import tensorflow as tf
h = tf.constant("Hello")
w = tf.constant(" World!")
hw = h + w
with tf.Session() as sess:
    ans = sess.run(hw)
    print(ans)
When I run the code in a Jupyter notebook, b'Hello World!' comes out.
What I expected is only 'Hello World!'. Why does the b appear in front of my output?
Many thanks!
The b prefix indicates that it is a byte string, not a unicode string. You can use tf.print() to print it properly.
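For example (a sketch, assuming TF 1.15 graph mode):

import tensorflow as tf

h = tf.constant("Hello")
w = tf.constant(" World!")
hw = h + w

with tf.Session() as sess:
    ans = sess.run(hw)        # ans is the byte string b'Hello World!'
    print(ans.decode())       # decode it to a regular str: Hello World!
    sess.run(tf.print(hw))    # or let tf.print format it without the b prefix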
This question has already been answered here: The print of string constant is always attached with 'b' in TensorFlow
I am new to TensorFlow programming. I was digging into some functions and got this error in the snippet:
with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))
    print(sess_1.run(e.shape()))
Error found:
Traceback (most recent call last):
  File "C:/Users/Ashu/PycharmProjects/untitled/Bored.py", line 15, in
    print(sess_1.run(e.shape()))
TypeError: 'TensorShape' object is not callable
I didn't find it here, so can anyone please clarify this silly doubt, as I am a new learner? Sorry for any typing mistakes!
I have one more doubt: when I simply use the eval() function it doesn't print anything in PyCharm; I had to use it together with the print() method. So my doubt is: when the print() method is used, it doesn't print the dtype of the tensor, it simply prints the tensor or Python object value in PyCharm. (Why am I not getting output in a format like: array([1., 1.], dtype=float32)?) Is this the PyCharm way of printing the tensor in the new version, or is it something I am doing wrong? So excited to know the reason behind this; please help, and pardon me if I am wrong anywhere.
One confusing aspect of tensorflow for beginners is that there are two types of shape: dynamic shape, given by tf.shape(x), and static shape, given by x.shape (assuming x is a tensor). While they represent the same concept, they are used very differently.
Static shape is the shape of a tensor known at graph construction time. It's a data type in its own right, but it can be converted to a list using as_list().
x = tf.placeholder(tf.float32, shape=(None, 3, 4))
static_shape = x.shape
shape_list = x.shape.as_list()
print(shape_list) # [None, 3, 4]
y = tf.reduce_sum(x, axis=1)
print(y.shape.as_list()) # [None, 4]
During operations, tensorflow tracks static shapes as best it can. In the above example, y's shape was calculated based on the partially known shape of x. Note we haven't even created a session, but the static shape is still known.
Since the batch size is not known, you can't use the static first entry in calculations.
z = tf.reduce_sum(x) / tf.cast(x.shape.as_list()[0], tf.float32) # ERROR
(we could have divided by x.shape.as_list()[1], since that dimension is known statically - but that wouldn't demonstrate anything here)
If we need to use a value which is not known statically - i.e. at graph construction time - we can use the dynamic shape of x. The dynamic shape is a tensor - like other tensors in tensorflow - which is evaluated using a session.
z = tf.reduce_sum(x) / tf.cast(tf.shape(x)[0], tf.float32) # all good!
You can't call as_list on the dynamic shape, nor can you inspect its values without going through a session evaluation.
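For example, a short sketch of evaluating the dynamic shape through a session (the placeholder and feed values are just for illustration):

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 3, 4))
dynamic_shape = tf.shape(x)  # a tensor, not a TensorShape

with tf.Session() as sess:
    print(sess.run(dynamic_shape,
                   feed_dict={x: np.zeros((2, 3, 4))}))  # [2 3 4]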
As stated in the documentation, you can only call a session's run method with tensors, operations, or lists of tensors/operations. Your last line of code calls e.shape() - but e.shape is already a TensorShape object, not a function, so calling it raises the TypeError. Note that even without the parentheses, the session couldn't execute a TensorShape argument; only tensors and operations can be run.
When you call print with a tensor, the system prints the tensor's content. If you want to print the tensor's type, use code like print(type(tensor)).
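For example, a sketch of the corrected snippet: drop the parentheses to read the static shape, or run tf.shape(e) if you want the shape as a tensor.

import tensorflow as tf

with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))            # 11
    print(e.shape)                  # static shape: () for a scalar, no session needed
    print(sess_1.run(tf.shape(e)))  # dynamic shape evaluated as a tensor: []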
Referring to this link, I am trying to practice using tf.contrib.factorization.KMeansClustering for clustering. The simple code below works okay:
import numpy as np
import tensorflow as tf
# ---- Create Data Sample -----
k = 5
n = 100
variables = 5
points = np.random.uniform(0, 1000, [n, variables])
# ---- Clustering -----
input_fn=lambda: tf.train.limit_epochs(tf.convert_to_tensor(points, dtype=tf.float32), num_epochs=1)
kmeans=tf.contrib.factorization.KMeansClustering(num_clusters=6)
kmeans.train(input_fn=input_fn)
centers = kmeans.cluster_centers()
# ---- Print out -----
cluster_indices = list(kmeans.predict_cluster_index(input_fn))
for i, point in enumerate(points):
    cluster_index = cluster_indices[i]
    print('point:', point, 'is in cluster', cluster_index, 'centered at', centers[cluster_index])
My question is: why does this "input_fn" code do the trick?
If I change the code to the following, it runs into an infinite loop. Why?
input_fn=lambda:tf.convert_to_tensor(points, dtype=tf.float32)
From the documentation (here), it seems that train() expects an input_fn argument, which is simply a 'tf.data.Dataset' object, like Tensor(X). So why do I have to do all these tricky things with lambda and tf.train.limit_epochs()?
Can anyone who is familiar with the fundamentals of tensorflow estimators help to explain? Many thanks!
My question is: why does this "input_fn" code do the trick? If I change the code to this, it runs into an infinite loop. Why?
The documentation states that input_fn is called repeatedly until it returns a tf.errors.OutOfRangeError. Adorning your tensor with tf.train.limit_epochs ensures that the error is eventually raised, which signals to KMeans that it should stop training.
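For example, a sketch (assuming TF 1.x with tf.contrib available) of an equivalent input_fn built from tf.data, which also stops after one pass over the data:

import numpy as np
import tensorflow as tf

points = np.random.uniform(0, 1000, [100, 5]).astype(np.float32)

# The dataset is exhausted after one pass, so the estimator's training loop
# receives tf.errors.OutOfRangeError and stops instead of looping forever.
def input_fn():
    return tf.data.Dataset.from_tensors(points).repeat(1)

kmeans = tf.contrib.factorization.KMeansClustering(num_clusters=6)
kmeans.train(input_fn=input_fn)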