TensorFlow gradients

I am trying to adapt the tf DeepDream tutorial code to work with another model. Right now when I call tf.gradients():
t_grad = tf.gradients(t_score, t_input)[0]
g = sess.run(t_grad, {t_input:img0})
I am getting a type error:
TypeError: Fetch argument None of None has invalid type <type 'NoneType'>,
must be a string or Tensor. (Can not convert a NoneType into a Tensor or
Operation.)
Where should I even start to look for fixing this error?
Is it possible to use tf.gradients() with a model that has an Optimizer in it?

I'm guessing your t_grad has some Nones in it. None is mathematically equivalent to a 0 gradient, but it is returned in the special case when the cost doesn't depend on the argument it is differentiated against. There are various reasons why we don't just return 0 instead of None, which you can see in the discussion here.
Because None can be annoying in cases like the above, or when computing second derivatives, I use the helper function below:
def replace_none_with_zero(l):
    return [0 if i is None else i for i in l]
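A quick sketch of the helper in use (TF1-style, matching the thread), with hypothetical placeholders a and b where the cost depends only on a, so the gradient with respect to b comes back as None:
import tensorflow as tf

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
cost = a * a  # depends on a but not on b

grads = tf.gradients(cost, [a, b])    # [<tf.Tensor>, None]
safe = replace_none_with_zero(grads)  # [<tf.Tensor>, 0]

# The zeros behave in arithmetic where None would raise, e.g. summing:
total = tf.add_n([tf.convert_to_tensor(g, dtype=tf.float32) for g in safe])

with tf.Session() as sess:
    print(sess.run(total, {a: 3.0}))  # d(a*a)/da = 6.0, plus 0 for b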

The following is a helpful tip for debugging tf.gradients(). For an invalid pair of tensors:
grads = tf.gradients(<a tensor>, <another tensor that doesn't depend on the first>)
Even before you try to run tf.gradients in a session, you can see that it is invalid using print:
print(grads)
It will return [None], a list with a single None in it.
If you try to run it in a session anyway:
results = sess.run(grads)
You will not get None back; instead you get the error message described in the question.
For a valid pair of tensors:
grads = tf.gradients(<a tensor>, <a related tensor>)
print(grads)
You will get something like:
Tensor("gradients_1/sub_grad/Reshape:0", dtype=float32)
In a valid situation:
results = sess.run(grads, {<appropriate feeds>})
print(results)
You get something like:
[array([[ 4.97156498e-06, 7.87349381e-06, 9.25197037e-06, ...,
8.72526925e-06, 6.78442757e-06, 3.85240173e-06],
[ 7.72772819e-06, 9.26370740e-06, 1.19129227e-05, ...,
1.27088233e-05, 8.76379818e-06, 6.00637532e-06],
[ 9.46506498e-06, 1.10620931e-05, 1.43903117e-05, ...,
1.40718612e-05, 1.08670165e-05, 7.12365863e-06],
...,
[ 1.03536004e-05, 1.03090524e-05, 1.32107480e-05, ...,
1.40605653e-05, 1.25974075e-05, 8.90011415e-06],
[ 9.69486427e-06, 8.18045282e-06, 1.12702282e-05, ...,
1.32554378e-05, 1.13317501e-05, 7.74569162e-06],
[ 5.61043908e-06, 4.93397192e-06, 6.33513537e-06, ...,
6.26539259e-06, 4.52598442e-06, 4.10689108e-06]], dtype=float32)]

Related

Convert a TF-Agents ActorDistributionNetwork into a TensorFlow Lite model

I would like to convert the ActorDistributionModel from a trained PPOClipAgent into a TensorFlow Lite model for deployment. How should I accomplish this?
I have tried following this tutorial (see the section at the bottom on converting a policy to TFLite), but the network outputs a single action (the policy) rather than the density function over actions that I desire.
I think perhaps something like this could work:
tf.compat.v2.saved_model.save(actor_net, saved_model_path, signature=?)
... if I knew how to set the signature parameter. That line of code executes without error when I omit the signature parameter, but I get the following error on load (I assume because the signature is not set up correctly):
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
File "/home/ais/salesmentor.ai/MDPSolver/src/solver/ppo_budget.py", line 336, in train_eval
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
File "/home/ais/.local/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 1275, in from_saved_model
raise ValueError("Only support a single signature key.")
ValueError: Only support a single signature key.
This appears to work. I won't accept the answer until I have completed an end-to-end test, though.
import os
import tensorflow as tf

def export_model(actor_net, observation_spec, saved_model_path):
    predict_signature = {
        'action_pred':
            tf.function(func=lambda x: actor_net(x, None, None)[0].logits,
                        input_signature=(tf.TensorSpec(shape=observation_spec.shape),))
    }
    tf.saved_model.save(actor_net, saved_model_path, signatures=predict_signature)

    # Convert to TensorFlow Lite model.
    converter = tf.lite.TFLiteConverter.from_saved_model(
        saved_model_path, signature_keys=["action_pred"])
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
        tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
    ]
    tflite_policy = converter.convert()
    with open(os.path.join(saved_model_path, 'policy.tflite'), 'wb') as f:
        f.write(tflite_policy)
The solution wraps actor_net in a lambda because I was unable to figure out how to specify the signature with all three expected arguments. Through the lambda, I convert the function into one that takes a single argument (a tensor). I expect to pass None for the other two arguments in my use case, so nothing is lost in this approach.
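For the end-to-end test, here is a minimal sketch of loading and invoking the exported policy with the TFLite interpreter (saved_model_path as above; the zero observation is only a stand-in for a real one):
import os
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path=os.path.join(saved_model_path, 'policy.tflite'))
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy observation of the expected shape and read back the logits.
obs = np.zeros(input_details[0]['shape'], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], obs)
interpreter.invoke()
logits = interpreter.get_tensor(output_details[0]['index'])
print(logits)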
I see you are using CartPole as the simulated environment, a DQN agent, and the model learning and evaluation from the linked TF-Agents Checkpointer example. For a simple understanding, you need to understand the distributions involved and your model's limits (fewer than 6 actions determined at a time).
A discrete distribution answers the question to the point, but the links show how they implement a DQN agent in TF-Agents.
temp = tf.random.normal([10], 1, 0.2, tf.float32): the mean is one and the standard deviation is 0.2. The overall summed product of the result is close to one and its variance is 0.2; when there are 10 actions to determine, the possibility that the result is the same action is 1 from 5, or 0.5. (random normal)
The coefficients are ladder steps, or you can understand them as IF/ELSE or SWITCH conditions, such as the gaps 0 to 5, 5 to 10, 10 to 15, and so on.
The matrix product of the coefficients and the random samples selects 4 to 5 actions, sorted by priority and significance, picking the ones with the most effect per row.
The argmax ranges over 0 to 9, which are actions 0 to 9 that respond to the environment's input covariances.
Sample: to the point, random distributions and a selective agent (we say selective agent; maybe the questioner has confused it with a NN DQN):
import numpy as np
import tensorflow as tf

# coefficient_0 ... coefficient_9 are assumed to be defined elsewhere
temp = tf.random.normal([10], 1, 0.2, tf.float32)
temp = np.asarray(temp) * np.asarray([coefficient_0, coefficient_1, coefficient_2,
                                      coefficient_3, coefficient_4, coefficient_5,
                                      coefficient_6, coefficient_7, coefficient_8,
                                      coefficient_9])
temp = tf.nn.softmax(temp)
action = int(np.argmax(temp))

Get shape using tf.function()

I have this function
train_step_signature = [
    tf.TensorSpec(shape=(None, None), dtype=tf.int32)
]

@tf.function(input_signature=train_step_signature)
def train_step(inp):
    # do stuff
I need to use the first dimension of inp in one operation (a loop over range(inp.shape[0])), but when I try, an error pops up:
TypeError: 'NoneType' object cannot be interpreted as an integer
That is obviously because of train_step_signature. I've seen that it works if I drop train_step_signature from the arguments, but then my code takes much more time to run. My question is: is there any way to get this first dimension without losing the train_step_signature argument?
You are probably using a Pythonic loop like for i in range(inp.shape[0]); that is not possible because inp.shape[0] is None inside tf.function. Do not be afraid to use tf.while_loop inside tf.function.
Alternatively, try to use tf.shape(inp)[0] instead of inp.shape[0], as in the sketch below.
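A minimal sketch of both suggestions, where the loop body is only a stand-in for the real computation:
import tensorflow as tf

train_step_signature = [
    tf.TensorSpec(shape=(None, None), dtype=tf.int32)
]

@tf.function(input_signature=train_step_signature)
def train_step(inp):
    batch_size = tf.shape(inp)[0]  # dynamic shape: a tensor, never None
    total = tf.constant(0)
    for i in tf.range(batch_size):  # AutoGraph turns this into tf.while_loop
        total += tf.reduce_sum(inp[i])
    return total

print(train_step(tf.ones((3, 4), dtype=tf.int32)))  # tf.Tensor(12, shape=(), dtype=int32)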

TypeError: 'TensorShape' object is not callable

I am new to TensorFlow programming; I was digging into some functions and got this error in the following snippet:
with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))
    print(sess_1.run(e.shape()))
Error found:
Traceback (most recent call last):
File "C:/Users/Ashu/PycharmProjects/untitled/Bored.py", line 15, in
print(sess_1.run(e.shape()))
TypeError: 'TensorShape' object is not callable
I didn't find it here, so can anyone please clarify this silly doubt, as I am a new learner? Sorry for any typing mistakes!
I have one more doubt: when I simply use the eval() function it doesn't print anything in PyCharm; I had to use it along with the print() method. So my doubt is: when the print() method is used, it doesn't print the dtype of the tensor, it simply prints the tensor or Python object value in PyCharm. (Why am I not getting the output in a format like array([1., 1.], dtype=float32)?) Is this the PyCharm way of printing tensors in the new version, or is it something I am doing wrong? So excited to know the reason behind this; please help, and pardon me if I am wrong anywhere.
One confusing aspect of TensorFlow for beginners is that there are two types of shape: the dynamic shape, given by tf.shape(x), and the static shape, given by x.shape (assuming x is a tensor). While they represent the same concept, they are used very differently.
The static shape is the shape of a tensor known at graph construction time. It's a data type in its own right, but it can be converted to a list using as_list().
x = tf.placeholder(tf.float32, shape=(None, 3, 4))
static_shape = x.shape
shape_list = x.shape.as_list()
print(shape_list) # [None, 3, 4]
y = tf.reduce_sum(x, axis=1)
print(y.shape.as_list()) # [None, 4]
During operations, TensorFlow tracks static shapes as best it can. In the above example, y's shape was calculated based on the partially known shape of x. Note we haven't even created a session, but the static shape is still known.
Since the batch size is not known, you can't use the static first entry in calculations.
z = tf.reduce_sum(x) / tf.cast(x.shape.as_list()[0], tf.float32) # ERROR
(we could have divided by x.shape.as_list()[1], since that dimension is known statically, but that wouldn't demonstrate anything here)
If we need to use a value which is not known statically - i.e. at graph construction time - we can use the dynamic shape of x. The dynamic shape is a tensor - like other tensors in tensorflow - which is evaluated using a session.
z = tf.reduce_sum(x) / tf.cast(tf.shape(x)[0], tf.float32) # all good!
You can't call as_list on the dynamic shape, nor can you inspect its values without going through a session evaluation.
As stated in the documentation, you can only call a session's run method with tensors, operations, or lists of tensors/operations. Your last line of code calls e.shape(), but e.shape is a TensorShape object, not a function, so invoking it with parentheses raises the TypeError; and even e.shape by itself could not be passed to run, since the session can't execute a TensorShape argument.
When you call print with a tensor, the system prints the tensor's content. If you want to print the tensor's type, use code like print(type(tensor)).
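A minimal sketch of the corrected snippet from the question, showing both ways to get the shape:
import tensorflow as tf

with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))            # 11
    print(e.shape)                  # () -- static shape; no session needed
    print(sess_1.run(tf.shape(e)))  # [] -- dynamic shape, a tensor evaluated by the session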

Tensorflow: InvalidArgumentError: Input ... incompatible with expected float_ref

The following code results in a very unhelpful error:
import tensorflow as tf

x = tf.Variable(tf.constant(0.), name="x")

with tf.Session() as s:
    val = s.run(x.assign(1))
    print(val)  # 1
    val = s.run(x, {x: 2})
    print(val)  # 2
    val = s.run(x.assign(1), {x: 0.})  # InvalidArgumentError
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node Assign_1 was passed float from _arg_x_0_0:0 incompatible with expected float_ref.
How did I get this error?
Why do I get this error?
Here's what I could infer.
How did I get this error?
This error is seen when attempting to perform the following two operations in a single session run:
1. A TensorFlow variable is assigned a value
2. That same variable is also passed a value as part of the feed_dict
This is why the first two runs succeed: neither of them attempts to perform both operations simultaneously.
Why do I get this error?
I am not sure, but I don't think this was an intentional design choice by Google. Here's my explanation:
Firstly, the TF (TensorFlow) source code (basically) resolves x.assign(1) to tf.assign(x, 1), which gives us a hint for better understanding the error message when it says Input 0.
The error message refers to x when it says Input 0 of the assign op.
It goes on to say that the first argument of the assign op was passed a float from _arg_x_0_0:0.
TL;DR
Thus, for a run where a TF variable is provided as a feed, that variable is no longer treated as a variable (but instead as the value it was fed), and any attempt to further assign a value to it is erroneous, since only TF variables can be assigned a value in the graph.
Fix
If your graph has a variable assignment operation, don't pass a value to that same variable in your feed_dict. ¯\_(ツ)_/¯ Assuming you're using the feed_dict to provide an initial value, you could instead assign it a value in a prior session run. Or, leverage tf.control_dependencies when building your graph to assign it an initial value from a placeholder, as shown below:
import tensorflow as tf

x = tf.Variable(tf.constant(0.), name="x")
initial_x = tf.placeholder(tf.float32)
assign_from_placeholder = x.assign(initial_x)

with tf.control_dependencies([assign_from_placeholder]):
    x_assign = x.assign(1)

with tf.Session() as s:
    val = s.run(x_assign, {initial_x: 0.})  # Success!
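For completeness, a sketch of the first alternative mentioned above, assigning the initial value in a prior, separate session run:
import tensorflow as tf

x = tf.Variable(tf.constant(0.), name="x")
initial_x = tf.placeholder(tf.float32)
assign_initial = x.assign(initial_x)

with tf.Session() as s:
    s.run(assign_initial, {initial_x: 0.})  # run 1: set the initial value
    val = s.run(x.assign(1))                # run 2: assignment, no feed conflict
    print(val)  # 1.0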

Computing Hessians w.r.t. a higher-rank variable works neither with tf.hessians() nor tf.gradients()

When we need to calculate a double gradient or a Hessian in TensorFlow, we may use tf.hessians(F(x), x), or tf.gradients(tf.gradients(F(x), x)[0], x)[0]. However, when x is not rank one, I get the following error when using tf.hessians():
ValueError: Cannot compute Hessian because element 0 of xs does not
have rank one.. Tensor model_inputs/action:0 must have rank 1.
Received rank 2, shape (?, 1)
in the following code:
with tf.name_scope("1st scope"):
    self.states = tf.placeholder(tf.float32, (None, self.state_dim), name="states")
    self.action = tf.placeholder(tf.float32, (None, self.action_dim), name="action")
with tf.name_scope("2nd scope"):
    with tf.variable_scope("3rd scope"):
        self.policy_outputs = self.policy_network(self.states)

# use tf.gradients twice
self.actor_action_gradients = tf.gradients(self.policy_outputs, self.action)[0]
self.actor_action_hessian = tf.gradients(self.actor_action_gradients, self.action)[0]
# or use tf.hessians
self.actor_action_hessian = tf.hessians(self.policy_outputs, self.action)
Using tf.gradients() also causes an error:
in create_variables self.actor_action_hessian =
tf.gradients(self.actor_action_gradients, self.action)[0]
AttributeError: 'NoneType' object has no attribute 'dtype'
How can I fix this? Can neither tf.gradients() nor tf.hessians() be used in this case?
The second approach is fine; the error is somewhere else, namely your graph is not connected.
self.actor_action_gradients = tf.gradients(self.policy_outputs, self.action)[0]
self.actor_action_hessian = tf.gradients(self.actor_action_gradients, self.action)[0]
The error is thrown in the second line because self.actor_action_gradients is None, and so you can't compute its gradient. Nothing in your code suggests that self.policy_outputs depends on self.action (and it shouldn't, since it's the action that depends on the policy, not the policy on the action).
Once you fix this you will notice that the "hessian" is not really a Hessian but a vector. To form a proper Hessian of f w.r.t. x you have to iterate over all the values returned by tf.gradients and compute tf.gradients of each one independently. This is a known limitation in TF, and no simpler way is available right now; a sketch of this construction follows.
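A minimal sketch of that row-by-row construction (TF1-style, matching the question), for a hypothetical scalar f of a rank-1 x:
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(3,))
f = tf.reduce_sum(x ** 3)  # example scalar cost

grad = tf.gradients(f, x)[0]  # shape (3,)
# One tf.gradients call per gradient component, stacked into the Hessian.
hessian = tf.stack([tf.gradients(grad[i], x)[0]
                    for i in range(grad.shape.as_list()[0])])

with tf.Session() as sess:
    print(sess.run(hessian, {x: [1., 2., 3.]}))
    # f = sum(x^3), so the Hessian is diag(6x): [[6,0,0],[0,12,0],[0,0,18]]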