I am trying to use a bool condition in my own TF 2.1.0 Keras model; below is a simple example:
import tensorflow as tf

class TestKeras:
    def __init__(self):
        pass

    def build_graph(self):
        x = tf.keras.Input(shape=(2,), batch_size=1)
        x_value = x[0, 0]
        y = tf.cond(x_value > 0, lambda: tf.add(x_value, 0), lambda: tf.add(x_value, 0))
        return tf.keras.models.Model(inputs=[x], outputs=[y])

if __name__ == "__main__":
    tk = TestKeras()
    model = tk.build_graph()
    model.summary(line_length=100)
but it does not seem to work, throwing this exception:
using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
I have tried replacing tf.cond with tf.keras.backend.switch, but it still gives the same error.
I have also tried splitting the y = tf.cond(xxx) line into a separate function with the @tf.function decorator:
@tf.function
def compute_y(self, x):
    return tf.cond(x > 0, lambda: tf.add(x, 0), lambda: tf.add(x, 0))
but that raised another error:
Inputs to eager execution function cannot be Keras symbolic tensors, but found [<tf.Tensor 'strided_slice:0' shape=() dtype=float32>]
Does anyone know how to make a condition work in a TF 2.1.0 Keras model?
tf.keras.Input creates a symbolic tensor used to define an input to a Keras model. Whenever you want to apply custom logic in a Keras model, you should either subclass the Layer class or use a Lambda layer.
For example, with a Lambda layer:
class TestKeras:
    def __init__(self):
        pass

    def build_graph(self):
        x = tf.keras.Input(shape=(2,), batch_size=1)

        def custom_fct(x):
            x_value = x[0, 0]
            return tf.cond(x_value > 0, lambda: tf.add(x_value, 0), lambda: tf.add(x_value, 0))

        y = tf.keras.layers.Lambda(custom_fct)(x)
        return tf.keras.models.Model(inputs=[x], outputs=[y])
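The Layer-subclass alternative looks roughly like this (a minimal sketch of the same logic; the CondLayer name is just for illustration):

class CondLayer(tf.keras.layers.Layer):
    def call(self, x):
        x_value = x[0, 0]
        # tf.cond is fine here because call() is executed by Keras' own graph machinery
        return tf.cond(x_value > 0,
                       lambda: tf.add(x_value, 0),
                       lambda: tf.add(x_value, 0))

x = tf.keras.Input(shape=(2,), batch_size=1)
y = CondLayer()(x)
model = tf.keras.models.Model(inputs=[x], outputs=[y])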
Related
I defined a simple custom model:
import tensorflow as tf

class CustomModule(tf.keras.layers.Layer):
    def __init__(self):
        super(CustomModule, self).__init__()
        self.v = tf.Variable(1.)

    def call(self, x):
        print('Tracing with', x)
        return x * self.v

    def mutate(self, new_v):
        self.v.assign(new_v)
I want to save it for serving, which is why I need to provide a function for the “serving_default” signature. I’ve tried to do it like this:
module = CustomModule()
module_with_signature_path = './tmp/1'
call = tf.function(module.mutate, input_signature=[tf.TensorSpec([], tf.float32)])
tf.saved_model.save(module, module_with_signature_path, signatures=call)
I got an error:
ValueError: Got a non-Tensor value <tf.Operation 'StatefulPartitionedCall' type=StatefulPartitionedCall> for key 'output_0' in the output of the function __inference_mutate_8 used to generate the SavedModel signature 'serving_default'. Outputs for functions used as signatures must be a single Tensor, a sequence of Tensors, or a dictionary from string to Tensor.
How can I properly define a signature while saving the model? Thank you!
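Judging from the error text alone, the likely cause is that mutate produces no usable output: inside a traced function, Variable.assign shows up as an op rather than a Tensor, and a serving signature must output a Tensor (or a structure of Tensors). A minimal sketch of a fix, assuming it is acceptable to return the updated value:

def mutate(self, new_v):
    self.v.assign(new_v)
    return self.v.read_value()  # signatures must return a Tensor, not an op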
The model is built as
class MyModel(tf.keras.Model):
    def __init__(self, num_states, hidden_units, num_actions):
        super(MyModel, self).__init__()
        ## batch_size * num_states
        self.input_layer = tf.keras.layers.InputLayer(input_shape=(num_states,))
        self.hidden_layers = []
        for i in hidden_units:
            self.hidden_layers.append(tf.keras.layers.Dense(
                i, activation='tanh', kernel_initializer='RandomNormal'))
        self.output_layer = tf.keras.layers.Dense(
            num_actions, activation='linear', kernel_initializer='RandomNormal')

    @tf.function
    def call(self, inputs):
        z = self.input_layer(inputs)
        for layer in self.hidden_layers:
            z = layer(z)
        output = self.output_layer(z)
        return output
after training, the model was saved as:
tf.saved_model.save(self.model, export_dir='saved_model/model_checkpoint')
Now, I'm trying to restore the model and then do inference as:
train_agent = tf.saved_model.load('saved_model/model_checkpoint')
## input_state is an array with shape of (num_states, )
action = train_agent(np.atleast_2d(input_state))
However, I keep getting this error:
TypeError: '_UserObject' object is not callable
How can I exactly use the restored model?
I haven't figured out how to use tf.saved_model.save() and tf.saved_model.load() properly; if anyone could help, it would be great! My current solution is to use save_weights() instead. For example:
import numpy as np

model = Model()
model.train()
...
model.save_weights(checkpoint_dir)
input_s = np.random.normal(size=(1, 128))
output = model(input_s)  ## outputs a tensor
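For completeness, restoring with this approach means rebuilding the model and loading the weights back; a sketch, assuming the same Model class and checkpoint_dir as above:

restored = Model()
restored.load_weights(checkpoint_dir)  # weights are matched when the model is first called
output = restored(input_s)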
You correctly decorated the call method with tf.function, so your SavedModel has that very method serialized inside.
You can therefore invoke your model through its call method:
action = train_agent.call(np.atleast_2d(input_state))
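One caveat worth hedging: a tf.function restored from a SavedModel can only be called with input signatures that were traced before saving, so the dtype has to match what the model saw in training, e.g. something like:

action = train_agent.call(tf.constant(np.atleast_2d(input_state), dtype=tf.float32))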
The required code is given below:
tf.keras.models.save_model(model, path)
import tensorflow as tf

# save the model for later use
tf.keras.models.save_model(classifier_model, 'F:/Face_Recognition/face_classifier_model.h5')

# reload the model
classifier_model = tf.keras.models.load_model('F:/Face_Recognition/face_classifier_model.h5')
When using a custom layer class to replace a lambda function, building the model fails.
I previously had this code in a lambda function and it worked fine, but I was unable to save the model. I need to save the model I'm building, which depends on this code snippet.
import keras
import tensorflow as tf

class ShapePositionLayer(keras.layers.Layer):
    def call(self, x):
        assert isinstance(x, list)
        a, b = x
        return keras.backend.gather(keras.backend.shape(a), b)

    def compute_output_shape(self, input_shape):
        return (1,)

captions = keras.layers.Input(shape=[5, 1024], name='captions')
batch_size = ShapePositionLayer()([captions, tf.constant(0, dtype=tf.int32)])
model = keras.models.Model(inputs=[captions], outputs=[batch_size])
I expected to be able to build the model.
Instead I receive this error:
AttributeError: 'NoneType' object has no attribute '_inbound_nodes'
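This error typically appears when a raw tf.constant is passed into a standalone-Keras layer call: Keras cannot trace graph history (_inbound_nodes) through non-Keras tensors. One possible workaround is to move the constant inside the layer; a sketch, with the hypothetical name BatchSizeLayer and the index 0 hardcoded purely for illustration:

class BatchSizeLayer(keras.layers.Layer):
    def call(self, x):
        # gather dimension 0 (the batch size) from the input's shape
        return keras.backend.gather(keras.backend.shape(x), 0)

captions = keras.layers.Input(shape=[5, 1024], name='captions')
batch_size = BatchSizeLayer()(captions)
model = keras.models.Model(inputs=[captions], outputs=[batch_size])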
I found that it is easy to use Lasagne to make a graph like this:
import lasagne.layers as L

class A:
    def __init__(self):
        self.x = L.InputLayer(shape=(None, 3), name='x')
        self.y = self.x + 1

    def get_y_sym(self, x_var, **kwargs):
        y = L.get_output(self.y, {self.x: x_var}, **kwargs)
        return y
Through the method get_y_sym we get a tensor, not a value, and this tensor can then be used as the input of another graph.
If I use TensorFlow instead, how could I implement this?
I'm not familiar with Lasagne, but you should know that ALL of TensorFlow uses graph-based computation (unless you use tf.Eager, but that's another story). So by default something like:
net = tf.nn.conv2d(...)
returns a reference to a Tensor object. In other words, net is NOT a value; it is a reference to the output of the convolution node created by tf.nn.conv2d(...).
These can then be chained:
net2 = tf.nn.conv2d(net, ...) and so on.
To get "values" one has to open a tf.Session:
with tf.Session() as sess:
    net2_eval = sess.run(net2)
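Translating the Lasagne class above into (TF1-style, graph-mode) TensorFlow might look like the following sketch, with tf.placeholder playing the role of L.InputLayer and sess.run with a feed_dict playing the role of L.get_output:

import tensorflow as tf

class A:
    def __init__(self):
        self.x = tf.placeholder(tf.float32, shape=(None, 3), name='x')
        self.y = self.x + 1  # a symbolic tensor, not a value

    def get_y_sym(self):
        return self.y  # can be chained into further graph ops

a = A()
with tf.Session() as sess:
    y_val = sess.run(a.get_y_sym(), feed_dict={a.x: [[1., 2., 3.]]})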
I am trying to combine a Keras model with a Lasagne layer. I am calling this function in the Lasagne layer:
def get_output_for(self, inputs, deterministic=False):
    self.p = self.nonlinearity(T.dot(inputs[1], self.pi))
    self.mask = sample_mask(self.p)
    if deterministic or T.mean(self.p) == 0:
        return self.p * inputs[0]
    else:
        return inputs[0] * self.mask
The problem is that my inputs object is the output of the previous Keras layer, which is a Tensor object, as Keras layers produce Tensor outputs. This does not work. I am not sure what type the inputs are supposed to have, or how to convert between a Tensor and the type expected by this function.
I think it is best not to mix Lasagne and Keras/TensorFlow objects. Instead, you can convert the get_output_for method to TensorFlow.
In what follows, I suppose that nonlinearity is similar to something like
self.nonlinearity = lambda x: tf.maximum(x, 0)
and that inputs is a numpy.array-like object:
def tf_get_output_for(self, inputs, deterministic=False):
    self.p = self.nonlinearity(tf.matmul(inputs[1], self.pi))
    self.mask = sample_mask(self.p)
    if deterministic or tf.reduce_mean(self.p) == 0:
        return self.p * inputs[0]
    else:
        return inputs[0] * self.mask
The method sample_mask also has to be converted to be TensorFlow-compatible.
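Assuming sample_mask draws a binary mask with keep probabilities p (a guess from how it is used above, since its definition is not shown), a TensorFlow version could be sketched as:

def sample_mask(p):
    # Bernoulli sample: 1.0 where a uniform draw falls below p, else 0.0
    return tf.cast(tf.random.uniform(tf.shape(p)) < p, p.dtype)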