When I convert my TensorFlow model (saved as a .pb file) to a UFF file, I get an error log like this:
Using output node final/lanenet_loss/instance_seg
Using output node final/lanenet_loss/binary_seg
Converting to UFF graph
Warning: No conversion function registered for layer: Slice yet.
Converting as custom op Slice final/lanenet_loss/Slice
name: "final/lanenet_loss/Slice"
op: "Slice"
input: "final/lanenet_loss/Shape_1"
input: "final/lanenet_loss/Slice/begin"
input: "final/lanenet_loss/Slice/size"
attr {
  key: "Index"
  value {
    type: DT_INT32
  }
}
attr {
  key: "T"
  value {
    type: DT_INT32
  }
}
Traceback (most recent call last):
  File "tfpb_to_uff.py", line 16, in <module>
    uff_model = uff.from_tensorflow(graphdef=output_graph_def, output_filename=output_path, output_nodes=["final/lanenet_loss/instance_seg", "final/lanenet_loss/binary_seg"], text=True)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
    name="main")
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 51, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 28, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 177, in parse_tf_attrs
    for key, val in attrs.items()}
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 177, in <dictcomp>
    for key, val in attrs.items()}
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 172, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 146, in convert_tf2uff_field
    return TensorFlowToUFFConverter.convert_tf2numpy_dtype(val)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 74, in convert_tf2numpy_dtype
    return np.dtype(dt[dtype])
TypeError: list indices must be integers or slices, not AttrValue
This means that the 'Slice' layer is not currently supported by TensorRT.
So I plan to modify this layer in my code.
However, I can't locate the 'Slice' layer in my code, even though I can get information about it via sess.graph.get_operation_by_name:
graph list name: "final/lanenet_loss/Slice"
op: "Slice"
input: "final/lanenet_loss/Shape_1"
input: "final/lanenet_loss/Slice/begin"
input: "final/lanenet_loss/Slice/size"
attr {
  key: "Index"
  value {
    type: DT_INT32
  }
}
attr {
  key: "T"
  value {
    type: DT_INT32
  }
}
How can I locate the 'Slice' layer in my code so that I can replace it with a TensorRT custom layer?
Since you are parsing from TensorFlow, maybe it's better to see which layers TensorRT DOES support. As of TensorRT 4, the following layers are supported:
Placeholder
Const
Add, Sub, Mul, Div, Minimum and Maximum
BiasAdd
Negative, Abs, Sqrt, Rsqrt, Pow, Exp and Log
FusedBatchNorm
ReLU, TanH, Sigmoid
SoftMax
Mean
ConcatV2
Reshape
Transpose
Conv2D
DepthwiseConv2dNative
ConvTranspose2D
MaxPool
AvgPool
Pad is supported if followed by one of these TensorFlow layers:
Conv2D, DepthwiseConv2dNative, MaxPool, and AvgPool
From what I see in your logs, you are trying to deploy LaneNet. Is it the LaneNet of this paper?
If that is the case, it seems to be a variant of H-Net. I haven't read about it, but according to the paper the architecture is the following:
[architecture table from the paper]
So I see Convs, ReLUs, MaxPools and a Linear layer, all of which are supported. I don't know about that BN; check which layer it refers to, and if it is not on the list of supported layers you'll have to implement it from scratch.
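To find where ops like that Slice come from, one option is to scan the frozen GraphDef for node types outside the supported list before running the UFF converter; the inputs of each offending node usually point back to the part of the model-building code that created it. A minimal sketch, assuming TensorFlow 1.x, a hypothetical path to your frozen .pb, and my rough translation of the list above into TensorFlow op names:

import tensorflow as tf

GRAPH_PB_PATH = 'lanenet_frozen.pb'  # hypothetical path; use your own .pb

# Rough mapping of the supported-layer list above to TensorFlow op names.
SUPPORTED = {
    'Placeholder', 'Const', 'Add', 'Sub', 'Mul', 'Div', 'Minimum', 'Maximum',
    'BiasAdd', 'Neg', 'Abs', 'Sqrt', 'Rsqrt', 'Pow', 'Exp', 'Log',
    'FusedBatchNorm', 'Relu', 'Tanh', 'Sigmoid', 'Softmax', 'Mean',
    'ConcatV2', 'Reshape', 'Transpose', 'Conv2D', 'DepthwiseConv2dNative',
    'Conv2DBackpropInput', 'MaxPool', 'AvgPool', 'Pad', 'Identity',
}

graph_def = tf.GraphDef()
with tf.gfile.GFile(GRAPH_PB_PATH, 'rb') as f:
    graph_def.ParseFromString(f.read())

# Print every node the converter will choke on, together with its inputs.
for node in graph_def.node:
    if node.op not in SUPPORTED:
        print(node.op, node.name, '<-', list(node.input))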
Best of luck!
I am on TensorFlow 2.4.0 and tried to apply exponential decay to the learning rate as follows:
learning_rate_scheduler = tf.keras.optimizers.schedules.ExponentialDecay(initial_learning_rate=0.1, decay_steps=1000, decay_rate=0.97, staircase=False)
and start my optimizer's learning rate with this decay schedule:
optimizer_to_use = Adam(learning_rate=learning_rate_scheduler)
The model is compiled as follows:
model.compile(loss=metrics.contrastive_loss, optimizer=optimizer_to_use, metrics=[accuracy])
Training goes well until the third epoch, where the following error is shown:
File "train_contrastive_siamese_network_inception.py", line 163, in run_experiment
history = model.fit([pairTrain[:, 0], pairTrain[:, 1]], labelTrain[:], validation_data=([pairTest[:, 0], pairTest[:, 1]], labelTest[:]), batch_size=config.BATCH_SIZE, epochs=config.EPOCHS, callbacks=callbacks)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1145, in fit
callbacks.on_epoch_end(epoch, epoch_logs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py", line 432, in on_epoch_end
callback.on_epoch_end(epoch, numpy_logs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/callbacks.py", line 2542, in on_epoch_end
old_lr = float(K.get_value(self.model.optimizer.lr))
TypeError: float() argument must be a string or a number, not 'ExponentialDecay'
I checked, and this issue was even raised on the official Keras forum, but with no success even there. Plus, the documentation clearly states that:
A LearningRateSchedule instance can be passed in as the learning_rate argument of any optimizer.
What could be the issue?
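For what it's worth, the failing line can be reproduced outside of training. Judging from the traceback, one of my callbacks reads self.model.optimizer.lr in its on_epoch_end (callbacks.py line 2542 looks like ReduceLROnPlateau), and with a schedule attached, optimizer.lr holds the ExponentialDecay object itself rather than a number. A minimal sketch of the failure, assuming TF 2.4:

import tensorflow as tf
from tensorflow.keras import backend as K

schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1, decay_steps=1000, decay_rate=0.97)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)

print(type(optimizer.lr))         # ExponentialDecay, not a float or variable
float(K.get_value(optimizer.lr))  # raises the same TypeError as above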
What does this error message mean?
TypeError: Could not build a TypeSpec for name: "tf.print/PrintV2"
op: "PrintV2"
input: "tf.print/StringFormat"
attr {
  key: "end"
  value {
    s: "\n"
  }
}
attr {
  key: "output_stream"
  value {
    s: "stdout"
  }
}
of unsupported type <class 'google3.third_party.tensorflow.python.framework.ops.Operation'>
I'm printing the shape of a tensor. My code "works" without the print, so I'm sure it is this statement, and the tensor is valid. I can print the shape of a tensor in a test colab. I'm clueless how to narrow this down and debug this. My failure is in a big hairy program.
I can't find any information on the web about what might be causing this error.
What does it mean when I get a TypeSpec error from a tf.print?
-- Malcolm
(TF 2.7.0)
I'm sorry for the tardy follow-up.
It turns out that the output from Keras layers is not a regular tf.Tensor. I still don't understand the reason, the error message, or how to give a better message. :-(
Here is a simple example of the problem (and the error message) and an (undocumented) solution.
import tensorflow as tf
keras_input = tf.keras.layers.Input([10])
tf.print(keras_input)
==> TypeError: Could not build a TypeSpec for name: "tf.print_2/PrintV2"
tf.keras.backend.print_tensor(keras_input)
==> <KerasTensor: shape=(None, 10) dtype=float32 (created by layer 'tf.keras.backend.print_tensor')>
So the moral of the story is: use tf.keras.backend.print_tensor when working with Keras models.
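As a usage note (my own sketch, not from the original post): because print_tensor passes its input through, it can also be spliced between the layers of a functional model:

import tensorflow as tf

inputs = tf.keras.layers.Input([10])
# print_tensor returns its input, so the graph keeps building through it.
x = tf.keras.backend.print_tensor(inputs, message='inputs =')
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)

model.predict(tf.ones([2, 10]))  # the batch is printed when the model runs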
I have saved the trained model and the weights as below:
model, history, score = fit_model(model, train_batches, val_batches, callbacks=[callback])
model.save('./model')
model.save_weights('./weights')
Then I tried to load the saved model in the following way:
if __name__ == '__main__':
    model = keras.models.load_model('./model', compile=False, custom_objects={"F1Score": tfa.metrics.F1Score})
    test_batches, nb_samples = test_gen(dataset_test_path, 32, img_width, img_height)
    predict, loss, acc = predict_model(model, test_batches, nb_samples)
    print(predict)
    print(acc)
    print(loss)
But it gives me an error. What should I do to overcome this?
Traceback (most recent call last):
  File "test_pro.py", line 34, in <module>
    model = keras.models.load_model('./model', compile=False, custom_objects={"F1Score": tfa.metrics.F1Score})
  File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/save.py", line 212, in load_model
    return saved_model_load.load(filepath, compile, options)
  File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 138, in load
    keras_loader.load_layers()
  File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 379, in load_layers
    self.loaded_nodes[node_metadata.node_id] = self._load_layer(
  File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 407, in _load_layer
    obj, setter = revive_custom_object(identifier, metadata)
  File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 921, in revive_custom_object
    raise ValueError('Unable to restore custom object of type {} currently. '
ValueError: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements `get_config`and `from_config` when saving. In addition, please use the `custom_objects` arg when calling `load_model()`.
Looking at the source code for Keras, the error is raised when trying to load a model with a custom object:
def revive_custom_object(identifier, metadata):
  """Revives object from SavedModel."""
  if ops.executing_eagerly_outside_functions():
    model_class = training_lib.Model
  else:
    model_class = training_lib_v1.Model

  revived_classes = {
      constants.INPUT_LAYER_IDENTIFIER: (
          RevivedInputLayer, input_layer.InputLayer),
      constants.LAYER_IDENTIFIER: (RevivedLayer, base_layer.Layer),
      constants.MODEL_IDENTIFIER: (RevivedNetwork, model_class),
      constants.NETWORK_IDENTIFIER: (RevivedNetwork, functional_lib.Functional),
      constants.SEQUENTIAL_IDENTIFIER: (RevivedNetwork, models_lib.Sequential),
  }
  parent_classes = revived_classes.get(identifier, None)
  if parent_classes is not None:
    parent_classes = revived_classes[identifier]
    revived_cls = type(
        compat.as_str(metadata['class_name']), parent_classes, {})
    return revived_cls._init_from_metadata(metadata)  # pylint: disable=protected-access
  else:
    raise ValueError('Unable to restore custom object of type {} currently. '
                     'Please make sure that the layer implements `get_config`'
                     'and `from_config` when saving. In addition, please use '
                     'the `custom_objects` arg when calling `load_model()`.'
                     .format(identifier))
The method only works with custom objects of the types defined in revived_classes. As you can see, it currently only supports input layer, layer, model, network, and sequential custom objects.
In your code, you pass a tfa.metrics.F1Score class in the custom_objects argument, which is of type METRIC_IDENTIFIER and therefore not supported (probably because it doesn't implement the get_config and from_config functions, as the error output says):
keras.models.load_model('./model', compile=False, custom_objects={"F1Score": tfa.metrics.F1Score})
It's been a while since I last worked with Keras, but maybe you can try to follow what was proposed in this other related answer and wrap the call to tfa.metrics.F1Score in a function. Something like this (adjust it to your needs):
def f1(y_true, y_pred):
    metric = tfa.metrics.F1Score(num_classes=3, threshold=0.5)
    metric.update_state(y_true, y_pred)
    return metric.result()
keras.models.load_model('./model', compile=False, custom_objects={'f1': f1})
I am trying to compute the gradients of a complex function in TensorFlow, but I am having some trouble.
Here is my code:
import numpy as np
import tensorflow as tf
def CompSEQ(seq, rho0):
    def EvolveRHO(prev, input):
        return tf.mul(tf.complex(input, 0.0), tf.matmul(rho0, prev))

    def ComputeP(p, rho):
        return p * tf.real(tf.trace(rho))

    rhos = tf.scan(EvolveRHO, seq, initializer=rho0)
    p = tf.scan(ComputeP, rhos, initializer=tf.constant(1.0))
    return tf.gather(p, [tf.size(seq) - 1])[0]

N = 4
seq = tf.placeholder(tf.float32, shape=[5])
x = tf.Variable(tf.zeros([2*N*N], dtype=tf.float32))
seqP = CompSEQ(seq, tf.complex(tf.reshape(x[0:N*N], [N, N]),
                               tf.reshape(x[N*N:2*N*N], [N, N])))
#seqPp = tf.gradients([seqP], [x]) # THIS LINE CAUSES THE PROBLEM!!!
sess = tf.Session()
sess.run(tf.initialize_all_variables())
v = np.random.rand(2*N*N).astype(np.float32)
s0 = np.random.rand(5).astype(np.float32)
p = sess.run(seqP, feed_dict={seq:s0, x:v})
print('seqP', p)
I use a float32 input vector x that is transformed into a complex matrix. All the computations are performed on complex numbers, and the last tf.scan in the CompSEQ function converts the results back to float32 by taking the real part.
If I comment out the call to tf.gradients (as in the code above), everything works fine, but when I try to compute the gradients I get the following error:
Traceback (most recent call last):
  File "error.1.py", line 24, in <module>
    seqPp = tf.gradients([seqP], [x])  # THIS LINE CAUSES THE PROBLEM!!!
  File "/Users/tamburin/Library/Python/2.7/lib/python/site-packages/tensorflow/python/ops/gradients.py", line 486, in gradients
    _VerifyGeneratedGradients(in_grads, op)
  File "/Users/tamburin/Library/Python/2.7/lib/python/site-packages/tensorflow/python/ops/gradients.py", line 264, in _VerifyGeneratedGradients
    dtypes.as_dtype(inp.dtype).name))
ValueError: Gradient type float32 generated for op name: "scan/while/Switch_1"
op: "Switch"
input: "scan/while/Merge_1"
input: "scan/while/LoopCond"
attr {
  key: "T"
  value {
    type: DT_COMPLEX64
  }
}
attr {
  key: "_class"
  value {
    list {
      s: "loc:@scan/while/Merge_1"
    }
  }
}
does not match input type complex64
Converting all the computations to float32 variables resolves the problem, but I need to keep the computation on complex variables (this is a simplified example of my real problem).
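For reference, here is a minimal sketch (my illustration, not the original code) of that float32 workaround for the first scan: stack the real and imaginary parts of rho into one float32 tensor and expand the complex product by hand, so nothing complex-valued flows through tf.scan. Here rho0_real and rho0_imag stand for the two [N, N] halves of x, and tf.pack/tf.unpack are the pre-1.0 names of stack/unstack:

def EvolveRHO_real(prev, input):
    # prev stacks the real and imaginary parts of rho, each [N, N].
    pR, pI = tf.unpack(prev)
    # input * (rho0 @ prev), with the complex matmul written out
    # in terms of real and imaginary components.
    newR = tf.mul(input, tf.matmul(rho0_real, pR) - tf.matmul(rho0_imag, pI))
    newI = tf.mul(input, tf.matmul(rho0_real, pI) + tf.matmul(rho0_imag, pR))
    return tf.pack([newR, newI])

rhos = tf.scan(EvolveRHO_real, seq,
               initializer=tf.pack([rho0_real, rho0_imag]))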
I've written some code in TensorFlow to compute the edit distance between one string and a set of strings. I can't figure out the error.
import tensorflow as tf
sess = tf.Session()
# Create input data
test_string = ['foo']
ref_strings = ['food', 'bar']
def create_sparse_vec(word_list):
    num_words = len(word_list)
    indices = [[xi, 0, yi] for xi, x in enumerate(word_list) for yi, y in enumerate(x)]
    chars = list(''.join(word_list))
    return tf.SparseTensor(indices, chars, [num_words, 1, 1])
test_string_sparse = create_sparse_vec(test_string*len(ref_strings))
ref_string_sparse = create_sparse_vec(ref_strings)
sess.run(tf.edit_distance(test_string_sparse, ref_string_sparse, normalize=True))
This code works, and when run it produces the output:
array([[ 0.25],
       [ 1.  ]], dtype=float32)
But when I attempt to do this by feeding the sparse tensors in through sparse placeholders, I get an error.
test_input = tf.sparse_placeholder(dtype=tf.string)
ref_input = tf.sparse_placeholder(dtype=tf.string)
edit_distances = tf.edit_distance(test_input, ref_input, normalize=True)
feed_dict = {test_input: test_string_sparse,
             ref_input: ref_string_sparse}
sess.run(edit_distances, feed_dict=feed_dict)
Here is the error traceback:
Traceback (most recent call last):
  File "<ipython-input-29-4e06de0b7af3>", line 1, in <module>
    sess.run(edit_distances, feed_dict=feed_dict)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 372, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 597, in _run
    for subfeed, subfeed_val in _feed_fn(feed, feed_val):
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 558, in _feed_fn
    return feed_fn(feed, feed_val)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 268, in <lambda>
    [feed.indices, feed.values, feed.shape], feed_val)),
TypeError: zip argument #2 must support iteration
Any idea what is going on here?
TL;DR: For the return type of create_sparse_vec(), use tf.SparseTensorValue instead of tf.SparseTensor.
The problem here comes from the return type of create_sparse_vec(), which is tf.SparseTensor and is not understood as a feed value in the call to sess.run().
When you feed a (dense) tf.Tensor, the expected value type is a NumPy array (or certain objects that can be converted to an array). When you feed a tf.SparseTensor, the expected value type is a tf.SparseTensorValue, which is similar to a tf.SparseTensor but whose indices, values, and shape properties are NumPy arrays (or certain objects that can be converted to arrays, like the lists in your example).
The following code should work:
def create_sparse_vec(word_list):
    num_words = len(word_list)
    indices = [[xi, 0, yi] for xi, x in enumerate(word_list) for yi, y in enumerate(x)]
    chars = list(''.join(word_list))
    return tf.SparseTensorValue(indices, chars, [num_words, 1, 1])