tf.print gives TypeSpec TypeError - tensorflow

What does this error message mean?
TypeError: Could not build a TypeSpec for name: "tf.print/PrintV2"
op: "PrintV2"
input: "tf.print/StringFormat"
attr {
  key: "end"
  value {
    s: "\n"
  }
}
attr {
  key: "output_stream"
  value {
    s: "stdout"
  }
}
of unsupported type <class 'google3.third_party.tensorflow.python.framework.ops.Operation'>
I'm printing the shape of a tensor. My code "works" without the print statement, so I'm sure this line is the cause and the tensor itself is valid. I can print the shape of a tensor in a test colab, so I'm clueless about how to narrow this down and debug it. My failure is in a big, hairy program.
I can't find any information on the web about what might be causing this error.
What does it mean when I get a TypeSpec error from a tf.print?
-- Malcolm
(TF 2.7.0)

I'm sorry for the tardy follow-up.
It turns out that the output of a Keras layer is not a regular tf.Tensor. I still don't understand the reason, the error message, or how a better message could be given. :-(
Here is a simple example of the problem (and the error message) and an (undocumented) solution.
import tensorflow as tf
keras_input = tf.keras.layers.Input([10])
tf.print(keras_input)
==> TypeError: Could not build a TypeSpec for name: "tf.print_2/PrintV2"
tf.keras.backend.print_tensor(keras_input)
==> <KerasTensor: shape=(None, 10) dtype=float32 (created by layer 'tf.keras.backend.print_tensor')>
So the moral of the story is: use tf.keras.backend.print_tensor when working with Keras models. It returns the (symbolic) tensor unchanged, and the actual printing happens when the model is evaluated.
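An alternative that achieves the same effect (a minimal sketch, not from the original answer) is to wrap tf.print in a Lambda layer, so that the print runs on a concrete tensor when the model is called, rather than on the symbolic KerasTensor at graph-construction time:

import tensorflow as tf

def print_shape(x):
    # Executes on a real tensor at call time, not on the
    # symbolic KerasTensor at model-construction time.
    tf.print("shape:", tf.shape(x))
    return x

inputs = tf.keras.layers.Input([10])
outputs = tf.keras.layers.Lambda(print_shape)(inputs)
model = tf.keras.Model(inputs, outputs)
model(tf.zeros([2, 10]))  # prints: shape: [2 10]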

Related

cannot deploy YAMNet model to SageMaker

I followed this tutorial and had the model fine-tuned.
The model-saving part for the serving model looks like this:
saved_model_path = 'dogs_and_cats_yamnet/yamnet-model/00000001'
input_segment = tf.keras.layers.Input(shape=(), dtype=tf.float32, name='audio')
embedding_extraction_layer = hub.KerasLayer(yamnet_model_handle,
                                            trainable=False, name='yamnet')
_, embeddings_output, _ = embedding_extraction_layer(input_segment)
serving_outputs = my_model(embeddings_output)
serving_outputs = ReduceMeanLayer(axis=0, name='classifier')(serving_outputs)
serving_model = tf.keras.Model(input_segment, serving_outputs)
serving_model.save(saved_model_path, include_optimizer=False)
Then I followed this page, uploading the model to S3 and deploying it.
!tar -C "$PWD" -czf dogs_and_cats_yamnet.tar.gz dogs_and_cats_yamnet/
model_data = Session().upload_data(path="dogs_and_cats_yamnet.tar.gz", key_prefix="model")
model = TensorFlowModel(model_data=model_data, role=sagemaker_role, framework_version="2.3")
predictor = model.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")
Deployment seems successful, but when I try to do inference,
waveform = np.zeros((3*48000), dtype=np.float32)
result = predictor.predict(waveform)
the following error occurs.
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
"error": "The first dimension of paddings must be the rank of inputs[1,2] [1,144000]\n\t [[{{node yamnet_frames/tf_op_layer_Pad/Pad}}]]"
I have no idea why this happens. I have been struggling with it for hours and have come up with no clue.
YAMNet works fine when I pull the model directly from TF Hub and run inference with it.
This is kind of a minor question I guess, but I would appreciate any helpful answers.
Thank you in advance.
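One generic first step (standard TensorFlow tooling; the directory matches the save path above) is to inspect what the exported serving signature actually expects, and compare it against the (1, 144000) shape that shows up in the error message:

saved_model_cli show --dir dogs_and_cats_yamnet/yamnet-model/00000001 --tag_set serve --signature_def serving_default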

tensorflow.js getting Error when checking input: expected dense_Dense1_input to have 3 dimension(s). but got array with shape

Simple question, and I'm sure the answer is straightforward, but I'm really struggling to match the model's shape with the tensor I'm fitting to the model.
this simple code
let tf = require('@tensorflow/tfjs-node');
let features = {
    x: [1,2,3,4,5,6,7,8,9],
    y: [1,2,3,4,5,6,7,8,9]
}
let tensorfeature = tf.tensor2d(Object.values(features))
console.log(tensorfeature.shape)
const model = tf.sequential();
model.add(tf.layers.dense({
    inputShape: tensorfeature.shape,
    units: 1
}))
const optimizer = tf.train.sgd(0.005);
model.compile({optimizer: optimizer, loss: 'meanAbsoluteError'});
model.fit(tensorfeature,
{epochs: 5}
)
This results in: Error: Error when checking input: expected dense_Dense1_input to have 3 dimension(s). but got array with shape 2,9
I've tried multiple things with reshape, slice, etc. with no luck. Can someone point out what exactly is wrong?
model.fit takes at least two parameters, x and y, which are either tensors or arrays of tensors. The config object is the third parameter.
Also, the feature tensor (tensorfeature) passed as an argument to model.fit should be one dimension higher than the inputShape of the model. Since tensorfeature.shape is used as the inputShape, if we want to train the model with tensorfeature, its dimensionality needs to be expanded. This can be done using reshape or expandDims.
model.fit(tensorfeature.expandDims(0), labels, {epochs: 5})
// or, equivalently
model.fit(tensorfeature.reshape([1, ...tensorfeature.shape]), labels, {epochs: 5})
(where labels is a tensor of targets whose first dimension matches the expanded feature tensor)
This shape mismatch between the model and the training data has been discussed here and there.

How to locate an operation unsupported by TensorRT

When I convert my TensorFlow model (saved as a .pb file) to a UFF file, I get an error log like this:
Using output node final/lanenet_loss/instance_seg
Using output node final/lanenet_loss/binary_seg
Converting to UFF graph
Warning: No conversion function registered for layer: Slice yet.
Converting as custom op Slice final/lanenet_loss/Slice
name: "final/lanenet_loss/Slice"
op: "Slice"
input: "final/lanenet_loss/Shape_1"
input: "final/lanenet_loss/Slice/begin"
input: "final/lanenet_loss/Slice/size"
attr {
  key: "Index"
  value {
    type: DT_INT32
  }
}
attr {
  key: "T"
  value {
    type: DT_INT32
  }
}
Traceback (most recent call last):
  File "tfpb_to_uff.py", line 16, in <module>
    uff_model = uff.from_tensorflow(graphdef=output_graph_def, output_filename=output_path, output_nodes=["final/lanenet_loss/instance_seg", "final/lanenet_loss/binary_seg"], text=True)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
    name="main")
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 51, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 28, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 177, in parse_tf_attrs
    for key, val in attrs.items()}
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 177, in <dictcomp>
    for key, val in attrs.items()}
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 172, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 146, in convert_tf2uff_field
    return TensorFlowToUFFConverter.convert_tf2numpy_dtype(val)
  File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 74, in convert_tf2numpy_dtype
    return np.dtype(dt[dtype])
TypeError: list indices must be integers or slices, not AttrValue
This means that the 'Slice' layer is currently not supported by TensorRT.
So I plan to modify this layer in my code.
However, I can't locate the 'Slice' layer in my code, even though I can get information about it via sess.graph.get_operation_by_name:
graph list name: "final/lanenet_loss/Slice"
op: "Slice"
input: "final/lanenet_loss/Shape_1"
input: "final/lanenet_loss/Slice/begin"
input: "final/lanenet_loss/Slice/size"
attr {
  key: "Index"
  value {
    type: DT_INT32
  }
}
attr {
  key: "T"
  value {
    type: DT_INT32
  }
}
How can I locate the 'Slice' layer in my code so that I can replace it with a TensorRT custom layer?
Since you are parsing from TensorFlow, maybe it's better to first see which layers TensorRT DOES support. As of TensorRT 4, the following layers are supported:
Placeholder
Const
Add, Sub, Mul, Div, Minimum and Maximum
BiasAdd
Negative, Abs, Sqrt, Rsqrt, Pow, Exp and Log
FusedBatchNorm
ReLU, TanH, Sigmoid
SoftMax
Mean
ConcatV2
Reshape
Transpose
Conv2D
DepthwiseConv2dNative
ConvTranspose2D
MaxPool
AvgPool
Pad (supported if followed by one of these TensorFlow layers: Conv2D, DepthwiseConv2dNative, MaxPool, and AvgPool)
From what I see in your logs you are trying to deploy LaneNet; is it the LaneNet of this paper?
If that is the case, it seems to be a variant of H-Net. I haven't read about it, but going by the architecture diagram in the paper (not reproduced here), I see Convs, ReLUs, MaxPool and Linear layers, all of which are supported. I don't know about that BN block; check which operation it refers to, and if it is not on the list of supported layers you'll have to implement it from scratch.
Best of luck!
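To actually locate where the Slice op comes from, one option (a sketch, assuming a TF 1.x frozen graph; the .pb path is a placeholder) is to walk the imported graph and print every Slice op together with its inputs. The input names (e.g. final/lanenet_loss/Shape_1) usually point back at the high-level operation in your Python code that created the slice:

import tensorflow as tf

# Load the frozen graph (path is a placeholder).
graph_def = tf.GraphDef()
with open('frozen_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

# List every Slice op and its inputs. An input like Shape_1 suggests
# the slice was created by indexing a shape tensor in Python,
# e.g. tf.shape(x)[1:3] -- a common, easy-to-miss source.
for op in graph.get_operations():
    if op.type == 'Slice':
        print(op.name)
        for inp in op.inputs:
            print('  input:', inp.name)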

Correct format for input data on CloudML

I'm trying to send a job up to my object detection model on CloudML to get predictions. I'm following the guide at https://cloud.google.com/ml-engine/docs/online-predict but I'm getting an error when submitting the request:
RuntimeError: Prediction failed: Error processing input: Expected uint8, got '\xf6>\x00\x01\x04\xa4d\x94...(more bytes)...\x00\x10\x10\x10\x04\x80\xd9' of type 'str' instead.
This is my code:
img = base64.b64encode(open("file.jpg", "rb").read()).decode('utf-8')
json = {"b64": img}
result = predict_json(project, model, json, "v1")
My fault, I forgot to add --input_type encoded_image_string_tensor when I exported the graph.
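For reference, with the TensorFlow Object Detection API exporter the corrected export command would look roughly like this (paths and checkpoint number are placeholders):

python object_detection/export_inference_graph.py \
    --input_type encoded_image_string_tensor \
    --pipeline_config_path path/to/pipeline.config \
    --trained_checkpoint_prefix path/to/model.ckpt-XXXX \
    --output_directory exported_model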

Changing label name when retraining Inception on Google Cloud ML

I currently follow the tutorial to retrain Inception for image classification:
https://cloud.google.com/blog/big-data/2016/12/how-to-train-and-classify-images-using-google-cloud-machine-learning-and-cloud-dataflow
However, when I make a prediction with the API, I only get the index of my class back as the label. I would instead like the API to return a string with the actual class name, e.g. instead of
predictions:
- key: '0'
  prediction: 4
  scores:
  - 8.11998e-09
  - 2.64907e-08
  - 1.10307e-06
I would like to get:
predictions:
- key: '0'
  prediction: ROSES
  scores:
  - 8.11998e-09
  - 2.64907e-08
  - 1.10307e-06
Looking at the reference for the Google API it should be possible:
https://cloud.google.com/ml-engine/reference/rest/v1/projects/predict
I already tried to change the following in model.py
outputs = {
    'key': keys.name,
    'prediction': tensors.predictions[0].name,
    'scores': tensors.predictions[1].name
}
tf.add_to_collection('outputs', json.dumps(outputs))
to
if tensors.predictions[0].name == 0:
    pred_name = 'roses'
elif tensors.predictions[0].name == 1:
    pred_name = 'tulips'
outputs = {
    'key': keys.name,
    'prediction': pred_name,
    'scores': tensors.predictions[1].name
}
tf.add_to_collection('outputs', json.dumps(outputs))
but this doesn't work.
My next idea was to change this part in the preprocess.py file, so that instead of getting the index I use the string label:
def process(self, row, all_labels):
    try:
        row = row.element
    except AttributeError:
        pass
    if not self.label_to_id_map:
        for i, label in enumerate(all_labels):
            label = label.strip()
            if label:
                self.label_to_id_map[label] = label  # i
and
label_ids = []
for label in row[1:]:
    try:
        label_ids.append(label.strip())
        # label_ids.append(self.label_to_id_map[label.strip()])
    except KeyError:
        unknown_label.inc()
but this gives the error:
TypeError: 'roses' has type <type 'str'>, but expected one of: (<type 'int'>, <type 'long'>) [while running 'Embed and make TFExample']
Hence I thought that I should change something here in preprocess.py in order to allow strings:
example = tf.train.Example(features=tf.train.Features(feature={
    'image_uri': _bytes_feature([uri]),
    'embedding': _float_feature(embedding.ravel().tolist()),
}))
if label_ids:
    label_ids.sort()
    example.features.feature['label'].int64_list.value.extend(label_ids)
But I don't know how to change it appropriately, as I could not find something like str_list. Could anyone please help me out here?
Online prediction certainly allows this, but the model itself needs to be updated to do the conversion from int to string.
Keep in mind that the Python code is just building a graph which describes what computation to do in your model -- you're not sending the Python code to online prediction, you're sending the graph you build.
That distinction is important because the changes you have made are in Python -- you don't yet have any inputs or predictions, so you won't be able to inspect their values. What you need to do instead is add the equivalent lookups to the graph that you're exporting.
You could modify the code like so:
labels = tf.constant(['cars', 'trucks', 'suvs'])
predicted_indices = tf.argmax(softmax, 1)
prediction = tf.gather(labels, predicted_indices)
And leave the inputs/outputs untouched from the original code.
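Putting it together with the outputs dict from the question (a sketch; softmax, keys, and tensors refer to the variables already present in model.py, and the label list and its order are placeholders that must match the label ordering produced during preprocessing):

labels = tf.constant(['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'])
predicted_indices = tf.argmax(softmax, 1)
prediction = tf.gather(labels, predicted_indices)  # string tensor

outputs = {
    'key': keys.name,
    'prediction': prediction.name,
    'scores': tensors.predictions[1].name
}
tf.add_to_collection('outputs', json.dumps(outputs))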