TensorFlow data augmentation gives a warning: Using a while_loop for converting

I use data augmentation following the official TensorFlow tutorial.
First, I create a Sequential model of augmentation layers:
def _getAugmentationFunction(self):
    if not self.augmentation:
        return None

    pipeline = []
    pipeline.append(layers.RandomFlip('horizontal_and_vertical'))
    pipeline.append(layers.RandomRotation(30))
    pipeline.append(layers.RandomTranslation(0.1, 0.1, fill_mode='nearest'))
    pipeline.append(layers.RandomBrightness(0.1, value_range=(0.0, 1.0)))

    model = Sequential(pipeline)
    return lambda x, y: (model(x, training=True), y)
Then, I use the map function on the dataset:
data_augmentation = self._getAugmentationFunction()
self.train_data = self.train_data.map(data_augmentation,
                                      num_parallel_calls=AUTOTUNE)
The code works as expected, but I get the following warnings:
WARNING:tensorflow:Using a while_loop for converting RngReadAndSkip
WARNING:tensorflow:Using a while_loop for converting Bitcast
WARNING:tensorflow:Using a while_loop for converting Bitcast
WARNING:tensorflow:Using a while_loop for converting StatelessRandomUniformV2
WARNING:tensorflow:Using a while_loop for converting ImageProjectiveTransformV3
(the same block of warnings repeats many times)
What is the reason for these warnings, and how can I fix them?
I'm using TF v2.9.1.
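For completeness, here is a self-contained reproduction of the same setup; the imports and the dummy dataset are additions just to make the snippet runnable:
import tensorflow as tf
from tensorflow.keras import layers, Sequential

AUTOTUNE = tf.data.AUTOTUNE

# Same augmentation pipeline as in the method above.
augment = Sequential([
    layers.RandomFlip('horizontal_and_vertical'),
    layers.RandomRotation(30),
    layers.RandomTranslation(0.1, 0.1, fill_mode='nearest'),
    layers.RandomBrightness(0.1, value_range=(0.0, 1.0)),
])

# Dummy dataset of 32x32 RGB images with integer labels.
images = tf.random.uniform([8, 32, 32, 3])
labels = tf.zeros([8], dtype=tf.int32)
train_data = tf.data.Dataset.from_tensor_slices((images, labels)).batch(4)

train_data = train_data.map(lambda x, y: (augment(x, training=True), y),
                            num_parallel_calls=AUTOTUNE)

for x, y in train_data.take(1):  # iterating triggers the warnings on TF 2.9
    pass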

It's not only the warnings: these layers are extremely slow. In my case, the time for one epoch went up from 30 seconds to several minutes.
This seems to be a bug in Keras versions 2.9 and 2.10 (which are bundled with TensorFlow): https://github.com/keras-team/keras-cv/issues/581
It works correctly with TF v2.8.3 - no error messages, and training is fast.
On my Arch system, where I had installed TF via the python-tensorflow-opt-cuda package using pacman, the following command solved the issue:
python -m pip install tensorflow-gpu==2.8.3
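To check that the downgrade took effect and that the GPU is still visible:
import tensorflow as tf
print(tf.__version__)                          # should print 2.8.3
print(tf.config.list_physical_devices('GPU'))  # GPU should still be listed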

Warnings in TensorFlow can be managed with tf.get_logger().setLevel(). To turn the warnings off, you can use:
tf.get_logger().setLevel('ERROR')
I tried to replicate the issue in a gist; please find it here. Thank you!
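For context, a minimal sketch of where the call goes; note that this only hides the messages, it does not address the slowdown mentioned above:
import tensorflow as tf

# Raise the Python-side log threshold: INFO and WARNING messages
# (including the while_loop conversion ones) are no longer printed.
tf.get_logger().setLevel('ERROR')

# ...build the dataset pipeline and model afterwards as usual.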

The downgrade works nicely, except that some of my augmentations are not available:
'tensorflow.keras.layers' has no attribute 'RandomBrightness'
Also, GPU support has vanished (it was included in 2.10 and beyond), so I will have to reinstall tensorflow-gpu.

Related

WARNING:tensorflow:Using a while_loop for converting RngReadAndSkip cause there is no registered converter for this op

WARNING:tensorflow:Using a while_loop for converting RngReadAndSkip cause there is no registered converter for this op.
WARNING:tensorflow:Using a while_loop for converting Bitcast cause there is no registered converter for this op.
WARNING:tensorflow:Using a while_loop for converting Bitcast cause there is no registered converter for this op.
WARNING:tensorflow:Using a while_loop for converting StatelessRandomUniformV2 cause there is no registered converter for this op.
WARNING:tensorflow:Using a while_loop for converting ImageProjectiveTransformV3 cause there is no registered converter for this op.
(the same block of warnings repeats)
I am getting these warnings while building a model. How can I resolve this?

Can't convert ONNX model to TFLite using TF 2.4.1

I have an ONNX model, which I can successfully convert to TF with TF 2.4.1. But the conversion of the resulting SavedModel to TFLite fails with an error.
The code:
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare
print(tf.__version__)
# Convert model.onnx to a TensorFlow SavedModel
onnx_model = onnx.load('model.onnx')
onnx.checker.check_model(onnx_model)
tf_rep = prepare(onnx_model)
tf_rep.export_graph('model')

# Convert the SavedModel to TFLite
converter = tf.lite.TFLiteConverter.from_saved_model('model')
tf_lite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tf_lite_model)
Everything goes OK until the conversion step, which ends like so:
>>> tf_lite_model = converter.convert()
2021-04-22 18:18:14.715046: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored output_format.
2021-04-22 18:18:14.715072: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored drop_control_dependency.
2021-04-22 18:18:14.715078: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:325] Ignored change_concat_input_ranges.
2021-04-22 18:18:14.716044: I tensorflow/cc/saved_model/reader.cc:32] Reading SavedModel from: model
2021-04-22 18:18:14.778050: I tensorflow/cc/saved_model/reader.cc:55] Reading meta graph with tags { serve }
2021-04-22 18:18:14.778083: I tensorflow/cc/saved_model/reader.cc:93] Reading SavedModel debug info (if present) from: model
2021-04-22 18:18:14.998062: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2021-04-22 18:18:15.043862: I tensorflow/cc/saved_model/loader.cc:206] Restoring SavedModel bundle.
2021-04-22 18:18:15.438804: I tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: model
2021-04-22 18:18:15.809851: I tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 1093808 microseconds.
2021-04-22 18:18:18.757257: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): error: operand #0 does not dominate this use
Traceback (most recent call last):
  File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 210, in toco_convert_protos
    model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
  File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/wrap_toco.py", line 32, in wrapped_toco_convert
    return _pywrap_toco_api.TocoConvert(
Exception: <unknown>:0: error: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand #0 does not dominate this use
<unknown>:0: note: loc("PartitionedCall"): called from
<unknown>:0: note: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand defined here

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 739, in convert
    result = _convert_saved_model(**converter_kwargs)
  File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 632, in convert_saved_model
    data = toco_convert_protos(
  File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
    raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand #0 does not dominate this use
<unknown>:0: note: loc("PartitionedCall"): called from
<unknown>:0: note: loc(callsite(callsite("Pad_16#__inference___call___16503" at "PartitionedCall#__inference_signature_wrapper_16752") at "PartitionedCall")): operand defined here
I have no idea what this message means, but if I switch to TF 2.2 the conversion passes without errors. The bad thing is that, due to another problem, the initial ONNX-to-TF conversion then fails.
Does anybody have an idea what this message means and what could be done about it?
TIA
Is it possible to share your saved model directory with me? I can help with debugging.
The general advice is that there are two possibilities:
(1) The TF Lite converter may not handle the saved model correctly.
(2) The onnx conversion tool may not have created a valid TF saved model.
Using a recent TF version (2.5 or tf-nightly) might help resolve this problem in case (1), but it's not guaranteed.
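For example, to try the nightly build (the standard pip package name; this install command is my addition, not part of the original answer):
python -m pip install tf-nightly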
I confirmed that the tf-nightly version could convert the attached saved model without any issue:
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/onnx_model")
tflite_model = converter.convert()
with open("/tmp/onnx.tflite", "wb") as f:
    f.write(tflite_model)

ERROR! Converting Keras model to TensorFlow Lite

I have a simple classification model built with feature columns. I want to convert it to TensorFlow Lite format using the Keras conversion code, but it gives the following error:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
The error message:
InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array.
Why am I getting this error message?
Thank you.

TensorFlow gradient: unsupported operand type

I got the following error:
anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gradients.py:90: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Traceback (most recent call last):
    trainstep = tf.train.AdamOptimizer(0.0001).minimize(lossobj)
  File "anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 196, in minimize
    grad_loss=grad_loss)
  File "anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 253, in compute_gradients
    colocate_gradients_with_ops=colocate_gradients_with_ops)
  File "anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gradients.py", line 469, in gradients
    in_grads = _AsList(grad_fn(op, *out_grads))
  File "anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/array_grad.py", line 504, in _ExtractImagePatchesGrad
    rows_out = int(ceil(rows_in / stride_r))
TypeError: unsupported operand type(s) for /: 'NoneType' and 'long'
It looks like the gather op is what's going wrong.
I see that this is an old issue, but I have found a quick work-around for some cases of this. Chances are, you are feeding your input using a placeholder and one of the dimensions of the placeholder shape is "None". If you set that dimension to your batch size, it will no longer be an unknown shape.
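A minimal sketch of that workaround, in TF 1.x-style code matching the question; the input shape and batch size here are hypothetical:
import tensorflow as tf  # TF 1.x, as in the question

BATCH_SIZE = 32  # hypothetical batch size

# Before: the batch dimension is unknown, so shape arithmetic inside the
# gradient (e.g. ceil(rows_in / stride)) can see None and fail.
# x = tf.placeholder(tf.float32, shape=[None, 64, 64, 1])

# After: fix the dimension that was None to a concrete value, so every
# shape downstream is fully known.
x = tf.placeholder(tf.float32, shape=[BATCH_SIZE, 64, 64, 1])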

How to save outputs in every step using while_loop with tensorflow?

I want to build an RNN with thousands of timesteps, so the proper way is to use tf.while_loop, since the GPU runs out of memory with a Python for loop.
But I could not find a way to save the RNN output at every step. I tried using a global list and accumulating with tf.concat(); neither worked. It seems like while_loop() can only be used to get the final output.
Is there any solution to get all the outputs?
Try tf.nn.dynamic_rnn, which does exactly this using while_loop and TensorArray objects.
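If you want to stay with a raw while_loop, the usual pattern is to write each step's output into a tf.TensorArray. A sketch with made-up sizes and a stand-in for the real cell update (this is not the dynamic_rnn internals, just the accumulation idea):
import tensorflow as tf

n_steps = 1000
batch, hidden = 32, 128  # made-up sizes

def cond(t, state, outputs_ta):
    return t < n_steps

def body(t, state, outputs_ta):
    new_state = state * 0.9                  # stand-in for a real RNN cell
    outputs_ta = outputs_ta.write(t, new_state)  # save this step's output
    return t + 1, new_state, outputs_ta

outputs_ta = tf.TensorArray(dtype=tf.float32, size=n_steps)
_, _, outputs_ta = tf.while_loop(
    cond, body,
    [tf.constant(0), tf.zeros([batch, hidden]), outputs_ta])

all_outputs = outputs_ta.stack()             # shape: [n_steps, batch, hidden]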