I am trying to use mixed_precision in TensorFlow 2.2.0, since I wrote the code with TF 2.9.2 but need to run it on a computer with version 2.2.0. I couldn't find documentation for the experimental mixed_precision API, since it is now embedded into core TF.
POLICY:
policy = mixed_precision.experimental.Policy('mixed_float16')
mixed_precision.experimental.set_policy(policy)
OPTIMIZER:
self.optimizer = tf.keras.optimizers.Adam(learning_rate=LR_SCHEDULE, epsilon=1e-08)
self.optimizer = mixed_precision.experimental.LossScaleOptimizer(self.optimizer)
ERROR:
Traceback (most recent call last):
File "my_code.py", line 354, in <module>
model.train(total_it=TRAIN_IT)
File "my_code.py", line 166, in train
self.optimizer = mixed_precision.experimental.LossScaleOptimizer(self.optimizer)
TypeError: __init__() missing 1 required positional argument: 'loss_scale'
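In TensorFlow 2.2 the experimental LossScaleOptimizer still takes the loss scale as a required second argument, whereas the TF 2.9 API applies dynamic loss scaling by default. A minimal sketch of what a 2.2-compatible version of the snippet above would presumably look like (loss_scale='dynamic' mirrors the newer default, and 1e-3 is only a placeholder for LR_SCHEDULE):

import tensorflow as tf

# TF 2.2-style experimental mixed precision API
policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
tf.keras.mixed_precision.experimental.set_policy(policy)

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, epsilon=1e-08)  # 1e-3 stands in for LR_SCHEDULE
# loss_scale is a required positional argument in 2.2; 'dynamic' matches the newer API's default behaviour
optimizer = tf.keras.mixed_precision.experimental.LossScaleOptimizer(optimizer, loss_scale='dynamic')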
I wanted to use yolov4-tiny with the TensorFlow Lite framework to count objects that cross a virtual line in a video.
I converted my darknet weights trained from AlexeyAB's repo using these commands:
python save_model.py --weights yolov4-tiny.weights --output ./checkpoints/yolov4-tiny-608-tf --input_size 608 --model yolov4 --tiny --framework tflite
python convert_tflite.py --weights ./checkpoints/yolov4-tiny-608-tf --output ./checkpoints/yolov4-tiny-608.tflite
You can find convert_tflite.py here
The first command is successful using numpy==1.19.0. However, the second one shows these errors:
loc("batch_normalization/moving_mean"): error: is not immutable, try running tf-saved-model-optimize-global-tensors to prove tensors are immutable
Traceback (most recent call last):
File "C:\Python37\lib\site-packages\tensorflow\lite\python\convert.py", line 213, in toco_convert_protos
enable_mlir_converter)
File "C:\Python37\lib\site-packages\tensorflow\lite\python\wrap_toco.py", line 38, in wrapped_toco_convert
enable_mlir_converter)
Exception: <unknown>:0: error: loc("batch_normalization/moving_mean"): is not immutable, try running tf-saved-model-optimize-global-tensors to prove tensors are immutable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "convert_tflite.py", line 76, in <module>
app.run(main)
File "C:\Python37\lib\site-packages\absl\app.py", line 303, in run
_run_main(main, args)
File "C:\Python37\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "convert_tflite.py", line 71, in main
save_tflite()
File "convert_tflite.py", line 45, in save_tflite
tflite_model = converter.convert()
File "C:\Python37\lib\site-packages\tensorflow\lite\python\lite.py", line 762, in convert
result = _convert_saved_model(**converter_kwargs)
File "C:\Python37\lib\site-packages\tensorflow\lite\python\convert.py", line 648, in convert_saved_model
enable_mlir_converter=True)
File "C:\Python37\lib\site-packages\tensorflow\lite\python\convert.py", line 216, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc("batch_normalization/moving_mean"): is not immutable, try running tf-saved-model-optimize-global-tensors to prove tensors are immutable
I have tried other versions of TensorFlow (2.2, 2.3, 2.4), but I had no luck. What should I do?
There is a similar issue raised here: TensorFlow Issue 44790
Here are my system details:
Windows 10, x64
GeForce GTX 1060
NVIDIA Driver 460.89
CUDA 11.0.3
CuDNN 8.0.5.39
Python 3.7.2
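For context, the failing step in convert_tflite.py is essentially a standard SavedModel-to-TFLite conversion, and the error comes out of the MLIR-based converter during converter.convert(). A rough sketch of what that step boils down to (the paths and flags here are assumptions, not the repo's exact code):

import tensorflow as tf

# roughly what convert_tflite.py does when the error is raised
converter = tf.lite.TFLiteConverter.from_saved_model('./checkpoints/yolov4-tiny-608-tf')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # fails here with the "is not immutable" MLIR error

with open('./checkpoints/yolov4-tiny-608.tflite', 'wb') as f:
    f.write(tflite_model)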
pip install tensorflow==2.3.0rc0
and restart the runtime before starting the conversion.
I resolved the problem by following a thread in the GitHub issues.
In Google Colab, I had this issue whenever I used the default TF version, which was 2.4.0 or above.
Running !pip install tensorflow==2.3.0, restarting the runtime, and then converting fixed it.
For me, this solved the problem:
import tensorflow as tf

if tf.__version__ != '2.3.0-rc0':
    !pip uninstall -y tensorflow
    !pip install tensorflow-gpu==2.3.0rc0

Then restart the runtime so the newly installed version is used.
Description
I am following a Microsoft tutorial from this website
to build a model that infers Chinese couplets.
I have now trained the model on Google Cloud and I get good inference results.
However, when constructing the inference service, I found that my function that communicates with tensorflowserverapi cannot find my problem in the registry.
I also trained this model for one step with t2t_trainer --registry_help added, and I can see that my problem is actually registered under Problems.
My code is the same as the one in this repo script.
And here is my test code:
from up2down_model.up2down_model import up2down
upper_couplet = input()
up2down.get_down_couplet([upper_couplet])
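For reference, tensor2tensor only finds a problem if the module that defines it has actually been imported, because registration happens as a side effect of the @registry.register_problem decorator. A minimal sketch of that mechanism (the module layout and names below are placeholders, not the tutorial's actual files):

# up2down_problem.py (hypothetical): importing this module is what registers the problem
from tensor2tensor.data_generators import text_problems
from tensor2tensor.utils import registry

@registry.register_problem
class TranslateUp2down(text_problems.Text2TextProblem):
    """Registered under the snake_case name 'translate_up2down'."""

# inference-side code: import the user problem directory before looking the problem up
from tensor2tensor.utils import usr_dir
usr_dir.import_usr_dir('./up2down_problem_dir')   # hypothetical directory whose __init__.py imports the problem
problem = registry.problem('translate_up2down')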
Environment information:
OS: Ubuntu 20.04
$ pip freeze | grep tensor
tensor2tensor 1.15.6
tensorboard 1.14.0
tensorflow 1.14.0
tensorflow-addons 0.10.0
tensorflow-datasets 1.3.0
tensorflow-estimator 1.14.0
tensorflow-gan 2.0.0
tensorflow-hub 0.8.0
tensorflow-metadata 0.22.0
tensorflow-probability 0.7.0
tensorflow-serving-api 1.14.0
Python 3.7.7
Error logs:
Traceback (most recent call last):
File "/home/enigma/anaconda3/envs/NLP/lib/python3.7/site-packages/tensor2tensor/utils/registry.py", line 509, in problem
return Registries.problems[spec.base_name](
File "/home/enigma/anaconda3/envs/NLP/lib/python3.7/site-packages/tensor2tensor/utils/registry.py", line 254, in __getitem__
(key, self.name, display_list_by_prefix(sorted(self), 4)))
KeyError: 'translate_up2down never registered with registry problems. Available:
[... list of all available problems, without my own ...]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 1, in <module>
from up2down_model.up2down_model import up2down
File "/home/enigma/Desktop/NLP/service/up2down_model/up2down_model.py", line 85, in <module>
up2down = up2down_class(FLAGS,server_address) # inference model
File "/home/enigma/Desktop/NLP/service/up2down_model/up2down_model.py", line 40, in __init__
self.problem = registry.problem(self.FLAGS.problem)
File "/home/enigma/anaconda3/envs/NLP/lib/python3.7/site-packages/tensor2tensor/utils/registry.py", line 513, in problem
return env_problem(problem_name, **kwargs)
File "/home/enigma/anaconda3/envs/NLP/lib/python3.7/site-packages/tensor2tensor/utils/registry.py", line 527, in env_problem
ep_cls = Registries.env_problems[env_problem_name]
File "/home/enigma/anaconda3/envs/NLP/lib/python3.7/site-packages/tensor2tensor/utils/registry.py", line 254, in __getitem__
(key, self.name, display_list_by_prefix(sorted(self), 4)))
KeyError: 'translate_up2down never registered with registry env_problems. Available:\n reacher:\n * reacher_env_problem\n tic:\n * tic_tac_toe_env_problem'
I use Python 3.5, tensorflow-gpu 1.12.0, and Keras 2.2.4 on Ubuntu. When I use the system interpreter in PyCharm, the code runs without any problem. But when I create a virtual environment in PyCharm and install the same versions of all the necessary packages (OpenCV, scikit-learn, pandas, Keras, TensorFlow), it gives the following error:
Traceback (most recent call last):
File "/media/ehsan/48BE4782BE476810/AA_MY_PYTHON_CODE/MultiLable_MultiTask_Light_Examples/CodeTwo/2_Main_Code_Training_Multitask_Network.py", line 338, in <module>
base_model, multi_model, feature_map = multi_model(loss_list, test_metrics, dd)
File "/media/ehsan/48BE4782BE476810/AA_MY_PYTHON_CODE/MultiLable_MultiTask_Light_Examples/CodeTwo/2_Main_Code_Training_Multitask_Network.py", line 40, in multi_model
_, base_model = VGG19(weights='imagenet', include_top=False, input_shape=(175, 100, 3))
TypeError: 'Model' object is not iterable
I tried reinstalling TensorFlow and Keras, and I also recreated the virtual environment, but I still got the same error when using the virtual environment.
Try importing the model from TensorFlow instead of Keras.
from tensorflow.keras.models import load_model
instead of
from keras.models import load_model
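Applied to the code in the question, that would mean loading VGG19 from tensorflow.keras as well. Note also that VGG19 returns a single Model, so unpacking its return value into two names appears to be what raises the 'Model' object is not iterable error; a small sketch:

from tensorflow.keras.applications import VGG19

# VGG19 returns one Model object, not a tuple, so assign it to a single name
base_model = VGG19(weights='imagenet', include_top=False, input_shape=(175, 100, 3))
feature_map = base_model.output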
I tried to run my first demo using Keras with the TensorFlow backend, but it failed:
Traceback (most recent call last):
File "mnist_cnn.py", line 26, in <module>
if K.image_data_format() == 'channels_first':
AttributeError: 'module' object has no attribute 'image_data_format'
keras version: 1.2.1
tensorflow version: 1.0.1
How can I fix this?
Update Keras to 2.0.2; that fixed it.
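After pip install keras==2.0.2, a quick check that the new backend API is in place (image_data_format() replaced the Keras 1.x image_dim_ordering()):

import keras
from keras import backend as K

print(keras.__version__)       # should print 2.0.2
print(K.image_data_format())   # 'channels_last' or 'channels_first', from ~/.keras/keras.json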
Recently, I installed TensorFlow and got a Python error in the CIFAR tutorial.
I'm using Mac OS X, CPU only, Python 2.7.
$ python cifar10_train.py
Filling queue with 20000 CIFAR images before starting to train. This will take a few minutes.
Traceback (most recent call last):
File "cifar10_train.py", line 120, in
tf.app.run()
File "/Users/sunwoo/tensorflow/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "cifar10_train.py", line 116, in main
train()
File "cifar10_train.py", line 76, in train
class _LoggerHook(tf.train.SessionRunHook):
AttributeError: 'module' object has no attribute 'SessionRunHook'
How can I import tf.train.SessionRunHook?
It looks like you are using the master branch of cifar10_train.py, with an older installed version of TensorFlow (0.11 or earlier). The master branch was recently modified to use a new API, which wasn't available in TensorFlow 0.11 or earlier.
There are two ways to fix this problem. Either upgrade TensorFlow to version 0.12 or later, or check out the r0.11 branch of the TensorFlow source, and use the version of cifar10_train.py from that branch.
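If you go the upgrade route, a quick sanity check that the new API is available (a generic snippet, not specific to the tutorial):

import tensorflow as tf

print(tf.__version__)                        # should be 0.12.0 or later
print(hasattr(tf.train, 'SessionRunHook'))   # True once the upgrade has taken effect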