Test the .trt file using TensorFlow

In the directory output_saved_model_dir below, I have a TensorRT engine file named final_model_gender_classification_gpu0_int8.trt:
output_saved_model_dir='/home/cocoslabs/Downloads/age_gender_trt'
saved_model_loaded = tf.saved_model.load(output_saved_model_dir, tags=[tag_constants.SERVING])
When I run the above script, it shows the following error:
File "test.py", line 7, in <module>
saved_model_loaded = tf.saved_model.load(output_saved_model_dir, tags=[tag_constants.SERVING])
File "/home/cocoslabs/deepstream_docker/venv/lib/python3.6/site-packages/tensorflow_core/python/saved_model/load.py", line 528, in load
return load_internal(export_dir, tags)
File "/home/cocoslabs/deepstream_docker/venv/lib/python3.6/site-packages/tensorflow_core/python/saved_model/load.py", line 537, in load_internal
saved_model_proto = loader_impl.parse_saved_model(export_dir)
File "/home/cocoslabs/deepstream_docker/venv/lib/python3.6/site-packages/tensorflow_core/python/saved_model/loader_impl.py", line 83, in parse_saved_model
constants.SAVED_MODEL_FILENAME_PB))
OSError: SavedModel file does not exist at: /home/cocoslabs/Downloads/age_gender_trt/{saved_model.pbtxt|saved_model.pb}
From the above error, what I understand is that tf.saved_model.load() accepts only .pb or .pbtxt files. Is that right? But according to this link, Load and run test a .trt model, tf.saved_model.load() is said to accept a .trt file. Please help me rectify this error. Thank you.
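For reference, here is a minimal sketch of how a TF-TRT conversion is usually saved and then reloaded: tf.saved_model.load() reads a SavedModel directory (one containing saved_model.pb), not a bare .trt engine file. The input directory name and signature key below are assumptions, not details taken from the question.
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
# Hypothetical path to the original (non-TRT) SavedModel.
input_saved_model_dir = '/home/cocoslabs/Downloads/age_gender_saved_model'
output_saved_model_dir = '/home/cocoslabs/Downloads/age_gender_trt'
# Convert and save: converter.save() writes a SavedModel directory that
# contains saved_model.pb plus the serialized TRT engines.
converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)
# Load the converted SavedModel directory, not the .trt file itself.
saved_model_loaded = tf.saved_model.load(output_saved_model_dir, tags=['serve'])
infer = saved_model_loaded.signatures['serving_default']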

Related

onnxruntime: Given model could not be parsed while creating inference session. Error message: Protobuf parsing failed

I have followed the example code given below for the library, but it didn't work.
[Library] https://github.com/notAI-tech/NudeNet/
Code
from nudenet import NudeClassifier
import onnxruntime
classifier = NudeClassifier()
classifier.classify('/home/coremax/Downloads/DETECTOR_AUTO_GENERATED_DATA/IMAGES/3FEF7B75-3823-4153-8490-87483AAC6ABC'
'.jpg')
I have also followed a previous solution on Stack Overflow, but it didn't work either:
Error on running Super Resolution Model from ONNX
Traceback (most recent call last):
File "/snap/pycharm-community/276/plugins/python-ce/helpers/pydev/pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/snap/pycharm-community/276/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/coremax/Documents/NudeNet/main.py", line 3, in <module>
classifier = NudeClassifier()
File "/home/coremax/Documents/NudeNet/nudenet/classifier.py", line 37, in __init__
self.nsfw_model = onnxruntime.InferenceSession(model_path)
File "/home/coremax/anaconda3/envs/AdultNET/lib/python3.6/site-packages/onnxruntime/capi/session.py", line 158, in __init__
self._load_model(providers or [])
File "/home/coremax/anaconda3/envs/AdultNET/lib/python3.6/site-packages/onnxruntime/capi/session.py", line 166, in _load_model
True)
RuntimeError: /onnxruntime_src/onnxruntime/core/session/inference_session.cc:238 onnxruntime::InferenceSession::InferenceSession(const onnxruntime::SessionOptions&, const onnxruntime::Environment&, const string&) status.IsOK() was false. Given model could not be parsed while creating inference session. Error message: Protobuf parsing failed.
I know it is too late, but I hope this helps someone build a very useful piece of software.
Why it fails
For NudeClassifier to work, it has to download the ONNX model from this link. However, GitHub now requires you to be logged in to download the file, so the NudeClassifier constructor fails when it tries to download the model.
Solution
1. Create a folder named .NudeNet/ in your user's home directory.
2. Download the model from this link.
3. Save the model in the folder you created in step one.
You should now have the model at the path ~/.NudeNet/classifier_model.onnx, and you're ready to go. Good luck! A quick way to verify the downloaded file is shown in the sketch below.
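Not part of the original answer, but a minimal sketch for verifying the manually downloaded model before constructing NudeClassifier; the paths follow the steps above, and onnxruntime is assumed to be installed.
import os
import onnxruntime
# Assumes classifier_model.onnx was downloaded manually as described above.
model_dir = os.path.join(os.path.expanduser('~'), '.NudeNet')
model_path = os.path.join(model_dir, 'classifier_model.onnx')
if not os.path.isdir(model_dir):
    os.makedirs(model_dir)
# If this raises, the file is missing or is not a valid ONNX binary
# (for example, an HTML login page saved instead of the model).
session = onnxruntime.InferenceSession(model_path)
print([inp.name for inp in session.get_inputs()])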

How to resolve UnicodeError in Tensorflow 2 Object Detection API

I have a question: while training with the TensorFlow Object Detection API, I got the following error. Can you tell me if there is any workaround?
Command run
python model_main_tf2.py --model_dir=models/my_ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8 --pipeline_config_path=models/my_ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8/pipeline.config
Error message
File "model_main_tf2.py", line 115, in <module>
tf.compat.v1.app.run()
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 40, in ru
n
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\absl\app.py", line 303, in run
_run_main(main, args)
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "model_main_tf2.py", line 106, in main
model_lib_v2.train_loop(
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\object_detection\model_lib_v2.py", line 611, in tr
ain_loop
manager = tf.compat.v2.train.CheckpointManager(
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\tensorflow\python\training\checkpoint_management.p
y", line 640, in __init__
recovered_state = get_checkpoint_state(directory)
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\tensorflow\python\training\checkpoint_management.p
y", line 278, in get_checkpoint_state
file_content = file_io.read_file_to_string(
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 352, in
read_file_to_string
return f.read()
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 117, in
read
self._preread_check()
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 79, in
_preread_check
self._read_buf = _pywrap_file_io.BufferedInputStream(
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x83 in position 108: invalid start byte
What I tried
- I tried converting the character encoding of pipeline.config.
- I tested the API installation (it passes, as in the attached image).
- I checked the execution command for mistakes.
Also, when training another network, I was able to finish training without this kind of error. This time as well, I downloaded a pretrained model and ran it.
Reference sites:
- Tutorial: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html#training-the-model
- List of trained models: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
Sorry for the trouble, and thank you in advance for your help.
Most probably it's because you are trying to run a TPU model on your local machine (I guessed that from your PyCharm screenshot). Try running a GPU-based model or a CPU one instead.
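Separately from the answer above, and only as an assumption on my part: the traceback shows the failure inside get_checkpoint_state(), which reads the small text file named checkpoint in --model_dir. A quick sketch to see whether that file is the one that cannot be decoded as UTF-8:
import os
# The --model_dir from the command above; get_checkpoint_state() only reads
# the text file named 'checkpoint' inside it.
model_dir = 'models/my_ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8'
ckpt_state = os.path.join(model_dir, 'checkpoint')
if os.path.exists(ckpt_state):
    raw = open(ckpt_state, 'rb').read()
    try:
        print(raw.decode('utf-8'))
    except UnicodeDecodeError as err:
        print('checkpoint file is not valid UTF-8: %s' % err)
else:
    print('no checkpoint state file in %s' % model_dir)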

Error converting dataset to tfrecord for DeepLab v3+

I'm trying to convert a custom dataset to tfrecord for DeepLab v3+, following this tutorial. My directory setup is as follows:
+ datasets
  + pascal_voc_seg/custom_dataset
    + VOCdevkit
      + VOC2012
        + JPEGImages
        + SegmentationClassRaw
        + ImageSets
          + Segmentation
    + tfrecord
I have also downloaded the Pascal VOC dataset, and the two directory structures are now identical. When I run the build_voc2012_data.py script on the Pascal VOC dataset as follows:
#from models/research/deeplab/dataset/pascal_voc_seg
python build_voc2012_data.py \
--image_folder="./VOCdevkit/VOC2012/JPEGImages" \
--semantic_segmentation_folder="./VOCdevkit/VOC2012/SegmentationClassRaw" \
--list_folder="./VOCdevkit/VOC2012/ImageSets/Segmentation" \
--image_format="jpg" \
--output_dir="./tfrecord"
...everything works fine: the dataset is converted to tfrecord files with a progress bar displayed. However, when I run the same script from my custom dataset directory, the following error occurs:
>> Converting image 1/164 shard 0
Traceback (most recent call last):
File "build_voc2012_data.py", line 146, in <module>
tf.compat.v1.app.run()
File "/home/delanyn/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/home/delanyn/.local/lib/python2.7/site-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/home/delanyn/.local/lib/python2.7/site-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "build_voc2012_data.py", line 142, in main
_convert_dataset(dataset_split)
File "build_voc2012_data.py", line 121, in _convert_dataset
image_data = tf.io.gfile.GFile(image_filename, 'rb').read()
File "/home/delanyn/.local/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 122, in read
self._preread_check()
File "/home/delanyn/.local/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 84, in _preread_check
compat.as_bytes(self.__name), 1024 * 512)
tensorflow.python.framework.errors_impl.NotFoundError: ./VOCdevkit/VOC2012/JPEGImages/2020_0[...].jpg; No such file or directory
What could I be missing here? My images are JPEGs with the same dimensions as the Pascal VOC images. The segmentation masks have the same colormap as well, and I ran the remove-colormap script on them in advance.
Based on the error message, I can only say that an entry in the train.txt or val.txt file in the folder pascal_voc_dataset/VOCdevkit/VOC2012/ImageSets/Segmentation does not match any image in the JPEG folder pascal_voc_dataset/VOCdevkit/VOC2012/JPEGImages.
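A small sketch of the kind of cross-check this answer implies, assuming the usual split-file layout (one image name per line, no extension); the paths mirror the question and are assumptions:
from __future__ import print_function
import os
image_folder = './VOCdevkit/VOC2012/JPEGImages'
list_folder = './VOCdevkit/VOC2012/ImageSets/Segmentation'
for split in ('train.txt', 'val.txt'):
    split_path = os.path.join(list_folder, split)
    if not os.path.exists(split_path):
        continue
    with open(split_path) as f:
        names = [line.strip() for line in f if line.strip()]
    # List every entry that has no matching .jpg in the image folder.
    missing = [n for n in names
               if not os.path.exists(os.path.join(image_folder, n + '.jpg'))]
    print(split, 'missing images:', missing)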

ArgumentError: argument --batch_size: conflicting option string: --batch_size in Spyder

While executing many scripts with Spyder, I get the error ArgumentError: argument --batch_size: conflicting option string: --batch_size every time I try to run the code, and execution terminates.
For example, with the TensorFlow CIFAR-10 sample I get this error on this line:
# Basic model parameters.
tf.app.flags.DEFINE_integer('batch_size', 128,
"""Number of images to process in a batch.""")
Full error log:
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:\ProgramData\Anaconda3\lib\argparse.py", line 1344, in add_argument
return self._add_action(action)
File "C:\ProgramData\Anaconda3\lib\argparse.py", line 1707, in _add_action
self._optionals._add_action(action)
File "C:\ProgramData\Anaconda3\lib\argparse.py", line 1548, in _add_action
action = super(_ArgumentGroup, self)._add_action(action)
File "C:\ProgramData\Anaconda3\lib\argparse.py", line 1358, in _add_action
self._check_conflict(action)
File "C:\ProgramData\Anaconda3\lib\argparse.py", line 1497, in _check_conflict
conflict_handler(action, confl_optionals)
File "C:\ProgramData\Anaconda3\lib\argparse.py", line 1506, in _handle_conflict_error
raise ArgumentError(action, message % conflict_string)
ArgumentError: argument --batch_size: conflicting option string: --batch_size
I cannot figure out how to fix it. If I run the code from the command line, the error doesn't happen.
The problem is that you are running cifar10.py more than once in the same Python instance.
cifar10.py has this code:
tf.app.flags.DEFINE_integer('batch_size', 128,
"""Number of images to process in a batch.""")
which defines an argument batch_size in tf.app.flags.FLAGS.
When you run cifar10.py a second time (by running the file itself or by importing it from another file), TensorFlow detects that the argument batch_size already exists, so it raises the error.
How to fix: open a new console (Consoles -> Open an IPython console) and run the file there.
The command line creates a new Python instance every time, so you will not encounter this error there.
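If you'd rather keep working in the same console, one possible workaround is to guard the flag definition so a redefinition is ignored. This is only a sketch, under the assumption that the flag machinery is the argparse-backed one shown in the traceback above (newer TensorFlow versions raise a different error type).
import argparse
import tensorflow as tf
def define_integer_once(name, default, docstring):
    # Skip the definition if a previous run in this Python instance
    # already registered the flag.
    try:
        tf.app.flags.DEFINE_integer(name, default, docstring)
    except argparse.ArgumentError:
        pass
define_integer_once('batch_size', 128,
                    """Number of images to process in a batch.""")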

matplotlib savefig IO error

I am trying to use the matplotlib.pyplot.savefig() function to save some figures.
I am saving them to a directory; however, I keep getting this error:
matplotlib.pyplot.savefig(savepath,dpi=dpi,size=size)
File "C:\Anaconda\lib\site-packages\matplotlib\pyplot.py", line 577, in savefig
res = fig.savefig(*args, **kwargs)
File "C:\Anaconda\lib\site-packages\matplotlib\figure.py", line 1476, in savefig
self.canvas.print_figure(*args, **kwargs)
File "C:\Anaconda\lib\site-packages\matplotlib\backends\backend_qt5agg.py", line 161, in print_figure
FigureCanvasAgg.print_figure(self, *args, **kwargs)
File "C:\Anaconda\lib\site-packages\matplotlib\backend_bases.py", line 2211, in print_figure
**kwargs)
File "C:\Anaconda\lib\site-packages\matplotlib\backends\backend_agg.py", line 526, in print_png
filename_or_obj = open(filename_or_obj, 'wb')
IOError: [Errno 2] No such file or directory:
Of course the file doesn't exist yet, since I am trying to create it by saving.
The directory does exist; I have checked repeatedly.
I am completely baffled, as this worked perfectly fine two days ago, and with no changes to the code it does not now. EDIT: I had updated the Anaconda Python distribution I was using, from 32-bit Anaconda 2.0 to 64-bit 2.3, both for Python 2.7.
Does anyone have any clue?
Thank you for reading my desperate plea for assistance!
EDIT:
I am now also getting what I believe to be the same error when saving text files in Python:
f = open(fname, 'w')
IOError: [Errno 2] No such file or directory: 'D:\\DropBox\\Dropbox\\abc\\Time resolved spectroscopy data\\LiHoF4\\High resolution 1cm\\power spectra\\Si\\RT\\25ns\\CUT POWER SPECTRUM LiHoF pumping 5G5 449.8nm DC si detector 1cm resolution 25ns data aquisition 2000 points 5AVG RT 450nmlongpassfilter.0.dpt_fitting_output.txt'
Could this have to do with the long filename?
I don't understand why there would be a problem with a file not existing when I am opening it for writing.
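For what it's worth, here is a small sketch of two checks that might narrow this down; this is my own assumption, not a confirmed fix: make sure the target directory really exists at save time, and look at the total path length, since classic Windows APIs limit full paths to roughly 260 characters.
import os
def checked_path(savepath):
    # Hypothetical helper: create the parent directory if it is missing and
    # warn about very long Windows paths before attempting the save.
    directory = os.path.dirname(savepath)
    if directory and not os.path.isdir(directory):
        os.makedirs(directory)
    full = os.path.abspath(savepath)
    if len(full) > 259:
        print('Warning: path is %d characters long; classic Windows APIs '
              'allow about 260.' % len(full))
    return savepath
# Usage with the calls from the question:
# matplotlib.pyplot.savefig(checked_path(savepath), dpi=dpi)
# f = open(checked_path(fname), 'w')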