Python libraries being incompatible with each other - tensorflow

I am trying to reproduce a piece of source code for image segmentation which uses the keras and segmentation-models libraries.
I was able to run it on another PC running Python 3.9 with tensorflow==2.10.0, keras==2.10.0, and segmentation-models==1.0.0. However, when I try to reproduce it on my PC with tensorflow==2.11.0, keras==2.11.0, and segmentation-models==1.0.1, I get this error:
File "/home/dev/anaconda3/lib/python3.9/site-packages/efficientnet/__init__.py", line 71, in init_keras_custom_objects
keras.utils.generic_utils.get_custom_objects().update(custom_objects)
AttributeError: module 'keras.utils.generic_utils' has no attribute 'get_custom_objects'
The library versions seem consistent with what is mentioned in the source's requirements.
I tried downgrading keras, tensorflow, and segmentation-models to the older versions (which I reckoned had worked on the earlier PC), but that again surfaces further compatibility issues. On further digging, I found a link which seemed to give the right requirements. I created a new conda environment with Python==3.7.0 and again downgraded the versions to those suggested in the link. I am still getting new issues:
(sm) dev@Turbo:/media/dev/WinD/Curious Dev B/PROJECT STAGE - II$ python3 debug.py
Using TensorFlow backend.
Segmentation Models: using `keras` framework.
Traceback (most recent call last):
File "debug.py", line 150, in <module>
model = sm.Unet(BACKBONE, classes=n_classes, activation=activation)
File "/home/dev/anaconda3/envs/sm/lib/python3.7/site-packages/segmentation_models/__init__.py", line 34, in wrapper
return func(*args, **kwargs)
File "/home/dev/anaconda3/envs/sm/lib/python3.7/site-packages/segmentation_models/models/unet.py", line 226, in Unet
**kwargs,
File "/home/dev/anaconda3/envs/sm/lib/python3.7/site-packages/segmentation_models/backbones/backbones_factory.py", line 103, in get_backbone
model = model_fn(*args, **kwargs)
File "/home/dev/anaconda3/envs/sm/lib/python3.7/site-packages/classification_models/models_factory.py", line 78, in wrapper
return func(*args, **new_kwargs)
File "/home/dev/anaconda3/envs/sm/lib/python3.7/site-packages/efficientnet/model.py", line 538, in EfficientNetB3
**kwargs)
File "/home/dev/anaconda3/envs/sm/lib/python3.7/site-packages/efficientnet/model.py", line 368, in EfficientNet
img_input = layers.Input(shape=input_shape)
File "/home/dev/anaconda3/envs/sm/lib/python3.7/site-packages/keras/engine/topology.py", line 1439, in Input
input_tensor=tensor)
File "/home/dev/anaconda3/envs/sm/lib/python3.7/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "/home/dev/anaconda3/envs/sm/lib/python3.7/site-packages/keras/engine/topology.py", line 1301, in __init__
name = prefix + '_' + str(K.get_uid(prefix))
File "/home/dev/anaconda3/envs/sm/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py", line 63, in get_uid
graph = tf.get_default_graph()
AttributeError: module 'tensorflow' has no attribute 'get_default_graph'
I am unsure what went wrong here; the code worked six months ago. Now I am unable to run the same code, probably due to version changes in the libraries. But it should at least have worked with the older versions that I used earlier, or with what the link says.
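For reference, segmentation-models documents an SM_FRAMEWORK environment variable that selects between the standalone keras package and tf.keras; a minimal sketch of forcing the tf.keras backend with TF 2.x (the backbone and class count below are placeholders, not taken from the original code):
import os
os.environ["SM_FRAMEWORK"] = "tf.keras"  # must be set before importing segmentation_models

import segmentation_models as sm

# placeholder backbone/classes; with tf.keras selected, the standalone keras
# import path (whose generic_utils module changed) is not used
model = sm.Unet("efficientnetb3", classes=4, activation="softmax")
model.summary()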

Related

running temporal fusion transformer default dataset shape error

I ran the default Temporal Fusion Transformer code in Google Colab, downloaded from GitHub. After cloning, when I ran step 2, there was no way to test training:
python3 -m script_train_fixed_params volatility outputs yes
The problem is a shape error, shown below.
Computing best validation loss
Computing test loss
/usr/local/lib/python3.7/dist-packages/keras/engine/training_v1.py:2079: UserWarning: `Model.state_updates` will be removed in a future version. This property should not be used in TensorFlow 2.0, as `updates` are applied automatically.
updates=self.state_updates,
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/content/drive/MyDrive/tft_tf2/script_train_fixed_params.py", line 239, in <module>
use_testing_mode=True) # Change to false to use original default params
File "/content/drive/MyDrive/tft_tf2/script_train_fixed_params.py", line 156, in main
targets = data_formatter.format_predictions(output_map["targets"])
File "/content/drive/MyDrive/tft_tf2/data_formatters/volatility.py", line 183, in format_predictions
output[col] = self._target_scaler.inverse_transform(predictions[col])
File "/usr/local/lib/python3.7/dist-packages/sklearn/preprocessing/_data.py", line 1022, in inverse_transform
force_all_finite="allow-nan",
File "/usr/local/lib/python3.7/dist-packages/sklearn/utils/validation.py", line 773, in check_array
"if it contains a single sample.".format(array)
ValueError: Expected 2D array, got 1D array instead:
array=[-1.43120418 1.58885804 0.28558148 ... -1.50945972 -0.16713021
-0.57365613].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
I've tried to modify the code that builds the predictions dataframe in data_formatters/volatility.py (line 183, in format_predictions), because I guessed that's where the problem arises, but I couldn't get it to work.
You have to change line 183 in volatility.py to
output[col] = self._target_scaler.inverse_transform(predictions[col].values.reshape(-1, 1))
and line 216 in electricity.py to
sliced_copy[col] = target_scaler.inverse_transform(sliced_copy[col].values.reshape(-1, 1))
Afterwards the electricity example works fine, and I expect the same holds for volatility.
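For reference, a minimal standalone sketch (not part of the TFT code) of why the reshape is needed: scikit-learn scalers operate on 2D arrays of shape (n_samples, n_features), so a single-feature column must be reshaped with .reshape(-1, 1) before inverse_transform:
import numpy as np
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
column = np.array([1.0, 2.0, 3.0, 4.0])

scaled = scaler.fit_transform(column.reshape(-1, 1))    # fit/transform a single-feature column
# scaler.inverse_transform(scaled.ravel())              # 1D input raises "Expected 2D array"
restored = scaler.inverse_transform(scaled)             # 2D input works
print(restored.ravel())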

onnxruntime: Given model could not be parsed while creating inference session. Error message: Protobuf parsing failed

I followed the example code from the library mentioned below, but it didn't work.
[Library] https://github.com/notAI-tech/NudeNet/
Code
from nudenet import NudeClassifier
import onnxruntime
classifier = NudeClassifier()
classifier.classify('/home/coremax/Downloads/DETECTOR_AUTO_GENERATED_DATA/IMAGES/3FEF7B75-3823-4153-8490-87483AAC6ABC'
'.jpg')
I have also followed a previous solution on Stack Overflow, but it didn't work either:
Error on running Super Resolution Model from ONNX
Traceback (most recent call last):
File "/snap/pycharm-community/276/plugins/python-ce/helpers/pydev/pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/snap/pycharm-community/276/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/coremax/Documents/NudeNet/main.py", line 3, in <module>
classifier = NudeClassifier()
File "/home/coremax/Documents/NudeNet/nudenet/classifier.py", line 37, in __init__
self.nsfw_model = onnxruntime.InferenceSession(model_path)
File "/home/coremax/anaconda3/envs/AdultNET/lib/python3.6/site-packages/onnxruntime/capi/session.py", line 158, in __init__
self._load_model(providers or [])
File "/home/coremax/anaconda3/envs/AdultNET/lib/python3.6/site-packages/onnxruntime/capi/session.py", line 166, in _load_model
True)
RuntimeError: /onnxruntime_src/onnxruntime/core/session/inference_session.cc:238 onnxruntime::InferenceSession::InferenceSession(const onnxruntime::SessionOptions&, const onnxruntime::Environment&, const string&) status.IsOK() was false. Given model could not be parsed while creating inference session. Error message: Protobuf parsing failed.
I know it is too late, but I hope this helps someone build a very useful piece of software.
Why it fails
The error occurs because NudeClassifier has to download the ONNX model from this link, but GitHub now requires you to be logged in to download any file, so the NudeClassifier constructor fails when it tries to download the model.
Solution
1. Create a folder named .NudeNet/ in your user's home folder.
2. Download the model from this link.
3. Save the model in the folder you created in step one.
You should now have the model at the path ~/.NudeNet/classifier_model.onnx. Now you're ready to go. Good luck!
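As a minimal sketch of the steps above (the download URL is not repeated here; the paths follow the steps), you can create the folder and verify the model file before constructing the classifier:
import os

model_dir = os.path.join(os.path.expanduser("~"), ".NudeNet")
os.makedirs(model_dir, exist_ok=True)                      # step 1: create ~/.NudeNet/

model_path = os.path.join(model_dir, "classifier_model.onnx")
if not os.path.isfile(model_path):                         # steps 2-3: save the downloaded model here
    raise FileNotFoundError("Download classifier_model.onnx and save it as " + model_path)

from nudenet import NudeClassifier
classifier = NudeClassifier()                              # now loads the local model instead of downloading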

SentencePiece in Google Colab

I want to use sentencepiece, from https://github.com/google/sentencepiece in a Google Colab project where I am training an OpenNMT model. I'm a little confused with how to set up the sentencepiece binaries in Google Colab. Do I need to build with cmake?
When I install it using pip install sentencepiece and try to include sentencepiece in the "transforms" in my script, I get an error. After running this command (adapted from the OpenNMT translation tutorial):
!onmt_build_vocab -config en-sp.yaml -n_sample -1
I get:
Traceback (most recent call last):
File "/usr/local/bin/onmt_build_vocab", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/onmt/bin/build_vocab.py", line 63, in main
build_vocab_main(opts)
File "/usr/local/lib/python3.7/dist-packages/onmt/bin/build_vocab.py", line 32, in build_vocab_main
transforms = make_transforms(opts, transforms_cls, fields)
File "/usr/local/lib/python3.7/dist-packages/onmt/transforms/transform.py", line 176, in make_transforms
transform_obj.warm_up(vocabs)
File "/usr/local/lib/python3.7/dist-packages/onmt/transforms/tokenize.py", line 110, in warm_up
load_src_model.Load(self.src_subword_model)
File "/usr/local/lib/python3.7/dist-packages/sentencepiece/__init__.py", line 367, in Load
return self.LoadFromFile(model_file)
File "/usr/local/lib/python3.7/dist-packages/sentencepiece/__init__.py", line 171, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
TypeError: not a string
Below is how my config file is written. I'm not sure where the not a string is coming from.
## Where the samples will be written
save_data: en-sp/run/example

## Where the vocab(s) will be written
src_vocab: en-sp/run/example.vocab.src
tgt_vocab: en-sp/run/example.vocab.tgt

## Where the model will be saved
save_model: drive/MyDrive/Europarl/model/model

# Prevent overwriting existing files in the folder
overwrite: False

# Corpus opts:
data:
    europarl:
        path_src: train_europarl-v7.es-en.es
        path_tgt: train_europarl-v7.es-en.en
        transforms: [sentencepiece, filtertoolong]
        weight: 1
    valid:
        path_src: dev_europarl-v7.es-en.es
        path_tgt: dev_europarl-v7.es-en.en
        transforms: [sentencepiece]

skip_empty_level: silent
world_size: 1
gpu_ranks: [0]
...
EDIT: I went ahead and Googled the issue more and found a Google Colab project that built sentencepiece using cmake here: https://colab.research.google.com/github/mymusise/gpt2-quickly/blob/main/examples/gpt2_quickly.ipynb#scrollTo=dDAup5dxDXZW. However, even after building with cmake, I'm still getting this issue.
To fix this issue, I had to filter and tokenize my dataset and then train with sentencepiece. I used the scripts from this helpful source: https://github.com/ymoslem/MT-Preparation to do everything, and now my model is training!
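As a hedged sketch of that preparation step (filenames and vocab size are placeholders, not the exact commands used), training SentencePiece models on the corpora produces the .model files the OpenNMT sentencepiece transform needs; the TypeError: not a string typically arises when no such model path reaches SentencePieceProcessor.LoadFromFile:
import sentencepiece as spm

# Train source and target subword models on the corpora named in the config above.
spm.SentencePieceTrainer.train(
    input="train_europarl-v7.es-en.es",
    model_prefix="source",      # writes source.model / source.vocab
    vocab_size=32000,
)
spm.SentencePieceTrainer.train(
    input="train_europarl-v7.es-en.en",
    model_prefix="target",      # writes target.model / target.vocab
    vocab_size=32000,
)
The resulting source.model and target.model paths then need to be referenced in the config (in OpenNMT-py these are the src_subword_model and tgt_subword_model options) so that an actual string path is passed to LoadFromFile.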

Using graph_metrics.py with a saved graph

I want to view statistics of my model by saving my graph to a file then running graph_metrics.py.
I have tried a few different things to write the file, my best effort is:
tf.train.write_graph(session.graph_def, ".", "my_graph", as_text=True)
But here's what happens:
$ python ./util/graph_metrics.py --noinput_binary --graph my_graph
Traceback (most recent call last):
File "./util/graph_metrics.py", line 137, in <module>
tf.app.run()
File ".virtualenv/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "./util/graph_metrics.py", line 85, in main
FLAGS.batch_size)
File "./util/graph_metrics.py", line 109, in calculate_graph_metrics
input_tensor = sess.graph.get_tensor_by_name(input_layer)
File ".virtualenv/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2531, in get_tensor_by_name
return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
File ".virtualenv/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2385, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File ".virtualenv/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2427, in _as_graph_element_locked
"graph." % (repr(name), repr(op_name)))
KeyError: "The name 'Mul:0' refers to a Tensor which does not exist. The operation, 'Mul', does not exist in the graph."
Is there a complete working example of saving a graph, then analyzing it with graph_metrics.py?
This process seems to involve a magic incantation that I haven't yet discovered.
The error you're hitting is because you need to specify the name of your own input node with --input_layer= (it just defaults to Mul:0 because that's what we use in one of our Inception models):
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/graph_metrics.py#L51
The graph_metrics script is still very much a work in progress unfortunately, and you may hit problems with shape inference, but hopefully this should get you past the initial hurdle.
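As a minimal sketch under TF1-era APIs (node names and shapes here are placeholders), naming the input explicitly when building the graph gives you a value to pass as --input_layer:
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # Name the input so it can be referenced later, e.g. --input_layer=input:0
    x = tf.placeholder(tf.float32, shape=[None, 224, 224, 3], name="input")
    y = tf.identity(x * 2.0, name="output")

with tf.Session(graph=graph) as session:
    tf.train.write_graph(session.graph_def, ".", "my_graph", as_text=True)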

matplotlib savefig IO error

I am trying to use the matplotlib.pyplot.savefig() function to save some figures.
I am saving them to a directory; however, I keep getting this error:
matplotlib.pyplot.savefig(savepath,dpi=dpi,size=size)
File "C:\Anaconda\lib\site-packages\matplotlib\pyplot.py", line 577, in savefig
res = fig.savefig(*args, **kwargs)
File "C:\Anaconda\lib\site-packages\matplotlib\figure.py", line 1476, in savefig
self.canvas.print_figure(*args, **kwargs)
File "C:\Anaconda\lib\site-packages\matplotlib\backends\backend_qt5agg.py", line 161, in print_figure
FigureCanvasAgg.print_figure(self, *args, **kwargs)
File "C:\Anaconda\lib\site-packages\matplotlib\backend_bases.py", line 2211, in print_figure
**kwargs)
File "C:\Anaconda\lib\site-packages\matplotlib\backends\backend_agg.py", line 526, in print_png
filename_or_obj = open(filename_or_obj, 'wb')
IOError: [Errno 2] No such file or directory:
Of course the file doesn't exist, as I am trying to save it now.
The directory does exist; I have checked repeatedly.
I am completely baffled, as this worked perfectly fine two days ago, but with no changes to the code it does not now. EDIT: I updated the Anaconda Python distribution I was using, from 32-bit Anaconda 2.0 to 64-bit 2.3, both for Python 2.7.
Does anyone have any clue?
Thank you for reading my desperate plea for assistance!
EDIT:
I am also now getting what I believe to be the same error when saving txt files in Python.
f = open(fname, 'w')
IOError: [Errno 2] No such file or directory: 'D:\\DropBox\\Dropbox\\abc\\Time resolved spectroscopy data\\LiHoF4\\High resolution 1cm\\power spectra\\Si\\RT\\25ns\\CUT POWER SPECTRUM LiHoF pumping 5G5 449.8nm DC si detector 1cm resolution 25ns data aquisition 2000 points 5AVG RT 450nmlongpassfilter.0.dpt_fitting_output.txt'
Could this be due to the long filename?
I don't understand why there would be a problem with a file not existing when opening it for writing.
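For reference, a minimal sketch of sanity checks before saving (the path below is a placeholder); the classic Windows MAX_PATH limit of roughly 260 characters is one possible culprit given the very long filename above, since a path over that limit can fail with Errno 2 even when the directory exists:
import os
import matplotlib.pyplot as plt

savepath = r"D:\DropBox\Dropbox\abc\some_very_long_subfolder\output_figure.png"  # placeholder

print(len(savepath))                              # values near or above 260 are suspect on Windows
print(os.path.isdir(os.path.dirname(savepath)))   # confirm the target directory really exists

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.savefig(savepath, dpi=150)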