TensorFlow New Op: AttributeError: 'module' object has no attribute 'custom_op' - tensorflow

I am creating new ops (https://www.tensorflow.org/extend/adding_an_op) for TensorFlow (r1.0) running both on x86 and ARMv7.
Minor code modifications are necessary to run TensorFlow on ARMv7, but this guide helps a lot:
https://github.com/samjabrahams/tensorflow-on-raspberry-pi/blob/master/GUIDE.md.
But I noticed that the custom operations do not work on my ARMv7 installation of TensorFlow.
For example, when I test my custom operation in a Python script on ARMv7:
import tensorflow as tf
_custom_op_module = tf.load_op_library('custom_op.so')
custom_op = _custom_op_module.add_stub
I get the following error (that does not show up on x86):
$ python test_custom_op.py
Traceback (most recent call last):
File "custom_op.py", line 3, in <module>
add_stub = _custom_op_module.add_stub
AttributeError: 'module' object has no attribute 'custom_op'
I investigated further, and apparently my custom operation is not present in the .so library file.
$ python
>>> import tensorflow as tf
>>> _custom_op_module = tf.load_op_library('custom_op.so')
>>> dir(_custom_op_module)
>>> ['LIB_HANDLE', 'OP_LIST', '_InitOpDefLibrary', '__builtins__', '__doc__', '__name__', '__package__', '_collections', '_common_shapes', '_op_def_lib', '_op_def_library', '_op_def_pb2', '_op_def_registry', '_ops', '_text_format']
>>> _custom_op_module.OP_LIST
>>>
The same commands on x86 have the following output:
>>> import tensorflow as tf
>>> _custom_op_module = tf.load_op_library('custom_op.so')
>>> dir(_custom_op_module)
>>> ['LIB_HANDLE', 'OP_LIST', '_InitOpDefLibrary', '__builtins__', '__doc__', '__name__', '__package__', '_add_stub_outputs', '_collections', '_common_shapes', '_op_def_lib', '_op_def_library', '_op_def_pb2', '_op_def_registry', '_ops', '_text_format', 'custom_op']
>>> _custom_op_module.OP_LIST
op {
name: "CustomOp"
...
}
>>>
Does anybody have a similar issue? Can we consider this a bug?

I hit a similar issue with a similar error message when I tried to load my new op. However, my problem was that I had registered a custom op with the same name as an op that already exists in TensorFlow, which led to a name collision. Changing the name fixed it without recompiling TF.
The error message I encountered:
AttributeError: module '6e237d88703da016805889179d3f5baa' has no attribute 'custom_op'
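For what it's worth, a quick way to see what actually got registered from the shared library is to inspect the module returned by load_op_library; a minimal sketch, reusing 'custom_op.so' from the question:
import tensorflow as tf

# Load the shared library and look at what it exposes; an empty OP_LIST
# (as in the ARMv7 output above) means no op was registered, while a
# missing wrapper attribute can point at a name collision.
mod = tf.load_op_library('custom_op.so')
print(mod.OP_LIST)                                         # registered op defs
print([name for name in dir(mod) if not name.startswith('_')])  # generated wrappers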

Apparently, recompiling and re-installing TF made it work.

Related

KeyError: 'NormalizeUTF8' loading a model to another jupyter notebook

I'm trying to reload a model in another Jupyter notebook using this code:
import tensorflow as tf
reloaded = tf.saved_model.load('m_translator')
result = reloaded.tf_translate(input_text)
and I recently got this error:
KeyError Traceback (most recent call last)
File ~\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py:4177, in Graph._get_op_def(self, type)
4176 try:
-> 4177 return self._op_def_cache[type]
4178 except KeyError:
KeyError: 'NormalizeUTF8'
FileNotFoundError: Op type not registered 'NormalizeUTF8' in binary running on LAPTOP-D3PPA576. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
You may be trying to load on a different device from the computational device. Consider setting the `experimental_io_device` option in `tf.saved_model.LoadOptions` to the io_device such as '/job:localhost'.
I had this issue before. To solve it, you need to install tensorflow_text with a version matching your TensorFlow install. Check the version and install accordingly:
>>> tf.__version__
2.8.2
>>> !pip install tensorflow-text==2.8.2
In addition to installing the tensorflow_text library, what helped me with a similar problem was importing the library at the top of the notebook:
import tensorflow_text
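Putting both suggestions together, a minimal sketch (assuming input_text is already defined in the notebook and the tensorflow-text version matches tf.__version__):
import tensorflow as tf
import tensorflow_text  # noqa: F401 -- the import registers ops such as NormalizeUTF8

reloaded = tf.saved_model.load('m_translator')  # path from the question
result = reloaded.tf_translate(input_text)      # input_text assumed defined elsewhere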

How can I fix the error "tensorflow has no attribute 'ConfigProto'" without downgrading?

I'm trying to train a RetinaNet on my dataset with this command line:
retinanet-train --batch-size 4 --steps 349 --epochs 50 --weights logos/resnet50_coco_best_v2.1.0.h5 --snapshot-path logos/snapshots csv logos/retinanet_train.csv logos/retinanet_classes.csv
And I get this Error :
AttributeError: module 'tensorflow' has no attribute 'ConfigProto'
I know this is related to the TensorFlow version: in the new version, ConfigProto disappeared. But I want to fix it without re-installing the old 1.14 version, because otherwise it will be a mess.
Any suggestion would be super appreciated, thank you.
Since tf.ConfigProto is deprecated in TF 2.0, use tf.compat.v1.ConfigProto() instead by replacing the occurrences of tf.ConfigProto() in the retinanet-train code (assuming that's where tf.ConfigProto() is being called). Link to the TensorFlow doc here.
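For reference, a minimal sketch of the replacement (assuming the retinanet-train code builds a Session from the ConfigProto; the allow_growth option is only an illustrative example):
import tensorflow as tf

# In TF 2.x, ConfigProto and Session live under the compat.v1 namespace.
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True           # illustrative option, not required
session = tf.compat.v1.Session(config=config)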

Keras: Error when downloading Fashion_MNIST Data

I am trying to download the Fashion MNIST data, but it produces an error. Originally, it was downloading and working properly, but I had to terminate it because I had to turn off my computer. Once I opened the file up again, it gave me an error. I'm not sure what the problem is, but is it because I already downloaded some parts of the data once, and Keras doesn't recognize that? I am using a Jupyter notebook in a conda environment.
Here is the link to the image:
https://i.stack.imgur.com/wLGDm.png
You have missed adding tf. to the line
fashion_mnist = keras.datasets.fashion_mnist
The code below works perfectly for me. Importing the fashion_mnist dataset is outlined in the TensorFlow documentation here.
Change your code to:
import tensorflow as tf
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
or, use the better way to do it below. This avoids creating an extra variable fashion_mnist:
import tensorflow as tf
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()
I am using tensorflow 1.9.0, keras 2.2.2 and python 3.6.6 on Windows 10 x64 OS.
I know my PC well: I can't download anything larger than 2.7 MB in the terminal, due to WinError 8.
So I manually downloaded all the files from storage.googleapis.com (since some files are 25 MB).
Check the files:
Then I pasted all the files into \datasets\fashion-mnist.
The next time you run your code, it should be fixed.
Note: if you have VS Code, just Ctrl+click the link and you can download it easily.
I had an error regarding the cURL connection, and by looking into the error message I was able to track the file where the URL was declared. In my case it was:
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/datasets/fashion_mnist.py
At line 44 I have commented out the line:
# base = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/'
And declared a different base URL, which I had found looking into the documentation of the original dataset:
base = 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/'
The download started immediately and gave no errors. Hope this helps.
This is because for some reason you have an incomplete download for the MNIST dataset.
You will have to manually delete the downloaded data, which usually resides in ~/.keras/datasets or in any path you specified relative to that (in your case, MNIST_data).
Go to: C:\Users\Username\.keras\datasets
and then delete the dataset that you want to re-download or that has the error.
You should be good to go!
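If you want to locate the cache programmatically, here is a minimal sketch (assuming the default ~/.keras/datasets location; adjust if you used a custom path):
import os

# Default Keras download cache; delete the partial fashion-mnist files here
# so load_data() fetches them again.
cache = os.path.expanduser(os.path.join('~', '.keras', 'datasets'))
print(os.listdir(cache))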
You can also manually add a print statement for the path from which the dataset is being read.
For example, add print(paths) in fashion_mnist.py:
with gzip.open(paths[3], 'rb') as imgpath:
    print(paths)  # debug print in fashion_mnist.py
    x_test = np.frombuffer(
        imgpath.read(), np.uint8, offset=16).reshape(len(y_test), 28, 28)
Then remove the files from that path, and fresh data will start to download.
Change the base address to 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/' as described previously. It worked for me.
I was getting an error while downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz:
Traceback (most recent call last):
File "C:\Users\AsadA\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\lib\npyio.py", line 448, in load
return pickle.load(fid, **pickle_kwargs)
EOFError: Ran out of input
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\AsadA\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\lib\npyio.py", line 450, in load
raise IOError(
OSError: Failed to interpret file 'C:\\Users\\AsadA\\.keras\\datasets\\mnist.npz' as a pickle
Go to the folder C:\Users\AsadA\AppData\Local\Programs\Python\Python38\Lib\site-packages\tensorflow\python\keras\datasets (in my case) and follow the instructions:

Where Tensorflow EagerTensor is defined?

In my code I want to check whether the returned object's type is EagerTensor:
import tensorflow as tf
import inspect
if __name__ == '__main__':
    tf.enable_eager_execution()
    iterator = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]).__iter__()
    elem = iterator.next()
    print(type(elem))
    print(inspect.getmodule(elem))
    assert type(elem) == tf.python.framework.ops.EagerTensor
But the result is:
<class 'EagerTensor'>
<module 'tensorflow.python.framework.ops' from '/home/antek/anaconda3/envs/mnist_identification/lib/python3.6/site-packages/tensorflow/python/framework/ops.py'>
Traceback (most recent call last):
File "/home/antek/.PyCharm2018.1/config/scratches/scratch_4.py", line 11, in <module>
assert type(elem) == tf.python.framework.ops.EagerTensor
AttributeError: module 'tensorflow' has no attribute 'python'
Here (AttributeError: module 'tensorflow' has no attribute 'python') I found out that TensorFlow purposely deletes its reference to the python module. So how can I check whether my object is an EagerTensor instance?
I am not sure if you can, but I think you probably don't need to. You already have the following tools:
tf.is_tensor (previously tf.contrib.framework.is_tensor) that will return True for an EagerTensor
tf.executing_eagerly that returns True if you are, well, executing eagerly.
I believe they should cover 99% of your needs -- and I would be curious to hear about your problem if it falls in that percentage left out.
In the modern version of TensorFlow (2.2), you can use the is_tensor function documented here.
assert(tf.is_tensor(elem))
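A short sketch of the two recommended checks (TF 2.x, where eager execution is on by default):
import tensorflow as tf

elem = tf.constant([[1, 2], [3, 4]])
print(tf.executing_eagerly())   # True in TF 2.x unless eager execution is disabled
print(tf.is_tensor(elem))       # True for an EagerTensor (and for graph tensors too)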

Is lxml buggy in handling of attributes as dictionaries via attrib?

I am trying to migrate some code from using ElementTree to using lxml.etree and have encountered an error early on:
>>> import lxml.etree as ET
>>> main = ET.Element("main")
>>> another = ET.Element("another", foo="bar")
>>> main.attrib.update(another.attrib)
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
main.attrib.update(another.attrib)
File "lxml.etree.pyx", line 2153, in lxml.etree._Attrib.update
(src/lxml/lxml.etree.c:46972)
ValueError: too many values to unpack (expected 2)
But I am able to update using the following:
>>> main.attrib.update({'foo': 'bar'})
Is this a bug in lxml (version 2.3) or am I just missing something obvious?
I'm getting the same error; I don't think it's only a 2.3 issue.
Workaround:
main.attrib.update(dict(another.attrib))
# or more efficient if it has many attributes:
main.attrib.update(another.attrib.iteritems())
UPDATE
lxml.etree._Attrib.update accepts a dict or an iterable (source). Although _Attrib has a dict interface, it is not a dict instance.
In [3]: type(another.attrib)
Out[3]: lxml.etree._Attrib
In [4]: isinstance(another.attrib, dict)
Out[4]: False
So update tries to iterate the items as key, value pairs. Maybe it's done for performance; only the lxml author knows.
Ways to change it in lxml:
Subclass dict.
Check for hasattr(sequence_or_dict, 'items').
I'm not familiar with Cython and don't know which is better.
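For completeness, a Python 3 sketch of the workaround (iteritems() no longer exists there; this assumes _Attrib.update still accepts a dict or an iterable of pairs, as described above):
import lxml.etree as ET

main = ET.Element("main")
another = ET.Element("another", foo="bar")

main.attrib.update(dict(another.attrib))     # copy through a real dict
# or pass an iterable of (key, value) pairs:
main.attrib.update(another.attrib.items())

print(ET.tostring(main))                     # b'<main foo="bar"/>'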