In my code I want to check whether returned object type is EagerTensor:
import tensorflow as tf
import inspect
if __name__ == '__main__':
    tf.enable_eager_execution()
    iterator = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]).__iter__()
    elem = iterator.next()
    print(type(elem))
    print(inspect.getmodule(elem))
    assert type(elem) == tf.python.framework.ops.EagerTensor
But the result is:
<class 'EagerTensor'>
<module 'tensorflow.python.framework.ops' from '/home/antek/anaconda3/envs/mnist_identification/lib/python3.6/site-packages/tensorflow/python/framework/ops.py'>
Traceback (most recent call last):
File "/home/antek/.PyCharm2018.1/config/scratches/scratch_4.py", line 11, in <module>
assert type(elem) == tf.python.framework.ops.EagerTensor
AttributeError: module 'tensorflow' has no attribute 'python'
From this (AttributeError: module 'tensorflow' has no attribute 'python') I found out that TensorFlow purposely deletes its reference to the python module. So how can I check that my object is an EagerTensor instance?
I am not sure if you can, but I think you probably don't need to. You already have the following tools:
tf.is_tensor (previously tf.contrib.framework.is_tensor), which returns True for an EagerTensor
tf.executing_eagerly, which returns True if you are, well, executing eagerly.
I believe they should cover 99% of your needs, and I would be curious to hear about your problem if it falls in the 1% left out.
In the modern version of TensorFlow (2.2), you can use the is_tensor function documented here.
assert tf.is_tensor(elem)
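If you really do need an exact type check without importing the private tensorflow.python module, one workaround is to compare the type's name and module as strings. This is only a sketch based on the type(elem) and inspect.getmodule output shown in the question; the module path is an implementation detail and may change between TF versions:

```python
def is_eager_tensor(obj):
    # Compare the type's qualified name instead of importing the private
    # tensorflow.python.framework.ops module directly. The module path
    # matches the question's printed output and is version-dependent.
    t = type(obj)
    return (t.__name__ == 'EagerTensor'
            and t.__module__ == 'tensorflow.python.framework.ops')
```

This works because, as inspect.getmodule showed, the class really lives in tensorflow.python.framework.ops even though the top-level tensorflow module deletes its python attribute.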
I'm trying to reload another model to another jupyter notebook using this code:
import tensorflow as tf
reloaded = tf.saved_model.load('m_translator')
result = reloaded.tf_translate(input_text)
and I recently got this error:
KeyError Traceback (most recent call last)
File ~\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py:4177, in Graph._get_op_def(self, type)
4176 try:
-> 4177 return self._op_def_cache[type]
4178 except KeyError:
KeyError: 'NormalizeUTF8'
FileNotFoundError: Op type not registered 'NormalizeUTF8' in binary running on LAPTOP-D3PPA576. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
You may be trying to load on a different device from the computational device. Consider setting the `experimental_io_device` option in `tf.saved_model.LoadOptions` to the io_device such as '/job:localhost'.
I had this issue before. To solve it, you need to install tensorflow_text, matching your TensorFlow version:
>>> tf.__version__
'2.8.2'
>>> !pip install tensorflow-text==2.8.2
In addition to installing the tensorflow_text library, what helped me with a similar problem was importing it at the top of the notebook:
import tensorflow_text
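Both answers boil down to the same rule: install a tensorflow-text version that matches tf.__version__, and import tensorflow_text before calling tf.saved_model.load so the NormalizeUTF8 op gets registered. A tiny helper for building the matching pip requirement string (the function name is purely illustrative):

```python
def matching_text_requirement(tf_version):
    # tensorflow-text is pinned to the same release as TensorFlow
    # (2.8.2 against 2.8.2 in the answer above), so reuse tf.__version__.
    return 'tensorflow-text==' + tf_version

print(matching_text_requirement('2.8.2'))  # tensorflow-text==2.8.2
```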
I am creating new ops (https://www.tensorflow.org/extend/adding_an_op) for TensorFlow (r1.0) running both on x86 and ARMv7.
Minor code modifications are necessary to run TensorFlow on ARMv7, but this guide helps a lot:
https://github.com/samjabrahams/tensorflow-on-raspberry-pi/blob/master/GUIDE.md.
But I noticed that the custom operations do not work on my ARMv7 installation of TensorFlow.
For example, when I test my custom operation in a Python script on ARMv7:
import tensorflow as tf
_custom_op_module = tf.load_op_library('custom_op.so')
custom_op = _custom_op_module.add_stub
I get the following error (that does not show up on x86):
$ python test_custom_op.py
Traceback (most recent call last):
File "custom_op.py", line 3, in <module>
add_stub = _custom_op_module.add_stub
AttributeError: 'module' object has no attribute 'custom_op'
I investigated further, and apparently my custom operation is not present in the .so library file.
$ python
>>> import tensorflow as tf
>>> _custom_op_module = tf.load_op_library('custom_op.so')
>>> dir(_custom_op_module)
['LIB_HANDLE', 'OP_LIST', '_InitOpDefLibrary', '__builtins__', '__doc__', '__name__', '__package__', '_collections', '_common_shapes', '_op_def_lib', '_op_def_library', '_op_def_pb2', '_op_def_registry', '_ops', '_text_format']
>>> _custom_op_module.OP_LIST
>>>
The same commands on x86 have the following output:
>>> import tensorflow as tf
>>> _custom_op_module = tf.load_op_library('custom_op.so')
>>> dir(_custom_op_module)
['LIB_HANDLE', 'OP_LIST', '_InitOpDefLibrary', '__builtins__', '__doc__', '__name__', '__package__', '_add_stub_outputs', '_collections', '_common_shapes', '_op_def_lib', '_op_def_library', '_op_def_pb2', '_op_def_registry', '_ops', '_text_format', 'custom_op']
>>> _custom_op_module.OP_LIST
op {
name: "CustomOp"
...
}
>>>
Does anybody have similar issue? Can we consider this a bug?
I hit a similar issue with a similar error message when I tried to load my new op; however, my problem was that I registered a customized op with the same name as an existing TensorFlow op, which led to a name collision. Changing the name fixed it without recompiling TF.
The error message I encountered:
AttributeError: module '6e237d88703da016805889179d3f5baa' has no attribute 'custom_op'
Apparently, recompiling and re-installing TF made it work.
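Note the relation visible in the x86 output above: the registered op name CustomOp appears on the loaded Python module as the snake_case attribute custom_op. When an attribute seems to be missing, it can help to compute the name you should be looking for. A rough sketch of that CamelCase-to-snake_case convention (the real TF name converter handles more edge cases, e.g. runs of capitals, so treat this as an approximation):

```python
import re

def op_name_to_attr(op_name):
    # "CustomOp" -> "custom_op": insert '_' before each interior capital
    # letter, then lowercase the whole string.
    return re.sub(r'(?<!^)(?=[A-Z])', '_', op_name).lower()

print(op_name_to_attr('CustomOp'))  # custom_op
print(op_name_to_attr('AddStub'))   # add_stub
```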
I am trying to use serialized data using the proto interface as suggested here
https://www.tensorflow.org/versions/master/how_tos/reading_data/index.html#reading-from-files
I tried to use this example:
https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py
But it fails because:
In [99]: tf.FixedLenFeature
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-99-e5629528302a> in <module>()
----> 1 tf.FixedLenFeature
AttributeError: 'module' object has no attribute 'FixedLenFeature'
I guess I am missing something here...
The version of fully_connected_reader.py includes some (breaking) changes to the tf.parse_example() API that were made after the TensorFlow 0.6.0 release. These changes included adding the tf.FixedLenFeature class as a helper for defining the schema to be used in tf.parse_example(). You should build TensorFlow from source or wait for the upcoming 0.7.0 release to use this version of the API.
Alternatively, the old version of the example code is available here, and the documentation for tf.parse_example() is available here.
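Since the answer says tf.FixedLenFeature only exists in builds made after the 0.6.0 release, a script that must run on both old and new installs can gate on the version (or, more robustly, simply on hasattr(tf, 'FixedLenFeature')). A minimal sketch of the version check, with the 0.7 cutoff taken from the answer above:

```python
def has_fixed_len_feature(tf_version):
    # tf.FixedLenFeature landed after the 0.6.0 release, so require 0.7+.
    # hasattr(tf, 'FixedLenFeature') is the more robust runtime check.
    major, minor = (int(part) for part in tf_version.split('.')[:2])
    return (major, minor) >= (0, 7)

print(has_fixed_len_feature('0.6.0'))  # False
print(has_fixed_len_feature('0.7.0'))  # True
```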
Hi, I am getting the following error:
'numpy.ndarray' object is not callable
when performing the calculation in the following manner:
rolling_means = pd.rolling_mean(prices, 20, min_periods=20)
rolling_std = pd.rolling_std(prices, 20)
# print rolling_means.head(20)
upper_band = rolling_means + (rolling_std * 2)
lower_band = rolling_means - (rolling_std * 2)
I am not sure how to resolve this. Can someone point me in the right direction?
The error TypeError: 'numpy.ndarray' object is not callable means that you tried to call a numpy array as a function. We can reproduce the error like so in the repl:
In [16]: import numpy as np
In [17]: np.array([1,2,3])()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/user/<ipython-input-17-1abf8f3c8162> in <module>()
----> 1 np.array([1,2,3])()
TypeError: 'numpy.ndarray' object is not callable
If we are to assume that the error is indeed coming from the snippet of code that you posted (something that you should check), then you must have reassigned either pd.rolling_mean or pd.rolling_std to a numpy array earlier in your code.
What I mean is something like this:
In [1]: import numpy as np
In [2]: import pandas as pd
In [3]: pd.rolling_mean(np.array([1,2,3]), 20, min_periods=5) # Works
Out[3]: array([ nan, nan, nan])
In [4]: pd.rolling_mean = np.array([1,2,3])
In [5]: pd.rolling_mean(np.array([1,2,3]), 20, min_periods=5) # Doesn't work anymore...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/user/<ipython-input-5-f528129299b9> in <module>()
----> 1 pd.rolling_mean(np.array([1,2,3]), 20, min_periods=5) # Doesn't work anymore...
TypeError: 'numpy.ndarray' object is not callable
So, basically you need to search the rest of your codebase for pd.rolling_mean = ... and/or pd.rolling_std = ... to see where you may have overwritten them.
Also, if you'd like, you can put in reload(pd) just before your snippet, which should make it run by restoring the value of pd to what you originally imported it as, but I still highly recommend that you try to find where you may have reassigned the given functions.
For everyone hitting this problem in 2021: this can also happen when you create a numpy variable with the same name as one of your functions. Instead of calling the function, Python tries to call the numpy array, and you get this error. Just rename the numpy variable.
I met the same problem and solved it. The cause was that my function parameters and variables shared the same name. Try giving them different names.
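All of the answers describe the same failure mode: a name that used to refer to a function gets rebound to array-like data, and the next call through that name blows up. The pattern is easy to reproduce without pandas or numpy at all (the names here are purely illustrative):

```python
def rolling_mean(values, window):
    # Toy stand-in for the real function.
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(rolling_mean([1, 2, 3, 4], 2))   # works: [1.5, 2.5, 3.5]

rolling_mean = [1, 2, 3]               # accidental reassignment shadows it
try:
    rolling_mean([1, 2, 3, 4], 2)      # now calls a list, not a function
except TypeError as e:
    print(e)                           # 'list' object is not callable
```

The fix is the same as in the answers above: find and rename the reassignment so the function name is never shadowed.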
I am trying to migrate some code from using ElementTree to using lxml.etree and have encountered an error early on:
>>> import lxml.etree as ET
>>> main = ET.Element("main")
>>> another = ET.Element("another", foo="bar")
>>> main.attrib.update(another.attrib)
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
main.attrib.update(another.attrib)
File "lxml.etree.pyx", line 2153, in lxml.etree._Attrib.update
(src/lxml/lxml.etree.c:46972)
ValueError: too many values to unpack (expected 2)
But I am able to update using the following:
>>> main.attrib.update({'foo': 'bar'})
Is this a bug in lxml (version 2.3) or am I just missing something obvious?
I'm getting the same error, and I don't think it's only a 2.3 issue.
Workaround:
main.attrib.update(dict(another.attrib))
# or more efficient if it has many attributes:
main.attrib.update(another.attrib.iteritems())
UPDATE
lxml.etree._Attrib.update accepts dict or iterable (source). Although _Attrib has dict interface, it is not dict instance.
In [3]: type(another.attrib)
Out[3]: lxml.etree._Attrib
In [4]: isinstance(another.attrib, dict)
Out[4]: False
So update tries to iterate items as (key, value) pairs. Maybe it's done for performance. Only the lxml author knows.
Ways to change it in lxml:
Subclass dict.
Check for hasattr(sequence_or_dict, 'items').
I'm not familiar with Cython and don't know which is better.
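The behaviour described above can be mimicked in pure Python: dict.update treats an argument without a keys() method as a sequence of (key, value) pairs, so a dict-like object whose iteration yields plain string keys fails to unpack, much like lxml's _Attrib does (the exact error text differs between CPython's dict and lxml, and the AttribLike class here is just an illustrative stand-in):

```python
class AttribLike:
    # Dict-like interface, but not a dict subclass, and iterating it
    # yields keys only -- roughly what lxml.etree._Attrib does.
    def __init__(self, pairs):
        self._d = dict(pairs)
    def __iter__(self):
        return iter(self._d)
    def items(self):
        return self._d.items()

target = {}
try:
    target.update(AttribLike({'foo': 'bar'}))   # iterates 'foo'; unpack fails
except ValueError as e:
    print(e)

target.update(AttribLike({'foo': 'bar'}).items())  # workaround: pass pairs
print(target)                                      # {'foo': 'bar'}
```

This is why the dict(another.attrib) and iteritems() workarounds succeed: both hand update something it knows how to consume.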