So, I literally copied and pasted the code from this quickstart:
https://www.tensorflow.org/get_started/estimator
into a new Datalab notebook. This code has worked for me many times in the past, but now I'm getting an error saying the tf module has no attribute 'feature_column', i.e. this:
https://www.tensorflow.org/api_docs/python/tf/feature_column
I'm really at a loss as to what I should do from here; any ideas? I've found this question:
'module' object has no attribute 'feature_column'
and the OP said it had something to do with versioning. Perhaps there's a way to ensure Datalab is using the latest version of TensorFlow? I'm not sure where to begin looking through Datalab's version settings, and Google searching hasn't returned anything that has jumped out at me so far.
It's probably because of a different TensorFlow version. The sample at https://www.tensorflow.org/get_started/estimator is based on TF 1.3, so install 1.3 in your Datalab instance by running the following in a cell:
!pip install tensorflow==1.3
See if that works. Note that Datalab's preinstalled TF is 1.0 (to be updated soon), so if you create a new Datalab instance you'll need to install it again.
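After the install (and a kernel restart), a quick sanity check along the lines of the quickstart confirms the attribute exists; the feature name 'x' here is just a hypothetical placeholder:

```python
import tensorflow as tf

# After reinstalling, restart the kernel so the new install is picked up,
# then check which version is actually loaded.
print(tf.__version__)

# The attribute the quickstart relies on should now be present.
assert hasattr(tf, 'feature_column')
col = tf.feature_column.numeric_column('x', shape=[4])
print(col.name)
```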
I am a beginner who just learned about TensorFlow using Google Colab.
As shown in the attached image, lines 10 to 13 of the tensorflow.keras imports are underlined. What is the problem?
The underline presumably flags a typo, but the code runs without any problem.
This underline error was an ongoing issue in Google Colab earlier, and it has since been resolved. Please try replicating the code from your screenshot in Google Colab again, and let us know if the issue still persists at your end.
Please check the code below (I replicated it in Google Colab with Python 3.8.10 and TensorFlow 2.9.2):
from keras.preprocessing import Sequence shows the error because Sequence is not being imported from its proper module; the correct imports (using the tensorflow.keras.preprocessing prefix, or tensorflow.keras.utils) are given on the next line.
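As a sketch (assuming TF 2.x, where Sequence lives under tensorflow.keras.utils), the failing and working forms look like this:

```python
# This fails, because Sequence is not provided by keras.preprocessing:
#   from keras.preprocessing import Sequence   # ImportError

# This works: Sequence is exposed under tensorflow.keras.utils.
from tensorflow.keras.utils import Sequence

class TwoBatches(Sequence):
    """Minimal Sequence subclass serving two dummy batches."""
    def __len__(self):
        return 2

    def __getitem__(self, idx):
        return [float(idx)], [float(idx)]

batches = TwoBatches()
print(len(batches))   # 2
print(batches[1])     # ([1.0], [1.0])
```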
In order to use a pre-trained model with TensorFlow, we clone the Model Garden for TensorFlow and then choose a model from its Model Zoo, for example the Detection Model Zoo: EfficientDet D0 512x512.
Is there any way to determine the right version of TensorFlow, e.g. 2.7.0, 2.7.1, or 2.8.0, that will surely work with the aforementioned setup?
The documentation (README.md) doesn't seem to mention this requirement. Maybe it is implied somehow?
I checked setup.py for Object Detection, but there is still no clue!
\models\research\object_detection\packages\tf2\setup.py
REQUIRED_PACKAGES = [
    # Required for apache-beam with PY3
    'avro-python3',
    'apache-beam',
    'pillow',
    'lxml',
    'matplotlib',
    'Cython',
    'contextlib2',
    'tf-slim',
    'six',
    'pycocotools',
    'lvis',
    'scipy',
    'pandas',
    'tf-models-official>=2.5.1',
    'tensorflow_io',
    'keras'
]
I am not aware of a formal/quick way to determine the right TensorFlow version for a given Model Garden version (master branch). However, here is my workaround:
In the REQUIRED_PACKAGES above, we see tf-models-official>=2.5.1.
Checking the package history on pypi.org, the latest version, as of 03.02.2022, is 2.8.0.
So when installing from this \models\research\object_detection\packages\tf2\setup.py file, pip will naturally fetch the latest version of tf-models-official, which is 2.8.0, thanks to the >= specifier.
However, for tf-models-official v2.8.0, the required packages are defined in tf-models-official-2.8.0\tf_models_official.egg-info\requires.txt (note: download the package via the link and extract it).
Here, we find out:
tensorflow~=2.8.0
...meaning the required Tensorflow version is 2.8.*.
This may not be desirable; in Colab, for example, the current version is 2.7.0.
To work around this, we should use tf-models-official v2.7.0, which matches the TensorFlow version. In that version's requires.txt, we should see tensorflow>=2.4.0, which is already satisfied by Colab's default TensorFlow version (2.7.0).
To make this workaround possible, \models\research\object_detection\packages\tf2\setup.py should be modified from 'tf-models-official>=2.5.1' to 'tf-models-official==2.7.0'.
Caveat: I think this hack doesn't affect the functionality of the Object Detection API, because it originally accepts any tf-models-official >= 2.5.1; we simply pin it to ==2.7.0 instead.
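For intuition, here is a tiny pure-Python sketch of what the ~= (compatible release) specifier demands. This is a deliberate simplification (pip's real resolver uses the packaging library) and handles plain three-component numeric versions only:

```python
def satisfies_compatible_release(installed: str, spec: str) -> bool:
    """Check 'installed' against a '~=X.Y.Z' compatible-release spec.

    '~=2.8.0' means '>=2.8.0, <2.9.0' (simplified: numeric
    three-component versions only, no pre-releases).
    """
    pinned = tuple(int(p) for p in spec.lstrip("~=").split("."))
    have = tuple(int(p) for p in installed.split("."))
    lower_ok = have >= pinned            # at least the pinned version
    upper_ok = have[:2] == pinned[:2]    # same major.minor series
    return lower_ok and upper_ok

# tf-models-official 2.8.0 demands tensorflow~=2.8.0:
print(satisfies_compatible_release("2.7.0", "~=2.8.0"))  # False: Colab's 2.7.0 fails
print(satisfies_compatible_release("2.8.4", "~=2.8.0"))  # True: any 2.8.* works
```

This is why pinning tf-models-official==2.7.0 (whose requirement is the much looser tensorflow>=2.4.0) sidesteps the conflict.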
I use
layers.Normalization()
in Keras, inside a keras.Sequential model.
When I try to run it, I get the following error:
module 'tensorflow.keras.layers' has no attribute 'Normalization'
I've seen layers.Normalization() used in many code samples, so I don't know what's wrong. Did something change?
One reason can be that your TensorFlow version is older than the one that introduced that layer. There are two ways around this problem:
Upgrade TensorFlow, as discussed in the other answer.
Or you can add the layer as follows:
tf.keras.layers.experimental.preprocessing.Normalization
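A version-tolerant sketch combining both options (this assumes TF is installed; the fallback path is the pre-2.6 experimental location):

```python
import numpy as np
import tensorflow as tf

# Prefer the stable location (TF >= 2.6); fall back to the
# experimental one used by older releases.
try:
    Normalization = tf.keras.layers.Normalization
except AttributeError:
    Normalization = tf.keras.layers.experimental.preprocessing.Normalization

data = np.array([[1.0], [2.0], [3.0]], dtype="float32")
norm = Normalization(axis=-1)
norm.adapt(data)            # learn mean and variance from the data
out = norm(data).numpy()    # roughly zero-mean, unit-variance output
print(out)
```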
Check the version of TensorFlow you have:
import tensorflow as tf
print(tf.__version__)
tf.keras.layers.Normalization is an attribute in TensorFlow v2.6.0, so might not work on earlier versions: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Normalization
If you have an earlier version, you can upgrade using
pip install --upgrade tensorflow
I have just over 100k word embeddings which I created using gensim, originally each containing 200 dimensions. I've been trying to visualize them within tensorboard's projector but I have only failed so far.
My problem is that tensorboard seems to freeze while computing PCA. At first, I left the page open for 16 hours, imagining that it was just too much to be calculated, but nothing happened. At this point, I started to try and test different scenarios just in case all I needed was more time and I was trying to rush things. The following is a list of my testing so far, all of which failed at the same spot, computing PCA:
I plotted only 10 points of 200 dimensions;
I retrained my gensim model so that I could reduce its dimensionality to 100;
Then I reduced it to 10;
Then to 2;
Then I tried plotting only 2 points, i.e. 2 two dimensional points;
I am using Tensorflow 1.11;
You can find my last saved TensorFlow session here; would you mind trying it out?
I am still a beginner, so I used a couple of tutorials to get me started; so far I've been following Sud Harsan's work.
Any help is much appreciated. Thanks.
Updates:
A) I've found someone else dealing with the same problem; I tried the solution provided, but it didn't change anything.
B) I thought it could have something to do with my installation, therefore I tried uninstalling tensorflow and installing it back; no luck. I then proceeded to create a new environment dedicated to tensorflow and that also didn't work.
C) Assuming there was something wrong with my code, I ran tensorflow's basic embedding tutorial to check if I could open its projector's results. And guess what?! I still can't go past "Calculating PCA"
Now, I did visit the online projector example and that loads perfectly.
Again, Any help would be more than appreciated. Thanks!
I have the same problem with word2vec_basic.py
My environment: win10, conda, python 3.6.7, tensorflow 1.11, tensorboard 1.11
It may not be your fault: I rolled back tensorflow & tensorboard from 1.11 to 1.7, and guess what?! The projector appeared within a few seconds!
reference
Update 10/11: tensorboard & tensorflow 1.12 are available in conda today. I gave them a try, and this problem seems to be fixed.
As mentioned by Bluedrops, updating tensorboard and tensorflow seems to fix the problem.
I created a new environment with conda and installed the newest versions of Tensorflow, Tensorboard and their dependencies and that seems to fix the issue.
Encountered this issue with 2 different visualization libraries.
PYLDAVIS and DISPLACY (spacy).
On executing code in JupyterLab (python3 kernel), the expected output is for the notebook to show the graph or web content, but mine doesn't show any graph / dependency image; I only see textual output in JupyterLab.
eg.
displacy.serve(doc, style='dep')
I'm using the Kaggle Docker image, which includes JupyterLab, and on top of that I have updated to the latest packages.
Any pointers if this is JUPYTERLAB related or underlying packages?
I can only really comment on the spaCy part of this, but one thing I noticed is that you are using displacy.serve instead of displacy.render, which would be the correct method to call from within a Jupyter environment (see the spaCy visualizer docs for a full example and more details). The reason behind this is that displacy.serve will start a web server to show the visualization in a browser – all of which is not necessary if you're already in a Jupyter Notebook. So when you call displacy.render, it will detect your Jupyter environment, and wrap the visualization accordingly. You can also set jupyter=True to force this behaviour.
try
from spacy import displacy
displacy.render(doc, style="dep", jupyter=True, options={'distance': 140})
or
displacy.render(doc, style="ent", jupyter=True, options={'distance': 140})