I need to use the smartwatch_gestures dataset from TensorFlow Datasets, and here is my code:
pip install --upgrade tfds-nightly
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
train_data, ds_info = tfds.load('smartwatch_gestures', split='train', as_supervised=True, with_info=True)
print(ds_info.features)
But I get the following error:
DatasetNotFoundError: Dataset smartwatch_gestures not found.
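For what it's worth, a quick check like the one below should show whether the installed tfds build even lists this dataset (a sketch I have not run; a stale pre-installed copy of tfds can shadow a freshly installed tfds-nightly until the runtime is restarted):
import tensorflow_datasets as tfds

# Which tfds build is actually in use?
print(tfds.__version__)

# Does this build know about the dataset at all?
print('smartwatch_gestures' in tfds.list_builders())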
I am following a tutorial to convert a Keras (.hdf5/.h5) model to an ONNX model; the link is https://www.youtube.com/watch?v=7ndUGBzGVvg
So, the code is:
import tensorflow as tf
from tensorflow.keras.models import load_model
print(tf.__version__)
model = load_model('./model/weights.28-3.73.hdf5')
model.summary()
import tf2onnx
I also tried importing keras2onnx, but it generates an error saying
module 'tensorflow.python.keras' has no attribute 'applications'. So I tried with tf2onnx.
I am using
keras==2.4.3
tensorflow==2.5.0
python==3.8.5
tf2onnx==1.13.0
onnx==1.13.0
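For context, the conversion step I am aiming for looks roughly like this (a sketch based on the tf2onnx Python API; opset 13 and the output filename are my own placeholders):
import tf2onnx

# Convert the loaded Keras model to ONNX and write it to disk.
onnx_model, _ = tf2onnx.convert.from_keras(model, opset=13, output_path="model.onnx")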
I'm following an example that uses TensorFlow 1.15.0's Object Detection API.
The tutorial clearly covers the following aspects:
how to download a model
how to load a custom dataset with .xml files, make .csv files from them, and then .record files
how to configure a training pipeline
how to get tensorboard graphs
how to train the net saving checkpoints (using model_main.py)
how to export (save) the model (using export_inference_graph.py)
What I have not been able to accomplish, however, is loading the saved model to use it.
I tried with tf.saved_model.loader.load(sess, flags, export_dir), but I get
INFO:tensorflow:Saver not created because there are no variables in the graph to restore.
INFO:tensorflow:The specified SavedModel has no variables; no checkpoints were restored.
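(To be explicit, the call I was attempting is the TF1-style loader shown below; using the SERVING tag is my guess at what should go in place of flags.)
import tensorflow as tf

export_dir = './dir/saved_model'  # placeholder; the real path is the folder described below

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],  # the tags argument; my stand-in for "flags"
        export_dir)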
the folder given in export_dir has the following structure:
+dir
  +saved_model
    -saved_model.pb
  -model.ckpt.data-00000-of-00001
  -model.ckpt.index
  -checkpoint
  -frozen_inference_graph.pb
  -model.ckpt.meta
  -pipeline.config
My final goal here is to capture images with a camera and feed them to the net for real-time object detection.
As an in-between step, for now I just want to be able to feed a single picture and get the output. I was able to train the net, but now I can't use it.
Thank you in advance.
I found an example of how to download a model that let me work through it.
Since the folder structure of the downloaded file in the example is the same one I get from my code, I just had to adapt it.
The original function that downloads the model is
def load_model(model_name):
    base_url = 'http://download.tensorflow.org/models/object_detection/'
    model_file = model_name + '.tar.gz'
    model_dir = tf.keras.utils.get_file(
        fname=model_name,
        origin=base_url + model_file,
        untar=True)

    model_dir = pathlib.Path(model_dir)/"saved_model"
    model = tf.saved_model.load(str(model_dir))
    model = model.signatures['serving_default']

    return model
Then I used that function to create this new one
def load_local_model(model_path):
    model_dir = pathlib.Path(model_path)/"saved_model"
    model = tf.saved_model.load(str(model_dir))
    model = model.signatures['serving_default']

    return model
At first this didn't work, since tf.saved_model.load expected 3 arguments, but that was solved by adding the two import blocks from the same example. I still don't know which import did the trick and why (I'll edit this answer when I find out), but for the moment this code works and the example lets me do more things.
The import blocks are the following
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
and
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
EDIT
What was really needed for this to work was the following block.
import os
import pathlib

if "models" in pathlib.Path.cwd().parts:
    while "models" in pathlib.Path.cwd().parts:
        os.chdir('..')
elif not pathlib.Path('models').exists():
    !git clone --depth 1 https://github.com/tensorflow/models

%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.

%%bash
cd models/research
pip install .
Otherwise this import block won't work:
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
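With those imports in place, running the loaded model on a single picture looks roughly like this (a sketch assuming eager execution, as in the tutorial notebooks; the paths are placeholders and the output keys are the usual Object Detection API ones, so check them against your own model):
import numpy as np
import tensorflow as tf
from PIL import Image

detection_fn = load_local_model('./dir')      # the export folder from the question

image = np.array(Image.open('test.jpg'))      # placeholder image path, RGB uint8
input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]

outputs = detection_fn(input_tensor)
# Typical keys: num_detections, detection_boxes, detection_scores, detection_classes
print(outputs['detection_scores'])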
I am trying to run the following code:
import os
import pprint
import numpy as np
import tensorflow as tf
import tensorlayer as tl
from tensorlayer.layers import *
from tensorlayer.prepro import *
from random import shuffle
However, I am getting this error:
~\Documents\Jupyter Notebooks far Run\cGAN at Lockdown\Re-implement CycleGAN in
Tensorlayer=GD\CycleGAN_Tensorlayer-master\tensorlayer\layers.py in <module>
31 TF_GRAPHKEYS_VARIABLES = tf.GraphKeys.GLOBAL_VARIABLES
32 except: # For TF11 and before
---> 33 TF_GRAPHKEYS_VARIABLES = tf.GraphKeys.VARIABLES
34
35 ## Variable Operation
AttributeError: module 'tensorflow' has no attribute 'GraphKeys'
The version of my TF is 2.0.0-beta1, while I have installed TensorLayer 2.2.1.
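From the traceback, the layers.py being imported lives inside the CycleGAN repository folder rather than the installed package; a check like this should show which copy of TensorLayer Python actually picks up (a sketch):
import tensorflow as tf
import tensorlayer as tl

print(tf.__version__)   # e.g. 2.0.0-beta1
print(tl.__version__)   # should report 2.2.1 if the installed package is used
print(tl.__file__)      # shows whether the repo-local tensorlayer/ folder shadows it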
Kindly let me know where I am getting it wrong.
Can someone please help: why am I getting the error AttributeError: module 'tensorflow' has no attribute 'version'? I've installed TF 2.0.0.rc0.
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
print(tf.version)
You need to change your last statement to print(tf.__version__) instead of print(tf.version), as the attribute name is __version__ rather than version; there are two leading and two trailing underscores.
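For example:
print(tf.__version__)      # prints the TensorFlow version string
# In TF 2.x the same string is also available as:
print(tf.version.VERSION)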
I wanted to try out the embeddings provided in tensorflow-hub, the 'universal-sentence-encoder' to be specific. I tried the examples provided (https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb) and it worked fine. So I tried to do the same with the 'multilingual' model, but every time the multilingual model is loaded, the Colab kernel fails and restarts. What is the problem, and how can I get around this?
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
import tf_sentencepiece
import sentencepiece
# Import the Universal Sentence Encoder's TF Hub module
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-multilingual/1")  # This is where the kernel dies.
print("imported model")

# Compute a representation for each message, showing various lengths supported.
word = "코끼리"
sentence = "나는 한국어로 쓰여진 문장이야."
paragraph = (
    "동해물과 백두산이 마르고 닳도록. "
    "하느님이 보우하사 우리나라 만세~")
messages = [word, sentence, paragraph]

# Reduce logging output.
tf.logging.set_verbosity(tf.logging.ERROR)

with tf.Session() as session:
    session.run([tf.global_variables_initializer(), tf.tables_initializer()])
    message_embeddings = session.run(embed(messages))

    for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):
        print("Message: {}".format(messages[i]))
        print("Embedding size: {}".format(len(message_embedding)))
        message_embedding_snippet = ", ".join(
            (str(x) for x in message_embedding[:3]))
        print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
I had similar issues with the multilingual sentence encoder. I resolved them by pinning the tensorflow version to 1.14.0 and tf-sentencepiece to 0.1.83, so before running your code in Colab try:
!pip3 install tensorflow==1.14.0
!pip3 install tensorflow-hub
!pip3 install sentencepiece
!pip3 install tf-sentencepiece==0.1.83
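After installing, it may be necessary to restart the Colab runtime; a quick check that the pinned versions actually took effect (a sketch):
import tensorflow as tf
import tensorflow_hub as hub

print(tf.__version__)    # should show 1.14.0
print(hub.__version__)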
I was able to replicate your problem in Colab, and this solution loaded the model correctly.
It seems to be a compatibility problem between sentencepiece and tensorflow; check for updates on this issue here.
Let us know how it goes. Best of luck and I hope this helps.
EDIT: If tensorflow version 1.14.0 does not work, change it to 1.13.1. This problem should be resolved once compatibility between tensorflow and sentencepiece is figured out.