\builders\dataset_builder.py", line 82, in _read_dataset_internal
raise RuntimeError('Did not find any input files matching the glob pattern '
RuntimeError: Did not find any input files matching the glob pattern ['Tensorflow\workspace\annotations\train.record']
I am following the https://www.youtube.com/watch?v=yqkISICHH-U tutorial and the training doesn't seem to work. What am I doing wrong?
I've seen that some people change the "\" to "/" for Windows, but I'm not sure how to do that because none of those backslashes are actually in the command, which is:
python Tensorflow\models\research\object_detection\model_main_tf2.py --model_dir=Tensorflow\workspace\models\my_ssd_mobnet --pipeline_config_path=Tensorflow\workspace\models\my_ssd_mobnet\pipeline.config --num_train_steps=2000
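For debugging, here is a minimal sanity check (a hedged sketch; the path is just the one from the error message, rewritten with forward slashes): confirm that train.record actually exists where the input path in pipeline.config points, since the glob in the error comes from that setting.
import os
import tensorflow as tf

# Path from the error message, written with forward slashes to avoid backslash-escape issues on Windows.
record_path = "Tensorflow/workspace/annotations/train.record"
print(os.path.exists(record_path))    # should print True once the .record file has been generated
print(tf.io.gfile.glob(record_path))  # roughly what the dataset builder's glob lookup sees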
I have a TensorFlow "graph-model" consisting of a model.json and several .bin files. In JavaScript I am able to read those files using
const weights = browser.runtime.getURL("web_model/model.json");
tf.loadGraphModel(weights)
However, I would like to be able to use this model in Python, in order to process the results better.
When I try to load the model in python with
new_model = keras.models.load_model('./web_model/model.json')
I get the following error:
File "h5py/h5f.pyx", line 106, in h5py.h5f.open
OSError: Unable to open file (file signature not found)
I don't understand: since the JavaScript code is able to run the model, I think Python should be able to do the same. What am I doing wrong?
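For context, the model.json of a tfjs graph model is not a Keras HDF5 file, which is why h5py complains about the file signature. One possible route (a hedged sketch, not the only way: it assumes the tensorflowjs pip package is installed and that the converter writes to a saved_model/ directory) is to convert the graph model back into a TensorFlow SavedModel and load that in Python:
# Run once in a shell to convert the tfjs graph model back to a SavedModel:
#   tensorflowjs_converter --input_format=tfjs_graph_model --output_format=tf_saved_model web_model/ saved_model/
import tensorflow as tf

model = tf.saved_model.load("saved_model")  # hypothetical output directory of the converter
print(list(model.signatures.keys()))        # inspect the available signatures before calling one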
I want to use sentencepiece, from https://github.com/google/sentencepiece, in a Google Colab project where I am training an OpenNMT model. I'm a little confused about how to set up the sentencepiece binaries in Google Colab. Do I need to build with cmake?
When I install it with pip install sentencepiece and include sentencepiece in the "transforms" in my script, I get the following error.
After running this command (taken from the OpenNMT translation tutorial):
!onmt_build_vocab -config en-sp.yaml -n_sample -1
I get:
Traceback (most recent call last):
File "/usr/local/bin/onmt_build_vocab", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/onmt/bin/build_vocab.py", line 63, in main
build_vocab_main(opts)
File "/usr/local/lib/python3.7/dist-packages/onmt/bin/build_vocab.py", line 32, in build_vocab_main
transforms = make_transforms(opts, transforms_cls, fields)
File "/usr/local/lib/python3.7/dist-packages/onmt/transforms/transform.py", line 176, in make_transforms
transform_obj.warm_up(vocabs)
File "/usr/local/lib/python3.7/dist-packages/onmt/transforms/tokenize.py", line 110, in warm_up
load_src_model.Load(self.src_subword_model)
File "/usr/local/lib/python3.7/dist-packages/sentencepiece/__init__.py", line 367, in Load
return self.LoadFromFile(model_file)
File "/usr/local/lib/python3.7/dist-packages/sentencepiece/__init__.py", line 171, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
TypeError: not a string
Below is my en-sp.yaml config. I'm not sure where the "not a string" error is coming from.
## Where the samples will be written
save_data: en-sp/run/example
## Where the vocab(s) will be written
src_vocab: en-sp/run/example.vocab.src
tgt_vocab: en-sp/run/example.vocab.tgt
## Where the model will be saved
save_model: drive/MyDrive/Europarl/model/model
# Prevent overwriting existing files in the folder
overwrite: False
# Corpus opts:
data:
    europarl:
        path_src: train_europarl-v7.es-en.es
        path_tgt: train_europarl-v7.es-en.en
        transforms: [sentencepiece, filtertoolong]
        weight: 1
    valid:
        path_src: dev_europarl-v7.es-en.es
        path_tgt: dev_europarl-v7.es-en.en
        transforms: [sentencepiece]
skip_empty_level: silent
world_size: 1
gpu_ranks: [0]
...
EDIT: I Googled the issue some more and found a Google Colab project that builds sentencepiece with cmake: https://colab.research.google.com/github/mymusise/gpt2-quickly/blob/main/examples/gpt2_quickly.ipynb#scrollTo=dDAup5dxDXZW. However, even after building with cmake, I'm still getting this error.
To fix this issue, I had to filter and tokenize my dataset and then train a SentencePiece model. I used the scripts from this helpful source: https://github.com/ymoslem/MT-Preparation to do everything, and now my model is training!
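For anyone hitting the same TypeError: the traceback shows the sentencepiece transform loading self.src_subword_model, and "not a string" appears when that option is missing from the YAML. A rough sketch of what the subword-training step boils down to (a recent sentencepiece release is assumed; the file names, vocab size, and the tgt_subword_model option name are illustrative):
import sentencepiece as spm

# Train source- and target-side subword models on the corpus files from the config.
spm.SentencePieceTrainer.train(
    input="train_europarl-v7.es-en.es",  # source side
    model_prefix="source",               # writes source.model and source.vocab
    vocab_size=32000)                    # placeholder size
spm.SentencePieceTrainer.train(
    input="train_europarl-v7.es-en.en",  # target side
    model_prefix="target",
    vocab_size=32000)

# Then point the transform at the trained models in en-sp.yaml:
#   src_subword_model: source.model
#   tgt_subword_model: target.model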
I used the following code in Pycharm:
import tensorflow as tf
sess = tf.Session()
a = tf.constant(value=5, name='input_a')
b = tf.constant(value=3, name='input_b')
c = tf.multiply(a,b, name='mult_c')
d = tf.add(a,b, name='add_d')
e = tf.add(c,d, name='add_e')
print(sess.run(e))
writer = tf.summary.FileWriter("./tb_graph", sess.graph)
Then, I pasted the following line into the Anaconda Prompt:
tensorboard --logdir=="tb_graph"
I tried it both with "" and '' as proposed in Tensorboard: No graph definition files were found., and it does nothing for me.
I had a similar issue. It occurred when I specified the 'logdir' folder inside single quotes instead of double quotes. Hope this is helpful to you.
E.g.: tensorboard --logdir='my_graph' -> TensorBoard didn't detect the graph
tensorboard --logdir="my_graph" -> TensorBoard detected the graph
I checked the code on a laptop with Ubuntu 16.04 and on another one with Win10, so it probably isn't a system-specific error.
I also tried adding and removing --host=127.0.0.1 in the Anaconda Prompt and checking both http://localhost:6006/ and http://desktop-.......:6006/ several times.
Still the same error:
No graph definition files were found.
To store a graph, create a tf.summary.FileWriter and pass the graph either via the constructor, or by calling its add_graph() method. You may want to check out the graph visualizer tutorial.
....
Please tell me what is wrong in the code/prompt command.
EDIT: On Ubuntu I used the normal terminal, of course.
EDIT2: I used both = and == in the command prompt
The answer to my question is:
1) change "./new1_dir" into ".\\new1_dir"
and
2) pass the full path to the log directory to the Anaconda Prompt: --logdir="C:\Users\Admin\Documents\PycharmProjects\try_tb\new1_dir"
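For reference, a minimal end-to-end sketch (TF 1.x is assumed, matching the tf.Session usage in the question; the paths are illustrative):
import os
import tensorflow as tf

logdir = os.path.abspath("tb_graph")  # an absolute path keeps TensorBoard from missing the directory
with tf.Session() as sess:
    a = tf.constant(5, name='input_a')
    b = tf.constant(3, name='input_b')
    e = tf.add(tf.multiply(a, b, name='mult_c'), tf.add(a, b, name='add_d'), name='add_e')
    print(sess.run(e))
    writer = tf.summary.FileWriter(logdir, sess.graph)
    writer.close()
print('tensorboard --logdir="{}"'.format(logdir))  # paste this command (with double quotes) into the Anaconda Prompt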
Thanks #BugKiller for your help!
EDIT: Working only on Windows for me, but still better than nothing
EDIT2: Works on Ubuntu 16.04 too
I am trying to download data from Fashion MNIST, but it produces an error. Originally it was downloading and working properly, but I had to terminate it because I had to turn off my computer. Once I opened the file up again, it gives me an error. I'm not sure what the problem is, but is it because I already downloaded some parts of the data once and Keras doesn't recognize that? I am using a Jupyter notebook in a conda environment.
Here is the link to the image:
https://i.stack.imgur.com/wLGDm.png
You have missed adding tf. to the line
fashion_mnist = keras.datasets.fashion_mnist
The code below works perfectly for me. Importing the fashion_mnist dataset is outlined in the TensorFlow documentation here.
Change your code to:
import tensorflow as tf
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
Or use the shorter form below, which avoids creating the extra fashion_mnist variable:
import tensorflow as tf
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()
I am using tensorflow 1.9.0, keras 2.2.2 and python 3.6.6 on Windows 10 x64 OS.
I know my PC well: I can't download anything larger than 2.7 MB in the terminal, due to WinError 8.
So I manually downloaded all the files from storage.googleapis.com (since some of them are 25 MB).
Check the files:
Then I pasted them all into \datasets\fashion-mnist.
The next time you run your code, it should be fixed.
Note: if you have VS Code, just Ctrl+click the link and you can download it easily.
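If you go the manual route, here is a hedged sketch of fetching the four Fashion-MNIST archives straight into the Keras cache (these are the file names load_data looks for; the target directory assumes the default cache location):
import os
import urllib.request

base = "https://storage.googleapis.com/tensorflow/tf-keras-datasets/"
files = ["train-labels-idx1-ubyte.gz", "train-images-idx3-ubyte.gz",
         "t10k-labels-idx1-ubyte.gz", "t10k-images-idx3-ubyte.gz"]
target = os.path.join(os.path.expanduser("~"), ".keras", "datasets", "fashion-mnist")
os.makedirs(target, exist_ok=True)
for name in files:
    urllib.request.urlretrieve(base + name, os.path.join(target, name))  # download each archive into the cache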
I had an error regarding the cURL connection, and by looking into the error message I was able to track down the file where the URL was declared. In my case it was:
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/datasets/fashion_mnist.py
At line 44 I have commented out the line:
# base = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/'
And declared a different base URL, which I had found looking into the documentation of the original dataset:
base = 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/'
The download started immediately and gave no errors. Hope this helps.
This is because for some reason you have an incomplete download for the MNIST dataset.
You will have to manually delete the downloaded data, which usually resides in ~/.keras/datasets or in any path you specified relative to it, in your case MNIST_data.
Go to: C:\Users\Username\.keras\datasets
and delete the dataset that you want to re-download or that has the error.
You should be good to go!
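In code, that cleanup is roughly (a sketch; the path assumes the default Keras cache location):
import os
import shutil

broken = os.path.join(os.path.expanduser("~"), ".keras", "datasets", "fashion-mnist")
if os.path.isdir(broken):
    shutil.rmtree(broken)  # remove the incomplete download so load_data() fetches it again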
You can also add a print of the path from which the dataset is loaded.
For example, print(paths) in fashion_mnist.py:
with gzip.open(paths[3], 'rb') as imgpath:
    print(paths)  # debug print in fashion_mnist.py
    x_test = np.frombuffer(
        imgpath.read(), np.uint8, offset=16).reshape(len(y_test), 28, 28)
Then remove the files from that path, and fresh data will start to download.
Change the base address to 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/' as described previously. It works for me.
I was getting an error while downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz:
Traceback (most recent call last):
File "C:\Users\AsadA\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\lib\npyio.py", line 448, in load
return pickle.load(fid, **pickle_kwargs)
EOFError: Ran out of input
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\AsadA\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\lib\npyio.py", line 450, in load
raise IOError(
OSError: Failed to interpret file 'C:\\Users\\AsadA\\.keras\\datasets\\mnist.npz' as a pickle
Go to the folder C:\Users\AsadA\AppData\Local\Programs\Python\Python38\Lib\site-packages\tensorflow\python\keras\datasets (in my case) and follow the instructions:
I've always been using linear kernels in libsvm with the following command:
python grid.py -log2c -1,10,1 -log2g -1,1,1 -t 0 data
But now I understand that the linear kernel in libsvm is different from liblinear. The example given on the liblinear official site gives me "ValueError: could not convert string to float: null":
python grid.py -log2c -3,0,1 -log2g null -svmtrain ./train heart_scale
The other example in the liblinear documentation doesn't work either, failing with "Unknown option: -g" and a TypeError at line 219 (if rate is None: raise "get no rate").
./grid.py -log2c -14,14,1 -log2g 1,1,1 -svmtrain ./train news20.scale
I'm wondering what the right way is to use liblinear's train with grid.py.
Get the latest version of grid.py from the libsvm website.
I happen to be modifying my copy of grid.py at the moment, and I can see that it explicitly handles -log2g null.
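With an up-to-date grid.py, the liblinear-style call from the question should then work, for example (assuming the liblinear train binary sits in the current directory):
python grid.py -log2c -3,0,1 -log2g null -svmtrain ./train heart_scale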