Keras: Error when downloading Fashion_MNIST Data - tensorflow

I am trying to download the Fashion MNIST data, but it produces an error. Originally it was downloading and working properly, but I had to terminate it because I had to turn off my computer. Once I opened the file up again, it gave me an error. I'm not sure what the problem is. Is it because I already downloaded part of the data once and Keras doesn't recognize that? I am using a Jupyter notebook in a conda environment.
Here is the link to the image:
https://i.stack.imgur.com/wLGDm.png

You have missed adding tf. to the line
fashion_mnist = keras.datasets.fashion_mnist
The code below works for me. Importing the fashion_mnist dataset is outlined in the TensorFlow documentation here.
Change your code to:
import tensorflow as tf
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
Or use the shorter form below, which avoids creating the extra variable fashion_mnist:
import tensorflow as tf
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()
I am using tensorflow 1.9.0, keras 2.2.2 and python 3.6.6 on Windows 10 x64 OS.

I know my PC well: I can't download anything larger than 2.7 MB in the terminal, due to WinError 8.
So I manually downloaded all the packs from storage.googleapis.com (since some packs are 25 MB).
The packs are the four archives: train-images-idx3-ubyte.gz, train-labels-idx1-ubyte.gz, t10k-images-idx3-ubyte.gz, and t10k-labels-idx1-ubyte.gz.
Then I pasted all the packs into the \datasets\fashion-mnist folder under your .keras directory.
The next time you run your code, it should be fixed.
Note: if you have VS Code, just Ctrl+click the link and you can download each file easily.
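If you'd rather script the manual download, here is a minimal sketch; it assumes the standard four archive names and the default Keras cache location (~/.keras/datasets/fashion-mnist), which is where keras.datasets.fashion_mnist.load_data() looks:
import os
import urllib.request

base = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/'
files = ['train-labels-idx1-ubyte.gz', 'train-images-idx3-ubyte.gz',
         't10k-labels-idx1-ubyte.gz', 't10k-images-idx3-ubyte.gz']
target = os.path.join(os.path.expanduser('~'), '.keras', 'datasets', 'fashion-mnist')
os.makedirs(target, exist_ok=True)
for fname in files:
    dest = os.path.join(target, fname)
    if not os.path.exists(dest):  # skip archives you already copied in
        urllib.request.urlretrieve(base + fname, dest)
        print('downloaded', fname)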

I had an error regarding the cURL connection, and by looking into the error message I was able to track down the file where the URL is declared. In my case it was:
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/datasets/fashion_mnist.py
At line 44 I commented out the line:
# base = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/'
and declared a different base URL, which I found in the documentation of the original dataset:
base = 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/'
The download started immediately and gave no errors. Hope this helps.
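If you'd rather not edit files inside site-packages, an alternative (a sketch, assuming the standard archive names and the datasets/fashion-mnist cache subdirectory that load_data() uses) is to pre-fill the Keras cache from the mirror with tf.keras.utils.get_file; load_data() then finds the cached archives and skips the download:
import os
import tensorflow as tf

base = 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/'
files = ['train-labels-idx1-ubyte.gz', 'train-images-idx3-ubyte.gz',
         't10k-labels-idx1-ubyte.gz', 't10k-images-idx3-ubyte.gz']
for fname in files:
    # caches under ~/.keras/datasets/fashion-mnist, where load_data() looks
    tf.keras.utils.get_file(fname, origin=base + fname,
                            cache_subdir=os.path.join('datasets', 'fashion-mnist'))

(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.fashion_mnist.load_data()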

This happens because you have an incomplete download of the MNIST dataset.
You will have to manually delete the downloaded folder, which usually resides in ~/.keras/datasets (or any path you specified relative to it; in your case MNIST_data).
Go to C:\Users\Username\.keras\datasets
and delete the dataset that you want to re-download or that produced the error.
You should be good to go!
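A sketch for clearing the cache programmatically, assuming the default location (adjust the folder name to whichever dataset is giving you the error):
import os
import shutil

cache = os.path.join(os.path.expanduser('~'), '.keras', 'datasets')
# remove the incomplete Fashion-MNIST download; load_data() will fetch it again
shutil.rmtree(os.path.join(cache, 'fashion-mnist'), ignore_errors=True)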

You can also add a debug print of the path the dataset files are read from, for example in fashion_mnist.py:
print(paths)  # debug print of the cached file locations
with gzip.open(paths[3], 'rb') as imgpath:
    x_test = np.frombuffer(
        imgpath.read(), np.uint8, offset=16).reshape(len(y_test), 28, 28)
Then remove the files from that path, and a fresh download will start.
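If you'd rather not edit library code, the default cache location can also be computed directly (a sketch, assuming the standard Keras cache layout):
import os
print(os.path.join(os.path.expanduser('~'), '.keras', 'datasets', 'fashion-mnist'))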

Change the base URL to 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/' as described previously. It worked for me.
I was getting this error while downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz:
Traceback (most recent call last):
  File "C:\Users\AsadA\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\lib\npyio.py", line 448, in load
    return pickle.load(fid, **pickle_kwargs)
EOFError: Ran out of input

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\AsadA\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\lib\npyio.py", line 450, in load
    raise IOError(
OSError: Failed to interpret file 'C:\\Users\\AsadA\\.keras\\datasets\\mnist.npz' as a pickle
Go to the file C:\Users\AsadA\AppData\Local\Programs\Python\Python38\Lib\site-packages\tensorflow\python\keras\datasets (in my case) and apply the same base-URL change there.
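Before editing library files, it is also worth checking whether the cached mnist.npz is simply truncated; here is a sketch, assuming the ~/.keras/datasets path from the traceback:
import os
import numpy as np

path = os.path.expanduser('~/.keras/datasets/mnist.npz')
try:
    with np.load(path) as data:
        print(list(data.keys()))  # a healthy file has x_train, y_train, x_test, y_test
except Exception as exc:
    print('cache is corrupt, deleting:', exc)
    os.remove(path)  # the next load_data() call downloads a fresh copy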

Related

how to import a tensorflow webmodel in python

I have a TensorFlow "graph-model" consisting of a model.json and several .bin files. In JavaScript I am able to load those files using
const weights = browser.runtime.getURL("web_model/model.json");
tf.loadGraphModel(weights)
However, I would like to use this model in Python, in order to process the results better.
When I try to load the model in Python with
new_model = keras.models.load_model('./web_model/model.json')
I get the following error:
File "h5py/h5f.pyx", line 106, in h5py.h5f.open
OSError: Unable to open file (file signature not found)
I don't understand: since the JavaScript code is able to run the model, Python should be able to do the same. What am I doing wrong?

KeyError: 'NormalizeUTF8' loading a model to another jupyter notebook

I'm trying to reload a model in another Jupyter notebook using this code:
import tensorflow as tf
reloaded = tf.saved_model.load('m_translator')
result = reloaded.tf_translate(input_text)
and I recently got this error:
KeyError Traceback (most recent call last)
File ~\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py:4177, in Graph._get_op_def(self, type)
   4176 try:
-> 4177     return self._op_def_cache[type]
   4178 except KeyError:
KeyError: 'NormalizeUTF8'
FileNotFoundError: Op type not registered 'NormalizeUTF8' in binary running on LAPTOP-D3PPA576. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
You may be trying to load on a different device from the computational device. Consider setting the `experimental_io_device` option in `tf.saved_model.LoadOptions` to the io_device such as '/job:localhost'.
I had this issue before. To solve it, you need to install tensorflow_text matching your TensorFlow version. For example:
>>> tf.__version__
'2.8.2'
>>> !pip install tensorflow-text==2.8.2
In addition to installing the tensorflow_text library, what helped me with a similar problem was importing the library at the top of the notebook:
import tensorflow_text
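Putting both parts together, the import must happen before tf.saved_model.load so the custom text ops (including NormalizeUTF8) get registered; a sketch based on the question's code:
import tensorflow as tf
import tensorflow_text  # registers NormalizeUTF8 and the other text ops

reloaded = tf.saved_model.load('m_translator')
result = reloaded.tf_translate(input_text)  # input_text as defined in your notebook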

Problem converting tensorflow saved_model to tensorflowjs

I want to convert a trained Python model (.pb) to a TensorFlow.js model. To accomplish this, I first saved the model with the estimator.export_savedmodel function, then ran the tensorflowjs_converter command on Google Colab. However, no TensorFlow.js file is created; the conversion also gives a lot of warnings and ends with an error.
Here is the full code, and please run to see the full output:
https://colab.research.google.com/drive/19k2s8eHpQY9Trps9dyaxPp0HqHWp5qpb
What is the reason for the problem and how can I fix it?
Part of the output:
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
Traceback (most recent call last):
  File "/usr/local/bin/tensorflowjs_converter", line 8, in <module>
    sys.exit(pip_main())
  File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/converter.py", line 638, in pip_main
    main([' '.join(sys.argv[1:])])
  File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/converter.py", line 642, in main
    convert(argv[0].split(' '))
  File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/converter.py", line 591, in convert
    strip_debug_ops=args.strip_debug_ops)
  File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 435, in convert_tf_saved_model
    strip_debug_ops=strip_debug_ops)
  File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 141, in optimize_graph
    ', '.join(unsupported))
ValueError: Unsupported Ops in the model before optimization
ParallelDynamicStitch, StringSplit, Unique, RegexReplace, DynamicPartition, StringToHashBucketFast, ParseExample, LookupTableFindV2, LookupTableSizeV2, SparseFillEmptyRows, StringJoin, AsString, SparseSegmentSqrtN, HashTableV2
Edit:
Seems like it isn't supported:
https://github.com/tensorflow/tfjs/issues/2322
This is because your model has ops that are not yet supported by TensorFlow.js. It also seems the output you pasted is missing an op name; please feel free to update the output with the missing op name, or file a feature request in the TensorFlow.js repo with more details.

Accessing already downloaded dataset with tensorflow_datasets API

I am trying to work with the recently published tensorflow_datasets API to train a Keras model on the Open Images dataset. The dataset is about 570 GB in size. I downloaded the data with the following code:
import tensorflow_datasets as tfds
import tensorflow as tf
open_images_dataset = tfds.image.OpenImagesV4()
open_images_dataset.download_and_prepare(download_dir="/notebooks/dataset/")
After the download was complete, the connection to my Jupyter notebook was somehow interrupted, but the extraction seemed to have finished as well; at least all downloaded files had a counterpart in the "extracted" folder. However, I am not able to access the downloaded data now:
tfds.load(name="open_images_v4", data_dir="/notebooks/open_images_dataset/extracted/", download=False)
This only gives the following error:
AssertionError: Dataset open_images_v4: could not find data in /notebooks/open_images_dataset/extracted/. Please make sure to call dataset_builder.download_and_prepare(), or pass download=True to tfds.load() before trying to access the tf.data.Dataset object.
When I call the function download_and_prepare() it only downloads the whole dataset again.
Am I missing something here?
Edit:
After the download the folder under "extracted" has 18 .tar.gz files.
This is with tensorflow-datasets 1.0.1 and tensorflow 2.0.
The folder hierarchy should be like this:
/notebooks/open_images_dataset/extracted/open_images_v4/0.1.0
All the datasets have a version, so the data can then be loaded like this:
ds = tfds.load('open_images_v4', data_dir='/notebooks/open_images_dataset/extracted', download=False)
I didn't have open_images_v4 data. I put cifar10 data into a folder named open_images_v4 to check what folder structure tensorflow_datasets was expecting.
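As a quick sanity check before calling tfds.load, you can verify that the version folder exists (a sketch, using the paths from the question):
import os
import tensorflow_datasets as tfds

data_dir = '/notebooks/open_images_dataset/extracted'
# expect a version directory such as 0.1.0 inside open_images_v4
print(os.listdir(os.path.join(data_dir, 'open_images_v4')))
ds = tfds.load('open_images_v4', data_dir=data_dir, download=False)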
The solution to this was to also use the "data_dir" parameter when initializing the dataset:
builder = tfds.image.OpenImagesV4(data_dir="/raid/openimages/dataset")
builder.download_and_prepare(download_dir="/raid/openimages/dataset")
This way the dataset is downloaded and extracted in the same directory. Before, it was (for me unnoticeably) extracting to the default directory under /home/.../. That's what caused the error, as there wasn't enough space left under my home directory.
After the extraction, the folder structure is exactly as Manoj-Mohan described.
The above solution didn't work for me, but this did:
builder = tfds.builder(name='folder_name', data_dir=data_dir)
builder.download_and_prepare(download_dir="/home/...")
ds = builder.as_dataset()

Downloading Fashion MNIST file in TensorFlow tutorial is taking forever

I am trying to do this tutorial for a machine learning class I am taking in college.
www.tensorflow.org/tutorials/keras/basic_classification
When it executes the lines
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
it is taking forever to download the data. At this rate, it is going to take days or weeks to download all of it. I am using a MacBook. My classmate is also using a MacBook, and when he downloads the data it only takes a few seconds. Please help.
In my case the download was giving an error. By digging into the error message I was able to find the file in which the base URL is declared, which in my case was:
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/datasets/fashion_mnist.py
At line 44 I commented out the line:
# base = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/'
and declared a different base URL, which I found in the documentation of the original dataset:
base = 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/'
The download started immediately and gave no errors. Hope this helps.