Problem connecting Google Colab to Google Cloud TPUs - tensorflow

I have this code, which is based on the T5 notebook (https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/master/notebooks/t5-trivia.ipynb):
FINETUNE_STEPS = 3000  #@param {type: "integer"}

model.finetune(
    mixture_or_task_name="text_diacritization_short",
    pretrained_model_dir=PRETRAINED_DIR,
    finetune_steps=FINETUNE_STEPS
)
My code was working fine on August 8th; then something happened, resulting in the error below. These two lines appeared when my model worked too, so I don't think they are the problem:
INFO:root:system_path_file_exists:gs://my_bucket/my_file/models/small/operative_config.gin
ERROR:root:Path not found: gs://my_bucket/my_file/models/small/operative_config.gin
The rest of the error:
From /usr/local/lib/python3.7/dist-packages/tensorflow/python/training/training_util.py:399: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
WARNING:absl:Using an uncached FunctionDataset for training is not recommended since it often results in insufficient shuffling on restarts, resulting in overfitting. It is highly recommended that you cache this task before training with it or use a data source that supports lower-level shuffling (e.g., FileDataSource).
SimdMeshImpl ignoring devices ['', '', '', '', '', '', '', '']
Using default tf glorot_uniform_initializer for variable encoder/block_000/layer_000/SelfAttention/relative_attention_bias The initialzer will guess the input and output dimensions based on dimension order.
Using default tf glorot_uniform_initializer for variable decoder/block_000/layer_000/SelfAttention/relative_attention_bias The initialzer will guess the input and output dimensions based on dimension order.
From /usr/local/lib/python3.7/dist-packages/tensorflow/python/training/saver.py:1161: get_checkpoint_mtimes (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file utilities to get mtimes.
From /usr/local/lib/python3.7/dist-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py:758: Variable.load (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Prefer Variable.assign which has equivalent behavior in 2.X.
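For what it's worth, one way to check whether that gin path is actually reachable from the Colab runtime is tf.io.gfile (a minimal diagnostic sketch; the bucket and file names are the placeholders from the log above):

import tensorflow as tf

# Path copied from the log above (placeholder bucket/file names).
GIN_PATH = "gs://my_bucket/my_file/models/small/operative_config.gin"

# tf.io.gfile understands gs:// once GCS credentials are configured, so this
# separates a genuinely missing file from broken auth or TPU connectivity.
print(tf.io.gfile.exists(GIN_PATH))
print(tf.io.gfile.listdir("gs://my_bucket/my_file/models/small/"))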
I changed the Google Cloud account and the Colab notebook to a completely new Gmail account; I think the problem is that something got updated in Google Colab regarding connecting to Google Cloud TPUs.
Also, I can connect to my bucket normally using this code:
BASE_DIR = "gs://my_bucket/my_file"  #@param { type: "string" }
if not BASE_DIR or BASE_DIR == "gs://":
    raise ValueError("You must enter a BASE_DIR.")
DATA_DIR = os.path.join(BASE_DIR, "data")
FINETUNE_MODELS_DIR = os.path.join(BASE_DIR, "models")
ON_CLOUD = True

if ON_CLOUD:
    print("Setting up GCS access...")
    import tensorflow_gcs_config
    from google.colab import auth
    # Set credentials for GCS reading/writing from Colab and TPU.
    TPU_TOPOLOGY = "v2-8"
    try:
        tpu = tf.distribute.cluster_resolver.TPUClusterResolver()  # TPU detection
        TPU_ADDRESS = tpu.get_master()
        print('Running on TPU:', TPU_ADDRESS)
    except ValueError:
        raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')
    auth.authenticate_user()
    tf.enable_eager_execution()
    tf.config.experimental_connect_to_host(TPU_ADDRESS)
    tensorflow_gcs_config.configure_gcs_from_colab_auth()

tf.disable_v2_behavior()

# Improve logging.
from contextlib import contextmanager
import logging as py_logging

if ON_CLOUD:
    tf.get_logger().propagate = False
    py_logging.root.setLevel('INFO')

@contextmanager
def tf_verbosity_level(level):
    og_level = tf.logging.get_verbosity()
    tf.logging.set_verbosity(level)
    yield
    tf.logging.set_verbosity(og_level)
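For reference, the context manager above can be used to temporarily silence verbose logs around a call (a usage sketch reusing the finetuning call from earlier):

# Temporarily raise the log threshold to WARN while finetuning runs.
with tf_verbosity_level(tf.logging.WARN):
    model.finetune(
        mixture_or_task_name="text_diacritization_short",
        pretrained_model_dir=PRETRAINED_DIR,
        finetune_steps=FINETUNE_STEPS
    )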
It would be great if someone could help me; I have been looking into this issue for a week and found nothing. Are there any changes to how Google Colab works that I am not aware of?
Thanks in advance.

Related

Running GluonCV object detection model on Android

I need to run a custom GluonCV object detection module on Android.
I already fine-tuned the model (ssd_512_mobilenet1.0_custom) on a custom dataset, and I tried running inference with it (loading the .params file produced during the training); everything works perfectly on my computer. Now, I need to export this to Android.
I was referring to this answer to figure out the procedure; there are three suggested options:
You can use ONNX to convert models to other runtimes, for example [...] NNAPI for Android
You can use TVM
You can use SageMaker Neo + DLR runtime [...]
Regarding the first one, I converted my model to ONNX.
However, in order to use it with NNAPI, it is necessary to convert it to daq. In the repository, they provide a precompiled AppImage of onnx2daq to make the conversion, but the script returns an error. I checked the issues section, and they report that "It actually fails for all onnx object detection models".
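For context, the MXNet-to-ONNX step typically looks something like this (a sketch assuming the block was already hybridized and exported to symbol/params files; the file names here are hypothetical placeholders):

import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

# Shape matches the 512x512 input that ssd_512_mobilenet1.0 expects.
onnx_mxnet.export_model(
    'ssd_512_mobilenet1.0_custom-symbol.json',   # hypothetical exported symbol file
    'ssd_512_mobilenet1.0_custom-0000.params',   # hypothetical exported params file
    [(1, 3, 512, 512)],
    np.float32,
    'ssd_512_mobilenet1.0_custom.onnx',
)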
Then, I gave a try to DLR, since it's suggested to be the easiest way.
As I understand it, in order to use my custom model with DLR, I would first need to compile it with TVM (which also covers the second point mentioned in the linked post). In the repo, they provide a Docker image with some conversion scripts for different frameworks.
I modified the 'compile_gluoncv.py' script, and now I have:
#!/usr/bin/env python3
from tvm import relay
import mxnet as mx
from mxnet.gluon.model_zoo.vision import get_model
from tvm_compiler_utils import tvm_compile

shape_dict = {'data': (1, 3, 300, 300)}
dtype = 'float32'
ctx = [mx.cpu(0)]

classes_custom = ["CML_mug"]
block = get_model('ssd_512_mobilenet1.0_custom', classes=classes_custom, pretrained_base=False, ctx=ctx)
block.load_parameters("ep_035.params", ctx=ctx)  # this is the file produced by training on the custom dataset

for arch in ["arm64-v8a", "armeabi-v7a", "x86_64", "x86"]:
    sym, params = relay.frontend.from_mxnet(block, shape=shape_dict, dtype=dtype)
    func = sym["main"]
    func = relay.Function(func.params, relay.nn.softmax(func.body), None, func.type_params, func.attrs)
    tvm_compile(func, params, arch, dlr_model_name)
However, when I run the script it returns the error:
ValueError: Model ssd_512_mobilenet1.0_custom is not supported. Available options are
alexnet
densenet121
densenet161
densenet169
densenet201
inceptionv3
mobilenet0.25
mobilenet0.5
mobilenet0.75
mobilenet1.0
mobilenetv2_0.25
mobilenetv2_0.5
mobilenetv2_0.75
mobilenetv2_1.0
resnet101_v1
resnet101_v2
resnet152_v1
resnet152_v2
resnet18_v1
resnet18_v2
resnet34_v1
resnet34_v2
resnet50_v1
resnet50_v2
squeezenet1.0
squeezenet1.1
vgg11
vgg11_bn
vgg13
vgg13_bn
vgg16
vgg16_bn
vgg19
vgg19_bn
Am I doing something wrong? Is this thing even possible?
As a side note, after this I'd need to deploy on Android a pose detection model (simple_pose_resnet18_v1b) and an activity recognition one (i3d_nl10_resnet101_v1_kinetics400) as well.
You can actually run a GluonCV model directly on Android with the Deep Java Library (DJL).
What you need to do is:
Hybridize your GluonCV model and save it as an MXNet model (see the sketch after this list)
Build the MXNet engine for Android; MXNet already supports Android builds
Include the MXNet shared library in your Android project
Use DJL in your Android project; you can follow this DJL Android demo for PyTorch
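A minimal sketch of the first step (hybridize and export), reusing the model name and params file from the question; this is an assumption about the workflow, not code from the answer:

import mxnet as mx
import gluoncv

block = gluoncv.model_zoo.get_model('ssd_512_mobilenet1.0_custom',
                                    classes=['CML_mug'], pretrained_base=False)
block.load_parameters('ep_035.params')
block.hybridize()
block(mx.nd.zeros((1, 3, 512, 512)))         # one forward pass to trace the graph
block.export('ssd_512_mobilenet1.0_custom')  # writes -symbol.json and -0000.params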
The error message is self-explanatory - there is no model "ssd_512_mobilenet1.0_custom" supported by mxnet.gluon.model_zoo.vision.get_model. You are confusing GluonCV's get_model with MXNet Gluon's get_model.
Replace
block = get_model('ssd_512_mobilenet1.0_custom',
                  classes=classes_custom, pretrained_base=False, ctx=ctx)
with
import gluoncv
block = gluoncv.model_zoo.get_model('ssd_512_mobilenet1.0_custom',
                                    classes=classes_custom, pretrained_base=False, ctx=ctx)

If you use Tensorflow Dataset do you have to upload your data?

I have been looking at TensorFlow Datasets (TFDS) and it seems really useful. The only issue I can see is that if you want to use it, you have to upload your dataset publicly to TFDS. Is that correct?
Is there any way of using TFDS on a private server, only for internal use?
You can easily create new local datasets, as described in the documentation. Here's an excerpt:
class MyDataset(tfds.core.GeneratorBasedBuilder):
    """DatasetBuilder for my_dataset dataset."""

    VERSION = tfds.core.Version('1.0.0')
    RELEASE_NOTES = {
        '1.0.0': 'Initial release.',
    }

    def _info(self) -> tfds.core.DatasetInfo:
        """Dataset metadata (homepage, citation,...)."""
        return tfds.core.DatasetInfo(
            builder=self,
            features=tfds.features.FeaturesDict({
                'image': tfds.features.Image(shape=(256, 256, 3)),
                'label': tfds.features.ClassLabel(names=['no', 'yes']),
            }),
        )

    def _split_generators(self, dl_manager: tfds.download.DownloadManager):
        """Download the data and define splits."""
        extracted_path = dl_manager.download_and_extract('http://data.org/data.zip')
        # dl_manager returns pathlib-like objects with `path.read_text()`,
        # `path.iterdir()`,...
        return {
            'train': self._generate_examples(path=extracted_path / 'train_images'),
            'test': self._generate_examples(path=extracted_path / 'test_images'),
        }

    def _generate_examples(self, path) -> Iterator[Tuple[Key, Example]]:
        """Generator of examples for each split."""
        for img_path in path.glob('*.jpeg'):
            # Yields (key, example)
            yield img_path.name, {
                'image': img_path,
                'label': 'yes' if img_path.name.startswith('yes_') else 'no',
            }
If you want your dataset to appear in TFDS, you have to put the data in a GCS bucket or on your drive. If your data can only be downloaded after some registration or with a password, it can still be implemented in TFDS, but it will not be downloaded and extracted automatically by the TFDS data pipelines; users have to follow the manual download instructions, as described here.
There is also a public TFDS GCS bucket to which you can upload your data, but first you have to contact the TFDS team (they have to check whether it is appropriate to host your data on the TFDS GCS). Hosting data on the TFDS GCS helps a lot: it avoids issues with unreliable servers, and users can load your data directly with the tfds.load API.
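Once a builder like MyDataset above is importable, loading it from a private location is just a matter of pointing data_dir somewhere internal (a sketch; the module name and path are hypothetical):

import tensorflow_datasets as tfds

# Importing the module that defines MyDataset registers the builder with tfds.
import my_dataset  # hypothetical module containing the class above

ds = tfds.load('my_dataset', data_dir='/internal/storage/tfds')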

fastai ulmfit model trained on a cuda machine, used on a cpu

I have an export.pkl model that was trained on a CUDA machine. I want to use it on a MacBook:
from fastai.text import load_learner
from utils import get_corpus
learner = load_learner('./models')
corpus = get_corpus()
res = [ str(learner.predict(c)[0]) for c in corpus ]
I get the following error:
...
File "/Users/gautiergilabert/Envs/cc/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 146, in forward
"them on device: {}".format(self.src_device_obj, t.device))
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu
I have two questions:
I found the raise in my export.pkl:
for t in chain(self.module.parameters(), self.module.buffers()):
    if t.device != self.src_device_obj:
        raise RuntimeError("module must have its parameters and buffers "
                           "on device {} (device_ids[0]) but found one of "
                           "them on device: {}".format(self.src_device_obj, t.device))
The docstring says the module is the "module to be parallelized". I don't really understand what that refers to. My MacBook?
Apart from my MacBook, I would like to run the model on a CPU.
Is there a way to make this export.pkl model work on a CPU?
Is there a way to make another export.pkl on CUDA and make it available on a CPU?
Thanks
One way is to load the learner by specifying the model with an empty dataset and loading the model weights afterwards. For a ResNet image classifier, something like this should work:
from fastai.vision import *
# path where the model is saved under path/models/model-name
path = "model_path"
tfms = get_transforms()
data = ImageDataBunch.single_from_classes(".", classes=["class1", "class2"], ds_tfms=tfms)
learner = cnn_learner(data, models.resnet34, metrics=accuracy)
# loads model from model_path/models/model_name.pth
learner.load("model_name")
image = open_image("test.jpg")
pred_class, pred_idx, outputs = learner.predict(image)
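As a side note on the original error: the traceback comes from torch.nn.DataParallel, so another possibility (an untested sketch, assuming the exported model is wrapped in DataParallel) is to strip that wrapper on the CUDA machine and re-export, so the resulting export.pkl no longer insists on cuda:0:

# Run on the CUDA machine, before exporting.
# .module exists only when the model is wrapped in nn.DataParallel.
learner.model = learner.model.module.cpu()
learner.export()  # the new export.pkl should then load on a CPU-only machine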

Accessing already downloaded dataset with tensorflow_datasets API

I am trying to work with the quite recently published tensorflow_datasets API to train a Keras model on the Open Images dataset. The dataset is about 570 GB in size. I downloaded the data with the following code:
import tensorflow_datasets as tfds
import tensorflow as tf
open_images_dataset = tfds.image.OpenImagesV4()
open_images_dataset.download_and_prepare(download_dir="/notebooks/dataset/")
After the download was complete, the connection to my Jupyter notebook was somehow interrupted, but the extraction seemed to have finished as well; at least, all downloaded files had a counterpart in the "extracted" folder. However, I am not able to access the downloaded data now:
tfds.load(name="open_images_v4", data_dir="/notebooks/open_images_dataset/extracted/", download=False)
This only gives the following error:
AssertionError: Dataset open_images_v4: could not find data in /notebooks/open_images_dataset/extracted/. Please make sure to call dataset_builder.download_and_prepare(), or pass download=True to tfds.load() before trying to access the tf.data.Dataset object.
When I call the function download_and_prepare() it only downloads the whole dataset again.
Am I missing something here?
Edit:
After the download the folder under "extracted" has 18 .tar.gz files.
This is with tensorflow-datasets 1.0.1 and tensorflow 2.0.
The folder hierarchy should be like this:
/notebooks/open_images_dataset/extracted/open_images_v4/0.1.0
All the datasets have a version. Then the data can be loaded like this:
ds = tfds.load('open_images_v4', data_dir='/notebooks/open_images_dataset/extracted', download=False)
I didn't have open_images_v4 data. I put cifar10 data into a folder named open_images_v4 to check what folder structure tensorflow_datasets was expecting.
The solution to this was to also use the "data_dir" parameter when initializing the dataset:
builder = tfds.image.OpenImagesV4(data_dir="/raid/openimages/dataset")
builder.download_and_prepare(download_dir="/raid/openimages/dataset")
This way the dataset is downloaded and extracted in the same directory. Before, it was (for me unnoticeably) extracting to the default directory, which is under /home/.../. That's what caused the error, as there wasn't enough space left under my home directory.
After the extraction, the folder structure is exactly as Manoj-Mohan described.
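After download_and_prepare finishes with that layout, loading should then work from the same directory (a sketch consistent with the answer above):

import tensorflow_datasets as tfds

ds = tfds.load('open_images_v4',
               data_dir='/raid/openimages/dataset', download=False)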
The above solution didn't work for me; this does:
builder = tfds.builder(name='folder_name', data_dir=data_dir)
builder.download_and_prepare(download_dir="/home/...")
ds = builder.as_dataset()

Google Storage (gs) wrapper file input/out for Cloud ML?

Google recently announced Cloud ML (https://cloud.google.com/ml/), and it's very useful. However, one limitation is that the input/output of a TensorFlow program should support gs://.
If we use the TensorFlow APIs to read/write files, it should be OK, since these APIs support gs://.
However, if we use native file IO APIs such as open, it does not work, because they don't understand gs://.
For example:
with open(vocab_file, 'wb') as f:
    cPickle.dump(self.words, f)
This code won't work in Google Cloud ML.
However, changing all native file IO calls to TensorFlow APIs or Google Storage Python APIs is really tedious. Is there any simple way to do this? Are there any wrappers that support Google Storage (gs://) on top of the native file IO?
As suggested in Pickled scipy sparse matrix as input data?, perhaps we can use file_io.read_file_to_string('gs://...'), but this still requires significant code modification.
Do it like this:
from tensorflow.python.lib.io import file_io

with file_io.FileIO('gs://.....', mode='w+') as f:
    cPickle.dump(self.words, f)
Or you can read a pickle file in like this:
file_stream = file_io.FileIO(train_file, mode='r')
x_train, y_train, x_test, y_test = pickle.load(file_stream)
One solution is to copy all of the data to local disk when the program starts up. You can do that using gsutil inside the Python script that gets run, something like:
vocab_file = 'vocab.pickled'
# Copy the pickled vocab from GCS to local disk.
subprocess.check_call(['gsutil', '-m', 'cp', '-r',
                       os.path.join('gs://path/to/', vocab_file), '/tmp'])

# Read the local copy.
with open(os.path.join('/tmp', vocab_file), 'rb') as f:
    self.words = cPickle.load(f)
And if you have any outputs, you can write them to local disk and gsutil rsync them. (But, be careful to handle restarts correctly, because you may be put on a different machine).
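A sketch of that output sync, mirroring the copy-in call above (the local and bucket paths are placeholders):

# Write outputs locally during training, then push them back to GCS.
subprocess.check_call(['gsutil', '-m', 'rsync', '-r',
                       '/tmp/outputs', 'gs://path/to/outputs'])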
The other solution is to monkey patch open (Note: untested):
import __builtin__
from tensorflow.python.lib.io import file_io

# NB: not all modes are compatible; should handle more carefully.
# Probably should be reported on
# https://github.com/tensorflow/tensorflow/issues/4357
def new_open(name, mode='r', buffering=-1):
    return file_io.FileIO(name, mode)

__builtin__.open = new_open
Just be sure to do that before any module actually tries to read from GCS.
apache_beam has the gcsio module, which can be used to return a standard Python file object to read/write GCS objects. You can use this object with any method that works with Python file objects. For example:
import logging
import time

from apache_beam.io import gcsio

def open_local_or_gcs(path, mode):
    """Opens the given path."""
    if path.startswith('gs://'):
        try:
            return gcsio.GcsIO().open(path, mode)
        except Exception as e:  # pylint: disable=broad-except
            # Currently we retry exactly once, to work around flaky gcs calls.
            logging.error('Retrying after exception reading gcs file: %s', e)
            time.sleep(10)
            return gcsio.GcsIO().open(path, mode)
    else:
        return open(path, mode)

with open_local_or_gcs(vocab_file, 'wb') as f:
    cPickle.dump(self.words, f)