Load dataset from Roboflow in colab - google-colaboratory

I'm trying to retrieve a Roboflow project dataset in Google Colab. It works for two of the dataset versions, but not for the latest one I have created (same project, version 5).
Does anyone know what is going wrong?
Snippet:
from roboflow import Roboflow
rf = Roboflow(api_key="keyremoved")
project = rf.workspace().project("project name")
dataset = project.version(5).download("yolov5")
loading Roboflow workspace...
loading Roboflow project...
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-22-7f073ab2bc86> in <module>()
7 rf = Roboflow(api_key="keyremoved")
8 project = rf.workspace().project("projectname")
----> 9 dataset = project.version(5).download("yolov5")
10
11
/usr/local/lib/python3.7/dist-packages/roboflow/core/version.py in download(self, model_format, location)
76 link = resp.json()['export']['link']
77 else:
---> 78 raise RuntimeError(resp.json())
79
80 def bar_progress(current, total, width=80):
RuntimeError: {'error': {'message': 'Unsupported get request. Export with ID `idremoved` does not exist or cannot be loaded due to missing permissions.', 'type': 'GraphMethodException', 'hint': 'You can find the API docs at https://docs.roboflow.com'}}

Roboflow plans impose limits on the number of images plus augmentations that you can export. Please check your account details and limits, and contact Roboflow support if you need more help.

Related

Cannot load checkpoints

I trained a model (following a TensorFlow tutorial) in Jupyter, saved it, and then successfully loaded it back after restarting the kernel. Here's the code:
# Directory where the checkpoints will be saved
checkpoint_dir = '/home/charlie-chin/william_model/training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_prefix,
    save_weights_only=True)
model.save('/home/charlie-chin/william_model')
model = keras.models.load_model('/home/charlie-chin/william_model', custom_objects={'loss':loss})
checkpoint_num = 10
model.load_weights(tf.train.Checkpoint("/home/charlie-chin/william_model/training_checkpoints/ckpt_" + str(checkpoint_num)))
Everything went well except the last two lines, which gave me this error:
ValueError: `Checkpoint` was expecting root to be a trackable object (an object derived from `Trackable`), got /home/charlie-chin/william_model/training_checkpoints/ckpt_1. If you believe this object should be trackable (i.e. it is part of the TensorFlow Python API and manages state), please open an issue.
I checked the path - it is correct. Here's full output of the error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [39], in <cell line: 4>()
1 checkpoint_num = 10
2 # model.load_weights(tf.train.load_checkpoint("./william_model/training_checkpoints/ckpt_"))
3 # model.load_weights(tf.train.Checkpoint("/home/charlie-chin/william_model/training_checkpoints/ckpt_" + str(checkpoint_num)+".data-00000-of-00001"))
----> 4 model.load_weights(tf.train.Checkpoint("/home/charlie-chin/william_model/training_checkpoints/ckpt_" + str(checkpoint_num)))
File ~/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py:2107, in Checkpoint.__init__(self, root, **kwargs)
2105 if root:
2106 trackable_root = root() if isinstance(root, weakref.ref) else root
-> 2107 _assert_trackable(trackable_root, "root")
2108 attached_dependencies = []
2110 # All keyword arguments (including root itself) are set as children
2111 # of root.
File ~/.local/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py:1546, in _assert_trackable(obj, name)
1543 def _assert_trackable(obj, name):
1544 if not isinstance(
1545 obj, (base.Trackable, def_function.Function)):
-> 1546 raise ValueError(
1547 f"`Checkpoint` was expecting {name} to be a trackable object (an "
1548 f"object derived from `Trackable`), got {obj}. If you believe this "
1549 "object should be trackable (i.e. it is part of the "
1550 "TensorFlow Python API and manages state), please open an issue.")
ValueError: `Checkpoint` was expecting root to be a trackable object (an object derived from `Trackable`), got /home/charlie-chin/william_model/training_checkpoints/ckpt_10. If you believe this object should be trackable (i.e. it is part of the TensorFlow Python API and manages state), please open an issue.
According to the TensorFlow documentation, you should be able to load the checkpoint by passing the path string directly to load_weights, without wrapping it in tf.train.Checkpoint:
checkpoint_num = 10
model.load_weights("/home/charlie-chin/william_model/training_checkpoints/ckpt_" + str(checkpoint_num))

Instance Normalization Error while converting model from tensorflow to Coreml (4.0)

I am trying to convert my model from TensorFlow to Core ML, but I get the error below. Isn't it possible to convert an instance normalization layer to Core ML? Is there any workaround?
ValueError Traceback (most recent call last)
in ()
6
7 model = ct.convert(
----> 8 tf_keras_model )
6 frames
/usr/local/lib/python3.6/dist-packages/coremltools/converters/mil/mil/block.py in remove_ops(self, existing_ops)
700 + "used by ops {}"
701 )
--> 702 raise ValueError(msg.format(op.name, i, v.name, child_op_names))
703 # Check that the output Var isn't block's output
704 if v in self._outputs:
ValueError: Cannot delete op 'Generator/StatefulPartitionedCall/StatefulPartitionedCall/encoder_down_resblock_0/instance_norm_0/Shape' with active output at id 0: 'Generator/StatefulPartitionedCall/StatefulPartitionedCall/encoder_down_resblock_0/instance_norm_0/Shape' used by ops ['Generator/StatefulPartitionedCall/StatefulPartitionedCall/encoder_down_resblock_0/instance_norm_0/strided_slice']
I used keras-contrib instead, and it works fine. See the issue and its solution below; the problem is still open for tensorflow_addons:
https://github.com/apple/coremltools/issues/1007

Using pandas' read_hdf to load data on Google Drive fails with ValueError

I have uploaded an HDF file to Google Drive and wish to load it in Colab. The file was created from a DataFrame with DataFrame.to_hdf() and can be loaded successfully locally with pd.read_hdf(). However, when I mount my Google Drive and try to read the data in Colab, it fails with a ValueError.
Here is the code I am using to read the data:
from google.colab import drive
drive.mount('/content/drive')
data = pd.read_hdf('/content/drive/My Drive/Ryhmäytyminen/data/data.h5', 'students')
And this is the full error message:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-15-cfe913c26e60> in <module>()
----> 1 data = pd.read_hdf('/content/drive/My Drive/Ryhmäytyminen/data/data.h5', 'students')
7 frames
/usr/local/lib/python3.6/dist-packages/tables/vlarray.py in read(self, start, stop, step)
819 listarr = []
820 else:
--> 821 listarr = self._read_array(start, stop, step)
822
823 atom = self.atom
tables/hdf5extension.pyx in tables.hdf5extension.VLArray._read_array()
ValueError: cannot set WRITEABLE flag to True of this array
Reading some JSON data was successful, so the problem is probably not with mounting. Any ideas about what is wrong or how to debug this?
Thank you!
Try navigating to the directory where your HDF file is stored first:
cd /content/drive/My Drive/Ryhmäytyminen/data
From here you should be able to load the HDF file directly:
data = pd.read_hdf('data.h5', 'students')

ValueError: unknown url type with DeepLab demo.ipynb

I am running Demo DeepLab.ipynb in Google Colab. The demo's provided images work well, but when I add my own image I get the error "ValueError: unknown url type: '/content/harshu-06032019.png'", even though I can see that the file has been uploaded to Colab.
Any help on why I am getting this error is appreciated.
I also tried putting the file into Google Drive and granting Colab access by mounting the Drive, but that does not work either: with the file in Google Drive, I get the error "Cannot retrieve image. Please check url".
This is the code provided by DeepLabv3+:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-5-ba1edc5ae51a> in <module>()
24
25 image_url = IMAGE_URL or _SAMPLE_URL % SAMPLE_IMAGE
---> 26 run_visualization(image_url)
5 frames
/usr/lib/python3.6/urllib/request.py in _parse(self)
382 self.type, rest = splittype(self._full_url)
383 if self.type is None:
--> 384 raise ValueError("unknown url type: %r" % self.full_url)
385 self.host, self.selector = splithost(rest)
386 if self.host:
ValueError: unknown url type: '/content/harshu-06032019.png'
To fix "ValueError: unknown url type: '/content/harshu-06032019.png'", follow these steps:
Remove /content/ from the URL path; the path to your image should be just harshu-06032019.png
Run !ls to check that the image file is present

pandas.read_clipboard from cloud-hosted jupyter?

I am running a Data8 instance of JupyterHub with JupyterLab on a server, and pd.read_clipboard() does not seem to work. I see the same problem in Google Colab.
import pandas as pd
pd.read_clipboard()
errors out like so:
---------------------------------------------------------------------------
PyperclipException Traceback (most recent call last)
<ipython-input-2-8cbad928c47b> in <module>()
----> 1 pd.read_clipboard()
/opt/conda/lib/python3.6/site-packages/pandas/io/clipboards.py in read_clipboard(sep, **kwargs)
29 from pandas.io.clipboard import clipboard_get
30 from pandas.io.parsers import read_table
---> 31 text = clipboard_get()
32
33 # try to decode (if needed on PY3)
/opt/conda/lib/python3.6/site-packages/pandas/io/clipboard/clipboards.py in __call__(self, *args, **kwargs)
125
126 def __call__(self, *args, **kwargs):
--> 127 raise PyperclipException(EXCEPT_MSG)
128
129 if PY2:
PyperclipException:
Pyperclip could not find a copy/paste mechanism for your system.
For more information, please visit https://pyperclip.readthedocs.org
Is there a way to get this working?
No. The machine runs in the cloud, and Python there cannot access your local machine to get the clipboard contents.
I tried the JavaScript Clipboard API, but it didn't work, probably because the output is rendered in an iframe that is not allowed to access the clipboard either. If it were, this would have worked:
from google.colab.output import eval_js
text = eval_js("navigator.clipboard.readText()")
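As a practical fallback, you can paste the copied table into a string literal in the notebook and parse it with read_csv, which is essentially what read_clipboard does internally once it has the text (a sketch with made-up sample data; clipboard text is usually tab-separated):

```python
import io
import pandas as pd

# Paste the clipboard contents between the triple quotes, then parse
# the string the same way read_clipboard would hand it to read_csv.
pasted = """a\tb
1\t2
3\t4
"""
df = pd.read_csv(io.StringIO(pasted), sep='\t')
```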