I am trying to run the TensorFlow object detection notebook from the Google Colab example that can be found here. I am running the example locally, and when I try to create the exported model using the model.export(export_dir) function, the model is not saved in the directory passed as export_dir. Instead, the code seems to ignore the option and saves the model to a temporary directory under /tmp/something. I have tried both full and relative paths, but the model still goes to /tmp. My environment is Ubuntu under WSL2, and I am working on a /mnt/n drive.
Any idea how to fix the issue?
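For anyone hitting the same symptom, one thing worth ruling out is WSL2's handling of the Windows-mounted /mnt drives. A hedged workaround sketch, not a confirmed fix: the export call follows the Model Maker tutorial's shape, and the paths below are hypothetical.

import shutil

# Export to a native ext4 path inside the WSL2 filesystem first
# (hypothetical path; adjust to your home directory).
model.export(export_dir='/home/me/exported_model')

# Then copy the result onto the Windows-mounted drive.
shutil.copytree('/home/me/exported_model', '/mnt/n/exported_model')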
Scenario:
You have a GitHub repo you want to work on. The repo has the following file structure:
src/          <- directory containing .py files
notebook1.ipynb
notebook2.ipynb
You head to Colab and create a new empty notebook as the entry point between your repo and the Google Colab runtime.
In that empty Colab notebook you add the following command to clone your GitHub repo:
!git clone your_repo_address
Checking the Colab file explorer, we see that our repo and file structure have been copied to the Colab runtime.
So far so good. Now say you want to open notebook1.ipynb, execute its cells, and work on it.
How the hell do you do that?
Every time I try, the file opens in a viewer without any way to execute the notebook cells.
Why can't Colab work like Jupyter? It's extremely cumbersome and time-wasting in this regard.
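One workaround sketch, for what it's worth: instead of opening the cloned notebook in the Colab editor, you can execute it from the entry notebook with nbconvert. This assumes the repo was cloned to /content/your_repo and notebook1.ipynb sits at its root; both names are placeholders from the scenario above.

# Run the cloned notebook in place and write the executed outputs back into it.
!jupyter nbconvert --to notebook --execute --inplace /content/your_repo/notebook1.ipynb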
I am building a tensorflow_object_detection_api setup locally, and the hope is to transfer the whole setup to a computer cluster I have access to through my school and train the model there. The environment is hard to set up on the cluster's shared Linux system, so I am trying to do as much as possible locally and then just ship everything over and run the training command. My question is: can I generate the tfrecords locally and just transfer them to the cluster? I am asking because I don't know how these records work: do they include links to the actual local directories, or do they contain all the necessary information in themselves?
P.S. I tried to generate them on the cluster, but the environment is tricky to set up: TensorFlow and OpenCV are installed in a Singularity container that has to be invoked to run any script using them, and that container lacks the requirements needed to run the script that generates the tfrecords from the CSV annotations.
I am pretty new to most of this, so any help is appreciated.
Yes. I tried it and it worked. Apparently, tfrecords contain all the images and their annotations; all I needed to do was transfer the weights to Colab and start training.
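If you want to verify that yourself before shipping the files, a quick inspection sketch; this assumes the records follow the TF Object Detection API convention, where the encoded image bytes live under the 'image/encoded' feature key, and train.record is a placeholder file name.

import tensorflow as tf

# Read one serialized example and list its feature keys; seeing
# 'image/encoded' alongside the bbox/label keys confirms the image
# data itself is stored in the record, not a link to a local path.
dataset = tf.data.TFRecordDataset('train.record')
for raw_record in dataset.take(1):
    example = tf.train.Example()
    example.ParseFromString(raw_record.numpy())
    print(sorted(example.features.feature.keys()))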
I'm working on Ubuntu 18.04 LTS. According to https://keras.io/getting_started/faq/:
The default directory where all Keras data is stored is:
$HOME/.keras/
In case Keras cannot create the above directory (e.g. due to permission issues), /tmp/.keras/ is used as a backup.
In my case, Keras lacks permissions for $HOME/.keras and can't write to /tmp/.keras due to a full hard drive. I would like to place Keras's tmp directory in a custom location on a different hard drive. How is this best accomplished?
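One approach worth trying, as a minimal sketch: Keras reads the KERAS_HOME environment variable when deciding where its data directory lives, so pointing it at the other drive before Keras is first imported should redirect everything. The path below is hypothetical.

import os

# Must be set before keras (or tensorflow) is imported for the first time.
os.environ['KERAS_HOME'] = '/mnt/otherdrive/.keras'  # hypothetical path on the other drive

import keras  # will create and use /mnt/otherdrive/.keras instead of $HOME/.keras

Equivalently, you can export KERAS_HOME in your shell profile so it applies to every run.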
I am trying to download TensorFlow's handpose model files to run them offline.
So, I have downloaded the handpose model files from here:
https://tfhub.dev/mediapipe/tfjs-model/handskeleton/1/default/1
But how can we use these files offline to make predictions in JavaScript, as well as in React Native code?
Just change all the URLs in the handpose package to point to the URL where you put your model (e.g. your localhost/public_dir).
That works well for me :)
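If it helps: the tfjs runtime fetches the model over HTTP, so the downloaded files have to be served somewhere the page can reach before you repoint those URLs. A minimal sketch using Python's built-in server; the port and the ./handpose_model directory (holding model.json and its weight shards) are assumptions.

# Serve the downloaded model files on localhost so the browser can fetch them.
python -m http.server 8080 --directory ./handpose_model

The package URLs would then point at http://localhost:8080/model.json.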
I'm attempting to train the NER component within spaCy to recognize a new set of entities. Everything works just fine until I try to save and reload the model.
I'm attempting to follow the spaCy docs' recommendations from https://spacy.io/usage/training#saving-loading, so I have been saving with:
model.to_disk("save_this_model")
and then going to the Command Line and attempting to turn it into a package using:
python -m spacy package save_this_model saved_model_package
so I can then use
spacy.load('saved_model_package')
to pull the model back up.
However, when I attempt to use spacy package from the command line, I keep getting the error message "Can't locate model data".
I've looked in the save_this_model directory and there is a meta.json there, as well as folders for the various pipes (I've tried this both with all pipes saved and with the non-NER pipes disabled; neither works).
Does anyone know what I might be doing wrong here?
I'm pretty inexperienced, so I think it's very possible that I'm attempting to make a package incorrectly or committing some other basic error. Thank you very much for your help in advance!
The spacy package command will create an installable and loadable Python package based on your model data, which you can then pip install and store in a single .tar.gz file. If you just want to load a model you've saved out, you usually don't even need to package it – you can simply pass the path to the model directory to spacy.load. For example:
nlp = spacy.load('/path/to/save_this_model')
spacy.load can take either a path to a model directory, a model package name or the name of a shortcut link, if available.
If you're new to spaCy and just experimenting with training models, loading them from a directory is usually the simplest solution. Model packages come in handy if you want to share your model with others (because you can share it as one installable file), or if you want to integrate it into your CI workflow or test suite (because the model can be a component of your application, like any other package it depends on).
So if you do want a Python package, you'll first need to build it by running the package setup from within the directory created by spacy package:
cd saved_model_package
python setup.py sdist
You can find more details here in the docs. The above command will create a .tar.gz archive in the /dist directory, which you can then install in your environment:
pip install /path/to/en_example_model-1.0.0.tar.gz
If the model installed correctly, it should show up in the installed packages when you run pip list or pip freeze. To load it, you can call spacy.load with the package name, which is usually the language code plus the name you specified when you packaged the model. In this example, en_example_model:
nlp = spacy.load('en_example_model')
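As a quick sanity check that the reloaded model kept your newly trained entities, you can run the pipeline on some text; the example sentence below is a placeholder, so substitute something containing one of your entity types.

# Run the loaded pipeline and inspect the recognized entities.
doc = nlp("Some text that mentions one of your new entities")
print([(ent.text, ent.label_) for ent in doc.ents])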