TensorFlow Lite models for Android and instructions - tensorflow

I am trying to follow instructions here to generate models for android.
https://github.com/tensorflow/examples/tree/master/lite/examples/gesture_classification/ml
But when I try to run all the commands in the codelab, it asks me to upload model.json and the binary weights file model-weights.bin. I'm not sure what this means.
If I skip this step, the second-to-last step fails with:
No such file or directory: 'model.json'
Where can I find these?

You need to first use the gesture classification web app to generate the TensorFlow.js model trained by your gestures.
https://github.com/tensorflow/examples/tree/master/lite/examples/gesture_classification/web
Once the model is trained in the web app, you can download the model.json and model-weights.bin files, which are needed in the later steps.
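If you want to sanity-check your download before running the codelab, a quick way is a small helper like this (hypothetical, assuming both files sit in the current working directory):

```python
import os

def missing_model_files(model_dir="."):
    """Return which of the TensorFlow.js artifacts are absent from model_dir."""
    required = ["model.json", "model-weights.bin"]
    return [f for f in required if not os.path.exists(os.path.join(model_dir, f))]

# After copying/uploading the two files, this should print an empty list:
print(missing_model_files())
```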

Related

Using .config files to load models in tensorflow

I have a .config file (ssd_mobilenet_v1.config) which, as I understand it, is supposed to load a pretrained model from TensorFlow. However, I am unable to find out how to do that.
I have searched the internet, and there are instructions for doing so from a dict, but not directly from a .config file.

Could I transfer my tfrecords and use them in another computer?

I am setting up tensorflow_object_detection_api locally, and the plan is to transfer the whole setup to a computer cluster I have access to through my school and train the model there. The environment is hard to set up on the cluster's shared Linux system, so I am trying to do as much as possible locally, then ship everything over and run the training command. My question is: can I generate the tfrecords locally and just transfer them to the cluster? I am asking because I don't know how these records work. Do they include links to the actual local directories, or do they contain all the necessary information in themselves?
P.S. I tried to generate them on the cluster, but the environment is tricky to set up: tensorflow and opencv are installed in a Singularity container, which needs to be invoked to run any script that uses tf or opencv, but that container does not have the requirements needed to run the script that generates the tfrecords from CSV annotations.
I am pretty new to most of this, so any help is appreciated.
Yes. I tried it and it worked. Apparently, tfrecords contain all the images and their annotations; all I needed to do was transfer the records to Colab and start training.
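This matches how detection tfrecords are typically built: the encoded image bytes are serialized into each tf.train.Example rather than referenced by path, so the file carries everything with it. A minimal sketch (the feature keys here are illustrative, not necessarily the exact ones your generation script uses):

```python
import tensorflow as tf

def make_example(image_bytes, label):
    # The encoded image bytes and the label are stored *inside* the record,
    # so the resulting tfrecord file has no dependency on local paths.
    feature = {
        "image/encoded": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[image_bytes])),
        "image/label": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[label])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

# Round-trip: serialize, then parse back without touching the filesystem.
ex = make_example(b"\x89PNG...fake image bytes...", 3)
parsed = tf.train.Example.FromString(ex.SerializeToString())
```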

Tensorflow Handpose model offline loading

I am trying to download TensorFlow's handpose model files so I can run the model offline.
I have downloaded the handpose model files from here:
https://tfhub.dev/mediapipe/tfjs-model/handskeleton/1/default/1
But how can I use these files offline for prediction in JavaScript as well as in React Native code?
Just change all the URLs in the handpose package to point to the URL where you host your model (e.g. in your localhost/public_dir).
That works well for me :)

Using "Spacy package" on trained model: error "Can't locate model data"

I'm attempting to train the NER within SpaCy to recognize a new set of entities. Everything works just fine until I try to save and reload the model.
I'm attempting to follow the SpaCy doc recommendations from https://spacy.io/usage/training#saving-loading, so I have been saving with:
model.to_disk("save_this_model")
and then going to the Command Line and attempting to turn it into a package using:
python -m spacy package save_this_model saved_model_package
so I can then use
spacy.load('saved_model_package')
to pull the model back up.
However, when I'm attempting to use spacy package from the Command Line, I keep getting the error message "Can't locate model data"
I've looked in the save_this_model directory and there is a meta.json there, as well as folders for the various pipes (I've tried this both with all pipes saved and with the non-NER pipes disabled; neither works).
Does anyone know what I might be doing wrong here?
I'm pretty inexperienced, so I think it's very possible that I'm attempting to make a package incorrectly or committing some other basic error. Thank you very much for your help in advance!
The spacy package command will create an installable and loadable Python package based on your model data, which you can then pip install and store in a single .tar.gz file. If you just want to load a model you've saved out, you usually don't even need to package it – you can simply pass the path to the model directory to spacy.load. For example:
nlp = spacy.load('/path/to/save_this_model')
spacy.load can take either a path to a model directory, a model package name or the name of a shortcut link, if available.
If you're new to spaCy and just experimenting with training models, loading them from a directory is usually the simplest solution. Model packages come in handy if you want to share your model with others (because you can share it as one installable file), or if you want to integrate it into your CI workflow or test suite (because the model can be a component of your application, like any other package it depends on).
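As a minimal round-trip sketch of the directory approach (using a blank English pipeline rather than a trained NER model, and a temporary directory as the save path):

```python
import tempfile

import spacy

# Save a pipeline to disk, then load it back by path -- no packaging needed.
nlp = spacy.blank("en")
with tempfile.TemporaryDirectory() as model_dir:
    nlp.to_disk(model_dir)
    reloaded = spacy.load(model_dir)
    print(reloaded.lang)
```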
So if you do want a Python package, you'll first need to build it by running the package setup from within the directory created by spacy package:
cd saved_model_package
python setup.py sdist
You can find more details in the docs. The above command will create a .tar.gz archive in a /dist directory, which you can then install in your environment:
pip install /path/to/en_example_model-1.0.0.tar.gz
If the model installed correctly, it should show up in the installed packages when you run pip list or pip freeze. To load it, you can call spacy.load with the package name, which is usually the language code plus the name you specified when you packaged the model. In this example, en_example_model:
nlp = spacy.load('en_example_model')

Using Tensorboard to graph from a log directory

Can anyone give me a quick and dirty tutorial on how to modify the code in this MNIST tutorial and this seq2seq tutorial to log things to a log directory that can then be used by TensorBoard? I didn't really understand the examples on the official site.
You can create a SummaryWriter object, passing it the log directory, and call add_summary to log summaries and events to files in that directory. word2vec.py has an example. You can then point TensorBoard at the log directory via --logdir and visualize the summaries.
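The answer above refers to the older SummaryWriter/add_summary interface; with the TF 2.x summary API the same workflow looks roughly like this (a sketch, assuming a local "logs" directory):

```python
import tensorflow as tf

# Create a writer for the log directory; TensorBoard reads event files from it.
writer = tf.summary.create_file_writer("logs")
with writer.as_default():
    for step in range(5):
        # Log one scalar per step; TensorBoard plots it against `step`.
        tf.summary.scalar("loss", 1.0 / (step + 1), step=step)
writer.flush()
```

Then visualize with: tensorboard --logdir logs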