TensorFlow Handpose model offline loading - tensorflow

I am trying to download TensorFlow's handpose model files for running offline.
So I have downloaded the handpose model files from here:
https://tfhub.dev/mediapipe/tfjs-model/handskeleton/1/default/1
But how can we use these files offline and predict in JavaScript, as well as in React Native code?

Just change all the URLs in the handpose package to point to the URL where you put your model (in your localhost/public_dir).
That works well for me :)
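If it helps, here is a minimal sketch of the hosting step in Python, assuming you saved the downloaded model.json and weight shard files into a local directory (the directory and stand-in model.json below are hypothetical placeholders): the built-in http.server can expose the files on localhost, and the repointed URLs in the handpose package would then fetch from there.

```python
# Sketch: serve locally downloaded model files over localhost so a
# TF.js/handpose client with repointed URLs can fetch them offline.
# The model directory and its contents are stand-ins for the real
# model.json + weight shards downloaded from tfhub.dev.
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

model_dir = tempfile.mkdtemp()  # stand-in for your public_dir
with open(os.path.join(model_dir, "model.json"), "w") as f:
    f.write('{"format": "graph-model"}')  # placeholder content

handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory=model_dir
)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# This is the kind of URL you would point the handpose package at:
url = f"http://127.0.0.1:{server.server_address[1]}/model.json"
body = urllib.request.urlopen(url).read().decode()
server.shutdown()
```

In practice you would run this (or any static file server) next to your app and replace the tfhub.dev URLs in the package with http://localhost:&lt;port&gt;/model.json; for React Native, bundling the files and pointing the loader at a local asset URL is the analogous step.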

Related

Exported object detection model not saved in referenced directory

I am trying to run the TensorFlow object detection notebook from the Google Colab example that can be found here. I am running the example locally, and when I try to create the exported model using the model.export(export_dir) function, the model is not saved in the referenced export_dir location. Instead, the code seems to ignore the option and saves the model in a temporary directory in /tmp/something. I tried both full and relative paths, but the model still goes to /tmp. My environment is Ubuntu in WSL2 and I am using a /mnt/n drive.
Any idea how to fix the issue?

Could I transfer my tfrecords and use them on another computer?

I am building a tensorflow_object_detection_api setup locally, and the hope is to transfer the whole setup to a computer cluster I have access to through my school and train the model there. The environment is hard to set up on the cluster's shared Linux system, so I am trying to do as much locally as possible, and hopefully just ship everything there and run the training command. My question is: can I generate the tfrecords locally and just transfer them to the cluster? I am asking because I don't know how these records work. Do they include links to the actual local directories, or do they contain all the necessary information in them?
P.S. I tried to generate them on the cluster, but the environment is tricky to set up: tensorflow and opencv are installed in a Singularity container which needs to be called to run any script with tf or opencv, but that container does not have the necessary requirements to run the script which generates the tfrecords from csv annotations.
I am pretty new to most of this, so any help is appreciated.
Yes. I tried it and it worked. Apparently, tfrecords contain all the images and their annotations; all I needed to do was transfer them to Colab and start training.
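That matches how the TFRecord format is laid out on disk: each record embeds the full serialized example bytes inline (length-prefixed, with checksums), with no pointers back to your local directories. Here is a stdlib-only sketch of that framing; note it uses zlib.crc32 as a stand-in for the masked CRC32C that real TFRecords use, so it only illustrates the layout and its output files are not readable by TensorFlow.

```python
# Sketch of TFRecord on-disk framing:
#   uint64 payload length | uint32 length CRC | payload | uint32 payload CRC
# The payload is the serialized example itself (image bytes + annotations),
# which is why the files are self-contained and portable between machines.
import io
import struct
import zlib  # zlib.crc32 is a stand-in for TFRecord's masked CRC32C


def write_record(stream, payload: bytes) -> None:
    header = struct.pack("<Q", len(payload))
    stream.write(header)
    stream.write(struct.pack("<I", zlib.crc32(header)))
    stream.write(payload)
    stream.write(struct.pack("<I", zlib.crc32(payload)))


def read_records(stream):
    while True:
        header = stream.read(8)
        if not header:
            return
        (length,) = struct.unpack("<Q", header)
        stream.read(4)  # length checksum (skipped in this sketch)
        yield stream.read(length)
        stream.read(4)  # payload checksum (skipped in this sketch)


buf = io.BytesIO()
write_record(buf, b"image bytes + annotations, all inline")
buf.seek(0)
records = list(read_records(buf))
```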

TensorFlow Lite models for Android and instructions

I am trying to follow the instructions here to generate models for Android:
https://github.com/tensorflow/examples/tree/master/lite/examples/gesture_classification/ml
But when I try to run the commands in the codelab, it asks me to upload a model.json file and a binary weights file, model-weights.bin. Not sure what this means.
If I skip this step, the second-to-last step fails with:
No such file or directory: 'model.json'
Where can I find these files?
You need to first use the gesture classification web app to generate the TensorFlow.js model trained by your gestures.
https://github.com/tensorflow/examples/tree/master/lite/examples/gesture_classification/web
Once the model is trained from the web app, you can download the model.json and model-weights.bin files, which are needed in the other steps.

Using "Spacy package" on trained model: error "Can't locate model data"

I'm attempting to train the NER within SpaCy to recognize a new set of entities. Everything works just fine until I try to save and reload the model.
I'm attempting to follow the SpaCy doc recommendations from https://spacy.io/usage/training#saving-loading, so I have been saving with:
model.to_disk("save_this_model")
and then going to the Command Line and attempting to turn it into a package using:
python -m spacy package save_this_model saved_model_package
so I can then use
spacy.load('saved_model_package')
to pull the model back up.
However, when I attempt to use spacy package from the command line, I keep getting the error message "Can't locate model data".
I've looked in the save_this_model directory and there is a meta.json there, as well as folders for the various pipes (I've tried this with all pipes saved and with the non-NER pipes disabled; neither works).
Does anyone know what I might be doing wrong here?
I'm pretty inexperienced, so I think it's very possible that I'm attempting to make a package incorrectly or committing some other basic error. Thank you very much for your help in advance!
The spacy package command will create an installable and loadable Python package based on your model data, which you can then pip install and store in a single .tar.gz file. If you just want to load a model you've saved out, you usually don't even need to package it – you can simply pass the path to the model directory to spacy.load. For example:
nlp = spacy.load('/path/to/save_this_model')
spacy.load can take either a path to a model directory, a model package name or the name of a shortcut link, if available.
If you're new to spaCy and just experimenting with training models, loading them from a directory is usually the simplest solution. Model packages come in handy if you want to share your model with others (because you can share it as one installable file), or if you want to integrate it into your CI workflow or test suite (because the model can be a component of your application, like any other package it depends on).
So if you do want a Python package, you'll first need to build it by running the package setup from within the directory created by spacy package:
cd saved_model_package
python setup.py sdist
You can find more details here in the docs. The above command will create a .tar.gz archive in a /dist directory, which you can then install in your environment:
pip install /path/to/en_example_model-1.0.0.tar.gz
If the model installed correctly, it should show up in the installed packages when you run pip list or pip freeze. To load it, you can call spacy.load with the package name, which is usually the language code plus the name you specified when you packaged the model. In this example, en_example_model:
nlp = spacy.load('en_example_model')

TensorFlow Serving: impossible proto file structure for gRPC interface

I'm trying to compile pb files for gRPC calls to TensorFlow Serving (in PHP, but the question is not PHP-related).
The file serving/tensorflow_serving/apis/predict.proto has:
import "tensorflow/core/framework/tensor.proto";
import "tensorflow_serving/apis/model.proto";
However, in a normal setup, tensorflow and tensorflow_serving are not located in a hierarchy that has a common folder from which the two imports can both resolve.
Assuming that compiling the proto files to pb files for gRPC preserves the hierarchy, it cannot work without locating tensorflow_serving under /tensorflow/. What am I missing here?
What is the best practice for compiling pb files for a gRPC client?
Another issue: if the pb files are created, they include the imports with the same hierarchy, so they will force the same folder structure on the client. This goes against the point of gRPC, which is isolation and separation between the entities.
I don't know anything about tensorflow, but I'm approaching the problem from a just-another-protobuf-compilation point of view. At https://github.com/tensorflow/serving I see both tensorflow_serving and a tensorflow submodule, which is the root of your desired dependency (i.e. it has another tensorflow subfolder in it). So I guess that you either missed some configuration step which would have copied the folder into the right relative location, or you are running an incomplete/incorrect protoc command line, i.e. you are missing some -I <path>.
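As a sketch (the paths, output directory, and PHP plugin location are assumptions, not verified against your checkout): with the tensorflow submodule cloned inside the serving repo, passing both roots via -I lets the two imports in predict.proto resolve without moving any folders.

```shell
# Hypothetical layout after `git clone --recurse-submodules` of
# tensorflow/serving:
#   serving/tensorflow_serving/apis/*.proto
#   serving/tensorflow/tensorflow/core/framework/*.proto
cd serving
protoc -I . -I tensorflow \
  --php_out=./generated \
  --grpc_out=./generated \
  --plugin=protoc-gen-grpc="$(which grpc_php_plugin)" \
  tensorflow_serving/apis/predict.proto
```

The generated classes follow the proto package names rather than this source tree, so the client side should not need to mirror the folder hierarchy; only the compile step does.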