I've been using TensorFlow image recognition. I've built many scripts that interact with classify_image.py.
I also retrained the model using retrain.py, with my own dataset.
How can I use the two generated files, output_graph.pb and output_labels.txt, with classify_image.py?
Ah, the docs say:
"If you'd like to use the retrained model in a Python program this example from @eldor4do shows what you'll need to do."
I just copied and edited that one file locally, ran python .\edited-retraining-example.py, and that was easy.
Note that if you're on Windows, change all examples of /tmp/... to c:/tmp/....
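For reference, here is a minimal sketch of what the edited script boils down to, assuming the default output paths and tensor names produced by retrain.py ('DecodeJpeg/contents:0' as input, 'final_result:0' as output); the image path is a placeholder:

import tensorflow as tf

# Load the labels written by retrain.py, one per line.
labels = [line.strip() for line in tf.gfile.GFile('/tmp/output_labels.txt')]

# Load the retrained graph into the default graph.
with tf.gfile.GFile('/tmp/output_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')

with tf.Session() as sess:
    image_data = tf.gfile.GFile('/tmp/some_image.jpg', 'rb').read()
    # 'final_result:0' is retrain.py's default output tensor name.
    predictions = sess.run('final_result:0',
                           {'DecodeJpeg/contents:0': image_data})[0]
    for i in predictions.argsort()[::-1][:5]:
        print(labels[i], predictions[i])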
Overview
I know this subject has been discussed many times, but I am having a hard time understanding the workflow, or rather, the variations of the workflow.
For example, imagine you are installing TensorFlow on Windows 10. The main goal is to train a custom model, convert it to TensorFlow Lite, and copy the converted .tflite file to a Raspberry Pi running TensorFlow Lite.
The confusion for me starts with the conversion process. After following along with multiple guides, it seems TensorFlow is often installed with pip or Anaconda, but then I see detailed tutorials indicating it needs to be built from source in order to convert TensorFlow models to TFLite models.
To make things more interesting, I've also seen models converted via Python scripts, as seen here.
Question
So far I have seen 3 ways to do this conversion, and it could just be that I don't have a grasp on the full picture. Below are the abbreviated methods I have seen:
Build from source, and use the TensorFlow Lite Optimizing Converter (TOCO):
bazel run --config=opt tensorflow/lite/toco:toco -- --input_file=$OUTPUT_DIR/tflite_graph.pb --output_file=$OUTPUT_DIR/detect.tflite ...
Use the TensorFlow Lite Converter Python API:
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)
Use the tflite_convert CLI utilities:
tflite_convert --saved_model_dir=/tmp/mobilenet_saved_model --output_file=/tmp/mobilenet.tflite
I *think* I understand that options 2 and 3 are the same, in the sense that the tflite_convert utility is installed and can be invoked either from the command line or through a Python script. But is there a specific reason you should choose one over the other?
And lastly, what really gets me confused is option 1. And maybe it's a version thing (1.x vs 2.x)? But what's the difference between the TensorFlow Lite Optimizing Converter (TOCO) and the TensorFlow Lite Converter? It appears that in order to use TOCO you would have to build TensorFlow from source, so is there a reason you would use one over the other?
There is no difference in the output from the different conversion methods, as long as the parameters remain the same. The Python API is better if you want to generate TFLite models in an automated way (e.g., a Python script that runs periodically).
The TensorFlow Lite Optimizing Converter (TOCO) was the first version of the TF->TFLite converter. It was recently deprecated and replaced with a new converter that can handle more ops/models. So I wouldn't recommend using toco:toco via bazel, but rather use tflite_convert as mentioned here.
You should never have to build the converter from source, unless you are making some changes to it and want to test them out.
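For instance, a hypothetical batch script (the exported_models/ directory layout here is made up) could convert every SavedModel it finds in one pass:

import glob
import tensorflow as tf

# Convert each SavedModel under exported_models/ to a .tflite file.
for export_dir in glob.glob('exported_models/*'):
    converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
    tflite_model = converter.convert()
    with tf.io.gfile.GFile(export_dir + '.tflite', 'wb') as f:
        f.write(tflite_model)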
We are creating a platform to put AI use cases into production. TFX is the first choice, but what if we want to use non-TensorFlow libraries like scikit-learn and include a Python script to create models? Can the output of such a model be served by TensorFlow Serving? How can I make sure I am able to run both TensorFlow-based models and non-TensorFlow-based libraries and models in one system design? Please suggest.
Below is the procedure to deploy and serve a scikit-learn model on Google Cloud Platform.
The first step is to save/export the scikit-learn model using the code below:
import joblib  # in older scikit-learn versions: from sklearn.externals import joblib
joblib.dump(clf, 'model.joblib')
The next step is to upload the model.joblib file to Google Cloud Storage.
After that, we need to create our model and version, specifying that we are loading up a scikit-learn model, and select the runtime version of Cloud ML engine, as well as the version of Python that we used to export this model.
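The commands look roughly like this (the model name, version, bucket path, and runtime/Python versions below are placeholders to adapt to your setup):

gcloud ml-engine models create my_model --regions us-central1
gcloud ml-engine versions create v1 --model my_model --origin gs://my-bucket/model_dir/ --framework scikit-learn --runtime-version 1.14 --python-version 3.5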
Next, we need to present the data to Cloud ML Engine as a simple array, encoded as a JSON file, as shown below. We can also use the json library.
print(list(X_test.iloc[10:11].values))
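Equivalently, a small sketch that writes the same row to a file with the json library (input.json is an assumed filename, and X_test is assumed to be a pandas DataFrame):

import json

# --json-instances expects one JSON instance per line;
# json.dump writes the array on a single line by default.
with open('input.json', 'w') as f:
    json.dump(X_test.iloc[10].values.tolist(), f)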
Next, we need to run the command below to perform the inference:
gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances $INPUT_FILE
For more information, please refer to this link.
I want to use pre-trained models in my Go application, especially the Inception-ResNet-v2 model.
This model seems to be available only via TensorFlow Hub (https://www.tensorflow.org/hub/).
However, I could not find any documentation on how to use TensorFlow Hub with the Go language bindings for TensorFlow.
How can I download and use these models in Go?
After a lot of work over the past few days, I finally found a way.
At first I wanted to do all the TensorFlow work in Python and then provide the results via a REST service. However, it turned out that the number of models provided by TensorFlow Hub is very small. This was a problem for me because I had to try out different models and compare them.
So I switched to using models from https://github.com/tensorflow/models. There are several tutorials on how to export them to .pb files. Those files can then be loaded in Go using gocv.
Converting the files takes a lot of work, but in the end I think this is the best way to use TensorFlow models in Go.
I want to convert a TensorFlow model with the following structure to a .mlmodel file for use in an iOS app:
cub_image_experiment/
  logdir/
    val_summaries/
    test_summaries/
    finetune/
      val_summaries/
  cmds.txt
  config_train.yaml
  config_test.yaml
I'm following this tutorial: https://github.com/visipedia/tf_classification/wiki/CUB-200-Image-Classification
However, I'm having trouble understanding the structure of the project. Which files are important, and how do I convert all the separate config files and everything into a single .mlmodel file that I can use in my application?
I've looked online and all I could find was how to convert a .caffemodel or a .pb file to .mlmodel. Those are all single files; however, my project has multiple files. I found a tutorial on how to convert a TF model into a single .pb file; however, that model's structure was different and it did not contain any yaml files. My project is not focused on creating a model at the moment, but merely on integrating a model into an iOS app. I found this model interesting for an app idea and wanted to know whether it can be integrated. If there are any tutorials out there that might help with this sort of problem, please let me know.
None of that stuff is used by the Core ML model; the yaml files etc. are used only to train the TF model.
All you need to provide is a frozen graph (a .pb file), which you then convert to an mlmodel using tfcoreml.
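The conversion itself is only a few lines; here is a rough sketch where the file names and the output tensor name are placeholders that depend on your model:

import tfcoreml

# Converts a frozen TF graph to a Core ML model.
tfcoreml.convert(tf_model_path='frozen_graph.pb',
                 mlmodel_path='model.mlmodel',
                 output_feature_names=['Softmax:0'])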
It looks like your project doesn't have a frozen graph, only checkpoints. There is a TF utility you can use to convert a checkpoint into a frozen graph; see https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py
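An invocation looks roughly like this (the graph file, checkpoint, and output node name are placeholders for your model's actual values):

python freeze_graph.py --input_graph=graph.pbtxt --input_checkpoint=model.ckpt --output_graph=frozen_graph.pb --output_node_names=Softmax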
I am starting to work with the Intel Movidius Neural Compute Stick.
To get started, in my case, it is necessary to download pretrained models.
The tutorials refer to http://download.tensorflow.org/models/.
However, there is no list that shows all the models available for download.
If the latest version of a net, let's say inception_v4, is not known, I cannot download the corresponding .tar.gz file.
Does anyone know a way to get an updated list of the .tar.gz files of the pretrained models available for download?
Thanks
The following two links may help:
detection_model_zoo
TensorFlow-Slim image classification model library
keras.applications makes it easy to load models and their pretrained weights, if you can use Keras.
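For example (InceptionV3 is just one choice; any model in keras.applications works the same way):

from tensorflow.keras.applications import InceptionV3

# Downloads the ImageNet weights on first use.
model = InceptionV3(weights='imagenet')
model.summary()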
Considering that most of the posted links are outdated, I suggest looking into TensorFlow Hub (https://www.tensorflow.org/hub) for more recent pre-trained models.