Went through https://tensorflow.github.io/serving/serving_basic and was able to run inference on the MNIST example using the TensorFlow Serving server.
Now I would like to export a trained, modified InceptionV3 model (which produces two files: a .pb and a .txt) and use it for inference.
The Serving Basic tutorial uses mnist_saved_model.py for training and exporting the model. Does this file need to be modified for the trained, modified InceptionV3 model? Also, what is the difference between mnist_saved_model.py and mnist_export.py?
I looked at How to serve the Tensorflow graph file (output_graph.pb) via Tensorflow serving?, but the example mnist_saved_model.py creates a directory called 1 with contents as shown below:
$> ls /tmp/mnist_model/1
saved_model.pb variables
Thoughts?
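For context, TensorFlow Serving expects each model to live under a numbered version directory (here, 1) containing saved_model.pb and a variables/ subdirectory, which is exactly the layout mnist_saved_model.py produces. A minimal stdlib sketch of a check for that layout (the function name is made up for illustration):

```python
import os

def looks_like_servable(model_dir: str) -> bool:
    """Return True if model_dir contains at least one numeric version
    subdirectory holding saved_model.pb and a variables/ folder,
    the layout TensorFlow Serving expects."""
    if not os.path.isdir(model_dir):
        return False
    for entry in os.listdir(model_dir):
        version_dir = os.path.join(model_dir, entry)
        if entry.isdigit() and os.path.isdir(version_dir):
            has_pb = os.path.isfile(os.path.join(version_dir, "saved_model.pb"))
            has_vars = os.path.isdir(os.path.join(version_dir, "variables"))
            if has_pb and has_vars:
                return True
    return False
```

With the layout shown above, looks_like_servable("/tmp/mnist_model") would return True, because version directory 1 contains both saved_model.pb and variables.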
Related
I trained a model using yolov5, then exported it to TensorFlow saved_model format; the result was a yolo5s.pt file. As far as I know, yolov5 uses PyTorch, and I prefer TensorFlow. Now I want to build a model in TensorFlow using the saved_model file. How can I do it?
It would be preferable if the solution ran in Google Colab. I didn't include my code because I have no idea how to start.
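One note: the .pt file is the PyTorch checkpoint, not the TensorFlow export. Assuming the exporter actually produced a SavedModel directory (a name like yolov5s_saved_model/ is an assumption here), loading it in TensorFlow 2.x, in Colab or elsewhere, is a one-liner. A sketch with a toy model standing in for the detector:

```python
import tempfile
import tensorflow as tf

# Toy stand-in for the exported detector; a real directory such as
# "yolov5s_saved_model/" would be loaded exactly the same way.
class Doubler(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return x * 2.0

path = tempfile.mkdtemp()
m = Doubler()
tf.saved_model.save(m, path, signatures=m.__call__.get_concrete_function())

# The step that matters for the question:
loaded = tf.saved_model.load(path)            # e.g. tf.saved_model.load("yolov5s_saved_model")
infer = loaded.signatures["serving_default"]  # callable inference signature
result = infer(x=tf.constant([1.0, 2.0]))     # dict of output tensors
```

The loaded object is not a Keras model you can retrain directly, but its signatures are callable for inference, which covers the "use it in TensorFlow" part.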
I have downloaded the .weights and .cfg files for YOLOv3 from darknet (link: https://pjreddie.com/darknet/yolo/). I want to create a model, assign the weights from these files, and save the model with the assigned weights to a .h5 file so that I can load it into Keras using keras.models.load_model().
Please help.
You should check the instructions given in this repository. It is basically a Keras implementation of YOLOv3 (TensorFlow backend).
Download YOLOv3 weights from YOLO website.
Convert the Darknet YOLO model to a Keras model.
python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5
As you have already downloaded the weights and configuration file, you can skip the first step. Download the convert.py script from the repository and simply run the command above.
Note: the command above assumes that yolov3.cfg, yolov3.weights, and the model_data folder are in the same directory as convert.py.
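For what it's worth, the first thing convert.py does is parse the darknet .cfg, which is an INI-like format whose section headers (e.g. [convolutional], [yolo]) repeat many times, so a plain dict keyed by section name won't work. A minimal stdlib sketch of that parsing step (a simplification of what the script does, not its actual code):

```python
def parse_darknet_cfg(text):
    """Parse darknet-style .cfg text into an ordered list of
    (section_name, options) pairs, preserving repeated sections."""
    sections = []
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            sections.append((line[1:-1], {}))  # open a new section
        elif "=" in line and sections:
            key, _, value = line.partition("=")
            sections[-1][1][key.strip()] = value.strip()
    return sections
```

Each (name, options) pair then maps onto one Keras layer (Conv2D, BatchNormalization, and so on), with the weights read sequentially from the .weights file in the same order.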
For people getting errors from this: try changing the layers import in convert.py. Not sure if it was a version problem, but changing the way convert.py loads keras.layers fixed all the errors for me.
I have a frozen inference graph (frozen_inference_graph.pb) and a checkpoint (model.ckpt.data-00000-of-00001, model.ckpt.index). How do I deploy these to TensorFlow Serving? Serving needs the SavedModel format; how do I convert to it?
I am studying TensorFlow and found that Deeplab v3+ provides a PASCAL VOC 2012 model. I can run training, evaluation, and visualization on my local PC, but I don't know how to deploy it for serving.
Have you tried export_inference_graph.py?
Prepares an object detection tensorflow graph for inference using model configuration and a trained checkpoint. Outputs inference graph, associated checkpoint files, a frozen inference graph and a SavedModel.
I've been trying to use TensorFlow.js, but I need the model in the SavedModel format. So far I only have a frozen graph, since I used the TensorFlow for Poets codelab.
How can I convert the frozen graph into a SavedModel?
I'm using the latest Python version and TensorFlow 1.8.
A SavedModel is really just a wrapper around a frozen graph that adds assets and the serving signature. For a code implementation, see this answer.
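A sketch of that wrapping step in TF 1.x style is below. The tensor names input:0 and output:0 are placeholders for whatever your graph actually calls its input and output nodes, and a tiny constant graph stands in for the real frozen .pb:

```python
import os
import tempfile
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Tiny graph standing in for a real frozen graph. With a real file you
# would instead do:
#   graph_def = tf.GraphDef()
#   graph_def.ParseFromString(open("frozen_inference_graph.pb", "rb").read())
g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, shape=[None], name="input")
    tf.multiply(x, 2.0, name="output")
graph_def = g.as_graph_def()

export_dir = os.path.join(tempfile.mkdtemp(), "1")  # versioned dir for TF Serving

builder = tf.saved_model.Builder(export_dir)
with tf.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name="")
    inp = sess.graph.get_tensor_by_name("input:0")    # adjust to your graph
    out = sess.graph.get_tensor_by_name("output:0")   # adjust to your graph
    sig = tf.saved_model.predict_signature_def({"inputs": inp}, {"outputs": out})
    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={"serving_default": sig},
    )
builder.save()  # writes export_dir/saved_model.pb
```

Because a frozen graph has its weights baked into constants, there are no variables to save; the builder just records the graph plus the signature that Serving (or the TensorFlow.js converter) needs.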
I was able to download and successfully test the brain parcellation demo of the NiftyNet package. However, this only gives me the final parcellation result of a pre-trained network, whereas I also need access to the outputs of the intermediate layers.
According to this demo, the following line downloads a pre-trained model and a test MR volume:
wget -c https://www.dropbox.com/s/rxhluo9sub7ewlp/parcellation_demo.tar.gz -P ${demopath}
where ${demopath} is the path to the demo folder. Extracting the downloaded file creates a .ckpt file which seems to contain a pre-trained TensorFlow model; however, I could not manage to load it into a TensorFlow session.
Is there a way to load the pre-trained model and access all of its intermediate activation maps? In other words, how can I load pre-trained models from the NiftyNet library into a TensorFlow session so that I can explore the model or probe a certain intermediate layer for any given input image?
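In general, the TF 1.x recipe for this is tf.train.import_meta_graph plus Saver.restore, after which every intermediate tensor is reachable by name through sess.graph. A self-contained sketch with a toy two-layer graph standing in for the downloaded checkpoint (all tensor names here are made up; for the real model you would list sess.graph.get_operations() to find them):

```python
import os
import tempfile
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

ckpt_prefix = os.path.join(tempfile.mkdtemp(), "model.ckpt")

# Build and save a toy checkpoint (stands in for the NiftyNet .ckpt files).
g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, [None, 4], name="input")
    w1 = tf.get_variable("w1", [4, 3])
    hidden = tf.nn.relu(tf.matmul(x, w1), name="hidden")  # intermediate layer
    w2 = tf.get_variable("w2", [3, 2])
    tf.matmul(hidden, w2, name="logits")                  # final layer
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, ckpt_prefix)

# Restore into a fresh graph and probe an intermediate activation map.
tf.reset_default_graph()
restorer = tf.train.import_meta_graph(ckpt_prefix + ".meta")
with tf.Session() as sess:
    restorer.restore(sess, ckpt_prefix)
    # To discover layer names in a real model:
    # print([op.name for op in sess.graph.get_operations()])
    inp = sess.graph.get_tensor_by_name("input:0")
    mid = sess.graph.get_tensor_by_name("hidden:0")       # intermediate output
    activations = sess.run(mid, {inp: np.ones((1, 4), np.float32)})
print(activations.shape)
```

This assumes the checkpoint ships with its .meta graph definition; if only the weights are present, the graph has to be rebuilt in code (e.g. via NiftyNet's network classes) before restoring.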
Finally, NiftyNet's website mentions that "a number of models from the literature have been (re)implemented in the NiftyNet framework". Are pre-trained weights for these models also available? The demo uses a pre-trained model called HighRes3DNet. If pre-trained weights for other models are available, what is the link to download those weights or saved TensorFlow models?
To answer your 'Finally' question first, NiftyNet has some network architectures implemented (e.g., VNet, UNet, DeepMedic, HighRes3DNet) that you can train on your own data. For a few of these, there are pre-trained weights for certain applications (e.g. brain parcellation with HighRes3DNet and abdominal CT segmentation with DenseVNet).
Some of these pre-trained weights are linked from the demos, like the parcellation one you linked to. We are starting to collect the pre-trained models into a model zoo, but this is still a work in progress.
Eli Gibson [NiftyNet developer]