I managed to retrain my specific classification model on top of the generic Inception model following this tutorial. I would now like to deploy it on Google Cloud Machine Learning following these steps.
I already managed to export it as a MetaGraph, but I can't figure out the proper inputs and outputs.
Using it locally, my entry point to the graph is DecodeJpeg/contents:0, which is fed a JPEG image in binary format. The output is my predictions.
The code I use locally (which is working) is:
# 'final_result:0' is the retrained softmax output; feed raw JPEG bytes to 'DecodeJpeg/contents:0'
softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})
Should the input tensor be DecodeJpeg? What changes would I need to make if I wanted to feed a base64-encoded image as input?
I defined the output as:
outputs = {'prediction':softmax_tensor.name}
Any help is highly appreciated.
In your example, the input tensor is 'DecodeJpeg/contents:0', so you would have something like:
inputs = {'image': 'DecodeJpeg/contents:0'}
outputs = {'prediction': 'final_result:0'}
(Be sure to follow all of the instructions for preparing a model).
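For reference, the MetaGraph export flow of that era attached these dictionaries to the graph as JSON-encoded collections before saving with a regular Saver. A minimal sketch, assuming an active session sess that holds the retrained graph and a writable Cloud Storage path (the linked preparation instructions remain authoritative if they differ):
import json
import tensorflow as tf

# Attach the input/output maps as JSON strings in well-known collections
tf.add_to_collection('inputs', json.dumps({'image': 'DecodeJpeg/contents:0'}))
tf.add_to_collection('outputs', json.dumps({'prediction': 'final_result:0'}))

# Saving produces the export.meta and checkpoint files expected by the service
saver = tf.train.Saver()
saver.save(sess, 'gs://my_bucket/path/to/model/export')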
The model directory you intend to export should have files such as:
gs://my_bucket/path/to/model/export.meta
gs://my_bucket/path/to/model/checkpoint*
When you deploy your model, be sure to set gs://my_bucket/path/to/model as the deployment_uri.
To send an image to the service, as you suggest, you will need to base64 encode the image bytes. The body of your request should look like the following (note the 'tag', 'b64', indicating the data is base-64 encoded):
{'instances': [{'b64': base64.b64encode(image)}]}
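For example, a hedged sketch of building and sending such a request from Python 3 with the google-api-python-client library (the project and model names, the image path, and the client setup are placeholders, not something from the original question):
import base64
from googleapiclient import discovery

# Read the raw JPEG bytes and base64-encode them; decode to str so the body is JSON-serializable
with open('image.jpg', 'rb') as f:
    image_b64 = base64.b64encode(f.read()).decode('utf-8')

body = {'instances': [{'b64': image_b64}]}

# Send an online prediction request to the deployed model
service = discovery.build('ml', 'v1')
name = 'projects/my-project/models/my_model'
response = service.projects().predict(name=name, body=body).execute()
print(response)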
We've now released a tutorial on how to retrain the Inception model, including instructions for how to deploy the model on the CloudML service.
https://cloud.google.com/blog/big-data/2016/12/how-to-train-and-classify-images-using-google-cloud-machine-learning-and-cloud-dataflow
To set the context: if I go to https://huggingface.co/ProsusAI/finbert and input the following sentence on their hosted API form
Stocks rallied and the British pound gained.
I get the sentiment as 89.8% positive, 6.7% neutral and the rest negative, which is as one would expect.
However, if I download the TensorFlow version of the model from https://huggingface.co/ProsusAI/finbert/tree/main along with the respective JSON files and run it locally, I get the output
array([[0.2945392 , 0.4717328 , 0.23372805]]), which corresponds to roughly 30% positive sentiment.
The code I am using locally is as follows (modfin is the local folder where I have stored the t5_model.h5 along with the other files):
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("C:/Users/Downloads/modfin", config="C:/Users/Downloads/modfin/config.json", num_labels=3)
tokenizer = AutoTokenizer.from_pretrained("C:/Users/Downloads/modfin", config="C:/Users/Downloads/modfin/tokenizer_config.json")
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='tf')
outputs = model(**inputs)
# softmax over the three class logits
tf.nn.softmax(outputs[0]).numpy()
For the model I also get a warning as follows:
All model checkpoint layers were used when initializing TFBertForSequenceClassification.
Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at C:/Users/Downloads/modfin and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
This is strange, since I would assume a pre-trained model such as FinBERT should already be fine-tuned. When I replace TFAutoModelForSequenceClassification with TFAutoModel, I see that the transformer class chosen automatically is 'TFBertModel', whose output is a 10 x 768 tensor that I am not able to interpret into sentiment classes. Any help here would be greatly appreciated.
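For reference, the three probabilities can be tied to label names through the label mapping stored in config.json rather than read by position; a minimal sketch, assuming the loaded config exposes an id2label mapping (otherwise it falls back to generic LABEL_0/LABEL_1/LABEL_2 names):
import tensorflow as tf

# probabilities for the first sentence in the batch
probs = tf.nn.softmax(outputs[0], axis=-1).numpy()[0]

# model.config.id2label maps output indices to label names as declared in config.json
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], round(float(p), 4))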
I'm quite new to object detection, but I managed to train my first custom TensorFlow model yesterday. I think it worked fine besides some warnings; at least I got my exported_model folder with checkpoint, saved_model and pipeline.config. I built it with exporter_main_v2.py from TensorFlow. I just loaded some images of deer and want to try to detect some in different pictures.
That's what I would like to test now, but I don't know how. I already did an object detection tutorial with pre-trained models and it worked fine. I tried to just replace config_file_path, saved_model_path and image_path with the paths linking to my exported model, but it didn't work:
error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\tensorflow\tf_io.cpp:42: error: (-2:Unspecified error) FAILED: ReadProtoFromBinaryFile(param_file, param). Failed to parse GraphDef file: D:\VSCode\Machine_Learning_Tests\Tensorflow\workspace\exported_models\first_model\saved_model\saved_model.pb in function 'cv::dnn::ReadTFNetParamsFromBinaryFileOrDie'
There are endless tutorials on how to train a custom detection model, but I can't find a good explanation of how to manually test my exported model.
Thanks in advance!
EDIT: I need to know how to build a script where I can load a model I saved with the TensorFlow exporter_main_v2.py together with an image I want to test it on, and get a result, either as text or as rectangles drawn on the picture. I have seen many tutorials, but none of them works for me with a model saved via exporter_main_v2.py.
From the error it looks like you have a model saved as a .pb. If you want to do inference, you can write something like this:
import tensorflow as tf

# load the model from its directory
model = tf.keras.models.load_model(my_model_dir)
prediction = model.predict(x=x_test, ...)
You'll have to set x, which is the only mandatory argument; it is your test data (the images you want predictions for). predict is useful when you have a large number of images: it runs the prediction in batches, which avoids filling up memory. If you only have a few images, you can call the model directly through its __call__() method, like this:
prediction = model(x_test, training=False)
More about predict can be found in the TensorFlow documentation.
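If the Keras loader does not accept the exported model, another common route is to load the SavedModel directly and call its serving signature. A sketch, assuming a standard TF2 Object Detection API export from exporter_main_v2.py and a hypothetical test image path:
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the SavedModel produced by exporter_main_v2.py (point at the saved_model/ subfolder)
detect_fn = tf.saved_model.load('exported_models/first_model/saved_model')

# The serving signature expects a batched uint8 tensor of shape [1, height, width, 3]
image = np.array(Image.open('deer_test.jpg').convert('RGB'))
input_tensor = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.uint8)

detections = detect_fn(input_tensor)

# Typical outputs include detection_boxes, detection_scores and detection_classes
boxes = detections['detection_boxes'][0].numpy()
scores = detections['detection_scores'][0].numpy()
classes = detections['detection_classes'][0].numpy().astype(int)
for box, score, cls in zip(boxes, scores, classes):
    if score > 0.5:  # arbitrary confidence threshold
        print(cls, score, box)  # box is [ymin, xmin, ymax, xmax] in normalized coordinates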
I am trying to generate a custom TensorFlow model (a .tf/.tflite file) which I want to use in my mobile application.
I have gone through a few machine learning and TensorFlow blogs, and from there I started generating a simple ML model.
https://www.datacamp.com/community/tutorials/tensorflow-tutorial
https://www.edureka.co/blog/tensorflow-object-detection-tutorial/
https://blog.metaflow.fr/tensorflow-how-to-freeze-a-model-and-serve-it-with-a-python-api-d4f3596b3adc
https://www.youtube.com/watch?v=ICY4Lvhyobk
All of these are really nice and guided me through the steps below:
i) Install all necessary tools (TensorFlow, Python, Jupyter, etc.).
ii) Load the training and testing data.
iii) Run the TensorFlow session to train and evaluate the results.
iv) Steps to increase the accuracy.
But I am not able to generate the .tf/.tflite files.
I tried the following code, but it generates an empty file.
converter = tf.contrib.lite.TFLiteConverter.from_session(sess,[],[])
model = converter.convert()
file = open( 'model.tflite' , 'wb' )
file.write( model )
I have checked a few answers on Stack Overflow, and as per my understanding, in order to generate the .tf files we need to create the .pb file, freeze it, and then generate the .tf files from it.
But how can we achieve this?
TensorFlow provides the TFLite converter to convert a SavedModel to a TFLite model. For more details, see here; a short conversion sketch follows the list below.
tf.lite.TFLiteConverter.from_saved_model() (recommended): Converts a SavedModel.
tf.lite.TFLiteConverter.from_keras_model(): Converts a Keras model.
tf.lite.TFLiteConverter.from_concrete_functions(): Converts concrete functions.
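For instance, a minimal sketch of the recommended from_saved_model route (the SavedModel directory path is a placeholder):
import tensorflow as tf

# Convert a SavedModel directory to a TFLite flatbuffer
converter = tf.lite.TFLiteConverter.from_saved_model('path/to/saved_model')
tflite_model = converter.convert()

# Write the converted model to disk
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)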
I'm working on an image classifier with TensorFlow Estimator + Keras, retraining the last layer of a pretrained inception_v3 application on GCP ML Engine.
The Keras model is exported with tf.keras.estimator.model_to_estimator, and the input function receives the path of the image stored on GCP Cloud Storage, opens the image with tf.image.decode_jpeg and returns a dataset with the following format: dict(zip(['inception_v3_input'], [image])), label
I'm trying to define the tf.estimator.export.ServingInputReceiver, but I'm having some trouble with it.
The model serves predictions correctly with the predict method, using the input function without the labels.
My idea was to reuse the input function to decode the image, passing only the path of the image on Cloud Storage, also for the Google endpoint prediction, but I can't understand how to do it.
Thanks for your help.
If I'm understanding correctly, your question is how to get the file from Cloud Storage, considering that you want to decode the image this way:
image_decoded = tf.image.decode_jpeg(image_string)
So, in this case, you can use:
image_string = file_io.FileIO(filename, mode='rb').read()
By importing file_io first:
from tensorflow.python.lib.io import file_io
According to the comments on this question about reading input data from GCS, using the read_file function should provide the same results, since 'there was a bunch of work done to abstract file IO and file systems, so all the IO functionality works consistently'. So you can also try the tf.read_file function.
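Building on that, one way to accept only the Cloud Storage path at prediction time is a serving input receiver that reads and decodes the image inside the graph. A sketch, assuming TF 1.x, the 'inception_v3_input' feature name from your training input function, and a 299x299 Inception v3 input (any extra preprocessing applied at training time would need to be replicated here):
import tensorflow as tf

def serving_input_receiver_fn():
    # The prediction request carries only the GCS path(s) of the image(s)
    image_path = tf.placeholder(dtype=tf.string, shape=[None], name='image_path')

    def _load_and_preprocess(path):
        # tf.read_file understands gs:// paths, so the image is read and decoded server-side
        image = tf.image.decode_jpeg(tf.read_file(path), channels=3)
        image = tf.image.resize_images(image, [299, 299])
        return tf.cast(image, tf.float32)

    images = tf.map_fn(_load_and_preprocess, image_path, dtype=tf.float32)
    features = {'inception_v3_input': images}
    return tf.estimator.export.ServingInputReceiver(features, {'image_path': image_path})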
I have a model trained with tf.estimator and it was exported after training as below
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    feature_placeholders)
classifier.export_savedmodel(
    r'./path/to/model/trainedModel', serving_input_fn)
This gives me a saved_model.pb and a folder which contains weights as a .data file. I can reload the saved model using
predictor = tf.contrib.predictor.from_saved_model(r'./path/to/model/trainedModel')
I'd like to run this model on Android, and that requires the model to be in .pb format. How can I freeze this predictor for use on the Android platform?
I don't deploy to Android, so you might need to customize the steps a bit, but this is how I do this:
Run <tensorflow_root_installation>/python/tools/freeze_graph.py with the arguments --input_saved_model_dir=<path_to_the_savedmodel_directory>, --output_node_names=<full_name_of_the_output_node> (you can get the name of the output node from graph.pbtxt, although that's not the most convenient way), and --output_graph=frozen_model.pb.
(optionally) Run <tensorflow_root_installation>/python/tools/optimize_for_inference.py with adequate arguments. Alternatively you can look up the Graph Transform Tool and selectively apply optimizations.
At the end of step 1 you'll already have a frozen model with no variables left, which you can then deploy to Android.
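Alternatively, the same freezing can be done programmatically from the SavedModel directory; a sketch, assuming TF 1.x and a hypothetical output node name (substitute the real one from your graph):
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel exported by export_savedmodel into a fresh graph
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], './path/to/model/trainedModel')

    # Bake the variable values into constants, keeping everything needed to compute the output node
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ['output_node_name'])

    # Write the frozen GraphDef as a single .pb file
    with tf.gfile.GFile('frozen_model.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())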