Image classification and TensorFlow Serving

First of all, sorry if this question is not very precise, but I am studying TensorFlow Serving and how to put my CNN into production. Honestly, the documentation is quite confusing to me. I hope you can help me understand the SavedModel architecture better. So please reply to me as a teacher; I would like to understand the whole flow.
I am developing a simple CNN that classifies an image into 4 output classes.
I need TensorFlow Serving to put it into production.
The input image can be of any size; the CNN should resize it first and then predict.
Here is the code:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from keras.preprocessing.image import ImageDataGenerator
from matplotlib import pyplot as plt
from scipy.misc import toimage
from keras.models import Sequential
from keras.layers import *
from keras.optimizers import *
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants, signature_def_utils_impl
import cv2
#train_path='Garage/train'
#train_datagen = ImageDataGenerator(rescale=1./255)
#train_batch = train_datagen.flow_from_directory(train_path, target_size=(64,64), class_mode='categorical', batch_size=10, color_mode='grayscale')
#validation_datagen = ImageDataGenerator(rescale=1./255)
#validation_batch = validation_datagen.flow_from_directory(
#    './Garage/validation',
#    target_size=(64, 64),
#    batch_size=3,
#    class_mode='categorical', color_mode='grayscale')
model = Sequential()
model.add(InputLayer(input_shape=[64,64,1]))
model.add(Conv2D(filters=32,kernel_size=5,strides=1,padding='same',activation='relu'))
model.add(MaxPool2D(pool_size=5,padding='same'))
model.add(Conv2D(filters=50,kernel_size=5,strides=1,padding='same',activation='relu'))
model.add(MaxPool2D(pool_size=5,padding='same'))
model.add(Conv2D(filters=80,kernel_size=5,strides=1,padding='same',activation='relu'))
model.add(MaxPool2D(pool_size=5,padding='same'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512,activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(4,activation='softmax'))
optimizer=Adam(lr=1e-3)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
#model.fit_generator(
#    train_batch,
#    epochs=50,
#    steps_per_epoch=6,
#    validation_data=validation_batch,
#    validation_steps=5)
model.load_weights('model.h5')
#score = model.evaluate_generator(validation_batch,steps=3)
#print('Test loss:', score[0])
#print('Test accuracy:', score[1])
#model.save('model.h5')
from PIL import Image
import requests
from io import BytesIO
response = requests.get('http://192.168.3.21:7451/shot.jpg')
image_pil = Image.open(BytesIO(response.content))
image = np.asarray(image_pil)
img2 = cv2.resize(image,(64,64))
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
img = np.reshape(img2,[1,64,64,1])
classes = model.predict_classes(img)
print(classes)
model_version="1"
sess = tf.Session()
#setting values for the sake of saving the model in the proper format
x = model.input
y = model.output
prediction_signature = tf.saved_model.signature_def_utils.predict_signature_def({"inputs":x}, {"prediction":y})
valid_prediction_signature = tf.saved_model.signature_def_utils.is_valid_signature(prediction_signature)
if(valid_prediction_signature == False):
    raise ValueError("Error: Prediction signature not valid!")
builder = saved_model_builder.SavedModelBuilder('./'+model_version)
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
# Add the meta_graph and the variables to the builder
builder.add_meta_graph_and_variables(
    sess, [tag_constants.SERVING],
    signature_def_map={
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature,
    },
    legacy_init_op=legacy_init_op)
# save the graph
builder.save()
The code takes the picture from a camera at http://192.168.3.21:7451/shot.jpg
and then predicts it.
When I run the code it returns a lot of errors while trying to save the model. Can you please check it and tell me whether the SavedModel instructions are right?
I use x = model.input as the input for serving, but I would like it to take the picture as input from the server.
I am quite confused, actually, sorry.
The goal is that when I request a prediction for the image over gRPC, the model gives me back the prediction result.
Thanks

I tried to leave a comment because I don't have a definitive answer for you, but I didn't have enough space. Hopefully this info is helpful and can pass for an "answer".
Anyway, it's tough to say what the problems are for a TensorFlow newbie like me without seeing any errors.
One thing I noticed is that the call to predict_signature_def() doesn't seem to follow the method signature I found here.
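For comparison, a commonly seen TF 1.x export pattern is sketched below. This is not a verified fix for the errors above; the main difference from the question's code is that it reuses the Keras backend session instead of creating a fresh tf.Session(), since a brand-new session does not contain the trained variables (a frequent cause of errors when calling builder.save()). Here model is the trained Keras model from the question.

import tensorflow as tf
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants

# Reuse the session that already holds the trained Keras variables.
sess = tf.keras.backend.get_session()

prediction_signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={"inputs": model.input},
    outputs={"prediction": model.output})

builder = saved_model_builder.SavedModelBuilder('./1')  # '1' is the model version directory
builder.add_meta_graph_and_variables(
    sess, [tag_constants.SERVING],
    signature_def_map={
        signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: prediction_signature})
builder.save()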
Also, I don't think you want to do your image downloading/processing in the same code your model lives in. TF Serving isn't supposed to run pre/post-processing; it just hosts your models.
So what you can do is create something like a RESTful service that accepts the image, runs your preprocessing on it, and sends the processed image as part of the request to TF Serving (a minimal sketch of such a proxy follows the list below). It looks something like this:
user+image
-> requests classification to RESTful service
-> REST API receives image
-> REST service resizes image
-> REST service makes classification request to TFS (where your model is)
-> TFS receives request, including resized/preprocessed image
-> TFS invokes classification using your model and the arguments you sent
-> TFS responds to REST service with model's response
-> REST service responds to user with the classification from your model
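Here is a minimal sketch of such a proxy. It assumes the model was exported with the signature shown in the question, that TF Serving is reachable on its default REST port 8501 under a model name of "garage", and that the client uploads the image in a form field called "image"; all of those names are placeholders. It uses TF Serving's REST API for brevity, but a gRPC call via the tensorflow-serving-api package would slot into the same place.

# Hypothetical REST proxy: accepts an image, preprocesses it the same way as
# during training, forwards it to TF Serving, and returns the prediction.
import io
import cv2
import numpy as np
import requests
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
TFSERVING_URL = "http://localhost:8501/v1/models/garage:predict"  # placeholder host/model name

@app.route("/classify", methods=["POST"])
def classify():
    # Read the uploaded image and resize/grayscale it exactly as in the question.
    image = np.asarray(Image.open(io.BytesIO(request.files["image"].read())))
    image = cv2.resize(image, (64, 64))
    image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    instances = image.reshape(1, 64, 64, 1).astype(float).tolist()

    # Forward the preprocessed tensor to TF Serving and relay its answer.
    response = requests.post(TFSERVING_URL, json={"instances": instances})
    return jsonify(response.json())

if __name__ == "__main__":
    app.run(port=5000)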
This sort of sucks because passing images around the network is inefficient and can be slow but you can optimize when you find pain points.
The main idea is that your models should be saved off into an artifact/binary that can run and doesn't need your code to run. This lets you separate out modeling from data pre and post processing and gives your models a more consistent place to run from; e.g. you don't need to worry about competing versions of dependencies for your model to run.
The downside is it can be a bit of a learning curve to get these pieces to fit nicely after breaking them out from the monolithic architecture it seems like you have.
So, until a real Tensorflow knower comes along with a real answer, I hope this helps a bit.

Related

Tensorflow: Classifying images in batches

I have followed this TensorFlow tutorial to classify images using transfer learning approach. Using almost 16,000 manually classified images (with about 40/60 split of 1/0) added on top of the pre-trained MobileNet V2 model, my model achieved 96% accuracy on the hold out test set. I then saved the resulting model.
Next, I would like to use this trained model to classify new images. To do so, I have adapted one of the portions of the tutorial's code (at the end, where it says #Retrieve a batch of images from the test set) in the way described below. The code works; however, it only processes one batch of 32 images and that's it (there are hundreds of images in the source folder). What am I missing here? Please advise.
# Import libraries
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import preprocessing
from tensorflow.keras.preprocessing import image_dataset_from_directory
import matplotlib.pyplot as plt
import numpy as np
import os
# Load saved model
model = tf.keras.models.load_model('/model')
# Re-compile model
base_learning_rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.Adam(lr=base_learning_rate),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
# Define paths
PATH = 'Data/'
new_dir = os.path.join(PATH, 'New_images') # New_images must contain at least one class (sub-folder)
IMG_SIZE = (640, 640)
BATCH_SIZE = 32
new_dataset = image_dataset_from_directory(new_dir, shuffle=True, batch_size=BATCH_SIZE, image_size=IMG_SIZE)
# Retrieve a batch of images from the test set
image_batch, label_batch = new_dataset.as_numpy_iterator().next()
predictions = model.predict_on_batch(image_batch).flatten()
# Apply a sigmoid since our model returns logits
predictions = tf.nn.sigmoid(predictions)
predictions = tf.where(predictions < 0.5, 0, 1)
print('Predictions:\n', predictions.numpy())
len(new_dataset) # equals 25, i.e., there are 25 batches
Replace this code:
# Retrieve a batch of images from the test set
image_batch, label_batch = new_dataset.as_numpy_iterator().next()
predictions = model.predict_on_batch(image_batch).flatten()
with this one:
predictions = model.predict(new_dataset,batch_size=BATCH_SIZE).flatten()
tf.data.Dataset objects can be directly passed to the method predict(). Reference
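If you also want per-batch access (for example, to keep the labels next to the predictions), looping over the whole dataset works as well. A short sketch reusing the variables from the question:

# Iterate over every batch in the dataset instead of only the first one.
all_predictions = []
for image_batch, label_batch in new_dataset:
    logits = model.predict_on_batch(image_batch).flatten()
    probabilities = tf.nn.sigmoid(logits)  # the model returns logits
    all_predictions.append(tf.where(probabilities < 0.5, 0, 1).numpy())
predictions = np.concatenate(all_predictions)
print('Predictions:\n', predictions)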

Saving, loading, and predicting from a TensorFlow Estimator model (2.0)

Is there a guide anywhere for serializing and restoring Estimator models in TF2? The documentation is very spotty, and much of it is not updated to TF2. I've yet to see a clear and complete example anywhere of an Estimator being saved, loaded from disk, and used to predict from new inputs.
TBH, I'm a bit baffled by how complicated this appears to be. Estimators are billed as simple, relatively high-level ways of fitting standard models, yet the process for using them in production seems very arcane. For example, when I load a model from disk via tf.saved_model.load(export_path) I get an AutoTrackable object:
<tensorflow.python.training.tracking.tracking.AutoTrackable at 0x7fc42e779f60>
It's not clear why I don't get my Estimator back. It looks like there used to be a useful-sounding function tf.contrib.predictor.from_saved_model, but since contrib is gone, it does not appear to be in play anymore (except, it appears, in TFLite).
Any pointers would be very helpful. As you can see, I'm a bit lost.
Maybe the author doesn't need the answer anymore, but I was able to save and load a DNNClassifier using TensorFlow 2.1:
# training.py
from pathlib import Path
import tensorflow as tf
....
# Creating the estimator
estimator = tf.estimator.DNNClassifier(
    model_dir = <model_dir>,
    hidden_units = [1000, 500],
    feature_columns = feature_columns,  # this is a list defined earlier
    n_classes = 2,
    optimizer = 'adam')
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
export_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
servable_model_path = Path(estimator.export_saved_model(<model_dir>, export_input_fn).decode('utf8'))
print(f'Model saved at {servable_model_path}')
For loading, you found the correct method; you just need to retrieve the predict_fn:
# testing.py
import tensorflow as tf
import pandas as pd
def predict_input_fn(test_df):
    '''Convert your dataframe using tf.train.Example() and tf.train.Features()'''
    examples = []
    ....
    return tf.constant(examples)
test_df = pd.read_csv('test.csv', ...)
# Loading the estimator
predict_fn = tf.saved_model.load(<model_dir>).signatures['predict']
# Predict
predictions = predict_fn(examples=predict_input_fn(test_df))
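The elided part of predict_input_fn typically builds serialized tf.train.Example protos. A minimal sketch, assuming every column of test_df is numeric and named exactly like the feature columns used for training:

# Hypothetical body for predict_input_fn; adjust the Feature type per column
# (FloatList/Int64List/BytesList) to match your feature_columns.
import tensorflow as tf

def predict_input_fn(test_df):
    examples = []
    for _, row in test_df.iterrows():
        feature = {
            name: tf.train.Feature(
                float_list=tf.train.FloatList(value=[float(value)]))
            for name, value in row.items()
        }
        example = tf.train.Example(features=tf.train.Features(feature=feature))
        examples.append(example.SerializeToString())
    return tf.constant(examples)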
Hope that this can help other people too (:

Too Much Memory Issue with Semantic Image Segmentation NN (DeepLabV3+)

I first explain my task: I have nearly 3000 images from two different ropes. They contain rope 1, rope 2 and the background. My Labels/Masks are images, where for example the pixel value 0 represents the background, 1 represents the first rope and 2 represents the second rope. You can see both the input picture and the ground truth/labels here on picture 1 and 2 below. Notice that my ground truth/label has only 3 values: 0, 1 and 2.
My input picture is gray, but for DeepLab I converted it to an RGB picture, because DeepLab was trained on RGB pictures. My converted picture still doesn't contain any color, though.
The idea of this task is that the neural network should learn the structure of the ropes, so it can label ropes correctly even if there are knots. Therefore the color information is not important, because my ropes have different colors, so it is easy to use KMeans for creating the ground truth/labels.
For this task I chose a semantic segmentation network called DeepLab V3+ in Keras with TensorFlow as the backend. I want to train the NN with my nearly 3000 images. The size of all the images together is under 100MB and they are 300x200 pixels.
Maybe DeepLab is not the best choice for my task, because my pictures don't contain color information and they are very small (300x200), but I haven't found any better semantic segmentation NN for my task so far.
From the Keras website I know how to load the data with flow_from_directory and how to use the fit_generator method. I don't know if my code is logically correct...
Here are the links:
https://keras.io/preprocessing/image/
https://keras.io/models/model/
https://github.com/bonlime/keras-deeplab-v3-plus
My first question is:
With my implementation my graphics card uses nearly all of its memory (11GB). I don't know why. Is it possible that the weights of DeepLab are that big? My batch size is the default 32 and all my nearly 3000 images together are under 100MB. I already used the config.gpu_options.allow_growth = True setting, see my code below.
A general question:
Does somebody know a good semantic segmentation NN for my task? I don't need one that was trained on color images, but I also don't need one that was trained on binary ground truth pictures...
I tested my raw color image (picture 3) with DeepLab, but the resulting labels I got were not good...
Here is my code so far:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "3"
import numpy as np
from model import Deeplabv3
import tensorflow as tf
import time
import tensorboard
import keras
from keras.preprocessing.image import img_to_array
from keras.applications import imagenet_utils
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import TensorBoard
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
from keras import backend as K
K.set_session(session)
NAME = "DeepLab-{}".format(int(time.time()))
deeplab_model = Deeplabv3(input_shape=(300,200,3), classes=3)
tensorboard = TensorBoard(log_dir="logpath/{}".format(NAME))
deeplab_model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=['accuracy'])
# we create two instances with the same arguments
data_gen_args = dict(featurewise_center=True,
                     featurewise_std_normalization=True,
                     rotation_range=90,
                     width_shift_range=0.1,
                     height_shift_range=0.1,
                     zoom_range=0.2)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)
# Provide the same seed and keyword arguments to the fit and flow methods
seed = 1
#image_datagen.fit(images, augment=True, seed=seed)
#mask_datagen.fit(masks, augment=True, seed=seed)
image_generator = image_datagen.flow_from_directory(
    '/path/Input/',
    target_size=(300,200),
    class_mode=None,
    seed=seed)
mask_generator = mask_datagen.flow_from_directory(
    '/path/Label/',
    target_size=(300,200),
    class_mode=None,
    seed=seed)
# combine generators into one which yields image and masks
train_generator = zip(image_generator, mask_generator)
print("compiled")
#deeplab_model.fit(X, y, batch_size=32, epochs=10, validation_split=0.3, callbacks=[tensorboard])
deeplab_model.fit_generator(train_generator, steps_per_epoch= np.uint32(2935 / 32), epochs=10, callbacks=[tensorboard])
print("finish fit")
deeplab_model.save_weights('deeplab_1.h5')
deeplab_model.save('deeplab-1')
session.close()
Here is my code to test DeepLab (from Github):
from matplotlib import pyplot as plt
import cv2 # used for resize. if you dont have it, use anything else
import numpy as np
from model import Deeplabv3
import tensorflow as tf
from PIL import Image, ImageEnhance
deeplab_model = Deeplabv3(input_shape=(512,512,3), classes=3)
#deeplab_model = Deeplabv3()
img = Image.open("Path/Input/0/0001.png")
imResize = img.resize((512,512), Image.ANTIALIAS)
imResize = np.array(imResize)
img2 = cv2.cvtColor(imResize, cv2.COLOR_GRAY2RGB)
w, h, _ = img2.shape
ratio = 512. / np.max([w,h])
resized = cv2.resize(img2,(int(ratio*h),int(ratio*w)))
resized = resized / 127.5 - 1.
pad_x = int(512 - resized.shape[0])
resized2 = np.pad(resized,((0,pad_x),(0,0),(0,0)),mode='constant')
res = deeplab_model.predict(np.expand_dims(resized2,0))
labels = np.argmax(res.squeeze(),-1)
plt.imshow(labels[:-pad_x])
plt.show()
First question: DeepLabV3+ is a very large model (I assume you are using the Xception backbone?!) and 11 GB of needed GPU capacity is totally normal for a batch size of 32 with 200x300 pixels :) (Training DeepLabV3+, I needed approx. 11 GB using a batch size of 5 with 500x500 pixels). One note on the second sentence of your question: the needed GPU resources are influenced by many factors (model, optimizer, batch size, image crop, preprocessing etc.), but the actual size of your dataset shouldn't influence it. So it doesn't matter if your dataset is 300MB or 300GB large.
General question: You are using a small dataset. Choosing DeepLabV3+ & Xception might not be a good fit, since the model might be too large. This might lead to overfitting. If you haven't obtained satisfying results yet, you might try a smaller network. If you want to stick to the DeepLab framework, you could switch the backbone from the Xception network to MobileNetV2 (in the official TensorFlow version it is already implemented). Alternatively, you could try using a standalone network like the Inception network with an FCN head...
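If you stay with the bonlime Keras implementation linked in the question, switching the backbone could look roughly like the sketch below. The backbone keyword is an assumption based on that repository's API, so check its README before relying on it; lowering the batch size in flow_from_directory is a second, independent way to cut peak GPU memory.

# Sketch: DeepLabV3+ with the MobileNetV2 backbone instead of Xception.
from model import Deeplabv3

deeplab_model = Deeplabv3(input_shape=(300, 200, 3), classes=3,
                          backbone='mobilenetv2')  # assumed keyword, see repo README
deeplab_model.compile(loss="categorical_crossentropy", optimizer="adam",
                      metrics=['accuracy'])

# If memory is still tight, also pass e.g. batch_size=8 where the image and
# mask generators are created with flow_from_directory.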
In each case it would be essential to use a pre-trained encoder with a well-trained feature representation. If you don't find a good initialization of your desired model based on grayscale input images, just use a model pre-trained on RGB images, extend the pre-training with a grayscale dataset (basically you can convert any big RGB dataset to grayscale), and finetune the weights on the grayscale input before using your data.
I hope this helps! Cheers, Frank
IBM's Large Model Support (LMS) library enables training of large deep neural networks that would normally exhaust GPU memory while training. LMS manages this over-subscription of GPU memory by temporarily swapping tensors to host memory when they are not needed.
Description - https://developer.ibm.com/components/ibm-power/articles/deeplabv3-image-segmentation-with-pytorch-lms/
Pytorch - https://github.com/IBM/pytorch-large-model-support
TensorFlow - https://github.com/IBM/tensorflow-large-model-support

Can't import frozen graph with BatchNorm layer

I have trained a Keras model based on this repo.
After the training I save the model as checkpoint files like this:
sess=tf.keras.backend.get_session()
saver = tf.train.Saver()
saver.save(sess, current_run_path + '/checkpoint_files/model_{}.ckpt'.format(date))
Then I restore the graph from the checkpoint files and freeze it using the standard tf freeze_graph script. When I want to restore the frozen graph I get the following error:
Input 0 of node Conv_BN_1/cond/ReadVariableOp/Switch was passed float from Conv_BN_1/gamma:0 incompatible with expected resource
How can I fix this issue?
Edit: My problem is related to this question. Unfortunately, I can't use the workaround.
Edit 2:
I have opened an issue on github and created a gist to reproduce the error.
https://github.com/keras-team/keras/issues/11032
Just resolved the same issue. I connected these few answers: 1, 2, 3 and realized that the issue originated from the batch norm layer's working state: training or inference. So, in order to resolve the issue you just need to place one line before loading your model:
keras.backend.set_learning_phase(0)
Complete example to export a model:
import tensorflow as tf
from tensorflow.python.framework import graph_io
from tensorflow.keras.applications.inception_v3 import InceptionV3
def freeze_graph(graph, session, output):
    with graph.as_default():
        graphdef_inf = tf.graph_util.remove_training_nodes(graph.as_graph_def())
        graphdef_frozen = tf.graph_util.convert_variables_to_constants(session, graphdef_inf, output)
        graph_io.write_graph(graphdef_frozen, ".", "frozen_model.pb", as_text=False)
tf.keras.backend.set_learning_phase(0) # this line most important
base_model = InceptionV3()
session = tf.keras.backend.get_session()
INPUT_NODE = base_model.inputs[0].op.name
OUTPUT_NODE = base_model.outputs[0].op.name
freeze_graph(session.graph, session, [out.op.name for out in base_model.outputs])
To load the *.pb model:
from PIL import Image
import numpy as np
import tensorflow as tf
# https://i.imgur.com/tvOB18o.jpg
im = Image.open("/home/chichivica/Pictures/eagle.jpg").resize((299, 299), Image.BICUBIC)
im = np.array(im) / 255.0
im = im[None, ...]
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
graph = tf.Graph()
with graph.as_default():
    net_inp, net_out = tf.import_graph_def(
        graph_def, return_elements=["input_1", "predictions/Softmax"]
    )
with tf.Session(graph=graph) as sess:
    out = sess.run(net_out.outputs[0], feed_dict={net_inp.outputs[0]: im})
print(np.argmax(out))
This is a bug with TensorFlow 1.1x and, as another answer stated, it is because of the internal batch norm learning vs. inference state. In TF 1.14.0 you actually get a cryptic error when trying to freeze a batch norm layer.
Using set_learning_phase(0) will put the batch norm layer (and probably others like dropout) into inference mode and thus the batch norm layer will not work during training, leading to reduced accuracy.
My solution is this:
Create the model using a function (do not use K.set_learning_phase(0)):
def create_model():
    inputs = Input(...)
    ...
    return model

model = create_model()
Train model
Save weights:
model.save_weights("weights.h5")
Clear session (important so layer names are the same) and set learning phase to 0:
K.clear_session()
K.set_learning_phase(0)
Recreate model and load weights:
model = create_model()
model.load_weights("weights.h5")
Freeze as before
Thanks for pointing out the main issue! I found that keras.backend.set_learning_phase(0) does not work sometimes, at least in my case.
Another approach might be: for l in keras_model.layers: l.trainable = False

Understanding data augmentation in the object detection API

I am using the object detection API to train with a different dataset and I would like to know if it is possible to have sample images of what is reaching the network during the training.
I ask this because I am trying to find a good combination of data augmentation options (the options are listed here), but the results after adding them have been worse. Seeing what reaches the network during training would be very helpful.
Another question is whether it is possible to get the API to help with balancing the classes, in case the dataset passed to it has them unbalanced.
Thank you!
Yes, it is possible. In short, you need to get an instance of tf.data.Dataset. Then you can iterate over it and get the network input data as NumPy arrays. Saving them to image files using PIL or OpenCV is then trivial.
Assuming you use TF2, the pseudo-code is like this:
# Assumes numpy (np), cv2 and the object detection field names are available,
# e.g. from object_detection.core import standard_fields as fields.
ds = ... get dataset object somehow
sample_num = 0
MAX_SAMPLES = 100  # placeholder: how many images to dump
for features, _ in ds:
    images = features[fields.InputDataFields.image]  # a [batch_size, H, W, C] float32 tensor with preprocessed images
    batch_size = images.shape[0]
    for i in range(batch_size):
        image = np.array(images[i] * 255).astype(np.uint8)  # assuming input data is only scaled to [0..1]
        cv2.imwrite('sample_{}.png'.format(sample_num), image)  # output path is a placeholder
        sample_num += 1
    if sample_num >= MAX_SAMPLES:
        break
The trick here is to get the Dataset instance. Google's object detection API is very sophisticated, but I guess you should start by calling the train_input function here: https://github.com/tensorflow/models/blob/3c8b6f1e17e230b68519fd8d58c4dd9e9570d789/research/object_detection/inputs.py#L763
It requires pipeline config sub-parts describing training, train_input and the model.
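A hedged sketch of wiring those pieces together; the helper and argument names below are taken from object_detection/inputs.py and utils/config_util.py around the commit linked above, so treat them as assumptions and check your checked-out version:

# Sketch (TF2 object detection API): parse a pipeline.config and build the
# training tf.data.Dataset from it.
from object_detection import inputs
from object_detection.utils import config_util

configs = config_util.get_configs_from_pipeline_file('path/to/pipeline.config')
train_ds = inputs.train_input(
    train_config=configs['train_config'],
    train_input_config=configs['train_input_config'],
    model_config=configs['model'])
# train_ds can then be iterated as in the pseudo-code above.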
You can find some code snippets on how to work with pipeline here: Dynamically Editing Pipeline Config for Tensorflow Object Detection
import argparse
import tensorflow as tf
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2
def parse_arguments():
    parser = argparse.ArgumentParser(description='')
    parser.add_argument('pipeline')
    parser.add_argument('output')
    return parser.parse_args()

def main():
    args = parse_arguments()
    pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
    with tf.gfile.GFile(args.pipeline, "r") as f:
        proto_str = f.read()
        text_format.Merge(proto_str, pipeline_config)