I am using TensorFlow Serving to serve a model that has an additional signature. Having the additional signature changes the output response (just the keys; the values are correct). Why would that happen?
import tensorflow as tf
from tensorflow.python.keras import Input, Model
from tensorflow.python.keras.layers import Dense
input1 = Input(shape=(3,), dtype=tf.float32, name='value')
dense = Dense(1, activation='tanh', name='dense')(input1)
prediction = Dense(1, activation='tanh', name='prediction_label')(dense)
inputs = {"inputs"}
model = Model(inputs = input1, outputs=prediction, name='models')
tf.saved_model.save(model, "model_with_single_endpoint")
@tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.int32)])
def get_version(x):
    return tf.constant('1.1.1.1')
main_sig = tf.function(model, input_signature=[model._get_save_spec()]).get_concrete_function()
model.save("model_with_two_endpoint", signatures={'serving_default': main_sig, 'get_version': get_version}, save_format='tf')
When I load these two models in TF Serving, the responses to the same request are:
model_with_single_signature
outputs {
  key: "prediction_label"
  value {
    dtype: DT_FLOAT
    tensor_shape {
      dim {
        size: 1
      }
      dim {
        size: 1
      }
    }
    float_val: 0.2734697759151459
  }
}
model_with_two_signatures
outputs {
  key: "output_0"
  value {
    dtype: DT_FLOAT
    tensor_shape {
      dim {
        size: 1
      }
      dim {
        size: 1
      }
    }
    float_val: 0.2734697759151459
  }
}
Why is the outputs key different when I add a new signature? Thanks for the help!
tensorflow==2.3 and tensorflow-serving-api==2.3
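For reference, a minimal workaround sketch (my own addition, not part of the original question): when a signature's concrete function returns a bare tensor, the exported SignatureDef tends to fall back to generic output keys such as output_0, whereas returning a dict from the signature function pins the key explicitly. Names below follow the snippet above.
# Sketch only: wrap the model call in a tf.function that returns a dict keyed
# by the desired output name, and export that as serving_default.
@tf.function(input_signature=[tf.TensorSpec(shape=(None, 3), dtype=tf.float32, name='value')])
def serve(value):
    return {'prediction_label': model(value)}

@tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.int32)])
def get_version(x):
    return tf.constant('1.1.1.1')

model.save("model_with_two_endpoint",
           signatures={'serving_default': serve.get_concrete_function(),
                       'get_version': get_version.get_concrete_function()},
           save_format='tf')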
I've been stuck with this problem for a while and I can't find a solution here on StackOverflow for this case.
I'm trying to build a prediction model for a chatbot. The predictor is a message in bag-of-words format, a tensor of shape [1,108], while the target variable is another tensor of dummies from a nominal categorical variable, with shape [1,15].
When I run a prediction it returns an array with random probabilities for each of the dummies.
I used brain.js with the same functions and it gave me good predictions; however, I am having trouble doing the same thing here with TensorFlow.
Here is my code:
async function trainModel() {
  XandY = await createTrainingData()
  let X = XandY[0];
  var y = XandY[1];
  const inputShape = [X.length, X[0].length] // [# of observations, # of features]
  const outputShape = [y.length, y[0].length]
  X = tf.tensor2d(X, inputShape)
  y = tf.tensor2d(y, outputShape)
  traningDataObject = {
    data: X,
    target: y,
  }
  const model = tf.sequential();
  model.add(tf.layers.dense(
    { units: 128, activation: 'relu', inputShape: [108] }));
  model.add(tf.layers.dense(
    { units: 64, activation: 'relu' }));
  model.add(tf.layers.dense(
    { units: 32, activation: 'relu' }));
  model.add(tf.layers.dense(
    { units: 15, activation: 'softmax' }));
  model.compile({
    optimizer: tf.train.adam(0.01),
    loss: 'categoricalCrossentropy',
    metrics: ['accuracy']
  });
  model.fit(X, y,
    { epochs: 150, validationData: [X, y] });
  return model
}
async function getPrediction(message, wordset) {
  model = await trainModel();
  bow = bagOfWords(message, wordset);
  const input = tf.tensor([bagOfWords(message, wordset)]);
  var prediction = model.predict(input);
  var prediction_values = prediction.dataSync();
  var prediction_array = Array.from(prediction_values);
  console.log(prediction_array)
  let greatestProba = 0;
  prediction_array.forEach((element) => {
    if (greatestProba < element) {
      greatestProba = element;
    }
  });
  if (greatestProba > 0.02) {
    return intents[prediction_array.indexOf(greatestProba)];
  } else {
    return 'undefined'
  }
}
async function main() {
  const wordset = await getWordset();
  const message = "Meu pagamento não caiu";
  console.log(await getPrediction(message, wordset));
}
main()
How can I solve this? Where is the problem?
The same network in Python gives me good predictions, but here it does not.
Alright, so the problem seemed to be the number of epochs I used...
It is odd, because I'm pretty sure I used a very reasonable number of epochs...
I removed the validationData as well...
If you do it like this:
const res = await model.fit(X, y, { epochs: 50 });
console.log(res.history.loss[0]);
it should work.
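One detail worth noting (my reading, not stated in the original answer): in TensorFlow.js, model.fit returns a Promise, and the trainModel function in the question does not await it, so getPrediction may be calling predict on a model that has not finished training. The snippet above awaits the fit call, which ensures training completes before prediction.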
I am relatively new to object detection using tensorflow and need guidance on the below issue.
I am building a custom model to detect two objects using TensorFlow and the Faster_Rcnn_inception_v2 model. For this I have used 600 images, each of which contains both objects. These images are split 75%/25% into train and test folders.
I am able to train the model on a GPU (Linux) machine and achieved a loss of only 0.05.
After generating the frozen_inference_graph.pb file, when I tested it, it did not detect even a single object in over 10 images.
It only works when I lower the min_score_thresh parameter to 0.4; the objects are then detected with around 47% confidence.
However, when I trained the same model on a different CPU (Windows) machine, it works absolutely fine and the results are satisfactory, with confidence levels above 80 percent.
Can someone please shed some light on this issue? Why is the model not working when trained on the GPU, while the same model works when trained on the CPU?
Note: The issue has only occurred recently; two months back, the GPU-trained model was working exceptionally well for a different object.
I can share the contents of the config, labelmap, or any other file if required.
Command for Training:
python train.py --logtostderr --train_dir="TrainingDp" --pipeline_config_path="TrainingDp/faster_rcnn.config"
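For context, and as an assumption on my part since the export step is not shown in the question: the frozen_inference_graph.pb mentioned above is typically produced with the TF1 Object Detection API export script, roughly like this (the checkpoint step number is a placeholder):
python export_inference_graph.py --input_type image_tensor --pipeline_config_path "TrainingDp/faster_rcnn.config" --trained_checkpoint_prefix "TrainingDp/model.ckpt-XXXX" --output_directory "inference_graph"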
Code:
#!/usr/bin/env python
# coding: utf-8
# In[3]:
import os
import pathlib
if "models" in pathlib.Path.cwd().parts:
while "models" in pathlib.Path.cwd().parts:
os.chdir('..')
elif not pathlib.Path('models').exists():
get_ipython().system('git clone --depth 1 https://github.com/tensorflow/models')
# In[4]:
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import cv2
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
# In[5]:
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
# In[6]:
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1
# Patch the location of gfile
tf.gfile = tf.io.gfile
# In[7]:
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('C:/Users/xxxxxx/Desktop/models-master/models-master/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
len(TEST_IMAGE_PATHS)
num=len(TEST_IMAGE_PATHS)+1
# In[16]:
model_name = r'C:\Users\xxxxxx\Desktop\models-master\models-master\research\object_detection\TrainingDp2'
PATH_TO_FROZEN_GRAPH= r'C:\Users\xxxxxx\Desktop\models-master\models-master\research\object_detection\inference_graph\frozen_inference_graph.pb'
PATH_TO_LABELS= r'C:\Users\xxxxxx\Desktop\models-master\models-master\research\object_detection\TrainingDp2\labelmap.pbtxt'
# In[17]:
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
# In[18]:
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
# In[19]:
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)
# In[20]:
# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 10) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
# In[21]:
def run_inference_for_single_image(image, graph):
    with graph.as_default():
        with tf.Session() as sess:
            # Get handles to input and output tensors
            ops = tf.get_default_graph().get_operations()
            all_tensor_names = {output.name for op in ops for output in op.outputs}
            tensor_dict = {}
            for key in [
                'num_detections', 'detection_boxes', 'detection_scores',
                'detection_classes', 'detection_masks'
            ]:
                tensor_name = key + ':0'
                if tensor_name in all_tensor_names:
                    tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
                        tensor_name)
            if 'detection_masks' in tensor_dict:
                # The following processing is only for single image
                detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
                # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
                real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
                detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
                detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
                detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
                    detection_masks, detection_boxes, image.shape[1], image.shape[2])
                detection_masks_reframed = tf.cast(
                    tf.greater(detection_masks_reframed, 0.5), tf.uint8)
                # Follow the convention by adding back the batch dimension
                tensor_dict['detection_masks'] = tf.expand_dims(
                    detection_masks_reframed, 0)
            image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
            # Run inference
            output_dict = sess.run(tensor_dict,
                                   feed_dict={image_tensor: image})
            # all outputs are float32 numpy arrays, so convert types as appropriate
            output_dict['num_detections'] = int(output_dict['num_detections'][0])
            output_dict['detection_classes'] = output_dict[
                'detection_classes'][0].astype(np.int64)
            output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
            output_dict['detection_scores'] = output_dict['detection_scores'][0]
            if 'detection_masks' in output_dict:
                output_dict['detection_masks'] = output_dict['detection_masks'][0]
    return output_dict
# In[22]:
get_ipython().run_line_magic('matplotlib', 'inline')
# In[23]:
count = 0
for image_path in TEST_IMAGE_PATHS:
    image = Image.open(image_path)
    # the array based representation of the image will be used later in order to prepare the
    # result image with boxes and labels on it.
    image_np = load_image_into_numpy_array(image)
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
    # Actual detection.
    output_dict = run_inference_for_single_image(image_np_expanded, detection_graph)
    # Visualization of the results of a detection.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        instance_masks=output_dict.get('detection_masks'),
        use_normalized_coordinates=True,
        line_thickness=4,
        min_score_thresh=.4
    )
    plt.figure(figsize=IMAGE_SIZE)
    plt.imshow(image_np)
    #cv2.imshow('img',image_np)
    RGB = cv2.cvtColor(image_np, cv2.COLOR_BGR2RGB)
    filename = os.path.join(r'C:\Users\xxxxxx\Desktop\models-master\models-master\research\object_detection\validation', 'iMAGE' + str(count) + '.jpg')
    cv2.imwrite(filename, RGB)
    count += 1
Content of Config file:
# Faster R-CNN with Inception v2, configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.
model {
  faster_rcnn {
    num_classes: 2
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 1024
      }
    }
    feature_extractor {
      type: 'faster_rcnn_inception_v2'
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}
train_config: {
  batch_size: 1
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.0002
          schedule {
            step: 900000
            learning_rate: .00002
          }
          schedule {
            step: 1200000
            learning_rate: .000002
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "C:/Users/xxxxxx/Desktop/models-master/models-master/research/object_detection/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  load_all_detection_checkpoint_vars: true
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  num_steps: 200000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}
train_input_reader: {
  tf_record_input_reader {
    input_path: "C:/Users/xxxxxx/Desktop/models-master/models-master/research/object_detection/Train.record"
  }
  label_map_path: "C:/Users/xxxxxx/Desktop/models-master/models-master/research/object_detection/TrainingDp2/labelmap.pbtxt"
}
eval_config: {
  metrics_set: "coco_detection_metrics"
  num_examples: 1101
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "C:/Users/xxxxxx/Desktop/models-master/models-master/research/object_detection/Test.record"
  }
  label_map_path: "C:/Users/xxxxxx/Desktop/models-master/models-master/research/object_detection/TrainingDp2/labelmap.pbtxt"
  shuffle: false
  num_readers: 1
}
Thanks.
In case anyone is having the same issue:
I managed to resolve it by changing batch_size to 2 in the configuration file.
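For reference, a minimal sketch of the change (only the relevant part of the train_config block from the pipeline config shown above):
train_config: {
  batch_size: 2
  # rest of train_config unchanged
}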
I am using tf.estimator to train and serve my TensorFlow model. The training completes as expected, but it fails in serving. I read my data in as a TFRecordDataset. My parsing function applies a transformation to feature "x2": "x2" is a string that is split, and the transformed feature is "x3".
def parse_function(example_proto):
    features = {"x1": tf.FixedLenFeature((), tf.string),
                "x2": tf.FixedLenFeature((), tf.string),
                "label": tf.FixedLenFeature((), tf.int64)}
    parsed_features = tf.parse_example(example_proto, features)
    x3 = tf.string_split(parsed_features["x2"], ',')
    parsed_features["x3"] = x3
    return parsed_features, parsed_features["label"]
My serving function is:
def serving_input_fn():
    receiver_tensor = {}
    for feature_name in record_columns:
        if feature_name in {"x1", "x2", "x3"}:
            dtype = tf.string
        else:
            dtype = tf.int32
        receiver_tensor[feature_name] = tf.placeholder(dtype, shape=[None])
    features = {
        key: tf.expand_dims(tensor, -1)
        for key, tensor in receiver_tensor.items()
    }
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensor)
It always worked in the past when I didn't have any transformations in my parsing function, but it now fails with the error:
cloud.ml.prediction.prediction_utils.PredictionError: Failed to run the provided model: Exception during running the graph: Cannot feed value of shape (2, 1) for Tensor u'Placeholder_2:0', which has shape '(?,)' (Error code: 2)
I think I have to apply the transformation to "x2" in my serving function, but I don't know how. Any help would be greatly appreciated.
Following this link, I processed feature "x3" after creating the receiver_tensor. Splitting the string in the serving function required squeezing the tensor before splitting:
def serving_input_fn():
    receiver_tensor = {}
    receiver_tensor["x1"] = tf.placeholder(tf.string, shape=[None], name="x1")
    receiver_tensor["label"] = tf.placeholder(tf.int32, shape=[None], name="x2")
    receiver_tensor["x2"] = tf.placeholder(tf.string, shape=[None], name="string")
    features = {
        key: tf.expand_dims(tensor, -1)
        for key, tensor in receiver_tensor.items()
    }
    features["x3"] = tf.string_split(tf.squeeze(features["x2"]), ',')
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensor)
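A brief usage sketch (my assumption about the surrounding wiring, not shown in the original answer): with a TF 1.x Estimator, the serving function above would typically be exported like this, where estimator is the trained tf.estimator.Estimator and the export directory is a hypothetical placeholder.
# Export the trained Estimator together with the serving_input_fn defined above.
estimator.export_savedmodel(
    export_dir_base='exported_model',  # hypothetical output directory
    serving_input_receiver_fn=serving_input_fn)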
I wrote the following code and executed it, and a PB file called test.pb was generated.
import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, shape=[1,2,3,4], name="x")
mu, sigma = tf.nn.moments(x, [0,1,2])
in_data = np.array([i for i in range(24)]).reshape([1,2,3,4])
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
print(sess.run([mu, sigma], feed_dict={x: in_data}))
tf.train.write_graph(sess.graph_def, "./", "test.pb")
In test.pb, there's a node:
node {
  name: "moments/normalize/divisor"
  op: "Reciprocal"
  input: "moments/sufficient_statistics/Const"
  input: "^moments/sufficient_statistics/mean_ss"
  input: "^moments/sufficient_statistics/var_ss"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
}
My question is: what does the ^ character mean in ^moments/sufficient_statistics/mean_ss and ^moments/sufficient_statistics/var_ss?
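For what it's worth, the ^ prefix on a GraphDef input marks a control dependency: the referenced op must execute before this node, but no tensor value flows along that edge. A minimal sketch (my own illustration, assuming TF 1.x graph mode like the code above) that produces such an input:
import tensorflow as tf

a = tf.constant(1.0, name='a')
with tf.control_dependencies([a]):
    # 'b' is created inside the control_dependencies scope, so its node
    # records the extra input "^a" in the GraphDef.
    b = tf.identity(tf.constant(2.0), name='b')

for node in tf.get_default_graph().as_graph_def().node:
    if node.name == 'b':
        print(node.input)  # expected to include '^a'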
I am new to TensorFlow and am reading mnist_export.py in the TensorFlow Serving example.
There is something here I cannot understand:
sess = tf.InteractiveSession()
serialized_tf_example = tf.placeholder(tf.string, name='tf_example')
feature_configs = {
'x': tf.FixedLenFeature(shape=[784], dtype=tf.float32),
}
tf_example = tf.parse_example(serialized_tf_example, feature_configs)
x = tf.identity(tf_example['x'], name='x') # use tf.identity() to assign name
Above, serialized_tf_example is a Tensor.
I have read the API document for tf.parse_example, but it seems that serialized should be serialized Example protos like:
serialized = [
  features
    { feature { key: "ft" value { float_list { value: [1.0, 2.0] } } } },
  features
    { feature []},
  features
    { feature { key: "ft" value { float_list { value: [3.0] } } }
]
So how should I understand tf_example = tf.parse_example(serialized_tf_example, feature_configs) here, given that serialized_tf_example is a Tensor, not an Example proto?
Here serialized_tf_example is the serialized string of a tf.train.Example. See tf.parse_example for the usage; the Reading data chapter gives some examples.
Calling SerializeToString() on a tf.train.Example converts it to a string, and tf.parse_example parses that serialized string into a dict.
The code below provides a simple example of using parse_example:
import tensorflow as tf
sess = tf.InteractiveSession()
serialized_tf_example = tf.placeholder(tf.string, shape=[1], name='serialized_tf_example')
feature_configs = {'x': tf.FixedLenFeature(shape=[1], dtype=tf.float32)}
tf_example = tf.parse_example(serialized_tf_example, feature_configs)
feature_dict = {'x': tf.train.Feature(float_list=tf.train.FloatList(value=[25]))}
example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
f = example.SerializeToString()
sess.run(tf_example,feed_dict={serialized_tf_example:[f]})
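If everything is wired as above, the final sess.run should return a dict mapping 'x' to a 1x1 float32 array containing 25.0, i.e. the value that was packed into the FloatList and round-tripped through SerializeToString and parse_example.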