How to add custom evaluation metrics in Tensorflow Object Detection API? - tensorflow

I would like to have my custom list of metrics when evaluating an instance segmentation model in TensorFlow's Object Detection API, which can be summarized as follows:
Precision values for IOUs of 0.5-0.95 with increments of 0.05
Recall values for IOUs of 0.5-0.95 with increments of 0.05
AUC values for precision and recall between 0-1 with increments of 0.05
What I've tried so far is modifying the existing COCO evaluation metrics by tweaking some code in the PythonAPI of pycocotools and in the metrics files within TensorFlow's research models repository. Currently, the default output values for COCO evaluation are the following:
Precision/mAP
Precision/mAP#.50IOU
Precision/mAP#.75IOU
Precision/mAP (small)
Precision/mAP (medium)
Precision/mAP (large)
Recall/AR#1
Recall/AR#10
Recall/AR#100
Recall/AR#100 (small)
Recall/AR#100 (medium)
Recall/AR#100 (large)
So I decided first to use coco_detection_metrics in my eval_config field inside the .config file used for training
eval_config: {
  metrics_set: "coco_detection_metrics"
}
And then edit cocoeval.py and coco_tools.py multiple times (proportional to the number of values) by adding more items to the stats list and the stats summary dictionary in order to get the desired result. For demonstration purposes, I am only going to show one example: adding precision at IOU=0.55 on top of precision at IOU=0.5.
So, this is the modified method of the COCOeval class inside cocoeval.py
def _summarizeDets():
    stats = np.zeros((13,))  # enlarged from the default 12 so the new entry fits
    # ... existing assignments, including:
    stats[1] = _summarize(1, iouThr=.5, maxDets=self.params.maxDets[2])
    # new entry for precision at IOU=0.55:
    stats[12] = _summarize(1, iouThr=.55, maxDets=self.params.maxDets[2])
and these are the edited methods under the COCOEvalWrapper class inside coco_tools.py:
summary_metrics = OrderedDict([
    ('Precision/mAP#.50IOU', self.stats[1]),
    ('Precision/mAP#.55IOU', self.stats[12]),
    # ... remaining entries ...
])
# ...
for category_index, category_id in enumerate(self.GetCategoryIdList()):
    category = self.GetCategory(category_id)['name']
    per_category_ap['Precision mAP#.50IOU ByCategory/{}'.format(
        category)] = self.category_stats[1][category_index]
    per_category_ap['Precision mAP#.55IOU ByCategory/{}'.format(
        category)] = self.category_stats[12][category_index]
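As an aside, the per-category entries above are only populated when per-category metrics are enabled. To the best of my knowledge this is driven by a flag in the eval config, assuming your release's eval.proto exposes it:
eval_config: {
  metrics_set: "coco_detection_metrics"
  # Assumed flag: this is what routes include_metrics_per_category=True into ComputeMetrics.
  include_metrics_per_category: true
}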
It would be useful to know a more efficient way to deal with my problem and easily request a list of custom evaluation metrics without having to tweak the already existing COCO files. Ideally, my primary goal is to
Be able to create a custom console output based on the metrics provided at the beginning of the question
and my secondary goals would be to
Export the metrics with their respective values in JSON format
Visualize the three graphs in TensorBoard
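For the two secondary goals, the following is a minimal sketch (my own helper, not part of the API) that assumes access to the flat metrics dictionary (metric name -> float) produced by the evaluation; the names metrics, eval_metrics.json and eval_tb are placeholders:
import json

import tensorflow as tf


def export_metrics(metrics, json_path="eval_metrics.json", tb_logdir="eval_tb"):
    # 1. Dump every metric name/value pair to a JSON file.
    with open(json_path, "w") as f:
        json.dump({name: float(value) for name, value in metrics.items()}, f, indent=2)
    # 2. Log each metric as a TensorBoard scalar (visible under the Scalars tab).
    writer = tf.summary.create_file_writer(tb_logdir)
    with writer.as_default():
        for name, value in metrics.items():
            tf.summary.scalar(name, float(value), step=0)
        writer.flush()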

Related

Non Max Suppression settings and postprocessing for EfficientDet

I've downloaded and installed the TensorFlow Object Detection API and grabbed one of the EfficientDet models. As I want to do some work on the raw scores directly before Non-Max Suppression reduces them to the class outputs, my first goal was to try and get the same final outputs from the raw scores, using the downloaded model config as a guide.
post_processing {
  batch_non_max_suppression {
    score_threshold: 9.99999993922529e-09
    iou_threshold: 0.5
    max_detections_per_class: 100
    max_total_detections: 100
  }
  score_converter: SIGMOID
}
As the Object Detection API has no score converter method under postprocessing, I'm not sure what this does, but the only batch NMS method in utils seems to be batch_multiclass_non_max_suppression.
So, having fed an image into the network and obtained an output dictionary detections, I tried to replicate its results:
from object_detection.core import post_processing  # the batch NMS implementation
result = post_processing.batch_multiclass_non_max_suppression(
    tf.expand_dims(detections['raw_detection_boxes'], 2),
    detections['raw_detection_scores'],
    9.99999993922529e-09, 0.5, 100, max_total_size=100)
detections['detection_boxes'] = result[0]
detections['detection_scores'] = result[1]
detections['detection_classes'] = result[2]
i.e., substitute the relevant entries in detections with the output of NMS, and insert the extra dimension needed for the batch function to work. This is then visualised following the TensorFlow Hub object detection colab.
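For reference, the visualisation step is roughly the following (a sketch that assumes a category_index built from the COCO label map, as in that colab, and image_np holding the original image as a numpy array):
from object_detection.utils import visualization_utils as viz_utils

image_with_boxes = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
    image_with_boxes,
    detections['detection_boxes'][0].numpy(),
    detections['detection_classes'][0].numpy().astype(int),
    detections['detection_scores'][0].numpy(),
    category_index,
    use_normalized_coordinates=True,
    min_score_thresh=0.30)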
The problem is that, for an input image from the MSCOCO dataset, my output does not match what the saved model itself produces (screenshots omitted): the bounding boxes are all (seemingly) shifted upwards and the categories are simply off, which suggests there's more processing being done between the raw scores, NMS, and the output, but it's entirely unclear what. The scores themselves are correct, so the pruning appears to work.
Edit: After looking at the SSD model template, I suspect the misaligned bounding boxes are because I'm not passing the resized image dimensions (produced by the preprocessing step) along to NMS as a clip window, which should be easy enough to address by reproducing the image-resize function. However, applying a slice operation to remove a background class does not fix the incorrect labels either.
Instead, it now seems to have lost the person class entirely, which makes sense: the call isn't configured to include a background class of any sort, and if Person (id 1) is instead coming out as index 0, then the slice would cut it off.
Edit 2: I looked further at the original meta-architecture and copied its image-resizing function, i.e.:
import tensorflow as tf

from object_detection.builders import image_resizer_builder
from object_detection.protos import image_resizer_pb2
from object_detection.utils import config_util as c
from object_detection.utils import shape_utils

config = c.get_configs_from_pipeline_file(r"C:\Users\Person\.keras\datasets\efficientdet_d7_coco17_tpu-32\pipeline.config")
image_config = c.get_image_resizer_config(config['model'])
resize = image_resizer_builder.build(image_config)
def compute_clip_window(preprocessed_images, true_image_shapes):
    # identical to the meta-arch definition
    ...

# image resizing and normalisation, copied from the model's preprocess step
im = tf.cast(input_tensor, tf.float32)
channel_offset = [0.485, 0.456, 0.406]
channel_scale = [0.229, 0.224, 0.225]
im = ((im / 255.0) - [[channel_offset]]) / [[channel_scale]]
resized = shape_utils.resize_images_and_return_shapes(im, resize)
clip = compute_clip_window(resized[0], resized[1])
This allows the clip argument to be supplied to NMS. However, it doesn't change anything: the boxes are still mis-aligned in the same way as before. This is incredibly confusing, as it seems to replicate everything the model needs in both the preprocessing and postprocessing steps to generate its own output: the image is normalized and resized; the true image size is retained alongside the resized image; no further processing of the raw boxes or raw scores happens before they get passed to NMS (the returned versions of the raw values are identical to the values passed to NMS, save for one extra dimension); and the model itself doesn't interfere with the post-processing at all, since its call signature invokes preprocessing, prediction, and postprocessing in turn, so nothing else should be happening in the interim.
Edit 3: I added another line (to no effect): setting the multiclass scores in the NMS additional fields to the detection scores with backgrounds (i.e., the raw scores). By adding +1 to all the label classes, the labels in the output image finally look correct.
Whilst this works, it only corrects the earlier part of the label map, i.e. where the only empty class is the 0th. It still appears that there must be some mapping step I'm not following, alongside whatever is causing the box misalignment.
The easiest solution in my case was to load the model from the checkpoint and configs, rather than using the saved model directly, so that the original preprocess, predict, and postprocess methods are accessible instead of a single opaque function call.
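For anyone who wants to do the same, the checkpoint/config route looks roughly like this (a sketch following the standard TF2 object_detection tutorial pattern; the paths and input_tensor are placeholders):
import tensorflow as tf
from object_detection.builders import model_builder
from object_detection.utils import config_util

# Build the model from the pipeline config and restore its checkpoint.
configs = config_util.get_configs_from_pipeline_file(
    "efficientdet_d7_coco17_tpu-32/pipeline.config")
detection_model = model_builder.build(model_config=configs['model'],
                                      is_training=False)
ckpt = tf.train.Checkpoint(model=detection_model)
ckpt.restore("efficientdet_d7_coco17_tpu-32/checkpoint/ckpt-0").expect_partial()

# The three stages are now separate, inspectable calls.
image, shapes = detection_model.preprocess(input_tensor)   # input_tensor: [1, H, W, 3] float32
prediction_dict = detection_model.predict(image, shapes)
detections = detection_model.postprocess(prediction_dict, shapes)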

TF object detection: return subset of inference payload

Problem
I'm working on training and deploying an instance segmentation model using TF's object detection API. I'm able to successfully train the model, package it into a TF Serving Docker image (latest tag as of Oct 2020), and process inference requests via the REST interface. However, the amount of data returned from an inference request is very large (hundreds of MB). This is a big problem when the inference request and processing don't happen on the same machine, because all that returned data has to go over the network.
Is there a way to trim down the number of outputs (either during model export or within the TF Serving image) to allow faster round-trip times during inference?
Details
I'm using the TF OD API (with TF2) to train a Mask R-CNN model, which is a modified version of this config. I believe the full list of outputs is described in code here. The list of items I get during inference is also pasted below. For a model with 100 object proposals, that information is ~270 MB if I just write the returned inference as JSON to disk.
inference_payload['outputs'].keys()
dict_keys(['detection_masks', 'rpn_features_to_crop', 'detection_anchor_indices', 'refined_box_encodings', 'final_anchors', 'mask_predictions', 'detection_classes', 'num_detections', 'rpn_box_predictor_features', 'class_predictions_with_background', 'proposal_boxes', 'raw_detection_boxes', 'rpn_box_encodings', 'box_classifier_features', 'raw_detection_scores', 'proposal_boxes_normalized', 'detection_multiclass_scores', 'anchors', 'num_proposals', 'detection_boxes', 'image_shape', 'rpn_objectness_predictions_with_background', 'detection_scores'])
I already encode the images within my inference requests as base64, so the request payload is not too large when going over the network. It's just that the inference response is gigantic in comparison. I only need 4 or 5 of the items out of this response, so it'd be great to exclude the rest and avoid passing such a large package of bits over the network.
Things I've tried
I've tried setting the score_threshold to a higher value during the export (code example here) to reduce the number of outputs. However, this seems to just threshold the detection_scores. All the extraneous inference information is still returned.
I also tried just manually excluding some of these inference outputs by adding the names of keys to remove here. That also didn't seem to have any effect, and I'm worried this is a bad idea because some of those keys might be needed during scoring/evaluation.
I also searched here and in the tensorflow/models repo, but I wasn't able to find anything.
I was able to find a hacky workaround. In the export process (here), some of the components of the prediction dict are deleted. I added additional items to the non_tensor_predictions list, which contains all keys that will get removed during the postprocess step. Augmenting this list cut down my inference outputs from ~200MB to ~12MB.
Full code for the if self._number_of_stages == 3 block:
if self._number_of_stages == 3:

    non_tensor_predictions = [
        k for k, v in prediction_dict.items() if not isinstance(v, tf.Tensor)]

    # Add additional keys to delete during postprocessing
    non_tensor_predictions = non_tensor_predictions + [
        'raw_detection_scores', 'detection_multiclass_scores', 'anchors',
        'rpn_objectness_predictions_with_background', 'detection_anchor_indices',
        'refined_box_encodings', 'class_predictions_with_background',
        'raw_detection_boxes', 'final_anchors', 'rpn_box_encodings',
        'box_classifier_features']

    for k in non_tensor_predictions:
        tf.logging.info('Removing {0} from prediction_dict'.format(k))
        prediction_dict.pop(k)

    return prediction_dict
I think there's a more "proper" way to deal with this using signature definitions during the creation of the TF Serving image, but this worked for a quick and dirty fix.
I've run into the same problem. In the exporter_main_v2 code it is stated that the outputs should be:
and the following output nodes returned by the model.postprocess(..):
* `num_detections`: Outputs float32 tensors of the form [batch]
that specifies the number of valid boxes per image in the batch.
* `detection_boxes`: Outputs float32 tensors of the form
[batch, num_boxes, 4] containing detected boxes.
* `detection_scores`: Outputs float32 tensors of the form
[batch, num_boxes] containing class scores for the detections.
* `detection_classes`: Outputs float32 tensors of the form
[batch, num_boxes] containing classes for the detections.
I've submitted an issue on the tensorflow object detection GitHub repo; I hope we will get feedback from the TensorFlow dev team.
The GitHub issue can be found here.
If you are using the exporter_main_v2.py file to export your model, you can try this hacky way to solve the problem.
Just add the following code in the function _run_inference_on_images of the exporter_lib_v2.py file:
detections[classes_field] = (
    tf.cast(detections[classes_field], tf.float32) + label_id_offset)

############# START ##########
ignored_model_output_names = ["raw_detection_boxes", "raw_detection_scores"]
for key in ignored_model_output_names:
    if key in detections.keys():
        del detections[key]
############# END ##########

for key, val in detections.items():
    detections[key] = tf.cast(val, tf.float32)
Therefore, the generated model will not output the values of ignored_model_output_names.
Please let me know if this solves your problem.
Another approach would be to alter the signatures of the saved model:
import tensorflow as tf
from os import path

model = tf.saved_model.load(
    path.join("models", "efficientdet_d7_coco17_tpu-32", "saved_model"))
infer = model.signatures["serving_default"]
outputs = infer.structured_outputs
for o in ["raw_detection_boxes", "raw_detection_scores"]:
    outputs.pop(o)

tf.saved_model.save(
    model,
    export_dir="export",
    signatures={"serving_default": infer},
    options=None
)

Feature wise center in ImageDataGenerator

Feature-wise centering means we have to subtract the mean value of the dataset from each image. So in ImageDataGenerator, if I set featurewise_center=True, it will do the same. I have 2 questions.
Is that mean value calculated over the augmented data, or over the data stored in the train directory?
At test time I want to subtract that same mean value from the test images. How do I get it?
Is that mean value calculated over the augmented data, or over the data stored in the train directory?
According to the Keras documentation:
fit(x, augment=False, rounds=1, seed=None )
Fits the data generator to some sample data.
This computes the internal data stats related to the data-dependent
transformations, based on an array of sample data.
Only required if featurewise_center or featurewise_std_normalization
or zca_whitening are set to True.
When rescale is set to a value, rescaling is applied to sample data
before computing the internal data stats.
So, you should fit the ImageDataGenerator to some image data previously stored as a rank-4 array and choose whether you want to compute the stats based on the augmented images or not, by setting the 'augment' parameter to True or False. If you don't fit the ImageDataGenerator object to any data, it will just ignore the feature-wise centering transformation.
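For instance, one possible way to build such a rank-4 array from images already on disk (just an illustration; the paths, the target size, and the name fit_array used in the snippet further below are placeholders):
import numpy as np
from tensorflow.keras.preprocessing import image as kimage

# A representative sample of the training images (paths are placeholders).
sample_paths = ["train/cats/cat001.jpg", "train/dogs/dog001.jpg"]
fit_array = np.stack([
    kimage.img_to_array(kimage.load_img(p, target_size=(224, 224)))
    for p in sample_paths
])  # shape: (num_samples, 224, 224, 3), i.e. rank 4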
At test time I want to subtract that same mean value from the test images. How do I get it?
You can copy the stats from one Data Generator to another, so you won't have to fit the Data Generator for the test set. After you fit the train Data Generator, just copy the stats to the test Data Generator, e.g.:
import tensorflow as tf

image_train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    featurewise_center=True,
    horizontal_flip=True,
    rotation_range=20,
    zoom_range=0.2,
    shear_range=0.1,
)
image_train_datagen.fit(fit_array)

image_test_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    featurewise_center=True)
image_test_datagen.mean = image_train_datagen.mean
You may copy the standard deviation (for featurewise_std_normalization, via the 'std' attribute) and the principal components (for zca_whitening, via the 'principal_components' attribute) as well.
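For instance, a minimal sketch using the attribute names mentioned above:
# Only meaningful if the corresponding options were enabled on the train generator.
image_test_datagen.std = image_train_datagen.std
image_test_datagen.principal_components = image_train_datagen.principal_components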

binary classification target specifically on false positive

I got a little confused when using models from sklearn: how do I set a specific optimization metric? For example, when RandomForestClassifier is used, how do I let the model 'know' that I want to maximize 'recall', 'F1 score', or 'AUC' instead of 'accuracy'?
Any suggestions? Thank you.
What you are looking for is Parameter Tuning. Basically, first you select an estimator, then you define a hyper-parameter space (i.e. all possible parameters and their respective values that you want to tune), a cross-validation scheme and a scoring function. Now, depending upon how you want to search the parameter space, you can choose one of the following:
Exhaustive Grid Search
In this approach, sklearn creates a grid of all possible combinations of the hyper-parameter values defined by the user, using the GridSearchCV method. For instance:
my_clf = DecisionTreeClassifier(random_state=0, class_weight='balanced')
# Note: the 'classifier__' prefix assumes the estimator is wrapped in a
# Pipeline step named 'classifier'; with a bare estimator, drop the prefix.
param_grid = dict(
    classifier__min_samples_split=[5, 7, 9, 11],
    classifier__max_leaf_nodes=[50, 60, 70, 80],
    classifier__max_depth=[1, 3, 5, 7, 9]
)
In this case, the grid specified is a cross-product of values of classifier__min_samples_split, classifier__max_leaf_nodes and classifier__max_depth. The documentation states that:
The GridSearchCV instance implements the usual estimator API: when “fitting” it on a dataset all the possible combinations of parameter values are evaluated and the best combination is retained.
An example of using GridSearchCV:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, GridSearchCV
from sklearn.metrics import make_scorer, f1_score

# Create a classifier
clf = LogisticRegression(random_state=0)
# Cross-validate the dataset
cv = StratifiedKFold(n_splits=n_splits).split(features, labels)
# Declare the hyper-parameter grid
param_grid = dict(
    classifier__tol=[1.0, 0.1, 0.01, 0.001],
    classifier__C=np.power([10.0] * 5, list(range(-3, 2))).tolist(),
    classifier__solver=['newton-cg', 'lbfgs', 'liblinear', 'sag'],
)
# Perform grid search using the classifier, parameter grid, scoring function
# and the cross-validated dataset
grid_search = GridSearchCV(clf, param_grid=param_grid, verbose=10,
                           scoring=make_scorer(f1_score), cv=list(cv))
grid_search.fit(features.values, labels.values)
# To get the best score using the specified scoring function, use the following
print(grid_search.best_score_)
# Similarly, to get the best estimator
best_clf = grid_search.best_estimator_
print(best_clf)
You can read more in its documentation here to learn about the various internal methods to retrieve the best parameters, the best estimator, etc.
Randomized Search
Instead of exhaustively checking the hyper-parameter space, sklearn implements RandomizedSearchCV to do a randomized search over the parameters. The documentation states that:
RandomizedSearchCV implements a randomized search over parameters, where each setting is sampled from a distribution over possible parameter values.
You can read more about it from here.
You can read more about other approaches here.
Alternative link for reference:
How to Tune Algorithm Parameters with Scikit-Learn
What is hyperparameter optimization in machine learning in formal terms?
Grid Search for hyperparameter and feature selection
Edit: In your case, if you want to maximize the recall for the model, you simply specify recall_score from sklearn.metrics as the scoring function.
If you wish to target the 'False Positives' as stated in your question, you can refer to this answer to extract the false positives from the confusion matrix. Then use the make_scorer function and pass the result to the GridSearchCV object for tuning, as sketched below.
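A minimal sketch of that idea (my own helper, assuming a binary problem): a custom scorer that counts false positives from the confusion matrix, which GridSearchCV then minimizes because greater_is_better=False:
from sklearn.metrics import confusion_matrix, make_scorer
from sklearn.model_selection import GridSearchCV


def false_positive_count(y_true, y_pred):
    # confusion_matrix for a binary problem flattens to (tn, fp, fn, tp)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return fp


# greater_is_better=False makes GridSearchCV prefer fewer false positives.
fp_scorer = make_scorer(false_positive_count, greater_is_better=False)
grid_search = GridSearchCV(clf, param_grid=param_grid, scoring=fp_scorer, cv=5)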
I would suggest you grab a cup of coffee and read (and understand) the following
http://scikit-learn.org/stable/modules/model_evaluation.html
You need to use something along the lines of
cross_val_score(model, X, y, scoring='f1')
possible choices are (check the docs)
['accuracy', 'adjusted_mutual_info_score', 'adjusted_rand_score',
'average_precision', 'completeness_score', 'explained_variance',
'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted',
'fowlkes_mallows_score', 'homogeneity_score', 'mutual_info_score',
'neg_log_loss', 'neg_mean_absolute_error', 'neg_mean_squared_error',
'neg_mean_squared_log_error', 'neg_median_absolute_error',
'normalized_mutual_info_score', 'precision', 'precision_macro',
'precision_micro', 'precision_samples', 'precision_weighted', 'r2',
'recall', 'recall_macro', 'recall_micro', 'recall_samples',
'recall_weighted', 'roc_auc', 'v_measure_score']
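To tie this back to the original question, optimizing for AUC with a RandomForestClassifier could look like this (a sketch; X and y stand for your feature matrix and labels):
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Score the model with ROC AUC instead of the default accuracy.
auc_scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y,
                             scoring='roc_auc', cv=5)
print(auc_scores.mean())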
Have fun
Umberto

Incorporating very large constants in Tensorflow

For example, the comments for the Tensorflow image captioning example model state:
NOTE: This script will consume around 100GB of disk space because each image
in the MSCOCO dataset is replicated ~5 times (once per caption) in the output.
This is done for two reasons:
1. In order to better shuffle the training data.
2. It makes it easier to perform asynchronous preprocessing of each image in
TensorFlow.
The primary goal of this question is to see if there is an alternative to this type of duplication. In my use case, storing the data in this way would require each image to be duplicated in the TFRecord files many more times, on the order of 20 - 50 times.
I should note first that I have already fed the images through VGGnet to extract 4096 dim features, and I have these stored as a mapping between filename and the vectors.
Before switching over to Tensorflow, I had been feeding batches containing filename strings and then looking up the corresponding vector on a per-batch basis. This allows me to store all of the image data in ~15GB without needing to duplicate the data on disk.
My first attempt to do this in TensorFlow involved storing indices in the TFExample buffers and then doing a "preprocessing" step to slice into the corresponding matrix:
import numpy as np
import pandas as pd
import tensorflow as tf

img_feat = pd.read_pickle("img_feats.pkl")
img_matrix = np.stack(img_feat)
preloaded_images = tf.Variable(img_matrix)
first_image = tf.slice(preloaded_images, [0, 0], [1, 4096])
However, in this case, TensorFlow disallows a variable larger than 2GB, because the initial value gets embedded as a constant in the graph and the serialized GraphDef is capped at 2GB. So my next thought was to partition this across several variables:
img_tensors = []
for i in range(NUM_SPLITS):
    with tf.Graph().as_default():
        img_tensors.append(
            tf.Variable(img_matrices[i], name="preloaded_images_%i" % i))

first_image = tf.concat(1, [tf.slice(t, [0, 0], [1, 4096 // NUM_SPLITS])
                            for t in img_tensors])
In this case, I'm forced to store each partition on a separate graph, because it seems any one graph cannot be this large either. However, now the concat fails because each tensor I am concatenating is on a separate graph.
Any advice on incorporating a large amount (~15GB) of preloaded data into the TensorFlow graph would be appreciated.
Potentially related is this question; however, in this case I'd like to override the decoding of the actual JPEG file with the preprocessed value in a tensor op.
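One direction that may help (a sketch, not a verified answer): the old TF1 "preloaded data" pattern initializes a variable from a placeholder, so the large matrix is fed once at session start instead of being serialized into the GraphDef. The names below follow the question's snippet; img_feat is assumed to exist.
import numpy as np
import tensorflow as tf

img_matrix = np.stack(img_feat)  # the ~15GB feature matrix from the question

# The matrix never becomes a graph constant; it is fed through a placeholder.
images_initializer = tf.placeholder(dtype=img_matrix.dtype, shape=img_matrix.shape)
preloaded_images = tf.Variable(images_initializer, trainable=False, collections=[])

first_image = tf.slice(preloaded_images, [0, 0], [1, 4096])

with tf.Session() as sess:
    # Initialize the variable exactly once by feeding the matrix.
    sess.run(preloaded_images.initializer,
             feed_dict={images_initializer: img_matrix})
    print(sess.run(first_image).shape)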