I'm trying to load a saved tensorflow.keras model; on loading I'm getting the following error:
ValueError: Unknown loss function:cce_dice_loss
cce_dice_loss is from the segmentation_models library.
Please find the following code for the loss function:
from segmentation_models.losses import cce_dice_loss
model2.compile(optimizer, cce_dice_loss, metrics=[iou_score])
Please find the following code for saving and loading the model:
model2.save("my_model",save_format='tf')
new_model = tf.keras.models.load_model(
    'my_model',
    custom_objects={'convolutional_block': convolutional_block,
                    'identity_block': identity_block,
                    'global_flow': global_flow,
                    'context_flow': context_flow,
                    'sum_layer': sum_layer,
                    'fsm': fsm,
                    'agcn': agcn,
                    'iou_score': iou_score,
                    'focal_loss': focal_loss})
While loading the model I get the error mentioned above. Can anyone help me resolve this issue?
By saving as an HDF5 file, I was able to load it like this:
import segmentation_models as sm
from tensorflow.keras.models import load_model
from segmentation_models.metrics import iou_score
print(iou_score.__name__)
>>> iou_score  # Use this as the custom_objects key
focal_loss = sm.losses.cce_dice_loss
print(focal_loss.__name__)
>>> categorical_crossentropy_plus_dice_loss  # Use this as the custom_objects key
model2.compile(optimizer, focal_loss, metrics=[iou_score])
model2.save('model.h5')
model = load_model('model.h5',
                   custom_objects={
                       'categorical_crossentropy_plus_dice_loss': focal_loss,
                       'iou_score': iou_score
                   })
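If you only need the model for inference, another option (a minimal sketch, reusing the names from above) is to skip restoring the compiled state entirely, so no custom loss name has to be resolved at load time:
from tensorflow.keras.models import load_model

# compile=False skips deserializing the optimizer/loss/metrics,
# so the unknown loss function is never looked up.
model = load_model('model.h5', compile=False)

# Recompile manually if you want to keep training.
model.compile(optimizer, focal_loss, metrics=[iou_score])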
I have a dataset which I have to process so that it works with a convolutional neural network in PyTorch (I'm completely new to PyTorch). The data is stored in a dataframe with a column of pictures (28 x 28 ndarrays with int32 entries) and a column with class labels. The pixels of the images only take the values +1 and -1 (since it is simulation data of a classical 2D Ising model). The dataframe looks like this.
I imported the following (much of this is not relevant for now, but I included everything for completeness; "data_loader" is a custom .py file):
import numpy as np
import matplotlib.pyplot as plt
import data_loader
import pandas as pd
import torch
import torchvision.transforms as T
from torchvision.utils import make_grid
from torch.nn import Module
from torch.nn import Conv2d
from torch.nn import Linear
from torch.nn import MaxPool2d
from torch.nn import ReLU
from torch.nn import LogSoftmax
from torch import flatten
from sklearn.metrics import classification_report
import time as time
from torch.utils.data import DataLoader, Dataset
Then I want to get this into the correct shape to make it useful for PyTorch. I do this by defining the following class:
class MetropolisDataset(Dataset):
    def __init__(self, data_frame, transform=None):
        self.data_frame = data_frame
        self.transform = transform

    def __len__(self):
        return len(self.data_frame)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        label = self.data_frame['label'].iloc[idx]
        image = self.data_frame['image'].iloc[idx]
        image = np.array(image)
        if self.transform:
            image = self.transform(image)
        return (image, label)
I create instances of this class as follows:
train_set = MetropolisDataset(data_frame=df_train,
                              transform=T.Compose([T.ToPILImage(),
                                                   T.ToTensor()]))
validation_set = MetropolisDataset(data_frame=df_validation,
                                   transform=T.Compose([T.ToPILImage(),
                                                        T.ToTensor()]))
test_set = MetropolisDataset(data_frame=df_test,
                             transform=T.Compose([T.ToPILImage(),
                                                  T.ToTensor()]))
The problem does not yet arise here, because I am able to read out and display images from these instances of the class defined above.
Then, as far as I found out, the data has to go through PyTorch's DataLoader, which I do as follows:
batch_size = 64
train_dl = DataLoader(train_set, batch_size, shuffle=True, num_workers=3, pin_memory=True)
validation_dl = DataLoader(validation_set, batch_size, shuffle=True, num_workers=3, pin_memory=True)
test_dl = DataLoader(test_set, batch_size, shuffle=True, num_workers=3, pin_memory=True)
However, if I want to use these DataLoader instances, simply nothing happens: I get no error, and the computation does not seem to get anywhere. I tried to run a CNN, but it does not seem to compute anything. Something else I tried was to show some sample images with the code provided by this article, but the same issue occurs. The sample code is:
def show_images(images, nmax=10):
    fig, ax = plt.subplots(figsize=(8, 8))
    ax.set_xticks([]); ax.set_yticks([])
    ax.imshow(make_grid((images.detach()[:nmax]), nrow=8).permute(1, 2, 0))

def show_batch(dl, nmax=64):
    for images in dl:
        show_images(images, nmax)
        break
show_batch(test_dl)
It seems that there is some error in the implementation of my MetropolisDataset class or with the DataLoader itself. How could this problem be solved?
As mentioned in the comments, the problem was partly solved by setting num_workers to zero, since I was working in a Jupyter notebook, as answered here. However, this left open one further problem: I got errors when I wanted to use the DataLoader to run a CNN. The issue was that my data consisted of int32 numbers instead of float32. I do not include further code, because this was related directly to my data; however, the issue was (as so often) merely a wrong datatype.
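For reference, a minimal sketch of the two fixes (assuming the dataframe columns and variable names from above):
# Inside MetropolisDataset.__getitem__: cast the image to float32,
# since the conv layers expect float input rather than int32.
image = np.array(image, dtype=np.float32)

# In a Jupyter notebook, use num_workers=0 to avoid the hanging DataLoader.
train_dl = DataLoader(train_set, batch_size, shuffle=True,
                      num_workers=0, pin_memory=True)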
I am exporting a Keras TF model without luck:
import tensorflow as tf
import numpy as np
ssValues = np.zeros(shape=(640,800,6),dtype=np.float16)
ssValues += 3.
ssKerasConstant = tf.keras.backend.constant(value=ssValues, dtype=tf.dtypes.float16, shape=(1,640,800,6))
inputLayer = tf.keras.Input(shape=(640,800,6),
                            name='inputLayer',
                            batch_size=None,
                            dtype=tf.dtypes.float16)
ssConstant = tf.constant(ssValues, dtype=tf.dtypes.float16, shape=(1,640,800,6), name='ss')
ssm = tf.keras.layers.Multiply()([inputLayer,ssKerasConstant])
model = tf.keras.models.Model(inputs=inputLayer, outputs=ssm)
tf.keras.experimental.export_saved_model(model, '~/models/model7.pb')
and I get the following error:
graph = inputs[0].graph
IndexError: list index out of range
even though I am able to run predictions with the model.
You can save the model successfully by replacing the last line of your code,
tf.keras.experimental.export_saved_model(model, '~/models/model7.pb')
with the below line:
tf.saved_model.save(model, '~/models/model7.pb')
It works in TensorFlow 2.0. Please find the Gist here.
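As a quick sanity check (a sketch, assuming TensorFlow 2.x and that the save path above resolved correctly), you can load the SavedModel back and run a prediction:
import numpy as np
import tensorflow as tf

# Load the SavedModel back as a Keras model and verify inference still works.
restored = tf.keras.models.load_model('~/models/model7.pb')
dummy = np.zeros((1, 640, 800, 6), dtype=np.float16)
print(restored.predict(dummy).shape)  # expected: (1, 640, 800, 6)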
I am trying to use tf.data.Dataset.map to port over this old code, because I get a deprecation warning.
Old code, which reads a set of custom protos from a TFRecord file:
record_iterator = tf.python_io.tf_record_iterator(path=filename)
for record in record_iterator:
    example = MyProto()
    example.ParseFromString(record)
I am trying to use eager mode and map, but I get this error.
def parse_proto(string):
    proto_object = MyProto()
    proto_object.ParseFromString(string)

raw_tf_dataset = tf.data.TFRecordDataset(dataset_paths)
parsed_protos = raw_tf_dataset.map(parse_proto)
This code works:
for raw_record in raw_tf_dataset:
    proto_object = MyProto()
    proto_object.ParseFromString(raw_record.numpy())
But the map gives me an error:
TypeError: a bytes-like object is required, not 'Tensor'
What is the right way to use the argument that map passes to the function and treat it like a string?
You need to extract the string from the tensor and use it in the map function. Below are the steps to implement this in the code.
You have to wrap the map function with tf.py_function(parse_proto, [x], [tf.float32]). You can find more about tf.py_function here. In tf.py_function, the first argument is the map function, the second argument is the element to be passed to it, and the final argument is the return type.
You can get the Python value back with string.numpy() inside the map function (wrap it in bytes.decode(...) when you need a str, as in the image example below).
So modify your program as below,
parsed_protos = raw_tf_dataset.map(parse_proto)
to
parsed_protos = raw_tf_dataset.map(lambda x: tf.py_function(parse_proto, [x], [function return type]))
Also modify parse_proto as below,
def parse_proto(string):
    proto_object = MyProto()
    proto_object.ParseFromString(string)
to
def parse_proto(string):
    proto_object = MyProto()
    # string.numpy() already yields the raw bytes that ParseFromString
    # expects, so no bytes.decode is needed for protobuf parsing.
    proto_object.ParseFromString(string.numpy())
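Note that because the tf.py_function wrapper declares a return type, parse_proto also has to return tensors matching it. A minimal sketch, assuming a hypothetical float field called value on MyProto:
def parse_proto(string):
    proto_object = MyProto()
    proto_object.ParseFromString(string.numpy())
    # 'value' is a hypothetical field; return something matching the
    # [tf.float32] return type declared in tf.py_function.
    return [proto_object.value]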
In the simple program below, we use tf.data.Dataset.list_files to read the path of the image. Then, in the map function, we read the image using load_img and apply tf.image.central_crop to crop the central part of the image.
Code -
%tensorflow_version 2.x
import tensorflow as tf
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import img_to_array, array_to_img
from matplotlib import pyplot as plt
import numpy as np
def load_file_and_process(path):
    image = load_img(bytes.decode(path.numpy()), target_size=(224, 224))
    image = img_to_array(image)
    image = tf.image.central_crop(image, np.random.uniform(0.50, 1.00))
    return image
train_dataset = tf.data.Dataset.list_files('/content/bird.jpg')
train_dataset = train_dataset.map(lambda x: tf.py_function(load_file_and_process, [x], [tf.float32]))
for f in train_dataset:
    for l in f:
        image = np.array(array_to_img(l))
        plt.imshow(image)
Output - (the center-cropped image is displayed; output image omitted)
Hope this answers your question. Happy Learning.
Is there a way to view the images that the TensorFlow Object Detection API trains on, after all preprocessing/augmentation?
I'd like to verify that things look correct. I was able to verify the resizing by looking at the graph post-resize during inference, but I obviously can't do that for the augmentation options.
In the past with Keras I've been able to do that, and I found that I was too aggressive.
The API provides test code for the augmentation options. In the input_test.py file, the function test_apply_image_and_box_augmentation is for that. You can rewrite this function by passing your own images to the tensor_dict and then save the augmented_tensor_dict_out for verification, or you can visualize it directly.
EDIT:
Since this answer was given long ago and is still not accepted, I decided to provide a more specific answer with examples. I wrote a little test script called augmentation_test.py:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
import os
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from scipy.misc import imsave, imread
from object_detection import inputs
from object_detection.core import preprocessor
from object_detection.core import standard_fields as fields
from object_detection.utils import config_util
from object_detection.utils import test_case
FLAGS = tf.flags.FLAGS
class DataAugmentationFnTest(test_case.TestCase):

    def test_apply_image_and_box_augmentation(self):
        data_augmentation_options = [
            (preprocessor.random_horizontal_flip, {})
        ]
        data_augmentation_fn = functools.partial(
            inputs.augment_input_data,
            data_augmentation_options=data_augmentation_options)
        tensor_dict = {
            fields.InputDataFields.image:
                tf.constant(imread('lena.jpeg').astype(np.float32)),
            fields.InputDataFields.groundtruth_boxes:
                tf.constant(np.array([[.5, .5, 1., 1.]], np.float32))
        }
        augmented_tensor_dict = data_augmentation_fn(tensor_dict=tensor_dict)
        with self.test_session() as sess:
            augmented_tensor_dict_out = sess.run(augmented_tensor_dict)
        imsave('lena_out.jpeg', augmented_tensor_dict_out[fields.InputDataFields.image])

if __name__ == '__main__':
    tf.test.main()
You can put this script under models/research/object_detection/ and simply run it with python augmentation_test.py. To run it successfully, you should provide an image named 'lena.jpeg'; the output image after augmentation will be saved as 'lena_out.jpeg'.
I ran it with the 'lena' image (the before- and after-augmentation images are omitted here).
Note that I used preprocessor.random_horizontal_flip in the script, and the result showed exactly what the input image looks like after random_horizontal_flip. To test other augmentation options, you can replace random_horizontal_flip with other methods (which are all defined in preprocessor.py and in the config proto file), and you can also append other options to the data_augmentation_options list, for example:
data_augmentation_options = [(preprocessor.resize_image, {
    'new_height': 20,
    'new_width': 20,
    'method': tf.image.ResizeMethod.NEAREST_NEIGHBOR
}), (preprocessor.random_horizontal_flip, {})]
Here is code that achieves what was asked in the question: https://github.com/majrie/visualize_augmentation/blob/master/visualize_augmentation.ipynb .
It is based on the answer of @danyfang to the following question: Visualizing augmented train images [tensorflow object detection api].
The current version of tensorflow-serving tries to load warmup requests from the assets.extra/tf_serving_warmup_requests file.
2018-08-16 16:05:28.513085: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:83] No warmup data file found at /tmp/faster_rcnn_inception_v2_coco_2018_01_28_string_input_version-export/1/assets.extra/tf_serving_warmup_requests
I wonder whether tensorflow provides a common API to export requests to that location, or whether we should write the requests there manually.
At this point there is no common API for exporting the warmup data into assets.extra. It's relatively simple to write a script (similar to the one below):
import tensorflow as tf
from tensorflow_serving.apis import model_pb2
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_log_pb2
def main():
    with tf.python_io.TFRecordWriter("tf_serving_warmup_requests") as writer:
        request = predict_pb2.PredictRequest(
            model_spec=model_pb2.ModelSpec(name="<add here>"),
            inputs={"examples": tf.make_tensor_proto([<add here>])}
        )
        log = prediction_log_pb2.PredictionLog(
            predict_log=prediction_log_pb2.PredictLog(request=request))
        writer.write(log.SerializeToString())

if __name__ == "__main__":
    main()
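Once generated, place the resulting tf_serving_warmup_requests file under the model's assets.extra directory (e.g. .../1/assets.extra/tf_serving_warmup_requests, matching the path in the log message above) so the server picks it up at load time.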
We referred to the official doc.
Specifically, we used Classification instead of Prediction, so we altered that code to be:
log = prediction_log_pb2.PredictionLog(
    classify_log=prediction_log_pb2.ClassifyLog(request=<request>))
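For completeness, a minimal sketch of building such a classification warmup record (assuming the ClassificationRequest proto from tensorflow_serving.apis; fill in your own model name and examples):
from tensorflow_serving.apis import classification_pb2
from tensorflow_serving.apis import prediction_log_pb2

request = classification_pb2.ClassificationRequest()
request.model_spec.name = '<add here>'
request.model_spec.signature_name = 'serving_default'
# Populate request.input.example_list.examples with tf.train.Example protos.

log = prediction_log_pb2.PredictionLog(
    classify_log=prediction_log_pb2.ClassifyLog(request=request))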
This is a complete example of an object detection system using a ResNet model; the prediction request consists of an image.
import tensorflow as tf
import requests
import base64
from tensorflow.python.framework import tensor_util
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_log_pb2
IMAGE_URL = 'https://tensorflow.org/images/blogs/serving/cat.jpg'
NUM_RECORDS = 100
def get_image_bytes():
    image_content = requests.get(IMAGE_URL, stream=True)
    image_content.raise_for_status()
    return image_content.content

def main():
    """Generate TFRecords for warming up."""
    with tf.io.TFRecordWriter("tf_serving_warmup_requests") as writer:
        image_bytes = get_image_bytes()
        predict_request = predict_pb2.PredictRequest()
        predict_request.model_spec.name = 'resnet'
        predict_request.model_spec.signature_name = 'serving_default'
        predict_request.inputs['image_bytes'].CopyFrom(
            tensor_util.make_tensor_proto([image_bytes], tf.string))
        log = prediction_log_pb2.PredictionLog(
            predict_log=prediction_log_pb2.PredictLog(request=predict_request))
        for r in range(NUM_RECORDS):
            writer.write(log.SerializeToString())

if __name__ == "__main__":
    main()
This script will create a file called "tf_serving_warmup_requests".
I moved this file to /your_model_location/resnet/1538687457/assets.extra/ and then restarted my Docker container to pick up the new changes.