Saving subclass model with custom training - tensorflow

I want to save a custom model built with TensorFlow model subclassing. It is trained with a custom training loop (not the .compile()/.fit() methods).
For example, these tutorials:
https://www.tensorflow.org/tutorials/text/nmt_with_attention
https://www.tensorflow.org/tutorials/text/image_captioning
I couldn't find documentation on saving such models so that they can be converted to TensorFlow Lite format and used on Android/iOS devices.
Thank you in advance.
I have searched a lot; please help me if you know the answer.
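For reference, here is a minimal sketch of one common route (my own assumption, not code from those tutorials): wrap inference in a tf.function with a fixed input signature, export a SavedModel, and convert that with the TFLite converter. The toy MyModel, input shape and dtype below are placeholders for the real subclassed model; the tutorials' encoder-decoder models also contain Python-side decoding loops, so each step of those may need to be exported as its own function and may not convert cleanly.

import tensorflow as tf

# Toy subclassed model standing in for one trained with a custom loop.
class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(10)

    def call(self, x, training=False):
        return self.dense(x)

model = MyModel()
_ = model(tf.zeros([1, 32]))  # build the variables (your training loop normally does this)

# Give the exported graph a static serving signature.
@tf.function(input_signature=[tf.TensorSpec([1, 32], tf.float32)])
def serving_fn(x):
    return model(x, training=False)

tf.saved_model.save(model, "exported_model",
                    signatures={"serving_default": serving_fn})

converter = tf.lite.TFLiteConverter.from_saved_model("exported_model")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)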

Related

How do I use the Object Detection API to evaluate my own custom model? What do I write into the config files?

I have a custom object detection model that I can load with model = MyModel() and model.load_weights(checkpoint), and I want to evaluate it using the Object Detection API.
From what I understand, there are two possibilities: either I use the legacy eval.py, where I don't know what to put into the pipeline_config file, or I use the newer version implemented in model_main_tf2.py, but there I would have to save my model as model.config, and I don't know what to put in the pipeline file either.
Since my model is a YOLO model, it is not included in the sample configs yet.
https://github.com/tensorflow/models/tree/master/research/object_detection/configs/tf2
Would really appreciate the help!
You can't calculate the mAP using the Object Detection API because there's no pipeline.config file for YOLO.
However, you can check this repo out. It's a TensorFlow-based implementation of YOLOv3, and it has working code for calculating mAP that you can modify accordingly to calculate the mAP of your model.
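For intuition, here is a minimal NumPy sketch of what an mAP calculation does (not the repo's code; boxes are assumed to be [x1, y1, x2, y2], and the detections here are for a single class and a single image, whereas a real evaluator also matches detections to images by id):

import numpy as np

def iou(a, b):
    # Intersection-over-union of two boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def average_precision(detections, gt_boxes, iou_thr=0.5):
    # detections: list of (score, box); gt_boxes: list of boxes.
    detections = sorted(detections, key=lambda d: -d[0])
    matched, tp, fp = set(), [], []
    for _, box in detections:
        ious = [iou(box, g) for g in gt_boxes]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thr and best not in matched:
            matched.add(best); tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(len(gt_boxes), 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    # VOC-style all-point interpolation of the precision-recall curve.
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

mAP is then the mean of average_precision over all classes.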

CreateML what kind of ObjectDetector Network is trained?

I used Create ML to train a new custom object detector.
Everything worked well so far.
Now I am just wondering what kind of network is trained in the background.
Is it something like YOLO or MobileNet?
I did not find anything in the official documentation:
https://developer.apple.com/documentation/createml#overview
There are two options:
TinyYOLOv2
Transfer learning, which uses a built-in feature extractor model (VisionFeaturePrint.Objects); this option is available with Create ML in Xcode 12.

issue with converting the model from colab to tf.keras h5 model

I'm having a really hard time converting this model to an h5 model so I can then convert it to TensorFlow Lite. Has someone managed to do that?
I really appreciate any help that I can get.
Here is my colab:
https://colab.research.google.com/drive/1ZON8lvha8sI9ZCJEF0Ad8au2NNc9sUkU
I used this approach:
import os
from docproduct.models import MedicalQAModelwithBert

medical_qa_model = MedicalQAModelwithBert(
    config_file=os.path.join(pretrained_path, 'bert_config.json'),
    checkpoint_file=os.path.join(pretrained_path, 'biobert_model.ckpt'))
medical_qa_model.save("model.h5")
The error that I get is
NotImplementedError: The save method requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider using save_weights, in order to save the weights of the model.
I can use save_weights, but then I will have trouble converting to tflite, because the converter needs the whole model. Does anyone have suggestions on how to solve this issue?
My ultimate goal is to convert the model to tflite.
Thanks
Update:
It seems the issue is that they make their own subclass of Model and do not implement save().
https://github.com/re-search/DocProduct/blob/master/docproduct/models.py#L62
Is there any workaround to be able to convert the model without training it from scratch?
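One possible workaround (a sketch only, untested with DocProduct and assuming a TF 2.x converter): skip the HDF5 .save() entirely, keep the subclassed model with its loaded weights, wrap inference in a tf.function with a fixed input signature, and hand that concrete function to the TFLite converter. The input shape/dtype below are placeholders; use whatever the model's call() actually expects (e.g. BERT token-id tensors), and pretrained_path is the same variable as in the snippet above.

import os
import tensorflow as tf
from docproduct.models import MedicalQAModelwithBert

medical_qa_model = MedicalQAModelwithBert(
    config_file=os.path.join(pretrained_path, 'bert_config.json'),
    checkpoint_file=os.path.join(pretrained_path, 'biobert_model.ckpt'))

# Placeholder signature -- replace with the model's real inputs.
@tf.function(input_signature=[tf.TensorSpec([1, 256], tf.int32)])
def serve(inputs):
    return medical_qa_model(inputs)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [serve.get_concrete_function()])
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)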

Building deep learning from config file using Tensorflow

I would like to ask whether there is any method at hand to build deep learning models from a config file using TensorFlow, just like in Caffe. Thank you very much.
The project caffe-tensorflow allows you to convert Caffe models to TensorFlow.
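If the goal is only to define the architecture in a config file, tf.keras can also round-trip architectures through JSON out of the box. A minimal sketch (architecture only; weights are not stored in the config):

import tensorflow as tf

# Build the model once in code and dump its architecture to a JSON config file.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(100,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
with open('model_config.json', 'w') as f:
    f.write(model.to_json())

# Later (or in another program), rebuild the same architecture from the file.
with open('model_config.json') as f:
    rebuilt = tf.keras.models.model_from_json(f.read())
rebuilt.summary()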

Object detection using CNTK

I am very new to CNTK.
I wanted to train a set of images (to detect objects like alcohol glasses/bottles) using CNTK - ResNet/Fast R-CNN.
I am trying to follow the documentation from GitHub below; however, it does not appear to be a straightforward procedure. https://github.com/Microsoft/CNTK/wiki/Object-Detection-using-Fast-R-CNN
I cannot find proper documentation to generate ROIs for images with different sizes and shapes. And how do I create object labels based on the trained models? Can someone point me to proper documentation or a training link with which I can work on the CNTK model? Please see the attached image, in which I was able to load a sample image with default ROIs in the script. How do I properly set the size and label the object in the image? Thanks in advance!
sample image loaded for training
Not sure what you mean by proper documentation. This is an implementation of the paper (https://arxiv.org/pdf/1504.08083.pdf). It looks like you are trying to generate ROIs. Can you look through the helper functions documented at the site to find what you might need:
To run the toy example, make sure that in PARAMETERS.py the datasetName is set to "grocery".
Run A1_GenerateInputROIs.py to generate the input ROIs for training and testing.
Run A2_RunCntk_py3.py to train a Fast R-CNN model using the CNTK Python API and compute test results.
The algorithm works on several candidate regions and then generates two outputs: one for the classes of the objects and another for the bounding boxes of the objects belonging to those classes. Please refer to the code for the details of the implementation.
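To make the two outputs concrete, here is a small illustrative sketch (written with tf.keras rather than CNTK, and with arbitrary placeholder shapes): a shared ROI feature feeds a class-score head and a bounding-box regression head.

import tensorflow as tf

num_classes = 21                                    # e.g. 20 object classes + background
roi_features = tf.keras.Input(shape=(7, 7, 512))    # pooled ROI feature map

x = tf.keras.layers.Flatten()(roi_features)
x = tf.keras.layers.Dense(1024, activation='relu')(x)

class_scores = tf.keras.layers.Dense(num_classes, activation='softmax',
                                     name='class_scores')(x)
bbox_deltas = tf.keras.layers.Dense(4 * num_classes,
                                    name='bbox_regression')(x)

model = tf.keras.Model(roi_features, [class_scores, bbox_deltas])
model.summary()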
Can someone point me to proper documentation or a training link with which I can work on the CNTK model?
You can take a look at my repository on GitHub.
It will guide you through all the steps required to train your own model for object detection and classification with CNTK.
But in short the proper steps should look something like this:
Setup environment
Prepare data
Tag images (ground truth); see the sketch after this list
Download pretrained model and create mappings for your custom dataset
Run training
Evaluate the model on test set
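As a hypothetical illustration of the tagging step: ground truth boils down to, per image, a list of bounding boxes with a class label for each box. The exact file names and format expected by the CNTK Fast R-CNN scripts are defined in the linked repos; this only shows the kind of data you need to produce.

annotations = {
    'img_001.jpg': [((10, 20, 110, 220), 'bottle'),
                    ((150, 40, 260, 230), 'glass')],
}

with open('img_001.annotations.txt', 'w') as f:
    for (x1, y1, x2, y2), label in annotations['img_001.jpg']:
        # one box per line: x1 y1 x2 y2 <tab> class label
        f.write(f'{x1} {y1} {x2} {y2}\t{label}\n')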