TensorFlow: porting a model from 1.x to 2.x

This is a basic question.
Currently I am using an available 1.x model for object detection. I am re-training this model with my own data and can detect the objects I want.
I would like to port all my logic to the 2.x version in order to use the newly released tools for converting models to TFLite.
Do I need to retrain the weights of the reference model (COCO, for example) once I modify the code for 2.0? Or only retrain on my customized data?
If so, is there any recommendation for doing this without much effort?
Thanks for the advice.

Luckily for all users, TensorFlow has a lot of documentation, and its developers anticipated this question and have already answered it for you. This post should help you migrate your model from 1.x to 2.x.
To sum it up quickly: if you are using high-level APIs like Keras, it is basically no work at all. If you want to make use of the performance improvements made in TensorFlow 2, or if you are not using said high-level APIs, it might be a bit more work.
The weights of your network generally do not have to be retrained, unless you change the model itself, of course. If you just want to use the same model in TensorFlow 2, the link above should help you port your code, and you should not have to retrain the weights.
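If some of your code does not go through the high-level APIs, TensorFlow's compatibility shim is a common first step: it lets unmodified 1.x graph code run on a 2.x installation while you migrate piece by piece. A minimal sketch (the tiny graph below is just a placeholder for your own 1.x logic):

```python
import tensorflow.compat.v1 as tf

# Run unmodified 1.x-style graph code on a TensorFlow 2.x installation.
tf.disable_v2_behavior()

# Placeholder 1.x graph code; substitute your own detection logic.
x = tf.placeholder(tf.float32, shape=(None, 4))
w = tf.Variable(tf.ones((4, 1)))
y = tf.matmul(x, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]}))
```

TensorFlow also ships a tf_upgrade_v2 command-line script that rewrites most 1.x call sites automatically; the migration guide linked above covers both approaches.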

Related

How to use the models under tensorflow/models/research/object_detection/models?

I'm looking into training an object detection network using TensorFlow, and I had a look at the TF2 Model Zoo. I noticed that there are noticeably fewer models there than in the directory /models/research/models/, including the MobileDet with SSDLite developed for the Jetson Xavier.
To clarify, the README says that there is a MobileDet GPU with SSDLite, and that the model and checkpoints trained on COCO are provided, yet I couldn't find them anywhere in the repo.
How is one supposed to use those models?
I already have a custom-trained MobileDetv3 for image classification, and I was hoping to see a way to turn the network into an object detection network, in accordance with the MobileDetv3 paper. If this is not straightforward, training a network from scratch could be OK too; I just need to know where to even start.
If you plan to use the Object Detection API, you can't use your existing model. You have to choose from a list of models: here for v2 and here for v1.
The documentation is very well maintained, and the steps to train, validate, or run inference (test) on custom data are explained well here by the TensorFlow team. That link is for TensorFlow v2; if you wish to use v1, the process is fairly similar, and there are numerous blogs and videos explaining how to go about it.
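To give a feel for the workflow with a zoo model, here is a minimal inference sketch. The SavedModel path is hypothetical (it would come from an extracted Model Zoo archive), and the zero image is a stand-in for a real one:

```python
import numpy as np
import tensorflow as tf

# Hypothetical path to an extracted TF2 Model Zoo archive.
detect_fn = tf.saved_model.load("ssd_mobilenet_v2_fpnlite_640/saved_model")

# Exported detection models expect a batched uint8 image tensor.
image = np.zeros((640, 640, 3), dtype=np.uint8)  # stand-in for a real image
detections = detect_fn(tf.convert_to_tensor(image)[tf.newaxis, ...])

print(detections["detection_boxes"].shape)    # boxes per detection
print(detections["detection_scores"][0][:5])  # top confidence scores
```

Training on your own data then mostly amounts to editing the pipeline.config that ships with the checkpoint and pointing the API's training script at it, as the linked documentation walks through.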

Understanding the Hugging face transformers

I am new to the Transformers concept, and I am going through some tutorials and writing my own code to understand question answering on the SQuAD 2.0 dataset using transformer models. On the Hugging Face website, I came across 2 different links:
https://huggingface.co/models
https://huggingface.co/transformers/pretrained_models.html
I want to know the difference between these 2 websites. Does one link have just pre-trained models and the other have pre-trained and fine-tuned models?
Now if I want to use, let's say, an ALBERT model for question answering, train it on my SQuAD 2.0 training dataset, and evaluate the model, which of the links should I go to?
I would formulate it like this:
The second link basically describes "community-accepted models", i.e., models that serve as the basis for the implemented Hugging Face classes, like BERT, RoBERTa, etc., plus some related models that enjoy high acceptance or have been peer-reviewed.
This list has been around much longer, whereas the list in the first link was only recently introduced directly on the Hugging Face website, where the community can upload arbitrary checkpoints that are simply considered "compatible" with the library. Oftentimes these are additional models trained by practitioners or other volunteers, with task-specific fine-tuning. Note that all models from /pretrained_models.html are included in the /models interface as well.
If you have a very narrow use case, you might as well check whether some model has already been fine-tuned on your specific task. In the worst case, you'll simply end up with the base model anyway.
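To make this concrete, a minimal sketch of loading an ALBERT checkpoint for question answering (albert-base-v2 is one of the base checkpoints from the second list; a SQuAD-fine-tuned community checkpoint from the first list could be dropped in instead):

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Base checkpoint from the canonical list; fine-tune it on SQuAD 2.0 yourself,
# or swap in an already fine-tuned checkpoint from huggingface.co/models.
model_name = "albert-base-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "Where are community checkpoints hosted?"
context = "Community checkpoints are uploaded to the Hugging Face model hub."
inputs = tokenizer(question, context, return_tensors="pt")
outputs = model(**inputs)  # start/end logits over the context tokens
```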

How to use a custom model with Tensorflow Hub?

My goal is to test out Google's BERT algorithm in Google Colab.
I'd like to use a pre-trained custom model for Finnish (https://github.com/TurkuNLP/FinBERT). The model cannot be found in the TF Hub library, and I have not found a way to load it with TensorFlow Hub.
Is there a neat way to load and use a custom model with TensorFlow Hub?
Fundamentally: yes. Anyone can create the kind of models that TF Hub hosts, and I hope authors of interesting models do consider that.
For TF1 and the hub.Module format tailored to it, see
https://www.tensorflow.org/hub/tf1_hub_module#creating_a_new_module
For TF2 and its revised SavedModel format, see
https://www.tensorflow.org/hub/tf2_saved_model#creating_savedmodels_for_tf_hub
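For the TF2 route, the core of it is just exporting a SavedModel and loading it back, for instance through hub.KerasLayer. A minimal sketch (the tiny Keras model is a stand-in, not BERT):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Trivial stand-in model; a real BERT export needs its preprocessing and
# signatures handled carefully, as discussed below.
model = tf.keras.Sequential([tf.keras.layers.Dense(8, input_shape=(4,))])
tf.saved_model.save(model, "/tmp/my_hub_model")

# The exported directory (or a hosted tarball of it) can then be consumed
# like any other hub model:
layer = hub.KerasLayer("/tmp/my_hub_model")
print(layer(tf.zeros((1, 4))).shape)  # (1, 8)
```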
That said, a sophisticated model like BERT requires a bit of attention to export it with all bells and whistles, so it helps to have some tooling to build on. The BERT reference implementation for TF2 at https://github.com/tensorflow/models/tree/master/official/nlp/bert comes with an open-sourced export_tfhub.py script, and anyone can use that to export custom BERT instances created from that code base.
However, I understand from https://github.com/TurkuNLP/FinBERT/blob/master/nlpl_tutorial/training_bert.md#general-info that you are using Nvidia's fork of the original TF1 implementation of BERT. There are Hub modules created from the original research code, but the tooling to that end has not been open-sourced, and Nvidia doesn't seem to have added their own either.
If that's not changing, you'll probably have to resort to doing things the pedestrian way: getting acquainted with their codebase and loading their checkpoints into it.

fast.ai equivalent in tensorflow

Is there any equivalent/alternative library to fastai in TensorFlow, for easier training and debugging of deep learning models, including analysis of the results of a trained model?
fastai is built on top of PyTorch; I'm looking for a similar one in TensorFlow.
The obvious choice would be to use tf.keras.
It is bundled with TensorFlow and is becoming its official "high-level" API, to the point where in TF 2 you would probably need to go out of your way not to use it at all.
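A minimal end-to-end sketch of the kind of high-level workflow tf.keras gives you out of the box (MNIST and the layer sizes here are purely illustrative):

```python
import tensorflow as tf

# Load a toy dataset and normalize pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define, compile, and train a small classifier with sensible defaults.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))
```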
Keras is clearly the source of inspiration for fastai, which aims to ease the use of PyTorch just as Keras does for TensorFlow, as mentioned by the authors time and again:
Unfortunately, Pytorch was a long way from being a good option for part one of the course, which is designed to be accessible to people with no machine learning background. It did not have anything like the clear simple API of Keras for training models. Every project required dozens of lines of code just to implement the basics of training a neural network. Unlike Keras, where the defaults are thoughtfully chosen to be as useful as possible, Pytorch required everything to be specified in detail. However, we also realised that Keras could be even better. We noticed that we kept on making the same mistakes in Keras, such as failing to shuffle our data when we needed to, or vice versa. Also, many recent best practices were not being incorporated into Keras, particularly in the rapidly developing field of natural language processing. We wondered if we could build something that could be even better than Keras for rapidly training world-class deep learning models.

Object-Detection-using-Fast-R-CNN python and brain_script model different?

For the toy example A2 that is part of the Beta 12 release, it is said that there are two options for training:
A2_RunCntk_py3.py (python API)
A2_RunCntk.py (brain_script)
Are the models trained by these two methods the same? In other words, can I load the model from BrainScript into the Python API and then detect other testing images?
Also see Object Detection using Fast R-CNN.
Yes, it is possible to use Python to load a model you trained with BrainScript. A few gotchas in doing this correctly are described here. We are working on making things work seamlessly, without requiring too much Python code for massaging the data.
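A minimal sketch of loading a BrainScript-trained model from the Python API (the model path is hypothetical):

```python
import cntk as C

# Load a model file produced by a BrainScript training run (path is hypothetical).
model = C.load_model("Output/Fast-RCNN.model")

# Inspect inputs and outputs to see how evaluation data must be fed.
print([(v.name, v.shape) for v in model.arguments])
print([(v.name, v.shape) for v in model.outputs])
```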