What is planned for the tf model garden? - tensorflow

First, thanks for a great library. While it provides lots of great implementations, it seems that at least some parts of it do not keep up with the pace of TensorFlow development.
What is planned for object detection stuff? Will tf-slim be replaced with something alive? Is tf2 support planned?

The official repository provides a collection of example implementations of SOTA models using TensorFlow 2's latest high-level APIs.
The TensorFlow Model Garden team is actively working on providing more TensorFlow 2 models.
Please read this blog for more information.
https://blog.tensorflow.org/2020/03/introducing-model-garden-for-tensorflow-2.html
Please also check the GitHub repository to find more news.
https://github.com/tensorflow/models/tree/master/official#more-models-to-come

Please check the milestone for Object Detection API at https://github.com/tensorflow/models/milestones.
It will support TensorFlow 2 by early July.

Related

How am I supposed to use the TF model garden beta API?

The TF garden library provides vision-related beta features in https://github.com/tensorflow/models/tree/master/official/vision/beta.
I am using it because training the ResNet-RS model is known to be possible with this library. However, the API seems to have a very different interface and internal mechanisms from the original API (image classification in particular). In particular, it is not documented, and the code seems to be updated almost every day. The README.md file contains a single sentence: "This directory contains the new design of TF model garden vision framework."
Are users supposed to use the beta API? Or is it a work in progress, meaning I need to make a custom implementation? Is there documentation somewhere else?

What is the difference between Tensorflow Hub vs Tensorflow Official Models?

Regarding Tensorflow models in Tensorflow Hub (https://tfhub.dev/tensorflow) vs models found on official Tensorflow Github Repository (https://github.com/tensorflow/models/tree/master/official):
Does Tensorflow regularly maintain and update both of them?
What are the biggest differences between them?
What functionality does one have that the other doesn't?
Do they have overlapping models (e.g. ResNet, R-CNN)? Are some models exclusive to one of them?
Are they installed differently? Why or why not?
Are they deployed differently? Why or why not?
Is one more "official" than the other? Does one have more stable models?
As a user of the model, what would be the biggest difference in experience while using them?
Thanks!

Tensorflow port model from 1.x to 2.x

This is a basic question.
Currently I am using an available 1.x model for object detection.
I am re-training this model with my own data and can detect the objects I want.
I would like to port all my logic to the 2.x version in order to use the newly released tools for converting models to TFLite.
Do I need to retrain the weights of the reference model (COCO, for example) once I migrate the code to 2.x?
Or do I only need to retrain on my customized data?
If so, is there any recommendation for doing this without much effort?
Thanks for the advice.
Luckily for all users, TensorFlow has extensive documentation, and its developers anticipated this question and have already answered it for you. This post should help you migrate your model from 1.x to 2.x smoothly.
To sum it up quickly: if you are using high-level APIs like Keras, it is basically no work at all. If you want to take advantage of the performance improvements made in TensorFlow 2, or if you are not using those high-level APIs, it may be a bit more work.
The weights of your network should generally not have to be retrained, unless you also want to change your model. If you just want to use the same model in TensorFlow 2, the link above should help you port your code, and you should not have to retrain the weights.

How to use a custom model with Tensorflow Hub?

My goal is to test out Google's BERT algorithm in Google Colab.
I'd like to use a pre-trained custom model for Finnish (https://github.com/TurkuNLP/FinBERT). The model cannot be found in the TF Hub library, and I have not found a way to load it with TensorFlow Hub.
Is there a neat way to load and use a custom model with Tensorflow Hub?
Fundamentally: yes. Anyone can create the kind of models that TF Hub hosts, and I hope authors of interesting models do consider that.
For TF1 and the hub.Module format tailored to it, see
https://www.tensorflow.org/hub/tf1_hub_module#creating_a_new_module
For TF2 and its revised SavedModel format, see
https://www.tensorflow.org/hub/tf2_saved_model#creating_savedmodels_for_tf_hub
That said, a sophisticated model like BERT requires a bit of attention to export it with all bells and whistles, so it helps to have some tooling to build on. The BERT reference implementation for TF2 at https://github.com/tensorflow/models/tree/master/official/nlp/bert comes with an open-sourced export_tfhub.py script, and anyone can use that to export custom BERT instances created from that code base.
However, I understand from https://github.com/TurkuNLP/FinBERT/blob/master/nlpl_tutorial/training_bert.md#general-info that you are using Nvidia's fork of the original TF1 implementation of BERT. There are Hub modules created from the original research code, but the tooling to that end has not been open-sourced, and Nvidia doesn't seem to have added their own either.
If that's not changing, you'll probably have to resort to doing things the pedestrian way: get acquainted with their codebase and load their checkpoints into it yourself.

How to deploy parsey's cousins with tensorflow serving

Are there instructions or some documentation somewhere or could somebody describe how to deploy the models available as "Parsey's Cousins" (see https://github.com/tensorflow/models/blob/master/syntaxnet/universal.md) with SyntaxNet under Tensorflow Serving? Even deploying just Parsey is a rather complex undertaking that is not really documented anywhere, but how to do this for the additional 40 languages?
This pull request partially addresses your request, but it still has some issues: https://github.com/tensorflow/models/pull/250.
We do have some tentative plans to provide easier integration between SyntaxNet and Tensorflow Serving, but no precise timeline.
Just for the benefit of anyone else who finds this question, after some digging around on GitHub, one can find the following issue started by Johann Petrak:
https://github.com/dsindex/syntaxnet/issues/7
a model from Parsey's Cousins cannot be exported with that patch due to a version mismatch
So whilst some people have been able to modify syntaxnet so that it works with Tensorflow Serving, this seems to be at the cost of using a version which is not compatible with Parsey's Cousins.
Currently the only way to get Tensorflow Serving working with languages other than English is to use something like dsindex's code and train your own models.