What is the difference between TFHub and Model Garden? - tensorflow

TensorFlow Hub is a repository of pre-trained models. Model Garden (also called the Model Zoo) likewise hosts state-of-the-art models and, like TF Hub, provides facilities for downloading and using them; both are maintained by the TensorFlow team.
Why does TensorFlow maintain two separate model repositories?
When should we use TF Hub to retrieve a well-known model, and when should we use Model Garden? What is the difference between them?

TF Hub provides trained models in SavedModel, TFLite, or TF.js format. These artifacts can be used for inference and some can be used in code for fine-tuning. TF Hub does not provide modeling library code to train your own models from scratch.
Model Garden is a modeling library for training BERT, image classification models, and more. Model Garden provides code for training your own models from scratch as well as some checkpoints to start from.
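As a rough, hedged illustration of the two workflows (the TF Hub handle shown is a real published module, but loading it requires network access and the tensorflow_hub package, so the calls are shown without being executed; the Model Garden commands are abbreviated):

```text
# TF Hub: load a published SavedModel for inference or fine-tuning.
#   import tensorflow_hub as hub
#   embed = hub.load("https://tfhub.dev/google/nnlm-en-dim50/2")
#   vectors = embed(["hello world"])

# Model Garden: consume the repository as a code base and train
# your own model from scratch or from a released checkpoint.
#   git clone https://github.com/tensorflow/models
#   then run one of the training scripts under models/official/
```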

Related

Can Rasa NLU share the same spacy model among multiple models

I am using Rasa NLU. I have three models trained with the same pipeline on different training datasets. The pipeline uses spaCy for tokenization and for building the word vectors.
When I load all three models into memory, exactly how many times does Rasa load the spaCy en_core_web_lg model? Can the same spaCy model be shared between multiple trained NLU models?
The spaCy model is loaded into memory each time you train a model that uses it. It is, however, only downloaded once; in that sense the same model files are shared by all NLU models trained in the same environment.
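Rasa itself decides when spaCy is loaded, but if you wanted to guarantee that a single in-memory instance is shared, the usual pattern is a process-level cache. This is a hedged sketch, not Rasa code; `fake_loader` is a hypothetical stand-in for spacy.load:

```python
# Hedged sketch: share one loaded language model across several
# consumers via a process-level cache, so the expensive load runs once.
_cache = {}

def get_language_model(name, loader):
    """Load the model named `name` once per process, then reuse it."""
    if name not in _cache:
        _cache[name] = loader(name)
    return _cache[name]

load_calls = []

def fake_loader(name):              # stand-in for spacy.load(name)
    load_calls.append(name)
    return {"name": name}           # stand-in for the Language object

nlu_a = get_language_model("en_core_web_lg", fake_loader)
nlu_b = get_language_model("en_core_web_lg", fake_loader)
```

Both lookups return the same object, and the loader runs only once.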

Retraining existing base BERT model with additional data

I have generated a new base BERT model (dataset1_model_cased_L-12_H-768_A-12) from cased_L-12_H-768_A-12, trained for multi-label classification with biobert-run_classifier.
I now need to continue training on additional data (dataset2), and the resulting model should be dataset2_model_cased_L-12_H-768_A-12.
Can tensorflow-hub help resolve this problem?
The model training life cycle will look like this:
cased_L-12_H-768_A-12 => dataset1 => dataset1_model_cased_L-12_H-768_A-12
dataset1_model_cased_L-12_H-768_A-12 => dataset2 => dataset2_model_cased_L-12_H-768_A-12
Tensorflow Hub is a platform for sharing pre-trained model pieces or whole models, and an API to facilitate this sharing. In TF 1.x, this API was a stand-alone API and in TF 2.x this API (SavedModel: https://www.tensorflow.org/guide/saved_model) is part of the core TF API.
In the proposed training life-cycle, using SavedModel to save the relevant model between training steps can simplify the pipeline design. Alternatively, you could use the code examples in the TF Model Garden to perform this pre-training: https://github.com/tensorflow/models/tree/master/official/nlp.
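The save-reload-continue chain can be sketched with the TF 2.x SavedModel API. This is a minimal, hedged illustration: the tiny module, the variable update standing in for fine-tuning, and the /tmp paths are placeholders, not BERT code:

```python
import tensorflow as tf

# Toy stand-in for a real model; in practice this would be BERT.
class TinyModel(tf.Module):
    def __init__(self):
        self.w = tf.Variable(tf.ones([4, 2]))

    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def __call__(self, x):
        return x @ self.w

model = TinyModel()
# ... fine-tune on dataset1 here ...
tf.saved_model.save(model, "/tmp/dataset1_model")      # export step 1

restored = tf.saved_model.load("/tmp/dataset1_model")  # resume later
restored.w.assign_add(tf.ones([4, 2]))                 # stands in for fine-tuning on dataset2
tf.saved_model.save(restored, "/tmp/dataset2_model")   # export step 2

out = restored(tf.ones([1, 4]))   # each entry: 4 * 2.0 = 8.0
```

Each exported directory is a self-contained SavedModel, so any step in the chain can also be shared or served directly.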

Jointly training models in Tensorflow and Pytorch

I have two models, model A in Tensorflow 2.0 and model B in Pytorch 1.3. Model A's output is B's input. I'd like to train the two models end-to-end.
Is it possible to do without porting one of the models to the other framework?
I believe it is impossible to jointly train models in TensorFlow and PyTorch. The two frameworks use very different backend architectures to compute the loss and run backpropagation, so they are incompatible with each other for training deep learning models.
A better question is which TensorFlow model and which PyTorch model you are using. As the deep learning community has grown, most widely used algorithms now have implementations in both PyTorch and TensorFlow; it rarely happens that an implementation exists in only one of them. Try to find corresponding implementations in a single framework and join them together.

How to add a feature extractor network, for example MobileNetV2, to TensorFlow's object detection API

This tutorial discusses how to use the object detection API in TensorFlow.
I am looking for a tutorial explaining how to add a feature extractor such as MobileNetV2 to TensorFlow's object detection framework.
Have you checked out the Tensorflow provided Model Zoo? :)
It includes various object detection models with different feature extractors such as MobileNet, Inception, ResNet, etc.
Below is a link to the TensorFlow Detection Model Zoo, where you can choose detection architectures, Region-Based (R-CNN) or Single Shot Detector (SSD) models, and feature extractors.
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
You can download a frozen graph of the pre-trained models trained on COCO, KITTI, Open Images, etc.
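For orientation, in the TF Object Detection API the feature extractor is selected in the model's pipeline.config rather than in code. A hedged, abbreviated excerpt (field values are illustrative; `ssd_mobilenet_v2` is one of the registered extractor types):

```protobuf
model {
  ssd {
    feature_extractor {
      type: "ssd_mobilenet_v2"
      depth_multiplier: 1.0
      min_depth: 16
    }
  }
}
```

Swapping the extractor is then mostly a matter of changing the `type` field to another registered extractor and retraining or fine-tuning.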

How can I get access to intermediate activation maps of the pre-trained models in NiftyNet?

I could download and successfully test the brain parcellation demo of the NiftyNet package. However, this only gives me the final parcellation result of a pre-trained network, whereas I also need access to the outputs of the intermediate layers.
According to this demo, the following line downloads a pre-trained model and a test MR volume:
wget -c https://www.dropbox.com/s/rxhluo9sub7ewlp/parcellation_demo.tar.gz -P ${demopath}
where ${demopath} is the path to the demo folder. Extracting the downloaded file creates a .ckpt file that seems to contain a pre-trained TensorFlow model; however, I could not manage to load it into a TensorFlow session.
Is there a way to load the pre-trained model and access all of its intermediate activation maps? In other words, how can I load the pre-trained models from the NiftyNet library into a TensorFlow session so that I can explore the model or probe a particular intermediate layer for any given input image?
Finally, on NiftyNet's website it is mentioned that "a number of models from the literature have been (re)implemented in the NiftyNet framework". Are the pre-trained weights of these models also available? The demo uses a pre-trained model called HighRes3DNet. If the pre-trained weights of other models are also available, where can I download those weights or saved TensorFlow models?
To answer your 'Finally' question first, NiftyNet has some network architectures implemented (e.g., VNet, UNet, DeepMedic, HighRes3DNet) that you can train on your own data. For a few of these, there are pre-trained weights for certain applications (e.g. brain parcellation with HighRes3DNet and abdominal CT segmentation with DenseVNet).
Some of these pre-trained weights are linked from the demos, like the parcellation one you linked to. We are starting to collect the pre-trained models into a model zoo, but this is still a work in progress.
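As a hedged, general sketch (not NiftyNet-specific code): once the network definition has been rebuilt in a graph, a TF1-style session can restore the .ckpt and fetch any intermediate tensor by name. The toy graph and tensor names below are illustrative stand-ins:

```python
import numpy as np
import tensorflow as tf

# Toy TF1-style graph standing in for a rebuilt NiftyNet network.
# With a real model you would reconstruct its architecture and then
# call saver.restore(sess, "path/to/model.ckpt") instead of running
# the initializer; the tensor names here are illustrative.
tf.compat.v1.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [None, 4], name="input")
    w1 = tf.compat.v1.get_variable("w1", shape=[4, 3])
    hidden = tf.matmul(x, w1, name="hidden")       # intermediate activation
    w2 = tf.compat.v1.get_variable("w2", shape=[3, 1])
    output = tf.matmul(hidden, w2, name="output")
    saver = tf.compat.v1.train.Saver()

with tf.compat.v1.Session(graph=graph) as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    # saver.restore(sess, "path/to/model.ckpt")    # for real weights
    acts = sess.run("hidden:0",
                    feed_dict={x: np.ones((1, 4), np.float32)})
```

The key idea is that any named op's output (`"hidden:0"` here) can be passed to `sess.run`, not just the final prediction.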
Eli Gibson [NiftyNet developer]