Retraining an existing base BERT model with additional data - TensorFlow

I have generated a new base BERT model (dataset1_model_cased_L-12_H-768_A-12), starting from cased_L-12_H-768_A-12 and training it for multi-label classification with BioBERT's run_classifier.
I now need to train on additional data (dataset2), and the resulting model should be dataset2_model_cased_L-12_H-768_A-12.
Can TensorFlow Hub help resolve this problem?
The model training life cycle will look like this:
cased_L-12_H-768_A-12 => dataset1 => dataset1_model_cased_L-12_H-768_A-12
dataset1_model_cased_L-12_H-768_A-12 => dataset2 =>
dataset2_model_cased_L-12_H-768_A-12

TensorFlow Hub is a platform for sharing pre-trained models, either whole or in pieces, together with an API to facilitate that sharing. In TF 1.x this was a stand-alone API, while in TF 2.x it builds on SavedModel (https://www.tensorflow.org/guide/saved_model), which is part of the core TF API.
In the proposed training life cycle, using SavedModel to persist the model between training steps could simplify the pipeline architecture. Alternatively, you could use the coding examples in the TF Model Garden to perform this pre-training: https://github.com/tensorflow/models/tree/master/official/nlp.
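For illustration, the save/continue cycle could look like the following sketch using the Keras SavedModel APIs (the build_classifier helper and the dataset1/dataset2 objects are placeholders standing in for your own fine-tuning setup, not part of any library):

import tensorflow as tf

# Phase 1: fine-tune the base BERT checkpoint on dataset1, then save.
model = build_classifier("cased_L-12_H-768_A-12")   # hypothetical helper
model.fit(dataset1, epochs=3)
model.save("dataset1_model_cased_L-12_H-768_A-12")  # SavedModel directory

# Phase 2: reload the saved model and continue training on dataset2.
model = tf.keras.models.load_model("dataset1_model_cased_L-12_H-768_A-12")
model.fit(dataset2, epochs=3)
model.save("dataset2_model_cased_L-12_H-768_A-12")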

Related

Inspecting functional Keras model structure

I would like to inspect the layers and connections in a model, after having created it with the Functional API in Keras: essentially, to start at the output and recursively enumerate the inputs of each layer instance. Is there a way to do this in the Keras or TensorFlow API?
The purpose is to create a more detailed visualisation than the ones provided by Keras (tf.keras.utils.plot_model). The model is generated procedurally based on a parameter file.
I have successfully used attributes of the KerasTensor objects to do this inspection:
from tensorflow.keras.layers import Input, Dense

inputs = Input(shape=(4,))
output = Dense(1)(inputs)
print(output)                             # KerasTensor
print(output.node)                        # the Node that produced it
print(output.node.keras_inputs)           # that node's input tensors
print(output.node.keras_inputs[0].node)   # one level further upstream
This wasn't available in TF 2.6, only 2.7, and I realise it's not documented anywhere.
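For context, a full recursive walk over these attributes might look like this sketch (it relies on the same undocumented behaviour, and the walk function is my own, not part of the Keras API):

from tensorflow.keras.layers import Input, Dense

def walk(tensor, depth=0):
    node = tensor.node                    # undocumented KerasTensor attribute
    print("  " * depth + node.layer.name)
    for parent in node.keras_inputs:      # empty at Input layers, ending the recursion
        walk(parent, depth + 1)

x = Input(shape=(4,))
y = Dense(2)(Dense(8)(x))
walk(y)   # prints dense_1, dense, input_1 with increasing indentation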
Is there a proper way to do this?

What is the difference between TFHub and Model Garden?

TensorFlow Hub is a repository for pre-trained models. Model Garden (the Model Zoo) also hosts SOTA models and, like TF Hub, provides facilities for downloading and using them; both are maintained by the TensorFlow team.
Why did TensorFlow create two separate model repositories?
When should we use TF Hub to retrieve a well-known model, and when should we use Model Garden to download one? What is the difference between them?
TF Hub provides trained models in SavedModel, TFLite, or TF.js format. These artifacts can be used for inference and some can be used in code for fine-tuning. TF Hub does not provide modeling library code to train your own models from scratch.
Model Garden is a modeling library for training BERT, image classification models, and more. Model Garden provides code for training your own models from scratch as well as some checkpoints to start from.
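For example, a TF Hub artifact can be pulled into a tf.keras model and fine-tuned (a minimal sketch; the handle below is a small text-embedding model used in the TF tutorials, and it assumes tensorflow_hub is installed):

import tensorflow as tf
import tensorflow_hub as hub

# Pre-trained text embedding from TF Hub, loaded as a Keras layer;
# trainable=True lets the downloaded weights be fine-tuned.
embed = hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim50/2",
                       input_shape=[], dtype=tf.string, trainable=True)

model = tf.keras.Sequential([
    embed,
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),            # binary classification head
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=["accuracy"])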

Can we build an object detection model using TensorFlow, or is it only possible with the help of tf.keras?

Is there any way to build an object detection model using TensorFlow without any help from the tf.keras module?
In the TensorFlow documentation I'm not able to find any example that creates a model without Keras.
Keras is a high-level API. If you want to use only TensorFlow, you have to implement the architecture with the low-level API. It is certainly possible, but you have to build all the convolutional and dense layers yourself.
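For illustration, a convolutional block written only with low-level TensorFlow ops (tf.Variable and tf.nn, no tf.keras layers) might look like this sketch; the shapes and sizes are arbitrary:

import tensorflow as tf

class TinyConvNet(tf.Module):
    def __init__(self):
        super().__init__()
        # 3x3 conv kernel: 3 input channels -> 16 filters
        self.w_conv = tf.Variable(tf.random.normal([3, 3, 3, 16], stddev=0.1))
        self.b_conv = tf.Variable(tf.zeros([16]))
        # dense head: flattened 32x32x16 feature map -> 10 logits
        self.w_fc = tf.Variable(tf.random.normal([32 * 32 * 16, 10], stddev=0.1))
        self.b_fc = tf.Variable(tf.zeros([10]))

    def __call__(self, x):   # x: [batch, 32, 32, 3]
        x = tf.nn.conv2d(x, self.w_conv, strides=1, padding="SAME")
        x = tf.nn.relu(x + self.b_conv)
        x = tf.reshape(x, [tf.shape(x)[0], -1])
        return tf.matmul(x, self.w_fc) + self.b_fc

net = TinyConvNet()
logits = net(tf.random.normal([2, 32, 32, 3]))   # shape [2, 10]

A full object detector would add box-regression and objectness heads on top of such a backbone, but the building blocks are the same low-level ops.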

Online Predictions for Keras model via API

I have an image classification deep learning CNN model (.h5 file) trained using Keras and TensorFlow 2 that I want to use online for predictions. I want an API that takes a single input image over HTTP and responds with the predicted class labels using the trained model. Is there an API provided by Keras or TensorFlow to do this?
There are two basic options:
Use TensorFlow Serving: it provides a ready-to-go REST API server; the only thing you need to do is export your model in the SavedModel (.pb) format.
Write your own simple REST server (with Flask, for example) that calls model.predict() on the inputs. That approach may be easier to start with, but it will be hard to scale or optimize for heavy load.
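For the first option, the .h5 file can be re-exported in the SavedModel format that TensorFlow Serving expects (a minimal sketch; "model.h5" and the export path are placeholders):

import tensorflow as tf

model = tf.keras.models.load_model("model.h5")
model.save("export/my_model/1")   # SavedModel; the "1" is the version directory TF Serving looks for

For the second option, a minimal Flask sketch (the image size, scaling, and the "image" form field name are assumptions; the preprocessing must match whatever the model was trained with):

import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("model.h5")   # load once at startup

@app.route("/predict", methods=["POST"])
def predict():
    img = Image.open(request.files["image"].stream).convert("RGB").resize((224, 224))
    batch = np.asarray(img, dtype="float32")[None] / 255.0   # [1, 224, 224, 3]
    probs = model.predict(batch)[0]
    return jsonify({"class_id": int(np.argmax(probs)),
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)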

Should I use the standalone Keras library or tf.keras?

Now that Keras has become an API within TensorFlow, there is a lot of older Keras code around, such as https://github.com/keiserlab/keras-neural-graph-fingerprint/blob/master/examples.py, which imports like this:
from keras import models
With the current version of TensorFlow, do we need to change every such Keras import to the following?
from tensorflow.keras import models
You are mixing things up:
Keras (https://keras.io/) is a library independent from TensorFlow, which specifies a high-level API for building and training neural networks and is capable of using one of multiple backends (among them TensorFlow) for low-level tensor computation.
tf.keras (https://www.tensorflow.org/guide/keras) implements the Keras API specification within TensorFlow. In addition, the tf.keras API is optimized to work well with other TensorFlow modules: you can pass a tf.data Dataset to the .fit() method of a tf.keras model, for instance, or convert a tf.keras model to a TensorFlow estimator with tf.keras.estimator.model_to_estimator. Currently, the tf.keras API is the high-level API to look for when building models within TensorFlow, and the integration with other TensorFlow features will continue in the future.
So to answer your question: no, you don't need to convert Keras code to tf.keras code. Keras code uses the Keras library, potentially even runs on top of a different backend than TensorFlow, and will continue to work just fine in the future. More importantly, don't mix Keras and tf.keras objects within the same script, since this can produce incompatibilities, as you can see for example in this question.
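As a small illustration of the integration mentioned above, a tf.data.Dataset can be passed straight to fit() (a minimal sketch with random data):

import tensorflow as tf

features = tf.random.normal([64, 4])
labels = tf.random.uniform([64], maxval=2, dtype=tf.int32)
ds = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(ds, epochs=1)   # the Dataset is consumed directly, no NumPy conversion needed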
Update: development of multi-backend Keras has since been discontinued in favor of tf.keras: https://twitter.com/fchollet/status/1174019423541157888