What is a lite model in Deep Learning? - TensorFlow

I wanted to know: what is a lite model?
I know that a model that is easier to train and has fewer neurons is a lite model, but how do you quantify "fewer neurons"?
If I use a pre-trained model and add two Dense layers to it (freezing the pre-trained layers and training only the final two), can I call this a lite model, since it is faster to train and inference is also fast?
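The frozen-backbone setup described above can be sketched in Keras like this. MobileNetV2, the 160x160 input size, and the 10-class head are arbitrary choices for illustration; `weights=None` just keeps the sketch download-free, in practice you would pass `weights="imagenet"`:

```python
import tensorflow as tf

# Frozen pre-trained backbone + two new trainable Dense layers.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,
    weights=None,   # weights="imagenet" in practice; None avoids a download here
    pooling="avg",
)
base.trainable = False  # freeze every pre-trained layer

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),    # new, trainable
    tf.keras.layers.Dense(10, activation="softmax"),  # new, trainable head
])
# Only the two Dense layers are trained; the backbone just runs forward passes.
# Note that "lite" in the TensorFlow ecosystem usually refers to model
# size/latency at inference (as in TensorFlow Lite), not trainable-parameter count.
```

Whether this counts as "lite" depends on which cost you mean: training cost (here only the two Dense layers train) or inference cost (the full backbone still runs at inference time).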

Related

How can I use multiple pre-trained models without TF-slim?

I want to use/combine different parts of different pre-trained models into one model. For example, I want to use the first few layers of ResNet (with their pre-trained weights) as an encoder, combine them with a decoder from another model, and then train further on that. Is there a way, preferably without using TF-slim? I'm using TensorFlow 1.4.
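In current Keras the idea can be sketched as follows (the original question targets TF 1.4, where the equivalent approach is restoring only the encoder's variables with a `tf.train.Saver`). The cut layer `conv2_block3_out` and the small upsampling decoder are illustrative assumptions, not taken from any particular model:

```python
import tensorflow as tf

# Encoder: cut ResNet50 at an intermediate layer and keep only those layers.
backbone = tf.keras.applications.ResNet50(
    include_top=False,
    weights=None,  # weights="imagenet" in practice
    input_shape=(224, 224, 3),
)
encoder = tf.keras.Model(
    backbone.input, backbone.get_layer("conv2_block3_out").output)
encoder.trainable = False  # optionally keep the pre-trained weights fixed

# Decoder "from another model": here a hypothetical two-step upsampler.
x = tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same",
                                    activation="relu")(encoder.output)
x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same",
                                    activation="relu")(x)
out = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)

model = tf.keras.Model(encoder.input, out)  # train this end to end
```

To take the decoder's weights from a second pre-trained model rather than training from scratch, you would copy them layer by layer with `get_weights()`/`set_weights()` on matching layers.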

How to load a pre-trained model into Keras without the weights and biases?

I need to load a pre-trained model into Keras without its weights and biases; I only want to use the model's architecture for my own training.
Example:
I want to load the coco_mobilenet model without its pre-trained weights and biases.
Any suggestions would be appreciated.
net = keras.applications.MobileNet(weights=None)  # note `weights`, not `weight`; None gives random initialization
net.summary()
See the Keras MobileNet API for details.
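As a quick sanity check that `weights=None` really gives only the architecture, two fresh instantiations have the same structure but independent random weights:

```python
import numpy as np
from tensorflow import keras

# Two architecture-only copies of MobileNet.
net_a = keras.applications.MobileNet(weights=None)
net_b = keras.applications.MobileNet(weights=None)

assert net_a.count_params() == net_b.count_params()  # identical architecture
assert not np.allclose(net_a.get_weights()[0],
                       net_b.get_weights()[0])       # different random init
```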

ResNet34 - pretrained model on ImageNet using TensorFlow

Can someone point me to a ResNet34 model pre-trained on ImageNet using TensorFlow? I am not sure whether the TF-slim trained models are the same, or whether there would be a difference.
You can use the Keras ResNet (18, 34, 50, 101, 152) pre-trained models: https://github.com/qubvel/classification_models

How to load a pretrained VGG model in a distributed TensorFlow training setting, e.g. for Faster R-CNN?

I want to implement a Faster R-CNN model using distributed TensorFlow, but I am having difficulty loading a pretrained VGG model. How can I do this? Thanks.
The TensorFlow tutorial on retraining Inception is a good place to start. Then try to reproduce what it does, starting from an already trained VGG model.
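A minimal sketch of the pattern in current TensorFlow, assuming `tf.distribute` (rather than the TF 1.x `tf.train.Saver` approach the question likely had in mind): build, and in practice restore, the backbone inside the strategy's scope so its variables are created as distributed variables. `weights=None` keeps the sketch download-free; in practice you would pass `weights="imagenet"`. The head here is a hypothetical stand-in for the Faster R-CNN components:

```python
import tensorflow as tf

# Create variables inside the strategy scope so they are mirrored on
# every replica; weight loading then applies to all replicas at once.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    vgg = tf.keras.applications.VGG16(
        include_top=False, weights=None)  # weights="imagenet" in practice
    vgg.trainable = False  # freeze the backbone, as in Faster R-CNN warm-up

    # Hypothetical head on top of the frozen features (stand-in for the
    # RPN + detection heads of a real Faster R-CNN).
    head = tf.keras.Sequential([
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation="relu"),
    ])
    model = tf.keras.Sequential([vgg, head])
```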

Simultaneous finetuning and network modification in tensorflow

How can I add or remove layers from a pre-trained Inception module, learn the weights of the new layers, and finetune the weights of the pre-trained layers?
E.g., remove the softmax and add one LSTM layer after the Inception module.
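A hedged Keras sketch of exactly that example: `include_top=False` drops Inception's classification (softmax) head, the backbone is frozen (set `trainable = True` with a small learning rate to finetune it instead), and a new LSTM runs over per-frame features. The frame count, input size, and class count are illustrative assumptions:

```python
import tensorflow as tf

# Backbone without the softmax head; pooling="avg" yields a 2048-d
# feature vector per image.
backbone = tf.keras.applications.InceptionV3(
    include_top=False, weights=None, pooling="avg")  # weights="imagenet" in practice
backbone.trainable = False  # freeze; set True (small LR) to finetune

# New layers: run the CNN on each frame, then an LSTM over the sequence.
frames = tf.keras.Input(shape=(8, 224, 224, 3))           # 8 frames per clip
feats = tf.keras.layers.TimeDistributed(backbone)(frames)  # (batch, 8, 2048)
x = tf.keras.layers.LSTM(256)(feats)                       # new LSTM layer
out = tf.keras.layers.Dense(5, activation="softmax")(x)    # new 5-class head
model = tf.keras.Model(frames, out)
```

Compiling and fitting this model trains only the LSTM and Dense layers while the Inception weights stay fixed; unfreezing the backbone afterwards gives the usual two-stage finetuning recipe.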