How to use SqueezeNet in CNTK?

I am a CNTK user. I use AlexNet but would like a more compact NN -- so SqueezeNet seems to be of interest. Or does someone have some other suggestion? How do CNTK users deploy when size matters? Does somebody have a CNTK implementation of SqueezeNet?

The new ONNX model format now comes with a model zoo of pretrained vision models, including one for SqueezeNet. You can download the model and load it into CNTK:
import cntk as C
# Load a pretrained ONNX model, e.g. SqueezeNet from the ONNX model zoo.
z = C.Function.load("<path of your ONNX model>", format=C.ModelFormat.ONNX)
You can find tutorials for importing/exporting ONNX models in CNTK here.
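Once loaded, the model can be evaluated like any other CNTK Function. A minimal sketch, assuming the downloaded SqueezeNet takes the usual 3x224x224 image input (the path and input shape are placeholders; check the actual model you download):
import numpy as np
import cntk as C

# Placeholder path; point this at the downloaded ONNX SqueezeNet.
z = C.Function.load("squeezenet.onnx", format=C.ModelFormat.ONNX)

# Stand-in for a preprocessed image batch of shape (1, 3, 224, 224).
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
scores = z.eval({z.arguments[0]: x})
print(scores.shape)  # class scores for the batch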

SqueezeNet is a good choice for a small network with potentially good accuracy. Take a look at DSD-SqueezeNet for even better accuracy.
However, if it does not need to be as small as SqueezeNet, you could also take a look at MobileNet or NASNet-Mobile. These networks may be bigger, but they provide state-of-the-art performance on image classification.
Unfortunately, I do not have a CNTK implementation of SqueezeNet, but maybe a pretrained CNTK model that you can reuse and fine-tune via transfer learning is all you are looking for. In that case I can recommend MMdnn, a conversion tool that lets you convert an existing pretrained Caffe network to the CNTK model format. In this issue you can find a step-by-step guide for SqueezeNet.
I do not know of a method aimed specifically at small deployment, but you basically have two choices when it comes to saving your model: the standard CNTK model format and the new ONNX format, which CNTK now supports. I have not tried it myself yet, but it may produce a smaller file for the same network.
Since the CNTK model format already stores the model in binary, I would not expect big savings from either format. Still, compressing the model is definitely an option if size is very important, as sketched below.
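Saving in both formats and comparing file sizes is easy to check for yourself. A minimal sketch with a tiny stand-in network (replace z with your trained model; filenames are placeholders), including optional gzip compression:
import gzip
import os
import shutil
import cntk as C

# Tiny stand-in network; replace z with your trained model.
features = C.input_variable(100)
z = C.layers.Dense(10)(features)

z.save("model.cntkmodel")                        # native CNTK binary format
z.save("model.onnx", format=C.ModelFormat.ONNX)  # ONNX format

# Optionally gzip-compress the native model if deployment size matters.
with open("model.cntkmodel", "rb") as src, gzip.open("model.cntkmodel.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

for name in ("model.cntkmodel", "model.onnx", "model.cntkmodel.gz"):
    print(name, os.path.getsize(name), "bytes")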

Related

CNN - Do we need to manually preprocess training images?

I'm a complete noob and I just want to ask: do we need to preprocess images manually before feeding them into a CNN for training? I've read that a CNN already has some filtering techniques to extract features and such. I'm wondering: if all the training images are binary images or even just edges (teaching the model shapes), is that advisable, or should I just feed grayscale images? Additionally, if the answer is yes, may I know what kind of preparation is normally done or what you would use?
I am aware of the other preprocessing techniques in Keras, such as the VGG16 preprocess_input function, but I would like something simple and manual.
The preprocessing functions in Keras only preprocess the input according to the requirements of the corresponding state-of-the-art model, e.g. changing the data format. Maybe what you are talking about is hand-engineered features, which are not simple to design, and which I think are not advisable, because a CNN does a better job of learning features itself. But it may also depend on what you actually want to do.
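To make the distinction concrete, here is a minimal sketch contrasting Keras's model-specific preprocessing with a simple manual rescale (the image array is a random stand-in for real data):
import numpy as np
from tensorflow.keras.applications.vgg16 import preprocess_input

# Stand-in for a batch of one RGB image; replace with real image data.
img = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0

# Model-specific preprocessing: VGG16 expects BGR channel order and
# per-channel mean subtraction, which preprocess_input applies for you.
vgg_ready = preprocess_input(img.copy())

# Simple manual alternative: just rescale pixel values to [0, 1].
manual = img / 255.0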

TensorFlow Extended: Is it possible to use a PyTorch training loop in a TensorFlow Extended flow?

I have trained an image classification model using pytorch.
Now, I want to move it from research to production pipeline.
I am thinking of using TensorFlow Extended. I have a noob doubt: will I be able to use my PyTorch-trained model in the TensorFlow Extended pipeline? (I can convert the trained model to ONNX and then to a TensorFlow-compatible format.)
I don't want to rewrite and retrain the training part in TensorFlow, as that would be a large overhead.
Is it possible, or is there a better way to productionize PyTorch-trained models?
You should be able to convert your PyTorch image classification model to TensorFlow format using ONNX, as long as you are using standard layers. I would recommend doing the conversion and then looking at both model summaries to make sure they are reasonably similar. Also, run some tests to make sure your converted model handles any particular edge cases you have. Once you have confirmed that the converted model works, save it in the TF SavedModel format, and then you should be able to use it in TensorFlow Extended (TFX).
For more info on the conversion process, see this tutorial: https://learnopencv.com/pytorch-to-tensorflow-model-conversion/
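As a rough sketch of that route, using a hypothetical stand-in model and the onnx-tf converter (one of several options; torch, onnx, and onnx-tf must be installed):
import torch
import onnx
from onnx_tf.backend import prepare

# Hypothetical stand-in for your trained image classifier.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(8, 10),
)
model.eval()

# PyTorch -> ONNX, traced with a dummy input of the expected shape.
torch.onnx.export(model, torch.randn(1, 3, 224, 224), "classifier.onnx",
                  input_names=["input"], output_names=["logits"])

# ONNX -> TensorFlow SavedModel, which TFX components can consume.
tf_rep = prepare(onnx.load("classifier.onnx"))
tf_rep.export_graph("classifier_savedmodel")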
You could consider using the TorchX library. I haven't used it yet, but it seems to make it easier to deploy models by creating and running model pipelines. I don't think it has the same data-validation functionality that TensorFlow Extended has, but maybe that will be added in the future.

Quantization of BERT Classifier Model

I am currently trying to quantize a BERT classifier model but am running into an error, and I was wondering if this is even supported at the moment. For clarity, I am asking whether quantization is supported on the BERT classifier superclass in the tensorflow-model-garden. Thanks in advance for the help!
Quantizing the standard BERT classifier is probably not a good way to go if you are interested in running a BERT-like model on a resource-constrained edge device (like a mobile phone). For your specific question, I believe the answer is 'no, quantization of the standard BERT is not supported.' However, a better answer is probably to use one of the smaller BERT-type models that have been created for the edge use case, such as MobileBERT:
https://github.com/google-research/google-research/tree/master/mobilebert
The above link includes scripts for fine-tuning and then converting to TF Lite format in order to run on device.
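For reference, post-training quantization of a fine-tuned model during TF Lite conversion looks roughly like this (the SavedModel path is a placeholder; the exact flow for MobileBERT is in the scripts linked above):
import tensorflow as tf

# Placeholder path to a fine-tuned classifier exported as a SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("mobilebert_classifier")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
tflite_model = converter.convert()

with open("mobilebert_classifier.tflite", "wb") as f:
    f.write(tflite_model)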

fast.ai equivalent in TensorFlow

Is there any equivalent/alternative library to fastai in TensorFlow for easier training and debugging of deep learning models, including analysis of the results of a trained model?
fastai is built on top of PyTorch; I am looking for a similar library in TensorFlow.
The obvious choice would be to use tf.keras.
It is bundled with TensorFlow and has become its official "high-level" API -- to the point where in TF 2 you would have to go out of your way not to use it at all.
Keras is also clearly a source of inspiration for fastai, which aims to ease the use of PyTorch the way Keras does for TensorFlow, as the authors have mentioned time and again:
Unfortunately, Pytorch was a long way from being a good option for part one of the course, which is designed to be accessible to people with no machine learning background. It did not have anything like the clear simple API of Keras for training models. Every project required dozens of lines of code just to implement the basics of training a neural network. Unlike Keras, where the defaults are thoughtfully chosen to be as useful as possible, Pytorch required everything to be specified in detail. However, we also realised that Keras could be even better. We noticed that we kept on making the same mistakes in Keras, such as failing to shuffle our data when we needed to, or vice versa. Also, many recent best practices were not being incorporated into Keras, particularly in the rapidly developing field of natural language processing. We wondered if we could build something that could be even better than Keras for rapidly training world-class deep learning models.
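To illustrate the level of abstraction, here is a minimal tf.keras training run on MNIST (the tiny architecture and single epoch are arbitrary choices for the sketch):
import tensorflow as tf

# Load and normalize MNIST.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0

# A tiny convolutional classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Thoughtful defaults keep the training loop to two lines, much like fastai.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, validation_split=0.1)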

translating pyTorch code to CNTK code

I need to rewrite some code from PyTorch to CNTK.
I know CNTK and deep learning basics quite well.
Is it easy to relate PyTorch and CNTK?
Do I need to be aware of any special things?
I have translated between TensorFlow and CNTK code before, and I found it easy.
But that is because I know TensorFlow reasonably well; now I do not want to put effort into learning PyTorch.
One thing you can try is to export your model to ONNX format from PyTorch. Then you can load it in CNTK. See here for saving and here for loading. Note that this is still a very new effort and different toolkits have different degrees of support for ONNX.
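A minimal sketch of that route, using a trivial stand-in model (shapes and filenames are placeholders):
# PyTorch side: export a stand-in model to ONNX.
import torch

model = torch.nn.Linear(10, 2)
model.eval()
torch.onnx.export(model, torch.randn(1, 10), "model.onnx")

# CNTK side: load the ONNX model back in.
import cntk as C
z = C.Function.load("model.onnx", format=C.ModelFormat.ONNX)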