I am new to TensorFlow.
I am looking for some help in understanding the minimum I would need to set up and work with a TensorFlow system.
Do I really need to read through the TensorFlow website documentation to understand the whole workflow?
The basics of TensorFlow are: first we build a model, called a computational graph, out of TensorFlow objects; then we create a TensorFlow session in which we run all the computation.
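For example, here is a minimal sketch of that graph-then-session flow (TF 1.x style; the values are arbitrary):

import tensorflow as tf

# 1. Build the computational graph out of TensorFlow objects.
a = tf.constant(2.0)
b = tf.constant(3.0)
total = a + b  # nothing runs yet; this only adds a node to the graph

# 2. Create a session and run the computation.
with tf.Session() as sess:
    print(sess.run(total))  # -> 5.0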
To install it on Windows, I found this page helpful: Installation of tensorflow in windows.
To learn more about TensorFlow, you can also see the TensorFlow guide.
I hope this helps.
YES YOU SHOULD!
Here is an easier version of tutorial: https://pythonprogramming.net/tensorflow-introduction-machine-learning-tutorial/
An easier and more fun version: How to Make a Tensorflow Neural Network (LIVE)
My goal is to test out Google's BERT algorithm in Google Colab.
I'd like to use a pre-trained custom model for Finnish (https://github.com/TurkuNLP/FinBERT). The model cannot be found in the TF Hub library, and I have not found a way to load it with TensorFlow Hub.
Is there a neat way to load and use a custom model with TensorFlow Hub?
Fundamentally: yes. Anyone can create the kinds of models that TF Hub hosts, and I hope authors of interesting models do consider that.
For TF1 and the hub.Module format tailored to it, see
https://www.tensorflow.org/hub/tf1_hub_module#creating_a_new_module
For TF2 and its revised SavedModel format, see
https://www.tensorflow.org/hub/tf2_saved_model#creating_savedmodels_for_tf_hub
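As a rough illustration of the TF2 route (a sketch with a generic Keras model, not BERT; the paths are placeholders):

import tensorflow as tf
import tensorflow_hub as hub

# Export any TF2 Keras model as a SavedModel...
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
tf.saved_model.save(model, "/tmp/my_exported_model")

# ...and reuse it through TF Hub. hub.KerasLayer accepts a local path
# (or a URL) to a SavedModel, so hosting on tfhub.dev is not required.
layer = hub.KerasLayer("/tmp/my_exported_model", trainable=False)
outputs = layer(tf.zeros([1, 10]))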
That said, a sophisticated model like BERT requires a bit of attention to export it with all bells and whistles, so it helps to have some tooling to build on. The BERT reference implementation for TF2 at https://github.com/tensorflow/models/tree/master/official/nlp/bert comes with an open-sourced export_tfhub.py script, and anyone can use that to export custom BERT instances created from that code base.
However, I understand from https://github.com/TurkuNLP/FinBERT/blob/master/nlpl_tutorial/training_bert.md#general-info that you are using Nvidia's fork of the original TF1 implementation of BERT. There are Hub modules created from the original research code, but the tooling to that end has not been open-sourced, and Nvidia doesn't seem to have added their own either.
If that's not changing, you'll probably have to resort to doing things the pedestrian way and get acquainted with their codebase and load their checkpoints into it.
I recently started learning Keras and TensorFlow. I am currently testing a few models on the MNIST dataset (pretty basic stuff). I wanted to know exactly how much memory my model consumes during training and inference. I tried googling but did not find much info.
I came across nvidia-smi. I tried the config.gpu_options.allow_growth = True option, but I am still not able to get the exact memory python.exe is consuming, due to some issues with nvidia-smi. I know that I could run separate training and inference passes, but this is too cumbersome. It would be very easy if I could just find the right API to do the job.
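For reference, this is roughly the setup I am experimenting with (the memory_stats call is just something I came across; I am not sure it is the right approach):

import tensorflow as tf

# TF 1.x: let the GPU allocator grow on demand instead of grabbing all
# memory up front, then ask it for the peak number of bytes it handed out.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    tf.keras.backend.set_session(sess)
    # ... build / train / run inference with the Keras model here ...
    peak_bytes = sess.run(tf.contrib.memory_stats.MaxBytesInUse())
    print("Peak GPU memory used by the TF allocator:", peak_bytes, "bytes")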
TensorFlow being such a well-known and widely used library, I am hoping there is a better and faster way to get these numbers.
Finally, once again, my question is:
How do I get the exact memory usage of a Keras model during training and inference?
Relevant specs:
OS: Windows 10
GPU: GTX 1050
TensorFlow version: 1.14
Please let me know if any other details are required.
Thanks!
I got a sparse weight matrix from TensorFlow pruning to reduce SqueezeNet. After strip_pruning_vars, I checked that most of the elements in the weight matrices were successfully pruned to 0. However, the performance of the model didn't improve as I expected. It seems that an additional software library or hardware support for sparse matrix operations is required. Someone told me that using the Intel MKL library would be helpful, but I don't know how to integrate it with TensorFlow. Right now, I have the .pb files of the pruned SqueezeNet. Any kind of help would be highly appreciated.
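For context, this is roughly how I verified the sparsity in the pruned .pb (the path and the node-name filter are placeholders for my model):

import numpy as np
import tensorflow as tf
from tensorflow.python.framework import tensor_util

graph_def = tf.GraphDef()
with tf.gfile.GFile("squeezenet_pruned.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Inspect the constant weight tensors and report the fraction of zeros.
for node in graph_def.node:
    if node.op == "Const" and "weights" in node.name:
        w = tensor_util.MakeNdarray(node.attr["value"].tensor)
        print(node.name, "sparsity: %.2f" % np.mean(w == 0))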
You can try the Intel® Optimization for TensorFlow* wheel.
It is recommended to use an Intel environment for this.
Please follow the steps below.
Create a conda environment using the command:
conda create -n my_intel_env -c intel python=3.6
Activate the environment.
source activate my_intel_env
Install the wheel
pip install https://storage.googleapis.com/intel-optimized-tensorflow/tensorflow-1.11.0-cp36-cp36m-linux_x86_64.whl
For more details, you can refer to https://software.intel.com/en-us/articles/intel-optimization-for-tensorflow-installation-guide
After installation, you can check whether MKL is enabled by running the following commands from the Python prompt.
from tensorflow.python.framework import test_util
test_util.IsMklEnabled()
This should return True if MKL is enabled.
Hope this helps.
I have run into the same problem as you. I used TensorFlow to prune a model, but in fact the pruned model did not get any faster at prediction.
In the TensorFlow model-optimization roadmap (https://www.tensorflow.org/model_optimization/guide/roadmap) they say that they will add support for sparse model execution in the future. So I guess the reason is that TensorFlow does not support it yet, so we can only get a sparse model but no speed improvement.
I want to quantize (change all the floats into INT8) an SSD-MobileNet model and then deploy it onto my Raspberry Pi. So far, I have not found anything that can help me with this. Any help would be highly appreciated.
I saw TensorFlow Lite, but it seems it only supports Android and iOS.
Any library/framework is acceptable.
Thanks in advance.
TensorFlow Lite now has support for the Raspberry Pi via Makefiles. Here's the shell script. Regarding MobileNet-SSD, you can get details on how to use it with TensorFlow Lite in this blog post (and here).
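As a rough sketch of post-training quantization with the TFLite converter (TF 1.x API; the input/output tensor names below are the usual ones from the object-detection TFLite export and may differ for your graph):

import tensorflow as tf

# Convert a frozen SSD-MobileNet graph to a .tflite file with 8-bit weights.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "tflite_graph.pb",  # placeholder path to the frozen model
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=["TFLite_Detection_PostProcess"],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]})
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Full integer (INT8 activations) quantization additionally needs
# converter.representative_dataset with a few sample inputs.
tflite_model = converter.convert()

with open("detect_quant.tflite", "wb") as f:
    f.write(tflite_model)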
You can try using the TensorRT library.
One of the features of the library is quantization.
In general, MobileNets are difficult to quantize (see https://arxiv.org/pdf/2004.09602.pdf), but the library should do a good job.
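A rough sketch of the TF-TRT integration in TF 1.x (assuming a TensorFlow build with TensorRT support; the paths and output node name are placeholders, and INT8 mode additionally requires a calibration step that is omitted here):

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Load the frozen graph.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Ask TF-TRT to rewrite supported subgraphs as TensorRT engines.
trt_graph = trt.create_inference_graph(
    input_graph_def=graph_def,
    outputs=["detection_boxes"],        # hypothetical output node
    max_batch_size=1,
    max_workspace_size_bytes=1 << 30,
    precision_mode="INT8")              # needs INT8 calibration before inference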
I have found tutorials and posts which only say how to serve TensorFlow models using TensorFlow Serving.
In the model.conf file, there is a parameter model_platform in which tensorflow or any other platform can be mentioned. But how do we export models from other platforms in the TensorFlow way so that they can be loaded by TensorFlow Serving?
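For reference, the model config file I am referring to looks roughly like this (the name and path are placeholders):

model_config_list {
  config {
    name: "my_model"
    base_path: "/models/my_model"
    model_platform: "tensorflow"
  }
}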
I'm not sure if you can. The tensorflow platform is designed to be flexible, but if you really want to use it, you'd probably need to implement a C++ library to load your saved model (in protobuf) and provide a servable to the TensorFlow Serving platform. Here's a similar question.
I haven't seen such an implementation, and the efforts I've seen usually go in two other directions:
Pure Python code serving a model over HTTP or gRPC, such as what's being developed in Pipeline.AI (a minimal sketch of this direction follows below).
Dump the model in PMML format and serve it with Java code.
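A minimal sketch of the first direction (Flask and joblib are just illustrative choices here, not part of TensorFlow Serving; the model file is a placeholder):

import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical scikit-learn model file

@app.route("/predict", methods=["POST"])
def predict():
    # Expect JSON like {"instances": [[...feature values...], ...]}
    instances = request.get_json()["instances"]
    predictions = model.predict(instances).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8501)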
Not answering the question, but since no better answers exist yet: As an addition to the alternative directions by adrin, these might be helpful:
Clipper (Apache License 2.0) is able to serve PyTorch and scikit-learn models, among others
Further reading:
https://www.andrey-melentyev.com/model-interoperability.html
https://medium.com/#vikati/the-rise-of-the-model-servers-9395522b6c58
Now you can serve your scikit-learn model with Tensorflow Extended (TFX):
https://www.tensorflow.org/tfx/guide/non_tf