Is it possible to use TensorFlow.js models from regular TensorFlow? Some of them, like FaceMesh, have no direct counterparts available.
We do not have any tools to convert from TensorFlow.js format to Python compatible format.
To the best of my knowledge, we only have the ability to go from Python -> JavaScript right now (most folks want to take research written in Python and use it in JS land rather than the other way around).
Related
I am new to machine learning models and data science libraries. I want to use a Hidden Markov Model for statistical data prediction on the fly: read the data from Kafka, build the model, use it to predict the data at run time, and keep doing the same for a continuous stream.
Currently I can only see the TensorFlow Hidden Markov Model implementation in TensorFlow Python (the tensorflow_probability distribution). Is there any other library available which can help me achieve the above scenario?
Suggestions can involve Java and Python libraries.
Please feel free to add any resource links that can help me understand the usage of TensorFlow for Hidden Markov Models.
This might be a nice place to start: https://hmmlearn.readthedocs.io/en/latest/tutorial.html
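For illustration, here is a minimal hmmlearn sketch (the toy data, number of states, and parameters below are placeholders I made up, not from the question):

import numpy as np
from hmmlearn import hmm

# Toy observations: one feature per time step
X = np.array([[0.5], [1.2], [0.3], [5.1], [4.8], [0.4]])

# Fit a 2-state Gaussian HMM, then decode the most likely state sequence (Viterbi)
model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
model.fit(X)
hidden_states = model.predict(X)
print(hidden_states)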
Other alternatives I found are:
Java:
the Mallet library, and its extension GRMM in particular.
Python:
pomegranate, with its HMM support.
Having said that, TensorFlow is a much better known, more active, and better supported library, in my impression. I'd try that first.
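If you go the TensorFlow route, here is a minimal tensorflow_probability sketch of a two-state HMM with Gaussian observations (all numbers are toy placeholders):

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Two hidden states; each emits a Gaussian observation
model = tfd.HiddenMarkovModel(
    initial_distribution=tfd.Categorical(probs=[0.8, 0.2]),
    transition_distribution=tfd.Categorical(probs=[[0.7, 0.3],
                                                   [0.2, 0.8]]),
    observation_distribution=tfd.Normal(loc=[0., 15.], scale=[5., 10.]),
    num_steps=7)

observations = tf.zeros(7)           # a dummy observed sequence of length 7
print(model.log_prob(observations))  # likelihood of the sequence
print(model.mean())                  # expected observation at each step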
I'm searching for a library that would support Hierarchical HMMs (HHMM). That would probably require some tweaking of one of the listed ones.
I have been working in TensorFlow for about a year now, and I am transitioning from TF 1.x to TF 2.0. I am looking for some guidance on how to use the tf.keras.backend library in TF 2.0. I understand that the transition to TF 2.0 is supposed to remove a lot of redundancies in modeling and building graphs, since there were many ways to create equivalent layers in earlier TensorFlow versions (and I'm insanely grateful for that change!), but I'm getting stuck on understanding when to use tf.keras.backend, because its operations appear redundant with other TensorFlow libraries.
I see that some of the functions in tf.keras.backend are redundant with other TensorFlow libraries. For instance, tf.keras.backend.abs and tf.math.abs are not aliases (or at least, they're not listed as aliases in the documentation), but both take the absolute value of a tensor. After examining the source code, it looks like tf.keras.backend.abs calls the tf.math.abs function, and so I really do not understand why they are not aliases. Other tf.keras.backend operations don't appear to be duplicated in TensorFlow libraries, but it looks like there are TensorFlow functions that can do equivalent things. For instance, tf.keras.backend.cast_to_floatx can be substituted with tf.dtypes.cast as long as you explicitly specify the dtype. I am wondering two things:
when is it best to use the tf.keras.backend library instead of the equivalent TensorFlow functions?
is there a difference in these functions (and other equivalent tf.keras.backend functions) that I am missing?
Short answer: Prefer tensorflow's native API, such as tf.math.*, to the tf.keras.backend.* API wherever possible.
Longer answer:
The tf.keras.backend.* API can mostly be viewed as a remnant of the keras.backend.* API. The latter served the "exchangeable backend" design of the original (non-TF-specific) keras. This relates to the history of keras, which supported multiple backend libraries, of which tensorflow was just one. Back in 2015 and 2016, other backends, such as Theano and MXNet, were quite popular too. But going into 2017 and 2018, tensorflow became the dominant backend for keras users. Eventually keras became a part of the tensorflow API (in 2.x and later minor versions of 1.x). In the old multi-backend world, the backend.* API provided a backend-independent abstraction over the myriad of supported backends. But in the tf.keras world, the value of the backend API is much more limited.
The various functions in tf.keras.backend.* can be divided into a few categories:
Thin wrappers around the equivalent or mostly-equivalent tensorflow native API. Examples: tf.keras.backend.less, tf.keras.backend.sin
Slightly thicker wrappers around tensorflow native APIs, with more features included. Examples: tf.keras.backend.batch_normalization, tf.keras.backend.conv2d (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/backend.py#L4869). They often perform preprocessing and implement other logic, which can make your life easier than using the native tensorflow API directly.
Unique functions that don't have an equivalent in the native tensorflow API. Examples: tf.keras.backend.rnn, tf.keras.backend.set_learning_phase
For category 1, use native tensorflow APIs. For categories 2 and 3, you may want to use the tf.keras.backend.* API, as long as you can find it in the documentation page https://www.tensorflow.org/api_docs/python/, because the documented functions have backward-compatibility guarantees, so you don't need to worry about a future version of tensorflow removing them or changing their behavior.
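To make the comparison concrete, here is a small sketch of the category-1 case and of the cast_to_floatx example from the question (illustrative only):

import tensorflow as tf
from tensorflow.keras import backend as K

x = tf.constant([-1.0, 2.0, -3.0])

# Category 1: thin wrapper vs. native op -- the results are identical
print(tf.math.abs(x))  # native API, preferred
print(K.abs(x))        # backend wrapper that calls tf.math.abs internally

# cast_to_floatx casts to the Keras default float type (K.floatx(), usually float32);
# tf.cast does the same if you spell the dtype out explicitly
ints = tf.constant([1, 2, 3])
print(K.cast_to_floatx(ints))
print(tf.cast(ints, K.floatx()))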
I want to use pre-trained models in my Go application, especially the Inception-ResNet-v2 model.
This model seems to be only available via TensorFlow Hub (https://www.tensorflow.org/hub/).
However, I could not find any documentation on how to use TensorFlow Hub with the Go language bindings for TensorFlow.
How can I download and use these models in Go?
So after a lot of work in the past few days I finally found a way.
At first I wanted to just use Python to do all the TensorFlow work and then provide the results via a REST service. However, it turned out that the number of models provided by TensorFlow Hub is very small. This was a problem for me because I had to try out different models and compare them.
Thus I switched to using models from https://github.com/tensorflow/models. There are several tutorials on how to export the data to .pb files. Those files can then be loaded in Go using gocv.
It requires a lot of work to convert the files, but in the end I think this is the best way to use TensorFlow models in Go.
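For reference, here is a rough TF 1.x-style sketch of the freezing step those tutorials describe ("model.ckpt" and "output_node" are placeholders for your own checkpoint and output node name); the resulting .pb can then be read from Go, e.g. with gocv.ReadNetFromTensorflow:

import tensorflow as tf

with tf.compat.v1.Session() as sess:
    # Restore the trained graph and weights from a checkpoint
    saver = tf.compat.v1.train.import_meta_graph("model.ckpt.meta")
    saver.restore(sess, "model.ckpt")
    # Bake the variables into constants so everything lives in one GraphDef
    frozen_graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output_node"])
    with tf.io.gfile.GFile("frozen_model.pb", "wb") as f:
        f.write(frozen_graph_def.SerializeToString())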
I have found tutorials and posts which only say how to serve TensorFlow models using TensorFlow Serving.
In the model.conf file, there is a parameter model_platform in which tensorflow or any other platform can be mentioned. But how do we export models from other platforms in the tensorflow way so that they can be loaded by TensorFlow Serving?
I'm not sure if you can. The tensorflow platform is designed to be flexible, but if you really want to use it, you'd probably need to implement a C++ library to load your saved model (in protobuf) and provide a servable to the TensorFlow Serving platform. Here's a similar question.
I haven't seen such an implementation, and the efforts I've seen usually go in two other directions:
Pure Python code serving a model over HTTP or gRPC, for instance, such as what's being developed in Pipeline.AI (a minimal sketch of this approach follows the list).
Dump the model in PMML format and serve it with Java code.
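As a minimal sketch of the first direction (pure Python over HTTP), assuming a hypothetical pickled scikit-learn model at a placeholder path; this is a plain Flask endpoint, not TensorFlow Serving:

import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the model once at startup ("model.pkl" is a placeholder path)
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    instances = request.get_json()["instances"]      # e.g. a list of feature rows
    predictions = model.predict(instances).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8501)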
Not answering the question, but since no better answers exist yet: in addition to the alternative directions by adrin, these might be helpful:
Clipper (Apache License 2.0) is able to serve PyTorch and scikit-learn models, among others
Further reading:
https://www.andrey-melentyev.com/model-interoperability.html
https://medium.com/@vikati/the-rise-of-the-model-servers-9395522b6c58
Now you can serve your scikit-learn model with Tensorflow Extended (TFX):
https://www.tensorflow.org/tfx/guide/non_tf
It seems that there is no way to convert an out-of-the-box MobileNet (and other models from the TF OD API) to UFF format and then to TensorRT format, because of the many unsupported layers.
Is there some way to remove / replace those layers? For example, with the graph_transform tool maybe? I don't understand the purpose of all these layers.
Here is the default model, if someone wants to try.
The UFF conversion tool seems to choke on some unrecognized layers; perhaps this will be improved in the GA release. For now, you will need to remove those layers (and keep only the smallest subset needed for inference), then implement them using the nvinfer plugin API.
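As a rough sketch of the graph_transform route mentioned in the question, using the TF 1.x graph_transforms Python API (the node names and the transform list are placeholders to adapt to the actual model):

import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Strip nodes the UFF converter chokes on, keeping only what inference needs
transformed_def = TransformGraph(
    graph_def,
    ["image_tensor"],                         # input node(s), placeholder name
    ["detection_boxes", "detection_scores"],  # output node(s), placeholder names
    ["strip_unused_nodes",
     "remove_nodes(op=Identity, op=CheckNumerics)",
     "fold_constants(ignore_errors=true)",
     "fold_batch_norms"])

with tf.io.gfile.GFile("transformed.pb", "wb") as f:
    f.write(transformed_def.SerializeToString())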
Since 2017 there has still been no significant progress.
Many new networks have been developed, but the UFF and TensorRT converters still cannot handle many models even from 2017, let alone from 2019.
There is also some indirect information on the internet that, in effect, only a very small team at NVIDIA with just a few people works on UFF and TRT conversion. So it seems that way is a dead end.
A better way, and the way they are counting on, is conversion tools inside the framework itself, as TensorFlow does with its TRT bindings.
With such an approach, new layers are added by the developers of the framework and not by NVIDIA, whose people cannot keep up. So there is a better chance of eventually getting a working tool.
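For completeness, here is a hedged sketch of that TF-TRT route (TensorFlow 2.x API; the paths are placeholders): TensorFlow's own TensorRT integration converts the supported subgraphs to TRT engines and leaves the rest in TensorFlow, instead of going through UFF.

from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model_dir")
converter.convert()                # replace supported subgraphs with TRT engines
converter.save("saved_model_trt")  # write out the converted SavedModel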