Would it be straightforward to implement a spatial transformer network in TensorFlow?

I am interested in trying things out with a spatial transformer network, and I can't find any implementation of it in Caffe or TensorFlow, which are the only two libraries I'm interested in using. I have a pretty good grasp of TensorFlow, but I was wondering whether it would be straightforward to implement with the building blocks TensorFlow already offers, without having to do something too complicated like writing a custom C++ module.

Yes, it is very straightforward to set up the TensorFlow graph for a spatial transformer network with the existing API.
You can find an example implementation in TensorFlow here [1].
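To give a sense of how little wiring is needed, here is a minimal TF1-style sketch, assuming the transformer(U, theta, out_size) helper from [1]: a small localization network regresses the six affine parameters, and the module's grid generator and bilinear sampler do the rest.

```python
import tensorflow as tf
from spatial_transformer import transformer  # helper from the repo in [1]

x = tf.placeholder(tf.float32, [None, 40, 40, 1])

# Localization network: regress the 6 parameters of an affine transform.
flat = tf.layers.flatten(x)
hidden = tf.layers.dense(flat, 20, activation=tf.nn.tanh)
theta = tf.layers.dense(
    hidden, 6,
    kernel_initializer=tf.zeros_initializer(),
    # Start exactly at the identity transform so early training leaves
    # the input image unchanged.
    bias_initializer=tf.constant_initializer([1., 0., 0., 0., 1., 0.]))

# Grid generator + bilinear sampler from [1]; differentiable end to end,
# so the localization network trains jointly with the rest of the graph.
x_transformed = transformer(x, theta, (40, 40))
```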
[1] https://github.com/daviddao/spatial-transformer-tensorflow

There is a Caffe implementation here: https://github.com/daerduoCarey/SpatialTransformerLayer

TensorFlow has an implementation of the Spatial Transformer Network in the models repository: https://github.com/tensorflow/models/tree/master/research/transformer

Related

Is there any TF implementation of the original BERT other than Google's and HuggingFace's?

I'm trying to find a TensorFlow/Keras implementation of the original BERT model trained using MLM/NSP. The official Google and HuggingFace implementations are very complex and carry a lot of added functionality, but I want to implement BERT just to learn how it works.
Any leads would be helpful.
As mentioned in the comment, you can try the following implementation of MLP-BERT in TensorFlow. It's a simplified version and comparatively easy to follow.
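Since the goal is to learn how BERT works, a toy sketch of the MLM input corruption may also help; it implements the 80/10/10 masking rule from the BERT paper. The function name is illustrative, and the vocab constants follow the bert-base-uncased WordPiece conventions.

```python
import tensorflow as tf

MASK_ID = 103       # [MASK] id in the standard BERT WordPiece vocab (illustrative)
VOCAB_SIZE = 30522  # bert-base-uncased vocab size (illustrative)

def mask_for_mlm(ids, mask_rate=0.15):
    """Corrupt int32 token ids with BERT's 80/10/10 masking rule."""
    pick = tf.random.uniform(tf.shape(ids))
    targets = pick < mask_rate  # positions the model must predict
    roll = tf.random.uniform(tf.shape(ids))
    # 80% of target positions become [MASK] ...
    masked = tf.where(targets & (roll < 0.8), MASK_ID, ids)
    # ... 10% become a random token ...
    random_ids = tf.random.uniform(tf.shape(ids), 0, VOCAB_SIZE, dtype=tf.int32)
    masked = tf.where(targets & (roll >= 0.8) & (roll < 0.9), random_ids, masked)
    # ... and the remaining 10% stay unchanged.
    return masked, targets

corrupted, label_mask = mask_for_mlm(tf.constant([[101, 7592, 2088, 102]]))
```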

I want to use a hidden Markov model for data prediction

I am new to machine learning models and data science libraries. I want to use a Hidden Markov model for statistical data prediction on the fly: the application reads data from Kafka, builds the model, uses it to predict values at run-time, and keeps doing the same over a continuous stream.
Currently I can only see the TensorFlow hidden Markov model implementation in Python (the tensorflow_probability distribution). Is there any other library available that can help me achieve the above scenario?
Suggestions can involve Java and Python libraries.
Please feel free to add any resource links that can help me understand how to use TensorFlow for hidden Markov models.
This might be a nice place to start: https://hmmlearn.readthedocs.io/en/latest/tutorial.html
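For a concrete feel, here is a minimal hmmlearn sketch; the Kafka plumbing is out of scope, so a random array stands in for a window of streamed values.

```python
import numpy as np
from hmmlearn import hmm

# Fit a 3-state Gaussian HMM to a window of streamed values.
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100)
X = np.random.randn(500, 1)  # stand-in for a batch consumed from Kafka
model.fit(X)                 # re-fit periodically as new data arrives

state_sequence = model.predict(X)              # most likely hidden states
log_prob, posteriors = model.score_samples(X)  # per-state posteriors
```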
Other alternatives I found are:
Java:
The Mallet library, and its extension GRMM in particular.
Python:
pomegranate, with its HMM support.
Having said that, TensorFlow is a much better known, more active, and better supported library, in my impression. I'd try that first.
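And since you mention tensorflow_probability: its HiddenMarkovModel distribution covers exactly this ground. A minimal sketch along the lines of the TFP documentation (all numbers illustrative):

```python
import tensorflow_probability as tfp
tfd = tfp.distributions

# Two hidden states with Gaussian observations.
hmm = tfd.HiddenMarkovModel(
    initial_distribution=tfd.Categorical(probs=[0.8, 0.2]),
    transition_distribution=tfd.Categorical(probs=[[0.7, 0.3],
                                                   [0.2, 0.8]]),
    observation_distribution=tfd.Normal(loc=[0., 15.], scale=[5., 10.]),
    num_steps=7)

observations = [0., 1., 2., 13., 14., 15., 16.]
most_likely_states = hmm.posterior_mode(observations)  # Viterbi decoding
log_likelihood = hmm.log_prob(observations)
```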
I'm searching for a library that supports Hierarchical HMMs (HHMMs). That would probably require some tweaking of one of the listed ones.

How to use a custom model with TensorFlow Hub?

My goal is to test out Google's BERT algorithm in Google Colab.
I'd like to use a pre-trained custom model for Finnish (https://github.com/TurkuNLP/FinBERT). The model cannot be found in the TF Hub library, and I have not found a way to load it with TensorFlow Hub.
Is there a neat way to load and use a custom model with TensorFlow Hub?
Fundamentally: yes. Anyone can create the kind of models that TF Hub hosts, and I hope authors of interesting models do consider that.
For TF1 and the hub.Module format tailored to it, see
https://www.tensorflow.org/hub/tf1_hub_module#creating_a_new_module
For TF2 and its revised SavedModel format, see
https://www.tensorflow.org/hub/tf2_saved_model#creating_savedmodels_for_tf_hub
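To make the TF2 path concrete, here is a minimal sketch with a toy model and an illustrative path; any TF2 SavedModel on disk can be consumed through hub.KerasLayer, whether or not it is hosted on tfhub.dev.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Export: any TF2 model saved in SavedModel format can act as a "module".
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.build([None, 784])
tf.saved_model.save(model, "/tmp/my_module")  # illustrative path

# Reuse: hub.KerasLayer accepts local paths as well as tfhub.dev handles.
layer = hub.KerasLayer("/tmp/my_module", trainable=True)
outputs = layer(tf.zeros([1, 784]))
```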
That said, a sophisticated model like BERT requires a bit of attention to export with all the bells and whistles, so it helps to have some tooling to build on. The BERT reference implementation for TF2 at https://github.com/tensorflow/models/tree/master/official/nlp/bert comes with an open-sourced export_tfhub.py script, and anyone can use that to export custom BERT instances created from that code base.
However, I understand from https://github.com/TurkuNLP/FinBERT/blob/master/nlpl_tutorial/training_bert.md#general-info that you are using Nvidia's fork of the original TF1 implementation of BERT. There are Hub modules created from the original research code, but the tooling to that end has not been open-sourced, and Nvidia doesn't seem to have added their own either.
If that's not changing, you'll probably have to resort to doing things the pedestrian way: getting acquainted with their codebase and loading their checkpoints into it.

Is there a worked example of neural network pruning for the Faster-RCNN architecture from TensorFlow's object detection API?

I am trying to find a worked example of neural network pruning for the Faster-RCNN architecture.
My core stack is TensorFlow 1.12 and its object_detection API (link) on Python 3.5.2 under Ubuntu 16.04 LTS. I came across some neural network pruning repos (e.g. link, implementing NVIDIA's pruning paper with Taylor expansion, link; it looks the most promising, but it is (a) implemented in PyTorch and (b) aimed at classification networks rather than detectors).
I am also aware of a pruning functionality within TensorFlow under this package (link), but I could only run an example found in the comments of the following StackOverflow question (link) to train and prune (not thoroughly tested) a simple neural network for handwritten digit classification on the MNIST dataset.
I am looking for a worked example, not reporting any bugs or issues in code.
Can someone point me to a worked example of pruning Faster-RCNN, or other detectors, found in TensorFlow's object detection API (link), preferably using TensorFlow's pruning package (link)?
Pruning is orthogonal to the meta-architecture used for object detection. The TensorFlow Object Detection API relies heavily on builders that read the config and create the corresponding nets, classes, etc. I believe you want to prune the feature extractor, as it is the heaviest part. If so, you first need to prune some feature extractor from slim (say, Inception-V2), give it a name, add its pruned version to models, adjust the proto config, and more. In short, you need to introduce a new type of feature extractor. I am not aware of any existing examples of that.
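If it helps as a starting point for the pruning mechanics alone (not Faster-RCNN), here is a minimal sketch using the standalone tensorflow-model-optimization toolkit, the successor to the contrib package mentioned in the question; the toy network merely stands in for a feature extractor.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy stand-in for a feature extractor; Faster-RCNN itself would need the
# builder changes described above.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Ramp sparsity from 0% to 80% over 2000 training steps.
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.8, begin_step=0, end_step=2000)
pruned = tfmot.sparsity.keras.prune_low_magnitude(model, pruning_schedule=schedule)

pruned.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# UpdatePruningStep must run during training for the masks to update:
callbacks = [tfmot.sparsity.keras.UpdatePruningStep()]
# pruned.fit(x_train, y_train, callbacks=callbacks)
```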

How to serve PyTorch or sklearn models using TensorFlow Serving

I have found tutorials and posts that only explain how to serve TensorFlow models with TensorFlow Serving.
In the model.conf file, there is a parameter model_platform in which tensorflow or any other platform can be mentioned. But how do we export models from other platforms the TensorFlow way, so that they can be loaded by TensorFlow Serving?
I'm not sure you can. The tensorflow platform is designed to be flexible, but if you really want to use it, you'd probably need to implement a C++ library to load your saved model (in protobuf) and hand a servable to the TensorFlow Serving platform. Here's a similar question.
I haven't seen such an implementation, and the efforts I've seen usually go in two other directions:
Pure Python code serving a model over HTTP or gRPC, for instance, such as what's being developed in Pipeline.AI (see the sketch below).
Dumping the model in PMML format and serving it with Java code.
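As an illustration of the first direction, here is a minimal sketch serving a pickled scikit-learn model over HTTP with Flask; the file name and endpoint are illustrative.

```python
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)

# Load a previously trained scikit-learn model (illustrative path).
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"instances": [[5.1, 3.5, 1.4, 0.2]]}.
    features = request.get_json()["instances"]
    return jsonify({"predictions": model.predict(features).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8501)
```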
Not answering the question, but since no better answers exist yet: As an addition to the alternative directions by adrin, these might be helpful:
Clipper (Apache License 2.0) is able to serve PyTorch and scikit-learn models, among others
Further reading:
https://www.andrey-melentyev.com/model-interoperability.html
https://medium.com/@vikati/the-rise-of-the-model-servers-9395522b6c58
Now you can serve your scikit-learn model with TensorFlow Extended (TFX):
https://www.tensorflow.org/tfx/guide/non_tf