How to implement dynamic network structure with TensorFlow?

I am trying to implement a dynamic network that can change its structure according to the input data. Here is an example: https://arxiv.org/pdf/1511.02799v3.pdf
I wonder if it is possible to use TensorFlow to implement such a dynamic network?
I think we may need to use placeholders to control the network?
Thank you very much.

TensorFlow Fold (linked below) was announced a few months after your question; it is a somewhat roundabout way to do this. I have heard that other libraries such as MXNet will support this too.
https://research.googleblog.com/2017/02/announcing-tensorflow-fold-deep.html
You might also want to check out DyNet for true dynamic graphs.
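If all you need is input-dependent branching between sub-networks, you can already express that in plain TensorFlow with `tf.cond`. Here is a minimal sketch in TF 1.x graph mode; the predicate and layer sizes are made up for illustration, and note that with the branches built outside the cond (which avoids creating variables inside a control-flow construct) both sub-networks are evaluated and the predicate only selects which result flows on:

```python
import tensorflow as tf  # TF 1.x graph mode assumed

x = tf.placeholder(tf.float32, [None, 16])

# Build both sub-networks up front so their variables live outside the cond.
deep = tf.layers.dense(
    tf.layers.dense(x, 32, activation=tf.nn.relu, name="deep_1"),
    8, name="deep_2")
shallow = tf.layers.dense(x, 8, name="shallow")

# Example predicate: branch on a statistic of the input batch itself.
use_deep = tf.reduce_mean(x) > 0.0

# The output tensor switches between the two architectures per run.
y = tf.cond(use_deep, lambda: deep, lambda: shallow)
```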

Related

How to remove the last layer from Hub Module in Tensorflow

I want to remove the last layer(s) from MobileBERT loaded from TensorFlow Hub. I know there is a solution for a Keras Model in TensorFlow, but this case is different from that one.
I was thinking of something like this, but it doesn't seem user-friendly.
What is the common way of doing this?
There is no first-class API to do this. A solution along the lines of what you have mentioned is the way to go.
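For what it's worth, a common workaround is to keep the Hub module intact and simply ignore its pooled/classification output, attaching your own head to an earlier output instead. Below is a minimal sketch with `hub.KerasLayer`; the handle string is a placeholder, and it assumes the module follows the dict-based TF2 BERT-encoder interface (`sequence_output` / `pooled_output` keys); older module versions take a list of three tensors instead.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Placeholder handle: substitute the MobileBERT handle you are actually using.
HANDLE = "https://tfhub.dev/tensorflow/..."

encoder = hub.KerasLayer(HANDLE, trainable=False)

seq_len = 128
input_word_ids = tf.keras.Input(shape=(seq_len,), dtype=tf.int32, name="input_word_ids")
input_mask = tf.keras.Input(shape=(seq_len,), dtype=tf.int32, name="input_mask")
input_type_ids = tf.keras.Input(shape=(seq_len,), dtype=tf.int32, name="input_type_ids")

outputs = encoder(dict(input_word_ids=input_word_ids,
                       input_mask=input_mask,
                       input_type_ids=input_type_ids))

# Skip the module's pooled/classification output and build your own head
# on top of the token-level sequence output (the [CLS] token here).
cls_token = outputs["sequence_output"][:, 0, :]
logits = tf.keras.layers.Dense(2, name="my_head")(cls_token)

model = tf.keras.Model(
    inputs=[input_word_ids, input_mask, input_type_ids],
    outputs=logits)
```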

Is it possible to run two TFLite models at the same time on a Flutter App? / make Teachable Machine recognize when an object is not present?

I am using a Teachable Machine model which I trained to recognize some specific objects. The issue is that it does not recognize when there is nothing there; it always assumes that one of the objects is present. One potential solution I am considering is combining two models, such as the YOLO v2 TFLite model, in the same app. Would this even be possible/efficient? If so, what would be the best way to do it?
If anyone knows a solution that gets Teachable Machine to recognize when the object is not present, that would probably be a much better solution.
Your problem can be solved with a model ensemble: train a classifier that learns whether any of your specific objects is in the scene at all, and then run your detection model.
However, I really recommend uploading your model to an online service and consuming it via an API. As far as I know, the tflite package only works well with MobileNet-based models.
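If you do go the ensemble route, the idea is just to run a cheap "is anything there?" model first and only call the object classifier when it says yes. The sketch below shows that cascade with the TFLite Python interpreter (the same run-one-model-then-the-other pattern applies on the Flutter side); the file names, threshold, and output layout are assumptions.

```python
import numpy as np
import tensorflow as tf

# Hypothetical model files: a binary presence classifier and the object classifier.
presence = tf.lite.Interpreter(model_path="presence_classifier.tflite")
objects = tf.lite.Interpreter(model_path="teachable_machine_model.tflite")
presence.allocate_tensors()
objects.allocate_tensors()

def run(interpreter, image):
    """Feed one preprocessed image (batch dimension included) through a TFLite model."""
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], image.astype(inp["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]

def classify(image):
    # Stage 1: is any of the trained objects present at all?
    if run(presence, image)[0] < 0.5:   # assumed output: P(object present)
        return "none"
    # Stage 2: which object is it?
    scores = run(objects, image)
    return int(np.argmax(scores))
```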
I had the same problem. Just create another class called whatever you want (for example, none), put some unrelated images in it, and then train the model.
Now whenever there is nothing in the field of view, it should output none.

Is there any way to implement the mathematical deconvolution (which exactly reverses the convolution) using TensorFlow? Please let me know if there is

I'm trying to build some software in which I need to reverse the convolution process. I haven't found anything useful.
Yes, it is called a transposed convolution in TensorFlow (and also in PyTorch). Here is the link for TF 1.14.
Here is the one for TF 2.0.
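For a concrete example, this is what it looks like with the Keras layer in TF 2.x (the shapes and hyperparameters are just illustrative; in TF 1.14 the low-level op is `tf.nn.conv2d_transpose`). Note that a transposed convolution is the transpose of the convolution as a linear operator, so it restores the shape but not the exact original values.

```python
import tensorflow as tf  # TF 2.x

# A convolution that halves the spatial resolution...
conv = tf.keras.layers.Conv2D(8, kernel_size=3, strides=2, padding="same")
# ...and a transposed convolution that upsamples back to the original size.
deconv = tf.keras.layers.Conv2DTranspose(3, kernel_size=3, strides=2, padding="same")

x = tf.random.normal([1, 32, 32, 3])
y = conv(x)      # shape (1, 16, 16, 8)
z = deconv(y)    # shape (1, 32, 32, 3) again, but not numerically equal to x

print(y.shape, z.shape)
```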

How can I change the network dynamically in tensorflow?

I have a deep fully connected network.
I want to be able to change the structure of middle layers of the network dynamically.
What is the best way of doing that?
What I have done so far is create an output placeholder for my network, thinking I could build the network dynamically by using feed_dict. However, when I run it I get:
`ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ... `
TensorFlow won't make this easy for you. Once you define the graph and open a session, it is fixed. I believe you need to define a new graph, copy over your variables, and continue from there every time you want to alter the architecture. That is somewhat annoying for experimenting with this kind of thing.
I have a friend/fellow researcher who has been experimenting with dynamic neural network architectures and is tackling this in PyTorch, which has specific support for dynamically altering network architectures.
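A rough sketch of the rebuild-and-copy approach in TF 1.x is below: the network is built by a function parameterized by the layer sizes, a fresh graph is created whenever the architecture changes, and weights that still match by name and shape are carried over with a `Saver`. The layer sizes, variable names, and checkpoint path are made up for the example.

```python
import tensorflow as tf  # TF 1.x

def build(layer_sizes):
    """Build the fully connected network in a fresh graph."""
    graph = tf.Graph()
    with graph.as_default():
        x = tf.placeholder(tf.float32, [None, 784], name="x")
        h = x
        for i, size in enumerate(layer_sizes):
            h = tf.layers.dense(h, size, activation=tf.nn.relu, name="hidden_%d" % i)
        y = tf.layers.dense(h, 10, name="out")
    return graph, x, y

# Train with the first architecture and checkpoint it (path is hypothetical).
g1, x1, y1 = build([256, 128])
with g1.as_default(), tf.Session(graph=g1) as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.Saver().save(sess, "/tmp/dyn_net.ckpt")

# Change the middle layers: rebuild the graph, then copy over only the
# variables whose name and shape are unchanged (here, the first hidden layer).
g2, x2, y2 = build([256, 64, 32])
with g2.as_default(), tf.Session(graph=g2) as sess:
    sess.run(tf.global_variables_initializer())
    carried = [v for v in tf.global_variables() if v.name.startswith("hidden_0")]
    tf.train.Saver(var_list=carried).restore(sess, "/tmp/dyn_net.ckpt")
    # ...continue training from here with the new architecture.
```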

Regarding Caffe to TensorFlow

Currently, there are a lot of deep learning models developed in Caffe rather than TensorFlow. If I want to re-write these models in TensorFlow, how should I start? I am not familiar with Caffe's structure. It seems to me that there are some files storing only the model architecture. My guess is that I just need to understand that architecture design and translate it into TensorFlow; the input/output/training code will be re-written anyway. Does this make sense?
I have seen some Caffe implementations that also need to hack into the original Caffe framework down to the C++ level and make modifications. I am not sure in what kind of scenario a Caffe model developer needs to go that deep. If I just want to re-implement their models in TensorFlow, do I need to check their C++ modifications, which are sometimes not documented at all?
I know there are some Caffe-to-TensorFlow conversion tools, but they always have some constraints, and I think re-writing the model directly may be more straightforward.
Any thoughts, suggestions, and links to tutorials are highly appreciated.
I have already asked a similar question.
To summarize the possible answers:
You can use a pre-existing tool like ethereon's kaffe (which is really simple to use), but its simplicity comes at a cost: it is not easy to debug.
Or, as @Yaroslav Bulatov answered, start from scratch and try to make each layer match. In this regard I would advise you to look at ry's GitHub, which is a remarkable example: it basically provides small helper functions showing how to reshape the weights appropriately from Caffe to TensorFlow (which is the only real work you have to do to make simple models match) and it also checks the activations layer by layer.
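To make the weight-reshaping point concrete: Caffe stores convolution kernels as (out_channels, in_channels, height, width) and inner-product weights as (out, in), while TensorFlow expects (height, width, in_channels, out_channels) and (in, out). So the per-layer conversion is mostly a transpose, roughly like the sketch below (the helper names are mine, and the first fully connected layer after a convolution needs extra care because of the NCHW-to-NHWC flattening order):

```python
import numpy as np

def caffe_conv_to_tf(w):
    """Caffe conv kernel (out, in, h, w) -> TensorFlow kernel (h, w, in, out)."""
    return np.transpose(w, (2, 3, 1, 0))

def caffe_fc_to_tf(w):
    """Caffe inner-product weights (out, in) -> TensorFlow dense kernel (in, out)."""
    return np.transpose(w, (1, 0))

# Biases keep the same shape in both frameworks, so they can be copied as-is.
```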