Distributed TensorFlow Independent Weights

I'm new to distributed TensorFlow and I'm trying to implement an asynchronous algorithm where each worker has its own weights but can access the weights of the other workers globally. The intent is that during each training step, a worker can either continue training its current weights or inherit the weights of another worker.
I've scoured many examples of data parallelism on the internet where each device has the same model/graph, but in all of those cases the weights are shared, which is not what I want.
So my question is: how can I set up the same graph on each device but keep the trainable weights independent? And how can I create a global variable into which all workers can dump weights and from which they can retrieve them? I'm assuming this will have to live on the parameter server.
Thanks.
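For concreteness, here is a rough sketch of what I have in mind, in TF 1.x style (the scope names, shapes, and cluster layout are just placeholders, and I don't know if this is the right approach):

    import tensorflow as tf

    num_workers = 2
    weight_shape = [784, 10]

    # One independent weight matrix per worker: distinct variable scopes,
    # so nothing is shared between workers.
    worker_weights = []
    for i in range(num_workers):
        with tf.device("/job:worker/task:%d" % i):
            with tf.variable_scope("worker_%d" % i):
                w = tf.get_variable("weights", weight_shape,
                                    initializer=tf.zeros_initializer())
                worker_weights.append(w)

    # Global slot on the parameter server where any worker can dump or
    # retrieve weights.
    with tf.device("/job:ps/task:0"):
        blackboard = tf.get_variable("blackboard", weight_shape,
                                     initializer=tf.zeros_initializer())

    # Ops worker 0 could run to publish its weights or inherit the
    # published ones.
    publish = tf.assign(blackboard, worker_weights[0])
    inherit = tf.assign(worker_weights[0], blackboard)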

Related

How to deploy a live-learning TensorFlow model in the cloud?

How do I deploy a TensorFlow model in the cloud so that it can keep learning and updating its weights as new input arrives? Most of the deployment methods I've seen involve freezing the model, which also freezes the weights. Is online learning possible, or is freezing the only way?
Freezing the model gives you the most compact form and lets you have a smaller inference node that you can call just for prediction, containing only the information needed to do that.
If you want the model to be available both for online learning and for inference, you can keep the whole graph loaded with the newest weights. For safety, save the weights from time to time. Alternatively, you could have two programs: one for inference with the latest frozen model, and another that you spin up from time to time to do a new round of training, starting from the last saved weights. I recommend the second option. Hope it helps!
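To illustrate the "save the weights from time to time" part, a minimal self-contained sketch in TF 1.x; the toy model, paths, and checkpoint interval are placeholders, not anything prescribed by TensorFlow:

    import tensorflow as tf

    # Toy stand-in for the real model: one variable and a training op.
    w = tf.get_variable("w", [], initializer=tf.zeros_initializer())
    loss = tf.square(w - 3.0)
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

    saver = tf.train.Saver(max_to_keep=5)
    save_every = 1000  # arbitrary checkpoint interval

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(10000):
            sess.run(train_op)  # the model keeps learning online
            if step % save_every == 0:
                # Periodic checkpoint so a separate program (inference,
                # or a fresh training run) can pick up the newest weights.
                saver.save(sess, "/tmp/model/ckpt", global_step=step)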

How to run inference on trained Inception v3 models?

I've successfully trained the Inception v3 model from scratch on 200 custom classes. Now I have ckpt files in my output dir. How do I use those models to run inference?
Preferably, I'd load the model on the GPU once and pass in images whenever I want while the model persists on the GPU. Using TensorFlow Serving is not an option for me.
Note: I've tried to freeze these models but failed to set the output_nodes correctly while freezing. I used ImagenetV3/Predictions/Softmax but couldn't use it with feed_dict, as I couldn't get the required tensors from the frozen model.
The documentation on the TF site and repo is poor on this inference part.
It sounds like you're on the right track. You don't really do anything different at inference time than at training time, except that you don't ask the session to compute the optimizer; since the optimizer never runs, no weights are ever updated.
The save and restore guide in the TensorFlow documentation explains how to restore a model from a checkpoint:
https://www.tensorflow.org/programmers_guide/saved_model
You have two options when restoring a model: either you rebuild the ops from code (usually a build_graph() method) and then load the variables from the checkpoint (the method I use most commonly), or you load the graph definition and variables from the checkpoint, provided the graph definition was saved with the checkpoint.
Once you've loaded the graph, you'll create a session and ask the graph to compute just the output. The tensor ImagenetV3/Predictions/Softmax looks right to me (I'm not immediately familiar with the particular model you're working with). You will need to feed in the appropriate inputs (your images) and possibly whatever parameters the graph requires; sometimes an is_train boolean is needed, among other such details.
Since you aren't asking TensorFlow to compute the optimizer operation, no weights will be updated. There's really no difference between training and inference other than which operations you ask the graph to compute.
TensorFlow will use the GPU by default, just as it did during training, so all of that is pretty much handled behind the scenes for you.
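Putting that together, a minimal sketch of the second option (loading the graph definition and variables from the checkpoint) in TF 1.x. The checkpoint path, input tensor name, and input shape are assumptions; inspect your own graph to find the real names:

    import numpy as np
    import tensorflow as tf

    with tf.Session() as sess:
        # Load the graph definition saved alongside the checkpoint...
        saver = tf.train.import_meta_graph("/path/to/model.ckpt.meta")
        # ...then restore the trained variable values into it.
        saver.restore(sess, "/path/to/model.ckpt")

        graph = tf.get_default_graph()
        # Hypothetical names; list candidates with
        # [n.name for n in graph.as_graph_def().node]
        images = graph.get_tensor_by_name("input:0")
        softmax = graph.get_tensor_by_name("ImagenetV3/Predictions/Softmax:0")

        # Dummy batch; the 299x299x3 Inception input size is an assumption.
        my_image_batch = np.zeros([1, 299, 299, 3], np.float32)

        # Only the output is fetched, so the optimizer never runs and no
        # weights are updated. The session (and the weights on the GPU)
        # persists across calls, so you can keep feeding new images.
        predictions = sess.run(softmax, feed_dict={images: my_image_batch})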

Distributed training of a wide and shallow model

I am working on a very wide, shallow computation graph with a relatively small number of shared parameters on a single machine. I would like to make the graph wider but am running out of memory. My understanding is that, by using distributed TensorFlow, it is possible to split the graph between workers using the tf.device context manager. However, it's not clear how to deal with the loss, which can only be calculated by running the entire graph, or with the training operation.
What would be the right strategy to train the parameters for this kind of model?
TensorFlow is based on the concept of a data-flow graph. You define a graph consisting of variables and ops, and you can place those variables and ops on different servers and/or devices. When you call session.run, you pass data into the graph, and every operation between the inputs (specified in the feed_dict) and the outputs (specified in the fetches argument to session.run) runs, regardless of where those ops reside. Of course, passing data across servers incurs communication overhead, but that overhead is often made up for by the fact that multiple concurrent workers can perform computation simultaneously.
In short, even if you put ops on other servers, you can still compute the loss over the full graph.
Here's a tutorial on large scale linear models: https://www.tensorflow.org/tutorials/linear
And here's a tutorial on distributed training in TensorFlow:
https://www.tensorflow.org/deploy/distributed
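As an illustration of the point above, a minimal sketch of splitting one graph across tasks with tf.device while still computing a single loss over the whole graph; the cluster layout, layer sizes, and shapes are all hypothetical:

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 1000])

    # Each slice of the wide layer lives on a different worker...
    with tf.device("/job:worker/task:0"):
        h0 = tf.layers.dense(x, 512, activation=tf.nn.relu)
    with tf.device("/job:worker/task:1"):
        h1 = tf.layers.dense(x, 512, activation=tf.nn.relu)

    # ...but the loss can still span both: TensorFlow inserts the
    # cross-server communication for you.
    with tf.device("/job:worker/task:0"):
        logits = tf.layers.dense(tf.concat([h0, h1], axis=1), 10)
        labels = tf.placeholder(tf.int64, [None])
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                labels=labels, logits=logits))
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)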

Max iterations with the TensorFlow Object Detection API and ResNet Faster R-CNN

I am training on the Oxford dataset, following the tutorial, with ResNet101 Faster R-CNN.
I am running the training on my local machine with 1 GPU, not using Google Cloud.
My question is: what is the max iteration?
My step count is already past 13,000,000 and training has not stopped yet.
The original Faster R-CNN defines a max iteration size here: https://github.com/rbgirshick/py-faster-rcnn/blob/master/tools/train_faster_rcnn_alt_opt.py#L80, but I am not sure about the TensorFlow Object Detection API.
I did not change any parameters except input_path and fine_tune_checkpoint (I am using COCO pre-trained weights with ResNet).
I thought the max iteration would be in the config file https://github.com/tensorflow/models/blob/master/object_detection/samples/configs/faster_rcnn_resnet101_pets.config#L100, but it seems that only defines the learning rate after a certain number of steps.
As per the docs, "By default, the training job will run indefinitely until the user kills it." So run the training and evaluation jobs simultaneously and kill the processes yourself (i.e. early stopping based on validation accuracy saturation).
Note: as Jonathan points out in a comment, you can also set the number of steps explicitly via num_steps; see the config snippet below.
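For reference, this is roughly what that looks like in the pipeline config; the value 200000 here is arbitrary, not a recommendation:

    train_config {
      batch_size: 1
      num_steps: 200000  # training stops after this many steps
      # ... optimizer, fine_tune_checkpoint, etc. as before
    }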

Sharing Queue between two graphs in tensorflow

Is it possible to share a queue between two graphs in TensorFlow? I'd like to do a kind of bootstrapping to select "hard negative" examples during training.
To speed up the process, I want separate threads for hard-negative example selection and for the training process. The hard-negative selection is based on evaluating the current model, and it will load its graph from a checkpoint file. The training graph runs on another thread and writes the checkpoint file. The two graphs should share the same queue: the training graph will consume examples and the hard-negative selection will produce them.
Currently there's no support for sharing state between different graphs in the open-source version of TensorFlow: each graph runs in a separate session, and each session uses an isolated set of devices.
However, it seems like it would be possible to achieve your goal using a queue in a single graph. Simply construct a queue (using e.g. tf.FIFOQueue) and use tf.import_graph_def() to import the graph from the checkpoint file into the current graph. Using the return_elements argument to tf.import_graph_def(), you can specify the name of the tensor that will contain the negative examples, and then add a q.enqueue_many() operation to add them to your queue. You would then fork a thread to run the enqueue_many operation in a loop. In your training graph, you can use q.dequeue_many() to get a batch of negative examples and use them as the input to your training process, as in the sketch below.
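A minimal sketch of that approach; the GraphDef path, the hard_negatives tensor name, and the shapes are assumptions for illustration:

    import threading
    import tensorflow as tf

    # Queue shared (within one graph) by the producer thread and training.
    queue = tf.FIFOQueue(capacity=1000, dtypes=[tf.float32], shapes=[[128]])

    # Import the evaluation graph's GraphDef and grab the tensor that
    # holds the hard negative examples.
    eval_graph_def = tf.GraphDef()
    with tf.gfile.GFile("/path/to/eval_graph.pb", "rb") as f:
        eval_graph_def.ParseFromString(f.read())
    negatives, = tf.import_graph_def(
        eval_graph_def, return_elements=["hard_negatives:0"])

    enqueue_op = queue.enqueue_many(negatives)
    batch = queue.dequeue_many(32)  # input to the training process

    sess = tf.Session()

    def produce():
        # Producer thread: keeps refilling the queue with hard negatives.
        while True:
            sess.run(enqueue_op)

    threading.Thread(target=produce, daemon=True).start()
    # The training loop elsewhere consumes `batch` via sess.run(...).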