tensorflow federated learning checkpoint

I am studying the federated_learning_for_image_classification.ipynb example that uses the TensorFlow Federated API.
In the example I can check each simulated client's training accuracy and loss, as well as the total accuracy and total loss.
But there are no checkpoint files.
I want to create a checkpoint file for each client and a checkpoint file for the global (total) model,
and then compare the client parameter variables with the global parameter variables.
Can anyone help me create checkpoint files in the federated_learning_for_image_classification.ipynb example?

One question to ask is whether you want to compare the variables within TFF (as part of the federated computation) or post-hoc/outside TFF (analyzing within Python).
Modifying the tff.utils.IterativeProcess construction performed by tff.learning.build_federated_averaging_process may be a good way to go. In fact, I'd recommend forking the simplified implementation on GitHub at tensorflow_federated/python/research/simple_fedavg/simple_fedavg.py, rather than digging into tff.learning.
Changing the line that performs a tff.federated_mean on the updates from the clients to a tff.federated_collect will give a list of all the clients' model deltas, which can then be compared to the global model.
Example:
client_deltas = tff.federated_collect(client_outputs.weights_delta)

@tff.tf_computation(server_state.model.type_signature,
                    client_deltas.type_signature)
def compare_deltas_to_global(global_model, deltas):
  for delta in deltas:
    pass  # do something with delta vs global_model

tff.federated_apply(compare_deltas_to_global, (server_state.model, client_deltas))
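For the post-hoc route (outside TFF), one option is to pull the global model weights out of the server state after each round and write them to disk yourself. A rough sketch, assuming iterative_process, state, and federated_train_data are set up as in the tutorial notebook and that the simulation runtime returns the weights as numpy arrays:

import numpy as np

# Assumes `iterative_process`, `state`, and `federated_train_data` are defined
# as in the tutorial notebook.
for round_num in range(10):
    state, metrics = iterative_process.next(state, federated_train_data)
    # In the simulation runtime the global model weights come back as numpy
    # arrays, so they can be written to a per-round file and compared later.
    np.savez('global_model_round_{}.npz'.format(round_num), *state.model.trainable)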

Related

Different optimization with different TF versions

I'm trying to train a convolutional neural network with Keras on TensorFlow 2.6, and I also trained it on TensorFlow 1.11. I think the migration went okay (both networks converged), but the results are very different, and worse in TF 2.6. I used the Adam optimizer in both cases with the same hyperparameters (learning_rate = 0.001), yet the loss is optimized better in TF 1.11 than in TF 2.6.
I'm trying to find out where the differences could come from. What should be taken into account when working with different TF versions? Can there be significant numerical differences? I know that in TF 1.x the default mode is graph and in TF 2 the default is eager; I don't know whether this could lead to different training behaviour.
What surprises me is how much the loss decreases in the first epochs and that it reaches a lower value at the end of training.
Your understanding is correct: they work in different modes, eager and graph, but the loss function is defined by how much the values need to change towards the optimum, as calculated by the method you configure.
You cannot directly compare one model's training history to another. Running it several times you may find TF 1 faster and reaching smaller loss values; it is worth reviewing the changelogs, since the loss function implementations have been updated between versions.
Graph mode is the powerful technique we know, but TF 2.x also gives you access to values at runtime, which is why you have convenient mechanisms such as callbacks, dynamic functions, and updating values during execution. (It is instructive for a student or user to experiment with and compare both versions on the same tasks.)
Identical methods by themselves should not create different results.
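One concrete thing worth checking, as a hedged sketch rather than a definitive diagnosis: optimizer defaults have changed across versions (for example, tf.compat.v1.train.AdamOptimizer uses epsilon=1e-8 while tf.keras.optimizers.Adam uses epsilon=1e-7), and TF 2 runs eagerly unless the training step is compiled. Pinning every hyperparameter explicitly and wrapping the step in tf.function rules out two common sources of divergence. The model, loss, and data below are placeholders for your own network:

import tensorflow as tf

# Pin every optimizer hyperparameter explicitly instead of relying on defaults,
# which have drifted between versions (notably epsilon).
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9,
                                     beta_2=0.999, epsilon=1e-8)

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])  # placeholder network
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function  # compile the step into a graph, as TF 1.x would have done
def train_step(x_batch, y_batch):
    with tf.GradientTape() as tape:
        loss = loss_fn(y_batch, model(x_batch, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss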

Loading a model from a TensorFlow SavedModel onto multiple GPUs

Let's say someone hands me a TF SavedModel and I would like to replicate this model on the 4 GPUs I have on my machine so I can run inference in parallel on batches of data. Are there any good examples of how to do this?
I can load a saved model in this way:
def load_model(self, saved_model_dirpath):
    '''Loads a model from a saved model directory - this should
    contain a .pb file and a variables directory'''
    signature_key = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
    input_key = 'input'
    output_key = 'output'
    meta_graph_def = tf.saved_model.loader.load(self.sess,
                                                [tf.saved_model.tag_constants.SERVING],
                                                saved_model_dirpath)
    signature = meta_graph_def.signature_def
    input_tensor_name = signature[signature_key].inputs[input_key].name
    output_tensor_name = signature[signature_key].outputs[output_key].name
    self.input_tensor = self.sess.graph.get_tensor_by_name(input_tensor_name)
    self.output_tensor = self.sess.graph.get_tensor_by_name(output_tensor_name)
...but this would require that I have a handle to the session. For models that I have written myself, I would have access to the inference function and could just call it, wrapping it in with tf.device(), but in this case I'm not sure how to extract the inference function out of a SavedModel. Should I load 4 separate sessions, or is there a better way? I couldn't find much documentation on this, but apologies in advance if I missed something. Thanks!
There is no support for this use case in TensorFlow at the moment. Unfortunately, "replicating the inference function" based only on the SavedModel (which is basically the computation graph with some metadata), is a fairly complex (and brittle, if implemented) graph transformation problem.
If you don't have access to the source code that produced this model, your best bet is to load the SavedModel 4 times into 4 separate graphs, rewriting the target device to the corresponding GPU each time. Then, run each graph/session separately.
Note that you can invoke sess.run() multiple times concurrently since sess.run() releases the GIL for the time of actual computation. All you need is several Python threads.
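A rough sketch of that approach, assuming one graph and session per GPU and using visible_device_list so that each session only sees its own device (the tensor names and batch splitting below are placeholders; use the names from your model's signature_def as in the load_model snippet above):

import threading
import tensorflow as tf

def load_session_for_gpu(saved_model_dir, gpu_index):
    # Each session only sees one physical GPU, which appears to it as /gpu:0.
    config = tf.ConfigProto(gpu_options=tf.GPUOptions(
        visible_device_list=str(gpu_index), allow_growth=True))
    graph = tf.Graph()
    sess = tf.Session(graph=graph, config=config)
    with graph.as_default():
        tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING],
                                   saved_model_dir)
    return sess

sessions = [load_session_for_gpu('/path/to/saved_model', i) for i in range(4)]

def run_inference(sess, batch, results, idx):
    # 'input:0' and 'output:0' are hypothetical tensor names.
    results[idx] = sess.run('output:0', feed_dict={'input:0': batch})

batches = [None] * 4  # split your input data into 4 batches here
results = [None] * 4
threads = [threading.Thread(target=run_inference, args=(s, b, results, i))
           for i, (s, b) in enumerate(zip(sessions, batches))]
for t in threads:
    t.start()
for t in threads:
    t.join()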

How to convert a saved_model.pb to EvalSavedModel?

I was going through the tensorflow-model-analysis documentation on evaluating TensorFlow models. The getting started guide talks about a special SavedModel called the EvalSavedModel.
Quoting the getting started guide:
This EvalSavedModel contains additional information which allows TFMA
to compute the same evaluation metrics defined in your model in a
distributed manner over a large amount of data, and user-defined
slices.
My question is how can I convert an already existing saved_model.pb to an EvalSavedModel?
An EvalSavedModel is exported as a SavedModel message, so there is no need for such a conversion.
EvalSavedModel uses SavedModelBuilder under the hood. It populates the estimator graph with several placeholders and creates some additional metric collections, and then performs the usual SavedModelBuilder procedure.
Source - https://github.com/tensorflow/model-analysis/blob/master/tensorflow_model_analysis/eval_saved_model/export.py#L228
P.S. I suppose you want to run model-analysis on a model exported with SavedModelBuilder. Since that SavedModel has neither the metric nodes nor the related collections that are created in an EvalSavedModel, it is pointless to do so - model-analysis simply could not find any metrics related to your estimator.
If I understand your question correctly, you have a saved_model.pb generated either by tf.saved_model.simple_save, by tf.saved_model.builder.SavedModelBuilder, or by estimator.export_savedmodel.
If so, you are exporting the training and inference graphs to saved_model.pb.
The point you mention from the guide on the TensorFlow site states that, in addition to exporting the training graph, we need to export the evaluation graph as well. That is what is called the EvalSavedModel.
The evaluation graph comprises the metrics for that model, so that you can evaluate the model's performance using visualizations.
Before we export the EvalSavedModel, we should prepare an eval_input_receiver_fn, similar to serving_input_receiver_fn.
We can specify other functionality there as well, for example if we want the metrics to be computed in a distributed manner or if we want to evaluate the model on slices of the data rather than the entire dataset; such options can be configured in eval_input_receiver_fn.
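For example, a minimal eval_input_receiver_fn might look like the following sketch; the feature_spec and the 'label' feature are assumptions you would replace with your estimator's actual features:

import tensorflow as tf
import tensorflow_model_analysis as tfma

def eval_input_receiver_fn():
    # Placeholder that receives serialized tf.Examples at evaluation time.
    serialized_tf_example = tf.placeholder(
        dtype=tf.string, shape=[None], name='input_example_placeholder')
    # Hypothetical feature spec - replace with your model's features and label.
    feature_spec = {
        'feature_a': tf.FixedLenFeature([], tf.float32),
        'label': tf.FixedLenFeature([], tf.int64),
    }
    features = tf.parse_example(serialized_tf_example, feature_spec)
    return tfma.export.EvalInputReceiver(
        features=features,
        receiver_tensors={'examples': serialized_tf_example},
        labels=features['label'])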
Then we can export the EvalSavedModel using the code below:
tfma.export.export_eval_savedmodel(
    estimator=estimator,
    export_dir_base=export_dir,
    eval_input_receiver_fn=eval_input_receiver_fn)

How to smoothly produce Tensorflow auc summaries for training and test sets?

Tensorflow describes writing file summaries to visualize graph execution.
I envision three stages:
training the data (with optimization)
measuring accuracy on the training set (no optimization)
measuring accuracy on the test set (no optimization!)
I'd like all stages in the same script, as in the evaluate function of the wide_and_deep tutorial, but with the low-level API. I'd like three different graphs for stats like loss or AUC, one for each stage.
Suppose I use one session, and in each stage I define an AUC summary op:
# define auc
auc, auc_op = tf.metrics.auc(labels, predictions)
# summary scalar to track it
tf.summary.scalar("auc", auc_op, family=family_name)
# merge all summaries for evaluation and later writing
summary_op = tf.summary.merge_all()
...
summary_writer.add_summary(summary, step_num)
There are three graphs, but the first graph has all three runs on it, and the second graph has the last two runs. What's worse, each stage starts from the previous state. This makes sense, because all the variables from the previous stages are still around.
I could use a different session for each stage, but that would throw away the model as well.
What is the smooth way to handle this?
I'd like to just clear some of the summary variables. I've tried re-initializing some variables, looked at related questions, read about name scope and variable scope and tried not to re-use variables for AUC, read about variables and sharing, looked into pruning nodes (though I don't understand it), etc. I have not made it work yet.
I am using the low-level API. I saw something like this in the high-level API in _eval_metric_ops, but I don't understand how they 'clear' the different stages. With name_scope?
Do I have to save and load the model into a new session just for this, or is there some clean way to graph each summary separately?
The metric ops will be local variables, so you could run tf.local_variables_initializer() in your Session, which will reset all of your metrics. You could also look through the local variables collection for those with "auc" in the name if you wanted to be a bit more discerning. The high-level way to do this would be to use an Estimator, which will manage metrics for you.
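A sketch of that suggestion (the placeholders and the stage structure are illustrative; drop the reset calls into your own loop between stages):

import tensorflow as tf

labels = tf.placeholder(tf.float32, [None])        # placeholder inputs
predictions = tf.placeholder(tf.float32, [None])
auc, auc_op = tf.metrics.auc(labels, predictions)  # accumulators are local variables

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())  # initializes the AUC accumulators

    # ... run the training stage, updating auc_op and writing summaries ...

    # Reset the metric accumulators so the next stage's AUC starts clean.
    sess.run(tf.local_variables_initializer())

    # Or, more selectively, reset only the AUC-related local variables:
    auc_vars = [v for v in tf.local_variables() if 'auc' in v.name]
    sess.run(tf.variables_initializer(auc_vars))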

What does tf.train.get_global_step() do in TensorFlow?

What is the use of the function tf.train.get_global_step() in TensorFlow?
In machine learning concepts what is it equivalent to?
You can use it to restart training exactly where you left off when the training procedure has been stopped for some reason. Of course you can always restart training without knowing the global_step (if you save checkpoints regularly in your code, that is), but unless you somehow keep track of how many iterations you have already performed, you will not know how many iterations are left after the restart. Sometimes you really want your model to be trained for exactly n iterations, and not n plus an unknown number of iterations performed before a crash. So in my opinion this is more of a practicality than a theoretical machine learning concept.
tf.train.get_global_step() returns the global step (a variable, a tensor from the variable node, or None) by looking it up via get_collection(tf.GraphKeys.GLOBAL_STEP) or get_tensor_by_name('global_step:0').
The global step is widely used in learning rate decay (e.g. tf.train.exponential_decay; see Decaying the learning rate for more information).
You can pass the global step to the optimizer's apply_gradients or minimize method so that it is incremented by one at each training step.
Once you have defined the global step variable, you can get its value with sess.run(global_step).
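A minimal TF 1.x-style sketch that ties these pieces together (the variable w and the loss are just illustrations):

import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(
    0.1, global_step, decay_steps=1000, decay_rate=0.96)

w = tf.Variable(5.0)    # toy parameter
loss = tf.square(w)     # toy loss
# Passing global_step here makes the optimizer increment it once per step.
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(train_op)
    print(sess.run(tf.train.get_global_step()))  # -> 3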