chain together tensorflow operations as user defined function - tensorflow

I am wondering whether there is an easy way to define a user-defined TensorFlow operation when it consists only of chained TensorFlow operations. This is just to keep code from becoming unnecessarily long, especially when the same operations must be performed on similar objects:
For example, if I want to define a feed forward mechanism on a neural network with 2 hidden layers, I will need to do this:
layer1_output = tf.nn.relu(tf.matmul(input, weights1) + biases1)
layer2_output = tf.nn.relu(tf.matmul(layer1_output, weights2) + biases2)
layer3_output = tf.nn.softmax(tf.matmul(layer2_output, weights3) + biases3)
However, this oftentimes needs to be done for validation and test sets also, so I would like to define a function which would let me do all the operations in a single swoop so I could get something like this:
train_output = feed_forward(input_train)
test_output = feed_forward(input_test)...
This seems like something simple to do, but I can't seem to find the documentation.

The standard way to do this is just to define a Python function that builds your network:
def feed_forward(input_data):
    layer1_output = tf.nn.relu(tf.matmul(input_data, weights1) + biases1)
    layer2_output = tf.nn.relu(tf.matmul(layer1_output, weights2) + biases2)
    layer3_output = tf.nn.softmax(tf.matmul(layer2_output, weights3) + biases3)
    return layer3_output
Depending on how you define your weights and biases variables, you might pass them in as arguments to the function, or use tf.get_variable() to handle the construction and sharing of variables between different calls to the function.
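For example, a minimal sketch of the tf.get_variable() approach might look like the following (the layer sizes here are placeholders, not taken from your code):
import tensorflow as tf

input_dim, hidden1, hidden2, num_classes = 784, 128, 64, 10  # illustrative sizes

def feed_forward(input_data, reuse=False):
    # Variables are created on the first call and shared on later calls.
    with tf.variable_scope("feed_forward", reuse=reuse):
        w1 = tf.get_variable("weights1", [input_dim, hidden1])
        b1 = tf.get_variable("biases1", [hidden1], initializer=tf.zeros_initializer())
        w2 = tf.get_variable("weights2", [hidden1, hidden2])
        b2 = tf.get_variable("biases2", [hidden2], initializer=tf.zeros_initializer())
        w3 = tf.get_variable("weights3", [hidden2, num_classes])
        b3 = tf.get_variable("biases3", [num_classes], initializer=tf.zeros_initializer())
        layer1_output = tf.nn.relu(tf.matmul(input_data, w1) + b1)
        layer2_output = tf.nn.relu(tf.matmul(layer1_output, w2) + b2)
        return tf.nn.softmax(tf.matmul(layer2_output, w3) + b3)

input_train = tf.placeholder(tf.float32, [None, input_dim])
input_test = tf.placeholder(tf.float32, [None, input_dim])
train_output = feed_forward(input_train)            # creates the variables
test_output = feed_forward(input_test, reuse=True)  # reuses the same variables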

Related

Customized aggregation algorithm for gradient updates in tensorflow federated

I have been trying to implement this paper. Basically, what I want to do is sum the per-client loss and compare it with that of the previous epoch. Then, for each constituent layer of the model, compare the KL divergence between the weights of the server and the client model to get the layer-specific parameter updates, and then do a softmax to decide whether an adaptive update or a normal FedAvg approach is needed.
The algorithm (FedMed) is as follows: [algorithm figure from the paper]
I tried to make use of the code here to build a custom federated averaging process. I got the basic understanding that there are some tf.computations and some tff.computations involved. I get that I need to make changes to the orchestration logic in the run_one_round function and basically manipulate the client outputs to do adaptive averaging instead of the vanilla federated averaging. The client_update tf.computation function basically returns all the values that I need, i.e. the weights_delta (can be used for client-based model weights) and model_output (which can be used to calculate the loss).
But I am not sure where exactly I should make the changes.
@tff.federated_computation(federated_server_state_type,
                           federated_dataset_type)
def run_one_round(server_state, federated_dataset):
    server_message = tff.federated_map(server_message_fn, server_state)
    server_message_at_client = tff.federated_broadcast(server_message)
    client_outputs = tff.federated_map(
        client_update_fn, (federated_dataset, server_message_at_client))
    weight_denom = client_outputs.client_weight
    # TODO: instead of using tff.federated_mean I wish to do an adaptive
    # aggregation based on client_outputs.weights_delta and the server_state model.
    round_model_delta = tff.federated_mean(
        client_outputs.weights_delta, weight=weight_denom)
    # client_outputs.weights_delta has all the client model weights.
    # client_outputs.client_weight has the number of examples per client.
    # client_outputs.model_output has the output of the model per client example.
I want to make use of the server model weights via the server_state object.
I want to calculate the KL divergence between the weights of server model and each client's model per layer. Then use a relative weight to aggregate the client weights instead of vanilla federated averaging.
Instead of using tff.federated_mean I wish to use a different strategy basically an adaptive one based on the algorithm above.
So I needed some suggestions on how to go about implementing this.
Basically, what I want to do is:
1) Sum all the values of the client losses.
2) Calculate the KL divergence, on a per-layer basis, between all the clients and the server, and then determine whether to use adaptive optimization or FedAvg.
Also, is there a way to manipulate this value as a Python value, which would be helpful for debugging purposes? (I tried to use tf.print, but that was not helpful either.) Thanks!
Simplest option: compute weights for mean on clients
If I read the algorithm above correctly, we only need to compute some weights for a mean on the fly. tff.federated_mean accepts an optional CLIENTS-placed weight argument, so probably the simplest option here is to compute the desired weights on the clients and pass them in to the mean.
This would look something like (assuming the appropriate definitions of the variables used below, which we will comment on):
@tff.federated_computation(...)
def round_function(...):
    ...
    # We assume there is a tff.Computation training_fn that performs training,
    # and we're calling it here on the correct arguments.
    trained_clients = tff.federated_map(training_fn, clients_placed_arguments)
    # Next we assume there is a variable in scope, server_model,
    # representing the 'current global model'.
    global_model_at_clients = tff.federated_broadcast(server_model)
    # Here we assume a function compute_kl_divergence, which takes
    # two structures of tensors and computes the KL divergence
    # (as a scalar) between them. The two arguments here are clients-placed,
    # so the result will be as well.
    kl_div_at_clients = tff.federated_map(compute_kl_divergence,
                                          (global_model_at_clients, trained_clients))
    # Perhaps we wish not to use the raw KL divergence as the weight, but rather
    # some function thereof; if so, we map a postprocessing function over
    # the computed divergences. The result will still be clients-placed.
    mean_weight = tff.federated_map(postprocess_divergence, kl_div_at_clients)
    # Now we simply use the computed weights in the mean.
    return tff.federated_mean(trained_clients, weight=mean_weight)
More flexible tool: tff.federated_reduce
TFF generally encourages algorithm developers to implement whatever they can 'in the aggregation', and as such exposes some highly customizable primitives like tff.federated_reduce, which allow you to run arbitrary TensorFlow "in the stream" between clients and server. If the above reading of the desired algorithm is incorrect and something more involved is needed, or you wish to flexibly experiment with totally different notions of aggregation (something TFF encourages and is designed to support), this may be the option for you.
In TFF's heuristic typing language, tff.federated_reduce has signature:
<{T}#CLIENTS, U, (<U, T> -> U)> -> U#SERVER
Meaning, federated_reduce takes a value of type T placed at the clients, a 'zero' in a reduction algebra of type U, and a function accepting a U and a T and producing a U, and applies this function 'in the stream' on the way between clients and server, producing a U placed at the server. The function (<U, T> -> U) will be applied to the partially accumulated value U and the 'next' element in the stream T (note however that TFF does not guarantee ordering of these values), returning another partially accumulated value U. The 'zero' should represent whatever 'partially accumulated' means over the empty set in your application; this will be the starting point of the reduction.
Application to this problem
The components
Your reduction function needs access to two pieces of data: the global model state and the result of training on a given client. This maps quite nicely to the type T. In this application, we will have something like:
T = <server_model=server_model_type, trained_model=trained_model_type>
These two types are likely to be the same, but may not necessarily be so.
Your reduction function will accept the partial aggregate, your server model and your client-trained model, returning a new partial aggregate. Here we will start assuming the same reading of the algorithm as above, that of a weighted mean with particular weights. Generally, the easiest way to compute a mean is to keep two accumulators, one for numerator and one for denominator. This will affect the choice of zero and reduction function below.
Your zero should contain a structure of tensors with value 0 mapping to the weights of your model--this will be the numerator. This would be generated for you if you had an aggregation like tff.federated_sum (as TFF knows what the zero should be), but for this case you'll have to get your hands on such a tensor yourself. This shouldn't be too hard with tf.nest.map_structure and tf.zeros_like.
For the denominator, we will assume we just need a scalar. TFF and TF are much more flexible than this--you could keep a per-layer or per-parameter denominator if desired--but for simplicity we will assume that we just want to divide by a single float in the end.
Therefore our type U will be something like:
U = <numerator=server_model_type, denominator=tf.float32>
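Concretely, a zero of this type could be built with tf.nest.map_structure and tf.zeros_like, for example (model_weights here is a hypothetical structure of tensors matching your model's weights):
import collections
import tensorflow as tf

# model_weights: a structure of tensors with the same shapes as the model's
# weights, assumed to be available in scope.
zero = collections.OrderedDict(
    numerator=tf.nest.map_structure(tf.zeros_like, model_weights),
    denominator=tf.constant(0.0))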
Finally we come to our reduction function. It will be more or less a different composition of the same pieces above; we will make slightly tighter assumptions about them here (in particular, that all the local functions are tff.tf_computations--a technical assumption, arguably a bug in TFF). Our reduction function will be along the following lines (assuming appropriate type aliases):
@tff.tf_computation(U, T)
def reduction(partial_accumulate, next_element):
    kl_div = compute_kl_divergence(
        next_element.server_model, next_element.trained_model)
    weight = postprocess_divergence(kl_div)
    # The model is a structure of tensors, so scale and accumulate each leaf.
    new_numerator = tf.nest.map_structure(
        lambda acc, model_tensor: acc + weight * model_tensor,
        partial_accumulate.numerator, next_element.trained_model)
    new_denominator = partial_accumulate.denominator + weight
    return collections.OrderedDict(
        numerator=new_numerator, denominator=new_denominator)
Putting them together
The basic outline of a round will be similar to the above, but we have put more computation 'in the stream', and consequently there will be less on the clients. We assume here the same variable definitions.
@tff.federated_computation(...)
def round_function(...):
    ...
    trained_clients = tff.federated_map(training_fn, clients_placed_arguments)
    global_model_at_clients = tff.federated_broadcast(server_model)
    # This zip I believe is not necessary, but it helps my mental model.
    reduction_arg = tff.federated_zip(
        collections.OrderedDict(server_model=global_model_at_clients,
                                trained_model=trained_clients))
    # We assume a zero as specified above
    return tff.federated_reduce(reduction_arg,
                                zero,
                                reduction)
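Note that the reduce leaves the <numerator, denominator> pair placed at the server; to finish the weighted mean you would still divide, for example by mapping a small tff.tf_computation over the reduced value (a sketch using the U type above; in older TFF versions the server-side map is tff.federated_apply rather than tff.federated_map):
@tff.tf_computation(U)
def finalize_mean(partial_aggregate):
    # Divide every numerator tensor by the scalar denominator to recover
    # the weighted mean of the client models.
    return tf.nest.map_structure(
        lambda t: t / partial_aggregate.denominator,
        partial_aggregate.numerator)

# Inside round_function, instead of returning the reduce directly:
# reduced = tff.federated_reduce(reduction_arg, zero, reduction)
# return tff.federated_map(finalize_mean, reduced)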

What's the meaning of "rl_actions" of flow/tutorials/tutorial09_environments.ipynb

While working through tutorial 9, I am confused about rl_actions.
In the program, rl_actions is neither initialized nor defined.
Why is there an 'rl_actions' parameter in the _apply_rl_actions function and the compute_reward function?
I also checked the vehicle kernel code, specifically the apply_acceleration function.
The original one is:
def apply_acceleration(self, veh_ids, acc):
    """See parent class."""
    # to handle the case of a single vehicle
    if type(veh_ids) == str:
        veh_ids = [veh_ids]
        acc = [acc]
    for i, vid in enumerate(veh_ids):
        if acc[i] is not None and vid in self.get_ids():
            this_vel = self.get_speed(vid)
            next_vel = max([this_vel + acc[i] * self.sim_step, 0])
            self.kernel_api.vehicle.slowDown(vid, next_vel, 1e-3)
Look into the step method in flow/envs/base_env.py; this is where _apply_rl_actions and compute_reward are called. All three of these methods take as a parameter the actions rl_actions to apply to the agents. These actions are provided by the RL algorithm. The shape of rl_actions is the one specified by the action_space method of your environment.
The RL algorithm automatically calls your step method at each step, giving it the actions to be applied. Flow's environments are actually encapsulated within a Gym environment, which is given to the RL algorithm. The RL algorithm can work with any Gym environment, which allows it to be very general, since all Gym environments have methods such as step, reset, etc. If you want to know more about how this works, look into how to train a custom Gym environment.
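For illustration, here is a rough sketch of how the two connect (the class and the exact values below are just illustrative, loosely modeled on Flow's examples):
from gym.spaces import Box
import numpy as np
from flow.envs.base_env import Env

class MyEnv(Env):
    @property
    def action_space(self):
        # Say there are 3 RL-controlled vehicles, each receiving one
        # acceleration command: rl_actions will then have shape (3,).
        return Box(low=-3.0, high=3.0, shape=(3,), dtype=np.float32)

    def _apply_rl_actions(self, rl_actions):
        # rl_actions is whatever the RL algorithm passed to step();
        # its shape matches the action_space defined above.
        rl_ids = self.k.vehicle.get_rl_ids()
        self.k.vehicle.apply_acceleration(rl_ids, rl_actions)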

How to get two variables list from two different neural networks

Usually, if we define a function through a neural network in a class, and in another class we need that function's list of parameters or variables, in TensorFlow we can use tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="function name"). This is convenient and familiar to me, although I guess there are many other, more efficient ways to do so.
However, in some cases we may need to define a function that is built upon two different neural networks, say F(x) = F(NN_1(x), NN_2(x)). Then, in another class, what is the right way to get the two variable lists of both NN_1() and NN_2()? Clearly, using tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="function name") leads to a mixed variable list for F(x) instead of two separate variable lists for NN_1 and NN_2.
def function():
    with tf.name_scope("function"):
        with tf.name_scope("subfunction_1"):
            neural_network_1
        with tf.name_scope("subfunction_2"):
            neural_network_2
Within a tree of name scopes you can access the individual scope variables with:
vars_1 = tf.get_collection(
    tf.GraphKeys.TRAINABLE_VARIABLES, scope="function_name/subfunction_name_1")
vars_2 = tf.get_collection(
    tf.GraphKeys.TRAINABLE_VARIABLES, scope="function_name/subfunction_name_2")

Reinforcement learning a3c with multiple independent outputs

I am attempting to modify and implement Google's pattern for the Asynchronous Advantage Actor-Critic (A3C) model. There are plenty of examples online that have gotten me started, but I am running into issues when attempting to expand the samples.
All of the examples I can find focus on Pong, which has a state-based output of left, right, or stay still. What I am trying to expand this to is a system that also has a separate on/off output. In the context of Pong, it would be a boost to your speed.
The code I am basing mine on can be found here. It is playing Doom, but it still has the same left and right, with a fire button instead of stay still. I am looking at how I could modify this code so that fire is an action independent from movement.
I know I can easily add another separate output from the model so that the outputs would look something like this:
self.output = slim.fully_connected(rnn_out, a_size,
                                   activation_fn=tf.nn.softmax,
                                   weights_initializer=normalized_columns_initializer(0.01),
                                   biases_initializer=None)
self.output2 = slim.fully_connected(rnn_out, 1,
                                    activation_fn=tf.nn.sigmoid,
                                    weights_initializer=normalized_columns_initializer(0.01),
                                    biases_initializer=None)
The thing I am struggling with is how I then have to modify the value output and redefine the loss function. The value is still tied to the combination of the two outputs. Or is there a separate value output for each of the independent outputs? I feel like there should still be only one value output, but I am unsure how to use that one value and modify the loss function to take the two outputs into account.
I was thinking of adding a separate term to the loss function so that the calculation would look something like this:
self.actions_1 = tf.placeholder(shape=[None], dtype=tf.int32)
self.actions_2 = tf.placeholder(shape=[None], dtype=tf.float32)
self.actions_onehot = tf.one_hot(self.actions_1, a_size, dtype=tf.float32)
self.target_v = tf.placeholder(shape=[None], dtype=tf.float32)
self.advantages = tf.placeholder(shape=[None], dtype=tf.float32)
self.responsible_outputs = tf.reduce_sum(self.output * self.actions_onehot, [1])
self.responsible_outputs_2 = tf.reduce_sum(self.output2 * self.actions_2, [1])

# Loss functions
self.value_loss = 0.5 * tf.reduce_sum(tf.square(self.target_v - tf.reshape(self.value, [-1])))
self.entropy = -tf.reduce_sum(self.policy * tf.log(self.policy))
self.policy_loss = (-tf.reduce_sum(tf.log(self.responsible_outputs) * self.advantages)
                    - tf.reduce_sum(tf.log(self.responsible_outputs_2) * self.advantages))
self.loss = 0.5 * self.value_loss + self.policy_loss - self.entropy * 0.01
I am looking to know if I am on the right track here, or if there are resources or examples that I can expand off of.
First of all, the example you are mentioning doesn't need two output nodes; one output node with a continuous output value is enough to solve it. Also, you shouldn't use a placeholder for the advantage; rather, you should use one for the discounted reward.
self.discounted_reward = tf.placeholder(shape=[None],dtype=tf.float32)
self.advantages = self.discounted_reward - self.value
Also, while calculating the policy loss, you have to use tf.stop_gradient to prevent the policy-loss gradients from flowing back into the value node.
self.policy_loss = -tf.reduce_sum(tf.log(self.responsible_outputs)*tf.stop_gradient(self.advantages))

In dynamic_rnn() is it valid to include variables in the state?

I'm implementing an RNN cell wrapping BasicLSTMCell where I want to be able to look back on past hidden states (across batch boundaries). I'm using dynamic_rnn() and the basic pattern I use is:
def __call__(self, inputs, old_state, scope=None):
    mem = old_state[2]
    # [do something with mem]
    cell_out, new_state = self.cell(inputs,
                                    (old_state[0],
                                     old_state[1]))
    h_state = new_state.h
    c_state = new_state.c
    # control dependency required because of self.buf_index
    with tf.get_default_graph().control_dependencies([cell_out]):
        new_mem = write_to_buf(self.out_buf,
                               cell_out,
                               self.buf_index)
    # update the buffer index
    with tf.get_default_graph().control_dependencies([new_mem]):
        inc_step = tf.assign(self.buf_index,
                             (self.buf_index + 1) % self.buf_size)
    with tf.get_default_graph().control_dependencies([inc_step]):
        h_state = tf.identity(h_state)
    t = [c_state, h_state, new_mem]
    return cell_out, tuple(t)
self.out_buf and self.buf_index are variables. write_to_buf() is a function that uses scatter_update() to write the new hidden states to the buffer and returns the result.
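Roughly, a simplified sketch of what write_to_buf() does (assuming the buffer variable has shape [buf_size, batch_size, num_units]):
def write_to_buf(buf, new_state, buf_index):
    # Write the current hidden state into row `buf_index` of the buffer
    # variable and return the updated buffer, so that downstream ops which
    # use the return value depend on the assignment having run.
    return tf.scatter_update(buf, buf_index, new_state)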
I rely on the assumption that using the return value of scatter_update() guarantees that the new variable value is used (similar to this), so that caching of variables does not mess things up.
From debug prints it seems to work but it would be nice to get some confirmation or suggestions on alternatives.