Creating a new random operation in the computation graph of TensorFlow

How can I create a new random operator (something like tf.random_normal) that is part of the graph? I want to add a Cauchy random variable to the output of one layer of my network.
I found tf.contrib.distributions.Cauchy, but how can I make it work inside a layer (as we can with random_normal)?
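For reference, a minimal sketch (TF 1.x) of sampling Cauchy noise inside the graph using the tf.contrib.distributions.Cauchy class mentioned above; the placeholder and dense layer are invented stand-ins for an actual network layer:

import tensorflow as tf

# Hypothetical layer output; stands in for any hidden-layer tensor.
inputs = tf.placeholder(tf.float32, shape=[None, 16])
layer_out = tf.layers.dense(inputs, 8)

# Cauchy noise drawn inside the graph, re-sampled on every sess.run.
cauchy = tf.contrib.distributions.Cauchy(loc=0., scale=1.)
noise = cauchy.sample(tf.shape(layer_out))
noisy_out = layer_out + noise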

Related

How to implement the tensor product of two layers in Keras/Tf

I'm trying to set up a DNN for classification, and at one point I want to take the tensor product of a vector with itself. I'm using the Keras functional API at the moment, but it isn't immediately clear that there is a layer that does this already.
I've been attempting to use a Lambda layer with numpy to do this, but it's not working.
A bit of googling turns up tf.linalg.LinearOperatorKronecker, which does not seem to work either.
Here's what I've tried:
I have a layer called part_layer whose output is a single vector (rank one tensor).
keras.layers.Lambda(lambda x_array: np.outer(x_array, x_array))(part_layer)
Ideally I would want this to take a vector of the form [1,2] and give me [[1,2],[2,4]].
But the error I'm getting suggests that the np.outer function is not recognizing its arguments:
AttributeError: 'numpy.ndarray' object has no attribute '_keras_history'
Any ideas on what to try next, or if there is a simple function to use?
You can use one of two operations:
If you want to take the batch size into account, use the Dot layer.
Otherwise, use the dot function.
In both cases the code should look like this:
dot_lambda = lambda x_array: tf.keras.layers.dot([x_array, x_array], axes=1)
# dot_lambda = lambda x_array: tf.keras.layers.Dot(axes=1)([x_array, x_array])
keras.layers.Lambda(dot_lambda)(part_layer)
Hope this helps.
Use tf.tensordot(x_array, x_array, axes=0) to achieve what you want. For example, print(tf.tensordot([1,2], [1,2], axes=0)) gives the desired result: [[1,2],[2,4]].
Keras/TensorFlow needs to keep a history of the operations applied to tensors in order to perform the optimization. Numpy has no notion of history, so using it in the middle of a layer is not allowed. tf.tensordot performs the same operation but keeps the history.
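A minimal sketch of wrapping this in a Lambda layer; the Input shape and the batch-aware variant are my additions, not part of the answer:

import tensorflow as tf
from tensorflow import keras

inp = keras.layers.Input(shape=(2,))  # stands in for part_layer

# Literal translation of the answer; like the answer, it ignores
# the batch dimension.
outer = keras.layers.Lambda(lambda v: tf.tensordot(v, v, axes=0))(inp)

# Batch-aware variant: one outer product per example, via broadcasting.
outer_batched = keras.layers.Lambda(
    lambda v: tf.expand_dims(v, 2) * tf.expand_dims(v, 1))(inp)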

How do I store and retrieve Tensors in a global namespace in Tensorflow?

I am trying to read a big TensorFlow project. In a project where the nodes of the computation graph are scattered across many files, is there a way to store a Tensor node of the computation graph and later add that node to the fetch list in sess.run?
For example, if I want to add probs at line 615 of
https://github.com/allenai/document-qa/blob/master/docqa/nn/span_prediction.py to a global namespace, is there a method like tf.add_node(probs, "probs"), so that later I could call tf.get_node("probs"), just for the sake of conveniently passing nodes around the project?
A more general question: what is a good way to structure TensorFlow code so as to make experimenting with different models more efficient?
Of course you can. To retrieve a tensor later, you'll have to give it a name so that you can look it up by that name. Take probs in your code as an example. It's created with the tf.nn.softmax() function, whose API is shown below.
tf.nn.softmax(
    logits,
    axis=None,
    name=None,
    dim=None
)
See the name parameter? You can add it to line 615 like this:
probs = tf.nn.softmax(all_logits, name='my_tensor')
Later when you need it, you can call tf.Graph.get_tensor_by_name(name) to retrieve this tensor.
graph = tf.get_default_graph()
retrieved_probs = graph.get_tensor_by_name('my_tensor:0')
'my_tensor' is the name of the softmax operation, and ':0' must be appended to it, meaning that you're retrieving the tensor rather than the operation. When calling Graph.get_operation_by_name(), no ':0' should be added.
You'll have to make sure that the tensor exists (it might be created in code executed before this line, or restored from a meta graph file). If it's created inside a variable scope, you'll also have to prepend the scope name and a '/' to the name, for example 'my_scope/my_tensor:0'.
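A quick round trip of the above as a self-contained sketch (TF 1.x); the constant input is invented for illustration:

import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
probs = tf.nn.softmax(x, name='my_tensor')

# Elsewhere in the project: retrieve the same tensor by name.
graph = tf.get_default_graph()
retrieved_probs = graph.get_tensor_by_name('my_tensor:0')

with tf.Session() as sess:
    # probs and retrieved_probs refer to the same graph node.
    print(sess.run(retrieved_probs))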

When working with batches via the Dataset API in TensorFlow, what is the recommended way to perform index lookups in a dictionary?

I am currently refactoring existing code to use the newer TF Dataset API. In our current process we populate a standard Python dictionary mapping product ids to classification ids.
Now I have moved our images/paths over to a TF Dataset, and I use tf.string_split to extract various pieces of information from the filename itself, one of them being the product_id. At this point the product_id is a TF tensor, so I can no longer perform a lookup via our previous means, "if product_id in products_to_class": I now have a tensor and can't search a standard dictionary with it.
I am using this project as a way to learn how to increase performance, so I'd like to know the best/recommended approach to take when working with TF Dataset API batches. Do I convert the product_id to a string and just perform the lookup via the if check above, or do I convert the products_to_class dictionary to another data structure (such as another Dataset) and perform the lookup using tensors throughout? Any advice would be greatly appreciated.
A small example of what I currently have:
products_to_class = {'12345': 0, '67890': 1}

# Below logic is in a function mapped over a tf.data.Dataset
def _parse_fn(filename, label):
    core_file = tf.string_split([filename], '\\').values[-1]
    product_id = tf.string_split([core_file], '.').values[0]
    # Unable to do the following, because product_id is now a tensor
    # and products_to_class is a Python dictionary:
    if product_id in products_to_class:
        label = products_to_class[product_id]
The built-in TensorFlow mechanism for doing this is to use a tf.contrib.lookup table. For example, if you have a list of string keys that you want to map to dense integers, you can define the following outside your _parse_fn():
# This constructor creates a lookup table that implicitly maps each string in the
# argument to its index in the list (e.g. '67890' -> 1).
products_to_class = tf.contrib.lookup.index_table_from_tensor(['12345', '67890'])
...and then use products_to_class.lookup() in your _parse_fn().
def _parse_fn(filename, label):
    core_file = tf.string_split([filename], '\\').values[-1]
    product_id = tf.string_split([core_file], '.').values[0]
    # Returns a `tf.Tensor` that corresponds to the value associated with
    # `product_id` in the `products_to_class` table.
    label = products_to_class.lookup(product_id)
    # ...
Note that this places two additional constraints on your program:
You must use Dataset.make_initializable_iterator() instead of Dataset.make_one_shot_iterator().
You must call sess.run(tf.tables_initializer()) before starting to consume elements from the input pipeline.
Both of these will be handled for you if you use the high-level tf.estimator API and return the tf.data.Dataset from your input_fn.
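Putting the pieces together, a minimal end-to-end sketch (TF 1.x); the file names are invented for illustration:

import tensorflow as tf

products_to_class = tf.contrib.lookup.index_table_from_tensor(['12345', '67890'])

def _parse_fn(filename, label):
    core_file = tf.string_split([filename], '\\').values[-1]
    product_id = tf.string_split([core_file], '.').values[0]
    label = products_to_class.lookup(product_id)
    return filename, label

dataset = tf.data.Dataset.from_tensor_slices(
    (['dir\\12345.jpg', 'dir\\67890.jpg'], [0, 0]))
dataset = dataset.map(_parse_fn)

iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
    sess.run(tf.tables_initializer())  # must run before consuming elements
    sess.run(iterator.initializer)
    print(sess.run(next_element))      # label now comes from the table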

How do I assign values from one graph to another graph with the same structure in TensorFlow?

I'm trying to implement DQN in TensorFlow. Here I have one target network and one training network that share the same structure. At the beginning of every 10000 training steps, I want to load the values from a checkpoint into the target network and the training network, then stop gradients on the target network. However, I have tried the following approaches, and none of them worked:
1. Put the two networks in one graph. However, every time I load them, I don't know how to assign the values of the training-network part to the target-network part. (They are saved as different variables, since one has its gradients stopped.)
2. Define two graphs using tf.Graph() and run two sessions respectively. However, I can't load the checkpoint of one graph into the other, even though they have the same structure. After all, they are two different graphs.
So, can anyone give me some advice? Much appreciated!
The typical approach would be to put everything in one graph, put your two networks in two name scopes, then create tf.assign ops for each variable from one scope to the other and use tf.group to construct a final "copying" operation. Let's assume that the function create_net() builds a single network:
with tf.name_scope('main_network'):
    main_net = create_net()
with tf.name_scope('target_network'):
    target_network = create_net()

main_variables = tf.get_collection(tf.GraphKeys.VARIABLES, scope='main_network')
target_variables = tf.get_collection(tf.GraphKeys.VARIABLES, scope='target_network')

# I am assuming get_collection returns variables in the same order; please
# double-check this is actually happening.
assign_ops = []
for main_var, target_var in zip(main_variables, target_variables):
    assign_ops.append(tf.assign(target_var, tf.identity(main_var)))

copy_operation = tf.group(*assign_ops)
Executing copy_operation in session.run should now copy your main network's parameters to the target network. The above should be considered pseudocode rather than something you can copy and paste.
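A runnable variant of that sketch, under two assumptions of mine: it uses tf.variable_scope so that variables created with tf.get_variable pick up the scope prefix (a plain tf.name_scope would not prefix them), and it sorts each collection by name instead of trusting the collection order. The toy create_net() is a stand-in for a real network builder:

import tensorflow as tf

def create_net():
    # Toy stand-in: a single weight matrix.
    return tf.get_variable('w', shape=[2, 2])

with tf.variable_scope('main_network'):
    main_net = create_net()
with tf.variable_scope('target_network'):
    target_net = create_net()

main_variables = sorted(
    tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='main_network'),
    key=lambda v: v.name)
target_variables = sorted(
    tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='target_network'),
    key=lambda v: v.name)

copy_operation = tf.group(
    *[tf.assign(t, m) for m, t in zip(main_variables, target_variables)])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(copy_operation)  # target now mirrors the main network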

Torch, neural nets - forward function on gmodule object - nngraph class

I am a newbie to Torch and Lua (as anyone who has been following my latest posts could attest :) and have the following question on the forward function for the gmodule object (class nngraph).
As per the source code (https://github.com/torch/nn/blob/master/Module.lua; the gmodule class inherits from nn.Module), the syntax is:
function Module:forward(input)
return self:updateOutput(input)
end
However, I have found cases where a table is passed as input, as in:
local lst = clones.rnn[t]:forward{x[{{}, t}], unpack(rnn_state[t-1])}
where:
clones.rnn[t]
is itself a gmodule object. In turn, rnn_state[t-1] is a table with 4 tensors. So in the end, we have something akin to
result_var = gmodule:forward{[1]=tensor_1,[2]=tensor_2,[3]=tensor_3,...,[5]=tensor_5}
The question is: depending on the network architecture, can you pass input formatted as a table not only to the input layer but also to the hidden layers?
And in that case, do you have to check that you pass exactly one input per layer (with the exception of the output layer)?
Thanks so much
I finally found the answer. The module class (as well as the inherited gmodule class) has an input and an output.
However, the input (as well as the output) need not be a single vector; it can be a collection of vectors, depending on the neural net configuration. In this particular case it is a pretty complex recurrent neural net.
So if the net has more than one input vector, you can do:
result_var = gmodule:forward{[1]=tensor_1,[2]=tensor_2,[3]=tensor_3,...,[5]=tensor_5}
where each tensor/vector is one of the input vectors. Only one of those vectors is the X vector, or feature vector. The others can serve as input to other intermediate nodes.
In turn, result_var (which is the output) can be a single tensor (the prediction) or a collection of tensors, depending on the network configuration.
If the latter is the case, one of the output tensors is the prediction, and the remainder are usually used as input to the intermediate nodes of the next time step, though that again depends on the net configuration.