How to perform Virtual Batch Normalization (VBN) in keras - tensorflow

VBN is discussed in this paper, and implemented here, here and here. I do not want to go into the core/full code; I just want to know how to use VBN as a Keras layer, as I am not a very expert TensorFlow/Keras coder. I generally use simple batch normalization (BN) as follows:
model.add(BatchNormalization(momentum=0.8))
In a similar way how to use VBN instead of BN in following keras code?
model.add(Dense(256,input_dim=self.input_dim))
model.add(LeakyReLU(alpha=.2))
model.add(BatchNormalization(momentum=0.8))  # I want to replace this with VBN
model.add(Dense(512))
......
.......

In the first link they say
The __init__ API is intended to mimic
tf.compat.v1.layers.batch_normalization as
closely as possible.
So if you take a look at https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization, it says you use this function as:
x_norm = tf.layers.batch_normalization(x, training=training)
So if I understand correctly, using the functional API (https://keras.io/getting-started/functional-api-guide/),
you should probably do something like:
layer_n = VBN(**kwargs)(layer_n_minus_1)
I hope it helps.
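For concreteness, here is a minimal sketch of what that could look like with the functional API, assuming VBN is imported from one of the linked implementations and behaves like a standard Keras layer (its constructor arguments depend on which implementation you use; input_dim stands for your model's input size):
from tensorflow.keras.layers import Input, Dense, LeakyReLU
from tensorflow.keras.models import Model

# Sketch only: VBN is assumed to be a Keras-compatible layer class from one
# of the linked implementations; input_dim is assumed to be defined.
inputs = Input(shape=(input_dim,))
x = Dense(256)(inputs)
x = LeakyReLU(alpha=0.2)(x)
x = VBN()(x)  # takes the place of BatchNormalization(momentum=0.8)
x = Dense(512)(x)
# ... remaining layers as in the Sequential version
model = Model(inputs, x)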

Related

How to make a keras model take a (None,) tensor as Input

I am using the tf.keras API and I want my model to take input with shape (None,), where None is the batch_size.
The shape argument of keras.layers.Input() doesn't include the batch_size, so I think it can't be used.
Is there a way to achieve my goal? I prefer a solution without tf.placeholder, since it is deprecated.
By the way, my model is a sentence-embedding model, so I want the input to be something like ['How are you.', 'Good morning.'].
======================
Update:
Currently, I can create an input layer with layers.Input(dtype=tf.string, shape=1), but this needs my input to be something like [['How are you.'], ['Good morning.']]. I want my input to have only one dimension.
Have you tried tf.keras.layers.Input(dtype=tf.string, shape=())?
If you wanted to set a specific batch size, tf.keras.Input() does actually include a batch_size parameter. But the batch size is presumed to be None by default, so you shouldn't even need to change anything.
Now, it seems like what you actually want is to be able to provide samples (sentences) of variable length. Good news! The tf.keras.layers.Embedding layer allows you to do this, although you'll have to generate an encoding for your sentences first. The Tensorflow website has a good tutorial on the process.
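For example, a minimal runnable sketch (the Lambda transformation here is only an illustrative placeholder for whatever encoding/embedding your sentence model actually does):
import tensorflow as tf

# Input of shape (None,): each sample is a single raw string.
inputs = tf.keras.Input(shape=(), dtype=tf.string)
# Placeholder transformation so the model is runnable; a real sentence
# embedding model would encode the strings here instead.
lengths = tf.keras.layers.Lambda(lambda s: tf.strings.length(s))(inputs)
model = tf.keras.Model(inputs, lengths)

print(model.predict(tf.constant(['How are you.', 'Good morning.'])))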

Tensorflow Embedding for training and inference

I am trying to code a simple neural machine translation model using TensorFlow, but I am a little stuck on understanding how embeddings work in TensorFlow:
I do not understand the difference between tf.contrib.layers.embed_sequence(inputs, vocab_size=target_vocab_size, embed_dim=decoding_embedding_size)
and
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
In which case should I use one over the other?
The second thing I do not understand is tf.contrib.seq2seq.TrainingHelper and tf.contrib.seq2seq.GreedyEmbeddingHelper. I know that in the case of translation we mainly use TrainingHelper for the training step (use the previous target to predict the next target) and GreedyEmbeddingHelper for the inference step (use the previous timestep's prediction to predict the next target).
But I do not understand how they work, in particular the different parameters used. For example, why do we need a sequence length in the case of TrainingHelper (why do we not use an EOS token)? Why does neither of them use embedding_lookup or embed_sequence as input?
I suppose that you're coming from this seq2seq tutorial. Even though this question is starting to get old, I'll try to answer for the people passing by like me:
For the first question, I looked at the source code behind tf.contrib.layers.embed_sequence, and it actually uses tf.nn.embedding_lookup. So it just wraps it and creates the embedding matrix (tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))) for you. Although this is convenient and less verbose, with embed_sequence there doesn't seem to be a direct way to access the embeddings. So if you want to, you have to query the internal variable used as the embedding matrix by using the same variable scope. I have to admit that the code in the tutorial above is confusing; I even suspect it uses different embeddings in the encoder and the decoder.
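To make the equivalence concrete, here is a small sketch (TF 1.x, since tf.contrib is used; the sizes are purely illustrative):
import tensorflow as tf  # assumes TF 1.x, where tf.contrib is available

target_vocab_size, decoding_embedding_size = 1000, 64     # illustrative values
dec_input = tf.placeholder(tf.int32, shape=[None, None])  # [batch, time] token ids

# Explicit version: you create and own the embedding matrix yourself.
dec_embeddings = tf.Variable(
    tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)

# Wrapped version: embed_sequence creates an equivalent embedding variable
# internally (under its own variable scope), so reaching that matrix later
# means querying the scope by name.
dec_embed_input_2 = tf.contrib.layers.embed_sequence(
    dec_input,
    vocab_size=target_vocab_size,
    embed_dim=decoding_embedding_size)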
For the second question:
I guess using a sequence length or an end-of-sequence marker amounts to the same thing.
The TrainingHelper doesn't need embedding_lookup, as it only forwards the (already embedded) inputs to the decoder; GreedyEmbeddingHelper does take the embedding as its first input, as mentioned in the documentation.
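As a sketch of that difference (again TF 1.x contrib API; the special token ids and sizes are only illustrative):
import tensorflow as tf  # assumes TF 1.x, where tf.contrib is available

batch_size, vocab_size, embed_dim = 32, 1000, 64  # illustrative values
go_id, eos_id = 1, 2                              # hypothetical special token ids

dec_embeddings = tf.Variable(tf.random_uniform([vocab_size, embed_dim]))
dec_input = tf.placeholder(tf.int32, [None, None])   # [batch, time] target ids
target_lengths = tf.placeholder(tf.int32, [None])    # [batch] target lengths
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)

# Training: the ground-truth targets are embedded up front and fed at every
# step; sequence_length tells the decoder when to stop for each example,
# which is why no EOS token is needed here.
train_helper = tf.contrib.seq2seq.TrainingHelper(
    inputs=dec_embed_input, sequence_length=target_lengths)

# Inference: the previous prediction is looked up in the embedding matrix at
# each step, starting from <GO> and stopping at <EOS>, so the helper needs
# the embedding matrix itself rather than pre-embedded inputs.
infer_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
    embedding=dec_embeddings,
    start_tokens=tf.fill([batch_size], go_id),
    end_token=eos_id)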
If I understand you correctly, the first question is about the differences between tf.contrib.layers.embed_sequence and tf.nn.embedding_lookup.
According to the official docs (https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence),
Typical use case would be reusing embeddings between an encoder and decoder.
I think tf.contrib.layers.embed_sequence is designed for seq2seq models.
I found the following post:
https://github.com/tensorflow/tensorflow/issues/17417
where @ispirmustafa mentioned:
embedding_lookup doesn't support invalid ids.
Also, in another post: tf.contrib.layers.embed_sequence() is for what?
@user1930402 said:
When building a neural network model that has multiple gates taking features as input, using tensorflow.contrib.layers.embed_sequence lets you reduce the number of parameters in your network while preserving depth. For example, it eliminates the need for each gate of the LSTM to perform its own linear projection of the features.
It allows for arbitrary input shapes, which helps keep the implementation simple and flexible.
For the second question, sorry, I haven't used TrainingHelper, so I can't answer that part.

Syntax of Keras Functional API

I am kind of confused about how the syntax in the Keras functional API works. It is really useful for defining complex multi-input and multi-output models, but the syntax is puzzling to me.
new_layer = Conv2D(...)(old_layer)
As far as I know, Conv2D is a class. How does the Conv2D()() syntax work in Python?
Every object in Python that implements a __call__() method can be called directly (you can take a look at this question or this tutorial). All Keras layers implement this method (see the source), and the implementation is supposed to return the output of the layer given the input tensor.
Conv2D(...)(X) is equivalent to:
layer = Conv2d(...)
X = layer(X)
where calling layer(X) is equivalent to layer.__call__(X).
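A short illustration of the same pattern, first with a plain Python class and then in a tiny functional-API model:
import tensorflow as tf

# Plain Python: an object that defines __call__ can be used like a function.
class Doubler:
    def __call__(self, x):
        return 2 * x

d = Doubler()
print(d(21))   # 42 -- d(21) is sugar for d.__call__(21)

# Keras: Conv2D(...) builds the layer object, and the second set of
# parentheses calls it on a tensor; the two steps are usually fused.
inputs = tf.keras.Input(shape=(28, 28, 1))
conv = tf.keras.layers.Conv2D(8, 3, padding='same')   # instantiate the layer
x = conv(inputs)                                       # layer.__call__(inputs)
x = tf.keras.layers.Conv2D(8, 3, padding='same')(x)   # same thing, one line
model = tf.keras.Model(inputs, x)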

tf.contrib.layers.layer_norm with tf.nn.rnn_cell.MultiRNNCell

I have multiple RNN layers right now setup like:
stack = tf.nn.rnn_cell.MultiRNNCell([
    tf.nn.rnn_cell.GRUCell(num_hidden, activation=clipped_relu)
    for _ in range(num_rnn_layers)
])
But I am trying to add layer normalization, using https://www.tensorflow.org/api_docs/python/tf/contrib/layers/layer_norm, to the RNN layers. I've tried a number of different setups, but I can't get the model to compile.
Has anyone done this yet? And if so, how did you implement it?
I think you need to define your own layer class that normalizes inside the call function. Did you try that?
There is a layer normalization implementation here:
tf.contrib.rnn.LayerNormBasicLSTMCell
which can be used inside MultiRNNCell.
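For example, a sketch along those lines (TF 1.x; note that LayerNormBasicLSTMCell is an LSTM cell, so this swaps out the GRU cells):
import tensorflow as tf  # assumes TF 1.x, where tf.contrib is available

num_hidden, num_rnn_layers = 128, 3   # illustrative values

# Each cell applies layer normalization internally (layer_norm=True).
stack = tf.nn.rnn_cell.MultiRNNCell([
    tf.contrib.rnn.LayerNormBasicLSTMCell(num_hidden, layer_norm=True)
    for _ in range(num_rnn_layers)
])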

Is there any way to get variable importance with Keras?

I am looking for a proper or best way to get variable importance in a neural network created with Keras. The way I currently do it is to just take the weights (not the biases) of the variables in the first layer, with the assumption that more important variables will have higher weights in the first layer. Is there another/better way of doing it?
Since everything will be mixed up along the network, the first layer alone can't tell you about the importance of each variable. The following layers can also increase or decrease its importance, and even make one variable affect the importance of another variable. Every single neuron in the first layer will itself give each variable a different importance, so it's not that straightforward.
I suggest you do model.predict(inputs) using inputs containing arrays of zeros, making only the variable you want to study equal to 1 in the input.
That way, you see the result for each variable alone. Even so, this will still not help you with the cases where one variable increases the importance of another variable.
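A rough sketch of that probe; the model below is only a toy stand-in, and you would use your own trained Keras model and feature count instead:
import numpy as np
import tensorflow as tf

n_features = 10   # placeholder: number of input variables in your data

# Toy stand-in for a trained model; replace with your own model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(n_features,)),
    tf.keras.layers.Dense(1)
])

# One probe row per variable: that variable set to 1, everything else 0.
probes = np.eye(n_features)
responses = model.predict(probes)

for i, r in enumerate(responses):
    print('variable', i, '->', r[0])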
*Edited to include relevant code to implement permutation importance.
I answered a similar question at Feature Importance Chart in neural network using Keras in Python. It implements what Teque5 mentioned above, namely shuffling the variables among your samples (permutation importance), using the ELI5 package.
from keras.models import Sequential
from keras.wrappers.scikit_learn import KerasRegressor
import eli5
from eli5.sklearn import PermutationImportance

def base_model():
    model = Sequential()
    ...
    return model

X = ...
y = ...

my_model = KerasRegressor(build_fn=base_model, **sk_params)
my_model.fit(X, y)

perm = PermutationImportance(my_model, random_state=1).fit(X, y)
eli5.show_weights(perm, feature_names=X.columns.tolist())
It is not that simple. For example, in later layers the variable's effect could be reduced to zero.
I'd have a look at LIME (Local Interpretable Model-Agnostic Explanations). The basic idea is to set some inputs to zero, pass them through the model, and see whether the result is similar. If it is, then that variable might not be that important. But there is more to it, and if you want to know more, you should read the paper.
See marcotcr/lime on GitHub.
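For tabular data, a minimal LIME sketch could look like the following; X_train, feature_names, and model are assumed to come from your own pipeline, and model.predict must accept a 2-D array of samples:
from lime.lime_tabular import LimeTabularExplainer

# Assumed to exist already: X_train (numpy array), feature_names (list of
# strings), and a trained Keras regression model called `model`.
explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, mode='regression')

# Explain a single sample; the prediction function maps a 2-D array to a 1-D
# array of outputs.
exp = explainer.explain_instance(
    X_train[0], lambda x: model.predict(x).ravel(), num_features=5)
print(exp.as_list())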
This is a relatively old post with relatively old answers, so I would like to offer another suggestion: use SHAP to determine feature importance for your Keras models. SHAP also lets you process Keras models that use layers requiring 3D input, like LSTM and GRU, which eli5 cannot handle.
To avoid double-posting, I would like to point to my answer to a similar question on Stack Overflow about using SHAP.
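For reference, a typical SHAP usage sketch; model, X_train, and X_test are assumed to come from your own pipeline:
import shap

# Assumed to exist already: a trained Keras model `model` and numpy arrays
# X_train / X_test.
explainer = shap.DeepExplainer(model, X_train[:100])   # background sample
shap_values = explainer.shap_values(X_test[:10])

# Summary of feature importance across the explained samples.
shap.summary_plot(shap_values, X_test[:10])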