The TensorFlow documentation on the Keras subclassing API gives this example of how to pass a mask along to other layers that implement masking. I am wondering whether this is explicitly required, or whether it is handled automatically once the Embedding layer has mask_zero=True.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

class MyLayer(layers.Layer):
    def __init__(self, **kwargs):
        super(MyLayer, self).__init__(**kwargs)
        self.embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
        self.lstm = layers.LSTM(32)

    def call(self, inputs):
        x = self.embedding(inputs)
        # Note that you could also prepare a `mask` tensor manually.
        # It only needs to be a boolean tensor
        # with the right shape, i.e. (batch_size, timesteps).
        mask = self.embedding.compute_mask(inputs)
        output = self.lstm(x, mask=mask)  # The layer will ignore the masked values
        return output

layer = MyLayer()
x = np.random.random((32, 10)) * 100
x = x.astype('int32')
layer(x)
My confusion comes from another area of the documentation which states:
Masking
This layer supports masking for input data with a variable number of
timesteps. To introduce masks to your data, use an Embedding layer
with the mask_zero parameter set to True.
This seems to mean that if mask_zero=True, nothing further needs to be done for subsequent layers.
If you read about the Masking layer, it also confirms that once you introduce the mask at the beginning, all the remaining layers receive it automatically.
Quote:
For each timestep in the input tensor (dimension #1 in the tensor), if all values in the input tensor at that timestep are equal to mask_value, then the timestep will be masked (skipped) in all downstream layers (as long as they support masking).
If any downstream layer does not support masking yet receives such an input mask, an exception will be raised.
This other link also states the same. The mask will be propagated to all layers.
Quote:
When using the Functional API or the Sequential API, a mask generated by an Embedding or Masking layer will be propagated through the network for any layer that is capable of using them (for example, RNN layers). Keras will automatically fetch the mask corresponding to an input and pass it to any layer that knows how to use it.
The second link is really full of details on masking.
Notice that the code you showed is for a custom embedding. It teaches you how to "create and pass" a mask, in case you want to write a layer that creates a mask itself. It's basically showing what the normal Embedding layer already does.
So, we can conclude that if you're using a normal Embedding layer, all you need is mask_zero=True and the mask will flow downstream on its own.
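As a minimal sketch of that (layer sizes here are arbitrary assumptions), a Sequential model only needs the flag on the Embedding; the LSTM picks up the mask by itself:
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True),
    tf.keras.layers.LSTM(32),   # receives the mask automatically, no mask= argument needed
    tf.keras.layers.Dense(1, activation='sigmoid'),
])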
In addition to the high-level answer given, let's have a look at some important technical details.
If in doubt, inspect the masking source code to understand how it works.
Masking adds a _keras_mask attribute to the tensor, which flags entries to be skipped, effectively letting other API methods know about it.
You can check for yourself whether a layer supports the mask via the supports_masking attribute, e.g. tf.keras.layers.GlobalMaxPool1D().supports_masking
The masking logic is: skip a timestep if all of its features are equal to the masked value (the TF source code uses not_equal and any to flag what remains)
import numpy as np
import tensorflow as tf
arr = np.arange(6).reshape((1,6,1))
arr_masked = tf.keras.layers.Masking(mask_value=5)(arr)
print(arr_masked._keras_mask)
print(arr_masked.numpy())
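The printed mask should be False only at the timestep whose value equals 5, and that timestep should come out zeroed in the output. The same boolean mask can also be computed directly with the ops the source uses (a sketch over the arr above):
mask = tf.reduce_any(tf.not_equal(arr, 5), axis=-1)  # shape (1, 6); False where all features equal 5
print(mask)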
I think you have to pass the mask from layer to layer in a subclassed layer.
From the TensorFlow documentation, quote:
Note that in the call method of a subclassed model or layer, masks aren't automatically propagated, so you will need to manually pass a mask argument to any layer that needs one.
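For instance, a downstream subclassed layer can receive and forward the mask by hand. A minimal sketch (the layer name and sizes are mine, not from the docs):
import tensorflow as tf

class MaskForwardingBlock(tf.keras.layers.Layer):
    def __init__(self, units=32, **kwargs):
        super().__init__(**kwargs)
        self.supports_masking = True        # declare that this layer can consume a mask
        self.lstm = tf.keras.layers.LSTM(units)

    def call(self, inputs, mask=None):
        # The incoming mask arrives via the `mask` argument and has to be
        # passed on explicitly inside call().
        return self.lstm(inputs, mask=mask)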
Related
This answer says:
If there's a mask in your model, it'll be propagated layer-by-layer
and eventually applied to the loss. So if you're padding and masking
the sequences in a correct way, the loss on the padding placeholders
would be ignored.
However in TensorFlow's tutorial on Transformers, the author has implemented custom loss and metric where masks are computed and applied internally. Is this necessary?
Note that in the code of the Transformer model, the author has deleted the Keras mask:
....
....
try:
    # Drop the keras mask, so it doesn't scale the losses/metrics.
    # b/250038731
    del logits._keras_mask
except AttributeError:
    pass

# Return the final output and the attention weights.
return logits
Do we need to implement a custom loss and metric with a mask, or can we use the built-in ones?
I am attempting to port some TensorFlow 1 code to TensorFlow 2. The old code used the now deprecated MultiRNNCell to create a GRU layer with multiple hidden layers. In TensorFlow 2 I want to use the in-built GRU Layer, but there doesn't seem to be an option which allows for multiple hidden layers with that class. The PyTorch equivalent has such an option exposed as an initialization parameter, num_layers.
My workaround has been to use the TensorFlow RNN layer and pass a GRU cell for each hidden layer I want - this is the way recommended in the docs:
import tensorflow as tf

dim = 1024
num_layers = 4
cells = [tf.keras.layers.GRUCell(dim) for _ in range(num_layers)]
gru_layer = tf.keras.layers.RNN(
    cells,
    return_sequences=True,
    stateful=True
)
But the in-built GRU layer has support for CuDNN, which the plain RNN seems to lack, to quote the docs:
Mathematically, RNN(LSTMCell(10)) produces the same result as
LSTM(10). In fact, the implementation of this layer in TF v1.x was
just creating the corresponding RNN cell and wrapping it in a RNN
layer. However using the built-in GRU and LSTM layers enables the use
of CuDNN and you may see better performance.
So how can I achieve this? How do I get a GRU layer that supports both multiple hidden layers and CuDNN? Given that the in-built GRU layer in TensorFlow lacks such an option, is it in fact necessary? Or is the only way to get a deep GRU network to stack multiple GRU layers in sequence?
EDIT: It seems, according to this answer to a similar question, that there is indeed no in-built way to create a GRU Layer with multiple hidden layers, and that they have to be stacked manually.
OK, so it seems the only way to achieve this is to define a stack of GRU Layer instances. This is what I came up with (note that I only need stateful GRU layers that return sequences, and don't need the last layer's return state):
class RNN(tf.keras.layers.Layer):
    def __init__(self, dim, num_layers=1):
        super(RNN, self).__init__()
        self.dim = dim
        self.num_layers = num_layers

        def layer():
            return tf.keras.layers.GRU(
                self.dim,
                return_sequences=True,
                return_state=True,
                stateful=True)

        self._layer_names = ['layer_' + str(i) for i in range(self.num_layers)]
        for name in self._layer_names:
            self.__setattr__(name, layer())

    def call(self, inputs):
        seqs = inputs
        state = None
        for name in self._layer_names:
            rnn = self.__getattribute__(name)
            (seqs, state) = rnn(seqs, initial_state=state)
        return seqs
It's necessary to manually add the internal RNN layers to the parent layer using __setattr__; it seems that putting them in a list and setting that list as a layer attribute won't let the internal layers be tracked by the parent layer (see this answer to this issue).
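A quick usage sketch (the shapes here are my own illustrative assumptions; because the layers are stateful, the batch size gets fixed by the first call):
import tensorflow as tf

rnn = RNN(dim=1024, num_layers=4)
x = tf.random.normal((8, 20, 256))   # (batch, timesteps, features)
y = rnn(x)                           # -> (8, 20, 1024), sequences from the last GRU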
I hoped that this would speed up my network. Tests on Colab have shown no difference so far; if anything, it's actually slightly slower than a plain RNN initialized with a list of GRU cells. I thought that increasing the batch size from 10 to 64 might make a difference, but no, they still seem to perform at around the same speed.
UPDATE: In fact there does seem to be a noticeable speed up, but only if I don't decorate my training step function with tf.function (I have a custom training loop, I don't use Model.fit). Not a huge increase in speed - maybe about 33% faster, with a batch size of 96. A much smaller batch size (between 10 to 20) gives an even bigger speed up, about 70%.
In the keras documentation it states that the embedding layer "can only be used as the first layer in a model." This makes no sense to me, I might want to do a reshape/flatten on an input before passing it to the embedding layer, but this is not allowed. Why must the embedding layer be used only as the first layer?
"can only be used as the first layer in a model." This makes no sense
to me
Generally, an embedding layer maps discrete values to continuous vectors. In the subsequent layers we already have a continuous vector representation, which means there is no need to convert the values again.
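A minimal sketch of that mapping (vocabulary size and dimensions are arbitrary assumptions):
import numpy as np
import tensorflow as tf

emb = tf.keras.layers.Embedding(input_dim=1000, output_dim=100)
ids = np.random.randint(0, 1000, size=(4, 30))   # (batch, 30) integer token ids
vectors = emb(ids)                               # (4, 30, 100) continuous vectors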
I might want to do a reshape/flatten on input before passing it to
the embedding layer
Of course, you can reshape or flatten an input, but in most cases it is meaningless. For example, assume we have sentences of length 30 and want to flatten them before passing them to the embedding:
from tensorflow.keras.layers import Input, Flatten, Embedding  # or the equivalent keras.layers imports

input_layer = Input(shape=(30,))
flatten = Flatten()(input_layer)
embedd = Embedding(1000, 100)(flatten)
In the above example, the Flatten layer has no effect at all: before and after flattening, our tensor shape is [batch, 30].
Let's look at another example: assume our inputs are 2D with shape [batch, 30, 2]. After flattening, the vectors have shape [batch, 60]. We can feed them into the Embedding layer, but in most scenarios this has no meaning; in fact, we destroy the logical relationship between the features.
input_layer = Input(shape=(30, 2))
flatten = Flatten()(input_layer)
embedd = Embedding(1000, 100)(flatten)
I'm trying to write my own recurrent layer in Keras and noticed this line in the Keras source:
# Properly set learning phase on output tensor.
if 0 < self.dropout + self.recurrent_dropout:
    if training is None:
        output._uses_learning_phase = True
Checking the backend code for in_train_phase:
if training is None:
    training = learning_phase()
    uses_learning_phase = True
else:
    uses_learning_phase = False
This is rather confusing. Isn't "training" the "learning phase"?! I guess more importantly, do I need to set _uses_learning_phase on output in my custom recurrent layer?
Intro
A "Training Flag" is meant to enable a Model (or Layer) to behave different from training when it predicts results or is being tested.
Depending on the backend used, Keras may need to implement its own boolean "training flag" (on CNTK, as of Keras 2.2.4) or can use a native backend tensor (as with TensorFlow). Therefore dynamic-purpose code was integrated.
As a consequence, the Layer class has a property described as follows:
uses_learning_phase: Whether any operation
of the layer uses `K.in_training_phase()`
or `K.in_test_phase()`.
and output tensors may be given an attribute _uses_learning_phase which is read by the property. If any output tensor has the attribute (and it is true), the layer's property returns true.
Usage in Keras's Recurrent layer
Your code snippet comes from keras/layers/recurrent.py: when the private _generate_dropout_mask method is called, the backend's operation creator in_train_phase() is invoked, and therefore the output tensor's flag _uses_learning_phase is set.
Explanation of quoted backend code
in_train_phase() and in_test_phase() work just the same. "training" is an optional argument that references the training flag. If the argument is not given, the training flag is referenced automatically at
training = learning_phase()
However, the output tensor's attribute _uses_learning_phase is only set (and set to True) if the training flag is a backend tensor AND the optional training argument was not given. (This may also explain why a layer needs to set _uses_learning_phase itself, but I see no use case for creating an operation via in_test_phase without flagging the output tensor. For now, assume there is one.)
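To make that concrete, here is a hedged sketch of a custom layer in the old Keras 2.x style (the layer itself is my own illustration, not taken from the source):
from keras import backend as K
from keras.layers import Layer

class NoisyLayer(Layer):
    """Adds Gaussian noise only in the training phase."""
    def call(self, inputs, training=None):
        noised = inputs + K.random_normal(shape=K.shape(inputs), stddev=0.1)
        output = K.in_train_phase(noised, inputs, training=training)
        if training is None:
            # Mirror what the recurrent layers do: mark the output as depending
            # on the backend's learning-phase tensor.
            output._uses_learning_phase = True
        return output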
Apologies for any misuse of technical terms.
I am working on a semantic segmentation project using CNNs, trying to implement an Encoder-Decoder type architecture, so the output is the same size as the input.
How do you design the labels?
What loss function should one apply? Especially in the situation of heavy class imbalance (where the ratio between the classes varies from image to image).
The problem deals with two classes (objects of interest and background). I am using Keras with tensorflow backend.
So far, I am designing the expected outputs to have the same dimensions as the input images, applying pixel-wise labeling. The final layer of the model has either a softmax activation (for 2 classes) or a sigmoid activation (to express the probability that a pixel belongs to the objects class). I am having trouble designing a suitable objective function for such a task, of the type:
function(y_pred,y_true),
in agreement with Keras.
Please try to be specific about the dimensions of the tensors involved (input/output of the model). Any thoughts and suggestions are much appreciated. Thank you!
Actually, when you use a TensorFlow backend you can simply apply the predefined Keras objectives in the following manner:
output = Convolution2D(number_of_classes,  # 1 for binary case
                       filter_height,
                       filter_width,
                       activation="softmax")(input_to_output)  # or "sigmoid" for binary
...
model.compile(loss="categorical_crossentropy", ...)  # or "binary_crossentropy" for binary
And then feed either a one-hot encoded feature map or a matrix of shape (image_height, image_width) with integer-encoded classes (remember that in this case you should use sparse_categorical_crossentropy as the loss).
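A minimal sketch of the two target formats (the shapes here are illustrative assumptions):
import numpy as np

num_classes = 2
h, w = 128, 128

y_sparse = np.random.randint(0, num_classes, size=(4, h, w))  # integer class per pixel
y_onehot = np.eye(num_classes)[y_sparse]                      # shape (4, h, w, num_classes)

# model.compile(loss="sparse_categorical_crossentropy", ...)  # use with y_sparse
# model.compile(loss="categorical_crossentropy", ...)         # use with y_onehot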
To deal with class imbalance (I guess it's because of the background class) I strongly recommend you read carefully the answers to this Stack Overflow question.
I suggest starting with a base architecture used in practice, like the one in nerve segmentation: https://github.com/EdwardTyantov/ultrasound-nerve-segmentation. Here a dice_loss is used as the loss function. This works very well for a two-class problem, as has been shown in the literature: https://arxiv.org/pdf/1608.04117.pdf.
Another widely used loss function for such a problem is cross entropy. For problems like yours, long and short skip connections are most commonly deployed to stabilize training, as noted in the paper above.
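For reference, a hedged sketch of such a Dice loss in Keras-backend style for the two-class case (the smoothing constant and exact formulation are my assumptions, not the linked repository's code):
from keras import backend as K

def dice_loss(y_true, y_pred, smooth=1.0):
    # Flatten the per-pixel predictions and labels, then compute the soft Dice
    # coefficient; the loss is 1 - Dice.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    dice = (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
    return 1.0 - dice

# model.compile(optimizer="adam", loss=dice_loss)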
Two ways:
You could try 'flattening':
model.add(Reshape((NUM_CLASSES, HEIGHT * WIDTH)))  # incoming shape: HEIGHT x WIDTH x NUM_CLASSES
model.add(Permute((2, 1)))                         # now it'll be (HEIGHT*WIDTH) x NUM_CLASSES
# Use some activation here, e.g. model.add(Activation('softmax'))
# You can use global averaging or softmax
One hot encoding every pixel:
In this case your final layer should upsample/unpool/deconvolve to HEIGHT x WIDTH x NUM_CLASSES, so your output is essentially of shape (HEIGHT, WIDTH, NUM_CLASSES).
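A hedged sketch of that final stage (the encoder feature-map shape, number of classes, kernel size, and strides are all hypothetical placeholders):
from tensorflow.keras.layers import Input, Conv2DTranspose, Activation

NUM_CLASSES = 2
encoder_output = Input(shape=(64, 64, 128))   # hypothetical encoder feature map

# Upsample back toward the input resolution with one class score per pixel.
x = Conv2DTranspose(NUM_CLASSES, kernel_size=(2, 2), strides=(2, 2), padding="same")(encoder_output)
output = Activation("softmax")(x)             # shape: (batch, 128, 128, NUM_CLASSES)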