Strange output of Conv2D in tflite graph - tensorflow

I have a tflite graph fragment, which is depicted in the attached picture.
I needed to debug its behavior, and already at the first step I got quite puzzling results.
When I feed a tensor of zeros as input, I expect the output of the first Conv2D to consist only of the values of its bias (since all kernel elements get multiplied by zeros), but instead I got a tensor of seemingly random data. Here is the code snippet:
import numpy as np
import tensorflow as tf

def test_graph(path=PATH_DEFAULT):
    interp = tf.lite.Interpreter(path)
    interp.allocate_tensors()
    input_details = interp.get_input_details()
    in_idx = input_details[0]['index']
    zeros = np.zeros(shape=(1, 256, 256, 3), dtype=np.float32)
    interp.set_tensor(in_idx, zeros)
    interp.invoke()
    # index of the output of the first Conv2D operator is 3 (see netron pic)
    after_conv_2d = interp.get_tensor(3)
    # shape of the bias is just [count of output channels]
    n, h, w, c = after_conv_2d.shape
    # if we feed zeros as input, we expect that the only values we get are the bias values,
    # since all kernel elements are multiplied by zeros
    uniq_vals_cnt = len(np.unique(after_conv_2d))
    assert uniq_vals_cnt <= c, f"There are {uniq_vals_cnt} in output, should be <= than {c}"
output:
AssertionError: There are 287928 in output, should be <= than 24
Can someone help me with my misunderstanding?

It seems my assumption that I can get any intermediate tensor from the interpreter was wrong: we can only do it for output tensors, even though the interpreter does not raise an error and even returns tensors of the right shape for indices of non-output tensors.
One way to debug such a graph would be to make all tensors outputs, and the easiest way to do that seems to be converting the tflite file to pb with toco and then converting the pb back to tflite with the new outputs specified. This way is not ideal, though, because toco support for tflite -> pb conversion was removed after 1.9, and using earlier versions can break (in my case it does break) on some graphs.
More on this here:
tflite: get_tensor on non-output tensors gives random values
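For what it's worth, newer TensorFlow releases expose an experimental interpreter flag that keeps all intermediate tensors alive, which avoids the toco round trip entirely. A minimal sketch, assuming your TensorFlow version supports the flag (it is experimental, so check your release):

import numpy as np
import tensorflow as tf

# experimental_preserve_all_tensors keeps intermediate tensors readable after invoke()
interp = tf.lite.Interpreter(model_path=PATH_DEFAULT,
                             experimental_preserve_all_tensors=True)
interp.allocate_tensors()
zeros = np.zeros(shape=(1, 256, 256, 3), dtype=np.float32)
interp.set_tensor(interp.get_input_details()[0]['index'], zeros)
interp.invoke()
after_conv_2d = interp.get_tensor(3)  # now reflects the actual intermediate values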

Related

How to mask logits for tf.softmax_cross_entropy_with_logits to implement valid actions

I want to compute softmax_cross_entropy_with_logits for a batch of logits.
I have a batch of logits as input, but I want to mask this tensor first with a boolean mask. The boolean mask is also batched, and every mask can contain a different number of True values.
Thus, the result of applying this mask to the whole tensor is no longer dense.
Trying this either flattens the tensor (tf.boolean_mask) or creates a ragged one (tf.ragged.boolean_mask), which either produce wrong results or don't work with the softmax function.
So basically I want to make the following code work:
# logits.shape = (batch, outputs), e.g. (512,8)
# mask.shape = (batch, valid), e.g. (512,8)
# expected result shape (512,)
one_hot_actions = tf.one_hot(x, logits.get_shape().as_list()[-1])
stopgradient = tf.stop_gradient(one_hot_actions)
return tf.nn.softmax_cross_entropy_with_logits_v2(
    logits=tf.boolean_mask(logits, mask),
    labels=tf.boolean_mask(stopgradient, mask))
But with tf.boolean_mask this produces just one value, not four, and with tf.ragged.boolean_mask the cross-entropy function does not work.
I tried combining the two ragged tensors row-wise (first masked logits row with first masked labels row) and computing the softmax row-wise. This did not work, since the tf.map_fn that I used does not accept ragged tensors as inputs. I tried the same idea with SparseTensors and lists of Tensors (tf.split) but never got any working code out of it.
The only idea I had to solve this issue is very ugly:
replace all masked values with NaN using tf.where, then use map_fn on these now dense tensors. Each row I can then mask again to exclude the NaNs and call the softmax function row-wise.
EDIT This is what the code currently looks like:
stopgradient = tf.stop_gradient(one_hot_actions)
nan_logits = tf.where(mask, logits, float('NaN') + tf.zeros_like(logits))
nan_labels = tf.where(mask, stopgradient, float('NaN') + tf.zeros_like(stopgradient))
nan_lola = tf.stack([nan_logits, nan_labels], axis=1)

def fn(x):
    nan_lo = x[0]
    nan_la = x[1]
    masked_lo = tf.boolean_mask(nan_lo, tf.logical_not(tf.math.is_nan(nan_lo)))
    masked_la = tf.boolean_mask(nan_la, tf.logical_not(tf.math.is_nan(nan_la)))
    return tf.nn.softmax_cross_entropy_with_logits_v2(
        logits=masked_lo,
        labels=masked_la)

result = tf.map_fn(fn, nan_lola)
return result
This works but is very slow (my training time almost doubles).
To those interested: I stumbled upon this problem trying to mask valid actions in reinforcement learning.
Do you know of any way to do this (faster)?
Can you replace the masked values with a value that does not affect the softmax?
Thank you!
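For what it's worth, a common way to keep masked entries from affecting the softmax (the idea hinted at in the last question above) is to push the masked logits to a very large negative value instead of removing them: the tensor stays dense, so no map_fn is needed. A minimal sketch, not from the original post, assuming logits, one-hot labels, and a boolean mask all of shape (batch, outputs):

import tensorflow as tf

NEG_INF = -1e9  # large negative constant; softmax of (logit + NEG_INF) is effectively 0

def masked_softmax_cross_entropy(logits, labels, mask):
    # mask: boolean (batch, outputs); True marks valid actions
    masked_logits = tf.where(mask, logits, NEG_INF * tf.ones_like(logits))
    # labels are one-hot over valid actions, so masked positions get ~0 probability and ~0 loss
    return tf.nn.softmax_cross_entropy_with_logits_v2(
        labels=tf.stop_gradient(labels),
        logits=masked_logits)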

Keras image captioning model not compiling because of concatenate layer when mask_zero=True in a previous layer

I am new to Keras and I am trying to implement a model for an image captioning project.
I am trying to reproduce the image captioning pre-inject architecture (the picture is taken from this paper: Where to put the image in an image captioning generator), with a minor difference: generating a word at each time step instead of only a single word at the end. In this architecture, the inputs to the LSTM at the first time step are the embedded CNN features. The LSTM should support variable input length, and to allow this I padded all the sequences with zeros so that all of them have maxlen time steps.
The code for the model I have right now is the following:
from keras.layers import (Input, BatchNormalization, Dense, RepeatVector,
                          Embedding, LSTM, TimeDistributed, concatenate)
from keras.models import Model
from keras.optimizers import Adam

def get_model(model_name, batch_size, maxlen, voc_size, embed_size,
              cnn_feats_size, dropout_rate):
    # create input layer for the cnn features
    cnn_feats_input = Input(shape=(cnn_feats_size,))
    # normalize CNN features
    normalized_cnn_feats = BatchNormalization(axis=-1)(cnn_feats_input)
    # embed CNN features to have the same dimension as the word embeddings
    embedded_cnn_feats = Dense(embed_size)(normalized_cnn_feats)
    # add a time dimension so that this layer's output shape is (None, 1, embed_size)
    final_cnn_feats = RepeatVector(1)(embedded_cnn_feats)
    # create input layer for the captions (each caption has at most maxlen words)
    caption_input = Input(shape=(maxlen,))
    # embed the captions
    embedded_caption = Embedding(input_dim=voc_size,
                                 output_dim=embed_size,
                                 input_length=maxlen)(caption_input)
    # concatenate CNN features and the captions.
    # Output shape should be (None, maxlen + 1, embed_size)
    img_caption_concat = concatenate([final_cnn_feats, embedded_caption], axis=1)
    # now feed the concatenation into an LSTM layer (many-to-many)
    lstm_layer = LSTM(units=embed_size,
                      input_shape=(maxlen + 1, embed_size),  # one additional time step for the image features
                      return_sequences=True,
                      dropout=dropout_rate)(img_caption_concat)
    # create a fully connected layer to make the predictions
    pred_layer = TimeDistributed(Dense(units=voc_size))(lstm_layer)
    # build the model with CNN features and captions as input and
    # predictions as output
    model = Model(inputs=[cnn_feats_input, caption_input],
                  outputs=pred_layer)
    optimizer = Adam(lr=0.0001,
                     beta_1=0.9,
                     beta_2=0.999,
                     epsilon=1e-8)
    model.compile(loss='categorical_crossentropy', optimizer=optimizer)
    model.summary()
    return model
The model (as it is above) compiles without any errors (see: model summary) and I managed to train it using my data. However, it doesn't take into account the fact that my sequences are zero-padded and the results won't be accurate because of this. When I try to change the Embedding layer in order to support masking (also making sure that I use voc_size + 1 instead of voc_size, as it's mentioned in the documentation) like this:
embedded_caption = Embedding(input_dim=voc_size + 1,
                             output_dim=embed_size,
                             input_length=maxlen,
                             mask_zero=True)(caption_input)
I get the following error:
Traceback (most recent call last):
File "/export/home/.../py3_env/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1567, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 0 in both shapes must be equal, but are 200 and 1. Shapes are [200] and [1]. for 'concatenate_1/concat_1' (op: 'ConcatV2') with input shapes: [?,1,200], [?,25,1], [] and with computed input tensors: input[2] = <1>
I don't know why it says the shape of the second array is [?, 25, 1], as I am printing its shape before the concatenation and it's [?, 25, 200] (as it should be).
I don't understand why there'd be an issue with a model that compiles and works fine without that parameter, but I assume there's something I am missing.
I have also been thinking about using a Masking layer instead of mask_zero=True, but it would have to go before the Embedding, and the documentation says that the Embedding layer should be the first layer in a model (after the input).
Is there anything I could change in order to fix this, or is there a workaround?
The non-equal shape error refers to the masks rather than to the tensors/inputs. Since concatenate supports masking, it needs to handle mask propagation. Your final_cnn_feats doesn't have a mask (None), while your embedded_caption has a mask of shape (?, 25). You can verify this by doing:
print(embedded_caption._keras_history[0].compute_mask(caption_input))
Since final_cnn_feats has no mask, concatenate will give it an all-non-zero mask so that mask propagation works. While this is correct in principle, that generated mask has the same shape as final_cnn_feats, which is (?, 1, 200) rather than (?, 1), i.e. it masks all features at every time step rather than just every time step. This is where the non-equal shape error comes from ((?, 1, 200) vs (?, 25)).
To fix it, you need to give final_cnn_feats a correct/matching mask. I'm not familiar with your project, but one option is to apply a Masking layer to final_cnn_feats, since it is designed to mask time steps.
final_cnn_feats = Masking()(RepeatVector(1)(embedded_cnn_feats))
This is correct only when not all 200 features in final_cnn_feats are zero, i.e. there is always at least one non-zero value in final_cnn_feats. Under that condition, the Masking layer will produce a (?, 1) mask and will not mask out the single time step in final_cnn_feats.
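Putting the answer's two pieces together, a minimal sketch of the suggested fix, using the shapes from the question (maxlen = 25, embed_size = 200) as an assumption; Masking comes from keras.layers:

# CNN branch: one "image word" time step; Masking gives it a (?, 1) mask
final_cnn_feats = Masking()(RepeatVector(1)(embedded_cnn_feats))
# caption branch: padded word indices masked out, mask shape (?, maxlen)
embedded_caption = Embedding(input_dim=voc_size + 1,
                             output_dim=embed_size,
                             input_length=maxlen,
                             mask_zero=True)(caption_input)
# both masks are now rank 2, so concatenating along the time axis
# propagates a (?, maxlen + 1) mask to the LSTM
img_caption_concat = concatenate([final_cnn_feats, embedded_caption], axis=1)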

MultiClass Keras Classifier prediction output meaning

I have a Keras classifier built using the Keras wrapper of the Scikit-Learn API. The neural network has 10 output nodes, and the training data is all represented using one-hot encoding.
According to Tensorflow documentation, the predict function outputs a shape of (n_samples,). When I fitted 514541 samples, the function returned an array with shape (514541, ), and each entry of the array ranged from 0 to 9.
Since I have ten different outputs, does the numerical value of each entry correspond exactly to the result that I encoded in my training matrix?
i.e. if index 5 of my one-hot encoding of y_train represents "orange", does a prediction value of 5 mean that the neural network predicted "orange"?
Here is a sample of my model:
model = Sequential()
model.add(Dropout(0.2, input_shape=(32,) ))
model.add(Dense(21, activation='selu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
There are some issues with your question.
The neural network has 10 output nodes, and the training data is all represented using one-hot encoding.
Since your network has 10 output nodes, and your labels are one-hot encoded, your model's output should also be 10-dimensional, and again hot-encoded, i.e. of shape (n_samples, 10). Moreover, since you use a softmax activation for your final layer, each element of your 10-dimensional output should be in [0, 1], and interpreted as the probability of the output belonging to the respective (one-hot encoded) class.
According to Tensorflow documentation, the predict function outputs a shape of (n_samples,).
It's puzzling why you refer to Tensorflow, while your model is clearly a Keras one; you should refer to the predict method of the Keras sequential API.
When I fitted 514541 samples, the function returned an array with shape (514541, ), and each entry of the array ranged from 0 to 9.
If something like that happens, it must be due to a later part of your code that you do not show here; in any case, the idea would be to find the index of the highest value in each 10-dimensional network output (since the outputs are interpreted as probabilities, the element with the highest value is the most probable class). In other words, somewhere in your code there must be something like this:
pred = model.predict(x_test)
y = np.argmax(pred, axis=1) # numpy must have been imported as np
which will give an array of shape (n_samples,), with each element of y an integer between 0 and 9, as you report.
i.e. if index 5 of my one-hot encoding of y_train represents "orange", does a prediction value of 5 mean that the neural network predicted "orange"?
Provided that the above hold, yes.
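A small end-to-end sketch of that interpretation; the class_names mapping and x_test are hypothetical placeholders, with index 5 standing for "orange" as in the question:

import numpy as np

class_names = {5: 'orange'}  # hypothetical: map one-hot index -> label

prob = model.predict(x_test)            # shape (n_samples, 10); each row is a softmax distribution
pred_idx = np.argmax(prob, axis=1)      # shape (n_samples,); integers 0..9
pred_labels = [class_names.get(i, str(i)) for i in pred_idx]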

autocorrelation of the input in tensorflow/keras

I have a 1D input signal. I want to compute the autocorrelation as part of the neural net for further use inside the network.
I need to perform convolution of the input with the input itself.
To perform convolution in a Keras custom layer / TensorFlow, we need the following parameters:
data shape is "[batch, in_height, in_width, in_channels]",
filter shape is "[filter_height, filter_width, in_channels, out_channels]".
There is no batch dimension in the filter shape, but in my case the filter needs to be the input itself.
TensorFlow now has an auto_correlation function. It should be in release 1.6. If you build from source you can use it right now (see e.g. the github code).
Here is a possible solution.
By self convolution, I understood a regular convolution where the filter is exactly the same as the input (if it's not that, sorry for my misunderstanding).
We need a custom function for that, and a Lambda layer.
At first I used padding = 'same', which produces outputs with the same length as the inputs. I'm not sure what output length you want exactly, but if you want more, you should add padding yourself before doing the convolution. (In the example with length 7, for a complete convolution from one end to the other, this manual padding would add 6 zeros before and 6 zeros after the input, and use padding = 'valid'. Find the backend functions here; a sketch of this padding step follows the working example below.)
Working example - Input (5,7,2)
import numpy as np
from keras.models import Model
from keras.layers import *
import keras.backend as K

batch_size = 5
length = 7
channels = 2
channels_batch = batch_size * channels

def selfConv1D(x):
    #this function unfortunately needs to know the shapes beforehand,
    #mainly because of the for loop; for the other lines there are workarounds,
    #but those workarounds are not necessary since we'll have this limitation anyway

    #original x: (batch_size, length, channels)
    #bring channels to the batch position:
    x = K.permute_dimensions(x, [2, 0, 1])  #(channels, batch_size, length)

    #treat channels as individual samples (since we don't mix channels)
    x = K.reshape(x, (channels_batch, length, 1))

    #here, we get a copy of x reshaped to match filter shapes:
    filters = K.permute_dimensions(x, [1, 2, 0])  #(length, 1, channels_batch)

    #now, in the absence of a suitable conv function, we make a loop
    allChannels = []
    for i in range(channels_batch):
        f = filters[:, :, i:i+1]
        allChannels.append(
            K.conv1d(
                x[i:i+1],
                f,
                padding='same',
                data_format='channels_last'))
        #although channels_last is my default config, I found this bug:
        #https://github.com/fchollet/keras/issues/8183
        #convolution output: (1, length, 1)

    #concatenate all results as samples
    x = K.concatenate(allChannels, axis=0)  #(channels_batch, length, 1)

    #restore the original form (passing channels to the end)
    x = K.reshape(x, (channels, batch_size, length))
    return K.permute_dimensions(x, [1, 2, 0])  #(batch_size, length, channels)

#input data for the test:
x = np.array(range(70)).reshape((5, 7, 2))

#little model that just performs the convolution
inp = Input((7, 2))
out = Lambda(selfConv1D)(inp)
model = Model(inp, out)

#checking results
p = model.predict(x)
for i in range(5):
    print("x", x[i])
    print("p", p[i])
You can just use tf.nn.conv3d by treating the "batch size" as "depth":
# treat the batch size as depth.
data = tf.reshape(input_data, [1, batch, in_height, in_width, in_channels])
# kernel must be an actual filter tensor (not just a shape list) with shape
# [filter_depth, filter_height, filter_width, in_channels, out_channels]
out = tf.nn.conv3d(data, kernel, [1, 1, 1, 1, 1], padding='SAME')

LSTM Followed by Mean Pooling (TensorFlow)

I am aware that there is a similar topic at LSTM Followed by Mean Pooling, but that is about Keras and I work in pure TensorFlow.
I have an LSTM network where the recurrence is handled by:
outputs, final_state = tf.nn.dynamic_rnn(cell,
                                         embed,
                                         sequence_length=seq_lengths,
                                         initial_state=initial_state)
where I pass the correct sequence lengths for each sample (padding by zeros). In any case, outputs contains irrelevant outputs since some samples produce longer outputs than others, based on sequence lengths.
Right now I'm extracting the last relevant output by means of the following method:
def extract_axis_1(data, ind):
    """
    Get specified elements along the first axis of tensor.
    :param data: Tensorflow tensor that will be subsetted.
    :param ind: Indices to take (one for each element along axis 0 of data).
    :return: Subsetted tensor.
    """
    batch_range = tf.range(tf.shape(data)[0])
    indices = tf.stack([batch_range, ind], axis=1)
    res = tf.reduce_mean(tf.gather_nd(data, indices), axis=0)
    return res
where I pass sequence_length - 1 as indices. In reference to the last topic, I would like to select all relevant outputs followed by average pooling, instead of just the last one.
Now, I tried passing nested lists as indices to extract_axis_1, but tf.stack does not accept this.
Any solution directions for this?
You can exploit the weight parameter of the tf.contrib.seq2seq.sequence_loss function.
From the documentation:
weights: A Tensor of shape [batch_size, sequence_length] and dtype float. weights constitutes the weighting of each prediction in the sequence. When using weights as masking, set all valid timesteps to 1 and all padded timesteps to 0, e.g. a mask returned by tf.sequence_mask.
You need to compute a binary mask that distinguishes between your valid outputs and the invalid ones. Then you can just provide this mask to the weights parameter of the loss function (probably you will want to use a loss like this one); the function will not consider outputs with a 0 weight in the computation of the loss.
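A minimal sketch of that weighting, under the assumption that your logits have shape [batch, max_time, num_classes], targets have shape [batch, max_time], and seq_lengths holds the valid length of each sample:

# 1.0 for valid time steps, 0.0 for padded ones
weights = tf.sequence_mask(seq_lengths, maxlen=tf.shape(logits)[1], dtype=tf.float32)
loss = tf.contrib.seq2seq.sequence_loss(logits=logits,
                                        targets=targets,
                                        weights=weights)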
If you can't or don't need to use a sequence loss, you can do exactly the same thing manually: compute a binary mask, multiply your outputs by this mask, and provide the result as input to your fully connected layer.
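And a minimal sketch of the manual route, which also gives the mean pooling over relevant outputs that the question asks for; it assumes outputs from dynamic_rnn with shape [batch, max_time, units] and seq_lengths with shape [batch]:

# binary mask over time steps: [batch, max_time]
mask = tf.sequence_mask(seq_lengths, maxlen=tf.shape(outputs)[1], dtype=tf.float32)
# zero out the padded steps: [batch, max_time, units]
masked_outputs = outputs * tf.expand_dims(mask, -1)
# sum over time and divide by the true lengths -> mean over relevant outputs: [batch, units]
mean_pooled = tf.reduce_sum(masked_outputs, axis=1) / tf.expand_dims(
    tf.cast(seq_lengths, tf.float32), -1)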