I am trying to design a recurrent classification network with Keras. I have extracted key characteristics from the frames of a video, and from them I want to identify when certain events occur during the video.
Specifically, I have a matrix (30 x 2) for each frame, which represents the positions of various given objects. From these positions, I would like the network to detect 4 different events, as well as in which frames they occur.
As an example, suppose I have the position of 30 cars in each frame already detected, and I want the network to learn to detect the frames in which:
a car stops
a car starts
two cars collide
a car turns
In each frame, one of these events or none of them (category 0) can occur, but never more than one.
Notably, identifying these 4 events requires data from both the previous frames and the later ones. For example, to know that two cars collide, it is necessary to know that both were in motion beforehand, and that neither moves after the collision.
Following this example, and just to clarify, suppose I have a sample of 100 frames, in which there is a crash at frames 4 and 75, a stop at 12, a start at 37, and turns at 3, 30, and 60. It would have an input of 100x30x2, and an output of 100x1.
After several hours, I get the feeling that I am not understanding something about how to describe the model to Keras.
So far I have been trying the following, with variations in the number of LSTM layers and the number of classification neurons:
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(layers.LSTM(100, input_shape=(30, 2)))
model.add(layers.Dense(16, activation = 'relu'))
model.add(layers.Dense(5, activation = 'sigmoid'))
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer = 'Adam', metrics = ['SparseCategoricalAccuracy'])
I have also tried the variation
model.add(layers.LSTM(100, input_shape=(30, 2), return_sequences=True))
so that it takes into account not only the final output but every time step; however, if I don't add a Flatten layer it doesn't work, and I conclude that I don't understand the matter well.
Edit 1:
Following your advice, I now have the following model.
I start with the dataset, stored in an input xp and an output yp. Here I print the shapes of both variables:
xp.shape, yp.shape
((203384, 25, 2), (203384, 1))
Then I one-hot encode yp with keras.utils, and I change the shape of each input element from a (25 x 2) matrix to a (1 x 50) vector:
n = len(xp)
yp_encoded = keras.utils.to_categorical(yp)
xp_reshaped = xp[0:n,:].reshape(n,1,50)
print(n, xp_reshaped.shape, yp_encoded.shape)
203384, (203384, 1, 50), (203384, 5)
Afterwards, I define the model as discussed, with just LSTM layers:
batch_size = 10
model = keras.Sequential()
model.add(layers.LSTM(100, batch_input_shape=(batch_size, 1, 50), activation = 'relu', return_sequences = True, stateful=True))
model.add(layers.LSTM(5, stateful=True, activation = 'softmax'))
model.summary()
Model: "sequential_140"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_184 (LSTM) (10, 1, 100) 58800
lstm_185 (LSTM) (10, 5) 2120
=================================================================
Total params: 60,920
Trainable params: 60,920
Non-trainable params: 0
So, as I understand it, I have an LSTM model with an input of (batch_size, 1, 50) elements that I should fit against an output of (batch_size, 5).
Then, I do the model compilation, and the fit:
model.compile(loss='categorical_crossentropy', optimizer = 'Adam', metrics = ['SparseCategoricalAccuracy'])
model.fit(xp_reshaped, yp_encoded, epochs = 5, batch_size = batch_size, shuffle = False)
I get the following error:
ValueError: Can not squeeze dim[1], expected a dimension of 1, got 5 for '{{node Squeeze}} = Squeeze[T=DT_FLOAT, squeeze_dims=[-1]](IteratorGetNext:1)' with input shapes: [10,5].
You're trying to solve a classification problem - classify frames as events with the classes start, stop, turn, collide, plus the 0 class (nothing happens). You've chosen the right approach - LSTMs - though your architecture isn't right yet.
All your layers should be LSTMs with stateful=True and return_sequences=True, you should transform your target variable to one-hot encoding and set the last layer's activation to softmax; its output shape will be (n_frames, 5).
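As an illustration only, here is a minimal, non-stateful sketch of that idea, using a per-frame Dense softmax head rather than a stateful all-LSTM stack (layer sizes are placeholders). Note that with one-hot targets, the loss and the metric must both be the categorical (not sparse) variants; the ValueError above stems from pairing SparseCategoricalAccuracy with one-hot labels:
from tensorflow import keras
from tensorflow.keras import layers

n_classes = 5  # 4 events + class 0 (nothing happens)
model = keras.Sequential()
# one whole video per sample: input (n_videos, n_frames, 50), each (25 x 2) frame flattened to 50
model.add(layers.LSTM(100, return_sequences=True, input_shape=(None, 50)))
model.add(layers.Dense(n_classes, activation='softmax'))  # one class distribution per frame
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['categorical_accuracy'])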
I have successfully trained a Keras/TensorFlow model consisting of the layers SimpleRNN→Conv1D→GRU→Dense. The model is meant to run on an Apple Watch for real-time inference, which means I want to feed it a new feature vector and predict a new output for each time step. My problem is that I don't know how to feed data into it such that the convolutional layer receives the latest k outputs from the RNN layer.
I can see three options:
Feed it with one feature vector at a time, i.e. (1,1,6). In this case I assume that the convolutional layer will receive only one time step and hence zero pad for all the previous samples.
Feed it with the last k feature vectors for each time step, i.e. (1,9,6), where k = 9 is the CNN kernel length. In this case I assume that the state flow in the recurrent layers will not work.
Feed it with the last k feature vectors every k-th time step, again where k = 9 is the CNN kernel length. I assume this would work, but it introduces unnecessary latency that I wish to avoid.
What I want is a model that I can feed with a new single feature vector for each time step, and it will automatically feed the last k outputs of the SimpleRNN layer into the following Conv1D layer. Is this possible with my current model? If not, can I work with the layer arguments, or can I introduce some kind of FIFO buffer layer between the SimpleRNN and Conv1D layer?
Here is my current model:
import tensorflow as tf
from tensorflow.keras.layers import Input, SimpleRNN, Conv1D, GRU, Dropout, Dense

feature_vector_size = 6
model = tf.keras.models.Sequential([
    Input(shape=(None, feature_vector_size)),
    SimpleRNN(16, return_sequences=True, name="rnn"),
    Conv1D(16, 9, padding="causal", activation="relu"),
    GRU(12, return_sequences=True, name="gru"),
    Dropout(0.2),
    Dense(1, activation=tf.nn.sigmoid, name="dense")
])
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
rnn (SimpleRNN) (None, None, 16) 368
_________________________________________________________________
conv1d (Conv1D) (None, None, 16) 2320
_________________________________________________________________
gru (GRU) (None, None, 12) 1080
_________________________________________________________________
dropout (Dropout) (None, None, 12) 0
_________________________________________________________________
dense (Dense) (None, None, 1) 13
=================================================================
Edit:
After having researched the problem a bit, I have realized:
The Conv1D layer will zero pad in all three cases that I described, so option 3 won't work either. Setting padding="valid" solves this particular problem.
The SimpleRNN and GRU layers must have stateful=True. I found this description of how to make a model stateful after it has been trained stateless: How to implement a forward pass in a Keras RNN in real-time?
Keras sequence models seem to be made for complete, finite sequences only. The infinite streaming use case with one time step at a time isn't really supported.
However, the original question remains open: How can I build and/or feed new feature vectors into the model such that the convolutional layer receives the latest k outputs from the RNN layer?
For anyone else with the same problem: I couldn't solve the SimpleRNN to Conv1D data flow easily, so I ended up replacing the SimpleRNN layer with another Conv1D layer and setting padding="valid" on both Conv1D layers. The resulting model outputs exactly one time step when fed with a sequence of c * (k - 1) + 1 time steps, where c is the number of Conv1D layers and k is the convolutional kernel length (c = 2 and k = 9 in my case, giving 17):
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv1D, GRU, Dropout, Dense

feature_vector_size = 6
model = tf.keras.models.Sequential([
    Input(shape=(None, feature_vector_size)),
    Conv1D(16, 9, padding="valid", name="conv1d1"),
    Conv1D(16, 9, padding="valid", name="conv1d2"),
    GRU(12, return_sequences=True, name="gru"),
    Dropout(0.2),
    Dense(1, activation=tf.nn.sigmoid, name="dense")
])
After training, I make the GRU layer stateful according to How to implement a forward pass in a Keras RNN in real-time?. For real-time inference I keep a FIFO queue of the 17 latest feature vectors and feed all these 17 vectors into the model as an input sequence for each new time step.
I don't know if this is the best possible solution, but at least it works.
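For illustration, a minimal sketch of that inference loop, assuming the model above (window length 17 = c*(k-1)+1: each valid convolution removes k - 1 = 8 time steps, so 17 → 9 → 1); the helper names are placeholders:
from collections import deque
import numpy as np

window = deque(maxlen=17)  # FIFO queue of the 17 latest feature vectors

def predict_step(model, feature_vector):
    # feature_vector: shape (6,); returns a prediction once the window is full
    window.append(feature_vector)
    if len(window) < window.maxlen:
        return None                         # still warming up
    seq = np.stack(window)[np.newaxis]      # shape (1, 17, 6)
    return float(model.predict(seq, verbose=0)[0, -1, 0])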
I have a problem that I currently fail to solve. During training, my loss function explodes and becomes inf or NaN, because the MSE over all errors becomes huge when the predictions (at the beginning of training) are bad. That is the normal, intended behavior and correct. But how do I train a ConvLSTM, and with which loss function, so that it can learn a multi-step multi-variate output?
E.g. I try to use a (32, None, 200, 12) input to predict a (32, None, 30, 12) output: 32 is the batch size, None is the number of samples (>3000), 200 is the number of input time steps, and 12 is the feature width; the output has 30 time steps, 12 features wide.
My ConvLSTM model:
from tensorflow.keras.layers import Input, Conv1D, Dropout, LSTM, Dense, Reshape, LeakyReLU
from tensorflow.keras.models import Model

input = Input(shape=(None, input_shape[1]))  # input_shape and forecast_horizon defined elsewhere
conv1d_1 = Conv1D(filters=64, kernel_size=3, activation=LeakyReLU())(input)
conv1d_2 = Conv1D(filters=32, kernel_size=3, activation=LeakyReLU())(conv1d_1)
dropout = Dropout(0.3)(conv1d_2)
lstm_1 = LSTM(32, activation=LeakyReLU())(dropout)
dense_1 = Dense(forecast_horizon * input_shape[1], activation=LeakyReLU())(lstm_1)
output = Reshape((forecasting_horizon, input_shape[1]))(dense_1)
model = Model(inputs=input, outputs=output)
My dataset generation:
ds_inputs = tf.keras.utils.timeseries_dataset_from_array(
    df[:-forecast_horizon], None, sequence_length=window_size,
    sequence_stride=1, shuffle=False, batch_size=None)
ds_targets = tf.keras.utils.timeseries_dataset_from_array(
    df[forecast_horizon:], None, sequence_length=forecast_horizon,
    sequence_stride=1, shuffle=False, batch_size=None)
ds_inputs = ds_inputs.batch(batch_size, drop_remainder=True)
ds_targets = ds_targets.batch(batch_size, drop_remainder=True)
ds = tf.data.Dataset.zip((ds_inputs, ds_targets))
ds = ds.shuffle(buffer_size=(len(ds)))
Besides MSE, I have already tried MeanAbsoluteError, MeanSquaredLogarithmicError, MeanAbsolutePercentageError, and CosineSimilarity, the last of which produces nonsense. MSLE works best, but it does not emphasize large errors, and therefore the MSE (used as a metric) varies wildly during training. Additionally, after a while the network becomes stale and the loss stops improving (my explanation is that the differences in loss become too small on the logarithmic scale, so the weights cannot be adjusted well).
I can partially answer my own question. One issue is that I used ReLU/LeakyReLU, which can lead to the exploding-gradient problem: the RNN/LSTM layer applies the same weights over time, so values add up and explode, and the weights never get a chance to shrink (ReLU's minimum is 0). With tanh as the activation, negative values are possible, which also allows the internal weights to shrink and greatly reduces the chance of exploding weights/predictions within the network. After some tests, the LSTM layer stays numerically stable.
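For illustration, the change amounts to something like this in the model above (tanh is in fact the Keras default activation for LSTM; everything else stays the same):
lstm_1 = LSTM(32, activation='tanh')(dropout)  # tanh instead of LeakyReLU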
I'm currently trying to train a custom model with TensorFlow to detect 17 landmarks/keypoints on each of the 2 hands shown in an image (fingertips, first knuckles, bottom knuckles, wrist, and palm), for 34 points in total (and therefore 68 total values to predict for x & y). However, I cannot get the model to converge; the output is instead an array of points that is pretty much the same for every prediction.
I started off with a dataset that has images like this:
each annotated to have the red dots correlate to each keypoint. To expand the dataset to try to get a more robust model, I took photos of the hands with various backgrounds, angles, positions, poses, lighting conditions, reflectivity, etc, as exemplified by these further images:
I have about 3000 images created now, with the landmarks stored inside a csv as such:
I have a train-test split of .67 train / .33 test, with the images randomly assigned to each. I load the images with all 3 color channels, and scale both the color values & keypoint coordinates to between 0 & 1.
I've tried a couple different approaches, each involving a CNN. The first keeps the images as they are, and uses a neural network model built as such:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.optimizers import Adam

model = Sequential()
model.add(Conv2D(filters = 64, kernel_size = (3,3), padding = 'same', activation = 'relu', input_shape = (225,400,3)))
model.add(Conv2D(filters = 64, kernel_size = (3,3), padding = 'same', activation = 'relu'))
model.add(MaxPooling2D(pool_size = (2,2), strides = 2))
filters_convs = [(128, 2), (256, 3), (512, 3), (512,3)]
for n_filters, n_convs in filters_convs:
    for _ in np.arange(n_convs):
        model.add(Conv2D(filters = n_filters, kernel_size = (3,3), padding = 'same', activation = 'relu'))
    model.add(MaxPooling2D(pool_size = (2,2), strides = 2))
model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dense(96, activation="relu"))
model.add(Dense(72, activation="relu"))
model.add(Dense(68, activation="sigmoid"))
opt = Adam(learning_rate=.0001)
model.compile(loss="mse", optimizer=opt, metrics=['mae'])
print(model.summary())
I've modified the various hyperparameters, yet nothing seems to make any noticeable difference.
The other thing I've tried is resizing the images to fit within a 224x224x3 array to use with a VGG-16 network, as such:
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input, Flatten, Dense
from tensorflow.keras.models import Model

vgg = VGG16(weights="imagenet", include_top=False,
            input_tensor=Input(shape=(224, 224, 3)))
vgg.trainable = False
flatten = vgg.output
flatten = Flatten()(flatten)
points = Dense(256, activation="relu")(flatten)
points = Dense(128, activation="relu")(points)
points = Dense(96, activation="relu")(points)
points = Dense(68, activation="sigmoid")(points)
model = Model(inputs=vgg.input, outputs=points)
opt = Adam(learning_rate=.0001)
model.compile(loss="mse", optimizer=opt, metrics=['mae'])
print(model.summary())
This model has similar results to the first. No matter what I do, I get the same outcome: my mse loss bottoms out around .009, with an mae around .07, regardless of how many epochs I run:
Furthermore, when I run predictions based off the model it seems that the predicted output is basically the same for every image, with only slight variation between each. It seems the model predicts an array of coordinates that looks somewhat like what a splayed hand might, in the general areas hands might be most likely to be found. A catch-all solution to minimize deviation as opposed to a custom solution for each image. These images illustrate this, with the green being predicted points, and the red being the actual points for the left hand:
So, I was wondering what might be causing this, be it the model, the data, or both, because nothing I've tried with either modifying the model or augmenting the data seems to have done any good. I've even tried reducing the complexity to predict for one hand only, to predict a bounding box for each hand, and to predict a single keypoint, but no matter what I try, the results are pretty inaccurate.
Thus, any suggestions for what I could do to help the model converge to create more accurate & custom predictions for each image of hands it sees would be very greatly appreciated.
Thanks,
Sam
Usually, neural networks have a very hard time predicting exact coordinates of landmarks. A better approach is probably a fully convolutional network. This would work as follows:
You omit the dense layers at the end and thus end up with an output of shape (m, n, n_filters), with m and n being the dimensions of your downsampled feature maps (since you use max pooling at earlier stages of the network, these will be lower resolution than your input image).
You set n_filters for the last (output-)layer to the number of different landmarks you want to detect plus one more to indicate no landmark.
You remove some of the max pooling so that your final output has a fairly high resolution (so the m and n referenced earlier are bigger). Now your output has shape m x n x (n_landmarks + 1), and each of the m * n (n_landmarks + 1)-dimensional vectors indicates which landmark is present at the image position corresponding to that cell of the m x n grid. The activation of your last (output) convolutional layer therefore needs to be a softmax, to represent probabilities.
Now you can train your network to predict the landmarks locally without having to use dense layers.
This is a very simple architecture, and for optimal results a more sophisticated architecture might be needed, but it should give you a first idea of a better approach than using dense layers for the prediction.
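A minimal sketch of such a fully convolutional network, assuming 34 landmarks plus one "no landmark" class (the filter counts are illustrative, and the coordinate labels would need to be converted into per-cell class maps of shape m x n x 35 to train it):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D

n_landmarks = 34
model = Sequential([
    Conv2D(64, (3, 3), padding='same', activation='relu', input_shape=(225, 400, 3)),
    Conv2D(64, (3, 3), padding='same', activation='relu'),
    MaxPooling2D((2, 2)),  # keep downsampling light so the m x n output grid stays large
    Conv2D(128, (3, 3), padding='same', activation='relu'),
    Conv2D(128, (3, 3), padding='same', activation='relu'),
    Conv2D(n_landmarks + 1, (1, 1), activation='softmax'),  # per-position class probabilities
])
model.compile(loss='categorical_crossentropy', optimizer='adam')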
And as for the explanation of why your network predicts the same values every time: this is probably because your network is simply not able to learn what you want it to learn, as it is not suited to do so. If that is the case, the network will just learn to predict a value that is fairly good for most of the images (so basically the "average" position of each landmark across all of your images).
A standard RNN computational graph looks as follows (in my case, for regression to a single scalar value y).
I want to construct a network which accepts as input m sequences X_1 ... X_m (where both m and the sequence lengths vary), runs the RNN on each sequence X_i to obtain a representation vector R_i, averages the representations, and then runs a fully connected net to compute the output y_hat. The computational graph should look something like this:
Question
Can this be implemented (preferably) in Keras? Otherwise in TensorFlow? I'd very much appreciate if someone can point me to a working implementation of this or something similar.
There isn't a straightforward Keras implementation, as Keras enforces the batch axis (samples dimension, dimension 0) as fixed for the input & output layers (but not for all layers in between), whereas you seek to collapse it by averaging. There is, however, a workaround - see below:
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Input, Dense, GRU, Lambda
from tensorflow.keras.layers import Reshape, GlobalAveragePooling1D
from tensorflow.keras.models import Model
from tensorflow.keras.utils import plot_model
import numpy as np
def make_model(batch_shape):
    ipt = Input(batch_shape=batch_shape)
    x = Lambda(lambda x: K.squeeze(x, 0))(ipt)
    x, s = GRU(16, return_state=True)(x)  # s == last returned state
    x = Lambda(lambda x: K.expand_dims(x, 0))(s)
    x = GlobalAveragePooling1D()(x)  # averages along axis1 (original axis2)
    x = Dense(8, activation='relu')(x)
    out = Dense(1, activation='sigmoid')(x)

    model = Model(ipt, out)
    model.compile('adam', 'binary_crossentropy')
    return model
def make_data(batch_shape):
    return (np.random.randn(*batch_shape),
            np.random.randint(0, 2, (batch_shape[0], 1)))
m, timesteps = 32, 100
batch_shape = (1, m, timesteps, 1)
model = make_model(batch_shape)
model.summary() # see model structure
plot_model(model, show_shapes=True)
x, y = make_data(batch_shape)
model.train_on_batch(x, y)
The above assumes the task is binary classification, but you can easily adapt it to anything else - the main trick is feeding the m samples as a single sample; the rest of the layers can then freely operate on m instead, as Keras doesn't enforce the 1 there.
Note, however, that I cannot guarantee this'll work as intended per the following:
(1) Keras treats all entries along the batch axis as independent, whereas your samples are claimed as dependent.
(2) Per (1), the main concern is backpropagation: I'm not really sure how the gradient will flow with all the dimensionality shuffling going on.
(1) is also consequential for stateful RNNs, as Keras constructs batch_size independent states; these will still likely behave as intended, as all they do is keep memory, but it is still worth understanding fully - see here.
(2) is the "elephant in the room", but aside from that, the model fits your exact description. Chances are, if you've planned out forward-prop and all dims agree with the code's, it'll work as intended - else, and also as a sanity check, I'd suggest opening another question to verify that gradients flow as you intend them to per the above code.
model.summary():
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(1, 32, 100, 1)] 0
_________________________________________________________________
lambda (Lambda) (32, 100, 1) 0
_________________________________________________________________
gru (GRU) [(32, 16), (32, 16)] 864
_________________________________________________________________
lambda_1 (Lambda) (1, 32, 16) 0
_________________________________________________________________
global_average_pooling1d (Gl (1, 16) 0
_________________________________________________________________
dense (Dense) (1, 8) 136
_________________________________________________________________
dense_1 (Dense) (1, 1) 9
On LSTMs: an LSTM will return two last states, one for the cell state and one for the hidden state - see the source code; you should understand exactly what this means if you are to use it. If you do, you'll need concatenate:
from tensorflow.keras.layers import concatenate
# ...
x, s1, s2 = LSTM(16, return_state=True)(x)
x = concatenate([s1, s2], axis=-1)
# ...
I'm trying to use batch normalization in a conv2d_transpose as follows:
h1 = tf.layers.conv2d_transpose(inputs, 64, 4, 2, padding='SAME',
                                kernel_initializer=tf.variance_scaling_initializer,
                                bias_initializer=tf.ones_initializer,
                                activity_regularizer=tf.layers.batch_normalization)
h2 = tf.layers.conv2d_transpose(h1, 3, 4, 2, padding='SAME',
                                kernel_initializer=tf.variance_scaling_initializer,
                                bias_initializer=tf.ones_initializer,
                                activity_regularizer=tf.layers.batch_normalization)
And I am receiving the following error:
ValueError: Dimension 1 in both shapes must be equal, but are 32 and 64
From merging shape 2 with other shapes. for 'tower0/AddN' (op: 'AddN') with input shapes: [?,32,32,64], [?,64,64,3].
I've seen that other people have had this error in Keras because of the difference in dimension ordering between TensorFlow and Theano. However, I'm using pure TensorFlow, all of my variables are in TensorFlow dimension format (batch_size, height, width, channels), and the data_format of the conv2d_transpose layer should be the default 'channels_last'. What am I missing here?
tf.layers.batch_normalization should be added as a layer, not as a regularizer. activity_regularizer is a function that takes the activity (the layer's output) and produces an extra loss term that is added to the overall loss of the whole network. For example, you might want to penalize networks that produce high activations. You can see how activity_regularizer is called on the outputs and its result added to the loss here.
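For illustration, a minimal sketch of the corrected wiring - batch normalization applied to each layer's output as its own layer (the is_training flag is an assumed placeholder, since it isn't shown in the question):
h1 = tf.layers.conv2d_transpose(inputs, 64, 4, 2, padding='SAME')
h1 = tf.layers.batch_normalization(h1, training=is_training)  # BN as a layer, not a regularizer
h2 = tf.layers.conv2d_transpose(h1, 3, 4, 2, padding='SAME')
h2 = tf.layers.batch_normalization(h2, training=is_training)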