I have the original data with shape (1599, 1782); adding other features gives data with shape (1599, 1782, 10). 1599 is the number of dates, each day has 1782 independent categories, and each category has 10 features. I chose a window size of 16 days for both the train and validation data, thus:
X_train.shape (1568, 16, 1782, 10)
y_train.shape (1568, 16, 1782)
I do not know how to feed this data into the LSTM model. I want the input shape (?, 16, 1782, 10) and the output shape (?, 16, 1782). However, my current model is not working:
model.add(LSTM(units=50, return_sequences=True, input_shape=[16, 1782, 10]))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(TimeDistributed(Dense(1782)))
The error shows:
ValueError: Input 0 of layer "lstm" is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 16, 1782, 10)
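An LSTM layer only accepts 3D input, (batch, steps, features), which is why the 4D tensor (None, 16, 1782, 10) is rejected. A minimal sketch of one workaround, assuming the 1782 categories can be flattened into one wide feature vector per time step (an illustration, not the only option):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense, Reshape, TimeDistributed

model = Sequential()
# merge the category and feature axes: (16, 1782, 10) -> (16, 17820)
model.add(Reshape((16, 1782 * 10), input_shape=(16, 1782, 10)))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
# one output per category at every time step: (None, 16, 1782)
model.add(TimeDistributed(Dense(1782)))
model.summary()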
I am trying to call tf.image.random_crop(image, size=INPUT_SHAPE) and I get this error:
ValueError: Dimensions must be equal, but are 4 and 3 for '{{node random_crop/GreaterEqual}} = GreaterEqual[T=DT_INT32](random_crop/Shape, random_crop/size)' with input shapes: [4], [3].
So while I was trying to understand what was going on, I tried printing the shape of my dataset with
print(len(train_dataset), train_dataset)
and I got this:
23 <BatchDataset element_spec=(TensorSpec(shape=(None, 160, 160, 1), dtype=tf.float32, name=None), TensorSpec(shape=(None,), dtype=tf.int32, name=None))>
First, I don't understand the number 23, and more concerning is the TensorSpec(shape=(None, 160, 160, 1)). My INPUT_SHAPE is (160, 160, 1), so I am wondering if that's what's causing the problem.
I saw a thread that said I should change the batch size to 1, but that didn't work out for me. Right now, I have no batch size set on the dataset at all.
The 23 is just the number of batches in the dataset. More importantly, it looks like you are applying random_crop after batching, so random_crop sees 4D (batch, height, width, channels) tensors while your size argument has only 3 elements. For the above to work you would need to set INPUT_SHAPE = (batch_size, 160, 160, 1), or you can batch after applying random_crop.
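A minimal sketch of the second option, assuming the dataset is built unbatched from (160, 160, 1) images with integer labels (the names here are illustrative):

import tensorflow as tf

INPUT_SHAPE = (160, 160, 1)

def crop(image, label):
    # each element is a single unbatched image here, so the 3-element
    # size matches the 3D image shape
    return tf.image.random_crop(image, size=INPUT_SHAPE), label

train_dataset = train_dataset.map(crop).batch(32)  # crop first, then batch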
I am trying to implement a dense layer in Keras. The input is an EEG recording using 2 channels, each of which consists of a vector of 8 points, and the total number of training samples is 17. The y is also 17 points.
I used
x = x.reshape(17, 2, 8, 1)
y = y.reshape(17, 1, 1, 1)
model = Sequential()
model.add(Dense(1, input_shape=(2, 8, 1), activation='relu'))
print(model.summary())
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
print(model.compile)
model.fit(x, y, batch_size = 17,epochs=500, verbose=1)
but I get the following error:
Error when checking target: expected dense_57 to have shape (2, 8, 1) but got array with shape (17, 1, 1)
Dense acts on the last axis, so with input shape (2, 8, 1) and a single unit, the layer's output, and therefore the target it expects, has shape (2, 8, 1) per sample, which doesn't match your y. An easy fix would be to do the following:
x = x.reshape(17, 16)
y = y.reshape(17, 1)
model.add(Dense(1, input_shape=(16,), activation='relu'))
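Putting the fix together, here is a minimal runnable sketch; the data arrays are placeholders since the actual EEG recordings aren't shown, and the loss is switched to 'mse' because 'sparse_categorical_crossentropy' does not fit a single relu output:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x = np.random.rand(17, 2, 8)  # placeholder for the 17 EEG samples
y = np.random.rand(17)        # placeholder targets, one per sample

x = x.reshape(17, 16)         # flatten each 2x8 sample into one vector
y = y.reshape(17, 1)

model = Sequential()
model.add(Dense(1, input_shape=(16,), activation='relu'))
model.compile(loss='mse', optimizer='adam')
model.fit(x, y, batch_size=17, epochs=500, verbose=1)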
I'm interested in using the Networkx Python package to perform network analysis on convolutional neural networks. To achieve this I want to extract the edge and weight information from Keras model objects and put them into a Networkx Digraph object where it can be (1) written to a graphml file and (2) be subject to the graph analysis tools available in Networkx.
Before jumping in further, let me clarify how I want to handle pooling. Pooling (e.g., max or average) means that the entries within a pooling window are aggregated, creating an ambiguity about which entry would be used in the graph I want to create. To resolve this, I would like every possible choice included in the graph, as I can account for this later as needed.
For the sake of example, let's consider doing this with VGG16. Keras makes it pretty easy to access the weights while looping over the layers.
from keras.applications.vgg16 import VGG16

model = VGG16()

for layer_index, layer in enumerate(model.layers):
    GW = layer.get_weights()
    if layer_index == 0:
        print(layer_index, layer.get_config()['name'], layer.get_config()['batch_input_shape'])
    elif GW:
        W, B = GW
        print(layer_index, layer.get_config()['name'], W.shape, B.shape)
    else:
        print(layer_index, layer.get_config()['name'])
Which will print the following:
0 input_1 (None, 224, 224, 3)
1 block1_conv1 (3, 3, 3, 64) (64,)
2 block1_conv2 (3, 3, 64, 64) (64,)
3 block1_pool
4 block2_conv1 (3, 3, 64, 128) (128,)
5 block2_conv2 (3, 3, 128, 128) (128,)
6 block2_pool
7 block3_conv1 (3, 3, 128, 256) (256,)
8 block3_conv2 (3, 3, 256, 256) (256,)
9 block3_conv3 (3, 3, 256, 256) (256,)
10 block3_pool
11 block4_conv1 (3, 3, 256, 512) (512,)
12 block4_conv2 (3, 3, 512, 512) (512,)
13 block4_conv3 (3, 3, 512, 512) (512,)
14 block4_pool
15 block5_conv1 (3, 3, 512, 512) (512,)
16 block5_conv2 (3, 3, 512, 512) (512,)
17 block5_conv3 (3, 3, 512, 512) (512,)
18 block5_pool
19 flatten
20 fc1 (25088, 4096) (4096,)
21 fc2 (4096, 4096) (4096,)
22 predictions (4096, 1000) (1000,)
For the convolutional layers, I've read that the tuples represent (filter_x, filter_y, input_channels, num_filters), where filter_x and filter_y give the spatial shape of the filter, input_channels is the number of channels entering the layer, and num_filters is the number of filters. There's one bias term for each filter, so the last tuple in these rows will also equal the number of filters.
While I've read explanations of how the convolutions within a convolutional neural network behave conceptually, I seem to be having a mental block when I get to handling the shapes of the layers in the model object.
Once I know how to loop over the edges of the Keras model, with Networkx I should be able to easily code the construction of the Networkx object. The code for this might loosely resemble something like this, where keras_edges is an iterable that contains tuples formatted as (in_node, out_node, edge_weight).
import networkx as nx
g = nx.DiGraph()
g.add_weighted_edges_from(keras_edges)
nx.write_graphml(g, 'vgg16.graphml')
So to be specific, how do I loop over all the edges in a way that accounts for the shape of the layers and the pooling in the way I described above?
Keras doesn't have an edge element, and a Keras node is something totally different (a Keras node is an entire layer as it is used, i.e., the layer as presented in the graph of the model), so you will have to build the pixel-level graph yourself.
So, assuming you are using the smallest image possible (which is equal to the kernel size), and that you're creating nodes manually (sorry, I don't know how it works in networkx):
For a convolution that:
Has i input channels (channels in the image that comes in)
Has o output channels (the selected number of filters in keras)
Has kernel_size = (x, y)
You already know the weights, which are shaped (x, y, i, o).
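For concreteness (this setup is my own assumption, not part of the original answer), the weights and a smallest-possible image could be taken from the block1_conv1 layer of the VGG16 model loaded above:

import numpy as np

# block1_conv1 of the VGG16 model above: weights are shaped (3, 3, 3, 64)
weights, biases = model.layers[1].get_weights()
# the smallest possible image equals the kernel size: 3 x 3 with 3 channels
image = np.random.rand(3, 3, 3)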
You would have something like:
#assuming a node here is one pixel from one channel only:

#kernel sizes x and y
kSizeX = weights.shape[0]
kSizeY = weights.shape[1]

#in and out channels
inChannels = weights.shape[2]
outChannels = weights.shape[3]

#slide steps in x and y
stepsX = image.shape[0] - kSizeX + 1
stepsY = image.shape[1] - kSizeY + 1

#stores the final results
all_filter_results = []

for ko in range(outChannels): #for each output filter
    one_image_results = np.zeros((stepsX, stepsY))

    #for each position of the sliding window
    #if you used the smallest size image, start here
    for pos_x in range(stepsX):
        for pos_y in range(stepsY):
            #storing the results of a single step of a filter here:
            one_slide_nodes = []

            #for each weight in the filter
            for kx in range(kSizeX):
                for ky in range(kSizeY):
                    for ki in range(inChannels):
                        #the input node is a pixel in a single channel
                        in_node = image[pos_x + kx, pos_y + ky, ki]
                        #one multiplication, single weight x single pixel
                        one_slide_nodes.append(weights[kx, ky, ki, ko] * in_node)
                        #so, here, you have in_node and weights

            #the result of each step in the slide is the sum of one_slide_nodes:
            slide_result = sum(one_slide_nodes)
            one_image_results[pos_x, pos_y] = slide_result

    all_filter_results.append(one_image_results)
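Since the goal is ultimately (in_node, out_node, edge_weight) tuples, the same loops can emit graph edges instead of multiplications. A hedged sketch, with node names of my own invention (pixel position plus channel or filter index):

import networkx as nx

g = nx.DiGraph()
for ko in range(outChannels):
    for pos_x in range(stepsX):
        for pos_y in range(stepsY):
            # one graph node per output pixel of each filter
            out_node = f"conv_{pos_x}_{pos_y}_{ko}"
            for kx in range(kSizeX):
                for ky in range(kSizeY):
                    for ki in range(inChannels):
                        in_node = f"img_{pos_x + kx}_{pos_y + ky}_{ki}"
                        g.add_edge(in_node, out_node, weight=float(weights[kx, ky, ki, ko]))

nx.write_graphml(g, 'block1_conv1.graphml')  # illustrative filename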
I have 2 layers with the following shapes.
Layer 1: LSTM
K.int_shape(x)
(None, None, 500)
Layer 2: Conv2D -> Flatten -> Reshape
K.int_shape(y)
(None, 1, 2352)
I need to concatenate them, but I get the following error.
ValueError: A 'Concatenate' layer requires inputs with matching shapes
except for the concat axis. Got inputs shapes: [(None, None, 500),
(None, 1, 2352)]
I'm using Keras v2.1.4.
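For Concatenate, every axis except the concat axis must match, and (None, None, 500) vs (None, 1, 2352) differ on both non-batch axes. A hedged sketch of one possible workaround, assuming it is acceptable to keep only the LSTM's last time step (x and y are the two tensors above):

from keras.layers import Lambda, Reshape, Concatenate

x_last = Lambda(lambda t: t[:, -1, :])(x)   # (None, None, 500) -> (None, 500)
y_flat = Reshape((2352,))(y)                # (None, 1, 2352)   -> (None, 2352)
z = Concatenate(axis=-1)([x_last, y_flat])  # (None, 2852)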
I have a time series signal (n samples, each sample has 81 time steps and 3 features = n x 81 x 3).
I am using a Conv1D-LSTM network, with n_timesteps = 81 and n_features = 3.
A standalone LSTM specifies both n_timesteps and n_features; however, when it is combined with Conv1D, these are not specified for the LSTM.
1. How does the LSTM know how many time steps and features there are in its input?
2. How does the LSTM know the end of the sequence for each sample?
3. Are the time steps "stored up" and then fed into the LSTM, or are they processed and fed into the LSTM one time step at a time?
4. If I include the "flatten" (below), it fails. Why?
5. Do the number of filters in the Conv1D have to correspond to the number of units in the LSTM?
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(n_timesteps,n_features)))
model.add(Conv1D(filters=32, kernel_size=3, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=2))
#model.add(Flatten())
#model.add(LSTM(units=128, input_shape=(n_timesteps, n_features), return_sequences=True))
model.add(LSTM(units=128, return_sequences=True))
model.add(Dropout(0.3))
model.add(LSTM(units=64, dropout=0.5, recurrent_dropout=0.5, return_sequences=True))
model.add(LSTM(units=32, dropout=0.5, recurrent_dropout=0.5))
model.add(Dense(16, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))
1 and 2
Everything is based on tensors (sort of like matrices, but with any number of dimensions).
The tensors have shapes and everything is based on the shapes. Your data tensors are three-dimensional: (samples, time_steps, features).
It happens that 1D convolutions also use the same 3D tensors: (samples, length, channels). So:
samples = examples = sequences
time_steps = length
features = channels
There is no secret. The data is structured and the layers will use this structure. Look at your model.summary() and see the number of steps and features for every layer's output.
3
There is no interleaving between layers.
The conv layer will process its entire input tensor and generate an output tensor.
The next conv layer will take this entire output and produce another entire output.
The LSTM layer will do the same, take an entire input and output an entire tensor.
4
If you flatten the data, your 3D tensors (samples, steps, feats) become 2D tensors (samples, something). The structure the layers rely on is gone, and an LSTM requires 3D input, so it fails.
5
There is absolutely no requirement relating the number of filters to the number of units. The only constraint is that the final output of your model must have the same shape as your y_train data.
Here is my model summary. It appears that the number of features has changed from the original 3 (of the input) to 32 (from the Conv1D). Is it correct that the LSTM will now process the entire run of time steps (~81) on the 32 features of the Conv1D instead of the 3 features of the input?
The first LSTM will take an input shape of (None, 38, 32). This means this LSTM will process:
38 steps
32 features
The convolutions are discarding border steps and the maxpooling is halving the steps, which is why there are 38 steps rather than 81. Example summary:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d (Conv1D)              (None, 79, 32)            320
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 77, 32)            3104
_________________________________________________________________
dropout (Dropout)            (None, 77, 32)            0
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 38, 32)            0
_________________________________________________________________
lstm (LSTM)                  (None, 38, 128)           82432
_________________________________________________________________
dropout_1 (Dropout)          (None, 38, 128)           0
_________________________________________________________________
lstm_1 (LSTM)                (None, 38, 64)            49408
_________________________________________________________________
lstm_2 (LSTM)                (None, 32)                12416
_________________________________________________________________
dense (Dense)                (None, 16)                528
_________________________________________________________________
dropout_2 (Dropout)          (None, 16)                0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 17
=================================================================
Total params: 148,225
Trainable params: 148,225
Non-trainable params: 0
_________________________________________________________________