I am trying to create a CNN to classify data. My data is X[N_data, N_features].
I want to create a neural net capable of classifying it. My problem concerns the input shape of a Conv1D with the Keras backend.
I want to repeat a filter over, let's say, 10 features, and then keep the same weights for the next ten features.
For each sample, my convolutional layer would create N_features/10 new neurons.
How can I do so? What should I put in input_shape?
def cnn_model():
    model = Sequential()
    model.add(Conv1D(filters=1, kernel_size=10, strides=10,
                     input_shape=(1, 1, N_features),  # this is the part I am unsure about
                     kernel_initializer='uniform', activation='relu'))
    model.add(Flatten())
    model.add(Dense(N_features // 10, kernel_initializer='uniform', activation='relu'))
Any advice?
Thank you!
Try:
def cnn_model():
    model = Sequential()
    model.add(Conv1D(filters=1, kernel_size=10, strides=10,
                     input_shape=(N_features, 1),
                     kernel_initializer='uniform', activation='relu'))
    model.add(Flatten())
    model.add(Dense(N_features // 10, kernel_initializer='uniform', activation='relu'))
....
And reshape your x to shape (nb_of_examples, nb_of_features, 1).
EDIT:
Conv1D was designed for sequence analysis: its convolutional filters are the same no matter which part of the sequence they are applied to. The second dimension is the so-called feature dimension, where you can have a vector of multiple features at each timestep. You can think of the sequence dimension as analogous to the spatial dimensions in Conv2D, and of the feature dimension as analogous to the channel (color) dimension. As @putonspectacles mentioned in his comment, you may set the sequence dimension to None in order to make your network input-length invariant.
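For example, a quick sketch of a length-invariant network (assuming a single feature per timestep; the numbers are illustrative):
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D

model = Sequential()
# None leaves the sequence length unspecified, so any length is accepted
model.add(Conv1D(filters=1, kernel_size=10, strides=10, input_shape=(None, 1)))
print(model.predict(np.random.rand(2, 50, 1)).shape)   # (2, 5, 1)
print(model.predict(np.random.rand(2, 100, 1)).shape)  # (2, 10, 1)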
@Marcin's answer might work, but my suggestion, given the documentation here:
When using this layer as the first layer in a model, provide an
input_shape argument (tuple of integers or None, e.g. (10, 128) for
sequences of 10 vectors of 128-dimensional vectors, or (None, 128) for
variable-length sequences of 128-dimensional vectors.
would be:
model = Sequential()
model.add(Conv1D(filters=1, kernel_size=10, strides=10,
                 input_shape=(None, N_features),
                 kernel_initializer='uniform', activation='relu'))
Note that since the input data has shape (N_data, N_features), we leave the sequence dimension unspecified (None). The strides argument controls the size of the timesteps in this case.
To feed a usual feature table of shape (nrows, ncols) to Keras's Conv1D, the following 2 steps are needed:
xtrain = xtrain.reshape(nrows, ncols, 1)
# For the Conv1D layer:
input_shape = (ncols, 1)
For example, taking the first 4 features of the iris dataset:
To see the usual format and its shape:
import numpy as np

# irisdf: a pandas DataFrame holding the iris data
iris_array = np.array(irisdf.iloc[:,:4].values)
print(iris_array[:5])
print(iris_array.shape)
The output shows the usual format and its shape:
[[5.1 3.5 1.4 0.2]
[4.9 3. 1.4 0.2]
[4.7 3.2 1.3 0.2]
[4.6 3.1 1.5 0.2]
[5. 3.6 1.4 0.2]]
(150, 4)
The following code alters the format:
nrows, ncols = iris_array.shape
iris_array = iris_array.reshape(nrows, ncols, 1)
print(iris_array[:5])
print(iris_array.shape)
The output of the above code shows the altered format and its shape:
[[[5.1]
[3.5]
[1.4]
[0.2]]
[[4.9]
[3. ]
[1.4]
[0.2]]
[[4.7]
[3.2]
[1.3]
[0.2]]
[[4.6]
[3.1]
[1.5]
[0.2]]
[[5. ]
[3.6]
[1.4]
[0.2]]]
(150, 4, 1)
This works well for Keras's Conv1D. For input_shape, (4, 1) is needed.
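For instance, a minimal Conv1D model consuming the reshaped iris array might look like this (a sketch; the filter count and the 3-class softmax head are illustrative):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Flatten, Dense

model = Sequential()
model.add(Conv1D(filters=8, kernel_size=2, activation='relu', input_shape=(4, 1)))
model.add(Flatten())
model.add(Dense(3, activation='softmax'))  # iris has 3 classes
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
# model.fit(iris_array, labels, epochs=10)  # labels: integer class ids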
Related
I am learning TensorFlow and Keras to implement an LSTM many-to-many model, where the length of the input sequence is equal to the length of the output sequence.
Sample Code:
Inputs:
voc_size = 10000
embed_dim = 64
lstm_units = 75
size_batch = 30
count_classes = 5
Model:
from tensorflow.keras.layers import (Bidirectional, LSTM,
                                     Dense, Embedding, TimeDistributed)
from tensorflow.keras import Sequential

def sample_build(embed_dim, voc_size, batch_size, lstm_units, count_classes):
    model = Sequential()
    model.add(Embedding(input_dim=voc_size,
                        output_dim=embed_dim, input_length=50))
    model.add(Bidirectional(LSTM(units=lstm_units, return_sequences=True),
                            merge_mode="ave"))
    model.add(Dense(200))
    model.add(TimeDistributed(Dense(count_classes + 1)))
    # Compile model
    model.compile(loss='categorical_crossentropy',
                  optimizer='rmsprop',
                  metrics=['accuracy'])
    model.summary()
    return model

sample_model = sample_build(embed_dim, voc_size,
                            size_batch, lstm_units,
                            count_classes)
I am having trouble understanding the shapes of the input and output of each layer. For example, the shape of the output of the Embedding layer is (BATCH_SIZE, time_steps, length_of_input), and in this case it is (30, 50, 64).
Similarly, the output shape of the Bidirectional LSTM layer is (30, 50, 75). This will be the input for the next Dense layer with 200 units. But the shape of the weight matrix of a Dense layer is (number of units in the current layer, number of units in the previous layer), which is (200, 75) in this case. So how does the matrix calculation happen between the 2D weights of the Dense layer and the 3D output of the Bidirectional layer? Any explanation of the shapes would be helpful.
Dense can operate on 3D input: effectively it flattens the input to shape (batch_size * time_steps, features), applies the dense layer, and then reshapes the result back to (batch_size, time_steps, units). Keras's documentation of the Dense layer says:
Note: If the input to the layer has a rank greater than 2, then Dense computes the dot product between the inputs and the kernel along the last axis of the inputs and axis 1 of the kernel (using tf.tensordot). For example, if input has dimensions (batch_size, d0, d1), then we create a kernel with shape (d1, units), and the kernel operates along axis 2 of the input, on every sub-tensor of shape (1, 1, d1) (there are batch_size * d0 such sub-tensors). The output in this case will have shape (batch_size, d0, units).
Another point regarding the output of the Embedding layer: as you said, it is indeed a 3D output, but the shape actually corresponds to (batch_size, input_length, output_dim), i.e. (30, 50, 64) here.
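You can verify the Dense behaviour with a quick shape check (numbers taken from the question):
import tensorflow as tf

x = tf.random.normal((30, 50, 75))   # output shape of the Bidirectional LSTM
y = tf.keras.layers.Dense(200)(x)    # kernel has shape (75, 200)
print(y.shape)                       # (30, 50, 200): Dense acts on the last axis only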
I am starting off my deep learning journey using the IMDB movie review dataset. I am not sure how the training data is loaded and the input_shape is specified.
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
If I understand it correctly, there are 25000 reviews of different movies, and the words are encoded as lists of integers.
train_data.shape
(25000,)
Since the reviews have different lengths, how is it possible to store such data in a matrix (i.e., 25,000 rows but a varying number of columns)?
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu', input_shape=(16,)))
model.add(layers.Dense(1, activation='sigmoid', input_shape=(16,)))
After reshaping the input to (25000, 10000) using one-hot encoding, why is the first argument of input_shape 10000 rather than 25000 (the number of samples), given that the Dense layer computes its outputs as relu(dot(w, inputs) + b)? ==> (25000, 10000) dot (10000, 16) = (25000, 16). And why don't we specify the input_shape as [None, 10000], as in TensorFlow Core?
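For reference, a minimal sketch of the multi-hot encoding described above (the helper name is illustrative):
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    # Multi-hot encode: mark which of the 10000 words occur in each review
    results = np.zeros((len(sequences), dimension))
    for i, seq in enumerate(sequences):
        results[i, seq] = 1.0
    return results

x_train = vectorize_sequences(train_data)  # shape (25000, 10000)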
You can pad using Keras's pad_sequences:
data = pad_sequences(data, maxlen=max_length, padding='post')
I also recommend using an Embedding layer:
model = Sequential()
model.add(Embedding(num_words, 32, input_length=max_length))
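Putting the two together, a minimal end-to-end sketch (num_words and max_length are assumed values; the Flatten/Dense head is an illustrative addition):
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense

num_words, max_length = 10000, 200  # assumed values

# Padding gives every review the same length, so the data fits in one matrix
x_train = pad_sequences(train_data, maxlen=max_length, padding='post')

model = Sequential()
model.add(Embedding(num_words, 32, input_length=max_length))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, train_labels, epochs=3, batch_size=32)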
I'm reading and testing the basic CNN example from the TensorFlow tutorial website. The model from the tutorial looks like this:
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
# 1.why do we need the next line ?
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
Two basic questions:
We are building a CNN network.
Why do we need the last layer (model.add(layers.Dense(64, activation='relu')))?
It is not part of the convolutional part of the network, and from my tests I'm getting the same results (accuracy) with or without this layer.
In the tutorial they wrote that they used softmax in the last layer:
"CIFAR has 10 output classes, so you use a final Dense layer with 10 outputs and a softmax activation"
but they didn't use softmax in their code.
I checked the documentation, and the default activation function is None, not softmax. So does the tutorial have a mistake, and softmax is not actually used?
Convolutional Neural Network (CNN)
A CNN consists of (conv-pool)×n - (flatten or global pool) - (Dense)×m, where the (conv-pool)×n part extracts features from a 2D signal and the (Dense)×m part selects features from the previous layers.
The output of the last conv layer is (4, 4, 64): 64 feature maps of size 4 × 4 (2D signals). We then flatten them to get a 4 × 4 × 64 = 1024-dim vector (alternatively, we could use global max/avg pooling to get a 64-dim vector). With flatten we have a 1024-dim vector but only 10 classes; projecting 1024 dims straight down to 10 drastically reduces the dimension in one step, which can lose important features. This is known as a 'representation bottleneck'. To avoid it, you can insert a Dense layer with, say, 64 neurons, which first projects the 1024-dim vector to a 64-dim vector and then from 64 dims to the 10-dim output. If you use global max/avg pooling, you can skip the additional Dense layer. In your case, the representation bottleneck is avoided.
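For comparison, a sketch of the global-pooling alternative mentioned above (a modified version of the tutorial's model, not the tutorial itself):
from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.GlobalAveragePooling2D())  # 64-dim vector, no 1024-dim flatten
model.add(layers.Dense(10))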
The tutorial is using
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
TensorFlow has an efficient implementation for computing the loss directly from the logits. This way, you need not use softmax in the layer: with from_logits=True, the loss behaves as if softmax had been applied.
But if you still wish to use softmax in the Dense layer, you can; in that case, use from_logits=False in compile(). However, the latter approach is less efficient, as it effectively does the work twice.
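A sketch of that alternative (replacing the model's final layer and recompiling):
model.add(layers.Dense(10, activation='softmax'))  # softmax in the layer itself
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])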
The purpose of a dense (fully connected) layer before the final dense layer is to provide weights, or 'votes', for selecting the most appropriate label in the final layer. In the example image (a cat classifier, image not reproduced here), a few more neurons help select the label 'cat'.
Check this link out for a deeper understanding of fc layers: https://missinglink.ai/guides/convolutional-neural-networks/fully-connected-layers-convolutional-neural-networks-complete-guide/
A softmax layer maps the predictions (logits) into a more interpretable format, where each value in the tensor lies between 0 and 1 and the values sum to 1:
[2.0, 0.2, 0.9, 0.2] # Before applying softmax (illustrative logits)
[0.6, 0.1, 0.2, 0.1] # After applying softmax
Note: the typical way of using the predictions is taking the index of the highest value in the tensor:
import numpy as np
preds = model.predict(batch_data)
highest_val = np.argmax(preds) # returns an index, in this case 0
Let us say that I build an extremely simple CNN with Keras to classify vectors.
My input (X_train) is a matrix in which each row is a vector and each column is a feature. My input labels (y_train) are a matrix where each row is a one-hot encoded vector. This is a binary classifier.
My CNN is built as follows:
model = Sequential()
model.add(Conv1D(64, 3))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dense(2))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32)
But when I try to run this code, I get back this error message:
Input 0 is incompatible with layer conv1d_23: expected ndim=3, found
ndim=2
Why would Keras expect 3 dims? One dim for samples and one for features should be enough. And more importantly, how can I fix this?
X_train is supposed to have the shape (batch_size, steps, input_dim); see the documentation. It seems like you are missing one of the dimensions.
I would guess input_dim in your case is 1, and that is why it is missing. If so, change the
model.fit
line to
model.fit(tf.expand_dims(X_train, -1), y_train, batch_size=32)
Your code is not a minimal working example, so I am not able to verify if that is the only problem, but this should hopefully fix your current error message.
A Conv1D layer expects an input with shape (samples, width, channels), so this does not match your input data, producing an error.
The convolution operation is done along the width dimension, so assuming you want to convolve over what you call features, you should reshape your data to add a dummy channels dimension of size one:
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
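Putting both fixes together, a minimal runnable sketch (the sample and feature counts are placeholders):
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Activation, Flatten, Dense

X_train = np.random.rand(100, 20)                   # 100 samples, 20 features
y_train = np.eye(2)[np.random.randint(0, 2, 100)]   # one-hot binary labels

# Add the dummy channels dimension expected by Conv1D
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))

model = Sequential()
model.add(Conv1D(64, 3, input_shape=(20, 1)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(2, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32)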
I have the following network:
model = Sequential()
model.add(Embedding(400000, 100, weights=[emb], input_length=12, trainable=False))
model.add(Conv2D(256,(2,2),activation='relu'))
The output from the embedding layer has shape (batchSize, 12, 100). The Conv2D layer requires a 4D input of shape (batchSize, height, width, channels), and I get the following error:
Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=3
So, how can I expand the output from the embedding layer to make it proper for the Conv2D layer?
I'm using Keras with Tensorflow as the back end.
Adding a Reshape layer should be the way to go: https://keras.io/layers/core/#reshape
Depending on the concrete situation, Conv1D could also work.
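A sketch of the Reshape approach (the weights=[emb] argument is omitted here because emb is not defined in this snippet):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Reshape, Conv2D

model = Sequential()
model.add(Embedding(400000, 100, input_length=12))
model.add(Reshape((12, 100, 1)))  # append a channels dimension for Conv2D
model.add(Conv2D(256, (2, 2), activation='relu'))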
I managed to add another dimension with the following piece of code:
from keras.backend import expand_dims
from keras.layers import Lambda

model = Sequential()
model.add(Embedding(400000, 100, weights=[emb], input_length=12, trainable=False))
model.add(Lambda(lambda x: expand_dims(x, 3)))  # append a channels dimension
model.add(Conv2D(256, (2, 2), activation='relu'))