Input shape for Conv1D signal data - tensorflow

This is my signal data:
The length of each sample is 64, and there are 49572 training samples in total.
length = len(x_train)
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv1D(32, 3, activation='relu', input_shape=(length, 64)),
    tf.keras.layers.MaxPooling1D(3),
    tf.keras.layers.Conv1D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling1D(3),
    tf.keras.layers.Conv1D(128, 3, activation='relu'),
    tf.keras.layers.MaxPooling1D(3),
    tf.keras.layers.Conv1D(128, 3, activation='relu'),
    tf.keras.layers.MaxPooling1D(3),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(29, activation='softmax')
])
I want to make a CNN model for signal data, so I use Conv1D.
How do I work out the input_shape from my data?

From the keras Conv1D documentation:
When using this layer as the first layer in a model, provide an
input_shape argument (tuple of integers or None, e.g. (10, 128) for
sequences of 10 vectors of 128-dimensional vectors, or (None, 128) for
variable-length sequences of 128-dimensional vectors).
From your image, it seems like your data is a simple 1-dimensional signal; that means the last (channel) dimension should equal 1 in your case, i.e. input_shape=(64, 1).
Think of that last dimension as the color channel of an image in the 2D convolution case. Black-and-white images have only a single color channel, therefore width x height x 1, whereas RGB images have 3 color channels, hence width x height x 3.
Similarly, if you work with time series and 1D convolutions, you may have more than one signal, e.g. temperature + atmospheric pressure + humidity measured once per minute throughout the day. Then your signal would be of shape 1440 x 3.
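To make that concrete for the original question: input_shape describes a single sample, not the whole dataset, so len(x_train) should not appear in it. A minimal sketch (with random stand-in data, and a shallower conv/pool stack than the original, since four MaxPooling1D(3) stages would shrink a 64-step input below zero):
import numpy as np
import tensorflow as tf

# Stand-in for the 49572 signals of length 64.
x_train = np.random.rand(49572, 64).astype("float32")
# Add the channel dimension: (samples, timesteps, channels).
x_train = x_train.reshape(-1, 64, 1)

model = tf.keras.models.Sequential([
    # input_shape describes one sample: 64 timesteps, 1 channel.
    tf.keras.layers.Conv1D(32, 3, activation='relu', input_shape=(64, 1)),
    tf.keras.layers.MaxPooling1D(3),
    tf.keras.layers.Conv1D(64, 3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(29, activation='softmax'),
])
model.summary()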

Related

What are the values of the kernel matrix?

When using a CNN with TensorFlow, what does the convolution matrix look like (what are the kernel values)?
Look at this basic example of a CNN:
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
What does the convolution matrix look like? What are the values of the 3x3 matrix?
In the example above, we use 3 Conv2D layers (each layer uses a 3x3 convolution matrix).
Are those 3 matrices the same, or will they have different values?
Each convolution layer has weights and a bias, which can be inspected using:
# For layer 1 <conv>: kernel weights
model.layers[0].get_weights()[0]
# For layer 1 <conv>: bias
model.layers[0].get_weights()[1]
# For layer 2 <pool>: no weight or bias terms, so an empty list is returned
model.layers[1].get_weights()
# and so on...
The conv kernel is a 4D tensor of shape (filter_height × filter_width × in_channels × out_channels), which in your case is (3, 3, 3, 32).
Each filter will have different values; nothing is shared between them.
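A minimal runnable sketch of that inspection (the model needs an input_shape so the weights are built before get_weights() is called):
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
])

for i, layer in enumerate(model.layers):
    # Pooling layers return an empty list; conv layers return [kernel, bias].
    print(i, layer.name, [w.shape for w in layer.get_weights()])

# Prints roughly:
# 0 conv2d        [(3, 3, 3, 32), (32,)]
# 1 max_pooling2d []
# 2 conv2d_1      [(3, 3, 32, 64), (64,)]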

Input and output dimensions in a Keras Dense layer

I am trying to implement a dense layer in Keras. The input is an EEG recording using 2 channels; each consists of a vector of 8 points, and the total number of training points is 17. The y is also 17 points.
I used
x=x.reshape(17,2,8,1)
y=y.reshape(17,1,1,1)
model.add(Dense(1, input_shape=(2,8,1), activation='relu'))
print(model.summary())
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
print(model.compile)
model.fit(x, y, batch_size = 17,epochs=500, verbose=1)
but I get the following error:
Error when checking target: expected dense_57 to have shape (2, 8, 1) but got array with shape (17, 1, 1)
Since the Dense layer operates on the last axis and has output dimension 1, it would expect y to be of shape (2, 8, 1). An easy fix would be to do the following:
x = x.reshape(17, 16)
y = y.reshape(17, 1)
model.add(Dense(1, input_shape=(16,), activation='relu'))
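As a self-contained sketch of that fix (with random stand-in data; 'mse' is swapped in for the loss here, since sparse_categorical_crossentropy does not match a single relu output):
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Random stand-ins for the 17 EEG samples (2 channels x 8 points each).
x = np.random.rand(17, 2, 8, 1).reshape(17, 16)
y = np.random.rand(17, 1, 1, 1).reshape(17, 1)

model = Sequential()
model.add(Dense(1, input_shape=(16,), activation='relu'))
model.compile(loss='mse', optimizer='adam')  # loss is an assumption, see above
model.fit(x, y, batch_size=17, epochs=5, verbose=1)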

How does tf.keras.layers.Conv2DTranspose behave with stride and padding?

While a convolution layer in TensorFlow has a complete description (https://www.tensorflow.org/api_guides/python/nn#Convolution), transposed convolution does not have one.
Although tf.keras.layers.Conv2DTranspose has a reference to https://arxiv.org/pdf/1603.07285.pdf, it is not complete.
Is there any documentation that describes how tf.keras.layers.Conv2DTranspose behaves?
Conv2DTranspose is often used for upsampling an image/feature map. The code below uses a 1x1 filter kernel to show how the input is padded with zeros. The code is for TensorFlow 2.0; with TensorFlow 1.x, add tf.enable_eager_execution().
import tensorflow as tf
from tensorflow.keras import layers

data = tf.ones([2, 2], tf.float32, name="input_data")
input_layer = tf.reshape(data, [-1, 2, 2, 1])  # NHWC: (1, 2, 2, 1)
transpose2d = layers.Conv2DTranspose(1, (1, 1), kernel_initializer='ones',
                                     strides=(2, 2), padding='valid',
                                     use_bias=False)
x = transpose2d(input_layer)
print(x)
The input is
1,1
1,1
The x is
1,0,1,0
0,0,0,0
1,0,1,0
0,0,0,0
You can change the stride value to see the difference.
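For example, with strides=(1, 1) and the same 1x1 'ones' kernel, no zeros are inserted and the output equals the input (reusing input_layer from the snippet above):
# Stride 1: no zero insertion, output is identical to the 2x2 input.
transpose_s1 = layers.Conv2DTranspose(1, (1, 1), kernel_initializer='ones',
                                      strides=(1, 1), padding='valid',
                                      use_bias=False)
print(transpose_s1(input_layer))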

tensorflow conv1d kernel size dimensionality error

When taking the one dimensional convolution of a one dimensional array, I receive an error which suggests my second dimension is not big enough.
Here is the overview of the relevant code:
inputs_ = tf.placeholder(tf.float32 ,(None, 45), name='inputs')
x1 = tf.expand_dims(inputs_, axis=1)
x1 = tf.layers.conv1d(x1, filters=64, kernel_size=1, strides=1, padding='valid')
I am hoping to increase the kernel size to 3 so that neighbouring points also influence the output of each input node; however, I get the following error:
ValueError: Negative dimension size caused by subtracting 3 from 1 for
'conv1d_4/convolution/Conv2D' (op: 'Conv2D') with input shapes:
[?,1,1,45], [1,3,45,64].
My guess is that TensorFlow is expecting me to reshape my input into two dimensions so that some depth can be used for the kernel multiplication. My question is: why is this the case, and what should I expect for the layer's behaviour based on the input dimensions?
You need to add a channel dimension as the last dimension, even if you only have one channel.
So this code works:
inputs_ = tf.placeholder(tf.float32 ,(None, 45), name='inputs')
x1 = tf.expand_dims(inputs_, axis=-1)
x1 = tf.layers.conv1d(x1, filters=64, kernel_size=3, strides=1, padding='valid')
So basically the error was caused because your tensor looked like it had a width of 1 with 45 channels, and TensorFlow was trying to convolve with a kernel of size 3 along a dimension of size 1.
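tf.layers.conv1d is the old TF 1.x API; for reference, a rough tf.keras (TF 2.x) sketch of the same fix:
import tensorflow as tf

x = tf.random.normal([8, 45])    # a batch of 8 signals of length 45
x = tf.expand_dims(x, axis=-1)   # -> (8, 45, 1): channel dimension last
conv = tf.keras.layers.Conv1D(filters=64, kernel_size=3, strides=1,
                              padding='valid')
print(conv(x).shape)             # (8, 43, 64), since 45 - 3 + 1 = 43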

How to calculate input_dim for a keras sequential model?

Keras Dense layer needs an input_dim or input_shape to be specified. What value do I put in there?
My input is a matrix of 1,000,000 rows and only 3 columns. My output is 1,600 classes.
What do I put there?
input_dim is the number of dimensions of the features, which in your case is just 3. The equivalent notation for input_shape, which is an actual dimensional shape, is (3,).
In your case, let's assume x and y (the target variable) look as follows after feature engineering:
x.shape  # (1000000, 3)
y.shape  # (1000000, 1600)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# As the first layer in a sequential model:
model = Sequential()
model.add(Dense(32, input_shape=(x.shape[1],)))  # Input layer
# Now the model will take as input arrays of shape (*, 3)
# and output arrays of shape (*, 32)
...
...
model.add(Dense(y.shape[1], activation='softmax'))  # Output layer
y.shape[1] = 1600 is the number of outputs, which is the number of classes you have, since you are dealing with classification.
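To round the sketch off, compiling and fitting could look like this (categorical_crossentropy is an assumption, chosen because y is already one-hot encoded with 1600 columns):
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.fit(x, y, epochs=10, batch_size=1024)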
X = dataset.iloc[:, 3:13]
meaning X takes all the rows and the 3rd through the 12th columns (13th column exclusive), i.e. 10 columns. We will also have an X0 parameter given to the neural network, so the total number of inputs becomes 10 + 1 = 11.
Dense(input_dim = 11, activation = 'relu', kernel_initializer = 'he_uniform')