from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.metrics import Precision, Recall, BinaryAccuracy

model = Sequential()
model.add(Conv2D(16, (3, 3), 1, activation='relu', input_shape=(256, 256, 3)))
model.add(MaxPooling2D())
model.add(Conv2D(32, (3, 3), 1, activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(16, (3, 3), 1, activation='relu'))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(16, activation='softmax'))   # 16-way classification head

pre = Precision()
re = Recall()
ba = BinaryAccuracy()

# `test` is the held-out tf.data.Dataset from the training pipeline
for batch in test.as_numpy_iterator():
    X, y = batch
    yhat = model.predict(X)
    print(X.shape, y.shape, yhat.shape)
    pre.update_state(y, yhat)   # <- this is the line that raises the error
    re.update_state(y, yhat)
    ba.update_state(y, yhat)
I wrote the program above. I am able to train the model, but at the marked line I get the error mentioned above. Can anyone help me with this?
I am trying to classify images into 16 different classes; training works, but the error occurs at the line above.
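The thread does not include a fix. For context, here is a minimal sketch of how a multi-class metric could be driven in the same loop, assuming `y` holds integer class ids in [0, 16) while `yhat` is the (batch, 16) softmax output; BinaryAccuracy/Precision/Recall expect binary-shaped targets matching the prediction shape, which is the likely source of the complaint (this is an assumption about the dataset, not something confirmed in the question):

from tensorflow.keras.metrics import SparseCategoricalAccuracy

acc = SparseCategoricalAccuracy()
for batch in test.as_numpy_iterator():
    X, y = batch
    yhat = model.predict(X)      # (batch, 16) class probabilities
    acc.update_state(y, yhat)    # integer labels vs. per-class probabilities
print(acc.result().numpy())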
I've been trying to train a model on AWS Sagemaker as I found that my computer is no longer powerful enough to train my model in a reasonable amount of time. However, when I tried to load the model (after copy pasting the code from my computer) I got an unexpected error.
After tinkering around for a little bit, I found that the very first Conv2D layer has a different output shape than it was on my computer.
Sagemaker output dimensions:
(None, 128, 498, 3)
Expected output dimensions:
(None, 498, 498, 3)
My code is below:
import tensorflow as tf
from tensorflow import keras
model = keras.models.Sequential()
model.add(keras.Input(shape = (500,500,3)))
model.add(keras.layers.Conv2D(filters=128, kernel_size = (3,3), activation='relu'))
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0001),
              loss=keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.summary()
How can I fix this?
I came here because I had the same problem. I found the solution, but I am still really confused about it. I just want to mention that I use the same TensorFlow version locally and on SageMaker (2.10), and exactly the same code on both.
If you go to https://keras.io/api/layers/convolution_layers/convolution2d/, it states:
"Output shape
4+D tensor with shape: batch_shape + (filters, new_rows, new_cols) if data_format='channels_first' or 4+D tensor with shape: batch_shape + (new_rows, new_cols, filters) if data_format='channels_last'. rows and cols values might have changed due to padding."
So I forced the SageMaker version to `data_format='channels_last'`.
Now both versions, the local one and the AWS one, are consistent.
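For reference, a minimal sketch of how the data format can be pinned down, either globally or per layer, reusing the model from the question (`data_format` is the documented Conv2D parameter, and `set_image_data_format` is the global Keras backend switch):

from tensorflow import keras

keras.backend.set_image_data_format('channels_last')   # force the global default

model = keras.models.Sequential()
model.add(keras.Input(shape=(500, 500, 3)))
model.add(keras.layers.Conv2D(filters=128, kernel_size=(3, 3), activation='relu',
                              data_format='channels_last'))   # or force it per layer
model.summary()   # rows/cols now precede the channel axis: (None, 498, 498, 128)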
I have a regression problem with around 20 features, and the expected output is a price (a float).
Can I use a convolutional neural network here to predict the prices? I tried both 1D and 2D convolutions.
But I get the errors below.
For 2D, the error is:
ValueError: Input 0 of layer sequential_4 is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: (None, 18)
For 1D, the error is:
ValueError: Input 0 of layer sequential_4 is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: (None, 18)
Can I use a CNN for data other than images? What am I missing here? Please help.
Below is the code:
model2 = tf.keras.Sequential()
model2.add(tf.keras.layers.Conv1D(32,kernel_size=(3),strides=1, activation='relu'))
model2.add(tf.keras.layers.BatchNormalization())
model2.add(tf.keras.layers.Conv1D(64, kernel_size=(3), strides=(2)))
model2.add(tf.keras.layers.ReLU())
model2.add(tf.keras.layers.BatchNormalization())
model2.add(tf.keras.layers.Dense(1, activation='linear'))
model2.compile(optimizer='adam', loss='mean_absolute_error', metrics=['mean_absolute_error'])
I found the issue. The problem was in model.fit: instead of calling model.fit(X_train, y_train, val=(X_test, y_test)), I had called model.fit((X_train, y_train), val=(X_test, y_test)). Because of the extra parentheses, the model tried to treat the call as fit_generator instead of fit.
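In other words, a short before/after of the fit call (validation_data is the actual Keras keyword; "val" above is shorthand):

# Wrong: the extra parentheses pass a single tuple as x
# model2.fit((X_train, y_train), validation_data=(X_test, y_test))

# Right: features and targets as separate positional arguments
model2.fit(X_train, y_train, validation_data=(X_test, y_test))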
I am relatively new to ML and am building a CNN+LSTM model for video classification, but I ran into an error while trying to run model.summary().
A bit of background on the model itself: I am passing (48, 48, 1) grayscale images through with a batch size of 15 using ImageDataGenerator. It is binary classification (either a or b).
I am encountering one error with model.summary(), and when trying to fix it, I am running into more errors.
Here is the code for my model:
from tensorflow.keras import Sequential
from tensorflow.keras.layers import *
cnn = Sequential()
num_timesteps = 2
# 1st conv layer
cnn.add(Conv2D(64,(3,3), padding='same'))
cnn.add(BatchNormalization())
cnn.add(Activation('relu'))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Dropout(0.5))
# 2nd conv layer
cnn.add(Conv2D(128,(5,5), padding='same'))
cnn.add(BatchNormalization())
cnn.add(Activation('relu'))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Dropout(0.5))
# 3rd conv layer
cnn.add(Conv2D(512,(3,3), padding='same'))
cnn.add(BatchNormalization())
cnn.add(Activation('relu'))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Dropout(0.5))
# 4th conv layer
cnn.add(Conv2D(512,(3,3), padding='same'))
cnn.add(BatchNormalization())
cnn.add(Activation('relu'))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Dropout(0.5))
# flatten
cnn.add(Flatten())
# fully connected 1
cnn.add(Dense(256))
cnn.add(BatchNormalization())
cnn.add(Activation('relu'))
cnn.add(Dropout(0.5))
#fully connected 2
cnn.add(Dense(512))
cnn.add(BatchNormalization())
cnn.add(Activation('relu'))
cnn.add(Dropout(0.5))
model = Sequential()
model.add(Reshape((1, 48, 48, 1)))
model.add(TimeDistributed(cnn, input_shape=(num_timesteps, 48, 48, 1)))
model.add(LSTM(num_timesteps, return_sequences=True))
model.add(Dropout(0.5))
model.add(Dense(1, activation='softmax'))
#model.build(input_shape)
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
The error I am receiving is:
This model has not yet been built. Build the model first by calling build() or calling fit() with some data, or specify an input_shape argument in the first layer(s) for automatic build.
I have specified the input for the model, so I am confused as to why I am getting this error.
I have tried to fix this error by doing the following things:
I have added model.build(), but that only brings up a subsequent error later on when I try to run model.predict(). It states "WARNING:tensorflow:Sequential models without an input_shape passed to the first layer cannot reload their optimizer state. As a result, your model is starting with a freshly initialized optimizer", before giving the following error:
TypeError: Value passed to parameter 'input' has DataType uint8 not in list of allowed values: float16, bfloat16, float32, float64, int32
I have also tried removing the reshape layer which fixes the model.summary() error, but that gives a different error when running model.fit():
Error when checking input: expected time_distributed_108_input to have 5 dimensions, but got array with shape (15, 48, 48, 1)
I am also confused as to why I am getting an error here, as when looking at the output shape in model.summary() there are clearly 5 dimensions (None, 2, 48, 48, 64). To my understanding, "None" gets replaced with the batch size, which is 15.
So my understanding of all of these errors is that by fixing the first error with the input_shape in model.summary(), or the error with model.fit(), the rest of the errors will be fixed.
Thank you for taking the time to read through this, and any help with fixing these errors would be greatly appreciated!
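For what it's worth, the "not yet been built" message can be silenced in isolation by declaring an input shape on the first layer, as the error text itself suggests. A minimal sketch (this addresses only the build/summary error, not the later dtype or fit-shape errors, and (48, 48, 1) is taken from the per-sample frame size described above):

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Reshape

model = Sequential()
model.add(Reshape((1, 48, 48, 1), input_shape=(48, 48, 1)))   # per-sample grayscale frame
model.summary()   # builds and prints without an explicit build() call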
Hello, I am trying to get an output array of 7 classes, but when I run my code it says that it expects my output labels to have some other shape. Here is my code -
def make_model(self):
    self.model.add(InceptionV3(include_top=False,
                               input_shape=(self.WIDTH, self.HEIGHT, 3),
                               weights="imagenet"))
    self.model.add(Dense(7, activation='softmax'))
    self.model.layers[0].trainable = False
My model compilation and fitting part -
def train(self):
    self.model.compile(optimizer=self.optimizer, loss='mse', metrics=['accuracy'])
    self.model.fit(x=x, y=y, batch_size=64,
                   validation_split=0.15, shuffle=True, epochs=self.epochs,
                   callbacks=[self.tensorboard, self.reducelr])
I get the error -
File "model.py", line 60, in train
callbacks=[self.tensorboard, self.reducelr])
ValueError: A target array with shape (23639, 7) was passed for an output of shape (None, 6, 13, 7) while using as loss `mean_squared_error`. This loss expects targets to have the same shape as the output.
Now here it is saying that it expected (None, 6, 13, 7); however, I gave it labels of shape (23639, 7).
We can clearly see that in self.model.add(Dense(7, activation='softmax')) I have specified 7 as the number of output categories.
Here is the model summary -
So can someone tell me what is wrong here?
By the way, I did try using categorical_crossentropy to see if it makes a difference, but it didn't.
In case you wanted the full code -
Full Code
The problem is in the output of InceptionV3: it returns 4D feature maps, so you need to reduce the dimensionality before the final Dense layer in order to match the target dimensionality (2D). You can do this with a Flatten or GlobalPooling layer.
If yours is a classification problem, I also recommend using categorical_crossentropy (if you have one-hot encoded labels) or sparse_categorical_crossentropy (if you have integer-encoded labels); mse is suited for regression problems.
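A minimal sketch of that suggestion, adapted from the asker's make_model/train methods (GlobalAveragePooling2D is one of the pooling options mentioned above, and categorical_crossentropy assumes the (23639, 7) targets are one-hot encoded):

from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D

def make_model(self):
    self.model.add(InceptionV3(include_top=False,
                               input_shape=(self.WIDTH, self.HEIGHT, 3),
                               weights="imagenet"))
    self.model.add(GlobalAveragePooling2D())     # (None, 6, 13, 2048) -> (None, 2048)
    self.model.add(Dense(7, activation='softmax'))
    self.model.layers[0].trainable = False

def train(self):
    self.model.compile(optimizer=self.optimizer,
                       loss='categorical_crossentropy', metrics=['accuracy'])
    self.model.fit(x=x, y=y, batch_size=64,
                   validation_split=0.15, shuffle=True, epochs=self.epochs,
                   callbacks=[self.tensorboard, self.reducelr])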
Let us say that I build an extremely simple CNN with Keras to classify vectors.
My input (X_train) is a matrix in which each row is a vector and each column is a feature. My input labels (y_train) are a matrix where each row is a one-hot encoded vector. This is a binary classifier.
My CNN is built as follows:
model = Sequential()
model.add(Conv1D(64,3))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dense(2))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32)
But when I try to run this code, I get back this error message:
Input 0 is incompatible with layer conv1d_23: expected ndim=3, found ndim=2
Why would Keras expect 3 dims? One dim for samples, and one for features. And more importantly, how can I fix this?
X_train is supposed to have the shape (batch_size, steps, input_dim); see the documentation. It seems like you are missing one of the dimensions.
I would guess input_dim in your case is 1, and that is why it is missing. If so, change the model.fit line to
model.fit(tf.expand_dims(X_train,-1), y_train,batch_size = 32)
Your code is not a minimal working example, so I am not able to verify if that is the only problem, but this should hopefully fix your current error message.
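A quick sanity check of that expand_dims call, with a hypothetical stand-in array (the (32, 18) shape just mirrors the error message above):

import numpy as np
import tensorflow as tf

X_train = np.random.rand(32, 18).astype("float32")   # stand-in for the real features
X_expanded = tf.expand_dims(X_train, -1)
print(X_train.shape, "->", X_expanded.shape)          # (32, 18) -> (32, 18, 1)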
A Conv1D layer expects an input with shape (samples, width, channels), so this does not match your input data, producing an error.
The convolution operation is done on the width dimension, so assuming that you want to do convolution on what you call features, you should reshape your data to add a dummy channels dimension with a value of one:
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
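Putting it together, a minimal sketch of the question's model consuming the reshaped data (the input_shape=(n_features, 1) argument and the random stand-in data are assumptions for illustration, not the asker's actual dataset):

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Activation, Flatten, Dense

n_samples, n_features = 100, 20
X_train = np.random.rand(n_samples, n_features).astype("float32")
y_train = np.eye(2)[np.random.randint(0, 2, n_samples)]     # one-hot binary labels

X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))   # add channels dim

model = Sequential()
model.add(Conv1D(64, 3, input_shape=(n_features, 1)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(2, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32)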