I am new to Python and I am trying to create a model that measures how similar movies are based on their descriptions. The steps I have followed so far are:
1. Turn each movie description into a vector of 100 * (maximum number of words in a description) values using Word2Vec. This results in a 21,300-value vector for each movie description.
2. Create a deep convolutional autoencoder that tries to compress each vector (and hopefully extract meaning from it).
The first step was successful, but I am still struggling with the autoencoder. Here is my code so far:
import tensorflow as tf
from tensorflow import keras

# Encoder
encoder_input = keras.Input(shape=(21300,), name='sum')
encoded = tf.keras.layers.Reshape((150, 142, 1))(encoder_input)
x = tf.keras.layers.Conv2D(128, (3, 3), activation='relu', padding='same')(encoded)
x = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(x)   # (75, 71, 128)
x = tf.keras.layers.Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(x)   # (38, 36, 64)
x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(x)   # (19, 18, 32)
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(x)   # (10, 9, 16)
x = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = tf.keras.layers.MaxPooling2D((2, 2), padding='same')(x)   # (5, 5, 8)
x = tf.keras.layers.Flatten()(x)
encoder_output = keras.layers.Dense(units=90, activation='relu', name='encoder')(x)

# Decoder
x = tf.keras.layers.Reshape((10, 9, 1))(encoder_output)
x = tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = tf.keras.layers.UpSampling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(x)
x = tf.keras.layers.UpSampling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(x)
x = tf.keras.layers.UpSampling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(64, (3, 3), activation='relu')(x)
x = tf.keras.layers.UpSampling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(128, (3, 3), activation='relu')(x)
x = tf.keras.layers.UpSampling2D((2, 2))(x)
decoder_output = keras.layers.Conv2D(1, (3, 3), activation='relu', padding='same')(x)

autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')
opt = tf.keras.optimizers.Adam(learning_rate=0.001, decay=1e-6)
autoencoder.compile(opt, loss='mse')

print("STARTING FITTING")
history = autoencoder.fit(movies_vector, movies_vector, epochs=25)
print("ENCODER READY")

# Using the middle layer as the encoder
encoder = keras.Model(inputs=autoencoder.input,
                      outputs=autoencoder.get_layer('encoder').output)
Running this code gives me the following error:
required broadcastable shapes [[node mean_squared_error/SquaredDifference (defined at tmp/ipykernel_52/3425712667.py:119) ]] [Op:__inference_train_function_1568]
I have two questions:
1. How can I fix this error?
2. How can I improve my autoencoder so that I can use the compressed vectors to test for movie similarity?
The output of your model is (batch_size, 260, 228, 1), while your targets appear to be (batch_size, 21300). You can solve this by either adding a tf.keras.layers.Flatten() layer to the end of your model, or by not flattening your input in the first place.
You probably should not be using 2D convolutions at all: in most text embeddings there is no spatial or temporal correlation between adjacent feature dimensions, so a 2D kernel has nothing meaningful to exploit. You should be able to safely reshape to (150, 142) rather than (150, 142, 1) and use 1D convolution, pooling, and upsampling layers instead.
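For illustration, here is a minimal sketch of that 1D approach. It keeps the (150, 142) reshape from above and ends with a Flatten so the output matches the (21300,) targets; movies_vector is assumed to be the (num_movies, 21300) array from the question, and the pool sizes (2 and 5) were chosen only because they divide 150 cleanly:

import numpy as np
import tensorflow as tf
from tensorflow import keras

encoder_input = keras.Input(shape=(21300,))
x = keras.layers.Reshape((150, 142))(encoder_input)           # 150 positions x 142 channels
x = keras.layers.Conv1D(64, 3, activation='relu', padding='same')(x)
x = keras.layers.MaxPooling1D(2)(x)                           # (75, 64)
x = keras.layers.Conv1D(32, 3, activation='relu', padding='same')(x)
x = keras.layers.MaxPooling1D(5)(x)                           # (15, 32)
x = keras.layers.Flatten()(x)
encoder_output = keras.layers.Dense(90, activation='relu', name='encoder')(x)

x = keras.layers.Dense(15 * 32, activation='relu')(encoder_output)
x = keras.layers.Reshape((15, 32))(x)
x = keras.layers.UpSampling1D(5)(x)                           # (75, 32)
x = keras.layers.Conv1D(64, 3, activation='relu', padding='same')(x)
x = keras.layers.UpSampling1D(2)(x)                           # (150, 64)
x = keras.layers.Conv1D(142, 3, activation='linear', padding='same')(x)
decoder_output = keras.layers.Flatten()(x)                    # (21300,) -- matches the targets

autoencoder = keras.Model(encoder_input, decoder_output)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(movies_vector, movies_vector, epochs=25)

For question 2, once this trains, one common choice is to compare movies by the cosine similarity of their 90-dimensional codes:

encoder = keras.Model(autoencoder.input, autoencoder.get_layer('encoder').output)
codes = encoder.predict(movies_vector)
codes = codes / (np.linalg.norm(codes, axis=1, keepdims=True) + 1e-9)
similarity = codes @ codes.T          # similarity[i, j] for movies i and j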
I want to train an autoencoder for GPR (ground-penetrating radar) investigations.
The input data dimension is 149x8. A plain deep (dense) autoencoder works fine:
from keras.layers import Input, Dense
from keras.models import Model
from keras import optimizers

input_img = Input(shape=(8,))
encoded1 = Dense(8, activation='relu')(input_img)
encoded2 = Dense(4, activation='relu')(encoded1)
encoded3 = Dense(2, activation='relu')(encoded2)
decoded1 = Dense(2, activation='relu')(encoded3)
decoded2 = Dense(4, activation='relu')(decoded1)
decoded3 = Dense(8, activation='relu')(decoded2)
decoded = Dense(8, activation='linear')(decoded3)

autoencoder = Model(input_img, decoded)
opt = optimizers.Adam(learning_rate=0.001)
autoencoder.compile(optimizer=opt, loss='mse')
autoencoder.summary()
But when I try a convolutional autoencoder for the same input, it gives the error `ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=2`.
Can anybody suggest how to overcome this problem? My code is:
from keras.layers import Input
from keras.models import Model
from keras import layers, optimizers

input_img = Input(shape=(8,))
x = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = layers.MaxPooling2D((2, 2), padding='same')(x)

x = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(16, (3, 3), activation='relu')(x)
x = layers.UpSampling2D((2, 2))(x)
decoded = layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(input_img, decoded)
opt = optimizers.Adam(learning_rate=0.001)
autoencoder.compile(optimizer=opt, loss='mse')
autoencoder.summary()
Wrong input shape:
This happens because you pass input of shape (8,); with the batch dimension TensorFlow adds, the tensor is only 2-dimensional (ndim=2), while Conv2D expects ndim=4: three dimensions for the image (height, width, channels) plus one for the batch size, e.g.
input_shape=(number_of_rows, 28, 28, 1)
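Concretely, for this 149x8 dataset one option (a sketch, assuming the 149 rows are independent samples with 8 features each, and `data` is the hypothetical (149, 8) array) is to add a channel axis and switch to 1D convolutions, which accept (batch, steps, channels) input:

import numpy as np
from keras.layers import Input, Conv1D, MaxPooling1D, UpSampling1D
from keras.models import Model

X = data.reshape(-1, 8, 1)        # data: the (149, 8) array; add a channel axis

input_sig = Input(shape=(8, 1))
x = Conv1D(16, 3, activation='relu', padding='same')(input_sig)
x = MaxPooling1D(2, padding='same')(x)            # (4, 16)
x = Conv1D(8, 3, activation='relu', padding='same')(x)
encoded = MaxPooling1D(2, padding='same')(x)      # (2, 8) bottleneck

x = Conv1D(8, 3, activation='relu', padding='same')(encoded)
x = UpSampling1D(2)(x)                            # (4, 8)
x = Conv1D(16, 3, activation='relu', padding='same')(x)
x = UpSampling1D(2)(x)                            # (8, 16)
decoded = Conv1D(1, 3, activation='linear', padding='same')(x)

autoencoder = Model(input_sig, decoded)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(X, X, epochs=100)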
I tried to develop an FCN-16 model in Keras, initializing the weights with those of a similar FCN-16 model.
def FCN8(nClasses, input_height=256, input_width=256):
    ## input_height and width must be divisible by 32 because maxpooling with filter size (2,2)
    ## is applied 5 times, which makes the feature map 2^5 = 32 times smaller
    assert input_height % 32 == 0
    assert input_width % 32 == 0

    IMAGE_ORDERING = "channels_last"
    img_input = Input(shape=(input_height, input_width, 3))

    # Block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='conv1_1', data_format=IMAGE_ORDERING)(img_input)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='conv1_2', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool', data_format=IMAGE_ORDERING)(x)
    f1 = x

    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='conv2_1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='conv2_2', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool', data_format=IMAGE_ORDERING)(x)
    f2 = x

    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='conv3_1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='conv3_2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='conv3_3', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool', data_format=IMAGE_ORDERING)(x)
    pool3 = x

    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv4_1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv4_2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv4_3', data_format=IMAGE_ORDERING)(x)
    pool4 = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool', data_format=IMAGE_ORDERING)(x)  ## (None, 14, 14, 512)

    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv5_1', data_format=IMAGE_ORDERING)(pool4)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv5_2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv5_3', data_format=IMAGE_ORDERING)(x)
    pool5 = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool', data_format=IMAGE_ORDERING)(x)

    # Fully convolutional head
    n = 4096
    o = Conv2D(n, (7, 7), activation='relu', padding='same', name="fc6", data_format=IMAGE_ORDERING)(pool5)
    conv7 = Conv2D(n, (1, 1), activation='relu', padding='same', name="fc7", data_format=IMAGE_ORDERING)(o)
    conv7 = Conv2D(nClasses, (1, 1), activation='relu', padding='same', name="conv7_1", data_format=IMAGE_ORDERING)(conv7)
    conv7_4 = Conv2DTranspose(nClasses, kernel_size=(2, 2), strides=(2, 2), data_format=IMAGE_ORDERING)(conv7)
    pool411 = Conv2D(nClasses, (1, 1), activation='relu', padding='same', name="pool4_11", use_bias=False, data_format=IMAGE_ORDERING)(pool4)

    # Skip connection and 16x upsampling
    o = Add(name="add")([pool411, conv7_4])
    o = Conv2DTranspose(nClasses, kernel_size=(16, 16), strides=(16, 16), use_bias=False, data_format=IMAGE_ORDERING)(o)
    o = Activation('softmax')(o)

    GDI = Model(img_input, o)
    GDI.load_weights(Model_Weights_path)  # Model_Weights_path is defined elsewhere

    model = Model(img_input, o)
    return model
Then I did a train/test split and tried to train the model:
from keras import optimizers

sgd = optimizers.SGD(learning_rate=1e-2, momentum=0.91, decay=5e-4, nesterov=True)  # presumably 5e-4 was intended, not 5**(-4)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])  # pass the optimizer object; the string 'sgd' would ignore the configuration above
hist1 = model.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size=32, epochs=1000, verbose=2)
model.save("/content/drive/My Drive/HCI_prep/new.h5")
But this code throws an error in the first epoch:
NotFoundError: 2 root error(s) found.
(0) Not found: No algorithm worked!
[[{{node pool4_11_3/Conv2D}}]]
[[loss_4/mul/_629]]
(1) Not found: No algorithm worked!
[[{{node pool4_11_3/Conv2D}}]]
0 successful operations.
0 derived errors ignored.
Add the following to your code:
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
Then restart the Python kernel. This stops TensorFlow from pre-allocating all GPU memory up front; "No algorithm worked!" frequently means cuDNN could not find enough free memory for any convolution algorithm.
I had the same issue.
Setting padding='same' for MaxPooling didn't fix it for me.
What did work was changing the color_mode parameter in the train and test generators from 'rgb' to 'grayscale'.
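For reference, a hypothetical generator call with that one parameter changed (directory and target size are placeholders):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1./255).flow_from_directory(
    'data/train',                # placeholder path
    target_size=(224, 224),      # placeholder size
    color_mode='grayscale',      # was 'rgb'
    class_mode='categorical')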
This worked for me:
import tensorflow as tf
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
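If the machine has more than one GPU, the same setting can be applied to all of them (a small generalization of the snippet above):

import tensorflow as tf

# enable on-demand memory growth for every visible GPU
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)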
In my case, this was solved by ending all processes that were still allocating memory on one of the GPUs. Apparently, one of them had not finished correctly. I did not have to change any code.
My problem was that I called the model with an input_shape of (?,28,28,1) and later called it with (?,28,28,3).
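A quick sanity check (names hypothetical) catches this kind of mismatch before training:

print(model.input_shape)  # e.g. (None, 28, 28, 1)
print(X_train.shape)      # must match in every axis except the batch axis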
import tensorflow.keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, ZeroPadding2D,
                                     Dropout, Conv2DTranspose, Add, Cropping2D,
                                     Activation)

IMAGE_ORDERING = 'channels_last'

# take the VGG-16 pretrained weights from "https://github.com/fchollet/deep-learning-models"
pretrained_url = "https://github.com/fchollet/deep-learning-models/" \
                 "releases/download/v0.1/" \
                 "vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5"
pretrained = 'imagenet'  # 'imagenet' if weights should be initialized from VGG
"""
Function Name: get_vgg_encoder()
Functionalities: This function defines the VGG encoder part of the FCN network
and initialize this encoder part with VGG pretrained weights.
Parameter:input_height=224, input_width=224, pretrained=pretrained
Returns: final layer of every blocks as f1,f2,f3,f4,f5
"""
def get_vgg_encoder(input_height=224, input_width=224, pretrained=pretrained):
pad = 1
# heights and weights must be divided by 32, for fcn
assert input_height % 32 == 0
assert input_width % 32 == 0
img_input = Input(shape=(input_height, input_width, 3))
# Unlike base paper, stride=1 has not been used here, because
# Keras has default stride=1
x = (ZeroPadding2D((pad, pad), data_format=IMAGE_ORDERING))(img_input)
x = Conv2D(64, (3, 3), activation='relu', padding='valid', name='block1_conv1', data_format=IMAGE_ORDERING)(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2', data_format=IMAGE_ORDERING)(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool', data_format=IMAGE_ORDERING)(x)
f1 = x
# Block 2
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1', data_format=IMAGE_ORDERING)(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2', data_format=IMAGE_ORDERING)(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool', data_format=IMAGE_ORDERING)(x)
f2 = x
# Block 3
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1', data_format=IMAGE_ORDERING)(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2', data_format=IMAGE_ORDERING)(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3', data_format=IMAGE_ORDERING)(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool', data_format=IMAGE_ORDERING)(x)
x = Dropout(0.5)(x)
f3 = x
# Block 4
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1', data_format=IMAGE_ORDERING)(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2', data_format=IMAGE_ORDERING)(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3', data_format=IMAGE_ORDERING)(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool', data_format=IMAGE_ORDERING)(x)
f4 = x
# Block 5
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1', data_format=IMAGE_ORDERING)(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2', data_format=IMAGE_ORDERING)(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3', data_format=IMAGE_ORDERING)(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool', data_format=IMAGE_ORDERING)(x)
# x= Dropout(0.5)(x)
f5 = x
# Check if weights are initialised, model is learning!
if pretrained == 'imagenet':
VGG_Weights_path = tensorflow.keras.utils.get_file(
pretrained_url.split("/")[-1], pretrained_url)
Model(img_input, x).load_weights(VGG_Weights_path)
return img_input, [f1, f2, f3, f4, f5]
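fcn_16() below calls a crop() helper that is not included in the snippet. A minimal sketch of such a helper, modeled on common Keras FCN implementations (the original may differ), crops whichever of the two tensors is larger so their spatial dimensions match:

def crop(o1, o2, i):
    # compare the spatial shapes of the two branches via throwaway models
    h1, w1 = Model(i, o1).output_shape[1:3]
    h2, w2 = Model(i, o2).output_shape[1:3]
    cx = abs(w1 - w2)
    cy = abs(h1 - h2)
    if w1 > w2:
        o1 = Cropping2D(cropping=((0, 0), (0, cx)), data_format=IMAGE_ORDERING)(o1)
    else:
        o2 = Cropping2D(cropping=((0, 0), (0, cx)), data_format=IMAGE_ORDERING)(o2)
    if h1 > h2:
        o1 = Cropping2D(cropping=((0, cy), (0, 0)), data_format=IMAGE_ORDERING)(o1)
    else:
        o2 = Cropping2D(cropping=((0, cy), (0, 0)), data_format=IMAGE_ORDERING)(o2)
    return o1, o2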
"""
Function Name: fcn_16()
Functionalities: This function defines the Fully Convolutional part of the FCN network
and adds skip connections to build FCN-16 network
Parameter:n_classes, encoder=get_vgg_encoder, input_height=224,input_width=224
Returns: model
"""
def fcn_16(n_classes, encoder=get_vgg_encoder, input_height=224, input_width=224):
# Take levels from the base model, i.e. vgg
img_input, levels = encoder(input_height=input_height, input_width=input_width)
[f1, f2, f3, f4, f5] = levels
o = f5
# fcn6
o = (Conv2D(4096, (7, 7), activation='relu', padding='same', data_format=IMAGE_ORDERING))(o)
o = Dropout(0.5)(o)
# fc7
o = (Conv2D(4096, (1, 1), activation='relu', padding='same', data_format=IMAGE_ORDERING))(o)
o = Dropout(0.3)(o)
conv7 = (Conv2D(1, (1, 1), activation='relu', padding='same', name="score_sal", data_format=IMAGE_ORDERING))(o)
conv7_4 = Conv2DTranspose(1, kernel_size=(4, 4), strides=(2, 2), padding='same', name="upscore_sal2",
use_bias=False, data_format=IMAGE_ORDERING)(conv7)
pool411 = (
Conv2D(1, (1, 1), activation='relu', padding='same', name="score_pool4", data_format=IMAGE_ORDERING))(f4)
# Add a crop layer
o, o2 = crop(pool411, conv7_4, img_input)
# add skip connection
o = Add()([o, o2])
# 16 x upsample
o = Conv2DTranspose(n_classes, kernel_size=(32, 32), strides=(16, 16), use_bias=False, data_format=IMAGE_ORDERING)(
o)
# crop layer
## Caffe calls crop layer that takes o and img_input as argument, it takes their difference and crops
## But keras takes it as touple, I checked the size diff and put this value manually.
## output dim was 240 , input dim was 224. 240-224=16. so 16/2=8
score = Cropping2D(cropping=((8, 8), (8, 8)), data_format=IMAGE_ORDERING)(o)
o = (Activation('sigmoid'))(score)
model = Model(img_input, o)
model.model_name = "fcn_16"
return model
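A hypothetical usage sketch (the optimizer and loss are placeholders, not part of the original answer; binary_crossentropy fits the single-channel sigmoid head):

model = fcn_16(n_classes=1)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()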
This error is quite general and basically indicates that "something" went wrong. As the variety of answers suggests, it can arise from incompatibilities between the implementation and the underlying Keras/TensorFlow versions, from incorrect filter sizes, and so on.
There is no single solution. For me, it was also an input shape issue: switching from RGB to grayscale worked, because the network expected 1 channel.