'IGenerator' object has no attribute '_assert_compile_was_called' - tensorflow

This trains the model to increase the encoder and decoder accuracy:
Model.fit(IGenerator(), steps_per_epoch=500, epochs=5)
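This usually happens because fit is called on the Model class itself, so the IGenerator() instance gets bound as self and Keras then looks for _assert_compile_was_called on the generator. A minimal sketch of the usual pattern, where build_encoder_decoder() is a hypothetical stand-in for however the model is actually constructed:
import tensorflow as tf

# Hypothetical builder; replace with the real encoder-decoder construction.
model = build_encoder_decoder()

# fit() must be called on a compiled model *instance*, not on the Model class;
# otherwise the first positional argument is treated as `self`.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# The generator is the data argument, not the receiver of fit().
model.fit(IGenerator(), steps_per_epoch=500, epochs=5)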


Retraining a pretrained model after adding layers gives a broadcastable shapes error

I'm trying to train a model that I loaded and whose layers I froze, and then I added 3 new layers that I want to train. In the model.fit stage I'm getting InvalidArgumentError: required broadcastable shapes [Op:Sub].
This is the code I'm using:
import tensorflow as tf
from tensorflow.keras import layers as tfl

# Load the saved model and freeze its layers
file_path = r'F:\ku.ac.ae\Intelligent Robotic Manufacturing - Documents\codes\Visuotactile sensor\contact_est\final\m3_130x173_512x16_DATASET_3'
loaded_model = tf.keras.models.load_model(file_path)
tf.keras.backend.set_epsilon(1)
model = tf.keras.models.Sequential(loaded_model.layers[:-3])
for layer in model.layers[:]:
    layer.trainable = False
    #print(layer, layer.trainable)

# Add the new layers
model.add(tfl.Flatten())
model.add(tfl.Dense(64))
model.add(tfl.Dense(66, activation='softmax'))
for layer in model.layers[:]:
    print(layer, layer.trainable)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss='mean_absolute_percentage_error',
    metrics=['mean_absolute_error'],
    #metrics=['accuracy'],
    run_eagerly=True)
file_name = 'freezed_m3_130x173_512x32_dataset3'
and then I run this:
history = model.fit(
    x_train, y_train,
    epochs=512,
    batch_size=32,
    validation_data=(x_valid, y_valid),
    #callbacks=callbacks_list,
    shuffle=True)
I'm getting the error InvalidArgumentError: required broadcastable shapes [Op:Sub]
Any idea about this? Note that x_train and y_train have exactly the same shapes as the data the loaded model was trained on; in fact, they are the training dataset used to train the loaded model. I just want to play with the last layers.
Thanks
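A hedged diagnostic sketch (my addition, not from the original post): [Op:Sub] is the subtraction inside the mean_absolute_percentage_error loss, so this error usually means the new head's output shape no longer matches y_train. Since the new head ends in Dense(66), y_train needs 66 values per sample:
# Compare the new head's output shape with the labels before calling fit().
print(model.output_shape)  # e.g. (None, 66) after the new Dense(66) layer
print(y_train.shape)       # must be (num_samples, 66) for an elementwise loss

# If the widths differ, the Sub op inside the MAPE loss cannot broadcast
# y_true - y_pred and raises InvalidArgumentError: required broadcastable shapes.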

Large variation in validation loss and accuracy while training ResNet50 on binary image classification

I am using ResNet50 for binary image classification, and the model shows high variation in loss and accuracy on the validation data across epochs.
This is what I get after 40 epochs.
Here is my model code:
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Dense, BatchNormalization, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

def build_model():
    model = ResNet50(include_top=True, input_shape=(224, 224, 3), weights="imagenet")
    for layer in model.layers:
        layer.trainable = True
    base_input = model.layers[0].input
    base_output = model.layers[-2].output
    l = Dense(units=512, activation='sigmoid')(base_output)
    l = BatchNormalization()(l)
    l = Dropout(0.4)(l)
    final_output = Dense(units=1, activation='sigmoid')(l)
    new_model = Model(inputs=base_input, outputs=final_output)
    return new_model

def train_model(model, train_generator, valid_generator):
    model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=['accuracy'])
    history = model.fit(train_generator, validation_data=valid_generator, epochs=40)
    return history
I need to know what the problem is and how to fix it.
Thanks in advance.
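As a starting point (an assumption on my part, not a diagnosis from the post): fine-tuning every ResNet50 layer with the default Adam learning rate often produces exactly this kind of epoch-to-epoch swing, so a much smaller learning rate is the usual first thing to try:
from tensorflow.keras.optimizers import Adam

model = build_model()
# Hypothetical change: a much smaller learning rate. The Keras default (1e-3)
# is often too aggressive when all ResNet50 layers are trainable and can make
# validation loss and accuracy oscillate between epochs.
model.compile(optimizer=Adam(learning_rate=1e-5),
              loss='binary_crossentropy',
              metrics=['accuracy'])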

How to get an LSTM layer in TensorFlow Lite?

I trained a simple model with Keras:
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(20, time_major=False, unroll=False,
                         input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax, name='output')])
Then, I converted my model with TFLite:
converter = tf.lite.TFLiteConverter.from_saved_model("mnist_lstm_model")
converter.experimental_new_converter = True
tflite_model = converter.convert()
I obtain a UNIDIRECTIONAL_SEQUENCE_LSTM op instead of LSTM, but I really need an LSTM layer for inference.
Thank you!
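For context (my understanding of the converter, not stated in the post): TensorFlow Lite does not keep a standalone LSTM layer on this path; the converter fuses tf.keras.layers.LSTM into the UNIDIRECTIONAL_SEQUENCE_LSTM op, which is TFLite's representation of the same layer and runs inference as-is. A minimal end-to-end sketch, assuming the model defined above:
import numpy as np
import tensorflow as tf

# Save the Keras model, then convert it (same path as in the question).
model.save("mnist_lstm_model")
converter = tf.lite.TFLiteConverter.from_saved_model("mnist_lstm_model")
tflite_model = converter.convert()

# Run inference through the fused UNIDIRECTIONAL_SEQUENCE_LSTM op.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out['index']).shape)  # expected (1, 10)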

I am working on multiclass text classification; how do I pass one-hot encoded ytrain labels to a Keras model for training?

I am working on a text classification problem. I have 9 labels in my ytrain, but when I pass xtrain and ytrain to the model, it gives me the error that the target was expected to have shape (1,) but got (9,). The size of my ytrain is (32, 9). A picture of ytrain is attached.
Below is my model:
from keras.models import Sequential
from keras import layers

model = Sequential()
model.add(layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim, input_length=max_len))
model.add(layers.Conv1D(filters=100, kernel_size=4))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(12, activation='softmax'))
# precision, recall and f1 are user-defined metric functions
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
              metrics=['accuracy', precision, recall, f1])
model.summary()
Traceback:
Error when checking target: expected dense_9 to have shape (1,) but got array with shape (9,)
Your ytrain is already one-hot encoded (its shape is (32, 9)), but sparse_categorical_crossentropy expects integer class labels of shape (1,) per sample, which is why Keras complains about the target shape. Either switch the loss to categorical_crossentropy, which expects one-hot targets, or convert ytrain back to integer labels (for example with np.argmax). Also note that the final Dense layer has 12 units while you have 9 labels; it should be Dense(9).
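A minimal sketch of both options, assuming ytrain has shape (num_samples, 9):
import numpy as np

# Option 1: keep the one-hot ytrain and use the matching loss.
# The final layer must have one unit per label: Dense(9), not Dense(12).
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.fit(xtrain, ytrain, batch_size=32)

# Option 2: keep sparse_categorical_crossentropy and pass integer class ids
# of shape (num_samples,) instead of one-hot rows.
ytrain_int = np.argmax(ytrain, axis=1)
model.fit(xtrain, ytrain_int, batch_size=32)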

How to use a batch-trained model to predict on a single input?

I have an RNN model that has been trained on a Dataset:
train = tf.data.Dataset.from_tensor_slices(
    (data_x[:train_size], data_y[:train_size])).batch(batch_size).repeat()
The model:
model = tf.keras.Sequential()
model.add(tf.keras.layers.GRU(units=lstm_num_units,
                              return_sequences=True,
                              kernel_initializer='random_uniform',
                              recurrent_initializer='random_uniform',
                              bias_initializer='random_uniform',
                              batch_size=batch_size,
                              input_shape=[seq_len, num_features]))
model.add(tf.keras.layers.LSTM(units=lstm_num_units,
                               batch_size=batch_size,
                               return_sequences=True,
                               input_shape=[seq_len, num_features]))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(units=dence_units))
model.add(tf.keras.layers.Dropout(drop_flat))
model.add(tf.keras.layers.Dense(units=out_units))
model.add(tf.keras.layers.Softmax())
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=tf.train.RMSPropOptimizer(opt),
              metrics=['accuracy'])
model.fit(train, epochs=EPOCHS,
          steps_per_epoch=repeat_size_train,
          validation_data=validate,
          validation_steps=repeat_size_validate,
          verbose=1,
          shuffle=True,
          callbacks=[tensorboard, cp_callback])
I need to do a prediction on a single input of length seq_len, but it looks like my input has to be of the batch size:
ar = np.random.randint(98, size=[batch_size, seq_len])
ar = np.reshape(ar, [batch_size, seq_len, 1])
prediction = model.predict(ar)
Is there a way to make it work on a single input of shape [1, seq_len, 1]?
Yes: simply rebuild the model without a batch size in the first layer, then copy the weights of the old model:
newModel.set_weights(oldModel.get_weights())
A fixed batch size is only needed in stateful=True models, to keep consistency between batches. Either way, there is no mathematical change due to batch size.
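A minimal sketch of that rebuild, reusing the variable names from the question (an illustration of the idea, not tested code):
import numpy as np
import tensorflow as tf

# Same architecture as above, but with no batch_size fixed in the layers.
new_model = tf.keras.Sequential([
    tf.keras.layers.GRU(units=lstm_num_units, return_sequences=True,
                        input_shape=[seq_len, num_features]),
    tf.keras.layers.LSTM(units=lstm_num_units, return_sequences=True),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(units=dence_units),
    tf.keras.layers.Dropout(drop_flat),
    tf.keras.layers.Dense(units=out_units),
    tf.keras.layers.Softmax()])

# The architectures match layer for layer, so the trained weights carry over.
new_model.set_weights(model.get_weights())

# A single sequence of shape [1, seq_len, 1] now works.
ar = np.random.randint(98, size=[1, seq_len, 1]).astype(np.float32)
prediction = new_model.predict(ar)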