No step marker observed on TensorBoard - TensorFlow

I'm working on a Keras model with an LSTM. To optimize performance I'd like to use the performance profiler from TensorBoard.
However, it shows this error message at the top:
No step marker observed and hence the step time is unknown. This may happen if (1) training steps are not instrumented (e.g., if you are not using Keras) or (2) the profiling duration is shorter than the step time. For (1), you need to add step instrumentation; for (2), you may try to profile longer.
This is my Keras model:
model = tf.keras.models.Sequential([
    tf.keras.layers.Input(shape=tuple(config.input_dims)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(64),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Dense(5, activation=tf.nn.softmax)
])
model.compile(loss='categorical_crossentropy', metrics=['categorical_accuracy'], optimizer="adam")
model.summary()
model.fit(x=train, validation_data=validation, epochs=10, callbacks=callbacks)
If I replace the LSTM with a Flatten layer, the profiler shows correct data.
The model can be trained and used. Any idea what the problem is?

I had the same problem. Above that warning, an error is shown: "Failed to load libcupti (is it installed and accessible?)". When executing the model I checked the logs and saw that TensorFlow could not find CUPTI, so I linked it as mentioned here.
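If the libcupti fix alone does not help, the warning's second suggestion is to profile for longer. A minimal sketch of how the TensorBoard callback could be configured to capture a range of batches (the log directory and batch range are example values, and passing a range to profile_batch assumes TF 2.2 or newer):
import tensorflow as tf

tb_callback = tf.keras.callbacks.TensorBoard(
    log_dir="logs/profile",   # example directory, not from the original post
    profile_batch=(10, 20)    # profile batches 10-20 so the capture spans at least one full step
)
callbacks = [tb_callback]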

Training a CNN model with Keras

Hello everyone, I am trying to train a CNN model with Keras, but training never finishes: I get the warning below and training stops. I don't know why, and I don't understand where the problem is. Can anyone give me advice on what I should change in the code?
def myModel():
    no_Of_Filters = 60
    size_of_Filter = (5, 5)  # THIS IS THE KERNEL THAT MOVES AROUND THE IMAGE TO GET THE FEATURES.
    # THIS WOULD REMOVE 2 PIXELS FROM EACH BORDER WHEN USING A 32x32 IMAGE
    size_of_Filter2 = (3, 3)
    size_of_pool = (2, 2)  # SCALE DOWN ALL FEATURE MAPS TO GENERALIZE MORE, TO REDUCE OVERFITTING
    no_Of_Nodes = 500  # NO. OF NODES IN HIDDEN LAYERS
    model = Sequential()
    model.add(Conv2D(no_Of_Filters, size_of_Filter, input_shape=(imageDimesions[0], imageDimesions[1], 1),
                     activation='relu'))  # ADDING MORE CONVOLUTION LAYERS = LESS FEATURES BUT CAN CAUSE ACCURACY TO INCREASE
    model.add(Conv2D(no_Of_Filters, size_of_Filter, activation='relu'))
    model.add(MaxPooling2D(pool_size=size_of_pool))  # DOES NOT AFFECT THE DEPTH/NO. OF FILTERS
    model.add(Conv2D(no_Of_Filters // 2, size_of_Filter2, activation='relu'))
    model.add(Conv2D(no_Of_Filters // 2, size_of_Filter2, activation='relu'))
    model.add(MaxPooling2D(pool_size=size_of_pool))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(no_Of_Nodes, activation='relu'))
    model.add(Dropout(0.5))  # FRACTION OF INPUT NODES TO DROP WITH EACH UPDATE: 1 = ALL, 0 = NONE
    model.add(Dense(noOfClasses, activation='softmax'))  # OUTPUT LAYER
    # COMPILE MODEL
    model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
    return model

############################### TRAIN
model = myModel()
print(model.summary())
history = model.fit_generator(dataGen.flow(X_train, y_train, batch_size=batch_size_val),
                              steps_per_epoch=steps_per_epoch_val,
                              epochs=epochs_val,
                              validation_data=(X_validation, y_validation),
                              shuffle=1)
WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 20000 batches). You may need to use the repeat() function when building your dataset.
While using generators, you can either run the model without the steps_per_epoch parameter and let the model figure out how many steps are needed to cover an epoch:
history = model.fit_generator(dataGen.flow(X_train, y_train, batch_size=batch_size_val),
                              epochs=epochs_val,
                              validation_data=(X_validation, y_validation),
                              shuffle=1)
OR
you'll have to calculate steps_per_epoch yourself and use it while training, as follows:
history = model.fit_generator(dataGen.flow(X_train, y_train, batch_size=batch_size_val),
                              steps_per_epoch=data_samples // batch_size,
                              epochs=epochs_val,
                              validation_data=(X_validation, y_validation),
                              shuffle=1)
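For reference, a hedged sketch of that calculation, assuming X_train is an array whose length is the number of training samples:
import math

steps_per_epoch_val = math.ceil(len(X_train) / batch_size_val)
# As the warning states, the generator must be able to supply at least
# steps_per_epoch * epochs batches over the whole training run.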
Let us know if the issue still persists. Thanks!

Approximating a smooth multidimensional function using Keras to an error of 1e-4

I am trying to approximate a function that smoothly maps five inputs to a single probability using Keras, but seem to have hit a limit. A similar problem was posed here (Keras Regression to approximate function (goal: loss < 1e-7)) for a ten-dimensional function and I have found that the architecture proposed there, namely:
model = Sequential()
model.add(Dense(128,input_shape=(5,), activation='tanh'))
model.add(Dense(64,activation='tanh'))
model.add(Dense(1,activation='sigmoid'))
model.compile(optimizer='adam', loss='mae')
gives me my best results, converging to a best loss of around 7e-4 on my validation data when the batch size is 1000. Adding or removing neurons or layers seems to reduce the accuracy, and dropout regularisation also reduces accuracy. I am currently using 1e7 training samples, which took two days to generate (hence the desire to approximate this function). I would like to reduce the MAE by another order of magnitude; does anyone have any suggestions on how to do this?
I recommend utilizing the Keras callbacks ReduceLROnPlateau (documentation is [here][1]) and ModelCheckpoint (documentation is [here][2]). For the first, set it to monitor validation loss; it will reduce the learning rate by a factor (factor) if the loss fails to improve after a fixed number (patience) of consecutive epochs. For the second, also monitor validation loss and save the weights of the model with the lowest validation loss to a directory. After training, load the weights and use them to evaluate or predict on the test set. My code implementation is shown below.
checkpoint = tf.keras.callbacks.ModelCheckpoint(filepath=save_loc, monitor='val_loss', verbose=1,
                                                save_best_only=True, save_weights_only=True,
                                                mode='auto', save_freq='epoch', options=None)
lr_adjust = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=1,
                                                 verbose=1, mode="auto", min_delta=0.00001,
                                                 cooldown=0, min_lr=0)
callbacks = [checkpoint, lr_adjust]
history = model.fit_generator(train_generator, epochs=EPOCHS,
                              steps_per_epoch=STEPS_PER_EPOCH,
                              validation_data=validation_generator,
                              validation_steps=VALIDATION_STEPS, callbacks=callbacks)
model.load_weights(save_loc)  # load the saved weights
# after this, use the model to evaluate or predict on the test set.
# if you are satisfied with the results, you can then save the entire model with
model.save(save_loc)
[1]: https://keras.io/api/callbacks/reduce_lr_on_plateau/
[2]: https://keras.io/api/callbacks/model_checkpoint/

efficientnet.tfkeras vs tf.keras.applications.efficientnet

I am trying to use EfficientNet to custom-train on my dataset,
and I found that, with all other code/data/config the same,
efficientnet.tfkeras.EfficientNetB0 gives ~90% training/prediction accuracy while tf.keras.applications.efficientnet.EfficientNetB0 only gives ~70% accuracy.
But I would guess both should be the same implementation of EfficientNet, or am I missing something here?
I am using the latest efficientnet package and TensorFlow 2.3.0.
with strategy.scope():
    model = tf.keras.Sequential([
        efficientnet.tfkeras.EfficientNetB0(  # tf.keras.applications.efficientnet.EfficientNetB0
            input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3),
            weights='imagenet',
            include_top=False
        ),
        L.GlobalAveragePooling2D(),
        L.Dense(1, activation='sigmoid')
    ])
    model.compile(
        optimizer='adam',
        loss='binary_crossentropy',
        metrics=['binary_crossentropy']
    )
model.summary()
I ran into the same problem with EfficientNetB4 and observed the following:
The total number of parameters is not equal. The trainable parameters are equal, but the non-trainable parameters aren't: efficientnet.tfkeras has 7 fewer non-trainable parameters than the tf.keras.applications model.
The number of layers is not equal: efficientnet.tfkeras has fewer layers than the tf.keras.applications model.
The differing layers are at the very beginning; the most noteworthy are the normalization and rescaling layers, which are in the tf.keras.applications model but not in the efficientnet.tfkeras model. You can observe this yourself using the model.summary() method.
When applying these layers, by using model.layers[i](array), it turns out that they rescale the image by dividing it by 255 and apply normalization according to:
(input_image - IMAGENET_MEAN) / square_root(IMAGENET_STD)
Thus, the image normalization is built into the model. If you perform this normalization yourself on the input image, the image is normalized twice, resulting in extremely small pixel values, and the model will therefore have a hard time learning.
TL;DR: Do not normalize the input image, as normalization is built into the tf.keras.applications model; input images should have values in the range 0-255.
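To illustrate, a minimal sketch (the input shape and the random stand-in batch are assumptions for the example): the tf.keras.applications variant expects raw 0-255 pixels because rescaling and normalization layers are part of the model.
import tensorflow as tf

base = tf.keras.applications.efficientnet.EfficientNetB0(
    input_shape=(224, 224, 3), weights='imagenet', include_top=False)

raw_batch = tf.random.uniform((4, 224, 224, 3), minval=0, maxval=255)  # stand-in images in 0-255
features = base(raw_batch, training=False)  # correct: the model rescales/normalizes internally
# features = base(raw_batch / 255.0, training=False)  # wrong: the input would be normalized twice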

Keras OOM for data validation using GPU

I'm trying to run a deep model on the GPU, and it seems Keras runs the validation against the whole validation data set in one batch instead of validating in many batches, which causes an out-of-memory problem:
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM
when allocating tensor with shape[160000,64,64,1] and type double on
/job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[Op:GatherV2]
I did not have this problem when running on the CPU; it only happens when running on the GPU. My fit code looks like this:
history = model.fit(patches_imgs_train, patches_masks_train, batch_size=8,
                    epochs=10, shuffle=True, verbose=1, validation_split=0.2)
When I delete the validation parameter from the fit method the code works, but I need the validation.
Since no one is answering this, I can offer you a workaround: separate fit() and evaluate() and run the evaluation on the CPU.
You'll have to split your data manually to provide testx and testy to evaluate().
for i in range(10):
    with tf.device('/GPU:0'):
        model.fit(x, y, epochs=1)
    with tf.device('/CPU:0'):
        loss, acc = model.evaluate(testx, testy)
You'll need to deal with the accuracy values yourself if you want some early stopping.
It isn't perfect, but it'll allow you to run much larger networks without OOMs.
Hope it helps.
So I would consider what is happening a bug in the Keras implementation: it looks like it tries to load the whole data set into memory in order to split it into validation and training sets, and it's not related to batch size. After trying many ways to work around it, I found the best approach is to split the data using sklearn's train_test_split instead of splitting it inside the fit method with the validation_split parameter.
from sklearn.model_selection import train_test_split

x_train, x_v, y_train, y_v = train_test_split(x, y, test_size=0.2, train_size=0.8)
history = model.fit(x_train, y_train,
                    batch_size=16,
                    epochs=5,
                    shuffle=True,
                    verbose=2,
                    validation_data=(x_v, y_v))

Keras: BiLSTM only works when return_sequences=True

I've been trying to implement this BiLSTM in Keras: https://github.com/ffancellu/NegNN
Here is where I'm at, and it kind of works:
inputs_w = Input(shape=(sequence_length,), dtype='int32')
inputs_pos = Input(shape=(sequence_length,), dtype='int32')
inputs_cue = Input(shape=(sequence_length,), dtype='int32')
w_emb = Embedding(vocabulary_size+1, embedding_dim, input_length=sequence_length, trainable=False)(inputs_w)
p_emb = Embedding(tag_voc_size+1, embedding_dim, input_length=sequence_length, trainable=False)(inputs_pos)
c_emb = Embedding(2, embedding_dim, input_length=sequence_length, trainable=False)(inputs_cue)
summed = keras.layers.add([w_emb, p_emb, c_emb])
BiLSTM = Bidirectional(CuDNNLSTM(hidden_dims, return_sequences=True))(summed)
DPT = Dropout(0.2)(BiLSTM)
outputs = Dense(2, activation='softmax')(DPT)
checkpoint = ModelCheckpoint('bilstm_one_hot.hdf5', monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
early = EarlyStopping(monitor='val_loss', min_delta=0.0001, patience=5, verbose=1, mode='auto')
model = Model(inputs=[inputs_w, inputs_pos, inputs_cue], outputs=outputs)
model.compile('adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
model.fit([X_train, X_pos_train, X_cues_train], Y_train, batch_size=batch_size, epochs=num_epochs, verbose=1, validation_split=0.2, callbacks=[early, checkpoint])
In the original code, in TensorFlow, the author uses masking and softmax cross entropy with logits. I don't get how to implement this in Keras yet; if you have any advice, don't hesitate to share it.
My main issue here is with return_sequences=True. The author doesn't appear to be using it in his TensorFlow implementation, and when I set it to False, I get this error:
ValueError: Error when checking target: expected dense_1 to have 2 dimensions, but got array with shape (820, 109, 2)
I also tried using:
outputs = TimeDistributed(Dense(2, activation='softmax'))(BiLSTM)
which returns an AssertionError without any information.
Any ideas ?
Thanks
the author uses masking and softmax cross entropy with logits. I don't get how to implement this in Keras yet.
Regarding softmax cross entropy with logits, you are doing it correctly. softmax_cross_entropy_with_logits as the loss function with no activation function on the last layer is the same as your approach with categorical_crossentropy as the loss and a softmax activation on the last layer; the only difference is that the latter is numerically less stable. If this turns out to be an issue for you, you can (if your Keras backend is TensorFlow) just pass tf.nn.softmax_cross_entropy_with_logits as your loss. If you have another backend, you will have to look for an equivalent there.
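As an alternative sketch (not from the original post, and assuming a recent tf.keras): keep the last Dense layer without an activation so it emits raw logits, and let the built-in loss handle the numerically stable path, reusing the names from your code:
import tensorflow as tf

outputs = Dense(2)(DPT)  # raw logits: no softmax on the last layer
model = Model(inputs=[inputs_w, inputs_pos, inputs_cue], outputs=outputs)
model.compile('adam',
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])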
Regarding masking, I'm not sure if I fully understand what the author is doing. However, in Keras the Embedding layer has a mask_zero parameter that you can set to True. In that case all timesteps that have a 0 will be ignored in all further calculations. In your source, it is not 0 that is being masked, though, so you would have to adjust the indices accordingly. If that doesn't work, there is the Masking layer in Keras that you can put before your recurrent layer, but I have little experience with that.
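For example, a minimal sketch of mask_zero, reusing the names from the question (an illustration only: it is correct only if your padding index really is 0):
w_emb = Embedding(vocabulary_size + 1, embedding_dim,
                  input_length=sequence_length,
                  mask_zero=True,      # timesteps whose input id is 0 are ignored downstream
                  trainable=False)(inputs_w)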
My main issue here is with return_sequences=True. The author doesn't appear to be using it
What makes you think that he doesn't use it? Just because that keyword does not appear in the code doesn't mean anything. But I'm also not sure; the code is pretty old, and I can no longer find the docs that would tell what the defaults were.
Anyway, if you want to use return_sequences=False (for whatever reason) be aware that this changes the output shape of the layer:
with return_sequences=True the output shape is (batch_size, timesteps, features)
with return_sequences=False the output shape is (batch_size, features)
The error you are getting is basically telling you that your network's output has one dimension less than the target y values you are feeding it.
So, to me it looks like return_sequences=True is just what you need, but without further information it is hard to tell.
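A quick sketch of the shape difference (the timestep count is taken from your error message; the batch size, feature width, and number of units are arbitrary values for illustration):
import tensorflow as tf

x = tf.random.uniform((2, 109, 16))  # (batch, timesteps, features) stand-in
seq = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(4, return_sequences=True))(x)
last = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(4, return_sequences=False))(x)
print(seq.shape)   # (2, 109, 8): one output per timestep, matching a time-distributed target
print(last.shape)  # (2, 8): only the last timestep survives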
Then, regarding TimeDistributed. I'm not quite sure what you are trying to achieve with it, but quoting from the docs:
This wrapper applies a layer to every temporal slice of an input.
The input should be at least 3D, and the dimension of index one will be considered to be the temporal dimension.
(emphasis is mine)
I'm not sure from your question in which scenario the empty assertion occurs.
If you have a recurrent layer with return_sequences=False before it, you are again missing a dimension (I can't tell you why the assertion is empty, though).
If you have a recurrent layer with return_sequences=True before it, it should work, but it would be completely useless, as Dense is applied in a time-distributed way anyway. If I'm not mistaken, this behavior of the Dense layer was changed in some older Keras version (they should really update the example there and stop using Dense!). As the code you are referring to is quite old, it's quite possible that TimeDistributed was needed back then but is not needed anymore.
If your plan was to restore the missing dimension, TimeDistributed won't help you, but RepeatVector would. But, as already said, in that case better use return_sequences=True in the first place.
The problem is that your target values are time distributed: you have 109 timesteps, each with a one-hot target vector of size two. This is why you need return_sequences=True; otherwise you would just feed the last timestep to the Dense layer and have a single output.
So, depending on what you need, either keep it as it is now, or, if just the last timestep is enough for you, get rid of it, but then you would need to adjust the y values accordingly.
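For illustration, a hedged sketch of that adjustment, assuming Y_train has the shape (samples, 109, 2) implied by the error message:
Y_last = Y_train[:, -1, :]  # keep only the final timestep's one-hot target, shape (samples, 2)
# use Y_last as the target when the BiLSTM has return_sequences=False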