Drowsy dataset for training a neural network - tensorflow

I am trying to build a TensorFlow classifier for drowsiness detection, for which I need a drowsiness dataset. Is there any publicly available drowsiness dataset that I can use as my training set?

Related

Lower validation accuracy on ImageNet when evaluating Keras pre-trained models

I want to work with Keras models pre-trained on ImageNet. The models and information about their performance are here.
I downloaded ILSVRC 2012 (ImageNet) dataset and evaluated ResNet50 on the validation dataset. The top-1 accuracy should be 0.749 but I get 0.68. The top-5 accuracy should be 0.921, mine is 0.884. I also tried VGG16 and MobileNet with similar discrepancies.
I preprocess the images using the built-in preprocess_input function (e.g. tensorflow.keras.applications.resnet50.preprocess_input()).
My guess is that the dataset is different. How can I make sure that the validation dataset that I use for evaluation is the same as the one that was used by the authors? Could there be any other reason why I get different results?
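For reference, a minimal evaluation sketch is below. The data loading is a placeholder (load_val_batch is hypothetical) standing in for however you read ILSVRC 2012; note that the published numbers depend on the exact resize/center-crop pipeline, so preprocessing that differs from the authors' is a common cause of a gap of a few points.

import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

# load_val_batch is a stand-in for your own ILSVRC 2012 pipeline; it should
# yield raw RGB images in [0, 255] and integer class labels.
def load_val_batch():
    images = np.random.uniform(0, 255, (8, 224, 224, 3)).astype("float32")  # stand-in data
    labels = np.random.randint(0, 1000, size=8)                             # stand-in labels
    return images, labels

model = ResNet50(weights="imagenet")
images, labels = load_val_batch()
probs = model.predict(preprocess_input(images))

top1 = np.mean(np.argmax(probs, axis=1) == labels)
top5 = np.mean([lab in np.argsort(p)[-5:] for p, lab in zip(probs, labels)])
print(f"top-1: {top1:.3f}, top-5: {top5:.3f}")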

Adding Batch Dimension to tensorflow graph which was trained without batching support

I trained the network without batching, so the input dimension of the graph is (H, W, C) (not even (1, H, W, C)).
But during inference, I need predictions for multiple images (batched inference).
How can I achieve this?
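One way to sketch this in TF 2.x, assuming the restored graph can be called as a function of a single (H, W, C) image (single_image_model below is a hypothetical stand-in): wrap it with tf.map_fn so it is applied once per element of the batch.

import tensorflow as tf

# Hypothetical stand-in for the unbatched model: maps one (H, W, C) image
# to a vector of outputs. Replace with your restored graph/function.
def single_image_model(image):
    return tf.reduce_mean(image, axis=[0, 1])  # placeholder computation

@tf.function
def batched_predict(images):  # images: (N, H, W, C)
    # Apply the per-image function along the leading batch dimension.
    return tf.map_fn(single_image_model, images, fn_output_signature=tf.float32)

batch = tf.random.uniform((4, 32, 32, 3))
print(batched_predict(batch).shape)  # (4, 3)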

What are the image datasets for which pre-trained weights are available in Keras?

I want to know which datasets have pretrained weights available in Keras. For example, the Keras InceptionV3 model can reference the weights of the ImageNet dataset:
keras.applications.InceptionV3(weights='imagenet')
But are there other datasets like ImageNet for which pretrained weights are available?
It says in the docs: "One of None (random initialization), imagenet (pre-training on ImageNet), or the path to the weights file to be loaded. Defaults to imagenet." Link to the docs: https://www.tensorflow.org/api_docs/python/tf/keras/applications/InceptionV3
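As far as built-in names go, imagenet is the only dataset string tf.keras.applications accepts; any other pretrained weights have to be loaded from a file. A quick sketch of the three documented options (the weights path below is hypothetical):

import tensorflow as tf

model_pretrained = tf.keras.applications.InceptionV3(weights="imagenet")  # ImageNet weights
model_random = tf.keras.applications.InceptionV3(weights=None)            # random initialization
# Weights you trained or downloaded yourself (hypothetical path):
# model_custom = tf.keras.applications.InceptionV3(weights="/path/to/my_weights.h5")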

Why does training not use the full training dataset in the TensorFlow text tutorial?

I have a question after reading the text classification tutorial of TensorFlow: https://www.tensorflow.org/tutorials/keras/text_classification_with_hub
In the data preparation phase (https://www.tensorflow.org/tutorials/keras/text_classification_with_hub#download_the_imdb_dataset), it says the training data contains 15,000 examples for training.
However, in the model training phase (https://www.tensorflow.org/tutorials/keras/text_classification_with_hub#train_the_model), the code seems to use only 10,000 samples:
history = model.fit(train_data.shuffle(10000).batch(512),
                    epochs=20,
                    validation_data=validation_data.batch(512),
                    verbose=1)
Could anyone explain why the training does not use the whole training dataset (i.e., 15,000 samples)? Thanks.
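For context, the argument to shuffle is a buffer size, not a sample count; every element still passes through the pipeline. A toy sketch of that behavior:

import tensorflow as tf

# shuffle(buffer_size) reorders elements through a buffer of the given size;
# it does not drop any of them.
elements = sorted(int(x) for x in tf.data.Dataset.range(15).shuffle(5))
print(elements)  # [0, 1, ..., 14] -- every element appears exactly once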

Logging accuracy when using tf.estimator.Estimator

I'm following this tutorial - https://www.tensorflow.org/tutorials/estimators/cnn - to build a CNN using TensorFlow Estimators with the MNIST data.
I would like to visualize training accuracy and loss at each step, but I'm not sure how to do that with tf.train.LoggingTensorHook.
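A minimal sketch of how LoggingTensorHook is usually wired up with a TF 1.x Estimator (the tensor names below are hypothetical; they must match names you assign inside your model_fn, e.g. with tf.identity(loss, name="loss_tensor")):

import tensorflow as tf

# Inside model_fn, give the tensors stable names first, e.g.:
#   loss = tf.identity(loss, name="loss_tensor")
#   accuracy = tf.identity(accuracy_op, name="accuracy_tensor")

logging_hook = tf.train.LoggingTensorHook(
    tensors={"loss": "loss_tensor", "accuracy": "accuracy_tensor"},
    every_n_iter=50)  # print the values every 50 steps

# Pass the hook when training (estimator is assumed to be defined elsewhere):
# estimator.train(input_fn=train_input_fn, steps=20000, hooks=[logging_hook])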