Random Forest Classifier is giving me an array of zeroes - tensorflow

I used VGG16 as a feature extractor on a dataset with 9 classes and trained a Random Forest Classifier on the feature vectors. I tried to make predictions on the test feature vectors, but the prediction is an array of zeroes. What am I doing wrong?
All Zero prediction
Notebook Link
I also tried fitting the RandomForestClassifier on test_feature_vector and predicting on part of train_feature_vector, and I still got all zeroes.
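For reference, here is a minimal sketch of the kind of pipeline described above (the placeholder data and variable names are illustrative, not taken from the linked notebook). When every prediction comes back as zero, the first things to check are that the labels passed to fit() actually contain all nine classes and that features and labels line up:

import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

# Placeholder data standing in for the real dataset: 9 classes of 224x224 RGB images.
X_train = np.random.rand(90, 224, 224, 3).astype("float32") * 255.0
y_train = np.repeat(np.arange(9), 10)      # integer labels; every class is present
X_test = np.random.rand(18, 224, 224, 3).astype("float32") * 255.0

# VGG16 without its classification head acts as the feature extractor.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3), pooling="avg")
train_features = base.predict(tf.keras.applications.vgg16.preprocess_input(X_train))
test_features = base.predict(tf.keras.applications.vgg16.preprocess_input(X_test))

# Fit the Random Forest on the training features with the matching labels ...
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(train_features, y_train)

# ... and sanity-check the output. All-zero predictions usually mean the labels
# passed to fit() were all zero to begin with, or features and labels were not aligned.
print(np.unique(y_train))                  # should list all 9 classes
print(rf.predict(test_features)[:10])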

Related

TensorFlow keras expected dimension not expected LSTM

I'm trying to complete the LSTM composition part of the TensorFlow tutorial at
https://www.datacamp.com/tutorial/using-tensorflow-to-compose-music
I've gotten as far as the LSTM model, but I get a dimension error for the inputs, even though I've used the code provided in the tutorial. A new training set is created, but it is not converted the way it is for the autoencoder models in the earlier examples.
This piece of code is not included in the preparation step for the LSTM model:
# Convert to one-hot encoding and swap chord and sequence dimensions
trainChords = tf.keras.utils.to_categorical(trainChords).transpose(0,2,1)
# Convert data to numpy array of type float
trainChords = np.array(trainChords, np.float)
# Flatten sequence of chords into single dimension
trainChordsFlat = trainChords.reshape(nSamples, nChordsSequence)
What do these steps do? Are they also required for the LSTM model?
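For concreteness, here is a small standalone demonstration of what those three lines do, using made-up chord indices. It assumes nChordsSequence is the number of chords times the sequence length (which is what makes the final reshape work) and writes np.float as plain float, since recent NumPy versions removed that alias:

import numpy as np
import tensorflow as tf

# 4 made-up samples, each a sequence of 3 chord indices from a vocabulary of 5 chords.
trainChords = np.array([[0, 2, 1],
                        [3, 4, 0],
                        [1, 1, 2],
                        [4, 0, 3]])
nSamples, sequenceLength = trainChords.shape
nChords = trainChords.max() + 1

# to_categorical turns every index into a one-hot vector, giving (4, 3, 5);
# the transpose then swaps the sequence and chord axes, giving (4, 5, 3).
oneHot = tf.keras.utils.to_categorical(trainChords).transpose(0, 2, 1)
print(oneHot.shape)                        # (4, 5, 3)

# np.float was just an alias for float; the cast makes the dtype explicit.
oneHot = np.array(oneHot, float)

# The reshape flattens everything after the sample axis into one feature vector
# per sample, which is the flat input the models in the tutorial expect.
nChordsSequence = nChords * sequenceLength
trainChordsFlat = oneHot.reshape(nSamples, nChordsSequence)
print(trainChordsFlat.shape)               # (4, 15)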

Do you always need to one-hot encode if you have a classification task?

I'm really new to deep learning and I came across the MNIST dataset problem.
So my question is: when you have a classification task, are you supposed to one-hot encode the labels before feeding them to the neural network?
No, you have a number of options based on the loss you select when compiling your model. Typically you set the loss in model.compile to categorical_crossentropy if you have one-hot encoded your labels. However, you can also encode your labels as integers and use sparse_categorical_crossentropy as your loss function.
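A small sketch of both options on MNIST (the model and hyperparameters below are only illustrative):

import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Option 1: one-hot encode the labels yourself and use categorical_crossentropy.
y_onehot = tf.keras.utils.to_categorical(y_train, 10)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_onehot, epochs=1, batch_size=128)

# Option 2: keep the integer labels and let sparse_categorical_crossentropy handle it.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128)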

Converting tensorflow dataset to numpy array

I have an autoencoder defined using tf.keras in TensorFlow 1.15. I cannot upgrade to TensorFlow 2.0 for some specific reasons.
This particular autoencoder is used for anomaly detection. I currently compute the AUC score of the autoencoder as follows:
1. All anomalous inputs are labelled 1 and all normal inputs are labelled 0. This is y_true.
2. I feed the autoencoder with unseen inputs and then measure the reconstruction error, like so: errors = np.mean(np.square(data - model.predict(data)), axis=-1)
3. The mean of this array is then said to be the predicted label, y_pred.
4. I then compute the AUC using auc = metrics.roc_auc_score(y_true, y_pred).
This approach works well. I now need to move towards using tf.data.Dataset to feed in my data; previously, it was numpy arrays. The issue is that I am unable to convert the tf.data.Dataset to a numpy array, and hence I am unable to compute the mean squared error as seen in step 2.
Once I have a tf.data.Dataset, I feed it for prediction like so: results = model.predict(x_test)
This yields a numpy array, results. I want to compute the mean squared error of results with x_test. However, x_test is of type tf.data.Dataset. So the question is: how can I convert a tf.data.Dataset to a numpy array in TensorFlow 1.15, or what is an alternative way to do this?
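One common TF 1.x approach, for what it's worth, is to iterate the dataset inside a session and concatenate the batches back into a numpy array. The dataset below is only a stand-in for x_test; if your dataset yields (input, target) tuples, adapt the element handling accordingly:

import numpy as np
import tensorflow as tf  # 1.15

# Stand-in dataset: 100 feature vectors of length 32, batched.
x_test = tf.data.Dataset.from_tensor_slices(
    np.random.rand(100, 32).astype("float32")).batch(16)

iterator = x_test.make_one_shot_iterator()
next_batch = iterator.get_next()

batches = []
with tf.Session() as sess:
    while True:
        try:
            batches.append(sess.run(next_batch))
        except tf.errors.OutOfRangeError:
            break

x_test_np = np.concatenate(batches, axis=0)   # shape (100, 32)

# With the numpy array recovered, the original reconstruction-error code applies again:
# errors = np.mean(np.square(x_test_np - model.predict(x_test_np)), axis=-1)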

Why does Keras to_categorical method not return 3-D tensor when inputting 2-D tensor?

I was trying to build an LSTM neural net with Keras to predict tags for words in a set of sentences.
The implementation is all pretty straightforward, but the surprising thing was that, given exactly the same and otherwise correctly implemented code, and using Tensorflow 1.4.0 with Keras running on the Tensorflow backend, it returned tensors with the wrong dimensions on some people's computers, while for others it worked perfectly.
The problem occurred in the following context:
First, we turned the list of training sentences (sentences as a list of word indices) into a 2-D matrix using the pad_sequences method from Keras (https://keras.io/preprocessing/sequence/):
def do_padding(sequences, length, padding_value):
    return pad_sequences(sequences, maxlen=length, padding='post',
                         truncating='post', value=padding_value)

train_sents_padded = do_padding(train_sents, MAX_LENGTH,
                                word_to_id[PAD_TOKEN])
Next, we used our do_padding method on the corresponding training labels to turn them into a padded matrix. At the same time, we used the Keras to_categorical method (https://keras.io/utils/#to_categorical) to add a one-hot encoded vector to the created label matrix (one one-hot vector for each cell in the matrix, that means for each word in each training sentence):
train_labels_padded = to_categorical(do_padding(train_labels, MAX_LENGTH,
                                                label_to_id["O"]), NUM_LABELS)
We expected the resulting shape to be 3-D: (len(train_labels), MAX_LENGTH, NUM_LABELS). Yet, we found that the resulting shape was 2-D and basically looked like this: ((len(train_labels) x MAX_LENGTH), NUM_LABELS), meaning the numbers on the two expected dimensions len(train_labels) and MAX_LENGTH were multiplied and flattened into one dimension.
Interestingly, as said before, this problem only occurred for about 50% of the people, all using Tensorflow 1.4.0 and Keras running on the Tensorflow backend.
We managed to solve the problem by reshaping the label matrix manually:
train_labels_padded = np.reshape(train_labels_padded, (len(train_labels),
                                                       MAX_LENGTH, NUM_LABELS))
I was just wondering if any of you have experienced a similar problem and have figured out the reason why this happens.
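For reference, here is a small standalone sketch of a version-robust variant of that workaround (toy label sequences, padded by hand to keep it self-contained, whereas the code above uses pad_sequences): it reshapes only when to_categorical comes back flattened.

import numpy as np
from tensorflow.keras.utils import to_categorical

MAX_LENGTH, NUM_LABELS = 4, 3
train_labels = [[0, 1, 2], [2, 2]]         # toy label sequences

# Pad by hand here to keep the example standalone.
padded = np.array([seq + [0] * (MAX_LENGTH - len(seq)) for seq in train_labels])

train_labels_padded = to_categorical(padded, NUM_LABELS)

# Some Keras versions flattened the 2-D integer input, returning an array of
# shape (n_samples * MAX_LENGTH, NUM_LABELS); newer versions keep the 3-D shape.
if train_labels_padded.ndim == 2:
    train_labels_padded = train_labels_padded.reshape(
        len(train_labels), MAX_LENGTH, NUM_LABELS)

print(train_labels_padded.shape)           # (2, 4, 3)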

Neural Network with my own dataset

I have downloaded many face images from the web. In order to learn Tensorflow, I want to feed those images to a simple fully-connected neural network with a single hidden layer. I found example code here.
Since I am a beginner, I don't know how to train, evaluate, and test the network with the downloaded images. The code owner used a '.mat' file and a '.pkl' file. I don't understand how he organized the training and test sets.
In order to run the code with my images:
Do I need to divide my images into training, test, and validation folders and turn each folder into a mat file? How am I going to provide labels for the training?
Besides, I don't understand why he used a '.pkl' file.
All in all, I would like to change this code so that I can measure training, validation, and test set classification performance with my image dataset.
It might be an easy question, but it is important for me as it is a starting step. Thanks for your understanding.
First, you don't have to use .mat files or pickles. TensorFlow expects numpy arrays.
For instance, let's say you have 70000 images of size 28x28 (=784 dimensions) belonging to 10 classes. Let's also assume that you'd like to train a simple feedforward neural network to classify the images.
The first step would be to split the images between train and test (and validation, but let's put this aside for the sake of simplicity). For the sake of the example, let's imagine that you randomly chose 60000 images for your training set and 10000 for your test set.
The second step would be to ensure that your data has the right format. Here, you'd like your training set to consist of one numpy array of shape (60000, 784) for the images and another one of shape (60000, 10) for the labels (if you use one-hot encoding to represent your classes). As for your test set, you should have an array of shape (10000, 784) for the images and one of shape (10000, 10) for the labels.
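For instance (this is only a sketch, with random placeholder data standing in for your loaded face images and labels):

import numpy as np

# A stand-in for the loaded dataset: 70000 flattened 28x28 images and integer labels.
num_images, num_classes = 70000, 10
images = np.random.rand(num_images, 784).astype(np.float32)
int_labels = np.random.randint(0, num_classes, size=num_images)

# Shuffle, then split into 60000 training and 10000 test examples.
perm = np.random.permutation(num_images)
train_idx, test_idx = perm[:60000], perm[60000:]
x_train, x_test = images[train_idx], images[test_idx]

# One-hot encode the integer labels so that each label becomes a length-10 vector.
one_hot = np.eye(num_classes, dtype=np.int64)[int_labels]
y_train, y_test = one_hot[train_idx], one_hot[test_idx]

print(x_train.shape, y_train.shape)        # (60000, 784) (60000, 10)
print(x_test.shape, y_test.shape)          # (10000, 784) (10000, 10)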
Once you have these big numpy arrays, you should define placeholders that will allow you to feed data to your network during training and evaluation.
images = tf.placeholder(tf.float32, shape=[None, 784])
labels = tf.placeholder(tf.int64, shape=[None, 10])
The None here means that you can feed a batch of any size, i.e. as many images as you want, as long as your numpy array is of shape (anything, 784).
The third step consists of defining your model as well as the loss function and the optimizer.
The fourth step consists of training your network by feeding it with random batches of data using the placeholders created above. As your network is training, you can periodically print its performance, like the training loss/accuracy as well as the test loss/accuracy.
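To make these last two steps concrete, here is a condensed sketch in the same TF1 placeholder style, reusing the images/labels placeholders above and the x_train/y_train/x_test/y_test arrays from the earlier sketch (layer sizes and hyperparameters are arbitrary):

import numpy as np
import tensorflow as tf

# Step 3: model (one hidden layer), loss, and optimizer, built on the placeholders.
hidden = tf.layers.dense(images, 256, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, 10)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(
    labels=tf.cast(labels, tf.float32), logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
accuracy = tf.reduce_mean(
    tf.cast(tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1)), tf.float32))

# Step 4: feed random batches through the placeholders and periodically report test accuracy.
batch_size = 128
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        idx = np.random.randint(0, len(x_train), batch_size)
        sess.run(train_op, feed_dict={images: x_train[idx], labels: y_train[idx]})
        if step % 100 == 0:
            test_acc = sess.run(accuracy, feed_dict={images: x_test, labels: y_test})
            print("step %d: test accuracy %.3f" % (step, test_acc))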
You can find a complete and very simple example here.