https://www.tensorflow.org/tutorials/images/classification
TensorFlow has a beautiful tutorial on how to build an image classifier model to detect cats and dogs...
But they left out two crucial steps.
Step 1: How do you prepare an image to feed to the freshly built model?
Step 2: How do you feed the model?
This is what I tried, without success.
new_array = cv2.imread('cat.jpg')         # <-- read the image with OpenCV
dImg = new_array.reshape(1, 150, 150, 3)  # <-- convert it to a 4D input
prediction = model.predict(dImg / 255)    # <-- scale down by 255??? Idk, I'm guessing
print(str(prediction[0][0]) + " cat")     # <-- print the list-of-lists prediction, which right now gives unusable results
Update
new_array = cv2.imread('dog.jpg')
dImg = new_array.reshape(1, 150, 150, 3)
prediction = model.predict(dImg / 255)
print(str(int(prediction[0][0])) + ' dog')
print(prediction[0][0])
new_array = cv2.imread('dog-2.jpg')
dImg = new_array.reshape(1, 150, 150, 3)
prediction = model.predict(dImg / 255)
print(str(int(prediction[0][0])) + ' dog')
print(prediction[0][0])
new_array = cv2.imread('dog-3.jpg')
dImg = new_array.reshape(1, 150, 150, 3)
prediction = model.predict(dImg / 255)
print(str(int(prediction[0][0])) + ' dog')
print(prediction[0][0])
new_array = cv2.imread('dog-4.jpg')
dImg = new_array.reshape(1, 150, 150, 3)
prediction = model.predict(dImg / 255)
print(str(int(prediction[0][0])) + ' dog')
print(prediction[0][0])
Result
0 cat
0.5860402
-2 cat
-2.1347654
-1 cat
-1.380995
-4 cat
-4.0731945
1 dog
1.6571417
1 dog
1.759522
0 dog
-0.05260024
0 dog
-0.827193
To answer your questions and uncertainties:
"<---- Scale down by 255 ??? Idk im guessing "
This depends on how the model has been trained. If the images were divided to 255 before being fed to the model, then when you test you must divide to 255.
Testing on one image in Keras and TensorFlow means you need to add the batch_axis. In Keras and TensorFlow, one can only predict on batches of data.
Therefore, when loading an image (regardless of OpenCV and PIL):
import cv2
import numpy as np

image = cv2.imread(image_path)  # note that OpenCV loads in BGR format
# If necessary, convert to RGB (depends again on how the model was trained)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# If necessary, divide by 255 (cast to float first; an in-place /= fails on a uint8 array)
image = image.astype(np.float32) / 255
image = np.expand_dims(image, axis=0)  # or tf.expand_dims(image, axis=0)
model.predict(image)
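One detail the snippet above leaves implicit: reshape only rearranges existing pixels, so it fails (or silently scrambles the image) unless the file on disk is already 150x150. A minimal end-to-end sketch, assuming the tutorial's 150x150 input size; model and 'cat.jpg' come from the question:
import cv2
import numpy as np

image = cv2.imread('cat.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # BGR -> RGB
image = cv2.resize(image, (150, 150))           # resize, don't reshape, to the input size
image = image.astype(np.float32) / 255          # same scaling as during training
image = np.expand_dims(image, axis=0)           # add the batch axis -> (1, 150, 150, 3)
prediction = model.predict(image)
print(prediction[0][0])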
Related
I am practicing with one of the Keras image-captioning models (LINK IS HERE). Basically, this model takes the Flickr8k dataset, where each image has 5 captions, like below:
1000268201_693b08cb0e.jpg,A child in a pink dress is climbing up a set of stairs in an entry way .
1000268201_693b08cb0e.jpg,A girl going into a wooden building .
1000268201_693b08cb0e.jpg,A little girl climbing into a wooden playhouse .
1000268201_693b08cb0e.jpg,A little girl climbing the stairs to her playhouse .
1000268201_693b08cb0e.jpg,A little girl in a pink dress going into a wooden cabin .
During training, they feed all five captions with each image, as below:
for i in range(self.num_captions_per_image):
    loss, acc = self._compute_caption_loss_and_acc(
        img_embed, batch_seq[:, i, :], training=False
    )
When I train this model and try to predict on some images from the validation data, it generates only one caption per image, while I need to generate the same number of captions per image as the model was trained on (i.e. 5 captions per image).
I am confused about which part of the code below I should change to generate 5 captions per image, matching my validation data.
vocab = vectorization.get_vocabulary()
index_lookup = dict(zip(range(len(vocab)), vocab))
max_decoded_sentence_length = SEQ_LENGTH - 1
valid_images = list(valid_data.keys())
def generate_caption():
    # Select a random image from the validation dataset
    sample_img = np.random.choice(valid_images)

    # Read the image from the disk
    sample_img = decode_and_resize(sample_img)
    img = sample_img.numpy().clip(0, 255).astype(np.uint8)
    plt.imshow(img)
    plt.show()

    # Pass the image to the CNN
    img = tf.expand_dims(sample_img, 0)
    img = caption_model.cnn_model(img)

    # Pass the image features to the Transformer encoder
    encoded_img = caption_model.encoder(img, training=False)

    # Generate the caption using the Transformer decoder
    decoded_caption = "<start> "
    for i in range(max_decoded_sentence_length):
        tokenized_caption = vectorization([decoded_caption])[:, :-1]
        mask = tf.math.not_equal(tokenized_caption, 0)
        predictions = caption_model.decoder(
            tokenized_caption, encoded_img, training=False, mask=mask
        )
        sampled_token_index = np.argmax(predictions[0, i, :])
        sampled_token = index_lookup[sampled_token_index]
        if sampled_token == " <end>":
            break
        decoded_caption += " " + sampled_token

    decoded_caption = decoded_caption.replace("<start> ", "")
    decoded_caption = decoded_caption.replace(" <end>", "").strip()
    print("Predicted Caption: ", decoded_caption)

# Check predictions for a few samples
generate_caption()
Predicted Caption: a group of dogs race in the snow
The above code generates just one caption for the above image, while in the actual dataset this image has 5 captions. I have made a rough image in Paint to show what I really want. Is it possible to generate multi-output captions from this model?
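One observation that may help (not from the original post): the decode loop above takes the argmax token at every step, which is deterministic, so it can only ever produce one caption per image. A minimal sketch of one common workaround is to sample each token from the predicted distribution and repeat the loop k times; it reuses the names from the question's code and assumes predictions[0, i, :] is a probability distribution over the vocabulary, as in the Keras example:
import numpy as np
import tensorflow as tf

def generate_k_captions(sample_img, k=5):
    # Encode the image once, exactly as in generate_caption()
    img = tf.expand_dims(sample_img, 0)
    img = caption_model.cnn_model(img)
    encoded_img = caption_model.encoder(img, training=False)

    captions = []
    for _ in range(k):
        decoded_caption = "<start> "
        for i in range(max_decoded_sentence_length):
            tokenized_caption = vectorization([decoded_caption])[:, :-1]
            mask = tf.math.not_equal(tokenized_caption, 0)
            predictions = caption_model.decoder(
                tokenized_caption, encoded_img, training=False, mask=mask
            )
            # Sample instead of argmax, so each pass can take a different path
            probs = np.asarray(predictions[0, i, :]).astype("float64")
            probs = probs / probs.sum()  # renormalize against float error
            sampled_token_index = np.random.choice(len(probs), p=probs)
            sampled_token = index_lookup[sampled_token_index]
            if sampled_token == " <end>":
                break
            decoded_caption += " " + sampled_token
        captions.append(
            decoded_caption.replace("<start> ", "").replace(" <end>", "").strip()
        )
    return captions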
I am using this function to predict the output on images the model has never seen:
def predictor(img, model):
    image = cv2.imread(img)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (224, 224))
    image = np.array(image, dtype='float32') / 255.0
    plt.imshow(image)
    image = image.reshape(1, 224, 224, 3)
    clas = model.predict(image).argmax()
    name = dict_class[clas]
    print('The given image is of \nClass: {0} \nSpecies: {1}'.format(clas, name))
How do I change it if I want the top 2 (or k) predictions? I.e.:
70% chance it's a dog
15% chance it's a bear
If you are using TensorFlow + Keras and probably doing multi-class classification, then the output of model.predict() is a tensor representing either the logits or already the probabilities (softmax on top of logits).
I am taking this example from here and slightly modifying it: https://www.tensorflow.org/api_docs/python/tf/math/top_k.
# After the softmax, the probabilities add up to 1
network_predictions = [0.7,0.2,0.05,0.05]
prediction_probabilities = tf.math.top_k(network_predictions, k=2)
top_2_scores = prediction_probabilities.values.numpy()
dict_class_entries = prediction_probabilities.indices.numpy()
And in dict_class_entries you then have the indices, ordered by descending probability (i.e. dict_class_entries[0] = 0 (corresponding to 0.7) and top_2_scores[0] = 0.7, etc.).
You just need to replace network_predictions with model.predict(image) (keeping in mind that predict returns a batch, so take the first row).
Notice I removed the argmax() in order to send the whole array of probabilities instead of just the index of the maximum score/probability (which is what argmax() gives).
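Putting that together with the predictor() from the question, a sketch of a top-k version might look like this (dict_class is the question's mapping; the k parameter and the output formatting are assumptions):
import cv2
import numpy as np
import tensorflow as tf

def predictor_top_k(img, model, k=2):
    image = cv2.imread(img)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (224, 224))
    image = np.array(image, dtype='float32') / 255.0
    image = image.reshape(1, 224, 224, 3)

    probs = model.predict(image)[0]  # drop the batch axis
    top = tf.math.top_k(probs, k=k)  # scores and indices, highest first
    for score, idx in zip(top.values.numpy(), top.indices.numpy()):
        print('{:.0%} chance it is {}'.format(score, dict_class[idx]))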
General Explanation:
My code runs fine, but the results are weird. I don't know if the problem is with
the network structure,
or the way I feed the data to the network,
or anything else.
I have been struggling with this for several weeks; so far I have changed the loss function, optimizer, data generator, etc., but I could not solve it. I appreciate any help.
If the following information is not enough, let me know, please.
Field of study:
I am using TensorFlow and Keras for multiclass classification. The dataset has 36 binary human attributes. I have used ResNet50, and then for each part of the body (head, upper body, lower body, shoes, accessories) I have added a separate branch to the network. The network has 1 input image with 36 labels and 36 output nodes (36 dense layers with sigmoid activation).
Problem:
The problem is that the accuracy Keras reports is high, but the f1-score is very low or zero for most of the outputs (even when I use f1-score as a metric when compiling the network, the f1-score for validation is very bad).
After training, when I use the network in prediction mode, it always returns one/zero for some classes. This means the network is not able to learn (even when I use a weighted loss function or a focal loss function).
Why is it weird? Because state-of-the-art methods report a high f1-score even after the first epoch (e.g. https://github.com/chufengt/iccv19_attribute, which I have run on my PC and got good results after one epoch).
Parts of the Code:
print("setup model ...")
input_image = KL.Input(args.img_input_shape, name="input_1")
C1, C2, C3, C4, C5 = resnet_graph(input_image, architecture="resnet50", stage5=False, train_bn=True)
output_layers = merged_model(input_features=C4)
model = Model(inputs=input_image, outputs=output_layers, name='SoftBiometrics_Model')
...
print("model compiling ...")
OPTIM = optimizers.Adadelta(lr=args.learning_rate, rho=0.95)
model.compile(optimizer=OPTIM, loss=binary_focal_loss(alpha=.25, gamma=2), metrics=['acc', get_f1])
plot_model(model, to_file='model.png')
...
img_datagen = ImageDataGenerator(rotation_range=6, width_shift_range=0.03, height_shift_range=0.03, brightness_range=[0.85,1.15], shear_range=0.06, zoom_range=0.09, horizontal_flip=True, preprocessing_function=preprocess_input_resnet, rescale=1/255.)
img_datagen_test = ImageDataGenerator(preprocessing_function=preprocess_input_resnet, rescale=1/255.)
def multiple_outputs(generator, dataframe, batch_size, x_col):
    Gen = generator.flow_from_dataframe(dataframe=dataframe,
                                        directory=None,
                                        x_col=x_col,
                                        y_col=args.Categories,
                                        target_size=(args.img_input_shape[0], args.img_input_shape[1]),
                                        class_mode="multi_output",
                                        classes=None,
                                        batch_size=batch_size,
                                        shuffle=True)
    while True:
        gnext = Gen.next()
        # return the image batch and the 36 sets of labels
        labels = gnext[1]
        output_dict = {"{}_output".format(Category): np.array(labels[index])
                       for index, Category in enumerate(args.Categories)}
        yield {'input_1': gnext[0]}, output_dict
trainGen = multiple_outputs(generator=img_datagen, dataframe=Train_df_img, batch_size=args.BATCH_SIZE, x_col="Train_Filenames")
testGen = multiple_outputs(generator=img_datagen_test, dataframe=Test_df_img, batch_size=args.BATCH_SIZE, x_col="Test_Filenames")
STEP_SIZE_TRAIN = len(Train_df_img["Train_Filenames"]) // args.BATCH_SIZE
STEP_SIZE_VALID = len(Test_df_img["Test_Filenames"]) // args.BATCH_SIZE
...
print("Fitting the model to the data ...")
history = model.fit_generator(generator=trainGen,
                              epochs=args.Number_of_epochs,
                              steps_per_epoch=STEP_SIZE_TRAIN,
                              validation_data=testGen,
                              validation_steps=STEP_SIZE_VALID,
                              callbacks=[chekpont],
                              verbose=1)
There is a possibility that you are passing a binary f1-score to the compile function. This should fix the problem:
pip install tensorflow-addons
...
import tensorflow_addons as tfa
f1 = tfa.metrics.F1Score(num_classes=36, average='micro')  # or average='macro'
model.compile(...,metrics=[f1])
You can read more here about how f1-micro and f1-macro are calculated and when each can be useful.
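To make the difference concrete, here is a tiny illustration (not from the answer) on a 3-class toy example; with imbalanced attributes like the 36 in the question, macro averaging punishes rare classes much more than micro:
import numpy as np
import tensorflow_addons as tfa

y_true = np.array([[1, 0, 1], [1, 0, 0], [1, 1, 0]], dtype=np.float32)
y_pred = np.array([[1, 0, 1], [1, 0, 1], [1, 0, 0]], dtype=np.float32)

for avg in ('micro', 'macro'):
    m = tfa.metrics.F1Score(num_classes=3, average=avg, threshold=0.5)
    m.update_state(y_true, y_pred)
    # micro pools all decisions; macro averages the per-class f1 scores
    print(avg, float(m.result()))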
Somehow, the predict_generator() of Keras' model does not work as expected. I would rather loop through all test images one by one and get the prediction for each image in each iteration. I am using Plaid-ML Keras as my backend, and to get predictions I am using the following code.
import os
from PIL import Image
import keras
import numpy
print("Prediction result:")
dir = "/path/to/test/images"
files = os.listdir(dir)
correct = 0
total = 0
# dictionary mapping class indices to labels
classes = {
    0: 'This is Cat',
    1: 'This is Dog',
}
for file_name in files:
    total += 1
    image = Image.open(dir + "/" + file_name).convert('RGB')
    image = image.resize((100, 100))
    image = numpy.array(image)
    image = numpy.expand_dims(image, axis=0)
    image = image / 255
    pred = model.predict_classes([image])[0]
    sign = classes[pred]
    # compare case-insensitively, since the labels say "Cat"/"Dog"
    if ("cat" in file_name) and ("cat" in sign.lower()):
        correct += 1
        print(correct, ". ", file_name, sign)
    elif ("dog" in file_name) and ("dog" in sign.lower()):
        correct += 1
        print(correct, ". ", file_name, sign)

print("accuracy: ", (correct / total))
I have successfully converted a quantized 8-bit tflite model for object detection. My model was originally trained on images normalized by dividing by 255, so the original input range is [0, 1]. Since my quantized tflite model requires uint8 input, how can I convert my image (originally [0, 255]) to be correct for my network?
Also, how can I convert the output to float to compare the results with the floating-point model?
The following code does not give me the right result.
im = cv2.imread(image_path)
im = im.astype(np.float32, copy=False)
input_image = im
input_image = np.array(input_image, dtype=np.uint8)
input_image = np.expand_dims(input_image, axis=0)
interpreter.set_tensor(input_details[0]['index'], input_image)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
output_data2 = interpreter.get_tensor(output_details[1]['index'])
output_data3 = interpreter.get_tensor(output_details[2]['index'])
min_1 = -8.198164939880371
max_1 = 8.798029899597168
scale = (max_1 - min_1) / 255.0
min_2 = -9.77856159210205
max_2 = 10.169703483581543
scale_2 = (max_2 - min_2) / 255.0
min_3 = -14.382895469665527
max_3 = 11.445544242858887
scale_3 = (max_3 - min_3) / 255.0
output_data = output_data * scale + min_1
output_data2 = output_data2 * scale_2 + min_2
output_data3 = output_data3 * scale_3 + min_3
I met the same problem, but in pose estimation.
Have you solved the problem yet?
Did you use quantization-aware training?
I think you can get a q (scale) and z (zero-point) value for your input image (because you have to give the mean and std-dev when you use the tflite API or the toco command to get a quantized 8-bit tflite model).
Try these:
image = q_input * (image - z_input)
output_data = q_output * (image - z_output)
etc.
(for different layers you can access different q and z values)
Let me know if you tried it this way.
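If you are using the TF Lite Python interpreter, you also do not have to hard-code the min/max values: each tensor's scale and zero-point are exposed in the tensor details. A minimal sketch under that assumption, reusing interpreter, input_details, output_details, and im from the question:
import numpy as np

# Quantize the float input (im / 255 gives the [0, 1] range the model was
# trained on) to uint8 using the input tensor's (scale, zero_point)
in_scale, in_zero = input_details[0]['quantization']
q_input = np.clip(np.round(im / 255.0 / in_scale + in_zero), 0, 255).astype(np.uint8)
q_input = np.expand_dims(q_input, axis=0)

interpreter.set_tensor(input_details[0]['index'], q_input)
interpreter.invoke()

# Dequantize each output back to float to compare with the float model
out_scale, out_zero = output_details[0]['quantization']
raw = interpreter.get_tensor(output_details[0]['index'])
output_float = (raw.astype(np.float32) - out_zero) * out_scale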
I've converted the image via OpenCV to "CV_8UC3" and this worked for me:
// Convert to RGB color space
if (image.channels() == 1) {
cv::cvtColor(image, image, cv::COLOR_GRAY2RGB);
} else {
cv::cvtColor(image, image, cv::COLOR_BGR2RGB);
}
image.convertTo(image, CV_8UC3);
I have used one-hot encoding on my dataset before training my SVM classifier, which increased the number of features in the training set to 982. But during prediction on the test dataset, which has 7 features, I am getting the error "X has 7 features per sample; expecting 982". I don't understand how to get the same number of features in the test dataset.
My code is:
import numpy as np
import pandas as pd
from sklearn import svm
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

df = pd.read_csv('train.csv', header=None)
features = df.iloc[:,:-1].values
labels = df.iloc[:,-1].values
encode = LabelEncoder()
features[:,2] = encode.fit_transform(features[:,2])
features[:,3] = encode.fit_transform(features[:,3])
features[:,4] = encode.fit_transform(features[:,4])
features[:,5] = encode.fit_transform(features[:,5])
df1 = pd.DataFrame(features)
#--------------------------- ONE HOT ENCODING --------------------------------#
hotencode = OneHotEncoder(categorical_features=[2])
features = hotencode.fit_transform(features).toarray()
hotencode = OneHotEncoder(categorical_features=[14])
features = hotencode.fit_transform(features).toarray()
hotencode = OneHotEncoder(categorical_features=[37])
features = hotencode.fit_transform(features).toarray()
hotencode = OneHotEncoder(categorical_features=[466])
features = hotencode.fit_transform(features).toarray()
X = np.array(features)
y = np.array(labels)
clf = svm.LinearSVC()
clf.fit(X,y)
d_test = pd.read_csv('query.csv')
Z_test =np.array(d_test)
confidence = clf.predict(Z_test)
print("The query image belongs to Class ")
print(confidence)
######################### test dataset
query.csv
1 0.076 1 3232236298 2886732679 3128 60604
The short answer: you need to apply the same OHE transform (or LE+OHE, in your case) to the test set.
For good advice, see Scikit Learn OneHotEncoder fit and transform Error: ValueError: X has different shape than during fitting or How to deal with imputation and hot one encoding in pandas?
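For completeness, a minimal sketch with the current scikit-learn API (ColumnTransformer instead of the deprecated categorical_features argument); the file names and the categorical columns [2, 3, 4, 5] are taken from the question, everything else is an assumption:
import pandas as pd
from sklearn import svm
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

train = pd.read_csv('train.csv', header=None)
X_train, y_train = train.iloc[:, :-1], train.iloc[:, -1]

# Fit the encoder on the training data only; OneHotEncoder handles the raw
# categorical columns directly, so the LabelEncoder step can be dropped
ct = ColumnTransformer(
    [('ohe', OneHotEncoder(handle_unknown='ignore'), [2, 3, 4, 5])],
    remainder='passthrough')
X_train_enc = ct.fit_transform(X_train)

clf = svm.LinearSVC()
clf.fit(X_train_enc, y_train)

# Reuse the SAME fitted transformer on the 7-feature query row
# (header=None, so the single row is read as data, like train.csv)
query = pd.read_csv('query.csv', header=None)
print(clf.predict(ct.transform(query)))
The key point is that fit_transform is called only on the training data, and the same fitted transformer is reused with transform on the query row, so both end up with the same 982 columns.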