I have been working on the MNIST dataset to learn how to use TensorFlow and Python for my deep learning course.
I want to resize the MNIST images to 22x22 using TensorFlow and then train on them, but I do not know how to do it.
Could you help me?
TheRevanchist's answer is correct. However, for the MNIST dataset, you first need to reshape the MNIST array before you send it to tf.image.resize_images():
import tensorflow as tf
import numpy as np
import cv2
mnist = tf.contrib.learn.datasets.load_dataset("mnist")
batch = mnist.train.next_batch(10)
X_batch = batch[0]
batch_tensor = tf.reshape(X_batch, [10, 28, 28, 1])
resized_images = tf.image.resize_images(batch_tensor, [22,22])
The code above takes a batch of 10 MNIST images, reshapes them from flat vectors into 28x28 images, and resizes them to 22x22 TensorFlow images.
If you want to display the images, you can use OpenCV and the code below. The resized_images.eval() call converts the TensorFlow tensor to a NumPy array.
with tf.Session() as sess:
    numpy_imgs = resized_images.eval(session=sess)  # mnist images converted to numpy array
    for i in range(10):
        cv2.namedWindow('Resized image #%d' % i, cv2.WINDOW_NORMAL)
        cv2.imshow('Resized image #%d' % i, numpy_imgs[i])
        cv2.waitKey(0)
Did you try tf.image.resize_images?
The method:
resize_images(images, size, method=ResizeMethod.BILINEAR,
              align_corners=False)
where images is a batch of images and size is a 1-D int32 tensor of two elements giving the new height and width. You can look at the full documentation here: https://www.tensorflow.org/api_docs/python/tf/image/resize_images
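For example, a minimal sketch (the zero-filled batch below is just a stand-in for real image data):
import tensorflow as tf

# a stand-in batch of ten 28x28 single-channel images
images = tf.zeros([10, 28, 28, 1])
# resize the whole batch to 22x22 (bilinear by default)
resized = tf.image.resize_images(images, [22, 22])
print(resized.shape)  # (10, 22, 22, 1)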
Updated: TensorFlow 2.4.1
Short Answer
Use tf.image.resize (instead of resize_images). The link the other answer provided no longer exists; the updated documentation is at https://www.tensorflow.org/api_docs/python/tf/image/resize.
Long Answer
MNIST in tf.keras.datasets.mnist is the following shape
(batch_size, 28 , 28)
Here is the full implementation. Please read the comments attached to the code.
import tensorflow as tf
import numpy as np

(x_train, y_train), (_, _) = tf.keras.datasets.mnist.load_data()
# expand new axis, channel axis
x_train = np.expand_dims(x_train, axis=-1)
# [optional]: we may need 3 channels (instead of 1)
x_train = np.repeat(x_train, 3, axis=-1)
# it's always better to normalize
x_train = x_train.astype('float32') / 255
# resize the input shape, i.e. old shape: 28, new shape: 32
x_train = tf.image.resize(x_train, [32, 32])  # if we want to resize
print(x_train.shape)
# (60000, 32, 32, 3)
You can use the cv2.resize() function from OpenCV:
Use a for loop to iterate through every image.
Inside the for loop, resize each image with cv2.resize(source_image, (22, 22)), as in the function below.
def resize(mnist):
    train_data = []
    for img in mnist.train._images:
        # each image comes as a flat 784-vector; reshape it to 28x28 first
        resized_img = cv2.resize(img.reshape(28, 28), (22, 22))
        train_data.append(resized_img)
    return train_data
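A minimal usage sketch, assuming the same MNIST object loaded earlier in this thread (the 55000 below is the default size of the training split):
import numpy as np

train_data = np.array(resize(mnist))  # stack the resized images into one array
print(train_data.shape)  # (55000, 22, 22)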
Related
I wanted to test my model by uploading an image, but I got this error. I think the error comes from somewhere in these lines; I'm just not sure how to fix it.
IMAGE_SIZE = [244,720]
inception = InceptionV3(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)
Also, here's the code for loading my test image:
picture = image.load_img('/content/DSC_0365.JPG', target_size=(244,720))
img = img_to_array(picture)
prediction = model.predict(img)
print (prediction)
I'm still a newbie in machine learning, so my knowledge is not that deep yet.
This is most likely because you didn't prepare your input (its dimensions) for the Inception model. Here is one possible solution.
Model
from tensorflow.keras.applications import *
IMAGE_SIZE = [244,720]
inception = InceptionV3(input_shape=IMAGE_SIZE + [3],
weights='imagenet', include_top=False)
# check its input shape
inception.input_shape
(None, 244, 720, 3)
Inference
Let's test a sample by passing it to the model.
from PIL import Image
a = Image.open('/content/1.png').convert('RGB')
display(a)
Check its basic properties.
a.mode, a.size, a.format
('RGB', (297, 308), None)
So, its shape is already (297 x 308 x 3). But to be able to pass it to the model, we need an extra axis, the batch axis. To do that, we can do:
import tensorflow as tf
import numpy as np
a = tf.expand_dims(np.array(a), axis=0)
a.shape
TensorShape([1, 308, 297, 3])
Much better. Now, we may want to normalize our data and resize it according to the model input shape. To do that, we can do:
a = tf.divide(a, 255)
a = tf.image.resize(a, [244,720])
a.shape
TensorShape([1, 244, 720, 3])
And lastly, pass it to the model.
inception(a).shape
TensorShape([1, 6, 21, 2048])
# or, preserve the prediction for later analysis
y_pred = inception(a)
Updated
If you're using the tf.keras image preprocessing function, which loads the image in PIL format, then we can simply do:
image = tf.keras.preprocessing.image.load_img('/content/1.png',
target_size=(244,720))
input_arr = tf.keras.preprocessing.image.img_to_array(image)
input_arr = np.array([input_arr]) # Convert single image to a batch.
inception(input_arr).shape
TensorShape([1, 6, 21, 2048])
I am running in Google Colab; the TensorFlow version is 2.2.0 and the Keras version is 2.3.0-tf.
Question:
How can I print the value of african_elephant_output? I tried print(african_elephant_output), but this prints only the following:
Tensor("Mul_1:0", shape=(None,), dtype=float32)
Location of code: see the code at In [31].
Relevant code is:
from keras.applications import VGG16
from keras import backend as K
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input, decode_predictions
import numpy as np
import tensorflow as tf
print (tf.__version__)
print (tf.keras.__version__)
# The local path to our target image
img_path = '/content/pic.jpg'
# `img` is a PIL image of size 224x224
img = image.load_img(img_path, target_size=(224, 224))
# `x` is a float32 Numpy array of shape (224, 224, 3)
x = image.img_to_array(img)
# We add a dimension to transform our array into a "batch"
# of size (1, 224, 224, 3)
x = np.expand_dims(x, axis=0)
# Finally we preprocess the batch
# (this does channel-wise color normalization)
x = preprocess_input(x)
K.clear_session()
# Note that we are including the densely-connected classifier on top;
# all previous times, we were discarding it.
model = VGG16(weights='imagenet')
preds = model.predict(x)
print('Predicted:', decode_predictions(preds, top=3)[0])
np.argmax(preds[0])
# this prints 386
# This is the "african elephant" entry in the prediction vector
african_elephant_output = model.output[:, 386]
Based on the TensorFlow documentation and my experience, you should be able to get the contents of a Tensor using the .numpy() API: african_elephant_output.numpy().
You can also check the tutorial from Tensorflow here for reference.
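Note that .numpy() works on eager tensors, while model.output in the code above is a symbolic tensor. A minimal sketch, assuming TF 2.x eager execution, would call the model on the preprocessed batch x directly and index the result:
# calling the model directly returns a concrete EagerTensor
preds_eager = model(x)                    # shape (1, 1000)
african_elephant_output = preds_eager[:, 386]
print(african_elephant_output.numpy())    # the class-386 score as a NumPy array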
This is my code below. It works fine for classification of two categories of images (it takes the labels from the directory names), but whenever I add one more directory it stops working. Can someone help me?
When I convert it from two labels/directories to three, I get an error, which is posted below. Can someone help me solve the problem? This is for image classification.
I have tried removing the NumPy array conversion; I saw somewhere that I need to just pass the images through a CNN, but I couldn't do that.
I am trying to make a classifier for pneumonia caused by coronavirus versus other diseases, using frontal chest X-rays.
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import cv2
import os
# construct the argument parser and parse the arguments
# initialize the initial learning rate, number of epochs to train for,
# and batch size
INIT_LR = 1e-3
EPOCHS = 40
BS = 66
# grab the list of images in our dataset directory, then initialize
# the list of data (i.e., images) and class images
print("[INFO] loading images...")
imagePaths = list(paths.list_images('/content/drive/My Drive/testset/'))
data = []
labels = []
# loop over the image paths
for imagePath in imagePaths:
    # extract the class label from the filename
    label = imagePath.split(os.path.sep)[-2]
    # load the image, swap color channels, and resize it to a fixed
    # 224x224 pixels while ignoring aspect ratio
    image = cv2.imread(imagePath)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = cv2.resize(image, (224, 224))
    # update the data and labels lists, respectively
    data.append(image)
    labels.append(label)
# convert the data and labels to NumPy arrays while scaling the pixel
# intensities to the range [0, 1]
data = np.array(data) / 255.0
labels = np.array(labels)
# perform one-hot encoding on the labels
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)
# partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing
(trainX, testX, trainY, testY) = train_test_split(data, labels,
    test_size=0.20, stratify=labels, random_state=42)
# initialize the training data augmentation object
trainAug = ImageDataGenerator(
    rotation_range=15,
    fill_mode="nearest")
# load the VGG16 network, ensuring the head FC layer sets are left
# off
baseModel = VGG16(weights="imagenet", include_top=False,
    input_tensor=Input(shape=(224, 224, 3)))
# construct the head of the model that will be placed on top of
# the base model
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(4, 4))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(64, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(2, activation="softmax")(headModel)
# place the head FC model on top of the base model (this will become
# the actual model we will train)
model = Model(inputs=baseModel.input, outputs=headModel)
# loop over all layers in the base model and freeze them so they will
# *not* be updated during the first training process
for layer in baseModel.layers:
    layer.trainable = False
# compile our model
print("[INFO] compiling model...")
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="binary_crossentropy", optimizer=opt, metrics=["accuracy"])
# train the head of the network
print("[INFO] training head...")
H = model.fit(
    trainAug.flow(trainX, trainY, batch_size=BS),
    steps_per_epoch=len(trainX) // BS,
    validation_data=(testX, testY),
    validation_steps=len(testX) // BS,
    epochs=EPOCHS)
# make predictions on the testing set
print("[INFO] evaluating network...")
predIdxs = model.predict(testX, batch_size=BS)
# for each image in the testing set we need to find the index of the
# label with corresponding largest predicted probability
predIdxs = np.argmax(predIdxs, axis=1)
# show a nicely formatted classification report
print(classification_report(testY.argmax(axis=1), predIdxs,
    target_names=lb.classes_))
# compute the confusion matrix and use it to derive the raw
# accuracy, sensitivity, and specificity
cm = confusion_matrix(testY.argmax(axis=1), predIdxs)
total = sum(sum(cm))
acc = (cm[0, 0] + cm[1, 1]) / total
sensitivity = cm[0, 0] / (cm[0, 0] + cm[0, 1])
specificity = cm[1, 1] / (cm[1, 0] + cm[1, 1])
# show the confusion matrix, accuracy, sensitivity, and specificity
print(cm)
print("acc: {:.4f}".format(acc))
print("sensitivity: {:.4f}".format(sensitivity))
print("specificity: {:.4f}".format(specificity))
# plot the training loss and accuracy
N = EPOCHS
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy on COVID-19 Dataset")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig("plot.png")
# serialize the model to disk
print("[INFO] saving COVID-19 detector model...")
model.save('/content/drive/My Drive/setcovid/model.h5', )
This is the error I got in my program
There are a few changes you need to make to get it to work. The error you're getting is because of the one-hot encoding: you're encoding your labels to one-hot twice.
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)
Please remove the last line, to_categorical, from your code. With two classes, LabelBinarizer returns a single column, so to_categorical happened to work; with three classes, LabelBinarizer already returns the one-hot encoding in the correct format, and applying to_categorical on top of it breaks the shape. Removing it will fix the error you're getting now.
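To see why the double encoding breaks with three classes, here is a small standalone sketch (the class names are made up for illustration):
import numpy as np
from sklearn.preprocessing import LabelBinarizer
from tensorflow.keras.utils import to_categorical

labels = np.array(['covid', 'normal', 'pneumonia'])
one_hot = LabelBinarizer().fit_transform(labels)
print(one_hot.shape)                  # (3, 3) -- already one-hot, ready for training
print(to_categorical(one_hot).shape)  # (3, 3, 2) -- encoded twice, wrong rank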
And there is another problem I must mention. Your model output layer has only 2 neurons but you want to classify 3 classes. Please set the output layer neurons to 3.
headModel = Dense(3, activation="softmax")(headModel)
And since you're now training with 3 classes, it's not binary anymore, so you have to use another loss. I recommend categorical cross-entropy:
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
You also forgot the following imports. Add these too:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.layers import *
And you're good to go.
Btw, I'm a bit wary of the batch size (66) you're using. I don't know which GPU you have, but I would still suggest you decrease it.
I want to use a feature extractor (such as ResNet101) and add layers after that which use the output of the feature extractor layer. However, I can't seem to figure out how. I have only found solutions online where an entire network is used without adding additional layers.
I am inexperienced with Tensorflow.
In the code below you can see what I have tried. I can run the code properly without the additional convolutional layer; however, my goal is to add more layers after the ResNet.
With this attempt at adding the extra conv layer, this type error is returned:
TypeError: Expected float32, got OrderedDict([('resnet_v1_101/conv1', ...
Once I have added more layers, I would like to start training on a very small test set to see if my model can overfit.
import tensorflow as tf
import tensorflow.contrib.slim as slim
from tensorflow.contrib.slim.python.slim.nets import resnet_v1
import matplotlib.pyplot as plt
numclasses = 17
from google.colab import drive
drive.mount('/content/gdrive')
def decode_text(filename):
    img = tf.io.decode_jpeg(tf.io.read_file(filename))
    img = tf.image.resize_bilinear(tf.expand_dims(img, 0), [224, 224])
    img = tf.squeeze(img, 0)
    img.set_shape((None, None, 3))
    return img
dataset = tf.data.TextLineDataset(tf.cast('gdrive/My Drive/5LSM0collab/filenames.txt', tf.string))
dataset = dataset.map(decode_text)
dataset = dataset.batch(2, drop_remainder=True)
img_1 = dataset.make_one_shot_iterator().get_next()
net = resnet_v1.resnet_v1_101(img_1, 2048, is_training=False, global_pool=False, output_stride=8)
net = slim.conv2d(net, numclasses, 1)
sess = tf.Session()
global_init = tf.global_variables_initializer()
local_init = tf.local_variables_initializer()
sess.run(global_init)
sess.run(local_init)
img_out, conv_out = sess.run((img_1, net))
resnet_v1.resnet_v1_101 does not return just net, but instead returns a tuple net, end_points. The second element is a dictionary, which is presumably why you are getting this particular error message.
From the documentation of this function:
Returns:
net: A rank-4 tensor of size [batch, height_out, width_out, channels_out]. If global_pool is False, then height_out and width_out are reduced by a factor of output_stride compared to the respective height_in and width_in; else both height_out and width_out equal one. If num_classes is 0 or None, then net is the output of the last ResNet block, potentially after global average pooling. If num_classes is a non-zero integer, net contains the pre-softmax activations.
end_points: A dictionary from components of the network to the corresponding activation.
So you can write for example:
net, _ = resnet_v1.resnet_v1_101(img_1, 2048, is_training=False, global_pool=False, output_stride=8)
net = slim.conv2d(net, numclasses, 1)
You can also choose an intermediate layer, e.g.:
_, end_points = resnet_v1.resnet_v1_101(img_1, 2048, is_training=False, global_pool=False, output_stride=8)
net = slim.conv2d(end_points["main_Scope/resnet_v1_101/block3"], numclasses, 1)
(You can look into end_points to find the names of the endpoints; your scope name will be different from main_Scope. A quick sketch for listing them follows.)
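For instance, to list the available endpoint names and the shape of each activation (the exact keys depend on your variable scope):
# inspect the dictionary to find usable endpoint names
for name in sorted(end_points):
    print(name, end_points[name].shape)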
I am reading a batch of MNIST data using the built-in TensorFlow datasets module, which gives a NumPy array as a batch. However, if I copy the array into another variable and make changes to that second variable, the original batch array is also changed.
I don't understand why there is any connection between the original array and the copied array.
You can test on this CoLab link:
https://colab.research.google.com/drive/1DN4n5_YCO33LozxtidM7STqEAUWypNOv
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
def test_reconstruction(mnist, h=28, w=28, batch_size=100):
    # Test the trained model: reconstruction
    batch = mnist.test.next_batch(batch_size)
    batch_clean = batch[0]
    print('before damage:', np.mean(batch_clean))
    batch_damaged = np.reshape(batch_clean, (batch_size, 28, 28))
    tmp = batch_damaged
    tmp[:, 10:20, 10:20] = 0
    print('after damage:', np.mean(batch_clean))
test_reconstruction(mnist)
Expected: both print statements should show the same mean value.
Actual: I am getting different mean values from the two print statements.
In your line batch_damaged = np.reshape(batch_clean, (batch_size, 28, 28)), np.reshape returns a view that shares the underlying data of batch_clean rather than copying its values, so writing into batch_damaged (via tmp) also modifies batch_clean. You should use numpy.copy to get an independent copy of your array.
batch_damaged = np.copy(np.reshape(batch_clean, (batch_size, 28, 28)))
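Here is a small standalone sketch of the difference between a reshaped view and a copy:
import numpy as np

a = np.arange(4, dtype=np.float32)

view = np.reshape(a, (2, 2))  # shares memory with a
view[0, 0] = -1
print(a[0])  # -1.0: the original changed too

copied = np.copy(np.reshape(a, (2, 2)))  # independent buffer
copied[0, 1] = -2
print(a[1])  # 1.0: the original is untouched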