Reshaping image and Plotting in Python - tensorflow

I am working on the fashion_mnist data. The images in the dataset are 28x28 pixels. To feed them to a neural network (multi-layer perceptron), I transformed the data into shape (784,).
Now I need to reshape it back to the original size.
For this, I used the code below:
from keras.datasets import fashion_mnist
import numpy as np
import matplotlib.pyplot as plt
(train_imgs,train_lbls), (test_imgs, test_lbls) = fashion_mnist.load_data()
plt.imshow(test_imgs[0].reshape(28,28))
no_of_test_imgs = test_imgs.shape[0]
test_imgs_trans = test_imgs.reshape(test_imgs.shape[1]*test_imgs.shape[2], no_of_test_imgs).T
plt.imshow(test_imgs_trans[0].reshape(28,28))
Unfortunately, I am not getting the same image, and I am not able to understand why this is happening.
expected image:
received image:
Kindly help me to resolve the problem.

Pay attention to how you flatten the images into test_imgs_trans: reshape fills the new array in row-major order, so the image index must stay on the first axis (one row per image), rather than reshaping to (784, num_images) and transposing.
import tensorflow as tf
import matplotlib.pyplot as plt

(train_imgs, train_lbls), (test_imgs, test_lbls) = tf.keras.datasets.fashion_mnist.load_data()
plt.imshow(test_imgs[0].reshape(28, 28))
no_of_test_imgs = test_imgs.shape[0]
# keep the image index on the first axis: (10000, 28, 28) -> (10000, 784)
test_imgs_trans = test_imgs.reshape(no_of_test_imgs, test_imgs.shape[1] * test_imgs.shape[2])
plt.imshow(test_imgs_trans[0].reshape(28, 28))
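To see why the original reshape goes wrong, here is a minimal sketch using a tiny synthetic array (two 2x2 "images") instead of the real dataset: reshaping to (pixels, num_images) interleaves pixels from different images, and the trailing .T cannot undo that.
import numpy as np

imgs = np.arange(8).reshape(2, 2, 2)   # two tiny 2x2 "images"

wrong = imgs.reshape(2 * 2, 2).T       # mixes pixels across the two images
right = imgs.reshape(2, 2 * 2)         # keeps each image's pixels together

print(wrong[0])  # [0 2 4 6] - not the first image
print(right[0])  # [0 1 2 3] - the first image, flattened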

Related

Problem with manual data for PyTorch's DataLoader

I have a dataset that I have to process so that it works with a convolutional neural network in PyTorch (I'm completely new to PyTorch). The data is stored in a dataframe with a column for pictures (28 x 28 ndarrays with int32 entries) and a column with the class labels. The pixels of the images only take the values +1 and -1 (since it is simulation data of a classical 2D Ising model). The dataframe looks like this.
I imported the following (a lot of this is not relevant for now, but I included everything for completeness. "data_loader" is a custom py file.):
import numpy as np
import matplotlib.pyplot as plt
import data_loader
import pandas as pd
import torch
import torchvision.transforms as T
from torchvision.utils import make_grid
from torch.nn import Module
from torch.nn import Conv2d
from torch.nn import Linear
from torch.nn import MaxPool2d
from torch.nn import ReLU
from torch.nn import LogSoftmax
from torch import flatten
from sklearn.metrics import classification_report
import time as time
from torch.utils.data import DataLoader, Dataset
Then, I want to get this into the correct shape to make it useful for PyTorch. I do this by defining the following class:
class MetropolisDataset(Dataset):
    def __init__(self, data_frame, transform=None):
        self.data_frame = data_frame
        self.transform = transform

    def __len__(self):
        return len(self.data_frame)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        label = self.data_frame['label'].iloc[idx]
        image = self.data_frame['image'].iloc[idx]
        image = np.array(image)
        if self.transform:
            image = self.transform(image)
        return (image, label)
I call instances of this class as:
train_set = MetropolisDataset(data_frame=df_train,
                              transform=T.Compose([T.ToPILImage(),
                                                   T.ToTensor()]))

validation_set = MetropolisDataset(data_frame=df_validation,
                                   transform=T.Compose([T.ToPILImage(),
                                                        T.ToTensor()]))

test_set = MetropolisDataset(data_frame=df_test,
                             transform=T.Compose([T.ToPILImage(),
                                                  T.ToTensor()]))
The problem does not yet arise here, because I am able to read out and show images from these instances of the above defined class.
Then, as far as I found out, it is necessary to pass this through PyTorch's DataLoader, which I do as follows:
batch_size = 64
train_dl = DataLoader(train_set, batch_size, shuffle=True, num_workers=3, pin_memory=True)
validation_dl = DataLoader(validation_set, batch_size, shuffle=True, num_workers=3, pin_memory=True)
test_dl = DataLoader(test_set, batch_size, shuffle=True, num_workers=3, pin_memory=True)
However, when I want to use these instances of the DataLoader, simply nothing happens: I neither get an error, nor does the computation seem to get anywhere. I tried to run a CNN, but it does not seem to compute anything. Something else I tried was to show some sample images with the code provided by this article, but the same issue occurs. The sample code is:
def show_images(images, nmax=10):
    fig, ax = plt.subplots(figsize=(8, 8))
    ax.set_xticks([]); ax.set_yticks([])
    ax.imshow(make_grid((images.detach()[:nmax]), nrow=8).permute(1, 2, 0))

def show_batch(dl, nmax=64):
    for images in dl:
        show_images(images, nmax)
        break
show_batch(test_dl)
It seems that there is some error in the implementation of my MetropolisDataset class or with the DataLoader itself. How could this problem be solved?
As mentioned in the comments, the problem was partly solved by setting num_workers to zero, since I was working in a Jupyter notebook, as answered here. However, this left one further problem open: I got errors when I wanted to use the DataLoader to run a CNN. The issue was that my data consisted of int32 numbers instead of float32. I do not include further code, because this was related directly to my data; however, the issue was (as very often) merely a wrong datatype.
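In terms of the code above, the two fixes amount to roughly the following one-line changes (where exactly the float32 cast goes is my assumption; any cast before the tensor conversion should work):
# inside MetropolisDataset.__getitem__: cast to float32 so the CNN receives floats, not int32
image = np.array(image, dtype=np.float32)

# when creating the loaders inside a Jupyter notebook: keep the workers in the main process
train_dl = DataLoader(train_set, batch_size, shuffle=True, num_workers=0, pin_memory=True)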

Using sklearn with NumPy and images and getting the error 'setting an array element with a sequence'

I am trying to create a simple image classification tool.
I would like the code below to work for classifying images. It works fine when it is a non-image NumPy array.
#https://e2eml.school/images_to_numbers.html
import numpy as np
from sklearn.utils import Bunch
from PIL import Image
monkey = [1]
dog = [2]
example_animals = Bunch(data = np.array([monkey,dog]),target = np.array(['monkey','dog']))
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2) #with KMeans you get to pre specify the number of Clusters
KModel = kmeans.fit(example_animals.data) #fit a model using the training data , in this case original example animal data passed through
import pandas as pd
crosstab = pd.crosstab(example_animals.target,KModel.labels_)
print(crosstab)
I have looked into how to make an image into a NumPy array at https://e2eml.school/images_to_numbers.html
The code below, where I have converted the images to NumPy arrays, doesn't work.
When run, it gives the following error:
'setting an array element with a sequence'
#https://e2eml.school/images_to_numbers.html
import numpy as np
from sklearn.utils import Bunch
from PIL import Image
monkey = np.asarray(Image.open("monkey.jpg"))
dog = np.asarray(Image.open("dog.jpeg"))
example_animals = Bunch(data = np.array([monkey,dog]),target = np.array(['monkey','dog']))
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2) #with KMeans you get to pre specify the number of Clusters
KModel = kmeans.fit(example_animals.data) #fit a model using the training data , in this case original example animal data passed through
import pandas as pd
crosstab = pd.crosstab(example_animals.target,KModel.labels_)
print(crosstab)
I would appreciate any insight into how to fix the error 'setting an array element with a sequence' so that the images will be compatible with the sklearn processing.
You need to be sure that your images "monkey.jpg" and "dog.jpeg" have the same number of pixels. Otherwise, you will have to resize the images to the same size. Moreover, the data of your Bunch object needs to be of shape (n_samples, n_features) (you can check the documentation https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans.fit).
You also need to be aware that you are using an unsupervised learning model (KMeans), so the output of the model is not directly "monkey" or "dog".
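As a minimal sketch of that point (with random stand-in arrays instead of the real photos): KMeans needs each sample flattened into one row of shape (n_samples, n_features), and what it returns are cluster indices rather than class names.
import numpy as np
from sklearn.cluster import KMeans

# hypothetical stand-ins for two equally sized images
img_a = np.random.randint(0, 256, size=(220, 180, 3))
img_b = np.random.randint(0, 256, size=(220, 180, 3))

X = np.array([img_a.reshape(-1), img_b.reshape(-1)])  # shape (n_samples, n_features)
labels = KMeans(n_clusters=2).fit(X).labels_
print(labels)  # e.g. [0 1] - cluster indices, not "monkey"/"dog"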
I found the solution to the error 'setting an array element with a sequence'.
KMeans requires the data arrays being compared to be the same size.
This means that if you import pictures, they need to be resized, converted into a NumPy array (a format that is compatible with KMeans), and finally flattened into a one-dimensional array.
#https://e2eml.school/images_to_numbers.html
#https://machinelearningmastery.com/how-to-load-and-manipulate-images-for-deep-learning-in-python-with-pil-pillow/
import numpy as np
from matplotlib import pyplot as plt
from sklearn.utils import Bunch
from PIL import Image
from sklearn.cluster import KMeans
import pandas as pd
monkey = Image.open("monkey.jpg")
dog = Image.open("dog.jpeg")
#resize pictures
monkey1 = monkey.resize((180,220))
dog1 = dog.resize((180,220))
#make pictures into numpy array
monkey2 = np.asarray(monkey1)
dog2 = np.asarray(dog1)
#https://www.quora.com/How-do-I-convert-image-data-from-2D-array-to-1D-using-python
#make numpy array into 1 dimensional array
monkey3 = monkey2.reshape(-1)
dog3 = dog2.reshape(-1)
example_animals = Bunch(data = np.array([monkey3,dog3]),target = np.array(['monkey','dog']))
kmeans = KMeans(n_clusters=2) #with KMeans you get to pre specify the number of Clusters
KModel = kmeans.fit(example_animals.data) #fit a model using the training data, in this case the original example animal data passed through
crosstab = pd.crosstab(example_animals.target,KModel.labels_)
print(crosstab)

Applying gaussian blur to images in a loop

I have a simple ndarray with the following shape:
import matplotlib.pyplot as plt
%matplotlib inline

plt.imshow(trainImg[0])   # can display a sample image
print(trainImg.shape)     # (4750, 128, 128, 3) - shape of the dataset
I intend to apply Gaussian blur to all the images. The for loop I went with:
trainImg_New = np.empty((4750, 128, 128, 3))
for idx, img in enumerate(trainImg):
    trainImg_New[idx] = cv2.GaussianBlur(img, (5, 5), 0)
I tried to display a sample blurred image as:
plt.imshow(trainImg_New[0]) #view a sample blurred image
but I get an error:
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
It just displays a blank image.
TL;DR:
The error is most likely caused by trainImg_New being of float datatype while its values are larger than 1. So, as @Frightera mentioned, try using np.uint8 to convert the images' datatype.
I tested the snippets as below:
import numpy as np
import matplotlib.pyplot as plt
import cv2
trainImg_New = np.random.rand(4750, 128, 128,3) # all value is in range [0, 1]
save = np.empty((4750, 128, 128,3))
for idx, img in enumerate(trainImg_New):
    save[idx] = cv2.GaussianBlur(img, (5, 5), 0)
plt.imshow(np.float32(save[0]+255)) # Reported error as question
plt.imshow(np.float32(save[0]+10)) # Reported error as question
plt.imshow(np.uint8(save[0]+10)) # Good to go
First of all, cv2.GaussianBlur does not change the range of the arrays' values, and the original image arrays' values are legitimate. So I believe the only reason is that the datatype of trainImg_New[0] does not match its value range.
In the snippets above, we can see that the datatype of trainImg_New[0] determines the value range that imshow accepts.
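Applied to the original loop, the simplest fix along those lines is to keep the output array in uint8 (or cast before plotting); a minimal sketch, assuming trainImg holds uint8 values in [0, 255] (random stand-in data here):
import numpy as np
import cv2
import matplotlib.pyplot as plt

trainImg = np.random.randint(0, 256, size=(10, 128, 128, 3), dtype=np.uint8)  # stand-in dataset

trainImg_New = np.empty_like(trainImg)   # uint8, same dtype as the input
for idx, img in enumerate(trainImg):
    trainImg_New[idx] = cv2.GaussianBlur(img, (5, 5), 0)

plt.imshow(trainImg_New[0])  # displays correctly, no clipping warning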
I suggest you use tfa.image.gaussian_filter2d from the tensorflow_addons package. I think you'll be able to pass all your images at once.
import tensorflow as tf
from skimage import data
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
image = data.astronaut()
plt.imshow(image)
plt.show()
blurred = tfa.image.gaussian_filter2d(image,
                                      filter_shape=(25, 25),
                                      sigma=3.)
plt.imshow(blurred)
plt.show()
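Since tfa.image.gaussian_filter2d also accepts a 4-D (batch, height, width, channels) tensor, the whole (4750, 128, 128, 3) array from the question should be processable in a single call; a rough sketch with a small stand-in batch:
import numpy as np
import tensorflow_addons as tfa

trainImg = np.random.randint(0, 256, size=(8, 128, 128, 3)).astype(np.float32)  # stand-in batch

# one call over the whole batch instead of a Python loop
trainImg_New = tfa.image.gaussian_filter2d(trainImg, filter_shape=(5, 5), sigma=1.0)
print(trainImg_New.shape)  # (8, 128, 128, 3)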

Apply random_shear augment to image tensor

I am trying to apply random_shear image augmentation from Keras but the images are completely distorted after this.
shape = inputs.shape  # (32, 512, 512, 3)
temp = np.empty(shape=(shape[0], shape[1], shape[2], shape[3]))
for i in range(shape[0]):
    array_inputs = tf.keras.preprocessing.image.img_to_array(inputs[i])
    sheared = tf.keras.preprocessing.image.random_shear(array_inputs, .2,
                                                        row_axis=0, col_axis=1,
                                                        channel_axis=2)
    temp[i] = sheared
return tf.convert_to_tensor(temp)
I am not sure what is wrong here.
Can anybody help me here?
For what it's worth, here is the code I tested:
import tensorflow as tf
import numpy as np
from PIL import Image
inputs = [Image.open('./homersimpson.0.0.jpg')]
shape = (1,1400,1400,3)
temp = np.empty(shape=(shape[0], shape[1],shape[2], shape[3]))
for i in range(shape[0]):
    array_inputs = tf.keras.preprocessing.image.img_to_array(inputs[i])
    sheared = tf.keras.preprocessing.image.random_shear(array_inputs, 50,
                                                        row_axis=0, col_axis=1,
                                                        channel_axis=2)
    temp[i] = sheared
for i in range(shape[0]):
    tf.keras.preprocessing.image.array_to_img(temp[i]).show()
Turning this image:
To this:
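Note that the second argument to random_shear is the shear intensity in degrees, so 50 shears far more aggressively than the 0.2 used in the question. If it helps, here is the question's loop wrapped into a small helper, as a sketch assuming a (batch, height, width, channels) uint8 array:
import numpy as np
import tensorflow as tf

def shear_batch(inputs, intensity=20):
    # apply a random shear (intensity in degrees) to each image in the batch
    temp = np.empty(inputs.shape, dtype=np.float32)
    for i in range(inputs.shape[0]):
        temp[i] = tf.keras.preprocessing.image.random_shear(
            np.asarray(inputs[i], dtype=np.float32), intensity,
            row_axis=0, col_axis=1, channel_axis=2)
    return tf.convert_to_tensor(temp)

batch = np.random.randint(0, 256, size=(4, 64, 64, 3), dtype=np.uint8)  # stand-in images
print(shear_batch(batch).shape)  # (4, 64, 64, 3)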

Cutting and resizing a numpy array to a new shape based on ROI

I have a numpy array and I need to cut out a region of it based on an ROI like (x1,y1), (x2,y2). The background color of the numpy array is zero.
I need to crop that part from the first numpy array and then resize the cropped array to 640x480 pixels.
I am new to numpy and I don't have any clue how to do this.
#numpy1: the first numpy array
roi=[(1,2),(3,4)]
It sounds like you want to do some image processing, so I suggest you have a look at the OpenCV library. In its Python bindings, images are basically NumPy arrays, so cropping and resizing become quite easy:
import cv2
import numpy as np
# OpenCV images are NumPy arrays
img = cv2.imread('path/to/your/image.png') # Just use your NumPy array
# instead of loading some image
# Set up ROI [(x1, y1), (x2, y2)]
roi = [(40, 40), (120, 150)]
# ROI cutout of image
cutout = img[roi[0][1]:roi[1][1], roi[0][0]:roi[1][0], :]
# Generate new image from cutout with desired size
new_img = cv2.resize(cutout, (640, 480))
# Just some output for visualization
img = cv2.rectangle(img, roi[0], roi[1], (0, 255, 0), 2)
cv2.imshow('Original image with marked ROI', img)
cv2.imshow('Resized cutout of image', new_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.5
NumPy: 1.19.1
OpenCV: 4.4.0
----------------------------------------
You can crop an array like
array = array[start_x:stop_x, start_y:stop_y]
or in your case
array = array[roi[0][0]:roi[0][1], roi[1][0]:roi[1][1]]
or one of
array = array[slice(*roi[0]), slice(*roi[1])]
array = array[tuple(slice(*r) for r in roi)]
depending on the amount of abstraction and over-engineering that you need.
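Since the question also asks for a 640x480 output, the cropped array still needs a resize step after the slicing; a minimal sketch with OpenCV on a stand-in crop:
import cv2
import numpy as np

cutout = np.zeros((110, 80, 3), dtype=np.uint8)   # stand-in for the cropped array
resized = cv2.resize(cutout, (640, 480))          # OpenCV's dsize is (width, height)
print(resized.shape)  # (480, 640, 3)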
I recommend using slicing and skimage. skimage.transform.resize is what you need.
import matplotlib.pyplot as plt
from skimage import data
from skimage.transform import resize
image = data.camera()
crop = image[10:100, 10:100]
crop = resize(crop, (640, 480))  # note: skimage's output_shape is (rows, cols), unlike cv2.resize's (width, height)
plt.imshow(crop)
More about slicing, please see here.
Details on skimage, see here.