In an interface built with PyQt5, let button be a QPushButton. If we want to set an icon for the button from a saved image file image.jpg, we usually write:
button.setIcon(QtGui.QIcon("image.jpg"))
However, if I am given an n-dimensional numpy array array that also represents an image, I cannot simply write button.setIcon(QtGui.QIcon(array)), as that raises an error. What should I do instead? (Saving the array to an image file is not an option, because I want a large number of arrays to be turned into push buttons.)
Edit:
For a typical situation, consider:
import numpy
array = numpy.random.rand(100,100,3) * 255
Then array is a square array representing an image. To see this, we can write (we do not have to use PIL to solve the problem; this is just a demonstration):
from PIL import Image
im = Image.fromarray(array.astype('uint8')).convert('RGBA')
Reference: 100x100 image with random pixel colour
I want to make this fixed array an icon.
You have to build a QImage -> QPixmap -> QIcon:
import numpy as np
from PyQt5.QtGui import QIcon, QImage, QPixmap
from PyQt5.QtWidgets import QApplication, QPushButton
app = QApplication([])
array = (np.random.rand(1000, 1000, 3) * 255).astype("uint8")
height, width, _ = array.shape
image = QImage(bytearray(array), width, height, QImage.Format.Format_RGB888)
# optionally convert to a 32-bit pixel format:
# image.convertTo(QImage.Format.Format_RGB32)
button = QPushButton()
button.setIconSize(image.size())
button.setIcon(QIcon(QPixmap(image)))
button.show()
app.exec_()
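One caveat worth noting: the buffer-based QImage constructor expects each scanline to be 32-bit aligned, which happens to hold for a width of 1000 (1000 * 3 bytes per row is divisible by 4) but not for arbitrary widths. A safer sketch passes the stride explicitly:
# pass bytes-per-line explicitly so widths whose row size is not a
# multiple of 4 are handled correctly as well
image = QImage(bytearray(array), width, height, 3 * width, QImage.Format.Format_RGB888)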
I have a PyQt5 app that embeds a folium Map within a QWidget.
Here's a minimal example of the class I wrote:
import folium
import io
from folium.plugins import Draw, MousePosition, HeatMap
from PySide2 import QtWidgets, QtWebEngineWidgets
class FoliumMap(QtWidgets.QWidget):
    def __init__(self, parent=None):
        QtWidgets.QWidget.__init__(self, parent)
        self.layout = QtWidgets.QVBoxLayout()
        m = folium.Map(
            title='coastlines',
            zoom_start=3)
        data = io.BytesIO()
        m.save(data, close_file=False)
        webView = QtWebEngineWidgets.QWebEngineView()
        webView.setHtml(data.getvalue().decode())
        self.layout.addWidget(webView)
        self.setLayout(self.layout)
If I run my program, here's what I get:
Now I want to add a GeoJson layer within the __init__ of my class:
folium.GeoJson('data/custom.geo.json', name='coasts').add_to(m)
This results in a blank QWidget:
If I save the map as an HTML file, I can see the layer in my web browser:
Does anyone have an idea why adding the layer makes the QWidget blank, and how to fix it?
The problem seems to be the following command:
webView.setHtml(data.getvalue().decode())
The Qt documentation says:
Content larger than 2 MB cannot be displayed, because setHtml() converts the provided HTML to percent-encoding and places data: in front of it to create the URL that it navigates to. Thereby, the provided code becomes a URL that exceeds the 2 MB limit set by Chromium. If the content is too large, the loadFinished() signal is triggered with success=false.
My HTML file weighs 2628 KB, which exceeds the 2 MB limit, so we have to use the webView.load() method instead.
This is the way to go:
import folium
from PySide2 import QtWidgets, QtWebEngineWidgets, QtCore

class FoliumMap(QtWidgets.QWidget):
    def __init__(self, parent=None):
        QtWidgets.QWidget.__init__(self, parent)
        self.layout = QtWidgets.QVBoxLayout()
        m = folium.Map(
            title='coastlines',
            zoom_start=3)
        url = "C:/MYPATH/TO/map.html"
        m.save(url)  # save the map to a file on disk instead of an in-memory buffer
        webView = QtWebEngineWidgets.QWebEngineView()
        html_map = QtCore.QUrl.fromLocalFile(url)  # fromLocalFile expects an absolute path
        webView.load(html_map)  # load() is not subject to setHtml()'s 2 MB limit
        self.layout.addWidget(webView)
        self.setLayout(self.layout)
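For completeness, a minimal launcher for this widget might look like the following sketch (assuming the FoliumMap class above is defined in the same file or importable):
import sys
from PySide2 import QtWidgets

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    window = FoliumMap()
    window.show()
    sys.exit(app.exec_())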
I am trying to create a simple image classification tool.
I would like the code below to work for classifying images. It works fine when the input is a non-image NumPy array.
#https://e2eml.school/images_to_numbers.html
import numpy as np
from sklearn.utils import Bunch
from PIL import Image
monkey = [1]
dog = [2]
example_animals = Bunch(data = np.array([monkey,dog]),target = np.array(['monkey','dog']))
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2)  # with KMeans you pre-specify the number of clusters
KModel = kmeans.fit(example_animals.data)  # fit a model using the training data, in this case the original example animal data
import pandas as pd
crosstab = pd.crosstab(example_animals.target,KModel.labels_)
print(crosstab)
I have looked into how to make an image into a NumPy array at https://e2eml.school/images_to_numbers.html
The code below, where I have converted the images to NumPy arrays, doesn't work.
When run, it gives the following error:
'setting an array element with a sequence'
#https://e2eml.school/images_to_numbers.html
import numpy as np
from sklearn.utils import Bunch
from PIL import Image
monkey = np.asarray(Image.open("monkey.jpg"))
dog = np.asarray(Image.open("dog.jpeg"))
example_animals = Bunch(data = np.array([monkey,dog]),target = np.array(['monkey','dog']))
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2)  # with KMeans you pre-specify the number of clusters
KModel = kmeans.fit(example_animals.data)  # fit a model using the training data, in this case the original example animal data
import pandas as pd
crosstab = pd.crosstab(example_animals.target,KModel.labels_)
print(crosstab)
I would appreciate any insight into how to fix the error 'setting an array element with a sequence' so that the images are compatible with the sklearn processing.
You need to be sure that your images "monkey.jpg" and "dog.jpeg" have the same number of pixels; otherwise, you will have to resize the images to the same size. Moreover, the data of your Bunch object needs to be of shape (n_samples, n_features) (see the documentation: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans.fit).
You also need to be aware that you are using an unsupervised learning model (KMeans), so the output of the model is not directly "monkey" or "dog".
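To make that last point concrete: once the data has the right (n_samples, n_features) shape, a minimal sketch of what KMeans actually returns looks like this (reusing the kmeans and example_animals names from the question; the integer-to-name mapping is something you have to establish yourself):
# KMeans returns arbitrary integer cluster labels (0, 1, ...), not class names
labels = kmeans.fit_predict(example_animals.data)
print(dict(zip(example_animals.target, labels)))  # e.g. {'monkey': 0, 'dog': 1}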
I found the solution to the error 'setting an array element with a sequence'.
KMeans requires the data arrays being compared to be the same size.
This means that if you import pictures, they need to be resized to a common size, converted into a numpy array (a format compatible with KMeans), and finally flattened into a one-dimensional array.
#https://e2eml.school/images_to_numbers.html
#https://machinelearningmastery.com/how-to-load-and-manipulate-images-for-deep-learning-in-python-with-pil-pillow/
import numpy as np
from matplotlib import pyplot as plt
from sklearn.utils import Bunch
from PIL import Image
from sklearn.cluster import KMeans
import pandas as pd
monkey = Image.open("monkey.jpg")
dog = Image.open("dog.jpeg")
#resize pictures
monkey1 = monkey.resize((180,220))
dog1 = dog.resize((180,220))
#make pictures into numpy array
monkey2 = np.asarray(monkey1)
dog2 = np.asarray(dog1)
#https://www.quora.com/How-do-I-convert-image-data-from-2D-array-to-1D-using-python
#make numpy array into 1 dimensional array
monkey3 = monkey2.reshape(-1)
dog3 = dog2.reshape(-1)
example_animals = Bunch(data = np.array([monkey3,dog3]),target = np.array(['monkey','dog']))
kmeans = KMeans(n_clusters=2)  # with KMeans you pre-specify the number of clusters
KModel = kmeans.fit(example_animals.data)  # fit a model using the training data, in this case the example animal data
crosstab = pd.crosstab(example_animals.target,KModel.labels_)
print(crosstab)
I have a simple ndarray:
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(trainImg[0])  # can display a sample image
print(trainImg.shape)    # (4750, 128, 128, 3), the shape of the dataset
I intend to apply Gaussian blur to all the images. The for loop I went with:
trainImg_New = np.empty((4750, 128, 128, 3))
for idx, img in enumerate(trainImg):
    trainImg_New[idx] = cv2.GaussianBlur(img, (5, 5), 0)
I tried to display a sample blurred image as:
plt.imshow(trainImg_New[0]) #view a sample blurred image
but I get a warning:
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
and it just displays a blank image.
TL;DR:
The warning is most likely caused by trainImg_New having a float dtype while holding values larger than 1. So, as @Frightera mentioned, try using np.uint8 to convert the images' datatype.
I tested the snippets below:
import numpy as np
import matplotlib.pyplot as plt
import cv2
trainImg_New = np.random.rand(4750, 128, 128, 3)  # all values are in range [0, 1]
save = np.empty((4750, 128, 128, 3))
for idx, img in enumerate(trainImg_New):
    save[idx] = cv2.GaussianBlur(img, (5, 5), 0)
plt.imshow(np.float32(save[0]+255))  # reproduces the warning from the question
plt.imshow(np.float32(save[0]+10))   # reproduces the warning from the question
plt.imshow(np.uint8(save[0]+10))     # good to go
First of all, cv2.GaussianBlur does not change the range of the arrays' values, and the original image array's values are legitimate, so the only remaining explanation is that the datatype of trainImg_New[0] does not match its value range.
The snippets above show that the datatype of trainImg_New[0] determines which value range imshow considers valid.
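Applied to the code in the question, one minimal fix (assuming the source images in trainImg are 8-bit) is to allocate the destination array as uint8 up front:
# allocating the destination as uint8 makes imshow treat values as [0..255]
trainImg_New = np.empty((4750, 128, 128, 3), dtype=np.uint8)
for idx, img in enumerate(trainImg):
    trainImg_New[idx] = cv2.GaussianBlur(img, (5, 5), 0)
plt.imshow(trainImg_New[0])  # displays as expected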
I suggest you use tfa.image.gaussian_filter2d from the tensorflow_addons package. I think you'll be able to pass all your images at once.
import tensorflow as tf
from skimage import data
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
image = data.astronaut()
plt.imshow(image)
plt.show()
blurred = tfa.image.gaussian_filter2d(image,
                                      filter_shape=(25, 25),
                                      sigma=3.)
plt.imshow(blurred)
plt.show()
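If batching works as expected, the whole (4750, 128, 128, 3) array from the question could then be blurred in a single call, along these lines:
# gaussian_filter2d should also accept a (batch, height, width, channels) array
blurred_all = tfa.image.gaussian_filter2d(trainImg, filter_shape=(5, 5), sigma=1.0)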
I have a numpy array, and I need to cut out a part of it based on an ROI like (x1, y1), (x2, y2). The background color of the numpy array is zero.
I need to crop that part from the first numpy array and then resize the cropped array to 640x480 pixels.
I am new to numpy and I don't have any clue how to do this.
#numpy1: the first numpy array
roi=[(1,2),(3,4)]
It kind of sounds like you want to do some image processing, so I suggest you have a look at the OpenCV library. In its Python implementation, images are basically NumPy arrays, so cropping and resizing become quite easy:
import cv2
import numpy as np
# OpenCV images are NumPy arrays
img = cv2.imread('path/to/your/image.png') # Just use your NumPy array
# instead of loading some image
# Set up ROI [(x1, y1), (x2, y2)]
roi = [(40, 40), (120, 150)]
# ROI cutout of image
cutout = img[roi[0][1]:roi[1][1], roi[0][0]:roi[1][0], :]
# Generate new image from cutout with desired size
new_img = cv2.resize(cutout, (640, 480))
# Just some output for visualization
img = cv2.rectangle(img, roi[0], roi[1], (0, 255, 0), 2)
cv2.imshow('Original image with marked ROI', img)
cv2.imshow('Resized cutout of image', new_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.5
NumPy: 1.19.1
OpenCV: 4.4.0
----------------------------------------
You can crop an array like
array = array[start_row:stop_row, start_col:stop_col]
or, for the roi = [(x1, y1), (x2, y2)] layout from your question (rows are indexed by y, columns by x):
array = array[roi[0][1]:roi[1][1], roi[0][0]:roi[1][0]]
or, if you instead store the ROI as per-axis (start, stop) pairs, one of
array = array[slice(*roi[0]), slice(*roi[1])]
array = array[tuple(slice(*r) for r in roi)]
depending on the amount of abstraction and over-engineering that you need.
I recommend using slicing and skimage. skimage.transform.resize is what you need.
import matplotlib.pyplot as plt
from skimage import data
from skimage.transform import resize
image = data.camera()
crop = image[10:100, 10:100]  # crop via slicing: [rows, cols]
crop = resize(crop, (640, 480))  # note that resize takes (rows, cols)
plt.imshow(crop)
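One thing to be aware of with this approach: resize returns a float image scaled to [0, 1]. If you need a uint8 image again, skimage.util.img_as_ubyte should convert it back, roughly like this:
from skimage.util import img_as_ubyte
crop_u8 = img_as_ubyte(resize(crop, (640, 480)))  # back to uint8 in [0, 255]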
For more about slicing, see here.
For details on skimage, see here.
I edited it to view the foreground image on a white background, but now none of the images are visible.
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('91_photo.jpg')
mask = np.zeros(img.shape[:2],np.uint8)
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
rect = (10,10,360,480)
cv2.grabCut(img,mask,rect,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_RECT)
mask2 = np.where((mask==2)|(mask==0),0,255).astype('uint8')
img = img*mask2[:,:,np.newaxis]
plt.imshow(img),plt.colorbar(),plt.show()
I expected the result to be a visible image on a white background. This is what I'm getting instead:
There are a number of small issues with your code that are adding up to that weird result.
OpenCV uses BGR ordering for the channels of an image, while matplotlib uses RGB. That means if you read an image with OpenCV but want to display it with matplotlib, you need to convert it from BGR to RGB first (that's why the colors look weird). Also, less importantly, color images are not displayed with a colormap, so the colorbar does not do anything for you.
In numpy, it's best to keep masks boolean whenever you can, because you can use them to index your arrays. Your current code converts a boolean mask to a uint8 image with 0 and 255 values and then multiplies that with your image. That means your image is set to zero wherever the mask is zero, and everywhere else the values are multiplied by 255, which wraps around and does weird stuff with uint8 overflow. Instead, keep the mask boolean and use it to index your array; that way, anywhere the mask is True you can set the value in your image to something specific (like 255 for white).
This should fix you up:
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('91_photo.jpg')
mask = np.zeros(img.shape[:2], np.uint8)
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)
rect = (10, 10, 360, 480)
cv2.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)
mask2 = (mask==2) | (mask==0)  # boolean mask of the background pixels
img[mask2] = 255  # paint the background white
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # BGR -> RGB for matplotlib
plt.imshow(img)
plt.show()