pygame: why can't it show a grayscale picture? [duplicate]

This question already has answers here:
Pygame, create grayscale from 2d numpy array
(4 answers)
How do I convert an OpenCV image (BGR and BGRA) to a pygame.Surface object
(1 answer)
Closed 1 year ago.
I am using pygame to show grayscale images, but it cannot display them correctly; the output is shown below.
The images are numpy arrays of shape 80x80, and the code is:
surf_1 = pygame.surfarray.make_surface(pic_1)
surf_2 = pygame.surfarray.make_surface(pic_2)
surf_3 = pygame.surfarray.make_surface(pic_3)
surf_4 = pygame.surfarray.make_surface(pic_4)
self.screen.blit(surf_1, (120, 40))
self.screen.blit(surf_2, (210, 40))
self.screen.blit(surf_3, (300, 40))
self.screen.blit(surf_4, (390, 40))
My question is: how can I show real grayscale images instead of the blue-green ones?
Update
Here is the code to convert RGB to grayscale; it is from the Internet and I don't know the exact source, sorry about that.
def rgb2gray(rgb, img_shape):
gray = np.dot(rgb[..., :3], [0.299, 0.587, 0.114])
return gray.reshape(*img_shape)
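A likely cause (an assumption, not stated in the post): pygame.surfarray.make_surface expects a 3-channel array, and a 2-D grayscale array gets mapped through pygame's internal palette, which produces the blue-green tint. Stacking the gray values into three identical channels is one way to get a true grayscale surface. A minimal numpy-only sketch with a hypothetical 80x80 gradient image:

```python
import numpy as np

# Hypothetical 80x80 grayscale image (values 0-255), brightening left to right.
gray = np.tile(np.linspace(0, 255, 80, dtype=np.uint8), (80, 1))

# Repeat the single channel three times -> shape (80, 80, 3), so
# pygame.surfarray.make_surface sees an RGB image with R == G == B.
rgb = np.stack([gray, gray, gray], axis=-1)
print(rgb.shape)  # (80, 80, 3)

# surf_1 = pygame.surfarray.make_surface(rgb)  # then blit as before
```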


How to save charts without cutting off x-axis labels? [duplicate]

This question already has answers here:
matplotlib savefig - text chopped off
(3 answers)
X and Y label being cut in matplotlib plots
(2 answers)
Closed yesterday.
Could you please help me save chart images correctly?
The saved files have x-axis titles that are cut off. See below:
Current Chart Image
I'm currently using:
ax.figure.set_size_inches(16, 9)
#adjusting the tick marks
plt.xticks(rotation=75)
#saving plot
plt.show()
fig.savefig('figure6.png')
plt.clf()
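One likely fix (not part of the original post): calling plt.show() before savefig can leave an empty figure to save, and rotated tick labels need either tight_layout or bbox_inches='tight' to survive saving. A minimal sketch with made-up bar data and hypothetical label names:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; saving works without a display
import matplotlib.pyplot as plt
import os

fig, ax = plt.subplots(figsize=(16, 9))
ax.bar(range(5), [3, 1, 4, 1, 5])
ax.set_xticks(range(5))
ax.set_xticklabels(["alpha", "beta", "gamma", "delta", "epsilon"], rotation=75)

fig.tight_layout()                               # reserve room for rotated labels
fig.savefig("figure6.png", bbox_inches="tight")  # save BEFORE any plt.show()
plt.close(fig)
print(os.path.exists("figure6.png"))  # True
```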

Seaborn colorbar height to match heatmap [duplicate]

This question already has answers here:
How to set Matplotlib colorbar height for image with aspect ratio < 1
(2 answers)
Set Matplotlib colorbar size to match graph
(9 answers)
Setting the size of a matplotlib ColorbarBase object
(2 answers)
Closed 11 months ago.
There are a few examples showing how to use "shrink" to change the colorbar. How can I automatically figure out what the shrink should be so the colorbar is equal in height to the heatmap?
I don't have a matplotlib axis because I am using seaborn and plotting the heatmap from the dataframe.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

r = []
r.append(np.arange(0, 1, .1))
r.append(np.arange(0, 1, .1))
r.append(np.arange(0, 1, .1))
df_cm = pd.DataFrame(r)
sns.heatmap(df_cm, square=True, cbar_kws=dict(ticks=[df_cm.min().min(), df_cm.max().max()]))
plt.tight_layout()
plt.savefig('test.png', bbox_inches="tight")
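One approach (an assumption, not from the post): with square=True the drawn heatmap's height-to-width ratio equals nrows/ncols, so passing that ratio as shrink scales the colorbar to roughly the heatmap's height:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df_cm = pd.DataFrame([np.arange(0, 1, .1)] * 3)   # 3 rows x 10 columns
shrink = df_cm.shape[0] / df_cm.shape[1]          # height/width of the cell grid

sns.heatmap(df_cm, square=True,
            cbar_kws=dict(shrink=shrink,
                          ticks=[df_cm.min().min(), df_cm.max().max()]))
plt.savefig("test.png", bbox_inches="tight")
plt.close()
print(shrink)  # 0.3
```

This relies on square=True forcing square cells; for non-square aspect ratios the linked matplotlib answers (e.g. make_axes_locatable) are more robust.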

How to convert a flattened (1-D) array of an RGB image back to the original image

I have a flattened 1-D array of shape (1, 3072), created from an RGB image of dimension (32, 32, 3). I want to recover the original RGB image of dimension (32, 32, 3) and plot it.
I have tried the solution suggested in how to convert a 1-dimensional image array to PIL image in Python
But it is not working for me, as it seems to be for a greyscale image.
from PIL import Image
from numpy import array
import matplotlib.pyplot as plt  # needed for plt.imshow below

img = Image.open("sampleImage.jpg")
arr = array(img)
arr = arr.flatten()
print(arr.shape)
# tried with 'L' & 'RGB' both
img2 = Image.fromarray(arr.reshape(200, 300), 'RGB')
plt.imshow(img2, interpolation='nearest')
plt.show()
I get the error below, which is expected because it cannot convert to RGB:
ValueError: cannot reshape array of size 180000 into shape (200,300)
In order to interpret an array as an RGB image, it needs to have 3 channels. A channel is the 3rd dimension in the numpy array. So change your code to this:
img2 = Image.fromarray(arr.reshape(200,300,3), 'RGB')
I should mention that you talk about your flattened array being 1x3072, yet the example code seems to assume 200x300x3, which would be 1x180,000 when flattened. Which of these two is the truth, I couldn't tell you.
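For the 1x3072 case from the question, the same idea applies with the shape (32, 32, 3), since 32*32*3 = 3072. A minimal sketch with synthetic stand-in data:

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for the flattened image: 32*32*3 = 3072 values.
flat = (np.arange(3072) % 256).astype(np.uint8)

# Restore the channel axis: (32, 32, 3) for a row-major RGB layout.
img = Image.fromarray(flat.reshape(32, 32, 3), 'RGB')
print(img.size)  # (32, 32)

# If the flat array is stored channel-first (as CIFAR-10 does), use instead:
# img = Image.fromarray(flat.reshape(3, 32, 32).transpose(1, 2, 0), 'RGB')
```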

Numpy + OpenCV-Python : Get coordinates of white pixels [duplicate]

This question already has answers here:
Numpy binary matrix - get rows and columns of True elements
(1 answer)
Python: How to find the value of a number in a numpy array?
(1 answer)
Closed 5 years ago.
I have a grayscale image and I want to get all the coordinates of pixels with intensity level 255.
I tried using:
pixels = np.where(img[img == 255])
But instead of getting a list of (X,Y) coordinates, I got a list of numbers, like if img was a one dimensional vector.
What am I doing wrong?
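A likely fix (not part of the original post): np.where should receive the boolean mask itself, not the array indexed by the mask; indexing first flattens the array, which explains the 1-D result. np.argwhere bundles the coordinates into (row, col) pairs directly. A minimal sketch:

```python
import numpy as np

# Tiny stand-in for the grayscale image.
img = np.array([[0, 255, 0],
                [255, 0, 0]], dtype=np.uint8)

rows, cols = np.where(img == 255)   # two parallel index arrays
coords = np.argwhere(img == 255)    # one (N, 2) array of (row, col) pairs

print([(int(r), int(c)) for r, c in zip(rows, cols)])  # [(0, 1), (1, 0)]
print(coords.tolist())                                 # [[0, 1], [1, 0]]
```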

imshow inverts colors after using astype(float)

I'm using matplotlib imshow to visualize data from cifar-10. After reading in cifar10 data I've noticed that the image rendered from imshow is different after I use .astype(float).
For example,
Without .astype(float)
Here's what I see with .astype(float)
Why does it look like the image is rendering with the colors inverted?
Here is the code I am using:
dir = 'resources/datasets/cifar-10-batches-py'
import cPickle
fo = open(dir + '/data_batch_1', 'rb')
dict = cPickle.load(fo)
fo.close()
X=dict['data'].reshape((10000, 3, 32, 32)).transpose(0, 2, 3, 1).astype(float)
Y=dict['labels']
plt.imshow(X[2,])
plt.show()
A little late, but for people seeking help and finding this post:
A good explanation of how matplotlib decides on the colormap is given in this post.
I have just faced this same problem and found a solution in this post. It seems that matplotlib assumes 3-D images of type float to be in the range [0, 1] and uses a modulo operation with 1 to clamp the values, e.g. 2.7 -> 0.7.
Try this:
X = dict['data'].reshape((10000, 3, 32, 32)).transpose(0, 2, 3, 1)
X = X.astype(float) / 255  # scale into the [0, 1] range imshow expects for floats
plt.imshow(X[2,])
Now matplotlib should directly use the values provided and not use internal modulo magic.
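The scaling can be checked on a tiny array (an illustration, not the original CIFAR data): uint8 values are read on a 0-255 scale, while floats are expected in [0, 1], so dividing by 255 keeps the rendered colors intact:

```python
import numpy as np

rgb8 = np.array([[[200, 30, 30]]], dtype=np.uint8)  # a 1x1 reddish pixel

as_float = rgb8.astype(float)   # values like 200.0 -- outside [0, 1]
scaled = as_float / 255         # back into the float range imshow expects

print(scaled.max() <= 1.0)  # True
# plt.imshow(rgb8) and plt.imshow(scaled) render the same color;
# plt.imshow(as_float) does not.
```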