I'm trying to write code that rotates an image counterclockwise in Python by implementing the rotation matrix. The code is supposed to rotate the image counterclockwise, so why does it rotate the picture clockwise?
import math
import numpy as np
from PIL import Image
img = Image.open('squidward.jpg')
Im = np.array(img)
angle = 30
# Define the most frequently used variables
angle = math.radians(angle)  # converting degrees to radians
cosine = math.cos(angle)
sine = math.sin(angle)
height = Im.shape[0]  # height of the image
width = Im.shape[1]   # width of the image
# Define the height and width of the new image that is to be formed
new_height = round(abs(Im.shape[0]*cosine) + abs(Im.shape[1]*sine)) + 1
new_width = round(abs(Im.shape[1]*cosine) + abs(Im.shape[0]*sine)) + 1
# Define another image array of dimensions new_height and new_width, filled with zeros
Rot_Im = np.zeros((new_height, new_width, Im.shape[2]))
# Find the centre of the original image, about which we rotate
original_centre_height = round(((Im.shape[0]+1)/2) - 1)  # with respect to the original image
original_centre_width = round(((Im.shape[1]+1)/2) - 1)   # with respect to the original image
# Find the centre of the new image that will be obtained
new_centre_height = round(((new_height+1)/2) - 1)  # with respect to the new image
new_centre_width = round(((new_width+1)/2) - 1)    # with respect to the new image
for i in range(height):
    for j in range(width):
        # coordinates of the pixel with respect to the centre of the original image
        y0 = Im.shape[0] - 1 - i - original_centre_height
        x0 = Im.shape[1] - 1 - j - original_centre_width
        # coordinates of the pixel with respect to the rotated image
        new_y0 = round(x0*sine + y0*cosine)
        new_x0 = round(x0*cosine - y0*sine)
        # since the image is rotated, the centre changes too; adjust
        # new_x0 and new_y0 with respect to the new centre
        new_y0 = new_centre_height - new_y0
        new_x0 = new_centre_width - new_x0
        # bounds check to prevent indexing errors
        if 0 <= new_x0 < new_width and 0 <= new_y0 < new_height:
            Rot_Im[new_y0, new_x0, :] = Im[i, j, :]  # write the pixel to its destination in the output image
pil_img = Image.fromarray(Rot_Im.astype(np.uint8))  # converting array to image
pil_img.save("rotated_image.png")  # saving the image
Use -30 for counterclockwise. Your pixel coordinates are mirrored: x0 is measured leftward from the right edge (Im.shape[1]-1-j), which flips the handedness of the coordinate frame, so the standard counterclockwise rotation matrix produces a clockwise result, and negating the angle compensates. I think this will get you the answer, though it is probably too late, I suppose.
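Alternatively, you can keep the +30 angle and fix the coordinate convention instead. A minimal sketch of the per-pixel mapping using a right-handed frame (x to the right, y up); this is an illustration of the idea, not your original code, and it reuses the variables defined above:

for i in range(height):
    for j in range(width):
        # right-handed frame: x grows to the right, y grows upward
        x0 = j - original_centre_width
        y0 = original_centre_height - i
        # standard counterclockwise rotation matrix
        x1 = round(x0*cosine - y0*sine)
        y1 = round(x0*sine + y0*cosine)
        # back to row/column indices in the output image
        new_i = new_centre_height - y1
        new_j = new_centre_width + x1
        if 0 <= new_i < new_height and 0 <= new_j < new_width:
            Rot_Im[new_i, new_j, :] = Im[i, j, :]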
I'm trying to speed up tiling a PIL image converted to a NumPy array, without changing the size of the image. The input is an image of given x, y dimensions, and the output is an image of the same x, y dimensions, but with the content tiled inside it. This is what I did first, without NumPy:
import numpy
from PIL import Image
def tile_image(texture, texture_tiling=(5, 5)):
    # texture is a PIL image, e.g. Image.open(filename)
    width, height = texture.size
    tile = texture.copy()
    tiled_texture = Image.new('RGBA', (width*texture_tiling[0], height*texture_tiling[1]))
    for x in range(texture_tiling[0]):
        for y in range(texture_tiling[1]):
            x_ = width*x
            y_ = height*y
            tiled_texture.paste(tile, (x_, y_))
    tiled_texture = tiled_texture.resize(texture.size, Image.BILINEAR)
    return tiled_texture
This is the function with numpy:
def tile_image(texture, texture_tiling=(5, 5)):
    tile = numpy.array(texture.copy())
    tile = numpy.tile(tile, (texture_tiling[1], texture_tiling[0], 1))
    tile = Image.fromarray(tile)
    tile = tile.resize(texture.size, Image.BILINEAR)
    return tile
The problem with both of these is that they require enlarging the image before resizing it back down, which becomes difficult with high-resolution textures. But using a regular for loop to replace the pixel at [x, y] with the one at [(texture_tiling[0]*x)%width, (texture_tiling[1]*y)%height] is way too slow. What can I do to speed up that pixel operation?
NOTE: I don't try resizing the tile to be smaller and pasting it onto an empty layer, because the tiling count could be odd and mess up the tile size.
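One way to answer this: the per-pixel remap described above vectorizes directly with NumPy fancy indexing, so the full-size intermediate image is never built. A minimal sketch, assuming the texture converts cleanly to an array (the function name tile_image_fast is mine):

import numpy as np
from PIL import Image

def tile_image_fast(texture, texture_tiling=(5, 5)):
    arr = np.asarray(texture)
    h, w = arr.shape[:2]
    # Output pixel (y, x) takes its value from ((ty*y) % h, (tx*x) % w),
    # which is exactly the remap from the question, computed once per axis.
    ys = (np.arange(h) * texture_tiling[1]) % h
    xs = (np.arange(w) * texture_tiling[0]) % w
    # Broadcasted fancy indexing applies the remap to every pixel at once.
    return Image.fromarray(arr[ys[:, None], xs[None, :]])

Note that this is nearest-neighbor sampling, so it skips the BILINEAR smoothing the resize-based versions apply.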
I have an image of a circle; refer to the image attached below. I already extracted the transparent circle and want to paste it back onto the image to make some overlapping circles.
Below is my code, but it leads to problem A: there is a (transparent) hole in the image. I need the circles on a normal white background.
height, width, channels = circle.shape
original_image[60:60+height, 40:40+width] = circle
I also tried cv2.addWeighted, but got a blending issue; I need the circles to stay opaque:
circle = cv2.addWeighted(original_image[60:60+height, 40:40+width], 0.5, circle, 0.5, 0)
original_image[60:60+height, 40:40+width] = circle
If you already have a transparent black circle, then here is one way to do it in Python/OpenCV.
- Read the transparent image unchanged
- Extract the BGR channels and the alpha channel
- Create a colored image of the desired background color and size
- Create similarly sized white and black images
- Initialize a copy of the background color image for the output
- Define a list of offset coordinates in the larger image
- Loop over the list of offsets and do the following
- Insert the BGR image into a copy of the white image as the base image
- Insert the alpha channel into a copy of the black image for a mask
- Composite the current output and base images using the mask image
- When finished with the loop, save the result
Input (transparent):
import cv2
import numpy as np
# load image with transparency
img = cv2.imread('black_circle_transp.png', cv2.IMREAD_UNCHANGED)
height, width = img.shape[:2]
print(img.shape)
# extract the bgr channels and the alpha channel
bgr = img[:,:,0:3]
aa = img[:,:,3]
aa = cv2.merge([aa,aa,aa])
# create whatever color background you want, in this case white
background = np.full((500,500,3), (255,255,255), dtype=np.float64)
# create white image of the size you want
white = np.full((500,500,3), (255,255,255), dtype=np.float64)
# create black image of the size you want
black = np.zeros((500,500,3), dtype=np.float64)
# initialize output
result = background.copy()
# define top left corner x,y locations for circle offsets
xy_offsets = [(100,100), (150,150), (200,200)]
# insert bgr and alpha into white and black images respectively of desired output size and composite
for offset in xy_offsets:
    xoff = offset[0]
    yoff = offset[1]
    base = white.copy()
    base[yoff:height+yoff, xoff:width+xoff] = bgr
    mask = black.copy()
    mask[yoff:height+yoff, xoff:width+xoff] = aa
    # standard alpha compositing: result = base*alpha + result*(1-alpha),
    # with the 0-255 mask acting as alpha, hence the division by 255
    result = (result * (255-mask) + base * mask) / 255
result = result.clip(0, 255).astype(np.uint8)
# save resulting masked image
cv2.imwrite('black_circle_composite.png', result)
# display result, though it won't show transparency
cv2.imshow("image", img)
cv2.imshow("aa", aa)
cv2.imshow("bgr", bgr)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
I initialize my dataset using the following function (simplified):
WIDTH = ...
HEIGHT = ...
def load_data(dataset_path):
    images = []
    labels = []
    for image_path in all_image_paths:  # pseudocode: iterate over the dataset's image files
        image = cv2.imread(image_path)
        image = cv2.resize(image, (WIDTH, HEIGHT))  # ???
        images.append(image)
        labels.append(corresponding_label)
    return (np.array(images).reshape(-1, WIDTH, HEIGHT, 3) / 255, np.array(labels))
In the tutorials I watched, people resize the input images to (WIDTH, HEIGHT). But this stretches the images. I don't understand why we have to do that, because the model I'm using applies convolutions to the input images. So I tried not resizing the input images, but I got an error during the reshape at the end of my function.
What am I missing?
You aren't limited to stretching the image: you could either crop it or pad it with a buffer zone of a consistent color. Cropping is more convenient if you can afford to lose part of the image, but you can also just fill the rest of the space with a fixed color; the model couldn't care less.
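A minimal sketch of the pad-to-size idea with OpenCV (the helper name resize_with_padding and the scale-then-pad choice are mine, not from the answer above):

import cv2

def resize_with_padding(image, width, height, color=(0, 0, 0)):
    h, w = image.shape[:2]
    # Scale so the image fits inside (width, height) without changing aspect ratio.
    scale = min(width / w, height / h)
    new_w, new_h = int(w * scale), int(h * scale)
    image = cv2.resize(image, (new_w, new_h))
    # Pad the remaining space with a fixed color.
    top = (height - new_h) // 2
    bottom = height - new_h - top
    left = (width - new_w) // 2
    right = width - new_w - left
    return cv2.copyMakeBorder(image, top, bottom, left, right,
                              cv2.BORDER_CONSTANT, value=color)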
What kind of error did you get when reshaping? Chances are that if you do not resize the images to a common size, the resulting array is ragged, and you cannot reshape it to (-1, WIDTH, HEIGHT, 3) later on. In that case, you must resize the images, or change the values of WIDTH and HEIGHT.
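To illustrate the likely failure (an assumption, since the actual error wasn't posted): with images of different shapes, NumPy cannot form a dense batch, so a ValueError is raised, either when building the array or when reshaping it, depending on the NumPy version:

import numpy as np

a = np.zeros((100, 120, 3))  # two "images" with different shapes
b = np.zeros((80, 90, 3))
try:
    batch = np.array([a, b])        # recent NumPy versions raise ValueError here
    batch.reshape(-1, 100, 120, 3)  # older versions build a ragged object array and fail here
except ValueError as err:
    print(err)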
I'm trying to save the image after a vis_bbox prediction at its original image dimensions.
My code:
from PIL import Image, ImageChops
import cv2
import matplotlib.pyplot as plt
# utils, model, voc_colormap and vis_bbox are defined elsewhere (presumably ChainerCV)
img = utils.read_image('/home/ubuntu/ui.jpg', color=True)
bboxes, labels, scores = model.predict([img])
bbox, label, score = bboxes[0], labels[0], scores[0]
colors = voc_colormap(label + 1)
bccd_labels = ('cell', 'cell')
vis_bbox(img, bbox, label_names=bccd_labels, instance_colors=colors, alpha=0.9, linewidth=1.0)
plt.axis("off")
plt.savefig("/home/ubuntu/ins.jpg")
When saving, it writes the image with a white background at matplotlib's default figure size (432 x 288). I need to save the predicted image from vis_bbox at the original dimensions (1300 x 1300).
Any suggestions would be helpful!
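One possible approach (a sketch, assuming vis_bbox accepts an ax argument as ChainerCV's version does): size the matplotlib figure so that inches times dpi equals the desired pixel dimensions, and draw into an axes that fills the whole figure:

import matplotlib.pyplot as plt

dpi = 100
# pixels = inches * dpi, so a 13 x 13 inch figure at 100 dpi gives 1300 x 1300 pixels
fig = plt.figure(figsize=(1300 / dpi, 1300 / dpi), dpi=dpi)
ax = fig.add_axes([0.0, 0.0, 1.0, 1.0])  # axes spanning the whole figure, no margins
ax.set_axis_off()
vis_bbox(img, bbox, label_names=bccd_labels, instance_colors=colors,
         alpha=0.9, linewidth=1.0, ax=ax)
fig.savefig("/home/ubuntu/ins.jpg", dpi=dpi)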
I would like to create a matrix of subplots and display each BMP file from a directory in a different subplot, but I cannot find an appropriate solution for my problem. Could somebody help me?
This is the code that I have:
import os, sys
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
from glob import glob
bmps = glob('*trace*.bmp')
fig, axes = plt.subplots(3, 3)
for arch in bmps:
    i = Image.open(arch)
    iar = np.array(i)
    for i in range(3):
        for j in range(3):
            axes[i, j].plot(iar)
plt.subplots_adjust(wspace=0, hspace=0)
plt.show()
I am having the following error after executing:
Natively, matplotlib only supports PNG images (see http://matplotlib.org/users/image_tutorial.html), so the way to go is always: read the image, then plot the image.
Read the images:
import matplotlib.image as mpimg
img1 = mpimg.imread('stinkbug1.png')
img2 = mpimg.imread('stinkbug2.png')
Plot the images (2 subplots):
import matplotlib.pyplot as plt
plt.figure(1)
plt.subplot(211)
plt.imshow(img1)
plt.subplot(212)
plt.imshow(img2)
plt.show()
Follow the tutorial at http://matplotlib.org/users/image_tutorial.html (it covers the library imports).
Here is a thread on plotting BMPs with matplotlib: Why bmp image displayed as wrong color with plt.imshow of matplotlib on IPython-notebook?
The BMP has three color channels plus the height and width, giving it a shape of (h, w, 3). I believe plotting the image gives you an error because plot only accepts two dimensions. You could grayscale the image, which would produce a matrix of only two dimensions (h, w).
Without knowing the dimensions of the images, you could do something like this:
for idx, arch in enumerate(bmps):
    i = idx % 3   # subplot row
    j = idx // 3  # subplot column
    image = Image.open(arch)
    iar_shp = np.array(image).shape  # get h, w dimensions
    image = image.convert('L')       # convert to grayscale
    # load the grayscale data and reshape to the h, w of the color bmp
    iar = np.array(image.getdata()).reshape(iar_shp[0], iar_shp[1])
    axes[i, j].plot(iar)
plt.subplots_adjust(wspace=0, hspace=0)
plt.show()
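One side note on this approach: axes[i, j].plot(iar) draws each pixel column as a line chart. If the goal is to display the pictures themselves, imshow is probably the call you want; a small variation of the loop above, using the same variables:

for idx, arch in enumerate(bmps):
    i, j = idx % 3, idx // 3
    iar = np.array(Image.open(arch).convert('L'))
    axes[i, j].imshow(iar, cmap='gray')  # render the matrix as an image
    axes[i, j].axis('off')
plt.subplots_adjust(wspace=0, hspace=0)
plt.show()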