How to fix this issue? cv2.error: OpenCV(4.1.2) ... error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize' - numpy

I am trying to rotate multiple images in a folder, but I get this error whenever I set fx and fy greater than 0.2 in the resize function:
(cv2.error: OpenCV(4.1.2) ... error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize')
However, when I rotate a single image with fx and fy equal to 0.5, it works perfectly fine.
Is there a way to fix this issue? Augmenting images one by one is very tedious. Also, the images rotated by the attached code with fx and fy equal to 0.2 come out with undesirable dimensions, i.e. the photos are very small and their quality is reduced.
The part of the code that rotates multiple images is given below:
for imag in os.listdir(source_folder):
    img = cv2.imread(os.path.join(source_folder, imag))
    img = cv2.resize(img, (0, 0), fx=0.5, fy=0.5)
    width = img.shape[1]
    height = img.shape[0]
    M = cv2.getRotationMatrix2D((width / 2, height / 2), 5, 1.0)
    rotated_img = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
    cv2.imwrite(os.path.join(destination_right_folder, "v4rl" + str(a) + '.jpg'), rotated_img)
    #cv2.imshow("rotated_right", rotated_img)
    #cv2.waitKey(0)
    a += 1

The error happens when you call cv2.resize(): files that are not images may be getting read, in which case cv2.imread() returns None and resize receives an empty input. Add a check after you read the image to see if it is None:

img = cv2.imread(os.path.join(source_folder, imag))
if img is None:
    continue
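For context, here is a minimal sketch of the whole loop with that guard in place (assuming source_folder, destination_right_folder, and the counter a are defined as in the question):

import os
import cv2

a = 0
for imag in os.listdir(source_folder):
    path = os.path.join(source_folder, imag)
    img = cv2.imread(path)
    if img is None:
        # Not a readable image (e.g. a stray text or hidden file): skip it.
        continue
    img = cv2.resize(img, (0, 0), fx=0.5, fy=0.5)
    height, width = img.shape[:2]
    M = cv2.getRotationMatrix2D((width / 2, height / 2), 5, 1.0)
    rotated_img = cv2.warpAffine(img, M, (width, height))
    cv2.imwrite(os.path.join(destination_right_folder, "v4rl" + str(a) + ".jpg"), rotated_img)
    a += 1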

Related

remove background using u2net produced mask

I am trying to remove the background from an image. For this purpose I am using U2NET. I am writing the network structure in Tensorflow by following this repository. I have changed the model architecture according to my needs. It takes a 96x96 image and produces 7 masks. I take the 1st mask (out of 7) and multiply it against all channels of the original 96x96 image.
The code that predicts 7 masks is:
import os
from copy import deepcopy

import numpy as np
from PIL import Image

img = Image.open(os.path.join('DUTS-TE', 'DUTS-TE-Image', test_x_names[90]))
copied = deepcopy(img)
copied = copied.resize((96, 96))
copied = np.expand_dims(copied, axis=0)
preds = model.predict(copied)
preds = np.squeeze(preds)
"preds[0]" is:
predicted mask
Multiplying the mask against the original image produces:
masked image and corresponding code is ("img2" is original image):
img2 = np.asarray(img2)
immg = np.zeros((96, 96, 3), np.uint8)
for i in range(0, 3):
    immg[:, :, i] = img2[:, :, i] * preds[0]
plt.imshow(immg)
plt.show()
If I binarize the mask and then multiply it against the original image, it produces: [binarized mask result image]
The corresponding code is:
frame = binarize(preds[0, :, :], threshold=0.5)
img2 = np.asarray(img2)
immg = np.zeros((96, 96, 3), np.uint8)
for i in range(0, 3):
    immg[:, :, i] = img2[:, :, i] * frame
plt.imshow(immg)
plt.show()
Multiplying the original image by the mask or by the binarized mask does not segment the foreground properly from the background. So, what can be done? Am I missing something?
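As an aside, the per-channel multiplication loops above can be written more compactly with NumPy broadcasting; this is only a sketch of the same operation, not a fix for the segmentation quality itself:

import numpy as np

# Broadcasting: add a channel axis to the (96, 96) mask so it multiplies
# all three channels of the (96, 96, 3) image at once.
immg = (img2 * frame[:, :, np.newaxis]).astype(np.uint8)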

OpenCV Error using function matchTemplate

While using the matchTemplate function in OpenCV, I get an error saying that the template image is larger than the original image. How can I overcome that?
The code is as follows:
def imagecheck(name1):
    os.chdir('/content/drive/My Drive/Mad Street Den/Images')
    main_image = cv2.imread('image_name_100.jpg')
    gray_image = cv2.cvtColor(main_image, cv2.COLOR_BGR2GRAY)
    # open the template as a grayscale image
    os.chdir('/content/drive/My Drive/Mad Street Den/Crops')
    template = cv2.imread(name1, 0)
    width, height = template.shape[::-1]  # get the width and height
    # match the template using cv2.matchTemplate
    match = cv2.matchTemplate(gray_image, template, cv2.TM_CCOEFF_NORMED)
    threshold = 0.9
    position = np.where(match >= threshold)  # get the location of the template in the image
    for point in zip(*position[::-1]):  # draw a rectangle around the matched template
        cv2.rectangle(main_image, point, (point[0] + width, point[1] + height), (0, 204, 153), 2)
    #result=[position[1][0],position[0][0],position[0][1],position[0][2]]
    result = []
    if all(position):
        result.append(int(position[1]))
        result.append(int(position[0]))
        result.append(int(position[1] + width))
        result.append(int(position[0] + height))
    return result
    #cv2_imshow(main_image)

for i in range(0, 273):
    name1 = 'image_name_' + str(i) + '.jpg'
    result = imagecheck(name1)
    print(name1, ' : ', result)
The error is:
error: OpenCV(3.4.3) /io/opencv/modules/imgproc/src/templmatch.cpp:1107: error: (-215:Assertion failed) _img.size().height <= _templ.size().height && _img.size().width <= _templ.size().width in function 'matchTemplate'
You can avoid the issue by not attempting to match a template against an image when the template is larger. Compare the template dimensions to the image dimensions and, in this case, return [] if the template is larger in either dimension.
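A minimal sketch of that guard, placed inside imagecheck right after both images are loaded (using the names gray_image and template from the question):

# Skip matching when the template cannot fit inside the image.
t_h, t_w = template.shape[:2]
i_h, i_w = gray_image.shape[:2]
if t_h > i_h or t_w > i_w:
    return []
match = cv2.matchTemplate(gray_image, template, cv2.TM_CCOEFF_NORMED)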

Matplotlib Stopping an Animation

The following code is the function used to create an animation of appearing and then fading-away points on a Matplotlib basemap. I was wondering how to slow down the interval between points. In this case, I have set frames=62 because there are 62 points. However, changing the interval to a larger value doesn't seem to slow down the interval between points. Am I missing something here? The animation function and GIF are attached below. The rest of the code isn't here because I didn't think it was relevant to the question. Thanks.
def animate(frame):
    eq_num = frame % len(X)
    i = frame % len(P)
    P['colour'][:, 3] = np.maximum(0, P['colour'][:, 3] - 1.0/len(P))
    P['size'] += P['growth']
    magnitude = X['magnitude'][eq_num]
    P['epicentre'][i] = m(*X['epicentre'][eq_num])
    P['size'][i] = 5
    P['growth'][i] = np.exp(magnitude) * 0.1
    if magnitude < 4:
        P['colour'][i] = 0, 0, 1, 1
    else:
        P['colour'][i] = 1, 0, 0, 1
    scatter.set_edgecolors(P['colour'])
    scatter.set_facecolors(P['colour'] * (1, 1, 1, 0.25))
    scatter.set_sizes(P['size'])
    scatter.set_offsets(P['epicentre'])
    return scatter,

ani = FuncAnimation(fig, animate, frames=62, interval=1000, blit=False)
ani.save('animation.gif', writer='imagemagick', fps=100)
#plt.show()
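One likely explanation: interval only controls on-screen playback via plt.show(); when a FuncAnimation is saved to a file, the writer's fps argument sets the frame timing instead, so fps=100 plays the GIF back at 100 frames per second regardless of interval. A sketch of slowing the saved GIF down:

ani = FuncAnimation(fig, animate, frames=62, interval=1000, blit=False)
# When saving, fps (not interval) sets the output timing; fps=1 means
# one point-appearance per second in the resulting GIF.
ani.save('animation.gif', writer='imagemagick', fps=1)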

Grayscale image using opencv from numpy array failed

I have the following numpy array holding a black-and-white image with this shape:
print(img.shape)
(28, 112)
When I try to grayscale the image, in order to get contours with OpenCV, using the following steps:
#grayscale the image
grayed = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#threshold the image
thresh = cv2.threshold(grayed, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
I get the following error:
<ipython-input-178-7ebff17d1c18> in get_digits(img)
6
7 #grayscale the image
----> 8 grayed = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
9
10
error: C:\projects\opencv-python\opencv\modules\imgproc\src\color.cpp:11073: error: (-215) depth == 0 || depth == 2 || depth == 5 in function cv::cvtColor
The OpenCV errors contain no information that helps to understand what is wrong.
Here is the working code for how you were trying it:
img = np.stack((img,) * 3,-1)
img = img.astype(np.uint8)
grayed = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(grayed, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
A simpler way of getting the same result is to invert the image yourself (casting to uint8 as above, since THRESH_OTSU needs integer pixel values):
img = (255 - img).astype(np.uint8)
thresh = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)[1]
As you discovered, as you perform different operations on images, the image is required to be in different formats.
cv2.THRESH_BINARY_INV and cv2.THRESH_BINARY are designed to take a color image (and convert it to grayscale), so you need a three-channel representation.
cv2.THRESH_OTSU works with grayscale images, so one channel is okay for that.
Since your image was already grayscale from the start, you weren't able to convert it from color to grayscale, nor did you really need to. I assume you were trying to invert the image, but that's easy enough on your own (255 - img).
At one point you tried to do a cv2.THRESH_OTSU with floating-point values, but cv2.THRESH_OTSU requires integers between 0 and 255.
If OpenCV had more user-friendly error messages, it would really help with issues like these.
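In the same spirit, a small sketch of getting a floating-point image into the 0-255 integer range that THRESH_OTSU expects (this assumes the float values lie in [0, 1]; scale differently if yours do not):

import numpy as np
import cv2

# Assumption: img holds floats in [0.0, 1.0]. Scale to 0-255 and cast
# to uint8 so cv2.threshold with THRESH_OTSU accepts it.
img_u8 = (np.clip(img, 0.0, 1.0) * 255).astype(np.uint8)
thresh = cv2.threshold(img_u8, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]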

How to resize font in plot_net feature of phyloseq?

I want to resize the text in plot_net, but none of the options are working for me. I am trying:
p <- plot_net(physeqP, maxdist = 0.4, point_label = "ID", color = "Cond", shape = "Timeperiod")
p + geom_text(size = 15)
This gives me the error:
"Error: geom_text requires the following missing aesthetics: x, y, label".
Can anyone please tell me how I can fix this issue?
I don't want to resize the legend or the axes, just the node text.
The image is drawn using phyloseq, but since the font size is very small, I want to make it more prominent.
Without an example it's hard to reproduce, but try passing cex_val, a numeric value indicating the size of text labels (default 1):
p <- plot_net(physeqP, maxdist = 0.4, point_label = "ID"
              , color = "Cond", shape = "Timeperiod", cex_val = 2)
I believe this parameter comes from the NeuralNetTools package:
https://www.rdocumentation.org/packages/NeuralNetTools/versions/1.5.0/topics/plotnet