OpenCV: Replace mask with background color - numpy

I would like to apply a mask operation on an image; my ultimate goal is to remove the mask completely and replace it with the background color.
Here is an image sample:
The Mask:
I have used cv2.bitwise_not, but the output is not a complete removal:
res = cv2.bitwise_not(img,img,mask=closex)
I assume there is a numpy operation that can do this.

Try using:
import cv2
img = cv2.imread("theBaseImage.jpg", 1)
mask = cv2.imread("theImageToUseAsMask.jpg", 1)
whiteOut = cv2.add(mask, img) #add your images, making the desired regions white
cv2.imwrite("maskedImage.jpg", whiteOut)
This results in the following image:
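If you specifically want a numpy operation, as the question suggests, a minimal sketch is below. It assumes the mask is read as a single-channel image that is non-zero over the region to remove, and it uses white as the background color; change bg_color to match your actual background.
import cv2
img = cv2.imread("theBaseImage.jpg", 1)           # BGR image
mask = cv2.imread("theImageToUseAsMask.jpg", 0)   # read the mask as a single channel
bg_color = (255, 255, 255)    # assumed background color (white); change to match your image
out = img.copy()
out[mask > 0] = bg_color      # boolean indexing: overwrite every masked pixel with the background color
cv2.imwrite("maskedImage.jpg", out)
Boolean indexing broadcasts the 3-element color across every selected pixel, so the masked region is simply overwritten rather than blended.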


Input array must have a shape == (..., 3)), got (299, 299, 4)

I am using a pretrained ResNet50 model to validate some classes, and I am using LIME to inspect how the model handles this data. However, some of the images are not RGB and may be in other formats; I noticed that RGB arrays have 3 channels, while some of my images have a different number (like 4). I am using skimage to preprocess the images and test them with LIME. Any suggestions on how I can fix this with skimage and TensorFlow? I am using pandas dataframes to collect the images, and train and test generators to see if the model is able to guess correctly.
code:
def transform_img_fn_ori(url):
    img = skimage.io.imread(url)
    img = skimage.transform.resize(img, (299,299))
    img = (img - 0.5)*2
    img = np.expand_dims(img, axis=0)
    return img
url="" #this is a path on pc
images=transform_img_fn_ori(url)
explanation= explainer.explain_instance(images[0].astype('double'), model.predict, top_labels=3, hide_color=0, num_samples=1000)
temp_1, mask_1 = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True)
temp_2, mask_2 = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False, num_features=10, hide_rest=False)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,15))
ax1.imshow(mark_boundaries(temp_1, mask_1))
ax2.imshow(mark_boundaries(temp_2, mask_2))
ax1.axis('off')
ax2.axis('off')
Your model expects RGB images, and your URL may point to non-RGB images.
In this situation the best approach is to make sure images are read as RGB. For instance, OpenCV always reads images in BGR order.
With skimage you can't control the format that is read: according to the docs it can be grayscale, RGB, or RGBA.
In addition, skimage doesn't provide a single method to convert any image to RGB, like the convert method in Pillow. So you need to detect which color mode your image is in and convert it to RGB.
img = skimage.io.imread(url)
if img.ndim == 2 or (img.ndim == 3 and img.shape[2] == 1):
    # your image is in grayscale
    img = skimage.color.gray2rgb(img)
elif img.ndim == 3 and img.shape[2] == 4:
    # your image is in RGBA
    img = skimage.color.rgba2rgb(img)
else:
    # your image is already in RGB
    assert img.ndim == 3 and img.shape[2] == 3
The last assert is to make sure everything is ok.
Finally, probably not your case, but images may contain any number of channels and use color spaces other than RGB. That's why I don't like skimage and prefer OpenCV. So, whatever method you use to read images, check the docs to be sure what it returns: in some cases it is impossible to tell from the array alone, for instance RGB versus BGR.
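For completeness, here is a sketch of the question's transform_img_fn_ori with that conversion folded in; the resize and scaling steps are kept as in the question, and the initial squeeze is only there so gray2rgb always receives a 2D array.
import numpy as np
import skimage.color
import skimage.io
import skimage.transform

def transform_img_fn_ori(url):
    img = skimage.io.imread(url)
    # normalize to 3-channel RGB before any further processing
    if img.ndim == 3 and img.shape[2] == 1:
        img = img[:, :, 0]                   # drop the singleton channel axis
    if img.ndim == 2:
        img = skimage.color.gray2rgb(img)    # grayscale -> RGB
    elif img.shape[2] == 4:
        img = skimage.color.rgba2rgb(img)    # RGBA -> RGB
    assert img.ndim == 3 and img.shape[2] == 3
    img = skimage.transform.resize(img, (299, 299))
    img = (img - 0.5) * 2
    return np.expand_dims(img, axis=0)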

Python OpenCV Duplicate a transparent shape in the same image

I have an image of a circle; refer to the image attached below. I already retrieved the transparent circle and want to paste that circle back into the image to make some overlapping circles.
Below is my code, but it led to problem A: it's like a (transparent) hole in the image. I need the circles on a normal white background.
height, width, channels = circle.shape
original_image[60:60+height, 40:40+width] = circle
I used cv2.addWeighted, but got a blending issue; I need clear circles:
circle = cv2.addWeighted(original_image[60:60+height, 40:40+width],0.5,circle,0.5,0)
original_image[60:60+rows, 40:40+cols] = circle
If you already have a transparent black circle, then in Python/OpenCV here is one way to do that.
- Read the transparent image unchanged
- Extract the bgr channels and the alpha channel
- Create a colored image of the background color and size desired
- Create similar sized white and black images
- Initialize a copy of the background color image for the output
- Define a list of offset coordinates in the larger image
- Loop over the list of offsets and do the following:
- Insert the bgr image into a copy of the white image as the base image
- Insert the alpha channel into a copy of the black image for a mask
- Composite the initialized output and base images using the mask image
- When finished with the loop, save the result
Input (transparent):
import cv2
import numpy as np
# load image with transparency
img = cv2.imread('black_circle_transp.png', cv2.IMREAD_UNCHANGED)
height, width = img.shape[:2]
print(img.shape)
# extract the bgr channels and the alpha channel
bgr = img[:,:,0:3]
aa = img[:,:,3]
aa = cv2.merge([aa,aa,aa])
# create whatever color background you want, in this case white
background=np.full((500,500,3), (255,255,255), dtype=np.float64)
# create white image of the size you want
white=np.full((500,500,3), (255,255,255), dtype=np.float64)
# create black image of the size you want
black=np.zeros((500,500,3), dtype=np.float64)
# initialize output
result = background.copy()
# define top left corner x,y locations for circle offsets
xy_offsets = [(100,100), (150,150), (200,200)]
# insert bgr and alpha into white and black images respectively of desired output size and composite
for offset in xy_offsets:
    xoff = offset[0]
    yoff = offset[1]
    base = white.copy()
    base[yoff:height+yoff, xoff:width+xoff] = bgr
    mask = black.copy()
    mask[yoff:height+yoff, xoff:width+xoff] = aa
    # alpha composite: where mask is 255 take the shifted circle (base), where 0 keep the current result
    result = (result * (255-mask) + base * mask)/255
result = result.clip(0,255).astype(np.uint8)
# save resulting masked image
cv2.imwrite('black_circle_composite.png', result)
# display result, though it won't show transparency
cv2.imshow("image", img)
cv2.imshow("aa", aa)
cv2.imshow("bgr", bgr)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:

How to save the figure from vis_bbox without a white background, when plotting with matplotlib?

I'm trying to save the image after the vis_bbox prediction with its original image dimensions.
My code:
from PIL import Image, ImageChops
import cv2
from matplotlib import pyplot as plt
# utils, model, vis_bbox and voc_colormap are defined elsewhere in the script
img = utils.read_image('/home/ubuntu/ui.jpg', color=True)
bboxes, labels,scores = model.predict([img])
bbox, label, score = bboxes[0], labels[0], scores[0],
colors = voc_colormap(label + 1)
bccd_labels = ('cell', 'cell')
vis_bbox(img, bbox, label_names=bccd_labels, instance_colors=colors, alpha=0.9, linewidth=1.0)
plt.axis("off")
plt.savefig("/home/ubuntu/ins.jpg")
While saving, it saves the image with a white background and the default size (432 x 288).
I need to save the predicted image from vis_bbox with the original dimensions (1300 x 1300).
Any suggestions would be helpful!
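One generic matplotlib approach (not specific to vis_bbox, and assuming the plot is drawn on the current figure) is to size the figure to the original pixel dimensions and remove the axes padding before saving. A rough sketch, with the helper name save_figure_tight and the dpi value being arbitrary choices:
from matplotlib import pyplot as plt

def save_figure_tight(path, width_px=1300, height_px=1300, dpi=100):
    """Save the current figure at a fixed pixel size without the white margins."""
    fig = plt.gcf()
    fig.set_size_inches(width_px / dpi, height_px / dpi)
    plt.gca().set_axis_off()
    # stretch the axes to fill the whole figure so no white border is left
    fig.subplots_adjust(left=0, right=1, top=1, bottom=0)
    fig.savefig(path, dpi=dpi)
Calling save_figure_tight("/home/ubuntu/ins.jpg") right after vis_bbox(...), in place of plt.savefig, should write the prediction at roughly 1300 x 1300 pixels instead of the default figure size.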

Barcode decoding with pyzbar on raspberry pi

I am using pyzbar to decode barcodes on a Raspberry Pi 3 using the Pi Camera v1 (resolution 1296x972). QR codes are decoded very well. When decoding one-dimensional barcodes (Codabar), the success rate is very low.
I have tried saving one frame from the video stream and decoding it with pyzbar on the Raspberry Pi, and it fails. When I decode the same image on Ubuntu, it decodes successfully.
from pyzbar import pyzbar
from PIL import Image
img = Image.open('sampleImage.png')
d = pyzbar.decode(img)
print (d)
Any thoughts what may be the problem?
UPDATE:
The following image is my specific use case.
Because I am using the Pi Camera v1 to take images, I tried adjusting the image sharpness:
from picamera import PiCamera
self.camera = PiCamera()
self.camera.sharpness = 100
The following image is with sharpness 100. However, pyzbar still fails to decode it on the Raspberry Pi.
You need to remove the black border from your image. According to this answer,
you can simply crop your image and then feed the cropped image to the pyzbar.decode() function. The threshold passed to autocrop below is chosen high enough that the dark border counts as background to be cropped away.
import cv2
from pyzbar import pyzbar
import numpy as np
def autocrop(image, threshold=0):
    """Crops any edges below or equal to threshold.
    Crops a blank image to 1x1.
    Returns the cropped image.
    """
    if len(image.shape) == 3:
        flatImage = np.max(image, 2)   # collapse the color channels
    else:
        flatImage = image
    assert len(flatImage.shape) == 2
    # np.max along axis 0 gives per-column maxima, along axis 1 per-row maxima
    rows = np.where(np.max(flatImage, 0) > threshold)[0]
    if rows.size:
        cols = np.where(np.max(flatImage, 1) > threshold)[0]
        image = image[cols[0]: cols[-1] + 1, rows[0]: rows[-1] + 1]
    else:
        image = image[:1, :1]
    return image

if __name__ == "__main__":
    image = cv2.imread('sampleImage.png')
    crop = autocrop(image, 165)
    d = pyzbar.decode(crop)
    print(d)

Wrote code in Python to edit an image's background and the output I am getting is totally off

I edited it to view the foreground image on a white background, but now none of the images are visible.
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('91_photo.jpg')
mask = np.zeros(img.shape[:2],np.uint8)
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
rect = (10,10,360,480)
cv2.grabCut(img,mask,rect,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_RECT)
mask2 = np.where((mask==2)|(mask==0),0,255).astype('uint8')
img = img*mask2[:,:,np.newaxis]
plt.imshow(img),plt.colorbar(),plt.show()
Expecting the result to be a visible image on a white background.
This is what I'm getting:
There are a number of small issues with your code that are adding up to that weird result.
OpenCV uses BGR ordering of an image's channels, whereas matplotlib uses RGB. That means if you read an image with OpenCV but want to display it with matplotlib, you need to convert the image from BGR to RGB before displaying (that's the reason the colors are weird). Also, less importantly, color images are not displayed with a colormap, so the plt.colorbar() call does not do anything for you.
In numpy, it's best to keep masks boolean whenever you can, because you can use them to index your arrays. Your current code converts a boolean mask to a uint8 image with 0 and 255 values and then multiplies that with your image. That means your image will be set to zero wherever the mask is zero, and wherever the mask is 255 the values will be multiplied by 255 and overflow (or do other weird things). Instead, keep the mask boolean and use it to index your array. That way, anywhere the mask is True you can set the value in your image to something specific (like 255 for white).
This should fix you up:
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('91_photo.jpg')
mask = np.zeros(img.shape[:2], np.uint8)
bgdModel = np.zeros((1, 65), np.float64)
fgdModel = np.zeros((1, 65), np.float64)
rect = (10, 10, 360, 480)
cv2.grabCut(img, mask, rect, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)
mask2 = (mask==2) | (mask==0)
img[mask2] = 255
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()