How to exchange colors between 2 images? - numpy

I have an image of spectacles with a black background that I need to overlay onto a face image. To do so, I take the part of the face image with the same shape as the spectacles image and copy the face colors onto the black parts of the spectacles image; this small patch can then be put back into the face image. But I am not able to take the correct colors from the face image for the spectacles image. I tried this:
specs[np.where((hmd == [0,0,0,0]).all(axis=2))] = sub_face
specs image:
face image:
I need to put a resized specs image onto the face. I have resized the specs image and I also know the position where I will place the specs on the face image. I just need to remove the black background from the specs and fill it with the relevant face colors so it looks like the specs are on the face in a natural way.
Code I am using:
import cv2
import numpy as np

specs = cv2.imread("rot_h0v0z0.png")
face = cv2.imread("~/Downloads/celebA/000001.png")
specs = cv2.resize(specs, None, fx=0.3, fy=0.3, interpolation=cv2.INTER_AREA)
sub_face = face[0:specs.shape[0], 0:specs.shape[1]]
# attempted replacement of the black pixels with the face pixels
specs[np.where((hmd == [0,0,0,0]).all(axis=2))] = sub_face
Was able to solve it, turned out pretty simple :P
(b, g, r) = cv2.split(specs)
indices = np.where(b == [0])
for i, j in zip(indices[0], indices[1]):
    specs[i, j] = sub_face[i, j]

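As an aside, the same replacement can be done without the per-pixel Python loop by masking all three channels at once. A minimal sketch, assuming the specs background is pure (or near) black and that the face file name below is just a placeholder:

import cv2
import numpy as np

# Hypothetical file names; substitute your own specs and face images.
specs = cv2.imread("rot_h0v0z0.png")
face = cv2.imread("face.png")

specs = cv2.resize(specs, None, fx=0.3, fy=0.3, interpolation=cv2.INTER_AREA)
sub_face = face[0:specs.shape[0], 0:specs.shape[1]]

# Pixels that are (near) black in all three channels belong to the background.
mask = np.all(specs < 10, axis=2)

# Copy face pixels into the black background, then write the patch back.
specs[mask] = sub_face[mask]
face[0:specs.shape[0], 0:specs.shape[1]] = specs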

Related

remove background using u2net produced mask

I am trying to remove the background from an image. For this purpose I am using U2NET. I am writing the network structure in TensorFlow by following this repository. I have changed the model architecture according to my needs. It takes a 96x96 image and produces 7 masks. I take the 1st mask (out of 7) and multiply it with all the channels of the original 96x96 image.
The code that predicts 7 masks is:
import os
from copy import deepcopy
import numpy as np
from PIL import Image

img = Image.open(os.path.join('DUTS-TE', 'DUTS-TE-Image', test_x_names[90]))
copied = deepcopy(img)
copied = copied.resize((96, 96))
copied = np.expand_dims(copied, axis=0)
preds = model.predict(copied)
preds = np.squeeze(preds)
"preds[0]" is:
predicted mask
Multiplying the mask with the original image produces: [masked image]
The corresponding code is ("img2" is the original image):
img2 = np.asarray(img2)
immg = np.zeros((96, 96, 3), np.uint8)
for i in range(0, 3):
    immg[:, :, i] = img2[:, :, i] * preds[0]
plt.imshow(immg)
plt.show()
If I binarize the mask and then multiply it with the original image, it produces: [masked image with binarized mask]
The corresponding code is:
frame = binarize(preds[0, :, :], threshold=0.5)
img2 = np.asarray(img2)
immg = np.zeros((96, 96, 3), np.uint8)
for i in range(0, 3):
    immg[:, :, i] = img2[:, :, i] * frame
plt.imshow(immg)
plt.show()
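Incidentally, the per-channel loop above can be written as a single broadcast; a minimal sketch, assuming "img2" is a 96x96x3 uint8 image and preds[0] is a 96x96 float mask in [0, 1]:

import numpy as np

mask = (preds[0] > 0.5).astype(np.uint8)             # binarized mask
masked = np.asarray(img2) * mask[:, :, np.newaxis]   # broadcast over all 3 channels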
Multiplying the original image with the mask or the binarized mask does not segment the foreground properly from the background. So, what can be done? Am I missing something?

How to export adversarial examples for Facenet in Cleverhans?

I am trying to follow this blog https://brunolopezgarcia.github.io/2018/05/09/Crafting-adversarial-faces.html to generate adversarial face images against Facenet. The code is here https://github.com/tensorflow/cleverhans/tree/master/examples/facenet_adversarial_faces and works fine! My question is how I can export these adversarial images. Perhaps the question is too straightforward, so the blog didn't mention it and only shows some sample pictures.
I was thinking it is not a hard problem, since I know the generated adversarial samples are in "adv". But this adv (float32) came from faces1, after being prewhitened and normalized. To restore 8-bit images from adv (float32), I have to reverse the normalization and the prewhitening. It seems like if we want to output images from Facenet, we have to go through this process.
I am new to Facenet and Cleverhans, so I am not sure whether this is the best way to do it, or whether there is a common way (such as an existing function) for people to export images from Facenet.
In facenet_fgsm.py, we finally get the adversarial samples. I need to export adv as plain integer images.
adv = sess.run(adv_x, feed_dict=feed_dict)
In set_loader.py, there is some normalization:
def load_testset(size):
    # Load images paths and labels
    pairs = lfw.read_pairs(pairs_path)
    paths, labels = lfw.get_paths(testset_path, pairs, file_extension)

    # Random choice
    permutation = np.random.choice(len(labels), size, replace=False)
    paths_batch_1 = []
    paths_batch_2 = []

    for index in permutation:
        paths_batch_1.append(paths[index * 2])
        paths_batch_2.append(paths[index * 2 + 1])

    labels = np.asarray(labels)[permutation]
    paths_batch_1 = np.asarray(paths_batch_1)
    paths_batch_2 = np.asarray(paths_batch_2)

    # Load images
    faces1 = facenet.load_data(paths_batch_1, False, False, image_size)
    faces2 = facenet.load_data(paths_batch_2, False, False, image_size)

    # Change pixel values to 0 to 1 values
    min_pixel = min(np.min(faces1), np.min(faces2))
    max_pixel = max(np.max(faces1), np.max(faces2))
    faces1 = (faces1 - min_pixel) / (max_pixel - min_pixel)
    faces2 = (faces2 - min_pixel) / (max_pixel - min_pixel)
In the facenet.py load_data function, there is a prewhiten process.
nrof_samples = len(image_paths)
images = np.zeros((nrof_samples, image_size, image_size, 3))
for i in range(nrof_samples):
    img = misc.imread(image_paths[i])
    if img.ndim == 2:
        img = to_rgb(img)
    if do_prewhiten:
        img = prewhiten(img)
    img = crop(img, do_random_crop, image_size)
    img = flip(img, do_random_flip)
    images[i,:,:,:] = img
return images
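For context, prewhiten is essentially per-image standardization (subtract the image mean, divide by an adjusted standard deviation); roughly:

import numpy as np

def prewhiten(x):
    # Roughly what facenet's prewhiten does: per-image standardization,
    # with the std floored so near-constant images don't blow up.
    mean = np.mean(x)
    std = np.std(x)
    std_adj = np.maximum(std, 1.0 / np.sqrt(x.size))
    return (x - mean) / std_adj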
I hope some expert can point me to a hidden function in Facenet or Cleverhans that can directly export the adv images; otherwise, reversing the normalization and prewhitening process seems awkward. Thank you very much.
I don't know much about the Facenet code. From your discussion, it seems like you will have to save the values of `min_pixel` and `max_pixel` to reverse the normalization, and then look at the `prewhiten` function to see how you can reverse it. I'll email Bruno to see if he has any further comments to help you out.
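For what it's worth, a rough sketch of that reversal (assuming you kept min_pixel/max_pixel from load_testset and, for each image, the mean and adjusted std that prewhiten used; the function name and arguments here are only illustrative):

import numpy as np
from scipy import misc  # the example code already uses scipy.misc for image I/O

def export_adv_image(adv_img, min_pixel, max_pixel, mean, std_adj, path):
    # Undo the 0..1 normalization applied in load_testset.
    x = adv_img * (max_pixel - min_pixel) + min_pixel
    # Undo prewhiten, which returned (img - mean) / std_adj.
    x = x * std_adj + mean
    # Clip to the valid pixel range and cast back to 8-bit before saving.
    x = np.clip(x, 0, 255).astype(np.uint8)
    misc.imsave(path, x)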
EDIT: Now image exporting is included in the Facenet example of Cleverhans: https://github.com/tensorflow/cleverhans/commit/08f6fb9cf2a7f199467d5ed60179fc3ae9140458

Grayscale image using opencv from numpy array failed

I have a numpy array that holds a black and white image with the following shape:
print(img.shape)
(28, 112)
When I try to grayscale the image, in order to get contours using OpenCV, with the following steps:
# grayscale the image
grayed = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# threshold the image
thresh = cv2.threshold(grayed, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
I got the following error
<ipython-input-178-7ebff17d1c18> in get_digits(img)
6
7 #grayscale the image
----> 8 grayed = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
9
10
error: C:\projects\opencv-python\opencv\modules\imgproc\src\color.cpp:11073: error: (-215) depth == 0 || depth == 2 || depth == 5 in function cv::cvtColor
The OpenCV error doesn't contain enough information to figure out what is wrong.
Here is the working code for how you were trying it:
img = np.stack((img,) * 3, -1)  # build a 3-channel image from the single channel
img = img.astype(np.uint8)      # cvtColor expects 8-bit (or 16-bit / float32) data
grayed = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(grayed, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
A simpler way of getting the same result is to invert the image yourself:
img = (255-img)
thresh = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)[1]
As you discovered, as you perform different operations on images, the image is required to be in different formats.
cv2.THRESH_BINARY_INV and cv2.THRESH_BINARY are designed to take a color image (and convert it to grayscale), so you need a three-channel representation.
cv2.THRESH_OTSU works with grayscale images, so one channel is okay for that.
Since your image was already grayscale from the start, you weren't able to convert it from color to grayscale, nor did you really need to. I assume you were trying to invert the image, but that's easy enough on your own (255-img).
At one point you tried to do a cv2.THRESH_OTSU with floating point values, but cv2.THRESH_OTSU requires integers between 0 and 255.
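If that is your situation, a minimal sketch of the conversion (assuming img is a float array with values in 0.0-1.0):

import numpy as np
import cv2

# Scale and cast the float image to 8-bit before Otsu thresholding.
img_u8 = np.clip(img * 255, 0, 255).astype(np.uint8)
thresh = cv2.threshold(img_u8, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]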
If OpenCV had more user-friendly error messages, it would really help with issues like these.

How to resize font in plot_net feature of phyloseq?

I want to resize the text in plot_net but none of the options are working for me. I am trying:
p <- plot_net(physeqP, maxdist = 0.4, point_label = "ID", color = "Cond", shape = "Timeperiod")
p + geom_text(size = 15)
This gives me the error:
"Error: geom_text requires the following missing aesthetics: x, y, label".
Can anyone please tell me how I can fix this issue?
I don't want to resize the legends or the axis labels, but the node text.
The image is drawn using phyloseq, but since the font size is very small, I want to make it more prominent.
Without an example it's hard to reproduce.
p <- plot_net(physeqP, maxdist = 0.4, point_label = "ID",
              color = "Cond", shape = "Timeperiod", cex_val = 2)
I believe this is from the NeuralNetTools package.
Try using cex_val, a numeric value indicating the size of the text labels (default 1):
https://www.rdocumentation.org/packages/NeuralNetTools/versions/1.5.0/topics/plotnet

Carrierwave/Minimagick - Cropping is always inaccurate, except when 'y' parameter is 0

Having implemented the ability to crop as shown in Railscasts episode 182 (revised), I can't seem to get cropping to work accurately. What is cropped is always the top 20% of the area selected in the crop, except when the 'y' parameter is 0, that is, when the cropping area touches the top of the image. Then cropping works fine.
My implementation is the same as shown in the screencast, except that I am calling the crop_avatar method from the controller like this:
@profile.crop_x = params[:profile][:crop_x]
@profile.crop_y = params[:profile][:crop_y]
@profile.crop_h = params[:profile][:crop_y]
@profile.crop_w = params[:profile][:crop_w]
@profile.crop_avatar
@profile.save!
Also the crop method in avatar_uploader is implemented like this:
def crop
  if model.crop_x.present?
    resize_to_limit(500, 500)
    manipulate! do |img|
      x = model.crop_x
      y = model.crop_y
      w = model.crop_w
      h = model.crop_h
      img.crop "#{w}x#{h}+#{x}+#{y}"
      img
    end
  end
end
I am using Rails 3.2.1, Carrierwave 0.7.1, JCrop 0.9.12.
I was having a similar issue and I found that resizing the image inside the manipulate! call rather than using the "resize_to_limit" carrierwave helper method solved the problem for me.
def crop
  if model.crop_x.present?
    manipulate! do |img|
      x = model.crop_x
      y = model.crop_y
      w = model.crop_w
      h = model.crop_h
      img.resize "500x500"
      img.crop "#{w}x#{h}+#{x}+#{y}"
      img
    end
  end
end
I also highly recommend reading this answer for more details on what is actually going on in this code.