How to understand negative pixels in CNNs and the ReLU - tensorflow

I'm new to computer vision.
Do the negative pixel values produced by the convolutional layer's filters mean they are black?
And if ReLU only converts negative values to 0, is it just converting black to black?

A value of 0, or a negative value, does not necessarily mean black. For example, if you rescale an image's pixel values to the range 0-1, that does not mean all the pixels will look black. Brightness is relative: pixels with values closer to 1 will look closer to white. The same applies in your case.
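To make the distinction concrete, here is a minimal NumPy sketch (the values are made up): the activations a convolution produces are signed numbers, not displayable intensities, and ReLU simply clamps the negative ones to 0.

```python
import numpy as np

# made-up feature-map values produced by some convolution filter;
# they are not pixel colors, so "negative" does not mean "black"
feature_map = np.array([[-1.5, 0.2],
                        [ 3.0, -0.7]])

relu_out = np.maximum(feature_map, 0)  # ReLU: elementwise max(x, 0)
# relu_out is [[0.0, 0.2], [3.0, 0.0]]
```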

Related

Apply 3D mask to 3D image, background is white

I am trying to apply a binary mask to a 3D image by multiplying them. However, it returns an image with a white background rather than black (which is what I would expect, since the binary mask is 0 and 1 and all of the background voxels equal 0).
First I load the 3D scan/image (.nii.gz) and mask (.nii.gz) with nibabel, using:
scan = nib.load(path_to_scan).get_fdata()
mask = nib.load(path_to_mask).get_fdata()
Then I use:
masked_scan = scan*mask
Below is a visualization showing that when another mask is applied, the background is darker:
[screenshot]
Below is what they look like as volumes in 3D Slicer:
[screenshot]
What am I missing? The aim is to have a black background.
I also tried np.where(mask==1, scan, mask*scan)
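For what it's worth, the multiplication itself does give an exactly-zero background. A minimal sketch with synthetic arrays standing in for the get_fdata() results (the shapes and values are made up):

```python
import numpy as np

# synthetic stand-ins for nib.load(...).get_fdata() results
scan = np.random.uniform(0.0, 1000.0, size=(4, 4, 4))
mask = np.zeros((4, 4, 4))
mask[1:3, 1:3, 1:3] = 1.0  # strictly binary foreground

masked_scan = scan * mask  # background voxels become exactly 0
```

If the result still renders white in a viewer, it is worth checking the display window/level settings and verifying that the mask really contains only 0s and 1s.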

How can I find the amount of pixels in part of an image?

I have an image and I want to see how many pixels are in different parts of the image. Is there a software I can use to do this?
In GIMP, the "Histogram" dialog applies to the selection, so the pixel count displayed is the number of pixels in the selection (weighted by their selection level):
In the image below, the selection covers the black circle, which has a 100 px radius. The Pixels value is close to 100²·π (31400 instead of 31416).
The Count is the number of pixels between the two values indicated by the handles at the bottom of the histogram.
Of course the selection can have any shape and be obtained with various tools.
I assume Photoshop has something equivalent.
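The same count can be done programmatically. Here is a sketch with NumPy on a synthetic image (the disc mirrors the 100 px-radius circle above; any boolean selection array works the same way):

```python
import numpy as np

# synthetic grayscale image: a black disc of radius 100 on white
h, w, r = 256, 256, 100
yy, xx = np.mgrid[:h, :w]
disc = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= r ** 2
img = np.where(disc, 0, 255).astype(np.uint8)

# the "selection" is a boolean mask; count the dark pixels inside it
selection = disc
count = int(np.count_nonzero(img[selection] < 128))
# count is close to pi * r**2, i.e. about 31416
```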

conv2d on non-rectangular image in Tensorflow

I have a dataset of images that are half black in an upper-triangular fashion, i.e. all pixels below the main diagonal are black.
Is there a way in Tensorflow to give such an image to a conv2d layer and mask or limit the convolution to only the relevant pixels?
If the black translates to 0, then you don't need to do anything: the convolution multiplies the 0 by whatever weight it has, so it won't contribute to the result. If it doesn't, you can multiply the data by a binary mask to make those pixels 0.
For all-black regions you will still get the bias term, if the layer has one.
You can multiply the result by a binary mask to zero out the areas you don't want populated. This way you can also decide to drop outputs whose receptive field contains too many black cells, such as those around the diagonal.
You could also write a custom operation that does exactly what you want, but I would recommend against it: you would get a speedup of at most 2x, and the other operations would lower that. You will probably gain more performance by running on a GPU.
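A sketch of the masking approach in TensorFlow, assuming NHWC tensors and a Keras Conv2D layer (the 8x8 size and layer parameters are just for illustration):

```python
import numpy as np
import tensorflow as tf

# hypothetical batch of two 8x8 single-channel images (NHWC)
images = tf.random.normal([2, 8, 8, 1])

# binary mask keeping the upper triangle (on and above the main diagonal)
mask = tf.linalg.band_part(tf.ones([8, 8]), 0, -1)
mask = mask[tf.newaxis, :, :, tf.newaxis]  # broadcastable over batch/channels

conv = tf.keras.layers.Conv2D(filters=4, kernel_size=3, padding="same")
out = conv(images * mask)  # zero the irrelevant pixels before convolving
out = out * mask           # zero the outputs there too (removes the bias term)
```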

Photoshop layer blending - null out background

Is it possible, using layer blending and maybe masking, to null out the background (make it transparent), given a picture with the product in place and one without the product as a background reference?
Like a layer mask that only reveals where the images differ (the product and its shadow).
Thanks in advance.
Totally possible, but I'm not sure you'll get the desired effect. If you subtract exactly the areas where there are color differences, the shadow edges will be pixelated. Nonetheless, here's how you do it.
Set up your layers like so:
Layer 1 (background + product baked in on top)
Layer 2 (background only)
Set Layer 1 blending mode to "Difference" --All the pixels with the same color information will turn black.
Flatten this and we'll call it "Layer 3 (Difference)"
Go into Layer 3's FX Styles. Under Blending Options, all the way at the bottom you'll find "Blend if:"
Set this to Gray and slide the "This layer:" markers until it says 0 0.
Voila. You have the mask to put on your original Layer 1 that eliminates all pixels with the same information.
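A rough numerical sketch of what the steps above compute, using hypothetical NumPy arrays in place of the layers:

```python
import numpy as np

# hypothetical stand-ins for the two layers
background = np.full((64, 64, 3), 200, dtype=np.uint8)  # Layer 2: background only
composite = background.copy()
composite[20:40, 20:40] = (30, 60, 90)                  # Layer 1: product baked in

# "Difference" blend mode: per-channel absolute difference;
# pixels identical in both layers become black (0)
diff = np.abs(composite.astype(np.int16) - background.astype(np.int16)).astype(np.uint8)

# "Blend If: Gray, This Layer: 0 0" roughly means: hide only pure-black pixels
visible = np.any(diff > 0, axis=-1)  # True where the layers differ
```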

Apply an alpha to a color mathematically

How does one get the colour value (rgb) after applying an alpha to a colour?
I would like to apply an alpha to a colour and get the rgb values from the result.
Maybe I am overthinking this, or is it just the value, e.g. 120 * alpha (0.6) = resulting colour? White is at 255 though, so should it be 120 + 120 * alpha (0.6)?
If the color is partially transparent, the resulting color of the pixel depends on what color was painted behind it, which makes this more complex than you might think.
The RGB values of the color do not change at all when you apply the alpha. All that changes is how the color blends with other elements in the view.
So you would have to know where on the screen the color will be drawn, query the view for the color at that pixel, and then blend it with your color according to the color's alpha value.
// pseudocode
resultColor = (backgroundColor * (1 - alpha)) + (myColor * alpha)
So if your alpha was 0.2 you blend the colors so the result is 80% background color and 20% foreground color.
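The pseudocode above can be turned into a small runnable sketch, assuming 8-bit RGB tuples and a foreground alpha in [0, 1] (the function name is just for illustration):

```python
def blend_over(fg, bg, alpha):
    """Source-over compositing: result = bg * (1 - alpha) + fg * alpha, per channel."""
    return tuple(round(b * (1 - alpha) + f * alpha) for f, b in zip(fg, bg))

# grey 120 at 60% opacity over white: 255 * 0.4 + 120 * 0.6 = 174
blend_over((120, 120, 120), (255, 255, 255), 0.6)  # -> (174, 174, 174)
```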