Apply 3D mask to 3D image, background is white instead of black

I am trying to apply a binary mask to a 3D image by multiplying them. However, this returns an image with a white background rather than black, which is what I would expect, since the binary mask contains only 0s and 1s and all of the background voxels equal 0.
First I load the 3D scan/image (.nii.gz) and mask (.nii.gz) with nibabel, using:
scan = nib.load(path_to_scan).get_fdata()
mask = nib.load(path_to_mask).get_fdata()
Then I use:
masked_scan = scan*mask
Below is a visualization showing that when I apply another mask, the background is darker:
[image]
Below is what they look like in 3D Slicer as volumes:
[image]
What am I missing? The aim is to have a black background.
I also tried np.where(mask==1, scan, mask*scan)
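For reference, a minimal, self-contained sketch of the flow described above (the file paths are placeholders). The last two lines are an assumption on my part: if the scan contains negative intensities (e.g. CT data in Hounsfield units, where air is around -1000), a masked-out value of 0 is actually brighter than the original background, so pushing the background to the scan's minimum instead of 0 may be what is wanted:
import nibabel as nib
import numpy as np

# Placeholder paths; substitute the actual .nii.gz files.
scan_img = nib.load("path/to/scan.nii.gz")
mask_img = nib.load("path/to/mask.nii.gz")

scan = scan_img.get_fdata()   # note: get_fdata(), not getfdata()
mask = mask_img.get_fdata()

# Element-wise multiplication zeroes every voxel where the mask is 0.
masked_scan = scan * mask

# Assumption: if the scan has negative intensities (e.g. CT in Hounsfield
# units), 0 is brighter than the original background, so use the scan's
# minimum as the background value instead.
masked_dark = np.where(mask == 1, scan, scan.min())

# Save with the original affine/header so it still lines up in 3D Slicer.
nib.save(nib.Nifti1Image(masked_dark, scan_img.affine, scan_img.header),
         "masked_scan.nii.gz")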


Smooth Edges In Segmentation

I am working on a project in which I have to do segmentation masking. The problem is that the exported masked image should satisfy two things.
NOTE: Please download both images to better understand my problem.
1. Only 2 colors.
2. No jagged edges.
Tools Used:
CVAT
Photopea (for reducing colors)
Pixspy (for checking how many colors an image has)
If I achieve 1 (i.e., reduce the colors), then jagged edges will certainly appear, because the mask is a PNG, and if you remove the colors from the jagged edges, all the shades that give the illusion of smoothing will obviously disappear. And if I go for no jagged edges, then I get thousands of colors in the image.
Image with smooth edges but many colors (around 12k)
Image with jagged edges but limited colors (only 2)
How can I achieve both of these things at the same time for my mask?
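For what it's worth, the colour-reduction step itself (the one described above as causing the jagged edges) can be sketched in a few lines of Python with PIL; mask.png is a placeholder name for the exported mask:
from PIL import Image

mask = Image.open("mask.png").convert("L")             # greyscale, values 0-255
# Thresholding collapses every anti-aliased edge shade into exactly two
# values (0 and 255) -- which is precisely what makes the edges jagged.
binary = mask.point(lambda v: 255 if v >= 128 else 0)
binary.save("mask_2_colors.png")
The two requirements pull in opposite directions at the pixel level: the smooth edges are exactly the intermediate shades that a strict two-colour palette removes.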

How do I see the actual color of a single RGB value in Google Colab?

Very basic question. I have a single vector (e.g., [53, 21, 110]) and I want to print the RGB color it represents in a Colab notebook, like a color swatch. What's the simplest way to do this?
The simplest way would be using the Image module from PIL. According to the documentation, you can construct an image with:
PIL.Image.new(mode, size, color=0)
mode [required]: determines the mode used for the image; it can be RGB, RGBA, HSV, etc. You can find more modes in the docs.
size [required]: this is a tuple (width, height) that represents the dimensions of your image in pixels.
color [optional]: this is the color of the image; it can receive a tuple to represent the RGB color in your case. The default color is black.
Then, to show the image within Colab, you would use:
display(img)
Given your question, the mode would need to be 'RGB', and if your vector is a list, you need to convert it into a tuple to use it.
To show a 300px by 300px image, the code would look like this:
from PIL import Image
img = Image.new('RGB', (300,300), color = (53, 21, 110))
display(img)

Subtract Blending Mode

I have been trying to implement some of the layer blending modes of GIMP (GEGL) in Python. Currently, I am stuck on the Subtract blending mode. As per the documentation, Subtract = max(Background - Foreground, 0). However, doing a simple test in GIMP with a Background image of (205, 36, 50) and a Foreground image of (125, 38, 85), the resulting composite colour comes out to be (170, 234, 0), which doesn't quite follow the math above.
As I understand it, Subtract does not use alpha blending. So, could this be a compositing issue? Or does Subtract follow different math? More details and background can be found in a separate SO question.
EDIT [14/10/2021]:
I tried with this image as my Source. I performed the following steps on images normalised to the range [0, 1]:
1. Applied a Colour Dodge (no prior conversion from sRGB -> linear RGB was done) and obtained this from my implementation, which matches the GIMP result.
2. sRGB -> linear RGB conversion on the Colour Dodge and Source images. [Reference]
3. Applied Subtract blending with Background = Colour Dodge and Foreground = Source image.
4. Reconverted linear RGB -> sRGB.
I obtain this from my POC: left RGB triplet (69, 60, 34), right RGB triplet (3, 0, 192). And the GIMP result: left RGB triplet (69, 60, 35), right RGB triplet (4, 255, 255).
If you are looking at channel values in the 0 ➞ 255 range, they are likely gamma-corrected. The operation is possibly done like this:
1. Convert each layer to "linear light" in the 0.0 ➞ 1.0 range, using something like L = (V/255) ** gamma (*)
2. Apply the "difference" formula.
3. Convert the result back to gamma-corrected: V = 255 * (Diff ** (1/gamma))
With gamma = 2.2 you obtain 170 for the Red channel, but I don't see why you get 234 on the Green channel.
(*) The actual formula has a special case for the very low values IIRC.
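A quick NumPy check of that recipe, using a plain gamma = 2.2 power curve (an approximation; as noted above, the real sRGB transfer function has a linear segment near zero):
import numpy as np

gamma = 2.2
bg = np.array([205, 36, 50], dtype=float)   # Background colour from the question
fg = np.array([125, 38, 85], dtype=float)   # Foreground colour from the question

bg_lin = (bg / 255.0) ** gamma              # gamma-corrected -> linear light
fg_lin = (fg / 255.0) ** gamma

diff = np.clip(bg_lin - fg_lin, 0.0, 1.0)   # subtract in linear light, clamp at 0

out = 255.0 * diff ** (1.0 / gamma)         # back to gamma-corrected 0-255
print(np.round(out))                        # red ~170; green and blue clamp to 0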

How to understand negative pixels in CNNs and the ReLU

I'm new to computer vision.
Do the negative pixel values produced by the convolutional layer filters mean they are black in color?
And if the ReLU only converts all negative pixel values to 0, is it just converting black to black?
A value of 0 or a negative value does not necessarily mean black. For example, if you rescale the pixel values of an image to between 0 and 1, that does not mean all the pixels will look black. It is relative, i.e., pixels closer to the value 1 will now look closer to white. A similar situation applies in your case.
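A tiny NumPy illustration of that point: whether a value looks dark or bright depends on how the feature map is rescaled for display, not on the raw sign of the value (this is just a sketch, not tied to any particular framework):
import numpy as np

# A tiny "feature map" with negative values, as a convolution might produce.
feature_map = np.array([[-3.0, -1.0],
                        [ 0.5,  2.0]])

# ReLU clamps negatives to 0; it says nothing about colour by itself.
relu = np.maximum(feature_map, 0.0)

# What you *see* depends on the rescaling used for display: min-max
# normalisation maps the smallest value to 0 (dark) and the largest to 1
# (bright), whatever those values happen to be.
def to_display(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-8)

print(to_display(feature_map))   # here -3 is the darkest pixel
print(to_display(relu))          # after ReLU, 0 becomes the darkest value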

Depth image based rendering

I have to implement depth-image-based rendering. Given a 2D image and a depth map, the algorithm will generate a virtual view: what the scene would look like if the camera were placed in a different position. I wrote the function below, where V is the matrix with the pixels of the 2D view, D holds the pixels of the depth map, and camerashift is a parameter.
Z = 1.1 - D./255; is a normalization. I am trying to follow these instructions:
For each pixel in the depth map, compute the disparity that results from the depth. For each pixel in the source 2D image, find a new location for it in the virtual view: old location + disparity of that specific pixel.
The function doesn't work very well. What's wrong?
function [virtualView] = renderViews(V, D, camerashift)
    Z = 1.1 - D./255;                 % normalise the depth map
    [M, N] = size(Z);
    for m = 1:M
        for n = 1:N
            d = camerashift / Z(m,n); % disparity from (pseudo) depth
            shift = round(abs(d));
            V2(m,n) = V(m+shift, n);  % sample the shifted source pixel
        end
    end
    imshow(V2)
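For comparison, the warping rule quoted above ("new location = old location + disparity of that pixel") can be sketched in NumPy; this is only an illustration under assumed conventions (horizontal shift, nearest-neighbour forward warping, holes left at 0), not a drop-in fix for the MATLAB function:
import numpy as np

def render_view(V, D, camera_shift):
    # Same normalisation as in the question: pseudo-depth Z from the depth map D.
    Z = 1.1 - D / 255.0
    M, N = V.shape
    out = np.zeros_like(V)
    for m in range(M):
        for n in range(N):
            d = camera_shift / Z[m, n]      # disparity from depth
            n2 = n + int(round(d))          # old column + disparity (assumed horizontal)
            if 0 <= n2 < N:                 # keep the target inside the image
                out[m, n2] = V[m, n]
    return out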