I am working on a project in which I have to do segmentation masking. The problem is that the exported mask image must satisfy two things.
NOTE: Please download both images to better understand my problem.
1. Only 2 colors.
2. No jagged edges.
Tools Used:
CVAT
Photopea (for reducing colors)
Pixspy (for checking how many colors an image has)
If I achieve 1 (i.e., if I reduce the colors), jagged edges will certainly appear: the mask is a PNG, and if you strip the colors from its anti-aliased edges, all the intermediate shades that create the illusion of smoothing obviously disappear. And if I go for no jagged edges, I end up with thousands of colors in the image.
Image with smooth edges but many colors (around 12k)
Image with jagged edges but limited colors (only 2)
How can I achieve both of these things at the same time for my mask?
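For what it's worth, the tradeoff can be reproduced in a few lines. Below is a minimal sketch of the reduce-then-count workflow using Pillow and NumPy in place of Photopea and Pixspy ('mask.png' is a placeholder file name):
import numpy as np
from PIL import Image

# Load the mask as 8-bit grayscale
mask = np.array(Image.open('mask.png').convert('L'))

# An anti-aliased mask has many distinct gray levels;
# a hard binary mask has exactly 2
print('distinct values before:', len(np.unique(mask)))

# Thresholding collapses every anti-aliased shade to 0 or 255 --
# this is exactly the step that reintroduces the jagged edges
binary = np.where(mask >= 128, 255, 0).astype(np.uint8)
print('distinct values after:', len(np.unique(binary)))

Image.fromarray(binary).save('mask_binary.png')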
I am trying to apply a binary mask to a 3D image by multiplying them. However, it returns an image with a white background rather than black (which is what I would expect, since the binary mask is 0 and 1, with all background pixels equal to 0).
First I load the 3D scan/image (.nii.gz) and mask (.nii.gz) with nibabel, using:
scan = nib.load(path_to_scan).get_fdata()
mask = nib.load(path_to_mask).get_fdata()
Then I use:
masked_scan = scan*mask
Below is a visualization showing that, when another mask is applied, the background is darker:
Below is a visualization of what they look like in 3D Slicer as volumes:
What am I missing? The aim is to have a black background...
I also tried np.where(mask==1, scan, mask*scan).
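For reference, here is a minimal sketch of the whole masking step with an explicit sanity check, assuming path_to_scan and path_to_mask point at the .nii.gz files mentioned above:
import nibabel as nib
import numpy as np

scan = nib.load(path_to_scan).get_fdata()
mask = nib.load(path_to_mask).get_fdata()

# First check what the mask actually contains -- if it is not strictly
# 0/1 (e.g. 0/255, or floating-point noise), the product will not
# produce a zero-valued background
print(np.unique(mask))

# Force a clean boolean mask before multiplying, so every background
# voxel really becomes 0 (rendered black with the usual gray colormap)
masked_scan = scan * (mask > 0)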
I have an image and I want to see how many pixels are in different parts of the image. Is there software I can use to do this?
In Gimp, the "Histogram" dialog applies to the selection, so the pixel count displayed is the number of pixels in the selection (weighted by their selection level):
In the image below, the selection covers the black circle, which has a 100 px radius. The Pixels value is close to 100² × π (314000 instead of 314159).
The Count is the number of pixels between the two values indicated by the handles at the bottom of the histogram.
Of course the selection can have any shape and be obtained with various tools.
I assume PS has something equivalent.
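If you would rather script it, here is a minimal sketch with Pillow and NumPy that counts the pixels below a gray threshold inside a rectangular region ('image.png' and the coordinates are placeholders):
import numpy as np
from PIL import Image

img = np.array(Image.open('image.png').convert('L'))

# Crop the region of interest: rows 50-249, columns 100-299
region = img[50:250, 100:300]

# Count the pixels darker than mid-gray, analogous to moving the
# handles under Gimp's histogram
count = np.count_nonzero(region < 128)
print(count)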
I have a large DICOM MRI dataset for several patients. For each patient there is a folder containing many 2D slices as .dcm files, and each patient's data has different sizes. For example:
patient1: PixelSpacing = 0.8 mm, 0.8 mm; SliceThickness = 2 mm; SpacingBetweenSlices = 1 mm; 400×400 pixels
patient2: PixelSpacing = 0.625 mm, 0.625 mm; SliceThickness = 2.4 mm; SpacingBetweenSlices = 1 mm; 512×512 pixels
So my question is: how can I convert all of them to {Pixel Spacing} = 1 mm, 1 mm and {Slice Thickness} = 1 mm?
Thanks.
These are two different questions:
1. About harmonizing positions and pixel spacing, these links will be helpful:
Finding the coordinates (mm) of identical slice locations for two MR datasets acquired in the same scanning session
Interpolation between two images with different pixelsize
http://nipy.org/nibabel/dicom/dicom_orientation.html
Basically, you want to build your target volume and interpolate each of its pixels from the nearest neighbors in the source volumes.
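As a rough illustration of point 1, here is a minimal sketch using scipy.ndimage.zoom with trilinear interpolation; it assumes the .dcm slices have already been stacked into a NumPy array `volume` ordered (z, y, x), with `spacing` taken from the DICOM headers:
import numpy as np
from scipy.ndimage import zoom

def resample_to_isotropic(volume, spacing, new_spacing=(1.0, 1.0, 1.0)):
    # Zoom factor per axis: old spacing / new spacing
    factors = np.array(spacing, dtype=float) / np.array(new_spacing, dtype=float)
    # order=1 -> trilinear interpolation from the nearest source voxels
    return zoom(volume, factors, order=1)

# e.g. patient1: slices 1 mm apart, 0.8 x 0.8 mm in-plane
# iso_volume = resample_to_isotropic(volume, spacing=(1.0, 0.8, 0.8))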
2. About modifying the slice thickness: if you really want to modify the slice thickness rather than the slice distance, I do not see any way to do this correctly with the source data you have. This is because the thickness says how wide a band of the raw data was used to calculate the values for a slice in your stack (e.g., by averaging or by computing an integral). With a slice thickness of 2 or 2.4 mm in the source volumes, you will not be able to reconstruct the gray values at a thickness of 1 mm. If your question was referring to slice distance rather than slice thickness, answer 1 applies.
Hi, I am working on an OBJ loader for use in iOS programming. I have managed to load the vertices and the faces, but I have an issue with the transparency of the faces.
For the colours of the vertices I have, for now, just made them vary from 0 to 1, so each vertex gradually changes from black to white. The problem is that the white vertices and faces seem to appear over the black ones; the darker the vertices, the more covered they appear.
For an illustration of this, see the video I posted here: http://youtu.be/86Sq_NP5jrI
The model here consists of two cubes, one large cube with a smaller one attached to a corner.
How do you assign a color to a vertex? I assume that you have an RGBA render target, so you need to set up the color like this:
#include <cstdint>
typedef std::uint8_t u8;

struct color
{
    u8 r, g, b, a; // red, green, blue, alpha, each 0-255
};

color newColor;
newColor.a = 255; // opaque vertex, 0 = fully transparent
// set r, g and b here as well
I have a physical (real-world) map, for example a little town map.
A "path" line is painted over the map; think of it like "you are here, here's how to reach the train station" :)
Now, let's suppose I can get an image of that scenario (e.g., from a photo).
An image that looks like:
My goal is not the easy way out!
I want to GET the path OUT of the image, i.e., separate the two layers.
Is there a way to extract those red marks from the image?
Maybe using CoreGraphics? Maybe an external library?
It's not an Objective-C-specific question, but I am working on Apple iOS.
I have already worked with something similar: face recognition.
Now, the answer I expect is: "What do you mean by PATH?"
Well, I really don't know; maybe a line (see the image above) of a completely different color from the "major" colors in the background.
Let's talk about it.
If you can use OpenCV, then it becomes simpler. Here's a general method:
Separate the image into hue, saturation, and value (HSV colorspace)
Here's the OpenCV code:
// Compute HSV image and separate into colors
IplImage* hsv = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 3 );
cvCvtColor( img, hsv, CV_BGR2HSV );
IplImage* h_plane = cvCreateImage( cvGetSize( img ), 8, 1 );
IplImage* s_plane = cvCreateImage( cvGetSize( img ), 8, 1 );
IplImage* v_plane = cvCreateImage( cvGetSize( img ), 8, 1 );
cvCvtPixToPlane( hsv, h_plane, s_plane, v_plane, 0 );
Work with the hue (h_plane) image only, since the hue stays the same for lighter or darker shades of the same color
Check which pixels have a red hue (I think red is at 0 degrees in HSV, but please check the OpenCV values; OpenCV stores 8-bit hue in the 0-179 range)
Copy these pixels into a separate image
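Here is the same method as a minimal sketch in the modern OpenCV Python API; the saturation/value floors of 70 are assumptions you may need to tune, and 'map.png' is a placeholder:
import cv2
import numpy as np

img = cv2.imread('map.png')                 # BGR image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # convert to HSV

# 8-bit hue runs 0-179 in OpenCV and red wraps around both ends,
# so combine a low-hue range and a high-hue range
red_low = cv2.inRange(hsv, (0, 70, 70), (10, 255, 255))
red_high = cv2.inRange(hsv, (170, 70, 70), (179, 255, 255))
mask = cv2.bitwise_or(red_low, red_high)

# Copy only the red pixels into a separate image
path_only = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite('path_only.png', path_only)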
I'd strongly suggest using the OpenCV library if possible; it is basically made for such tasks.
You could filter by color: define a threshold for what counts as red, set everything else to transparent, and what is left over is your "path".