What are the commands to get and set the contrast Gamma setting of raster image displays? - dm-script

I am trying to overlay two images, but I also want to be able to pass the gamma from each of the images to the final image. I know that one can get and set contrast limits as well as adjust the intensity transformation (ITT), but I have not found commands to access the Gamma value.
Am I just missing something? It would be helpful to be able to set the gamma for both images separately before overlaying them.

The corresponding commands are
Number ImageDisplayGetGammaCorrection( ImageDisplay imgDisp )
and
void ImageDisplaySetGammaCorrection( ImageDisplay imgDisp, Number gamma )
and they are used as in the following example:
image img1 := RealImage("test1", 4, 256, 256)  // create a 256 x 256 real (4-byte) image
img1 = icol                                    // fill with a horizontal ramp for testing
ShowImage(img1)
img1.ImageGetImageDisplay(0).ImageDisplaySetGammaCorrection(0.6)  // set the display gamma to 0.6

Related

How to add random values (random noise at a specific spot) to an X-ray image with TensorFlow

I want to predict disease, and I want to try adding some noise or disruption to the image, either at a specific spot or at random spots. Is there any method or solution for this?
Is there any way to add noise (random values) to an image with TensorFlow?
I read the image, convert it to an array, make a copy of it, and then add some numbers to it. Is that right?
I have also noticed that after converting, the array became values of zeros and ones, even though the image is in RGB form.
I expect some values in the array (i.e. in the image) to change to other values, so that when I imshow the image I notice some noise (different from Gaussian noise) and the input to the model becomes different from the original image.
I tried the code below, but the operands didn't match between (224, 224, 3) and (224, 224).
When I set color_mode to grayscale the operands matched, but I didn't see much change in the image. Replacing img.size with img.height didn't work either.
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

img = tf.keras.preprocessing.image.load_img("/content/person1_bacteria_2.jpeg", color_mode="rgb", target_size=(256, 256))
nois_factor = 0.3
# img.size is (width, height), so this noise is 2-D while the RGB image is (256, 256, 3)
n = nois_factor * np.random.randn(*img.size)
noise_image = img + n
plt.imshow(noise_image)
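One way to avoid that shape mismatch (a minimal sketch, assuming the same file path and noise factor as above; scaling the noise to the 0..255 pixel range is my own choice, not part of the original code) is to convert the loaded PIL image to a NumPy array first and generate the noise with the array's own shape, so the (H, W, 3) dimensions match:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

img = tf.keras.preprocessing.image.load_img(
    "/content/person1_bacteria_2.jpeg", color_mode="rgb", target_size=(256, 256))
arr = np.array(img).astype("float32")                        # shape (256, 256, 3), values 0..255

noise_factor = 0.3
noise = noise_factor * 255.0 * np.random.randn(*arr.shape)   # same shape as the image
noisy = np.clip(arr + noise, 0, 255).astype("uint8")         # keep valid pixel values

plt.imshow(noisy)
plt.show()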

Use of base anchor size in Single Shot Multi-box detector

I was digging into the TensorFlow Object Detection API in order to check the anchor box generation for the SSD architecture. In this py file, where the anchor boxes are generated on the fly, I am unable to understand the usage of base_anchor_size. In the corresponding paper, there is no mention of such a thing. Two questions in short:
What is the use of base_anchor_size parameter? Is it important?
How does this parameter affect the training in the cases where the original input image is square in shape and the case when it isn't square?
In the SSD architecture, the anchor scales are fixed ahead of time, e.g. linear values across the range 0.2-0.9. These values are relative to the image size. For example, given a 320x320 image, the smallest anchor (with 1:1 ratio) will be 64x64 and the largest anchor will be 288x288. However, suppose you want to feed your model a larger image, e.g. 640x640, but without changing the anchor sizes in pixels (for example because these are images of far-away objects, so there is no need for large anchors; leaving the anchor pixel sizes untouched also lets you avoid fine-tuning the model on the new resolution). In that case you can simply set base_anchor_size=0.5, meaning the anchor scales become 0.5*[0.2-0.9] relative to the input image size.
The default value for this parameter is [1.0, 1.0], meaning it has no effect.
The entries correspond to [height, width] relative to the largest square you can fit in the image, i.e. [min(image_height, image_width), min(image_height, image_width)]. So if, for example, your input image is VGA, i.e. 640x480, then base_anchor_size is taken relative to [480, 480].
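As a small worked example of the arithmetic above (a sketch in Python; the linear 0.2-0.9 range is the one quoted above, and the number of scales is an assumption, not the API's default config):
import numpy as np

scales = np.linspace(0.2, 0.9, 6)          # fixed anchor scales, relative to image size
image_size = 320                           # square 320 x 320 input

anchor_sizes = 1.0 * scales * image_size   # base_anchor_size = 1.0 (default)
print(anchor_sizes)                        # 64 ... 288 pixels for 1:1 anchors

# Feed a 640 x 640 image but keep the same anchor sizes in pixels:
anchor_sizes_640 = 0.5 * scales * 640      # base_anchor_size = 0.5
print(anchor_sizes_640)                    # still 64 ... 288 pixels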

Do input size effect mobilenet-ssd in aspect-ratio and real anchor ratio? (Tensorflow API)

I'm currently using the TensorFlow Object Detection API. The default SSD-MobileNet v1 uses 300 x 300 images as training input, but I am going to change the image size to different width and height values, for instance 320 x 180. Do the aspect ratios in the .config represent the real width/height ratio of the anchors, or are they only valid for square images?
You can change the size to a different value; the general guidance is to preserve the aspect ratio of the original image, while the size itself can be different.
Aspect ratios represent the real ratio of the anchors. You can use them for different input ratios, but you will get the best results if your input ratio is close to square.
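To see how a non-square aspect ratio turns into an anchor's width and height, here is a sketch using the standard SSD formulation (width = scale * sqrt(ratio), height = scale / sqrt(ratio)); the particular numbers are just an illustration, not what your .config will necessarily produce:
import math

def anchor_hw(scale, aspect_ratio, base_anchor_size=1.0):
    # relative anchor height and width for one scale / aspect ratio pair
    h = base_anchor_size * scale / math.sqrt(aspect_ratio)
    w = base_anchor_size * scale * math.sqrt(aspect_ratio)
    return h, w

h, w = anchor_hw(0.2, 2.0)     # scale 0.2 with a 2:1 (wide) anchor
print(h, w, w / h)             # ~0.141, ~0.283, ratio stays 2.0 whatever the input size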

What is the right way to resize using NVIDIA NPP to exact destination dimensions?

I'm trying to use NVIDIA NPP to experiment with some image resizing routines. I want to resize to exact dimensions. I've been looking at image resizing using NVIDIA NPP, but all of its resize functions take scale factors for the X and Y dimensions, and I could not see any API taking the destination dimensions directly.
As an example, this is one API:
NppStatus nppiResizeSqrPixel_8u_C1R(const Npp8u * pSrc, NppiSize oSrcSize, int nSrcStep, NppiRect oSrcROI, Npp8u * pDst, int nDstStep, NppiRect oDstROI, double nXFactor, double nYFactor, double nXShift, double nYShift, int eInterpolation);
I realize one way could be to find the appropriate scale factor for the destination dimension, but we don't know exactly how the API decides the destination ROI based on the scale factor (since it is floating point math). We could reverse the calculation in the jpegNPP sample to find the scale factor, but the API itself does not make any guarantees, so I'm not sure how safe it is. Any ideas what the possibilities are?
As a side question, the API also takes two params, nXShift and nYShift, but just says "Source pixel shift in x-direction". I'm not exactly clear what shift is being talked about here. Do you have an idea?
If I wanted to map the whole SRC image to the smaller rectangle in the DST image as shown in the image below I would use xFactor = yFactor = 0.5 and xShift = 0.5*DST.width and yShift = 0.
Mapping src to half size destination image
In other words, the pixel at (x,y) in the SRC is mapped to the pixel (x',y') in the DST as
x' = xFactor * x + xShift
y' = yFactor * y + yShift
In this case, both the source and dest ROI could be the entire support of the respective images.
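Given that mapping, one way to hit an exact destination size is to derive the factors from the source and destination dimensions yourself. A small sketch of that arithmetic in Python (the dimensions are made-up examples, and exact rounding at the ROI edges is not something the API guarantees):
src_w, src_h = 1920, 1080      # example source dimensions (assumed)
dst_w, dst_h = 1280, 720       # the exact destination size you want

x_factor = dst_w / src_w       # nXFactor
y_factor = dst_h / src_h       # nYFactor
x_shift = 0.0                  # nXShift: keep the source origin at the destination origin
y_shift = 0.0                  # nYShift

# the last source pixel maps just inside the destination ROI:
print((src_w - 1) * x_factor + x_shift)   # ~1279.3 < dst_w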

ImageMagick - Monochromatic Noise

I'm trying to add monochromatic noise to an image, similar to the Photoshop version, using the command line, but I can't see any option to achieve it.
I've created code in JS that does it very well, and the logic is very simple:
Foreach pixel:
Generate random noise pixel
Add or subtract (random) noise pixel to/from original pixel
To create monochromatic noise, the add/subtract is done on a per-pixel, not per-channel, basis, e.g.
Pi - original pixel
Pr - noise pixel
MonoPixel = Pi+Pr or Pi-Pr
Is there any way I can randomly add or subtract pixels via the command line?
Thanks
You can use the ImageMagick +noise command to add noise. To get monochromatic noise, you'll have to do something more complex where you create a separate noise image combined with a base color and composite that with your source image.
This link may be helpful: http://brunogirin.blogspot.com/2009/09/making-noise-with-imagemagick.html
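For reference, a rough sketch of the per-pixel add/subtract idea described in the question, i.e. the same random value applied to all three channels of each pixel (this is standalone Python/numpy rather than an ImageMagick command line, and the file names and noise amount are assumptions):
import numpy as np
from PIL import Image

img = np.array(Image.open("input.png").convert("RGB")).astype(np.int16)

amount = 30                                              # noise strength in 0..255
h, w, _ = img.shape
noise = np.random.randint(0, amount + 1, size=(h, w))    # one noise value per pixel
sign = np.where(np.random.rand(h, w) < 0.5, -1, 1)       # randomly add or subtract

mono = img + (sign * noise)[:, :, np.newaxis]            # same value on R, G and B
mono = np.clip(mono, 0, 255).astype(np.uint8)
Image.fromarray(mono).save("output.png")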
You could try to build your own little shell function. Use $RANDOM (a Bash environment variable which returns a random integer in the range 0..32767) and check whether it is odd or even. Make odd mean + and even mean -.
echo $(($RANDOM % 2))
should return 1 ($RANDOM was odd) or 0 ($RANDOM was even) in random order...