I have been trying to implement some of GIMP's (GEGL) layer blending modes in Python. Currently, I am stuck on the Subtract blending mode. As per the documentation, Subtract = max(Background - Foreground, 0). However, in a simple test in GIMP with Background image = (205, 36, 50) and Foreground image = (125, 38, 85), the resultant composite image/colour comes out as (170, 234, 0), which doesn't quite follow the math above.
As per my understanding, Subtract does not use alpha blending. So, could this be a compositing issue? Or does Subtract follow different math? More details and background can be found in a separate SO question.
EDIT [14/10/2021]:
I tried with this image as my Source. I performed the following steps on images normalised to the range [0, 1] (a sketch of steps 2-4 follows below):

1. Applied a Colour Dodge (no prior conversion from sRGB -> linear RGB was done) and obtained this from my implementation, which matches the GIMP result.
2. Converted the Colour Dodge result and the Source image from sRGB to linear RGB. [Reference]
3. Applied Subtract blending with Background = Colour Dodge and Foreground = Source image.
4. Converted back from linear RGB to sRGB.
This is what I obtain from my POC: left RGB triplet (69, 60, 34), right RGB triplet (3, 0, 192). And the GIMP result: left RGB triplet (69, 60, 35), right RGB triplet (4, 255, 255).
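For reference, here is a minimal Python sketch of steps 2-4 of my POC (the piecewise sRGB transfer functions follow IEC 61966-2-1; colour_dodge_result and source_image stand in for the float arrays produced in step 1):

import numpy as np

def srgb_to_linear(v):
    # piecewise sRGB EOTF: gamma-encoded values in [0, 1] -> linear light
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(l):
    # inverse transfer function: linear light -> gamma-encoded sRGB
    return np.where(l <= 0.0031308, 12.92 * l, 1.055 * l ** (1 / 2.4) - 0.055)

bg = srgb_to_linear(colour_dodge_result)   # Background = Colour Dodge (step 2)
fg = srgb_to_linear(source_image)          # Foreground = Source image (step 2)
subtracted = np.clip(bg - fg, 0.0, 1.0)    # Subtract = max(B - F, 0) (step 3)
out = linear_to_srgb(subtracted)           # back to sRGB (step 4)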
If you are looking at channel values in the 0 ➞ 255 range, they are likely gamma-corrected. The operation is possibly done like this:
convert each layer to "linear light" in the 0.0 ➞ 1.0 range using something like
L = ((V/255) ** gamma) (*)
apply the "difference" formula
convert the result back to gamma-corrected:
V = (255 * (Diff ** (1/gamma)))
With gamma=2.2 you obtain 170 for the Red channel, but I don't see why you get 234 on the Green channel.
(*) The actual formula has a special case for very low values, IIRC.
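A quick check of that pipeline in Python (pure gamma 2.2, ignoring the low-value special case) reproduces the 170 on the Red channel:

gamma = 2.2

def to_linear(v):
    # gamma-corrected channel value in [0, 255] -> linear light in [0, 1]
    return (v / 255.0) ** gamma

def to_gamma(l):
    # linear light in [0, 1] -> gamma-corrected channel value in [0, 255]
    return 255.0 * (l ** (1.0 / gamma))

bg, fg = 205, 125                              # Red channel of the two layers
diff = max(to_linear(bg) - to_linear(fg), 0.0)
print(round(to_gamma(diff)))                   # -> 170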
I am trying to apply a binary mask to a 3D image by multiplying them. However, this returns an image with a white background rather than black (which I would expect, since the binary mask is 0s and 1s and all background pixels equal 0).
First I load the 3D scan/image (.nii.gz) and mask (.nii.gz) with nibabel, using:
import nibabel as nib

scan = nib.load(path_to_scan).get_fdata()
mask = nib.load(path_to_mask).get_fdata()
Then I use:
masked_scan = scan*mask
Below is a visualization showing that the background is darker when another mask is applied. [screenshot]
Below is what they look like as volumes in 3D Slicer. [screenshot]
What am I missing? The aim is to have a black background.

I also tried np.where(mask==1, scan, mask*scan).
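One thing I am considering is filling the background with the scan's minimum instead of 0, in case the scan contains negative intensities (e.g. Hounsfield units), where 0 is not the darkest value. A sketch (the output file name is arbitrary):

import nibabel as nib
import numpy as np

scan_img = nib.load(path_to_scan)
scan = scan_img.get_fdata()
mask = nib.load(path_to_mask).get_fdata() > 0   # force a strictly binary mask

# fill the background with the scan's darkest value instead of 0
masked_scan = np.where(mask, scan, scan.min())

nib.save(nib.Nifti1Image(masked_scan, scan_img.affine), "masked_scan.nii.gz")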
There are multiple pages (like this and this) that present examples of the effect of channel_shift_range on images. At first glance, it appears as if the images have only had a change in brightness applied.
This issue has multiple comments mentioning this observation. So, if channel_shift_range and brightness_range do the same thing, why do they both exist?
After long hours of reverse engineering, I found that:
channel_shift_range: applies the (R + i, G + i, B + i) operation to all pixels in an image, where i is an integer value within the range [0, 255].
brightness_range: applies the (R * f, G * f, B * f) operation to all pixels in an image, where f is a float value around 1.0.
Both parameters are related to brightness. However, I found a very interesting difference: the operation applied by channel_shift_range roughly preserves the contrast of an image, while the operation applied by brightness_range roughly multiplies the contrast of an image by f and roughly preserves its saturation. It is important to note that these conclusions do not hold for large values of i and f, since the brightness of the image becomes so intense that much of its information is lost.
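A small NumPy sketch of the two operations as described above (the toy image, the values of i and f, and the clipping to [0, 255] are my own assumptions):

import numpy as np

def channel_shift(img, i):
    # (R + i, G + i, B + i): additive shift, per-channel contrast preserved
    return np.clip(img.astype(np.int32) + i, 0, 255).astype(np.uint8)

def brightness(img, f):
    # (R * f, G * f, B * f): multiplicative scaling, contrast scaled by f
    return np.clip(img.astype(np.float64) * f, 0, 255).astype(np.uint8)

img = np.array([[[10, 50, 90], [110, 150, 190]]], dtype=np.uint8)
print(channel_shift(img, 40))   # the per-channel difference between pixels stays 100
print(brightness(img, 1.5))     # the per-channel difference grows to 150 (up to clipping)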
Channel shift and Brightness change are completely different.
Channel Shift: Channel shift changes the color saturation level (e.g. light red vs. dark red) of pixels by shifting the [R, G, B] channels of the input image. Channel shift is used to introduce color augmentation into the dataset, so that the model learns color-based features irrespective of their saturation values.
Below is an example of channel shift from the article mentioned above:
In the above image, if you observe carefully, objects (especially the cloud region) are still clearly visible and distinguishable from their neighboring regions even after channel shift augmentation.
Brightness change: The brightness level of an image describes the light intensity throughout the image, and brightness augmentation is used to add under-exposure and over-exposure to the dataset.
Below is an example of brightness augmentation:
In the above image, at a low brightness value, objects (e.g. clouds) have lost their visibility due to the low light intensity level.
I have been trying to use OpenCV's template matching function to match templates within images. However, when the images are dark brown and dark green, the template matching does not work well. I am fairly sure the greyscale conversion is responsible for this, because in greyscale the two colours look very similar.
However, from what I can see, cv2.matchTemplate() only takes greyscale images. How can I do colour template matching? Should I separate the RGB image into three images, one red, one green, and one blue, treat each one as a greyscale image, apply matchTemplate, and then sum the similarity rating for each pixel position? Is that the way to do it? Or is there a different function or parameter value I can use to make matchTemplate work for colour images?
You may try this code:
import numpy as np
import cv2

threshold = 0.8

##Read main and needle images (paths are placeholders)
imageMainBGR = cv2.imread("main/Image/Path/main.png")
imageNeedleBGR = cv2.imread("needle/Image/Path/needle.png")

##Split both into their B, G, R channels (OpenCV loads images in BGR order)
imageMainB, imageMainG, imageMainR = cv2.split(imageMainBGR)
imageNeedleB, imageNeedleG, imageNeedleR = cv2.split(imageNeedleBGR)

##Match each channel; TM_CCOEFF_NORMED gives scores in [-1, 1] where higher is better
##(TM_SQDIFF would need a "lower is better" comparison instead)
resultB = cv2.matchTemplate(imageMainB, imageNeedleB, cv2.TM_CCOEFF_NORMED)
resultG = cv2.matchTemplate(imageMainG, imageNeedleG, cv2.TM_CCOEFF_NORMED)
resultR = cv2.matchTemplate(imageMainR, imageNeedleR, cv2.TM_CCOEFF_NORMED)

##Add the channel scores together to get the total score
result = resultB + resultG + resultR
loc = np.where(result >= 3 * threshold)
print("loc: ", loc)
The images I tested with are:
main.png
needle.png
result.png
Remark: this code may not work on some photos; you may need to modify it further for your use case.
Note: the image was taken from pexels.com, which provides copyright-free images. If you have any issues with the image copyright and want this image taken down, feel free to contact me. Thanks.
I'm struggling with a problem when making plots with filledcurves. Between the filled areas there seems to be a "gap". However, these artifacts do not appear in print; they depend on the viewer and zoom options. In Gnuplot I use the eps terminal, and the eps files look great, but the lines appear when I convert to pdf. The conversion is done either directly after plotting or when converting the LaTeX document from dvi to pdf. As most documents are read on a display nowadays, this is an issue. The problem also appears when I use the pdfcairo terminal directly in Gnuplot, so it is not caused by the conversion (I tried epstopdf and ps2pdf) alone.
I attached a screenshot of a plot displayed in acroread (the same problem appears in other pdf viewers).
Does anybody have an idea how to get rid of it while keeping the graphic vectorised?
I just ran into the same issue. Apparently the filling between two curves is done as a set of polygons that do not exactly touch one another, hence the thin white lines visible in some PDF viewers.

One way to fix the issue is to draw over these polygon boundaries. First define min and max functions in gnuplot:
min(x, y) = x < y ? x : y
max(x, y) = x > y ? x : y
Then, assuming that column 1 of "datafile" contains your x values and that columns 2 and 3 contain the y values of curves 2 and 3, write:
plot "datafile" using 1:2:3 with filledcurves lc rgb "gray", \
"" using 1:2:(min($2, $3)):(max($2, $3)) with yerrorbars ps 0 lt 1 \
lc rgb "gray" lw 0.5
The first plot instruction fills the space between the curves in gray. The second plot instruction draws points of zero size (ps 0) at each x value (1) on curve (2), with thin (lw 0.5), continuous (lt 1), gray (lc rgb "gray") vertical errorbars (yerrorbars) running from the lower to the higher of curves 2 and 3.

This covers the white lines. To get the best results you may need to experiment with the thickness of the bars (e.g., lw 0.6 or lw 0.2).
This issue is fixed in gnuplot 5.2, see https://sourceforge.net/p/gnuplot/patches/749/
The actual problem was that filled curves were previously plotted as many quadrilaterals, which leads to artifacts (white stripes) in many viewers due to antialiasing.
Since version 5.2, filled curves are rendered as a single polygon, which prevents these problems (see the issue linked above).
The problem is still present in Gnuplot 5.0.4, at least with the cairolatex terminal, which I use to output PDFs.
I also wanted to color the area between two curves, in my case defined as functions.
When I used something like
f(x) = 2 + sin(x)
g(x) = cos(x)
plot '+' using 1:(f($1)):(g($1)) with filledcurves closed
I got the same vertical white lines as in the question.
A simple solution for curves where one is always above the other is to let Gnuplot fill the area from the upper curve down to the x-axis with the desired color, and then paint over it with white from the lower curve downwards:
f(x) = 2 + sin(x)
g(x) = cos(x)
plot f(x) with filledcurves x1, g(x) w filledc x1 fs lc rgb "white"
Apparently this filledcurves style (not between curves but between a curve and an axis) avoids the trapezoid artifacts.
This can readily be extended to plotting data files and multiple stacked curves as in the question. Just paint from top to bottom and finish with white for the empty area between the lowest curve and the x-axis.
For overlapping curves a construction of minimum and maximum curves like in the answer from françois-tonneau might do the trick.
If you're talking about the red and cyan bits, the gap could be an illusion caused by Red + Cyan = White on an RGB screen. Maybe there's no gap, but the border areas appear white due to the proximity of the pixels.
Take the screenshot and blow it up so you can see the individual pixels around the perceived gap.
If this is the case, maybe selecting a different colour scheme for the adjacent colours would get rid of the effect. I certainly can't see anything matching your description anywhere but the red and cyan bits.
From https://groups.google.com/forum/#!topic/comp.graphics.apps.gnuplot/ivRaKpu5cJ8, it seems to be a pure Ghostscript issue.
Using the eps terminal of Gnuplot and converting the eps file to pdf with
epstopdf -nogs <file.eps> -o <file.pdf>
solved the problem on my system. According to the corresponding man page, the -nogs option instructs epstopdf not to use Ghostscript.
I want to convert a PNG image to 16-bit RGB565. How can I accomplish this? A programmatic solution (Objective-C) is great, but even a non-programmatic one is good.
I have Pixelmator, but that doesn't give me the option, and I can't seem to do it with Preview either.
I have tried googling, but haven't been able to find a solution so far. The only tool I have been able to use for this is TexturePacker, but that creates a sprite sheet.
You can use libpng to convert the PNG image to three-byte (8:8:8) RGB. Then you can downsample to the 5:6:5 16-bit color values of RGB565. If r, g, and b are the respective 8-bit colors (stored in an unsigned char type), then the 16-bit RGB565 value is:
((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
You can improve on this a tad by rounding instead of chopping, being careful not to overflow the values. You can also force the green value to be equal to the blue and red values when all three are equal in the original 8-bit values; otherwise it is possible for colors that were originally gray to inadvertently take on color after conversion.
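As an illustration, here is the same downsampling in Python, with the rounding variant (the function name is mine):

def rgb888_to_rgb565(r, g, b):
    # downsample 8:8:8 to 5:6:5, rounding instead of chopping
    r5 = (r * 31 + 127) // 255
    g6 = (g * 63 + 127) // 255
    b5 = (b * 31 + 127) // 255
    return (r5 << 11) | (g6 << 5) | b5

print(hex(rgb888_to_rgb565(255, 255, 255)))   # 0xffff: white survives intact
print(hex(rgb888_to_rgb565(0, 0, 0)))         # 0x0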
Create a bitmap context with the RGB565 color format using Quartz, paint your PNG onto this context, then save the bitmap context to a file.
PNG does not support RGB565 packing. You can always apply a posterize to the image (programmatically, with ImageMagick, or with any image editor), which amounts to discarding the least significant bits in each channel. When saving to PNG you will still be saving 8 bits per channel (unless you use a palette), but even then you will get an appreciable reduction in size because of the PNG compression.
A quick example: original:
after a simple posterize with 32 levels (equivalent to RGB555) applied with XnView:
The size goes from 89KB to 47KB, with a small quality loss.
In the case of synthetic images with gradients, the quality loss could be much more noticeable (banding).
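For the programmatic route, a small Pillow/NumPy sketch that discards the 3 least significant bits per channel, like the RGB555-style posterize above (file names are placeholders):

import numpy as np
from PIL import Image

img = np.asarray(Image.open("input.png").convert("RGB"))
posterized = (img >> 3) << 3      # keep 5 bits (32 levels) per channel
Image.fromarray(posterized).save("posterized.png", optimize=True)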
I received this answer from the creator of TexturePacker:
you can do it from command line - see
http://www.texturepacker.com/uncategorized/batch-converting-images-to-pvr-or-pvr-ccz/
Just adjust the options and set the output to .png instead of pvr.ccz.
Make sure that you do not overwrite your source images.
According to Wikipedia, which is always right, the only 16-bit PNG is a greyscale PNG. http://en.wikipedia.org/wiki/Portable_Network_Graphics
If you just add your 32-bit (alpha) or 24-bit (no alpha) PNG to your project as normal, and then set the texture format in Cocos2D, all should be fine. The code for that is:
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGB565];