I fail to understand the input parameters of the CIFilter named CITemperatureAndTint. The documentation says it has two input parameters, both of which are 2D CIVectors.
I played with this filter a lot - via actual code, via Core Image Fun House (example project from Apple - "FunHouse") and via iPhoto.
My intuition says that this filter should have two scalar input parameters: One for the temperature and one for the tint. If you look at the UI of iPhoto you see this:
[Screenshot of iPhoto's Temperature and Tint UI]
As expected: one slider for the temperature and one for the tint. How did Apple "bind" the value of each slider to a 2D vector? akaru asked this question already but got no answer: What's up with CITemperatureAndTint having vector inputs?
I have opened a technical support incident at Apple and asked them the same question. Here is the answer from the Apple engineer:
CITemperatureAndTint has three input parameters: Image, Neutral and TargetNeutral. Neutral and TargetNeutral are of 2D CIVector type, and in both of them, note that the first dimension refers to Temperature and the second dimension refers to Tint. What the CITemperatureAndTint filter basically does is computing a matrix that adapts RGB values from the source white point defined by Neutral (srcTemperature, srcTint) to the target white point defined by TargetNeutral (dstTemperature, dstTint), and then applying this matrix on the input image (using the CIColorMatrix filter). If Neutral and TargetNeutral are of the same values, then the image will not change after applying this filter. I don't know the implementation details about iPhoto, but I think the two slide bars give the Temperature and Tint changes (i.e. differences between source and target Temperature and Tint values already) that you want to add to the source image.
Now I have to get my head around this answer but it seems to be a very good response from Apple.
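To make the engineer's last sentence concrete, this is how I picture two scalar sliders mapping onto the two vector parameters (a sketch in Python for brevity; inputNeutral and inputTargetNeutral are the filter's documented keys, the (6500, 0) default comes from the docs, everything else is hypothetical):

    # Source white point: the filter's documented default Neutral vector.
    SRC_TEMPERATURE, SRC_TINT = 6500.0, 0.0

    def neutral_vectors(temp_delta, tint_delta):
        # Map two slider deltas onto (inputNeutral, inputTargetNeutral).
        neutral = (SRC_TEMPERATURE, SRC_TINT)
        target_neutral = (SRC_TEMPERATURE + temp_delta, SRC_TINT + tint_delta)
        return neutral, target_neutral

    # Equal vectors -> image unchanged, exactly as the engineer describes:
    print(neutral_vectors(0, 0))      # ((6500.0, 0.0), (6500.0, 0.0))
    print(neutral_vectors(1000, 10))  # shift the white point by 1000 K and 10 tint units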
They should be 2D vectors containing the color temperature. The default of (6500, 0) will leave the color unchanged, as described here. You can see which colors correspond to which color temperature values in this Wikipedia link. I'm not sure what the second element of the vector is for.
I've successfully calibrated my camera and I can get the dimensions of an XLD in world coordinates with ContourToWorldPlaneXld and then HeightWidthRatioXld. This gives me the measurements of a contour extracted from a shape.
Now I need to convert a value entered by the user in mm (for example 0.1 mm) into the corresponding number of pixels, e.g. to draw a line.
I need the pixel value as per the request. I tried looking around in the Halcon documentation but I didn't find what I was looking for.
I also read this answer but it's not exactly what I'm looking for.
I'm using Halcon Progress 21.11.
Edit: A possible solution could be to obtain the dimensions before converting them to the world plane and then compute something like pixels/world units, but I would prefer a better method if it exists.
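A minimal sketch of that ratio idea (Python for brevity; the numbers are hypothetical, with the pixel measure taken from the raw image contour and the mm measure from the world-plane contour):

    # Hypothetical measurements of the same contour, before and after
    # the world-plane conversion:
    width_pixels = 250.0   # width measured on the image contour
    width_mm     = 5.0     # width measured on the world-plane contour

    pixels_per_mm = width_pixels / width_mm

    def mm_to_pixels(mm):
        return mm * pixels_per_mm

    print(mm_to_pixels(0.1))  # 0.1 mm -> 5.0 pixels with these numbers

Note that this assumes the scale is (at least locally) constant, i.e. the camera looks roughly perpendicularly onto the measurement plane.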
I have a dataset of images (every image in RGB format) and a corresponding label image for each (which contains the label of every pixel in the image).
I need to extract the objects (pixels) of a particular class from the original images.
First I have to find the location of the object using the label image (by providing the label of the given object). This is doable with explicit for loops, but I don't want to use explicit for loops.
Now my questions:
Is there any built-in function in TensorFlow that gives me the location (rectangles are fine) of a given object if I provide that object's label?
After that I can use tf.image.crop_and_resize to crop the image, but I am not able to find any function that will give me the location of objects.
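The closest I could get to a loop-free version combines tf.where with tf.reduce_min/tf.reduce_max (a sketch, assuming a single [H, W] integer label map and an [H, W, 3] image; the 64x64 crop size is arbitrary):

    import tensorflow as tf

    def crop_class(image, label_image, target_label, crop_size=(64, 64)):
        # Coordinates of every pixel carrying the target label, shape [N, 2].
        coords = tf.cast(tf.where(tf.equal(label_image, target_label)), tf.float32)
        y_min, x_min = tf.reduce_min(coords[:, 0]), tf.reduce_min(coords[:, 1])
        y_max, x_max = tf.reduce_max(coords[:, 0]), tf.reduce_max(coords[:, 1])

        # crop_and_resize expects boxes normalised to [0, 1].
        h = tf.cast(tf.shape(image)[0], tf.float32)
        w = tf.cast(tf.shape(image)[1], tf.float32)
        box = tf.stack([y_min / h, x_min / w, y_max / h, x_max / w])

        return tf.image.crop_and_resize(
            tf.cast(image, tf.float32)[tf.newaxis],  # batch of one image
            boxes=box[tf.newaxis],                   # one box
            box_indices=[0],                         # the box refers to image 0
            crop_size=crop_size)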
So I was reading a document about displacement mappings and surface blendings and came across this equation, which is supposed to be an alpha-blending equation:
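From the description below, the equation is presumably the normalized weighted sum (the original was an image, so this reconstruction is an assumption):

    v = \frac{\sum_{i=1}^{n} w_i \, v_i}{\sum_{i=1}^{n} w_i}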
where v1, ..., vn is supposed to be the value vector and w1, ..., wn the weight vector (that is how the document describes it).
My interpretation of this equation is that, with n being the number of surfaces we are trying to blend together, the value vector represents, as the name says, the value of each surface (probably color related?) and the weight vector describes the preference given to each surface (so the higher a weight, the more we would see the color of that surface after the blend). The multiplication and division part is what I do not fully understand (I just interpret it as the 'it just works like that' part of the equation).
I couldn't find any similar equation anywhere so far, so I figure that either I didn't search deep enough or I am not understanding something that is supposed to be very obvious. I want to make sure that I fully understand this equation before reading further in the document, which builds on this idea.
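To check my reading numerically, I tried this tiny sketch (Python, scalar values for simplicity). It suggests the division simply rescales the weights so they sum to 1:

    def blend(values, weights):
        # Normalized weighted sum: equivalent to first rescaling the weights
        # so they add up to 1, then summing weight * value.
        total = sum(weights)
        return sum(w * v for w, v in zip(values, weights)) / total

    # Two surfaces with values 0.2 and 0.8; the second is weighted 3x.
    print(blend([0.2, 0.8], [1.0, 3.0]))  # 0.65, pulled toward the heavier surface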
I downloaded the following graph-cut code:
https://github.com/shaibagon/GCMex
I compiled the MEX files and ran it on the predefined image in the code (which is an RGB image).
I want to improve the image segmentation results.
I have a probability map of the image whose dimensions are (width, height, 5): five probability distributions over the image are stacked together, each relating to one of the classes.
My problem is which parts of the code I should adapt to use this probability image.
I want to define the data and smoothness terms based on my application.
My questions are:
1) Has someone adapted the code to define a different energy function? (I want to change the unary and pairwise formulations; a sketch of what I have in mind follows below.)
2) I have a stack of images forming a 3D volume. I want to define a 6-neighborhood system: 4 neighbors in the current slice and the other two from the two adjacent slices. In which function and part of the code can I make these refinements?
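What I have in mind, as a sketch (Python for brevity; using -log probabilities as the unary term is my assumption, not something GCMex prescribes, and GCMex's README describes the unary input as a C-by-N matrix):

    import numpy as np

    # Unary (data) term from a (width, height, 5) probability map:
    # negative log-likelihood, one cost per (label, pixel) pair.
    def unary_from_probabilities(prob, eps=1e-10):
        w, h, c = prob.shape
        return -np.log(prob.reshape(w * h, c).T + eps)  # shape (5, w*h)

    # Edge list for a 6-connected (W, H, D) grid: 4 in-slice neighbors plus
    # one voxel in each adjacent slice. Each undirected edge appears once;
    # these index pairs would be scattered into the sparse pairwise matrix.
    def six_neighbor_edges(shape):
        idx = np.arange(np.prod(shape)).reshape(shape)
        edges = []
        for axis in range(3):
            a = idx.take(np.arange(idx.shape[axis] - 1), axis=axis).ravel()
            b = idx.take(np.arange(1, idx.shape[axis]), axis=axis).ravel()
            edges.append(np.stack([a, b], axis=1))
        return np.concatenate(edges)

Mind that the pixel ordering used for the unary matrix has to match the ordering used for the neighbor indices.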
Thanks
I need to know how to get the inverse of a color with LESS CSS.
Example: I have #000, I need #FFF.
I also need a detailed explanation of spin(), and links to a color wheel that helps me understand how spin() works.
Thanks.
Why it is not working as you expect
The spin() function only deals with hue (color), not value (grey-scale changes are a value change). Take a look at Figures 9 and 10 on this page from North Carolina State University's site; those figures help show the difference. The spin() function is rotating only in the two-dimensional space of the hue circle of color, not along the axis of the third dimension dealing with saturation (i.e. the gray scale itself, which is what differentiates white from black, both of which have no color saturation).
This is why on the LESS site we read of spin() (emphasis added):
Note that colors are passed through an RGB conversion, which doesn't retain hue value for greys (because hue has no meaning when there is no saturation)
And
Colors are always returned as RGB values, so applying spin to a grey value will do nothing.
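A minimal illustration of those two quotes (the resulting hues follow from the standard HSL rotation; the variable names are made up):

    // spin() rotates hue, so a fully saturated colour changes completely...
    @red:      #ff0000;
    @opposite: spin(@red, 180);    // cyan, #00ffff

    // ...but a grey has no saturation, so a 180-degree spin leaves it unchanged.
    @grey:       #808080;
    @still-grey: spin(@grey, 180); // still #808080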
Getting what you want (Color Inversion)
See #seven-phases-max's answer.
The spin function changes the Hue property of a colour. Shades of grey (incl. white and black) are achromatic colours (i.e. they have the same "undefined" hue value).
To simply invert a colour, use either the difference function:
difference(white, @colour)
or the simple colour arithmetic:
(#fff - @colour)
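For example (assuming a @colour variable defined elsewhere):

    @colour: #000;
    .inverted {
        color:      difference(white, @colour); // #ffffff
        background: (#fff - @colour);           // #ffffff as well
    }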