Halcon: Obtain how many pixels a mm corresponds to after calibration

I've successfully calibrated my camera, and I can get the dimensions of an XLD in world coordinates with ContourToWorldPlaneXld followed by HeightWidthRatioXld. This gives me the measurements of a contour extracted from a shape.
Now I need to take a value entered by the user in mm (for example 0.1 mm) and work out how many pixels that measurement corresponds to, for example to draw a line.
So I need the value in pixels. I tried looking around in the Halcon documentation but I didn't find what I was looking for.
I also read this answer, but it's not exactly what I'm looking for.
I'm using Halcon Progress 21.11.
Edit: A possible solution could be to obtain the dimensions before converting them to the world plane and then compute something like pixels/world, but I would prefer a better method if one exists.
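For reference, the pixels/world ratio idea from the edit can be written down in a few lines. The sketch below uses made-up numbers: width_px stands for the contour width measured directly on the image and width_mm for the same width after ContourToWorldPlaneXld. It also assumes the scale is roughly uniform over the image; with strong perspective or lens distortion the ratio varies with position.

# Sketch of the pixels/world ratio approach (placeholder numbers, not real measurements)
width_px = 412.0   # contour width measured on the image, in pixels
width_mm = 25.4    # width of the same contour in world coordinates, in mm

px_per_mm = width_px / width_mm

user_value_mm = 0.1                        # value entered by the user, in mm
user_value_px = user_value_mm * px_per_mm  # length in pixels, e.g. for drawing a line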

Related

Halcon - Extract straight edge from XLD

I have an XLD edge, like the one in red in the sample picture below.
I need to extract the start and end points of the straight lines that represent it. Hough lines sort of work for this, but the results are not really repeatable: minor changes in the contour produce unexpected results.
How can the contour be extracted as straight lines (blue) with start and end coordinates?
Lines shorter than a specified length should not be counted as separate lines.
The contour needs to be converted to a polygon using the following operator:
gen_polygons_xld (Object, Polygons, 'ramer', 25.0)
The only adjustable parameter is alpha (25.0), which sets the approximation threshold.
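The 'ramer' mode corresponds to the Ramer-Douglas-Peucker line simplification, where alpha is the maximum distance a contour point may deviate from its approximating segment. Below is a rough pure-NumPy sketch of that idea, only to illustrate what the parameter controls; it is not Halcon's implementation, and the function name is made up.

import numpy as np

def rdp(points, alpha):
    # Ramer-Douglas-Peucker: approximate a polyline (N x 2 array of contour
    # points) by straight segments so that no point deviates from its segment
    # by more than alpha. Returns the retained corner points.
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.hypot(chord[0], chord[1])
    if norm == 0.0:
        dists = np.hypot(points[:, 0] - start[0], points[:, 1] - start[1])
    else:
        # Perpendicular distance of every point to the chord start-end
        dists = np.abs(chord[0] * (points[:, 1] - start[1])
                       - chord[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > alpha:
        # Split at the farthest point and simplify both halves recursively
        left = rdp(points[:idx + 1], alpha)
        right = rdp(points[idx:], alpha)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

# Consecutive rows of the result are the start/end points of the fitted straight lines.
corners = rdp(np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.1], [2.0, 5.0]]), 0.5)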

Simulate Camera in Numpy

I have the task of simulating a camera with a full well capacity of 10,000 photons per sensor element in numpy. My first idea was to do it like this:
camera = np.random.normal(0.0, 1/10000, np.shape(img))
img_with_noise = img + camera
but it hardly shows an effect.
Does someone have an idea of how to do this?
From what I interpret from your question: if each physical pixel of the sensor has a 10,000-photon limit, that limit corresponds to the brightest a digital pixel can be in your image. Similarly, 0 incident photons make the darkest pixels of the image.
You have to create a map from the physical sensor to the digital image. For the sake of simplicity, let's say we work with a grayscale image.
Your first task is to fix the colour bit-depth of the image. That is to say, is your image an 8-bit colour image (which is usually the case)? If so, the brightest pixel has a brightness value of 255 (= 2^8 - 1, for 8 bits). The darkest pixel is always chosen to have a value of 0.
So you'd have to map from the range 0 --> 10,000 (sensor) to 0 --> 255 (image). The most natural idea would be to do a linear map (i.e. every pixel of the image is obtained by the same multiplicative factor from every pixel of the sensor), but to correctly interpret (according to the human eye) the brightness produced by n incident photons, often different transfer functions are used.
A transfer function, in a simplified view, is just a mathematical function doing this map - logarithmic TFs are quite common.
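As a quick illustration of the difference between a linear map and a logarithmic transfer function (the constants below are arbitrary choices for the example, not values from the question):

import numpy as np

full_well = 10_000                # photons per sensor element
max_dn = 2**8 - 1                 # brightest 8-bit value

photons = np.linspace(0, full_well, 5)

linear_dn = photons / full_well * max_dn                   # linear transfer function
log_dn = max_dn * np.log1p(photons) / np.log1p(full_well)  # one possible logarithmic TF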
Also, since it seems like you're generating noise, it is unwise and conceptually wrong to add camera itself to the image img. What you should do is fix a noise threshold first - this corresponds to the maximum number of photons that noise can contribute to a pixel reading. Then you generate random numbers (according to some distribution, if so required) in the range 0 --> noise_threshold. Finally, you use the map created earlier to add this noise to the image array.
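A hedged sketch of that recipe in numpy follows; full_well, noise_threshold, the image size and the uniform distribution are assumptions made for the example, not values taken from the question.

import numpy as np

rng = np.random.default_rng(0)

full_well = 10_000          # photons per sensor element
max_dn = 2**8 - 1           # 8-bit output range
noise_threshold = 200       # assumed maximum number of noise photons per pixel

img = rng.random((480, 640))                               # stand-in for the input image, in [0, 1]

photons = img * full_well                                  # expected photons per pixel
noise = rng.uniform(0.0, noise_threshold, size=img.shape)  # noise in photon units
photons_noisy = np.clip(photons + noise, 0, full_well)     # respect the full well capacity

img_with_noise = (photons_noisy / full_well * max_dn).astype(np.uint8)  # map to 8-bit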
Hope this helps and is in tune with what you wish to do. Cheers!

How to calculate the Horizontal and Vertical FOV for the KITTI cameras from the camera intrinsic matrix?

I would like to calculate the Horizontal and Vertical field of view from the camera intrinsic matrix for the cameras used in the KITTI dataset. The reason I need the Field of view is to convert a depth map into 3D point clouds.
Though this question has been asked quite a long time ago, I felt it needed an answer as I ran into the same issue and was unable to find any info on it.
I have, however, solved it using the information available in this document and some more general camera calibration documents.
Firstly, we need to convert the supplied disparity into distance. This can be done by first converting the disparity map into floats using the method in the dev_kit, where they state:
disp(u,v) = ((float)I(u,v))/256.0;
This disparity can then be converted into a distance through the standard stereo vision equation:
Depth = Baseline * Focal length / Disparity
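In numpy this step looks roughly like the following; baseline and focal_length are placeholder values here, since in practice they come from the calibration as discussed below.

import numpy as np

# I is the raw 16-bit disparity image from the dataset; 0 is commonly used for invalid pixels.
I = np.zeros((375, 1242), dtype=np.uint16)      # stand-in for a loaded disparity image

disp = I.astype(np.float32) / 256.0             # disp(u,v) = ((float)I(u,v)) / 256.0
valid = disp > 0

baseline = 0.54                                 # metres, placeholder value
focal_length = 721.0                            # pixels, placeholder value

depth = np.zeros_like(disp)
depth[valid] = baseline * focal_length / disp[valid]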
Now come some tricky parts. I searched high and low for the focal length and was unable to find it in the documentation.
I realised just now, while writing, that the baseline is documented in the aforementioned source; however, from section IV.B we can see that it can also be found indirectly in P(i)rect.
The P_rect matrices can be found in the calibration files and will be used both for calculating the baseline and for the translation from uv in the image to xyz in the real world.
The steps are as follows:
For each pixel (u, v) in the depth map:
xyz_normalised = P_rect \ [u, v, 1]
where u and v are the x and y coordinates of the pixel, respectively. This gives an xyz_normalised of the form [x, y, z, 0] with z = 1.
You can then multiply it by the depth given at that pixel to obtain an xyz coordinate.
For completeness, regarding which P_rect to use with this depth map: you need P_3 from the cam_cam calibration txt files to get the baseline (as it contains the baseline between the colour cameras), while P_2 belongs to the left camera, which is used as the reference for the occ_0 files.
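A rough numpy sketch of the per-pixel step above; the P_rect numbers are illustrative placeholders, and in practice you read the matrix from the calibration file.

import numpy as np

# Illustrative 3x4 rectified projection matrix; replace with the P_rect from the calibration file.
P_rect = np.array([[721.5, 0.0, 609.6, 44.9],
                   [0.0, 721.5, 172.9, 0.2],
                   [0.0, 0.0, 1.0, 0.003]])

def pixel_to_xyz(u, v, depth, P):
    # "P_rect \ [u, v, 1]": least-squares inverse of the 3x4 projection
    xyz_normalised = np.linalg.pinv(P) @ np.array([u, v, 1.0])
    xyz_normalised = xyz_normalised[:3] / xyz_normalised[2]  # normalise so that z = 1
    return xyz_normalised * depth                            # scale the ray by the depth

point = pixel_to_xyz(620.0, 180.0, 15.3, P_rect)             # xyz in the camera frame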

How should this alpha blending equation be understood?

So I was reading a document about displacement mapping and surface blending and came across this equation, which is supposed to be an alpha-blending equation:
where v1, ..., vn are supposed to be the value vector and w1, ..., wn the weight vector (that is how the document describes it).
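(The equation itself is an image in the original post. Judging from the description, it is presumably the normalised weighted sum, something like

blend = (w1*v1 + w2*v2 + ... + wn*vn) / (w1 + w2 + ... + wn)

though that reconstruction is an assumption.)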
My interpretation of this equation is the following: with n being the number of surfaces we are trying to blend together, the value vector represents, as the name says, the value (probably colour related?) of each surface, and the weight vector describes the preference given to each surface (so the higher the weight, the more we would see the colour of that surface after the blend). The multiplication and division part is what I do not fully understand (I am just treating it as the 'it just works like that' part of the equation).
I couldn't find any similar equation anywhere so far, so either I didn't search deeply enough or I am missing something that is supposed to be very obvious. I want to make sure I fully understand this equation before reading further in the document, which builds on this idea.

Input parameters of CITemperatureAndTint (CIFilter)

I fail to understand the input parameters of the CIFilter named CITemperatureAndTint. The documentation says it has two input parameters which are both a 2D CIVector.
I played with this filter a lot - via actual code, via Core Image Fun House (example project from Apple - "FunHouse") and via iPhoto.
My intuition says that this filter should have two scalar input parameters: one for the temperature and one for the tint. If you look at the UI of iPhoto you see this:
Screenshot of iPhoto's Temperature and Tint UI:
As expected: one slider for the temperature and one for the tint. How did Apple "bind" the value of each slider to a 2D vector? akaru asked this question already but got no answer: What's up with CITemperatureAndTint having vector inputs?
I have opened a technical support incident at Apple and asked them the same question. Here is the answer from the Apple engineer:
CITemperatureAndTint has three input parameters: Image, Neutral and TargetNeutral. Neutral and TargetNeutral are of 2D CIVector type, and in both of them, note that the first dimension refers to Temperature and the second dimension refers to Tint. What the CITemperatureAndTint filter basically does is computing a matrix that adapts RGB values from the source white point defined by Neutral (srcTemperature, srcTint) to the target white point defined by TargetNeutral (dstTemperature, dstTint), and then applying this matrix on the input image (using the CIColorMatrix filter). If Neutral and TargetNeutral are of the same values, then the image will not change after applying this filter. I don't know the implementation details about iPhoto, but I think the two slide bars give the Temperature and Tint changes (i.e. differences between source and target Temperature and Tint values already) that you want to add to the source image.
Now I have to get my head around this answer but it seems to be a very good response from Apple.
They should be 2D vectors containing the color temperature. The default of (6500, 0) will leave the color unchanged, as described here. You can see what values of color temperature give you which colors in this Wikipedia link. I'm not sure what the second element of the vector is for.