I have the task of simulating a camera with a full well capacity of 10,000 photons per sensor element in numpy. My first idea was to do it like this:
camera = np.random.normal(0.0, 1/10000, np.shape(img))
Imgwithnoise = img + camera
but it hardly shows any effect.
Does anyone have an idea how to do this?
From what I interpret of your question: if each physical pixel of the sensor has a 10,000-photon limit, that limit corresponds to the brightest a digital pixel can be in your image. Similarly, 0 incident photons give the darkest pixels of the image.
You have to create a map from the physical sensor to the digital image. For the sake of simplicity, let's say we work with a grayscale image.
Your first task is to fix the colour bit depth of the image. That is to say, is your image an 8-bit colour image? (This is usually the case.) If so, the brightest pixel has a brightness value of 255 (= 2^8 - 1, for 8 bits). The darkest pixel is always chosen to have the value 0.
So you'd have to map from the range 0 --> 10,000 (sensor) to 0 --> 255 (image). The most natural idea would be a linear map (i.e. every pixel of the image is obtained by the same multiplicative factor from every pixel of the sensor), but to render the brightness produced by n incident photons correctly for the human eye, different transfer functions are often used.
A transfer function, put simply, is just the mathematical function doing this map - logarithmic TFs are quite common.
Also, since it seems like you're generating noise, it is unwise and conceptually wrong to add camera itself to the image img. What you should do is fix a noise threshold first - this corresponds to the maximum number of noise photons that can affect a pixel reading. Then you generate random numbers (according to some distribution, if so required) in the range 0 --> noise_threshold. Finally, you use the map created earlier to add this noise to the image array.
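As a rough numpy sketch of that idea (assuming an 8-bit grayscale image, a linear transfer function and uniformly distributed noise; full_well and noise_threshold are illustrative names, not anything from your code):

import numpy as np

full_well = 10000        # photons one sensor element can hold
noise_threshold = 500    # illustrative: max number of noise photons per pixel

img = np.zeros((480, 640), dtype=np.float64)   # stand-in for your 8-bit grayscale image

# noise in photon counts, drawn uniformly in 0 --> noise_threshold
noise_photons = np.random.uniform(0, noise_threshold, size=img.shape)

# linear transfer function: map 0 --> full_well photons onto 0 --> 255 image values
noise_digital = noise_photons * 255.0 / full_well

img_with_noise = np.clip(img + noise_digital, 0, 255).astype(np.uint8)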
Hope this helps and is in tune with what you wish to do. Cheers!
There are multiple pages (like this and this) that present examples of the effect of channel_shift_range on images. At first glance, it looks as if the images have only had a change in brightness applied.
This issue has multiple comments mentioning the same observation. So, if channel_shift_range and brightness_range do the same thing, why do they both exist?
After long hours of reverse engineering, I found that:
channel_shift_range: applies the (R + i, G + i, B + i) operation to all pixels in an image, where i is an integer value within the range [0, 255].
brightness_range: applies the (R * f, G * f, B * f) operation to all pixels in an image, where f is a float value around 1.0.
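To make the two operations concrete, here is a minimal numpy sketch (the values of i and f are arbitrary and the image is a stand-in):

import numpy as np

img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)   # stand-in RGB image

i = 40    # channel shift amount, an integer in [0, 255]
f = 1.3   # brightness factor, a float around 1.0

channel_shifted = np.clip(img.astype(np.int16) + i, 0, 255).astype(np.uint8)        # (R + i, G + i, B + i)
brightness_changed = np.clip(img.astype(np.float32) * f, 0, 255).astype(np.uint8)   # (R * f, G * f, B * f)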
Both parameters are related to brightness; however, I found a very interesting difference: the operation applied by channel_shift_range roughly preserves the contrast of an image, while the operation applied by brightness_range roughly multiplies the contrast of an image by f and roughly preserves its saturation. It is important to note that these conclusions do not hold for large values of i and f, since the image becomes so bright that it loses much of its information.
Channel shift and Brightness change are completely different.
Channel Shift: Channel shift changes the color saturation level (e.g. light red/dark red) of pixels by shifting the [R, G, B] channels of the input image. Channel shift is used to introduce color augmentation in the dataset so that the model learns color-based features irrespective of their saturation value.
Below is an example of channel shift from the mentioned article:
In the above image, if you observe carefully, objects (especially the cloud region) are still clearly visible and distinguishable from their neighboring regions even after channel shift augmentation.
Brightness change: The brightness level of the image describes the light intensity throughout the image and is used to add under-exposure and over-exposure augmentation to the dataset.
Below is an example of brightness augmentation:
In the above image, at a low brightness value objects (e.g. clouds) have lost their visibility due to the low light intensity level.
I would like to calculate the Horizontal and Vertical field of view from the camera intrinsic matrix for the cameras used in the KITTI dataset. The reason I need the Field of view is to convert a depth map into 3D point clouds.
Though this question was asked quite a long time ago, I felt it needed an answer, as I ran into the same issue and was unable to find any information on it.
I have, however, solved it using the information available in this document and some more general camera calibration documents.
Firstly, we need to convert the supplied disparity into distance. This can be done by first converting the disparity map into floats using the method in the dev_kit, where they state:
disp(u,v) = ((float)I(u,v))/256.0;
This disparity can then be converted into a distance through the default stereo vision equation:
Depth = Baseline * Focal length / Disparity
Now come some tricky parts. I searched high and low for the focal length and was unable to find it in the documentation.
I realised just now, while writing this, that the baseline is documented in the aforementioned source; from section IV.B we can also see that it can be found indirectly in P_rect^(i).
The P_rect matrices can be found in the calibration files and will be used both for calculating the baseline and for the translation from uv in the image to xyz in the real world.
The steps are as follows:
For each pixel in the depth map:
xyz_normalised = P_rect \ [u, v, 1]
where u and v are the x and y coordinates of the pixel respectively.
This gives you an xyz_normalised of the form [x, y, z, 0] with z = 1.
You can then multiply it by the depth given at that pixel to obtain an xyz coordinate.
For completeness: as the depth map here is referenced to the rectified colour cameras, you need to use P_3 from the cam_cam calibration txt files to get the baseline (as it contains the baseline between the colour cameras), and P_2 belongs to the left colour camera, which is used as the reference for the occ_0 files.
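Putting the steps together, here is a rough numpy sketch of the whole pipeline. The projection matrices and the disparity image below are made-up stand-ins for the values parsed from calib_cam_to_cam.txt and the dev-kit disparity PNGs, and the variable names are just for illustration:

import numpy as np

# Stand-in rectified projection matrices (3x4); in practice parse P_rect_02 / P_rect_03
# from calib_cam_to_cam.txt -- the numbers below are made up purely for illustration.
P_rect_2 = np.array([[720.0,   0.0, 610.0,   45.0],
                     [  0.0, 720.0, 185.0,    0.0],
                     [  0.0,   0.0,   1.0,    0.0]])
P_rect_3 = np.array([[720.0,   0.0, 610.0, -340.0],
                     [  0.0, 720.0, 185.0,    0.0],
                     [  0.0,   0.0,   1.0,    0.0]])

# Focal length and baseline between the colour cameras; per section IV.B the last
# column of P_rect^(i) contains -f * b_x^(i), so the baseline falls out of the difference.
f = P_rect_2[0, 0]
baseline = (P_rect_2[0, 3] - P_rect_3[0, 3]) / f

# Disparity map: stand-in for the uint16 PNG supplied by the dev kit.
I = np.random.randint(0, 256 * 80, size=(375, 1242)).astype(np.uint16)
disp = I.astype(np.float32) / 256.0
valid = disp > 0

depth = np.zeros_like(disp)
depth[valid] = f * baseline / disp[valid]

# Back-project every valid pixel: P_rect \ [u, v, 1] via least squares, then scale by depth.
v, u = np.nonzero(valid)
pix = np.stack([u, v, np.ones_like(u)]).astype(np.float64)   # homogeneous pixels, shape (3, N)
xyz_n, *_ = np.linalg.lstsq(P_rect_2, pix, rcond=None)       # roughly [x, y, z, 0] with z close to 1
rays = xyz_n[:3] / xyz_n[2]                                  # force z = 1
points = rays * depth[v, u]                                  # (3, N) xyz coordinates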
I want to get distance from a depth image. I have a 3D position and I convert this point to 2D. Now I read this coordinate data from the RGB and depth images.
In this picture I marked 5 points and looked at their data in the depth image.
Why is this coordinate data not correct?
Or why is this data 0?
How can I get real data from the depth image?
The data returned by the Kinect has 0 values at pixels where it fails to get a depth measurement. This can happen for several reasons, but especially at reflective surfaces, glass, etc.
Also, the values around the edges are more error-prone, so there are more 0 values around the edges than at the center of the image.
So (a) and (d) can be zero, but if you check the nearby pixels there will be depth values. (e) is a bad measurement, as it has a large value even though it is closer than the rest of the points.
After acquiring the data, maybe you can try some hole-filling technique, such as depth normalization, to remove most of the zero values.
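As a minimal illustration of the "check the nearby pixels" idea, here is a small numpy sketch (the window size and function name are just for illustration):

import numpy as np

def depth_at(depth, u, v, window=5):
    # Return the depth at (u, v); if it is 0 (no measurement), fall back to
    # the median of the valid pixels in a small window around it.
    if depth[v, u] > 0:
        return depth[v, u]
    half = window // 2
    patch = depth[max(v - half, 0):v + half + 1, max(u - half, 0):u + half + 1]
    valid = patch[patch > 0]
    return np.median(valid) if valid.size else 0   # still 0 if the whole window is empty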
I need some help figuring out the volume voxels per meter and resolution settings in Kinect Fusion, mostly how (and if at all) they interact with the depth threshold settings in the Kinect Fusion Explorer program. What I don't get is: if the depth threshold minimum is increased and the maximum is reduced, does that smaller range increase the overall precision of the scanned volume, or does it stay the same?
Say I set Kinect Fusion's depth threshold minimum to 2m and the maximum to 3m, thus setting the scanned range to 3m - 2m = 1m. Does a volume voxels per meter setting of, say, 256 and a resolution of also 256 then mean that I would get a voxel depth precision of 1m / 256 ≈ 0.004m = 0.4cm? Or is the resolution applicable to the complete Kinect depth range instead of the one set via the depth threshold? Also, how are width and height affected by the depth threshold settings, and how do I calculate the precision in those two remaining axes?
Thanks in advance
P.S.
If the volume voxel resolution is set to the maximum for all three axes (768x768x768), what is the minimum amount of GPU memory needed to make Kinect Fusion work?
Answering an old topic, since there is no other answer:
A. Simple Answer:
The depth threshold settings simply decide what region of the depth map you are interested in. Any value below the minimum depth threshold or above the maximum depth threshold is simply replaced with 0 during depth map generation.
B. Detailed Answer:
Volume voxels per meter: this determines the real-world depth represented by a single voxel. So 1000 mm / 256 (voxels per meter) ≈ 3.9 mm per voxel
(see the PCL documentation).
Voxel resolution: the number of voxels along each axis of the volume you are constructing.
So:
Voxel resolution / voxels per meter = side length of the reconstruction volume (in meters)
E.g. 512 voxels / 256 voxels per meter = 2.0 m (the side of the reconstruction cube, given that the number of voxels per side of the cube is the same; each axis can be defined independently).
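For instance, a quick sketch of that arithmetic in Python (the numbers are just the example values above):

voxels_per_meter = 256
voxel_resolution = 512                              # voxels along one axis

voxel_size_mm = 1000.0 / voxels_per_meter           # ~3.9 mm represented by each voxel
cube_side_m = voxel_resolution / voxels_per_meter   # 2.0 m side of the reconstruction cube

print(voxel_size_mm, cube_side_m)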
If you have the Kinect SDK installed, see the descriptions of the following variables:
minDepthClip = FusionDepthProcessor.DefaultMinimumDepth;
maxDepthClip = FusionDepthProcessor.DefaultMaximumDepth;
voxelsPerMeter; voxelsX; voxelsY; voxelsZ;
So these values and the depth threshold values do not depend on each other.
A good example of using the depth threshold values is in the great video by Daniel Shiffman ([Kinect & Processing])
I need some quick advice.
I would like to simulate a cellular automaton (from "A Simple, Efficient Method for Realistic Animation of Clouds") on the GPU. However, I am limited to OpenGL ES 2.0 shaders (in WebGL), which do not support any bitwise operations.
Since every cell in this cellular automaton represents a boolean value, storing 1 bit per cell would be ideal. So what is the most efficient way of representing this data in OpenGL's texture formats? Are there any tricks, or should I just stick with a straightforward RGBA texture?
EDIT: Here are my thoughts so far...
At the moment I'm thinking of going with either plain GL_RGBA8, GL_RGBA4 or GL_RGB5_A1:
Possibly I could pick GL_RGBA8 and try to extract the original bits using floating-point ops (see the sketch below). E.g. x * 255.0 gives an approximate integer value. However, extracting the individual bits is a bit of a pain (i.e. dividing by 2 and rounding a couple of times). Also, I'm wary of precision problems.
If I pick GL_RGBA4, I could store 1.0 or 0.0 per component, but then I could probably also try the same trick as with GL_RGBA8; in this case it's only x * 15.0. Not sure whether it would be faster or not, seeing as there should be fewer ops to extract the bits but less information per texture read.
Using GL_RGB5_A1 I could try to see if I can pack my cells together with some additional information, like a color per voxel, where the alpha channel stores the 1-bit cell state.
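For reference, the float-only bit extraction from the first option could look something like this (a sketch in Python using only operations that also exist in GLSL ES: floor, division and mod; the example channel value is arbitrary):

import math

def extract_bit(x, n):
    # x is a normalised channel value in [0, 1] from an 8-bit texture;
    # recover the integer 0..255 and pull out bit n using only floor/division/mod.
    value = math.floor(x * 255.0 + 0.5)
    return math.floor(value / 2 ** n) % 2

# Example: channel value 105/255 (105 = 0b01101001), bits from least significant first
print([extract_bit(105 / 255.0, n) for n in range(8)])   # [1, 0, 0, 1, 0, 1, 1, 0]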
Create a second texture and use it as a lookup table. In each 256x256 block of the texture you can represent one boolean operation where the inputs are represented by the row/column and the output is the texture value. Actually in each RGBA texture you can represent four boolean operations per 256x256 region. Beware texture compression and MIP maps, though!
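For example, such a lookup table could be generated on the CPU with numpy and then uploaded as a texture (a sketch; the chosen operation and variable names are just for illustration):

import numpy as np

# 256x256 lookup table for a bitwise operation on two packed 8-bit cell values:
# the row index is operand A, the column index is operand B, and the stored
# value is A AND B. Upload this as a single-channel texture.
a = np.arange(256, dtype=np.uint8).reshape(256, 1)
b = np.arange(256, dtype=np.uint8).reshape(1, 256)
lut_and = a & b    # shape (256, 256), values 0..255

# In the shader you would then sample it with something like
# texture2D(lut, vec2(a, b) / 255.0) to get (a AND b) / 255.0,
# using nearest filtering and no mipmaps.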