I am trying to apply histogram matching to Sentinel-2 data with OpenCV and scikit-image, similar to https://www.geeksforgeeks.org/histogram-matching-with-opencv-scikit-image-and-python.
Sentinel-2 bands have a value range between 0 and 10000, and the files are georeferenced. It looks like OpenCV and scikit-image only support a value range up to 255, as my resulting images are all black.
Is there any library that supports the value ranges of Sentinel-2 images without losing the geo information of the image?
Not sure if this helps, but are you working with the L2A BOA imagery?
From the documentation I understand that the meaningful reflectance values go from 1 to 65535 (UINT), with 0 reserved for NO_DATA.
As of baseline 04.00 (since 2022-01-25) you also have to apply the BOA_ADD_OFFSET for L2 (or the RADIO_ADD_OFFSET for L1) if you wish to compare values with pre-04.00 images. Currently all band offsets appear to be set to 1000, so you just subtract this value to get pre-04.00 values.
There is also QUANTIFICATION_VALUE, which is used to scale down the values for each band. I'm unsure of the size of that value, but it might bring the pixel values to between 0-1 or perhaps 1-255.
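Putting the offset and the quantification together, the usual DN-to-reflectance conversion looks roughly like the sketch below. The constants should be read from the product metadata (MTD_MSIL2A.xml) rather than hard-coded; the values shown are only what current products appear to use.

# dn: a raw uint16 Sentinel-2 band as a numpy array
BOA_ADD_OFFSET = -1000            # per-band offset in current baseline metadata
QUANTIFICATION_VALUE = 10000.0    # scales DNs down to surface reflectance
reflectance = (dn.astype("float32") + BOA_ADD_OFFSET) / QUANTIFICATION_VALUE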
See the"Sentinel-2 Products Specification Document" at https://sentinel.esa.int/documents/247904/685211/S2-PDGS-TAS-DI-PSD-V14.9.pdf/3d3b6c9c-4334-dcc4-3aa7-f7c0deffbaf7?t=1643013091529
for more details
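As for the original question, here is a minimal sketch of one way to do the matching on the raw values while keeping the geo information, assuming hypothetical file names: rasterio preserves the CRS and transform, and skimage's match_histograms works on arbitrary value ranges, which sidesteps the 0-255 problem.

import numpy as np
import rasterio
from skimage.exposure import match_histograms

# hypothetical file names; both are multi-band Sentinel-2 rasters
with rasterio.open("source.tif") as src, rasterio.open("reference.tif") as ref:
    src_data = src.read().astype(np.float32)   # shape: (bands, rows, cols)
    ref_data = ref.read().astype(np.float32)
    profile = src.profile                      # keeps CRS, transform, dtype, etc.

# match each band against the corresponding reference band
matched = np.stack([match_histograms(s, r)
                    for s, r in zip(src_data, ref_data)]).astype(np.float32)

profile.update(dtype="float32")
with rasterio.open("matched.tif", "w", **profile) as dst:
    dst.write(matched)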
I'm doing a task in which I need to convert an RGB image into an HSV image. I only assign valid values to hue and saturation if the corresponding intensity is above a minimum value, and similarly I only assign a valid value if the corresponding saturation is above another minimum value; invalid values are set to 0. I wrote the following code for it. I don't understand how np.where works that well, so can you tell me whether this code does what I need it to do?
import cv2
import numpy as np

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # frame is the BGR input image
sat = hsv[:, :, 1]  # saturation channel
iny = hsv[:, :, 2]  # intensity (value) channel
mask = np.zeros_like(hsv)
mask[:, :, 2] = iny  # all intensity values should be retained
# indices of pixels whose intensity AND saturation are both >= 15
selected_pixels = np.where(np.logical_and(np.greater_equal(iny, 15), np.greater_equal(sat, 15)))
mask[selected_pixels] = hsv[selected_pixels]  # copy the full HSV triplets at those indices
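For reference: with a single boolean argument, np.where returns the indices at which the condition is True, which is what makes the fancy indexing above work. A tiny standalone example:

import numpy as np

a = np.array([[1, 20],
              [30, 4]])
rows, cols = np.where(a >= 15)
# rows = array([0, 1]), cols = array([1, 0]) -> selects the elements 20 and 30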
Secondly, I also want to threshold the image using its histogram. The idea is to retain every pixel in the HSV image whose hue and intensity values have histogram bin counts lower than a certain number. To elaborate: if a pixel has a hue value of 50 and an intensity value of 50, I check the histograms of both hue and intensity, and if the bin value at 50 in both histograms is lower than a certain threshold, I retain that pixel. The exact thing I'm trying to follow is:
All pixels of the filtered input image are compared to the hue and the intensity histograms. A pixel is classified as an obstacle if either of the two following conditions is satisfied:
i) The hue histogram bin value at the pixel's hue value is below the hue threshold.
ii) The intensity histogram bin value at the pixel's intensity value is below the intensity threshold.
If neither of these conditions is true, then the pixel is classified as belonging to the ground.
Can anybody tell me how I can do this without long for loops? I'll have to run this on live video, so it needs to be fast.
For the second task I tried this:
hue = hsv[:, :, 0]  # hue channel
iny = hsv[:, :, 2]  # intensity (value) channel
mask = np.zeros_like(frame)
# 256-bin histograms of the hue and intensity channels; both have shape (256, 1)
hh = cv2.calcHist([hsv], [0], None, [256], [0, 256])
ih = cv2.calcHist([hsv], [2], None, [256], [0, 256])
# keep pixels whose hue-bin count is < 5 AND whose intensity-bin count is < 400
selected_pixels = np.where(np.logical_and(np.less(hh[hue], 5), np.less(ih[iny], 400)))
mask[selected_pixels] = frame[selected_pixels]  # frame is the original image, mask the thresholded result
But it shows something I don't expect: it retains the blue portion of the original image and doesn't threshold the image the way I intended.
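For reference, a minimal vectorized sketch of the quoted obstacle rule, with hypothetical threshold values. Note that calcHist returns shape (256, 1), so the histograms are flattened with ravel() first; otherwise hh[hue] gains an extra axis and the indexing no longer lines up with the image.

import cv2
import numpy as np

HUE_THRESH = 5    # hypothetical thresholds, to be tuned
INT_THRESH = 400

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
hue = hsv[:, :, 0]
iny = hsv[:, :, 2]

hh = cv2.calcHist([hsv], [0], None, [256], [0, 256]).ravel()
ih = cv2.calcHist([hsv], [2], None, [256], [0, 256]).ravel()

# the quoted rule: obstacle if EITHER bin count is below its threshold
obstacle = (hh[hue] < HUE_THRESH) | (ih[iny] < INT_THRESH)

mask = np.zeros_like(frame)
mask[obstacle] = frame[obstacle]  # retain the obstacle pixels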
The standard way to do non-maximum suppression, or to find peaks in a 2D image, involves two steps (sketched in code below):
Do a max pool with some kernel size, giving maxpooled_image.
Then select the pixels where pixel_value == maxpooled_image value.
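A minimal NumPy/SciPy sketch of this standard version, assuming a 2D float image:

import numpy as np
from scipy.ndimage import maximum_filter

def nms_peaks(img, kernel_size=3):
    # a pixel is a peak iff it equals the maximum over its neighbourhood
    maxpooled_image = maximum_filter(img, size=kernel_size)
    return img == maxpooled_image  # boolean peak mask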
However, let us say I have an additional channel, value2. Consider two strong pixels that fall within one NMS window. In the standard case, only one of these pixels will be chosen. I'd like to add a further condition: if their value2 values differ by more than some threshold (dth), select both pixels, but if the difference between the value2 of pixel1 and pixel2 is small, pick only the brighter pixel.
How do I achieve this in NumPy?
I want to create a mathematical model for a 2D bin packing optimization problem. I am not quite sure whether it is a bin packing problem; it may be called strip packing. Anyway, let me introduce the problem.
1- There are groups of boxes to be placed on strips (see item 3).
2- Each group contains a number of boxes which have the same width and the same height. For example,
group A
100 boxes with width = 80cm and height = 120cm
group B
250 boxes with width = 150cm and height = 200cm
etc.
3- There is an unlimited number of equally sized strips with fixed width and height, for example
an infinite number of strips with width = 800cm and height = 1400cm.
4- The main goal is to pack these boxes into the minimum number of strips. However, there are some restrictions on how this can be done.
5- If we think of a strip as a 2D plane of rows and columns, each column must contain boxes of a single fixed width. For example, if (column 0, row 0) holds a box with w=100, h=80, then (column 0, row 1) also has to hold a box with w=100, h=80. Different-sized boxes are not allowed in the same column. This rule does not apply to rows; each row can contain different-sized boxes without restriction.
6- It is not important to fill the whole strip; we want to fill strips with minimum space between boxes. The highest column defines a stop line across the other columns, and we calculate the loss value (the ratio of empty space over the whole strip area); see the sketch below.
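To make item 6 concrete, here is a small Python sketch of how the loss could be computed, under an assumed representation (each column as a box width plus a list of box heights) and one interpretation of the stop line; this is illustrative only:

def strip_loss(columns, strip_width=800.0, strip_height=1400.0):
    # columns: list of (box_width, [box_heights]); boxes in a column share one width
    box_area = sum(w * sum(heights) for w, heights in columns)
    stop_line = max(sum(heights) for _, heights in columns)  # height of the highest column
    # empty space below the stop line, as a ratio over the whole strip area
    return (strip_width * stop_line - box_area) / (strip_width * strip_height)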
I tried to implement this optimization problem with the GLPK linear programming tool, using the mathematical model from the paper C. Blum, V. Schmid, "Solving the 2D bin packing problem by means of a hybrid evolutionary algorithm".
This mathematical model works great in GLPK. However, it is designed for packing boxes at free x,y coordinates; as item 5 describes, we want them in a fixed-width column fashion.
Can you please help me modify the mathematical model to make it possible to implement item 5?
Thank you all,
I have the task of simulating a camera with a full well capacity of 10,000 photons per sensor element in numpy. My first idea was to do it like this:

import numpy as np

camera = np.random.normal(0.0, 1 / 10000, np.shape(img))  # Gaussian noise, sigma = 1e-4
img_with_noise = img + camera
But it hardly shows any effect.
Does anyone have an idea how to do it?
From what I interpret from your question: if each physical pixel of the sensor has a 10,000-photon limit, that limit corresponds to the brightest a digital pixel can be in your image. Similarly, 0 incident photons make the darkest pixels of the image.
You have to create a map from the physical sensor to the digital image. For the sake of simplicity, let's say we work with a grayscale image.
Your first task is to fix the colour bit depth of the image. That is to say, is your image an 8-bit colour image (which is usually the case)? If so, the brightest pixel has a brightness value of 255 (= 2^8 - 1, for 8 bits). The darkest pixel is always chosen to have the value 0.
So you'd have to map from the range 0 --> 10,000 (sensor) to 0 --> 255 (image). The most natural idea would be a linear map (i.e., every pixel of the image is obtained by the same multiplicative factor from every pixel of the sensor), but to render the brightness produced by n incident photons in a way the human eye perceives as correct, different transfer functions are often used.
A transfer function, in a simplified view, is just a mathematical function doing this map; logarithmic TFs are quite common.
Also, since it seems like you're generating noise, it is unwise and conceptually wrong to add camera itself to the image img. What you should do is fix a noise threshold first; this can correspond to the maximum number of photons that noise can contribute to a pixel reading. Then you generate random numbers (according to some distribution, if so required) in the range 0 --> noise_threshold. Finally, you use the map created earlier to add this noise to the image array.
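A minimal sketch of that recipe, assuming img holds photon counts per pixel and using a hypothetical noise_threshold (all numbers illustrative):

import numpy as np

FULL_WELL = 10_000      # full well capacity: photons per sensor element
noise_threshold = 500   # hypothetical: max photons noise can contribute

# img is assumed to hold photon counts in the range 0..FULL_WELL
noise = np.random.uniform(0, noise_threshold, np.shape(img))
photons = np.clip(img + noise, 0, FULL_WELL)

# linear map from photon counts to 8-bit pixel values
img_8bit = np.round(photons / FULL_WELL * 255).astype(np.uint8)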
Hope this helps and is in tune with what you wish to do. Cheers!
I would like to calculate the horizontal and vertical fields of view from the camera intrinsic matrix for the cameras used in the KITTI dataset. The reason I need the field of view is to convert a depth map into a 3D point cloud.
Though this question was asked quite a long time ago, I felt it needed an answer, as I ran into the same issue and was unable to find any info on it.
I have, however, solved it using the information available in this document and some more general camera calibration documents.
First, we need to convert the supplied disparity into distance. This can be done by first converting the disparity map into floats through the method in the dev_kit, where they state:
disp(u,v) = ((float)I(u,v))/256.0;
This disparity can then be converted into a distance through the standard stereo vision equation:
depth = baseline * focal_length / disparity
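In NumPy this step could look something like the following sketch (variable names are mine; baseline and focal have to come from the calibration, as discussed below):

import numpy as np

baseline = 0.54   # metres, hypothetical; read from the calibration files
focal = 720.0     # pixels, hypothetical; read from the calibration files

# disp_png: the raw uint16 disparity image from the dataset
disp = disp_png.astype(np.float32) / 256.0   # the dev_kit conversion quoted above
valid = disp > 0                             # 0 encodes "no measurement"
depth = np.zeros_like(disp)
depth[valid] = baseline * focal / disp[valid]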
Now come some tricky parts. I searched high and low for the focal length and was unable to find it in the documentation.
I realised just now, while writing this, that the baseline is documented in the aforementioned source; moreover, from section IV.B we can see that it can also be found indirectly in P(i)rect.
The P_rect matrices can be found in the calibration files and will be used both for calculating the baseline and for the translation from u,v in the image to x,y,z in the real world.
The steps are as follows:
For each pixel in the depth map:
xyz_normalised = P_rect \ [u, v, 1]
where u and v are the x and y coordinates of the pixel, respectively.
This will give you an xyz_normalised of the form [x, y, z, 0] with z = 1.
You can then multiply it by the depth given at that pixel to obtain an x,y,z coordinate.
For completeness, as we are working with the depth map here: you need to use P_3 from the cam_cam calibration txt files to get the baseline (as it contains the baseline between the colour cameras), and P_2 belongs to the left camera, which is used as the reference for the occ_0 files.
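A hedged NumPy sketch of the back-projection step above; the P_rect values must be read from the calibration file, and the matrix below is only a placeholder with the right shape:

import numpy as np

# placeholder 3x4 projection matrix; read the real P_rect from the calib file
P_rect = np.array([[700.0, 0.0, 600.0, 45.0],
                   [0.0, 700.0, 180.0, 0.2],
                   [0.0, 0.0, 1.0, 0.003]])

def pixel_to_xyz(u, v, depth):
    # least-squares solve of P_rect @ x = [u, v, 1] (the "\" step above)
    x, *_ = np.linalg.lstsq(P_rect, np.array([u, v, 1.0]), rcond=None)
    x = x / x[2]          # normalise so the z component equals 1
    return x[:3] * depth  # scale by the metric depth at (u, v)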