Thresholding an HSV image using its histogram - numpy

I'm doing a task in which I need to convert an RGB image into an HSV image. However, I only want to assign valid hue and saturation values where the corresponding intensity is above a minimum value, and similarly only assign a valid hue where the corresponding saturation is above another minimum value; invalid values should be 0. I wrote the following code for it, but I don't understand how np.where works all that well. Can you tell me whether this code does what I need it to do?
sat=hsv[:,:,1] #saturation
iny=hsv[:,:,2] #intensity value
mask = np.zeros_like(hsv)
mask[:,:,2]=iny #all intensity values should be retained
selected_pixels = np.where(np.logical_and(np.greater_equal(iny,15),np.greater_equal(sat,15)))
mask[selected_pixels] = hsv[selected_pixels]
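For what it's worth, the np.where call does return the (row, col) indices where both conditions hold, so mask[selected_pixels] = hsv[selected_pixels] copies the full HSV triplet for those pixels. A plain boolean mask does the same thing without building index arrays; a minimal sketch with a random stand-in for hsv (the two thresholds of 15 are taken from the question):

```python
import numpy as np

rng = np.random.default_rng(1)
hsv = rng.integers(0, 256, (100, 100, 3), dtype=np.uint8)  # stand-in HSV image

sat = hsv[:, :, 1]   # saturation
iny = hsv[:, :, 2]   # intensity value

mask = np.zeros_like(hsv)
mask[:, :, 2] = iny                  # all intensity values are retained
valid = (iny >= 15) & (sat >= 15)    # boolean mask instead of index arrays
mask[valid] = hsv[valid]             # copy full HSV triplets where valid
```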
Secondly, I also want to threshold the image using its histogram. The idea is to retain every pixel in the HSV image whose hue and intensity values have histogram bin values lower than a certain number. To elaborate: if a pixel has a hue value of 50 and an intensity value of 50, I check both the hue and intensity histograms; if the bin value at 50 is lower than a certain threshold for both histograms, I retain that pixel. The exact procedure I'm trying to follow is:
all pixels of the filtered input image are compared to the hue and the intensity histograms. A pixel is classified as an obstacle if either of the two following conditions is satisfied:
i) The hue histogram bin value at the pixel’s hue value is below the hue threshold.
ii) The intensity histogram bin value at the pixel’s intensity value is below the intensity threshold.
If none of these conditions are true, then the pixel is classified as belonging to the ground.
Can anybody tell me how I can do this without long FOR loops? I'll have to run this on live video, so it needs to be fast.
For the second task I tried this:
hue=hsv[:,:,0] #hue value
iny=hsv[:,:,2] #intensity value
mask = np.zeros_like(frame)
hh=cv2.calcHist([hsv],[0],None,[256],[0,256])
ih=cv2.calcHist([hsv],[2],None,[256],[0,256])
selected_pixels = np.where(np.logical_and(np.less(hh[hue],5),np.less(ih[iny],400)))
mask[selected_pixels] = frame[selected_pixels] #frame is original image, HSV is the HSV format image, Mask is the thresholded image
But it does something I don't expect: it retains the blue portion of the original image and doesn't threshold the image as I intended.
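The blue tint is likely because cv2.calcHist returns arrays of shape (256, 1), so hh[hue] carries a trailing length-1 axis; np.where then yields a third index array of zeros, and only channel 0 (blue, in BGR) gets copied. Flattening the histograms and using OR (the quoted procedure classifies a pixel as obstacle if EITHER bin is below threshold) fixes both issues. A sketch with random stand-in arrays, using np.bincount in place of cv2.calcHist so it runs without OpenCV; the thresholds 5 and 400 are taken from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)  # stand-in BGR frame
hsv = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)    # stand-in HSV image

hue = hsv[:, :, 0].astype(np.intp)
iny = hsv[:, :, 2].astype(np.intp)

# flat 1-D histograms (cv2.calcHist returns shape (256, 1); it would need .ravel())
hh = np.bincount(hue.ravel(), minlength=256)
ih = np.bincount(iny.ravel(), minlength=256)

# a pixel is an obstacle if EITHER bin value is below its threshold
obstacle = (hh[hue] < 5) | (ih[iny] < 400)

mask = np.zeros_like(frame)
mask[obstacle] = frame[obstacle]     # keep obstacle pixels, zero out the ground
```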

Related

Histogram Matching on Sentinel 2 satellite images

I'm trying to apply histogram matching to Sentinel 2 data with OpenCV and scikit-image, similar to https://www.geeksforgeeks.org/histogram-matching-with-opencv-scikit-image-and-python.
Sentinel 2 bands have a value range between 0 and 10000, and they also have coordinates encoded. It looks like OpenCV and scikit-image only support a value range up to 255, as my resulting images are all black.
Is there any library that supports the value ranges of sentinel 2 images, without losing the geo information of the image?
Not sure if this helps, but are you working with the L2-A BOA imagery?
In the documentation I understand that the meaningful reflectance values go from “1” to “65535” (UINT) and "0" is reserved for NO_DATA.
As of Baseline 04.00 (since 22-01-25) you also have to use the BOA_ADD_OFFSET for L2 or the RADIO_ADD_OFFSET for L1 to adjust values if you wish to compare them with pre-v04.00 images. Currently all band offsets appear to be set to 1000, so you just subtract this value to get pre-v04.00 values.
There is also QUANTIFICATION_VALUE, which is used to scale down the values for each band - I'm unsure of the size of that value, though. It might bring the pixel values to between 0-1, or perhaps 1-255.
See the "Sentinel-2 Products Specification Document" at https://sentinel.esa.int/documents/247904/685211/S2-PDGS-TAS-DI-PSD-V14.9.pdf/3d3b6c9c-4334-dcc4-3aa7-f7c0deffbaf7?t=1643013091529 for more details.
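As a side note, scikit-image's exposure.match_histograms appears to be dtype-agnostic, so the all-black result may be a display-scaling issue rather than a range limit. If you want to stay library-light, the core CDF-matching idea is a few lines of numpy and works on any value range; you could read and write the bands with rasterio to keep the geo information. A sketch with made-up stand-in bands:

```python
import numpy as np

def match_histograms_1d(source, reference):
    # map source values so their empirical CDF matches the reference CDF;
    # works on any value range (e.g. 0..10000 Sentinel 2 bands)
    _, s_idx, s_counts = np.unique(source.ravel(),
                                   return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)   # invert the reference CDF
    return matched[s_idx].reshape(source.shape)

rng = np.random.default_rng(0)
band = rng.integers(0, 10001, (100, 100))    # stand-in Sentinel 2 band
ref = rng.integers(2000, 8001, (100, 100))   # stand-in reference band
out = match_histograms_1d(band, ref)
```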

how to prevent plt imshow from normalize image

Let's say I have an image where the maximum value is 1 and the minimum is 0.8 (a very bright image).
When I use plt.imshow(image) I expect to see a high-intensity image, but for some reason I still see black. That means plt.imshow normalizes the range [0.8, 1] to [0, 1].
How can I see the image without this normalization process?
For example, here is my image (min value 0.57, max value 1) - so why is there black in the image?
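By default imshow stretches the colour scale to the data's own min and max; passing explicit vmin/vmax pins the scale instead. A minimal sketch, assuming a grayscale float image in the 0..1 range:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # headless backend for this sketch
import matplotlib.pyplot as plt

img = 0.8 + 0.2 * np.random.default_rng(0).random((64, 64))  # bright image

# without vmin/vmax, imshow would stretch [0.8, 1] over the full colormap;
# pinning the limits to [0, 1] keeps the displayed image visually bright
im = plt.imshow(img, cmap="gray", vmin=0.0, vmax=1.0)
```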

How to fill a line in 2D image along a given radius with the data in a given line image?

I want to fill a 2D image along its polar radius; the data are stored in an image where each row or column corresponds to the radius in the target image. How can I fill the target image efficiently, such as with iradius or some similar function? I'd rather avoid a pixel-by-pixel operation.
Are you looking for something like this?
number maxR = 100
image rValues := realimage("I(r)",4,maxR)
rValues = 10 + trunc(100*random())
image plot :=realimage("Ring",4,2*maxR,2*maxR)
rValues.ShowImage()
plot.ShowImage()
plot = rValues.warp(iradius,0)
You might also want to check out the relevant example code from the F1 help documentation of GMS itself:
Explaining warp a bit:
plot = rValues.warp(iradius,0)
Assigns values to plot based on a value-lookup in rValues.
For each pixel in plot a coordinate position in rValues is computed, and the value is simply looked up. If the computed coordinate is non-integer, bilinear interpolation between the 4 closest points is used.
In the example, the two 'formulas' for the coordinate calculation are simple x' = iradius and y' = 0 where iradius is an expression computed from the coordinate in plot, for convenience.
You can feed any expression into the parameters for warp( ) and the command is closely related to just using the square bracket notation of addressing values. In fact, the only difference is that warp performs the bilinear interpolation of values instead of truncating the coordinates to integer values.
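Outside GMS, the same radius-based lookup can be sketched in numpy: compute each output pixel's radius from the centre, then use it to interpolate into the 1-D profile (np.interp is linear rather than bilinear, since the lookup here is one-dimensional):

```python
import numpy as np

max_r = 100
r_values = 10 + np.trunc(100 * np.random.default_rng(0).random(max_r))  # I(r)

# radius of every pixel in the 2*max_r x 2*max_r output image
yy, xx = np.mgrid[0:2 * max_r, 0:2 * max_r]
radius = np.hypot(xx - max_r, yy - max_r)

# look up I(r) at each pixel's radius (radii beyond max_r-1 clamp to the edge)
ring = np.interp(radius, np.arange(max_r), r_values)
```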

Simulate Camera in Numpy

I have the task of simulating a camera with a full well capacity of 10,000 photons per sensor element in numpy. My first idea was to do it like this:
camera = np.random.normal(0.0, 1/10000, np.shape(img))
img_with_noise = img + camera
but it hardly shows an effect.
Has someone an idea how to do it?
From what I interpret from your question, if each physical pixel of the sensor has a 10,000 photon limit, this points to the brightest a digital pixel can be on your image. Similarly, 0 incident photons make the darkest pixels of the image.
You have to create a map from the physical sensor to the digital image. For the sake of simplicity, let's say we work with a grayscale image.
Your first task is to fix the colour bit-depth of the image. That is to say, is your image an 8-bit colour image? (That is usually the case.) If so, the brightest pixel has a brightness value of 255 (= 2^8 - 1 for 8 bits). The darkest pixel is always chosen to have the value 0.
So you'd have to map from the range 0 --> 10,000 (sensor) to 0 --> 255 (image). The most natural idea would be to do a linear map (i.e. every pixel of the image is obtained by the same multiplicative factor from every pixel of the sensor), but to correctly interpret (according to the human eye) the brightness produced by n incident photons, often different transfer functions are used.
A transfer function in a simplified version is just a mathematical function doing this map - logarithmic TFs are quite common.
Also, since it seems like you're generating noise, it is unwise and conceptually wrong to add camera itself to the image img. What you should do is fix a noise threshold first - this can correspond to the maximum number of photons that can affect a pixel reading, i.e. the maximum noise value. Then you generate random numbers (according to some distribution, if so required) in the range 0 --> noise_threshold. Finally, you use the map created earlier to add this noise to the image array.
Hope this helps and is in tune with what you wish to do. Cheers!
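If the goal is physically realistic sensor noise rather than additive Gaussian noise, photon shot noise is usually modelled as Poisson-distributed around the expected photon count - an assumption here, since the question doesn't fix a noise model. A sketch with a made-up scene, using the 10,000-photon full well and a linear 8-bit map:

```python
import numpy as np

rng = np.random.default_rng(0)
full_well = 10_000                    # photons per sensor element

img = rng.random((64, 64))            # made-up scene, normalized to 0..1
photons = img * full_well             # expected photon count per pixel

# shot noise: actual counts are Poisson-distributed around the expectation,
# clipped at the full well capacity
noisy_photons = np.minimum(rng.poisson(photons), full_well)

# linear map from 0..10,000 photons to an 8-bit brightness value
digital = np.round(noisy_photons / full_well * 255).astype(np.uint8)
```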

tensorflow: how to calculate the zero-mean and unit-variance of the RGB values

I want to calculate the zero-mean and unit-variance of an image.
I have already read in a pair of images into a list as tensors with the dimensions (m, n, 3).
The zero-mean is calculated by taking the mean of all red, green and blue values of all images in the list and subtracting it per image.
For this task, can I use the moments method? if yes, which axes are correct?
mean, var = tf.nn.moments(input, axes=[0,1,2])
For getting mean and variance, tf.nn.moments is the right thing. The axes parameter tells which axes to include for aggregating.
If you want a single mean/var for the entire RGB you can use:
mean, var = tf.nn.moments(RGB, axes=[0,1,2])
If you want a mean/var for each of the channels (R, G, B), you can use:
mean, var = tf.nn.moments(RGB, axes=[0,1])
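The same axis semantics can be checked in plain numpy if TensorFlow isn't handy; note that reducing over axes (0, 1) leaves the channel axis, giving shape (3,), and those per-channel statistics are exactly what you'd use for zero-mean/unit-variance normalization:

```python
import numpy as np

rgb = np.random.default_rng(0).random((4, 5, 3))  # toy (m, n, 3) image

# axes=[0, 1, 2]: a single scalar mean/var over the whole tensor
mean_all, var_all = rgb.mean(), rgb.var()

# axes=[0, 1]: per-channel statistics, shape (3,)
mean_c, var_c = rgb.mean(axis=(0, 1)), rgb.var(axis=(0, 1))

# zero-mean / unit-variance normalization per channel
normalized = (rgb - mean_c) / np.sqrt(var_c)
```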