how to prevent plt.imshow from normalizing an image - matplotlib

Let's say I have an image where the maximum value is 1 and the minimum is 0.8 (a very bright image).
When I use plt.imshow(image) I expect to see a high-intensity image, but for some reason I still see black. That means plt.imshow normalizes the range [0.8, 1] to [0, 1].
How can I see the image without this normalization?
For example, here is my image:
Its min value is 0.57 and its max value is 1, so why is there black in the image?
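A minimal sketch of what I mean (the synthetic image and the 'gray' colormap are only illustrative): passing explicit vmin and vmax to imshow pins the color scale instead of letting it autoscale to the data range.
import numpy as np
import matplotlib.pyplot as plt

# illustrative bright image with values between 0.8 and 1.0
image = 0.8 + 0.2 * np.random.rand(64, 64)

# fixing vmin/vmax stops imshow from rescaling [0.8, 1.0] to the full colormap,
# so the image is rendered as bright as the raw values suggest
plt.imshow(image, cmap='gray', vmin=0.0, vmax=1.0)
plt.colorbar()
plt.show()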

Related

Thresholding an HSV image using its histogram

I'm doing a task in which I need to convert an RGB image into an HSV image, but I only assign valid values to hue and saturation if the corresponding intensity is above a minimum value. Similarly, I only assign a valid value if the corresponding saturation is above another minimum value; invalid values are set to 0. I wrote the following code for it. I don't understand how np.where works that well. Can you tell me if this code does what I need it to do?
sat = hsv[:, :, 1]  # saturation
iny = hsv[:, :, 2]  # intensity value
mask = np.zeros_like(hsv)
mask[:, :, 2] = iny  # all intensity values should be retained
selected_pixels = np.where(np.logical_and(np.greater_equal(iny, 15), np.greater_equal(sat, 15)))
mask[selected_pixels] = hsv[selected_pixels]
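As a side note on np.where (a toy illustration, not the question's data): called with only a condition, np.where returns a tuple of index arrays, one per dimension, and that tuple can be used both to read and to assign at exactly those positions.
import numpy as np

a = np.array([[10, 20],
              [30, 40]])

# with only a condition, np.where returns a tuple of index arrays
rows, cols = np.where(a >= 25)
print(rows, cols)            # [1 1] [0 1]

# indexing with that tuple selects (and can assign to) exactly those positions,
# which is what mask[selected_pixels] = hsv[selected_pixels] does above
b = np.zeros_like(a)
b[rows, cols] = a[rows, cols]
print(b)                     # [[ 0  0]
                             #  [30 40]]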
Secondly, I also want to threshold the image using its histogram. The idea is to retain every pixel in the HSV image whose hue and intensity values fall in histogram bins with counts below a certain number. To elaborate: if a pixel has a hue value of 50 and an intensity value of 50, I'll check the histograms for both hue and intensity. If the bin value at 50 in both histograms is lower than a certain threshold, I'll retain that pixel. The exact thing I'm trying to follow is:
all pixels of the filtered input image are compared to the hue and the intensity histograms. A pixel is classified as an obstacle if either of the two following conditions is satisfied:
i) The hue histogram bin value at the pixel’s hue value is below the hue threshold.
ii) The intensity histogram bin value at the pixel’s intensity value is below the intensity threshold.
If none of these conditions are true, then the pixel is classified as belonging to the ground.
Can anybody tell me how I can do this without long FOR loops? I'll have to do this on live video, so it needs to be fast.
For the second task I tried using this:
hue = hsv[:, :, 0]  # hue value
iny = hsv[:, :, 2]  # intensity value
mask = np.zeros_like(frame)
hh = cv2.calcHist([hsv], [0], None, [256], [0, 256])
ih = cv2.calcHist([hsv], [2], None, [256], [0, 256])
selected_pixels = np.where(np.logical_and(np.less(hh[hue], 5), np.less(ih[iny], 400)))
mask[selected_pixels] = frame[selected_pixels]  # frame is the original image, hsv is the HSV image, mask is the thresholded image
But it shows something I don't expect: it retains the blue portion of the original image and doesn't threshold the image like I intended.
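One hedged sketch of how the histogram lookup could be vectorized (the thresholds 5 and 400 are taken from the question, logical_or follows the quoted "either" rule, and the file name is made up): flattening the (256, 1) histograms to 1-D avoids the stray third index that otherwise ends up copying only the first (blue) channel.
import cv2
import numpy as np

frame = cv2.imread('frame.png')                 # hypothetical input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

hue = hsv[:, :, 0]
iny = hsv[:, :, 2]

# ravel() turns the (256, 1) histograms into 1-D lookup tables, so that
# hh[hue] and ih[iny] become plain 2-D arrays with the shape of the image
hh = cv2.calcHist([hsv], [0], None, [256], [0, 256]).ravel()
ih = cv2.calcHist([hsv], [2], None, [256], [0, 256]).ravel()

# quoted rule: a pixel is an obstacle if either bin count is below its threshold
obstacle = np.logical_or(hh[hue] < 5, ih[iny] < 400)

mask = np.zeros_like(frame)
mask[obstacle] = frame[obstacle]                # keep only the "obstacle" pixels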

Difference between channel_shift_range and brightness_range in ImageDataGenerator (Keras)?

There are multiple pages (like this and this) that present examples of the effect of channel_shift_range on images. At first glance, it appears as if the images have only had a change in brightness applied.
This issue has multiple comments mentioning this observation. So, if channel_shift_range and brightness_range do the same thing, why do they both exist?
After long hours of reverse engineering, I found that:
channel_shift_range: applies the (R + i, G + i, B + i) operation to all pixels in an image, where i is an integer value within the range [0, 255].
brightness_range: applies the (R * f, G * f, B * f) operation to all pixels in an image, where f is a float value around 1.0.
Both parameters are related to brightness. However, I found a very interesting difference: the operation applied by channel_shift_range roughly preserves the contrast of an image, while the operation applied by brightness_range roughly multiplies the contrast of an image by f and roughly preserves its saturation. It is important to note that these conclusions may not hold for large values of i and f, since the image will be extremely bright and will have lost much of its information.
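A toy numpy illustration of the two operations described above (not the actual Keras implementation; the pixel value, offset i and factor f are made up):
import numpy as np

rgb = np.array([[[100, 150, 200]]], dtype=np.float32)   # one made-up pixel

i = 30    # illustrative channel shift offset
f = 1.3   # illustrative brightness factor

channel_shifted = np.clip(rgb + i, 0, 255)      # (R + i, G + i, B + i)
brightness_scaled = np.clip(rgb * f, 0, 255)    # (R * f, G * f, B * f)

# the shift keeps the differences between channels (contrast) unchanged,
# while the scaling multiplies them by f
print(channel_shifted)      # [[[130. 180. 230.]]]
print(brightness_scaled)    # [[[130. 195. 255.]]]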
Channel shift and Brightness change are completely different.
Channel Shift: A channel shift changes the color saturation level (e.g. light red/dark red) of pixels by shifting the [R, G, B] channels of the input image. Channel shift is used to introduce color augmentation into the dataset so that the model learns color-based features irrespective of saturation.
Below is the example of channel shift from the mentioned article:
In the above image, if you observe carefully, objects (especially the cloud region) are still clearly visible and distinguishable from their neighboring regions even after the channel shift augmentation.
Brightness change: The brightness level of an image describes the light intensity throughout the image and is used to add under-exposure and over-exposure augmentation to the dataset.
Below is the example of brightness augmentation:
In the above image, at a low brightness value objects (e.g. clouds) have lost their visibility due to the low light intensity level.

python numpy/scipy zoom changing center

I have a 2D numpy array, say something like:
import numpy as np
x = np.random.rand(100, 100)
Now, I want to zoom this image (keeping the size the same, i.e. (100, 100)) while changing the centre of the zoom.
So, say I want to zoom keeping the point (70, 70) at the centre; normally one would "translate" the image to that point and then zoom.
I wonder how I can achieve this with scipy. Is there a way to specify, say, 4 coordinates from this numpy array and basically fill the canvas with the interpolated image from that region of interest?
You could use ndimage.zoom to do the zooming part. I use ndimage a lot, and it works well and is fast. https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.zoom.html
The 4 coordinates you mention are, I presume, two corners of the region you want to zoom into. That's easy by just using numpy slicing of your image (presuming your image is an np array):
your_image[r1:r2, c1:c2]
Assuming you want your output image at 100x100, your r1-r2 and c1-c2 differences will be the same, so your region is square.
ndimage.zoom takes a zoom factor (float). You would need to compute what that zoom factor is in order to take your sliced image and turn it into a 100x100 sized array:
ndimage.zoom(your_image[r1:r2, c1:c2], zoom=your_zoom_factor)
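Putting the two steps together, a minimal sketch (the centre (70, 70) comes from the question; the zoom factor of 2 is illustrative, and centres near the border would additionally need the slice bounds clamped):
import numpy as np
from scipy import ndimage

x = np.random.rand(100, 100)

center = (70, 70)        # point that should stay in the middle
out_size = 100           # keep the output the same size as the input
zoom_factor = 2.0        # illustrative zoom

# half-width of the region of interest that will fill the output after zooming
half = int(round(out_size / zoom_factor / 2))
r1, r2 = center[0] - half, center[0] + half
c1, c2 = center[1] - half, center[1] + half

roi = x[r1:r2, c1:c2]                                      # (50, 50) slice around (70, 70)
zoomed = ndimage.zoom(roi, zoom=out_size / roi.shape[0])   # back up to (100, 100)
print(zoomed.shape)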

Relationship between dpi and figure size

I have created a figure using matplotlib, but I have realized that the plot axes and the drawn line get zoomed out.
Reading this earlier discussion thread, it explains how to set the figure size.
fig, ax = plt.subplots()
fig.set_size_inches(3, 1.5)
plt.savefig('file.jpeg', edgecolor='black', dpi=400, facecolor='black', transparent=True)
With the above code (other configurations removed for brevity), I do get a resulting image file with the desired dimensions of 1200 x 600 (should we say resolution too?) and the desired file size.
The projected image is scaled in an unusual way; annotations, for example, are enlarged. While I can set the size of the labels on the axes, the figure doesn't look proportional with respect to the scale, since the bottom and right spines are large and so are the plotted lines.
The question, therefore, is: what configurations are going wrong?
Figure size (figsize) determines the size of the figure in inches. This gives the amount of space the axes (and other elements) have inside the figure. The default figure size is (6.4, 4.8) inches in matplotlib 2. A larger figure size will allow for longer texts, more axes or more ticklabels to be shown.
Dots per inch (dpi) determines how many pixels the figure comprises. The default dpi in matplotlib is 100. A figure of figsize=(w,h) will have
px, py = w*dpi, h*dpi # pixels
# e.g.
# 6.4 inches * 100 dpi = 640 pixels
So in order to obtain a figure with a pixel size of e.g. (1200, 600) you may choose several combinations of figure size and dpi, e.g.
figsize=(15,7.5), dpi= 80
figsize=(12,6) , dpi=100
figsize=( 8,4) , dpi=150
figsize=( 6,3) , dpi=200
etc.
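A quick sanity-check sketch (any of the combinations above can be substituted for the figsize/dpi used here):
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(6, 3), dpi=200)
w, h = fig.get_size_inches() * fig.dpi
print(w, h)   # 1200.0 600.0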
Now, what is the difference? This is determined by the size of the elements inside the figure. Most elements like lines, markers, texts have a size given in points.
Matplotlib figures use a points-per-inch (ppi) of 72. A line with a thickness of 1 point will be 1./72. inch wide. A text with a fontsize of 12 points will be 12./72. inch high.
Of course if you change the figure size in inches, points will not change, so a larger figure in inches still has the same size of the elements. Changing the figure size is thus like taking a piece of paper of a different size. Doing so, would of course not change the width of the line drawn with the same pen.
On the other hand, changing the dpi scales those elements. At 72 dpi, a line of 1 point size is one pixel wide. At 144 dpi, this line is 2 pixels wide. A larger dpi will therefore act like a magnifying glass: all elements are scaled by the magnifying power of the lens.
A comparison for constant figure size and varying dpi is shown in the image below on the left. On the right you see constant dpi and varying figure size. Figures in each row have the same pixel size.
Code to reproduce:
import matplotlib.pyplot as plt
%matplotlib inline
def plot(fs, dpi):
    fig, ax = plt.subplots(figsize=fs, dpi=dpi)
    ax.set_title("Figsize: {}, dpi: {}".format(fs, dpi))
    ax.plot([2, 4, 1, 5], label="Label")
    ax.legend()

figsize = (2, 2)
for i in range(1, 4):
    plot(figsize, i * 72)

dpi = 72
for i in [2, 4, 6]:
    plot((i, i), dpi)

set color limits in python colormap

I need to display data from a 2D matrix with a gray colormap, but I need to define the gray scale so that white and black are not the colors for the min and max values of the matrix, in order not to saturate the image. What I need is a gray-scale colormap with gray levels between 20% and 70%, with at least a 20% difference between the levels of gray. Any suggestions?
I'm using the imshow function from matplotlib.
Thanks a lot!
Did you solve your problem?
I guess this is what you want, try this:
all the code is in pylab mode (with matplotlib imported as mat):
a = np.arange(100).reshape(10, 10)
# here is the image with white and black ends
imshow(a, cmap=mat.cm.binary)
colorbar()
# we extract only the 0.2-->0.7 part of the original colormap and make a new one
# so that the white and black ends are removed
rgba_array = mat.cm.binary(np.linspace(0, 1, num=10, endpoint=True))
extract_rgba_array_255 = rgba_array[2:8, 0:3]
imshow(a, cmap=mat.colors.ListedColormap(extract_rgba_array_255))
colorbar()
You could do this either by creating a custom colormap with the colors you prefer, or by using the vmin and vmax keywords of imshow to force a larger range on the color scale than your data actually spans.
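A sketch of both suggestions (the 20%/70% fractions and the gray colormap come from the question; the example data and the exact numbers are illustrative):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm, colors

data = np.arange(100).reshape(10, 10)

# option 1: custom colormap covering only the 20%-70% portion of the grey scale
cmap_20_70 = colors.ListedColormap(cm.gray(np.linspace(0.2, 0.7, 256)))
plt.imshow(data, cmap=cmap_20_70)
plt.colorbar()
plt.show()

# option 2: stretch vmin/vmax beyond the data so the data never reaches the
# black and white ends of the full 'gray' colormap
lo, hi = data.min(), data.max()
span = (hi - lo) / 0.5       # data should occupy only 50% of the colour scale
vmin = lo - 0.2 * span       # data minimum lands at 20% of the scale
vmax = vmin + span           # data maximum lands at 70% of the scale
plt.imshow(data, cmap='gray', vmin=vmin, vmax=vmax)
plt.colorbar()
plt.show()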