matplotlib pyplot imshow tight spacing between images

I have some numpy image arrays, all of the same shape (say (64, 64, 3)). I want to plot them in a grid using pyplot.subplot(), but when I do, I get unwanted spacing between images, even when I use pyplot.subplots_adjust(hspace=0, wspace=0). Below is an example piece of code.
from matplotlib import pyplot
import numpy


def create_dummy_images():
    """
    Creates images, each of shape (64, 64, 3) and of dtype 8-bit unsigned integer.
    :return: 4 images in a list.
    """
    saturated_channel = numpy.ones((64, 64), dtype=numpy.uint8) * 255
    zero_channel = numpy.zeros((64, 64), dtype=numpy.uint8)
    red = numpy.array([saturated_channel, zero_channel, zero_channel]).transpose(1, 2, 0)
    green = numpy.array([zero_channel, saturated_channel, zero_channel]).transpose(1, 2, 0)
    blue = numpy.array([zero_channel, zero_channel, saturated_channel]).transpose(1, 2, 0)
    # dtype=numpy.uint8 so the random image matches the docstring and the other images
    random = numpy.random.randint(0, 256, (64, 64, 3), dtype=numpy.uint8)
    return [red, green, blue, random]
if __name__ == "__main__":
    images = create_dummy_images()
    for i, image in enumerate(images):
        pyplot.subplot(2, 2, i + 1)
        pyplot.axis("off")
        pyplot.imshow(image)
    pyplot.subplots_adjust(hspace=0, wspace=0)
    pyplot.show()
Below is the output (figure omitted).
As you can see, there is unwanted vertical space between those images. One way around this is to carefully hand-pick the right figure size, for example via matplotlib.rcParams['figure.figsize'] = (_, _) in a Jupyter Notebook. However, the number of images I want to plot varies from run to run, and hand-picking the right figure size each time is extremely inconvenient (especially because I can't work out exactly what the size means in Matplotlib). So, is there a way that Matplotlib can automatically work out what the figure size should be, given my requirement that all my (64 x 64) images sit flush next to each other? (Or, for that matter, at a specified distance from each other?)

NOTE: the correct answer is in the UPDATE below the original answer.
Create your subplots first, then plot into them. I did it on one line here for simplicity's sake:
images = create_dummy_images()
fig, axs = pyplot.subplots(nrows=1, ncols=4,
                           gridspec_kw={'wspace': 0, 'hspace': 0},
                           squeeze=True)
for i, image in enumerate(images):
    axs[i].axis("off")
    axs[i].imshow(image)
UPDATE:
Never mind, the problem was not with your subplot definition, but with imshow(), which distorts your axes after you've set them up correctly.
The solution is to pass aspect='auto' to imshow() so that the picture fills the axes without changing them. If you want square axes, you need to create a figure with the appropriate width/height ratio:
pyplot.figure(figsize=(5, 5))
images = create_dummy_images()
for i, image in enumerate(images):
    pyplot.subplot(2, 2, i + 1)
    pyplot.axis("off")
    pyplot.imshow(image, aspect='auto')
pyplot.subplots_adjust(hspace=0, wspace=0)
pyplot.show()
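To address the original question about computing the figure size automatically: below is a minimal sketch (my own generalization of the update above; plot_image_grid and scale are illustrative names, not part of the original answer) that derives figsize from the grid shape and the image dimensions, so the tiles stay flush and the pixels stay square for any number of images.
from matplotlib import pyplot

def plot_image_grid(images, nrows, ncols, scale=2.0):
    # scale = inches per tile width; an arbitrary choice
    h, w = images[0].shape[:2]
    # figure aspect matches the grid aspect, so aspect='auto' does not distort pixels
    fig = pyplot.figure(figsize=(ncols * scale, nrows * scale * h / w))
    for i, image in enumerate(images):
        ax = fig.add_subplot(nrows, ncols, i + 1)
        ax.axis("off")
        ax.imshow(image, aspect='auto')
    # make the axes fill the whole canvas, with no gaps between them
    fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0, wspace=0)
    return fig
For the four dummy (64 x 64) images above, plot_image_grid(create_dummy_images(), 2, 2) followed by pyplot.show() gives a gap-free 4 x 4 inch figure.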

Related

Matplotlib: Multiple plots with same layout (no automatic layout)

I am trying to make several pie charts that I can transition between in a presentation. For this, it would be very useful for the automatic layout to... get out of the way. The problem is that whenever I change a label, the whole plot moves around on the canvas so that it fits perfectly. I'd like the plot to stay centered, so it occupies the same area every time. I have tried adding center=(0, 0) to ax.pie(), but to no avail.
Two examples (figures omitted): in one, the image renders smaller and shifted left; in the other, larger and shifted right.
Instead of that effect, I'd like the pie chart to be in the middle of the canvas and have the same size in both cases (and I'd then manually make sure that the labels are on canvas by setting large margins).
The code I use to generate these two images is:
import matplotlib.pyplot as plt
import numpy as np

# Draw labels, from
# https://matplotlib.org/3.2.2/gallery/pie_and_polar_charts/pie_and_donut_labels.html#sphx-glr-gallery-pie-and-polar-charts-pie-and-donut-labels-py
def make_labels(ax, wedges, labs):
    bbox_props = dict(boxstyle="square,pad=0.3", fc="w", ec="k", lw=0.72)
    kw = dict(arrowprops=dict(arrowstyle="-"),
              bbox=bbox_props,
              zorder=0, va="center")
    for i, p in enumerate(wedges):
        if p.theta2 - p.theta1 < 5:
            continue
        ang = (p.theta2 - p.theta1) / 2. + p.theta1
        y = np.sin(np.deg2rad(ang))
        x = np.cos(np.deg2rad(ang))
        horizontalalignment = {-1: "right", 1: "left"}[int(np.sign(x))]
        connectionstyle = "angle,angleA=0,angleB={}".format(ang)
        kw["arrowprops"].update({"connectionstyle": connectionstyle})
        ax.annotate(labs[i], xy=(x, y),
                    xytext=(1.1 * x, 1.1 * y),
                    horizontalalignment=horizontalalignment, **kw)

kw = dict(autoscale_on=False, in_layout=False, xmargin=1, ymargin=1)
fig, ax = plt.subplots(figsize=(3, 3), dpi=100, subplot_kw=kw)
wedges, texts = ax.pie(x=[1, 2, 3], radius=1,
                       wedgeprops=dict(width=1),
                       pctdistance=0.7,
                       startangle=90,
                       textprops=dict(fontsize=8),
                       center=(0, 0))
make_labels(ax, wedges, ["long text", "b", "c"])
# make_labels(ax, wedges, ["a", "b", "long text"])
plt.show()
Thanks a lot in advance!
How are you saving your figures? It looks like you may be using savefig(..., bbox_inches='tight'), which automatically resizes the saved figure to include all the artists.
If I run your code and save with fig.savefig(..., bbox_inches=None), I get the expected fixed layout (output figure omitted).
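For illustration, a minimal sketch of the difference (the file names are placeholders):
# bbox_inches='tight' shrink-wraps the saved image around the artists,
# so the pie's position depends on the labels; bbox_inches=None keeps
# the full, fixed 3x3-inch canvas so the pie stays put.
fig.savefig("pie_fixed.png", bbox_inches=None)
fig.savefig("pie_cropped.png", bbox_inches='tight')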

Matplotlib GridSpec spacing between rows

I have images with three different dimensions (WxH): 4 images of (174x145), 4 of (145x145) and 4 of (145x174). I can remove the space between columns, but I cannot remove the space between rows. Any suggestions?
This is my code:
fig = plt.figure(figsize=(10, 10))
gs = fig.add_gridspec(3, 4, hspace=0, wspace=0)
for r in range(3):
    for c in range(4):
        ax = fig.add_subplot(gs[r, c])
        ax.imshow(slices[r][c].T, origin="lower", cmap="gray")
        ax.axis("off")
As suggested in the comments, you need to set the height_ratios for your GridSpec, but that's not enough. You also need to adjust the size of your figure so that its width/height ratio matches the total width/height ratio of your images. But herein lies another problem: the axes will be rescaled when plotting the images (because of aspect='equal'), and the images do not all have the same width/height ratio.
The solution I'm proposing is first to calculate what the dimensions of the images would be once scaled to a common width, then use that information to set the figure size and the height_ratios of the GridSpec.
import matplotlib.gridspec
import matplotlib.pyplot as plt
import numpy as np

# this is just for visualization purposes
cmaps = iter(['flag', 'prism', 'ocean', 'gist_earth', 'terrain', 'gist_stern',
              'gnuplot', 'gnuplot2', 'CMRmap', 'cubehelix', 'brg',
              'gist_rainbow', 'rainbow', 'jet', 'nipy_spectral', 'gist_ncar'])
sizes = [(174, 145), (145, 145), (145, 174)]

# create random images (kept as a list of lists: the shapes are ragged,
# so they cannot be stacked into a single numpy array)
p = []
for s in sizes:
    p.append([np.random.random(size=s) for _ in range(4)])

# scale every row of images to a common width, keeping the aspect ratio
max_w = max([w for w, h in sizes])
new_sizes = np.array([(max_w, h * max_w / w) for w, h in sizes])
print(new_sizes)

# total canvas size: 4 columns of width max_w, 3 rows of the scaled heights
total_w = 4 * max_w
total_h = new_sizes[:, 1].sum()
eps = 10 / total_w  # normalize so the figure is 10 inches wide
fig = plt.figure(figsize=(eps * total_w, eps * total_h))
gs0 = matplotlib.gridspec.GridSpec(3, 4, height_ratios=[h for w, h in new_sizes],
                                   hspace=0, wspace=0)
for i in range(3):
    for j in range(4):
        ax = fig.add_subplot(gs0[i, j])
        ax.imshow(p[i][j].T, origin="lower", cmap=next(cmaps))
        ax.set_axis_off()
Unfortunately, this solution gets you almost to the desired output, but not quite, probably due to some rounding effect. But it's close enough that I think you could use aspect='auto' if you can live with pixels that are ever so slightly non-square.
(...)
        ax.imshow(p[i][j].T, aspect='auto', origin="lower", cmap=next(cmaps))
(...)

About using tf.image.crop_and_resize

I'm working on the ROI pooling layer used by Fast R-CNN, and I usually use TensorFlow. I found that tf.image.crop_and_resize can act as the ROI pooling layer.
But I have tried many times and cannot get the result I expected. Or is the result I got actually correct?
Here is my code:
import cv2
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

img_path = r'F:\IMG_0016.JPG'
img = cv2.imread(img_path)
img = img.reshape([1, 580, 580, 3])
img = img.astype(np.float32)
#img = np.concatenate([img,img],axis=0)
img_ = tf.Variable(img)  # img shape is [1, 580, 580, 3]
boxes = tf.Variable([[100, 100, 300, 300], [0.5, 0.1, 0.9, 0.5]])
box_ind = tf.Variable([0, 0])
crop_size = tf.Variable([100, 100])
#b = tf.image.crop_and_resize(img,[[0.5,0.1,0.9,0.5]],[0],[50,50])
c = tf.image.crop_and_resize(img_, boxes, box_ind, crop_size)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
a = c.eval(session=sess)
plt.imshow(a[0])
plt.imshow(a[1])
I also attach my original image and the results: a0, a1.
If I'm doing something wrong, can anyone show me how to use this function? Thanks.
Actually, there's no problem with Tensorflow here.
From the doc of tf.image.crop_and_resize (emphasis mine):
boxes: A Tensor of type float32. A 2-D tensor of shape [num_boxes, 4].
The i-th row of the tensor specifies the coordinates of a box in the
box_ind[i] image and is specified in normalized coordinates [y1, x1,
y2, x2]. A normalized coordinate value of y is mapped to the image
coordinate at y * (image_height - 1), so as the [0, 1] interval of
normalized image height is mapped to [0, image_height - 1] in image
height coordinates. We do allow y1 > y2, in which case the sampled
crop is an up-down flipped version of the original image. The width
dimension is treated similarly. Normalized coordinates outside the [0,
1] range are allowed, in which case we use extrapolation_value to
extrapolate the input image values.
The boxes argument needs normalized coordinates. That's why you get a black box with your first set of coordinates [100,100,300,300] (not normalized, and no extrapolation value provided), and not with your second set [0.5,0.1,0.9,0.5].
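As an illustration (a sketch, not part of the original answer), normalizing the first box for the 580x580 image from the question would look like this:
# convert pixel-space [y1, x1, y2, x2] to the normalized coordinates
# crop_and_resize expects; per the doc, y maps to y * (image_height - 1)
h, w = 580, 580
y1, x1, y2, x2 = 100, 100, 300, 300
box_norm = [y1 / (h - 1), x1 / (w - 1), y2 / (h - 1), x2 / (w - 1)]
# box_norm ~ [0.1727, 0.1727, 0.5181, 0.5181]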
As for why matplotlib shows you gibberish on your second attempt: you're just using the wrong datatype.
Quoting the matplotlib documentation of plt.imshow (emphasis is mine):
All values should be in the range [0 .. 1] for floats or [0 .. 255]
for integers. Out-of-range values will be clipped to these bounds.
As you're using floats outside the [0, 1] range, matplotlib clips your values to 1. That's why you get those colored pixels (solid red, solid green, solid blue, or a mix of these). Cast your array to uint8 to get an image that makes sense.
plt.imshow(a[1].astype(np.uint8))
Edit:
As requested, I will dive a bit more into tf.image.crop_and_resize.
[When providing non-normalized coordinates and no extrapolation value], why do I just get a blank result?
Quoting the doc :
Normalized coordinates outside the [0, 1] range are allowed, in which
case we use extrapolation_value to extrapolate the input image values.
So, normalized coordinates outside [0,1] are allowed. But they still need to be normalized!
With your example, [100,100,300,300], the coordinates you provide make the red square (figure omitted); your original image is the little green dot in its upper-left corner! The default value of the argument extrapolation_value is 0, so the values outside the frame of the original image are inferred as [0,0,0], hence the black.
But if your use case needs another value, you can provide it. The pixels will take an RGB value of extrapolation_value % 256 on each channel. This option is useful if the zone you need to crop is not fully included in your original image (a possible use case would be sliding windows, for example).
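For example (a sketch reusing the variables from the question's code):
# out-of-frame pixels are filled with extrapolation_value instead of 0 (black)
c = tf.image.crop_and_resize(img_, boxes, box_ind, crop_size,
                             extrapolation_value=128.0)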
It seems that tf.image.crop_and_resize expects pixel values in the range [0,1].
Changing your code to
test = tf.image.crop_and_resize(image=image_np_expanded/255., ...)
solved the problem for me.
Yet another variant is to use the tf.image.central_crop function.
Below is a concrete example using the tf.image.crop_and_resize API (TF version 1.14):
import tensorflow as tf
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np

tf.enable_eager_execution()

def single_data_2(img_path):
    img = tf.read_file(img_path)
    img = tf.image.decode_bmp(img, channels=1)
    img_4d = tf.expand_dims(img, axis=0)
    processed_img = tf.image.crop_and_resize(img_4d,
                                             boxes=[[0.4529, 0.72, 0.4664, 0.7358]],
                                             crop_size=[64, 64],
                                             box_ind=[0])
    processed_img_2 = tf.squeeze(processed_img, 0)
    raw_img_3 = tf.squeeze(img_4d, 0)
    return raw_img_3, processed_img_2

def plot_two_image(raw, processed):
    fig = plt.figure(figsize=(35, 35))
    raw_ = fig.add_subplot(1, 2, 1)
    raw_.set_title('Raw Image')
    raw_.imshow(raw, cmap='gray')
    processed_ = fig.add_subplot(1, 2, 2)
    processed_.set_title('Processed Image')
    processed_.imshow(processed, cmap='gray')
    plt.show()

img_path = 'D:/samples/your_bmp_image.bmp'
raw_img, process_img = single_data_2(img_path)
print(raw_img.dtype, process_img.dtype)
print(raw_img.shape, process_img.shape)
raw_img = tf.squeeze(raw_img, -1)
process_img = tf.squeeze(process_img, -1)
print(raw_img.dtype, process_img.dtype)
print(raw_img.shape, process_img.shape)
plot_two_image(raw_img, process_img)
Below is my working code; the output image is not black, so this may help someone:
for idx in range(len(bboxes)):
    if bscores[idx] >= Threshold:
        # Region of Interest
        y_min = int(bboxes[idx][0] * im_height)
        x_min = int(bboxes[idx][1] * im_width)
        y_max = int(bboxes[idx][2] * im_height)
        x_max = int(bboxes[idx][3] * im_width)

        class_label = category_index[int(bclasses[idx])]['name']
        class_labels.append(class_label)
        bbox.append([x_min, y_min, x_max, y_max, class_label, float(bscores[idx])])

        # Crop Image - Working Code
        cropped_image = tf.image.crop_to_bounding_box(
            image, y_min, x_min, y_max - y_min, x_max - x_min).numpy().astype(np.int32)

        # encode_jpeg encodes a tensor of type uint8 to string
        output_image = tf.image.encode_jpeg(cropped_image)
        # decode_jpeg decodes the string tensor to a tensor of type uint8
        #output_image = tf.image.decode_jpeg(output_image)

        score = bscores[idx] * 100
        file_name = tf.constant(OUTPUT_PATH + image_name[:-4] + '_' + str(idx) + '_'
                                + class_label + '_' + str(round(score)) + '%' + '_'
                                + os.path.splitext(image_name)[1])
        writefile = tf.io.write_file(file_name, output_image)

Color map an image with TensorFlow?

I'm saving grayscale images in TFRecord files. The idea is then to color map them on the GPU (only using TF, of course) so they get three channels (they are going to be used with a pre-trained VGG-16 model, so they must have three channels).
Does anyone have an idea how to do this properly?
I tried to do it with my homemade TF color-mapping script, using for-loops, tf.scatter_nd and a mapping array with shape = (256, 3)... but it took forever.
EDIT:
img_rgb = GRAY SCALE IMAGE WITH 3 CHANNELS
cmp = [[255,255,255],
       [255,255,253],
       [255,254,250],
       [255,254,248],
       [255,254,245],
       ...
       [4,0,0],
       [0,0,0]]
cmp = tf.convert_to_tensor(cmp, tf.int32)  # (256, 3)
hot = tf.zeros([224, 224, 3], tf.int32)
for i in range(img_rgb.shape[2]):
    for j in range(img_rgb.shape[1]):
        for k in range(img_rgb.shape[0]):
            indices = tf.constant([[k, j, i]])
            updates = tf.Variable([cmp[img_rgb[k, j, i], i]])
            shape = tf.constant([256, 3])
            hot = tf.scatter_nd(indices, updates, shape)
This was my attempt. I know it's not optimal in any way, but it was the only solution I could come up with.
This builds on work by jimfleming: https://gist.github.com/jimfleming/c1adfdb0f526465c99409cc143dea97b
import matplotlib
import matplotlib.cm
import numpy as np
import tensorflow as tf

def colorize(value, vmin=None, vmax=None, cmap=None):
    """
    A utility function for TensorFlow that maps a grayscale image to a matplotlib
    colormap for use with TensorBoard image summaries.
    Arguments:
      - value: 2D Tensor of shape [height, width] or 3D Tensor of shape
        [height, width, 1].
      - vmin: the minimum value of the range used for normalization.
        (Default: value minimum)
      - vmax: the maximum value of the range used for normalization.
        (Default: value maximum)
      - cmap: a valid cmap name for use with matplotlib's `get_cmap`.
        (Default: 'gray')
    Example usage:
        output = tf.random_uniform(shape=[256, 256, 1])
        output_color = colorize(output, vmin=0.0, vmax=1.0, cmap='plasma')
        tf.summary.image('output', output_color)
    Returns a 3D tensor of shape [height, width, 3].
    """
    # normalize
    vmin = tf.reduce_min(value) if vmin is None else vmin
    vmax = tf.reduce_max(value) if vmax is None else vmax
    value = (value - vmin) / (vmax - vmin)  # vmin..vmax -> 0..1
    # squeeze last dim if it exists
    value = tf.squeeze(value)
    # quantize
    indices = tf.to_int32(tf.round(value * 255))
    # gather: sample the colormap at 256 points (works for any colormap,
    # not only ListedColormaps, which are the ones exposing a `.colors` attribute)
    cm = matplotlib.cm.get_cmap(cmap if cmap is not None else 'gray')
    colors = tf.constant(cm(np.arange(256))[:, :3], dtype=tf.float32)
    value = tf.gather(colors, indices)
    return value
You could also try tf.image.grayscale_to_rgb, although there seems to be only one choice of color map, gray.
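For reference, a minimal sketch of that approach (it replicates the single channel three times rather than applying a real colormap):
# grayscale_to_rgb expects a trailing channel dimension of 1
# and simply copies it into three identical channels
gray = tf.random_uniform(shape=[224, 224, 1])
rgb = tf.image.grayscale_to_rgb(gray)  # shape [224, 224, 3]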
We're here to help. If everyone wrote optimal code, there would be no need for Stackoverflow. :)
Here's how I would do it in place of the last 7 lines (untested code):
conv_img = tf.gather(params=cmp, indices=img_rgb[:, :, 0])
Basically, there's no need for the for-loops; TensorFlow will do that for you, and much quicker. tf.gather() collects elements from cmp according to the indices provided, which here is the 0th channel of img_rgb. Each collected element has the three channels from cmp, so when you put them all together, they form an image.
I don't have time to test right now, gotta run, sorry. Hope it works.
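A self-contained sketch of the same idea (the variable names and the 'viridis' colormap are illustrative; the lookup table is built with matplotlib, as in the gist above):
import numpy as np
import tensorflow as tf
import matplotlib.cm

# build a (256, 3) lookup table by sampling a matplotlib colormap
lut = matplotlib.cm.get_cmap('viridis')(np.arange(256))[:, :3]
cmp = tf.constant(lut, dtype=tf.float32)

# a fake 8-bit grayscale image; one tf.gather call maps every pixel
# through the lookup table, no Python loops needed
gray = tf.constant(np.random.randint(0, 256, (224, 224)), dtype=tf.int32)
colored = tf.gather(cmp, gray)  # shape (224, 224, 3)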

Pyplot imshow colormap not working

I have the following code:
plt.figure(figsize=(15, 20))
min_v = np.min(net_l0)
max_v = np.max(net_l0)
for i in range(8):
    for j in range(4):
        num = i * 4 + j
        plt.subplot(8, 4, num + 1)
        w_filt = net_l0[num, :3]
        w_filt = w_filt.swapaxes(0, 1).swapaxes(1, 2)
        imgplot = plt.imshow(w_filt, vmin=min_v, vmax=max_v, interpolation='none')
        imgplot.set_cmap('gray')
        plt.colorbar()
plt.show()
For some reason, however, the colormap is applied only to the colorbar, not to the image. I tried adding the cmap keyword to imshow, but that still did not work. Any ideas what I'm doing wrong?
Make sure the array you are displaying is actually 2-dimensional. If you (for example) load a grayscale image that actually has three channels, then imshow will happily show you the image, but it won't apply the colormap to it. The picture is "already color", after all.
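A minimal sketch of the difference (illustrative data):
import numpy as np
import matplotlib.pyplot as plt

img = np.random.rand(8, 8, 3)          # 3-channel data: treated as RGB
plt.imshow(img, cmap='gray')           # cmap is silently ignored
plt.figure()
plt.imshow(img[:, :, 0], cmap='gray')  # 2-D scalar data: cmap is applied
plt.colorbar()
plt.show()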