How to fully customize subplot size in matplotlib

I want to have two subplots in a matplotlib figure that are sized and positioned relative to each other like the example below (for stylistic reasons). All the examples I've seen for customizing subplot placement and sizes still tile and fill the entire figure footprint. What can I do to get the rightmost plot positioned with some whitespace like below?

You need to imagine some (virtual) grid on which the subplots are placed.
The grid has 3 rows and 2 columns. The first subplot covers all three rows and the first column. The second subplot covers only the second row of the second column. The ratios between the row and column sizes are not necessarily equal.
import matplotlib.pyplot as plt
import matplotlib.gridspec

# 3 rows x 2 columns; the ratios make the second column and the middle row larger
gs = matplotlib.gridspec.GridSpec(3, 2, width_ratios=[1, 1.4],
                                  height_ratios=[1, 3, 1])

fig = plt.figure()
ax1 = fig.add_subplot(gs[:, 0])   # all three rows of the first column
ax2 = fig.add_subplot(gs[1, 1])   # only the middle row of the second column
plt.show()
In addition, you may still set different values for the hspace and wspace parameters to control the spacing between the grid cells.
A good overview is given in the GridSpec tutorial.
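For instance, a minimal sketch of the same layout with extra padding (the wspace/hspace values below are just illustrative assumptions, chosen to make the effect visible):
import matplotlib.pyplot as plt
from matplotlib import gridspec

# same 3x2 layout as above, with extra padding between the grid cells
gs = gridspec.GridSpec(3, 2, width_ratios=[1, 1.4], height_ratios=[1, 3, 1],
                       wspace=0.5, hspace=0.6)
fig = plt.figure()
ax1 = fig.add_subplot(gs[:, 0])
ax2 = fig.add_subplot(gs[1, 1])
plt.show()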
Because it was mentioned in the comments: if absolute positioning in units of inches is desired, I would recommend directly adding an axes of the desired size,
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
w, h = fig.get_size_inches()
div = np.array([w, h, w, h])
# define each axes by a rectangle [left, bottom, width, height], numbers in inches;
# dividing by the figure size converts inches to the figure fractions add_axes expects
ax1 = fig.add_axes(np.array([.7, .7, 1.8, 3.4]) / div)
ax2 = fig.add_axes(np.array([3, 1.4, 3, 2]) / div)
plt.show()

--EDIT: This answer ended up startlingly similar to the answer given by @ImportanceOfBeingErnest, but tacks on an approach for layout control in units of inches rather than fractional units.--
It helps if you grid the figure out with gridspec and then populate the grid with subplots spanning the desired rows and columns. For a lot of the figures I make, I need them to fit on the page well, so I use this pattern pretty frequently to get grid control down to a tenth of an inch.
import matplotlib.pyplot as plt
from matplotlib import gridspec

fig = plt.figure(figsize=(7, 5))  # 7 inches wide, 5 inches tall
row = int(fig.get_figheight() * 10)
col = int(fig.get_figwidth() * 10)
gsfig = gridspec.GridSpec(
    row, col,
    left=0, right=1, bottom=0,
    top=1, wspace=0, hspace=0)

gs1 = gsfig[:, 0:30]
# these spans are in tenths of an inch, so left-right
# this spans from column 0 to column 30 (or 3 inches)
ax1 = fig.add_subplot(gs1)

gs2 = gsfig[20:40, 35:70]  # again these spans are in tenths of an inch
ax2 = fig.add_subplot(gs2)

Related

How can I space only one axis away from other axes using matplotlib and gridspec?

I'm using matplotlib and gridspec to plot 4 axes, three of which I want tight to one another, with the last one spaced a little bit away from the other three. The reason is that I want the top three to share the same x-axis units, and the bottom one to have different x-axis units. I've tried using gs.update right after the third axis, but this spaced out all the axes from one another, instead of just the bottom fourth axis from the top three.
Is there a simple gridspec/matplotlib command that I'm missing, or do I have to hack around this somehow?
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(13, 12))
gs1 = fig.add_gridspec(nrows=4, ncols=2, hspace=0.0)
ax2 = fig.add_subplot(gs1[1, :])
ax1 = fig.add_subplot(gs1[0, :], sharey=ax2)
ax3 = fig.add_subplot(gs1[2, :], sharey=ax2)
ax4 = fig.add_subplot(gs1[3, :])  # <- want this spaced farther down than the above three axes
plt.show()
This is one solution -- here I first added a gridspec for the first three plots. This I specified should finish at 0.35 from the bottom (0 being the very bottom, 1 the top). Then I added another gridspec that starts at 0.3 and so has a gap of 0.05. Obviously you can play around with the numbers/placement.
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(13, 12))
gs1 = fig.add_gridspec(nrows=3, ncols=1, hspace=0, bottom=0.35)
ax1 = fig.add_subplot(gs1[0, 0])
ax2 = fig.add_subplot(gs1[1, 0], sharey=ax1)
ax3 = fig.add_subplot(gs1[2, 0], sharey=ax1)

gs2 = fig.add_gridspec(nrows=1, ncols=1, top=0.3)
ax4 = fig.add_subplot(gs2[0, 0])

for ax in fig.axes:
    ax.set_yticks([])
    ax.set_xticks([])
plt.show()
Result: the top three axes sit flush against each other, while the fourth axes is separated from them by a small gap.

How to draw a grid in a bar-plot created with plt.vlines()

I want to create a bar-plot in python. I want this plot to be beautiful though and I don't like the looks of python's axes.bar() function. Therefore, I have decided to use plt.vlines(). The challenge here is that my x-data is a list that contains strings and not numerical data. When I plot my graph, the spacing between the two columns (in my example column 2 = 0) is pretty big:
Furthermore, I want a grid. However, I would like to have minor grid lines as well. I know how to get all of this if my data was numerical. But since my x-data contains strings, I don't know how to set x_max. Any suggestions?
Internally, the positions of the labels are numbered 0, 1, .... So setting the x-limits a bit before 0 and a bit after the last position shows the bars more centered.
Usually, bars are drawn with their 'feet' on the ground, which can be set via plt.ylim(0, ...). Minor ticks can be positioned, for example, at multiples of 0.2. Setting the length of the ticks to zero lets their positions still count for the grid, but suppresses the tick marks.
from matplotlib import pyplot as plt
from matplotlib.ticker import MultipleLocator
import numpy as np
labels = ['Test 1', 'Test 2']
values = [1, 0.7]
fig, ax = plt.subplots()
plt.vlines(labels, 0, values, colors='dodgerblue', alpha=.4, lw=7)
plt.xlim(-0.5, len(labels) - 0.5) # add some padding left and right of the bars
plt.ylim(0, 1.1) # bars usually have their 0 at the bottom
ax.xaxis.set_minor_locator(MultipleLocator(.2))
plt.tick_params(axis='x', which='both', length=0) # ticks not shown, but position serves for gridlines
plt.grid(axis='both', which='both', ls=':') # optionally set the linestyle of the grid
plt.show()

Colorbar frame and color not aligned

I have a vexing issue with a colorbar, and even after vigorous research I cannot find the question even being asked. I have a plot where I overlay a contour and a pcolormesh, and I would like a colorbar to indicate values. That works fine except for one thing: the colorbar frame and the color are offset.
The colorbar frame and the actual bar are offset such that at the bottom there is a white bit inside the frame and at the top the color pokes out. While the frame is aligned with the axes as desired, the colorbar itself is offset.
Here is a working example that emulates the situation I was in, i.e. multiple plots with insets.
import matplotlib.gridspec as gridspec
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
figheight = 4.2 - (2.1 - 49.519 / 25.4)
matplotlib.rcParams['figure.figsize'] = (5.25, figheight)
matplotlib.rcParams['axes.linewidth'] = 0.5
fig = plt.figure()
grid = gridspec.GridSpec(2, 1, height_ratios=[49.519 / 25.4 / figheight, 2.1 / figheight])
ax0 = plt.subplot(grid[0, 0])
ax1 = plt.subplot(grid[1, 0])
plt.tight_layout()
###############################################################################################
#
# Define position of inset
#
###############################################################################################
ax1.axis('off')
pos1 = ax1.get_position()
pos2 = matplotlib.transforms.Bbox([[pos1.x0, pos1.y0],
                                   [.8 * pos1.x1,
                                    0.8 * pos1.height + pos1.y0]])
left, bottom, width, height = [pos2.x0, pos2.y0, pos2.width, pos2.height]
ax2 = fig.add_axes([left, bottom, width, height])
###############################################################################################
#
# ax2 (inset) plot
#
###############################################################################################
pos2 = ax2.get_position()
ax2.axis('on')
x = np.linspace(0,5)
z = (np.outer(np.sin(x), np.cos(x))+1)*0.5
im = ax2.pcolormesh(z)
c = ax2.contour(z, linewidths=7)
ax2pos = ax2.get_position()
cbar_axis = fig.add_axes([ax2pos.x1 + 0.05, ax2pos.y0, .02, ax2pos.height])
colorbar = fig.colorbar(im, ax=ax2,
                        cax=cbar_axis, ticks=[0.1, .5, .9])
colorbar.outline.set_visible(True)
plot = 'Minimal.pdf'
fig.savefig(plot)
plt.close()
The problem persists in both the inline display and the saved .pdf when the 'Inline' graphics backend is chosen. Using tight layout or not changes how bad the offset is, depending on the size of the bar; the same goes for using PyQt5 rather than the inline graphics backend. I thought it was gone when I was switching between the various combinations, but I just realized it's still there.
I would appreciate any input.
As suggested by ImportanceOfBeingErnest, I have tried using np.round on the figsize, and that didn't change things. While you can fiddle around with the sizes to make it look okay, the bar always sticks out on one side or the other by some amount. When I change the graphics backend in Spyder 3 from 'Inline' to 'QT5', the problem becomes less severe, with or without rounding. A summary of this is in the picture "Colorbar overlap cases". Note that with no rounding and PyQt5 the problem still occurs, but it is not as severe.
On inspection, it is clear that the colorbar is not only bleeding out over the top of its axes, but it's also positioned slightly to the left.
So, the problem here appears to be a conflict between the position of the colorbar axis and the colorbar itself when rasterization occurs. You can find more details on this issue in matplotlib's github repository, but I'll summarize what's going on here.
Colorbars are rasterized when the output is produced, so as to avoid artifacting issues during rendering. The position of the colorbar is snapped to the nearest integer pixel during the rasterization process, while the axes is kept where it is supposed to be. As a result, the colorbar ends up on whole-pixel boundaries of the image, even though the image itself is vectorized. Thus, there are two strategies that can be employed to avoid this mishap.
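To make the snapping concrete, here is a rough back-of-the-envelope check using the figure height from the example and the 72 dpi default mentioned below; the exact dpi used during rasterization depends on the backend and settings, so treat this purely as an illustration:
# illustration only: the example's figure height expressed in (assumed) 72-dpi pixels
figheight = 4.2 - (2.1 - 49.519 / 25.4)  # ~4.0496 inches, as in the question
print(figheight * 72)  # ~291.57 px: a fractional pixel count, so the rasterized
                       # colorbar snaps to whole pixels while the vector frame
                       # keeps its exact (fractional) position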
Use a finer DPI
The conversion from vectorized coordinates to rasterized coordinates takes place assuming a given DPI for the image. By default, this is set to 72. At a higher DPI, however, the overall shift induced by the rasterization process will be smaller, as the closest pixel the colorbar can snap to will be much nearer. Here, we change the output to fig.savefig(plot, dpi=4000), and the problem goes away:
Note, however, that on my machine, the output size changed from 62 KB to 78 KB due to this change (although the DPI adjustment was also, admittedly, extreme). If you are worried about file sizes, you should pick a lower DPI that fixes the problem.
Use a different colormap
This rasterization only happens when there are more than 50 colors in the colorbar. Thus, we can do a quick test by setting our colormap to Pastel1 via
im = ax2.pcolormesh(z, cmap='Pastel1')
Here, the colorbar/axis mismatch is mitigated.
As a fallback, adopting a colormap with fewer than 50 colors should mitigate this problem.
Rasterize the Axis
For completeness, there is also a third option. If you rasterize the colorbar axes, both the axes boundaries and the colormap will be rasterized, and you'll lose the offset. This will also rasterize your labels, and the whole axes will shift as one, breaking alignment with the nearby axes. For this, you just need to include cbar_axis.set_rasterized(True).
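Putting the three fixes in one place, here is a minimal sketch applied to the question's fig, ax2, z and cbar_axis. These are alternatives, not steps to combine, and the dpi value and the 40-color 'viridis' colormap are arbitrary choices for illustration:
# 1) render rasterized artists at a finer resolution when saving
fig.savefig('Minimal.pdf', dpi=600)

# 2) stay below the ~50-color rasterization threshold described above,
#    e.g. by creating the mesh with a resampled colormap
im = ax2.pcolormesh(z, cmap=plt.get_cmap('viridis', 40))

# 3) rasterize the whole colorbar axes so the frame and the colors shift together
cbar_axis.set_rasterized(True)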
First, a way to overlay a contour and a pcolormesh and create a colorbar would be the following
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
x = np.linspace(0,5)
z = (np.outer(np.sin(x), np.cos(x))+1)*0.5
fig = plt.figure(figsize=(4, 4))
ax = fig.add_subplot(111)
im = ax.pcolormesh(z)
c = ax.contour(z, linewidths=7)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", "5%", pad="3%")
colorbar = fig.colorbar(im, cax=cax, ticks = [0.1, .5, .9])
plt.show()
Now to the problem from the question. It is of course possible to create the axes to put the colorbar in manually. Replacing the colorbar creation with the code from the question still produces a nice image.
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0,5)
z = (np.outer(np.sin(x), np.cos(x))+1)*0.5
fig = plt.figure(figsize=(4, 4))
ax = fig.add_subplot(111)
plt.subplots_adjust(right=0.8)
im = ax.pcolormesh(z)
c = ax.contour(z, linewidths=7)
ax2pos = ax.get_position()
cbar_axis = fig.add_axes([ax2pos.x1 + 0.05, ax2pos.y0, .05, ax2pos.height])
colorbar = fig.colorbar(im, ax=ax,
                        cax=cbar_axis, ticks=[0.1, .5, .9])
colorbar.outline.set_visible(True)
plt.show()
Conclusion so far: The issue is not reproducible, at least not without a Minimal, Complete, and Verifiable example.
I'm uncertain about the reasons for the behaviour in the example from the question. However, it seems that it can be overcome by rounding the figure size to 3 significant digits
matplotlib.rcParams['figure.figsize'] = (5.25, np.round(figheight,3))

heatmap for positive and negative values [duplicate]

I am trying to make a filled contour for a dataset. It should be fairly straightforward:
plt.contourf(x, y, z, label = 'blah', cm = matplotlib.cm.RdBu)
However, what do I do if my dataset is not symmetric about 0? Let's say I want to go from blue (negative values) to 0 (white), to red (positive values). If my dataset goes from -8 to 3, then the white part of the color bar, which should be at 0, is in fact slightly negative. Is there some way to shift the color bar?
First off, there's more than one way to do this:
1. Pass an instance of DivergingNorm as the norm kwarg.
2. Use the colors kwarg to contourf and manually specify the colors.
3. Use a discrete colormap constructed with matplotlib.colors.from_levels_and_colors.
The simplest way is the first option. It is also the only option that allows you to use a continuous colormap.
The reason to use the first or third options is that they will work for any type of matplotlib plot that uses a colormap (e.g. imshow, scatter, etc).
The third option constructs a discrete colormap and normalization object from specific colors. It's basically identical to the second option, but it will a) work with other types of plots than contour plots, and b) avoids having to manually specify the number of contours.
As an example of the first option (I'll use imshow here because it makes more sense than contourf for random data, but contourf would have identical usage other than the interpolation option):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import DivergingNorm  # renamed to TwoSlopeNorm in matplotlib 3.2 and later
data = np.random.random((10,10))
data = 10 * (data - 0.8)
fig, ax = plt.subplots()
im = ax.imshow(data, norm=DivergingNorm(0), cmap=plt.cm.seismic, interpolation='none')
fig.colorbar(im)
plt.show()
As an example of the third option (notice that this gives a discrete colormap instead of a continuous colormap):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import from_levels_and_colors
data = np.random.random((10,10))
data = 10 * (data - 0.8)
num_levels = 20
vmin, vmax = data.min(), data.max()
midpoint = 0
levels = np.linspace(vmin, vmax, num_levels)
midp = np.mean(np.c_[levels[:-1], levels[1:]], axis=1)
vals = np.interp(midp, [vmin, midpoint, vmax], [0, 0.5, 1])
colors = plt.cm.seismic(vals)
cmap, norm = from_levels_and_colors(levels, colors)
fig, ax = plt.subplots()
im = ax.imshow(data, cmap=cmap, norm=norm, interpolation='none')
fig.colorbar(im)
plt.show()
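For completeness, here is a minimal sketch of the second option, passing explicit colors to contourf. The data, levels and hex colors below are made up for illustration, and the bookkeeping of matching one color per level interval is exactly what the first and third options avoid:
import numpy as np
import matplotlib.pyplot as plt

# made-up data spanning roughly -8..3, as in the question
x = y = np.linspace(0, 5, 50)
z = 11 * np.random.random((50, 50)) - 8

# one color per interval between consecutive levels:
# blues for the negative intervals, white around 0, reds for the positive ones
levels = [-8, -6, -4, -2, -0.5, 0.5, 1.5, 3]
colors = ['#08306b', '#2171b5', '#6baed6', '#c6dbef',
          '#ffffff', '#fb6a4a', '#cb181d']

fig, ax = plt.subplots()
cs = ax.contourf(x, y, z, levels=levels, colors=colors)
fig.colorbar(cs)
plt.show()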

Reducing the distance between two boxplots

I'm drawing the boxplot shown below using Python and matplotlib. Is there any way I can reduce the distance between the two boxplots on the x-axis?
This is the code that I'm using to get the figure above:
import os

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams

rcParams['ytick.direction'] = 'out'
rcParams['xtick.direction'] = 'out'

fig = plt.figure()
xlabels = ["CG", "EG"]
ax = fig.add_subplot(111)
ax.boxplot([values_cg, values_eg])  # values_cg / values_eg are defined elsewhere
ax.set_xticks(np.arange(len(xlabels)) + 1)
ax.set_xticklabels(xlabels, rotation=45, ha='right')
fig.subplots_adjust(bottom=0.3)
ylabels = yticks = np.linspace(0, 20, 5)
ax.set_yticks(yticks)
ax.set_yticklabels(ylabels)
ax.tick_params(axis='x', pad=10)
ax.tick_params(axis='y', pad=10)
plt.savefig(os.path.join(output_dir, "output.pdf"))
And this is an example closer to what I'd like to get visually (although I wouldn't mind if the boxplots were even a bit closer to each other):
You can either change the aspect ratio of the plot or use the widths kwarg (doc) as such:
ax.boxplot([values_cg, values_eg], widths=1)
to make the boxes wider.
Try changing the aspect ratio using
ax.set_aspect(1.5) # or some other float
The larger the number, the narrower (and taller) the plot should be:
a circle will be stretched such that the height is num times the width. aspect=1 is the same as aspect=’equal’.
http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.set_aspect
When your code calls
ax.set_xticks(np.arange(len(xlabels)) + 1)
you're placing the ticks at 1 and 2, which is where boxplot draws the two boxes by default (even though you change the tick labels afterwards); in the second, "wanted" example you gave they sit at 1, 2, 3.
So I think an alternative solution would be to play with the tick positions and the xlim of the plot.
For example, using
ax.set_xlim(-1.5,2.5)
would place them closer.
positions : array-like, optional
Sets the positions of the boxes. The ticks and limits are automatically set to match the positions. Defaults to range(1, N+1) where N is the number of boxes to be drawn.
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.boxplot.html
This should do the job!
As @Stevie mentioned, you can use the positions kwarg (doc) to manually set the x-coordinates of the boxes:
ax.boxplot([values_cg, values_eg], positions=[1, 1.3])