python numpy/scipy zoom changing center

I have a 2D numpy array, say something like:
import numpy as np
x = np.random.rand(100, 100)
Now, I want to zoom into this image (keeping the output size the same, i.e. (100, 100)), and I want to change the centre of the zoom.
So, say I want to zoom while keeping the point (70, 70) at the centre; normally one would do this by "translating" the image to that point and then zooming.
How can I achieve this with scipy? Is there a way to specify, say, 4 coordinates from this numpy array and basically fill the canvas with the interpolated image from that region of interest?

You could use ndimage.zoom to do the zooming part. I use ndimage a lot, and it works well and is fast. https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.zoom.html
The 4 coordinates you mention are, I presume, two corners of the region you want to zoom into. That's easy with numpy slicing (presuming your image is an np array):
your_image[r1:r2, c1:c2]
Assuming you want your output image at 100x100, the differences r2 - r1 and c2 - c1 must be equal, so your region is square.
ndimage.zoom takes a zoom factor (a float). You would need to compute what that zoom factor is in order to take your sliced image and turn it into a 100x100 array:
ndimage.zoom(your_image[r1:r2, c1:c2], zoom=your_zoom_factor)
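Putting the two steps together, a minimal sketch (the 50x50 window and its placement around (70, 70) are assumed example values; bounds checking near the borders is left out):
import numpy as np
from scipy import ndimage

x = np.random.rand(100, 100)

# centre of the zoom and half-width of the square region to keep
cr, cc = 70, 70
half = 25  # a 50x50 window -> 2x zoom

# slice the square region of interest (no bounds checking here)
region = x[cr - half:cr + half, cc - half:cc + half]

# zoom factor that maps the region back to the original 100x100 size
factor = x.shape[0] / region.shape[0]
zoomed = ndimage.zoom(region, zoom=factor)

print(zoomed.shape)  # (100, 100)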

Related

How do I save a color mapped array with the same dimensions as the original array?

I have data that I would like to save as png's. I need to keep the exact pixel dimensions - I don't want any inter-pixel interpolation, smoothing, or up/down sizing, etc. I do want to use a colormap, though (and maybe some other features of matplotlib's imshow). As I see it, there are a couple of ways I could do this:
1) Manually roll my own colormapping. (I'd rather not do this)
2) Figure out how to make sure the pixel dimensions of the image in the figure produced by imshow are exactly correct, and then extract just the image portion of the figure for saving.
3) Use some other method which will directly give me a color mapped array (i.e. my NxN grayscale array -> NxNx3 array, using one of matplotlib's colormaps). Then save it using another png save method such as scipy.misc.imsave.
How can I do one of the above? (Or another alternate)
My problem arose when I was saving the figure directly using savefig and realized that I couldn't zoom into details. Upscaling wouldn't solve the problem, since blurring between pixels is exactly one of the things I'm trying to avoid - the pixel size has a physical meaning.
EDIT:
Example:
import numpy as np
import matplotlib.pyplot as plt
X,Y = np.meshgrid(np.arange(-50.0,50,.1), np.arange(-50.0,50,.1))
Z = np.abs(np.sin(2*np.pi*(X**2+Y**2)**.5))/(1+(X/20)**2+(Y/20)**2)
plt.imshow(Z,cmap='inferno', interpolation='nearest')
plt.savefig('colormapeg.png')
plt.show()
Note that zooming in on the interactive figure gives you a very different view than zooming in on the saved figure. I could up the resolution of the saved figure - but that has its own problems. I really just need the resolution fixed.
It seems you are looking for plt.imsave().
In this case,
plt.imsave("filename.png", Z, cmap='inferno')
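A minimal sketch using the example data from the question: imsave maps the array through the colormap and writes exactly one png pixel per array element, so the 1000x1000 array below produces a 1000x1000 png with no axes and no resampling.
import numpy as np
import matplotlib.pyplot as plt

X, Y = np.meshgrid(np.arange(-50.0, 50, .1), np.arange(-50.0, 50, .1))
Z = np.abs(np.sin(2*np.pi*(X**2 + Y**2)**.5)) / (1 + (X/20)**2 + (Y/20)**2)

# one array element -> one image pixel
plt.imsave("colormapeg.png", Z, cmap='inferno')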

How to plot images of different size at the same resolution?

A collection of images is plotted as follows:
from pylab import *

figure(num=None, figsize=(16, 14), dpi=300)
k = 1
for i in range(1, 10):
    for j in range(1, 6):
        subplot(9, 5, k, xticks=[], yticks=[])
        imshow(rgb_chromosomes[k-1], interpolation='nearest')
        k = k + 1
It is visible that, from one image to another, the pixels are not the same size.
How can I fix that issue?
Use interpolation='bilinear' and subsample the result at regular spacing (say, take every fourth pixel; this depends on the final pixel size you want) to form a tiny image. Then magnify this tiny image with 'nearest' interpolation.
You can also keep the 'nearest' setting for the first interpolation, but the result will look ugly.
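A rough sketch of that idea using scipy.ndimage as a stand-in for the resampling (the input array and the shrink factor of 0.25 are assumed example values):
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt

image = np.random.rand(400, 400)  # stand-in for one chromosome image

# shrink with linear interpolation -- the smoothing/subsampling step
tiny = ndimage.zoom(image, 0.25, order=1)

# magnify the tiny image with 'nearest' so each sample shows as a crisp block
plt.imshow(tiny, interpolation='nearest')
plt.show()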
So, from image to image the pixels are different sizes? From context, I am guessing that these are all snippets from the same image/imaging conditions and you want the scale to be the same in all of them.
Something like:
import matplotlib.pyplot as plt

fig, ax_lst = plt.subplots(9, 5)  # better way to set up your axes
for k, ax in enumerate(ax_lst.ravel()):
    ax.imshow(rgb_chromosomes[k], interpolation='none')
    ax.set_xlim([0, max_image_width])
    ax.set_ylim([0, max_image_height])
    ax.set_frame_on(False)
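max_image_width and max_image_height are not defined above; one way to compute them (assuming rgb_chromosomes is a list of height x width x 3 arrays):
max_image_height = max(im.shape[0] for im in rgb_chromosomes)
max_image_width = max(im.shape[1] for im in rgb_chromosomes)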

Matplotlib difference between two images

I have images (4000x2000 pixels) that are derived from the same image, but with subtle differences in less than 1% of the pixels. I'd like to plot the two images side-by-side and highlight the regions of the arrays that are different (by "highlight" I mean I want the pixels that differ to jump out, but still display the color that matches their value). I've been using unfilled rectangles to outline the edges of such pixels so far. I can do this very nicely for small images (~50x50) with:
from pylab import *
from matplotlib.patches import Rectangle

fig = figure(figsize=(20, 15))
ax1 = fig.add_subplot(1, 2, 1)
imshow(image1, interpolation='nearest', origin='lower')
colorbar()
ax2 = fig.add_subplot(122, sharex=ax1, sharey=ax1)
imshow(image2, interpolation='nearest', origin='lower')
colorbar()
# now show differences
Xspots = image1 != image2
Xx, Xy = nonzero(Xspots)
for x, y in zip(Xx, Xy):
    # a patch can only belong to one axes, so make one per axes
    ax1.add_patch(Rectangle((y-.5, x-.5), 1, 1, fill=False, ec='w'))
    ax2.add_patch(Rectangle((y-.5, x-.5), 1, 1, fill=False, ec='w'))
However, this doesn't work so well when the image is very large. Strange things happen; for example, when I zoom in, the patches disappear. Also, this way sucks because it takes forever to redraw things when I zoom in/out.
I feel like there must be a better way to do this, maybe one where there is only one patch that determines where all of the things are, rather than a whole bunch of patches. I could do a scatter plot on top of the imshow image, but I don't know how to fix it so that the points will stay exactly the size of the pixel when I zoom in/out.
Any ideas?
I would try something with the alpha channel:
import copy
import numpy as np
import matplotlib
import matplotlib.cm as cm
import matplotlib.pyplot as plt

N, M = 20, 40
test_data = np.random.rand(N, M)
mark_mask = np.random.rand(N, M) < .01  # mark ~1% of the pixels

# this is redundant in this case, but in general you need it
my_norm = matplotlib.colors.Normalize(vmin=0, vmax=1)

# grab a copy of the color map
my_cmap = copy.copy(cm.get_cmap('cubehelix'))

c_data = my_cmap(my_norm(test_data))  # -> N x M x 4 RGBA array
c_data[:, :, 3] = .5      # make everything half alpha
c_data[mark_mask, 3] = 1  # reset the marked pixels to full opacity

# plot it
plt.figure()
plt.imshow(c_data, interpolation='none')
plt.show()
No idea if this will work with your data or not.
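As an aside, if you would rather keep the white outlines, the "one patch that determines where all of the things are" idea from the question maps naturally onto a PatchCollection, which draws all the rectangles as a single artist (a sketch, assuming the image1 and image2 arrays from the question):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from matplotlib.collections import PatchCollection

diff_rows, diff_cols = np.nonzero(image1 != image2)
rects = [Rectangle((c - .5, r - .5), 1, 1) for r, c in zip(diff_rows, diff_cols)]

fig, ax = plt.subplots()
ax.imshow(image1, interpolation='nearest', origin='lower')
# one collection holding every outline -> far fewer artists to draw
ax.add_collection(PatchCollection(rects, facecolor='none', edgecolor='w'))
plt.show()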

Put pcolormesh and contour onto the same grid?

I'm trying to display 2D data with axis labels using both contour and pcolormesh. As has been noted on the matplotlib user list, these functions obey different conventions: pcolormesh expects the x and y values to specify the corners of the individual pixels, while contour expects the centers of the pixels.
What is the best way to make these behave consistently?
One option I've considered is to make a "centers-to-edges" function, assuming evenly spaced data:
def centers_to_edges(arr):
    dx = arr[1] - arr[0]
    newarr = np.linspace(arr.min() - dx/2, arr.max() + dx/2, arr.size + 1)
    return newarr
Another option is to use imshow with the extent keyword set.
The first approach doesn't play nicely with 2D axis arrays (e.g., as created by meshgrid or indices), and the second discards the axis numbers entirely.
Is your data on a regular mesh? If it isn't, you can use griddata() to obtain one. If your data is too big, sub-sampling or regularization is always possible, and the output image will probably be small compared with the data anyway, which you can exploit.
If you use imshow() with "extent" and "interpolation='nearest'", you will see that the data is treated as cell-centered, with extent giving the outer edges (corners) of the cells. contour, on the other hand, assumes that the data is cell-centered and that X, Y are the centers of the cells. So you need to be careful about the input domain for contour. A trivial example:
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-10, 10, 1)
X, Y = np.meshgrid(x, x)
P = X**2 + Y**2
plt.imshow(P, extent=[-10, 10, -10, 10], interpolation='nearest', origin='lower')
plt.contour(X + 0.5, Y + 0.5, P, 20, colors='k')
plt.show()
My tests told me that pcolormesh() is a very slow routine, and I always try to avoid it. griddata() and imshow() are always a good choice for me.
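For completeness, a minimal sketch (assuming evenly spaced 1D center coordinates) that feeds pcolormesh the edges from the centers_to_edges helper in the question, so pcolormesh and contour line up on the same grid:
import numpy as np
import matplotlib.pyplot as plt

def centers_to_edges(arr):
    dx = arr[1] - arr[0]
    return np.linspace(arr.min() - dx/2, arr.max() + dx/2, arr.size + 1)

x = np.arange(-10, 10, 1.0)  # cell centers
X, Y = np.meshgrid(x, x)
P = X**2 + Y**2

edges = centers_to_edges(x)  # cell edges for pcolormesh
plt.pcolormesh(edges, edges, P)
plt.contour(X, Y, P, 20, colors='k')  # contour gets the centers directly
plt.show()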

Moving x or y ticks in Matplotlib up

I have a numpy array with random values. I have plotted the values in the array using imshow() so that each element shows as a grey-scale square. The problem is that the labels (0, 1, 2 etc) start at the bottom corner. I would like to move them along a bit so they are centred underneath each square. Is there a straight-forward way of doing this?
Just found http://matplotlib.sourceforge.net/examples/pylab_examples/image_interp.html, and the most straightforward way is just to use grid(True). Woohoo!
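A minimal sketch (the 5x5 array is an arbitrary example): with imshow, the tick at integer n already falls at the center of cell n, so setting the ticks explicitly makes the labels sit centered under each square:
import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(5, 5)
fig, ax = plt.subplots()
ax.imshow(data, cmap='gray', interpolation='nearest')

# one tick per cell, each at the cell's center
ax.set_xticks(np.arange(data.shape[1]))
ax.set_yticks(np.arange(data.shape[0]))
ax.grid(True)
plt.show()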