Unusual Mesh Outline PColorMesh - matplotlib

I am utilizing the pcolormesh function in Matplotlib to plot a series of gridded data (in parallel) across multiple map domains. The code snippet relevant to this question is as follows:
im = ax2.pcolormesh(xgrid, ygrid, data.variable.data[0], cmap=cmap, norm=norm, alpha=0.90, facecolor=None)
where xgrid is an array of longitude points, ygrid is an array of latitude points, data.variable.data[0] is the array of corresponding data values, and cmap and norm are a predefined colormap and value normalization.
Consider the following image generated from the provided code:
The undesired result in the image above is what appear to be outlines around each grid square, a patchwork effect that stands out increasingly as the mesh alpha is reduced below 1.
I've set facecolor=None assuming that would remove these outlines, to no avail. What additions or corrections can I make to remove this feature?
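For reference, `facecolor` is not the keyword that controls these lines on a QuadMesh; the relevant keywords are `edgecolors` and `linewidth`, and with alpha below 1 the seams usually come from antialiasing where adjacent translucent quads meet. A hedged sketch of commonly suggested mitigations (results vary by backend; the grids and data here are stand-ins for the question's variables):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

# Stand-in data for the question's xgrid/ygrid/data arrays
xgrid, ygrid = np.meshgrid(np.linspace(0, 10, 21), np.linspace(0, 5, 11))
values = np.random.rand(10, 20)

fig, ax2 = plt.subplots()
im = ax2.pcolormesh(xgrid, ygrid, values, cmap="viridis",
                    alpha=0.90, edgecolors="none", linewidth=0,
                    antialiased=True)
# For vector output (PDF/SVG), rasterizing the mesh can also hide seams:
im.set_rasterized(True)
```

None of these is guaranteed to remove the patchwork entirely on every backend; rasterizing tends to be the most reliable for saved vector figures.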

Related

Matplotlib draws values equal to zero in the array as non-zero values

I draw a map of a parameter with matplotlib and cartopy, using the cartopy CRS Mercator (I also tried AlbersEqualArea; the result is the same).
The array (2D, as it is a mesh) has some float values such as 1.1 to 5.65, but most of it consists of exact 0.000000000000000000e+00 values.
So, for
levels = [0.1, 0.3, 0.5, 1, 1.5, 2, 3, 5, 7, 10, 20, 30, 40, 50]
cntr = ax.contourf(lons, lats, array, levels=levels, cmap=cmap, norm=norm, transform=ccrs.PlateCarree(), extend=ext, zorder=1)
gives a map where all exact-zero values are drawn in blue, i.e. as the 0.3 value on the scale (this does not depend on the values given in levels).
Three things help to generate the map normally:
1. Changing the CRS: PlateCarree is fine, Mercator is not, and Albers is not either.
2. Lowering the maximum northern latitude (76 works for Mercator, 78 does not), but this does not help for Albers.
3. Adding array = np.where(array < 0.3, float(0), array), which works in all cases.
It also cannot be fixed by changing the extend or norm parameters (I tried all variants).
The question is: what sort of bug is this, where is the problem, and how can it be fixed completely?
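The masking workaround from point 3 can also be written with a masked array, which keeps the original data intact and makes contourf skip the masked cells entirely. A minimal sketch, with a small stand-in for the question's array and levels:

```python
import numpy as np

levels = [0.1, 0.3, 0.5, 1, 1.5, 2, 3, 5, 7, 10, 20, 30, 40, 50]
array = np.array([[0.0, 1.1, 5.65],
                  [0.0, 0.0, 2.0]])

# Mask everything below the lowest contour level so exact zeros can
# never be drawn as the first filled band, regardless of projection.
masked = np.ma.masked_less(array, levels[0])
```

Passing masked to ax.contourf(...) then leaves those cells blank instead of colouring them as the 0.3 band.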

How to fill a line in 2D image along a given radius with the data in a given line image?

I want to fill a 2D image along its polar radius; the data are stored in an image where each row or column corresponds to the radius in the target image. How can I fill the target image efficiently, such as with iradius or similar functions? I would prefer to avoid a pixel-by-pixel operation.
Are you looking for something like this?
number maxR = 100
image rValues := realimage("I(r)", 4, maxR)
rValues = 10 + trunc(100 * random())          // random 1D profile I(r)
image plot := realimage("Ring", 4, 2*maxR, 2*maxR)
rValues.ShowImage()
plot.ShowImage()
plot = rValues.warp(iradius, 0)               // radial lookup into I(r)
You might also want to check out the relevant example code in the F1 help documentation of GMS itself.
Explaining warp a bit:
plot = rValues.warp(iradius,0)
Assigns values to plot based on a value-lookup in rValues.
For each pixel in plot a coordinate position in rValues is computed, and the value is simply looked up. If the computed coordinate is non-integer, bilinear interpolation between the 4 closest points is used.
In the example, the two 'formulas' for the coordinate calculation are simply x' = iradius and y' = 0, where iradius is an expression computed, for convenience, from the coordinate in plot.
You can feed any expression into the parameters for warp( ) and the command is closely related to just using the square bracket notation of addressing values. In fact, the only difference is that warp performs the bilinear interpolation of values instead of truncating the coordinates to integer values.
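Outside GMS, the same radial fill can be sketched with NumPy. This is a hypothetical nearest-neighbour analogue of plot = rValues.warp(iradius, 0), not the bilinearly interpolating original:

```python
import numpy as np

maxR = 100
r_values = 10 + np.trunc(100 * np.random.rand(maxR))  # random 1D profile I(r)

size = 2 * maxR
y, x = np.mgrid[0:size, 0:size]
# iradius equivalent: distance of each pixel from the image centre
radius = np.hypot(x - size / 2, y - size / 2)
# look up the profile value at that radius (nearest neighbour;
# warp itself would interpolate between the two closest entries)
idx = np.clip(np.round(radius).astype(int), 0, maxR - 1)
ring = r_values[idx]
```

The result is a 2*maxR by 2*maxR image whose value at each pixel depends only on its distance from the centre, i.e. concentric rings.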

How to get rid of artefacts in contourplot contourf (smoothing matrix/ 2D array)?

I have data in an HDF5 file with named datasets:
# Data acquisition and manipulation
import h5py
import numpy as np
from os import path

file = h5py.File('C:/Users/machz/Downloads/20200715_000_Scan_XY-Coordinate_NV-centre_APD.h5', 'r')
filename = path.basename(file.filename)
intensity = np.array(file.get('intensity'))
x_range = np.round(np.array(file.get('x range')), 1)
z_range = np.round(np.array(file.get('z range')), 1)
where intensity is a 2D array and x_range and z_range are 1D arrays. Now I want to smooth the intensity data. The raw data looks, for example, like this:
by using seaborn.heatmap:
heat_map = sb.heatmap(intensity, cmap="Spectral_r")
When using matplotlib.contourf via
plt.contourf(intensity, 1000, cmap="Spectral_r")
I get the following result:
which looks okay, apart from being rotated by 180 degrees. But how can I get rid of the distortion in the x and y directions and get round spots? Is there a more elegant way to smooth a 2D array/matrix? I have read something about kernel density estimation (KDE), but it looks complex.
Edit: result of applying `intensity_smooth = gaussian_filter(intensity, sigma=1, order=0)` (from `scipy.ndimage`):
The points with high intensity are dissolving, but I want sharp intensity maxima with a soft transition between two values of the matrix (see first pic).
Unfortunately I expressed my question in a misleading way. I have 2D data and want to get rid of the blocky look by interpolating the given data. For this I found a really good answer by Andras Deak in the thread "Interpolation methods on different kinds of data". Plotting with matplotlib.contourf, I have gotten this:
The tick marks must still be changed, but the result is good.
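One way to get the interpolated, non-blocky look described above is spline upsampling with scipy.ndimage.zoom before calling contourf. A minimal sketch on stand-in data (the real intensity array comes from the HDF5 file):

```python
import numpy as np
from scipy.ndimage import zoom

intensity = np.random.rand(20, 20)            # stands in for the HDF5 data
intensity_fine = zoom(intensity, 8, order=3)  # 8x denser grid, cubic spline
```

plt.contourf(intensity_fine, 1000, cmap="Spectral_r") then draws smooth transitions; unlike a Gaussian blur, the interpolant still passes through the original sample values, so sharp maxima are preserved better.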

Contours based on a "label mask"

I have images that have had features extracted with a contouring algorithm (I'm doing astrophysical source extraction). This approach yields a "feature map" that has each pixel "labeled" with an integer (usually ~1000 unique features per map).
I would like to show each individual feature as its own contour.
One way I could accomplish this is:
for ii in range(labelmask.max()):
    contour(labelmask, levels=[ii - 0.5])
However, this is very slow, particularly for large images. Is there a better (faster) way?
P.S.
A little testing showed that skimage's find_contours is no faster.
As per @tcaswell's comment, I need to explain why contour(labels, levels=np.unique(labels) + 0.5) or something similar doesn't work:
1. Matplotlib spaces each subsequent contour "inward" by a linewidth to avoid overlapping contour lines. This is not the behavior desired for a labelmask.
2. The lowest-level contours encompass the highest-level contours
3. As a result of the above, the highest-level contours will be surrounded by a miniature version of whatever colormap you're using and will have extra-thick contours compared to the lowest-level contours.
Sorry for answering my own question... impatience (and good luck) got the better of me.
The key is to use matplotlib's low-level C routines (note: the private matplotlib._cntr module used here was removed in Matplotlib 2.2):
I = imshow(data)
E = I.get_extent()
x, y = np.meshgrid(np.linspace(E[0], E[1], labels.shape[1]),
                   np.linspace(E[2], E[3], labels.shape[0]))
for ii in np.unique(labels):
    if ii == 0:
        continue
    tracer = matplotlib._cntr.Cntr(x, y, labels * (labels == ii))
    T = tracer.trace(0.5)
    contour_xcoords, contour_ycoords = T[0].T
    # to plot them:
    plot(contour_xcoords, contour_ycoords)
Note that labels*(labels==ii) will put each label's contour at a slightly different location; change it to just labels==ii if you want overlapping contours between adjacent labels.
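On Matplotlib versions where the private _cntr module no longer exists, a hedged fallback is to go through the public contour API per label and pull the vertex arrays out of allsegs; this is slower than the C-level trick above, but yields the same per-label contour coordinates:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

# Toy label mask with two features (stands in for the real labelmask)
labels = np.zeros((50, 50), dtype=int)
labels[10:20, 10:20] = 1
labels[30:40, 25:45] = 2

fig, ax = plt.subplots()
segs = {}
for ii in np.unique(labels):
    if ii == 0:
        continue
    cs = ax.contour(labels == ii, levels=[0.5], colors="k")
    segs[ii] = cs.allsegs[0]  # list of (N, 2) vertex arrays for label ii
```

Contouring the boolean mask labels == ii at 0.5 avoids the inward spacing and nesting problems listed in points 1-3 above, since each call sees only one feature.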

Put pcolormesh and contour onto same grid?

I'm trying to display 2D data with axis labels using both contour and pcolormesh. As has been noted on the matplotlib user list, these functions obey different conventions: pcolormesh expects the x and y values to specify the corners of the individual pixels, while contour expects the centers of the pixels.
What is the best way to make these behave consistently?
One option I've considered is to make a "centers-to-edges" function, assuming evenly spaced data:
def centers_to_edges(arr):
    dx = arr[1] - arr[0]
    newarr = np.linspace(arr.min() - dx/2, arr.max() + dx/2, arr.size + 1)
    return newarr
Another option is to use imshow with the extent keyword set.
The first approach doesn't play nicely with 2D axis arrays (e.g., as created by meshgrid or indices), and the second discards the axis numbers entirely.
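For the 2D case, the centers-to-edges idea can be sketched by averaging the four surrounding centers to get interior corners and linearly extrapolating the borders. This is a hypothetical helper, not a matplotlib function:

```python
import numpy as np

def centers_to_edges_2d(c):
    """Estimate (m+1, n+1) cell-corner coordinates from an (m, n) array
    of cell centers, as pcolormesh expects."""
    c = np.asarray(c, dtype=float)
    # 'reflect' padding with reflect_type='odd' linearly extrapolates
    # one center outward on every side
    padded = np.pad(c, 1, mode="reflect", reflect_type="odd")
    # each corner is the average of the four neighbouring centers
    return 0.25 * (padded[:-1, :-1] + padded[:-1, 1:]
                   + padded[1:, :-1] + padded[1:, 1:])
```

For a meshgrid of evenly spaced centers 0, 1, 2 this returns corners -0.5, 0.5, 1.5, 2.5 along each row, consistent with the 1D centers_to_edges above.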
Is your data on a regular mesh? If not, you can use griddata() to obtain one. If your data is too big, sub-sampling or regularization is always possible; in that case the output image will likely be small compared with the data anyway, and you can exploit this.
If you use imshow() with extent and interpolation='nearest', you will see that the data is cell-centered and that extent gives the outer edges (corners) of the cells. On the other hand, contour assumes that the data is cell-centered and that X, Y are the centers of the cells. So you need to be careful about the input domain for contour. A trivial example:
x = np.arange(-10, 10, 1)
X, Y = np.meshgrid(x, x)
P = X**2 + Y**2
imshow(P, extent=[-10, 10, -10, 10], interpolation='nearest', origin='lower')
contour(X + 0.5, Y + 0.5, P, 20, colors='k')
My tests told me that pcolormesh() is a very slow routine, and I always try to avoid it; griddata() and imshow() are always a good choice for me.