Use a shapefile to mask raster data in ArcGIS, then compute a weighted sum

I want to mask raster data using a shapefile in ArcGIS and then compute a weighted sum over the masked parts.
This is the path of the tool I used:
Spatial Analyst Tools -> Extraction -> Extract by Mask.
When I use this tool, I always get several separate grids. However, what I want is an output with the same shape as my shapefile.
I would like the output to keep its several parts so that they can be combined in a weighted sum.

This is a coding site; for questions like this I would try https://gis.stackexchange.com/ instead.
I am not sure what you mean by "weighted sum" in this context, but here is an example of what you can do with R.
Example data
library(raster)
p <- shapefile(system.file("external/lux.shp", package="raster"))[1,]
r <- raster(extent(p)+2, vals=1:100)
plot(r)
plot(p, add=T)
Raster cropped to polygon
x <- crop(r, p)
plot(x)
plot(p, add=T)
Disaggregate the cells so that they fit the polygon better, followed by crop and mask
d <- disaggregate(r, 100)
x <- crop(d, p)
m <- mask(x, p)
plot(m)
plot(p, add=T)

Related

Changing variable labels/legend in raster plot to discrete characters

I have just made a plot using raster data that consists of 6 different land types and fitted it to polygon vectors. I'm trying to change the values on the continuous scale bar (1-6) to the names of each land type (e.g. grasslands, urban, etc.), which is what each colour represents. I have tried inserting breaks, but then each box in the legend is labelled with a range (1-2, 2-3, 3-4, etc.)
Raster plot where each colour represents a different land type
This is my code (posted as an image):
Example data
library(terra)
r <- rast(nrows=10, ncols=10)
values(r) <- sample(3, ncell(r), replace=TRUE)
cover <- c("forest", "water", "urban")
You can either do:
plot(r, type="classes", levels=cover)
Or first make the raster categorical
levels(r) <- data.frame(id=1:3, cover=c("forest", "water", "urban"))
plot(r)

Getting the inverse of a 2d polynomial transform with numpy (for image or raster image warping/sampling)

If I have a 2-dimensional (x and y coordinates) polynomial transform function of 1st/affine, 2nd, or 3rd order (i.e. I have the coefficients/transformation matrix A), what is the mathematical or programmatic approach to getting the exact inverse of this function? Ideally, how would I implement this in Numpy? This is in the context of image warping or map georeferencing, i.e. transforming or warping the coordinates from an input image to an output image in a new warped coordinate system.
Attempted Solution
To solve this I have tried a matrix algebra approach for solving sets of equations. Mathematically, the transformation procedure is represented as Au = v. Forward transforming is easy: you calculate u as a column matrix containing the terms of the polynomial equation based on your input coordinates, and then matrix-multiply u with the transformation matrix A in order to get the transformed output column matrix v containing the output coordinates. Backward transforming, on the other hand, means we know the output coordinates v and want to find the input coordinates u, so we need to reshuffle our equation as u = A^-1 v; by the rules of matrix algebra, the A matrix has to be inverted when moving it to the other side. Implementing this in Numpy for a 2nd order polynomial transform, it does seem to work:
import numpy as np
# input coords
x = np.array([13])
y = np.array([13])
# terms of the 2nd order polynomial equation
x = x
y = y
xx = x*x
xy = x*y
yy = y*y
ones = np.ones(x.shape)
# u consists of each term in 2nd order polynomial equation
# with each term being array if want to transform multiple
u = np.array([xx,xy,yy,x,y,ones])
print('original input u', u)
## output:
## ('original input u', array([[169.],
## [169.],
## [169.],
## [ 13.],
## [ 13.],
## [ 1.]]))
# forward transform matrix
A = np.array([[1, 2, 3, 1, 6, 8],
              [5, 2, 9, 2, 0, 1],
              [8, 1, 5, 8, 4, 3],
              [1, 4, 8, 2, 3, 9],
              [9, 3, 2, 1, 9, 5],
              [4, 2, 5, 6, 2, 1]])
# get forward coords
v = A.dot(u)
print('output v', v)
## output:
## ('output v', array([[1113.],
## [2731.],
## [2525.],
## [2271.],
## [2501.],
## [1964.]]))
# get backward coords (should exactly reproduce the input coords)
Ainv = np.linalg.inv(A)
u_pred = Ainv.dot(v)
print('backwards predicted input u', u_pred)
## output:
## ('backwards predicted input u', array([[169.],
## [169.],
## [169.],
## [ 13.],
## [ 13.],
## [ 1.]]))
In the above example the output v is actually a 6x1 column matrix, where only the top two values represent the transformed x and y coordinates. The problem is that we need all the additional values in v in order to exactly invert the coordinates. But in real-world scenarios we only know the transformed x and y values (i.e. the top two values of v); we don't know the full 6x1 v matrix.
Maybe I'm thinking about this wrong, or maybe this matrix algebra approach is not the right one, since 2nd order and higher polynomials are no longer linear? Are there any alternative programmatic/numpy approaches for inverting the polynomial transformation?
Some context
I've looked up many similar questions and websites as well as numpy functions such as numpy.polynomial.Polynomial.fit, but most of them relate only to inverting 1-dimensional polynomial transforms. The few links I've found that talk about 2-dimensional transforms say there is no exact way to invert them, which doesn't make sense, since this is a very common operation in image warping/resampling and map georeferencing. For example, the steps for warping an image are often broken down into:
1. Forward project all original pixel (column-row) coordinates u using the transformation function/matrix A, in order to find the bounds of the transformed coordinate space v.
2. Then, for every coordinate sampled at regular intervals within those bounds (found in step 1), backward sample these v coordinates in the transformed coordinate system to find their original coordinates u. This determines which original pixels to sample for each location in the transformed image.
My problem then is that I have the forward transformation necessary for step 1, but I need to find the exact inverse of that transformation necessary for backwards sampling in step 2. Either a math answer or a numpy solution would be fine.
Inversion of a 2D affine function is pretty easy: it amounts to solving a 2x2 linear system of equations.
The case of quadratic and cubic polynomials is much more problematic. If I am right, a system in two unknowns is equivalent to a single quartic or nonic (degree 9) polynomial equation. Explicit (though complicated) formulas exist for the quartic case, but none for the nonic case, so you will have to resort to numerical methods (Newton iterations).
In addition, the solutions of these nonlinear equations are not unique (you can have up to 4 or 9 solutions) and you need to keep the right one.
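To make the counting concrete (a sketch, writing a1..a6 and b1..b6 for the two rows of A that produce the observed outputs x' and y'), the quadratic case is the system

x' = a1*x^2 + a2*x*y + a3*y^2 + a4*x + a5*y + a6
y' = b1*x^2 + b2*x*y + b3*y^2 + b4*x + b5*y + b6

with x' and y' known and x and y unknown. Eliminating one unknown (for instance with a resultant) leaves a single polynomial of degree 2*2 = 4 in the other unknown; for a cubic transform the same elimination gives degree 3*3 = 9, which is where the 4 or 9 candidate solutions come from.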
If your transformation remains close to affine (such as when correcting image distortion), I would suggest choosing an affine transformation that approximates the complete equation, using its backward transformation to find initial approximations, and then refining with Newton's method.
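Here is a minimal numpy sketch of that suggestion (not from the original answer): it assumes the same 6-term ordering as in the question (xx, xy, yy, x, y, 1) and uses only the two rows of A that produce the transformed x and y, here called Axy; the names forward and inverse_newton are made up for illustration.

import numpy as np

def forward(Axy, x, y):
    # evaluate the 2nd-order transform: returns the transformed (x', y')
    u = np.array([x*x, x*y, y*y, x, y, 1.0])
    return Axy.dot(u)

def inverse_newton(Axy, xt, yt, x0, y0, iters=20):
    # invert the transform with Newton's method, starting from (x0, y0),
    # e.g. obtained from an affine approximation of the inverse
    x, y = float(x0), float(y0)
    for _ in range(iters):
        fx, fy = forward(Axy, x, y) - np.array([xt, yt])
        # Jacobian of the two polynomial outputs with respect to (x, y)
        J = np.array([[2*Axy[0,0]*x + Axy[0,1]*y + Axy[0,3],
                       Axy[0,1]*x + 2*Axy[0,2]*y + Axy[0,4]],
                      [2*Axy[1,0]*x + Axy[1,1]*y + Axy[1,3],
                       Axy[1,1]*x + 2*Axy[1,2]*y + Axy[1,4]]])
        dx, dy = np.linalg.solve(J, [fx, fy])
        x, y = x - dx, y - dy
    return x, y

# using the top two rows of the A matrix from the question
Axy = np.array([[1., 2., 3., 1., 6., 8.],
                [5., 2., 9., 2., 0., 1.]])
xt, yt = forward(Axy, 13.0, 13.0)                     # the known output coordinates
print(inverse_newton(Axy, xt, yt, x0=10.0, y0=10.0))  # converges back to (13, 13)

In a real warp one would run this (or a vectorised version) for every output pixel, use the affine part of the transform to produce the starting point, and discard any solution that diverges or lands outside the image.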

How to get rid of artefacts in a contourf contour plot (smoothing a matrix / 2D array)?

I have data in an HDF5 file with named datasets:
# Data acquisition and manipulation
import h5py
import numpy as np
import seaborn as sb                 # used for the heatmap below
import matplotlib.pyplot as plt      # used for contourf below
from os import path

file = h5py.File('C:/Users/machz/Downloads/20200715_000_Scan_XY-Coordinate_NV-centre_APD.h5', 'r')
filename = path.basename(file.filename)
intensity = file.get('intensity')
intensity = np.array(intensity)
x_range = file.get('x range')
x_range = np.array(x_range)
x_range = np.round(x_range, 1)
z_range = file.get('z range')
z_range = np.array(z_range)
z_range = np.round(z_range, 1)
where intensity is a 2D array and x_range and z_range are 1D arrays. Now I want to smooth the intensity data. The raw data looks, for example, like this (plotted with seaborn.heatmap):
heat_map = sb.heatmap(intensity, cmap="Spectral_r")
When using matplotlib.contourf via
plt.contourf(intensity, 1000, cmap="Spectral_r")
I get the following result, which looks okay, although it is rotated by 180 degrees:
But how can I get rid of the distortion in the x and y directions and get round spots? Is there a more elegant way to smooth a 2D array / matrix? I have read something about kernel density estimation (KDE), but it looks complex.
Edit: result of applying `intensity_smooth = gaussian_filter(intensity, sigma=1, order=0)` (from scipy.ndimage):
The points with high intensity are dissolving, but I want sharp intensity maxima with a soft transition between two values of the matrix (see the first picture).
Unfortunately I expressed myself in a confusing way. I have 2D data and want to get rid of the blocky look by interpolating the given data. To do this I found a really good answer by Andras Deak in the thread Interpolation methods on different kinds of data. Plotting with matplotlib.contourf, I got this:
The tick marks still need to be changed, but the result is good.
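For reference, here is a minimal sketch of the same idea using scipy.ndimage.zoom instead of the approach from the linked answer; it assumes the intensity, x_range and z_range arrays from the question, and the upsampling factor and spline order are arbitrary choices:

import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage

# upsample the measured grid with a cubic spline: the maxima stay sharp
# while the transitions between neighbouring cells become smooth
intensity_fine = ndimage.zoom(intensity, zoom=10, order=3)

# keep the physical axis ranges so the tick marks stay meaningful
x_fine = np.linspace(x_range.min(), x_range.max(), intensity_fine.shape[1])
z_fine = np.linspace(z_range.min(), z_range.max(), intensity_fine.shape[0])
plt.contourf(x_fine, z_fine, intensity_fine, 100, cmap="Spectral_r")
plt.colorbar()
plt.show()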

Using matplotlib to plot a matrix with the third variable as source for a color map

Say you have a matrix given by three arrays:
x = an array of length N.
y = an array of length M.
And z is a set of "somewhat random" values from -0.3 to 0.3 in an N x M shape. I need to create a plot in which the x values are on the x-axis, the y values are on the y-axis, and z is used as the source for the intensity of each pixel through a color map.
So far, I have tried using
plt.contourf(x,y,z)
and the resulting plot is very nice for me (attached at the end of this paragraph), but smoothing is automatically applied to the plot! I need to be able to distinguish the pixels and I cannot find a way to do it.
contourf result
I have also studied the possibility of using
ax.matshow(z)
in order to successfully see the pixels... but then I am struggling to customize the x and y axes, since only the index of each pixel is shown (see below).
matshow result
Would you please give me some ideas? Thank you.
Without more information on your x,y data it's hard to know, but I would guess you are looking for pcolormesh.
plt.pcolormesh(x,y,z)
This takes the x and y data as input and shows the z data at the appropriate coordinates.
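A small self-contained sketch of this, with made-up x, y and z standing in for the data described in the question:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 5.0, 20)                       # length N
y = np.linspace(0.0, 3.0, 12)                       # length M
z = 0.3 * (2 * np.random.rand(len(y), len(x)) - 1)  # values in [-0.3, 0.3], shape (M, N)

# one flat-coloured cell per value, so individual "pixels" stay visible
# while the axes show the real x and y coordinates
plt.pcolormesh(x, y, z, cmap="RdBu_r", vmin=-0.3, vmax=0.3, shading="auto")
plt.colorbar(label="z")
plt.show()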
You can use imshow with the keyword interpolation='nearest'.
plt.imshow(z, interpolation='nearest')
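And a sketch of the imshow route with the same kind of made-up data; the extent and origin keywords are my addition, to label the axes with the real x and y ranges instead of the array indices (which also addresses the matshow axis issue):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 5.0, 20)
y = np.linspace(0.0, 3.0, 12)
z = 0.3 * (2 * np.random.rand(len(y), len(x)) - 1)

# interpolation='nearest' keeps hard pixel edges; extent and origin map the
# pixel indices onto the data coordinates
plt.imshow(z, interpolation="nearest", origin="lower", aspect="auto",
           extent=[x.min(), x.max(), y.min(), y.max()],
           cmap="RdBu_r", vmin=-0.3, vmax=0.3)
plt.colorbar(label="z")
plt.show()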

Interpolate in one direction

I have sampled data and plotted it with imshow():
I would like to interpolate just along the horizontal axis so that I can more easily distinguish samples and spot features.
Is it possible to interpolate in just one direction with MPL?
Update:
SciPy has a whole package with various interpolation methods. I used the simplest one, interp1d, as suggested by tcaswell:
import numpy as np
from scipy import interpolate

def smooth_inter_fun(r):
    # interpolate a single row onto a 10x finer grid
    s = interpolate.interp1d(np.arange(len(r)), r)
    xnew = np.arange(0, len(r) - 1, .1)
    return s(xnew)

new_data = np.vstack([smooth_inter_fun(r) for r in data])
Linear and cubic results:
As expected :)
This tutorial covers a range of the interpolation methods available in numpy/scipy. If you want to interpolate in just one direction, I would work on each row independently and then re-assemble the results. You might also be interested in simply smoothing your data (for example, Python Smooth Time Series Data, Using strides for an efficient moving average filter).
def smooth_inter_fun(r):
    # whatever per-row interpolation or smoothing you want to use
    return r

new_data = np.vstack([smooth_inter_fun(r) for r in data])
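One concrete way to fill in that skeleton (my choice, not from the answer) is scipy.ndimage.zoom, which can interpolate along a single axis by giving the other axis a zoom factor of 1, avoiding the explicit per-row loop:

import numpy as np
from scipy import ndimage

# made-up sampled data: 20 rows, 30 columns
data = np.random.rand(20, 30)

# interpolate only along the horizontal axis: rows are left untouched (factor 1),
# columns are upsampled 10x with a cubic spline
new_data = ndimage.zoom(data, zoom=(1, 10), order=3)
print(data.shape, "->", new_data.shape)   # (20, 30) -> (20, 300)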