I have just made a plot using raster data that consists of 6 different land types and fitted them to polygon vectors. I'm trying to change the values on the continuous scale bar (1-6) to the names of each land type (e.g. grasslands, urban, etc.), which is what each colour represents. I have tried inserting breaks, but then each box in the legend is labelled with a range (1-2, 2-3, 3-4, etc.).
Raster plot where each colour represents a different land type.
This is my code (attached as a screenshot, rasterxpolygonplotcode).
Example data
library(terra)
r <- rast(nrows=10, ncols=10)
values(r) <- sample(3, ncell(r), replace=TRUE)
cover <- c("forest", "water", "urban")
You can either do:
plot(r, type="classes", levels=cover)
Or first make the raster categorical:
levels(r) <- data.frame(id=1:3, cover=c("forest", "water", "urban"))
plot(r)
Aim: plot a column chart representing concentration values at discrete sites
Problem: the 14 site labels are numeric, so I think ggplot2 is assuming continuous data and adding spaces for what it sees as 'missing numbers'. I only want 14 columns with 14 marks/labels, corresponding to the 14 values in the dataframe. I've tried assigning the sites as factors and as characters, but neither works.
Also, how do you ensure the y-axis ends at '0', so the bottom of the columns meet the x-axis?
Thanks
Data:
Sites: 2,4,6,7,8,9,10,11,12,13,14,15,16,17
Concentration: 10,16,3,15,17,10,11,19,14,12,14,13,18,16
You have two questions in one with two pretty straightforward answers:
1. How to force a discrete axis when your column is a continuous one? To make ggplot2 draw a discrete axis, the data must be discrete. You can force your numeric data to be discrete by converting to a factor. So, instead of x=Sites in your plot code, use x=as.factor(Sites).
2. How to eliminate the white space below the columns in a column plot? You can control the limits of the y axis via the scale_y_continuous() function. By default, the limits extend a bit past the actual data (in this case, from 0 to the max Concentration). You can override that behavior via the expand= argument. Check the documentation for expansion() for more details; here I'm using mult=, which multiplies the data range to find the new limits. A lower value of 0 makes the axis start exactly at 0 (the base of the columns), and an upper value of 0.05 expands the chart about 5% past the max value (which I believe is the default).
Here's the code and resulting plot.
library(ggplot2)
df <- data.frame(
  Sites = c(2,4,6,7,8,9,10,11,12,13,14,15,16,17),
  Concentration = c(10,16,3,15,17,10,11,19,14,12,14,13,18,16)
)

ggplot(df, aes(x=as.factor(Sites), y=Concentration)) +
  geom_col(color="black", fill="lightblue") +
  scale_y_continuous(expand=expansion(mult=c(0, 0.05))) +
  theme_bw()
I am plotting two vector fields on top of each other and I want to use the auto-scale feature to set the arrow size such that the two fields are at the same scale automatically. (Part of this notebook.)
If I plot them one after the other, they are drawn at different scales. In this case, the black arrows are artificially inflated compared to the green ones.
plt.quiver(*XY, *np.real(UV))
plt.quiver(*XY, *np.imag(UV), color='g')
If I use this solution, the first plot sets the scale for the second plot. But this fails to take the scale of the second field into account: if the first field has a small magnitude compared to the second, the result looks terrible.
Q = plt.quiver(*XY, *np.real(UV))
Q._init()
plt.quiver(*XY, *np.imag(UV), scale=Q.scale, color='g')
I want to set the auto-scale based on both fields, not just one or the other. Ideas?
You need to pass the same scale argument to both plt.quiver calls.
If you don't provide a scale, then a visually pleasing scale is derived automatically. So you could in principle extract the autoscaling code, use it to get the automatic scales for both quiver plots, and then use, for instance, the average of the two values.
Another, easier way is to first plot both quiver plots invisibly using the do-nothing backend 'template', retrieve the automatically calculated scales, and use their average in both real plotting calls:
def plot_flow(x, y, q, XY, G=source, args=(), size=(7,7), ymax=None):
    "Plot the geometry and induced velocity field"
    # Loop through segments, superimposing the velocity
    def uv(i): return q[i]*velocity(*XY, x[i], y[i], x[i+1], y[i+1], G, args)
    UV = sum(uv(i) for i in range(len(x)-1))

    def get_scale(XY, UV):
        """Get autoscale value by plotting to the do-nothing backend."""
        backend = plt.matplotlib.get_backend()
        plt.matplotlib.use('template')
        Q = plt.quiver(*XY, *UV, scale=None)
        plt.matplotlib.use(backend)
        Q._init()
        return Q.scale

    # Get autoscales
    scale_real = get_scale(XY, np.real(UV))
    scale_imag = get_scale(XY, np.imag(UV)) if np.iscomplexobj(UV) else scale_real
    scale = (scale_real + scale_imag)/2

    # Create plot
    plt.figure(figsize=size)
    ax = plt.axes(); ax.set_aspect('equal', adjustable='box')

    # Plot vectors and segments
    plt.quiver(*XY, *np.real(UV), scale=scale)
    if np.iscomplexobj(UV):
        plt.quiver(*XY, *np.imag(UV), scale=scale, color='g')
    plt.plot(x, y, c='b')
    plt.ylim(None, ymax)
In the example, we get a scale of 7.7 as the average of 12.2 and 3.3.
Normalizing the data before plotting can help to get similar scales for the arrow sizes:
scale = 1
UV_real = np.real(UV) / np.linalg.norm(UV)
UV_imag = np.imag(UV) / np.linalg.norm(UV)
Q1 = plt.quiver(*XY, *UV_real, scale=scale)
Q2 = plt.quiver(*XY, *UV_imag, scale=scale, color='g')
Tested for multiple magnitude ratios between real and imaginary parts.
I have a large DICOM MRI dataset for several patients. For each patient there is a folder containing many 2D slices (.dcm files), and the data for each patient has different dimensions. For example:
patient1: PixelSpacing=0.8mm,0.8mm, SliceThickness=2mm, SpacingBetweenSlices=1mm, 400x400 pixels
patient2: PixelSpacing=0.625mm,0.625mm, SliceThickness=2.4mm, SpacingBetweenSlices=1mm, 512x512 pixels
So my question is: how can I convert all of them to PixelSpacing = 1mm, 1mm and SliceThickness = 1mm?
Thanks.
These are two different questions:
About harmonizing positions and pixel spacing, these links will be helpful:
Finding the coordinates (mm) of identical slice locations for two MR datasets acquired in the same scanning session
Interpolation between two images with different pixelsize
http://nipy.org/nibabel/dicom/dicom_orientation.html
Basically, you want to build your target volume and interpolate each of its pixels from the nearest neighbors in the source volumes.
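As a rough sketch of that idea (not code from the linked answers), the resampling step could look something like the following in Python, assuming the .dcm slices have already been read (e.g. with pydicom), sorted by slice position, and stacked into a (z, y, x) NumPy array; scipy.ndimage.zoom is used here instead of a hand-written nearest-neighbour lookup, and the function and variable names are only illustrative:

import numpy as np
from scipy.ndimage import zoom

def resample_to_isotropic(volume, spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Resample a (z, y, x) volume from `spacing` (mm per voxel) to `new_spacing` (mm)."""
    spacing = np.asarray(spacing, dtype=float)
    new_spacing = np.asarray(new_spacing, dtype=float)
    zoom_factors = spacing / new_spacing      # factors > 1 mean upsampling
    # order=0 is nearest-neighbour interpolation, order=1 is (tri)linear
    return zoom(volume, zoom_factors, order=1)

# Hypothetical usage: z spacing taken from the slice positions, in-plane spacing
# from the PixelSpacing tag, e.g. for patient1:
# resampled = resample_to_isotropic(volume, spacing=(z_spacing, 0.8, 0.8))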
About modifying the slice thickness: if you really want to modify the slice thickness rather than the slice distance, I do not see any way to do this correctly with the source data you have. This is because the thickness says which width of the raw data was used to calculate the values for a slice in your stack (e.g. by averaging or calculating an integral). With a slice thickness of 2 or 2.4 mm in the source volumes, you will not be able to reconstruct the gray values with a thickness of 1 mm. If your question was referring to slice distance rather than slice thickness, the first point above applies.
I am trying to create an image with imshow, but the bins in my matrix are not equally sized.
For example the following matrix
C = [[1,2,2],[2,3,2],[3,2,3]]
is for X = [1,4,8] and for Y = [2,4,9]
I know I can just set the xticks and yticks, but I want the axes to be to scale. This means the squares that make up the imshow image will need to be different sizes.
Is it possible?
This seems like a job for pcolormesh.
From When to use imshow over pcolormesh:
Fundamentally, imshow assumes that all data elements in your array are to be rendered at the same size, whereas pcolormesh/pcolor associates elements of the data array with rectangular elements whose size may vary over the rectangular grid.
pcolormesh plots a matrix as cells, and takes as arguments the x and y coordinates of the cell edges, which allows you to draw each cell with a different size.
I assume the X and Y of your example data are meant to be the sizes of the cells, so I converted them into edge coordinates with:
import numpy as np
import matplotlib.pyplot as plt

xSize = [1,4,8]
ySize = [2,4,9]
x = np.append(0, np.cumsum(xSize)) # gives [ 0  1  5 13]
y = np.append(0, np.cumsum(ySize)) # gives [ 0  2  6 15]
Then, if you want behavior similar to imshow, you need to flip the y axis (done below by negating y).
c = np.array([[1,2,2],[2,3,2],[3,2,3]])
plt.pcolormesh(x, -y, c)
Which gives us:
I have a four-dimensional data set. None of the four variables is equally spaced. Right now I visualize the data using a 3D scatter plot (with the color of the dots indicating the fourth dimension), but this is extremely unwieldy when printed. Had the variables been evenly spaced, a series of pcolor plots would have been an option. Is there some way I can represent such data using a series of 2D plots? My data set looks something like this:
x = [3.67, 3.89, 25.6]
y = [4.88, 4.88, 322.9]
z = [1.0, 2.0, 3.0]
b = [300.0,411.0,414.5]
A scatter plot matrix is a common way to plot multiple dimensions: for example, four continuous variables plotted pairwise against each other and colored by a fifth categorical variable.
How to deal with the uneven spacing depends on the nature of the unevenness.
You might plot it as-is if the unevenness is itself significant.
You might make a second plot with the extreme values excluded.
You might apply a transformation (such as log or quantile) if the data justifies it.
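To make that concrete, here is a minimal sketch (my own illustration using the example data above, not code from the original answer) that draws every pair of x, y and z as a 2D scatter plot, with b mapped to the point colour; the commented-out log scaling shows where such a transformation would go. Ready-made alternatives include pandas.plotting.scatter_matrix and seaborn's pairplot.

import itertools
import matplotlib.pyplot as plt

# Example data from the question
x = [3.67, 3.89, 25.6]
y = [4.88, 4.88, 322.9]
z = [1.0, 2.0, 3.0]
b = [300.0, 411.0, 414.5]

data = {'x': x, 'y': y, 'z': z}
pairs = list(itertools.combinations(data, 2))   # ('x','y'), ('x','z'), ('y','z')

fig, axes = plt.subplots(1, len(pairs), figsize=(12, 4))
for ax, (i, j) in zip(axes, pairs):
    sc = ax.scatter(data[i], data[j], c=b)      # colour encodes the fourth variable b
    ax.set_xlabel(i)
    ax.set_ylabel(j)
    # ax.set_xscale('log'); ax.set_yscale('log')  # optional transformation
fig.colorbar(sc, ax=list(axes), label='b')
plt.show()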