matplotlib contour plot geojson output?

I'm using Python matplotlib to generate contour plots from a 2D array of temperature data (stored in a NetCDF file), and I am interested in exporting the contour polygons and/or lines to GeoJSON format so that I can use them outside of matplotlib. I have figured out that the "pyplot.contourf" function returns a "QuadContourSet" object which has a "collections" attribute that contains the coordinates of the contours:
contourSet = plt.contourf(data, levels)
collections = contourSet.collections
Does anyone know if matplotlib has a way to export the coordinates in "collections" to various formats, in particular geojson? I've searched the matplotlib documentation, and the web, and haven't come up with anything obvious.
Thanks!

geojsoncontour is a Python module that converts matplotlib contour lines to geojson.
It uses the following, simplified but complete, method to convert a matplotlib contour to geojson:
import numpy
import geojson
from geojson import Feature, LineString, FeatureCollection
from matplotlib.colors import rgb2hex
import matplotlib.pyplot as plt

grid_size = 1.0
latrange = numpy.arange(-90.0, 90.0, grid_size)
lonrange = numpy.arange(-180.0, 180.0, grid_size)
X, Y = numpy.meshgrid(lonrange, latrange)
Z = numpy.sqrt(X * X + Y * Y)

figure = plt.figure()
ax = figure.add_subplot(111)
contour = ax.contour(lonrange, latrange, Z, levels=numpy.linspace(start=0, stop=100, num=10), cmap=plt.cm.jet)

line_features = []
for collection in contour.collections:
    paths = collection.get_paths()
    color = collection.get_edgecolor()
    for path in paths:
        v = path.vertices
        coordinates = []
        for i in range(len(v)):
            lon = v[i][0]  # the plot's x axis is longitude
            lat = v[i][1]  # the plot's y axis is latitude
            coordinates.append((lon, lat))  # GeoJSON order is (lon, lat)
        line = LineString(coordinates)
        properties = {
            "stroke-width": 3,
            "stroke": rgb2hex(color[0]),
        }
        line_features.append(Feature(geometry=line, properties=properties))

feature_collection = FeatureCollection(line_features)
geojson_dump = geojson.dumps(feature_collection, sort_keys=True)
with open('out.geojson', 'w') as fileout:
    fileout.write(geojson_dump)
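Alternatively, geojsoncontour exposes helpers that do the whole conversion in one call. A minimal sketch, assuming the geojsoncontour API as documented in its README (check there for the exact signatures):

import geojsoncontour

# convert the contour lines built above straight to a GeoJSON string;
# filled contours can be converted with geojsoncontour.contourf_to_geojson
geojson_string = geojsoncontour.contour_to_geojson(
    contour=contour,
    ndigits=3,       # round coordinates to 3 decimal places
    stroke_width=2,  # stored in each feature's properties
)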

A good way to make sure you export all contours is to use the get_paths method as you iterate over the Collection objects, and then the to_polygons method of Path to get NumPy arrays:
http://matplotlib.org/api/path_api.html?highlight=to_polygons#matplotlib.path.Path.to_polygons
The final formatting, however, is up to you.
import matplotlib.pyplot as plt

cs = plt.contourf(data, levels)
for collection in cs.collections:
    for path in collection.get_paths():
        for polygon in path.to_polygons():
            # each polygon is an (N, 2) array of vertices
            print(polygon.__class__)
            print(polygon)
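For example, the arrays returned by to_polygons can be turned into GeoJSON Polygon features with the geojson package. A minimal sketch, assuming the contour coordinates are already longitude/latitude:

import geojson

features = []
for level, collection in zip(cs.levels, cs.collections):
    for path in collection.get_paths():
        for polygon in path.to_polygons():
            ring = [tuple(pt) for pt in polygon]
            if ring[0] != ring[-1]:
                ring.append(ring[0])  # GeoJSON rings must be closed
            features.append(geojson.Feature(
                geometry=geojson.Polygon([ring]),
                properties={"level": float(level)},
            ))

with open('contours.geojson', 'w') as f:
    geojson.dump(geojson.FeatureCollection(features), f)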

Related

Equivalent of hist()'s layout parameter in sns.pairplot?

I am trying to find the equivalent of hist()'s figsize and layout parameters for sns.pairplot().
I have a pairplot that gives me nice scatter plots between the X's and y. However, it is laid out horizontally, and to my knowledge there is no layout parameter to wrap the plots; 4 plots per row would be great.
This is my current sns.pairplot():
sns.pairplot(X_train,
             x_vars=X_train.select_dtypes(exclude=['object']).columns,
             y_vars=["SalePrice"])
This is what I would like it to look like (source):
num_mask = train_df.dtypes != object
num_cols = train_df.loc[:, num_mask[num_mask == True].keys()]
num_cols.hist(figsize = (30,15), layout = (4,10))
plt.show()
What you want to achieve isn't currently supported by sns.pairplot, but you can use one of the other figure-level functions (sns.displot, sns.catplot, ...). sns.lmplot creates a grid of scatter plots. For this to work, the dataframe needs to be in "long form".
Here is a simple example. sns.lmplot has parameters to leave out the regression line (fit_reg=False), to set the height of the individual subplots (height=...), to set their aspect ratio (aspect=..., where the subplot width will be height times aspect), and many more. If all y ranges are similar, you can keep the default shared y-axis; here it is turned off via facet_kws, which is how newer seaborn versions expect it.
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
# create some test data with different y-ranges
np.random.seed(20230209)
X_train = pd.DataFrame({"".join(np.random.choice([*'uvwxyz'], np.random.randint(3, 8))):
                            np.random.randn(100).cumsum() + np.random.randint(100, 1000)
                        for _ in range(10)})
X_train['SalePrice'] = np.random.randint(10000, 100000, 100)
# convert the dataframe to long form
# 'SalePrice' will get excluded automatically via `melt`
compare_columns = X_train.select_dtypes(exclude=['object']).columns
long_df = X_train.melt(id_vars='SalePrice', value_vars=compare_columns)
# create a grid of scatter plots
g = sns.lmplot(data=long_df, x='SalePrice', y='value', col='variable', col_wrap=4,
               facet_kws={'sharey': False})
g.set(ylabel='')
plt.show()
Here is another example, with histograms of the mpg dataset:
import matplotlib.pyplot as plt
import seaborn as sns
mpg = sns.load_dataset('mpg')
compare_columns = mpg.select_dtypes(exclude=['object']).columns
mpg_long = mpg.melt(value_vars=compare_columns)
g = sns.displot(data=mpg_long, kde=True, x='value', common_bins=False, col='variable', col_wrap=4,
                color='crimson', facet_kws={'sharex': False, 'sharey': False})
g.set(xlabel='')
plt.show()

How to convert 2D DICOM slices to 3D image in Python

I am currently working on a task in which I need to combine DICOM slices into one 3D model using NumPy and Matplotlib (marching cubes, triangulation, or a volume model).
I have tried the method from this website :
https://www.raddq.com/dicom-processing-segmentation-visualization-in-python/
but unfortunately it didn't work out for me:
import pydicom
import numpy as np
import os
import matplotlib.pyplot as plt
import ipywidgets as widgets
from ipywidgets import interact, fixed

filesNew = []
datenSatz = []
output_path = './Head/'

def load_scan(path):
    # dcmread replaces the deprecated pydicom.read_file
    slices = [pydicom.dcmread(path + '/' + s) for s in os.listdir(path)]
    slices.sort(key=lambda x: int(x.InstanceNumber))
    try:
        slice_thickness = np.abs(slices[0].ImagePositionPatient[2] - slices[1].ImagePositionPatient[2])
    except Exception:
        slice_thickness = np.abs(slices[0].SliceLocation - slices[1].SliceLocation)
    for s in slices:
        s.SliceThickness = slice_thickness
    return slices

for s in load_scan('./Head/'):
    h = s.pixel_array
    datenSatz.append(s)  # dataset for the patient
    filesNew.append(h)   # pixel_array

def show_image(image_stack, sliceNumber):
    pxl_ar = image_stack[sliceNumber]
    plt.imshow(pxl_ar, cmap=plt.cm.gray)
    plt.show()

slider = widgets.IntSlider(min=0, max=len(filesNew) - 1, step=1, value=0, continuous_update=False)
interact(show_image, image_stack=fixed(filesNew), sliceNumber=slider);
(Image: DICOM slices visualized with the slider widget.)
There is an example of loading a set of 2D CT slices and building a 3D array.
https://github.com/pydicom/pydicom/blob/master/examples/image_processing/reslice.py
It does not go on to construct the surface, but it should solve the first half of your problem.
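Along the lines of that reslice.py example, here is a minimal sketch of stacking the slices into a 3D NumPy array, assuming all slices share the same size and orientation; a surface could then be extracted from the volume, e.g. with scikit-image's marching cubes:

import os
import numpy as np
import pydicom

def load_volume(path):
    # read every slice in the directory and sort by position along z
    slices = [pydicom.dcmread(os.path.join(path, f)) for f in os.listdir(path)]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    # stack the 2D pixel arrays into a (rows, cols, n_slices) volume
    return np.stack([s.pixel_array for s in slices], axis=-1)

volume = load_volume('./Head/')
print(volume.shape)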

Map a colorbar based on plot instead of imshow

I'm trying to get a colorbar for the following minimal example of my code.
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np

g1 = gridspec.GridSpec(1, 1)
f, ((ax0)) = plt.subplots(1, 1)
ax0 = plt.subplot(g1[0])
cmap = matplotlib.cm.get_cmap('viridis')
for i in np.linspace(0, 1, 11):
    x = [-1, 0, 1]
    y = [i, i, i]
    rgba = cmap(i)
    im = ax0.plot(x, y, color=rgba)
f.colorbar(im)
I also tried f.colorbar(cmap)
Probably pretty obvious, but I get errors such as
'ListedColormap' object has no attribute 'autoscale_None'
In reality, the value defining i is more complex, but I think this example captures the problem. My data is plotted with plot and not with imshow (for which I know how to make the colorbar).
The answers so far seem overly complicated. fig.colorbar() expects a ScalarMappable as its first argument. Often ScalarMappables are produced by imshow or contour plots and are readily available.
In this case you would need to define your custom ScalarMappable to provide to the colorbar.
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
cmap = plt.cm.get_cmap('viridis')
for i in np.linspace(0, 1, 11):
    x = [-1, 0, 1]
    y = [i, i, i]
    rgba = cmap(i)
    im = ax.plot(x, y, color=rgba)

sm = plt.cm.ScalarMappable(cmap=cmap)
sm.set_array([])
fig.colorbar(sm, ax=ax)  # recent matplotlib needs the target axes
plt.show()
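If the values behind i span something other than 0 to 1, attach a matplotlib.colors.Normalize to the ScalarMappable so that the colorbar ticks show the real data range. A small sketch, with a hypothetical range of 0 to 10:

import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import Normalize

fig, ax = plt.subplots()
cmap = plt.get_cmap('viridis')
norm = Normalize(vmin=0, vmax=10)  # hypothetical data range
for val in np.linspace(0, 10, 11):
    # map each value through the norm before looking up its color
    ax.plot([-1, 0, 1], [val] * 3, color=cmap(norm(val)))
sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
fig.colorbar(sm, ax=ax)
plt.show()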
You should pass an Image or ContourSet when you call colorbar on a Figure.
You can make an image of the data points by calling plt.imshow with the data. You can start with this:
data = []
for i in np.linspace(0, 1, 11):
    x = [-1, 0, 1]
    y = [i, i, i]
    rgba = cmap(i)
    ax0.plot(x, y, color=rgba)
    data.append([x, y])

image = plt.imshow(data)
figure.colorbar(image)
plt.show()
Reference:
https://matplotlib.org/api/figure_api.html#matplotlib.figure.Figure.colorbar
Oluwafemi Sule's solution almost works, but it plots the matrix into the same figure as the lines. Here is a solution that opens a second figure, does the imshow call there, uses the result to draw the colorbar in the first figure, and then closes the second figure before calling plt.show():
import matplotlib
from matplotlib import pyplot as plt
from matplotlib import gridspec
import numpy as np

cmap = matplotlib.cm.get_cmap('viridis')
g1 = gridspec.GridSpec(1, 1)
f0, ((ax0)) = plt.subplots(1, 1)
f1, ((ax1)) = plt.subplots(1, 1)

for i in np.linspace(0, 1, 11):
    x = [-1, 0, 1]
    y = [i, i, i]
    rgba = cmap(i)
    ax0.plot(x, y, color=rgba)

data = np.linspace(0, 1, 100).reshape((10, 10))
image = ax1.imshow(data)
f0.colorbar(image)
plt.close(f1)
plt.show()

Get pixels inside a patch

In matplotlib, it's possible to get the pixels inside a polygon using matplotlib.nxutils.points_inside_poly, as long as you have vertices defined beforehand.
How can you get the points inside a patch, e.g. an ellipse?
The problem: if you define a matplotlib ellipse, it has a .get_verts() method, but this returns the vertices in figure (instead of data) units.
One could do:
# there has to be a better way to do this,
# but this gets xy into the form used by points_inside_poly
xy = np.array([(x,y) for x,y in zip(pts[0].ravel(),pts[1].ravel())])
inds = np.array([E.contains_point((x,y)) for x,y in xy], dtype='bool')
However, this is very slow since it's looping in python instead of C.
Use ax.transData.transform() to transform your points, and then use points_inside_poly():
import pylab as pl
import matplotlib.patches as mpatches
from matplotlib.nxutils import points_inside_poly
import numpy as np
fig, ax = pl.subplots(1, 1)
ax.set_aspect("equal")
e = mpatches.Ellipse((1, 2), 3, 1.5, alpha=0.5)
ax.add_patch(e)
ax.relim()
ax.autoscale()
p = e.get_path()
points = np.random.normal(size=(1000, 2))
polygon = e.get_verts()
tpoints = ax.transData.transform(points)
inpoints = points[points_inside_poly(tpoints, polygon)]
sx, sy = inpoints.T
ax.scatter(sx, sy)
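Note that matplotlib.nxutils was removed in matplotlib 1.3. On modern matplotlib the same idea can be expressed with Path.contains_points, which also avoids transforming to display coordinates; a sketch of the equivalent:

import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import numpy as np

fig, ax = plt.subplots()
ax.set_aspect("equal")
e = mpatches.Ellipse((1, 2), 3, 1.5, alpha=0.5)
ax.add_patch(e)
ax.relim()
ax.autoscale()

points = np.random.normal(size=(1000, 2))
# get_path() returns the unit circle; composing it with the patch
# transform lets the containment test run in data coordinates
path = e.get_patch_transform().transform_path(e.get_path())
inside = points[path.contains_points(points)]
ax.scatter(*inside.T)
plt.show()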

Matplotlib histogram with errorbars

I have created a histogram with matplotlib using the pyplot.hist() function. I would like to add a Poisson error, the square root of the bin height (sqrt(binheight)), to the bars. How can I do this?
The return tuple of .hist() includes return[2] -> a list of Patch objects. I could only find out that it is possible to add errors to bars created via pyplot.bar().
Indeed you need to use bar. You can use the output of np.histogram and plot it as a bar chart:
import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(1000)
y, binEdges = np.histogram(data, bins=10)
bincenters = 0.5 * (binEdges[1:] + binEdges[:-1])
menStd = np.sqrt(y)  # Poisson error per bin
width = 0.05
plt.bar(bincenters, y, width=width, color='r', yerr=menStd)
plt.show()
Alternative Solution
You can also use pyplot.errorbar() with the drawstyle keyword argument. The code below draws the histogram as a stepped line plot, with a marker at the center of each bin and the requisite Poisson error bar on each bin.
import numpy
from matplotlib import pyplot

x = numpy.random.rand(1000)
y, bin_edges = numpy.histogram(x, bins=10)
bin_centers = 0.5 * (bin_edges[1:] + bin_edges[:-1])
pyplot.errorbar(
    bin_centers,
    y,
    yerr=y**0.5,
    marker='.',
    drawstyle='steps-mid',
)
pyplot.show()
My personal opinion
When plotting the results of multiple histograms on the same figure, line plots are easier to distinguish. In addition, they look nicer when plotting with yscale='log'.
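For illustration, a small sketch overlaying two histograms as stepped error-bar plots on a log scale (the sample data is invented for the example):

import numpy as np
from matplotlib import pyplot

for sample, label in [(np.random.rand(1000), 'uniform'),
                      (np.random.beta(2, 5, 1000), 'beta(2, 5)')]:
    y, bin_edges = np.histogram(sample, bins=10)
    bin_centers = 0.5 * (bin_edges[1:] + bin_edges[:-1])
    pyplot.errorbar(bin_centers, y, yerr=y**0.5, marker='.',
                    drawstyle='steps-mid', label=label)
pyplot.yscale('log')
pyplot.legend()
pyplot.show()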