cartopy - set_extent() extending requested boundary

I'm new to cartopy and still learning the basic features.
I tried to plot a specific region; however, cartopy extended this region and produced a map going up to approximately 85°N when I requested 80°N. Is there a way I can ensure I only get the region I am interested in?
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

plt.figure(figsize=(5.12985642927, 3))
ax = plt.axes(projection=ccrs.PlateCarree(central_longitude=-35))
ax.set_extent([-100, 30, 0, 80])
ax.coastlines(resolution='110m')
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
                  linewidth=2, color='gray', alpha=0.5, linestyle='--')
(figure: PC regional map)

You should tell the set_extent method which coordinate system you are specifying the extents in, in this case:
ax.set_extent([-100, 30, 0, 80], crs=ccrs.PlateCarree())
This method is preferred in cartopy because it avoids set_xlim/set_ylim, which always operate in projection coordinates and can be the cause of much confusion when working with projections other than PlateCarree(). Using set_extent with an explicit crs will always do what you expect, regardless of the projection of your plot.
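For completeness, a minimal sketch of the question's snippet with that fix applied:

import matplotlib.pyplot as plt
import cartopy.crs as ccrs

plt.figure(figsize=(5.13, 3))
ax = plt.axes(projection=ccrs.PlateCarree(central_longitude=-35))
# Extents are given in lon/lat, so pass PlateCarree() as the crs
ax.set_extent([-100, 30, 0, 80], crs=ccrs.PlateCarree())
ax.coastlines(resolution='110m')
ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
             linewidth=2, color='gray', alpha=0.5, linestyle='--')
plt.show()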

Have you tried ax.set_xlim to set the upper and lower limits of the x-axis? Note that it operates in projection coordinates; a usage sketch follows below.
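For example, a sketch using the axes from the question. Note the +35° shift: with PlateCarree(central_longitude=-35), the projection's x-coordinate is longitude minus the central longitude, i.e. longitude + 35.

# set_xlim/set_ylim operate in projection coordinates; here x = longitude + 35
ax.set_xlim(-65, 65)  # true longitudes -100 .. 30
ax.set_ylim(0, 80)    # latitudes 0 .. 80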


Elbow Method for GaussianMixture

I'd like to plot an elbow curve for a GMM to determine the optimal number of clusters. I'm using the mean distance of each point from its cluster's center (via gmm.means_), but I'm not getting a typical elbow plot. Any ideas?
import numpy as np
import matplotlib.pyplot as plt
from sklearn.mixture import GaussianMixture
from scipy.spatial.distance import cdist

def elbow_report(X):
    meandist = []
    n_clusters = range(2, 15)
    for n_cluster in n_clusters:
        gmm = GaussianMixture(n_components=n_cluster)
        gmm.fit(X)
        meandist.append(
            sum(
                np.min(
                    cdist(X, gmm.means_, 'mahalanobis', VI=gmm.precisions_),
                    axis=1
                ),
                X.shape[0]
            )
        )
    plt.plot(n_clusters, meandist, 'bx-')
    plt.xlabel('Number of Clusters')
    plt.ylabel('Mean Mahalanobis Distance')
    plt.title('GMM Clustering for n_cluster=2 to 15')
    plt.show()
I played around with some test data and your function. Here are my findings and suggestions:
1. Minor bug
I believe there might be a little bug in your code. Change the , X.shape[0] to / X.shape[0] in the function to compute the mean distance. In particular,
meandist.append(
    sum(
        np.min(
            cdist(X, gmm.means_, 'mahalanobis', VI=gmm.precisions_),
            axis=1
        ) / X.shape[0]
    )
)
When creating test data, e.g.
import numpy as np
import random
from matplotlib import pyplot as plt

means = [[-5, -5, -5], [6, 6, 6], [0, 0, 0]]
sigmas = [0.4, 0.4, 0.4]
sizes = [500, 500, 500]

L = [np.random.multivariate_normal(mean=np.array(loc), cov=scale*np.eye(len(loc)), size=size).tolist()
     for loc, scale, size in zip(means, sigmas, sizes)]
L = [x for l in L for x in l]
random.shuffle(L)

# design matrix
X = np.array(L)
elbow_report(X)
the output looks somewhat reasonable.
2. y-axis in log-scale
Sometimes a bad fit for one particular n_cluster value can throw off the entire plot, in particular when the metric is the sum rather than the mean of the distances. Adding plt.yscale("log") can help tame such outliers and keep the visualization readable; see the sketch below.
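For instance, a minimal sketch of the plotting lines inside elbow_report with a log-scaled y-axis (reusing n_clusters and meandist from the function above):

plt.plot(n_clusters, meandist, 'bx-')
plt.yscale("log")  # tame outlier fits so the rest of the curve stays readable
plt.xlabel('Number of Clusters')
plt.ylabel('Mean Mahalanobis Distance (log scale)')
plt.show()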
3. Optimization instability during fitting
Note that you compute the in-sample error, since gmm is fitted on the same data X on which the metric is subsequently evaluated. Leaving aside stability issues in the underlying optimization of the fitting procedure, the more clusters there are, the better the fit should be (and, in turn, the lower the errors/distances). In the extreme case, each data point gets its own cluster center and the average distance approaches 0. I assume this decay is what you hope to observe as the elbow.
Regardless, the lower effective sample size per cluster makes the optimization unstable. So rather than seeing an exponential decay toward 0, you see occasional spikes even far along the x-axis. I cannot judge how severe this issue truly is in your case, as you didn't provide sample sizes. In any case, when the sample size of the data is of the same order of magnitude as n_clusters, and/or the intra-class/inter-class heterogeneity is large, this becomes an issue.
4. Simulated vs. real data
This brings us to the final (catch-all) point. I'd suggest checking the plot on simulated data to get a feeling for when things break. The simulated data above (multivariate Gaussian, isotropic noise, etc.) fits the assumptions to a T. However, some plots still look wonky (even when the sample size is moderately high and the volatility somewhat low). Unfortunately, textbook-like plots are hard to come by on real data. As my former statistics professor put it: "real-world data is dirty." In turn, the plots will be, too.

How to apply matplotlib quiver autoscale to two vector fields?

I am plotting two vector fields on top of each other and I want to use the auto-scale feature to set the arrow size such that the two fields are at the same scale automatically. (Part of this notebook.)
If I plot them one after the other, they are drawn at different scales; in this case the black arrows are artificially inflated compared to the green ones.
plt.quiver(*XY, *np.real(UV))
plt.quiver(*XY, *np.imag(UV), color='g')
If I use this solution, the first plot sets the scale for the second plot, but this fails to take the scale of the second field into account. If the first field has a small magnitude compared to the second, the result looks terrible.
Q = plt.quiver(*XY, *np.real(UV))
Q._init()
plt.quiver(*XY, *np.imag(UV), scale=Q.scale, color='g')
I want to set the auto-scale based on both fields, not just one or the other. Ideas?
You need to pass the same scale argument to both plt.quiver calls.
If you don't provide a scale, then a visually pleasing one is derived automatically. So you could in principle extract the autoscaling code and use it to get the automatic scales for both quiver plots, then use, for instance, the average of the two values.
Another, easier way is to first invisibly plot both quiver plots using the do-nothing backend 'template', retrieve the automatically calculated scales, and use their average in both real plotting calls:
def plot_flow(x, y, q, XY, G=source, args=(), size=(7, 7), ymax=None):
    "Plot the geometry and induced velocity field"
    # Loop through segments, superimposing the velocity
    def uv(i):
        return q[i]*velocity(*XY, x[i], y[i], x[i+1], y[i+1], G, args)
    UV = sum(uv(i) for i in range(len(x)-1))

    def get_scale(XY, UV):
        """Get autoscale value by plotting to do-nothing backend."""
        backend = plt.matplotlib.get_backend()
        plt.matplotlib.use('template')
        Q = plt.quiver(*XY, *UV, scale=None)
        plt.matplotlib.use(backend)
        Q._init()
        return Q.scale

    # Get autoscales
    scale_real = get_scale(XY, np.real(UV))
    scale_imag = get_scale(XY, np.imag(UV)) if np.iscomplexobj(UV) else scale_real
    scale = (scale_real + scale_imag)/2

    # Create plot
    plt.figure(figsize=size)
    ax = plt.axes()
    ax.set_aspect('equal', adjustable='box')

    # Plot vectors and segments
    plt.quiver(*XY, *np.real(UV), scale=scale)
    if np.iscomplexobj(UV):
        plt.quiver(*XY, *np.imag(UV), scale=scale, color='g')
    plt.plot(x, y, c='b')
    plt.ylim(None, ymax)
In the example, we get a scale of about 7.7 as the average of 12.2 and 3.3.
Normalizing the data before plotting can help to get similar arrow sizes for the two fields:
scale = 1
UV_real = np.real(UV) / np.linalg.norm(UV)
UV_imag = np.imag(UV) / np.linalg.norm(UV)
Q1 = plt.quiver(*XY, *UV_real, scale=scale)
Q2 = plt.quiver(*XY, *UV_imag, scale=scale, color='g')
Tested for multiple magnitude ratios between real and imaginary parts.

How to fix the y-axis scale in a plot

I am using a fitted line to estimate the slope of my graphs. The data sets are the same size, but look at these two pictures: the first one seems to have a larger slope, but that is not true; the second one has the larger slope. Because the y-axes have different scales, the first one merely looks steeper. Is there a way to fix the scale of the y-axis, so that I can see by eye which one has the bigger slope?
code:
import numpy as np
import pandas as pd

x = np.arange(df.shape[0])  # = array([0, 1, 2, ..., 3598, 3599, 3600])
df1[skill] = pd.to_numeric(df1[skill])
fit = np.polyfit(x, df1[skill], 1)
fit_fn = np.poly1d(fit)
df['fit_fn(x)'] = fit_fn(x)
df[['Hodrick-Prescott filter', skill, 'fit_fn(x)']].plot(title=skill + date)
Two ways:
One, use matplotlib.pyplot.axis to get the axis limits of the first figure and set the second figure to the same limits (using the same function); you could also use get_ylim and set_ylim, which are specific to the y-axis but require referencing the Axes object directly.
Two, plot both in a single figure of subplots and set the argument sharey to True (my preferred way, depending on the desired use). Both ways are sketched below.
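A minimal sketch of both ways with made-up stand-in data (y1 and y2 are hypothetical, standing in for your two series):

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(100)
y1 = 0.5 * x + np.random.randn(100)  # smaller slope
y2 = 2.0 * x + np.random.randn(100)  # larger slope

# Way one: copy the y-limits of the first Axes onto the second
fig1, ax1 = plt.subplots()
ax1.plot(x, y2)
fig2, ax2 = plt.subplots()
ax2.plot(x, y1)
ax2.set_ylim(ax1.get_ylim())  # same y-scale as the first figure

# Way two: one figure whose subplots share the y-axis
fig, (axa, axb) = plt.subplots(1, 2, sharey=True)
axa.plot(x, y1)
axb.plot(x, y2)
plt.show()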

How to visualize 3d sun position (for solar power monitoring software)?

I'm working on a little hobby Raspberry Pi project: measuring the power and energy that comes from a solar panel.
I'm looking for a better way of visualising the sun's position.
My best idea so far that is easy to implement is something like this:
I found something really good:
(image source: link)
but I feel this is a bit too hard to implement.
I'm looking for some kind of compromise between these two: easy for the user to read, and not so hard to implement.
A bit lacking in requirements, but I like your first approach. I'm assuming the requirements include a terminal-based interface, so I think you should use ASCII to render it. ;-)
      *
       \
        \
    50˚( \
---------+---------
E        N        W
Seriously, perhaps a graph with an X/Y axis showing the altitude and azimuth, combined with the first approach? Perhaps a graph similar to one of the ones on this page, showing the progression of the sun today? (See the sketch below.)
P.S. I'm marking this community wiki since I think this is, sadly, off-topic. =( You won't get MY close vote though!
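To illustrate the idea, a minimal matplotlib sketch with a made-up altitude curve (a real version would plug in computed solar altitudes, e.g. from Pysolar as in the next answer):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical altitude curve: sunrise at 6:00, solar noon at 12:00, sunset at 18:00
hours = np.linspace(0, 24, 200)
altitude = 60 * np.sin(np.pi * (hours - 6) / 12)
altitude[altitude < 0] = 0  # clip when the sun is below the horizon

plt.plot(hours, altitude)
plt.axvline(12, color='gray', linestyle='--')  # solar noon
plt.xlabel('Hour of day')
plt.ylabel('Solar altitude (degrees)')
plt.title("Today's sun progression")
plt.show()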
Have you tried matplotlib in combination with Pysolar? With Pysolar you can easily get the azimuth and altitude of the sun, and with matplotlib you can then draw an image from them.
This is how I would do it:
from datetime import datetime
import Pysolar

latitude, longitude = 53.280223, 12.236105
tilt_pv = 36.16    # tilt of the PV panel
azimuth_pv = 180.  # north-south alignment of the PV panel; 180° means it faces south
baseDateTime = datetime(2015, 6, 9, 12, 0, 0)  # timestamp for 9 June 2015, 12:00 UTC

altitude = Pysolar.GetAltitude(latitude, longitude, baseDateTime)  # solar elevation angle
azimuth = Pysolar.GetAzimuth(latitude, longitude, baseDateTime)
That will give you the solar position. That should then go into your matplotlib code to plot it:
from mpl_toolkits.axes_grid.axislines import SubplotZero
import matplotlib.pyplot as plt
import numpy as np

if 1:
    fig = plt.figure(1)
    ax = SubplotZero(fig, 111)
    fig.add_subplot(ax)

    for direction in ["xzero", "yzero"]:
        ax.axis[direction].set_axisline_style("-|>")
        ax.axis[direction].set_visible(True)

    for direction in ["left", "right", "bottom", "top"]:
        ax.axis[direction].set_visible(False)

    # straight line from the origin to (altitude, azimuth) in 1000 steps
    x = np.linspace(0., altitude, 1000)
    y = np.linspace(0., azimuth, 1000)
    ax.plot(x, y)

    plt.show()
And while we are at it: I have written a Python program to forecast solar energy from the GFS weather model (I use global radiation, wind speed, and temperature), which is freely available. Would you be interested in running and testing it? I want to see if it is any good, or where I need to tune its performance.

pyplot scatter plot marker size

In the pyplot documentation for scatter:
matplotlib.pyplot.scatter(x, y, s=20, c='b', marker='o', cmap=None, norm=None,
vmin=None, vmax=None, alpha=None, linewidths=None,
faceted=True, verts=None, hold=None, **kwargs)
The marker size
s:
size in points^2. It is a scalar or an array of the same length as x and y.
What kind of unit is points^2? What does it mean? Does s=100 mean 10 pixel x 10 pixel?
Basically, I'm trying to make scatter plots with different marker sizes, and I want to figure out what the s number means.
This can be a somewhat confusing way of defining the size, but you are basically specifying the area of the marker. This means that to double the width (or height) of the marker you need to increase s by a factor of 4 [because A = WH, so (2W)(2H) = 4A].
There is a reason, however, that the size of markers is defined in this way. Because of the scaling of area as the square of width, doubling the width actually appears to increase the size by more than a factor 2 (in fact it increases it by a factor of 4). To see this consider the following two examples and the output they produce.
import matplotlib.pyplot as plt

# doubling the width of markers
x = [0, 2, 4, 6, 8, 10]
y = [0] * len(x)
s = [20 * 4**n for n in range(len(x))]
plt.scatter(x, y, s=s)
plt.show()
gives
Notice how the size increases very quickly. If instead we have
# doubling the area of markers
x = [0,2,4,6,8,10]
y = [0]*len(x)
s = [20*2**n for n in range(len(x))]
plt.scatter(x,y,s=s)
plt.show()
gives
Now the apparent size of the markers increases roughly linearly in an intuitive fashion.
As for the exact meaning of what a 'point' is: it is fairly arbitrary for plotting purposes, and you can simply scale all of your sizes by a constant until they look reasonable.
Edit (in response to the comment from @Emma):
It's probably confusing wording on my part. The question asked about doubling the width of a circle, so in the first picture each circle (moving from left to right) has double the width of the previous one, which for the area is an exponential with base 4. Similarly, in the second example each circle has double the area of the last one, which gives an exponential with base 2.
However, it is the second example (where we are scaling the area) in which doubling the area appears to make the circle twice as big to the eye. Thus, if we want a circle to appear a factor of n bigger, we should increase the area by a factor of n, not the radius; the apparent size scales linearly with the area.
Edit (to visualize the comment by @TomaszGandor):
This is what it looks like for different functions of the marker size:
x = [0,2,4,6,8,10,12,14,16,18]
s_exp = [20*2**n for n in range(len(x))]
s_square = [20*n**2 for n in range(len(x))]
s_linear = [20*n for n in range(len(x))]
plt.scatter(x,[1]*len(x),s=s_exp, label='$s=2^n$', lw=1)
plt.scatter(x,[0]*len(x),s=s_square, label='$s=n^2$')
plt.scatter(x,[-1]*len(x),s=s_linear, label='$s=n$')
plt.ylim(-1.5,1.5)
plt.legend(loc='center left', bbox_to_anchor=(1.1, 0.5), labelspacing=3)
plt.show()
Because other answers here claim that s denotes the area of the marker, I'm adding this answer to clarify that this is not necessarily the case.
Size in points^2
The argument s in plt.scatter denotes the markersize**2. As the documentation says
s : scalar or array_like, shape (n, ), optional
size in points^2. Default is rcParams['lines.markersize'] ** 2.
This can be taken literally. In order to obtain a marker which is x points large, you need to square that number and give it to the s argument.
So the relationship between the markersize of a line plot and the scatter size argument is the square. In order to produce a scatter marker of the same size as a plot marker of size 10 points you would hence call scatter( .., s=100).
import matplotlib.pyplot as plt
fig,ax = plt.subplots()
ax.plot([0],[0], marker="o", markersize=10)
ax.plot([0.07,0.93],[0,0], linewidth=10)
ax.scatter([1],[0], s=100)
ax.plot([0],[1], marker="o", markersize=22)
ax.plot([0.14,0.86],[1,1], linewidth=22)
ax.scatter([1],[1], s=22**2)
plt.show()
Connection to "area"
So why do other answers and even the documentation speak about "area" when it comes to the s parameter?
Of course the units of points**2 are area units.
For the special case of a square marker, marker="s", the area of the marker is indeed directly the value of the s parameter.
For a circle, the area of the circle is area = pi/4*s.
For other markers there may not even be any obvious relation to the area of the marker.
In all cases however the area of the marker is proportional to the s parameter. This is the motivation to call it "area" even though in most cases it isn't really.
Specifying the size of the scatter markers in terms of a quantity that is proportional to the marker's area makes sense insofar as it is the area of the marker that is perceived when comparing different patches, rather than its side length or diameter. That is, doubling the underlying quantity should double the area of the marker.
What are points?
So far, the answer to what the size of a scatter marker means has been given in units of points. Points are often used in typography, where fonts are specified in points. Line widths are also often specified in points. In matplotlib there are 72 points per inch (ppi), so 1 point is 1/72 inches.
It might be useful to be able to specify sizes in pixels instead of points. If the figure dpi is 72 as well, one point is one pixel. If the figure dpi is different (matplotlib default is fig.dpi=100),
1 point == fig.dpi/72. pixels
While the scatter marker's size in points would hence look different for different figure dpi, one could produce a 10 by 10 pixels^2 marker, which would always have the same number of pixels covered:
import matplotlib.pyplot as plt

for dpi in [72, 100, 144]:
    fig, ax = plt.subplots(figsize=(1.5, 2), dpi=dpi)
    ax.set_title("fig.dpi={}".format(dpi))
    ax.set_ylim(-3, 3)
    ax.set_xlim(-2, 2)
    ax.scatter([0], [1], s=10**2,
               marker="s", linewidth=0, label="100 points^2")
    ax.scatter([1], [1], s=(10*72./fig.dpi)**2,
               marker="s", linewidth=0, label="100 pixels^2")
    ax.legend(loc=8, framealpha=1, fontsize=8)
    fig.savefig("fig{}.png".format(dpi), bbox_inches="tight")

plt.show()
If you are interested in a scatter in data units, check this answer.
You can use markersize to specify the size of the circle with the plot method:
import numpy as np
import matplotlib.pyplot as plt
x1 = np.random.randn(20)
x2 = np.random.randn(20)
plt.figure(1)
# you can specify the marker size in two ways directly:
plt.plot(x1, 'bo', markersize=20)  # blue circles with size 20
plt.plot(x2, 'ro', ms=10)          # ms is just an alias for markersize
plt.show()
From here
It is the area of the marker. That is, if you have s1 = 1000 and s2 = 4000, the relation between the radius of each circle is r_s2 = 2 * r_s1, since the radius scales as the square root of s (sqrt(4000/1000) = 2). See the following plot:
plt.scatter(2, 1, s=4000, c='r')
plt.scatter(2, 1, s=1000 ,c='b')
plt.scatter(2, 1, s=10, c='g')
I had the same doubt when I saw the post, so I did this example then I used a ruler on the screen to measure the radii.
I also attempted to use scatter initially for this purpose. After quite a bit of wasted time, I settled on the following solution.
import matplotlib.pyplot as plt

input_list = [{'x': 100, 'y': 200, 'radius': 50, 'color': (0.1, 0.2, 0.3)}]
output_list = []

for point in input_list:
    output_list.append(plt.Circle((point['x'], point['y']), point['radius'],
                                  color=point['color'], fill=False))

ax = plt.gca()
ax.cla()
ax.set_aspect('equal')
ax.set_xlim((0, 1000))
ax.set_ylim((0, 1000))

for circle in output_list:
    ax.add_artist(circle)
This is based on an answer to this question
The drawn width of a circle corresponds to the square root of the s parameter, so passing s = [1, 2, 3, 4, 5] gives relative widths of [1, 1.414, 1.73, 2.0, 2.24]. If you want the apparent sizes to follow a linear progression, output = [1, 2, 3, 4, 5], square each element as it goes into the plot; conversely, a list comprehension s = [numpy.sqrt(i) for i in s] recovers the widths from the areas. A sketch of the relationship follows.
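A minimal sketch with made-up numbers: since s is an area-like quantity, the drawn width goes as the square root of s, so squaring the desired widths makes the apparent sizes grow linearly.

import numpy as np
import matplotlib.pyplot as plt

desired_widths = [1, 2, 3, 4, 5]          # intended relative marker widths
s = [(6 * w)**2 for w in desired_widths]  # square them, since s is in points^2

plt.scatter(range(len(s)), [0] * len(s), s=s)
plt.show()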