I am trying to map data on a sphere, but I can't get rid of a seam where it wraps around at azimuthal angle phi = 2*pi. I wrote a simple example to show the problem:
from mayavi import mlab
import numpy as np
data = np.empty([24, 25])
for ldx, line in enumerate(data):
    for cdx, col in enumerate(line):
        data[ldx, cdx] = ldx
phi_range = np.linspace(0.0, 2 * np.pi, 25)
theta_range = np.linspace(-np.pi / 2, np.pi / 2, 24)
phis, thetas = np.meshgrid(phi_range, theta_range)
x = np.cos(thetas) * np.cos(phis)
y = np.cos(thetas) * np.sin(phis)
z = np.sin(thetas)
mlab.figure(1, bgcolor=(1, 1, 1), fgcolor=(0, 0, 0), size=(400, 300))
mlab.clf()
mlab.mesh(x, y, z, scalars=data, colormap='jet')
mlab.view()
mlab.show()
It looks as though Mayavi is not able to calculate GL vertex normals at those points, because it does not know that the mesh is periodic.
I just found one solution that does not solve the root of the problem (unless it's not a problem and I'm just using Mayavi wrong).
By using scipy.ndimage.zoom I can up-sample my data array and thus increase the number of rendered faces. Then, the difference between face and vertex normals becomes so small that the seam is not noticeable anymore.
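For reference, here is a minimal sketch of that workaround applied to the arrays from the example above (the zoom factor of 4 and up-sampling the coordinate grids alongside the data are my own choices):
from scipy.ndimage import zoom

# Up-sample the scalar data and the coordinate grids by the same factor,
# so the rendered mesh has many more, smaller faces.
factor = 4
data_hi = zoom(data, factor, order=1)
phis_hi = zoom(phis, factor, order=1)
thetas_hi = zoom(thetas, factor, order=1)

x_hi = np.cos(thetas_hi) * np.cos(phis_hi)
y_hi = np.cos(thetas_hi) * np.sin(phis_hi)
z_hi = np.sin(thetas_hi)
mlab.mesh(x_hi, y_hi, z_hi, scalars=data_hi, colormap='jet')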
Related
I am having difficulty interpreting the results of arctangent functions. This behaviour is consistent across all implementations I have come across, so I will limit myself here to NumPy and MATLAB.
The idea is to have a circle of randomly placed points. The goal is to represent their positions in a polar coordinate system and, since they are uniformly distributed, I expect the θ angle (which is calculated using the atan2 function) to also be uniformly distributed over the interval -π ... π.
Here is the code for MATLAB:
stp = 2*pi/2^8;
siz = 100;
num = 100000000;
x = randi([-siz, siz], [1, num]);
y = randi([-siz, siz], [1, num]);
m = (x.^2+y.^2) < siz^2;
[t, ~] = cart2pol(x(m), y(m));
figure()
histogram(t, -pi:stp:pi);
And here for Python & NumPy:
import numpy as np
import matplotlib.pyplot as pl
siz = 100
num = 100000000
rng = np.random.default_rng()
x = rng.integers(low=-siz, high=siz, size=num, endpoint=True)
y = rng.integers(low=-siz, high=siz, size=num, endpoint=True)
m = (x**2+y**2) < siz**2
t = np.arctan2(y[m], x[m])
pl.hist(t, range=[-np.pi, np.pi], bins=2**8)
pl.show()
In both cases I got results looking like this, where one can easily see "steps" for each multiple of π/4.
It looks like some sort of precision error, but strangely at angles where I would not expect it. This behaviour is also present for the ordinary atan function.
Notice that you are using integers.
For a pair (p, q) with g = gcd(p, q), every integer multiple of the primitive direction (p/g, q/g) gives the same angle arctan2(q, p), and there are floor(r*g / sqrt(p**2 + q**2)) such pairs inside the circle of radius r. For a primitive pair itself, g = 1, so the count is simply floor(r / sqrt(p**2 + q**2)).
Notice also that p**2 + q**2 is 1 for the primitive directions along the multiples of pi/2 and 2 for those along the odd multiples of pi/4, so we can predict that the multiples of pi/2 will collect more points than the odd multiples of pi/4, which in turn collect more than most other directions. This agrees with what we see in your plot.
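Here is a quick sketch (not part of the example below) that checks this counting argument in the first quadrant for r = 10:
from collections import Counter
from math import gcd, floor, sqrt

R = 10

# Count, for each primitive direction (p, q), how many lattice points inside
# the quarter circle of radius R share that angle.
counts = Counter()
for x in range(R + 1):
    for y in range(R + 1):
        if (x, y) != (0, 0) and x*x + y*y <= R*R:
            g = gcd(x, y)
            counts[(x // g, y // g)] += 1

# The prediction floor(R / sqrt(p**2 + q**2)) matches the brute-force counts:
print(counts[(1, 0)], floor(R / sqrt(1)))    # direction 0:        10 and 10
print(counts[(1, 1)], floor(R / sqrt(2)))    # direction pi/4:      7 and 7
print(counts[(2, 1)], floor(R / sqrt(5)))    # direction atan(1/2): 4 and 4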
Example
Let's plot the points with integer coordinates that lie in a circle of radius 10.
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter
def gcd(a, b):
    if a == 0 or b == 0:
        return max(a, b)
    while b != 0:
        a, b = b, a % b
    return a
R = 10
x,y = np.indices((R+1, R+1))
m = (x**2 + y**2) <= R**2
x,y = x[m], y[m]
t = np.linspace(0, np.pi / 2)
plt.figure(figsize=(6, 6))
plt.plot(x, y, 'o')
plt.plot(R * np.cos(t), R * np.sin(t))
# Group points by primitive direction; skip the origin, which has no angle.
lines = Counter((xi // gcd(xi, yi), yi // gcd(xi, yi))
                for xi, yi in zip(x, y) if (xi, yi) != (0, 0))
plt.axis('off')
for (x, y), f in lines.items():
    if f != 1:
        r = np.sqrt(x**2 + y**2)
        plt.plot([0, R*x/r], [0, R*y/r], alpha=0.25)
        plt.text(R*1.03*x/r, R*1.03*y/r, f'{int(y)}/{int(x)}: {f}')
Here you can see on the plot that a few points share the same angle as others. For 45 degrees there are 7 points, and for the multiples of 90 degrees there are 10; many of the points have a unique angle.
Basically you have many angles that hit few points and a few angles that hit many points.
But overall the points are distributed nearly uniformly with respect to angle. Below I plot the cumulative frequency, which is nearly a straight line (what it would be if the distribution were uniform), while the bin frequencies form a triangular, fractal-like pattern.
R = 20
x,y = np.indices((R+1, R+1))
m = (x**2 + y**2) <= R**2
x,y = x[m], y[m]
plt.figure(figsize=(6,6))
plt.subplot(211)
plt.plot(np.sort(np.arctan2(x,y))*180/np.pi, np.arange(len(x)), '.', markersize=1)
plt.subplot(212)
plt.plot(np.arctan2(x,y)*180/np.pi, np.gcd(x,y), '.', markersize=4)
If the size of the circle increases and you do a histogram with sufficiently wide bins you will not notice the variations, otherwise you will see this pattern in the histogram.
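For example (a sketch reusing t and pl from the question's NumPy snippet), replacing the 2**8 bins with much wider ones averages the excess at the special rational directions over many neighbouring directions:
# Same histogram as in the question, but with 2**4 wide bins instead of 2**8.
pl.hist(t, range=[-np.pi, np.pi], bins=2**4)
pl.show()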
I would like to draw a surface and some of its iso-z contours, using the plot_surface and contour3D functions of mplot3d. Here is an example (I would like to use it to illustrate Lagrange points in physics):
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
epsilon, r1 = 0.3, 1
r2 = epsilon*r1
Omega2 = 1/(r1*pow(r1+r2, 2))
u = np.linspace(-2, 2, 100)
x , y = np.meshgrid(u, u)
z = -epsilon/np.sqrt(np.power(x-r1, 2)+ np.power(y, 2)) - 1/np.sqrt(np.power(x+r2, 2)+ np.power(y, 2)) - 0.5*Omega2*(np.power(x, 2) + np.power(y, 2))
z = np.clip(z, -3, 0)
ax.plot_surface(x, y, z, rstride=1, cstride=1, antialiased=True, color="whitesmoke")
ax.contour3D(x, y, z+0.01, levels=np.arange(-2, -1, 0.1))
plt.show()
In the resulting plot, the contours do not show properly:
Image obtained by the code
and as the figure is interactively rotated, they randomly appear and disappear, with a wrong estimation of which parts should be hidden by the surface:
Example of figure obtained by interactive rotation
This was noticed 4 years ago but no solution was suggested. Hence my questions:
Is it still, 4 years later, considered a limitation of the plotting capabilities of matplotlib? And is there an alternative way, using some other graphics library?
In matplotlib, it's possible to get the pixels inside a polygon using matplotlib.nxutils.points_inside_poly, as long as you have vertices defined beforehand.
How can you get the points inside a patch, e.g. an ellipse?
The problem: if you define a matplotlib ellipse, it has a .get_verts() method, but this returns the vertices in figure (instead of data) units.
One could do:
# pts holds the x and y coordinate arrays of the candidate points,
# and E is the Ellipse patch.
# There has to be a better way to do this, but this gets xy into the
# form used by points_inside_poly.
xy = np.array([(x, y) for x, y in zip(pts[0].ravel(), pts[1].ravel())])
inds = np.array([E.contains_point((x, y)) for x, y in xy], dtype='bool')
However, this is very slow since it's looping in python instead of C.
Use ax.transData.transform() to transform your points to display coordinates, and then use points_inside_poly():
import pylab as pl
import matplotlib.patches as mpatches
from matplotlib.nxutils import points_inside_poly
import numpy as np
fig, ax = pl.subplots(1, 1)
ax.set_aspect("equal")
e = mpatches.Ellipse((1, 2), 3, 1.5, alpha=0.5)
ax.add_patch(e)
ax.relim()
ax.autoscale()
p = e.get_path()
points = np.random.normal(size=(1000, 2))
polygon = e.get_verts()
tpoints = ax.transData.transform(points)
inpoints = points[points_inside_poly(tpoints, polygon)]
sx, sy = inpoints.T
ax.scatter(sx, sy)
result:
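Note that matplotlib.nxutils was removed in later matplotlib releases. A minimal sketch of the same idea using matplotlib.path.Path.contains_points instead, assuming the e, ax and points objects from the snippet above:
from matplotlib.path import Path

# Same display-coordinate trick as above, with Path.contains_points
# replacing nxutils.points_inside_poly.
polygon = Path(e.get_verts())             # vertices are in display units
tpoints = ax.transData.transform(points)  # data -> display coordinates
inpoints = points[polygon.contains_points(tpoints)]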
I have this binary image:
as a NumPy array of 0 and 1 values.
I want to sample it on every 10th pixel along the path, like:
I know how to sample orthogonal objects by slicing the array, but I don't know what to do with an irregular shape to get an evenly distributed set of "points".
You can use OpenCV to find the path with findContours. Here is the demo code; x and y are the coordinates of the sampled pixels on the path.
import numpy as np
import cv2
import pylab as pl
img = cv2.imread("img.png")
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret,img = cv2.threshold(img,127,255,0)
r = cv2.findContours(255 - img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
c = r[0][0]  # first contour (assuming findContours returns (contours, hierarchy))
x, y = c[::10, 0, 0], c[::10, 0, 1]
pl.figure(figsize=(10, 10))
pl.imshow(img, cmap="gray", interpolation="nearest")
pl.plot(x, y, "o")
Here is the output:
I just select one point every 10 points along the path, so the distance between two nearby points is not the same. But you can use some interpolation method to convert the path to a smooth curve, and then find equidistant points, as sketched below.
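One possible way to do that (a sketch, assuming c is the contour found above): parametrize the contour by cumulative arc length and interpolate at a fixed spacing.
import numpy as np

pts = c[:, 0, :].astype(float)                # contour as an (N, 2) array
seg = np.sqrt((np.diff(pts, axis=0) ** 2).sum(axis=1))
s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
step = 10.0                                   # desired spacing in pixels
s_new = np.arange(0.0, s[-1], step)
x_eq = np.interp(s_new, s, pts[:, 0])         # points equally spaced along the path
y_eq = np.interp(s_new, s, pts[:, 1])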
I would like to draw a grid covering the whole sphere on an orthographic projection.
The issue is that cells outside the projection are not drawn correctly. This happens with drawgreatcircle as well, as pointed out here.
I have also tried to use Polygons as described here, but with the same problem.
Finally, I have coded a custom check based on Wikipedia. The idea is that, for each point of each segment, we compute cos c (cf. Wikipedia) and do not plot the point if the cosine is negative.
My question is: can we do this kind of check with Basemap's own functions?
This strategy would not work for other projections.
Also, why is this kind of check not included in Basemap?
Thanks to your example, I took the data and plotted it with cartopy. The following changes were needed to create the plot:
import cartopy.crs as ccrs
ax = plt.axes(projection=ccrs.Orthographic())
plt.pcolormesh(lons, lats, val, edgecolors='k',
               linewidths=1, transform=ccrs.PlateCarree())
ax.coastlines()
ax.gridlines()
plt.show()
This is using pcolormesh so is pretty quick (though your example wasn't that slow on my machine in the first place).
Here is a solution using pcolor:
import pylab as plt
from mpl_toolkits.basemap import Basemap
import numpy as np
nb_lat2 = 20
nb_lat = 2*nb_lat2
nb_lon = 3*(2*(nb_lat+1) - 1)
lats = np.zeros((2*nb_lat, nb_lon))
lons = np.zeros((2*nb_lat, nb_lon))
val = np.zeros((2*nb_lat, nb_lon))
dlat = 90./nb_lat2
for i in range(nb_lat):
    nb_lon = 2*(i+1) - 1
    if (i+1) > nb_lat2:
        nb_lon = 2*(nb_lat - i) - 1
    dlon = 120./nb_lon
    lats[2*i][:] = 90 - i*dlat
    lats[2*i+1][:] = 90 - (i+1)*dlat
    for j in range(nb_lon):
        lons[2*i][j] = j*dlon
        lons[2*i+1][j] = j*dlon
        for k in range(1, 3):
            lons[2*i][j + k*nb_lon] = j*dlon + 120.*k
            lons[2*i+1][j + k*nb_lon] = j*dlon + 120.*k
    lons[2*i][3*nb_lon:] = nb_lon*dlon + 240.
    lons[2*i+1][3*nb_lon:] = nb_lon*dlon + 240.
lons = lons - 180
val = lats + lons
# Crash
##m = Basemap(projection='robin',lon_0=0,resolution=None)
#m = Basemap(projection='mill',lon_0=0)
m = Basemap(projection='ortho', lat_0=0,lon_0=0)
x, y = m(lons, lats)
m.pcolor(x,y,val, edgecolors='k', linewidths=1)
m.drawcoastlines()
m.drawparallels(np.arange(-90.,91.,30.))
m.drawmeridians(np.arange(-180.,181.,60.))
plt.show()
This does exactly what I want: drawing rectangles and filling them with one color.
But it is very slow (too slow). A lot of cells are unused: at the end of a latitude line, we set the width of the unused cells to 0.
Another issue is that some projections crash ('robin', for example).