lat_ts on Plate Carree in cartopy

I have data with this spatial reference in proj4 format:
+proj=eqc +lat_ts=8 +lat_0=0 +lon_0=180 +x_0=0 +y_0=0 +a=1737400 +b=1737400 +units=m +no_defs
As I understand it, +proj=eqc is the Plate Carree (equidistant cylindrical) as described here.
However, that doesn't take a lat_ts parameter (latitude at true scale). Can anyone explain why, and how I can express this coordinate system in cartopy?

You can't currently express that parameter for Equidistant Cylindrical in Cartopy. The PlateCarree projection in Cartopy has some weird behavior as well. If this is important to you, I'd suggest opening an issue, or better yet, contributing a pull request to add Equidistant Cylindrical as a proper projection to Cartopy.
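If you just need to get the data onto a Cartopy map, one workaround (not a native eqc+lat_ts projection) is to convert the projected x/y back to longitude/latitude with pyproj and pass those to Cartopy with transform=ccrs.PlateCarree(). A rough sketch, with placeholder coordinate arrays and the lunar radius from the proj4 string above:
import numpy as np
import pyproj
import cartopy.crs as ccrs
import matplotlib.pyplot as plt

# Source CRS from the question; destination is plain lon/lat on the same sphere.
src = pyproj.CRS.from_proj4(
    "+proj=eqc +lat_ts=8 +lat_0=0 +lon_0=180 +x_0=0 +y_0=0 "
    "+a=1737400 +b=1737400 +units=m +no_defs")
dst = pyproj.CRS.from_proj4("+proj=longlat +a=1737400 +b=1737400 +no_defs")
to_lonlat = pyproj.Transformer.from_crs(src, dst, always_xy=True)

x = np.array([0.0, 100000.0, 200000.0])   # placeholder projected metres
y = np.array([0.0, 50000.0, 100000.0])
lon, lat = to_lonlat.transform(x, y)

# A spherical globe so Cartopy does not assume the WGS84 ellipsoid.
globe = ccrs.Globe(semimajor_axis=1737400, semiminor_axis=1737400, ellipse=None)
ax = plt.axes(projection=ccrs.PlateCarree(central_longitude=180, globe=globe))
ax.plot(lon, lat, transform=ccrs.PlateCarree(globe=globe))
plt.show()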

Related

Why do we use crs.PlateCarree() instead of crs.Geodetic() when using Matplotlib and Cartopy to plot a map based on lat and lon?

I've been learning how to use Cartopy and Matplotlib to plot maps, but I have a question about the transform argument. According to the Cartopy documentation, transform specifies "what coordinate system your data are defined in". Suppose I am going to plot temperatures of an area, and the area has been split into several grid cells. Each grid cell has a corresponding coordinate defined in lat and lon (a geodetic system). Based on the Cartopy documentation, I need to use crs.PlateCarree() instead of crs.Geodetic(), and I'm a bit confused about that: I think Plate Carree is a projection, so coordinates defined in the PlateCarree projection are projected data, whereas latitude and longitude should be unprojected data. Can anyone help me with this? Thanks!
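For reference, the usual pattern looks roughly like this (a minimal sketch with made-up sample values; the displayed projection is arbitrary):
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

lons = [100.0, 105.0, 110.0]   # made-up grid-cell longitudes (degrees)
lats = [30.0, 32.0, 34.0]      # made-up grid-cell latitudes (degrees)
temps = [15.2, 14.8, 13.9]     # made-up temperatures

# The projection= argument sets the map you want to *display*...
ax = plt.axes(projection=ccrs.Mollweide())
ax.coastlines()
# ...while transform= declares the coordinate system the *data* are defined in.
sc = ax.scatter(lons, lats, c=temps, transform=ccrs.PlateCarree())
plt.colorbar(sc)
plt.show()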

Halcon: Obtain how many pixels a mm corresponds to after calibration

I've successfully calibrated my camera, and I can get the dimensions of an XLD in world coordinates with ContourToWorldPlaneXld followed by HeightWidthRatioXld. This gives me the measurements of a contour extracted from a shape.
Now I need to take a value entered by the user in mm (for example 0.1 mm) and work out how many pixels that measure corresponds to, for example to draw a line.
I need the pixel value as per the request. I tried looking around in the Halcon documentation but I didn't find what I was looking for.
Also, I read this answer, but it's not exactly what I'm looking for.
I'm using Halcon Progress 21.11.
Edit: A possible solution could be to obtain the dimensions before converting them to the world plane and then compute something like pixel/world, but I would prefer a better method if one exists.
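To illustrate the ratio idea from the edit (plain arithmetic with made-up numbers, not Halcon operators; it assumes the scale is roughly uniform over the region of interest):
# Measure the same contour once in pixels and once in world units (mm),
# then use the ratio to convert any user-entered mm value to pixels.
width_px = 412.0      # contour width in image coordinates (pixels), made up
width_mm = 25.4       # same width after conversion to the world plane (mm)

px_per_mm = width_px / width_mm

user_value_mm = 0.1
user_value_px = user_value_mm * px_per_mm   # length to draw, in pixels
print(user_value_px)                        # ~1.6 px in this example
Note that if the camera is tilted relative to the measurement plane, the scale varies across the image, so a single ratio is only valid near where it was measured.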

How to define axis names in WCS

I'm trying to use WCS for simple linear, non-celestial axes. These are actually just the U,V coordinates representing the Fourier transform of an image.
import astropy.wcs as wcs
w=wcs.WCS(naxis=2)
w.wcs.axis_types[0]=0
w.wcs.axis_types[1]=0
w.wcs.ctype[0]='UU---SIN'
w.wcs.ctype[1]='VV---SIN'
print(w)
ww=w.deepcopy()
As I read the documentation for axis_types, I have specified that the first two axes are linear (i.e. non-celestial). However, when the deep copy executes, I get an error:
astropy.wcs._wcs.InconsistentAxisTypesError: ERROR 4 in wcs_types() at line 2486 of file cextern/wcslib/C/wcs.c:
Unrecognized celestial type (UU---SIN in CTYPE1).
What am I doing wrong?
Thanks,
Tim
Ah, I see that axis_types is an attribute and cannot be set in this way; that becomes apparent when trying to do w.wcs.axis_types = [0, 0]. I'm still not sure how to do this correctly.
Instead of UU---SIN and VV---SIN, just use UU and VV. wcs is recognizing that the SIN projection indicates a celestial coordinate system, but UU and VV do not describe any celestial coordinate system.
import astropy.wcs as wcs
w = wcs.WCS(naxis=2)
w.wcs.ctype[0] = 'UU'
w.wcs.ctype[1] = 'VV'
w.deepcopy()
This raises a question, though, of whether there is a well-defined convention for (presumably gridded?) UV data in FITS images.
I believe AIPS still does this, and I am disappointed that WCSLIB objects. UU---SIN etc. seems like the right way to describe what we have in such gridded images; actually, FFT does use this axis type, while UVIMG simply uses U and V.

VTK / ITK Dice Similarity Coefficient on Meshes

I am new to VTK and am trying to compute the Dice Similarity Coefficient (DSC), starting from 2 meshes.
DSC can be computed as 2 * Vab / (Va + Vb), where Vab is the overlapping volume between mesh A and mesh B.
To read a mesh (i.e. an organ contour exported in .vtk format using 3D Slicer, https://www.slicer.org) I use the following snippet:
#include <string>
#include <vtkSmartPointer.h>
#include <vtkGenericDataObjectReader.h>
#include <vtkPolyData.h>

std::string inputFilename1 = "organ1.vtk";
// Get all data from the file
vtkSmartPointer<vtkGenericDataObjectReader> reader1 = vtkSmartPointer<vtkGenericDataObjectReader>::New();
reader1->SetFileName(inputFilename1.c_str());
reader1->Update();
vtkSmartPointer<vtkPolyData> struct1 = reader1->GetPolyDataOutput();
I can compute the volume of the two meshes using vtkMassProperties (although I observed some differences between the ones computed with VTK and the ones computed with 3D Slicer).
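(For reference, in VTK's Python bindings that volume step looks roughly like this; a sketch only, and vtkMassProperties expects a closed, triangulated surface, hence the vtkTriangleFilter:)
import vtk

reader = vtk.vtkGenericDataObjectReader()
reader.SetFileName("organ1.vtk")
reader.Update()

# vtkMassProperties expects triangles, so triangulate first.
tri = vtk.vtkTriangleFilter()
tri.SetInputData(reader.GetPolyDataOutput())
tri.Update()

mass = vtk.vtkMassProperties()
mass.SetInputConnection(tri.GetOutputPort())
mass.Update()
print("volume:", mass.GetVolume())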
To then intersect the two meshes, I am trying to use vtkIntersectionPolyDataFilter. The output of this filter, however, is a set of lines that marks the intersection of the input vtkPolyData objects, NOT a closed surface. I therefore need to somehow generate a mesh from these lines and compute its volume.
Do you know a good, accurate way to generate such a mesh, and how to do it?
Alternatively, I tried to use ITK as well. I found a package that is supposed to handle this problem (http://www.insight-journal.org/browse/publication/762, dated 2010) but I am not able to compile it against the latest version of ITK. It says that ITK must be compiled with the (now deprecated) ITK_USE_REVIEW flag ON. Needless to say, I compiled it with the new Module_ITKReview set to ON and also with backward compatibility but had no luck.
Finally, if you have any other alternative (scriptable) software/library to solve this problem, please let me know. I need to perform these computations automatically.
You could try vtkBooleanOperationPolyDataFilter
http://www.vtk.org/doc/nightly/html/classvtkBooleanOperationPolyDataFilter.html
filter->SetOperationToIntersection();
If your data is smooth and well-behaved, this filter works pretty well. However, sharp structures, e.g. those originating from a binary-image marching cubes algorithm, can cause problems for it. That said, vtkPolyDataToImageStencil doesn't necessarily perform any better in this regard.
I once had the impression that boolean operations on polygons are not really ideal for "organs" of 100k polygons and more. It depends.
If you want to compute a Dice Similarity Coefficient, I suggest you first generate volumes (rasterize) from the meshes using vtkPolyDataToImageStencil.
Then it's easy to compute the DSC.
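Roughly, that could look like the following with VTK's Python bindings (a sketch only: file names, voxel spacing and the common-grid handling are illustrative, and since both masks share the same voxel size, voxel counts can stand in for volumes in the DSC ratio):
import numpy as np
import vtk
from vtk.util import numpy_support as nps

def load_polydata(filename):
    reader = vtk.vtkGenericDataObjectReader()
    reader.SetFileName(filename)
    reader.Update()
    return reader.GetPolyDataOutput()

def voxelize(polydata, origin, spacing, extent):
    # Blank image filled with 1, then zero out everything outside the mesh.
    image = vtk.vtkImageData()
    image.SetOrigin(origin)
    image.SetSpacing(spacing)
    image.SetExtent(extent)
    image.AllocateScalars(vtk.VTK_UNSIGNED_CHAR, 1)
    image.GetPointData().GetScalars().FillComponent(0, 1)

    stencil = vtk.vtkPolyDataToImageStencil()
    stencil.SetInputData(polydata)
    stencil.SetOutputOrigin(origin)
    stencil.SetOutputSpacing(spacing)
    stencil.SetOutputWholeExtent(extent)
    stencil.Update()

    masker = vtk.vtkImageStencil()
    masker.SetInputData(image)
    masker.SetStencilConnection(stencil.GetOutputPort())
    masker.ReverseStencilOff()
    masker.SetBackgroundValue(0)
    masker.Update()
    return nps.vtk_to_numpy(masker.GetOutput().GetPointData().GetScalars())

mesh_a = load_polydata("organ1.vtk")   # file names are illustrative
mesh_b = load_polydata("organ2.vtk")

# One common grid covering both meshes (1 mm voxels, purely illustrative).
bounds = np.array([mesh_a.GetBounds(), mesh_b.GetBounds()])
lo = bounds[:, 0::2].min(axis=0)
hi = bounds[:, 1::2].max(axis=0)
spacing = (1.0, 1.0, 1.0)
origin = tuple(float(v) for v in lo)
extent = [0, int(np.ceil((hi[0] - lo[0]) / spacing[0])),
          0, int(np.ceil((hi[1] - lo[1]) / spacing[1])),
          0, int(np.ceil((hi[2] - lo[2]) / spacing[2]))]

mask_a = voxelize(mesh_a, origin, spacing, extent)
mask_b = voxelize(mesh_b, origin, spacing, extent)
dice = 2.0 * np.logical_and(mask_a, mask_b).sum() / (mask_a.sum() + mask_b.sum())
print("DSC:", dice)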
Good luck :)

project data defined on a sphere

I have some data defined on a sphere (a sphere, not the Earth): is it possible with Python 2.6 and matplotlib to draw them on a map (of the Mercator type) "automatically", or do I have to project the data myself?
Edit: All of my data are lat-long.
It really depends on what you have and what you want: x-y and/or lat-lon? It looks like your question is similar to a problem I had and more-or-less answered:
matplotlib and aspect ratio of geographical-data plots
Consider using set_aspect() with the reciprocal of the cosine of the mean latitude of your data; see matplotlib and aspect ratio of geographical-data plots for a working example.
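A minimal sketch of that trick (made-up lon/lat values):
import math
import matplotlib.pyplot as plt

lons = [10.0, 12.0, 15.0, 18.0]   # made-up longitudes (degrees)
lats = [48.0, 49.5, 50.2, 51.0]   # made-up latitudes (degrees)

fig, ax = plt.subplots()
ax.plot(lons, lats, "o-")
# One degree of longitude spans cos(latitude) times the distance of one
# degree of latitude, so stretch the y axis by the reciprocal of that.
mean_lat = sum(lats) / len(lats)
ax.set_aspect(1.0 / math.cos(math.radians(mean_lat)))
plt.show()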