I need to perform accurate pixel-to-world coordinate transformations on FITS files that were originally created using Maxim DL. Maxim uses Pinpoint for plate solving, which generates TRi_j distortion coefficients. These are incompatible with the astropy.wcs coordinate transformation functions I was proposing to use, as those assume SIP distortion coefficients.
I'm therefore looking for options to re-platesolve the FITS files to generate SIP coefficients.
So far all I've found is astrometry.net but this is an on-line service. I'm really looking for offline platesolving (preferably against a local copy of the GSC) that I can perform synchronously as part of my app's workflow.
Are there any Astropy-affiliated (or other) Python packages that perform SIP-compatible platesolving against the GSC?
Alternatively, are there any equivalents to wcs.all_pix2world that can use TRi_j distortion coefficients so I can work with the Maxim DL data?
Many thanks
Nigel
In addition to SIP coefficients, the astropy.wcs methods will work with TPV distortion coefficients. This means you can use the output of the SCAMP astrometric solver directly with astropy.wcs. If you wish to convert TPV coefficients to the SIP form, you can use the sip_tpv package for which I am the lead contributor. I don't know of a Python package wrapping SCAMP -- I have wrapped it for the Zwicky Transient Facility pipeline but that code is not public.
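A minimal sketch of that conversion path (assuming the pv_to_sip helper described in the sip_tpv README; the filename and pixel coordinates are placeholders):

from astropy.io import fits
from astropy.wcs import WCS
from sip_tpv import pv_to_sip   # rewrites TPV (PV) distortion keywords as SIP, in place

header = fits.getheader('scamp_solved.fits')   # hypothetical SCAMP/TPV-solved image
pv_to_sip(header)                              # header now carries SIP coefficients
wcs = WCS(header)
ra, dec = wcs.all_pix2world([[512.0, 512.0]], 1)[0]   # full transform, including distortion

Once the SIP keywords are in the header, wcs.all_pix2world behaves the same as it would for a natively SIP-solved image.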
You could do:
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy.wcs import WCS

hdu = fits.open(fitsfilename)[0]          # primary HDU
wcs = WCS(hdu.header)

fig = plt.figure()
ax = fig.add_subplot(projection=wcs)      # WCSAxes, which provides the 'world' transform
ax.scatter([34], [3.2], transform=ax.get_transform('world'))   # RA, Dec in degrees
(Based on this Q.)
I am using Tensorflow Probability to build a VAE which includes image pixels as well as some other variables. The output of the VAE:
tfp.distributions.Independent(tfp.distributions.Bernoulli(logits), 2, name="decoder-dist")
I am trying to understand how to form other conditional distributions based on this which I can use with the inference methods (MCMC or VI). Say the output above was P(A,B,C | Z), how would I take that distribution to form a posterior P(A|B, C, Z) that I could perform inference on? I have been trying to read through the docs but I am having some trouble grasping them.
The answer to your question depends very much on the nature of the joint model within which you'd like to do the conditioning. Much has been written about the topic, and in short it's a very hard problem in general :). Without knowing a bit more about the particulars of your problem, it's near impossible to recommend a useful generic inference procedure. However, we do have a bunch of examples (scripts and jupyter/colab notebooks) in the TFP repo here: https://github.com/tensorflow/probability/tree/master/tensorflow_probability/examples
In particular, there's
The Hierarchical Linear Model example, which is a sort of Rosetta stone showing how to do posterior inference using Hamiltonian Monte Carlo (an MCMC technique) in TFP, R, and Stan,
The Linear Mixed Effects Model example, showing how you might use VI to solve a standard LME problem,
among many others. You can click the "Run in Google Colab" link at the top of any of these notebooks to open and run them on https://colab.research.google.com.
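As a very rough illustration of the general pattern (not specific to your VAE): conditioning on observed variables in TFP usually means fixing those values in the joint log-probability and running inference over the rest. A toy sketch, with made-up names, shapes and tuning parameters:

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

# Toy joint p(z, a, b); think of b as the observed part of your P(A, B, C | Z).
joint = tfd.JointDistributionNamed(dict(
    z=tfd.Normal(0., 1.),
    a=lambda z: tfd.Normal(z, 1.),
    b=lambda z: tfd.Normal(z, 1.),
))

b_observed = tf.constant(0.7)

def target_log_prob(z, a):
    # log p(z, a, b = b_observed), proportional to the posterior p(z, a | b).
    return joint.log_prob({'z': z, 'a': a, 'b': b_observed})

kernel = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=target_log_prob, step_size=0.1, num_leapfrog_steps=3)
samples, is_accepted = tfp.mcmc.sample_chain(
    num_results=500, num_burnin_steps=200,
    current_state=[tf.zeros([]), tf.zeros([])],
    kernel=kernel,
    trace_fn=lambda _, kernel_results: kernel_results.is_accepted)

How well this works in practice depends entirely on your model, which is why the examples above are a better guide for anything realistic.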
Please also feel free to reach out to us via email at tfprobability@tensorflow.org. This is a public Google Group where users can engage directly with the team that builds TFP. If you give us some more info there on what you'd like to do, we're happy to provide guidance on modeling and inference with TFP.
Hope this gives you at least a start in the right direction!
I am trying to compute some mesh features for 3D models that I created using numpy-stl. I would like to compute all of the features provided by pyradiomics, but I am not sure how to use them on just the meshes, without all of the extra binary image and matrix information. Or is there a better program to use for shape feature extraction? Also, the documentation says that some features require C extensions to be enabled. How can you do that in your Python script?
C extensions are enabled by default. As of PyRadiomics 2.0, the pure-Python equivalents of those functions have been removed (they had horrible performance).
As to your meshes: PyRadiomics is built to extract features from images and binary labelmaps. To use meshes, you would have to convert them first.
What features do you want to extract? PyRadiomics does use a sort of on-the-fly built mesh to calculate surface area and volume, which are also used in the calculation of several other shape features.
If you want to take a look at how volume and SA are calculated, the source code for that is in C (radiomics/src/cshape.c). The calculation of the derived features (e.g. sphericity) is in shape.py
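As a rough sketch of the conversion route (this is not part of PyRadiomics itself: it assumes trimesh for voxelizing the STL, SimpleITK for building the image/mask pair, and a recent PyRadiomics exposing RadiomicsFeatureExtractor; filenames and the voxel pitch are illustrative):

import numpy as np
import trimesh
import SimpleITK as sitk
from radiomics import featureextractor

mesh = trimesh.load('bone.stl')                   # hypothetical mesh file
voxels = mesh.voxelized(pitch=0.5)                # voxelize in mesh units
mask_arr = voxels.matrix.astype(np.uint8)         # binary occupancy grid

mask = sitk.GetImageFromArray(mask_arr)
mask.SetSpacing((0.5, 0.5, 0.5))                  # spacing should match the pitch
image = sitk.GetImageFromArray(mask_arr.astype(np.float32))   # dummy intensity image
image.SetSpacing((0.5, 0.5, 0.5))

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName('shape')       # only the 3D shape features
print(extractor.execute(image, mask))

The intensity image is a dummy here because the shape features only look at the mask geometry; choose the pitch small enough that the voxelization preserves the detail you care about.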
I have a very fine mesh (STL) of some organic shapes (e.g., a bone) and would like to convert it to a few patches of NURBS, which will be much smoother with reasonable simplification.
I can do this manually with Solidworks ScanTo3D function, but it is not scriptable. It's a pain when I need to do hundreds of them.
Would there be a way to automate it, e.g., with some open source libraries available? I am perfectly fine with quite some loss in accuracy. I use mainly Python, but I don't mind if it is in other languages and I can work my way around it.
Note that one thing I'd like to avoid is converting an STL of 10,000 triangles to a NURBS with 10,000 patches. I'd like to automatically (programmatically, possibly with some parameter tuning) divide the mesh into a few patches and then fit it. Again, I'm perfectly fine with quite some loss in accuracy.
Converting an arbitrary mesh to nurbs is not easy in general. What is a good nurbs surface for a given mesh depends on the use case. Do you want to manually edit the nurbs surface afterwards? Should symmetric structures or other features be recognized and represented correctly in the nurbs body? Is it important to keep the volume of the body? Are there boundary lines that should not be simplified as they change the appearance or angles that must be kept?
If you just want to smooth the mesh or reduce the number of vertices, there are easier ways, like mesh reduction and mesh smoothing (see the sketch below).
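A minimal sketch of that route using Open3D (the library choice, filenames and parameter values are just illustrative):

import open3d as o3d

mesh = o3d.io.read_triangle_mesh("bone.stl")                 # hypothetical input mesh
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=2000)
mesh = mesh.filter_smooth_taubin(number_of_iterations=10)    # smooths without much shrinkage
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("bone_simplified.stl", mesh)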
If you require your output to be nurbs, there are different methods leading to different topologies and approximations, as indicated above. A commonly used method for object simplification is to register the mesh to some handmade prototype and then perform some smaller changes to shape the specific instance. If there are, for example, several classes of shapes like bones, hearts, livers etc., it might be possible to model a prototype nurbs body for each class once, defining the average appearance and topology of that organ. Each instance of a class can then be converted to a nurbs by fitting the prototype to that instance. As the topology is fixed, the optimization problem reduces to finding the control points that approximate the mesh with the smallest error. The disadvantage of this method is that you have to create a prototype for each class; the advantage is that the topology will be nice and easily editable.
Another approach would be to first smooth the mesh and reduce the polygon count (there are libraries available for mesh reduction) and then simply convert each triangle/quad to a nurbs patch (like the Rhino MeshToNurb command). This method should be easier to implement, but the resulting nurbs body could have an ugly topology.
Whether one of these methods is applicable really depends on what you want to do with your transformed data.
I am interested in visualizing meteorological and climatological data.
Here we are talking about 2D/3D visualization for weather and climate elements:
Temperature
Pressure
Wind
Example
We have used some tools previously, such as:
GrADS
Surfer (commercial software)
GIS Meteo (commercial software)
What other tools (preferably open source) would you suggest for that purpose nowadays?
I know you mentioned GrADS, but it was the tool I used most for development of weather products: a little more intuitive and resource-friendly than IDV when I coded, and it generally has a pretty good rate of development. You mentioned Open Source... did you know there is an OpenGrADS (http://opengrads.org/)? Most friends involved in weather product development use a combination of GrADS/OpenGrADS for much of their work. But I agree it doesn't produce knock-your-socks-off graphics.
Another commonly used free program is Gempak, another Unidata product, which really seems to be becoming outdated, in my personal opinion.
And then if you want to talk high-end graphics, you're going to pay more. http://moe.met.fsu.edu/~hrw22/movies/WIND_Katrina_2005-08-28_00Z.gif is a great video of Katrina that was produced by someone I knew using Amira. According to Wikipedia, you're looking at
"Cost: $4,000 USD + $800/year support (2009)... although now has much more ugly/complex pricing structure where each feature is priced separately (eg: Amira Mesh Option $360). I believe at NCMIR we pay ~$9000/year for five user-license." Ouch!
I don't have an open source tool, but if you can get access to a Level-II data feed (Level-II is minimally post-processed radar data), a meteorologist friend and I use GR2Analyst. I would assume you know enough about weather sources to be able to figure out how to set this up.
If you're looking for an open source (and free) tool that can do 2D and 3D, which also includes access to a wide variety of datasets (obs, model output, remote sensing - radar level 2 and 3, satellite, and more!), then you might want to check out the Unidata Integrated Data Viewer (IDV):
http://www.unidata.ucar.edu/software/idv/
Source code available here:
https://github.com/Unidata/IDV
The interface is a bit complex, but we have some youtube screencasts to help people get up and going:
http://www.youtube.com/user/unidatanews/videos
If you'd like to see a video for a specific thing, we are taking requests :-) (email support-idv@unidata.ucar.edu). We do yearly training workshops as well, and those materials are available online here:
http://www.unidata.ucar.edu/software/idv/docs/workshop/
Cheers!
Sean
Panoply is a multiplatform desktop option if your data is available in formats such as NetCDF, HDF or GRIB.
The following text, extracted from its site, describes some of its capabilities:
Slice and plot geo-gridded latitude-longitude, latitude-vertical, longitude-vertical, or time-latitude arrays from larger multidimensional variables.
Slice and plot "generic" 2D arrays from larger multidimensional variables.
Slice 1D arrays from larger multidimensional variables and create line plots.
Combine two geo-gridded arrays in one plot by differencing, summing or averaging.
Plot lon-lat data on a global or regional map using any of over 100 map projections or make a zonal average line plot.
Overlay continent outlines or masks on lon-lat map plots.
Use any of numerous color tables for the scale colorbar, or apply your own custom ACT, CPT, or RGB color table.
Save plots to disk as GIF, JPEG, PNG or TIFF bitmap images, or as PDF or PostScript graphics files.
Export lon-lat map plots in KMZ format.
Export animations as AVI or MOV video or as a collection of individual frame images.
Explore remote THREDDS and OpenDAP catalogs and open datasets served from them.
If you are interested in interactive visualization over the web, there are some options such as:
ncWMS: a web mapping server that reads NetCDF data and publishes it using the Web Map Service (WMS) standard.
GeoServer: another web mapping server, which has a plugin to read NetCDF data.
VTK (Visualization Toolkit) is a C++ open source 2D and 3D visualization library that I use to visualize radar data in 3D.
In case somebody doesn't know: a cartogram is a type of map where some country/region-dependent numeric property scales the respective regions so that that property's density is (close to) constant. An example is the population cartogram from worldmapper.org, in which countries are scaled according to their population, resulting in near-constant population density.
Needless to say, this is really cool. Does anyone know of a Matplotlib-based library for drawing such maps? The method used at worldmapper.org is described in (1), so it would surprise me if no one has implemented this yet...
I'm also interested in hearing about other cartogram libraries, even if they're not made for Matplotlib.
(1) Michael T. Gastner and M. E. J. Newman, "Diffusion-based method for producing density-equalizing maps", Proc. Natl. Acad. Sci. USA 101, 7499-7504 (2004). Available on arXiv.
There's this, though it's based on a different algorithm (and although it's hosted on the ESRI site, it doesn't require ArcGIS). Of course, once you have the cartogram you can plot it in matplotlib.
Here is a Javascript plugin to make cartograms using D3. It is a good, simple solution if you are not too concerned about the regions being sized accurately. If accuracy is important, there are other options available that give you more freedom to play with the algorithm's parameters to get to a more accurate result.
Here are two great standalone programs I know of:
Scapetoad
Carto3F
Scapetoad is very easy to use. Just give it a shapefile, tell it which attribute to use for the scaling, and set a few accuracy parameters. If there is any doubt, this post describes the process.
Carto3F is more complex and allows for greater accuracy, though it is a bit trickier to figure out - lots of parameter settings without much documentation explaining them.
There is also a QGIS cartogram plugin, written in Python. I have not been able to get it to work, though, so I cannot comment on that one.
In short, no. But Newman has an excellent little implementation of his and Gastner's method on his website. Installing it is easy and it works from the command line. Here's an example of a workflow using this software that worked for me.
Compute a grid of density estimates over some region, e.g. in Python. Store it as a matrix of numbers.
Run the cart program with your density matrix as input, either from the command line or as a subprocess in Python.
The program returns a list of new coordinates for each grid point.
Pipe your shapefile points through the interp program and into a new shapefile to get the transformed map.
There are nice instructions on the main page.
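A minimal sketch of steps 2-4 driven from Python (assuming the command-line usage described on that page, i.e. cart takes the grid size plus input/output files and interp reads point coordinates from stdin; all filenames and grid sizes here are placeholders):

import subprocess

nx, ny = 512, 256   # grid dimensions used for the density estimate

# Step 2: compute the density-equalising transform of the grid.
subprocess.run(["cart", str(nx), str(ny), "density.dat", "cartogram.dat"], check=True)

# Steps 3-4: map point coordinates (one "x y" pair per line, in grid units)
# into the transformed space; feed these back into your shapefile writer.
with open("points.txt") as fin, open("points_transformed.txt", "w") as fout:
    subprocess.run(["interp", str(nx), str(ny), "cartogram.dat"],
                   stdin=fin, stdout=fout, check=True)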
There is also the geoplot.cartogram function in Geoplot (docs: "Geoplot: geospatial data visualization — geoplot 0.2.0"), whose documentation says it is a high-level Python geospatial plotting library and an extension to cartopy and matplotlib.
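A minimal usage sketch (the shapefile name and the 'population' column are placeholders; geoplot scales each geometry by the column you pass as scale):

import geopandas as gpd
import geoplot as gplt
import geoplot.crs as gcrs

gdf = gpd.read_file("countries.shp")        # hypothetical polygons with a population column
ax = gplt.cartogram(gdf, scale='population', projection=gcrs.Mollweide())
gplt.polyplot(gdf, ax=ax, edgecolor='lightgray')   # original outlines for comparison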
Try this library if you are using geopandas; it is quick and doesn't require much customization: https://github.com/mthh/cartogram_geopandas