I'm trying to use WCS for simple linear, non-celestial axes. These are actually just the U,V coordinates representing the Fourier transform of an image.
import astropy.wcs as wcs
w=wcs.WCS(naxis=2)
w.wcs.axis_types[0]=0
w.wcs.axis_types[1]=0
w.wcs.ctype[0]='UU---SIN'
w.wcs.ctype[1]='VV---SIN'
print(w)
ww=w.deepcopy()
As I read the documentation for axis_types, I have specified that the first two axes are linear (i.e. non-celestial). However, when the deep copy executes, I get an error:
astropy.wcs._wcs.InconsistentAxisTypesError: ERROR 4 in wcs_types() at line 2486 of file cextern/wcslib/C/wcs.c:
Unrecognized celestial type (UU---SIN in CTYPE1).
What am I doing wrong?
Thanks,
Tim
Ah, I see that axis_types is an attribute that cannot be set this way. It's apparent when trying w.wcs.axis_types = [0, 0]. Still not sure how to do this correctly.
Instead of UU---SIN and VV---SIN, just use UU and VV. wcs is recognizing that the SIN projection indicates a celestial coordinate system, but UU and VV do not describe any celestial coordinate system.
import astropy.wcs as wcs

w = wcs.WCS(naxis=2)
w.wcs.ctype[0] = 'UU'   # plain linear axis, no projection code
w.wcs.ctype[1] = 'VV'
w.deepcopy()            # no longer raises InconsistentAxisTypesError
This raises a question, though, of whether there is a well-defined convention for (presumably gridded?) UV data in FITS images.
I believe AIPS still does this, and I am disappointed that WCSLIB objects. UU---SIN etc. seems the right way to describe what we have in such gridded images. Actually, FFT does use this axis type while UVIMG simply uses U and V.
I have data that I would like to save as PNGs. I need to keep the exact pixel dimensions - I don't want any inter-pixel interpolation, smoothing, or up/down sizing, etc. I do want to use a colormap, though (and maybe some other features of matplotlib's imshow). As I see it there are a couple of ways I could do this:
1) Manually roll my own colormapping. (I'd rather not do this)
2) Figure out how to make sure the pixel dimensions of the image in the figure produced by imshow are exactly correct, and then extract just the image portion of the figure for saving.
3) Use some other method which will directly give me a color-mapped array (i.e. my NxN grayscale array -> NxNx3 array, using one of matplotlib's colormaps). Then save it using another PNG save method such as scipy.misc.imsave.
How can I do one of the above? (Or is there another alternative?)
My problem arose when I was saving the figure directly using savefig and realized that I couldn't zoom into details. Upscaling wouldn't solve the problem, since blurring between pixels is exactly one of the things I'm trying to avoid - and the pixel size has a physical meaning.
EDIT:
Example:
import numpy as np
import matplotlib.pyplot as plt
X, Y = np.meshgrid(np.arange(-50.0, 50, .1), np.arange(-50.0, 50, .1))
Z = np.abs(np.sin(2*np.pi*(X**2 + Y**2)**.5)) / (1 + (X/20)**2 + (Y/20)**2)
plt.imshow(Z, cmap='inferno', interpolation='nearest')
plt.savefig('colormapeg.png')
plt.show()
Note that zooming in on the interactive figure gives you a very different view than trying to zoom in on the saved figure. I could up the resolution of the saved figure - but that has its own problems. I really just need the resolution fixed.
It seems you are looking for plt.imsave().
In this case,
plt.imsave("filename.png", Z, cmap='inferno')
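To convince yourself that the saved PNG has exactly one pixel per array element, here is a small sketch (the array shape and filename are just placeholders):

import numpy as np
import matplotlib.pyplot as plt

Z = np.random.rand(200, 300)                      # stand-in 200x300 data array
plt.imsave("colormapped.png", Z, cmap='inferno')  # writes the bare image, no axes or padding
img = plt.imread("colormapped.png")
print(Z.shape, img.shape)                         # (200, 300) vs (200, 300, 4) RGBA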
I'm constructing a graph plot in Julia and need to color each edge of the graph differently, based on some weighting factor. I can't find a way to get a specific RGB (or HSV, it doesn't matter) value from a colormap. Let's say I'd like to get the RGB value on 'jet' that would correspond to a data value of n on an imshow plot.
In python, I would just use jet(n), where n is the value along the colormap in which I am interested. PyPlot in Julia doesn't seem to have wrapped this functionality. I've also already tried indexing into the cmap object returned from get_cmap(). Any advice?
I'm stumped, so even an approximate solution would help. Thanks!
Maybe you can look at the Colors.jl package (https://github.com/JuliaGraphics/Colors.jl):
using Colors
palette = colormap("Oranges", 100)   # a vector of 100 RGB colors spanning the map
Then you can access each color with palette[n]. Or are you using PyCall? Some code describing what you're trying to do would help.
I have some data defined on a sphere (a sphere, not the Earth): is it possible with Python 2.6 and matplotlib to draw it on a map (of the Mercator type) "automatically", or do I have to project the data myself?
Edit: All of my data are lat-long.
It really depends on what you have and what you want: x-y and/or lat-lon? It looks like your question is similar to a problem I had and more-or-less answered:
matplotlib and aspect ratio of geographical-data plots
Consider using set_aspect() with the reciprocal of the cosine of the mean latitude of your data.
See matplotlib and aspect ratio of geographical-data plots for a working example.
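A minimal sketch of that idea (the lon/lat arrays are made up, and this assumes plain lat-long data in degrees with no map projection):

import numpy as np
import matplotlib.pyplot as plt

lons = np.random.uniform(-10.0, 10.0, 100)   # hypothetical longitudes in degrees
lats = np.random.uniform(40.0, 60.0, 100)    # hypothetical latitudes in degrees

fig, ax = plt.subplots()
ax.plot(lons, lats, '.')
# one degree of longitude covers cos(lat) times the distance of one degree of
# latitude, so stretch the y axis by 1/cos(mean latitude)
ax.set_aspect(1.0 / np.cos(np.radians(lats.mean())))
plt.show()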
I'm trying to display 2D data with axis labels using both contour and pcolormesh. As has been noted on the matplotlib user list, these functions obey different conventions: pcolormesh expects the x and y values to specify the corners of the individual pixels, while contour expects the centers of the pixels.
What is the best way to make these behave consistently?
One option I've considered is to make a "centers-to-edges" function, assuming evenly spaced data:
import numpy as np

def centers_to_edges(arr):
    # convert N evenly spaced cell centers into the N+1 cell edges
    dx = arr[1] - arr[0]
    newarr = np.linspace(arr.min() - dx/2, arr.max() + dx/2, arr.size + 1)
    return newarr
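For example, with that helper the two calls could be made consistent like this (x, y and Z are placeholder arrays):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 1.0, 50)      # cell centers along each axis (made up)
y = np.linspace(0.0, 2.0, 100)
X, Y = np.meshgrid(x, y)
Z = np.sin(4*np.pi*X) * np.cos(2*np.pi*Y)

plt.pcolormesh(centers_to_edges(x), centers_to_edges(y), Z)   # edges for pcolormesh
plt.contour(X, Y, Z, colors='k')                              # centers for contour
plt.show()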
Another option is to use imshow with the extent keyword set.
The first approach doesn't play nicely with 2D axes (e.g., as created by meshgrid or indices), and the second discards the axis numbers entirely.
Is your data on a regular mesh? If not, you can use griddata() to resample it onto one. If your data is very large, sub-sampling or regridding is always possible; in that case the output image will probably be small compared with the data anyway, and you can exploit this.
If you use imshow() with extent and interpolation='nearest', you will see that the data is treated as cell-centered, and extent specifies the outer edges of the cells (the corners). On the other hand, contour also assumes that the data is cell-centered, and X, Y must be the centers of the cells. So you need to be careful about the input domain for contour. A trivial example:
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-10, 10, 1)
X, Y = np.meshgrid(x, x)
P = X**2 + Y**2
# extent gives the outer corners of the image, so the cells are centered on X+0.5, Y+0.5
plt.imshow(P, extent=[-10, 10, -10, 10], interpolation='nearest', origin='lower')
plt.contour(X + 0.5, Y + 0.5, P, 20, colors='k')
plt.show()
My tests told me that pcolormesh() is a very slow routine, and I always try to avoid it; griddata() and imshow() have always been a good choice for me.
How can I do a basic face alignment on a 2-dimensional image, assuming I have the positions/coordinates of the mouth and eyes?
Is there any algorithm that I could implement to correct the face alignment on images?
Face (or image) alignment refers to aligning one image (or face in your case) with respect to another (or a reference image/face). It is also referred to as image registration. You can do that using either appearance (intensity-based registration) or key-point locations (feature-based registration). The second category stems from image motion models where one image is considered a displaced version of the other.
In your case the landmark locations (3 points for eyes and nose?) provide a good reference set for straightforward feature-based registration. Assuming you have the locations of a set of points in both of the 2D images, x_1 and x_2, you can estimate a similarity transform (rotation, translation, scaling), i.e. a planar 2D transform S that maps x_1 to x_2. You can additionally add reflection to that, though for faces this will most likely be unnecessary.
Estimation can be done by forming the normal equations and solving a linear least-squares (LS) problem for the x_1 = S x_2 system. With the usual linear parametrization of a 2D similarity (x' = a x - b y + t_x, y' = b x + a y + t_y) there are 4 unknowns (a, b, t_x, t_y), so 2 point correspondences (4 equations) already suffice and 3 points give an overdetermined system. The solution can be obtained through the Direct Linear Transform (e.g. by applying SVD or a matrix pseudo-inverse). For a sufficiently large number of reference points (e.g. automatically detected ones), a RANSAC-type method can be used for point filtering and outlier removal (though that is not your case here).
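A hedged numpy sketch of that least-squares estimate (the point coordinates are invented; a, b, tx, ty are the linearized rotation/scale and translation parameters described above):

import numpy as np

x1 = np.array([[130.0, 155.0], [210.0, 150.0], [172.0, 240.0]])  # reference landmarks
x2 = np.array([[128.0, 160.0], [205.0, 148.0], [170.0, 245.0]])  # landmarks to align

# each correspondence contributes two rows:  a*x - b*y + tx = x'   and   b*x + a*y + ty = y'
A = np.zeros((2 * len(x2), 4))
A[0::2] = np.column_stack([x2[:, 0], -x2[:, 1], np.ones(len(x2)), np.zeros(len(x2))])
A[1::2] = np.column_stack([x2[:, 1],  x2[:, 0], np.zeros(len(x2)), np.ones(len(x2))])
rhs = x1.reshape(-1)

a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
S = np.array([[a, -b, tx],
              [b,  a, ty]])   # 2x3 similarity transform mapping x2 onto x1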
After estimating S, apply image warping on the second image to get the transformed grid (pixel) coordinates of the entire image 2. The transform will change pixel locations but not their appearance. Unavoidably some of the transformed regions of image 2 will lie outside the grid of image 1, and you can decide on the values for those null locations (e.g. 0, NaN etc.).
For more details: R. Szeliski, "Image Alignment and Stitching: A Tutorial" (Section 4.3 "Geometric Registration")
In OpenCV see: Geometric Image Transformations, e.g. cv::getRotationMatrix2D, cv::getAffineTransform and cv::warpAffine. Note though that you should estimate and apply a similarity transform (a special case of an affine transform) in order to preserve angles and shapes.
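For example, in Python with OpenCV (the landmark coordinates and output size below are made-up placeholders), cv2.estimateAffinePartial2D restricts the fit to a similarity transform, which cv2.warpAffine then applies:

import cv2
import numpy as np

img = cv2.imread("face.jpg")                      # hypothetical input image

# (x, y) landmarks in the input: left eye, right eye, mouth center (invented values)
src = np.float32([[130, 155], [210, 150], [172, 240]])
# where those landmarks should land in the aligned 256x256 output
dst = np.float32([[96, 112], [160, 112], [128, 192]])

M, _ = cv2.estimateAffinePartial2D(src, dst)      # rotation + uniform scale + translation
aligned = cv2.warpAffine(img, M, (256, 256))
cv2.imwrite("face_aligned.jpg", aligned)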
For the face there is a lot of variability in feature points, so it won't be possible to do a perfect fit of all feature points with just an affine transform. The only way to align all the points perfectly is to warp the image given the points. Basically, you can triangulate the image given the points and do an affine warp of each triangle to get a warped image where all the points are aligned.
Face alignment could be handled based on just the eye positions.
Here, OpenCV, Dlib and MTCNN offer face and eye detection. The deepface framework is Python-based and wraps those methods, offering an out-of-the-box detection and alignment function.
Its detectFace function applies detection and then alignment in the background.
#!pip install deepface
from deepface import DeepFace

backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
# detect and align in one call; detector_backend selects the underlying detector
DeepFace.detectFace("img.jpg", detector_backend=backends[0])
Alternatively, you can apply detection and alignment manually:
import matplotlib.pyplot as plt
from deepface.commons import functions

img = functions.load_image("img.jpg")
backends = ['opencv', 'ssd', 'dlib', 'mtcnn']

detected_face = functions.detect_face(img=img, detector_backend=backends[3])
plt.imshow(detected_face)

aligned_face = functions.align_face(img=img, detector_backend=backends[3])
plt.imshow(aligned_face)

processed_img = functions.detect_face(img=aligned_face, detector_backend=backends[3])
plt.imshow(processed_img)
plt.show()
There's a section Aligning Face Images in OpenCV's Face Recognition guide:
http://docs.opencv.org/trunk/modules/contrib/doc/facerec/facerec_tutorial.html#aligning-face-images
The script aligns given images at the eyes. It's written in Python, but should be easy to translate to other languages. I know of a C# implementation by Sorin Miron:
http://code.google.com/p/stereo-face-recognition/