rGDAL, Tiff Files, and WorldFile - latitude-longitude

I have a set of tiff files that display convective weather across the continental US (NAD83 projection) in pixel locations from Iowa State University. My goal is the transformation of the pixel locations to lat/lon data. I read in the tiff file data as a SpatialGridDataFrame with...
imageData = readGDAL( fileNameDir, silent = TRUE )
I read somewhere that readGDAL will seek a World File if no projection data exist in the tiff file, so I created such a file (nad83WorldFile.wld) with the requisite information, see info at ESRI. I put the wld file in the same directory as my R scripts. The coefficients for the wld file are:
A = 0.01
B = 0.0
C = 0.0
D = -0.01
E = -126.0
F = 50.0
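Taken together, these six numbers define an affine transform from pixel (column, row) indices to map coordinates. A minimal Python sketch, using the coefficients as labeled above (the labeling follows the question, not the .wld line order):

```python
# Pixel -> lon/lat using the world-file coefficients listed above.
# Labeling follows the question: lon = A*col + B*row + E, lat = C*col + D*row + F.
A, B, C, D, E, F = 0.01, 0.0, 0.0, -0.01, -126.0, 50.0

def pixel_to_lonlat(col, row):
    """Affine transform from pixel indices to geographic coordinates."""
    lon = A * col + B * row + E
    lat = C * col + D * row + F
    return lon, lat

# The upper-left pixel maps to the north-west corner of the grid:
print(pixel_to_lonlat(0, 0))       # (-126.0, 50.0)
print(pixel_to_lonlat(6000, 2600))
```

This is only a sanity check on the numbers; readGDAL does the same transform internally when it picks up the world file.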
I seek advice and guidance on the pixel-to-lat/lon projection. A data file for the readGDAL example of fileNameDir and documentation on the World File format are provided in the hypertext links above. I had to change the file extension from *.png to *.tiff.

Normally, if you know that your data are projected but the projection isn't part of your tif file, you can simply add it to your R object after the import:
proj4string(imageData) <- CRS("your projection")
I like using EPSG codes for that; if your tif were in the GoogleEarth projection, for example, I would do:
proj4string(imageData) <- CRS("+init=EPSG:4326")
Just find what your exact NAD83 projection is (this site can help: http://spatialreference.org/).
Then you can reproject it into the projection of your choice:
imageDataProj <- spTransform(imageData, CRS("your new projection"))
As a side note, I always prefer the raster package for handling raster formats. However, changing the projection of a big raster file in R can be tedious, so now I use GDAL directly (through gdalwarp). You can call all the gdal options quite easily from R with the gdalUtils package, but you'll have to import the results back into R afterwards.
EDITS following comment from OP:
Using the raster package:
library(raster)
Loading the tif:
rr <- raster("C:\\temp\\n0r_201601011100.tif")
Save your pixel-coordinate equations in functions. Notice I changed the Lat function (removed the negative sign; it didn't work with it, so you'll have to validate that):
Lon = function(JJ) 0.01 * JJ + 162
Lat = function(II) 0.01 * II + 50.0
Get the extent of your raw raster in pixel coordinates:
ext.rr <- extent(rr)
Prepare a new empty raster which will be projected and have the right resolution and extent:
rr2 <- raster(nrows=nrow(rr), ncols=ncol(rr), xmn=Lon(ext.rr@xmin), xmx=Lon(ext.rr@xmax), ymn=Lat(ext.rr@ymin), ymx=Lat(ext.rr@ymax))
Fill this new raster with your modified values (following the equation you gave in the comments):
values(rr2) <- (values(rr) - 7) * 5
And you get:
rr2
class : RasterLayer
dimensions : 2600, 6000, 15600000 (nrow, ncol, ncell)
resolution : 0.01, 0.01 (x, y)
extent : 162, 222, 50, 76 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=longlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0
data source : in memory
names : layer
values : -35, 50 (min, max)
Notice that the lat-long projection was automatically picked up by the raster function. Hopefully this is what you are looking for.
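For readers without R, the extent calculation and value rescaling above can be sketched in plain Python/numpy (a sketch only; the raster I/O is omitted, and the grid size is taken from the output above):

```python
import numpy as np

# Pixel-coordinate extent of the raw grid, as in extent(rr):
ncol, nrow = 6000, 2600

def lon(jj):  # same linear mapping as the Lon() function above
    return 0.01 * jj + 162

def lat(ii):  # same linear mapping as the Lat() function above
    return 0.01 * ii + 50.0

xmin, xmax = lon(0), lon(ncol)
ymin, ymax = lat(0), lat(nrow)

# Rescale the cell values exactly as values(rr2) <- (values(rr) - 7) * 5
raw = np.arange(12).reshape(3, 4)  # stand-in for the real pixel values
rescaled = (raw - 7) * 5
print((xmin, xmax, ymin, ymax))    # (162.0, 222.0, 50.0, 76.0)
```

The extent matches the raster output above, and the rescaling reproduces the -35 minimum shown in the summary.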


Images rotated when added to PDF in itext7

I'm using the following extension method I built on top of itext7's com.itextpdf.layout.Document type to apply images to PDF documents in my application:
fun Document.writeImage(imageStream: InputStream, page: Int, x: Float, y: Float, width: Float, height: Float) {
val imageData = ImageDataFactory.create(imageStream.readBytes())
val image = Image(imageData)
val pageHeight = pdfDocument.getPage(page).pageSize.height
image.scaleAbsolute(width, height)
val lowerLeftX = x
val lowerLeftY = pageHeight - y - image.imageScaledHeight
image.setFixedPosition(page, lowerLeftX, lowerLeftY)
add(image)
}
Overall, this works -- but with one exception! I've encountered a subset of documents where the images are placed as if the document origin is rotated 90 degrees, even though the content of the document underneath is presented properly oriented.
Here is a redacted copy of one of the PDFs I'm experiencing this issue with. I'm wondering if anyone would be able to tell me why itext7 is having difficulties writing to this document, and what I can do to fix it -- or alternatively, if it's a potential bug in the higher level functionality of com.itextpdf.layout in itext7?
Some Additional Notes
I'm aware that drawing on a PDF works via a series of instructions concatenated to the PDF. The code above works on other PDFs we've had issues with in the past, so com.itextpdf.layout.Document does appear to be normalizing the coordinate space prior to drawing. Thus, the issue I describe above seems to be going undetected by itext?
The rotation metadata in the PDF that itext7 reports from a "good" PDF without this issue seems to be the same as the rotation metadata in PDFs like the one I've linked above. This means I can't perform some kind of brute-force fix through detection.
I would love any solution to not require me to flatten the PDF through any form of broad operation.
I can talk only about the document you've shared.
It contains 4 pages.
The /Rotate property of the first page is 0; for the other pages it is 270 (which defines a 90-degree counterclockwise rotation).
iText indeed tries to normalize the coordinate space for each page.
That's why, when you add an image to pages 2-4 of the document, it is rotated by 270 (90 counterclockwise) degrees.
... Even though the content of the document is presented properly oriented underneath.
Content of pages 2-4 looks like
q
0 -612 792 0 0 612 cm
/Im0 Do
Q
This is an image with applied transformation.
0 -612 792 0 0 612 cm represents the composite transformation matrix.
From ISO 32000
A transformation matrix in PDF shall be specified by six numbers,
usually in the form of an array containing six elements. In its most
general form, this array is denoted [a b c d e f]; it can represent
any linear transformation from one coordinate system to another.
We can extract a rotation from that matrix.
How to decompose the matrix is explained here:
https://math.stackexchange.com/questions/237369/given-this-transformation-matrix-how-do-i-decompose-it-into-translation-rotati
The rotation is defined by the following matrix:
0 -1
1 0
This is a rotation by -90 (i.e. 270) degrees.
Important note: in this case positive angle means counterclockwise rotation.
ISO 32000
Rotations shall be produced by [rc rs -rs rc 0 0], where rc = cos(q)
and rs = sin(q) which has the effect of rotating the coordinate system
axes by an angle q counter clockwise.
So the image has been rotated by the same angle in the opposite direction compared to the page.
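The angle extraction described above can be sketched in a few lines of Python (a sketch; for a pure rotation only the first two matrix entries are needed, and for a matrix with scaling you would normalise the first column first):

```python
import math

# Extract the rotation angle from a PDF transformation matrix [a b c d e f].
# A rotation-only matrix has the form [cos q, sin q, -sin q, cos q, 0, 0],
# so q = atan2(b, a); positive angles mean counterclockwise rotation.
def rotation_degrees(a, b, c, d, e, f):
    return math.degrees(math.atan2(b, a))

# The content-stream matrix "0 -612 792 0 0 612 cm" from pages 2-4:
print(rotation_degrees(0, -612, 792, 0, 0, 612))  # -90.0
```

This reproduces the -90 (270) degree rotation derived above.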

How to get rid of artefacts in contourplot contourf (smoothing matrix/ 2D array)?

I have data in an hdf5 file with named datasets:
# Data acquisition and manipulation
import h5py
import numpy as np
from os import path

file = h5py.File('C:/Users/machz/Downloads/20200715_000_Scan_XY-Coordinate_NV-centre_APD.h5', 'r')
filename = path.basename(file.filename)
intensity = np.array(file.get('intensity'))
x_range = np.round(np.array(file.get('x range')), 1)
z_range = np.round(np.array(file.get('z range')), 1)
where intensity is a 2D array and x_range and z_range are 1D arrays. Now I want to smooth the intensity data. The raw data looks, for example, like this:
by using seaborn.heatmap:
heat_map = sb.heatmap(intensity, cmap="Spectral_r")
When using matplotlib.contourf via
plt.contourf(intensity, 1000, cmap="Spectral_r")
I get the following result:
which looks okay, despite being rotated by 180 degrees. But how can I get rid of the distortion in the x and y directions and get round spots? Is there a more elegant way to smooth a 2D array / matrix? I have read something about kernel density estimation (KDE), but it looks complex.
Edit: Result of applying `intensity_smooth = gaussian_filter(intensity, sigma=1, order=0)`:
The points with high intensity are dissolving, but I want sharp intensity maxima with a soft transition between two values of the matrix (see first pic).
Unfortunately I expressed my question in a way that was easy to misunderstand. I have 2D data and want to get rid of the boxy look by interpolating the given data. For this I found a really good answer by Andras Deak in the thread Interpolation methods on different kinds of data. Plotting with matplotlib.contourf, I have gotten this:
The tickmarks must be changed but the result is good.
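As a sketch of that interpolation approach (scipy.ndimage.zoom here stands in for the spline interpolation from the linked answer; the array is a made-up stand-in for the hdf5 data):

```python
import numpy as np
from scipy.ndimage import zoom

# Upsample the coarse 2D array with a cubic spline (order=3) so that
# contourf draws smooth, round blobs instead of boxes.
intensity = np.zeros((20, 20))
intensity[10, 10] = 1.0  # one sharp maximum

intensity_fine = zoom(intensity, 10, order=3)  # 20x20 -> 200x200
print(intensity.shape, intensity_fine.shape)

# plt.contourf(intensity_fine, 1000, cmap="Spectral_r") would now show a
# round spot, and the peak survives far better than with a wide gaussian_filter.
```

Unlike heavy gaussian smoothing, interpolation keeps the sharp maxima while softening the transitions between neighbouring cells.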

use shapefile to mask raster data in ArcGIS, then weighted sum

I want to mask a raster data using a shapefile with ArcGIS, then weighted sum the masked parts.
Following is the path of the tool I used.
Spatial Analysis Tool -> Extraction -> Extract by mask.
When I use this tool, I always get several separate grids. However, what I want is an output having the same shape as my shapefile.
I hope the output includes several parts that can then be weighted-summed.
This is a coding site. For questions like this I would try https://gis.stackexchange.com/ instead.
I am not sure what you mean with weighted sum in this context, but here is an example of what you can do with R
Example data
library(raster)
p <- shapefile(system.file("external/lux.shp", package="raster"))[1,]
r <- raster(extent(p)+2, vals=1:100)
plot(r)
plot(p, add=T)
Raster cropped to polygon
x <- crop(r, p)
plot(x)
plot(p, add=T)
Disaggregate cells so that they fit the polygon better, followed by crop and mask:
d <- disaggregate(r, 100)
x <- crop(d, p)
m <- mask(x, p)
plot(m)
plot(p, add=T)
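For the weighted-sum step itself, here is a minimal numpy sketch (all names and values are hypothetical), assuming you have already extracted the cell values inside the polygon (NaN outside the shape) and a weight grid of the same size:

```python
import numpy as np

# Stand-in for the masked raster: NaN marks cells outside the polygon.
values = np.array([[1.0, 2.0, np.nan],
                   [4.0, np.nan, 6.0]])
weights = np.array([[0.5, 0.25, 0.25],
                    [0.1, 0.4, 0.5]])

inside = ~np.isnan(values)
weighted_sum = np.nansum(values * weights)         # masked cells drop out
weighted_mean = weighted_sum / weights[inside].sum()  # optional normalisation
print(weighted_sum)
```

The same idea applies to the `m` object from the R example: multiply by a weight raster and sum over the non-NA cells.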

Interpolated results from gdallocationinfo?

This:
gdallocationinfo -valonly -wgs84 file longitude latitude
provides the value from the file at the resolved pixel.
Is there a gdal function that can provide an interpolated value from the neighbouring pixels?
For example these calls read elevation from Greenwich Park in London:
gdallocationinfo -wgs84 srtm_36_02.tif 0 51.4779
47
gdallocationinfo -wgs84 srtm_36_02.tif 0 51.4780
37
That's a 10 metre drop in elevation for a movement of 0.0001°, about 11 metres north.
The pixels in the file are quite coarse - corresponding to about 80 metres on the ground. I want to get smoother values out rather than sudden big jumps.
The workaround I'm currently using is to resample the source file at four times the resolution using this transformation:
gdalwarp -ts 24004 24004 -r cubicspline srtm_36_02.tif srtm_36_02_cubicspline_x4.tiff
The elevation requests for the same locations as before, using the new file:
gdallocationinfo -wgs84 srtm_36_02_cubicspline_x4.tiff 0 51.4779
43
gdallocationinfo -wgs84 srtm_36_02_cubicspline_x4.tiff 0 51.4780
41
which is much better as that is only a 2 metre jump.
The downside of this approach is that it takes a few minutes to generate the higher resolution file, but the main problem is that the file size goes from 69MB to 1.1GB.
I'm surprised that resampling is not a direct option to gdallocationinfo, or maybe there is another approach I can use?
You can write a Python or a Node.js script to do this; it would be 4 or 5 lines of code, as GDAL's RasterIO can resample on the fly.
Node.js would go like this:
const cellSize = 4; // This is your resampling factor
const gdal = require('gdal-async');
const ds = gdal.open('srtm_36_02.tif');
// Transform from WGS84 to raster coordinates
const xform = new gdal.CoordinateTransformation(
gdal.SpatialReference.fromEPSG(4326), ds);
const coords = xform.transformPoint({x, y}); // x, y: your query point in WGS84
const data = ds.bands.get(1).pixels.read(
coords.x - cellSize/2,
coords.y - cellSize/2,
cellSize,
cellSize,
undefined, // Let GDAL allocate an output buffer
{ buffer_width: 1, buffer_height: 1 } // of this size
);
console.log(data);
For brevity I have omitted the clamping of the coordinates when you are near the edges; you will have to reduce the window size in that case.
(disclaimer: I am the author of the Node.js bindings for GDAL)
You may try to get a 1-pixel raster from gdalwarp. This would use all the warp resampling machinery with minimal impact on RAM/CPU/disk. I am using this (inside a Python program, since the calculations may be a bit too complex for a shell script). It does work.

Optimizing size of eps/pdf files generated by Mathematica

How to optimize size of an eps or pdf file generated by Mathematica?
It is common that the file size is 50-100x bigger than it should be (an example below). For some applications (e.g. putting a figure in a publication, or even more so, putting it on a large poster) I need to have the axes in vector graphics, so using raster graphics for everything is not the best option for me.
Every practical solution (either with setting the right options in Mathematica or with doing further conversions in other applications) will be appreciated.
For example, the following code:
plot = ListDensityPlot[
Table[Random[], {100}, {100}],
InterpolationOrder -> 0]
Export["testplot.eps", plot]
Export["testplot.pdf", plot]
produces an eps file of size 3.3MB and a pdf of 5MB (on Mathematica 7 on Mac OS X 10.6, if it makes a difference).
For a comparison, a 3x3 plot with the same axes has 8kB (pdf) to 20kB (eps).
100x100 points is 30kB in bmp (and a bit less in png).
The issue is the same for other types of plots, with the emphasis on ListPlot3D.
You may have figured out how to apply Alexey's answer in the link he provided. But in case you are having trouble, here I show how I apply the technique to 2D graphics.
I have found out the hard way that if you want to create a good plot you need to be very specific with Mathematica. For this reason, as you may have noticed in my post Rasters in 3D, I created an object specifying all the options so that Mathematica can be happy.
in = 72;
G2D = Graphics[{},
AlignmentPoint -> Center,
AspectRatio -> 1,
Axes -> False,
AxesLabel -> None,
BaseStyle -> {FontFamily -> "Arial", FontSize -> 12},
Frame -> True,
FrameStyle -> Directive[Black],
FrameTicksStyle -> Directive[10, Black],
ImagePadding -> {{20, 5}, {15, 5}},
ImageSize -> 5 in,
LabelStyle -> Directive[Black],
PlotRange -> All,
PlotRangeClipping -> False,
PlotRangePadding -> Scaled[0.02]
];
I should mention here that you must specify ImagePadding. If you set it to All, your eps file will be different from what Mathematica shows you. In any case, I think having this object allows you to change properties much more easily.
Now we can move on to your problem:
plot = ListDensityPlot[
Table[Random[], {100}, {100}],
InterpolationOrder -> 0,
Options[G2D]
]
The following separates the axes and the raster and combines them into result:
axes = Graphics[{}, AbsoluteOptions[plot]];
fig = Show[plot, FrameStyle -> Directive[Opacity[0]]];
fig = Magnify[fig, 5];
fig = Rasterize[fig, Background -> None];
axes = First@ImportString[ExportString[axes, "PDF"], "PDF"];
result = Show[axes, Epilog -> Inset[fig, {0, 0}, {0, 0}, ImageDimensions[axes]]]
The only difference here, which at this point I cannot explain, is the axes labels: they have a decimal point. Finally, we export them:
Export["Result.pdf", result];
Export["Result.eps", result];
The results are files of size 115 KB for the pdf and 168 KB for the eps.
UPDATE:
If you are using Mathematica 7, the eps file will not come out correctly: all you will see is your main figure with black on the sides. This is a bug in version 7, which has been fixed in Mathematica 8.
I mentioned previously that I did not know why the axes labels were different. Alexey Popkov came up with a fix for that. To create axes, fig and result, use the following:
axes = Graphics[{}, FilterRules[AbsoluteOptions[plot], Except[FrameTicks]]];
fig = Show[plot, FrameStyle -> Directive[Opacity[0]]];
fig = Magnify[fig, 5];
fig = Rasterize[fig, Background -> None];
axes = First@ImportString[ExportString[axes, "PDF"], "PDF"];
result = Show[axes, Epilog -> Inset[fig, {0, 0}, {0, 0}, ImageDimensions[axes]]]
I have had some success with both of the following:
(1) Rasterizing plots before saving. Quality is generally reasonable, size drops considerably.
(2) I save to PostScript, then (I'm on a Linux machine) use ps2pdf to get the pdf. This tends to be significantly smaller than saving directly to pdf from Mathematica.
Daniel Lichtblau
ImageResolution works well for .pdf but I haven't had success with .eps.
Export["testplot600.pdf", plot, ImageResolution -> 600]
The output size is 242 KB for 600 dpi and 94 KB for 300 dpi. You can also set ImageSize for Export.
If you want to go the third-party route, I'd recommend GraphicConverter. It is very reliable and has many many options.