GDAL doesn't recognize bipolar conic western hemisphere projection

I'm having trouble with a particular projection. It seems to be supported in PROJ.4 (http://proj4.org/usage/operations/projections/bipc.html), but when I use it within GDAL it's as if it doesn't exist:
gdalsrsinfo:
gdalsrsinfo -o proj4 "+proj=bipc +ns"
yields
failed to load SRS definition
gdalwarp:
gdalwarp -overwrite -s_srs EPSG:4326 -t_srs "+proj=bipc +ns" -of GTiff in.tiff out.tiff
yields
ERROR 1: Translating source or target SRS failed: +proj=bipc +ns
Also, running proj -lp lists it: bipc : Bipolar conic of western hemisphere.
These commands work fine for me with more common (re)projections, and I've tried this on GDAL 1.11.5 and 2.2.2.
Why isn't this projection working/how do I get it recognized?

GDAL does not support all PROJ.4 projections. You can build a list of which ones it supports with Python:
#!/usr/bin/env python
from osgeo import osr
from subprocess import Popen, PIPE

osr.UseExceptions()

# Get the list of PROJ.4 projections
proj = {}
p = Popen(['proj', '-lp'], stdout=PIPE, universal_newlines=True)
for line in p.communicate()[0].split('\n'):
    if ':' in line:
        a, b = line.split(':', 1)
        proj[a.strip()] = b.strip()

# Brute force method of testing GDAL's OSR module
supported = set()
not_supported = set()
for k in proj.keys():
    sr = osr.SpatialReference()
    try:
        _ = sr.ImportFromProj4('+proj=' + k)
        supported.add(k)
    except RuntimeError as e:
        not_supported.add(k)

print('{0} total projections, {1} supported, {2} not supported'
      .format(len(proj), len(supported), len(not_supported)))
print('Supported: ' + ', '.join(sorted(supported)))
print('Not supported: ' + ', '.join(sorted(not_supported)))
134 total projections, 47 supported, 87 not supported
Supported: aea, aeqd, bonne, cass, cea, eck1, eck2, eck3, eck4, eck5, eck6, eqc, eqdc, etmerc, gall, geos, gnom, goode, gstmerc, igh, krovak, laea, lcc, merc, mill, moll, nzmg, omerc, ortho, poly, qsc, robin, sinu, somerc, stere, sterea, tmerc, tpeqd, utm, vandg, wag1, wag2, wag3, wag4, wag5, wag6, wag7
Not supported: airy, aitoff, alsk, apian, august, bacon, bipc, boggs, calcofi, cc, chamb, collg, crast, denoy, euler, fahey, fouc, fouc_s, gins8, gn_sinu, gs48, gs50, hammer, hatano, healpix, imw_p, isea, kav5, kav7, labrd, lagrng, larr, lask, latlon, lcca, leac, lee_os, lonlat, loxim, lsat, mbt_fps, mbt_s, mbtfpp, mbtfpq, mbtfps, mil_os, murd1, murd2, murd3, natearth, nell, nell_h, nicol, nsper, ob_tran, ocea, oea, ortel, pconic, putp1, putp2, putp3, putp3p, putp4p, putp5, putp5p, putp6, putp6p, qua_aut, rhealpix, rouss, rpoly, tcc, tcea, tissot, tpers, ups, urm5, urmfps, vandg2, vandg3, vandg4, vitk1, weren, wink1, wink2, wintri
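If you only need to check a single projection rather than the whole list, a minimal sketch (assuming the GDAL Python bindings are installed) is to pass the PROJ.4 string straight to OSR and catch the failure:

from osgeo import osr

osr.UseExceptions()
sr = osr.SpatialReference()
try:
    sr.ImportFromProj4('+proj=bipc +ns')
    print('Supported by GDAL/OSR')
except RuntimeError as err:
    print('Not supported by GDAL/OSR: {0}'.format(err))

For bipc this falls into the except branch, which is consistent with the gdalsrsinfo and gdalwarp errors above.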

Related

Sympy plotting backends library unable to save plot

I use the sympy plotting backends library to create plots directly from sympy expressions. As a backend I use the matplotlib library. Actually, I don't need to show the plot: I create it and then return it as an SVG string in order to insert it into a web page later. Everything worked fine, but when I started using a new virtual environment (I installed all the necessary packages) I got an error. I use the following code:
# create two plots: one from init_expr (sympy tree expression) and a second from
# fourier_expr (sympy tree expression):
p1 = plot(
    (init_expr, (var, left_bound, right_bound), label_1, line_color_1),
    (fourier_expr, (var, left_bound, right_bound), label_2, line_color_2),
    show=False)
# add axes decorations and legend:
p1.xlabel = var_name
p1.ylabel = "f(" + data.var_name + ")"
p1.legend = True
# buffer to write svg data:
f = io.StringIO()
# save plot in svg format in the buffer:
p1.save(f, format="svg")
# return svg string:
return f.getvalue()
When execution reaches the line
p1.save(f, format="svg")
I get an error:
AttributeError("'NoneType' object has no attribute 'runner'")
Any ideas what I am doing wrong?

Error with matrixprofile library (module has no attribute .stomp or matrixProfile is not defined)

I downloaded the matrixprofile library for Python, but these errors occur with the following code:
from matrixprofile import *
w = 16
mp, mpi = matrixProfile.stomp(ts.values, w)
plt.plot(mp)
plt.show()
from matrixprofile import *
w = 16
mp, mpi = matrixprofile.stomp(ts.values, w)
plt.plot(mp)
plt.show()
Neither version works; they give the following errors, respectively:
NameError: name 'matrixProfile' is not defined
module 'matrixprofile' has no attribute 'stomp'
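One thing worth checking (a sketch, assuming the installed package is the older matrixprofile-ts distribution; the newer matrixprofile package from the Matrix Profile Foundation exposes a different API such as matrixprofile.compute) is to import the matrixProfile submodule explicitly instead of relying on the wildcard import:

# Sketch assuming the matrixprofile-ts package; ts is assumed to be a pandas Series.
import matplotlib.pyplot as plt
from matrixprofile import matrixProfile  # explicit submodule import

w = 16
mp, mpi = matrixProfile.stomp(ts.values, w)
plt.plot(mp)
plt.show()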

Using tf.custom_gradient in tensorflow r1.8

System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Y
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): r1.8
Python version: 2.7.14
GCC/Compiler version (if compiling from source): 5.4
CUDA/cuDNN version: 8.0/7.0
GPU model and memory: GTX1080, 8G
Bazel version: N/A
Exact command to reproduce: python test_script.py
Describe the problem
Hello, I'm trying to make a custom-gradient op using tf.custom_gradient. I made my test code based on the online API explanation. However, it seems there is a problem in the custom_gradient function. Thanks!
Source code / logs
import tensorflow as tf
import numpy as np

@tf.custom_gradient
def log1pexp(x):
    e = tf.exp(x)
    def grad(dy):
        return dy * (1 - 1 / (1 + e))
    return tf.log(1 + e), grad

x = tf.constant(100.)
f = tf.custom_gradient(log1pexp)
y, dy = f(x)
sess = tf.Session()
print(y.eval(session=sess), y.eval(session=sess).shape)
File "/home/local/home/research/DL/unit_tests/tf_test_custom_grad.py", line 14, in <module>
y, dy = f(x)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/custom_gradient.py", line 111, in decorated
return _graph_mode_decorator(f, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/custom_gradient.py", line 132, in _graph_mode_decorator
result, grad_fn = f(*args)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 439, in __iter__
"Tensor objects are not iterable when eager execution is not "
TypeError: Tensor objects are not iterable when eager execution is not enabled. To iterate over this tensor use tf.map_fn.
If you just want to test the code from the documentation, here is the way.
The following code gives the unstable [nan] result:
import tensorflow as tf

def log1pexp(x):
    return tf.log(1 + tf.exp(x))

x = tf.constant(100.)
y = log1pexp(x)
dy = tf.gradients(y, x)

with tf.Session() as sess:
    print(sess.run(dy))
And the following code will give the correct result [1.0]:
import tensorflow as tf

@tf.custom_gradient
def log1pexp(x):
    e = tf.exp(x)
    def grad(dy):
        return dy * (1 - 1 / (1 + e))
    return tf.log(1 + e), grad

x = tf.constant(100.)
y = log1pexp(x)
dy = tf.gradients(y, x)

with tf.Session() as sess:
    print(sess.run(dy))
Details:
The main problem here is that you are trying to decorate log1pexp twice in your code: once with @tf.custom_gradient and once with f = tf.custom_gradient(log1pexp). In Python, @tf.custom_gradient here is equivalent to log1pexp = tf.custom_gradient(log1pexp). You should do this only once, especially here, for the following reason.
tf.custom_gradient needs to call the function passed to it to get both the function output and the gradient, i.e. it expects two return values. During decoration, everything works as expected because log1pexp returns tf.log(1 + e) and grad. After decorating, log1pexp (the function returned by tf.custom_gradient) becomes a new function which returns only one tensor, tf.log(1 + e). When you then do f = tf.custom_gradient(log1pexp), tf.custom_gradient can only get one return value, the single tensor tf.log(1 + e). It tries to split this tensor into two by iterating over it, which is not allowed, as the error message states:
Tensor objects are not iterable when eager execution is not enabled.
You should not decorate log1pexp twice anyway, but that is why you got this error. One more thing to mention: your code will trigger another error for a similar reason even if you remove @tf.custom_gradient. After removing @tf.custom_gradient, the line f = tf.custom_gradient(log1pexp) works as expected, but f is then a function returning only one tensor, so y, dy = f(x) is wrong and will not work (see the sketch below).
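A minimal sketch of that last point (same TensorFlow 1.x graph-mode setup as above, with log1pexp left undecorated): wrap the function exactly once and do not unpack its result.

# log1pexp here is the undecorated function that returns (value, grad_fn).
f = tf.custom_gradient(log1pexp)   # wrap exactly once
y = f(x)                           # f returns a single tensor
dy = tf.gradients(y, x)            # gradients are obtained the usual way

with tf.Session() as sess:
    print(sess.run([y, dy]))       # the gradient should be the stable [1.0]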

How to set the "band description" option/tag of a GeoTIFF file using GDAL (gdalwarp/gdal_translate)

Does anybody know how to change or set the "Description" option/tag of a GeoTIFF file using GDAL?
To specify what I mean, this is an example of the gdalinfo output for a GeoTIFF file with the "Description" set:
Band 1 Block=64x64 Type=UInt16, ColorInterp=Undefined
  Description = AVHRR Channel 1: 0.58 micrometers -- 0.68 micrometers
  Min=0.000 Max=814.000
  Minimum=0.000, Maximum=814.000, Mean=113.177, StdDev=152.897
  Metadata:
    LAYER_TYPE=athematic
    STATISTICS_MAXIMUM=814
    STATISTICS_MEAN=113.17657236931
    STATISTICS_MINIMUM=0
    STATISTICS_STDDEV=152.89720574652
In the example you can see: Description = AVHRR Channel 1: 0.58 micrometers -- 0.68 micrometers
How do I set this parameter using GDAL?
In Python you can set the band description like this:
from osgeo import gdal, osr
import numpy

# Define output image name, size and projection info:
OutputImage = 'test.tif'
SizeX = 20
SizeY = 20
CellSize = 1
X_Min = 563220.0
Y_Max = 699110.0
N_Bands = 10

srs = osr.SpatialReference()
srs.ImportFromEPSG(2157)
srs = srs.ExportToWkt()
GeoTransform = (X_Min, CellSize, 0, Y_Max, 0, -CellSize)

# Create the output image:
Driver = gdal.GetDriverByName('GTiff')
Raster = Driver.Create(OutputImage, SizeX, SizeY, N_Bands, 2)  # Datatype = 2, same as gdal.GDT_UInt16
Raster.SetProjection(srs)
Raster.SetGeoTransform(GeoTransform)

# Iterate over each band
for band in range(N_Bands):
    BandNumber = band + 1
    BandName = 'SomeBandName ' + str(BandNumber).zfill(3)
    RasterBand = Raster.GetRasterBand(BandNumber)
    RasterBand.SetNoDataValue(0)
    RasterBand.SetDescription(BandName)  # This sets the band name!
    RasterBand.WriteArray(numpy.ones((SizeX, SizeY)))

# Close the output image
Raster = None
print("Done.")
Unfortunately, I'm not sure whether ArcGIS or QGIS are able to read the band descriptions. However, the band names are clearly visible in TuiView.
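To confirm the descriptions were written (a quick sketch, reusing the test.tif created above), you can read them back with the same bindings:

from osgeo import gdal

ds = gdal.Open('test.tif')
for i in range(1, ds.RasterCount + 1):
    print(i, ds.GetRasterBand(i).GetDescription())
ds = None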
GDAL includes a Python application called gdal_edit.py which can be used to modify the metadata of a file in place. I am not familiar with the Description field you are referring to, but this tool should be the one to use.
Here is the man page: gdal_edit.py
Here is an example script using an ortho-image I downloaded from the USGS Earth-Explorer.
#!/bin/sh
# Image to modify
IMAGE_PATH='11skd505395.tif'
# Field to modify
IMAGE_FIELD='TIFFTAG_IMAGEDESCRIPTION'
# Print the tiff image description tag
gdalinfo $IMAGE_PATH | grep $IMAGE_FIELD
# Change the Field
CMD="gdal_edit.py -mo ${IMAGE_FIELD}='Lake-Tahoe' $IMAGE_PATH"
echo $CMD
$CMD
# Print the new field value
gdalinfo $IMAGE_PATH | grep $IMAGE_FIELD
Output
$ ./gdal-script.py
TIFFTAG_IMAGEDESCRIPTION=OrthoVista
gdal_edit.py -mo TIFFTAG_IMAGEDESCRIPTION='Lake-Tahoe' 11skd505395.tif
TIFFTAG_IMAGEDESCRIPTION='Lake-Tahoe'
Here is another link that should provide useful info.
https://gis.stackexchange.com/questions/111610/how-to-overwrite-metadata-in-a-tif-file-with-gdal
Here's a single-purpose Python command-line script to edit the band description in place.
'''Set image band description to specified text'''
import sys
from osgeo import gdal

gdal.UseExceptions()

if len(sys.argv) < 4:
    print(f"Usage: {sys.argv[0]} [in_file] [band#] [text]")
    sys.exit(1)

infile = sys.argv[1]        # source filename and path
inband = int(sys.argv[2])   # source band number
descrip = sys.argv[3]       # description text

data_in = gdal.Open(infile, gdal.GA_Update)
band_in = data_in.GetRasterBand(inband)
old_descrip = band_in.GetDescription()
band_in.SetDescription(descrip)
new_descrip = band_in.GetDescription()

# De-reference the dataset, which triggers GDAL to save
data_in = None

print(f"Description was: {old_descrip}")
print(f"Description now: {new_descrip}")
In use:
$ python scripts\gdal-edit-band-desc.py test-edit.tif 1 "Red please"
Description was:
Description now: Red please
$ gdal-edit-band-desc test-edit.tif 1 "Red please also"
$ python t:\ENV.558\scripts\gdal-edit-band-desc.py test-edit.tif 1 "Red please also"
Description was: Red please
Description now: Red please also
Properly it should be added to gdal_edit.py, but I don't know enough to feel safe adding it directly.
gdal_edit.py with the -mo flag can be used to edit the band descriptions, with the bands numbered starting from 1:
gdal_edit.py -mo BAND_1=AVHRR_Channel_1_p58_p68_um -mo BAND_2=AVHRR_Channel_2 avhrr.tif
I didn't try it with the special characters but that might work if you use the right quotes.

Exporting a 3D numpy to a VTK file for viewing in Paraview/Mayavi

For those who want to export a simple 3D numpy array (along with axes) to a .vtk (or .vtr) file for post-processing and display in Paraview or Mayavi, there's a little module called PyEVTK that does exactly that. The module supports structured and unstructured data, etc.
Unfortunately, even though the code works fine on Unix-based systems, I couldn't make it work (it keeps crashing) on any Windows installation, which simply makes things complicated. I've contacted the developer, but his suggestions did not work.
Therefore my question is:
How can one use from vtk.util import numpy_support to export a 3D array (the conversion function itself doesn't support 3D arrays) to a .vtk file? Is there a simple way to do it without creating vtkDatasets, etc.?
Thanks a lot!
It's been forever and I had entirely forgotten about asking this question, but I ended up figuring it out. I've written a post about it on my blog (PyScience) providing a tutorial on how to convert between NumPy and VTK. Do take a look if interested:
pyscience.wordpress.com/2014/09/06/numpy-to-vtk-converting-your-numpy-arrays-to-vtk-arrays-and-files/
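For completeness, here is a minimal sketch of the numpy_support route the question asks about (assuming a recent vtk Python package; the array and file name are made up): flatten the 3D array, wrap it with numpy_to_vtk, attach it to a vtkImageData with the right dimensions, and write it out.

import numpy as np
import vtk
from vtk.util import numpy_support

data = np.random.random((10, 10, 10))

# numpy_to_vtk only accepts 1D/2D arrays, so flatten first; Fortran order puts
# x varying fastest, which is what VTK expects for point data on an image grid.
vtk_array = numpy_support.numpy_to_vtk(data.ravel(order='F'), deep=True)
vtk_array.SetName('Test Data')

image = vtk.vtkImageData()
image.SetDimensions(data.shape)
image.GetPointData().SetScalars(vtk_array)

writer = vtk.vtkXMLImageDataWriter()  # writes the XML-based .vti format
writer.SetFileName('test.vti')
writer.SetInputData(image)
writer.Write()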
It's not a direct answer to your question, but if you have tvtk (if you have mayavi, you should have it), you can use it to write your data to vtk format. (See: http://code.enthought.com/projects/files/ETS3_API/enthought.tvtk.misc.html )
It doesn't use PyEVTK, and it supports a broad range of data sources (more than just structured and unstructured grids), so it will probably work where other things won't.
As a quick example (Mayavi's mlab interface can make this much less verbose, especially if you're already using it):
import numpy as np
from enthought.tvtk.api import tvtk, write_data

data = np.random.random((10, 10, 10))

grid = tvtk.ImageData(spacing=(10, 5, -10), origin=(100, 350, 200),
                      dimensions=data.shape)
grid.point_data.scalars = data.ravel(order='F')
grid.point_data.scalars.name = 'Test Data'

# Writes legacy ".vtk" format if filename ends with "vtk", otherwise
# this will write data using the newer xml-based format.
write_data(grid, 'test.vtk')
And a portion of the output file:
# vtk DataFile Version 3.0
vtk output
ASCII
DATASET STRUCTURED_POINTS
DIMENSIONS 10 10 10
SPACING 10 5 -10
ORIGIN 100 350 200
POINT_DATA 1000
SCALARS Test%20Data double
LOOKUP_TABLE default
0.598189 0.228948 0.346975 0.948916 0.0109774 0.30281 0.643976 0.17398 0.374673
0.295613 0.664072 0.307974 0.802966 0.836823 0.827732 0.895217 0.104437 0.292796
0.604939 0.96141 0.0837524 0.498616 0.608173 0.446545 0.364019 0.222914 0.514992
...
...
TVTK (part of Mayavi) has a beautiful way of writing VTK files. Here is a test example I have written for myself following @Joe's answer and the tvtk documentation. The advantage it has over evtk is the support for both the legacy ASCII format and XML. Hope it will help other people.
from tvtk.api import tvtk, write_data
import numpy as np
#data = np.random.random((3, 3, 3))
#
#i = tvtk.ImageData(spacing=(1, 1, 1), origin=(0, 0, 0))
#i.point_data.scalars = data.ravel()
#i.point_data.scalars.name = 'scalars'
#i.dimensions = data.shape
#
#w = tvtk.XMLImageDataWriter(input=i, file_name='spoints3d.vti')
#w.write()
points = np.array([[0,0,0], [1,0,0], [1,1,0], [0,1,0]], 'f')
(n1, n2) = points.shape
poly_edge = np.array([[0,1,2,3]])
print(n1, n2)
## Scalar Data
#temperature = np.array([10., 20., 30., 40.])
#pressure = np.random.rand(n1)
#
## Vector Data
#velocity = np.random.rand(n1,n2)
#force = np.random.rand(n1,n2)
#
##Tensor Data with
comp = 5
stress = np.random.rand(n1,comp)
#
#print stress.shape
## The TVTK dataset.
mesh = tvtk.PolyData(points=points, polys=poly_edge)
#
## Data 0 # scalar data
#mesh.point_data.scalars = temperature
#mesh.point_data.scalars.name = 'Temperature'
#
## Data 1 # additional scalar data
#mesh.point_data.add_array(pressure)
#mesh.point_data.get_array(1).name = 'Pressure'
#mesh.update()
#
## Data 2 # Vector data
#mesh.point_data.vectors = velocity
#mesh.point_data.vectors.name = 'Velocity'
#mesh.update()
#
## Data 3 additional vector data
#mesh.point_data.add_array( force)
#mesh.point_data.get_array(3).name = 'Force'
#mesh.update()
mesh.point_data.tensors = stress
mesh.point_data.tensors.name = 'Stress'
# Data 4 additional tensor Data
#mesh.point_data.add_array(stress)
#mesh.point_data.get_array(4).name = 'Stress'
#mesh.update()
write_data(mesh, 'polydata.vtk')
# XML format
# Method 1
#write_data(mesh, 'polydata')
# Method 2
#w = tvtk.XMLPolyDataWriter(input=mesh, file_name='polydata.vtk')
#w.write()
I know it is a bit late and I do love your tutorials @somada141. This should work too.
import vtk

def numpy2VTK(img, spacing=[1.0, 1.0, 1.0]):
    # evolved from code from Stou S.,
    # on http://www.siafoo.net/snippet/314
    # This function, as the name suggests, converts a numpy array to VTK
    importer = vtk.vtkImageImport()

    img_data = img.astype('uint8')
    img_string = img_data.tostring()  # type short
    dim = img.shape

    importer.CopyImportVoidPointer(img_string, len(img_string))
    importer.SetDataScalarType(vtk.VTK_UNSIGNED_CHAR)
    importer.SetNumberOfScalarComponents(1)

    extent = importer.GetDataExtent()
    importer.SetDataExtent(extent[0], extent[0] + dim[2] - 1,
                           extent[2], extent[2] + dim[1] - 1,
                           extent[4], extent[4] + dim[0] - 1)
    importer.SetWholeExtent(extent[0], extent[0] + dim[2] - 1,
                            extent[2], extent[2] + dim[1] - 1,
                            extent[4], extent[4] + dim[0] - 1)

    importer.SetDataSpacing(spacing[0], spacing[1], spacing[2])
    importer.SetDataOrigin(0, 0, 0)

    return importer
Hope it helps!
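The function above only returns a vtkImageImport source, so here is a brief usage sketch (file name made up) showing one way to actually write its output to disk:

import numpy as np
import vtk

img = (np.random.random((10, 10, 10)) * 255).astype('uint8')
importer = numpy2VTK(img)

writer = vtk.vtkXMLImageDataWriter()               # XML-based .vti format
writer.SetInputConnection(importer.GetOutputPort())
writer.SetFileName('from_importer.vti')
writer.Write()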
Here's a SimpleITK version with the function load_itk taken from here:
import sys
import SimpleITK as sitk
import numpy as np

if len(sys.argv) < 3:
    print('Wrong number of arguments.', file=sys.stderr)
    print('Usage: ' + __file__ + ' input_sitk_file' + ' output_sitk_file', file=sys.stderr)
    sys.exit(1)

def quick_read(filename):
    # Read image information without reading the bulk data.
    file_reader = sitk.ImageFileReader()
    file_reader.SetFileName(filename)
    file_reader.ReadImageInformation()
    print('image size: {0}\nimage spacing: {1}'.format(file_reader.GetSize(), file_reader.GetSpacing()))
    # Some files have a rich meta-data dictionary (e.g. DICOM)
    for key in file_reader.GetMetaDataKeys():
        print(key + ': ' + file_reader.GetMetaData(key))

def load_itk(filename):
    # Reads the image using SimpleITK
    itkimage = sitk.ReadImage(filename)
    # Convert the image to a numpy array first and then shuffle the dimensions to get axis in the order z,y,x
    data = sitk.GetArrayFromImage(itkimage)
    # Read the origin of the ct_scan, will be used to convert the coordinates from world to voxel and vice versa.
    origin = np.array(list(reversed(itkimage.GetOrigin())))
    # Read the spacing along each dimension
    spacing = np.array(list(reversed(itkimage.GetSpacing())))
    return data, origin, spacing

def convert(data, output_filename):
    image = sitk.GetImageFromArray(data)
    writer = sitk.ImageFileWriter()
    writer.SetFileName(output_filename)
    writer.Execute(image)

def wait():
    print('Press Enter to load & convert or exit using Ctrl+C')
    input()

quick_read(sys.argv[1])
print('-' * 20)
wait()
data, origin, spacing = load_itk(sys.argv[1])
convert(data, sys.argv[2])