GDAL - XYZ to GeoTIFF slow

I have an XYZ raster file, 1.1 GB, in EPSG:23700 (EOV) at 50 m resolution.
The aim is to create a GeoTIFF file to be published via GeoServer (EPSG:4326), but I have some performance issues.
If I open the XYZ file in QGIS (2.14.0, Essen), choose Raster » Conversion » Translate, and run it with the default options, it completes in several minutes, which is acceptable.
But if I copy the generated gdal_translate command and run it from the CLI, it takes more than an hour.
I've tried -co "GDAL_CACHEMAX=500" and -co "NUM_THREADS=3", but they have no effect. In the process monitor, the QGIS run uses one core fully (25% CPU) and the default cache maximum of 10 MB, but the CLI run uses less than 10% CPU and less than 10 MB of memory. The --debug ON option shows "XYZ: New stepX=50.000000000000000" and hangs there.
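(Worth noting: GDAL_CACHEMAX is a runtime configuration option rather than a creation option, so -co does not apply it; it is normally passed with --config, e.g., with hypothetical file names:
gdal_translate --config GDAL_CACHEMAX 500 input.xyz output.tif
NUM_THREADS, by contrast, is a valid GTiff creation option.)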
I've tried running it both from the QGIS directory's \bin folder and from a separately downloaded GDAL instance (C:\OSGeo4W64\bin), with the same results.
Windows Server 2012, 16 GB RAM, 2.6 GHz 4-core Xeon CPU.
Any thoughts on this?

Looks like there's some problem with the environment variables. If I use a modified version of the QGIS startup batch file, it all works as expected.
@echo off
call "%~dp0\o4w_env.bat"
@echo off
path %OSGEO4W_ROOT%\apps\qgis\bin;%PATH%
set QGIS_PREFIX_PATH=%OSGEO4W_ROOT:\=/%/apps/qgis
set GDAL_FILENAME_IS_UTF8=YES
set GDAL_CACHEMAX=500
rem Set VSI cache to be used as buffer, see #6448
set VSI_CACHE=TRUE
set VSI_CACHE_SIZE=1000000
set QT_PLUGIN_PATH=%OSGEO4W_ROOT%\apps\qgis\qtplugins;%OSGEO4W_ROOT%\apps\qt4\plugins
REM This line changed to run my batch file instead of starting QGIS.
call "d:\gdaltest.bat"

Related

Dymola converting output files to sdf - doesn't work for large files?

After a simulation finishes, Dymola runs dsres2sdf.exe to convert the results to SDF format (if that option is enabled in the simulation setup's Output tab).
Usually this runs smoothly, but sometimes it generates an SDF file that is very small (800 bytes) and empty.
Starting dsres2sdf.exe manually from the command line generates the same empty file.
I suspect that happens when the *.mat file is very large (>1 GB).
Does anybody have a clue how to get a proper SDF file?
The SDF Editor and the SDF libraries for Python and MATLAB can read Dymola result files (*.mat) transparently (as if they were SDFs) and allow you to save them as *.sdf.
For example with Python:
import sdf
# load the Dymola result file
data = sdf.load('DoublePendulum.mat')
# re-save as SDF
sdf.save('DoublePendulum.sdf', data)

How to load a CSV file from the Mayavi GUI?

I know how to read the CSV into numpy and do it from a Python script, and that is good enough for my use case.
But since it has a GUI with data-loading functionality, I was expecting it to just work for such a universal data format.
So I tried the menu path: File » Load data » Open file,
but when I select a simple CSV file:
i=0; while [ "$i" -lt 10 ]; do echo "$i,$((2*i)),$((4*i))"; i=$((i+1)); done > main.csv
which contains:
0,0,0
1,2,4
2,4,8
3,6,12
4,8,16
5,10,20
6,12,24
7,14,28
8,16,32
9,18,36
an error popup shows on the GUI:
No suitable reader found for file /home/ciro/main.csv
Google led me to this interesting file in the source tree: https://github.com/enthought/mayavi/blob/e2569be1096be3deecb15f8fa8581a3ae3fb77d3/mayavi/tools/data_wizards/csv_loader.py, but that just looks like an example of how to do it from a script.
Tested in Mayavi 4.6.2.
From the documentation
One needs to have some data or the other loaded before a Module or Filter may be used. Mayavi supports several data file formats most notably VTK data file formats. Alternatively, mlab can be used to load data from numpy arrays. For advanced information on data structures, refer to the Data representation in Mayavi section.
I've tested importing via the GUI on an Asus laptop with an Intel Core i7-4510U CPU @ 2.00 GHz and 8 GB of RAM, running Windows 10, both in and out of a Python virtualenv, and always got the same problem.
It all points to CSV files not being directly supported, so I had to find a workaround.
My favorite was to create a virtual environment and install mayavi, jupyterlab, PyQt5, and pandas in it.
Then, using PowerShell, start a Jupyter notebook (jupyter notebook) > Upload > select the .csv. This imported a 1.25 GB .csv (153,543,233 rows × 3 columns) in around 20 s, and the data was then available for use.
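As a script-based alternative (the route the question already mentions), here is a minimal sketch that loads the three-column main.csv from above with numpy and hands it to mlab; the points3d call is just one way to visualize it:

import numpy as np
from mayavi import mlab

# load the three comma-separated columns produced by the shell loop above
data = np.loadtxt('main.csv', delimiter=',')
x, y, z = data[:, 0], data[:, 1], data[:, 2]

# plot the rows as 3D points and show the interactive window
mlab.points3d(x, y, z)
mlab.show()

(For a file as large as the 1.25 GB one above, pandas.read_csv would likely load much faster than np.loadtxt.)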

Automating a command line with increasing file number

I am very new to creating batch files.
I have to run a command with an increasing file number, e.g.:
C:\>program.bat -propertyfile "1.property"
Right now, I have to type the command manually, wait one minute, then type it again with the next property file number, i.e. "2.property", "3.property", "4.property", etc.
I want to automate this, and I would still like to see the results in the command prompt as it runs.
How can this be accomplished?
See https://ss64.com/nt/for.html and specifically https://ss64.com/nt/for_l.html
FOR /L %%G IN (1,1,4) DO CALL program.bat -propertyfile "%%G.property"
should run your command for files 1.property through 4.property (CALL is needed inside a batch file so control returns to the loop after each run; at the interactive prompt, use single % signs). If you're actually iterating over files in a directory rather than a list of integers, one of the other FOR constructs might be more appropriate, perhaps https://ss64.com/nt/for_r.html. See the sketch below for adding the one-minute wait.
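Since the question mentions waiting a minute between runs, a hedged sketch of a small wrapper batch file (program.bat and the count of 4 are taken from the question; adjust to taste):

@echo off
rem run program.bat for 1.property .. 4.property, pausing 60 seconds between runs
FOR /L %%G IN (1,1,4) DO (
    CALL program.bat -propertyfile "%%G.property"
    rem /nobreak ignores keypresses; output from program.bat still scrolls in this window
    timeout /t 60 /nobreak
)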

Patching AIX binary

I am attached to a running process using dbx on AIX. There is a bug in the program: the offset in the opcode below is 0x9b8 but should be 0xbe8:
(dbx) listi 0x100001b14
0x100001b14 (..........+0x34) e88109b8 ld r4,0x9b8(r1)
I am able to fix that using the command below:
(dbx) assign 0x100001b14 = 0xe8810be8
but that affects only the running process and its memory. How can I change the on-disk binary? I am not able to locate the pattern e88109b8 in the binary file;
otherwise I would use e.g. the dd utility to patch it.
Best regards,
Pavel Filipensky
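
For what it's worth, a hedged sketch of the dd-based approach the question alludes to, assuming GNU grep is available and the shell's printf understands \x escapes (the file name ./myprog is hypothetical; verify the offset before writing):

# print the byte offset(s) of the big-endian instruction word e88109b8
grep -obUaP '\xe8\x81\x09\xb8' ./myprog
# overwrite those four bytes in place with e8810be8 (substitute the reported offset)
printf '\xe8\x81\x0b\xe8' | dd of=./myprog bs=1 seek=OFFSET conv=notrunc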

How to use csvSeparator with jpexport (JProfiler)?

I am using JProfiler to run some tests on the memory usage of my application. I would like to include them in my build process. All the steps should work from the command line.
One step exports a CSV file from a .jps file with a command like:
~/jprofiler7/bin/jpexport q1.jps "TelemetryHeap" -format=csv q1_telemetry_heap.csv
On my local machine (Windows) it works. On my server (Linux) the CSV file is not well formatted:
"Time [s]","Committed size","Free size","Used size"
0.0,30,784,000,19,558,000,11,226,000
1.0,30,976,000,18,376,000,12,600,000
2.0,30,976,000,16,186,000,14,790,000
3.0,30,976,000,16,018,000,14,958,000
4.01,30,976,000,14,576,000,16,400,000
There is no way to distinguish the commas of the CSV format from the commas in the number formatting.
According to the documentation, I need to change the value of -Djprofiler.csvSeparator in the file bin/export.vmoptions.
But I failed. I also tried changing this value in jpexport.vmoptions and in jprofiler.vmoptions.
What should I do?
Thanks for your help
This bug was fixed in JProfiler 8.0.2.
Adding
-Djprofiler.csvSeparator=;
on a new line in bin/jpexport.vmoptions should work in JProfiler 7, though.