MATLAB: increase precision of command "importdata"

I am having a problem with MATLAB. I am trying to import some data in a MATLAB script, and I use the command importdata, but I can't get the desired precision.
This is a small part of the .txt file, whose name is "NACA 0008.txt":
NACA 0008 Airfoil M=0.0% P=0.0% T=8.0%
1.000000 0.000840
0.996057 0.001208
0.984292 0.002295
0.964888 0.004055
This is the relevant part of my MATLAB code:
coord = importdata('NACA 0008.txt'); % read coordinates for airfoil from "NACA 0008.txt"
disp(coord.data);
but when I display coord.data, the values shown in the array are these:
1.0000 0.0008
0.9961 0.0012
0.9843 0.0023
0.9649 0.0041
How can I increase the precision to which my data is loaded?

The data is already loaded at full double precision; the default format short display just rounds what is shown to about five significant digits. Add format longG; before displaying, like this:
format longG;
coord = importdata('NACA 0008.txt');
disp(coord.data);
or run format longG in the Command Window before the import command.
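The distinction between stored precision and display precision can be illustrated with an analogous sketch in Python/NumPy (an illustration of the same idea, not MATLAB code): the print options, like MATLAB's format, change only how values are shown, never what is stored.

```python
import numpy as np

# Values are stored at full double precision...
data = np.array([[1.000000, 0.000840],
                 [0.996057, 0.001208]])

# ...but a 4-digit display (like MATLAB's default format short)
# rounds what you SEE, not what is stored.
np.set_printoptions(precision=4, suppress=True)
print(data)

# The stored value is still exact:
assert data[0, 1] == 0.000840

# Widening the display (like format longG) shows more digits:
np.set_printoptions(precision=6, suppress=True)
print(data)
```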


How to correctly use ogr2ogr and gdal_rasterize to rasterize a smaller area of a large vector of GeoPackage?

I am using gdal_rasterize and ogr2ogr with the goal of getting a partial raster from a .gpkg file.
With the first command I want to clip a smaller area of a large map:
ogr2ogr -spat xmin ymin xmax ymax out.gpkg in.gpkg
This results in a file for which the command ogrinfo out.gpkg gives the expected output, listing the layer numbers and names.
Then I try to rasterize this new file with:
gdal_rasterize out.gpkg -burn 255 -ot Byte -ts 250 250 -l anylayer out.tif
but this results in ERROR 1: Cannot get layer extent, no matter which of the layer names given by ogrinfo I try.
Running the same command on the original in.gpkg doesn't give errors and produces the expected .tif raster.
ogr2ogr --version GDAL 2.4.2, released 2019/06/28
gdal_rasterize --version GDAL 2.4.2, released 2019/06/28
This process should at the end be implemented with the gdal C++ API.
Are the commands somehow invalid as given, and if so, how?
Should the whole process be done differently, and if so, how?
What does ERROR 1: Cannot get layer extent mean?

Use pandas style for table outputs with custom min/max highlighting

I am using Pandas for basic evaluations and use it to output Latex tables.
I output various error metrics, and for most of them the results are fine (the smallest error is highlighted in green).
style = df.style.highlight_min(color='darkgreen', axis=0).highlight_max(color='darkred', axis=0)
latex_table = style.to_latex(multicol_align="c", siunitx=True, hrules=True, [..])
Now I also output the so-called Q-Error (basically max(prediction/actual, actual/prediction)). This error is ideally 1.0 when the prediction is completely accurate. With the standard Pandas styling, I cannot mark the best error of 1.05 as the winner over smaller numbers like 0.6, which are actually larger error values.
Is there a way to customize highlights?

threshold doesn't work when trying to convert PDF to .png

I have a PDF form that has a box like this:
I'm trying to run it through AWS Textract, but it also picks up the pipes between the numbers. The pipes are actually dark gray. So I hoped that if I used ImageMagick with a threshold I could get the numbers without the pipes, but it's not working.
I tried this, but no threshold amount helps:
magick input.pdf -threshold 95% output.png
I'm trying to get something like this (which I produced manually by taking a screenshot and applying a threshold).
How can I achieve the above from the command line (or in Python)?
Just adjust your threshold lower in ImageMagick:
convert img.png -threshold 60% x.png
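Since the question also asks about Python, here is a sketch of the same thresholding with Pillow, run on a tiny synthetic stand-in image (the pixel values and the 40% cutoff are assumptions to tune against a page actually rendered from the PDF):

```python
from PIL import Image

# Stand-in for a rendered page: a dark digit pixel (20), a lighter
# gray pipe pixel (120), and white background (255).
img = Image.new('L', (3, 1))
img.putdata([20, 120, 255])

# Threshold: pixels darker than the cutoff go black, the rest white,
# so the gray pipes vanish while the darker digits survive.
cutoff = int(255 * 0.40)          # tune this per document
bw = img.point(lambda p: 0 if p < cutoff else 255, mode='1')

print(list(bw.getdata()))         # -> [0, 255, 255]: pipe and background are white
```

The same logic applies on the real file: render the PDF page to a grayscale PNG first, then pick a cutoff between the digit and pipe intensities.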

What is the model file in svm-train command-line syntax?

I have used grid.py in LIBSVM and found the best parameter for my dataset
C 8.0, g 0.0625, CV rate 63.82
Then I tried svm-train, but I don't understand the syntax of the svm-train command:
svm-train [options] training_set_file [model_file]
A model_file is needed, but grid.py only gave me a .out file. When I used that, it showed an error.
My question is:
Could you explain what the model file is, preferably using an example?
I am using LIBSVM on Debian (using the command-line).
You want command-lines like:
svm-train -c 8.0 -g 0.0625 training.data svm.model
svm-predict testing.data svm.model predict.out
The model file (svm.model) is just a place to store the model parameters learned by svm-train so that they can later be used for prediction. The model is created by svm-train, not by grid.py, and it is the input to svm-predict. Therefore you can give svm-train any file name you like, as long as you give the same name to svm-predict. I often call the file something like model-C8.0-g0.0625 so I can later tell what it is.
A model file will look like this:
svm_type c_svc
kernel_type rbf
gamma 0.5
nr_class 2
total_sv 6164
rho -2.4768
label 1 -1
nr_sv 3098 3066
SV
2 1:-0.452773 2:-0.455573 3:-0.485312 4:-0.436805 ...
If you need to know more about the model file, see the LIBSVM FAQ
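To make the structure concrete, here is a small sketch (plain Python, not part of LIBSVM) that reads the key/value header of a model file like the one above; the key names follow the sample shown, and nothing beyond the SV line is parsed:

```python
def parse_libsvm_model_header(text):
    """Parse the key/value header of a LIBSVM model file.
    Stops at the 'SV' line; the support vectors themselves are skipped."""
    header = {}
    for line in text.splitlines():
        if line.strip() == 'SV':
            break
        key, _, value = line.partition(' ')
        header[key] = value
    return header

sample = """svm_type c_svc
kernel_type rbf
gamma 0.5
nr_class 2
total_sv 6164
rho -2.4768
label 1 -1
nr_sv 3098 3066
SV
2 1:-0.452773 2:-0.455573"""

header = parse_libsvm_model_header(sample)
print(header['kernel_type'])   # -> rbf
print(header['total_sv'])      # -> 6164
```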

Fastest file format to save and load matrices in Octave?

I have a matrix that is about 11,000 x 1,000, saved as a CSV file. It takes forever to load.
What is the fastest (or recommended) format to save matrices in?
Where does the data come from?
Way back when I was in graduate school, I generated simulation data and results in a C++ program. As I owned the data, I wrote a routine to write the matrix data in the binary format expected by Octave, at which point reading is pretty fast: it becomes a single fread call.
Don't forget the -binary option. For example,
save -binary myfile.mat X Y Z; % save X, Y, and Z matrices to myfile.mat
load myfile.mat; % load X, Y, and Z matrices from myfile.mat
When I forgot to use the -binary option, my 80,000 x 402 matrix of doubles took more than 22 minutes to load. With the -binary option, it took less than 2.5 seconds.
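The same text-versus-binary gap shows up outside Octave; as an analogous sketch in Python/NumPy (an illustration, not Octave code, using a smaller matrix so it runs quickly), a binary .npy file loads far faster than the equivalent CSV because the binary load reduces to one bulk read:

```python
import numpy as np
import os, tempfile, time

# A smaller stand-in for the 11,000 x 1,000 matrix in the question.
X = np.random.rand(2000, 200)

tmp = tempfile.mkdtemp()
txt_path = os.path.join(tmp, 'X.csv')
npy_path = os.path.join(tmp, 'X.npy')

np.savetxt(txt_path, X, delimiter=',')   # text, like the CSV in the question
np.save(npy_path, X)                     # binary, like Octave's save -binary

t0 = time.perf_counter()
A = np.loadtxt(txt_path, delimiter=',')  # parse every character back to a double
t_text = time.perf_counter() - t0

t0 = time.perf_counter()
B = np.load(npy_path)                    # essentially one bulk read
t_binary = time.perf_counter() - t0

assert np.allclose(A, B)                 # same matrix either way
print(f'text load: {t_text:.3f}s  binary load: {t_binary:.5f}s')
```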