I need to convert a vector file (for a cutting plotter) saved as PDF (A4 format) to an HPGL (.PLT) file and a GPGL (.PLT) file. Are there any ready-to-use Python libraries to do this? Or any ideas on how to convert it correctly? Thanks in advance.
I am new to geospatial analytics and to using NetCDF and GeoTIFF files. I am trying to convert a NetCDF file into a GeoTIFF file. I came across this reference: netcdf-to-geotiff-file-conversion. I have successfully installed gdal_translate and can run it in my macOS terminal. However, I get the message below and am trying to understand what it means and what I am missing. It appears to be a warning, but it didn't generate any output.
My code executed:
gdal_translate some_nc_file_name.nc output.tif
Error/Message:
Warning 1: No UNIDATA NC_GLOBAL:Conventions attribute
Input file contains subdatasets. Please, select one of them for reading.
Here is the data output I previewed in Python:
You appear to have multiple variables in the file, so you need to select one. Example (following https://nsidc.org/support/how/how-convert-golive-netcdf-variables-geotiff):
gdal_translate NETCDF:"Input_FileName.nc":variable_name Output_FileName.tif
Based on the warning message, the netCDF file lacks certain important attributes, so there may be issues with the coordinates, etc., in the resulting TIFF.
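If you prefer to do the same thing from Python, here is a rough sketch using the GDAL bindings. It assumes the osgeo package (GDAL 2.1 or newer) is installed and uses the file name from the question; adjust the names to your data.

from osgeo import gdal

# Open the NetCDF container and list its subdatasets (one per variable).
ds = gdal.Open("some_nc_file_name.nc")
for name, description in ds.GetSubDatasets():
    print(name, "->", description)

# Pick one subdataset (here simply the first) and translate it to GeoTIFF.
subdataset_name = ds.GetSubDatasets()[0][0]
gdal.Translate("output.tif", subdataset_name)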
I have lately been building a dataset gathered from the internet to use for training NN models. I now have a bunch of JPG images in one folder and their labels in a TXT file. The question is which file format I should convert this data to so that it is easy to load in (Python) frameworks. A second question is how to build a metadata file about this dataset and which format it should have.
In my opinion, the easiest way is to build a CSV file with two columns: directory and label. The directory value is the (relative) path to the image, and label is, of course, the label. This requires merging the TXT file and all the JPG files into one CSV file, but it is essentially easier to work with CSV in pandas.
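A minimal sketch of that merge, assuming the images live in an images/ folder and labels.txt has one label per line in the same order as the sorted file names (both names are placeholders, adjust to your actual layout):

import os
import pandas as pd

image_dir = "images"   # hypothetical folder of JPGs
image_files = sorted(f for f in os.listdir(image_dir) if f.endswith(".jpg"))

with open("labels.txt") as fh:   # hypothetical label file, one label per line
    labels = [line.strip() for line in fh]

df = pd.DataFrame({
    "directory": [os.path.join(image_dir, f) for f in image_files],
    "label": labels,
})
df.to_csv("dataset.csv", index=False)

Loading it back later is then a one-liner: df = pd.read_csv("dataset.csv").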
I have around 10k images and need to get the hex colour from each one. I can obviously do this manually with PS or other tools, but I'm looking for a solution that would ideally:
Run against a folder full of JPG images.
Extract the hex value from the dead center of the image.
Output the result to a text file, ideally a CSV, containing the file name and the resulting hex code on each row.
Can anyone suggest something that will save my sanity please? Cheers!
I would suggest ImageMagick, which is installed on most Linux distros and is available for macOS (via Homebrew) and Windows.
So, just at the command-line, in a directory full of JPG images, you could run this:
convert *.jpg -gravity center -crop 1x1+0+0 -format "%f,%[fx:int(mean.r*255)],%[fx:int(mean.g*255)],%[fx:int(mean.b*255)]\n" info:
Sample Output
a.png,127,0,128
b.jpg,127,0,129
b.png,255,0,0
Notes:
If you have more files in a directory than your shell can glob, you may be better off letting ImageMagick do the globbing internally, rather than using the shell, with:
convert '*.jpg' ...
If your files are large, you may be better off processing them one at a time in a loop rather than loading them all into memory:
for f in *.jpg; do convert "$f" ....... ; done
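If you would rather stay in Python, here is a rough equivalent sketch using Pillow (assumed installed; the output file name results.csv is just a placeholder). It reads the pixel at the center of each JPG in the current directory and writes file name plus hex code to a CSV:

import csv
import glob
import os
from PIL import Image

with open("results.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for path in glob.glob("*.jpg"):
        with Image.open(path) as img:
            rgb = img.convert("RGB")
            # Pixel at the dead center of the image.
            r, g, b = rgb.getpixel((rgb.width // 2, rgb.height // 2))
        writer.writerow([os.path.basename(path), "#{:02X}{:02X}{:02X}".format(r, g, b)])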
I got a TFRecord file from Magenta, but it's difficult (and not precisely described, for me at least) to get a MIDI file from it...
Whoever solves this issue, please share.
Known description from the Magenta discussion group (https://groups.google.com/a/tensorflow.org/forum/#!topic/magenta-discuss/):
The output format for the script is not a MIDI file. It is a TFRecord file containing NoteSequence protobufs with equivalent (but more readable and easily modifiable) representations of the input MIDIs.
You should be able to use sequence_proto_to_pretty_midi and then save the PrettyMIDI object as a MIDI file:
https://github.com/tensorflow/magenta/blob/master/magenta/lib/midi_io.py#L164
As an exercise, you might try to use the functions in note_sequence_io.py and midi_io.py to convert this file back to MIDIs.
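I haven't run this end to end, but a rough sketch of that conversion could look like the following. It assumes TensorFlow 1.x, a TFRecord of NoteSequence protos (the file name notesequences.tfrecord is just a placeholder), and the magenta/lib module layout from the link above; module paths vary between Magenta versions, so adjust the imports if needed.

import tensorflow as tf
from magenta.lib import midi_io            # path may differ in newer Magenta
from magenta.protobuf import music_pb2     # NoteSequence proto definition

record_path = "notesequences.tfrecord"     # hypothetical input TFRecord

for i, raw in enumerate(tf.python_io.tf_record_iterator(record_path)):
    sequence = music_pb2.NoteSequence.FromString(raw)
    pretty = midi_io.sequence_proto_to_pretty_midi(sequence)
    pretty.write("sequence_%d.mid" % i)     # PrettyMIDI objects can write MIDI files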
If I achieve it, I will share it with you as well!
thx
We've recently added a model that you can train to generate new sequences. Have a look at https://github.com/tensorflow/magenta/blob/master/magenta/models/basic_rnn/README.md.
Thanks!
I have a binary file (capture.bin) from the rtl_sdr tool. I converted it to a .cfile following this manual: http://sdr.osmocom.org/trac/wiki/rtl-sdr#Usingthedata
How can I get at the data in this file? The goal is to get numerical output from the source. Is this possible?
That actually is covered by a GNU Radio FAQ entry.
What is the file format of a file_sink? How can I read files produced by a file sink?
All files are in pure binary format. Just bits. That’s it. A floating point data stream is saved as 32 bits in the file, one after the other. A complex signal has 32 bits for the real part and 32 bits for the imaginary part. Reading back a complex number means reading in 32 bits, saving that to the real part of a complex data structure, and then reading in the next 32 bits as the imaginary part of the data structure. And just keep reading the data.
Take a look at the Octave and Python files in gr-utils for reading in data using Octave and Python’s Scipy module.
The exception to the format is when using the metadata file format. These files are produced by the File Meta Sink: http://gnuradio.org/doc/doxygen/classgr_1_1blocks_1_1file__meta__sink.html block and read by the File Meta Source block. See the manual page on the metadata file format for more information about how to deal with these files.
A one-line Python command to read the entire file into a numpy array is:
f = scipy.fromfile(open("filename"), dtype=scipy.uint8)
Replace the dtype with scipy.int16, scipy.int32, scipy.float32, scipy.complex64 or whatever type you were using.
Update
scipy.fromfile will be deprecated in SciPy v2.0, so use the numpy library instead:
f = numpy.fromfile(open("filename"), dtype=numpy.uint8)
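For the .cfile from the question, which contains complex float samples as described in the FAQ quote above (32-bit float real part followed by 32-bit float imaginary part per sample), a minimal sketch to read it as numbers would be:

import numpy as np

# Each complex sample is stored as two consecutive 32-bit floats,
# which is exactly numpy's complex64 layout.
samples = np.fromfile("capture.cfile", dtype=np.complex64)

print(samples[:10])           # first ten complex samples
print(np.abs(samples[:10]))   # their magnitudes as plain real numbers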