Write KML Extended Data in a different way - gps

I have some GPS raw data that I want to put on a KML file.
Currently I can generate the KML file with Extended Data in the format described at https://developers.google.com/kml/documentation/kmlreference#trackexample, and that works, but it takes too much time.
I am collecting six different types of extended data with an Arduino and writing them to an SD card, but the writing process for each sample is too slow (I write the data to six different files and then append each file to the final KML, using the gx:Track element).
Is there any other way to write all six parameters at the same time, in KML format using Extended Data? Maybe using different tags, or the same tags in a different order?
I don't have enough CPU power to rework the file after collecting the GPS raw data, so I need to write it right the first time.

Write the KML entirely yourself; do not use a library. Then it is as fast as simply writing text to a file. If the bottleneck is the file system, then KML is not the right format: use a custom binary file, and transform it to KML later on the server side.
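A sketch of that server-side conversion step, assuming a hypothetical fixed-size record layout (timestamp, latitude, longitude, altitude, and six sensor floats packed with `struct`; the field names and layout are placeholders, not from the question):

```python
import struct
from datetime import datetime, timezone

# Hypothetical record: epoch seconds, lat, lon, alt (doubles), six sensor floats.
RECORD = struct.Struct("<d3d6f")

# Placeholder names for the six extended-data channels.
NAMES = ["param%d" % i for i in range(1, 7)]

def binary_to_kml(blob):
    """Convert packed binary samples to a KML <gx:Track> fragment.
    The enclosing <kml>/<Document> wrapper and the <Schema> definition
    (referenced via schemaUrl) are omitted for brevity."""
    samples = [RECORD.unpack_from(blob, i * RECORD.size)
               for i in range(len(blob) // RECORD.size)]
    when = "".join(
        "<when>%s</when>" % datetime.fromtimestamp(s[0], timezone.utc)
        .strftime("%Y-%m-%dT%H:%M:%SZ") for s in samples)
    # gx:coord order is lon lat alt.
    coords = "".join("<gx:coord>%f %f %f</gx:coord>" % (s[2], s[1], s[3])
                     for s in samples)
    arrays = "".join(
        '<gx:SimpleArrayData name="%s">%s</gx:SimpleArrayData>' % (
            name, "".join("<gx:value>%g</gx:value>" % s[4 + i] for s in samples))
        for i, name in enumerate(NAMES))
    return ("<gx:Track>%s%s<ExtendedData><SchemaData>%s"
            "</SchemaData></ExtendedData></gx:Track>" % (when, coords, arrays))
```

On the Arduino side this leaves only one fixed-size binary write per sample, which is about as cheap as SD logging gets.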

Related

"Live" data capable alternative for Google Earth KML

I'm currently using Google Earth + KML files to visualize aircraft flight paths in 3D. It works perfectly and also looks fine, but the big disadvantage is that there seems to be no way to feed "live" data to Google Earth and draw the flight paths in real time.
Is there an alternative that is capable of displaying live data without manually reloading a file or anything like that? A satellite-imagery surface is an absolute must.
Maybe someone out there knows a proper solution for my project.
Thanks
The KML NetworkLink tag provides several ways to automatically update/reload a KML file, which lets you provide "live" data. You can make the NetworkLink refresh the KML either every time the user stops moving the map (with a settable delay) or on a timer (e.g. every 10 seconds). Look at the KML Reference and developer tutorials for more info.
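A minimal self-refreshing NetworkLink looks like this (the href is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <NetworkLink>
    <name>Live flight paths</name>
    <Link>
      <!-- placeholder URL: serve your regenerated KML from here -->
      <href>http://example.com/live-flightpaths.kml</href>
      <refreshMode>onInterval</refreshMode>
      <refreshInterval>10</refreshInterval> <!-- seconds -->
    </Link>
  </NetworkLink>
</kml>
```

Google Earth fetches the target file every interval, so the server only has to keep that file current.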

Get GPS data from MOV (quicktime) video file

Please help me extract a GPS track with timestamps from a .mov file.
The file is from a car camera and contains GPS data, because the camera's viewer application shows the car's position.
What is the right way to do that?
You don't say if you're looking for a programming solution to parse the file and read the GPS metadata yourself, or whether you're looking for a tool that will display the data.
It also depends very much on the specific camera that recorded the file, as different cameras embed data in different formats. If you have an iPhone, for example, it records GPS data in an mdta metadata atom with the key "com.apple.quicktime.location.ISO6709", but other formats exist too, especially if you mean real-time varying GPS data embedded in each frame, rather than in the header for the movie as a whole.
Tools that will read such data from the movie header include ExifTool and CatDV (though the latter is a commercial product).
I found that ffprobe from the ffmpeg project was able to extract the com.apple.quicktime.location.ISO6709 tag.
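For reference, a typical invocation looks along these lines (the tag name shown above is the iPhone case; other cameras use different keys):

```shell
# Dump the container-level metadata (including any location tag) as JSON
ffprobe -v quiet -print_format json -show_format input.mov
```

Per-frame GPS data, if the camera writes any, will not show up here and needs camera-specific parsing.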

How to merge many VRT file into one

I have many VRT files, originally generated with gdal_translate for adjacent images.
Is there a way to merge all those VRT files into one VRT file, so that when I run gdal2tiles.py I only need to give it this one composite VRT file?
I first thought gdalwarp would do the trick, but it turns out that gdalwarp merges the images into one single image. However, I don't want to merge the images; I want to merge the VRT files.
GDAL has included the gdalbuildvrt utility since version 1.6.1; it merges multiple input files into one VRT mosaic file. See the official documentation for usage details:
http://www.gdal.org/gdalbuildvrt.html
Most likely you just need to give it the output filename followed by the list of individual input files.
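Under that assumption, the invocation is roughly (filenames are placeholders):

```shell
# Build one mosaic VRT from all the per-image VRTs, then tile it
gdalbuildvrt mosaic.vrt tiles/*.vrt
gdal2tiles.py mosaic.vrt output_tiles/
```

gdalbuildvrt also accepts an `-input_file_list list.txt` option when there are too many files for the shell command line.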
You have tagged your question with the "maptiler" label, which refers to the http://www.maptiler.com/ product. MapTiler is able to render multiple files out of the box and does not use VRT at all internally. It is more efficient to supply the individual input files to MapTiler directly than to create a VRT and pass it to the software: VRT introduces an artificial internal block size for reading the data, which slows down the tile-rendering process, in some cases significantly.
Feel free to request a demo of MapTiler Pro and compare the speed, size and quality of the map tiles you receive - and post the results here.

Is it worthwhile to use BigQuery for real-time XML data?

I have an XML file of around 2 MB (yes, a small 2 MB file). I want to sort the file into some predetermined format and show the formatted result. As of now the whole process takes 2-3 seconds, and we want to cut down on that time.
My questions are:
(a) Is there any way to push XML directly into BigQuery instead of CSV?
(b) I want to do this in real time, so how do I push data from my website and get the results back on my website? (Do you think the command-line tools would do the trick?)
(c) I am working in .NET.
I don't think you can push XML directly into BigQuery. The documentation doesn't say, "You cannot import XML." But the fact that it only explains how to use CSV makes it pretty clear.
It doesn't sound like a perfect use case for BigQuery. BigQuery is great for huge data volumes, but you have small data (as you noted). Would it not be quicker to just sort your XML in memory without pushing it somewhere else?
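A sketch of the in-memory approach (the `<items>`/`<item>` schema and the `name` sort key are made-up placeholders; the asker's .NET code would do the same thing with XDocument):

```python
import xml.etree.ElementTree as ET

def sort_items(xml_text, key_attr="name"):
    """Parse the document, sort the root's children in place by an
    attribute, and return the re-serialized XML. For a 2 MB file this
    takes milliseconds, with no round trip to an external service."""
    root = ET.fromstring(xml_text)
    root[:] = sorted(root, key=lambda el: el.get(key_attr, ""))
    return ET.tostring(root, encoding="unicode")
```

The round-trip latency of any remote query service would likely exceed the 2-3 seconds being optimized away here.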

Is it possible to extract tiff files from PDFs without external libraries?

I was able to use Ned Batchelder's python code, which I converted to C++, to extract jpgs from pdf files. I'm wondering if the same technique can be used to extract tiff files and if so, does anyone know the appropriate offsets and markers to find them?
Thanks,
David
PDF files may contain different image data (not surprisingly).
Most common cases are:
Fax data (CCITT Group 3 and 4)
raw raster data with decoding parameters and optional palette all compressed with Deflate or LZW compression
JPEG data
Recently, I (as the developer of a PDF library) have started noticing more and more PDFs with JBIG2 image data. JPEG2000 data can also sometimes be put into a PDF.
I should say that you probably can extract JPEG/JBIG2/JPEG2000 data into corresponding *.jpeg / *.jp2 / *.jpx files without external libraries, but be prepared for all kinds of weird PDFs emitted by broken generators. Also, PDFs quite often use object streams, so you'll need to implement a sophisticated parser for the PDF itself.
Fax data (i.e. what you probably call TIFF) needs at least to be packed into a valid TIFF container. You can borrow some code for that from the open-source libtiff, for example.
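A sketch of that wrapping step, assuming the stream really is single-strip Group 4 data; the CCITTFaxDecode parameters from the PDF (Columns, Rows, BlackIs1, and a K value implying Group 4) must match what is written into the TIFF tags:

```python
import struct

def wrap_ccitt_g4(data, width, height):
    """Wrap raw CCITT Group 4 fax data (e.g. a PDF CCITTFaxDecode stream)
    in a minimal little-endian, single-strip TIFF container."""
    entries = [
        (256, 3, 1, width),      # ImageWidth
        (257, 3, 1, height),     # ImageLength
        (258, 3, 1, 1),          # BitsPerSample: 1 (bilevel)
        (259, 3, 1, 4),          # Compression: 4 = CCITT Group 4
        (262, 3, 1, 0),          # PhotometricInterpretation: WhiteIsZero
        (273, 4, 1, 0),          # StripOffsets (patched below)
        (278, 3, 1, height),     # RowsPerStrip: whole image in one strip
        (279, 4, 1, len(data)),  # StripByteCounts
    ]
    ifd_offset = 8                                   # IFD follows the header
    data_offset = ifd_offset + 2 + 12 * len(entries) + 4
    entries[5] = (273, 4, 1, data_offset)
    out = struct.pack("<2sHI", b"II", 42, ifd_offset)  # "II", magic 42, IFD ptr
    out += struct.pack("<H", len(entries))
    for tag, typ, count, value in entries:
        out += struct.pack("<HHII", tag, typ, count, value)
    out += struct.pack("<I", 0)                      # no next IFD
    return out + data
```

Real PDFs also flip these assumptions often enough (BlackIs1, byte-aligned rows, Group 3 instead of 4) that libtiff's reference code is worth borrowing, as noted above.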
And then comes raw raster data. I don't think that it makes sense to try to extract such data without help of a library. You could do that, of course, but it will take months of work.
So, if you are trying to extract only a specific kind of image data from a set of PDFs all created with the same generator, then your task is probably feasible. In all other cases I would recommend saving time, money, and hair, and using a library for the task.
PDF files store JPEGs as actual JPEG data (DCTDecode and JPXDecode encodings), so in most cases you can rip the data out directly. With TIFFs, you are looking for CCITT data (but you will need to add a header to the data to make it a valid TIFF). I wrote two blog articles on images in PDF files at http://www.jpedal.org/PDFblog/2010/09/understanding-the-pdf-file-format-images/ and http://www.jpedal.org/PDFblog/2011/07/extract-raw-jpeg-images-from-a-pdf-file/ which might help.
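The marker-scanning technique that Ned Batchelder's code relies on can be sketched as follows (a naive byte scan; object streams, embedded thumbnails, and marker bytes occurring inside other data will defeat it, which is the caveat raised above):

```python
def extract_jpegs(pdf_bytes):
    """Naively extract JPEG streams by scanning for the SOI (FFD8) and
    EOI (FFD9) markers. Works for many simple PDFs whose images are
    stored with DCTDecode; it is not a substitute for a real PDF parser."""
    jpegs = []
    pos = 0
    while True:
        # SOI must be followed by another marker byte (0xFF).
        start = pdf_bytes.find(b"\xff\xd8\xff", pos)
        if start < 0:
            break
        end = pdf_bytes.find(b"\xff\xd9", start)  # EOI
        if end < 0:
            break
        jpegs.append(pdf_bytes[start:end + 2])
        pos = end + 2
    return jpegs
```

Each returned chunk can be written out as a standalone .jpg file, since the JPEG stream in the PDF is already a complete image.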