Most efficient vector source format to load and render in OpenLayers?

I am in the process of translating netCDF and GeoTIFF raster data into vector formats to visualize in OpenLayers. What is the most efficient OpenLayers vector source format for loading and rendering features (for example, CSV or GeoJSON)? Would the answer be the same for WebGLPoints layers? I am loading very large sets of point data using version 6.3.1.

Related

Is it possible to print Geospatial PDFs from Geoserver?

Is it possible to print vector layers created from shapefiles to a geospatial PDF from GeoServer? I am using GeoServer to serve my map layers and OpenLayers as my mapping API. I currently retrieve the vector layers via WMS and print them using the standalone MapFish print module. I can print to PDFs, but they are not georeferenced. It appears that by installing GDAL and using the community module for GDAL-based WCS output, a geospatial PDF can be created, but I am not sure what to do once those are installed to get the output as a geospatial PDF. Any suggestions are greatly appreciated.

How to label my own point cloud data to have the 3D training labels (.txt) files like KITTI 3D object detection dataset?

I am new to this field. I have collected some point cloud data using a lidar sensor and a camera, so I now have .pcd files for the point clouds and .png files for the images. I want to structure this data like the KITTI dataset so I can use it in a model that takes KITTI as training data for 3D object detection. Therefore I want to convert my .pcd files to .bin files as in KITTI, and I also need .txt label files, so I need to annotate my data in a way that produces the same label files as the KITTI dataset. Can somebody help me? I have searched a lot, and none of the labelling tools output the same attributes as the KITTI .txt label files.
This is the link for the KITTI 3D dataset.
http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d
There are several different questions in your post, so I'm going to answer the ones I can. Here is a snippet showing how you can read a .pcd file:
import open3d as o3d
# Read the point cloud; printing it shows a summary (e.g. the point count).
pcd = o3d.io.read_point_cloud("../../path_to_your_file.pcd")
print(pcd)
and then you can format it as you want, including writing it to binary.
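Since the goal is KITTI-style .bin files, here is a minimal, library-free sketch of that conversion step. It assumes the usual KITTI velodyne layout of flat little-endian float32 (x, y, z, intensity) records; the function names are illustrative, not from any particular library:

```python
import struct

def write_kitti_bin(points, path):
    """Write (x, y, z, intensity) tuples as a flat little-endian float32 file."""
    with open(path, "wb") as f:
        for p in points:
            f.write(struct.pack("<4f", *p))

def read_kitti_bin(path):
    """Read the file back as a list of (x, y, z, intensity) tuples."""
    with open(path, "rb") as f:
        data = f.read()
    n = len(data) // 16  # 4 float32 values = 16 bytes per point
    return [struct.unpack_from("<4f", data, i * 16) for i in range(n)]
```

With Open3D you would pull the coordinates out of `pcd.points` first and attach your intensity values before writing.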
Open3D could be a helpful library; check these out:
link to open3D docs
link to open3D github
You can find more references below:
https://paperswithcode.com/task/3d-object-detection

Tensorflow: partially decode binary data

I am wondering if there is a native TensorFlow function that allows decoding a binary file (for example a TFRecord) starting from a given byte offset and reading the following N bytes, without decoding the entire file.
This has been implemented for jpeg images: tf.image.decode_and_crop_jpeg
but I cannot find a way to do the same thing with any binary file.
This would be very useful when the cropping window is much smaller than the whole data.
Currently, I am using a custom tf.py_func as the mapping function of a Dataset object. It works, but with all the limitations of a custom py_func.
Is there a native TensorFlow way to do the same thing?
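For reference, the core of the py_func workaround described above is just an offset-based read. A minimal sketch of what the mapping function wraps (plain Python, outside of TensorFlow; names are illustrative):

```python
def read_bytes(path, offset, n_bytes):
    """Seek to `offset` in a binary file and return the next `n_bytes`,
    without reading the rest of the file."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(n_bytes)
```

Wrapping this in tf.py_function keeps it out of the graph, which is exactly the limitation the question is trying to avoid.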

Tensorflow slim load data (images, signal, "kaggle format")

I am trying to use TensorFlow-Slim, but I am stuck at the very beginning: how to load the data. In fact, I have found only one way on the internet: TFRecord. First, I am wondering if this format can be used for all types of data (images, signals, etc.), and second, whether there is another way to load data (for example, in Keras we can load it with NumPy and create mini-batches)?
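For comparison, the NumPy-style mini-batching the question mentions is straightforward to sketch (plain Python here for brevity; slicing a NumPy array works the same way):

```python
def minibatches(data, batch_size):
    """Yield successive fixed-size batches from an in-memory dataset;
    the last batch may be smaller if len(data) is not a multiple of batch_size."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]
```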

Excluding areas by raster layer in raster calculator

I am conducting a suitability analysis using a road layer that I buffered around. After creating the vector buffer layer, I converted it to a raster. I now want to use the raster calculator in combination with additional raster layers to produce an output raster that excludes the areas within the buffer (the entire 'buffer raster layer'). My issue is that the 'buffer raster layer' only covers the areas that were buffered, so its extent doesn't match my other rasters... Any thoughts/suggestions would be appreciated.
Best,
Eric
One solution is to make a new copy of the raster with a larger extent. In your geoprocessing environment settings, set the Extent equal to the extent of your other raster layers, then use the Copy Raster tool. The new raster will be the same size as your other data, and you can proceed with the raster calculator.
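The logic of this answer can be illustrated with a small, library-free sketch (the grids and offsets are made up): first pad the buffer raster out to the full analysis extent, filling with 0 (not buffered), then do the raster-calculator step of excluding buffered cells:

```python
def expand_grid(grid, rows, cols, row_off, col_off, fill=0):
    """Pad a smaller grid into a rows x cols grid at the given offset,
    mimicking Copy Raster with a larger environment extent."""
    out = [[fill] * cols for _ in range(rows)]
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            out[row_off + r][col_off + c] = v
    return out

def exclude(analysis, buffer_mask):
    """Keep analysis values only where the buffer mask is 0 (not buffered);
    buffered cells become None (nodata)."""
    return [[None if m else a for a, m in zip(arow, mrow)]
            for arow, mrow in zip(analysis, buffer_mask)]
```

In ArcGIS terms, the second function corresponds to a raster calculator expression that sets cells to NoData wherever the expanded buffer raster has a value.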