Is it possible to print Geospatial PDFs from Geoserver?

Is it possible to print vector layers created from shapefiles to a geospatial PDF from GeoServer? I am using GeoServer to serve my map layers and OpenLayers as my mapping API. I currently retrieve the vector layers over WMS and print them using the standalone MapFish print module. The PDFs print fine, but they are not georeferenced. It appears that by installing GDAL and using the community module for GDAL-based WCS output, a geospatial PDF can be created, but I am not sure what to do once those are installed to actually get the output as a geospatial PDF. Any suggestions are greatly appreciated.
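For reference, the conversion step that setup enables would look roughly like the following minimal Python sketch, assuming the WCS response has already been saved as a GeoTIFF (the file names here are hypothetical). GDAL's PDF driver carries the georeferencing through:
from osgeo import gdal

src = gdal.Open("wcs_output.tif")             # georeferenced raster fetched from GeoServer via WCS
gdal.Translate("map.pdf", src, format="PDF")  # GDAL's PDF driver writes a geospatial PDF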

Related

How to visualize data flow in a tensorflow project

I am trying to debug this project: https://github.com/VisualComputingInstitute/TrackR-CNN
This is a Mask R-CNN based project, and I want to visualize the data flow among various functions in network/FasterRCNN.py (https://github.com/VisualComputingInstitute/TrackR-CNN/blob/master/network/FasterRCNN.py), mainly rpn_head() and fastrcnn_head(). I tried it with py_func and pdb but was not successful. Session.run() is invoked inside core/Engine.py (https://github.com/VisualComputingInstitute/TrackR-CNN/blob/master/core/Engine.py).
Is there any way to see the image manipulation during training (i.e. rpn values, reid_dim, etc.)?
Thanks.
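For anyone attempting this, one common pattern in Session-based TF1 code like this project is to wrap an intermediate tensor with tf.py_func so its value is printed on every training step. This is only a rough sketch; the inspect() helper below is a hypothetical addition, not part of TrackR-CNN:
import tensorflow as tf

def inspect(tensor, tag):
    # _dump runs as ordinary Python on the tensor's numpy value at each Session.run()
    def _dump(value):
        print(tag, value.shape, value.dtype, value.mean())
        return value
    return tf.py_func(_dump, [tensor], tensor.dtype)

The wrapped tensor has to be used downstream, e.g. replace the rpn_head() output with inspect(output, "rpn"); otherwise the node is never executed.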

How to label my own point cloud data to have the 3D training labels (.txt) files like KITTI 3D object detection dataset?

I am new to this field. I have collected some point cloud data using a lidar sensor and a camera, and now I have .pcd files for the point clouds and .png files for the images. I want to arrange this data like the KITTI dataset structure for 3D object detection, so that I can use it in a model that takes KITTI as training data. Therefore I want to convert my .pcd files to .bin files like in KITTI, and I also need .txt files for labels, so I need to annotate my data in a way that produces the same label files as the KITTI dataset. Can somebody help me? I searched a lot, and all the labelling tools I found don't output the same attributes that are in KITTI's .txt files.
This is the link for the KITTI 3D dataset.
http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d
There are a lot of different questions in your post, so I'm going to answer those I can. Here is a snippet of code showing how you can read a .pcd file:
import open3d as o3d

# read the .pcd file into an open3d point cloud object
pcd = o3d.io.read_point_cloud("../../path_to_your_file.pcd")
# print(pcd)  # uncomment to print a short summary of the cloud
and then you can format it as you want, including writing to binary.
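As a concrete sketch of that last step: KITTI's velodyne .bin files store each point as four float32 values (x, y, z, reflectance), so assuming your .pcd has no intensity channel, you could write something like:
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("../../path_to_your_file.pcd")
points = np.asarray(pcd.points, dtype=np.float32)             # (N, 3) xyz coordinates
intensity = np.zeros((points.shape[0], 1), dtype=np.float32)  # placeholder reflectance column
np.hstack([points, intensity]).tofile("000000.bin")           # KITTI-style velodyne file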
Open3D could be a helpful library; check out the Open3D docs and the Open3D GitHub repository.
You can find more references at https://paperswithcode.com/task/3d-object-detection

java api for feeding tfrecords bytes to tensorflow model

Does anyone know of a high-level API, if one exists in Java, for directly reading tfrecords and feeding them to a TensorFlow SavedModel? The Python API allows both example.proto (tfrecords) and tensors to be fed to a TF model for inference. The only API I have seen in Java is for creating raw tensors; is there a way, similar to the Python SDK, to directly feed tfrecords (example.proto) to a saved model bundle in Java as well?
I just came across the same scenario, and I used TFRecordIO from Java Apache Beam to read records. For example:
pipeline
    // read raw TFRecord byte arrays from the input path
    .apply(TFRecordIO.read().from(dataPath))
    // score each record against the SavedModel
    .apply(ParDo.of(new ModelEvaluationFn()));
Inside ModelEvaluationFn I do the scoring using the SavedModel. With Java Apache Beam, you can run locally, on GCP Dataflow, Spark, Flink, etc. But if you are using Spark directly, there's spark-tensorflow-connector.
Another thing I came across is how to parse the tfrecords in Java, because I needed to get the label value and group by some column values to get breakdown scores. The org.tensorflow/proto package can help you do that. Here are examples: example1, example2. Essentially, it's Example.parseFrom(byte[]).

How to use tensorflow-wavenet

I am trying to use the tensorflow-wavenet program for text to speech.
These are the steps:
1. Download TensorFlow
2. Download librosa
3. Install requirements: pip install -r requirements.txt
4. Download the corpus and put it into a directory named "corpus"
5. Train the machine: python train.py --data_dir=corpus
6. Generate audio: python generate.py --wav_out_path=generated.wav --samples 16000 model.ckpt-1000
After doing this, how can I generate a voice read-out of a text file?
According to the tensorflow-wavenet page:
Currently there is no local conditioning on extra information which would allow context stacks or controlling what speech is generated.
You can find more information about current development of the project by reading the issues on the repository (local conditioning is a desired feature!)
The WaveNet paper compares WaveNet to two TTS baselines, one of which appears to have training code available online: http://hts.sp.nitech.ac.jp
A recent paper by DeepMind describes one approach to going from text to speech using WaveNet. I have not tried to implement it, but the paper at least states the method they use: first train one network to predict a spectrogram from text, then train WaveNet to take that same sort of spectrogram as an additional conditioning input and produce speech. It's a neat idea, especially since you can train the WaveNet part on a huge database of voice-only data (from which you can extract the spectrogram) and then train the text-to-spectrogram part on a different dataset where you have text.
https://google.github.io/tacotron/publications/tacotron2/index.html has the paper and some example outputs.
There seems to be a bunch of unintuitive engineering around the spectrogram-prediction part (no doubt because of the nature of text-to-time learning), but there's some detail in the paper at least. The dataset is proprietary, so I have no idea how hard it would be to get any results using other datasets.
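In code, the two-stage pipeline the paper describes amounts to something like the following Python sketch; both model classes are hypothetical stand-ins to show the structure, not a real API:
# hypothetical placeholder models illustrating the two-stage Tacotron 2 structure
class TextToSpectrogram:
    def predict(self, text):
        return [[0.0] * 80]  # stand-in mel spectrogram frame

class WaveNetVocoder:
    def generate(self, mel):
        return [0.0] * 16000  # stand-in waveform samples

mel = TextToSpectrogram().predict("Hello world")  # stage 1: text -> mel spectrogram
audio = WaveNetVocoder().generate(mel)            # stage 2: spectrogram -> waveform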
For those who may come across this question, there is a newer Python implementation, ForwardTacotron, that enables text-to-speech readily.

Tensorflow Serving with image input

I'm trying to send image input over HTTP for classification using TensorFlow. I have looked in detail at the C++ code for https://www.tensorflow.org/versions/r0.9/tutorials/image_recognition/index.html
I have implemented the Inception-v3 example model using the C++ API. It takes image input in the following form:
bazel-bin/tensorflow/examples/label_image/label_image --image=my_image.png
However, I want to add the case of:
bazel-bin/tensorflow/examples/label_image/label_image --image=http://www.somewebsite.com/my_image.png
This fails because the example only accepts local image files. I want to add the functionality to fetch an online image and classify it in memory. I'm currently working on this, but so far no luck. Can anyone offer some insight into how I would go about implementing this?
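For what it's worth, the in-memory idea can be sketched in Python; the analogous approach in the C++ API is to fetch the bytes yourself and feed them to the graph's decode op instead of a file read. The URL below is the hypothetical one from the question:
import urllib.request
import tensorflow as tf

# download the image into memory instead of reading it from disk
image_bytes = urllib.request.urlopen("http://www.somewebsite.com/my_image.png").read()
# decode directly from the in-memory bytes
image = tf.image.decode_png(image_bytes, channels=3)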