As a follow-up to this post:
https://groups.google.com/forum/?fromgroups#!topic/cytoscape-discuss/GV2vr4d4EQI
I was wondering if there is any documentation/example on how we can use cytoscape.js' export functionality?
Currently, we only support JSON as an input/output graph file format. You can also export the graph as a PNG image with cy.png().
Conversion to and from formats like XGMML will have to be a separate component/library.
Related
I am in the process of translating netCDF and GeoTIFF raster data formats into vector formats to visualize in OpenLayers. What is the most efficient OpenLayers vector source format for loading and rendering features, for example CSV or GeoJSON? Would the answer be the same for WebGL points layers? I am loading very large sets of point data using version 6.3.1.
I am new to this field. I have collected some point cloud data using a lidar sensor and a camera, and now I have .pcd files for the point clouds and .png files for the images. I want to arrange this data in the KITTI dataset structure for 3D object detection, so that I can use it with a model that takes KITTI as training data. Therefore I want to convert my .pcd files to .bin files like those in KITTI, and I also need .txt label files, so I need to annotate my data in a way that produces the same label files as the KITTI dataset. Can somebody help me? I have searched a lot, and none of the labelling tools output the same attributes that are in KITTI's .txt files.
This is the link for the KITTI 3D dataset.
http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d
There are a lot of different questions in your post, so I'm going to answer the ones I can. Here is a snippet of code showing how you can read a .pcd file:
import open3d as o3d
import numpy as np

# Read the point cloud and expose it as an N x 3 numpy array of x, y, z
pcd = o3d.io.read_point_cloud("../../path_to_your_file.pcd")
points = np.asarray(pcd.points)
and then you can format it however you want, including writing it to binary.
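KITTI's velodyne scans store each point as four little-endian float32 values (x, y, z, reflectance). Here is a minimal stdlib sketch of the .pcd-to-.bin step, assuming you have already extracted the points as (x, y, z, intensity) tuples; the function names are my own, not part of any library:

```python
import struct

def write_kitti_bin(points, path):
    """Write (x, y, z, intensity) tuples as little-endian float32 values,
    the per-point layout KITTI's velodyne .bin files use."""
    with open(path, "wb") as f:
        for x, y, z, intensity in points:
            f.write(struct.pack("<ffff", x, y, z, intensity))

def read_kitti_bin(path):
    """Read the file back into a list of (x, y, z, intensity) tuples."""
    points = []
    with open(path, "rb") as f:
        while chunk := f.read(16):  # 4 floats * 4 bytes each
            points.append(struct.unpack("<ffff", chunk))
    return points
```

If your .pcd files carry no intensity channel, a common workaround is to pad each point with a constant such as 0.0, though how much that affects a model trained on real reflectance values is something you would need to verify.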
This could be a helpful library; check it out:
link to Open3D docs
link to Open3D GitHub
You can find more references below:
https://paperswithcode.com/task/3d-object-detection
TensorFlow Object Detection API prefers TFRecord file format. MXNet and Amazon Sagemaker seem to use RecordIO format. How are these two binary file formats different, or are they the same thing?
RecordIO and TFRecord are the same in the sense that they serve the same purpose: to put data into one sequence for faster reading. Both of them use Protocol Buffers under the hood for better space allocation.
It seems to me that RecordIO is more of an umbrella term: a format used to store a huge chunk of data in one file for faster reading. Some products adopt "RecordIO" as the actual name, but TensorFlow decided to use a specific word for it: TFRecord. That's why some people call TFRecord a "TensorFlow-flavored RecordIO format".
There is no single RecordIO format as such. The people from Apache Mesos, who also call their format RecordIO, say: "Since there is no formal specification of the RecordIO format, there tend to be slight incompatibilities between RecordIO implementations". And their RecordIO format is different from the one MXNet uses; I don't see a "magic number" at the beginning of each record.
So, at the structural level, TensorFlow's TFRecord and MXNet's RecordIO are different file formats; you wouldn't expect MXNet to be able to read a TFRecord, and vice versa. But at the logical level they serve the same purpose and can be considered similar.
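Since there is no formal RecordIO specification, the shared idea behind all these formats can be illustrated with a minimal length-prefixed framing. This is a hypothetical sketch of the concept only, not the actual TFRecord or MXNet layout (the real formats add checksums and, in MXNet's case, a magic number per record):

```python
import struct

def pack_records(payloads):
    """Concatenate records, each prefixed with its byte length as a
    little-endian uint64, into one sequential stream."""
    out = bytearray()
    for payload in payloads:
        out += struct.pack("<Q", len(payload))
        out += payload
    return bytes(out)

def unpack_records(blob):
    """Walk the stream, reading each length prefix and then that many
    bytes, to recover the original payloads in order."""
    records, offset = [], 0
    while offset < len(blob):
        (length,) = struct.unpack_from("<Q", blob, offset)
        offset += 8
        records.append(blob[offset:offset + length])
        offset += length
    return records
```

Because every record announces its own length, a reader can stream through the file sequentially without any per-record index, which is exactly the "one sequence for faster reading" property described above.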
When I quantized the model with the lite modules in TensorFlow, I couldn't check the weight values that had been quantized. Is there any way to view these values in the .tflite files, or is there any way to parse the .tflite files?
There are some neural network visualizers that can also provide an interface to inspect the file. I have been using Netron. You can click on the "weights" tab of the layer you are interested in to view the data. I haven't tried it yet, but there appears to be a floppy-disk save icon when you view weights/biases in the right side-bar.
The data format is a Google FlatBuffer, defined by the schema file here. You may prefer doing this if you want to do something with the data, like output it in a different format. I found that the output from parsing it myself using the schema.fbs file matched Netron's for the CNNs I passed in. You can check out the FlatBuffers documentation here.
Here, the first answer is a guide on how to create a JSON view of a .tflite model; there you can see the quantized values.
I am wondering if there is a native TensorFlow function that can decode a binary file (for example a TFRecord) starting from a given byte offset and reading the following N bytes, without decoding the entire file.
This has been implemented for jpeg images: tf.image.decode_and_crop_jpeg
but I cannot find a way to do the same thing with any binary file.
This would be very useful when the cropping window is much smaller than the whole data.
Currently, I am using a custom tf.py_func as the mapping function of a Dataset object. It works, but with all the limitations of a custom py_func.
Is there a native tensorflow way to do the same thing?