Assimp gltf2 exporter support for internal texture storage

Does the assimp glTF2 exporter support storing texture data inside the file? Specifically, I am interested in the binary version of glTF2.

Assimp supports storing textures inside glTF files (see the main page of the assimp documentation), but unfortunately it does not support this for glb.
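For reference, exporting through the C++ API might look roughly like the sketch below. This is an untested sketch: the format IDs ("gltf2", and "glb2" in newer releases) and the embedded-texture behaviour depend on your assimp version, and the file names are placeholders; check Assimp::Exporter::GetExportFormatCount() / GetExportFormatDescription() for what your build actually offers.

    // Rough sketch (untested): export a loaded scene with Assimp's C++ API.
    // "model.obj" / "model.gltf" are placeholder file names.
    #include <assimp/Exporter.hpp>
    #include <assimp/Importer.hpp>
    #include <assimp/postprocess.h>
    #include <assimp/scene.h>
    #include <iostream>

    int main() {
        Assimp::Importer importer;
        const aiScene* scene = importer.ReadFile("model.obj", aiProcess_Triangulate);
        if (!scene) {
            std::cerr << importer.GetErrorString() << "\n";
            return 1;
        }

        Assimp::Exporter exporter;
        // "gltf2" writes a .gltf (+ external .bin/textures); newer assimp
        // releases also list a "glb2" format ID for the binary container.
        if (exporter.Export(scene, "gltf2", "model.gltf") != AI_SUCCESS) {
            std::cerr << exporter.GetErrorString() << "\n";
            return 1;
        }
        return 0;
    }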

Related

What are the minimum .fbx file requirements for the FBX-SDK?

I am writing a converter that converts 3D models to FBX for Unreal Engine.
A very simple v0.1.0 converter has been completed.
I can import its output correctly in Blender, but the FBX-SDK (used by Unreal Engine) returns an error:
File is corrupted
I think I'm missing some necessary element. Are the minimum requirements for the FBX-SDK written down anywhere?
I have tried writing out empty FBX data using the SDK, but it contains too many elements to tell which ones are actually required.
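Not an answer to the minimum-requirements question, but for diffing purposes, the SDK-side "empty export" mentioned above can be produced with a few calls. A rough sketch from memory of the FBX SDK follows; the class and function names should be verified against the SDK headers, and "empty.fbx" is a placeholder.

    // Rough sketch (from memory, unverified): write an empty scene with the
    // Autodesk FBX SDK, to diff its output against a hand-written .fbx.
    #include <fbxsdk.h>

    int main() {
        FbxManager* manager = FbxManager::Create();
        FbxIOSettings* ios = FbxIOSettings::Create(manager, IOSROOT);
        manager->SetIOSettings(ios);

        FbxScene* scene = FbxScene::Create(manager, "emptyScene");

        FbxExporter* exporter = FbxExporter::Create(manager, "");
        // -1 picks the default (binary) writer; selecting the ASCII writer via
        // the IO plugin registry makes the output easier to diff by eye.
        if (exporter->Initialize("empty.fbx", -1, manager->GetIOSettings())) {
            exporter->Export(scene);
        }
        exporter->Destroy();
        manager->Destroy();
        return 0;
    }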

TFLite Select Ops in C++ inference without using FlexDelegate

I'm trying to build a C++ TFLite program that runs inference with a model which uses TF Select ops, without building the entire TFLite delegate library, i.e. without adding the Flex delegate as a dependency in the BUILD file (using Bazel here). Keeping the Flex delegate in allows the program to build and run on x86_64, but cross-compilation for the Raspberry Pi fails, and furthermore, the binary is nearly an order of magnitude larger than expected. Is it possible to use ops which are not natively supported by TFLite in a TFLite C++ program without building the entire delegate library?
I think selective build is what you are looking for: https://www.tensorflow.org/lite/guide/reduce_binary_size
It links only the ops that are used in your models, which vastly reduces the library size.
If you follow the instructions on that page, you can produce .aar files; extracting those, you will find the .so libraries.
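For what it's worth, once the selectively built library is linked, the C++ side is ordinary TFLite inference. A rough sketch follows; header paths and the resolver name may differ between TFLite versions, and "model.tflite" is a placeholder.

    // Sketch: plain TFLite C++ inference, linked against a selectively built
    // library so only the kernels your model needs are compiled in.
    #include <cstdio>
    #include <memory>
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    int main() {
        auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
        if (!model) {
            std::fprintf(stderr, "failed to load model\n");
            return 1;
        }

        tflite::ops::builtin::BuiltinOpResolver resolver;
        std::unique_ptr<tflite::Interpreter> interpreter;
        tflite::InterpreterBuilder(*model, resolver)(&interpreter);
        if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) {
            return 1;
        }

        // Fill interpreter->typed_input_tensor<float>(0) with your data, then:
        if (interpreter->Invoke() != kTfLiteOk) {
            return 1;
        }
        return 0;
    }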

How to reduce the TensorFlow Lite binary size to only the operators needed

The TensorFlow Lite binary size is about 900 KB, which is still too large for me. How can I reduce the size so that it includes only the operators needed to support my model?
Tensorflow Lite
If you are using Tensorflow Lite, the only solution I have found is to work at the level of the Interpreter and customize the kernel library (OpResolver). I don't think there is an automatic way of doing this, and the only available example (here is the header) is not so easy to understand, IMHO. I think more improvements on this topic will be included in the next releases. Also, I'm not sure this will actually reduce the size of the final library. In the API notes this approach is considered equivalent to selective registration, which is explained in the next part of the answer for Tensorflow Mobile.
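To give an idea of what customizing the OpResolver means in code, here is a hedged sketch that registers only a handful of builtin ops by hand. The ops listed are made up for illustration, and the header paths and Register_* declarations vary between TFLite versions (older releases live under tensorflow/contrib/lite).

    // Sketch: a MutableOpResolver that registers only the ops one specific
    // model needs, instead of the full BuiltinOpResolver kernel library.
    #include <memory>
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/model.h"
    #include "tensorflow/lite/mutable_op_resolver.h"

    namespace tflite { namespace ops { namespace builtin {
    // Declared in the TFLite kernel headers; repeated here for brevity.
    TfLiteRegistration* Register_CONV_2D();
    TfLiteRegistration* Register_FULLY_CONNECTED();
    TfLiteRegistration* Register_SOFTMAX();
    }}}  // namespace tflite::ops::builtin

    std::unique_ptr<tflite::Interpreter> BuildInterpreter(
            const tflite::FlatBufferModel& model) {
        tflite::MutableOpResolver resolver;
        resolver.AddBuiltin(tflite::BuiltinOperator_CONV_2D,
                            tflite::ops::builtin::Register_CONV_2D());
        resolver.AddBuiltin(tflite::BuiltinOperator_FULLY_CONNECTED,
                            tflite::ops::builtin::Register_FULLY_CONNECTED());
        resolver.AddBuiltin(tflite::BuiltinOperator_SOFTMAX,
                            tflite::ops::builtin::Register_SOFTMAX());

        std::unique_ptr<tflite::Interpreter> interpreter;
        tflite::InterpreterBuilder(model, resolver)(&interpreter);
        return interpreter;
    }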
Tensorflow Mobile
The question "How can I enable only the ops used by my model?" is answered in the Tensorflow Mobile documentation (in the Binary Size subsection).
The usual size for Tensorflow Mobile seems to be 12 MB, but it is possible to reduce it by including only the ops required by the model. Obviously this requires building Tensorflow Lite as a framework using Bazel.
You can create a header of required ops (ops_to_register.h) using the tool print_selective_registration_header.py, which is available here. The generated header should be placed in the root of the Tensorflow source directory.
You are now ready to compile the library, passing the SELECTIVE_REGISTRATION definition to the compiler (when building with Bazel, add the option --copt="-DSELECTIVE_REGISTRATION").
I think this procedure will give you a library with only the minimal set of ops inside. Some other compiler optimization flags may help with the size (sometimes at the expense of performance).
Compile options
I don't know how you are compiling your code (static lib or dynamic lib), what your performance requirements are, or what the default options in the Tensorflow Bazel files are, but you may try the following:
Reduce the optimization level to -O1 or -Os. This sometimes helps with the binary size; I think the default for Tensorflow is -O2 for the framework and -O3 for the individual kernels, though I don't know about the Lite version.
Use the flags -fdata-sections and --gc-sections. Quoting the gcc documentation: "[-fdata-sections] Together with a linker garbage collection (linker --gc-sections option) these options may lead to smaller statically-linked executables (after stripping)." (It seems that at least --gc-sections is already used in the linker options for the Raspberry Pi.)
-fvisibility-inlines-hidden should impact the performance of inline functions, but it decreases the size of the export table of the shared object. This option may break the library; some explanations can be read here.
Even more dangerous is -fvisibility=hidden. Look at it here.

How to load a tensorflow checkpoint by myself without the C++ API?

I am using tensorflow 1.0.
My production environment cannot build tensorflow-cpp because of its low gcc & glibc versions.
Is there any documentation about how to load a checkpoint or frozen graph in C++ without the API?
1. How are the network parameters (embeddings, ...) saved?
2. How is the graph structure (layers, weights, ...) saved?
There is no documentation on doing this that I know of. Loading a checkpoint without the C++ runtime won't be very useful to you because you won't be able to run it.
The checkpoint by default does not include the graph structure, but if you export a metagraph you will get it in a serialized protocol buffer format. Implementing a parser for this (and the weights checkpoint) yourself sounds difficult to get right and likely to break in the future.
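To make the "serialized protocol buffer" point concrete: if you only need the graph structure (not to run it), one approach is to compile just the TensorFlow .proto files with protoc and parse the exported metagraph with libprotobuf alone, without the TensorFlow runtime. A hedged sketch follows; "model.meta" and the proto path are assumptions, and the variable values stored in the checkpoint files are a separate, harder format.

    // Sketch: parse an exported MetaGraphDef with protobuf only (no TF
    // runtime). This yields the graph structure, not a runnable graph.
    #include <fstream>
    #include <iostream>
    #include "tensorflow/core/protobuf/meta_graph.pb.h"  // generated by protoc

    int main() {
        std::ifstream in("model.meta", std::ios::binary);  // placeholder path
        tensorflow::MetaGraphDef meta;
        if (!meta.ParseFromIstream(&in)) {
            std::cerr << "failed to parse MetaGraphDef\n";
            return 1;
        }
        // Walk the serialized graph: node names, op types, attributes.
        for (const auto& node : meta.graph_def().node()) {
            std::cout << node.name() << " : " << node.op() << "\n";
        }
        return 0;
    }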

How to visualize CGAL results with the VTK library

I am new to this forum.
I have a problem in my C++ project.
I use VTK, ITK, and Qt, but the mesh was not perfect, so I tried to include CGAL via CMake.
I can do everything using CGAL, but I can't visualize the objects created with CGAL. I have tried to export the results (coordinates, vertices, triangles, ...) to a generic file like XML or TXT so that I can read it from VTK and render it.
Can you please help me find a way to visualize the results of the CGAL operations?
Thank you
There is a mesh to vtk converter which I used a while ago.
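Alternatively, if you want to skip the intermediate file entirely, you can copy the CGAL mesh into a vtkPolyData in memory. Below is a hedged sketch assuming a CGAL::Surface_mesh over a simple cartesian kernel; the type and function names should be checked against your CGAL and VTK versions.

    // Sketch: copy a CGAL::Surface_mesh into a vtkPolyData for rendering.
    #include <CGAL/Simple_cartesian.h>
    #include <CGAL/Surface_mesh.h>
    #include <CGAL/boost/graph/iterator.h>
    #include <vtkCellArray.h>
    #include <vtkIdList.h>
    #include <vtkPoints.h>
    #include <vtkPolyData.h>
    #include <vtkSmartPointer.h>
    #include <map>

    using Kernel = CGAL::Simple_cartesian<double>;
    using Mesh = CGAL::Surface_mesh<Kernel::Point_3>;

    vtkSmartPointer<vtkPolyData> ToPolyData(const Mesh& mesh) {
        auto points = vtkSmartPointer<vtkPoints>::New();
        auto polys = vtkSmartPointer<vtkCellArray>::New();
        std::map<Mesh::Vertex_index, vtkIdType> ids;

        // Copy the vertex coordinates.
        for (Mesh::Vertex_index v : mesh.vertices()) {
            const auto& p = mesh.point(v);
            ids[v] = points->InsertNextPoint(p.x(), p.y(), p.z());
        }

        // Copy the faces (triangles or general polygons).
        for (Mesh::Face_index f : mesh.faces()) {
            auto cell = vtkSmartPointer<vtkIdList>::New();
            for (Mesh::Vertex_index v :
                     CGAL::vertices_around_face(mesh.halfedge(f), mesh)) {
                cell->InsertNextId(ids[v]);
            }
            polys->InsertNextCell(cell);
        }

        auto poly = vtkSmartPointer<vtkPolyData>::New();
        poly->SetPoints(points);
        poly->SetPolys(polys);
        return poly;  // hand this to a vtkPolyDataMapper / vtkActor as usual
    }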