What are the minimum .fbx file requirements for the FBX SDK?

I am writing a converter that converts 3D models to FBX for Unreal Engine.
A very simple v0.1.0 of the converter is complete.
Blender imports its output correctly, but the FBX SDK (which Unreal Engine uses) returns an error:
File is corrupted
I think I'm missing some required element. Is the minimum content the FBX SDK expects documented anywhere?
I have tried writing out an empty scene with the SDK and inspecting the result, but it contains too many elements to tell which ones are actually required.
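One concrete, checkable part of the "minimum requirements" is the binary FBX header. There is no official public spec; the layout below follows the commonly cited reverse-engineered description (e.g. the Blender developers' write-up), so treat it as an assumption. A minimal sketch that validates the magic bytes and reads the version field; the sample header is hand-built for illustration, not taken from a real exporter:

```python
import struct

# 21-byte magic: "Kaydara FBX Binary" followed by two spaces and a NUL.
FBX_MAGIC = b"Kaydara FBX Binary  \x00"

def read_fbx_header(data: bytes) -> int:
    """Validate the binary-FBX magic and return the format version."""
    if data[:21] != FBX_MAGIC:
        raise ValueError("not a binary FBX file")
    # Bytes 21-22 are the constant pair [0x1A, 0x00]; bytes 23-26 hold
    # the version as a little-endian uint32 (e.g. 7400 for FBX 2014/15).
    (version,) = struct.unpack_from("<I", data, 23)
    return version

# Hand-built header for illustration only.
header = FBX_MAGIC + b"\x1a\x00" + struct.pack("<I", 7400)
print(read_fbx_header(header))  # → 7400
```

If the SDK rejects the file before reaching any scene content, a malformed header (or a missing version field) is one of the first things worth ruling out.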

Related

Freeze Saved_Model.pb created from converted Keras H5 model

I am currently trying to train a custom model for use in Unity (Barracuda) for object detection, and I am struggling with what I believe to be the last part of the pipeline. Following various tutorials and git repos, I have done the following:
1. Using Darknet, I trained a custom model based on Tiny-YOLOv2 (tested successfully with a webcam Python script).
2. I converted the final weights from that training to a Keras (.h5) file (also tested successfully with a webcam Python script).
3. From Keras, I then used tf.saved_model to produce a saved_model.pb.
4. From saved_model.pb, I then used tf2onnx.convert to produce an ONNX file.
5. Supposedly, from there it can work in one of a few Unity sample projects...
...however, the resulting model fails to load in the Unity sample projects I've tried. From various posts it seems I may need a 'frozen' saved_model.pb before converting it to ONNX. However, all the guides and Python functions that seem to be used for freezing saved models require far more arguments than I have awareness of, or data for, after going through so many systems. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py, for example: after converting to Keras I am only left with an .h5 file, with no knowledge of what an input_graph_def or output_node_names might refer to.
Additionally, for whatever reason, I cannot find any TF version (1 or 2) that can successfully run a script using 'from tensorflow.python.checkpoint import checkpoint_management'; it genuinely seems to no longer exist.
I am not sure why I am going through all of these conversions and steps, but every attempt to find a cleaner path between training and Unity seemed to lead only to dead ends.
Any help or guidance on this topic would be sincerely appreciated, thank you.
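One detail worth noting about the pipeline above: for a TF2 SavedModel, tf2onnx's command-line entry point (`python -m tf2onnx.convert`) accepts the SavedModel directory directly, which sidesteps freeze_graph and its input_graph_def/output_node_names arguments entirely. A sketch that only assembles the invocation (the paths and opset are placeholders to substitute with your own):

```python
import sys

# Placeholder paths -- substitute your real SavedModel directory and output.
saved_model_dir = "saved_model"
onnx_path = "model.onnx"

# tf2onnx reads the SavedModel directly; no frozen graph is needed.
cmd = [
    sys.executable, "-m", "tf2onnx.convert",
    "--saved-model", saved_model_dir,
    "--output", onnx_path,
    "--opset", "13",
]
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

Whether the resulting ONNX file loads in Barracuda still depends on the ops and opset Barracuda supports, so this removes the freezing step but not the compatibility question.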

How to visualize data flow in a tensorflow project

I am trying to debug this project: https://github.com/VisualComputingInstitute/TrackR-CNN
It is a Mask R-CNN based project, and I want to visualize the data flow among various functions in network/FasterRCNN.py (https://github.com/VisualComputingInstitute/TrackR-CNN/blob/master/network/FasterRCNN.py),
mainly rpn_head() and fastrcnn_head(). I tried py_func and pdb but was not successful. Session.run() is called inside core/Engine.py (https://github.com/VisualComputingInstitute/TrackR-CNN/blob/master/core/Engine.py).
Is there any way to see the image manipulation during training (i.e. rpn values, reid_dim, etc.)?
Thanks.
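One low-tech pattern for this kind of TF1 graph (a sketch, not anything from the TrackR-CNN codebase): wrap the tensors of interest in tf.py_func with a plain-Python callback that pickles each value to disk, then inspect the dumps offline. The callback itself is ordinary Python; the `dump_dir` path and the "rpn_scores" tag below are made up for illustration:

```python
import os
import pickle

def make_dumper(dump_dir="debug_dumps"):
    """Return a callback usable inside tf.py_func: it pickles every value
    it receives to a numbered file and passes the value through unchanged."""
    os.makedirs(dump_dir, exist_ok=True)
    counter = {"n": 0}

    def dump(value, tag="tensor"):
        path = os.path.join(dump_dir, f"{tag}_{counter['n']:05d}.pkl")
        with open(path, "wb") as f:
            pickle.dump(value, f)
        counter["n"] += 1
        return value  # identity, so the graph's data flow is unchanged

    return dump

# Plain-Python demonstration; inside rpn_head() you would wrap a tensor
# roughly like:  x = tf.py_func(dump, [x], x.dtype)
dump = make_dumper()
dump([1.0, 2.0], tag="rpn_scores")
print(sorted(os.listdir("debug_dumps")))
```

Because the callback returns its input unchanged, it can be spliced into the graph without altering training; the pickled arrays can then be loaded and plotted in a separate script.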

How do I convert a Tensorflow model to .mlmodel?

I want to convert a Tensorflow model with the following structure to a .mlmodel file for use in an iOS app:
cub_image_experiment/
  logdir/
    val_summaries/
    test_summaries/
    finetune/
      val_summaries/
  cmds.txt
  config_train.yaml
  config_test.yaml
I'm following this tutorial: https://github.com/visipedia/tf_classification/wiki/CUB-200-Image-Classification
However, I'm having trouble understanding the structure of the project. Which files are important, and how do I turn all the separate config files and everything else into a single .mlmodel file that I can use in my application?
I've looked online, and all I could find was how to convert a .caffemodel or a .pb file to .mlmodel. Those are all single files, whereas my project has multiple files. I found a tutorial on converting a TF model into a single .pb file; however, that model's structure was different, and it did not contain any yaml files. My project is not focused on creating a model at the moment, merely on integrating one into an iOS app. I found this model interesting for an app idea and wanted to know whether it can be integrated. If there are any tutorials out there that might help with this sort of problem, please let me know.
None of that stuff is used by the Core ML model; the yaml files etc. are only used to train the TF model.
All you need to provide is a frozen graph (a .pb file), which you then convert to an .mlmodel using tfcoreml.
It looks like your project has checkpoints rather than a frozen graph. There is a TF utility you can use to convert a checkpoint to a frozen graph; see https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py
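Once you have the frozen graph, the tfcoreml call itself is short. A sketch, assuming the frozen graph is saved as `frozen.pb`; the tensor names and input shape below are placeholders you would read off your own graph (e.g. in TensorBoard), not values from this project:

```python
import os

# Conversion settings -- "input:0", "Softmax:0", and the shape are
# placeholders; substitute the real tensor names from your frozen graph.
convert_kwargs = dict(
    tf_model_path="frozen.pb",
    mlmodel_path="model.mlmodel",
    output_feature_names=["Softmax:0"],
    input_name_shape_dict={"input:0": [1, 299, 299, 3]},
)

if os.path.exists("frozen.pb"):
    import tfcoreml
    tfcoreml.convert(**convert_kwargs)
else:
    # No frozen graph in this environment; just show the intended settings.
    print("frozen.pb not found; would write", convert_kwargs["mlmodel_path"])
```

The config yaml files never enter this step; they only matter if you retrain the TF model before freezing it.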

Tensorflow: partially decode binary data

I am wondering if there is a native TensorFlow function that can decode a binary file (for example a tfrecord) starting from a given byte offset and reading the following N bytes, without decoding the entire file.
This has been implemented for jpeg images: tf.image.decode_and_crop_jpeg
but I cannot find a way to do the same thing with any binary file.
This would be very useful when the cropping window is much smaller than the whole data.
Currently, I am using a custom tf.py_func as the mapping function of a Dataset object. It works, but with all the limitations of a custom py_func.
Is there a native tensorflow way to do the same thing?
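For fixed-size records there is a native option worth checking: tf.data.FixedLengthRecordDataset takes record, header, and footer byte counts and never parses the rest of the file. For the general offset-plus-N-bytes case, the py_func route is small; a sketch of the plain-Python part (seek/read is exactly what a native op would do under the hood; the file and offsets here are a throwaway demo):

```python
import os
import tempfile

def read_slice(path, offset, n):
    """Read n bytes starting at byte `offset`; seek() jumps straight to
    the offset, so the rest of the file is never touched."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(n)

# Demo on a throwaway file; in a Dataset pipeline this function would be
# wrapped with tf.py_func / tf.numpy_function as the map transformation.
path = os.path.join(tempfile.mkdtemp(), "records.bin")
with open(path, "wb") as f:
    f.write(bytes(range(256)))

print(read_slice(path, offset=10, n=4))  # → b'\n\x0b\x0c\r'
```

Note this gives random access to raw bytes only; a tfrecord's internal framing (length prefix and CRCs) still has to be decoded before the payload is usable.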

How to visualize CGAL results with the VTK library

I am new to this forum.
I have a problem in my C++ project.
I used VTK, ITK, and Qt, but the mesh was not perfect, so I tried to include CGAL via CMake.
I can do everything using CGAL, but I can't visualize the objects it creates. I have tried exporting the results (coordinates, vertices, triangles...) to a generic file such as xml or txt so that VTK can read and render them.
Can you please help me find a way to visualize the CGAL output?
Thank you
There is a mesh-to-VTK converter which I used a while ago.
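If a converter is not an option, the legacy ASCII VTK format is simple enough to emit by hand from the vertex and triangle lists CGAL gives you, and VTK's readers (and ParaView) load it directly. A sketch in Python for brevity; the same few lines translate directly to C++ with std::ofstream:

```python
def write_legacy_vtk(path, points, triangles):
    """Write points (list of (x, y, z)) and triangles (list of 3 vertex
    indices) as a legacy ASCII VTK PolyData file."""
    with open(path, "w") as f:
        f.write("# vtk DataFile Version 3.0\n")
        f.write("mesh exported from CGAL\n")
        f.write("ASCII\nDATASET POLYDATA\n")
        f.write(f"POINTS {len(points)} float\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")
        # POLYGONS header: cell count, then total ints (4 per triangle:
        # the vertex count 3 plus three indices).
        f.write(f"POLYGONS {len(triangles)} {4 * len(triangles)}\n")
        for a, b, c in triangles:
            f.write(f"3 {a} {b} {c}\n")

# Single-triangle smoke test.
write_legacy_vtk("mesh.vtk", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(open("mesh.vtk").readline().strip())  # → # vtk DataFile Version 3.0
```

On the VTK side, vtkPolyDataReader reads the file and feeds straight into the usual mapper/actor pipeline, so no custom xml/txt parsing is needed.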