I am new to this forum.
I have a problem with my C++ project.
I used VTK, ITK, and Qt, but the mesh was not perfect, so I tried to include CGAL via CMake.
I can do everything I need with CGAL, but I can't visualize the objects it creates. I have tried to export the results (coordinates, vertices, triangles...) to a generic file such as XML or TXT so that I can read it from VTK and render it.
Can you please help me find a way to visualize the results of the CGAL operations?
Thank you
There is a mesh-to-VTK converter that I used a while ago.
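If you want to roll your own, one simple route is to have CGAL stream the mesh out as an OFF file (CGAL's mesh classes can be written to OFF) and convert that to a legacy ASCII .vtk file, which vtkPolyDataReader can load. Below is a minimal sketch of such a converter in Python; it assumes a plain OFF file without comment lines, and the file names are placeholders:

# Minimal sketch: convert an OFF file (e.g. written by CGAL) into a legacy
# ASCII .vtk POLYDATA file readable by vtkPolyDataReader.
# Assumes a plain OFF file with no comment lines; paths are placeholders.
def off_to_vtk(off_path, vtk_path):
    with open(off_path) as f:
        tokens = f.read().split()
    assert tokens[0] == "OFF"
    nv, nf = int(tokens[1]), int(tokens[2])  # tokens[3] is the edge count
    idx = 4
    verts = [tuple(float(t) for t in tokens[idx + 3*i : idx + 3*i + 3])
             for i in range(nv)]
    idx += 3 * nv
    faces = []
    for _ in range(nf):
        n = int(tokens[idx])  # number of vertices in this face
        faces.append([int(t) for t in tokens[idx + 1 : idx + 1 + n]])
        idx += 1 + n
    with open(vtk_path, "w") as out:
        out.write("# vtk DataFile Version 3.0\nCGAL mesh\nASCII\n")
        out.write("DATASET POLYDATA\n")
        out.write(f"POINTS {nv} float\n")
        for x, y, z in verts:
            out.write(f"{x} {y} {z}\n")
        size = sum(len(face) + 1 for face in faces)
        out.write(f"POLYGONS {nf} {size}\n")
        for face in faces:
            out.write(" ".join(str(i) for i in [len(face)] + face) + "\n")

off_to_vtk("mesh.off", "mesh.vtk")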
I am writing a converter that turns 3D models into FBX for Unreal Engine.
A very simple v0.1.0 of the converter is complete.
I can import its output correctly in Blender, but the FBX SDK (used by Unreal Engine) returns an error:
File is corrupted
I think I'm missing some required element; is there a minimum requirement for the FBX SDK documented anywhere?
I have tried writing out empty FBX data using the SDK, but the result contains too many elements to tell which ones are required.
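For reference, producing that baseline file is short with the Autodesk FBX Python bindings; this is only a sketch of the write-an-empty-scene-and-diff approach, assuming the fbx module from the FBX SDK is installed (the output file name is a placeholder):

import fbx

# Create the SDK manager and attach default I/O settings, following the
# pattern used in the SDK's own samples.
manager = fbx.FbxManager.Create()
manager.SetIOSettings(fbx.FbxIOSettings.Create(manager, fbx.IOSROOT))

# Export an empty scene; the resulting file serves as a baseline to diff
# against the converter's output.
scene = fbx.FbxScene.Create(manager, "EmptyScene")
exporter = fbx.FbxExporter.Create(manager, "")
if exporter.Initialize("empty.fbx", -1, manager.GetIOSettings()):
    exporter.Export(scene)
exporter.Destroy()
manager.Destroy()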
Could you please tell me whether it is feasible to transform a saved torch model (torch.save) into algebraic matrices/equations that can be operated on with numpy or plain Python, without having to install torch and its related libraries (which take up a lot of space)? If so, could you please give me some hints or a link with explanations? Thank you very much.
I'm not aware of any way to do this without a lot of your own work. Basically, you'd have to port most of the PyTorch library to numpy, which would be a huge project. If space is an issue, check whether you can save some space by, for example, using an earlier torch version or the CPU-only build of PyTorch.
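That said, if the model is small and its architecture is known, one workable pattern is to export just the weights once (on a machine that does have torch) and re-implement the forward pass in numpy. A minimal sketch, assuming the checkpoint is a state_dict for a two-layer MLP with ReLU; the fc1/fc2 layer names and file names are hypothetical:

# One-time export, on a machine with torch installed:
import numpy as np
import torch

state = torch.load("model.pt", map_location="cpu")  # a state_dict
np.savez("weights.npz", **{k: v.numpy() for k, v in state.items()})

# Inference later, with numpy only:
import numpy as np

w = np.load("weights.npz")

def forward(x):
    # torch.nn.Linear stores weight as (out_features, in_features),
    # hence the transpose.
    h = np.maximum(x @ w["fc1.weight"].T + w["fc1.bias"], 0.0)  # ReLU
    return h @ w["fc2.weight"].T + w["fc2.bias"]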
Does the assimp glTF 2.0 exporter support storing texture data inside the file? Specifically, I am interested in the binary version of glTF 2.0 (glb).
Assimp supports storing textures in glTF files:
Main page of the assimp documentation
But unfortunately it does not support glb.
If you visit http://projector.tensorflow.org/ you can use it with your own dataset (i.e., a TSV file). I am playing with N-dimensional data and have found it useful to look at these visualisations after PCA reduction.
I am wondering how I can run my own version of the Projector on my machine.
Looking at the docs, it seems to be released only as a TensorBoard plugin for viewing embedding results...
Thanks
According to the replies on https://github.com/tensorflow/tensorflow/issues/7562, which are more recent than the other answer here, you can get the standalone version at https://github.com/tensorflow/embedding-projector-standalone/ and edit oss_demo_projector_config.json to point to your datasets.
The demo files are binary files ending in .bytes, which can be generated from a numpy array with .tofile:
import numpy

# vector_shape is (number of points, embedding dimension).
vectors = numpy.zeros(vector_shape, dtype=numpy.float32)
vectors.tofile('my_tensors.bytes')  # raw float32 bytes, row-major
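For completeness, a config entry pointing the projector at that file might look like the following sketch; the field names mirror those in the demo oss_demo_projector_config.json, and the tensor name and shape here are placeholders:

import json

config = {
    "embeddings": [{
        "tensorName": "My tensors",      # display name, placeholder
        "tensorShape": [10000, 200],     # must match vector_shape above
        "tensorPath": "my_tensors.bytes",
    }]
}
with open("oss_demo_projector_config.json", "w") as f:
    json.dump(config, f, indent=2)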
It has only been released as a TensorBoard plugin.
I am currently working on a project with a hospital where I need to detect facial features through an iPhone app in order to determine whether any facial deformities exist.
For example, I found https://github.com/auduno/clmtrackr, which shows facial feature detection points. I thought I might look at the code and port it to Objective-C. The problem is that when I tested clmtrackr on a face with a deformity, it did not work as intended.
You can also check it here: http://www.auduno.com/clmtrackr/clm_image.html
I also tried this image:
Both were inconsistent in detecting all the feature points they are supposed to detect.
Do you know of any API that could do this? Or do you know what techniques I should look into so that I can build one myself?
Thank you
There are several libraries for facial landmark detection:
Dlib (C++ / Python)
CLM-Framework (C++)
Face++ (FacePlusPlus): Web API
OpenCV. Here's a tutorial: http://www.learnopencv.com/computer-vision-for-predicting-facial-attractiveness/
You can read more at: http://www.learnopencv.com/facial-landmark-detection/
You can use dlib, since its face detection algorithm is fast and it also includes a pre-trained landmark model:
https://github.com/davisking/dlib/
https://github.com/davisking/dlib-models
For integrating it on iOS, refer to "how to build DLIB for iOS".
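A minimal sketch of landmark detection with dlib in Python, assuming the 68-point predictor file from the dlib-models repo above (the image path is a placeholder):

import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("face.jpg")
for face in detector(img):
    shape = predictor(img, face)
    # Collect the 68 (x, y) landmark points for this face.
    points = [(shape.part(i).x, shape.part(i).y)
              for i in range(shape.num_parts)]
    print(points)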
Alternatively, you could use OpenFace. To check it out, just download the binaries (http://www.cl.cam.ac.uk/~tb346/software/OpenFace_0.2_win_x86.zip) and you're ready to go with the command line: https://github.com/TadasBaltrusaitis/OpenFace/wiki/Command-line-arguments
Note: I would not recommend OpenCV, since its training process and results are not as consistent.