I have exported a simple model (a cube subdivided 10 times, then smoothed with Smooth Vertex).
I export the model so it can be loaded by my engine (which expects everything to be a quad).
I export it as Wavefront OBJ.
The problem is that Blender sometimes exports faces with 4 vertex indices (a quad), sometimes 3 (a triangle), and sometimes 5 or more (an n-gon, I suppose).
I used Tris to Quads to make everything a quad, but it didn't work.
I tried exporting everything as triangles (the Triangulate Faces export option), and that does produce all triangles. (I wonder why, apparently, it can't export everything as quads.)
Well, how do I make Blender export Quads only?
BTW, some people will probably tell me to change the engine to support triangles, but I guess it's too late: too many things already expect quads. Also, I prefer working with quads to triangles or n-gons.
I fear this is a dead end for automatic processing. I am not 100% sure; maybe someone has come up with a solution, but I doubt it. Tris to Quads is useful, but as you have experienced, it does not work perfectly. If Blender can't convert all triangles to quads inside the editor, it can't export them all as quads either.
Perhaps someone has made a patch or add-on with a better algorithm, but you may still need to correct the result by hand!
You can safely keep working with quads, though: as far as the graphics card is concerned, everything is converted to triangles anyway.
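If you want to push Tris to Quads as far as it will go before exporting, you can loosen its angle thresholds, which the menu entry leaves at fairly conservative defaults. A minimal Python sketch (operator and parameter names as in recent Blender versions; the threshold values are guesses you'd tune for your mesh):

    import bpy

    # Assumes the mesh object is selected and active.
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')

    # Triangulate first so every face is a tri, then let Blender pair
    # tris back into quads with generous angle limits (in radians).
    bpy.ops.mesh.quads_convert_to_tris()
    bpy.ops.mesh.tris_convert_to_quads(face_threshold=0.9, shape_threshold=0.9)

    bpy.ops.object.mode_set(mode='OBJECT')

Any triangles that still cannot be paired will remain triangles, so a manual cleanup pass is usually needed afterwards.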
Wish you luck,
Gerd
While building a project in Unreal Engine 4.26 and trying to improve the performance of the video game, I ended up using instanced static meshes where possible. This created an issue:
My instanced meshes end up scaled by 0.9995 instead of 1 (measured experimentally).
Looking for an answer, I found a workaround suggested by the Unreal Engine devs themselves: they suggested rotating the mesh by adding a full 360 degrees, i.e. expressing the same rotation with higher values, as you can read here. This didn't work for me and, as you can see, the difference between instanced and manually placed meshes is evident.
Here is how I've instanced the meshes:
Increasing the rotation on the z-axis to 450 degrees didn't solve anything, even though doing it with the meshes provided by the devs here actually works.
I'm sure that rotations are the key to solving this, since the problem is not systematic. I haven't worked out the logic behind it, but when I build a square out of instanced meshes, I end up with some walls with gaps and some at perfect scale. I increased the size of them all so as not to have gaps, but I'm afraid that solution will bring more issues later when working with lighting and production materials. It seems the bug isn't fixed yet in UE4; is there another workaround I can use without any risk of overlapping meshes?
It seems to be a bug that hasn't been fixed yet. To populate the map with objects where a high level of accuracy is necessary, the suggestion is to place them manually. If you place them manually, UE4 will make them instances anyway, so a blueprint isn't necessary to populate rooms programmatically.
Increasing the scale as I did at first doesn't seem to cause problems, but I ended up placing the meshes manually, since that solution is more elegant even if it requires more effort.
I am totally new to Blender. I know how to create objects, but rigging is a bit of a problem for me. I downloaded a male model with everything included, but the problem is that when I move his arm (bend it with the bones), his head comes apart from his neck (they are two separate objects).
Here is an image of what happens. What can I do?
Before:
After:
It's a bit hard to see, but if the head comes loose, then the problem is in the model. One could fix it by repairing the model in mesh Edit Mode, making it one connected object. As a result of changing the mesh (adding and connecting surfaces), the weights of the original model won't work on it anymore, so you would then need to reapply weights from the bones to the mesh.
As you said you're totally new to Blender, I think all those steps would be a bit too much. I have repaired meshes, but I have 5 years of experience; for you, repairing a model might be a bit too complex (it's advanced stuff, usually a few hours of Blender work to fix something like this).
It might be much easier to start off with correct models. You can get them at Blend Swap, or you could install the Bastioni add-on; he's one of the MakeHuman creators and ported that code into Blender. Look for Bastioni and you'll get really good human models that you can pose.
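If you do want to try the repair route, the core steps can be scripted. A rough sketch, assuming the head and body meshes are both selected, the armature object is named 'Armature' (a placeholder), and a merge distance of 0.001 units suits the seam:

    import bpy

    # Join the selected mesh objects (e.g. head and body) into one.
    bpy.ops.object.join()

    # Merge vertices that sit on top of each other along the seam.
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.remove_doubles(threshold=0.001)  # "Merge by Distance"
    bpy.ops.object.mode_set(mode='OBJECT')

    # Re-parent to the armature and let Blender recompute the weights.
    arm = bpy.data.objects['Armature']  # hypothetical armature name
    arm.select_set(True)
    bpy.context.view_layer.objects.active = arm
    bpy.ops.object.parent_set(type='ARMATURE_AUTO')

Automatic weights won't match hand-painted ones, so expect to touch up the result in Weight Paint mode.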
I tried executing the example in http://doc.cgal.org/latest/Surface_mesh_skeletonization/index.html to get the skeleton of a surface mesh.
I tried using a mesh model of blood vessels with thin structures. However, no matter how refined my meshes are, parts of the skeleton always seem to lie outside the mesh model.
In the sample code there seem to be no parameters I can play around with, so I am asking if there is anything I can do to make sure the skeleton stays within the mesh model.
I have tried refining the meshes until the program crashes. Thanks for any help!
I guess you have used the free function, which sets all parameters to their defaults. If you want to tune the parameters, you need to use the class Mean_curvature_flow_skeletonization.
It has 3 parameters that need to be fine tuned so that your skeleton lies within the mesh:
quality_speed_tradeoff
medially_centered_speed_tradeoff
is_medially_centered
Note that the polyhedron demo includes a plugin where you can try the effect of the different parameters.
If you can share the mesh with me, I can also have a look.
I'm pretty much grasping at straws here because I have no idea what I'm asking, but here is the question.
I've been looking at 3D modeling out of pure interest and came across the concept of bones.
Now, I am not too sure what bones are, even after looking them up on the wiki, but they seem like an abstraction of real-life skeletons and whatnot, so in a model of, say, a human, I just think of them as the skeleton.
To my understanding, a bone is defined by a translation, a rotation, and a scale on the x, y, and z axes. (Isn't that just a single point?)
I am interested in taking a model in Blender or Max and exporting the information (whatever it may be) that is used to define these bones. I can definitely see the bones in these programs, but I want to get that data out into a text file. Is there a way to export this?
I think you need to separate these ideas:
Bones - which, as you correctly say, have a position and rotation. They are the objects that you can control and that will affect the skin of the model. They are usually in a hierarchy, so that if you move one bone it will affect all of the bones connected to it, like a human skeleton.
Skin - this is the polygonal mesh that you can usually see. It is given a base position by you in the editor, and the skeleton operates on the skin to move it around.
Animation - this is data passed to the bones, usually a rotation; for example, to make an arm bend.
http://gpwiki.org/index.php/OpenGL:Tutorials:Basic_Bones_System gives a good explanation.
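In Blender specifically, you can dump that data yourself with a few lines of Python, since each bone exposes its rest-pose transform. A minimal sketch (the armature object name and output path are placeholders):

    import bpy

    arm = bpy.data.objects['Armature']  # hypothetical object name

    with open('/tmp/bones.txt', 'w') as f:
        for bone in arm.data.bones:
            head = bone.head_local  # rest position of the bone's root
            tail = bone.tail_local  # rest position of the bone's tip
            parent = bone.parent.name if bone.parent else 'None'
            f.write(f'{bone.name} parent={parent} '
                    f'head={tuple(head)} tail={tuple(tail)}\n')

The per-frame rotations used during animation live on the pose bones (arm.pose.bones) rather than in this rest data.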
Hope that helps :3
If I have a graph of a reasonable size (e.g. ~100 nodes, ~40 edges coming out of each node) and I want to represent it in R^3 (i.e. map each node to a point in R^3 and draw a straight line between any two nodes which are connected in the original graph) in a way which would make it easy to understand its structure, what do you think would make a good drawing criterion?
I know this question is ill-posed; it's not objective. The idea behind it is easier to understand with an extreme case. Suppose you have a connected graph in which each node connects to two and only two other nodes, except for two nodes which only connect to one other node. It's not difficult to see that this graph, when drawn in R^3, can be drawn as a straight line (with nodes sprinkled over the line). Nevertheless, it is possible to draw it in a way which makes it almost impossible to see its very simple structure, e.g. by "twisting" it as much as possible around some fixed point in R^3. So, for this simple case, it's clear that a simple 3D representation is that of a straight line. However, it is not clear what this simplicity property is in the general case.
So, the question is: how would you define this simplicity property?
I'm happy with any kind of answer, be it a definition of "simplicity" computable for graphs, or a greedy approximated algorithm which transforms graphs and that converges to "simpler" 3D representations.
Thanks!
EDITED
In the meantime, I've put the force-based graph drawing ideas suggested in the answer into practice and wrote an OCaml/OpenGL program to simulate how imposing an electrical repulsive force between nodes (Coulomb's law) and a spring-like behaviour on edges (Hooke's law) would turn out. I've posted the video on YouTube. The video starts with an initial graph of 100 nodes, each with approximately 1-2 outgoing edges, and places the nodes randomly in 3D space. Then all the forces I mentioned are applied and the system is left to move around subject to them. In the beginning, the graph is a mess and it's very difficult to see the structure. Closer to the end, it is clear that the graph is almost linear. I've also experimented with larger graphs, but sometimes the geometry of the graph is just a mess, and no matter how you plot it, you won't be able to visualise anything. And here is an even more extreme example with 500 nodes.
One simple approach is described, e.g., at http://en.wikipedia.org/wiki/Force-based_algorithms_%28graph_drawing%29 . The underlying notion of "simplicity" is something like "minimal potential energy", which doesn't really correspond to simplicity in any useful sense but might be good enough in practice.
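To make that concrete, here is a minimal NumPy sketch of the force-based scheme in R^3: Coulomb-style repulsion between all node pairs, Hooke-style springs on edges, and damped integration (all constants are arbitrary and need tuning):

    import numpy as np

    def force_layout(n, edges, steps=500, k_rep=0.1, k_spring=0.05, dt=0.05):
        pos = np.random.rand(n, 3)          # random initial placement
        vel = np.zeros((n, 3))
        for _ in range(steps):
            force = np.zeros((n, 3))
            # Coulomb-like repulsion between every pair of nodes.
            diff = pos[:, None, :] - pos[None, :, :]    # (n, n, 3)
            dist = np.linalg.norm(diff, axis=2) + 1e-9  # avoid div by zero
            force += (k_rep * diff / dist[:, :, None] ** 3).sum(axis=1)
            # Hooke-like springs pulling connected nodes together.
            for i, j in edges:
                d = pos[j] - pos[i]
                force[i] += k_spring * d
                force[j] -= k_spring * d
            # Damped Euler integration toward an equilibrium.
            vel = 0.9 * (vel + dt * force)
            pos += dt * vel
        return pos

    # A path graph should relax toward (roughly) a straight line.
    coords = force_layout(100, [(i, i + 1) for i in range(99)])

The equilibrium it settles into is a local minimum of the potential energy mentioned above, which is exactly the "simplicity" criterion this approach optimises.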
(If you have 100 nodes of degree 40, I have some doubt as to whether any way of drawing them is going to reveal much in the way of human-accessible structure. That's a lot of edges. Still, good luck!)