BLENDER: Why are vertices getting weighted by a lot of bones with automatic weighting - blender

I have bought two female characters online and adjusted each to use as a character model for a game.
Unfortunately, with one of them (Female_Legs02), 'parenting with automatic weights' selects a lot of bones to influence each vertex.
This causes very inaccurate weight painting on the mesh.
Using the same armature and parenting it to Female_Legs, it works as expected.
I have tried everything, including:
normalizing weights
adjusting mesh topology
parenting with envelopes
etc.
https://drive.google.com/file/d/1uccd7T69MRzX1YYqkx3nMUBBX1M2zTXX/view?usp=sharing
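One thing that can help in cases like this is capping the number of influences per vertex after parenting. A minimal sketch, assuming Blender's Python API (the 4-influence limit is an illustrative game-engine convention, not something from the question):

```python
import bpy

# A sketch only: assumes the deformed mesh is the active object after
# "parent with automatic weights" has run.
bpy.ops.object.mode_set(mode='WEIGHT_PAINT')

# Keep only the 4 strongest bone influences per vertex...
bpy.ops.object.vertex_group_limit_total(group_select_mode='ALL', limit=4)
# ...then re-normalize so the remaining weights on each vertex sum to 1.
bpy.ops.object.vertex_group_normalize_all(lock_active=False)

bpy.ops.object.mode_set(mode='OBJECT')
```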

Related

MeshLab Face Count

Not sure if I'm supposed to ask this question here, but I'm going to give it a try since MeshLab doesn't seem to respond to issues on GitHub quickly.
When I imported a mesh consisting of 100 vertices and 75 quad faces, MeshLab somehow recognizes it as having 146 faces. What is the problem here?
Please find here the OBJ file and below the screenshot:
Any help/advice would be greatly appreciated,
Thank you!
Tim
Yes, per the MeshLab homepage, Stack Overflow is now the recommended place to ask questions. GitHub should be reserved for reporting actual bugs.
It is important to understand that MeshLab is designed to work with large unstructured triangular meshes, and while it can do some things with quad and polygonal meshes, there are some limitations and idiosyncrasies.
MeshLab essentially treats all meshes as triangular for most operations; when a polygonal mesh is opened, MeshLab creates "faux edges" that subdivide the mesh into triangles. You can visualize the faux edges by toggling "Polygonal Modality" in the edge display pane. If you run "Compute Geometric Measures", it will report different edge lengths with and without the faux edges.
This is why MeshLab is reporting a higher number of faces for your model: it is reporting the number of faces after triangulation, i.e. including the faux-edge subdivision. Each quad is split into two triangles, so doubling your 75 quad faces gives roughly the 146 triangular faces reported, which makes sense. Unfortunately I don't know of a way to have MeshLab report the number of faces without these faux edges.
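As a sanity check on the numbers, here is a minimal sketch (plain Python, no MeshLab required) of the fan-triangulation count MeshLab is effectively reporting; the 146-vs-150 breakdown at the end is an inference, not something confirmed from the file:

```python
def triangulated_face_count(faces):
    """Triangle count after fan-triangulating each polygon: an n-gon
    yields n - 2 triangles, so a pure quad mesh doubles its face count."""
    return sum(len(face) - 2 for face in faces)

# 75 pure quads would give 150 triangles; a reported 146 would also be
# consistent with 71 quads plus 4 faces that were already triangles
# (71 * 2 + 4 * 1 = 146).
print(triangulated_face_count([[0, 1, 2, 3]] * 75))  # -> 150
```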
Most filters only work on triangular meshes, and if run on a polygonal mesh the faux edges will be used. A few specific filters (e.g. those in the "Polygonal and Quad Mesh" category) work with quads, and for these the faux edges should be ignored. When exporting, if you check "polygonal" the faux edges should be discarded and the mesh will be saved with the proper polygons, otherwise the mesh will be permanently triangulated per the faux edges.
Hope this helps!

Basics of Face Sculpting in Blender

I mean, the basics:
1) I have seen in online videos that they model a character (or anything) from one object only: they extrude, loop cut, scale, etc., and build the whole character. Why don't they design different objects separately (like hands separately, legs separately, body separately) and then join them together to make one object?
2) What does the texturing department need to see so that they don't return the model to the modelling department? I mean things like: the meshes (polygons) over the model's face must be quads, not triangles, while modelling a character.
What basics should I know? Is there a checklist, or a set of fundamentals I should review before modelling a character?
Please correct me if I am wrong, and answer both my questions. Thanks!
It may be common, but it definitely isn't mandatory to have a model as one solid mesh. Some models will have parts of the body underneath clothing removed to reduce the poly count. How the model is to be used will be a big factor in how you model it: for a single image it is easy to get away with multiple parts, while a character animated in a cartoony style could be stretched and distorted in ways that reveal holes in a model made of multiple pieces. When working in a team, there may be rules in place determining whether a solid or multi-part model is considered acceptable.
An example of an animated model made from multiple parts is Sintel, the main character in the Sintel short animation.
There is nothing stopping you from making a library of separate body parts and joining them together when you make your model. Be aware that this can bring complications: if you model an arm with 12 verts and then make your hand with 15, you have to fiddle around to merge them together.
You will also find some extra freedom to work with multiple body parts during the sculpting phase as you are creating a high density mesh that is used as a template to model a clean mesh over. This step is called retopology.
It is more likely that the rigging department will send a model back for fixing than the texturing department. When adding a rig and deforming the mesh in different ways, any parts that deform badly will be revealed and need fixing.
[...] (like hands separately, legs separately, body separately) and then join them together to make one object [...]
Some modelers I know do precisely this: they block in the design using broad primitive shapes, slice in some edge loops and add broad details, then merge everything together, sculpt it a bit further with high-res sculpting tools, and finally retopologize everything.
The main modelers I know who do this, however, model in a way that tries to adhere as closely as possible to the concept artist's illustration. They're not creating their own models from scratch; instead they are given top/front/back/side illustrations of a character, for example, and are just trying to match them as closely as possible.
When you start modeling everything in small pieces, it helps to have that concept illustration, since you can otherwise get lost in the topology, and fusing organic meshes together can be difficult to do cleanly.
[...] why don't they design different objects separately? [...]
Again, they sometimes do, but one of the appeals of creating organic meshes while keeping them seamless the entire time is that you can focus on how edge loops propagate across the entire model. It helps to know that the base of a finger is a hexagon, for example, when figuring out how to cleanly propagate and terminate the edge loops for a hand, and likewise to have a strategy for the hand to cleanly propagate and terminate edge loops as it joins into the forearm.
It can be hard to get the topology to match up cleanly if you designed everything in small pieces and then had to figure out how to merge it all together. Polygonal modeling is very topology-oriented. It tends to require as much thinking about the wireframe and edge flows as it does the shape of the model, since it needs to be a certain way for everything to subdivide cleanly and smoothly and animate predictably with subdivision surfaces.
I used to work with developers who took one glance at the topology-dominated workflow of polygonal modeling and immediately wanted to seek alternatives, like voxel sculpting. With voxels you could potentially model everything in pieces and fuse it all together in a nice, smooth, organic way without thinking about topology whatsoever.
However, that loses sight of the key appeal of polygonal meshes. Their wire flow forms a control lattice with a very finite number of control points for the artist to animate and move around to predictably control the shape of their model. You immediately lose that with a voxel representation -- so while voxels free the artist from thinking about how the topology works and how the wireframe flows through the model, they also lose all the control benefits of having it. So if people use voxel sculpting, they often end up meticulously retopologizing everything at the end anyway, to gain back the level of coarse and predictable control they have with polygonal meshes.
[...] I mean things like: the meshes (polygons) over the model's face must be quads, not triangles, while modelling a character [...]
This is all in the context of subdivision surfaces, the most popular of which are variants of Catmull-Clark. Those favor quads for the most predictable subdivision. It's much easier for the artist to predict how everything will look and deform if they favor, as much as possible, uniform grids of quadrangles wrapped around their model, with 4-valence vertices and every polygon having 4 points. Only where they need to "join" these quad grids together might they create some funky topology: a 5-valence vertex here, a 3-valence vertex there, a 5-sided polygon here, a triangle there -- but those cases tend to deform a bit unpredictably (or at least unintuitively), so artists try to avoid them as much as possible.
When artists model polygonal meshes in this way, they are not just trying to create a statue with a nice shape. If that's all they wanted, they'd save themselves a lot of grief by avoiding individual vertices/edges/polygons in the first place and using something like Sculptris. Instead they are designing not only shapes but also a control lattice: a wire flow and a set of control points they can easily move around in the future to get predictable behavior out of their control cage. With how they design the topology, they're basically designing controls, almost an "interactive GUI/rig", for themselves.
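As a quick illustration of checking a model for this kind of subdiv-friendly topology, here is a minimal sketch assuming Blender's Python API; it simply counts non-quad faces on the active object:

```python
import bpy

# Count non-quad faces on the active mesh object: a quick check of how
# "subdiv-friendly" a model is before handing it off.
mesh = bpy.context.active_object.data
tris = sum(1 for p in mesh.polygons if len(p.vertices) == 3)
ngons = sum(1 for p in mesh.polygons if len(p.vertices) > 4)
print(f"{tris} triangles, {ngons} ngons out of {len(mesh.polygons)} faces")
```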
2) What does the texturing department need to see so that they don't return the model to the modelling department?
Generally, how a mesh is modeled shouldn't directly affect the texture department's work much at all if they're working with UV maps and painting textures over them (at that point it doesn't really matter whether a model has clean wire flows, since all the texture artists do is paint images over the 2D UV map or directly onto the 3D model).
However, if the modeler does the UV mapping, then regardless of whether they use quad meshes and clean wire flows, a poor UV mapping will make the resulting textures look distorted. So the UV maps need to be made well, with minimal distortion, though that's usually easy to do automatically these days.
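For example, a minimal sketch of one automatic unwrap using Blender's Python API (parameter values are illustrative; in recent Blender versions angle_limit is in radians):

```python
import bpy

# Assumes the mesh object is active. Smart UV Project is one of
# Blender's automatic unwraps that keeps distortion low.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project(angle_limit=1.15, island_margin=0.02)  # ~66 degrees
bpy.ops.object.mode_set(mode='OBJECT')
```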
The other exception is if the department doesn't use UV maps and instead uses, say, PTex from Disney. PTex really favors quads. In the original paper at least, it only worked with quads.

Tweaking Heightmap Generation For Hexagon Grids

Currently I'm working on a little project, just for a bit of fun. It is a C++ WinAPI application using OpenGL.
I hope it will turn into an RTS game played on a hexagon grid, and once I get the basic game engine done, I have plans to expand it further.
At the moment my application consists of a VBO that holds vertex and heightmap information. The heightmap is generated using a midpoint displacement algorithm (diamond-square).
In order to implement a hexagon grid I went with the idea explained here. It shifts down odd rows of a normal grid to allow relatively easy rendering of hexagons without too many further complications (I hope).
After a few days it is beginning to come together. I've added mouse picking, which is implemented by rendering each hex in the grid in a unique colour, and then sampling the given mouse position within this FBO to identify the ID of the selected cell (visible in the top right of the screenshot below).
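For readers unfamiliar with colour-ID picking, a minimal sketch of the ID-to-colour round trip (plain Python for illustration, though the project itself is C++/OpenGL):

```python
def id_to_rgb(cell_id):
    # Pack a cell ID into an 8-bit-per-channel colour for the picking pass.
    return (cell_id & 0xFF, (cell_id >> 8) & 0xFF, (cell_id >> 16) & 0xFF)

def rgb_to_id(r, g, b):
    # Recover the cell ID from the colour sampled under the mouse.
    return r | (g << 8) | (b << 16)

assert rgb_to_id(*id_to_rgb(123456)) == 123456  # round-trips exactly
```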
In the next stage of my project I would like to look at generating more 'playable' terrains. To me this means that the shape of each hexagon should be more regular than those seen in the image above.
So finally coming to my point, is there:
A way of smoothing or adjusting the vertices in my current method that would bring all points of a hexagon onto one plane (coplanar)? EDIT: For anyone looking for information on how to make points coplanar, here is a great explanation.
A better approach to procedural terrain generation that would allow for better control of this sort of thing?
A way to represent my vertex information differently that allows for this?
To be clear, I am not trying to achieve a flat hex grid with raised edges or platforms (as seen below).
I would like all the geometry to join and lead into the next bit.
I hope to achieve something similar to what I have now (relatively nice undulating hills and terrain) but with more controllable plateaus. This gives me the flexibility of cordoning off areas (unplayable tiles) later on, where I can add higher-detail meshes if needed.
Any feedback is welcome; I'm using this as a learning exercise, so please, all comments are appreciated!
It depends on what you actually want and what you mean by "more controlled".
Do you want to be able to say "there will be a mountain at coordinates [11, -127] with radius 20"? The complexity of this depends on how far you want to go. If you just want mountains, then radial gradients are enough (just add the gradient values to the noise values). But if you want more complex shapes, you are in for a treat.
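A minimal sketch of the radial-gradient idea (plain Python; the function name, linear falloff, and sample coordinates are illustrative):

```python
import math

def add_mountain(height, cx, cy, radius, peak=1.0):
    """Add a radial gradient ("a mountain at (cx, cy) with this radius")
    on top of an existing noise heightmap, using a linear falloff."""
    for y, row in enumerate(height):
        for x in range(len(row)):
            d = math.hypot(x - cx, y - cy)
            if d < radius:
                row[x] += peak * (1.0 - d / radius)
    return height

terrain = [[0.0] * 64 for _ in range(64)]  # stand-in for the noise map
add_mountain(terrain, 11, 20, 20)          # mirrors the "[11, -127], radius 20" idea
```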
I explore this idea in great depth in my project (please note that the published version is just a prototype, which is currently undergoing a major redesign; it is completely usable as a map generator, though).
Another way is to make the generation much more procedural: you specify a sequence of mathematical functions which you apply to the terrain. Even a simple value transformation can get you very far.
All of these methods should work just fine for a hex grid. If artefacts occur because of the odd-row shift, you could interpolate the odd rows instead (just calculate the height value for the vertex from the two vertices between which it is located, with a simple linear interpolation formula, as sketched below).
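A minimal sketch of that interpolation (plain Python; a row-major 2D height grid is assumed, and the names are illustrative):

```python
def odd_row_height(height, r, c):
    """Height for a vertex on a shifted (odd) row: it sits midway between
    columns c and c + 1 of the underlying grid, so linearly interpolate
    between the two samples with t = 0.5."""
    return 0.5 * (height[r][c] + height[r][c + 1])
```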
Consider a function which maps the purple line onto the blue curve: it emphasizes low-lying heights as well as very high ones, but makes the transition between them steeper (this example is just a cosine function; making the curve less smooth would make the transformation more prominent).
You could also use only the bottom half of the curve, making peaks sharper and low-lying areas flatter (and thus more playable).
The "sharpness" of the curve can easily be modulated with a power (making the effect much more dramatic) or a square root (decreasing the effect).
Implementing this is actually extremely simple (especially if you use the cosine function): just apply the function to each pixel in the map. If the function isn't mathematically trivial, lookup tables work just fine (with cubic interpolation between the table values; linear interpolation creates artefacts).
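A minimal sketch of such a per-pixel remap (plain Python; the exact curve and the power parameter are illustrative, not the author's exact function):

```python
import math

def remap(h, power=1.0):
    """Cosine S-curve on h in [0, 1]: flattens the low and high ends and
    steepens the middle. power > 1 exaggerates the effect; a fractional
    power (e.g. 0.5, i.e. a square root) softens it."""
    curved = 0.5 - 0.5 * math.cos(math.pi * h)
    return curved ** power

heightmap = [[0.0, 0.25, 0.5], [0.75, 1.0, 0.4]]              # stand-in data
heightmap = [[remap(h, power=2.0) for h in row] for row in heightmap]
```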
Several more simple methods of "gamification" of random noise terrain can be found in this paper: "Realtime Synthesis of Eroded Fractal Terrain for Use in Computer Games".
Good luck with your project!

iOS: Generate image from non-image data (Godus-like landscape)

So upon seeing images from Godus, I was wondering how to generate a simple, non-interactive 2D image with different colors for different heights, or layers of heights, like in the picture below.
I was just thinking in terms of generating the basic layers of colors for the topography, without the houses, trees, objects and units. I wasn't thinking of creating a graphics engine to solve this, but of a simple way to generate a flat image on the screen.
The question is two-fold:
1. What kind of data could be used for this sort of generation? I was thinking maybe ASCII art, which is easy to create and modify to quickly change the topography, but it would be difficult to provide height information.
2. What existing frameworks, classes, methods or methodologies could be used to solve the generation once the data is ready?
Godus:
ASCII art (northern Europe, with ! for Norway, # for Sweden, $ for Finland and % for Russia):
(Taken from the MapBox docs: http://mapbox.com/developers/utfgrid/#map_data_as_ascii_art)
If you want to create a simple 2D, contoured image, I would try the following:
Create some height data. I'd just use a grey-scale image for that, rather than ASCII. You can author basic height-maps in MS Paint, or anything similar.
Smooth the data. For example, apply a blur, or increase the resolution using a smooth filter.
Consider clamping all height data below a certain point - this represents a water level, if you want that.
Quantise the data. The more you quantise, the fewer but more obvious the contours.
Apply false colouring via a palette lookup. For example: low-lying areas blue for water, then yellow for sand, green for grass, brown for earth, grey for rock, and white for snow.
The important parts are the enlarging/smoothing filter, which gives your contours more interesting shapes, and the quantisation, which actually creates the contours themselves (sketched in code below).
You can play with the stages of this. For example you could introduce some noise to the terrain, to make it look more natural if your source data is very clean. Or you could increase the smoothing if you want everything very rounded.
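A minimal sketch of the clamp/quantise/false-colour stages (plain Python; the water level and palette values are illustrative):

```python
def contour_colour(h, water_level=0.3):
    """Clamp, quantise, and false-colour a height value in [0, 1]."""
    if h <= water_level:                                  # clamp: sea level
        return "blue"
    land = ["yellow", "green", "brown", "grey", "white"]  # sand .. snow
    t = (h - water_level) / (1.0 - water_level)           # rescale land range
    return land[min(int(t * len(land)), len(land) - 1)]   # quantise to bands

print([contour_colour(h) for h in (0.1, 0.35, 0.5, 0.7, 0.85, 0.99)])
# -> ['blue', 'yellow', 'green', 'brown', 'grey', 'white']
```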
If you want to use ASCII, you could just generate a bitmap directly from it, which wouldn't be tricky. The ASCII you use as an example, though, is split up by country rather than by terrain, so the false-colouring and contouring would probably do the wrong thing. You could still use it as input to a simple terrain generator, perhaps with a couple of chars denoting where you want land, sea, mountains, etc.
Here's a very basic example I knocked up; it's just an application of the technique I suggested. I didn't use any frameworks or libs, just a few simple image-processing functions and a height-map of Europe I found:

Smoothing data received from CoreLocation

I'm trying to develop an app which allows you to walk around, and where you walked will be drawn on a map. I have this all working fine, but I'm finding that even with a reasonably accurate GPS location the points still jump around a bit. When drawn on a map this has the effect of creating a squiggly or zig-zag line.
I'm looking for suggestions/strategies on how to smooth the data, so that the line drawn on the map is more of a smooth best fit, rather than an accurate point to point drawing.
There are many different types of smoothing algorithms you could apply to the data (for a few starting points, see this Wikipedia article). The only way to know for sure which is/are suitable for your application is to implement and test them.
Simple or weighted moving averages are fairly common (taking the last n samples and averaging them), but they have the problem of lagging behind the data. Another common choice for filtering out signal noise is a low-pass filter, which attenuates small, rapid (noisy) movements while passing through larger, slower ones. Apple has some filtering code along these lines in their AccelerometerGraph sample.
I'd suggest trying those out first, as they're easy to implement, before looking at the more complex ones.
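For instance, a minimal sketch of a simple moving average over the last n locations (plain Python for illustration; an iOS app would feed it the coordinates from its CLLocation updates):

```python
from collections import deque

class MovingAverage:
    """Average of the last n (lat, lon) samples. Smooths GPS jitter at
    the cost of lagging behind the true path, as noted above."""
    def __init__(self, n=5):
        self.window = deque(maxlen=n)

    def add(self, lat, lon):
        self.window.append((lat, lon))
        k = len(self.window)
        return (sum(p[0] for p in self.window) / k,
                sum(p[1] for p in self.window) / k)

smoother = MovingAverage(n=5)
for raw in [(51.5001, -0.1200), (51.5003, -0.1203), (51.4999, -0.1199)]:
    print(smoother.add(*raw))  # draw the smoothed point instead of the raw one
```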