Tail chain parented wrong after using skin modifier - blender

I was using a Skin modifier to make a character. I added a tail, but the chain of bones was parented wrong: the tail influences the entire body.
Default Pose
Editing Base of tail
Unfortunately, I can't change the parent myself through
Properties > Bone Properties > Relations > Parent, because that field is disabled.
How do I fix this, if possible?

Related

Dynamically show another graph within a node in Cytoscape.js

I'm using Cytoscape.js to visualize a large, nested data structure. Showing the whole graph directly makes it hard to interpret, so I'm only showing the top-level nodes at first. Then, when a node is clicked, I want to show the subgraph within the node.
My first attempt was to just add the subgraph as child nodes. The child nodes initially have visibility: hidden, but are shown once their parent node has been selected (the parent node itself also changes its appearance a bit when this happens, to indicate it has received the focus). This works, somewhat. However, the top-level nodes are drawn very large, obviously since they now contain their hidden child nodes.
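For reference, a rough sketch of that first approach in Cytoscape.js (the element ids, the 'expanded' class, and the styling are placeholders, not taken from the actual data):

    import cytoscape from 'cytoscape';

    // Sketch of the "hidden children, shown on select" approach.
    const cy = cytoscape({
      container: document.getElementById('cy') as HTMLElement,
      elements: [
        { data: { id: 'top' } },
        { data: { id: 'childA', parent: 'top' } },
        { data: { id: 'childB', parent: 'top' } },
        { data: { id: 'inner', source: 'childA', target: 'childB' } },
      ],
      style: [
        // Child nodes start out hidden...
        { selector: 'node:child', style: { visibility: 'hidden' } },
        // ...and become visible once their compound parent gets the 'expanded' class.
        { selector: 'node.expanded > node', style: { visibility: 'visible' } },
        // The parent changes appearance a bit to show it has the focus.
        { selector: 'node.expanded', style: { 'border-width': 3 } },
      ],
    });

    // Toggle the subgraph when a top-level (compound) node is tapped.
    cy.on('tap', 'node:parent', evt => {
      evt.target.toggleClass('expanded');
    });

    // Note: hidden children still take up space inside the compound parent,
    // which is why the top-level nodes end up being drawn so large.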
So my alternative solution would be to dynamically add the child nodes at the moment when their parent receives focus. However, this would probably require some additional restrictions on Cytoscape, as I don't want the parent node to grow or move when this happens. So basically the node bounds of the parent become the canvas in which the child graph should be drawn.
My two questions are then (1) whether Cytoscape can actually enforce such constraints, and (2) whether this is really the best solution for this particular problem.

Blender: black patterns on my rendered model

When I render my model, these black lines/patterns appear all over it.
I tried recalculating normals, removing doubles, and checking for extra faces, but everything seems alright.
Other models that I created from the same base model are OK, so I really don't know where the problem comes from. In the picture they also have the same material, so it's not that either.
The two rendered models side by side:
Anyone have an idea of how I can fix this?
Here's the Blender project: https://drive.google.com/file/d/1lpDNymtcCWtBQTj1qoA3sKUWtzl_fJsV/view?usp=sharing
For some reason, you have a double Mirror modifier on the second mesh; remove one of them.

Design pattern for child calling method in parent

I am currently working on my biggest project and I am having trouble figuring out how to structure my code. I'm looking for some guidance.
I have two objects, a Tile and a Container. Each Tile has a 2D coordinate, and all Tiles are children of the Container. The Container has methods that return the tile at a location, switch tiles, add tiles, and remove tiles.
Now, when you click on a tile it disappears; that was easy because it was self-contained. The problem comes when I create different types of tiles that inherit from the base Tile. Each type of tile does a different action when you click on it: some destroy surrounding tiles, some switch with other tiles, and others add new tiles. For simplicity we will call these three subclasses Tile-destroy, Tile-swap, and Tile-add.
My problem is: when I click on these tiles, how can they act on other tiles in the Container? Should I just call functions in the parent class, or is there a better way to do this? I am also having trouble #including the Tile in the Container as well as the other way around. I feel like it's not a proper pattern.
I have it set up so that when a click takes place, the Container handles it, checks the type of the clicked tile, and acts from there with a large else-if chain; however, this makes it very difficult to add new tile types. Ideally, all the information about what happens when you click on a tile would be contained within each tile subclass.
Any ideas?
I can suggest the simplest design:
Your Container acts as a game controller.
Each tile has a Parent property which refers to the Container.
When you click on a tile, it sends a Command to the Container (for example, DestroyTile(x, y) or AddTile(x, y)).
The Container handles these commands and destroys, adds, or swaps tiles.
If you want a really good, more decoupled design, you can also create a handler for each operation type (DestroyTileHandler, AddTileHandler, and so on). In the Container, you then just pass incoming commands to the appropriate handler. You also need to pass a context object (like a Field with the tiles) to the handler. This allows you to add and modify operations without even changing the Container code.
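A rough sketch of that command/handler shape (in TypeScript here just to keep it short; the same structure carries over to C++, and all the names are placeholders, not from the question):

    // Commands the tiles can emit; the Container never inspects tile subclasses.
    interface TileCommand {
      kind: 'destroy' | 'add' | 'swap';
      x: number;
      y: number;
    }

    // Context object the handlers operate on (the "Field with tiles").
    interface Field {
      removeTile(x: number, y: number): void;
      addTile(x: number, y: number): void;
      swapTiles(x1: number, y1: number, x2: number, y2: number): void;
    }

    type CommandHandler = (cmd: TileCommand, field: Field) => void;

    class Container {
      private handlers = new Map<TileCommand['kind'], CommandHandler>();

      constructor(private field: Field) {}

      // New operations are added by registering a handler, not by editing Container.
      registerHandler(kind: TileCommand['kind'], handler: CommandHandler): void {
        this.handlers.set(kind, handler);
      }

      // Tiles call this instead of reaching into the Container's internals.
      dispatch(cmd: TileCommand): void {
        this.handlers.get(cmd.kind)?.(cmd, this.field);
      }
    }

    // Each tile subclass only knows which command it sends when clicked.
    abstract class Tile {
      constructor(protected container: Container, public x: number, public y: number) {}
      abstract onClick(): void;
    }

    class DestroyTile extends Tile {
      onClick(): void {
        this.container.dispatch({ kind: 'destroy', x: this.x, y: this.y });
      }
    }

    // Example handler: destroy the clicked tile and its four neighbours.
    // container.registerHandler('destroy', (cmd, field) => {
    //   for (const [dx, dy] of [[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]]) {
    //     field.removeTile(cmd.x + dx, cmd.y + dy);
    //   }
    // });

The big else-if chain disappears: each tile subclass only decides which command it sends, and each handler decides what that command does to the field.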
See related patterns: Command, Observer
Feel free to ask questions and good luck!

Cocos3D - background shown through meshes

I imported the .pod file created from Blender and the blue background is shown through the eyelash and eyebrow meshes. Does anyone know why I'm encountering this?
WITHOUT the additional material (looking normal except for the root of the hair).
WITH a new green material added to her left shoulder, the eyebrows and eyelashes began showing the background.
This issue is caused by the order in which the nodes are being rendered in your scene.
In the first model, the hair is drawn first, then the skin, then the eyebrows and eyelashes. In the second model, the hair, eyebrows and eyelashes are all drawn before the skin. By the time the skin under the hair or eyelashes is drawn, the depth buffer indicates that something closer to the camera has already been drawn, and the engine doesn't bother rendering those skin pixels. But because the eyelashes, eyebrows and hair all contain transparency, we end up looking right through them onto the backdrop.
This use of a depth buffer is key to all 3D rendering. It's how the engine knows not to render pixels that are visually occluded by another object; otherwise, all we'd ever see would be the last object rendered. However, when rendering overlapping objects that contain transparency, it's important to get the rendering order correct, so that more distant objects behind closer transparent objects are rendered first.
In Cocos3D, there are several tools available for you to order your transparent objects for rendering:
The first, and primary, tool is the drawingSequencer managed by the CC3Scene. You can configure several different types of drawing sequencers. The default sequencer is smart enough to render all opaque objects first, then render the objects that contain transparency in decreasing order of distance from the camera (rendering farther objects first); a rough sketch of this ordering appears below, after this list of tools. This works best for most scenes, and in particular where objects are moving around and can move in front of each other unpredictably. Unfortunately, in your custom CC3Scene initialization code (which you sent me per the question comments), you replaced the default drawing sequencer with one that does not sequence transparent objects based on distance. If you remove that change, everything works properly.
Objects that are not explicitly sequenced by distance (as in part 1 above) are rendered in the order in which they are added to the scene. You can therefore also define rendering order by ensuring that the objects are added to your scene in the order in which you want them rendered. This can work well for static models, such as your first character (if you change it to add the hair after the skin).
CC3Node also has a zOrder property, which allows you to override the rendering order explicitly, so that objects with larger zOrder value are rendered before those with smaller zOrder values. This is useful when you have a static model whose components cannot be added in rendering order, or to temporarily override the rendering order of two transparent objects that might be passing in front of each other. Using the zOrder property does depend on using a drawingSequencer that makes use of it (the default drawing sequencer does).
Finally, you can temporarily turn off depth testing or masking when rendering particular nodes, by setting the shouldDisableDepthTest and shouldDisableDepthMask properties to YES on those nodes.
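Conceptually, the ordering the default sequencer applies looks roughly like this (a generic sketch in TypeScript with placeholder types, not the Cocos3D API):

    interface DrawableNode {
      isOpaque: boolean;
      distanceFromCamera: number;
      draw(): void;
    }

    function renderInDepthSafeOrder(nodes: DrawableNode[]): void {
      // Opaque nodes can go first in any order; the depth buffer resolves them.
      nodes.filter(n => n.isOpaque).forEach(n => n.draw());

      // Transparent nodes go back-to-front, so closer transparent surfaces
      // blend over whatever has already been drawn behind them.
      nodes
        .filter(n => !n.isOpaque)
        .sort((a, b) => b.distanceFromCamera - a.distanceFromCamera)
        .forEach(n => n.draw());
    }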

Store Half-Edge structure in CoreData

I'm building an app that uses a Half-Edge structure to store a mesh of 2D triangles.
The mesh is calculated every time a user taps the screen and adds a point.
I want to be able to save the mesh into CoreData: not just the points, but the whole mesh, so it won't have to be recalculated when restored.
My HalfEdge structure is like this (a drawing is composed of a set of triangles):
Triangle:
- firstHalfEdge (actually, any half-edge of the triangle)
HalfEdge:
- lastVertex (the Vertex in which the Edge ends)
- next (the next half-edge in the triangle)
- oposite (the half-edge opposite to this one, which is in another triangle)
- triangle (the triangle which this edge belongs to)
Vertex:
- halfEdge (the edge which the vertex belongs to)
- point (2d coordinates of the vertex)
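For illustration, the same structure written out as plain types (TypeScript here only as shorthand; the app itself stores these as Core Data entities):

    interface Point { x: number; y: number; }

    interface Vertex {
      halfEdge: HalfEdge;        // a half-edge this vertex belongs to
      point: Point;              // 2D coordinates
    }

    interface HalfEdge {
      lastVertex: Vertex;        // vertex this half-edge ends at
      next: HalfEdge;            // next half-edge around the same triangle
      oposite: HalfEdge | null;  // twin half-edge in the neighbouring triangle (spelling as in the model)
      triangle: Triangle;        // triangle this half-edge belongs to
    }

    interface Triangle {
      firstHalfEdge: HalfEdge;   // any one of the triangle's three half-edges
    }

    // Walking a triangle's three vertices via the next pointers:
    function verticesOf(t: Triangle): Vertex[] {
      const e0 = t.firstHalfEdge;
      return [e0.lastVertex, e0.next.lastVertex, e0.next.next.lastVertex];
    }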
And this is my CoreData scheme:
As you can see, I added a previous attribute to HalfEdge (although it is not needed) to avoid getting a warning for a non-inverse relationship.
But I keep getting more warnings:
Vertex.point should have an inverse. (No problem with this one; I'll just add another attribute.)
Vertex.halfEdge should have an inverse. (This refers to the HalfEdge for which this vertex is the first vertex, so lastVertex wouldn't do as an inverse.)
HalfEdge.lastVertex should have an inverse. (See above.)
HalfEdge.triangle should have an inverse. (Triangle.firstHalfEdge refers to just one edge, any of them, but all three edges should refer to the triangle.)
Triangle.firstHalfEdge should have an inverse. (See above.)
So, what should I do? Should I try to add those inverse relationships somehow (though I think it would make my structure calculations more complex), or should I ignore those warnings?
By the way, if anybody is curious, this is what I'm doing: http://www.youtube.com/watch?v=c2Eg7DXW7-A&feature=feedu
You can disable the warnings by setting MOMC_NO_INVERSE_RELATIONSHIP_WARNINGS to YES in the project configuration editor (category "Data Model Version Compiler – Warnings" in Xcode 4.1).
Still, there are things to consider before doing so.