Renaming the bones on my skeletons messes up the mesh - blender

I'm working with several skeletons that all have identically named bones, because they are copy-pasted from the original. When I import them, together with their scenes, into Spark AR, Spark says the bones can't share the same names and deletes all the duplicates. So I went back to Blender and tried renaming the bones of my first character, whom I refer to as "captain", calling one of the bones "right upperarm captain1" for example. However, as soon as I rename more than one bone, the mesh is thrown out completely, so that it looks like the attached image. When I first applied the armature I always made sure the bones were weighted so that they attach to the mesh properly, but I suspect that renaming them means I have to reapply the weights? I'm not quite sure, so any advice would be appreciated.
I tried reapplying the automatic weights, but it didn't do anything noticeable. I also tried better naming conventions, but even leaving "captain" out entirely and just calling a bone something like "L_calfbone" does the same thing. The characters are only meant to stay in a static pose, so I'm not worried about how this affects animation, but I'm not keen on having to pose them all again from scratch.

Related

SimpleElastix 3D rigid registration fails to find same image

I've been trying to get a basic registration working with SimpleElastix for a while now, but I'm not able to make it work at all, regardless of the settings used. Note that I'm trying to compare SimpleElastix to other methods, so using another library is not an option.
Here's the task I'm currently trying to do:
Take a 3D ultrasound scan of a fetus (img_scan)
Mask a part of the scan, and apply a rigid transform (img_structure)
Use SimpleElastix to register img_structure to img_scan.
Since img_structure was created from img_scan, I know for certain that there is an ideal solution to the registration. However, unless img_structure is extremely close to its original position, SimpleElastix seems to barely do anything at all. For example, if I only translate img_structure along one axis, it works. But if I only rotate it 90 degrees, it fails, although their centres are basically the same. To clarify, by "fail" I mean it returns a slightly transformed img_structure, but nowhere near the actual solution.
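For reference, the masked-and-transformed test image from the steps above is built roughly like this (a simplified sketch; the file name, the threshold used for the mask, and the 90 degree rotation are just placeholders for the real segmentation and transform):
import math
import SimpleITK as sitk

# Load the original 3D ultrasound scan (path is a placeholder).
img_scan = sitk.ReadImage("fetus_scan.nii.gz")

# Keep only one structure from the scan (placeholder threshold mask;
# in practice this comes from a proper segmentation).
mask = sitk.BinaryThreshold(img_scan, lowerThreshold=100, upperThreshold=255)
img_masked = sitk.Mask(img_scan, mask)

# Apply a known rigid transform, e.g. a 90 degree rotation about the image centre.
centre = img_scan.TransformContinuousIndexToPhysicalPoint(
    [(s - 1) / 2.0 for s in img_scan.GetSize()])
rigid = sitk.Euler3DTransform()
rigid.SetCenter(centre)
rigid.SetRotation(0.0, 0.0, math.pi / 2)

# Resample the masked structure with that transform; this becomes img_structure.
img_structure = sitk.Resample(img_masked, img_scan, rigid, sitk.sitkLinear, 0.0)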
This is the basic code I've started with:
import SimpleITK as sitk

# fixed = img_scan, moving = img_structure from the steps above;
# the masks are binary images marking the structure in each volume.
elastixImageFilter = sitk.ElastixImageFilter()
elastixImageFilter.SetFixedImage(fixed)
elastixImageFilter.SetMovingImage(moving)
elastixImageFilter.SetMovingMask(mask_moving)
elastixImageFilter.SetFixedMask(mask_fixed)
elastixImageFilter.SetParameterMap(sitk.GetDefaultParameterMap("rigid"))
elastixImageFilter.Execute()
result = elastixImageFilter.GetResultImage()
Things I have tried (the parameter tweaks were applied as sketched after this list):
With/without masks (or a combination)
Different MaximumStepLength
Different ImageSampler
Affine instead of rigid
Different Optimizers
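For the sampler, step-length and optimizer variations, I edited the default parameter map before calling Execute(), roughly like this (the concrete values here are only examples):
pmap = sitk.GetDefaultParameterMap("rigid")          # or "affine"
# Parameter values in SimpleElastix are lists of strings.
pmap["MaximumStepLength"] = ["4.0"]
pmap["ImageSampler"] = ["RandomCoordinate"]
pmap["Optimizer"] = ["AdaptiveStochasticGradientDescent"]
elastixImageFilter.SetParameterMap(pmap)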
What could I be missing?

Calculate points inside the mesh of other instances with Geometry Nodes

I'm working on a procedural terrain generator using Geometry Nodes, and I want the option of placing buildings or other objects in a collection that the terrain uses as a reference for where not to put grass, pebbles and rocks. The issue is that I've only been able to do this using the faces of the objects in the collection, so when the objects are big enough, points still end up distributed inside them. Here's a capture:
I've scoured the internet for help, but since the Geometry Nodes overhaul in 3.0, most of the answers I can find use the old system, and I haven't found a way to adapt them, so I'm asking here because I've run out of ideas. Here's the current setup I use to build the selection that decides where not to put points for the grass:
I did try the Mesh Boolean approach, but it is too resource-heavy. For the buildings it's not a problem, but when I use it to keep grass from spawning inside the big rocks, it makes the whole node tree really heavy resource-wise.
Any help is appreciated; I've been fighting this "bug" for three days now and it's driving me crazy. Thanks!
After many searches, I've managed to find a solution that works! It uses the method explained in this tutorial: https://www.youtube.com/watch?v=tvb2aCeTANM
Basically, you create a raycast on top of the scene and use a Boolean Math node to detect whether it hits or not. In my case, for deleting the geometry of the mesh where the points are distributed, I used an "Or" operation. You can see the node setup here: New Node Setup
Hope this helps anyone else having the same problem :). Of course if anyone can think of a better solution, feel free to add it!
Just use a Raycast node together with a Position node to evaluate a dot product, and then add a Map Range node with the Clamp checkbox enabled.

SceneKit: Issues when hiding previously unhidden nodes

I am creating a 3-D game with a cave as the main environment. The cave is made of a large number of ring segments, one attached to the other, thus creating a currently small tunnel system.
If the player is inside the cave, only a small number of the segments are visible. I figure that hiding the non-visible segments could save a lot of GPU time, which I need for other objects like buildings or enemies.
So the first thing I try is to hide the entire cave and then unhide the visible segments by setting node.isHidden to true and false.
The particular nodes are found and accessed by their names: node.childNode(withName: "XYZ003", recursively: false)?.isHidden = true (or false).
It works up to the point where the segments are unhidden, but as soon as I try to hide a previously unhidden segment, the renderer crashes with EXC_BAD_ACCESS.
Hiding an already hidden segment (useless, of course, but it helps to understand the problem) is fine, and so is unhiding segments that are already unhidden.
Following a hint from another thread, I moved the routine into the renderer delegate, so the switching is not done at the wrong time but during the phase in which such changes are supposed to happen; this did not help.
As an alternative, I did the hiding (and unhiding) with SCNActions, but got the same result, which really puzzles me, as this would be kind of the "official" way to do it...
I also played around with the 'recursively' boolean, with the same outcome (works for unhide, crashes on isHidden = true).
Then I tried to change opacity or other properties of the nodes - which worked perfectly. On the other hand, trying to remove the nodes from the parent resulted in the mentioned crash as well.
I need this to work, because older hardware could never cope with several thousand nodes (trying this, the frame rate dropped to 10fps, even without enemies around). And newer hardware might break down once the enemies appear...
My thinking is that the pointer is somehow messed up by the first unhiding (hence the BAD_ACCESS error), so maybe an additional binding (often seen with SpriteKit routines) or another way to get the node pointer could be the solution. On the other hand, if the pointer were broken, why can I still access all the other properties? Maybe it's the subnodes that cause the problem - every one of the nodes has 20 subnodes, which are supposed to change visibility too.
Has anyone come across this behavior before? I could not find anything in my searches...
Hiding and unhiding nodes frequently is typically not a problem by itself. You can hide a main node and any sub-nodes of the main node will automatically hide themselves, so you shouldn't have to loop them individually.
I'm not an expert debugger and don't know your skill level, but BAD_ACCESS can mean that you tried to send a message to a block of memory that can't execute it, or that the app tried to dereference a corrupted pointer. Search "What Is EXC_BAD_ACCESS and How to Debug It" for a decent tutorial on some options for dealing with it.
I do my changes in the render delegate as well, but depending on the number of changes and how long they take, I sometimes use timers to control the amount of changes that can be made in a certain amount of time. That way, and after some adjustments, I'm pretty sure that I'm not bogging it down to a point where it just spirals out of control.
Structure can matter - personal preference, but I try to set up an array of classes that create the individual nodes (and sub-nodes) and therefore have direct access to them. That way I'm not iterating through the whole node structure or finding nodes by name. Sometimes a lot is going on before I really have to modify the node itself, so I can loop through my array of classes, check values, compare, etc. before taking action that involves the display. That also gives me a chance to remove particle systems, remove actions, set geometry = nil and update logic counters when I need to remove a node.
I'm sure opinions vary, but this has worked well for me. Once I standardized the structure, I just keep repeating the pattern.
Hope that helps

What is a proper way to separate data structure logic from its graphical representation?

It's more of a software design question than a strictly programming one, so I'll paste UML diagrams instead of code for everyone's convenience.
Language is Java, so variables are implied references.
I'm writing an app that helps edit a very simple data structure, which looks like this:
On my first attempt I put all the drawing-related and optimization code into the data structure itself, that is, every Node knew how to draw itself and kept a reference to one of the shared cached bitmaps. UML:
It was easy to add (fetch the corresponding bitmap and you're done) and remove (paint the background color over the previously mentioned bitmap). Performance-wise it was nice, but code-wise it was messy.
So on the next iteration I decided to split things up, but I may have gone too far, and things got messy yet again:
Here the data structure and its logic are completely separated, which is nice. I can easily load it from a file or manipulate it in some way before it needs to be drawn, but when it comes to drawing, things get uncomfortable.
The classic way would be to change the data and then call invalidate() on the drawing wrapper, but that's inefficient for many small changes. So to, say, delete one Tile, I'd have to either make the Drawn representation independent of the Data and call deleteTile() on both separately, or funnel all commands to the Data through the Drawing class. Things get even messier when I try to add different drawing methods via the Strategy pattern or some other way. The horror:
What is a clean, efficient way to organize the interactions between Model and View?
First, definitely decouple the app logic from the UI. Make a model for your schematic; that will also solve your trouble with unit testing the app model, as you already said. Then I would try the Observer pattern. But given that a schematic can have lots and lots of graphical components (your Tiles), I would change the usual setup of notifying every observer whenever something changes in the model to notifying only the corresponding GraphicalComponent (Tile) when its Component changes in the Model. Your UI asks the Model to do things and gets called back in the relevant places to update. This is automatic, with no duplicated calls, just the initial observer registration when the GraphicalComponent is created.
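Roughly, the idea looks like this (a minimal sketch with made-up names, shown in Python to keep it short; in Java the callback would simply live on a small listener interface that GraphicalComponent implements):
class Component:
    # Model side of one Tile: knows nothing about drawing.
    def __init__(self, tile_id):
        self.tile_id = tile_id
        self.value = None
        self._observer = None              # at most one graphical counterpart

    def set_observer(self, observer):
        self._observer = observer

    def set_value(self, value):
        self.value = value
        if self._observer is not None:
            self._observer.component_changed(self)   # notify only this tile's view

class GraphicalComponent:
    # View side of one Tile: registers itself once, on creation.
    def __init__(self, component):
        self.component = component
        component.set_observer(self)

    def component_changed(self, component):
        # Repaint just this tile, e.g. its cached bitmap.
        print("redraw tile", component.tile_id)

# Usage: the UI talks to the Model; only the matching view is called back.
tile_model = Component(tile_id=3)
tile_view = GraphicalComponent(tile_model)
tile_model.set_value(42)                   # redraws only tile 3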

Does it make sense to allow retrieval of data from OpenGL's context

I am trying to abstract some of OpenGL's concepts into an object-oriented style, wrapping elements like buffers, arrays, vertices etc. into objects that store their access id, data types, buffer sizes, used indices and so on, and provide further simplifications of their usage.
But now I am wondering: does anyone actually want to read back data that was once pushed to the GPU? Are functions like glGetBufferSubData ever actually used for anything other than debugging? The documentation of these functions on the official wiki isn't very elaborate, and I have never seen them in any tutorial.
The general concept in GL is that everything can be queried. Reading back data that you yourself put there should be avoided and is usually more expensive than keeping a local copy. However, there is also data which is generated by the GPU and which you might want to read back. Examples are of course framebuffer contents, textures you rendered into, or vertex data which you stored in a buffer via transform feedback. So yes, there are real use cases for things like glGetBufferSubData() (although I prefer buffer mappings in most situations).
Whether you need to support such operations is another matter entirely, and one which I think is off-topic here and primarily opinion-based. The problem with abstractions built without the intended use case in mind is that one tends to over-abstract things. YMMV.
I wrote a program to generate meshes using transform feedback, and needed to read the data in buffers to save the resulting mesh.
The transform feedback generated the data. It wasn't data that I originally pushed there.
So, yes.