Store Half-Edge structure in CoreData - objective-c

I'm building an app that uses a Half-Edge structure to store a mesh of 2D triangles.
The mesh is recalculated every time the user taps the screen and adds a point.
I want to be able to save the mesh into CoreData: not just the points, but the whole mesh, so it won't have to be recalculated when restored.
My HalfEdge structure is like this (a drawing is composed of a set of triangles); a small code sketch follows the outline:
Triangle:
- firstHalfEdge (actually, any half-edge of the triangle)
HalfEdge:
- lastVertex (the Vertex in which the Edge ends)
- next (next halfedge in the triangle)
- opposite (the half-edge opposite to this one, which is in another triangle)
- triangle (the triangle which this edge belongs to)
Vertex:
- halfEdge (the edge which the vertex belongs to)
- point (2d coordinates of the vertex)
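For reference, here's a minimal Objective-C sketch of that structure as plain objects. The class and property names simply mirror the outline above (they are not the actual Core Data entities):
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

@class HalfEdge;
@class Triangle;

@interface Vertex : NSObject
@property (nonatomic, weak) HalfEdge *halfEdge;   // a half-edge associated with this vertex
@property (nonatomic) CGPoint point;              // 2D coordinates of the vertex
@end

@interface Triangle : NSObject
@property (nonatomic, weak) HalfEdge *firstHalfEdge;   // any one of the triangle's three half-edges
@end

@interface HalfEdge : NSObject
@property (nonatomic, strong) Vertex *lastVertex;   // the vertex in which this half-edge ends
@property (nonatomic, strong) HalfEdge *next;       // next half-edge within the same triangle
@property (nonatomic, weak) HalfEdge *opposite;     // twin half-edge in the neighbouring triangle
@property (nonatomic, weak) Triangle *triangle;     // the triangle this half-edge belongs to
@end

@implementation Vertex @end
@implementation Triangle @end
@implementation HalfEdge @end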
And this is my CoreData schema:
As you can see, I added a previous relationship to HalfEdge (although it is not needed) to avoid getting a warning for a non-inverse relationship.
But I keep getting more warnings:
Vertex.point should have an inverse. (No problem with this one; I'll just add the inverse relationship.)
Vertex.halfEdge should have an inverse. (This refers to the HalfEdge for which this vertex is the first vertex, so lastVertex wouldn't do as an inverse.)
HalfEdge.lastVertex should have an inverse. (See above.)
HalfEdge.triangle should have an inverse. (Triangle.firstHalfEdge refers to just one edge, any of the three, but all three edges should refer to the triangle.)
Triangle.firstHalfEdge should have an inverse. (See above.)
So, what should I do? Should I try to satisfy those inverse relationships somehow (though I suspect it would make my structure calculation more complex), or should I just ignore the warnings?
By the way, if anybody is curious, this is what I'm doing: http://www.youtube.com/watch?v=c2Eg7DXW7-A&feature=feedu

You can disable the warnings by setting MOMC_NO_INVERSE_RELATIONSHIP_WARNINGS to YES in the project configuration editor (under "Data Model Version Compiler – Warnings" in Xcode 4.1; see screenshot).
Still, there are things to consider before doing so.

Find out when SCNNode disappears from SCNScene

Does a particular method get called when an SCNNode is removed from the scene?
-(void)removeFromParentNode;
does not get called on the SCNNode object.
To set the scene: I am using gravity to pull an object down. When the object goes too far down, it automatically disappears and the draw calls and polygon counts decrease. So the SCNNode is definitely being destroyed, but is there a way I could hook into the destruction?
Other answers covered this pretty well already, but to go a bit further:
First, your node isn't being removed from the scene — its content is passing outside the camera's viewing frustum, which means SceneKit knows it doesn't need to issue draw calls to the GPU to render it. If you enumerate the child nodes of the scene (or of whatever parent contains the nodes you're talking about), you'll see that they're still there. You lose some of the rendering performance cost because SceneKit doesn't need to issue draw calls for stuff that it knows won't be visible in the frame.
(As noted in Tanguy's answer, this may be because of your zFar setting. Or it may not; it depends on which direction the nodes are falling out of the camera's view.)
But if you keep adding nodes and letting physics drop them off the screen, you'll accumulate a pre-render performance cost, as SceneKit has to walk the scene graph every frame and figure out which nodes it'll need to issue draw calls for. This cost is pretty small for each node, but it could eventually add up to something you don't want to deal with.
And since you want to have something happen when the node falls out of frame anyway, you just need to find a good opportunity to both deal with that and clean up the disappearing node.
So where to do that? Well, you have a few options. As has been noted, you could put something into the render loop to check the visibility of every node on every frame:
- (void)renderer:(id<SCNSceneRenderer>)renderer didSimulatePhysicsAtTime:(NSTimeInterval)time {
    if (![renderer isNodeInsideFrustum:myNode withPointOfView:renderer.pointOfView]) {
        // it's gone; remove it from the scene
    }
}
But that's a somewhat expensive check to be running on every frame (remember, you're targeting 30 or 60 fps here). A better way might be to let the physics system help you:
Create a node with an SCNBox geometry that's big enough to "catch" everything that falls off the screen.
Give that node a static physics body, and set up the category and collision bit masks so that your falling nodes will collide with it.
Position that node just outside of the viewing frustum so that your falling objects hit it soon after they fall out of view (a setup sketch for these first three steps follows the delegate code below).
Implement a contact delegate method to destroy the falling nodes:
- (void)physicsWorld:(SCNPhysicsWorld *)world didBeginContact:(SCNPhysicsContact *)contact {
    if (/* sort out which node is which */) {
        [fallingNode removeFromParentNode];
        // ... and do whatever else you want to do when it falls offscreen.
    }
}
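For reference, here's a rough sketch of the catch-box setup from the first three steps above. The node names, sizes, positions and bit-mask values are illustrative assumptions, not values from your scene:
// Arbitrary example category masks.
NSUInteger fallingCategory  = 1 << 0;
NSUInteger catchBoxCategory = 1 << 1;

// A large static box placed below the viewing frustum to "catch" falling nodes.
SCNBox *catchGeometry = [SCNBox boxWithWidth:1000 height:10 length:1000 chamferRadius:0];
SCNNode *catchBox = [SCNNode nodeWithGeometry:catchGeometry];
catchBox.position = SCNVector3Make(0, -200, 0);        // just outside the visible area
catchBox.physicsBody = [SCNPhysicsBody staticBody];
catchBox.physicsBody.categoryBitMask = catchBoxCategory;
[scene.rootNode addChildNode:catchBox];

// Falling nodes need to collide with (and report contacts against) the catch box.
fallingNode.physicsBody.categoryBitMask    = fallingCategory;
fallingNode.physicsBody.collisionBitMask  |= catchBoxCategory;
fallingNode.physicsBody.contactTestBitMask = catchBoxCategory;   // on newer SDKs this drives the contact callbacks

scene.physicsWorld.contactDelegate = self;   // so physicsWorld:didBeginContact: gets called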
Your object will disappear if it goes further than the zFar property of your active camera (the default value is 100.0).
As said by David Rönnqvist in the comments, your node is not destroyed and you can still modify its properties.
If you want to hook into your node's geometry disappearing, you can calculate the distance between your active camera and the node every frame in your rendering loop and trigger an action when it exceeds 100.
If you want to render your node at a greater distance, you can simply increase the zFar property of your camera.
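To illustrate that per-frame check, here's a minimal sketch, assuming your class is the view's rendering delegate and that cameraNode and myNode are references you already hold (the names are placeholders):
- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time {
    SCNVector3 cam  = cameraNode.presentationNode.position;
    SCNVector3 node = myNode.presentationNode.position;
    float dx = node.x - cam.x, dy = node.y - cam.y, dz = node.z - cam.z;
    float distance = sqrtf(dx * dx + dy * dy + dz * dz);
    if (distance > cameraNode.camera.zFar) {
        // The geometry is now past the far clipping plane; react here.
    }
}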

Cocos3D - background shown through meshes

I imported the .pod file created from Blender and the blue background is shown through the eyelash and eyebrow meshes. Does anyone know why I'm encountering this?
WITHOUT the additional material, it looks normal except at the root of the hair.
WITH a new green material added to her left shoulder, the eyebrows and eyelashes began showing the background.
This issue is caused by the order in which the nodes are being rendered in your scene.
In the first model, the hair is drawn first, then the skin, then the eyebrows and eyelashes. In the second model, the hair, eyebrows and eyelashes are all drawn before the skin. By the time the skin under the hair or eyelashes is drawn, the depth buffer indicates that something closer to the camera has already been drawn, and the engine doesn't bother rendering those skin pixels. But because the eyelashes, eyebrows and hair all contain transparency, we end up looking right through them onto the backdrop.
This use of a depth buffer is key to all 3D rendering. It's how the engine knows not to render pixels that are visually occluded by another object; otherwise, all we'd ever see would be the last object rendered. However, when rendering overlapping objects that contain transparency, it's important to get the rendering order correct, so that more distant objects behind closer transparent objects are rendered first.
In Cocos3D, there are several tools available for ordering your transparent objects for rendering:
The first, and primary tool, is the drawingSequencer that is managed by the CC3Scene. You can configure several different types of drawing sequencers. The default sequencer is smart enough to render all opaque objects first, then to render the objects that contain transparency in decreasing order of distance from the camera (rendering farther objects first). This works best for most scenes, and in particular where objects are moving around and can move in front of each other unpredictably. Unfortunately, in your custom CC3Scene initialization code (which you sent me per the question comments), you replaced the default drawing sequencer with one that does not sequence transparent objects based on distance. If you remove that change, everything works properly.
Objects that are not explicitly sequenced by distance (as in part 1 above) are rendered in the order in which they are added to the scene. You can therefore also define rendering order by ensuring that the objects are added to your scene in the order in which you want them rendered. This can work well for static models, such as your first character (if you change it to add the hair after the skin).
CC3Node also has a zOrder property, which allows you to override the rendering order explicitly, so that objects with larger zOrder value are rendered before those with smaller zOrder values. This is useful when you have a static model whose components cannot be added in rendering order, or to temporarily override the rendering order of two transparent objects that might be passing in front of each other. Using the zOrder property does depend on using a drawingSequencer that makes use of it (the default drawing sequencer does).
Finally, you can temporarily turn off depth testing or masking when rendering particular nodes, by setting the shouldDisableDepthTest and shouldDisableDepthMask properties to YES on those nodes (a short sketch of options 3 and 4 follows this list).
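As a small illustration of options 3 and 4, here's a sketch; fartherNode and nearerNode are hypothetical CC3Node references to two transparent nodes, with fartherNode being the one farther from the camera:
// Option 3: explicit ordering. A larger zOrder renders earlier, so the
// farther transparent node gets the larger value.
fartherNode.zOrder = 20;   // drawn first
nearerNode.zOrder  = 10;   // drawn on top of it

// Option 4: stop a transparent node from writing to the depth buffer,
// so it can never occlude whatever is drawn after it.
nearerNode.shouldDisableDepthMask = YES;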

Beveling a path/shape in Core Graphics

I'm trying to bevel paths in Core Graphics. Has anyone done this already for arbitrary shapes, and if so, are they willing to share code?
I've included my implementation below. I use three variables to determine the bevel: CGFloat bevelSize, UIColor highlightColor, UIColor shadow. Note that the angle of the light source is always 135 degrees. I haven't finished implementing this yet, but here's essentially what I'm trying to do, split into two parts. Part one, generate focal points:
I find the bisectors of the angles between each pair of adjacent lines in the path (see the bisector sketch after this list).
For arcs, the bisector is the line perpendicular to the line created by the two end points of the arc, originating from the mid-point. This should take care of the majority of situations in which an arc is used. I do not take the bisector of an arc and a line. The arc bisector should work fine in those cases.
I then calculate focal points based on the intersection of each adjacent bisectors.
If a focal point is within the shape it's used, otherwise it's discarded.
The purpose of generating the focal points is to 'shrink' the shape proportionally.
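As a point of reference, here is a small sketch of the bisector computation for a corner formed by two straight segments (plain C, callable from Objective-C; the function name is mine):
#include <math.h>
#import <CoreGraphics/CoreGraphics.h>

// Unit-length interior bisector at vertex b of the corner a-b-c.
static CGPoint BisectorAtCorner(CGPoint a, CGPoint b, CGPoint c) {
    // Unit vectors pointing from the corner towards each neighbour.
    CGFloat lenBA = hypot(a.x - b.x, a.y - b.y);
    CGFloat lenBC = hypot(c.x - b.x, c.y - b.y);
    CGPoint u = CGPointMake((a.x - b.x) / lenBA, (a.y - b.y) / lenBA);
    CGPoint v = CGPointMake((c.x - b.x) / lenBC, (c.y - b.y) / lenBC);
    // The sum of the two unit vectors points along the angle bisector.
    CGPoint s = CGPointMake(u.x + v.x, u.y + v.y);
    CGFloat len = hypot(s.x, s.y);
    if (len < 1e-6) {
        // Degenerate (nearly collinear) corner: fall back to the perpendicular of one side.
        return CGPointMake(-u.y, u.x);
    }
    return CGPointMake(s.x / len, s.y / len);
}
Intersecting the bisectors of two adjacent corners then yields one of the focal points described above.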
The second part is a little more complicated. I essentially create each side/segment of the bevelled shape. I do this by drawing 'in' (by the bevelSize) each point of the original shape along the line that extends from the nearest focal point to the point in question. When I have two consecutive 'bevelPoints', I create a UIBezierPath that extends from the bevelPoints to the original points and back to the bevelPoints (note, this includes arcs). This creates a side/segment I can use to fill. On straight sides, I simply fill with either the shadow or highlight color, depending on the angle of the side. For arcs, I determine the radian 'arc'. If that arc contains a transition angle (M_PI_4 or M_PI + M_PI_4), I fill it with a gradient (from shadow to highlight or highlight to shadow, whichever is appropriate). Otherwise I fill it with a solid color.
Update
I've split out my answer (see below) into a separate blog post. I'm no longer using the implementation details you see above, but I'm keeping them here for reference. I hope this helps anybody else looking to use Core Graphics.
So I did finally manage to write a routine to bevel arbitrary shapes in Core Graphics. It ended up being a lot of work, a lot more than I originally anticipated, but it's been a fun project. I've posted an explanation and the code to perform the bevel on my blog. Just note that I didn't create a full class (or set of classes) for this. The code was integrated into a much larger class I use to do all my Core Graphics drawing. However, all the code you need to bevel most arbitrary shapes is there.
UPDATE
I've rewritten the code as straight C and moved it into a separate file. It no longer depends on any other instance variables, so the beveling function can be called from within any context.
I explain the code and process I used to bevel here: Beveling Shapes In Core Graphics
Here is the code: Github : CGPathBevel.
The code isn't perfect: I'm open to suggestions/corrections/better ways of doing things.

How do I rotate an OpenGL view relative to the center of the view as opposed to the center of the object being displayed?

I'm working on a fork of Pleasant3D.
When rotating an object being displayed, the object always rotates around the same point relative to itself, even if that point is not at the center of the view (e.g. because the user has panned to move the object in the view).
I would like to change this so that the view always rotates the object around the point at the center of the view as it appears to the user instead of the center of the object.
Here is the core of the current code that rotates the object around its center (slightly simplified) (from here):
glLoadIdentity();
// midPlatform is the offset to reach the "middle" of the object (or, more
// specifically, the platform on which the object sits) in the x/y dimensions.
// This is the point around which the view is currently rotated.
Vector3 *midPlatform = [self.currentMachine calcMidBuildPlatform];
glTranslatef((GLfloat)cameraTranslateX - midPlatform.x,
             (GLfloat)cameraTranslateY - midPlatform.y,
             (GLfloat)cameraOffset);
// trackBallRotation and worldRotation come from trackball.h/c, which appears
// to be from an Apple OpenGL sample.
if (trackBallRotation[0] != 0.0f) {
    glRotatef(trackBallRotation[0], trackBallRotation[1], trackBallRotation[2], trackBallRotation[3]);
}
// accumulated world rotation via trackball
glRotatef(worldRotation[0], worldRotation[1], worldRotation[2], worldRotation[3]);
glTranslatef(midPlatform.x, midPlatform.y, 0.);
// Now draw object...
What transformations do I need to apply in what order to get the effect I desire?
Some of what I've tried so far
As I understand it this is what the current code does:
"OpenGL performs matrices multiplications in reverse order if multiple transforms are applied to a vertex" (from here). This means that the first transformation to be applied is actually the last one in the code above. It moves the center of the view (0,0) to the center of the object.
This point is then used as the center of rotation for the next two transformations (the rotations).
Finally, the midPlatform translation is done in reverse to move the center back to its original location, and the XY translations (panning) done by the user are applied. Here the "camera" is also moved away from the object to the proper location (indicated by cameraOffset).
This seems straightforward enough. So what I need to change is this: instead of translating the center of the view to the center of the object (midPlatform), I need to translate it to the current center of the view as seen by the user, right?
Unfortunately this is where the transformations start affecting each other in interesting ways and I am running into trouble.
I tried changing the code to this:
glLoadIdentity();
glTranslatef(0,
0,
(GLfloat)cameraOffset);
if (trackBallRotation[0] != 0.0f) {
glRotatef (trackBallRotation[0], trackBallRotation[1], trackBallRotation[2], trackBallRotation[3]);
}
// accumlated world rotation via trackball
glRotatef (worldRotation[0], worldRotation[1], worldRotation[2], worldRotation[3]);
glTranslatef(cameraTranslateX, cameraTranslateY, 0.);
In other words, I translate the center of the view to the previous center, rotate around that, and then apply the camera offset to move the camera away to the proper position. This makes the rotation behave exactly the way I want it to, but it introduces a new issue. Now any panning done by the user is relative to the object. For example if the object is rotated so that the camera is looking along the X axis end-on, if the user pans left to right the object appears to be moving closer/further from the user instead of left or right.
I think I understand why this is (the XY camera translations are applied before the rotation), and I think what I need to do is cancel out that translation after the rotation (to avoid the weird panning effect) and then apply another translation that is relative to the viewer (eye coordinate space) instead of the object (object coordinate space), but I'm not sure exactly how to do this.
I found what I think are some clues in the OpenGL FAQ (http://www.opengl.org/resources/faq/technical/transformations.htm), for example:
9.070 How do I transform my objects around a fixed coordinate system rather than the object's local coordinate system?
If you rotate an object around its Y-axis, you'll find that the X- and Z-axes rotate with the object. A subsequent rotation around one of these axes rotates around the newly transformed axis and not the original axis. It's often desirable to perform transformations in a fixed coordinate system rather than the object’s local coordinate system.
The root cause of the problem is that OpenGL matrix operations postmultiply onto the matrix stack, thus causing transformations to occur in object space. To affect screen space transformations, you need to premultiply. OpenGL doesn't provide a mode switch for the order of matrix multiplication, so you need to premultiply by hand. An application might implement this by retrieving the current matrix after each frame. The application multiplies new transformations for the next frame on top of an identity matrix and multiplies the accumulated current transformations (from the last frame) onto those transformations using glMultMatrix().
You need to be aware that retrieving the ModelView matrix once per frame might have a detrimental impact on your application’s performance. However, you need to benchmark this operation, because the performance will vary from one implementation to the next.
And
9.120 How do I find the coordinates of a vertex transformed only by the ModelView matrix?
It's often useful to obtain the eye coordinate space value of a vertex (i.e., the object space vertex transformed by the ModelView matrix). You can obtain this by retrieving the current ModelView matrix and performing simple vector / matrix multiplication.
But I'm not sure how to apply these in my situation.
You need to translate the "center of view" point to the origin, rotate, then invert that translation before applying the object's own transform. This is known as a change of basis in linear algebra.
This is much easier to work with if you have a proper 3D math library (I'm assuming you do have one), and it also helps you stay away from the deprecated fixed-pipeline APIs (more on that later).
Here's how I'd do it:
Find the transform for the center-of-view point in world coordinates (figure it out, then draw it to make sure it's correct, with the x, y and z axes too, since the axes are supposed to be correct w.r.t. the view). If you use the center-of-view point and the rotation (usually the inverse of the camera's rotation), this will be a transform from the world origin to the view center. Store this in a 4x4 matrix.
Apply the inverse of the above transform, so that the view center becomes the origin: glMultMatrixf(center_of_view_tf.inverse());
Rotate about this point however you want (glRotate())
Transform everything back to world space: glMultMatrixf(center_of_view_tf);
Apply the object's own world transform (glTranslate/glRotate or glMultMatrix) and draw it. (A sketch of this sequence follows.)
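Read as legacy fixed-function calls, that sequence might look roughly like the sketch below. This is my own reading of the steps rather than code from the question: because OpenGL postmultiplies, the calls appear in the reverse of the order in which they act on a vertex, and viewCenter stands in for the translation part of center_of_view_tf:
glLoadIdentity();
glTranslatef(0.0f, 0.0f, (GLfloat)cameraOffset);           // pull the camera back
glTranslatef(viewCenter.x, viewCenter.y, viewCenter.z);    // move the pivot back to its place (step 4)
glRotatef(worldRotation[0], worldRotation[1],
          worldRotation[2], worldRotation[3]);             // rotate about the origin (step 3)
glTranslatef(-viewCenter.x, -viewCenter.y, -viewCenter.z); // bring the view center to the origin (step 2)
// apply the object's own model transform here, then draw (step 5)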
About the fixed function pipeline
Back in the old days, there were separate transistors for transforming a vertex (or its texture coordinates), computing where light was in relation to it, applying lights (up to 8), and texturing fragments in many different ways. Simply put, glEnable() turned on fixed blocks of silicon that did some computation in the hardware graphics pipeline. As performance grew, die sizes shrank and people demanded more features, the amount of dedicated silicon grew too, and much of it went unused.
Eventually, it got so advanced that you could program it in rather obscene ways (register combiners, anyone?). Then it became feasible to actually upload a small assembler program for all vertex-level transforms. At that point it no longer made sense to keep a lot of silicon around that just did one thing (especially as those transistors could be used to make the programmable parts faster), so everything became programmable. If "fixed function" rendering was called for, the driver just converted that state (X lights, texture projections, etc.) to shader code and uploaded it as a vertex shader.
So, today, even fragment processing is programmable; there are just a lot of fixed-function options that are still used by tons and tons of OpenGL applications, while the silicon on the GPU just runs shaders (and lots of them, in parallel).
...
To make OpenGL more efficient, the drivers less bulky, and the hardware simpler and usable on mobile/console devices, and to take full advantage of the programmable hardware that OpenGL runs on these days, many functions in the API are now marked deprecated. They are not available in OpenGL ES 2.0 and beyond (mobile), and you won't get the best performance out of them even on desktop systems (where they will still be in the driver for ages to come, serving equally ancient code bases dating back to the dawn of accelerated 3D graphics).
The fixed-functionness mostly concerns how transforms/lighting/texturing etc. are done by "default" in OpenGL (i.e. glEnable(GL_LIGHTING)), instead of you specifying these ops in your custom shaders.
In the new, programmable OpenGL, transform matrices are just uniforms in the shader. Any rotate/translate/multiply/inverse (like the above) should be done by client code (your code) before being uploaded to OpenGL. (Using only glLoadMatrix is one way to start thinking about it, but instead of using gl_ModelViewProjectionMatrix and its ilk in your shader, use your own uniforms.)
It's a bit of a bother, since you have to implement quite a bit of what the GL driver used to do for you, but if you already have your own object list or scene graph with transforms, it's not that much work. (On the other hand, if you have a lot of glTranslate/glRotate calls scattered through your code, it might be...) As I said, a good 3D math library is indispensable here.
...
So, to change the above code to "programmable pipeline" style, you'd do all these matrix multiplications in your own code (instead of the GL driver doing it, still on the CPU) and then send the resulting matrix to OpenGL as a uniform before you activate the shaders and draw your object from VBOs.
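Sketched out, that might look something like the following. The mat4 type and its helper functions stand in for whatever 3D math library you use, and the uniform name, program, vbo and vertexCount are placeholders:
// Build the model matrix on the CPU: rotate about the view center, as above.
mat4 pivot = mat4_translation(viewCenter.x, viewCenter.y, viewCenter.z);
mat4 model = mat4_mul(pivot, mat4_mul(rotation, mat4_inverse(pivot)));
mat4 mvp   = mat4_mul(projection, mat4_mul(view, model));

// Upload it as a uniform instead of relying on gl_ModelViewProjectionMatrix.
glUseProgram(program);
GLint mvpLocation = glGetUniformLocation(program, "u_modelViewProjection");
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvp.elements);

// Bind the VBO and draw as usual (attribute/VAO setup omitted).
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);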
(Note that modern cards have no fixed-function silicon, just a lot of code in the driver to compile fixed-function rendering state into shaders that do the job. No wonder "classic" GL drivers are huge...)
...
Some info about this process is available at Tom's Hardware Guide and probably Google too.

Java 3d: Unable to get Shape3D to be affected by lights

I am attempting to get a custom Shape3D to be affected by a DirectionalLight in Java 3D, but nothing I do seems to work.
The shape's geometry is an IndexedQuadArray with the NORMALS flag set and the normals applied, ensuring the normal vectors are assigned to the correct vertices using indexed normals.
I have given the Appearance a Material (both with specified colors and shininess, and without)
I have also put the light on the same BranchGroup as the Shape, but it still does not work.
In fact, when I add the normals to the shape, the object appears to disappear; without them, it's flat shaded, so that all faces are the same shade.
I can only think that I am forgetting to include something ridiculously simple, or have done something wrong.
To test that the lights were actually working, I put a Sphere beside the Shape, and the sphere was affected and lit correctly, but the shape still wasn't. Both were on the same BranchGroup.
(A small oddity too: if I translate the sphere, it vanishes if I move it more than 31 units in any direction. My view is set about 700 back, as I'm dealing with objects up to 600 units wide.)
Edit: I found this in the official tutorials, which is probably related:
A visual object properly specified for shading (i.e., one with a Material object) in a live scene graph but outside the influencing bounds of all light source objects renders black.
The light's setInfluencingBounds() was not set correctly, so the shapes in the scene were not being included in the bounds.
This was corrected by setting a BoundingBox that encompasses the entire area and assigning it as the influencing bounds.