Cocos3D - background shown through meshes - Blender

I imported the .pod file created from Blender and the blue background is shown through the eyelash and eyebrow meshes. Does anyone know why I'm encountering this?
WITHOUT the additional material (looks normal except at the root of the hair).
WITH a new green material added to her left shoulder, the eyebrows and eyelashes start showing the background.

This issue is caused by the order in which the nodes are being rendered in your scene.
In the first model, the hair is drawn first, then the skin, then the eyebrows and eyelashes. In the second model, the hair, eyebrows and eyelashes are all drawn before the skin. By the time the skin under the hair or eyelashes is drawn, the depth buffer indicates that something closer to the camera has already been drawn, and the engine doesn't bother rendering those skin pixels. But because the eyelashes, eyebrows and hair all contain transparency, we end up looking right through them onto the backdrop.
This use of a depth buffer is key to all 3D rendering. It's how the engine knows not to render pixels that are visually occluded by another object; otherwise, all we'd ever see would be the last object rendered. However, when rendering overlapping objects that contain transparency, it's important to get the rendering order correct, so that more distant objects sitting behind closer transparent objects are rendered first.
In Cocos3D, there are several tools available for ordering your transparent objects for rendering:
1. The first, and primary, tool is the drawingSequencer managed by the CC3Scene. You can configure several different types of drawing sequencers. The default sequencer is smart enough to render all opaque objects first, then render the objects that contain transparency in decreasing order of distance from the camera (farther objects first); a rough sketch of this ordering follows the list. This works best for most scenes, particularly where objects move around and can pass in front of each other unpredictably. Unfortunately, in your custom CC3Scene initialization code (which you sent me per the question comments), you replaced the default drawing sequencer with one that does not sequence transparent objects based on distance. If you remove that change, everything works properly.
2. Objects that are not explicitly sequenced by distance (as in point 1 above) are rendered in the order in which they are added to the scene. You can therefore also define rendering order by ensuring that the objects are added to your scene in the order in which you want them rendered. This works well for static models, such as your first character (if you change it to add the hair after the skin).
3. CC3Node also has a zOrder property, which lets you override the rendering order explicitly, so that objects with larger zOrder values are rendered before those with smaller zOrder values. This is useful when you have a static model whose components cannot be added in rendering order, or to temporarily override the rendering order of two transparent objects that might be passing in front of each other. The zOrder property only takes effect with a drawingSequencer that makes use of it (the default drawing sequencer does).
4. Finally, you can temporarily turn off depth testing or masking when rendering particular nodes, by setting the shouldDisableDepthTest and shouldDisableDepthMask properties to YES on those nodes.
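To illustrate point 1: this is not Cocos3D's actual code (the Node struct and drawNode callback here are made up for the sketch), but it shows the ordering the default drawing sequencer applies, in plain C:

#include <stdlib.h>

/* Hypothetical node record for the sketch: 'distance' is the node's
   distance from the camera, recomputed each frame before sorting. */
typedef struct {
    float distance;
} Node;

/* Sort transparent nodes so farther nodes come first (descending). */
static int farthest_first(const void *a, const void *b) {
    float da = ((const Node *)a)->distance;
    float db = ((const Node *)b)->distance;
    return (da < db) - (da > db);
}

void render_scene(Node *opaque, size_t nOpaque,
                  Node *transparent, size_t nTransparent,
                  void (*drawNode)(const Node *)) {
    /* Opaque nodes can go in any order; the depth buffer sorts them out. */
    for (size_t i = 0; i < nOpaque; ++i)
        drawNode(&opaque[i]);
    /* Transparent nodes go back-to-front, so whatever shows through them
       has already been written to the color buffer. */
    qsort(transparent, nTransparent, sizeof(Node), farthest_first);
    for (size_t i = 0; i < nTransparent; ++i)
        drawNode(&transparent[i]);
}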


Vulkan update descriptor every frame

I want to render my scene to a texture and then use that texture in a shader, so I created a framebuffer using an image view and recorded a command buffer for it. I successfully submitted and executed the command buffer on the GPU, but the image sampled through the descriptor for that image view is black. I'm creating the descriptor from the image view before the rendering loop. Is it black because I create it before anything is rendered to the framebuffer? If so, will I have to update the descriptor every frame? Will I have to create a new descriptor from the image view every frame, or is there another way to do this?
I have read another thread with this title. Please don't mark this as a duplicate, because that thread is about textures and this is about a texture from an image view.
Thanks.
#IAS0601 I will answer the questions from Your comment through an answer, as it allows for much longer text and much better formatting. I hope this also answers Your original question, but You don't have to treat it as the answer. As I wrote, I'm not sure what You are asking about.
1) In practically all cases, the GPU accesses images through image views. They specify additional parameters that define how an image is accessed (for example, which part of the image is accessed), but it is still the original image that gets accessed. An image view, as the name suggests, is just a view, a list of access parameters. It doesn't have any memory bound to it and it doesn't contain any data (apart from the parameters specified during image view creation).
So when You create a framebuffer and render into it, You render into the original images, or, to be more specific, into those parts of the original images that were specified in the image views. For example, say You have a 2D texture with 3 array layers. You create a 2D image view for the middle (second) layer. Then You use this image view during framebuffer creation. Now, when You render into this framebuffer, You are in fact rendering into the second layer of the original 2D texture array.
Another thing - when You later access the same image, and when You use the same image view, You still access the original image. If You rendered something into the image, You will get the updated data (provided You have done everything correctly, like performing the appropriate synchronization operations and layout transitions if necessary). I hope this is what You mean by updating an image view.
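To make that concrete, here is a minimal C sketch of the example above (hypothetical handles such as device, image, renderPass, width and height; error checking omitted): an image view that selects only the second layer of a 2D array image, used as the single attachment of a framebuffer.

/* 'image' is assumed to be a 2D image with 3 array layers; 'renderPass'
   is assumed compatible with one color attachment of this format. */
VkImageViewCreateInfo viewInfo = {
    .sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
    .image    = image,
    .viewType = VK_IMAGE_VIEW_TYPE_2D,
    .format   = VK_FORMAT_R8G8B8A8_UNORM,
    .subresourceRange = {
        .aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT,
        .baseMipLevel   = 0,
        .levelCount     = 1,
        .baseArrayLayer = 1,   /* the middle (second) layer */
        .layerCount     = 1,
    },
};
VkImageView layerView;
vkCreateImageView(device, &viewInfo, NULL, &layerView);

/* Rendering through this framebuffer writes into layer 1 of 'image'. */
VkFramebufferCreateInfo fbInfo = {
    .sType           = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,
    .renderPass      = renderPass,
    .attachmentCount = 1,
    .pAttachments    = &layerView,
    .width           = width,
    .height          = height,
    .layers          = 1,
};
VkFramebuffer framebuffer;
vkCreateFramebuffer(device, &fbInfo, NULL, &framebuffer);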
2) I'm not sure what You mean by updating descriptor set. In Vulkan when we update a descriptor set, this means that we specify handles of Vulkan resources that should be used through given descriptor set.
If I understand You correctly - You want to render something into an image. You create an image view for that image and provide that image view during framebuffer creation. Then You render something into that framebuffer. Now You want to read data from that image. You have two options. If You only want to access the single sample location associated with the fragment shader's location, You can do this through an input attachment in the next subpass of the same render pass. But this way You can only perform operations that don't require access to multiple texels, for example a color correction.
But if You want to do something more advanced, like blurring or shadow mapping, where You need access to several texels, You must end the render pass and start another one. In this second render pass, You can read data from the original image through a descriptor set. It doesn't matter when this descriptor set was created and updated (when the handle of the image view was specified). As long as You don't change the handles of the resources - meaning You don't create a new image or a new image view - You can use the same descriptor set, and You will access the data rendered in the first render pass.
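A minimal C sketch of that last point (descriptorSet, sampler, layerView, cmd and pipelineLayout are assumed to exist already): the descriptor set is written once and simply re-bound every frame.

/* Written once, e.g. at initialization time. */
VkDescriptorImageInfo imageInfo = {
    .sampler     = sampler,
    .imageView   = layerView,   /* the same view used as the attachment */
    .imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
};
VkWriteDescriptorSet write = {
    .sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET,
    .dstSet          = descriptorSet,
    .dstBinding      = 0,
    .descriptorCount = 1,
    .descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
    .pImageInfo      = &imageInfo,
};
vkUpdateDescriptorSets(device, 1, &write, 0, NULL);

/* Every frame: render pass 1 writes the image, render pass 2 samples it.
   No further vkUpdateDescriptorSets call is needed, only binding the set. */
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
                        pipelineLayout, 0, 1, &descriptorSet, 0, NULL);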
If You have problems accessing the data - for example (as You wrote) You get only black colors - this suggests You didn't set everything up correctly: the render pass load or store ops are incorrect, the initial and final layouts are incorrect, or synchronization isn't performed correctly. Unfortunately, without access to Your project, we can't be sure what is wrong.
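For example, one common cause of black results is a render pass that doesn't keep the attachment's contents or leaves it in a layout that can't be sampled. Here is a hedged sketch of an attachment description suited to being sampled in a later render pass (the format is just an example; a subpass dependency or pipeline barrier between the write and the read is still required, as noted above):

/* Color attachment that is cleared, kept after the pass, and left in a
   layout that the second render pass can sample from. */
VkAttachmentDescription colorAttachment = {
    .format         = VK_FORMAT_R8G8B8A8_UNORM,
    .samples        = VK_SAMPLE_COUNT_1_BIT,
    .loadOp         = VK_ATTACHMENT_LOAD_OP_CLEAR,
    .storeOp        = VK_ATTACHMENT_STORE_OP_STORE,     /* keep the result */
    .stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
    .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
    .initialLayout  = VK_IMAGE_LAYOUT_UNDEFINED,
    .finalLayout    = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
};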

Vulkan: Framebuffer larger than Image dimensions

This question primarily relates to the dimension parameters (width, height, and layers) in the structure VkFramebufferCreateInfo.
The actual question:
In the case that one or more of the VkImageViews, used in creating a VkFrameBuffer, has dimensions that are larger than those specified in the VkFramebufferCreateInfo used to create the VkFrameBuffer, how does one control which part of that VkImageView is used during a render pass instance?
Alternatively worded question:
I am basically asking in the case that the image is larger (not the same dimensions) than the framebuffer, what defines which part of the image is used (read/write)?
Some Details:
The specification states this is a valid situation (I have seen many people state the attachments used by a framebuffer must match the dimensions of the framebuffer itself, but I can't find support for this in the specification):
Each element of pAttachments must have dimensions at least as large as the corresponding framebuffer dimension.
I want to be clear, that I understand that if I just wanted to draw to part of an image I can use a framebuffer that has the same dimensions as the image, and use viewports and scissors. But scissors and viewports are defined relative to the framebuffer's (0,0) as far as I can tell from the spec, although it is not clear to me.
I'm asking this question to help my understand of the framebuffer as I am certain I have misunderstood something. I feel it may well be the case that (x,y) in framebuffer space, is always (x,y) in image space (As in there is no way of controlling which part of the VkImageView is used).
I have been stuck on this for quite some time (~4 days), have tried both the Vulkan Cookbook and the Vulkan Programming Guide, read most of the specification, and searched online.
If the question needs clarification, please ask. I just didn't want to make it overly long.
Thank you for reading.
There isn't a way to control which part of the image is used by the framebuffer when the framebuffer is smaller than the image. The framebuffer origin always maps to the image origin.
Allowing attachments to be larger than the framebuffer is only meant to allow reusing memory/images/views for several purposes in a frame, even when they don't all need the same dimensions. The typical example is reusing a depth buffer (but not its contents) for several different render passes. You could accomplish the same thing with memory aliasing, but engines that have to support multiple APIs might find it easier to do it this way.
The way to control where you render to is by controlling the viewport. That is, you specify a framebuffer size that's actually big enough to cover the total area of the target images that you may want to render to, and use the viewport transform/scissoring to render to a specific area of those images.
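A rough C sketch of that approach (cmd is an assumed command buffer handle; viewport and scissor are assumed to be dynamic states of the bound pipeline): the framebuffer covers the whole image, and rendering is restricted to a 512x512 region whose origin is at (256, 128).

VkViewport viewport = {
    .x        = 256.0f,
    .y        = 128.0f,
    .width    = 512.0f,
    .height   = 512.0f,
    .minDepth = 0.0f,
    .maxDepth = 1.0f,
};
VkRect2D scissor = {
    .offset = { 256, 128 },
    .extent = { 512, 512 },
};
vkCmdSetViewport(cmd, 0, 1, &viewport);
vkCmdSetScissor(cmd, 0, 1, &scissor);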
There is no post-viewport transformation that goes from framebuffer space to image space. That would be decidedly redundant, since we already have a post-NDC transform. There's no point in having two of them.
Sure, VkRenderPassBeginInfo has the renderArea object, but that is more of a promise from the user rather than a guarantee for the system:
The application must ensure (using scissor if necessary) that all rendering is contained within the render area, otherwise the pixels outside of the render area become undefined and shader side effects may occur for fragments outside the render area.
So basically, the implementation doesn't do anything with renderArea. It doesn't set up a transformation or anything; you're just promising that no framebuffer pixels outside of that area will be impacted.
In any case, there's really little point in providing a framebuffer size that's smaller than the image sizes. That sort of thing is more the purview of renderArea than of the framebuffer specification.

three.js: how to control rendering order

I am using three.js.
How can I control the rendering order? Let's say I have three plane geometries, and want to render them in a specific order regardless of their spatial position.
thanks
You can set
renderer.sortObjects = false;
and the objects will be rendered in the order they were added to the scene.
Alternatively, you can leave sortObjects as true, the default, and specify for each object a value for object.renderOrder.
For more detail, see Transparent objects in Threejs
Another thing you can do is use the approach described here: How to change the zOrder of object with Threejs?
three.js r.71
For three.js r70 and higher, renderDepth has been removed.
Using object.renderDepth worked in my case. I had a glass case and bubbles inside that were transparent. The bubbles were getting lost at certain angles.
So, setting their renderDepth to a high number and playing with the other elements' depths in the scene fixed the issue. Hooking up a dat.gui control to the renderDepth property made it very easy to tweak what needed to be at what depth to make the scene work.
So, in my fishScene, I have gravel, a tank and bubbles. I hooked the gravel mesh up to a dat.gui control, and within a few seconds I had the depth I needed.
this.gui.add(this.fishScene.gravel, "renderDepth", 0, 200);
I had a bunch of objects cloned in a for loop at random x and y positions, with obj.z++ so they would line up in a row. Adding obj.renderOrder++ in the loop solved my issue.

OpenglES - Transparent texture blocking objects behind

I have some quads that have a texture with transparency, and some objects behind these quads. However, the objects behind them don't seem to show through. I know it's something to do with GL_BLEND, but I can't manage to make the objects behind show.
I've tried with:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
but still not working. What I basically have is:
// I paint the object
draw_ac3d_file([actualObject getCurrentObject3d]);
// I paint the quad
paintQuadWithAlphaTexture();
There are two common scenarios that create this situation, and it is difficult to tell which one your program is hitting, if either.
Draw Order
First, make sure you are drawing your objects in the correct order. You must draw from back-to-front or else the models will not be blended properly.
http://www.opengl.org/wiki/Transparency_Sorting
Note: as Arne Bergene Fossaa pointed out, front-to-back is the proper way to render objects that are not transparent, from a performance standpoint. Because of this, most renderers first draw all the models that have no transparency front-to-back, and then go back and render all models that have transparency back-to-front. This is covered in most 3D graphics texts out there.
(Diagrams: back-to-front vs. front-to-back draw order; image credit to Geoff Leach at RMIT University.)
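Applied to the code in the question, that ordering looks roughly like this (a sketch, not a drop-in fix; disabling depth writes for the blended pass is a common extra precaution rather than something your code is known to need):

// Opaque geometry first, with normal depth testing and depth writes.
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
draw_ac3d_file([actualObject getCurrentObject3d]);
// Transparent quads last, blended over what is already in the color buffer.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);   // keep depth testing, but don't write depth
paintQuadWithAlphaTexture();
glDepthMask(GL_TRUE);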
Lighting
The second most common issue is improper use of lighting. Normally in this case if you were using the fixed-function pipeline, people would advise you to simply call glDisable(GL_LIGHTING);
Now this should work (if it is the cause at all) but what if you want lighting? Then you would either have to employ custom shaders or set up proper material settings for the models.
A discussion of using the material properties can be found at http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=285889

How to generate graphs using integer values on the iPhone

I want to show a graph/bar chart on the iPhone. How do I do this without custom APIs?
You may want to investigate the Core Plot project [code.google.com]. Core Plot was the subject of this year's scientific coding project at WWDC and is already pretty usable for some cases. From its inception, Core Plot was intended for both OS X and iPhone use. The source distribution (there hasn't been a binary release yet) comes with both OS X and iPhone example applications, and there's info on the project wiki for using it as a library in an iPhone app. Here's an example of its current plotting capabilities.
Write your own. It's not easy, I'm in the process of doing the same thing right now. Here's how I'm doing it:
First, ignore any desire you may have to try using a UIScrollView if you want to allow zooming. It's totally not worth it.
Second, create something like a GraphElement protocol. I have a hierarchy that looks something like this:
GraphElement
GraphPathElement
GraphDataElement
GraphDataSupplierElement
GraphElement contains the basic necessary methods for a graph element, including how to draw, a maximum width (for zooming in), whether a point is within that element (for touches) and the standard touchBegan, touchMoved, and touchEnded functions.
GraphPathElement contains a CGPath, a line color and width, a fill color and a drawing mode. Whenever it's prompted to draw, it simply adds the path to the context, sets the colors and line width, and draws the path with the given drawing mode.
GraphDataElement, as a subclass of GraphPathElement, takes in a set of data in x-y coordinates, a graph type (bar or line), a frame, and a bounds. The frame is the actual size of the created output CGPath. The bounds is the size of the data in input coordinates. Essentially, it lets you scale the data to the screen size.
It creates a graph by first calculating an affine transform to transform the bounds to the frame, then it loops through each point and adds it as data to a path, applying that transform to the point before adding it. How it adds data depends on the type.
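As a rough sketch of that transform step (not the author's actual code; Core Graphics is a C API, and 'bounds' and 'frame' are the rects described above):

#include <CoreGraphics/CoreGraphics.h>

/* Map data-space 'bounds' onto the pixel-space 'frame'. */
CGAffineTransform boundsToFrame(CGRect bounds, CGRect frame) {
    CGFloat sx = frame.size.width  / bounds.size.width;
    CGFloat sy = frame.size.height / bounds.size.height;
    /* Shift data so bounds.origin lands at frame.origin, then scale. */
    CGAffineTransform t = CGAffineTransformMakeTranslation(frame.origin.x,
                                                           frame.origin.y);
    t = CGAffineTransformScale(t, sx, sy);
    t = CGAffineTransformTranslate(t, -bounds.origin.x, -bounds.origin.y);
    return t;
}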
If it's a bar graph, it creates a rectangle of width 0, origin at (x,frame.size.height-y), and height=y. Then it "insets" the graph by -3 pixels horizontally, and adds that to the path.
If it's a line graph, it's much simpler. It just moves to the first point, then for each other point, it adds a line to that point, adds a circle in a rect around that point, then moves back to that point to go on to the next point.
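Roughly, those two cases could look like this with the CGPath C API (a sketch with made-up variable names: 'path' is a CGMutablePathRef, 'transform' is the bounds-to-frame transform, and 'points'/'pointCount' hold the data points):

/* Bar graph: a zero-width rect at (x, frame.size.height - y) with height y,
   widened by insetting -3 points horizontally, added through the transform. */
CGRect bar = CGRectMake(x, frame.size.height - y, 0.0, y);
bar = CGRectInset(bar, -3.0, 0.0);
CGPathAddRect(path, &transform, bar);

/* Line graph: move to the first point, then add a line and a small circle
   marker for each subsequent point. */
CGPathMoveToPoint(path, &transform, points[0].x, points[0].y);
for (size_t i = 1; i < pointCount; ++i) {
    CGPathAddLineToPoint(path, &transform, points[i].x, points[i].y);
    CGRect marker = CGRectMake(points[i].x - 2.0, points[i].y - 2.0, 4.0, 4.0);
    CGPathAddEllipseInRect(path, &transform, marker);
    CGPathMoveToPoint(path, &transform, points[i].x, points[i].y);
}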
GraphDataSupplierElement is the interface to my database that actually contains all the data. It determines what kind of graph it should be, formats the data into the required type for GraphDataElement, and passes it on, with the color to use for that particular graph.
For me, the x-axis is time, and is represented as NSTimeIntervals. The GraphDataSupplierElement contains a minDate and maxDate so that a GraphDateElement can draw the x-axis labels as required.
Once all this is done, you need to create the actual graph. You can go about it several ways. One option is to keep all the elements in an NSArray and whenever drawRect: is called, loop through each element and draw it. Another option is to create a CALayer for each element, and use the GraphPathElement as the CALayer's delegate. Or you could make GraphPathElement extend from CALayer directly. It's up to you on this one. I haven't gotten as far as trying CALayers yet, I'm still stuck in the simple NSArray stage. I may move to CALayers at some point, once I'm satisfied with how everything looks.
So, all in all, the idea is that you create the graph as one or many CGPaths beforehand, and just draw that when you need to draw the graph, rather than trying to actually parse data whenever you get a drawRect: call.
Scaling can be done by keeping the source data in your GraphDataElement and just changing the frame, so that scaling the bounds to the frame creates a CGPath wider than the screen, or whatever your needs are. I basically re-implemented my own pinch-zoom for my Graph UIView subclass, which only scales horizontally by changing its transform. On completion, I get the current frame, reset the transform to identity, set the frame back to the saved value, and set the frame of all of the GraphElements to the new frame as well, to make them scale. Then I just call [self setNeedsDisplay] to draw.
Anyway, that's a bit ramble-ish, but it's an outline of how I made it happen. If you have more specific questions, feel free to comment.