Metal multisampling results in darkened textures (Objective-C)

So I'm trying to implement full-screen MSAA in my Metal app. I have it working, and when drawing solid-filled polygons the edges appear smooth, as expected. However, my textured polygons appear dark, and they get darker as I increase the number of samples. This suggests the shader is taking only one sample of the texture per fragment and blending it with n - 1 samples of black, which would make the result darker.
However, in my app I also have textures that I render to and then draw to the screen. These textures show up perfectly fine. I can't really see a difference between the two kinds of textures that would change the behavior of multisampling.
Anyway, if anyone could maybe give me any clues as to what's going on, I would greatly appreciate it. I'm pretty stumped on this one.
EDIT:
Here is how I am setting up all my pipeline state(s)
Here is how the texture pipeline state is set up specifically

I figured it out. The problem was that I hadn't set my stencil draw pipeline state to be multisampled. It was therefore reading the stencil buffer value for only 1 out of n samples, hence the darkened output. Works fine now.
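For reference, a minimal Objective-C sketch of what the fix amounts to, assuming a sample count of 4; the device, shader functions, and pixel formats are placeholders. The key point is that every pipeline state used in the multisampled pass, including the stencil-draw one, must have its sampleCount set to match the MSAA render target:

    // Hypothetical names: stencilVertexFunction, stencilFragmentFunction,
    // and device stand in for whatever the app actually uses.
    MTLRenderPipelineDescriptor *stencilDesc = [MTLRenderPipelineDescriptor new];
    stencilDesc.vertexFunction = stencilVertexFunction;
    stencilDesc.fragmentFunction = stencilFragmentFunction;
    stencilDesc.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;
    stencilDesc.depthAttachmentPixelFormat = MTLPixelFormatDepth32Float_Stencil8;
    stencilDesc.stencilAttachmentPixelFormat = MTLPixelFormatDepth32Float_Stencil8;

    // The fix: the stencil-draw pipeline must use the same sample count as
    // every other pipeline rendering into the multisampled target.
    stencilDesc.sampleCount = 4;

    NSError *error = nil;
    id<MTLRenderPipelineState> stencilPipeline =
        [device newRenderPipelineStateWithDescriptor:stencilDesc error:&error];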

Related

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to Figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, as highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
1. Calculating the point position on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity; I understand it has something to do with tan, but frankly, it is too late for my brain to function anymore. (See the sketch of this corner-projection approach after this question.)
2. Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get the result in a usable format I would need to project it into 2D coordinates again.
3. Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
What I am looking for:
An output that can be passed to a shader, telling it where to perform certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to use a texture instead. It could also be rewritten to use polygons, but I am trying to keep calculations in the shader to a minimum.
Some solutions I have tried before have partially worked, but this one must be robust. The camera interfacing with the 3D object will be able to move completely around and through it, meaning at times the view will be completely surrounded by the 3D model, with points both in front of and behind the camera.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.
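For what it's worth, here is a minimal, language-agnostic sketch (plain C) of the corner-projection approach from attempt 1; all names are hypothetical, and in Godot the per-corner projection step corresponds to camera.unproject_position(). The w <= 0 check is what handles the "trending towards infinity" case: a corner at or behind the camera plane has no finite screen position, so the sketch falls back to the full screen, which also covers the camera being inside or surrounded by the model:

    /* viewProj is a column-major view-projection matrix, corners holds the
       8 corners of the mesh's AABB, screenW/screenH are the viewport size. */
    typedef struct { float x, y, z; } Vec3;

    static void projectPoint(const float m[16], Vec3 p,
                             float *cx, float *cy, float *w) {
        *cx = m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12];
        *cy = m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13];
        *w  = m[3]*p.x + m[7]*p.y + m[11]*p.z + m[15];
    }

    /* Returns 0 and falls back to the full screen when any corner is at or
       behind the camera plane (w <= 0). */
    static int screenSpaceAABB(const float viewProj[16], const Vec3 corners[8],
                               float screenW, float screenH,
                               float *minX, float *minY,
                               float *maxX, float *maxY) {
        *minX = *minY = 1e30f;
        *maxX = *maxY = -1e30f;
        for (int i = 0; i < 8; i++) {
            float cx, cy, w;
            projectPoint(viewProj, corners[i], &cx, &cy, &w);
            if (w <= 0.0f) {
                *minX = 0.0f;    *minY = 0.0f;
                *maxX = screenW; *maxY = screenH;
                return 0;
            }
            /* Perspective divide, then map NDC [-1, 1] to pixels
               (y flipped so 0 is the top of the screen). */
            float sx = (cx / w * 0.5f + 0.5f) * screenW;
            float sy = (1.0f - (cy / w * 0.5f + 0.5f)) * screenH;
            if (sx < *minX) *minX = sx;
            if (sx > *maxX) *maxX = sx;
            if (sy < *minY) *minY = sy;
            if (sy > *maxY) *maxY = sy;
        }
        return 1;
    }

The resulting rect still needs clamping to the screen bounds for partially off-screen corners, and the tighter blue bound from the figure would require projecting the mesh's actual vertices rather than the AABB corners.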

OpenGL ES 2.0 drawing imprecision

I'm having a weird issue in OpenGL. It goes like this: I'm designing a 2D engine, and so far I have coded the routines that let you draw sprites, rectangles, and boxes, and translate and scale them. However, when I run a small demo of my engine, I notice that when gradually scaling rectangles in an animation (drawn using 4 vertices and GL_LINE_LOOP), the rectangle edges seem to bounce between the two neighboring pixels.
I can't determine the source of the problem, or even formulate a proper search query for Google. If someone could shed some light on this matter, I would appreciate it. If my question is not clear, please let me know.
Building a 2D library on OpenGL ES is going to be problematic for several reasons. First of all, the Khronos specifications state that it is not intended to produce "pixel perfect" rendering. Every OpenGL ES renderer is allowed some variation in rendered results. This is because the actual rendering is implemented in hardware and floating point rounding can be a little different from platform to platform. Even the shader compilers are completely different from one GPU to the next.
Another issue is that most of the GPUs on mobile devices today are tile-based deferred renderers, and they do not typically support partial screen rendering. In other words, every screen update requires replacing the entire frame.
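One common mitigation, which is not part of the answer above and is only a hedged sketch: with a 1:1 orthographic projection, snapping each rectangle corner to a pixel center after scaling (and before uploading the vertices) keeps GL_LINE_LOOP edges from oscillating between neighboring pixels:

    #include <math.h>

    /* Snap a scaled vertex coordinate to the center of the pixel it falls in,
       so the rasterizer always picks the same pixel for the line. */
    static float snapToPixelCenter(float v) {
        return floorf(v) + 0.5f;
    }

    /* Hypothetical usage: snap all four corners (x,y pairs) of the rectangle
       before drawing it with GL_LINE_LOOP. */
    static void snapRect(float verts[8]) {
        for (int i = 0; i < 8; i++) {
            verts[i] = snapToPixelCenter(verts[i]);
        }
    }

This trades sub-pixel accuracy during the animation for stable edges, which is often an acceptable compromise in a 2D engine.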

Render texture and normalized view rect in Unity

I'm using Unity 3D 3.5 pro.
I've got this scene with two cameras in it. One of them is looking at a plane that has a render texture on it; the other is recording the render texture. When the camera that's recording the render texture has a 1:1 normalized view rect width and height, everything is fine. But when it's something different, some weird stuff happens: the render texture's image becomes distorted. I've tried releasing and discarding the render texture's contents in an update function, but nothing changes. It's totally stopping the project I'm working on from being completed. I have pictures here to explain the situation in detail. The reason it's a problem is that I need to be able to place non-rectangular objects in front of the square without their scales appearing distorted, due to the plane on which the render texture is shown not being square. What could I be doing wrong?
I also placed a similar question on unity answers, but received no usable help there. Here was the thread:
http://answers.unity3d.com/questions/389094/rendertexture-normalized-view-rect.html
I figured it out. I needed to mess with the offset and tiling of the render texture. Silly rabbit!

Motion blur implementation in OpenGL ES

I'm a novice at OpenGL ES 1.1 (for iOS) texturing, and I have a problem implementing a motion blur effect. While googling, I found that I should render my scene at several moments in time to several textures and then draw all these textures on the screen with different alpha values. But the problem is that I don't know how to implement any of this! So, my questions are:
1. How do I draw a 2D texture on the screen? Should I make a square and put my texture on it? Or maybe there is a way to draw a texture on the screen directly?
2. How do I draw several textures (one upon another) on the screen with different alpha values?
I've already come up with some ideas, but I'm not sure if they are correct or not.
Thanks in advance!
Well, of course the first advice is, understand the basics before trying to do advanced stuff. Other than that:
1. Yes indeed, to draw a full-screen texture you just draw a textured screen-sized quad. An orthographic projection would be a good idea in this case, making the screen alignment of the quad and its proper sizing easier. For getting the textures in the first place (by rendering into them), FBOs might be of help, but I'm not sure they are supported on ES 1 devices; otherwise, good old glCopyTexSubImage2D will do too, albeit requiring a copy operation.
2. Well, you just draw multiple textured quads (see 1) one over the other. You might configure the texture environment to scale the texture's color with the quad's base color (glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE)) and give your quads a color of (1, 1, 1, alpha) (lighting should, of course, be disabled). Additionally, you have to enable alpha blending (glEnable(GL_BLEND)) and use an appropriate blending function (glBlendFunc(GL_SRC_ALPHA, GL_ONE) should do).
But if all these terms don't tell you anything, you should rather first learn the basics using a good learning resource before delving into more advanced effects.
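Putting points 1 and 2 together, here is a minimal GLES 1.1 sketch, under the assumption that the textures in frames[] were already captured (e.g. via glCopyTexSubImage2D) and that screenW/screenH are the viewport size; all names are placeholders rather than a definitive implementation:

    #import <OpenGLES/ES1/gl.h>

    static void drawBlurredFrames(const GLuint *frames, int count,
                                  GLfloat screenW, GLfloat screenH) {
        const GLfloat verts[]     = { 0, 0,  screenW, 0,  0, screenH,  screenW, screenH };
        const GLfloat texCoords[] = { 0, 0,  1, 0,        0, 1,        1, 1 };

        /* Screen-aligned orthographic projection makes sizing the quad trivial. */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrthof(0, screenW, 0, screenH, -1, 1);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        /* Modulate the texture with the quad color so alpha comes from glColor4f. */
        glDisable(GL_LIGHTING);
        glEnable(GL_TEXTURE_2D);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE);

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, verts);
        glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

        /* Older frames get a smaller alpha so they fade out. */
        for (int i = 0; i < count; i++) {
            glColor4f(1.0f, 1.0f, 1.0f, 1.0f / (GLfloat)(i + 1));
            glBindTexture(GL_TEXTURE_2D, frames[i]);
            glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        }
    }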

In OpenGL ES 2.0, how can I draw a wireframe of triangles except for the lines on adjacent coplanar faces?

I vaguely remember seeing something in OpenGL (not ES, which was still at v1.0 on the iPhone when I came across this, which is why I never used it) that let me specify which edges of my polygons were considered outlines versus those that made up the interior of faces. As such, this isn't the same as the outline of the entire model (which I know how to do), but rather the outline of a planar face with all its tris basically blended into one poly. For instance, in a cube made up of tris, each face is actually two tris. I want to render the outline of the square, but not the diagonal across the face. Same thing with a hexagon: that takes four tris, but just one outline for the face.
Now yes, I know I can simply test all the edges to see if they share coplanar faces, but I could have sworn I remember seeing somewhere when you're defining the tri mesh data where you could say 'this line outlines a face whereas this one is inside a face.' That way when rendering, you could set a flag that basically says 'Give me a wireframe, but only the wires around the edges of complete faces, not around the tris that make them up.'
BTW, my target is all platforms that support OpenGL ES 2.0, but my dev platform is iOS. Again, I'm pretty sure this was originally in OpenGL and may have been deprecated once shaders came on the scene, but I can't even find a reference to the feature to check if that's the case.
The only way I know now is to have one set of vertices, but two separate sets of indices... one for rendering tris, and another for rendering the wireframes of the faces. It's a real pain since I end up hand-coding a lot of this, which again, I'm 99% sure you can define when rendering the lines.
GL_QUADS, glEdgeFlag and glPolygonMode are not supported in OpenGL ES.
You could use GL_LINES to draw the wireframe. To get hidden-line removal, first draw black filled triangles (with depth testing on) and then draw the edges you are interested in with GL_LINES.
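A hedged sketch of that two-pass idea in ES 2.0 follows; triIndexBuffer and edgeIndexBuffer are the two index buffers the question describes, colorUniform is a placeholder uniform in an assumed flat-color shader program, and the program and vertex-buffer setup are presumed done elsewhere. glPolygonOffset (not mentioned in the answer) is added to keep the lines from z-fighting with the fill:

    #import <OpenGLES/ES2/gl.h>

    static void drawHiddenLineWireframe(GLuint triIndexBuffer, GLsizei triIndexCount,
                                        GLuint edgeIndexBuffer, GLsizei edgeIndexCount,
                                        GLint colorUniform) {
        glEnable(GL_DEPTH_TEST);

        /* Pass 1: black filled triangles establish the depth buffer. Polygon
           offset pushes the fill slightly back so the lines pass the depth test. */
        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(1.0f, 1.0f);
        glUniform4f(colorUniform, 0.0f, 0.0f, 0.0f, 1.0f);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, triIndexBuffer);
        glDrawElements(GL_TRIANGLES, triIndexCount, GL_UNSIGNED_SHORT, 0);
        glDisable(GL_POLYGON_OFFSET_FILL);

        /* Pass 2: only the face outlines are in this index buffer, so the
           diagonals inside coplanar faces never get drawn. */
        glUniform4f(colorUniform, 1.0f, 1.0f, 1.0f, 1.0f);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, edgeIndexBuffer);
        glDrawElements(GL_LINES, edgeIndexCount, GL_UNSIGNED_SHORT, 0);
    }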