OpenGL-ES: how to draw object twice using different shaders - objective-c

I'm trying to make an app that simulates dice rolls, and at the moment everything works fine. I'm trying to add a shader so that when the user selects a die, it puts an outline around the selected die. My approach is to render that die scaled up slightly and completely black, then draw the textured die on top of it to make it look like it has an outline.
The problem I'm having is that when the shader for drawing the object black is applied first, the black die draws fine, but when the textured die is drawn over it, it ends up in the wrong place, and the wrong die is drawn. The odd thing is that it is drawn inside one of the other dice on the screen.
If I apply the same shader to the object twice, everything draws how it's supposed to for that particular shader (either all black because of the outline shader, or all textured and lit from the normal shader), but when I apply both shaders to the same model, things go wrong.
This class loads the vertices and stuff, and draws the object:
http://pastebin.com/N5aYAtBC
This class manages the shaders:
http://pastebin.com/0bT7ABRu
I've left out a lot of code that I feel has nothing to do with the problem, but if you need more, just leave a comment.
And when I click on different dice, this is what happens (the first picture is normal):
http://imgur.com/a/ikZVX

I think you're handling the uniforms wrong. I pulled this snippet out of your source as an example:
glUniformMatrix4fv(toon_mvp, 1, 0, modelViewProjectionMatrix.m);
glUniform4fv(toon_outline, 1, oc);
glUseProgram(Toon);
When you upload a uniform, it only affects the currently bound program (each program has its own internal storage of uniforms).
If you're expecting 'toon_mvp' and 'toon_outline' to be available to the program 'Toon', they will not be. You need to first bind the program you want to modify, and then set its uniforms after it is bound, not the other way around.
You'll probably have to fix this in other places in your code as well.
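For reference, a minimal sketch of the corrected ordering, using the handles from your snippet (the second program's name and uniform location below are placeholders, since that part of your code isn't shown):
// Bind the outline program FIRST, then set the uniforms that belong to it.
glUseProgram(Toon);
glUniformMatrix4fv(toon_mvp, 1, 0, modelViewProjectionMatrix.m);
glUniform4fv(toon_outline, 1, oc);
// ... draw the scaled-up, all-black die here (glDrawArrays / glDrawElements)

// Then switch to the normal program and set ITS uniforms before drawing.
glUseProgram(texturedProgram);                                        // placeholder name
glUniformMatrix4fv(textured_mvp, 1, 0, modelViewProjectionMatrix.m);  // placeholder handle
// ... draw the textured die on top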

Related

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to Figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating the point position on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity. I understand it has something to do with the tangent function, but frankly, it is too late for my brain to function anymore.
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get it into a usable format I would need to project the result into 2D coordinates again.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
What I am looking for:
An output that can be passed to a shader, informing it where to perform certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to use a texture instead. It could also be rewritten to use polygons, but I am trying to keep calculations in the shader to a minimum.
Certain solutions I have tried before have worked, slightly, but this must be robust. The camera interfacing with the 3D object will be able to move completely around and through it, meaning that at times the view will be completely surrounded by the 3D model, with points both in front of and behind the camera.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.

Metal multisampling results in darkened textures

So I'm trying to implement full-screen MSAA in my Metal app. I have it working, and when drawing solid-filled polygons the edges appear smooth as expected. However, my textured polygons appear dark, and get darker as I increase the number of samples, which suggests that the shader might be taking only one sample of the texture per fragment and blending it with n - 1 samples of black, making the result darker.
However, in my app I also have textures that I render to and then draw to the screen. These textures show up perfectly fine. I can't really see a difference between the two kinds of textures that would change the behavior of multisampling.
Anyway, if anyone could maybe give me any clues as to what's going on, I would greatly appreciate it. I'm pretty stumped on this one.
EDIT:
Here is how I am setting up all my pipeline state(s)
Here is how the texture pipeline state is set up specifically
I figured it out. The problem was that I hadn't set my stencil draw pipeline state to be multisampled. Therefore it was only reading the value in the stencil buffer for 1 out of n samples and hence darkening the output. Works fine now.
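For anyone who hits the same thing, the fix amounts to giving every pipeline state that renders into the multisampled target the same sample count as the target, including the stencil-only pass. A rough sketch (the descriptor variables and the 4x sample count are placeholders, not my actual setup):
NSUInteger msaaSampleCount = 4;                       // must match the MSAA render target

MTLRenderPipelineDescriptor *texturedDesc = [MTLRenderPipelineDescriptor new];
// ... vertex/fragment functions, pixel formats, blending, etc.
texturedDesc.sampleCount = msaaSampleCount;

MTLRenderPipelineDescriptor *stencilDesc = [MTLRenderPipelineDescriptor new];
// ... stencil pass functions and formats
stencilDesc.sampleCount = msaaSampleCount;            // this was the missing piece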

How to make a PhysicsBody based on Alpha Values

Suppose there is a scene as follows:
There is a scene with the same size as the frame of the device. The scene has a red ball, which is able to move throughout the 'world'. This world is defined by black and white areas, where the ball is ONLY able to move in the area that is white. Here is a picture to help explain:
Parts of the black area can be erased, as if the user is drawing with white color over the scene. This would mean that the area in which the ball can move is constantly changing. Now, how would one go about implementing a physicsBody for an edge between the white and black areas?
I tried redefining the physicsBody every time it changed, but once the shape becomes complex enough, this isn't a viable solution at all. I tried creating a two-dimensional array of invisible 'boxes' that specify whether most of the area within each box is white or black; if the ball touched a box that was black, it would be pushed back. However, this required heavy rendering and too much iteration over the array. Since my original array contained boxes a little bigger than a pixel, I tried making the boxes bigger to smooth the motion a little, but this eventually caused part of the ball to be stopped by white areas and to appear to be inside the black area. This was undesirable, since the user could feel invisible barriers that they seemed to be hitting.
I tried searching for other methods to implement this 'destructible terrain' type of scene, but the solutions that I found and tried used other game engines. To further clarify, I am using Objective-C and Apple's SpriteKit framework, and I am not looking for a detailed class full of code, but rather some pseudo-code or implementation ideas that would lead me to a solution.
Thank you.
If your deployment target is iOS 8, this may be what you're looking for...
+ bodyWithTexture:alphaThreshold:size:
Here's a description from Apple's documentation
Creates a physics body from the contents of a texture. Only texels that exceed a certain transparency value are included in the physics body.
where a texel is a texture element. You will need to convert an image to a texture before creating the SKPhysicsBody.
I'm not sure if it will allow for a hole in the middle like your drawing. If not, I suspect you can connect two physics bodies, a left half and a right half, to form the hole.
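A minimal sketch of that call, assuming an SKSpriteNode named terrainNode that displays the terrain and a UIImage named terrainImage holding its current contents (the image would need to be transparent where the ball may move and opaque where it may not):
// Rebuild the body from the current terrain image. Texels whose alpha
// exceeds the threshold become part of the physics body.
SKTexture *terrainTexture = [SKTexture textureWithImage:terrainImage];
terrainNode.texture = terrainTexture;
terrainNode.physicsBody = [SKPhysicsBody bodyWithTexture:terrainTexture
                                          alphaThreshold:0.5f
                                                    size:terrainNode.size];
terrainNode.physicsBody.dynamic = NO;   // static terrain for the ball to collide with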

UIBezierPath subtraction from another UIBezierPath

I am making an app that allows the user to draw on the screen with a finger in different colors. The drawings are drawn with UIBezierPaths, but I need an eraser. I did have an eraser that was just a path with the background image as its color, but this method causes memory issues. I would like to delete the points from any path that is drawn on when the eraser is selected.
Unfortunately, UIBezierPath doesn't have a subtraction function, so I want to make my own. If the eraser is selected, it will look at all the points that should be erased, see if any of the existing paths contain those points, and then subdivide the path, leaving a blank spot. It should also be able to see how many points in a row to delete rather than doing it one at a time. In theory it makes sense, but I'm having trouble getting started on the implementation.
Anyone have any guidance to set me on the right 'path'?
Upon first glance, it appears that you could do hit detection on a UIBezierPath by simply using containsPoint:. That works fine if you want to determine whether the point is contained in the fill of a UIBezierPath, but it does not work for determining whether only the stroke of the UIBezierPath intersects the point. Detecting whether or not a given point is in the stroke of a UIBezierPath can be done as described in the "Doing Hit-Detection on a Path" section at the bottom of this page. Actually, the code sample they give could be used either way. The basic idea is that you have to use the Core Graphics method CGContextPathContainsPoint.
Depending on how large the eraser brush is, you will probably want to check several different points on the edge of the brush circle to see if they intersect the curve, and you'll probably have to iterate through your UIBezierPaths until you get a hit. You should be able to optimize the search by using the bounds of the UIBezierPath.
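Here is a sketch of that stroke test. Note it uses CGPathCreateCopyByStrokingPath plus CGPathContainsPoint rather than the context-based CGContextPathContainsPoint from the docs, and the eraser-radius padding is my own assumption:
#import <UIKit/UIKit.h>

// Returns YES if 'point' lies on (or near) the stroked outline of 'path'.
// eraserRadius widens the test so the eraser doesn't have to hit the stroke dead-center.
static BOOL StrokeContainsPoint(UIBezierPath *path, CGPoint point, CGFloat eraserRadius)
{
    CGFloat hitWidth = path.lineWidth + 2.0f * eraserRadius;

    // Cheap early-out using the path's bounding box, padded by the hit width.
    if (!CGRectContainsPoint(CGRectInset(path.bounds, -hitWidth, -hitWidth), point)) {
        return NO;
    }

    // Build the outline of the stroke as a fillable path, then test containment.
    CGPathRef strokedPath = CGPathCreateCopyByStrokingPath(path.CGPath, NULL, hitWidth,
                                                           path.lineCapStyle,
                                                           path.lineJoinStyle,
                                                           path.miterLimit);
    BOOL hit = CGPathContainsPoint(strokedPath, NULL, point, NO);
    CGPathRelease(strokedPath);
    return hit;
}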
After you detect that a point intersects a UIBezierPath, you must do the actual split of the path. There appears to be a good outline of the algorithm in this post. The main idea there is to use De Casteljau's algorithm to perform the subdivision of the curve. There are various implementations of the algorithm that you should be able to find with a quick search, including some in C++.
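The subdivision step itself is small once you know the parameter t at which to split; here is a sketch of De Casteljau's algorithm for a single cubic segment (the helper names are mine, not from any framework):
// Linear interpolation between two points.
static CGPoint LerpPoint(CGPoint p, CGPoint q, CGFloat t)
{
    return CGPointMake(p.x + (q.x - p.x) * t, p.y + (q.y - p.y) * t);
}

// Splits the cubic Bezier (p0, p1, p2, p3) at parameter t and writes the
// control points of the two resulting cubics into 'left' and 'right'.
// Dropping the sub-curve under the eraser and keeping the rest leaves the gap.
static void SplitCubicBezier(CGPoint p0, CGPoint p1, CGPoint p2, CGPoint p3,
                             CGFloat t, CGPoint left[4], CGPoint right[4])
{
    CGPoint a = LerpPoint(p0, p1, t);   // first level of interpolation
    CGPoint b = LerpPoint(p1, p2, t);
    CGPoint c = LerpPoint(p2, p3, t);
    CGPoint d = LerpPoint(a, b, t);     // second level
    CGPoint e = LerpPoint(b, c, t);
    CGPoint f = LerpPoint(d, e, t);     // third level: the split point on the curve

    left[0]  = p0; left[1]  = a; left[2]  = d; left[3]  = f;
    right[0] = f;  right[1] = e; right[2] = c; right[3] = p3;
}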

In OpenGL ES 2.0, how can I draw a wireframe of triangles except for the lines on adjacent coplanar faces?

I vaguely remember seeing something in OpenGL (not ES, which was still at v1.0 on the iPhone when I came across this, which is why I never used it) that let me specify which edges of my polygons were considered outlines versus those that made up the interior of faces. As such, this isn't the same as the outline of the entire model (which I know how to do), but rather the outline of a planar face with all its tris basically blended into one poly. For instance, in a cube made up of tris, each face is actually two tris. I want to render the outline of the square, but not the diagonal across the face. Same thing with a hexagon: that takes four tris, but just one outline for the face.
Now yes, I know I can simply test all the edges to see if they share coplanar faces, but I could have sworn I remember seeing somewhere, when you're defining the tri mesh data, that you could say 'this line outlines a face, whereas this one is inside a face.' That way, when rendering, you could set a flag that basically says 'give me a wireframe, but only the wires around the edges of complete faces, not around the tris that make them up.'
BTW, my target is all platforms that support OpenGL ES 2.0, but my dev platform is iOS. Again, I'm pretty sure this was originally in OpenGL and may have been deprecated once shaders came on the scene, but I can't even find a reference to this feature to check if that's the case.
The only way I know of now is to have one set of vertices but two separate sets of indices... one for rendering tris, and another for rendering the wireframes of the faces. It's a real pain since I end up hand-coding a lot of this, which, again, I'm 99% sure you can define when rendering the lines.
GL_QUADS, glEdgeFlag and glPolygonMode are not supported in OpenGL ES.
You could use GL_LINES to draw the wireframe. To get hidden-line removal, first draw black filled triangles (with depth testing on) and then draw only the edges you are interested in with GL_LINES.
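A sketch of that two-pass draw with one vertex buffer and two index buffers (the buffer names and counts are placeholders, and the polygon offset is my addition to keep the lines from z-fighting with the triangles they sit on):
// Pass 1: fill the depth buffer with the solid (all-black) triangles,
// pushed back slightly so the line pass wins the depth test on shared edges.
glEnable(GL_DEPTH_TEST);
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, triangleIndexBuffer);   // placeholder name
glDrawElements(GL_TRIANGLES, triangleIndexCount, GL_UNSIGNED_SHORT, 0);
glDisable(GL_POLYGON_OFFSET_FILL);

// Pass 2: draw only the hand-picked face-outline edges from the second
// index buffer; edges hidden behind the filled geometry fail the depth test.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, edgeIndexBuffer);       // placeholder name
glDrawElements(GL_LINES, edgeIndexCount, GL_UNSIGNED_SHORT, 0);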