Drawing an angle/angular/"conical"/"arcing" gradient in Objective-C (iOS) using Core Graphics

I'm trying to draw a "conical"/"arcing" gradient (I don't know what the correct term for this would be; Photoshop calls it an "angle" gradient) using Objective-C (iOS), pretty much exactly like the image shown in the following thread.
After days of googling and searching the internet to no avail, I've decided to ask for help here.
A little background on what I'm trying to do. My objective is to create a custom UIView, which is a circular progress bar, basically a ring, somewhat similar to the activity indicator in the Tweetbot iPhone app (it displays when you drag to refresh, and can be seen in action here, around 17-18 seconds into the video, at the top of the iPhone screen). I want the progress indicator (the fill of the ring) to be a simple two-color gradient, which can be set programmatically, and the view to be resizable.
Filling the ring shape with a gradient that "follows" the arc of the ring is where I'm stuck. The answers I find by googling, reading Apple's Core Graphics documentation on gradients, and searching SO cover either radial gradients or linear/axial gradients, neither of which is what I'm trying to achieve.
The thread linked above suggests using pre-made images, but this isn't an option because the colors of the gradient should be settable, the view should be resizable and the fill of the progress bar isn't always 100% full obviously (which would be the state of the gradient as shown in the picture in the thread above).
The only solution I've come up with is to draw the gradient "manually", without using a CGGradientRef: clipping small slices of the gradient, each filled with a single solid color, within a circular path. I don't know how well this will perform when the bar is being animated; it shouldn't be that bad, but it might be a problem.
So my first question:
Is there an easier/different solution for drawing a conical/arcing gradient in Objective-C (iOS) than the one I've come up with?
Second question:
If I have to draw the gradient manually in my view using the solution I came up with, how can I determine or calculate (if this is even possible) the value (hex or RGBA) of each color "slice" of the gradient, as illustrated in the image below?
(Can't link image) gradient slice illustration
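For the record, here is roughly what I have in mind for the manual approach, assuming it runs inside drawRect: (just a sketch; startColor, endColor, center, radius and the slice count are all placeholders):

const int kSliceCount = 360; // assumed granularity; more slices = smoother
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGFloat sr, sg, sb, sa, er, eg, eb, ea;
[startColor getRed:&sr green:&sg blue:&sb alpha:&sa]; // placeholder UIColor
[endColor getRed:&er green:&eg blue:&eb alpha:&ea];   // placeholder UIColor
for (int i = 0; i < kSliceCount; i++) {
    CGFloat t = (CGFloat)i / kSliceCount; // fraction of the way around, 0..1
    // Question 2: each slice's color is a plain linear blend of the two endpoint colors.
    CGContextSetRGBFillColor(ctx, sr + (er - sr) * t,
                                  sg + (eg - sg) * t,
                                  sb + (eb - sb) * t,
                                  sa + (ea - sa) * t);
    // One thin wedge of the circle.
    CGFloat a0 = t * 2.0 * M_PI;
    CGFloat a1 = (CGFloat)(i + 1) / kSliceCount * 2.0 * M_PI;
    CGContextMoveToPoint(ctx, center.x, center.y);
    CGContextAddArc(ctx, center.x, center.y, radius, a0, a1, 0);
    CGContextClosePath(ctx);
    CGContextFillPath(ctx);
}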

Looks to me like a job for a pixel shader. I remember seeing a Quartz Composer example that simulated a radar sweep, and that used a pixel shader to produce an effect like you're describing.
Edit:
Found it. This shader was written by Peter Graffignino:
kernel vec4 radarSweep(sampler image, __color color1, __color color2, float angle, vec4 rect)
{
    vec4 val = sample(image, samplerCoord(image));
    vec2 locCart = destCoord();
    float theta, r, frac, angleDist;

    locCart.x = (locCart.x - rect.z/2.0) / (rect.z/2.0);
    locCart.y = (locCart.y - rect.w/2.0) / (rect.w/2.0);
    // locCart is now normalized
    theta = degrees(atan(locCart.y, locCart.x));
    theta = (theta < 0.0) ? theta + 360.0 : theta;
    r = length(locCart);
    angleDist = theta - angle;
    angleDist = (angleDist < 0.0) ? angleDist + 360.0 : angleDist;
    frac = 1.0 - angleDist/360.0;
    // sum up 3 decaying phosphors with different time constants
    val = val*exp2(-frac/.005) + (val+.1)*exp2(-frac/.25)*color1 + val*exp2(-frac/.021)*color2;
    val = r > 1.0 ? vec4(0.0, 0.0, 0.0, 0.0) : val; // constrain to circle
    return val;
}

"The thread linked above suggests using pre-made images, but this isn't an option because the colors of the gradient should be settable, the view should be resizable and the fill of the progress bar isn't always 100% full obviously (which would be the state of the gradient as shown in the picture in the thread above)."
Not a problem!
Use that same black-to-white image from the other question (or a bigger version if you need one), in the following fashion:
1. Clip to whatever shape you want to draw the gradient in.
2. Fill with the color at the end of the gradient.
3. Use the black-to-white gradient image as a mask.
4. Fill with the color at the start of the gradient.
You can rotate the gradient by rotating the mask image.
This only supports the simplest case of a gradient with a color at each extreme end; it doesn't scale to three or more colors and doesn't support unusual gradient stop positioning.
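A minimal sketch of those four steps inside drawRect:, assuming the mask image ships as an asset named "angle-mask" and that ringPath, startColor and endColor are properties of the view (all of those names are placeholders). Note that CGContextClipToMask expects either a CGImage mask or a grayscale image without alpha, so the PNG may need converting first:

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGImageRef angleMask = [UIImage imageNamed:@"angle-mask"].CGImage; // assumed asset

    // 1. Clip to the ring shape.
    CGContextAddPath(ctx, self.ringPath.CGPath); // hypothetical UIBezierPath property
    CGContextClip(ctx);

    // 2. Fill with the color at the end of the gradient.
    CGContextSetFillColorWithColor(ctx, self.endColor.CGColor);
    CGContextFillRect(ctx, self.bounds);

    // 3. Use the black-to-white image as a mask; lighter areas let more of
    //    the next fill show through.
    CGContextClipToMask(ctx, self.bounds, angleMask);

    // 4. Fill with the color at the start of the gradient.
    CGContextSetFillColorWithColor(ctx, self.startColor.CGColor);
    CGContextFillRect(ctx, self.bounds);
}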

FYI: here's also a good tutorial for creating a circular progress bar using Quartz drawing.
http://www.turnedondigital.com/blog/quartz-tutorial-how-to-draw-in-quartz/

Related

Shrinking a dirty rect

Trying to optimize a falling sand simulation, I'm implementing optimizations that the Noita devs talked about in their GDC talk. At around 10:45 they talk about how they use dirty rects, and I've started trying to implement a similar system.
Currently, I am able to create a dirty rect that covers the particles that need updating. Every time a valid particle (one that is not air or a solid like a wall) is set inside a chunk, I call a function to update the dirty rect, passing the placed particle's position as an argument. From there, I can easily calculate the new min/max of the rectangle from this position.
Here's a gif of that working.
and here's the code for updating the rect:
public void UpdateDirtyRect(int2 newPos)
{
    minX = Math.Min(minX, newPos.x);
    minY = Math.Min(minY, newPos.y);
    maxX = Math.Max(maxX, newPos.x);
    maxY = Math.Max(maxY, newPos.y);
    dirtyrect = .(.(minX, minY), .(maxX, maxY));
    // Inflate by two pixels. Not doing this will cause the rect to not change size as particles update.
    dirtyrect = dirtyrect.Inflate(2);
}
The problem, as can be seen in the gif, is that I currently have no way to shrink the dirty rect. I can do a few things, such as detecting when a particle is erased or replaced with an air/solid particle on the boundary edge of the dirty rect, but I'm unsure what to do from there.
Here's one approach that might work for you (a rough code sketch follows the steps):
1. Keep the dirty rectangle updated by the previous frame.
2. Compute the dirty rectangle updated by one frame only.
3. Combine these two rectangles into a single one that contains both of them.
4. Use the rectangle from step 3 to update the screen.
5. Replace the previous-frame rectangle with the one you computed in step 2, not the combined one from step 3; doing so would cause the same problem you're describing.
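For illustration only, here is that scheme with CGRect doing the rectangle math (the function names are made up, and CGRectUnion stands in for whatever rect-union helper your engine has; CGRectNull is handy because it acts as the identity element for unions):

static CGRect previousDirty; // dirty rect from the previous frame (step 1)
static CGRect currentDirty;  // dirty rect rebuilt from scratch each frame (step 2)

static void BeginSimulation(void) {
    previousDirty = CGRectNull;
    currentDirty = CGRectNull;
}

// Call for every particle written during the current frame.
static void MarkDirty(int x, int y) {
    currentDirty = CGRectUnion(currentDirty, CGRectMake(x, y, 1, 1));
}

// Call once at the end of the frame; returns the rect to redraw.
static CGRect EndFrame(void) {
    // Steps 3 and 4: redraw the union of the previous and current frames.
    CGRect redrawRect = CGRectUnion(previousDirty, currentDirty);
    // Step 5: carry over only the current frame's rect, never the union,
    // otherwise the rect can grow but never shrink again.
    previousDirty = currentDirty;
    currentDirty = CGRectNull;
    return redrawRect;
}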

How to use a shaderModifier to alter the color of specific triangles in an SCNGeometry

First, before I go on, I have read through SceneKit painting on texture with texture coordinates, which seems to suggest I'm on the right track.
I have a complex SCNGeometry representing a hexasphere. It's rendering really well, and with a full 60fps on all of my test devices.
At the moment, all of the hexagons are being rendered with a single material, because, as I understand it, every SCNMaterial I add to my geometry adds another draw call, which I can't afford.
Ultimately, I want to be able to color each of the almost 10,000 hexagons individually, so adding another material for each one is not going to work.
I had been planning to limit the color range to (say) 100 colors, and then move hexagons between different geometries, each with their own colored material, but that won't work because SCNGeometry says it works with an immutable set of vertices.
So, my current thought/plan is to use a shader modifier as suggested by @rickster in the above-mentioned question to somehow modify the color of individual hexagons (or sets of 4 triangles).
The thing is, I sort of understand the Apple doco referred to, but I don't understand how to provide the shader with what I think must essentially be an array of colour information, somehow indexed so that the shader knows which triangles to give what colors.
The code I have now that creates the geometry reads as:
NSData *indiceData = [NSData dataWithBytes:oneMeshIndices length:sizeof(UInt32) * indiceIndex];
SCNGeometryElement *oneMeshElement =
    [SCNGeometryElement geometryElementWithData:indiceData
                                  primitiveType:SCNGeometryPrimitiveTypeTriangles
                                 primitiveCount:indiceIndex / 3
                                  bytesPerIndex:sizeof(UInt32)];
[oneMeshElements addObject:oneMeshElement];

SCNGeometrySource *oneMeshNormalSource =
    [SCNGeometrySource geometrySourceWithNormals:oneMeshNormals count:normalIndex];
SCNGeometrySource *oneMeshVerticeSource =
    [SCNGeometrySource geometrySourceWithVertices:oneMeshVertices count:vertexIndex];

SCNGeometry *oneMeshGeom =
    [SCNGeometry geometryWithSources:[NSArray arrayWithObjects:oneMeshVerticeSource, oneMeshNormalSource, nil]
                            elements:oneMeshElements];

SCNMaterial *mat1 = [SCNMaterial material];
mat1.diffuse.contents = [UIColor greenColor];
oneMeshGeom.materials = @[mat1];

SCNNode *node = [SCNNode nodeWithGeometry:oneMeshGeom];
If someone can shed some light on how to provide the shader with a way to color each triangle indexed by the indices in indiceData, that would be fantastic.
EDIT
I've tried looking at providing the shader with a texture as a container for color information that would be indexed by the VertexID, however it seems that SceneKit doesn't make the VertexID available. My thought was to provide this texture (actually just an array of bytes, one per hexagon on the hexasphere) via the SCNMaterialProperty class, and then, in the shader, pull out the appropriate byte based on the vertex number. That byte would be used to index an array of fixed colors, and the resultant color for each vertex would then give the desired result.
Without a VertexID, this idea won't work, unless there is some other, similarly useful piece of data...
EDIT 2
Perhaps I am stubborn. I've been trying to get this to work, and as an experiment I created an image that is basically a striped rainbow and wrote the following shader, thinking it would basically colour my sphere with the rainbow.
It doesn't work. The entire sphere is drawn using the colour in the top left corner of the image.
My shaderModifier code is:
#pragma arguments
sampler2D colorMap;
uniform sampler2D colorMap;
#pragma body
vec4 color = texture2D(colorMap, _surface.diffuseTexcoord);
_surface.diffuse.rgba = color;
and I apply this using the code:
SCNMaterial *mat1 = [SCNMaterial material];
mat1.locksAmbientWithDiffuse = YES;
mat1.doubleSided = YES;
mat1.shaderModifiers = @{SCNShaderModifierEntryPointSurface :
    @"#pragma arguments\nsampler2D colorMap;\nuniform sampler2D colorMap;\n#pragma body\nvec4 color = texture2D(colorMap, _surface.diffuseTexcoord);\n_surface.diffuse.rgba = color;"};
colorMap = [SCNMaterialProperty materialPropertyWithContents:[UIImage imageNamed:@"rainbow.png"]];
[mat1 setValue:colorMap forKeyPath:@"colorMap"];
I had thought that the _surface.diffuseTexcoord would be appropriate but I'm beginning to think I need to somehow map that to a coordinate in the image by knowing the dimensions of the image and interpolating somehow.
But if this is the case, what units are _surface.diffuseTexcoord in? How do I know the min/max range of this so that I can map it to the image?
Once again, I'm hoping someone can steer me in the right direction if these attempts are wrong.
EDIT 3
OK, so I know I'm on the right track now. I've realised that by using _surface.normal instead of _surface.diffuseTexcoord, I can treat the normal as a latitude/longitude on my sphere and map that to an x,y in the image, and I now see the hexagons being colored based on the color in the colorMap. However, it doesn't matter what I do (so far): the normal angles seem to be fixed in relation to the camera position, so when I move the camera to look at a different point of the sphere, the colorMap doesn't rotate with it.
Here is the latest shader code:
#pragma arguments
sampler2D colorMap;
uniform sampler2D colorMap;
#pragma body
// 57.29577951 = 180/pi (degrees per radian)
float x = ((_surface.normal.x * 57.29577951) + 180.0) / 360.0;
float y = 1.0 - ((_surface.normal.y * 57.29577951) + 90.0) / 180.0;
vec4 color = texture2D(colorMap, vec2(x, y));
_output.color.rgba = color;
ANSWER
So I solved the problem. It turned out that there was no need for a shader to achieve my desired results.
The answer was to use a mappingChannel to provide the geometry with a set of texture coordinates for each vertex. These texture coordinates are used to pull color data from the appropriate texture (it all depends on how you set up your material).
So, whilst I did manage to get a shader working, there were performance issues on older devices, and using a mappingChannel was much much better, working at 60fps on all devices now.
I did find, though, that despite the documentation saying that a mapping channel is a series of CGPoint objects, that wouldn't work on 64-bit devices, because there CGPoint uses doubles instead of floats.
I needed to define my own struct:
typedef struct {
    float x;
    float y;
} MyPoint;

MyPoint oneMeshTextureCoordinates[vertexCount];
and then having built up an array of these, one for each vertex, I then created the mappingChannel source as follows:
SCNGeometrySource *textureMappingSource =
    [SCNGeometrySource geometrySourceWithData:
                           [NSData dataWithBytes:oneMeshTextureCoordinates
                                          length:sizeof(MyPoint) * vertexCount]
                                     semantic:SCNGeometrySourceSemanticTexcoord
                                  vectorCount:vertexCount
                              floatComponents:YES
                          componentsPerVector:2
                            bytesPerComponent:sizeof(float)
                                   dataOffset:0
                                   dataStride:sizeof(MyPoint)];
EDIT:
In response to a request, here is a project that demonstrates how I use this. https://github.com/pkclsoft/HexasphereDemo

OpenGL texture mapping with different coordinate systems

I already asked a question about texture mapping and these two are related (this question).
I'm working with Quartz Composer, which appears to be kind of particular about textures...
I have a complex polygon that I triangulate in a specific coordinate system (-1 -> 1 on x | -0.75 -> 0.75 on y). I obtain an array of triangle vertices in this coordinate system (triangles 1 to 6 on the left pic).
Then I render each polygon separately (it's necessary for my program), applying a scale function to its vertices to go from this coordinate system to OpenGL's (0.0 -> 1.0). Here it is, even if for a 0 -> 1 range it's kind of redundant:
return (((1. - 0.) * (myVertexXorY - minTriangleBound)) / (maxTriangleBound - minTriangleBound)) + 0.;
But I want one image to be textured on these triangles (like on the picture above). So I begin by getting the whole polygon bounds (1 on the right pic), then the triangle bounds (2 on the right pic). I scale 1 to the picture coordinates (3 on the right pic) in pixels, then I get the triangle bounds (2) in pixels.
This gives me the bounds to lock my texture in OpenGL with Quartz:
NSRect myBounds = NSMakeRect(originXinPixels, originYinPixels, widthForTheTriangle, heightForTheTriangle);
And I lock my texture
[myImage lockTextureRepresentationWithColorSpace:space forBounds:myBounds];
Then, with OpenGL:
for (int32 i = 0; i < vertexCount; ++i)
{
    verts[i] = myTriangle.vertices[i];
    texcoord[0] = [self myScaleFunctionFor:XinQuartzCoordinateSystem From:0 To:1];
    texcoord[1] = [self myScaleFunctionFor:YinQuartzCoordinateSystem From:0 To:1];
    glTexCoord2fv(texcoord);
}
And I obtain what you can see: sometimes parts of the image fit, sometimes they don't (well, in fact with this particular polygon, it doesn't fit at all...).
I'm not really sure I understood your question, but: what hinders you from directly supplying texture coordinates that match the topology of your source picture? That would be far easier than trying to find some per-triangle linear mapping that moves the picture in the right way.
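For example, something along these lines (a sketch only; it assumes the polygon space from the question, x in -1..1 and y in -0.75..0.75, and the names are illustrative):

for (int32 i = 0; i < vertexCount; ++i)
{
    // Derive the texture coordinate once from the vertex's position in the
    // original polygon space, instead of rescaling per triangle.
    float u = (myTriangle.vertices[i].x + 1.0) / 2.0;   // -1..1       -> 0..1
    float v = (myTriangle.vertices[i].y + 0.75) / 1.5;  // -0.75..0.75 -> 0..1
    glTexCoord2f(u, v);
    glVertex2f(myTriangle.vertices[i].x, myTriangle.vertices[i].y);
}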

OpenGL ES OBJ Loading Transparency Issues

Hi, I am working on an OBJ loader for use in iOS programming. I have managed to load the vertices and the faces, but I have an issue with the transparency of the faces.
For the colours of the vertices I have, for now, just made them vary from 0 to 1, so each vertex gradually changes from black to white. The problem is that the white vertices and faces seem to appear over the black ones; the darker the vertices, the more covered they appear.
For an illustration of this, see the video I posted here: http://youtu.be/86Sq_NP5jrI
The model here consists of two cubes, one large cube with a smaller one attached to a corner.
How do you assign a color to a vertex? I assume that you have an RGBA render target, so you need to set up the color like this:

typedef unsigned char u8; // assuming u8 is an unsigned 8-bit byte

struct color
{
    u8 r, g, b, a;
};

color newColor;
newColor.a = 255; // opaque vertex, 0 = transparent
// set up the other color components here

Scale and rotate in OpenGL ES 2.0

Is there a way to include aspect ratio correction without using matrices in OpenGL ES?
I am writing a simple shader to rotate a texture.
attribute vec4 a_position; // added so the snippet is self-contained
uniform float cosA;        // cosine of the rotation angle
uniform float sinA;        // sine of the rotation angle

void main()
{
    mat2 rotZ = mat2(cosA, sinA, -sinA, cosA);
    vec4 a_pos = a_position;
    a_pos.xy = a_position.xy * rotZ;
    gl_Position = a_pos;
}
But the problem is that the image gets skewed when rotating.
In normal OpenGL, we would use something like gluPerspective(fov, (float)windowWidth/(float)windowHeight, zNear, zFar);
How do I do the same with shaders?
Note: I'd prefer not to use a matrix.
Include the aspect-ratio fix in the geometry of the rendered object? I did so in my font rendering tool: the position of the verts in each rect is corrected by the aspect ratio of the screen. And yes, I know that it's better and easier to use a matrix fix, but I didn't know that when I was writing this tool, and it works fine. :)
You can manually translate the code of gluPerspective to a shader:
http://www.opengl.org/wiki/GluPerspective_code
But it is not efficient to calculate this matrix for each vertex, so you can calculate it once for your device screen. Look at these posts.
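For the simple no-matrix case, one option is to upload the aspect ratio as a single uniform and divide the rotated x coordinate by it (a sketch; "uAspect" and program are illustrative names, and the shader program is assumed to be bound with glUseProgram):

GLint aspectLoc = glGetUniformLocation(program, "uAspect"); // assumed uniform name
glUniform1f(aspectLoc, (float)windowWidth / (float)windowHeight);
// In the vertex shader, declare "uniform float uAspect;" and, after the
// rotation, do "a_pos.x /= uAspect;". Correcting after the rotation keeps the
// rotated image round instead of stretching it with the screen.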
Just change the matrix multiplication order for rotation:
a_pos.xy = rotZ * a_position.xy;