OpenGL (on OS X with Objective-C) texture can't be mapped, what's wrong with my code? [duplicate]

I am creating a program that lets me plot points in 3-space, connects them using a Catmull-Rom spline, and then draws a cylinder around the spline. I am using GL_TRIANGLE_STRIP to connect circles of points drawn around the spline at short intervals, hoping to join them all together into a cylinder around the spline.
I have managed to draw complete circles of points at these intervals using GL_POINTS, and to orient them correctly to the line by means of a Frenet frame. Unfortunately, to use GL_TRIANGLE_STRIP, I believe I need to plot the points one at a time, alternating between a pair of circles.
The problem I am having is that glMultMatrix doesn't seem to work when called inside a glBegin/glEnd pair. The code below draws the circle of points, but at the origin: the glMultMatrix calls, which I use to translate and orient the circle of points, don't apply when issued inside glBegin. Is there a solution to this?
// The matrices applied to the circle of points
GLfloat M1[16]={
N1.x(),N1.y(),N1.z(),0,
B1.x(),B1.y(),B1.z(),0,
T1.x(),T1.y(),T1.z(),0,
fromPoint->x,fromPoint->y,fromPoint->z,1
};
GLfloat M2[16]={
N2.x(),N2.y(),N2.z(),0,
B2.x(),B2.y(),B2.z(),0,
T2.x(),T2.y(),T2.z(),0,
toPoint->x,toPoint->y,toPoint->z,1
};
glBegin(GL_TRIANGLE_STRIP);
GLfloat x, y;
GLfloat radius = 0.4f;
GLint pointCount = 180;
for (GLfloat theta = 0; theta < 2*M_PI; theta += (2*M_PI)/pointCount) {
x = radius * cos(theta);
y = radius * sin(theta);
// Now push a matrix, multiply it, draw a point and pop the matrix
glPushMatrix();
glMultMatrixf(& M1[0]);
// Draw the point here
glVertex3f(x, y, 0);
glPopMatrix();
// Do the same again for the second section
glPushMatrix();
glMultMatrixf(& M2[0]);
glVertex3f(x, y, 0);
glPopMatrix();
}
glEnd();

The problem I am having is that glMultMatrix doesn't seem to work when called inside a glBegin
Unsurprising: per the glBegin reference page,
Only a subset of GL commands can be used between glBegin and glEnd. The commands are glVertex, glColor, glSecondaryColor, glIndex, glNormal, glFogCoord, glTexCoord, glMultiTexCoord, glVertexAttrib, glEvalCoord, glEvalPoint, glArrayElement, glMaterial, and glEdgeFlag. Also, it is acceptable to use glCallList or glCallLists to execute display lists that include only the preceding commands. If any other GL command is executed between glBegin and glEnd, the error flag is set and the command is ignored.
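(You can confirm this with glGetError(), which reports GL_INVALID_OPERATION after a disallowed call inside glBegin/glEnd.)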
glMultMatrix() before glBegin():
// The matrices applied to the circle of points
GLfloat M1[16]=
{
N1.x(),N1.y(),N1.z(),0,
B1.x(),B1.y(),B1.z(),0,
T1.x(),T1.y(),T1.z(),0,
fromPoint->x,fromPoint->y,fromPoint->z,1
};
GLfloat M2[16]=
{
N2.x(),N2.y(),N2.z(),0,
B2.x(),B2.y(),B2.z(),0,
T2.x(),T2.y(),T2.z(),0,
toPoint->x,toPoint->y,toPoint->z,1
};
GLfloat x, y;
GLfloat radius = 0.4f;
GLint pointCount = 180;
for (GLfloat theta = 0; theta < 2*M_PI; theta += (2*M_PI)/pointCount)
{
x = radius * cos(theta);
y = radius * sin(theta);
// Now push a matrix, multiply it, draw a point and pop the matrix
glPushMatrix();
glMultMatrixf(& M1[0]);
// Draw the point here
glBegin(GL_POINTS);
glVertex3f(x, y, 0);
glEnd();
glPopMatrix();
// Do the same again for the second section
glPushMatrix();
glMultMatrixf(& M2[0]);
glBegin(GL_POINTS);
glVertex3f(x, y, 0);
glEnd();
glPopMatrix();
}
Or apply the transforms client-side and hand OpenGL one big block of vertices to render in one go.
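That client-side route is also what the cylinder ultimately needs, because the two rings must be interleaved inside a single glBegin(GL_TRIANGLE_STRIP)/glEnd() pair. A minimal sketch, assuming the column-major M1/M2 from above (transformPoint() is a hypothetical helper, not part of the original code):
// Hypothetical helper: apply a column-major 4x4 matrix (the same
// layout glMultMatrixf expects) to a point, with w assumed to be 1.
static void transformPoint(const GLfloat m[16],
                           GLfloat x, GLfloat y, GLfloat z,
                           GLfloat out[3])
{
    out[0] = m[0]*x + m[4]*y + m[8]*z  + m[12];
    out[1] = m[1]*x + m[5]*y + m[9]*z  + m[13];
    out[2] = m[2]*x + m[6]*y + m[10]*z + m[14];
}

// One strip, alternating between the ring at fromPoint and the ring at
// toPoint; the extra iteration (<=) repeats the first pair to close the tube.
GLfloat radius = 0.4f;
GLint pointCount = 180;
glBegin(GL_TRIANGLE_STRIP);
for (GLint i = 0; i <= pointCount; ++i) {
    GLfloat theta = (GLfloat)(2.0 * M_PI * (i % pointCount) / pointCount);
    GLfloat x = radius * cosf(theta);
    GLfloat y = radius * sinf(theta);
    GLfloat p[3];
    transformPoint(M1, x, y, 0.0f, p);   // ring around fromPoint
    glVertex3fv(p);
    transformPoint(M2, x, y, 0.0f, p);   // matching point on the ring around toPoint
    glVertex3fv(p);
}
glEnd();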
EDIT: Or pull those matrix multiplies outside the loop entirely:
GLfloat x, y;
GLfloat radius = 0.4f;
GLint pointCount = 180;
glPushMatrix();
glMultMatrixf(& M1[0]);
glBegin(GL_POINTS);
for (GLfloat theta = 0; theta < 2*M_PI; theta += (2*M_PI)/pointCount)
{
x = radius * cos(theta);
y = radius * sin(theta);
// Draw the point here
glVertex3f(x, y, 0);
}
glEnd();
glPopMatrix();
glPushMatrix();
glMultMatrixf(& M2[0]);
glBegin(GL_POINTS);
for (GLfloat theta = 0; theta < 2*M_PI; theta += (2*M_PI)/pointCount)
{
x = radius * cos(theta);
y = radius * sin(theta);
// Draw the point here
glVertex3f(x, y, 0);
}
glEnd();
glPopMatrix();
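Besides satisfying the restriction on commands inside glBegin/glEnd, this cuts the matrix work from two push/mult/pop cycles per point down to two per ring.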

You can also use a vertex array to hold all the info and then push the data to OpenGL:
GLfloat *points = new GLfloat[3 * pointCount];
int off = 0;
for (GLfloat theta = 0; theta < 2*M_PI; theta += (2*M_PI)/pointCount)
{
points[0+off] = radius * cos(theta);
points[1+off] = radius * sin(theta);
points[2+off] = 0;
off+=3;
}
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, points);
glPushMatrix();
glMultMatrixf(& M1[0]);
glDrawArrays(GL_POINTS, 0, pointCount);
glPopMatrix();
glPushMatrix();
glMultMatrixf(& M2[0]);
glDrawArrays(GL_POINTS, 0, pointCount);
glPopMatrix();
glDisableClientState(GL_VERTEX_ARRAY);
delete[] points;
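GL_POINTS here just reproduces the debug rings; for the actual cylinder you would pre-transform both rings into one interleaved array on the CPU and draw it with glDrawArrays(GL_TRIANGLE_STRIP, 0, 2 * (pointCount + 1)), as in the client-side sketch above.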

Related

Rotate image in bounds

I am trying to rotate an image view in its superview so that, while rotating, the image view always touches the superview's borders without crossing them, with appropriate resizing. How can I implement this? The image view should be able to rotate a full 360˚.
Here I use calculations based on triangle formulas, considering the initial diagonal angle of the image view.
Maybe I should take into account the new bounding frame of the image view after it gets rotated (its x and y coordinates become negative, and its frame size after the transform grows too).
No success so far: my image view gets sized down too quickly and too much. So my goal, as I understand it, is to get the proper scale factor for CGAffineTransformScale. Maybe there are other ways to do the same.
// set initial values
_planImageView.layer.affineTransform = CGAffineTransformScale(CGAffineTransformIdentity, 1, 1);
_degrees = 0;
_initialWidth = _planImageView.frame.size.width;
_initialHeight = _planImageView.frame.size.height;
_initialAngle = MathUtils::radiansToDegrees(atan((_initialWidth / 2) / (_initialHeight / 2)));
// rotation routine
- (void)rotatePlanWithDegrees:(double)degrees
{
double deltaDegrees = degrees - _degrees;
_initialAngle -= deltaDegrees;
double newAngle = _initialAngle;
double newWidth = (_initialWidth / 2) * tan(MathUtils::degreesToRadians(newAngle)) * 2;
double newHeight = newWidth * (_initialHeight / _initialWidth);
NSLog(@"DEG %f DELTA %f A %f W %f H %f", degrees, deltaDegrees, newAngle, newWidth, newHeight);
double currentScale = newWidth / _initialWidth;
_planImageView.layer.affineTransform = CGAffineTransformScale(CGAffineTransformIdentity, currentScale, currentScale);
_planImageView.layer.affineTransform = CGAffineTransformRotate(_planImageView.layer.affineTransform, (CGFloat) MathUtils::degreesToRadians(degrees));
_degrees = degrees;
self->_planImageView.center = _center;
// NSLog(@"%@", NSStringFromCGRect(_planImageView.frame));
}
EDIT
I rewrote the routine thanks to the answer, and now it works!
- (void)rotatePlanWithDegrees:(double)degrees
{
double newWidth =
_initialWidth * fabs(cos(MathUtils::degreesToRadians(degrees))) +
_initialHeight * fabs(sin(MathUtils::degreesToRadians(degrees)));
double newHeight =
_initialWidth * fabs(sin(MathUtils::degreesToRadians(degrees))) +
_initialHeight * fabs(cos(MathUtils::degreesToRadians(degrees)));
CGFloat scale = (CGFloat) MIN(
self.planImageScrollView.frame.size.width / newWidth,
self.planImageScrollView.frame.size.height / newHeight);
CGAffineTransform rotationTransform = CGAffineTransformMakeRotation((CGFloat) MathUtils::degreesToRadians(degrees));
CGAffineTransform scaleTransform = CGAffineTransformMakeScale(scale, scale);
_planImageView.layer.affineTransform = CGAffineTransformConcat(rotationTransform, scaleTransform);
self->_planImageView.center = _center;
}
When you rotate a rectangle W x H by an angle Θ, the bounding box takes the dimensions W' = W |cos Θ| + H |sin Θ|, H' = W |sin Θ| + H |cos Θ|.
If you need to fit that in a W" x H" rectangle, the scaling factor is the smaller of W"/W' and H"/H'.
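For example, rotating a 200 x 100 view by 30°: W' = 200·|cos 30°| + 100·|sin 30°| ≈ 223.2 and H' = 200·|sin 30°| + 100·|cos 30°| ≈ 186.6, so fitting back into the original 200 x 100 container needs a scale of min(200/223.2, 100/186.6) ≈ 0.54.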

How to correctly render a texture orthogonally in OpenGL?

I'm trying to render a 2D texture in an orthogonal projection.
Let me know what's wrong.
width and height are 128, the view is 256px wide and tall, so I expect the texture to be scaled 2x.
But all I get is this:
Code:
@interface ModNesOpenGLView : NSOpenGLView {
@public
char *pixels;
int width;
int height;
int zoom;
}
- (void) drawRect: (NSRect) bounds;
- (void) free;
@end
@implementation ModNesOpenGLView
-(void) awakeFromNib {
self->pixels = malloc( self->width * self->height * 3 );
memset( (void *)self->pixels, 0, self->width * self->height * 3 );
for( int y=0; y<self->height; ++y )
{
for( int x=0; x<self->width; ++x )
{
char r=0,g=0,b=0;
switch( y%3 ) {
case 0: r=0xFF; break;
case 1: g=0xFF; break;
case 2: b=0xFF; break;
}
[self setPixel_x:x y:y r:r g:g b:b];
}
}
}
-(void) setPixel_x:(int)x y:(int)y r:(char)r g:(char)g b:(char)b
{
self->pixels[ ( y * self->width + x ) * 3 ] = r;
self->pixels[ ( y * self->width + x ) * 3 + 1 ] = g;
self->pixels[ ( y * self->width + x ) * 3 + 2 ] = b;
}
-(void) drawRect: (NSRect) bounds
{
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT );
glTexImage2D( GL_TEXTURE_2D, 0, 3, self->width, self->height, 0,GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*) self->pixels );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // GL_LINEAR
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glEnable(GL_TEXTURE_2D);
// glTexSubImage2D(GL_TEXTURE_2D, 0 ,0, 0, self->width, self->height, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*) self->pixels );
glBegin( GL_QUADS );
glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
glTexCoord2d(1.0, 0.0); glVertex2d(self->width, 0.0);
glTexCoord2d(1.0, 1.0); glVertex2d(self->width, self->height);
glTexCoord2d(0.0, 1.0); glVertex2d(0.0, self->height);
glEnd();
glFlush();
}
- (void)prepareOpenGL
{
// Synchronize buffer swaps with vertical refresh rate
GLint swapInt = 1;
[[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
}
I realize I'm missing all the part about the initialization of the projection matrix and orthogonal projection.
I added it:
- (void)prepareOpenGL
{
// Synchronize buffer swaps with vertical refresh rate
GLint swapInt = 1;
[[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
glClearColor(0, 0, 0, 0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glViewport(0, 0, self->width, self->height);
}
And then I get this:
I'm confused.
Here's where I got the code from: code example
Your problem is with coordinate systems and their ranges. Looking at the coordinates you use for drawing:
glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
glTexCoord2d(1.0, 0.0); glVertex2d(self->width, 0.0);
glTexCoord2d(1.0, 1.0); glVertex2d(self->width, self->height);
glTexCoord2d(0.0, 1.0); glVertex2d(0.0, self->height);
The OpenGL coordinate system has a range of [-1.0, 1.0] in both x- and y-direction if you don't apply a transformation. This means that (0.0, 0.0), which is the bottom-left corner of the quad you are drawing, is in the center of the screen. It then extends to the right and top. The size of the quad is actually much bigger than the window, but it obviously gets clipped.
This explains the original version and resulting picture you posted. You end up with the top-right quadrant being filled, with a very small fraction of your texture (about one texel).
Then in the updated code, you add this:
glViewport(0, 0, self->width, self->height);
The viewport determines the part of the window you draw to. Since you say that width and height are 128, and the window size is 256x256, this call specifies that you only want to draw into the bottom-left quadrant of your window.
Since everything else is unchanged, you then still draw the top-right quadrant of your drawing area. So you end up filling the top-right quadrant of the bottom-left quadrant of the window, which is exactly what you have in the second image.
To fix this, the simplest approach is to not set the viewport to a non-default value (remove the glViewport() call), and use coordinates in the range [-1.0, 1.0] in both directions:
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, 1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, 1.0f);
Another option is that you set up a transformation that changes the coordinate range to the values you are using. In legacy OpenGL, which you are using, something like this should work:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, self->width, 0.0, self->height, -1.0, 1.0);
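With that projection in place, the original glVertex2d() coordinates from 0 to width/height map to the full view, and glViewport() should be left at its default of the whole window rather than set to 128x128.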

Rotating line like a balance board

Well,
I have a circle like the one in the photo.
I want to rotate my red line to the degree that I want.
The circle goes from 0 to 300 degrees.
I started to do something with
CGFloat wAngle = Degrees2Radians([_Weight.text intValue]/300.0*360);
_Arrow.layer.transform = CATransform3DMakeRotation (wAngle + M_PI, 0, 0, 1);
but in this snippet the 0 value was at the top, not at the bottom.
Probably because I'm not a genius at trigonometry... :)
What is the correct way to rotate the arrow properly?
How should the angle values be set?
Thanks.
You don't need to convert between degrees and radians. You have a relative value:
CGFloat relativeAngle = [_Weight.text intValue] / 300.0;
So just use it:
_Arrow.layer.transform = CATransform3DMakeRotation(relativeAngle * M_PI*2, 0, 0, 1);
If the start is wrong, just change the initial position of your view
(or add the correction to the new angle, like relativeAngle * M_PI*2 + correction).
// what PI means in degrees
M_PI * 2 = 360°
M_PI = 180°
M_PI_2 = 90°
M_PI_4 = 45°
You can have your relative angle like this
// transform your degrees (0->300) to radians
// (considering your starting direction is at M_PI*3/2)
CGFloat relativeAngle = - [_Weight.text intValue] * M_PI / 150.0;
_Arrow.layer.transform = CATransform3DMakeRotation(relativeAngle, 0, 0, 1);

Does iOS 5 support blur CoreImage filters?

According to the documentation it should support blurring, note the "Available in iOS 5.0 and later":
CIFilter Class Reference
But according to the device, it doesn't:
[CIFilter filterNamesInCategory:kCICategoryBlur];
returns nothing.
According to the following only these filters are available on my iPhone and Simulator (which are both running 5.0):
[CIFilter filterNamesInCategory:kCICategoryBuiltIn]
CIAdditionCompositing,
CIAffineTransform,
CICheckerboardGenerator,
CIColorBlendMode,
CIColorBurnBlendMode,
CIColorControls,
CIColorCube,
CIColorDodgeBlendMode,
CIColorInvert,
CIColorMatrix,
CIColorMonochrome,
CIConstantColorGenerator,
CICrop,
CIDarkenBlendMode,
CIDifferenceBlendMode,
CIExclusionBlendMode,
CIExposureAdjust,
CIFalseColor,
CIGammaAdjust,
CIGaussianGradient,
CIHardLightBlendMode,
CIHighlightShadowAdjust,
CIHueAdjust,
CIHueBlendMode,
CILightenBlendMode,
CILinearGradient,
CILuminosityBlendMode,
CIMaximumCompositing,
CIMinimumCompositing,
CIMultiplyBlendMode,
CIMultiplyCompositing,
CIOverlayBlendMode,
CIRadialGradient,
CISaturationBlendMode,
CIScreenBlendMode,
CISepiaTone,
CISoftLightBlendMode,
CISourceAtopCompositing,
CISourceInCompositing,
CISourceOutCompositing,
CISourceOverCompositing,
CIStraightenFilter,
CIStripesGenerator,
CITemperatureAndTint,
CIToneCurve,
CIVibrance,
CIVignette,
CIWhitePointAdjust
While Core Image on iOS 5.0 lacks blur filters, there is still a way to get GPU-accelerated blurs of images and video. My open source GPUImage framework has multiple blur types, including Gaussian (using the GPUImageGaussianBlurFilter for a general Gaussian or the GPUImageFastBlurFilter for a hardware-optimized 9-tap Gaussian), box (using a GPUImageBoxBlurFilter), median (using a GPUImageMedianFilter), and a bilateral blur (using a GPUImageBilateralBlurFilter).
I describe the shaders used to pull off the hardware-optimized Gaussian blur in this answer, and you can examine the code I use for the rest within the framework. These filters run tens of times faster than any CPU-bound routine I've tried yet.
I've also incorporated these blurs into multi-stage processing effects, like unsharp masking, tilt-shift filtering, Canny edge detection, and Harris corner detection, all of which are available as filters within this framework.
Again, in an attempt to solve all iOS blur issues, here is my contribution:
https://github.com/tomsoft1/StackBluriOS
A simple blur library based on Stack Blur. Stack Blur is very similar to Gaussian Blur, but much faster (see http://incubator.quasimondo.com/processing/fast_blur_deluxe.php )
use it like this:
UIImage *newIma = [sourceIma stackBlur:radius];
Hope this helps.
I too was disappointed to find that Core Image in iOS doesn't support blurs. Here's the function I wrote to do a 9-tap Gaussian blur on a UIImage. Call it repeatedly to get stronger blurs.
@interface UIImage (ImageBlur)
- (UIImage *)imageWithGaussianBlur9;
@end
@implementation UIImage (ImageBlur)
- (UIImage *)imageWithGaussianBlur9 {
float weight[5] = {0.2270270270, 0.1945945946, 0.1216216216, 0.0540540541, 0.0162162162};
// Blur horizontally
UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
[self drawInRect:CGRectMake(0, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[0]];
for (int x = 1; x < 5; ++x) {
[self drawInRect:CGRectMake(x, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[x]];
[self drawInRect:CGRectMake(-x, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[x]];
}
UIImage *horizBlurredImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Blur vertically
UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
[horizBlurredImage drawInRect:CGRectMake(0, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[0]];
for (int y = 1; y < 5; ++y) {
[horizBlurredImage drawInRect:CGRectMake(0, y, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[y]];
[horizBlurredImage drawInRect:CGRectMake(0, -y, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[y]];
}
UIImage *blurredImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//
return blurredImage;
}
@end
Just call it on an existing image like this:
UIImage *blurredImage = [originalImage imageWithGaussianBlur9];
and repeat it to get stronger blurring, like this:
blurredImage = [blurredImage imageWithGaussianBlur9];
Unfortunately, it does not support any blurs. For that, you'll have to roll your own.
UPDATE: As of iOS 6, [CIFilter filterNamesInCategory:kCICategoryBlur] returns CIGaussianBlur, meaning that this filter is available on the device. Even though this is true, you (probably) will get better performance and more flexibility using GPUImage.
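A minimal sketch of that iOS 6+ path, using only the standard Core Image API (the radius of 8 is just an example value; originalImage is assumed to be your source UIImage):
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *input = [CIImage imageWithCGImage:originalImage.CGImage];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:input forKey:kCIInputImageKey];
[blur setValue:@8.0 forKey:kCIInputRadiusKey];
// The blur expands the image extent; crop back to the input rect
CGImageRef cgImage = [context createCGImage:blur.outputImage fromRect:input.extent];
UIImage *blurredImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);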
Here is a link to our tutorial on creating a blur effect in iOS applications with different approaches. http://blog.denivip.ru/index.php/2013/01/blur-effect-in-ios-applications/?lang=en
If you can use OpenGL ES in your iOS app, this is how you calculate the median in a pixel neighborhood radius of your choosing (the median being a type of blur, of course):
kernel vec4 medianUnsharpKernel(sampler u) {
vec4 pixel = unpremultiply(sample(u, samplerCoord(u)));
vec2 xy = destCoord();
int radius = 3;
int bounds = (radius - 1) / 2;
vec4 sum = vec4(0.0);
for (int i = (0 - bounds); i <= bounds; i++)
{
for (int j = (0 - bounds); j <= bounds; j++ )
{
sum += unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
}
}
vec4 mean = vec4(sum / vec4(pow(float(radius), 2.0)));
float mean_avg = float(mean);
float comp_avg = 0.0;
vec4 comp = vec4(0.0);
vec4 median = mean;
for (int i = (0 - bounds); i <= bounds; i++)
{
for (int j = (0 - bounds); j <= bounds; j++ )
{
comp = unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
comp_avg = float(comp);
median = (comp_avg < mean_avg) ? max(median, comp) : median;
}
}
return premultiply(vec4(vec3(abs(pixel.rgb - median.rgb)), 1.0));
}
A brief description of the steps:
1. Calculate the mean of the values of the pixels surrounding the source pixel in a 3x3 neighborhood.
2. Find the maximum pixel value of all pixels in the same neighborhood that are less than the mean.
3. [OPTIONAL] Subtract the median pixel value from the source pixel value for edge detection.
If you're using the median value for edge detection, there are a couple of ways to modify the above code for better results, namely hybrid median filtering and truncated median filtering (a substitute and a better 'mode' filtering). If you're interested, please ask.
Because I'm using Xamarin, I converted John Stephen's answer to C#:
private UIImage ImageWithGaussianBlur9(UIImage image)
{
var weight = new nfloat[]
{
0.2270270270f, 0.1945945946f, 0.1216216216f, 0.0540540541f, 0.0162162162f
};
var width = image.Size.Width;
var height = image.Size.Height;
// Blur horizontally
UIGraphics.BeginImageContextWithOptions(image.Size, false, 1f);
image.Draw(new CGRect(0f, 0f, width, height), CGBlendMode.PlusLighter, weight[0]);
for (int x = 1; x < 5; ++x)
{
image.Draw(new CGRect(x, 0, width, height), CGBlendMode.PlusLighter, weight[x]);
image.Draw(new CGRect(-x, 0, width, height), CGBlendMode.PlusLighter, weight[x]);
}
var horizBlurredImage = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
// Blur vertically
UIGraphics.BeginImageContextWithOptions(image.Size, false, 1f);
horizBlurredImage.Draw(new CGRect(0, 0, width, height), CGBlendMode.PlusLighter, weight[0]);
for (int y = 1; y < 5; ++y)
{
horizBlurredImage.Draw(new CGRect(0, y, width, height), CGBlendMode.PlusLighter, weight[y]);
horizBlurredImage.Draw(new CGRect(0, -y, width, height), CGBlendMode.PlusLighter, weight[y]);
}
var blurredImage = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
return blurredImage;
}

CGContextDrawAngleGradient?

Dipping my feet into some more Core Graphics drawing, I'm attempting to recreate a wicked looking metallic knob, and I've landed on what is probably a show-stopping issue.
There doesn't seem to be any way to draw angle gradients in Core Graphics. I see there's CGContextDrawRadialGradient() and CGContextDrawLinearGradient(), but there's nothing that I see that would allow me to draw an angle gradient. Does anyone know of a workaround, or a bit of framework hidden away somewhere to accomplish this without pre-rendering the knob into an image file?
(Image: AngleGradientKnob, http://dl.dropbox.com/u/3009808/AngleGradient.png)
This is kind of thrown together, but it's the approach I'd probably take. It creates an angle gradient by drawing it directly into a bitmap using some simple trig, then clipping it to a circle. I create a grid of memory using a grayscale colorspace, calculate the angle from a given point to the center, and then color that based on a periodic function running between 0 and 255. You could of course expand this to do RGBA color as well.
Of course you'd cache this and play with the math to get the colors you want. This currently runs all the way from black to white, which doesn't look as good as you'd like.
- (void)drawRect:(CGRect)rect {
CGImageAlphaInfo alphaInfo = kCGImageAlphaNone;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
size_t components = CGColorSpaceGetNumberOfComponents( colorSpace );
size_t width = 100;
size_t height = 100;
size_t bitsPerComponent = 8;
size_t bytesPerComponent = bitsPerComponent / 8;
size_t bytesPerRow = width * bytesPerComponent * components;
size_t dataLength = bytesPerRow * height;
uint8_t data[dataLength];
CGContextRef imageCtx = CGBitmapContextCreate( &data, width, height, bitsPerComponent,
bytesPerRow, colorSpace, alphaInfo );
NSUInteger offset = 0;
for (NSUInteger y = 0; y < height; ++y) {
for (NSUInteger x = 0; x < bytesPerRow; x += components) {
CGFloat opposite = y - height/2.;
CGFloat adjacent = x - width/2.;
if (adjacent == 0) adjacent = 0.001;
CGFloat angle = atan(opposite/adjacent);
data[offset] = abs((cos(angle * 2) * 255));
offset += components * bytesPerComponent;
}
}
CGImageRef image = CGBitmapContextCreateImage(imageCtx);
CGContextRelease(imageCtx);
CGColorSpaceRelease(colorSpace);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect buttonRect = CGRectMake(100, 100, width, width);
CGContextAddEllipseInRect(ctx, buttonRect);
CGContextClip(ctx);
CGContextDrawImage(ctx, buttonRect, image);
CGImageRelease(image);
}
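To get color output instead of grayscale, the same approach works with an RGB colorspace and four components per pixel (e.g. CGColorSpaceCreateDeviceRGB() with kCGImageAlphaPremultipliedLast), writing R, G, B, and A per texel.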
To expand on what's in the comments to the accepted answer, here's the code for generating an angle gradient using Core Image. This should work in iOS 8 or later.
// generate a dummy image of the required size
UIGraphicsBeginImageContextWithOptions(CGSizeMake(256.0, 256.0), NO, [[UIScreen mainScreen] scale]);
CIImage *dummyImage = [CIImage imageWithCGImage:UIGraphicsGetImageFromCurrentImageContext().CGImage];
// define the kernel algorithm
NSString *kernelString = @"kernel vec4 circularGradientKernel(__color startColor, __color endColor, vec2 center, float radius) { \n"
" vec2 point = destCoord() - center;"
" float rsq = point.x * point.x + point.y * point.y;"
" float theta = mod(atan(point.y, point.x), radians(360.0));"
" return (rsq < radius*radius) ? mix(startColor, endColor, 0.5+0.5*cos(4.0*theta)) : vec4(0.0, 0.0, 0.0, 1.0);"
"}";
// initialize a Core Image context and the filter kernel
CIContext *context = [CIContext contextWithOptions:nil];
CIColorKernel *kernel = [CIColorKernel kernelWithString:kernelString];
// argument array, corresponding to the first line of the kernel string
NSArray *args = @[ [CIColor colorWithRed:0.5 green:0.5 blue:0.5],
[CIColor colorWithRed:1.0 green:1.0 blue:1.0],
[CIVector vectorWithCGPoint:CGPointMake(CGRectGetMidX(dummyImage.extent),CGRectGetMidY(dummyImage.extent))],
[NSNumber numberWithFloat:200.0]];
// apply the kernel to our dummy image, and convert the result to a UIImage
CIImage *ciOutputImage = [kernel applyWithExtent:dummyImage.extent arguments:args];
CGImageRef cgOutput = [context createCGImage:ciOutputImage fromRect:ciOutputImage.extent];
UIImage *gradientImage = [UIImage imageWithCGImage:cgOutput];
CGImageRelease(cgOutput);
This generates the following image: