OpenGL ES 2.0 Vertex Lighting inconsistencies

I am setting up my lighting as shown below. The issue is that it looks like there is one giant light in the middle (as you can see in the sphere screenshot below). The goal is to have three lights in a triangle shape, with each individual light visible on the sphere (as in the bottom screenshot). To accomplish this, do I need to create each light as a "spotlight"? I noticed GLKEffectPropertyLight has properties for this, but I'm uncertain whether that's the effect I'm going for.
- (void)setupLighting
{
    // setup the lighting
    self.lightingType = GLKLightingTypePerPixel;
    self.colorMaterialEnabled = GL_TRUE;
    // light positions
    GLKVector4 light0Pos = GLKVector4Make(0.0f, 0.0f, 40.0f, 1.0f);
    GLKVector4 light1Pos = GLKVector4Make(-2.0f, -2.0f, 40.0f, 1.0f);
    GLKVector4 light2Pos = GLKVector4Make(2.0f, -2.0f, 40.0f, 1.0f);
    // specular, diffuse and ambient colors
    GLKVector4 frontLightDif = GLKVector4Make(0.1f, 0.1f, 0.1f, 1.0f);
    GLKVector4 specular = GLKVector4Make(0.45f, 0.45f, 0.45f, 1.0f);
    GLKVector4 diffuse = GLKVector4Make(0.15f, 0.15f, 0.15f, 1.0f);
    GLKVector4 ambient = GLKVector4Make(0.2f, 0.2f, 0.2f, 1.0f);
    // setup light 0 - ambient light
    self.light0.enabled = GL_TRUE;
    self.light0.position = light0Pos;
    self.light0.ambientColor = ambient;
    self.light0.diffuseColor = specular;
    self.light0.specularColor = specular;
    // setup light 1 - head-on light
    self.light1.enabled = GL_TRUE;
    self.light1.position = light1Pos;
    self.light1.diffuseColor = frontLightDif;
    self.light1.specularColor = specular;
    // setup light 2
    self.light2.enabled = GL_TRUE;
    self.light2.position = light2Pos;
    self.light2.diffuseColor = diffuse;
    self.light2.specularColor = specular;
}
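For reference, if the goal is three visibly separate pools of light, narrowing each light into a spotlight is something GLKEffectPropertyLight supports directly. Below is a minimal sketch for one light, assuming the same self.light1 as above; the cutoff, exponent, and attenuation values are illustrative guesses, not values from the question:

// Narrow light1 into a spotlight aimed along -z toward the sphere.
// spotCutoff is the cone's half-angle in degrees (180 means no cutoff).
self.light1.position = GLKVector4Make(-2.0f, -2.0f, 40.0f, 1.0f); // w = 1 -> positional light
self.light1.spotDirection = GLKVector3Make(0.0f, 0.0f, -1.0f);
self.light1.spotCutoff = 15.0f;            // narrow cone keeps the three lights distinct
self.light1.spotExponent = 4.0f;           // intensity falloff toward the cone edge
self.light1.quadraticAttenuation = 0.01f;  // optional falloff with distance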

OpenGL ES 2.0 Grid Lines With Triangle Strip

I need to draw a 10 x 10 grid in OpenGL ES 2.0. Are triangle strips the best way to do this? How do you draw the grid without also drawing diagonal lines? All of the searches I come up with show grids with the diagonals drawn, which defeats the purpose.
I've drawn the grid as a bunch of individual lines, but I'm having trouble transforming it as a unit. Is this the right approach that I'm just executing incorrectly, or is there a better way, like triangle strips? Thanks!
// draw the horizontal gridlines
for (i = 0; i < 12; i++) {
    modelViewMatrixRight = GLKMatrix4MakeTranslation(0.0f, 1.0f/11.0f * (float)i - 0.5f, -2.0f);
    // modelViewMatrixRight = GLKMatrix4Rotate(modelViewMatrixRight, GLKMathDegreesToRadians(-45.0f), 0.0f, 0.0f, 1.0f);
    _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrixRight, modelViewMatrixRight);
    glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
    glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);
    glUniform4fv(uniforms[UNIFORM_COLOR_VECTOR], 1, _color);
    glBindVertexArrayOES([my_line[i] getVertexArray]); // make this line the current object
    [my_line[i] render];
}
// draw the vertical gridlines
for (i = 12; i < 24; i++) {
    modelViewMatrixRight = GLKMatrix4MakeTranslation(0.5f - 1.0f/11.0f * (float)(i - 12), 0.0f, -2.0f);
    modelViewMatrixRight = GLKMatrix4Rotate(modelViewMatrixRight, GLKMathDegreesToRadians(90.0f), 0.0f, 0.0f, 1.0f);
    modelViewMatrixRight = GLKMatrix4Rotate(modelViewMatrixRight, GLKMathDegreesToRadians(60.0f), 1.0f, 0.0f, 1.0f);
    _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrixRight, modelViewMatrixRight);
    glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, _modelViewProjectionMatrix.m);
    glUniformMatrix3fv(uniforms[UNIFORM_NORMAL_MATRIX], 1, 0, _normalMatrix.m);
    glUniform4fv(uniforms[UNIFORM_COLOR_VECTOR], 1, _color);
    glBindVertexArrayOES([my_line[i] getVertexArray]); // make this line the current object
    [my_line[i] render];
}
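One way to make the grid transform as a unit, offered as a sketch rather than a drop-in fix: put all the line endpoints into a single vertex array and issue one GL_LINES draw call, so one model-view matrix moves the whole grid (GL_LINES also avoids the diagonals a triangle strip would introduce). This assumes the same uniforms[] handles as above, a unit square centered on the origin with 11 lines each way, and a hypothetical positionAttrib for your shader's position attribute:

// Build 11 horizontal + 11 vertical lines, 2 endpoints each, (x, y) per vertex.
GLfloat gridVerts[(11 + 11) * 2 * 2];
int v = 0;
for (int i = 0; i < 11; i++) {
    GLfloat t = (GLfloat)i / 10.0f - 0.5f;       // -0.5 ... +0.5 in 10 steps
    gridVerts[v++] = -0.5f; gridVerts[v++] = t;  // horizontal line, left endpoint
    gridVerts[v++] =  0.5f; gridVerts[v++] = t;  // horizontal line, right endpoint
    gridVerts[v++] = t; gridVerts[v++] = -0.5f;  // vertical line, bottom endpoint
    gridVerts[v++] = t; gridVerts[v++] =  0.5f;  // vertical line, top endpoint
}
// One matrix now transforms the entire grid as a unit.
GLKMatrix4 mvp = GLKMatrix4Multiply(projectionMatrixRight,
                                    GLKMatrix4MakeTranslation(0.0f, 0.0f, -2.0f));
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, 0, mvp.m);
glEnableVertexAttribArray(positionAttrib);
glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 0, gridVerts);
glDrawArrays(GL_LINES, 0, (11 + 11) * 2);        // 22 lines = 44 vertices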

CGContextAddEllipse - overlapping area gets clipped

I'd like to draw a glass from a few elements:
- a top ellipse
- a bottom ellipse
- and the lines in between
Next, it should be filled with a gradient. The elements work, but where the middle of the glass touches the top or bottom ellipse, the area gets clipped.
- (void)drawRect:(CGRect)rect
{
    CGPoint c = self.center;
    // Drawing code
    CGContextRef cx = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(cx, 1.0);
    [[UIColor whiteColor] setStroke];
    // Draw the shape of the glass
    CGContextBeginPath(cx);
    // Top and bottom ellipse
    CGContextAddEllipseInRect(cx, CGRectMake(0, 0, 100, 20));
    CGContextAddEllipseInRect(cx, CGRectMake(10, 90, 80, 20));
    // Define the points for the area in between
    CGPoint points[] = { {0.0,10.0}, {10.0,100.0}, {90.0,100.0}, {100.0,10.0} };
    CGContextAddLines(cx, points, 4);
    CGContextClosePath(cx);
    // Clip so that only the clipped area will be filled with the gradient
    CGContextClip(cx);
    // Create and draw the gradient
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGFloat colorComponents[] = { 1.0f, 1.0f, 1.0f, 0.0f,
                                  0.0f, 0.0f, 1.0f, 1.0f };
    CGFloat locations[] = { 0.0, 1.0 };
    CGGradientRef myGradient = CGGradientCreateWithColorComponents(rgbColorSpace, colorComponents, locations, 2);
    CGPoint s = CGPointMake(0, 0);
    CGPoint e = CGPointMake(100, 100);
    CGContextDrawLinearGradient(cx, myGradient, s, e, kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation);
    CGColorSpaceRelease(rgbColorSpace);
    CGGradientRelease(myGradient);
}
Here's how it looks:
Is there any possibility to "fill" the whole ellipse? I played around with blend modes, but that didn't help.
Thanks
Try replacing the points[] initialization code with the following:
CGPoint points[] = { {0.0,10.0}, {100.0,10.0}, {90.0,100.0}, {10.0,100.0} };
Core Graphics uses the non-zero winding rule to determine how to fill a path. Since the ellipses are drawn clockwise and your trapezoid was drawn counterclockwise, the overlapping regions were not filled. Changing the drawing order of the trapezoid to clockwise results in an object that is completely filled.
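To see the rule in isolation, here is a small, hypothetical sketch (not the question's glass): each clockwise crossing adds +1 to a point's winding count, each counterclockwise crossing adds -1, and only points with a non-zero count are filled, so nesting two same-direction rectangles fills solid while mixing directions punches a hole:

CGContextRef cx = UIGraphicsGetCurrentContext();
[[UIColor blackColor] setFill];
// Screen-clockwise in UIKit's flipped (y-down) coordinate system.
CGPoint outerCW[]  = { {10,10}, {110,10}, {110,110}, {10,110} };
CGPoint innerCCW[] = { {40,40}, {40,80}, {80,80}, {80,40} }; // opposite direction -> winding 0 -> hole
CGContextBeginPath(cx);
CGContextAddLines(cx, outerCW, 4);
CGContextClosePath(cx);
CGContextAddLines(cx, innerCCW, 4); // reverse this point order and the hole fills in
CGContextClosePath(cx);
CGContextFillPath(cx); // fills using the non-zero winding rule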

High-Resolution Content for a Paint App Using OpenGL ES on an iPad Device

I am working on a paint app [taking reference from the GLPaint sample app] for iPhone and iPad. In this app I fill colors into paint images by drawing lines onscreen based on where the user touches. The app works properly on iPhone. On iPad, the lines on the paint view are fine without zooming [no pixel distortion], but after zooming the lines on the paint view have distorted pixels, i.e. the OpenGL ES content is not high resolution.
I am using the following code to initialize the paint view:
-(id)initWithCoder:(NSCoder*)coder {
    CGImageRef brushImage;
    CGContextRef brushContext;
    GLubyte *brushData;
    size_t width, height;
    CGFloat components[3];
    if ((self = [super initWithCoder:coder])) {
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = NO;
        eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
        context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
        if (!context || ![EAGLContext setCurrentContext:context]) {
            return nil;
        }
        if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
            brushImage = [UIImage imageNamed:@"circle 64.png"].CGImage;
        }
        else {
            brushImage = [UIImage imageNamed:@"flower 128.png"].CGImage;
        }
        // Get the width and height of the image
        width = CGImageGetWidth(brushImage);
        height = CGImageGetHeight(brushImage);
        if (brushImage) {
            // Allocate memory needed for the bitmap context
            brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
            // Use the bitmap creation function provided by the Core Graphics framework.
            brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
            // After you create the context, you can draw the image into the context.
            CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
            // You don't need the context at this point, so release it to avoid memory leaks.
            CGContextRelease(brushContext);
            // Use OpenGL ES to generate a name for the texture.
            glGenTextures(1, &brushTexture);
            // Bind the texture name.
            glBindTexture(GL_TEXTURE_2D, brushTexture);
            // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            // Specify a 2D texture image, providing a pointer to the image data in memory
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
            // Release the image data; it's no longer needed
            free(brushData);
        }
        CGFloat scale;
        if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
            NSLog(@"iPad");
            self.contentScaleFactor = 1.0;
            scale = self.contentScaleFactor;
        }
        else {
            // NSLog(@"iPhone");
            self.contentScaleFactor = 2.0;
            scale = self.contentScaleFactor; // keep scale initialized on both paths
        }
        //scale = 2.000000;
        // Setup OpenGL states
        glMatrixMode(GL_PROJECTION);
        CGRect frame = self.bounds;
        NSLog(@"Scale %f", scale);
        glOrthof(0, (frame.size.width) * scale, 0, (frame.size.height) * scale, -1, 1);
        glViewport(0, 0, (frame.size.width) * scale, (frame.size.height) * scale);
        glMatrixMode(GL_MODELVIEW);
        glDisable(GL_DITHER);
        glEnable(GL_BLEND);
        glEnable(GL_TEXTURE_2D);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnable(GL_BLEND);
        // Set a blending function appropriate for premultiplied alpha pixel data
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_POINT_SPRITE_OES);
        glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
        glPointSize(width / kBrushScale);
        // Make sure to start with a cleared buffer
        needsErase = YES;
        // Define a starting color
        HSL2RGB((CGFloat) 0.0 / (CGFloat)kPaletteSize, kSaturation, kLuminosity, &components[0], &components[1], &components[2]);
        [self setBrushColorWithRed:245.0f green:245.0f blue:0.0f];
        boolEraser = NO;
    }
    return self;
}
To create the framebuffer:
-(BOOL)createFramebuffer {
    // Generate IDs for a framebuffer object and a color renderbuffer
    glGenFramebuffersOES(1, &viewFramebuffer);
    glGenRenderbuffersOES(1, &viewRenderbuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    // This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer),
    // allowing us to draw into a buffer that will later be rendered to screen wherever the layer is (which corresponds with our view).
    [context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id<EAGLDrawable>)self.layer];
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
    // For this sample, we also need a depth buffer, so we'll create and attach one via another renderbuffer.
    glGenRenderbuffersOES(1, &depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
        NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
        return NO;
    }
    return YES;
}
Lines are drawn using the following code:
-(void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end {
    static GLfloat* vertexBuffer = NULL;
    static NSUInteger vertexMax = 64;
    NSUInteger vertexCount = 0, count, i;
    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    // Convert locations from points to pixels
    CGFloat scale = self.contentScaleFactor;
    NSLog(@"Scale %f", scale);
    start.x *= scale;
    start.y *= scale;
    end.x *= scale;
    end.y *= scale;
    float dx = end.x - start.x;
    float dy = end.y - start.y;
    float dist = (sqrtf(dx * dx + dy * dy) / kBrushPixelStep);
    // Allocate vertex array buffer
    if (vertexBuffer == NULL)
        vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));
    count = MAX(ceilf(dist), 1);
    //NSLog(@"count %d", count);
    for (i = 0; i < count; ++i) {
        if (vertexCount == vertexMax) {
            vertexMax = 2 * vertexMax;
            vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
        }
        vertexBuffer[2 * vertexCount + 0] = start.x + (dx) * ((GLfloat)i / (GLfloat)count);
        vertexBuffer[2 * vertexCount + 1] = start.y + (dy) * ((GLfloat)i / (GLfloat)count);
        vertexCount += 1;
    }
    // Render the vertex array
    glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
    glDrawArrays(GL_POINTS, 0, vertexCount);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
On the iPad the content of the paint view is proper, high-resolution, for the normal view, but after zooming I am not getting high-resolution content; the pixels of the lines look distorted.
I have tried changing contentScaleFactor as well as the scale parameter in the above code to see the difference, but nothing worked as expected. The iPad supports a contentScaleFactor of 1.0 & 1.5; when I set contentScaleFactor = 2 the paint view cannot display lines, it shows weird dotted lines.
Is there any way to make the OpenGL ES content high resolution?
The short answer is YES, you can have "high resolution" content.
But you will have to clearly understand the issue before solving it. This is the long answer:
The brushes you use have a specific size (64 or 128). As soon as your virtual paper (the area in which you draw) displays its pixels larger than 1 screen pixel, you will start to see the "distortion". For example, in your favorite picture viewer, if you open one of your brushes and zoom in, the picture will also be distorted. You cannot avoid that unless you use vector brushes (which is not in the scope of this answer and is far more complicated).
The quickest way would be to use more detailed brushes, but that is a fudge: if you zoom enough, the texture will look distorted again.
You can also add a magnification filter using glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);. You used MIN in your sample; adding this one will smooth the textures when they are magnified.
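Concretely, the pair of calls would sit right after the texture is bound in initWithCoder: (the MIN line is already in the sample; the MAG line is the addition):

glBindTexture(GL_TEXTURE_2D, brushTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // already in the sample
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // smooths the texture when zoomed in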
I am not sure what you mean by high resolution. OpenGL is a vector library with a bitmap-backed rendering system. The backing store will have the size in pixels (multiplied by the content scale factor) of the layer you use to create the renderbuffer in:
- (BOOL)renderbufferStorage:(NSUInteger)target fromDrawable:(id<EAGLDrawable>)drawable
Once it is created, there is no way to change the resolution, nor would it generally make sense to do so; one renderbuffer pixel per screen pixel makes the most sense.
It is hard to know exactly what problem you are trying to solve without knowing what zooming you are talking about. I assume you have set up a CAEAGLLayer in a UIScrollView and you are seeing pixel artifacts. This is inevitable; how else could it work?
If you want your lines to be smooth, you need to implement them using triangle strip meshes with alpha blending at the edges, which will provide antialiasing. Instead of zooming the layer itself, you would simply "zoom" the contents by scaling the vertices while keeping the CAEAGLLayer the same size. This would eliminate pixelation and give purdy alpha-blended edges.
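A minimal sketch of that last idea for a single segment, under the sample's ES 1.1 fixed-function setup (x0/y0, x1/y1, halfWidth, and zoom are hypothetical variables; the alpha-fringe strips that do the actual edge antialiasing are omitted for brevity):

// Extrude the segment (x0,y0)-(x1,y1) into a quad of width 2*halfWidth (assumes len > 0).
float dx = x1 - x0, dy = y1 - y0;
float len = sqrtf(dx * dx + dy * dy);
float nx = -dy / len * halfWidth; // unit normal scaled to half the stroke width
float ny =  dx / len * halfWidth;
GLfloat quad[] = {
    (x0 + nx) * zoom, (y0 + ny) * zoom, // "zoom" by scaling vertices,
    (x0 - nx) * zoom, (y0 - ny) * zoom, // not by scaling the CAEAGLLayer
    (x1 + nx) * zoom, (y1 + ny) * zoom,
    (x1 - nx) * zoom, (y1 - ny) * zoom,
};
glVertexPointer(2, GL_FLOAT, 0, quad);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);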

Does iOS 5 support blur CoreImage filters?

According to the documentation it should support blurring; note the "Available in iOS 5.0 and later":
CIFilter Class Reference
But according to the device, it doesn't:
[CIFilter filterNamesInCategory:kCICategoryBlur];
returns nothing.
According to the following, only these filters are available on my iPhone and the Simulator (both running 5.0):
[CIFilter filterNamesInCategory:kCICategoryBuiltIn]
CIAdditionCompositing,
CIAffineTransform,
CICheckerboardGenerator,
CIColorBlendMode,
CIColorBurnBlendMode,
CIColorControls,
CIColorCube,
CIColorDodgeBlendMode,
CIColorInvert,
CIColorMatrix,
CIColorMonochrome,
CIConstantColorGenerator,
CICrop,
CIDarkenBlendMode,
CIDifferenceBlendMode,
CIExclusionBlendMode,
CIExposureAdjust,
CIFalseColor,
CIGammaAdjust,
CIGaussianGradient,
CIHardLightBlendMode,
CIHighlightShadowAdjust,
CIHueAdjust,
CIHueBlendMode,
CILightenBlendMode,
CILinearGradient,
CILuminosityBlendMode,
CIMaximumCompositing,
CIMinimumCompositing,
CIMultiplyBlendMode,
CIMultiplyCompositing,
CIOverlayBlendMode,
CIRadialGradient,
CISaturationBlendMode,
CIScreenBlendMode,
CISepiaTone,
CISoftLightBlendMode,
CISourceAtopCompositing,
CISourceInCompositing,
CISourceOutCompositing,
CISourceOverCompositing,
CIStraightenFilter,
CIStripesGenerator,
CITemperatureAndTint,
CIToneCurve,
CIVibrance,
CIVignette,
CIWhitePointAdjust
While Core Image on iOS 5.0 lacks blur filters, there is still a way to get GPU-accelerated blurs of images and video. My open source GPUImage framework has multiple blur types, including Gaussian (using the GPUImageGaussianBlurFilter for a general Gaussian or the GPUImageFastBlurFilter for a hardware-optimized 9-tap Gaussian), box (using a GPUImageBoxBlurFilter), median (using a GPUImageMedianFilter), and a bilateral blur (using a GPUImageBilateralBlurFilter).
I describe the shaders used to pull off the hardware-optimized Gaussian blur in this answer, and you can examine the code I use for the rest within the framework. These filters run tens of times faster than any CPU-bound routine I've tried yet.
I've also incorporated these blurs into multi-stage processing effects, like unsharp masking, tilt-shift filtering, Canny edge detection, and Harris corner detection, all of which are available as filters within this framework.
Again, in an attempt to solve all iOS blur issues, here is my contribution:
https://github.com/tomsoft1/StackBluriOS
A simple blur library based on Stack Blur. Stack Blur is very similar to Gaussian blur, but much faster (see http://incubator.quasimondo.com/processing/fast_blur_deluxe.php ).
Use it like this:
UIImage *newIma = [sourceIma stackBlur:radius];
Hope this helps
I too was disappointed to find that Core Image in iOS doesn't support blurs. Here's the function I wrote to do a 9-tap Gaussian blur on a UIImage. Call it repeatedly to get stronger blurs.
@interface UIImage (ImageBlur)
- (UIImage *)imageWithGaussianBlur9;
@end

@implementation UIImage (ImageBlur)
- (UIImage *)imageWithGaussianBlur9 {
    float weight[5] = {0.2270270270, 0.1945945946, 0.1216216216, 0.0540540541, 0.0162162162};
    // Blur horizontally
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [self drawInRect:CGRectMake(0, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[0]];
    for (int x = 1; x < 5; ++x) {
        [self drawInRect:CGRectMake(x, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[x]];
        [self drawInRect:CGRectMake(-x, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[x]];
    }
    UIImage *horizBlurredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Blur vertically
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [horizBlurredImage drawInRect:CGRectMake(0, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[0]];
    for (int y = 1; y < 5; ++y) {
        [horizBlurredImage drawInRect:CGRectMake(0, y, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[y]];
        [horizBlurredImage drawInRect:CGRectMake(0, -y, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[y]];
    }
    UIImage *blurredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return blurredImage;
}
@end
Just call it on an existing image like this:
UIImage *blurredImage = [originalImage imageWithGaussianBlur9];
and repeat it to get stronger blurring, like this:
blurredImage = [blurredImage imageWithGaussianBlur9];
Unfortunately, it does not support any blurs. For those, you'll have to roll your own.
UPDATE: As of iOS 6, [CIFilter filterNamesInCategory:kCICategoryBlur]; returns CIGaussianBlur, meaning that this filter is available on the device. Even though this is true, you (probably) will get better performance and more flexibility using GPUImage.
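For completeness, here is a minimal sketch of applying CIGaussianBlur on iOS 6 or later (sourceImage is a hypothetical UIImage; inputRadius is in pixels, and the output extent grows with the blur, so it is cropped back to the source extent here):

CIContext *ciContext = [CIContext contextWithOptions:nil];
CIImage *input = [CIImage imageWithCGImage:sourceImage.CGImage];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:input forKey:kCIInputImageKey];
[blur setValue:@8.0 forKey:kCIInputRadiusKey];
CIImage *output = [blur valueForKey:kCIOutputImageKey];
CGImageRef cgBlurred = [ciContext createCGImage:output fromRect:input.extent]; // crop the grown extent
UIImage *blurred = [UIImage imageWithCGImage:cgBlurred];
CGImageRelease(cgBlurred);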
Here is the link to our tutorial on making a blur effect in iOS applications with different approaches: http://blog.denivip.ru/index.php/2013/01/blur-effect-in-ios-applications/?lang=en
If you can use OpenGL ES in your iOS app, this is how you calculate the median in a pixel neighborhood radius of your choosing (the median being a type of blur, of course):
kernel vec4 medianUnsharpKernel(sampler u) {
    vec4 pixel = unpremultiply(sample(u, samplerCoord(u)));
    vec2 xy = destCoord();
    int radius = 3;
    int bounds = (radius - 1) / 2;
    vec4 sum = vec4(0.0);
    for (int i = (0 - bounds); i <= bounds; i++)
    {
        for (int j = (0 - bounds); j <= bounds; j++ )
        {
            sum += unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
        }
    }
    vec4 mean = vec4(sum / vec4(pow(float(radius), 2.0)));
    float mean_avg = float(mean);
    float comp_avg = 0.0;
    vec4 comp = vec4(0.0);
    vec4 median = mean;
    for (int i = (0 - bounds); i <= bounds; i++)
    {
        for (int j = (0 - bounds); j <= bounds; j++ )
        {
            comp = unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
            comp_avg = float(comp);
            median = (comp_avg < mean_avg) ? max(median, comp) : median;
        }
    }
    return premultiply(vec4(vec3(abs(pixel.rgb - median.rgb)), 1.0));
}
A brief description of the steps:
1. Calculate the mean of the values of the pixels surrounding the source pixel in a 3x3 neighborhood.
2. Find the maximum pixel value of all pixels in the same neighborhood that are less than the mean.
3. [OPTIONAL] Subtract the median pixel value from the source pixel value for edge detection.
If you're using the median value for edge detection, there are a couple of ways to modify the above code for better results, namely hybrid median filtering and truncated median filtering (a substitute and a better 'mode' filtering). If you're interested, please ask.
Because I'm using Xamarin, I converted John Stephen's answer to C#:
private UIImage ImageWithGaussianBlur9(UIImage image)
{
    var weight = new nfloat[]
    {
        0.2270270270f, 0.1945945946f, 0.1216216216f, 0.0540540541f, 0.0162162162f
    };
    var width = image.Size.Width;
    var height = image.Size.Height;
    // Blur horizontally
    UIGraphics.BeginImageContextWithOptions(image.Size, false, 1f);
    image.Draw(new CGRect(0f, 0f, width, height), CGBlendMode.PlusLighter, weight[0]);
    for (int x = 1; x < 5; ++x)
    {
        image.Draw(new CGRect(x, 0, width, height), CGBlendMode.PlusLighter, weight[x]);
        image.Draw(new CGRect(-x, 0, width, height), CGBlendMode.PlusLighter, weight[x]);
    }
    var horizBlurredImage = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();
    // Blur vertically
    UIGraphics.BeginImageContextWithOptions(image.Size, false, 1f);
    horizBlurredImage.Draw(new CGRect(0, 0, width, height), CGBlendMode.PlusLighter, weight[0]);
    for (int y = 1; y < 5; ++y)
    {
        horizBlurredImage.Draw(new CGRect(0, y, width, height), CGBlendMode.PlusLighter, weight[y]);
        horizBlurredImage.Draw(new CGRect(0, -y, width, height), CGBlendMode.PlusLighter, weight[y]);
    }
    var blurredImage = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();
    return blurredImage;
}

CGContextDrawAngleGradient?

Dipping my feet into some more Core Graphics drawing, I'm attempting to recreate a wicked-looking metallic knob, and I've landed on what is probably a show-stopping issue.
There doesn't seem to be any way to draw angle gradients in Core Graphics. I see there are CGContextDrawRadialGradient() and CGContextDrawLinearGradient(), but nothing that would allow me to draw an angle gradient. Does anyone know of a workaround, or a bit of framework hidden away somewhere, to accomplish this without pre-rendering the knob into an image file?
AngleGradientKnob: http://dl.dropbox.com/u/3009808/AngleGradient.png
This is kind of thrown together, but it's the approach I'd probably take. It creates an angle gradient by drawing it directly into a bitmap using some simple trig, then clipping it to a circle. I create a grid of memory using a grayscale colorspace, calculate the angle from a given point to the center, and then color that based on a periodic function running from 0 to 255. You could of course expand this to do RGBA color as well.
Of course you'd cache this and play with the math to get the colors you want. It currently runs all the way from black to white, which doesn't look as good as you'd like.
- (void)drawRect:(CGRect)rect {
    CGImageAlphaInfo alphaInfo = kCGImageAlphaNone;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    size_t components = CGColorSpaceGetNumberOfComponents(colorSpace);
    size_t width = 100;
    size_t height = 100;
    size_t bitsPerComponent = 8;
    size_t bytesPerComponent = bitsPerComponent / 8;
    size_t bytesPerRow = width * bytesPerComponent * components;
    size_t dataLength = bytesPerRow * height;
    uint8_t data[dataLength];
    CGContextRef imageCtx = CGBitmapContextCreate(data, width, height, bitsPerComponent,
                                                  bytesPerRow, colorSpace, alphaInfo);
    NSUInteger offset = 0;
    for (NSUInteger y = 0; y < height; ++y) {
        for (NSUInteger x = 0; x < bytesPerRow; x += components) {
            CGFloat opposite = y - height/2.;
            CGFloat adjacent = x - width/2.;
            if (adjacent == 0) adjacent = 0.001;
            CGFloat angle = atan(opposite/adjacent);
            data[offset] = fabs(cos(angle * 2) * 255);
            offset += components * bytesPerComponent;
        }
    }
    CGImageRef image = CGBitmapContextCreateImage(imageCtx);
    CGContextRelease(imageCtx);
    CGColorSpaceRelease(colorSpace);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect buttonRect = CGRectMake(100, 100, width, width);
    CGContextAddEllipseInRect(ctx, buttonRect);
    CGContextClip(ctx);
    CGContextDrawImage(ctx, buttonRect, image);
    CGImageRelease(image);
}
To expand on what's in the comments to the accepted answer, here's the code for generating an angle gradient using Core Image. This should work in iOS 8 or later.
// generate a dummy image of the required size
UIGraphicsBeginImageContextWithOptions(CGSizeMake(256.0, 256.0), NO, [[UIScreen mainScreen] scale]);
CIImage *dummyImage = [CIImage imageWithCGImage:UIGraphicsGetImageFromCurrentImageContext().CGImage];
UIGraphicsEndImageContext(); // balance the context begun above
// define the kernel algorithm
NSString *kernelString = @"kernel vec4 circularGradientKernel(__color startColor, __color endColor, vec2 center, float radius) { \n"
    " vec2 point = destCoord() - center;"
    " float rsq = point.x * point.x + point.y * point.y;"
    " float theta = mod(atan(point.y, point.x), radians(360.0));"
    " return (rsq < radius*radius) ? mix(startColor, endColor, 0.5+0.5*cos(4.0*theta)) : vec4(0.0, 0.0, 0.0, 1.0);"
    "}";
// initialize a Core Image context and the filter kernel
CIContext *context = [CIContext contextWithOptions:nil];
CIColorKernel *kernel = [CIColorKernel kernelWithString:kernelString];
// argument array, corresponding to the first line of the kernel string
NSArray *args = @[ [CIColor colorWithRed:0.5 green:0.5 blue:0.5],
                   [CIColor colorWithRed:1.0 green:1.0 blue:1.0],
                   [CIVector vectorWithCGPoint:CGPointMake(CGRectGetMidX(dummyImage.extent), CGRectGetMidY(dummyImage.extent))],
                   [NSNumber numberWithFloat:200.0] ];
// apply the kernel to our dummy image, and convert the result to a UIImage
CIImage *ciOutputImage = [kernel applyWithExtent:dummyImage.extent arguments:args];
CGImageRef cgOutput = [context createCGImage:ciOutputImage fromRect:ciOutputImage.extent];
UIImage *gradientImage = [UIImage imageWithCGImage:cgOutput];
CGImageRelease(cgOutput);
This generates the following image: