High-Resolution Content for paint app Using OpenGL ES on iPad device - objective-c

I am working on a paint app [taking reference from the GLPaint sample app] for iPhone and iPad. In this app I am filling colors into paint images by drawing lines onscreen based on where the user touches. The app works properly on the iPhone. On the iPad, lines on the paint view are fine without zooming [no pixel distortion], but after zooming the lines on the paintView have distorted pixels, i.e. the OpenGL ES content is not high resolution.
I am using the following code to initialize the paint view:
-(id)initWithCoder:(NSCoder*)coder {
CGImageRef brushImage;
CGContextRef brushContext;
GLubyte *brushData;
size_t width, height;
CGFloat components[3];
if ((self = [super initWithCoder:coder])) {
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.opaque = NO;
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
if (!context || ![EAGLContext setCurrentContext:context]) {
return nil;
}
if(UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
{
brushImage = [UIImage imageNamed:#"circle 64.png"].CGImage;
}
else {
brushImage = [UIImage imageNamed:#"flower 128.png"].CGImage;
}
// Get the width and height of the image
width = CGImageGetWidth(brushImage) ;
height = CGImageGetHeight(brushImage) ;
if(brushImage) {
// Allocate memory needed for the bitmap context
brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
// Use the bitmap creation function provided by the Core Graphics framework.
brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
// After you create the context, you can draw the image to the context.
CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
// You don't need the context at this point, so you need to release it to avoid memory leaks.
CGContextRelease(brushContext);
// Use OpenGL ES to generate a name for the texture.
glGenTextures(1, &brushTexture);
// Bind the texture name.
glBindTexture(GL_TEXTURE_2D, brushTexture);
// Set the texture parameters to use a minifying filter and a linear filter (weighted average)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Specify a 2D texture image, providing a pointer to the image data in memory
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
// Release the image data; it's no longer needed
free(brushData);
}
CGFloat scale;
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
{
NSLog(#"IPAd");
self.contentScaleFactor=1.0;
scale = self.contentScaleFactor;
}
else {
// NSLog(#"IPHone");
self.contentScaleFactor=2.0;
}
//scale = 2.000000;
// Setup OpenGL states
glMatrixMode(GL_PROJECTION);
CGRect frame = self.bounds;
NSLog(#"Scale %f", scale);
glOrthof(0, (frame.size.width) * scale, 0, (frame.size.height) * scale, -1, 1);
glViewport(0, 0, (frame.size.width) * scale, (frame.size.height) * scale);
glMatrixMode(GL_MODELVIEW);
glDisable(GL_DITHER);
glEnable(GL_BLEND);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_BLEND);
// Set a blending function appropriate for premultiplied alpha pixel data
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(width / kBrushScale);
// Make sure to start with a cleared buffer
needsErase = YES;
// Define a starting color
HSL2RGB((CGFloat) 0.0 / (CGFloat)kPaletteSize, kSaturation, kLuminosity, &components[0], &components[1], &components[2]);
[self setBrushColorWithRed:245.0f green:245.0f blue:0.0f];
boolEraser=NO;
}
return self;
}
The framebuffer is created with the following code:
-(BOOL)createFramebuffer {
// Generate IDs for a framebuffer object and a color renderbuffer
glGenFramebuffersOES(1, &viewFramebuffer);
glGenRenderbuffersOES(1, &viewRenderbuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer)
// allowing us to draw into a buffer that will later be rendered to screen wherever the layer is (which corresponds with our view).
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id<EAGLDrawable>)self.layer];
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
// For this sample, we also need a depth buffer, so we'll create and attach one via another renderbuffer.
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
{
NSLog(#"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
return NO;
}
return YES;
}
Lines are drawn using the following code:
-(void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end {
static GLfloat* vertexBuffer = NULL;
static NSUInteger vertexMax = 64;
NSUInteger vertexCount = 0,
count,
i;
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
// Convert locations from Points to Pixels
//CGFloat scale = self.contentScaleFactor;
CGFloat scale;
scale=self.contentScaleFactor;
NSLog(#"Scale %f",scale);
start.x *= scale;
start.y *= scale;
end.x *= scale;
end.y *= scale;
float dx = end.x - start.x;
float dy = end.y - start.y;
float dist = (sqrtf(dx * dx + dy * dy)/ kBrushPixelStep);
// Allocate vertex array buffer
if(vertexBuffer == NULL)
// vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));
vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));
count = MAX(ceilf(dist), 1);
//NSLog(#"count %d",count);
for(i = 0; i < count; ++i) {
if (vertexCount == vertexMax) {
vertexMax = 2 * vertexMax;
vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
// NSLog(#"if loop");
}
vertexBuffer[2 * vertexCount + 0] = start.x + (dx) * ((GLfloat)i / (GLfloat)count);
vertexBuffer[2 * vertexCount + 1] = start.y + (dy) * ((GLfloat)i / (GLfloat)count);
vertexCount += 1;
}
// Render the vertex array
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, vertexCount);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
On the iPad the content of the paint view is fine, i.e. high resolution, in the normal view, but after zooming I am not getting high-resolution content: the pixels of the lines look distorted.
I have tried changing the contentScaleFactor as well as the scale parameter in the code above to see the difference, but nothing worked as expected. The iPad supports a contentScaleFactor of 1.0 and 1.5; when I set contentScaleFactor = 2 the paint view cannot display the lines and shows weird dotted lines instead.
Is there any way to make the OpenGL ES content high resolution?

The short answer is YES, you can have "high resolution" content.
But you will have to clearly understand the issue before solving it. This is the long answer:
The brushes you use have a specific size (64 or 128). As soon as your virtual paper (the area in which you draw) displays its pixels larger than one screen pixel, you will start to see the "distortion". For example, if you open one of your brushes in your favorite picture viewer and zoom in, the picture will also look distorted. You cannot avoid that unless you use vector brushes (which is not the scope of this answer and is far more complicated).
The quickest way would be to use more detailed brushes, but it is a fudge: if you zoom enough, the texture will look distorted as well.
You can also add a magnification filter using glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);. You used only the MIN filter in your sample; adding this one will smooth the textures.
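For example, at the point where the brush texture is created (a minimal sketch; brushTexture is the texture object from the question's code):
glBindTexture(GL_TEXTURE_2D, brushTexture);
// Minification filter (already present in the question's code)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Magnification filter: smooths the brush when it is drawn larger than its native size
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);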

I am not sure what you mean by high resolution. OpenGL is a vector library with a bitmap-backed rendering system. The backing store will have the size in pixels (multiplied by the content scale factor) of the layer you use to create the renderbuffer in:
- (BOOL)renderbufferStorage:(NSUInteger)target fromDrawable:(id<EAGLDrawable>)drawable
Once it is created there is no way to change its resolution, nor would it generally make sense to do so; one renderbuffer pixel per screen pixel makes the most sense.
It is hard to know exactly what problem you are trying to solve without knowing what kind of zooming you are talking about. I assume you have set up a CAEAGLLayer in a UIScrollView and are seeing pixel artifacts when zoomed in. This is inevitable; how else could it work?
If you want your lines to be smooth, you need to implement them using triangle-strip meshes with alpha blending at the edges, which will provide antialiasing. Instead of zooming the layer itself, you would simply "zoom" the contents by scaling the vertices while keeping the CAEAGLLayer the same size. This would eliminate pixelation and give pretty alpha-blended edges.
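A minimal OpenGL ES 1.1 sketch of that idea, where zoomScale, panOffsetX and panOffsetY are hypothetical values driven by your pinch/scroll handling rather than by zooming the layer:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(panOffsetX, panOffsetY, 0.0f);  // hypothetical pan offset in pixels
glScalef(zoomScale, zoomScale, 1.0f);        // hypothetical zoom factor
// ...then redraw the stroke vertices as usual; they are rasterized at the
// renderbuffer's native resolution, so edges stay sharp instead of being magnified.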

Related

Draw UIImage (or JPEG) onto EAGLView

I am making a PDF annotator, and when you switch pages it has to redraw all of the previously drawn OpenGL content (which was saved to file in JSON format). The problem is that the more content there is to draw, the longer it takes. I have a UIImage saved to disk for each page, so I was hoping to speed up this process by drawing that UIImage onto the EAGLContext in one big stroke.
I want to know how to take a UIImage (or JPEG/PNG file) and draw it directly onto the screen. The reason why it has to be on the EAGLView is that it needs to support the eraser, and using the regular UIKit way wouldn't work with that.
I assume there's some way to set a brush as the whole image and just stamp the screen with it once. Any suggestions?
As a pedantic note, there is no standard class named EAGLView, but I assume you're referring to one of Apple's sample UIView subclasses that host OpenGL ES content.
The first step in doing this would be to load the UIImage into a texture. The following is some code that I've used for this in my image processing framework (newImageSource is the input UIImage):
CGSize pointSizeOfImage = [newImageSource size];
CGFloat scaleOfImage = [newImageSource scale];
pixelSizeOfImage = CGSizeMake(scaleOfImage * pointSizeOfImage.width, scaleOfImage * pointSizeOfImage.height);
CGSize pixelSizeToUseForTexture = pixelSizeOfImage;
BOOL shouldRedrawUsingCoreGraphics = YES;
// For now, deal with images larger than the maximum texture size by resizing to be within that limit
CGSize scaledImageSizeToFitOnGPU = [GPUImageOpenGLESContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
if (!CGSizeEqualToSize(scaledImageSizeToFitOnGPU, pixelSizeOfImage))
{
pixelSizeOfImage = scaledImageSizeToFitOnGPU;
pixelSizeToUseForTexture = pixelSizeOfImage;
shouldRedrawUsingCoreGraphics = YES;
}
if (self.shouldSmoothlyScaleOutput)
{
// In order to use mipmaps, you need to provide power-of-two textures, so convert to the next largest power of two and stretch to fill
CGFloat powerClosestToWidth = ceil(log2(pixelSizeOfImage.width));
CGFloat powerClosestToHeight = ceil(log2(pixelSizeOfImage.height));
pixelSizeToUseForTexture = CGSizeMake(pow(2.0, powerClosestToWidth), pow(2.0, powerClosestToHeight));
shouldRedrawUsingCoreGraphics = YES;
}
GLubyte *imageData = NULL;
CFDataRef dataFromImageDataProvider;
if (shouldRedrawUsingCoreGraphics)
{
// For resized image, redraw
imageData = (GLubyte *) calloc(1, (int)pixelSizeToUseForTexture.width * (int)pixelSizeToUseForTexture.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 8, (int)pixelSizeToUseForTexture.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, pixelSizeToUseForTexture.width, pixelSizeToUseForTexture.height), [newImageSource CGImage]);
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
}
else
{
// Access the raw image bytes directly
dataFromImageDataProvider = CGDataProviderCopyData(CGImageGetDataProvider([newImageSource CGImage]));
imageData = (GLubyte *)CFDataGetBytePtr(dataFromImageDataProvider);
}
glBindTexture(GL_TEXTURE_2D, outputTexture);
if (self.shouldSmoothlyScaleOutput)
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
if (self.shouldSmoothlyScaleOutput)
{
glGenerateMipmap(GL_TEXTURE_2D);
}
if (shouldRedrawUsingCoreGraphics)
{
free(imageData);
}
else
{
CFRelease(dataFromImageDataProvider);
}
As you can see, this has some functions for resizing images that exceed the maximum texture size of the device (the class method in the above code merely queries the max texture size), as well as a boolean flag for whether or not to generate mipmaps for the texture for smoother downsampling. These can be removed if you don't care about those cases. This is also OpenGL ES 2.0 code, so there might be an OES suffix or two that you'd need to add to some of the functions above in order for them to work with 1.1.
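If you want to perform that size check yourself, the limit can be queried directly from OpenGL ES; a minimal sketch:
GLint maxTextureSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
// Clamp the texture's width and height so neither exceeds maxTextureSize.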
Once you have the UIImage in a texture, you can draw it to the screen by using a textured quad (two triangles that make up a rectangle, with appropriate texture coordinates for the corners). How you do this will differ between OpenGL ES 1.1 and 2.0. For 2.0, you use a passthrough shader program that just reads the color from that location in the texture and draws that to the screen and for 1.1, you just set up the texture coordinates for your geometry and draw the two triangles.
I have some OpenGL ES 2.0 code for this in this answer.
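For the 1.1 case, a minimal sketch of such a textured quad, assuming the texture is already bound and the projection is set up so that clip space covers the view (adjust the vertex positions to match your glOrthof setup):
static const GLfloat quadVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};
static const GLfloat quadTexCoords[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, quadVertices);
glTexCoordPointer(2, GL_FLOAT, 0, quadTexCoords);
// Two triangles forming a rectangle, drawn as a strip
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);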

Xcode Screenshot EAGLContext [duplicate]

Possible Duplicate:
How to get UIImage from EAGLView?
So I was just wondering if anybody knows any way to save what is stored in an EAGLContext as a UIImage.
I am currently using:
UIGraphicsBeginImageContext(CGSizeMake(768, 1024));
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
in other apps I have and this works fine, but obviously, EAGLContext doesn't have a .layer property. I've tried casting to UIView, but that - unsurprisingly - doesn't work:
UIView *newView = [[UIView alloc] init];
newView = (UIView *)context;
I am drawing to an EAGLContext property on a UIView (technically an EAGLContext on a UIView on another UIView on a View Controller, but I figure that shouldn't make any difference) using OpenGLES 1.
If anybody knows anything about this, even if it's just that I'm completely barking up an impossible tree, please let me know!
Matt
After a few days I finally got a working solution for this. There is code provided by Apple which produces a UIImage from an EAGLView. Then you simply need to flip the image vertically, since the UIKit coordinate system is upside down relative to OpenGL. The link to the documentation where I found this method doesn't exist anymore.
Method to capture EAGLView:
-(UIImage *)drawableToCGImage
{
GLint backingWidth2, backingHeight2;
//Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);
NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
NSInteger dataLength = width2 * height2 * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = self.contentScaleFactor;
widthInPoints = width2 / scale;
heightInPoints = height2 / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width2;
heightInPoints = height2;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
Method to flip the image vertically:
- (UIImage *)flipImageVertically:(UIImage *)originalImage
{
UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
UIGraphicsBeginImageContext(tempImageView.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform flipVertical = CGAffineTransformMake(
1, 0, 0, -1, 0, tempImageView.frame.size.height
);
CGContextConcatCTM(context, flipVertical);
[tempImageView.layer renderInContext:context];
UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//[tempImageView release];
return flippedImage;
}

Does iOS 5 support blur CoreImage filters?

According to the documentation it should support blurring, note the "Available in iOS 5.0 and later":
CIFilter Class Reference
But according to the device, it doesn't:
[CIFilter filterNamesInCategory:kCICategoryBlur];
returns nothing.
According to the following only these filters are available on my iPhone and Simulator (which are both running 5.0):
[CIFilter filterNamesInCategory:kCICategoryBuiltIn]
CIAdditionCompositing,
CIAffineTransform,
CICheckerboardGenerator,
CIColorBlendMode,
CIColorBurnBlendMode,
CIColorControls,
CIColorCube,
CIColorDodgeBlendMode,
CIColorInvert,
CIColorMatrix,
CIColorMonochrome,
CIConstantColorGenerator,
CICrop,
CIDarkenBlendMode,
CIDifferenceBlendMode,
CIExclusionBlendMode,
CIExposureAdjust,
CIFalseColor,
CIGammaAdjust,
CIGaussianGradient,
CIHardLightBlendMode,
CIHighlightShadowAdjust,
CIHueAdjust,
CIHueBlendMode,
CILightenBlendMode,
CILinearGradient,
CILuminosityBlendMode,
CIMaximumCompositing,
CIMinimumCompositing,
CIMultiplyBlendMode,
CIMultiplyCompositing,
CIOverlayBlendMode,
CIRadialGradient,
CISaturationBlendMode,
CIScreenBlendMode,
CISepiaTone,
CISoftLightBlendMode,
CISourceAtopCompositing,
CISourceInCompositing,
CISourceOutCompositing,
CISourceOverCompositing,
CIStraightenFilter,
CIStripesGenerator,
CITemperatureAndTint,
CIToneCurve,
CIVibrance,
CIVignette,
CIWhitePointAdjust
While Core Image on iOS 5.0 lacks blur filters, there is still a way to get GPU-accelerated blurs of images and video. My open source GPUImage framework has multiple blur types, including Gaussian (using the GPUImageGaussianBlurFilter for a general Gaussian or the GPUImageFastBlurFilter for a hardware-optimized 9-hit Gaussian), box (using a GPUImageBoxBlurFilter), median (using a GPUImageMedianFilter), and a bilateral blur (using a GPUImageBilateralBlurFilter).
I describe the shaders used to pull off the hardware-optimized Gaussian blur in this answer, and you can examine the code I use for the rest within the framework. These filters run tens of times faster than any CPU-bound routine I've tried yet.
I've also incorporated these blurs into multi-stage processing effects, like unsharp masking, tilt-shift filtering, Canny edge detection, and Harris corner detection, all of which are available as filters within this framework.
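For reference, a minimal sketch of applying one of these filters to a still image (exact property and method names can differ between GPUImage versions, so treat this as an assumption rather than a definitive API reference):
#import "GPUImage.h"

UIImage *inputImage = [UIImage imageNamed:@"photo.jpg"]; // hypothetical asset name
GPUImageGaussianBlurFilter *blurFilter = [[GPUImageGaussianBlurFilter alloc] init];
UIImage *blurredImage = [blurFilter imageByFilteringImage:inputImage];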
Again, in an attempt to solve all iOS blur issues, here is my contribution:
https://github.com/tomsoft1/StackBluriOS
A simple blur library based on Stack Blur. Stack Blur is very similar to Gaussian Blur, but much faster (see http://incubator.quasimondo.com/processing/fast_blur_deluxe.php )
use it like this:
UIImage *newIma = [sourceIma stackBlur:radius];
Hope this helps.
I too was disappointed to find that Core Image in iOS doesn't support blurs. Here's the function I wrote to do a 9-tap Gaussian blur on a UIImage. Call it repeatedly to get stronger blurs.
@interface UIImage (ImageBlur)
- (UIImage *)imageWithGaussianBlur9;
@end
@implementation UIImage (ImageBlur)
- (UIImage *)imageWithGaussianBlur9 {
float weight[5] = {0.2270270270, 0.1945945946, 0.1216216216, 0.0540540541, 0.0162162162};
// Blur horizontally
UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
[self drawInRect:CGRectMake(0, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[0]];
for (int x = 1; x < 5; ++x) {
[self drawInRect:CGRectMake(x, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[x]];
[self drawInRect:CGRectMake(-x, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[x]];
}
UIImage *horizBlurredImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Blur vertically
UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
[horizBlurredImage drawInRect:CGRectMake(0, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[0]];
for (int y = 1; y < 5; ++y) {
[horizBlurredImage drawInRect:CGRectMake(0, y, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[y]];
[horizBlurredImage drawInRect:CGRectMake(0, -y, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[y]];
}
UIImage *blurredImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return blurredImage;
}
@end
Just call it on an existing image like this:
UIImage *blurredImage = [originalImage imageWithGaussianBlur9];
and repeat it to get stronger blurring, like this:
blurredImage = [blurredImage imageWithGaussianBlur9];
Unfortunately, it does not support any blurs. For that, you'll have to roll your own.
UPDATE: As of iOS 6, [CIFilter filterNamesInCategory:kCICategoryBlur] returns CIGaussianBlur, meaning that this filter is now available on the device. Even though this is true, you will (probably) get better performance and more flexibility using GPUImage.
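On iOS 6 and later it can then be used like any other Core Image filter; a minimal sketch (originalImage being the UIImage you want to blur):
CIImage *inputImage = [CIImage imageWithCGImage:originalImage.CGImage];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:inputImage forKey:kCIInputImageKey];
[blurFilter setValue:@10.0 forKey:kCIInputRadiusKey]; // blur radius in pixels
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgResult = [ciContext createCGImage:blurFilter.outputImage
                                       fromRect:[inputImage extent]];
UIImage *blurredImage = [UIImage imageWithCGImage:cgResult];
CGImageRelease(cgResult);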
Here is the link to our tutorial on creating a blur effect in iOS applications with different approaches. http://blog.denivip.ru/index.php/2013/01/blur-effect-in-ios-applications/?lang=en
If you can use OpenGL ES in your iOS app, this is how you calculate the median in a pixel neighborhood radius of your choosing (the median being a type of blur, of course):
kernel vec4 medianUnsharpKernel(sampler u) {
vec4 pixel = unpremultiply(sample(u, samplerCoord(u)));
vec2 xy = destCoord();
int radius = 3;
int bounds = (radius - 1) / 2;
vec4 sum = vec4(0.0);
for (int i = (0 - bounds); i <= bounds; i++)
{
for (int j = (0 - bounds); j <= bounds; j++ )
{
sum += unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
}
}
vec4 mean = vec4(sum / vec4(pow(float(radius), 2.0)));
float mean_avg = float(mean);
float comp_avg = 0.0;
vec4 comp = vec4(0.0);
vec4 median = mean;
for (int i = (0 - bounds); i <= bounds; i++)
{
for (int j = (0 - bounds); j <= bounds; j++ )
{
comp = unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
comp_avg = float(comp);
median = (comp_avg < mean_avg) ? max(median, comp) : median;
}
}
return premultiply(vec4(vec3(abs(pixel.rgb - median.rgb)), 1.0));
}
A brief description of the steps
1. Calculate the mean of the values of the pixels surrounding the source pixel in a 3x3 neighborhood;
2. Find the maximum pixel value of all pixels in the same neighborhood that are less than the mean.
3. [OPTIONAL] Subtract the median pixel value from the source pixel value for edge detection.
If you're using the median value for edge detection, there are a couple of ways to modify the above code for better results, namely, hybrid median filtering and truncated median filtering (a substitute and a better 'mode' filtering). If you're interested, please ask.
Because I'm using Xamarin, I converted John Stephen's answer to C#:
private UIImage ImageWithGaussianBlur9(UIImage image)
{
var weight = new nfloat[]
{
0.2270270270f, 0.1945945946f, 0.1216216216f, 0.0540540541f, 0.0162162162f
};
var width = image.Size.Width;
var height = image.Size.Height;
// Blur horizontally
UIGraphics.BeginImageContextWithOptions(image.Size, false, 1f);
image.Draw(new CGRect(0f, 0f, width, height), CGBlendMode.PlusLighter, weight[0]);
for (int x = 1; x < 5; ++x)
{
image.Draw(new CGRect(x, 0, width, height), CGBlendMode.PlusLighter, weight[x]);
image.Draw(new CGRect(-x, 0, width, height), CGBlendMode.PlusLighter, weight[x]);
}
var horizBlurredImage = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
// Blur vertically
UIGraphics.BeginImageContextWithOptions(image.Size, false, 1f);
horizBlurredImage.Draw(new CGRect(0, 0, width, height), CGBlendMode.PlusLighter, weight[0]);
for (int y = 1; y < 5; ++y)
{
horizBlurredImage.Draw(new CGRect(0, y, width, height), CGBlendMode.PlusLighter, weight[y]);
horizBlurredImage.Draw(new CGRect(0, -y, width, height), CGBlendMode.PlusLighter, weight[y]);
}
var blurredImage = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
return blurredImage;
}

average color value of UIImage in Objective-C

I need the average color value of an image in Objective-C. I want to create a color gradient from it.
Does anyone have an idea?
Here is some experimental code that I have not tested yet.
struct pixel {
unsigned char r, g, b, a;
};
- (UIColor*) getDominantColor:(UIImage*)image
{
NSUInteger red = 0;
NSUInteger green = 0;
NSUInteger blue = 0;
// Allocate a buffer big enough to hold all the pixels
struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
if (pixels != nil)
{
CGContextRef context = CGBitmapContextCreate(
(void*) pixels,
image.size.width,
image.size.height,
8,
image.size.width * 4,
CGImageGetColorSpace(image.CGImage),
kCGImageAlphaPremultipliedLast
);
if (context != NULL)
{
// Draw the image in the bitmap
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);
// Now that we have the image drawn in our own buffer, we can loop over the pixels to
// process it. This simple case simply counts all pixels that have a pure red component.
// There are probably more efficient and interesting ways to do this. But the important
// part is that the pixels buffer can be read directly.
NSUInteger numberOfPixels = image.size.width * image.size.height;
for (int i=0; i<numberOfPixels; i++) {
red += pixels[i].r;
green += pixels[i].g;
blue += pixels[i].b;
}
red /= numberOfPixels;
green /= numberOfPixels;
blue/= numberOfPixels;
CGContextRelease(context);
}
free(pixels);
}
return [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:1.0f];
}
You can use this method, e.g.:
-(void)doSomething
{
UIImage *image = [UIImage imageNamed:#"someImage.png"];
UIColor *dominantColor = [self getDominantColor:image];
}
I hope this will work for you.
You can also implement this as a category on UIImage; that is a better way to package such utilities for objects :)
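A minimal sketch of such a category, reusing the same averaging idea as getDominantColor: above (the method name averageColor is hypothetical):
@interface UIImage (AverageColor)
- (UIColor *)averageColor;
@end

@implementation UIImage (AverageColor)
- (UIColor *)averageColor
{
    size_t width = (size_t)self.size.width;
    size_t height = (size_t)self.size.height;
    NSUInteger count = width * height;
    if (count == 0) return nil;
    // Draw the image into an RGBA buffer, exactly as getDominantColor: does above
    unsigned char *pixels = (unsigned char *)calloc(count * 4, sizeof(unsigned char));
    if (pixels == NULL) return nil;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) { free(pixels); return nil; }
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), self.CGImage);
    CGContextRelease(context);
    // Average the red, green and blue components over every pixel
    unsigned long long red = 0, green = 0, blue = 0;
    for (NSUInteger i = 0; i < count; i++) {
        red   += pixels[i * 4 + 0];
        green += pixels[i * 4 + 1];
        blue  += pixels[i * 4 + 2];
    }
    free(pixels);
    return [UIColor colorWithRed:(red   / (CGFloat)count) / 255.0f
                           green:(green / (CGFloat)count) / 255.0f
                            blue:(blue  / (CGFloat)count) / 255.0f
                           alpha:1.0f];
}
@end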
Edit : Fixed the bug in while().
There is a method to create the average color from an image:
[UIColor colorWithAverageColorFromImage:(UIImage *)image];

Simple way to read pixel color values from a PNG image on the iPhone?

Is there an easy way to get a two-dimensional array, or something similar, that represents the pixel data of an image?
I have black & white PNG images and I simply want to read the color value at a certain coordinate, for example the color value at 20/100.
This category on UIImage might be helpful (Source):
#import <CoreGraphics/CoreGraphics.h>
#import "UIImage+ColorAtPixel.h"
@implementation UIImage (ColorAtPixel)
- (UIColor *)colorAtPixel:(CGPoint)point {
// Cancel if point is outside image coordinates
if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, self.size.width, self.size.height), point)) {
return nil;
}
// Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
// Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
NSInteger pointX = trunc(point.x);
NSInteger pointY = trunc(point.y);
CGImageRef cgImage = self.CGImage;
NSUInteger width = CGImageGetWidth(cgImage);
NSUInteger height = CGImageGetHeight(cgImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
int bytesPerPixel = 4;
int bytesPerRow = bytesPerPixel * 1;
NSUInteger bitsPerComponent = 8;
unsigned char pixelData[4] = { 0, 0, 0, 0 };
CGContextRef context = CGBitmapContextCreate(pixelData,
1,
1,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextSetBlendMode(context, kCGBlendModeCopy);
// Draw the pixel we are interested in onto the bitmap context
CGContextTranslateCTM(context, -pointX, -pointY);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
CGContextRelease(context);
// Convert color values [0..255] to floats [0.0..1.0]
CGFloat red = (CGFloat)pixelData[0] / 255.0f;
CGFloat green = (CGFloat)pixelData[1] / 255.0f;
CGFloat blue = (CGFloat)pixelData[2] / 255.0f;
CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
@end
You could put the PNG into an image view, and then use this method to get the pixel value from a graphics context that you draw the image into.
A class to do it for you, and explained too:
http://www.markj.net/iphone-uiimage-pixel-color/
The direct approach is slightly tedious, but here goes:
Get the CoreGraphics image.
CGImageRef cgImage = image.CGImage;
Get the "data provider", and from that get the data.
NSData * d = [(id)CGDataProviderCopyData(CGImageGetDataProvider(cgImage)) autorelease];
Figure out what format the data is in.
CGImageGetBitmapInfo();
CGImageGetBitsPerComponent();
CGImageGetBitsPerPixel();
CGImageGetBytesPerRow();
figure out the colour space (PNG supports greyscale/RGB/paletted).
CGImageGetColorSpace()
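Putting the direct approach together, a minimal sketch that assumes the data turned out to be 8 bits per component, 32 bits per pixel, RGBA byte order (verify with the calls above before relying on it; image is the source UIImage):
CGImageRef cgImage = image.CGImage;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
size_t bytesPerPixel = CGImageGetBitsPerPixel(cgImage) / 8;
size_t x = 20, y = 100; // the coordinate from the question
const UInt8 *pixel = bytes + y * bytesPerRow + x * bytesPerPixel;
UInt8 red = pixel[0], green = pixel[1], blue = pixel[2]; // only valid for RGBA-ordered data
CFRelease(pixelData);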
The indirect approach is to draw the image to a context (note that you may need to specify the context's byte order if you want any guarantees) and read the bytes out.
If you only want single pixels, it might be faster to draw the image to a 1x1 context with the right rect
(something like (CGRect){{-x,-y},{imgWidth,imgHeight}}).
This will handle colour-space conversion for you. If you just want a brightness value, use a greyscale context.
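If all you need is a brightness value, a minimal sketch of the same 1x1 trick with a greyscale context (pointX, pointY, width, height and cgImage as in colorAtPixel: above; assumes an 8-bit, alpha-free grey format):
unsigned char luminance = 0;
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGContextRef grayContext = CGBitmapContextCreate(&luminance, 1, 1, 8, 1, graySpace, kCGImageAlphaNone);
CGColorSpaceRelease(graySpace);
// Shift the image so the requested pixel lands on the 1x1 context, then draw
CGContextTranslateCTM(grayContext, -pointX, -pointY);
CGContextDrawImage(grayContext, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
CGContextRelease(grayContext);
CGFloat brightness = luminance / 255.0f;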