CGContextDrawAngleGradient? - cocoa-touch

Dipping my feet into some more Core Graphics drawing, I'm attempting to recreate a wicked looking metallic knob, and I've landed on what is probably a show-stopping issue.
There doesn't seem to be any way to draw angle gradients in Core Graphics. I see there's CGContextDrawRadialGradient() and CGContextDrawLinearGradient(), but there's nothing that I see that would allow me to draw an angle gradient. Does anyone know of a workaround, or a bit of framework hidden away somewhere to accomplish this without pre-rendering the knob into an image file?
(Image: AngleGradientKnob, http://dl.dropbox.com/u/3009808/AngleGradient.png)

This is kind of thrown together, but it's the approach I'd probably take: create the angle gradient by drawing it directly into a bitmap with some simple trig, then clip it to a circle. I allocate a buffer with a grayscale colorspace, calculate the angle from each pixel to the center, and color the pixel from a periodic function of that angle, running between 0 and 255. You could of course expand this to do RGBA color as well.
Of course you'd cache this and play with the math to get the colors you want. As written it runs all the way from black to white, which doesn't look as good as you'd like.
- (void)drawRect:(CGRect)rect {
    CGImageAlphaInfo alphaInfo = kCGImageAlphaNone;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    size_t components = CGColorSpaceGetNumberOfComponents(colorSpace);
    size_t width = 100;
    size_t height = 100;
    size_t bitsPerComponent = 8;
    size_t bytesPerComponent = bitsPerComponent / 8;
    size_t bytesPerRow = width * bytesPerComponent * components;
    size_t dataLength = bytesPerRow * height;

    uint8_t data[dataLength];
    CGContextRef imageCtx = CGBitmapContextCreate(data, width, height, bitsPerComponent,
                                                  bytesPerRow, colorSpace, alphaInfo);

    // Fill the buffer: for each pixel, compute its angle to the center and
    // map a periodic function of that angle to a gray value (0-255).
    NSUInteger offset = 0;
    for (NSUInteger y = 0; y < height; ++y) {
        for (NSUInteger x = 0; x < bytesPerRow; x += components) {
            CGFloat opposite = y - height/2.;
            CGFloat adjacent = x - width/2.;
            if (adjacent == 0) adjacent = 0.001;
            CGFloat angle = atan(opposite/adjacent);
            data[offset] = fabs(cos(angle * 2) * 255);
            offset += components * bytesPerComponent;
        }
    }

    // Turn the buffer into an image, then clip to a circle and draw it.
    CGImageRef image = CGBitmapContextCreateImage(imageCtx);
    CGContextRelease(imageCtx);
    CGColorSpaceRelease(colorSpace);

    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect buttonRect = CGRectMake(100, 100, width, width);
    CGContextAddEllipseInRect(ctx, buttonRect);
    CGContextClip(ctx);
    CGContextDrawImage(ctx, buttonRect, image);
    CGImageRelease(image);
}
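To tame the black-to-white sweep mentioned above, one rough, untested tweak is to use atan2() (which handles all four quadrants without the adjacent == 0 guard) and remap the periodic term into a narrower gray band. This would replace the angle and data[offset] lines inside the inner loop; the 96/128 values are arbitrary:
CGFloat angle = atan2(opposite, adjacent);   // full -pi..pi range, no divide-by-zero guard needed
CGFloat t = fabs(cos(angle * 2));            // periodic term, 0..1
data[offset] = (uint8_t)(96 + t * 128);      // gray runs 96..224 instead of 0..255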

To expand on what's in the comments to the accepted answer, here's the code for generating an angle gradient using Core Image. This should work in iOS 8 or later.
// generate a dummy image of the required size
UIGraphicsBeginImageContextWithOptions(CGSizeMake(256.0, 256.0), NO, [[UIScreen mainScreen] scale]);
CIImage *dummyImage = [CIImage imageWithCGImage:UIGraphicsGetImageFromCurrentImageContext().CGImage];
UIGraphicsEndImageContext();

// define the kernel algorithm
NSString *kernelString = @"kernel vec4 circularGradientKernel(__color startColor, __color endColor, vec2 center, float radius) { \n"
                          "  vec2 point = destCoord() - center;"
                          "  float rsq = point.x * point.x + point.y * point.y;"
                          "  float theta = mod(atan(point.y, point.x), radians(360.0));"
                          "  return (rsq < radius*radius) ? mix(startColor, endColor, 0.5+0.5*cos(4.0*theta)) : vec4(0.0, 0.0, 0.0, 1.0);"
                          "}";

// initialize a Core Image context and the filter kernel
CIContext *context = [CIContext contextWithOptions:nil];
CIColorKernel *kernel = [CIColorKernel kernelWithString:kernelString];

// argument array, corresponding to the first line of the kernel string
NSArray *args = @[ [CIColor colorWithRed:0.5 green:0.5 blue:0.5],
                   [CIColor colorWithRed:1.0 green:1.0 blue:1.0],
                   [CIVector vectorWithCGPoint:CGPointMake(CGRectGetMidX(dummyImage.extent), CGRectGetMidY(dummyImage.extent))],
                   [NSNumber numberWithFloat:200.0] ];

// apply the kernel to our dummy image, and convert the result to a UIImage
CIImage *ciOutputImage = [kernel applyWithExtent:dummyImage.extent arguments:args];
CGImageRef cgOutput = [context createCGImage:ciOutputImage fromRect:ciOutputImage.extent];
UIImage *gradientImage = [UIImage imageWithCGImage:cgOutput];
CGImageRelease(cgOutput);
This generates the following image:
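If you then want to present gradientImage as a round knob face, here's a minimal sketch (it assumes a view controller context and CALayer access via QuartzCore; the view name is made up):
// Hypothetical usage: show gradientImage clipped to a circle, as a knob face.
UIImageView *knobView = [[UIImageView alloc] initWithImage:gradientImage];
knobView.frame = CGRectMake(0, 0, 256.0, 256.0);
knobView.layer.cornerRadius = 128.0;   // half the side length, so the mask is a circle
knobView.layer.masksToBounds = YES;
[self.view addSubview:knobView];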

Related

How do I create a custom NSCursor with alpha mask?

When I create a custom NSCursor in Objective-C the alpha channel mask appears to XOR the screen below. I am expecting an alpha channel value of zero to be transparent, not XOR the graphics below. I am certain my mask data is correct ARGB where A=0 for transparent and A=255 for opaque. What am I doing wrong?
static void maincustomcursor(bigmap *thedata, point hotspot)
{
    NSPoint thepoint;
    NSImage *newimage;
    NSCursor *thecursor;
    CGImageRef theimage;
    CGBitmapInfo theinfo;
    CGContextRef thecont;
    CGColorSpaceRef thecolor;
    int width, height, across;

    width = (*thedata).width;
    height = (*thedata).height;
    if (width != smalliconsize || height != smalliconsize) return;
    if (hotspot.h < 0) hotspot.h = 0;
    if (hotspot.h >= smalliconsize) hotspot.h = smalliconsize - 1;
    if (hotspot.v < 0) hotspot.v = 0;
    if (hotspot.v >= smalliconsize) hotspot.v = smalliconsize - 1;
    thepoint = NSMakePoint(hotspot.h, hotspot.v);
    across = (*thedata).rowbytes;

    thecolor = CGColorSpaceCreateDeviceRGB();
    theinfo = (CGBitmapInfo)kCGImageAlphaPremultipliedFirst;
    thecont = CGBitmapContextCreate((*thedata).baseaddr, width, height, 8, across, thecolor, theinfo);
    theimage = CGBitmapContextCreateImage(thecont);
    newimage = [[NSImage alloc] initWithCGImage:theimage size:NSZeroSize];
    thecursor = [[NSCursor alloc] initWithImage:newimage hotSpot:thepoint];
    [thecursor set];
    [thecursor release];
    [newimage release];
    CGImageRelease(theimage);
    CGContextRelease(thecont);
    CGColorSpaceRelease(thecolor);
}
OK, I figured this out. When the alpha channel is 0 you need to make sure that red, green, and blue are also zero. If the alpha channel is zero and red, green, and blue are 255, I see XOR behavior for that pixel. This makes sense given that the bitmap context uses premultiplied alpha (kCGImageAlphaPremultipliedFirst), so each color component is expected to already be multiplied by, and therefore never exceed, the alpha value. It may also be related to the ancient way that cursors were implemented, with clear, white, black, and XOR depending on the bit masks.
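If regenerating the source data isn't an option, one workaround is to clear the color components wherever alpha is zero before creating the bitmap context. Here's a rough, untested sketch assuming the same bigmap layout used above (32-bit ARGB rows; the helper name is made up):
static void zerooutmaskedpixels(bigmap *thedata)
{
    // Force R, G, B to zero wherever A is zero so the premultiplied-alpha
    // context does not misinterpret the pixel.
    unsigned char *row = (unsigned char *)(*thedata).baseaddr;
    for (int y = 0; y < (*thedata).height; y++) {
        unsigned char *p = row;
        for (int x = 0; x < (*thedata).width; x++) {
            if (p[0] == 0) {              // alpha component (ARGB order)
                p[1] = p[2] = p[3] = 0;   // clear red, green, blue
            }
            p += 4;
        }
        row += (*thedata).rowbytes;
    }
}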

UIImage resize and crop to fit frame

I know this question has been asked several times, but their answers make my images lose quality. They all become pixelated. So even though it crops and resizes correctly, it loses quality.
Just so you can check it, this is the algorithm which is in every post:
- (UIImage *)scaleAndCropImage:(UIImage *)aImage forSize:(CGSize)targetSize
{
    UIImage *sourceImage = aImage;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO)
    {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;
        if (widthFactor > heightFactor)
        {
            scaleFactor = widthFactor; // scale to fit height
        }
        else
        {
            scaleFactor = heightFactor; // scale to fit width
        }
        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;

        // center the image
        if (widthFactor > heightFactor)
        {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else if (widthFactor < heightFactor)
        {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }

    UIGraphicsBeginImageContext(targetSize); // this will crop

    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];

    newImage = UIGraphicsGetImageFromCurrentImageContext();
    if (newImage == nil)
    {
        NSLog(@"could not scale image");
    }

    // pop the context to get back to the default
    UIGraphicsEndImageContext();
    return newImage;
}
On the line where you begin the image context, use UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext. Try something like this:
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0)
Notice the three parameters passed above, I'll go through them in order:
targetSize is the size of the image measured in points (not pixels)
NO is a BOOL indicating whether the bitmap is fully opaque. Passing NO keeps an alpha channel, so any transparency in the image is preserved.
The most important part of the above code is the final parameter, 0.0. This is the image scale factor that will be applied; passing 0.0 uses the scale factor of the current device's screen, so quality is preserved and the result looks especially good on Retina displays.
Here's the Apple Documentation on UIGraphicsBeginImageContextWithOptions.
You should use UIGraphicsBeginImageContextWithOptions(targetSize, false, 0.0) instead of UIGraphicsBeginImageContext(targetSize) so the correct scale factor gets applied to the bitmap.
Specifying 0.0 as the scale factor sets it to the scale of the device's main screen.
Calling only UIGraphicsBeginImageContext() is the same as calling
UIGraphicsBeginImageContextWithOptions(..) with a scale factor of 1.0
For more details, take a look at: http://developer.apple.com/library/ios/documentation/UIKit/Reference/UIKitFunctionReference/Reference/reference.html#//apple_ref/c/func/UIGraphicsBeginImageContextWithOptions
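Put together, the only change needed in the question's method is the context-creation line; a minimal before/after sketch:
// Before (from the method in the question):
UIGraphicsBeginImageContext(targetSize);
// After: render at the device's screen scale so the thumbnail stays sharp
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);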

Does iOS 5 support blur CoreImage filters?

According to the documentation it should support blurring, note the "Available in iOS 5.0 and later":
CIFilter Class Reference
But according to the device, it doesn't:
[CIFilter filterNamesInCategory:kCICategoryBlur];
returns nothing.
According to the following only these filters are available on my iPhone and Simulator (which are both running 5.0):
[CIFilter filterNamesInCategory:kCICategoryBuiltIn]
CIAdditionCompositing,
CIAffineTransform,
CICheckerboardGenerator,
CIColorBlendMode,
CIColorBurnBlendMode,
CIColorControls,
CIColorCube,
CIColorDodgeBlendMode,
CIColorInvert,
CIColorMatrix,
CIColorMonochrome,
CIConstantColorGenerator,
CICrop,
CIDarkenBlendMode,
CIDifferenceBlendMode,
CIExclusionBlendMode,
CIExposureAdjust,
CIFalseColor,
CIGammaAdjust,
CIGaussianGradient,
CIHardLightBlendMode,
CIHighlightShadowAdjust,
CIHueAdjust,
CIHueBlendMode,
CILightenBlendMode,
CILinearGradient,
CILuminosityBlendMode,
CIMaximumCompositing,
CIMinimumCompositing,
CIMultiplyBlendMode,
CIMultiplyCompositing,
CIOverlayBlendMode,
CIRadialGradient,
CISaturationBlendMode,
CIScreenBlendMode,
CISepiaTone,
CISoftLightBlendMode,
CISourceAtopCompositing,
CISourceInCompositing,
CISourceOutCompositing,
CISourceOverCompositing,
CIStraightenFilter,
CIStripesGenerator,
CITemperatureAndTint,
CIToneCurve,
CIVibrance,
CIVignette,
CIWhitePointAdjust
While Core Image on iOS 5.0 lacks blur filters, there is still a way to get GPU-accelerated blurs of images and video. My open source GPUImage framework has multiple blur types, including Gaussian (using the GPUImageGaussianBlurFilter for a general Gaussian or the GPUImageFastBlurFilter for a hardware-optimized 9-hit Gaussian), box (using a GPUImageBoxBlurFilter), median (using a GPUImageMedianFilter), and a bilateral blur (using a GPUImageBilateralBlurFilter).
I describe the shaders used to pull off the hardware-optimized Gaussian blur in this answer, and you can examine the code I use for the rest within the framework. These filters run tens of times faster than any CPU-bound routine I've tried yet.
I've also incorporated these blurs into multi-stage processing effects, like unsharp masking, tilt-shift filtering, Canny edge detection, and Harris corner detection, all of which are available as filters within this framework.
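For reference, here's a minimal sketch of applying one of those blurs; treat the property name as an assumption, since it has changed across GPUImage versions:
// A rough sketch, assuming the GPUImage framework is linked and imported.
// blurRadiusInPixels is the property in later GPUImage releases; older ones use blurSize.
GPUImageGaussianBlurFilter *blurFilter = [[GPUImageGaussianBlurFilter alloc] init];
blurFilter.blurRadiusInPixels = 2.0;
UIImage *blurredImage = [blurFilter imageByFilteringImage:sourceImage];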
Again, in an attempt to solve all the iOS blur issues, here is my contribution:
https://github.com/tomsoft1/StackBluriOS
A simple blur library based on Stack Blur. Stack Blur is very similar to Gaussian Blur, but much faster (see http://incubator.quasimondo.com/processing/fast_blur_deluxe.php )
Use it like this:
UIImage *newIma = [sourceIma stackBlur:radius];
Hope this helps.
I too was disappointed to find that Core Image in iOS doesn't support blurs. Here's the function I wrote to do a 9-tap Gaussian blur on a UIImage. Call it repeatedly to get stronger blurs.
@interface UIImage (ImageBlur)
- (UIImage *)imageWithGaussianBlur9;
@end

@implementation UIImage (ImageBlur)

- (UIImage *)imageWithGaussianBlur9 {
    float weight[5] = {0.2270270270, 0.1945945946, 0.1216216216, 0.0540540541, 0.0162162162};

    // Blur horizontally
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [self drawInRect:CGRectMake(0, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[0]];
    for (int x = 1; x < 5; ++x) {
        [self drawInRect:CGRectMake(x, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[x]];
        [self drawInRect:CGRectMake(-x, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[x]];
    }
    UIImage *horizBlurredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Blur vertically
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [horizBlurredImage drawInRect:CGRectMake(0, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[0]];
    for (int y = 1; y < 5; ++y) {
        [horizBlurredImage drawInRect:CGRectMake(0, y, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[y]];
        [horizBlurredImage drawInRect:CGRectMake(0, -y, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[y]];
    }
    UIImage *blurredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return blurredImage;
}

@end
Just call it on an existing image like this:
UIImage *blurredImage = [originalImage imageWithGaussianBlur9];
and repeat it to get stronger blurring, like this:
blurredImage = [blurredImage imageWithGaussianBlur9];
Unfortunately, it does not support any blurs. For that, you'll have to roll your own.
UPDATE: As of iOS 6, [CIFilter filterNamesInCategory:kCICategoryBlur] returns CIGaussianBlur, meaning that this filter is available on the device. Even so, you will probably get better performance and more flexibility using GPUImage.
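For completeness, here's a minimal sketch of using CIGaussianBlur on iOS 6 or later (assuming a UIImage named originalImage; the blur expands the output's extent, so it's cropped back to the input extent here):
CIContext *ciContext = [CIContext contextWithOptions:nil];
CIImage *inputImage = [CIImage imageWithCGImage:originalImage.CGImage];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:inputImage forKey:kCIInputImageKey];
[blur setValue:@4.0 forKey:kCIInputRadiusKey];
CIImage *output = [blur valueForKey:kCIOutputImageKey];
CGImageRef cgBlurred = [ciContext createCGImage:output fromRect:inputImage.extent];
UIImage *ciBlurredImage = [UIImage imageWithCGImage:cgBlurred];
CGImageRelease(cgBlurred);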
Here is a link to our tutorial on creating a blur effect in an iOS application with different approaches: http://blog.denivip.ru/index.php/2013/01/blur-effect-in-ios-applications/?lang=en
If you can use OpenGL ES in your iOS app, this is how you calculate the median in a pixel neighborhood radius of your choosing (the median being a type of blur, of course):
kernel vec4 medianUnsharpKernel(sampler u) {
    vec4 pixel = unpremultiply(sample(u, samplerCoord(u)));
    vec2 xy = destCoord();
    int radius = 3;
    int bounds = (radius - 1) / 2;
    vec4 sum = vec4(0.0);
    for (int i = (0 - bounds); i <= bounds; i++)
    {
        for (int j = (0 - bounds); j <= bounds; j++)
        {
            sum += unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
        }
    }
    vec4 mean = vec4(sum / vec4(pow(float(radius), 2.0)));
    float mean_avg = float(mean);
    float comp_avg = 0.0;
    vec4 comp = vec4(0.0);
    vec4 median = mean;
    for (int i = (0 - bounds); i <= bounds; i++)
    {
        for (int j = (0 - bounds); j <= bounds; j++)
        {
            comp = unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
            comp_avg = float(comp);
            median = (comp_avg < mean_avg) ? max(median, comp) : median;
        }
    }
    return premultiply(vec4(vec3(abs(pixel.rgb - median.rgb)), 1.0));
}
A brief description of the steps
1. Calculate the mean of the values of the pixels surrounding the source pixel in a 3x3 neighborhood.
2. Find the maximum pixel value of all pixels in the same neighborhood that are less than the mean.
3. [OPTIONAL] Subtract the median pixel value from the source pixel value for edge detection.
If you're using the median value for edge detection, there are a couple of ways to modify the above code for better results, namely hybrid median filtering and truncated median filtering (a substitute and a better 'mode' filtering). If you're interested, please ask.
Because I'm using Xamarin, I converted John Stephen's answer to C#:
private UIImage ImageWithGaussianBlur9(UIImage image)
{
    var weight = new nfloat[]
    {
        0.2270270270f, 0.1945945946f, 0.1216216216f, 0.0540540541f, 0.0162162162f
    };
    var width = image.Size.Width;
    var height = image.Size.Height;

    // Blur horizontally
    UIGraphics.BeginImageContextWithOptions(image.Size, false, 1f);
    image.Draw(new CGRect(0f, 0f, width, height), CGBlendMode.PlusLighter, weight[0]);
    for (int x = 1; x < 5; ++x)
    {
        image.Draw(new CGRect(x, 0, width, height), CGBlendMode.PlusLighter, weight[x]);
        image.Draw(new CGRect(-x, 0, width, height), CGBlendMode.PlusLighter, weight[x]);
    }
    var horizBlurredImage = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();

    // Blur vertically
    UIGraphics.BeginImageContextWithOptions(image.Size, false, 1f);
    horizBlurredImage.Draw(new CGRect(0, 0, width, height), CGBlendMode.PlusLighter, weight[0]);
    for (int y = 1; y < 5; ++y)
    {
        horizBlurredImage.Draw(new CGRect(0, y, width, height), CGBlendMode.PlusLighter, weight[y]);
        horizBlurredImage.Draw(new CGRect(0, -y, width, height), CGBlendMode.PlusLighter, weight[y]);
    }
    var blurredImage = UIGraphics.GetImageFromCurrentImageContext();
    UIGraphics.EndImageContext();

    return blurredImage;
}

average color value of UIImage in Objective-C

I need the average color value of an image in Objective-C. I want to create a color gradient from it.
Does anyone have an idea?
Here is some experimental code that I have not tested yet.
struct pixel {
    unsigned char r, g, b, a;
};

- (UIColor *)getDominantColor:(UIImage *)image
{
    NSUInteger red = 0;
    NSUInteger green = 0;
    NSUInteger blue = 0;

    // Allocate a buffer big enough to hold all the pixels
    struct pixel *pixels = (struct pixel *)calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
    if (pixels != NULL)
    {
        CGContextRef context = CGBitmapContextCreate(
            (void *)pixels,
            image.size.width,
            image.size.height,
            8,
            image.size.width * 4,
            CGImageGetColorSpace(image.CGImage),
            (CGBitmapInfo)kCGImageAlphaPremultipliedLast
        );
        if (context != NULL)
        {
            // Draw the image in the bitmap
            CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);

            // Now that we have the image drawn in our own buffer, we can loop over the pixels to
            // process it. This simple case sums the red, green, and blue components of every pixel.
            // There are probably more efficient and interesting ways to do this. But the important
            // part is that the pixels buffer can be read directly.
            NSUInteger numberOfPixels = image.size.width * image.size.height;
            for (int i = 0; i < numberOfPixels; i++) {
                red += pixels[i].r;
                green += pixels[i].g;
                blue += pixels[i].b;
            }
            red /= numberOfPixels;
            green /= numberOfPixels;
            blue /= numberOfPixels;

            CGContextRelease(context);
        }
        free(pixels);
    }
    return [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:1.0f];
}
You can use this method, for example:
- (void)doSomething
{
    UIImage *image = [UIImage imageNamed:@"someImage.png"];
    UIColor *dominantColor = [self getDominantColor:image];
}
I hope this will work for you.
You can also implement this as a category on UIImage; that's a nicer way to package utilities for objects :)
Edit : Fixed the bug in while().
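As a sketch of the category suggestion above (the category and method names are made up, and this variant leans on Core Graphics downsampling into a 1x1 bitmap, which only approximates the average color rather than computing an exact mean):
@interface UIImage (AverageColor)
- (UIColor *)averageColor;   // hypothetical name, for illustration
@end

@implementation UIImage (AverageColor)
- (UIColor *)averageColor {
    // Draw the whole image into a 1x1 RGBA bitmap; the downsampling filter
    // blends the pixels, so the single resulting pixel approximates the mean color.
    unsigned char rgba[4] = {0, 0, 0, 0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgba, 1, 1, 8, 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextSetInterpolationQuality(context, kCGInterpolationMedium);
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), self.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return [UIColor colorWithRed:rgba[0] / 255.0f
                           green:rgba[1] / 255.0f
                            blue:rgba[2] / 255.0f
                           alpha:1.0f];
}
@end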
There is also a category method floating around that creates the average color from an image (it is not a built-in UIKit method):
[UIColor colorWithAverageColorFromImage:(UIImage *)image];

Simple way to read pixel color values from an PNG image on the iPhone?

Is there an easy way to get a two-dimensional array or something similar that represents the pixel data of an image?
I have black & white PNG images and I simply want to read the color value at a certain coordinate, for example the color value at (20, 100).
This category on UIImage might be helpful (source):
#import <CoreGraphics/CoreGraphics.h>
#import "UIImage+ColorAtPixel.h"

@implementation UIImage (ColorAtPixel)

- (UIColor *)colorAtPixel:(CGPoint)point {
    // Cancel if point is outside image coordinates
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, self.size.width, self.size.height), point)) {
        return nil;
    }

    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
    // Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = self.CGImage;
    NSUInteger width = CGImageGetWidth(cgImage);
    NSUInteger height = CGImageGetHeight(cgImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 1,
                                                 1,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, -pointY);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);

    // Convert color values [0..255] to floats [0.0..1.0]
    CGFloat red   = (CGFloat)pixelData[0] / 255.0f;
    CGFloat green = (CGFloat)pixelData[1] / 255.0f;
    CGFloat blue  = (CGFloat)pixelData[2] / 255.0f;
    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}

@end
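A quick usage sketch for the question's 20/100 case (the image file name is made up):
UIImage *png = [UIImage imageNamed:@"myImage.png"];        // hypothetical file name
UIColor *color = [png colorAtPixel:CGPointMake(20, 100)];  // color at x = 20, y = 100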
You could put the PNG into an image view, and then use this method to get the pixel value from a graphics context that you draw the image into.
A class to do it for you, and explained too:
http://www.markj.net/iphone-uiimage-pixel-color/
The direct approach is slightly tedious, but here goes:
Get the CoreGraphics image.
CGImageRef cgImage = image.CGImage;
Get the "data provider", and from that get the data.
NSData * d = [(id)CGDataProviderCopyData(CGImageGetDataProvider(cgImage)) autorelease];
Figure out what format the data is in.
CGImageGetBitmapInfo();
CGImageGetBitsPerComponent();
CGImageGetBitsPerPixel();
CGImageGetBytesPerRow();
Figure out the colour space (PNG supports greyscale/RGB/paletted).
CGImageGetColorSpace()
The indirect approach is to draw the image to a context (note that you may need to specify the context's byte order if you want any guarantees) and read the bytes out.
If you only want single pixels, it might be faster to draw the image to a 1x1 context with the right rect
(something like (CGRect){{-x,-y},{imgWidth,imgHeight}}).
This will handle colour-space conversion for you. If you just want a brightness value, use a greyscale context.
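Here's a rough, untested sketch of that last suggestion, using a 1x1 greyscale context to read a single brightness value (note that CGContextDrawImage measures y from the bottom-left of the image, so flip y first if you are working in UIKit coordinates):
// Read the brightness (0-255) of the pixel at (x, y) by drawing the image
// into a 1x1 greyscale context offset so that pixel lands at the origin.
static uint8_t brightnessAtPixel(CGImageRef cgImage, int x, int y) {
    uint8_t brightness = 0;
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(&brightness, 1, 1, 8, 1, gray, (CGBitmapInfo)kCGImageAlphaNone);
    CGContextSetBlendMode(ctx, kCGBlendModeCopy);
    CGContextDrawImage(ctx, (CGRect){{-x, -y}, {CGImageGetWidth(cgImage), CGImageGetHeight(cgImage)}}, cgImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(gray);
    return brightness;
}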