Linear gradient aliasing with CoreGraphics

I'm trying to emulate the color tint effect of UITabBarItem.
When I draw a linear gradient at an angle, I get visible aliasing in the middle part of the gradient, where two colors meet at the same location. Left is UITabBarItem, right is my gradient with visible aliasing (stepping):
Here is the relevant code snippet:
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextSaveGState(c);
CGContextScaleCTM(c, 1.0, -1.0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGFloat components[16] = { 1.0, 1.0, 1.0, 1.0,
                           109.0/255.0, 175.0/255.0, 246.0/255.0, 1.0,
                           31.0/255.0, 133.0/255.0, 242.0/255.0, 1.0,
                           143.0/255.0, 194.0/255.0, 248.0/255.0, 1.0 };
CGFloat locations[4] = {0.0, 0.62, 0.62, 1.0};
CGGradientRef colorGradient = CGGradientCreateWithColorComponents(colorSpace, components,
                                                                  locations, (size_t)4);
CGContextDrawLinearGradient(c, colorGradient, CGPointZero,
                            CGPointMake(size.width * 1.0 / 3.9, -size.height), 0);
CGGradientRelease(colorGradient);
CGColorSpaceRelease(colorSpace);
CGContextRestoreGState(c);
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultImage;
What do I need to change to get a smooth angled gradient like the one in UITabBarItem?

What is the interpolation quality of your context set to? CGContextGetInterpolationQuality()/CGContextSetInterpolationQuality(). Try changing that if it's too low.
If that doesn't work, I'm curious what happens if you draw the gradient vertically (0,Ymin)-(0,Ymax) but apply a rotation transformation to your context...
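A rough sketch of both suggestions (untested; the angle variable is an assumption you would derive from the direction the gradient should run, and the extend options are there so the rotated gradient still covers the whole rect):
CGContextSaveGState(c);
CGContextSetInterpolationQuality(c, kCGInterpolationHigh); // first suggestion
CGContextRotateCTM(c, angle);                              // second suggestion: rotate the context...
CGContextDrawLinearGradient(c, colorGradient,              // ...and draw the gradient vertically
                            CGPointMake(0.0, 0.0),
                            CGPointMake(0.0, size.height),
                            kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation);
CGContextRestoreGState(c);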

As a current workaround, I draw the gradient at double resolution into an image and then draw that image at the original dimensions. The image scaling that occurs takes care of the aliasing. At the pixel level the result is not as smooth as in the UITabBarItem, but that probably uses an image created in Photoshop or something similar.
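For reference, a minimal sketch of that 2x workaround, assuming a hypothetical helper, drawGradientIntoCurrentContextWithSize:, that wraps the gradient code from the question:
// Render at 2x into an offscreen image, then draw it back at 1x so the
// downscale smooths the stepping. drawGradientIntoCurrentContextWithSize:
// is a hypothetical helper wrapping the gradient code above.
CGSize doubleSize = CGSizeMake(size.width * 2.0, size.height * 2.0);
UIGraphicsBeginImageContextWithOptions(doubleSize, NO, 0.0);
[self drawGradientIntoCurrentContextWithSize:doubleSize];
UIImage *bigImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
[bigImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage *smoothImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();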

Related

Bitmap context for wide color range

I am trying to create an image mask with the kCGColorSpaceDisplayP3 color space to support the iPhone 7's wide color gamut.
I am able to create the image mask correctly when using the sRGB color space on iPhone 6 and earlier devices running iOS 10 and earlier. But I have no clue where I am going wrong when creating the color space with kCGColorSpaceDisplayP3:
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceDisplayP3);
CGContextRef context = CGBitmapContextCreate(NULL, 320.0, 320.0, 32, 320.0*16, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapFloatComponents);
CGFloat radius = 10.0;
CGFloat components[] = {1.0,1.0,1.0,1.0, 1.0,1.0,1.0,1.0, 1.0,1.0,1.0,1.0, 1.0,1.0,1.0,1.0, 1.0,1.0,1.0,0.5, 1.0,1.0,1.0,0.0};
CGFloat locations[] = {0.0, 0.1, 0.2, 0.8, 0.9, 1.0};
CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, components, locations, 6); //colorSpaceP3
CGPoint center = CGPointMake(100.0, 100.0);
CGContextDrawRadialGradient(context, gradient, center, 0.1, center, radius, 0);
CGGradientRelease(gradient);
CGImageRef imageHole = CGBitmapContextCreateImage(context);
CGImageRef maskHole = CGImageMaskCreate(CGImageGetWidth(imageHole), CGImageGetHeight(imageHole), CGImageGetBitsPerComponent(imageHole), CGImageGetBitsPerPixel(imageHole), CGImageGetBytesPerRow(imageHole), CGImageGetDataProvider(imageHole), NULL, FALSE);
CGImageRelease(imageHole);
CGImageRef image = [UIImage imageNamed:@"prosbo_hires.jpg"].CGImage;
CGImageRef masked = CGImageCreateWithMask(image, maskHole);
CGImageRelease(maskHole);
UIImage *img = [UIImage imageWithCGImage:masked];
CGImageRelease(masked);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
The log says:
CGImageMaskCreate: invalid mask bits/component: 32.
I don't have much experience with Core Graphics. Can anyone please suggest something here?
Thanks.
The documentation for the bitsPerComponent parameter of CGImageMaskCreate() says:
Image masks must be 1, 2, 4, or 8 bits per component.
You're passing CGImageGetBitsPerComponent(imageHole), which is 32 bits per component. As per both the documentation and the log message, that's not valid.
The implication is that image masks don't support floating point bitmap formats.
It should be possible to create the bitmap context and the mask using 8 bits per component. More or less, just leave out kCGBitmapFloatComponents. I expect that will result in reduced granularity of the opacity of the mask, but won't affect the color range of masked images.
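A minimal sketch of that approach, going one small step further than suggested above: since a mask only needs a single channel, it uses an 8-bit grayscale context, which also satisfies CGImageMaskCreate's expectations; the Display P3 color space still matters only for the image being masked, not for the mask itself.
// 8-bit, single-channel (grayscale) context instead of the 32-bit float RGBA one.
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGContextRef maskContext = CGBitmapContextCreate(NULL, 320, 320,
                                                 8,      // bits per component: valid for a mask
                                                 320,    // bytes per row: one 8-bit sample per pixel
                                                 graySpace,
                                                 (CGBitmapInfo)kCGImageAlphaNone);
// ... draw the radial gradient here, using gray/alpha component pairs ...
CGImageRef maskImage = CGBitmapContextCreateImage(maskContext);
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskImage),
                                    CGImageGetHeight(maskImage),
                                    8,   // bits per component
                                    8,   // bits per pixel (single channel)
                                    CGImageGetBytesPerRow(maskImage),
                                    CGImageGetDataProvider(maskImage),
                                    NULL, FALSE);
CGImageRelease(maskImage);
CGContextRelease(maskContext);
CGColorSpaceRelease(graySpace);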
This fixed my issue (m here appears to be an OpenCV cv::Mat; the key change is passing 8 bits per component):
contextRef = CGBitmapContextCreate(m.data,
                                   m.cols,
                                   m.rows,
                                   8,
                                   m.step[0],
                                   CGColorSpaceCreateDeviceRGB(),
                                   bitmapInfo);
https://developer.apple.com/search/?q=CGColorSpaceCreate

How to draw curve with glow in CGContext - something like SKShapeNode with glowWidth

I am rewriting some of my graphics drawing code from SKShapeNodes to CGContext/CALayers. I am trying to draw a curve with glow in CGContext. This is what I used to have in SpriteKit:
CGPathRef path = …(some path)
SKShapeNode *node = [SKShapeNode node];
node.path = path;
node.glowWidth = 60;
After adding it to the scene with dark-grey background, the result was as follows:
Is it possible to draw a line with such a glow using CGContext but without using CIFilters? Normally I will be drawing over a non-blank context background, so I would prefer not to apply CIFilters after the line has been drawn.
I have already tried the "shadow" solution, but the results are far from perfect:
UIGraphicsBeginImageContextWithOptions(frame.size, NO, 1);
CGContextRef context = UIGraphicsGetCurrentContext();
CGFloat glowWidth = 60.0;
CGContextSetShadowWithColor(context, CGSizeMake(0.0, 0.0), glowWidth, [UIColor whiteColor].CGColor);
CGContextBeginPath(context);
CGContextAddPath(context, path);
CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextStrokePath(context);
Result (the shadow is hardly visible):
Please let me know if you have some ideas.
You can create an SKEffectNode, which allows you to apply Core Image filters to all of its children. In other words, you can create an SKEffectNode, then add a flower sprite as a child, and the SKEffectNode would apply the Core Image effect to the flower sprite.
For more detailed information, please see the SKEffectNode Class Reference.
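For example, a minimal sketch of that setup (the blur radius and the scene and path variables are assumptions, not from the original answer):
// Apply a Gaussian blur to a child shape node via SKEffectNode for a glow-like result.
SKEffectNode *glowNode = [SKEffectNode node];
glowNode.shouldRasterize = YES;
glowNode.filter = [CIFilter filterWithName:@"CIGaussianBlur"
                       withInputParameters:@{kCIInputRadiusKey: @20.0}];
SKShapeNode *shape = [SKShapeNode node];
shape.path = path;
shape.strokeColor = [SKColor whiteColor];
[glowNode addChild:shape];
[scene addChild:glowNode];
Note that this blurs the whole stroke; for a crisp core with a soft halo you would typically add a second, un-blurred copy of the shape on top.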

CALayer: Linear Gradient Issue

I am drawing a linear gradient on one of my CALayers with some colors, but randomly, with the same input, the gradient is drawn on the screen as a pink color.
The code is as follows:
bottomComponents = bottomColor.colorComponents;
topComponents = topColor.colorComponents;
middleComponents = middleColor.colorComponents;
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGGradientRef glossGradient;
CGColorSpaceRef rgbColorspace;
size_t numberOfLocations = 3;
CGFloat locations[3] = {0.0f, MIDDLE_POINT, 1.0f};
CGFloat components[12] = {
    topComponents.red,    topComponents.green,    topComponents.blue,    topComponents.alpha,
    middleComponents.red, middleComponents.green, middleComponents.blue, middleComponents.alpha,
    bottomComponents.red, bottomComponents.green, bottomComponents.blue, bottomComponents.alpha
};
rgbColorspace = CGColorSpaceCreateDeviceRGB();
glossGradient = CGGradientCreateWithColorComponents(rgbColorspace, components, locations, numberOfLocations);
CGRect currentBounds = self.bounds;
CGPoint topCenter = CGPointMake(CGRectGetMidX(currentBounds), 0.0f);
CGPoint bottomCenter = CGPointMake(CGRectGetMidX(currentBounds), CGRectGetHeight(currentBounds));
CGContextDrawLinearGradient(currentContext, glossGradient, topCenter, bottomCenter, 0);
CGGradientRelease(glossGradient);
CGColorSpaceRelease(rgbColorspace);
where colorComponents just returns a struct with the color components.
When I log the color, it is the proper color, but when it shows up on screen, regardless of the start colors, it is a pink-ish color.
Is there anything that I have done wrong that could cause the random pink to show up?
The pink shows up completely sporadically. I will load from the exact same values and rarely, but surely, it will show up pink. Loading from different values yields the same results. It's happening around 1-2% of the time.
Any chance that some of the time you are using UIColor's predefined colors (for example, greenColor)? I found that those don't generate the proper components.
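If that is the cause, one fix is to normalize every UIColor to four RGBA components before building the array. A minimal sketch, assuming a hypothetical struct matching what colorComponents in the question seems to return:
// Hypothetical RGBA struct; getRed:green:blue:alpha: handles most colors,
// and getWhite:alpha: covers grayscale-backed colors (e.g. whiteColor) whose
// raw CGColor only carries two components.
typedef struct { CGFloat red, green, blue, alpha; } RGBAComponents;

static RGBAComponents RGBAComponentsFromColor(UIColor *color) {
    RGBAComponents c = {0, 0, 0, 1};
    if (![color getRed:&c.red green:&c.green blue:&c.blue alpha:&c.alpha]) {
        CGFloat white = 0, alpha = 1;
        if ([color getWhite:&white alpha:&alpha]) {
            c.red = c.green = c.blue = white;
            c.alpha = alpha;
        }
    }
    return c;
}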

Saving 2 UIImages to one while saving rotation, resize info and its quality

I want to save two UIImages that are moved, resized, and rotated by the user into one image. The problem is I don't want to use any 'screenshot'-style function, because it makes both images lose a lot of their quality (resolution).
At the moment I use something like this:
UIGraphicsBeginImageContext(image1.size);
[image1 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
[image2 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
However, of course, it just composites the two images; their rotation, resizing, and moving aren't handled here. Can anybody help with taking these three aspects into account? Any help is appreciated!
My biggest thanks in advance :)
EDIT: The images can be rotated and zoomed by the user (via touch events)!
You have to set the transform of the context to match your imageView's transform before you start drawing into it.
i.e.,
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, boundingRect.size.width/2, boundingRect.size.height/2);
transform = CGAffineTransformRotate(transform, angle);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
// Draw the image into the context
CGContextDrawImage(context, CGRectMake(-imageView.image.size.width/2, -imageView.image.size.height/2, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);
// Get an image from the context
rotatedImage = [UIImage imageWithCGImage: CGBitmapContextCreateImage(context)];
and check out Creating a UIImage from a rotated UIImageView.
EDIT: if you don't know the angle of rotation of the image you can get the transform from the layer property of the UIImageView:
UIGraphicsBeginImageContext(rotatedImageView.image.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform transform = rotatedImageView.transform;
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
// Draw the image into the context
CGContextDrawImage(context, CGRectMake(0, 0, rotatedImageView.image.size.width, rotatedImageView.image.size.height), rotatedImageView.image.CGImage);
// Get an image from the context
UIImage *newRotatedImage = [UIImage imageWithCGImage: CGBitmapContextCreateImage(context)];
UIGraphicsEndImageContext();
You will have to play about with the transform matrix to centre the image in the context and you will also have to calculate a bounding rectangle for the rotated image or it will be cropped at the corners (i.e., rotatedImageView.image.size is not big enough to encompass a rotated version of itself).
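A minimal sketch of computing such a bounding rectangle from the view's transform (an assumption; adjust if you track the rotation angle separately):
// Axis-aligned bounding box of the transformed image rect; using its size
// for the context keeps the rotated corners from being cropped.
CGRect imageRect = CGRectMake(0, 0,
                              rotatedImageView.image.size.width,
                              rotatedImageView.image.size.height);
CGRect boundingRect = CGRectApplyAffineTransform(imageRect, rotatedImageView.transform);
UIGraphicsBeginImageContext(boundingRect.size);
// ...then translate so the drawing is centred, concat the transform, and draw as above.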
Try this:
UIImage *temp = [[UIImage alloc] initWithCGImage:image1.CGImage scale:1.0 orientation:yourOrientation];
[temp drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
Similarly for image2. Rotation and resizing are handled by orientation and scale respectively. yourOrientation is a UIImageOrientation enum variable and can have a value from 0-7 (check the Apple documentation on the different UIImageOrientation values). Hope it helps...
EDIT: To handle rotations, just pass the orientation value for the rotation you require. You can rotate 90 deg left/right or flip vertically/horizontally. For example, in the Apple documentation, UIImageOrientationUp is 0, UIImageOrientationDown is 1, and so on. Check out my GitHub repo for an example.

Flipping OpenGL texture

When I load textures from images normally, they are upside down because of OpenGL's coordinate system. What would be the best way to flip them?
glScalef(1.0f, -1.0f, 1.0f);
mapping the y coordinates of the textures in reverse
vertically flipping the image files manually (in Photoshop)
flipping them programmatically after loading them (I don't know how)
This is the method I'm using to load png textures, in my Utilities.m file (Objective-C):
+ (TextureImageRef)loadPngTexture:(NSString *)name {
    CFURLRef textureURL = CFBundleCopyResourceURL(
        CFBundleGetMainBundle(),
        (CFStringRef)name,
        CFSTR("png"),
        CFSTR("Textures"));
    NSAssert(textureURL, @"Texture name invalid");
    CGImageSourceRef imageSource = CGImageSourceCreateWithURL(textureURL, NULL);
    NSAssert(imageSource, @"Invalid Image Path.");
    NSAssert((CGImageSourceGetCount(imageSource) > 0), @"No Image in Image Source.");
    CFRelease(textureURL);
    CGImageRef image = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
    NSAssert(image, @"Image not created.");
    CFRelease(imageSource);
    GLuint width = (GLuint)CGImageGetWidth(image);
    GLuint height = (GLuint)CGImageGetHeight(image);
    void *data = malloc(width * height * 4);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSAssert(colorSpace, @"Colorspace not created.");
    CGContextRef context = CGBitmapContextCreate(
        data,
        width,
        height,
        8,
        width * 4,
        colorSpace,
        kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
    NSAssert(context, @"Context not created.");
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGImageRelease(image);
    CGContextRelease(context);
    return TextureImageCreate(width, height, data);
}
Where TextureImage is a struct that has a height, width and void *data.
Right now I'm just playing around with OpenGL, but later I want to try making a simple 2d game. I'm using Cocoa for all the windowing and Objective-C as the language.
Also, another thing I was wondering about: If I made a simple game, with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Thanks.
Any of those will work:
Flip the texture during texture load,
OR flip the model's texture coordinates during model load,
OR set the texture matrix to flip y (glMatrixMode(GL_TEXTURE)) during render.
Also, another thing I was wondering about: If I made a simple game, with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Depends on how you are going to render text.
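For the third option above, a minimal fixed-function sketch (this assumes the legacy matrix stack, consistent with the glScalef call in the question):
// Flip the texture's t coordinate (t' = 1 - t) via the texture matrix.
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.0f, 1.0f, 0.0f);
glScalef(1.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW); // switch back before drawing geometry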
Jordan Lewis pointed out that CGContextDrawImage draws the image upside down when passed a UIImage's CGImage. There I found a quick and easy solution: before calling CGContextDrawImage,
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0f, -1.0f);
This does the job perfectly well.