I'd like to multiply the r, g, b, and a values of two CGImages, so I can use a single image as both a mask (in its alpha channel) and a tint (in the RGB channels). I first tried simply:
CGContextDrawImage(context, CGRectMake(0, 0, _w, _h), handle());
CGContextSetBlendMode(context, kCGBlendModeMultiply);
CGContextDrawImage(context, CGRectMake(0, 0, other->width(), other->height()), over);
However, this only tints and does not mask. Thus, I extracted the alpha from the masking image into a grayscale CGImage mask (using a CGDataProvider) and used that as a mask on the context:
// "mask" is "other"'s alpha channel as an inverted grayscale image
CGContextSaveGState(context);
CGContextClipToMask(context, CGRectMake(0, 0, other->width(), other->height()), mask);
CGImageRelease(mask);
CGContextDrawImage(context, CGRectMake(0, 0, _w, _h), handle());
CGContextRestoreGState(context); // Don't apply the clip mask to 'other', since it already has it
CGContextSetBlendMode(context, kCGBlendModeMultiply);
CGContextDrawImage(context, CGRectMake(0, 0, other->width(), other->height()), other->handle());
It still doesn't look right, though. The image gets lighter; shouldn't a multiplied image always be at least as dark as the originals? Previews of the images are at:
http://dl.dropbox.com/u/6775/multiply/index.html
Thanks in advance for enlightenment :)
The first thing is that you don't need to extract the alpha channel into a separate image to do masking. You can just draw the image with kCGBlendModeDestinationIn. For an inverted mask, do the same thing, but with kCGBlendModeDestinationOut.
So, the solution is to draw the second image with Multiply, then draw it again with Destination In or Out.
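Concretely, a minimal sketch of that order, reusing the names from the question (context, over, _w and _h are assumed to be as defined there):
CGContextSaveGState(context);
// Tint: multiply the tint image's RGB into what is already in the context.
CGContextSetBlendMode(context, kCGBlendModeMultiply);
CGContextDrawImage(context, CGRectMake(0, 0, _w, _h), over);
// Mask: keep the existing pixels only where the tint image has alpha.
CGContextSetBlendMode(context, kCGBlendModeDestinationIn);
CGContextDrawImage(context, CGRectMake(0, 0, _w, _h), over);
CGContextRestoreGState(context);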
I am trying to create an image mask with the kCGColorSpaceDisplayP3 colorspace to support the iPhone 7's wide color gamut.
I am able to create the image mask correctly when using the sRGB colorspace on iPhone 6 and earlier devices running iOS 10 and earlier. But I have no clue where I am going wrong when creating the colorspace with kCGColorSpaceDisplayP3:
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceDisplayP3);
CGContextRef context = CGBitmapContextCreate(NULL, 320.0, 320.0, 32, 320.0*16, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapFloatComponents);
CGFloat radius = 10.0;
CGFloat components[] = {1.0,1.0,1.0,1.0, 1.0,1.0,1.0,1.0, 1.0,1.0,1.0,1.0, 1.0,1.0,1.0,1.0, 1.0,1.0,1.0,0.5, 1.0,1.0,1.0,0.0};
CGFloat locations[] = {0.0, 0.1, 0.2, 0.8, 0.9, 1.0};
CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, components, locations, 6); //colorSpaceP3
CGPoint center = CGPointMake(100.0, 100.0);
CGContextDrawRadialGradient(context, gradient, center, 0.1, center, radius, 0);
CGGradientRelease(gradient);
CGImageRef imageHole = CGBitmapContextCreateImage(context);
CGImageRef maskHole = CGImageMaskCreate(CGImageGetWidth(imageHole), CGImageGetHeight(imageHole), CGImageGetBitsPerComponent(imageHole), CGImageGetBitsPerPixel(imageHole), CGImageGetBytesPerRow(imageHole), CGImageGetDataProvider(imageHole), NULL, FALSE);
CGImageRelease(imageHole);
CGImageRef image = [UIImage imageNamed:@"prosbo_hires.jpg"].CGImage;
CGImageRef masked = CGImageCreateWithMask(image, maskHole);
CGImageRelease(maskHole);
UIImage *img = [UIImage imageWithCGImage:masked];
CGImageRelease(masked);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
The log says:
CGImageMaskCreate: invalid mask bits/component: 32.
I don't have much experience with Core Graphics. Can anyone please suggest something here?
Thanks.
The documentation for the bitsPerComponent parameter of CGImageMaskCreate() says:
Image masks must be 1, 2, 4, or 8 bits per component.
You're passing CGImageGetBitsPerComponent(imageHole), which is 32 bits per component. As per both the documentation and the log message, that's not valid.
The implication is that image masks don't support floating point bitmap formats.
It should be possible to create the bitmap context and the mask using 8 bits per component. More or less, just leave out kCGBitmapFloatComponents. I expect that will result in reduced granularity of the opacity of the mask, but won't affect the color range of masked images.
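For example, one hedged way to build a valid mask is to render the gradient into an 8-bit grayscale context (a mask has a single component) and create the mask from that. This is only a sketch mirroring the 320x320 example above, not the exact original code:
CGColorSpaceRef grayColorSpace = CGColorSpaceCreateDeviceGray();
// 8 bits per component, one component per pixel, no alpha, no float components.
CGContextRef maskContext = CGBitmapContextCreate(NULL, 320, 320, 8, 320, grayColorSpace, kCGImageAlphaNone);
// White-to-black radial gradient; {gray, alpha} pairs per stop.
CGFloat grayComponents[] = {1.0, 1.0,  1.0, 1.0,  0.5, 1.0,  0.0, 1.0};
CGFloat grayLocations[] = {0.0, 0.8, 0.9, 1.0};
CGGradientRef grayGradient = CGGradientCreateWithColorComponents(grayColorSpace, grayComponents, grayLocations, 4);
CGPoint center = CGPointMake(100.0, 100.0);
CGContextDrawRadialGradient(maskContext, grayGradient, center, 0.1, center, 10.0, 0);
CGGradientRelease(grayGradient);
CGImageRef maskSource = CGBitmapContextCreateImage(maskContext);
// 8 bits per component and 8 bits per pixel are both valid for an image mask.
CGImageRef maskHole = CGImageMaskCreate(CGImageGetWidth(maskSource), CGImageGetHeight(maskSource), 8, 8, CGImageGetBytesPerRow(maskSource), CGImageGetDataProvider(maskSource), NULL, false);
CGImageRelease(maskSource);
CGContextRelease(maskContext);
CGColorSpaceRelease(grayColorSpace);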
This fixed my issue:
contextRef = CGBitmapContextCreate(
m.data,
m.cols,
m.rows,
8,
m.step[0],
CGColorSpaceCreateDeviceRGB(),
bitmapInfo);
https://developer.apple.com/search/?q=CGColorSpaceCreate
I am trying to draw a rotated image with the following code:
CGLayerRef layer=CGLayerCreateWithContext(context,CGSizeMake(brush.size.width*2,brush.size.height*2), NULL);
CGContextRef tmp=CGLayerGetContext(layer);
CGContextDrawImage(tmp,CGRectMake(brush.size.width/2, brush.size.height/2, brush.size.width, brush.size.height), brush.CGImage);
CGContextSetFillColorWithColor(tmp, [UIColor clearColor].CGColor);
CGContextSetBlendMode(tmp, kCGBlendModeClear);
CGContextFillRect(tmp, CGRectMake(0, 0, 2*brush.size.width, 2*brush.size.height));
CGContextSetBlendMode(tmp, kCGBlendModeNormal);
CGContextTranslateCTM(tmp, -brush.size.width/2, -brush.size.height/2);
CGContextRotateCTM(tmp, angle);
CGContextTranslateCTM(tmp, brush.size.width/2, brush.size.height/2);
CGContextDrawImage(tmp,CGRectMake(brush.size.width/2, brush.size.height/2, brush.size.width, brush.size.height), brush.CGImage);
I want to place the image at the center of the layer (the image is square).
Try applying the rotation, then the translation.
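For example, a hedged sketch of one common pattern (translate to the pivot, rotate, translate back), reusing the names from the snippet above:
CGContextSaveGState(tmp);
// The brush is drawn into (w/2, h/2, w, h) inside the 2w x 2h layer,
// so its centre is at (w, h): rotate about that point.
CGContextTranslateCTM(tmp, brush.size.width, brush.size.height);
CGContextRotateCTM(tmp, angle);
CGContextTranslateCTM(tmp, -brush.size.width, -brush.size.height);
CGContextDrawImage(tmp, CGRectMake(brush.size.width/2, brush.size.height/2, brush.size.width, brush.size.height), brush.CGImage);
CGContextRestoreGState(tmp);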
I want to save two UIImages that have been moved, resized and rotated by the user. The problem is that I don't want to use any "print screen"-style function, because it makes both images lose a lot of their quality (resolution).
At the moment I use something like this:
UIGraphicsBeginImageContext(image1.size);
[image1 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
[image2 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
However, of course this just draws the two images on top of each other; their rotation, resizing and moving aren't taken into account. Can anybody help with handling these three aspects in the code? Any help is appreciated!
My biggest thanks in advance :)
EDIT: the images can be rotated and zoomed by the user (via touch event handling)!
You have to set the transform of the context to match your imageView's transform before you start drawing into it.
i.e.,
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, boundingRect.size.width/2, boundingRect.size.height/2);
transform = CGAffineTransformRotate(transform, angle);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
// Draw the image into the context
CGContextDrawImage(context, CGRectMake(-imageView.image.size.width/2, -imageView.image.size.height/2, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);
// Get an image from the context
rotatedImage = [UIImage imageWithCGImage: CGBitmapContextCreateImage(context)];
and check out Creating a UIImage from a rotated UIImageView.
EDIT: if you don't know the angle of rotation of the image you can get the transform from the layer property of the UIImageView:
UIGraphicsBeginImageContext(rotatedImageView.image.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform transform = rotatedImageView.transform;
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
// Draw the image into the context
CGContextDrawImage(context, CGRectMake(0, 0, rotatedImageView.image.size.width, rotatedImageView.image.size.height), rotatedImageView.image.CGImage);
// Get an image from the context
UIImage *newRotatedImage = [UIImage imageWithCGImage: CGBitmapContextCreateImage(context)];
UIGraphicsEndImageContext();
You will have to play about with the transform matrix to centre the image in the context and you will also have to calculate a bounding rectangle for the rotated image or it will be cropped at the corners (i.e., rotatedImageView.image.size is not big enough to encompass a rotated version of itself).
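A rough sketch (my own addition, not part of the answer above) of one way to size the context from the rotated bounds and keep the image centred:
CGRect imageBounds = CGRectMake(0, 0, rotatedImageView.image.size.width, rotatedImageView.image.size.height);
// Bounding box of the image after the view's transform is applied.
CGRect boundingRect = CGRectApplyAffineTransform(imageBounds, rotatedImageView.transform);
UIGraphicsBeginImageContextWithOptions(boundingRect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Draw around the centre of the bounding box so nothing is cropped.
CGContextTranslateCTM(context, boundingRect.size.width/2, boundingRect.size.height/2);
CGContextConcatCTM(context, rotatedImageView.transform);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(-imageBounds.size.width/2, -imageBounds.size.height/2, imageBounds.size.width, imageBounds.size.height), rotatedImageView.image.CGImage);
UIImage *newRotatedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();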
Try this:
UIImage *temp = [[UIImage alloc] initWithCGImage:image1.CGImage scale:1.0 orientation: yourOrientation];
[temp drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
Similarly for image2. Rotation and resizing are handled by orientation and scale respectively. yourOrientation is a UIImageOrientation enum variable and can have a value from 0-7 (check the Apple documentation on the different UIImageOrientation values). Hope it helps...
EDIT: To handle rotations, just pass the orientation you need. You can rotate 90 degrees left/right or flip vertically/horizontally. For example, in the Apple documentation, UIImageOrientationUp is 0, UIImageOrientationDown is 1, and so on. Check out my github repo for an example.
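For instance, a small sketch assuming image1 should end up rotated 90 degrees clockwise:
// Wrap the original pixels with a "Right" orientation; drawInRect honours it.
UIImage *temp = [[UIImage alloc] initWithCGImage:image1.CGImage scale:1.0 orientation:UIImageOrientationRight];
[temp drawInRect:CGRectMake(0, 0, temp.size.width, temp.size.height)];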
I'm trying to emulate the color tint effect from the UITabBarItem.
When I draw a linear gradient at an angle, I get visible aliasing in the middle part of the gradient where the two colors meet at the same location. Left is UITabBarItem, right is my gradient with visible aliasing (stepping):
Here is the snippet of relevant code:
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextSaveGState(c);
CGContextScaleCTM(c, 1.0, -1.0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGFloat components[16] = {1,1,1,1,
109.0/255.0,175.0/255.0,246.0/255.0,1,
31.0/255.0,133.0/255.0,242.0/255.0,1,
143.0/255.0,194.0/255.0,248.0/255.0,1};
CGFloat locations[4] = {0.0, 0.62, 0.62, 1};
CGGradientRef colorGradient =
CGGradientCreateWithColorComponents(colorSpace, components,
locations, (size_t)4);
CGContextDrawLinearGradient(c, colorGradient, CGPointZero,
CGPointMake(size.width*1.0/3.9, -size.height),0);
CGGradientRelease(colorGradient);
CGColorSpaceRelease(colorSpace);
CGContextRestoreGState(c);
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultImage;
What do I need to change, to get a smooth angled gradient like in UITabBarItem?
What is the interpolation quality of your context set to? CGContextGetInterpolationQuality()/CGContextSetInterpolationQuality(). Try changing that if it's too low.
If that doesn't work, I'm curious what happens if you draw the gradient vertically (0,Ymin)-(0,Ymax) but apply a rotation transformation to your context...
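Something along these lines, as a hedged sketch of that second idea (it reuses c, size and colorGradient from the question and ignores the y-flip there for clarity; angle is whatever slant you want):
CGContextSaveGState(c);
// Rotate the whole context about its centre...
CGContextTranslateCTM(c, size.width/2, size.height/2);
CGContextRotateCTM(c, angle);
CGContextTranslateCTM(c, -size.width/2, -size.height/2);
// ...and draw the gradient purely vertically, which tends to interpolate more smoothly.
CGContextDrawLinearGradient(c, colorGradient, CGPointMake(size.width/2, 0), CGPointMake(size.width/2, size.height), 0);
CGContextRestoreGState(c);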
As a current workaround, I draw the gradient at double resolution into an image and then draw that image at the original dimensions. The image scaling that occurs takes care of the aliasing. At the pixel level the result is not as smooth as in the UITabBarItem, but that probably uses an image created in Photoshop or something similar.
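In code, that workaround looks roughly like this (drawGradientImageOfSize: is a hypothetical helper wrapping the gradient code from the question):
// Render the gradient at twice the target size...
UIImage *bigGradient = [self drawGradientImageOfSize:CGSizeMake(size.width * 2, size.height * 2)];
// ...then draw it back down at the original size; the downscale smooths the banding.
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
[bigGradient drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage *smoothed = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();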
When I load textures from images normally, they are upside down because of OpenGL's coordinate system. What would be the best way to flip them?
glScalef(1.0f, -1.0f, 1.0f);
mapping the y coordinates of the textures in reverse
vertically flipping the image files manually (in Photoshop)
flipping them programmatically after loading them (I don't know how)
This is the method I'm using to load png textures, in my Utilities.m file (Objective-C):
+ (TextureImageRef)loadPngTexture:(NSString *)name {
CFURLRef textureURL = CFBundleCopyResourceURL(
CFBundleGetMainBundle(),
(CFStringRef)name,
CFSTR("png"),
CFSTR("Textures"));
NSAssert(textureURL, @"Texture name invalid");
CGImageSourceRef imageSource = CGImageSourceCreateWithURL(textureURL, NULL);
NSAssert(imageSource, @"Invalid Image Path.");
NSAssert((CGImageSourceGetCount(imageSource) > 0), @"No Image in Image Source.");
CFRelease(textureURL);
CGImageRef image = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
NSAssert(image, @"Image not created.");
CFRelease(imageSource);
GLuint width = CGImageGetWidth(image);
GLuint height = CGImageGetHeight(image);
void *data = malloc(width * height * 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSAssert(colorSpace, @"Colorspace not created.");
CGContextRef context = CGBitmapContextCreate(
data,
width,
height,
8,
width * 4,
colorSpace,
kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
NSAssert(context, @"Context not created.");
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGImageRelease(image);
CGContextRelease(context);
return TextureImageCreate(width, height, data);
}
Where TextureImage is a struct that has a height, width and void *data.
Right now I'm just playing around with OpenGL, but later I want to try making a simple 2d game. I'm using Cocoa for all the windowing and Objective-C as the language.
Also, another thing I was wondering about: If I made a simple game, with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Thanks.
Any of these:
Flip the texture during the texture load,
OR flip the model's texture coordinates during model load,
OR set the texture matrix to flip y (glMatrixMode(GL_TEXTURE)) during rendering, as sketched below.
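For the third option, a hedged sketch using the legacy fixed-function pipeline (it maps the t coordinate to 1 - t so flipped textures sample correctly):
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.0f, 1.0f, 0.0f);   // keep coordinates inside [0, 1]
glScalef(1.0f, -1.0f, 1.0f);      // flip the t axis
glMatrixMode(GL_MODELVIEW);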
Also, another thing I was wondering about: If I made a simple game, with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Depends on how you are going to render text.
Jordan Lewis pointed out that CGContextDrawImage draws an image upside down when passed a UIImage.CGImage. There I found a quick and easy solution: before calling CGContextDrawImage,
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0f, -1.0f);
Does the job perfectly well.
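In the loadPngTexture: method above, that would slot in just before the CGContextDrawImage call, e.g.:
// Flip the context vertically so the PNG lands right side up for OpenGL.
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0f, -1.0f);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);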