I have the following drawing code, which renders a circle with full color at the center fading to 0 alpha at the edges. When drawing this to the screen, it looks perfect. However, when I draw the same thing in a PDF context (CGPDFContextCreate), the whole circle comes out opaque. If I draw any other regular path in the PDF, alpha renders fine; it's just the gradient that doesn't work. Is this a bug, or am I missing something?
CGColorSpaceRef myColorspace = CGColorSpaceCreateDeviceRGB();
size_t num_locations = 2;
CGFloat locations[2] = { 1.0, 0.0 };
CGColorRef color = [[UIColor redColor] CGColor];
const CGFloat *k = CGColorGetComponents(color);
// Red at alpha 0.0 at the edge (location 1.0), red at alpha 1.0 at the center (location 0.0)
CGFloat components[8] = { k[0], k[1], k[2], 0.0, k[0], k[1], k[2], 1.0 };
CGGradientRef myGradient = CGGradientCreateWithColorComponents(myColorspace, components, locations, num_locations);
CGPoint c = CGPointMake(160, 160);
CGContextDrawRadialGradient(pdfContext, myGradient, c, 0, c, 60, 0);
CGGradientRelease(myGradient);
CGColorSpaceRelease(myColorspace);
Official response from Apple tech support:
Quartz ignores the alpha value of colors in gradients (or shadings) when capturing a gradient (or shading) to a PDF document and instead treats all colors as if they are completely opaque. In addition, Quartz ignores the global alpha in the context when it records gradients (or shadings) into a PDF document. One possible work-around is to capture a shading as bits using a bitmap context and use the resulting bits to create a CGImage that you draw through the clipping area. This produces pre-rendered gradients (or shadings) but does capture the alpha content into a PDF document. You should not perform this pre-rendering for gradients (or shadings) that don't contain alpha.
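Here is a minimal sketch of that work-around, assuming the question's pdfContext and myGradient; the 120x120 bitmap size and the destination rect are illustrative:
// Pre-render the gradient into a bitmap context; bitmap images keep
// their alpha when drawn into a PDF, unlike gradients/shadings.
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, 120, 120, 8, 0, rgb, kCGImageAlphaPremultipliedLast);
CGPoint center = CGPointMake(60, 60);
CGContextDrawRadialGradient(bitmap, myGradient, center, 0, center, 60, 0);
// Wrap the rendered pixels in a CGImage and draw that into the PDF.
CGImageRef gradientImage = CGBitmapContextCreateImage(bitmap);
CGContextDrawImage(pdfContext, CGRectMake(100, 100, 120, 120), gradientImage);
CGImageRelease(gradientImage);
CGContextRelease(bitmap);
CGColorSpaceRelease(rgb);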
I'm making a multiplayer game which involves drawing lines. Now I'm trying to implement online multiplayer, but I've been struggling with it. The thing is that I need to be able to reverse the state of the drawn lines in case a packet from the server arrives late at the client. I've searched here on Stack Overflow but haven't found any real answer on how to "undo" a bitmap context. The biggest problem is that the drawing needs to be done very fast, since the game updates every 20 milliseconds. I have figured out and tried a few different approaches:
1. Save the state of the whole context and then redraw it. This is probably the slowest method.
2. Only save a part of the context (100x100) in another, hidden bitmap by looping through each pixel; then, to undo, loop through each pixel again to copy from that bitmap back to the main bitmap that is shown on the screen.
3. Save each point of the drawn path in a CGMutablePathRef; then, when reverting the context, draw this path with a transparent color (0,0,0,0).
4. Save the position in the bitmap of each pixel that gets drawn in a separate array, and then set those pixels' alpha to 0 (in the drawn bitmap) when I need to undo.
The last approach should be the fastest of them all. However, I'm not sure how I can get the position of each drawn pixel unless I compute it completely manually. Right now I use this code to draw lines:
CGContextSetStrokeColorWithColor(cacheContext, [color CGColor]);
CGContextSetLineCap(cacheContext, kCGLineCapRound);
CGContextSetLineWidth(cacheContext, 6+thickness);
CGContextBeginPath(cacheContext);
CGContextMoveToPoint(cacheContext, point1.x, point1.y);
CGContextAddLineToPoint(cacheContext, point2.x, point2.y);
CGContextStrokePath(cacheContext);
CGRect dirtyPoint1 = CGRectMake(point1.x-10, point1.y-10, 20, 20);
CGRect dirtyPoint2 = CGRectMake(point2.x-10, point2.y-10, 20, 20);
[self setNeedsDisplayInRect:CGRectUnion(dirtyPoint1, dirtyPoint2)];
Here is how the CGBitmapContext is set up:
- (BOOL)initContext:(CGSize)size {
    scaleFactor = [[UIScreen mainScreen] scale]; // 1 on non-retina, 2 on retina

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    int bitmapBytesPerRow = size.width * 4 * scaleFactor;
    bitmapByteCount = bitmapBytesPerRow * (size.height * scaleFactor);

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    cacheBitmap = malloc(bitmapByteCount);
    if (cacheBitmap == NULL) {
        return NO;
    }

    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little;
    colorSpace = CGColorSpaceCreateDeviceRGB();
    cacheContext = CGBitmapContextCreate(cacheBitmap, size.width * scaleFactor, size.height * scaleFactor, 8, bitmapBytesPerRow, colorSpace, bitmapInfo);
    CGContextScaleCTM(cacheContext, scaleFactor, scaleFactor);
    CGColorSpaceRelease(colorSpace);

    // Clear the malloc'ed buffer to transparent. (Filling with a 0-alpha
    // color in normal blend mode would leave the uninitialized bytes
    // untouched; note the rect uses `size`, since the CTM is already scaled.)
    CGContextClearRect(cacheContext, (CGRect){CGPointZero, size});
    return YES;
}
Is there any other, better way to undo the bitmap? If not, how can I get the position of each pixel that gets drawn with Core Graphics? Is this even possible?
Your 4th approach will either duplicate the whole canvas bitmap (if you use a flat NxM matrix representation) or turn into a performance mess in the case of a map-based structure or something like that.
Actually, I believe the 2nd way does the trick. I have implemented undo that way a few times over the past years, including in a DirectX-based drawing app with a 25-30fps rendering pipeline.
However, your #2 description strangely mentions a "loop" you want to perform across the area. You do not need a loop; what you need is a proper API method for copying a portion of the bitmap/graphics context. CGContextDrawImage can be used both to preserve your canvas portion and later to undo/redo the drawing, as sketched below.
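A minimal sketch of that idea, assuming the question's cacheContext and a dirtyRect covering the stroked segment (the rect must be in the bitmap's pixel coordinates, and depending on how the context is set up you may need to flip its y between the context's bottom-left origin and the image's top-left origin):
// Before drawing: snapshot only the region that is about to change.
// CGBitmapContextCreateImage is copy-on-write, so this is cheap.
CGImageRef whole = CGBitmapContextCreateImage(cacheContext);
CGImageRef backup = CGImageCreateWithImageInRect(whole, dirtyRect);
CGImageRelease(whole);
// ... draw the line segment into cacheContext as usual ...
// To undo: write the saved pixels straight back over the region.
CGContextSaveGState(cacheContext);
CGContextSetBlendMode(cacheContext, kCGBlendModeCopy); // replace, don't composite
CGContextDrawImage(cacheContext, dirtyRect, backup);
CGContextRestoreGState(cacheContext);
CGImageRelease(backup);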
This is a code snippet for creating a thumbnail-sized image (from an original large image) and placing it appropriately on top of a table view cell. As I was studying the code, I got stuck at the part where the thumbnail is given a position by setting its abscissa and ordinate. In the method -(void)setThumbnailDataFromImage:(UIImage *)image they set the dimensions and coordinates for the project thumbnail:
-(void)setThumbnailDataFromImage:(UIImage *)image{
CGSize origImageSize= [image size];
// the rectangle of the thumbnail
CGRect newRect= CGRectMake(0, 0, 40, 40);
// figure out a scaling ratio to make sure we maintain the same aspect ratio
float ratio= MAX(newRect.size.width/origImageSize.width, newRect.size.height/origImageSize.height);
// Create a transparent bitmap context with a scaling factor equal to that of the screen
UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
// create a path that is a rounded rectangle
UIBezierPath *path= [UIBezierPath bezierPathWithRoundedRect:newRect cornerRadius:5.0];
// make all the subsequent drawing to clip to this rounded rectangle
[path addClip];
// center the image in the thumbnail rectangle
CGRect projectRect;
projectRect.size.width=ratio * origImageSize.width;
projectRect.size.height= ratio * origImageSize.height;
projectRect.origin.x= (newRect.size.width- projectRect.size.width)/2;
projectRect.origin.y= (newRect.size.height- projectRect.size.height)/2;
// draw the image on it
[image drawInRect:projectRect];
// get the image from the image context, keep it as our thumbnail
UIImage *smallImage= UIGraphicsGetImageFromCurrentImageContext();
[self setThumbnail:smallImage];
// get the PNG representation of the image and set it as our archivable data
NSData *data= UIImagePNGRepresentation(smallImage);
[self setThumbnailData:data];
// Cleanup image context resources, we're done
UIGraphicsEndImageContext();
}
I get the width and height computation, where we multiply origImageSize by the scaling factor/ratio.
But then we use the following to give the thumbnail a position:
projectRect.origin.x= (newRect.size.width- projectRect.size.width)/2;
projectRect.origin.y= (newRect.size.height- projectRect.size.height)/2;
This I fail to understand; I cannot wrap my head around it.
Is this part of the centering process? I mean, are we using a mathematical relation here to position the thumbnail, or is it some arbitrary calculation that could have been anything? Am I missing some fundamental idea behind these two lines of code?
Those two lines are standard code for centering something, although they aren’t quite written in the most general way. You normally want to use:
projectRect.origin.x = newRect.origin.x + newRect.size.width / 2.0 - projectRect.size.width / 2.0;
projectRect.origin.y = newRect.origin.y + newRect.size.height / 2.0 - projectRect.size.height / 2.0;
In your case the author knows the origin is 0,0, so they omitted the first term in each line.
To center a rectangle in another rectangle, you want the centers of the two axes to line up. So you take, say, half the container's width (the center of the outer rectangle) and subtract half the inner rectangle's width (which takes you from the inner rectangle's center to its left side). That gives you where the inner rectangle's left side should be (i.e., its x origin) when it is correctly centered.
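For a concrete example (numbers assumed for illustration), suppose the original image is 300x400:
// A 300x400 image centered in the 40x40 thumbnail rect.
CGSize origImageSize = CGSizeMake(300, 400);
CGRect newRect = CGRectMake(0, 0, 40, 40);
// ratio = max(40/300, 40/400) = max(0.133, 0.1) = 0.133
float ratio = MAX(newRect.size.width / origImageSize.width, newRect.size.height / origImageSize.height);
CGRect projectRect;
projectRect.size.width = ratio * origImageSize.width;   // 40.0
projectRect.size.height = ratio * origImageSize.height; // 53.3
projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;   // (40 - 40) / 2 = 0
projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2; // (40 - 53.3) / 2 = -6.7
// The negative y means the scaled image sticks out 6.7 points above and
// below the thumbnail rect; the rounded-rect clip crops that overflow,
// leaving the image centered vertically.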
I want to do some custom drawing with Core Graphics. I need a linear gradient on my view, but the thing is that this view is a rounded rectangle, so I want my gradient to be rounded at the corners too. You can see what I want to achieve in the image below:
So is this possible to implement in Core Graphics, or in some other easy, programmatic way?
Thank you.
I don't think there is an API for that, but you can get the same effect if you first draw a radial gradient, say, in an (N+1)x(N+1) size bitmap context, then convert the image from the context to a resizable image with left and right caps set to N.
Pseudocode:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(N+1,N+1), NO, 0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
// <draw the gradient into 'context'>
UIImage* gradientBase = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage* gradientImage = [gradientBase resizableImageWithCapInsets:UIEdgeInsetsMake(0,N,0,N)];
In case you want the image to scale vertically as well, you just have to set the caps to UIEdgeInsetsMake(N,N,N,N).
I just want to add more sample code for this technique, as some things weren't obvious to me. Maybe it will be useful for somebody:
So, let's say we have our custom view class, and in its drawRect: method we put this:
// Defining the rect in which to draw
CGRect drawRect=self.bounds;
Float32 gradientSize=drawRect.size.height; // The size of original radial gradient
CGPoint center=CGPointMake(0.5f*gradientSize,0.5f*gradientSize); // Center of gradient
// Creating the gradient
Float32 colors[4]={0.f,1.f,1.f,0.2f}; // Gray+alpha pairs: from opaque black to white at 0.2 alpha
CGColorSpaceRef graySpace=CGColorSpaceCreateDeviceGray();
CGGradientRef gradient=CGGradientCreateWithColorComponents(graySpace, colors, NULL, 2);
CGColorSpaceRelease(graySpace); // The gradient retains what it needs
// Starting image and drawing gradient into it
UIGraphicsBeginImageContextWithOptions(CGSizeMake(gradientSize, gradientSize), NO, 1.f);
CGContextRef context=UIGraphicsGetCurrentContext();
CGContextDrawRadialGradient(context, gradient, center, 0.f, center, center.x, 0); // Drawing gradient
CGGradientRelease(gradient); // Done with the gradient object
UIImage* gradientImage=UIGraphicsGetImageFromCurrentImageContext(); // Retrieving image from context
UIGraphicsEndImageContext(); // Ending process
gradientImage=[gradientImage resizableImageWithCapInsets:UIEdgeInsetsMake(0.f, center.x-1.f, 0.f, center.x-1.f)]; // Leaving 2 pixels wide area in center which will be tiled to fill whole area
// Drawing image into view frame
[gradientImage drawInRect:drawRect];
That's all. Also, if you're not going to change the gradient while the app is running, you could put everything except the last line in the awakeFromNib method and then, in drawRect:, just draw the gradientImage into the view's frame. Also, don't forget to retain the gradientImage in that case.
I am trying to use an image (270 degrees of a circle, similar to a pacman logo, painted with Core Graphics) to create a mask. What I am doing is this:
1. creating a Core Graphics path
CGContextSaveGState(context);
CGContextBeginPath(context);
CGContextMoveToPoint(context,circleCenter.x,circleCenter.y);
//CGContextSetAllowsAntialiasing(myBitmapContext, YES);
CGContextAddArc(context,circleCenter.x, circleCenter.y,circleRadius,startingAngle, endingAngle, 0); // 0 is counterclockwise
CGContextClosePath(context);
CGContextSetRGBStrokeColor(context,1.0,0.0,0.0,1.0);
CGContextSetRGBFillColor(context,1.0,0.0,0.0,0.2);
CGContextDrawPath(context, kCGPathFillStroke);
2. then creating an image of the context that has the path just painted
CGImageRef pacmanImage = CGBitmapContextCreateImage (context);
3. restoring the context
CGContextRestoreGState(context);
CGContextSaveGState(context);
4. creating a 1-bit mask (which will provide the black-and-white mask)
bitsPerComponent = 1;
bitsPerPixel = bitsPerComponent * 1 ;
bytesPerRow = (CGImageGetWidth(imgToMaskRef) * bitsPerPixel);
mask = CGImageCreate(CGImageGetWidth(imgToMaskRef),
CGImageGetHeight(imgToMaskRef),
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
greyColorSpace,
kCGImageAlphaNone,
CGImageGetDataProvider(pacmanImage),
NULL, //decode
YES, //shouldInterpolate
kCGRenderingIntentDefault);
5. masking the imgToMaskRef (which is a CGImageRef imgToMaskRef = imgToMask.CGImage;) with the mask just created
imageMaskedWithImage = CGImageCreateWithMask(imgToMaskRef, mask);
CGContextDrawImage(context,imgRectBox, imageMaskedWithImage);
CGImageRef maskedImageFinal = CGBitmapContextCreateImage (context);
6. returning the maskedImageFinal to the caller of this method (as wheelChoiceMadeState, which is a CGImageRef) who then updates the CALayer contents property with the image
theLayer.contents = (id) wheelChoiceMadeState;
The problem I am seeing is that the mask does not work properly and looks very strange indeed. I get strange patterns across the path painted by Core Graphics. My hunch is that it has something to do with CGImageGetDataProvider(), but I am not sure.
Any help would be appreciated
thank you
CGImageGetDataProvider does not change the data at all. If the data of pacmanImage does not exactly match the parameters passed to CGImageCreate (bitsPer, bytesPer, colorSpace, ...), the result is undefined. If it did exactly match, there would be no point in creating the mask.
You need to create a grayscale CGBitmapContext to draw the mask into, and a CGImage that uses the same pixels and parameters as the bitmap. You can then use the CGImage to mask another image.
Only use CGBitmapContextCreateImage if you want a snapshot of a CGBitmapContext that you will continue to modify. For a single use bitmap, pass the same buffer to the bitmap and the matching CGImage you create.
Edit:
finalRect is the size the final image should be: either large enough to hold the original image, with the pacman positioned inside it, or large enough to hold the pacman, with the original image cropped to fit. In this example, the original image is cropped; otherwise the pacman path would have to be positioned relative to the original image.
maskContext = CGBitmapContextCreate( ... , finalRect.size.width , finalRect.size.height , ... );
// add the pacman path and set the stroke and fill colors
CGContextDrawPath( maskContext , kCGPathFillStroke );
maskImage = CGBitmapContextCreateImage( maskContext );
imageToMask = CGImageCreateWithImageInRect( originalImage , finalRect );
finalImage = CGImageCreateWithMask( imageToMask , maskImage );
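A more complete sketch of that outline, with assumed values for the parameters elided above (circleCenter, circleRadius, startingAngle, endingAngle, and originalImage come from the question; everything else is illustrative):
// Grayscale bitmap context to draw the mask into: 8-bit, no alpha.
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
CGContextRef maskContext = CGBitmapContextCreate(NULL,
    (size_t)finalRect.size.width, (size_t)finalRect.size.height,
    8, 0, gray, kCGImageAlphaNone);
CGColorSpaceRelease(gray);
// For a grayscale image used as a mask, black (0.0) lets the underlying
// image show through and white (1.0) hides it, so paint the background
// white and the pacman wedge black.
CGContextSetGrayFillColor(maskContext, 1.0, 1.0);
CGContextFillRect(maskContext, CGRectMake(0, 0, finalRect.size.width, finalRect.size.height));
CGContextSetGrayFillColor(maskContext, 0.0, 1.0);
CGContextMoveToPoint(maskContext, circleCenter.x, circleCenter.y);
CGContextAddArc(maskContext, circleCenter.x, circleCenter.y, circleRadius,
                startingAngle, endingAngle, 0);
CGContextClosePath(maskContext);
CGContextFillPath(maskContext);
// Snapshot the mask, crop the original, and combine.
CGImageRef maskImage = CGBitmapContextCreateImage(maskContext);
CGImageRef imageToMask = CGImageCreateWithImageInRect(originalImage, finalRect);
CGImageRef finalImage = CGImageCreateWithMask(imageToMask, maskImage);
CGImageRelease(imageToMask);
CGImageRelease(maskImage);
CGContextRelease(maskContext);
// finalImage is ready for, e.g., theLayer.contents; release it when done.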
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
CGSize contextSize = CGSizeMake(320, 400);
UIGraphicsBeginImageContext(contextSize);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *savedImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setSaveImage:savedImg];
[pool drain];
I am using the code above to extract part of the image of the main screen.
In UIGraphicsBeginImageContext I can only pass a size. Is there any way to use a CGRect, or some other way to extract an image from a specific portion of the screen, i.e. (x, y, 320, 400), something like that?
Hope this helps:
// Create a new image context sized to the crop region (retina safe)
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
// Draw the full image shifted so the region you want lands at the origin:
// (x, y) is the top-left corner of the crop region within the existing
// image, and the rect keeps the image's full size so nothing is rescaled
CGRect rect = CGRectMake(-x, -y, existingImage.size.width, existingImage.size.height);
[existingImage drawInRect:rect];
// Grab the cropped image and end the image context
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
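For example (values assumed for illustration), applying this to the savedImg captured in the question to pull out the 320x400 region starting at (0, 40):
// Hypothetical usage: crop the 320x400 region at (0, 40) from savedImg.
CGSize cropSize = CGSizeMake(320, 400);
UIGraphicsBeginImageContextWithOptions(cropSize, NO, 0.0);
[savedImg drawInRect:CGRectMake(0, -40, savedImg.size.width, savedImg.size.height)];
UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();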
This question is really a duplicate of several other questions, including this one: How to crop the UIImage?, but since it took me a while to find a solution, I will cross-post again.
In my quest for a solution that I could more easily understand (and written in Swift), I arrived at this:
I wanted to be able to crop from a region based on an aspect ratio, and scale to a size based on an outer bounding extent. Here is my variation:
import AVFoundation
import ImageIO
import UIKit

class Image {

    class func crop(image: UIImage, crop source: CGRect, aspect: CGSize, outputExtent: CGSize) -> UIImage {

        let sourceRect = AVMakeRectWithAspectRatioInsideRect(aspect, source)
        let targetRect = AVMakeRectWithAspectRatioInsideRect(aspect, CGRect(origin: CGPointZero, size: outputExtent))

        let opaque = true, deviceScale: CGFloat = 0.0 // use scale of device's main screen
        UIGraphicsBeginImageContextWithOptions(targetRect.size, opaque, deviceScale)

        let scale = max(
            targetRect.size.width / sourceRect.size.width,
            targetRect.size.height / sourceRect.size.height)

        // CGPoint and CGSize have no built-in scaling operators, so the
        // arithmetic is expanded component-wise
        let drawRect = CGRect(
            origin: CGPoint(x: -sourceRect.origin.x * scale, y: -sourceRect.origin.y * scale),
            size: CGSize(width: image.size.width * scale, height: image.size.height * scale))
        image.drawInRect(drawRect)

        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage
    }
}
There are a couple of things I found confusing: the separate concerns of cropping and resizing. Cropping is handled by the origin of the rect that you pass to drawInRect, and scaling is handled by the size portion. In my case, I needed to relate the size of the cropping rect on the source to my output rect of the same aspect ratio. The scale factor is then output / input, and this needs to be applied to the drawRect (passed to drawInRect).
One caveat is that this approach effectively assumes that the image you are drawing is larger than the image context. I have not tested this, but I think you could also use this code to handle cropping plus zooming by explicitly passing a scale parameter instead of 0.0 (by default, UIKit applies a multiplier based on the screen resolution).
Finally, it should be noted that this UIKit approach is higher-level than the Core Graphics / Quartz and Core Image approaches, and it seems to handle image orientation issues. It is also worth mentioning that it is pretty fast, second to ImageIO, according to this post: http://nshipster.com/image-resizing/