CGImageCreateWithMask with an image as a mask - cgimage

I am trying to use an image (270 degrees of a circle, similar to a pacman logo, painted with Core Graphics) to create a mask. What I am doing is this:
1. creating a Core Graphics path
CGContextSaveGState(context);
CGContextBeginPath(context);
CGContextMoveToPoint(context,circleCenter.x,circleCenter.y);
//CGContextSetAllowsAntialiasing(myBitmapContext, YES);
CGContextAddArc(context,circleCenter.x, circleCenter.y,circleRadius,startingAngle, endingAngle, 0); // 0 is counterclockwise
CGContextClosePath(context);
CGContextSetRGBStrokeColor(context,1.0,0.0,0.0,1.0);
CGContextSetRGBFillColor(context,1.0,0.0,0.0,0.2);
CGContextDrawPath(context, kCGPathFillStroke);
2. then creating an image of the context that now contains the painted path
CGImageRef pacmanImage = CGBitmapContextCreateImage (context);
3. restoring the context
CGContextRestoreGState(context);
CGContextSaveGState(context);
4. creating a 1 bit mask (which will provide the black-white mask)
bitsPerComponent = 1;
bitsPerPixel = bitsPerComponent * 1;
bytesPerRow = (CGImageGetWidth(imgToMaskRef) * bitsPerPixel);
mask = CGImageCreate(CGImageGetWidth(imgToMaskRef),
                     CGImageGetHeight(imgToMaskRef),
                     bitsPerComponent,
                     bitsPerPixel,
                     bytesPerRow,
                     greyColorSpace,
                     kCGImageAlphaNone,
                     CGImageGetDataProvider(pacmanImage),
                     NULL, // decode
                     YES,  // shouldInterpolate
                     kCGRenderingIntentDefault);
5. masking the imgToMaskRef (which is a CGImageRef imgToMaskRef = imgToMask.CGImage;) with the mask just created
imageMaskedWithImage = CGImageCreateWithMask(imgToMaskRef, mask);
CGContextDrawImage(context,imgRectBox, imageMaskedWithImage);
CGImageRef maskedImageFinal = CGBitmapContextCreateImage (context);
6. returning maskedImageFinal to the caller of this method (as wheelChoiceMadeState, which is a CGImageRef), who then updates the CALayer's contents property with the image
theLayer.contents = (id) wheelChoiceMadeState;
The problem I am seeing is that the mask does not work properly and looks very strange indeed: I get strange patterns across the path painted by Core Graphics. My hunch is there is something wrong with CGImageGetDataProvider(), but I am not sure.
Any help would be appreciated
thank you

CGImageGetDataProvider does not change the data at all. If the data of pacmanImage does not exactly match the parameters passed to CGImageCreate (bitsPer..., bytesPer..., colorSpace, ...), the result is undefined. If it did exactly match, there would be no point in creating the mask.
You need to create a grayscale CGBitmapContext to draw the mask into, and a CGImage that uses the same pixels and parameters as the bitmap. You can then use the CGImage to mask another image.
Only use CGBitmapContextCreateImage if you want a snapshot of a CGBitmapContext that you will continue to modify. For a single-use bitmap, pass the same buffer to both the bitmap context and the matching CGImage you create.
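To see the stride mismatch concretely, here is the row-byte arithmetic in plain C (the helper names are mine):

```c
#include <stddef.h>

/* Row bytes for a 1-bit-per-pixel mask: width bits, rounded up to whole
 * bytes. The question's `width * bitsPerPixel` is a count of BITS, not
 * bytes, so CGImageCreate walks the provider data with the wrong stride. */
static size_t maskBytesPerRow(size_t width) {
    return (width + 7) / 8;   /* 1 bpp: 8 pixels per byte */
}

/* Row bytes of the 32-bit context the pacman was drawn into, for contrast. */
static size_t bgraBytesPerRow(size_t width) {
    return width * 4;         /* 4 bytes per pixel */
}
```

For a 270-pixel-wide image the mask needs 34 bytes per row, not 270; and the pacman context's data is 1080 bytes per row besides, which is why the masked result comes out as noise.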
Edit:
finalRect is the size the final image should be: either large enough to hold the original image, with the pacman positioned inside it, or large enough to hold the pacman, with the original image cropped to fit. In this example, the original image is cropped; otherwise the pacman path would have to be positioned relative to the original image.
maskContext = CGBitmapContextCreate( ... , finalRect.size.width , finalRect.size.height , ... );
// add the pacman path and set the stroke and fill colors
CGContextDrawPath( maskContext , kCGPathFillStroke );
maskImage = CGBitmapContextCreateImage( maskContext );
imageToMask = CGImageCreateWithImageInRect( originalImage , finalRect );
finalImage = CGImageCreateWithMask( imageToMask , maskImage );

Related

Undo state of bitmap (CGContext)

I'm making a multiplayer game which involves drawing lines, and now I'm trying to implement online multiplayer, but I've struggled doing this. The thing is that I need to reverse the state of the drawn lines in case a packet from the server arrives late at the client. I've searched here on Stack Overflow but haven't found any real answer on how to "undo" a bitmap context. The biggest problem is that the drawing needs to be done very fast, since the game updates every 20 milliseconds. However, I figured out and tried some different approaches:
1. Save the state of the whole context and then redraw it. This is probably the slowest method.
2. Only save a part of the context (100x100) in another hidden bitmap by looping through each pixel, then looping through each pixel of that bitmap back to the main bitmap that is shown on the screen.
3. Save each point of the drawn path in a CGMutablePathRef, then, when reverting the context, draw this path with a transparent color (0,0,0,0).
4. Save the position in the bitmap of each pixel that gets drawn in a separate array, and then set that pixel's alpha to 0 (in the drawn bitmap) when I need to undo.
The last approach should be the fastest of them all. However, I'm not sure how I can get the position of each drawn pixel unless I do it completely manually. Right now I use this code to draw lines:
CGContextSetStrokeColorWithColor(cacheContext, [color CGColor]);
CGContextSetLineCap(cacheContext, kCGLineCapRound);
CGContextSetLineWidth(cacheContext, 6+thickness);
CGContextBeginPath(cacheContext);
CGContextMoveToPoint(cacheContext, point1.x, point1.y);
CGContextAddLineToPoint(cacheContext, point2.x, point2.y);
CGContextStrokePath(cacheContext);
CGRect dirtyPoint1 = CGRectMake(point1.x-10, point1.y-10, 20, 20);
CGRect dirtyPoint2 = CGRectMake(point2.x-10, point2.y-10, 20, 20);
[self setNeedsDisplayInRect:CGRectUnion(dirtyPoint1, dirtyPoint2)];
Here is how the CGBitmapContext is set up:
- (BOOL)initContext:(CGSize)size {
    scaleFactor = [[UIScreen mainScreen] scale];
    // scaleFactor = 1; // non-retina
    // scaleFactor = 2; // retina

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    int bitmapBytesPerRow = (size.width * 4 * scaleFactor);
    bitmapByteCount = (bitmapBytesPerRow * (size.height * scaleFactor));

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    cacheBitmap = malloc(bitmapByteCount);
    if (cacheBitmap == NULL) {
        return NO;
    }

    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little;
    colorSpace = CGColorSpaceCreateDeviceRGB();
    cacheContext = CGBitmapContextCreate(cacheBitmap, size.width * scaleFactor, size.height * scaleFactor, 8, bitmapBytesPerRow, colorSpace, bitmapInfo);
    CGContextScaleCTM(cacheContext, scaleFactor, scaleFactor);
    CGColorSpaceRelease(colorSpace);

    CGContextSetRGBFillColor(cacheContext, 0, 0, 0, 0.0);
    CGContextFillRect(cacheContext, (CGRect){CGPointZero, CGSizeMake(size.width * scaleFactor, size.height * scaleFactor)});
    return YES;
}
Is there any other, better way to undo the bitmap? If not, how can I get the position of each pixel that gets drawn with Core Graphics? Is this even possible?
Your 4th approach will either duplicate the whole canvas bitmap (should you consider a flat NxM matrix representation) or result in a performance mess in case of a map-based structure or something like that.
Actually, I believe the 2nd way does the trick. I have implemented that kind of undo a few times over the past years, including in a DirectX-based drawing app with a 25-30 fps rendering pipeline.
However, your #2 description has a strange mention of a "loop" you want to perform across the area. You do not need a loop; what you need is a proper API method for copying a portion of the bitmap/graphics context. CGContextDrawImage can be used to preserve your canvas portion, and the same method to undo/redo the drawing.
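As a sketch of that row-wise copy (plain C with a raw 32-bit buffer; the names are mine, and on iOS the canvas would be the backing buffer of your CGBitmapContext):

```c
#include <stddef.h>
#include <string.h>

/* Back up a rectangular region of a 32-bit canvas with one memcpy per
 * row, and restore it the same way. `canvasWidth` is in pixels; the
 * backup buffer is r.w * r.h * 4 bytes. */
typedef struct { int x, y, w, h; } Rect;

static void saveRegion(unsigned char *backup, const unsigned char *canvas,
                       int canvasWidth, Rect r) {
    size_t rowBytes = (size_t)canvasWidth * 4;
    for (int row = 0; row < r.h; row++) {
        memcpy(backup + (size_t)row * r.w * 4,
               canvas + (size_t)(r.y + row) * rowBytes + (size_t)r.x * 4,
               (size_t)r.w * 4);
    }
}

static void restoreRegion(unsigned char *canvas, const unsigned char *backup,
                          int canvasWidth, Rect r) {
    size_t rowBytes = (size_t)canvasWidth * 4;
    for (int row = 0; row < r.h; row++) {
        memcpy(canvas + (size_t)(r.y + row) * rowBytes + (size_t)r.x * 4,
               backup + (size_t)row * r.w * 4,
               (size_t)r.w * 4);
    }
}
```

The copy is O(region size), not O(canvas size), which is what makes this viable at a 20 ms frame budget.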

CGContextDrawRadialGradient not rendering alpha in PDF?

I have the following drawing, which renders a circle with full color at the center fading to 0 alpha at the edges. When drawing this to the screen, it looks perfect. However, when I draw the same thing in a PDF context (CGPDFContextCreate), the whole circle comes out opaque. If I draw any other regular path in the PDF, the alpha renders fine; so just the gradient doesn't work. Is this a bug, or am I missing something?
CGColorSpaceRef myColorspace = CGColorSpaceCreateDeviceRGB();
size_t num_locations = 2;
CGFloat locations[2] = { 1.0, 0.0 };
CGColorRef color = [[UIColor redColor]CGColor];
CGFloat *k = (CGFloat *)CGColorGetComponents(color);
CGFloat components[8] = { k[0], k[1], k[2], 0.0, k[0], k[1], k[2], 1.0 };
CGGradientRef myGradient = CGGradientCreateWithColorComponents(myColorspace, components, locations, num_locations);
CGPoint c = CGPointMake(160, 160);
CGContextDrawRadialGradient(pdfContext, myGradient, c, 0, c, 60, 0);
Official response from Apple tech support:
Quartz ignores the alpha value of colors in gradients (or shadings)
when capturing a gradient (or shading) to a PDF document and instead
treats all colors as if they are completely opaque. In addition,
Quartz ignores the global alpha in the context when it records
gradients (or shadings) into a PDF document. One possible work-around
is to capture a shading as bits using a bitmap context and use the
resulting bits to create a CGImage that you draw through the clipping
area. This produces pre-rendered gradients (or shadings) but does
capture the alpha content into a PDF document. You should not perform
this pre-rendering for gradients (or shadings) that don't contain
alpha.

What's the fastest way to load big image on iPhone?

Hi there,
I am building a scroll view which swipes through 100 images of houses.
It works. But for every image viewed, the allocated memory increases by 2.5 MB, and in the end the app crashes because it runs out of memory.
I use this code to decompress the image:
- (void)decompress {
    const CGImageRef cgImage = [self CGImage];
    const size_t width = CGImageGetWidth(cgImage);
    const size_t height = CGImageGetHeight(cgImage);
    const CGColorSpaceRef colorspace = CGImageGetColorSpace(cgImage);
    const CGContextRef context = CGBitmapContextCreate(
        NULL,          /* where to store the data; NULL = don't care */
        width, height, /* width & height */
        8, width * 4,  /* bits per component, bytes per row */
        colorspace, kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);
}
but it's not working; it still takes a long time to load each image.
I also faced the same problem. The solution I opted for was to resize the image to 320x460, after which the image size became approximately 50 KB, not more than that. You can do the same thing.
Code for resizing the image (suppose you have your image in the image variable of a UIImage, and the new image is stored in editedImage, which is also a UIImage object):
if (self.image.size.height > 460 || self.image.size.width > 320) {
    UIGraphicsBeginImageContext(CGSizeMake(320, 480));
    [self.image drawInRect:CGRectMake(0, 0, 320, 480)];
    self.editedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSLog(@"smallImage %f", self.editedImage.size.height);
    NSLog(@"smallImage %f", self.editedImage.size.width);
}
else {
    self.editedImage = self.image;
}
You need to do some memory optimization. Here is a logic to save memory, but you need to implement it yourself. Don't add all the images to the scroll view; initially add just 5. Suppose the user starts on image 1: when they reach image 3, remove image 1 from the scroll view so that its memory can be freed. As the user scrolls forward, add the next images; for example, when the user is on image 4, add image 6. And do the same in reverse: when the user goes back from image 4 to image 3, remove image 6 from memory. This way only 5 images are ever in memory.
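The window bookkeeping described above can be sketched as follows (plain C; the names and the two-images-per-side margin are my choices):

```c
/* For the page the user is on, compute the inclusive range of images
 * that should stay loaded (two on each side of the current page),
 * clamped to the bounds of the album. Everything outside the window
 * gets removed from the scroll view so its memory can be freed. */
typedef struct { int first, last; } Window;

static Window loadedWindow(int current, int count) {
    Window w = { current - 2, current + 2 };
    if (w.first < 0)         w.first = 0;
    if (w.last  > count - 1) w.last  = count - 1;
    return w;
}
```

On each page change you would diff the new window against the old one: load the indices that entered the window, release the ones that left it.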
Three ways:
1. Try Apple's sample code called "StreetScroller"; then you'll see how to make the scroll view work properly.
2. Create thumbnails for the large images and save them to your own directory, then read them from a URL.
3. Use UIPageViewController.

iPhone. How to change individual pixels of big CGImage inplace?

I use a UIScrollView, which zooms and scrolls a UIImageView, which contains a UIImage, which takes its pixels from a CGImage.
The size of the CGImage may be about 5000x2000 pixels.
1) Is this a correct way to zoom and scroll a big image?
Some logic may periodically change some region (rect) of that CGImage.
2) How can I change individual pixels of the CGImage in place, without heavy processor usage (entire image recreation)?
My solution:
I create a CGBitmapContext for storing the big image in XRGB format, and I subclass UIView to override drawRect:, which presents my image.
The periodic updates then do this:
update some rect
invoke [setNeedsDisplayInRect:rect]
in drawRect: do:
CGContextRef g = UIGraphicsGetCurrentContext();
CGImageRef imgAll = CGBitmapContextCreateImage( m_BmpContext );
CGImageRef imgRect = CGImageCreateWithImageInRect( imgAll, rect );
CGContextDrawImage( g, rect, imgRect );
CGImageRelease( imgRect );
CGImageRelease( imgAll );
and it works fine for me.
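For question 2, the point of owning the CGBitmapContext's backing buffer is that changing a pixel is just a few byte writes at a computed offset; no image recreation is needed. A sketch assuming the XRGB layout mentioned above (plain C; the names are mine):

```c
#include <stddef.h>

/* Write one pixel in place in a 4-bytes-per-pixel XRGB buffer, the kind
 * you get back from CGBitmapContextGetData. The offset arithmetic is
 * y * bytesPerRow + x * 4; byte 0 of each pixel is the unused X byte. */
static void setPixelXRGB(unsigned char *bits, size_t bytesPerRow,
                         int x, int y,
                         unsigned char r, unsigned char g, unsigned char b) {
    unsigned char *p = bits + (size_t)y * bytesPerRow + (size_t)x * 4;
    p[1] = r;
    p[2] = g;
    p[3] = b;
}
```

After touching the pixels, calling setNeedsDisplayInRect: with just the changed rect keeps the redraw cost proportional to the update, as in the drawRect: above.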

Editing a UIImage

I have a UIImage that I want to edit (say, make every second row of pixels black). Now I am aware of the functions that extract PNG or JPEG data from the image, but that's raw file data and I have no idea how the PNG/JPEG formats work. Is there a way I can extract the colour data for each pixel into an array? And then make a new UIImage using the data from that array?
Here are the steps I took to do something similar (this creates a bitmap context for 8-bit greyscale, no alpha):
// Allocate memory for image data.
bitmapData = malloc(bitmapByteCount);
// Use the generic Grey color space.
colorSpace = CGColorSpaceCreateDeviceGray();
// Create the bitmap context.
context = CGBitmapContextCreate (bitmapData, pixelsWide, pixelsHigh, 8, bitmapBytesPerRow, colorSpace, kCGImageAlphaNone);
Now it says in the docs that you can pass NULL to the bitmapData parameter and have the method handle all the malloc'ing of memory. I have found that if you do that, you can't then use CGBitmapContextGetData to get the pointer to go through the byte data.
// Draw the image into the context
CGContextDrawImage(context, CGRectMake(0, 0, pixelsWide, pixelsHigh), imageRef);
To read a pixel at position i in the data, use:
unsigned char *pointerToPixelData = CGBitmapContextGetData(context);
pixelValue = *(pointerToPixelData + i);
Don't forget to release everything and free malloc'd memory when you're done.
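Applied to the question's example of blacking every second row, the same pointer lets you do one memset per row. A sketch assuming the 8-bit greyscale context above (plain C; the function name is mine):

```c
#include <stddef.h>
#include <string.h>

/* Black out every second row of an 8-bit greyscale bitmap in place.
 * `bits` is the pointer returned by CGBitmapContextGetData; one byte
 * per pixel, so a row is bytesPerRow bytes. */
static void blackenOddRows(unsigned char *bits, size_t bytesPerRow,
                           size_t height) {
    for (size_t y = 1; y < height; y += 2)
        memset(bits + y * bytesPerRow, 0, bytesPerRow);
}
```

Once the bytes are edited, CGBitmapContextCreateImage on the same context gives you the CGImage for the new UIImage.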
Hope this helps,
Dave
Create a CGBitmapContext and draw the UIImage's CGImage into it. Clobber pixel bytes as appropriate, then create a new CGImage (and UIImage, if desired) from the bytes.
The main reason to do this is that CGImage supports a wide variety of pixel formats, which would not be fun to support yourself if you tried to work directly with whatever format a given CGImage happened to have been created with.