I am trying to erase part of an image with my touch on iOS. By setting the blend mode to kCGBlendModeClear I am able to do this, but only with hard edges. I could draw my stroke with varying line widths and alphas, but it appears that CGContextSetAlpha has no effect when combined with kCGBlendModeClear.
How do I go about this?
I would use a transparency layer composited with kCGBlendModeDestinationOut (the result is Da * (1 - Sa) for alpha and Dc * (1 - Sa) for color). Something like this:
CGPathRef pathToErase = ...; // The path you want erased
// could also be an image or (nearly) anything else
// that can be drawn in a bitmap context
CGContextSetBlendMode(ctx, kCGBlendModeDestinationOut);
CGContextBeginTransparencyLayer(ctx, NULL);
{
    CGContextSetGrayFillColor(ctx, 0.0, 1.0); // solid black
    CGContextAddPath(ctx, pathToErase);
    CGContextFillPath(ctx);
    // the above two lines could instead be CGContextDrawImage()
    // or whatever else you're using to clear
}
CGContextEndTransparencyLayer(ctx);
Note that you should also save and restore the graphics state (CGContextSaveGState()/CGContextRestoreGState()) before/after the transparency layer, to ensure that the blend mode and any other gstate changes do not persist.
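For example, a minimal sketch of that wrapping, reusing the ctx and pathToErase from above:

CGContextSaveGState(ctx);
{
    CGContextSetBlendMode(ctx, kCGBlendModeDestinationOut);
    CGContextBeginTransparencyLayer(ctx, NULL);
    CGContextSetGrayFillColor(ctx, 0.0, 1.0);
    CGContextAddPath(ctx, pathToErase);
    CGContextFillPath(ctx);
    CGContextEndTransparencyLayer(ctx);
}
CGContextRestoreGState(ctx); // blend mode and fill color are reset here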
Note: This is brain-compiled and it's possible that transparency layers don't play nice with all blend modes. If so, try drawing the path/image into a second bitmap context, then drawing the contents of that context with the above blend mode.
You can also experiment with other blend modes for different effects.
I'm making a multiplayer game which involves drawing lines. Now I'm trying to implement online multiplayer in the game, but I've been struggling with this. The issue is that I need to revert the state of the drawn lines in case a packet from the server arrives late at the client. I've searched here on Stack Overflow but haven't found any real answer on how to "undo" drawing in a bitmap context. The biggest problem is that the drawing needs to be done very fast, since the game updates every 20 milliseconds. However, I have figured out and tried some different approaches:
1. Save the state of the whole context and then redraw it. This is probably the slowest method.
2. Save only a part of the context (100x100) into another, hidden bitmap by looping over each pixel, then loop over each pixel of that bitmap to copy it back into the main bitmap that is shown on screen.
3. Save each point of the drawn path in a CGMutablePathRef; then, when reverting the context, draw this path with a transparent color (0,0,0,0).
4. Save the position in the bitmap of each pixel that gets drawn in a separate array, and then set each such pixel's alpha to 0 (in the drawn bitmap) when I need to undo.
The last approach should be the fastest of them all. However, I'm not sure how I can get the position of each drawn pixel unless I do it completely manually. Right now I use this code to draw lines:
CGContextSetStrokeColorWithColor(cacheContext, [color CGColor]);
CGContextSetLineCap(cacheContext, kCGLineCapRound);
CGContextSetLineWidth(cacheContext, 6+thickness);
CGContextBeginPath(cacheContext);
CGContextMoveToPoint(cacheContext, point1.x, point1.y);
CGContextAddLineToPoint(cacheContext, point2.x, point2.y);
CGContextStrokePath(cacheContext);
CGRect dirtyPoint1 = CGRectMake(point1.x-10, point1.y-10, 20, 20);
CGRect dirtyPoint2 = CGRectMake(point2.x-10, point2.y-10, 20, 20);
[self setNeedsDisplayInRect:CGRectUnion(dirtyPoint1, dirtyPoint2)];
Here is how the CGBitmapContext is set up:
- (BOOL)initContext:(CGSize)size {
    scaleFactor = [[UIScreen mainScreen] scale]; // 1.0 on non-retina, 2.0 on retina

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    int bitmapBytesPerRow = size.width * 4 * scaleFactor;
    bitmapByteCount = bitmapBytesPerRow * (size.height * scaleFactor);

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    cacheBitmap = malloc(bitmapByteCount);
    if (cacheBitmap == NULL) {
        return NO;
    }

    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little;
    colorSpace = CGColorSpaceCreateDeviceRGB();
    cacheContext = CGBitmapContextCreate(cacheBitmap, size.width * scaleFactor, size.height * scaleFactor, 8, bitmapBytesPerRow, colorSpace, bitmapInfo);
    CGContextScaleCTM(cacheContext, scaleFactor, scaleFactor);
    CGColorSpaceRelease(colorSpace);

    // Clear the (uninitialized) malloc'd buffer to fully transparent; the CTM
    // is already scaled, so the unscaled size covers the whole bitmap.
    CGContextClearRect(cacheContext, (CGRect){CGPointZero, size});
    return YES;
}
Is there any other, better way to undo drawing in the bitmap? If not, how can I get the position of each pixel that gets drawn with Core Graphics? Is that even possible?
Your 4th approach will either duplicate the whole canvas bitmap (should you choose a flat NxM matrix representation) or result in a performance mess in case of a map-based structure or something like that.
Actually, I believe the 2nd way does the trick. I have implemented that kind of undo a few times over the past years, including in a DirectX-based drawing app with a 25-30 fps rendering pipeline.
However, your description of #2 strangely mentions some "loop" you want to perform across the area. You do not need a loop; what you need is a proper API method for copying a portion of the bitmap/graphics context. CGContextDrawImage can be used both to preserve the canvas portion and to undo/redo the drawing.
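For illustration, a rough sketch of that snapshot/restore idea; cacheContext is the bitmap context from the question, area is a hypothetical CGRect covering the pixels about to be drawn, and retina scaling and coordinate flipping are ignored for brevity:

// Snapshot the region before drawing over it:
CGImageRef whole = CGBitmapContextCreateImage(cacheContext); // cheap copy-on-write snapshot
CGImageRef patch = CGImageCreateWithImageInRect(whole, area); // keep only the affected region
CGImageRelease(whole);

// ... lines get drawn ...

// Undo: stamp the saved pixels back, overwriting rather than compositing:
CGContextSaveGState(cacheContext);
CGContextSetBlendMode(cacheContext, kCGBlendModeCopy);
CGContextDrawImage(cacheContext, area, patch);
CGContextRestoreGState(cacheContext);
CGImageRelease(patch);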
I have two ImageViews, one called imageView and the other called subView (which is a subview of imageView).
I want to blend the images on these views together, with the user being able to adjust the alpha of the blend with a pan. My code works, but right now it is slow, as we redraw the image each time the pan gesture moves. Is there a faster/more efficient way of doing this?
BONUS Q: I want to allow my subView image to be drawn zoomed in. Currently I've set my subView to UIViewContentModeCenter; however, I can't seem to draw a zoomed-in part of my image with this content mode. Is there any way around this?
My drawRect: implementation:
- (void)drawRect:(CGRect)rect
{
    float xCenter = self.center.x - self.currentImage1.size.width/2.0;
    float yCenter = self.center.y - self.currentImage1.size.height/2.0;

    subView.alpha = self.blendAmount; // Customize the opacity of the top image.

    UIGraphicsBeginImageContext(self.currentImage1.size);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(c, kCGBlendModeColorBurn);
    [imageView.layer renderInContext:c];
    self.blendedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self.blendedImage drawAtPoint:CGPointMake(xCenter,yCenter)];
}
You need to use the GPU for image processing, which is far faster than using the CPU (as you're doing right now).
You can use the Core Image framework, which is very fast and easy to use but requires iOS 5, or you can use OpenGL directly, but then you need to be experienced and have some knowledge of OpenGL shading.
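For instance, here is a minimal, untested Core Image sketch of the same color-burn blend; topImage and bottomImage stand in for your two images:

#import <CoreImage/CoreImage.h>

CIImage *top = [CIImage imageWithCGImage:topImage.CGImage];
CIImage *bottom = [CIImage imageWithCGImage:bottomImage.CGImage];

CIFilter *blend = [CIFilter filterWithName:@"CIColorBurnBlendMode"];
[blend setValue:top forKey:kCIInputImageKey];
[blend setValue:bottom forKey:kCIInputBackgroundImageKey];

// Reuse one CIContext across pan updates; creating one is expensive.
static CIContext *ciContext = nil;
if (!ciContext) ciContext = [CIContext contextWithOptions:nil];

CGImageRef cgResult = [ciContext createCGImage:blend.outputImage fromRect:[bottom extent]];
UIImage *result = [UIImage imageWithCGImage:cgResult];
CGImageRelease(cgResult);

The pan-controlled blend amount could then be applied by scaling the top image's alpha with a CIColorMatrix filter before blending.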
I want to do some custom drawing with Core Graphics. I need a linear gradient on my view, but the thing is that this view is a rounded rectangle, so I want my gradient to also be rounded at the corners. You can see what I want to achieve in the image below:
So is this possible to implement in Core Graphics, or in some other easy, programmatic way?
Thank you.
I don't think there is an API for that, but you can get the same effect if you first draw a radial gradient into, say, an (N+1)x(N+1) bitmap context, then convert the image from the context into a resizable image with the left and right caps set to N.
Pseudocode:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(N+1,N+1), NO, 0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
// <draw the gradient into 'context'>
UIImage* gradientBase = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage* gradientImage = [gradientBase resizableImageWithCapInsets:UIEdgeInsetsMake(0,N,0,N)];
In case you want the image to scale vertically as well, you just have to set the caps to UIEdgeInsetsMake(N,N,N,N).
I just want to add more sample code for this technique, as some things weren't obvious to me at first. Maybe it will be useful for somebody:
So, let's say we have our custom view class, and in its drawRect: method we put this:
// Defining the rect in which to draw
CGRect drawRect = self.bounds;
Float32 gradientSize = drawRect.size.height; // the size of the original radial gradient
CGPoint center = CGPointMake(0.5f*gradientSize, 0.5f*gradientSize); // center of the gradient

// Creating the gradient
Float32 colors[4] = {0.f,1.f, 1.f,0.2f}; // gray/alpha pairs: from opaque black to mostly transparent white
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGGradientRef gradient = CGGradientCreateWithColorComponents(graySpace, colors, NULL, 2);
CGColorSpaceRelease(graySpace); // don't leak the color space

// Starting image and drawing gradient into it
UIGraphicsBeginImageContextWithOptions(CGSizeMake(gradientSize, gradientSize), NO, 1.f);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawRadialGradient(context, gradient, center, 0.f, center, center.x, 0); // drawing the gradient
CGGradientRelease(gradient); // don't leak the gradient
UIImage* gradientImage = UIGraphicsGetImageFromCurrentImageContext(); // retrieving image from context
UIGraphicsEndImageContext(); // ending the process
gradientImage = [gradientImage resizableImageWithCapInsets:UIEdgeInsetsMake(0.f, center.x-1.f, 0.f, center.x-1.f)]; // leaving a 2-pixel-wide area in the center which will be tiled to fill the whole area

// Drawing image into view frame
[gradientImage drawInRect:drawRect];
That's all. Also, if you're never going to change the gradient while the app is running, you may want to put everything except the last line in the awakeFromNib method and then, in drawRect:, just draw the gradientImage into the view's frame. Also don't forget to retain the gradientImage in this case.
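A minimal sketch of that split, assuming a retained UIImage* gradientImage ivar and a hypothetical -buildGradientImage helper that wraps everything above except the last line:

- (void)awakeFromNib {
    [super awakeFromNib];
    gradientImage = [[self buildGradientImage] retain]; // MRC; under ARC a strong ivar suffices
}

- (void)drawRect:(CGRect)rect {
    [gradientImage drawInRect:self.bounds];
}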
I've created a custom progress bar which subclasses UIView and implements drawRect:. I managed to draw a single gradient across the entire view. I'd like, however, to draw several different gradients, each one in a different position. How do I limit CGContextDrawLinearGradient to a smaller rect inside my view?
glossGradient = CGGradientCreateWithColorComponents(rgbColorspace, components, locations, num_locations);
CGPoint topCenter = CGPointMake(start + (CGRectGetMidX(currentBounds)/currentBounds.size.width), 0.0f);
CGPoint midCenter = CGPointMake(start + (CGRectGetMidX(currentBounds)/currentBounds.size.width), currentBounds.size.height);
CGContextDrawLinearGradient(currentContext, glossGradient, topCenter, midCenter, 0);
start = start + (values[i] / currentBounds.size.width);
CGGradientRelease(glossGradient);
}
You can use CGContextClipToRect to restrict the drawing area.
Then for each gradient do:
CGContextSaveGState(currentContext);
CGContextClipToRect(currentContext, theRect); // theRect should be the area where you want to draw the gradient
... // gradient drawing code
CGContextRestoreGState(currentContext);
As stated in the Quartz 2D Programming Guide:
When you paint a gradient, Quartz fills the current context. Painting a gradient is different from working with colors and patterns, which are used to stroke and fill path objects. As a result, if you want your gradient to appear in a particular shape, you need to clip the context accordingly.
Since you want to draw each gradient in a rectangle, you will want to do something like this for each gradient and rectangle:
CGContextSaveGState(currentContext); {
    CGContextClipToRect(currentContext, currentBounds);
    CGContextDrawLinearGradient(currentContext, glossGradient, topCenter, midCenter, 0);
} CGContextRestoreGState(currentContext);
I'm working on an iPhone app where there are two image views, and when you touch the top one, wherever you tapped, the bottom one shows through instead.
Basically, what I want to do is cut an ellipse/rounded rect out of an image. To do this I was thinking of either clipping the image or changing the alpha of the pixels in the rect to zero. I am new to Quartz 2D programming, so I am not sure how to do this.
Assuming I have:
UIImageView *topImage;
UIImageView *bottomImage;
How do I delete a CGRect/ellipse/rounded rect from these images?
This is kind of like those lottery tickets that you have to scratch off to reveal if you won.
I would generally try to make a mask from a path (here containing a rounded rectangle), then mask the image with it, as demonstrated in the Apple docs. One of the benefits of this is that for hit testing, all you need to do is call CGPathContainsPoint with the point that was touched (i.e., it will tell you whether the touch was in the visible area of the image).
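For illustration, a rough sketch of the cut-out; this swaps the docs' image-mask call for an even-odd clip, which gives the same effect, and sourceImage and holePath are assumed to exist in your code:

UIGraphicsBeginImageContextWithOptions(sourceImage.size, NO, sourceImage.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Add the full image rect plus the hole path; even-odd clipping then
// excludes the path's interior from subsequent drawing.
CGContextAddRect(ctx, (CGRect){CGPointZero, sourceImage.size});
CGContextAddPath(ctx, holePath);
CGContextEOClip(ctx);
[sourceImage drawAtPoint:CGPointZero];
UIImage *punchedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Hit testing the remaining visible area is then just:
// BOOL stillVisible = !CGPathContainsPoint(holePath, NULL, touchPoint, NO);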
I tried this code:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect frame = CGRectMake(100, 100, 100, 100);
CGPathRef roundedRectPath = [self newPathForRoundedRect:frame radius:5];
CGContextAddPath(ctx, roundedRectPath);
CGContextClip (ctx);
CGPathRelease(roundedRectPath);
(Together with the rounded rect path function you sent.)
This is on a white view, and beneath the view there is a gray window, so I thought this would just show gray instead of white inside the CGRect frame, but it didn't do anything...