Erase and un-erase an image in UIImageView with touch using Core Graphics - Objective-C

My question is the same as the one asked here. I'm also using two images in my app, and all I need is to erase the top image by touch, then un-erase the erased part (if required) by touch. I'm using the following code to erase the top image. There is also a problem with this approach: the images are big, and I'm using the Aspect Fit content mode to display them properly. When I touch the screen, it erases in the corner, not at the touched place. I think the touch point calculation needs some fixing. Any help will be appreciated.
The second problem is: how do I un-erase the erased part by touch?
UIGraphicsBeginImageContext(self.imgTop.image.size);
[self.imgTop.image drawInRect:CGRectMake(0, 0, self.imgTop.image.size.width, self.imgTop.image.size.height)];
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), pinSize);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 0, 0, 0, 1.0);
CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeCopy);
CGContextBeginPath(UIGraphicsGetCurrentContext());
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
self.imgTop.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Your code is quite ambiguous: you're drawing imgTop into a context and then stroking black over it with kCGBlendModeCopy? That just copies the black color onto imgTop; it doesn't erase anything.
Anyway, this class does what you need. There are only a few interesting methods (they're at the top); the others are just property accessors and init... routines.
@interface EraseImageView : UIView {
CGContextRef context;
CGRect contextBounds;
}
@property (nonatomic, retain) UIImage *backgroundImage;
@property (nonatomic, retain) UIImage *foregroundImage;
@property (nonatomic, assign) CGFloat touchWidth;
@property (nonatomic, assign) BOOL touchRevealsImage;
- (void)resetDrawing;
@end
@interface EraseImageView ()
- (void)createBitmapContext;
- (void)drawImageScaled:(UIImage *)image;
@end
@implementation EraseImageView
@synthesize touchRevealsImage=_touchRevealsImage, backgroundImage=_backgroundImage, foregroundImage=_foregroundImage, touchWidth=_touchWidth;
#pragma mark - Main methods -
- (void)createBitmapContext
{
// create a grayscale colorspace
CGColorSpaceRef grayscale=CGColorSpaceCreateDeviceGray();
/* TO DO: instead of saving the bounds at the moment of creation,
override setFrame:, create a new context with the right
size, draw the previous on the new, and replace the old
one with the new one.
*/
contextBounds=self.bounds;
// create a new 8 bit grayscale bitmap with no alpha (the mask)
context=CGBitmapContextCreate(NULL,
(size_t)contextBounds.size.width,
(size_t)contextBounds.size.height,
8,
(size_t)contextBounds.size.width,
grayscale,
kCGImageAlphaNone);
// make it white (touchRevealsImage==NO)
CGFloat white[]={1., 1.};
CGContextSetFillColor(context, white);
CGContextFillRect(context, contextBounds);
// setup drawing for that context
CGContextSetLineCap(context, kCGLineCapRound);
CGContextSetLineJoin(context, kCGLineJoinRound);
CGColorSpaceRelease(grayscale);
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch=(UITouch *)[touches anyObject];
// the new line that will be drawn
CGPoint points[]={
[touch previousLocationInView:self],
[touch locationInView:self]
};
// setup width and color
CGContextSetLineWidth(context, self.touchWidth);
CGFloat color[]={(self.touchRevealsImage ? 1. : 0.), 1.};
CGContextSetStrokeColor(context, color);
// stroke
CGContextStrokeLineSegments(context, points, 2);
[self setNeedsDisplay];
}
- (void)drawRect:(CGRect)rect
{
if (self.foregroundImage==nil || self.backgroundImage==nil) return;
// draw background image
[self drawImageScaled:self.backgroundImage];
// create an image mask from the context
CGImageRef mask=CGBitmapContextCreateImage(context);
// set the current clipping mask to the image
CGContextRef ctx=UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextClipToMask(ctx, contextBounds, mask);
// now draw image (with mask)
[self drawImageScaled:self.foregroundImage];
CGContextRestoreGState(ctx);
CGImageRelease(mask);
}
- (void)resetDrawing
{
// draw black or white
CGFloat color[]={(self.touchRevealsImage ? 0. : 1.), 1.};
CGContextSetFillColor(context, color);
CGContextFillRect(context, contextBounds);
[self setNeedsDisplay];
}
#pragma mark - Helper methods -
- (void)drawImageScaled:(UIImage *)image
{
// just draws the image scaled down and centered
CGFloat selfRatio=self.frame.size.width/self.frame.size.height;
CGFloat imgRatio=image.size.width/image.size.height;
CGRect rect={0.,0.,0.,0.};
if (selfRatio>imgRatio) {
// view is wider than img
rect.size.height=self.frame.size.height;
rect.size.width=imgRatio*rect.size.height;
} else {
// img is wider than view
rect.size.width=self.frame.size.width;
rect.size.height=rect.size.width/imgRatio;
}
rect.origin.x=.5*(self.frame.size.width-rect.size.width);
rect.origin.y=.5*(self.frame.size.height-rect.size.height);
[image drawInRect:rect];
}
#pragma mark - Initialization and properties -
- (id)initWithCoder:(NSCoder *)aDecoder
{
if ((self=[super initWithCoder:aDecoder])) {
[self createBitmapContext];
_touchWidth=10.;
}
return self;
}
- (id)initWithFrame:(CGRect)frame
{
if ((self=[super initWithFrame:frame])) {
[self createBitmapContext];
_touchWidth=10.;
}
return self;
}
- (void)dealloc
{
CGContextRelease(context);
// also release the retained images
[_backgroundImage release];
[_foregroundImage release];
[super dealloc];
}
- (void)setBackgroundImage:(UIImage *)value
{
if (value!=_backgroundImage) {
[_backgroundImage release];
_backgroundImage=[value retain];
[self setNeedsDisplay];
}
}
- (void)setForegroundImage:(UIImage *)value
{
if (value!=_foregroundImage) {
[_foregroundImage release];
_foregroundImage=[value retain];
[self setNeedsDisplay];
}
}
- (void)setTouchRevealsImage:(BOOL)value
{
if (value!=_touchRevealsImage) {
_touchRevealsImage=value;
[self setNeedsDisplay];
}
}
@end
Some notes:
This class retains the two images you need. It has a touchRevealsImage property to set the mode to drawing or erasing, and you can set the width of the line.
At initialization, it creates a CGBitmapContextRef: grayscale, 8 bpp, no alpha, the same size as the view. This context is used to store the mask that will be applied to the foreground image.
Every time you move a finger on the screen, a line is drawn on the CGBitmapContextRef using Core Graphics: white to reveal the image, black to hide it. This way we're storing a black-and-white drawing.
The drawRect: routine simply draws the background, then creates a CGImageRef from the CGBitmapContextRef and applies it to the current context as a mask. Then draws the foreground image. To draw images it uses - (void)drawImageScaled:(UIImage *)image, which just draws the image scaled and centered.
If you're planning to resize the view, you should implement a method to copy or recreate the mask at the new size, overriding - (void)setFrame:(CGRect)frame (see the sketch after these notes).
The - (void)resetDrawing method simply clears the mask.
Even though the bitmap context has no alpha channel, the grayscale color space used does have alpha: that's why every time a color is set, two components have to be specified.
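For the resizing note, here is a minimal sketch of what such a setFrame: override could look like (my addition, not part of the original class; it simply stretches the old mask into the new bounds, which softens the drawing slightly):
- (void)setFrame:(CGRect)frame
{
    [super setFrame:frame];
    // keep a copy of the current mask, then rebuild the context at the new size
    CGImageRef oldMask = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    [self createBitmapContext];
    // stretch the old mask into the new bounds
    CGContextDrawImage(context, contextBounds, oldMask);
    CGImageRelease(oldMask);
    [self setNeedsDisplay];
}
And a hedged usage sketch, assuming manual reference counting as in the class itself (the view controller context and asset names are illustrative):
EraseImageView *eraseView = [[EraseImageView alloc] initWithFrame:self.view.bounds];
eraseView.backgroundImage = [UIImage imageNamed:@"bottom"]; // hypothetical asset names
eraseView.foregroundImage = [UIImage imageNamed:@"top"];
eraseView.touchWidth = 20.;
eraseView.touchRevealsImage = NO; // NO erases the foreground, YES paints it back
[self.view addSubview:eraseView];
[eraseView release];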

CALayer doesn't display

(I have already read this page, but it didn't help me: CALayer not displaying.)
I have a class called Image that has this data field:
uint8_t *data;
I already use this data to display the Image on a CALayer, using code I gathered from the internet.
I saw how to create another window in my application, and I put an NSView inside it to display an Image using the method below. I intend to display the histogram later, but for now I'm just trying to display the same image again:
-(void)wrapImageToCALayer: (CALayer*) layer{
if(!layer) return;
CGColorSpaceRef grayColorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(data, width, height, 8, step, grayColorSpace, kCGImageAlphaNone);
CGImageRef dstImage = CGBitmapContextCreateImage(context);
dispatch_sync(dispatch_get_main_queue(), ^{
layer.contents = (__bridge id)dstImage;
});
CGImageRelease(dstImage);
CGContextRelease(context);
CGColorSpaceRelease(grayColorSpace);
}
And this is my Window Controler:
@implementation HistogramControllerWindowController
@synthesize display;
- (id)initWithWindow:(NSWindow *)window{
self = [super initWithWindow:window];
if (self) {
// Initialization code here.
}
return self;
}
- (void)windowDidLoad
{
// Implement this method to handle any initialization after your window controller's window has been loaded from its nib file.
[super windowDidLoad];
histogramDisplayLayer = [CALayer layer];
[histogramDisplayLayer setContentsGravity:AVLayerVideoGravityResizeAspectFill];
histogramDisplayLayer.position = CGPointMake(display.frame.size.width/2., display.frame.size.height/2.);
[histogramDisplayLayer setNeedsDisplay];
[display.layer addSublayer: histogramDisplayLayer];
}
@end
And I'm calling this way:
[frame wrapImageToCALayer:histogramDisplayLayer];
Note that histogramDisplayLayer is an external (CALayer *)
Your histogramDisplayLayer has no size defined: you set its position but not its size. Initialize its frame, and this should fix your problem.
Just insert:
[display setWantsLayer:YES]; // view's backing store is using a Core Animation Layer
Before:
[display.layer addSublayer: histogramDisplayLayer];
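Putting both fixes together, -windowDidLoad might look like this (just a sketch: using display.bounds as the layer's frame is an assumption, and I've swapped the AVFoundation gravity constant for the Core Animation one, kCAGravityResizeAspectFill):
- (void)windowDidLoad
{
    [super windowDidLoad];
    [display setWantsLayer:YES]; // back the view with a Core Animation layer
    histogramDisplayLayer = [CALayer layer];
    [histogramDisplayLayer setContentsGravity:kCAGravityResizeAspectFill];
    histogramDisplayLayer.frame = display.bounds; // give the layer a size, not just a position
    [display.layer addSublayer:histogramDisplayLayer];
}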
See also: Best way to change the background color for an NSView.

How do I prevent a text stroke from cutting off at the edge of a UILabel?

I'm adding a fairly wide stroke of a few pixels to text in a UILabel and, depending on the line spacing, if the very edges of the text touch the very edges of the label, the sides of the stroke can get cut off where they fall outside the bounds of the label. How can I prevent this? Here's the code I'm using to apply the stroke (currently 5px):
- (void) drawTextInRect: (CGRect) rect
{
UIColor *textColor = self.textColor;
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(c, 5);
CGContextSetLineJoin(c, kCGLineJoinRound);
CGContextSetTextDrawingMode(c, kCGTextStroke);
self.textColor = [UIColor colorWithRed: 0.165 green: 0.635 blue: 0.843 alpha: 1.0];
[super drawTextInRect: rect];
}
Here's an example of the clipping at the side of the label. I think what needs to happen is one of the following:
That when the text splits onto multiple lines, some space is given inside the label's frame for the stroke to occupy.
Or, that the stroke is allowed to overflow the outer bounds of the label.
Yup, clipping just didn't work.
What if you created insets on your UILabel subclass, though? You'd make the frame of the label however big you need it to be, then set your insets. When the text is drawn, the insets give you padding on any edge you need.
The downside is that you won't be able to judge line wrapping at a glance in IB. You'd have to take your label, subtract your insets, and then you'd see what it would really look like on the screen.
.h
@interface MyLabel : UILabel
@property (nonatomic) UIEdgeInsets insets;
@end
.m
@implementation MyLabel
- (id)initWithCoder:(NSCoder *)aDecoder {
self = [super initWithCoder:aDecoder];
if (self) {
self.insets = UIEdgeInsetsMake(0, 3, 0, 3);
}
return self;
}
- (void)drawRect:(CGRect)rect {
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextSaveGState(c);
CGRect actualTextContentRect = rect;
actualTextContentRect.origin.x += self.insets.left;
actualTextContentRect.origin.y += self.insets.top;
// shrink by both horizontal and both vertical insets, so the right/bottom
// padding holds up even when left/top are non-zero
actualTextContentRect.size.width -= (self.insets.left + self.insets.right);
actualTextContentRect.size.height -= (self.insets.top + self.insets.bottom);
CGContextSetLineWidth(c, 5);
CGContextSetLineJoin(c, kCGLineJoinRound);
CGContextSetTextDrawingMode(c, kCGTextStroke);
self.textColor = [UIColor colorWithRed: 0.165 green: 0.635 blue: 0.843 alpha: 1.0];
[super drawTextInRect:actualTextContentRect];
CGContextRestoreGState(c);
self.textColor = [UIColor whiteColor];
[super drawTextInRect:actualTextContentRect];
}
@end
Edit: added the full code for my UILabel subclass, modified slightly to show both the large stroke and the normal lettering.
You should implement sizeThatFits: on your UILabel subclass to return a slightly larger preferred size, taking the additional space required for the stroke into consideration. You can then either use the result of sizeThatFits: to calculate the label's frame correctly, or just call sizeToFit.
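A minimal sketch of that override on the MyLabel subclass above, padding by the insets (you could just as well pad by the stroke width directly):
- (CGSize)sizeThatFits:(CGSize)size
{
    // let UILabel compute its preferred size, then add room for the stroke
    CGSize fit = [super sizeThatFits:size];
    fit.width += self.insets.left + self.insets.right;
    fit.height += self.insets.top + self.insets.bottom;
    return fit;
}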

drawRect in UIView subclass doesn't update image

I have a UIView subclass that I would like to use to draw images with different blend modes.
code:
@implementation CompositeImageView
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
}
return self;
}
-(void)setBlendMode:(CGBlendMode) composite
{
blender = composite;
}
-(void)setImage:(UIImage*) img
{
image = img;
}
- (void)drawRect:(CGRect)rect
{
NSLog(@"it draws");
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(c, blender);
CGContextDrawImage(c, rect, image.CGImage);
}
@end
The code that I use to set it up is:
[testImage setImage:[UIImage imageNamed:@"Prayer_Background_Paid"]];
[testImage setBlendMode:kCGBlendModeColor];
[testImage setNeedsDisplay];
I'm using Interface Builder to place a large rectangular UIView and then setting its class to CompositeImageView. However, it still draws as a large white square. Even if I comment out everything inside drawRect, it still draws the white square. It IS calling drawRect, though, because "it draws" is being logged.
Are you sure kCGBlendModeColor is what you want? From the Apple doc:
Uses the luminance values of the background with the hue and
saturation values of the source image.
It seems that if the background is white, the blended image will also appear white.
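To illustrate, here's a sketch of a drawRect: that gives the blend some non-white luminance to work with by drawing a base image first (baseImage is a hypothetical extra ivar, not part of the original class):
- (void)drawRect:(CGRect)rect
{
    CGContextRef c = UIGraphicsGetCurrentContext();
    // draw the base with normal blending, so the luminance isn't plain white
    CGContextDrawImage(c, rect, baseImage.CGImage);
    // then blend the second image over it
    CGContextSetBlendMode(c, blender);
    CGContextDrawImage(c, rect, image.CGImage);
}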

Move NSView around until it hits a border

In a Cocoa-based app I have a canvas for drawing, inherited from NSView, as well as a rectangle, also inherited from NSView. Dragging the rectangle around inside the canvas is no problem:
-(void)mouseDragged:(NSEvent *)theEvent {
NSPoint myOrigin = self.frame.origin;
[self setFrameOrigin:NSMakePoint(myOrigin.x + [theEvent deltaX],
myOrigin.y - [theEvent deltaY])];
}
Works like a charm. The issue I'm having now: how can I prevent the rectangle from being moved outside the canvas?
First of all I would like to fix this just for the left border, adapting the other edges afterwards. My first idea was: "check whether the x-origin of the rectangle is negative". But once it is negative, the rectangle can't be moved anymore (naturally). I solved this by moving the rectangle back to a zero x-offset in the else branch. This works, but it's ... ugly.
So I'm a little puzzled by this one; any hints? The solution is surely really near and easy, so easy that I cannot figure it out (as always with easy solutions ;).
Regards, Macs
I'd suggest not using deltaX and deltaY; try using the event's location in the superview instead. You'll need a reference to the subview.
// In the superview
- (void)mouseDragged:(NSEvent *)event {
NSPoint mousePoint = [self convertPoint:[event locationInWindow]
fromView:nil];
// Could also add the width of the moving rectangle to this check
// to keep any part of it from going outside the superview
mousePoint.x = MAX(0, MIN(mousePoint.x, self.bounds.size.width));
mousePoint.y = MAX(0, MIN(mousePoint.y, self.bounds.size.height));
// position is a custom ivar that indicates the center of the object;
// you could also use frame.origin, but it looks nicer if objects are
// dragged from their centers
myMovingRectangle.position = mousePoint;
[self setNeedsDisplay:YES];
}
You'd do essentially the same bounds checking in mouseUp:.
UPDATE: You should also have a look at the View Programming Guide, which walks you through creating a draggable view: Creating a Custom View.
Sample code that should be helpful, though not strictly relevant to your original question:
In DotView.m:
- (void)drawRect:(NSRect)dirtyRect {
// Ignoring dirtyRect for simplicity
[[NSColor colorWithDeviceRed:0.85 green:0.8 blue:0.8 alpha:1] set];
NSRectFill([self bounds]);
// Dot is the custom shape class that can draw itself; see below
// dots is an NSMutableArray containing the shapes
for (Dot *dot in dots) {
[dot draw];
}
}
- (void)mouseDown:(NSEvent *)event {
NSPoint mousePoint = [self convertPoint:[event locationInWindow]
fromView:nil];
currMovingDot = [self clickedDotForPoint:mousePoint];
// Move the dot to the point to indicate that the user has
// successfully "grabbed" it
if( currMovingDot ) currMovingDot.position = mousePoint;
[self setNeedsDisplay:YES];
}
// -mouseDragged: already defined earlier in post
- (void)mouseUp:(NSEvent *)event {
if( !currMovingDot ) return;
NSPoint mousePoint = [self convertPoint:[event locationInWindow]
fromView:nil];
mousePoint.x = MAX(0, MIN(mousePoint.x, self.bounds.size.width));
mousePoint.y = MAX(0, MIN(mousePoint.y, self.bounds.size.height));
currMovingDot.position = mousePoint;
currMovingDot = nil;
[self setNeedsDisplay:YES];
}
- (Dot *)clickedDotForPoint:(NSPoint)point {
// DOT_NUCLEUS_RADIUS is the size of the
// dot's internal "handle"
for( Dot *dot in dots ){
if( (fabs(dot.position.x - point.x) <= DOT_NUCLEUS_RADIUS) &&
(fabs(dot.position.y - point.y) <= DOT_NUCLEUS_RADIUS)) {
return dot;
}
}
return nil;
}
Dot.h
#define DOT_NUCLEUS_RADIUS (5)
@interface Dot : NSObject {
NSPoint position;
}
@property (assign) NSPoint position;
- (void)draw;
@end
Dot.m
#import "Dot.h"
@implementation Dot
@synthesize position;
- (void)draw {
//!!!: Demo only: assume that focus is locked on a view.
NSColor *clr = [NSColor colorWithDeviceRed:0.3
green:0.2
blue:0.8
alpha:1];
// Draw a nice border
NSBezierPath *outerCirc;
outerCirc = [NSBezierPath bezierPathWithOvalInRect:
NSMakeRect(position.x - 23, position.y - 23, 46, 46)];
[clr set];
[outerCirc stroke];
[[clr colorWithAlphaComponent:0.7] set];
[outerCirc fill];
[clr set];
// Draw the "handle"
NSRect nucleusRect = NSMakeRect(position.x - DOT_NUCLEUS_RADIUS,
position.y - DOT_NUCLEUS_RADIUS,
DOT_NUCLEUS_RADIUS * 2,
DOT_NUCLEUS_RADIUS * 2);
[[NSBezierPath bezierPathWithOvalInRect:nucleusRect] fill];
}
@end
As you can see, the Dot class is very lightweight, and uses bezier paths to draw. The superview can handle the user interaction.

CGAffineTransformMakeScale Makes UIView Jump to Original Size before scale

I have a UIView that I set up to respond to pinch gestures and change its size, except once you enlarge it and then try to pinch it again, it jumps back to its original size (which happens to be 100x200) before scaling. Here is the code:
@implementation ChartAxisView
- (id)initWithFrame:(CGRect)frame {
if ((self = [super initWithFrame:frame])) {
// do gesture recognizers
UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(onPinch:)];
[self addGestureRecognizer:pinch];
[pinch release];
}
return self;
}
- (void)drawRect:(CGRect)rect {
CGContextRef context = UIGraphicsGetCurrentContext();
CGColorRef redColor = [UIColor colorWithRed:1.0 green:0.0 blue:0.0 alpha:1.0].CGColor;
CGContextSetFillColorWithColor(context, redColor);
CGContextFillRect(context, self.bounds);
}
- (void)onPinch: (UIPinchGestureRecognizer*) gesture {
self.transform = CGAffineTransformMakeScale(gesture.scale, gesture.scale);
}
- (void)dealloc {
[super dealloc];
}
@end
Any thoughts?
There are two variants of the scale (and, in general, the transform) functions: CGAffineTransformMakeScale and CGAffineTransformScale.
The first one, CGAffineTransformMakeScale, which you are using, always builds the transform relative to the view's original (identity) size. That is why you see the jump back to the original size before the scaling happens.
The second one, CGAffineTransformScale, scales from an existing transform, which is what you need. It takes an additional 'transform' argument; in your case that argument represents the already-enlarged view.
Read this very informative blog post about transformations.
- (void)onPinch: (UIPinchGestureRecognizer*) gesture {
if ([gesture state] == UIGestureRecognizerStateBegan)
{
curTransform = self.transform;
}
self.transform = CGAffineTransformScale(curTransform,gesture.scale, gesture.scale);
}
Use CGAffineTransformScale instead of CGAffineTransformMakeScale. You will need an ivar to hold the transform at the start of the gesture: CGAffineTransform curTransform; ;)
You can set the transform with the following code:
[ivClone setTransform:CGAffineTransformMakeScale(scale, scale)];
and you can read the current scale back with the following code:
CGFloat xScale = newV.transform.a; // x scale
CGFloat yScale = newV.transform.d; // y scale