Aliased (non-antialiased) drawing with Core Graphics on a Retina display

I want to use the aliased (antialiasing disabled) drawing capability of Core Graphics together with zooming, so that individual pixels are clearly visible at 1600% or more.
Everything works fine on an iPad 1 (no Retina display), but when the same code runs on an iPhone 4 / 4S I can't see that aliased drawing is in effect.
I followed the instructions from this post (CoreGraphics for retina display), but the image is still not aliased.
Here's a screen capture of the problem: http://farm8.staticflickr.com/7161/6634020573_19d8b3549f_o.jpg
Here's how I do the drawing (I subclass UIView and put the drawing code in drawRect:):
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 1.0);

    // Red stroke color (create and release the color space / color to avoid leaks)
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGFloat cc[] = {1.0, 0.0, 0.0, 1.0};
    CGColorRef strokeColor = CGColorCreate(cs, cc);
    CGContextSetStrokeColorWithColor(context, strokeColor);
    CGColorRelease(strokeColor);
    CGColorSpaceRelease(cs);

    // First line: antialiasing off
    CGContextSetShouldAntialias(context, FALSE);
    CGContextSetAllowsAntialiasing(context, FALSE);
    CGContextMoveToPoint(context, 10, 0);
    CGContextAddLineToPoint(context, 266, 256);
    CGContextStrokePath(context);

    // Second line: antialiasing on, for comparison
    CGContextSetShouldAntialias(context, TRUE);
    CGContextSetAllowsAntialiasing(context, TRUE);
    CGContextMoveToPoint(context, -10, 0);
    CGContextAddLineToPoint(context, 246, 256);
    CGContextStrokePath(context);
}
I zoom in with a simple double tap:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *t = [touches anyObject];
    if ([t tapCount] == 2) {
        paper *p = [paperStack objectAtIndex:0];
        CGAffineTransform transform = p.transform;
        p.transform = CGAffineTransformScale(transform, 16.0f, 16.0f);
    }
}
Any help / tips will be much appreciated. Thank you in advance!
Best,
Kitdastudio

To get sharp, crisp pixels on the Retina display I had to give up
CGContextMoveToPoint(context, 10, 0);
CGContextAddLineToPoint(context, 266, 256);
and instead use a loop that adds 1x1 rectangles along the line.
That yields sharp, crisp pixels on both the Retina display and the older 1x displays. A sketch of that approach follows.
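For illustration, here is a minimal sketch of that rectangle-per-pixel idea (my own reconstruction, not the original poster's code; the helper name and the red color are assumptions): step along the line and fill one 1x1 rectangle in point coordinates per step.

// Sketch: draw an aliased-looking line by filling 1x1 rectangles along it.
// Assumes it is called from drawRect: with a valid current context.
static void DrawPixelLine(CGContextRef context, CGPoint from, CGPoint to)
{
    CGFloat dx = to.x - from.x;
    CGFloat dy = to.y - from.y;
    NSUInteger steps = (NSUInteger)MAX(fabs(dx), fabs(dy));
    if (steps == 0) { steps = 1; }

    CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
    for (NSUInteger i = 0; i <= steps; i++) {
        CGFloat t = (CGFloat)i / (CGFloat)steps;
        // Snap to whole points so the filled squares line up with the grid.
        CGFloat x = floor(from.x + t * dx);
        CGFloat y = floor(from.y + t * dy);
        CGContextFillRect(context, CGRectMake(x, y, 1.0, 1.0));
    }
}

// Usage inside drawRect:
// DrawPixelLine(context, CGPointMake(10, 0), CGPointMake(266, 256));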

Related

How to override the draw method of a PDFAnnotation (iOS PDFKit)

I followed another Stack Overflow post that explains how to override the draw method of a PDFAnnotation so that I can draw a picture instead of a traditional PDFAnnotation.
Sadly, I was not able to achieve that, and the annotation drawn on top of my PDF is still a regular one.
This is the code that I used:
@implementation PDFImageAnnotation {
    UIImage *_picture;
    CGRect _bounds;
}

- (instancetype)initWithPicture:(nonnull UIImage *)picture bounds:(CGRect)bounds {
    self = [super initWithBounds:bounds
                         forType:PDFAnnotationSubtypeWidget
                  withProperties:nil];
    if (self) {
        _picture = picture;
        _bounds = bounds;
    }
    return self;
}

- (void)drawWithBox:(PDFDisplayBox)box inContext:(CGContextRef)context {
    [super drawWithBox:box inContext:context];
    [_picture drawInRect:_bounds];
    CGContextRestoreGState(context);
    UIGraphicsPushContext(context);
}

@end
Does someone know how I could override the draw method so that I can draw a custom annotation?
Thank you!
PS: I also tried to follow the tutorial on the Apple dev site.
UPDATE:
I'm now able to draw pictures using CGContextDrawImage, but I'm not able to flip the coordinates back into place. When I do that, my pictures are not drawn; it seems they end up outside of the page, but I'm not sure.
This is my new code:
- (void)drawWithBox:(PDFDisplayBox)box inContext:(CGContextRef)context {
    [super drawWithBox:box inContext:context];
    UIGraphicsPushContext(context);
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, 0.0, _pdfView.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, _bounds, _picture.CGImage);
    CGContextRestoreGState(context);
    UIGraphicsPopContext();
}
I also tried to follow the tutorial on the Apple dev site.
Which one?
Custom Graphics
Adding Custom Graphics to a PDF
It doesn't really matter, because both include UIGraphicsPushContext(context) and CGContextSaveGState(context) calls, and your code doesn't. Do not blindly copy and paste examples; try to understand them. Read up on what these two calls do.
Fixed code:
- (void)drawWithBox:(PDFDisplayBox)box inContext:(CGContextRef)context {
    [super drawWithBox:box inContext:context];
    UIGraphicsPushContext(context);
    CGContextSaveGState(context);
    [_picture drawInRect:_bounds];
    CGContextRestoreGState(context);
    UIGraphicsPopContext();
}
The image was drawn with CGRectMake(20, 20, 100, 100). It's upside down, because PDFPage coordinates are flipped (0, 0 = bottom/left). Leaving it as an exercise for OP.
Rotation
Your rotation code is wrong:
CGContextTranslateCTM(context, 0.0, _pdfView.bounds.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, _bounds, _picture.CGImage);
It's based on _pdfView bounds, but it should be based on the image bounds (_bounds). Here's the correct one:
- (void)drawWithBox:(PDFDisplayBox)box inContext:(CGContextRef)context {
    [super drawWithBox:box inContext:context];
    UIGraphicsPushContext(context);
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, _bounds.origin.x, _bounds.origin.y + _bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    [_picture drawInRect:CGRectMake(0, 0, _bounds.size.width, _bounds.size.height)];
    CGContextRestoreGState(context);
    UIGraphicsPopContext();
}
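For completeness, here's a hypothetical usage sketch (the pdfView outlet and the "fish.png" image name are assumptions, not from the question) that attaches the custom annotation to a page; the rect matches the CGRectMake(20, 20, 100, 100) mentioned above.

// Sketch: attach the custom annotation to the first page of the document.
// "pdfView" is an assumed PDFView outlet; "fish.png" is a placeholder image name.
UIImage *picture = [UIImage imageNamed:@"fish.png"];
PDFPage *page = [pdfView.document pageAtIndex:0];
PDFImageAnnotation *annotation =
    [[PDFImageAnnotation alloc] initWithPicture:picture
                                         bounds:CGRectMake(20, 20, 100, 100)];
[page addAnnotation:annotation]; // PDFKit will then call drawWithBox:inContext: when rendering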

Drawings in drawRect not being displayed correctly

I want to implement freeform drawing in my app. First, I tried the code inside drawLayer:inContext: and it gave me the result I wanted.
Drawing in CALayer:
But when I moved the same code into drawRect:, this happened:
Even if I draw inside the white space, the drawing is rendered outside, as shown above. The code is exactly the same; I copy-pasted it from drawLayer:inContext: to drawRect: and didn't change a thing, so why is this happening?
The Code:
// ctx is the CGContextRef passed to drawLayer:inContext:
// (in drawRect: it would come from UIGraphicsGetCurrentContext())
CGContextSaveGState(ctx);
CGContextSetLineCap(ctx, kCGLineCapRound);
CGContextSetLineWidth(ctx, 1.0);
CGContextSetRGBStrokeColor(ctx, 1, 0, 0, 1);
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, prevPoint.x, prevPoint.y);
for (NSValue *r in drawnPoints) {
    CGPoint pt = [r CGPointValue];
    CGContextAddLineToPoint(ctx, pt.x, pt.y);
}
CGContextStrokePath(ctx);
CGContextRestoreGState(ctx);
I see you are using the app in full-screen mode, where the view is centered and does not take up the full width of the screen.
It may be that the CALayer has a transform applied to it that translates the drawing from the left side of the screen to the center. That may not be the case with drawRect:. Try setting the CGContext's transform matrix:
CGContextSaveGState(ctx);
// screenFrame / viewFrame stand for the screen's frame and this view's frame in window coordinates
CGFloat xOffset = CGRectGetMidX(screenFrame) - CGRectGetMidX(viewFrame);
CGContextTranslateCTM(ctx, xOffset, 0.0f);
// rest of drawing code
// ...
CGContextRestoreGState(ctx);
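If it helps, one way to fill in those placeholder frames (my assumption, not part of the original answer) is to convert the view's bounds to window coordinates and compare them with the screen:

// Sketch: compute the horizontal offset between the drawing view and the screen.
// Assumes this runs inside the view (self) that does the drawing.
CGRect viewFrame = [self convertRect:self.bounds toView:nil];   // view in window coordinates
CGRect screenFrame = [UIScreen mainScreen].bounds;              // full screen
CGFloat xOffset = CGRectGetMidX(screenFrame) - CGRectGetMidX(viewFrame);

CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, xOffset, 0.0f);
// ... stroke the drawnPoints path from the question here ...
CGContextRestoreGState(ctx);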

Cocoa Touch - drawing with Core Graphics at a touch point

I want to do something that seems pretty simple but is proving otherwise: I just want to draw a square at the point where a touch is registered, but I can't seem to get it to work. In touchesBegan: I call a custom method named drawSquare: that is passed a CGRect. I know I must be doing something simple wrong, but I don't know enough about drawing primitives in Xcode / Cocoa Touch. Any help would be greatly appreciated. Here is my code:
- (void)drawSquare:(CGRect)rect {
    // Get the CGContext from this view
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Set the fill color
    CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
    // Define a rectangle
    CGContextAddRect(context, rect);
    // Draw it
    CGContextFillPath(context);
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        // gets the coordinates of the touch with respect to the specified view
        CGPoint touchPoint = [touch locationInView:self];
        CGRect rect = CGRectMake(touchPoint.x, touchPoint.y, 50, 50);
        [self drawSquare:rect];
    }
}
Don't try to do your drawing in your event methods. Update your list of what you want to draw, and send -setNeedsDisplay.
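As a sketch of that advice (my own, with an assumed squares ivar holding NSValue-wrapped CGRects): record each touch, request a redraw, and do all of the Core Graphics work in drawRect:, where a current context actually exists.

// Assumed ivar in the view subclass, initialized e.g. in initWithFrame::
// NSMutableArray *squares;   // NSValue-wrapped CGRects

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        CGPoint touchPoint = [touch locationInView:self];
        CGRect rect = CGRectMake(touchPoint.x, touchPoint.y, 50, 50);
        [squares addObject:[NSValue valueWithCGRect:rect]];
    }
    [self setNeedsDisplay];   // triggers drawRect:, where a context is available
}

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
    for (NSValue *value in squares) {
        CGContextFillRect(context, [value CGRectValue]);
    }
}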

Simple way of using irregular shaped buttons

I've finally got my main app released (Tap Play MMO, check it out ;-) ) and I'm now working on expanding it.
To do this I need a circle that contains four separate buttons, where the buttons are essentially quarters. I've come to the conclusion that the circular image will need to be constructed from four images, one for each quarter, but because image shapes have to be rectangular I'm going to end up with some overlap, although the overlap will be transparent.
What's the best way of getting this to work? I need something really simple. I've looked at this
http://iphonedevelopment.blogspot.com/2010/03/irregularly-shaped-uibuttons.html
before, but haven't yet succeeded in getting it to work. Can anyone offer some advice?
In case it makes any difference, I'll be deploying to an iOS 3.x framework (moving to 4.2 down the line, once 4.2 comes out for the iPad).
Skip the buttons and simply respond to touches in your view that contains the circle.
Create a CGPath for each area where you want to capture touches; when your UIView receives a touch, check whether the point lies inside one of the paths.
[Edited answer to show skeleton implementation details -- TomH]
Here's how I would approach the problem: (I haven't tested this code and the syntax may not be quite right, but this is the general idea)
1) Using PS or your favorite image-creation application, create one PNG of the quarter circles. Add it to your Xcode project.
2) Add a UIView to the UI. Set the UIView's layer's contents to the png.
self.myView = [[UIView alloc] initWithFrame:CGRectMake(10.0, 10.0, 100.0, 100.0)];
[myView.layer setContents:(id)[UIImage imageNamed:@"my.png"].CGImage];
3) Create CGPaths that describe the region in the UIView that you are interested in.
self.quadrantOnePath = CGPathCreateMutable();
CGPathMoveToPoint(self.quadrantOnePath, NULL, 50.0, 50.0);
CGPathAddLineToPoint(self.quadrantOnePath, NULL, 100.0, 50.0);
CGPathAddArc(self.quadrantOnePath, NULL, 50.0, 50.0, 50.0, 0.0, M_PI_2, 1);
CGPathCloseSubpath(self.quadrantOnePath);
// create paths for the other 3 circle quadrants too!
4) Add a UIGestureRecognizer and listen/observe for taps in the view
UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleGesture:)];
[tapRecognizer setNumberOfTapsRequired:2]; // default is 1
[self.myView addGestureRecognizer:tapRecognizer]; // attach the recognizer to the view
5) When tapRecognizer invokes its target selector
- (void)handleGesture:(UIGestureRecognizer *)recognizer {
    CGPoint touchPoint = [recognizer locationOfTouch:0 inView:self.myView];
    BOOL processTouch = CGPathContainsPoint(self.quadrantOnePath, NULL, touchPoint, true);
    if (processTouch) {
        // call your method to process the touch
    }
}
Don't forget to release everything when appropriate -- use CGPathRelease to release paths.
Another thought: If the graphic that you are using to represent your circle quadrants is simply a filled color (i.e. no fancy graphics, layer effects, etc.), you could also use the paths you created in the UIView's drawRect method to draw the quadrants too. This would address one of the failings of the approach above: there isn't a tight integration between the graphic and the paths used to check for the touches. That is, if you swap out the graphic for something different, change the size of the graphic, etc., your paths used to check for touches will be out of sync. Potentially a high maintenance piece of code.
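A rough sketch of that idea (assuming the same quadrantOnePath property as above; the fill color is a placeholder, not from the answer):

// Sketch: draw quadrant one from the same CGPath used for hit testing,
// so the graphic and the touch region can never drift apart.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor orangeColor].CGColor); // placeholder color
    CGContextAddPath(context, self.quadrantOnePath);
    CGContextFillPath(context);
    // repeat for the other three quadrant paths, each with its own fill color
}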
I can't see why overlapping is needed.
Just create 4 buttons and give each one a slice of your image.
Edit after comment:
See this great project. One of its examples does exactly what you want to do.
It works by taking the alpha value of the touched pixel into account in an overridden pointInside:withEvent:, together with a category on UIImage that adds this method (a sketch of such an override follows after the method):
- (UIColor *)colorAtPixel:(CGPoint)point {
    // Cancel if point is outside image coordinates
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, self.size.width, self.size.height), point)) {
        return nil;
    }

    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
    // Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = self.CGImage;
    NSUInteger width = self.size.width;
    NSUInteger height = self.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 1,
                                                 1,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, pointY - (CGFloat)height);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);

    // Convert color values [0..255] to floats [0.0..1.0]
    CGFloat red   = (CGFloat)pixelData[0] / 255.0f;
    CGFloat green = (CGFloat)pixelData[1] / 255.0f;
    CGFloat blue  = (CGFloat)pixelData[2] / 255.0f;
    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
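Building on that, here is a minimal sketch of what the corresponding pointInside:withEvent: override in a UIButton subclass might look like (my assumption of how the linked project wires it up, not code taken from it):

// Sketch: ignore touches that land on fully transparent pixels of the button image.
// Assumes the background image has the same size as the button's bounds,
// so the touch point can be used directly as an image coordinate.
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    if (![super pointInside:point withEvent:event]) {
        return NO;                                        // outside the button's bounds
    }
    UIImage *image = [self backgroundImageForState:UIControlStateNormal];
    UIColor *pixelColor = [image colorAtPixel:point];     // category method shown above
    CGFloat red = 0, green = 0, blue = 0, alpha = 0;
    [pixelColor getRed:&red green:&green blue:&blue alpha:&alpha];
    return alpha > 0.01;                                  // only accept reasonably opaque pixels
}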
Here's an awesome project that solves the problem of irregular shaped buttons so easily:
http://christinemorris.com/2011/06/ios-irregular-shaped-buttons/

Making a Grid in an NSView

I currently have an NSView that draws a grid pattern (essentially a guide of horizontal and vertical lines), the idea being that a user can change the spacing and the color of the grid.
The purpose of the grid is to act as a guideline for the user when lining up objects. Everything works just fine with one exception: when I resize the NSWindow by dragging the resize handle, and my grid spacing is particularly small (say 10 pixels), the drag resize becomes lethargic.
My drawRect: code for the grid is as follows:
- (void)drawRect:(NSRect)dirtyRect {
    NSRect thisViewSize = [self bounds];

    // Set the line color
    [[NSColor colorWithDeviceRed:0
                           green:(255/255.0)
                            blue:(255/255.0)
                           alpha:1] set];

    // Draw the vertical lines first
    NSBezierPath *verticalLinePath = [NSBezierPath bezierPath];
    int gridWidth = thisViewSize.size.width;
    int gridHeight = thisViewSize.size.height;
    int i = 0;
    while (i < gridWidth)
    {
        i = i + [self currentSpacing];
        NSPoint startPoint = {i, 0};
        NSPoint endPoint = {i, gridHeight};
        [verticalLinePath setLineWidth:1];
        [verticalLinePath moveToPoint:startPoint];
        [verticalLinePath lineToPoint:endPoint];
        [verticalLinePath stroke];
    }

    // Draw the horizontal lines
    NSBezierPath *horizontalLinePath = [NSBezierPath bezierPath];
    i = 0;
    while (i < gridHeight)
    {
        i = i + [self currentSpacing];
        NSPoint startPoint = {0, i};
        NSPoint endPoint = {gridWidth, i};
        [horizontalLinePath setLineWidth:1];
        [horizontalLinePath moveToPoint:startPoint];
        [horizontalLinePath lineToPoint:endPoint];
        [horizontalLinePath stroke];
    }
}
I suspect this is entirely to do with the way that I am drawing the grid and am open to suggestions on how I might better go about it.
I can see where the inefficiency is coming from: drag-resizing the NSWindow constantly calls drawRect: in this view as it resizes, and the smaller the grid spacing, the more work there is per pixel of window drag.
I was thinking of hiding the view on the resize of the window, but it doesn't feel as dynamic. I want the user experience to be very smooth without any perceived delay or flickering.
Does anyone have any ideas on a better or more efficient method to drawing the grid?
All help, as always, very much appreciated.
You've inadvertently introduced a Schlemiel (as in Schlemiel the Painter's algorithm) into your code. Every time you call moveToPoint: and lineToPoint: in your loops, you are actually adding more lines to the same path, all of which will be drawn every time you call stroke on that path.
This means that you are drawing one line the first time through, two lines the second time through, three lines the third time, and so on.
A quick fix would be to use a new path each time through the loop, or simply perform the stroke once after the loop (with thanks to Jason Coco for the idea):
path = [NSBezierPath path];
while (...)
{
    ...
    [path setLineWidth:1];
    [path moveToPoint:startPoint];
    [path lineToPoint:endPoint];
}
[path stroke];
Update: Another approach would be to avoid creating that NSBezierPath altogether, and just use the strokeLineFromPoint:toPoint: class method:
[NSBezierPath setDefaultLineWidth:1];
while (...)
{
    ...
    [NSBezierPath strokeLineFromPoint:startPoint toPoint:endPoint];
}
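Put together, here is a sketch (mine, not the answerer's) of a complete drawRect: using that class method; it reuses the question's currentSpacing accessor and its cyan stroke color:

// Sketch: the question's grid, restructured so each line is stroked exactly once
// and nothing accumulates in a shared path.
- (void)drawRect:(NSRect)dirtyRect {
    NSRect bounds = [self bounds];
    [[NSColor colorWithDeviceRed:0 green:1.0 blue:1.0 alpha:1.0] set];
    [NSBezierPath setDefaultLineWidth:1];

    CGFloat spacing = [self currentSpacing];   // assumed to return a numeric spacing
    if (spacing < 1) { return; }               // guard against a zero/negative spacing

    for (CGFloat x = spacing; x < NSWidth(bounds); x += spacing) {
        [NSBezierPath strokeLineFromPoint:NSMakePoint(x, 0)
                                  toPoint:NSMakePoint(x, NSHeight(bounds))];
    }
    for (CGFloat y = spacing; y < NSHeight(bounds); y += spacing) {
        [NSBezierPath strokeLineFromPoint:NSMakePoint(0, y)
                                  toPoint:NSMakePoint(NSWidth(bounds), y)];
    }
}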
Update #2: I did some basic benchmarking on the approaches so far. I'm using a window sized 800x600 pixels, a grid spacing of ten pixels, and I'm having cocoa redraw the window a thousand times, scaling from 800x600 to 900x700 and back again. Running on my 2GHz Core Duo Intel MacBook, I see the following times:
Original method posted in question: 206.53 seconds
Calling stroke after the loops: 16.68 seconds
New path each time through the loop: 16.68 seconds
Using strokeLineFromPoint:toPoint: 16.68 seconds
This means that the slowdown was entirely caused by the repetition, and that any of the several micro-improvements do very little to actually speed things up. This shouldn't be much of a surprise, since the actual drawing of pixels on-screen is (almost always) far more processor-intensive than simple loops and mathematical operations.
Lessons to be learned:
Hidden Schlemiels can really slow things down.
Always profile your code before optimizing, so you don't spend time on unnecessary optimizations.
You should run Instruments' CPU Sampler to determine where most of the time is being spent, and then optimize based on that information. If it's the stroke, put it outside the loop. If it's drawing the path, try offloading the rendering to the GPU. See if CALayer can help.
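To illustrate that last suggestion (a hedged sketch, not from the answer), the grid could be built once as a CGPath on a CAShapeLayer attached to a layer-backed NSView; the path still needs rebuilding when the bounds change, but Core Animation then handles the compositing during a live resize:

// Sketch: grid as a CAShapeLayer on a layer-backed NSView.
#import <QuartzCore/QuartzCore.h>

- (void)installGridLayer {
    [self setWantsLayer:YES];                    // make the view layer-backed

    CAShapeLayer *gridLayer = [CAShapeLayer layer];
    gridLayer.strokeColor = [NSColor cyanColor].CGColor;
    gridLayer.fillColor = NULL;
    gridLayer.lineWidth = 1.0;

    CGMutablePathRef path = CGPathCreateMutable();
    CGFloat spacing = [self currentSpacing];     // the question's spacing accessor
    for (CGFloat x = spacing; x < NSWidth(self.bounds); x += spacing) {
        CGPathMoveToPoint(path, NULL, x, 0);
        CGPathAddLineToPoint(path, NULL, x, NSHeight(self.bounds));
    }
    for (CGFloat y = spacing; y < NSHeight(self.bounds); y += spacing) {
        CGPathMoveToPoint(path, NULL, 0, y);
        CGPathAddLineToPoint(path, NULL, NSWidth(self.bounds), y);
    }
    gridLayer.path = path;
    CGPathRelease(path);

    [self.layer addSublayer:gridLayer];
}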
Maybe too late for the party, but someone might find this helpful. I recently needed a custom component for a customer in order to create a resizable grid overlay UIView. The following should do the job without issues, even at very small dimensions.
The code is for iPhone (UIView), but it can be ported to NSView very quickly.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClearRect(context, rect);
    CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);

    // corners
    CGContextSetLineWidth(context, 5.0);
    CGContextMoveToPoint(context, 0, 0);
    CGContextAddLineToPoint(context, 15, 0);
    CGContextMoveToPoint(context, 0, 0);
    CGContextAddLineToPoint(context, 0, 15);
    CGContextMoveToPoint(context, rect.size.width, 0);
    CGContextAddLineToPoint(context, rect.size.width - 15, 0);
    CGContextMoveToPoint(context, rect.size.width, 0);
    CGContextAddLineToPoint(context, rect.size.width, 15);
    CGContextMoveToPoint(context, 0, rect.size.height);
    CGContextAddLineToPoint(context, 15, rect.size.height);
    CGContextMoveToPoint(context, 0, rect.size.height);
    CGContextAddLineToPoint(context, 0, rect.size.height - 15);
    CGContextMoveToPoint(context, rect.size.width, rect.size.height);
    CGContextAddLineToPoint(context, rect.size.width - 15, rect.size.height);
    CGContextMoveToPoint(context, rect.size.width, rect.size.height);
    CGContextAddLineToPoint(context, rect.size.width, rect.size.height - 15);
    CGContextStrokePath(context);

    // border
    CGFloat correctRatio = 2.0;
    CGContextSetLineWidth(context, correctRatio);
    CGContextAddRect(context, rect);
    CGContextStrokePath(context);

    // grid
    CGContextSetLineWidth(context, 0.5);
    for (int i = 0; i < 4; i++) {
        // vertical
        CGPoint aPoint = CGPointMake(i * (rect.size.width / 4), 0.0);
        CGContextMoveToPoint(context, aPoint.x, aPoint.y);
        CGContextAddLineToPoint(context, aPoint.x, rect.size.height);
        CGContextStrokePath(context);
        // horizontal
        aPoint = CGPointMake(0.0, i * (rect.size.height / 4));
        CGContextMoveToPoint(context, aPoint.x, aPoint.y);
        CGContextAddLineToPoint(context, rect.size.width, aPoint.y);
        CGContextStrokePath(context);
    }
}
}