Scaling CGPatternRef dynamically - objective-c

Hey, I've got a view with some items I lay out in it, and a tiled background image. I've set its backgroundColor to a CGPatternRef. I'm trying to scale the background image in tandem with the contents of my layer. I've got the background image scaling fine; I just can't figure out how to make it update when the scale changes. Here is my draw callback:
static void drawPatternImage(void *info, CGContextRef ctx)
{
    CGImageRef image = (CGImageRef)info;
    CGContextDrawTiledImage(ctx,
                            CGRectMake(0, 0,
                                       CGImageGetWidth(image) * scaleValue,
                                       CGImageGetHeight(image) * scaleValue),
                            image);
}
The image is drawn how I expect, but when I change the scale value and then call setNeedsDisplay on my layer, the callback isn't called again and my background isn't updated. Is this the best way to approach this problem, or is there a more efficient way of drawing a scalable, tileable image in my view? I implemented - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx and called CGContextDrawTiledImage from in there, but when scaling my view the updating was very choppy.
Any guidance would be appreciated.
Thanks

The CGPattern needs to know the size of the pattern; you pass that in your call to CGPatternCreate. Just drawing at a different size in your callback isn't going to help, so you must destroy and re-create the pattern, passing the new size to CGPatternCreate.
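As a rough sketch of that re-creation (assuming an ivar patternImage holding the CGImageRef and the drawPatternImage callback from the question; the method name is illustrative, not from the question):

- (void)updateBackgroundPattern
{
    CGFloat w = CGImageGetWidth(patternImage) * scaleValue;
    CGFloat h = CGImageGetHeight(patternImage) * scaleValue;

    static const CGPatternCallbacks callbacks = { 0, &drawPatternImage, NULL };
    CGPatternRef pattern = CGPatternCreate(patternImage,
                                           CGRectMake(0, 0, w, h), // pattern cell at the new size
                                           CGAffineTransformIdentity,
                                           w, h,                   // xStep/yStep match the cell
                                           kCGPatternTilingConstantSpacing,
                                           true,                   // colored pattern
                                           &callbacks);

    CGColorSpaceRef patternSpace = CGColorSpaceCreatePattern(NULL);
    CGFloat alpha = 1.0;
    CGColorRef bg = CGColorCreateWithPattern(patternSpace, pattern, &alpha);
    self.layer.backgroundColor = bg; // the layer retains the new pattern color

    CGColorRelease(bg);
    CGColorSpaceRelease(patternSpace);
    CGPatternRelease(pattern);
    [self.layer setNeedsDisplay];
}

The important point is that the tiling geometry (cell bounds and steps) comes from CGPatternCreate, not from the callback; the callback itself can keep drawing exactly as before.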

Related

Zoom with a drawing view

I'm looking for a way to get a clean, non-pixelated zoom on a drawing view.
In my app, I have a view containing another UIView (used as a drawing view). When I draw a stroke on it, the stroke is perfect, but when I zoom the view I get a really ugly, pixelated stroke:
(image: screenshot of the pixelated stroke, originally hosted at imagup.com)
Is there a solution for getting a crisp stroke?
My UIViewController has a hierarchy like this:
UIViewController
  ScrollView
    Zoomable view (returned from viewForZoomingInScrollView:)
      Image view
      Drawing view
Thanks a lot !
Regards,
Sébastien ;)
I'm in the process of making a vector drawing application and let me tell you, this is NOT a trivial task to do correctly and requires quite a bit of work.
Some issues to keep in mind:
- If you are not using vector graphics (CGPaths, for example, are vectors), you will NOT be able to remove the pixelation. A UIImage, for example, only has so much resolution.
- In order to get your drawing to not look pixelated, you are going to have to redraw everything. If you have a lot of drawing, this can be an expensive task to perform.
- Having good resolution WHILE zooming is nearly impossible, because it would require an excessively large context, and your drawing would likely exceed the capabilities of the device.
I use Core Graphics to do my drawing, so the way I solved this issue was by allocating and managing multiple CGContexts and using them as buffers. I have one context that is ALWAYS kept at my least zoomed level (scale factor of 1). That context is drawn into at all times, so that when unzooming completely, no time is spent redrawing since it is already done. Another context is used solely for drawing when zoomed. When not zoomed, that context is ignored (since it will have to be redrawn based on the new zoom level anyway). A high-level algorithm for how I perform my zooming is as follows:
- (IBAction)handlePinchGesture:(UIGestureRecognizer *)sender
{
    if (sender.state == UIGestureRecognizerStateBegan)
    {
        // draw an image from the unzoomedContext into my current view
        // set the scale transformation of my current view to be equal to "currentZoom",
        // a property of the view that keeps track of the actual zoom level
    }
    else if (sender.state == UIGestureRecognizerStateChanged)
    {
        // determine the new zoom level and transform the current view,
        // keeping track in the currentZoom property
        // zooming will be pixelated
    }
    else if (sender.state == UIGestureRecognizerStateEnded || sender.state == UIGestureRecognizerStateCancelled)
    {
        if (currentZoom == 1.0)
        {
            // you are done because the unzoomedContext image is already drawn into the view!
        }
        else
        {
            // you are zoomed in and will have to do special drawing:
            // perform drawing into your zoomedContext
            // scale the zoomedContext
            // set the scale of your current view to be equal to 1.0
            // draw the zoomedContext into the current view. It will not be pixelated!
            // any drawing done while zoomed needs to be "scaled" based on your current
            // zoom and translation amounts and drawn into both contexts
        }
    }
}
This gets even more complicated for me because I have additional buffers behind those buffers, since drawing images of my paths is much faster than drawing the paths themselves when there is a lot of drawing.
Between managing multiple contexts, tweaking your code to draw efficiently into multiple contexts, following proper OOD, scaling new drawing based on your current zoom and translation, etc, this is a mountain of a task. Hopefully this either motivates you and puts you on the right track, or you decide that getting rid of that pixelation isn't worth the effort :)
I had the same problem and found a solution: tell the view to use a CATiledLayer as its backing layer, then tell the view how many levels of zoom it supports. This worked for me; my drawing methods get called automatically when the (parent) view is zoomed.
A short explanation of levelsOfDetail and levelsOfDetailBias:
- levelsOfDetail determines how many zoom levels there are in total.
- levelsOfDetailBias determines how many of those are zoomed in.
So in my example I have 4 zoom levels: 3 are zoomed in and 1 is the non-zoomed level, meaning my view only redraws when zooming in.
@implementation MyZoomableView

+ (Class)layerClass
{
    return [CATiledLayer class];
}

- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        ((CATiledLayer *)self.layer).levelsOfDetail = 4;
        ((CATiledLayer *)self.layer).levelsOfDetailBias = 3;
    }
    return self;
}

@end
Use [self setContentScaleFactor:scale]; in your scrollViewDidEndZooming: delegate method.
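As a sketch of that delegate method (drawingView is an assumed outlet for the zoomable drawing view):

- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
                       withView:(UIView *)view
                        atScale:(CGFloat)scale
{
    // Re-render the drawing view at the new resolution.
    // (On Retina devices you may want scale * [UIScreen mainScreen].scale.)
    [self.drawingView setContentScaleFactor:scale];
}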

Blending two images and drawing resized image from two UIImageViews

I have two ImageViews, one called imageView and the other called subView (which is a subview of imageView).
I want to blend the images on these views together, with the user being able to adjust the alpha of the blend with a pan gesture. My code works, but right now it is slow because we redraw the image every time the pan gesture moves. Is there a faster/more efficient way of doing this?
BONUS Q: I want to allow my subView image to be drawn zoomed in. Currently I've set my subView to UIViewContentModeCenter; however, I can't seem to draw a zoomed-in part of my image with this content mode. Is there any way around this?
My drawRect::
- (void)drawRect:(CGRect)rect
{
    float xCenter = self.center.x - self.currentImage1.size.width / 2.0;
    float yCenter = self.center.y - self.currentImage1.size.height / 2.0;

    subView.alpha = self.blendAmount; // Customize the opacity of the top image.

    UIGraphicsBeginImageContext(self.currentImage1.size);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(c, kCGBlendModeColorBurn);
    [imageView.layer renderInContext:c];
    self.blendedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self.blendedImage drawAtPoint:CGPointMake(xCenter, yCenter)];
}
You need to use the GPU for image processing, which is far faster than the CPU rendering you're doing right now.
You can use the Core Image framework, which is very fast and easy to use but requires iOS 5, or you can use OpenGL directly, but for that you need to be experienced and know something about OpenGL shading.
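As a minimal Core Image sketch of the same color-burn blend done on the GPU (bottomImage, topImage, and ciContext are assumed properties; create the CIContext once and reuse it, since building one per pan event would throw the speedup away):

#import <CoreImage/CoreImage.h>

- (UIImage *)blendedImageWithAlpha:(CGFloat)alpha
{
    CIImage *background = [CIImage imageWithCGImage:self.bottomImage.CGImage];
    CIImage *foreground = [CIImage imageWithCGImage:self.topImage.CGImage];

    // Fade the top image to match the pan-controlled blend amount
    CIFilter *fade = [CIFilter filterWithName:@"CIColorMatrix"];
    [fade setValue:foreground forKey:kCIInputImageKey];
    [fade setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:alpha] forKey:@"inputAVector"];

    CIFilter *burn = [CIFilter filterWithName:@"CIColorBurnBlendMode"];
    [burn setValue:fade.outputImage forKey:kCIInputImageKey];
    [burn setValue:background forKey:kCIInputBackgroundImageKey];

    CIImage *result = burn.outputImage;
    CGImageRef cg = [self.ciContext createCGImage:result fromRect:result.extent];
    UIImage *output = [UIImage imageWithCGImage:cg];
    CGImageRelease(cg);
    return output;
}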

iOS OpenGL layer and UIScrollView

I'm creating a drawing app on the iPad where the user can draw and scroll through the drawing. (Think of a canvas 4000 pixels wide with a view port width of 1024) At the moment, I'm using OpenGL for the drawing, and with a width of 1024, it works great. When I change the frame size of the UIView to 4000, I get "failed to make complete framebuffer object 8cd6". When I reduced it to a width of 2000, I ended up getting "wacky" results. I know I can manipulate the frame correctly, as having a frame width of 500 creates the correct result.
I was also thinking of leaving the width 1024 and moving the camera of the OpenGL layer, but how would that work with the UIScrollView that I setup? So I'm unsure on what to do at the moment. Any advice?
Thanks in advance
P.S. The code is largely based on the GLPaint sample Apple provides here
I think you'd be best off with the scheme you suggest towards the end — keeping the OpenGL view static and outside of the scroll view, and moving its contents so as to match the movement of the scroll view.
Assuming you're using a GLKView, implement your glkView:drawInRect: so that it gets the contentOffset (and, probably, the bounds) properties from your scroll view and draws appropriately. E.g. (pretending you're using GLES 1.0 just because the matrix manipulation methods are so well known):
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // displayArea will be the area the scroll view is
    // currently displaying; taking just the bounds would
    // likely be fine too
    CGRect displayArea;
    displayArea.origin = scrollView.contentOffset;
    displayArea.size = scrollView.bounds.size;

    // assuming (0, 0) in the GL view is in the centre,
    // we'll adjust things so that it's in the corner ala UIKit
    CGPoint centre = CGPointMake(
        displayArea.origin.x + displayArea.size.width * 0.5f,
        displayArea.origin.y + displayArea.size.height * 0.5f);

    glPushMatrix();

    // apply the scroll as per the scroll view
    // so that its centre is aligned with our centre
    glTranslatef(-centre.x, -centre.y, -1);

    /* rest of drawing here */

    glPopMatrix();
}
Then connect yourself as a delegate to the scrollview and just perform:
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    [glView setNeedsDisplay];
}
to ensure the GL view redraws whenever the scroll view is scrolled.
I would strongly recommend doing your zooming and panning with your OpenGL camera instead of trying to drive it from a UIScrollView. It will be a little more work to set up, but it's ultimately the way to go IMHO.
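If you go that route, a sketch of a camera-driven zoom (zoomScale is an assumed ivar that the draw method would apply with glScalef before the translate shown earlier):

- (void)handlePinch:(UIPinchGestureRecognizer *)pinch
{
    // accumulate and clamp the zoom; the 1x-8x range is arbitrary
    zoomScale = MAX(1.0f, MIN(zoomScale * pinch.scale, 8.0f));
    pinch.scale = 1.0f; // reset so the recognizer reports incremental scale
    [glView setNeedsDisplay];
}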

Using NSImageView to display multiple images in quick succession

I have an application where, in one window, there is an NSImageView. The user should be able to drag and drop ANY FILE/FOLDER (not only images) into the image view, so I subclassed NSImageView to add support for those types.
The reason why I chose an NSImageView instead of a normal view is because I also wanted to display an animation (say an arrow pointing downwards and going up and down) when the user hovers over with files ready to drop. My question is this: what would be the best way (most efficient, quickest, least CPU usage, etc) to do this?
In fact, I have already done it, but what made me ask this question is that when I set the images to change at an interval below 0.02 s, it starts to lag. Here is how I did it:
In the NSImageView subclass:
- have an ivar: NSTimer *animTimer;
- override awakeFromNib, calling [super awakeFromNib] and loading the images into an array (about 45 images) using NSImage
- whenever the user enters with files, start animTimer with frequency = 0.025 (less and it lags) and a selector that sets the next image in the array (called drawNextImage)
- whenever the user exits or ends the drag and drop, call [animTimer invalidate] to stop updating images
Here is how I set the image in the subclass:
- (void)drawNextImage
{
    currentImageIndex++; // ivar; kNumberDNDImages is a constant defined as 46
    if (currentImageIndex >= kNumberDNDImages) { currentImageIndex = 0; }
    [super setImage:[imagesArray objectAtIndex:currentImageIndex]]; // imagesArray is an ivar
}
So, how would I make this quick enough? I'd like the interval to be about 0.01 s, but anything below 0.025 lags, so that is what I have set for the moment. Oh, and my images are the correct size (within a pixel or so) and they are .png files (I need the transparency; JPEGs, for example, won't do).
EDIT:
I have tried to follow NSResponder's suggestion, and have updated my method to this:
- (void)drawNextImage
{
    currentImageIndex++;
    if (currentImageIndex >= kNumberDNDImages) { currentImageIndex = 0; }

    NSRect smallImgRect;
    smallImgRect.origin = NSMakePoint(kSmallImageWidth * currentImageIndex,
                                      [self.bigDNDImage size].height); // Up left corner - ??
    smallImgRect.size = NSMakeSize(kSmallImageWidth, [self.bigDNDImage size].height);

    // Bottom left corner - ??
    NSPoint imgPoint = NSMakePoint(([self bounds].size.width - kSmallImageWidth) / 2, 0);
    [bigDNDImage drawAtPoint:imgPoint fromRect:smallImgRect operation:NSCompositeCopy fraction:1];
}
I have also moved this method and the other drag and drop methods from the NSImageView subclass to an NSView subclass I already had. Everything is exactly the same, except for the superclass and this method. I also modified some constants.
In my early testing of this, I got some error/warning messages that didn't stop execution, talking about NSGraphicsContext or something. They have disappeared now, but just so you know: I have absolutely no idea why they were appearing or what they mean. If they ever appear again I'll worry about them, not now :)
EDIT 2:
This is what I'm doing now:
- (void)drawNextImage
{
    currentImageIndex++;
    if (currentImageIndex >= kNumberDNDImages) { currentImageIndex = 0; }
    [self drawCurrentImage];
}

- (void)drawCurrentImage
{
    NSRect smallImgRect;
    smallImgRect.origin = NSMakePoint(kSmallImageWidth * currentImageIndex, 0); // Bottom left, for sure
    smallImgRect.size = NSMakeSize(kSmallImageWidth, [self.bigDNDImage size].height);

    // Bottom left as well
    NSPoint imgPoint = NSMakePoint(([self bounds].size.width - kSmallImageWidth) / 2, 0);
    [bigDNDImage drawAtPoint:imgPoint fromRect:smallImgRect operation:NSCompositeCopy fraction:1];
}
And the catch here is to call drawCurrentImage when drawRect: is called (see, it actually was easier to solve than I thought).
Now, I must say I haven't tried this with the composite image, because I couldn't find a good, quick way to merge 40+ images the way I wanted (one next to the other). But for those interested, I modified this to do the same thing as my NSImageView subclass (reading 40+ images from an array and displaying them), and I found no speed bump: NSView is as laggy below 0.025 as NSImageView. I also found some problems when using Core Animation (the image is drawn in weird places instead of where I tell it to) and some warnings about NSGraphicsContext, which I don't know how to solve at all (I'm a complete noob when it comes to drawing with Objective-C's tools). So for the time being I'm using NSImageView, unless I find a way to merge all those images and try it with NSView.
Core Animation would probably be quickest, since it'll do everything on the GPU. Create a layer for each image, setting each layer's contents to the CGImage you can make from each image, add them all as sublayers of a single top-level layer, host the top-level layer in a plain NSView, and then just toggle each image layer's hidden property in turn.
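As a sketch of that setup (imagesArray and currentImageIndex are the ivars from the question; frameLayers and hostView are illustrative names, with hostView a layer-backed NSView, i.e. [hostView setWantsLayer:YES], and QuartzCore linked):

- (void)setUpFrameLayers
{
    self.frameLayers = [NSMutableArray array];
    for (NSImage *image in self.imagesArray) {
        CALayer *layer = [CALayer layer];
        layer.frame = self.hostView.layer.bounds;
        layer.contents = (id)[image CGImageForProposedRect:NULL context:nil hints:nil];
        layer.hidden = YES;
        [self.hostView.layer addSublayer:layer];
        [self.frameLayers addObject:layer];
    }
}

- (void)showNextFrame // called from the timer instead of drawNextImage
{
    [CATransaction begin];
    [CATransaction setDisableActions:YES]; // suppress the implicit fade between frames
    [[self.frameLayers objectAtIndex:currentImageIndex] setHidden:YES];
    currentImageIndex = (currentImageIndex + 1) % [self.frameLayers count];
    [[self.frameLayers objectAtIndex:currentImageIndex] setHidden:NO];
    [CATransaction commit];
}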
I'd probably draw all of the component images into one long image, and draw segments into a view using -drawAtPoint:fromRect:operation:fraction:. I'm sure you could make it faster than that by resorting to OpenGL, though.
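For the long-image approach, a sketch of composing the frames side by side once, up front, so -drawAtPoint:fromRect:operation:fraction: can then blit one segment per tick (this also answers the "merge 40+ images" problem from the edit above; the method name is illustrative):

- (NSImage *)stripFromImages:(NSArray *)images
{
    NSSize frameSize = [[images objectAtIndex:0] size];
    NSImage *strip = [[NSImage alloc] initWithSize:
        NSMakeSize(frameSize.width * [images count], frameSize.height)];

    [strip lockFocus];
    for (NSUInteger i = 0; i < [images count]; i++) {
        // each frame lands at x = width * index, matching the fromRect math above
        [[images objectAtIndex:i] drawAtPoint:NSMakePoint(frameSize.width * i, 0)
                                     fromRect:NSZeroRect
                                    operation:NSCompositeSourceOver
                                     fraction:1.0];
    }
    [strip unlockFocus];
    return strip;
}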

Tracking a Core Animation animation

I have two circles which move around the screen. The circles are both UIViews which contain other UIViews. The area outside each circle is transparent.
I have written a function to create a CGPath which connects the two circles with a quadrilateral shape. I fill this path in a transparent CALayer which spans the entire screen. Since the layer is behind the two circular UIViews, it appears to connect them.
Finally, the two UIViews are animated using Core Animation. The position and size of both circles change during this animation.
So far the only method that I have had any success with is to interrupt the animation at regular intervals using an NSTimer, then recompute and draw the beam based on the location of the circle's presentationLayer. However, the quadrilateral lags behind the circles when the animation speeds up.
Is there a better way to accomplish this using Core Animation? Or should I avoid Core Animation and implement my own animation using an NSTimer?
I faced a similar problem. I used layers instead of views for the animation. You could try something like this.
Draw each element as a CALayer and include them as sublayers of your container UIView's layer. UIViews are easier to animate, but you will have less control. Note that for any view you can get its layer with [view layer].
Create a custom sublayer for your quadrilateral. This layer should have one or more properties you want to animate; let's call one such property "customprop". Because it is a custom layer, you want to redraw on each frame of the animation, so for the properties you plan to animate, your custom layer class should return YES from needsDisplayForKey:. That way you ensure -(void)drawInContext:(CGContextRef)theContext gets called on every frame.
Put all animations (both circles and the quad) in the same transaction.
For the circles you can probably use CALayers and set their contents, if it is an image, the standard way:
layer.contents = (id)[UIImage imageNamed:@"circle_image.png"].CGImage;
Now, for the quad layer, subclass CALayer and implement it this way:
- (void)drawInContext:(CGContextRef)theContext
{
    // Custom draw code here
}

+ (BOOL)needsDisplayForKey:(NSString *)key
{
    if ([key isEqualToString:@"customprop"])
        return YES;
    return [super needsDisplayForKey:key];
}
The transaction would look like:
[CATransaction begin];

CABasicAnimation *theAnimation = [CABasicAnimation animationWithKeyPath:@"customprop"];
theAnimation.toValue = [NSValue valueWithCGPoint:CGPointMake(1000, 1000)];
theAnimation.duration = 1.0;
theAnimation.repeatCount = 4;
theAnimation.autoreverses = YES;
theAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseIn];
theAnimation.delegate = self;
[lay addAnimation:theAnimation forKey:@"selecting"];

[CATransaction setValue:[NSNumber numberWithFloat:10.0f]
                 forKey:kCATransactionAnimationDuration];
circ1.position = CGPointMake(1000, 1000);
circ2.position = CGPointMake(1000, 1000);

[CATransaction commit];
Now all the draw routines will happen at the same time. Make sure your drawInContext: implementation is fast. Otherwise the animation will lag.
After adding each sublayer to the UIView's layer, remember to call [layer setNeedsDisplay]. It does not get called automatically.
I know this is a bit complicated. However, the resulting animations are better than using an NSTimer and redrawing on each call.
If you need to find the current visible state of the layers, you can call -presentationLayer on the CALayer in question, and this will give you a layer that approximates the one used for rendering. Note I said approximates - it's not guaranteed to be fully accurate. However it may be good enough for your purposes.
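For example (circleView here stands for whichever view hosts a circle):

CALayer *live = (CALayer *)[circleView.layer presentationLayer];
CGPoint currentCentre = live.position; // the approximate on-screen position right now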