Saving what is currently drawn into a view as an image - cocoa-touch

I am creating a drawing app and have run into a problem. I have an array of curves; each curve keeps an array of points, and each point keeps its color, thickness, and coords.
When drawRect: is called, I redraw all the curves from this array. The problem is that this array is getting huge, and the app slows down.
My idea is to, at the end of each redrawing, save the current context as an image, free the curves array, and at the next redraw, use that image as the background. Ultimately, I don't need the curves array at all, just an array of the curves in progress. Is this possible? Or maybe there is another way to do it?

You can render the corresponding layer of your view into an image and reuse that image on the next iteration. It's easiest in this case if yourViewToSaveAsImage is a UIImageView, since you can then assign the rendered image straight back to it:
UIView *view = yourViewToSaveAsImage;
UIGraphicsBeginImageContext(view.bounds.size);
[view.layer renderInContext:UIGraphicsGetCurrentContext()]; // render the layer's current contents into the new context
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
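As a sketch of how the snapshot might then be used (the backgroundImageView and finishedCurves names are assumptions, not part of the question's code), you could assign the rendered image to an image view sitting behind the drawing layer and then drop the completed curves; UIGraphicsBeginImageContextWithOptions with a scale of 0.0 avoids blurry output on Retina screens:

// Hedged sketch: snapshot the view, keep it as the background, and free the finished curves.
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0); // 0.0 = use the screen's scale
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

self.backgroundImageView.image = snapshot;   // assumed UIImageView behind the drawing view
[self.finishedCurves removeAllObjects];      // assumed mutable array of completed curves
[self setNeedsDisplay];                      // next drawRect: only replays curves in progress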

A path contains only information on points, so if you want to track variations in rendering you need a separate list of objects.
I achieved this by creating an NSArray* of my own custom objects that included fields such as: an NSBezierPath* (to capture the points and simplify drawing the segment), a CGPathDrawingMode to use for the segment, and information on the color and line size.
Then when I draw, I iterate over the elements of the array, set the context's current colors, and call either stroke or fill on the current element's NSBezierPath* depending on how I configured that segment.
I would also like to know if there's a faster way, but this approach certainly works well.
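On iOS the same idea would use UIBezierPath instead of NSBezierPath. A minimal sketch of such a segment object, with made-up names and a BOOL standing in for the drawing mode:

#import <UIKit/UIKit.h>

// Illustrative segment object; property names are assumptions.
@interface PathSegment : NSObject
@property (nonatomic, strong) UIBezierPath *path;   // the points for this segment
@property (nonatomic, strong) UIColor *strokeColor;
@property (nonatomic, strong) UIColor *fillColor;
@property (nonatomic, assign) CGFloat lineWidth;
@property (nonatomic, assign) BOOL filled;          // stands in for CGPathDrawingMode
@end

@implementation PathSegment
@end

// In drawRect:, iterate the array and render each segment.
for (PathSegment *segment in self.segments) {
    segment.path.lineWidth = segment.lineWidth;
    if (segment.filled) {
        [segment.fillColor setFill];
        [segment.path fill];
    } else {
        [segment.strokeColor setStroke];
        [segment.path stroke];
    }
}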


How to find the real position on the image on iOS?

Here is my setup: I have a layer view that detects the user's touches and an image view that shows the image. The layer view sits on top of the image view. The image view's content mode is aspect fit, so the image keeps its ratio. If I touch (100, 240) in the layer view, that is a layer-view coordinate, not the image's coordinate. I would like to know how to convert the layer view's coordinates into the image's coordinates. In this example the image might be 180x180, so the corresponding point in the image would be (60, 90).
Thanks.
If I'm understanding this question correctly, you want to take a point, which is currently in relation to the layer's coordinate system, and convert it to the image view's coordinate system?
In that case, there are a couple of ways to do this.
Easiest is to use convertPoint:fromView: or convertPoint:toView:
CGPoint imageViewTouchPoint = [layerView convertPoint:touchPoint fromView:imageView];
CGPoint imageViewTouchPoint = [imageView convertPoint:touchPoint toView:layerView];
Either one should work.
EDIT - I realize now that this is only if the UIImageView has the same frame as the UIImage, which you said it might not, due to the UIViewContentModeScaleAspectFit property.
In this case, unless I'm mistaken, the image frame is calculated inside the UIImageView drawRect: method and isn't a property that gets set. This means you'll have to calculate this on your own.
Definitely get the imageViewTouchPoint from one of the methods above (just in case you want to use the same logic on a UIImageView which isn't the full screen size).
You will then need to calculate the scaled image frame. There are a couple of ways to do this. Some people go brute force and calculate it manually: determine which side of the image is longer, decide which side needs to be scaled, then work out the origin by centering the image, subtracting the image's sides from the image view's, and dividing by two.
I like to write as little code as possible if it's unnecessary, even if it means importing a framework. If you import AVFoundation, you get a function, AVMakeRectWithAspectRatioInsideRect, which calculates the scaled rectangle for you in one line of code.
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size, imageView.bounds); // bounds, since the touch point is in the image view's own coordinate space
Whichever method you use, you will then simply translate your touched point with the scaled image origin:
CGPoint imageTouchPoint = CGPointMake(imageViewTouchPoint.x - imageRect.origin.x, imageViewTouchPoint.y - imageRect.origin.y);
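Putting the pieces together, and adding the extra division by the aspect-fit scale that the question's 180x180 example seems to call for, a sketch (variable names as above, image being the UIImage shown in the image view) might be:

#import <AVFoundation/AVFoundation.h>

// imageViewTouchPoint is assumed to already be in the image view's own coordinate space
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size, imageView.bounds);

// point relative to the visible (aspect-fit) image
CGPoint pointInFittedImage = CGPointMake(imageViewTouchPoint.x - imageRect.origin.x,
                                         imageViewTouchPoint.y - imageRect.origin.y);

// scale back up to the image's own coordinates (e.g. the 180x180 case)
CGFloat scale = image.size.width / imageRect.size.width;
CGPoint imageTouchPoint = CGPointMake(pointInFittedImage.x * scale,
                                      pointInFittedImage.y * scale);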
You have to do the math yourself. Calculate the aspect ratio of your image and compare with the aspect ratio of the image view's bounds.
Look at this question: How to Get Image position in ImageView
After searching more, I found a hack:
// resizedImageWithContentMode:bounds:interpolationQuality: comes from a third-party UIImage resizing category, not from UIKit
CGSize imageInViewSize = [photo resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:imageView.bounds.size interpolationQuality:kCGInterpolationNone].size;
CGRect overlayRect = CGRectMake((imageView.frame.size.width - imageInViewSize.width) / 2,
                                (imageView.frame.size.height - imageInViewSize.height) / 2,
                                imageInViewSize.width,
                                imageInViewSize.height);
NSLog(@"Frame of image inside UIImageView: Left:%f Top:%f Width:%f Height:%f \n", overlayRect.origin.x, overlayRect.origin.y, overlayRect.size.width, overlayRect.size.height);

Merge two UIBezierPath points

I want to be able to select some area from this image, and change the color of the selected area.
To do this, I thought of using CALayer and UIBezierPath.
I cleared the colored areas from the image, then took each area's points and drew a UIBezierPath beneath the image.
I have 3 CALayers, one for each area; each CALayer has a UIBezierPath with predefined points.
When the user taps a layer, the selected layer is shown without filling its UIBezierPath, just with a border around it. The result looks like this:
I added a UIView over the image with an opacity of 0.6 and redrew all the CALayers on it. All the layers are hidden in the new UIView.
Everything is working great. The next step is to merge the selected areas: I took the points from the first area, added them to the points of the second area, and created a new UIBezierPath from the combined points.
My problem is that the result is wrong:
How do I merge UIBezierPaths with the correct point order? Is there a better way to accomplish something like this without using UIBezierPath?
From looking at the image above, the path is wrong because the sequence of points is not followed, which pretty much screws up your path. I don't think a Bezier path is the right tool for this in the first place, since you have rectangular, point-to-point connections; what you really have is a polygon rather than a Bezier path, although UIKit bundles all of this into UIBezierPath (non-optimal naming, if you ask me).
The tricky part is finding out where the two shapes actually touch, adding the points in the same sequence as before, then tearing up the vertical lines in the middle and connecting the path to the other structure.
Another alternative is to use bitmaps: simply union the bitmaps and derive a new shape from the result. It largely depends on how your base data is represented and managed. You could also keep the two shapes and join them in a meta object so they are drawn together.
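As a sketch of that last "just draw them together" option: appending one UIBezierPath to the other and filling with the non-zero winding rule fills the union of the two overlapping areas visually, assuming both paths wind in the same direction. Note that the stroke will still trace both original outlines, so a true geometric union still needs the intersection work described above. The path, layer, and view names here are illustrative:

// Draw two overlapping selections as one filled region (names are assumptions).
UIBezierPath *combined = [UIBezierPath bezierPath];
[combined appendPath:firstAreaPath];
[combined appendPath:secondAreaPath];
combined.usesEvenOddFillRule = NO;   // non-zero winding: the overlap stays filled, not punched out

CAShapeLayer *selectionLayer = [CAShapeLayer layer];
selectionLayer.path = combined.CGPath;
selectionLayer.fillColor = [UIColor colorWithRed:0.0 green:0.5 blue:1.0 alpha:0.6].CGColor;
selectionLayer.strokeColor = [UIColor blueColor].CGColor;
[overlayView.layer addSublayer:selectionLayer];   // the semi-transparent view over the image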

NSImage from two NSImages

I have a rectangular NSImage A, and I want to scale it and embed it into a square transparent image B while keeping A's aspect ratio. In the end I'll get a square image with the rectangle inside it.
How can I compose that image? That is, how can I draw an NSImage over another NSImage and save the resulting image?
I've been reading about clipping an NSImage inside a bezier path, but I need to keep the ratio instead of filling the bezier square.
I hope you understand what I want.
Thanks.
The 'Cocoa Drawing Guide' has a section called 'Drawing to an Image'. From that documentation:
It is possible to create images programmatically by locking focus on an NSImage object and drawing other images or paths into the image context. This technique is most useful for creating images that you intend to render to the screen, although you can also save the resulting image data to a file.
There is example code there.
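A rough sketch of that approach for this case: lock focus on a new square image and draw A into an aspect-fit rect. The image name, square size, and save step are illustrative, not part of the documentation's sample:

NSImage *a = [NSImage imageNamed:@"A"];            // the rectangular source image (name is an assumption)
CGFloat side = 512.0;                              // the square size you want for B (illustrative)
NSImage *b = [[NSImage alloc] initWithSize:NSMakeSize(side, side)];

// Aspect-fit rect for A inside B's square, transparent canvas.
CGFloat scale = MIN(side / a.size.width, side / a.size.height);
NSSize scaled = NSMakeSize(a.size.width * scale, a.size.height * scale);
NSRect target = NSMakeRect((side - scaled.width) / 2.0,
                           (side - scaled.height) / 2.0,
                           scaled.width, scaled.height);

[b lockFocus];
[a drawInRect:target
     fromRect:NSZeroRect                           // NSZeroRect means "the whole source image"
    operation:NSCompositingOperationSourceOver
     fraction:1.0];
[b unlockFocus];

// To save the result, grab a bitmap representation (e.g. -TIFFRepresentation plus
// NSBitmapImageRep's -representationUsingType:properties:) and write that data to disk.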

Excluding Masked Areas From A "Paint Bucket" Fill On A UIImage

I'm building an iPhone application which requires some images to be built up in a very specific way. The problem is quite difficult to explain, so below is a diagram of what I'm trying to achieve. Basically, I want to "paint bucket" fill onto a UIImage (which will be a PNG). I assume the term "paint bucket" here equates to a tint?
After that, I want to create a mask object (which will be updatable and may consist of multiple shapes) and then when I apply another tint/paint bucket to the original image, the areas covered by the built-up mask will be unaffected. It's basically like wrapping some tape around an object, painting it and then removing the tape. As promised, here's a diagram of what I'm after. It's important to note that although I'm using a cross here, eventually the patterns may be quite complex and will have to be inside PNGs and not created in code. Thanks for any help you might be able to give!
Create your cross (or whatever shape you want) as a black image on a white background. Apply it to your graphics context using CGContextClipToMask. Then use CGContextFillRect to fill the bounds of your context with blue. Something like this should do it:
CGRect bounds = your context bounds;
CGContextRef gc = your context;
UIImage *cross = [UIImage imageNamed:@"cross"];
CGContextSaveGState(gc); {
    CGContextClipToMask(gc, bounds, cross.CGImage);   // black areas of the mask are protected from the fill
    CGContextSetFillColorWithColor(gc, [UIColor blueColor].CGColor);
    CGContextFillRect(gc, bounds);
} CGContextRestoreGState(gc);
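Putting that together into something that produces a new UIImage, as a hedged sketch: the file names are placeholders, the multiply blend mode is just one way to make the fill behave like a tint rather than a flat color, the mask PNG should ideally be grayscale without alpha, and the mask is applied in Core Graphics coordinates, so it can come out vertically flipped relative to UIKit drawing.

// Sketch: tint everything except the masked (black) areas of the cross image.
UIImage *base = [UIImage imageNamed:@"base"];     // placeholder name
UIImage *cross = [UIImage imageNamed:@"cross"];   // black shape on white, same size as base
CGRect bounds = CGRectMake(0, 0, base.size.width, base.size.height);

UIGraphicsBeginImageContextWithOptions(base.size, NO, base.scale);
CGContextRef gc = UIGraphicsGetCurrentContext();

[base drawInRect:bounds];                          // start from the original image

CGContextSaveGState(gc); {
    CGContextClipToMask(gc, bounds, cross.CGImage);    // black areas stay untouched
    CGContextSetBlendMode(gc, kCGBlendModeMultiply);   // tint instead of flat fill (optional)
    CGContextSetFillColorWithColor(gc, [UIColor blueColor].CGColor);
    CGContextFillRect(gc, bounds);
} CGContextRestoreGState(gc);

UIImage *tinted = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();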

How to generate graphs using integer values on iPhone

I want to show a graph/bar chart on the iPhone. How do I do this without custom APIs?
You may want to investigate the Core Plot project [code.google.com]. Core Plot was the subject of this year's scientific coding project at WWDC and is already pretty usable for some cases. From its inception, Core Plot was intended for both OS X and iPhone use. The source distribution (there hasn't been a binary release yet) comes with both OS X and iPhone example applications, and there's info on the project wiki for using it as a library in an iPhone app. Here's an example of its current plotting capabilities.
Write your own. It's not easy; I'm in the process of doing the same thing right now. Here's how I'm doing it:
First, ignore any desire you may have to try using a UIScrollView if you want to allow zooming. It's totally not worth it.
Second, create something like a GraphElement protocol. I have a hierarchy that looks something like this:
GraphElement
    GraphPathElement
        GraphDataElement
            GraphDataSupplierElement
GraphElement contains the basic necessary methods for a graph element, including how to draw, a maximum width (for zooming in), whether a point is within that element (for touches) and the standard touchBegan, touchMoved, and touchEnded functions.
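A hedged sketch of what such a protocol could declare; the exact selector names are guesses at the shape of it, not the poster's actual code:

#import <UIKit/UIKit.h>

// Illustrative protocol; method names are assumptions.
@protocol GraphElement <NSObject>
- (void)drawInContext:(CGContextRef)context;
- (CGFloat)maximumWidth;                      // upper bound for zooming in
- (BOOL)containsPoint:(CGPoint)point;         // hit-testing for touches
- (void)setFrame:(CGRect)frame;               // so elements can be resized when the graph view scales
- (void)touchBeganAtPoint:(CGPoint)point;
- (void)touchMovedToPoint:(CGPoint)point;
- (void)touchEndedAtPoint:(CGPoint)point;
@end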
GraphPathElement contains a CGPath, a line color and width, a fill color and a drawing mode. Whenever it's prompted to draw, it simply adds the path to the context, sets the colors and line width, and draws the path with the given drawing mode.
GraphDataElement, as a subclass of GraphPathElement, takes in a set of data in x-y coordinates, a graph type (bar or line), a frame, and a bounds. The frame is the actual size of the created output CGPath. The bounds is the size of the data in input coordinates. Essentially, it lets you scale the data to the screen size.
It creates a graph by first calculating an affine transform to transform the bounds to the frame, then it loops through each point and adds it as data to a path, applying that transform to the point before adding it. How it adds data depends on the type.
If it's a bar graph, it creates a rectangle of width 0, origin at (x,frame.size.height-y), and height=y. Then it "insets" the graph by -3 pixels horizontally, and adds that to the path.
If it's a line graph, it's much simpler. It just moves to the first point, then for each other point, it adds a line to that point, adds a circle in a rect around that point, then moves back to that point to go on to the next point.
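A rough sketch of that bounds-to-frame transform and the path construction, following the steps above (all names, the marker radius, and the C array of points are illustrative):

// Map data-space bounds onto the on-screen frame, then build the path.
CGAffineTransform t = CGAffineTransformMakeTranslation(frame.origin.x, frame.origin.y);
t = CGAffineTransformScale(t, frame.size.width / bounds.size.width,
                              frame.size.height / bounds.size.height);
t = CGAffineTransformTranslate(t, -bounds.origin.x, -bounds.origin.y);

CGMutablePathRef path = CGPathCreateMutable();
for (NSUInteger i = 0; i < pointCount; i++) {          // dataPoints: C array of CGPoint in data coordinates
    CGPoint p = CGPointApplyAffineTransform(dataPoints[i], t);
    if (isBarGraph) {
        // zero-width bar at (x, frame height - y), widened by a negative horizontal inset
        CGRect bar = CGRectMake(p.x, frame.size.height - p.y, 0.0, p.y);
        CGPathAddRect(path, NULL, CGRectInset(bar, -3.0, 0.0));
    } else {
        if (i == 0) {
            CGPathMoveToPoint(path, NULL, p.x, p.y);
        } else {
            CGPathAddLineToPoint(path, NULL, p.x, p.y);
        }
        CGPathAddEllipseInRect(path, NULL, CGRectMake(p.x - 2.0, p.y - 2.0, 4.0, 4.0)); // marker circle
        CGPathMoveToPoint(path, NULL, p.x, p.y);        // return to the point before continuing
    }
}
// path is a Core Foundation object: CGPathRelease(path) when you are done with it.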
GraphDataSupplierElement is the interface to my database that actually contains all the data. It determines what kind of graph it should be, formats the data into the required type for GraphDataElement, and passes it on, with the color to use for that particular graph.
For me, the x-axis is time, and is represented as NSTimeIntervals. The GraphDataSupplierElement contains a minDate and maxDate so that a GraphDateElement can draw the x-axis labels as required.
Once all this is done, you need to create the actual graph. You can go about it several ways. One option is to keep all the elements in an NSArray and, whenever drawRect: is called, loop through each element and draw it. Another option is to create a CALayer for each element and use the GraphPathElement as the CALayer's delegate. Or you could make GraphPathElement extend CALayer directly. It's up to you on this one. I haven't gotten as far as trying CALayers yet; I'm still stuck in the simple NSArray stage. I may move to CALayers at some point, once I'm satisfied with how everything looks.
So, all in all, the idea is that you create the graph as one or many CGPaths beforehand, and just draw that when you need to draw the graph, rather than trying to actually parse data whenever you get a drawRect: call.
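The NSArray variant of that last step can be as small as this, assuming the elements conform to a protocol like the GraphElement sketch above and self.elements is the array holding them:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    for (id<GraphElement> element in self.elements) {   // assumed NSArray of elements
        [element drawInContext:context];
    }
}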
Scaling can be done by keeping the source data in your GraphDataElement and just changing the frame, so that scaling the bounds to the frame creates a CGPath wider than the screen, or whatever your needs are. I basically re-implemented my own pinch zoom for my Graph UIView subclass, which only scales horizontally by changing its transform. On completion, I grab the current frame, reset the transform to identity, set the frame back to the saved value, and set the frame of all of the GraphElements to the new frame as well, so they scale too. Then just call [self setNeedsDisplay] to redraw.
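As a sketch of that pinch handling, assuming a UIPinchGestureRecognizer attached to the graph view and the setFrame: method from the protocol sketch above:

// Horizontal-only pinch zoom: cheap transform while pinching, real re-layout on completion.
- (void)handlePinch:(UIPinchGestureRecognizer *)pinch {
    if (pinch.state == UIGestureRecognizerStateChanged) {
        self.transform = CGAffineTransformMakeScale(pinch.scale, 1.0);
    } else if (pinch.state == UIGestureRecognizerStateEnded) {
        CGRect newFrame = self.frame;                // frame already reflects the transform
        self.transform = CGAffineTransformIdentity;
        self.frame = newFrame;
        for (id<GraphElement> element in self.elements) {
            [element setFrame:newFrame];             // let each element rebuild at the new size
        }
        [self setNeedsDisplay];
    }
}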
Anyway, that's a bit ramble-ish, but it's an outline of how I made it happen. If you have more specific questions, feel free to comment.