I'm building an iPhone application which requires some images to be built up in a very specific way. The problem is quite difficult to explain, so below is a diagram of what I'm trying to achieve. Basically, I want to "paint bucket" fill onto a UIImage (which will be a PNG). I assume the term "paint bucket" here equates to a tint?
After that, I want to create a mask object (which will be updatable and may consist of multiple shapes) and then when I apply another tint/paint bucket to the original image, the areas covered by the built-up mask will be unaffected. It's basically like wrapping some tape around an object, painting it and then removing the tape. As promised, here's a diagram of what I'm after. It's important to note that although I'm using a cross here, eventually the patterns may be quite complex and will have to be inside PNGs and not created in code. Thanks for any help you might be able to give!
Create your cross (or whatever shape you want) as a black image on a white background. Apply it to your graphics context using CGContextClipToMask. Then use CGContextFillRect to fill the bounds of your context with blue. Something like this should do it:
CGRect bounds = ...;   // your context's bounds
CGContextRef gc = ...; // your context
UIImage *cross = [UIImage imageNamed:@"cross"];
CGContextSaveGState(gc); {
    CGContextClipToMask(gc, bounds, cross.CGImage);
    CGContextSetFillColorWithColor(gc, [UIColor blueColor].CGColor);
    CGContextFillRect(gc, bounds);
} CGContextRestoreGState(gc);
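If you need the result as a new UIImage instead of drawing into an existing context, you could wrap those same calls in an image context. A rough sketch, assuming a base image named original that gets painted over (the mask and fill calls are the same as above):

UIGraphicsBeginImageContext(original.size);
CGContextRef gc = UIGraphicsGetCurrentContext();
CGRect bounds = CGRectMake(0, 0, original.size.width, original.size.height);
[original drawInRect:bounds]; // the image being "painted"
CGContextSaveGState(gc); {
    // Note: the UIKit image context is flipped (origin top-left), so an
    // asymmetric mask will be mirrored vertically unless you flip the CTM first.
    CGContextClipToMask(gc, bounds, cross.CGImage);
    CGContextSetFillColorWithColor(gc, [UIColor blueColor].CGColor);
    CGContextFillRect(gc, bounds);
} CGContextRestoreGState(gc);
UIImage *tinted = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();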
I am creating a drawing app and have run into a problem. I have an array of curves; each curve keeps an array of points, and each point keeps its color, thickness, and coords.
When drawRect: is called, I redraw all the curves from this array. The problem is that this array gets huge, and the app slows down.
My idea is to, at the end of each redrawing, save the current context as an image, free the curves array, and at the next redraw, use that image as the background. Ultimately, I don't need the curves array at all, just an array of the curves in progress. Is this possible? Or maybe there is another way to do it?
You can render your view's layer into an image and use that image as the starting point for the next iteration. It's better in this case to make yourViewToSaveAsImage a UIImageView, which makes the process even simpler:
UIView *view = yourViewToSaveAsImage;
UIGraphicsBeginImageContext(view.bounds.size);
// Render the view's layer (and all its sublayers) into the image context.
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
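On the next pass you can then draw that cached image first and only the in-progress curves on top of it. A minimal sketch, assuming hypothetical cachedImage and curvesInProgress properties and a per-curve drawing helper:

- (void)drawRect:(CGRect)rect
{
    // Draw the flattened snapshot of all finished curves first...
    [self.cachedImage drawInRect:self.bounds];

    // ...then draw only the curves that are still being edited on top.
    for (id curve in self.curvesInProgress) {
        [self drawCurve:curve]; // hypothetical per-curve drawing method
    }
}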
A path contains only information on points, so if you want to track variations in rendering you need a separate list of objects.
I achieved this by creating an NSArray* of my own custom objects that included fields such as: an NSBezierPath* (to capture the points and simplify drawing the segment), a CGPathDrawingMode to use for the segment, and information on the color and line size.
Then when I draw, I iterate over the elements of the array, set the context's current colors, and call either stroke or fill on the current element's NSBezierPath* depending on how I configured that segment.
I would also like to know if there's a faster way, but this approach certainly works well.
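As a rough sketch, such a segment object might look like the following (the class and property names are illustrative, not from my actual code; I'm using UIBezierPath here, the UIKit counterpart of NSBezierPath, since this is an iOS app):

#import <UIKit/UIKit.h>

@interface CurveSegment : NSObject
@property (nonatomic, strong) UIBezierPath *path;            // the segment's points
@property (nonatomic, assign) CGPathDrawingMode drawingMode; // stroke vs. fill
@property (nonatomic, strong) UIColor *color;
@property (nonatomic, assign) CGFloat lineWidth;
@end

@implementation CurveSegment
@end

// Drawing loop: set the context state from each element, then stroke or fill.
static void DrawSegments(NSArray *segments)
{
    for (CurveSegment *segment in segments) {
        [segment.color set]; // current stroke and fill color
        segment.path.lineWidth = segment.lineWidth;
        if (segment.drawingMode == kCGPathStroke) {
            [segment.path stroke];
        } else {
            [segment.path fill];
        }
    }
}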
I need a method for resizing a UIImage with "nearest neighbour" resampling, like in Photoshop. I've been looking for one, but everything I found was about Core Graphics tricks to improve bicubic resampling quality. My app has a pixel-style design: I create a lot of the artwork pixel by pixel and then enlarge it with a 5x multiplier (which takes so long that I'm close to writing a Photoshop script for it). For example:
But I really don't want the resampling to give me a result like this:
Maybe someone can show me the right way.
When you draw your image into a graphics context, you can set the graphics context's interpolation quality to "none", like this (e.g. in a view's drawRect method):
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(c, kCGInterpolationNone);
UIImage *image = [UIImage imageNamed:@"pixels.png"];
[image drawInRect:self.bounds];
If you need the result as a UIImage (e.g. to assign it to a built-in UI control), you could do this with UIGraphicsBeginImageContext (you'll find lots of examples for that).
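For example, something like this should give you a UIImage scaled up without interpolation (the image name and the 5x factor are placeholders):

UIImage *image = [UIImage imageNamed:@"pixels.png"];
CGSize targetSize = CGSizeMake(image.size.width * 5.0, image.size.height * 5.0);

UIGraphicsBeginImageContextWithOptions(targetSize, NO, 1.0);
// Turn off interpolation before drawing, just like in the drawRect: example.
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationNone);
[image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();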
An alternative would be to set the magnificationFilter property of an image view's layer:
pixelatedImageView.layer.magnificationFilter = kCAFilterNearest;
This is probably faster and more memory-efficient, because you don't need to redraw the image.
I have a rectangular NSImage A and I want to scale it to embed it into a square transparent image B, keeping A's aspect ratio. So in the end I'll get a square image with the rectangle inside it.
How can I compose that image? I mean, how can I draw an NSImage over another NSImage and save the resulting image?
I've been reading about clipping an NSImage inside a bezier path, but I need to keep the ratio instead of filling the bezier square.
I hope you understand what I want.
Thanks.
The 'Cocoa Drawing Guide' has a section called 'Drawing to an Image'. From that documentation:
It is possible to create images programmatically by locking focus on an NSImage object and drawing other images or paths into the image context. This technique is most useful for creating images that you intend to render to the screen, although you can also save the resulting image data to a file.
There is example code there.
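As a minimal sketch of that technique applied to the square-embedding problem above (here A is simply centered into a square whose side is A's larger dimension, so the ratio is preserved without any scaling):

NSImage *imageA = ...; // your rectangular image
CGFloat side = MAX(imageA.size.width, imageA.size.height);
NSImage *imageB = [[NSImage alloc] initWithSize:NSMakeSize(side, side)];

[imageB lockFocus];
// Center A inside the square; the rest of B stays transparent.
NSRect target = NSMakeRect((side - imageA.size.width) / 2.0,
                           (side - imageA.size.height) / 2.0,
                           imageA.size.width, imageA.size.height);
[imageA drawInRect:target
          fromRect:NSZeroRect
         operation:NSCompositeSourceOver
          fraction:1.0];
[imageB unlockFocus];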
I'm developing an iPhone game with Cocos2d-iphone. I have a huge sprite and I want to apply a CCLiquid (or any other liquid-wave-like effect) to it.
However, the image is huge, so it consumes a lot of memory (not to mention that I have many other big elements during gameplay).
Well, I figured I could try to "only apply the liquid effect on the area that is visible by the player" (dimensions of such area being 480x320). That could help a lot.
I already got a CGRect representing the area of the CCSprite that should be affected. However, how would I actually apply the effect only within such area? Any ideas?
You could manually create a CCSprite from a sprite frame and set the boundaries of that frame to your CGRect, then run the effect on this resulting CCSprite. Essentially, your original CCSprite image would act like a larger texture atlas from which you specify a small portion to be the actual frame of your sprite. If you layer this new copied sprite on top of your main, larger one in the exact position, it will appear to be part of that larger sprite, but only the small CGRect portion will be affected by your code.
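A minimal sketch of that idea, assuming the cocos2d-iphone 1.x API; bigSprite and visibleRect are placeholders for your sprite and the 480x320 CGRect:

// Build a second sprite from the big sprite's texture, using only the
// visible portion (the texture rect's origin is measured from the top-left).
CCSprite *effectSprite = [CCSprite spriteWithTexture:bigSprite.texture
                                                rect:visibleRect];
effectSprite.anchorPoint = ccp(0, 0);

// Align the copy with the same pixels of the parent sprite (node space has a
// bottom-left origin, so the texture rect's y coordinate must be flipped).
effectSprite.position = ccp(visibleRect.origin.x,
                            bigSprite.contentSize.height
                            - visibleRect.origin.y - visibleRect.size.height);
[bigSprite addChild:effectSprite];

// Run the liquid effect on the small copy only.
[effectSprite runAction:[CCLiquid actionWithWaves:4
                                        amplitude:20
                                             grid:ccg(15, 10)
                                         duration:3]];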
I am trying to draw some text via Quartz onto an NSView via CGContextShowTextAtPoint(). This worked well until I overrode (BOOL)isFlipped to return YES in my NSView subclass in order to position the origin in the upper-left for drawing. The text draws in the expected area but the letters are all inverted. I also tried the (theoretically, at least) equivalent of flipping my CGContext and translating by the context's height.
e.g.
// drawRect:
CGContextScaleCTM(theContext, 1, -1);
CGContextTranslateCTM(theContext, 0, -dirtyRect.size.height);
This yields the same result.
Many suggestions for similar problems have pointed to modifying the text matrix. I've set the text matrix to the identity matrix, performed an additional inversion on it, and done both, respectively. All these solutions have led to even stranger rendering of the text (often just a fragment shows up).
Another suggestion I saw was to simply steer clear of this function in favor of other means of drawing text (e.g. NSString's drawing methods). However, this is being done in mostly C/C++ code and I'd like to stay at those levels if possible.
Any suggestions are much appreciated and I'd be happy to post more code if needed.
Thanks,
Sam
This question has been answered here.
Basically it's because the coordinate system in iOS Core Graphics is flipped (x:0, y:0 in the top left), as opposed to the one on the Mac (where x:0, y:0 is the bottom left). The solution is to set the text transform matrix like this:
CGContextSetTextMatrix(context, CGAffineTransformMake(1.0,0.0, 0.0, -1.0, 0.0, 0.0));
You need to use the view's bounds rather than the dirtyRect and perform the translation before the scale:
CGContextTranslateCTM(theContext, 0, NSHeight(self.bounds));
CGContextScaleCTM(theContext, 1, -1);
Turns out the answer was to modify the text matrix. The weird "fragments" that showed up instead of the text appeared because the font size (set via CGContextSelectFont()) was too small once the "default" text matrix was replaced. The initial matrix had, for some reason, a large scale transform, so smaller text sizes looked fine while the matrix was unmodified; when it was replaced with an inverse scale (1, -1) or an identity matrix, however, they became unreadably small.
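In other words, something like this (the font name and sizes are illustrative) keeps the glyphs upright in the flipped view while selecting the font at the size you actually want on screen:

CGContextSelectFont(ctx, "Helvetica", 14.0, kCGEncodingMacRoman);
// Invert y in the text matrix so glyphs draw upright in the flipped view;
// with this matrix the font size above is the real on-screen size.
CGContextSetTextMatrix(ctx, CGAffineTransformMakeScale(1.0, -1.0));
CGContextShowTextAtPoint(ctx, 20.0, 20.0, "Hello", 5);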