iPhone Objective-C image manipulation

I am looking for a way to, in Objective-C, create a PNG from several smaller PNGs based on how the user sets things up. Is this possible using existing Apple classes, or do I need to use a 3rd party library? If 3rd party code is needed, can anyone recommend a good library? The simpler the better - simple filters (such as darkening/lightening the image) would be nice but not required.
Here is some pseudo-code, to give you a better idea of what I am looking for:
image = [myImageLibrary imageWithHeight:1024 width:768];
[image addImage:@"background.png" atX:0 andY:0 withRotation:0];
[image addImage:@"image2.png" atX:100 andY:200 withRotation:90];
[image saveAtLocation:@"output.png"];
In output.png we see image2.png placed on top of background.png and rotated 90 degrees.
P.S. - I am sorry if this seems to be a duplicate of another question, I just have not found an answer that works for what I am trying to do.

Have you read the "Creating and Drawing Images" section of the Drawing and Printing Guide for iOS and the UIImage Class Reference docs?
What you're after is perfectly possible - with a well-built class you could pretty much use that pseudo-code as-is.
As a starter for ten, you could:
1. Create your own graphics context via UIGraphicsBeginImageContext.
2. Draw into it via the drawAtPoint: method of the UIImage class.
3. Retrieve the resulting image via UIGraphicsGetImageFromCurrentImageContext, then write its PNG data out (UIImagePNGRepresentation plus NSData's writeToFile:atomically: will do it).
In terms of steps 1 and 3, see the UIKit Function Reference for more info. Additionally, the imageWithCGImage:scale:orientation: method of the UIImage class may prove useful for performing transformations, etc. as a part of step 2.
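Here is a minimal sketch of those three steps, assuming the source PNGs are bundled resources (the file names and the 768x1024 size just mirror the pseudo-code above):

UIImage *ComposeAndSavePNG(void) {
    UIGraphicsBeginImageContext(CGSizeMake(768, 1024));

    // Step 1 gave us a context; draw the background at the origin.
    [[UIImage imageNamed:@"background.png"] drawAtPoint:CGPointZero];

    // Draw the second image rotated 90 degrees about the point (100, 200).
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSaveGState(ctx);
    CGContextTranslateCTM(ctx, 100, 200);
    CGContextRotateCTM(ctx, M_PI_2);
    [[UIImage imageNamed:@"image2.png"] drawAtPoint:CGPointZero];
    CGContextRestoreGState(ctx);

    // Pull the composite out of the context, then write it as a PNG.
    UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    NSString *path = [NSTemporaryDirectory()
        stringByAppendingPathComponent:@"output.png"];
    [UIImagePNGRepresentation(composite) writeToFile:path atomically:YES];
    return composite;
}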

You'll want to look at CGContextDrawImage to draw your images into a custom bitmap context (created with CGBitmapContextCreate), then pull the result out with CGBitmapContextCreateImage. (If you use UIGraphicsBeginImageContext instead, UIGraphicsGetImageFromCurrentImageContext() returns the finished UIImage.) The rotation can be done by applying a CGAffineTransform to your CGContext.
More information on Core Graphics here:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html
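A minimal sketch of that route, assuming image is a CGImageRef you already have; the size and anchor point are illustrative:

CGImageRef RotatedComposite(CGImageRef image) {
    // Custom bitmap context: 768x1024, 8 bits per component, RGBA.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, 768, 1024, 8, 0,
        colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // Rotate subsequent drawing 90 degrees about the point (100, 200).
    CGContextTranslateCTM(ctx, 100, 200);
    CGContextRotateCTM(ctx, M_PI_2);
    CGContextDrawImage(ctx, CGRectMake(0, 0,
        CGImageGetWidth(image), CGImageGetHeight(image)), image);

    // Extract the finished bitmap from the custom context (caller releases).
    CGImageRef result = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return result;
}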

Related

Create an image cropping interface for Objective C (Mac OS X)

I need to create a very simple image cropping interface for an OS X cocoa application, but I am not sure where to start. The user needs to be able to choose a crop size from a menu of presets, be presented with a cropping rectangle that can be resized preserving the ratio, and moved around the image until they finally apply the selected crop to the image.
I've done some searching for sample code and projects but haven't found anything too useful. Core Image Fun House has some pointers, but it's a retired sample. There are lots of iOS examples, but I've not found an easy-to-follow Mac OS X example.
Can someone point me in the right direction (or at a sample project or framework!!).
Thanks a lot.
Here is a project you can look at:
https://github.com/foundry/drawingtest
It's a little demo I made as I was trying to understand the relationship between the rects in this method:
- (void)drawInRect:(NSRect)dstRect
fromRect:(NSRect)srcRect
operation:(NSCompositingOperation)op
fraction:(CGFloat)delta
Note that the older compositeToPoint: methods are deprecated and should not be used for this sort of thing.
srcRect is the portion of the original image (in its own coordinates) that you want to keep.
dstRect is the rect that you want that cropped area to draw into.
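For example, a minimal crop helper using that method might look like this (assuming ARC; NSCompositeSourceOver is the older name of the compositing constant):

NSImage *CroppedImage(NSImage *source, NSRect cropRect) {
    // cropRect plays the role of srcRect: the portion of the source to keep.
    NSImage *result = [[NSImage alloc] initWithSize:cropRect.size];
    [result lockFocus];
    // dstRect fills the new image; fromRect selects the cropped area.
    [source drawInRect:NSMakeRect(0, 0, cropRect.size.width, cropRect.size.height)
              fromRect:cropRect
             operation:NSCompositeSourceOver
              fraction:1.0];
    [result unlockFocus];
    return result;
}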
JMRect in the project is an NSObject representation of an NSRect - so that we can use Cocoa bindings to tie the interface controls together.
For your UI, the cropping rectangle could just be a transparent subview with a border that you push around and resize over the image you want to crop.
This is by no means a complete solution to your question, but it's something you can poke around with - it might help you to get started.

Objective C: How to use CGLayerCreate and CGContextDrawLayerAtPoint?

I am studying Core Graphics to make dynamic textures for my project.
A friend told me that I should use CGLayerCreate and CGContextDrawLayerAtPoint
to improve the brush textures in the app I am building, but I haven't found any book or tutorial that covers CGLayerCreate and CGContextDrawLayerAtPoint.
Can you tell me what these two are for and how to code with them?
Also, if you know of any Core Graphics book that covers them, please tell me; it would surely help.
Thanks!
The Quartz 2D Programming Guide has a chapter that discusses CGLayer objects.
Your friend might be thinking of this use of layers, quoting that chapter:
Repeated drawing. For example, you might want to create a pattern that consists of the same item drawn over and over. Draw the item to a layer and then repeatedly draw the layer, as shown in Figure 12-1. Any Quartz object that you draw repeatedly—including CGPath, CGShading, and CGPDFPage objects—benefits from improved performance if you draw it to a CGLayer. Note that a layer is not just for onscreen drawing; you can use it for graphics contexts that aren't screen-oriented, such as a PDF graphics context.
There's also a very very simple example in the Quartz2DBasics sample app.
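To make that concrete, here is a minimal sketch of the repeated-drawing pattern (note that the function is actually spelled CGLayerCreateWithContext; the stamp shape and sizes are made up for illustration, and the destination context would typically come from your view's drawRect:):

void DrawStampedPattern(CGContextRef context, CGRect bounds) {
    CGSize stampSize = CGSizeMake(32, 32);
    // Create a layer compatible with the destination context.
    CGLayerRef layer = CGLayerCreateWithContext(context, stampSize, NULL);
    CGContextRef layerContext = CGLayerGetContext(layer);

    // Draw the repeated item once, into the layer.
    CGContextSetRGBFillColor(layerContext, 0.2, 0.4, 0.8, 1.0);
    CGContextFillEllipseInRect(layerContext,
        CGRectMake(0, 0, stampSize.width, stampSize.height));

    // Stamp the cached layer over and over; Quartz can redraw it cheaply.
    for (CGFloat y = 0; y < bounds.size.height; y += stampSize.height) {
        for (CGFloat x = 0; x < bounds.size.width; x += stampSize.width) {
            CGContextDrawLayerAtPoint(context, CGPointMake(x, y), layer);
        }
    }
    CGLayerRelease(layer);
}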

How should I design displaying a dynamic map? (Coordinates + Lines)

So I want to have a view (NSView, NSOpenGLView, something CG related?) which basically displays a map. Such as:
http://dump.tanaris4.com/map.png
Obviously that looks horrible, but I did it using an NSView, and it draws SO slowly. Clearly it's not designed for this.
I just need to allow users to click on the individual (x,y) coordinates to make changes, and zoom into a certain area (to see it better).
Should I go the OpenGL route? And if so - any suggestions as to how to get started? (I was able to follow the guide to draw a triangle, so that's good).
I did find this post on zooming in an NSView: How to implement zoom/scale in a Cocoa AppKit-application
My concern is if I'm drawing over 6000 coordinates and the lines connecting them, this isn't efficient at all.
I don't think using OpenGL would do much good here. The problem does not seem to be the actual painting, but rather the rendering strategy. You would need a scene graph of some kind to dynamically handle level of detail and culling.
Qt has all this packaged in a nice class, QGraphicsScene (see http://doc.qt.nokia.com/latest/qgraphicsscene.html for reference, and http://doc.qt.nokia.com/main-snapshot/demos-chip.html for an example).
Some basic concepts you should consider using:
http://en.wikipedia.org/wiki/Scene_graph
http://en.wikipedia.org/wiki/Quadtree
http://en.wikipedia.org/wiki/Level_of_detail
Try using Core Graphics for this; really, there is so much that can be done. Watch the video Practical Drawing for iOS Developers from WWDC 2011; it should give an overview of what can be done with CG.
I believe even Core Graphics will suffice for what you want to achieve, and it works in a UIView if you do all of your drawing in the view's drawRect: method (which you must override). Please see the UIView Class Reference. I have a mobile application that logs points on a MapKit map, kind of like Nike+, and it certainly works well for massive amounts of points/line segments. There is no reason why this simple approach cannot work for you as well; see the sketch below.
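A minimal sketch of that approach, assuming points (a CGPoint array) and pointCount are hypothetical ivars of your UIView subclass; the dirty-rect culling is one simple way to keep thousands of segments responsive:

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextSetLineWidth(ctx, 1.0);
    // points and pointCount are assumed ivars holding the map data.
    for (NSUInteger i = 1; i < pointCount; i++) {
        CGPoint a = points[i - 1];
        CGPoint b = points[i];
        // Cheap culling: skip segments whose bounds miss the dirty rect.
        // (Padded by a point so axis-aligned segments aren't zero-sized.)
        CGRect segmentBounds = CGRectStandardize(
            CGRectMake(a.x, a.y, b.x - a.x, b.y - a.y));
        if (!CGRectIntersectsRect(CGRectInset(segmentBounds, -1, -1), rect))
            continue;
        CGContextMoveToPoint(ctx, a.x, a.y);
        CGContextAddLineToPoint(ctx, b.x, b.y);
    }
    CGContextStrokePath(ctx);
}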

UIImage change raw pixels from white to clear?

I've tried some code from each of these questions:
How to make one color transparent on a UIImage?
How to mask a UIImage so that white becomes transparent on iphone?
but have come up unsuccessful; unfortunately, working with Core Graphics and images is not my strong suit.
How would I go about accessing a UIImage's raw data and changing the white pixels to clear?
How would I go about accessing a UIImage's raw data …?
Look at the documentation.
You'll find that there is no way to get the raw data behind a UIImage. The closest you can get is a CGImage. That will let you get its data provider, which you can ask for a copy of the raw data.
The problem with that solution is that you need to handle every possible configuration (RGBA, ARGB, RGB_, _RGB, RGB, 8-bpc, 16-bpc, etc.) that CGImage supports. That's a lot of work. If you don't do it, then someday, you'll get surprised by an image that somehow doesn't work with your code, or by an OS upgrade changing how the CGImage gets created.
The CGImageCreateWithMaskingColors function, suggested on one of the other questions you linked to, is the correct solution.
One thing that's tripping you up is that the values shown in the accepted answer on that question are generally bogus: they're out of range. The Quartz 2D Programming Guide has more details in at least two places.
I also argue against including that answer's createMask: method, since it doesn't do what it says it does and is barely useful at all (it's only worth having if the source image may be CMYK, but how likely is that in an iPhone app?). Skip it and create the mask image from the UIImage's CGImage directly.
That answer will probably work just fine once you fix those two problems.
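For what it's worth, a minimal sketch of the masking approach might look like this (the 230-255 range is an arbitrary "near-white" band chosen for illustration, and note that CGImageCreateWithMaskingColors requires a source image without an alpha channel):

UIImage *ImageByMaskingWhite(UIImage *source) {
    // {minR, maxR, minG, maxG, minB, maxB}, in-range values for an
    // 8-bits-per-component RGB image.
    const CGFloat maskingColors[6] = {230, 255, 230, 255, 230, 255};
    CGImageRef masked = CGImageCreateWithMaskingColors(source.CGImage,
                                                       maskingColors);
    if (!masked) return nil;   // e.g. the source had an alpha channel
    UIImage *result = [UIImage imageWithCGImage:masked];
    CGImageRelease(masked);
    return result;
}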

Using Core Animation/CALayer for simple layered painting

I would like to create a custom NSView that takes a layered approach to painting. I imagine the majority of the layers would be the same width and height as the backing view.
Is it appropriate to use the Core Animation classes like CALayer for this task, even though I don't expect to need much animation? Is there a more appropriate approach?
To clarify, the view is not meant to be like a canvas in a Photoshop-like application. It's more of a data display that should allow for user interaction (selecting, moving, scrolling, etc.).
If it's display and layout you're after, I'd say that a CALayer-based architecture is a good choice. For the open source Core Plot framework, we construct all of our graphs and plot elements out of CALayers, and organize them in a regular hierarchy. CALayers are lightweight and use almost identical APIs between Mac and iPhone. They can even be made to respond to touch or mouse events.
For another example of a CALayer-based user interface, my iPhone application's entire equation entry interface is composed of CALayers, including the menu that slides up from below. Performance is slightly better than that of my previous UIView-based implementation, but the same code also works within my preliminary desktop version of the application.
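As a rough sketch of what such a hierarchy can look like in an NSView subclass (the sublayer setup is illustrative; note that for a layer-hosting view you assign the layer before setting wantsLayer, and you'll need to import QuartzCore):

- (void)awakeFromNib {
    [super awakeFromNib];
    CALayer *root = [CALayer layer];
    self.layer = root;        // assign first for a layer-hosting view
    self.wantsLayer = YES;

    // A full-size background layer.
    CALayer *background = [CALayer layer];
    background.frame = NSRectToCGRect(self.bounds);
    CGColorRef gray = CGColorCreateGenericGray(0.9, 1.0);
    background.backgroundColor = gray;
    CGColorRelease(gray);
    [root addSublayer:background];

    // A content layer that draws via the delegate's drawLayer:inContext:.
    CALayer *content = [CALayer layer];
    content.frame = NSRectToCGRect(self.bounds);
    content.delegate = self;
    [root addSublayer:content];
    [content setNeedsDisplay];
}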
For a drawing program, I would imagine it would be important to hold a buffer of the bitmap data. The only issue with using a CALayer is that the contents property is a CGImageRef. To turn that back into a graphics context for doing further drawing can be a bit of a pain. You'd have to initialize a new context, draw the bitmap data into it, then do whatever drawing operations you wanted to do, and finally turn that back into a CGImageRef. You probably wouldn't be able to avoid doing a number of pretty large memory allocations, which is virtually guaranteed to slow your program way down.
I would consider holding an off-screen buffer for each layer. Take a look at the Quartz CGLayerRef object. I think it probably does what you want to do: it's an off-screen buffer that holds things you might want to draw repeatedly. You can also quickly get a CGContextRef whenever you need it so you can do additional drawing. And you can always use that CGContextRef with NSGraphicsContext if you want to use Cocoa drawing methods.
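A minimal sketch of that combination, assuming destContext is the CGContextRef you're ultimately drawing into (graphicsContextWithGraphicsPort:flipped: is the older way to wrap a CGContextRef for Cocoa drawing):

// Buffer: an off-screen CGLayer you can draw into repeatedly.
CGLayerRef layer = CGLayerCreateWithContext(destContext,
    CGSizeMake(200, 200), NULL);
CGContextRef layerCtx = CGLayerGetContext(layer);

// Wrap the layer's context so Cocoa drawing methods can target it.
NSGraphicsContext *nsCtx = [NSGraphicsContext
    graphicsContextWithGraphicsPort:layerCtx flipped:NO];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:nsCtx];
[[NSColor redColor] set];
NSRectFill(NSMakeRect(20, 20, 100, 100));   // ordinary Cocoa drawing
[NSGraphicsContext restoreGraphicsState];

// Composite the buffered layer into the destination whenever needed.
CGContextDrawLayerAtPoint(destContext, CGPointZero, layer);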