I have a bunch of image files in a proprietary binary format that I want to load into NSImages. The format is not a simple bitmap, but rather a kind of RLE (run-length encoded) representation mixed with transparency and miscellaneous additional information.
In order to display one of these images in a Cocoa app, I need a way to parse the image file byte by byte and "calculate" a bitmap from it which I will then put into an NSImage.
What is a good approach for doing this in Objective-C/Cocoa?
The task of interpreting image data is handled by the image's representation object(s). To use a proprietary format, you have a few options: (a) create a custom representation class, (b) use NSCustomImageRep with a custom delegate, or (c) use a custom object to translate your image to a supported format, such as a raw bitmap.
If you choose to create a custom representation class, you will create a subclass of NSImageRep as described in Creating New Image Representation Classes. This basically requires that your class register itself and be able to draw the image data. In addition to this, you can override methods to return information about the image, and you will be able to instantiate your images using the normal NSImage methods. This method requires the most work.
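A bare-bones sketch of what such a subclass might look like (MyRLEImageRep is a hypothetical name, and the actual decoding is left to your own parser):

    @interface MyRLEImageRep : NSImageRep
    @end

    @implementation MyRLEImageRep {
        NSData *_rleData;   // raw contents of the proprietary file
    }

    + (void)load {
        // Make the class known to NSImage so the normal creation methods work.
        [NSImageRep registerImageRepClass:self];
    }

    + (BOOL)canInitWithData:(NSData *)data {
        // Inspect the header bytes of your format here.
        return YES;   // placeholder
    }

    + (instancetype)imageRepWithData:(NSData *)data {
        MyRLEImageRep *rep = [[self alloc] init];
        rep->_rleData = data;
        // Also set -setSize:, -setPixelsWide:, -setPixelsHigh:, etc. from the header.
        return rep;
    }

    - (BOOL)draw {
        // Decode _rleData and draw the resulting pixels into the current context.
        return YES;
    }

    @end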
Using NSCustomImageRep requires less work than creating a custom representation class. Your delegate object only needs to be able to draw the image at a fixed location. However, you cannot return other information about the image, and you will need to create the NSCustomImageRep object manually before creating the NSImage.
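A sketch of that route, where myDrawerObject, its -drawRLEImageRep: method, and the width/height values are placeholders you would supply:

    // Sketch: hand drawing off to a delegate via NSCustomImageRep.
    NSCustomImageRep *rep =
        [[NSCustomImageRep alloc] initWithDrawSelector:@selector(drawRLEImageRep:)
                                              delegate:myDrawerObject];
    [rep setSize:NSMakeSize(width, height)];

    NSImage *image = [[NSImage alloc] initWithSize:[rep size]];
    [image addRepresentation:rep];
    // The delegate's -drawRLEImageRep: is called whenever the image needs drawing.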
Translating the image into a different format is also simpler than creating a custom representation. It could be as simple as creating a blank NSImage of the proper size and drawing into it. Creating the image is still more complicated since you need to call your translation method, and this will affect efficiency (both future drawing time and memory usage) since you are changing formats, which could be good or bad. You will also lose any association between the image object and its source.
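A sketch of the translation route, where decodeRLE() stands in for your own parser and width/height come from the file header:

    // Sketch: decode the proprietary data into a fresh RGBA bitmap rep.
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL    // let AppKit allocate the buffer
                      pixelsWide:width
                      pixelsHigh:height
                   bitsPerSample:8
                 samplesPerPixel:4       // RGBA
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:0       // let AppKit compute it
                    bitsPerPixel:0];

    unsigned char *pixels = [rep bitmapData];
    // decodeRLE(fileBytes, pixels, width, height, [rep bytesPerRow]);   // your parser

    NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
    [image addRepresentation:rep];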
I want to render my scene to a texture and then use that texture in a shader, so I created a framebuffer from an image view and recorded a command buffer for it. I successfully uploaded and executed the command buffer on the GPU, but the descriptor created from the image view samples as black. I create the descriptor from the image view before the rendering loop. Is it black because I create it before anything has been rendered to the framebuffer? If so, will I have to update the descriptor every frame, or create a new descriptor from the image view every frame? Or is there another way I can do this?
I have read another thread with this title. Please don't mark this as a duplicate; that thread is about textures, while this one is about a texture backed by an image view.
Thanks.
#IAS0601 I will answer the questions from your comment in an answer, as it allows for much longer text and better formatting. I hope this also answers your original question, but you don't have to treat it as the answer. As I wrote, I'm not sure what you are asking about.
1) In practically all cases, the GPU accesses images through image views. They specify additional parameters that define how the image is accessed (for example, which part of the image is accessed), but it is still the original image that gets accessed. An image view, as the name suggests, is just a view, a list of access parameters. It doesn't have any memory bound to it and it doesn't contain any data (apart from the parameters specified during image view creation).
So when you create a framebuffer and render into it, you render into the original images or, to be more specific, into those parts of the original images which were specified in the image views. For example, say you have a 2D texture with 3 array layers. You create a 2D image view for the middle (second) layer. Then you use this image view during framebuffer creation. Now, when you render into this framebuffer, you are in fact rendering into the second layer of the original 2D texture array.
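As a rough sketch, creating such a single-layer view through the Vulkan C API might look like this (device, textureImage, and the format are placeholders from your own code):

    /* Sketch: a 2D view onto the second array layer of an existing image. */
    VkImageViewCreateInfo viewInfo = {
        .sType            = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
        .image            = textureImage,                 /* your VkImage */
        .viewType         = VK_IMAGE_VIEW_TYPE_2D,
        .format           = VK_FORMAT_R8G8B8A8_UNORM,     /* must match the image */
        .subresourceRange = {
            .aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT,
            .baseMipLevel   = 0,
            .levelCount     = 1,
            .baseArrayLayer = 1,                          /* the second layer */
            .layerCount     = 1,
        },
    };
    VkImageView imageView;
    vkCreateImageView(device, &viewInfo, NULL, &imageView);
    /* Passing imageView in VkFramebufferCreateInfo::pAttachments means the
       render pass writes into layer 1 of textureImage. */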
Another thing: when you later access the same image, and when you use the same image view, you still access the original image. If you rendered something into the image, then you will get the updated data (provided you have done everything correctly, such as performing the appropriate synchronization operations and layout transitions if necessary). I hope this is what you mean by updating the image view.
2) I'm not sure what you mean by updating a descriptor set. In Vulkan, when we update a descriptor set, it means that we specify handles of the Vulkan resources that should be used through the given descriptor set.
If I understand you correctly, you want to render something into an image. You create an image view for that image and provide that image view during framebuffer creation. Then you render something into that framebuffer. Now you want to read data from that image. You have two options. If you only want to access the single sample location associated with the fragment shader's location, you can do this through an input attachment in the next subpass of the same render pass. But this way you can only perform operations which don't require access to multiple texels, for example a color correction.
But if you want to do something more advanced that needs access to several texels, like blurring or shadow mapping, you must end the render pass and start another one. In this second render pass, you can read data from the original image through a descriptor set. It doesn't matter when this descriptor set was created and updated (when the handle of the image view was specified). As long as you don't change the handles of the resources, meaning you don't create a new image or a new image view, you can use the same descriptor set and you will access the data rendered in the first render pass, as sketched below.
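For reference, updating such a descriptor set in the Vulkan C API might look roughly like this (the sampler, image view, and descriptor set handles are placeholders for your own objects):

    /* Sketch: point a combined image sampler descriptor at the rendered image view. */
    VkDescriptorImageInfo imageInfo = {
        .sampler     = sampler,
        .imageView   = imageView,
        .imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
    };
    VkWriteDescriptorSet write = {
        .sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET,
        .dstSet          = descriptorSet,
        .dstBinding      = 0,
        .descriptorCount = 1,
        .descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
        .pImageInfo      = &imageInfo,
    };
    vkUpdateDescriptorSets(device, 1, &write, 0, NULL);
    /* As long as imageView and sampler stay valid, this write does not have to
       be repeated every frame. */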
If you have problems accessing the data, for example (as you wrote) you get only black colors, this suggests you didn't perform everything correctly: the render pass load or store ops are incorrect, the initial and final layouts are incorrect, or synchronization isn't performed correctly. Unfortunately, without access to your project, we can't be sure what is wrong.
I would like to ask whether it is possible, with Cocoa classes, to get a direct pointer to the internal graphical image of a view and put a raw graphical image into that (private?) memory area,
or, vice versa, to tell the view the pointer to a raw image in another memory area.
I'm asking whether Cocoa allows these low-level operations. Thanks
No, you can't. You need to wrap that internal representation with a CGImage (via CGImageCreate), and depending on context, you would then need to wrap the CGImage with a UIImage.
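As a rough sketch, wrapping a raw RGBA pixel buffer might look like this (pixelBuffer, width, and height are assumed to come from your own code, and the buffer is assumed to be 8 bits per component with non-premultiplied alpha last):

    // Sketch: wrap raw RGBA bytes in a CGImage, then in a UIImage.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithData(
        NULL, pixelBuffer, width * height * 4, NULL);

    CGImageRef cgImage = CGImageCreate(width, height,
                                       8,           // bits per component
                                       32,          // bits per pixel
                                       width * 4,   // bytes per row
                                       colorSpace,
                                       kCGBitmapByteOrderDefault | kCGImageAlphaLast,
                                       provider, NULL, false,
                                       kCGRenderingIntentDefault);

    UIImage *image = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);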
What makes you think that an "internal graphical image of a view" even exists? Some views may, others may not, but if such a thing exists it is by definition "internal" and therefore not something you should mess with. However, you do have access to a view's layer, and that gives you a lot of control.
What are you trying to do? If you just want to get a copy of the contents of a view as an image, that's easy, and if you want to draw an image in a view that's easy too. Once you have an image, you can do just about anything you want to change it.
I am looking for a way to, in Objective-C, create a PNG from several smaller PNGs based on how the user sets things up. Is this possible using existing Apple classes, or do I need to use a 3rd party library? If 3rd party code is needed, can anyone recommend a good library? The simpler the better - simple filters (such as darkening/lightening the image) would be nice but not required.
Here is some pseudo-code, to give you a better idea of what I am looking for:
image = [myImageLibrary imageWithHeight:1024 width:768];
[image addImage:@"background.png" atX:0 andY:0 withRotation:0];
[image addImage:@"image2.png" atX:100 andY:200 withRotation:90];
[image saveAtLocation:@"output.png"];
In output.png we would see image2.png placed on top of background.png and rotated 90 degrees.
P.S. - I am sorry if this seems to be a duplicate of another question, I just have not found an answer that works for what I am trying to do.
Have you read the "Creating and Drawing Images" section of the Drawing and Printing Guide for iOS and the UIImage Class Reference docs?
What you're after is perfectly possible - with a well built class you could pretty much use that pseudo code as-is.
As a starter for ten, you could:
Create your own graphics context via UIGraphicsBeginImageContext.
Draw into that via the drawAtPoint: method of the UIImage class.
Retrieve the resulting image via UIGraphicsGetImageFromCurrentImageContext, then write its data out (UIImagePNGRepresentation will give you PNG data).
In terms of steps 1 and 3, see the UIKit Function Reference for more info. Additionally, the imageWithCGImage:scale:orientation: method of the UIImage class may prove useful for performing transformations, etc. as a part of step 2.
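Putting those steps together (a non-definitive sketch, using the file names from your pseudo-code and a hard-coded output path as a placeholder):

    // Sketch: composite two PNGs into a 768x1024 image and save it as a PNG.
    UIGraphicsBeginImageContext(CGSizeMake(768, 1024));

    [[UIImage imageNamed:@"background.png"] drawAtPoint:CGPointZero];

    // Translate/rotate the context before drawing the second image.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSaveGState(ctx);
    CGContextTranslateCTM(ctx, 100, 200);
    CGContextRotateCTM(ctx, M_PI_2);    // 90 degrees
    [[UIImage imageNamed:@"image2.png"] drawAtPoint:CGPointZero];
    CGContextRestoreGState(ctx);

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Write the composite out as PNG data.
    [UIImagePNGRepresentation(result) writeToFile:@"output.png" atomically:YES];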
You'll want to look at CGContextDrawImage to draw your images into a bitmap context (either one you create yourself with CGBitmapContextCreate or the one set up by UIGraphicsBeginImageContext), and then pull the result back out via UIGraphicsGetImageFromCurrentImageContext() or CGBitmapContextCreateImage(). The rotation can be done by applying CGAffineTransforms to your CGContext.
More information on Core Graphics here:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html
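The drawing-with-rotation part of that approach might look roughly like this (context, cgImage, and the image dimensions are placeholders for your own objects):

    // Sketch: draw a CGImage rotated 90 degrees into an existing CGContextRef.
    CGContextSaveGState(context);
    CGContextTranslateCTM(context, 100, 200);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(M_PI_2));
    CGContextDrawImage(context, CGRectMake(0, 0, imageWidth, imageHeight), cgImage);
    CGContextRestoreGState(context);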
I would like to create a custom NSView that takes a layered approach to painting. I imagine the majority of the layers would be the same width and height as the backing view.
Is it appropriate to use the Core Animation classes like CALayer for this task, even though I don't expect to need much animation? Is there a more appropriate approach?
To clarify, the view is not meant to be like a canvas in a Photoshop-like application. It's more of a data display that should allow for user interaction (selecting, moving, scrolling, etc.).
If it's display and layout you're after, I'd say that a CALayer-based architecture is a good choice. For the open source Core Plot framework, we construct all of our graphs and plot elements out of CALayers, and organize them in a regular hierarchy. CALayers are lightweight and use almost identical APIs between Mac and iPhone. They can even be made to respond to touch or mouse events.
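A minimal sketch of how such a layer-hosting NSView might be set up (MyDataView and its sublayer are hypothetical names; the actual drawing is left to you):

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>

    @interface MyDataView : NSView
    @end

    // Sketch: an NSView whose content is built out of CALayers.
    @implementation MyDataView

    - (instancetype)initWithFrame:(NSRect)frameRect {
        if ((self = [super initWithFrame:frameRect])) {
            CALayer *rootLayer = [CALayer layer];
            [self setLayer:rootLayer];     // set the layer before wantsLayer
            [self setWantsLayer:YES];      // makes this a layer-hosting view

            CALayer *plotLayer = [CALayer layer];
            plotLayer.frame = NSRectToCGRect(self.bounds);
            plotLayer.delegate = self;     // drawLayer:inContext: does the drawing
            [rootLayer addSublayer:plotLayer];
            [plotLayer setNeedsDisplay];
        }
        return self;
    }

    - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
        // Draw this layer's content with Core Graphics calls here.
    }

    @end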
For another example of a CALayer-based user interface, my iPhone application's entire equation entry interface is composed of CALayers, including the menu that slides up from below. Performance is slightly better than that of my previous UIView-based implementation, but the same code also works within my preliminary desktop version of the application.
For a drawing program, I would imagine it would be important to hold a buffer of the bitmap data. The only issue with using a CALayer is that the contents property is a CGImageRef. Turning that back into a graphics context for further drawing can be a bit of a pain. You'd have to initialize a new context, draw the bitmap data into it, then do whatever drawing operations you wanted, and finally turn that back into a CGImageRef. You probably wouldn't be able to avoid a number of pretty large memory allocations, which is virtually guaranteed to slow your program way down.
I would consider holding an off-screen buffer for each layer. Take a look at the Quartz CGLayerRef object. I think it probably does what you want to do: it's an off-screen buffer that holds things you might want to draw repeatedly. You can also quickly get a CGContextRef whenever you need it so you can do additional drawing. And you can always use that CGContextRef with NSGraphicsContext if you want to use Cocoa drawing methods.
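A rough sketch of that idea, where destContext stands in for whatever CGContextRef you are currently drawing into, and width/height for your layer size:

    // Sketch: keep an off-screen CGLayer as a per-layer buffer, then composite it.
    CGLayerRef buffer = CGLayerCreateWithContext(destContext,
                                                 CGSizeMake(width, height), NULL);

    // Draw (and keep drawing over time) into the layer's own context.
    CGContextRef layerCtx = CGLayerGetContext(buffer);
    CGContextSetRGBFillColor(layerCtx, 1.0, 0.0, 0.0, 1.0);
    CGContextFillRect(layerCtx, CGRectMake(10, 10, 50, 50));
    // To use Cocoa drawing instead, wrap layerCtx with
    // +[NSGraphicsContext graphicsContextWithGraphicsPort:flipped:].

    // Composite the buffered contents wherever they are needed.
    CGContextDrawLayerAtPoint(destContext, CGPointZero, buffer);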
When should I use each?
NSImage is an abstract data type that can represent many different types of images, as well as multiple representations of an image. It is often useful when the actual type of image is not important for what you're trying to do. It is also the only image class that AppKit will accept in its APIs (NSImageView and so forth).
CGImage can only represent bitmaps. If you need to get down and dirty with the actual bitmap data, CGImage is an appropriate type to use. The operations in Core Graphics, such as blend modes and masking, require CGImageRefs. CGImageRefs can be used to create NSBitmapImageReps, which can then be added to an NSImage.
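That last conversion is short; a sketch, assuming cgImage comes from earlier Core Graphics work:

    // Sketch: bring a CGImage into AppKit via NSBitmapImageRep.
    NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCGImage:cgImage];

    NSImage *image = [[NSImage alloc] initWithSize:[bitmapRep size]];
    [image addRepresentation:bitmapRep];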
I think the documentation describes a CIImage best:
Although a CIImage object has image data associated with it, it is not an image. You can think of a CIImage object as an image “recipe.” A CIImage object has all the information necessary to produce an image, but Core Image doesn’t actually render an image until it is told to do so. This “lazy evaluation” method allows Core Image to operate as efficiently as possible.
CIImages are the type required to use the various GPU-optimized Core Image filters that come with Mac OS X, but, like CGImageRefs, they can also be converted to NSBitmapImageReps.
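As an illustration of that lazy pipeline, a sketch that applies a Gaussian blur and only forces rendering when the result is converted back for AppKit (inputImage is a CIImage created elsewhere, e.g. from a CGImageRef):

    // Sketch: apply a Core Image filter, then convert the lazy result to an NSImage.
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:inputImage forKey:kCIInputImageKey];
    [blur setValue:@4.0 forKey:kCIInputRadiusKey];
    CIImage *outputImage = [blur valueForKey:kCIOutputImageKey];

    // Rendering actually happens here, when the "recipe" is rasterized.
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithCIImage:outputImage];
    NSImage *result = [[NSImage alloc] initWithSize:[rep size]];
    [result addRepresentation:rep];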