Best practice for copying and serializing Quartz references - Objective-C

I have objects containing Quartz-2D references (describing colors, fill patterns, gradients and shadows) in Cocoa. I would like to implement the NSCoding protocol in my objects and thus need to serialize those opaque Quartz-2D structures.
Possible solutions might be:
Define a set of properties in my objects that allow me to set up the data structures from scratch whenever they are needed. Those can then be serialized easily. Example: store four floats for red, green, blue, and alpha, then use CGColorCreate. Disadvantage: duplication of information, with potential consistency and (so far minor) space-consumption issues. I would also need to write property setters manually that recreate the Quartz structure whenever a component changes, which would bloat my code substantially.
Read out the properties using Quartz functions. Example: use CGColorGetComponents for colors. Disadvantage: this works for colors, but there are no equivalent functions for other structures, so I don't see how this could work for things like gradients, shadings, shadows, etc.
Read out the properties directly from the raw, opaque structures. Disadvantage: as the documentation says, the structures are supposed to be opaque, so if something changes under the hood, my code will break. (Apple certainly would not have provided a function like CGColorGetComponents if reading the structures directly were the intended approach.) Furthermore, things like the CGFunctionRef inside a CGShadingRef would really be asking for trouble.
What is the best practice for serializing Quartz structures?

The answer pretty much varies from one class to the next:
CGImage: Use CGImageDestination to make a TIFF file of it. (Equivalent to NSImage's TIFFRepresentation method.)
CGPath: Write an applier function you can use to describe the path's elements as, say, PostScript code. Write a simple interpreter to go the other direction.
CGColorSpace: You can export the ICC representation.
CGColor: As you described, but don't forget to include the color space (there's a sketch of this below).
CGLayer: Convoluted: Make a bitmap context, draw the layer into it, and dump an image of the context, then serialize that.
CGFont: The name should be enough for most applications. If you're being really fancy (i.e., using the variations feature), you'll want to include the font's variations dictionary. You'll have to maintain your knowledge of the font size separately, since CGFont doesn't have one and CGContext doesn't let you get the one you set in it.
CGPDFDocument: From a quick look, it looks like CGPDFObjects are all immutable, so you'd just archive the original PDF data or the URL you got it from.
CGGradient, CGPattern, CGShading, and most other classes: Yup, you're screwed. You'll just need to maintain all of the information you created the object with separately.
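For the CGColor case, a minimal sketch of the encode/decode pair might look like the following. The method names and keys are my own, and the ICC-profile calls shown are the older spellings; newer SDKs use CGColorSpaceCopyICCData and CGColorSpaceCreateWithICCData instead:

// Encode: store the component values plus the color space's ICC profile.
- (void)encodeColor:(CGColorRef)color withCoder:(NSCoder *)coder {
    size_t count = CGColorGetNumberOfComponents(color);
    const CGFloat *components = CGColorGetComponents(color);
    NSMutableArray *values = [NSMutableArray arrayWithCapacity:count];
    for (size_t i = 0; i < count; i++) {
        [values addObject:@(components[i])];
    }
    [coder encodeObject:values forKey:@"colorComponents"];
    NSData *icc = CFBridgingRelease(CGColorSpaceCopyICCProfile(CGColorGetColorSpace(color)));
    [coder encodeObject:icc forKey:@"colorSpaceICC"];
}

// Decode: rebuild the color space from the ICC data, then the color from its components.
- (CGColorRef)newColorWithCoder:(NSCoder *)coder {
    NSArray *values = [coder decodeObjectForKey:@"colorComponents"];
    NSData *icc = [coder decodeObjectForKey:@"colorSpaceICC"];
    CGColorSpaceRef space = CGColorSpaceCreateWithICCProfile((__bridge CFDataRef)icc);
    CGFloat components[values.count];
    for (NSUInteger i = 0; i < values.count; i++) {
        components[i] = [values[i] doubleValue];
    }
    CGColorRef color = CGColorCreate(space, components);
    CGColorSpaceRelease(space);
    return color;   // caller owns the returned reference
}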

When to use multiple MTLRenderCommandEncoders to perform my Metal rendering?

I'm learning Metal, and there's a conceptual question that I'm trying to wrap my head around: at what level, exactly, should my code handle successive drawing operations that require different pipeline states? As I understand it (from answers like this: https://stackoverflow.com/a/43827775/2752221), I can use a single MTLRenderCommandEncoder and change its pipeline state, the vertex buffer it's using, etc., between calls to drawPrimitives:, and the encoder state that was current at the time of each call to drawPrimitives: will be preserved. So that's great. But it also seems like the design of Metal is such that one can make multiple MTLRenderCommandEncoder instances, and use them to sequentially throw batches of commands into a MTLCommandBuffer. Given that the former works – using one MTLRenderCommandEncoder and changing its state – why would one do the latter? Under what circumstances is it correct to do the former, and under what circumstances is it necessary to do the latter? What is an example of a situation where the latter would be necessary/appropriate?
If it matters, I'm working on a macOS app, using Objective-C. Thanks.
Ignoring multithreaded encoding cases, which are somewhat advanced, the main reason you'd want to create multiple render command encoders during a frame is because you need to change which textures you're rendering to.
You'll notice that you need to provide a render pass descriptor when creating a render command encoder. For this reason, we often say that the sequence of commands belonging to a particular encoder constitute a render pass. The attachments of that descriptor refer to the textures that will be written to by the commands encoded by the encoder.
Many different techniques, including shadow mapping and postprocessing effects like bloom, require multiple passes to produce. Since you can't change attachments in the midst of a pass, creating a new encoder is the only way to encode multiple passes in a frame.
Relatedly, you should ordinarily use one command buffer per frame. You can, however, sometimes reduce frame time by splitting your passes across multiple command buffers, but this is highly dependent on the shape of your workload and should only be done in tandem with profiling, as it's not always an optimization.
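To make that concrete, here's a rough sketch of a frame that encodes a shadow pass and a scene pass into the same command buffer, each with its own encoder because each writes to different attachments. The texture, pipeline, and view variables are assumed to exist:

// Pass 1: render geometry into a shadow-map depth texture.
MTLRenderPassDescriptor *shadowPass = [MTLRenderPassDescriptor renderPassDescriptor];
shadowPass.depthAttachment.texture = shadowMapTexture;          // assumed MTLTexture
shadowPass.depthAttachment.loadAction = MTLLoadActionClear;
shadowPass.depthAttachment.storeAction = MTLStoreActionStore;

id<MTLRenderCommandEncoder> shadowEncoder =
    [commandBuffer renderCommandEncoderWithDescriptor:shadowPass];
[shadowEncoder setRenderPipelineState:shadowPipelineState];     // assumed pipeline state
// ... set buffers and issue drawPrimitives: calls for the shadow casters ...
[shadowEncoder endEncoding];

// Pass 2: render the scene to the drawable, sampling the shadow map.
MTLRenderPassDescriptor *scenePass = view.currentRenderPassDescriptor;  // e.g. from an MTKView
id<MTLRenderCommandEncoder> sceneEncoder =
    [commandBuffer renderCommandEncoderWithDescriptor:scenePass];
[sceneEncoder setRenderPipelineState:scenePipelineState];
[sceneEncoder setFragmentTexture:shadowMapTexture atIndex:0];
// ... issue drawPrimitives: calls for the visible scene ...
[sceneEncoder endEncoding];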
In addition to Warren's answer, another way to look at the question is by examining the API. A number of Metal objects are created from descriptors. The properties of the descriptor at the time an object is created from it govern that object for its lifetime. Those are aspects of the object that can't be changed after creation.
By contrast, the object will have various setter methods to modify other properties over its lifetime.
For a render command encoder, the properties that are fixed for its lifetime are those specified by the MTLRenderPassDescriptor used to create it. If you want to render with different values for any of those properties, the only way to do so is to create a new encoder from a different descriptor. On the other hand, if you can do everything you need/want to do by using the encoder's setter methods, then you don't need a new encoder.
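A small sketch of that distinction (pipeline and buffer variables are assumed): the attachments come from the descriptor and are fixed for the encoder's lifetime, while pipeline state, buffers, and similar settings can be changed freely between draw calls:

MTLRenderPassDescriptor *pass = [MTLRenderPassDescriptor renderPassDescriptor];
pass.colorAttachments[0].texture = targetTexture;   // fixed once the encoder is created
pass.colorAttachments[0].loadAction = MTLLoadActionClear;
pass.colorAttachments[0].storeAction = MTLStoreActionStore;

id<MTLRenderCommandEncoder> encoder =
    [commandBuffer renderCommandEncoderWithDescriptor:pass];

// Mutable encoder state: change it as often as you like between draws.
[encoder setRenderPipelineState:pipelineA];
[encoder setVertexBuffer:vertexBufferA offset:0 atIndex:0];
[encoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:3];

[encoder setRenderPipelineState:pipelineB];
[encoder setVertexBuffer:vertexBufferB offset:0 atIndex:0];
[encoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:3];

[encoder endEncoding];   // to render into a different texture, build a new descriptor and a new encoder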

Vulkan: Using buffer aliasing to implement a defragmentation scheme

Say I have one VkBuffer bound to every device allocation, and use appropriate combinations of vkCmdCopyBuffer to perform defragmentation of an arena block-by-block.
Say an arena may contain linear and non-linear data in any appropriately-aligned arrangement. Due to the immutability of VkImage after binding, defragmentation will involve constructing and binding new VkImages at the new locations of image data that have been moved.
None of the resources within an arena undergoing defragmentation are bound to anything, or could be considered "in-use".
This isn't difficult to implement, though I have a concern:
Is it UB to use vkCmdCopyBuffer to move an image's data (so as to avoid redundant layout transitions), then construct a new VkImage at the new location?
My thoughts are that perhaps an implementation would do something strange, like rely on absolute device addresses within some internal bookkeeping structure, making it UB to treat image data as POD until bound to a new object.
Well, let's look at this systematically.
So you find a suitable destination area for your image. You do a vkCmdCopyBuffer to copy from the source area to the destination area. Now you create a new VkImage for that destination area, and the initial layout you specify is... what?
See, there are only two valid initialLayout values in VkImageCreateInfo: undefined or preinitialized. And preinitialized only works for images that use linear tiling, since there is no well-defined layout for optimally tiled images.
So you can't use the preinitialized layout. And using the undefined layout means that the next image transition you perform will potentially mangle whatever data is there. Now, the undefined layout might happen to work on some implementations: those that don't care about layout at all, or cases where the source image was already in the general layout.
But none of that is guaranteed to work by the standard. As far as the standard is concerned, if you set the layout to be undefined, then the data will not be preserved. So regardless of questions of buffer/image aliasing, this can't work.
You have to create the VkImage at the destination, then use vkCmdCopyImage to copy from the source image to the destination.
It should also be noted that, even if the layout issue worked, the aliasing rules tell us that copying from non-host-accessible memory (i.e., images using optimal tiling, or in a layout other than general or preinitialized) to host-accessible memory yields undefined values. So even if the layout issue weren't a problem, the copy itself doesn't work. In theory, at least.
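For what it's worth, a bare-bones sketch of the image-to-image route (plain C; the command buffer, image handles, and dimensions are assumed, and the pipeline barriers that transition both images into the transfer layouts are omitted):

/* Both images must already be in the appropriate transfer layouts via barriers. */
VkImageCopy region = {
    .srcSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 },
    .srcOffset      = { 0, 0, 0 },
    .dstSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 },
    .dstOffset      = { 0, 0, 0 },
    .extent         = { width, height, 1 },   /* assumed image extent */
};
vkCmdCopyImage(cmd,
               srcImage, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
               dstImage, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
               1, &region);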

Save MKOverlay objects in Core Data

I have a large MKOverlay that I would like to be saved in Core Data so that I don't have to create it later. Since this isn't one of the types that you can choose in Core Data, how do I go about saving it?
Do I need to somehow encode it first?
Do I then need to decode it when using?
What kind of object do I select in core data when creating a new property?
Thanks guys.
If you do not need to query for different overlays and you're not using core data elsewhere in your project, then you're probably better off caching the overlay on disk as an encoded NSArray.
However, if you're already using Core Data or you're caching multiple overlays then you can encode/decode the overlay in a field of type NSData. Add additional fields to the entity so you can query for the specific overlay you're looking for.
In iOS 5, you can enable optional storage of NSData fields in an external file by selecting the "Allows External Storage" option. Core Data will apply a size-based heuristic to determine if a blob or external file will result in better performance.
MKOverlay conforms to NSCoding, so you can encode and decode an entire array of MKOverlay objects using NSKeyedArchiver and store the result in a binary field in your entity. You'll likely want + (NSData *)archivedDataWithRootObject:(id)rootObject on NSKeyedArchiver and + (id)unarchiveObjectWithData:(NSData *)data on NSKeyedUnarchiver.
See the Archives section in the Archives and Serializations Programming guide for details of creating a keyed archive at: http://developer.apple.com/library/ios/#documentation/Cocoa/Conceptual/Archiving/Articles/archives.html
You can write a custom accessor for the entity's binary field that encodes and decodes the overlay array for you. Another option is to create a value transformer that encapsulates the encoding and decoding operations. The end result would be an overlays array property that you can set and read via entity.overlays.
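A minimal sketch of the archive/unarchive round trip, assuming an entity with a binary (NSData) attribute named overlayData and overlays such as MKPolyline/MKPolygon that support coding:

// Archive the overlays into the entity's binary attribute.
NSArray *overlays = mapView.overlays;                         // assumed MKMapView
NSData *data = [NSKeyedArchiver archivedDataWithRootObject:overlays];
[managedObject setValue:data forKey:@"overlayData"];          // assumed NSManagedObject

// Later: unarchive and put the overlays back on the map.
NSData *stored = [managedObject valueForKey:@"overlayData"];
NSArray *restored = [NSKeyedUnarchiver unarchiveObjectWithData:stored];
[mapView addOverlays:restored];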
I believe you can use Apple's NSCoding machinery to convert the object to and from a serialized state. Core Data can store objects, but NSCoding lets you save any class that implements it anywhere, including as a string sent to a server, a file written to disk, or, if you're as bad a programmer as me, an NSUserDefaults entry.
Edit: You may have to implement NSCoding in your own class based on MKOverlay by adding read and write methods; I'm uncertain.
Why not instead save the properties (size, color, coordinates, etc. can all be described with NSNumbers, which Core Data stores natively) and recreate the MKOverlay when needed? I think that's a much more efficient approach, to be honest. I'm not sure how much of an impact creating an object has, so prove me wrong if I'm wrong.
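For example, a sketch of the rebuild step, assuming a hypothetical "RoutePoint" entity whose latitude and longitude attributes are NSNumbers and whose fetched objects are already sorted in drawing order:

#import <MapKit/MapKit.h>

// Recreate an MKPolyline overlay from fetched "RoutePoint" managed objects
// (hypothetical entity with "latitude" and "longitude" NSNumber attributes).
static MKPolyline *PolylineFromPoints(NSArray *points) {
    CLLocationCoordinate2D *coords = malloc(sizeof(CLLocationCoordinate2D) * points.count);
    for (NSUInteger i = 0; i < points.count; i++) {
        coords[i].latitude  = [[points[i] valueForKey:@"latitude"] doubleValue];
        coords[i].longitude = [[points[i] valueForKey:@"longitude"] doubleValue];
    }
    MKPolyline *line = [MKPolyline polylineWithCoordinates:coords count:points.count];
    free(coords);
    return line;
}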
You need to take the large dataset that composes the overlay and turn those individual data nodes into NSManagedObjects to be stored in CoreData.
I mean, you probably COULD just NSCoder the entire thing into one giant data blob, but at that point, you might as well just write the thing to a flat file (which frankly might be better if all you want to do is read/write it without changing it).
Don't use Core Data unless you're going to be doing legit querying or piecemeal changes to the dataset.

How do I use Core Data with the Cocoa Text Input system?

Hobbyist Cocoa programmer here. Have been looking around all the usual places, but this seems relatively under-explained:
I am writing something a little out of the ordinary. It is much simpler than, but similar to, a desktop publishing app. I want editable text boxes on a canvas, arbitrarily placed.
This is document-based and I’d really like to use Core Data.
Now, the Cocoa text-handling system seems to deal with a four-class structure: NSTextStorage, NSLayoutManager, NSTextContainer and finally NSTextView. I have looked into these and know how to use them, sort of. I have been making some prototypes, and it works for simple apps.
The problem arises when I get to persistence. I don't know how to store the contents of NSTextStorage (i.e. the actual text) in my managed object context, whether by way of Cocoa Bindings or something else.
I have considered overriding method pairs like -words, -setWords: in these objects. This would let me link the words to a string, which I know how to store in Core Data. However, I'd have to override every method that affects the text - and that seems a little much.
Thankful for any insights.
NSTextStorage is just a subclass of NSMutableAttributedString, which supports the NSCoding protocol, so you can save it in Core Data as a transformable attribute using the default NSKeyedUnarchiveFromData transformer.
I'm pretty sure that holds true for all the other classes you might want to save, with the possible exception of the views. (I've never tried to store them in Core Data, but it's theoretically possible to do so.)
Any class that implements/inherits NSCoding or has an initWithCoder: method can be stored in Core Data as a transformable attribute.
I suggest binding the value binding of a text view to a string attribute of one of your model entities, or the attributedString binding to a transformable attribute. This hooks up the view to the model without you having to pass text back and forth yourself.
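If you go the transformable-attribute route without bindings, a minimal sketch might look like this, assuming a managed object (textBox) with a transformable attribute named contents that uses the default keyed-archiver transformer, and an NSTextView whose text storage should be persisted:

// Saving: snapshot the view's text storage into the managed object.
NSAttributedString *snapshot = [textView.textStorage copy];   // textView is an NSTextView
[textBox setValue:snapshot forKey:@"contents"];

// Restoring: push the stored attributed string back into the text storage.
NSAttributedString *stored = [textBox valueForKey:@"contents"];
[textView.textStorage setAttributedString:stored];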

Sample (preferably simple) subclass of NSCoder?

I'm trying to create a subclass of NSCoder, but I really don't know where to start. Apple's documentation lists which methods are required, but not much else. Maybe my Google-fu is weak, but I can't find any examples of an implementation of, e.g. encodeValueOfObjCType:at:, anywhere. (Though I assume it involves a lot of cases.)
Anyone know of a sample subclass of NSCoder I can look at, or have an idea of what a case or two of encodeValueOfObjCType:at: and decodeValueOfObjCType:at: should look like?
I just open-sourced an NSCoder subclass here.
It's basically a replica of the deprecated NSArchiver.
Should get anyone who stumbles onto this question started.
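To give a flavour of what a case or two might look like, here's a sketch of -encodeValueOfObjCType:at: for a subclass that appends raw bytes to an NSMutableData ivar (_data is an assumed ivar; a real implementation needs many more type codes, the matching -decodeValueOfObjCType:at:, and byte-order handling):

- (void)encodeValueOfObjCType:(const char *)type at:(const void *)addr {
    switch (type[0]) {
        case 'i': {                              // int
            int value = *(const int *)addr;
            [_data appendBytes:&value length:sizeof(value)];
            break;
        }
        case 'f': {                              // float
            float value = *(const float *)addr;
            [_data appendBytes:&value length:sizeof(value)];
            break;
        }
        case '@': {                              // Objective-C object
            __unsafe_unretained id object = *(__unsafe_unretained id const *)addr;
            [self encodeObject:object];          // lets the object run its own encodeWithCoder:
            break;
        }
        default:
            [NSException raise:NSInvalidArgumentException
                        format:@"Unhandled Objective-C type code: %s", type];
    }
}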
I've also been wanting to (ab)use NSCoder to generate simpler XML than what NSKeyedArchiver produces and have implemented some classes for it. The classes are called RWPlainXMLTreeEncoder and RWPlainXMLTreeDecoder, and I've written some test code for them too.
RWPlainXMLTreeEncoder assumes that the object graph you're encoding is a tree (in case the same object is encoded twice, the decoded tree will contain two different copies instead of one shared copy; if you try to encode a cyclic graph it raises an exception). Per encoded object it generates an XML element that looks roughly like the one for this example, an encoding of an array containing the string "A string":
<ROOT type="#NSArray"><NS.object.0 type="#NSString"><NS.bytes>4120737472696E67</NS.bytes></NS.object.0></ROOT>
I wanted to further improve the above by using a different method instead of the object's own encodeWithCoder: for objects such as arrays and strings, so that the above would become:
<ROOT type="array"><item.0 type="string">A string</item.0></ROOT>
I'm however not sure if I will continue working on this. My overall goal was to have a fairly generic, simple way of saving an object tree to a file that leverages the encodeWithCoder: methods I've already written, while producing a file that is not as Cocoa-dependent as one produced with NSKeyedArchiver. This would allow others to write applications that open those files on other platforms.
But I've now come to understand there have been similar efforts that may already be more advanced anyway; furthermore, since XML is a document markup language, it may not be the best target format, and some non-markup format might be better suited.
Nevertheless, if you want to continue with this or have some other reason to look at a fairly simple NSCoder subclass, feel free to use my code. You could also take a look at MAKeyedArchiver. Oh, and my code is covered by a BSD-style license (at least the version that is in SVN revision 424 is, I might change this for future versions). Improvements and feedback are welcomed.