kUTTypeAppleICNS lossy? (Objective-C)

Just a (maybe stupid) question: why does saving an NSImage as kUTTypeTIFF or kUTTypePNG (with kCGImageDestinationLossyCompressionQuality set to 1.0) produce a lossless file, while kUTTypeAppleICNS makes weird icons?
The most noticeable result I got is loading the standard (i.e. not customized) macOS trash icon (/System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/TrashIcon.icns) as an NSImage and trying to write it back to file.
(I use the standard procedure: CGImageDestinationCreateWithURL + CGImageDestinationAddImage + CGImageDestinationFinalize, roughly as sketched below.)
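For reference, a minimal sketch of that round trip (the output path is hypothetical):
#import <Cocoa/Cocoa.h>
#import <ImageIO/ImageIO.h>
#import <CoreServices/CoreServices.h>

// Load the stock trash icon and write it straight back out as ICNS.
NSImage *icon = [[NSImage alloc] initWithContentsOfFile:
    @"/System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/TrashIcon.icns"];
CGImageRef cgImage = [icon CGImageForProposedRect:NULL context:nil hints:nil];

NSURL *outURL = [NSURL fileURLWithPath:@"/tmp/TrashIcon-copy.icns"]; // hypothetical path
CGImageDestinationRef dest = CGImageDestinationCreateWithURL((__bridge CFURLRef)outURL,
                                                             kUTTypeAppleICNS, 1, NULL);
NSDictionary *options = @{ (__bridge NSString *)kCGImageDestinationLossyCompressionQuality : @1.0 };
CGImageDestinationAddImage(dest, cgImage, (__bridge CFDictionaryRef)options);
CGImageDestinationFinalize(dest);
CFRelease(dest);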
Thank you

Related

MTKTextureLoader messes up PNG RGB vs BGR formats

I'm trying to port my OpenGL code to Metal. As part of it I need to draw PNG textures, and I'm using MTKTextureLoader to load them. Here is my pipeline code:
texturePipelineDescriptor.vertexFunction = textureVertexFunc;
texturePipelineDescriptor.fragmentFunction = textureFragmentFunc;
texturePipelineDescriptor.colorAttachments[0].pixelFormat = MTLPixelFormatRGBA8Unorm;
texturePipelineDescriptor.colorAttachments[0].blendingEnabled = YES;
texturePipelineDescriptor.colorAttachments[0].rgbBlendOperation = MTLBlendOperationAdd;
texturePipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = MTLBlendFactorSourceAlpha;
texturePipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = MTLBlendFactorOneMinusSourceAlpha;
And here is my loader code:
NSData* imageData = [NSData dataWithBytes:imageBuffer length:imageBufferSize];
id<MTLTexture> newMTLTexture = [m_metal_renderer.metalTextureLoader newTextureWithData:imageData options:nil error:&error];
And here is the result. The entire image should be orange, like the small parts.
If I change this line:
texturePipelineDescriptor.colorAttachments[0].pixelFormat = MTLPixelFormatRGBA8Unorm;
To this line:
texturePipelineDescriptor.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;
I will get the following results:
As you can see, the images have their R and B channels swapped. Yet when I load the textures, in ALL of them
newMTLTexture.pixelFormat = MTLPixelFormatBGRA8Unorm;
regardless of whether they are drawn correctly or not.
I would suspect there is something wrong with the PNG files, but they seem absolutely normal in any PNG viewer.
Any clues how I can resolve this? Is it a bug in MTKTextureLoader? Is there another way of loading PNG pictures?
P.S. The software is written for macOS, not iOS.
I found the problem. It seems that some of the PNG files were stored in RGB (24-bit) format, while others were stored in RGBA (32-bit) format. All colors became correct after converting the images to the 32-bit format.
I would expect MTKTextureLoader to be able to handle that, but apparently it does not. If someone knows the option I missed, or another way to correctly load those PNGs into Metal (other than adding the 4th channel manually), that would be great!
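One possible workaround (a sketch, not a documented MTKTextureLoader option; the function name is made up): force-decode the PNG into an explicit 32-bit RGBA bitmap with CoreGraphics before creating the texture, so 24-bit RGB files are padded to RGBA up front:
#import <MetalKit/MetalKit.h>
#import <ImageIO/ImageIO.h>

static id<MTLTexture> LoadPNGAsRGBA(MTKTextureLoader *loader, NSData *imageData)
{
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
    CGImageRef original = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    size_t width = CGImageGetWidth(original);
    size_t height = CGImageGetHeight(original);

    // Redraw into an explicit 32-bit RGBA context; this adds an opaque
    // alpha channel to 24-bit RGB images.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 4 * width,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), original);
    CGImageRef rgbaImage = CGBitmapContextCreateImage(context);

    NSError *error = nil;
    id<MTLTexture> texture = [loader newTextureWithCGImage:rgbaImage options:nil error:&error];

    CGImageRelease(rgbaImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(original);
    CFRelease(source);
    return texture;
}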

Does vkCmdCopyImageToBuffer work when source image uses VK_IMAGE_TILING_OPTIMAL?

I have read (after running into the limitation myself) that for copying data from the host to a VK_IMAGE_TILING_OPTIMAL VkImage, you're better off using a VkBuffer rather than a VkImage as the staging resource, to avoid restrictions on mipmap and layer counts.
So, when it came to implementing a glReadPixels-esque piece of functionality to read the results of a render-to-texture back to the host, I thought that reading to a staging VkBuffer with vkCmdCopyImageToBuffer instead of using a staging VkImage would be a good idea.
However, I haven't been able to get it to work yet: I'm seeing most of the intended image, but with rectangular blocks of the image in incorrect locations, and even some bits duplicated.
There is a good chance that I've messed up my synchronization or layout transitions somewhere and I'll continue to investigate that possibility.
However, I couldn't figure out from the spec whether using vkCmdCopyImageToBuffer with an image source using VK_IMAGE_TILING_OPTIMAL is actually supposed to 'un-tile' the image, or whether I should actually expect to receive a garbled implementation-defined image layout if I attempt such a thing.
So my question is: Does vkCmdCopyImageToBuffer with a VK_IMAGE_TILING_OPTIMAL source image fill the buffer with linearly tiled data or optimally (implementation defined) tiled data?
Section 18.4 describes the layout of the data in the source/destination buffers, relative to the image being copied from/to. This is outlined in the description of the VkBufferImageCopy struct. There is no language in this section which would permit different behavior for tiled images.
The specification even has pseudocode for how copies work (this is for non-block-compressed images):
rowLength = region->bufferRowLength;
if (rowLength == 0)
    rowLength = region->imageExtent.width;

imageHeight = region->bufferImageHeight;
if (imageHeight == 0)
    imageHeight = region->imageExtent.height;

texelSize = <texel size taken from the src/dstImage>;

address of (x,y,z) = region->bufferOffset + (((z * imageHeight) + y) * rowLength + x) * texelSize;
where x,y,z range from (0,0,0) to region->imageExtent.{width, height, depth}.
The x,y,z part is the location of the texel in question within the image. Since this location does not depend on the tiling of the image (as evidenced by the lack of anything stating that it would), buffer/image copies work equally well with both kinds of tiling.
Also, do note that this specification is shared between vkCmdCopyImageToBuffer and vkCmdCopyBufferToImage. As such, if a copy works one way, it by necessity must work the other.
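For illustration, a minimal sketch (plain C; error handling and the staging-buffer allocation are omitted, and cmd, image, buffer, width, and height are assumed to exist) of recording such a readback:
#include <vulkan/vulkan.h>

void record_readback(VkCommandBuffer cmd, VkImage image, VkBuffer buffer,
                     uint32_t width, uint32_t height)
{
    // Make the color-attachment writes visible and move the image into a
    // layout valid as a transfer source.
    VkImageMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_TRANSFER_READ_BIT,
        .oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .newLayout = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image = image,
        .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
    };
    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                         VK_PIPELINE_STAGE_TRANSFER_BIT,
                         0, 0, NULL, 0, NULL, 1, &barrier);

    // bufferRowLength/bufferImageHeight of 0 mean "tightly packed" per the
    // addressing formula quoted above, independent of the image's tiling.
    VkBufferImageCopy region = {
        .bufferOffset = 0,
        .bufferRowLength = 0,
        .bufferImageHeight = 0,
        .imageSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 },
        .imageOffset = { 0, 0, 0 },
        .imageExtent = { width, height, 1 },
    };
    vkCmdCopyImageToBuffer(cmd, image, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
                           buffer, 1, &region);
}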

NSImage loading a portion of image

The requirement is like this:
I get a single large PNG image for a button; this one image contains the sub-images for the hover, clicked, and mouse-exit states that need to be displayed.
The single PNG file is 1024 x 28, so each sub-image is about 256 x 28.
I have been googling the best possible approach but couldn't figure out how to achieve this.
I have the following approach in mind:
NSImage *pBtnImage[MAX_BUTTON_IMAGES];
for (i = 0; i < 4; i++) {
    pBtnImage[i] = [[NSImage alloc] initWithData:??????];
}
I want to know what I should pass as the NSData parameter.
Is it possible to load a single image and clip it accordingly as and when needed?
Thanks in advance
There's no simple Cocoa-supported way to read only a sub-rectangle of the image from its data. It's a simple matter, however, to read the whole image in and composite only a select rectangle of it. Thing is, with all the available API, you might be better off just using the standard +[NSImage imageNamed:] method to read the images in individually and let the OS handle caching.
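The whole-image-plus-sub-rectangle approach is a one-liner per state. A sketch, assuming a hypothetical 1024 x 28 strip named button_states, drawn from inside a view's -drawRect::
// Load the full 1024 x 28 strip once (cached by the OS), then composite
// only the 256 x 28 slice for the state you need.
NSImage *strip = [NSImage imageNamed:@"button_states"]; // hypothetical image name
NSInteger state = 1; // e.g. 0 = normal, 1 = hover, 2 = clicked, 3 = mouse exit
NSRect slice = NSMakeRect(state * 256.0, 0.0, 256.0, 28.0);
NSRect destRect = NSMakeRect(0.0, 0.0, 256.0, 28.0); // wherever the button is drawn

[strip drawInRect:destRect
         fromRect:slice
        operation:NSCompositingOperationSourceOver
         fraction:1.0];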
What actual, measured performance problem are you trying to solve? Does one really exist, or is this a case of premature optimization?

Alternative to CGPathGetPathBoundingBox() for iPad (iOS 3.2)

I'm trying to get my head around using QuartzCore to render semi-complex text/gradient/image UITableViewCell composites. Thankfully, Opacity will let me visually build the view and then spit out source code to drop into Cocoa Touch. Trouble is, Opacity assumes the code is running on iOS 4, which is a problem if you want to draw Quartz views on an iPad (iOS 3.2).
For me, the offending function is CGPathGetPathBoundingBox ... would someone mind pointing me to a suitable alternative or workaround for this (presumably simple) function?
If you care to have some context (no pun intended), here you go:
transform = CGAffineTransformMakeRotation(1.571f);
tempPath = CGPathCreateMutable();
CGPathAddPath(tempPath, &transform, path);
pathBounds = CGPathGetPathBoundingBox(tempPath);
point = pathBounds.origin;
point2 = CGPointMake(CGRectGetMaxX(pathBounds), CGRectGetMinY(pathBounds));
transform = CGAffineTransformInvert(transform);
The alternative is to iterate over the points of the path and note down the leftmost, rightmost, topmost, and bottommost co-ordinates of the anchor points yourself, then work out the origin and size from those numbers.
You should wrap this in a function named something like MyPathGetPathBoundingBox, and use that until you drop support for iOS 3.x. That will make it easy to switch to CGPathGetPathBoundingBox once you can.
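A sketch of what that function could look like, using CGPathApply (available since iOS 2) to walk the path's elements. Note it also counts curve control points, so for curved paths it may over-estimate the box slightly compared to CGPathGetPathBoundingBox:
#import <CoreGraphics/CoreGraphics.h>

// Applier callback: grow the rect pointed to by `info` to cover every point
// the path element carries (anchor points and, for curves, control points).
static void MyBoundsApplier(void *info, const CGPathElement *element)
{
    CGRect *bounds = (CGRect *)info;
    int pointCount = 0;
    switch (element->type) {
        case kCGPathElementMoveToPoint:
        case kCGPathElementAddLineToPoint:      pointCount = 1; break;
        case kCGPathElementAddQuadCurveToPoint: pointCount = 2; break;
        case kCGPathElementAddCurveToPoint:     pointCount = 3; break;
        case kCGPathElementCloseSubpath:        pointCount = 0; break;
    }
    for (int i = 0; i < pointCount; i++) {
        CGRect pointRect = CGRectMake(element->points[i].x, element->points[i].y, 0.0f, 0.0f);
        *bounds = CGRectIsNull(*bounds) ? pointRect : CGRectUnion(*bounds, pointRect);
    }
}

static CGRect MyPathGetPathBoundingBox(CGPathRef path)
{
    CGRect bounds = CGRectNull;
    CGPathApply(path, &bounds, MyBoundsApplier);
    return bounds;
}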

Label and LabelAtlas comparison? LabelAtlas is hard to use

I have some confusion about how to use LabelAtlas. It seems Label consumes a lot more memory than LabelAtlas?
For example, if I create 100 lines of text, each of them created with Label, will they consume more memory than 100 lines of text created with LabelAtlas?
Label *label1 = [[Label alloc] initWithString:@"text1" dimensions:CGSizeMake(0, 0) alignment:UITextAlignmentLeft fontName:@"Arial" fontSize:22];
.....
.....
Label *label100 = [[Label alloc] initWithString:@"text100" dimensions:CGSizeMake(0, 0) alignment:UITextAlignmentLeft fontName:@"Arial" fontSize:22];
will they be the same with
LabelAtlas *label1 = [LabelAtlas labelAtlasWithString:@"text1" charMapFile:@"abc_22c.png" itemWidth:34 itemHeight:40 startCharMap:' '];
........
.......
LabelAtlas *label100 = [LabelAtlas labelAtlasWithString:@"text100" charMapFile:@"abc_22c.png" itemWidth:34 itemHeight:40 startCharMap:' '];
I assume LabelAtlas is less expensive than Label since it uses just one image, whereas Label likely creates an image each time it is created.
I would like to convert all the text from Label to LabelAtlas, but I still don't really understand how to use LabelAtlas in depth; I can hardly display the string I want. I have read a number of examples, and it seems simple, but when I tried it, it did not give me what I expected. Could you show me an example of displaying a long text using LabelAtlas instead of Label? I used LabelAtlas before for my point counter, but it is hard now to display a long string. Thanks in advance
The main difference between CCLabel and CCLabelAtlas is that the atlas version (like all the other atlas classes) uses one big texture with all the letters pre-rendered to draw a string. This means that the drawing is much faster, because if you draw 100 labels, the graphics processor doesn't have to read in 100 textures but can just keep one texture in memory. But it also means that all the letters will be of a fixed size. If you want to get around the fixed-size limitation, use CCBitmapFontAtlas.
And, yes, CCLabel creates one texture for every label, whereas CCLabelAtlas renders the text on the fly using the provided texture (containing all the characters), so using CCLabelAtlas results in lower memory consumption.
In general, try to always use the *Atlas versions of classes. You can start by using the non-atlas versions and then switch to the atlas version when you've progressed a bit and had time to generate the atlas bitmaps. Don't worry too much about it if you're just starting out.
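To round this out, a small usage sketch with the question's own class, selector, and charmap names (assuming abc_22c.png holds 34 x 40 glyphs in ASCII order starting at the space character, and that this runs inside a Layer subclass):
// Every character in the string must exist in the char map; anything
// outside the map's range will render as the wrong glyph.
LabelAtlas *scoreLabel = [LabelAtlas labelAtlasWithString:@"SCORE: 0"
                                              charMapFile:@"abc_22c.png"
                                                itemWidth:34
                                               itemHeight:40
                                             startCharMap:' '];
scoreLabel.position = ccp(240, 160);
[self addChild:scoreLabel];

// Updating is cheap: no new texture is created, the glyph quads are just
// remapped onto the existing atlas.
[scoreLabel setString:@"SCORE: 1500"];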