Does @2x work with [UIImage imageNamed:]? - objective-c

[self.distanceSlider setThumbImage:[UIImage imageNamed:@"handle-slider"] forState:UIControlStateNormal];
Let's take a look at that code.
If I use a Retina display, will the image loaded be handle-slider@2x instead of handle-slider?
Notice that this could raise an issue. Imagine I load an image for the sole purpose of processing it, and I really do want handle-slider specifically rather than handle-slider@2x; having iOS override my decision and arbitrarily load the @2x image would be rather silly.
On the other hand, most of the time I use [UIImage imageNamed:] to populate a button, and in that case it makes perfect sense to pick up the @2x version.
In any case, which path does Apple eventually take, and if possible, what's the reference?
I searched Stack Overflow. Most answers are inconsistent, with some suggesting one behavior and some the other.

The documentation for [UIImage imageNamed:] has exactly the information you want to know.
This method looks in the system caches for an image object with the specified name and returns that object if it exists. If a matching image object is not already in the cache, this method loads the image data from the specified file, caches it, and then returns the resulting object.
On a device running iOS 4 or later, the behavior is identical if the device’s screen has a scale of 1.0. If the screen has a scale of 2.0, this method first searches for an image file with the same filename with an @2x suffix appended to it. For example, if the file’s name is button, it first searches for button@2x. If it finds a 2x, it loads that image and sets the scale property of the returned UIImage object to 2.0. Otherwise, it loads the unmodified filename and sets the scale property to 1.0.
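A quick way to confirm which file was actually picked up (a minimal sketch; distanceSlider and the handle-slider asset are taken from the question) is to check the scale property that imageNamed: sets:

    UIImage *thumb = [UIImage imageNamed:@"handle-slider"];
    // On a Retina screen this logs a scale of 2.0 if handle-slider@2x.png
    // was found in the bundle; otherwise it logs 1.0.
    NSLog(@"loaded size %@ at scale %.1f",
          NSStringFromCGSize(thumb.size), thumb.scale);
    [self.distanceSlider setThumbImage:thumb forState:UIControlStateNormal];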

Related

Vulkan update descriptor every frame

I want to render my scene to a texture and then use that texture in a shader, so I created a framebuffer from an image view and recorded a command buffer for that. I successfully submitted and executed the command buffer on the GPU, but the descriptor for the image view samples as black. I'm creating the descriptor from the image view before the render loop. Is it black because I create it before anything is rendered to the framebuffer? If so, will I have to update the descriptor every frame, or create a new descriptor from the image view every frame? Or is there another way I can do this?
I have read another thread with this title. Please don't mark this as a duplicate, because that thread is about textures and this one is about a texture coming from an image view.
Thanks.
@IAS0601 I will answer the questions from your comment in an answer, as it allows for much longer text and much better formatting. I hope this also answers your original question, but you don't have to treat it as the answer. As I wrote, I'm not sure what you are asking about.
1) In practically all cases, the GPU accesses images through image views. They specify additional parameters which define how the image is accessed (for example, which part of the image is accessed), but it is still the original image that gets accessed. An image view, as the name suggests, is just a view, a list of access parameters. It doesn't have any memory bound to it and it doesn't contain any data (apart from the parameters specified during image view creation).
So when you create a framebuffer and render into it, you render into the original images or, to be more specific, into those parts of the original images which were specified in the image views. For example, say you have a 2D texture with 3 array layers. You create a 2D image view for the middle (second) layer. Then you use this image view during framebuffer creation. Now, when you render into this framebuffer, you are in fact rendering into the second layer of the original 2D texture array.
Another thing - when you later access the same image, and when you use the same image view, you still access the original image. If you rendered something into the image, then you will get the updated data (provided you have done everything correctly, such as performing the appropriate synchronization operations, a layout transition if necessary, etc.). I hope this is what you mean by updating an image view.
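To illustrate the array layer example, here is a minimal sketch (the device, textureArray and colorFormat handles are assumed to have been created elsewhere) of an image view that exposes only the second layer; a framebuffer built from this view renders into exactly that layer:

    // Assumes: VkDevice device, VkImage textureArray (2D, 3 array layers),
    // VkFormat colorFormat -- all created elsewhere.
    VkImageViewCreateInfo viewInfo = {
        .sType    = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
        .image    = textureArray,
        .viewType = VK_IMAGE_VIEW_TYPE_2D,      // plain 2D view into the array image
        .format   = colorFormat,
        .subresourceRange = {
            .aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT,
            .baseMipLevel   = 0,
            .levelCount     = 1,
            .baseArrayLayer = 1,                // the middle (second) layer
            .layerCount     = 1,
        },
    };
    VkImageView middleLayerView;
    vkCreateImageView(device, &viewInfo, NULL, &middleLayerView);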
2) I'm not sure what you mean by updating a descriptor set. In Vulkan, when we update a descriptor set, this means that we specify the handles of the Vulkan resources that should be used through the given descriptor set.
If I understand you correctly - you want to render something into an image. You create an image view for that image and provide that image view during framebuffer creation. Then you render something into that framebuffer. Now you want to read data from that image. You have two options. If you want to access only the one sample location that is associated with the fragment shader's location, you can do this through an input attachment in the next subpass of the same render pass. But this way you can only perform operations which don't require access to multiple texels, for example a color correction.
But if you want to do something more advanced, like blurring or shadow mapping, where you need access to several texels, you must end the render pass and start another one. In this second render pass, you can read data from the original image through a descriptor set. It doesn't matter when this descriptor set was created and updated (when the handle of the image view was specified). As long as you don't change the handles of the resources - meaning you don't create a new image or a new image view - you can use the same descriptor set and you will access the data rendered in the first render pass.
If you have problems accessing the data, for example (as you wrote) you get only black colors, this suggests you didn't perform everything correctly - the render pass load or store ops are incorrect, or the initial and final layouts are incorrect, or synchronization isn't performed correctly. Unfortunately, without access to your project, we can't be sure what is wrong.
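As a concrete illustration of point 2, here is a minimal sketch (device, sampler, renderedView and descriptorSet are assumed to exist already, with a combined image sampler at binding 0) of writing the image view into the descriptor set once; because the descriptor keeps referring to the same VkImageView, it does not need to be updated every frame as long as that view is not recreated:

    // Assumes: VkDevice device, VkSampler sampler, VkImageView renderedView,
    // VkDescriptorSet descriptorSet -- created elsewhere.
    VkDescriptorImageInfo imageInfo = {
        .sampler     = sampler,
        .imageView   = renderedView,
        // Layout the image will be in when the shader samples it.
        .imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
    };
    VkWriteDescriptorSet write = {
        .sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET,
        .dstSet          = descriptorSet,
        .dstBinding      = 0,
        .dstArrayElement = 0,
        .descriptorCount = 1,
        .descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
        .pImageInfo      = &imageInfo,
    };
    // Done once, outside the render loop; the descriptor keeps pointing at the
    // same image view, so data rendered into that image in a previous render
    // pass is what later passes will sample.
    vkUpdateDescriptorSets(device, 1, &write, 0, NULL);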

NSImageView with high-resolution image causes extreme slowdown when resizing the window

I am creating a simple photo filter app for OS X, and I am displaying a photo in an NSImageView (actually two photos on top of each other in two NSImageViews, but the question still applies to a single view). Everything works great, but when I try to resize the window that contains the NSImageViews, the window (which also resizes the NSImageViews) resizes very slowly, at less than 1 fps, which has a negative impact on the user experience. I want resizing windows to be as smooth as possible. When I disable resizing of the image views, the window resizes smoothly, so the cause of the slowdown is those NSImageViews.
I'm loading 20-megapixel images from my DSLR. When I scale them down to a reasonable size for the screen (e.g. 1024x768), they resize smoothly, so the problem is the way NSImageView renders the images: it apparently tries to re-render the 20 MP image every time it needs to redraw it into whatever the target frame of the view happens to be.
How can I make NSImageView rescale more smoothly? Should I feed it with a scaled-down version of my images? I don't want to do that as it's a photo editing app that also targets retina display screens and the viewport would actually be quite large. I can do it, but it's my final option. Other than scaling down, how can I make NSImageView resize faster?
I believe part of the solution you are looking for lies in NSImage's representations. You can add multiple representations to an image with addRepresentation:, and I believe some intelligent selection is done when drawing. In your case, I think you would need to add both representations (the scaled-down bitmap and the full-resolution bitmap) to the NSImage. I strongly suspect drawRect: will then pick the low-resolution version. I would make sure "scale up or down" is selected in the NSImageView, because the default is scale down only, which may force your full-resolution image to be used most of the time. There is some discussion of "matching" in Apple's documentation under "Setting the Image Representation Selection Criteria" for NSImage, although at first sight this may not be sufficient.
Then, whenever you need to do something with the full image, you would request the full resolution image by going through the representations ([NSImage representations] returns an array of NSImageRep).
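A minimal sketch of that idea, assuming fullImage is the 20 MP NSImage and imageView is the NSImageView (the 1024x768 proxy size is just a placeholder):

    // Assumes: NSImage *fullImage loaded from the 20 MP file, and
    // NSImageView *imageView with scaling set to "scale up or down".
    NSSize proxySize = NSMakeSize(1024, 768);          // placeholder screen size

    // Draw the full image once into a small bitmap representation.
    NSBitmapImageRep *smallRep =
        [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                pixelsWide:(NSInteger)proxySize.width
                                                pixelsHigh:(NSInteger)proxySize.height
                                             bitsPerSample:8
                                           samplesPerPixel:4
                                                  hasAlpha:YES
                                                  isPlanar:NO
                                            colorSpaceName:NSCalibratedRGBColorSpace
                                               bytesPerRow:0
                                              bitsPerPixel:0];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:
        [NSGraphicsContext graphicsContextWithBitmapImageRep:smallRep]];
    [fullImage drawInRect:NSMakeRect(0, 0, proxySize.width, proxySize.height)
                 fromRect:NSZeroRect
                operation:NSCompositingOperationCopy
                 fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    // One NSImage now holds both the full-resolution and the scaled-down rep;
    // the idea (per the answer above) is that AppKit matches the small rep
    // while the window is being resized.
    [fullImage addRepresentation:smallRep];
    imageView.image = fullImage;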

Recommended approach to load proprietary binary image file into NSImage?

I have a bunch of image files in a proprietary binary format that I want to load into NSImages. The format is not a simple bitmap, but rather a kind of RLE representation mixed with transparency and miscellaneous additional information.
In order to display one of these images in a Cocoa app, I need a way to parse the image file byte by byte and "calculate" a bitmap from it which I will then put into an NSImage.
What is a good approach for doing this in Objective-C/Cocoa?
The task of interpreting image data is handled by the image's representation object(s). To use a proprietary format, you have a few options: (a) create a custom representation class, (b) use NSCustomImageRep with a custom delegate, or (c) use a custom object to translate your image to a supported format, such as a raw bitmap.
If you choose to create a custom representation class, you will create a subclass of NSImageRep as described in Creating New Image Representation Classes. This basically requires that your class register itself and be able to draw the image data. In addition to this, you can override methods to return information about the image, and you will be able to instantiate your images using the normal NSImage methods. This method requires the most work.
Using NSCustomImageRep requires less work than creating a custom implementation. Your delegate object only needs to be able to draw the image at a fixed location. However, you cannot return other information about the image, and you will need to create the NSCustomImageRep object manually before creating the NSImage.
Translating the image into a different format is also simpler than creating a custom representation. It could be as simple as creating a blank NSImage of the proper size and drawing into it. Creating the image is still more complicated since you need to call your translation method, and this will affect efficiency (both future drawing time and memory usage) since you are changing formats, which could be good or bad. You will also lose any association between the image object and its source.
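If you go the translation route, a minimal sketch could look like this (decodeMyFormatIntoBuffer is a hypothetical stand-in for your own RLE decoder, and fileData, width and height are assumed to come from your file):

    // Hypothetical decoder for your RLE format; you would implement this,
    // filling a premultiplied RGBA buffer from the proprietary data.
    extern void decodeMyFormatIntoBuffer(NSData *file, unsigned char *rgba,
                                         NSInteger width, NSInteger height);

    // Assumes: NSData *fileData read from the proprietary file, and that the
    // file header gives you the pixel dimensions.
    NSInteger width = 256, height = 256;               // from your file header
    NSBitmapImageRep *rep =
        [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                pixelsWide:width
                                                pixelsHigh:height
                                             bitsPerSample:8
                                           samplesPerPixel:4
                                                  hasAlpha:YES
                                                  isPlanar:NO
                                            colorSpaceName:NSCalibratedRGBColorSpace
                                               bytesPerRow:width * 4
                                              bitsPerPixel:32];
    decodeMyFormatIntoBuffer(fileData, [rep bitmapData], width, height);

    NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
    [image addRepresentation:rep];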

UIImage change raw pixels from white to clear?

I've tried some code from each of these questions:
How to make one color transparent on a UIImage?
How to mask a UIImage so that white becomes transparent on iphone?
but have been unsuccessful; unfortunately, working with Core Graphics and images is not my strong suit.
How would I go about accessing a UIImage's raw data and changing the white pixels to clear?
How would I go about accessing a UIImage's raw data …?
Look at the documentation.
You'll find that there is no way to get the raw data behind a UIImage. The closest you can get is a CGImage. That will let you get its data provider, which you can ask for a copy of the raw data.
The problem with that solution is that you need to handle every possible configuration (RGBA, ARGB, RGB_, _RGB, RGB, 8-bpc, 16-bpc, etc.) that CGImage supports. That's a lot of work. If you don't do it, then someday, you'll get surprised by an image that somehow doesn't work with your code, or by an OS upgrade changing how the CGImage gets created.
The CGImageCreateWithMaskingColors function, suggested on one of the other questions you linked to, is the correct solution.
One thing that's tripping you up is that the values shown in the accepted answer to that question are generally bogus: they're out of range. The Quartz 2D Programming Guide has more details, in at least two places.
I also argue against including that answer's createMask: method, since it doesn't do what it says it does and is barely useful at all (it's only worth having if the source image may be CMYK, but how likely is that on an iPhone app?). Skip it and create the mask image from the UIImage's CGImage directly.
That answer will probably work just fine once you fix those two problems.
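For reference, a minimal sketch of the CGImageCreateWithMaskingColors route (the 230-255 ranges are an arbitrary "near white" tolerance, and the source image is assumed to have no alpha channel, which that function requires):

    // Assumes: UIImage *source whose CGImage has no alpha channel.
    CGImageRef raw = source.CGImage;
    // {minR, maxR, minG, maxG, minB, maxB}: pixels in these ranges become clear.
    const CGFloat maskingColors[6] = { 230, 255, 230, 255, 230, 255 };
    CGImageRef masked = CGImageCreateWithMaskingColors(raw, maskingColors);
    UIImage *result = [UIImage imageWithCGImage:masked
                                          scale:source.scale
                                    orientation:source.imageOrientation];
    CGImageRelease(masked);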

How to avoid a NSCachedImageRep

I'm working with an NSImage which comes from a PDF. When I initially create the image, it has only one representation, an NSPDFImageRep. This is good. I can work with it, find out how many pages it has, go to a specified page, draw it, etc.
The problem is that as soon as I turn my back, it gets turned into an NSCachedImageRep, which doesn't seem to have multiple pages. Now, if I keep the NSPDFImageRep in a separate variable, it is kept track of, but it isn't associated with the image anymore, so when I change pages and draw the image, it still shows the same page.
What am I missing?
Thanks.
You need to call [image setDataRetained:YES] on the image, so that your original PDF data is kept around, otherwise it will be cached to a bitmap.
If you're still having problems you could turn off caching altogether by using [image setCacheMode:NSImageCacheNever].
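A minimal sketch combining both suggestions (pdfData is assumed to hold the PDF bytes):

    // Assumes: NSData *pdfData containing the PDF.
    NSImage *image = [[NSImage alloc] initWithData:pdfData];
    [image setDataRetained:YES];                 // keep the original PDF data around
    [image setCacheMode:NSImageCacheNever];      // optional: never cache to a bitmap

    NSPDFImageRep *pdfRep =
        (NSPDFImageRep *)[[image representations] objectAtIndex:0];
    [pdfRep setCurrentPage:2];                   // third page (pages are zero-based)
    // Drawing the image now renders the currently selected page.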
Try it on 10.6. The problem has probably evaporated.
Please see the AppKit release notes for details of the NSImage changes.