NSImageView with high-resolution image causes extreme slowdown when resizing the window - objective-c

I am creating a simple photo filter app for OS X and I am displaying a photo in an NSImageView (actually two photos on top of each other in two NSImageViews, but the question applies to a single view too). Everything works fine, but when I resize the window that contains the NSImageViews, the window (which also resizes the NSImageViews) resizes very slowly, at less than 1 fps, which has a serious impact on the user experience. I want window resizing to be as smooth as possible. When I disable resizing of the image views, the window resizes smoothly, so the cause of the slowdown is the NSImageViews.
I'm loading 20-megapixel images from my DSLR. When I scale them down to a reasonable size for the screen (e.g. 1024x768), they resize smoothly, so the problem is the way NSImageView renders the images: I assume it re-renders the full 20 MP image every time it needs to redraw into whatever the current frame of the view is.
How can I make NSImageView rescale more smoothly? Should I feed it a scaled-down version of my images? I'd rather not, as it's a photo editing app that also targets Retina displays and the viewport can be quite large; I can do it, but it's my last resort. Other than scaling down, how can I make NSImageView resize faster?

I believe part of the solution you are looking for is in NSImage's representations. You can add many representations to an image with addRepresentation:, and I believe some intelligent selection is done when drawing. In your case, I think you would need to add both representations (the scaled-down and the full-resolution bitmap) to the NSImage. I strongly suspect drawRect: would then pick the low-resolution version. I would make sure "scale up or down" is selected in NSImageView, because the default is scale down only, which may force your full-resolution image to be used most of the time. There is some discussion of "matching" under "Setting the Image Representation Selection Criteria" in Apple's NSImage documentation, although at first sight this may not be sufficient.
Then, whenever you need to do something with the full image, you would request the full resolution image by going through the representations ([NSImage representations] returns an array of NSImageRep).
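For illustration, a minimal sketch of that idea might look like the following (the 1024x768 screen size and the helper name MakeDualResolutionImage are assumptions, not part of any Apple API):

    // A sketch, assuming ARC: build one NSImage that carries both a
    // screen-sized representation and the full-resolution bitmap, so drawing
    // into a small view can use the cheaper representation.
    #import <Cocoa/Cocoa.h>

    static NSImage *MakeDualResolutionImage(NSURL *photoURL)   // hypothetical helper
    {
        NSData *data = [NSData dataWithContentsOfURL:photoURL];
        NSBitmapImageRep *fullRep = [NSBitmapImageRep imageRepWithData:data];
        if (!fullRep) return nil;

        // Render a screen-sized bitmap from the full-resolution representation.
        NSSize smallSize = NSMakeSize(1024, 768);   // assumed target size
        NSBitmapImageRep *smallRep = [[NSBitmapImageRep alloc]
            initWithBitmapDataPlanes:NULL
                          pixelsWide:(NSInteger)smallSize.width
                          pixelsHigh:(NSInteger)smallSize.height
                       bitsPerSample:8
                     samplesPerPixel:4
                            hasAlpha:YES
                            isPlanar:NO
                      colorSpaceName:NSCalibratedRGBColorSpace
                         bytesPerRow:0
                        bitsPerPixel:0];
        [NSGraphicsContext saveGraphicsState];
        [NSGraphicsContext setCurrentContext:
            [NSGraphicsContext graphicsContextWithBitmapImageRep:smallRep]];
        [fullRep drawInRect:NSMakeRect(0, 0, smallSize.width, smallSize.height)];
        [NSGraphicsContext restoreGraphicsState];

        // Add both representations; NSImage can then choose whichever best
        // matches the destination rect when it draws.
        NSImage *image = [[NSImage alloc] initWithSize:smallSize];
        [image addRepresentation:smallRep];
        [image addRepresentation:fullRep];
        return image;
    }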

Related

Replicating Preview image viewing with NSPageController

Thanks to Apple's PictureSwiper sample code and the very nice NSPageController tutorial from juniperi here on Stack Overflow, it's pretty easy to get tantalizingly close to the image viewing capabilities of Preview. Specifically, I want to replicate the ability to swipe forwards/backwards between images/pages, use pinch-to-zoom to resize the images, use a gesture to rotate the images/pages, and support two-page mode.
But there are some hurdles that make me wonder if NSPageController is the right approach or if it is too limiting and a custom view controller is needed.
1) Images of varying sizes are simply displayed stacked, and if the top/upper layer image is smaller, the underlying image(s) show through. Using the same images in Preview, it hides the larger "underlying" images/pages and fades the underlying image in/out with the swipe transition. I could hide underlying images by linking the page controller to the view rather than the image cell (like PictureSwiper), but that causes the entire view to scale on pinch-to-zoom and overall looks clunky.
2) Is it possible to use NSPageController with more than one image cell, e.g. two-page mode?
3) Is page/image rotation possible with NSPageController?
4) Is it possible to lock the zoom level for all the images, so they are uniformly displayed as navigated?
My apologies if this is too general a question, but the gist is whether NSPageController is too limited and problematic to extend, which would necessitate building a custom view controller from scratch.
Thanks.

iPad frame max size is not enough

I'm developing an iPad application for 2D drawing.
I need a UIView frame size of 4000x4000, but if I set a frame of that size the application crashes because I get a memory warning.
Right now I'm using a 1600x1000 frame size and the user can add new objects (rectangles) to the frame. The user can also translate the frame along the x and y axes using a pan gesture in order to see or add new objects.
Have you got any suggestions? How can I tackle this problem?
Thanks.
Well, I would suggest what video games have used for a long time: a tiled LOD (level-of-detail) mechanism, where tiles are rendered at higher resolution only as you zoom in toward them, and at lower resolution when you are zoomed out.
If the drawing is based on shapes (rectangles, points, lines, or anything that can be represented by simple vector data) there is no reason to create a UIView the entire size of the drawing. You just redraw the currently visible view as the user pans across the drawing, using the stored vector data. There is no persistent bitmapped representation of the drawing.
If using bitmap data for drawing (i.e. a Photoshop type of app) then you'll likely need to use a mechanism that caches off-screen data into secondary storage and loads it back onto the screen as the user pans across it. In either case, the UIView only needs to be as big as the physical screen size.
Sorry I don't have any iOS code examples for any of this - take this as a high-level abstraction and work from there.
Sounds like you want to be using UIScrollView.
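If you go the UIScrollView route, a minimal sketch (assuming a plain container view named canvasView holding your rectangle subviews; the names and sizes are illustrative) might look like this:

    // Sketch: host the large drawing area inside a UIScrollView so the user
    // pans instead of the app keeping a huge frame fully on screen.
    #import <UIKit/UIKit.h>

    // Inside your view controller's viewDidLoad, for example:
    UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
    scrollView.autoresizingMask = UIViewAutoresizingFlexibleWidth |
                                  UIViewAutoresizingFlexibleHeight;

    // A plain container view with no drawRect: should not allocate a
    // 4000x4000 backing store; only the rectangle subviews you add carry one.
    UIView *canvasView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 4000, 4000)];
    [scrollView addSubview:canvasView];
    scrollView.contentSize = canvasView.bounds.size;

    // Optional zooming: return canvasView from viewForZoomingInScrollView:.
    scrollView.minimumZoomScale = 0.25;
    scrollView.maximumZoomScale = 2.0;
    scrollView.delegate = self;

    [self.view addSubview:scrollView];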

iOS cropping and resizing ensuring rect stays visible

My app downloads images from a website. These images are all manner of sizes, from 800x600 up to 1800x1600. I analyze the image using facial recognition, and then want to resize and crop the image. However, it's important that the detected CGRect be visible on the cropped image.
I was using the excellent UIImage+Resize code with UIViewContentModeScaleAspectFill, but it doesn't seem to have a programmatic way of specifying an arbitrary location that needs to be visible in the final image. So if a face is located near the 1600 px edge of an 1800x1600 image, it gets cut off.
Is there an easy solution to this, or do I need to dig around in the depths of UIImage+Resize? Any guidance would be appreciated!
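For what it's worth, one sketch of the geometry involved (not part of UIImage+Resize; the helper name is made up): compute an aspect-fill crop rect of the target aspect ratio in source-image coordinates, centre it on the face, and clamp it to the image bounds before cropping and scaling.

    // Sketch: choose a crop rect with the target aspect ratio that stays
    // inside the source image and, where possible, contains faceRect.
    #import <CoreGraphics/CoreGraphics.h>

    static CGRect CropRectContainingFace(CGSize sourceSize,
                                         CGSize targetSize,
                                         CGRect faceRect)   // hypothetical helper
    {
        // Aspect-fill scale factor from source to target.
        CGFloat scale = MAX(targetSize.width / sourceSize.width,
                            targetSize.height / sourceSize.height);
        CGSize cropSize = CGSizeMake(targetSize.width / scale,
                                     targetSize.height / scale);

        // Start centred on the face, then clamp to the image bounds.
        CGFloat x = CGRectGetMidX(faceRect) - cropSize.width  / 2.0;
        CGFloat y = CGRectGetMidY(faceRect) - cropSize.height / 2.0;
        x = MAX(0, MIN(x, sourceSize.width  - cropSize.width));
        y = MAX(0, MIN(y, sourceSize.height - cropSize.height));
        return CGRectMake(x, y, cropSize.width, cropSize.height);
    }

The resulting rect could then be handed to CGImageCreateWithImageInRect and the cropped image scaled down to the target size.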

8192x8192 UIView Lag

I'm making a game using UIView.
I use a large (8192x8192) UIView as the map area (the game is bird's-eye view), with a UIImageView stretched across it displaying a grass texture.
This uses heaps of memory, doesn't run on older devices and nearly crashes Xcode whenever I try to edit it...
Is there an alternative way to create an 8192x8192 map without it being so laggy?
If it's possible to tile your graphics, something involving CATiledLayer would probably be a good fit. CATiledLayer allows you to provide only the images that are necessary to display the currently viewable area of the view (just like Maps does).
Here is some example code for displaying a large PDF.
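As a rough idea of what a tiled map view could look like (the class name, tile size, level-of-detail count, and the "grass_tile" asset are all assumptions):

    // Sketch: a UIView backed by CATiledLayer; tiles are drawn on demand for
    // the visible area only, so the full 8192x8192 map is never in memory.
    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    @interface TiledMapView : UIView
    @property (nonatomic, strong) UIImage *grassTile;   // hypothetical texture
    @end

    @implementation TiledMapView

    + (Class)layerClass
    {
        return [CATiledLayer class];
    }

    - (instancetype)initWithFrame:(CGRect)frame
    {
        if ((self = [super initWithFrame:frame])) {
            _grassTile = [UIImage imageNamed:@"grass_tile"];
            CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
            tiledLayer.tileSize = CGSizeMake(256, 256);
            tiledLayer.levelsOfDetail = 4;   // coarser levels when zoomed out
        }
        return self;
    }

    - (void)drawRect:(CGRect)rect
    {
        // Called once per visible tile, possibly on background threads;
        // draw only the content that falls inside this tile's rect.
        [self.grassTile drawInRect:rect];
    }

    @end

Embedding such a view (with an 8192x8192 frame) in a UIScrollView then gives you panning and zooming over the map.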

Resizing CATiledLayer's Using Scale Transforms vs. Bounds Manipulation

I've got my layer hosted workspace working so that using CATiledLayers for hundreds of images works nicely when the workspace is zoomed out substantially. All the images use lower resolution representations, and my application is much more responsive when panning and zooming large numbers of images.
However, within my application I also provide the user the ability to resize layers with a resize handle. Before I converted image layers to use CATiledLayers I was doing layer resizes by manipulating the bounds of the image layer according to the resize delta (mouse drag), and it worked well. But now with CATiledLayers in place, this is causing CATiledLayers to get confused when I mix resizing of layers through bounds manipulation and zooming/unzooming the workspace through scale transforms.
Specifically, if I resize a CATiledLayer to half the width/height (1/4 the area), the image inside it suddenly scales to a further 1/2 of the resized frame, leaving 3/4 of the frame empty. This seems to be exactly when the inner CATiledLayer logic gets invoked to provide a lower-resolution image representation. It works fine if I don't touch the resize handle and just zoom/unzoom the workspace.
Is there a way to make zooming/resizing play nice together with CATiledLayers, or am I going to have to convert my layer resize logic to use scale transforms instead of bounds manipulations?
I ended up solving this by converting my layer resize logic to use scale transforms: I override the setBounds: method of my custom image layer class to scale the CATiledLayer it contains and reposition it accordingly. It is also important to make sure the CATiledLayer's autoresizingMask is set to kCALayerNotSizable, since we are handling resizes manually in setBounds:.
Note: be sure to call the superclass's implementation of setBounds:.
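The shape of that override is roughly as follows (a sketch only; the tiledLayer property is a placeholder for however your custom layer references its CATiledLayer sublayer):

    // Sketch: resize by scaling the contained CATiledLayer rather than
    // changing its bounds, so its level-of-detail machinery isn't disturbed.
    - (void)setBounds:(CGRect)bounds
    {
        [super setBounds:bounds];   // call the superclass's implementation first

        // Scale relative to the tiled layer's original, untouched bounds.
        CGFloat sx = bounds.size.width  / self.tiledLayer.bounds.size.width;
        CGFloat sy = bounds.size.height / self.tiledLayer.bounds.size.height;

        self.tiledLayer.autoresizingMask = kCALayerNotSizable;   // no implicit resizing
        self.tiledLayer.anchorPoint = CGPointMake(0, 0);         // scale from the origin
        self.tiledLayer.position = CGPointMake(0, 0);
        self.tiledLayer.transform = CATransform3DMakeScale(sx, sy, 1.0);
    }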