Thanks to Apple's PictureSwiper sample code and the very nice NSPageController tutorial from juniperi here on stackoverflow, it's pretty easy to get tantalizingly close to the image-viewing capabilities of Preview. Specifically, I want to replicate the ability to swipe forwards/backwards between images/pages, use pinch-to-zoom to resize the images, use a rotate gesture to rotate the images/pages, and support two-page mode.
But there are some hurdles that make me wonder if NSPageController is the right approach or if it is too limiting and a custom view controller is needed.
1) Images of varying sizes are simply displayed stacked, and if the top/upper image is smaller, the underlying image(s) show through. Preview, with the same images, hides the larger underlying images/pages and fades the underlying image in/out with the swipe transition. I could hide the underlying images by linking the page controller to the view rather than to the image cell (like PictureSwiper does), but then the entire view scales on pinch-to-zoom and it looks clunky overall.
2) Is it possible to use NSPageController with more than one image cell, e.g. two-page mode?
3) Is page/image rotation possible with NSPageController?
4) Is it possible to lock the zoom level for all the images, so they are displayed at a uniform zoom as the user navigates between them?
My apologies if this is too general a question, but the gist is whether NSPageController is too limited and problematic to extend, which would necessitate building a custom controller from scratch.
Thanks.
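For reference, the basic NSPageController wiring in question looks roughly like this in Swift (a sketch only; the class, outlet, and identifier names are mine, not from PictureSwiper or the tutorial):

    import Cocoa

    final class ImagePagesController: NSViewController, NSPageControllerDelegate {

        @IBOutlet var pageController: NSPageController!   // view hooked up in the xib/storyboard
        var imageURLs: [URL] = []                         // the pages to swipe through

        override func viewDidLoad() {
            super.viewDidLoad()
            pageController.delegate = self
            pageController.transitionStyle = .horizontalStrip   // swipe left/right
            pageController.arrangedObjects = imageURLs
        }

        func pageController(_ pageController: NSPageController,
                            identifierFor object: Any) -> NSPageController.ObjectIdentifier {
            return "imagePage"   // one reusable page type
        }

        func pageController(_ pageController: NSPageController,
                            viewControllerForIdentifier identifier: NSPageController.ObjectIdentifier) -> NSViewController {
            // Each page is a view controller whose view is a single NSImageView (the "image cell").
            let pageVC = NSViewController()
            pageVC.view = NSImageView(frame: .zero)
            return pageVC
        }

        func pageController(_ pageController: NSPageController,
                            prepare viewController: NSViewController, with object: Any?) {
            guard let url = object as? URL,
                  let imageView = viewController.view as? NSImageView else { return }
            imageView.image = NSImage(contentsOf: url)
        }

        func pageControllerDidEndLiveTransition(_ pageController: NSPageController) {
            pageController.completeTransition()   // finish the swipe animation
        }
    }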
I have a very simple Mac application; the best way to think of it is as a master/detail view. Once an item is chosen on the left-hand side, information appears in the right pane, along with an image (which can be of various sizes/formats/resolutions due to the source). The user can select items on the left very quickly, and while the detail text in the right-hand pane appears quickly, we notice an immediate lag, or an outright failure to display the image, on the right. Even the largest of the images is only a few MB, and typically less than 1024x1024 pixels. If the user moves slowly, everything is kosher, but if they go quickly it will hiccup: it lags a few selections behind, or doesn't display at all.
I've attempted solutions using NSView's setNeedsDisplay, and alternatively using Core Animation layers, but nothing seems to do the trick. Is there a guide or set of tutorials on performant image display routines?
Targeting 10.6+
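One pattern that is often suggested for this kind of lag is to decode the image off the main thread and discard stale results when the selection changes faster than images can load. A minimal sketch, written in Swift purely for illustration (the same GCD approach applies in Objective-C on 10.6+; the class and method names are assumptions):

    import Cocoa

    // Hypothetical helper: loads the detail image off the main thread and drops
    // results that arrive after the user has already moved on to another item.
    final class DetailImageLoader {
        private var generation = 0
        private let queue = DispatchQueue(label: "detail.image.decode", qos: .userInitiated)

        // Call from the main thread whenever the selection changes.
        func load(_ url: URL, into imageView: NSImageView) {
            generation += 1
            let requested = generation

            queue.async {
                // File reading and decoding happen off the main thread.
                guard let image = NSImage(contentsOf: url) else { return }
                DispatchQueue.main.async {
                    // Only display the result if this is still the latest selection.
                    guard requested == self.generation else { return }
                    imageView.image = image
                }
            }
        }
    }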
I am creating a simple photo filter app for OS X, and I am displaying a photo in an NSImageView (actually two photos on top of each other in two NSImageViews, but the question still applies to a single view). Everything works great, but when I try to resize the window that contains the NSImageViews, the window (which also resizes the NSImageViews) resizes very slowly, at less than about 1 fps, which has a negative impact on the user experience. I want window resizing to be as smooth as possible. When I disable resizing of the image views, the window resizes smoothly, so the cause of the slowdown is those NSImageViews.
I'm loading 20-megapixel images from my DSLR. When I scale them down to a reasonable size for the screen (e.g. 1024x768), they scale smoothly, so the problem is the way NSImageView renders the images: it appears (judging by this behavior) to re-render the 20 MP image every time it needs to redraw it into whatever the target frame of the view is.
How can I make NSImageView rescale more smoothly? Should I feed it a scaled-down version of my images? I'd rather not, as this is a photo editing app that also targets Retina screens and the viewport would actually be quite large; I can do it, but it's my last resort. Other than scaling down, how can I make NSImageView resize faster?
I believe part of the solution you are looking for is in NSImage's representations. You can add multiple representations to an image with addRepresentation:, and I believe some intelligent selection is done when drawing. In your case, I think you would need to add both representations (the scaled-down version and the full-resolution bitmap) to the NSImage; I strongly suspect drawRect: would then pick the low-resolution version. I would make sure "scale up or down" is selected in the NSImageView, because the default is "scale down only", which may force your full-resolution image to be used most of the time. There is some discussion in Apple's documentation regarding "matching", under "Setting the Image Representation Selection Criteria" for NSImage, although at first sight this may not be sufficient.
Then, whenever you need to do something with the full image, you would request the full-resolution version by going through the representations ([NSImage representations] returns an array of NSImageRep).
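A small sketch of what that could look like in Swift (the file URLs and the choice to pick the full-resolution representation by pixel count are my assumptions):

    import Cocoa

    // Build one NSImage carrying both a screen-sized and a full-resolution
    // representation; AppKit chooses among them when drawing.
    func makeImage(fullResURL: URL, screenSizedURL: URL) -> NSImage? {
        guard let fullData = try? Data(contentsOf: fullResURL),
              let smallData = try? Data(contentsOf: screenSizedURL),
              let fullRep = NSBitmapImageRep(data: fullData),
              let smallRep = NSBitmapImageRep(data: smallData) else { return nil }

        let image = NSImage(size: fullRep.size)   // nominal size in points
        image.addRepresentation(smallRep)
        image.addRepresentation(fullRep)
        return image
    }

    // When full quality is needed (export, filters, etc.), walk the
    // representations and take the one with the most pixels.
    func fullResolutionRep(of image: NSImage) -> NSImageRep? {
        return image.representations.max { a, b in
            a.pixelsWide * a.pixelsHigh < b.pixelsWide * b.pixelsHigh
        }
    }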
I'm developing an iPad application for 2D drawing.
I need a UIView with a frame size of 4000x4000, but if I set a frame that size the application crashes because I get a memory warning.
Right now I'm using a 1600x1000 frame size, and the user can add new objects (rectangles) to the frame. The user can also translate the frame along the x and y axes using a pan gesture in order to see or add new objects.
Do you have any suggestions? How can I tackle this problem?
Thanks.
Well, I would suggest what video games have used for a long time: a tiled LOD (level-of-detail) mechanism, where tiles are rendered at increasing resolution only as you zoom in toward them, and at a lower resolution when you are zoomed out.
If the drawing is based on shapes (rectangles, points, lines, or anything that can be represented by simple vector data), there is no reason to create a UIView the size of the entire drawing. You just redraw the currently visible area from the stored vector data as the user pans across the drawing; there is no persistent bitmapped representation of the drawing.
If you are using bitmap data for the drawing (i.e. a Photoshop type of app), then you'll likely need a mechanism that caches off-screen data in secondary storage and loads it back onto the screen as the user pans across it. In either case, the UIView only needs to be as big as the physical screen.
Sorry I don't have any iOS code examples for any of this - take this as a high-level abstraction and work from there.
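That said, a rough, illustrative Swift sketch of the vector approach might look like this (the Shape type, the contentOffset handling, and the class name are assumptions, not working app code):

    import UIKit

    // The whole drawing lives as data; the view only ever draws what is visible.
    struct Shape {
        var frame: CGRect
        var color: UIColor
    }

    final class DrawingView: UIView {
        var shapes: [Shape] = []                   // the full drawing, as vector data
        var contentOffset: CGPoint = .zero {       // updated from a pan gesture
            didSet { setNeedsDisplay() }
        }

        override func draw(_ rect: CGRect) {
            guard let ctx = UIGraphicsGetCurrentContext() else { return }

            // Map the dirty rect into drawing coordinates and shift the context
            // so shapes are drawn relative to the current pan offset.
            let visible = rect.offsetBy(dx: contentOffset.x, dy: contentOffset.y)
            ctx.translateBy(x: -contentOffset.x, y: -contentOffset.y)

            for shape in shapes where shape.frame.intersects(visible) {
                ctx.setFillColor(shape.color.cgColor)
                ctx.fill(shape.frame)
            }
        }
    }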
Sounds like you want to be using UIScrollView.
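A minimal sketch of that suggestion (the sizes, zoom limits, and names are illustrative): the scroll view handles panning and zooming, so only the content view needs to know about the full canvas size.

    import UIKit

    final class CanvasViewController: UIViewController, UIScrollViewDelegate {
        private let scrollView = UIScrollView()
        private let canvasView = UIView(frame: CGRect(x: 0, y: 0, width: 4000, height: 4000))

        override func viewDidLoad() {
            super.viewDidLoad()
            scrollView.frame = view.bounds
            scrollView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
            scrollView.contentSize = canvasView.bounds.size   // the 4000x4000 canvas
            scrollView.minimumZoomScale = 0.25
            scrollView.maximumZoomScale = 2.0
            scrollView.delegate = self
            scrollView.addSubview(canvasView)
            view.addSubview(scrollView)
        }

        func viewForZooming(in scrollView: UIScrollView) -> UIView? {
            return canvasView   // zooming applies to the canvas, not the whole screen
        }
    }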
My app downloads images from a website. These images come in all manner of sizes, from 800x600 up to 1800x1600. I analyze the image using facial recognition and then want to resize and crop it. However, it's important that the detected CGRect remain visible in the cropped image.
I was using the excellent UIImage+Resize code with UIViewContentModeScaleAspectFill, but it doesn't seem to have a programmatic way of specifying an arbitrary location that needs to remain visible in the final image. So if a face is located near the 1600px edge of an 1800x1600 image, it will get cut off.
Is there an easy solution to this, or do I need to dig around in the depths of UIImage+Resize? Any guidance would be appreciated!
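One way to express that requirement, if you end up computing the crop yourself rather than extending UIImage+Resize, is to take the largest rect with the target aspect ratio, center it on the detected face, and clamp it to the image bounds. A hedged Swift sketch (the function and parameter names are mine):

    import UIKit

    // Returns a crop rect with the given aspect ratio (width / height), centered
    // on the face rect where possible and clamped to the image bounds.
    func cropRect(imageSize: CGSize, targetAspect: CGFloat, keeping faceRect: CGRect) -> CGRect {
        // Largest crop with the target aspect ratio that fits inside the image.
        var cropSize = CGSize(width: imageSize.width,
                              height: imageSize.width / targetAspect)
        if cropSize.height > imageSize.height {
            cropSize = CGSize(width: imageSize.height * targetAspect,
                              height: imageSize.height)
        }

        // Center the crop on the face...
        var origin = CGPoint(x: faceRect.midX - cropSize.width / 2,
                             y: faceRect.midY - cropSize.height / 2)

        // ...then clamp it so it stays inside the image.
        origin.x = min(max(origin.x, 0), imageSize.width - cropSize.width)
        origin.y = min(max(origin.y, 0), imageSize.height - cropSize.height)

        return CGRect(origin: origin, size: cropSize)
    }

    // e.g. a 1:1 crop of an 1800x1600 image that keeps a face near the right edge:
    // cropRect(imageSize: CGSize(width: 1800, height: 1600),
    //          targetAspect: 1.0,
    //          keeping: CGRect(x: 1550, y: 400, width: 200, height: 200))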
I'm making a game using UIView.
I use a large (8192x8192) UIView as the map area (the game is a birds-eye view), with a UIImageView stretched across it displaying a grass texture.
This uses heaps of memory, doesn't run on older devices and nearly crashes Xcode whenever I try to edit it...
Is there an alternative way of creating an 8192x8192 map without it being so laggy?
If it's possible to tile your graphics, something involving CATiledLayer would probably be a good fit. CATiledLayer allows you to provide only the images that are necessary to display the currently viewable area of the view (just like Maps does).
Here is some example code for displaying a large PDF.
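For a rough idea of how CATiledLayer is typically hooked up (a minimal Swift sketch, not the linked sample; the tile drawing below just fills a placeholder color where the grass/map content would go):

    import UIKit

    final class TiledMapView: UIView {

        // Back the view with a CATiledLayer instead of a plain CALayer.
        override class var layerClass: AnyClass { CATiledLayer.self }

        override init(frame: CGRect) {
            super.init(frame: frame)
            let tiled = layer as! CATiledLayer
            tiled.tileSize = CGSize(width: 256, height: 256)
            tiled.levelsOfDetail = 4        // coarser levels when zoomed out
            tiled.levelsOfDetailBias = 2    // extra detail when zoomed in
        }

        required init?(coder: NSCoder) {
            fatalError("init(coder:) has not been implemented")
        }

        // Called per tile, only for tiles covering the visible area.
        override func draw(_ rect: CGRect) {
            guard let ctx = UIGraphicsGetCurrentContext() else { return }
            // Draw the content for this tile's rect here; a flat color stands in
            // for the grass texture / map imagery.
            ctx.setFillColor(UIColor.green.cgColor)
            ctx.fill(rect)
        }
    }

    // Usage sketch: an 8192x8192 map inside a UIScrollView.
    // let map = TiledMapView(frame: CGRect(x: 0, y: 0, width: 8192, height: 8192))
    // scrollView.contentSize = map.bounds.size
    // scrollView.addSubview(map)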