I have a large image managed with CATiledLayer (like the Large Image Downsizing iOS sample code).
I have a drawing view (a UIView subclass that overrides the drawing methods) on top of it, but when I zoom in a lot, I get the following message and my view disappears:
-[<CALayer: 0xb253aa0> display]: Ignoring bogus layer size (25504.578125, 15940.361328)
Is there a way to avoid this?
Sounds like the levelsOfDetail and levelsOfDetailBias you are setting allow more zoom than the tiled layer can support, given the maximum layer size Core Animation allows. Try lowering them to limit how far the user can zoom.
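For instance, a minimal sketch (the tiledLayer and scrollView names here are assumptions, not from the question):

    CATiledLayer *tiledLayer = (CATiledLayer *)self.contentView.layer;
    tiledLayer.levelsOfDetail = 4;     // levels cached as the user zooms out
    tiledLayer.levelsOfDetailBias = 2; // extra levels when zooming in (up to 4x)

    // Keep the scroll view's zoom limit consistent with the bias so the
    // backing store never exceeds Core Animation's maximum layer size.
    self.scrollView.maximumZoomScale = 4.0; // 2^levelsOfDetailBias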
Here is a great article explaining some of the undocumented behavior of CATiledLayer.
I am creating a simple photo filter app for OS X and I am displaying a photo in an NSImageView (actually two photos on top of each other in two NSImageViews, but the question applies to a single view too). Everything works great, but when I try to resize the window that contains the NSImageViews, the window (which also resizes the NSImageViews) resizes very slowly, at less than 1 fps, which hurts the user experience. I want window resizing to be as smooth as possible. When I disable resizing of the image views, the window resizes smoothly, so the cause of the slowdown is those NSImageViews.
I'm loading 20-megapixel images from my DSLR. When I scale them down to a reasonable size for the screen (e.g. 1024x768), they scale smoothly, so the problem is the way NSImageView renders the images: I assume it re-renders the full 20 MP image every time it needs to redraw into whatever the target frame of the view is.
How can I make NSImageView rescale more smoothly? Should I feed it a scaled-down version of my images? I don't want to do that, as it's a photo editing app that also targets Retina displays, so the viewport can be quite large. I can do it, but it's my last resort. Other than scaling down, how can I make NSImageView resize faster?
I believe part of the solution you are looking for is in NSImage's representations. You can add multiple representations to an image with addRepresentation:, and I believe some intelligent selection is done when drawing. In your case, you would add both representations (the scaled-down and the full-resolution bitmap) to the NSImage. I strongly suspect drawRect: would then pick the low-resolution version. Make sure "scale up or down" is selected in the NSImageView, because the default is scale down only, which may force the full-resolution image to be used most of the time. There is some discussion in Apple's documentation about "matching" under "Setting the Image Representation Selection Criteria" in NSImage, although at first sight this may not be sufficient.
Then, whenever you need to do something with the full image, you would request the full resolution image by going through the representations ([NSImage representations] returns an array of NSImageRep).
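A minimal sketch of that idea, assuming fullRep is the 20 MP NSBitmapImageRep you loaded from disk:

    #import <Cocoa/Cocoa.h>

    // Build one NSImage carrying both a full-resolution and a screen-sized
    // representation; AppKit can then pick the cheaper one when drawing small.
    NSImage *image = [[NSImage alloc] initWithSize:fullRep.size];
    [image addRepresentation:fullRep]; // the 20 MP original

    NSSize smallSize = NSMakeSize(1024, 768);
    NSBitmapImageRep *smallRep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:smallSize.width
                      pixelsHigh:smallSize.height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:
        [NSGraphicsContext graphicsContextWithBitmapImageRep:smallRep]];
    [fullRep drawInRect:NSMakeRect(0, 0, smallSize.width, smallSize.height)];
    [NSGraphicsContext restoreGraphicsState];
    [image addRepresentation:smallRep]; // the cheap screen-sized copy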
I'm developing an iPad application for 2D drawing.
I need a UIView with a frame of 4000x4000, but if I set a frame that size the application crashes after memory warnings.
Right now I'm using a 1600x1000 frame, and the user can add new objects (rectangles) to it. The user can also translate the frame along the x and y axes with a pan gesture in order to see or add new objects.
Do you have any suggestions? How can I tackle this problem?
Thanks.
Well, I would suggest what video games have used for a long time: a tiled LOD (level-of-detail) mechanism, where tiles are rendered at increasing resolution only as you zoom in toward them, and at a lower resolution when zoomed out.
If the drawing is based on shapes (rectangles, points, lines, or anything that can be represented by simple vector data), there is no reason to create a UIView the entire size of the drawing. You just redraw the currently visible area as the user pans across the drawing, using the stored vector data. There is no persistent bitmapped representation of the drawing.
If using bitmap data for drawing (i.e. a Photoshop type of app) then you'll likely need to use a mechanism that caches off-screen data into secondary storage and loads it back onto the screen as the user pans across it. In either case, the UIView only needs to be as big as the physical screen size.
I don't have a complete iOS implementation of any of this; take it as a high-level abstraction and work from there. A rough sketch of the vector approach follows.
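(The class and property names below are illustrative, not from the question.)

    #import <UIKit/UIKit.h>

    // The view keeps an array of model rectangles (boxed CGRects) and redraws
    // only what intersects the dirty rect; nothing is cached as a bitmap.
    @interface VectorCanvasView : UIView
    @property (nonatomic, strong) NSArray *rects; // NSValue-wrapped CGRects
    @end

    @implementation VectorCanvasView
    - (void)drawRect:(CGRect)dirtyRect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
        for (NSValue *boxed in self.rects) {
            CGRect r = [boxed CGRectValue];
            if (CGRectIntersectsRect(r, dirtyRect)) {
                CGContextStrokeRect(ctx, r); // only visible shapes rasterise
            }
        }
    }
    @end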
Sounds like you want to be using UIScrollView.
I have a custom UIView (graphView) that draws a complex graphic in drawRect: based on the iPad screen size of 1024x768. I'd like to take this view and shrink it down for use on the iPhone. I'm hoping to use the same drawing code and shrink the view instead of recalculating my graphic or creating a bitmap cache. The view is created on the fly, without Interface Builder.
What is the best approach to do the shrinking?
Should I put the view inside of a UIScrollView?
Thanks!
If possible, just change the current transform matrix before drawing, using something like CGContextScaleCTM. That'll scale all your measurements sent into Core Graphics prior to rasterisation.
If that isn't possible for whatever reason, you should consider still drawing at 1024x768 but applying a suitable transform to the UIView using CGAffineTransformMakeScale. That'll draw at the original pixel size then scale down as a raster operation on the output pixels so it'll be less efficient.
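A hedged sketch of both options (drawGraphOfSize: is a placeholder for the existing drawing routine, and the 320-point iPhone width is an assumption):

    // Option 1: scale the CTM so the existing 1024x768 drawing code fits
    // the view's actual size, before any rasterisation happens.
    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGFloat scale = self.bounds.size.width / 1024.0;
        CGContextScaleCTM(ctx, scale, scale);
        [self drawGraphOfSize:CGSizeMake(1024.0, 768.0)];
    }

    // Option 2: keep drawing at 1024x768 and scale the rendered view instead
    // (a raster operation on the output pixels, so less efficient):
    graphView.transform = CGAffineTransformMakeScale(320.0 / 1024.0,
                                                     320.0 / 1024.0);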
I've got my layer hosted workspace working so that using CATiledLayers for hundreds of images works nicely when the workspace is zoomed out substantially. All the images use lower resolution representations, and my application is much more responsive when panning and zooming large numbers of images.
However, within my application I also provide the user the ability to resize layers with a resize handle. Before I converted image layers to use CATiledLayers I was doing layer resizes by manipulating the bounds of the image layer according to the resize delta (mouse drag), and it worked well. But now with CATiledLayers in place, this is causing CATiledLayers to get confused when I mix resizing of layers through bounds manipulation and zooming/unzooming the workspace through scale transforms.
Specifically, if I resize a CATiledLayer to half the width/height (1/4 the area), the image inside it suddenly scales to a further 1/2 of the resized frame, leaving 3/4 of the frame empty. This seems to be exactly when the CATiledLayer's internal level-of-detail logic kicks in to provide a lower-resolution image representation. It works fine if I don't touch the resize handle and just zoom/unzoom the workspace.
Is there a way to make zooming/resizing play nice together with CATiledLayers, or am I going to have to convert my layer resize logic to use scale transforms instead of bounds manipulations?
I ended up solving this by converting my layer resize logic to use scale transforms: I override the setBounds: method of my custom image layer class to scale the CATiledLayer it contains and reposition it accordingly. It is also important to make sure the CATiledLayer's autoresizingMask is set to kCALayerNotSizable, since we are handling resizes manually in setBounds:.
Note: be sure to call the superclass's implementation of setBounds:.
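A rough reconstruction of what that can look like (ImageLayer and its tiledLayer sublayer are illustrative names based on the description above):

    // The tiled sublayer is created elsewhere with
    // tiledLayer.autoresizingMask = kCALayerNotSizable;
    @interface ImageLayer : CALayer
    @property (nonatomic, strong) CATiledLayer *tiledLayer;
    @end

    @implementation ImageLayer
    - (void)setBounds:(CGRect)newBounds {
        CGRect oldBounds = self.bounds;
        [super setBounds:newBounds]; // the superclass call must not be skipped

        if (!CGRectIsEmpty(oldBounds)) {
            // Resize by scaling the tiled layer instead of changing its
            // bounds, so its level-of-detail machinery is not disturbed.
            CGFloat sx = newBounds.size.width / oldBounds.size.width;
            CGFloat sy = newBounds.size.height / oldBounds.size.height;
            self.tiledLayer.transform =
                CATransform3DScale(self.tiledLayer.transform, sx, sy, 1.0);
            self.tiledLayer.position = CGPointMake(CGRectGetMidX(newBounds),
                                                   CGRectGetMidY(newBounds));
        }
    }
    @end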
I am using CATiledLayer to display a PDF page,
but the layer takes some time to draw its content.
Therefore, I'd like to show a placeholder background behind this layer and remove it once the CATiledLayer has finished displaying.
My question is: is there any way to detect the drawing status of a CATiledLayer?
Thanks for your help.
Subclass CATiledLayer and override its fadeDuration class method to return 0.0, so tiles appear immediately instead of fading in.
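For example (a minimal sketch; the subclass name is mine):

    #import <QuartzCore/QuartzCore.h>

    @interface NoFadeTiledLayer : CATiledLayer
    @end

    @implementation NoFadeTiledLayer
    // Returning 0 disables the default tile fade-in animation.
    + (CFTimeInterval)fadeDuration {
        return 0.0;
    }
    @end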
A "hacky" way of doing is to pre-calculate how much tiles will be rendered, and then count the calls to - (void)drawLayer:(CALayer*)layer inContext:(CGContextRef)context. It's a pretty insecure though and most likely only works on the initial zoom level. CATiledLayer caches its tiles and doesn't tell you what is cached and what will be redrawn.