I want to get a texture or image from a UIView, like a screenshot, but without creating a CGContext and drawing the UIView into that context. If the view is displayed on screen, I assume it holds this texture somewhere inside. How can I get this texture/image from the UIView? And how can I catch the moment this texture is redrawn (e.g. when a highlighted button inside the view changes)?
There isn't a texture stored for a rendered UIView. There will be a framebuffer under the hood somewhere but I don't think you can get at that.
Like you said, you can render a view's layer to a CGContext. This is easily done and I believe it's the only way of achieving what you want. What are your reasons for avoiding this approach? Perhaps you are trying to solve the wrong problem.
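For reference, that approach is short; a minimal sketch, assuming someView is the view you want to capture:

#import <QuartzCore/QuartzCore.h>

UIGraphicsBeginImageContextWithOptions(someView.bounds.size, someView.opaque, 0.0);
// Render the view's layer tree into the current bitmap context.
[someView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();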
I am using an NSView as a digital display screen.
When I invoke
[display setNeedsDisplay:YES]
The view's drawRect: method seems to entirely clear the view before redrawing its contents.
This results in what appears as a "blinking" or "flickering" display.
Is there any way to alleviate this behavior?
I read about the concept of NSView layers, placing the most changing portions in one layer, while drawing seldom changing in another layer, but I was unable to find any concrete example code using layers.
Desired behavior: drawRect: draws the new contents without first clearing the view, and only afterwards clears any previous contents that do not intersect with the new contents.
Any help or suggestions would be appreciated.
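One common way to reduce that kind of flicker is to invalidate only the rectangle that actually changed, so drawRect: only has to repaint that region. A minimal sketch, assuming changedRect is whatever portion of the display was updated:

// Instead of [display setNeedsDisplay:YES], invalidate only the changed area:
[display setNeedsDisplayInRect:changedRect];

// In the NSView subclass, repaint only the dirty region, not the whole bounds:
- (void)drawRect:(NSRect)dirtyRect {
    [[NSColor blackColor] setFill];
    NSRectFill(dirtyRect);
    // ... draw only the display contents that intersect dirtyRect ...
}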
I have seen many examples of views being subclassed to override drawRect, but that approach is pretty static (at least as far as I understand).
What I'd like to do is set up a very simple drawing canvas. In that, I've got a view with a UIPanGestureRecognizer attached to it. Whenever the gesture fires a new position, I'd like to draw a circle of a fixed size and color in that position of the view. The gesture recognizer is attached to the view, but it fires a selector in the view controller. I already have a subclass of a UIView. So, what would be the best approach?
Thanks.
What you need to do in this case is still to override drawRect!
The difference is that, when recognising gestures, you need to keep track of the location(s) at which the circles should be drawn, and access that information the next time you redraw the view, essentially building up an image in memory that you draw into the view.
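A minimal sketch of that idea (DrawingView and addPoint: are illustrative names, not from the original question):

@interface DrawingView : UIView
@property (nonatomic, strong) NSMutableArray *points; // CGPoints boxed in NSValue
@end

@implementation DrawingView

// Called by the view controller whenever the pan gesture fires a new position.
- (void)addPoint:(CGPoint)point {
    if (!self.points) self.points = [NSMutableArray array];
    [self.points addObject:[NSValue valueWithCGPoint:point]];
    [self setNeedsDisplay]; // schedule a redraw
}

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    for (NSValue *value in self.points) {
        CGPoint p = [value CGPointValue];
        // A fixed-size circle centred on each recorded location.
        CGContextFillEllipseInRect(ctx, CGRectMake(p.x - 10, p.y - 10, 20, 20));
    }
}

@end

The view controller's gesture handler then just forwards the touch location, e.g. [drawingView addPoint:[gesture locationInView:drawingView]];.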
I have a scroll view with a image view inside. I was wondering whether it's possible to have the image inside the image view be much smaller than the resolution of the screen, and somehow be able to stretch it to fit the screen.
Set contentMode on the UIImageView and change its size.
[imageView setContentMode:UIViewContentModeScaleToFill];
Check the docs for more information regarding UIViewContentMode:
http://developer.apple.com/library/ios/#documentation/uikit/reference/uiview_class/UIView/UIView.html#//apple_ref/doc/uid/TP40006816-CH3-SW69
Sure, just change the bounds of the imageView.
Am I missing something here?
Your UIImageView is within a UIScrollView, I understand?
That would work by adjusting the scroll view plus adjusting the image view appropriately. However, that is not advisable: you will get lost in small errors with annoying effects.
I'd suggest to add an additional UIView that can match the bounds of the screen.
Add that view to the underlying "view" object and use the bringSubviewToFront method.
You could as well make sure that this new UIView is the first subview of the most underlying view object. You could achieve that by manipulating the subviews array structure, which I do not recommend in general without having fully understood everything about the view hierarchy.
You can as well achieve that by adding the additional view first, before adding any other view. (Within IB, just make sure that the new view is the topmost in the tree, coming next to the view controller's "view".) Then make it hidden until you actually need it, at which point you unhide it. When it is not needed anymore, hide it again; neither delete it nor remove it from its superview.
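A rough sketch of that overlay approach, assuming overlayView is the extra view described above:

// Create the overlay once and keep it around, hidden, until it is needed.
UIView *overlayView = [[UIView alloc] initWithFrame:self.view.bounds];
overlayView.hidden = YES;
[self.view addSubview:overlayView];

// When you need it, make sure it is on top and unhide it:
[self.view bringSubviewToFront:overlayView];
overlayView.hidden = NO;

// When you are done, hide it again instead of removing it from its superview:
overlayView.hidden = YES;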
I am creating an app for practice that is a simple drawing app. The user drags his/her finger along the screen and it colors in a 100px x 100px square.
I currently achieve this by creating a new colored UIView where the user taps, and that is working. But, after a little time coloring in, there is substantial lag, which I believe is down to there being too many UIViews as a subview of the main view.
How can I, and others who similarly create UIViews while dragging a finger, reduce the lag to none at all, no matter how many UIViews there are? I also think that this may be an impossible task, so how else could someone like me color a square of the size stated above in the main view as a finger drags along the screen?
I know that this may seem like a specific question, but I believe it could help others understand how to reduce lag when there is a very large number of UIViews and a less performance-hungry option is available.
One approach is to draw each square into an image and display that image, rather than keeping around a UIView for each square.
If your drawing is simple enough, though, you can use OpenGL to do this, which is much faster. You should look at Apple's GL Paint Sample Code which shows how to do this in OpenGL.
If your drawing is too complex for OpenGL, you could create, for example, a CGBitmapContext, and draw each square into that context when the user drags their finger. Whenever you draw a new square into that bitmap, you can turn the bitmap into an image (via CGBitmapContextCreateImage) and display that image in a UIImageView.
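A minimal sketch of that bitmap approach (the context size, x, y, and imageView are illustrative assumptions):

// Create an offscreen RGBA bitmap context once, e.g. at screen size.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, 320, 480, 8, 0, colorSpace,
                                            kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Each time the finger moves, draw the new square into the bitmap...
CGContextSetFillColorWithColor(bitmap, [UIColor blueColor].CGColor);
CGContextFillRect(bitmap, CGRectMake(x, y, 100, 100));

// ...then turn the bitmap into an image and hand it to a UIImageView.
CGImageRef cgImage = CGBitmapContextCreateImage(bitmap);
imageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);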
There are two things that come to my mind:
1- Use the Instruments tool to check whether you are leaking any memory
2- If you are just coloring the views, then instead of creating images for each of them, either set the backgroundColor property of UIView or override the drawRect: method to do custom drawing
I think what you are looking for is the drawRect: method of UIView. You could create your custom UIView (you probably have that already) and override the drawRect: method and do your drawing there! You will have to save your drawings in an array or another container and call the setNeedsDisplay method whenever the array's contents change.
I have two different UIImageViews. I'd like to make the top UIImageView blend in using the Screen blend mode with the bottom UIImageView.
I know about CALayer's compositingFilter property, and I know that it doesn't work on iOS. I've searched a lot for solutions, and I've found suggestions to subclass UIView and override drawRect.
I've tried setting the context in drawRect to the screen blend mode, but it still draws each of the images normally. Perhaps I am doing something wrong, or the approach should be different. Maybe I need OpenGL or CALayer to achieve this. Could someone assist?
Unfortunately there is no way to do a non-composite blend between UIViews on iOS. UIKit doesn't provide the functionality, and as you've already noted, CALayer can't do it either.
In general, implementing -drawRect in a UIView won't help you. You're drawing into an empty bitmap -- it doesn't contain the bits of the views behind it, since those might change at any time (any view or layer might be animated). CA fundamentally assumes that layers' contents should be independent of each other.
You could try, in your -drawRect:
create an image context
capture the views under your view using -[CALayer renderInContext:] for each
create an image from the image context
draw that image into your view
set the blend mode and draw on top of that
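Put together, those steps might look roughly like this in your UIView subclass (a sketch only; bottomView and topImage are placeholder properties, and the caveats below apply):

- (void)drawRect:(CGRect)rect {
    // 1. Capture whatever is currently rendered behind this view.
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    [self.bottomView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *background = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 2. Draw the captured background into this view...
    [background drawInRect:self.bounds];

    // 3. ...then draw the top image over it with the screen blend mode.
    [self.topImage drawInRect:self.bounds blendMode:kCGBlendModeScreen alpha:1.0];
}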
But that will be slow and fragile, and won't work if you animate any of the views. I wouldn't recommend it.
If you really need to do this, you're going to have to switch your whole scene to render with OpenGL, where you've got more freedom.