I'm creating a drawing app on the iPad where the user can draw and scroll through the drawing. (Think of a canvas 4000 pixels wide with a viewport width of 1024.) At the moment I'm using OpenGL for the drawing, and with a width of 1024 it works great. When I change the frame size of the UIView to 4000, I get "failed to make complete framebuffer object 8cd6". When I reduced it to a width of 2000, I got "wacky" results. I know I can manipulate the frame correctly, as a frame width of 500 produces the correct result.
I was also thinking of leaving the width at 1024 and moving the camera of the OpenGL layer, but how would that work with the UIScrollView I've set up? So I'm unsure what to do at the moment. Any advice?
Thanks in advance
P.S. The code is largely based on the GLPaint sample Apple provides here.
I think you'd be best off with the scheme you suggest towards the end — keeping the OpenGL view static and outside of the scroll view, and moving its contents so as to match the movement of the scroll view.
Assuming you're using a GLKView, implement your glkView:drawInRect: so that it gets the contentOffset (and, probably, the bounds) properties from your scroll view and draws appropriately. E.g. (pretending you're using GLES 1.0 just because the matrix manipulation methods are so well known):
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // displayArea will be the area the scroll view is
    // currently displaying; taking just the bounds would
    // likely be fine too
    CGRect displayArea;
    displayArea.origin = scrollView.contentOffset;
    displayArea.size = scrollView.bounds.size;

    // assuming (0, 0) in the GL view is in the centre,
    // we'll adjust things so that it's in the corner ala UIKit
    CGPoint centre = CGPointMake(
        displayArea.origin.x + displayArea.size.width*0.5f,
        displayArea.origin.y + displayArea.size.height*0.5f);

    glPushMatrix();

    // apply the scroll as per the scroll view
    // so that its centre is aligned with our centre
    glTranslatef(-centre.x, -centre.y, -1);

    /* rest of drawing here */

    glPopMatrix();
}
Then connect yourself as a delegate to the scrollview and just perform:
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    [glView setNeedsDisplay];
}
so that the GL view redraws whenever the scroll view is scrolled.
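If you're using GLES 2 with GLKit rather than the GLES 1.0 fixed pipeline pretended above, the same idea is just a translation folded into your modelview matrix. A minimal sketch, assuming a GLKBaseEffect named effect and the same centre point as in the earlier snippet:

// GLES 2 / GLKit equivalent of the glTranslatef above (sketch only)
effect.transform.modelviewMatrix = GLKMatrix4MakeTranslation(-centre.x, -centre.y, -1.0f);
[effect prepareToDraw];
/* rest of drawing here */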
I would strongly recommend doing your zooming and panning with your OpenGL camera instead of trying to use it inside a UIScrollView. It will be a little more work to set up, but it's ultimately the way to go IMHO.
I'm looking for a solution to get a clean zoom on a drawing view.
In my app, I have a view containing another UIView (used as a drawing view), and when I draw a stroke on it, the stroke is perfect. But when I zoom the view, I get this really ugly effect (a pixelised stroke):
(screenshot of the pixelised stroke; source: imagup.com)
Is there a solution to get a proper stroke?
My UIViewController has a hierarchy like this:
UIViewController
  ScrollView
    Zoomable view (returned from the viewForZoomingInScrollView: method)
      Image view
      Drawing view
Thanks a lot!
Regards,
Sébastien ;)
I'm in the process of making a vector drawing application and let me tell you, this is NOT a trivial task to do correctly and requires quite a bit of work.
Some issues to keep in mind:
If you are not using vector graphics (CGPaths, for example, are vectors) you will NOT be able to remove the pixelation. A UIImage, for example, only has so much resolution.
In order to get your drawing to not look pixelated, you are going to have to redraw everything. If you have a lot of drawing, this can be an expensive task to perform.
Having good resolution WHILE zooming is nearly impossible, because it would require an excessively large context and your drawing would likely exceed the capabilities of the device.
I use Core Graphics to do my drawing, so the way I solved this issue was by allocating and managing multiple CGContexts and using them as buffers. I have one context that is ALWAYS kept at my least-zoomed level (scale factor of 1). That context is drawn into at all times, so that when unzooming completely, no time is spent redrawing since it is already done. Another context is used solely for drawing when zoomed. When not zoomed, that context is ignored (since it will have to be redrawn based on the new zoom level anyway). A high-level algorithm for how I perform my zooming is as follows:
- (IBAction)handlePinchGesture:(UIGestureRecognizer *)sender
{
    if (sender.state == UIGestureRecognizerStateBegan)
    {
        // draw an image from the unzoomedContext into my current view
        // set the scale transformation of my current view to be equal to "currentZoom",
        // a property of the view that keeps track of the actual zoom level
    }
    else if (sender.state == UIGestureRecognizerStateChanged)
    {
        // determine the new zoom level and transform the current view,
        // keeping track in the currentZoom property
        // zooming will be pixelated.
    }
    else if (sender.state == UIGestureRecognizerStateEnded || sender.state == UIGestureRecognizerStateCancelled)
    {
        if (currentZoom == 1.0)
        {
            // you are done because the unzoomedContext image is already drawn into the view!
        }
        else
        {
            // you are zoomed in and will have to do special drawing
            // perform drawing into your zoomedContext
            // scale the zoomedContext
            // set the scale of your current view to be equal to 1.0
            // draw the zoomedContext into the current view. It will not be pixelated!
            // any drawing done while zoomed needs to be "scaled" based on your current zoom
            // and translation amounts and drawn into both contexts
        }
    }
}
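For reference, allocating one of those offscreen buffer contexts looks roughly like this (a sketch only; canvasSize is a placeholder for your full, unzoomed drawing size, not a name from my project):

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef unzoomedContext = CGBitmapContextCreate(NULL,
                                                     (size_t)canvasSize.width,
                                                     (size_t)canvasSize.height,
                                                     8,   // bits per component
                                                     0,   // bytes per row; 0 lets CG calculate it
                                                     colorSpace,
                                                     (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// To display the buffer, wrap its current contents in a UIImage:
CGImageRef imageRef = CGBitmapContextCreateImage(unzoomedContext);
UIImage *bufferImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);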
This gets even more complicated for me because I keep additional image buffers on top of those contexts, since drawing cached images of my paths is much faster than re-stroking the paths when there is a lot of drawing.
Between managing multiple contexts, tweaking your code to draw efficiently into multiple contexts, following proper OOD, scaling new drawing based on your current zoom and translation, etc, this is a mountain of a task. Hopefully this either motivates you and puts you on the right track, or you decide that getting rid of that pixelation isn't worth the effort :)
I had the same problem and found a solution: tell the view to use a CATiledLayer as its backing layer, then tell the view how many levels of zoom it supports. This worked for me; my drawing methods get called automatically when the (parent) view is zoomed.
A short explanation of levelsOfDetail and levelsOfDetailBias:
levelsOfDetail determines how many zoom levels there are in total.
levelsOfDetailBias determines how many of those are used for zooming in.
So in my example I have 4 zooming levels, 3 are zoomed in and 1 is the non-zoomed level, meaning my view only redraws when zooming in.
#import "MyZoomableView.h"
#import <QuartzCore/QuartzCore.h> // for CATiledLayer

@implementation MyZoomableView

+ (Class)layerClass
{
    return [CATiledLayer class];
}

- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        ((CATiledLayer *)self.layer).levelsOfDetail = 4;
        ((CATiledLayer *)self.layer).levelsOfDetailBias = 3;
    }
    return self;
}

@end
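One detail worth adding: inside drawRect: of a CATiledLayer-backed view, the context is already scaled for the tile's level of detail, and you can read that scale back from the CTM if you want strokes to keep a constant on-screen width. A minimal sketch (not part of the original answer):

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();

    // The CTM's x-scale reflects the current level of detail (and screen scale),
    // so dividing by it keeps line widths visually constant across zoom levels.
    CGFloat scale = CGContextGetCTM(context).a;
    CGContextSetLineWidth(context, 2.0 / scale);

    // ... stroke only the paths that intersect rect ...
}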
Use [self setContentScaleFactor:scale]; in your scrollViewDidEndZooming: delegate method.
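A minimal sketch of that delegate method (drawingView is an assumed reference to the zoomable drawing view, not a name from the question):

- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
                       withView:(UIView *)view
                        atScale:(CGFloat)scale
{
    // Re-render the drawing view's backing store at the new effective resolution
    // so strokes stay sharp after zooming in. On Retina devices you may want to
    // multiply by [UIScreen mainScreen].scale as well.
    [self.drawingView setContentScaleFactor:scale];
    [self.drawingView setNeedsDisplay];
}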
I have two ImageViews, one called imageView and the other called subView (which is a subview of imageView).
I want to blend the images on these views together, with the user being able to switch the alpha of the blend with a pan. My code works, but right now, the code is slow as we are redrawing the image each time the pan gesture is moved. Is there a faster/more efficient way of doing this?
BONUS Q: I want to allow my subView image to be drawn zoomed in. Currently I've set my subView's content mode to UIViewContentModeCenter, but I can't seem to draw a zoomed-in part of my image with this content mode. Is there any way around this?
My drawRect: implementation:
- (void)drawRect:(CGRect)rect
{
    float xCenter = self.center.x - self.currentImage1.size.width/2.0;
    float yCenter = self.center.y - self.currentImage1.size.height/2.0;

    subView.alpha = self.blendAmount; // Customize the opacity of the top image.

    UIGraphicsBeginImageContext(self.currentImage1.size);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(c, kCGBlendModeColorBurn);
    [imageView.layer renderInContext:c];
    self.blendedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self.blendedImage drawAtPoint:CGPointMake(xCenter, yCenter)];
}
You need to use the GPU for image processing, which is far faster than using the CPU (as you're doing right now).
You can use the Core Image framework, which is very fast and easy to use but requires iOS 5, or you can use OpenGL ES directly, but that requires experience and some knowledge of OpenGL shading.
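For example, the colour-burn blend from your drawRect: could be expressed with Core Image along these lines (a sketch; topImage and bottomImage are assumed UIImage properties, the filter names are standard Core Image filters, everything else is illustrative):

#import <CoreImage/CoreImage.h>

- (UIImage *)blendedImageWithAlpha:(CGFloat)alpha
{
    CIImage *top = [CIImage imageWithCGImage:self.topImage.CGImage];
    CIImage *bottom = [CIImage imageWithCGImage:self.bottomImage.CGImage];

    // Fade the top image by scaling all of its channels by alpha.
    CIFilter *fade = [CIFilter filterWithName:@"CIColorMatrix"];
    [fade setValue:top forKey:kCIInputImageKey];
    [fade setValue:[CIVector vectorWithX:alpha Y:0 Z:0 W:0] forKey:@"inputRVector"];
    [fade setValue:[CIVector vectorWithX:0 Y:alpha Z:0 W:0] forKey:@"inputGVector"];
    [fade setValue:[CIVector vectorWithX:0 Y:0 Z:alpha W:0] forKey:@"inputBVector"];
    [fade setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:alpha] forKey:@"inputAVector"];

    // Blend with colour burn, matching kCGBlendModeColorBurn in the question.
    CIFilter *blend = [CIFilter filterWithName:@"CIColorBurnBlendMode"];
    [blend setValue:fade.outputImage forKey:kCIInputImageKey];
    [blend setValue:bottom forKey:kCIInputBackgroundImageKey];

    // In real code, create the CIContext once and reuse it; it is expensive to build.
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:blend.outputImage
                                       fromRect:blend.outputImage.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}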
I have a custom map of a limited area, and have it set up to correctly show the users' location. The map is a 1600px square image within a UIScrollView.
I have a crosshair image to show the current location of the user, which at zoomScale 1.0 is the desired size. When I pinch and zoom the scrollView, the crosshair scales with it. I would like to have the subview remain the same size on screen.
I haven't been able to find any information on this, what would be the best way to go about this?
If there is anything I can provide you with to help the answer, please let me know.
Many thanks!
EDIT -
Having looked in to this further, there is a UIScrollViewDelegate method - (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale which I tried using to take the marker's current center and size, then adjust, but this only scales at the end of the zoom. I would prefer to have the marker remain the same size while the user is zooming.
EDIT 2-
Cake has provided a great answer below, but I haven't been able to implement this in the way I imagined it would be.
I have the UIImageView as a placeholder, with alpha set to 0. This placeholder moves around relative to the map to show the user location. This operates as I expect it to. Unfortunately, this resizes with the map, as it is a subview of the map (so it stays in place).
Taking Cake's below answer, I have created the non-scaling crosshair image, and added it as a sibling subview to the scrollview. The maths, once Cake had pointed them out, were quite simple to get the new frame for the crosshair:
CGPoint ULPC = userLocationPlaceholder.center;
float zs = scrollView.zoomScale;
CGRect newFrame = CGRectMake(((ULPC.x * zs) - scrollView.contentOffset.x) - 20, ((ULPC.y * zs) - scrollView.contentOffset.y) - 20, 40, 40);
Where the image is 40 points wide. This matches the centers perfectly.
The problem I now have is that I cannot get the crosshair image to stay locked to the placeholder.
I have tried using a self calling animation as such:
- (void)animateUserLocationAttachment
{
    [UIView animateWithDuration:0.05
                          delay:0
                        options:(UIViewAnimationOptionAllowUserInteraction | UIViewAnimationOptionCurveLinear)
                     animations:^{
                         userLocationDotContainer.frame = newFrame;
                     } completion:^(BOOL finished){
                         // Call self again to keep tracking
                         [self animateUserLocationAttachment];
                     }];
}
As soon as I start scrolling/zooming, this locks the animation so that the crosshair just sits in place until I release the scrolling/zooming, and then it correctly updates its location.
Is there any way I can get around this, or an alternative method I can apply?
Many thanks
EDIT 3 -
I've re-accepted Cake's answer as it covers 90% of the issue. Further to his answer, I have implemented the UIScrollViewDelegate methods scrollViewWillBeginDragging: and scrollViewWillBeginDecelerating: to scale the placeholder to match the current size of the crosshair relative to the map, show the placeholder (which is a subview of the map image), and hide the crosshair image. The delegate method scrollViewWillBeginZooming:withView: does not show the placeholder, because it scales with the map. As Cake recommends, I'll make a new question for this issue.
The counterpart methods (scrollViewDidEndZooming:withView:atScale:, scrollViewDidEndDragging:willDecelerate: and scrollViewDidEndDecelerating:) all hide the placeholder and re-show the crosshair, as sketched below.
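A rough sketch of that show/hide pairing (placeholderView and crosshairView stand in for my actual views; the rescaling of the placeholder is omitted):

- (void)scrollViewWillBeginDragging:(UIScrollView *)scrollView
{
    // The placeholder scales with the map, so it takes over while the map moves.
    placeholderView.hidden = NO;
    crosshairView.hidden = YES;
}

- (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView
{
    // Swap back to the fixed-size crosshair once the scroll view settles.
    placeholderView.hidden = YES;
    crosshairView.hidden = NO;
}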
The question is old, but for future readers with a similar problem: I recently resolved this by applying a hint from Andrew Madsen in another post.
I had a UIScrollView with a UIImageView in it. Attached to the UIImageView I had many MKAnnotationViews (those are the subviews I didn't want scaling with the superview).
I subclassed UIImageView and implemented the setTransform: method like this:
#import "SLImageView.h"
#implementation SLImageView
- (void)setTransform:(CGAffineTransform)transform
{
[super setTransform:transform];
CGAffineTransform invertedTransform = CGAffineTransformInvert(transform);
for (id obj in self.subviews)
{
if ([obj isKindOfClass:[MKAnnotationView class]])
{
[((UIView *)obj) setTransform:invertedTransform];
}
}
}
#end
This works perfectly!
Mick.
Create another crosshair image that's associated with the view or view controller that contains the scroll view, have it always snap to the center of the crosshair image you already have, and hide your original crosshair image. That way the scroll view never scales the disassociated crosshair, and it should stay the same size.
Relative coordinate systems
Each view in Cocoa Touch has a frame property that has an origin. In order to position an object owned by one view properly relative to another view, all you have to do is figure out the difference in their origins. If one view is a subview of another, this isn't too difficult:
1. Get the origin of the container view.
2. Get the location of the subview inside the container view.
3. Get the origin of the subview.
4. Calculate the difference in the positions of the origins.
5. Get the location of the object you want to overlap (relative to the subview).
6. Calculate the location of the object you want to overlap relative to the container view.
7. Move your crosshair to this position, as in the sketch below.
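In practice, UIView's convertPoint:toView: does that origin arithmetic for you. A minimal sketch (markerView, containerView and crosshairView are illustrative names, not from the question):

// Convert the marker's centre from its superview's coordinate space into the
// container view's space, then move the fixed-size crosshair there.
CGPoint markerCentre = [markerView.superview convertPoint:markerView.center
                                                   toView:containerView];
crosshairView.center = markerCentre;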
Swift equivalent for Mick's answer:
class MapContainerView: UIView {
    @IBOutlet var nonScalingViews: [UIView]!

    override var transform: CGAffineTransform {
        didSet {
            guard let nonScalingViews = nonScalingViews else {
                return
            }
            let invertedTransform = CGAffineTransformInvert(transform)
            for view in nonScalingViews {
                view.transform = invertedTransform
            }
        }
    }
}
I have a UIScrollView whose content size is 1200x480. I have some image views on it whose widths add up to 600. When scrolling towards the right, I simply increase the content size and set the offset so as to make everything smooth (I then want to add other images, but that's not important right now). So basically, the images currently in the viewport remain somewhere on the left, eventually to be removed from the superview.
Now, the problem that I have happens when scrolling towards the left. What I do is I move the images to the end of the content size (so add 600 to each image view's origin.x), and then set the content offset accordingly. It works when the finger is on the screen and the user drags (scrollView.isTracking = YES). When the user scrolls towards the left and lets go (scrollView.isTracking = NO), the image views end up moving too fast towards the right and disappear almost instantly. Does anyone know how I could have the images move nicely and not disappear even when the user's not manually dragging the view and has already let go?
Here's my code for dragging horizontally:
- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    CGPoint offset = self.scrollView.contentOffset;
    CGSize size = self.scrollView.contentSize;
    CGPoint newXY = CGPointMake(size.width - 600, size.height - 480);

    // this bit here allows scrolling towards the right
    if (offset.x > size.width - 320) {
        [self.scrollView setContentSize:CGSizeMake(size.width + 320, size.height)];
        [self.scrollView setContentOffset:offset];
    }

    // and this is where my problem is:
    if (offset.x < 0) {
        for (UIImageView *imageView in self.scrollView.subviews) {
            CGRect frame = imageView.frame;
            [imageView setFrame:CGRectMake(frame.origin.x + newXY.x, frame.origin.y, 200, frame.size.height)];
        }
        [self.scrollView setContentOffset:CGPointMake(newXY.x + offset.x, offset.y)];
    }
}
EDIT: This is now working - I had a look at StreetScroller and it's all good now.
However, I now want to zoom in on the scrollview, but viewForZoomingInScrollView is never called. Is it not possible to zoom in on a scrollview with a large content size?
There are some approaches floating around here. Just use the site search …
If you want a more "official" example created by Apple, take a look at the StreetScroller demo. For more information about that example, see last year's WWDC session 104, Advanced Scroll View Techniques.
There is also a UIScrollView subclass on GitHub called BAGPagingScrollView, which is paging and infinite, but it has a few bugs you have to fix on your own because it's not under active development (the goToPage: method in particular leads to problems).
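The core trick in StreetScroller is, roughly, to keep a fixed content size and quietly recentre the content offset whenever it drifts too far, shifting the content views by the same amount so nothing appears to move. A sketch of that idea inside a UIScrollView subclass (not the actual Apple code):

- (void)recenterIfNecessary
{
    CGPoint currentOffset = self.contentOffset;
    CGFloat contentWidth = self.contentSize.width;
    CGFloat centerOffsetX = (contentWidth - self.bounds.size.width) / 2.0;
    CGFloat distanceFromCenter = fabs(currentOffset.x - centerOffsetX);

    if (distanceFromCenter > contentWidth / 4.0) {
        self.contentOffset = CGPointMake(centerOffsetX, currentOffset.y);

        // Shift the content views by the same amount so the content appears
        // stationary (the real sample tracks its visible views explicitly
        // rather than walking self.subviews).
        for (UIView *subview in self.subviews) {
            CGPoint center = subview.center;
            center.x += (centerOffsetX - currentOffset.x);
            subview.center = center;
        }
    }
}

- (void)layoutSubviews
{
    [super layoutSubviews];
    [self recenterIfNecessary];
}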
I've got a custom map view which is made of a UIScrollView. The scroll view's subview is backed by a CATiledLayer. Everything works great here. Panning & zooming loads up new map tiles and everything performs well.
What I want to do is capture frames of video of animations to this scroll view. Essentially, I want to create a video of animated changes to the scroll view's contentOffset and zoomScale.
I know that the concept is sound as I can get the private API function UIGetScreenImage() to capture the app's screen at, say, 10fps, combine these images, and I get playback animations that are smooth and have the timing curves used by the scroll view animations.
My problem, of course, is that I can't use the private API. Going through the alternatives outlined by Apple here leaves me with pretty much one supposedly valid option: asking a CALayer to renderInContext and taking a UIGraphicsGetImageFromCurrentImageContext() from that.
This just doesn't seem to work with CATiledLayer-backed views, though. A blocky, un-zoomed image is what is captured, as if the higher-resolution tiles never load. This somewhat makes sense given that CATiledLayer draws in background threads for performance and calling renderInContext from the main thread might not catch these updates. The result is similar even if I render the tiled layer's presentationLayer as well.
Is there an Apple-sanctioned way of capturing an image of a CATiledLayer-backed view during the course of the containing scroll view's animations? Or at any point, for that matter?
BTW, this is doable if you properly implement renderLayer:inContext: in your CATiledLayer-backed view.
I did a quick test, and using renderInContext: on a view wrapping the scroll view seemed to work. Have you tried that?
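Something along these lines (a sketch; wrapperView is assumed to be a plain UIView that contains the scroll view):

// Snapshot the wrapper view, which in turn renders the scroll view's content.
UIGraphicsBeginImageContextWithOptions(wrapperView.bounds.size, NO, 0);
[wrapperView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();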
This code works for me.
- (UIImage *)snapshotImageWithView:(CCTiledImageScrollView *)view
{
    // Try our best to approximate the best tile set zoom scale to use
    CGFloat tileScale;
    if (view.zoomScale >= 0.5) {
        tileScale = 2.0;
    }
    else if (view.zoomScale >= 0.25) {
        tileScale = 1.0;
    }
    else {
        tileScale = 0.5;
    }

    // Calculate the context translation based on how far zoomed in or out.
    CGFloat translationX = -view.contentOffset.x;
    CGFloat translationY = -view.contentOffset.y;
    if (view.contentSize.width < CGRectGetWidth(view.bounds)) {
        CGFloat deltaX = (CGRectGetWidth(view.bounds) - view.contentSize.width) / 2.0;
        translationX += deltaX;
    }
    if (view.contentSize.height < CGRectGetHeight(view.bounds)) {
        CGFloat deltaY = (CGRectGetHeight(view.bounds) - view.contentSize.height) / 2.0;
        translationY += deltaY;
    }

    // Pass the tileScale to the context because that will be the scale used in
    // drawRect: by your CATiledLayer-backed UIView
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(CGRectGetWidth(view.bounds) / view.zoomScale, CGRectGetHeight(view.bounds) / view.zoomScale), NO, tileScale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, translationX / view.zoomScale, translationY / view.zoomScale);

    // The zoomView is a subview of the UIScrollView. The CATiledLayer-backed UIView
    // is a subview of the zoomView.
    [view.zoomView.layer renderInContext:context];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}
Full sample code found here: https://github.com/gortega56/CCCanvasView