How to get the actual size of transformed CALayer? - core-animation

I have a 200x200 pixel CALayer instance to which I've applied two transformations:
scaled down to 50%
applied a 3D perspective
Now I want to align the transformed layer with another layer. To do that I need the size (width and height) of the transformed layer.
The problem is that layer.bounds still returns 200x200 for the layer, but because of the perspective transformation the actual visible width is less and the height more than 200. The bounds of the presentation layer return 0, by the way.
Is there any way to determine the exact size of a transformed CALayer as it is drawn on screen?

Finally figured it out: the bounds remain unchanged when applying a transformation, but what changes is the layer's frame property.
The solution is to use layer.frame.size instead of layer.bounds.size.
This is mentioned in the Core Animation Programming Guide:
Layers have an implicit frame that is a function of the position, bounds, anchorPoint, and transform properties.
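For illustration, here is a minimal sketch of the same idea (the layer name boxLayer and the specific perspective values are assumptions, not from the question): apply a 50% scale plus a perspective rotation, then compare bounds and frame:
// boxLayer is assumed to be the 200x200 CALayer from the question, already in the layer tree.
CATransform3D transform = CATransform3DMakeScale(0.5, 0.5, 1.0);  // scale down to 50%
transform.m34 = -1.0 / 500.0;                                     // add some perspective
transform = CATransform3DRotate(transform, M_PI / 6.0, 0, 1, 0);  // rotate around Y so the perspective is visible
boxLayer.transform = transform;

NSLog(@"bounds: %@", NSStringFromCGRect(boxLayer.bounds)); // still {{0, 0}, {200, 200}}
NSLog(@"frame:  %@", NSStringFromCGRect(boxLayer.frame));  // the on-screen bounding box after the transform
(On macOS, use NSStringFromRect instead of NSStringFromCGRect.)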

Related

Parallax effect jitter in libGDX

Basically I'm using this ParallaxCamera class to create the effect in my game, but upon movement, the layers "wiggle". This is especially noticeable when the camera is moving slowly.
I have fixed the timestep and use interpolated smoothing. I use scaled up pixel art. Camera centered on player, updated after movement. Disabling the effect makes moving the camera smooth.
What I guess the issues might be:
the layers move at different paces, which means they move at different times
rounding to the display makes the layers assume slightly different positions each frame when moving the camera
Thanks for any help
For low-resolution pixel art, this is the strategy I've used. I draw to a small FrameBuffer at 1:1 resolution and then draw that to the screen. That should take care of jittering.
If your Stage is also at the same resolution, then you have to use a bit of a hack to get input to be processed properly. The one I've used is a StretchViewport, but I manually calculate the world width and height so the world isn't actually stretched, so I'm basically doing the same calculation that ExtendViewport does behind the scenes. You also have to round the width and height to integers. You should do this in resize and apply the width and height using viewport.setWorldWidth() and setWorldHeight(). So in this case it doesn't matter what world size you give to the constructor, since it will be changed in update().
When you call update on the viewport in resize, you need to do it within the context of the FrameBuffer you are drawing to. Otherwise it will mess up the screen's frame buffer dimensions.
public void resize(int width, int height) {
    // Keep the world height fixed and derive the width from the screen's aspect ratio.
    int worldWidth = Math.round((float) WORLD_HEIGHT / (float) height * (float) width);
    viewport.setWorldWidth(worldWidth);
    viewport.setWorldHeight(WORLD_HEIGHT);

    // Update the viewport while the FrameBuffer is bound so it doesn't
    // clobber the screen's frame buffer dimensions.
    frameBuffer.begin();
    viewport.update(width, height, true); // the actual screen dimensions
    frameBuffer.end();
}
You can look up examples of using FrameBuffer in LibGDX. You should do all your game drawing in between frameBuffer.begin() and end(), and then draw the frameBuffer's color buffer to the screen like this:
stage.act();

// Draw the game at low resolution into the FrameBuffer.
frameBuffer.begin();
stage.draw();
frameBuffer.end();

// Draw the FrameBuffer's texture to the whole screen. With an identity projection
// matrix the coordinates are normalized device coordinates (-1..1), and the negative
// height flips the texture vertically to compensate for the FrameBuffer texture's orientation.
spriteBatch.setProjectionMatrix(spriteBatch.getProjectionMatrix().idt());
spriteBatch.begin();
spriteBatch.draw(frameBuffer.getColorBufferTexture(), -1, 1, 2, -2);
spriteBatch.end();
In my case, I do a more complicated calculation of world width and world height such that they are a whole number factor of the actual screen dimensions. This prevents the big pixels from being different sizes on the screen, which might look bad. Alternatively, you can change the filtering of the FrameBuffer's texture to linear and use an upscaling shader when drawing it.
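As a rough sketch of that kind of calculation (reusing the WORLD_HEIGHT, viewport, and frameBuffer names from the snippet above, and assuming WORLD_HEIGHT is an int), you can pick the largest integer pixel scale that still fits the target world height, then derive both world dimensions from it:
public void resize(int width, int height) {
    // Largest whole-number scale that keeps the world at least WORLD_HEIGHT units tall.
    int pixelScale = Math.max(1, height / WORLD_HEIGHT);

    // World dimensions are now a whole-number factor of the screen dimensions,
    // so every world pixel covers exactly pixelScale x pixelScale screen pixels.
    viewport.setWorldWidth(width / pixelScale);
    viewport.setWorldHeight(height / pixelScale);

    frameBuffer.begin();
    viewport.update(width, height, true); // the actual screen dimensions
    frameBuffer.end();
}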

How to find back the real position on the image on iOS?

Here is my setup: I have a layer view that detects user touches, and an image view that shows the image. The layer view sits on top of the image view. The image view's content mode is aspect fit, so the image keeps its ratio. If I touch at (100, 240) in the layer view, that is a layer view coordinate, not an image coordinate. I would like to know how to convert the layer view's coordinate to the image's coordinate. In this example the image size may be 180*180, so the corresponding point in the image is (60, 90).
Thanks.
If I'm understanding this question correctly, you want to take a point, which is currently in relation to the layer's coordinate system, and convert it to the image view's coordinate system?
In that case, there are a couple of ways to do this.
The easiest is to use convertPoint:fromView: or convertPoint:toView:
CGPoint imageViewTouchPoint = [imageView convertPoint:touchPoint fromView:layerView];
CGPoint imageViewTouchPoint = [layerView convertPoint:touchPoint toView:imageView];
Either one should work.
EDIT - I realize now that this only works if the UIImage fills the UIImageView's bounds exactly, which you said it might not, due to the UIViewContentModeScaleAspectFit content mode.
In this case, unless I'm mistaken, the image frame is calculated inside the UIImageView drawRect: method and isn't a property that gets set. This means you'll have to calculate this on your own.
Definitely get the imageViewTouchPoint from one of the methods above (just in case you want to use the same logic on a UIImageView which isn't the full screen size).
You will then need to calculate the scaled image frame. There are a couple of ways to do this. Some people go brute force and manually calculate based on which side of the image is longer, then determine which side should be scaled. Then they calculate the origin by centering the image: subtracting the image's dimensions from the image view's and dividing by two.
I like to write as little code as possible, even if it means importing a framework. If you import AVFoundation you get a function, AVMakeRectWithAspectRatioInsideRect, which you can use to calculate the scaled rectangle in one line of code.
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size, imageView.bounds);
Whichever method you use, you will then simply translate your touched point with the scaled image origin:
CGPoint imageTouchPoint = CGPointMake(imageViewTouchPoint.x - imageRect.origin.x, imageViewTouchPoint.y - imageRect.origin.y);
You have to do the math yourself. Calculate the aspect ratio of your image and compare with the aspect ratio of the image view's bounds.
Look at this question: How to Get Image position in ImageView
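If you'd rather do that math by hand, here is a minimal sketch (assuming the touch point has already been converted into the image view's coordinate space, e.g. with convertPoint:fromView: as in the answer above):
// Rect the image actually occupies inside an aspect-fit UIImageView.
CGSize imageSize = imageView.image.size;
CGSize viewSize = imageView.bounds.size;
CGFloat scale = MIN(viewSize.width / imageSize.width, viewSize.height / imageSize.height);
CGSize fittedSize = CGSizeMake(imageSize.width * scale, imageSize.height * scale);
CGRect imageRect = CGRectMake((viewSize.width - fittedSize.width) / 2.0,
                              (viewSize.height - fittedSize.height) / 2.0,
                              fittedSize.width,
                              fittedSize.height);

// Translate the touch point into the image's own pixel coordinates.
CGPoint imagePoint = CGPointMake((imageViewTouchPoint.x - imageRect.origin.x) / scale,
                                 (imageViewTouchPoint.y - imageRect.origin.y) / scale);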
After searching more, got a hack:
// resizedImageWithContentMode:bounds:interpolationQuality: comes from a UIImage resizing category, not UIKit itself.
CGSize imageInViewSize = [photo resizedImageWithContentMode:UIViewContentModeScaleAspectFit
                                                     bounds:imageView.frame.size
                                       interpolationQuality:kCGInterpolationNone].size;
CGRect overlayRect = CGRectMake((imageView.frame.size.width - imageInViewSize.width) / 2,
                                (imageView.frame.size.height - imageInViewSize.height) / 2,
                                imageInViewSize.width,
                                imageInViewSize.height);
NSLog(@"Frame of image inside UIImageView: Left:%f Top:%f Width:%f Height:%f",
      overlayRect.origin.x, overlayRect.origin.y, overlayRect.size.width, overlayRect.size.height);

Change NSImage Origin

Is it possible to change the origin of an NSImage? If so how would I go about doing this. I have coordinates in regular cartesian system some of them with negative values and I am trying to draw them at the corresponding point in the NSImage but since the origin is at (0,0) there are some missing.
EDIT:Say I have an drawing aspect that needs to be done to an image at the point (-10,-10), currently this doesn't show up. Is there a way to fix that?
If it's like in iOS (you may have to adapt the code a little), and if my memory is still good, you have to do this, since origin is read-only:
CGRect myFrame = yourImage.frame;
myFrame.origin.x=newX; myFrame.origin.y=newY;
yourImage.frame = myFrame;
I think you are confusing an NSImage with its container. An NSImage has no bounds or frame, and thus no origin. It does have a size, which may represent the pixel dimensions of its bitmap representation (if it has one) or otherwise its bounding box (if it is a vector image). Drawing into an image at a pixel location of (-10,-10) doesn't really make sense.
An NSImage is displayed in a container (typically an NSImageView), and the container's bounds.origin will dictate the placement of the image relative to the image view, but you can't modify pixels beyond the edge of the bitmap plane.
In any case you probably want to be using a subclassed NSView, in which you would override the drawRect: method for your custom drawing. NSView does have a bounds.origin, but this is not relevant to your in-drawing coordinates; rather it positions the drawn content as a whole relative to the view's bounding box. The coordinate system you will be drawing into is referenced to your graphics context, which will (usually) pin the origin (0,0) to the bottom-left corner (OS X) or top-left corner (iOS). If you are trying to represent negative points on a Cartesian plane, you will need to apply a translation transform to map your points into this positive coordinate space.
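A minimal sketch of that approach (the view class name and the choice to center the origin are my own assumptions):
// Hypothetical NSView subclass that plots Cartesian points, including negative ones.
@interface PlotView : NSView
@end

@implementation PlotView

- (void)drawRect:(NSRect)dirtyRect {
    [super drawRect:dirtyRect];

    // Move the origin to the center of the view so negative coordinates land inside the visible area.
    NSAffineTransform *transform = [NSAffineTransform transform];
    [transform translateXBy:NSMidX(self.bounds) yBy:NSMidY(self.bounds)];
    [transform concat];

    // Example: a 4x4 dot centered on the point (-10, -10).
    NSRect dot = NSMakeRect(-10.0 - 2.0, -10.0 - 2.0, 4.0, 4.0);
    [[NSColor redColor] setFill];
    [[NSBezierPath bezierPathWithOvalInRect:dot] fill];
}

@end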
I'm trying to explain in a few words, badly, something which Apple explains in great detail in their Quartz 2D Programming Guide.

How to force a CALayer to redraw at a higher resolution?

I have two instances of a CALayer subclass.
The only difference between them is this line:
[self setTransform:CATransform3DMakeScale(2, 2, 2)];
What else do I need so that the large layer looks good at scale 2x ?
PS: (to avoid any confusion) The layers also include a few control buttons, shadows, and rounded corners to mimic the look of windows in a windowing system, but those are not NSWindow instances.
The short answer is, don't use transforms. Transforms scale the layer by magnifying it, without re-rendering.
You could get a very similar effect by using a CAShapeLayer and animating changes to the path. That would give you sharp rendering, because path animation does re-render the pixels.
I say "similar" effect because CAShapeLayers use a lineWidth property for the whole layer. You can animate the line width between values, and use fractional values, but you'll have to do some fine-tuning to get the line thickness to animate up and down in proportion to the size of the shape. Another consideration is that the graphics system uses anti-aliasing to draw fractional width paths, so when the line width is not an integer value they will look slightly soft. You could turn off antialiasing, but then they would look really jaggy.

Can someone explain the CALayer contentsRect property's coordinate system to me?

I realize that the contentsRect property of CALayer (documentation here) allows one to define how much of the layer to use for drawing, but I do not understand how the coordinate system works, I think.
It seems that when the width/height are smaller, the area used for content is bigger and vice versa. Similarly, negative x,y positions seem to move the content area down and to the right which is the opposite of my intuition.
Can someone explain why this is? I'm sure there is a good reason, but I assume I'm missing some graphics programming background.
the contentsRect property of CALayer (documentation here) allows one to define how much of the layer to use for drawing
No, you're thinking about it incorrectly.
The contentsRect specifies which part of the contents image will be displayed in the layer.
That part is then arranged in the layer according to the contentsGravity property.
If this is kCAGravityResize, the default, this will cause the part to be resized to fit the layer. That would explain the counterintuitive behavior you're seeing -- you make contentsRect smaller, but the layer appears to be the same size, and it appears to "zoom in" on the selected part of the image. You might find it easier to understand if you set contentsGravity to kCAGravityCenter, which won't resize.
Most of the time, you would set the contentsRect to some sub-rect of the identity rect { {0, 0}, {1, 1} }, so you choose to see only part of the contents.
(Think of these as percentages if you like -- if contentsRect has a size of {0.5, 0.5}, you're choosing 50% of the contents.)
If part of the contentsRect goes outside the identity rect, then CA will extend the edge pixels of the contents outwards. This is handy in some cases, but it's not something you'd use on its own -- you'd use it in combination with a mask or with some other layers to achieve some effect.
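A minimal sketch (the sprite sheet image and sizes are assumptions): given an image containing a 2x2 grid of sprites, this displays only the top-left quarter (on iOS):
CALayer *spriteLayer = [CALayer layer];
spriteLayer.frame = CGRectMake(0, 0, 64, 64);
spriteLayer.contents = (__bridge id)spriteSheet.CGImage;   // spriteSheet is a UIImage with a 2x2 grid

// Unit coordinates: select the top-left quarter of the contents image.
spriteLayer.contentsRect = CGRectMake(0.0, 0.0, 0.5, 0.5);

// With kCAGravityResize (the default) that quarter is stretched to fill the layer;
// kCAGravityCenter would show it at its natural size instead.
spriteLayer.contentsGravity = kCAGravityResize;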
The contentsRect property is measured in unit coordinates.
Unit coordinates are specified in the range 0 to 1, and are relative values (as opposed to absolute values like points and pixels). In this case, they are relative to the backing image’s dimensions. The default contentsRect is {0, 0, 1, 1}, which means that the entire backing image is visible by default. If we specify a smaller rectangle, the image will be clipped.
It is actually possible to specify a contentsRect with a negative origin or with dimensions larger than {1, 1}. In this case, the outermost pixels of the image will be stretched to fill the remaining area.
You can find more information in Nick Lockwood's book "iOS Core Animation: Advanced Techniques".