How to Combine SKTextures - objective-c

I'm looking for a way to combine an SKTexture on top of another SKTexture, similar in appearance to how a textured SKSpriteNode would look after adding another textured SKSpriteNode as a child. I need the end product to be a single SKTexture, please.

The short of it is that you can't do it directly. An SKSpriteNode can only ever take one SKTexture. The only way to layer textures within the SpriteKit framework itself is to add children on top of the parent node.
Another way is to take a series of images and combine them before assigning the final product as a texture to your sprite node:
CGSize mergedSize = CGSizeMake(maxWidth, maxHeight); // use the largest width and height if your images are different sizes
UIGraphicsBeginImageContextWithOptions(mergedSize, NO, 0.0f);
[textureImage1 drawInRect:CGRectMake(0, 0, maxWidth, textureImage1.size.height)];
[textureImage2 drawInRect:CGRectMake(0, 0, 75, textureImage2.size.height)]; // 75 is just an example width for the second image
// ... draw any additional images ...
UIImage *mergedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.texture = [SKTexture textureWithImage:mergedImage];
Make sure you have your SKSpriteNode size property set to (maxWidth, maxHeight) so nothing gets clipped.
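Another route worth noting (a minimal sketch, assuming iOS 7+ and access to the scene's SKView, called view here): layer the sprites under a plain SKNode and flatten the whole hierarchy into one SKTexture with textureFromNode:.
// Sketch: flatten a node hierarchy into a single SKTexture.
// 'view', 'texture1', and 'texture2' are assumed to exist already.
SKNode *container = [SKNode node];
[container addChild:[SKSpriteNode spriteNodeWithTexture:texture1]];
[container addChild:[SKSpriteNode spriteNodeWithTexture:texture2]]; // rendered on top
SKTexture *combined = [view textureFromNode:container];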

Related

Changing the image of a SKSprite to an SKShapeNode

Sprite Kit, Xcode.
I need to find a way to change a sprites image within the program itself. I know how to create jpg files and make them into the sprite image...
But for this program, I need to draw circles/polygons (which may change inside the program) using SKShapeNode, and then transfer this to the SKSpriteNode's image.
Let's say I have declared:
SKSpriteNode *sprite;
SKShapeNode *image;
How would I do this with these variables?
Thanks!
EDIT: I mean texture when I say image.
If I understand your question correctly, you can achieve what you're after using the textureFromNode method on SKView.
In your SKScene:
-(void)didMoveToView:(SKView *)view {
    SKShapeNode *shape = [SKShapeNode shapeNodeWithCircleOfRadius:100];
    shape.fillColor = [UIColor blueColor];
    shape.position = CGPointMake(self.size.width * 0.25, self.size.height * 0.5);
    [self addChild:shape];
    SKTexture *shapeTexture = [view textureFromNode:shape];
    SKSpriteNode *sprite = [SKSpriteNode spriteNodeWithTexture:shapeTexture];
    sprite.position = CGPointMake(self.size.width * 0.75, self.size.height * 0.5);
    [self addChild:sprite];
}
Hope that helps!
You cannot change an SKSpriteNode's image once you assign it. To do what you want, you need to create the SKSpriteNode using a texture.
- (instancetype)initWithTexture:(SKTexture *)texture
To change a SKSpriteNode's texture you assign a new texture using its texture property. You can also do this using an image converted to a texture like this:
myNode.texture = [SKTexture textureWithImageNamed:@"imageName"];
As for an SKShapeNode, you cannot assign an image, only a path, rect, circle, ellipse, or points. Look at the SKShapeNode class docs, section Creating a Shape Path, for more info.
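Putting the two texture approaches above together, a minimal sketch (the image names are hypothetical):
SKTexture *startTexture = [SKTexture textureWithImageNamed:@"start"]; // hypothetical asset
SKSpriteNode *myNode = [[SKSpriteNode alloc] initWithTexture:startTexture];
// ... later, swap the texture in place:
myNode.texture = [SKTexture textureWithImageNamed:@"other"]; // hypothetical asset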

Why after rotating UIImageView size is getting changed?

I'm new to using transformations and still confused about how they work.
What I'm trying to do is rotate my UIImageView by a given angle. But after rotating, the size of the image changes, getting smaller. I'm also scaling the image view so it won't be upside down. How do I rotate and keep the size that was given in CGRectMake when the image view was allocated?
UIImageView *myImageView = [[UIImageView alloc]initWithFrame:CGRectMake(x,y,width,height)];
myImageView.contentMode = UIViewContentModeScaleAspectFit;
[myImageView setImage:[UIImage imageNamed:@"image.png"]];
myImageView.layer.anchorPoint = CGPointMake(0.5,0.5);
CGAffineTransform newTransform;
myImageView.transform = CGAffineTransformMakeScale(1,-1);
newTransform = CGAffineTransformRotate(newTransform, 30*M_PI/180);
[self.window addSubview:myImageView];
Thanks a lot!
OK, I promised I'd look into it, so here's my answer:
I created a scene which should be somewhat equivalent to yours, code as follows:
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(self.view.bounds.size.width/2-100,
self.view.bounds.size.height/2-125,
200,
250)];
imageView.image = [UIImage imageNamed:@"testimage.jpg"];
imageView.contentMode = UIViewContentModeScaleAspectFill;
/*
* I added clipsToBounds, because my test image didn't have a size of 200x250px
*/
imageView.clipsToBounds = YES;
[self.view addSubview:imageView];
NSLog(@"frame: %@", [NSValue valueWithCGRect:imageView.frame]);
NSLog(@"bounds: %@", [NSValue valueWithCGRect:imageView.bounds]);
imageView.layer.anchorPoint = CGPointMake(0.5, 0.5);
imageView.transform = CGAffineTransformMakeRotation(30*M_PI/180);
NSLog(@"frame after rotation: %@", [NSValue valueWithCGRect:imageView.frame]);
NSLog(@"bounds after rotation: %@", [NSValue valueWithCGRect:imageView.bounds]);
This code assumes that you are using ARC. If not add
[imageView release];
at the end.
Using this code the logs look like this:
[16221:207] frame: NSRect: {{60, 105}, {200, 250}}
[16221:207] bounds: NSRect: {{0, 0}, {200, 250}}
[16221:207] frame after rotation: NSRect: {{10.897461, 71.746826}, {298.20508, 316.50635}}
[16221:207] bounds after rotation: NSRect: {{0, 0}, {200, 250}}
As you can see, the bounds always stay the same. What actually changes due to the rotation is the frame, because an image which has been rotated by 30° is of course wider than if it hadn't been rotated. And since the anchor point has been set to the actual center of the view, the origin of the frame also changes (being pushed to the left and the top). Notice that the size of the image itself does not change. I didn't use the scale transformation, since the result can be achieved without scaling.
To make it clearer, I rendered the view at 0°, 30°, and 90° of rotation and drew the actual frames over each result, which shows the difference between bounds and frame. Overlaying all the renders, each rotated back by the negative of the angle applied to the UIImageView, lines them up exactly.
So you see it's pretty straightforward how to rotate images. Now to your problem: you actually want the frame to stay the same. If you want the final frame to have the size of your original frame (in this example a width of 200 and a height of 250), then you will have to scale the resulting frame. But this will of course result in scaling of the image, which you do not want. I actually think a larger frame will not be a problem for you; you just need to know that you have to take it into account because of the rotation.
In short: it is not possible to have a UIImageView whose frame is unchanged after rotation. This isn't possible for any UIView. Just think of a rectangle: if you rotate it, its axis-aligned bounding box won't be the same rectangle anymore, will it?
Of course you could put your UIImageView inside another UIView which will have a non-rotated frame with a width of 200 and a height of 250 but that would just be superficial, since it won't really change the fact that a rotated rectangle has a different width and height than the original.
I hope this helps. :)
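For what it's worth, the flip-plus-rotation the question attempts needs to be built up as one combined transform instead of assigning twice; a minimal sketch:
CGAffineTransform t = CGAffineTransformMakeScale(1, -1); // flip vertically
t = CGAffineTransformRotate(t, 30*M_PI/180);             // then rotate by 30°
myImageView.transform = t; // assign the combined transform once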
Do not set the contentMode property, which UIImageView inherits from UIView; it leads to changes in the frame under scaling, transformation, and device rotation, in accordance with the UIViewContentMode you select.
Also, if you just want to move the view, you can change its frame (note that this moves the view; it does not rotate it):
[UIView beginAnimations:@"Move" context:nil];
[UIView setAnimationDuration:1.0];
[UIView setAnimationDelegate:self];
CGRect frame = yourView.frame;
frame.origin.y += distance; // the distance you want to move the view
yourView.frame = frame;
[UIView commitAnimations];
If you don't want the animation, just change the frame directly.
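If what you actually want is an animated rotation, a block-based sketch would be:
[UIView animateWithDuration:1.0 animations:^{
    yourView.transform = CGAffineTransformRotate(yourView.transform, 30*M_PI/180);
}];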
Try using this:
CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
animation.fromValue = [NSNumber numberWithFloat:0.0f];
animation.toValue = [NSNumber numberWithFloat: 2*M_PI];
animation.duration = 0.5f;
animation.repeatCount = HUGE_VALF; // HUGE_VALF is defined in math.h so import it
[self.reloadButton.imageView.layer addAnimation:animation forKey:@"rotation"];

Saving 2 UIImages to one while saving rotation, resize info and its quality

I want to save two UIImages that are moved, resized, and rotated by the user into a single image. The problem is I don't want to use any screenshot-style function, because it makes both images lose a lot of their quality (resolution).
At the moment I use something like this:
UIGraphicsBeginImageContext(image1.size);
[image1 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
[image2 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
However, of course this just draws the two images on top of each other; their rotation, resizing, and moving aren't handled here. Can anybody help with incorporating these three aspects into the code? Any help is appreciated!
My biggest thanks in advance :)
EDIT: the images can be rotated and zoomed by the user (handled via touch events)!
You have to set the transform of the context to match your imageView's transform before you start drawing into it.
i.e.,
// (Assumes an existing CGContextRef 'context', the rotation 'angle',
// and 'boundingRect', the rect that will contain the rotated image.)
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, boundingRect.size.width/2, boundingRect.size.height/2);
transform = CGAffineTransformRotate(transform, angle);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
// Draw the image into the context
CGContextDrawImage(context, CGRectMake(-imageView.image.size.width/2, -imageView.image.size.height/2, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);
// Get an image from the context
CGImageRef cgImage = CGBitmapContextCreateImage(context);
rotatedImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // imageWithCGImage: does not take ownership
and check out Creating a UIImage from a rotated UIImageView.
EDIT: if you don't know the angle of rotation of the image you can get the transform from the layer property of the UIImageView:
UIGraphicsBeginImageContext(rotatedImageView.image.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform transform = rotatedImageView.transform;
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
// Draw the image into the context
CGContextDrawImage(context, CGRectMake(0, 0, rotatedImageView.image.size.width, rotatedImageView.image.size.height), rotatedImageView.image.CGImage);
// Get an image from the context
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *newRotatedImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // imageWithCGImage: does not take ownership
UIGraphicsEndImageContext();
You will have to play about with the transform matrix to centre the image in the context and you will also have to calculate a bounding rectangle for the rotated image or it will be cropped at the corners (i.e., rotatedImageView.image.size is not big enough to encompass a rotated version of itself).
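A sketch of that bounding-rectangle calculation (assuming the rotation 'angle' is known):
CGSize imageSize = rotatedImageView.image.size;
CGRect bounding = CGRectApplyAffineTransform(CGRectMake(0, 0, imageSize.width, imageSize.height), CGAffineTransformMakeRotation(angle));
// Begin the image context with bounding.size instead of the raw image
// size so the rotated corners are not cropped.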
Try this:
UIImage *temp = [[UIImage alloc] initWithCGImage:image1.CGImage scale:1.0 orientation:yourOrientation];
[temp drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
Similarly for image2. Rotation and resizing are handled by orientation and scale respectively. yourOrientation is a UIImageOrientation enum variable and can have a value from 0 to 7 (check the Apple documentation on the different UIImageOrientation values). Hope it helps...
EDIT: To handle rotations, just use the orientation that matches the rotation you require. You can rotate 90° left/right or flip vertically/horizontally. For example, in the Apple documentation, UIImageOrientationUp is 0, UIImageOrientationDown is 1, and so on. Check out my github repo for an example.
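For instance, a sketch that re-wraps image1 so it draws rotated 90° clockwise:
UIImage *rotated = [[UIImage alloc] initWithCGImage:image1.CGImage scale:image1.scale orientation:UIImageOrientationRight];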

Simple way of using irregular shaped buttons

I've finally got my main app released (Tap Play MMO - check it out ;-) ) and I'm now working on expanding it.
To do this I need to have a circle that has four separate buttons in it; these buttons will essentially be quarters. I've come to the conclusion that the circular image will need to be constructed of four images, one for each quarter, but due to the necessity of rectangular image shapes I'm going to end up with some overlap, although the overlap will be transparent.
What's the best way of getting this to work? I need something really simple, really. I've looked at this:
http://iphonedevelopment.blogspot.com/2010/03/irregularly-shaped-uibuttons.html
before, but haven't yet succeeded in getting it to work. Anyone able to offer some advice?
In case it makes any difference, I'll be deploying to an iOS 3.x framework (will be 4.2 down the line, when 4.2 comes out for iPad).
Skip the buttons and simply respond to touches in your view that contains the circle.
Create a CGPath for each area in which you want to capture touches; when your UIView receives a touch, check for membership inside the paths.
[Edited answer to show skeleton implementation details -- TomH]
Here's how I would approach the problem: (I haven't tested this code and the syntax may not be quite right, but this is the general idea)
1) Using PS or your favorite image creation application, create one png of the quarter circles. Add it to your XCode project.
2) Add a UIView to the UI. Set the UIView's layer's contents to the png.
self.myView = [[UIView alloc] initWithFrame:CGRectMake(10.0, 10.0, 100.0, 100.0)];
[self.myView.layer setContents:(id)[UIImage imageNamed:@"my.png"].CGImage];
3) Create CGPaths that describe the region in the UIView that you are interested in.
self.quadrantOnePath = CGPathCreateMutable();
CGPathMoveToPoint(self.quadrantOnePath, NULL, 50.0, 50.0);
CGPathAddLineToPoint(self.quadrantOnePath, NULL, 100.0, 50.0);
CGPathAddArc(self.quadrantOnePath, NULL, 50.0, 50.0, 50.0, 0.0, M_PI_2, 1);
CGPathCloseSubpath(self.quadrantOnePath);
// create paths for the other 3 circle quadrants too!
4) Add a UIGestureRecognizer and listen/observe for taps in the view
UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleGesture:)];
[tapRecognizer setNumberOfTapsRequired:2]; // default is 1
[self.myView addGestureRecognizer:tapRecognizer]; // don't forget to attach it to the view
5) When tapRecognizer invokes its target selector
- (void)handleGesture:(UIGestureRecognizer *)recognizer {
    CGPoint touchPoint = [recognizer locationOfTouch:0 inView:self.myView];
    BOOL processTouch = CGPathContainsPoint(self.quadrantOnePath, NULL, touchPoint, true);
    if (processTouch) {
        // call your method to process the touch
    }
}
Don't forget to release everything when appropriate -- use CGPathRelease to release paths.
Another thought: If the graphic that you are using to represent your circle quadrants is simply a filled color (i.e. no fancy graphics, layer effects, etc.), you could also use the paths you created in the UIView's drawRect method to draw the quadrants too. This would address one of the failings of the approach above: there isn't a tight integration between the graphic and the paths used to check for the touches. That is, if you swap out the graphic for something different, change the size of the graphic, etc., your paths used to check for touches will be out of sync. Potentially a high maintenance piece of code.
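A sketch of that drawRect: idea (assuming the view owns self.quadrantOnePath; the other quadrants and your real fill colors would follow the same pattern):
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextAddPath(ctx, self.quadrantOnePath);
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor); // example fill
    CGContextFillPath(ctx);
}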
I can't see why overlapping is needed.
Just create 4 buttons and give each one a slice of your image.
edit after comment
See this great project; one of its examples is exactly what you want to do.
It works by checking the alpha value of the touched pixel in an overridden pointInside:withEvent:, using a category on UIImage that adds this method:
- (UIColor *)colorAtPixel:(CGPoint)point {
    // Cancel if point is outside image coordinates
    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, self.size.width, self.size.height), point)) {
        return nil;
    }
    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
    // Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = self.CGImage;
    NSUInteger width = self.size.width;
    NSUInteger height = self.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 1,
                                                 1,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, pointY-(CGFloat)height);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);
    // Convert color values [0..255] to floats [0.0..1.0]
    CGFloat red = (CGFloat)pixelData[0] / 255.0f;
    CGFloat green = (CGFloat)pixelData[1] / 255.0f;
    CGFloat blue = (CGFloat)pixelData[2] / 255.0f;
    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
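The pointInside:withEvent: override that goes with it could look like this sketch (assuming a view subclass that exposes its image as self.image and whose bounds coincide with the image; adjust to your setup):
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    UIColor *pixelColor = [self.image colorAtPixel:point];
    if (pixelColor == nil) return NO; // point was outside the image
    CGFloat r, g, b, a;
    [pixelColor getRed:&r green:&g blue:&b alpha:&a];
    return a >= 0.5f; // only sufficiently opaque pixels count as a hit
}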
Here's an awesome project that solves the problem of irregular shaped buttons so easily:
http://christinemorris.com/2011/06/ios-irregular-shaped-buttons/

Get size of UIView after applying CGAffineTransform

I was surprised not to find an answer to this question; maybe it's something very simple I somehow overlooked:
How to get the real size of an UIView after I apply a CGAffineTransform to it?
eg.
my UIView has size 300 x 200; I apply a scaling transform of, say, factor 2 both horizontally and vertically, so the UIView now takes 600 x 400 on the screen, but its bounds and its layer's bounds still return a size of 300 x 200... where do I find the real size of the UIView?
PS: I forgot to mention that I also want to rotate the UIView. If I apply only scaling, CGSizeApplyAffineTransform works great, but when there's also rotation, it does not work properly.
Edit: drawnonward pointed me in the right direction; I just refined the code a bit so it compiles, and here it is:
UIView *view = ...;                       // the view being transformed
CGAffineTransform trans = view.transform; // or create a new transform
CGRect rect = [view bounds];
rect.origin = CGPointZero;
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddRect(path, &trans, rect);
rect = CGPathGetBoundingBox(path);
CGPathRelease(path);
Now rect.size contains the dimensions of the view with the transformation applied
Thanks again to drawnonward
I use this in Objective C:
CGRect transformedBounds = CGRectApplyAffineTransform(view.bounds, view.transform);
or in Swift 4:
let transformedBounds = view.bounds.applying(view.transform)
[myView frame] returns the frame of the view as seen by the parent, for layout and relative sizes. [myView bounds] returns the bounds of the view as seen by itself, for drawing. If you have transforms applied to multiple views, you can use convertRect: to or from a view.
Edit:
Maybe something like this:
CGRect rect = [view bounds];
rect.origin = CGPointZero;
CGMutablePathRef path = CGPathCreateMutable();
CGAffineTransform transform = [view transform];
CGPathAddRect(path, &transform, rect);
rect = CGPathGetBoundingBox(path);
CGPathRelease(path);
Then use [view center] to find the position in the superview.
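As a quick sketch of the convertRect: suggestion above, to get a transformed view's extent in its superview's coordinates:
CGRect extent = [myView.superview convertRect:myView.bounds fromView:myView];
// extent.size reflects the applied transform (as a bounding box),
// expressed in the superview's coordinate space.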
Old question, but I bumped into it here after searching for a solution and tons of attempts. It was simple:
view.layer.frame has all transformations applied, so you can easily get the size from view.layer.frame.size.
-- below here is not an answer to this question --
As for my own problem, I was trying to calculate the new center value after changing the layer.anchorPoint of my rotated view, so it wouldn't move. I finally did it like this:
CGPoint topLeft = [self.superview convertPoint:CGPointMake(0, 0) fromView:self];
self.layer.anchorPoint = CGPointMake(0, 0);
self.center = topLeft;
and for the reverse:
CGPoint center = [self.superview convertPoint:CGPointMake(self.bounds.size.width / 2, self.bounds.size.height / 2) fromView:self];
self.layer.anchorPoint = CGPointMake(.5, .5);
self.center = center;
finally.
Use CGSizeApplyAffineTransform(size, transform) and it will return a transformed size. There are similar CGPoint and CGRect functions as well.
Simpler: A view with (bounds) size s to which transform tr is applied has resulting size:
CGSizeMake(s.width*hypotf(tr.a, tr.b), s.height*hypotf(tr.c, tr.d))
However, if the view's superview or any ancestor view has a non-identity transform applied, this size makes little sense in absolute terms.
If you want the absolute size of a view in window coordinates after any arbitrary transform has been applied to that view or its superviews, you should first compute the absolute transform matrix by composing all the view transform up to the root window, and then apply the above formula to the result.
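A quick worked check of the formula (hypothetical numbers: a 300 x 200 view, scaled 2x and rotated 30°):
CGAffineTransform tr = CGAffineTransformRotate(CGAffineTransformMakeScale(2, 2), 30*M_PI/180);
CGSize s = CGSizeMake(300, 200);
CGSize real = CGSizeMake(s.width * hypotf(tr.a, tr.b), s.height * hypotf(tr.c, tr.d));
// real is 600 x 400: the rotation drops out of the hypotf terms, unlike
// the axis-aligned bounding box, which grows as the view rotates.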
But note that if you apply a rotating transform, CGPathGetBoundingBox gives you the axis-aligned bounding box, not the view's scaled content size.
If you applied the CGAffineTransform to the view's .layer, then the adjusted CGRect region after scale and/or translation transforms is simply view.layer.frame.