Objective-C Core Graphics draw rotated image not working

I am trying to draw a rotated image with the following code:
CGLayerRef layer = CGLayerCreateWithContext(context, CGSizeMake(brush.size.width * 2, brush.size.height * 2), NULL);
CGContextRef tmp = CGLayerGetContext(layer);
CGContextDrawImage(tmp, CGRectMake(brush.size.width/2, brush.size.height/2, brush.size.width, brush.size.height), brush.CGImage);
CGContextSetFillColorWithColor(tmp, [UIColor clearColor].CGColor);
CGContextSetBlendMode(tmp, kCGBlendModeClear);
CGContextFillRect(tmp, CGRectMake(0, 0, 2*brush.size.width, 2*brush.size.height));
CGContextSetBlendMode(tmp, kCGBlendModeNormal);
CGContextTranslateCTM(tmp, -brush.size.width/2, -brush.size.height/2);
CGContextRotateCTM(tmp, angle);
CGContextTranslateCTM(tmp, brush.size.width/2, brush.size.height/2);
CGContextDrawImage(tmp, CGRectMake(brush.size.width/2, brush.size.height/2, brush.size.width, brush.size.height), brush.CGImage);
I want to place the image at the center of the layer (the image is square).

Try the rotation first, then the translation.
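One common way to do this with Core Graphics (a minimal sketch only, reusing the context, brush and angle variables from the question) is to move the origin to the layer's center, rotate about it, and then draw the image centered on the new origin:

// Sketch: rotate the brush image about the center of a 2x-sized layer.
// Assumes `context`, `brush` (a UIImage) and `angle` (radians) from the question.
CGSize layerSize = CGSizeMake(brush.size.width * 2, brush.size.height * 2);
CGLayerRef layer = CGLayerCreateWithContext(context, layerSize, NULL);
CGContextRef tmp = CGLayerGetContext(layer);
CGContextSaveGState(tmp);
// Move the origin to the center of the layer, then rotate about it.
CGContextTranslateCTM(tmp, layerSize.width / 2, layerSize.height / 2);
CGContextRotateCTM(tmp, angle);
// Draw the image so it is centered on the (translated, rotated) origin.
CGContextDrawImage(tmp, CGRectMake(-brush.size.width / 2, -brush.size.height / 2, brush.size.width, brush.size.height), brush.CGImage);
CGContextRestoreGState(tmp);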

Related

Image rotation cutting edges in drawRect

I am rotating a view and masking the shape of the view using CAShapeLayer. I have the view's frame and the bezier path of the shape to draw; when I draw it using CGContext to produce an image of the right size, the edges of my shape are cut off. Here is my code:
- (UIImage *)crop:(UIImage *)image rect:(CGRect)rect withRadian:(CGFloat)radian
{
    float degree = RADIANS_TO_DEGREES(radian);
    // // translated rectangle for drawing sub image
    // float degree1 = fabsf(degree);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // translated rectangle for drawing sub image
    CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y, image.size.width, image.size.height);
    // grab image
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return croppedImage;
}
Can anyone please tell me what the problem is?
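For reference, here is a minimal sketch of one way to crop a region out of an image while rotating the drawing about the crop rect's center. The helper name is made up, and this is not claimed to be the exact fix for the code above:

// Sketch: crop `rect` from `image`, rotating the drawing by `radian` about the
// center of the output context. Hypothetical helper, not the asker's code.
- (UIImage *)croppedImage:(UIImage *)image rect:(CGRect)rect radian:(CGFloat)radian
{
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, image.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Rotate about the center of the output context.
    CGContextTranslateCTM(context, rect.size.width / 2, rect.size.height / 2);
    CGContextRotateCTM(context, radian);
    CGContextTranslateCTM(context, -rect.size.width / 2, -rect.size.height / 2);
    // Offset so the requested sub-rect lands at the context origin, then draw.
    [image drawInRect:CGRectMake(-rect.origin.x, -rect.origin.y, image.size.width, image.size.height)];
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return cropped;
}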

How to draw curve with glow in CGContext - something like SKShapeNode with glowWidth

I am rewriting some of my graphics drawing code from SKShapeNodes to CGContext/CALayers. I am trying to draw a curve with a glow using CGContext. This is what I used to have in SpriteKit:
CGPathRef path = …(some path)
SKShapeNode *node = [SKShapeNode node];
node.path = path;
node.glowWidth = 60;
After adding it to the scene with a dark-grey background, the result was as follows:
Is it possible to draw a line with such a glow using CGContext but without using CIFilters? Normally I will be drawing over a non-blank context background, so I would prefer not to apply CIFilters after the line has been drawn.
I have already tried the "shadow" solution, but the results are far from perfect:
UIGraphicsBeginImageContextWithOptions(frame.size, NO, 1);
CGContextRef context = UIGraphicsGetCurrentContext();
CGFloat glowWidth = 60.0;
CGContextSetShadowWithColor(context, CGSizeMake(0.0, 0.0), glowWidth, [UIColor whiteColor].CGColor);
CGContextBeginPath(context);
CGContextAddPath(context, path);
CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextStrokePath(context);
Result (the shadow is hardly visible):
Please let me know if you have some ideas.
You can create an SKEffectNode, which allows you to apply Core Image filters to all of its children. In other words, you can create an SKEffectNode, add a flower sprite as a child, and the SKEffectNode will apply the Core Image effect to the flower sprite.
For more detailed information, please see the SKEffectNode Class Reference.
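As a rough illustration of that idea (a sketch only; the blur radius is arbitrary, and `path` and `scene` are assumed to exist already):

// Sketch: wrap the shape in an SKEffectNode and blur it with Core Image.
// Assumes an existing CGPathRef `path` and an SKScene `scene`.
SKShapeNode *shape = [SKShapeNode node];
shape.path = path;
shape.strokeColor = [SKColor whiteColor];
SKEffectNode *effectNode = [[SKEffectNode alloc] init];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:@30 forKey:kCIInputRadiusKey];
effectNode.filter = blur;
effectNode.shouldRasterize = YES; // cache the filtered output so it is not re-rendered every frame
[effectNode addChild:shape];
[scene addChild:effectNode];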

Jigsaw puzzle UIImage irregular cropping

I am developing a jigsaw puzzle for iPhone.
Using a masking technique I have cropped the image into 9 pieces; see the image below.
After cropping, some portion of the image is missing because of the masking. I know this is because the cropped pieces are loaded into square UIImageViews.
My question is how to crop the pieces without losing any portion of the image, and how to fit these pieces together correctly so they match the original.
Build a set of masks, one for each puzzle piece. Each mask should be the size of the original image and all black except for a white area with the position and shape of that puzzle piece. Also maintain a bounding rectangle for each piece (a rectangle that minimally contains the piece in the mask image).
The way to avoid losing any of the original image is to arrange the masks (and the corresponding bounding rects) as a partition of the image.
Here's a link to some code that demonstrates how to apply a mask. Once the mask is applied, crop the masked image to the bounding rectangle using code like here and elsewhere.
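A minimal sketch of that approach (the helper name is made up; it assumes the mask is already a grayscale image without alpha, the same size as the source image, white where the piece is and black elsewhere, as described above):

// Sketch: mask the full puzzle image with a per-piece mask, then crop the
// result to the piece's bounding rectangle. Hypothetical helper.
- (UIImage *)pieceFromImage:(UIImage *)sourceImage mask:(CGImageRef)maskCGImage boundingRect:(CGRect)boundingRect
{
    // White areas of a grayscale mask are kept, black areas become transparent.
    CGImageRef masked = CGImageCreateWithMask(sourceImage.CGImage, maskCGImage);
    // Crop to the bounding rectangle so the piece view only holds the piece itself.
    CGImageRef cropped = CGImageCreateWithImageInRect(masked, boundingRect);
    UIImage *piece = [UIImage imageWithCGImage:cropped];
    CGImageRelease(cropped);
    CGImageRelease(masked);
    return piece;
}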
I was also thinking of dividing the original image with masking, but it can be a bad idea and complicated to manage. For anyone who is new to jigsaw puzzles, this is the best question/answer,
and you can also get the source code of a jigsaw puzzle game from git.
Hats off to the man (@Guntis Treulands) who solved your problem. I know this should be a comment rather than an answer, but if I put it as a comment then users who have a problem with a jigsaw puzzle might not find it easily, so I am posting it as an answer.
//Create our colorspaces
imageColorSpace = CGColorSpaceCreateDeviceRGB();
maskColorSpace = CGColorSpaceCreateDeviceGray();
provider=CGDataProviderCreateWithCFData((__bridge CFDataRef)self.puzzleData);
image=CGImageCreateWithPNGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
//Resize the puzzle image
context = CGBitmapContextCreate(NULL, kPuzzleSize, kPuzzleSize, 8, 0, imageColorSpace, kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, kPuzzleSize, kPuzzleSize), image);
CGImageRelease(image);
image = CGBitmapContextCreateImage(context);
CGContextRelease(context);
//Create the image view with the puzzle image
self.imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, kPuzzleSize, kPuzzleSize)];
[self.imageView setImage:[UIImage imageWithCGImage:image]];
//Create the puzzle pieces (note that pieces are rotated to the puzzle orientation in order to minimize the number of graphic operations when creating the puzzle images)
for(i = 0; i < appDelegate().puzzleSize * appDelegate().puzzleSize; ++i)
{
//Recreate the piece view
[pieces[i] removeFromSuperview];
pieces[i] = [[CJPieceView alloc] initWithFrame:CGRectMake(0, 0, kPieceSize, kPieceSize) index:i];
[pieces[i] setTag:-1];
//Load puzzle piece mask image
UIImage *maskimage=[self.arrmaskImages objectAtIndex:i];
NSData *dataMaskImage=UIImagePNGRepresentation(maskimage);
provider=CGDataProviderCreateWithCFData((__bridge CFDataRef)dataMaskImage);
tile = CGImageCreateWithPNGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
mask = CGImageCreateCopyWithColorSpace(tile, maskColorSpace);
CGImageRelease(tile);
context = CGBitmapContextCreate(NULL, kPieceSize / kPieceShadowFactor, kPieceSize / kPieceShadowFactor, 8, 0, imageColorSpace, kCGImageAlphaPremultipliedFirst);
CGContextClipToMask(context, CGRectMake(0, 0, kPieceSize / kPieceShadowFactor, kPieceSize / kPieceShadowFactor), mask);
CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1.0);
CGContextFillRect(context, CGRectMake(0, 0, kPieceSize / kPieceShadowFactor, kPieceSize / kPieceShadowFactor));
shadow = CGBitmapContextCreateImage(context);
CGContextRelease(context);
imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, kPieceSize, kPieceSize)];
[imageView setImage:[UIImage imageWithCGImage:shadow]];
[imageView setAlpha:kPieceShadowOpacity];
[imageView setUserInteractionEnabled:NO];
[pieces[i] addSubview:imageView];
CGImageRelease(shadow);
//Create image view with piece image and add it to the piece view
context = CGBitmapContextCreate(NULL, kPieceSize, kPieceSize, 8, 0, imageColorSpace, kCGImageAlphaPremultipliedFirst);
CGRect rectPiece= CGRectMake(fmodf(i, appDelegate().puzzleSize) * kPieceDistance, (floorf(i / appDelegate().puzzleSize)) * kPieceDistance, kPieceSize, kPieceSize);
[self.arrlocations addObject:[NSValue valueWithCGRect:rectPiece]];
CGContextTranslateCTM(context, (kPieceSize - kPieceDistance) / 2 - fmodf(i, appDelegate().puzzleSize) * kPieceDistance, (kPieceSize - kPieceDistance) / 2 - (appDelegate().puzzleSize - 1 - floorf(i / appDelegate().puzzleSize)) * kPieceDistance);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
subImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
tile = CGImageCreateWithMask(subImage, mask);
imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, kPieceSize, kPieceSize)];
[imageView setImage:[UIImage imageWithCGImage:tile]];
[imageView setUserInteractionEnabled:NO];
[pieces[i] addSubview:imageView];
CGImageRelease(tile);
CGImageRelease(subImage);
DLog(@"%f", pieces[i].frame.size.width);
pieces[i].transform=CGAffineTransformScale(CGAffineTransformIdentity, kTransformScale, kTransformScale);
DLog(@"%f %f",kTransformScale, pieces[i].frame.size.width);
//Release puzzle piece mask;
CGImageRelease(mask);
}
//Clean up
CGColorSpaceRelease(maskColorSpace);
CGColorSpaceRelease(imageColorSpace);
CGImageRelease(image);

Saving 2 UIImages into one while preserving rotation, resize info and quality

I want to save 2 UIImages to one image after they have been moved, resized and rotated by the user. The problem is that I don't want to use any 'print screen'-style function, because it makes both images lose a lot of their quality (resolution).
At the moment I use something like this:
UIGraphicsBeginImageContext(image1.size);
[image1 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
[image2 drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
However, this just composites the two images; their rotation, resizing and moving are not taken into account. Can anybody help with handling these three aspects in the code? Any help is appreciated!
My biggest thanks in advance :)
EDIT: the images can be rotated and zoomed by the user (handled via touch events)!
You have to set the transform of the context to match your imageView's transform before you start drawing into it.
i.e.,
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, boundingRect.size.width/2, boundingRect.size.height/2);
transform = CGAffineTransformRotate(transform, angle);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
// Draw the image into the context
CGContextDrawImage(context, CGRectMake(-imageView.image.size.width/2, -imageView.image.size.height/2, imageView.image.size.width, imageView.image.size.height), imageView.image.CGImage);
// Get an image from the context
rotatedImage = [UIImage imageWithCGImage: CGBitmapContextCreateImage(context)];
and check out Creating a UIImage from a rotated UIImageView.
EDIT: if you don't know the angle of rotation of the image you can get the transform from the layer property of the UIImageView:
UIGraphicsBeginImageContext(rotatedImageView.image.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform transform = rotatedImageView.transform;
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGContextConcatCTM(context, transform);
// Draw the image into the context
CGContextDrawImage(context, CGRectMake(0, 0, rotatedImageView.image.size.width, rotatedImageView.image.size.height), rotatedImageView.image.CGImage);
// Get an image from the context
UIImage *newRotatedImage = [UIImage imageWithCGImage: CGBitmapContextCreateImage(context)];
UIGraphicsEndImageContext();
You will have to play about with the transform matrix to centre the image in the context and you will also have to calculate a bounding rectangle for the rotated image or it will be cropped at the corners (i.e., rotatedImageView.image.size is not big enough to encompass a rotated version of itself).
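As a rough sketch of that bounding-rectangle calculation (the helper name is made up):

// Sketch: size of the axis-aligned box that fully contains `size` after it
// has been rotated by `angle` radians. Hypothetical helper.
static CGSize BoundingSizeForRotatedSize(CGSize size, CGFloat angle)
{
    CGAffineTransform rotation = CGAffineTransformMakeRotation(angle);
    CGRect rotated = CGRectApplyAffineTransform(CGRectMake(0, 0, size.width, size.height), rotation);
    return rotated.size;
}

Beginning the image context with that size instead of rotatedImageView.image.size keeps the corners of the rotated image from being clipped.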
Try this:
UIImage *temp = [[UIImage alloc] initWithCGImage:image1.CGImage scale:1.0 orientation:yourOrientation];
[temp drawInRect:CGRectMake(0, 0, image1.size.width, image1.size.height)];
Similarly for image2. Rotation and resizing are handled by orientation and scale respectively. yourOrientation is a UIImageOrientation enum variable and can have a value from 0-7 (check the Apple documentation on the different UIImageOrientation values). Hope it helps...
EDIT: To handle rotations, just pass the orientation that corresponds to the rotation you require. You can rotate 90 degrees left/right or flip vertically/horizontally. For example, in the Apple documentation, UIImageOrientationUp is 0, UIImageOrientationDown is 1, and so on. Check out my GitHub repo for an example.

Flipping OpenGL texture

When I load textures from images normally, they are upside down because of OpenGL's coordinate system. What would be the best way to flip them?
- glScalef(1.0f, -1.0f, 1.0f);
- mapping the y coordinates of the textures in reverse
- vertically flipping the image files manually (in Photoshop)
- flipping them programmatically after loading them (I don't know how)
This is the method I'm using to load png textures, in my Utilities.m file (Objective-C):
+ (TextureImageRef)loadPngTexture:(NSString *)name {
CFURLRef textureURL = CFBundleCopyResourceURL(
CFBundleGetMainBundle(),
(CFStringRef)name,
CFSTR("png"),
CFSTR("Textures"));
NSAssert(textureURL, @"Texture name invalid");
CGImageSourceRef imageSource = CGImageSourceCreateWithURL(textureURL, NULL);
NSAssert(imageSource, @"Invalid Image Path.");
NSAssert((CGImageSourceGetCount(imageSource) > 0), @"No Image in Image Source.");
CFRelease(textureURL);
CGImageRef image = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
NSAssert(image, @"Image not created.");
CFRelease(imageSource);
GLuint width = CGImageGetWidth(image);
GLuint height = CGImageGetHeight(image);
void *data = malloc(width * height * 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSAssert(colorSpace, @"Colorspace not created.");
CGContextRef context = CGBitmapContextCreate(
data,
width,
height,
8,
width * 4,
colorSpace,
kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
NSAssert(context, @"Context not created.");
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGImageRelease(image);
CGContextRelease(context);
return TextureImageCreate(width, height, data);
}
Where TextureImage is a struct that has a height, width and void *data.
Right now I'm just playing around with OpenGL, but later I want to try making a simple 2d game. I'm using Cocoa for all the windowing and Objective-C as the language.
Also, another thing I was wondering about: if I made a simple game with pixels mapped to units, would it be all right to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Thanks.
Any of those:
- flip the texture during the texture load,
- OR flip the model texture coordinates during model load,
- OR set the texture matrix to flip y (glMatrixMode(GL_TEXTURE)) during render, as sketched below.
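A minimal sketch of the texture-matrix option, assuming the legacy fixed-function pipeline:

// Sketch: flip texture coordinates vertically via the texture matrix.
// Call before drawing textured geometry (legacy fixed-function OpenGL only).
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glScalef(1.0f, -1.0f, 1.0f);     // mirror the t (vertical) texture coordinate
glTranslatef(0.0f, -1.0f, 0.0f); // shift the mirrored range back into [0, 1]
glMatrixMode(GL_MODELVIEW);      // restore the usual matrix mode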
Also, another thing I was wondering about: if I made a simple game with pixels mapped to units, would it be all right to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Depends on how you are going to render text.
Jordan Lewis pointed out that CGContextDrawImage draws an image upside down when passed a UIImage.CGImage. There I found a quick and easy solution: before calling CGContextDrawImage,
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0f, -1.0f);
Does the job perfectly well.