I create a double layer with two UIImageViews, one on the bottom and one overlay, and combine them with this method:
UIImage *bottomImage = self.imageView.image;
UIImage *image = self.urlFoto.image;
CGSize newSize = CGSizeMake(640, 640);
UIGraphicsBeginImageContext( newSize );
[bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
After this I add a method to resize the image with a pinch gesture:
- (IBAction)scaleImage:(UIPinchGestureRecognizer *)recognizer {
recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
recognizer.scale = 1;
}
But when I save the photo to the photo library, I can't get the current size of my overlay; I see the same 640x640 size on the overlay (image). I think the missing code is here:
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:1.0];
In the CGRectMake(0,0,newSize.width,newSize.height) call, does anyone know the correct method to get the current size of the UIImageView after it has been pinched?
When drawing, you must keep the scale transform in mind. So you have to draw your overlay image with respect to the final scale of the UIImageView as adjusted by the gesture.
You can get final scale of view like this:
CGFloat scale = view.transform.a;
The letter a is important here. It is the value of the width transformation, so you can use it to get the common scale, assuming you are scaling the image proportionally (same scale for width and height).
Little more details regarding the scale:
CGAffineTransform is a structure defined as:
struct CGAffineTransform {
CGFloat a, b, c, d;
CGFloat tx, ty;
};
and
CGAffineTransformMakeScale(CGFloat sx, CGFloat sy)
does the following, according to the documentation:
Return a transform which scales by `(sx, sy)':
t' = [ sx 0 0 sy 0 0 ]
For better understanding, see this example code:
UIView *view = [[UIView alloc] initWithFrame:CGRectMake(50, 50, 100, 100)];
view.backgroundColor = [UIColor redColor];
[self.view addSubview:view];
view.transform = CGAffineTransformScale(view.transform, 0.5, 0.5);
NSLog(@"Transform 1: %f", view.transform.a);
view.transform = CGAffineTransformScale(view.transform, 0.5, 0.5);
NSLog(@"Transform 2: %f", view.transform.a);
which prints following to the console:
Transform 1: 0.500000
Transform 2: 0.250000
The 1st transform applies a 0.5 scale to the default scale of 1.0.
The 2nd transform applies a 0.5 scale to the already-scaled 0.5, i.e. it multiplies the current 0.5 by the new 0.5.
And so on.
Step 1 : Declare in .h file
CGFloat lastScale;
Step 2 : In the .m file, replace your gesture method with:
- (void)handlePinchGesture:(UIPinchGestureRecognizer *)gestureRecognizer {
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan) {
// Reset the last scale, necessary if there are multiple objects with different scales
lastScale = [gestureRecognizer scale];
}
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan ||
[gestureRecognizer state] == UIGestureRecognizerStateChanged) {
CGFloat currentScale = [[[gestureRecognizer view].layer valueForKeyPath:@"transform.scale"] floatValue];
// Calculate the new scale of the UIImageView
CGFloat newScale = 1 - (lastScale - [gestureRecognizer scale]);
// Store your image view's transform
CGAffineTransform transform = imageView.transform;
// Reset the image view to the identity transform (original size)
[imageView setTransform:CGAffineTransformIdentity];
// Save the rect of the UIImageView
CGRect actualRect = imageView.frame;
// Reapply the transform to the image view to make it scaled
[imageView setTransform:transform];
// Now calculate the new frame size of your image view
NSLog(@"width %f height %f", (imageView.frame.size.width * newScale), (imageView.frame.size.height * newScale));
}
}
Related
I have an image in a CALayer (using the CALayer's contents property). A CALayer has a transform property that can be used to scale and rotate. I can use that transform without difficulty when the CALayer is still.
But when I try to drag the CALayer (using mouseDragged:(NSEvent *)theEvent), it all falls apart (the image gets flattened out as I drag).
-(void)mouseDragged:(NSEvent *)theEvent {
CGPoint loc = [self convertPoint:theEvent.locationInWindow fromView:nil];
CGPoint deltaLoc = ccpSub(loc, downLoc); // subtract two points
CGPoint newPos = ccpAdd(startPos, deltaLoc); // adds two points
[layer changePosition:newPos];
...
}
In the layer
-(void)changePosition:(CGPoint)newPos {
//prevent unintended animation actions
self.actions = @{@"position": [NSNull null]};
CGSize size = self.frame.size;
CGRect newRect = CGRectMake(newPos.x, newPos.y, size.width, size.height);
self.frame = newRect;
}
The problem is that I use the CALayer's frame to dynamically change the location of the CALayer as I drag, but when the rotation angle is not zero, the frame is no longer parallel to the x and y axes. What is the best way to fix this problem?
One way to do it is to regenerate transforms for each incremental movement.
-(void)mouseDragged:(CGPoint)deltaLoc start:(CGPoint)startPos {
CGPoint newPos = ccpAdd(startPos, deltaLoc);
[self changePosition:newPos];
}
-(void)changePosition:(CGPoint)newPos {
//prevent unintended animation actions
self.actions = @{@"position": [NSNull null]};
double angle = [[self valueForKeyPath:@"transform.rotation"] doubleValue];
double scaleX = [[self valueForKeyPath:@"transform.scale.x"] doubleValue];
double scaleY = [[self valueForKeyPath:@"transform.scale.y"] doubleValue];
self.transform = CATransform3DIdentity;
CGSize size = self.frame.size;
CGRect newRect = CGRectMake(newPos.x, newPos.y, size.width, size.height);
self.frame = newRect;
[self applyTransformRotation:angle scale:ccp(scaleX, scaleY)];
}
-(void)applyTransformRotation:(double)angle scale:(CGPoint)scale {
self.transform = CATransform3DMakeRotation(angle, 0, 0, 1);
self.transform = CATransform3DScale(self.transform, scale.x, scale.y, 1);
}
I do not know if this is the most efficient way to do it, but it definitely works.
In Objective-C, I make a circle shape programmatically with the following code:
+(UIImage *)makeRoundedImage:(CGSize) size backgroundColor:(UIColor *) backgroundColor cornerRadius:(int) cornerRadius
{
UIImage* bgImage = [self imageWithColor:backgroundColor andSize:size];
CALayer *imageLayer = [CALayer layer];
imageLayer.frame = CGRectMake(0, 0, size.width, size.height);
imageLayer.contents = (id) bgImage.CGImage;
imageLayer.masksToBounds = YES;
imageLayer.cornerRadius = cornerRadius;
UIGraphicsBeginImageContext(bgImage.size);
[imageLayer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return roundedImage;
}
The imageWithColor method is as following:
+(UIImage *)imageWithColor:(UIColor *)color andSize:(CGSize)size
{
//quick fix, otherwise a "CG invalid context 0x0" error pops up
if(size.width == 0) size.width = 1;
if(size.height == 0) size.height = 1;
//---quick fix
UIImage *img = nil;
CGRect rect = CGRectMake(0, 0, size.width, size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context,
color.CGColor);
CGContextFillRect(context, rect);
img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Then I used it to create a pure-color circle image, but what I found is that the circle is not perfectly round. As an example, see the following code:
CGSize size = CGSizeMake(diameter, diameter);
int r = ceil((float)diameter/2.0);
UIImage *imageNormal = [self makeRoundedImage:size backgroundColor:backgroundColor cornerRadius:r];
[slider setThumbImage:imageNormal forState:UIControlStateNormal];
First I created a circle image, then I set the image as the thumb of a UISlider. But what's shown is the picture below:
You can see the circle is not an exact circle. I'm thinking it's probably caused by a screen resolution issue? Because if I use an image resource for the thumb, I need to add an @2x version. Does anybody know the reason? Thanks in advance.
Updated on 8 Aug 2015:
Further to this question and the answer from @Noah Witherspoon, I found the blurry-edge issue has been solved. But the circle still looks like it's being clipped. I used the following code:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f, radius*2.0f);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rect);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
And the circle looks like:
You can see the edge has been cut off.
I changed the code as following:
CGRect rect = CGRectMake(0.0f, 0.0f, radius*2.0f+4, radius*2.0f+4);
CGRect rectmin = CGRectMake(2.0f, 2.0f, radius*2, radius*2);
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, color.CGColor);
CGContextFillEllipseInRect(context, rectmin);
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
You can see the circle looks better(The top edge and the bottom edge):
I made the fill rect smaller, and the edge looks better, but I don't think it's a nice solution. Still, does anybody know why this happens?
From your screenshot it looks like you do actually have a circular image, but its scale is wrong—it’s not Retina—so it looks blurry and not-circular. The key thing is that instead of using UIGraphicsBeginImageContext which defaults to a scale of 1.0 (as compared to your screen, which is at a scale of 2.0 or 3.0), you should be using UIGraphicsBeginImageContextWithOptions. Also, you don’t need to make a layer or a view to draw a circle in an image context.
+ (UIImage *)makeCircleImageWithDiameter:(CGFloat)diameter color:(UIColor *)color {
UIGraphicsBeginImageContextWithOptions(CGSizeMake(diameter, diameter), NO, 0 /* scale (0 means “use the current screen’s scale”) */);
[color setFill];
CGContextFillEllipseInRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, diameter, diameter));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
If you want to get a circle every time try this:
- (UIImage *)makeCircularImage:(CGSize)size backgroundColor:(UIColor *)backgroundColor {
CGSize squareSize = CGSizeMake((size.width > size.height) ? size.width : size.height,
(size.width > size.height) ? size.width : size.height);
UIView *circleView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, squareSize.width, squareSize.height)];
circleView.layer.cornerRadius = circleView.frame.size.height * 0.5f;
circleView.backgroundColor = backgroundColor;
circleView.opaque = NO;
UIGraphicsBeginImageContextWithOptions(circleView.bounds.size, circleView.opaque, 0.0);
[circleView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
I'm trying to draw a line with a specific width. I searched for examples online, but I only found examples using straight lines. I need curved lines. Also, I need to detect if the user touched within the line. Is it possible to achieve this using Objective C and Sprite Kit? If so can someone provide an example?
You can use UIBezierPath to create bezier curves (nice smooth curves). You can specify this path for a CAShapeLayer and add that as a sublayer to your view:
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(10, 150)];
[path addCurveToPoint:CGPointMake(110, 150) controlPoint1:CGPointMake(40, 100) controlPoint2:CGPointMake(80, 100)];
[path addCurveToPoint:CGPointMake(210, 150) controlPoint1:CGPointMake(140, 200) controlPoint2:CGPointMake(170, 200)];
[path addCurveToPoint:CGPointMake(310, 150) controlPoint1:CGPointMake(250, 100) controlPoint2:CGPointMake(280, 100)];
CAShapeLayer *layer = [CAShapeLayer layer];
layer.lineWidth = 10;
layer.strokeColor = [UIColor redColor].CGColor;
layer.fillColor = [UIColor clearColor].CGColor;
layer.path = path.CGPath;
[self.view.layer addSublayer:layer];
If you want to randomize it a little, you can just randomize some of the curves. If you want some fuzziness, add some shadow. If you want the ends to be round, specify a rounded line cap:
UIBezierPath *path = [UIBezierPath bezierPath];
CGPoint point = CGPointMake(10, 100);
[path moveToPoint:point];
CGPoint controlPoint1;
CGPoint controlPoint2 = CGPointMake(point.x - 5.0 - arc4random_uniform(50), 150.0);
for (NSInteger i = 0; i < 5; i++) {
controlPoint1 = CGPointMake(point.x + (point.x - controlPoint2.x), 50.0);
point.x += 40.0 + arc4random_uniform(20);
controlPoint2 = CGPointMake(point.x - 5.0 - arc4random_uniform(50), 150.0);
[path addCurveToPoint:point controlPoint1:controlPoint1 controlPoint2:controlPoint2];
}
CAShapeLayer *layer = [CAShapeLayer layer];
layer.lineWidth = 5;
layer.strokeColor = [UIColor redColor].CGColor;
layer.fillColor = [UIColor clearColor].CGColor;
layer.path = path.CGPath;
layer.shadowColor = [UIColor redColor].CGColor;
layer.shadowRadius = 2.0;
layer.shadowOpacity = 1.0;
layer.shadowOffset = CGSizeZero;
layer.lineCap = kCALineCapRound;
[self.view.layer addSublayer:layer];
If you want it to be even more irregular, break those beziers into smaller segments, but the idea would be the same. The only trick with conjoined bezier curves is that you want to make sure that the second control point of one curve is in line with the first control point of the next one, or else you end up with sharp discontinuities in the curves.
If you want to detect if and when a user taps on it, that's more complicated. But what you have to do is:
Make a snapshot of the image:
- (UIImage *)captureView:(UIView *)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 1.0); // usually I'd use 0.0, but we'll use 1.0 here so that the tap point of the gesture matches the pixel of the snapshot
if ([view respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
BOOL success = [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
NSAssert(success, @"drawViewHierarchyInRect failed");
} else {
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
Get the color of the pixel at the coordinate the user tapped:
- (void)handleTap:(UITapGestureRecognizer *)gesture
{
CGPoint point = [gesture locationInView:gesture.view];
CGFloat red, green, blue, alpha;
UIColor *color = [self image:self.image colorAtPoint:point];
[color getRed:&red green:&green blue:&blue alpha:&alpha];
if (green < 0.9 && blue < 0.9 && red > 0.9)
NSLog(@"tapped on curve");
else
NSLog(@"didn't tap on curve");
}
Here I adapted Apple's code for getting the pixel buffer in order to determine the color of the pixel the user tapped on:
// adapted from https://developer.apple.com/library/mac/qa/qa1509/_index.html
- (UIColor *)image:(UIImage *)image colorAtPoint:(CGPoint)point
{
UIColor *color;
CGImageRef imageRef = image.CGImage;
// Create the bitmap context
CGContextRef context = [self createARGBBitmapContextForImage:imageRef];
NSAssert(context, @"error creating context");
// Get image width, height. We'll use the entire image.
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGRect rect = {{0,0},{width,height}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(context, rect, imageRef);
// Now we can get a pointer to the image data associated with the bitmap
// context.
uint8_t *data = CGBitmapContextGetData (context);
if (data != NULL) {
size_t offset = (NSInteger) point.y * 4 * width + (NSInteger) point.x * 4;
uint8_t alpha = data[offset];
uint8_t red = data[offset+1];
uint8_t green = data[offset+2];
uint8_t blue = data[offset+3];
color = [UIColor colorWithRed:red / 255.0 green:green / 255.0 blue:blue / 255.0 alpha:alpha / 255.0];
}
// When finished, release the context
CGContextRelease(context);
// Free image data memory for the context
if (data) {
free(data); // we used malloc in createARGBBitmapContextForImage, so free it
}
return color;
}
- (CGContextRef) createARGBBitmapContextForImage:(CGImageRef) inImage
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
size_t bitmapByteCount;
size_t bitmapBytesPerRow;
// Get image width, height. We'll use the entire image.
size_t pixelsWide = CGImageGetWidth(inImage);
size_t pixelsHigh = CGImageGetHeight(inImage);
// Declare the number of bytes per row. Each pixel in the bitmap in this
// example is represented by 4 bytes; 8 bits each of red, green, blue, and
// alpha.
bitmapBytesPerRow = (pixelsWide * 4);
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
// Use the generic RGB color space.
colorSpace = CGColorSpaceCreateDeviceRGB(); // CGColorSpaceCreateDeviceWithName(kCGColorSpaceGenericRGB);
NSAssert(colorSpace, @"Error allocating color space");
// Allocate memory for image data. This is the destination in memory
// where any drawing to the bitmap context will be rendered.
bitmapData = malloc(bitmapByteCount);
NSAssert(bitmapData, @"Unable to allocate bitmap buffer");
// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
(CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
NSAssert(context, @"Context not created!");
// Make sure and release colorspace before returning
CGColorSpaceRelease( colorSpace );
return context;
}
How do you create a reflection of a UIImage without using a UIImageView? I have seen the Apple code for reflections using two image views, but I don't want to add an image view to my application; I just want a single image composed of the original image and its reflection. Does anybody know how to do that?
This is from UIImage+FX.m, created by Nick Lockwood
UIImage * processedImage = //here your image
processedImage = [processedImage imageWithReflectionWithScale:0.15f
gap:10.0f
alpha:0.305f];
scale is the size of the reflection relative to the image, gap is the distance between the image and the reflection, and alpha is the opacity of the reflection:
- (UIImage *)imageWithReflectionWithScale:(CGFloat)scale gap:(CGFloat)gap alpha:(CGFloat)alpha
{
//get reflected image
UIImage *reflection = [self reflectedImageWithScale:scale];
CGFloat reflectionOffset = reflection.size.height + gap;
//create drawing context
UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.size.width, self.size.height + reflectionOffset * 2.0f), NO, 0.0f);
//draw reflection
[reflection drawAtPoint:CGPointMake(0.0f, reflectionOffset + self.size.height + gap) blendMode:kCGBlendModeNormal alpha:alpha];
//draw image
[self drawAtPoint:CGPointMake(0.0f, reflectionOffset)];
//capture resultant image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//return image
return image;
}
and
- (UIImage *)reflectedImageWithScale:(CGFloat)scale
{
//get reflection dimensions
CGFloat height = ceil(self.size.height * scale);
CGSize size = CGSizeMake(self.size.width, height);
CGRect bounds = CGRectMake(0.0f, 0.0f, size.width, size.height);
//create drawing context
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
//clip to gradient
CGContextClipToMask(context, bounds, [[self class] gradientMask]);
//draw reflected image
CGContextScaleCTM(context, 1.0f, -1.0f);
CGContextTranslateCTM(context, 0.0f, -self.size.height);
[self drawInRect:CGRectMake(0.0f, 0.0f, self.size.width, self.size.height)];
//capture resultant image
UIImage *reflection = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//return reflection image
return reflection;
}
gradientMask
+ (CGImageRef)gradientMask
{
static CGImageRef sharedMask = NULL;
if (sharedMask == NULL)
{
//create gradient mask
UIGraphicsBeginImageContextWithOptions(CGSizeMake(1, 256), YES, 0.0);
CGContextRef gradientContext = UIGraphicsGetCurrentContext();
CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
CGPoint gradientStartPoint = CGPointMake(0, 0);
CGPoint gradientEndPoint = CGPointMake(0, 256);
CGContextDrawLinearGradient(gradientContext, gradient, gradientStartPoint,
gradientEndPoint, kCGGradientDrawsAfterEndLocation);
sharedMask = CGBitmapContextCreateImage(gradientContext);
CGGradientRelease(gradient);
CGColorSpaceRelease(colorSpace);
UIGraphicsEndImageContext();
}
return sharedMask;
}
It returns the image with the reflection.
I wrote a blog post a while ago that uses a view's CAReplicatorLayer. It's really designed for handling dynamic updates to a view with a reflection, but I think it would work for what you want to do as well.
You can render the image and its reflection inside a graphics context and then get a CGImage from the context and from that again a UIImage.
But the question is: why not use two views? Why would you think it is a problem or limitation?
This is what I am trying to do
Create a UIImage View.
Do some drawing on it
Press a share button to share the image with the drawings on it.
My code works perfectly on iPad and iPhone, but the problem comes with Retina displays. My guess is some scale is not handled correctly, but I'm not sure what I am doing wrong. This is my code:
// Create the UIImageView named centerCanvas
// Do the drawing
CGPoint origin = centerCanvas.frame.origin;
CGSize size = centerCanvas.frame.size;
CGSize screenSize = [self returnScreenSize];
CGRect rect = CGRectMake(origin.x, origin.y, size.width, size.height);
UIGraphicsBeginImageContext(screenSize);
CGContextRef context = UIGraphicsGetCurrentContext();
[self.view.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], rect);
[shareView setImage:[UIImage imageWithCGImage:imageRef]];
-(CGSize) returnScreenSize
{
CGRect screenBounds = [[UIScreen mainScreen] bounds];
CGFloat screenScale = [[UIScreen mainScreen] scale];
CGSize screenSize = CGSizeMake(screenBounds.size.width * screenScale,screenBounds.size.height * screenScale);
return screenSize;
}
Try using
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
instead of the old UIGraphicsBeginImageContext, which uses a scale factor (the third argument above) equal to 1. By passing 0.0 as the scale factor you get the scale factor of the current screen. Have a look at the reference.