I used this function to draw a path between two points. I get the coordinates from Google's API, and there's no problem with that. My problem is: how can I determine whether a particular point lies on the path I draw? I'm trying to use CGContextPathContainsPoint(), but with no luck.
-(void)updateRouteView {
    CGContextRef context = CGBitmapContextCreate(nil,
                                                 routeView.frame.size.width,
                                                 routeView.frame.size.height,
                                                 8,
                                                 4 * routeView.frame.size.width,
                                                 CGColorSpaceCreateDeviceRGB(),
                                                 kCGImageAlphaPremultipliedLast);
    CGContextSetStrokeColorWithColor(context, lineColor.CGColor);
    CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1.0);
    CGContextSetLineWidth(context, 5.0);
    // routes : array of coordinates of the path
    for (int i = 0; i < routes.count; i++) {
        CLLocation *location = [routes objectAtIndex:i];
        CGPoint point = [mapView convertCoordinate:location.coordinate toPointToView:routeView];
        if (i == 0)
            CGContextMoveToPoint(context, point.x, routeView.frame.size.height - point.y);
        else
            CGContextAddLineToPoint(context, point.x, routeView.frame.size.height - point.y);
    }
    CGContextStrokePath(context);
    // this is the point I want to check if it is inside the path
    CLLocationCoordinate2D checkedLocation;
    checkedLocation.latitude = 37.782756;
    checkedLocation.longitude = -122.409647;
    CGPoint checkedPoint = [mapView convertCoordinate:checkedLocation toPointToView:routeView];
    checkedPoint.y = routeView.frame.size.height - checkedPoint.y;
    // even if the checkedPoint is inside the path, the result is false
    BOOL checking = CGContextPathContainsPoint(context, checkedPoint, kCGPathStroke);
    if (checking)
        NSLog(@"YES");
    else
        NSLog(@"NO");
    CGImageRef image = CGBitmapContextCreateImage(context);
    UIImage *img = [UIImage imageWithCGImage:image];
    routeView.image = img;
    CGContextRelease(context);
}
Call CGContextStrokePath(context) AFTER you check for your point. Quartz clears the current path once it has been stroked, so by the time you call CGContextPathContainsPoint() there is no path left to test against.
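A minimal sketch of the reordered check, using the same variables as the code above (kCGPathStroke makes the hit test honor the current line width):

    // Build the path first (the for loop above), then test BEFORE stroking:
    BOOL checking = CGContextPathContainsPoint(context, checkedPoint, kCGPathStroke);
    NSLog(checking ? @"YES" : @"NO");
    // Stroking consumes the current path, so do it only after the hit test.
    CGContextStrokePath(context);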
Related
I'm trying to draw a line with a specific width. I searched for examples online, but I only found examples using straight lines; I need curved lines. I also need to detect whether the user touched within the line. Is it possible to achieve this using Objective-C and Sprite Kit? If so, can someone provide an example?
You can use UIBezierPath to create Bézier curves (nice smooth curves). You can assign this path to a CAShapeLayer and add that as a sublayer of your view:
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(10, 150)];
[path addCurveToPoint:CGPointMake(110, 150) controlPoint1:CGPointMake(40, 100) controlPoint2:CGPointMake(80, 100)];
[path addCurveToPoint:CGPointMake(210, 150) controlPoint1:CGPointMake(140, 200) controlPoint2:CGPointMake(170, 200)];
[path addCurveToPoint:CGPointMake(310, 150) controlPoint1:CGPointMake(250, 100) controlPoint2:CGPointMake(280, 100)];
CAShapeLayer *layer = [CAShapeLayer layer];
layer.lineWidth = 10;
layer.strokeColor = [UIColor redColor].CGColor;
layer.fillColor = [UIColor clearColor].CGColor;
layer.path = path.CGPath;
[self.view.layer addSublayer:layer];
If you want to randomize it a little, you can just randomize some of the curves. If you want some fuzziness, add some shadow. If you want the ends to be round, specify a rounded line cap:
UIBezierPath *path = [UIBezierPath bezierPath];
CGPoint point = CGPointMake(10, 100);
[path moveToPoint:point];
CGPoint controlPoint1;
CGPoint controlPoint2 = CGPointMake(point.x - 5.0 - arc4random_uniform(50), 150.0);
for (NSInteger i = 0; i < 5; i++) {
    controlPoint1 = CGPointMake(point.x + (point.x - controlPoint2.x), 50.0);
    point.x += 40.0 + arc4random_uniform(20);
    controlPoint2 = CGPointMake(point.x - 5.0 - arc4random_uniform(50), 150.0);
    [path addCurveToPoint:point controlPoint1:controlPoint1 controlPoint2:controlPoint2];
}
CAShapeLayer *layer = [CAShapeLayer layer];
layer.lineWidth = 5;
layer.strokeColor = [UIColor redColor].CGColor;
layer.fillColor = [UIColor clearColor].CGColor;
layer.path = path.CGPath;
layer.shadowColor = [UIColor redColor].CGColor;
layer.shadowRadius = 2.0;
layer.shadowOpacity = 1.0;
layer.shadowOffset = CGSizeZero;
layer.lineCap = kCALineCapRound;
[self.view.layer addSublayer:layer];
If you want it to be even more irregular, break those Béziers into smaller segments, but the idea would be the same. The only trick with joined Bézier curves is that you want the second control point of one curve to be collinear with the shared end point and the first control point of the next curve, or else you end up with sharp discontinuities between the curves.
If you want to detect if and when a user taps on it, that's more complicated. But what you have to do is:
Make a snapshot of the view:
- (UIImage *)captureView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 1.0); // usually I'd use 0.0, but we'll use 1.0 here so that the tap point of the gesture matches the pixel of the snapshot
    if ([view respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
        BOOL success = [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
        NSAssert(success, @"drawViewHierarchyInRect failed");
    } else {
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Get the color of the pixel at the coordinate that the user tapped:
- (void)handleTap:(UITapGestureRecognizer *)gesture
{
    CGPoint point = [gesture locationInView:gesture.view];
    CGFloat red, green, blue, alpha;
    UIColor *color = [self image:self.image colorAtPoint:point];
    [color getRed:&red green:&green blue:&blue alpha:&alpha];
    if (green < 0.9 && blue < 0.9 && red > 0.9)
        NSLog(@"tapped on curve");
    else
        NSLog(@"didn't tap on curve");
}
Here is Apple's code, which I adapted for getting the pixel buffer in order to determine the color of the pixel the user tapped on:
// adapted from https://developer.apple.com/library/mac/qa/qa1509/_index.html

- (UIColor *)image:(UIImage *)image colorAtPoint:(CGPoint)point
{
    UIColor *color = nil; // initialized so we return nil rather than garbage if the context has no data
    CGImageRef imageRef = image.CGImage;

    // Create the bitmap context
    CGContextRef context = [self createARGBBitmapContextForImage:imageRef];
    NSAssert(context, @"error creating context");

    // Get image width, height. We'll use the entire image.
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    CGRect rect = {{0,0},{width,height}};

    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(context, rect, imageRef);

    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    uint8_t *data = CGBitmapContextGetData(context);
    if (data != NULL) {
        // The context is ARGB, 4 bytes per pixel.
        size_t offset = (NSInteger)point.y * 4 * width + (NSInteger)point.x * 4;
        uint8_t alpha = data[offset];
        uint8_t red   = data[offset+1];
        uint8_t green = data[offset+2];
        uint8_t blue  = data[offset+3];
        color = [UIColor colorWithRed:red / 255.0 green:green / 255.0 blue:blue / 255.0 alpha:alpha / 255.0];
    }

    // When finished, release the context
    CGContextRelease(context);

    // Free image data memory for the context
    if (data) {
        free(data); // we used malloc in createARGBBitmapContextForImage, so free it
    }

    return color;
}
- (CGContextRef)createARGBBitmapContextForImage:(CGImageRef)inImage
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    size_t bitmapByteCount;
    size_t bitmapBytesPerRow;

    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    // Use the device RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    NSAssert(colorSpace, @"Error allocating color space");

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    NSAssert(bitmapData, @"Unable to allocate bitmap buffer");

    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    NSAssert(context, @"Context not created!");

    // Make sure to release the colorspace before returning
    CGColorSpaceRelease(colorSpace);

    return context;
}
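As an aside, if your curve lives in a CAShapeLayer as above, there is an alternative to pixel inspection that the answer above doesn't use, so treat this as a sketch: convert the stroke into a fillable region and hit-test that directly.

    // Build a path that outlines the stroked curve, then test the tap against it.
    // `path` is the UIBezierPath from above; 10.0 should match the layer's lineWidth.
    // `tapPoint` is assumed to be the tap location in the layer's coordinate space.
    CGPathRef strokedPath = CGPathCreateCopyByStrokingPath(path.CGPath, NULL,
                                                           10.0, kCGLineCapRound,
                                                           kCGLineJoinRound, 0.0);
    BOOL onCurve = CGPathContainsPoint(strokedPath, NULL, tapPoint, NO);
    CGPathRelease(strokedPath);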
How do I create a reflection of a UIImage without using UIImageView? I have seen the Apple code for reflection using two image views, but I don't want to add an image view in my application; I just want a single image containing the original image and the reflected image. Does anybody know how to do that?
This is from UIImage+FX.m, created by Nick Lockwood:
UIImage *processedImage = ...; // here your image
processedImage = [processedImage imageWithReflectionWithScale:0.15f
                                                          gap:10.0f
                                                        alpha:0.305f];
scale is the height of the reflection relative to the image, gap is the distance between the image and the reflection, and alpha is the opacity of the reflection.
- (UIImage *)imageWithReflectionWithScale:(CGFloat)scale gap:(CGFloat)gap alpha:(CGFloat)alpha
{
    //get reflected image
    UIImage *reflection = [self reflectedImageWithScale:scale];
    CGFloat reflectionOffset = reflection.size.height + gap;

    //create drawing context
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.size.width, self.size.height + reflectionOffset * 2.0f), NO, 0.0f);

    //draw reflection
    [reflection drawAtPoint:CGPointMake(0.0f, reflectionOffset + self.size.height + gap) blendMode:kCGBlendModeNormal alpha:alpha];

    //draw image
    [self drawAtPoint:CGPointMake(0.0f, reflectionOffset)];

    //capture resultant image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    //return image
    return image;
}
and
- (UIImage *)reflectedImageWithScale:(CGFloat)scale
{
    //get reflection dimensions
    CGFloat height = ceil(self.size.height * scale);
    CGSize size = CGSizeMake(self.size.width, height);
    CGRect bounds = CGRectMake(0.0f, 0.0f, size.width, size.height);

    //create drawing context
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();

    //clip to gradient
    CGContextClipToMask(context, bounds, [[self class] gradientMask]);

    //draw reflected image
    CGContextScaleCTM(context, 1.0f, -1.0f);
    CGContextTranslateCTM(context, 0.0f, -self.size.height);
    [self drawInRect:CGRectMake(0.0f, 0.0f, self.size.width, self.size.height)];

    //capture resultant image
    UIImage *reflection = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    //return reflection image
    return reflection;
}
And the gradientMask helper:
+ (CGImageRef)gradientMask
{
    static CGImageRef sharedMask = NULL;
    if (sharedMask == NULL)
    {
        //create gradient mask
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(1, 256), YES, 0.0);
        CGContextRef gradientContext = UIGraphicsGetCurrentContext();
        CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
        CGPoint gradientStartPoint = CGPointMake(0, 0);
        CGPoint gradientEndPoint = CGPointMake(0, 256);
        CGContextDrawLinearGradient(gradientContext, gradient, gradientStartPoint,
                                    gradientEndPoint, kCGGradientDrawsAfterEndLocation);
        sharedMask = CGBitmapContextCreateImage(gradientContext);
        CGGradientRelease(gradient);
        CGColorSpaceRelease(colorSpace);
        UIGraphicsEndImageContext();
    }
    return sharedMask;
}
It returns the image with the reflection.
I wrote a blog post a while ago that uses a view's CAReplicatorLayer. It's really designed for handling dynamic updates to a view with reflection, but I think it would work for what you want to do as well.
You can render the image and its reflection inside a graphics context, then get a CGImage from the context, and from that again a UIImage.
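A minimal sketch of that idea, without the gradient fade of the full UIImage+FX version above; image and gap are assumed inputs:

    CGFloat gap = 4.0f;
    CGSize size = CGSizeMake(image.size.width, image.size.height * 2.0f + gap);
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    // Draw the original at the top.
    [image drawAtPoint:CGPointZero];
    // Flip the context vertically and draw again to get a mirror image below.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0.0f, size.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);
    [image drawInRect:CGRectMake(0.0f, 0.0f, image.size.width, image.size.height)
            blendMode:kCGBlendModeNormal
                alpha:0.4f];
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();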
But the question is: why not use two views? Why would you consider that a problem or limitation?
I've been following WWDC Session 127, Customizing Maps With Overlays. They say you can get the image to use as an MKOverlay either from the app's bundle or from the network. I would like to know if the image can be generated by the phone using its APIs.
Has anyone done it? Can you maybe provide a tutorial?
Thanks.
[EDIT]
I would also need a tutorial on how to add that image, created by the phone or downloaded from the network, to the map.
All I see is the WWDC demo, but you need the ticket to the conference, which I obviously don't have.
Here is some code for drawing into a UIImage using Quartz. Hopefully it is helpful and you can get the idea:
void mainFunction() {
    // layerFrame, scaledFactor, scaledRect, and destinationLayer are defined
    // elsewhere in the original code.
    // background size
    CGSize sizeBack = CGSizeMake(layerFrame.size.width * scaledFactor, layerFrame.size.height * scaledFactor);
    UIGraphicsBeginImageContext(sizeBack);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(context, 0., 0., 0., 1.);
    drawBorder(context, scaledRect);
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    if (destinationLayer) {
        [destinationLayer setContents:(id)img.CGImage];
    }
}
CGPathRef createRoundedRectanglePath(CGRect mainRect, CGFloat roundRatio) {
    roundRatio = fabs(roundRatio);
    roundRatio = MIN(1, roundRatio);
    CGFloat offset = roundRatio * MIN(mainRect.size.width, mainRect.size.height) / 2;

    // the four corners...
    CGPoint c1 = CGPointMake(0, 0);
    CGPoint c2 = CGPointMake(mainRect.size.width, 0);
    CGPoint c3 = CGPointMake(mainRect.size.width, mainRect.size.height);
    CGPoint c4 = CGPointMake(0, mainRect.size.height);

    // ...and the points where the straight edges meet the rounded corners
    CGPoint pA = CGPointMake(c1.x + offset, c1.y);
    CGPoint pB = CGPointMake(c2.x - offset, c2.y);
    CGPoint pC = CGPointMake(c2.x, c2.y + offset);
    CGPoint pD = CGPointMake(c3.x, c3.y - offset);
    CGPoint pE = CGPointMake(c3.x - offset, c3.y);
    CGPoint pF = CGPointMake(c4.x + offset, c4.y);
    CGPoint pG = CGPointMake(c4.x, c4.y - offset);
    CGPoint pH = CGPointMake(c1.x, c1.y + offset);

    CGMutablePathRef result = CGPathCreateMutable();
    CGPathMoveToPoint(result, NULL, pA.x, pA.y);
    CGPathAddLineToPoint(result, NULL, pB.x, pB.y);
    CGPathAddQuadCurveToPoint(result, NULL, c2.x, c2.y, pC.x, pC.y);
    CGPathAddLineToPoint(result, NULL, pD.x, pD.y);
    CGPathAddQuadCurveToPoint(result, NULL, c3.x, c3.y, pE.x, pE.y);
    CGPathAddLineToPoint(result, NULL, pF.x, pF.y);
    CGPathAddQuadCurveToPoint(result, NULL, c4.x, c4.y, pG.x, pG.y);
    CGPathAddLineToPoint(result, NULL, pH.x, pH.y);
    CGPathAddQuadCurveToPoint(result, NULL, c1.x, c1.y, pA.x, pA.y);
    CGPathCloseSubpath(result);

    CGPathRef path = CGPathCreateCopy(result);
    CGPathRelease(result);
    return path;
}
void drawBorder(CGContextRef context, CGRect rect) {
    CGContextSaveGState(context);

    /* Outside Path */
    CGPathRef thePath = createRoundedRectanglePath(rect, 0.2);

    /* Fill with Gradient Color */
    CGFloat locations[] = { 0.0, 1.0 };
    CGColorRef startColor = [UIColor colorWithRed:1.
                                            green:1.
                                             blue:1.
                                            alpha:1.].CGColor;
    CGColorRef endColor = [UIColor colorWithRed:208./255.
                                          green:208./255.
                                           blue:208./255.
                                          alpha:1.].CGColor;
    NSArray *colors = [NSArray arrayWithObjects:(id)startColor, (id)endColor, nil];
    CGPoint startPoint = CGPointMake(CGRectGetMidX(rect), 0);
    CGPoint endPoint = CGPointMake(CGRectGetMidX(rect), rect.size.height);
    CGContextAddPath(context, thePath);
    CGContextClip(context);
    // drawLinearGradient is a helper defined elsewhere in the original code;
    // note the cast is to CFArrayRef, not CFArrayRef*
    drawLinearGradient(context, startPoint, endPoint, locations, (CFArrayRef)colors);

    /* Stroke path */
    CGPathRelease(thePath);
    thePath = createRoundedRectanglePath(rect, 0.2);
    CGContextSetStrokeColor(context, CGColorGetComponents([UIColor colorWithRed:0 green:0 blue:1. alpha:1.].CGColor));
    CGContextAddPath(context, thePath);
    CGContextSetLineWidth(context, 1.);
    CGContextDrawPath(context, kCGPathStroke);
    CGContextRestoreGState(context);
    CGPathRelease(thePath);
}
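As for getting a generated (or downloaded) image onto the map, which the question's edit asks about, the answer above doesn't cover it. A rough sketch of the usual approach, with a hypothetical class name and property, is an MKOverlayView subclass that draws the image over its overlay's bounding rect:

    // Hypothetical MKOverlayView subclass; pair it with an object conforming
    // to MKOverlay that supplies coordinate/boundingMapRect.
    @interface ImageOverlayView : MKOverlayView
    @property (nonatomic, retain) UIImage *overlayImage; // the generated/downloaded image
    @end

    @implementation ImageOverlayView
    @synthesize overlayImage;

    - (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
    {
        // Map the overlay's full bounding rect into this view's drawing space.
        CGRect rect = [self rectForMapRect:[self.overlay boundingMapRect]];
        // CGContextDrawImage draws flipped relative to UIKit, so flip the context.
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextTranslateCTM(context, 0.0, -rect.size.height);
        CGContextDrawImage(context, rect, self.overlayImage.CGImage);
    }
    @end

You would then return an instance of this view from the map view delegate's mapView:viewForOverlay: method.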
I have a UIView that renders some text using Core Text. Everything works fine, except for the fact that the entire view has a black background color.
I've tried the most basic solutions, like
[self setBackgroundColor:[UIColor clearColor]];
or
CGFloat components[] = {0.f, 0.f, 0.f, 0.f };
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGColorRef transparent = CGColorCreate(rgbColorSpace, components);
CGContextSetFillColorWithColor(context, transparent);
CFRelease(transparent);
but this didn't help at all.
I'm pounding my head on this issue because I can't quite understand how Core Text really works and how this pure-C layer relates to all the other Objective-C stuff, and that doesn't help much in solving the problem. So I'm asking you folks if you could provide me a solution and (most important) an explanation of it.
FYI, I'm also posting the code of the Core Text view. It's all based on two functions: parseText, which builds up the CFAttributedString, and drawRect:, which takes care of the drawing.
-(void)parseText {
    NSLog(@"rendering this text: %@", symbolHTML);
    attrString = CFAttributedStringCreateMutable(kCFAllocatorDefault, 0);
    CFAttributedStringReplaceString(attrString, CFRangeMake(0, 0), (CFStringRef)symbolHTML);

    CFStringRef fontName = (CFStringRef)@"MetaSerif-Book-Accent";
    CTFontRef myFont = CTFontCreateWithName(fontName, 18.f, NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGFloat components[] = { 1.0f, 0.3f, 0.3f, 1.f };
    CGColorRef redd = CGColorCreate(rgbColorSpace, components);
    CGColorSpaceRelease(rgbColorSpace); // was leaked in the original
    CFAttributedStringSetAttribute(attrString, CFRangeMake(0, 0),
                                   kCTForegroundColorAttributeName, redd);
    CFRelease(redd);

    CFAttributedStringSetAttribute(attrString, CFRangeMake(0, CFStringGetLength((CFStringRef)symbolHTML)), kCTFontAttributeName, myFont);
    CFRelease(myFont); // was leaked in the original

    CTTextAlignment alignment = kCTLeftTextAlignment;
    CTParagraphStyleSetting settings[] = {
        {kCTParagraphStyleSpecifierAlignment, sizeof(alignment), &alignment}
    };
    CTParagraphStyleRef paragraphStyle = CTParagraphStyleCreate(settings, sizeof(settings) / sizeof(settings[0]));
    // the length must come from the attributed string itself, not a CFStringRef cast
    CFAttributedStringSetAttribute(attrString, CFRangeMake(0, CFAttributedStringGetLength(attrString)), kCTParagraphStyleAttributeName, paragraphStyle);
    CFRelease(paragraphStyle); // was leaked in the original

    [self calculateHeight];
}
And this is the drawRect: function:
- (void)drawRect:(CGRect)rect {
    /* get the context */
    CGContextRef context = UIGraphicsGetCurrentContext();

    /* flip the coordinate system */
    float viewHeight = self.bounds.size.height;
    CGContextTranslateCTM(context, 0, viewHeight);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetTextMatrix(context, CGAffineTransformMakeScale(1.0, 1.0));

    /* generate the path for the text */
    CGMutablePathRef path = CGPathCreateMutable();
    CGRect bounds = CGRectMake(0, 0, self.bounds.size.width - 20.0, self.bounds.size.height);
    CGPathAddRect(path, NULL, bounds);

    /* draw the text */
    CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(attrString);
    CTFrameRef frame = CTFramesetterCreateFrame(framesetter,
                                                CFRangeMake(0, 0), path, NULL);
    CFRelease(framesetter);
    CFRelease(path);
    CTFrameDraw(frame, context);
    CFRelease(frame); // was leaked in the original

    /* one attempt at a transparent background; it has no effect, since setting
       a fill color after drawing doesn't repaint anything */
    CGFloat components[] = {0.f, 0.f, 0.f, 0.f};
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGColorRef transparent = CGColorCreate(rgbColorSpace, components);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextSetFillColorWithColor(context, transparent);
    CFRelease(transparent);
}
I hope that everything is clear, but if something looks confusing, just ask and I'll be glad to explain myself better. Thank you in advance for this hardcore issue :)
Here's a question: did you remember to set view.opaque = NO? Because if not, you'll get that black background regardless of what else you do.
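For instance, wherever the Core Text view is created ("CoreTextView" here is just a placeholder for your UIView subclass):

    // Assumed setup code: a non-opaque view with a clear background lets
    // whatever is behind it show through instead of compositing against black.
    CoreTextView *textView = [[CoreTextView alloc] initWithFrame:self.view.bounds];
    textView.opaque = NO;
    textView.backgroundColor = [UIColor clearColor];
    [self.view addSubview:textView];
    [textView release]; // pre-ARC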
I'm writing code to use UIImagePickerController. Corey previously posted some nice sample code on SO related to cropping and scaling. However, it doesn't include implementations of cropImage:to:andScaleTo: or straightenAndScaleImage().
Here's how they're used:
newImage = [self cropImage:originalImage to:croppingRect andScaleTo:scaledImageSize];
...
UIImage *rotatedImage = straightenAndScaleImage([editInfo objectForKey:UIImagePickerControllerOriginalImage], scaleSize);
Since I'm sure someone must be using something very similar to Corey's sample code, there's probably an existing implementation of these two functions. Would someone like to share?
If you check the post you linked to, you'll see a link to the Apple dev forums where I got some of this code; here are the methods you're asking about. Note: I may have made some changes relating to data types, but I can't quite remember. It should be trivial for you to adjust if needed.
- (UIImage *)cropImage:(UIImage *)image to:(CGRect)cropRect andScaleTo:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGImageRef subImage = CGImageCreateWithImageInRect([image CGImage], cropRect);
    CGRect myRect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    // flip the context so the CGImage isn't drawn upside down
    CGContextScaleCTM(context, 1.0f, -1.0f);
    CGContextTranslateCTM(context, 0.0f, -size.height);
    CGContextDrawImage(context, myRect, subImage);
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRelease(subImage);
    return croppedImage;
}
UIImage *straightenAndScaleImage(UIImage *image, int maxDimension) {
    CGImageRef img = [image CGImage];
    CGFloat width = CGImageGetWidth(img);
    CGFloat height = CGImageGetHeight(img);
    CGRect bounds = CGRectMake(0, 0, width, height);
    CGSize size = bounds.size;

    // scale down so the longest side is at most maxDimension, preserving aspect ratio
    if (width > maxDimension || height > maxDimension) {
        CGFloat ratio = width / height;
        if (ratio > 1.0f) {
            size.width = maxDimension;
            size.height = size.width / ratio;
        }
        else {
            size.height = maxDimension;
            size.width = size.height * ratio;
        }
    }

    CGFloat scale = size.width / width;
    // orientationTransformForImage() is a helper from the same forum post; it
    // returns the affine transform that corrects for the image's EXIF orientation
    CGAffineTransform transform = orientationTransformForImage(image, &size);
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Flip
    UIImageOrientation orientation = [image imageOrientation];
    if (orientation == UIImageOrientationRight || orientation == UIImageOrientationLeft) {
        CGContextScaleCTM(context, -scale, scale);
        CGContextTranslateCTM(context, -height, 0);
    } else {
        CGContextScaleCTM(context, scale, -scale);
        CGContextTranslateCTM(context, 0, -height);
    }
    CGContextConcatCTM(context, transform);
    CGContextDrawImage(context, bounds, img);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
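A hypothetical usage example tying both methods back to the UIImagePickerController delegate; the crop rect and sizes are placeholders:

    - (void)imagePickerController:(UIImagePickerController *)picker
    didFinishPickingMediaWithInfo:(NSDictionary *)info
    {
        UIImage *original = [info objectForKey:UIImagePickerControllerOriginalImage];
        // Correct the orientation and cap the longest side at 1024 pixels.
        UIImage *straightened = straightenAndScaleImage(original, 1024);
        // Crop a square out of the result and scale it to a thumbnail.
        UIImage *thumbnail = [self cropImage:straightened
                                          to:CGRectMake(0.0f, 0.0f, 320.0f, 320.0f)
                                  andScaleTo:CGSizeMake(100.0f, 100.0f)];
        // ... use thumbnail ...
        [picker dismissModalViewControllerAnimated:YES];
    }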