I have already looked around and cannot seem to find any info on exactly this topic.
If you are familiar with Unity3D uGUI, I am trying to achieve the same "Filled" Image effect there, but in Xcode using Objective-C.
If I have a faded image of a "character", and as time goes on I want to fill that "character" image with a "filled character" image in the same position, how can I work through this?
I have tried using MGImageResizingUtilities, but this changes the size of the image: the cropped image is no longer the same size as the original image being used, and the scale is set to aspect ratio.
It was hard to find a good example, but here is something to give an idea:
http://s22.postimg.org/9qghxosox/Screen_Shot_2016_01_30_at_17_08_06.png
Here is some code I am trying to use:
CALayer *maskLayer = [CALayer layer];
UIImage *maskImage = [UIImage imageNamed:@"turtle_posing"];
maskLayer.contents = (id)maskImage.CGImage;
maskLayer.bounds = self.filledTurtlePic.bounds;
CGRect turtleRect = self.filledTurtlePic.frame;
maskLayer.frame = CGRectMake(0, turtleRect.size.height, turtleRect.size.width, -turtleRect.size.height);
self.filledTurtlePic.layer.mask = maskLayer;
Here are two screenshots of an attempt to completely fill the image with the code above, and one screenshot of an attempt to partially fill the image (the maskLayer frame height is set to -40).
Thanks in advance.
The way I would do this is to create two images. Filled and unfilled.
Then create two image views. Fix their sizes so they are the same size and in the same position: the unfilled one underneath and the filled one on top. When you run this, it should look like only the filled one is on the screen.
Now create a CALayer and set it as the mask of the filledImageView.layer.
When you change the frame of this layer it will only show what is inside that frame. So it will hide part of the filledImageView and you will see the unfilledImageView underneath.
So, if your image view has a frame of (100, 100, 100, 100), i.e. origin (100, 100) and size (100, 100), then the mask layer frame should be (0, 0, 100, 100) for the full image. This is because the mask's origin is relative to the image view.
For an empty image it should be (0, 100, 100, 0).
For x% filled, the frame of the mask layer should be (0, 100 - x, 100, x).
EDIT
You should have two image views...
filledImageView
unFilledImageView
These are set up correctly. Do not change the frames of these, nor their layers.
You should create a layer and add it as a mask of the filledImageView.
self.maskLayer = [CALayer layer];
self.filledImageView.layer.mask = self.maskLayer;
Now have a function that updates the mask...
- (void)changeFilledImage:(CGFloat)percentage {
// percentage is from 0.0 to 1.0
CGFloat fullHeight = CGRectGetHeight(self.filledImageView.frame);
CGFloat width = CGRectGetWidth(self.filledImageView.frame);
CGFloat percentageHeight = fullHeight * percentage;
self.maskLayer.frame = CGRectMake(0, fullHeight - percentageHeight, width, percentageHeight);
}
Something like this should do.
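To get the over-time fill from the original question, one option is to drive this method from a display link. This is a minimal sketch, not part of the answer above: fillStartTime is a hypothetical CFTimeInterval property, and the three-second duration is an arbitrary assumption.
- (void)startFilling {
    self.fillStartTime = CACurrentMediaTime(); // hypothetical CFTimeInterval property
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(tickFill:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)tickFill:(CADisplayLink *)link {
    CGFloat duration = 3.0; // assumed fill duration in seconds
    CGFloat t = (CACurrentMediaTime() - self.fillStartTime) / duration;
    [self changeFilledImage:MIN(t, 1.0)];
    if (t >= 1.0) {
        [link invalidate];
    }
}
Alternatively, because maskLayer is a standalone layer, simply setting its final frame inside a CATransaction with a duration lets Core Animation interpolate the fill for you.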
Related
I have a UIImageView that shows an image of a credit card. My problem is that I want to make borders around it so the credit card won't touch the edges of the UIImageView. I don't want to change the UIImageView's position on the screen, so what can I do?
image 1
image 2
(Unlike the example of the blue card, it doesn't have to leave space only at the sides; it could shrink from all sides.)
You basically have three options that can fix this.
Adjust the UIViewContentMode contentMode of the UIImageView so that it appropriately displays the image (this will only work, however, if the image you are using scales to fit without touching the border).
Use a UIButton instead and simply adjust the image insets until you're satisfied (just disable the button, or set user interaction to NO, to "mimic" an image view); a sketch follows at the end of this answer.
Create a wrapper method that creates a new image at your desired size. Something like this should work just fine:
CGSize size = CGSizeMake(< some width >, < some height >);
Then just create a new image capture with the size you've chosen
// Use MIN so the whole image fits inside the target size (aspect fit);
// multiply the scale by a factor below 1.0 if you want padding on every side
CGFloat scale = MIN(size.width / image.size.width, size.height / image.size.height);
CGFloat width = image.size.width * scale;
CGFloat height = image.size.height * scale;
// Center the scaled image inside the new canvas
CGRect imageRect = CGRectMake((size.width - width) / 2.0f,
                              (size.height - height) / 2.0f,
                              width,
                              height);
UIGraphicsBeginImageContextWithOptions(size, NO, 0);
[image drawInRect:imageRect];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
From here you should then be able to just apply a border width and color to get your desired result!
I guess a fourth option could be manually shrinking the image (with some photo editor) so that it renders (fits) into your image view with the appropriate padding around it. I think the others are easier and closer to best practice, though.
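For option 2, a rough sketch could look like this. The image name, frame source, and inset values are placeholders of mine, not from the answer above:
// Sketch of option 2: a disabled UIButton standing in for an inset image view
UIButton *cardButton = [UIButton buttonWithType:UIButtonTypeCustom];
cardButton.frame = self.cardImageView.frame; // hypothetical: reuse the image view's frame
[cardButton setImage:[UIImage imageNamed:@"creditCard"] forState:UIControlStateNormal];
cardButton.imageEdgeInsets = UIEdgeInsetsMake(10, 10, 10, 10); // arbitrary padding on all sides
// Let the image scale down inside the content area instead of centering at full size
cardButton.contentHorizontalAlignment = UIControlContentHorizontalAlignmentFill;
cardButton.contentVerticalAlignment = UIControlContentVerticalAlignmentFill;
cardButton.imageView.contentMode = UIViewContentModeScaleAspectFit;
cardButton.userInteractionEnabled = NO; // behave like a plain image view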
This is a code snippet for creating a thumbnail-sized image (from an original large image) and placing it appropriately on top of a table view cell. As I was studying the code I got stuck at the part where the thumbnail is given a position by setting its abscissa and ordinate. In the method -(void)setThumbnailDataFromImage:(UIImage *)image they set the dimensions and coordinates for the project thumbnail:
-(void)setThumbnailDataFromImage:(UIImage *)image {
    CGSize origImageSize = [image size];
    // The rectangle of the thumbnail
    CGRect newRect = CGRectMake(0, 0, 40, 40);
    // Figure out a scaling ratio to make sure we maintain the same aspect ratio
    float ratio = MAX(newRect.size.width / origImageSize.width,
                      newRect.size.height / origImageSize.height);
    // Create a transparent bitmap context with a scaling factor equal to that of the screen
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
    // Create a path that is a rounded rectangle
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:newRect cornerRadius:5.0];
    // Make all subsequent drawing clip to this rounded rectangle
    [path addClip];
    // Center the image in the thumbnail rectangle
    CGRect projectRect;
    projectRect.size.width = ratio * origImageSize.width;
    projectRect.size.height = ratio * origImageSize.height;
    projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;
    projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;
    // Draw the image into it
    [image drawInRect:projectRect];
    // Get the image from the image context; keep it as our thumbnail
    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    [self setThumbnail:smallImage];
    // Get the PNG representation of the image and set it as our archivable data
    NSData *data = UIImagePNGRepresentation(smallImage);
    [self setThumbnailData:data];
    // Clean up image context resources; we're done
    UIGraphicsEndImageContext();
}
I get the width and height computation, wherein we multiply origImageSize by the scaling factor/ratio.
But then we use the following to give the thumbnail a position:
projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;
projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;
This I fail to understand; I cannot wrap my head around it. Is this part of the centering process? I mean, are we using a mathematical relation here to position the thumbnail, or is it some arbitrary calculation that could have been anything? Am I missing some fundamental idea behind these two lines of code?
Those two lines are standard code for centering something, although they aren’t quite written in the most general way. You normally want to use:
projectRect.origin.x = newRect.origin.x + newRect.size.width / 2.0 - projectRect.size.width / 2.0;
projectRect.origin.y = newRect.origin.y + newRect.size.height / 2.0 - projectRect.size.height / 2.0;
In your case the author knows the origin is (0, 0), so they omitted the first term in each line.
To center a rectangle in another rectangle, you want the centers of the two to line up on each axis. Take, say, half the container's width (the horizontal center of the outer rectangle) and subtract half the inner rectangle's width, which takes you to the left side of the inner rectangle; that gives you where the inner rectangle's left side should be (i.e. its x origin) when it is correctly centered.
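To put numbers on it using the thumbnail code above (my example, not from the post): newRect is 40 points wide, and for a 300×200 source image, ratio = MAX(40/300, 40/200) = 0.2, so projectRect.size.width = 0.2 × 300 = 60 and origin.x = (40 − 60) / 2 = −10. The oversized scaled image is centered by hanging 10 points off each side of the 40-point canvas; the same relation gives positive padding whenever the inner rect is smaller than the container.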
I'm working on making an iPhone app where there are two image views, and when you touch the top one, the bottom one shows through wherever you tapped.
Basically what I want to do is cut an ellipse/rounded rect out of an image. To do this I was thinking of either clipping the image or changing the alpha of the pixels in the rect to zero. I am new to Quartz 2D programming, so I am not sure how to do this.
Assuming I have:
UIImageView *topImage;
UIImageView *bottomImage;
How do I delete a CGRect/ellipse/rounded rect from these images?
This is kind of like those lottery tickets that you have to scratch off to reveal if you won.
I would generally try to make a mask from a path (here containing a rounded rectangle), then mask the image with it, as demonstrated in the Apple docs. One of the benefits of this is that for hit testing, all you need to do is call CGPathContainsPoint with the point that was touched (i.e. it will test whether the touch was in the visible area of the image).
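A rough sketch of that idea, using a CAShapeLayer mask rather than the CGImage masking shown in the docs (the hole rect, corner radius, and method name are my own placeholders): mask the top image view's layer with an even-odd path so everything except the punched-out region stays visible, revealing the bottom image view underneath.
#import <QuartzCore/QuartzCore.h>

- (void)cutHole:(CGRect)holeRect inImageView:(UIImageView *)topImage {
    // Outer rect plus inner rounded rect; even-odd makes the overlap transparent
    UIBezierPath *path = [UIBezierPath bezierPathWithRect:topImage.bounds];
    [path appendPath:[UIBezierPath bezierPathWithRoundedRect:holeRect cornerRadius:8.0]];

    CAShapeLayer *mask = [CAShapeLayer layer];
    mask.frame = topImage.bounds;
    mask.path = path.CGPath;
    mask.fillRule = kCAFillRuleEvenOdd; // keep everything except the hole
    topImage.layer.mask = mask;
}
For the hit test mentioned above, CGPathContainsPoint(path.CGPath, NULL, touchPoint, true), with the even-odd flag set, tells you whether the touch landed in the still-visible area.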
I tried this code:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect frame = CGRectMake(100, 100, 100, 100);
CGPathRef roundedRectPath = [self newPathForRoundedRect:frame radius:5];
CGContextAddPath(ctx, roundedRectPath);
CGContextClip (ctx);
CGPathRelease(roundedRectPath);
(together with the rounded-rect path function you sent)
This is on a white view, and beneath the view there is a gray window, so I thought this would just show gray instead of white inside the CGRect frame, but it didn't do anything...
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
CGSize contextSize = CGSizeMake(320, 400);
UIGraphicsBeginImageContext(contextSize);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *savedImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setSaveImage:savedImg];
This is what I use to extract part of the image from the main screen.
UIGraphicsBeginImageContext only takes a size. Is there any way to use a CGRect, or some other way, to extract an image from a specific portion of the screen, i.e. something like (x, y, 320, 400)?
Hope this helps:
// Create a new image context (retina safe), sized to the crop region
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
// Rect locating the crop region inside the existing image
CGRect rect = CGRectMake(x, y, size.width, size.height);
// Draw the full image shifted so the crop region lands at the context's origin
[existingImage drawInRect:CGRectMake(-rect.origin.x,
                                     -rect.origin.y,
                                     existingImage.size.width,
                                     existingImage.size.height)];
// Save the cropped image and end the image context
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
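Since the question actually captures the screen with renderInContext:, a variation on the same idea (x and y stand for the desired region's origin, as in the question) is to translate the context before rendering, so only that portion of the view lands in the image. A minimal sketch:
// Capture just the region (x, y, 320, 400) of the view (retina safe)
CGRect captureRect = CGRectMake(x, y, 320, 400);
UIGraphicsBeginImageContextWithOptions(captureRect.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Shift the drawing so captureRect's origin becomes the context's origin
CGContextTranslateCTM(ctx, -captureRect.origin.x, -captureRect.origin.y);
[self.view.layer renderInContext:ctx];
UIImage *regionImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();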
This question is really a duplicate of several other questions, including this one: How to crop the UIImage?, but since it took me a while to find a solution, I will cross-post again.
In my quest for a solution that I could more easily understand (and written in Swift), I arrived at this:
I wanted to be able to crop from a region based on an aspect ratio, and scale to a size based on an outer bounding extent. Here is my variation:
import UIKit
import AVFoundation
import ImageIO

class Image {
    class func crop(image: UIImage, crop source: CGRect, aspect: CGSize, outputExtent: CGSize) -> UIImage {
        let sourceRect = AVMakeRectWithAspectRatioInsideRect(aspect, source)
        let targetRect = AVMakeRectWithAspectRatioInsideRect(aspect, CGRect(origin: CGPointZero, size: outputExtent))
        let opaque = true, deviceScale: CGFloat = 0.0 // use the scale of the device's main screen
        UIGraphicsBeginImageContextWithOptions(targetRect.size, opaque, deviceScale)
        let scale = max(
            targetRect.size.width / sourceRect.size.width,
            targetRect.size.height / sourceRect.size.height)
        // Build the draw rect component-wise: shift by the scaled crop origin,
        // and size the whole image by the scale factor
        let drawRect = CGRect(
            x: -sourceRect.origin.x * scale,
            y: -sourceRect.origin.y * scale,
            width: image.size.width * scale,
            height: image.size.height * scale)
        image.drawInRect(drawRect)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage
    }
}
There are a couple of things I found confusing: the separate concerns of cropping and resizing. Cropping is handled with the origin of the rect that you pass to drawInRect, and scaling is handled by the size portion. In my case, I needed to relate the size of the cropping rect on the source to my output rect of the same aspect ratio. The scale factor is then output / input, and this needs to be applied to the drawRect (passed to drawInRect).
One caveat is that this approach effectively assumes that the image you are drawing is larger than the image context. I have not tested this, but I think you can use this code to handle cropping/zooming by explicitly defining the scale parameter to be the aforementioned scale factor. By default, UIKit applies a multiplier based on the screen resolution.
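As a concrete example of the arithmetic (my numbers, not from the post): cropping the region (100, 100, 100, 100) out of a 400×400 image into a 200×200 output gives scale = 200 / 100 = 2, so drawRect becomes (−200, −200, 800, 800); the whole image is drawn at twice its size, shifted so that exactly the cropped region fills the 200×200 context.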
Finally, it should be noted that this UIKit approach is higher-level than the Core Graphics/Quartz and Core Image approaches, and it seems to handle image orientation issues. It is also worth mentioning that it is pretty fast, second to ImageIO, according to this post: http://nshipster.com/image-resizing/
I am using CGAffineTransformMake to flip a UIImageView vertically. It works fine, but it does not seem to save the new flipped position of the UIImageView, because when I try to flip it a second time (executing the line of code below) it just does not work.
shape.transform = CGAffineTransformMake(1, 0, 0, -1, 0, 0);
help please.
Thanks in advance.
Kedar
Transforms are not automatically additive/accumulative as you might expect. Assigning a transform just transforms the target once.
Each transform is highly specific. If you apply a rotation transform that rotates a view +45 degrees, you will see it rotate only once. Applying the same transform again does not rotate the view an additional +45 degrees. All subsequent applications of the same transform produce no visible effect, because the view is already rotated +45 degrees and that is all that transform will ever do.
To make transforms accumulative, you have to apply the new transform to the existing transform instead of just replacing it. So, as mentioned previously, for each subsequent rotation you use:
shape.transform = CGAffineTransformRotate(shape.transform, M_PI);
This concatenates the new transform with the existing one. If you add a +45 degree rotation in this manner, the view will rotate an additional +45 degrees each time it is applied.
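Applied to the original vertical-flip question, a minimal sketch of the accumulating version would be:
// Concatenate a vertical flip onto whatever transform is already applied;
// calling this a second time flips the view back to its original orientation
shape.transform = CGAffineTransformScale(shape.transform, 1.0, -1.0);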
I had the same problem and I found a solution! I wanted to rotate the UIImageView, because I had an animation, and to save the resulting image I used this function:
void CGContextConcatCTM(CGContextRef c, CGAffineTransform transform)
The transform parameter is the transform of your UIImageView, so anything you have done to the image view will apply to the image as well. I have written a category method on UIImage:
-(UIImage *)imageRotateByTransform:(CGAffineTransform)transform{
// calculate the size of the rotated view's containing box for our drawing space
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0,0,self.size.width, self.size.height)];
rotatedViewBox.transform = transform;
CGSize rotatedSize = rotatedViewBox.frame.size;
[rotatedViewBox release];
// Create the bitmap context
UIGraphicsBeginImageContext(rotatedSize);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
// Move the origin to the middle of the image so we will rotate and scale around the center.
CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);
// Rotate the image context using the transform
CGContextConcatCTM(bitmap, transform);
// Now, draw the rotated/scaled image into the context
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2, self.size.width, self.size.height), [self CGImage]);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Hope this will help you.
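Usage might look like this (a sketch assuming the category is imported and imageView is your transformed UIImageView):
// Bake the view's current transform into a new UIImage, e.g. for saving
UIImage *flattened = [imageView.image imageRotateByTransform:imageView.transform];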
If you just want to reverse the effects of a previous transformation, you may like to look into setting the shape.transform property to CGAffineTransformIdentity.
When you set a view's transform property you are replacing any existing transform it has, not adding to it. So if you assign a transform which causes a rotation, it will forget about any flip you had previously configured.
If you want to add an additional rotation or scaling operation to a view which you have previously transformed you should investigate the functions which allow you to specify an existing transform.
I.e. instead of using
shape.transform = CGAffineTransformMakeRotation(M_PI);
which replaces the existing transform with the specified rotation, you could use
shape.transform = CGAffineTransformRotate(shape.transform, M_PI);
This applies the rotation to the existing transform (whatever that may be) and then assigns it to the view. Take a look at Apple's documentation for CGAffineTransformRotate; it may clarify things a little.
BTW, the documentation says: "If you don’t plan to reuse an affine transform, you may want to use CGContextScaleCTM, CGContextRotateCTM, CGContextTranslateCTM, or CGContextConcatCTM."