Currently I have an image, a battery icon:
_MeterBar = [[UIImageView alloc] initWithFrame:CGRectMake(25, 40, 50, 40)];
_MeterBar.image = [UIImage imageNamed:@"battery-empty-icon.png"];
_MeterBar.backgroundColor = [UIColor clearColor];
I am attempting to add an animation that shows the battery filling up to a specific level when a value is fetched from a web server.
Fetching the value is no problem, but adding the animation is. I basically copied and pasted some code from a previous answer, but I am stuck: when I add the animation, no image shows up at all...
CABasicAnimation *scaleToValue = [CABasicAnimation animationWithKeyPath:@"transform.scale.x"];
scaleToValue.toValue = [NSNumber numberWithFloat:100.0f];
scaleToValue.fromValue = [NSNumber numberWithFloat:0];
scaleToValue.duration = 1.0f;
scaleToValue.delegate = self;
_MeterBar.layer.anchorPoint = CGPointMake(0.5, 1);
[_MeterBar.layer addAnimation:scaleToValue forKey:@"scaleRight"];
CGAffineTransform scaleTo = CGAffineTransformMakeScale(1.0f, 50.0f);
_MeterBar.transform = scaleTo;
[_PowerNowView addSubview:_MeterBar];
The battery icon is wider than it is tall, so I'm animating from 0 to a percentage on the x axis.
If anyone has any hints, tips, or sample code that does this exact animation, I would greatly appreciate your help :)
Thank you.
P.S. Here's the URL of the battery icon I found: Battery Icon
Sorry for not being clear, but I would like to have a color layer separate from the battery icon.
I would like to create a bar over the battery image and have the bar animate dynamically with respect to the size of the battery, not actually resize the battery image itself.
So there would be two layers: one for the battery, which stays essentially static in size, and the other being the colored bar, maybe green, to represent how much of the battery image is filled.
A good reference for what I would like to do is the Mint application, whose purpose is to record your bank statements.
The final animated battery above is very nice, but I would just like to show a bar within the battery and animate it with respect to the pulled data value.
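Something like the following is the kind of thing I'm imagining (just a rough sketch; fillFraction would be the value pulled from the server, and the 5-point inset is a guess at the battery outline's border):
CALayer *fillLayer = [CALayer layer];
fillLayer.backgroundColor = [UIColor greenColor].CGColor;
fillLayer.anchorPoint = CGPointMake(0.0, 0.5); // grow from the left edge
fillLayer.position = CGPointMake(5, _MeterBar.bounds.size.height / 2);
fillLayer.bounds = CGRectMake(0, 0, 0, _MeterBar.bounds.size.height - 10);
[_MeterBar.layer addSublayer:fillLayer];

// Later, once fillFraction (0.0 - 1.0) has been fetched:
CGFloat maxWidth = _MeterBar.bounds.size.width - 10;
[CATransaction begin];
[CATransaction setAnimationDuration:1.0];
fillLayer.bounds = CGRectMake(0, 0, maxWidth * fillFraction, _MeterBar.bounds.size.height - 10);
[CATransaction commit];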
A couple of things.
Your scale of 100 would make your layer 100 times larger than normal. If it's 300 pixels on a side, a scale of 100 would display it as 30,000 pixels on a side.
You want your toValue to be 1.0, not 100.
Also, I'm not sure whether scaling the transform multiplies the previous scale by a growing factor. If it does, then a starting scale of 0 would prevent it from ever growing, since anything times zero is still zero. Try a fromValue of 0.0001 instead of 0.0.
If you want your battery image to grow from left to right, wouldn't you want your anchor point to be (0,.5) rather than (.5, 1)?
Also, when you change the anchor point, it not only changes the center used for transforms, it also moves where the image is drawn. You will need to adjust the layer's position to compensate. I always have to scratch my head and think about it really hard (and usually get it wrong anyway), but with the anchor point on the left edge, the layer's position becomes the left-middle point of its frame rather than its center.
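A rough sketch of that compensation, assuming the view should keep the frame it was given (capturing the frame first avoids the head-scratching):
CGRect frame = _MeterBar.frame;
_MeterBar.layer.anchorPoint = CGPointMake(0.0, 0.5); // left edge, vertically centered
// With that anchor point, the layer's position is the left-middle point of its frame.
_MeterBar.layer.position = CGPointMake(CGRectGetMinX(frame), CGRectGetMidY(frame));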
Note that scaling your image will cause it to stretch oddly: it will start out squashed and stretch out to normal size.
It might be better to attach a CAShapeLayer to the view as a mask layer, and animate the shape layer's path from an empty rectangle (Width = 0, height = full height of view) to a rectangle that is the full size of the view. That would cause the view to animate in a "wipe" animation where it is revealed from left to right without stretching.
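A minimal sketch of that mask approach, reusing the _MeterBar view from the question (for a partial fill, make the target rectangle's width a fraction of the full width):
CAShapeLayer *maskLayer = [CAShapeLayer layer];
CGRect fullRect = _MeterBar.bounds;
UIBezierPath *fromPath = [UIBezierPath bezierPathWithRect:CGRectMake(0, 0, 0, fullRect.size.height)];
UIBezierPath *toPath = [UIBezierPath bezierPathWithRect:fullRect];

maskLayer.path = toPath.CGPath; // the model value the animation ends on
_MeterBar.layer.mask = maskLayer;

CABasicAnimation *reveal = [CABasicAnimation animationWithKeyPath:@"path"];
reveal.fromValue = (__bridge id)fromPath.CGPath;
reveal.toValue = (__bridge id)toPath.CGPath;
reveal.duration = 1.0;
[maskLayer addAnimation:reveal forKey:@"revealFromLeft"];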
Related
Basically I'm using this ParallaxCamera class to create the effect in my game, but upon movement, the layers "wiggle". This is especially noticeable when the camera is moving slowly.
I have a fixed timestep and use interpolated smoothing. I use scaled-up pixel art. The camera is centered on the player and updated after movement. Disabling the effect makes moving the camera smooth.
What I guess the issues might be:
- the layers move at different paces, which means they move at different times
- rounding for display makes the layers assume slightly different positions each frame when moving the camera
Thanks for any help
For low-resolution pixel art, this is the strategy I've used. I draw to a small FrameBuffer at 1:1 resolution and then draw that to the screen. That should take care of jittering.
If your Stage is also at the same resolution, then you have to use a bit of a hack to get input processed properly. The one I've used is a StretchViewport, but I manually calculate the world width and height so the world isn't actually stretched; I'm basically doing the same calculation that ExtendViewport does behind the scenes. You also have to round the width and height to integers. You should do this in resize and apply the width and height using viewport.setWorldWidth() and setWorldHeight(), so in this case it doesn't matter what world size you give to the constructor, since it will be changed before update() is called.
When you call update on the viewport in resize, you need to do it within the context of the FrameBuffer you are drawing to. Otherwise it will mess up the screen's frame buffer dimensions.
public void resize(int width, int height) {
    // Keep the world height fixed and derive the world width from the window's aspect ratio.
    int worldWidth = Math.round((float)WORLD_HEIGHT / (float)height * (float)width);
    viewport.setWorldWidth(worldWidth);
    viewport.setWorldHeight(WORLD_HEIGHT);

    frameBuffer.begin();
    viewport.update(width, height, true); // the actual screen dimensions
    frameBuffer.end();
}
You can look up examples of using FrameBuffer in LibGDX. You should do all your game drawing in between frameBuffer.begin() and end(), and then draw the frameBuffer's color buffer to the screen like this:
stage.act();
frameBuffer.begin();
//Draw game
stage.draw();
frameBuffer.end();
spriteBatch.setProjectionMatrix(spriteBatch.getProjectionMatrix().idt());
spriteBatch.begin();
spriteBatch.draw(frameBuffer.getColorBufferTexture(), -1, 1, 2, -2);
spriteBatch.end();
In my case, I do a more complicated calculation of world width and world height such that they are a whole number factor of the actual screen dimensions. This prevents the big pixels from being different sizes on the screen, which might look bad. Alternatively, you can change the filtering of the FrameBuffer's texture to linear and use an upscaling shader when drawing it.
Here is the view setup I have: a layer view that detects user touches, and an image view that shows the image. The layer view is on top of the image view. The image view's content mode is aspect fit, so the image doesn't lose its ratio. If I touch the layer view at (100, 240), that is a coordinate in the layer view, not in the image. I would like to know how to convert the layer view's coordinate to an image coordinate. In this example the image size may be 180*180, so the corresponding coordinate in the image would be (60, 90).
Thanks.
If I'm understanding this question correctly, you want to take a point, which is currently in relation to the layer's coordinate system, and convert it to the image view's coordinate system?
In that case, there are a couple of ways to do this.
Easiest is to use convertPoint:fromView: or convertPoint:toView:
CGPoint imageViewTouchPoint = [layerView convertPoint:touchPoint fromView:imageView];
CGPoint imageViewTouchPoint = [imageView convertPoint:touchPoint toView:layerView];
Either one should work.
EDIT - I realize now that this only works if the UIImageView has the same frame as the displayed UIImage, which you said it might not, due to the UIViewContentModeScaleAspectFit content mode.
In this case, unless I'm mistaken, the image frame is calculated inside the UIImageView drawRect: method and isn't a property that gets set. This means you'll have to calculate this on your own.
Definitely get the imageViewTouchPoint from one of the methods above (just in case you want to use the same logic on a UIImageView which isn't the full screen size).
You will then need to calculate the scaled image frame. There are a couple of ways to do this. Some people go brute force and manually calculate based on which side of the image is longer, then determine which side should be scaled. Then they calculate the origin by centering the image, subtracting the image's and image view's sides, and dividing by two.
I like to write as little code as possible if it's unnecessary, even if it means importing a framework. If you import AVFoundation you get a method AVMakeRectWithAspectRatioInsideRect which you can use to actually calculate the scaled rectangle in one line of code.
#import <AVFoundation/AVFoundation.h> // for AVMakeRectWithAspectRatioInsideRect
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size, imageView.bounds); // bounds, so the rect is in the image view's own coordinates
Whichever method you use, you will then simply translate your touched point with the scaled image origin:
CGPoint imageTouchPoint = CGPointMake(imageViewTouchPoint.x - imageRect.origin.x, imageViewTouchPoint.y - imageRect.origin.y);
You have to do the math yourself. Calculate the aspect ratio of your image and compare with the aspect ratio of the image view's bounds.
Look at this question: How to Get Image position in ImageView
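For reference, the manual math looks roughly like this (a sketch assuming aspect-fit content mode and a touchPoint already expressed in the image view's coordinates):
CGSize imageSize = imageView.image.size;
CGSize viewSize = imageView.bounds.size;
CGFloat scale = MIN(viewSize.width / imageSize.width, viewSize.height / imageSize.height);
CGSize fittedSize = CGSizeMake(imageSize.width * scale, imageSize.height * scale);

// The displayed image is centered inside the image view.
CGRect imageRect = CGRectMake((viewSize.width - fittedSize.width) / 2.0,
                              (viewSize.height - fittedSize.height) / 2.0,
                              fittedSize.width, fittedSize.height);

// Convert the touch from image view coordinates to pixel coordinates in the image.
CGPoint imagePoint = CGPointMake((touchPoint.x - imageRect.origin.x) / scale,
                                 (touchPoint.y - imageRect.origin.y) / scale);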
After searching more, I came up with a hack:
// resizedImageWithContentMode:bounds:interpolationQuality: comes from a third-party
// UIImage resizing category, not from UIKit itself.
CGSize imageInViewSize = [photo resizedImageWithContentMode:UIViewContentModeScaleAspectFit
                                                     bounds:imageView.bounds.size
                                       interpolationQuality:kCGInterpolationNone].size;
CGRect overlayRect = CGRectMake((imageView.frame.size.width - imageInViewSize.width) / 2,
                                (imageView.frame.size.height - imageInViewSize.height) / 2,
                                imageInViewSize.width,
                                imageInViewSize.height);
NSLog(@"Frame of image inside UIImageView: Left:%f Top:%f Width:%f Height:%f\n",
      overlayRect.origin.x, overlayRect.origin.y, overlayRect.size.width, overlayRect.size.height);
I am using UIImageView to display thumbnails of images that can then be selected to be viewed at full size. The UIImageView has its content mode set to aspect fit.
The images are usually scaled down from around 500px x 500px to 100px x 100px. On the retina iPad they display really well while on the iPad2 they are badly aliased until the size gets closer to the native image size.
Examples:
Original Image
Retina iPad rendering at 100px x 100px
iPad 2 rendering at 100px x 100px
The difference between iPad 2 and new iPad might just be the screen resolution or could be that the GPU is better equipped to scale images. Either way, the iPad 2 rendering is very poor.
I have tried first reducing the image size by creating a new context, setting the interpolation quality to high and drawing the image into the context. In this case, the image looks fine on both iPads.
Before I continue down the image copy/resize avenue, I wanted to check there wasn't something simpler I was missing. I appreciate that UIImage isn't meant to be scaled, but I was under the impression that UIImageView was there to handle scaling; at the moment it doesn't seem to be doing a good job of scaling down. What (if anything) am I missing?
Update: Note: The drop shadow on the rendered / resized images is added in code. Disabling this made no difference to the quality of the scaling.
Another approach I've tried that does seem to be improving things is to set the minificationFilter:
[imageView.layer setMinificationFilter:kCAFilterTrilinear];
The quality is certainly improved and I haven't noticed a performance hit.
Applying a small minification filter bias can help out with this if you don't want to resample the image yourself:
imageView.layer.minificationFilter = kCAFilterTrilinear;
imageView.layer.minificationFilterBias = 0.1;
The left image has no filtering applied to it. The right image has a 0.1 filter bias.
Note that no explicit rasterization is required.
Playing around with very small values, you can usually come up with a value that smooths out the scaling artifacts just enough, and it's a lot easier than resizing the bitmap yourself. Certainly, you lose detail as the bias increases, so values even less than 0.1 are probably sufficient, though it all depends on the size of the image view's frame that's displaying the image.
Just realize that trilinear filtering effectively enables mipmapping on the layer, which basically means it generates extra copies of the bitmap at progressively smaller scales. It's a very common technique used in rendering to increase render speed and also reduce scaling aliasing. The tradeoff is that it requires more memory, though the memory usage for successive downsampled bitmaps reduces exponentially.
Another potential advantage to this technique, though I have not tried it myself, is that you can animate minificationFilterBias. So if you're going to be scaling an image view down quite a lot as part of an animation, consider also animating the filter bias from 0.0 to whatever small value you've determined is appropriate for the scaled down size.
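A hedged sketch of what that might look like (the 0.1 target and the duration are arbitrary, and this assumes minificationFilter is already set to kCAFilterTrilinear):
CABasicAnimation *biasAnimation = [CABasicAnimation animationWithKeyPath:@"minificationFilterBias"];
biasAnimation.fromValue = @0.0f;
biasAnimation.toValue = @0.1f;
biasAnimation.duration = 0.3;
imageView.layer.minificationFilterBias = 0.1f; // keep the model value in sync
[imageView.layer addAnimation:biasAnimation forKey:@"minificationFilterBias"];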
Finally, as others have noted, if your source image is very large, this technique isn't appropriate if overused, because Core Animation will always keep the original bitmap around. In most cases it's better to resize the image and then discard the source image instead of using mipmapping, but for one-offs or cases where your image views are going to be deallocated quickly enough, this is fine.
If you just put a large image in a small image view, it will look really bad.
The solution is to properly resize the image. I'll add an example function that does the trick:
- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Draw with high-quality interpolation; flip vertically because
    // Core Graphics uses a bottom-left origin.
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);
    CGContextDrawImage(context, newRect, imageRef);

    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();

    return newImage;
}
This function might take some time, so you might want to save the result to a cache (on disk or in memory).
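For example, a minimal in-memory caching sketch using NSCache (the cache, key scheme, and originalImage name are illustrative assumptions; writing the PNG data to disk would work similarly):
static NSCache *resizedImageCache = nil;
if (resizedImageCache == nil) {
    resizedImageCache = [[NSCache alloc] init];
}
// NOTE: a real key should also identify the source image, not just the target size.
NSString *cacheKey = [NSString stringWithFormat:@"thumb-%.0fx%.0f", newSize.width, newSize.height];
UIImage *resized = [resizedImageCache objectForKey:cacheKey];
if (resized == nil) {
    resized = [self resizeImage:originalImage newSize:newSize];
    [resizedImageCache setObject:resized forKey:cacheKey];
}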
If you're not afraid of wasting memory and know what you're doing for a particular case, this works beautifully.
myView.layer.shouldRasterize = YES;
myView.layer.rasterizationScale = 2;
The resulting quality is much better than setMinificationFilter.
I am using images that are 256x256 and scaling them to something like 48 px. Obviously a saner solution here would be to downscale the images to the exact destination size.
The following helped me:
imageView.layer.minificationFilter = kCAFilterTrilinear
imageView.layer.shouldRasterize = true
imageView.layer.rasterizationScale = UIScreen.mainScreen().scale
Keep an eye on performance if used in scroll lists.
I have a 64px by 64px redSquare.png file at a 326ppi resolution. I'm drawing it at the top left corner of my View Controller's window as follows:
myImage = [UIImage imageNamed:@"redSquare.png"];
myImageView = [[UIImageView alloc] initWithImage:myImage];
[self.view addSubview:myImageView];
Given that the iPhone 4S has a screen resolution of 960x640 (326ppi), there should be enough room for 9 more squares to fit next to the first one. However, there's only room for 4 more; i.e., the square is drawn larger than it should be, given my measurements.
// even tried resizing UIImageView in case it was
// resizing my image to a different size, by adding
// this next line, but no success there either :
myImageView.frame = CGRectMake(0, 0, 64, 64);
I believe it has to do with the way the device "translates" my pixels. I read about the distinction between points versus pixels in Apple's documentation, but it doesn't mention how to work around this problem. I know I'm measuring in pixels. Should I be measuring in points? And how would I do that? How exactly should I resize my image so that 9 more same-sized squares can fit next to it in the same horizontal row?
Thank you
To display an image at full resolution on a Retina display, it needs to have @2x appended to the end of its name. In practice, this means you should save the image you're currently using as redSquare@2x.png and a 32x32 pixel version of it as redSquare.png.
Once you have done this, there is no need to change your code. The appropriate image will be displayed depending on the device's capabilities. This will allow your app to render correctly on both Retina and non-Retina devices.
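Alternatively (just a sketch, not something the @2x approach requires), you could keep the single 64 px file and tell UIKit its scale, so it measures 32 points on a Retina screen:
UIImage *raw = [UIImage imageNamed:@"redSquare.png"];
UIImage *scaled = [UIImage imageWithCGImage:raw.CGImage
                                      scale:[UIScreen mainScreen].scale
                                orientation:UIImageOrientationUp];
myImageView = [[UIImageView alloc] initWithImage:scaled]; // 32 x 32 points on a 2x screen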
I let the user select a photo from the iPhone library, and I grab the UIImage.
I output the size of the image, and it says 320x480, but it doesn't seem to be, because when I draw the image on the screen using CGRectMake(0,0,320,480), it only shows the upper left portion of the image. Aren't the images much bigger than 320x480 because of the high resolution?
I'd like to scale the image to force it to be 320x480. If it is less than 320x480, it should not be rescaled at all. If the width is greater than 320 or the height is greater than 480, it should scale in a way so that it becomes as close to 320x480 as possible, but by keeping the proper proportion of width to height. So, for instance, if it scales to 320x420, that is fine, or 280x480.
How can I do this in Objective-C?
Setting the image view's content mode like this:
myView.contentMode = UIViewContentModeScaleAspectFit;
will preserve the aspect ratio.
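If you need an actual resized UIImage rather than just display scaling, a sketch of the proportional resize described in the question might look like this (the method name is mine, not from the answer):
- (UIImage *)imageScaledToFit320x480:(UIImage *)image {
    CGSize maxSize = CGSizeMake(320, 480);
    CGFloat scale = MIN(maxSize.width / image.size.width,
                        maxSize.height / image.size.height);
    if (scale >= 1.0) {
        return image; // already 320x480 or smaller: leave it untouched
    }
    CGSize newSize = CGSizeMake(floor(image.size.width * scale),
                                floor(image.size.height * scale));
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}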