Possible Duplicate:
How to get UIImage from EAGLView?
So I was just wondering if anybody knows any way to save what is stored in an EAGLContext as a UIImage.
I am currently using:
UIGraphicsBeginImageContext(CGSizeMake(768, 1024));
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
in other apps, and it works fine there, but EAGLContext obviously doesn't have a .layer property. I've tried casting to UIView, but that, unsurprisingly, doesn't work:
UIView *newView = (UIView *)context;
I am drawing to an EAGLContext property on a UIView (technically an EAGLContext on a UIView on another UIView on a view controller, but I figure that shouldn't make any difference) using OpenGL ES 1.
If anybody knows anything about this, even if its just that I'm completely barking up an impossible tree, please let me know!
Matt
After a few days I finally got a working solution to this. There is code provided by Apple which produces a UIImage from an EAGLView. Then you simply need to flip the image vertically, since UIKit's coordinate system is upside down relative to OpenGL's. The link to the documentation where I found this method doesn't exist anymore.
Method to capture EAGLView:
-(UIImage *)drawableToCGImage
{
GLint backingWidth2, backingHeight2;
//Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);
NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
NSInteger dataLength = width2 * height2 * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = self.contentScaleFactor;
widthInPoints = width2 / scale;
heightInPoints = height2 / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width2;
heightInPoints = height2;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
Method to flip the image vertically:
- (UIImage *)flipImageVertically:(UIImage *)originalImage
{
UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
UIGraphicsBeginImageContext(tempImageView.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform flipVertical = CGAffineTransformMake(
1, 0, 0, -1, 0, tempImageView.frame.size.height
);
CGContextConcatCTM(context, flipVertical);
[tempImageView.layer renderInContext:context];
UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//[tempImageView release];
return flippedImage;
}
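Putting the two together, grabbing a snapshot from inside the view class then looks like this (both methods are defined above):
UIImage *rawImage = [self drawableToCGImage];
UIImage *snapshot = [self flipImageVertically:rawImage];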
I'm trying to replicate the blurred background from Apple's publicly released iOS 7 example screen.
This question suggests applying a CI filter to the contents below, but that's a whole different approach. It's obvious that iOS 7 doesn't capture the contents of the views below, for many reasons:
Doing some rough testing, capturing a screenshot of the views below and applying a CIGaussianBlur filter with a large enough radius to mimic iOS 7's blur style takes 1-2 seconds, even on a simulator.
The iOS 7 blur view is able to blur over dynamic views, such as a video or animations, with no noticeable lag.
Can anyone hypothesize what frameworks they could be using to create this effect, and if it's possible to create a similar effect with current public APIs?
Edit: (from comment) We don't exactly know how Apple is doing it, but are there any basic assumptions we can make? We can assume they are using hardware, right?
Is the effect self-contained in each view, such that the effect doesn't actually know what's behind it? Or must, based on how blurs work, the contents behind the blur be taken into consideration?
If the contents behind the effect are relevant, can we assume that Apple is receiving a "feed" of the contents below and continuously rendering them with a blur?
Why bother replicating the effect? Just draw a UIToolbar behind your view.
myView.backgroundColor = [UIColor clearColor];
UIToolbar* bgToolbar = [[UIToolbar alloc] initWithFrame:myView.frame];
bgToolbar.barStyle = UIBarStyleDefault;
[myView.superview insertSubview:bgToolbar belowSubview:myView];
Apple released code at WWDC as a category on UIImage that includes this functionality. If you have a developer account, you can grab the UIImage category (and the rest of the sample code) by going to this link: https://developer.apple.com/wwdc/schedule/, browsing for section 226, and clicking on details. I haven't played around with it yet, but I think the effect will be a lot slower on iOS 6; iOS 7 has some enhancements that make grabbing the initial screenshot used as input to the blur a lot faster.
Direct link: https://developer.apple.com/downloads/download.action?path=wwdc_2013/wwdc_2013_sample_code/ios_uiimageeffects.zip
Actually I'd bet this would be rather simple to achieve. It probably wouldn't operate or look exactly like what Apple has going on, but it could be very close.
First of all, you'd need to determine the CGRect of the UIView that you will be presenting. Once you've determined that, you would just need to grab an image of that part of the UI so that it can be blurred. Something like this...
- (UIImage*)getBlurredImage {
// You will want to calculate this in code based on the view you will be presenting.
CGSize size = CGSizeMake(200,200);
UIGraphicsBeginImageContext(size);
[view drawViewHierarchyInRect:(CGRect){CGPointZero, size} afterScreenUpdates:YES]; // view is the view you are grabbing the screen shot of, i.e. the view to be blurred.
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Gaussian Blur
image = [image applyLightEffect];
// Box Blur
// image = [image boxblurImageWithBlur:0.2f];
return image;
}
Gaussian Blur - Recommended
Using the UIImage+ImageEffects category Apple provided here, you'll get a Gaussian blur that looks very much like the blur in iOS 7.
Box Blur
You could also do a box blur using the following boxblurImageWithBlur: UIImage category. It is based on an algorithm that you can find here.
@implementation UIImage (Blur)
-(UIImage *)boxblurImageWithBlur:(CGFloat)blur {
if (blur < 0.f || blur > 1.f) {
blur = 0.5f;
}
int boxSize = (int)(blur * 50);
boxSize = boxSize - (boxSize % 2) + 1;
CGImageRef img = self.CGImage;
vImage_Buffer inBuffer, outBuffer;
vImage_Error error;
void *pixelBuffer;
CGDataProviderRef inProvider = CGImageGetDataProvider(img);
CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);
inBuffer.width = CGImageGetWidth(img);
inBuffer.height = CGImageGetHeight(img);
inBuffer.rowBytes = CGImageGetBytesPerRow(img);
inBuffer.data = (void*)CFDataGetBytePtr(inBitmapData);
pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
if (pixelBuffer == NULL)
NSLog(@"No pixelbuffer");
outBuffer.data = pixelBuffer;
outBuffer.width = CGImageGetWidth(img);
outBuffer.height = CGImageGetHeight(img);
outBuffer.rowBytes = CGImageGetBytesPerRow(img);
error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
if (error) {
NSLog(@"JFDepthView: error from convolution %ld", error);
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
outBuffer.width,
outBuffer.height,
8,
outBuffer.rowBytes,
colorSpace,
kCGImageAlphaNoneSkipLast);
CGImageRef imageRef = CGBitmapContextCreateImage (ctx);
UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
//clean up
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(pixelBuffer);
CFRelease(inBitmapData);
CGImageRelease(imageRef);
return returnImage;
}
@end
Now that you can calculate the screen area to blur, pass it into the blur category, and receive a blurred UIImage back, all that is left is to set that blurred image as the background of the view you will be presenting. Like I said, this will not be a perfect match for what Apple is doing, but it should still look pretty cool.
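One way to apply it, as a sketch (presentedView stands for the view you are presenting):
UIImageView *blurBackground = [[UIImageView alloc] initWithImage:[self getBlurredImage]];
blurBackground.frame = presentedView.bounds;
[presentedView insertSubview:blurBackground atIndex:0];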
Hope it helps.
iOS 8 answered these questions with UIVisualEffectView:
- (instancetype)initWithEffect:(UIVisualEffect *)effect
or Swift:
init(effect: UIVisualEffect)
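For example, to put the native blur behind the content of an existing view (myView stands for whatever view should get the blurred backdrop):
UIBlurEffect *blurEffect = [UIBlurEffect effectWithStyle:UIBlurEffectStyleLight];
UIVisualEffectView *blurView = [[UIVisualEffectView alloc] initWithEffect:blurEffect];
blurView.frame = myView.bounds;
blurView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
[myView insertSubview:blurView atIndex:0];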
I just wrote a little subclass of UIView that can produce a native iOS 7 blur on any custom view. It uses a UIToolbar, but in a way that is safe for changing its frame, bounds, color, and alpha with real-time animation.
Please let me know if you notice any problems.
https://github.com/ivoleko/ILTranslucentView
There is a rumor that Apple engineers claimed that, to make this performant, they read directly out of the GPU buffer. That raises security issues, which is why there is no public API to do this yet.
This is a solution that you can see in the WWDC videos. You have to do a Gaussian blur, so the first thing you have to do is add a new .m and .h file with the code I'm writing here. Then you take a screenshot, apply the desired effect, and add it to your view; your UITableView, UIView, or whatever has to be transparent. You can play with applyBlurWithRadius to achieve the desired effect; this call works with any UIImage.
In the end, the blurred image will be the background, and the rest of the controls above it have to be transparent.
For this to work you have to add the following libraries:
Accelerate.framework, UIKit.framework, CoreGraphics.framework
I hope you like it.
Happy coding.
//Screen capture.
UIGraphicsBeginImageContext(self.view.bounds.size);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(c, 0, 0);
[self.view.layer renderInContext:c];
UIImage* viewImage = UIGraphicsGetImageFromCurrentImageContext();
viewImage = [viewImage applyLightEffect];
UIGraphicsEndImageContext();
//.h FILE
#import <UIKit/UIKit.h>
@interface UIImage (ImageEffects)
- (UIImage *)applyLightEffect;
- (UIImage *)applyExtraLightEffect;
- (UIImage *)applyDarkEffect;
- (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor;
- (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius tintColor:(UIColor *)tintColor saturationDeltaFactor:(CGFloat)saturationDeltaFactor maskImage:(UIImage *)maskImage;
@end
//.m FILE
#import "cGaussianEffect.h"
#import <Accelerate/Accelerate.h>
#import <float.h>
@implementation UIImage (ImageEffects)
- (UIImage *)applyLightEffect
{
UIColor *tintColor = [UIColor colorWithWhite:1.0 alpha:0.3];
return [self applyBlurWithRadius:1 tintColor:tintColor saturationDeltaFactor:1.8 maskImage:nil];
}
- (UIImage *)applyExtraLightEffect
{
UIColor *tintColor = [UIColor colorWithWhite:0.97 alpha:0.82];
return [self applyBlurWithRadius:1 tintColor:tintColor saturationDeltaFactor:1.8 maskImage:nil];
}
- (UIImage *)applyDarkEffect
{
UIColor *tintColor = [UIColor colorWithWhite:0.11 alpha:0.73];
return [self applyBlurWithRadius:1 tintColor:tintColor saturationDeltaFactor:1.8 maskImage:nil];
}
- (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor
{
const CGFloat EffectColorAlpha = 0.6;
UIColor *effectColor = tintColor;
int componentCount = CGColorGetNumberOfComponents(tintColor.CGColor);
if (componentCount == 2) {
CGFloat b;
if ([tintColor getWhite:&b alpha:NULL]) {
effectColor = [UIColor colorWithWhite:b alpha:EffectColorAlpha];
}
}
else {
CGFloat r, g, b;
if ([tintColor getRed:&r green:&g blue:&b alpha:NULL]) {
effectColor = [UIColor colorWithRed:r green:g blue:b alpha:EffectColorAlpha];
}
}
return [self applyBlurWithRadius:10 tintColor:effectColor saturationDeltaFactor:-1.0 maskImage:nil];
}
- (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius tintColor:(UIColor *)tintColor saturationDeltaFactor:(CGFloat)saturationDeltaFactor maskImage:(UIImage *)maskImage
{
if (self.size.width < 1 || self.size.height < 1) {
NSLog(@"*** error: invalid size: (%.2f x %.2f). Both dimensions must be >= 1: %@", self.size.width, self.size.height, self);
return nil;
}
if (!self.CGImage) {
NSLog(@"*** error: image must be backed by a CGImage: %@", self);
return nil;
}
if (maskImage && !maskImage.CGImage) {
NSLog(@"*** error: maskImage must be backed by a CGImage: %@", maskImage);
return nil;
}
CGRect imageRect = { CGPointZero, self.size };
UIImage *effectImage = self;
BOOL hasBlur = blurRadius > __FLT_EPSILON__;
BOOL hasSaturationChange = fabs(saturationDeltaFactor - 1.) > __FLT_EPSILON__;
if (hasBlur || hasSaturationChange) {
UIGraphicsBeginImageContextWithOptions(self.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef effectInContext = UIGraphicsGetCurrentContext();
CGContextScaleCTM(effectInContext, 1.0, -1.0);
CGContextTranslateCTM(effectInContext, 0, -self.size.height);
CGContextDrawImage(effectInContext, imageRect, self.CGImage);
vImage_Buffer effectInBuffer;
effectInBuffer.data = CGBitmapContextGetData(effectInContext);
effectInBuffer.width = CGBitmapContextGetWidth(effectInContext);
effectInBuffer.height = CGBitmapContextGetHeight(effectInContext);
effectInBuffer.rowBytes = CGBitmapContextGetBytesPerRow(effectInContext);
UIGraphicsBeginImageContextWithOptions(self.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef effectOutContext = UIGraphicsGetCurrentContext();
vImage_Buffer effectOutBuffer;
effectOutBuffer.data = CGBitmapContextGetData(effectOutContext);
effectOutBuffer.width = CGBitmapContextGetWidth(effectOutContext);
effectOutBuffer.height = CGBitmapContextGetHeight(effectOutContext);
effectOutBuffer.rowBytes = CGBitmapContextGetBytesPerRow(effectOutContext);
if (hasBlur) {
CGFloat inputRadius = blurRadius * [[UIScreen mainScreen] scale];
NSUInteger radius = floor(inputRadius * 3. * sqrt(2 * M_PI) / 4 + 0.5);
if (radius % 2 != 1) {
radius += 1;
}
vImageBoxConvolve_ARGB8888(&effectInBuffer, &effectOutBuffer, NULL, 0, 0, radius, radius, 0, kvImageEdgeExtend);
vImageBoxConvolve_ARGB8888(&effectOutBuffer, &effectInBuffer, NULL, 0, 0, radius, radius, 0, kvImageEdgeExtend);
vImageBoxConvolve_ARGB8888(&effectInBuffer, &effectOutBuffer, NULL, 0, 0, radius, radius, 0, kvImageEdgeExtend);
}
BOOL effectImageBuffersAreSwapped = NO;
if (hasSaturationChange) {
CGFloat s = saturationDeltaFactor;
CGFloat floatingPointSaturationMatrix[] = {
0.0722 + 0.9278 * s, 0.0722 - 0.0722 * s, 0.0722 - 0.0722 * s, 0,
0.7152 - 0.7152 * s, 0.7152 + 0.2848 * s, 0.7152 - 0.7152 * s, 0,
0.2126 - 0.2126 * s, 0.2126 - 0.2126 * s, 0.2126 + 0.7873 * s, 0,
0, 0, 0, 1,
};
const int32_t divisor = 256;
NSUInteger matrixSize = sizeof(floatingPointSaturationMatrix)/sizeof(floatingPointSaturationMatrix[0]);
int16_t saturationMatrix[matrixSize];
for (NSUInteger i = 0; i < matrixSize; ++i) {
saturationMatrix[i] = (int16_t)roundf(floatingPointSaturationMatrix[i] * divisor);
}
if (hasBlur) {
vImageMatrixMultiply_ARGB8888(&effectOutBuffer, &effectInBuffer, saturationMatrix, divisor, NULL, NULL, kvImageNoFlags);
effectImageBuffersAreSwapped = YES;
}
else {
vImageMatrixMultiply_ARGB8888(&effectInBuffer, &effectOutBuffer, saturationMatrix, divisor, NULL, NULL, kvImageNoFlags);
}
}
if (!effectImageBuffersAreSwapped)
effectImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
if (effectImageBuffersAreSwapped)
effectImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
UIGraphicsBeginImageContextWithOptions(self.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef outputContext = UIGraphicsGetCurrentContext();
CGContextScaleCTM(outputContext, 1.0, -1.0);
CGContextTranslateCTM(outputContext, 0, -self.size.height);
CGContextDrawImage(outputContext, imageRect, self.CGImage);
if (hasBlur) {
CGContextSaveGState(outputContext);
if (maskImage) {
CGContextClipToMask(outputContext, imageRect, maskImage.CGImage);
}
CGContextDrawImage(outputContext, imageRect, effectImage.CGImage);
CGContextRestoreGState(outputContext);
}
if (tintColor) {
CGContextSaveGState(outputContext);
CGContextSetFillColorWithColor(outputContext, tintColor.CGColor);
CGContextFillRect(outputContext, imageRect);
CGContextRestoreGState(outputContext);
}
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return outputImage;
}
@end
You can find your solution in Apple's demo on this page:
WWDC 2013: find and download the UIImageEffects sample code.
Then, with @Jeremy Fox's code, I changed it to:
- (UIImage*)getDarkBlurredImageWithTargetView:(UIView *)targetView
{
CGSize size = targetView.frame.size;
UIGraphicsBeginImageContext(size);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(c, 0, 0);
[targetView.layer renderInContext:c]; // targetView is the view you are grabbing the screen shot of, i.e. the view to be blurred.
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return [image applyDarkEffect];
}
Hope this will help you.
Here is a really easy way of doing it: https://github.com/JagCesar/iOS-blur
Just copy the layer of a UIToolbar and you're done; AMBlurView does it for you.
Okay, it's not as blurry as Control Center, but it's blurry enough.
Remember that iOS7 is under NDA.
Every response here uses vImageBoxConvolve_ARGB8888, and that function is really, really slow. That is fine if performance is not a high-priority requirement, but if you are using this for transitioning between two view controllers (for example), this approach means times over 1 second, or maybe more, which is very bad for the user experience of your application.
If you prefer to leave all this image processing to the GPU (and you should), you can get a much better effect and also awesome times of around 50 ms (assuming a time of 1 second with the first approach). So let's do it.
First download the GPUImage Framework (BSD Licensed) here.
Next, add the following classes (.m and .h) from GPUImage (I'm not sure these are the minimum needed for just the blur effect):
GPUImage.h
GPUImageAlphaBlendFilter
GPUImageFilter
GPUImageFilterGroup
GPUImageGaussianBlurPositionFilter
GPUImageGaussianSelectiveBlurFilter
GPUImageLuminanceRangeFilter
GPUImageOutput
GPUImageTwoInputFilter
GLProgram
GPUImageBoxBlurFilter
GPUImageGaussianBlurFilter
GPUImageiOSBlurFilter
GPUImageSaturationFilter
GPUImageSolidColorGenerator
GPUImageTwoPassFilter
GPUImageTwoPassTextureSamplingFilter
iOS/GPUImage-Prefix.pch
iOS/GPUImageContext
iOS/GPUImageMovieWriter
iOS/GPUImagePicture
iOS/GPUImageView
Next, create a category on UIImage that will add a blur effect to an existing UIImage:
#import "UIImage+Utils.h"
#import "GPUImagePicture.h"
#import "GPUImageSolidColorGenerator.h"
#import "GPUImageAlphaBlendFilter.h"
#import "GPUImageBoxBlurFilter.h"
@implementation UIImage (Utils)
- (UIImage*) GPUBlurredImage
{
GPUImagePicture *source =[[GPUImagePicture alloc] initWithImage:self];
CGSize size = CGSizeMake(self.size.width * self.scale, self.size.height * self.scale);
GPUImageBoxBlurFilter *blur = [[GPUImageBoxBlurFilter alloc] init];
[blur setBlurRadiusInPixels:4.0f];
[blur setBlurPasses:2];
[blur forceProcessingAtSize:size];
[source addTarget:blur];
GPUImageSolidColorGenerator * white = [[GPUImageSolidColorGenerator alloc] init];
[white setColorRed:1.0f green:1.0f blue:1.0f alpha:0.1f];
[white forceProcessingAtSize:size];
GPUImageAlphaBlendFilter * blend = [[GPUImageAlphaBlendFilter alloc] init];
blend.mix = 0.9f;
[blur addTarget:blend];
[white addTarget:blend];
[blend forceProcessingAtSize:size];
[source processImage];
return [blend imageFromCurrentlyProcessedOutput];
}
@end
And last, add the following frameworks to your project:
AVFoundation
CoreMedia
CoreVideo
OpenGLES
Yeah, have fun with this much faster approach ;)
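For reference, usage might look like this (snapshot and backgroundImageView are placeholders; the snapshot would be captured as in the earlier answers):
#import "UIImage+Utils.h"

UIImage *blurred = [snapshot GPUBlurredImage];
backgroundImageView.image = blurred;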
You can try using my custom view, which can blur the background. It does this by faking a snapshot of the background and blurring it, just like the one in Apple's WWDC code. It is very simple to use.
I also made some improvements to fake a dynamic blur without losing performance. The background of my view is a scrollView which scrolls with the view, thus providing the blur effect for the rest of the superview.
See the example and code on my GitHub
Core Background implements the desired iOS 7 effect.
https://github.com/justinmfischer/core-background
Disclaimer: I am the author of this project
I'm developing an iOS app for iPad. Is there any way to rotate a UIImage by 90° and then add it to a UIImageView? I've tried a lot of different code but none of it worked...
Thanks!
You may rotate UIImageView itself with:
UIImageView *iv = [[UIImageView alloc] initWithImage:image];
iv.transform = CGAffineTransformMakeRotation(M_PI_2);
Or if you really want to change image, you may use code from this answer, it works.
To rotate the pixels you can use the following. This creates an intermediate UIImage with rotated metadata and renders it into an image context with the width/height dimensions transposed. The resulting image has its pixels rotated (i.e. the underlying CGImage).
- (UIImage*)rotateUIImage:(UIImage*)sourceImage clockwise:(BOOL)clockwise
{
CGSize size = sourceImage.size;
UIGraphicsBeginImageContext(CGSizeMake(size.height, size.width));
[[UIImage imageWithCGImage:[sourceImage CGImage] scale:1.0 orientation:clockwise ? UIImageOrientationRight : UIImageOrientationLeft] drawInRect:CGRectMake(0,0,size.height ,size.width)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
There are other possible values that can be passed for the orientation parameter to achieve 180 degree rotation and flips etc.
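For example, a sketch of the same trick for a 180-degree rotation via UIImageOrientationDown (note that width and height are not transposed in this case):
- (UIImage*)rotateUIImage180:(UIImage*)sourceImage
{
CGSize size = sourceImage.size;
UIGraphicsBeginImageContext(size);
[[UIImage imageWithCGImage:[sourceImage CGImage] scale:1.0 orientation:UIImageOrientationDown] drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}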
This will rotate an image by any given number of degrees.
Note this works for 2x and 3x Retina as well.
// DegreesToRadians is assumed elsewhere in the original; a matching definition:
static inline CGFloat DegreesToRadians(CGFloat degrees) { return degrees * M_PI / 180.0; }

- (UIImage *)imageRotatedByDegrees:(CGFloat)degrees {
CGFloat radians = DegreesToRadians(degrees);
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0,0, self.size.width, self.size.height)];
CGAffineTransform t = CGAffineTransformMakeRotation(radians);
rotatedViewBox.transform = t;
CGSize rotatedSize = rotatedViewBox.frame.size;
UIGraphicsBeginImageContextWithOptions(rotatedSize, NO, [[UIScreen mainScreen] scale]);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);
CGContextRotateCTM(bitmap, radians);
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2 , self.size.width, self.size.height), self.CGImage );
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
There is also imageWithCGImage:scale:orientation: if you want to rotate the UIImage rather than the UIImageView,
with one of these orientations:
typedef enum {
UIImageOrientationUp,
UIImageOrientationDown, // 180 deg rotation
UIImageOrientationLeft, // 90 deg CW
UIImageOrientationRight, // 90 deg CCW
UIImageOrientationUpMirrored, // vertical flip
UIImageOrientationDownMirrored, // horizontal flip
UIImageOrientationLeftMirrored, // 90 deg CW then perform horizontal flip
UIImageOrientationRightMirrored, // 90 deg CCW then perform vertical flip
} UIImageOrientation;
Here is the Swift version of @RyanG's Objective-C code as an extension to UIImage:
extension UIImage {
func rotate(byDegrees degree: Double) -> UIImage {
let radians = CGFloat(degree * Double.pi / 180)
let rotatedViewBox = UIView(frame: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
let t = CGAffineTransform(rotationAngle: radians)
rotatedViewBox.transform = t
let rotatedSize = rotatedViewBox.frame.size
let scale = UIScreen.main.scale
UIGraphicsBeginImageContextWithOptions(rotatedSize, false, scale)
let bitmap = UIGraphicsGetCurrentContext()!
bitmap.translateBy(x: rotatedSize.width / 2, y: rotatedSize.height / 2)
bitmap.rotate(by: radians)
bitmap.scaleBy(x: 1.0, y: -1.0)
bitmap.draw(self.cgImage!, in: CGRect(x: -self.size.width / 2, y: -self.size.height / 2, width: self.size.width, height: self.size.height))
let newImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return newImage
}
}
The usage is image.rotate(byDegrees: degree).
With Swift, you can rotate an image by doing:
let image: UIImage = UIImage(named: "headerBack.png")!
let imageRotated: UIImage = UIImage(cgImage: image.cgImage!, scale: 1, orientation: .upMirrored)
UIImage *img = [UIImage imageNamed:@"aaa.png"];
UIImage *image = [UIImage imageWithCGImage:img.CGImage scale:1.0 orientation:UIImageOrientationRight];
Another way of doing this would be to render the UIImage again using Core Graphics.
Once you have the context, use CGContextRotateCTM.
More info on this Apple Doc
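A minimal sketch of that approach for a 90-degree clockwise turn (the method name is mine, not from the doc):
- (UIImage *)rotateImage90CW:(UIImage *)image
{
CGSize size = image.size;
// The destination context has width/height transposed for a 90-degree turn.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(size.height, size.width), NO, image.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Move the origin to the centre, rotate the CTM, then draw the image centred on it.
CGContextTranslateCTM(ctx, size.height / 2, size.width / 2);
CGContextRotateCTM(ctx, M_PI_2);
[image drawInRect:CGRectMake(-size.width / 2, -size.height / 2, size.width, size.height)];
UIImage *rotated = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return rotated;
}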
Thanks Jason Crocker, this solved my problem. Only one minor correction: interchange height and width in both locations and no distortion occurs, i.e.,
UIGraphicsBeginImageContext(CGSizeMake(size.width, size.height));
[[UIImage imageWithCGImage:[sourceImage CGImage] scale:1.0 orientation:clockwise ? UIImageOrientationRight : UIImageOrientationLeft] drawInRect:CGRectMake(0,0,size.width,size.height)];
My problem could not be solved by CGContextRotateCTM, I don't know why. My issue was that I was transmitting my image to a server and it was always displayed off by 90 degrees. You can easily test whether your images are going to work in the non-Apple world by copying the image into an MS Office program running on your Mac.
This is what I've done when I wanted to change the orientation of an image (rotate 90 degrees clockwise).
//Check the orientation, i.e. whether the image taken from the camera is in portrait mode or not.
if (yourImage.imageOrientation == UIImageOrientationRight) //raw value 3
{
//Image is in portrait mode.
yourImage = [self image:yourImage RotatedByDegrees:90.0];
}
- (UIImage *)image:(UIImage *)imageToRotate RotatedByDegrees:(CGFloat)degrees
{
CGFloat radians = degrees * (M_PI / 180.0);
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, imageToRotate.size.height, imageToRotate.size.width)];
CGAffineTransform t = CGAffineTransformMakeRotation(radians);
rotatedViewBox.transform = t;
CGSize rotatedSize = rotatedViewBox.frame.size;
UIGraphicsBeginImageContextWithOptions(rotatedSize, NO, [[UIScreen mainScreen] scale]);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(bitmap, rotatedSize.height / 2, rotatedSize.width / 2);
CGContextRotateCTM(bitmap, radians);
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-imageToRotate.size.width / 2, -imageToRotate.size.height / 2, imageToRotate.size.height, imageToRotate.size.width), imageToRotate.CGImage);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
The rotated image may be >= 15 MB in size (in my experience), so you should compress it before using it; otherwise you may hit a crash caused by memory pressure. The code I used for compressing is given below.
NSData *imageData = UIImageJPEGRepresentation(yourImage, 1);
//1 represents the quality of the image.
NSLog(@"Size of image (bytes): %lu", (unsigned long)[imageData length]);
//Here I used a loop because my requirement was that the image size should be <= 4MB,
//so it iterates (up to 100 times) until the image size becomes <= 4MB.
for(int loop=0;loop<100;loop++)
{
if([imageData length]>=4194304) //4194304 = 4MB in bytes.
{
imageData=UIImageJPEGRepresentation(yourImage, 0.3);
yourImage=[[UIImage alloc]initWithData:imageData];
}
else
{
NSLog(@"%d time(s) compressed.", loop);
break;
}
}
Now yourImage can be used anywhere.
Happy coding...
I've got a large display area that can be panned and zoomed to view different objects. The problem I'm running into is that the quality of the UIButtons' PNG images becomes somewhat degraded when I'm zoomed out (however, it is back to normal when I zoom back in to 100%). It almost looks as if the image becomes oversharpened. Is this something I'm going to have to live with, or is there a way to get rid of this grainy edge effect? The aspect ratio of the images is always 1:1, by the way.
I was able to solve this by using the answer found here in my scrollViewDidEndZooming method. Here is my code:
Resize function
- (UIImage *)resizeImage:(UIImage*)image newSize:(CGSize)newSize {
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGImageRef imageRef = image.CGImage;
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
CGContextConcatCTM(context, flipVertical);
// Draw into the context; this scales the image
CGContextDrawImage(context, newRect, imageRef);
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGImageRelease(newImageRef);
UIGraphicsEndImageContext();
return newImage;
}
ScrollView Method
(Widget is a UIViewController subclass which contains a button and a widgetImage property that stores the full-resolution image the button should display.)
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale
{
for(Widget *theWidget in widgets){
UIImage *newScaledImage = [self resizeImage:theWidget.widgetImage newSize:CGSizeMake(theWidget.view.frame.size.width * scale, theWidget.view.frame.size.height * scale)];
[theWidget.widgetButton setImage:newScaledImage forState:UIControlStateNormal];
// theWidget.widgetButton.currentImage = newScaledImage;
}
}
Is there any way to draw an NSImage like the images in NSButtons or other Cocoa interface elements?
Here are examples:
Apple uses PDFs with black icons:
If you simply want this effect to be applied when you use your own images in a button, use [myImage setTemplate:YES]. There is no built-in way to draw images with this effect outside of a button that has the style shown in your screenshots.
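For example (the asset name is hypothetical):
NSImage *icon = [NSImage imageNamed:@"MyIcon"];
[icon setTemplate:YES];
myButton.image = icon; // the button's bezel style now applies the etched rendering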
You can however replicate the effect using Core Graphics. If you look closely, the effect consists of a vertical gradient, a white drop shadow, and a dark inner shadow (the latter is the most difficult).
You could implement this as a category on NSImage:
//NSImage+EtchedDrawing.h:
@interface NSImage (EtchedImageDrawing)
- (void)drawEtchedInRect:(NSRect)rect;
@end
//NSImage+EtchedDrawing.m:
@implementation NSImage (EtchedImageDrawing)
- (void)drawEtchedInRect:(NSRect)rect
{
NSSize size = rect.size;
CGFloat dropShadowOffsetY = size.width <= 64.0 ? -1.0 : -2.0;
CGFloat innerShadowBlurRadius = size.width <= 32.0 ? 1.0 : 4.0;
CGContextRef c = [[NSGraphicsContext currentContext] graphicsPort];
//save the current graphics state
CGContextSaveGState(c);
//Create mask image:
NSRect maskRect = rect;
CGImageRef maskImage = [self CGImageForProposedRect:&maskRect context:[NSGraphicsContext currentContext] hints:nil];
//Draw image and white drop shadow:
CGContextSetShadowWithColor(c, CGSizeMake(0, dropShadowOffsetY), 0, CGColorGetConstantColor(kCGColorWhite));
[self drawInRect:maskRect fromRect:NSMakeRect(0, 0, self.size.width, self.size.height) operation:NSCompositeSourceOver fraction:1.0];
//Clip drawing to mask:
CGContextClipToMask(c, NSRectToCGRect(maskRect), maskImage);
//Draw gradient:
NSGradient *gradient = [[[NSGradient alloc] initWithStartingColor:[NSColor colorWithDeviceWhite:0.5 alpha:1.0]
endingColor:[NSColor colorWithDeviceWhite:0.25 alpha:1.0]] autorelease];
[gradient drawInRect:maskRect angle:90.0];
CGContextSetShadowWithColor(c, CGSizeMake(0, -1), innerShadowBlurRadius, CGColorGetConstantColor(kCGColorBlack));
//Draw inner shadow with inverted mask:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef maskContext = CGBitmapContextCreate(NULL, CGImageGetWidth(maskImage), CGImageGetHeight(maskImage), 8, CGImageGetWidth(maskImage) * 4, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextSetBlendMode(maskContext, kCGBlendModeXOR);
CGContextDrawImage(maskContext, maskRect, maskImage);
CGContextSetRGBFillColor(maskContext, 1.0, 1.0, 1.0, 1.0);
CGContextFillRect(maskContext, maskRect);
CGImageRef invertedMaskImage = CGBitmapContextCreateImage(maskContext);
CGContextDrawImage(c, maskRect, invertedMaskImage);
CGImageRelease(invertedMaskImage);
CGContextRelease(maskContext);
//restore the graphics state
CGContextRestoreGState(c);
}
@end
Example usage in a view:
- (void)drawRect:(NSRect)dirtyRect
{
[[NSColor colorWithDeviceWhite:0.8 alpha:1.0] set];
NSRectFill(self.bounds);
NSImage *image = [NSImage imageNamed:@"MyIcon.pdf"];
[image drawEtchedInRect:self.bounds];
}
This would give you the following result (shown in different sizes):
You may need to experiment a bit with the gradient colors and offset/blur radius of the two shadows to get closer to the original effect.
If you don't mind calling a private API, you can let the operating system (CoreUI) do the shading for you. You need a few declarations:
typedef CFTypeRef CUIRendererRef;
extern void CUIDraw(CUIRendererRef renderer, CGRect frame, CGContextRef context, CFDictionaryRef object, CFDictionaryRef *result);
@interface NSWindow (CoreUIRendererPrivate)
+ (CUIRendererRef)coreUIRenderer;
@end
And for the actual drawing:
CGRect drawRect = CGRectMake(x, y, width, height);
CGContextRef cg = [[NSGraphicsContext currentContext] graphicsPort]; // the context to draw into
CGImageRef cgimage = your_image;
CFDictionaryRef dict = (CFDictionaryRef) [NSDictionary dictionaryWithObjectsAndKeys:
@"backgroundTypeRaised", @"backgroundTypeKey",
[NSNumber numberWithBool:YES], @"imageIsGrayscaleKey",
(id)cgimage, @"imageReferenceKey",
@"normal", @"state",
@"image", @"widget",
[NSNumber numberWithBool:YES], @"is.flipped",
nil];
CUIDraw([NSWindow coreUIRenderer], drawRect, cg, dict, nil);
CGImageRelease(cgimage);
This will take the alpha channel of cgimage and apply the embossing effect as seen on toolbar buttons. You may or may not need the "is.flipped" line. Remove it if your result is upside-down.
There are a bunch of variations:
kCUIPresentationStateKey = kCUIPresentationStateInactive: The window is not active, the image will be lighter.
state = rollover: Only makes sense with the previous option. This means you are hovering over the image, the window is inactive, but the button is sensitive (click-through is enabled). It will become darker.
state = pressed: Occurs when the button is pressed. The icon gets slightly darker.
Bonus tip: To find out stuff like this, you can use the SIMBL plugin CUITrace. It prints out all the CoreUI invocations of a target app. This is a treasure trove if you have to draw your own native-looking UI.
Here's a much simpler solution: just create a cell and let it draw. No mucking around with private APIs or Core Graphics.
Code could look similar to the following:
NSButtonCell *buttonCell = [[NSButtonCell alloc] initImageCell:image];
buttonCell.bordered = YES;
buttonCell.bezelStyle = NSTexturedRoundedBezelStyle;
// additional configuration
[buttonCell drawInteriorWithFrame: someRect inView:self];
You can use different cells and configurations depending on the look you want to have (eg. NSImageCell with NSBackgroundStyleDark if you want the inverted look in a selected table view row)
And as a bonus, it will automatically look correct on all versions of OS X.
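For instance, the NSImageCell variant mentioned above might look like this (someRect as in the previous snippet):
NSImageCell *imageCell = [[NSImageCell alloc] initImageCell:image];
imageCell.backgroundStyle = NSBackgroundStyleDark; // the inverted look used in selected table view rows
[imageCell drawInteriorWithFrame:someRect inView:self];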
To get this to draw correctly within any rect, the CGContextDrawImage and CGContextFillRect calls for the inner mask must have an origin of (0,0); then, when you draw the image for the inner shadow, you can reuse the mask rect. So it ends up looking like:
CGRect cgRect = CGRectMake( 0, 0, maskRect.size.width, maskRect.size.height );
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef maskContext = CGBitmapContextCreate( NULL, CGImageGetWidth( maskImage ), CGImageGetHeight( maskImage ), 8, CGImageGetWidth( maskImage ) * 4, colorSpace, kCGImageAlphaPremultipliedLast );
CGColorSpaceRelease( colorSpace );
CGContextSetBlendMode( maskContext , kCGBlendModeXOR );
CGContextDrawImage( maskContext, cgRect, maskImage );
CGContextSetRGBFillColor( maskContext, 1.0, 1.0, 1.0, 1.0 );
CGContextFillRect( maskContext, cgRect );
CGImageRef invertedMaskImage = CGBitmapContextCreateImage( maskContext );
CGContextDrawImage( context, maskRect, invertedMaskImage );
CGImageRelease( invertedMaskImage );
CGContextRelease( maskContext );
CGContextRestoreGState( context );
You also have to leave a 1px border around the outside of the image or the shadows won't work correctly.
Is there an easy way to get a two-dimensional array, or something similar, that represents the pixel data of an image?
I have black & white PNG images and I simply want to read the color value at a certain coordinate. For example, the color value at 20/100.
This category on UIImage might be helpful (Source):
#import <CoreGraphics/CoreGraphics.h>
#import "UIImage+ColorAtPixel.h"
@implementation UIImage (ColorAtPixel)
- (UIColor *)colorAtPixel:(CGPoint)point {
// Cancel if point is outside image coordinates
if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, self.size.width, self.size.height), point)) {
return nil;
}
// Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
// Reference: http://stackoverflow.com/questions/1042830/retrieving-a-pixel-alpha-value-for-a-uiimage
NSInteger pointX = trunc(point.x);
NSInteger pointY = trunc(point.y);
CGImageRef cgImage = self.CGImage;
NSUInteger width = CGImageGetWidth(cgImage);
NSUInteger height = CGImageGetHeight(cgImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
int bytesPerPixel = 4;
int bytesPerRow = bytesPerPixel * 1;
NSUInteger bitsPerComponent = 8;
unsigned char pixelData[4] = { 0, 0, 0, 0 };
CGContextRef context = CGBitmapContextCreate(pixelData,
1,
1,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextSetBlendMode(context, kCGBlendModeCopy);
// Draw the pixel we are interested in onto the bitmap context
CGContextTranslateCTM(context, -pointX, -pointY);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
CGContextRelease(context);
// Convert color values [0..255] to floats [0.0..1.0]
CGFloat red = (CGFloat)pixelData[0] / 255.0f;
CGFloat green = (CGFloat)pixelData[1] / 255.0f;
CGFloat blue = (CGFloat)pixelData[2] / 255.0f;
CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}
@end
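Usage for the question's example coordinate (the file name is hypothetical):
#import "UIImage+ColorAtPixel.h"

UIImage *png = [UIImage imageNamed:@"blackAndWhite.png"];
UIColor *color = [png colorAtPixel:CGPointMake(20, 100)];
CGFloat red, green, blue, alpha;
[color getRed:&red green:&green blue:&blue alpha:&alpha]; // each channel in [0.0, 1.0]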
You could put the PNG into an image view, and then use this method to get the pixel value from a graphics context that you draw the image into.
A class to do it for you, and explained too:
http://www.markj.net/iphone-uiimage-pixel-color/
The direct approach is slightly tedious, but here goes:
Get the CoreGraphics image.
CGImageRef cgImage = image.CGImage;
Get the "data provider", and from that get the data.
NSData * d = [(id)CGDataProviderCopyData(CGImageGetDataProvider(cgImage)) autorelease];
Figure out what format the data is in.
CGImageGetBitmapInfo(cgImage);
CGImageGetBitsPerComponent(cgImage);
CGImageGetBitsPerPixel(cgImage);
CGImageGetBytesPerRow(cgImage);
Figure out the colour space (PNG supports greyscale/RGB/paletted):
CGImageGetColorSpace(cgImage)
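A sketch of reading one pixel directly, assuming the common case of 8 bits per component and 4 bytes per pixel in RGBA order (check the CGImageGet* values above before trusting these offsets):
const UInt8 *bytes = [d bytes];
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
size_t x = 20, y = 100; // the question's example coordinate
const UInt8 *pixel = bytes + y * bytesPerRow + x * 4;
UInt8 red = pixel[0], green = pixel[1], blue = pixel[2], alpha = pixel[3];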
The indirect approach is to draw the image to a context (note that you may need to specify the context's byte order if you want any guarantees) and read the bytes out.
If you only want single pixels, it might be faster to draw the image to a 1x1 context with the right rect
(something like (CGRect){{-x,-y},{imgWidth,imgHeight}}).
This will handle colour-space conversion for you. If you just want a brightness value, use a greyscale context.
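A sketch of that 1x1 greyscale variant (all names here are mine; note that y counts from the bottom, since Core Graphics puts the origin in the lower-left corner):
UInt8 brightness = 0;
size_t imgWidth = CGImageGetWidth(cgImage);
size_t imgHeight = CGImageGetHeight(cgImage);
size_t x = 20, y = 100;
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
CGContextRef ctx = CGBitmapContextCreate(&brightness, 1, 1, 8, 1, gray, (CGBitmapInfo)kCGImageAlphaNone);
CGColorSpaceRelease(gray);
// Draw so that pixel (x, y) of the image lands on the context's single pixel.
CGContextDrawImage(ctx, CGRectMake(-(CGFloat)x, -(CGFloat)y, imgWidth, imgHeight), cgImage);
CGContextRelease(ctx);
// brightness now holds the grey value: 0 = black, 255 = white.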