UIImage from SKTexture - objective-c

How to get UIImage from SKTexture?
I also tried to get a UIImage from an SKTextureAtlas, but that doesn't seem to work either:
// p40_prop1 is a part of an SKTextureAtlas
UIImage *image = [UIImage imageNamed:@"p40_prop1"];
image is nil.

Starting from iOS 9 it is a piece of cake. SKTexture now has a CGImage property, of type CGImageRef, so getting an image from a texture is just one line:
let image : UIImage = UIImage(CGImage:texture.CGImage)
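Since the question is tagged objective-c, the Objective-C equivalent should be roughly the following (a minimal sketch, assuming texture is an existing SKTexture and you are on iOS 9 or later):
// CGImage is available on SKTexture starting with iOS 9.
UIImage *image = [UIImage imageWithCGImage:texture.CGImage];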

This seems to be working for me:
- (UIImage*) imageWithView:(UIView *)view
{
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
- (UIImage*) imageFromNode:(SKNode*)node
{
SKTexture* tex = [self.scene.view textureFromNode:node];
SKView* view = [[SKView alloc]initWithFrame:CGRectMake(0, 0, tex.size.width, tex.size.height)];
SKScene* scene = [SKScene sceneWithSize:tex.size];
SKSpriteNode* sprite = [SKSpriteNode spriteNodeWithTexture:tex];
sprite.position = CGPointMake( CGRectGetMidX(view.frame), CGRectGetMidY(view.frame) );
[scene addChild:sprite];
[view presentScene:scene];
return [self imageWithView:view];
}
get the SKTexture for your node using the current SKView
make another SKView that is just big enough for your texture
add a SKSpriteNode with the texture into your new scene, placing it in the middle
render the view into a graphics context
Or for those who prefer Swift:
func imageWithView(view : UIView) -> UIImage {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0)
view.drawViewHierarchyInRect(view.bounds, afterScreenUpdates: true)
let img = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return img
}
func imageFromNode(node : SKNode) -> UIImage? {
if let tex = self.scene?.view?.textureFromNode(node) {
let view = SKView(frame:CGRectMake(0, 0, tex.size().width, tex.size().height))
let scene = SKScene(size: tex.size())
let sprite = SKSpriteNode(texture: tex)
sprite.position = CGPoint(x: CGRectGetMidX(view.frame), y: CGRectGetMidY(view.frame))
scene.addChild(sprite)
view.presentScene(scene)
return self.imageWithView(view)
}
return nil
}

There is actually a way to get a UIImage out of an SKView in iOS 7.0!
It uses regular UIView APIs to render the view into an ImageContext, then pulls a UIImage out of that. However, this solution is very limited in scope. It draws the SKView into a UIImage, then crops the resulting image to fit a given node's frame. So there must not be anything covering that node you want to snapshot. Also, both the view and scene must be visible on-screen (which is stricter than the usual -[SKView textureFromNode:] method). There may even be further restrictions that I haven't discovered.
Given all that, this procedure was still enough for what I needed, so I thought it was worth sharing.
+(UIImage *)imageFromNode:(SKNode *)node {
SKView *view = node.scene.view;
CGFloat scale = [UIScreen mainScreen].scale;
CGRect nodeFrame = [node calculateAccumulatedFrame];
// render SKView into UIImage
UIGraphicsBeginImageContextWithOptions(view.bounds.size, YES, 0.0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *sceneSnapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// crop to the requested node (making sure to flip the y-coordinate)
CGFloat originY = sceneSnapshot.size.height*scale - nodeFrame.origin.y*scale - nodeFrame.size.height*scale;
CGRect cropRect = CGRectMake(nodeFrame.origin.x * scale, originY, nodeFrame.size.width*scale, nodeFrame.size.height*scale);
CGImageRef croppedSnapshot = CGImageCreateWithImageInRect(sceneSnapshot.CGImage, cropRect);
UIImage *nodeSnapshot = [UIImage imageWithCGImage:croppedSnapshot];
CGImageRelease(croppedSnapshot);
return nodeSnapshot;
}
I've tested this on the simulator in 3.5" and 4" retina iPhones, retina and non-retina iPads. As for actual devices, it worked on iPhone 4S, iPhone 5S, and iPad 2, all running 7.0.4.

func loadBackground() {
// Only build the background once; SKTexture(image:) turns the UIImage into a texture.
guard childNode(withName: "background") == nil else { return }
let texture = SKTexture(image: UIImage(named: "stick.jpg")!)
let node = SKSpriteNode(texture: texture)
node.size = texture.size()
node.zPosition = StickHeroGameSceneZposition.backgroundZposition.rawValue
self.physicsWorld.gravity = CGVector(dx: 0, dy: gravity)
addChild(node)
}

As of iOS 7.0 there's no way to get a UIImage from SKTexture, SKTextureAtlas or the SKView.

Related

Performance issue when try to make rounded UIImage (using mask)

I've written a method that turns a rectangular UIImage into a rounded one. The problem is that performance drops noticeably when I do that operation for 10 different images in a row. The images are 120x120.
- (UIImage *)roundedImage:(UIImage*)anOriginalImage radius:(CGFloat)aRadius
{
UIImage *result = nil;
if (anOriginalImage != nil) {
UIGraphicsBeginImageContextWithOptions(anOriginalImage.size, NO, 0);
[[UIBezierPath bezierPathWithRoundedRect:(CGRect){CGPointZero, anOriginalImage.size}
cornerRadius:aRadius] addClip];
[anOriginalImage drawInRect:(CGRect){CGPointZero, anOriginalImage.size}];
result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
return result;
}
How to fix that?
Import the QuartzCore framework to get access to the cornerRadius property of your UIView's or UIImageView's layer.
#import <QuartzCore/QuartzCore.h>
Also manually add it to your project's Frameworks folder.
Add this method to your view controller or wherever you need it:
-(void)setRoundedView:(UIImageView *)roundedView toDiameter:(float)newSize
{
CGPoint saveCenter = roundedView.center;
CGRect newFrame = CGRectMake(roundedView.frame.origin.x, roundedView.frame.origin.y, newSize, newSize);
roundedView.frame = newFrame;
roundedView.layer.cornerRadius = newSize / 2.0;
roundedView.center = saveCenter;
}
To use it, just pass it a UIImageView and a diameter. This example assumes you have a UIImageView named "circ" added as a subview to your view. It should have a backgroundColor set so you can see it.
[self setRoundedView:circ toDiameter:100.0];
This just handles UIImageViews, but you can generalize it to any UIView, as in the sketch below.
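A rough sketch of that generalization (the method name mirrors the one above; note that masksToBounds is needed if the view draws content, such as an image, that should be clipped to the rounded shape):
- (void)setRoundedView:(UIView *)roundedView toDiameter:(CGFloat)newSize
{
    CGPoint saveCenter = roundedView.center;
    roundedView.frame = CGRectMake(roundedView.frame.origin.x, roundedView.frame.origin.y, newSize, newSize);
    roundedView.layer.cornerRadius = newSize / 2.0;
    roundedView.layer.masksToBounds = YES; // clip subview/image content, not just the background color
    roundedView.center = saveCenter;
}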

Blurry transparent view over UITableView [duplicate]

I'm trying to replicate this blurred background from Apple's publicly released iOS 7 example screen:
This question suggests applying a CI filter to the contents below, but that's a whole different approach. It's obvious that iOS 7 doesn't capture the contents of the views below, for many reasons:
Doing some rough testing, capturing a screenshot of the views below and applying a CIGaussianBlur filter with a large enough radius to mimic iOS 7's blur style takes 1-2 seconds, even on a simulator.
The iOS 7 blur view is able to blur over dynamic views, such as a video or animations, with no noticeable lag.
Can anyone hypothesize what frameworks they could be using to create this effect, and if it's possible to create a similar effect with current public APIs?
Edit: (from comment) We don't exactly know how Apple is doing it, but are there any basic assumptions we can make? We can assume they are using hardware, right?
Is the effect self-contained in each view, such that the effect doesn't actually know what's behind it? Or must, based on how blurs work, the contents behind the blur be taken into consideration?
If the contents behind the effect are relevant, can we assume that Apple is receiving a "feed" of the contents below and continuously rendering them with a blur?
Why bother replicating the effect? Just draw a UIToolbar behind your view.
myView.backgroundColor = [UIColor clearColor];
UIToolbar* bgToolbar = [[UIToolbar alloc] initWithFrame:myView.frame];
bgToolbar.barStyle = UIBarStyleDefault;
[myView.superview insertSubview:bgToolbar belowSubview:myView];
Apple released code at WWDC as a category on UIImage that includes this functionality. If you have a developer account, you can grab the UIImage category (and the rest of the sample code) by going to this link: https://developer.apple.com/wwdc/schedule/ and browsing for section 226 and clicking on Details. I haven't played around with it yet, but I think the effect will be a lot slower on iOS 6; there are some enhancements in iOS 7 that make grabbing the initial screenshot used as input to the blur a lot faster.
Direct link: https://developer.apple.com/downloads/download.action?path=wwdc_2013/wwdc_2013_sample_code/ios_uiimageeffects.zip
Actually I'd bet this would be rather simple to achieve. It probably wouldn't operate or look exactly like what Apple has going on but could be very close.
First of all, you'd need to determine the CGRect of the UIView that you will be presenting. Once you've determined that, you just need to grab an image of that part of the UI so it can be blurred. Something like this...
- (UIImage*)getBlurredImage {
// You will want to calculate this in code based on the view you will be presenting.
CGSize size = CGSizeMake(200,200);
UIGraphicsBeginImageContext(size);
[view drawViewHierarchyInRect:(CGRect){CGPointZero, size} afterScreenUpdates:YES]; // view is the view you are grabbing the screen shot of. The view that is to be blurred.
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Gaussian Blur
image = [image applyLightEffect];
// Box Blur
// image = [image boxblurImageWithBlur:0.2f];
return image;
}
Gaussian Blur - Recommended
Using the UIImage+ImageEffects Category Apple's provided here, you'll get a gaussian blur that looks very much like the blur in iOS 7.
Box Blur
You could also use a box blur using the following boxblurImageWithBlur: UIImage category. This is based on an algorithm that you can find here.
@implementation UIImage (Blur)
-(UIImage *)boxblurImageWithBlur:(CGFloat)blur {
if (blur < 0.f || blur > 1.f) {
blur = 0.5f;
}
int boxSize = (int)(blur * 50);
boxSize = boxSize - (boxSize % 2) + 1;
CGImageRef img = self.CGImage;
vImage_Buffer inBuffer, outBuffer;
vImage_Error error;
void *pixelBuffer;
CGDataProviderRef inProvider = CGImageGetDataProvider(img);
CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);
inBuffer.width = CGImageGetWidth(img);
inBuffer.height = CGImageGetHeight(img);
inBuffer.rowBytes = CGImageGetBytesPerRow(img);
inBuffer.data = (void*)CFDataGetBytePtr(inBitmapData);
pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
if(pixelBuffer == NULL)
NSLog(#"No pixelbuffer");
outBuffer.data = pixelBuffer;
outBuffer.width = CGImageGetWidth(img);
outBuffer.height = CGImageGetHeight(img);
outBuffer.rowBytes = CGImageGetBytesPerRow(img);
error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
if (error) {
NSLog(#"JFDepthView: error from convolution %ld", error);
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
outBuffer.width,
outBuffer.height,
8,
outBuffer.rowBytes,
colorSpace,
kCGImageAlphaNoneSkipLast);
CGImageRef imageRef = CGBitmapContextCreateImage (ctx);
UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
//clean up
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(pixelBuffer);
CFRelease(inBitmapData);
CGImageRelease(imageRef);
return returnImage;
}
@end
Now that you are calculating the screen area to blur, passing it into the blur category, and receiving a blurred UIImage back, all that is left is to set that blurred image as the background of the view you will be presenting. Like I said, this will not be a perfect match for what Apple is doing, but it should still look pretty cool.
Hope it helps.
iOS 8 answered these questions with UIVisualEffectView and UIBlurEffect:
- (instancetype)initWithEffect:(UIVisualEffect *)effect
or Swift:
init(effect: UIVisualEffect)
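As a minimal sketch of using it (assuming myView is the view that should sit on top of the blur, and iOS 8 or later):
UIBlurEffect *blur = [UIBlurEffect effectWithStyle:UIBlurEffectStyleLight];
UIVisualEffectView *blurView = [[UIVisualEffectView alloc] initWithEffect:blur];
blurView.frame = myView.bounds;
blurView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
[myView.superview insertSubview:blurView belowSubview:myView];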
I just wrote my own little subclass of UIView that can produce the native iOS 7 blur on any custom view. It uses UIToolbar, but in a way that is safe for changing its frame, bounds, color, and alpha with real-time animation.
Please let me know if you notice any problems.
https://github.com/ivoleko/ILTranslucentView
There is a rumor that Apple engineers claimed that, to make this performant, they are reading directly out of the GPU buffer, which raises security issues, which is why there is no public API to do this yet.
This is a solution that you can see in the WWDC videos. You have to do a Gaussian blur, so the first thing you have to do is add a new .m and .h file with the code I'm writing here. Then you have to take a screenshot, apply the desired effect, and add it to your view; after that the UITableView, UIView, or whatever has to be transparent. You can play with applyBlurWithRadius: to achieve the desired effect; this call works with any UIImage.
At the end the blurred image will be the background, and the rest of the controls above have to be transparent.
For this to work you have to add the following frameworks:
Accelerate.framework, UIKit.framework, CoreGraphics.framework
I hope you like it.
Happy coding.
//Screen capture.
UIGraphicsBeginImageContext(self.view.bounds.size);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(c, 0, 0);
[self.view.layer renderInContext:c];
UIImage* viewImage = UIGraphicsGetImageFromCurrentImageContext();
viewImage = [viewImage applyLightEffect];
UIGraphicsEndImageContext();
//.h FILE
#import <UIKit/UIKit.h>
@interface UIImage (ImageEffects)
- (UIImage *)applyLightEffect;
- (UIImage *)applyExtraLightEffect;
- (UIImage *)applyDarkEffect;
- (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor;
- (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius tintColor:(UIColor *)tintColor saturationDeltaFactor:(CGFloat)saturationDeltaFactor maskImage:(UIImage *)maskImage;
@end
//.m FILE
#import "cGaussianEffect.h"
#import <Accelerate/Accelerate.h>
#import <float.h>
@implementation UIImage (ImageEffects)
- (UIImage *)applyLightEffect
{
UIColor *tintColor = [UIColor colorWithWhite:1.0 alpha:0.3];
return [self applyBlurWithRadius:1 tintColor:tintColor saturationDeltaFactor:1.8 maskImage:nil];
}
- (UIImage *)applyExtraLightEffect
{
UIColor *tintColor = [UIColor colorWithWhite:0.97 alpha:0.82];
return [self applyBlurWithRadius:1 tintColor:tintColor saturationDeltaFactor:1.8 maskImage:nil];
}
- (UIImage *)applyDarkEffect
{
UIColor *tintColor = [UIColor colorWithWhite:0.11 alpha:0.73];
return [self applyBlurWithRadius:1 tintColor:tintColor saturationDeltaFactor:1.8 maskImage:nil];
}
- (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor
{
const CGFloat EffectColorAlpha = 0.6;
UIColor *effectColor = tintColor;
int componentCount = CGColorGetNumberOfComponents(tintColor.CGColor);
if (componentCount == 2) {
CGFloat b;
if ([tintColor getWhite:&b alpha:NULL]) {
effectColor = [UIColor colorWithWhite:b alpha:EffectColorAlpha];
}
}
else {
CGFloat r, g, b;
if ([tintColor getRed:&r green:&g blue:&b alpha:NULL]) {
effectColor = [UIColor colorWithRed:r green:g blue:b alpha:EffectColorAlpha];
}
}
return [self applyBlurWithRadius:10 tintColor:effectColor saturationDeltaFactor:-1.0 maskImage:nil];
}
- (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius tintColor:(UIColor *)tintColor saturationDeltaFactor:(CGFloat)saturationDeltaFactor maskImage:(UIImage *)maskImage
{
if (self.size.width < 1 || self.size.height < 1) {
NSLog (#"*** error: invalid size: (%.2f x %.2f). Both dimensions must be >= 1: %#", self.size.width, self.size.height, self);
return nil;
}
if (!self.CGImage) {
NSLog (#"*** error: image must be backed by a CGImage: %#", self);
return nil;
}
if (maskImage && !maskImage.CGImage) {
NSLog (#"*** error: maskImage must be backed by a CGImage: %#", maskImage);
return nil;
}
CGRect imageRect = { CGPointZero, self.size };
UIImage *effectImage = self;
BOOL hasBlur = blurRadius > __FLT_EPSILON__;
BOOL hasSaturationChange = fabs(saturationDeltaFactor - 1.) > __FLT_EPSILON__;
if (hasBlur || hasSaturationChange) {
UIGraphicsBeginImageContextWithOptions(self.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef effectInContext = UIGraphicsGetCurrentContext();
CGContextScaleCTM(effectInContext, 1.0, -1.0);
CGContextTranslateCTM(effectInContext, 0, -self.size.height);
CGContextDrawImage(effectInContext, imageRect, self.CGImage);
vImage_Buffer effectInBuffer;
effectInBuffer.data = CGBitmapContextGetData(effectInContext);
effectInBuffer.width = CGBitmapContextGetWidth(effectInContext);
effectInBuffer.height = CGBitmapContextGetHeight(effectInContext);
effectInBuffer.rowBytes = CGBitmapContextGetBytesPerRow(effectInContext);
UIGraphicsBeginImageContextWithOptions(self.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef effectOutContext = UIGraphicsGetCurrentContext();
vImage_Buffer effectOutBuffer;
effectOutBuffer.data = CGBitmapContextGetData(effectOutContext);
effectOutBuffer.width = CGBitmapContextGetWidth(effectOutContext);
effectOutBuffer.height = CGBitmapContextGetHeight(effectOutContext);
effectOutBuffer.rowBytes = CGBitmapContextGetBytesPerRow(effectOutContext);
if (hasBlur) {
CGFloat inputRadius = blurRadius * [[UIScreen mainScreen] scale];
NSUInteger radius = floor(inputRadius * 3. * sqrt(2 * M_PI) / 4 + 0.5);
if (radius % 2 != 1) {
radius += 1;
}
vImageBoxConvolve_ARGB8888(&effectInBuffer, &effectOutBuffer, NULL, 0, 0, radius, radius, 0, kvImageEdgeExtend);
vImageBoxConvolve_ARGB8888(&effectOutBuffer, &effectInBuffer, NULL, 0, 0, radius, radius, 0, kvImageEdgeExtend);
vImageBoxConvolve_ARGB8888(&effectInBuffer, &effectOutBuffer, NULL, 0, 0, radius, radius, 0, kvImageEdgeExtend);
}
BOOL effectImageBuffersAreSwapped = NO;
if (hasSaturationChange) {
CGFloat s = saturationDeltaFactor;
CGFloat floatingPointSaturationMatrix[] = {
0.0722 + 0.9278 * s, 0.0722 - 0.0722 * s, 0.0722 - 0.0722 * s, 0,
0.7152 - 0.7152 * s, 0.7152 + 0.2848 * s, 0.7152 - 0.7152 * s, 0,
0.2126 - 0.2126 * s, 0.2126 - 0.2126 * s, 0.2126 + 0.7873 * s, 0,
0, 0, 0, 1,
};
const int32_t divisor = 256;
NSUInteger matrixSize = sizeof(floatingPointSaturationMatrix)/sizeof(floatingPointSaturationMatrix[0]);
int16_t saturationMatrix[matrixSize];
for (NSUInteger i = 0; i < matrixSize; ++i) {
saturationMatrix[i] = (int16_t)roundf(floatingPointSaturationMatrix[i] * divisor);
}
if (hasBlur) {
vImageMatrixMultiply_ARGB8888(&effectOutBuffer, &effectInBuffer, saturationMatrix, divisor, NULL, NULL, kvImageNoFlags);
effectImageBuffersAreSwapped = YES;
}
else {
vImageMatrixMultiply_ARGB8888(&effectInBuffer, &effectOutBuffer, saturationMatrix, divisor, NULL, NULL, kvImageNoFlags);
}
}
if (!effectImageBuffersAreSwapped)
effectImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
if (effectImageBuffersAreSwapped)
effectImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
UIGraphicsBeginImageContextWithOptions(self.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef outputContext = UIGraphicsGetCurrentContext();
CGContextScaleCTM(outputContext, 1.0, -1.0);
CGContextTranslateCTM(outputContext, 0, -self.size.height);
CGContextDrawImage(outputContext, imageRect, self.CGImage);
if (hasBlur) {
CGContextSaveGState(outputContext);
if (maskImage) {
CGContextClipToMask(outputContext, imageRect, maskImage.CGImage);
}
CGContextDrawImage(outputContext, imageRect, effectImage.CGImage);
CGContextRestoreGState(outputContext);
}
if (tintColor) {
CGContextSaveGState(outputContext);
CGContextSetFillColorWithColor(outputContext, tintColor.CGColor);
CGContextFillRect(outputContext, imageRect);
CGContextRestoreGState(outputContext);
}
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return outputImage;
}
You can find your solution in Apple's demo on this page:
WWDC 2013: find and download the UIImageEffects sample code.
Then, with @Jeremy Fox's code, I changed it to
- (UIImage*)getDarkBlurredImageWithTargetView:(UIView *)targetView
{
CGSize size = targetView.frame.size;
UIGraphicsBeginImageContext(size);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(c, 0, 0);
[targetView.layer renderInContext:c]; // targetView is the view you are grabbing the screen shot of. The view that is to be blurred.
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return [image applyDarkEffect];
}
Hope this will help you.
Here is a really easy way of doing it: https://github.com/JagCesar/iOS-blur
Just copy the layer of UIToolbar and you're done; AMBlurView does it for you.
Okay, it's not as blurry as Control Center, but it's blurry enough.
Remember that iOS 7 is under NDA.
Every response here is using vImageBoxConvolve_ARGB8888, and this function is really, really slow. That is fine if performance is not a high-priority requirement, but if you are using this for transitioning between two view controllers (for example), this approach means times over 1 second or maybe more, which is very bad for the user experience of your application.
If you prefer to leave all this image processing to the GPU (and you should), you can get a much better effect and also awesome times of around 50 ms (assuming a time of 1 second with the first approach), so let's do it.
First download the GPUImage framework (BSD licensed) here.
Next, add the following classes (.m and .h) from GPUImage (I'm not sure that these are the minimum needed for the blur effect only):
GPUImage.h
GPUImageAlphaBlendFilter
GPUImageFilter
GPUImageFilterGroup
GPUImageGaussianBlurPositionFilter
GPUImageGaussianSelectiveBlurFilter
GPUImageLuminanceRangeFilter
GPUImageOutput
GPUImageTwoInputFilter
GLProgram
GPUImageBoxBlurFilter
GPUImageGaussianBlurFilter
GPUImageiOSBlurFilter
GPUImageSaturationFilter
GPUImageSolidColorGenerator
GPUImageTwoPassFilter
GPUImageTwoPassTextureSamplingFilter
iOS/GPUImage-Prefix.pch
iOS/GPUImageContext
iOS/GPUImageMovieWriter
iOS/GPUImagePicture
iOS/GPUImageView
Next, create a category on UIImage, that will add a blur effect to an existing UIImage:
#import "UIImage+Utils.h"
#import "GPUImagePicture.h"
#import "GPUImageSolidColorGenerator.h"
#import "GPUImageAlphaBlendFilter.h"
#import "GPUImageBoxBlurFilter.h"
@implementation UIImage (Utils)
- (UIImage*) GPUBlurredImage
{
GPUImagePicture *source =[[GPUImagePicture alloc] initWithImage:self];
CGSize size = CGSizeMake(self.size.width * self.scale, self.size.height * self.scale);
GPUImageBoxBlurFilter *blur = [[GPUImageBoxBlurFilter alloc] init];
[blur setBlurRadiusInPixels:4.0f];
[blur setBlurPasses:2.0f];
[blur forceProcessingAtSize:size];
[source addTarget:blur];
GPUImageSolidColorGenerator * white = [[GPUImageSolidColorGenerator alloc] init];
[white setColorRed:1.0f green:1.0f blue:1.0f alpha:0.1f];
[white forceProcessingAtSize:size];
GPUImageAlphaBlendFilter * blend = [[GPUImageAlphaBlendFilter alloc] init];
blend.mix = 0.9f;
[blur addTarget:blend];
[white addTarget:blend];
[blend forceProcessingAtSize:size];
[source processImage];
return [blend imageFromCurrentlyProcessedOutput];
}
@end
And last, add the following frameworks to your project:
AVFoundation
CoreMedia
CoreVideo
OpenGLES
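A hypothetical usage of the category (snapshot stands for whatever UIImage you captured of the view that should be blurred):
UIImage *blurred = [snapshot GPUBlurredImage];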
Yeah, have fun with this much faster approach ;)
You can try using my custom view, which is capable of blurring the background. It does this by faking a snapshot of the background and blurring it, just like the one in Apple's WWDC code. It is very simple to use.
I also made some improvements to fake a dynamic blur without losing performance. The background of my view is a scroll view that scrolls with the view, thus providing the blur effect for the rest of the superview.
See the example and code on my GitHub
Core Background implements the desired iOS 7 effect.
https://github.com/justinmfischer/core-background
Disclaimer: I am the author of this project

How to rotate UIImage

I'm developing an iOS app for iPad. Is there any way to rotate a UIImage 90° and then add it to a UIImageView? I've tried a lot of different code but none worked...
Thanks!
You may rotate UIImageView itself with:
UIImageView *iv = [[UIImageView alloc] initWithImage:image];
iv.transform = CGAffineTransformMakeRotation(M_PI_2);
Or if you really want to change the image, you may use the code from this answer; it works.
To rotate the pixels you can use the following. This creates an intermediate UIImage with rotated metadata and renders it into an image context with the width/height dimensions transposed. The resulting image has the pixels rotated (i.e. the underlying CGImage).
- (UIImage*)rotateUIImage:(UIImage*)sourceImage clockwise:(BOOL)clockwise
{
CGSize size = sourceImage.size;
UIGraphicsBeginImageContext(CGSizeMake(size.height, size.width));
[[UIImage imageWithCGImage:[sourceImage CGImage] scale:1.0 orientation:clockwise ? UIImageOrientationRight : UIImageOrientationLeft] drawInRect:CGRectMake(0,0,size.height ,size.width)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
There are other possible values that can be passed for the orientation parameter to achieve 180 degree rotation and flips etc.
This will rotate an image by any given degrees.
Note this works on 2x and 3x retina as well.
- (UIImage *)imageRotatedByDegrees:(CGFloat)degrees {
CGFloat radians = degrees * M_PI / 180.0;
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0,0, self.size.width, self.size.height)];
CGAffineTransform t = CGAffineTransformMakeRotation(radians);
rotatedViewBox.transform = t;
CGSize rotatedSize = rotatedViewBox.frame.size;
UIGraphicsBeginImageContextWithOptions(rotatedSize, NO, [[UIScreen mainScreen] scale]);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);
CGContextRotateCTM(bitmap, radians);
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2 , self.size.width, self.size.height), self.CGImage );
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
There is also imageWithCIImage:scale:orientation: if you want to rotate the UIImage rather than the UIImageView,
with one of these orientations:
typedef enum {
UIImageOrientationUp,
UIImageOrientationDown, // 180 deg rotation
UIImageOrientationLeft, // 90 deg CW
UIImageOrientationRight, // 90 deg CCW
UIImageOrientationUpMirrored, // vertical flip
UIImageOrientationDownMirrored, // horizontal flip
UIImageOrientationLeftMirrored, // 90 deg CW then perform horizontal flip
UIImageOrientationRightMirrored, // 90 deg CCW then perform vertical flip
} UIImageOrientation;
Here is the Swift version of @RyanG's Objective-C code as an extension to UIImage:
extension UIImage {
func rotate(byDegrees degree: Double) -> UIImage {
let radians = CGFloat(degree * .pi / 180)
let rotatedViewBox = UIView(frame: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
rotatedViewBox.transform = CGAffineTransform(rotationAngle: radians)
let rotatedSize = rotatedViewBox.frame.size
let scale = UIScreen.main.scale
UIGraphicsBeginImageContextWithOptions(rotatedSize, false, scale)
let bitmap = UIGraphicsGetCurrentContext()!
// Move the origin to the middle of the new canvas, rotate, then flip vertically for CGImage drawing.
bitmap.translateBy(x: rotatedSize.width / 2, y: rotatedSize.height / 2)
bitmap.rotate(by: radians)
bitmap.scaleBy(x: 1.0, y: -1.0)
bitmap.draw(self.cgImage!, in: CGRect(x: -self.size.width / 2, y: -self.size.height / 2, width: self.size.width, height: self.size.height))
let newImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return newImage
}
}
The usage is image.rotate(byDegrees: degree).
With Swift, you can rotate an image by doing:
let image = UIImage(named: "headerBack.png")!
let imageRotated = UIImage(cgImage: image.cgImage!, scale: 1, orientation: .upMirrored)
UIImage *img = [UIImage imageNamed:@"aaa.png"];
UIImage *image = [UIImage imageWithCGImage:img.CGImage scale:1.0 orientation:UIImageOrientationRight];
Another way of doing this would be to render the UIImage again using Core Graphics.
Once you have the context, use CGContextRotateCTM.
More info on this Apple Doc
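A minimal sketch of that Core Graphics approach, assuming sourceImage should be rotated 90° clockwise (the idea generalizes to other angles by adjusting the canvas size and the angle passed to CGContextRotateCTM):
CGSize size = sourceImage.size;
// The canvas is transposed because the image is turning 90 degrees.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(size.height, size.width), NO, sourceImage.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, size.height / 2, size.width / 2); // rotate around the canvas center
CGContextRotateCTM(ctx, M_PI_2);                             // positive angle appears clockwise in UIKit's flipped coordinates
[sourceImage drawInRect:CGRectMake(-size.width / 2, -size.height / 2, size.width, size.height)];
UIImage *rotated = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();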
Thanks Jason Crocker, this solved my problem. Only one minor correction: interchange height and width in both locations and no distortion occurs, i.e.,
UIGraphicsBeginImageContext(CGSizeMake(size.width, size.height));
[[UIImage imageWithCGImage:[sourceImage CGImage] scale:1.0 orientation:clockwise ? UIImageOrientationRight : UIImageOrientationLeft] drawInRect:CGRectMake(0,0,size.width,size.height)];
My problem could not be solved by CGContextRotateCTM, I don't know why. My issue is that I'm transmitting my image to a server and it was always displayed off by 90 degrees. You can easily test whether your images are going to work in the non-Apple world by copying the image into an MS Office program running on your Mac.
This is what I've done when I wanted to change the orientation of an image (rotate 90 degrees clockwise).
//Checking the orientation, i.e. whether the image taken from the camera is in portrait mode or not.
if (yourImage.imageOrientation == UIImageOrientationRight) // raw value 3
{
//Image is in portrait mode.
yourImage = [self image:yourImage RotatedByDegrees:90.0];
}
- (UIImage *)image:(UIImage *)image RotatedByDegrees:(CGFloat)degrees
{
CGFloat radians = degrees * (M_PI / 180.0);
UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0,0, image.size.height, image.size.width)];
CGAffineTransform t = CGAffineTransformMakeRotation(radians);
rotatedViewBox.transform = t;
CGSize rotatedSize = rotatedViewBox.frame.size;
UIGraphicsBeginImageContextWithOptions(rotatedSize, NO, [[UIScreen mainScreen] scale]);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(bitmap, rotatedSize.height / 2, rotatedSize.width / 2);
CGContextRotateCTM(bitmap, radians);
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-image.size.width / 2, -image.size.height / 2 , image.size.height, image.size.width), image.CGImage );
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
The rotated image may be 15 MB or more in size (in my experience), so you should compress it before using it. Otherwise you may run into a crash caused by memory pressure. The code I used for compressing is given below.
NSData *imageData = UIImageJPEGRepresentation(yourImage, 1);
//1 - it represents the quality of the image.
NSLog(#"Size of Image(bytes):%d",[imageData length]);
//Here I used a loop because my requirement was that the image size should be <= 4MB.
//So iterate more than once until the image size gets <= 4MB.
for(int loop=0;loop<100;loop++)
{
if([imageData length]>=4194304) //4194304 = 4MB in bytes.
{
imageData=UIImageJPEGRepresentation(yourImage, 0.3);
yourImage=[[UIImage alloc]initWithData:imageData];
}
else
{
NSLog(#"%d time(s) compressed.",loop);
break;
}
}
Now yourImage can be used anywhere.
Happy coding...

Mac OS X: Drawing into an offscreen NSGraphicsContext using CGContextRef C functions has no effect. Why?

Mac OS X 10.7.4
I am drawing into an offscreen graphics context created via +[NSGraphicsContext graphicsContextWithBitmapImageRep:].
When I draw into this graphics context using the NSBezierPath class, everything works as expected.
However, when I draw into this graphics context using the CGContextRef C functions, I see no results of my drawing. Nothing works.
For reasons I won't get into, I really need to draw using the CGContextRef functions (rather than the Cocoa NSBezierPath class).
My code sample is listed below. I am attempting to draw a simple "X". One stroke using NSBezierPath, one stroke using CGContextRef C functions. The first stroke works, the second does not. What am I doing wrong?
NSRect imgRect = NSMakeRect(0.0, 0.0, 100.0, 100.0);
NSSize imgSize = imgRect.size;
NSBitmapImageRep *offscreenRep = [[[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:NULL
pixelsWide:imgSize.width
pixelsHigh:imgSize.height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bitmapFormat:NSAlphaFirstBitmapFormat
bytesPerRow:0
bitsPerPixel:0] autorelease];
// set offscreen context
NSGraphicsContext *g = [NSGraphicsContext graphicsContextWithBitmapImageRep:offscreenRep];
[NSGraphicsContext setCurrentContext:g];
NSImage *img = [[[NSImage alloc] initWithSize:imgSize] autorelease];
CGContextRef ctx = [g graphicsPort];
// lock and draw
[img lockFocus];
// draw first stroke with Cocoa. this works!
NSPoint p1 = NSMakePoint(NSMaxX(imgRect), NSMinY(imgRect));
NSPoint p2 = NSMakePoint(NSMinX(imgRect), NSMaxY(imgRect));
[NSBezierPath strokeLineFromPoint:p1 toPoint:p2];
// draw second stroke with Core Graphics. This doesn't work!
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 0.0, 0.0);
CGContextAddLineToPoint(ctx, imgSize.width, imgSize.height);
CGContextClosePath(ctx);
CGContextStrokePath(ctx);
[img unlockFocus];
You don't specify how you are looking at the results. I assume you are looking at the NSImage img and not the NSBitmapImageRep offscreenRep.
When you call [img lockFocus], you are changing the current NSGraphicsContext to be a context to draw into img. So, the NSBezierPath drawing goes into img and that's what you see. The CG drawing goes into offscreenRep which you aren't looking at.
Instead of locking focus onto an NSImage and drawing into it, create an NSImage and add the offscreenRep as one of its reps.
NSRect imgRect = NSMakeRect(0.0, 0.0, 100.0, 100.0);
NSSize imgSize = imgRect.size;
NSBitmapImageRep *offscreenRep = [[[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:NULL
pixelsWide:imgSize.width
pixelsHigh:imgSize.height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bitmapFormat:NSAlphaFirstBitmapFormat
bytesPerRow:0
bitsPerPixel:0] autorelease];
// set offscreen context
NSGraphicsContext *g = [NSGraphicsContext graphicsContextWithBitmapImageRep:offscreenRep];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:g];
// draw first stroke with Cocoa
NSPoint p1 = NSMakePoint(NSMaxX(imgRect), NSMinY(imgRect));
NSPoint p2 = NSMakePoint(NSMinX(imgRect), NSMaxY(imgRect));
[NSBezierPath strokeLineFromPoint:p1 toPoint:p2];
// draw second stroke with Core Graphics
CGContextRef ctx = [g graphicsPort];
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 0.0, 0.0);
CGContextAddLineToPoint(ctx, imgSize.width, imgSize.height);
CGContextClosePath(ctx);
CGContextStrokePath(ctx);
// done drawing, so set the current context back to what it was
[NSGraphicsContext restoreGraphicsState];
// create an NSImage and add the rep to it
NSImage *img = [[[NSImage alloc] initWithSize:imgSize] autorelease];
[img addRepresentation:offscreenRep];
// then go on to save or view the NSImage
The solution by @Robin Stewart worked well for me. I was able to condense it to an NSImage extension.
extension NSImage {
convenience init(size: CGSize, actions: (CGContext) -> Void) {
self.init(size: size)
lockFocusFlipped(false)
actions(NSGraphicsContext.current!.cgContext)
unlockFocus()
}
}
Usage:
let image = NSImage(size: CGSize(width: 100, height: 100), actions: { ctx in
// Drawing commands here for example:
// ctx.setFillColor(.white)
// ctx.fill(pageRect)
})
I wonder why everyone writes such complicated code for drawing to an image. Unless you care for the exact bitmap representation of an image (and usually you don't!), there is no need to create one. You can just create a blank image and directly draw to it. In that case the system will create an appropriate bitmap representation (or maybe a PDF representation or whatever the system believes to be more suitable for drawing).
The documentation of the init method
- (instancetype)initWithSize:(NSSize)aSize
which has existed since Mac OS X 10.0 and still isn't deprecated, clearly says:
After using this method to initialize an image object, you are
expected to provide the image contents before trying to draw the
image. You might lock focus on the image and draw to the image or you
might explicitly add an image representation that you created.
So here's how I would have written that code:
NSRect imgRect = NSMakeRect(0.0, 0.0, 100.0, 100.0);
NSImage * image = [[NSImage alloc] initWithSize:imgRect.size];
[image lockFocus];
// draw first stroke with Cocoa
NSPoint p1 = NSMakePoint(NSMaxX(imgRect), NSMinY(imgRect));
NSPoint p2 = NSMakePoint(NSMinX(imgRect), NSMaxY(imgRect));
[NSBezierPath strokeLineFromPoint:p1 toPoint:p2];
// draw second stroke with Core Graphics
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGContextBeginPath(ctx);
CGContextMoveToPoint(ctx, 0.0, 0.0);
CGContextAddLineToPoint(ctx, imgRect.size.width, imgRect.size.height);
CGContextClosePath(ctx);
CGContextStrokePath(ctx);
[image unlockFocus];
That's all folks.
graphicsPort is actually void *:
@property (readonly) void *graphicsPort;
and documented as
The low-level, platform-specific graphics context represented
by the graphic port.
Which may be pretty much everything, but the final note says
In OS X, this is the Core Graphics context,
a CGContextRef object (opaque type).
This property has been deprecated in 10.10 in favor of the new property
@property (readonly) CGContextRef CGContext;
which is only available in 10.10 and later. If you have to support older systems, it's fine to still use graphicsPort.
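A small hedged sketch of how one might bridge the two, assuming the code still has to run on systems older than 10.10:
NSGraphicsContext *g = [NSGraphicsContext currentContext];
CGContextRef ctx;
if ([g respondsToSelector:@selector(CGContext)]) {
    ctx = g.CGContext;                    // 10.10 and later
} else {
    ctx = (CGContextRef)[g graphicsPort]; // earlier systems
}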
Swift 4: I use this code, which replicates the convenient API from UIKit (but runs on macOS):
public class UIGraphicsImageRenderer {
let size: CGSize
init(size: CGSize) {
self.size = size
}
func image(actions: (CGContext) -> Void) -> NSImage {
let image = NSImage(size: size)
image.lockFocusFlipped(true)
actions(NSGraphicsContext.current!.cgContext)
image.unlockFocus()
return image
}
}
Usage:
let renderer = UIGraphicsImageRenderer(size: imageSize)
let image = renderer.image { ctx in
// Drawing commands here
}
Here are 3 ways of drawing the same image (Swift 4).
The method suggested by @Mecki produces an image without blurring artefacts (like blurred curves), whereas the other two approaches below have scaling/antialiasing issues; these can be fixed by adjusting the CGContext settings (not included in this example).
public struct ImageFactory {
public static func image(size: CGSize, fillColor: NSColor, rounded: Bool = false) -> NSImage? {
let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
return drawImage(size: size) { context in
if rounded {
let radius = min(size.height, size.width)
let path = NSBezierPath(roundedRect: rect, xRadius: 0.5 * radius, yRadius: 0.5 * radius).cgPath
context.addPath(path)
context.clip()
}
context.setFillColor(fillColor.cgColor)
context.fill(rect)
}
}
}
extension ImageFactory {
private static func drawImage(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
return drawImageInLockedImageContext(size: size, drawingCalls: drawingCalls)
}
private static func drawImageInLockedImageContext(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
let image = NSImage(size: size)
image.lockFocus()
guard let context = NSGraphicsContext.current else {
image.unlockFocus()
return nil
}
drawingCalls(context.cgContext)
image.unlockFocus()
return image
}
// Has scaling or antialiasing issues, like blurred curves.
private static func drawImageInBitmapImageContext(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
guard let offscreenRep = NSBitmapImageRep(pixelsWide: Int(size.width), pixelsHigh: Int(size.height),
bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true,
isPlanar: false, colorSpaceName: .deviceRGB) else {
return nil
}
guard let context = NSGraphicsContext(bitmapImageRep: offscreenRep) else {
return nil
}
NSGraphicsContext.saveGraphicsState()
NSGraphicsContext.current = context
drawingCalls(context.cgContext)
NSGraphicsContext.restoreGraphicsState()
let img = NSImage(size: size)
img.addRepresentation(offscreenRep)
return img
}
// Has scaling or antialiasing issues, like blurred curves.
private static func drawImageInCGContext(size: CGSize, drawingCalls: (CGContext) -> Void) -> NSImage? {
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
guard let context = CGContext(data: nil, width: Int(size.width), height: Int(size.height), bitsPerComponent: 8,
bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else {
return nil
}
drawingCalls(context)
guard let image = context.makeImage() else {
return nil
}
return NSImage(cgImage: image, size: size)
}
}

UITableViewCell's imageView fit to 40x40

I use the same big images in a tableView and detailView.
I need the imageView to fit into 40x40 when an image is shown in the tableView, but it stretches over half of the screen. I played with several properties but got no positive result:
[cell.imageView setBounds:CGRectMake(0, 0, 50, 50)];
[cell.imageView setClipsToBounds:NO];
[cell.imageView setFrame:CGRectMake(0, 0, 50, 50)];
[cell.imageView setContentMode:UIViewContentModeScaleAspectFill];
I am using SDK 3.0 with the built-in "Cell Objects in Predefined Styles".
I put Ben's code as an extension in my NS-Extensions file so that I can tell any image to make a thumbnail of itself, as in:
UIImage *bigImage = [UIImage imageNamed:@"yourImage.png"];
UIImage *thumb = [bigImage makeThumbnailOfSize:CGSizeMake(50,50)];
Here is the .h file:
@interface UIImage (PhoenixMaster)
- (UIImage *) makeThumbnailOfSize:(CGSize)size;
@end
and then in the NS-Extensions.m file:
@implementation UIImage (PhoenixMaster)
- (UIImage *) makeThumbnailOfSize:(CGSize)size
{
UIGraphicsBeginImageContextWithOptions(size, NO, UIScreen.mainScreen.scale);
// draw scaled image into thumbnail context
[self drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage *newThumbnail = UIGraphicsGetImageFromCurrentImageContext();
// pop the context
UIGraphicsEndImageContext();
if(newThumbnail == nil)
NSLog(#"could not scale image");
return newThumbnail;
}
@end
I cache a thumbnail version since using large images scaled down on the fly uses too much memory.
Here's my thumbnail code:
- (UIImage *)thumbnailOfSize:(CGSize)size {
if( self.previewThumbnail )
return self.previewThumbnail; // returned cached thumbnail
UIGraphicsBeginImageContext(size);
// draw scaled image into thumbnail context
[self.preview drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage *newThumbnail = UIGraphicsGetImageFromCurrentImageContext();
// pop the context
UIGraphicsEndImageContext();
if(newThumbnail == nil)
NSLog(#"could not scale image");
self.previewThumbnail = newThumbnail;
return self.previewThumbnail;
}
Just make sure you properly clear the cached thumbnail if you change your original image (self.preview in my case).
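One hedged way to do that is in the setter of the original image (property names here follow the code above and are otherwise assumptions):
- (void)setPreview:(UIImage *)preview
{
    _preview = preview;
    self.previewThumbnail = nil; // force the thumbnail to be regenerated on the next request
}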
I have mine wrapped in a UIView and use this code:
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.autoresizingMask = UIViewAutoresizingFlexibleWidth |UIViewAutoresizingFlexibleHeight;
[self addSubview:imageView];
imageView.frame = self.bounds;
(self is the wrapper UIView, with the dimensions I want - I use AsyncImageView).
I thought Ben Lachman's suggestion of generating thumbnails in advance rather than on the fly was smart, so I adapted his code so it could handle a whole array and to make it more portable (no hard-coded property names).
- (NSArray *)arrayOfThumbnailsOfSize:(CGSize)size fromArray:(NSArray*)original {
NSMutableArray *temp = [NSMutableArray arrayWithCapacity:[original count]];
for(UIImage *image in original){
UIGraphicsBeginImageContext(size);
[image drawInRect:CGRectMake(0,0,size.width,size.height)];
UIImage *thumb = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[temp addObject:thumb];
}
return [NSArray arrayWithArray:temp];
}
you might be able to use this?
yourTableViewController.rowImage = [UIImage imageNamed:@"yourImage.png"];
and/or
cell.image = yourTableViewController.rowImage;
and if your images are already 40x40 then you shouldn't have to worry about setting bounds and such... but I'm also new to this, so I wouldn't know; I haven't played around with table view row/cell images much.
hope this helps.
I was able to make this work using Interface Builder and a UITableViewCell. You can set the "Mode" property for an image view to "Aspect Fit". I'm not sure how to do this programmatically.
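For reference, the programmatic equivalent of that Interface Builder setting, applied to whichever UIImageView the cell uses (a sketch, not specific to the built-in cell.imageView layout):
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.clipsToBounds = YES; // keep the scaled image inside the 40x40 frame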
Try setting UIImageView.autoresizesSubviews and/or UIImageView.contentStretch.