I can't solve this problem: I load an image, convert it to a CGImageRef, and then try to get a bitmap context and render it on the screen.
NSURL *imageFileURL = [NSURL fileURLWithPath:stringIMG];
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)imageFileURL, NULL);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
NSInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
NSInteger bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
NSInteger bytesPerRow = CGImageGetBytesPerRow(imageRef);
NSInteger width = CGImageGetWidth(imageRef);
NSInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorspace = CGImageGetColorSpace(imageRef);
size_t dataSize = bytesPerRow * height;
unsigned char *data = malloc(dataSize);
memset(data, 0, dataSize);
CGContextRef context = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorspace, bitmapInfo);
CGImageRef imageRef2 = CGBitmapContextCreateImage(context);
// CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef2);
UIImage *result = [UIImage imageWithCGImage:imageRef2]; // if I do it like this I get a white empty screen
UIImageView *imageView = [[UIImageView alloc] initWithImage:result];
[self.view addSubview:imageView]; // if I do it like this I get a black rectangle on the white screen
I have no idea what's wrong. I checked with a breakpoint that the context is not NULL. What should I do? Can anyone help me?
Add a category to UIImage that does the resizing with Core Graphics.
UIImage+Resizing.h
#import <UIKit/UIKit.h>
@interface UIImage (Resizing)
- (UIImage *)resizedImageWithSize:(CGSize)size;
@end
UIImage+Resizing.m
#import "UIImage+Resizing.h"
@implementation UIImage (Resizing)
- (UIImage *)resizedImageWithSize:(CGSize)size {
CGImageRef cgImage = [self CGImage];
size_t bitsPerComponent = CGImageGetBitsPerComponent(cgImage);
CGColorSpaceRef colorSpace = CGImageGetColorSpace(cgImage);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(cgImage);
// pass 0 for bytesPerRow so Core Graphics computes it for the target width;
// reusing the source image's bytesPerRow breaks when the size changes
CGContextRef context = CGBitmapContextCreate(nil, size.width, size.height, bitsPerComponent, 0, colorSpace, bitmapInfo);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), cgImage);
CGImageRef resizedImageRef = CGBitmapContextCreateImage(context);
UIImage *resizedImage = [UIImage imageWithCGImage:resizedImageRef];
CGImageRelease(resizedImageRef);
CGContextRelease(context);
return resizedImage;
}
@end
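A minimal usage sketch (assuming the category header is imported; stringIMG is the path variable from the question):
UIImage *original = [UIImage imageWithContentsOfFile:stringIMG];
UIImage *resized = [original resizedImageWithSize:CGSizeMake(100.0f, 100.0f)];
UIImageView *imageView = [[UIImageView alloc] initWithImage:resized];
[self.view addSubview:imageView];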
I have a custom class Frame that gets image data from multiple sources. The class can generate a UIImage. The problem is that when a generated UIImage is drawn on the screen, the app crashes with EXC_BAD_ACCESS.
The call stack is empty; it ends at start -> main -> UIApplicationMain.
I think it has something to do with CGImageCreate and the pointer not being retained somehow, but I have a hard time figuring out why. The Xcode debugger shows that the UIImage exists right before it's added as a subview through UIImageView, but right after that it crashes. I've also tried drawing it directly to a custom UIView in drawRect, and it crashes with EXC_BAD_ACCESS at drawRect.
Any thoughts would be greatly appreciated!
Here's the code:
UIImage *image = [UIImage imageNamed:@"test.png"];
// To NSData
CGImageRef imageRef = image.CGImage;
CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
const unsigned char *pixels = CFDataGetBytePtr(dataRef);
const signed long length = CFDataGetLength(dataRef);
NSData *data = [NSData dataWithBytes:pixels length:length];
CGFloat width = CGImageGetWidth(imageRef);
CGFloat height = CGImageGetHeight(imageRef);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
// NSData to UIImage
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGDataProviderRef dataProviderRef =
CGDataProviderCreateWithData(NULL, data.bytes, data.length, NULL);
CGImageRef imageRef2 =
CGImageCreate(width, height, 8, 32, 4 * width, colorSpaceRef, bitmapInfo,
dataProviderRef, NULL, NO, kCGRenderingIntentDefault);
UIImage *image2 = [UIImage imageWithCGImage:imageRef2];
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(dataProviderRef);
CGImageRelease(imageRef);
//Show UIImage
UIImageView *imageView = [[UIImageView alloc] initWithFrame:self.view.frame];
imageView.image = image2;
//Breakpoint here shows that `image2` is equal to `image`
[self.view addSubview:imageView];
//EXC_BAD_ACCESS
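For what it's worth, two ownership issues stand out above: imageRef comes from image.CGImage and is not owned by this code, so the CGImageRelease(imageRef) over-releases it (a classic cause of a later EXC_BAD_ACCESS), and dataRef from CGDataProviderCopyData is never released. A sketch of those lines with the Core Foundation ownership rules balanced:
CGImageRef imageRef = image.CGImage; // borrowed from the UIImage: do not release
CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(imageRef)); // +1 (Copy rule)
NSData *data = [NSData dataWithBytes:CFDataGetBytePtr(dataRef) length:CFDataGetLength(dataRef)];
CFRelease(dataRef); // balances the Copy
// ... build dataProviderRef and imageRef2 exactly as above ...
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(dataProviderRef);
CGImageRelease(imageRef2); // owned via Create; safe once the UIImage wraps it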
I have a CGImageRef object (the variable quartzImage). How do I convert this object to PNG data for the web, in the format:
"data:image/png;base64,"+ base64 data image
my code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
NSLog(@"%@", quartzImage);
}
If you already have a CGImageRef (named quartzImage in your code), then you do not need to create an NSImage. Create an NSBitmapImageRep directly. And you should in no case use the lockFocus method: it is meant for images that will be drawn to the screen, and therefore usually creates images at screen resolution, 72 dpi (144 dpi on Retina screens). Or do you want to create images for the web with the properties of your screen? Try this:
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCGImage:quartzImage];
NSData *repData = [bitmapRep representationUsingType:NSPNGFileType properties:nil];
NSString *base64String = [repData base64EncodedStringWithOptions:0];
This base64EncodedStringWithOptions: method is not available before OS X 10.9; on earlier systems you should use base64Encoding instead.
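A sketch of a version-safe variant of the encoding line above (base64Encoding is the older NSData method the note refers to, deprecated as of 10.9):
NSString *base64String = nil;
if ([repData respondsToSelector:@selector(base64EncodedStringWithOptions:)]) {
    base64String = [repData base64EncodedStringWithOptions:0]; // OS X 10.9 and later
} else {
    base64String = [repData base64Encoding]; // earlier systems
}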
For comparison, the lockFocus-based variant (not recommended, for the reasons above) would be:
NSImage *image = [[NSImage alloc] initWithCGImage:quartzImage size:NSZeroSize];
[image lockFocus];
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, image.size.width, image.size.height)];
[image unlockFocus];
NSData *imageData = [bitmapRep representationUsingType:NSPNGFileType properties:nil];
NSString *base64String = [imageData base64EncodedStringWithOptions:0];
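Either way, the data URI the question asks for is just the fixed prefix plus the encoded string:
NSString *dataURI = [@"data:image/png;base64," stringByAppendingString:base64String];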
I want to change the color of my image but don't want to change its alpha.
I am using the following code, which shifts the colors toward blue.
But what I really want is to set every pixel in the array to one particular RGB value, for example R=116, G=170, B=243.
CGImageRef sourceImage = ImageView_Test.image.CGImage;
CFDataRef theData;
theData = CGDataProviderCopyData(CGImageGetDataProvider(sourceImage));
UInt8 *pixelData = (UInt8 *) CFDataGetBytePtr(theData);
int red = 0;
int green = 1;
int blue = 2;
int dataLength = CFDataGetLength(theData);
for (int index = 0; index < dataLength; index += 4)
{
if (pixelData[index + blue] - 80 > 0)
{
pixelData[index + red] = pixelData[index + blue] - 139;
pixelData[index + green] = pixelData[index + blue] - 85;
}
else
{
pixelData[index + green] = 0;
pixelData[index + red] = 0;
}
}
CGContextRef context;
context = CGBitmapContextCreate(pixelData,
CGImageGetWidth(sourceImage),
CGImageGetHeight(sourceImage),
8,
CGImageGetBytesPerRow(sourceImage),
CGImageGetColorSpace(sourceImage),
kCGImageAlphaPremultipliedLast);
CGImageRef newCGImage = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newCGImage];
ImageView_Test.image = newImage;
CGContextRelease(context);
CFRelease(theData);
CGImageRelease(newCGImage);
I am using the following method to change the color of a UIImage without affecting its alpha:
-(UIImage *)didImageColorchanged:(NSString *)name withColor:(UIColor *)color
{
UIImage *img = [UIImage imageNamed:name];
UIGraphicsBeginImageContext(img.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[color setFill];
CGContextTranslateCTM(context, 0, img.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextSetBlendMode(context, kCGBlendModeColorBurn);
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
CGContextDrawImage(context, rect, img.CGImage);
CGContextClipToMask(context, rect, img.CGImage);
CGContextAddRect(context, rect);
CGContextDrawPath(context,kCGPathFill);
UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return coloredImg;
}
Example:
resultView.image = [self didImageColorchanged:@"xyz.png" withColor:[UIColor redColor]];
You can just use:
// load image
UIImage *image = [UIImage imageNamed:@"test.png"];
CGImageRef imageRef = image.CGImage;
NSData *data = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
char *pixels = (char *)[data bytes];
// this is where you manipulate the individual pixels
// assumes a 4 byte pixel consisting of rgb and alpha
// for PNGs without transparency use i+=3 and remove int a
for(int i = 0; i < [data length]; i += 4)
{
int r = i;
int g = i+1;
int b = i+2;
int a = i+3;
pixels[r] = 0; // eg. remove red
pixels[g] = pixels[g];
pixels[b] = pixels[b];
pixels[a] = pixels[a];
}
// create a new image from the modified pixel data
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
// CGDataProviderCreateWithCFData retains the data, so the pixel buffer stays
// valid for as long as the new image needs it
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
CGImageRef newImageRef = CGImageCreate (
width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorspace,
bitmapInfo,
provider,
NULL,
false,
kCGRenderingIntentDefault
);
// the modified image
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
// cleanup: data came from CGDataProviderCopyData, so release it rather than
// free()ing its interior bytes pointer; imageRef belongs to the UIImage and
// must not be released here
[data release];
CGColorSpaceRelease(colorspace);
CGDataProviderRelease(provider);
CGImageRelease(newImageRef);
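Applied back to the question's goal of forcing every pixel to R=116, G=170, B=243 while preserving alpha, the loop body would become something like this (a sketch, assuming the same 4-byte RGBA layout; unsigned char avoids overflow for component values above 127):
unsigned char *p = (unsigned char *)pixels;
for (int i = 0; i < [data length]; i += 4)
{
    p[i] = 116;     // red
    p[i + 1] = 170; // green
    p[i + 2] = 243; // blue
    // p[i + 3] (alpha) is left untouched
}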
I have a method where I'm taking a screenshot, but there are two problems with it. For these two lines
CGSize displaySize = [[CCDirector sharedDirector] displaySize];
CGSize winSize = [[CCDirector sharedDirector] winSize];
I get the warning "invalid initializer" for displaySize, and also "CCDirector may not respond to '-displaySize'".
Oh and I'm using cocos2d...
This is the entire method
-(UIImage *)screenshot {
CGSize displaySize = [[CCDirector sharedDirector] displaySize];
CGSize winSize = [[CCDirector sharedDirector] winSize];
GLuint bufferLength = displaySize.width * winSize.height * 4;
GLubyte *buffer = (GLubyte *) malloc (bufferLength);
glReadPixels(0, 0, displaySize.width, displaySize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * displaySize.width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef iref = CGImageCreate (displaySize.width, displaySize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
uint32_t *pixels = (uint32_t *) malloc (bufferLength);
CGContextRef context = CGBitmapContextCreate(pixels, winSize.width, winSize.height, 8, winSize.width * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextTranslateCTM(context, 0, displaySize.height);
CGContextScaleCTM(context, 1.0f, -1.0f);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, displaySize.width, displaySize.height), iref);
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage *image = [[[UIImage alloc] initWithCGImage:imageRef] autorelease];
NSString *file = @"GameOver_Screenshot.png";
NSString *directory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *path = [directory stringByAppendingPathComponent:file];
[UIImagePNGRepresentation(image) writeToFile:path atomically:YES];
return image;
}
Looks like "CCDirector.h" is not imported in the source file.
EDIT:
What version of cocos are you using?
http://www.cocos2d-iphone.org/api-ref/0.99.5/interface_c_c_director.html
There is no displaySize method there, for example.
Also try writing this:
CCDirector *director = [CCDirector sharedDirector];
and check in the debugger that director is valid.
If director is valid, then check in the debugger what values end up in winSize and displaySize, and where exactly the application crashes.
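For example, something along these lines (a sketch; winSize exists in every cocos2d version, while displaySize may not):
CCDirector *director = [CCDirector sharedDirector];
CGSize winSize = [director winSize];
NSLog(@"winSize = %@", NSStringFromCGSize(winSize));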
I'm trying to create an image mask from a composite of two existing images.
First I create the composite, which consists of a small image (the masking image) and a larger image that is the same size as the background:
UIImage *baseTextureImage = [UIImage imageNamed:@"background.png"];
UIImage *maskImage = [UIImage imageNamed:@"my_mask.jpg"];
UIImage *shapesBase = [UIImage imageNamed:@"largerimage.jpg"];
UIImage *maskImageFull;
CGSize finalSize = CGSizeMake(480.0, 320.0);
UIGraphicsBeginImageContext(finalSize);
[shapesBase drawInRect:CGRectMake(0, 0, 480, 320)];
[maskImage drawInRect:CGRectMake(150, 50, 250, 250)];
maskImageFull = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I can output this UIImage (maskImageFull) and it looks right: it is background-sized, with a white background and my mask object in black, in the right place on the screen.
I then pass the maskImageFull UIImage through this:
CGImageRef maskRef = [maskImageFull CGImage];
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
CGImageGetHeight(maskRef),
CGImageGetBitsPerComponent(maskRef),
CGImageGetBitsPerPixel(maskRef),
CGImageGetBytesPerRow(maskRef),
CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask([image CGImage], mask);
UIImage *retImage= [UIImage imageWithCGImage:masked];
The problem is that retImage is all black. If I send a pre-made UIImage in as the mask it works fine; it only breaks when I try to build the mask from multiple images.
I thought it was a colorspace thing but couldn't seem to fix it. Any help is much appreciated!
I tried the same thing with CGImageCreateWithMask, and got the same result. The solution I found was to use CGContextClipToMask instead:
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
colorSpace = CGColorSpaceCreateDeviceRGB();
// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate (NULL, targetSize.width, targetSize.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
// free the rgb colorspace
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext == NULL)
return nil;
CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
// image is retained by the property setting above, so we can
// release the original
CGImageRelease(mainViewContentBitmapContext);
// return the image
return theImage;
- (UIImage *) maskImage:(UIImage *)image {
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
UIImage *maskImage = [UIImage imageNamed:@"MaskFinal.png"];
CGImageRef maskImageRef = [maskImage CGImage];
// create a bitmap graphics context the size of the image
CGContextRef mainViewContentContext = CGBitmapContextCreate (NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
// the context keeps its own reference, so the color space can be released here
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext == NULL)
return nil;
CGFloat ratio = 0;
ratio = maskImage.size.width/ image.size.width;
if(ratio * image.size.height < maskImage.size.height) {
ratio = maskImage.size.height/ image.size.height;
}
CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
CGRect rect2 = {{-((image.size.width*ratio)-maskImage.size.width)/2 , -((image.size.height*ratio)-maskImage.size.height)/2}, {image.size.width*ratio, image.size.height*ratio}};
CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
UIImage *theImage = [UIImage imageWithCGImage:newImage];
CGImageRelease(newImage);
// return the image
return theImage;
}
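Hypothetical usage, assuming the method lives on the object that owns the image view and that photo.png exists in the bundle:
imageView.image = [self maskImage:[UIImage imageNamed:@"photo.png"]];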
The image to be masked MUST have an alpha channel; the masking call will not create one for you.
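A minimal sketch of adding an alpha channel by redrawing (source here stands for whatever UIImage you intend to mask):
CGImageRef cg = source.CGImage;
size_t w = CGImageGetWidth(cg);
size_t h = CGImageGetHeight(cg);
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
// kCGImageAlphaPremultipliedLast guarantees the result has an alpha channel
CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, 0, rgb, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cg);
CGImageRef withAlpha = CGBitmapContextCreateImage(ctx);
UIImage *maskable = [UIImage imageWithCGImage:withAlpha];
CGImageRelease(withAlpha);
CGContextRelease(ctx);
CGColorSpaceRelease(rgb);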