I have a method where I'm taking a screenshot, but there are two problems with it. For these two lines:
CGSize displaySize = [[CCDirector sharedDirector] displaySize];
CGSize winSize = [[CCDirector sharedDirector] winSize];
I get the warnings "Invalid initializer" for displaySize, and also "CCDirector may not respond to '-displaySize'".
Oh, and I'm using cocos2d.
This is the entire method:
-(UIImage *)screenshot {
CGSize displaySize = [[CCDirector sharedDirector] displaySize];
CGSize winSize = [[CCDirector sharedDirector] winSize];
GLuint bufferLength = displaySize.width * winSize.height * 4;
GLubyte *buffer = (GLubyte *) malloc (bufferLength);
glReadPixels(0, 0, displaySize.width, displaySize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * displaySize.width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef iref = CGImageCreate (displaySize.width, displaySize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
uint32_t *pixels = (uint32_t *) malloc (bufferLength);
CGContextRef context = CGBitmapContextCreate(pixels, winSize.width, winSize.height, 8, winSize.width * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextTranslateCTM(context, 0, displaySize.height);
CGContextScaleCTM(context, 1.0f, -1.0f);
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, displaySize.width, displaySize.height), iref);
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage *image = [[[UIImage alloc] initWithCGImage:imageRef] autorelease];
NSString *file = @"GameOver_Screenshot.png";
NSString *directory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *path = [directory stringByAppendingPathComponent:file];
[UIImagePNGRepresentation(image) writeToFile:path atomically:YES];
return image;
}
Looks like "CCDirecrot.h" is not imported to the source file
EDIT:
What version of cocos2d are you using?
http://www.cocos2d-iphone.org/api-ref/0.99.5/interface_c_c_director.html
There is no displaySize method there, for example.
Also try writing this:
CCDirector *director = [CCDirector sharedDirector];
and check with the debugger that director is valid.
If director is valid, then check with the debugger what values end up in winSize and displaySize, or where the application is crashing.
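If your cocos2d version really has no displaySize, one hedged fix is to derive every dimension from winSize instead. A minimal sketch (winSize is in points, so on Retina hardware the actual pixel buffer may be larger):
CGSize winSize = [[CCDirector sharedDirector] winSize];
GLuint bufferLength = winSize.width * winSize.height * 4; // 4 bytes per RGBA pixel
GLubyte *buffer = (GLubyte *) malloc(bufferLength);
glReadPixels(0, 0, winSize.width, winSize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);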
Related
I can't solve this problem: I load an image, convert it to a CGImageRef, then try to get a bitmap context and render it on the screen.
NSURL *imageFileURL = [NSURL fileURLWithPath:stringIMG];
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)imageFileURL, NULL);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
NSInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
NSInteger bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
NSInteger bytesPerRow = CGImageGetBytesPerRow(imageRef);
NSInteger width = CGImageGetWidth(imageRef);
NSInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorspace = CGImageGetColorSpace(imageRef);
size_t rawData = bytesPerRow*height;
unsigned char *data = malloc(rawData);
memset(data, 0, rawData);
CGContextRef context = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorspace, bitmapInfo);
CGImageRef imageRef2 = CGBitmapContextCreateImage(context);
// CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef2);
UIImage *result = [UIImage imageWithCGImage:imageRef2]; // if I do it like this I get a white, empty screen
UIImageView *image = [[UIImageView alloc] initWithImage:result];
[self.view addSubview:image]; // if I do it like this I get a black rectangle on the white screen
I have no idea. I checked with a breakpoint that the context is not null. I don't know what I should do. Can anyone help me?
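One hedged guess at the cause: nothing is ever drawn into the bitmap context before imageRef2 is created (the commented-out line draws imageRef2, the empty snapshot, rather than the original imageRef), so the context holds only the zeroed bytes from memset. The missing step would be, roughly:
// Draw the source image into the context, then snapshot the context.
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGImageRef imageRef2 = CGBitmapContextCreateImage(context);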
A category added to UIImage for resizing using Core Graphics.
UIImage+Resizing.h
#import <UIKit/UIKit.h>
@interface UIImage (Resizing)
-(UIImage*)resizedImageWithSize:(CGSize)size;
@end
UIImage+Resizing.m
#import "UIImage+Resizing.h"
@implementation UIImage (Resizing)
-(UIImage*)resizedImageWithSize:(CGSize)size {
CGImageRef cgImage = [self CGImage];
size_t bitsPerComponent = CGImageGetBitsPerComponent(cgImage);
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
CGColorSpaceRef colorSpace = CGImageGetColorSpace(cgImage);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(cgImage);
CGContextRef context = CGBitmapContextCreate(nil, size.width, size.height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), cgImage);
CGImageRef resizedImageRef = CGBitmapContextCreateImage(context);
UIImage *resizedImage = [UIImage imageWithCGImage:resizedImageRef];
CFRelease(resizedImageRef);
CFRelease(context);
return resizedImage;
}
@end
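A quick usage sketch (the image name here is hypothetical):
#import "UIImage+Resizing.h"
UIImage *original = [UIImage imageNamed:@"photo.png"]; // hypothetical asset
UIImage *thumbnail = [original resizedImageWithSize:CGSizeMake(100.0f, 100.0f)];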
I have a CGImageRef object (variable quartzImage). How do I convert this object to PNG data for the web, in the format
"data:image/png;base64," + base64 image data
My code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
NSLog(@"%@", quartzImage);
}
If you already have a CGImageRef (named quartzImage in your code), then you do not need to create an NSImage. Create an NSBitmapImageRep directly. And you should in no case use the lockFocus method: it is meant for images that will be drawn to the screen, and therefore usually creates images with a resolution of 72 dpi (144 dpi for Retina screens). Or do you want to create images for the web with the properties of your screen? Try this:
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCGImage:quartzImage];
NSData *repData = [bitmapRep representationUsingType:NSPNGFileType properties:nil];
NSString *base64String = [repData base64EncodedStringWithOptions:0];
This base64… method is not available before OS X 10.9. In that case you should use base64Encoding instead.
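To produce the exact "data:image/png;base64," string asked for, the pieces can be glued together like this (a small sketch):
// Prefix the base64 payload with the data-URI header.
NSString *dataURI = [@"data:image/png;base64," stringByAppendingString:base64String];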
NSImage *image = [[[NSImage alloc] initWithCGImage:imageRef size:NSZeroSize] autorelease];
[image lockFocus];
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, image.size.width, image.size.height)];
[image unlockFocus];
NSData *imageData = [bitmapRep representationUsingType:NSPNGFileType properties:nil];
NSString *base64String = [imageData base64EncodedStringWithOptions:0];
I want to change the color of my image, but I don't want to change its alpha.
I am using the following code to shift the color toward blue.
But I want to set all pixels in the array to one particular RGB value,
for example R=116, G=170, B=243.
CGImageRef sourceImage = ImageView_Test.image.CGImage;
CFDataRef theData;
theData = CGDataProviderCopyData(CGImageGetDataProvider(sourceImage));
UInt8 *pixelData = (UInt8 *) CFDataGetBytePtr(theData);
int red = 0;
int green = 1;
int blue = 2;
int dataLength = CFDataGetLength(theData);
for (int index = 0; index < dataLength; index += 4)
{
if (pixelData[index + blue] - 80 > 0)
{
pixelData[index + red] = pixelData[index + blue] - 139;
pixelData[index + green] = pixelData[index + blue] - 85;
}
else
{
pixelData[index + green] = 0;
pixelData[index + red] = 0;
}
}
CGContextRef context;
context = CGBitmapContextCreate(pixelData,
CGImageGetWidth(sourceImage),
CGImageGetHeight(sourceImage),
8,
CGImageGetBytesPerRow(sourceImage),
CGImageGetColorSpace(sourceImage),
kCGImageAlphaPremultipliedLast);
CGImageRef newCGImage = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newCGImage];
ImageView_Test.image = newImage;
CGContextRelease(context);
CFRelease(theData);
CGImageRelease(newCGImage);
I am using the following method to change the color of a UIImage without affecting its alpha.
-(UIImage *)didImageColorchanged:(NSString *)name withColor:(UIColor *)color
{
UIImage *img = [UIImage imageNamed:name];
UIGraphicsBeginImageContext(img.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[color setFill];
CGContextTranslateCTM(context, 0, img.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextSetBlendMode(context, kCGBlendModeColorBurn);
CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
CGContextDrawImage(context, rect, img.CGImage);
CGContextClipToMask(context, rect, img.CGImage);
CGContextAddRect(context, rect);
CGContextDrawPath(context,kCGPathFill);
UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return coloredImg;
}
Ex:
resultView.image = [self didImageColorchanged:@"xyz.png" withColor:[UIColor redColor]];
You can just use:
// load image
UIImage *image = [UIImage imageNamed:@"test.png"];
CGImageRef imageRef = image.CGImage;
// CGDataProviderCopyData returns an owned copy of the pixel data; make a
// mutable copy so the bytes can be edited safely, then release the original
CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
NSMutableData *data = [NSMutableData dataWithData:(NSData *)dataRef];
CFRelease(dataRef);
char *pixels = (char *)[data mutableBytes];
// this is where you manipulate the individual pixels
// assumes a 4 byte pixel consisting of rgb and alpha
// for PNGs without transparency use i+=3 and remove int a
for(int i = 0; i < [data length]; i += 4)
{
int r = i;
int g = i+1;
int b = i+2;
int a = i+3;
pixels[r] = 0; // eg. remove red
pixels[g] = pixels[g];
pixels[b] = pixels[b];
pixels[a] = pixels[a];
}
// create a new image from the modified pixel data
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data); // retains the pixel data
CGImageRef newImageRef = CGImageCreate (
width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorspace,
bitmapInfo,
provider,
NULL,
false,
kCGRenderingIntentDefault
);
// the modified image
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
// cleanup: pixels belongs to data, so there is no free() here, and imageRef
// is owned by image, so it is not released either
CGColorSpaceRelease(colorspace);
CGDataProviderRelease(provider);
CGImageRelease(newImageRef);
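To apply the particular color the earlier question asked for (R=116, G=170, B=243) while leaving alpha untouched, the loop body could instead be (a sketch, assuming the same RGBA byte order):
pixels[r] = 116; // fixed red
pixels[g] = 170; // fixed green
pixels[b] = 243; // fixed blue
// pixels[a] is left alone, so transparency is preserved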
I have an NSImage and I want to make an OpenGL texture from it, so I do the following:
someNSData = [someNSImage TIFFRepresentation];
someNSBitmapImageRepData = [[NSBitmapImageRep alloc] initWithData:someNSData];
If someNSImage is a .png it works OK, but if it is a .jpg the texture comes out broken. (Screenshots of the correct .png result and the broken .jpg result were attached here.)
What's wrong?
Try this
@implementation NSImage(NSImageToCGImageRef)
- (NSBitmapImageRep *)bitmapImageRepresentation
{
NSBitmapImageRep *ret = (NSBitmapImageRep *)[self bestRepresentationForDevice:nil];
if(![ret isKindOfClass:[NSBitmapImageRep class]])
{
ret = nil;
for(NSBitmapImageRep *rep in [self representations])
if([rep isKindOfClass:[NSBitmapImageRep class]])
{
ret = rep;
break;
}
}
// if ret is nil we create a new representation
if(ret == nil)
{
NSSize size = [self size];
size_t width = size.width;
size_t height = size.height;
size_t bitsPerComp = 32;
size_t bytesPerPixel = (bitsPerComp / CHAR_BIT) * 4;
size_t bytesPerRow = bytesPerPixel * width;
size_t totalBytes = height * bytesPerRow;
NSMutableData *data = [NSMutableData dataWithBytesNoCopy:calloc(totalBytes, 1) length:totalBytes freeWhenDone:YES];
CGColorSpaceRef space = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef ctx = CGBitmapContextCreate([data mutableBytes], width, height, bitsPerComp, bytesPerRow, space, kCGBitmapFloatComponents | kCGImageAlphaPremultipliedLast);
if(ctx != NULL)
{
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:ctx flipped:[self isFlipped]]];
[self drawAtPoint:NSZeroPoint fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
CGImageRef img = CGBitmapContextCreateImage(ctx);
ret = [[[NSBitmapImageRep alloc] initWithCGImage:img] autorelease];
[self addRepresentation:ret];
CFRelease(img);
CFRelease(space);
CGContextRelease(ctx);
}
else NSLog(@"%@ Couldn't create CGBitmapContext", self);
}
return ret;
}
@end
//in your code
NSBitmapImageRep *tempRep = [image bitmapImageRepresentation];
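From there, a hedged sketch of the actual texture upload. This assumes the rep came back as plain 8-bit RGBA; the float-component fallback path above would need a different format, and row padding is ignored here:
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows may not be 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             (GLsizei)[tempRep pixelsWide], (GLsizei)[tempRep pixelsHigh],
             0, GL_RGBA, GL_UNSIGNED_BYTE, [tempRep bitmapData]);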
The width and the height of a texture must be a power of 2, e.g. 128, 256, 512, 1024, etc.
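If that power-of-two constraint is the problem, a small helper like this (a sketch) rounds a dimension up before building the texture:
// Round n up to the next power of two, e.g. 600 -> 1024.
static size_t nextPowerOfTwo(size_t n) {
    size_t p = 1;
    while (p < n) p <<= 1;
    return p;
}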
It looks like your image format isn't 32 bit.
I am using this function to create screenshots of my iPad app. I am using the Sparrow Framework in my project; SPDisplayObject uses OpenGL ES based rendering.
@implementation SPDisplayObject (ScreenshotFromSPDisplayObject)
- (UIImage *)getImageScreenshot{
int WIDTH = 1024;
int HEIGHT = 768;
CGSize size = CGSizeMake(WIDTH,HEIGHT);
//Create a buffer for the pixels
GLuint bufferLength = size.width * size.height * 4;
GLubyte *buffer = (GLubyte *) malloc(bufferLength);
//Read Pixels from OpenGL
glReadPixels(0,0,size.width,size.height,GL_RGBA,GL_UNSIGNED_BYTE,buffer);
//Make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
//Configure image
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * size.width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef iref = CGImageCreate(size.width,size.height,bitsPerComponent,bitsPerPixel,bytesPerRow,colorSpaceRef,bitmapInfo,provider,NULL,NO,renderingIntent);
uint32_t *pixels = (uint32_t *)malloc(bufferLength);
CGContextRef context = CGBitmapContextCreate(pixels, WIDTH, HEIGHT, 8, WIDTH*4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextTranslateCTM(context,0, size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, size.width, size.height), iref);
UIImage* screenshot = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
UIGraphicsEndImageContext();
//free memory
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
CGImageRelease(iref);
CGContextRelease(context);
free(buffer);
free(pixels);
return screenshot;
}
@end
I use it like this from a UIViewController:
@interface
UIImageView *screenShot;
UIImage *tempImage;
-(void) deactivePage
{
// attach screenshot
tempImage = [self.stage getImageScreenshot];
screenShot = [[UIImageView alloc] initWithFrame:CGRectMake(0,0,1024,768)];
screenShot.image = tempImage;
[self.view addSubview:screenShot];
}
- (void)dealloc
{
screenShot.image = nil;
[screenShot removeFromSuperview];
[screenShot release];
[super dealloc];
}
The UIViewController is released and deallocated approx. 5 seconds after the deactivePage function is called.
The screenshot is used for a view transition.
Taking screenshots works like a charm, but with every screenshot my app grows by around 10 MB, so I can do this around 15 times until the app crashes.
So where is the leak? I am stuck... :-(
In the getImageScreenshot function you do this:
UIImage* screenshot = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
which creates a CGImageRef and then creates an (autoreleased) UIImage from it.
What happens here is that this CGImageRef remains alive and is never released, so it's leaking.
What you should do, instead, is this:
CGImageRef myCGImage = CGBitmapContextCreateImage(context);
UIImage* screenshot = [UIImage imageWithCGImage:myCGImage];
CGImageRelease(myCGImage);
Have you tried looking at it with Instruments (Leaks or Heapshots)? You should see these CGImageRef objects still alive.
I don't see where you deallocate tempImage in the UIViewController when it's going down.
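Under manual reference counting, a hedged sketch of the missing ownership handling would be to retain the screenshot when storing it in the ivar and balance that in dealloc:
// In deactivePage: take ownership of the autoreleased screenshot.
tempImage = [[self.stage getImageScreenshot] retain];
// In dealloc: balance the retain.
- (void)dealloc
{
    [tempImage release];
    tempImage = nil;
    screenShot.image = nil;
    [screenShot removeFromSuperview];
    [screenShot release];
    [super dealloc];
}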