Drawing to a bitmap context - cocoa-touch

I am trying to draw to a bitmap context but coming up empty. I believe I'm creating things properly, because I can initialize the context, draw a few things, then create an image from it and draw that image. What I cannot do is trigger any further drawing on the context after initialization. I'm not sure if I'm missing some common practice that means I can only draw to it at certain times, or whether I have to do something else. Here is what I do, below.
I copied the helper function provided by Apple, with one modification to how the color space is obtained because the original wasn't compiling (this is for iPad, I don't know if that matters):
CGContextRef MyCreateBitmapContext (int pixelsWide, int pixelsHigh)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (pixelsWide * 4);                 // 1
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB(); // was CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB); // 2
    bitmapData = malloc( bitmapByteCount );               // 3
    if (bitmapData == NULL)
    {
        fprintf (stderr, "Memory not allocated!");
        return NULL;
    }
    context = CGBitmapContextCreate (bitmapData,          // 4
                                     pixelsWide,
                                     pixelsHigh,
                                     8,                   // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     kCGImageAlphaPremultipliedLast);
    if (context == NULL)
    {
        free (bitmapData);                                // 5
        fprintf (stderr, "Context not created!");
        return NULL;
    }
    CGColorSpaceRelease( colorSpace );                    // 6
    return context;                                       // 7
}
I initialize it in my init method below with a few sample draws just to be sure it looks right:
mContext = MyCreateBitmapContext (rect.size.width, rect.size.height);
// sample fills
CGContextSetRGBFillColor (mContext, 1, 0, 0, 1);
CGContextFillRect (mContext, CGRectMake (0, 0, 200, 100 ));
CGContextSetRGBFillColor (mContext, 0, 0, 1, .5);
CGContextFillRect (mContext, CGRectMake (0, 0, 100, 200 ));
CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
CGContextSetRGBFillColor(mContext, 0.0, 0.0, 1.0, 1.0);
CGContextSetLineWidth(mContext, 5.0);
CGContextAddEllipseInRect(mContext, CGRectMake(0, 0, 60.0, 60.0));
CGContextStrokePath(mContext);
In my drawRect method, I create an image from the bitmap context in order to render it. Maybe I should be creating and keeping this image as a member variable, updating it every time I draw something new, rather than creating the image every frame? (Some advice on this would be nice; a rough sketch of that idea follows the snippet below.)
// draw bitmap context
CGImageRef myImage = CGBitmapContextCreateImage (mContext);
CGContextDrawImage(context, rect, myImage);
CGImageRelease(myImage);
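For what it's worth, a minimal sketch of that cached-image idea might look like the following. This is only an illustration of the suggestion above; the ivar name mCachedImage is an assumption, not something from the original code:
// Assumed ivar, kept alongside mContext:
// CGImageRef mCachedImage;

// In drawRect:, only snapshot the bitmap context when the cache is empty:
if (mCachedImage == NULL) {
    mCachedImage = CGBitmapContextCreateImage(mContext);
}
CGContextDrawImage(context, rect, mCachedImage);

// Whenever something new is drawn into mContext, drop the cached image
// so the next drawRect: re-snapshots it:
if (mCachedImage != NULL) {
    CGImageRelease(mCachedImage);
    mCachedImage = NULL;
}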
Then as a test I try drawing a circle when I touch, but nothing happens, and the touch is definitely triggering:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location;
    for (UITouch *touch in touches)
    {
        location = [touch locationInView:[touch view]];
    }
    CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
    CGContextSetRGBFillColor(mContext, 0.0, 0.0, 1.0, 1.0);
    CGContextSetLineWidth(mContext, 2.0);
    CGContextAddEllipseInRect(mContext, CGRectMake(location.x, location.y, 60.0, 60.0));
    CGContextStrokePath(mContext);
}
Help?

[self setNeedsDisplay];
That was it! drawRect was never being called after init because the view didn't know it needed to refresh. My understanding is that I should just call setNeedsDisplay any time I draw something new into the bitmap context, and that seems to work. :)
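For completeness, a minimal sketch of the fix applied to the touch handler above (assuming the handler lives on the same view that owns mContext):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location = CGPointZero;
    for (UITouch *touch in touches) {
        location = [touch locationInView:[touch view]];
    }

    // Draw into the offscreen bitmap context as before...
    CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
    CGContextSetLineWidth(mContext, 2.0);
    CGContextAddEllipseInRect(mContext, CGRectMake(location.x, location.y, 60.0, 60.0));
    CGContextStrokePath(mContext);

    // ...then tell the view it needs to redraw, so drawRect: runs again
    // and the updated bitmap gets drawn to the screen.
    [self setNeedsDisplay];
}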

Related

UIImage rotation and orientation issue

I'm using standard code to rotate an image, but when the UIImageOrientation is not equal to 2, the image is rotated incorrectly.
Is there any code to fix the UIImage orientation and rotate the image by X degrees? I've tried nearly everything.
NSLog(#"Image orientation %d",self.imageOrientation);
CGSize rotatedSize = CGSizeApplyAffineTransform(self.size, CGAffineTransformMakeRotation(rad(degrees)));
if (rotatedSize.width < 0) {
rotatedSize.width *= -1;
}
if (rotatedSize.height < 0) {
rotatedSize.height *= -1;
}
// Create the bitmap context
UIGraphicsBeginImageContextWithOptions(rotatedSize, YES, 0.0);
CGContextRef bitmap = UIGraphicsGetCurrentContext();
// Move the origin to the middle of the image so we will rotate and scale around the center.
CGContextTranslateCTM(bitmap, rotatedSize.width/2, rotatedSize.height/2);
// // Rotate the image context
CGContextRotateCTM(bitmap, rad(degrees));
// Now, draw the rotated/scaled image into the context
CGContextScaleCTM(bitmap, 1.0, -1.0);
CGContextDrawImage(bitmap, CGRectMake(-self.size.width / 2, -self.size.height / 2, self.size.width, self.size.height), [self CGImage]);
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
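For reference, the snippet above reads like the body of a UIImage category method; a minimal sketch of the surrounding pieces (the rad() helper, category name, and method name below are assumptions, not from the original post) would be:
#import <UIKit/UIKit.h>

// Assumed degrees-to-radians helper used by the snippet above.
static inline CGFloat rad(CGFloat degrees) { return degrees * M_PI / 180.0; }

@interface UIImage (Rotation)
// Assumed signature; the snippet above would form the body of this method.
- (UIImage *)imageRotatedByDegrees:(CGFloat)degrees;
@end

// Example call site:
// UIImage *rotated = [original imageRotatedByDegrees:90.0];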

How can I get the specify point's color in the UIImage?

For example, I have a UIImage and I would like to know which color is at position (300, 200) in the image. How can I do that? Thanks.
Call this method from your touch-move handler:
-(void) getPixelColorAtLocation:(CGPoint)point
{
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast);
    // Shift the layer so the requested point lands on the 1x1 bitmap, then render into it.
    CGContextTranslateCTM(context, -point.x, -point.y);
    [self.layer renderInContext:context];
    // NSLog(@"x- %f y- %f", point.x, point.y);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    NSLog(@"RGB color code: %d %d %d", pixel[0], pixel[1], pixel[2]);
}
This gives you the color of the touched point as an RGB combination.
Try it.
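As a usage sketch (assuming the method above is defined on the view being touched), calling it from the touch-move handler looks like:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self];
    // Logs the RGB components of the pixel under the finger.
    [self getPixelColorAtLocation:point];
}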

Transparency in CGContext

I'm trying to set a transparent background for a CGContext but keep getting:
CGBitmapContextCreateImage: invalid context 0x0
Here's what I've got. If I switch kCGImageAlphaLast to kCGImageAlphaNoneSkipFirst it works but the alpha channel is completely ignored. I'm very new to this color & context stuff - any ideas?
-(BOOL) initContext:(CGSize)size {
    int bitmapByteCount;
    int bitmapBytesPerRow;
    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);
    cacheBitmap = malloc( bitmapByteCount );
    if (cacheBitmap == NULL) {
        return NO;
    }
    cacheContext = CGBitmapContextCreate (NULL, size.width, size.height, 8, bitmapBytesPerRow, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaLast);
    CGContextSetRGBFillColor(cacheContext, 1.0, 1.0, 1.0f, 0.5);
    CGContextFillRect(cacheContext, self.bounds);
    return YES;
}
CGBitmapContext supports only certain possible pixel formats. You probably want kCGImageAlphaPremultipliedLast.
(Here's an explanation of premultiplied alpha.)
Also note:
There is no need to malloc cacheBitmap. Since you are passing in NULL as the first argument to CGBitmapContextCreate, the bitmap context will do its own allocation.
Depending on your code, self.bounds may not be the correct rectangle to fill. It would be safer to use CGRectMake(0.f, 0.f, size.width, size.height).
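Putting those points together, a corrected initContext: might look roughly like this (a sketch under the assumptions above; cacheContext is taken to be an ivar of the view class):
- (BOOL)initContext:(CGSize)size
{
    size_t bitmapBytesPerRow = (size_t)size.width * 4;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Pass NULL so CGBitmapContextCreate allocates (and owns) the pixel buffer itself,
    // and use a pixel format the bitmap context actually supports.
    cacheContext = CGBitmapContextCreate(NULL,
                                         size.width,
                                         size.height,
                                         8,
                                         bitmapBytesPerRow,
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (cacheContext == NULL) {
        return NO;
    }

    // Fill with half-transparent white over the whole bitmap, not self.bounds.
    CGContextSetRGBFillColor(cacheContext, 1.0, 1.0, 1.0, 0.5);
    CGContextFillRect(cacheContext, CGRectMake(0.f, 0.f, size.width, size.height));
    return YES;
}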

Xcode Screenshot EAGLContext [duplicate]

Possible Duplicate:
How to get UIImage from EAGLView?
So I was just wondering if anybody knows any way to save what is stored in an EAGLContext as a UIImage.
I am currently using:
UIGraphicsBeginImageContext(CGSizeMake(768, 1024));
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
in other apps of mine, and this works fine, but obviously EAGLContext doesn't have a .layer property. I've tried casting to UIView, but that, unsurprisingly, doesn't work:
UIView *newView = [[UIView alloc] init];
newView = (UIView *)context;
I am drawing to an EAGLContext property on a UIView (technically an EAGLContext on a UIView on another UIView on a View Controller, but I figure that shouldn't make any difference) using OpenGLES 1.
If anybody knows anything about this, even if it's just that I'm completely barking up an impossible tree, please let me know!
Matt
After a few days I finally got a working solution to this. There is code provided by Apple which produces a UIImage from an EAGLView. Then you simply need to flip the image vertically, since UIKit's coordinate system is upside down relative to OpenGL's. The link to the documentation where I found this method doesn't exist anymore.
Method to capture EAGLView:
-(UIImage *)drawableToCGImage
{
    GLint backingWidth2, backingHeight2;

    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note: replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);

    NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
    NSInteger dataLength = width2 * height2 * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
    // otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS.
    // Create a graphics context with the target size measured in POINTS.
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration.
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0.
        CGFloat scale = self.contentScaleFactor;
        widthInPoints = width2 / scale;
        heightInPoints = height2 / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext.
        widthInPoints = width2;
        heightInPoints = height2;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // The UIKit coordinate system is upside down relative to the GL/Quartz coordinate system.
    // Flip the CGImage by rendering it to the flipped bitmap context.
    // The size of the destination area is measured in POINTS.
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
Method to flip the image vertically:
- (UIImage *)flipImageVertically:(UIImage *)originalImage
{
    UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(tempImageView.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the coordinate system vertically, then render the image view's layer into it.
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, tempImageView.frame.size.height);
    CGContextConcatCTM(context, flipVertical);
    [tempImageView.layer renderInContext:context];
    UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    //[tempImageView release]; // uncomment if not using ARC
    return flippedImage;
}
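To tie the two methods together (assuming both are defined on the EAGL-backed view class), the capture would look something like:
// Grab the current GL frame, then flip it into UIKit's orientation.
UIImage *rawSnapshot = [self drawableToCGImage];
UIImage *snapshot = [self flipImageVertically:rawSnapshot];

// Hypothetical use: save the snapshot to the photo library.
UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, NULL);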

images not showing in pdf preview?

I have the strange feeling that if I want to preview a PDF on a controller alongside an image, the only way I can get it to work is to use a defined segue from the previous view controller. Maybe I'm not being clear: how do I get an image to show up in a QuickLook preview without having to segue in, rather than just presenting the view modally? Is this logical?
I'm developing an app which generates PDFs, and I'm giving the user the possibility to preview them... but when these contain an image, boom, the image disappears from the preview, unless I use a segue, in which case there's no problem. It seems weird, though. Any suggestions? Thanks in advance.
- (CFRange)renderPage:(NSInteger)pageNum withTextRange:(CFRange)currentRange
       andFramesetter:(CTFramesetterRef)framesetter
{
    CGContextRef currentContext = UIGraphicsGetCurrentContext();

    // Put the text matrix into a known state. This ensures
    // that no old scaling factors are left in place.
    CGContextSetTextMatrix(currentContext, CGAffineTransformIdentity);

    // Create a path object to enclose the text. Use 72 point
    // margins all around the text.
#pragma mark ----------- here to disclose the height of text on pdf preview
    CGRect frameRect = CGRectMake(40, 72, 468, 350);
    CGMutablePathRef framePath = CGPathCreateMutable();
    CGPathAddRect(framePath, NULL, frameRect);

    [_imageView1 setImage:theImage1];
    CGRect frame = CGRectMake(40, 50, 130, 130);
    [ProgramPDFViewController drawImage:theImage1 inRect:frame];

    CGSize size = theImage1.size;
    CGFloat ratio = 0;
    if (size.width > size.height) {
        ratio = 44.0 / size.width;
    } else {
        ratio = 44.0 / size.height;
    }
    CGRect rect = CGRectMake(0.0, 0.0, ratio * size.width, ratio * size.height);
    UIGraphicsBeginImageContext(rect.size);
    [theImage1 drawInRect:rect];
    UIGraphicsEndImageContext();

    CTFrameRef frameRef = CTFramesetterCreateFrame(framesetter, currentRange, framePath, NULL);
    CGPathRelease(framePath);

    CGContextTranslateCTM(currentContext, 0, kDefaultPageHeight);
    CGContextScaleCTM(currentContext, 1.0, -1.0);
    CTFrameDraw(frameRef, currentContext);

    currentRange = CTFrameGetVisibleStringRange(frameRef);
    currentRange.location += currentRange.length;
    currentRange.length = 0;
    CFRelease(frameRef);

    return currentRange;
}