Good day to all.
At the moment I am trying to implement a CCLabelTTF subclass with support for NSAttributedString to get a multi-colored label, and I am hampered by my lack of CoreText and CoreGraphics knowledge.
After reading a few guides, I created a CCTexture2D category to create a texture from an NSAttributedString object.
Here is my drawing code:
data = calloc(POTHigh, POTWide * 2);
colorSpace = CGColorSpaceCreateDeviceGray();
context = CGBitmapContextCreate(data, POTWide, POTHigh, 8, POTWide, colorSpace, kCGImageAlphaNone);
CGColorSpaceRelease(colorSpace);
if( ! context )
{
    free(data);
    [self release];
    return nil;
}
UIGraphicsPushContext(context);
CGContextTranslateCTM(context, 0.0f, POTHigh);
CGContextScaleCTM(context, 1.0f, -1.0f);
// draw attributed string to context
CTFramesetterRef frameSetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)string);
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddRect(path, NULL, CGRectMake(0.f, 0.f, dimensions.width, dimensions.height));
CTFrameRef frame = CTFramesetterCreateFrame(frameSetter, CFRangeMake(0, 0), path, NULL);
CTFrameDraw(frame, context);
UIGraphicsPopContext();
CFRelease(frame);
CGPathRelease(path);
CFRelease(frameSetter);
And now I have a few troubles:
The first one: my texture is shown flipped vertically. I thought that these lines
CGContextTranslateCTM(context, 0.0f, POTHigh);
CGContextScaleCTM(context, 1.0f, -1.0f);
should prevent this.
The second one: if I create an RGB context, I cannot see anything on the screen. I tried to create the RGB context with these lines.
colorSpace = CGColorSpaceCreateDeviceRGB();
context = CGBitmapContextCreate(data, POTWide, POTHigh, 8, POTWide * 4, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big);
I tried to Google, but couldn't find anything related to my issues =( Any help (links or suggestions) is appreciated.
A couple of things to try:
Your data allocation isn't big enough for RGB. Try: data = calloc(POTHigh, POTWide * 4); for RGB color space.
CTFrameDraw draws in relation to GL coordinates, so you don't need CGContextScaleCTM(context, 1.0f, -1.0f);
That line was put in the original CCTexture2D creation for a CCLabelTTF because it used NSString's drawInRect:, which draws in relation to UIKit coordinates.
Maybe try other alpha mask flags...? Check out Apple's documentation on Supported Pixel Formats for iOS to see what your options are.
You may want to take a look at ActiveTextView-iOS (https://github.com/storify/ActiveTextView-iOS). It may be of use.
Use this to get a color texture:
context = CGBitmapContextCreate(data, POTWide, POTHigh, 8, POTWide * 4, colorSpace, kCGImageAlphaPremultipliedLast);
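For what it's worth, here is a minimal sketch that combines the suggestions above: a 4-bytes-per-pixel allocation, a matching row stride, and no extra vertical flip before CTFrameDraw. The variable names (POTWide, POTHigh, string, dimensions) are the ones from the question, and this is untested against the actual CCTexture2D category:

void *data = calloc(POTHigh, POTWide * 4);                      // 4 bytes per pixel for RGBA
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(data, POTWide, POTHigh, 8,
                                             POTWide * 4,       // row stride must match the allocation
                                             colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// CTFrameDraw already works in CG/GL coordinates, so no CTM flip here.
CTFramesetterRef frameSetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)string);
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddRect(path, NULL, CGRectMake(0.f, 0.f, dimensions.width, dimensions.height));
CTFrameRef frame = CTFramesetterCreateFrame(frameSetter, CFRangeMake(0, 0), path, NULL);
CTFrameDraw(frame, context);

CFRelease(frame);
CGPathRelease(path);
CFRelease(frameSetter);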
Possible Duplicate:
How to get UIImage from EAGLView?
So I was just wondering if anybody knows any way to save what is stored in an EAGLContext as a UIImage.
I am currently using:
UIGraphicsBeginImageContext(CGSizeMake(768, 1024));
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
in other apps of mine, and it works fine, but obviously EAGLContext doesn't have a .layer property. I've tried casting to UIView, but that, unsurprisingly, doesn't work:
UIView *newView = [[UIView alloc] init];
newView = (UIView *)context;
I am drawing to an EAGLContext property on a UIView (technically an EAGLContext on a UIView on another UIView on a view controller, but I figure that shouldn't make any difference) using OpenGL ES 1.
If anybody knows anything about this, even if it's just that I'm completely barking up an impossible tree, please let me know!
Matt
After a few days I finally got a working solution to this. There is code provided by Apple which produces a UIImage from an EAGLView. You then simply need to flip the image vertically, since UIKit is upside down relative to GL. The link to the documentation where I found this method no longer exists.
Method to capture EAGLView:
-(UIImage *)drawableToCGImage
{
    GLint backingWidth2, backingHeight2;

    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);

    NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
    NSInteger dataLength = width2 * height2 * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
    // otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS.
    // Create a graphics context with the target size measured in POINTS.
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration.
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0.
        CGFloat scale = self.contentScaleFactor;
        widthInPoints = width2 / scale;
        heightInPoints = height2 / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
        widthInPoints = width2;
        heightInPoints = height2;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // The UIKit coordinate system is upside down relative to the GL/Quartz coordinate system.
    // Flip the CGImage by rendering it to the flipped bitmap context.
    // The size of the destination area is measured in POINTS.
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
Method to flip the image vertically:
- (UIImage *)flipImageVertically:(UIImage *)originalImage
{
    UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(tempImageView.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, tempImageView.frame.size.height);
    CGContextConcatCTM(context, flipVertical);

    [tempImageView.layer renderInContext:context];

    UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    //[tempImageView release];
    return flippedImage;
}
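Putting the two together, a call site might look like this (a sketch; it assumes both methods live on the same EAGLView subclass as above):

// Grab the GL drawable as a UIImage, then reorient it for UIKit.
UIImage *rawSnapshot = [self drawableToCGImage];
UIImage *snapshot = [self flipImageVertically:rawSnapshot];

// Do whatever you need with it, e.g. save it to the photo library.
UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, NULL);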
I am trying to draw to a bitmap context but coming up empty. I believe I'm creating things properly, because I can initialize the context, draw a few things, then create an image from it and draw that image. What I cannot do is trigger further drawing on the context after initialization so that more items appear on it. I'm not sure whether I'm missing some common practice that means I can only draw to it at certain points, or whether I have to do something else. Here is what I do, below.
I copied the helper function provided by Apple with one modification to how the color space is obtained, because it wasn't compiling (this is for iPad; I don't know if that matters):
CGContextRef MyCreateBitmapContext (int pixelsWide, int pixelsHigh)
{
    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    void           *bitmapData;
    int             bitmapByteCount;
    int             bitmapBytesPerRow;

    bitmapBytesPerRow = (pixelsWide * 4);                                   // 1
    bitmapByteCount   = (bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB(); //CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);// 2
    bitmapData = malloc(bitmapByteCount);                                   // 3
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        return NULL;
    }

    context = CGBitmapContextCreate(bitmapData,                             // 4
                                    pixelsWide,
                                    pixelsHigh,
                                    8,                                      // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    if (context == NULL)
    {
        free(bitmapData);                                                   // 5
        fprintf(stderr, "Context not created!");
        return NULL;
    }

    CGColorSpaceRelease(colorSpace);                                        // 6
    return context;                                                         // 7
}
I initialize it in my init method below with a few sample draws just to be sure it looks right:
mContext = MyCreateBitmapContext (rect.size.width, rect.size.height);
// sample fills
CGContextSetRGBFillColor (mContext, 1, 0, 0, 1);
CGContextFillRect (mContext, CGRectMake (0, 0, 200, 100 ));
CGContextSetRGBFillColor (mContext, 0, 0, 1, .5);
CGContextFillRect (mContext, CGRectMake (0, 0, 100, 200 ));
CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
CGContextSetRGBFillColor(mContext, 0.0, 0.0, 1.0, 1.0);
CGContextSetLineWidth(mContext, 5.0);
CGContextAddEllipseInRect(mContext, CGRectMake(0, 0, 60.0, 60.0));
CGContextStrokePath(mContext);
In my drawRect method, I create an image from the context to render it. Maybe I should be creating and keeping this image as a member variable, updating it every time I draw something new, instead of creating the image every frame? (Some advice on this would be nice.)
// draw bitmap context
CGImageRef myImage = CGBitmapContextCreateImage (mContext);
CGContextDrawImage(context, rect, myImage);
CGImageRelease(myImage);
Then as a test I try drawing a circle when I touch, but nothing happens, and the touch is definitely triggering:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location;
    for (UITouch *touch in touches)
    {
        location = [touch locationInView:[touch view]];
    }

    CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
    CGContextSetRGBFillColor(mContext, 0.0, 0.0, 1.0, 1.0);
    CGContextSetLineWidth(mContext, 2.0);
    CGContextAddEllipseInRect(mContext, CGRectMake(location.x, location.y, 60.0, 60.0));
    CGContextStrokePath(mContext);
}
Help?
[self setNeedsDisplay]; !!!!
So it was because drawRect: was never being called after init, since the view didn't know it needed to refresh. My understanding is that I should just call setNeedsDisplay any time I draw into the context, and that seems to work. :)
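Concretely, the touch handler from the question only needs the redraw trigger added at the end. A sketch (mContext and the drawing calls are taken from the question's code):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint location = CGPointZero;
    for (UITouch *touch in touches)
    {
        location = [touch locationInView:[touch view]];
    }

    // Draw into the retained bitmap context exactly as before...
    CGContextSetRGBStrokeColor(mContext, 1.0, 1.0, 1.0, 1.0);
    CGContextSetLineWidth(mContext, 2.0);
    CGContextAddEllipseInRect(mContext, CGRectMake(location.x, location.y, 60.0, 60.0));
    CGContextStrokePath(mContext);

    // ...and then tell UIKit the view is dirty so drawRect: runs again
    // and composites the updated bitmap.
    [self setNeedsDisplay];
}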
So I tried to use the Quartz CGImageCreateWithMaskingColors function, but the only problem is that it masks the color range you are selecting.
I want to mask everything but the color range I am selecting. For instance, I want to show all red colors in a picture but remove the other channels (Green and Blue).
I am doing this in Objective-C and I am a noob so please give me examples :)
Any help is greatly appreciated.
Use these methods. I found them in one of the SO posts.
-(void)changeColor
{
    UIImage *temp23 = Image;   // Pass your image here
    CGImageRef ref1 = [self createMask:temp23];
    const float colorMasking[6] = {1.0, 3.0, 1.0, 2.0, 2.0, 3.0};
    CGImageRef New = CGImageCreateWithMaskingColors(ref1, colorMasking);
    UIImage *resultedimage = [UIImage imageWithCGImage:New];
    EditImageView.image = resultedimage;
    [EditImageView setNeedsDisplay];
}

-(CGImageRef)createMask:(UIImage*)temp
{
    CGImageRef ref = temp.CGImage;
    int mWidth = CGImageGetWidth(ref);
    int mHeight = CGImageGetHeight(ref);
    int count = mWidth * mHeight * 4;
    void *bufferdata = malloc(count);

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    CGContextRef cgctx = CGBitmapContextCreate(bufferdata, mWidth, mHeight, 8, mWidth * 4, colorSpaceRef, kCGImageAlphaPremultipliedFirst);
    CGRect rect = {0, 0, mWidth, mHeight};
    CGContextDrawImage(cgctx, rect, ref);
    bufferdata = CGBitmapContextGetData(cgctx);

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bufferdata, mWidth * mHeight * 4, NULL);
    CGImageRef savedimageref = CGImageCreate(mWidth, mHeight, 8, 32, mWidth * 4, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CFRelease(colorSpaceRef);
    return savedimageref;
}
Then call the changeColor method in a button's click event and see the result.
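For example, a minimal way to hook that up in code (a sketch; it assumes a view controller that already has the changeColor method above, and the button here is created ad hoc for illustration):

UIButton *maskButton = [UIButton buttonWithType:UIButtonTypeRoundedRect];
maskButton.frame = CGRectMake(20, 20, 140, 44);
[maskButton setTitle:@"Apply mask" forState:UIControlStateNormal];
[maskButton addTarget:self
               action:@selector(changeColor)
     forControlEvents:UIControlEventTouchUpInside];
[self.view addSubview:maskButton];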
I found the answer to my problem above. Follow Rahul's code above with some changes to set your own color:
-(void)changeColor
{
    UIImage *temp23 = Image;   // Pass your image here
    CGImageRef ref1 = [self createMask:temp23];
    const float colorMasking[6] = {1.0, 3.0, 1.0, 2.0, 2.0, 3.0};
    CGImageRef New = CGImageCreateWithMaskingColors(ref1, colorMasking);
    UIImage *resultedimage = [UIImage imageWithCGImage:New];
    EditImageView.image = resultedimage;
    [EditImageView setNeedsDisplay];
}

-(CGImageRef)createMask:(UIImage*)temp
{
    CGImageRef ref = temp.CGImage;
    int mWidth = CGImageGetWidth(ref);
    int mHeight = CGImageGetHeight(ref);
    int count = mWidth * mHeight * 4;
    void *bufferdata = malloc(count);

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    CGContextRef cgctx = CGBitmapContextCreate(bufferdata, mWidth, mHeight, 8, mWidth * 4, colorSpaceRef, kCGImageAlphaPremultipliedLast);
    CGRect rect = {0, 0, mWidth, mHeight};
    CGContextDrawImage(cgctx, rect, ref);

    CGContextSaveGState(cgctx);
    CGContextSetBlendMode(cgctx, kCGBlendModeColor);
    CGContextSetRGBFillColor(cgctx, 1.0, 0.0, 0.0, 1.0);
    CGContextFillRect(cgctx, rect);

    bufferdata = CGBitmapContextGetData(cgctx);
    const float colorMasking[6] = {1.0, 3.0, 1.0, 2.0, 2.0, 3.0};

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bufferdata, mWidth * mHeight * 4, NULL);
    CGImageRef savedimageref = CGImageCreate(mWidth, mHeight, 8, 32, mWidth * 4, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CFRelease(colorSpaceRef);
    return savedimageref;
}
Hmmm...
I may be missing something, but I don't believe that the provided answers apply to the question. The second response gets closer to the mark, but it contains spurious code that has little to do with the solution.
The createMask method makes a copy of the original image, assuming alpha in the LSB position. The changeColor method performs a masking call that isn't going to do much to an RGB image; basically only black is going to be masked (i.e., RGB triplets in the range/combinations of (1,1,2) to (3,2,1)).
I am guessing that the red shift that is being observed is due to the parameter
kCGImageAlphaPremultipliedFirst
in the line
CGContextRef cgctx = CGBitmapContextCreate (bufferdata,mWidth,mHeight, 8,mWidth*4, colorSpaceRef, kCGImageAlphaPremultipliedFirst);
due to improper treatment of the alpha channel. If in the changeColor method you modify the block
CGImageRef ref1=[self createMask:temp23];
const float colorMasking[6] = {1.0, 3.0, 1.0, 2.0, 2.0, 3.0};
CGImageRef New=CGImageCreateWithMaskingColors(ref1, colorMasking);
UIImage *resultedimage=[UIImage imageWithCGImage:New];
EditImageView.image = resultedimage;
to be
CGImageRef ref1=[self createMask:temp23];
UIImage *resultedimage=[UIImage imageWithCGImage:ref1];
EditImageView.image = resultedimage;
you'll see no difference in the display. Changing the CGBitmapInfo constant to kCGImageAlphaPremultipliedLast should display the image correctly using either of the above code blocks.
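In other words, the only change needed in createMask is the alpha constant in the context creation call, along the lines of:

// Same call as in the answer above, with alpha moved to the last byte so it
// matches the RGBA layout the rest of the code assumes.
CGContextRef cgctx = CGBitmapContextCreate(bufferdata, mWidth, mHeight, 8, mWidth * 4,
                                           colorSpaceRef, kCGImageAlphaPremultipliedLast);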
The next response gets a bit closer to what the OP asks for, but it's in terms of visual effect, not actual data. Here the pertinent code in createMask is
CGContextRef cgctx = CGBitmapContextCreate (bufferdata,mWidth,mHeight, 8,mWidth*4,colorSpaceRef, kCGImageAlphaPremultipliedLast);
which displays the image correctly, followed by
CGContextSetBlendMode(cgctx, kCGBlendModeColor);
CGContextSetRGBFillColor (cgctx, 1.0, 0.0, 0.0, 1.0);
CGContextFillRect(cgctx, rect);
and then the logic for constructing the image. The blend logic overlays a red tint on the original image, achieving a similar effect as the misplaced alpha channel in the original response. This still really is not what the OP asks for, which is to mask one or more channels, not blend colors.
This really amounts to setting the channel values for the colors that are not desired to zero. Here's an example for returning just the red channel as the OP requests; the assumption is that the format of the pixel is ABGR:
- (CGImageRef)redChannel:(CGImageRef)image
{
    CGDataProviderRef provider = CGImageGetDataProvider(image);
    NSMutableData *data = (id)CGDataProviderCopyData(provider);
    int width = CGImageGetWidth(image);
    int height = CGImageGetHeight(image);
    [data autorelease];

    // get a mutable reference to the image data
    uint32_t *dwords = [data mutableBytes];
    for (size_t idx = 0; idx < width * height; idx++) {
        uint32_t *pixel = &dwords[idx];
        // perform a logical AND of the pixel with a mask that zeroes out green and blue pixels
        *pixel &= 0x000000ff;
    }

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // now create a new image using the masked original data
    CGDataProviderRef iprovider = CGDataProviderCreateWithData(NULL, dwords, width * height * 4, NULL);
    CGImageRef savedimageref = CGImageCreate(width, height, 8, 32, width * 4, colorSpaceRef, bitmapInfo, iprovider, NULL, NO, renderingIntent);

    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(iprovider);
    return savedimageref;
}
A good summary of bitwise operations can be found here.
As is pointed out here, you may need to change the structure of the mask depending on the LSB/MSB ordering of the bits in the pixel. This example assumes 32 bit pixels from a standard iPhone PNG.
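To make that ordering point concrete, here is a small sketch of the masking step factored out as a plain C helper. KeepOnlyRed is a made-up name, and the mask values are only correct for the layouts described in the comments, so verify against your context's actual CGBitmapInfo before relying on them:

#include <stdint.h>
#include <stddef.h>

// Zero every channel except red in a buffer of 32-bit pixels.
// 0x000000ff matches the ABGR assumption used in redChannel: above
// (red in the least significant byte of each word); if red sits at the
// other end of the word, 0xff000000 would be the mask instead, and you
// can OR in the alpha byte's position if you want to preserve alpha.
static void KeepOnlyRed(uint32_t *pixels, size_t count, uint32_t redMask)
{
    for (size_t i = 0; i < count; i++) {
        pixels[i] &= redMask;
    }
}

// Usage (sketch): KeepOnlyRed(dwords, width * height, 0x000000ffu);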
I have a strange feeling that if I want to preview a PDF in a controller alongside an image, the only way I can get it to work is to use a defined segue from the previous view controller. Maybe I'm not being clear: how do I get an image to show up in a QuickLook preview without having to segue in, just presenting the view modally instead? Is this logical?
I'm developing an app which generates PDFs, and I'm giving the user the possibility to preview them... but when the PDFs contain an image, boom, the image disappears from the preview... unless I use a segue, and then there's no problem. It seems weird, though. Any suggestions? Thanks in advance...
- (CFRange)renderPage:(NSInteger)pageNum withTextRange:(CFRange)currentRange
       andFramesetter:(CTFramesetterRef)framesetter
{
    CGContextRef currentContext = UIGraphicsGetCurrentContext();

    // Put the text matrix into a known state. This ensures
    // that no old scaling factors are left in place.
    CGContextSetTextMatrix(currentContext, CGAffineTransformIdentity);

    // Create a path object to enclose the text. Use 72 point
    // margins all around the text.
#pragma mark ----------- here to disclose the height of text on pdf preview
    CGRect frameRect = CGRectMake(40, 72, 468, 350);
    CGMutablePathRef framePath = CGPathCreateMutable();
    CGPathAddRect(framePath, NULL, frameRect);

    [_imageView1 setImage:theImage1];
    CGRect frame = CGRectMake(40, 50, 130, 130);
    [ProgramPDFViewController drawImage:theImage1 inRect:frame];

    CGSize size = theImage1.size;
    CGFloat ratio = 0;
    if (size.width > size.height) {
        ratio = 44.0 / size.width;
    } else {
        ratio = 44.0 / size.height;
    }
    CGRect rect = CGRectMake(0.0, 0.0, ratio * size.width, ratio * size.height);
    UIGraphicsBeginImageContext(rect.size);
    [theImage1 drawInRect:rect];
    UIGraphicsEndImageContext();

    CTFrameRef frameRef = CTFramesetterCreateFrame(framesetter, currentRange, framePath, NULL);
    CGPathRelease(framePath);

    CGContextTranslateCTM(currentContext, 0, kDefaultPageHeight);
    CGContextScaleCTM(currentContext, 1.0, -1.0);

    CTFrameDraw(frameRef, currentContext);

    currentRange = CTFrameGetVisibleStringRange(frameRef);
    currentRange.location += currentRange.length;
    currentRange.length = 0;
    CFRelease(frameRef);

    return currentRange;
}