Currently I load a texture in iOS using Image I/O and extract its image data with Core Graphics. Then I can send the image data to OpenGL like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->width, texture->height, 0, GL_RGBA, GL_UNSIGNED_BYTE, texture->imageData);
The problem is that the Core Graphics part is really slow: I need to set up and draw with Core Graphics just to extract the image data, even though I don't want to show it on screen. Is there a more efficient way to extract image data in iOS?
Here is my code:
...
myTexRef = CGImageSourceCreateWithURL((__bridge CFURLRef)url, myOptions);
...
MyTexture2D* texture;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( tileSize.width * tileSize.height * 4 );
CGContextRef imgContext = CGBitmapContextCreate( imageData, tileSize.width, tileSize.height, 8, 4 * tileSize.width, colorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease( colorSpace );
CGContextClearRect( imgContext, CGRectMake( 0, 0, tileSize.width, tileSize.height ) );
CGContextTranslateCTM( imgContext, 0, 0 );
...
CGImageRef tiledImage = CGImageCreateWithImageInRect (imageRef, tileArea);
CGRect drawRect = CGRectMake(0, 0, tileSize.width, tileSize.height);
// *** THIS CALL IS REALLY EXPENSIVE!
CGContextDrawImage(imgContext, drawRect, tiledImage);
CGImageRelease(tiledImage);
// MyTexture2D takes ownership of imageData and will be responsible for freeing it
texture = new MyTexture2D(tileSize.width, tileSize.height, imageData);
CGContextRelease(imgContext);
If you are developing for iOS 5 and above, GLKTextureLoader is what you're looking for:
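For example, a minimal synchronous loading sketch (the option key is the real GLKTextureLoader one; url and the error handling are just illustrative):

NSError *error = nil;
NSDictionary *options = @{ GLKTextureLoaderOriginBottomLeft : @YES }; // flip to match OpenGL's bottom-left origin
GLKTextureInfo *textureInfo = [GLKTextureLoader textureWithContentsOfURL:url
                                                                 options:options
                                                                   error:&error];
if (textureInfo == nil) {
    NSLog(@"Texture load failed: %@", error);
} else {
    glBindTexture(textureInfo.target, textureInfo.name);
}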
GLKTextureLoader is orders of magnitude slower than using CG.
I have logged this as a bug with Apple DTS, and got it thrown back as "yes, we know, don't care, not going to fix it. You should be using CoreGraphics instead if you want your textures to be loaded fast" (almost that wording)
glTexImage2D is immensely fast if you give it the raw buffer that CGContext* methods create for you / allow you to pass in. Assuming you get the RGBA/ARGB/etc colour-spaces correct, of course.
CGContextDrawImage is also super fast. My guess is that it's taking time to load your data over the web...
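To illustrate the raw-buffer point with the code from the question: when the source image is already decoded as tightly packed 32-bit RGBA, you can (as a sketch, not a guaranteed drop-in) skip the CGContextDrawImage step and hand the decoded bytes straight to GL. If the byte order, alpha or row padding differ, you still need the bitmap-context redraw.

CGImageRef imageRef = CGImageSourceCreateImageAtIndex(myTexRef, 0, NULL);
// Grab the decoded bytes directly from the image's data provider
CFDataRef rawData = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             (GLsizei)CGImageGetWidth(imageRef), (GLsizei)CGImageGetHeight(imageRef),
             0, GL_RGBA, GL_UNSIGNED_BYTE, CFDataGetBytePtr(rawData));
CFRelease(rawData);
CGImageRelease(imageRef);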
Related
I'm currently working a lot with CoreGraphics on OSX.
I've run Time Profiler over my code and found the biggest hang-up is in CGContextDrawImage. It's part of a loop that gets called many times per second.
I don't have any way of optimizing this code per se (since it's in the Apple libraries) - but I am wondering if there's a speedier alternative or way to improve the speed.
I'm using CGContextDrawImage after some blend-mode code such as CGContextSetBlendMode(context, kCGBlendModeDifference); so alternative implementations would need to support blending.
Time Profiler results:
3658.0ms   15.0%    0.0      CGContextDrawImage
3658.0ms   15.0%    0.0       ripc_DrawImage
3539.0ms   14.5%    0.0        ripc_AcquireImage
3539.0ms   14.5%    0.0         CGSImageDataLock
3539.0ms   14.5%    1.0          img_data_lock
3465.0ms   14.2%    0.0           img_interpolate_read
2308.0ms    9.4%    7.0            resample_band
1932.0ms    7.9%    1932.0          resample_byte_h_3cpp_vector
369.0ms     1.5%    369.0           resample_byte_v_Ncpp_vector
1157.0ms    4.7%    2.0            img_decode_read
1150.0ms    4.7%    8.0             decode_data
863.0ms     3.5%    863.0            decode_swap
267.0ms     1.0%    267.0            decode_byte_8bpc_3
Update:
The actual source is something along the lines of the following:
/////////////////////////////////////////////////////////////////////////////////////////
- (CGImageRef)createBlendedImage:(CGImageRef)image
secondImage:(CGImageRef)secondImage
blendMode:(CGBlendMode)blendMode
{
// Get the image width and height
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
// Set the frame
CGRect frame = CGRectMake(0, 0, width, height);
// Create context with alpha channel
CGContextRef context = CGBitmapContextCreate(NULL,
width,
height,
CGImageGetBitsPerComponent(image),
CGImageGetBytesPerRow(image),
CGImageGetColorSpace(image),
kCGImageAlphaPremultipliedLast);
if (!context) {
return nil;
}
// Draw the image inside the context
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextDrawImage(context, frame, image);
// Set the blend mode and draw the second image
CGContextSetBlendMode(context, blendMode);
CGContextDrawImage(context, frame, secondImage);
// Get the masked image from the context
CGImageRef blendedImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
return blendedImage;
}
/////////////////////////////////////////////////////////////////////////////////////////
- (CGImageRef)createImageTick
{
// `self.image` and `self.previousImage` are two instance properties (CGImageRefs)
// Create blended image (stage one)
CGImageRef stageOne = [self createBlendedImage:self.image
secondImage:self.previousImage
blendMode:kCGBlendModeXOR];
// Create blended image (stage two) if stage one image is 50% red
CGImageRef stageTwo = nil;
if ([self isImageRed:stageOne]) {
stageTwo = [self createBlendedImage:self.image
secondImage:stageOne
blendMode:kCGBlendModeSourceAtop];
}
// Release intermediate image
CGImageRelease(stageOne);
return stageTwo;
}
@JeremyRoman et al: Thank you so much for your comments. I am drawing the same image a couple of times per loop, onto different contexts with different filters, and combining with new images. Does resampling include switching from RGB to RGBA? What could I try to speed up or eliminate resampling? – Chris Nolet
This is what Core Image is for. See the Core Image Programming Guide for details. CGContext is designed for rendering final images to the screen, which it sounds like is not your goal with every image you're creating.
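As a rough sketch of that direction for the difference blend above (the filter name and keys are built-in Core Image ones; cgContext stands in for whatever destination context you already have):

CIImage *top    = [CIImage imageWithCGImage:image];
CIImage *bottom = [CIImage imageWithCGImage:secondImage];

CIFilter *blend = [CIFilter filterWithName:@"CIDifferenceBlendMode"];
[blend setValue:top forKey:kCIInputImageKey];
[blend setValue:bottom forKey:kCIInputBackgroundImageKey];

// Reuse one CIContext across frames; creating it is the expensive part
CIContext *ciContext = [CIContext contextWithCGContext:cgContext options:nil];
CIImage *result = [blend valueForKey:kCIOutputImageKey];
CGImageRef blended = [ciContext createCGImage:result fromRect:[result extent]];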
I am making a PDF annotator, and when you switch pages it has to redraw all of the previously drawn OpenGL content (which was saved to file in JSON format). The problem is that it takes longer the more content there is to draw. I have a UIImage saved to disk for each page, so I was hoping to speed up this process by drawing that UIImage onto the EAGLContext in one big stroke.
I want to know how to take a UIImage (or JPEG/PNG file) and draw it directly onto the screen. The reason it has to be on the EAGLView is that it needs to support the eraser, and the regular UIKit approach wouldn't work with that.
I assume there's some way to set a brush as the whole image and just stamp the screen with it once. Any suggestions?
As a pedantic note, there is no standard class named EAGLView, but I assume you're referring to one of Apple's sample UIView subclasses that host OpenGL ES content.
The first step in doing this would be to load the UIImage into a texture. The following is some code that I've used for this in my image processing framework (newImageSource is the input UIImage):
CGSize pointSizeOfImage = [newImageSource size];
CGFloat scaleOfImage = [newImageSource scale];
CGSize pixelSizeOfImage = CGSizeMake(scaleOfImage * pointSizeOfImage.width, scaleOfImage * pointSizeOfImage.height);
CGSize pixelSizeToUseForTexture = pixelSizeOfImage;
BOOL shouldRedrawUsingCoreGraphics = YES;
// For now, deal with images larger than the maximum texture size by resizing to be within that limit
CGSize scaledImageSizeToFitOnGPU = [GPUImageOpenGLESContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
if (!CGSizeEqualToSize(scaledImageSizeToFitOnGPU, pixelSizeOfImage))
{
pixelSizeOfImage = scaledImageSizeToFitOnGPU;
pixelSizeToUseForTexture = pixelSizeOfImage;
shouldRedrawUsingCoreGraphics = YES;
}
if (self.shouldSmoothlyScaleOutput)
{
// In order to use mipmaps, you need to provide power-of-two textures, so convert to the next largest power of two and stretch to fill
CGFloat powerClosestToWidth = ceil(log2(pixelSizeOfImage.width));
CGFloat powerClosestToHeight = ceil(log2(pixelSizeOfImage.height));
pixelSizeToUseForTexture = CGSizeMake(pow(2.0, powerClosestToWidth), pow(2.0, powerClosestToHeight));
shouldRedrawUsingCoreGraphics = YES;
}
GLubyte *imageData = NULL;
CFDataRef dataFromImageDataProvider;
if (shouldRedrawUsingCoreGraphics)
{
// For resized image, redraw
imageData = (GLubyte *) calloc(1, (int)pixelSizeToUseForTexture.width * (int)pixelSizeToUseForTexture.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 8, (int)pixelSizeToUseForTexture.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, pixelSizeToUseForTexture.width, pixelSizeToUseForTexture.height), [newImageSource CGImage]);
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
}
else
{
// Access the raw image bytes directly
dataFromImageDataProvider = CGDataProviderCopyData(CGImageGetDataProvider([newImageSource CGImage]));
imageData = (GLubyte *)CFDataGetBytePtr(dataFromImageDataProvider);
}
glBindTexture(GL_TEXTURE_2D, outputTexture);
if (self.shouldSmoothlyScaleOutput)
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
if (self.shouldSmoothlyScaleOutput)
{
glGenerateMipmap(GL_TEXTURE_2D);
}
if (shouldRedrawUsingCoreGraphics)
{
free(imageData);
}
else
{
CFRelease(dataFromImageDataProvider);
}
As you can see, this has some functions for resizing images that exceed the maximum texture size of the device (the class method in the above code merely queries the max texture size), as well as a boolean flag for whether or not to generate mipmaps for the texture for smoother downsampling. These can be removed if you don't care about those cases. This is also OpenGL ES 2.0 code, so there might be an OES suffix or two that you'd need to add to some of the functions above in order for them to work with 1.1.
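If you need such a helper yourself, a rough version (assuming a GL context is current; the real GPUImage method may differ) could look like this:

+ (CGSize)sizeThatFitsWithinATextureForSize:(CGSize)inputSize
{
    GLint maxTextureSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize); // needs a current GL context
    if ((inputSize.width <= maxTextureSize) && (inputSize.height <= maxTextureSize))
    {
        return inputSize;
    }
    CGFloat scale = MIN(maxTextureSize / inputSize.width, maxTextureSize / inputSize.height);
    return CGSizeMake(floor(inputSize.width * scale), floor(inputSize.height * scale));
}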
Once you have the UIImage in a texture, you can draw it to the screen using a textured quad (two triangles that make up a rectangle, with appropriate texture coordinates at the corners). How you do this differs between OpenGL ES 1.1 and 2.0: for 2.0, you use a passthrough shader program that just reads the color from that location in the texture and draws it to the screen; for 1.1, you just set up the texture coordinates for your geometry and draw the two triangles.
I have some OpenGL ES 2.0 code for this in this answer.
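For reference, a bare-bones 2.0 draw looks roughly like the following; passthroughProgram, positionAttribute, texCoordAttribute and textureUniform are placeholders for the program and attribute/uniform locations you set up yourself, and outputTexture is the texture filled above.

static const GLfloat squareVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};
static const GLfloat textureCoordinates[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};

glUseProgram(passthroughProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glUniform1i(textureUniform, 0); // texture unit 0

glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, 0, squareVertices);
glVertexAttribPointer(texCoordAttribute, 2, GL_FLOAT, GL_FALSE, 0, textureCoordinates);
glEnableVertexAttribArray(positionAttribute);
glEnableVertexAttribArray(texCoordAttribute);

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);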
I need to implement the SURF algorithm in Objective-C on iOS.
I have looked into OpenCV and also tried to build the following examples:
jonmarimba and ishai jaffe
The examples are not working, and I need to get at least one of them running so I can at least be sure that SURF works on iOS. I have tried to build it from scratch but I am completely stuck.
I am trying to use OpenCV 2.4.2 in jonmarimba's example, with iOS 5.1.1 and Xcode 4.3.
First of all: go with OpenCV's C++ interface. Objective-C++ (rename the relevant .m files to .mm) lets you mix Objective-C and C++ freely, so you can use it directly.
To get a grip on the topic, take a look at OpenCV's official docs and the example code on Feature Description.
The next step is to grab a copy of the current OpenCV version for iOS. As of version 2.4.2, OpenCV has official iOS support, so you just need the opencv2.framework.
To convert a cv::Mat to a UIImage, use this function:
static UIImage* MatToUIImage(const cv::Mat& m) {
CV_Assert(m.depth() == CV_8U);
NSData *data = [NSData dataWithBytes:m.data length:m.elemSize()*m.total()];
CGColorSpaceRef colorSpace = m.channels() == 1 ?
CGColorSpaceCreateDeviceGray() : CGColorSpaceCreateDeviceRGB();
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(m.cols, m.rows, m.elemSize1()*8, m.elemSize()*8,
m.step[0], colorSpace, kCGImageAlphaNoneSkipLast|kCGBitmapByteOrderDefault,
provider, NULL, false, kCGRenderingIntentDefault);
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace); return finalImage;
}
… and vice-versa:
static void UIImageToMat(const UIImage* image, cv::Mat& m) {
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width, rows = image.size.height;
m.create(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(m.data, m.cols, m.rows, 8,
m.step[0], colorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef); // colorSpace is obtained with CGImageGetColorSpace (Get rule), so it is not released here
}
The rest of the work you have to do is plain OpenCV stuff. So grab yourself a coffee and get to work.
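For instance, the SURF step itself boils down to something like this rough Objective-C++ sketch (it assumes the UIImageToMat helper above, a UIImage called inputImage, and the nonfree module that ships SURF in 2.4.x; the threshold value is arbitrary):

#include <vector>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/nonfree/features2d.hpp> // SURF lives in the nonfree module in 2.4.x

cv::Mat rgba, gray;
UIImageToMat(inputImage, rgba);
cv::cvtColor(rgba, gray, CV_RGBA2GRAY);

cv::SurfFeatureDetector detector(400); // Hessian threshold, tune to taste
std::vector<cv::KeyPoint> keypoints;
detector.detect(gray, keypoints);

cv::SurfDescriptorExtractor extractor;
cv::Mat descriptors;
extractor.compute(gray, keypoints, descriptors);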
If you need some "inspiration" take a look at this repo gsoc2012 - /ios/trunk It's dedicated to OpenCV + iOS.
I have done a simple but effective emboss effect with Core Graphics.
It works great, but only in the simulator...
Here is the result:
What I do is the following:
- From a picked image, I take the alpha out (if it has one) and fill it with white.
- I transform this RGB image to grayscale.
- I invert the colors of this image.
I then call a custom method to create the effect, with these parameters:
canvasImg: a semi-transparent image to mask onto
maskImg: the image I just created, grayscale and inverted
opacity: the opacity of the resulting image
The method then makes a simple mask, applies shadows and opacity, and returns a brand-new UIImage.
I can't understand why it works in the simulator but not on the device.
When running on the device, I do get a non-null UIImage, though...
Please help!
Here is the code:
- (UIImage *)stampImage:(UIImage *)canvasImg withMask:(UIImage *)maskImg withOpacity:(CGFloat)opacity
{
//Creating the mask Image
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
colorSpace = CGColorSpaceCreateDeviceRGB();
mainViewContentContext = CGBitmapContextCreate(NULL, maskImg.size.width, maskImg.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext == NULL) return NULL;
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, maskImg.size.width, maskImg.size.height), maskImg.CGImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, maskImg.size.width, maskImg.size.height), canvasImg.CGImage);
CGContextSetAllowsAntialiasing(mainViewContentContext, true);
CGContextSetShouldAntialias(mainViewContentContext, true);
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
UIImage *maskedImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
CGImageRelease(mainViewContentBitmapContext);
//Giving some Drop shadows
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef shadowContext = CGBitmapContextCreate(NULL, maskedImage.size.width + 10, maskedImage.size.height + 10,
CGImageGetBitsPerComponent(maskedImage.CGImage), 0,
colourSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colourSpace);
CGContextSetShadowWithColor(shadowContext, CGSizeMake(0, -1), 1, [UIColor colorWithWhite:1.0 alpha:0.3].CGColor);
CGContextSetAllowsAntialiasing(shadowContext, true);
CGContextSetShouldAntialias(shadowContext, true);
CGContextDrawImage(shadowContext, CGRectMake(0, 10, maskedImage.size.width, maskedImage.size.height), maskedImage.CGImage);
CGImageRef shadowedCGImage = CGBitmapContextCreateImage(shadowContext);
CGContextRelease(shadowContext);
UIImage *stampImg = [UIImage imageWithCGImage:shadowedCGImage];
CGImageRelease(shadowedCGImage);
return stampImg;
}
Also be aware of the memory limitations of the device vs the simulator. I've had CG logic that would build and run fine on the simulator; the same logic will build and run emitting no errors on the device, but the visual result is not the desired one. I'd suggest trying your logic on a considerably smaller image to verify that it works on the device. I had to abandon some very cool image masking stuff that I'd come up with because the device didn't have the horsepower to pull it off for larger images.
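A quick way to test that on your side is to feed stampImage: deliberately small inputs first; a throwaway downscale sketch (the 0.25 factor is arbitrary):

CGFloat testScale = 0.25f; // arbitrary test factor
CGSize smallSize = CGSizeMake(canvasImg.size.width * testScale, canvasImg.size.height * testScale);
UIGraphicsBeginImageContextWithOptions(smallSize, NO, 1.0f);
[canvasImg drawInRect:CGRectMake(0.0f, 0.0f, smallSize.width, smallSize.height)];
UIImage *smallCanvas = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Do the same for maskImg, then call stampImage:withMask:withOpacity: with the small versions

If the small versions come out correctly on the device, memory pressure is the likely culprit.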
When I load textures from images normally, they are upside down because of OpenGL's coordinate system. What would be the best way to flip them?
- glScalef(1.0f, -1.0f, 1.0f)
- mapping the y coordinates of the textures in reverse
- vertically flipping the image files manually (in Photoshop)
- flipping them programmatically after loading them (I don't know how)
This is the method I'm using to load png textures, in my Utilities.m file (Objective-C):
+ (TextureImageRef)loadPngTexture:(NSString *)name {
CFURLRef textureURL = CFBundleCopyResourceURL(
CFBundleGetMainBundle(),
(CFStringRef)name,
CFSTR("png"),
CFSTR("Textures"));
NSAssert(textureURL, @"Texture name invalid");
CGImageSourceRef imageSource = CGImageSourceCreateWithURL(textureURL, NULL);
NSAssert(imageSource, @"Invalid Image Path.");
NSAssert((CGImageSourceGetCount(imageSource) > 0), @"No Image in Image Source.");
CFRelease(textureURL);
CGImageRef image = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
NSAssert(image, @"Image not created.");
CFRelease(imageSource);
GLuint width = CGImageGetWidth(image);
GLuint height = CGImageGetHeight(image);
void *data = malloc(width * height * 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSAssert(colorSpace, @"Colorspace not created.");
CGContextRef context = CGBitmapContextCreate(
data,
width,
height,
8,
width * 4,
colorSpace,
kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
NSAssert(context, @"Context not created.");
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGImageRelease(image);
CGContextRelease(context);
return TextureImageCreate(width, height, data);
}
Where TextureImage is a struct that has a height, width and void *data.
Right now I'm just playing around with OpenGL, but later I want to try making a simple 2d game. I'm using Cocoa for all the windowing and Objective-C as the language.
Also, another thing I was wondering about: If I made a simple game, with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Thanks.
Any of those:
Flip the texture during texture load,
OR flip the model's texture coordinates during model load,
OR set the texture matrix to flip y (glMatrixMode(GL_TEXTURE)) during rendering; a sketch of this last option follows below.
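The third option is just a few lines at render time (fixed-function pipeline only):

glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.0f, 1.0f, 0.0f);
glScalef(1.0f, -1.0f, 1.0f); // together: t' = 1 - t, so textures sample right side up
glMatrixMode(GL_MODELVIEW);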
Also, another thing I was wondering about: If I made a simple game, with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Depends on how you are going to render text.
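For a purely 2D, pixels-as-units setup, the usual trick is an orthographic projection with the y axis pointing down; a minimal fixed-function sketch (viewWidth and viewHeight stand for your drawable size):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, viewWidth, viewHeight, 0.0, -1.0, 1.0); // origin at the top-left, y grows downward
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

Text you draw yourself from a bitmap-font atlas works fine in that space; it's mainly third-party text libraries that assume a bottom-left origin that you would have to adapt.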
Jordan Lewis pointed out that CGContextDrawImage draws the image upside down when passed a UIImage.CGImage. That led me to a quick and easy solution: before calling CGContextDrawImage,
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0f, -1.0f);
Does the job perfectly well.
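In the loadPngTexture: method above, that means flipping the context right after the NSAssert(context, ...) check and before the draw:

CGContextTranslateCTM(context, 0, height); // move the origin to what will become the top edge
CGContextScaleCTM(context, 1.0f, -1.0f);   // flip the y axis
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);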