Fragment shaders on a texture - Objective-C

I am trying to add some post-processing capabilities to a program. The rendering is done using OpenGL. I just want to allow the program to load some home-made fragment shaders and use them on the video stream.
I wrote a little fragment shader using "OpenGL Shader Builder" that just turns a texture to grayscale. The shader works well in the shader builder, but I can't make it work in the main program: the screen stays all black.
Here is the setup:
@implementation PluginGLView

- (id) initWithCoder: (NSCoder *) coder
{
    const GLubyte * strExt;
    if ((self = [super initWithCoder:coder]) == nil)
        return nil;

    glLock = [[NSLock alloc] init];
    if (nil == glLock) {
        [self release];
        return nil;
    }

    // Init pixel format attribs
    NSOpenGLPixelFormatAttribute attrs[] =
    {
        NSOpenGLPFAAccelerated,
        NSOpenGLPFANoRecovery,
        NSOpenGLPFADoubleBuffer,
        0
    };

    // Get pixel format from OpenGL
    NSOpenGLPixelFormat* pixFmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];
    if (!pixFmt)
    {
        NSLog(@"No Accelerated OpenGL pixel format found\n");

        NSOpenGLPixelFormatAttribute attrs2[] =
        {
            NSOpenGLPFANoRecovery,
            0
        };

        // Get pixel format from OpenGL
        pixFmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs2];
        if (!pixFmt) {
            NSLog(@"No OpenGL pixel format found!\n");
            [self release];
            return nil;
        }
    }

    [self setPixelFormat:[pixFmt autorelease]];

    /*
    long swapInterval = 1 ;
    [[self openGLContext]
        setValues:&swapInterval
        forParameter:NSOpenGLCPSwapInterval];
    */

    [glLock lock];
    [[self openGLContext] makeCurrentContext];

    // Init object members
    strExt = glGetString (GL_EXTENSIONS);
    texture_range = gluCheckExtension ((const unsigned char *)"GL_APPLE_texture_range", strExt) ? GL_TRUE : GL_FALSE;
    texture_hint = GL_STORAGE_SHARED_APPLE ;
    client_storage = gluCheckExtension ((const unsigned char *)"GL_APPLE_client_storage", strExt) ? GL_TRUE : GL_FALSE;
    rect_texture = gluCheckExtension((const unsigned char *)"GL_EXT_texture_rectangle", strExt) ? GL_TRUE : GL_FALSE;

    // Setup some basic OpenGL stuff
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    // Loads the shaders
    shader = LoadShader(GL_FRAGMENT_SHADER, "/Users/alexandremathieu/fragment.fs");
    program = glCreateProgram();
    glAttachShader(program, shader);
    glLinkProgram(program);
    glUseProgram(program);

    [NSOpenGLContext clearCurrentContext];
    [glLock unlock];

    image_width = 1024;
    image_height = 512;
    image_depth = 16;
    image_type = GL_UNSIGNED_SHORT_1_5_5_5_REV;

    image_base = (GLubyte *) calloc(((IMAGE_COUNT * image_width * image_height) / 3) * 4, image_depth >> 3);
    if (image_base == nil) {
        [self release];
        return nil;
    }

    // Create and load textures for the first time
    [self loadTextures:GL_TRUE];

    // Init fps timer
    //gettimeofday(&cycle_time, NULL);

    drawBG = YES;

    // Call for a redisplay
    noDisplay = YES;
    PSXDisplay.Disabled = 1;
    [self setNeedsDisplay:true];

    return self;
}
And here is the "render screen" function which basically...renders the screen.
- (void)renderScreen
{
    int bufferIndex = whichImage;
    glBindTexture(GL_TEXTURE_RECTANGLE_EXT, bufferIndex+1);
    glUseProgram(program);
    int loc = glGetUniformLocation(program, "texture");
    glUniform1i(loc, bufferIndex+1);
    glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0, image_width, image_height, GL_BGRA, image_type, image[bufferIndex]);

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f);
        glVertex2f(-1.0f, 1.0f);

        glTexCoord2f(0.0f, image_height);
        glVertex2f(-1.0f, -1.0f);

        glTexCoord2f(image_width, image_height);
        glVertex2f(1.0f, -1.0f);

        glTexCoord2f(image_width, 0.0f);
        glVertex2f(1.0f, 1.0f);
    glEnd();

    [[self openGLContext] flushBuffer];
    [NSOpenGLContext clearCurrentContext];
    //[glLock unlock];
}
And finally, here's the shader:
uniform sampler2DRect texture;

void main() {
    vec4 color, texel;
    color = gl_Color;
    texel = texture2DRect(texture, gl_TexCoord[0].xy);
    color *= texel;

    // Begin Shader
    float gray = 0.0;
    gray += (color.r + color.g + color.b) / 3.0;
    color = vec4(gray, gray, gray, color.a);
    // End Shader

    gl_FragColor = color;
}
The loading and using of shaders works, since I am able to turn the screen all red with this shader:
void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
If the shader contains a syntax error, I get an error message from the LoadShader function, etc. If I remove the use of the shader, everything works normally.
I think the problem comes from the "passing the texture as a uniform parameter" part. But these are my very first steps with OpenGL and I can't be sure of anything.

Texture samplers have to be set to the index of the active texture unit, not to the texture object's name. So, for example, after glActiveTexture(GL_TEXTURE3) the sampler must be set to 3 as well. In your case the number should be 0, since you never switch away from the default unit, yet you pass bufferIndex+1 (the texture name) to glUniform1i.
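A minimal sketch of the corrected calls in renderScreen, assuming the texture stays bound to the default unit GL_TEXTURE0:

glActiveTexture(GL_TEXTURE0);                            // texture unit 0 (the default)
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, bufferIndex+1);  // the texture *name* goes here
glUseProgram(program);
int loc = glGetUniformLocation(program, "texture");
glUniform1i(loc, 0);                                     // the unit *index*, not the texture name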

Related

Unable to Release Quartz 2D and Core Text created Images

I am having a problem deleting my OpenGL ES textures that are created via Quartz 2D and Core Text, as shown below:
- (void)drawText:(CGContextRef)contextP startX:(float)x startY:(float)y withText:(NSString *)standString
{
    CGContextTranslateCTM(contextP, 0, (bottom-top)*2);
    CGContextScaleCTM(contextP, 1.0, -1.0);
    CGRect frameText = CGRectMake(1, 0, (right-left)*2, (bottom-top)*2);
    NSMutableAttributedString *attrString = [[NSMutableAttributedString alloc] initWithString:standString];
    [attrString addAttribute:NSFontAttributeName
                       value:[UIFont fontWithName:@"Helvetica-Bold" size:12.0]
                       range:NSMakeRange(0, attrString.length)];
    CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)(attrString));
    struct CGPath *p = CGPathCreateMutable();
    CGPathAddRect(p, NULL, frameText);
    CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0,0), p, NULL);
    CTFrameDraw(frame, contextP);
    CFRelease(framesetter);
    CFRelease(frame);
    CGPathRelease(p);
    standString = nil;
    attrString = nil;
}
- (UIImage *)drawTexture:(NSArray *)verticesPassed :(UIColor *)statusColour :(NSString *)standString {
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGRect shp = [self boundFromFrame:verticesPassed];
    CGContextRef conPattern = CGBitmapContextCreate(NULL,
                                                    shp.size.width*sceneScalar,
                                                    shp.size.height*sceneScalar,
                                                    8,
                                                    0,
                                                    rgbColorSpace,
                                                    (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextSetLineWidth(conPattern, 2);
    CGContextSetStrokeColorWithColor(conPattern, [UIColor blackColor].CGColor);

    Line *start = [sortedVertices objectAtIndex:0];
    StandPoint *startPoint = start.origin;
    CGContextMoveToPoint(conPattern, ([startPoint.x floatValue]-shp.origin.x)*sceneScalar, ([startPoint.y floatValue]-shp.origin.y)*sceneScalar);
    for (Line *vertice in sortedVertices) {
        StandPoint *standPoint = vertice.origin;
        CGContextAddLineToPoint(conPattern, ([standPoint.x floatValue]-shp.origin.x)*sceneScalar, ([standPoint.y floatValue]-shp.origin.y)*sceneScalar);
    }
    CGContextAddLineToPoint(conPattern, ([startPoint.x floatValue]-shp.origin.x)*sceneScalar, ([startPoint.y floatValue]-shp.origin.y)*sceneScalar);
    CGContextSetFillColorWithColor(conPattern, statusColour.CGColor);
    CGContextDrawPath(conPattern, kCGPathFillStroke);

    [self drawText:conPattern startX:0 startY:20 withText:standString];

    CGImageRef cgImage = CGBitmapContextCreateImage(conPattern);
    UIImage *imgPattern = [[UIImage alloc] initWithCGImage:cgImage];
    //UIImageWriteToSavedPhotosAlbum(imgPattern, nil, nil, nil); // Uncomment if you need to view textures (photo album).

    self.standHeight = (top - bottom)*sceneScalar;
    self.standWidth = (right - left)*sceneScalar;
    self.xOffset = [startPoint.x floatValue]*sceneScalar;
    self.yOffset = (-[startPoint.y floatValue]*sceneScalar)+standHeight;

    CFRelease(cgImage);
    CGContextRelease(conPattern);
    return imgPattern;
}
However, when I try to delete the textures as follows in viewWillDisappear, I can see (in Instruments) that the memory is not released:
for (PolygonObject *stand in standArray) {
    GLuint name = stand.textureInfo.name;
    // delete texture from opengl
    glDeleteTextures(1, &name);
    // set texture info to nil
    stand.textureInfo = nil;
}
I think that something is being retained in the texture somewhere in the code above. Can anyone suggest where?
As far as I can see, there's no memory leak in your code if you're using ARC. However, the UIImage returned from drawTexture may be retained by some other object or by the autorelease pool.
To check whether this is the case, you may subclass UIImage, say UIDebugImage, override the dealloc method, and set a breakpoint there to see if it's called.
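A minimal sketch of that debugging subclass, assuming ARC (so no [super dealloc] call); the UIDebugImage name is just the one suggested above:

@interface UIDebugImage : UIImage
@end

@implementation UIDebugImage
- (void)dealloc {
    // Set a breakpoint here (or watch the log) to see when the image is actually freed.
    NSLog(@"UIDebugImage deallocated");
}
@end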

Rotate, Change Colors and Get RGB565 data from NSImage

I have found myself in a situation where I have several NSImage objects that I need to rotate by 90 degrees, change all pixels of one colour to another colour, and then get the RGB565 data representation as an NSData object.
I found the vImageConvert_ARGB8888toRGB565 function in the Accelerate framework so this should be able to do the RGB565 output.
There are a few UIImage rotation examples I have found here on Stack Overflow, but I'm having trouble converting them to NSImage, as it appears I have to use NSGraphicsContext rather than CGContextRef?
Ideally I would like these in an NSImage category so I can just call:
NSImage *rotated = [inputImage rotateByDegrees:90];
NSImage *colored = [rotated changeColorFrom:[NSColor redColor] toColor:[NSColor blackColor]];
NSData *rgb565 = [colored rgb565Data];
I just don't know where to start as image manipulation is new to me.
I appreciate any help I can get.
Edit (22/04/2013)
I have managed to piece this code together to generate the RGB565 data, but it generates it upside down and with some small artefacts. I assume the first is due to the different coordinate systems being used, and the second is possibly due to me going from PNG to BMP. I will do some more testing using a BMP to start with, and also a non-transparent PNG.
- (NSData *)RGB565Data
{
    CGContextRef cgctx = CreateARGBBitmapContext(self.CGImage);
    if (cgctx == NULL)
        return nil;

    size_t w = CGImageGetWidth(self.CGImage);
    size_t h = CGImageGetHeight(self.CGImage);
    CGRect rect = {{0,0},{w,h}};
    CGContextDrawImage(cgctx, rect, self.CGImage);

    void *data = CGBitmapContextGetData(cgctx);
    CGContextRelease(cgctx);
    if (!data)
        return nil;

    vImage_Buffer src;
    src.data = data;
    src.width = w;
    src.height = h;
    src.rowBytes = (w * 4);

    void *destData = malloc((w * 2) * h);
    vImage_Buffer dst;
    dst.data = destData;
    dst.width = w;
    dst.height = h;
    dst.rowBytes = (w * 2);

    vImageConvert_ARGB8888toRGB565(&src, &dst, 0);

    size_t dataSize = 2 * w * h; // RGB565 = two 5-bit components and one 6-bit (16 bits / 2 bytes)
    NSData *RGB565Data = [NSData dataWithBytes:dst.data length:dataSize];
    free(destData);
    return RGB565Data;
}
- (CGImageRef)CGImage
{
    return [self CGImageForProposedRect:NULL context:[NSGraphicsContext currentContext] hints:nil];
}
CGContextRef CreateARGBBitmapContext(CGImageRef inImage)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    bitmapBytesPerRow = (int)(pixelsWide * 4);
    bitmapByteCount = (int)(bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
        return nil;

    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        CGColorSpaceRelease(colorSpace);
        return nil;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    CGColorSpaceRelease(colorSpace);

    return context;
}
For most of this, you'll want to use Core Image.
Rotation you can do with the CIAffineTransform filter. This takes an NSAffineTransform object. You may have already worked with that class before. (You could do the rotation with NSImage itself, but it's easier with Core Image and you'll probably need to use it for the next step anyway.)
I don't know what you mean by “change the colour of pixels that are one colour to another colour”; that could mean any of a lot of different things. Chances are, though, there's a filter for that.
I also don't know why you need 565 data specifically, but assuming you have a real need for that, you're correct that that function will be involved. Use CIContext's lowest-level rendering method to get 8-bit-per-component ARGB output, and then use that vImage function to convert it to 565 RGB.
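A rough sketch of that last step, assuming an existing CIContext *context and CIImage *image (render:toBitmap:rowBytes:bounds:format:colorSpace: being the low-level method referred to):

CGRect extent = [image extent];
size_t w = (size_t)extent.size.width;
size_t h = (size_t)extent.size.height;
void *argb = malloc(w * h * 4);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();

// Render the CIImage into raw 8-bit-per-component ARGB memory...
[context render:image
       toBitmap:argb
       rowBytes:w * 4
         bounds:extent
         format:kCIFormatARGB8
     colorSpace:cs];
CGColorSpaceRelease(cs);

// ...then hand that memory to vImage for the 565 conversion.
vImage_Buffer src = { argb, h, w, w * 4 };
void *rgb565 = malloc(w * h * 2);
vImage_Buffer dst = { rgb565, h, w, w * 2 };
vImageConvert_ARGB8888toRGB565(&src, &dst, kvImageNoFlags);
free(argb);
// rgb565 now holds the 16-bit pixels; wrap it in NSData and free it as appropriate.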
I have managed to get what I want by using NSBitmapImageRep (accessing it with a bit of a hack). If anyone knows a better way of doing this, please do share.
The - (NSBitmapImageRep*)bitmap method is my hack. The NSImage starts off having only an NSBitmapImageRep; however, after the rotation method a CIImageRep is added, which takes priority over the NSBitmapImageRep and breaks the colour code (as NSImage renders the CIImageRep, which doesn't get coloured).
BitmapImage.m (Subclass of NSImage)
CGContextRef CreateARGBBitmapContext(CGImageRef inImage)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    bitmapBytesPerRow = (int)(pixelsWide * 4);
    bitmapByteCount = (int)(bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
        return nil;

    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        CGColorSpaceRelease(colorSpace);
        return nil;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    CGColorSpaceRelease(colorSpace);

    return context;
}
- (NSData *)RGB565Data
{
    CGContextRef cgctx = CreateARGBBitmapContext(self.CGImage);
    if (cgctx == NULL)
        return nil;

    size_t w = CGImageGetWidth(self.CGImage);
    size_t h = CGImageGetHeight(self.CGImage);
    CGRect rect = {{0,0},{w,h}};
    CGContextDrawImage(cgctx, rect, self.CGImage);

    void *data = CGBitmapContextGetData(cgctx);
    CGContextRelease(cgctx);
    if (!data)
        return nil;

    vImage_Buffer src;
    src.data = data;
    src.width = w;
    src.height = h;
    src.rowBytes = (w * 4);

    void *destData = malloc((w * 2) * h);
    vImage_Buffer dst;
    dst.data = destData;
    dst.width = w;
    dst.height = h;
    dst.rowBytes = (w * 2);

    vImageConvert_ARGB8888toRGB565(&src, &dst, 0);

    size_t dataSize = 2 * w * h; // RGB565 = two 5-bit components and one 6-bit (16 bits / 2 bytes)
    NSData *RGB565Data = [NSData dataWithBytes:dst.data length:dataSize];
    free(destData);
    return RGB565Data;
}
- (NSBitmapImageRep*)bitmap
{
    NSBitmapImageRep *bitmap = nil;
    NSMutableArray *repsToRemove = [NSMutableArray array];

    // Iterate through the representations that back the NSImage
    for (NSImageRep *rep in self.representations)
    {
        // If the representation is a bitmap
        if ([rep isKindOfClass:[NSBitmapImageRep class]])
        {
            bitmap = [(NSBitmapImageRep*)rep retain];
            break;
        }
        else
        {
            [repsToRemove addObject:rep];
        }
    }

    // If no bitmap representation was found, we create one (this shouldn't occur)
    if (bitmap == nil)
    {
        bitmap = [[[NSBitmapImageRep alloc] initWithCGImage:self.CGImage] retain];
        [self addRepresentation:bitmap];
    }

    for (NSImageRep *rep2 in repsToRemove)
    {
        [self removeRepresentation:rep2];
    }

    return [bitmap autorelease];
}
- (NSColor*)colorAtX:(NSInteger)x y:(NSInteger)y
{
    return [self.bitmap colorAtX:x y:y];
}

- (void)setColor:(NSColor*)color atX:(NSInteger)x y:(NSInteger)y
{
    [self.bitmap setColor:color atX:x y:y];
}
NSImage+Extra.m (NSImage Category)
- (CGImageRef)CGImage
{
    return [self CGImageForProposedRect:NULL context:[NSGraphicsContext currentContext] hints:nil];
}
Usage
- (IBAction)load:(id)sender
{
    NSOpenPanel *openDlg = [NSOpenPanel openPanel];
    [openDlg setCanChooseFiles:YES];
    [openDlg setCanChooseDirectories:YES];
    if ([openDlg runModalForDirectory:nil file:nil] == NSOKButton)
    {
        NSArray *files = [openDlg filenames];
        for (int i = 0; i < [files count]; i++)
        {
            NSString *fileName = [files objectAtIndex:i];
            BitmapImage *image = [[BitmapImage alloc] initWithContentsOfFile:fileName];
            imageView.image = image;
        }
    }
}
- (IBAction)colorize:(id)sender
{
    float width = imageView.image.size.width;
    float height = imageView.image.size.height;
    BitmapImage *img = (BitmapImage*)imageView.image;
    NSColor *newColor = [img colorAtX:1 y:1];
    for (int x = 0; x <= width; x++)
    {
        for (int y = 0; y <= height; y++)
        {
            if ([img colorAtX:x y:y] == newColor)
            {
                [img setColor:[NSColor redColor] atX:x y:y];
            }
        }
    }
    [imageView setNeedsDisplay:YES];
}
- (IBAction)rotate:(id)sender
{
    BitmapImage *img = (BitmapImage*)imageView.image;
    BitmapImage *newImg = [img rotate90DegreesClockwise:NO];
    imageView.image = newImg;
}
Edit (24/04/2013)
I have changed the following code:
- (RGBColor)colorAtX:(NSInteger)x y:(NSInteger)y
{
    NSUInteger components[4];
    [self.bitmap getPixel:components atX:x y:y];
    //NSLog(@"R: %ld, G:%ld, B:%ld", components[0], components[1], components[2]);
    RGBColor color = {components[0], components[1], components[2]};
    return color;
}

- (BOOL)color:(RGBColor)a isEqualToColor:(RGBColor)b
{
    return ((a.red == b.red) && (a.green == b.green) && (a.blue == b.blue));
}

- (void)setColor:(RGBColor)color atX:(NSUInteger)x y:(NSUInteger)y
{
    NSUInteger components[4] = {(NSUInteger)color.red, (NSUInteger)color.green, (NSUInteger)color.blue, 255};
    //NSLog(@"R: %ld, G: %ld, B: %ld", components[0], components[1], components[2]);
    [self.bitmap setPixel:components atX:x y:y];
}
- (IBAction)colorize:(id)sender
{
    float width = imageView.image.size.width;
    float height = imageView.image.size.height;
    BitmapImage *img = (BitmapImage*)imageView.image;
    RGBColor oldColor = [img colorAtX:0 y:0];
    RGBColor newColor;// = {255, 0, 0};
    newColor.red = 255;
    newColor.green = 0;
    newColor.blue = 0;
    for (int x = 0; x <= width; x++)
    {
        for (int y = 0; y <= height; y++)
        {
            if ([img color:[img colorAtX:x y:y] isEqualToColor:oldColor])
            {
                [img setColor:newColor atX:x y:y];
            }
        }
    }
    [imageView setNeedsDisplay:YES];
}
But now it changes the pixels to red the first time and then blue the second time the colorize method is called.
Edit 2 (24/04/2013)
The following code fixes it. It was because the rotation code was adding an alpha channel to the NSBitmapImageRep.
- (RGBColor)colorAtX:(NSInteger)x y:(NSInteger)y
{
    if (self.bitmap.hasAlpha)
    {
        NSUInteger components[4];
        [self.bitmap getPixel:components atX:x y:y];
        RGBColor color = {components[1], components[2], components[3]};
        return color;
    }
    else
    {
        NSUInteger components[3];
        [self.bitmap getPixel:components atX:x y:y];
        RGBColor color = {components[0], components[1], components[2]};
        return color;
    }
}

- (void)setColor:(RGBColor)color atX:(NSUInteger)x y:(NSUInteger)y
{
    if (self.bitmap.hasAlpha)
    {
        NSUInteger components[4] = {255, (NSUInteger)color.red, (NSUInteger)color.green, (NSUInteger)color.blue};
        [self.bitmap setPixel:components atX:x y:y];
    }
    else
    {
        NSUInteger components[3] = {color.red, color.green, color.blue};
        [self.bitmap setPixel:components atX:x y:y];
    }
}
OK, I decided to spend the day researching Peter's suggestion of using Core Image.
I had done some research previously and decided it was too hard, but after an entire day of research I finally worked out what I needed to do, and amazingly it couldn't be easier.
Early on I had decided that the Apple ChromaKey Core Image example would be a great starting point, but the example code frightened me off due to the 3-dimensional colour cube. After watching the WWDC 2012 video on Core Image and finding some sample code on GitHub (https://github.com/vhbit/ColorCubeSample), I decided to jump in and just give it a go.
Here are the important parts of the working code. I haven't included the RGB565Data method as I haven't written it yet, but it should be easy using the method Peter suggested:
CIImage+Extras.h
- (NSImage*) NSImage;
- (CIImage*) imageRotated90DegreesClockwise:(BOOL)clockwise;
- (CIImage*) imageWithChromaColor:(NSColor*)chromaColor BackgroundColor:(NSColor*)backColor;
- (NSColor*) colorAtX:(NSUInteger)x y:(NSUInteger)y;
CIImage+Extras.m
- (NSImage*) NSImage
{
    CGContextRef cg = [[NSGraphicsContext currentContext] graphicsPort];
    CIContext *context = [CIContext contextWithCGContext:cg options:nil];
    CGImageRef cgImage = [context createCGImage:self fromRect:self.extent];
    NSImage *image = [[NSImage alloc] initWithCGImage:cgImage size:NSZeroSize];
    return [image autorelease];
}
- (CIImage*) imageRotated90DegreesClockwise:(BOOL)clockwise
{
    CIImage *im = self;
    CIFilter *f = [CIFilter filterWithName:@"CIAffineTransform"];
    NSAffineTransform *t = [NSAffineTransform transform];
    [t rotateByDegrees:clockwise ? -90 : 90];
    [f setValue:t forKey:@"inputTransform"];
    [f setValue:im forKey:@"inputImage"];
    im = [f valueForKey:@"outputImage"];

    CGRect extent = [im extent];
    f = [CIFilter filterWithName:@"CIAffineTransform"];
    t = [NSAffineTransform transform];
    [t translateXBy:-extent.origin.x
                yBy:-extent.origin.y];
    [f setValue:t forKey:@"inputTransform"];
    [f setValue:im forKey:@"inputImage"];
    im = [f valueForKey:@"outputImage"];

    return im;
}
- (CIImage*) imageWithChromaColor:(NSColor*)chromaColor BackgroundColor:(NSColor*)backColor
{
    CIImage *im = self;

    CIColor *backCIColor = [[CIColor alloc] initWithColor:backColor];
    CIImage *backImage = [CIImage imageWithColor:backCIColor];
    backImage = [backImage imageByCroppingToRect:self.extent];
    [backCIColor release];

    float chroma[3];
    chroma[0] = chromaColor.redComponent;
    chroma[1] = chromaColor.greenComponent;
    chroma[2] = chromaColor.blueComponent;

    // Allocate memory
    const unsigned int size = 64;
    const unsigned int cubeDataSize = size * size * size * sizeof(float) * 4;
    float *cubeData = (float *)malloc(cubeDataSize);
    float rgb[3];//, *c = cubeData;

    // Populate cube with a simple gradient going from 0 to 1
    size_t offset = 0;
    for (int z = 0; z < size; z++){
        rgb[2] = ((double)z)/(size-1); // Blue value
        for (int y = 0; y < size; y++){
            rgb[1] = ((double)y)/(size-1); // Green value
            for (int x = 0; x < size; x++){
                rgb[0] = ((double)x)/(size-1); // Red value
                float alpha = ((rgb[0] == chroma[0]) && (rgb[1] == chroma[1]) && (rgb[2] == chroma[2])) ? 0.0 : 1.0;
                cubeData[offset]   = rgb[0] * alpha;
                cubeData[offset+1] = rgb[1] * alpha;
                cubeData[offset+2] = rgb[2] * alpha;
                cubeData[offset+3] = alpha;
                offset += 4;
            }
        }
    }

    // Create memory with the cube data
    NSData *data = [NSData dataWithBytesNoCopy:cubeData
                                        length:cubeDataSize
                                  freeWhenDone:YES];
    CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
    [colorCube setValue:[NSNumber numberWithInt:size] forKey:@"inputCubeDimension"];
    // Set data for cube
    [colorCube setValue:data forKey:@"inputCubeData"];
    [colorCube setValue:im forKey:@"inputImage"];
    im = [colorCube valueForKey:@"outputImage"];

    CIFilter *sourceOver = [CIFilter filterWithName:@"CISourceOverCompositing"];
    [sourceOver setValue:im forKey:@"inputImage"];
    [sourceOver setValue:backImage forKey:@"inputBackgroundImage"];
    im = [sourceOver valueForKey:@"outputImage"];

    return im;
}
- (NSColor*)colorAtX:(NSUInteger)x y:(NSUInteger)y
{
    NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc] initWithCIImage:self];
    NSColor *color = [bitmap colorAtX:x y:y];
    [bitmap release];
    return color;
}

Flood fill Crash

I have been trying to get a simple flood fill algorithm working for an iPhone app that I am developing, and I just can't get it working correctly.
I have got the actual process to work great; however, the app will crash when the fill is too large. From what I can tell, it's because the call stack is overflowing from all of the nested recursive calls. From what I have read, I need to implement my own explicit stack, but I can't work out how this works.
typedef struct {
    int red;
    int green;
    int blue;
} color;

@interface EMFloodTest : UIViewController {
    UIImageView *mainImage;
    unsigned char *imageData;
    color selColor;
    color newColor;
    int maxByte;
}
@end

@implementation EMFloodTest

- (void)setupImageData {
    CGImageRef imageRef = mainImage.image.CGImage;
    if (imageRef == NULL) { return; }
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    maxByte = height * width * 4;
    imageData = malloc(height * width * 4);
    CGContextRef context = CGBitmapContextCreate(imageData, width, height, bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
}
- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
    if (self) {
        mainImage = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"Color6.png"]];
        [self.view addSubview:mainImage];
        newColor.red = 255;
        newColor.green = 94;
        newColor.blue = 0;
        [self setupImageData];
    }
    return self;
}
- (void)updateImage {
    CGImageRef imageRef = mainImage.image.CGImage;
    if (imageRef == NULL) { return; }
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(imageData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    imageRef = CGBitmapContextCreateImage(context);
    mainImage.image = [UIImage imageWithCGImage:imageRef];
    CGContextRelease(context);
}
- (void)setPixel:(NSUInteger)byte toColor:(color)color {
    imageData[byte]   = color.red;
    imageData[byte+1] = color.green;
    imageData[byte+2] = color.blue;
}

- (BOOL)testByte:(NSInteger)byte againstColor:(color)color {
    if (imageData[byte] == color.red && imageData[byte+1] == color.green && imageData[byte+2] == color.blue) {
        return YES;
    } else {
        return NO;
    }
}
// This is where the flood fill starts. It's a basic implementation but crashes when filling large sections.
- (void)floodFillFrom:(NSInteger)byte bytesPerRow:(NSInteger)bpr {
    int u = byte - bpr;
    int r = byte + 4;
    int d = byte + bpr;
    int l = byte - 4;
    if ([self testByte:u againstColor:selColor]) {
        [self setPixel:u toColor:newColor];
        [self floodFillFrom:u bytesPerRow:bpr];
    }
    if ([self testByte:r againstColor:selColor]) {
        [self setPixel:r toColor:newColor];
        [self floodFillFrom:r bytesPerRow:bpr];
    }
    if ([self testByte:d againstColor:selColor]) {
        [self setPixel:d toColor:newColor];
        [self floodFillFrom:d bytesPerRow:bpr];
    }
    if ([self testByte:l againstColor:selColor]) {
        [self setPixel:l toColor:newColor];
        [self floodFillFrom:l bytesPerRow:bpr];
    }
}
- (void)startFillFrom:(NSInteger)byte bytesPerRow:(NSInteger)bpr {
    if (imageData[byte] == 0 && imageData[byte+1] == 0 && imageData[byte+2] == 0) {
        NSLog(@"Black Selected");
        return;
    } else if ([self testByte:byte againstColor:newColor]) {
        NSLog(@"Same Fill Color");
    } else {
        // code goes here
        NSLog(@"Color to be replaced");
        [self floodFillFrom:byte bytesPerRow:bpr];
        [self updateImage];
    }
}
- (void)selectedColor:(CGPoint)point {
    CGImageRef imageRef = mainImage.image.CGImage;
    if (imageRef == NULL) { return; }
    if (imageData == NULL) { return; }
    NSInteger width = CGImageGetWidth(imageRef);
    NSInteger byteNumber = 4*((width*round(point.y))+round(point.x));
    NSInteger bytesPerPixel = 4;
    NSInteger bytesPerRow = bytesPerPixel * width;
    selColor.red = imageData[byteNumber];
    selColor.green = imageData[byteNumber + 1];
    selColor.blue = imageData[byteNumber + 2];
    NSLog(@"Selected Color, RGB: %i, %i, %i", selColor.red, selColor.green, selColor.blue);
    NSLog(@"Byte:%i", byteNumber);
    [self startFillFrom:byteNumber bytesPerRow:bytesPerRow];
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:mainImage];
    [self selectedColor:location];
}
Any help on how I might be able to implement a stack or even use another algorithm would be greatly appreciated.
Best,
Darren
The problem is the recursive implementation.
Recursive calls that go too deep cause a stack overflow error.
You have to implement your algorithm in an iterative manner.
For an iterative example of flood fill, see:
UIImageScanlineFloodfill
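For illustration, here is a minimal sketch of the question's floodFillFrom:bytesPerRow: rewritten with an explicit stack of byte offsets instead of recursion. This is an assumption-laden sketch, not code from the linked project; the bounds checks and stack sizing in particular are mine:

- (void)floodFillFrom:(NSInteger)byte bytesPerRow:(NSInteger)bpr {
    // One slot per pixel is enough: a pixel is recolored before it is
    // pushed, so it can never be pushed twice.
    NSInteger capacity = maxByte / 4;
    NSInteger *stack = malloc(sizeof(NSInteger) * capacity);
    NSInteger top = 0;

    [self setPixel:byte toColor:newColor];
    stack[top++] = byte;

    while (top > 0) {
        NSInteger current = stack[--top];
        NSInteger neighbours[4] = { current - bpr, current + 4,
                                    current + bpr, current - 4 };
        for (int i = 0; i < 4; i++) {
            NSInteger n = neighbours[i];
            if (n < 0 || n >= maxByte) continue;    // stay inside the buffer
            if ([self testByte:n againstColor:selColor]) {
                [self setPixel:n toColor:newColor]; // recolor first...
                stack[top++] = n;                   // ...then remember to visit it
            }
        }
    }
    free(stack);
}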

OpenGL Texture from CALayer (AVPlayerLayer)

I have an AVPlayerLayer which I would like to create an OpenGL texture out of. I'm comfortable with OpenGL textures, and even comfortable with converting a CGImageRef into an OpenGL texture. It seems to me the code below should work, but I get just plain black. What am I doing wrong? Do I need to set any properties on the CALayer / AVPlayerLayer first?
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
int width = (int)[layer bounds].size.width;
int height = (int)[layer bounds].size.height;

CGContextRef context = CGBitmapContextCreate(NULL,
                                             width,
                                             height,
                                             8,
                                             width * 4,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

if (context == NULL) {
    ofLog(OF_LOG_ERROR, "getTextureFromLayer: failed to create context 1");
    return;
}

[[layer presentationLayer] renderInContext:context];

CGImageRef cgImage = CGBitmapContextCreateImage(context);

int bytesPerPixel = CGImageGetBitsPerPixel(cgImage)/8;
if(bytesPerPixel == 3) bytesPerPixel = 4;

GLubyte *pixels = (GLubyte *) malloc(width * height * bytesPerPixel);

CGContextRelease(context);
context = CGBitmapContextCreate(pixels,
                                width,
                                height,
                                CGImageGetBitsPerComponent(cgImage),
                                width * bytesPerPixel,
                                CGImageGetColorSpace(cgImage),
                                kCGImageAlphaPremultipliedLast);

if(context == NULL) {
    ofLog(OF_LOG_ERROR, "getTextureFromLayer: failed to create context 2");
    free(pixels);
    return;
}

CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), cgImage);

int glMode;
switch(bytesPerPixel) {
    case 1:
        glMode = GL_LUMINANCE;
        break;
    case 3:
        glMode = GL_RGB;
        break;
    case 4:
    default:
        glMode = GL_RGBA;
        break;
}

if(texture.bAllocated() == false || texture.getWidth() != width || texture.getHeight() != height) {
    NSLog(@"getTextureFromLayer: allocating texture %i, %i\n", width, height);
    texture.allocate(width, height, glMode, true);
}

// test texture
// for(int i=0; i<width*height*4; i++) pixels[i] = ofRandomuf() * 255;

texture.loadData(pixels, width, height, glMode);

CGContextRelease(context);
CFRelease(cgImage);
free(pixels);
P.S. The variable 'texture' is a C++ OpenGL (ES-compatible) texture object which I know works. If I uncomment the 'test texture' for-loop filling the texture with random noise, I can see that, so the problem is definitely somewhere earlier.
UPDATE
In response to Nick Weaver's reply I tried a different approach, and now I'm always getting NULL back from copyNextSampleBuffer with status == 3 (AVAssetReaderStatusFailed). Am I missing something?
variables
AVPlayer *videoPlayer;
AVPlayerLayer *videoLayer;
AVAssetReader *videoReader;
AVAssetReaderTrackOutput *videoOutput;
init
videoPlayer = [[AVPlayer alloc] initWithURL:[NSURL fileURLWithPath:[NSString stringWithUTF8String:videoPath.c_str()]]];
if(videoPlayer == nil) {
    NSLog(@"videoPlayer == nil ERROR LOADING %s\n", videoPath.c_str());
} else {
    NSLog(@"videoPlayer: %@", videoPlayer);
    videoLayer = [[AVPlayerLayer playerLayerWithPlayer:videoPlayer] retain];
    videoLayer.frame = [ThreeDView instance].bounds;
    // [[ThreeDView instance].layer addSublayer:videoLayer]; // test to see if it's loading and running

    AVAsset *asset = videoPlayer.currentItem.asset;
    NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
    NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (NSString*)kCVPixelBufferPixelFormatTypeKey, nil];

    videoReader = [[AVAssetReader alloc] initWithAsset:asset error:nil];
    videoOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:[tracks objectAtIndex:0] outputSettings:settings];
    [videoReader addOutput:videoOutput];
    [videoReader startReading];
}
draw loop
if(videoPlayer == 0) {
    ofLog(OF_LOG_WARNING, "Shot::drawVideo: videoPlayer == 0");
    return;
}
if(videoOutput == 0) {
    ofLog(OF_LOG_WARNING, "Shot::drawVideo: videoOutput == 0");
    return;
}

CMSampleBufferRef sampleBuffer = [videoOutput copyNextSampleBuffer];
if(sampleBuffer == 0) {
    ofLog(OF_LOG_ERROR, "Shot::drawVideo: sampleBuffer == 0, status: %i", videoReader.status);
    return;
}

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CFRelease(sampleBuffer);

CVPixelBufferLockBaseAddress(imageBuffer, 0);
unsigned char *pixels = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);

int width = CVPixelBufferGetWidth(imageBuffer);
int height = CVPixelBufferGetHeight(imageBuffer);

if(videoTexture.bAllocated() == false || videoTexture.getWidth() != width || videoTexture.getHeight() != height) {
    NSLog(@"Shot::drawVideo() allocating texture %i, %i\n", width, height);
    videoTexture.allocate(width, height, GL_RGBA, true);
}
videoTexture.loadData(pixels, width, height, GL_BGRA);

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
I think the question "iOS4: how do I use video file as an OpenGL texture?" will be helpful here.

How to create a CGGradient for my UIView subclass?

Does anyone know how to create a CGGradient that will fill my view?
My current code is below; it fills the UIView with a red rectangle. I want a gradient (from black to grey, for instance) instead of the rect:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect r;
    r.origin.x = 0.;
    r.origin.y = 0.;
    r.size.width = rect.size.width;
    r.size.height = rect.size.height;
    CGContextSetRGBFillColor(context, 1., 0., 0., 1.);
    CGContextFillRect(context, r);
}
In my answer to this question, I provide code for drawing a gloss gradient within a UIView. The colors and drawing positions can be modified from that to form whatever linear gradient you need.
This is a subclass where you can choose the colors.
.h file
#import <UIKit/UIKit.h>

// getRGBA: is invoked on the UIColor arguments below, so it is declared
// as a UIColor category rather than as a GradientView method.
@interface UIColor (GradientViewSupport)
- (void)getRGBA:(CGFloat*)buffer;
@end

@interface GradientView : UIView {
    CGGradientRef gradient;
}
@property(nonatomic, assign) CGGradientRef gradient;
- (id)initWithGradient:(CGGradientRef)gradient;
- (id)initWithColor:(UIColor*)top bottom:(UIColor*)bot;
- (void)setGradientWithColor:(UIColor*)top bottom:(UIColor*)bot;
@end
.m file
#import "GradientView.h"
#implementation GradientView
#synthesize gradient;
- (id)initWithGradient:(CGGradientRef)grad {
self = [super init];
if(self){
[self setGradient:grad];
}
return self;
}
- (id)initWithColor:(UIColor*)top bottom:(UIColor*)bot {
self = [super init];
if(self){
[self setGradientWithColor:top bottom:bot];
}
return self;
}
- (void)setGradient:(CGGradientRef)g {
if(gradient != NULL && g != gradient){
CGGradientRelease(gradient);
}
if(g != gradient){
CGGradientRetain(g);
}
gradient = g;
[self setNeedsDisplay];
}
- (void)setGradientWithColor:(UIColor*)top bottom:(UIColor*)bot {
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGFloat clr[8];
[top getRGBA:clr];
[bot getRGBA:clr+4] ;
CGGradientRef grad = CGGradientCreateWithColorComponents(rgb, clr, NULL, sizeof(clr)/(sizeof(clr[0])*4));
[self setGradient:grad];
CGColorSpaceRelease(rgb);
CGGradientRelease(grad);
}
- (void)drawRect:(CGRect)rect {
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextDrawLinearGradient(c, gradient, CGPointMake(0, 0), CGPointMake(0, rect.size.height), 0);
}

- (void)dealloc {
    CGGradientRelease(gradient);
    [super dealloc];
}

@end

// getRGBA: lives in a UIColor category, since it is called on the
// UIColor arguments of setGradientWithColor:bottom: above.
@implementation UIColor (GradientViewSupport)

- (void)getRGBA:(CGFloat*)buffer {
    CGColorRef clr = [self CGColor];
    NSInteger n = CGColorGetNumberOfComponents(clr);
    const CGFloat *colors = CGColorGetComponents(clr);
    // TODO: add other ColorSpaces support
    switch (n) {
        case 2: // grayscale + alpha
            for(int i = 0; i < 3; ++i){
                buffer[i] = colors[0];
            }
            buffer[3] = CGColorGetAlpha(clr);
            break;
        case 3: // RGB, no alpha
            for(int i = 0; i < 3; ++i){
                buffer[i] = colors[i];
            }
            buffer[3] = 1.0;
            break;
        case 4: // RGBA
            for(int i = 0; i < 4; ++i){
                buffer[i] = colors[i];
            }
            break;
        default:
            break;
    }
}

@end
Use CGGradientCreateWithColorComponents or CGGradientCreateWithColors to create the gradient. (The latter takes CGColor objects.) Then, use CGContextDrawLinearGradient or CGContextDrawRadialGradient to draw it.
A linear gradient will extend infinitely in at least the two directions perpendicular to the line of the gradient. A radial gradient extends infinitely in every direction. To prevent the gradient from spilling outside your view, you'll probably need to add the view's bounds to the clipping path using CGContextClipToRect.
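For example, a minimal drawRect: along those lines, drawing a black-to-grey linear gradient clipped to the view's bounds (a sketch only; adjust the color stops and end points to taste):

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();

    // Two RGBA stops: black at the top, grey at the bottom.
    CGFloat components[8] = { 0.0, 0.0, 0.0, 1.0,
                              0.5, 0.5, 0.5, 1.0 };
    CGGradientRef gradient = CGGradientCreateWithColorComponents(rgb, components, NULL, 2);

    // Keep the gradient from spilling outside the view.
    CGContextClipToRect(context, self.bounds);
    CGContextDrawLinearGradient(context, gradient,
                                CGPointMake(0, 0),
                                CGPointMake(0, self.bounds.size.height),
                                0);

    CGGradientRelease(gradient);
    CGColorSpaceRelease(rgb);
}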