NSImage initWithData memory leak - Objective-C

When I create an NSImage object, I run into a memory leak. I compiled the code with:
clang -o test test.m -framework Foundation -fsanitize=leak -framework CoreGraphics -framework AppKit
The clang I used is the one from this gist.
// test.m
#include <Foundation/Foundation.h>
#include <Foundation/NSURL.h>
#include <dlfcn.h>
#include <stdint.h>
#include <sys/shm.h>
#include <dirent.h>
#import <Cocoa/Cocoa.h>
#import <ImageIO/ImageIO.h>
int main(int argc, const char * argv[]) {
    if (argc < 2) {
        printf("Usage: %s path/to/image\n", argv[0]);
        return 0;
    }
    NSString* path = [NSString stringWithUTF8String:argv[1]];
    NSData* content = [NSData dataWithContentsOfFile:path];
    while (true) {
        NSImage* img = [[NSImage alloc] initWithData:content];
        [img release];
    }
    [content release];
    [path release];
    return 0;
}
Then I ran it with ./test test.tiff, and the leak sanitizer reported that initWithData leaks memory.
While the loop runs, memory consumption keeps increasing.
It works with the answer provided by @Asperi. But when I want to do more work with the NSImage, as in the code below, it crashes: CGImageRelease(cgImg); and [img release]; cannot both be enabled at the same time. If I disable one of them, the code doesn't crash, but memory consumption keeps increasing.
#include <Foundation/Foundation.h>
#include <Foundation/NSURL.h>
#include <dlfcn.h>
#include <stdint.h>
#include <sys/shm.h>
#include <dirent.h>
#import <Cocoa/Cocoa.h>
#import <ImageIO/ImageIO.h>
int main(int argc, const char * argv[]) {
    if (argc < 2) {
        printf("Usage: %s path/to/image\n", argv[0]);
        return 0;
    }
    NSString* path = [NSString stringWithUTF8String:argv[1]];
    NSData* content = [NSData dataWithContentsOfFile:path];
    while (true) {
        @autoreleasepool {
            NSImage* img = [[NSImage alloc] initWithData:content];
            NSLog(@"Image # %p: %@\n", img, img);
            CGImageRef cgImg = [img CGImageForProposedRect:nil context:nil hints:nil];
            if (cgImg) {
                size_t width = CGImageGetWidth(cgImg);
                size_t height = CGImageGetHeight(cgImg);
                CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
                CGContextRef ctx = CGBitmapContextCreate(0, width, height, 8, 0, colorspace, 1);
                CGRect rect = CGRectMake(0, 0, width, height);
                CGContextDrawImage(ctx, rect, cgImg);
                CGColorSpaceRelease(colorspace);
                CGContextRelease(ctx);
                CGImageRelease(cgImg);
            }
            [img release];
        }
    }
    [content release];
    [path release];
    return 0;
}

There might be autoreleased objects created inside the SDK, so try the following (it is always good practice for such loops with Objective-C objects):
while (true) {
    @autoreleasepool {
        NSImage* img = [[NSImage alloc] initWithData:content];
        [img release];
    }
}
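For the second loop, a likely explanation for the crash (an assumption on my part, based on the usual Cocoa/Core Foundation ownership convention): CGImageForProposedRect:context:hints: has no Create or Copy in its name, so the caller does not own the returned CGImageRef and should not call CGImageRelease on it, while the alloc/init'ed NSImage still needs its [img release]. A minimal sketch of the drawing loop under that assumption:
while (true) {
    @autoreleasepool {
        NSImage* img = [[NSImage alloc] initWithData:content];
        // Not obtained via a Create/Copy call, so the caller does not own it: no CGImageRelease.
        CGImageRef cgImg = [img CGImageForProposedRect:nil context:nil hints:nil];
        if (cgImg) {
            size_t width = CGImageGetWidth(cgImg);
            size_t height = CGImageGetHeight(cgImg);
            CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
            CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                                     colorspace, kCGImageAlphaPremultipliedLast);
            CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImg);
            // These were created with Create calls, so they are owned and must be released.
            CGColorSpaceRelease(colorspace);
            CGContextRelease(ctx);
        }
        [img release]; // owned via alloc/init under manual reference counting
        // The pool drains the autoreleased CGImage and any other autoreleased objects here.
    }
}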

Related

NSImage doesn't scale

I'm developing a quick app in which I have a method that should rescale a @2x image to a regular one. The problem is that it doesn't :(
Why?
-(BOOL)createNormalImage:(NSString*)inputRetinaImagePath {
    NSImage *inputRetinaImage = [[NSImage alloc] initWithContentsOfFile:inputRetinaImagePath];
    NSSize size = NSZeroSize;
    size.width = inputRetinaImage.size.width*0.5;
    size.height = inputRetinaImage.size.height*0.5;
    [inputRetinaImage setSize:size];
    NSLog(@"%f",inputRetinaImage.size.height);
    NSBitmapImageRep *imgRep = [[inputRetinaImage representations] objectAtIndex: 0];
    NSData *data = [imgRep representationUsingType: NSPNGFileType properties: nil];
    NSString *outputFilePath = [[inputRetinaImagePath substringToIndex:inputRetinaImagePath.length - 7] stringByAppendingString:@".png"];
    NSLog([@"Normal version file path: " stringByAppendingString:outputFilePath]);
    [data writeToFile:outputFilePath atomically: NO];
    return true;
}
You have to be very wary of the size attribute of an NSImage. It doesn't necessarily refer to a bitmapRepresentation's pixel dimensions; it could refer to the displayed size, for example. An NSImage may have a number of bitmapRepresentations for use at different output sizes.
Likewise, changing the size attribute of an NSImage does nothing to alter the bitmapRepresentations.
So what you need to do is work out the size you want your output image to be, and then draw a new image at that size using a bitmapRepresentation from the source NSImage.
Getting that size depends on how you have obtained your input image and what you know about it. For example, if you are confident that your input image has only one bitmapImageRep you can use this type of thing (as a category on NSImage)
- (NSSize) pixelSize
{
    NSBitmapImageRep* bitmap = [[self representations] objectAtIndex:0];
    return NSMakeSize(bitmap.pixelsWide, bitmap.pixelsHigh);
}
Even if you have a number of bitmapImageReps, the first one should be the largest one, and if that is the size that your Retina image was created at, it should be the Retina size you are after.
When you have worked out your final size, you can make the image:
- (NSImage*) resizeImage:(NSImage*)sourceImage size:(NSSize)size
{
    NSRect targetFrame = NSMakeRect(0, 0, size.width, size.height);
    NSImage* targetImage = nil;
    NSImageRep *sourceImageRep =
        [sourceImage bestRepresentationForRect:targetFrame
                                       context:nil
                                         hints:nil];
    targetImage = [[NSImage alloc] initWithSize:size];
    [targetImage lockFocus];
    [sourceImageRep drawInRect:targetFrame];
    [targetImage unlockFocus];
    return targetImage;
}
update
Here is a more elaborate version of a pixel-size-getting category on NSImage... let's assume nothing about the image, how many imageReps it has, whether it has any bitmapImageReps... this will return the largest pixel dimensions it can find. If it can't find bitMapImageRep pixel dimensions it will use whatever else it can get, which will most likely be bounding box dimensions (used by eps and pdfs).
NSImage+PixelSize.h
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>

@interface NSImage (PixelSize)
- (NSInteger) pixelsWide;
- (NSInteger) pixelsHigh;
- (NSSize) pixelSize;
@end
NSImage+PixelSize.m
#import "NSImage+PixelSize.h"
#implementation NSImage (Extensions)
- (NSInteger) pixelsWide
{
/*
returns the pixel width of NSImage.
Selects the largest bitmapRep by preference
If there is no bitmapRep returns largest size reported by any imageRep.
*/
NSInteger result = 0;
NSInteger bitmapResult = 0;
for (NSImageRep* imageRep in [self representations]) {
if ([imageRep isKindOfClass:[NSBitmapImageRep class]]) {
if (imageRep.pixelsWide > bitmapResult)
bitmapResult = imageRep.pixelsWide;
} else {
if (imageRep.pixelsWide > result)
result = imageRep.pixelsWide;
}
}
if (bitmapResult) result = bitmapResult;
return result;
}
- (NSInteger) pixelsHigh
{
/*
returns the pixel height of NSImage.
Selects the largest bitmapRep by preference
If there is no bitmapRep returns largest size reported by any imageRep.
*/
NSInteger result = 0;
NSInteger bitmapResult = 0;
for (NSImageRep* imageRep in [self representations]) {
if ([imageRep isKindOfClass:[NSBitmapImageRep class]]) {
if (imageRep.pixelsHigh > bitmapResult)
bitmapResult = imageRep.pixelsHigh;
} else {
if (imageRep.pixelsHigh > result)
result = imageRep.pixelsHigh;
}
}
if (bitmapResult) result = bitmapResult;
return result;
}
- (NSSize) pixelSize
{
return NSMakeSize(self.pixelsWide,self.pixelsHigh);
}
#end
You would #import "NSImage+PixelSize.h" in your current file to make it accessible.
With this image category and the resize: method, you would modify your method thus:
//size.width = inputRetinaImage.size.width*0.5;
//size.height = inputRetinaImage.size.height*0.5;
size.width = inputRetinaImage.pixelsWide*0.5;
size.height = inputRetinaImage.pixelsHigh*0.5;
//[inputRetinaImage setSize:size];
NSImage* outputImage = [self resizeImage:inputRetinaImage size:size];
//NSBitmapImageRep *imgRep = [[inputRetinaImage representations] objectAtIndex: 0];
NSBitmapImageRep *imgRep = [[outputImage representations] objectAtIndex: 0];
That should fix things for you (proviso: I haven't tested it on your code)
I modified the script I use to downscale my images for you :)
-(BOOL)createNormalImage:(NSString*)inputRetinaImagePath {
    NSImage *inputRetinaImage = [[NSImage alloc] initWithContentsOfFile:inputRetinaImagePath];

    // determine new size
    NSBitmapImageRep* bitmapImageRep = [[inputRetinaImage representations] objectAtIndex:0];
    NSSize size = NSMakeSize(bitmapImageRep.pixelsWide * 0.5, bitmapImageRep.pixelsHigh * 0.5);
    NSLog(@"size = %@", NSStringFromSize(size));

    // get CGImageRef
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[inputRetinaImage TIFFRepresentation], NULL);
    CGImageRef oldImageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(oldImageRef);
    if (alphaInfo == kCGImageAlphaNone) alphaInfo = kCGImageAlphaNoneSkipLast;

    // Build a bitmap context
    CGContextRef bitmap = CGBitmapContextCreate(NULL, size.width, size.height, 8, 4 * size.width, CGImageGetColorSpace(oldImageRef), alphaInfo);

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, CGRectMake(0, 0, size.width, size.height), oldImageRef);

    // Get an image from the context
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);

    // this does not work in my test.
    NSString *outputFilePath = [[inputRetinaImagePath substringToIndex:inputRetinaImagePath.length - 7] stringByAppendingString:@".png"];

    // but this does!
    NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString* docsDirectory = [paths objectAtIndex:0];
    NSString *newfileName = [docsDirectory stringByAppendingFormat:@"/%@", [outputFilePath lastPathComponent]];

    CFURLRef url = (__bridge CFURLRef)[NSURL fileURLWithPath:newfileName];
    CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(destination, newImageRef, nil);
    if (!CGImageDestinationFinalize(destination)) {
        NSLog(@"Failed to write image to %@", newfileName);
    }
    CFRelease(destination);

    return true;
}

NSImage -> NSData -> NSBitmapImageRep breaks .jpg images?

I have an NSImage and I want to make an OpenGL texture from it. So I do the following:
someNSData = [someNSImage TIFFRepresentation];
someNSBitmapImageRepData = [[NSBitmapImageRep alloc] initWithData:someNSData]
If someNSImage is a .png it works OK. But if someNSImage is a .jpg, the texture comes out broken.
With .png it looks like this: (screenshot)
And the same image in .jpg format looks like this: (screenshot)
What's wrong?
Try this
@implementation NSImage (NSImageToCGImageRef)

- (NSBitmapImageRep *)bitmapImageRepresentation
{
    NSBitmapImageRep *ret = (NSBitmapImageRep *)[self bestRepresentationForDevice:nil];
    if (![ret isKindOfClass:[NSBitmapImageRep class]])
    {
        ret = nil;
        for (NSBitmapImageRep *rep in [self representations])
            if ([rep isKindOfClass:[NSBitmapImageRep class]])
            {
                ret = rep;
                break;
            }
    }
    // if ret is nil we create a new representation
    if (ret == nil)
    {
        NSSize size = [self size];
        size_t width = size.width;
        size_t height = size.height;
        size_t bitsPerComp = 32;
        size_t bytesPerPixel = (bitsPerComp / CHAR_BIT) * 4;
        size_t bytesPerRow = bytesPerPixel * width;
        size_t totalBytes = height * bytesPerRow;
        NSMutableData *data = [NSMutableData dataWithBytesNoCopy:calloc(totalBytes, 1) length:totalBytes freeWhenDone:YES];
        CGColorSpaceRef space = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        CGContextRef ctx = CGBitmapContextCreate([data mutableBytes], width, height, bitsPerComp, bytesPerRow, space, kCGBitmapFloatComponents | kCGImageAlphaPremultipliedLast);
        if (ctx != NULL)
        {
            [NSGraphicsContext saveGraphicsState];
            [NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithGraphicsPort:ctx flipped:[self isFlipped]]];
            [self drawAtPoint:NSZeroPoint fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
            [NSGraphicsContext restoreGraphicsState];
            CGImageRef img = CGBitmapContextCreateImage(ctx);
            ret = [[[NSBitmapImageRep alloc] initWithCGImage:img] autorelease];
            [self addRepresentation:ret];
            CFRelease(img);
            CFRelease(space);
            CGContextRelease(ctx);
        }
        else NSLog(@"%@ Couldn't create CGBitmapContext", self);
    }
    return ret;
}

@end
//in your code
NSBitmapImageRep *tempRep = [image bitmapImageRepresentation];
The width and the height of a texture must be a power of 2, i.e. 128, 256, 512, 1024, etc.
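If you do need power-of-two GL_TEXTURE_2D textures (older hardware without NPOT support), a small helper along these lines can round the bitmap dimensions up before allocating the texture; the function name here is just illustrative:
// Round n up to the next power of two, e.g. 300 -> 512 (illustrative helper, not from the question).
static size_t nextPowerOfTwo(size_t n) {
    size_t p = 1;
    while (p < n) p <<= 1;
    return p;
}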
It looks like your image format isn't 32 bit.
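One way to sidestep such format surprises (a sketch of my own, not the poster's code; width, height and someNSImage are placeholders) is to redraw whatever representation you got into a known 32-bit RGBA bitmap before handing the bytes to glTexImage2D. JPEGs typically decode to 24-bit with no alpha, whereas PNGs often come in as 32-bit RGBA, which would explain why only the .jpg breaks:
// Sketch (assumed names): normalize an NSImage into a freshly drawn 32-bit RGBA bitmap
// so the buffer passed to OpenGL always has the same layout.
NSInteger width = 512, height = 512;   // assumed (power-of-two) texture size
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:width
                      pixelsHigh:height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSCalibratedRGBColorSpace
                     bytesPerRow:4 * width
                    bitsPerPixel:32];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[someNSImage drawInRect:NSMakeRect(0, 0, width, height)
               fromRect:NSZeroRect
              operation:NSCompositeCopy
               fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
// [rep bitmapData] is now tightly packed RGBA8888, suitable for e.g.
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, [rep bitmapData]);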

captureOutput:didOutputSampleBuffer:fromConnection: image buffer size is always 360x480 even on iPad

I am using the captureOutput:didOutputSampleBuffer:fromConnection: delegate method of AVCaptureVideoDataOutput. When testing it on the iPad, the image buffer size is always 360x480, which seems really strange; I would have thought it would be the size of the iPad screen.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    @autoreleasepool {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        /* Lock the image buffer */
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        /* Get information about the image */
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        /* Create a CGImageRef from the CVImageBufferRef */
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef newImage = CGBitmapContextCreateImage(newContext);

        NSLog(@"image size: h %zu, w %zu", height, width);

        /* We unlock the image buffer */
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        CGRect zoom = CGRectMake(self.touchPoint.y, self.touchPoint.x, 120, 120);
        CGImageRef newImage2 = CGImageCreateWithImageInRect(newImage, zoom);

        /* We release some components */
        CGContextRelease(newContext);
        CGColorSpaceRelease(colorSpace);

        UIImage* zoomedImage = [[UIImage alloc] initWithCGImage:newImage2 scale:1.0 orientation:UIImageOrientationUp];
        [self.zoomedView.layer performSelectorOnMainThread:@selector(setContents:) withObject:(__bridge id)zoomedImage.CGImage waitUntilDone:YES];

        CGImageRelease(newImage);
        CGImageRelease(newImage2);
    }
} //end
Is there a reason why the image buffer would be so small, even on iPad?
The quality of the AVCaptureSession is determined by the sessionPreset property, which defaults to AVCaptureSessionPresetHigh. It doesn't care what the resolution of the screen on the capturing device is; the capture quality is a function of the device's camera.
If you want the capture resolution to more closely match the screen resolution, you'll have to change the sessionPreset. Just note that none of the presets correspond directly to any screen resolution, rather they correspond to common video formats, like VGA, 720p, 1080p, etc:
NSString *const AVCaptureSessionPresetPhoto;
NSString *const AVCaptureSessionPresetHigh;
NSString *const AVCaptureSessionPresetMedium;
NSString *const AVCaptureSessionPresetLow;
NSString *const AVCaptureSessionPreset352x288;
NSString *const AVCaptureSessionPreset640x480;
NSString *const AVCaptureSessionPreset1280x720;
NSString *const AVCaptureSessionPreset1920x1080;
NSString *const AVCaptureSessionPresetiFrame960x540;
NSString *const AVCaptureSessionPresetiFrame1280x720;
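For example, to ask for 720p capture buffers (a minimal sketch; session stands in for your configured AVCaptureSession and should be set up before you start it running):
if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    session.sessionPreset = AVCaptureSessionPreset1280x720;  // sample buffers become 1280x720
} else {
    session.sessionPreset = AVCaptureSessionPresetMedium;    // fall back to a preset the device supports
}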

How can I display an array of pixels on a NSWindow?

Very simple question... I have an array of pixels, how do I display them on the screen?
#define WIDTH 10
#define HEIGHT 10
#define SIZE WIDTH*HEIGHT

unsigned short pixels[SIZE];
for (int i = 0; i < WIDTH; i++) {
    for (int j = 0; j < HEIGHT; j++) {
        pixels[j*HEIGHT + i] = 0xFFFF;
    }
}
That's it... now how can I show them on the screen?
Create a new "Cocoa Application" (if you don't know how to create a cocoa application go to Cocoa Dev Center)
Subclass NSView (if you don't know how to subclass a view read section "Create the NSView Subclass")
Set your NSWindow to size 400x400 in Interface Builder
Use this code in your NSView
#import "MyView.h"
#implementation MyView
#define WIDTH 400
#define HEIGHT 400
#define SIZE (WIDTH*HEIGHT)
#define BYTES_PER_PIXEL 2
#define BITS_PER_COMPONENT 5
#define BITS_PER_PIXEL 16
- (id)initWithFrame:(NSRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code here.
}
return self;
}
- (void)drawRect:(NSRect)dirtyRect
{
// Get current context
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
// Colorspace RGB
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Pixel Matrix allocation
unsigned short *pixels = calloc(SIZE, sizeof(unsigned short));
// Random pixels will give you a non-organized RAINBOW
for (int i = 0; i < WIDTH; i++) {
for (int j = 0; j < HEIGHT; j++) {
pixels[i+ j*HEIGHT] = arc4random() % USHRT_MAX;
}
}
// Provider
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, pixels, SIZE, nil);
// CGImage
CGImageRef image = CGImageCreate(WIDTH,
HEIGHT,
BITS_PER_COMPONENT,
BITS_PER_PIXEL,
BYTES_PER_PIXEL*WIDTH,
colorSpace,
kCGImageAlphaNoneSkipFirst,
// xRRRRRGGGGGBBBBB - 16-bits, first bit is ignored!
provider,
nil, //No decode
NO, //No interpolation
kCGRenderingIntentDefault); // Default rendering
// Draw
CGContextDrawImage(context, self.bounds, image);
// Once everything is written on screen we can release everything
CGImageRelease(image);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
}
#end
There's a bunch of ways to do this. One of the more straightforward is to use CGContextDrawImage. In drawRect:
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, bitmap, bitmap_bytes, nil);
CGImageRef img = CGImageCreate(..., provider, ...);
CGDataProviderRelease(provider);
CGContextDrawImage(ctx, dstRect, img);
CGImageRelease(img);
CGImageCreate has a bunch of arguments which I've left out here, as the correct values will depend on what your bitmap format is. See the CGImage reference for details.
Note that, if your bitmap is static, it may make sense to hold on to the CGImageRef instead of disposing of it immediately. You know best how your application works, so you decide whether that makes sense.
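For reference, a filled-in version of that call for one common case (a sketch assuming bitmap holds tightly packed 32-bit RGBA pixels and that width and height are its pixel dimensions, so bitmap_bytes == width * height * 4); it slots in where the elided CGImageCreate(..., provider, ...) appears above:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef img = CGImageCreate(width, height,
                               8,           // bits per component
                               32,          // bits per pixel
                               width * 4,   // bytes per row
                               colorSpace,
                               kCGImageAlphaPremultipliedLast,  // RGBA, alpha last
                               provider,
                               NULL,        // no decode array
                               false,       // no interpolation
                               kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
// then CGContextDrawImage(ctx, dstRect, img) and CGImageRelease(img) as above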
I solved this problem by using an NSImageView with NSBitmapImageRep to create the image from the pixel values. There are lots of options for how you create the pixel values. In my case, I used 32-bit pixels (RGBA). In this code, pixels is the giant array of pixel values. display is the outlet for the NSImageView.
NSBitmapImageRep *myBitmap;
NSImage *myImage;
unsigned char *buff[4];
unsigned char *pixels;
int width, height, rectSize;
NSRect myBounds;

myBounds = [display bounds];
width = myBounds.size.width;
height = myBounds.size.height;
rectSize = width * height;

memset(buff, 0, sizeof(buff));
pixels = malloc(rectSize * 4);

// (fill in the pixels array)

buff[0] = pixels;

myBitmap = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:buff
                                                   pixelsWide:width
                                                   pixelsHigh:height
                                                bitsPerSample:8
                                              samplesPerPixel:4
                                                     hasAlpha:YES
                                                     isPlanar:NO
                                               colorSpaceName:NSCalibratedRGBColorSpace
                                                 bitmapFormat:0
                                                  bytesPerRow:(4 * width)
                                                 bitsPerPixel:32];

myImage = [[NSImage alloc] init];
[myImage addRepresentation:myBitmap];
[display setImage: myImage];
[myImage release];
[myBitmap release];
free(pixels);

Fragment shaders on a texture

I am trying to add some post-processing capabilities to a program. The rendering is done using openGL. I just want to allow the program to load some home made fragment shader and use them on the video stream.
I wrote a little fragment shader using "OpenGL Shader Builder" that just turns a texture to grayscale. The shader works well in the shader builder, but I can't make it work in the main program. The screen stays all black.
Here is the setup:
@implementation PluginGLView

- (id) initWithCoder: (NSCoder *) coder
{
    const GLubyte * strExt;

    if ((self = [super initWithCoder:coder]) == nil)
        return nil;

    glLock = [[NSLock alloc] init];
    if (nil == glLock) {
        [self release];
        return nil;
    }

    // Init pixel format attribs
    NSOpenGLPixelFormatAttribute attrs[] =
    {
        NSOpenGLPFAAccelerated,
        NSOpenGLPFANoRecovery,
        NSOpenGLPFADoubleBuffer,
        0
    };

    // Get pixel format from OpenGL
    NSOpenGLPixelFormat* pixFmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];
    if (!pixFmt)
    {
        NSLog(@"No Accelerated OpenGL pixel format found\n");

        NSOpenGLPixelFormatAttribute attrs2[] =
        {
            NSOpenGLPFANoRecovery,
            0
        };

        // Get pixel format from OpenGL
        pixFmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs2];
        if (!pixFmt) {
            NSLog(@"No OpenGL pixel format found!\n");
            [self release];
            return nil;
        }
    }

    [self setPixelFormat:[pixFmt autorelease]];

    /*
    long swapInterval = 1 ;
    [[self openGLContext]
        setValues:&swapInterval
        forParameter:NSOpenGLCPSwapInterval];
    */

    [glLock lock];
    [[self openGLContext] makeCurrentContext];

    // Init object members
    strExt = glGetString (GL_EXTENSIONS);
    texture_range = gluCheckExtension ((const unsigned char *)"GL_APPLE_texture_range", strExt) ? GL_TRUE : GL_FALSE;
    texture_hint = GL_STORAGE_SHARED_APPLE ;
    client_storage = gluCheckExtension ((const unsigned char *)"GL_APPLE_client_storage", strExt) ? GL_TRUE : GL_FALSE;
    rect_texture = gluCheckExtension((const unsigned char *)"GL_EXT_texture_rectangle", strExt) ? GL_TRUE : GL_FALSE;

    // Setup some basic OpenGL stuff
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    // Loads the shaders
    shader = LoadShader(GL_FRAGMENT_SHADER, "/Users/alexandremathieu/fragment.fs");
    program = glCreateProgram();
    glAttachShader(program, shader);
    glLinkProgram(program);
    glUseProgram(program);

    [NSOpenGLContext clearCurrentContext];
    [glLock unlock];

    image_width = 1024;
    image_height = 512;
    image_depth = 16;
    image_type = GL_UNSIGNED_SHORT_1_5_5_5_REV;

    image_base = (GLubyte *) calloc(((IMAGE_COUNT * image_width * image_height) / 3) * 4, image_depth >> 3);
    if (image_base == nil) {
        [self release];
        return nil;
    }

    // Create and load textures for the first time
    [self loadTextures:GL_TRUE];

    // Init fps timer
    //gettimeofday(&cycle_time, NULL);

    drawBG = YES;

    // Call for a redisplay
    noDisplay = YES;
    PSXDisplay.Disabled = 1;
    [self setNeedsDisplay:true];

    return self;
}
And here is the "render screen" function which basically...renders the screen.
- (void)renderScreen
{
    int bufferIndex = whichImage;

    glBindTexture(GL_TEXTURE_RECTANGLE_EXT, bufferIndex+1);
    glUseProgram(program);
    int loc = glGetUniformLocation(program, "texture");
    glUniform1i(loc, bufferIndex+1);
    glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0, image_width, image_height, GL_BGRA, image_type, image[bufferIndex]);

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f);
    glVertex2f(-1.0f, 1.0f);
    glTexCoord2f(0.0f, image_height);
    glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(image_width, image_height);
    glVertex2f(1.0f, -1.0f);
    glTexCoord2f(image_width, 0.0f);
    glVertex2f(1.0f, 1.0f);
    glEnd();

    [[self openGLContext] flushBuffer];
    [NSOpenGLContext clearCurrentContext];
    //[glLock unlock];
}
and finally here's the shader.
uniform sampler2DRect texture;

void main() {
    vec4 color, texel;
    color = gl_Color;
    texel = texture2DRect(texture, gl_TexCoord[0].xy);
    color *= texel;

    // Begin Shader
    float gray = 0.0;
    gray += (color.r + color.g + color.b) / 3.0;
    color = vec4(gray, gray, gray, color.a);
    // End Shader

    gl_FragColor = color;
}
The loading and using of shaders works, since I am able to turn the screen all red with this shader:
void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
If the shader contains a syntax error, I get an error message from the LoadShader function, etc. If I remove the use of the shader, everything works normally.
I think the problem comes from passing the texture as a uniform parameter. But these are my very first steps with OpenGL and I can't be sure of anything.
Texture samplers have to be set to the number of the active texture unit. So for example with glActiveTexture(GL_TEXTURE3) the sampler must be set to 3 as well. In your case the number should be 0.
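In other words (a sketch based on the answer above, reusing the question's own variable names; binding everything on texture unit 0 is the assumption here), the sampler uniform gets the unit number, not the texture object's name:
glActiveTexture(GL_TEXTURE0);                              // work on texture unit 0
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, bufferIndex + 1);  // bind the texture object to that unit
glUseProgram(program);
int loc = glGetUniformLocation(program, "texture");
glUniform1i(loc, 0);  // pass the unit number (0), not bufferIndex + 1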