Best method of particle simulation in Objective-C

I'm about to create a particle simulator in Objective-C on the Mac, using Core Graphics for rendering. I've calculated that Core Graphics can render about 1.8*10^6 1x1 coloured pixels per second to a view using CGContextFillRect, which works out to roughly 250,000 1x1 particles rendered to the screen per frame if the frame rate is to stay at 60.
A limit of 250,000 particles per frame isn't that great - I'd like that number to be much higher. What is the most efficient way to render this many 1x1 coloured pixels to a view?
Is there a way to better utilise the GPU?
This is the code I have been using:
- (void)drawRect:(NSRect)dirtyRect {
[super drawRect:dirtyRect];
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGContextSetRGBFillColor (ctx, 1, 0, 0, 1);
CFTimeInterval t1 = CFAbsoluteTimeGetCurrent();
CGRect point = CGRectMake(10.0f, 10.0f, 1.0f, 1.0f);
for (int i = 0; i < 20000000; i++) {
CGContextFillRect (ctx, point);
}
NSLog(#"%.10f", CFAbsoluteTimeGetCurrent() - t1);
}
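One commonly suggested alternative (not from the original post, just a hedged sketch) is to stop issuing one CGContextFillRect call per particle and instead plot every particle into a raw RGBA buffer each frame, then hand that buffer to Core Graphics as a single image. The particles/particleCount names below are hypothetical simulation state:

- (void)drawRect:(NSRect)dirtyRect {
    [super drawRect:dirtyRect];
    CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];

    size_t width  = (size_t)self.bounds.size.width;
    size_t height = (size_t)self.bounds.size.height;
    uint32_t *pixels = calloc(width * height, sizeof(uint32_t));

    // Plot each particle as one pixel. 0xFF0000FF is opaque red when the
    // bytes are laid out as RGBA on a little-endian machine.
    for (NSUInteger i = 0; i < particleCount; i++) {
        NSUInteger x = (NSUInteger)particles[i].x;
        NSUInteger y = (NSUInteger)particles[i].y;
        if (x < width && y < height)
            pixels[y * width + x] = 0xFF0000FF;
    }

    // Wrap the buffer in a bitmap context, turn it into a CGImage and draw it once.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmap = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef image = CGBitmapContextCreateImage(bitmap);
    CGContextDrawImage(ctx, self.bounds, image);

    CGImageRelease(image);
    CGContextRelease(bitmap);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
}

This keeps the per-particle cost down to a single memory write at the expense of one image upload per frame; a truly GPU-driven approach would mean moving off Core Graphics to OpenGL (or similar) and drawing the particles as point sprites.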

Related

need assistance regarding growing & shrinking circle from centre in quartz-2d

I am currently working on a drawing app which has a slider to increase and decrease the line width. I want a simple circle next to the slider that previews the width. I got that working easily, but the circle is not growing and shrinking from its centre; it grows and shrinks from its top-left (x, y). Here is the code:
- (UIImage *)circleOnImage:(int)size
{
UIGraphicsBeginImageContext(CGSizeMake(25, 25));
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[UIColor blackColor] setFill];
CGContextTranslateCTM(ctx, 12, 12); // also tried changing the coordinates, but it didn't work
CGRect circleRect = CGRectMake(0, 0, size, size);
CGContextFillEllipseInRect(ctx, circleRect);
UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return retImage;
}
Try
CGContextTranslateCTM(ctx, 12.5, 12.5);
CGRect circleRect = CGRectMake(-size/2., -size/2., size, size);
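Putting those two changes into the original method, a corrected sketch of the helper would look like this:

- (UIImage *)circleOnImage:(int)size
{
    UIGraphicsBeginImageContext(CGSizeMake(25, 25));
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [[UIColor blackColor] setFill];
    // Move the origin to the centre of the 25x25 canvas...
    CGContextTranslateCTM(ctx, 12.5, 12.5);
    // ...and centre the circle's bounding rect on that origin, so the circle
    // grows and shrinks from the middle rather than from the top-left corner.
    CGRect circleRect = CGRectMake(-size / 2., -size / 2., size, size);
    CGContextFillEllipseInRect(ctx, circleRect);
    UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return retImage;
}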

High-Resolution Content for paint app Using OpenGL ES on iPad device

I am working on a paint app for iPhone and iPad [based on the GLPaint sample app]. In this app I fill colours into paint images by drawing lines onscreen based on where the user touches. The app works properly on iPhone. On iPad the lines on the paint view are fine without zooming [no pixel distortion], but after zooming the lines on the paintView have distorted pixels, i.e. the OpenGL ES content is not high resolution.
I am using the following code to initialise the paint view:
-(id)initWithCoder:(NSCoder*)coder {
CGImageRef brushImage;
CGContextRef brushContext;
GLubyte *brushData;
size_t width, height;
CGFloat components[3];
if ((self = [super initWithCoder:coder])) {
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.opaque = NO;
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
if (!context || ![EAGLContext setCurrentContext:context]) {
return nil;
}
if(UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
{
brushImage = [UIImage imageNamed:#"circle 64.png"].CGImage;
}
else {
brushImage = [UIImage imageNamed:#"flower 128.png"].CGImage;
}
// Get the width and height of the image
width = CGImageGetWidth(brushImage) ;
height = CGImageGetHeight(brushImage) ;
if(brushImage) {
// Allocate memory needed for the bitmap context
brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
// Use the bitmap creation function provided by the Core Graphics framework.
brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
// After you create the context, you can draw the image to the context.
CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
// You don't need the context at this point, so you need to release it to avoid memory leaks.
CGContextRelease(brushContext);
// Use OpenGL ES to generate a name for the texture.
glGenTextures(1, &brushTexture);
// Bind the texture name.
glBindTexture(GL_TEXTURE_2D, brushTexture);
// Set the texture parameters to use a minifying filter and a linear filter (weighted average)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Specify a 2D texture image, providing a pointer to the image data in memory
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
// Release the image data; it's no longer needed
free(brushData);
}
CGFloat scale;
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
{
NSLog(#"IPAd");
self.contentScaleFactor=1.0;
scale = self.contentScaleFactor;
}
else {
// NSLog(#"IPHone");
self.contentScaleFactor=2.0;
}
//scale = 2.000000;
// Setup OpenGL states
glMatrixMode(GL_PROJECTION);
CGRect frame = self.bounds;
NSLog(#"Scale %f", scale);
glOrthof(0, (frame.size.width) * scale, 0, (frame.size.height) * scale, -1, 1);
glViewport(0, 0, (frame.size.width) * scale, (frame.size.height) * scale);
glMatrixMode(GL_MODELVIEW);
glDisable(GL_DITHER);
glEnable(GL_BLEND);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_BLEND);
// Set a blending function appropriate for premultiplied alpha pixel data
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(width / kBrushScale);
// Make sure to start with a cleared buffer
needsErase = YES;
// Define a starting color
HSL2RGB((CGFloat) 0.0 / (CGFloat)kPaletteSize, kSaturation, kLuminosity, &components[0], &components[1], &components[2]);
[self setBrushColorWithRed:245.0f green:245.0f blue:0.0f];
boolEraser=NO;
}
return self;
}
To create the framebuffer:
-(BOOL)createFramebuffer {
// Generate IDs for a framebuffer object and a color renderbuffer
glGenFramebuffersOES(1, &viewFramebuffer);
glGenRenderbuffersOES(1, &viewRenderbuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer)
// allowing us to draw into a buffer that will later be rendered to screen wherever the layer is (which corresponds with our view).
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id<EAGLDrawable>)self.layer];
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
// For this sample, we also need a depth buffer, so we'll create and attach one via another renderbuffer.
glGenRenderbuffersOES(1, &depthRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, depthRenderbuffer);
if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES)
{
NSLog(#"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
return NO;
}
return YES;
}
Lines are drawn using the following code:
-(void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end {
static GLfloat* vertexBuffer = NULL;
static NSUInteger vertexMax = 64;
NSUInteger vertexCount = 0,
count,
i;
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
// Convert locations from Points to Pixels
//CGFloat scale = self.contentScaleFactor;
CGFloat scale;
scale=self.contentScaleFactor;
NSLog(#"Scale %f",scale);
start.x *= scale;
start.y *= scale;
end.x *= scale;
end.y *= scale;
float dx = end.x - start.x;
float dy = end.y - start.y;
float dist = (sqrtf(dx * dx + dy * dy)/ kBrushPixelStep);
// Allocate vertex array buffer
if(vertexBuffer == NULL)
// vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));
vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));
count = MAX(ceilf(dist), 1);
//NSLog(#"count %d",count);
for(i = 0; i < count; ++i) {
if (vertexCount == vertexMax) {
vertexMax = 2 * vertexMax;
vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
// NSLog(#"if loop");
}
vertexBuffer[2 * vertexCount + 0] = start.x + (dx) * ((GLfloat)i / (GLfloat)count);
vertexBuffer[2 * vertexCount + 1] = start.y + (dy) * ((GLfloat)i / (GLfloat)count);
vertexCount += 1;
}
// Render the vertex array
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, vertexCount);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
On the iPad device the content of the paint view is fine - high resolution - in the normal view, but after zooming I am not getting high-resolution content; the pixels of the lines look distorted.
I have tried changing contentScaleFactor as well as the scale parameter in the code above to see the difference, but nothing worked as expected. The iPad supports a contentScaleFactor of 1.0 and 1.5; when I set contentScaleFactor = 2 the paint view cannot display the line - it shows weird dotted lines.
Is there any way to make contents of OpenGL es high resolution?
The short answer is YES, you can have "high resolution" content.
But you will have to clearly understand the issue before solving it. This is the long answer:
The brushes you use have a specific size (64 or 128). As soon as your virtual paper (the area in which you draw) displays its pixels larger than 1 screen pixel, you will start to see the "distortion". For example, in your favourite picture viewer, if you open one of your brushes and zoom in, the picture will also be distorted. You cannot avoid that unless you use vector brushes (which are not in the scope of this answer and are far more complicated).
The quickest way would be to use more detailed brushes, but it is a fudge: if you zoom enough, the texture will look distorted as well.
You can also add a magnification filter using glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);. You used MIN in your sample; adding this one will smooth the textures.
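For reference, setting both filter parameters together right after the brush texture is bound (as in the initWithCoder: code above) would look like this:

glBindTexture(GL_TEXTURE_2D, brushTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);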
I am not sure what you mean by high resolution. OpenGL is a vector library with a bitmap-backed rendering system. The backing store will have the size in pixels (multiplied by the content scale factor) of the layer you use to create the renderbuffer in:
- (BOOL)renderbufferStorage:(NSUInteger)target fromDrawable:(id<EAGLDrawable>)drawable
Once it is created there is no way to change the resolution, nor would it generally make sense to do so; one renderbuffer pixel per screen pixel makes the most sense.
It is hard to know exactly what problem you are trying to solve without knowing what zooming you are talking about. I assume you have set up a CAEAGLLayer in a UIScrollView and you are seeing pixel artifacts. This is inevitable - how else could it work?
If you want your lines to be smooth, you need to implement them using triangle strip meshes with alpha blending at the edges, which will provide antialiasing. Instead of zooming the layer itself, you would simply "zoom" the contents by scaling the vertices, while keeping the CAEAGLLayer the same size. This would eliminate pixelation and give pretty alpha-blended edges.
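As a rough illustration of "scaling the vertices rather than the layer", the point-to-pixel conversion in renderLineFromPoint:toPoint: above could fold in a zoom factor; zoomScale here is a hypothetical property the view would maintain, not part of the original code:

// Hypothetical excerpt: apply the current zoom to the geometry itself,
// leaving the CAEAGLLayer and its renderbuffer at their original size.
CGFloat scale = self.contentScaleFactor;
CGFloat zoom = self.zoomScale; // assumption: updated by the view as the user zooms
start.x *= scale * zoom;
start.y *= scale * zoom;
end.x *= scale * zoom;
end.y *= scale * zoom;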

Ribbon Graphs (charts) with Core Graphics

I am generating a classic line graph using Core Graphics, which renders and works very well.
There are several lines stacked one after another using "layer.zPosition".
-(void)drawRect:(CGRect)rect {
float colorChange = (0.1 * [self tag]);
theFillColor = [UIColor colorWithRed:(colorChange) green:(colorChange*0.50) blue:colorChange alpha:0.75f].CGColor;
CGContextRef c = UIGraphicsGetCurrentContext();
CGFloat white[4] = {1.0f, 1.0f, 1.0f, 1.0f};
CGContextSetFillColorWithColor(c, theFillColor);
CGContextSetStrokeColor(c, white);
CGContextSetLineWidth(c, 2.0f);
CGContextBeginPath(c);
//
CGContextMoveToPoint(c, 0.0f, 200-[[array objectAtIndex:0] floatValue]);
CGContextAddLineToPoint(c, 0.0f, 200-[[array objectAtIndex:0] floatValue]);
//
distancePerPoint = (rect.size.width / [array count]);
float lastPointX = 750.0;
for (int i = 0 ; i < [array count] ; i++)
{
CGContextAddLineToPoint(c, (i*distancePerPoint), 200-[[array objectAtIndex:i] floatValue]);
lastPointX = (i*distancePerPoint);
}
//
CGContextAddLineToPoint(c, lastPointX, 200.0);
CGContextAddLineToPoint(c, 0, 200);
CGContextClosePath(c);
//
//CGContextFillPath(c);
CGContextDrawPath(c, kCGPathFillStroke);
//CGContextDrawPath(c, kCGPathStroke);
}
(The above code generates the result shown in the original post's screenshot.)
(I can post the code I am using for the 3D effect if needed, but generally I do it with
CATransform3D rotationAndPerspectiveTransform = CATransform3DIdentity;)
Question:
How can I transform my line graph to have depth?
I would like the line graph(s) to have "depth" (thus making them a ribbon), as later I would like to present them using a rotation-and-perspective transform (as stated above).
You can't easily do this with Core Graphics or Core Animation, because CALayers are "flat" - they work like origami, where you can make 3D structures by connecting rectangles in 3D space, but you can't have arbitrary polygonal 3D shapes.
Actually that's not strictly true: you could look at using CAShapeLayers to do your drawing and then manipulate them in 3D, but I think it is generally going to be very hard work to calculate where to position each shape and to get the edges to line up correctly.
Really the way to make this kind of 3D structure is to use OpenGL directly.
If you're not too familiar with low-level OpenGL programming, you might want to check out Galaxy Engine or Cocos3D.
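If you do want to experiment with the CAShapeLayer route mentioned above before dropping down to OpenGL, a minimal sketch might look like the following; graphPath, containerView and the rotation values are placeholders, and the layer itself remains flat - the transform only tilts it in 3D rather than giving the graph true depth:

#import <QuartzCore/QuartzCore.h>

// Hypothetical sketch: put the graph outline into a CAShapeLayer and rotate it in 3D.
CAShapeLayer *ribbonLayer = [CAShapeLayer layer];
ribbonLayer.path = graphPath.CGPath; // graphPath: a UIBezierPath of the filled area under the line
ribbonLayer.fillColor = [UIColor colorWithRed:0.3 green:0.15 blue:0.3 alpha:0.75].CGColor;
ribbonLayer.strokeColor = [UIColor whiteColor].CGColor;
ribbonLayer.lineWidth = 2.0f;

CATransform3D rotationAndPerspectiveTransform = CATransform3DIdentity;
rotationAndPerspectiveTransform.m34 = -1.0 / 500.0; // add perspective
rotationAndPerspectiveTransform = CATransform3DRotate(rotationAndPerspectiveTransform,
                                                      45.0 * M_PI / 180.0, 0.0, 1.0, 0.0);
ribbonLayer.transform = rotationAndPerspectiveTransform;
[containerView.layer addSublayer:ribbonLayer];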

Does iOS 5 support blur CoreImage filters?

According to the documentation it should support blurring, note the "Available in iOS 5.0 and later":
CIFilter Class Reference
But according to the device, it doesn't:
[CIFilter filterNamesInCategory:kCICategoryBlur];
returns nothing.
According to the following only these filters are available on my iPhone and Simulator (which are both running 5.0):
[CIFilter filterNamesInCategory:kCICategoryBuiltIn]
CIAdditionCompositing,
CIAffineTransform,
CICheckerboardGenerator,
CIColorBlendMode,
CIColorBurnBlendMode,
CIColorControls,
CIColorCube,
CIColorDodgeBlendMode,
CIColorInvert,
CIColorMatrix,
CIColorMonochrome,
CIConstantColorGenerator,
CICrop,
CIDarkenBlendMode,
CIDifferenceBlendMode,
CIExclusionBlendMode,
CIExposureAdjust,
CIFalseColor,
CIGammaAdjust,
CIGaussianGradient,
CIHardLightBlendMode,
CIHighlightShadowAdjust,
CIHueAdjust,
CIHueBlendMode,
CILightenBlendMode,
CILinearGradient,
CILuminosityBlendMode,
CIMaximumCompositing,
CIMinimumCompositing,
CIMultiplyBlendMode,
CIMultiplyCompositing,
CIOverlayBlendMode,
CIRadialGradient,
CISaturationBlendMode,
CIScreenBlendMode,
CISepiaTone,
CISoftLightBlendMode,
CISourceAtopCompositing,
CISourceInCompositing,
CISourceOutCompositing,
CISourceOverCompositing,
CIStraightenFilter,
CIStripesGenerator,
CITemperatureAndTint,
CIToneCurve,
CIVibrance,
CIVignette,
CIWhitePointAdjust
While Core Image on iOS 5.0 lacks blur filters, there is still a way to get GPU-accelerated blurs of images and video. My open source GPUImage framework has multiple blur types, including Gaussian (using the GPUImageGaussianBlurFilter for a general Gaussian or the GPUImageFastBlurFilter for a hardware-optimized 9-hit Gaussian), box (using a GPUImageBoxBlurFilter), median (using a GPUImageMedianFilter), and a bilateral blur (using a GPUImageBilateralBlurFilter).
I describe the shaders used to pull off the hardware-optimized Gaussian blur in this answer, and you can examine the code I use for the rest within the framework. These filters run tens of times faster than any CPU-bound routine I've tried yet.
I've also incorporated these blurs into multi-stage processing effects, like unsharp masking, tilt-shift filtering, Canny edge detection, and Harris corner detection, all of which are available as filters within this framework.
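For reference, a minimal sketch of running one of those blurs over a still image might look like this; it assumes the GPUImage convenience API of that era, and inputImage is a placeholder UIImage:

#import "GPUImage.h"

// Hypothetical usage sketch, not taken from the original answer.
GPUImageGaussianBlurFilter *blurFilter = [[GPUImageGaussianBlurFilter alloc] init];
UIImage *blurredImage = [blurFilter imageByFilteringImage:inputImage];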
Again, in an attempt to solve all iOS blur issues, here is my contribution:
https://github.com/tomsoft1/StackBluriOS
A simple blur library based on Stack Blur. Stack Blur is very similar to Gaussian Blur, but much faster (see http://incubator.quasimondo.com/processing/fast_blur_deluxe.php )
Use it like this:
UIImage *newIma = [sourceIma stackBlur:radius];
Hope this helps.
I too was disappointed to find that Core Image in iOS doesn't support blurs. Here's the function I wrote to do a 9-tap Gaussian blur on a UIImage. Call it repeatedly to get stronger blurs.
@interface UIImage (ImageBlur)
- (UIImage *)imageWithGaussianBlur9;
@end
@implementation UIImage (ImageBlur)
- (UIImage *)imageWithGaussianBlur9 {
float weight[5] = {0.2270270270, 0.1945945946, 0.1216216216, 0.0540540541, 0.0162162162};
// Blur horizontally
UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
[self drawInRect:CGRectMake(0, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[0]];
for (int x = 1; x < 5; ++x) {
[self drawInRect:CGRectMake(x, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[x]];
[self drawInRect:CGRectMake(-x, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[x]];
}
UIImage *horizBlurredImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Blur vertically
UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
[horizBlurredImage drawInRect:CGRectMake(0, 0, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[0]];
for (int y = 1; y < 5; ++y) {
[horizBlurredImage drawInRect:CGRectMake(0, y, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[y]];
[horizBlurredImage drawInRect:CGRectMake(0, -y, self.size.width, self.size.height) blendMode:kCGBlendModePlusLighter alpha:weight[y]];
}
UIImage *blurredImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//
return blurredImage;
}
@end
Just call it on an existing image like this:
UIImage *blurredImage = [originalImage imageWithGaussianBlur9];
and repeat it to get stronger blurring, like this:
blurredImage = [blurredImage imageWithGaussianBlur9];
Unfortunately, it does not support any blurs. For that, you'll have to roll your own.
UPDATE: As of iOS 6, [CIFilter filterNamesInCategory:kCICategoryBlur] returns CIGaussianBlur, meaning that this filter is now available on the device. Even though this is true, you will (probably) get better performance and more flexibility using GPUImage.
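A minimal sketch of using that filter once it is available might look like this; inputImage and the radius value are placeholders:

// Hedged example of applying CIGaussianBlur on iOS 6 and later.
CIImage *ciInput = [CIImage imageWithCGImage:inputImage.CGImage];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:ciInput forKey:kCIInputImageKey];
[blur setValue:@10.0 forKey:kCIInputRadiusKey];

CIContext *ciContext = [CIContext contextWithOptions:nil];
// Render only the original extent; the blur otherwise grows the image's edges.
CGImageRef cgOutput = [ciContext createCGImage:blur.outputImage fromRect:[ciInput extent]];
UIImage *blurredImage = [UIImage imageWithCGImage:cgOutput];
CGImageRelease(cgOutput);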
Here is a link to our tutorial on creating a blur effect in iOS applications with different approaches: http://blog.denivip.ru/index.php/2013/01/blur-effect-in-ios-applications/?lang=en
If you can use OpenGL ES in your iOS app, this is how you calculate the median in a pixel neighborhood radius of your choosing (the median being a type of blur, of course):
kernel vec4 medianUnsharpKernel(sampler u) {
vec4 pixel = unpremultiply(sample(u, samplerCoord(u)));
vec2 xy = destCoord();
int radius = 3;
int bounds = (radius - 1) / 2;
vec4 sum = vec4(0.0);
for (int i = (0 - bounds); i <= bounds; i++)
{
for (int j = (0 - bounds); j <= bounds; j++ )
{
sum += unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
}
}
vec4 mean = vec4(sum / vec4(pow(float(radius), 2.0)));
float mean_avg = float(mean);
float comp_avg = 0.0;
vec4 comp = vec4(0.0);
vec4 median = mean;
for (int i = (0 - bounds); i <= bounds; i++)
{
for (int j = (0 - bounds); j <= bounds; j++ )
{
comp = unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
comp_avg = float(comp);
median = (comp_avg < mean_avg) ? max(median, comp) : median;
}
}
return premultiply(vec4(vec3(abs(pixel.rgb - median.rgb)), 1.0));
}
A brief description of the steps
1. Calculate the mean of the values of the pixels surrounding the source pixel in a 3x3 neighborhood;
2. Find the maximum pixel value of all pixels in the same neighborhood that are less than the mean.
3. [OPTIONAL] Subtract the median pixel value from the source pixel value for edge detection.
If you're using the median value for edge detection, there are a couple of ways to modify the above code for better results, namely hybrid median filtering and truncated median filtering (a substitute and a better 'mode' filtering). If you're interested, please ask.
Because I'm using Xamarin, I converted John Stephen's answer to C#:
private UIImage ImageWithGaussianBlur9(UIImage image)
{
var weight = new nfloat[]
{
0.2270270270f, 0.1945945946f, 0.1216216216f, 0.0540540541f, 0.0162162162f
};
var width = image.Size.Width;
var height = image.Size.Height;
// Blur horizontally
UIGraphics.BeginImageContextWithOptions(image.Size, false, 1f);
image.Draw(new CGRect(0f, 0f, width, height), CGBlendMode.PlusLighter, weight[0]);
for (int x = 1; x < 5; ++x)
{
image.Draw(new CGRect(x, 0, width, height), CGBlendMode.PlusLighter, weight[x]);
image.Draw(new CGRect(-x, 0, width, height), CGBlendMode.PlusLighter, weight[x]);
}
var horizBlurredImage = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
// Blur vertically
UIGraphics.BeginImageContextWithOptions(image.Size, false, 1f);
horizBlurredImage.Draw(new CGRect(0, 0, width, height), CGBlendMode.PlusLighter, weight[0]);
for (int y = 1; y < 5; ++y)
{
horizBlurredImage.Draw(new CGRect(0, y, width, height), CGBlendMode.PlusLighter, weight[y]);
horizBlurredImage.Draw(new CGRect(0, -y, width, height), CGBlendMode.PlusLighter, weight[y]);
}
var blurredImage = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
return blurredImage;
}

Flipping OpenGL texture

When I load textures from images normally, they are upside down because of OpenGL's coordinate system. What would be the best way to flip them?
glScalef(1.0f, -1.0f, 1.0f);
mapping the y coordinates of the textures in reverse
vertically flipping the image files manually (in Photoshop)
flipping them programmatically after loading them (I don't know how)
This is the method I'm using to load png textures, in my Utilities.m file (Objective-C):
+ (TextureImageRef)loadPngTexture:(NSString *)name {
CFURLRef textureURL = CFBundleCopyResourceURL(
CFBundleGetMainBundle(),
(CFStringRef)name,
CFSTR("png"),
CFSTR("Textures"));
NSAssert(textureURL, @"Texture name invalid");
CGImageSourceRef imageSource = CGImageSourceCreateWithURL(textureURL, NULL);
NSAssert(imageSource, @"Invalid Image Path.");
NSAssert((CGImageSourceGetCount(imageSource) > 0), @"No Image in Image Source.");
CFRelease(textureURL);
CGImageRef image = CGImageSourceCreateImageAtIndex(imageSource, 0, NULL);
NSAssert(image, #"Image not created.");
CFRelease(imageSource);
GLuint width = CGImageGetWidth(image);
GLuint height = CGImageGetHeight(image);
void *data = malloc(width * height * 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSAssert(colorSpace, #"Colorspace not created.");
CGContextRef context = CGBitmapContextCreate(
data,
width,
height,
8,
width * 4,
colorSpace,
kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
NSAssert(context, #"Context not created.");
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGImageRelease(image);
CGContextRelease(context);
return TextureImageCreate(width, height, data);
}
Where TextureImage is a struct that has a height, width and void *data.
Right now I'm just playing around with OpenGL, but later I want to try making a simple 2d game. I'm using Cocoa for all the windowing and Objective-C as the language.
Also, another thing I was wondering about: If I made a simple game, with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Thanks.
Any of those:
Flip texture during the texture load,
OR flip model texture coordinates during model load
OR set texture matrix to flip y (glMatrixMode(GL_TEXTURE)) during render.
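For example, the third option - flipping the texture matrix - is a small, self-contained change for the fixed-function pipeline (a sketch, assuming legacy OpenGL matrix calls are in use):

// Flip the t (vertical) texture coordinate for everything drawn while this is set:
// t' = 1 - t, expressed as a translate followed by a scale.
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(0.0f, 1.0f, 0.0f);
glScalef(1.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW); // switch back before drawing the geometry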
Also, another thing I was wondering about: If I made a simple game, with pixels mapped to units, would it be alright to set it up so that the origin is in the top-left corner (personal preference), or would I run into problems with other things (e.g. text rendering)?
Depends on how you are going to render text.
Jordan Lewis pointed out that CGContextDrawImage draws the image upside down when passed a UIImage.CGImage. There I found a quick and easy solution: before calling CGContextDrawImage, add
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0f, -1.0f);
That does the job perfectly well.
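In the loadPngTexture: method above, that simply means flipping the context right before the existing draw call, e.g.:

// Flip the CTM vertically so the decoded rows end up inverted top-to-bottom,
// then draw the image exactly as before.
CGContextTranslateCTM(context, 0, height);
CGContextScaleCTM(context, 1.0f, -1.0f);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);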