Using Core OpenGL to programmatically create a context with depth buffer: What am I doing wrong? - objective-c

I'm trying to create an OpenGL context with depth buffer using Core OpenGl. I then wish to display the OpenGL content via a CAOpenGLLayer. From what I've read it seems I should be able to create the desired context by the following method:
I declare the following instance variable in the interface
@interface TorusCAOpenGLLayer : CAOpenGLLayer
{
    // omitted code
    CGLPixelFormatObj pix;
    GLint pixn;
    CGLContextObj ctx;
}
Then in the implementation I override copyCGLContextForPixelFormat, which I believe should create the required context
- (CGLContextObj)copyCGLContextForPixelFormat:(CGLPixelFormatObj)pixelFormat
{
    CGLPixelFormatAttribute attrs[] =
    {
        kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
        kCGLPFAAlphaSize, (CGLPixelFormatAttribute)8,
        kCGLPFADepthSize, (CGLPixelFormatAttribute)24,
        (CGLPixelFormatAttribute)0
    };
    NSLog(@"Pixel format error: %d", CGLChoosePixelFormat(attrs, &pix, &pixn)); // returns 0
    NSLog(@"Context error: %d", CGLCreateContext(pix, NULL, &ctx)); // returns 0
    NSLog(@"The context: %p", ctx); // same memory address as the similar NSLog call in the method below
    return ctx;
}
Finally I override drawInCGLContext to display the content.
- (void)drawInCGLContext:(CGLContextObj)glContext pixelFormat:(CGLPixelFormatObj)pixelFormat forLayerTime:(CFTimeInterval)timeInterval displayTime:(const CVTimeStamp *)timeStamp
{
    // Set the current context to the one given to us.
    CGLSetCurrentContext(glContext);
    int depth;
    NSLog(@"The context again: %p", glContext); // same memory address as the NSLog in the previous method
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.5, 0.5, 1.0, 1.0, -1.0, 1.0);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glEnable(GL_DEPTH_TEST);
    glGetIntegerv(GL_DEPTH_BITS, &depth);
    NSLog(@"%i bits depth", depth); // returns 0
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // drawing code here
    // Call super to finalize the drawing. By default all it does is call glFlush().
    [super drawInCGLContext:glContext pixelFormat:pixelFormat forLayerTime:timeInterval displayTime:timeStamp];
}
The program compiles fine and displays the content, but without the depth testing. Is there something extra I have to do to get this to work? Or is my entire approach wrong?

It looks like I was overriding the wrong method. To obtain the required depth buffer, one should override copyCGLPixelFormatForDisplayMask: like so:
- (CGLPixelFormatObj)copyCGLPixelFormatForDisplayMask:(uint32_t)mask {
    CGLPixelFormatAttribute attributes[] =
    {
        kCGLPFADepthSize, (CGLPixelFormatAttribute)24,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pixelFormatObj = NULL;
    GLint numPixelFormats = 0;
    CGLChoosePixelFormat(attributes, &pixelFormatObj, &numPixelFormats);
    if (pixelFormatObj == NULL)
        NSLog(@"Error: Could not choose pixel format!");
    return pixelFormatObj;
}
Based on the code here.
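Both snippets rely on the same convention: a CGL attribute list is a zero-terminated array in which valued attributes such as kCGLPFADepthSize occupy two slots, key then value. A standalone C++ sketch of that layout (the attribute constants here are made up for illustration; the real kCGLPFA* values live in OpenGL/CGLTypes.h, and boolean attributes like kCGLPFADoubleBuffer occupy a single slot and would need separate handling):

```cpp
#include <cassert>
#include <cstdint>

// Stand-in attribute keys for illustration only; the real kCGLPFA*
// constants are defined in <OpenGL/CGLTypes.h> with different values.
enum : uint32_t { PFAColorSize = 101, PFADepthSize = 102, PFAAlphaSize = 103 };

// Scan a zero-terminated list of valued attributes (key/value pairs).
// Returns the value paired with `key`, or -1 if the key is absent.
int findAttribute(const uint32_t* attrs, uint32_t key) {
    for (int i = 0; attrs[i] != 0; i += 2) {
        if (attrs[i] == key) return (int)attrs[i + 1];
    }
    return -1; // key not present
}
```

This is only meant to make the shape of the attribute array explicit; on macOS you would pass the real array straight to CGLChoosePixelFormat rather than scan it yourself.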

Related

OpenCL and path tracing material system

I started developing my own path tracing program in OpenCL, running on the GPU for speed. I easily created a path tracer with basic materials that have standard properties (metallic, specular, refractionChance, emission, etc.).
What I'm interested in now is the option to create more complex materials, something like Blender's shader graph, where you can use, for example, a Fresnel function.
I imagine this as a function for every material (stop me if this is a horrible approach): the function returns basic information about the point where the ray landed (base color, metallic, ...). It is a sort of sampler for the object's material.
This would be easy to do in C++ on the CPU, but OpenCL doesn't support virtual functions, and to my knowledge a big switch over every material's function is really bad for GPU performance.
Does anyone have experience with material systems, or just a small tip to get started? I'd be thankful for any answer.
This is my C++ CPU approach:
struct MaterialProp {
    float3 baseColor;
    float metallic;
    float specular;
    // etc.
};

struct Material {
    virtual MaterialProp sampleMaterial(Ray ray, ... ) {
        MaterialProp prop;
        prop.baseColor = float3(0.0f, 0.0f, 0.0f);
        prop.metallic = 0.0f;
        prop.specular = 0.0f;
        ...
        return prop;
    }
};

// Brand new material
struct NewMaterial : public Material {
    Texture tex;
    virtual MaterialProp sampleMaterial(Ray ray, ... ) {
        MaterialProp prop;
        prop.baseColor = sampleTex(tex);
        prop.metallic = fresnel;
        prop.specular = 0.0f;
        ...
        return prop;
    }
};

Color trace(Ray ray) {
    Object* hitObj = intersectScene(ray);
    if (hitObj) {
        MaterialProp prop = hitObj->material.sampleMaterial(ray);
        // Do other stuff with the properties;
        // return color;
    } else {
        return Color(0.0f, 0.0f, 0.0f);
    }
}
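One common GPU-friendly alternative to virtual dispatch (a sketch of my own, not something from the question) is to make materials pure data: one flat parameter struct per material plus a small integer type tag, so sampling becomes a single function with one branch on the tag instead of a function pointer per material. A minimal host-side C++ version of the idea, written so it would port almost verbatim to an OpenCL kernel:

```cpp
#include <cassert>
#include <cstdint>

// All material parameters live in one flat struct, as they would in an
// OpenCL buffer. 'type' selects how the parameters are interpreted.
enum MaterialType : uint32_t { MAT_DIFFUSE = 0, MAT_METAL = 1 };

struct MaterialParams {
    MaterialType type;
    float baseColor[3];
    float metallic;
    float specular;
};

struct MaterialSample {
    float color[3];
    float metallic;
};

// One branch on a small tag; unlike a per-material switch over distinct
// functions, all materials share the same code path and parameter layout,
// which keeps GPU divergence low.
MaterialSample sampleMaterial(const MaterialParams& m /*, hit info ... */) {
    MaterialSample s{};
    for (int i = 0; i < 3; ++i) s.color[i] = m.baseColor[i];
    switch (m.type) {
        case MAT_DIFFUSE: s.metallic = 0.0f;       break;
        case MAT_METAL:   s.metallic = m.metallic; break;
    }
    return s;
}
```

Node-graph systems like Blender's go one step further and compile the graph down to such a flat parameter stream, but the data-driven layout above is the usual starting point.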

Adjusting screen contrast programmatically in Cocoa application

I'm trying to adjust the screen contrast in Objective-C for a Cocoa application using kIODisplayContrastKey. I saw a post about adjusting screen brightness here:
Programmatically change Mac display brightness
- (void)setBrightnessTo:(float)level
{
    io_iterator_t iterator;
    kern_return_t result = IOServiceGetMatchingServices(kIOMasterPortDefault,
                                                        IOServiceMatching("IODisplayConnect"),
                                                        &iterator);
    // If we were successful
    if (result == kIOReturnSuccess)
    {
        io_object_t service;
        while ((service = IOIteratorNext(iterator)))
        {
            IODisplaySetFloatParameter(service, kNilOptions, CFSTR(kIODisplayBrightnessKey), level);
            // Let the object go
            IOObjectRelease(service);
            return;
        }
    }
}
Code by Alex King from the link above.
And that code worked. So I tried to do the same for contrast by using a different key (kIODisplayContrastKey), but that doesn't seem to work. Does anybody have an idea whether that's possible?
I'm on OS X 10.9.3.

Repair Fragments in CGPaths of Cocoa Touch / Core Graphics

I have an unusual problem that has me stuck. I have created a drawing application in which a pen tool builds CGPaths on the fly, via the following view controller code.
@implementation WWLGrabManager

- (void)touchesBegan:(NSSet *)touchesIgnore withEvent:(UIEvent *)event {
    // pt is the touch location in the view (code omitted)
    currentPath = CGPathCreateMutable();
    CGPathMoveToPoint(currentPath, NULL, pt.x, pt.y);
}

- (void)touchesMoved:(NSSet *)touchesIgnore withEvent:(UIEvent *)event {
    CGPathAddLineToPoint(currentPath, NULL, pt.x, pt.y);
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    currentPath = [self newSmoothedPathWithPath:currentPath];
}

@end
After drawing, I iterate over the CGPath once more to smooth the drawn edges, so the path doesn't look so ugly: I take the middle of each group of three points and apply a curve through each point. The smoothing code mainly consists of the following pseudo code:
- (CGMutablePathRef)newSmoothedPathWithPath:(CGPathRef)path {
    // This is pseudo code
    CGMutablePathRef newPath = CGPathCreateMutable();
    foreach (CGPoint point in path) {
        CGPathAddCurveToPoint(newPath, NULL,
                              ctrlPt1.x, ctrlPt1.y, ctrlPt2.x, ctrlPt2.y,
                              ((*preLastPoint).x + (*lastPoint).x + point.x) / 3,
                              ((*preLastPoint).y + (*lastPoint).y + point.y) / 3);
    }
    return newPath;
}
Now, after applying the smoothing function, the new CGPath causes fragments like in the picture here. I have double checked the control points, but I cannot figure out why this is happening.
For debugging, I have printed a log of the CGPath's points and control points below.
MainPoint // ControlPoint1 // ControlPoint2
66.00, 91.00 // 67.00,87.00 // 64.25,95.75
59.00,110.00 // 60.75,105.25 // 58.00,113.75
55.00,125.00 // 56.00,121.25 // 54.00,128.75
51.00,140.00 // 52.00,136.25 // 50.25,144.25
48.00,157.00 // 48.75,152.75 // 47.25,161.00
45.00,173.00 // 45.75,169.00 // 44.75,176.75
44.00,188.00 // 44.25,184.25 // 43.75,191.75
43.00,203.00 // 43.25,199.25 // 43.00,207.00
43.00,219.00 // 43.00,215.00 // 43.00,223.00
43.00,235.00 // 43.00,231.00 // 43.00,239.25
43.00,252.00 // 43.00,247.75 // 43.00,256.00
43.00,268.00 // 43.00,264.00 // 44.25,272.00
48.00,284.00 // 46.75,280.00 // 49.50,287.50
54.00,298.00 // 52.50,294.50 // 56.75,300.75
65.00,309.00 // 62.25,306.25 // 68.75,310.75
80.00,316.00 // 76.25,314.25 // 84.00,316.50
96.00,318.00 // 92.00,317.50 // 101.50,318.25
118.00,319.00 // 112.50,318.75 // 124.75,319.00
145.00,319.00 // 138.25,319.00 // 151.25,319.00
170.00,319.00 // 163.75,319.00 // 175.50,318.25
192.00,316.00 // 186.50,316.75 // 199.50,314.00
222.00,308.00 // 214.50,310.00 // 226.75,306.75
241.00,303.00 // 236.25,304.25 // 245.00,301.75
257.00,298.00 // 253.00,299.25 // 260.50,295.75
271.00,289.00 // 267.50,291.25 // 273.25,285.25
280.00,274.00 // 277.75,277.75 // 280.50,270.25
282.00,259.00 // 281.50,262.75 // 282.00,254.50
282.00,241.00 // 282.00,245.50 // 280.50,237.25
276.00,226.00 // 277.50,229.75 // 273.50,222.75
266.00,213.00 // 268.50,216.25 // 263.00,208.75
254.00,196.00 // 257.00,200.25 // 249.75,192.50
237.00,182.00 // 241.25,185.50 // 234.00,179.75
225.00,173.00 // 228.00,175.25 // 221.75,170.50
212.00,163.00 // 215.25,165.50 // 208.50,160.25
198.00,152.00 // 201.50,154.75 // 194.00,148.75
182.00,139.00 // 186.00,142.25 // 178.00,136.75
166.00,130.00 // 170.00,132.25 // 162.25,128.50
Update: I used the algorithm described in this link, which I translated to Objective-C as follows:
CGPoint ctrl2 = controlPointForPoints(*lastPoint, *preLastPoint, currentPoint);
CGPoint ctrl1 = controlPointForPoints(*lastPoint, currentPoint, *preLastPoint);

static CGPoint controlPointForPoints(CGPoint pt, CGPoint pre, CGPoint post) {
    CGPoint ctrlPt = CGPointMake(
        middleOfPoints(middleOfPoints(pt.x, pre.x), middleOfPoints(symmetryOfPoints(pt.x, post.x), pt.x)),
        middleOfPoints(middleOfPoints(pt.y, pre.y), middleOfPoints(symmetryOfPoints(pt.y, post.y), pt.y))
    );
    return ctrlPt;
}

static float symmetryOfPoints(float a, float b) {
    return a - ((b - a) * smoothingFactor) / 100.0;
}

static float middleOfPoints(float a, float b) {
    return (a + b) / 2.0;
}
However, exchanging the two control points does not lead to satisfactory results; it increases the number of fragments enormously. I'd appreciate any further help.
Unfortunately, your code doesn't show how the control points of the smoothed path are calculated. But they are the problem. Most of them are on the wrong side of the start or end point of a curve segment. As a consequence, your path makes a very tight loop or S curve at many main points.
Another indication of this is that while the path generally turns left, many path segments are to the right.
I don't quite understand why you get these white half circles. It seems the path is drawn with the even-odd fill rule instead of the non-zero winding rule, which probably cannot be changed. Anyway, the problem will disappear once you fix the control points.
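The "wrong side" diagnosis can be verified numerically from the logged data. Assuming each log row corresponds to one CGPathAddCurveToPoint call (end point plus its two control points), the incoming control point of a segment should sit before the end point along the direction of travel; a positive projection past the end point means the curve overshoots and doubles back in a tight loop. A small C++ sketch of that test:

```cpp
#include <cassert>

struct Pt { double x, y; };

// For a cubic segment start -> end, check whether the incoming control
// point (the one attached to end) overshoots the end point: project
// (ctrl - end) onto the travel direction (end - start). A positive
// projection means the curve runs past end and has to loop back.
bool overshootsEnd(Pt start, Pt end, Pt ctrl) {
    double dx = end.x - start.x, dy = end.y - start.y;
    double px = ctrl.x - end.x,  py = ctrl.y - end.y;
    return px * dx + py * dy > 0.0;
}
```

Applied to the first logged segment, (66, 91) to (59, 110) with incoming control (58, 113.75), the projection is positive, so that control point already produces one of the loops. (The row-to-segment pairing is my assumption from the log layout, not stated in the question.)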

iOS: How to convert UIViewAnimationCurve to UIViewAnimationOptions?

The UIKeyboardAnimationCurveUserInfoKey has a UIViewAnimationCurve value. How do I convert it to the corresponding UIViewAnimationOptions value for use with the options argument of +[UIView animateWithDuration:delay:options:animations:completion:]?
// UIView.h
typedef enum {
    UIViewAnimationCurveEaseInOut,  // slow at beginning and end
    UIViewAnimationCurveEaseIn,     // slow at beginning
    UIViewAnimationCurveEaseOut,    // slow at end
    UIViewAnimationCurveLinear
} UIViewAnimationCurve;

// ...

enum {
    // ...
    UIViewAnimationOptionCurveEaseInOut = 0 << 16,  // default
    UIViewAnimationOptionCurveEaseIn    = 1 << 16,
    UIViewAnimationOptionCurveEaseOut   = 2 << 16,
    UIViewAnimationOptionCurveLinear    = 3 << 16,
    // ...
};
typedef NSUInteger UIViewAnimationOptions;
Obviously, I could create a simple category method with a switch statement, like so:
// UIView+AnimationOptionsWithCurve.h
#interface UIView (AnimationOptionsWithCurve)
#end
// UIView+AnimationOptionsWithCurve.m
#implementation UIView (AnimationOptionsWithCurve)
+ (UIViewAnimationOptions)animationOptionsWithCurve:(UIViewAnimationCurve)curve {
switch (curve) {
case UIViewAnimationCurveEaseInOut:
return UIViewAnimationOptionCurveEaseInOut;
case UIViewAnimationCurveEaseIn:
return UIViewAnimationOptionCurveEaseIn;
case UIViewAnimationCurveEaseOut:
return UIViewAnimationOptionCurveEaseOut;
case UIViewAnimationCurveLinear:
return UIViewAnimationOptionCurveLinear;
}
}
#end
But, is there an even easier/better way?
The category method you suggest is the “right” way to do it—you don’t necessarily have a guarantee of those constants keeping their value. From looking at how they’re defined, though, it seems you could just do
animationOption = animationCurve << 16;
...possibly with a cast to NSUInteger and then to UIViewAnimationOptions, if the compiler feels like complaining about that.
Arguably you can take your first solution and make it an inline function to save yourself the stack push. It's such a tight conditional (constant-bound, etc) that it should compile into a pretty tiny piece of assembly.
Edit:
Per @matt, here you go (Objective-C):
static inline UIViewAnimationOptions animationOptionsWithCurve(UIViewAnimationCurve curve)
{
    switch (curve) {
        case UIViewAnimationCurveEaseInOut:
            return UIViewAnimationOptionCurveEaseInOut;
        case UIViewAnimationCurveEaseIn:
            return UIViewAnimationOptionCurveEaseIn;
        case UIViewAnimationCurveEaseOut:
            return UIViewAnimationOptionCurveEaseOut;
        case UIViewAnimationCurveLinear:
            return UIViewAnimationOptionCurveLinear;
    }
}
Swift 3:
extension UIViewAnimationOptions {
    init(curve: UIViewAnimationCurve) {
        switch curve {
        case .easeIn:
            self = .curveEaseIn
        case .easeOut:
            self = .curveEaseOut
        case .easeInOut:
            self = .curveEaseInOut
        case .linear:
            self = .curveLinear
        }
    }
}
In Swift you can do
extension UIViewAnimationCurve {
    func toOptions() -> UIViewAnimationOptions {
        return UIViewAnimationOptions(rawValue: UInt(rawValue << 16))
    }
}
An issue with the switch-based solution is that it assumes only the documented curve values will ever be passed in. Practice shows, though, that there are situations where that assumption doesn't hold. One instance I found (at least on iOS 7) is when you obtain the keyboard animation info to animate your content along with the appearance/disappearance of the keyboard.
If you listen to the keyboardWillShow: or keyboardWillHide: notifications, and then get the curve the keyboard announces it will use, e.g:
UIViewAnimationCurve curve = [userInfo[UIKeyboardAnimationCurveUserInfoKey] integerValue];
you're likely to obtain the value 7. If you pass that into the switch function/method, you won't get a correct translation of that value, resulting in incorrect animation behaviour.
Noah Witherspoon's answer will return the correct value. Combining the two solutions, you might write something like:
static inline UIViewAnimationOptions animationOptionsWithCurve(UIViewAnimationCurve curve)
{
    UIViewAnimationOptions opt = (UIViewAnimationOptions)curve;
    return opt << 16;
}
The caveat here, as noted by Noah also, is that if Apple ever changes the enumerations where the two types no longer correspond, then this function will break. The reason to use it anyway, is that the switch based option doesn't work in all situations you may encounter today, while this does.
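The arithmetic behind both answers can be checked in isolation. A standalone C++ sketch (the numeric values are copied from the UIKit header excerpt quoted earlier in the question) shows why the shift handles the keyboard's undocumented curve value 7 while a four-case switch cannot:

```cpp
#include <cassert>
#include <cstdint>

// Values copied from the UIKit header excerpt above.
enum : uint32_t {
    CurveEaseInOut = 0, CurveEaseIn = 1, CurveEaseOut = 2, CurveLinear = 3
};
enum : uint32_t {
    OptionCurveEaseInOut = 0u << 16,
    OptionCurveEaseIn    = 1u << 16,
    OptionCurveEaseOut   = 2u << 16,
    OptionCurveLinear    = 3u << 16
};

// The shift-based conversion: correct for the four documented curves,
// and it still produces a distinct (if unnamed) option value for
// undocumented inputs such as the 7 the keyboard reports.
uint32_t optionsFromCurve(uint32_t curve) { return curve << 16; }
```

For curve 7 the result is 0x70000, which matches none of the four named option constants; that is exactly the case the switch silently mishandles.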
iOS 10+
Swift 5
A Swift alternative to converting UIView.AnimationCurve to UIView.AnimationOptions, which may not even be possible, is to use UIViewPropertyAnimator (iOS 10+), which accepts UIView.AnimationCurve and is a more modern animator than UIView.animate.
Most likely you'll be working with UIResponder.keyboardAnimationCurveUserInfoKey, which returns an NSNumber. The documentation for this key is (Apple's own notation, not mine):
public class let keyboardAnimationCurveUserInfoKey: String // NSNumber of NSUInteger (UIViewAnimationCurve)
Using this approach, we can eliminate any guesswork:
if let kbTiming = notification.userInfo?[UIResponder.keyboardAnimationCurveUserInfoKey] as? NSNumber, // doc says to unwrap as NSNumber
   let timing = UIView.AnimationCurve.RawValue(exactly: kbTiming), // takes an NSNumber
   let curve = UIView.AnimationCurve(rawValue: timing) { // takes a raw value
    let animator = UIViewPropertyAnimator(duration: duration, curve: curve) {
        // add animations
    }
    animator.startAnimation()
}

Very simple test view in MonoTouch draws a line using Core Graphics but view content is not shown

I give up on this very simple test I've been trying to run. I want to add a subview to my window which does nothing but draw a line from one corner of the iPhone's screen to the other; then, using TouchesMoved(), it is supposed to draw a line from the last point to the current one.
The issues:
1. The initial line is not visible.
2. When using Interface Builder, the initial line is visible, but DrawRect() is never called, even if I call SetNeedsDisplay().
It can't be that hard... can somebody fix the code below to make it work?
In main.cs in FinishedLaunching():
oView = new TestView();
oView.AutoresizingMask = UIViewAutoresizing.FlexibleWidth | UIViewAutoresizing.FlexibleHeight;
oView.Frame = new System.Drawing.RectangleF(0, 0, 320, 480);
window.AddSubview(oView);
window.MakeKeyAndVisible ();
The TestView.cs:
using System;
using MonoTouch.UIKit;
using MonoTouch.CoreGraphics;
using System.Drawing;
using MonoTouch.CoreAnimation;
using MonoTouch.Foundation;

namespace Test
{
    public class TestView : UIView
    {
        public TestView () : base()
        {
        }

        public override void DrawRect (RectangleF area, UIViewPrintFormatter formatter)
        {
            CGContext oContext = UIGraphics.GetCurrentContext();
            oContext.SetStrokeColor(UIColor.Red.CGColor.Components);
            oContext.SetLineWidth(3.0f);
            this.oLastPoint.Y = UIScreen.MainScreen.ApplicationFrame.Size.Height - this.oLastPoint.Y;
            this.oCurrentPoint.Y = UIScreen.MainScreen.ApplicationFrame.Size.Height - this.oCurrentPoint.Y;
            oContext.StrokeLineSegments(new PointF[] { this.oLastPoint, this.oCurrentPoint });
            oContext.Flush();
            oContext.RestoreState();
            Console.Out.WriteLine("Current X: {0}, Y: {1}", oCurrentPoint.X.ToString(), oCurrentPoint.Y.ToString());
            Console.Out.WriteLine("Last X: {0}, Y: {1}", oLastPoint.X.ToString(), oLastPoint.Y.ToString());
        }

        private PointF oCurrentPoint = new PointF(0, 0);
        private PointF oLastPoint = new PointF(320, 480);

        public override void TouchesMoved (MonoTouch.Foundation.NSSet touches, UIEvent evt)
        {
            base.TouchesMoved (touches, evt);
            UITouch oTouch = (UITouch)touches.AnyObject;
            this.oCurrentPoint = oTouch.LocationInView(this);
            this.oLastPoint = oTouch.PreviousLocationInView(this);
            this.SetNeedsDisplay();
        }
    }
}
You overrode the wrong Draw method.
You overrode DrawRect (RectangleF, UIViewPrintFormatter), which is used for printing.
You want to override Draw (RectangleF) instead.