Objective-C class unable to respond to helper method

I am receiving the familiar and nonspecific diagnostic: "No visible @interface for ... declares the selector ...". I appreciate that this diagnostic has come up many times on this site (I have looked at most of those questions), but none of the suggested fixes seem to apply to my case.
I have an old Cocoa program that rotates an OpenGL surface using the mouse. (It is based on an old Apple demo, NSGL Teapot.) The surface is created in an NSOpenGLView subclass (SurfaceView), and the code that performs the rotation is in a helper class called Trackball. When the mouse drags across the screen, a method in a Trackball instance (rollTo:sender:) invokes a method on the SurfaceView (rotateBy:) that supplies the amount of rotation. Unfortunately, Xcode 7.3 complains that NSOpenGLView lacks a visible interface for the rotateBy: selector (method).
The code compiled perfectly 15 years ago, but it does not compile now. Everything else in the program works fine (i.e., the surface is rendered and responds correctly to the various sliders, etc.). Can you suggest how I can get the OpenGL view to respond to rotateBy:?
Thanks very much!
Here is the interface:
    #import <Cocoa/Cocoa.h>
    #import <OpenGL/OpenGL.h>
    #import <OpenGL/gl.h>
    #import <OpenGL/glu.h>
    #import <GLUT/glut.h>
    #import "Trackball.h"

    @interface SurfaceView : NSOpenGLView
    {
        float width;
        GLUnurbsObj *theNurb;
        Trackball *m_trackball;
        // The main rotation
        float m_rotation[4];
        // The trackball rotation
        float m_tbRot[4];
    }
    - (id)initWithFrame:(NSRect)frameRect;
    - (void)drawRect:(NSRect)rect;
    - (void)rotateBy:(float *)r;
    - (void)mouseDown:(NSEvent *)theEvent;
    - (void)mouseUp:(NSEvent *)theEvent;
    - (void)mouseDragged:(NSEvent *)theEvent;
    - (void)zeroRotate;
    @end
Method body:
    - (void)rotateBy:(float *)r
    {
        m_tbRot[0] = r[0];
        m_tbRot[1] = r[1];
        m_tbRot[2] = r[2];
        m_tbRot[3] = r[3];
    }
Here is the method in which the error occurs:
    - (void)rollTo:(NSPoint)pt sender:(NSOpenGLView *)sender
    {
        float xxyy;
        float rot[4];
        float cosAng, sinAng;
        float ls, le, lr;

        m_endPt[0] = pt.x - m_ctr.x;
        m_endPt[1] = pt.y - m_ctr.y;
        if (fabs(m_endPt[0] - m_startPt[0]) < kTol && fabs(m_endPt[1] - m_startPt[1]) < kTol)
            return; // Not enough change in the vectors to have an action.

        // Compute the ending vector from the surface of the ball to its center.
        xxyy = m_endPt[0]*m_endPt[0] + m_endPt[1]*m_endPt[1];
        if (xxyy > m_radius*m_radius) {
            // Outside the sphere.
            m_endPt[2] = 0.;
        } else
            m_endPt[2] = sqrt(m_radius*m_radius - xxyy);

        // Take the cross product of the two vectors. r = s X e
        rot[1] =  m_startPt[1] * m_endPt[2] - m_startPt[2] * m_endPt[1];
        rot[2] = -m_startPt[0] * m_endPt[2] + m_startPt[2] * m_endPt[0];
        rot[3] =  m_startPt[0] * m_endPt[1] - m_startPt[1] * m_endPt[0];

        // Use atan for a better angle. If you use only cos or sin, you only get
        // half the possible angles, and you can end up with rotations that flip around near
        // the poles.
        // cos(a) = (s . e) / (||s|| ||e||)
        cosAng = m_startPt[0]*m_endPt[0] + m_startPt[1]*m_endPt[1] + m_startPt[2]*m_endPt[2]; // (s . e)
        ls = sqrt(m_startPt[0]*m_startPt[0] + m_startPt[1]*m_startPt[1] + m_startPt[2]*m_startPt[2]);
        ls = 1. / ls; // 1 / ||s||
        le = sqrt(m_endPt[0]*m_endPt[0] + m_endPt[1]*m_endPt[1] + m_endPt[2]*m_endPt[2]);
        le = 1. / le; // 1 / ||e||
        cosAng = cosAng * ls * le;

        // sin(a) = ||(s X e)|| / (||s|| ||e||)
        sinAng = lr = sqrt(rot[1]*rot[1] + rot[2]*rot[2] + rot[3]*rot[3]); // ||(s X e)||
        // Keep this length in lr for normalizing the rotation vector later.
        sinAng = sinAng * ls * le;
        rot[0] = (float)atan2(sinAng, cosAng) * kRad2Deg; // GL rotations are in degrees.

        // Normalize the rotation axis.
        lr = 1. / lr;
        rot[1] *= lr; rot[2] *= lr; rot[3] *= lr;

        [sender rotateBy:rot];
    }

If you want to call rotateBy: on sender, then your method signature needs to declare sender as a type that implements that method.
Rewrite it as:

    - (void)rollTo:(NSPoint)pt sender:(SurfaceView *)sender

You could also cast sender, but that is more dangerous (i.e., you are likely to get a run-time error instead of a compile-time one).

The usage is wrong. It was probably always wrong; the compiler is simply better at detecting it now.
Your sender is declared as NSOpenGLView, and you are calling the rotateBy: method on it. NSOpenGLView has no such method; it is your subclass SurfaceView that does.
One simple fix is to declare sender as id, which removes the type information completely.
A better fix is to declare the sender parameter correctly as SurfaceView *.
If you need to keep the same interface, just perform a cast:

    [(SurfaceView *)sender rotateBy:rot];
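With the typed signature, note that Trackball.h cannot simply import SurfaceView.h, because SurfaceView.h already imports Trackball.h. A forward declaration breaks the cycle. A minimal sketch (assuming Trackball inherits from NSObject; the rest of the class is unchanged):

    // Trackball.h
    #import <Cocoa/Cocoa.h>

    @class SurfaceView;  // forward declaration; avoids a circular import,
                         // since SurfaceView.h already imports Trackball.h

    @interface Trackball : NSObject
    // ... existing ivars and methods ...
    - (void)rollTo:(NSPoint)pt sender:(SurfaceView *)sender;
    @end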

Related

How to get values from simd_float4 in objective-c

I got a simd_float4x4 matrix from ARKit. I want to check the values of the matrix, but I found I do not know how to do it in Objective-C. In Swift this can be written as matrix.columns.3 to fetch a vector of values, but I do not know the Objective-C equivalent. Could someone point me in the right direction, please? Thanks!
simd_float4x4 is a struct (essentially four simd_float4 columns), and you can use

    matrix.columns[index]

to access a column of the matrix.

    /*! @abstract A matrix with 4 rows and 4 columns. */
    struct simd_float4x4 {
        public var columns: (simd_float4, simd_float4, simd_float4, simd_float4)
        public init()
        public init(columns: (simd_float4, simd_float4, simd_float4, simd_float4))
    }
Apple documentation link: https://developer.apple.com/documentation/simd/simd_float4x4?language=objc
Hope this helps!
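For example, reading the translation column of an ARKit transform in Objective-C might look like this (a minimal sketch; the anchor variable is a placeholder for whatever ARKit object supplied the matrix):

    #import <simd/simd.h>

    simd_float4x4 transform = anchor.transform;  // e.g. from an ARAnchor
    simd_float4 column3 = transform.columns[3];  // fourth column holds the translation
    NSLog(@"translation = (%f, %f, %f)", column3.x, column3.y, column3.z);

In Objective-C, columns is a plain C array of simd_float4, so ordinary subscripting works.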
Here's an example from this project, which is a pretty nice reference for Objective-C ARKit adventures: https://github.com/markdaws/arkit-by-example/blob/part3/arkit-by-example/ViewController.m
    - (void)insertGeometry:(ARHitTestResult *)hitResult {
        // Right now we just insert a simple cube; later we will improve these to be
        // more interesting and have better texture and shading
        float dimension = 0.1;
        SCNBox *cube = [SCNBox boxWithWidth:dimension height:dimension length:dimension chamferRadius:0];
        SCNNode *node = [SCNNode nodeWithGeometry:cube];

        // The physicsBody tells SceneKit this geometry should be manipulated by the physics engine
        node.physicsBody = [SCNPhysicsBody bodyWithType:SCNPhysicsBodyTypeDynamic shape:nil];
        node.physicsBody.mass = 2.0;
        node.physicsBody.categoryBitMask = CollisionCategoryCube;

        // We insert the geometry slightly above the point the user tapped, so that it
        // drops onto the plane using the physics engine
        float insertionYOffset = 0.5;
        node.position = SCNVector3Make(
            hitResult.worldTransform.columns[3].x,
            hitResult.worldTransform.columns[3].y + insertionYOffset,
            hitResult.worldTransform.columns[3].z
        );

        [self.sceneView.scene.rootNode addChildNode:node];
        [self.boxes addObject:node];
    }

How to create instance of struct in Objective-C that was defined in C file

In Xcode I have C files that contain the following definition:
    typedef struct {
        double x;
        double y;
        double z;
        CoordUnit unit;
    } YGPoint;

    #define YGMeterPoint(x,y,z) __YGPointWithUnit(x,y,z,METER)
This is used as follows in C:

    // Declare origin point and translated point
    YGPoint point = YGMeterPoint(994272.661, 113467.422);
    // Convert point in Lambert Zone 1 to WGS84
    point = YGPointConvertWGS84(point, LAMBERT_I);
    // Convert to degrees
    point = YGPointToDegree(point);
    printf("Lat:%.9f - Lon:%.9f", point.y, point.x);
But I would like to call it from Objective-C in Xcode. How can I do that?
Like @Martin R said, normally I can't see why you would need or want to do this, but this article explains how to do it. Look at Example 2.
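Since Objective-C is a superset of C, you can also use the struct and functions directly inside any Objective-C method. A minimal sketch, assuming the definitions above live in a header (the header and class names here are hypothetical, and the macro is given all three coordinates per its definition):

    #import <Foundation/Foundation.h>
    #import "YGPoint.h"  // hypothetical header with the typedef and conversion functions

    @implementation CoordinateConverter  // hypothetical class, for illustration

    - (void)logWGS84 {
        // Plain C code compiles as-is inside an Objective-C method
        YGPoint point = YGMeterPoint(994272.661, 113467.422, 0.0);
        point = YGPointConvertWGS84(point, LAMBERT_I);
        point = YGPointToDegree(point);
        NSLog(@"Lat:%.9f - Lon:%.9f", point.y, point.x);
    }

    @end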

GluUnProject for iOS

To derive 3D world coordinates from 2D screen coordinates on iOS, is there any possible way besides a gluUnProject port?
I've been fiddling around with this for days on end now, and I can't seem to get the hang of it.
    - (void)receivePoint:(CGPoint)loke
    {
        GLfloat projectionF[16];
        GLfloat modelViewF[16];
        GLint viewportI[4];

        glGetFloatv(GL_MODELVIEW_MATRIX, modelViewF);
        glGetFloatv(GL_PROJECTION_MATRIX, projectionF);
        glGetIntegerv(GL_VIEWPORT, viewportI);

        loke.y = (float)viewportI[3] - loke.y;

        float nearPlanex, nearPlaney, nearPlanez, farPlanex, farPlaney, farPlanez;
        gluUnProject(loke.x, loke.y, 0, modelViewF, projectionF, viewportI, &nearPlanex, &nearPlaney, &nearPlanez);
        gluUnProject(loke.x, loke.y, 1, modelViewF, projectionF, viewportI, &farPlanex, &farPlaney, &farPlanez);

        float rayx = farPlanex - nearPlanex;
        float rayy = farPlaney - nearPlaney;
        float rayz = farPlanez - nearPlanez;
        float rayLength = sqrtf((rayx * rayx) + (rayy * rayy) + (rayz * rayz));

        // Normalize the ray vector
        rayx /= rayLength;
        rayy /= rayLength;
        rayz /= rayLength;

        float collisionPointx, collisionPointy, collisionPointz;
        for (int i = 0; i < 50; i++)
        {
            collisionPointx = rayx * rayLength / i * 50;
            collisionPointy = rayy * rayLength / i * 50;
            collisionPointz = rayz * rayLength / i * 50;
        }
    }
There's a good chunk of my code. Yeah, I could have easily used a struct, but I was too lazy to do it at the time. That's something I can go back and fix later.
Anyway, the point is that when I output the values with NSLog after calling gluUnProject, the near-plane and far-plane results are not even close to accurate. In fact, they both give exactly the same results; not to mention that on the first click, x, y, and z all come back as "nan".
Am I skipping over something extraordinarily important here?
There is no gluUnProject function in OpenGL ES 2.0, so what is this port that you are using? There is also no GL_MODELVIEW_MATRIX or GL_PROJECTION_MATRIX in ES 2.0, which is most likely your problem: those glGetFloatv calls cannot return valid matrices.
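If you are targeting ES 2.0 with GLKit, GLKMathUnproject does the same job with matrices you supply yourself. A minimal sketch, assuming you already keep your own model-view and projection matrices (modelViewMatrix and projectionMatrix below are placeholders for them):

    #import <GLKit/GLKit.h>

    - (void)receivePoint:(CGPoint)loke
    {
        GLint viewport[4];
        glGetIntegerv(GL_VIEWPORT, viewport);

        // Window-space points on the near (z = 0) and far (z = 1) planes
        GLKVector3 windowNear = GLKVector3Make(loke.x, viewport[3] - loke.y, 0.0f);
        GLKVector3 windowFar  = GLKVector3Make(loke.x, viewport[3] - loke.y, 1.0f);

        bool success = false;
        GLKVector3 nearPt = GLKMathUnproject(windowNear, modelViewMatrix, projectionMatrix,
                                             viewport, &success);
        GLKVector3 farPt  = GLKMathUnproject(windowFar, modelViewMatrix, projectionMatrix,
                                             viewport, &success);
        if (success) {
            // Normalized pick ray through the scene
            GLKVector3 ray = GLKVector3Normalize(GLKVector3Subtract(farPt, nearPt));
            NSLog(@"ray = (%f, %f, %f)", ray.x, ray.y, ray.z);
        }
    }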

Line/Ray-intersection not working as expected

I've been working on cobbling together a ray tracer. You know, for fun. So far most things are going as planned, but as soon as I started transforming my test spheres, it all went awry.
The fundamental concept is to use one of the standard shapes at the origin, transform the camera rays into object space, and then intersect.
As long as the sphere is identical in object space and world space, it works as expected, but as soon as the spheres are scaled, normals and intersection points go wild.
I've been wracking my brains, and poring over this code over and over, but I just can't find the mistake. Fresh eyes would be much appreciated.
    @implementation RTSphere

    - (CGFloat)intersectsRay:(RTRay *)worldRay atPoint:(RTVector *)intersection normal:(RTVector *)normal material:(RTMaterial **)material {
        RTRay *objectRay = [worldRay rayByTransformingByMatrix:self.inverseTransformation];
        RTVector D = objectRay.direction;
        RTVector O = objectRay.start;

        CGFloat A, B, C;
        A = RTVectorDotProduct(D, D);
        B = 2 * RTVectorDotProduct(D, O);
        C = RTVectorDotProduct(O, O) - 0.25;

        CGFloat BB4AC = B * B - 4 * A * C;
        if (BB4AC < 0.0) {
            return -1.0;
        }

        CGFloat t0 = (-B - sqrt(BB4AC)) / 2 * A;
        CGFloat t1 = (-B + sqrt(BB4AC)) / 2 * A;
        if (t0 > t1) {
            CGFloat tmp = t0;
            t0 = t1;
            t1 = tmp;
        }
        if (t1 < 0.0) {
            return -1.0;
        }

        CGFloat t;
        if (t0 < 0.0) {
            t = t1;
        } else {
            t = t0;
        }

        if (material) {
            *material = self.material;
        }
        if (intersection) {
            RTVector isect_o = RTVectorAddition(objectRay.start, RTVectorMultiply(objectRay.direction, t));
            *intersection = RTVectorMatrixMultiply(isect_o, self.transformation);
            if (normal) {
                RTVector normal_o = RTVectorSubtraction(isect_o, RTMakeVector(0.0, 0.0, 0.0));
                RTVector normal_w = RTVectorUnit(RTVectorMatrixMultiply(normal_o, self.transformationForNormal));
                *normal = normal_w;
            }
        }
        return t;
    }

    @end
Why are the normals and intersection points not translating into world space as expected?
Edit: I'm moderately confident that my vector and matrix functions are mathematically sound, and I'm thinking it's chiefly a method error, but I recognize that I could be wrong.
There is a lot of RT* code here "behind the scenes" that we have no way of knowing is correct, so I would start by making sure you have good unit tests of those math functions. The ones I would most suspect, from my experience managing transforms, are rayByTransformingByMatrix: and the value of inverseTransformation. I've found these very easy to get wrong when you combine transformations: rotating and then scaling is not the same as scaling and then rotating.
At what point does it go wrong for you? Are you sure objectRay itself is correct? (If it isn't, then the rest of this function doesn't matter.) Again, unit tests are your friend. You should hand-calculate several situations and then write unit tests to ensure that your methods return the right answers.
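As a concrete starting point, a round-trip test is a cheap way to catch a bad inverseTransformation. A minimal sketch, assuming an XCTest target, that RTVector exposes x, y, z fields, and that the RTSphere initializer and the way you attach a transformation match your actual API:

    #import <XCTest/XCTest.h>

    @interface RTTransformTests : XCTestCase
    @end

    @implementation RTTransformTests

    - (void)testTransformThenInverseRoundTrips {
        // Hypothetical setup: give the sphere a mixed transform (rotation plus
        // non-uniform scale), since ordering bugs only appear with mixed transforms.
        RTSphere *sphere = [[RTSphere alloc] init];

        RTVector v = RTMakeVector(1.0, 2.0, 3.0);
        RTVector roundTrip = RTVectorMatrixMultiply(
            RTVectorMatrixMultiply(v, sphere.transformation),
            sphere.inverseTransformation);

        XCTAssertEqualWithAccuracy(roundTrip.x, v.x, 1e-6);
        XCTAssertEqualWithAccuracy(roundTrip.y, v.y, 1e-6);
        XCTAssertEqualWithAccuracy(roundTrip.z, v.z, 1e-6);
    }

    @end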

Rendering painted lines as nodes in Cocos

I'm working on a drawing app for iPad using cocos2d for iOS, and I'm having performance issues with drawing lines as a type of CCNode. I understand that implementing draw in a node causes it to be called every time the canvas is repainted, and the current code is very heavy if it runs on every repaint:
    for (LineNodePoint *point in self.points) {
        start = end;
        end = point;
        if (start && end) {
            float distance = ccpDistance(start.point, end.point);
            if (distance > 1) {
                int d = (int)distance;
                float difx = end.point.x - start.point.x;
                float dify = end.point.y - start.point.y;
                for (int i = 0; i < d; i++) {
                    float delta = i / distance;
                    [[self.brush sprite] setPosition:ccp(start.point.x + (difx * delta), start.point.y + (dify * delta))];
                    [[self.brush sprite] visit];
                }
            }
        }
    }
Very heavy...
I either need a better way to draw the lines or a way to cache the drawing as a raster.
Thanks in advance for any help.
How about ccDrawLine or CCMutableTexture? CCMutableTexture is for manipulating pixels and uses CCRenderTexture internally, as you said.
ccDrawLine:
    cocos2d for iPhone 1.0.0 API reference
CCMutableTexture:
    Fast set/getPixel for an opengl texture?
    [render texture] pixel manipulation (integrated CCMutableTexture functionality)
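To cache the strokes as a raster with CCRenderTexture directly, the idea is to stamp each new segment into a persistent texture once, instead of replaying every point in draw. A minimal sketch against cocos2d-iphone 1.x, reusing the loop variables from the question (the canvas size and position are placeholders):

    // Create the cache once (e.g. in init) and add it to the layer
    CCRenderTexture *canvas = [CCRenderTexture renderTextureWithWidth:1024 height:768];
    canvas.position = ccp(512, 384);
    [self addChild:canvas];

    // When a new segment arrives, render the brush into the texture once;
    // the pixels persist, so nothing has to be replayed on later frames.
    [canvas begin];
    for (int i = 0; i < d; i++) {
        float delta = i / distance;
        [[self.brush sprite] setPosition:ccp(start.point.x + (difx * delta),
                                             start.point.y + (dify * delta))];
        [[self.brush sprite] visit];
    }
    [canvas end];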