OpenGL: gluLookAt function not working

This is an image which I draw with my program:
[Image: screenshot of the rendered figure: http://i62.tinypic.com/j163j8.png]
It is supposed to be 3D. When I try to check it with gluLookAt:
GL.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
GL.glLoadIdentity();
GL.glMatrixMode(GL.GL_MODELVIEW);
glu.gluLookAt(5.0, 2.0, 2.0,   // eye position
              0.0, 0.0, 0.0,   // point to look at
              0.0, 0.0, 0.0);  // "up" vector
figura(drawable); // Drawing figure
GL.glFlush();
It only shows a white screen, or a messed-up figure for a second, and then it disappears. If I understand correctly, the first three coordinates give the point of view, the next three give the point being looked at, and the last three give the rotation axis. But this function only messes everything up.
Thanks for answers.

The last three parameters aren't exactly the rotation, but the "up" vector, and the {0, 0, 0} you are passing is degenerate: it gives gluLookAt no way to orient the camera. If you do not want a distorted view, the up vector should be perpendicular to your direction vector, which here is {-5, -2, -2}. Among all the perpendicular vectors, the one you choose defines the rotation, as you call it.
In your example, the most upward perpendicular vector (the one you would use in a first-person game, for instance) is {-.323, .937, -.129} after normalization. I computed it by first finding the "left" vector (the cross product of the absolute up {0, 1, 0} with the direction), and then taking the cross product of the direction with that "left" vector.
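For concreteness, here is that computation as a small sketch (Python with numpy, purely illustrative; the question's code is JOGL, so nothing here is part of it):

import numpy as np

eye = np.array([5.0, 2.0, 2.0])
center = np.array([0.0, 0.0, 0.0])

direction = center - eye                     # {-5, -2, -2}
left = np.cross([0.0, 1.0, 0.0], direction)  # absolute up x direction
up = np.cross(direction, left)               # perpendicular "up" vector
up /= np.linalg.norm(up)
print(up)  # approx. [-0.323  0.937 -0.129]

Passing those three components as the last arguments of gluLookAt (instead of {0, 0, 0}) gives an upright, undistorted view.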

Related

How do 'normalized figure coordinates' work?

In matplotlib, I recently came across the term 'normalized figure coordinates', which is apparently a specification of a rectangle by four parameters.
It is evident that a rectangle can be described by four numbers, and I'm guessing these four numbers somehow describe the dimensions as well as the location of the rectangle. However, I haven't managed to find an answer as to which of these parameters specifies which value.
Additionally, I'm not sure whether this is a matplotlib-specific term or one of general meaning, as the matplotlib documentation does not cite or link any sources with respect to this term.
Can anyone shed some light on this issue, please?
There are several functions where normalized figure coordinates are used.
In general, the possibilities are
(left, bottom, width, height) (this is called "bounds" in matplotlib); or
(left, bottom, right, top) (called "extents").
Hopefully the documentation makes it clear which 4-tuple is expected in the respective case.
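As a quick illustration, matplotlib's Bbox exposes both conventions for the same rectangle (a small sketch; the values are arbitrary):

from matplotlib.transforms import Bbox

bbox = Bbox.from_extents(0.1, 0.1, 0.9, 0.8)  # left, bottom, right, top
print(bbox.extents)  # [0.1 0.1 0.9 0.8]  -> (left, bottom, right, top)
print(bbox.bounds)   # (0.1, 0.1, 0.8, 0.7) -> (left, bottom, width, height)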
Here you seem to be interested in the GridSpec's tight_layout parameter rect. From its documentation
rect : tuple of 4 floats, optional
(left, bottom, right, top) rectangle in normalized figure coordinates that the whole subplots area (including labels) will fit into. Default is (0, 0, 1, 1).
To answer your last question: the term normalization is not matplotlib-specific; you can get a very short intro from Wikipedia.
As for matplotlib: you can have different coordinate systems relative to different objects (e.g. the axes, the figure).
Each of these systems is normalized, in the sense that the 4 corners of the chosen reference object will always have the following coordinates:
(0,1) Top left corner
(1,1) Top right corner
(1,0) Bottom right corner
(0,0) Bottom left corner
Here the first element of each pair refers to the x-axis and the second to the y-axis.
This makes, among other things, annotation or placement of artist objects easier, as you can specify the position of the element you wish to add using any of the available coordinate systems.
All you need to do is select an appropriate coordinate system by passing a transformation object to the transform parameter.
An example:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([5.], [2.], 'o')

# bottom (y=0) left (x=0) green circle of radius 0.1 (expressed in axes coords)
circle = plt.Circle((0, 0), 0.1, color="g", transform=ax.transAxes)
ax.add_artist(circle)

ax.annotate('I am the top (y=1.0) right (x=1.0) Figure corner',
            xy=(1, 1), xycoords=fig.transFigure,
            xytext=(0.2, 0.2), textcoords='offset points')

plt.text(  # position text relative to data
    5., 2., 'I am the (5,2) data point',
    ha='center', va='bottom',   # text alignment
    transform=ax.transData)     # coordinate system transformation

plt.text(  # position text relative to Axes
    1.0, 0.0, 'I am the bottom (y=0.0) right (x=1.0) axis corner',
    ha='right', va='bottom',
    transform=ax.transAxes)

plt.text(  # position text relative to Figure
    0.0, 1.0, 'I am the top (y=1.0) left (x=0.0) figure corner',
    ha='left', va='top',
    transform=fig.transFigure)

plt.show()

Not getting how the property rotation works in SceneKit

When you specify a rotation for an object, you do something like this:
_earthNode.rotation = SCNVector4Make(1, 0, 0, M_PI/2);
What I am not getting is how to specify a specific rotation for each axis. Let's say I wanted to rotate my node by PI on x, PI/2 on y, and PI/4 on z; how would I do that? I thought I could do something like this:
_earthNode.rotation = SCNVector4Make(1, 0.5, 0.25, M_PI);
But it doesn't change anything...
How does this property work?
The rotation vector in SceneKit is specified as the axis of rotation (first 3 components) followed by the angle (4th component), called the axis-angle representation.
The format you are trying to use (a separate angle for each axis) is called Euler angles (unless I'm remembering wrong).
Translating between the two representations is just a bit of trigonometry. A quick online search for "Euler angles to axis angle" leads to this page, which shows how to do it in Java.
SCNNode has an eulerAngles property that allows you to do just that.
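For instance, here is a sketch of the conversion in Python using scipy (illustrative only; the 'xyz' rotation order is an assumption, so check which convention SceneKit actually applies before relying on it):

import numpy as np
from scipy.spatial.transform import Rotation

# Per-axis angles from the question: pi on x, pi/2 on y, pi/4 on z.
rot = Rotation.from_euler('xyz', [np.pi, np.pi / 2, np.pi / 4])

rotvec = rot.as_rotvec()         # rotation vector = axis * angle
angle = np.linalg.norm(rotvec)
axis = rotvec / angle
print(axis, angle)  # plug into SCNVector4Make(axis[0], axis[1], axis[2], angle)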

GLKView GLKMatrix4MakeLookAt description and explanation

For the modelview matrix I understand how to form translation and scale matrices, but I am unable to understand how to form a view matrix using GLKMatrix4MakeLookAt. Can anyone explain how it works and what values to give its parameters (eye, center, and up X/Y/Z)?
GLK_INLINE GLKMatrix4 GLKMatrix4MakeLookAt(float eyeX, float eyeY, float eyeZ,
                                           float centerX, float centerY, float centerZ,
                                           float upX, float upY, float upZ)
GLKMatrix4MakeLookAt creates a viewing matrix (in the same way gluLookAt does, in case you look at other OpenGL code). As the parameters suggest, it considers the position of the viewer's eye, the point in space they're looking at (e.g., a point on an object), and the up vector, which specifies which direction is "up" (e.g., pointing towards the sky). The viewing matrix generated is the combination of a rotation matrix (composed of a set of orthonormal basis vectors) and a translation.
Logically, the matrix is basically constructed in a few steps:
1. Compute the line-of-sight vector: the normalized vector going from the eye's position to the point you're looking at, the center point.
2. Compute the cross product of the line-of-sight vector with the up vector, and normalize the resulting vector.
3. Compute the cross product of the vector from step 2 with the line-of-sight vector to complete the orthonormal basis.
4. Create a 3x3 rotation matrix by setting the first row to the vector from step 2, the middle row to the vector from step 3, and the bottom row to the negated, normalized line-of-sight vector.
Those steps produce a rotation matrix that rotates the world coordinate system into eye coordinates (a coordinate system where the eye is located at the origin and the line of sight runs down the -z axis). The final viewing matrix is computed by multiplying that rotation with a translation by the negated eye position, which moves the eye from its world coordinate position to the origin of eye coordinates.
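Put together, the steps read like this sketch (Python with numpy rather than GLKit; make_look_at is a hypothetical stand-in for GLKMatrix4MakeLookAt, not its actual source):

import numpy as np

def make_look_at(eye, center, up):
    eye, center, up = (np.asarray(v, float) for v in (eye, center, up))
    f = center - eye                 # step 1: normalized line of sight
    f /= np.linalg.norm(f)
    s = np.cross(f, up)              # step 2: line of sight x up, normalized
    s /= np.linalg.norm(s)
    u = np.cross(s, f)               # step 3: completes the orthonormal basis
    m = np.identity(4)               # step 4: rows are s, u and -f
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = m[:3, :3] @ -eye      # translate the eye to the origin
    return m

As a sanity check, make_look_at([0, 0, 5], [0, 0, 0], [0, 1, 0]) yields an identity rotation with translation (0, 0, -5): an eye five units up the +z axis, looking at the origin.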
Here's a related question showing the code of GLKMatrix4MakeLookAt, and here's a question with more detail about eye coordinates and related coordinate systems: What exactly are eye space coordinates?

Visualizing the Anchor Point of a UIImageView

Is there an easy way of putting a mark (like a cross, for example) on the anchor point of a UIImageView? I'm trying to line up several rotating images by their anchor points, and being able to see these points would make the job a lot easier.
Many thanks.
You are asking how to visualize the anchor point within a view, but it seems to me that you are asking so that you can align the anchor points. I'll try to answer both questions.
Visualizing the anchor point.
Every view on iOS has an underlying layer that has an anchor point. The anchor point is in the unit coordinate space of the layer (x and y go from 0 to 1). This means that you can multiply x by the width and y by the height to get the position of the anchor point inside the layer, in the coordinate space of the view/layer. You can then place a subview/sublayer there to show the location of the anchor point.
In code you could do something like this to display a small black dot where the anchor point is.
// A small black dot, rounded into a circle
CALayer *anchorPointLayer = [CALayer layer];
anchorPointLayer.backgroundColor = [UIColor blackColor].CGColor;
anchorPointLayer.bounds = CGRectMake(0, 0, 6, 6);
anchorPointLayer.cornerRadius = 3;

// The anchor point (in unit coordinates) multiplied by the layer's size
// gives its location in the layer's own coordinate space.
CGPoint anchor = viewWithVisibleAnchorPoint.layer.anchorPoint;
CGSize size = viewWithVisibleAnchorPoint.layer.bounds.size;
anchorPointLayer.position = CGPointMake(anchor.x * size.width,
                                        anchor.y * size.height);
[viewWithVisibleAnchorPoint.layer addSublayer:anchorPointLayer];
You can see the result in the image below for four different rotations.
Aligning layers by their anchor point
That is cool and all, but it's actually easier than that to align anchor points.
The key trick is that the position and the anchorPoint are always the same point, only expressed in two different coordinate spaces. The position is specified in the coordinate space of the superlayer. The anchor point is specified in the unit coordinate space of the layer.
The nice thing about this is that views that have their position property aligned will automatically have their anchorPoint aligned. Note that the content is drawn relative to the anchor point. Below is an example of a bunch of views that all have the same y component of their position, thus they are aligned in y.
There really isn't any special code to do this. Just make sure that the position properties are aligned.
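To make the trick concrete, here is a small sketch (plain Python; anchor_in_superlayer is a hypothetical helper, not a UIKit API) showing that layers with equal position components have aligned anchor points regardless of their size:

def anchor_in_superlayer(frame_origin, size, anchor_point):
    # The anchor point in unit coordinates, scaled to the layer's size
    # and offset by its frame origin, lands on the layer's position.
    return (frame_origin[0] + anchor_point[0] * size[0],
            frame_origin[1] + anchor_point[1] * size[1])

# Two differently sized layers, both with position y = 100.
a = anchor_in_superlayer((10, 90), (40, 20), (0.5, 0.5))    # (30.0, 100.0)
b = anchor_in_superlayer((70, 50), (100, 100), (0.5, 0.5))  # (120.0, 100.0)
print(a[1] == b[1])  # True: aligned in y, so the anchor points line up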

Applying a scale and translate transformation to UIBezierPath

I have a UIBezierPath and I would like to:
Move it to any coordinate on the UIView
Make it bigger or smaller
I am drawing the UIBezierPath based on a list of predefined coordinates. I implemented this code:
CGAffineTransform move = CGAffineTransformMakeTranslation(0, 0);
CGAffineTransform moveAndScale = CGAffineTransformScale(move, 1.0f, 1.0f);
[shape applyTransform:moveAndScale];
I have also tried scaling and then moving the shape, it seems to make little to no difference.
Using this code:
[shape moveToPoint:CGPointMake(0, 0)];
I start drawing the shape at (0, 0), but this is what happens. I assume this is because a line is being drawn from (0, 0) to the next point in the list.
When I set the move transformation to (0, 0), this is where it draws. Here, moveToPoint is set to the first coordinate pair in the list. As you can see, it is not at (0, 0).
Finally, increasing the 1.0f moves the shape off the screen completely, no matter where I tell the shape to move.
Can someone help me understand why the shape is not drawing at (0, 0), and why it moves off the screen when I scale it?
(As requested by the OP in a comment above)
I might be wrong on this one, but doesn't this code
CGAffineTransformMakeTranslation(0, 0);
just say that something should be moved 0 pixels along the x-axis and 0 pixels along the y-axis? (reference) It won't actually move anything to the origin (0, 0), as it seems you are trying to do.
Also, it seems like you have slightly misunderstood how to properly use moveToPoint:. Think of it as a way to move your cursor without actually drawing anything; it is just a way to say 'I want to start drawing at this point'. The drawing itself is performed by other methods. If you wanted to, e.g., draw a square with sides of length L, you could do something like this:
// 'shape' is a UIBezierPath
NSInteger L = 100;
CGPoint origin = CGPointMake(50, 50);
[shape moveToPoint:origin];                                 // Initial point to draw from
[shape addLineToPoint:CGPointMake(origin.x+L, origin.y)];   // Top edge: draw from origin to the right
[shape addLineToPoint:CGPointMake(origin.x+L, origin.y+L)]; // Right edge: vertical line downwards
[shape addLineToPoint:CGPointMake(origin.x, origin.y+L)];   // Bottom edge: draw back to the left
[shape addLineToPoint:origin];                              // Left edge: vertical line back up to origin
Note that this code is not tested at all, but it should give you the idea of how to use moveToPoint: and addLineToPoint:.
You need to be careful about the order in which you apply the transforms, and you should consider concatenating the transforms and applying them in one go.
The order is important because each transform affects all x,y positions in the path. So, in your order, the translation is itself affected by the scale; reverse the order and the path will be scaled first and then moved.
The coordinate system also matters, particularly if you are scaling. Ensure you draw around (0, 0), then scale, then translate; this is easiest if you normalise the points. Normalising lat/long values means dividing latitude by 90 and longitude by 180 (this will actually give you the range -1..1). When doing this you should first scale the path, then translate it to the centre of the view, then apply your desired translation. The sketch below shows why the ordering matters.
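To see the effect of ordering concretely, here is a sketch with bare 3x3 affine matrices (Python with numpy, column-vector convention, so the right-most matrix is applied first; note that CGAffineTransform multiplies row vectors, so its concatenation order reads the other way around):

import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx],
                     [0, 1, ty],
                     [0, 0, 1]], float)

def scale(sx, sy):
    return np.array([[sx, 0, 0],
                     [0, sy, 0],
                     [0, 0, 1]], float)

p = np.array([10.0, 10.0, 1.0])  # a path point in homogeneous coordinates

# Translate first, then scale: the translation itself is scaled too.
print(scale(2, 2) @ translation(100, 0) @ p)  # [220.  20.   1.]

# Scale first, then translate: the translation is unaffected.
print(translation(100, 0) @ scale(2, 2) @ p)  # [120.  20.   1.]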