I am struggling to solve this problem.
In my scene, I have a camera that looks at the center of mass of an object. I have some buttons that set the camera to a particular view (front view, back view, ...) on an invisible sphere that surrounds the object (constant radius).
When I click a button, I would like the camera to move from its start position to the end position along the sphere's surface. While the camera moves, I would like it to keep looking at the object's center of mass.
Does anyone have a clue how to achieve this?
Thanks for your help!
If you are happy to use (or prefer) basic trigonometry, then in your initialisation section you could do this:
var cameraAngle = 0;                    // current angle around the orbit, in radians
var orbitRange = 100;                   // radius of the orbit sphere
var orbitSpeed = 2 * Math.PI / 180;     // 2 degrees per frame, in radians
var desiredAngle = 90 * Math.PI / 180;  // target angle, set by your buttons
...
camera.position.set(orbitRange, 0, 0);
camera.lookAt(myObject.position);
Then in your render/animate section you could do this:
// compare with a tolerance of one step rather than relying on an exact float match
if (Math.abs(desiredAngle - cameraAngle) < orbitSpeed) {
    cameraAngle = desiredAngle;   // snap to the target and stop
    orbitSpeed = 0;
} else {
    cameraAngle += orbitSpeed;
}
camera.position.x = Math.cos(cameraAngle) * orbitRange;
camera.position.y = Math.sin(cameraAngle) * orbitRange;
camera.lookAt(myObject.position); // keep the camera pointed at the object
Of course, your buttons would modify what the desiredAngle is (0°, 90°, 180° or 270°, presumably), you need to rotate around the correct plane (I am rotating around the XY plane above), and you can play with orbitRange and orbitSpeed until you are happy.
You can also modify orbitSpeed as it moves along the orbit path, speeding up and slowing down at various cameraAngles for a smoother ride. This process is called 'tweening' and you could search on 'tween' or 'tweening' if you want to know more. I think Three.js has tweening support but have never looked into it.
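For example, a minimal hand-rolled ease-in/ease-out over the same orbit could look like the sketch below. This is only an illustration, not existing Three.js API: startAngle, tweenStart, tweenDuration and updateOrbit are made-up names, and it reuses cameraAngle, desiredAngle, orbitRange, camera and myObject from above.
// Sketch: ease the camera from startAngle to desiredAngle over tweenDuration ms.
// Call updateOrbit(timestamp) from your requestAnimationFrame/render loop.
var startAngle = cameraAngle;        // angle at the moment the button was clicked
var tweenStart = performance.now();  // when the move started, in ms
var tweenDuration = 1500;            // how long the move should take, in ms

function updateOrbit(now) {
    var t = Math.min((now - tweenStart) / tweenDuration, 1); // progress 0..1
    var eased = 0.5 - 0.5 * Math.cos(Math.PI * t);           // slow start, slow finish
    cameraAngle = startAngle + (desiredAngle - startAngle) * eased;
    camera.position.x = Math.cos(cameraAngle) * orbitRange;
    camera.position.y = Math.sin(cameraAngle) * orbitRange;
    camera.lookAt(myObject.position);                        // keep facing the object
}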
Oh, also remember to set your camera's far property to be greater than orbitRange, or you will only see the front half of your object and, depending on what it is, that might look weird.
I would like to rotate a long image representing the hand of a clock.
I have the following code:
val hand = Image(handBitmap).apply {
    scaledHeight = 50.0
    scaledWidth = 400.0
    anchor(.0, 0.5)
    addUpdater {
        rotation = Angle.fromDegrees(rotation.degrees + 1)
    }
}
I expected a result like this image:
but I got this:
What should I change to achieve a clock-hand-like effect?
The code seems correct, and seems to work properly with the latest version of KorGE. Which version of KorGE are you using?
As discussed on Discord:
The problem was that the anchor point was set to the left-center of the image, while the image itself has a gap:
Adjusting the anchor point (which is a ratio) to something that fits the end of the arrow should fix the issue.
In the case of this image, an anchor of anchor(.09, 0.52) could work:
Note also that you can open the debugger by pressing F7 inside the window to see the bounds of the image, the anchor point, and the AABB bounds when debugging this kind of issue.
Hope this was helpful!
The problem looked more like this:
o
|
o------- -------o
|
o
It occurs when I set scaledHeight and scaledWidth.
Finally setting scale(1.0) helped.
I am making a game based on the game AZ on the website Y8, and I am having problems with tile collisions.
The player basically moves by being given speed when up is pressed, and then rotating left or right.
direction = image_angle;

if (keyForward)
{
    speed = 2;
}
else speed = 0;

// rotate
if (keyRotateLeft)
{
    image_angle = image_angle + 5;
}
if (keyRotateRight)
{
    image_angle = image_angle - 5;
}
Then I said that when the player collides with a tile, speed = 0. But the player gets stuck and can't move anymore. Is there a better way to do this?
A simple approach would be as follows (see the sketch below):
1. Attempt to rotate.
2. Check if you are now stuck in a wall.
3. If you are, undo the rotation.
A more advanced approach would be to attempt pushing the player out of solids while rotating.
Alternatively, you may be able to get away with giving the player a circular mask and not rotating the actual mask (using a user-defined variable instead of image_angle).
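The rotate-check-undo idea from the list above is engine-agnostic. Here is a rough sketch in JavaScript-style pseudocode (tryRotate and isStuckInWall are made-up names; in GameMaker the collision test would typically be place_meeting against your wall/tile objects):
// Sketch of "attempt, test, undo" applied to the rotation step.
// isStuckInWall() is a hypothetical stand-in for your engine's collision check.
function tryRotate(player, degrees) {
    var oldAngle = player.imageAngle; // remember the state before the attempt
    player.imageAngle += degrees;     // attempt the rotation
    if (isStuckInWall(player)) {
        player.imageAngle = oldAngle; // overlapping a solid: undo the rotation
    }
}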
I have a moving camera in a camera container which flies around the scene on given paths, like an airplane; it can move to any position (x, y, z, positive or negative). The camera container looks at its own future path using a spline curve.
Now I want to rotate the camera using the mouse while still keeping the general look-at direction as it moves forward with the object. You could say I want to turn my head on my body: while the body moves and keeps the general look-at direction, I turn my head around up to, let's say, 220 degrees up and down, so I can't look behind my body.
In my code the cameraContainer is responsible for moving along a spline curve and for looking in the movement direction. The camera is added as a child of the cameraContainer and is responsible for the rotation driven by the mouse.
What I can't get working properly is the rotation of the camera. I guess it's a very common problem. Let's say the camera, when moving only along the x-axis, doesn't move straight; it moves in a curve. Especially at different camera positions, the rotation seems very different. I was trying to use the cameraContainer to avoid this problem, but the problem seems unrelated to the world coordinates.
Here is what I have:
// camera is in a container
cameraContainer = new THREE.Object3D();
cameraContainer.add(camera);
camera.lookAt(0, 0, 1);
cameraContainer.lookAt(nextPositionOnSplineCurve);
// Here goes the rotation depending on the mouse

// Vertical
var mouseVerti = 1; // 1 = top, -1 = bottom
if (window.App4D.mouse.y <= instance.domCenterPos.y) // mouse is bottom?
    mouseVerti = -1;

// how far the mouse is from the center: 1 = far, 0 = near
var yMousePerc = Math.abs(Math.ceil((instance.domCenterPos.y - window.App4D.mouse.y) / (instance.domCenterPos.y - instance.domBoundingBox.bottom) * 100) / 100);
var yAngleDiffSide = (instance.config.scene.camera.maxAngle - instance.config.scene.camera.angle) / 2;
var yRotateRan = mouseVerti * yAngleDiffSide * yMousePerc * Math.PI / 180;
instance.camera.rotation.x += yRotateRan; // rotation.x = vertical

// Horizontal
var mouseHori = 1; // 1 = right, -1 = left
if (window.App4D.mouse.x <= instance.domCenterPos.x) // mouse is left?
    mouseHori = -1;

// how far the mouse is from the center: 1 = far, 0 = near
var xMousePerc = Math.abs(Math.ceil((instance.domCenterPos.x - window.App4D.mouse.x) / (instance.domCenterPos.x - instance.domBoundingBox.right) * 100) / 100);
var xAngleDiffSide = (instance.config.scene.camera.maxAngle - instance.config.scene.camera.angle) / 2;
var xRotateRan = mouseHori * xAngleDiffSide * xMousePerc * Math.PI / 180;
instance.camera.rotation.y += xRotateRan; // rotation.y = horizontal
I would be really thankful if someone could give me a hint!
I got the answer after some more trial and error. The solution is simply to take the initial y rotation into consideration.
When setting up the camera container and the real camera as a child of the container, I had to point the camera at the front face of the camera container object in order to have the camera looking in the right direction. That led to an initial rotation of 0, 3.14, 0 (x, y, z). The solution was to add 3.14 to the y rotation every time I assigned the mouse rotation (as mentioned by WestLangley).
cameraReal.lookAt(new THREE.Vector3(0,0,1));
cameraReal.rotation.y = xRotateRan + 3.14;
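Putting the pieces together, a rough sketch of the whole setup could look like this (updateCamera, mouseYaw and mousePitch are illustrative names, not from the code above):
// The container is the "body": it follows the spline and sets the general direction.
// The child camera is the "head": it gets the mouse rotation on top of its
// initial y rotation of Math.PI (the 3.14 offset described above).
cameraContainer = new THREE.Object3D();
cameraContainer.add(cameraReal);
cameraReal.lookAt(new THREE.Vector3(0, 0, 1)); // face the container's front

function updateCamera(nextPositionOnSplineCurve, mouseYaw, mousePitch) {
    cameraContainer.lookAt(nextPositionOnSplineCurve); // body: look along the path
    cameraReal.rotation.y = mouseYaw + Math.PI;        // head: yaw, plus the initial offset
    cameraReal.rotation.x = mousePitch;                // head: pitch from the mouse
}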
I'm using LevelHelper to build my level, and I'm adding some particles (dynamically initialized CCParticleSystemQuad instances) inside it. All works fine until I move the world (it's a dynamically drawn Box2D world in which I follow the player with the camera). When I move the world, newly added particles, which are emitting continuously, are drawn at the right position, but the particles emitted afterwards seem to be drawn relative to the global world/screen position. This gives a weird 'trippy' effect which looks totally unrealistic. The particles should be redrawn/refreshed inside the world.
LevelHelperLoader *lh = gameLayer.lh;
LHLayer *layer = [lh layerWithUniqueName:@"MAIN_LAYER"];
NSArray *array = [lh spritesWithTag:WORTEL];
CCParticleSystemQuad *particle;
CGPoint position;
for (LHSprite *sprite in array) {
    particle = [CCParticleSystemQuad particleWithFile:@"DirtParticles.plist"];
    [layer addChild:particle z:0];
    position = sprite.position;
    position.y += sprite.contentSize.height * 0.5f;
    [particle setPosition:position];
    [particle resetSystem];
}
Does anybody know what I might be doing wrong?
Try changing the particle position type:
particle.positionType = kCCPositionTypeFree;
The alternatives are kCCPositionTypeRelative and kCCPositionTypeGrouped. You may have to try all to see which of them best fits your scenario, I'm guessing it's either "free" or "relative".
In my GLUT application I'm simulating a plane with the camera. When the plane's speed is low, I intend to have the nose start to point towards the ground as the camera falls. My first instinct was to just change the pitch until it pointed downwards at -90 degrees. However, I can't just change the pitch, because if the plane is tilted on its side or upside down then it would not be changing direction towards the ground.
Now I'm trying to do a rough simulation of this by shifting the lookAt.y downwards. To do this I am trying to get all the current camera coordinates that I use to set the camera
(eye.x, eye.y, eye.z, look.x, look.y, look.z, up.x, up.y, up.z), and then call set again with the new, modified values.
I've been working with the Camera.cpp and Camera.h to control my camera functions. They can be found here
After adding methods to get all the values, only the eye values are actually updated when various camera motions are made. I guess my question is: how do I retrieve these values?
The glLoadMatrix call is in this function:
void Camera::setModelViewMatrix(void)
{   // load model view matrix with existing camera values
    float m[16];
    Vector3 eVec(eye.x, eye.y, eye.z);
    m[0] = u.x; m[4] = u.y; m[8]  = u.z; m[12] = -eVec.dot(u);
    m[1] = v.x; m[5] = v.y; m[9]  = v.z; m[13] = -eVec.dot(v);
    m[2] = n.x; m[6] = n.y; m[10] = n.z; m[14] = -eVec.dot(n);
    m[3] = 0;   m[7] = 0;   m[11] = 0;   m[15] = 1.0;
    look.x = u.y; look.y = v.y; look.z = n.y;
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m);
}
Is there a way to get 'eye', 'lookAt', and 'up' values from the matrix here? Or should I do something else to get these values?
-Thanks in advance for your help
The camera class you link to is not an actual OpenGL class, but it should be simple enough to work with.
The function quoted just takes the current values of the camera object and sends them to OpenGL. If you look at the camera's set function, you can see how the program calculates the values it actually stores.
The eye value is stored directly. The lookAt value is just the value of (eye - n), by vector math. The up value is the hardest, but if I remember my vector math correctly, I believe that up = (n cross u).
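To illustrate the vector math, here is a small sketch in JavaScript-style pseudocode rather than C++ (getLookAt and getUp are hypothetical accessors you would add; eye, u, v, n are the vectors the Camera class already stores):
// Pseudocode sketch: recovering lookAt and up from the camera's stored basis.
function getLookAt(eye, n) {
    // n points from the look-at point toward the eye, so eye - n is a point
    // one unit in front of the camera along the viewing direction.
    return { x: eye.x - n.x, y: eye.y - n.y, z: eye.z - n.z };
}

function getUp(u, n) {
    // n cross u gives the camera's up axis for this basis (its stored v vector)
    return {
        x: n.y * u.z - n.z * u.y,
        y: n.z * u.x - n.x * u.z,
        z: n.x * u.y - n.y * u.x
    };
}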