I have a problem with camera redirection that has been unsolved for a long time. Briefly, I have a camera whose up vector is always (0, 1, 0). I want to move the camera's position while keeping a certain 3D point at the same position on the screen.
For example, let P be the camera's position, T the camera's look-at target, and F the point that appears at (x, y) on the screen. Now P is changed to P' (the camera's new position); in order to make F still appear at (x, y) on the screen, T must be changed.
The question is: what is the new value of T?
I believe the problem needs a lot of math. Tell me if you have any idea. Thanks greatly!
Consider using the object hierarchy instead of applying math to the camera.
var cNull = new THREE.Object3D();
var camera = new THREE.PerspectiveCamera();
cNull.add(camera);
scene.add(cNull);
If you do this, you can turn the camera to a single, fixed angle and then just use your point F(x, y, z) as the .lookAt() target for cNull. cNull will aim its center at F, and the angular offset of the camera within cNull's coordinate system will keep F at a constant screen location.
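A minimal sketch of that setup (screenOffsetAngle is an illustrative placeholder for whatever fixed offset puts F at your desired (x, y) on screen; F is a THREE.Vector3):

var screenOffsetAngle = 0.3; // placeholder: the fixed offset that places F at (x, y)
var cNull = new THREE.Object3D();
var camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 0.1, 1000 );
camera.rotation.y = screenOffsetAngle; // constant offset inside the rig
cNull.add( camera );
scene.add( cNull );

// Move the rig wherever you like; re-aiming it at F keeps F at the same
// screen position, because the camera's offset inside cNull never changes.
function moveRig( newPosition, F ) {
    cNull.position.copy( newPosition );
    cNull.lookAt( F );
}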
I may be making this harder than it really is, but I am also pretty new to developing games. Currently I am making a practice scene to get used to the Unity engine again, since I have not had time to use it since last summer. My issue is that I cannot figure out how to lift the camera in game mode. Notice in my photo below how much of the "underground" is showing. I want to raise the camera and keep it at, at the very least, a specific y-axis value, so that less of the ground and more of the background is visible. If I am overcomplicating this, please also let me know. Thank you.
If the main camera is static, just lift the camera in the Scene view; you will see the changes in the Game view.
Or, if the camera moves with the player, attach a script to the camera, get a reference to the player's transform in that script, and update the camera's position from the player's position, adding an offset to the camera's y component (see the sketch below).
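A rough sketch of that logic, written here in plain JavaScript just to show the idea (in Unity it would live in a C# script attached to the camera, with the player transform assigned in the Inspector; offsetY and minY are made-up values):

var offsetY = 2.0; // how far above the player the camera sits
var minY = 1.0;    // never let the camera drop below this height

function updateCamera( camera, player ) {
    camera.position.x = player.position.x;
    // Follow the player vertically, but clamp to a minimum height so that
    // less of the ground and more of the background stays visible.
    camera.position.y = Math.max( player.position.y + offsetY, minY );
}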
I am trying to do some web work utilizing Leap Motion. Does anyone know how to detect when you are touching the edge of the HTML5 canvas you are working in? And if so, do you also know how to grab and store that data?
You have to create a mapping between the 3D, real-world coordinate system of the Leap and the 2D, pixel coordinate system of the web page or canvas object. You could, for example, decide that one real-world millimeter equals one pixel and that you are going to ignore the Leap's Z-axis. Then if a finger is 200 mm to the right of the origin, the corresponding position on the HTML canvas would be 200 pixels to the right of the center of the image.
See https://developer.leapmotion.com/documentation/skeletal/javascript/devguide/Leap_Coordinate_Mapping.html for more information on this topic.
As for getting the 3D finger position in the first place (a sketch combining these steps follows the list):
* Get a Frame object
* Get a Finger object from the Frame
* Get the tipPosition from the Finger
You will also have to decide when a finger qualifies as touching at all.
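A rough sketch of the whole thing, assuming the LeapJS Leap.loop helper, the simple 1 mm = 1 px convention described above, and an arbitrary 10-pixel edge threshold:

// Assumes the LeapJS library is loaded and a <canvas id="c"> exists on the page.
var canvas = document.getElementById( 'c' );
var EDGE = 10; // pixels from the border that still count as "touching the edge"

Leap.loop( function ( frame ) {
    if ( frame.fingers.length === 0 ) return;

    var tip = frame.fingers[ 0 ].tipPosition; // [x, y, z] in millimeters
    // 1 mm = 1 px; the Leap origin maps to the canvas center, Z is ignored.
    var px = canvas.width / 2 + tip[ 0 ];
    var py = canvas.height / 2 - tip[ 1 ]; // Leap Y points up, canvas Y points down

    var onEdge = px < EDGE || py < EDGE ||
                 px > canvas.width - EDGE || py > canvas.height - EDGE;
    if ( onEdge ) {
        // Grab and store whatever you need, e.g. the last edge touch.
        window.lastEdgeTouch = { x: px, y: py, time: frame.timestamp };
    }
} );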
The position and rotation of Vuforia's ARCamera are determined by its algorithms to give the AR effect. I'd like to know how to do the following:
When some object is clicked, the camera goes to some specific position (for that I need to somehow take control of the camera).
I also need the possibility to yield control of the camera back to Vuforia.
You'll want to use a second camera in this instance. Position the second camera at the AR Camera position, then adjust the depths to make the new camera the view that you see. Then you can tween its position to the predefined one you have.
Do the reverse to get control back to the AR Camera.
Here's the task:
We have a Mesh, drawn at position POS with rotation ROT.
We also have a camera whose position and rotation are relative to the Mesh; for example, the camera position is CPOS and the camera rotation is CROT.
How do I calculate the resulting angle for the camera? I was assuming it was something like:
camera.rotation.x = mesh.rotation.x + viewport.rotation.x
camera.rotation.y = mesh.rotation.y + viewport.rotation.y
camera.rotation.z = mesh.rotation.z + viewport.rotation.z
That worked strangely and gave wrong results.
Then I decided to read about it in the docs and was completely disappointed.
There are several kinds of rotation structures (Euler, Quaternion), but what I want is something different.
Imagine you are on a spaceship, and it moves through space. You are sitting at the starboard turret and looking at objects. They seem to pass by...
Then you want to turn your head. The angle of your head is known to you (in raw OpenGL, I'd just multiply the head rotation matrix by the ship's rotation matrix and get my projection matrix).
In other words, I want only x- and y-axis rotations for the camera, combined in a matrix. Then I want to multiply it with the position-rotation matrix of an object, and this final matrix would be my projection matrix.
How could I do the same in THREE.js?
-----EDIT-----
Thank you for the answer.
Which coordinates should I give the camera? Should they be local, mesh-relative coordinates, or something absolute?
I understand that these questions are obvious, but there is no description of relative objects in the THREE.js docs (besides the API description), and the answer might be ambiguous.
Add the camera as a child of the mesh like so:
mesh.add( camera );
When the camera is a child of an object, the camera's position and orientation are specified relative to the parent object.
You can set the camera's orientation by setting either the camera's quaternion or Euler rotation -- your choice.
Please note that the renderer updates the object's matrix and matrixWorld for you. You do not need to do that manually.
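For example (the numbers here are placeholders; CPOS and CROT from the question would be expressed in the mesh's local space):

mesh.add( camera );

// CPOS and CROT, relative to the mesh:
camera.position.set( 0, 2, 5 );      // local offset from the mesh's origin
camera.rotation.set( -0.2, 0, 0 );   // local Euler rotation, in radians

// Moving or rotating the mesh now carries the camera with it.
mesh.position.set( 10, 0, 0 );
mesh.rotation.y = Math.PI / 4;
renderer.render( scene, camera );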
three.js r.63
I do not really understand how I'm supposed to render a side-scroller. How do I know what to render when my character moves? What kind of positioning should I use for the characters?
I hope my question is clear
The easiest way I've found to do it is to have a characterX and characterY variable (integer or float, whatever you want), then have a cameraX and cameraY variable. Every object in the scene is drawn at theObjectX - cameraX, theObjectY - cameraY...
cameraX/cameraY are tweened by a midpoint-like formula so they eventually reach playerX/playerY: Cx = (Cx*99 + Px)/100.
By doing this, every object moves in the stage's space and is transformed only at render time (saving you from headaches).
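A minimal canvas-style sketch of that pattern (drawSprite is a hypothetical helper that draws one object at the given screen coordinates):

var player = { x: 0, y: 0 };
var objects = [ player /*, enemies, platforms, ... */ ];
var cameraX = 0, cameraY = 0;

function update() {
    // Ease the camera toward the player (the midpoint-like tween above).
    cameraX = ( cameraX * 99 + player.x ) / 100;
    cameraY = ( cameraY * 99 + player.y ) / 100;
}

function render( ctx ) {
    ctx.clearRect( 0, 0, ctx.canvas.width, ctx.canvas.height );
    for ( var i = 0; i < objects.length; i++ ) {
        // Objects live in stage space; the camera offset is applied only here.
        drawSprite( ctx, objects[ i ], objects[ i ].x - cameraX, objects[ i ].y - cameraY );
    }
}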
* Use a matrix to define a camera reference frame.
* Use space partitioning to split up your level into screens/windows.
* Think of your player sprite as any other entity, like enemies and interactive objects.
Now what you want is the abstraction of a camera. You can define a camera as a 3x3 matrix with this layout:
[rotX_X, rotY_X, 0]
[rotX_Y, rotY_Y, 0]
[transX, transY, 1]
The 2x2 sub-matrix in the top-left corner is a rotation matrix. transX and transY define the translation part, i.e. the origin. You also get scaling for free: simply scale the rotation part with a scalar and you have yourself a zoom.
For this to work properly with rotation, your sprites need to be polygons/primitives, say like triangles or quads; you can't just apply the matrix to the positions of the sprites when drawing. If you don't need rotation, just transforming the center point will work fine.
If you want the camera to follow the player, use the player's position as the camera origin. That is the translation vector [transX, transY]
So how do you apply the matrix to entity positions and model vertices? You do a vector-matrix multiplication.
v' = vM^-1, where v' is the new vector, v is the old vector, and M^-1 is the inverse of the camera matrix. A camera needs to be an inverse transform because it defines a local coordinate system. An analogy: if you are in front of me and I turn left in my reference frame, I am turning to your right. This applies to all affine and linear transformations, like scaling, rotation and translation.
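A small sketch in plain JavaScript: instead of inverting a general matrix, it builds the inverse (view) transform directly from the camera's origin, angle and zoom, using the row-vector 3x3 layout shown above:

function makeViewMatrix( camX, camY, camAngle, zoom ) {
    var c = Math.cos( camAngle ), s = Math.sin( camAngle );
    // Translate by (-camX, -camY), rotate by -camAngle, then scale by zoom.
    return [
        [ zoom * c, -zoom * s, 0 ],
        [ zoom * s,  zoom * c, 0 ],
        [ -zoom * ( camX * c + camY * s ), zoom * ( camX * s - camY * c ), 1 ]
    ];
}

// Apply the matrix to a point: v' = [x, y, 1] * M (row-vector convention).
function transformPoint( m, x, y ) {
    return [
        x * m[ 0 ][ 0 ] + y * m[ 1 ][ 0 ] + m[ 2 ][ 0 ],
        x * m[ 0 ][ 1 ] + y * m[ 1 ][ 1 ] + m[ 2 ][ 1 ]
    ];
}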
Split up your level into sub-parts so you can cull objects and scenery that do not need to be rendered. Your viewport has a certain size/resolution; only render scenery and entities that intersect it. Instead of checking each and every entity against the viewport bounds, assign each entity to a sub-screen and test the bounds of the sub-screen against the viewport and camera bounds (see the sketch after the list below). If you divide your levels into parts that are the same size as your viewport, then the maximum number of screens visible at any particular time is:
* 2 if your camera only scrolls left and right.
* 4 if your camera scrolls left, right, up and down.
* 4 if your camera scrolls in any direction, and additionally can be rotated.
A screen-change is an event you can use to activate entities belonging to that screen. That could be enemies, background animations, doors or whatever you like.
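A sketch of the culling test, assuming each screen stores its bounds as an axis-aligned rectangle in stage space (a rotating camera would need the camera's world-space bounding box here instead of the plain viewport rectangle):

function intersects( a, b ) {
    return a.x < b.x + b.width  && a.x + a.width  > b.x &&
           a.y < b.y + b.height && a.y + a.height > b.y;
}

// screens: [ { x, y, width, height, entities: [...] }, ... ]
function visibleScreens( screens, cameraX, cameraY, viewW, viewH ) {
    var view = { x: cameraX, y: cameraY, width: viewW, height: viewH };
    return screens.filter( function ( s ) { return intersects( s, view ); } );
}

// Only render (and activate) entities belonging to the screens returned here.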
If this is your first foray into writing a side-scroller, I'd suggest considering an already existing game engine (like Construct, GameMaker, or XNA, whatever fits your experience level) so you don't have to worry about what order to render things in and how to make it all work. Mess with that a bit, probably exploring a few of them, to get a feel for how they do things, then venture out on your own once you've gotten used to it.
Not that there's anything wrong with baptism by fire, but it can get pretty overwhelming in my opinion.