When using Matrix.CreateTranslation(x,y,z) I get bizarre results. I have tested using fixed values, one variable at a time and have determined the following:
When altering the X coordinates, the model moves from the top left corner to the bottom right corner.
When altering the Y coordinates, the model moves up and down as it should.
I do not plan to alter the Z coordinates, but because of the nature of my program I can't figure out exactly what altering Z does.
I have my model drawn, and rotation works fine. I am applying my transformations in what I believe is the correct order: scale * rotation * translation.
I think the problem lies in my camera settings, but I have no idea exactly what the problem is. I am trying to create a top-down-style RTS camera.
Here are my camera settings:
campos = new Vector3(5000.0F, 5000.0F, 5000.0F);
effect.View = Matrix.CreateLookAt(campos, Vector3.Down, Vector3.Up);
I can provide more information as needed.
The second argument of Matrix.CreateLookAt is not the direction the camera is facing, but the targeted point.
If you try to make the camera look down, use
Matrix.CreateLookAt(campos, campos + Vector3.Down, Vector3.Forward)
This will tell the camera to always look at the point one unit below the camera.
Your translation probably doesn't work well because the camera is not looking at the point you want it to, and therefore it looks like the model is moving diagonally.
I'm not sure why, but whenever the camera in my game moves, everything except the character it's focusing on does this weird thing: the objects move like they should, but they almost vibrate, and you can see a little trail behind them, although it's very small. Can someone tell me why this is happening? Here's the code:
x += (xTo - x) / camera_speed_width;
y += (yTo - y) / camera_speed_height;
x = clamp(x, CAMERA_WIDTH / 2, room_width - CAMERA_WIDTH / 2);
y = clamp(y, CAMERA_HEIGHT / 2, room_height - CAMERA_HEIGHT / 2);
if (follow != noone)
{
    xTo = follow.x;
    yTo = follow.y;
}
var _view_matrix = matrix_build_lookat(x, y, -10, x, y, 0, 0, 1, 0);
var _projection_matrix = matrix_build_projection_ortho(CAMERA_WIDTH, CAMERA_HEIGHT, -10000, 10000);
camera_set_view_mat(camera, _view_matrix);
camera_set_proj_mat(camera, _projection_matrix);
I can think of two possibilities:
Your game runs at a low frame rate (30 FPS or lower); a higher frame rate renders moving graphics more smoothly (60 FPS being the usual minimum).
Another possibility is that your camera is being set to a target multiple times; perhaps one part (or block of code) follows the player earlier than another. You can also let a viewport follow an object in the room editor, so perhaps that is set as well.
Try these and see if they help you out.
If your camera is low-resolution, you should consider rounding/flooring your camera coordinates; otherwise the instances are (relative to the camera) at fractional coordinates, at which point you are at the mercy of the GPU as to how they will be rendered. If the instances themselves also use fractional coordinates, you are going to get wobble as the combined fractions round to one number or the other.
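As a rough sketch of the idea (Java-flavoured pseudocode rather than GML, with made-up names): keep the smooth fractional position for the follow logic, but build the view matrix from floored copies of the coordinates.

// Sketch: snap the coordinates used for rendering to whole pixels,
// while the underlying camera position stays smooth/fractional.
static float[] snappedCameraPosition(float camX, float camY) {
    return new float[] { (float) Math.floor(camX), (float) Math.floor(camY) };
}

In the GML above, that would mean passing floored copies of x and y to matrix_build_lookat rather than x and y themselves.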
I placed a standard Camera in front of a moving Actor. When I set the current view to this camera I noticed some strange behaviour: if the actor gets really close to another object in the scene (a default cube), it disappears from the view. It looks like the camera is getting into the cube. I'm pretty sure the camera is not colliding with the cube, because the actor has a couple of bumpers that prevent the side where the camera is placed from colliding with other objects, and the whole camera mesh is placed fully 'inside' the actor. The problem may be related to the size of the actor, which is about 40 cm x 30 cm x 10 cm. The observed cube is 1 m x 1 m x 1 m, and the minimum distance of the camera from the cube is around 3 cm.
Sounds to me like you're experiencing an issue with an object passing your camera's "clipping plane." In the 3D world, these are simply the minimum and maximum draw distances. For more information on what you are experiencing, check out this brilliant explanation by Autodesk: https://knowledge.autodesk.com/support/maya/learn-explore/caas/CloudHelp/cloudhelp/2018/ENU/Maya-Rendering/files/GUID-D69C23DA-ECFB-4D95-82F5-81118ED41C95-htm.html
Now, let's fix the issue! In Unreal Engine, it's super easy. Go into your Project Settings/General Settings. There is a value called Near Clip Plane, which simply changes the minimum clipping value for Camera components. I would bet making this value smaller will fix your issue! For a visual representation, check out this tutorial by Kyle Dail: https://www.youtube.com/watch?v=oO79qxNnOfU
I want to use the Android orientation sensor data for my GLES camera - giving it the rotation matrix. I found a very good example here:
How to use onSensorChanged sensor data in combination with OpenGL
but it only works with GLES 1.0 and I need it to work with GLES 2.0. Using my own shaders, everything works, and moving the camera manually is fine. But the moment I use the rotation matrix as in the example, it doesn't really work.
I generate the rotation matrix with:
SensorManager.getRotationMatrix(rotationMatrix, null, bufferedAccelGData, bufferedMagnetData);
My application runs in landscape, so afterwards I use this method (as in the example code):
float[] result = new float[16];
SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, result);
return result;
It worked fine on my phone with his code, but not with mine. My screen looks like this:
The rotation matrix seems to be rotated 90° to the right (almost as if I had forgotten to switch my activity to landscape).
I thought I might be using the remap() method the wrong way, but in the example it makes sense, and the camera movement works now: if I rotate to the left, the screen rotates to the left as well, although, since everything is turned, it effectively rotates "up" (relative to the ground, which is not at the bottom but on the right). It looks as if I had built a wall instead of a ground, but I'm sure my vertex coordinates are right.
I took a look at the draw method for the GLSurface and I don't see what I might have done wrong here:
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
MatrixStack.glLoadMatrix(sensorManager.getRotationMatrix()); // loads the model-view matrix with the rotation matrix from above
GameRenderer.setPerspMatrix(); // writes the perspective matrix uniform for GLES; this shouldn't be the problem
MatrixStack.mvPushMatrix();
drawGround();
MatrixStack.mvPopMatrix();
As I said, when moving my camera manually everything works perfect. So what is wrong with the rotation matrix I get?
Well, okay, it was a very old problem, but now that I've taken a look at the code again I've found the solution.
With the phone in landscape, I had to remap the axes using
SensorManager.remapCoordinateSystem(R, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, R);
But that still didn't rotate the image, even though the mapping of the Y and -X axes worked fine. So simply using
Matrix.rotateM(R, 0, 90, 1, 0, 0);
does the job. It's not pretty, but it works.
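Put together, the whole sequence looks roughly like this (a sketch using android.hardware.SensorManager and android.opengl.Matrix, with separate input/output arrays; bufferedAccelGData and bufferedMagnetData are the sensor buffers from the question above):

// Sketch: build the rotation matrix from the sensors, remap it for a
// landscape activity, then apply the extra 90° correction around X.
private float[] buildLandscapeRotationMatrix(float[] bufferedAccelGData, float[] bufferedMagnetData) {
    float[] rotation = new float[16];
    float[] remapped = new float[16];
    Matrix.setIdentityM(remapped, 0); // fallback if there is no sensor data yet
    if (SensorManager.getRotationMatrix(rotation, null, bufferedAccelGData, bufferedMagnetData)) {
        SensorManager.remapCoordinateSystem(rotation,
                SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, remapped);
        Matrix.rotateM(remapped, 0, 90, 1, 0, 0);
    }
    return remapped; // load this as the model-view matrix
}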
I know it was a very old question and I don't see why I made this mistake, but perhaps someone else will run into the same problem one day.
Hope this helps,
Tobias
If it is (or was) working on a specific phone but not on yours, I guess the Android version may play a role here. We faced this issue in the mixare Augmented Reality Engine, where a SurfaceView is superimposed over a Camera view. Please note that the information here may not apply to your case, since we are not using OpenGL.
Modern versions of Android return a device-default orientation, whereas previously portrait was the default. You can check how we query this in the Compatibility class. This information is then used to apply different values to the remapCoordinateSystem call; check lines 739 and onwards of this file.
Mixare also uses landscape mode by default, so I guess our values for the remapping should apply to your case just as well. As I said earlier, we are using 3x3 matrices since we are not using OpenGL, but I guess this should be the same for OpenGL-compatible matrices.
Take your time and play with the orientation matrix; you will find a column that contains useful values.
Log the values for each column and see which one is useful; try quaternions, and keep playing with the values. Never try the code directly in the renderer; check the values first.
Later you will have more input options, like touch, and there too you will have to test the values, play with them, and use sensitivity constants with the matrices.
I'm trying to make a little archer game, and the problem I'm having has to do with 2 pixels in particular; I'll call them _arm and _arrow. When a real archer is pulling back an arrow, he doesn't immediately pull the arrow back as far as his strength allows; the arrow takes a little bit of time to be pulled back.
The _arm's angle is set from the vector from a fixed point to where the user touched the screen. The rotation is perfect, so the _arm is good. The _arrow needs to be on the same line as the _arm; they are 1 pixel wide each, so it looks as though the _arrow is exactly on top of the _arm.
I tried to decrement the x/y coordinates based on a variable that changes with time: I set the _arrow's location equal to the _arm's location and tried to make it look like the _arrow was being pulled back. However, once rotated, the x/y would be off because the offset is not proportional on the x and y axes, so the _arrow ends up either slightly above the arm or slightly below it, depending on the angle of the touch vector.
How could I use the x/y position of _arm and the touch vector to make the arrow appear as though it is being pulled back by a small amount, yet keep the arrow on top of the _arm sprite so that its position stays similar to the arm's, slightly offset but still on top of the _arm pixel at all times? If you need any more info, just leave a comment.
I'm not sure I've fully understood, but I'll have a go at answering anyway:
To make the arrow move and rotate to the same place as the arm, consider adding the arrow as a child of the arm. You can still render it behind if you like by giving it a negative z: [arm addChild:arrow z:-1]
To make the arrow move away from the arm as the bow is drawn, you then just set the position of the arrow with respect to the arm.
The problem I do see with this solution, however, is that this grouping of the sprites may be a little unusual after the arrow leaves the bow. At that point you probably don't want the arrow to be a child of the arm, as the coordinate systems are no longer related.
Even though they're sure that what I "suggested would have solved [the] problem," here is the poster's solution:
I had to get the x and y coords of the arm based on the angle, then I got the sin/cos of a value based on that same angle and subtracted it from those coordinates.
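In code, that idea looks roughly like this (a sketch with made-up names, not the poster's actual code; angle is the arm's angle toward the touch point and pull is the current draw distance, which grows over time):

// Offset the arrow back along the arm's axis by the current pull distance,
// so it stays on the arm's line for any rotation angle.
static float[] arrowPosition(float armX, float armY, float angle, float pull) {
    float x = armX - pull * (float) Math.cos(angle);
    float y = armY - pull * (float) Math.sin(angle);
    return new float[] { x, y };
}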
I wonder if someone could tell me how to move a camera in 3D space when the camera is rotated.
I am working on my own 3D engine (nothing fancy) and I can move the camera forward, backward, left, right, up, and down; that's all good.
However, when I rotate the camera, it doesn't move in the direction the camera is facing.
Here is a picture that should help you understand what I mean:
http://www.xaid.se/camrot.jpg
Does anybody know how to make this work?
(If you're interested in what I'm working on, visit this site)
I'm not sure if I really get what you mean, but it looks like you want to move along the direction the camera is facing instead of along one of the main axes?
My solution would be to store a vector that holds the direction the camera is looking, and to update this vector every time you rotate the camera. Now you can use this direction vector for the forward movement: position + direction * stepsize.
Hope that helps a little bit.
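A minimal sketch of that, assuming the camera orientation is stored as yaw and pitch angles (names and axis conventions are only illustrative, not from the original engine):

// Derive the forward direction from yaw/pitch (in radians) and step along it.
static float[] moveForward(float[] position, float yaw, float pitch, float step) {
    float dirX = (float) (Math.cos(pitch) * Math.sin(yaw));
    float dirY = (float) Math.sin(pitch);
    float dirZ = (float) (Math.cos(pitch) * Math.cos(yaw));
    return new float[] {
        position[0] + dirX * step,
        position[1] + dirY * step,
        position[2] + dirZ * step
    };
}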