GLM rotation does not follow the model's new orientation - game-engine

I want to achieve camera rotation around an object. However, when I rotate the camera in different directions and then apply more rotation, the model rotates with respect to its initial orientation, not its new orientation. I don't know what I am missing; how can I resolve this issue?
My MVP initialization:
ubo.model = attr.transformation[0];
ubo.view = glm::lookAt(glm::vec3(0.0f, 10.0f, 20.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
ubo.proj = glm::perspective(glm::radians(50.0f), extend.width / (float)extend.height, 0.1f, 100.0f);
ubo.proj[1][1] *= -1;
My update code:
while (accumulatedTime >= timeFPS)
{
SDL_PollEvent(&event);
if (event.type == SDL_QUIT)
{
exit = true;
}
if (event.type == SDL_MOUSEMOTION && leftMouseButtonPressed == false)
{
prev.x = event.button.x;
prev.y = event.button.y;
}
if (event.type == SDL_MOUSEBUTTONDOWN && event.button.button == SDL_BUTTON_LEFT)
{
leftMouseButtonPressed = true;
}
if (event.type == SDL_MOUSEBUTTONUP && event.button.button == SDL_BUTTON_LEFT)
{
leftMouseButtonPressed = false;
}
if (event.type == SDL_MOUSEMOTION && leftMouseButtonPressed == true)
{
New.x = event.button.x;
New.y = event.button.y;
delta = New - prev;
if(delta.x != 0)
ubo.view = glm::rotate(ubo.view, timeDelta * delta.x/20, glm::vec3(0.0f, 1.0f, 0.0f));
if (delta.y != 0)
ubo.view = glm::rotate(ubo.view, timeDelta * delta.y/20, glm::vec3(1.0f, 0.0f, 0.0f));
prev = New;
}
accumulatedTime -= timeFPS;
v.updateUniformBuffer(ubo);
}
v.drawFrame();
}
My vertex shader:
#version 450
#extension GL_ARB_separate_shader_objects : enable
....
void main()
{
gl_Position = ubo.proj * ubo.view * ubo.model * vec4 (inPosition, 1.0);
fragTexCoord = inTexCoord;
Normal = ubo.proj * ubo.view * ubo.model * vec4 (inNormals, 1.0);
}

As it turns out, I was missing some essential parts of vector rotation. My conclusions are as follows:
We need four vectors: cameraPos, cameraDirection, cameraUp and cameraRight (the latter two are the up and right vectors relative to the camera's orientation).
Each time we rotate the camera position, we also need to keep track of the cameraUp and cameraRight vectors (we need to rotate them as well; this was the part that I was missing).
The final camera orientation is calculated from the newly transformed cameraPos, cameraUp and cameraRight vectors.
You can find a good tutorial about this here.
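For reference, a minimal sketch of that idea in C++ with GLM (the variable names and the fixed orbit centre, which stands in for cameraDirection, are illustrative and not taken from the original code):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 cameraPos   = glm::vec3(0.0f, 10.0f, 20.0f);
glm::vec3 cameraUp    = glm::vec3(0.0f, 1.0f, 0.0f);
glm::vec3 cameraRight = glm::vec3(1.0f, 0.0f, 0.0f);
glm::vec3 target      = glm::vec3(0.0f);   // point we orbit around

glm::mat4 orbitView(float yaw, float pitch)
{
    // yaw: rotate the position and the right vector around the camera's current up axis
    glm::mat4 rotYaw = glm::rotate(glm::mat4(1.0f), yaw, cameraUp);
    cameraPos   = glm::vec3(rotYaw * glm::vec4(cameraPos, 1.0f));
    cameraRight = glm::normalize(glm::vec3(rotYaw * glm::vec4(cameraRight, 0.0f)));

    // pitch: rotate the position and the up vector around the camera's current right axis
    // (rotating the axes as well is the step that was missing)
    glm::mat4 rotPitch = glm::rotate(glm::mat4(1.0f), pitch, cameraRight);
    cameraPos = glm::vec3(rotPitch * glm::vec4(cameraPos, 1.0f));
    cameraUp  = glm::normalize(glm::vec3(rotPitch * glm::vec4(cameraUp, 0.0f)));

    // rebuild the view matrix from the updated vectors
    return glm::lookAt(cameraPos, target, cameraUp);
}

// in the mouse-motion handler, for example:
// ubo.view = orbitView(timeDelta * delta.x / 20, timeDelta * delta.y / 20);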

Related

Camera and Game Character Rotation Issues (Unity - Third Person Character Controller)

I'm trying to edit the Third Person Controller script that's found in the 'Starter Assets' pack created by Unity (https://assetstore.unity.com/packages/essentials/starter-assets-third-person-character-controller-196526)
Note: Just a heads up that I am extremely new to coding, so please be as detailed with your answer as possible. Thanks in advance, I really appreciate it.
Current Situation (note: I'm building for mobile) -
My character moves forwards, diagonally forwards (top left, top right), and both left/right correctly and rotates to face that direction. The follow camera stays behind the character and rotates when the character does. This is all great.
The Problem - When my character moves backwards (pressing 'S' on the keyboard or 'down' on the joystick on mobile), the character rotates, causing the camera and character to spin continuously on the spot. This is also the case when moving diagonally backwards (bottom left, bottom right).
What I Want To Achieve - I want it so that when backward input is detected, but only straight backwards, the character moves backward with no rotation of either the player or the camera. When moving diagonally backwards, however, I would still like the camera (not the player) to rotate, but in the opposite direction.
Here is the part of the script that I believe controls this. If you could please assist me on how to edit, what to add/remove, that would be amazing.
// normalise input direction
Vector3 inputDirection = new Vector3(_input.move.x, 0.0f, _input.move.y).normalized;
// note: Vector2's != operator uses approximation so is not floating point error prone, and is cheaper than magnitude
// if there is a move input rotate player when the player is moving
if (_input.move != Vector2.zero)
{
_targetRotation = Mathf.Atan2(inputDirection.x, inputDirection.z) * Mathf.Rad2Deg +
_mainCamera.transform.eulerAngles.y;
float rotation = Mathf.SmoothDampAngle(transform.eulerAngles.y, _targetRotation, ref _rotationVelocity,
RotationSmoothTime);
// rotate to face input direction relative to camera position
transform.rotation = Quaternion.Euler(0.0f, rotation, 0.0f);
}
Vector3 targetDirection = Quaternion.Euler(0.0f, _targetRotation, 0.0f) * Vector3.forward;
// move the player
_controller.Move(targetDirection.normalized * (_speed * Time.deltaTime) +
new Vector3(0.0f, _verticalVelocity, 0.0f) * Time.deltaTime);
Just in case the full script is required to assist me, please see it below;
using UnityEngine;
#if ENABLE_INPUT_SYSTEM && STARTER_ASSETS_PACKAGES_CHECKED
using UnityEngine.InputSystem;
#endif
/* Note: animations are called via the controller for both the character and capsule using animator null checks
*/
namespace StarterAssets
{
[RequireComponent(typeof(CharacterController))]
#if ENABLE_INPUT_SYSTEM && STARTER_ASSETS_PACKAGES_CHECKED
[RequireComponent(typeof(PlayerInput))]
#endif
public class ThirdPersonController : MonoBehaviour
{
[Header("Player")]
[Tooltip("Move speed of the character in m/s")]
public float MoveSpeed = 2.0f;
[Tooltip("Sprint speed of the character in m/s")]
public float SprintSpeed = 5.335f;
[Tooltip("How fast the character turns to face movement direction")]
[Range(0.0f, 0.3f)]
public float RotationSmoothTime = 0.12f;
[Tooltip("Acceleration and deceleration")]
public float SpeedChangeRate = 10.0f;
public AudioClip LandingAudioClip;
public AudioClip[] FootstepAudioClips;
[Range(0, 1)] public float FootstepAudioVolume = 0.5f;
[Space(10)]
[Tooltip("The height the player can jump")]
public float JumpHeight = 1.2f;
[Tooltip("The character uses its own gravity value. The engine default is -9.81f")]
public float Gravity = -15.0f;
[Space(10)]
[Tooltip("Time required to pass before being able to jump again. Set to 0f to instantly jump again")]
public float JumpTimeout = 0.50f;
[Tooltip("Time required to pass before entering the fall state. Useful for walking down stairs")]
public float FallTimeout = 0.15f;
[Header("Player Grounded")]
[Tooltip("If the character is grounded or not. Not part of the CharacterController built in grounded check")]
public bool Grounded = true;
[Tooltip("Useful for rough ground")]
public float GroundedOffset = -0.14f;
[Tooltip("The radius of the grounded check. Should match the radius of the CharacterController")]
public float GroundedRadius = 0.28f;
[Tooltip("What layers the character uses as ground")]
public LayerMask GroundLayers;
[Header("Cinemachine")]
[Tooltip("The follow target set in the Cinemachine Virtual Camera that the camera will follow")]
public GameObject CinemachineCameraTarget;
[Tooltip("How far in degrees can you move the camera up")]
public float TopClamp = 70.0f;
[Tooltip("How far in degrees can you move the camera down")]
public float BottomClamp = -30.0f;
[Tooltip("Additional degress to override the camera. Useful for fine tuning camera position when locked")]
public float CameraAngleOverride = 0.0f;
[Tooltip("For locking the camera position on all axis")]
public bool LockCameraPosition = false;
// cinemachine
private float _cinemachineTargetYaw;
private float _cinemachineTargetPitch;
// player
private float _speed;
private float _animationBlend;
private float _targetRotation = 0.0f;
private float _rotationVelocity;
private float _verticalVelocity;
private float _terminalVelocity = 53.0f;
// timeout deltatime
private float _jumpTimeoutDelta;
private float _fallTimeoutDelta;
// animation IDs
private int _animIDSpeed;
private int _animIDGrounded;
private int _animIDJump;
private int _animIDFreeFall;
private int _animIDMotionSpeed;
#if ENABLE_INPUT_SYSTEM && STARTER_ASSETS_PACKAGES_CHECKED
private PlayerInput _playerInput;
#endif
private Animator _animator;
private CharacterController _controller;
private StarterAssetsInputs _input;
private GameObject _mainCamera;
private const float _threshold = 0.01f;
private bool _hasAnimator;
private bool IsCurrentDeviceMouse
{
get
{
#if ENABLE_INPUT_SYSTEM && STARTER_ASSETS_PACKAGES_CHECKED
return _playerInput.currentControlScheme == "KeyboardMouse";
#else
return false;
#endif
}
}
private void Awake()
{
// get a reference to our main camera
if (_mainCamera == null)
{
_mainCamera = GameObject.FindGameObjectWithTag("MainCamera");
}
}
private void Start()
{
_cinemachineTargetYaw = CinemachineCameraTarget.transform.rotation.eulerAngles.y;
_hasAnimator = TryGetComponent(out _animator);
_controller = GetComponent<CharacterController>();
_input = GetComponent<StarterAssetsInputs>();
#if ENABLE_INPUT_SYSTEM && STARTER_ASSETS_PACKAGES_CHECKED
_playerInput = GetComponent<PlayerInput>();
#else
Debug.LogError( "Starter Assets package is missing dependencies. Please use Tools/Starter Assets/Reinstall Dependencies to fix it");
#endif
AssignAnimationIDs();
// reset our timeouts on start
_jumpTimeoutDelta = JumpTimeout;
_fallTimeoutDelta = FallTimeout;
}
private void Update()
{
_hasAnimator = TryGetComponent(out _animator);
JumpAndGravity();
GroundedCheck();
Move();
}
private void LateUpdate()
{
CameraRotation();
}
private void AssignAnimationIDs()
{
_animIDSpeed = Animator.StringToHash("Speed");
_animIDGrounded = Animator.StringToHash("Grounded");
_animIDJump = Animator.StringToHash("Jump");
_animIDFreeFall = Animator.StringToHash("FreeFall");
_animIDMotionSpeed = Animator.StringToHash("MotionSpeed");
}
private void GroundedCheck()
{
// set sphere position, with offset
Vector3 spherePosition = new Vector3(transform.position.x, transform.position.y - GroundedOffset,
transform.position.z);
Grounded = Physics.CheckSphere(spherePosition, GroundedRadius, GroundLayers,
QueryTriggerInteraction.Ignore);
// update animator if using character
if (_hasAnimator)
{
_animator.SetBool(_animIDGrounded, Grounded);
}
}
private void CameraRotation()
{
// if there is an input and camera position is not fixed
if (_input.look.sqrMagnitude >= _threshold && !LockCameraPosition)
{
//Don't multiply mouse input by Time.deltaTime;
float deltaTimeMultiplier = IsCurrentDeviceMouse ? 0.1f : Time.deltaTime;
_cinemachineTargetYaw += _input.look.x * deltaTimeMultiplier;
_cinemachineTargetPitch += _input.look.y * deltaTimeMultiplier;
}
// clamp our rotations so our values are limited 360 degrees
_cinemachineTargetYaw = ClampAngle(_cinemachineTargetYaw, float.MinValue, float.MaxValue);
_cinemachineTargetPitch = ClampAngle(_cinemachineTargetPitch, BottomClamp, TopClamp);
// Cinemachine will follow this target
CinemachineCameraTarget.transform.rotation = Quaternion.Euler(_cinemachineTargetPitch + CameraAngleOverride,
_cinemachineTargetYaw, 0.0f);
}
private void Move()
{
// set target speed based on move speed, sprint speed and if sprint is pressed
float targetSpeed = _input.sprint ? SprintSpeed : MoveSpeed;
// a simplistic acceleration and deceleration designed to be easy to remove, replace, or iterate upon
// note: Vector2's == operator uses approximation so is not floating point error prone, and is cheaper than magnitude
// if there is no input, set the target speed to 0
if (_input.move == Vector2.zero) targetSpeed = 0.0f;
// a reference to the players current horizontal velocity
float currentHorizontalSpeed = new Vector3(_controller.velocity.x, 0.0f, _controller.velocity.z).magnitude;
float speedOffset = 0.1f;
float inputMagnitude = _input.analogMovement ? _input.move.magnitude : 0.0f;
// accelerate or decelerate to target speed
if (currentHorizontalSpeed < targetSpeed - speedOffset ||
currentHorizontalSpeed > targetSpeed + speedOffset)
{
// creates curved result rather than a linear one giving a more organic speed change
// note T in Lerp is clamped, so we don't need to clamp our speed
_speed = Mathf.Lerp(currentHorizontalSpeed, targetSpeed * inputMagnitude,
Time.deltaTime * SpeedChangeRate);
// round speed to 3 decimal places
_speed = Mathf.Round(_speed * 1000f) / 1000f;
}
else
{
_speed = targetSpeed;
}
_animationBlend = Mathf.Lerp(_animationBlend, targetSpeed, Time.deltaTime * SpeedChangeRate);
if (_animationBlend < 0.01f) _animationBlend = 0f;
// normalise input direction
Vector3 inputDirection = new Vector3(_input.move.x, 0.0f, _input.move.y).normalized;
// note: Vector2's != operator uses approximation so is not floating point error prone, and is cheaper than magnitude
// if there is a move input rotate player when the player is moving
if (_input.move != Vector2.zero)
{
_targetRotation = Mathf.Atan2(inputDirection.x, inputDirection.z) * Mathf.Rad2Deg +
_mainCamera.transform.eulerAngles.y;
float rotation = Mathf.SmoothDampAngle(transform.eulerAngles.y, _targetRotation, ref _rotationVelocity,
RotationSmoothTime);
// rotate to face input direction relative to camera position
transform.rotation = Quaternion.Euler(0.0f, rotation, 0.0f);
}
Vector3 targetDirection = Quaternion.Euler(0.0f, _targetRotation, 0.0f) * Vector3.forward;
// move the player
_controller.Move(targetDirection.normalized * (_speed * Time.deltaTime) +
new Vector3(0.0f, _verticalVelocity, 0.0f) * Time.deltaTime);
// update animator if using character
if (_hasAnimator)
{
_animator.SetFloat(_animIDSpeed, _animationBlend);
_animator.SetFloat(_animIDMotionSpeed, inputMagnitude);
}
}
private void JumpAndGravity()
{
if (Grounded)
{
// reset the fall timeout timer
_fallTimeoutDelta = FallTimeout;
// update animator if using character
if (_hasAnimator)
{
_animator.SetBool(_animIDJump, false);
_animator.SetBool(_animIDFreeFall, false);
}
// stop our velocity dropping infinitely when grounded
if (_verticalVelocity < 0.0f)
{
_verticalVelocity = -2f;
}
// Jump
if (_input.jump && _jumpTimeoutDelta <= 0.0f)
{
// the square root of H * -2 * G = how much velocity needed to reach desired height
_verticalVelocity = Mathf.Sqrt(JumpHeight * -2f * Gravity);
// update animator if using character
if (_hasAnimator)
{
_animator.SetBool(_animIDJump, true);
}
}
// jump timeout
if (_jumpTimeoutDelta >= 0.0f)
{
_jumpTimeoutDelta -= Time.deltaTime;
}
}
else
{
// reset the jump timeout timer
_jumpTimeoutDelta = JumpTimeout;
// fall timeout
if (_fallTimeoutDelta >= 0.0f)
{
_fallTimeoutDelta -= Time.deltaTime;
}
else
{
// update animator if using character
if (_hasAnimator)
{
_animator.SetBool(_animIDFreeFall, true);
}
}
// if we are not grounded, do not jump
_input.jump = false;
}
// apply gravity over time if under terminal (multiply by delta time twice to linearly speed up over time)
if (_verticalVelocity < _terminalVelocity)
{
_verticalVelocity += Gravity * Time.deltaTime;
}
}
private static float ClampAngle(float lfAngle, float lfMin, float lfMax)
{
if (lfAngle < -360f) lfAngle += 360f;
if (lfAngle > 360f) lfAngle -= 360f;
return Mathf.Clamp(lfAngle, lfMin, lfMax);
}
private void OnDrawGizmosSelected()
{
Color transparentGreen = new Color(0.0f, 1.0f, 0.0f, 0.35f);
Color transparentRed = new Color(1.0f, 0.0f, 0.0f, 0.35f);
if (Grounded) Gizmos.color = transparentGreen;
else Gizmos.color = transparentRed;
// when selected, draw a gizmo in the position of, and matching radius of, the grounded collider
Gizmos.DrawSphere(
new Vector3(transform.position.x, transform.position.y - GroundedOffset, transform.position.z),
GroundedRadius);
}
private void OnFootstep(AnimationEvent animationEvent)
{
if (animationEvent.animatorClipInfo.weight > 0.5f)
{
if (FootstepAudioClips.Length > 0)
{
var index = Random.Range(0, FootstepAudioClips.Length);
AudioSource.PlayClipAtPoint(FootstepAudioClips[index], transform.TransformPoint(_controller.center), FootstepAudioVolume);
}
}
}
private void OnLand(AnimationEvent animationEvent)
{
if (animationEvent.animatorClipInfo.weight > 0.5f)
{
AudioSource.PlayClipAtPoint(LandingAudioClip, transform.TransformPoint(_controller.center), FootstepAudioVolume);
}
}
}
}
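For illustration, here is one way the rotation block quoted above could be conditioned on backward input. This is only a sketch against the fields the script already uses (_input, _mainCamera, _targetRotation, _rotationVelocity, _speed, _verticalVelocity, _controller), not an official Starter Assets fix, and it only covers the straight-backwards case; the diagonal-backwards camera behaviour would need separate handling.
// normalise input direction
Vector3 inputDirection = new Vector3(_input.move.x, 0.0f, _input.move.y).normalized;

// treat "straight backwards" as a special case: move, but do not rotate
bool pureBackward = _input.move.y < 0f && Mathf.Approximately(_input.move.x, 0f);

if (_input.move != Vector2.zero && !pureBackward)
{
    _targetRotation = Mathf.Atan2(inputDirection.x, inputDirection.z) * Mathf.Rad2Deg +
                      _mainCamera.transform.eulerAngles.y;
    float rotation = Mathf.SmoothDampAngle(transform.eulerAngles.y, _targetRotation,
                                           ref _rotationVelocity, RotationSmoothTime);
    // rotate to face input direction relative to camera position
    transform.rotation = Quaternion.Euler(0.0f, rotation, 0.0f);
}

// walk backwards along the current facing instead of turning around
Vector3 targetDirection = pureBackward
    ? -transform.forward
    : Quaternion.Euler(0.0f, _targetRotation, 0.0f) * Vector3.forward;

// move the player
_controller.Move(targetDirection.normalized * (_speed * Time.deltaTime) +
                 new Vector3(0.0f, _verticalVelocity, 0.0f) * Time.deltaTime);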

How to color individual pixels with OpenGL ES 2.0?

Is it possible to change the color of an individual pixel with OpenGL ES 2.0? Right now, I have found that I can manage that using a vertex. I've used this method to draw it:
GLES20.glDrawArrays(GLES20.GL_POINTS, 0, 1);
The point size was set to the minimum so that a single pixel is painted.
All good, until I needed to draw 3 to 4 million of them! It takes 5-6 seconds to initialize just one frame. This is too slow, since the pixels will be updated constantly; the update/refresh rate should ideally be as close as possible to 60 fps.
How can I paint them in a more efficient way?
Note: It's a must to paint them individually only!
My attempt is here (for a screen of 1440x2560 px):
package com.example.ctelescu.opengl_pixel_draw;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.Matrix;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
public class PixelDrawRenderer implements GLSurfaceView.Renderer {
private float[] mModelMatrix = new float[16];
private float[] mViewMatrix = new float[16];
private float[] mProjectionMatrix = new float[16];
private float[] mMVPMatrix = new float[16];
private final FloatBuffer mVerticesBuffer;
private int mMVPMatrixHandle;
private int mPositionHandle;
private int mColorHandle;
private final int mBytesPerFloat = 4;
private final int mStrideBytes = 7 * mBytesPerFloat;
private final int mPositionOffset = 0;
private final int mPositionDataSize = 3;
private final int mColorOffset = 3;
private final int mColorDataSize = 4;
public PixelDrawRenderer() {
// Define the vertices.
// final float[] vertices = {
// // X, Y, Z,
// // R, G, B, A
// -1f, 1f, 0.0f,
// 1.0f, 0.0f, 0.0f, 1.0f,
//
// -0.9f, 1.2f, 0.0f,
// 0.0f, 0.0f, 1.0f, 1.0f,
//
// -0.88f, 1.2f, 0.0f,
// 0.0f, 1.0f, 0.0f, 1.0f,
//
// -0.87f, 1.2f, 0.0f,
// 0.0f, 1.0f, 0.0f, 1.0f,
//
// -0.86f, 1.2f, 0.0f,
// 0.0f, 1.0f, 0.0f, 1.0f,
//
// -0.85f, 1.2f, 0.0f,
// 0.0f, 1.0f, 0.0f, 1.0f};
// Initialize the buffers.
mVerticesBuffer = ByteBuffer.allocateDirect(22579200 * mBytesPerFloat)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
// mVerticesBuffer.put(vertices);
}
@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
// Set the background clear color to gray.
GLES20.glClearColor(0.5f, 0.5f, 0.5f, 0.5f);
// Position the eye behind the origin.
final float eyeX = 0.0f;
final float eyeY = 0.0f;
final float eyeZ = 1.5f;
// We are looking toward the distance
final float lookX = 0.0f;
final float lookY = 0.0f;
final float lookZ = -5.0f;
// Set our up vector. This is where our head would be pointing were we holding the camera.
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
// Set the view matrix. This matrix can be said to represent the camera position.
// NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
// view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
final String vertexShader =
"uniform mat4 u_MVPMatrix; \n" // A constant representing the combined model/view/projection matrix.
+ "attribute vec4 a_Position; \n" // Per-vertex position information we will pass in.
+ "attribute vec4 a_Color; \n" // Per-vertex color information we will pass in.
+ "varying vec4 v_Color; \n" // This will be passed into the fragment shader.
+ "void main() \n" // The entry point for our vertex shader.
+ "{ \n"
+ " v_Color = a_Color; \n" // Pass the color through to the fragment shader.
// It will be interpolated across the vertex.
+ " gl_Position = u_MVPMatrix \n" // gl_Position is a special variable used to store the final position.
+ " * a_Position; \n" // Multiply the vertex by the matrix to get the final point in
+ " gl_PointSize = 0.1; \n"
+ "} \n"; // normalized screen coordinates.
final String fragmentShader =
"#ifdef GL_FRAGMENT_PRECISION_HIGH \n"
+ "precision highp float; \n"
+ "#else \n"
+ "precision mediump float; \n" // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
+ "#endif \n"
+ "varying vec4 v_Color; \n" // This is the color from the vertex shader interpolated across the
// vertex per fragment.
+ "void main() \n" // The entry point for our fragment shader.
+ "{ \n"
+ " gl_FragColor = v_Color; \n" // Pass the color directly through the pipeline.
+ "} \n";
// Load in the vertex shader.
int vertexShaderHandle = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
if (vertexShaderHandle != 0) {
// Pass in the shader source.
GLES20.glShaderSource(vertexShaderHandle, vertexShader);
// Compile the shader.
GLES20.glCompileShader(vertexShaderHandle);
// Get the compilation status.
final int[] compileStatus = new int[1];
GLES20.glGetShaderiv(vertexShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
// If the compilation failed, delete the shader.
if (compileStatus[0] == 0) {
GLES20.glDeleteShader(vertexShaderHandle);
vertexShaderHandle = 0;
}
}
if (vertexShaderHandle == 0) {
throw new RuntimeException("Error creating vertex shader.");
}
// Load in the fragment shader shader.
int fragmentShaderHandle = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
if (fragmentShaderHandle != 0) {
// Pass in the shader source.
GLES20.glShaderSource(fragmentShaderHandle, fragmentShader);
// Compile the shader.
GLES20.glCompileShader(fragmentShaderHandle);
// Get the compilation status.
final int[] compileStatus = new int[1];
GLES20.glGetShaderiv(fragmentShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
// If the compilation failed, delete the shader.
if (compileStatus[0] == 0) {
GLES20.glDeleteShader(fragmentShaderHandle);
fragmentShaderHandle = 0;
}
}
if (fragmentShaderHandle == 0) {
throw new RuntimeException("Error creating fragment shader.");
}
// Create a program object and store the handle to it.
int programHandle = GLES20.glCreateProgram();
if (programHandle != 0) {
// Bind the vertex shader to the program.
GLES20.glAttachShader(programHandle, vertexShaderHandle);
// Bind the fragment shader to the program.
GLES20.glAttachShader(programHandle, fragmentShaderHandle);
// Bind attributes
GLES20.glBindAttribLocation(programHandle, 0, "a_Position");
GLES20.glBindAttribLocation(programHandle, 1, "a_Color");
// Link the two shaders together into a program.
GLES20.glLinkProgram(programHandle);
// Get the link status.
final int[] linkStatus = new int[1];
GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);
// If the link failed, delete the program.
if (linkStatus[0] == 0) {
GLES20.glDeleteProgram(programHandle);
programHandle = 0;
}
}
if (programHandle == 0) {
throw new RuntimeException("Error creating program.");
}
// Set program handles. These will later be used to pass in values to the program.
mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");
mColorHandle = GLES20.glGetAttribLocation(programHandle, "a_Color");
// Tell OpenGL to use this program when rendering.
GLES20.glUseProgram(programHandle);
}
@Override
public void onSurfaceChanged(GL10 glUnused, int width, int height) {
// Set the OpenGL viewport to the same size as the surface.
GLES20.glViewport(0, 0, width, height);
// Create a new perspective projection matrix. The height will stay the same
// while the width will vary as per aspect ratio.
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1.0f;
final float far = 10.0f;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
float[] vertices = new float[22579200];
int counter = 0;
for (float i = -width / 2; i < width / 2; i++) {
for (float j = height / 2; j > -height / 2; j--) {
// Initialize the buffers.
vertices[counter++] = 2f * i * (1f / width); //X
vertices[counter++] = 2f * j * (1.5f / height); //Y
vertices[counter++] = 0; //Z
vertices[counter++] = 1f; //red
vertices[counter++] = 1f; //green
vertices[counter++] = 0f; //blue
vertices[counter++] = 1f; //alpha
}
}
mVerticesBuffer.put(vertices);
mVerticesBuffer.clear();
}
@Override
public void onDrawFrame(GL10 glUnused) {
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
// Draw the vertices facing straight on.
Matrix.setIdentityM(mModelMatrix, 0);
drawVertices(mVerticesBuffer);
}
private void drawVertices(final FloatBuffer aVertexBuffer) {
// Pass in the position information
aVertexBuffer.position(mPositionOffset);
GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,
mStrideBytes, aVertexBuffer);
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Pass in the color information
aVertexBuffer.position(mColorOffset);
GLES20.glVertexAttribPointer(mColorHandle, mColorDataSize, GLES20.GL_FLOAT, false,
mStrideBytes, aVertexBuffer);
GLES20.glEnableVertexAttribArray(mColorHandle);
// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_POINTS, 0, 3225600);
}
}

How to test if the cursor is on a circle? (Processing)

I want to implement this sketch about dragging a box in my project.
Instead of one box, I have several circles, each drawn with different coordinates in the form:
ellipse(lemma23.x, lemma23.y, diameterLemma23, diameterLemma23);
ellipse(law3.x, law3.y, diameterLaw3, diameterLaw3);
ellipse(law2.x, law2.y, diameterLaw2, diameterLaw2);
How do I test if the cursor is on one of the circles?
Here's a screen shot of my project:
I want to test when the cursor is on (or near) a circle so that I can change its position by dragging.
The entire sketch is in pastebin.
I started with the example in your question. There are a few main differences for drawing multiple shapes:
You have to check whether the cursor is within each shape.
You have to draw each shape.
You may want to worry about overlapping, but I did not.
In the following code, I build upon the example directly although I removed the few lines which change the color of the box when clicked and I reorganized the code into the MovingEllipse class so that multiple ellipses can be drawn easily. (This code draws two ellipses.)
Note that the code in draw() checks the position of the mouse for each ellipse; this could be improved upon (e.g. by creating an array of ellipse positions and looping over it). Also, for this code to work properly, the mousePressed and mouseReleased methods need to be forwarded like the mouseDragged method. (I was trying to keep my example brief.)
Anyway, this is one way to draw multiple ellipses and detect which one should be moved. Hope it helps!
int esize = 75;
MovingEllipse e1 = new MovingEllipse(0.0, 0.0, esize, 0.0, 0.0);
MovingEllipse e2 = new MovingEllipse(0.0, 0.0, esize, 0.0, 0.0);
void setup()
{
size(640, 360);
e1.eX = width/2.0; // Center of ellipse 1.
e1.eY = height/2.0;
e2.eX = width/4.0; // Center of ellipse 2.
e2.eY = height/4.0;
}
void draw()
{
background(0);
// Test if the cursor is over the ellipse.
if (mouseX > e1.eX-esize && mouseX < e1.eX+esize &&
mouseY > e1.eY-esize && mouseY < e1.eY+esize) {
e1.overBox = true;
e2.overBox = false;
} else if (mouseX > e2.eX-esize && mouseX < e2.eX+esize &&
mouseY > e2.eY-esize && mouseY < e2.eY+esize) {
e2.overBox = true;
e1.overBox = false;
} else {
e1.overBox = false;
e2.overBox = false;
}
// Draw the ellipse(s).
e1.update(e1.eX, e1.eY, e1.overBox);
e2.update(e2.eX, e2.eY, e2.overBox);
}
void mouseDragged() {
e1.mouseDragged();
e2.mouseDragged();
}
// Don't forget to repeat this for mousePressed and mouseReleased!
// ...
class MovingEllipse {
float eX, eY; // Position of ellipse.
int eSize; // Radius. For a circle use eSize for both x and y radii.
float xOffset, yOffset; // Where user clicked minus center of ellipse.
boolean locked, overBox; // Flags used for determining if the ellipse should move.
MovingEllipse (float ex, float ey, int esize, float xoff, float yoff) {
eX = ex;
eY = ey;
eSize = esize;
xOffset = xoff;
yOffset = yoff;
}
void update(float ex, float ey, boolean over) {
eX = ex;
eY = ey;
overBox = over;
// Draw the ellipse. By default, (eX, eY) represents the center of the ellipse.
ellipse(eX, eY, eSize, eSize);
}
void mousePressed() {
if(overBox) {
locked = true;
} else {
locked = false;
}
xOffset = mouseX-eX;
yOffset = mouseY-eY;
}
void mouseDragged() {
if(locked) {
eX = mouseX-xOffset;
eY = mouseY-yOffset;
}
}
void mouseReleased(){
locked = false;
}
}
Just check if the distance between the cursor and the centre of the circle is within the hit radius. The hit radius could be made larger than the radius of the circle to catch near hits.
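As a small sketch of that check in Processing, assuming a circle drawn as ellipse(cx, cy, d, d) so its radius is d/2 (the +5 margin below is just an example value):
boolean overCircle(float cx, float cy, float d, float hitRadius) {
  // true when the cursor is within hitRadius of the circle's centre;
  // pass a hitRadius slightly larger than d/2 to catch near hits
  return dist(mouseX, mouseY, cx, cy) <= hitRadius;
}

// e.g. matching the calls in the question:
// if (overCircle(lemma23.x, lemma23.y, diameterLemma23, diameterLemma23 / 2 + 5)) { ... }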

How to find the joint coordinates (X, Y, Z), and how to draw a locus of the tracked joint?

I am trying to develop logic to recognize a circle drawn by the user's right hand. I got the code that draws and tracks the skeleton from the sample code:
private void SensorSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
Skeleton[] skeletons = new Skeleton[0];
using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
{
if (skeletonFrame != null)
{
skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
skeletonFrame.CopySkeletonDataTo(skeletons);
}
}
using (DrawingContext dc = this.drawingGroup.Open())
{
// Draw a transparent background to set the render size
dc.DrawRectangle(Brushes.Black, null, new Rect(0.0, 0.0, RenderWidth, RenderHeight));
if (skeletons.Length != 0)
{
foreach (Skeleton skel in skeletons)
{
RenderClippedEdges(skel, dc);
if (skel.TrackingState == SkeletonTrackingState.Tracked)
{
this.DrawBonesAndJoints(skel, dc);
}
else if (skel.TrackingState == SkeletonTrackingState.PositionOnly)
{
dc.DrawEllipse(
this.centerPointBrush,
null,
this.SkeletonPointToScreen(skel.Position),
BodyCenterThickness,
BodyCenterThickness);
}
}
}
// prevent drawing outside of our render area
this.drawingGroup.ClipGeometry = new RectangleGeometry(new Rect(0.0, 0.0, RenderWidth, RenderHeight));
}
}
What I want to do now is to track the coordinates of the user's right hand for gesture recognition.
Here is how I am planning to get the job done:
Start the gesture
Draw the circle gesture; store the coordinates at the start and then record the coordinates at every 45-degree shift of the joint from the start, so for the 8 octants we get 8 samples.
To decide that a circle was drawn, we can simply check the relationship between the eight samples.
Also, in the depth image I want to show the locus of the drawn gesture, so that as the hand point moves it leaves a trace behind and at the end we get the figure that was drawn by the user. I have no idea how to achieve this.
Coordinates for each joint are available for each tracked skeleton during each SkeletonFrameReady event. Inside your foreach loop...
foreach (Skeleton skeleton in skeletons) {
// get the joint
Joint rightHand = skeleton.Joints[JointType.HandRight];
// get the individual points of the right hand
double rightX = rightHand.Position.X;
double rightY = rightHand.Position.Y;
double rightZ = rightHand.Position.Z;
}
You can look at the JointType enum to pull out any of the joints and work with the individual coordinates.
To draw your gesture trail you can use the DrawingContext you have in your example, or use another way to draw a Path onto the visual layer. With your x/y/z values, you would need to scale them to the window coordinates. The "Coding4Fun" library offers a pre-built function to do this; alternatively you can write your own, for example:
private static double ScaleY(Joint joint)
{
double y = ((SystemParameters.PrimaryScreenHeight / 0.4) * -joint.Position.Y) + (SystemParameters.PrimaryScreenHeight / 2);
return y;
}
private static void ScaleXY(Joint shoulderCenter, bool rightHand, Joint joint, out int scaledX, out int scaledY)
{
double screenWidth = SystemParameters.PrimaryScreenWidth;
double x = 0;
double y = ScaleY(joint);
// if rightHand then place shoulderCenter on left of screen
// else place shoulderCenter on right of screen
if (rightHand)
{
x = (joint.Position.X - shoulderCenter.Position.X) * screenWidth * 2;
}
else
{
x = screenWidth - ((shoulderCenter.Position.X - joint.Position.X) * (screenWidth * 2));
}
if (x < 0)
{
x = 0;
}
else if (x > screenWidth - 5)
{
x = screenWidth - 5;
}
if (y < 0)
{
y = 0;
}
scaledX = (int)x;
scaledY = (int)y;
}
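Building on that, a minimal sketch of the trail itself, assuming the same DrawingContext pattern as the question's code (the point list, the 100-sample cap and the brush are illustrative; it needs System.Collections.Generic, System.Windows and System.Windows.Media):
private readonly List<Point> handTrail = new List<Point>();

private void DrawHandTrail(Joint shoulderCenter, Joint rightHand, DrawingContext dc)
{
    int scaledX, scaledY;
    ScaleXY(shoulderCenter, true, rightHand, out scaledX, out scaledY);

    // remember the latest sample and cap the history so the trail stays short
    handTrail.Add(new Point(scaledX, scaledY));
    if (handTrail.Count > 100) handTrail.RemoveAt(0);

    // draw one small dot per remembered sample
    foreach (Point p in handTrail)
    {
        dc.DrawEllipse(Brushes.Red, null, p, 3, 3);
    }
}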

Image manipulation with Kinect

I'm trying to create an application that zooms in and out of an image and rotates it via Kinect. So far it works, but only for one of the two cases at a time. What I would like is that if I have rotated the image, that new value is kept when I zoom, so that I zoom on an image that has already been rotated X degrees. The way I have it now, if I first rotate and then try to zoom, the image goes back to its initial state.
private void TrackDistances(Skeleton skeleton)
{
if (skeleton.TrackingState == SkeletonTrackingState.Tracked)
{
...
if (wristLeft.Y > shoulderLeft.Y && wristRight.Y > shoulderRight.Y)
{
float distance = Math.Abs(wristLeft.X - wristRight.X);
image_Zoom(distance);
}
if (wristLeft.Y < shoulderLeft.Y && wristRight.Y < shoulderRight.Y)
{
angleDeg = GetJointAngle(zeroPoint, anglePoint);
image_Rotate(angleDeg);
}
}
}
private void image_Zoom(float distance)
{
//TransformGroup transformGroup = (TransformGroup)image.RenderTransform;
//ScaleTransform scale = (ScaleTransform)transformGroup.Children[0];
//double zoom = distance * 1.5;
//scale.ScaleX = zoom;
//scale.ScaleY = zoom;
double zoom = distance * 1.5;
double ScaleX = zoom;
double ScaleY = zoom;
ScaleTransform scale = new ScaleTransform(ScaleX, ScaleY);
image.RenderTransform = scale;
}
private void image_Rotate(double angleDeg)
{
var angle = angleDeg - 180;
RotateTransform rotate = new RotateTransform(angle);
image.RenderTransform = rotate;
}
Any suggestions?
Thanks!
I think it's because you replace RenderTransform with either a ScaleTransform or a RotateTransform each time.
You can set the ScaleTransform and RotateTransform of the image in the XAML, and just change the angle or zoom parameter in the code behind.
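For illustration, a minimal sketch of that approach (the transforms could equally be declared in XAML; only the angle and scale values change afterwards, so the field names and InitTransforms helper here are just examples):
private readonly ScaleTransform imageScale = new ScaleTransform(1.0, 1.0);
private readonly RotateTransform imageRotate = new RotateTransform(0.0);

private void InitTransforms()
{
    // one group holding both transforms, assigned to the image once
    var group = new TransformGroup();
    group.Children.Add(imageScale);
    group.Children.Add(imageRotate);
    image.RenderTransform = group;
}

private void image_Zoom(float distance)
{
    double zoom = distance * 1.5;
    imageScale.ScaleX = zoom;
    imageScale.ScaleY = zoom;           // the rotation set earlier is preserved
}

private void image_Rotate(double angleDeg)
{
    imageRotate.Angle = angleDeg - 180; // the zoom set earlier is preserved
}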
Also see here:
How can I do both zoom and rotate on an inkcanvas?