I have read that there are 1200 cents in one octave.
So, I tried the following:
const audioContext = new AudioContext();
function Oscillator(frequency, detune) {
this.oscillator = audioContext.createOscillator();
this.oscillator.connect(audioContext.destination);
this.oscillator.frequency.value = frequency;
this.oscillator.detune.value = detune;
this.oscillator.start(0);
this.oscillator.stop(3);
console.log('Playing new oscillator!');
}
Case 1:
const x = 200;
new Oscillator(x, 1200);
new Oscillator(2 * x, 0);
Both oscillators individually produce the same sound for every value of x, which made sense to me because a 1200-cent detune is one octave up (double the frequency).
Case 2:
const x = 200;
new Oscillator(x, 600);
new Oscillator(x * 1.5, 0);
So I expected that going halfway in terms of cents would give a 50% increase in frequency. But when I listened to them individually, for many different values of x the two oscillators produced different sounds. It sounded as if both had the same frequency but different amplitudes.
I am not able to understand why this is happening. Please help me out with this. I am quite new to the physics behind sound.
The detune param is converted into a frequency ratio, not an offset in Hz: the oscillator's frequency is multiplied by Math.pow(2, detune / 1200).
https://webaudio.github.io/web-audio-api/#oscillatornode
That means your second example should be either ...
const x = 200;
new Oscillator(x, 701.95);
new Oscillator(x * 1.5, 0);
... or ...
const x = 200;
new Oscillator(x, 600);
new Oscillator(x * 1.414, 0);
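For reference, here is a small sketch (plain JavaScript, no Web Audio objects involved) showing how those two numbers fall out of the cents formula:

function centsToRatio(cents) {
  return Math.pow(2, cents / 1200); // e.g. 600 cents -> ~1.4142 (sqrt(2)), not 1.5
}
function ratioToCents(ratio) {
  return 1200 * Math.log2(ratio); // e.g. 1.5 -> ~701.955 cents
}

console.log(centsToRatio(600));  // ~1.4142
console.log(ratioToCents(1.5));  // ~701.955
console.log(centsToRatio(1200)); // 2, i.e. exactly one octave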
I am making a space game in Godot, and whenever my ship is a large distance away from (0, 0, 0), the camera and the ship shake violently every time I move them. Here is my code for moving the ship:
extends KinematicBody
export var default_speed = 500000
export var max_speed = 5000
export var acceleration = 100
export var pitch_speed = 1.5
export var roll_speed = 1.9
export var yaw_speed = 1.25
export var input_response = 8.0
var velocity = Vector3.ZERO
var forward_speed = 0
var vert_speed = 0
var pitch_input = 0
var roll_input = 0
var yaw_input = 0
var alt_input = 0
var system = "System1"
func _ready():
look_at(get_parent().get_node("Star").translation, Vector3.UP)
func get_input(delta):
if Input.is_action_pressed("boost"):
max_speed = 299792458
acceleration = 100
else:
max_speed = default_speed
acceleration = 100
if Input.is_action_pressed("throttle_up"):
forward_speed = lerp(forward_speed, max_speed, acceleration * delta)
if Input.is_action_pressed("throttle_down"):
forward_speed = lerp(forward_speed, 0, acceleration * delta)
pitch_input = lerp(pitch_input, Input.get_action_strength("pitch_up") - Input.get_action_strength("pitch_down"), input_response * delta)
roll_input = lerp(roll_input, Input.get_action_strength("roll_left") - Input.get_action_strength("roll_right"), input_response * delta)
yaw_input = lerp(yaw_input, Input.get_action_strength("yaw_left") - Input.get_action_strength("yaw_right"), input_response * delta)
func _physics_process(delta):
get_input(delta)
transform.basis = transform.basis.rotated(transform.basis.z, roll_input * roll_speed * delta)
transform.basis = transform.basis.rotated(transform.basis.x, pitch_input * pitch_speed * delta)
transform.basis = transform.basis.rotated(transform.basis.y, yaw_input * yaw_speed * delta)
transform.basis = transform.basis.orthonormalized()
velocity = -transform.basis.z * forward_speed * delta
move_and_collide(velocity * delta)
func _on_System1_area_entered(area):
print(area, area.name)
system = "E"
func _on_System2_area_entered(area):
print(area, area.name)
system = "System1"
Is there any way to prevent this from happening?
First of all, I want to point out that this is not a problem unique to Godot, although some other engines have automatic mitigations for it.
This happens because the precision of floating-point numbers decreases as values get farther from the origin. In other words, the gap between one representable floating-point number and the next becomes wider.
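As an illustrative aside, you can see the effect from any JavaScript console; Math.fround rounds to the same single-precision floats that Godot 3 stores positions in:

console.log(Math.fround(1.0000001));   // 1.0000001192092896 (steps near 1 are ~1e-7)
console.log(Math.fround(100000.01));   // 100000.0078125 (steps near 100000 are ~0.008)
console.log(Math.fround(100000000) === Math.fround(100000001)); // true: distinct values collapse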
The issue is covered in more detail on the game development sister site:
Why loss of floating point precision makes rendered objects vibrate?
Why does the resolution of floating point numbers decrease further from an origin?
What's the largest "relative" level I can make using float?
Why would a bigger game move the gameworld around the Player instead of just moving a player within a gameworld?
Moving player inside of moving spaceship?
Spatial Jitter problem in large unity project
Godot uses single precision. Support for double precision was added in Godot 4, but that only reduces the problem; it does not eliminate it.
The general solution is to warp everything in such a way that the player is near the origin again. So, let us do that.
We will need a reference to the node we want to keep near the origin. I'm going to assume that which node this is does not change during gameplay.
export var target_node_path:NodePath
onready var _target_node:Spatial = get_node(target_node_path)
And we will need a reference to the world we need to move. I'm assuming it does not change either, and that the node we want to keep near the origin is a child of it, directly or indirectly.
export var world_node_path:NodePath
onready var _world_node:Node = get_node(world_node_path)
And we need a maximum distance at which we perform the shift:
export var max_distance_from_origin:float = 10000.0
We will not move the world itself, but its children.
func _process(_delta: float) -> void:
	var target_origin := _target_node.global_transform.origin
	if target_origin.length() < max_distance_from_origin:
		return

	# Shift every top-level child of the world back by the target's offset.
	for child in _world_node.get_children():
		var spatial := child as Spatial
		if spatial != null:
			spatial.global_translate(-target_origin)
Now, something I have not seen discussed is what happens with physics objects. The concern is that the physics server might still try to move them according to their old position (in practice this is mainly a problem with RigidBody), overwriting what we did.
So, if that is a problem… we can handle physics objects with a teleport. For example:
func _process(_delta: float) -> void:
	var target_origin := _target_node.global_transform.origin
	if target_origin.length() < max_distance_from_origin:
		return

	for child in _world_node.get_children():
		var spatial := child as Spatial
		if spatial != null:
			var body_transform := spatial.global_transform
			var new_transform := Transform(
				body_transform.basis,
				body_transform.origin - target_origin
			)
			spatial.global_transform = new_transform
			var physics_body := spatial as PhysicsBody # Check for RigidBody instead?
			if physics_body != null:
				# Teleport via the physics server so it does not restore the old position.
				PhysicsServer.body_set_state(
					physics_body.get_rid(),
					PhysicsServer.BODY_STATE_TRANSFORM,
					new_transform
				)
But be aware that the above code does not consider any physics objects deeper in the scene tree.
I have been stuck on this problem for a long time and I would really appreciate it if anyone could help me with it.
I have asked in many forums and searched a lot, but no answer has really helped me.
I am developing an application where I have to calculate the velocity of a skeleton joint, using Visual Studio C# 2012 and the Kinect SDK 1.7.
I first have to be sure of the logic of things before asking this question. So, if I understood correctly, the delta time I am looking for to calculate velocity is not the duration of one frame (1/30 s) but must be calculated from two instants:
1- the instant when the "joint point" is detected and saved in the first frame
2- the instant when the same "joint point" is detected and saved in the next frame
If that is not true, thank you for clarifying things.
Starting from this hypothesis, I wrote code to:
detect a person
track the spine joint ==> if it is tracked, save its coordinates into a list (I reduced the work to the Y axis for the moment, to simplify)
record the time when saving the coordinates
increment the frame counter (initially equal to zero)
if the frame counter is > 1, calculate the velocity (x2 - x1)/(T2 - T1) and save it
Here is a piece of the code:
System.Diagnostics.Stopwatch stopWatch = new System.Diagnostics.Stopwatch();
double msNow;
double msPast;
double diff;
TimeSpan currentTime;
TimeSpan lastTime = new TimeSpan(0);
List<double> Sylist = new List<double>();
private int framecounter = 0;
private void KinectSensorOnAllFramesReady(object sender, AllFramesReadyEventArgs allFramesReadyEventArgs)
{
Skeleton first = GetFirstSkeleton(allFramesReadyEventArgs);
if (first == null) // if there is no skeleton
{
txtP.Text = "No person detected"; // (Idle mode)
return;
}
else
{
txtP.Text = "A person is detected";
skeletonDetected = true;
/// look if the person is totally detected
find_coordinates(first);
/*******************************
* time computing *
/*******************************/
currentTime = stopWatch.Elapsed;
msNow = currentTime.Seconds * 1000 + currentTime.Milliseconds;
if (lastTime.Ticks != 0)
{
msPast = lastTime.Seconds * 1000 + lastTime.Milliseconds;
diff = msNow - msPast;
}
lastTime = currentTime;
}
//framecounter++;
}
void find_coordinates(Skeleton first)
{
//*modification 07052014 *****/
Joint Spine = first.Joints[JointType.Spine];
if (Spine.TrackingState == JointTrackingState.Tracked)
{
double Sy = Spine.Position.Y;
/*******************************
* time starting *
/*******************************/
stopWatch.Start();
Sylist.Add(Sy);
framecounter++;
}
else
return;
if (framecounter > 1)
{
    // Difference between the two most recent samples (list indices are zero-based).
    double delta_Distance = Sylist[Sylist.Count - 1] - Sylist[Sylist.Count - 2];
}
}
To be honest, I do not really know how to use TimeSpan and Stopwatch in this context (I mean when there are frames to process many times per second).
I will be thankful for any help!
First:
The SkeletonFrame has a property called Timestamp that you can use. It is better to use that than to create your own timing system, because the timestamp is generated directly by the Kinect.
Second:
Keep track of the previous Timestamp and location.
Then it's just a matter of calculation.
(CurrentLocation - PreviousLocation) = distance difference
(CurrentTimestamp - PreviousTimestamp) = time taken to travel that distance
For example, you might get 0.1 meter per 33 milliseconds.
So you can get the meters per second like this: (1 second / time taken to travel) * distance difference. In the example this is (1000/33) * 0.1 = 3.03 meters per second.
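If it helps, here is a hedged sketch of the bookkeeping, written as plain JavaScript rather than C#; the names are placeholders, not the SDK's:

// Placeholder names; substitute the SkeletonFrame timestamp and the joint's Y position.
var prevY = null;
var prevTimestampMs = null;

function onFrame(currentY, currentTimestampMs) {
  if (prevY !== null) {
    var distance = currentY - prevY;                      // meters
    var elapsedMs = currentTimestampMs - prevTimestampMs; // milliseconds between frames
    var velocity = distance / (elapsedMs / 1000);         // meters per second
    console.log(velocity);
  }
  prevY = currentY;
  prevTimestampMs = currentTimestampMs;
}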
I am trying to rotate the camera around the X-axis of the scene.
At this point my code is like this:
rotation += 0.05;
camera.position.y = Math.sin(rotation) * 500;
camera.position.z = Math.cos(rotation) * 500;
This makes the camera move around, but during the rotation something weird happens: either the camera flips, or it skips some part of the imaginary circle it is following.
You have only provided a snippet of code, so I have to make some assumptions about what you are doing.
This code:
rotation += 0.05;
camera.position.x = 0;
camera.position.y = Math.sin(rotation) * 500;
camera.position.z = Math.cos(rotation) * 500;
camera.lookAt( scene.position ); // the origin
will cause the "flipping" you refer to because the camera is trying to remain "right side up", and it will quickly change orientation as it passes over the "north pole."
If you offset the camera's x-coordinate like so,
camera.position.x = 200;
the camera behavior will appear more natural to you.
Three.js tries to keep the camera facing up. When your camera's position passes 0 along the z-axis, it will "fix" the camera's rotation. You can just check and reset the camera's angle manually.
camera.lookAt( scene.position ); // the origin
if (camera.position.z < 0) {
camera.rotation.z = 0;
}
I'm sure this is not the best solution, but if anyone else runs across this question while playing with three.js (like I just did), it will take them one step further.
This works for me, I hope it helps.
Rotating around X-Axis:
var x_axis = new THREE.Vector3( 1, 0, 0 );
var quaternion = new THREE.Quaternion();
camera.position.applyQuaternion(quaternion.setFromAxisAngle(x_axis, rotation_speed));
camera.up.applyQuaternion(quaternion.setFromAxisAngle(x_axis, rotation_speed));
Rotating around Y-Axis:
var y_axis = new THREE.Vector3( 0, 1, 0 );
camera.position.applyQuaternion(quaternion.setFromAxisAngle(y_axis, angle));
Rotating around Z-Axis:
var z_axis = new THREE.Vector3( 0, 0, 1 );
camera.up.applyQuaternion(quaternion.setFromAxisAngle(z_axis, angle));
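For context, a minimal render loop applying the X-axis variant above each frame might look like this; the scene, camera, and renderer are assumed to be set up elsewhere, and rotation_speed is a small radians-per-frame value:

var rotation_speed = 0.01; // hypothetical value, radians per frame
var x_axis = new THREE.Vector3(1, 0, 0);
var quaternion = new THREE.Quaternion();

function animate() {
  requestAnimationFrame(animate);
  // Rotate both the position and the up vector so the orbit never flips.
  camera.position.applyQuaternion(quaternion.setFromAxisAngle(x_axis, rotation_speed));
  camera.up.applyQuaternion(quaternion.setFromAxisAngle(x_axis, rotation_speed));
  camera.lookAt(scene.position);
  renderer.render(scene, camera);
}
animate();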
I wanted to move my camera to a new location while having the camera look at a particular object, and this is what I came up with [make sure to load tween.js]:
/**
* Helper to move camera
* @param loc Vec3 - where to move the camera; has x, y, z attrs
* @param lookAt Vec3 - where the camera should look; has x, y, z attrs
* @param duration int - duration of the transition in ms
**/
function flyTo(loc, lookAt, duration) {
// Use initial camera quaternion as the slerp starting point
var startQuaternion = camera.quaternion.clone();
// Use dummy camera focused on target as the slerp ending point
var dummyCamera = camera.clone();
dummyCamera.position.set(loc.x, loc.y, loc.z);
// set the dummy camera quaternion
var rotObjectMatrix = new THREE.Matrix4();
rotObjectMatrix.makeRotationFromQuaternion(startQuaternion);
dummyCamera.quaternion.setFromRotationMatrix(rotObjectMatrix);
dummyCamera.up.copy(camera.up); // copy the up vector (Vector3.set expects numeric components)
console.log(camera.quaternion, dummyCamera.quaternion);
// create dummy controls to avoid mutating main controls
var dummyControls = new THREE.TrackballControls(dummyCamera);
dummyControls.target.set(loc.x, loc.y, loc.z);
dummyControls.update();
// Animate between the start and end quaternions
new TWEEN.Tween(camera.position)
.to(loc, duration)
.onUpdate(function(timestamp) {
// Slerp the camera quaternion for smooth transition.
// `timestamp` is the eased time value from the tween.
THREE.Quaternion.slerp(startQuaternion, dummyCamera.quaternion, camera.quaternion, timestamp);
camera.lookAt(lookAt);
})
.onComplete(function() {
controls.target.copy(lookAt); // re-center the main controls on the look-at point
camera.lookAt(lookAt);
}).start();
}
Example usage:
var pos = {
x: -4.3,
y: 1.7,
z: 7.3,
};
var lookAt = scene.children[1].position;
flyTo(pos, lookAt, 60000);
Then in your update()/render() function, call TWEEN.update();
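For completeness, that loop might look something like this (renderer, controls, scene, and camera assumed from the snippet above):

function animate() {
  requestAnimationFrame(animate);
  TWEEN.update();     // advances any active tweens, including the flyTo above
  controls.update();
  renderer.render(scene, camera);
}
animate();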
Full example
I want to change the paper (the objects' base) size in Raphael to fit the window when it is resized. [using Firefox 13.0, Raphael 2.1.0, Windows XP]
If possible, I would also like to support full-screen mode.
==================================================
(steps)
I created the paper: paper = Raphael(0, 50, 800, 600); // initial width and height are 800 and 600.
I placed objects on the paper.
The window size of the browser is checked with windowW = window.innerWidth and windowH = window.innerHeight (on Firefox).
The scaling value is calculated as sv = windowW/800;
And the paper is scaled with paper.scale(sv, sv);
==================================================
(the script)
window.onload = function () {
paper = Raphael(0, 50, 800, 600);
var background = paper.rect(0, 0, 800, 600).attr({fill:'#669999'});
// placing the objects
var circle = ...;
var rect = ...;
var ellipse = ...;
windowW = window.innerWidth;
windowH = window.innerHeight;
sv = windowW/800;
paper.scale(sv, sv);
}
==================================================
(result)
Though circle.scale(sv), rect.scale(sv, sv) and ellipse.scale(sv, sv) are valid, paper.scale(sv, sv) and background.scale(sv, sv) are not.
Why does this happen? I can get the window size in real time with window.onresize = function() {...}. If there are better methods, please tell me.
Thanks,
I succeeded by following two points:
1) The "paper" itself is not a manipulable object. I think we should look at it as a billboard.
2) Use st = paper.set() and put the objects (circle, rect, ...) in it. Then use st.scale(sv, sv, 0, 0);
* The third and fourth parameters (0, 0) are very important.
(caution)
Repeated resizing operations are not good with scale(), because the resizing coefficients compound: after applying a 1.1× resize five times, the scale will be 1.1^5.
Use setViewBox(). It should do the job:
http://raphaeljs.com/reference.html#Paper.setViewBox
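For instance, a minimal sketch of that approach, assuming the same 800x600 coordinate system as the question:

window.onload = function () {
  var paper = Raphael(0, 50, 800, 600);
  paper.setViewBox(0, 0, 800, 600, true); // keep drawing in 800x600 "logical" units

  var background = paper.rect(0, 0, 800, 600).attr({ fill: '#669999' });
  // ... place circle, rect, ellipse as before ...

  function resize() {
    var sv = window.innerWidth / 800;
    paper.setSize(800 * sv, 600 * sv); // resize the surface; the viewBox rescales the content
  }
  window.onresize = resize;
  resize();
};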
I want to be able to make an image move realistically with the accelerometer controlling it, like in any labyrinth game. Below is what I have so far, but it seems very jittery and isn't realistic at all. The ball image never seems to be able to stop and makes lots of jittery movements all over the place.
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
deviceTilt.x = 0.01 * deviceTilt.x + (1.0 - 0.01) * acceleration.x;
deviceTilt.y = 0.01 * deviceTilt.y + (1.0 - 0.01) * acceleration.y;
}
-(void)onTimer {
ballImage.center = CGPointMake(ballImage.center.x + (deviceTilt.x * 50), ballImage.center.y + (deviceTilt.y * 50));
if (ballImage.center.x > 279) {
ballImage.center = CGPointMake(279, ballImage.center.y);
}
if (ballImage.center.x < 42) {
ballImage.center = CGPointMake(42, ballImage.center.y);
}
if (ballImage.center.y > 419) {
ballImage.center = CGPointMake(ballImage.center.x, 419);
}
if (ballImage.center.y < 181) {
ballImage.center = CGPointMake(ballImage.center.x, 181);
}
}
Is there some reason why you can not use the smoothing filter provided in response to your previous question: How do you use a moving average to filter out accelerometer values in iPhone OS ?
You need to calculate the running average of the values. To do this you need to store the last n values in an array, push new values onto it, and drop the oldest whenever you read the accelerometer data. Here is some pseudocode:
const SIZE = 10;
let xVals = [];
let xAvg = 0;

function runAverage(newX) {
  xAvg += newX / SIZE;
  xVals.push(newX);
  if (xVals.length > SIZE) {
    // Drop the oldest sample (FIFO), not the one just pushed.
    xAvg -= xVals.shift() / SIZE;
  }
}
You need to do this for all three axes. Play around with the value of SIZE: the larger it is, the smoother the value, but the slower things will seem to respond. It really depends on how often you read the accelerometer value. If it is read 10 times per second, then SIZE = 10 might be too large.
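If it helps, here is one way to generalize the sketch above to all three axes at once (still plain JavaScript, purely illustrative; the filter-factory name is my own):

// One rolling-average filter per axis, built from the same idea as runAverage above.
function makeFilter(size) {
  var vals = [];
  var avg = 0;
  return function (newVal) {
    avg += newVal / size;
    vals.push(newVal);
    if (vals.length > size) {
      avg -= vals.shift() / size; // drop the oldest sample
    }
    return avg;
  };
}

var filterX = makeFilter(10);
var filterY = makeFilter(10);
var filterZ = makeFilter(10);

// On each accelerometer reading:
// var smoothX = filterX(acceleration.x);
// var smoothY = filterY(acceleration.y);
// var smoothZ = filterZ(acceleration.z);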