I can't seem to create two oscillators with independent gain envelopes.
The code below creates two buttons, each of which plays a sine tone at a different pitch. When I click the first button, I hear the tone grow in volume as it should. But when I click the second button, the tone reacts as if it were connected to the gain of the first tone. For example, if I click the second button (turning on the second tone) while the first tone is at its peak volume, the second tone enters at that volume, even though it is supposed to follow an envelope from 0 up to 0.1 and back to 0 over the course of 10 seconds.
Can I only have one gain node per audio context? Or is there some other reason that the gains of these oscillators are being connected? In addition, after I play the tones once, I cannot play them again, which makes me especially think that I am doing something wrong. : )
Thanks. A link is below, and the code is below that. This is my first post here, so let me know if you need anything else. This code must be run in a version of Chrome or Safari that supports the Web Audio API.
http://whitechord.org/just_mod/poly_test.html
WAAPI tests
<button onclick="play()">play one</button>
<button onclick="play2()">play two</button>
<script>
var context;
window.addEventListener('load', initAudio, false);
function initAudio() {
try {
context = new webkitAudioContext();
} catch(e) {
onError(e);
}
}
function play() {
var oscillator = context.createOscillator();
var gainNode = context.createGainNode();
gainNode.gain.value = 0.0;
oscillator.connect(gainNode);
gainNode.connect(context.destination);
oscillator.frequency.value = 700;
gainNode.gain.linearRampToValueAtTime(0.0, 0); // envelope
gainNode.gain.linearRampToValueAtTime(0.1, 5); // envelope
gainNode.gain.linearRampToValueAtTime(0.0, 10); // envelope
oscillator.noteOn(0);
}
function play2() {
var oscillator2 = context.createOscillator();
var gainNode2 = context.createGainNode();
gainNode2.gain.value = 0.0;
oscillator2.connect(gainNode2);
gainNode2.connect(context.destination);
oscillator2.frequency.value = 400;
gainNode2.gain.linearRampToValueAtTime(0.0, 0); // envelope
gainNode2.gain.linearRampToValueAtTime(0.1, 5); // envelope
gainNode2.gain.linearRampToValueAtTime(0.0, 10); // envelope
oscillator2.noteOn(0);
}
/* error */
function onError(e) {
alert(e);
}
</script>
</body>
</html>
Can I only have one gain node per audio context? Or is there some other reason that the gains of these oscillators are being connected? In addition, after I play the tones once, I cannot play them again, which makes me especially think that I am doing something wrong. : )
You can have as many gain nodes as you want (this is how you could achieve mixing bus-like setups, for example), so that's not the problem. Your problem is the following:
Remember that the second parameter to linearRampToValueAtTime() is time in the same time coordinate system as your context.currentTime.
And your context.currentTime is always moving forward in real time, so all your ramps, curves, etc. should be calculated relative to it.
If you want something to happen 4 seconds from now, you'd pass context.currentTime + 4 to the Web Audio API function.
So change all the linearRampToValueAtTime() calls in your code so that they look like this:
gainNode2.gain.linearRampToValueAtTime(0.0, context.currentTime); // envelope
gainNode2.gain.linearRampToValueAtTime(0.1, context.currentTime + 5); // envelope
gainNode2.gain.linearRampToValueAtTime(0.0, context.currentTime + 10); // envelope
And that should take care of your issues.
BTW you have a stray double quote in your BODY opening markup tag.
I'm creating an application that needs to allow the user to select the play position of an audio file while the file is playing. However, once the position is changed, I'm having trouble identifying the current position.
Here's an example program that shows how I'm going about it.
using NAudio.Utils;
using NAudio.Wave;
using System;
namespace NAudioTest
{
class Program
{
static void Main()
{
var audioFileReader = new AudioFileReader("test.mp3");
var waveOutEvent = new WaveOutEvent();
waveOutEvent.Init(audioFileReader);
waveOutEvent.Play();
while (true)
{
var key = Console.ReadKey(true);
if (key.Key == ConsoleKey.Enter)
{
var playLocationSeconds =
waveOutEvent.GetPositionTimeSpan().TotalSeconds;
Console.WriteLine(
"Play location is " + playLocationSeconds + "seconds");
}
else if (key.Key == ConsoleKey.RightArrow)
{
audioFileReader.CurrentTime =
audioFileReader.CurrentTime.Add(TimeSpan.FromSeconds(30));
}
else
{
break;
}
}
}
}
}
Steps to reproduce the problem
Start the program: the audio file starts playing
Press the Enter key: the current play time is written to the console
Press the right arrow key: the played audio jumps ahead (presumably) to the expected location
Press the Enter key again: a play time is written to the console, but it looks to be the amount of time since the audio first started playing, not the time of the current play position.
I have tried getting the value of AudioFileReader.CurrentTime instead of calling GetPositionTimeSpan on the WaveOutEvent. The problem with this approach is that the AudioFileReader.CurrentTime value proceeds in jumps, presumably because the underlying stream is buffered when used with WaveOutEvent, so CurrentTime does not accurately reflect the play position, only the position in the underlying stream.
How do I support arbitrary play positioning yet continue to get an accurate play position current time?
The "CurrentTime" property of your audio file reader is good enough to tell the current position of playback, especially if your latency is not very high. I found the difference between it and waveOutEvent.GetPositionTimeSpan() to be 100-200 ms. at most.
You are indeed using the setter of the CurrentTime property to reposition within the stream, so it is consistent to use the getter to query the current position as well. If you are concerned with precision, you can use a lower latency.
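For example, a lower latency can be requested through WaveOutEvent's DesiredLatency property before Init is called (a minimal sketch; 100 ms is an arbitrary choice, and I believe the default is 300 ms):
var waveOutEvent = new WaveOutEvent();
waveOutEvent.DesiredLatency = 100; // milliseconds; must be set before Init
waveOutEvent.Init(audioFileReader);
waveOutEvent.Play();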
The extension method GetPositionTimeSpan() does seem to return the total duration of playback so far, not the position within the stream. Admittedly, I do not know why this is so.
I want to write a script that stabilizes a time-lapse sequence by adding the Warp Stabilizer effect, then deflickers it using DEFlicker Time Lapse, and finally renders and exports the video. It would run overnight so that it does not slow down my computer during working hours. However, I cannot find an API for adding effects to a layer in the AE scripting documentation. Does anyone know how to do this? Thanks in advance!
You can add effects to the layers like this:
if (!theLayer.Effects.property("Warp Stabilizer")) { // add only if no such effect is applied
    var theEffect = theLayer.property("Effects").addProperty("Warp Stabilizer"); // the regular way to add an effect
}
To test it, you can apply it to the selected layer. The full code for that can look like this:
var activeItem = app.project.activeItem;
if (activeItem != null && activeItem instanceof CompItem) { // only proceed if a comp is active
    if (activeItem.selectedLayers.length == 1) { // only proceed if exactly one layer is selected
        var theLayer = activeItem.selectedLayers[0];
        if (!theLayer.Effects.property("Warp Stabilizer")) {
            var theEffect = theLayer.property("Effects").addProperty("Warp Stabilizer"); // the regular way to add an effect
        }
    }
}
The solution is based on this Adobe forum thread: https://forums.adobe.com/thread/1204115
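Since the goal is to process a whole time-lapse comp rather than one selected layer, a variation that walks every layer of the active comp might look like this (an untested sketch; it assumes Warp Stabilizer is installed and addressable by that display name):
var activeItem = app.project.activeItem;
if (activeItem != null && activeItem instanceof CompItem) {
    for (var i = 1; i <= activeItem.numLayers; i++) { // AE layer indices start at 1
        var theLayer = activeItem.layer(i);
        if (!theLayer.property("Effects").property("Warp Stabilizer")) { // skip layers that already have it
            theLayer.property("Effects").addProperty("Warp Stabilizer");
        }
    }
}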
I'm trying to use NAudio's FadeInOutSampleProvider to fade in a sample and fade it out. The fade in works OK, but instead of fading out gradually I get abrupt silence from where the fade-out should begin.
What's the correct way to fade out with FadeInOutSampleProvider?
Here's how I'm trying to do it:
IWaveProvider waveSource; // initialised by reading a WAV file
// The ISampleProvider will be the underlying source for the following operations
ISampleProvider sampleSource = waveSource.ToSampleProvider();
// Create a provider which defines the samples we want to fade in
// (including the full-volume "middle" of the final output)
OffsetSampleProvider fadeInSource = new OffsetSampleProvider(sampleSource);
fadeInSource.TakeSamples = most_of_file; // calculation omitted for brevity
// Create a provider which defines the samples we want to fade out:
// We will play these samples when fadeInSource is finished
OffsetSampleProvider fadeOutSource = new OffsetSampleProvider(sampleSource);
fadeOutSource.SkipOverSamples = fadeInSource.TakeSamples;
// Wrap the truncated sources in FadeInOutSampleProviders
var fadeIn = new FadeInOutSampleProvider(fadeInSource);
fadeIn.BeginFadeIn(500); // half-second fade
var fadeOut = new FadeInOutSampleProvider(fadeOutSource);
fadeOut.BeginFadeOut(500);
// doc-comments suggest the fade-out will begin "after first Read"
I'm expecting fadeOut to initially read non-zero samples from 500ms before the end of the original source, but fade out to zeros by the end of the source.
However, when I play fadeIn to completion, then play fadeOut, I find that the very first Read call to fadeOut fills the buffer with zeros.
Am I doing something wrong? Or is there a bug in NAudio?
Note: I'm handling the sequential playback using a ConcatenatingSampleProvider which I implemented myself, since I couldn't find anything similar in NAudio's API. It's pretty trivial, so I've omitted the source here.
The problem is that you're trying to reuse sampleSource twice in your graph, so sampleSource has already been read to the end before anything is read from fadeOutSource. For your usage, it would probably be better if FadeInOutSampleProvider were able to "schedule" a fade-out after a known number of samples.
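A rough sketch of that scheduling idea, wrapping FadeInOutSampleProvider (the class name is my own invention, and the fade start is only accurate to within one read buffer):
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

// Hypothetical wrapper: counts samples as they are read and triggers
// BeginFadeOut once a preset sample position is reached.
class ScheduledFadeOutProvider : ISampleProvider
{
    private readonly FadeInOutSampleProvider fade;
    private readonly long fadeStartSample;
    private readonly double fadeDurationMs;
    private long samplesRead;
    private bool fadeStarted;

    public ScheduledFadeOutProvider(ISampleProvider source, long fadeStartSample, double fadeDurationMs)
    {
        fade = new FadeInOutSampleProvider(source);
        this.fadeStartSample = fadeStartSample;
        this.fadeDurationMs = fadeDurationMs;
    }

    public WaveFormat WaveFormat { get { return fade.WaveFormat; } }

    public int Read(float[] buffer, int offset, int count)
    {
        // Start the fade once the scheduled sample position has been passed.
        if (!fadeStarted && samplesRead >= fadeStartSample)
        {
            fade.BeginFadeOut(fadeDurationMs);
            fadeStarted = true;
        }
        int read = fade.Read(buffer, offset, count);
        samplesRead += read;
        return read;
    }
}
With something like that, the whole file could be read through a single chain instead of being split into two branches that share one source.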
An alternative approach is a FadeOutSampleProvider that knows the fade-out duration in advance, buffers that much audio, and, when it detects that the end of its source has been reached, returns the buffered portion faded out. It does mean some latency is introduced.
I'm trying to write a basic game using Apple's Sprite Kit framework. So far, I have a ship flying around the screen, using SKPhysicsBody. I want to keep the ship from flying off the screen, so I edited my update method to make the ship's velocity zero. This works most of the time, but every now and then, the ship will fly off the screen.
Here's my update method.
// const int X_MIN = 60;
// const int X_MAX = 853;
// const int Y_MAX = 660;
// const int Y_MIN = 60;
// const float SHIP_SPEED = 50.0;
- (void)update:(CFTimeInterval)currentTime {
    if (self.keysPressed & DOWN_ARROW_PRESSED) {
        if (self.ship.position.y > Y_MIN) {
            [self.ship.physicsBody applyForce:CGVectorMake(0, -SHIP_SPEED)];
        } else {
            self.ship.physicsBody.velocity = CGVectorMake(self.ship.physicsBody.velocity.dx, 0);
        }
    }
    if (self.keysPressed & UP_ARROW_PRESSED) {
        if (self.ship.position.y < Y_MAX) {
            [self.ship.physicsBody applyForce:CGVectorMake(0, SHIP_SPEED)];
        } else {
            self.ship.physicsBody.velocity = CGVectorMake(self.ship.physicsBody.velocity.dx, 0);
        }
    }
    if (self.keysPressed & RIGHT_ARROW_PRESSED) {
        if (self.ship.position.x < X_MAX) {
            [self.ship.physicsBody applyForce:CGVectorMake(SHIP_SPEED, 0)];
        } else {
            self.ship.physicsBody.velocity = CGVectorMake(0, self.ship.physicsBody.velocity.dy);
        }
    }
    if (self.keysPressed & LEFT_ARROW_PRESSED) {
        if (self.ship.position.x > X_MIN) {
            [self.ship.physicsBody applyForce:CGVectorMake(-SHIP_SPEED, 0)];
        } else {
            self.ship.physicsBody.velocity = CGVectorMake(0, self.ship.physicsBody.velocity.dy);
        }
    }
}
At first, I used applyImpulse in didBeginContact to push the ship back. This made the ship bounce, but I don't want the ship to bounce. I just want it to stop at the edge.
What is the right way to make the ship stop once it reaches the edge? The code above works most of the time, but every now and then the ship shoots off screen. This is for OS X—not iOS—in case that matters.
Check out this link...
iOS7 SKScene how to make a sprite bounce off the edge of the screen?
[self setPhysicsBody:[SKPhysicsBody bodyWithEdgeLoopFromRect:self.frame]]; //Physics body of Scene
This should set up a barrier around the edge of your scene.
EDIT:
This example project from Apple might also be useful
https://developer.apple.com/library/mac/samplecode/SpriteKit_Physics_Collisions/Introduction/Intro.html
Your code does not make clear how large the velocity can get. Keep in mind that if the velocity is high enough, your ship will travel multiple points between updates. For example, say your ship's X/Y is at (500,500) at the current update. Given a high enough velocity, your ship could be at (500,700) by the very next update; if your boundary were set at (500,650), your ship would already be past it.
I suggest you do a max check on velocity BEFORE applying it to your ship, as in the sketch below. This should avoid the problem outlined above.
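Something like this at the top of your update: method could cap the speed (a rough sketch; MAX_SPEED is a hypothetical constant you would tune to your game):
CGVector v = self.ship.physicsBody.velocity;
CGFloat speed = sqrt(v.dx * v.dx + v.dy * v.dy);
if (speed > MAX_SPEED) {
    // scale the velocity back down so one frame's travel can't jump the boundary
    CGFloat scale = MAX_SPEED / speed;
    self.ship.physicsBody.velocity = CGVectorMake(v.dx * scale, v.dy * scale);
}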
As for bouncy, bouncy... did you try setting your ship's self.physicsBody.restitution = 0; ? The restitution is the bounciness of the physics body. If you use your own screen boundaries, then I would recommend setting those to restitution = 0 as well.
Your best bet would be to add a rectangle physics body around the screen as a boundary. Set the collision and contact categories of the boundary and the player to interact with each other, as in the sketch below. In the didBeginContact method you can then check whether the bodies have touched and, if they have, call a method to redirect the ship.
The underlying problem is that your update method may not check the ship's location frequently enough to catch it before it gets off screen.
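A rough sketch of that setup (the mask values are arbitrary choices, and self.ship is assumed to be your player node):
static const uint32_t shipCategory = 0x1 << 0;
static const uint32_t edgeCategory = 0x1 << 1;

// in the scene's setup code
self.physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:self.frame];
self.physicsBody.categoryBitMask = edgeCategory;
self.physicsWorld.contactDelegate = self; // so didBeginContact: gets called

self.ship.physicsBody.categoryBitMask = shipCategory;
self.ship.physicsBody.collisionBitMask = edgeCategory;
self.ship.physicsBody.contactTestBitMask = edgeCategory;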
This will help you define your screen edges in Swift:
self.physicsBody = SKPhysicsBody(edgeLoopFromRect: self.frame)
I need to search for special features/patterns that might be visible in only one (or two) of many frames. The frame rate can be as slow as 25 frames per second, and the video may contain over 7500 frames. I often start by scanning the video at high speed, looking for a segment where I might find the feature, then rewind. I repeat this procedure while gradually reducing the playback speed, until I find a fairly small time window in which I can expect to find the feature (if it is present). I would then like to step forward and backward by single frames using key hit events (e.g. the right and left arrow keys) to find the feature of interest. I have managed to use HTML5 with JavaScript to control the forward speed, but I still do not know how to use the keyboard for single-frame stepping forward and backward through a video. How can this be accomplished? Note: my web browser is Firefox 26 running on Windows 7.
You can seek to any time in the video by setting the currentTime property. Something like this:
var video = document.getElementById('video'),
    frameTime = 1 / 25; // assume 25 fps
window.addEventListener('keydown', function (evt) { // arrow keys fire keydown reliably; keypress is not guaranteed for them
    if (video.paused) { // or you can force it to pause here
        if (evt.keyCode === 37) { // left arrow
            // one frame back
            video.currentTime = Math.max(0, video.currentTime - frameTime);
        } else if (evt.keyCode === 39) { // right arrow
            // one frame forward; don't go past the end, otherwise you may get an error
            video.currentTime = Math.min(video.duration, video.currentTime + frameTime);
        }
    }
});
Just a couple things you need to be aware of, though they shouldn't cause you too much trouble:
There is no way to detect the frame rate, so you have to either hard-code it or list it in some lookup table or guess.
Seeking may take a few milliseconds or more and does not happen synchronously. The browser needs some time to load the video from the network (if it's not already loaded) and to decode the frame. If you need to know when seeking is done, you can listen for the 'seeked' event.
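For example (a minimal sketch; the handler body is just a placeholder):
video.addEventListener('seeked', function () {
    // the browser has finished seeking and the new frame is ready
    console.log('seek finished at ' + video.currentTime);
});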
You can check whether the video has advanced to the next frame by comparing currentTime with the target time you set:
var targetTime = video.currentTime; // track the frame time we ask for
targetTime += (1 / 25); // 25 fps
video.currentTime = targetTime; // seek to the next frame
if (video.currentTime >= video.duration) { // if it's the end of the video
    alert('done!');
}
if (video.currentTime == targetTime) { // frame has been updated
    // ...
}