Passing an AudioBuffer to AudioContext Analyser in CreateJS - createjs

I have made an audioCtx in JavaScript using the AudioContext() constructor, and an analyser made with audioCtx.createAnalyser(). If my audio is an audio tag and I make a source with audioCtx.createMediaElementSource(audio), then pass that to the analyser with source.connect(analyser), this works: I receive data. I can also connect a mic using audioCtx.createMediaStreamSource(stream), etc.
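To be concrete, the wiring that works looks roughly like this (audio here is an existing audio element):

var audioCtx = new AudioContext();
var analyser = audioCtx.createAnalyser();
// an <audio> element routed through the analyser and on to the speakers
var source = audioCtx.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(audioCtx.destination);
// read data out of the analyser
var data = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(data);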
BUT, if my source is a CreateJS AbstractSoundInstance object (called input), whose playbackResource property (while the sound is playing) returns an AudioBuffer object:
AudioBuffer { sampleRate: 44100, length: 5961072, duration: 135.1717006802721, numberOfChannels: 2 }
I can't seem to connect this. I have tried
var source = audioCtx.createBufferSource(input.playbackResource);
and then tried connecting the analyser to the destination with analyser.connect(audioCtx.destination); but I cannot get any data. The only hook I seem to have into the CreateJS sound is playbackResource - the SoundJS docs say: "For example, WebAudioPlugin will set an array buffer."
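If I understand the Web Audio docs correctly, the standard AudioBufferSourceNode pattern assigns the buffer after creating the node, roughly like the sketch below - but starting a separate source this way would play the sound a second time on top of what SoundJS is already playing:

var source = audioCtx.createBufferSource();   // createBufferSource() takes no arguments
source.buffer = input.playbackResource;       // the AudioBuffer is assigned here
source.connect(analyser);
analyser.connect(audioCtx.destination);
source.start(0);                              // starts a second, independent playback of the buffer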
Any recommendations on how to pass that AudioBuffer to the Analyser (Lanny? Grant?) Thanks!


NAudio: How to accurately get the current play location when changing play position using AudioFileReader and WaveOutEvent

I'm creating an application that needs to allow the user to select the play position of an audio file while the file is playing. However, once the position is changed, I'm having trouble identifying the current position.
Here's an example program that shows how I'm going about it.
using NAudio.Utils;
using NAudio.Wave;
using System;

namespace NAudioTest
{
    class Program
    {
        static void Main()
        {
            var audioFileReader = new AudioFileReader("test.mp3");
            var waveOutEvent = new WaveOutEvent();
            waveOutEvent.Init(audioFileReader);
            waveOutEvent.Play();
            while (true)
            {
                var key = Console.ReadKey(true);
                if (key.Key == ConsoleKey.Enter)
                {
                    // report the current play position
                    var playLocationSeconds =
                        waveOutEvent.GetPositionTimeSpan().TotalSeconds;
                    Console.WriteLine(
                        "Play location is " + playLocationSeconds + " seconds");
                }
                else if (key.Key == ConsoleKey.RightArrow)
                {
                    // skip ahead 30 seconds
                    audioFileReader.CurrentTime =
                        audioFileReader.CurrentTime.Add(TimeSpan.FromSeconds(30));
                }
                else
                {
                    break;
                }
            }
        }
    }
}
Steps to reproduce the problem:
Start the program: the audio file starts playing.
Press the Enter key: the current play time is written to the console.
Press the right arrow key: the played audio jumps ahead (presumably) to the expected location.
Press the Enter key again: a play time is written to the console, but it looks to be the amount of time since the audio first started playing, not the time of the current play position.
I have tried getting the value of AudioFileReader.CurrentTime instead of calling GetPositionTimeSpan on the WaveOutEvent. The problem with this approach is that the AudioFileReader.CurrentTime value proceeds in jumps, presumably because the underlying stream is buffered when used with WaveOutEvent, so CurrentTime does not accurately reflect the play position, only the position in the underlying stream.
How do I support arbitrary play positioning yet continue to get an accurate play position current time?
The "CurrentTime" property of your audio file reader is good enough to tell the current position of playback, especially if your latency is not very high. I found the difference between it and waveOutEvent.GetPositionTimeSpan() to be 100-200 ms. at most.
You are indeed using the setter of the CurrentTime property to reposition within the stream. It would be consistent to use the getter to then query the current position as well. If you are concerned with precision, you can use lower latency.
The extension method "GetPositionTimeSpan()" does seem to return the total length of playback so far and not the position within the stream. Admittedly I do not know why this is so.
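In other words, report the position from the same property you use for seeking. A sketch (assuming your NAudio version exposes WaveOutEvent.DesiredLatency; 100 ms is just an illustrative value):

var audioFileReader = new AudioFileReader("test.mp3");
var waveOutEvent = new WaveOutEvent { DesiredLatency = 100 }; // lower latency keeps CurrentTime closer to what you hear
waveOutEvent.Init(audioFileReader);
waveOutEvent.Play();
// ...
// read the position from the reader rather than from the output device
var playLocationSeconds = audioFileReader.CurrentTime.TotalSeconds;
Console.WriteLine("Play location is " + playLocationSeconds + " seconds");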

How to mute/unmute mic in webrtc

I have read here how I can mute/unmute the mic for a local stream in WebRTC: WebRTC Tips & Tricks
When I start my local stream the mic is enabled by default, so when I set audioTracks[0].enabled = false it mutes the mic in my local stream, but when I set it back to true it fails to unmute. Here is my mute/unmute code for the local stream:
getLocalStream(function (stream, enable) {
    if (stream) {
        for (var i = 0; i < stream.getTracks().length; i++) {
            var track = stream.getAudioTracks()[0];
            if (track)
                track.enabled = enable;
            //track.stop();
        }
    }
});
Can someone suggest how I can unmute the mic again in a local stream?
I assume that your method getLocalStream is actually calling navigator.getUserMedia. In that case, each time you call it you get another stream, not the original one. Using the original stream you should do:
mediaStream.getAudioTracks()[0].enabled = true; // or false to mute it.
Alternatively you can check https://stackoverflow.com/a/35363284/1210071
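For example (a minimal sketch; localStream is just an illustrative variable that holds on to the original stream):

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then(function (stream) {
        window.localStream = stream; // keep a reference to the original stream
    });

// later, on that same stream object:
window.localStream.getAudioTracks()[0].enabled = false; // mute
window.localStream.getAudioTracks()[0].enabled = true;  // unmute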
There are 2 properties: enabled and muted.
enabled is for setting; muted is read-only on the remote side (the other person). I have tried it: setting muted does not work, basically its value cannot be changed.
stream.getAudioTracks()[0].enabled = true; // remote one will get muted change
Ahhh there is a good way to do this:
mediaStream.getVideoTracks()[0].enabled = !(mediaStream.getVideoTracks()[0].enabled);
You should read and set the "enabled" value. The "enabled" value is for 'muting'. The "muted" value is a read-only value that indicates whether the track is currently unable to play.
The enabled property on the MediaStreamTrack interface is a Boolean value which is true if the track is allowed to render the source stream or false if it is not. This can be used to intentionally mute a track. When enabled, a track's data is output from the source to the destination; otherwise, empty frames are output.
In the case of audio, a disabled track generates frames of silence (that is, frames in which every sample's value is 0). For video tracks, every frame is filled entirely with black pixels.
The value of enabled, in essence, represents what a typical user would consider the muting state for a track, whereas the muted property indicates a state in which the track is temporarily unable to output data, such as a scenario in which frames have been lost in transit.
https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack/enabled
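So a simple mute/unmute helper only ever touches enabled. A minimal sketch (localStream is assumed to be the MediaStream you want to control):

function toggleMute(localStream) {
    var track = localStream.getAudioTracks()[0];
    if (track) {
        // true = audible, false = the track outputs frames of silence
        track.enabled = !track.enabled;
    }
}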
Step 1) Include jquery.min.js.
Step 2) Use the code below.
A) To mute:
$("video").prop('muted', true);
B) To unmute:
$("video").prop('muted', false);
Single icon to mute and unmute, like YouTube:
function enablemute(thisimag) {
    if ($(thisimag).attr('src') == 'images/mute.png') {
        // currently showing the mute icon: unmute and swap the icon
        $("video").prop('muted', false);
        $(thisimag).prop('src', 'images/unmute.png');
    }
    else {
        // otherwise mute and show the mute icon
        $("video").prop('muted', true);
        $(thisimag).prop('src', 'images/mute.png');
    }
}
The enablemute function above should be called from an onclick handler.
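For example, wiring it up with an inline onclick (the id and image path are just illustrative):

<img id="muteIcon" src="images/unmute.png" onclick="enablemute(this)" />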

Frame Listener in QMLOgre Lib Freeze Window

I'm a newbie with Ogre3D and I need help on a certain point.
I'm trying a library mixing the Ogre3D engine and QML:
http://advancingusability.wordpress.com/2013/08/14/qmlogre-is-now-a-library/
This library works fine when you want to draw some objects and then rotate or translate objects that were already initialised in a first step.
void initialize() {
    // we only want to initialize once
    disconnect(this, &ExampleApp::beforeRendering, this, &ExampleApp::initializeOgre);

    // start up Ogre
    m_ogreEngine = new OgreEngine(this);
    m_root = m_ogreEngine->startEngine();
    m_ogreEngine->setupResources();

    m_ogreEngine->activateOgreContext();
    // draw a small cube
    new DebugDrawer(m_sceneManager, 0.5f);
    DrawCube(100, 100, 100);
    DebugDrawer::getSingleton().build();
    m_ogreEngine->doneOgreContext();

    emit(ogreInitialized());
}
But if you want to draw or change the scene after this initialisation step, it is problematic.
In plain Ogre3D (without the QmlOgre library), you have to use a FrameListener, which hooks into the rendering thread and allows a repaint of your scene.
But here we have two OpenGL contexts: one for Qt and one for Ogre.
So if you try to add the usual rendering code:
createScene();
createFrameListener();

// the render loop
m_root->startRendering();
//createScene();
while (true)
{
    Ogre::WindowEventUtilities::messagePump();
    if (pRenderWindow->isClosed())
        std::cout << "pRenderWindow close" << std::endl;
    if (!m_root->renderOneFrame())
        std::cout << "root renderOneFrame" << std::endl;
}
the app will freeze! I know that startRendering() is itself a render loop, so the while loop below it never gets executed.
But I don't know where to put those lines or how to correct this part.
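For reference, my understanding is that a manual render loop drops startRendering() entirely and just drives renderOneFrame(), roughly like this (a sketch; on its own it does not solve the Qt/Ogre context sharing):

createScene();
createFrameListener();
// no m_root->startRendering() here; drive the frames ourselves
while (!pRenderWindow->isClosed())
{
    Ogre::WindowEventUtilities::messagePump();
    if (!m_root->renderOneFrame())
        break; // stop if a frame listener requested shutdown
}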
I've also tried to add a background context and swap buffers:
void OgreEngine::updateOgreContext()
{
    glPopAttrib();
    glPopClientAttrib();

    m_qtContext->functions()->glUseProgram(0);
    m_qtContext->doneCurrent();
    delete m_qtContext;

    m_BackgroundContext = QOpenGLContext::currentContext();

    // create a new shared OpenGL context to be used exclusively by Ogre
    m_BackgroundContext = new QOpenGLContext();
    m_BackgroundContext->setFormat(m_quickWindow->requestedFormat());
    m_BackgroundContext->setShareContext(m_qtContext);
    m_BackgroundContext->create();
    m_BackgroundContext->swapBuffers(m_quickWindow);

    //m_ogreContext->makeCurrent(m_quickWindow);
}
but I get the same error:
OGRE EXCEPTION(7:InternalErrorException): Cannot create GL vertex buffer in GLHardwareVertexBuffer::GLHardwareVertexBuffer at Bureau/bibliotheques/ogre_src_v1-8-1/RenderSystems/GL/src/OgreGLHardwareVertexBuffer.cpp (line 46)
I'm very stuck and I don't know what to do.
Thanks!

I can't get the Kinect SDK to do speech recognition and track skeletal data at the same time

I have a program in which I enabled speech recognition with:
RecognizerInfo ri = GetKinectRecognizer();
speechRecognitionEngine = new SpeechRecognitionEngine(ri.Id);

// Create a grammar from a grammar definition XML file.
using (var memoryStream = new MemoryStream(Encoding.ASCII.GetBytes(fileContent)))
{
    var g = new Grammar(memoryStream);
    speechRecognitionEngine.LoadGrammar(g);
}

speechRecognitionEngine.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(speechEngine_SpeechRecognized);
speechRecognitionEngine.SpeechRecognitionRejected += new EventHandler<SpeechRecognitionRejectedEventArgs>(speechEngine_SpeechRecognitionRejected);

speechRecognitionEngine.SetInputToAudioStream(
    sensor.AudioSource.Start(), new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
speechRecognitionEngine.RecognizeAsync(RecognizeMode.Multiple);
Everything is working fine and the SpeechRecognized event gets fired correctly.
The problem is that when I enable skeletal data tracking with
sensor.SkeletonStream.Enable();
sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;
sensor.SkeletonFrameReady += sensor_SkeletonFrameReady;
speech recognition stops working.
Can I get your help? Thank you so much!
Audio is not processed if skeleton stream is enabled after starting audio capture
Due to a bug, enabling or disabling the SkeletonStream will stop the AudioSource stream returned by the Kinect sensor. The following sequence of instructions will stop the audio stream:
kinectSensor.Start();
kinectSensor.AudioSource.Start(); // --> this will create an audio stream
kinectSensor.SkeletonStream.Enable(); // --> this will stop the audio stream as an undesired side effect
The workaround is to invert the order of the calls or to restart the AudioSource after changing SkeletonStream status.
Workaround #1 (start audio after skeleton):
kinectSensor.Start();
kinectSensor.SkeletonStream.Enable();
kinectSensor.AudioSource.Start();
Workaround #2 (restart audio after skeleton):
kinectSensor.Start();
kinectSensor.AudioSource.Start(); // --> this will create an audio stream
kinectSensor.SkeletonStream.Enable(); // --> this will stop the audio stream as an undesired side effect
kinectSensor.AudioSource.Start(); // --> this will create another audio stream
Source - http://msdn.microsoft.com/en-us/library/jj663798.aspx
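Applied to the code in the question, workaround #1 would mean enabling the skeleton stream before wiring the audio into the speech engine, roughly:

sensor.Start();
sensor.SkeletonStream.Enable();
sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;
sensor.SkeletonFrameReady += sensor_SkeletonFrameReady;

// only now start the audio source and hand it to the recognizer
speechRecognitionEngine.SetInputToAudioStream(
    sensor.AudioSource.Start(),
    new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
speechRecognitionEngine.RecognizeAsync(RecognizeMode.Multiple);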

YouTube video on Flash Banner CS4 AS3

I am trying to make a flash banner in CS4 with AS3.
In this banner I have to embed YouTube videos.
My problem is that after the video loads I can't see the usual controls (fullscreen, pause, stop, etc.) on the video, and the video autoplays by default.
I am using this code:
Security.allowDomain("*");
Security.allowDomain("www.youtube.com");
Security.allowDomain("youtube.com");
Security.allowDomain("s.ytimg.com");
Security.allowDomain("i.ytimg.com");
var my_player1:Object;
var my_loader1:Loader = new Loader();
my_loader1.load(new URLRequest("http://www.youtube.com/apiplayer?version=3"));
my_loader1.contentLoaderInfo.addEventListener(Event.INIT, onLoaderInit);
function onLoaderInit(e:Event):void{
addChild(my_loader1);
my_player1 = my_loader1.content;
my_player1.addEventListener("onReady", onPlayerReady);
}
function onPlayerReady(e:Event):void{
my_player1.setSize(200,100);
/////////////////////////////////
//this example is with parameter//
//my_player1.loadVideoByUrl("http://www.youtube.com/v/ID-YOUTUBE?autohide=1&autoplay=0&fs=1&rel=0",0);
//////////////////////////////////
// this one is only the video id//
my_player1.loadVideoByUrl("http://www.youtube.com/v/ID-YOUTUBE",0);
}
I was trying to pass the parameters in the URL, but it seems that is not working.
I was also checking the Google API for AS3 (http://code.google.com/apis/youtube/flash_api_reference.html) but honestly I don't find a way to implement what I need.
What is the way to see these controls on the video?
Thank you :)
I was trying different things and I found a partial solution that I want to share:
Security.allowDomain("www.youtube.com");
Security.allowDomain("youtube.com");
Security.allowDomain("s.ytimg.com");
Security.allowDomain("i.ytimg.com");
Security.allowDomain("s.youtube.com");
var my_player1:Object;
var my_loader1:Loader = new Loader();
//before I used that:
//my_loader1.load(new URLRequest("http://www.youtube.com/apiplayer?version=3"));
//Now is use this:
my_loader1.load(new URLRequest("http://www.youtube.com/v/ID_VIDEO?version=3));
my_loader1.contentLoaderInfo.addEventListener(Event.INIT, onLoaderInit);
function onLoaderInit(e:Event):void{
addChild(my_loader1);
my_player1 = my_loader1.content;
my_player1.addEventListener("onReady", onPlayerReady);
}
function onPlayerReady(e:Event):void{
my_player1.setSize(200,100);
my_player1.loadVideoByUrl("http://www.youtube.com/v/ID_VIDEO",0);
}
Basically, instead of "Loading the chromeless player" I use "Loading the embedded player".
My problem now is how I can modify, for example, the size of the controls bar, because it takes 35px of height and I want to reduce it.
Thanks