WebRTC Changing Media Streams on the Go

Now that device enumeration is present in Chrome, I know I can select a device during getUserMedia negotiation. I was also wondering whether I could switch devices in the middle of a call (queue up a local track and switch tracks, or do I have to renegotiate the stream)? I am not sure if this is something that is still blocked or is now allowed.
I have tried to make a new track, but I can't figure out how to switch tracks mid-call. I know this was previously impossible, but I was wondering whether it is possible now.

I have the same requirement: I have to record video using MediaRecorder. For this I am using navigator.getUserMedia with audio and video constraints. You can switch the video or audio track dynamically by getting the available devices from navigator.mediaDevices.enumerateDevices(), attaching the chosen device to the constraints, and calling navigator.getUserMedia again with the new constraints. The point to note when doing this is that you have to kill the existing tracks first, using the track.stop() method.
You can see my example here.
StreamTrack's readyState is getting changed to ended, just before playing the stream (MediaStream - MediaStreamTrack - WebRTC)
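For reference, a minimal sketch of that flow, using the promise-based navigator.mediaDevices.getUserMedia (switchCamera and newDeviceId are illustrative names, not part of any API):

function switchCamera(currentStream, newDeviceId) {
    // Kill the existing tracks first, or the new getUserMedia call
    // can fail while the device is still in use.
    currentStream.getTracks().forEach(function (track) {
        track.stop();
    });
    // Request the newly chosen device with fresh constraints.
    return navigator.mediaDevices.getUserMedia({
        video: { deviceId: { exact: newDeviceId } },
        audio: true
    });
}

// Find a deviceId to switch to.
navigator.mediaDevices.enumerateDevices().then(function (devices) {
    devices.filter(function (d) { return d.kind === 'videoinput'; })
           .forEach(function (d) { console.log(d.deviceId, d.label); });
});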

In Firefox, you can call replaceTrack() on an RTCRtpSender object to replace a track on the fly, with no renegotiation. This should eventually be supported by other browsers as part of the spec.
Without replaceTrack(), you would remove the old stream, add a new one, handle onnegotiationneeded, and let the client process the change in streams.
See the replaceTrack() documentation on MDN: https://developer.mozilla.org/en-US/docs/Web/API/RTCRtpSender/replaceTrack
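For illustration, a sketch of the replaceTrack() path (pc is assumed to be an existing RTCPeerConnection that is already sending video; newDeviceId is illustrative):

navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: newDeviceId } }
}).then(function (stream) {
    var newTrack = stream.getVideoTracks()[0];
    // Find the sender currently carrying video.
    var sender = pc.getSenders().filter(function (s) {
        return s.track && s.track.kind === 'video';
    })[0];
    return sender.replaceTrack(newTrack); // no renegotiation needed
});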

Have you tried calling getUserMedia() when you want to change to a different device?
There's an applyConstraints() method in the Media Capture and Streams spec that makes it possible to change constraints on the fly, but it hasn't been implemented yet:
dev.w3.org/2011/webrtc/editor/getusermedia.html#the-model-sources-sinks-constraints-and-states
dev.w3.org/2011/webrtc/editor/getusermedia.html#methods-1
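For what it's worth, this is the shape of the API the spec describes (a sketch only, since browsers hadn't shipped it at the time; the constraint values are arbitrary):

var track = stream.getVideoTracks()[0];
track.applyConstraints({ width: 640, height: 480, frameRate: 15 })
    .then(function () {
        console.log('constraints applied without a new getUserMedia call');
    })
    .catch(function (err) {
        console.log('constraint could not be satisfied:', err.constraint);
    });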

Related

Automatically delete / unlist YouTube video after Live Stream ends

As far as I remember, the enableDvr flag used to work like this: if it was not set, the video would not get archived. This is how we used it in our implementation. However, recently we got feedback that videos with the flag disabled stay on YouTube after the live stream ends.
Has there been a change in the API and how this flag is used?
Also, the documentation states:
Important: You must set the value to true and also set the enableArchive property's value to true if you want to make playback available immediately after the broadcast ends.
On the other hand, the API changelog states that the enableArchive flag was removed back in 2013... Why is it still in the documentation then?
In addition, when setting up an event in YouTube Creators Studio there is an option to automatically unlist the video after the live stream ends. I can't find this option in the LiveBroadcast object. Is it available via the API or would I need to unlist or delete the video manually when the broadcast ends?
They have a defect in their documentation. In the March 27, 2013 revision history, they renamed this property to recordFromStart.
https://developers.google.com/youtube/v3/live/docs/liveBroadcasts#recordFromStart

Flash player API for browser extension

Let's say we have a P2P multiplayer Flash-based game hosted on a website. Would it be possible to create a browser extension that listens to what is going on within the Flash application? For example, I would like to know when a player connects to a room, gets kicked or banned, or simply leaves on his own. I'm sorry this is not a very specific question, but I need a direction to start. Thanks in advance!
I can see a few ways to communicate between Flash and a browser plugin.
One is to open a socket to a server running on the local machine. Because of the security sandbox, this may not be the easiest approach, but if it's feasible it is probably the one to go for, because you've already got your socket-handling code written, and listening and writing to an additional socket isn't terribly complicated. For this approach, you just need your plugin to start listening on a socket and get the Flash applet to connect to it.
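If you control the Flash side, the applet's half of that is short (a sketch in Haxe, to match the snippets below; port 7777 is arbitrary, and the local listener would need to serve a socket policy file to satisfy the sandbox):

var sock = new flash.net.Socket();
sock.addEventListener(flash.events.Event.CONNECT, function (e:flash.events.Event):Void {
    // Connected to the plugin's local listener; send something.
    sock.writeUTFBytes("HELLO PLUGIN\n");
    sock.flush();
});
sock.connect("localhost", 7777);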
Another way might be to try passing messages in cookies. I'm pretty sure this would just cause much grief, though.
Another way, and I suspect this may turn out to be the easier path, is to communicate between Flash and JavaScript using the ExternalInterface class, then from JavaScript to the plugin. Adobe's IntrovertIM example should get you started if you can find a copy on the web.
In Flash, create two functions, a jsToSwf(command:String, args:Array<String>):Dynamic function, to handle incoming messages from JS that are sent to that callback, and a swfToJs(command:String, args:Array<String> = null):Dynamic function, which calls flash.external.ExternalInterface.call("swfToJs", command, args);.
To set it up, you need to do something like:
if (flash.external.ExternalInterface.available) {
    flash.external.ExternalInterface.addCallback("jsToSwf", jsToSwf);
    swfToJs("IS JS READY?");
}
(The two parameters to addCallback are what the function is called in JS and what it's called in Flash. They don't have to be the same, but it sort of makes sense that they are.)
In JS, you need the same functions: function swfToJs(command, params) accepts commands and parameter lists from Flash; and jsToSwf(command, params) calls getSwf("Furcadia").jsToSwf(command, params);.
getSwf("name") should probably be something like:
/** Get a reference to the specified SWF file.
 *  Unfortunately, document.getElementById() doesn't
 *  work well with Flash Player/ExternalInterface. */
function getSwf(movieName) {
    var result = '';
    if (navigator.appName.indexOf("Microsoft") != -1) {
        result = window[movieName];
    } else {
        result = document[movieName];
    }
    return result;
}
The only fiddly bit is that you need a little handshake to make sure everyone's listening. When Flash is ready, it calls swfToJs("IS JS READY?"); the JS side, on getting that command, replies with jsToSwf("JS IS READY!"); on getting that, Flash confirms receipt with swfToJs("FLASH IS READY!"); and both sides set a flag saying they're now clear to send any commands they like.
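Here's roughly what that looks like on the JS side (a sketch; the command strings and the "Furcadia" movie name are just the examples used above):

var flashReady = false;

function swfToJs(command, params) {
    switch (command) {
        case 'IS JS READY?':
            jsToSwf('JS IS READY!');
            break;
        case 'FLASH IS READY!':
            flashReady = true; // both sides are now clear to send commands
            break;
    }
}

function jsToSwf(command, params) {
    getSwf('Furcadia').jsToSwf(command, params);
}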
So, you've now got Flash talking with JS. But how does JS talk to a browser extension? And do you mean an extension or an add-on? There's a difference! That becomes a whole 'nother can of worms, since you didn't specify which browser.
Every browser handles things differently. For example, Mozilla has port.emit()/port.on() and the older postMessage() as APIs for JS to communicate with add-ons.
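For example, with the Firefox Add-on SDK, a content script and the add-on's main code talk over the port object (a sketch; the event name is made up):

// content-script.js
self.port.emit("playerEvent", { type: "kicked", player: "someName" });

// main.js (the worker is handed to you by page-mod's onAttach)
worker.port.on("playerEvent", function (evt) {
    console.log(evt.type, evt.player);
});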
Still, I think ExternalInterface lets us reduce a hard question (Flash-to-external-code comms) to a much simpler one (JS-to-external-code comms).

NAudio asynchronous audio

I want to be able to dump sound files to the audio output asynchronously throughout the life of the application. I need to be able to request a file to be played and have it mixed in with any files already playing. The same file may be called multiple times. Is WaveMixerStream the way to go?
Is it safe/recommended to keep a player and mixer stream open for the life of the application or will that cause performance problems?
// globals
IWavePlayer _Context;
WaveMixerStream32 _Mixer;

// constructor
_Context = new DirectSoundOut();
_Mixer = new WaveMixerStream32();
_Context.Init(_Mixer);
_Context.Play();

// asynchronous sound output, on demand from the user interface
_Mixer.AddInputStream(sound.FileWaveStream);
There are several ways you could do this, but here's one option: create a player and a mixer that stay open all the time. I would use MixingSampleProvider as the mixer, passed through a SampleToWaveProvider or SampleToWaveProvider16 as the input to the player.
To ensure that playback never stops you would either need to customise MixingSampleProvider so that Read always returns the number of samples asked for even if there are no active inputs, or add a dummy input which is a never-ending stream of silence.
Now to play a sound, just pass an ISampleProvider (e.g. AudioFileReader) into your mixer's AddMixerInput and it will play it. It auto-removes inputs when they reach the end.
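A minimal sketch of that setup, assuming a recent NAudio where MixingSampleProvider exposes a ReadFully flag (which does exactly the "always return the number of samples asked for" customisation described above); WaveOutEvent and the 44.1 kHz stereo format are just my choices:

using System;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

class FireAndForgetPlayer : IDisposable
{
    private readonly IWavePlayer player = new WaveOutEvent();
    private readonly MixingSampleProvider mixer;

    public FireAndForgetPlayer()
    {
        // Every input must match this format (44.1 kHz stereo, IEEE float).
        mixer = new MixingSampleProvider(WaveFormat.CreateIeeeFloatWaveFormat(44100, 2));
        mixer.ReadFully = true; // keep reading (silence) so playback never stops
        player.Init(new SampleToWaveProvider(mixer));
        player.Play();
    }

    // Call from the UI whenever a sound should be mixed in;
    // the mixer auto-removes the input when the file ends.
    public void PlaySound(string fileName)
    {
        mixer.AddMixerInput((ISampleProvider)new AudioFileReader(fileName));
    }

    public void Dispose()
    {
        player.Dispose();
    }
}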
An alternative is to create a new player object for each sound to play. The disadvantage is working out a way to keep the player object alive until playback stops (probably by storing in a dictionary and then Disposing and taking it out when the PlaybackStopped event fires). The advantage is that you aren't keeping the sound device open when you don't need to, and you can easily play sounds at different sample rates / number of channels.

How do I record video to a local disk in AIR?

I'm trying to record a webcam's video and audio to an FLV file stored on the user's local hard disk. I have a version of this code working which uses NetConnection and NetStream to stream the video over a network to an FMS (Red5) server, but I'd like to be able to store the video locally for low-bandwidth/flaky-network situations. I'm using Flex 3.2 and AIR 1.5, so I don't believe there should be any sandbox restrictions preventing this.
Things I've seen:
FileStream - Allows reading/writing local files, but has no attachCamera or attachAudio methods for creating an FLV.
flvrecorder - Produces screen grabs from the webcam and creates its own FLV file. Doesn't support audio. License prohibits commercial use.
SimpleFLVWriter.as - Similar to flvrecorder, without the weird license. Doesn't support audio.
This stackoverflow post - Which demonstrates the playback of a video from local disk using a NetConnection/NetStream.
Given that I already have a version which uses NetStream to stream to the server, I thought the last was most promising and went ahead and put together this demo application. The code compiles and runs without errors, but I don't have an FLV file on disk after the stop button is clicked.
<?xml version="1.0" encoding="utf-8"?>
<mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Script>
        <![CDATA[
            private var _diskStream:NetStream;
            private var _diskConn:NetConnection;
            private var _camera:Camera;
            private var _mic:Microphone;

            public function cmdStart_Click():void {
                _camera = Camera.getCamera();
                _camera.setQuality(144000, 85);
                _camera.setMode(320, 240, 15);
                _camera.setKeyFrameInterval(60);
                _mic = Microphone.getMicrophone();
                videoDisplay.attachCamera(_camera);

                _diskConn = new NetConnection();
                _diskConn.connect(null);
                _diskStream = new NetStream(_diskConn);
                _diskStream.client = this;
                _diskStream.attachCamera(_camera);
                _diskStream.attachAudio(_mic);
                _diskStream.publish("file://c:/test.flv", "record");
            }

            public function cmdStop_Click():void {
                _diskStream.close();
                videoDisplay.close();
            }
        ]]>
    </mx:Script>

    <mx:VideoDisplay x="10" y="10" width="320" height="240" id="videoDisplay" />
    <mx:Button x="10" y="258" label="Start" click="cmdStart_Click()" id="cmdStart"/>
    <mx:Button x="73" y="258" label="Stop" id="cmdStop" click="cmdStop_Click()"/>
</mx:WindowedApplication>
It seems to me that either there's something wrong with the above code which is preventing it from working, or NetStream just can't be abused in this way to record video.
What I'd like to know is: a) what (if anything) is wrong with the code above? b) If NetStream doesn't support recording to disk, are there any other alternatives which capture audio AND video to a file on the user's local hard disk?
Thanks in advance!
It is not possible to stream video directly to the local disk without using a streaming server like Windows Media Encoder, Red5, or Adobe's media server.
I have tried all the samples on the internet, with no solution to date.
Look at this link for another possibility:
http://www.zeropointnine.com/blog/updated-flv-encoder-alchem/
My solution was to embed Red5 into AIR.
Sharing my article with you:
http://mydevrecords.blogspot.com/2012/01/local-recording-in-adobe-air-using-red5.html
In general, the solution is to embed the free media server Red5 into AIR as an asset, so the server is present in the AIR application folder. Then, through NativeProcess, you can run Red5 and have an instance of it in memory. As a result, you get local video recording without any network issues.
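To give the idea, a rough sketch of launching a bundled Red5 with NativeProcess (ActionScript 3; requires the extendedDesktop profile, and the paths and application name are illustrative):

// Locate the Red5 files bundled in the application directory.
var red5Dir:File = File.applicationDirectory.resolvePath("red5");
var info:NativeProcessStartupInfo = new NativeProcessStartupInfo();
info.executable = red5Dir.resolvePath("red5.bat"); // red5.sh on Mac/Linux
info.workingDirectory = red5Dir;

var red5:NativeProcess = new NativeProcess();
red5.start(info);

// Once Red5 is up, record exactly as you would against a remote server:
var conn:NetConnection = new NetConnection();
conn.connect("rtmp://localhost/oflaDemo");
// ...then new NetStream(conn).publish("myRecording", "record");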
I am also trying to do the same thing, but I have been told by the developers of avchat.net that it is not possible with AIR at the moment. If you do find out how to do it, I would love to know!
I also found this link, not sure how helpful it is http://www.zeropointnine.com/blog/webcam-dvr-for-apollo/
Well, I just think that letting it connect to nothing (null) doesn't work. I've already tried connecting to localhost, but that didn't work out either. I don't think this is even possible. Streaming video works only against Flash Media Server or Red5, not locally. Maybe you could install Red5 on your PC?
Sadly, camera video support in Flash is very poor. When you stream, it's raw, so the issue is that you have to encode to FLV, and doing that in real time takes a very fast computer. First-generation concepts would write raw bitmaps to a file (or serialize an array), and then a second pass would convert the file to an FLV. Basically you had to poll the camera and save each frame as a bitmap, then stack them in an array. This was very limited, could not do audio, and it was very hard to get above 5-10 fps.
The gent at Zero Point Nine came up with a new version, and you're on the right path. Look at the new FLV recorder. I spent a lot of time working with this but never quite got it to work for my needs (two cameras); I just could not get the FPS I needed. But it might work for you. It was much faster than the original method.
The only other working option I know of is to have Red5 save the video and download it back to the app.

Start playing streaming audio on symbian

The tiny question is:
How do I start playing a given online resource (e.g. http://example.com/file.mp3), with RealPlayer or otherwise?
PyS60, C++, or C# via RedFiveLabs would do.
EDIT1: Title changed from "Start RealPlayer on symbian" to something more appropriate.
I think the title is a little misleading if you just want to play back media content and not use a particular application for it.
In C++ there is CMdaAudioPlayerUtility::OpenUrlL(), but it's not widely implemented; on S60, for example, it completes with a KErrNotSupported status. To play files you can use the other open functions in CMdaAudioPlayerUtility, such as OpenFileL() or OpenDesL(), but you need a separate mechanism for retrieving the files, or at least the bytes, onto the device.
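A bare-bones sketch of the file-playing path (S60 C++; the file name and the missing error handling are illustrative):

#include <mdaaudiosampleplayer.h>

class CMp3Player : public CBase, public MMdaAudioPlayerCallback
    {
public:
    void ConstructL()
        {
        iPlayer = CMdaAudioPlayerUtility::NewL(*this);
        // The bytes must already be on the device (e.g. fetched over
        // HTTP yourself), since OpenUrlL() is not supported on S60.
        iPlayer->OpenFileL(_L("C:\\Data\\file.mp3"));
        }

    // MMdaAudioPlayerCallback
    void MapcInitComplete(TInt aError, const TTimeIntervalMicroSeconds& /*aDuration*/)
        {
        if (aError == KErrNone)
            {
            iPlayer->Play();
            }
        }

    void MapcPlayComplete(TInt /*aError*/)
        {
        }

private:
    CMdaAudioPlayerUtility* iPlayer;
    };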
There is also CVideoPlayerUtility::OpenUrlL(), which supports RTSP audio streams but not HTTP.