How to stream an audio file with OpenTok? - webrtc

In OpenTok, with OT.initPublisher, you can only pass a deviceId to the audioSource. Does anyone know of a way to stream an audio file?
For example, I have done this:
navigator.getUserMedia({audio: true, video: false},
  function(stream) {
    var context = new AudioContext();
    var microphone = context.createMediaStreamSource(stream);
    var backgroundMusic = context.createMediaElementSource(document.getElementById("song"));
    var mixedOutput = context.createMediaStreamDestination();
    microphone.connect(mixedOutput);
    backgroundMusic.connect(mixedOutput);
  },
  handleError);
Like this, I can get a stream with both my voice and the music, but how do I pass this stream to a publisher? Is it possible, or is there another way to do this?

Update: There is now an official way to do this, using the videoSource and audioSource properties passed to OT.initPublisher; please see the documentation: https://tokbox.com/developer/sdks/js/reference/OT.html#initPublisher
This is an example of how to stream a canvas element as a video track: https://github.com/opentok/opentok-web-samples/tree/master/Publish-Canvas
You can apply the same technique to stream an audio track.
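For example, here is a minimal sketch of the mixing approach combined with those options (untested; the element ids "song" and "publisherDiv" and the connected session object are assumptions):
navigator.mediaDevices.getUserMedia({audio: true, video: true}).then(function(stream) {
  var context = new AudioContext();
  // Mix the microphone with the <audio id="song"> element
  var microphone = context.createMediaStreamSource(stream);
  var backgroundMusic = context.createMediaElementSource(document.getElementById('song'));
  var mixedOutput = context.createMediaStreamDestination();
  microphone.connect(mixedOutput);
  backgroundMusic.connect(mixedOutput);
  // Publish the mixed audio track together with the original camera track
  var publisher = OT.initPublisher('publisherDiv', {
    audioSource: mixedOutput.stream.getAudioTracks()[0],
    videoSource: stream.getVideoTracks()[0]
  });
  session.publish(publisher);
});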
Old Answer:
It's not currently possible with the officially supported API but there is a way to do it.
Please see the TokBox blog post about Camera Filters: https://tokbox.com/blog/camera-filters-in-opentok-for-web/
In order to modify the stream before it reaches the OpenTok JS SDK we use the mockGetUserMedia function to intercept the stream:
https://github.com/aullman/opentok-camera-filters/blob/master/src/mock-get-user-media.js
You could invoke mockGetUserMedia with a function which does your audio mixing. Something like this:
mockGetUserMedia(function(originalStream) {
  var context = new AudioContext();
  var microphone = context.createMediaStreamSource(originalStream);
  var backgroundMusic = context.createMediaElementSource(document.getElementById("song"));
  var mixedOutput = context.createMediaStreamDestination();
  microphone.connect(mixedOutput);
  backgroundMusic.connect(mixedOutput);
  var stream = mixedOutput.stream;
  originalStream.getVideoTracks().forEach(function(track) {
    stream.addTrack(track);
  });
  return stream;
});
Note: I have not tested this function, but it should lead you in the right direction. Remember that this technique is error-prone and not officially supported by TokBox.
We are currently working on a new feature which will enable this use case but I cannot give a time estimate of when it will be available.

Thank you for the help, but we haven't been able to make it work since this morning.
So we put the following code in a separate file that is loaded before the OpenTok library in our HTML:
function mockGetUserMedia(mockOnStreamAvailable) {
  var oldGetUserMedia = void 0;
  if (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia) {
    oldGetUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
    navigator.webkitGetUserMedia = navigator.getUserMedia = navigator.mozGetUserMedia = function getUserMedia(constraints, onStreamAvailable, onStreamAvailableError, onAccessDialogOpened, onAccessDialogClosed, onAccessDenied) {
      return oldGetUserMedia.call(navigator, constraints, function (stream) {
        onStreamAvailable(mockOnStreamAvailable(stream));
      }, onStreamAvailableError, onAccessDialogOpened, onAccessDialogClosed, onAccessDenied);
    };
  } else {
    console.warn('Could not find getUserMedia function to mock out');
  }
}
mockGetUserMedia(function(stream) {
  var context = new AudioContext();
  var bgMusic = context.createMediaElementSource(document.getElementById("song"));
  var microphone = context.createMediaStreamSource(stream);
  var destination = context.createMediaStreamDestination();
  bgMusic.connect(destination);
  microphone.connect(destination);
  var mixedStream = destination.stream;
  stream.getVideoTracks().forEach(function(track) {
    mixedStream.addTrack(track);
  });
  return mixedStream;
});
In our Angular code, we init the session, create a publisher, and publish it, but we get this error:
Uncaught DOMException: Failed to execute 'createMediaElementSource' on 'BaseAudioContext': HTMLMediaElement already connected previously to a different MediaElementSourceNode.
I think this error is thrown because the function is executed twice: once when the JS loads, and once when we publish.
I am not sure how to use this mockGetUserMedia function; do you know what is wrong with our code?
EDIT
We made it work with an if condition. Thank you so much, man, very appreciated.
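For reference, here is one way such a guard can look (an illustrative sketch, not necessarily the exact condition used): cache the AudioContext and the element source so createMediaElementSource runs only once per element.
var context = null;
var bgMusic = null;
mockGetUserMedia(function(stream) {
  // An HTMLMediaElement can only ever be connected to one
  // MediaElementSourceNode, so create it (and its context) only once.
  if (!context) {
    context = new AudioContext();
    bgMusic = context.createMediaElementSource(document.getElementById("song"));
  }
  var microphone = context.createMediaStreamSource(stream);
  var destination = context.createMediaStreamDestination();
  bgMusic.connect(destination);
  microphone.connect(destination);
  var mixedStream = destination.stream;
  stream.getVideoTracks().forEach(function(track) {
    mixedStream.addTrack(track);
  });
  return mixedStream;
});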

Related

MediaRecorder has a delay of multiple seconds

I'm trying to use a MediaRecorder to record a MediaStream and display it in a video element using a MediaSource. So the setup looks like:
1) Request a MediaStream from the browser
2) Add it to the MediaRecorder
3) Add the recorded blobs to the MediaSource buffer
The result looks very good, but there is one problem: there is a delay in the playback.
When displaying the MediaStream directly there is no delay, so I ruled out the first point as the problem.
Nevertheless, it seems like either the MediaRecorder or the MediaSource adds a delay of about 3 seconds to the stream.
this.screenRecording = await mediaDevices.getDisplayMedia({ video: { frameRate: 60, resizeMode: 'none' } });
const mediaRecorder = new MediaRecorder(this.screenRecording);
mediaRecorder.ondataavailable = async (event: any) => {
  if (this.screenReceiving.readyState === 'open') {
    if (this.screenReceivingBuffer == null) {
      this.screenReceivingBuffer = this.screenReceiving.addSourceBuffer('video/webm;codecs=vp8');
    }
    if (!this.screenReceivingBuffer.updating) {
      this.screenReceivingBuffer.appendBuffer(await new Response(event.data).arrayBuffer());
    }
  }
};
mediaRecorder.start(16);
The above code is just copied & pasted from the actual project, so please don't expect it to work via copy & paste ;)
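For reference, a minimal self-contained sketch of the same pipeline (the <video id="preview"> element, codec string, and timeslice are assumptions):
async function startPipeline() {
  // 1) Request a MediaStream from the browser
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  // 3) Feed the recorded blobs into a MediaSource attached to a <video>
  const mediaSource = new MediaSource();
  const video = document.getElementById('preview');
  video.src = URL.createObjectURL(mediaSource);
  mediaSource.addEventListener('sourceopen', () => {
    const buffer = mediaSource.addSourceBuffer('video/webm;codecs=vp8');
    // 2) Add the stream to a MediaRecorder
    const recorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp8' });
    recorder.ondataavailable = async (event) => {
      if (event.data.size > 0 && !buffer.updating) {
        buffer.appendBuffer(await event.data.arrayBuffer());
      }
    };
    recorder.start(16);
    video.play();
  });
}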
Does anyone have an idea why this delay exists?
Any ideas on how to tweak the browser to not add this delay?

setSinkId changes multiple audio outputs

Here is the problem:
First I enumerate all the available devices into select elements:
navigator.mediaDevices.enumerateDevices()
When I change one output, the sound plays on the device that I chose:
HTMLMediaElement.setSinkId(deviceId)
But if I then play another audio element and change its output device with setSinkId, it also changes the first one to the last deviceId, so I have both sounds on the same device.
Do I need the latest adapter.js version to handle this properly?
********* EDITED **********
Following the above comment, I tried Web Audio, but without success. With getUserMedia everything is fine:
navigator.getUserMedia({ audio: true, video: false },
  function (mediaStream) {
    // Create an audio context for the audio
    var ac = new (window.AudioContext || window.webkitAudioContext)();
    // Clone the stream, otherwise the id of every stream is 'default'
    //var streamClone = mediaStream.clone();
    var ss = ac.createMediaStreamSource(mediaStream);
    // Create a destination
    var sd = ac.createMediaStreamDestination();
    ss.connect(sd);
    element.srcObject = sd.stream;
    // Play the sound
    element.play();
    element.setSinkId(deviceId).then(function () {
      console.log('Set deviceId (' + deviceId + ') in the selected audio element');
    });
  },
  function (error) {
    console.log(error);
  }
);
But using my remote stream, I cannot get any sound:
var ac = new (window.AudioContext || window.webkitAudioContext)();
// Clone the stream, otherwise the id of every stream is 'default'
var streamClone = stream.clone();
var ss = ac.createMediaStreamSource(stream);
// Create a destination
var sd = ac.createMediaStreamDestination();
ss.connect(sd);
// Element is my HTMLMediaElement
element.srcObject = sd.stream;
// Play the sound
element.play();
element.setSinkId(deviceId).then(function () {
  console.log('Set deviceId (' + deviceId + ') in the selected audio element');
});
This is most likely caused by how Chrome renders audio. See here for a description, which also suggests using Web Audio to work around the problem.
adapter.js cannot fix this.
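One Chromium quirk worth checking for the silent remote stream (a hedged suggestion based on a long-standing Chrome bug, not something verified here): a remote WebRTC MediaStream often produces no audio in Web Audio unless it is also attached to a media element, even a muted one. A sketch of the workaround:
// Keep the remote stream attached to a muted audio element so that
// Chrome actually delivers its audio into the Web Audio graph.
var keepAlive = new Audio();
keepAlive.muted = true;
keepAlive.srcObject = stream; // the remote stream
var ac = new (window.AudioContext || window.webkitAudioContext)();
var ss = ac.createMediaStreamSource(stream);
var sd = ac.createMediaStreamDestination();
ss.connect(sd);
element.srcObject = sd.stream;
element.play();
element.setSinkId(deviceId);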

Download a PDF generated by Apps Script via web app

I'm trying to figure out how to make a Google Apps Script deployed as a web app download a PDF that's generated on a click. It almost works, but the resulting file isn't valid. I can't figure out if it's an encoding issue or something else.
In Apps Script the code looks simple:
function makePDF() {
  ...
  var pdfBlob = doc.getAs('application/pdf');
  return Utilities.base64Encode(pdfBlob.getBytes());
}
In the browser, there's a click handler:
function clickHandler(ev) {
  ev.preventDefault();
  google.script.run
    .withSuccessHandler(function(data) {
      var pdf = new Blob([window.atob(data)]);
      var href = window.URL.createObjectURL(pdf);
      var link = document.querySelector('#hiddenLink');
      link.href = href;
      link.click();
    })
    .makePDF();
}
Any suggestions?
Thanks!
I figured it out, so I'm posting the answer in case anyone else is trying to pass a PDF from Apps Script to the client-side JavaScript. It's all much simpler than I had made it.
Rather than messing around with base64 encodings, just pass back the byte array:
function makePDF() {
  ...
  var pdfBlob = DocumentApp.openById('1234').getAs('application/pdf');
  return pdfBlob.getBytes();
}
Now, on the client side, construct a new Blob from an ArrayBuffer. That's easy too:
function clickHandler(ev) {
  google.script.run
    .withSuccessHandler(function(data) {
      var arr = new Uint8Array(data);
      var blob = new Blob([arr.buffer], {type: 'application/pdf'});
      var obj_url = window.URL.createObjectURL(blob);
      var hiddenLink = document.getElementById('hiddenPDFLink');
      hiddenLink.setAttribute('href', obj_url);
      hiddenLink.setAttribute('download', 'filename.pdf');
      hiddenLink.click();
    })
    .makePDF();
}
And that's it! Hope someone else finds this helpful.
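For what it's worth, the original base64 version most likely failed because new Blob([window.atob(data)]) re-encodes the decoded binary string as UTF-8, corrupting every byte above 0x7F. If you prefer to keep the base64 round trip, decoding into a Uint8Array first should also work (untested sketch):
// Decode the base64 payload byte by byte instead of passing a string to Blob
var binary = window.atob(data);
var bytes = new Uint8Array(binary.length);
for (var i = 0; i < binary.length; i++) {
  bytes[i] = binary.charCodeAt(i);
}
var pdf = new Blob([bytes], { type: 'application/pdf' });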
I assume that your makePDF function does some other work or calculations, and at the end you need the document to be downloaded to the local computer.
What you can do inside the success handler is:
var link = document.querySelector('#hiddenLink');
link.href = "https://docs.google.com/feeds/download/documents/export/Export?id=**TheIdOfDocumentToBeDownloaded**&exportFormat=pdf";
link.click();
It will then prompt you to save the document to your local computer.

WebRTC mix local and remote audio streams and record

So far I've only found a way to record either the local or the remote stream using the MediaRecorder API, but is it possible to mix and record both streams and get a blob?
Please note it's an audio stream only, and I don't want to mix/record on the server side.
I have an RTCPeerConnection as pc.
var local_stream = pc.getLocalStreams()[0];
var remote_stream = pc.getRemoteStreams()[0];
var audioChunks = [];
var rec = new MediaRecorder(local_stream);
rec.ondataavailable = e => {
  audioChunks.push(e.data);
  if (rec.state == "inactive") {
    // Play audio using new blob
  }
};
rec.start();
I even tried adding multiple tracks with the MediaStream API, but it still gives only the first track's audio. Any help or insight would be appreciated!
The WebAudio API can do mixing for you. Consider this code if you want to record all the audio tracks in the array audioTracks:
const ac = new AudioContext();
// WebAudio MediaStream sources only use the first track.
const sources = audioTracks.map(t => ac.createMediaStreamSource(new MediaStream([t])));
// The destination will output one track of mixed audio.
const dest = ac.createMediaStreamDestination();
// Mixing
sources.forEach(s => s.connect(dest));
// Record 10s of mixed audio as an example
const recorder = new MediaRecorder(dest.stream);
recorder.start();
recorder.ondataavailable = e => console.log("Got data", e.data);
recorder.onstop = () => console.log("stopped");
setTimeout(() => recorder.stop(), 10000);
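In this question's case, audioTracks could be built from the peer connection like this (a sketch matching the question's deprecated getLocalStreams/getRemoteStreams calls):
const audioTracks = [
  ...pc.getLocalStreams()[0].getAudioTracks(),
  ...pc.getRemoteStreams()[0].getAudioTracks()
];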

WebRTC: Switch from Video Sharing to Screen sharing during call

Initially, I had two different web pages:
One was to do a video call, and
the other was to do screen sharing.
Now, I want to do both of them on one page.
Here is the scenario:
During a live call, a user wants to stop sharing his/her video and start sharing his/her screen.
Afterwards, he/she wishes to turn off screen sharing and start video sharing again.
For clarity, here are some questions I want to ask:
On Caller Side:
1) How can I change my local stream from video to screen and vice versa?
2) Once it is done, how can I assign it to the local video element?
On Callee Side:
1) How do I handle it if the current stream I am receiving changes from video to screen?
2) How do I handle it if the stream I am receiving has stopped? I mean, now I can receive neither video nor screen (just audio).
Kindly help me in this regard. If there is any open-source code available, kindly share the links too.
Just for your reference, I was trying to handle it using the following code. (I know this is naive and won't work.)
function handleUserMedia(newStream) {
  var localvideo = document.getElementById("localvideo");
  localvideo.src = URL.createObjectURL(newStream);
  localStream = newStream;
  sendMessage('got user media');
  if (isInitiator) {
    maybeStart();
  }
}
function handleUserMediaError(error) {
  console.log(error);
}
var video_constraints = {video: true, audio: true};
var screen_constraints = {video: { mandatory: { chromeMediaSource: 'screen' } }};
getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
//getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);
$scope.btnLabel = 'Share Screen';
$scope.toggleSelected = function () {
  $scope.selected = !$scope.selected;
  if ($scope.selected) {
    getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);
    $scope.btnLabel = 'Share Video';
  } else {
    getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
    $scope.btnLabel = 'Share Screen';
  }
};
Check this demo:
https://www.webrtc-experiment.com/demos/switch-streams.html
and the relevant tutorial:
https://www.webrtc-experiment.com/docs/how-to-switch-streams.html
Simply renegotiate the peer connections on both users' sides!
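As an alternative to full renegotiation (a hedged sketch using the standard RTCRtpSender.replaceTrack API, not taken from the linked demo), the caller can swap the outgoing video track in place:
async function switchToScreen(pc, localVideo) {
  const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const screenTrack = screenStream.getVideoTracks()[0];
  // Find the sender currently carrying video and replace its track.
  const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
  await sender.replaceTrack(screenTrack);
  localVideo.srcObject = screenStream; // update the local preview
}
Because replaceTrack keeps the same sender, the callee keeps receiving on the same track and usually needs no extra handling.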