So far I've only found a way to record either the local or the remote stream using the MediaRecorder API, but is it possible to mix and record both streams and get a single blob?
Please note it's audio streams only, and I don't want to mix/record on the server side.
I have an RTCPeerConnection as pc.
var local_stream = pc.getLocalStreams()[0];
var remote_stream = pc.getRemoteStreams()[0];
var audioChunks = [];
var rec = new MediaRecorder(local_stream);
rec.ondataavailable = e => {
  audioChunks.push(e.data);
  if (rec.state == "inactive") {
    // Play audio using new blob
  }
};
rec.start();
I even tried adding multiple tracks via the MediaStream API, but it still records only the first track's audio. Any help or insight would be appreciated!
The WebAudio API can do mixing for you. Consider this code if you want to record all the audio tracks in the array audioTracks:
const ac = new AudioContext();
// WebAudio MediaStream sources only use the first track.
const sources = audioTracks.map(t => ac.createMediaStreamSource(new MediaStream([t])));
// The destination will output one track of mixed audio.
const dest = ac.createMediaStreamDestination();
// Mixing
sources.forEach(s => s.connect(dest));
// Record 10s of mixed audio as an example
const recorder = new MediaRecorder(dest.stream);
recorder.start();
recorder.ondataavailable = e => console.log("Got data", e.data);
recorder.onstop = () => console.log("stopped");
setTimeout(() => recorder.stop(), 10000);
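For the original question, the audioTracks array can be built straight from the peer connection. A minimal sketch, assuming pc is the connected RTCPeerConnection (getSenders/getReceivers are the non-deprecated counterparts of getLocalStreams/getRemoteStreams):
// Gather every local (sent) and remote (received) audio track from pc
const audioTracks = [
  ...pc.getSenders().map(s => s.track),
  ...pc.getReceivers().map(r => r.track)
].filter(t => t && t.kind === "audio");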
Related
Hello, I am building a surveillance system. I would like to receive both a webcam video and a shared screen, but when using addTrack I only get the media stream I declared later. Is there any way to get both streams?
Thanks.
Here is the code on the offer side:
let stream = video.srcObject;
let stream2 = shareVideo.srcObject;
stream.getTracks().forEach(track => peerConnection.addTrack(track, stream));
stream2.getTracks().forEach(track => peerConnection.addTrack(track, stream2));
And here is the answer side:
peerConnections[id].ontrack = (event) => {
  console.log(event);
};
When I checked the log, the event has one track, and event.streams[0] contains a MediaStream, but streams[1] has no MediaStream.
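That is expected: ontrack fires once per received track, and event.streams only contains the stream that particular track was added with on the offer side, so the camera stream and the screen stream arrive in separate events. A rough sketch of collecting both by stream id (cameraVideo and screenVideo are hypothetical video elements, not from the original code):
const remoteStreams = {};
peerConnections[id].ontrack = (event) => {
  const stream = event.streams[0];
  if (!remoteStreams[stream.id]) {
    remoteStreams[stream.id] = stream;
    // First incoming stream goes to one element, the second to the other
    const target = Object.keys(remoteStreams).length === 1 ? cameraVideo : screenVideo;
    target.srcObject = stream;
  }
};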
How can I check which SDP plan (plan-b or unified-plan) an RTCPeerConnection object is using?
I know in Chrome I can call:
var p = new RTCPeerConnection()
console.log('plan:', p.getConfiguration().sdpSemantics)
The sdpSemantics property works in Chrome, but it doesn't exist in Safari. How can I check this in Safari?
After some research, it looks like there is no simple, reliable way to check this.
However, according to the docs, we can differentiate Plan-B from unified-plan by how the SDP looks when there is more than one track of the same kind.
In the unified plan, every track of the same kind has a separate m= section in the SDP, while in Plan-B they are grouped together.
Here is the working code snippet:
function isUnifiedPlanEnabled() {
  const canvas = document.createElement('canvas');
  const track = canvas.captureStream(1).getTracks()[0];
  const pc = new RTCPeerConnection();
  pc.addTrack(track);
  pc.addTrack(track.clone());
  return pc.createOffer().then(offer => {
    const sdpRows = offer.sdp.split('\n');
    const mVideoRows = sdpRows.filter(row => row.indexOf('m=video') === 0);
    return mVideoRows.length === 2;
  });
}
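A possible usage example (note the helper creates a throwaway canvas track and RTCPeerConnection, which callers may want to stop/close afterwards):
isUnifiedPlanEnabled().then(unified => {
  console.log(unified ? 'unified-plan' : 'plan-b');
});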
I have a WebRTC stream sending audio/video. I am displaying the volume in a meter widget, using values retrieved from a getStats call on the peerConnection.
peerConnection.getStats(function (stats) {
  var results = stats.result();
  for (let i = 0; i < results.length; i++) {
    var res = results[i];
    if (res.type == 'ssrc') {
      volume = parseInt(res.stat('audioInputLevel'));
    }
  }
});
This works fine; the issue is that when I run replaceTrack to update the stream's audio/video, the above getStats returns 0 for the audio level.
navigator.mediaDevices.getUserMedia(media)
  .then(stream => {
    const tracks = stream.getTracks();
    peerConnection.getSenders()
      .forEach(sender => {
        const newTrack = tracks.find(track => track.kind === sender.track.kind);
        sender.replaceTrack(newTrack);
      });
  });
The local stream gets updated, the remote user gets updated, and audio/video is working. But getStats no longer returns the audioInputLevel.
Would anyone be able to help me understand why, or what a fix may be?
Thanks
audioLevel is broken in spec-stats, see https://bugs.chromium.org/p/chromium/issues/detail?id=920630#c16 and the linked bugs.
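For reference, the spec-compliant counterpart of audioInputLevel is the audioLevel field on the 'media-source' entry from the promise-based getStats; per the bug above it may still report 0 after replaceTrack in affected Chrome versions. A minimal sketch:
peerConnection.getStats().then(report => {
  report.forEach(stat => {
    if (stat.type === 'media-source' && stat.kind === 'audio') {
      // Spec audioLevel is in the range 0..1, unlike the legacy 0..32767 audioInputLevel
      console.log('audioLevel', stat.audioLevel);
    }
  });
});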
I am taking a MediaStream and merging two separate tracks (video and audio) using a canvas and the WebAudio API. The MediaStream itself does not seem to fall out of sync, but after reading it into a MediaRecorder and buffering it into a video element, the audio always seems to play much earlier than the video. Here's the code that seems to have the issue:
let stream = new MediaStream();
// Get the mixed sources drawn to the canvas
this.canvas.captureStream().getVideoTracks().forEach(track => {
  stream.addTrack(track);
});
// Add mixed audio tracks to the stream
// https://stackoverflow.com/questions/42138545/webrtc-mix-local-and-remote-audio-steams-and-record
this.audioMixer.dest.stream.getAudioTracks().forEach(track => {
  stream.addTrack(track);
});

let mediaRecorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=opus,vp8' });
let mediaSource = new MediaSource();
let video = document.createElement('video');
video.src = URL.createObjectURL(mediaSource);
document.body.appendChild(video);
video.controls = true;
video.autoplay = true;

// Source open
mediaSource.onsourceopen = () => {
  let sourceBuffer = mediaSource.addSourceBuffer(mediaRecorder.mimeType);
  mediaRecorder.ondataavailable = (event) => {
    if (event.data.size > 0) {
      const reader = new FileReader();
      reader.readAsArrayBuffer(event.data);
      reader.onloadend = () => {
        sourceBuffer.appendBuffer(reader.result);
        console.log(mediaSource.sourceBuffers);
        console.log(event.data);
      };
    }
  };
  mediaRecorder.start(1000);
};
AudioMixer.js
export default class AudioMixer {
  constructor() {
    // Initialize an audio context
    this.audioContext = new AudioContext();
    // Destination outputs one track of mixed audio
    this.dest = this.audioContext.createMediaStreamDestination();
    // Array of current streams in mixer
    this.sources = [];
  }

  // Add an audio stream to the mixer
  addStream(id, stream) {
    // Get the audio tracks from the stream and add them to the mixer
    let sources = stream.getAudioTracks().map(track =>
      this.audioContext.createMediaStreamSource(new MediaStream([track])));
    sources.forEach(source => {
      // Add it to the current sources being mixed
      this.sources.push(source);
      source.connect(this.dest);
      // Connect to analyser to update volume slider
      let analyser = this.audioContext.createAnalyser();
      source.connect(analyser);
      ...
    });
  }

  // Remove all current sources from the mixer
  flushAll() {
    this.sources.forEach(source => {
      source.disconnect(this.dest);
    });
    this.sources = [];
  }

  // Clean up the audio context for the mixer
  cleanup() {
    this.audioContext.close();
  }
}
I assume it has to do with how the data is pushed into the MediaSource buffer but I'm not sure. What am I doing that de-syncs the stream?
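One side note on the snippet above, independent of the sync question: appendBuffer throws if the SourceBuffer is still updating from the previous append, so incoming chunks are usually queued. A rough sketch of that pattern, reusing mediaRecorder and sourceBuffer from the code above:
const queue = [];
mediaRecorder.ondataavailable = async (event) => {
  if (event.data.size === 0) return;
  queue.push(await event.data.arrayBuffer());
  // Only append immediately if the buffer is idle; otherwise wait for updateend
  if (!sourceBuffer.updating && queue.length) {
    sourceBuffer.appendBuffer(queue.shift());
  }
};
sourceBuffer.onupdateend = () => {
  if (queue.length && !sourceBuffer.updating) {
    sourceBuffer.appendBuffer(queue.shift());
  }
};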
A late reply to an old post, but it might help someone ...
I had exactly the same problem: I have a video stream, which should be supplemented by an audio stream. In the audio stream short sounds (AudioBuffer) are played from time to time. The whole thing is recorded via MediaRecorder.
Everything works fine on Chrome. But on Chrome for Android, all sounds were played back in quick succession; the "when" parameter for "play()" was ignored on Android. (audioContext.currentTime continued to increase over time, so that was not the issue.)
My solution is similar to Jacob's comment Sep 2 '18 at 7:41:
I created and connected a sine-wave oscillator at an inaudible 48,000 Hz, which played continuously in the audio stream during recording. Apparently this keeps the timeline progressing properly.
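A sketch of that workaround, assuming an existing AudioContext (audioContext) and the MediaStreamDestination being recorded (dest); browsers clamp the oscillator frequency to the Nyquist limit, which is still well above the audible range:
// Keep the recorded audio stream "ticking" with a continuously running, inaudible oscillator
const osc = audioContext.createOscillator();
osc.frequency.value = 48000;
osc.connect(dest);
osc.start();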
An RTP endpoint that is emitting multiple related RTP streams that
require synchronization at the other endpoint(s) MUST use the same
RTCP CNAME for all streams that are to be synchronized. This
requires a short-term persistent RTCP CNAME that is common across
several RTP streams, and potentially across several related RTP
sessions. A common example of such use occurs when lip-syncing audio
and video streams in a multimedia session, where a single participant
has to use the same RTCP CNAME for its audio RTP session and for its
video RTP session. Another example might be to synchronize the
layers of a layered audio codec, where the same RTCP CNAME has to be
used for each layer.
https://datatracker.ietf.org/doc/html/rfc6222#page-2
There is a bug in Chrome that plays buffered media-stream audio at 44100 Hz even when it's encoded at 48000 Hz (which leads to gaps and video desync). All other browsers seem to play it fine. You can either switch to a codec that supports 44.1 kHz encoding, or play a file from a web link as the source (that way Chrome plays it correctly).
Here is the problem:
First, I enumerate all the available devices into select elements:
navigator.mediaDevices.enumerateDevices()
When I change one output, the audio plays on the device I chose.
HTMLMediaElement.setSinkId(deviceId)
Then, if I play another audio element and change its output device (setSinkId), the first one also changes to the new deviceId, so I end up with both sounds on the same device.
Do I need the latest adapter.js version to handle this properly?
********* EDITED **********
Following the comment above, I tried the Web Audio API, but without success. With getUserMedia everything works fine.
navigator.getUserMedia({ audio: true, video: false },
  function (mediaStream) {
    // Create an audio context for the audio
    var ac = new (window.AudioContext || window.webkitAudioContext)();
    // Create a clone of the stream, otherwise the id of every stream is "default"
    //var streamClone = mediaStream.clone();
    var ss = ac.createMediaStreamSource(mediaStream);
    // Create a destination
    var sd = ac.createMediaStreamDestination();
    ss.connect(sd);
    element.srcObject = sd.stream;
    // Play the sound
    element.play();
    element.setSinkId(deviceId).then(function () {
      console.log('Set deviceId(' + deviceId + ') in the selected audio element');
    });
  },
  function (error) {
    console.log(error);
  }
);
But using my remote stream, I cannot get any sound:
var ac = new (window.AudioContext || window.webkitAudioContext)();
// Create a clone of the stream, otherwise the id of every stream is "default"
var streamClone = stream.clone();
var ss = ac.createMediaStreamSource(stream);
// Create a destination
var sd = ac.createMediaStreamDestination();
ss.connect(sd);
// Element is my HTMLMediaElement
element.srcObject = sd.stream;
// Play the sound
element.play();
element.setSinkId(deviceId).then(function () {
  console.log('Set deviceId(' + deviceId + ') in the selected audio element');
});
This is most likely caused by how Chrome renders audio. See here for a description, which also suggests using Web Audio to work around the problem.
adapter.js cannot fix this.
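If it helps anyone hitting the silent remote stream above: in some Chrome versions a MediaStreamAudioSourceNode created from a remote stream stays silent unless that stream is also attached to a media element. A hedged sketch of that extra step (stream is the remote MediaStream from the snippet above):
// Workaround sketch: keep the remote stream attached to a muted element so Chrome
// actually decodes its audio and feeds the Web Audio graph
var keepAlive = new Audio();
keepAlive.muted = true;
keepAlive.srcObject = stream;
keepAlive.play();
// ...then create the MediaStreamSource / MediaStreamDestination as shown above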