WebRTC save video and audio [duplicate]

This question already has answers here: How to record webcam and audio using webRTC and a server-based Peer connection (9 answers).
Closed 8 years ago.
I want to save recorded video and audio to a server, but I don't want to encode the video and audio on the client side; I want to encode them on the server side. How can I send the video and audio to the server? Do I stream it?

You can check this repo: Html5_Video_Audio_Recorder
Here is the basic usage of the library:
var virec = new VIRecorder.initVIRecorder(
    {
        recorvideodsize : 0.4,  // recorded video dimensions are 0.4 times smaller than the original
        webpquality : 0.7,      // Chrome and Opera support WebP images; this is the quality of a frame
        framerate : 15,         // recording frame rate
        videotagid : "viredemovideoele",
        videoWidth : "640",
        videoHeight : "480",
    },
    function () {
        // success callback; this will fire if the browser supports recording
    },
    function (err) {
        // error callback; this will fire if the browser does not support recording
        console.log(err.code + " , " + err.name);
    }
);
startRecord.addEventListener("click", function () {
    virec.startCapture(); // this will start recording video and the audio
    startCountDown(null);
});

stopRecord.addEventListener("click", function () {
    virec.stopCapture(oncaptureFinish);
});

playBackRecord.addEventListener("click", function () {
    virec.play(); /* Clientside playback */
});

discardRecordng.addEventListener("click", function () {
    virec.clearRecording();
});

uploadrecording.addEventListener("click", function () {
    var uploadoptions = {
        blobchunksize : 1048576,
        requestUrl : "php/fileupload.php",
        requestParametername : "filename",
        videoname : "video.webm",
        audioname : "audio.wav"
    };
    virec.uploadData(uploadoptions, function (totalchunks, currentchunk) {
        progressNumber.innerHTML = ((currentchunk / totalchunks) * 100);
        console.log(currentchunk + " OF " + totalchunks);
    });
});
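Since the question is specifically about encoding on the server rather than the client, here is a rough sketch of what the server side could do once the uploaded video.webm and audio.wav files are on disk. This is not part of the library above; it assumes a Node.js server (instead of the php/fileupload.php endpoint used in the example) and that ffmpeg is installed on the machine.

// Hedged sketch: mux/encode the uploaded files on the server with ffmpeg (Node.js).
// Assumes the upload endpoint has already written video.webm and audio.wav to disk.
const { execFile } = require('child_process');

function encodeRecording(videoPath, audioPath, outputPath, callback) {
    // Combine the WebM video and WAV audio into a single H.264/AAC MP4.
    execFile('ffmpeg', [
        '-i', videoPath,
        '-i', audioPath,
        '-c:v', 'libx264',
        '-c:a', 'aac',
        outputPath
    ], function (error) {
        callback(error, outputPath);
    });
}

// Example usage (paths are illustrative):
encodeRecording('uploads/video.webm', 'uploads/audio.wav', 'uploads/output.mp4', function (err) {
    if (err) console.error('ffmpeg failed:', err);
    else console.log('Server-side encode finished');
});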

You can send the audio and video over WebSockets to a WebSocket server that can then handle the packets however you want. There are existing recorders out there, and I have modified some of them to focus on sending over WebSockets instead of downloading the files.
Link to Repo.
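For illustration, a minimal client-side sketch of that approach; the WebSocket URL is a placeholder, and the server is assumed to accept binary WebM chunks and do whatever storage or encoding it wants with them.

// Sketch: stream MediaRecorder chunks to a server over a WebSocket.
const socket = new WebSocket('wss://example.com/record'); // placeholder endpoint

navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then(function (stream) {
    const recorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp8,opus' });
    recorder.ondataavailable = function (event) {
        // Each Blob is a chunk of the WebM container; forward it as-is.
        if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
            socket.send(event.data);
        }
    };
    recorder.start(1000); // emit a chunk roughly every second
});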

Related

MediaRecorder has a delay of multiple seconds

I'm trying to use a MediaRecorder to record a MediaStream and display it in a video element using a MediaSource. So the setup looks like:
Request a MediaStream from the browser
Add it to the MediaRecorder
Add the recorded blobs to the MediaSource Buffer
The result looks very good, but there is one problem: there is a delay in the playback.
When displaying the MediaStream directly there is no delay, so I ruled out the first bullet point as the source of the problem.
Nevertheless, it seems like either the MediaRecorder or the MediaSource adds a delay of about 3 seconds to the stream.
this.screenRecording = await mediaDevices.getDisplayMedia({ video: { frameRate: 60, resizeMode: 'none' } });
const mediaRecorder = new MediaRecorder(this.screenRecording);
mediaRecorder.ondataavailable = async (event: any) => {
    if (this.screenReceiving.readyState === 'open') {
        if (this.screenReceivingBuffer == null) {
            this.screenReceivingBuffer = this.screenReceiving.addSourceBuffer('video/webm;codecs=vp8');
        }
        if (!this.screenReceivingBuffer.updating) {
            this.screenReceivingBuffer.appendBuffer(await new Response(event.data).arrayBuffer());
        }
    }
};
mediaRecorder.start(16);
The above code is only copy & paste from the actual project so please don't expect it to work by copy & paste ;)
Does anyone have an idea why this delay exists?
Any ideas on how to tweak the browser to not add this delay?
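Not a fix for the recorder's internal buffering, but building on the observation above that displaying the MediaStream directly has no delay: a common pattern is to use the raw stream for the live preview and keep the MediaRecorder path purely for capturing data. A minimal sketch (variable names are illustrative):

// Sketch: preview straight from the MediaStream, record in parallel.
async function startCaptureWithPreview() {
    const stream = await navigator.mediaDevices.getDisplayMedia({ video: { frameRate: 60 } });

    // Live preview with no MediaRecorder/MediaSource in the path, so no added delay.
    const preview = document.createElement('video');
    preview.srcObject = stream;
    preview.muted = true;
    preview.autoplay = true;
    document.body.appendChild(preview);

    // The recorder only collects chunks; it is not used for display.
    const recordedChunks = [];
    const recorder = new MediaRecorder(stream);
    recorder.ondataavailable = (event) => {
        if (event.data.size > 0) recordedChunks.push(event.data);
    };
    recorder.start(1000);
    return recorder;
}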

Using video.js, is it possible to get the current HLS timestamp?

I have an application which embeds a live stream. To cater for delays, I'd like to know the current timestamp of the stream and compare it with the time on the server.
What I have tested up till now is checking the difference between the buffered time of the video with the current time of the video:
player.bufferedEnd() - player.currentTime()
However I'd like to compare the time with the server instead and to do so I need to get the timestamp of the last requested .ts file.
So, my question is using video.js, is there some sort of hook to get the timestamp of the last requested .ts file?
Video.js version: 7.4.1
I managed to solve this issue; however, please bear with me, as I don't remember where I found the documentation for this bit of code.
In my case I was working in an Angular application and had a video component responsible for loading a live stream using video.js. Anyway, let's see some code...
Video initialisation
private videoInit() {
    this.player = videojs('video', {
        aspectRatio: this.videoStream.aspectRatio,
        controls: true,
        autoplay: false,
        muted: true,
        html5: {
            hls: {
                overrideNative: true
            }
        }
    });
    this.player.src({
        src: '://some-stream-url.com',
        type: 'application/x-mpegURL'
    });
    // on video play callback
    this.player.on('play', () => {
        this.saveHlsObject();
    });
}
Save HLS Object
private saveHlsObject() {
    if (this.player !== undefined) {
        this.playerHls = (this.player.tech() as any).hls;
        // get and sync the server time...
        // make some request to get the server time...
        // then calculate the difference...
        this.diff = serverTime.getTime() - this.getVideoTime().getTime();
    }
}
Get Timestamp of Video Segment
// access the player's playlists, get the last segment and extract its time
// in my case the URIs of the segments were, for example: 1590763989033.ts
private getVideoTime(): Date {
    const targetMedia = this.playerHls.playlists.media();
    const lastSegment = targetMedia.segments[targetMedia.segments.length - 1];
    const uri: string = lastSegment.uri;
    const segmentTimestamp: number = +uri.substring(0, uri.length - 3);
    return new Date(segmentTimestamp);
}
So, the main point above is the getVideoTime function. The time of a segment can be found in the segment URI, so that function extracts the time from the URI and converts it to a Date object. Now, to be honest, I don't know whether this URI format is a standard for HLS or something set up for the particular stream I was connecting to. Hope this helps, and sorry I don't have any more specific information!
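For context, here is a tiny standalone sketch of the same idea, assuming (as in the answer) that segment URIs are millisecond timestamps like 1590763989033.ts and that the server time has already been fetched; the helper name is made up for illustration.

// Hypothetical helper: turn a "<milliseconds>.ts" segment URI into a Date,
// then compute how far playback lags behind the server clock.
function segmentUriToDate(uri: string): Date {
    return new Date(+uri.substring(0, uri.length - 3)); // strip the ".ts" suffix
}

const serverTime = new Date();                            // placeholder: fetched from your server
const segmentTime = segmentUriToDate('1590763989033.ts');
const lagSeconds = (serverTime.getTime() - segmentTime.getTime()) / 1000;
console.log('Stream is roughly ' + lagSeconds.toFixed(1) + 's behind the server clock');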

WebRTC video/audio streams out of sync (MediaStream -> MediaRecorder -> MediaSource -> Video Element)

I am taking a MediaStream and merging two separate tracks (video and audio) using a canvas and the Web Audio API. The MediaStream itself does not seem to fall out of sync, but after reading it into a MediaRecorder and buffering it into a video element, the audio always seems to play much earlier than the video. Here's the code that seems to have the issue:
let stream = new MediaStream();
// Get the mixed sources drawn to the canvas
this.canvas.captureStream().getVideoTracks().forEach(track => {
    stream.addTrack(track);
});
// Add mixed audio tracks to the stream
// https://stackoverflow.com/questions/42138545/webrtc-mix-local-and-remote-audio-steams-and-record
this.audioMixer.dest.stream.getAudioTracks().forEach(track => {
    stream.addTrack(track);
});

let mediaRecorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=opus,vp8' });
let mediaSource = new MediaSource();
let video = document.createElement('video');
video.src = URL.createObjectURL(mediaSource);
document.body.appendChild(video);
video.controls = true;
video.autoplay = true;

// Source open
mediaSource.onsourceopen = () => {
    let sourceBuffer = mediaSource.addSourceBuffer(mediaRecorder.mimeType);
    mediaRecorder.ondataavailable = (event) => {
        if (event.data.size > 0) {
            const reader = new FileReader();
            reader.readAsArrayBuffer(event.data);
            reader.onloadend = () => {
                sourceBuffer.appendBuffer(reader.result);
                console.log(mediaSource.sourceBuffers);
                console.log(event.data);
            };
        }
    };
    mediaRecorder.start(1000);
};
AudioMixer.js
export default class AudioMixer {
    constructor() {
        // Initialize an audio context
        this.audioContext = new AudioContext();
        // Destination outputs one track of mixed audio
        this.dest = this.audioContext.createMediaStreamDestination();
        // Array of current streams in mixer
        this.sources = [];
    }

    // Add an audio stream to the mixer
    addStream(id, stream) {
        // Get the audio tracks from the stream and add them to the mixer
        let sources = stream.getAudioTracks().map(track => this.audioContext.createMediaStreamSource(new MediaStream([track])));
        sources.forEach(source => {
            // Add it to the current sources being mixed
            this.sources.push(source);
            source.connect(this.dest);
            // Connect to analyser to update volume slider
            let analyser = this.audioContext.createAnalyser();
            source.connect(analyser);
            ...
        });
    }

    // Remove all current sources from the mixer
    flushAll() {
        this.sources.forEach(source => {
            source.disconnect(this.dest);
        });
        this.sources = [];
    }

    // Clean up the audio context for the mixer
    cleanup() {
        this.audioContext.close();
    }
}
I assume it has to do with how the data is pushed into the MediaSource buffer but I'm not sure. What am I doing that de-syncs the stream?
A late reply to an old post, but it might help someone ...
I had exactly the same problem: I have a video stream, which should be supplemented by an audio stream. In the audio stream short sounds (AudioBuffer) are played from time to time. The whole thing is recorded via MediaRecorder.
Everything works fine on Chrome. But on Chrome for Android, all sounds were played back in quick succession; the "when" parameter for "play()" was ignored on Android. (audioContext.currentTime continued to increase over time... that was not the issue.)
My solution is similar to Jacob's comment Sep 2 '18 at 7:41:
I created and connected a sine-wave oscillator at an inaudible 48,000 Hz, which played permanently into the audio stream during recording. Apparently this keeps the timeline progressing properly.
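A minimal sketch of that workaround, assuming an AudioContext and a MediaStreamDestination feeding the MediaRecorder, as in the AudioMixer above. The answer uses a 48,000 Hz tone; this sketch uses a near-ultrasonic frequency routed through a near-zero gain so it stays effectively silent either way.

// Sketch: keep a constant, effectively silent tone flowing into the recorded
// audio stream so the timeline keeps advancing on Android Chrome.
const audioContext = new AudioContext();
const dest = audioContext.createMediaStreamDestination(); // this stream feeds the MediaRecorder

const osc = audioContext.createOscillator();
osc.type = 'sine';
osc.frequency.value = 20000;   // above the audible range for most listeners

const gain = audioContext.createGain();
gain.gain.value = 0.001;       // effectively silent even if it were audible

osc.connect(gain);
gain.connect(dest);
osc.start();                   // leave it running for the whole recording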
An RTP endpoint that is emitting multiple related RTP streams that require synchronization at the other endpoint(s) MUST use the same RTCP CNAME for all streams that are to be synchronized. This requires a short-term persistent RTCP CNAME that is common across several RTP streams, and potentially across several related RTP sessions. A common example of such use occurs when lip-syncing audio and video streams in a multimedia session, where a single participant has to use the same RTCP CNAME for its audio RTP session and for its video RTP session. Another example might be to synchronize the layers of a layered audio codec, where the same RTCP CNAME has to be used for each layer.
https://datatracker.ietf.org/doc/html/rfc6222#page-2
There is a bug in Chrome that plays buffered media-stream audio at 44,100 Hz even when it is encoded at 48,000 Hz (which leads to gaps and video desync). All other browsers seem to play it fine. You can either switch to a codec that supports 44.1 kHz encoding, or play a file from a web link as the source (that way Chrome plays it correctly).

setSinkId changes multiple audio outputs

Here is the problem:
First I enumerate all the available devices into select elements:
navigator.mediaDevices.enumerateDevices()
When I change one output, the audio plays on the device that I chose:
HTMLMediaElement.setSinkId(deviceId)
Afterwards, if I play another audio element and change its output device (setSinkId), the first one also switches to the last deviceId, so both sounds end up on the same device.
Do I need the latest adapter.js version to handle this properly?
********* EDITED **********
Following the comment above, I tried the Web Audio API, but without success. With getUserMedia everything is fine.
navigator.getUserMedia({ audio: true, video: false },
    function (mediaStream) {
        // Create an audio context for the audio
        var ac = new (window.AudioContext || window.webkitAudioContext)();
        // Create a clone of the stream, otherwise the id of every stream is "default"
        //var streamClone = stream.clone();
        var ss = ac.createMediaStreamSource(mediaStream);
        // Create a destination
        var sd = ac.createMediaStreamDestination();
        ss.connect(sd);
        element.srcObject = sd.stream;
        // Play the sound
        element.play();
        element.setSinkId(deviceId).then(function () {
            console.log('Set deviceId(' + deviceId + ') in the selected audio element');
        });
    },
    function (error) {
        console.log(error);
    }
);
But using my remote stream, I cannot get any sound.
var ac = new (window.AudioContext || window.webkitAudioContext)();
// Create a clone of the stream, otherwise the id of every stream is "default"
var streamClone = stream.clone();
var ss = ac.createMediaStreamSource(stream);
// Create a destination
var sd = ac.createMediaStreamDestination();
ss.connect(sd);
// element is my HTMLMediaElement
element.srcObject = sd.stream;
// Play the sound
element.play();
element.setSinkId(deviceId).then(function () {
    console.log('Set deviceId(' + deviceId + ') in the selected audio element');
});
This is most likely caused by how Chrome renders audio. See here for a description, which also suggests using Web Audio to work around the problem.
adapter.js cannot fix this.
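For reference, a hedged sketch of that Web Audio workaround for a remote stream: give each stream its own source-to-destination chain and its own audio element, and call setSinkId only on that element. The muted dummy element is a precaution rather than guaranteed API behaviour: on some Chrome versions a remote WebRTC stream stays silent in the Web Audio graph until it is also attached to a media element.

// Hedged sketch: route one remote MediaStream to a chosen output device via Web Audio.
function playRemoteStreamOn(remoteStream, deviceId) {
    var ac = new (window.AudioContext || window.webkitAudioContext)();

    // Precaution for some Chrome versions: attach the remote stream to a muted
    // element so the Web Audio graph actually receives its audio.
    var dummy = new Audio();
    dummy.muted = true;
    dummy.srcObject = remoteStream;

    // Per-stream graph: source -> destination, so each stream gets its own output element.
    var source = ac.createMediaStreamSource(remoteStream);
    var dest = ac.createMediaStreamDestination();
    source.connect(dest);

    var element = new Audio();
    element.srcObject = dest.stream;
    return element.setSinkId(deviceId).then(function () {
        return element.play();
    });
}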

WebRTC: Switch from Video Sharing to Screen sharing during call

Initially, I had two different webpages:
one to do video calling, and
the other to do screen sharing.
Now, I want to do both of them in one page.
Here is the scenario:
During a live call, a user wants to stop sharing his/her video and start sharing the screen.
Afterwards, he/she wishes to turn off screen sharing and start video sharing again.
For clarity, here are some questions I want to ask:
On Caller Side:
1) How can I change my local stream from video to screen and vice versa?
2) Once it is done, how can I assign it to the local video element?
On Callee Side:
1) How do I handle it if the current stream I am receiving changes from video to screen?
2) How do I handle it if the stream I am receiving has stopped? I mean, now I can receive neither video nor screen (just audio).
Kindly help me in this regard. If there is any open-source code available, kindly share the links too.
Just for your reference, I was trying to handle it using the following code (I know this is naive and won't work):
function handleUserMedia(newStream) {
    var localvideo = document.getElementById("localvideo");
    localvideo.src = URL.createObjectURL(newStream);
    localStream = newStream;
    sendMessage('got user media');
    if (isInitiator) {
        maybeStart();
    }
}

function handleUserMediaError(error) {
    console.log(error);
}

var video_constraints = {video: true, audio: true};
var screen_constraints = {video: { mandatory: { chromeMediaSource: 'screen' } }};

getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
//getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);

$scope.btnLabel = 'Share Screen';
$scope.toggleSelected = function () {
    $scope.selected = !$scope.selected;
    if ($scope.selected) {
        getUserMedia(screen_constraints, handleUserMedia, handleUserMediaError);
        $scope.btnLabel = 'Share Video';
    } else {
        getUserMedia(video_constraints, handleUserMedia, handleUserMediaError);
        $scope.btnLabel = 'Share Screen';
    }
};
Check this demo:
https://www.webrtc-experiment.com/demos/switch-streams.html
and the relevant tutorial:
https://www.webrtc-experiment.com/docs/how-to-switch-streams.html
Simply renegotiate the peer connections on both users' sides!
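As a side note, in current browsers the camera-to-screen switch can also be done without renegotiation by swapping the outgoing track with RTCRtpSender.replaceTrack(). A rough sketch, assuming an already established RTCPeerConnection and a local preview element (both variable names are illustrative):

// Sketch: swap the outgoing camera track for a screen-capture track and back,
// on an existing RTCPeerConnection, without renegotiating.
async function shareScreen(peerConnection, localVideoElement) {
    const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
    const screenTrack = screenStream.getVideoTracks()[0];

    // Find the sender currently transmitting video and replace its track.
    const sender = peerConnection.getSenders().find(s => s.track && s.track.kind === 'video');
    await sender.replaceTrack(screenTrack);

    // Update the local preview as well.
    localVideoElement.srcObject = screenStream;
}

async function backToCamera(peerConnection, localVideoElement) {
    const camStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false });
    const camTrack = camStream.getVideoTracks()[0];
    const sender = peerConnection.getSenders().find(s => s.track && s.track.kind === 'video');
    await sender.replaceTrack(camTrack);
    localVideoElement.srcObject = camStream;
}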