I am trying to implement video chat in my application with WebRTC.
I am attaching the stream like this:
navigator.getUserMedia(
    {
        // Permissions to request
        video: true,
        audio: true
    },
    function (stream) {
        // success: attach the stream locally and send it to the remote peer
    },
    function (err) {
        // error: permission denied or no capture device
    }
);
I am passing that stream to the remote client via WebRTC.
I am able to see both videos on my screen (mine as well as the client's).
The issue is that I am also getting my own voice in the stream, which I don't want; I want only the other party's audio.
Can you let me know what the issue might be?
Did you add the muted attribute to your local video tag, as follows?
<video muted ... >
Note that muted is a boolean attribute and silences local playback only; the outgoing stream still carries your audio.
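If you attach the stream in JavaScript instead, the equivalent is setting the muted property before playback (the element id here is a placeholder):

var localVideo = document.getElementById('localVideo'); // your local <video> element
localVideo.muted = true;       // silences local playback only; the outgoing stream is unaffected
localVideo.srcObject = stream; // srcObject in current browsers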
Try setting the echoCancellation flag to true in your constraints.
From the W3C Media Capture and Streams spec, section 4.3.5 MediaTrackSupportedConstraints:
When one or more audio streams is being played in the processes of
various microphones, it is often desirable to attempt to remove the
sound being played from the input signals recorded by the microphones.
This is referred to as echo cancellation. There are cases where it is
not needed and it is desirable to turn it off so that no audio
artifacts are introduced. This allows applications to control this
behavior.
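As a minimal sketch, the flag turns the audio constraint from a bare boolean into a constraints object:

navigator.mediaDevices.getUserMedia({
    video: true,
    audio: { echoCancellation: true } // ask the browser to remove played-back sound from the mic input
}).then(function (stream) {
    // attach and send the stream as before
});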
I'm trying to build a web app where there is a broadcaster of a camera stream and viewers who can watch and control the stream. Is it possible, using WebRTC, for a viewer to control the constraints of the camera (exposure, brightness, etc.) being used to broadcast the stream, and possibly to pause, rewind, and record footage? I wanted to know before deciding to use WebRTC to accomplish this task. Based on my reading of the WebRTC guide and example pages, I think recording is possible, but I wasn't sure about a remote peer connection changing the local peer connection's settings or vice versa.
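For the constraint-control part, one plausible sketch is to relay control messages over an RTCDataChannel and have the broadcaster call applyConstraints on its local camera track. The constraint names below come from the Image Capture extensions and are assumptions (support varies by browser and camera), as are pc and localStream:

// Broadcaster side: apply viewer-requested settings to the camera track
var control = pc.createDataChannel('control'); // pc is the broadcaster's RTCPeerConnection
control.onmessage = function (e) {
    var msg = JSON.parse(e.data); // e.g. { exposureCompensation: 1 }
    var videoTrack = localStream.getVideoTracks()[0];
    videoTrack.applyConstraints({ advanced: [msg] });
};
// Viewer side: the channel arrives via pc.ondatachannel, then
// channel.send(JSON.stringify({ exposureCompensation: 1 }));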
Experts! Issue: we have equipment which can receive a voice stream via SIP. We can use a standard application to do this (and it works), but we want to send the voice stream from a browser (i.e. Chrome).
The clients and the "server" (meaning the equipment) are on our local network.
I've discovered WebRTC and tried to get a MediaStream from Chrome.
My code:
var constraints = { audio: true };
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia(constraints)
        .then(function (stream) {
            alert(stream);
        })
        .catch(function (err) {
            alert(err);
        });
} else {
    alert('getUserMedia is not supported in this browser.');
}
But what should I do to send the voice stream to the equipment?
I know the connection string for the equipment (e.g. sip:192.168.22.123:5060).
Thanks
You need to have a signaling server which can exchange an offer and answer, as well as ICE candidates. A SIP INVITE can include SDP, which can be provided to the setRemoteDescription method of an RTCPeerConnection object in the browser. Then create an answer and send it back as a SIP 200. I recommend doing some reading about the basics of WebRTC before you post again; so far you have only shown effort in capturing a media stream from the browser, which is not actually part of WebRTC itself, just often used in conjunction with it. https://www.oreilly.com/library/view/real-time-communication-with/9781449371869/ch01.html
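A rough sketch of the browser side, assuming the SDP from the INVITE has already reached the page as inviteSdp and sendSip200 is a hypothetical helper on your signaling channel (in practice a SIP/WebRTC gateway is usually needed, since most SIP equipment does not speak ICE or DTLS-SRTP):

var pc = new RTCPeerConnection();
navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function (stream) {
        stream.getTracks().forEach(function (track) {
            pc.addTrack(track, stream); // microphone audio goes to the equipment
        });
        // The offer SDP arrives inside the SIP INVITE
        return pc.setRemoteDescription({ type: 'offer', sdp: inviteSdp });
    })
    .then(function () { return pc.createAnswer(); })
    .then(function (answer) { return pc.setLocalDescription(answer); })
    .then(function () {
        sendSip200(pc.localDescription.sdp); // hypothetical: answer SDP goes back in the SIP 200 OK
    });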
I want to capture the screen (or a canvas) with RecordRTC and send it to a TokBox session as a stream, instead of a stream from the camera, microphone, or screen share.
What I want is for subscribers to get a stream that is the recording of the other peer's (the publisher's) canvas. Is there a way to do it?
Thanks
This blog post details how you can publish a custom MediaStream into an OpenTok Session: https://tokbox.com/blog/camera-filters-in-opentok-for-web/
It's not officially supported just yet; you have to do a bit of a hack.
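A sketch of the hack, assuming an SDK version where videoSource accepts a MediaStreamTrack (as in the blog post) and an already-connected session:

var canvas = document.querySelector('canvas'); // the canvas you want to publish
var canvasStream = canvas.captureStream(30);   // capture it at 30 fps
var publisher = OT.initPublisher('publisher-div', {
    videoSource: canvasStream.getVideoTracks()[0] // canvas track instead of the camera
});
session.publish(publisher);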
As the title shows, are there any methods I can use to play multiple videos continuously using a simple RTMP client (my RTMP server is Wowza)? Here is the way I'm thinking of:
when the first video is about to finish, open a new thread to send a new createStream command and a new play command, and put the incoming video RTMP packets into a buffer list; when the first video is finished, play the RTMP video from the buffer list.
Is this approach workable, or are there other recommended methods to achieve it? Any suggestion will be appreciated, thanks!
While the functionality is not built in, Wowza does have a module called StreamPublisher that allows you to implement a server-side playlist. The source code for the module is available on GitHub. A scheduled playlist of multiple VOD files is streamed as a live stream, similar to a TV broadcast.
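As a sketch, the module is driven by a schedule file on the server (streamschedule.smil per the module's documentation; stream names, file names, and times here are placeholders):

<smil>
    <head></head>
    <body>
        <stream name="Stream1"></stream>
        <playlist name="pl1" playOnStream="Stream1" repeat="true" scheduled="2016-01-01 00:00:00">
            <video src="mp4:video1.mp4" start="0" length="-1"/>
            <video src="mp4:video2.mp4" start="0" length="-1"/>
        </playlist>
    </body>
</smil>

Clients then simply play Stream1 continuously while the server switches VOD files underneath.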
So, I have what I think is a fairly interesting and, hopefully, not intractable problem. I have an audio/video getUserMedia stream that I am recording in Chrome. Individually, each track records perfectly well. However, when attempting to record both, one blocks the main thread, hosing the other. I know that there is a way to resolve this: Muaz Khan has a few demos that seem to work without blocking.
Audio is recorded via the Web Audio API. I am piping the audio track into a processor node which converts it to 16-bit mono and streams it to a node.js server.
Video is recorded via the usual canvas hack and Whammy.js. While recording, video frames are drawn to a canvas, and the resulting image data is pushed into a frames array which is later encoded into a webm container by Whammy and subsequently uploaded to the node.js server.
The two are then muxed together via ffmpeg server-side and the result stored.
The ideas I've had so far are:
Delegate one to a worker thread. Unfortunately, both the canvas and the stream are members of the DOM as far as I know.
Install a headless browser in node.js and establish an RTC connection with the client, thereby exposing the entire stream server-side.
The entire situation will eventually be solved by the Audio Worker implementation; the working group seems to have stalled public progress updates on that while things are shuffled around a bit, though.
Any suggestions for resolving the thread blocking?
Web Audio Connections:
var context = new AudioContext();
// Feed the getUserMedia stream into the audio graph
var source = context.createMediaStreamSource(stream);
// ScriptProcessor with a 2048-sample buffer, 1 input channel, 1 output channel
var node = context.createScriptProcessor(2048, 1, 1);
node.onaudioprocess = audioProcess;
source.connect(node);
node.connect(context.destination);
Web Audio Processing:
function audioProcess(e) {
    if (!recording.audio) return;
    // Mono: grab the left channel, convert float samples to 16-bit PCM, and stream
    var leftChannel = e.inputBuffer.getChannelData(0);
    Socket.emit('record-audio', convertFloat32ToInt16(leftChannel));
}
Video Frame Buffering:
function drawFrame() {
    if (recording.video) {
        // Paint the current video frame onto the recording canvas
        players.canvas.context.fillRect(0, 0, players.video.width, players.video.height);
        players.canvas.context.drawImage(players.video.element, 0, 0, players.video.width, players.video.height);
        // Buffer the frame for later webm encoding by Whammy
        frames.push({
            duration: 100,
            image: players.canvas.element.toDataURL('image/webp')
        });
        lastTime = new Date().getTime();
        requestAnimationFrame(drawFrame);
    } else {
        requestAnimationFrame(getBlob);
    }
}
Update: I've since managed to stop the two operations from completely blocking one another, but they still interfere enough to distort my audio.
There are a few key things that allow for successful getUserMedia recording in Chrome at the moment, drawn from the helpful comments attached to the original question and my own experience.
When harvesting data from the recording canvas, encode as jpeg. I had been attempting webp to satisfy the requirements of Whammy.js, but generating a webp data URI is apparently a cycle hog. (See the sketch after these tips.)
Delegate as many of the non-DOM operations as possible to worker threads. This is especially true of any streaming/upload operations (e.g., audio sample streaming via WebSockets).
Avoid requestAnimationFrame as a means of driving the recording-canvas drawing. It is resource intensive and, as aldel has pointed out, can fail if the user switches tabs. Using setInterval is much more efficient and reliable, and it also allows better framerate control.
For Chrome at least, avoid client-side AV encoding for the time being. Stream audio samples and video frames to the server for processing. While client-side AV encoding libraries are very cool, they simply don't seem efficient enough for production quite yet.
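A minimal sketch combining the jpeg and setInterval tips, reusing the names from the question (players, frames, recording):

var captureInterval = setInterval(function () {
    if (!recording.video) {
        clearInterval(captureInterval); // stop sampling once recording ends
        return;
    }
    players.canvas.context.drawImage(players.video.element, 0, 0,
        players.video.width, players.video.height);
    frames.push({
        duration: 100, // ms per frame, i.e. 10 fps
        image: players.canvas.element.toDataURL('image/jpeg', 0.7) // jpeg: far cheaper than webp
    });
}, 100);

Note that Whammy itself requires webp frames, so jpeg capture pairs with the previous tip: ship the frames to the server and encode there.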
Also, for Node.js ffmpeg automation, I highly recommend fluent-ffmpeg. Special thanks to Benjamin Trent for some practical examples.
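For the muxing step, a sketch with fluent-ffmpeg (file names are hypothetical):

var ffmpeg = require('fluent-ffmpeg');

ffmpeg()
    .input('recording-video.webm')
    .input('recording-audio.wav')
    .videoCodec('copy')      // keep the video track as-is
    .audioCodec('libvorbis') // transcode the PCM audio to fit the webm container
    .on('end', function () { console.log('mux complete'); })
    .on('error', function (err) { console.error(err); })
    .save('recording-muxed.webm');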
@aldel is right. Increasing the bufferSize value fixes it, e.g. bufferSize = 16384.
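With the question's audio graph, that is just a larger first argument to createScriptProcessor:

var node = context.createScriptProcessor(16384, 1, 1); // bigger buffer, fewer onaudioprocess callbacks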
Try this demo in Chrome and record audio+video. You'll hear clear recorded WAV in parallel with 720p video frames.
BTW, I agree with jesup that MediaRecorder solutions should be preferred.
The Chromium guys are very close, and hopefully M47/48 will bring a MediaRecorder implementation, at least for video (VP8) recording!
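For reference, a minimal sketch of the shape a MediaRecorder solution takes (API as specified; availability depends on browser version):

var recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
var chunks = [];
recorder.ondataavailable = function (e) { chunks.push(e.data); };
recorder.onstop = function () {
    var blob = new Blob(chunks, { type: 'video/webm' });
    // upload the blob or hand it to a download link
};
recorder.start(1000); // emit a chunk roughly every second
// later: recorder.stop();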
There is a Chrome-based alternative to Whammy.js as well:
https://github.com/streamproc/MediaStreamRecorder/issues/43