Experts! Issue: we have equipment which can receive a voice stream via SIP. We can use a standard application to do this (and it works), but we want to send the voice stream from a browser (i.e. Chrome).
The clients and the "server" (meaning the equipment) are in our local network.
I've discovered WebRTC and tried to get a MediaStream from Chrome.
My code:
var constraints = { audio: true };
if (navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia(constraints)
    .then(function (stream) {
      alert(stream);
    })
    .catch(function (err) {
      alert(err);
    });
} else {
  alert('getUserMedia is not supported in this browser.');
}
But what should I do to send the voice stream to the equipment?
I know the "connection string" of the equipment (e.g. sip:192.168.22.123:5060).
Thanks
You need to have a signaling server which can exchange an offer and answer, as well as ICE candidates. A SIP INVITE can include SDP, which can be provided to the setRemoteDescription method of an RTCPeerConnection object in the browser. Then create an answer and send it back as a SIP 200. I recommend doing some reading about the basics of WebRTC before you post again. You really have not shown any effort on the WebRTC side, only in capturing a media stream from the browser, which is actually not part of WebRTC itself, only often used in conjunction with it. https://www.oreilly.com/library/view/real-time-communication-with/9781449371869/ch01.html
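For illustration only, here is a rough sketch of the browser side of that exchange. The handleSipInvite function and sendAnswerToGateway callback are made-up names, and the gateway that relays SDP between SIP and the browser (for example something built on SIP.js or a SIP-over-WebSocket proxy) is assumed, not shown:

// Sketch only: assumes some gateway hands us the SDP from the SIP INVITE
// and relays our answer back in the SIP 200 OK.
var pc = new RTCPeerConnection();

async function handleSipInvite(offerSdp, sendAnswerToGateway) {
  // Add the microphone so the answer contains an audio m-line
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  stream.getTracks().forEach(function (track) {
    pc.addTrack(track, stream);
  });

  // The SDP from the INVITE becomes the remote description
  await pc.setRemoteDescription({ type: 'offer', sdp: offerSdp });

  // Create an answer; the gateway sends it back in the SIP 200
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  sendAnswerToGateway(pc.localDescription.sdp);
}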
Related
I am new to WebRTC and I have a strange issue: when I worked one-to-one, the onaddstream event fired and I got its response, but after a 3rd person joined the room the onaddstream event stopped firing. Can anyone please help me resolve this issue? I have added my whole code below; can anyone please review it and help me get the event for all the remote users?
var pc = new RTCPeerConnection(configuration);

// Fires when a remote stream is added (note: onaddstream/addStream are the
// legacy APIs; newer code uses ontrack/addTrack)
pc.onaddstream = (remoteaddstream) => {
  console.log(remoteaddstream);
};

navigator.mediaDevices.getUserMedia({
  audio: true,
  video: true,
}).then(stream => {
  var localstreamid = stream.id;
  console.log("stream id :" + localstreamid);
  pc.addStream(stream);
}, function (e) {
  console.log(e);
});
You need multiple peer connections to connect more than two parties, or alternatively you can make all the parties connect to a server that forwards the data (see the sketch after the topology summary below).
From https://medium.com/@khan_honney/webrtc-servers-and-multi-party-communication-in-webrtc-6bf3870b15eb
Mesh Topology
Mesh is the simplest topology for a multiparty application. In this topology, every participant sends its media to, and receives media from, all other participants. We say it is the simplest because it is the most straightforward method.
Mixing Topology and MCU
Mixing is another topology where each participant sends its media to a central server and receives media from the central server. This media may contain some or all of the other participants' media.
Routing Topology and SFU
Routing is a multiparty topology where each participant sends its media to a central server and receives all the others' media from the central server.
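To make the first option (mesh) concrete, here is a minimal sketch: one RTCPeerConnection per remote user, keyed by an id from your signaling layer. The attachRemoteVideo helper and the signaling itself are hypothetical and not shown:

var peers = {};

function getOrCreatePeer(userId, localStream) {
  if (!peers[userId]) {
    var pc = new RTCPeerConnection(configuration);
    // Send our media to this particular peer
    localStream.getTracks().forEach(function (t) {
      pc.addTrack(t, localStream);
    });
    // Receive this particular peer's media (ontrack is the modern
    // replacement for onaddstream)
    pc.ontrack = function (event) {
      attachRemoteVideo(userId, event.streams[0]); // hypothetical helper
    };
    peers[userId] = pc;
  }
  return peers[userId];
}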
I have created a UDP server and have tested it using the Microsoft UDP tool; it receives a UDP message and executes accordingly. I am only trying to send a "1" or a "2", but I need to send this to several UDP servers nearly simultaneously (which is why I cannot wait for a response). The Microsoft UDP tool has entry fields for IP address and port, then a connect button; I type a message and hit send. Pretty simple and extremely fast.
I can find several examples using react-native-udp for receiving, but I cannot find a simple example where someone sends something as simple as a "2" from React Native to a listening UDP server.
My goal is to cycle through 10 IP address / port combinations, sending a "1" or "2" depending on the circumstance.
I have accomplished this using an HTTP server, but it is much too slow to be useful.
Is this doable using react-native-udp? Will this work for both Android and iOS?
I made it work with react-native-udp. Create a socket and send the UDP request. Make sure you have something like 200-500 ms between requests or they may fail.
Docs: https://www.npmjs.com/package/react-native-udp
An extra tip: download one of the many UDP sender/receiver apps on the Play Store and/or App Store to help you debug. Good luck!
import dgram from 'react-native-udp'

const socket = dgram.createSocket('udp4')
socket.bind(12345)

// Fill these in with your receiver's address (placeholders here)
const remoteHost = '192.168.1.10'
const remotePort = 5000

socket.once('listening', function () {
  socket.send('Hello World!', undefined, undefined, remotePort, remoteHost, function (err) {
    if (err) throw err
    console.log('Message sent!')
  })
})

socket.on('message', function (msg, rinfo) {
  console.log('Message received', msg)
})
I want to capture the screen (or a canvas) with RecordRTC and send it to a TokBox session as a stream, instead of a stream from the camera, microphone or screen share.
What I want is for subscribers to get a stream that is the recording of a canvas of the other peer (the publisher). Is there a way to do it?
Thanks
This blog post details how you can publish a custom MediaStream into an OpenTok Session: https://tokbox.com/blog/camera-filters-in-opentok-for-web/
It's not officially supported just yet; you have to do a bit of a hack.
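In outline, the hack looks roughly like this: capture the canvas as a MediaStream and hand its video track to OT.initPublisher. Treat this as a sketch; canvas.captureStream support and the videoSource-as-track option depend on the browser and the OpenTok SDK version, and the session object is assumed to be already connected:

// Capture the canvas as a MediaStream and publish its video track
var canvas = document.querySelector('canvas');
var canvasStream = canvas.captureStream(30); // 30 fps

var publisher = OT.initPublisher('publisher-element', {
  videoSource: canvasStream.getVideoTracks()[0],
  audioSource: null // or a microphone track if audio is wanted
});

session.publish(publisher);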
I am trying to implement video chat in my application with WebRTC.
I am attaching the stream via this:
getUserMedia(
  {
    // Permissions to request
    video: true,
    audio: true
  },
  function (stream) {
    // ...attach the stream locally and pass it to the peer connection
  },
  function (err) {
    console.log(err);
  }
);
I am passing that stream to the remote client via WebRTC.
I am able to see both videos on my screen (mine as well as the client's).
The issue I am getting is that I hear my own voice in the stream too, which I don't want. I want the audio of the other party only.
Can you let me know what the issue might be?
Did you add the "muted" attribute to your local video tag, as follows:
<video muted="true" ... >
Try setting the echoCancellation flag to true in your constraints.
From the W3C Media Capture and Streams specification, §4.3.5 MediaTrackSupportedConstraints:
When one or more audio streams is being played in the processes of
various microphones, it is often desirable to attempt to remove the
sound being played from the input signals recorded by the microphones.
This is referred to as echo cancellation. There are cases where it is
not needed and it is desirable to turn it off so that no audio
artifacts are introduced. This allows applications to control this
behavior.
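For example, a constraints object that requests echo cancellation explicitly (a sketch; whether the device honors it is up to the browser):

navigator.mediaDevices.getUserMedia({
  audio: { echoCancellation: true },
  video: true
}).then(function (stream) {
  // use the stream as before
});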
I am trying to create an application which requires a user to send his local video stream to multiple peers using WebRTC. As far as I've seen, I am responsible for managing several PeerConnection objects because a PeerConnection can only connect to a single peer at a time. What I want to know is whether it is possible to create a connection and send my local stream to a peer without him sending his local stream to me.
Simply don't call peer.addStream for broadcast viewers, to make it one-way streaming!
You can disable audio/video media lines in the session description by setting OfferToReceiveAudio and OfferToReceiveVideo to false.
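As a sketch, the modern camelCase spelling of those flags goes in the RTCOfferOptions passed to createOffer when building the broadcaster's send-only offer; the signaling that delivers the offer is assumed, not shown:

async function createSendOnlyOffer(localStream) {
  const pc = new RTCPeerConnection();
  localStream.getTracks().forEach(function (t) {
    pc.addTrack(t, localStream);
  });
  const offer = await pc.createOffer({
    offerToReceiveAudio: false,
    offerToReceiveVideo: false
  });
  await pc.setLocalDescription(offer);
  return pc; // send offer.sdp to the viewer via your signaling channel
}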
A 3-way handshake isn't drafted by the RTCWeb IETF WG yet, because the browser needs to take care of a lot of stuff simultaneously, like multiple tracks and multiple media lines, where each m-line should point to a unique peer.