I need to add my tracks and send them to the other peers after the connection has been established. I've followed the MDN example:
pc.onnegotiationneeded = async () => {
try {
makingOffer = true;
await pc.setLocalDescription();
signaler.send({ description: pc.localDescription });
} catch(err) {
console.error(err);
} finally {
makingOffer = false;
}
};
onnegotiationneeded fires when we call pc.addTrack, so the offer -> answer -> ICE process runs again.
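For reference, the receiving side of the MDN perfect-negotiation pattern pairs with that handler roughly as follows (a sketch; polite, signaler, pc and makingOffer are assumed to exist as in the MDN example, and the message shape mirrors the send above):

// Sketch of the matching receive handler from the MDN perfect-negotiation
// pattern; the "polite" peer rolls back its own offer on glare.
let ignoreOffer = false;

signaler.onmessage = async ({ description, candidate }) => {
  try {
    if (description) {
      // Glare: an offer arrives while we are in the middle of making our own.
      const offerCollision =
        description.type === "offer" &&
        (makingOffer || pc.signalingState !== "stable");

      ignoreOffer = !polite && offerCollision;
      if (ignoreOffer) return;

      await pc.setRemoteDescription(description);
      if (description.type === "offer") {
        await pc.setLocalDescription();
        signaler.send({ description: pc.localDescription });
      }
    } else if (candidate) {
      try {
        await pc.addIceCandidate(candidate);
      } catch (err) {
        if (!ignoreOffer) throw err;
      }
    }
  } catch (err) {
    console.error(err);
  }
};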
So I have a function that calls getUserMedia and sets the local video. I've added a callback that applies addTrack to my peers:
const handleActiveVideo = async (cb) => {
const devices = await getVideoDevices();
const localStream = await createLocalStream(devices[0].deviceId);
setLocalVideoEl(localStream, devices[0].deviceId);
cb((pc) => {
localStream.getTracks().forEach((track) => {
pc.connection.addTrack(track, localStream);
});
});
};
But if I do that on an established connection, adding my local stream works fine with one peer; with my second peer, however, I get this error on Firefox:
Remote description indicates ICE restart but offer did not request ICE
restart (new remote description changes either the ice-ufrag or
ice-pwd)
and this on Chrome:
DOMException: Failed to execute 'setRemoteDescription' on
'RTCPeerConnection': Failed to set remote answer sdp: Failed to apply
the description for m= section with mid='0': Failed to set SSL role
for the transport.
To recapitulate:
connect my 2 peers without any tracks on either side
start the video on Peer1: OK, I can see the video on Peer2
start the video on Peer2: error on Peer2 in the answer, at
setRemoteDescription(description);
Related
I installed a coturn server on an Ubuntu VPS for testing purposes. The Trickle ICE test of the coturn server looks great and everything is fine, but when I try to make a connection it returns a null iceCandidate when the peers are on different networks; it works fine when the peers are on the same local network.
These are my Trickle ICE test results:
This is what I get when console logging the iceCandidate:
And this is my iceServers configuration:
const iceServers = {
iceServer: [
{
urls: "stun:stun.biodietfood.com:3478",
},
{
urls: "turn:turn.biodietfood.com:3478?transport=tcp",
username: "myuser",
credential: "mypassword",
},
],
};
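For reference, the standard RTCPeerConnection configuration uses an iceServers array (note the key name) and can be passed straight to the constructor; a minimal sketch with the same servers:

// Sketch: the configuration object the RTCPeerConnection constructor expects.
const configuration = {
  iceServers: [
    { urls: "stun:stun.biodietfood.com:3478" },
    {
      urls: "turn:turn.biodietfood.com:3478?transport=tcp",
      username: "myuser",
      credential: "mypassword",
    },
  ],
};

const pc = new RTCPeerConnection(configuration);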
And this is my code for making the peer connection and sending an offer:
rtcPeerConnection = new RTCPeerConnection(); //create peer connection
rtcPeerConnection.setConfiguration(iceServers);
rtcPeerConnection.onicecandidate = onIceCandidate; //generate ice candidate
rtcPeerConnection.onicecandidateerror = iceCandidateError;
rtcPeerConnection.ontrack = onAddStream; // generate the remote stream to send to other user
rtcPeerConnection.addTrack(localStream.getTracks()[0], localStream); // adding video
rtcPeerConnection.addTrack(localStream.getTracks()[1], localStream); // adding audio
rtcPeerConnection // creating offer
.createOffer({ iceRestart: true })
.then((sessionDescription) => {
rtcPeerConnection.setLocalDescription(sessionDescription);
socket.emit("offer", {
type: "offer",
sdp: sessionDescription,
room: roomNumber,
});
})
.catch((err) => {
console.log(err);
});
The same code, with minor modifications, is used for creating the answer.
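For context, the answering side would roughly mirror it like this (a sketch; the shape of the incoming socket event and the variable names are assumptions):

// Sketch of the answer side: apply the received offer, create an answer,
// set it locally, then send it back over the same signaling channel.
socket.on("offer", async (event) => {
  await rtcPeerConnection.setRemoteDescription(event.sdp);
  const answer = await rtcPeerConnection.createAnswer();
  await rtcPeerConnection.setLocalDescription(answer);
  socket.emit("answer", { type: "answer", sdp: answer, room: roomNumber });
});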
How can I solve the problem?
Does an Azure VM throttle SignalR messages being sent? Running locally, the client receives every message, but when hosted on the VM, clients only receive messages about 30% of the time.
This question is about the Microsoft.AspNet.SignalR NuGet package for SignalR, with a .NET Core 3.1 API back end and a VueJS SPA front end, all hosted on an Azure Windows Server 2016 VM using IIS 10.
On my local machine, SignalR works perfectly: messages are sent and received every time, instantaneously. Then I publish to the VM, and when (if) the WebSocket connection succeeds, the client only receives messages for the first 5 or so seconds at most, then stops receiving them.
I've set up a dummy page that sends a message to my API, which then sends that message back down to all connections. It's a simple input form and a "Send" button. After the first few seconds of submitting and receiving messages, I have to rapidly submit the form (even hold down the Enter key to submit) and send what should be a constant stream of messages back, until, lo and behold, several seconds later messages begin to be received again, but only for a few seconds.
I've actually held down the submit button for a constant stream and timed how long it takes to start receiving messages, then how long it takes to stop. My small sample shows ~30 messages get received, then the next ~70 are skipped (not received) until another ~30 come in. The pattern persists: no messages for several seconds, then ~30 messages over a few seconds.
Production Environment Continuously Sending 1000 messages:
Same Test in Local Environment Sending 1000 messages:
If I stop the test, no matter how long I wait, when I hold the Enter key (repeated submits) it takes a few seconds to get back into the 3 second/2 second pattern. It's almost as if I need to keep pressuring the server to send the messages back to the client, otherwise the server gets lazy and doesn't do any work at all. If I slow-play the submits, it's rare that the client receives any messages at all. I really have to send messages persistently and quickly in order to start receiving them again.
FYI, during the time that I am holding down submit, or rapidly submitting, I receive no errors for API calls (initiating messages) and no errors for Socket Connection or receiving messages. All the while, when client side SignalR log level is set to Trace, I see ping requests being sent and received successfully every 10 seconds.
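For reference, that Trace level is configured on the JavaScript client through the connection builder; a minimal sketch (the hub URL is taken from the /v1/stream path used in the JWT handler below, and the token factory is assumed):

// Sketch: verbose client-side logging, which is what surfaces the ping traffic.
const traceConnection = new signalR.HubConnectionBuilder()
  .configureLogging(signalR.LogLevel.Trace)
  .withUrl("/v1/stream", { accessTokenFactory: () => token })
  .build();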
Here is the Socket Config in .Net:
services.AddSignalR()
.AddHubOptions<StreamHub>(hubOptions => {
hubOptions.EnableDetailedErrors = true;
hubOptions.ClientTimeoutInterval = TimeSpan.FromHours(24);
hubOptions.HandshakeTimeout = TimeSpan.FromHours(24);
hubOptions.KeepAliveInterval = TimeSpan.FromSeconds(15);
hubOptions.MaximumReceiveMessageSize = 1000000;
})
.AddJsonProtocol(options =>
{
options.PayloadSerializerOptions.PropertyNamingPolicy = null;
});
// Adding Authentication
services.AddAuthentication(options =>
{
options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
options.DefaultScheme = JwtBearerDefaults.AuthenticationScheme;
})
// Adding Jwt Bearer
.AddJwtBearer(options =>
{
options.SaveToken = true;
options.RequireHttpsMetadata = false;
options.TokenValidationParameters = new TokenValidationParameters()
{
ValidateIssuer = true,
ValidateAudience = true,
ValidAudience = Configuration["JWT:ValidAudience"],
ValidIssuer = Configuration["JWT:ValidIssuer"],
IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(Configuration["JWT:Secret"])),
ClockSkew = TimeSpan.Zero
};
// Sending the access token in the query string is required due to
// a limitation in Browser APIs. We restrict it to only calls to the
// SignalR hub in this code.
// See https://learn.microsoft.com/aspnet/core/signalr/security#access-token-logging
// for more information about security considerations when using
// the query string to transmit the access token.
options.Events = new JwtBearerEvents
{
OnMessageReceived = context =>
{
var accessToken = context.Request.Query["access_token"];
// If the request is for our hub...
var path = context.HttpContext.Request.Path;
if (!string.IsNullOrEmpty(accessToken) && (path.StartsWithSegments("/v1/stream")))
{
// Read the token out of the query string
context.Token = accessToken;
}
return Task.CompletedTask;
}
};
});
I use this endpoint to send back messages:
[HttpPost]
[Route("bitcoin")]
public async Task<IActionResult> SendBitcoin([FromBody] BitCoin bitcoin)
{
await this._hubContext.Clients.All.SendAsync("BitCoin", bitcoin.message);
return Ok(bitcoin.message);
}
Here is the Socket Connection in JS and the button click to call message API:
this.connection = new signalR.HubConnectionBuilder()
.configureLogging(process.env.NODE_ENV.toLowerCase() == 'development' ? signalR.LogLevel.None : signalR.LogLevel.None)
.withUrl(process.env.VUE_APP_STREAM_ROOT, { accessTokenFactory: () => this.$store.state.auth.token })
.withAutomaticReconnect({
nextRetryDelayInMilliseconds: retryContext => {
if(retryContext.retryReason && retryContext.retryReason.statusCode == 401) {
return null
}
else if (retryContext.elapsedMilliseconds < 3600000) {
// If we've been reconnecting for less than 60 minutes so far,
// wait between 0 and 10 seconds before the next reconnect attempt.
return Math.random() * 10000;
} else {
// If we've been reconnecting for more than 60 minutes so far, stop reconnecting.
return null;
}
}
})
.build()
// connection timeout of 10 minutes
this.connection.serverTimeoutInMilliseconds = 1000 * 60 * 10
this.connection.reconnectedCallbacks.push(() => {
let alert = {
show: true,
text: 'Data connection re-established!',
variant: 'success',
isConnected: true,
}
this.$store.commit(CONNECTION_ALERT, alert)
setTimeout(() => {
this.$_closeConnectionAlert()
}, this.$_appMessageTimeout)
// this.joinStreamGroup('event-'+this.event.eventId)
})
this.connection.onreconnecting((err) => {
if(!!err) {
console.log('reconnecting:')
this.startStream()
}
})
this.connection.start()
.then((response) => {
this.startStream()
})
.catch((err) => {
});
startStream() {
// ---------
// Call client methods from hub
// ---------
if(this.connection.connectionState.toLowerCase() == 'connected') {
this.connection.methods.bitcoin = []
this.connection.on("BitCoin", (data) => {
console.log('messageReceived:', data)
})
}
}
buttonClick() {
this.$_apiCall({url: 'bitcoin', method: 'POST', data: {message:this.message}})
.then(response => {
// console.log('message', response.data)
})
}
For the case when the Socket Connection fails:
On page refresh, the WebSocket connection sometimes fails, but there are multiple, almost identical calls to the socket endpoint, where one returns a 404 and another returns a 200 result.
Failed Request
This is the request that failed; the only difference from the request that succeeded (below) is the content-length in the response headers (highlighted). The request headers are identical:
Successful request to Socket Endpoint
Identical Request Headers
What could be so different about the configuration on my local machine vs. the configuration on my Azure VM? Why would the client stop and start receiving messages like this? Might it be the configuration on my VM? Are the messages getting blocked somehow? I've exhausted myself trying to figure this out!
Update:
KeepAlive messages are being sent correctly, but the continuous stream of messages being sent (and expected to be received) only comes through periodically.
Here we see that the KeepAlive messages are being sent and received every 15 seconds, as expected.
I have a WebRTC implementation working, but it requires a setup function to be run on page load.
setup = () => {
peerConnection = new RTCPeerConnection(servers)
peerConnection.onicecandidate = onicecandidate
peerConnection.onconnectionstatechange = onconnectionstatechange(
peerConnection,
)
peerConnection.onnegotiationneeded = onnegotiationneeded
peerConnection.ontrack = function({streams: [stream]}) {
const remoteVideo = document.getElementById('remote-video')
if (remoteVideo) {
remoteVideo.srcObject = stream
}
}
mediaStream.getTracks().forEach(track => peerConnection.addTrack(track, mediaStream))
}
Only then can I make the offer:
makeOffer = () => {
peerConnection
.createOffer()
.then(offer => {
return peerConnection
.setLocalDescription(new RTCSessionDescription(offer))
.then(() => offer)
})
.then(offer => {
sendOffer(offer)
})
}
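For readability, the same flow can be written with async/await (an equivalent sketch, no behavioral change intended):

// Sketch: makeOffer with async/await; the plain offer object can be passed
// to setLocalDescription directly.
makeOffer = async () => {
  const offer = await peerConnection.createOffer();
  await peerConnection.setLocalDescription(offer);
  sendOffer(offer);
};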
If I attempt to run the setup just prior to the makeOffer function, I get the error:
DOMException: Failed to execute 'setLocalDescription' on
'RTCPeerConnection': Failed to set local offer sdp: The order of
m-lines in subsequent offer doesn't match order from previous
offer/answer.
My assumption is that makeOffer is running before something in the setup has completed, but nothing in the setup appears to be asynchronous. What do I need to do to be able to set up the connection and send the offer all at once?
I'm trying to get a simple proof of concept WebRTC app going. I've got code that works on the local network, but not over the Internet.
I believe I've got a bug outlined here, but I can't figure it out.
server.js (expressjs)
require('dotenv').config();
const express = require('express');
const app = express();
const http = require('http').createServer(app);
const io = require('socket.io')(http);
app.use(express.static('public'));
io.on('connection', (socket) => {
socket.on('join', () => socket.broadcast.emit('join', socket.id));
socket.on('offer', (id, sdp) => socket.to(id).emit('offer', socket.id, sdp));
socket.on('answer', (id, sdp) => socket.to(id).emit('answer', socket.id, sdp));
socket.on('candidate', (id, candidate) => socket.to(id).emit('candidate', socket.id, candidate));
});
const port = process.env.PORT || 443;
http.listen(port, () => console.log("listening on :" + port));
client.js (built w/ webpack)
import io from 'socket.io-client';
console.log('getting video stream');
navigator.mediaDevices.getUserMedia({ video: true }).then(myStream => {
const socket = io.connect();
console.log('joining');
socket.emit('join');
let connections = {};
socket.on('join', async peer_id => {
const connection = newConnection(peer_id);
console.log('creating offer');
const sdp = await connection.createOffer();
console.log('setting local desc');
await connection.setLocalDescription(sdp);
console.log('sending offer');
socket.emit('offer', peer_id, connection.localDescription);
});
socket.on('offer', async (peer_id, remoteDescription) => {
const connection = newConnection(peer_id);
console.log('setting remote desc');
await connection.setRemoteDescription(remoteDescription);
console.log('creating answer');
const sdp = await connection.createAnswer();
console.log('setting local desc');
await connection.setLocalDescription(sdp);
console.log('sending answer');
socket.emit('answer', peer_id, connection.localDescription);
});
socket.on('answer', (peer_id, remoteDescription) => {
console.log('setting remote desc');
connections[peer_id].setRemoteDescription(remoteDescription);
});
socket.on('candidate', (peer_id, candidate) => {
console.log('adding candidate');
connections[peer_id].addIceCandidate(candidate);
});
function newConnection(peer_id) {
console.log('creating connection');
const connection = new RTCPeerConnection(
{ iceServers: [{ urls: ['stun:stun.l.google.com:19302'] }] }
);
connections[peer_id] = connection;
connection.onconnectionstatechange = () => console.log('connection state', connection.connectionState);
console.log('searching for candidates');
connection.onicecandidate = event => {
if (event.candidate) {
console.log('sending candidate');
socket.emit('candidate', peer_id, event.candidate);
} else {
console.log('all candidates found');
}
};
console.log('listening for tracks');
connection.ontrack = event => {
console.log('track received');
const stream = new MediaStream();
stream.addTrack(event.track);
document.getElementById('video').srcObject = stream;
};
for (const track of myStream.getTracks()) {
console.log('sending track');
connection.addTrack(track);
}
return connection;
}
});
Logs from local network peers (connected)
first peer
getting video stream
joining
creating connection
searching for candidates
listening for tracks
sending track
creating offer
setting local desc
sending offer
(2) sending candidate
setting remote desc
adding candidate
track received
connection state connecting
all candidates found
connection state connected
second peer
getting video stream
joining
creating connection
searching for candidates
listening for tracks
sending track
setting remote desc
track received
creating answer
setting local desc
sending answer
sending candidate
connection state connecting
all candidates found
connection state connected
(2) adding candidate
Logs from peers over the Internet (failed)
first peer
getting video stream
joining
creating connection
searching for candidates
listening for tracks
sending track
creating offer
setting local desc
sending offer
(3) sending candidate
all candidates found
setting remote desc
track received
connection state connecting
(5) adding candidate
connection state failed
second peer
getting video stream
joining
creating connection
searching for candidates
listening for tracks
sending track
setting remote desc
track received
creating answer
setting local desc
sending answer
(5) sending candidate
(3) adding candidate
connection state connecting
connection state failed
all candidates found
WebRTC isn't going to work for all connection pairings unless you use TURN.
You will see some peers able to connect, though; you can read about all the cases that matter in my answer here.
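For illustration, adding a TURN server to the configuration in newConnection would look roughly like this (hostname and credentials are placeholders):

// Sketch: a TURN entry alongside STUN, so relayed candidates are available
// when no direct path between the peers can be established.
const connection = new RTCPeerConnection({
  iceServers: [
    { urls: ['stun:stun.l.google.com:19302'] },
    {
      urls: 'turn:turn.example.com:3478', // hypothetical TURN server
      username: 'user',                   // placeholder credentials
      credential: 'pass',
    },
  ],
});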
In my Node.js application, I am using socket.io for the socket connection.
I am configuring my server-side code like this, with the socket.io configuration in a separate file:
//socket_io.js
var socket_io = require('socket.io');
var io = socket_io();
var socketApi = {};
socketApi.io = io;
module.exports = socketApi;
Below is my server.js file, in which I attach socket.io to the server like this:
var socketApi = require('./server/socket_io');
// Create HTTP server.
const server = http.createServer(app);
// Attach Socket IO
var io = socketApi.io;
io.attach(server);
// Listen on provided port, on all network interfaces.
server.listen(port, () => console.log(`API running on localhost:${port}`));
Then I use socket.io in my game.js file to emit the updated user coins, like this:
//game.js
var socketIO = require('../socket_io');
function updateUserCoins(userBet) {
userId = mongoose.Types.ObjectId(userBet.user);
User.findUserWithId(userId).then((user) => {
user.coins = user.coins - userBet.betAmount;
user.save((err, updatedUser) => {
socketIO.io.sockets.emit('user coins', {
userCoins: updatedUser.coins,
});
});
})
}
Then on the client side, I do something like this:
socket.on('user coins', (data) => {
this.coins = data.userCoins;
});
But with the above implementation, updating the coins of any user updates every user's coins on the client side, since all the clients are listening for the same 'user coins' event.
To solve this, I know that I have to do something like this:
// sending to individual socketid (private message)
socketIO.io.sockets.to(<socketid>).emit('user coins', {
userCoins: updatedUser.coins,
});
but my concern is how I will get <socketid> with my current implementation.
You can get it by listening for the connection event on your socket.io server:
io.on('connection', function (socket) {
socket.emit('news', { hello: 'world' });
socket.on('my other event', function (data) {
console.log(data);
});
});
Here you have a socket object (from io.on('connection', function (socket) { ... })), and you can get its id with socket.id.
So you'll probably need to wrap your code with the connection listener.
Source of the example for the connection listener
Socket object description
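Putting that together with the question's updateUserCoins, a rough sketch (assuming the client identifies itself with a hypothetical 'register' event, and an in-memory map from user id to socket id):

// Sketch: remember each user's socket id on connection, then emit privately.
var socketApi = require('./socket_io');
var userSockets = {}; // userId -> socket.id (hypothetical in-memory map)

socketApi.io.on('connection', function (socket) {
  // Assumes the client emits its user id right after connecting.
  socket.on('register', function (userId) {
    userSockets[userId] = socket.id;
  });
  socket.on('disconnect', function () {
    Object.keys(userSockets).forEach(function (userId) {
      if (userSockets[userId] === socket.id) delete userSockets[userId];
    });
  });
});

// Later, inside updateUserCoins, target only that user's socket:
// socketApi.io.to(userSockets[userBet.user]).emit('user coins', {
//   userCoins: updatedUser.coins,
// });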