WebRTC ICE connection state changed to: failed

I downloaded the code of an open-source WebRTC demo application provided by the Google WebRTC team on GitHub:
https://github.com/webrtc/apprtc
The project runs with the Google Cloud SDK and Grunt. It runs on my localhost on port 8095 and is published with ngrok.
I installed it and it works great, but only on the internal network. I tried to add a TURN server, without success.
The documentation says to change the file constants.py, and I changed it to this:
import os

# Deprecated domains which we should redirect to REDIRECT_URL.
REDIRECT_DOMAINS = [
    'apprtc.appspot.com', 'apprtc.webrtc.org', 'www.appr.tc'
]
# URL which we should redirect to if matching in REDIRECT_DOMAINS.
REDIRECT_URL = 'https://appr.tc'

ROOM_MEMCACHE_EXPIRATION_SEC = 60 * 60 * 24
MEMCACHE_RETRY_LIMIT = 100

LOOPBACK_CLIENT_ID = 'LOOPBACK_CLIENT_ID'

ICE_SERVER_OVERRIDE = [
    {
        'url': 'stun:stun.l.google.com:19302'
    },
    {
        'url': 'turn:192.158.29.39:3478?transport=udp',
        'credential': 'JZEOEt2V3Qb0y27GRntt2u2PAYA=',
        'username': '28224511:1379330808'
    },
    {
        'url': 'turn:192.158.29.39:3478?transport=tcp',
        'credential': 'JZEOEt2V3Qb0y27GRntt2u2PAYA=',
        'username': '28224511:1379330808'
    }
]

ICE_SERVER_BASE_URL = 'https://networktraversal.googleapis.com'
ICE_SERVER_URL_TEMPLATE = '%s/v1alpha/iceconfig?key=%s'
ICE_SERVER_API_KEY = os.environ.get('ICE_SERVER_API_KEY')

# Dictionary keys in the collider instance info constant.
WSS_INSTANCE_HOST_KEY = 'host_port_pair'
WSS_INSTANCE_NAME_KEY = 'vm_name'
WSS_INSTANCE_ZONE_KEY = 'zone'
WSS_INSTANCES = [{
    WSS_INSTANCE_HOST_KEY: 'apprtc-ws.webrtc.org:443',
    WSS_INSTANCE_NAME_KEY: 'wsserver-std',
    WSS_INSTANCE_ZONE_KEY: 'us-central1-a'
}, {
    WSS_INSTANCE_HOST_KEY: 'apprtc-ws-2.webrtc.org:443',
    WSS_INSTANCE_NAME_KEY: 'wsserver-std-2',
    WSS_INSTANCE_ZONE_KEY: 'us-central1-f'
}]

WSS_HOST_PORT_PAIRS = [ins[WSS_INSTANCE_HOST_KEY] for ins in WSS_INSTANCES]

# memcache key for the active collider host.
WSS_HOST_ACTIVE_HOST_KEY = 'wss_host_active_host'

# Dictionary keys in the collider probing result.
WSS_HOST_IS_UP_KEY = 'is_up'
WSS_HOST_STATUS_CODE_KEY = 'status_code'
WSS_HOST_ERROR_MESSAGE_KEY = 'error_message'

RESPONSE_ERROR = 'ERROR'
RESPONSE_ROOM_FULL = 'FULL'
RESPONSE_UNKNOWN_ROOM = 'UNKNOWN_ROOM'
RESPONSE_UNKNOWN_CLIENT = 'UNKNOWN_CLIENT'
RESPONSE_DUPLICATE_CLIENT = 'DUPLICATE_CLIENT'
RESPONSE_SUCCESS = 'SUCCESS'
RESPONSE_INVALID_REQUEST = 'INVALID_REQUEST'

IS_DEV_SERVER = os.environ.get('APPLICATION_ID', '').startswith('dev')

BIGQUERY_URL = 'https://www.googleapis.com/auth/bigquery'
# Dataset used in production.
BIGQUERY_DATASET_PROD = 'prod'
# Dataset used when running locally.
BIGQUERY_DATASET_LOCAL = 'dev'
# BigQuery table within the dataset.
BIGQUERY_TABLE = 'analytics'
But I still can't connect between external networks; the console shows:
ICE connection state changed to: failed
This is my chrome://webrtc-internals output:
https://363b9c63.ngrok.io/r/004936951, { iceServers: [], iceTransportPolicy: all, bundlePolicy: max-bundle, rtcpMuxPolicy: require, iceCandidatePoolSize: 0 },
Time                     Event
13.2.2018, 21:08:08      addStream
13.2.2018, 21:08:08      createOffer
13.2.2018, 21:08:08      negotiationneeded
13.2.2018, 21:08:08      createOfferOnSuccess
13.2.2018, 21:08:08      setLocalDescription
13.2.2018, 21:08:08      signalingstatechange
13.2.2018, 21:08:08      setLocalDescriptionOnSuccess
13.2.2018, 21:08:08      icegatheringstatechange
13.2.2018, 21:08:08      icecandidate (host)   [x6]
13.2.2018, 21:08:09      icecandidate (host)   [x6]
13.2.2018, 21:08:09      icegatheringstatechange
13.2.2018, 21:09:20      setRemoteDescription
13.2.2018, 21:09:20      addIceCandidate (host)   [x2]
13.2.2018, 21:09:20      signalingstatechange
13.2.2018, 21:09:20      iceconnectionstatechange
13.2.2018, 21:09:20      onAddStream
13.2.2018, 21:09:20      setRemoteDescriptionOnSuccess
13.2.2018, 21:09:36      iceconnectionstatechange
Stats Tables
googTrack_4c6e7e30-f429-41bc-a283-ffb2e4a12383 (googTrack)
googTrack_3225dba0-c46b-4169-86fe-221b9464cc61 (googTrack)
googLibjingleSession_399041596204088920 (googLibjingleSession)
bweforvideo (VideoBwe)
googCertificate_1D:A9:01:31:DB:24:0E:B1:22:D8:A5:1F:44:9B:22:84:D8:25:60:DE:B2:10:B3:08:B6:D6:63:80:89:27:68:E5 (googCertificate)
Channel-audio-1 (googComponent)
ssrc_3643280748_send (ssrc)
ssrc_3936871711_send (ssrc)
googTrack_7eb768fd-dd5a-4c4c-ada8-b9e1d539d867 (googTrack)
googTrack_1ceb9e2e-7a66-4c6d-b49d-e0b969415118 (googTrack)
Conn-audio-1-0 (googCandidatePair)
Cand-QTIxLw5A (localcandidate)
Cand-40/xeX4y (remotecandidate)
Conn-audio-1-1 (googCandidatePair)
Cand-4ohU6a7F (remotecandidate)
Conn-audio-1-2 (googCandidatePair)
Cand-qUBezng5 (localcandidate)
Conn-audio-1-3 (googCandidatePair)
Conn-audio-1-4 (googCandidatePair)
Cand-tA5Vubib (localcandidate)
Conn-audio-1-5 (googCandidatePair)
ssrc_3046207950_recv (ssrc)
ssrc_3017098662_recv (ssrc)
Stats graphs for ssrc_3643280748_send (ssrc) (audio)
cname:t09jrVcXtatu5gPI
msid:450695b0-d46b-4566-8386-40c7339f109a 4c6e7e30-f429-41bc-a283-ffb2e4a12383
mslabel:450695b0-d46b-4566-8386-40c7339f109a
label:4c6e7e30-f429-41bc-a283-ffb2e4a12383
Stats graphs for ssrc_3936871711_send (ssrc) (video)
cname:t09jrVcXtatu5gPI
msid:450695b0-d46b-4566-8386-40c7339f109a 3225dba0-c46b-4169-86fe-221b9464cc61
mslabel:450695b0-d46b-4566-8386-40c7339f109a
label:3225dba0-c46b-4169-86fe-221b9464cc61
Stats graphs for ssrc_3046207950_recv (ssrc)
cname:QR/GiCZZOxrOJ/9e
msid:FxFjhc5PJ3xgeUp2cVAJPKaP3KMFiolDmLOU 7eb768fd-dd5a-4c4c-ada8-b9e1d539d867
mslabel:FxFjhc5PJ3xgeUp2cVAJPKaP3KMFiolDmLOU
label:7eb768fd-dd5a-4c4c-ada8-b9e1d539d867
Stats graphs for ssrc_3017098662_recv (ssrc) (video)
cname:QR/GiCZZOxrOJ/9e
msid:FxFjhc5PJ3xgeUp2cVAJPKaP3KMFiolDmLOU 1ceb9e2e-7a66-4c6d-b49d-e0b969415118
mslabel:FxFjhc5PJ3xgeUp2cVAJPKaP3KMFiolDmLOU
label:1ceb9e2e-7a66-4c6d-b49d-e0b969415118


Cannot find host name for ElephantSQL server

I've been trying to set up an Express server. All my routes seem to work correctly; it's connecting to the database that's giving me issues. I'm not great at backend, but I feel like getting the host name shouldn't give me this many problems.
Here's my error (batyr.db.elephantsql.com is my DB_HOST name):
Error: getaddrinfo ENOTFOUND 'batyr.db.elephantsql.com';
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: "'batyr.db.elephantsql.com';"
}
Here's my repository
Here's my knexfile.js:
require('dotenv').config();
const path = require("path");

module.exports = {
  development: {
    client: 'pg',
    connection: {
      host: process.env.DB_HOST,
      user: process.env.DB_USERNAME,
      password: process.env.DB_PASSWORD,
      database: process.env.DB_NAME,
    },
    charset: 'utf8',
    pool: { min: 1, max: 5 },
    migrations: {
      directory: path.join(__dirname, "src", "db", "migrations"),
    },
  },
};
Here's my .env file:
DATABASE_URL = 'postgres://pausapcx:vxSa5l3ZK_F2lrlMGhyt0XBlYbX7hfWY#batyr.db.elephantsql.com/pausapcx';
DB_HOST = 'batyr.db.elephantsql.com';
DB_USERNAME = 'pausapcx';
DB_PASSWORD = '***'; // cut just for this post
DB_NAME = 'pausapcx';
My server.js is running on port 8080, pgAdmin is running on port 5432
Here are the host and server names from elephantsql:
(https://i.stack.imgur.com/CnGQ4.png)
(https://i.stack.imgur.com/EVRkP.png)
I've tried with several different host names:
batyr.db.elephantsql.com (pgAdmin accepted this one)
batyr
I've tried the database URL
I've even tried just Elephantsql.com
I've pinged batyr.db.elephantsql.com in the terminal and gotten this response:
Pinging ec2-3-89-203-254.compute-1.amazonaws.com [3.89.203.254] with 32 bytes of data:
Reply from 3.89.203.254: bytes=32 time=41ms TTL=44
Reply from 3.89.203.254: bytes=32 time=76ms TTL=44
Reply from 3.89.203.254: bytes=32 time=38ms TTL=44
Reply from 3.89.203.254: bytes=32 time=37ms TTL=44
Ping statistics for 3.89.203.254:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 37ms, Maximum = 76ms, Average = 48ms
I really suck at backend; I'd appreciate any help.
Update: I was able to get data! (terminal screenshot)
The solution was to just swap out all the separate connection settings for a single database URL (I'll secure it in .env once it's all working):
// works:
connection: 'postgres://pausapcx:***#batyr.db.elephantsql.com/pausapcx',

// doesn't work:
connection: {
  host: 'batyr.db.elephantsql.com',
  user: 'pausapcx',
  password: '***',
  database: 'pausapcx',
},

How to set up FreeIPA + RabbitMQ

RabbitMQ version 3.10.0.
Please tell me how to write rabbitmq.conf correctly without using advanced.config.
Working BindDN on another server: uid=myuserinfreeipa,cn=users,cn=accounts,dc=mydc1,dc=mydc2
Working SearchFilter on another server: "(&(uid=%u)(memberOf=cn=mygroupinfreeipa,cn=groups,cn=accounts,dc=mydc1,dc=mydc2)(!(nsaccountlock=TRUE)))"
Working BaseDN on another server: "cn=users,cn=accounts,dc=mydc1,dc=mydc2"
rabbitmq.conf
auth_backends.1 = ldap
auth_ldap.servers.1 = my.server.com
auth_ldap.timeout = 500
auth_ldap.port = 389
auth_ldap.user_dn_pattern = CN=${username},OU=Users,dc=mydc1,dc=mydc2
auth_ldap.use_ssl = false
ssl_options.cacertfile = /etc/rabbitmq/ca.crt
auth_ldap.dn_lookup_bind.user_dn = test
auth_ldap.dn_lookup_bind.password = password
auth_ldap.dn_lookup_attribute = distinguishedName
auth_ldap.dn_lookup_base = cn=users,cn=accounts,dc=mydc1,dc=mydc2
auth_ldap.log = network
advanced.config
[
  {rabbitmq_auth_backend_ldap, [
    {tag_queries, [
      {administrator, {in_group, "CN=mygroupinfreeipa,dc=mydc1,dc=mydc2", "member"}},
      {management, {constant, true}}
    ]}
  ]}  %% rabbitmq_auth_backend_ldap
].
tail -f /var/log/rabbitmq/rabbit#amqptest.log
LDAP CHECK: login for test
LDAP connecting to servers: ["my.server.com"]
LDAP network traffic: bind request = {'BindRequest',3,"xxxx",
{simple,"xxxx"}}
LDAP network traffic: bind reply = {ok,
{'LDAPMessage',1,
{bindResponse,
{'BindResponse',invalidCredentials,
[],[],asn1_NOVALUE,asn1_NOVALUE}},
asn1_NOVALUE}}
LDAP bind returned "invalid credentials": xxxx
LDAP connecting to servers: ["my.server.com"]
LDAP network traffic: bind request = {'BindRequest',3,"xxxx",
{simple,"xxxx"}}
LDAP bind error: "xxxx" {'EXIT',
{{badmatch,
{error,
{asn1,
{function_clause,
[{'ELDAPv3',encode_restricted_string,
[{refused,"test",[]},[<<4>>]]
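The log shows the directory rejecting the lookup bind itself (invalidCredentials), which fits two details in the rabbitmq.conf above: dn_lookup_bind.user_dn = test is a bare name rather than a full DN, and dn_lookup_attribute = distinguishedName is an Active Directory attribute that FreeIPA does not use; FreeIPA keys users by uid. A hedged sketch of the bind/lookup part, assuming the standard FreeIPA layout from the working values quoted above:

auth_backends.1 = ldap
auth_ldap.servers.1 = my.server.com
auth_ldap.port = 389
auth_ldap.timeout = 500
auth_ldap.use_ssl = false

# Bind with a full FreeIPA DN, not a bare username
# (assumption: this account exists and the password is correct).
auth_ldap.dn_lookup_bind.user_dn = uid=myuserinfreeipa,cn=users,cn=accounts,dc=mydc1,dc=mydc2
auth_ldap.dn_lookup_bind.password = password

# FreeIPA identifies users by uid, not distinguishedName.
auth_ldap.dn_lookup_attribute = uid
auth_ldap.dn_lookup_base = cn=users,cn=accounts,dc=mydc1,dc=mydc2

auth_ldap.log = network

With the DN lookup in place, the AD-style user_dn_pattern line can be dropped. Group-driven tag_queries such as in_group still have no rabbitmq.conf equivalent in 3.10, so that part stays in advanced.config; note also that the group DN there is missing the cn=groups,cn=accounts components used by the working SearchFilter.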

Splunk UF not sending data to indexer

I have a Splunk UF and a Splunk Enterprise Server, both v8.2.1, running in Docker containers, but I am unable to see any data on the Enterprise Server for the new index I created, 'mytest'.
The Enterprise Server has the default port 9997 active as a receiving port.
Both containers are connected to the 'splunk' network I created:
"Containers": {
"0f9e44620ce9fba16df21af6d2253c4b02b9714cb3ea126a616f10d06f836eb9": {
"Name": "dspinelli-uf",
"EndpointID": "0e1dd065ee3d815c943a8b52e6107e53a4b57d9e3103b17d1461611543769869",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"3a1a084561eda8013baa8847f4ca30fd68eb74468ff666195bf1c15e0f8a280f": {
"Name": "dspinelli-ent",
"EndpointID": "7159b1a41840f9dfae04b50bb61386f8c3ac2233aee334026b9f1d685cfcf571",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
inputs.conf on the UF:
[splunktcp://9997]
disabled = 0
[http://hec-uf]
description = UF HTTP Event Collector
disabled = 0
token = 4022d42f-9132-442a-8a79-5d3eea1ad40d
index = mytest
indexes = mytest
outputgroup = tcpout
outputs.conf on the UF:
[indexAndForward]
index = false
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = dspinelli-ent:9997
[tcpout-server://dspinelli-ent:9997]
Communication between the UF and Enterprise Server is established:
netstat -an | grep 9997
tcp 0 0 0.0.0.0:9997 0.0.0.0:* LISTEN
tcp 0 0 172.18.0.3:44420 172.18.0.2:9997 ESTABLISHED
./bin/splunk list forward-server
Active forwards:
dspinelli-ent:9997
Configured but inactive forwards:
None
Attempt to curl the UF with some test data shows success:
curl -k https://x.x.x.x:8087/services/collector \
> -H 'Authorization: Splunk 4022d42f-9132-442a-8a79-5d3eea1ad40d' \
> -d '{"sourcetype": "demo", "event":"Hello, I was sent from UF"}'
{"text":"Success","code":0}
However, no data is ever displayed for the index on the Enterprise Server.
Does anyone know what could possibly be wrong here or what the next steps would be?
The issue was with inputs.conf. Updated as follows:
[http://hec-uf]
description = UF HTTP Event Collector
disabled = 0
token = 4022d42f-9132-442a-8a79-5d3eea1ad40d
_TCP_ROUTING = *
index = _internal
After the update and a restart, messages started to be received on the Enterprise Server.
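A quick way to verify receipt on the indexer is to search the routed index for the HEC test event (a sketch; the sourcetype comes from the curl test above, and with the updated inputs.conf the events land in _internal):

index=_internal sourcetype=demo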

coturn STUN requests work locally, but not for remote connections

I've successfully made a TURN request to a coturn server (https://github.com/coturn/coturn), but failed when executing a STUN request. If I make a STUN request to the coturn server from the same machine that runs it, using turnutils_stunclient myIP, the server responds with:
RFC 5780 response 1
0: IPv4. Response origin: : IP1:3478
0: IPv4. Other addr: : IP2:3479
0: IPv4. UDP reflexive addr: IP1:36457
========================================
RFC 5780 response 2
0: IPv4. Response origin: : IP2:3479
0: IPv4. Other addr: : IP2:3479
0: IPv4. UDP reflexive addr: IP1:36457
========================================
RFC 5780 response 3
0: IPv4. Response origin: : IP2:3479
0: IPv4. Other addr: : IP2:3479
0: IPv4. UDP reflexive addr: IP1:36458
So far, so good! To test STUN requests remotely, I also installed coturn on my laptop. If I make the same request from the laptop, the STUN server sends only one response and then hangs:
RFC 5780 response 1
0: IPv4. Response origin: : IP1:3478
0: IPv4. Other addr: : IP2:3479
0: IPv4. UDP reflexive addr: IP1:36457
There is no second or third message, and the command line blocks.
The problem is confirmed by running this JavaScript check:
function checkTURNServer(turnConfig, timeout) {
  return new Promise(function (resolve, reject) {
    setTimeout(function () {
      if (promiseResolved) return;
      resolve(false);
      promiseResolved = true;
    }, timeout || 5000);

    var promiseResolved = false,
        myPeerConnection = window.RTCPeerConnection || window.mozRTCPeerConnection || window.webkitRTCPeerConnection, // compatibility for Firefox and Chrome
        pc = new myPeerConnection({ iceServers: [turnConfig] }),
        noop = function () {};

    pc.createDataChannel(''); // create a bogus data channel
    pc.createOffer(function (sdp) {
      if (sdp.sdp.indexOf('typ relay') > -1) { // sometimes the SDP already contains the ICE candidates...
        promiseResolved = true;
        resolve(true);
      }
      pc.setLocalDescription(sdp, noop, noop);
    }, noop); // create offer and set local description

    pc.onicecandidate = function (ice) { // listen for candidate events
      if (promiseResolved || !ice || !ice.candidate || !ice.candidate.candidate || !(ice.candidate.candidate.indexOf('typ relay') > -1)) return;
      promiseResolved = true;
      resolve(true);
    };
  });
}

checkTURNServer({"url": "stun:IP1:3478"}).then(function (bool) {
  console.log('is TURN server active? ', bool ? 'yes' : 'no'); // prints no!!
}).catch(console.error.bind(console));
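Worth noting: checkTURNServer only resolves true on a candidate containing typ relay, and relay candidates are produced only by a TURN allocation. A plain STUN URL yields server-reflexive (typ srflx) candidates, so this probe prints "no" even when STUN works. A sketch of an equivalent STUN-oriented probe (assumptions: a promise-based browser API, and IP1 standing for the server's public address as above):

// Resolve true as soon as a server-reflexive candidate appears,
// i.e. the STUN binding request got an answer.
function checkSTUNServer(stunConfig, timeout) {
  return new Promise(resolve => {
    const pc = new RTCPeerConnection({ iceServers: [stunConfig] });
    let done = false;
    setTimeout(() => { if (!done) resolve(false); }, timeout || 5000);
    pc.createDataChannel(''); // bogus channel so ICE gathering starts
    pc.createOffer().then(sdp => pc.setLocalDescription(sdp));
    pc.onicecandidate = ice => {
      if (done || !ice.candidate || !ice.candidate.candidate) return;
      if (ice.candidate.candidate.indexOf('typ srflx') > -1) {
        done = true;
        resolve(true);
      }
    };
  });
}

checkSTUNServer({ urls: 'stun:IP1:3478' })
  .then(ok => console.log('is STUN server reachable? ', ok ? 'yes' : 'no'));

(Newer browsers expect the plural urls key; the legacy url form may be ignored.)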
What should I do to make STUN requests work? Can you point me in the right direction?
-----------------Additional info---------------------
I've tested just the TURN functionality, and I was able to connect:
checkTURNServer({"url":"turn:IP1:3478",username: 'mainuser',
credential: 'mainpassword' }).then(function(bool){
console.log('is TURN server active? ', bool? 'yes':'no'); // prints yes
}).catch(console.error.bind(console));
I've installed coturn and configured it as a service. These are the contents of my /etc/systemd/system/coturn.service file:
[Unit]
Description=turnserver Service
After=network.target
[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/turnserver -c /usr/local/etc/turnserver.conf
Restart=on-abort
[Install]
WantedBy=multi-user.target
When I run systemctl start coturn, the service starts normally.
Contents of /usr/local/etc/turnserver.conf
listening-ip=IP1
listening-ip=IP2
lt-cred-mech
user=mainuser:mainpassword
realm=mydomain.com
max-bps=128000
cert=/etc/letsencrypt/live/mydomain.com/cert.pem
pkey=/etc/letsencrypt/live/mydomain.com/privkey.pem
Result of iptables -S:
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N FORWARD_IN_ZONES
-N FORWARD_IN_ZONES_SOURCE
-N FORWARD_OUT_ZONES
-N FORWARD_OUT_ZONES_SOURCE
-N FORWARD_direct
-N FWDI_iredmail
-N FWDI_iredmail_allow
-N FWDI_iredmail_deny
-N FWDI_iredmail_log
-N FWDO_iredmail
-N FWDO_iredmail_allow
-N FWDO_iredmail_deny
-N FWDO_iredmail_log
-N INPUT_ZONES
-N INPUT_ZONES_SOURCE
-N INPUT_direct
-N IN_iredmail
-N IN_iredmail_allow
-N IN_iredmail_deny
-N IN_iredmail_log
-N OUTPUT_direct
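One detail worth checking: RFC 5780 responses 2 and 3 originate from the alternate address and port (IP2:3479), so if only 3478 is reachable from outside, a remote client sees exactly one response and then hangs, which matches the symptom. The chain names (IN_iredmail, FWDI_iredmail) suggest firewalld manages these rules; a sketch of opening the relevant ports (a guess at the culprit, not a confirmed fix; 49152-65535 is coturn's default relay range when min-port/max-port are unset):

# STUN/TURN listener plus the RFC 5780 alternate port, UDP and TCP
firewall-cmd --permanent --add-port=3478-3479/udp
firewall-cmd --permanent --add-port=3478-3479/tcp
# TLS listener
firewall-cmd --permanent --add-port=5349/tcp
# relay allocation range (coturn default)
firewall-cmd --permanent --add-port=49152-65535/udp
firewall-cmd --reload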

WebRTC: ICE failed since Firefox v 53

I have a problem with my WebRTC project. My code works well in Chrome and in Firefox up to version 52. Since Firefox 53 I get the error: "ICE failed, see about:webrtc for more details". This is my code:
let connection;
let dataChannel;

const initCon = (initiatorBoolean, choosenONE) => {
  createConnection();
  startConnection(initiatorBoolean, choosenONE);
  console.log('initCon(' + initiatorBoolean + ') / choosenONE = ' + choosenONE);
}

var boolJoin = false;
var lobbies = [];
var me = prompt("Please enter your name", "Bert");

// Helper for error logging
const logErrors = err => console.log(err);

// Establish the WebSocket connection
const socket = io.connect('http://7daysclub.de:8080');

/* Method to send information between two users
 * @param
 * receiver = the user who gets the message
 * type = type of the message: can be init, joined, getUsers, setUsers, offer, answer, candidate
 * descr = content of the message
 */
const sendMessage = (receiver, type, descr) => {
  message = Object.assign({to: 'default'}, {
    user: receiver,
    type: type,
    descr: descr,
    from: me
  });
  socket.emit('message', message);
  console.log("send: " + message.type + ". to: " + message.user);
};
const createConnection = () => {
  // Create the connection
  connection = new RTCPeerConnection({
    'iceServers': [
      {
        /*
        'urls': 'stun:stun.l.google.com:19302'
        */
        'urls': 'stun:185.223.29.90:3478'
      },
      {
        'urls': 'turn:185.223.29.90:3478',
        'credential': '***',
        'username': '***'
      }
    ]
  });

  connection.oniceconnectionstatechange = function(event) {
    console.log('state: ' + connection.iceConnectionState);
    if (connection.iceConnectionState === "failed" ||
        connection.iceConnectionState === "disconnected" ||
        connection.iceConnectionState === "closed") {
      // Handle the failure
    }
  };
}

const call = choosenONE => {
  if (!boolJoin) {
    sendMessage(choosenONE, 'joined', null);
  }
};
// Receive a message from the signaling channel
const receiveMessage = message => {
  console.log("receive: " + message.type);
  // Filter messages
  if (message.user != 'all' && message.user != me) {
    console.log("Block= " + message.user);
    return;
  }
  // Process the message
  switch (message.type) {
    // A partner is available
    case 'joined':
      choosenONE = message.from;
      sendMessage(choosenONE, 'init', null);
      boolJoin = true;
      initCon(true, choosenONE);
      break;
    case 'getUsers':
      sendMessage(message.from, 'setUsers', null)
      lobbies.push(message.from);
      lobbylist();
      console.log('add User: ' + lobbies[lobbies.length - 1] + ' to List');
      break;
    case 'setUsers':
      lobbies.push(message.from);
      console.log('add User: ' + lobbies[lobbies.length - 1] + ' to List');
      lobbylist();
      break;
    // The partner requested a connection setup
    case 'init':
      choosenONE = message.from;
      boolJoin = true;
      initCon(false, choosenONE);
      break;
    // An SDP offer has arrived; we create an answer
    case 'offer':
      connection
        .setRemoteDescription(new RTCSessionDescription(message.descr))
        .then(() => connection.createAnswer())
        .then(answer => connection.setLocalDescription(new RTCSessionDescription(answer)))
        .then(() => sendMessage(message.from, 'answer', connection.localDescription, me))
        .catch(logErrors);
      break;
    // An SDP answer to our offer has arrived
    case 'answer':
      connection.setRemoteDescription(new RTCSessionDescription(message.descr));
      break;
    // The partner has found a possible host/port combination ("ICE candidate")
    case 'candidate':
      connection.addIceCandidate(new RTCIceCandidate({candidate: message.descr}));
      break;
  }
};

socket.on('message', receiveMessage);
// Initialize the connection
const startConnection = (isCreator, choosenONE) => {
  // When we find possible communication endpoints, pass them on to the partner
  connection.onicecandidate = event => {
    if (event.candidate) {
      sendMessage(choosenONE, 'candidate', event.candidate.candidate);
    }
  };
  // When the other side adds a stream, attach it to the video element
  connection.ontrack = (e) => {
    document.getElementById('vidRemote').src = window.URL.createObjectURL(e.stream);
  };
  // If we are the first participant, we start the connection setup
  if (isCreator) {
    // Create the data channel
    dataChannel = connection.createDataChannel('chat');
    onDataChannelCreated(dataChannel);
    connection
      .createOffer()
      .then(offer => connection.setLocalDescription(new RTCSessionDescription(offer)))
      .then(() => sendMessage(choosenONE, 'offer', connection.localDescription))
      .catch(logErrors);
  } else {
    // If we are not the initiator, we only react to a data channel being created
    connection.ondatachannel = function (event) {
      dataChannel = event.channel;
      onDataChannelCreated(dataChannel);
    };
  }
};
const onDataChannelCreated = (channel, choosenONE) => {
  // As soon as the data channel is available, allow chat input
  channel.onopen = () => {
    const enterChat = document.getElementById('enterChat');
    enterChat.disabled = false;
    enterChat.onkeyup = (keyevent) => {
      // Send on "Enter"
      if (keyevent.keyCode === 13) {
        dataChannel.send(enterChat.value);
        appendChatMessage(me + ':', enterChat.value);
        enterChat.value = '';
      }
    }
  };
  channel.onmessage = (message) => appendChatMessage(choosenONE + ':', message.data);
};

const appendChatMessage = (name, text) => {
  const displayChat = document.getElementById('displayChat');
  const time = new Date().toString('H:i:s');
  displayChat.innerHTML = `<p>${name} - ${time}<br>${text}</p>` + displayChat.innerHTML;
};

function lobbylist() {
  allusers = "";
  lobbies.forEach(element => {
    allusers += "<li><button onclick=\"call('" + element + "')\">" + element + "</button></li>"
  });
  document.getElementById('lobbylist').innerHTML = allusers;
}

// Ask all users for their presence on arrival
sendMessage('all', 'getUsers', null);
lobbylist();
And about:webrtc shows:
(registry/INFO) insert 'ice' (registry) succeeded: ice
(registry/INFO) insert 'ice.pref' (registry) succeeded: ice.pref
(registry/INFO) insert 'ice.pref.type' (registry) succeeded: ice.pref.type
(registry/INFO) insert 'ice.pref.type.srv_rflx' (UCHAR) succeeded: 0x64
(registry/INFO) insert 'ice.pref.type.peer_rflx' (UCHAR) succeeded: 0x6e
(registry/INFO) insert 'ice.pref.type.host' (UCHAR) succeeded: 0x7e
(registry/INFO) insert 'ice.pref.type.relayed' (UCHAR) succeeded: 0x05
(registry/INFO) insert 'ice.pref.type.srv_rflx_tcp' (UCHAR) succeeded: 0x63
(registry/INFO) insert 'ice.pref.type.peer_rflx_tcp' (UCHAR) succeeded: 0x6d
(registry/INFO) insert 'ice.pref.type.host_tcp' (UCHAR) succeeded: 0x7d
(registry/INFO) insert 'ice.pref.type.relayed_tcp' (UCHAR) succeeded: 0x00
(registry/INFO) insert 'stun' (registry) succeeded: stun
(registry/INFO) insert 'stun.client' (registry) succeeded: stun.client
(registry/INFO) insert 'stun.client.maximum_transmits' (UINT4) succeeded: 7
(registry/INFO) insert 'ice.trickle_grace_period' (UINT4) succeeded: 5000
(registry/INFO) insert 'ice.tcp' (registry) succeeded: ice.tcp
(registry/INFO) insert 'ice.tcp.so_sock_count' (INT4) succeeded: 0
(registry/INFO) insert 'ice.tcp.listen_backlog' (INT4) succeeded: 10
(registry/INFO) insert 'ice.tcp.disable' (char) succeeded: \000
(generic/EMERG) Exit UDP socket connected
(generic/ERR) UDP socket error:Internal error at z:/build/build/src/dom/network/UDPSocketParent.cpp:283 this=00000000120CC400
(ice/INFO) z:/build/build/src/media/mtransport/third_party/nICEr/src/net/nr_socket_multi_tcp.c:173 function nr_socket_multi_tcp_create_stun_server_socket skipping UDP STUN server(addr:IP4:185.223.29.90:3478/UDP)
(ice/INFO) z:/build/build/src/media/mtransport/third_party/nICEr/src/net/nr_socket_multi_tcp.c:173 function nr_socket_multi_tcp_create_stun_server_socket skipping UDP STUN server(addr:IP4:185.223.29.90:3478/UDP)
(ice/WARNING) z:/build/build/src/media/mtransport/third_party/nICEr/src/net/nr_socket_multi_tcp.c:617 function nr_socket_multi_tcp_listen failed with error 3
(ice/WARNING) ICE(PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/)): failed to create passive TCP host candidate: 3
(ice/NOTICE) ICE(PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/)): peer (PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/):default) no streams with non-empty check lists
(ice/NOTICE) ICE(PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/)): peer (PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/):default) no streams with pre-answer requests
(ice/NOTICE) ICE(PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/)): peer (PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/):default) no checks to start
(ice/ERR) ICE(PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/)): peer (PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/):default) pairing local trickle ICE candidate host(IP4:192.168.0.4:54605/UDP)
(ice/ERR) ICE(PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/)): peer (PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/):default) pairing local trickle ICE candidate host(IP4:192.168.0.4:62417/TCP) active
(stun/INFO) Unrecognized attribute: 0x802b
(stun/INFO) STUN-CLIENT(srflx(IP4:192.168.0.4:54605/UDP|IP4:185.223.29.90:3478/UDP)): Received response; processing
(ice/ERR) ICE(PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/)): peer (PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/):default) pairing local trickle ICE candidate srflx(IP4:192.168.0.4:54605/UDP|IP4:185.223.29.90:3478/UDP)
(stun/INFO) STUN-CLIENT(relay(IP4:192.168.0.4:54605/UDP|IP4:185.223.29.90:3478/UDP)::TURN): Received response; processing
(stun/WARNING) STUN-CLIENT(relay(IP4:192.168.0.4:54605/UDP|IP4:185.223.29.90:3478/UDP)::TURN): Error processing response: Retry may be possible, stun error code 401.
(stun/INFO) STUN-CLIENT(relay(IP4:192.168.0.4:54605/UDP|IP4:185.223.29.90:3478/UDP)::TURN): Received response; processing
(stun/WARNING) STUN-CLIENT(relay(IP4:192.168.0.4:54605/UDP|IP4:185.223.29.90:3478/UDP)::TURN): Error processing response: Retry may be possible, stun error code 401.
(turn/WARNING) TURN(relay(IP4:192.168.0.4:54605/UDP|IP4:185.223.29.90:3478/UDP)): Exceeded the number of retries
(turn/WARNING) TURN(relay(IP4:192.168.0.4:54605/UDP|IP4:185.223.29.90:3478/UDP)): mode 20, nr_turn_client_error_cb
(turn/WARNING) TURN(relay(IP4:192.168.0.4:54605/UDP|IP4:185.223.29.90:3478/UDP)) failed
(turn/INFO) TURN(relay(IP4:192.168.0.4:54605/UDP|IP4:185.223.29.90:3478/UDP)): cancelling
(turn/WARNING) ICE-CANDIDATE(relay(IP4:192.168.0.4:54605/UDP|IP4:185.223.29.90:3478/UDP)): nr_turn_allocated_cb called with state 4
(turn/WARNING) ICE-CANDIDATE(relay(IP4:192.168.0.4:54605/UDP|IP4:185.223.29.90:3478/UDP)): nr_turn_allocated_cb failed
(ice/INFO) ICE(PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/)): peer (PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/):default) Trickle grace period is over; marking every component with only failed pairs as failed.
(ice/INFO) ICE-PEER(PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/):default)/STREAM(0-1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/) aLevel=0)/COMP(1): All pairs are failed, and grace period has elapsed. Marking component as failed.
(ice/INFO) ICE-PEER(PC:1512241952789000 (id=19327352835 url=http://7daysclub.de/webrtc2/):default): all checks completed success=0 fail=1
(generic/EMERG) Exit UDP socket connected
(generic/ERR) UDP socket error:Internal error at z:/build/build/src/dom/network/UDPSocketParent.cpp:283 this=000000000BAE3000
(ice/INFO) z:/build/build/src/media/mtransport/third_party/nICEr/src/net/nr_socket_multi_tcp.c:173 function nr_socket_multi_tcp_create_stun_server_socket skipping UDP STUN server(addr:IP4:185.223.29.90:3478/UDP)
(ice/INFO) z:/build/build/src/media/mtransport/third_party/nICEr/src/net/nr_socket_multi_tcp.c:173 function nr_socket_multi_tcp_create_stun_server_socket skipping UDP STUN server(addr:IP4:185.223.29.90:3478/UDP)
(ice/WARNING) z:/build/build/src/media/mtransport/third_party/nICEr/src/net/nr_socket_multi_tcp.c:617 function nr_socket_multi_tcp_listen failed with error 3
(ice/WARNING) ICE(PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/)): failed to create passive TCP host candidate: 3
(ice/NOTICE) ICE(PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/)): peer (PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/):default) no streams with non-empty check lists
(ice/NOTICE) ICE(PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/)): peer (PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/):default) no streams with pre-answer requests
(ice/NOTICE) ICE(PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/)): peer (PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/):default) no checks to start
(ice/ERR) ICE(PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/)): peer (PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/):default) pairing local trickle ICE candidate host(IP4:192.168.0.4:63286/UDP)
(ice/ERR) ICE(PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/)): peer (PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/):default) pairing local trickle ICE candidate host(IP4:192.168.0.4:54433/TCP) active
(stun/INFO) Unrecognized attribute: 0x802b
(stun/INFO) STUN-CLIENT(srflx(IP4:192.168.0.4:63286/UDP|IP4:185.223.29.90:3478/UDP)): Received response; processing
(ice/ERR) ICE(PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/)): peer (PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/):default) pairing local trickle ICE candidate srflx(IP4:192.168.0.4:63286/UDP|IP4:185.223.29.90:3478/UDP)
(stun/INFO) STUN-CLIENT(relay(IP4:192.168.0.4:63286/UDP|IP4:185.223.29.90:3478/UDP)::TURN): Received response; processing
(stun/WARNING) STUN-CLIENT(relay(IP4:192.168.0.4:63286/UDP|IP4:185.223.29.90:3478/UDP)::TURN): Error processing response: Retry may be possible, stun error code 401.
(stun/INFO) STUN-CLIENT(relay(IP4:192.168.0.4:63286/UDP|IP4:185.223.29.90:3478/UDP)::TURN): Received response; processing
(turn/INFO) TURN(relay(IP4:192.168.0.4:63286/UDP|IP4:185.223.29.90:3478/UDP)): Succesfully allocated addr IP4:185.223.29.90:50497/UDP lifetime=3600
(ice/ERR) ICE(PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/)): peer (PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/):default) pairing local trickle ICE candidate turn-relay(IP4:192.168.0.4:63286/UDP|IP4:185.223.29.90:50497/UDP)
(ice/INFO) ICE(PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/)): peer (PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/):default) Trickle grace period is over; marking every component with only failed pairs as failed.
(ice/INFO) ICE-PEER(PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/):default)/STREAM(0-1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/) aLevel=0)/COMP(1): All pairs are failed, and grace period has elapsed. Marking component as failed.
(ice/INFO) ICE-PEER(PC:1512242125375000 (id=19327352836 url=http://7daysclub.de/webrtc2/):default): all checks completed success=0 fail=1
+++++++ END ++++++++
And my turnserver config looks like this:
# TURN listener port for UDP and TCP (Default: 3478).
# Note: actually, TLS & DTLS sessions can connect to the
# "plain" TCP & UDP port(s), too - if allowed by configuration.
#
listening-port=3478
# TURN listener port for TLS (Default: 5349).
# Note: actually, "plain" TCP & UDP sessions can connect to the TLS & DTLS
# port(s), too - if allowed by configuration. The TURN server
# "automatically" recognizes the type of traffic. Actually, two listening
# endpoints (the "plain" one and the "tls" one) are equivalent in terms of
# functionality; but we keep both endpoints to satisfy the RFC 5766 specs.
# For secure TCP connections, we currently support SSL version 3 and
# TLS version 1.0, 1.1 and 1.2. SSL2 "encapsulation mode" is also supported.
# For secure UDP connections, we support DTLS version 1.
#
tls-listening-port=5349
# Listener IP address of relay server. Multiple listeners can be specified.
# If no IP(s) specified in the config file or in the command line options,
# then all IPv4 and IPv6 system IPs will be used for listening.
#
listening-ip=185.223.29.90
# Relay address (the local IP address that will be used to relay the
# packets to the peer).
# Multiple relay addresses may be used.
# The same IP(s) can be used as both listening IP(s) and relay IP(s).
#
# If no relay IP(s) specified, then the turnserver will apply the default
# policy: it will decide itself which relay addresses to be used, and it
# will always be using the client socket IP address as the relay IP address
# of the TURN session (if the requested relay address family is the same
# as the family of the client socket).
#
relay-ip=185.223.29.90
lt-cred-mech
# Realm for long-term credentials mechanism and for TURN REST API.
#
realm=7daysclub.de
#Local system IP address to be used for CLI server endpoint. Default value
# is 127.0.0.1.
#
cli-ip=185.223.29.90
# CLI server port. Default is 5766.
#
cli-port=5766
Looking at your online demo, there is no a=candidate line in the remote SDP, which suggests the candidate is never added.
sendMessage(choosenONE, 'candidate', event.candidate.candidate);
This discards the sdpMid and sdpMLineIndex, which need to be signalled to the remote peer. Add a .catch to your addIceCandidate to log any errors.
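A sketch of that fix against the code above: send the complete candidate rather than just the SDP string, and surface addIceCandidate failures. The receiver part would replace the body of the 'candidate' case in receiveMessage:

// Sender: include sdpMid and sdpMLineIndex in the signalling payload.
connection.onicecandidate = event => {
  if (event.candidate) {
    sendMessage(choosenONE, 'candidate', {
      candidate: event.candidate.candidate,
      sdpMid: event.candidate.sdpMid,
      sdpMLineIndex: event.candidate.sdpMLineIndex
    });
  }
};

// Receiver: rebuild the candidate from all three fields and log any error.
connection
  .addIceCandidate(new RTCIceCandidate(message.descr))
  .catch(err => console.log('addIceCandidate failed: ', err));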