I made a WebRTC video calling application that works, shares audio and video with remote users, and also works across different networks.
We set up a Coturn server on an Amazon EC2 instance for NAT traversal.
The issue I'm facing is that on some networks (globally), I'm not able to get the remote user's audio/video, and I can't figure out what the issue is.
The Trickle ICE test works perfectly; it's just some networks that are giving issues.
I also tried deploying Coturn on a separate EC2 instance
and tried every possible TURN port configuration.
If anyone can shed some light on this, please do let us know.
I'm attaching the logs:
ICE(PC:1601497557091097 (id=10737418241 url=https://*****)): relay only option is set without any TURN server configured
ICE(PC:1601497557091097 (id=10737418241 url=https://*****)): relay only option results in no host candidate for IP4:192.168.43.57:0/UDP
ICE(PC:1601497557091097 (id=10737418241 url=https://*****)): relay only option is set without any TURN server configured
ICE(PC:1601497557091097 (id=10737418241 url=https://*****)): relay/proxy only option results in ICE TCP being disabled
ICE(PC:1601497557091097 (id=10737418241 url=https://*****)): couldn’t create any valid candidates
builds/worker/checkouts/gecko/media/mtransport/third_party/nICEr/src/net/nr_socket_multi_tcp.c:617 function nr_socket_multi_tcp_listen failed with error 3
ICE(PC:1601487818957513 (id=4294967297 url=https://*****)): failed to create passive TCP host candidate: 3
/builds/worker/checkouts/gecko/media/mtransport/third_party/nICEr/src/net/nr_socket_multi_tcp.c:617 function nr_socket_multi_tcp_listen failed with error 3
ICE(PC:1601487818957513 (id=4294967297 url=https://*****)): failed to create passive TCP host candidate: 3
ICE(PC:1601487818957513 (id=4294967297 url=https://*****): All candidates initialized
ICE(PC:1601497557091097 (id=10737418241 url=https://*****)): relay only option results in no host candidate for IP6:[2401:4900:1202:b867:213f:764b:4942:4757]:0/UDP
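The first block of messages suggests the RTCPeerConnection was created with iceTransportPolicy: "relay" but with no turn: URLs in iceServers, which is exactly the combination Firefox complains about ("relay only option is set without any TURN server configured", then "couldn't create any valid candidates"). A minimal sketch of the client-side configuration those messages expect, with a hypothetical Coturn address and long-term credentials:

// Minimal sketch (hypothetical TURN address and credentials): with the
// relay-only policy, at least one turn:/turns: URL must be supplied,
// otherwise no candidates can be gathered at all.
const pc = new RTCPeerConnection({
  iceTransportPolicy: "relay", // gather relayed candidates only
  iceServers: [
    {
      urls: [
        "turn:turn.example.com:3478?transport=udp",
        "turn:turn.example.com:3478?transport=tcp",
        "turns:turn.example.com:5349?transport=tcp" // TLS fallback for restrictive networks
      ],
      username: "demo",
      credential: "secret"
    }
  ]
});

Networks that block UDP outright are a common reason some locations fail while the Trickle ICE test elsewhere succeeds; offering TURN over TCP/TLS as above is the usual hedge against that.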
I'm using GCP with the following scheme:
TCP load balancer -> backend service -> MIG ("my app") with autoscaling.
"My app" accepts commands on one TCP port (A) and sends notifications to subscribers on another TCP port (B).
I'm running my tests against the TCP LB's IP: my tests connect to port B at startup (i.e. to one of the instances of "my app") and also open a connection to port A for each test.
The problem is that I've hit a case where the port A and port B connections are terminated on different hosts.
I am not sure how to circumvent this case.
I mitigated the issue by setting --session-affinity=CLIENT_IP on the backend service, i.e. all connections from one client IP are directed to the same target.
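For reference, a sketch of how that can be applied with gcloud (the service name is hypothetical; use --region=... instead of --global for a regional backend service):

# Route all connections from a given client IP to the same backend instance,
# so the port A and port B connections land on the same host.
gcloud compute backend-services update my-backend-service \
    --global \
    --session-affinity=CLIENT_IP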
tcpdump on the DHCP server shows that the server receives the DHCPDISCOVER packet and sends the DHCPOFFER packet, but never receives the DHCPREQUEST packet from the DHCP client, so the client cannot get an IP address and keeps sending DHCPDISCOVER packets.
However, a DHCP server running in a VMware instance can send the DHCPACK to the client, and the same client gets an IP successfully. That DHCP server uses the same configuration as the one in the OpenStack instance.
Also, if I configure a static IP address on the client instance, it can ping the DHCP server's IP successfully.
One more thing: the server and client are in the same VLAN.
Is there some limiting rule on OpenStack instances? How can I resolve this problem? Thanks.
The essential reason is that this traffic is blocked by security groups in OpenStack.
By default, all security groups contain a series of basic (sanity) and anti-spoofing rules that perform the following actions:
Deny egress DHCP and DHCPv6 responses to prevent instances from acting as DHCP(v6) servers.
Resolution:
disable security groups (not recommended)
set up DHCP relay to the DHCP server on the router (recommended)
Security groups enforce this through the hypervisor's iptables rules, which drop packets whose source port is 67 and destination port is 68.
With DHCP relay, DHCPOFFER packets are sent to the router with both source and destination port 67, so they are not matched by that rule, and this works for all VLANs.
For DHCP relay and DHCP proxy, packets sent to the DHCP server from the router have both the source and destination UDP ports set to 67. The DHCP server responds using the same ports.
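Illustratively, the anti-spoofing rule described above behaves roughly like the following iptables rule on the compute node (a sketch only; the real rule lives in a Neutron-managed chain whose name depends on the port ID):

# Drop DHCP server replies leaving the instance (src port 67 -> dst port 68),
# which is why the instance's DHCPOFFER/DHCPACK never reaches the client.
iptables -A neutron-openvswi-oXXXXXXXX -p udp --sport 67 --dport 68 -j DROP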
Maybe there are other methods, but I haven't found any so far.
I have installed the TURN server and everything on the server side is working fine; there are no errors in the log file, only a warning stating:
0: WARNING: I cannot support STUN CHANGE_REQUEST functionality because only one IP address is provided
but the TURN server is running on the server.
Here is what shows up when I check lsof -i :3478:
turnserve 999 root 15u IPv4 446811411 0t0 TCP domain.com:stun (LISTEN)
turnserve 999 root 23u IPv4 446811417 0t0 TCP domain:stun (LISTEN)
turnserve 999 root 24u IPv4 446810998 0t0 UDP domain.com:stun
turnserve 999 root 25u IPv4 446810999 0t0 UDP domain.com:stun
When I check STUN in Trickle ICE, it throws errors:
The server stun:xxx.xxx.xxx.xxx:3478 returned an error with code=701:
STUN server address is incompatible.
The server stun:xxx.xxx.xxx.xxx:3478 returned an error with code=701:
STUN allocate request timed out.
What's going wrong here?
Thank you
I think that 701 error is a more generic connectivity error that Trickle ICE uses to indicate it didn't get a binding response back. Run stunclient your.stun.ip.address with the command line tools at www.stunprotocol.org to see if your STUN service is accessible from the outside world.
STUN technically requires being hosted on a device with two IP addresses and two ports. It's typically a command line parameter to specify which IP addresses the server should listen on. But most server implementations can operate on a host with a single IP address.
The second IP address and port on the server is used for STUN client filtering tests to detect what type of NAT is in effect. The client sends a binding request on the server's primary ip and port, but with a change request attribute to have the server respond from the alternate IP address or port. More often than not, this binding request with a change-request attribute fails since NATs will not forward traffic from the other IP/port.
The filtering test is useful for logging what type of NAT the client is on, so that failed connections can be debugged and success/failure metrics can be correlated with NAT type.
Since most ICE implementations will exchange all available address candidates (local, mapped, and relay), the filtering test isn't very useful for connectivity establishment.
I'm surprised Trickle ICE is giving you an error. I didn't think WebRTC ever used the change-request attribute. I just did a Wireshark trace of a Trickle ICE session to stunserver.stunprotocol.org. I don't see the WebRTC client setting the change-request attribute in either of the two binding requests it makes.
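If you want to repeat that check yourself, a capture along these lines works (a sketch; the interface name is an assumption):

# Capture STUN traffic on the default port and print full packet detail,
# so any CHANGE-REQUEST attribute in the binding requests is visible.
tshark -i eth0 -f "udp port 3478" -Y stun -V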
More details in RFC 5780 Section 3.2
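For the coturn warning quoted in the question ("only one IP address is provided"), the usual way to enable CHANGE_REQUEST handling is to give the server a second address to listen on, if the host actually has one. A sketch with hypothetical addresses (coturn accepts multiple listening-ip lines):

# Two listening addresses let the server answer from an alternate IP/port,
# which is what the RFC 5780 filtering tests require.
listening-ip=203.0.113.10
listening-ip=203.0.113.11
listening-port=3478
alt-listening-port=3479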
On macOS, I just do:
> brew install stuntman
When it's done:
> stunclient stunserver.stunprotocol.org
Binding test: success
Local address: 198.18.0.1:54898
Mapped address: 210.0.158.130:56750
To specify a port, just do this:
> stunclient stunserver.stunprotocol.org 3478
Binding test: success
Local address: 198.18.0.1:63061
Mapped address: 210.0.158.130:37126
Have fun!
I have two clients communicating over WebRTC (Client A written in JS, Client B in Python with aiortc). Now it happens that Client A wants to connect from a mobile network, so it requires a TURN relay connection.
I have already set up a TURN server which seems to do its job, but only about 50% of the connections succeed. I have already found out when they succeed and when they fail:
SDP relay information in case of success:
Offer Client A
a=candidate:3 2 UDP 92217086 172.31.16.8 59986 typ relay raddr 172.31.16.8 rport 59986
Response Client B
a=candidate:11 1 UDP 92086015 172.31.16.8 49910 typ relay raddr 172.31.16.8 rport 49910
SDP relay information in case of failure:
Offer Client A
a=candidate:7 1 UDP 92151551 172.31.16.8 49871 typ relay raddr 172.31.16.8 rport 49871
Response Client B
a=candidate:5820bb1602563a80c76891a80be14933 1 udp 16777215 18.185.84.96 53279 typ relay raddr 172.31.1.103 rport 49244
The important difference is the IP address shown in the response from Client B: in the successful scenario it is an address from the network Client B is in, while in the failing scenario it is the IP address of the TURN server (18.185.84.96).
I don't understand why it sometimes gives the IP of the TURN server and other times not, or what it means that the TURN server's IP address cannot be used...
Does anyone have any ideas on where to start looking for the issue?
It seems like our TURN server was misconfigured.
I cannot tell exactly what was misconfigured, because sadly I have no access to the configuration of that TURN server.
But I tested by deploying some TURN servers on my local machine, and they behaved similarly when they were not configured correctly. Looking into the logs of those TURN servers, I saw 401 Unauthorized popping up all the time, so I changed the configuration until authorization was working. With this config we deployed a new server, which is now working.
Some words on the configuration for people who also have trouble with it on the first run. This is the configuration we put into /etc/turnserver.conf and passed when starting the server with turnserver -v -c /etc/turnserver.conf:
listening-port=<port>
alt-listening-port=<port>
listening-ip=<listening-ip>
external-ip=<external-ip>
realm=<realm>
fingerprint
lt-cred-mech
user=<user:pw>
Before arriving at that configuration we made some mistakes; maybe they are obvious to experienced people, but they were not to us:
we had use-auth-secret in the config file; this should not be enabled when using user
we had the issue that the TURN server was usable in Firefox but not in Chrome or others (it was not possible to gather relay candidates); this was due to realm not being configured in the config file
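One way to sanity-check the long-term credentials without involving a browser is coturn's bundled test client; a sketch using the placeholder values from the config above (flag names as in current coturn releases):

# Allocates a relay on the server using the lt-cred-mech user; repeated 401s
# here point at a credential/realm mismatch rather than a network problem.
turnutils_uclient -v -u <user> -w <pw> <turn-server-ip>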
libvirt.libvirtError: unable to connect to server at 'ccrfox112:49152': Connection timed out
When migrating QEMU guests without tunnelling via libvirtd, QEMU will listen on a port number in the range 49152-49216 for a connection from the source host. This error message shows that the source host was unable to connect to the target host. You've not provided any useful information about your setup, so I'd have to guess that you probably have firewall rules on the target host that are blocking the source host's access to the TCP port in question.
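If that guess is right, opening the migration port range on the destination host is the usual fix; a sketch assuming firewalld (use the equivalent iptables/nftables rules otherwise):

# Allow the source host to reach the QEMU migration listener on the target.
firewall-cmd --permanent --add-port=49152-49216/tcp
firewall-cmd --reload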