WebRTC connection state management

What I wish to achieve:
When establishing a connection, prevent the user from sending any messages until the connection has finished all of its setup (with the STUN/TURN servers, etc.).
When there is a sudden disconnect, prevent the user from sending any messages until the connection is re-established.
My best guess is that one of the two event handlers below will do the trick, but I don't know which one, nor what the differences between them are.
onconnectionstatechange()
oniceconnectionstatechange()

oniceconnectionstatechange doesn't cover the establishment of the DTLS handshake that runs on top of the ICE connection.
Use onconnectionstatechange to detect when the connection is fully established and also to detect disconnections.
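As a rough TypeScript sketch (assuming an RTCPeerConnection named pc and an RTCDataChannel named channel created elsewhere), gating sends on connectionState could look like this:

```typescript
// Sketch only: `pc` and `channel` are assumed to exist already.
declare const pc: RTCPeerConnection;
declare const channel: RTCDataChannel;

let canSend = false;

pc.onconnectionstatechange = () => {
  switch (pc.connectionState) {
    case "connected":
      // ICE succeeded and the DTLS handshake finished: safe to send.
      canSend = true;
      break;
    case "disconnected":
    case "failed":
    case "closed":
      // Block sending until the connection recovers or is re-negotiated.
      canSend = false;
      break;
  }
};

function sendMessage(text: string): void {
  if (!canSend || channel.readyState !== "open") {
    // Queue or drop the message instead of sending over a broken connection.
    return;
  }
  channel.send(text);
}
```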

Related

How to detect server-side disconnection of TcpClient?

The System.Net.Sockets.TcpClient object can only connect to an endpoint once. If the client forces the disconnect, then it is clear that the client needs to be replaced. If the server cancels the connection, then it is possible to test the connection and know when it is no longer connected. However, once it has been disconnected, I cannot see any property or method that differentiates a disconnected, used client from a disconnected, fresh client.
What is the correct way to test for a disconnected and used client?

Multiple XMPP servers to handle upstream GCM messages

I want to have multiple XMPP servers listening for upstream GCM messages, for load balancing and fault tolerance. If I connect two servers to the same sender ID, would Google automatically split the messages between them?
It's stated in Implementing an XMPP Connection Server that, periodically, CCS needs to close down a connection to perform load balancing:
Before it closes the connection, CCS sends a CONNECTION_DRAINING message to indicate that the connection is being drained and will be closed soon. "Draining" refers to shutting off the flow of messages coming into a connection, but allowing whatever is already in the pipeline to continue. When you receive a CONNECTION_DRAINING message, you should immediately begin sending messages to another CCS connection, opening a new connection if necessary. You should, however, keep the original connection open and continue receiving messages that may come over the connection (and ACKing them)—CCS handles initiating a connection close when it is ready.
I might be wrong, but what I understand is that messages are not split, since there is only one active connection. Sending messages over another connection starts as soon as CCS signals that the current connection will be closed.
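For illustration only, that drain handling could be sketched roughly like this in TypeScript; CcsConnection and openCcsConnection are hypothetical wrappers around whatever XMPP library is in use, not a real API:

```typescript
// Hypothetical wrapper types; the real stanza/field names depend on the XMPP library.
interface CcsConnection {
  on(event: "message", handler: (stanza: { messageType?: string }) => void): void;
  close(): void;
}
declare function openCcsConnection(): CcsConnection;

let active = openCcsConnection();        // connection used for new outgoing messages
const draining: CcsConnection[] = [];    // old connections kept open to receive (and ACK)

function watchForDraining(conn: CcsConnection): void {
  conn.on("message", (stanza) => {
    if (stanza.messageType === "CONNECTION_DRAINING") {
      // Stop sending on this connection but keep it open for incoming
      // messages; route new outgoing traffic to a fresh connection.
      draining.push(conn);
      active = openCcsConnection();
      watchForDraining(active);
    }
  });
}

watchForDraining(active);
```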

Does SSL have anything built in that can detect a dropped connection?

Consider this scenario:
              [wanting to write]        [sent token, success]
Application --------------------> SSL ----------------------->
                                                 | *peer drops*
              [waiting to read]                  |
***blocked*** <----------------------------------+
In other words, your application wants to write something, but the SSL internal state is WANT_READ. On the other end, the peer connection has dropped.
Can SSL detect this through some keep-alive check of its own? What can you do in this case?
SSL usually leaves detection of connection problems to the underlying transport layer, i.e. TCP. This means that with TCP keep-alive it can be detected whether the peer has vanished without a proper connection close. Apart from that, there is also the heartbeat extension at the TLS level, but unlike TCP keep-alive it is not universally supported.
If SSL detects a connection that hasn't been terminated correctly from SSL's point of view, i.e. without an SSL close_notify message, it will regard it as a truncation attack and will give you an error message or an exception, depending on which API you are using.
your application wants to write something, but the SSL internal state is WANT_READ. On the other end, the peer connection has dropped.
What it wants you to read is either the close_notify or the error message or exception. Whatever the case, when it says WANT_READ, you have to read.
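To make the transport-layer point concrete, here is a small sketch using Node.js's tls module in TypeScript (the host, port, and keep-alive delay are placeholder values): enabling TCP keep-alive on the underlying socket is what eventually surfaces a vanished peer as an error or close event.

```typescript
import * as tls from "node:tls";

// Sketch: host/port and the keep-alive delay are placeholders.
const socket = tls.connect({ host: "example.com", port: 443 }, () => {
  // Ask the OS to probe the connection once it has been idle for 30 s,
  // so a peer that vanished without a close is eventually reported.
  socket.setKeepAlive(true, 30_000);
  socket.write("ping\n");
});

socket.on("error", (err) => {
  // Raised when TCP (or the TLS layer) notices the connection is broken.
  console.error("connection problem:", err.message);
});

socket.on("close", (hadError) => {
  console.log("connection closed", hadError ? "after an error" : "cleanly");
});
```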

WCF client becomes unusable after internet is lost and reconnected

On this previous question, Tell when wcf client lost connection, one of the commenters states:
Your service should not care whether a network cable was disconnected. One feature of TCP is that unless someone is actively sending data, it can tolerate momentary interruptions in network connectivity. This is even more true in WCF, where there are layers of extra framework to help protect you against network unreliability.
I'm having an issue where this is not working correctly. I have a WCF client that makes a connection to the server using a DuplexChannelFactory. The connection stays open for 3 minutes. I disconnect the client from the internet and then reconnect it. The client regains its internet connection, however any calls made from the server to that client fail. Once the client reconnects (sets up a new connection), it begins working again.
When I pull the plug on the internet, the client throws several exceptions, but the channel is still listed as being in an open state. Once the connection is regained and I make a request from the server to the client, I get errors such as: The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it has been Aborted.
Obviously if the request comes in while the client is offline it won't work, but I'm trying to get it so that this channel recovers once the internet comes back, without having to set up a new connection.
Should this be working as-is, based on the comment I listed above? Or is there something I need to change to make that actually work?
The issue here is that the channel you're trying to use is in a faulted state, and cannot be used any longer (as the error message indicates).
Your client needs to trap (catch) that exception, and then abort the current channel and create a new one. WCF will not do that for you automatically; you have to code for it yourself.
You could also check the CommunicationState of the channel to see if it is faulted, and recover that way.
A final option would be to handle the channel's Faulted event, and when the channel faults, abort it and create a new one.
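The types involved are .NET-specific, but the recovery pattern itself is simple to sketch. The TypeScript illustration below uses hypothetical stand-ins (createChannel, abort, state, call) for DuplexChannelFactory.CreateChannel, Abort(), the channel's State, and a service call; it only shows the catch/abort/recreate flow, not WCF's actual API:

```typescript
// Hypothetical stand-ins for the WCF pieces; names are illustrative only.
interface Channel {
  state: "Opened" | "Faulted" | "Closed";
  call(operation: string): Promise<void>;
  abort(): void;
}
declare function createChannel(): Channel;

let channel = createChannel();

async function callService(operation: string): Promise<void> {
  // If the channel has already faulted, abort it and create a fresh one.
  if (channel.state === "Faulted") {
    channel.abort();
    channel = createChannel();
  }
  try {
    await channel.call(operation);
  } catch (err) {
    // The call failed (e.g. the channel was aborted under us): replace the
    // channel so the next call can succeed, then rethrow or retry as needed.
    channel.abort();
    channel = createChannel();
    throw err;
  }
}
```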

recv() fails on UDP

I’m writing a simple client-server app which for the time being will be for my own personal use. I’m using Winsock for the net communication. I have not done any networking for the last 10 years, so I am quite rusty. I’d like to use as little external code as possible, so I have written a home-made server discovery mechanism, as follows.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket. The server recv()s the broadcast and then sendto()s the ‘name’ of its listening socket to the client’s discovery socket. The client then uses this info to connect to the server (on a different socket). This mechanism should allow the server to bind its listening socket to the first port it can within the dynamic port range (49152-65535), and the clients to discover where the server is and on which port it is listening.
The server part works fine: the server receives the broadcast messages and successfully sends its response.
On the client side the firewall log shows that the server’s response arrives at the machine and that it is addressed to the correct port (to the client’s discovery socket).
But the message never makes it to the client app. I’ve tried doing a recv() in blocking and non-blocking mode, and there is never any data available. ioctlsocket() always shows no data is available, even though I know the packet got to the machine.
The server succeeds in doing a recv() on the broadcast data. But the client fails when doing a recv() of the server’s response, which is addressed to its discovery socket.
The question is very vague: what gotchas should I watch for in this scenario? Why would recv() fail to get a packet which has actually arrived at the machine? The sockets are UDP, so the fact that they are not connected is irrelevant. Or is it?
Many thanks in advance.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket.
The message doesn't need to contain anything. Just broadcast an empty message from the 'discovery socket'. recvfrom() will tell the server where it came from, and it can just reply directly.
The server recv()s the broadcast and then sendto()s the ‘name’ of its listening socket to the client’s discovery socket.
Fair enough, although actually the server could just broadcast its own TCP listening port every 5 seconds or whatever.
On the client side the firewall log shows that the server’s response arrives at the machine and that it is addressed to the correct port (to the client’s discovery socket). But the message never makes it to the client app
If it got to the host it must get to the application. You must have got the ports mixed up somehow. Simplify it as above and retry.
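For what it's worth, here is a minimal sketch of the simplified exchange using Node.js's dgram module rather than Winsock (the port numbers and reply payload are placeholders); the server replies directly to the source address and port of the incoming probe, which is exactly what recvfrom() reports:

```typescript
import * as dgram from "node:dgram";

const DISCOVERY_PORT = 41234; // placeholder: port the server listens on for probes

// Server: answer each probe directly to the source address/port of the datagram.
const server = dgram.createSocket("udp4");
server.on("message", (_probe, rinfo) => {
  const reply = Buffer.from("tcp-port:50000"); // placeholder listening port
  server.send(reply, rinfo.port, rinfo.address);
});
server.bind(DISCOVERY_PORT);

// Client: broadcast a probe from the "discovery socket" (the payload content
// doesn't matter) and wait for the server's reply on that same socket.
const client = dgram.createSocket("udp4");
client.bind(() => {
  client.setBroadcast(true);
  client.send(Buffer.from("discover"), DISCOVERY_PORT, "255.255.255.255");
});
client.on("message", (msg, rinfo) => {
  console.log(`server at ${rinfo.address} says: ${msg.toString()}`);
  client.close();
});
```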
Well, it was one of those stupid situations: Windows Firewall was active, in addition to the other firewall, and was silently dropping packets. Deactivating it solved the problem.
But I still don’t understand how it works, as it was allowing the server to receive packets sent through broadcasting. And when I was at my wits’ end and set the server to answer back through a broadcast, THOSE packets got dropped.
Two days of frustration. I hope someone profits from my experience.