WebRTC connection gets disconnected but sound is still on

I have a video chat application using WebRTC, and there is a slight problem. Below is my code for oniceconnectionstatechange:
connection.oniceconnectionstatechange = function () {
  if (connection.iceConnectionState === 'disconnected' || connection.iceConnectionState === 'closed') {
    console.log('connection gone.');
  }
};
The problem is that sometimes, when the internet connection is poor, my connection gets disconnected and I see "connection gone." in my console, but the sound stays on. Both sides can still hear each other, but the video is gone. What can I do to disconnect my connection completely in such a situation?

You see "connection gone." in your console when the network connection is unstable, because iceConnectionState may enter disconnected as a transient state, e.g. on a flaky network, and the connection can recover by itself (Mozilla Developer Network).
It might be - this is an assumption - that in some or many of these cases the video is dropped because the available bandwidth can't support both audio and video.
If you really want to close the connection completely when a disconnect occurs, you can replace the if statement (including the console.log) in your oniceconnectionstatechange listener with the following code:
if (connection.iceConnectionState === 'disconnected') {
  console.log('Connection gone. Closing connection.');
  connection.close();
}
So each disconnect will be followed by closing the connection completely.
However, this is probably bad practice:
Why would you want to close down the whole connection just because there are temporary problems with the network?
I assume that using this code in a real application will lead to a bad user experience, as disconnects will surface in many cases the user would otherwise not notice. I suggest it is better to display a warning to the user in case of (frequent) disconnects.
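If you want to act on disconnects without tearing everything down immediately, one option is a grace period: warn the user right away, and only close the connection if the state does not recover. A minimal sketch, assuming a 10 second timeout (showNetworkWarning and hideNetworkWarning are hypothetical UI helpers, not part of the original code):
let disconnectTimer = null;

connection.oniceconnectionstatechange = function () {
  const state = connection.iceConnectionState;
  if (state === 'disconnected') {
    // Warn immediately, but give a flaky network a chance to recover.
    showNetworkWarning(); // hypothetical UI helper
    disconnectTimer = setTimeout(function () {
      console.log('Still disconnected after 10s. Closing connection.');
      connection.close();
    }, 10000); // grace period length is an assumption - tune it to your app
  } else if (state === 'connected' || state === 'completed') {
    // The transient disconnect healed itself; cancel the pending shutdown.
    clearTimeout(disconnectTimer);
    hideNetworkWarning(); // hypothetical UI helper
  }
};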

Related

When WebRTCPeer disconnects/closes connection it is never propagated to the server

I am using an implementation of https://github.com/webrtc for my app. I have a networking stack implemented that uses WebRTC. The issue I am seeing is that when the client exits the app, it closes the PeerConnection object, and the server gets a state change from kIceConnectionConnected to kIceConnectionDisconnected, which is fine, but I would expect to see kIceConnectionClosed. The problem with the disconnected state is that on a spotty network you can intermittently get kIceConnectionDisconnected, which may heal itself afterwards, and in that case I don't want to close the connection.
The question is: how do I, as the server, guarantee that the client has quit the app, so that I can tear down the connection on the server side right away?
Edit:
According to https://datatracker.ietf.org/doc/html/draft-ietf-rtcweb-data-channel-13#section-6.7
there should be a channel reset that happens on both sides. How would the server know that it needs to reset the channel?
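One possible workaround - an assumption about your setup, not something from the draft above - is to have the client announce that it is quitting (over the data channel or your signaling channel) before closing the PeerConnection, so the server can tear down immediately instead of waiting out the disconnected state. A client-side sketch in JavaScript; the message format and the idea itself are illustrative only:
// Client side: announce the shutdown explicitly, so the server doesn't
// have to infer it from the kIceConnectionDisconnected transition.
function quitApp(pc, dataChannel) {
  if (dataChannel.readyState === 'open') {
    dataChannel.send(JSON.stringify({ type: 'bye' })); // hypothetical message format
  }
  pc.close();
}
// Server side (in whatever wrapper you have around the native stack):
// tear down on the explicit 'bye' message, and keep the disconnected-state
// timeout only as a fallback for clients that crash without sending it.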

How to detect network disconnects (on an RTCPeerConnection) as soon as possible or the resulting frozen video?

I am using RTCPeerConnections to transmit video and audio in a WebRTC-based video messenger. I am able to detect network disconnects after approximately 7 seconds - yet, this is 7 seconds in which the user is staring at a frozen video and starts to randomly click buttons in the app. I would like to improve the user experience by shortening this timespan - e.g. by informing the user about a network issue if the video freezes for more than 1 second.
Status Quo: I am currently detecting respective situations by listening to the onconnectionstatechange event of the RTCPeerConnection. Yet, the event is only fired approximately 7 seconds after the disconnect. I determined the ~7 seconds by connecting two machines via normal WiFi, using a hardware switch on one of the laptops to switch off the wireless (such switches exist on some older Lenovo models / guarantee an immediate disconnect) and wait for the other machine to detect the event.
Consideration: The root cause being the interruption of the underlying network connection, it would be ideal to detect the changed network status as early as possible (even if it's just transport delays). That said, the disturbance faced by the user ultimately stems from the video that instantly freezes when the network is interrupted. If there is no way to detect the connection issue earlier, it could be an option to detect the frozen video instead. Is either of these two things possible (ideally event-driven, so that I don't need to poll things every second)?
Here's a very simple code snippet describing my current disconnect detection:
myRTCPeerConnection.onconnectionstatechange = (event: Event) => {
  const newCS = myRTCPeerConnection.connectionState;
  if (newCS === "disconnected" || newCS === "failed" || newCS === "closed") {
    // do something on disconnect - e.g. show messages to user and start reconnect
  }
};
(ice)connectionstatechange is the right event in general.
If you want more granularity you'll need to poll getStats and look for stats like framesReceived. But there is no guaranteed frame rate from the other side (e.g. in screensharing you can go below one frame per second).
While the actual ICE statistics like requestsSent seem more useful, they happen much less frequently, only once per second, and you can lose a packet or it can arrive late.
In general this is a question of how reliable the detection of the network failure is. If it is too aggressive, you end up with a poor UX, showing a warning too often.
You might not end up with something significantly better, at the cost of introducing complexity that you need to maintain.
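A minimal sketch of that polling approach, assuming a one-second interval (both the interval and the freeze heuristic are assumptions to tune): compare framesReceived between samples and treat a stalled counter as a possible freeze.
let lastFramesReceived = null;

setInterval(async () => {
  const report = await myRTCPeerConnection.getStats();
  report.forEach((stat) => {
    if (stat.type === 'inbound-rtp' && stat.kind === 'video') {
      if (lastFramesReceived !== null && stat.framesReceived === lastFramesReceived) {
        // No new frames since the last sample: possibly a freeze, possibly
        // just a low-frame-rate source such as screensharing.
        console.log('Video may be frozen');
      }
      lastFramesReceived = stat.framesReceived;
    }
  });
}, 1000);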
Thanks Philipp for your insights - this pointed me in the right direction.
I'm now looking into using getStats to identify freezes. At first sight, polling the framesPerSecond value seems most promising to me. The good thing: it reacts instantly upon disconnect, and it still works when the underlying video stream is paused (I'm allowing the user to pause video submission and implemented it by setting all video tracks to enabled = false). I.e. even if the video tracks are disabled on the sending side, the receiving side still continues to receive the agreed frames per second.
As the getStats function appears weakly documented at the time of writing, and simple examples of its usage are rare, please find my code extract below:
peerRTCPC.getReceivers().forEach((receiver: RTCRtpReceiver) => {
  if (receiver.track.kind === "video") {
    receiver.getStats().then((myStatsReport: RTCStatsReport) => {
      myStatsReport.forEach((statValue: any) => {
        if (statValue.type === "inbound-rtp") {
          console.log(
            "The PC stats returned the framesPerSecond value " +
              statValue["framesPerSecond"] +
              " while the full inbound-rtp stats reflect as " +
              JSON.stringify(statValue)
          );
        }
      });
    });
  }
});
Note that upon disconnect, framesPerSecond does not necessarily go to zero, even though the webrtc-internals screen suggests it does. I am seeing undefined when a disconnect happens.
The runtime impact of polling this at high frequency, or across larger numbers of connections, probably needs to be looked at more closely. Yet, this seems like a good step in the right direction, unless you do it way too frequently.
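Building on that observation, a freeze check should treat a missing framesPerSecond value the same as zero. Something like this inside the inbound-rtp branch above (handlePossibleFreeze is a hypothetical handler, not part of the code above):
const fps = statValue.framesPerSecond;
if (fps === undefined || fps === 0) {
  // framesPerSecond disappears from the report on disconnect rather than
  // dropping to 0, so treat both cases as a potential freeze.
  handlePossibleFreeze(); // hypothetical handler
}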

WebRTC: removeStream and then re-addStream after negotiation: Safe?

After a WebRTC session has been established, I would like to stop sending the stream to the peer at some point and then resume sending later.
If I call removeStream, it does indeed stop sending data. If I then call addStream (with the previously removed stream), it resumes. Great!
However, is this safe? Or do I need to re-negotiate/exchange the SDP after the removal and/or after the re-add? I've seen it mentioned in several places that the SDP needs to be re-negotiated after such changes, but I'm wondering if it is ok in this simple case where the same stream is being removed and re-added?
PS: Just in case anyone wants to suggest it: I don't want to change the enabled state of the track since I still need the local stream playing, even while it is not being sent to the peer.
It will only work in Chrome, and is non-spec, so it's neither web compatible nor future-proof.
The spec has pivoted from streams to tracks, sporting addTrack and removeTrack instead of addStream and removeStream. As a result, the latter two aren't even implemented in Firefox.
Unfortunately, because Chrome hasn't caught up, renegotiation currently works differently in different browsers. It is possible to do, however, with some effort.
The new model has a cleaner separation between streams and what's sent in RTCPeerConnection.
Instead of renegotiating, setting track.enabled = false is a good idea. You can still play a local view of your video by cloning the track.
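A minimal sketch of that approach, assuming pc is your existing RTCPeerConnection and localVideo is your preview video element (both names are assumptions): clone the camera track, keep the clone for the local view, and toggle enabled only on the track handed to the peer connection.
async function startSending(pc, localVideo) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const sendTrack = stream.getVideoTracks()[0];

  // The clone has its own independent `enabled` flag, so disabling the
  // sent track does not black out the local preview.
  const previewTrack = sendTrack.clone();
  localVideo.srcObject = new MediaStream([previewTrack]);

  pc.addTrack(sendTrack, stream);

  // Toggle what the peer receives; the local preview keeps playing.
  return (on) => { sendTrack.enabled = on; };
}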

What's the correct way to use sockets in iOS for the different application states?

Let's say I have a client application on iOS which is connected to a server using a C socket.
I receive and send data on this socket.
Now the user closes the app, does something else (let's say checks his mail) and returns to the application.
My (bundle of) question(s):
What to do with the socket connection?
Should I close it and try to reopen the socket when relaunching the application?
Or can I leave the socket open? If so, what happens with the data received on the connection?
Other situations to consider are:
I do not know when the user returns to the application.
I do not know if the user stays in the same network.
Thanks
The connection should be closed, and the received data saved (if necessary), when the application is about to 'resign active'.
The connection will not be able to run in the background, and you will not receive any data in the background.
When the application resumes from the background, reopen the connection and continue.
These methods will help you keep track of your application's state
– applicationWillResignActive:
– applicationDidEnterBackground:
– applicationWillEnterForeground:
I would close the connection and open a new one after the app comes back to the foreground. I wrote an app some months ago where the app talked to a radio station playout server to display some information.
- you don't know how long the app stays in background
- you don't know if the user stays within the same network
- you don't know if the user forgets about the app still running in the background
...
I would vote for closing the socket connection.

Sockets and Timeout Errors

I'm building a program that has a very basic premise.
For X amount of Objects
Open Connection
Perform Actions
Close Connection
Open Next
Each of these connections is made through a SOCKS5 proxy, and after about the 200th connection I get "The operation has timed out" errors. I have tested all the proxies and they work just fine, and the really weird thing is that if I shut down the program and restart it the problems go away. So I'm led to believe that when I'm closing my connection it's not really being closed and the computer is being overloaded. How can I force all SOCKS connections associated with a class to close?
socket.Shutdown(SocketShutdown.Both);
//socket.Close();
socket.Disconnect(true);
socket = null;
In response to a tip to use netstat, I checked it out. I noticed connections were lingering but would eventually go away. However, the problem still remains: after about the 100th connection, with a 5 second pause between connections, I get timeout errors. If I close the program and restart it, they go away. So for some reason I feel the connections are leaving something behind, but netstat doesn't show it. I've even tried adding the instances of the client to a list, removing each one from the list when it is finished, and then setting it to null.
Is there a way to kill a port? Maybe that would work, if I killed the port the connection was being made on? Is it possible this is a Windows OS issue, something that's used to prevent viruses? I'm making roughly a connection a minute and maintaining that connection for about 1 minute before moving on to the next, with at least 20 concurrent connections at the same time. What doesn't make sense to me is that shutting down the program seems to clean up whatever resources I'm not cleaning up in my code. I'm using a class I found on the internet that allows SOCKS5 proxies to be used with the Socket class.
So I'm really at a loss; any advice or direction would be great - it doesn't have to be pretty. At this point I'm tempted to write to a text file where I was in my connection list, shut down the program, and then have another program restart it to pick up where it left off.
Sounds like your connections aren't really closed. Without seeing the code, it's hard to troubleshoot this; can you boil it down to a program that loops through an open-close sequence?
If the connection doesn't close as expected, you can probably see what state it is in with netstat. Do you have 200 established connections, or are they in some sort of closing state?
Sockets implement IDisposable. Only calling Dispose or Close will cause the socket to give up its unmanaged resources in a deterministic manner. This is causing you to run out of the resources that the socket uses (probably a network handle of some sort), even though you may not have any managed objects using them.
So you should probably just do
socket.Shutdown(SocketShutdown.Both);
socket.Close();
To be clear, setting the socket to null does not do this: it only causes the socket to be placed on the freachable queue, to have its finalizer called when the GC gets around to processing that queue.
You may want to review this article, which gives a good model of how unmanaged resources are dealt with in .NET.
Update
I checked, and Sockets do indeed contain a handle to a WSASocket. So unless you call Close or Dispose, you'll have to wait until the finalizers run (or the application exits) to get them back.