I'm building a program that has a very basic premise.
For X amount of Objects
    Open Connection
    Perform Actions
    Close Connection
    Open Next
Each of these connections is made through a SOCKS5 proxy, and after about the 200th connection I get "The operation has timed out" errors. I have tested all the proxies and they work just fine, and the really weird thing is that if I shut down the program and restart it, the problems go away. So I'm left to believe that when I'm closing my connection it's not really being closed and the computer is being overloaded. How can I force all SOCKS connections associated with a class to close?
socket.Shutdown(SocketShutdown.Both);
//socket.Close();
socket.Disconnect(true);
socket = null;
In response to a tip to use netstat, I checked it out. I noticed connections were lingering but would eventually go away. However, the problem remains: after about the 100th connection, with a 5-second pause between connections, I get timeout errors. If I close the program and restart it they go away, so for some reason I feel that the connections are leaving something behind, even though netstat doesn't show it. I've even tried adding the instances of the client to a list, removing each one from the list when it finishes, and then setting it to null.

Is there a way to kill a port? Maybe that would work, if I killed the port the connection was being made on? Is it possible this is a Windows OS issue, something that's meant to prevent viruses? I'm making roughly one connection a minute and maintaining that connection for about a minute before moving on to the next, with at least 20 concurrent connections at the same time. What doesn't make sense to me is that shutting down the program seems to clean up whatever resources I'm not cleaning up in my code. I'm using a class I found on the internet that allows SOCKS5 proxies to be used with the Socket class.

So I'm really at a loss; any advice or direction to head would be great, and it doesn't have to be pretty. I'm half tempted to write to a text file where I was in my connection list, shut down the program, and have another program restart it to pick up where it left off.
Sounds like your connections aren't really closed. Without seeing the code, it's hard to troubleshoot this; can you boil it down to a program that loops through an open-close sequence?
If the connection doesn't close as expected, you can probably see what state it is in with netstat. Do you have 200 established connections, or are they in some sort of closing state?
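For example, on Windows something like this would list every connection touching a given port, along with its state and owning process ID (1080 here is just a placeholder for whatever port your proxies use):

netstat -ano | findstr :1080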
Sockets implement IDisposable. Only calling Dispose or Close will cause the socket to give up its unmanaged resources in a deterministic manner. This is causing you to run out of the resources the socket uses (probably a network handle of some sort), even though no managed object may be using them any longer.
So you should probably just do
socket.Shutdown(SocketShutdown.Both);
socket.Close();
To be clear, setting the socket to null does not do this. It only makes the socket eligible for collection; the socket is then placed on the freachable queue, and its finalizer runs whenever the garbage collector gets around to processing that queue.
You may want to review this article, which gives a good model of how unmanaged resources are dealt with in .NET.
Update
I checked, and Sockets do indeed contain a handle to a WSASocket. So unless you call Close or Dispose, you'll have to wait until the finalizers run (or the application exits) to get them back.
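As a minimal sketch of what deterministic cleanup looks like (host and port are placeholders), wrapping the socket in a using block guarantees Dispose, and therefore the release of the underlying handle, the moment the block exits rather than whenever a finalizer eventually runs:

using System.Net.Sockets;

class CleanupSketch
{
    static void Talk(string host, int port)
    {
        using (var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
        {
            socket.Connect(host, port);
            // ... perform actions on the connection ...
            socket.Shutdown(SocketShutdown.Both); // polite TCP shutdown first
        } // Dispose (equivalent to Close) runs here and releases the handle
    }
}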
Related
I have a multi-threaded (Linux) server that registers async_writes and async_reads on the same native file descriptor through a socket object. I noticed that under very heavy load, when the server was dropping connections, a client would on very rare occasions receive a garbled first message.
Tracking it down, the async_read detects an error on the socket and closes the socket. This closes the native file descriptor. If that file descriptor is reused before the original async_write has a chance to fire, it will find its native file descriptor valid and proceed to send its message (which is really a message from a previous session).
The only way I could see to fix this was to make the async_read and async_write callbacks aware of whether other callbacks were registered, and only close the socket from the last one.
Has anyone seen this issue?
Haven't seen it, but it sounds plausible. Although I am surprised to see a new native file descriptor getting the exact same number as a recently closed descriptor.
You might want to put the socket in a shared_ptr and query shared_ptr::unique() in both async_read and async_write. That would be the easiest way to let the other callback know whether both callbacks are registered. If unique() is true you can be sure that no one else is still using this socket, and you can close it.
So if the connection gets dropped, async_read can check unique(). If it is true, close the socket. And let go of the shared_ptr in either case.
Then, when async_write also fires, it will find unique() true and can close the socket, unless async_read has already closed it.
The only drawback, of course, is that async_write also has to fire (perhaps with an error code) in order for the socket to be closed.
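The discussion above is C++/Asio, but the underlying idea, that only the last outstanding callback may close the socket, is language-neutral. As a rough illustration only (the class and names here are hypothetical, not Asio), the same thing can be expressed with an explicit reference count:

using System.Net.Sockets;
using System.Threading;

class SharedSocket
{
    private readonly Socket _socket;
    private int _pendingCallbacks;

    public SharedSocket(Socket socket, int pendingCallbacks)
    {
        _socket = socket;
        _pendingCallbacks = pendingCallbacks; // e.g. 2: one read and one write outstanding
    }

    // Called from every completion handler, on success or error alike.
    public void CallbackFinished()
    {
        // Only the callback that brings the count to zero closes the socket,
        // so the descriptor cannot be reused while another operation is pending.
        if (Interlocked.Decrement(ref _pendingCallbacks) == 0)
        {
            _socket.Close();
        }
    }
}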
Oh, I've seen exactly this in production code. (Much fun: we would be talking a proprietary protocol on a TCP socket to a MySQL server.) The problem is when some thread "handles" (mis-handles) errors by closing sockets using the native handle (fd). Don't. Use shutdown (perhaps with cancel) instead and let the destructor take care of close. Of course, the real problem is the non-owning copies of the handle (fd), which are the cause of the resource race.
Critical Note:
Tracking it down, the async_read detects an error on the socket and closes the socket. This closes the native file descriptor
That's patently UNTRUE for Asio itself. Perhaps you have (third-party) code in the completion handlers doing that, but as I mention above, you cannot afford to do that.
Every once in a while Bitmex disconnects our websocket connection, which forces us to reconnect. However, they provide a connection pool of 40 connections per hour. In times of low volatility this doesn't seem to be a problem at all, but as soon as trading activity goes up we run through these 40 connections in no time, eventually leaving our connection dead.
We do have a keep-alive but it does not solve the problem at all.
We haven't seen any specifics in the API documentation on how to deal with this problem, or on the specific reasons we get so many close opcodes whenever volatility rises.
Does anyone know if we are doing something wrong?
EDIT: heartbeat is also in place
I suggest implementing heartbeats as per https://www.bitmex.com/app/wsAPI#Heartbeats
In general, WebSocket connections can drop if the connection remains idle for too long without transmitting any data.
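A minimal sketch of such an application-level heartbeat, assuming (as the BitMEX docs describe) that the server answers a "ping" text frame with "pong"; the class name and the five-second interval are illustrative:

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class HeartbeatSketch
{
    // Sends "ping" every five seconds while the socket stays open; the
    // read loop elsewhere should treat a missing "pong" within some
    // timeout as a dead connection and trigger a reconnect.
    static async Task RunHeartbeatAsync(ClientWebSocket ws, CancellationToken ct)
    {
        byte[] ping = Encoding.UTF8.GetBytes("ping");
        while (ws.State == WebSocketState.Open && !ct.IsCancellationRequested)
        {
            await ws.SendAsync(new ArraySegment<byte>(ping), WebSocketMessageType.Text, true, ct);
            await Task.Delay(TimeSpan.FromSeconds(5), ct);
        }
    }
}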
I have an application that needs to stream data up to a server over a particular port. When the application starts it performs a number of checks, for network connectivity, a reachable host, among others. I'd like it to also check the port to ensure it's able to receive data.
I have this code:
Private Function CheckPort(ByVal Host As String) As Boolean
    Dim IPs() As IPAddress = Dns.GetHostAddresses(Host)
    Dim s As New Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)
    s.Connect(IPs(0), <port # removed>)
    If s.Poll(-1, SelectMode.SelectWrite) Then
        Return True
    End If
    Return False
End Function
The code works well, but I'm concerned that by performing the check I might be inadvertently preventing subsequent messages from the application from reaching the port. Will performing this check interfere with the actual data I want to send to the host? Should I close the socket with s.Close() before returning from the function?
I mainly agree with CodeCaster's response.
If you say the server is buggy, it is also probable that a few minutes after you check the port connection, the connection is broken or even closed by the server.
If you still want to do it as a means of reducing the risk that the user writes a message that later cannot be sent, this would be a good approach to follow. An alternative is to save the data as a draft, locally or somewhere else, and send it later when the server is available.
Now, going to your code:
You are opening a connection inside the function CheckPort, so when the function finishes you will lose any reference to s, the open socket. In C you would have a resource leak; in Java the garbage collector would take care of the Socket and close it. I don't know how it works in VB, but I'd close that socket before leaving the function; in any case it's a healthy practice.
Additionally, when you have data ready to send, you won't be able to send it on the same connection if you have lost the reference (s) to the open socket or have closed the socket (unless VB has some trick that I don't know about). You will have to open a new connection to send the data. Don't worry, you will be able to do so even if you made the check before: the server will see it as just a different connection, maybe from a different source port (or possibly the same one, if you closed the earlier connection first; that depends on the OS).
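To make that concrete, here is a sketch in C# of the same CheckPort that closes the probe socket before returning; the logic mirrors the VB code above, and the port parameter stands in for the number removed from the question:

using System.Net;
using System.Net.Sockets;

static class PortCheck
{
    public static bool CheckPort(string host, int port)
    {
        IPAddress[] ips = Dns.GetHostAddresses(host);
        using (var s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
        {
            try
            {
                s.Connect(ips[0], port);
                return s.Poll(-1, SelectMode.SelectWrite); // writable => reachable
            }
            catch (SocketException)
            {
                return false; // connect failed: host or port unreachable
            }
        } // Dispose closes the probe connection before we return
    }
}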
Checking a port before connecting to it is as useful as verifying file existence before opening it: not at all.
The file can be deleted in between checking and opening, and your socket can be closed for a variety of reasons.
Just connect and write to it, it'll throw an appropriate exception when it won't work.
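In that spirit, a minimal sketch (names are placeholders): connect when you actually have data, write it, and let the exception report any failure at the moment it matters:

using System.Net.Sockets;

static class SendOnce
{
    public static bool TrySend(string host, int port, byte[] payload)
    {
        try
        {
            using (var client = new TcpClient(host, port)) // connects in the constructor
            using (NetworkStream stream = client.GetStream())
            {
                stream.Write(payload, 0, payload.Length);
                return true;
            }
        }
        catch (SocketException)
        {
            return false; // unreachable host/port surfaces here, when it matters
        }
    }
}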
We have a custom listener on our WCF solution, which inherits from ChannelListenerBase<IDuplexSessionChannel>.
We have a badly-behaved client (out of our control), which has a TCP conversation with us along the following lines:
SYN
SYN,ACK
RST
Basically, they're trying to perform operations on the socket before it's established, failing, and closing the socket.
In our OnEndAcceptChannel code, we end up not being able to create a channel, because the underlying Socket has already been closed by the time we get there, and we get a SocketException. This then seems to kill the listener dead, stopping it from accepting further connections.
From OnEndAcceptChannel, we've tried returning null, throwing the exception, and faulting the listener so that it can be restarted higher up the call stack. The latter is the only solution we've found that allows the channel to keep listening, but it has the unpleasant (and unacceptable) side effect of killing all established connections to the service.
Anybody got any suggestions of how to handle this situation, keep listening, and not lose established connections...?
We managed to fix it in the end. Instead of returning null, we returned an instance of a dummy class implementing IDuplexSessionChannel that is essentially a dumb state machine and nothing more; it fools WCF into carrying on regardless.
Is it necessary to close the connection of a TcpListener or TcpClient after every message received, or is it possible to close it at a later time while it continues to receive data? Is there any major security issue with leaving it open and listening? I know trojans tend to open a listener and leave it open; will my program be detected as such? It's just a simple chat program....
Thanks for the help!
This is in vb.net.
It depends what the protocol is. If the protocol expects a new connection for each message, then you should close it. (This is like HTTP 1.0.)
If the protocol allows multiple messages to be sent down the same connection, then it's entirely reasonable to leave it open. (This is like HTTP 1.1 and most native database connections.)
I wouldn't expect your connection to be treated with undue suspicion just for being kept open.
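For illustration, a minimal sketch of the keep-it-open style (shown in C#; the same classes exist in VB.NET, and the port is a placeholder): one accepted client, many messages read in a loop on the same connection:

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

class ChatLoopSketch
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9000); // placeholder port
        listener.Start();
        using (TcpClient client = listener.AcceptTcpClient())
        using (var reader = new StreamReader(client.GetStream()))
        {
            string line;
            // One connection, many messages: ReadLine returns null only
            // when the peer closes its end of the connection.
            while ((line = reader.ReadLine()) != null)
            {
                Console.WriteLine("Received: " + line);
            }
        }
        listener.Stop();
    }
}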