Is it necessary to close the connection of a TcpListener or TcpClient after every message received, or is it possible to close it at a later time while it continues to receive data? Is there any major security issue with leaving it open and listening? I know trojans tend to open a listener and leave it open; will my program be detected as such? It's just a simple chat program.
Thanks for the help!
This is in VB.NET.
It depends what the protocol is. If the protocol expects a new connection for each message, then you should close it. (This is like HTTP 1.0.)
If the protocol allows multiple messages to be sent down the same connection, then it's entirely reasonable to leave it open. (This is like HTTP 1.1 and most native database connections.)
I wouldn't expect your connection to be treated with undue suspicion just for being kept open.
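For a chat program that usually means the server accepts once and then keeps reading from the same connection. A minimal VB.NET sketch of that shape, assuming a line-delimited message format and a placeholder port of 5000 (neither is from your actual protocol):

Imports System.Net
Imports System.Net.Sockets
Imports System.IO

Module PersistentConnectionSketch
    Sub Main()
        ' Placeholder port; substitute whatever your chat program actually uses.
        Dim listener As New TcpListener(IPAddress.Any, 5000)
        listener.Start()

        ' Accept one client and keep the connection open across many messages.
        Dim client As TcpClient = listener.AcceptTcpClient()
        Dim reader As New StreamReader(client.GetStream())

        Dim line As String = reader.ReadLine()
        While line IsNot Nothing
            Console.WriteLine("Received: " & line)
            line = reader.ReadLine()   ' Nothing means the peer closed the connection
        End While

        client.Close()
        listener.Stop()
    End Sub
End Module

The connection is only closed once ReadLine returns Nothing, i.e. once the other side has disconnected.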
I have a multi-threaded (Linux) server that registers async_writes and async_reads on the same native file descriptor through a socket object. I noticed that under very heavy load, when the server was dropping connections, on very rare occasions a client would receive a garbled first message.
Tracking it down, the async_read detects an error on the socket and closes the socket. This closes the native file descriptor. If that file descriptor is reused before the original async_write has a chance to fire, it will find its native file descriptor valid and proceed to send its message (which is really a message from a previous session).
The only way I could see to fix this was to make the async_read and async_write callbacks aware of whether other callbacks were still registered, and only close the socket from the last one.
Has anyone seen this issue?
Haven't seen it, but it sounds plausible. Although I am surprised to see a new native file descriptor getting the exact same number as a recently closed descriptor.
You might want to put the socket in a shared_ptr and query shared_ptr::unique() (i.e. use_count() == 1) in both the async_read and async_write handlers. That would be the easiest way to let each callback know whether the other is still registered. If unique() returns true, you can be sure that no one else is still using the socket and you can close it.
So if the connection gets dropped, async_read can check unique(). If it is true, close the socket; in either case, let go of the shared_ptr.
Then, when async_write also fires, it will find unique() true and can close the socket, unless async_read has already closed it.
The only drawback, of course, is that async_write also has to fire (perhaps with an error code) before the socket can be closed.
Oh, I've seen exactly this in production code. (Much fun: we were talking a proprietary protocol over a TCP socket to a MySQL server.) The problem arises when some thread "handles" (mis-handles) errors by closing sockets via the native handle (fd). Don't. Use shutdown (perhaps with cancel) instead and let the destructor take care of close. Of course, the real problem is the non-owning copies of the handle (fd), which are the cause of the resource race.
Critical Note:
Tracking it down, the async_read detects an error on the socket and closes the socket. This closes the native file descriptor
That's patently UNTRUE for Asio itself. Perhaps you have (third-party) code in the completion handlers doing that, but as I mention above, you cannot afford to do that.
I don't quite understand exactly how a few of the features are shared when a TcpListener and TcpClient communicate.
Let's say the following code is run (for now ignore synchronisation):
Server:
Dim server As New TcpListener(localAddr, port)
server.Start()
Dim client As TcpClient = server.AcceptTcpClient()
Client:
Dim client As New TcpClient
client.Connect(hostAddr, port)
And the connection is successfully established. Now there are two TcpClient instances — one on server side and one on client side. However, they share the same network stream through TcpClient.GetStream().
I'm slightly confused — does the client pass itself and all of its properties to the server when server.AcceptTcpClient() is called?
What about any changes to either of the TcpClient instances after this? When the connection shuts down I call this on both sides:
client.GetStream.Close()
client.Close()
But I get an exception from TcpClient.GetStream.Close() on whichever side executes this code last, telling me that the client is already closed (this happens when the above code isn't perfectly synchronised on both sides).
What about the .SendBufferSize and .ReceiveBufferSize properties? Do I need to set these on both sides of the connection?
Hope someone can clear up my confusion with an explanation of how exactly the TcpClient/Listener classes work during the communication — so far I haven't been able to find documentation explaining what exactly happens.
The TCP protocol does not know what a TcpClient is. This is a .NET concept. TCP does not reference .NET concepts at all. For that reason no objects will be sent across the wire.
The only thing that is sent is the bytes you explicitly write.
Each side has its own isolated objects. Both sides use their own TcpClient object, which acts like a handle to the TCP connection.
client.GetStream.Close()
client.Close()
This is not the proper shutdown sequence. The first line is redundant (the second already closes the stream), and on its own it would be incomplete anyway. Close should never be called. The best way is to wrap the client in a Using block; the second best way is to call Dispose on the client. The Close methods in the BCL are historical accidents and should be ignored; in every case I have looked at they do the same thing as Dispose.
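As a rough VB.NET illustration of that advice (the host name, port and message here are placeholders), the Using block disposes the client, and with it the stream and socket, even if an exception is thrown:

Imports System.Net.Sockets
Imports System.IO

Module DisposeSketch
    Sub SendOneMessage()
        ' Using calls Dispose when the block exits, which also closes the
        ' NetworkStream and the underlying socket; no explicit Close() needed.
        Using client As New TcpClient("example.invalid", 5000)
            Dim writer As New StreamWriter(client.GetStream())
            writer.WriteLine("hello")
            writer.Flush()
        End Using
    End Sub
End Module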
Don't touch the buffer sizes. They control how much memory the kernel uses to buffer data on your end of the connection. The kernel is capable of managing this by itself.
Don't read the buffer sizes in your code either; they are meaningless to you. Also, don't use the DataAvailable property: if it returns false/0, that does not mean that no data can be read.
The Connected property is not necessarily synchronized on both sides. If the network goes down there can be no synchronization. Never look at the Connected property. If it says true the next nanosecond it could be false. So it's not possible to make decisions based on that property. You do not need to test anything. Just Read/Write and handle the exceptions by aborting.
Regarding packets: you are not sending packets when you Write. TCP is a boundaryless stream of bytes; the kernel packetizes your data internally. You do not need to split data into specific sizes. Just use a fairly big buffer, such as 8K (or more on fast networks). The write size only matters for saving CPU time by being less chatty (assuming Nagle's algorithm is enabled).
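A hedged sketch of what this looks like on the reading side in VB.NET: a single Read can return anything from 1 byte up to the buffer size, regardless of how the sender's Write calls were sized, and 0 means the peer has closed the connection.

Imports System.Net.Sockets
Imports System.Text

Module ReceiveSketch
    Sub ReceiveLoop(stream As NetworkStream)
        Dim buffer(8191) As Byte   ' 8K buffer, as suggested above
        While True
            Dim bytesRead As Integer = stream.Read(buffer, 0, buffer.Length)
            If bytesRead = 0 Then Exit While   ' peer closed the connection cleanly
            ' One Write on the other side may arrive split across several Reads,
            ' or merged with the next message; your own framing has to handle that.
            Dim text As String = Encoding.UTF8.GetString(buffer, 0, bytesRead)
            Console.Write(text)
        End While
    End Sub
End Module

Decoding each chunk like this can split a multi-byte UTF-8 character; real code would use a StreamReader or a System.Text.Decoder. Any IOException from Read just means the connection is gone; catch it and abort, per the advice above.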
I am building a WebRTC app where two users are selected at random and then connect to each other to chat. Both clients keep an open WebSocket connection and I am planning to use this to exchange their offers/answers to signal a connection. The case I am trying to account for is when there is a peer that intentionally sends bad configuration information, and also when the peer might spontaneously disconnect in the middle of the signaling exchange.
My solution to the first case is to have the server keep track of the state of the exchange, so when the connection is first established I would expect user A to provide an offer and user B an answer. Is this appropriate, or should this be implemented exclusively client-side?
My solution to the second problem feels to me like a hack. What I am trying to do is notify the user that a match has been made, then have the client set a timeout of, say, 20 seconds; if a connection hasn't been made in that amount of time, it should move on...
Are these appropriate solutions? How do you reliably establish a WebRTC connection when either peer can't be trusted? Should the signaling server be concerned with the state of the exchange?
Sounds like you're more concerned about call set-up errors than about being able to trust the identity of the remote peer. They are two very different problems.
Assuming it is the call set-up errors you are concerned about, you shouldn't be trying to avoid them; you should be trying to make sure your application can handle them. Network connection issues will always crop up and need to be handled.
Setting a timer for the establishment of a WebRTC call to complete is a logical solution. Displaying a warning to the user that the time limit is approaching also seems like a good idea. SIP is a signalling protocol and it has a defined timeout for the completion of a transaction and if it doesn't complete within that time it will generate an error response. You could use the same approach.
I have an application that needs to stream data up to a server over a particular port. When the application starts it performs a number of checks, for network connectivity, a reachable host, among others. I'd like it to also check the port to ensure it's able to receive data.
I have this code:
Private Function CheckPort(ByVal Host As String) As Boolean
Dim IPs() As IPAddress = Dns.GetHostAddresses(Host)
Dim s As New Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)
s.Connect(IPs(0), <port # removed>)
If s.Poll(-1, SelectMode.SelectWrite) Then
Return True
End If
Return False
End Function
The code works well, but I'm concerned that by performing the check I might inadvertently prevent subsequent messages from the application from reaching the port. Will performing this check interfere with the actual data I want to send to the host? Should I close the socket with s.Close() before returning from the function?
I mainly agree with CodeCaster's response.
If you say the server is buggy, it is also probable that a few minutes after you check the port connection, the connection is broken or even closed by the server.
If you still want to do it as a way to reduce the risk of the user writing a message that later cannot be sent, this would be a reasonable approach to follow. An alternative is to save the data as a draft, locally or somewhere else, and send it later when the server is available.
Now, going to your code:
You are opening a connection inside the function CheckPort, so when the function finishes you lose any reference to s, the open socket. In C you would have a resource leak; in Java the garbage collector would eventually take care of the Socket and close it. I don't know exactly how it works in VB, but I'd close that socket before leaving the function; in any case it's a healthy practice.
Additionally, once you lose the reference s to the open socket or you close the socket, you won't be able to send your data over that same connection (unless VB has some trick I don't know about). You will have to open a new connection to send the data. Don't worry, you will be able to do that even though you made the check first. The server will simply see it as a different connection, probably from a different source port (it could even be the same one if you closed the previous connection first; that depends on the OS).
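If you do keep the check, a sketch of the same function with the socket disposed before returning might look like this in VB.NET (the port is passed in here only because the real value was removed from the question):

Imports System.Net
Imports System.Net.Sockets

Module PortCheckSketch
    Private Function CheckPort(ByVal host As String, ByVal port As Integer) As Boolean
        Dim ips() As IPAddress = Dns.GetHostAddresses(host)
        ' Using disposes (and therefore closes) the socket on every exit path.
        Using s As New Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)
            Try
                s.Connect(ips(0), port)
                Return s.Poll(-1, SelectMode.SelectWrite)
            Catch ex As SocketException
                Return False   ' unreachable host, refused connection, etc.
            End Try
        End Using
    End Function
End Module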
Checking a port before connecting to it is as useful as verifying file existence before opening it: not at all.
The file can be deleted in between checking and opening, and your socket can be closed for a variety of reasons.
Just connect and write to it, it'll throw an appropriate exception when it won't work.
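In that spirit, a minimal VB.NET sketch of the "just try it" approach (host, port and payload are placeholders): connect when you actually have data, write it, and treat any exception as the failure you would otherwise have tried to predict.

Imports System.IO
Imports System.Net.Sockets

Module SendSketch
    Sub SendPayload(host As String, port As Integer, payload As Byte())
        Try
            Using client As New TcpClient(host, port)
                client.GetStream().Write(payload, 0, payload.Length)
            End Using
        Catch ex As SocketException
            ' Could not connect: host down, port closed, no network, etc.
        Catch ex As IOException
            ' The connection dropped mid-write.
        End Try
    End Sub
End Module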
I'm building a program that has a very basic premise.
For X amount of Objects
Open Connection
Perform Actions
Close Connection
Open Next
Each of these connections is made through a SOCKS5 proxy, and after about the 200th connection I get "The operation has timed out" errors. I have tested all the proxies and they work just fine, and the really weird thing is that if I shut down the program and restart it the problems go away. So I'm left to believe that when I'm closing my connection it isn't really being closed and the computer is being overloaded. How can I force all SOCKS connections associated with a class to close?
socket.Shutdown(SocketShutdown.Both);
//socket.Close();
socket.Disconnect(true);
socket = null;
In response to a tip to use netstat, I checked it out. I noticed connections were lingering but would eventually go away. However, the problem remains: after about the 100th connection (with a 5 second pause between connections) I get timeout errors. If I close the program and restart it they go away. So for some reason I feel the connections are leaving something behind, but netstat doesn't show it.

I've even tried adding the instances of the client to a list and, each time one is finished, removing it from the list and setting it to null. Is there a way to kill a port? Maybe that would work, if I killed the port the connection was being made on? Is it possible this is a Windows OS issue, something that's used to prevent viruses? I'm making roughly a connection a minute and maintaining that connection for about 1 minute before moving on to the next, with at least 20 concurrent connections at the same time.

What doesn't make sense to me is that shutting down the program seems to clean up whatever resources I'm not cleaning up in my code. I'm using a class I found on the internet that allows SOCKS5 proxies to be used with the Socket class. So I'm really at a loss; any advice or direction would be great, and it doesn't have to be pretty. I'm half tempted to write to a text file where I was in my connection list, shut the program down, and have another program restart it to pick up where it left off.
Sounds like your connections aren't really closed. Without seeing the code, it's hard to troubleshoot this; can you boil it down to a program that loops through an open-close sequence?
If the connection doesn't close as expected, you can probably see what state it is in with netstat. Do you have 200 established connections, or are they in some sort of closing state?
Sockets implement IDisposable. Only calling Dispose or Close will cause the socket to give up its unmanaged resources in a deterministic manner. This is causing you to run out of the resources that the socket uses (probably a network handle of some sort), even though you may no longer have any managed objects using them.
So you should probably just do
socket.Shutdown(SocketShutdown.Both);
socket.Close();
To be clear, setting the socket to null does not do this. It only drops your reference, making the socket eligible for garbage collection; its finalizer is then queued (the freachable queue) and runs whenever the GC gets around to processing it, which is not deterministic.
You may want to review this article, which gives a good model of how unmanaged resources are dealt with in .NET.
Update
I checked, and Sockets do indeed contain a handle to a WSASocket. So unless you call Close or Dispose, you'll have to wait until the finalizers run (or the application exits) to get those handles back.