So we have about 30 UDP messages coming in per second, and on each message we perform processing and database operations. My first thought was to complete the operations, then call the BeginReceive method again and let the next message come in. Since there is a slight interim between when a message is received and when BeginReceive is called again, what happens to messages that arrive in this interim? Also, if I move the BeginReceive call to the beginning of the asynchronous callback, before the operations, would that create threading complications, or is it even possible to do safely?
Public Sub Start()
    _udpclient.BeginReceive(New AsyncCallback(AddressOf ReceiveCallback), Nothing)
End Sub

METHOD 1:

Public Sub ReceiveCallback(ar As IAsyncResult)
    Try
        Dim bytes As Byte() = _udpclient.EndReceive(ar, _ipendpoint)
        HeavyProcessAndDBOperation(bytes)
        _udpclient.BeginReceive(New AsyncCallback(AddressOf ReceiveCallback), Nothing)
    Catch ex1 As Exception
        ' Note: swallowing the exception here also silently stops the
        ' receive loop, since BeginReceive is never called again.
    End Try
End Sub

METHOD 2:

Public Sub ReceiveCallback(ar As IAsyncResult)
    Try
        Dim bytes As Byte() = _udpclient.EndReceive(ar, _ipendpoint)
        _udpclient.BeginReceive(New AsyncCallback(AddressOf ReceiveCallback), Nothing)
        HeavyProcessAndDBOperation(bytes)
    Catch ex1 As Exception
        ' Same caveat: an empty Catch hides errors, and if EndReceive
        ' throws, the receive loop stops here too.
    End Try
End Sub
You want the client to start receiving again as soon as possible after a packet arrives.
If I remember correctly, each ReceiveCallback is invoked on a thread-pool thread, so calling BeginReceive right away inside the callback (your method 2) is the pattern you want here.
You'll get one callback invocation per packet, essentially, and each one immediately re-arms BeginReceive, which lets you pull data off the socket as fast as possible.
As for packets in the interim: keep in mind that UDP is a connection-less protocol with no delivery guarantees. Datagrams that arrive while no receive is pending are queued in the OS socket receive buffer, so they are not lost outright; they are dropped only once that buffer overflows. Method 2 keeps the window between EndReceive and BeginReceive as small as possible, which minimizes (but does not eliminate) the chance of loss.
If you opt for method 1, every packet that arrives while HeavyProcessAndDBOperation() is running has to sit in that buffer, and at 30 messages per second a slow database operation can easily overflow it.
The ReceiveCallback threads are independent of each other, and data processed in one will not be affected by data processed in the others, unless they touch shared state such as a field or a database connection.
What you may want to do here to protect those shared fields (and I don't know the best solution, so take this with a grain of salt) is to fire another thread from each ReceiveCallback that runs HeavyProcessAndDBOperation. Within that thread, put a lock around the actual database operations and shared-field processing. This increases the time needed to process each packet, but since it happens on another thread, it won't delay the receive operation for other packets.
In VB.NET the usual construct for this is SyncLock, but it's been a few years since I've done this sort of work, so you may want to do a bit of research there.
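The shape of that pattern (re-arm the receive immediately, hand the packet off, and serialize only the shared work behind a lock) can be sketched roughly as below. This is Python rather than VB.NET, and every name here (heavy_process_and_db_operation, the queue, the lock) is hypothetical, not the poster's code:

```python
import queue
import socket
import threading

packet_queue: queue.Queue = queue.Queue()
db_lock = threading.Lock()
results = []  # stands in for the database

def heavy_process_and_db_operation(data: bytes) -> None:
    with db_lock:              # only the shared-state part is serialized
        results.append(data)

def worker() -> None:
    while True:
        data = packet_queue.get()
        if data is None:       # sentinel: shut down the worker
            break
        heavy_process_and_db_operation(data)

def receive_loop(sock: socket.socket, packets: int) -> None:
    # Equivalent of "method 2": nothing heavy happens on this thread,
    # so we are back on the socket as soon as each datagram is dequeued.
    for _ in range(packets):
        data, _addr = sock.recvfrom(65535)
        packet_queue.put(data)  # hand off to the worker thread
```

The lock only guards the shared database work, so slow processing of one packet delays other workers at the lock, not the receive loop itself.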
Hope that helps.
Related
I am building a communication library based on the Net.Sockets.TcpClient class. During some unit tests I wanted to see how large a data packet could be before running into problems. My theory was that the actual size would not matter, because the TcpClient would split the data into parts via its internal send buffer. But the size did matter: somewhere around 600 KB I started seeing loss of data.
My test creates a local server and a local client that connect to each other, then sends a specific (large) packet in a loop to check whether the server receives it intact. I left out all the checks below, but after each send a check verifies that the received data is exactly what was sent before looping again, so there is never more data in the pipe than the size I specified. This code is the client part of my unit test; the server part is a whole library, so I cannot post it.
Using myClient As New Net.Sockets.TcpClient()
    myClient.BeginConnect("127.0.0.1", ServerPort, Nothing, Nothing)
    'Code that checks if the connection has been made

    'Create a large string
    Dim SendString As String = StrDup(1024000, "A")
    Dim SendBytes As Byte() = Text.Encoding.ASCII.GetBytes(SendString)

    'Loop the test
    For i As Int32 = 1 To 10000
        Wait.Reset()
        myClient.Client.Send(SendBytes)
        Wait.WaitOne(10000) 'Wait for the server to acknowledge
        'Run checks to make sure the data is good, otherwise end loop
    Next

    myClient.Close()
End Using
What happens is that at some random point the server does not receive all the data. Sending 1,024,000 bytes works most of the time, but not always. The iteration at which it fails is random, but a loop never finishes all 10,000 iterations successfully. I tested the loop with 512,000 bytes and that works; I also tested 600,000 bytes and that failed. I don't know the exact size at which it starts failing, because it does not seem to be a hard limit. I cannot figure out the problem. Is the TcpClient somehow limited, or am I exceeding an internal buffer of some kind? I checked the SendBufferSize of the TcpClient and it was 65536, but I have no idea whether that is related; packets larger than that buffer seem to send just fine.
I have a usage issue I need some advice on.
I have a process with a main flow which loops, retrying a task every n hours until either a condition is met or a timeout is reached. So far so good.
There is a transactional sub process triggered to run in parallel to this main loop which, for as long as this main loop is active, carries out its own looping behaviour (every x days). This second loop should run for as long as the main loop continues, and be killed as soon as the main loop reaches one of its progression criteria.
The way I'd like to model it would be to use a message/signal throw event from the main flow after it has passed its progress criteria, with a corresponding catch message/signal as a boundary event on the sub process, which then triggers a sub process end/terminate event inside the boundaries of the sub process.
I've looked long and hard at resources and the standard, and I can't see any examples of people using boundary events in this way (as an input from outside the sub process, leading to an end event inside the sub process). Any idea if this is valid?
If not valid, anyone have a better method for having a main flow kill a sub process in this way?
Main Process: Start, parallel gateway (fork), first branch contains subprocess 1, second branch contains subprocess 2, exclusive gateway (join), end.
Subprocess 1: Start, loop, exit from the loop under some condition, then end.
Subprocess 2: Start, loop, no end node.
This way, subprocess 2 can't cause an end of looping of its own. But subprocess 1 can end, and by the exclusive join gateway, subprocess 2 will end as well.
I'm not quite sure whether a parallel fork, followed by an exclusive join, is actually allowed formally in BPMN. But some tools can handle it, and I received this hint from a tool vendor (Bonita).
I've been reading the Redis source code recently, and I'm now studying the networking code.
Redis uses non-blocking sockets and epoll (or something similar) for network reads and writes. When a read event arrives, the readQueryFromClient function is called, and in this function the request data is read into a buffer.
In readQueryFromClient, if data has actually arrived, it is read into the buffer through a single read call, and then the request is handled.
nread = read(fd, c->querybuf+qblen, readlen); // **one read function**
// ... some other code checks the read function's return value
processInputBuffer(c);// **request will be handled in this function**
My question is: how does Redis ensure that all of the request data can be read into the buffer by a single read call? Couldn't the data require several read calls to arrive completely?
processInputBuffer(c);// request will be handled in this function
That part is not true. The Redis protocol is designed to include the length of every chunk of data passed around, so the server always knows how much data it needs before it has a complete request. Inside processInputBuffer, if neither processInlineBuffer nor processMultibulkBuffer returns REDIS_OK (i.e. the request terminator was not found in the buffer, or there are not enough arguments yet), control simply falls out of the function. All processInputBuffer did in that case was append a chunk to the client buffer and update the parsing state. Then, on the next iteration of the event loop, in the call to aeProcessEvents, if there is unread data remaining in the socket buffer, the readQueryFromClient callback is triggered again to parse the remaining data.
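The "parse what you have, or report incomplete" idea can be illustrated with a toy length-prefixed parser. This is a sketch in Python, not Redis code; it only mimics the shape of what processMultibulkBuffer does with its buffer and parsing state:

```python
def try_parse(buf: bytes):
    # Parse one "<len>CRLF<payload>CRLF" frame out of the buffer.
    # Returns (payload, bytes_consumed), or (None, 0) if more data
    # is needed -- the caller keeps the buffer and waits for the next
    # read event, just like Redis falling out of processInputBuffer.
    nl = buf.find(b"\r\n")
    if nl == -1:
        return None, 0                 # length line not complete yet
    length = int(buf[:nl])
    frame_end = nl + 2 + length + 2    # length line + payload + CRLF
    if len(buf) < frame_end:
        return None, 0                 # payload not fully buffered yet
    return buf[nl + 2 : nl + 2 + length], frame_end
```

Because the frame declares its own length up front, the parser never has to guess whether another read call is coming; it can always tell "incomplete" from "complete".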
I have a sending application that uses TCP to send files. Sometimes these files contain one message, and other times the file may contain multiple messages. Unfortunately, I do not have access to the Sending application's code.
I am working on editing legacy code to receive these messages. I have managed to get the legacy application to accept a file when there is a single message sent. However, since I disconnect the socket after receiving a single message, the Sender gives a disconnect error.
I wrote a small process to help determine whether there was another message. If it worked, I was going to incorporate it into the code, but I had mixed results:
Dim check(1) As Byte
If (handler.Receive(check, SocketFlags.Peek) > 0) Then
    Dim bytesRec As Integer
    ReDim bytes(1024)
    bytesRec = handler.Receive(bytes)
End If
If there is another message being sent, this will detect it. However, if the file only has a single message, it locks up on Receive until I send another file, and then it is accepted.
Is there a way to tell if there is another message pending that will not lock up if the stream is empty?
I won't post all of the code for accepting the message, as it is a legacy rat's nest, but the general idea is below:
s2 = CType(ar.AsyncState, Socket)
handler = s2.EndAccept(ar)
bytes = New Byte(1024) {}
Dim bytesRec As Integer = handler.Receive(bytes)
' Send Ack/Nak.
numAckBytesSent = handler.Send(myByte)
Thank you in advance for any assistance.
Socket.Select can be used as a quick way of polling a socket for readability. Pass in a timeout of 0 seconds, and the socket in question in the readability list, and it will simply check and report back immediately.
Two other options might be to set Socket.ReceiveTimeout on your socket, or to make the socket non-blocking via Socket.Blocking, so that you find out (as part of the Receive call itself) whether there is incoming data. These look a bit inconvenient in .NET, though: they signal "no data" by throwing exceptions rather than returning a value, which makes the code a little longer.
Just keep reading. If there is nothing left you will get an end-of-stream indication of some kind, depending on your API.
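In the BSD-style socket APIs (and .NET's Socket on top of them), that end-of-stream indication is a read that returns zero bytes once the peer has closed. A minimal sketch, in Python (recv_all is a hypothetical name):

```python
import socket

def recv_all(sock: socket.socket) -> bytes:
    # Read until end-of-stream: recv returning b"" means the peer
    # closed its side, so there is nothing left to wait for.
    data = bytearray()
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        data += chunk
    return bytes(data)
```

This only works when the sender closes (or shuts down) the connection after the last message; if the connection stays open, you need framing instead.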
Following is the code I'm using for reading data over a .NET socket. It runs on a single separate thread. It works fine the first time, but on the second iteration it stops at client.Receive(buffer) and never returns. Initially I used recursion to read the data, but I changed it to iteration, thinking the recursion might be the source of the problem. Apparently it is not.
Private Sub ReceiveSocket(ByVal client As Socket)
    Dim bytesRead As Integer = 0
    Do
        bytesRead = client.Receive(buffer)
        sb.Append(Encoding.ASCII.GetString(buffer, 0, bytesRead))
        Array.Clear(buffer, 0, buffer.Length)
    Loop While bytesRead > 0
End Sub 'ReceiveSocket
Why does it hang at Receive?
Well, that's normal. The Receive() method won't return until the server sends something else, which it probably doesn't do in your case until you ask it to. You should only call Receive() again if you haven't yet received the full server response.
Check the protocol specification. A server usually sends something that lets you tell that the full response was received. Like the number of bytes in the message. Or a special character at the end of the message. Linefeed (vbLf) is popular.
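The "read until the terminator appears" approach can be sketched like this (Python, with '\n' standing in for the vbLf terminator mentioned above; recv_until_newline is a hypothetical name):

```python
import socket

def recv_until_newline(sock: socket.socket) -> bytes:
    # Keep calling recv, but stop as soon as the agreed end-of-message
    # marker shows up -- instead of looping until recv returns 0, which
    # only happens when the peer closes the connection.
    data = bytearray()
    while True:
        chunk = sock.recv(4096)
        if not chunk:          # peer closed before the terminator arrived
            break
        data += chunk
        if b"\n" in data:      # full message received; do not block again
            break
    return bytes(data)
```

With a length-prefixed protocol, the loop condition would instead be "stop once the declared byte count has been accumulated"; either way, the loop exits based on the protocol, not on the socket going quiet.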