I found this post on stackoverflow.com:
listening using Pcap with timeout
I am facing a similar (but different) problem: what is the GENERIC (platform-independent) way to time out periodically while receiving captured packets through the libpcap packet-receiving functions?
Actually, I am wondering whether it is possible to time out periodically from pcap_dispatch(pcap_t...) / pcap_next_ex(pcap_t...). If that is possible, I could use them just like the classic select(...timeout) function ( http://linux.die.net/man/2/select ).
In addition, from the official man page ( http://www.tcpdump.org/pcap3_man.html ), I found that the original timeout mechanism is considered buggy and platform-specific (this is bad, since my program may run on different Linux and Unix boxes):
"... ... to_ms specifies the read timeout in milliseconds. The read timeout is used to arrange that the read not necessarily return immediately when a packet is seen, but that it wait for some amount of time to allow more packets to arrive and to read multiple packets from the OS kernel in one operation. Not all platforms support a read timeout; on platforms that don't, the read timeout is ignored ... ...
NOTE: when reading a live capture, pcap_dispatch() will not necessarily return when the read times out; on some platforms, the read timeout isn't supported, and, on other platforms, the timer doesn't start until at least one packet arrives. This means that the read timeout should NOT be used in, for example, an interactive application, to allow the packet capture loop to "poll" for user input periodically, as there's no guarantee that pcap_dispatch() will return after the timeout expires... ..."
Therefore, I guess I need to implement the GENERIC (platform-independent) timeout mechanism myself, like below?
create a pcap_t structure with pcap_open_live().
set it to non-blocking mode with pcap_setnonblock(pcap_t...).
poll this non-blocking pcap_t with a registered OS timer, as in the pseudocode below (a fleshed-out sketch follows it):
register OS timer_x, and reset timer_x;
while (1) {
    if (timer_x has timed out) {
        do something that needs to be done periodically;
        reset timer_x;
    }
    poll pcap_t by calling pcap_dispatch(pcap_t...) / pcap_next_ex(pcap_t...) to receive some packets;
    do something with these packets;
} // end of while(1)
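A minimal sketch of that loop, assuming a POSIX clock_gettime()-based timer, a 1-second period, and the usual pcap callback; error handling is mostly omitted and the names are mine:

#include <pcap/pcap.h>
#include <time.h>
#include <stdio.h>

static void handler(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
{
    /* do something with each captured packet */
    (void)user; (void)bytes;
    printf("got packet, len=%u\n", h->len);
}

int capture_loop(const char *dev)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live(dev, 65535, 1, 100, errbuf);
    if (p == NULL || pcap_setnonblock(p, 1, errbuf) == -1)
        return -1;

    struct timespec last, now;
    clock_gettime(CLOCK_MONOTONIC, &last);

    for (;;) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        if (now.tv_sec - last.tv_sec >= 1) {
            /* do the periodic work here */
            last = now;
        }
        /* non-blocking: returns immediately if no packets are ready */
        pcap_dispatch(p, -1, handler, NULL);

        /* sleep briefly to avoid spinning at 100% CPU */
        struct timespec ts = {0, 10 * 1000 * 1000};  /* 10 ms */
        nanosleep(&ts, NULL);
    }
}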
Regards,
DC
You can get the handle with pcap_fileno() and select() it.
There's a sample here in OfferReceiver::Listen().
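Something along these lines (error handling mostly omitted; newer libpcap also offers pcap_get_selectable_fd(), which is the documented way to get a descriptor you can safely select() on):

#include <pcap/pcap.h>
#include <sys/select.h>

/* same kind of callback as is passed to pcap_dispatch() elsewhere */
void handler(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes);

int wait_and_dispatch(pcap_t *p)
{
    int fd = pcap_fileno(p);          /* or pcap_get_selectable_fd(p) */
    if (fd == -1)
        return -1;                    /* not selectable on this platform */

    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);

    struct timeval timeout = {1, 0};  /* wake up at least once per second */
    int ret = select(fd + 1, &readfds, NULL, NULL, &timeout);
    if (ret > 0 && FD_ISSET(fd, &readfds)) {
        pcap_dispatch(p, -1, handler, NULL);  /* packets are ready */
        return 1;
    }
    return 0;                         /* timed out: do the periodic work */
}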
I am working on a project using a Zynq (Picozed devboard). The application is run bare-metal, uses lwIP TCP in RAW mode and basically behaves like this:
Receive a batch of data via Ethernet, which is stored in RAM.
Process the batch of data.
Send back the processed data via Ethernet.
The problem is, I need to measure the execution time of the processing part. However, running lwIP in RAW mode forces me to call tcp_fasttmr() and tcp_slowtmr() every 250/500 ms, which makes accurate measurement pretty hard. Whenever I'm not calling the tcp_tmr() functions for some time, I start repeatedly receiving error messages via UART ("unable to alloc pbuf in recv_handler"). It seems this is called from some ISR related to error handling, but I cannot really find the exact location.
My question is, how do I suspend the network functionality so I don't need to call tcp_tmr() periodically? I tried closing the connection, disabling the interface (netif_set_down()), and disabling the timer interrupt, but none of that seems to have any effect on the problem.
I don't know anything about that devboard or the microcontroller on it, but you should have an ethernetif.c (lwIP port) file which should contain the handling of the Ethernet receive interrupt or similar. It should be calling the lwIP function netif->input with a packet to process.
Disabling the interface won't stop this behaviour; it will just stop the higher-level processing of the packet. If you are only measuring the execution time for debugging, you could try disabling the Ethernet receive interrupt and not calling tcp_tmr() until you have processed the packets.
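If that works on your board, the measurement window might look roughly like the sketch below. disable_eth_rx_irq(), enable_eth_rx_irq() and read_cycle_counter() are placeholders for whatever your BSP provides (interrupt-controller and timer routines); this only illustrates the idea and is not a specific Xilinx API:

/* hypothetical helpers -- map these to your BSP's interrupt and timer calls */
extern void disable_eth_rx_irq(void);             /* mask the EMAC receive interrupt */
extern void enable_eth_rx_irq(void);              /* unmask it again */
extern unsigned long long read_cycle_counter(void);

extern void process_batch(unsigned char *data, unsigned int len);  /* your processing step */

unsigned long long time_processing(unsigned char *data, unsigned int len)
{
    /* with RX masked, no new frames arrive, so no pbufs are allocated and the
       tcp_fasttmr()/tcp_slowtmr() calls can be skipped for the duration */
    disable_eth_rx_irq();

    unsigned long long start = read_cycle_counter();
    process_batch(data, len);
    unsigned long long stop = read_cycle_counter();

    /* resume normal lwIP operation and the 250/500 ms timer calls afterwards */
    enable_eth_rx_irq();

    return stop - start;
}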
From the user's point of view, the property of "may not transmit all of the data" is troublesome: it may cause the handler to be called more than once.
The free function async_write ensures the handler is called only once, but it requires the caller to issue calls in sequence, otherwise the written data will be interleaved. For network applications, this is worse than the handler being called more than once.
If the user wants the handler called only once and the data written correctly, the user has to do something extra.
What I want to ask is: why doesn't Asio just make socket::async_write_some transmit all the data?
"What I want to ask is: why doesn't Asio just make socket::async_write_some transmit all the data?"
As opposed to async_write, socket::async_write_some is a lower-level method.
The OS network stack is designed around send buffers and receive buffers. These buffers have to be limited to some amount of memory. When you send a lot of data over a socket, the receiving side can be slower than the sending side, and/or there can be network speed issues.
This is why socket send buffers are limited, and as a result system calls like write or writev must be able to notify the user program that the system cannot accept a chunk of data right now. With a socket in async mode it is even more critical. So socket syscalls cannot work in an async manner without signaling the program to hold on.
So async_write_some, as a mid-level wrapper around writev, has to support partial writes. On the other hand, async_write is a composed operation and can call async_write_some as many times as needed to send the buffers until the operation is complete or has failed. It calls the completion handler only once, not once for each chunk of data passed to the network stack.
"If the user wants the handler called only once and the data written correctly, the user has to do something extra."
Nothing special: just use async_write, not socket::async_write_some.
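For illustration, a minimal sketch with Boost.Asio (send_all is a made-up name; the shared_ptr just keeps the buffer alive for the duration of the operation):

#include <boost/asio.hpp>
#include <iostream>
#include <memory>
#include <string>

// async_write is the composed operation described above: it keeps calling
// async_write_some until every byte has been handed to the kernel, and only
// then invokes the handler, exactly once.
void send_all(boost::asio::ip::tcp::socket& socket,
              std::shared_ptr<std::string> payload)
{
    boost::asio::async_write(
        socket,
        boost::asio::buffer(*payload),
        [payload](const boost::system::error_code& ec, std::size_t bytes_sent)
        {
            // called once, after the whole payload was written or an error occurred
            if (ec)
                std::cerr << "write failed: " << ec.message() << "\n";
            else
                std::cout << "wrote " << bytes_sent << " bytes\n";
        });
}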
I have a server application that uses Microsoft's I/O Completion Port (IOCP) mechanism to manage asynchronous network socket communication. In general, this IOCP approach has performed very well in my environment. However, I have encountered an edge case scenario for which I am seeking guidance:
For the purposes of testing, my server application is streaming data (let's say ~400 KB/sec) over a gigabit LAN to a single client. All is well...until I disconnect the client's Ethernet cable from the LAN. Disconnecting the cable in this manner prevents the server from immediately detecting that the client has disappeared (i.e. the client's TCP network stack does not send notification of the connection's termination to the server).
Meanwhile, the server continues to make WSASend calls to the client...and being that these calls are asynchronous, they appear to "succeed" (i.e. the data is buffered by the OS in the outbound queue for the socket).
While this is all happening, I have 16 threads blocked on GetQueuedCompletionStatus, waiting to retrieve completion packets from the port as they become available. Prior to disconnecting the client's cable, there was a constant stream of completion packets. Now, everything (as expected) seems to have come to a halt...for about 32 seconds. After 32 seconds, IOCP springs back into action returning FALSE with a non-null lpOverlapped value. GetLastError returns 121 (The semaphore timeout period has expired.) I can only assume that error 121 is an artifact of WSASend finally timing out after the TCP stack determined the client was gone?
I'm fine with the network stack taking 32 seconds to figure out my client is gone. The problem is that while the system is making this determination, my IOCP is paralyzed. For example, WSAAccept events that post to the same IOCP are not handled by any of the 16 threads blocked on GetQueuedCompletionStatus until the failed completion packet (indicating error 121) is received.
My initial plan to work around this involved using WSAWaitForMultipleEvents immediately after calling WSASend. If the socket event wasn't signaled within, say, 3 seconds, I would terminate the socket connection and move on (in hopes of preventing the extensive blocking effect on my IOCP). Unfortunately, WSAWaitForMultipleEvents never seems to hit the timeout (so maybe asynchronous sockets are signaled by virtue of being asynchronous? Or does copying data to the TCP queue qualify as a signal?).
I'm still trying to sort this all out, but was hoping someone had some insights as to how to prevent the IOCP hang.
Other details: My server application is running on Win7 with 8 cores; IOCP is configured to use at most 8 concurrent threads; my thread pool has 16 threads. Plenty of RAM, processor and bandwidth.
Thanks in advance for your suggestions and advice.
It's usual for the WSASend() completions to stall in this situation. You won't get them until the TCP stack times out its resend attempts and completes all of the outstanding sends in error. This doesn't block any other operations. I expect you are either testing incorrectly or have a bug in your code.
Note that your 'fix' is flawed. You could see this 'delayed send completion' situation at any point during a normal connection if the sender is sending faster than the consumer can consume. See this article on TCP flow control and async writes. A better plan is to keep a counter of the outstanding writes (per connection) that you want to allow, stop sending once that counter is reached, and resume when it drops below a 'low water mark' threshold value.
Note that if you've pulled out the network cable to the machine, how do you expect any other operations on that connection to complete? Reads will just sit there and only fail once a write has failed, and AcceptEx will simply sit there and wait for the condition to rectify itself.
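For what it's worth, a rough sketch of the per-connection counter mentioned above (Connection, TryBeginWrite, OnWriteCompleted and ResumeSending are made-up names; the real code has to plug into your own send queue and locking):

#include <mutex>

struct Connection {
    std::mutex lock;
    int  outstandingWrites = 0;
    bool paused = false;
    static const int kHighWater = 32;  // stop posting WSASend above this
    static const int kLowWater  = 8;   // resume once completions drain below this
};

// call before WSASend(); returns false if the caller should queue the data in
// the application instead of posting another overlapped write
bool TryBeginWrite(Connection& c)
{
    std::lock_guard<std::mutex> guard(c.lock);
    if (c.outstandingWrites >= Connection::kHighWater) {
        c.paused = true;
        return false;
    }
    ++c.outstandingWrites;
    return true;
}

// call from the GetQueuedCompletionStatus() worker when a send completion arrives
void OnWriteCompleted(Connection& c)
{
    bool resume = false;
    {
        std::lock_guard<std::mutex> guard(c.lock);
        --c.outstandingWrites;
        if (c.paused && c.outstandingWrites <= Connection::kLowWater) {
            c.paused = false;
            resume = true;
        }
    }
    if (resume) {
        // ResumeSending(c);  // hypothetical: drain the application-side queue
    }
}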
I've been trying to intercept UDP packets before they reach an application's logic. More precisely, that application is using a DirectPlay server and there is no source code available.
So I found out that DirectPlay uses async I/O by posting multiple WSARecvFrom calls, then having some worker threads wait with WaitForSingleObject and finally retrieving the I/O status with WSAGetOverlappedResult.
When WSARecvFrom returns, lpBuffers is not filled with data yet of course, because the operation is still pending and will complete later.
So my idea to get to the data was to save the lpOverlapped/lpBuffers pair in a std::map for every WSARecvFrom call and then, if an IO operation completes (in WSAGetOverlappedResult), I would get to the corresponding (now filled) lpBuffers by looking up the lpOverlapped in the map.
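Roughly, the bookkeeping looks like this (hook-side sketch; PendingRecv, RememberRecv and FindRecv are my own names):

#include <winsock2.h>
#include <map>
#include <mutex>
#include <vector>

// remember which WSABUF descriptors belong to which OVERLAPPED so the
// (now filled) buffers can be inspected once the operation completes
struct PendingRecv {
    SOCKET              s;
    std::vector<WSABUF> buffers;   // copies of the descriptors, not of the data
};

std::mutex g_pendingLock;
std::map<LPWSAOVERLAPPED, PendingRecv> g_pending;

// called from the WSARecvFrom hook before forwarding to the real function
void RememberRecv(SOCKET s, LPWSABUF bufs, DWORD bufCount, LPWSAOVERLAPPED ov)
{
    std::lock_guard<std::mutex> guard(g_pendingLock);
    g_pending[ov] = PendingRecv{ s, std::vector<WSABUF>(bufs, bufs + bufCount) };
}

// called from the WSAGetOverlappedResult hook once the operation has completed;
// this is exactly the lookup that breaks when the same lpOverlapped is reused
PendingRecv* FindRecv(LPWSAOVERLAPPED ov)
{
    std::lock_guard<std::mutex> guard(g_pendingLock);
    auto it = g_pending.find(ov);
    return it != g_pending.end() ? &it->second : nullptr;
}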
However, there seems to be a big problem: DirectPlay calls WSARecvFrom multiple times with the same lpOverlapped address sometimes, and even with the same lpOverlapped->hEvent or lpBuffers addresses, also for the same socket (none of these operations complete at this time, so they are all pending). I cannot understand why this happens, the doc clearly says: "If multiple I/O operations are simultaneously outstanding, each must reference a separate WSAOVERLAPPED structure."
Because of this I cannot correctly retrieve the lpBuffers: when WSAGetOverlappedResult is called, I don't know which WSARecvFrom the lpOverlapped corresponds to, because there were several WSARecvFrom calls, each with the same lpOverlapped! How can this be? Does anyone know how DirectPlay handles this? Could there be another way of intercepting (and possibly dropping) UDP packets? (I don't want to use drivers.)
(There is a good reason why I'm trying to do this: someone is sending crafted exploit UDP packets to a game server that uses DirectPlay, and they "confuse" the DirectPlay logic, basically shutting down the server. So I have to filter out specific UDP packets before they even reach DirectPlay.)
Happy for any hint!
Thanks a lot!
I am writing a client program to control a server which is in turn controlling some large hardware. The server needs to receive commands to initialize, start, stop and control the hardware.
The connection from the client to the server is via a TCP or UDP socket. Each command is encapsulated in an appropriate message using a SCADA protocol (e.g. Modbus or DNP3).
Part of the initialization phase involves sending a sequence of commands from the client to the server. In some cases there must be a delay in seconds between the commands to prevent multiple sub-systems being initialized at the same time. The value of the delay depends on the type of command.
I'm thinking that the Command design pattern is a good approach to follow here: the client instantiates ConcreteCommands and the Invoker places them in a queue. I'm not sure how to incorporate the variable delay, or whether there is a better pattern involving a timer and a queue for sending messages with variable delays.
I'm using C# but this is probably irrelevant since it's more of a design pattern question.
It sounds like you need to store a mapping of types to delay. When your server starts, could you cache those delay times? Then call a method that processes the command after a specified delay?
When the server starts:
Dictionary<Type, int> typeToDelayMapping = GetTypeToDelayMapping();
When a command reaches the server, the server can call this:
InvokeCommand(ICommand command, int delayTimeInMilliseconds)
Like so:
InvokeCommand(command, typeToDelayMapping[command.GetType()]);