WinPcap: listening on a device with two different filters

Is it possible to listen on a device with two different filters and capture packets? For example, I start listening on a device with one filter and dumping the packets to a pcap file; after 15 minutes, can I start another listen on the same device with a different filter and dump the packets to another pcap file without stopping the old one?
Do pcap_open or pcap_next_ex block the incoming packets? What I mean is: if a packet arrives while two different threads are listening, and one of them gets the packet and checks it against its filter, can the other thread still access the packet?
I hope I'm clear; sorry for my bad English.

can I start another listen on the same device with a different filter and dump the packets to another pcap file without stopping the old one?
Yes, although you'd be better off starting that listener in a different thread or process to do the processing.
Does pcap_open or pcap_next_ex block the incoming packets?
It does not. Packets destined for your pcap listener are dropped only if one party (you or the OS) can't keep up with the incoming rate.
pcap will also duplicate the packets (speaking at least of the *nix pcap; assuming WinPcap works the same), so if you have several pcap listeners filtering for the same packets, they will all get a copy.

For each open device handle you get, you have a separate filter and packet buffer.
Say you have handle 'A' and handle 'B', and both handles are on the same network device.
Now let's say that the network device receives 4 packets.
Each packet gets to the hardware driver, then to WinPcap.
At that point WinPcap applies each handle's filter, one at a time.
If a match is made, the packet is copied to that handle's packet buffer.
After all handles have been processed, the packet is handed off to the OS.
Does pcap_open or pcap_next_ex block the incoming packets? No.
The fact is that the operating system will most likely see the packet before your application processes it.
I may be wrong, but I don't think WinPcap has any standard method for blocking a packet.
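To make the per-handle behavior concrete, here is a minimal sketch using the portable pcap API (which WinPcap also provides). The device name "eth0", the filter strings, and the file names are placeholders, error handling is abbreviated, and in a real program each capture loop would run in its own thread:

    #include <pcap.h>
    #include <stdio.h>

    /* Open a handle on `dev` and attach `filter` to it. Each handle gets
     * its own compiled filter and its own kernel packet buffer. */
    static pcap_t *open_with_filter(const char *dev, const char *filter)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        struct bpf_program fp;

        pcap_t *h = pcap_open_live(dev, 65535, 1, 1000, errbuf);
        if (h == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return NULL;
        }
        /* Netmask 0 is fine unless the filter tests broadcast addresses;
         * use pcap_lookupnet() to obtain the real netmask in that case. */
        if (pcap_compile(h, &fp, filter, 1, 0) == -1 ||
            pcap_setfilter(h, &fp) == -1) {
            fprintf(stderr, "filter: %s\n", pcap_geterr(h));
            pcap_close(h);
            return NULL;
        }
        return h;
    }

    int main(void)
    {
        /* Two independent handles on the same device; every packet that
         * matches a handle's filter is copied into that handle's buffer. */
        pcap_t *a = open_with_filter("eth0", "tcp port 80");
        pcap_t *b = open_with_filter("eth0", "udp port 53");
        if (a == NULL || b == NULL)
            return 1;

        pcap_dumper_t *da = pcap_dump_open(a, "http.pcap");
        pcap_dumper_t *db = pcap_dump_open(b, "dns.pcap");

        /* pcap_loop() blocks the calling thread (not the packets), so run
         * one loop per thread in a real program; they run back to back
         * here only to keep the sketch short. */
        pcap_loop(a, 100, pcap_dump, (u_char *)da);
        pcap_loop(b, 100, pcap_dump, (u_char *)db);

        pcap_dump_close(da);
        pcap_dump_close(db);
        pcap_close(a);
        pcap_close(b);
        return 0;
    }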

UDP server and connected sockets

[edit]
Seems my question was asked nearly 10 years ago here...
Emulating accept() for UDP (timing-issue in setting up demultiplexed UDP sockets)
...with no clean and scalable solution. I think this could be solved handily by supporting listen() and accept() for UDP, just as connect() is now supported.
[/edit]
In a follow-up to this question...
Can you bind() and connect() both ends of a UDP connection
...is there any mechanism to simultaneously bind() and connect()?
The reason I ask is that a multi-threaded UDP server may wish to move a new "session" to its own descriptor for scalability purposes. The intent is to prevent the listener descriptor from becoming a bottleneck, similar to the rationale behind SO_REUSEPORT.
However, a bind() call with a new descriptor will take over the port from the listener descriptor until the connect() call is made. That provides a window of opportunity, albeit a brief one, for ingress datagrams to be delivered to the new descriptor's queue.
This window is also a problem for UDP servers wanting to employ DTLS. It's recoverable if the clients retry, but not having to would be preferable.
connect() on UDP does not provide connection demultiplexing.
connect() does two things:
Sets a default address for transmit functions that don't accept a destination address (send(), write(), etc.).
Sets a filter on incoming datagrams.
It's important to note that the incoming filter simply discards datagrams that do not match. It does not forward them elsewhere. If there are multiple UDP sockets bound to the same address, some OSes will pick one (maybe random, maybe last created) for each datagram (demultiplexing is totally broken) and some will deliver all datagrams to all of them (demultiplexing succeeds but is incredibly inefficient). Both of these are "the wrong thing". Even an OS that lets you pick between the two behaviors via a socket option is still doing things differently from the way you wanted. The time between bind() and connect() is just the smallest piece of this puzzle of unwanted behavior.
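For reference, a minimal sketch of the bind()-then-connect() sequence under discussion (port 5000 and peer 192.0.2.1:6000 are placeholders; error handling omitted). Between the two calls, datagrams from any source can land in this socket's queue; after connect(), non-matching datagrams are silently discarded rather than rerouted:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        /* Bind the local endpoint first. */
        struct sockaddr_in local = {0}, peer = {0};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(5000);
        bind(fd, (struct sockaddr *)&local, sizeof local);

        /* Window of opportunity: until connect() installs the peer
         * filter, datagrams from any source can queue on this socket. */
        peer.sin_family = AF_INET;
        inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);
        peer.sin_port = htons(6000);
        connect(fd, (struct sockaddr *)&peer, sizeof peer);

        /* send()/recv() now work without an address argument, and recv()
         * returns only datagrams whose source is 192.0.2.1:6000; anything
         * else is discarded, not forwarded to another socket. */
        send(fd, "hi", 2, 0);
        char buf[1500];
        recv(fd, buf, sizeof buf, 0);

        close(fd);
        return 0;
    }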
To handle UDP with multiple peers, use a single socket in connectionless mode. To have multiple threads processing received packets in parallel, you can either
call recvfrom on multiple threads which process the data (this works because datagram sockets preserve message boundaries; you'd never do this with a stream socket such as TCP; see the sketch after this list), or
call recvfrom on a single thread, which doesn't do any processing, just queues the message to the thread responsible for processing it.
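A minimal sketch of the first option (the port and thread count are placeholders; error handling omitted): several threads block in recvfrom() on one connectionless socket, and the kernel hands each datagram to exactly one of them.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <sys/socket.h>

    /* Worker: blocks in recvfrom(); each datagram is delivered to
     * exactly one of the waiting threads. */
    static void *worker(void *arg)
    {
        int fd = *(int *)arg;
        char buf[1500];

        for (;;) {
            struct sockaddr_in src;
            socklen_t srclen = sizeof src;
            /* Safe on a datagram socket because message boundaries are
             * preserved; never do this with a TCP stream. */
            ssize_t n = recvfrom(fd, buf, sizeof buf, 0,
                                 (struct sockaddr *)&src, &srclen);
            if (n >= 0) {
                /* ... process one complete datagram; reply via sendto() ... */
            }
        }
        return NULL;
    }

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);      /* placeholder port */
        bind(fd, (struct sockaddr *)&addr, sizeof addr);

        pthread_t tid[4];                 /* placeholder thread count */
        for (int i = 0; i < 4; i++)
            pthread_create(&tid[i], NULL, worker, &fd);
        for (int i = 0; i < 4; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }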
Even if you had an OS that gave you an option for dispatching incoming UDP based on designated peer addresses (connection emulation), doing that dispatching inside the OS is still not going to be any more efficient than doing it in the server application, and a user-space dispatcher tuned for your traffic patterns is probably going to perform substantially better than a one-size-fits-all dispatcher provided by the OS.
For example, a DNS (or DHCP) server is going to transact with a lot of different hosts, nearly all of them using port 53 (67-68 for DHCP) at the remote end, so hashing on the remote port would be useless; you need to hash on the host. Conversely, a cache server supporting a web application server cluster is going to transact with a handful of hosts and a large number of different ports. Here hashing on the remote port will be better.
Do the connection association yourself, don't use socket connection emulation.
The issue you described is one I encountered some time ago while building a TCP-like listen/accept mechanism for UDP.
In my case the solution (which turned out to be bad, as I will describe later) was to create one UDP socket to receive any incoming datagrams; when one arrived, I connected this particular socket to the sender (via recvfrom() with MSG_PEEK followed by connect()) and handed it to a new thread. Moreover, a new, unconnected UDP socket was created for the next incoming datagrams. This way the new thread (with its dedicated socket) called recv() on the socket and handled only this particular channel from then on, while the main one waited for new datagrams coming from other peers.
Everything worked well until the incoming datagram rate grew. The problem was that while the main socket was transitioning to the connected state, it was buffering not one but several more datagrams (coming from many peers), so the thread created to handle one particular sender was in effect reading datagrams not intended for it.
I could not find a solution (e.g. creating a new connected socket (instead of connecting the main one) and inserting the datagram received on the main socket into its receive buffer for a later recv()). Eventually, I ended up with N threads, each having one "listening" socket (using SO_REUSEPORT), with datagram scattering done at the OS level.
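A minimal sketch of that final design, assuming Linux 3.9 or later for SO_REUSEPORT (the port number is a placeholder; error handling omitted): each worker thread creates its own socket like this, all bound to the same port, and the kernel scatters incoming datagrams across them.

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Each worker thread creates one of these; all of them bind the same
     * port, and the kernel distributes incoming datagrams among them. */
    int make_reuseport_socket(unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int one = 1;

        /* Must be set before bind(), on every socket sharing the port. */
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof one);

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);
        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        return fd;
    }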

What is the correct method to receive UDP data from several clients synchronously?

I have 1 server and several (maybe up to 20) clients. All clients send UDP datagrams at random times. Each datagram is quite short (about 10 bytes), but I must make sure all the data from each client is received correctly.
If I let all clients send datagrams to the same port, and client B sends its datagram at the exact time the server is receiving data from client A, it seems the server will miss the data from client A.
So what's the correct method to do this job? Do I need to create a listener for each of the 20 clients?
When you bind a UDP socket to a port, the networking stack will allocate a buffer for a finite number of incoming UDP packets for you, so that (assuming you call recv() in a relatively timely manner), no incoming packets should get lost.
If you want to see your buffer sizes, take a look at:
/proc/sys/net/core/rmem_default for receive
and
/proc/sys/net/core/wmem_default for send
I think the default buffer size on Linux is 131071 bytes.
On Linux, you can change the UDP buffer size (e.g. to 26214400 bytes) by running (as root):
sysctl -w net.core.rmem_max=26214400
You can also make it permanent by adding this line to /etc/sysctl.conf:
net.core.rmem_max=26214400
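A per-socket alternative is to ask for a larger receive buffer with SO_RCVBUF; note that the kernel caps the request at net.core.rmem_max, so the sysctl above still matters. A minimal sketch, reusing the same size figure as above:

    #include <stdio.h>
    #include <sys/socket.h>

    void grow_recv_buffer(int fd)
    {
        int size = 26214400;          /* same 25 MiB figure as above */
        socklen_t len = sizeof size;

        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof size);

        /* Read back what was actually granted; Linux reports the value
         * doubled to account for bookkeeping overhead. */
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, &len);
        printf("receive buffer: %d bytes\n", size);
    }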
Since each packet is only 10 bytes, this shouldn't be a problem.
If you are still worried about packet loss, you could implement a protocol where your client waits for an ACK from the server and resends otherwise. Many protocols use such a feature, but this is only possible if timing allows it. For example, with streaming data it is not useful because there is no time to resend.
Or consider using TCP (if it is an option).

multicast packet loss - running two instances of the same application

On Red Hat Linux, I have a multicast listener listening to a very busy multicast data source. It runs perfectly by itself, with no packet loss. However, once I start a second instance of the same application with exactly the same settings (same src/dst IP address, socket buffer size, user buffer size, etc.), I start to see very frequent packet losses from both instances, and they lose exactly the same packets. If I stop one of the instances, the remaining one returns to normal with no packet loss.
Initially, I thought it was a CPU/kernel load issue; maybe the application could not get the packets out of the buffer quickly enough. So I did another test: I kept one instance of the application running, but started a totally different multicast listener on the same computer, using the second NIC and listening to a different but even busier multicast source. Both applications ran fine without any packet loss.
So it looks like one NIC is not powerful enough to support two multicast applications, even when they listen to exactly the same thing. A possible cause of the packet loss might be that, in this scenario, the NIC driver needs to copy the incoming data to two socket buffers, and this extra copying is too much for it to handle, so it drops packets. Any deeper analysis of this issue, and any possible solutions?
Thanks
You are basically finding out that the kernel is inefficient at fan-out of multicast packets. In the worst case, for every incoming packet the code allocates two new buffers (the SKB object and the packet payload) and copies the NIC buffer twice.
Take the best case instead: for every incoming packet a new SKB is allocated, but the packet payload is shared between the two sockets with reference counting. Now imagine what happens when the two applications each run on their own core with separate sockets. Every reference to the packet payload stalls the memory bus while both cores' caches flush and reload, and on top of that each application incurs kernel context switches back and forth to receive the socket payload. The result is terrible performance.
You aren't the first to encounter this problem, and many vendors have created solutions for it. The basic design is to limit the incoming data to one thread on one core reading one socket, then have that thread distribute the data to all other interested threads, preferably using user-space code built on shared memory and lockless data structures.
Examples are TIBCO Rendezvous and 29West's Ultra Messaging, showing a 660 ns IPC bus:
http://www.globenewswire.com/newsroom/news.html?d=194703
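To illustrate the shape of that design, here is a heavily simplified sketch: one thread owns the socket and hands packets to consumers through a queue. A mutex-protected ring buffer stands in for the shared-memory, lockless structures a production system would use, and the sizes are arbitrary placeholders.

    #include <pthread.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    #define QCAP    256     /* ring capacity, placeholder */
    #define PKT_MAX 2048    /* max datagram size, placeholder */

    static struct {
        char pkt[QCAP][PKT_MAX];
        int  len[QCAP];
        int  head, tail;
        pthread_mutex_t mu;
        pthread_cond_t  cv;
    } q = { .mu = PTHREAD_MUTEX_INITIALIZER, .cv = PTHREAD_COND_INITIALIZER };

    /* Receiver: the only thread that ever touches the socket; in the
     * real design it would be pinned to one core. */
    static void *rx_thread(void *arg)
    {
        int fd = *(int *)arg;
        char buf[PKT_MAX];

        for (;;) {
            ssize_t n = recv(fd, buf, sizeof buf, 0);
            if (n <= 0)
                continue;
            pthread_mutex_lock(&q.mu);
            if ((q.tail + 1) % QCAP != q.head) {     /* drop when full */
                memcpy(q.pkt[q.tail], buf, (size_t)n);
                q.len[q.tail] = (int)n;
                q.tail = (q.tail + 1) % QCAP;
                pthread_cond_signal(&q.cv);
            }
            pthread_mutex_unlock(&q.mu);
        }
        return NULL;
    }

    /* Consumer: processes packets without ever touching the socket. */
    static void *consumer_thread(void *arg)
    {
        (void)arg;
        char buf[PKT_MAX];

        for (;;) {
            pthread_mutex_lock(&q.mu);
            while (q.head == q.tail)
                pthread_cond_wait(&q.cv, &q.mu);
            int n = q.len[q.head];
            memcpy(buf, q.pkt[q.head], (size_t)n);
            q.head = (q.head + 1) % QCAP;
            pthread_mutex_unlock(&q.mu);
            /* ... process the n bytes in buf ... */
        }
        return NULL;
    }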

Does libpcap get a copy of the packet?

Does libpcap get a copy of the packet or the actual packet?
By copy, I mean: the application using libpcap gets packet A, and the kernel also gets packet A.
By actual, I mean: only the application using libpcap gets packet A, and the kernel doesn't get it.
libpcap will not allow you to do what you want. The goal of pcap is to transparently receive a copy of every packet in the system.
You should investigate how to interoperate with the existing firewall on your system, or how to add your own filters to the netfilter system (on Linux).
The kernel will get the packet and then pass it through a list of filters (for example, there's usually a filter for IPsec, a firewall, and so on); once it has gone through all of these filters, the kernel passes the packet on to the application. libpcap acts as another such filter, but it simply adds the packet to an internal buffer for processing rather than inspecting, modifying, or dropping it as the other filters may do.
For what you want to do, the simplest solution would be to use a firewall.

For UDP broadcast gurus: Problems achieving high-bandwidth audio UDP broadcast over WiFi (802.11N and 802.11G)

I'm attempting to send multichannel audio over WiFi from one server to multiple client computers using UDP broadcast on a private network.
I'm using software called Pure Data, with UDP broadcast tools called netsend~ and netreceive~. The code is here:
http://www.remu.fr/sound-delta/netsend~/
To cut a long story short, I'm able to send 9 channels to one client computer in a point-to-point network, but when I try to broadcast to 2 clients (I haven't yet tried more), I get no sound. I can compress the audio and send 4 compressed channels (about 10% of the uncompressed size) over UDP broadcast to 2 clients successfully. Or I can send 1 channel over UDP broadcast to 2 clients, with some glitches.
The WiFi router is a Linksys WRT300N. All computers are running Windows XP. The IP addresses are 192.168.1.x, with subnet mask 255.255.255.0 and the subnet broadcast address: 192.168.1.255.
I'm curious - what happens to UDP broadcast packets in the router?
If I have a subnet mask of 255.255.255.0, then does the router make 254 packets for every packet sent to the broadcast address?
My WiFi bandwidth is at least 100 Mbps, but I can't seem to send audio at more than around 10 Mbps over UDP broadcast to multiple clients.
What's stopping me from sending audio up to the bandwidth limit of the WiFi?
Any suggestions for socket code modifications, network setups, router setups, subnet modifications... all very much appreciated!
thanks
Nick
Your problem is caused by the access point's rate control algorithm. With unicast, the access point tracks what data rate each particular receiver can reliably receive at, and sends at about that rate. With multicast, the access point does not know which receivers are interested in the data, so simple access points send the data at the slowest possible rate (1 Mb/s). Better-implemented access points may send the data at the rate the slowest connected client is using, and the best access points use IGMP snooping to see who is receiving each IP multicast stream and choose the slowest rate among the receivers of that stream.
The simplest solution is not to use multicast when you have a small number of WiFi receivers.
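A minimal sketch of that workaround (the client list and port 9000 are placeholders; error handling omitted): send each audio packet as a separate unicast datagram to every known client, so the AP can ACK and rate-adapt each transmission.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static const char *clients[] = { "192.168.1.10", "192.168.1.11" };

    void send_to_all(int fd, const void *buf, size_t len)
    {
        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9000);

        /* One ACKed, rate-adapted unicast frame per client, instead of a
         * single broadcast frame sent at the lowest rate. */
        for (size_t i = 0; i < sizeof clients / sizeof clients[0]; i++) {
            inet_pton(AF_INET, clients[i], &dst.sin_addr);
            sendto(fd, buf, len, 0, (struct sockaddr *)&dst, sizeof dst);
        }
    }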
Are all parties connected via WiFi, or is the sender using a wired connection to the access point? Broadcast data will be transmitted as unicast data from a station to an access point, and the access point will then retransmit the data as broadcast/multicast traffic, so it will use twice the on-air bandwidth compared to when the sender sits on the wired side of the AP.
When sending a unicast frame, the AP will wait for an ACK from the receiving station and will retransmit the frame until the ACK arrives (or it times out). Broadcast/multicast frames are not ACKed and therefore not retransmitted. In a busy or noisy radio environment this increases the likelihood of dropped packets, potentially by a lot, for multicast traffic compared to unicast traffic. In an audio application this could certainly be audible. Also, IIRC, broadcast/multicast traffic does not use the RTS/CTS procedure for reserving the medium, which exacerbates the dropped-packets problem.
It could actually be the case that multiple unicast streams work better than a single multicast stream under less-than-ideal radio conditions, provided the aggregated bandwidth is high enough.
If you can, I would suggest using Wireshark to sniff the WiFi traffic and look at the destination address in the 802.11 header. Then you can verify whether the packets actually go out over the air as broadcast.
Your design is failing due to a common misconception about WiFi speeds. With 802.11n, the number 300 Mb/s is the link speed, not the actual bandwidth available for user data or even for the IP layer. The effective bandwidth is closer to 40 Mb/s in the best case; have a look at the FAQ on SmallNetBuilder.com, which discusses this in further detail.
http://www.smallnetbuilder.com/wireless/wireless-basics/31083-smallnetbuilders-wireless-faq-the-essentials
I'm curious - what happens to UDP broadcast packets in the router? If I have a subnet mask of 255.255.255.0, then does the router make 254 packets for every packet sent to the broadcast address?
No, the router doesn't make 254 individual packets. Furthermore, I suspect the protocol uses "multicast" addresses rather than a "broadcast" address.
Since broadcast/multicast traffic can easily be misused, a lot of networking equipment limits or blocks such traffic by default. Of course, some essential protocols (e.g. ARP, DHCP) rely on broadcast/multicast addresses to function and won't be blocked by default.
Hence, it might be a good idea to check for multicast/broadcast control knobs on your router.