AIX - collect network traffic with process IDs

I am analyzing the network communication directions of an appserver machine. I have collected a lot of TCP SYN packets, so I can list inbound and outbound communication directions.
The inbound communications are clear, as I can tell which appserver processes are related to a given LISTEN port.
My problem is telling which process initiates a connection in a particular direction.
For example, in my tcpdump output I see outbound packets to 123.123.110.120:10122.
07:52:47.308694 IP 10.128.46.13.49743 > 123.123.110.120.10122: S 246322753:246322753(0) win 14600 <mss 1460,sackOK,timestamp 3109396548 0,nop,wscale 7>
Is there a way to collect process IDs along with the tcpdump output?
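One common approach on AIX (a sketch using standard AIX tools; the local port and the PCB address below are placeholders taken from the example above): netstat -Aan prints the protocol control block (PCB) address of each socket, and rmsock maps that address to the process holding it:
netstat -Aan | grep 49743
rmsock f1000e0001b3cb58 tcpcb
The leftmost column of the netstat output is the PCB address (f1000e0001b3cb58 here is invented); rmsock then prints something like "The socket 0xf1000e0001b3cb58 is being held by process <pid> (<command>)". This does not annotate the tcpdump output itself, and it only works while the socket still exists, so very short-lived connections may already be gone by the time you look; but run alongside the capture it lets you attribute a live outbound connection to its process.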

Related

For UDP (multicast), multiple applications can subscribe to the same port. If a packet arrives from a client, which application receives it?

Nowadays, mDNS discovery is very popular in LAN device-to-device (D2D) use cases. Different applications may bundle different mDNS discovery modules, which means multiple applications listen on the same port. When a packet comes in, which application receives it? Or do all of them get notified?

What Kind of Physical Setup Do I Need to Test UDP Multicasting Software?

My software can deliver normal UDP messages between two computers over a direct Ethernet connection. But I couldn't do the same for multicast messages; I've tried other programs that can send and receive multicast UDP messages and they didn't work either. So I wanted to ask whether a direct Ethernet connection is a proper physical setup for this. And if not, what should I do?
Yes, a direct Ethernet connection supports multicast, as opposed to, for example, an NBMA network like Frame Relay (rare these days), which does not.
Make sure you are not actually using a switch (which is a bridge) that has IGMP snooping enabled. IGMP snooping will not propagate multicast to nodes that have not sent an IGMP join (assuming your software does not do the join).
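If the hosts run Linux, one quick sanity check (a sketch; the interface name is a placeholder) is to list the multicast groups the receiving interface has actually joined while your receiver is running:
ip maddr show dev eth0
(or netstat -gn on older systems). The group your software subscribes to, e.g. a 239.x.x.x address, should appear in that list; if it does not, the IGMP join is not happening and no cabling or switch change will make the traffic arrive.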

Delay of incoming network packets on Linux - how to analyse?

The problem is: sometimes tcpdump sees that the reception of a UDP packet is held back until the next UDP packet comes in, although the network tap device shows it went through the cable without delay.
Scenario: My Profinet stack on Linux (located in user space) has a cyclic connection on which it receives and sends Profinet protocol packets every 4 ms (via raw sockets). About every 30 ms it also receives UDP packets in another thread on a UDP socket and replies to them immediately, according to that protocol. CPU load is around 10%. Sometimes such received UDP packets seem to get stuck in the network driver. After 2 seconds the next UDP packet comes in, and then both the delayed UDP packet and the new one are received. There are no dropped packets.
My measurements:
I use tcpdump -i eth0 --time-stamp-precision=nano --time-stamp-type=adapter_unsynced -w /tmp/tcpdump.pcap to record the UDP traffic to a RAM disk file.
At the same time I use a network tap device to record the traffic.
Question:
How to find out where the delay comes from (or is it a known effect)?
(2. What does the timestamp that tcpdump attaches to each packet tell me? I mean, to which OSI layer does it refer, or in other words: when is it taken?)
Topology: "embedded device with Linux and eth0" <---> tap device <---> PLC. The program "tcpdump" is running on the embedded device. The tap device is listening on the cable. The actual Profinet connection is between the PLC and the embedded device. A PC is connected to the tap device to record what it captures.
Wireshark (via tap and tcpdump): see here (packet no. 3189 in tcpdump.pcap)
It was a bug in the Freescale Fast Ethernet driver (fec_main.c), which NXP's support has now fixed.
The actual answer (to the question "How to find out where the delay comes from?") is: one has to build a Linux kernel with tracing enabled, patch the driver code with trace points, and then analyse that tracing with the Linux developer tool trace-cmd. It's a very complicated thing, but I'm very happy it is fixed now:
trace-cmd record -o /tmp/trace.dat -p function -l fec_enet_interrupt -l fec_enet_rx_napi -e 'fec:fec_rx_tp' tcpdump -i eth0 --time-stamp-precision=nano --time-stamp-type=adapter_unsynced -w /tmp/tcpdump.pcap
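To read such a recording afterwards, trace-cmd can replay it; a small sketch, assuming the trace.dat file is available on the device or copied to a development host:
trace-cmd report -i /tmp/trace.dat
trace-cmd report -i /tmp/trace.dat | grep -E 'fec_enet_interrupt|fec_enet_rx_napi|fec_rx_tp'
The first form dumps every recorded event with its timestamp; the second filters it down to the driver functions and trace point named in the record command above, which is usually enough to see where a packet was sitting.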

About ActiveMQ networks of brokers: what's the difference between multicast discovery and a fixed list of URIs?

http://activemq.apache.org/networks-of-brokers.html
I'm trying out an ActiveMQ network of brokers, following the article above.
It works all fine with a fixed list of URIs.
But I have a problem with multicast discovery: the network bridge between two ActiveMQ brokers on the same machine can be started, but the bridge cannot be established between different machines (I tried telnet, and that works fine).
I don't know which part went wrong. So I want to ask: do these two kinds of network differ only in configuration?
Telnet proves that unicast networking is working; multicast may require additional configuration in your network.
Are those machines in the same subnet?
Is there a router or Layer 3 switch between them? (If yes, it would then need to be configured accordingly.)
You could use iperf to test multicast connectivity; the Generating multicast traffic article explains how to do that.
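A minimal sketch of such an iperf test (iperf version 2, which supports multicast; the group address and durations are arbitrary placeholders):
on the receiver: iperf -s -u -B 239.255.1.3 -i 1
on the sender: iperf -c 239.255.1.3 -u -T 4 -t 10 -i 1
Here -B makes the server join the multicast group and -T raises the TTL so the datagrams can survive a router hop. If the receiver sees nothing while a plain unicast iperf run works, the problem is multicast propagation in the network rather than ActiveMQ itself.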

Why use Unicast versus Multicast in Weblogic Clusters

It's unclear from the documentation why you should use Unicast rather than Multicast in a WebLogic cluster. Does anyone have experience with either, and what are the benefits of moving to Unicast?
The main difference between Unicast and Multicast is as follows:
Unicast:
Say you have three servers (MS-1, MS-2, MS-3) in a cluster. If they have to communicate with each other, they ping (i.e. send heartbeats to) the cluster master to inform it that they are alive.
If MS-1 is the master, then MS-2 and MS-3 send their pings to MS-1.
Multicast:
In Multicast, there is no cluster master. Instead, each server pings all the others to inform everyone that it is alive.
So MS-1 sends its ping to MS-2 and MS-3, while at the same time MS-2 sends its ping to MS-1 and MS-3, and MS-3 pings MS-1 and MS-2.
Thus, in multicast more pings are sent, which makes the congestion from heartbeat traffic heavier compared to unicast. Because of this, WLS recommends using Unicast instead, for less congestion in the network: Oracle Docs: Communications In a Cluster.
The principle behind Multicast is that any message is received by all subscribers to the Multicast address. So MS-1 only needs to send 1 network packet to alert all other cluster members of its status. This means a status or JNDI update requires only 1 packet per cluster for Multicast vs. 1 packet per server (approximately) for Unicast. Multicast also requires no "master" election. Multicast is thus far simpler to code and creates less network traffic.
So, Multicast is great? Not necessarily. It uses UDP datagrams, which are inherently unreliable and unacknowledged, so given the unreliable bearer protocol (Ethernet) your message might never turn up (interpret: you fall out of the cluster). The whole concept of Multicast is based on subscription; it is not a 'routable' protocol in the normal sense, so by default routers must discard Multicast packets or risk a network storm. Hence the historical requirement for all cluster members to reside on the same network segment.
These shortcomings of Multicast mean Unicast is the way to go if your cluster spans networks or you're losing too many multicast packets.
The main benefit of Unicast over Multicast is ease of configuration. Unicast uses TCP communications and this usually requires no additional network configuration. Multicast uses UDP communication and Multicast addresses and this may require some network configuration and an additional effort in selecting the address to be used.
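If you want to verify that the network actually delivers multicast between the candidate cluster nodes, WebLogic ships a small test utility; a rough sketch of a typical invocation (the node name, group address and port are placeholders, and the exact flags can vary between releases, so check the utils.MulticastTest documentation for your version):
java -cp /path/to/weblogic.jar utils.MulticastTest -n node1 -a 239.192.0.11 -p 7001
Run it on each node with a different -n name; every node should report the messages sent by the others. If they do not, Unicast is likely the safer choice for that environment.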
There is a great article from the Oracle A-Team with in-depth explanation: WebLogic Server Cluster Messaging Protocols.
In the documentation for WLS 12.1.2, Oracle added Considerations for Choosing Unicast or Multicast, in which they suggest using Multicast in clusters with more than 10 Managed Servers.
In my personal experience, Unicast may cause problems in large clusters, mainly because it is a newer protocol, introduced in WLS 10.0, and it still suffers from some minor issues.
The answer here appears to conflict with the recommendation from the Oracle A-Team. Their recommendation is:
In general, the A-Team rule of thumb is to always recommend customers use multicast unless there is good reason why it isn’t possible or practical (e.g., spanning multiple subnets where routers are not allowed to propagate UDP multicast messages). The primary reasons for this are simply efficiency and resiliency to resource shortages.
The full article can be found here.
UPDATE
WebLogic defaults to unicast, and the documentation for 12c implies that multicast is only supported to ensure backward compatibility:
It is important to note that although parts of the WebLogic Server documentation suggest that multicast is only supported for backward compatibility—this is not correct. The multicast cluster messaging protocol is fully supported by Oracle. The A-team is working with WebLogic Server product management to address these documentation issues in the Weblogic Server 12c documentation.
Both multicast and unicast configurations have cluster masters. In addition to a cluster master, unicast has one or more leaders. The cluster leader may or may not be the cluster master.
Multicast is a broadcast; the servers do not ping each other with individual TCP messages. In both the unicast and multicast cases, the traffic is usually trivial. But multicast is almost always the best choice if your network supports it.
Unicast presents a simpler configuration than multicast in that you don't need multicast support. All routers/switches support TCP, but not all routers/switches support multicast or have it enabled. But unicast generates more network traffic than multicast.