Discarded UDP datagram over MTU size with IPv6

I've found that when I send a UDP datagram that gets fragmented (over 1452 bytes with MTU=1500), tcpdump shows all the fragments arriving on the target machine, but no message is ever delivered on the socket. This happens only with IPv6 addresses (both global and link-local); with IPv4 everything works as expected, and non-fragmented IPv6 datagrams arrive fine as well.
As the datagram is discarded, this ICMP6 message is emitted:
05:10:59.887920 IP6 (hlim 64, next-header ICMPv6 (58) payload length: 69) 2620:52:0:105f::ffff:74 > 2620:52:0:105f::ffff:7b: [icmp6 sum ok] ICMP6, destination unreachable, length 69, unreachable port[|icmp6]
There are some repeated neighbour solicitations/advertisements going on, and I can see the entry reach the neighbour cache (via ip neigh).
One minute later I get another ICMP6 message saying that fragment reassembly has timed out.
What's wrong with the settings? The reassembled packet shouldn't be discarded when it can be delivered, right?
The system is RHEL 6, kernel 2.6.32-358.11.1.el6.x86_64.

Related

Ryu controller drop packets after fixed number of packets or time

I am trying to block the TCP packets of a specific user/session after some threshold is reached.
Currently I have a script that drops TCP packets:
@set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
def switch_features_handler(self, ev):
    datapath = ev.msg.datapath
    parser = datapath.ofproto_parser
    tcp_match = self.drop_tcp_packets_to_specfic_ip(parser)
    self.add_flow_for_clear(datapath, 2, tcp_match)

def drop_tcp_packets_to_specfic_ip(self, parser):
    # Match IPv4 TCP traffic from the monitored source address.
    tcp_match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, ipv4_src=conpot_ip)
    return tcp_match
Thanks.
You need to install a rule that matches the flow's packets.
Then you need to create a loop that periodically requests statistics for that rule.
Finally, you read each statistics reply and check the packet count: once it reaches your threshold, you install the rule that blocks the flow. A sketch of that polling loop follows.
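For illustration, here is a minimal sketch of that pattern, modelled on Ryu's standard traffic-monitor example; PACKET_THRESHOLD, the 10-second poll interval, and the reuse of your add_flow_for_clear helper are assumptions to adapt:
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import DEAD_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.lib import hub

PACKET_THRESHOLD = 1000  # block the flow once it has sent this many packets

class ThresholdMonitor(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super(ThresholdMonitor, self).__init__(*args, **kwargs)
        self.datapaths = {}
        self.monitor_thread = hub.spawn(self._monitor)

    @set_ev_cls(ofp_event.EventOFPStateChange, [MAIN_DISPATCHER, DEAD_DISPATCHER])
    def _state_change_handler(self, ev):
        # Track connected switches so the monitor loop can poll them.
        if ev.state == MAIN_DISPATCHER:
            self.datapaths[ev.datapath.id] = ev.datapath
        elif ev.state == DEAD_DISPATCHER:
            self.datapaths.pop(ev.datapath.id, None)

    def _monitor(self):
        # Periodically request flow statistics from every switch.
        while True:
            for dp in self.datapaths.values():
                parser = dp.ofproto_parser
                dp.send_msg(parser.OFPFlowStatsRequest(dp))
            hub.sleep(10)

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def _flow_stats_reply_handler(self, ev):
        datapath = ev.msg.datapath
        for stat in ev.msg.body:
            if stat.packet_count >= PACKET_THRESHOLD:
                # Install a higher-priority rule for the same match to block
                # the flow (reusing the helper from the question's code).
                self.add_flow_for_clear(datapath, 10, stat.match)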

Why does a USB transfer need a status stage?

Basically, every IN, OUT or SETUP transaction ends with an ACK/NAK handshake packet. If a handshake packet is already part of every transaction (it comes after the data packet, which is preceded by the token packet), then why do we need a separate status stage? It seems to be present only in control transfers.
In the protocol, endpoints have a status: ACTIVE, HALT, STALL, ...
The status stage is where this status is determined (see the GET_STATUS(0x00) request, http://www.beyondlogic.org/usbnutshell/usb6.shtml).
The status stage check is a bit like a CRC checksum over the entire request rather than over each single packet.
http://www.beyondlogic.org/usbnutshell/usb4.shtml:
"
Status Stage reports the status of the overall request and this once again varies due to direction of transfer. Status reporting is always performed by the function.
IN: If the host sent IN token(s) during the data stage to receive data, then the host must acknowledge the successful receipt of this data. This is done by the host sending an OUT token followed by a zero length data packet. The function can now report its status in the handshaking stage. An ACK indicates the function has completed the command and is now ready to accept another command. If an error occurred during the processing of this command, then the function will issue a STALL. However if the function is still processing, it returns a NAK indicating to the host to repeat the status stage later.
OUT: If the host sent OUT token(s) during the data stage to transmit data, the function will acknowledge the successful receipt of data by sending a zero length packet in response to an IN token. However if an error occurred, it should issue a STALL, or if it is still busy processing data, it should issue a NAK asking the host to retry the status phase later.
"
or see http://wiki.osdev.org/Universal_Serial_Bus
"
Finally, a STATUS transaction from the function to the host indicates whether the [control] transfer was successful.
"

The receiveBufferSize not being honored. UDP packet truncated

netty 4.0.24
I am passing XML over UDP. The received UDP packet is always of length 2048, truncating the message. I have attempted to set the receive buffer size to something larger (4096, 8192, 65536), but it is not being honored.
I have verified the UDP sender using another UDP ingest mechanism: a standalone Java app using java.net.DatagramSocket. The XML is around 45 KB.
I was able to trace the stack to DatagramSocketImpl.createChannel (line 281). Stepping into DatagramChannelConfig, it has a receiveBufferSize of whatever I set (great), but a rcvBufAllocator of 2048.
Does the rcvBufAllocator override the receiveBufferSize (SO_RCVBUF)? Is the message coming in multiple buffers?
Any feedback or alternative solutions would be greatly appreciated.
I should also mention that I am using an ESB called vert.x, which uses netty heavily. Since I was able to trace the problem down to netty, I was hopeful that I could find help here.
The maximum size of incoming datagrams copied out of the socket is actually not a socket option, but rather a parameter of the socket read() function that your client passes in each time it wants to read a datagram. One advantage of this interface is that programs accepting datagrams of unknown/varying lengths can adaptively change the size of the memory allocated for incoming datagram copies such that they do not over-allocate memory while still getting the whole datagram. (In netty this allocation/prediction is done by implementors of io.netty.channel.RecvByteBufAllocator.)
In contrast, SO_RCVBUF is the size of a buffer that holds all of the datagrams your client hasn't read yet.
Here's an example of how to configure a UDP service with a fixed max incoming datagram size with netty 4.x using a Bootstrap:
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.FixedRecvByteBufAllocator;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.DatagramPacket;
import io.netty.channel.socket.nio.NioDatagramChannel;
import java.net.InetSocketAddress;
int maxDatagramSize = 4092;
String bindAddr = "0.0.0.0";
int port = 1234;
SimpleChannelInboundHandler<DatagramPacket> handler = . . .;
InetSocketAddress address = new InetSocketAddress(bindAddr, port);
NioEventLoopGroup group = new NioEventLoopGroup();
Bootstrap b = new Bootstrap()
    .group(group)
    .channel(NioDatagramChannel.class)
    .handler(handler);
// Each read now allocates maxDatagramSize bytes instead of the 2048-byte
// default, so datagrams up to that size are no longer truncated.
b.option(ChannelOption.RCVBUF_ALLOCATOR, new FixedRecvByteBufAllocator(maxDatagramSize));
b.bind(address).sync().channel().closeFuture().await();
You could also configure the allocator with ChannelConfig.setRecvByteBufAllocator, for example:
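A one-line sketch, assuming channel is an already-created DatagramChannel:
channel.config().setRecvByteBufAllocator(new FixedRecvByteBufAllocator(maxDatagramSize));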

Jnetpcap: modifying the payload of a UDP packet

I would like to modify the payload data of a UDP packet read from a pcap file and send it on the network.
In the following example I write the string "User data" and it works correctly, but if my data requires more space than the original payload, I get an error. How can I increase the size of the payload data taken from the original pcap file?
Pcap pcap_off = Pcap.openOffline(fileName, errorBuf); // open the original capture
PcapPacket temp = new PcapPacket(JMemory.Type.POINTER);
pcap_off.nextEx(temp); // the capture contains only one UDP packet
JBuffer buff = new JBuffer(temp.size());
Ethernet eth = temp.getHeader(new Ethernet());
Ip4 ip = temp.getHeader(new Ip4());
Udp udp = temp.getHeader(new Udp());
Payload data = temp.getHeader(new Payload());
InetAddress dst = InetAddress.getByName("10.0.0.10");
ip.destination(dst.getAddress()); // modify the destination IP
ip.checksum(ip.calculateChecksum());
eth.transferTo(buff);
ip.transferTo(buff, 0, ip.size(), eth.size());
byte[] userdata = "User data".getBytes();
data.setByteArray(0, userdata);
data.transferTo(buff, 0, data.size(), eth.size() + ip.size() + udp.size());
int cs = udp.calculateChecksum(); // recompute the UDP checksum
udp.setUShort(6, cs); // write the corrected UDP checksum
udp.transferTo(buff, 0, udp.size(), eth.size() + ip.size());
JPacket new_packet = new JMemoryPacket(JProtocol.ETHERNET_ID, buff); // build the new packet
Many thanks for any answer.
You can't resize the buffer. You have to allocate a buffer bigger than the original packet so you have room to expand.
In your code, copy the packet from the pcap buffer into the JBuffer first, possibly one header at a time instead of the entire packet at once, and make your changes as you go. The bigger buffer gives you room to include a payload of any size; see the sketch below.
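For illustration, a minimal sketch of that approach under stated assumptions: it reuses the eth, ip and udp headers from the question's code, newPayload is a placeholder, and the length/checksum updates are left as comments:
import org.jnetpcap.packet.JMemoryPacket;
import org.jnetpcap.protocol.JProtocol;
byte[] newPayload = "a much longer user data string".getBytes();
int headersLen = eth.size() + ip.size() + udp.size();
// Allocate a working packet big enough for the old headers plus the new payload.
JMemoryPacket pkt = new JMemoryPacket(headersLen + newPayload.length);
// Copy each header into the bigger buffer, one at a time.
eth.transferTo(pkt, 0, eth.size(), 0);
ip.transferTo(pkt, 0, ip.size(), eth.size());
udp.transferTo(pkt, 0, udp.size(), eth.size() + ip.size());
// Append the new payload after the headers.
pkt.setByteArray(headersLen, newPayload);
// Re-scan so header offsets and lengths are rebuilt for the new buffer.
pkt.scan(JProtocol.ETHERNET_ID);
// Remember to update the IP total-length and UDP length fields and to
// recompute both checksums, since the payload size has changed.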
For efficiency on Windows systems you can also use the send queue, which allows you to compose many packets in a single large buffer.

Unable to receive and process SNMP packets having RequestID 0

I have an SNMP-enabled device that I want to monitor.
But this device responds with Request-ID 0 to every GET request. The snmp4j library discards these received packets because it sends GET requests with a Request-ID value other than 0. On receiving the response it compares the sent Request-ID with the received Request-ID and, on finding a mismatch, it discards the received packet and returns a null response.
If I set the Request-ID to 0 in the SNMP packet before sending the GET request, then the response packet can be processed.
For this, the snmp4j library provides the setRequestID(Integer32 value) method to set the desired Request-ID of an SNMP packet, but this method cannot set the Request-ID to 0. When I set the value to 0, it is replaced with a random Request-ID.
If anyone has a solution, please respond.
Thank you.
The request-id field is used to identify the response when it arrives back at the client. So if the device you are querying returns all responses with a request-id of 0 instead of the supplied value, the client (snmp4j) is correctly discarding the response, because it is invalid: the request-id in the response must always match the request-id in the original request. The device has a buggy SNMP stack. If you change your code to force requests to always have a request-id of 0, you are breaking functionality to enable compatibility with a non-standard agent, and I would advise against it.
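For reference, a minimal snmp4j v2c GET sketch showing where this surfaces; the agent address, community string and OID are placeholders:
import org.snmp4j.CommunityTarget;
import org.snmp4j.PDU;
import org.snmp4j.Snmp;
import org.snmp4j.event.ResponseEvent;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.GenericAddress;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;
Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
snmp.listen();
CommunityTarget target = new CommunityTarget();
target.setCommunity(new OctetString("public"));
target.setAddress(GenericAddress.parse("udp:192.0.2.1/161"));
target.setVersion(SnmpConstants.version2c);
target.setTimeout(1500);
target.setRetries(2);
PDU pdu = new PDU();
pdu.setType(PDU.GET);
pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.1.1.0"))); // sysDescr.0
// snmp4j generates a non-zero request-id and matches it against the reply.
ResponseEvent event = snmp.send(pdu, target);
// With an agent that echoes request-id 0, the reply cannot be correlated,
// so after the retries time out getResponse() is null, as described above.
PDU response = event.getResponse();
snmp.close();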