Ryu controller: drop packets after a fixed number of packets or a fixed time - SDN

I am trying to block TCP packets of a specific user/session after some threshold is reached.
Currently I am able to write a script that drops TCP packets:
@set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
def switch_features_handler(self, ev):
    datapath = ev.msg.datapath
    parser = datapath.ofproto_parser
    tcp_match = self.drop_tcp_packets_to_specfic_ip(parser)
    self.add_flow_for_clear(datapath, 2, tcp_match)

def drop_tcp_packets_to_specfic_ip(self, parser):
    # Match IPv4/TCP packets whose source is the conpot IP
    tcp_match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, ipv4_src=conpot_ip)
    return tcp_match
Thanks.

You need to install a rule that matches the packet flow.
After that, you need to create a loop that requests statistics for this rule.
Finally, you read each statistic and check the number of packets. If the number of packets reaches your threshold, you send the rule that blocks the packets.
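A minimal sketch of that approach with Ryu's OpenFlow 1.3 API is below; the threshold PACKET_THRESHOLD, the watched source address MONITOR_IP, the 10-second polling interval and the priority value are all assumptions made for illustration, not part of the original question:

# Minimal sketch; PACKET_THRESHOLD, MONITOR_IP and the polling interval are assumptions.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.lib import hub
from ryu.ofproto import ofproto_v1_3

PACKET_THRESHOLD = 1000      # hypothetical threshold
MONITOR_IP = '10.0.0.1'      # hypothetical source IP to watch

class ThresholdBlocker(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(ThresholdBlocker, self).__init__(*args, **kwargs)
        self.datapaths = {}
        hub.spawn(self._monitor)

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # Remember each switch so the monitor loop can poll it
        dp = ev.msg.datapath
        self.datapaths[dp.id] = dp

    def _monitor(self):
        # Periodically request flow statistics from every known switch
        while True:
            for dp in list(self.datapaths.values()):
                parser = dp.ofproto_parser
                dp.send_msg(parser.OFPFlowStatsRequest(dp))
            hub.sleep(10)

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def flow_stats_reply_handler(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        ofproto = dp.ofproto
        for stat in ev.msg.body:
            # Look at the TCP counting rule(s) and compare against the threshold
            if ('ip_proto' in stat.match and stat.match['ip_proto'] == 6
                    and stat.packet_count >= PACKET_THRESHOLD):
                # Install a higher-priority rule that drops the traffic:
                # CLEAR_ACTIONS with no other instruction means "drop"
                drop = parser.OFPFlowMod(
                    datapath=dp, priority=100,
                    match=parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                                          ipv4_src=MONITOR_IP),
                    instructions=[parser.OFPInstructionActions(
                        ofproto.OFPIT_CLEAR_ACTIONS, [])])
                dp.send_msg(drop)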

Related

What does the 'ovs-dpctl show' command mean?

When I execute the 'ovs-dpctl show' command, I get:
$ ovs-dpctl show
system@ovs-system:
lookups: hit:37994604 missed:218759 lost:0
flows: 5
masks: hit:39862430 total:5 hit/pkt:1.04
port 0: ovs-system (internal)
port 1: vbr0 (internal)
port 2: gre_sys (gre)
port 3: net2
I retrieved some explanations:
[-s | --statistics] show [dp...]
Prints a summary of configured datapaths, including their datapath numbers and a list of ports connected to each datapath. (The local port is identified as port 0.) If -s or --statistics is specified, then packet and byte counters are also printed for each port. The datapath numbers consists of flow stats and mega flow mask stats.
The "lookups" row displays three stats related to flow lookup triggered by processing incoming packets in the datapath. "hit" displays number of packets matches existing flows. "missed" displays the number of packets not matching any existing flow and require user space processing. "lost" displays number of packets destined for user space process but subsequently dropped before reaching userspace. The sum of "hit" and "miss" equals to the total number of packets datapath processed.
The "flows" row displays the number of flows in datapath.
The "masks" row displays the mega flow mask stats. This row is omitted for datapath not implementing mega flow. "hit" displays the total number of masks visited for matching incoming packets. "total" displays number of masks in the datapath. "hit/pkt" displays the average number of masks visited per packet; the ratio between "hit" and total number of packets processed by the datapath.
If one or more datapaths are specified, information on only those datapaths are displayed. Otherwise, ovs-dpctl displays information about all configured datapaths.
My question is:
Is the total number of incoming packets equal to (lookups.hit + lookups.missed)?
If the total number of incoming packets is equal to (lookups.hit + lookups.missed), why is the value of masks.hit:39862430 greater than (lookups.hit:37994604 + lookups.missed:218759)?
Why is the masks.hit/pkt ratio greater than 1? What would a reasonable value be, and in what interval?
Is the total number of incoming packets equal to (lookups.hit + lookups.missed)?
Yes. (Plus lookups.lost, except that I see that's zero for you.)
If the total number of incoming packets is equal to (lookups.hit + lookups.missed), why is the value of masks.hit:39862430 greater than (lookups.hit:37994604 + lookups.missed:218759)?
masks.hit is the number of hash table lookups that were executed to process all of the packets that were processed. A given packet might require up to masks.total lookups.
Why is the masks.hit/pkt ratio greater than 1? What would a reasonable value be, and in what interval?
The ratio cannot be less than 1.00, because that would mean that processing a packet didn't require even a single lookup. A ratio of 1.04 is very good, because it means that most packets were processed with only a single lookup. Higher ratios are worse.
by Ben Pfaff (blp@ovn.org)
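As a quick sanity check, the reported hit/pkt value can be reproduced from the counters shown in the question (a small Python snippet, just arithmetic):

# Reproduce the hit/pkt ratio from the counters in the ovs-dpctl output above
lookups_hit = 37994604
lookups_missed = 218759
masks_hit = 39862430

total_packets = lookups_hit + lookups_missed   # lookups.lost is 0 here
ratio = masks_hit / float(total_packets)
print(round(ratio, 2))                         # 1.04, matching the reported hit/pkt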

Why does a USB transfer need a status phase?

Basically, after every IN, OUT or SETUP transaction we have an ACK/NAK packet at the end of the transaction. If a handshake packet is already part of every transfer, since it comes after the data packet which is preceded by the token packet, then why do we need a status stage? This seems to be present only in control transfers.
In the protocol, endpoints are in a state: ACTIVE, HALT, STALL, ...
In the status phase this state is determined (GET_STATUS (0x00) request, http://www.beyondlogic.org/usbnutshell/usb6.shtml).
The status phase check is a bit like a CRC checksum over the entire request, not over each single packet.
http://www.beyondlogic.org/usbnutshell/usb4.shtml:
"
Status Stage reports the status of the overall request and this once again varies due to direction of transfer. Status reporting is always performed by the function.
IN: If the host sent IN token(s) during the data stage to receive data, then the host must acknowledge the successful receipt of this data. This is done by the host sending an OUT token followed by a zero length data packet. The function can now report its status in the handshaking stage. An ACK indicates the function has completed the command and is now ready to accept another command. If an error occurred during the processing of this command, then the function will issue a STALL. However, if the function is still processing, it returns a NAK, indicating to the host to repeat the status stage later.
OUT: If the host sent OUT token(s) during the data stage to transmit data, the function will acknowledge the successful receipt of data by sending a zero length packet in response to an IN token. However, if an error occurred, it should issue a STALL, or if it is still busy processing data, it should issue a NAK, asking the host to retry the status phase later.
"
or see http://wiki.osdev.org/Universal_Serial_Bus
"
Finally, a STATUS transaction from the function to the host indicates whether the [control] transfer was successful.
"

Ryu Controller Drop Packet

How do I send a flow entry to drop a packet using Ryu? I've learned from tutorials how to send a packet-out message:
I define the action:
actions = [ofp_parser.OFPActionOutput(ofp.OFPP_FLOOD)]
Then the entry itself:
out = ofp_parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id, in_port=msg.in_port,actions=actions)
Send the message to the switch:
dp.send_msg(out)
I'm trying to find the documentation to make this code drop the packet instead of flooding, without success. I imagine I'll have to change actions on the first step and ofp_parser.OFPPacketOut on the second step. I need someone more experienced with Ryu and development itself to point me in the right direction. Thank you.
The default disposition of a packet in OpenFlow is to drop it. Therefore, if you have a flow rule that should drop the packet when it matches, you simply give it a CLEAR_ACTIONS instruction and no other instruction: no other tables will be processed, since there is no instruction to go to another table, and no actions remain on the packet.
Remember to keep in mind your flow priorities. If you have more than one flow rule that will match the packet, the one with the highest priority will be the one to take effect. So your "drop packet" could be hidden behind a higher priority flow rule.
Here is some code that I have that will drop all traffic matching a given EtherType, assuming that no higher-priority flow rule matches the packet. The function depends on a couple of instance variables, namely datapath, proto, and parser.
def dropEthType(self, match_eth_type = 0x0800):
    parser = self.parser
    proto = self.proto
    match = parser.OFPMatch(eth_type = match_eth_type)
    instruction = [
        parser.OFPInstructionActions(proto.OFPIT_CLEAR_ACTIONS, [])
    ]
    msg = parser.OFPFlowMod(self.datapath,
                            table_id = OFDPA_FLOW_TABLE_ID_ACL_POLICY,
                            priority = 1,
                            command = proto.OFPFC_ADD,
                            match = match,
                            instructions = instruction)
    self._log("dropEthType : %s" % str(msg))
    reply = api.send_msg(self.ryuapp, msg)
    if reply:
        raise Exception

The receiveBufferSize not being honored. UDP packet truncated

netty 4.0.24
I am passing XML over UDP. When receiving the UDP packet, the packet is always of length 2048, truncating the message. Even though I have attempted to set the receive buffer size to something larger (4096, 8192, 65536), it is not being honored.
I have verified the UDP sender using another UDP ingest mechanism: a standalone Java app using java.net.DatagramSocket. The XML is around 45k.
I was able to trace the stack to DatagramSocketImpl.createChannel (line 281). Stepping into DatagramChannelConfig, it has a receiveBufferSize of whatever I set (great), but a rcvBufAllocator of 2048.
Does the rcvBufAllocator override the receiveBufferSize (SO_RCVBUF)? Is the message coming in multiple buffers?
Any feedback or alternative solutions would be greatly appreciated.
I should also mention that I am using an ESB called vert.x, which uses netty heavily. Since I was able to trace down to netty, I was hopeful that I could find help here.
The maximum size of incoming datagrams copied out of the socket is actually not a socket option, but rather a parameter of the socket read() function that your client passes in each time it wants to read a datagram. One advantage of this interface is that programs accepting datagrams of unknown/varying lengths can adaptively change the size of the memory allocated for incoming datagram copies such that they do not over-allocate memory while still getting the whole datagram. (In netty this allocation/prediction is done by implementors of io.netty.channel.RecvByteBufAllocator.)
In contrast, SO_RCVBUF is the size of a buffer that holds all of the datagrams your client hasn't read yet.
Here's an example of how to configure a UDP service with a fixed max incoming datagram size with netty 4.x using a Bootstrap:
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelOption;
import io.netty.channel.FixedRecvByteBufAllocator;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.DatagramPacket;
import io.netty.channel.socket.nio.NioDatagramChannel;
import java.net.InetSocketAddress;

int maxDatagramSize = 4092;
String bindAddr = "0.0.0.0";
int port = 1234;
SimpleChannelInboundHandler<DatagramPacket> handler = . . .;

InetSocketAddress address = new InetSocketAddress(bindAddr, port);
NioEventLoopGroup group = new NioEventLoopGroup();
Bootstrap b = new Bootstrap()
        .group(group)
        .channel(NioDatagramChannel.class)
        .handler(handler);
b.option(ChannelOption.RCVBUF_ALLOCATOR, new FixedRecvByteBufAllocator(maxDatagramSize));
b.bind(address).sync().channel().closeFuture().await();
You could also configure the allocator with ChannelConfig.setRecvByteBufAllocator.

Jnetpcap Payload modify in UDP packet

I would like to modify the data content of a UDP packet read from a pcap file and send it on the network.
In the following example I write the string "User data" and it works correctly, but if my data requires more space than the original payload, I get an error. How can I increase the size of the payload data taken from the original pcap file?
Pcap pcap_off = Pcap.openOffline(fileName, errorBuf); // open original packet
PcapPacket temp = new PcapPacket(JMemory.Type.POINTER);
pcap_off.nextEx(temp); // only one UDP packet
JBuffer buff = new JBuffer(temp.size());
Ethernet eth = temp.getHeader(new Ethernet());
Ip4 ip = temp.getHeader(new Ip4());
Udp udp = temp.getHeader(new Udp());
Payload data = temp.getHeader(new Payload());
InetAddress dst = InetAddress.getByName("10.0.0.10");
ip.destination(dst.getAddress()); // modify destination IP
ip.checksum(ip.calculateChecksum());
eth.transferTo(buff);
ip.transferTo(buff, 0, ip.size(), eth.size());
byte[] userdata = new String("User data").getBytes();
data.setByteArray(0, userdata);
data.transferTo(buff, 0, data.size(), eth.size() + ip.size() + udp.size());
int cs = udp.calculateChecksum(); // recalculate the UDP checksum
udp.setUShort(6, cs); // correct UDP checksum
udp.transferTo(buff, 0, udp.size(), eth.size() + ip.size());
JPacket new_packet = new JMemoryPacket(JProtocol.ETHERNET_ID, buff); // new packet
Many thanks for any answer.
You can't resize the buffer. You have to allocate a bigger buffer than the original packet so you have room to expand.
In your code, copy the packet from the pcap buffer to the JBuffer first, possibly one header at a time instead of the entire packet at once, and make your changes as you go. The bigger buffer will give you room to include a payload of any size.
For efficiency on Windows systems you can also use the sendqueue, which allows you to compose many packets in a single large buffer.
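The same rebuild-rather-than-resize idea can be illustrated outside jNetPcap; here is a rough Python sketch using Scapy (a different library, shown only to make the approach concrete; the file names and payload are placeholders):

# Rough sketch with Scapy (not jNetPcap): rebuild the packet around a larger
# payload instead of resizing the original buffer, then let the lengths and
# checksums be recomputed. File names and payload are placeholders.
from scapy.all import rdpcap, wrpcap, IP, UDP, Raw

pkt = rdpcap("original.pcap")[0]     # one UDP packet read from a capture

pkt[IP].dst = "10.0.0.10"            # modify destination IP
pkt[UDP].remove_payload()            # drop the old payload
pkt = pkt / Raw(load=b"User data that can be longer than the original payload")

# Delete the stored lengths/checksums so they are recomputed on output
del pkt[IP].len
del pkt[IP].chksum
del pkt[UDP].len
del pkt[UDP].chksum

wrpcap("modified.pcap", pkt)         # or sendp(pkt) to put it on the wire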