How to set OpenvSwitch to evict newest flows when memory is full instead of the oldest ones? - openflow

I am currently trying to overflow the OvS flow tables so that the switch rejects new rules and, subsequently, new packets.
I found this in the documentation:
Flow Table Configuration
Limit flow table 0 on bridge br0 to a maximum of 100 flows:
ovs-vsctl -- --id=@ft create Flow_Table flow_limit=100 over‐
flow_policy=refuse -- set Bridge br0 flow_tables=0=@ft
So, I guess I first need to set flow_policy=refuse, and do it for all 255 tables. However, whenever I try to run this command, it gives me:
ubuntu@ubuntu:~$ sudo ovs-vsctl -- --id=@ft create Flow_Table flow_limit=100 over‐flow_policy=refuse -- set Bridge br0 flow_tables=0=@ft
ovs-vsctl: Flow_Table does not contain a column whose name matches "over‐flow_policy"
Is there any way to set the policy to refuse for all the tables, and why do I get this error?

You should use overflow_policy instead of over‐flow_policy; the extra hyphen is a line-wrap artifact copied from the documentation. It'll work!
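For example, the command from the question with the corrected column name would be:
sudo ovs-vsctl -- --id=@ft create Flow_Table flow_limit=100 overflow_policy=refuse -- set Bridge br0 flow_tables=0=@ft
To cover the other tables as well, the bridge's flow_tables map can reference a Flow_Table record for each table id; see ovs-vsctl(8) and the Bridge table documentation for the exact map syntax.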

Related

Redis Gears events in cluster

I have a redis cluster with the following configuration:
91d426e9a569b1c1ad84d75580607e3f99658d30 127.0.0.1:7002@17002 myself,master - 0 1596197488000 1 connected 0-5460
9ff311ae9f413b48578ff0519e97fef2ced57b1e 127.0.0.1:7003@17003 master - 0 1596197490000 2 connected 5461-10922
4de4d36b968bd0b5b5dc8023cb00a5a2ab62effc 127.0.0.1:7004@17004 master - 0 1596197492253 3 connected 10923-16383
a32088043c31c5d3f20828bfe06306b9f0717635 127.0.0.1:7005@17005 slave 91d426e9a569b1c1ad84d75580607e3f99658d30 0 1596197490251 1 connected
b5e9ec7851dfd8dc5ab0cf35c230a0e716dd934c 127.0.0.1:7006@17006 slave 9ff311ae9f413b48578ff0519e97fef2ced57b1e 0 1596197489000 2 connected
a34cc74321e1c75e4cf203248bc0883833c928c7 127.0.0.1:7007@17007 slave 4de4d36b968bd0b5b5dc8023cb00a5a2ab62effc 0 1596197492000 3 connected
I want to create a set with all keys in the cluster by listening to key operations with RedisGears and storing the key names in a redis set called keys.
To do that, I run this RedisGears command:
RG.PYEXECUTE "GearsBuilder('KeysReader').foreach(lambda x: execute('sadd', 'keys', x['key'])).register(readValue=False)"
It works, but only if the updated key is stored on the same node as the key keys.
Example:
With my cluster configuration, the key keys is stored on node 91d426e9a569b1c1ad84d75580607e3f99658d30 (the first node).
If I run:
SET foo bar
SET bar foo
SMEMBERS keys
I get the following result:
127.0.0.1:7002> SET foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7004
OK
127.0.0.1:7004> SET bar foo
-> Redirected to slot [5061] located at 127.0.0.1:7002
OK
127.0.0.1:7002> SMEMBERS keys
1) "bar"
2) "keys"
127.0.0.1:7002>
The first key name foo is not saved in the set keys.
Is it possible to have key names on other nodes saved in the keys set with RedisGears?
Redis version : 6.0.6
Redis gears version : 1.0.1
Thanks.
If the key was written to a shard that does not contain the 'keys' key, you need to make sure to move it to another shard with the repartition operation (https://oss.redislabs.com/redisgears/operations.html#repartition), so this should work:
RG.PYEXECUTE "GearsBuilder('KeysReader').repartition(lambda x: 'keys').foreach(lambda x: execute('sadd', 'keys', x['key'])).register(readValue=False)"
The repartition operation will move the record to the correct shard and the 'sadd' will succeed.
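For example, a minimal sketch of sending that registration from Python with redis-py (the host and port are placeholders, not from your cluster):
import redis

# Placeholder connection details; RedisGears must be loaded on the cluster.
r = redis.Redis(host='127.0.0.1', port=7002)
gear = ("GearsBuilder('KeysReader')"
        ".repartition(lambda x: 'keys')"
        ".foreach(lambda x: execute('sadd', 'keys', x['key']))"
        ".register(readValue=False)")
r.execute_command('RG.PYEXECUTE', gear)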
Another option is to maintain a set per shard and collect them using another Gear function. To do that you need to use the hashtag function (https://oss.redislabs.com/redisgears/runtime.html#hashtag) to make sure the set created belongs to the current shard. So the following registration will maintain a set per shard:
RG.PYEXECUTE "GearsBuilder('KeysReader').foreach(lambda x: execute('sadd', 'keys{%s}' % hashtag(), x['key'])).register(mode='sync', readValue=False)"
Notice that the sync mode tells RedisGears not to start a distributed execution and it should be much faster to follow the keys this way.
Then to collect all the values:
RG.PYEXECUTE "GB('ShardsIDReader').flatmap(lambda x: execute('smembers', 'keys{%s}' % hashtag())).run()"
The first approach is good for read-intensive use cases and the second approach is good for write-intensive use cases. Depending on your use case, you need to choose the right approach.

Ryu controller drop packets after fixed number of packets or time

I am trying to block tcp packets of a specific user/session after some threshold is reached.
Currently I am able to write a script that drops tcp packets.
@set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
def switch_features_handler(self, ev):
    datapath = ev.msg.datapath
    parser = datapath.ofproto_parser
    tcp_match = self.drop_tcp_packets_to_specfic_ip(parser)
    self.add_flow_for_clear(datapath, 2, tcp_match)

def drop_tcp_packets_to_specfic_ip(self, parser):
    tcp_match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, ipv4_src=conpot_ip)
    return tcp_match
Thanks.
You need to install a rule that matches the packet flow.
Then, you need to create a loop that periodically requests statistics for this rule.
Finally, you read each statistic and check the packet count; once it reaches your threshold, you send a rule that blocks the packets, as in the sketch below.
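A minimal sketch of that loop, assuming OpenFlow 1.3, an illustrative threshold of 1000 packets, and made-up class and variable names (adapt the match/threshold to your case):
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, DEAD_DISPATCHER, set_ev_cls
from ryu.lib import hub
from ryu.ofproto import ofproto_v1_3


class ThresholdBlocker(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
    THRESHOLD = 1000  # assumed packet-count threshold

    def __init__(self, *args, **kwargs):
        super(ThresholdBlocker, self).__init__(*args, **kwargs)
        self.datapaths = {}
        self.monitor_thread = hub.spawn(self._monitor)

    @set_ev_cls(ofp_event.EventOFPStateChange, [MAIN_DISPATCHER, DEAD_DISPATCHER])
    def _state_change_handler(self, ev):
        # Keep track of connected switches so we can poll them.
        dp = ev.datapath
        if ev.state == MAIN_DISPATCHER:
            self.datapaths[dp.id] = dp
        elif ev.state == DEAD_DISPATCHER:
            self.datapaths.pop(dp.id, None)

    def _monitor(self):
        # Ask every switch for its flow statistics every 10 seconds.
        while True:
            for dp in self.datapaths.values():
                parser = dp.ofproto_parser
                dp.send_msg(parser.OFPFlowStatsRequest(dp))
            hub.sleep(10)

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def _flow_stats_reply_handler(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        for stat in ev.msg.body:
            if stat.priority == 0:
                continue  # skip the table-miss entry
            if stat.packet_count > self.THRESHOLD:
                # Install a higher-priority flow with no instructions,
                # which makes the switch drop packets matching this rule.
                mod = parser.OFPFlowMod(datapath=dp,
                                        priority=stat.priority + 1,
                                        match=stat.match,
                                        instructions=[])
                dp.send_msg(mod)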

DPDK SRIOV multiple vlan traffic over single VF of SRIOV passthrough

When trying to use the RTE APIs for VLAN offload and VLAN filtering, I observe that both VLAN-tagged and untagged packets are being sent out.
APIs used:
rte_eth_dev_set_vlan_offload,
rte_eth_dev_vlan_filter
DPDK - 18.08
RHEL - 7.6
Driver - igb_uio
Is there a way to allow only VLAN tagged packets to be sent out?
Regards,
Not sure if I understand correctly - you're trying to strip vlan tags from tx packets? Why would you want to offload that? If you forward packets from somewhere else they already have their tags stripped by rx offload. If you create them yourself, well - you're in control.
Regardless, if you'd want to offload tx vlan insertion:
rte_eth_dev_set_vlan_offload only sets RX offload flags.
You'll probably have to set the tx offload flag in your port config manually, like in this abridged snippet from the DPDK Flow Filtering example code:
struct rte_eth_conf port_conf = {
    .txmode = {
        .offloads = DEV_TX_OFFLOAD_VLAN_INSERT,
    },
};

get_bgp_summary_information RPC and using logical-systems

Hi, I am trying to use PyEZ to create an automation script.
My goal is to save the response of the bgp summary for a logical system in a variable.
This one works:
bgpinfo = cor1.rpc.get_bgp_summary_information()
but I want to get the bgp summary for logical system based on this juniper command:
user@COR1> show bgp summary logical-system EXTERNAL
Check whether the below line of code works:
bgpinfo = cor1.rpc.get_bgp_summary_information(logical_system='EXTERNAL')
I figured it out by running the below command on the device:
reg@vj1> show bgp summary logical-system EXTERNAL | display xml rpc
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/18.2D0/junos">
    <rpc>
        <get-bgp-summary-information>
            <logical-system>EXTERNAL</logical-system>
        </get-bgp-summary-information>
    </rpc>
    <cli>
        <banner></banner>
    </cli>
</rpc-reply>
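Putting it together, a minimal sketch (the hostname and credentials are placeholders, not from the question):
from jnpr.junos import Device
from lxml import etree

# Placeholder connection details for illustration only.
with Device(host='cor1.example.net', user='lab', password='secret') as cor1:
    bgpinfo = cor1.rpc.get_bgp_summary_information(logical_system='EXTERNAL')
    # bgpinfo is an lxml element; print it to inspect the reply.
    print(etree.tostring(bgpinfo, pretty_print=True).decode())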

How does redis pipelining work in pyredis?

I am trying to understand how pipelining in redis works. According to one blog I read, for this code:
Pipeline pipeline = jedis.pipelined();
long start = System.currentTimeMillis();
for (int i = 0; i < 100000; i++) {
    pipeline.set("" + i, "" + i);
}
List<Object> results = pipeline.execute();
Every call to pipeline.set() effectively sends the SET command to Redis (you can easily see this by setting a breakpoint inside the loop and querying Redis with redis-cli). The call to pipeline.execute() is when the reading of all the pending responses happens.
So basically, when we use pipelining and execute any command like set above, the command gets executed on the server, but we don't collect the response until we execute pipeline.execute().
However, according to the documentation of pyredis,
Pipelines are a subclass of the base Redis class that provide support for buffering multiple commands to the server in a single request.
I think this implies that when we use pipelining, all the commands are buffered and sent to the server only when we execute pipe.execute(), so this behaviour is different from the behaviour described above.
Could someone please tell me what the right behaviour is when using pyredis?
This is not just a redis-py thing. In Redis, pipelining always means buffering a set of commands and then sending them to the server all at once. The main point of pipelining is to avoid extraneous network round trips, which are frequently the bottleneck when running commands against Redis. If each command were sent to Redis before the pipeline was run, this would not be the case.
You can test this in practice. Open up python and:
import redis
r = redis.Redis()
p = r.pipeline()
p.set('blah', 'foo') # this buffers the command. it is not yet run.
r.get('blah') # pipeline hasn't been run, so this returns nothing.
p.execute()
r.get('blah') # now that we've run the pipeline, this returns "foo".
I did run the test that you described from the blog, and I could not reproduce the behaviour.
Setting breakpoints in the for loop and running
redis-cli info | grep keys
does not show the size increasing after every set command.
Speaking of which, the code you pasted seems to be Java using Jedis (which I also used).
In the test I ran, and according to the documentation, there is no execute() method in Jedis, but rather exec() and sync() ones.
I did see the values being set in redis after the sync() command.
Besides, this question is about pyredis, going by the documentation you quoted.
Finally, the redis documentation itself focuses on the networking optimization (quoting the example):
This time we are not paying the cost of RTT for every call, but just one time for the three commands.
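A rough sketch to see that saving for yourself (the key count is arbitrary and absolute timings will vary with your setup):
import time
import redis

r = redis.Redis()

start = time.time()
for i in range(10000):
    r.set(str(i), str(i))  # one network round trip per command
print('without pipeline:', time.time() - start)

start = time.time()
pipe = r.pipeline()
for i in range(10000):
    pipe.set(str(i), str(i))  # buffered locally, nothing sent yet
pipe.execute()  # all commands sent in one batch
print('with pipeline:', time.time() - start)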
P.S. Could you share the link to the blog you read?