Ryu Controller Drop Packet - sdn

How do I send a flow entry to drop a packet using Ryu? I've learned from tutorials how to send a packet-out message:
I define the action:
actions = [ofp_parser.OFPActionOutput(ofp.OFPP_FLOOD)]
Then the entry itself:
out = ofp_parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id, in_port=msg.in_port, actions=actions)
Send the message to the switch:
dp.send_msg(out)
I'm trying to find documentation on how to make this code drop the packet instead of flooding it, without success. I imagine I'll have to change actions in the first step and ofp_parser.OFPPacketOut in the second step. I need someone more experienced with Ryu and development in general to point me in the right direction. Thank you.

The default disposition of a packet in OpenFlow is to drop it. Therefore, if you have a flow rule that should drop the packets it matches, simply give it a CLEAR_ACTIONS instruction and no other instructions. With no goto-table instruction and no actions, no further table is processed and the packet is dropped.
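As an aside on the packet-out path in your question: the OpenFlow spec also allows a packet-out with an empty action list, which simply discards the packet. A minimal sketch, reusing the dp, msg, and ofp_parser variables from your snippet:

actions = []  # an empty action list means the buffered packet is dropped, not forwarded
out = ofp_parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                              in_port=msg.in_port, actions=actions)
dp.send_msg(out)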
Remember to keep your flow priorities in mind. If more than one flow rule matches the packet, the one with the highest priority takes effect, so your drop rule could be hidden behind a higher-priority flow rule.
Here is some code I have that will drop all traffic matching a given EtherType, assuming no higher-priority rule matches first. The function depends on a few instance variables, namely datapath, proto, and parser.
def dropEthType(self, match_eth_type=0x0800):
    # Depends on instance attributes set up elsewhere in the app:
    # self.datapath, self.parser, self.proto and self.ryuapp.
    parser = self.parser
    proto = self.proto
    match = parser.OFPMatch(eth_type=match_eth_type)
    # CLEAR_ACTIONS with an empty action list and no goto-table
    # instruction: matching packets are dropped.
    instruction = [
        parser.OFPInstructionActions(proto.OFPIT_CLEAR_ACTIONS, [])
    ]
    msg = parser.OFPFlowMod(self.datapath,
                            table_id=OFDPA_FLOW_TABLE_ID_ACL_POLICY,  # OF-DPA ACL table constant, defined elsewhere
                            priority=1,
                            command=proto.OFPFC_ADD,
                            match=match,
                            instructions=instruction)
    self._log("dropEthType : %s" % str(msg))
    reply = api.send_msg(self.ryuapp, msg)  # api is presumably ryu.app.ofctl.api
    if reply:
        raise Exception("FlowMod failed: %s" % str(reply))
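For a stock OpenFlow 1.3 switch with no OF-DPA-specific ACL table, a minimal self-contained sketch of the same idea would look like the following (the function name and priority value are illustrative, not taken from the code above):

def add_drop_rule(datapath, eth_type=0x0800, priority=1):
    # Install a rule that drops all traffic of the given EtherType.
    ofp = datapath.ofproto
    parser = datapath.ofproto_parser
    match = parser.OFPMatch(eth_type=eth_type)
    # CLEAR_ACTIONS with an empty action list and no goto-table == drop
    inst = [parser.OFPInstructionActions(ofp.OFPIT_CLEAR_ACTIONS, [])]
    mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
                            command=ofp.OFPFC_ADD,
                            match=match, instructions=inst)
    datapath.send_msg(mod)

You would typically call this from your EventOFPSwitchFeatures handler, once the datapath is known.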


make-basic.publish with default value for exchange gives exception error: bad argument

I'm trying to follow the RabbitMQ hello world tutorial in Lisp Flavoured Erlang. The tutorial is for Elixir; with the help of the Erlang RabbitMQ client user guide, I am trying to translate the steps to LFE. To publish a message, I need a basic.publish record.
When I try:
(make-basic.publish routing_key #"hello")
(in a function in my lfe-file, which I call from the REPL).
This results in:
** exception error: bad argument
in min_make_record:not-working-queue-declare/0 (/home/.../min_make_record/_build/default/lib/amqp_client/include/amqp_client.hrl, line 26)
When I call make-basic.publish with the same arguments from the REPL, it returns the record as expected.
The error appears to be related to the default argument for exchange. The record is defined in rabbit_common/include/rabbit_framing.hrl as:
-record('basic.publish', {ticket = 0, exchange = <<"">>, routing_key = <<"">>, mandatory = false, immediate = false}).
The following do work:
make-basic.publish from the REPL
passing an argument for both exchange and routing_key:
(make-basic.publish exchange #"" routing_key #"hello")
Removing exchange and routing_key from the record (or a copy of it) and then calling make-basic.publish
(make-amqp_params_network) from amqp_client.hrl; this record also has binary strings as defaults:
-record(amqp_params_network, {username = <<"guest">>, password = <<"guest">>, virtual_host = <<"/">>, host = "localhost", port = undefined, channel_max = 2047, frame_max = 0, heartbeat = 10, connection_timeout = 60000, ssl_options = none, auth_mechanisms = [fun amqp_auth_mechanisms:plain/3, fun amqp_auth_mechanisms:amqplain/3], client_properties = [], socket_options = []}).
Is there a difference in the syntactic sugar LFE generates for direct includes and transitive includes?
Or does the point in the name cause problems?
I tried:
starting amqp_client (via .app.src): amqp_client must be started, otherwise there is a different error.
including both amqp_client.hrl and rabbit_framing.hrl from my lfe-file: this results in lots of "record ... already defined" errors.
(Adding an include guard to rabbit_framing.hrl does not help)

UI5 Odata batch update - Connect return messages to single operation

I perform a batch update on an OData v2 model that contains several operations.
The update is performed in a single changeset, so that a single failed operation fails the whole update.
If one operation fails (due to business logic) and a message is returned, is there a way to know which operation triggered the message? The response I get contains the message text and nothing else that seems useful.
The error function is triggered for every failed operation, and contains the same message every time.
Maybe there is a specific way the message should be issued on the SAP backend?
The ABAP method /iwbep/if_message_container->ADD_MESSAGE has a parameter IV_KEY_TAB, but it does not seem to affect anything.
Edit:
Clarification following conversation.
My service does not return a list of messages, it performs updates. If one of the update operations fails with a message, I want to connect the message to the specific update that failed, preferably without modifying the message text.
An example of the error response I'm getting:
{
  "error": {
    "code": "SY/530",
    "message": {
      "lang": "en",
      "value": "<My message text>"
    },
    "innererror": {
      "application": {
        "component_id": "",
        "service_namespace": "/SAP/",
        "service_id": "<My service>",
        "service_version": "0001"
      },
      "transactionid": "",
      "timestamp": "20181231084555.1576790",
      "Error_Resolution": {
        // SAP standard message here
      },
      "errordetails": [
        {
          "code": "<My message class>",
          "message": "<My message text>",
          "propertyref": "",
          "severity": "error",
          "target": ""
        },
        {
          "code": "/IWBEP/CX_MGW_BUSI_EXCEPTION",
          "message": "An exception was raised.",
          "propertyref": "",
          "severity": "error",
          "target": ""
        }
      ]
    }
  }
}
If you want to keep the exact same message text for all operations, the simplest way to determine the message origin is to add a specific 'tag' to it in the backend.
For example, you can fill the PARAMETER field of the message structure with a distinct value for each operation. This way you can easily determine the origin in the gateway or the frontend.
If I understand your question correctly, you could try the following: override these DPC methods:
changeset_begin: set cv_defer_mode to abap_true.
changeset_end: just redefine it, with nothing inside.
changeset_process: here you get a list of your requests in a table, which contains the operation number (that's what you seek) and the key value structure (iwbep.blablabla) for each call.
Loop over the table and call the corresponding method for each of the entries.
Put the result of each operation into CT_CHANGESET_RESPONSE.
If one operation fails, you can raise the busi_exception there, and at that point you have access to the actual operation number.
For further information about batch processing, check out this link:
https://blogs.sap.com/2018/05/06/batch-request-in-sap-gateway/
Is that what you meant?

Unable to exit while loop in UVM monitor

This might be a silly mistake on my part that I have overlooked, but I'm fairly new to UVM and I have been tinkering with my code for a while. I'm trying to send a stream of 8-bit data within a packet, using a data-valid/stall protocol, from my UVM driver to the DUT. I'm facing an issue where my input monitor is not able to pick up these transactions as they are driven.
I have a while loop with the condition that the valid bit must be high and the stall bit must be low. As long as this condition holds, the monitor needs to pick up the data byte and push it into a queue. I know for a fact that the data is being picked up and pushed into the queue, as I used $display statements along the way. The problem arises once all the data bytes are received and the valid bit goes low. Ideally, this should cause an exit from the while loop, but it isn't doing so. Any help here would be appreciated. I have attached a snippet of the code below. Thanks in advance.
virtual task main_phase(uvm_phase phase);
  $display("Run phase of input monitor");
  collect_transfer();
endtask : main_phase

virtual task collect_transfer();
  fork
    forever begin
      wait_for_valid_transaction_cycle();
      create_and_populate_pkt();
      broadcast_pkt();
      @(iP0_vif.cb_iP0_MON);
    end
  join_none
endtask : collect_transfer

virtual task wait_for_valid_transaction_cycle();
  wait(iP0_vif.cb_iP0_MON.ip_valid && ~iP0_vif.cb_iP0_MON.ip_stall);
endtask : wait_for_valid_transaction_cycle

virtual task create_and_populate_pkt();
  pkt = Router_seq_item::type_id::create("pkt");
  pkt.valid = iP0_vif.cb_iP0_MON.ip_valid;
  pkt.sop = iP0_vif.cb_iP0_MON.ip_sop;
  $display("before data collection");
  while (iP0_vif.cb_iP0_MON.ip_valid === `HIGH && iP0_vif.cb_iP0_MON.ip_stall === `LOW) begin
    $display("After checking for stall");
    pkt.data = iP0_vif.cb_iP0_MON.ip_data;
    $display(pkt.data);
    pkt.data_q.push_front(pkt.data);
    pkt.eop = iP0_vif.cb_iP0_MON.ip_eop;
    $display("print check in input monitor # time = %0t", $time);
    @(iP0_vif.cb_iP0_MON);
  end
  $display("before printing input packet from monitor");
  Check_for_port_route_and_populate_packet_field(pkt);
  print_packet(pkt);
endtask : create_and_populate_pkt
The $display statement "before printing input packet from monitor" is not being displayed.
HIGH is defined as a binary 1 and LOW is defined as a binary 0.
The output of the code in terms of display statements is as below.
before data collection
before checking for stall
After checking for stall
2
print check in input monitor # time = 105
before checking for stall
After checking for stall
1
print check in input monitor # time = 115
before checking for stall
After checking for stall
3
print check in input monitor # time = 125
It's possible that the main phase objection is being dropped elsewhere in your environment. UVM will automatically kill any threads that were spawned during a phase when it ends.
To fix this, do not rely on the main phase in your monitor; objecting to that phase is the responsibility of the threads creating the stimulus. Instead, launch the monitor from the run_phase, which ensures that your loop is not killed until the end of simulation.
Also, during the shutdown phase, you will want your monitor to raise an objection whenever it is currently seeing a packet. This ensures that the simulation doesn't end as soon as the stimulus has been sent in, giving your other monitors time to collect responses from the DUT.

Network Steganography StegoSip Tool

I am trying to use the StegoSIP tool coupled with the Ekiga softphone. Ekiga finally works, but when I run StegoSIP, it fails with the error that cb() takes exactly 3 arguments (2 given).
I found the function in the code, and my guess is that StegoSIP does not recognize my conversation (the third argument). I checked the port and everything looks OK (SIP uses port 5060).
I understand that this is a question about details, but I have wasted too much time trying to fix this and I am desperate.
StegoSIP https://github.com/epinna/Stegosip
The problematic code:
def cb(self, i, nf_payload):
    """
    Callback function of packet processing.
    Get corresponding dissector and direction of packets with .getLoadedDissectorByMarker()
    and send to the correct dissector using checkPkt() and processPkt().
    """
    data = nf_payload.get_data()
    pkt = stegoIP(data)
    marker = nf_payload.get_nfmark()
    dissector, incoming = dissector_dict.dissd.getLoadedDissectorByMarker(marker)
    pkt.incoming = incoming
    if not dissector:
        nf_payload.set_verdict(nfqueue.NF_ACCEPT)
    else:
        dissector.checkPkt(pkt)
        if pkt.extracted_payload:
            dissector.processPkt(pkt, nf_payload)
    return 1
The output is:
TypeError: cb() takes exactly 3 parameters (2 given)
Callback failure!
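For what it's worth, this TypeError usually indicates a callback-signature mismatch rather than an unrecognized conversation: for a bound method, self counts as one of the "given" arguments, so the nfqueue binding is invoking the callback with only the payload, while cb() expects (self, i, nf_payload). Some builds of the python-nfqueue binding pass a queue number to the callback and some pass only the payload. A minimal sketch of the fix, assuming the installed binding here passes only the payload:

# Hypothetical adjustment: match the signature to what the installed
# python-nfqueue binding actually passes (payload only, no queue number).
def cb(self, nf_payload):
    data = nf_payload.get_data()
    # ... rest of the body unchanged from cb() above ...
    return 1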

Twitter stream api with agents in F#

Following Don Syme's blog post (http://blogs.msdn.com/b/dsyme/archive/2010/01/10/async-and-parallel-design-patterns-in-f-reporting-progress-with-events-plus-twitter-sample.aspx), I tried to implement a Twitter stream listener. My goal is to follow the guidance of the Twitter API documentation, which says "that tweets should often be saved or queued before processing when building a high-reliability system".
So my code needs to have two components:
A queue that piles up and processes each status/tweet json
Something to read the twitter stream that dumps to the queue the tweet in json strings
I chose the following:
An agent to which I post each tweet, that decodes the json, and dumps it to database
A simple http webrequest
I would also like to dump any errors from inserting into the database into a text file. (I will probably switch to a supervisor agent for all the errors.)
Two problems:
Is my strategy here any good? If I understand correctly, the agent behaves like a smart queue and processes its messages asynchronously (if it has 10 items in its queue it will process a bunch of them at a time, instead of waiting for the first one to finish, then the second, etc.), correct?
According to Don Syme's post, everything before the while loop is isolated, so the StreamWriter and the database dump are isolated. But because of this, I never close my database connection...?
The code looks something like:
let dumpToDatabase databaseName =
    // opens database connection
    fun tweet -> inserts tweet in database

type Agent<'T> = MailboxProcessor<'T>

let agentDump =
    Agent.Start(fun (inbox: MailboxProcessor<string>) ->
        async {
            use w2 = new StreamWriter(@"\Errors.txt")
            let dumpError = fun (error: string) -> w2.WriteLine(error)
            let dumpTweet = dumpToDatabase "stream"
            while true do
                let! msg = inbox.Receive()
                try
                    let tw = decode msg
                    dumpTweet tw
                with
                | :? MySql.Data.MySqlClient.MySqlException as ex ->
                    dumpError (msg + ex.ToString())
                | _ as ex -> ()
        })

let filter_url = "http://stream.twitter.com/1/statuses/filter.json"
let parameters = "track=RT&"
let stream_url = filter_url
let stream = twitterStream MyCredentials stream_url parameters

while true do
    agentDump.Post(stream.ReadLine())
Thanks a lot!
Edit of code with processor agent:
let dumpToDatabase (tweets: tweet list) =
    // bulk insert of tweets in database
    ...

let agentProcessor =
    Agent.Start(fun (inbox: MailboxProcessor<string list>) ->
        async {
            while true do
                let! msg = inbox.Receive()
                try
                    msg
                    |> List.map decode
                    |> dumpToDatabase
                with
                | _ as ex -> Console.WriteLine("Processor " + ex.ToString())
        })

let agentDump =
    Agent.Start(fun (inbox: MailboxProcessor<string>) ->
        let rec loop messageList count = async {
            try
                let! newMsg = inbox.Receive()
                let newMsgList = newMsg :: messageList
                if count = 10 then
                    agentProcessor.Post(newMsgList)
                    return! loop [] 0
                else
                    return! loop newMsgList (count + 1)
            with
            | _ as ex -> Console.WriteLine("Dump " + ex.ToString())
        }
        loop [] 0)

let filter_url = "http://stream.twitter.com/1/statuses/filter.json"
let parameters = "track=RT&"
let stream_url = filter_url
let stream = twitterStream MyCredentials stream_url parameters

while true do
    agentDump.Post(stream.ReadLine())
I think that the best way to describe an agent is that it is a running process that keeps some state and can communicate with other agents (or web pages or databases). When writing an agent-based application, you often use multiple agents that send messages to each other.
I think that the idea to create an agent that reads tweets from the web and stores them in a database is a good choice (though you could also keep the tweets in memory as the state of the agent).
I wouldn't keep the database connection open all the time - MSSQL (and likely MySQL too) implements connection pooling, so the underlying connection will not actually be closed when you release it. This means that it is safer, and similarly efficient, to reopen the connection each time you need to write data to the database.
Unless you expect to receive a large number of error messages, I would probably do the same for the file stream as well (when writing, you can open it in append mode so that new content is added to the end).
The way the queue of an F# agent works is that it processes messages one by one (in your example, you're waiting for a message using inbox.Receive()). When the queue contains multiple messages, you'll get them one by one (in a loop).
If you wanted to process multiple messages at once, you could write an agent that waits for, say, 10 messages and then sends them as a list to another agent (which would then perform bulk-processing).
You can also specify a timeout parameter for the Receive method, so you could wait for at most 10 messages, as long as they all arrive within one second - this way, you can quite elegantly implement bulk processing that doesn't hold messages for a long time.