Is there a way to delete one output port from flow entries? - sdn

In Open vSwitch, a flow entry can have a list of actions (e.g. multiple output ports). Is there a way I can remove one specific output port from the action list?
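As far as I know there is no OpenFlow primitive for deleting a single action: a flow_mod replaces the action list wholesale, so the usual approach is to re-issue the flow (e.g. with `ovs-ofctl mod-flows`) carrying the full action list minus the unwanted port. A minimal sketch of rebuilding an ovs-ofctl style action string (the port numbers are illustrative):

```python
def remove_output_port(actions, port):
    """Return `actions` with the output to `port` removed.

    `actions` is an ovs-ofctl style action list, e.g. "output:1,output:2".
    """
    kept = [a for a in actions.split(",") if a.strip() != f"output:{port}"]
    return ",".join(kept)

# OpenFlow has no "remove one action" operation, so the flow must be
# re-installed (e.g. via `ovs-ofctl mod-flows`) with the reduced list.
new_actions = remove_output_port("output:1,output:2,output:3", 2)
print(new_actions)  # output:1,output:3
```

The result would then go into the `actions=` part of an `ovs-ofctl mod-flows` command for the same match.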

How do I clear up Documentum rendition queue?

We have around 300k items on dmi_queue_item
If I right-click and select "destroy queue item", I see that the row no longer appears when I query by r_object_id.
Does that mean the file will no longer be processed by the CTS service? I need to know whether this is the right way to clear up the queue for the rendition process (converting to PDF), or whether there is a better way to clear the queue.
Also, for some items/rows I get an error message when using the right-click "destroy" option. What does it mean, and how can I avoid it? I am not sure whether the item was already processed and the row no longer exists, or whether it is something else.
The dmi_queue_item table is used as a queue for all sorts of events on the Content Server.
Content Transformation Services reads at least two types of events from it, as far as I know.
According to the Content Transformation Services Administration Guide, version 7.1, page 18, it reads dm_register_assets events and performs the configured content actions for those specific objects.
I was using CTS to generate content renditions for some objects via the dm_transcode_content event.
However, be careful when cleaning up dmi_queue_item, since it can contain many different event types. It is up to system administrators to keep this queue clean by configuring system components either to use events or not to enqueue events that are never going to be consumed.
For cleaning the queue, the destroy API command is advised, though you can also try deleting rows with a DQL DELETE query. Of course, try this in a development environment first.
You would need to look at two queues, dm_autorender_win31 and dm_mediaserver. To delete their items, you would run this query:
delete dmi_queue_item objects where name = 'dm_mediaserver' or name = 'dm_autorender_win31'
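Before deleting anything, it can help to see which event types are actually sitting in the queue. A DQL fragment (attribute names as per the standard dmi_queue_item type; verify against your repository first):

```sql
-- Count queued items per event name to see what a delete would remove
SELECT name, count(*)
FROM dmi_queue_item
GROUP BY name
```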

Getting Source-Dest IP and PORT data for traffic monitoring in ONOS

I am trying to implement a monitoring system using ONOS. I am able to collect the port delta statistics from the switches using the port_stat_changed listener.
In the flow statistics, I get the flow entry, which has selection criteria. These criteria contain only Ethernet information. Is there any way to identify the source and destination IP addresses and ports using ONOS? Any suggestion would be very helpful.
If the source and destination IP addresses and ports are not in the switch's rules, the application can obtain this information from the packets themselves. If your application does not have access to the packets, I do not think the switches store this kind of information. In that case, you will have to install a rule in the switch that matches on those fields so that the data is collected.
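If you do install flow rules that match on IP fields, the selector criteria ONOS returns (e.g. via GET /onos/v1/flows in the REST API) will carry them. A sketch that pulls IPv4 addresses and L4 ports out of one flow entry; the sample dict is illustrative, and the criterion key names (ip, tcpPort, udpPort) follow the ONOS REST JSON as I recall it, so verify them against your ONOS version:

```python
def extract_endpoints(flow):
    """Pull IPv4 src/dst and L4 ports from one ONOS flow-entry dict."""
    out = {}
    for c in flow.get("selector", {}).get("criteria", []):
        t = c.get("type")
        if t == "IPV4_SRC":
            out["src_ip"] = c["ip"]
        elif t == "IPV4_DST":
            out["dst_ip"] = c["ip"]
        elif t in ("TCP_SRC", "UDP_SRC"):
            out["src_port"] = c["tcpPort" if t == "TCP_SRC" else "udpPort"]
        elif t in ("TCP_DST", "UDP_DST"):
            out["dst_port"] = c["tcpPort" if t == "TCP_DST" else "udpPort"]
    return out

# Illustrative flow entry in the REST API's JSON shape
flow = {"selector": {"criteria": [
    {"type": "ETH_TYPE", "ethType": "0x800"},
    {"type": "IPV4_SRC", "ip": "10.0.0.1/32"},
    {"type": "IPV4_DST", "ip": "10.0.0.2/32"},
    {"type": "TCP_DST", "tcpPort": 80},
]}}
print(extract_endpoints(flow))
```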

(MPLS) tunneling in OpenFlow

We have a network consisting of multiple interconnected OpenFlow 1.0 and 1.3 compatible switches. Each switch is connected to one or more other switches such that there is a route from every switch to every other switch, though not necessarily a direct one (so packets might have to pass through multiple switches to reach their destination).
What I need to do is to get some form of tunneling system, where I can create a flow that passes packets through all these switches to the target machine.
What I know is possible is to push and pop MPLS labels on the packet. So I figured I might push two labels at the ingress: the outer label identifies the target switch and the inner label identifies the target port. This way I only need flows on each switch to pass packets with matching labels toward the target switch first and then, once they reach the target switch, to the target port.
The problem here is only that I found no way of matching on MPLS labels. Does anyone know if there is a way to match on these labels? Or is there any other way of doing what I want to do?
Thanks a lot in advance!
Yes, you can. With Ryu, for example (parser is the switch's OpenFlow protocol parser, datapath.ofproto_parser, and ether comes from ryu.ofproto):
match = parser.OFPMatch(in_port=inPort, eth_type=ether.ETH_TYPE_MPLS, mpls_label=m_label)
That is how you match on MPLS labels; you can then attach whatever actions you want. Note that matching on MPLS labels requires OpenFlow 1.3 here; the plain OpenFlow 1.0 match structure has no MPLS fields.
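The two-label scheme above can be sketched as a pair of helpers that build the on-the-wire label stack entries per RFC 3032 (20-bit label, 3-bit TC, 1-bit bottom-of-stack, 8-bit TTL). The convention that the outer label selects the egress switch and the inner one the egress port is the question's own, not anything mandated by MPLS:

```python
def mpls_entry(label, tc=0, bos=0, ttl=64):
    """Encode one 32-bit MPLS label stack entry (RFC 3032 layout):
    20-bit label | 3-bit traffic class | 1-bit bottom-of-stack | 8-bit TTL."""
    assert 0 <= label < (1 << 20)
    return (label << 12) | (tc << 9) | (bos << 8) | ttl

def tunnel_stack(switch_label, port_label):
    """Two-label stack: outer label selects the egress switch,
    inner (bottom-of-stack) label selects the egress port."""
    return [mpls_entry(switch_label), mpls_entry(port_label, bos=1)]

stack = tunnel_stack(switch_label=100, port_label=3)
print([hex(e) for e in stack])  # ['0x64040', '0x3140']
```

In practice the controller would push these via OFPActionPushMpls plus a set-field on mpls_label, and each transit switch only ever matches the outer label.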

Realtime communication between 2 files

We have "host.xlsb" and "checkin.xlsb" on PC#1.
PC#2 opens "checkin" over the LAN.
During business hours, clients come and scan the bar codes on their membership ID cards with a bar code scanner.
The bar code scanner reads the ID and sends it to "checkin".
"checkin" checks the ID, displays info to the client (e.g. which table), and records the ID and check-in time in a list.
"host" is for reception; it pulls data from "checkin" to see who has and has not arrived, and to check whether clients went to the wrong table.
So I want "host" to read changes in "checkin" in real time. Is that possible?
P.S.:
I know I can do this if I simply put "host" and "checkin" in a single workbook and use PC#1 only.
But if I combine them, reception will have to wait for clients, or clients will have to wait for reception.
Nor do I want any other PC to open the combined workbook at the same time.
Make the application on the check-in machine keep the data in a separate (ASCII) file. "checkin" would open the file with write access and update the information; the "host" machine would open the file with read access, check the latest info, then close the file.
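A language-agnostic sketch of that pattern (in Python here, standing in for the VBA you would write in the workbooks): the writer appends one line per scan, and the reader remembers its file offset so each poll only picks up records added since the last one:

```python
import os
import tempfile

def append_checkin(path, member_id, when):
    # Writer side ("checkin"): append one record per scan.
    with open(path, "a", encoding="ascii") as f:
        f.write(f"{member_id},{when}\n")

def poll_new_lines(path, offset):
    # Reader side ("host"): read only what was added since the last poll.
    with open(path, "r", encoding="ascii") as f:
        f.seek(offset)
        return f.readlines(), f.tell()

# Demo with a throwaway file
path = os.path.join(tempfile.gettempdir(), "checkin_demo.csv")
open(path, "w").close()

append_checkin(path, "M001", "09:15")
lines, pos = poll_new_lines(path, 0)      # host's first poll
append_checkin(path, "M002", "09:16")
more, pos = poll_new_lines(path, pos)     # picks up only the new record
os.remove(path)
```

On the Excel side, "host" would run the poll on a timer (e.g. Application.OnTime) instead of a loop.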

Mule Dynamic flow name runtime

I was wondering whether anyone has experience dynamically setting the name of the flow to redirect to in Mule? The use case is that I want to route an incoming request to a specific flow based on its data. However, the mule-config may not know of this flow until runtime, so I need to select the flow that corresponds to a certain incoming field.
Many thanks.
Selecting a destination flow is usually done by sending a message to the inbound endpoint of the desired flow.
For example, if your different flows use VM inbound endpoints, you can use a dynamic VM outbound endpoint at runtime to target the right VM inbound endpoint.
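A sketch of that idea in Mule 3 XML, assuming MEL is available and that the incoming payload carries the target queue name in a field called targetFlow (a hypothetical name); each destination flow listens on a VM inbound endpoint whose path matches that value:

```xml
<flow name="router">
    <vm:inbound-endpoint path="router.in" exchange-pattern="one-way"/>
    <!-- Dynamic outbound endpoint: the path is resolved per message -->
    <vm:outbound-endpoint path="#[payload.targetFlow]" exchange-pattern="one-way"/>
</flow>

<flow name="orders">
    <vm:inbound-endpoint path="orders" exchange-pattern="one-way"/>
    <logger message="Routed to the orders flow" level="INFO"/>
</flow>
```

New destination flows can then be added without touching the router, as long as their VM path matches the value arriving in the data.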