What happens to flow rules when OVS loses connection to a controller? - openflow

In my understanding, the flow rules stay present. My issue is with rules that direct the packet to the controller. If a new flow comes in, for example, and the controller is down, will this new packet simply be dropped?

will this new packet simply be dropped?
Exactly!

It's up to the configuration whether the flows remain present or not. Here is the explanation from the OpenFlow 1.3 specification:
Flow entries are removed from flow tables in two ways, either at the request of the controller or via the switch flow expiry mechanism. The switch flow expiry mechanism is run by the switch independently of the controller and is based on the state and configuration of flow entries. Each flow entry has an idle_timeout and a hard_timeout associated with it. If either value is non-zero, the switch must note the flow entry's arrival time, as it may need to evict the entry later. A non-zero hard_timeout field causes the flow entry to be removed after the given number of seconds, regardless of how many packets it has matched. A non-zero idle_timeout field causes the flow entry to be removed when it has matched no packets in the given number of seconds. The switch must implement flow expiry and remove flow entries from the flow table when one of their timeouts is exceeded.
The specification also spells out how the two timeouts combine:
If the idle_timeout is set and the hard_timeout is zero, the entry must expire after idle_timeout seconds with no received traffic. If the idle_timeout is zero and the hard_timeout is set, the entry must expire in hard_timeout seconds regardless of whether or not packets are hitting the entry. If both idle_timeout and hard_timeout are set, the flow entry will time out after idle_timeout seconds with no traffic, or hard_timeout seconds, whichever comes first. If both idle_timeout and hard_timeout are zero, the entry is considered permanent and will never time out. It can still be removed with a flow_mod message of type OFPFC_DELETE.
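For a concrete illustration (not part of the answer above), here is a minimal sketch of installing a flow entry with both timeouts from a Ryu controller; Ryu, the match fields and the port numbers are assumptions chosen for the example. Once installed, the switch removes the entry on its own when either timeout fires, whether or not the controller is reachable.

def install_timed_flow(datapath):
    # Sketch only: assumes a Ryu app with an OpenFlow 1.3 switch connected.
    ofp = datapath.ofproto
    parser = datapath.ofproto_parser

    # Arbitrary example match: IPv4 TCP traffic to port 80.
    match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=80)
    actions = [parser.OFPActionOutput(2)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

    mod = parser.OFPFlowMod(
        datapath=datapath,
        priority=100,
        match=match,
        instructions=inst,
        idle_timeout=30,   # removed after 30 s without a matching packet
        hard_timeout=300,  # removed 300 s after installation, regardless of traffic
    )
    datapath.send_msg(mod)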

Related

Are there any timing specs between APDU command/response pairs in smart cards?

[Diagram: two consecutive command/response pairs, with the period between them labelled T]
I couldn't find a clear specification of the period between two consecutive command/response pairs (T in the diagram).
For example, if I have sent a command to the card and received a response, what is the maximum period I can wait before the communication is no longer valid? I need this because I want to make use of this period to give me some flexibility in my design.
This is the block waiting time (which can be computed from the BWI part of TB3 in the ATR). If the card needs more time, it has to send a Waiting Time Extension (abbreviated WTX) before this elapses, which has to be acknowledged by the other side (typically the reader). If the acknowledgement is not given, both sides assume a communication error.
Note that FWI and BWI from TA1, as well as the clock supplied to the card, influence the time.
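To put numbers on this, below is a rough Python sketch of the commonly cited T=1 block waiting time calculation, BWT = 11 etu + 2^BWI * 960 * Fd / f; the clock frequency and F/D values are assumed examples, so plug in the values from your own ATR and verify against the spec you are working to.

def block_waiting_time(bwi, f_hz=4_000_000, F=372, D=1, Fd=372):
    # Sketch only: usual ISO 7816-3 T=1 reading of the formula; assumed defaults.
    etu = F / (D * f_hz)                          # elementary time unit, in seconds
    return 11 * etu + (2 ** bwi) * 960 * Fd / f_hz

# Example: BWI = 4 with a 4 MHz clock and default F/D gives roughly 1.4 s.
print(f"BWT = {block_waiting_time(4) * 1000:.1f} ms")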

How is per-packet multipath routing implemented in OpenFlow? Is the flow table updated after each packet transfer?

I am trying to do per-packet multipath routing in OpenFlow. I do not know how multiple paths are used to deliver data on a per-packet basis. Is the flow table updated every time, or does the group table handle per-packet delivery?
Each flow entry may have an idle timeout and/or a hard timeout associated with it.
The idle timeout and the hard timeout control the removal of a flow entry from the OpenFlow table. If either value is non-zero, the switch must note the flow entry's arrival time, as it may need to evict the entry later. A non-zero idle_timeout field causes the flow entry to be removed after the given number of seconds if no packet has matched the flow in that time. A non-zero hard_timeout field causes the flow entry to be removed after the given number of seconds, regardless of how many packets it has matched.
Hard timeout: the absolute timeout after which the flow is removed from the device.
Idle timeout: the timeout after which the flow is removed from the device if no packets hit it for that duration.
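The question also asks about group tables. As a hedged sketch (Ryu, OpenFlow 1.3 and the port numbers are assumptions, not something taken from the answer above), multipath is usually expressed as a SELECT group; note that Open vSwitch picks a bucket by hashing packet fields, so in practice the split is per flow rather than strictly per packet.

def install_multipath_group(datapath, group_id=1, ports=(2, 3)):
    # Sketch only: a SELECT group spreading traffic over two output ports.
    ofp = datapath.ofproto
    parser = datapath.ofproto_parser

    buckets = [
        parser.OFPBucket(weight=50,
                         watch_port=ofp.OFPP_ANY,
                         watch_group=ofp.OFPG_ANY,
                         actions=[parser.OFPActionOutput(port)])
        for port in ports
    ]
    # Create the SELECT group...
    datapath.send_msg(parser.OFPGroupMod(datapath, ofp.OFPGC_ADD,
                                         ofp.OFPGT_SELECT, group_id, buckets))

    # ...and a flow entry that forwards matching traffic to it.
    match = parser.OFPMatch(eth_type=0x0800)
    actions = [parser.OFPActionGroup(group_id)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=10,
                                        match=match, instructions=inst))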

RabbitMQ: Disabling prefetching (prefetch_count=0) with auto-ack=false

Is it possible to disable prefetching with auto-ack=false? I just want to avoid reading messages (prefetching) from a queue every time I acknowledge a message. I want to read a message only when I call 'consume_message'. Setting prefetch_count=0 doesn't seem to work; it's treated as 'no specific limit'.
UPDATED:
As I understand it, 'prefetch_count' is the number of messages cached on the client side (read locally into buffers). For example, here is a use case:
(let's assume there is a queue we connect to and it has messages)
1. Create a connection.
2. Set Basic.Qos (prefetch_count=1).
3. Start consuming (Basic.Consume).
4. Because prefetch_count=1, one message is already transferred to the client, ready to be read, and marked as not-ack'd.
5. Read the message and process it.
6. Ack the message; everything then starts again from step 4.
I thought that setting prefetch_count to 0 would avoid step 4, so that a message is transferred only when you read it - no caching on the client side.
Prefetch and auto-acknowledgment are not related like that. The prefetch count is simply the number of unacknowledged messages prepared to be delivered to a specific consumer.
Let's say you set the prefetch count to N. If you set auto-ack to true, this means that these N messages are ACKed upon receipt. If you set it to false, you still get the N messages, but they are not ACKed until you manually ACK them.
For the last part - try setting prefetch_count to 1.
Also check this question and both answers.
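For reference, here is a minimal pika (Python) sketch of the push-style setup described above, plus the pull-style basic_get alternative that matches the "read only when I ask" behaviour the question is after; the queue name is made up.

import pika

# prefetch_count=1: at most one unacknowledged message is pushed to this
# consumer at a time; the next one is delivered only after the ack.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_qos(prefetch_count=1)

def on_message(ch, method, properties, body):
    print("processing", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)   # manual ack (auto_ack=False)

channel.basic_consume(queue="example_queue", on_message_callback=on_message,
                      auto_ack=False)

# Pull-style alternative: fetch exactly one message, only when you ask for it.
# method, properties, body = channel.basic_get(queue="example_queue", auto_ack=False)

channel.start_consuming()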

How to specify another timeout queue for NSB?

I am using NSB 4.4.2
I want to have something like heartbeats on my saga to show processing statistics.
When I request a timeout, it is sent to the saga's input queue.
If there are many messages ahead of this timeout message, IHandleTimeouts may not be fired at the specified time.
Is this a bug? Or how can I use a separate queue for timeout messages?
Thanks
You are correct - when a timeout is ready to be dispatched, it is sent to the incoming queue of the endpoint, and if there are already many other messages in there, it will have to wait its turn to be processed.
Another thing you might want to consider, is that the endpoint may be down at that time.
If you want to guarantee that your saga code will be invoked at (or very close to) the time of the timeout, you'll need to set up a high-availability deployment first. Then you should look at setting the SLA required of that endpoint - how quickly messages should be processed - and monitor the 'time to breach SLA' performance counter.
See here for more information: http://docs.particular.net/nservicebus/monitoring-nservicebus-endpoints
You should be prepared to scale out your endpoint as needed to guarantee enough processing power to keep up with the load coming in.
NOTE: The reason we use the same incoming queue for processing these timeouts is by design. A timeout message is almost always the same priority or lower than the other business messages being processed by a saga. As such, it doesn't make sense to have them cut ahead of other messages in line.
Timeouts are sent to the [endpointname].timeouts queue.

RabbitMQ messaging - initializing consumer

I want to use RabbitMQ to broadcast the state of an object continuously to any consumers which may be listening. I want to set it up so that when a consumer subscribes, it will pick up the last available state...
Is this possible?
Use a custom last value cache exchange:
e.g.
https://github.com/squaremo/rabbitmq-lvc-plugin
Last value caching exchange:
This is a pretty simple implementation of a last value cache using RabbitMQ's pluggable exchange types feature.
The last value cache is intended to solve problems like the following: say I am using messaging to send notifications of some changing values to clients; now, when a new client connects, it won't know the value until it changes.
The last value exchange acts like a direct exchange (binding keys are compared for equality with routing keys); but, it also keeps track of the last value that was published with each routing key, and when a queue is bound, it automatically enqueues the last value for the binding key.
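As a rough illustration of the plugin in use, here is a pika (Python) sketch; the 'x-lvc' exchange type and all names below are assumptions based on the plugin's README, so check them against the version you install.

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# Requires the rabbitmq-lvc-plugin to be enabled on the broker.
channel.exchange_declare(exchange="device.state", exchange_type="x-lvc")

# Publisher side: keep broadcasting the current state under a routing key.
channel.basic_publish(exchange="device.state", routing_key="sensor.1",
                      body='{"temp": 21.5}')

# Late-joining consumer: binding a fresh queue makes the exchange enqueue the
# last value published for that routing key, so the consumer starts from it.
result = channel.queue_declare(queue="", exclusive=True)
channel.queue_bind(exchange="device.state", queue=result.method.queue,
                   routing_key="sensor.1")
method, properties, body = channel.basic_get(queue=result.method.queue, auto_ack=True)
print("last known state:", body)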
It is possible with the Recent History Custom Exchange. It says that it will put the last 20 messages in the queue, so if that is configurable you may be able to change it to the last 1 message and you are done.
If that doesn't work, i.e. the number is fixed at 20, then you may have to process the first 19 messages off the queue and take the status from the 20th. This is a bit of an annoying workaround, but as you know the parameter is always 20, this should be fine.
Finally, if this doesn't suit you, perhaps you can set your consumer to wait until the first status is received, presuming that the status is broadcast reasonably frequently. Once the first status is received, start the rest of the application. I am assuming here that you need the status before doing something else.