What does the ‘ovs-dpctl show’ command mean? - openvswitch

When I execute the 'ovs-dpctl show' command, I get:
$ ovs-dpctl show
system@ovs-system:
lookups: hit:37994604 missed:218759 lost:0
flows: 5
masks: hit:39862430 total:5 hit/pkt:1.04
port 0: ovs-system (internal)
port 1: vbr0 (internal)
port 2: gre_sys (gre)
port 3: net2
I found the following explanation in the documentation:
[-s | --statistics] show [dp...]
Prints a summary of configured datapaths, including their datapath numbers and a list of ports connected to each datapath. (The local port is identified as port 0.) If -s or --statistics is specified, then packet and byte counters are also printed for each port.
The datapath statistics consist of flow stats and mega flow mask stats.
The "lookups" row displays three stats related to flow lookups triggered by processing incoming packets in the datapath. "hit" displays the number of packets matching existing flows. "missed" displays the number of packets not matching any existing flow, which require user space processing. "lost" displays the number of packets destined for user space processing but subsequently dropped before reaching userspace. The sum of "hit" and "missed" equals the total number of packets the datapath processed.
The "flows" row displays the number of flows in the datapath.
The "masks" row displays the mega flow mask stats. This row is omitted for datapaths not implementing mega flows. "hit" displays the total number of masks visited while matching incoming packets. "total" displays the number of masks in the datapath. "hit/pkt" displays the average number of masks visited per packet: the ratio between "hit" and the total number of packets processed by the datapath.
If one or more datapaths are specified, information on only those datapaths is displayed. Otherwise, ovs-dpctl displays information about all configured datapaths.
My questions are:
Is the total number of incoming packets equal to (lookups.hit + lookups.missed)?
If so, why is the value of masks.hit:39862430 greater than (lookups.hit:37994604 + lookups.missed:218759)?
Why is the masks.hit/pkt ratio greater than 1? What would a reasonable value be, and in what interval should it fall?

Is the total number of incoming packets equal to (lookups.hit +
lookups.missed)?
Yes. (Plus lookups.lost except that I see that's zero for you.)
If so, why is the value of masks.hit:39862430 greater than
(lookups.hit:37994604 + lookups.missed:218759)?
masks.hit is the total number of hash table lookups that were executed to
process all of the packets. A given packet might require up to masks.total
lookups.
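A quick back-of-the-envelope check, plugging in the numbers from the output above:

# Stats from the 'ovs-dpctl show' output in the question.
lookups_hit = 37994604
lookups_missed = 218759
lookups_lost = 0
masks_hit = 39862430

# Total packets processed by the datapath.
total_packets = lookups_hit + lookups_missed + lookups_lost  # 38213363

# Average number of mask (hash table) lookups per packet.
print(round(masks_hit / total_packets, 2))  # 1.04, the reported hit/pkt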
Why is the masks.hit/pkt ratio greater than 1? What would a reasonable
value be, and in what interval should it fall?
The ratio cannot be less than 1.00 because that would mean that
processing a packet didn't require even a single lookup. A ratio of
1.04 is very good because it means that most packets were processed with
only a single lookup. Higher ratios are worse.
by Ben Pfaff (blp@ovn.org)

Related

Ryu controller drop packets after fixed number of packets or time

I am trying to block TCP packets of a specific user/session after some threshold is reached.
Currently I am able to write a script that drops TCP packets:
@set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
def switch_features_handler(self, ev):
    tcp_match = self.drop_tcp_packets_to_specfic_ip(parser)
    self.add_flow_for_clear(datapath, 2, tcp_match)

def drop_tcp_packets_to_specfic_ip(self, parser):
    tcp_match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, ipv4_src=conpot_ip)
    return tcp_match
Thanks.
You need to install a rule that matches the packet flow.
Then you need to create a loop that periodically requests statistics for that rule.
Finally, you read each statistics reply and check the packet count. When the count reaches your threshold, you send the rule that blocks the packets. A minimal sketch of that pattern follows.
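This is a sketch only, assuming a standard Ryu OpenFlow 1.3 app; THRESHOLD and CONPOT_IP are hypothetical stand-ins for your threshold and conpot_ip, and the 5-second polling interval is arbitrary:

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, DEAD_DISPATCHER, set_ev_cls
from ryu.lib import hub

THRESHOLD = 1000          # hypothetical packet-count threshold
CONPOT_IP = '10.0.0.5'    # hypothetical; the question's conpot_ip

class ThresholdBlocker(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super(ThresholdBlocker, self).__init__(*args, **kwargs)
        self.datapaths = {}
        hub.spawn(self._monitor)   # background statistics loop

    @set_ev_cls(ofp_event.EventOFPStateChange, [MAIN_DISPATCHER, DEAD_DISPATCHER])
    def _state_change_handler(self, ev):
        # Track connected switches so the monitor loop can poll them.
        if ev.state == MAIN_DISPATCHER:
            self.datapaths[ev.datapath.id] = ev.datapath
        elif ev.state == DEAD_DISPATCHER:
            self.datapaths.pop(ev.datapath.id, None)

    def _monitor(self):
        # Periodically request flow statistics from every switch.
        while True:
            for dp in self.datapaths.values():
                dp.send_msg(dp.ofproto_parser.OFPFlowStatsRequest(dp))
            hub.sleep(5)

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def _flow_stats_reply_handler(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        # In practice, filter the entries to the specific rule you installed.
        for stat in ev.msg.body:
            if stat.packet_count >= THRESHOLD:
                # Threshold reached: install a higher-priority rule with no
                # instructions, which drops the matching TCP traffic.
                match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                                        ipv4_src=CONPOT_IP)
                dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                              match=match, instructions=[]))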

AUTOSAR configuration - DCM module

I am stuck at a point where I am configuring the DCM module; the current parameter I am trying to configure is DcmTimStrP2AdjustServer.
The requirement is P2CAN_SERVER_MAX = 25ms; P2STARCAN_SERVER_MAX = 5000ms.
Is DcmDspSessionP2ServerMax the same as P2CAN_SERVER_MAX? And if it is the same,
what is the need for DcmTimStrP2AdjustServer, and how do I find the best value for it? (The values should all be multiples of DcmTaskTime, which I find logical.)
DcmTaskTime = 5ms;
I am following Autosar 4.0.3, using ETAS tool for configuring the parameters.
To fulfill your requirement, you need to configure DcmDspSessionP2ServerMax and DcmDspSessionP2StarServerMax respectively for each session control in the DcmDspSessionRows at Dcm/DcmConfigSet/DcmDsp/DcmDspSession/, i.e.:
DcmDspSessionP2ServerMax 25
DcmDspSessionP2StarServerMax 5000
There is no DcmTimStrP2AdjustServer; I guess you're referring to DcmTimStrP2ServerAdjust instead. DcmTimStrP2ServerAdjust and DcmTimStrP2StarServerAdjust should be configured to a multiple of your DcmTaskTime (5ms in your case, so 5ms, 10ms, 15ms, ... are applicable) and are used to safeguard that the response is available on the bus before the P2 or P2* timeout is triggered. In your case you may want to set these values to the same values as in the DcmDspSessionRows if there is no other specification given, because the timeout values chosen there are already multiples of your DcmTaskTime:
DcmTimStrP2ServerAdjust 25
DcmTimStrP2StarServerAdjust 5000
The adjust value is an internal value used to account for the delay between the Dcm transmit request and the message actually being on the bus.
The definition of P2ServerMax and P2*ServerMax and their corresponding Adjust values is the same:
This parameter is used to guarantee that the diagnostic response is available on the bus before reaching P2 by adjusting the current DcmDspSessionP2ServerMax. This parameter mainly represents the software-architecture-dependent communication delay between the time the transmission is initiated by the DCM and the time when the message is actually transmitted to the bus.
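As a quick sanity check, here is a small sketch (values taken from this thread; the helper is hypothetical) verifying that the chosen timings are multiples of DcmTaskTime:

# All times in milliseconds; DcmTaskTime from the question.
DCM_TASK_TIME = 5

def is_multiple_of_task_time(value_ms):
    # Dcm timing parameters should be multiples of DcmTaskTime.
    return value_ms % DCM_TASK_TIME == 0

timings = {
    "DcmDspSessionP2ServerMax": 25,
    "DcmDspSessionP2StarServerMax": 5000,
    "DcmTimStrP2ServerAdjust": 25,
    "DcmTimStrP2StarServerAdjust": 5000,
}
for name, value in timings.items():
    print(name, value, "OK" if is_multiple_of_task_time(value) else "NOT a multiple")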

Summarize multiple values sent to Graphite at the same time

I'm trying to display the sum of several values sent to Graphite (carbon-cache) for the same timestamp.
The sent values are like:
test.nb 10 1421751600
test.nb 11 1421751600
test.nb 12 1421751600
test.nb 13 1421751600
and I would like Graphite to display the value "46" for timestamp 1421751600.
However, only the last value, "13", is displayed in Graphite.
Here are configuration files :
storage-aggregation.conf
[test_sum]
pattern = ^test\.*
xFilesFactor = 0.1
aggregationMethod = sum
storage-schemas.conf
[TEST]
pattern = ^test\.
retentions = 10s:30d
Is there a way to do this with Graphite/Carbon?
Thanks.
The storage-aggregation.conf file defines how to aggregate data to lower-precision retentions, and since you only have one retention precision defined (10s for 30 days), it is not needed here.
To do this with the Graphite daemons, you will have to use
carbon-aggregator.py, which runs in front of carbon-cache.py to buffer metrics over time. Check the [aggregator] section in the config file. By default, carbon-aggregator listens on port 2023, so you will have to send data points to this port and not to the carbon-cache port (2003 by default for the plain-text protocol).
You will also have to specify an aggregation rule in aggregation-rules.conf that adds the several metrics together as they come in. You can find a detailed explanation here.
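For the metrics above, an aggregation rule along these lines should work (a sketch: the output metric name test.nb_sum and the 10-second buffering window are assumptions chosen to match your retention):
aggregation-rules.conf
# output_template (frequency) = method input_pattern
test.nb_sum (10) = sum test.nb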

Which element in the WebRTC API stats refers to incoming bitrate?

I am trying to find an element that refers to the incoming bitrate in a WebRTC dump file.
Where can I find the incoming bitrate in webrtc-internals?
Also, how can I calculate the incoming bitrate from the WebRTC stats?
In webrtc-internals, check the active connection -- it's printed in bold and is usually Conn-Audio-1-0. There are two fields, bytesSent and bytesReceived, which allow you to calculate the bitrate. Also check the constraints + stats demo for an actual example: https://webrtc.github.io/samples/src/content/peerconnection/constraints/
In getStats, iterate the reports until you find one of kind googCandidatePair with .stat('googActiveConnection') === 'true'. That gives you the same information as webrtc-internals. If you want per-track/stream values, reports of type ssrc have bytesSent or bytesReceived, depending on whether they are sent or received.
Then calculate the bitrate by dividing the byte delta by the time difference between two getStats calls.
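A minimal sketch of that arithmetic, with hypothetical byte counts sampled from two getStats calls one second apart:

# bytesReceived from two getStats() samples (hypothetical values).
bytes_t0, time_t0 = 1_200_000, 10.0   # first sample: bytes, seconds
bytes_t1, time_t1 = 1_450_000, 11.0   # second sample, one second later

# Bitrate = byte delta * 8 bits, divided by the elapsed time.
bitrate_bps = (bytes_t1 - bytes_t0) * 8 / (time_t1 - time_t0)
print(f"incoming bitrate: {bitrate_bps / 1000:.0f} kbps")  # 2000 kbps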

HID transfer comparison on different endpoints

I'm using the SiLabs C8051F320 configured as a HID to stream ADC data (in 64B or 32B reports) to the PC. I'm basing my HID on the SiLabs example code, with bInterval = 1 and experimenting with endpoint 1 (EP1) versus endpoint 2 (EP2).
Per the C8051F320's datasheet, when the endpoints are in split mode, EP1 is 64B and EP2 is 128B when not double-buffered. I have EP1 as 64B when not double-buffered and 32B when double-buffered. EP2 is 64B whether or not it is double-buffered. The ADC data is 2 bytes per sample, so 31 samples are transferred per 64B report and 15 samples per 32B report.
1) non-double-buffered EP1 (64B per report) streams 22.5kSps ADC data properly
2) double-buffered EP1 (32B per report) streams 11.5kSps ADC data properly
3) non-double-buffered EP2 (64B per report) does not stream 22.5kSps ADC data properly (I didn't check what the maximum sample rate is)
4) double-buffered EP2 (64B per report) samples 22.5kSps ADC data properly
5) It seems that the time to fill a report with samples must be longer than bInterval. For example, if bInterval were 10 instead of 1, then non-double-buffered EP1 streams 3kSps properly.
Does the above scenario look right? Why does EP1 allow faster transfer than EP2? Why does the report fill time need to be longer than bInterval?
It seems that the time to fill a report with samples must be longer than bInterval.
Correct: HID uses interrupt-type endpoints, which can transport one report every bInterval ms. That allows you to calculate the maximum data rate: 64 bytes * 1000 Hz = 64000 bytes per second.
With 2 bytes per sample, this results in a maximum sampling rate of 32 kHz.
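A quick back-of-the-envelope check of those limits, also using the question's 31 samples per 64B report (a sketch; numbers from this thread):

# Interrupt endpoint budget for a full-speed HID device with bInterval = 1 ms.
report_bytes = 64
reports_per_second = 1000            # one report per 1 ms frame
bytes_per_sample = 2

raw_limit = report_bytes * reports_per_second    # 64000 B/s
sample_limit = raw_limit // bytes_per_sample     # 32000 Sps upper bound

# With 31 samples packed into each 64B report (as in the question),
# the practical ceiling is lower:
packed_limit = 31 * reports_per_second           # 31000 Sps
print(raw_limit, sample_limit, packed_limit)     # 64000 32000 31000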
Why does EP1 allow faster transfer than EP2?
I can see no reason for this behavior besides a programming error.
Note: the HID protocol is a poor choice for streaming data. Bulk-type endpoints allow much higher data throughput.