What is the maximum number of paired devices on BlueZ? - bluez

How many devices can be paired on BlueZ?
From tracing the code, the default number of paired devices appears to be unlimited, right?
From main.conf:
# Maximum number of controllers allowed to be exposed to the system.
# Default=0 (unlimited)
# MaxControllers=0
Thanks


What does the 'ovs-dpctl show' command mean?

When I execute the 'ovs-dpctl show' command, I get:
$ ovs-dpctl show
system@ovs-system:
lookups: hit:37994604 missed:218759 lost:0
flows: 5
masks: hit:39862430 total:5 hit/pkt:1.04
port 0: ovs-system (internal)
port 1: vbr0 (internal)
port 2: gre_sys (gre)
port 3: net2
I found this explanation in the man page:
[-s | --statistics] show [dp...]
Prints a summary of configured datapaths, including their datapath numbers and a list of ports connected to each datapath. (The local port is identified as port 0.) If -s or --statistics is specified, then packet and byte counters are also printed for each port.
The datapath numbers consist of flow stats and mega flow mask stats.
The "lookups" row displays three stats related to flow lookups triggered by processing incoming packets in the datapath. "hit" displays the number of packets matching existing flows. "missed" displays the number of packets not matching any existing flow, which require user space processing. "lost" displays the number of packets destined for user space processing but subsequently dropped before reaching user space. The sum of "hit" and "missed" equals the total number of packets the datapath processed.
The "flows" row displays the number of flows in the datapath.
The "masks" row displays the mega flow mask stats. This row is omitted for datapaths not implementing mega flows. "hit" displays the total number of masks visited for matching incoming packets. "total" displays the number of masks in the datapath. "hit/pkt" displays the average number of masks visited per packet: the ratio between "hit" and the total number of packets processed by the datapath.
If one or more datapaths are specified, information on only those datapaths is displayed. Otherwise, ovs-dpctl displays information about all configured datapaths.
My questions:
Is the total number of incoming packets equal to (lookups.hit + lookups.missed)?
If the total number of incoming packets equals (lookups.hit + lookups.missed), why is masks.hit (39862430) greater than lookups.hit (37994604) + lookups.missed (218759)?
Why is the masks hit/pkt ratio greater than 1? What would a reasonable value be, and in what range?
Is the total number of incoming packets equal to (lookups.hit + lookups.missed)?
Yes. (Plus lookups.lost except that I see that's zero for you.)
If the total number of incoming packets equals (lookups.hit + lookups.missed), why is masks.hit (39862430) greater than lookups.hit (37994604) + lookups.missed (218759)?
masks.hit is the number of hash table lookups that were executed to
process all of the packets that were processed. A given packet might
require up to masks.total lookups.
Why is the masks hit/pkt ratio greater than 1? What would a reasonable value be, and in what range?
The ratio cannot be less than 1.00 because that would mean that
processing a packet didn't require even a single lookup. A ratio of
1.04 is very good because it means that most packets were processed with
only a single lookup. Higher ratios are worse.
by Ben Pfaff (blp@ovn.org)
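As a quick sanity check on those numbers, a minimal Python sketch (variable names are just for illustration) reproduces the 1.04 ratio from the output above:

# Stats from the ovs-dpctl show output above.
lookups_hit = 37994604     # packets that matched an existing flow
lookups_missed = 218759    # packets that needed userspace processing
masks_hit = 39862430       # total mask (hash table) lookups performed

total_packets = lookups_hit + lookups_missed
print(total_packets)                         # 38213363 packets processed
print(round(masks_hit / total_packets, 2))   # 1.04 masks visited per packet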

Selenoid: What does the count attribute do in a quota file?

I started Selenoid with docker: aerokube/cm:latest selenoid start --args "-limit 20"
I then created a quota file, user.xml:
<qa:browsers xmlns:qa="urn:config.gridrouter.qatools.ru">
    <browser name="chrome" defaultVersion="62.0">
        <version number="62.0">
            <region name="1">
                <host name="1.2.3.4" port="4445" count="10"/>
            </region>
        </version>
    </browser>
</qa:browsers>
When I run with this user, it runs 20 sessions in parallel. I thought count="10" meant this user could run at most 10 in parallel, and that -limit 20 was the maximum for the VM. Is this the correct usage of count?
In fact, the count field in a Ggr quota XML file means host weight. It matters when two or more hosts are present in the quota; the attribute is called count for historical reasons. So when you have, for example, two hosts in a quota with counts 1 and 3, sessions will be distributed 1:3 across those hosts. When the counts are equal, the distribution should be uniformly random. If you set count equal to the real number of browsers on each host, you also get a uniformly random distribution; this is what we recommend doing in production.
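To illustrate the 1:3 case, here is a quota sketch with two hypothetical hosts (the host names and ports are made up); roughly a quarter of new sessions would go to the first host and three quarters to the second:

<qa:browsers xmlns:qa="urn:config.gridrouter.qatools.ru">
    <browser name="chrome" defaultVersion="62.0">
        <version number="62.0">
            <region name="1">
                <host name="host-a.example.com" port="4445" count="1"/>
                <host name="host-b.example.com" port="4445" count="3"/>
            </region>
        </version>
    </browser>
</qa:browsers>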

Adding extra device to namespace using asinfo

From what I saw in the config documentation, it’s easy to configure multiple devices within the same namespace using:
namespace <namespace-name> {
    memory-size <SIZE>G        # Maximum memory allocation for primary
                               # and secondary indexes.
    storage-engine device {    # Configure the storage engine to use persistence.
        device /dev/<device>   # Raw device. Maximum size is 2 TiB.
        # device /dev/<device> # (Optional) another raw device.
        write-block-size 128K  # Adjust block size to make it efficient for SSDs.
    }
}
Is there any way I can do that without restarting the asd service, using the asinfo tool for example?
No, you cannot add devices dynamically.
User also posted here:
https://discuss.aerospike.com/t/adding-extra-device-to-namespace-using-asinfo/4525
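Since devices can only be added statically, the change has to be made in aerospike.conf and picked up by restarting asd. A sketch of the edited stanza, with placeholder device paths:

namespace <namespace-name> {
    memory-size <SIZE>G
    storage-engine device {
        device /dev/sdb        # existing raw device
        device /dev/sdc        # newly added raw device; takes effect only after asd restarts
        write-block-size 128K
    }
}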

How to scale down a CrateDB cluster?

For testing, I wanted to shrink my 3-node cluster to 2 nodes, to later do the same thing for my 5-node cluster.
However, after following the best practice of shrinking a cluster:
1. Back up all tables.
2. For all tables: alter table xyz set (number_of_replicas = 2) if it was less than 2 before.
3. SET GLOBAL PERSISTENT discovery.zen.minimum_master_nodes = <half of the cluster + 1>;
   3a. If the data check should always be green, set the min_availability to 'full':
   https://crate.io/docs/reference/configuration.html#graceful-stop
4. Initiate graceful stop on one node.
5. Wait for the data check to turn green.
6. Repeat from step 3.
7. When done, persist the node configurations in crate.yml:
   gateway.recover_after_nodes: n
   discovery.zen.minimum_master_nodes: (n / 2) + 1
   gateway.expected_nodes: n
My cluster never went back to "green" again, and I also have a critical node check failing.
What went wrong here?
crate.yml:
...
################################## Discovery ##################################
# Discovery infrastructure ensures nodes can be found within a cluster
# and master node is elected. Multicast discovery is the default.
# Set to ensure a node sees M other master eligible nodes to be considered
# operational within the cluster. Its recommended to set it to a higher value
# than 1 when running more than 2 nodes in the cluster.
#
# We highly recommend to set the minimum master nodes as follows:
# minimum_master_nodes: (N / 2) + 1 where N is the cluster size
# That will ensure a full recovery of the cluster state.
#
discovery.zen.minimum_master_nodes: 2
# Set the time to wait for ping responses from other nodes when discovering.
# Set this option to a higher value on a slow or congested network
# to minimize discovery failures:
#
# discovery.zen.ping.timeout: 3s
#
# Time a node is waiting for responses from other nodes to a published
# cluster state.
#
# discovery.zen.publish_timeout: 30s
# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
# For example, Amazon Web Services doesn't support multicast discovery.
# Therefore, you need to specify the instances you want to connect to a
# cluster as described in the following steps:
#
# 1. Disable multicast discovery (enabled by default):
#
discovery.zen.ping.multicast.enabled: false
#
# 2. Configure an initial list of master nodes in the cluster
# to perform discovery when new nodes (master or data) are started:
#
# If you want to debug the discovery process, you can set a logger in
# 'config/logging.yml' to help you doing so.
#
################################### Gateway ###################################
# The gateway persists cluster meta data on disk every time the meta data
# changes. This data is stored persistently across full cluster restarts
# and recovered after nodes are started again.
# Defines the number of nodes that need to be started before any cluster
# state recovery will start.
#
gateway.recover_after_nodes: 3
# Defines the time to wait before starting the recovery once the number
# of nodes defined in gateway.recover_after_nodes are started.
#
#gateway.recover_after_time: 5m
# Defines how many nodes should be waited for until the cluster state is
# recovered immediately. The value should be equal to the number of nodes
# in the cluster.
#
gateway.expected_nodes: 3
So there are two things that are important:
The number of replicas is essentially the number of nodes you can lose in a typical setup (2 is recommended so that you can scale down AND lose a node in the process and still be OK).
The procedure is recommended for clusters > 2 nodes ;)
CrateDB will automatically distribute the shards across the cluster in a way that no replica and primary share a node. If that is not possible (which is the case if you have 2 nodes and 1 primary with 2 replicas), the data check will never return to 'green'. So in your case, set the number of replicas to 1 in order to get the cluster back to green (alter table mytable set (number_of_replicas = 1)).
The critical node check is due to the cluster not having received an updated crate.yml yet: your file still has the configuration of a 3-node cluster in it, hence the message. Since CrateDB only loads expected_nodes at startup (it's not a runtime setting), a restart of the whole cluster is required to complete the scale-down. It can be done with a rolling restart, but be sure to set SET GLOBAL PERSISTENT discovery.zen.minimum_master_nodes = <half of the cluster + 1>; properly, otherwise consensus will not work...
Also, it's recommended to scale down one node at a time in order to avoid overloading the cluster with rebalancing and accidentally losing data.
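Putting that together for this case (scaling from 3 to 2 nodes; mytable stands in for each of your tables), the runtime statements would be:

alter table mytable set (number_of_replicas = 1);
SET GLOBAL PERSISTENT discovery.zen.minimum_master_nodes = 2;  -- (2 / 2) + 1

and the crate.yml persisted on the two remaining nodes before the rolling restart would read:

gateway.recover_after_nodes: 2
discovery.zen.minimum_master_nodes: 2
gateway.expected_nodes: 2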

Naming spec for WMI counters

Is there a specification for parsing the names of WMI performance counters? Standard names look like '\Xxxx\Yy yy\Zzzz zzz', but we are seeing some custom names that look like '\Aaaa aaa \Bb bb BLAH(bbb\bbbb)\Ccc ccc ccc', i.e., trailing spaces, and embedded parenthetical elements with embedded '\'s. Is there a spec that describes what is allowable in these names?
Here are some typical standard counter names:
\Process(Idle)\% Processor Time
\Process(System)\% Processor Time
\LogicalDisk(HarddiskVolume1)\Avg. Disk Bytes/Transfer
\LogicalDisk(C:)\Avg. Disk Bytes/Transfer
\LogicalDisk(_Total)\Avg. Disk Bytes/Transfer
\LogicalDisk(HarddiskVolume1)\Avg. Disk Bytes/Read
\LogicalDisk(C:)\Avg. Disk Bytes/Read
\LogicalDisk(_Total)\Avg. Disk Bytes/Read
\LogicalDisk(HarddiskVolume1)\Avg. Disk Bytes/Write
\Thread(w3wp/7)\Priority Current
\Thread(w3wp/8)\Priority Current
\Thread(explorer/7)\Priority Current
\MSMQ Outgoing HTTP Session(*)\Outgoing HTTP Bytes
\MSMQ Queue(os:zyxwvut1dv\private$\profilestats_submissions_dev_current_1)\Messages in Queue
\Per Processor Network Interface Card Activity(1, Intel(R) PRO-1000 MT Network Connection)\Received Packets/sec
\Netlogon(\\ZY2XWVUT1.app5000.online)\Semaphore Waiters
Here are some custom counter names:
\Customer App (current) DEV(netmix\auth.asmx\authtkts)\ErrorCode.InvalidState Count
\Customer App (current) DEV(lorem\ipsem.asmx\rdunlcks)\ErrorCode.InvalidState Count
\Customer App (current) DEV(netmix\legal.asmx\getvalidverid)\ErrorCode.OutOfRange Count
\Customer App (current) DEV(lorem\acq.asmx\submit)\ErrorCode.OutOfRange Count
\Customer App (current) DEV(netmix\milestones.asmx\getmilestones)\ErrorCode.OutOfRange Count
\Customer App (current) AUTH(*)\ErrorCode.UnknownError Count
Note:
I am not looking for just a regex that will match the given strings above. I would like to have the reference to the documented spec that defines this.
Names are of the form \A\B
A has a limit of 256 characters
B has a limit of 1024 characters
It does not appear that there are any particular 'invalid' characters for either.
References (check the 'name' subsection of each):
A: http://msdn.microsoft.com/en-us/library/aa394299%28v=vs.85%29.aspx
B: http://msdn.microsoft.com/en-us/library/aa371920%28v=vs.85%29.aspx
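Given those limits, and since neither part appears to reserve any characters, parsing has to rely on structure rather than a forbidden-character list. Below is a minimal Python sketch resting on two heuristics that are NOT from the linked spec: the counter name after the last '\' contains no backslash (instance names may, as in the MSMQ and Netlogon examples above), and the instance, if present, is the last balanced '(...)' group:

def split_counter_path(path):
    # '\Object(Instance)\Counter' -> (object, instance, counter).
    # Heuristic 1: split object from counter on the LAST backslash.
    head, _, counter = path.rpartition("\\")
    obj, instance = head, None
    # Heuristic 2: the instance is the last balanced parenthesized group,
    # so nested parens like 'Intel(R)' inside an instance are handled.
    if head.endswith(")"):
        depth = 0
        for i in range(len(head) - 1, -1, -1):
            if head[i] == ")":
                depth += 1
            elif head[i] == "(":
                depth -= 1
                if depth == 0:
                    obj, instance = head[:i], head[i + 1:-1]
                    break
    return obj.lstrip("\\"), instance, counter

# A couple of the names quoted above:
print(split_counter_path(r"\LogicalDisk(C:)\Avg. Disk Bytes/Transfer"))
# ('LogicalDisk', 'C:', 'Avg. Disk Bytes/Transfer')
print(split_counter_path(
    r"\Per Processor Network Interface Card Activity"
    r"(1, Intel(R) PRO-1000 MT Network Connection)\Received Packets/sec"))
# ('Per Processor Network Interface Card Activity',
#  '1, Intel(R) PRO-1000 MT Network Connection', 'Received Packets/sec')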