Is there a specification for parsing the names of WMI performance counters? Standard names look like '\Xxxx\Yy yy\Zzzz zzz', but we are seeing some custom names that look like '\Aaaa aaa \Bb bb BLAH(bbb\bbbb)\Ccc ccc ccc', i.e., with trailing spaces and parenthetical elements that themselves contain '\' characters. Is there a spec that describes what is allowable in these names?
Here are some typical standard counter names:
\Process(Idle)\% Processor Time
\Process(System)\% Processor Time
\LogicalDisk(HarddiskVolume1)\Avg. Disk Bytes/Transfer
\LogicalDisk(C:)\Avg. Disk Bytes/Transfer
\LogicalDisk(_Total)\Avg. Disk Bytes/Transfer
\LogicalDisk(HarddiskVolume1)\Avg. Disk Bytes/Read
\LogicalDisk(C:)\Avg. Disk Bytes/Read
\LogicalDisk(_Total)\Avg. Disk Bytes/Read
\LogicalDisk(HarddiskVolume1)\Avg. Disk Bytes/Write
\Thread(w3wp/7)\Priority Current
\Thread(w3wp/8)\Priority Current
\Thread(explorer/7)\Priority Current
\MSMQ Outgoing HTTP Session(*)\Outgoing HTTP Bytes
\MSMQ Queue(os:zyxwvut1dv\private$\profilestats_submissions_dev_current_1)\Messages in Queue
\Per Processor Network Interface Card Activity(1, Intel(R) PRO-1000 MT Network Connection)\Received Packets/sec
\Netlogon(\\ZY2XWVUT1.app5000.online)\Semaphore Waiters
Here are some custom counter names:
\Customer App (current) DEV(netmix\auth.asmx\authtkts)\ErrorCode.InvalidState Count
\Customer App (current) DEV(lorem\ipsem.asmx\rdunlcks)\ErrorCode.InvalidState Count
\Customer App (current) DEV(netmix\legal.asmx\getvalidverid)\ErrorCode.OutOfRange Count
\Customer App (current) DEV(lorem\acq.asmx\submit)\ErrorCode.OutOfRange Count
\Customer App (current) DEV(netmix\milestones.asmx\getmilestones)\ErrorCode.OutOfRange Count
\Customer App (current) AUTH(*)\ErrorCode.UnknownError Count
Note:
I am not looking for just a regex that will match the given strings above. I would like to have the reference to the documented spec that defines this.
Names are of the form \A\B
A has a limit of 256 characters
B has a limit of 1024 characters
It does not appear that there are any particular 'invalid' characters for either.
References (check the 'name' subsection of each):
A: http://msdn.microsoft.com/en-us/library/aa394299%28v=vs.85%29.aspx
B: http://msdn.microsoft.com/en-us/library/aa371920%28v=vs.85%29.aspx
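Not the documented spec being asked for, but for illustration of the \A\B split: the heuristic below happens to handle every sample above only because none of the counter names (the B part) contain a backslash. The helper name is made up, and this is a guess at behavior, not something the references guarantee; the 256/1024 character limits could then be checked on the two parts.

    static String[] splitCounterPath(String path) {
        // Heuristic, NOT a documented rule: treat everything after the LAST '\' as
        // the counter name (B) and everything between the leading '\' and that
        // point as the object/instance part (A).
        int last = path.lastIndexOf('\\');
        if (last <= 0)
            throw new IllegalArgumentException("Not of the form \\A\\B: " + path);
        String a = path.substring(1, last);   // e.g. "Process(Idle)"
        String b = path.substring(last + 1);  // e.g. "% Processor Time"
        return new String[] { a, b };
    }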
I have set the thread count to 50 and the ramp-up period to 0.
For 48 threads it passes; for the remaining 2 threads there is no failure captured in the Selenium log files.
I am expecting a concurrent login of 50 users with a 0 ramp-up period, and I am not able to find the exact reason for the failure. Please suggest fixes to handle this scenario.
Check jmeter.log file for any suspicious entries
Add View Results Tree listener to your test plan - it will allow you to inspect request and response details
50 real browsers might be too high for a single machine
As per the WebDriver Sampler documentation:
From experience, the number of browsers (threads) that the reader creates should be limited by the following formula:
C = N + 1
where
C = Number of Cores of the host running the test
and N = Number of Browsers (threads).
As per the Firefox 62.0 system requirements:
512MB of RAM / 2GB of RAM for the 64-bit version
So you will need a machine with 51 cores and 100 GB of RAM in order to ensure there will be no JMeter-side bottleneck. If your machine hardware specifications are lower, you will have to go for Remote Testing.
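Where those numbers come from, applying the formula and the Firefox requirements above to 50 threads:

    C = N + 1 = 50 + 1 = 51 cores
    RAM ≈ 50 browsers × 2 GB (64-bit Firefox) = 100 GB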
When I execute the 'ovs-dpctl show' command, I got:
$ ovs-dpctl show
system@ovs-system:
lookups: hit:37994604 missed:218759 lost:0
flows: 5
masks: hit:39862430 total:5 hit/pkt:1.04
port 0: ovs-system (internal)
port 1: vbr0 (internal)
port 2: gre_sys (gre)
port 3: net2
I found the following explanation in the ovs-dpctl documentation:
[-s | --statistics] show [dp...]
Prints a summary of configured datapaths, including their datapath numbers and a list of ports connected to each datapath. (The local port is identified as port 0.) If -s or --statistics is specified, then packet and byte counters are also printed for each port. The datapath numbers consists of flow stats and mega flow mask stats.
The "lookups" row displays three stats related to flow lookup triggered by processing incoming packets in the datapath. "hit" displays number of packets matches existing flows. "missed" displays the number of packets not matching any existing flow and require user space processing. "lost" displays number of packets destined for user space process but subsequently dropped before reaching userspace. The sum of "hit" and "miss" equals to the total number of packets datapath processed.
The "flows" row displays the number of flows in datapath.
The "masks" row displays the mega flow mask stats. This row is omitted for datapath not implementing mega flow. "hit" displays the total number of masks visited for matching incoming packets. "total" displays number of masks in the datapath. "hit/pkt" displays the average number of masks visited per packet; the ratio between "hit" and total number of packets processed by the datapath.
If one or more datapaths are specified, information on only those datapaths are displayed. Otherwise, ovs-dpctl displays information about all configured datapaths.
My questions are:
Is the total number of incoming packets equal to (lookups.hit + lookups.missed)?
If the total number of incoming packets is equal to (lookups.hit + lookups.missed), why is masks.hit (39862430) greater than lookups.hit (37994604) + lookups.missed (218759)?
Why is the masks.hit/pkt ratio greater than 1? What range of values is reasonable?
Is the total number of incoming packets equal to (lookups.hit + lookups.missed)?
Yes. (Plus lookups.lost, except that I see that's zero for you.)
If the total number of incoming packets is equal to (lookups.hit + lookups.missed), why is masks.hit (39862430) greater than lookups.hit (37994604) + lookups.missed (218759)?
masks.hit is the number of hash table lookups that were executed to process all of the packets. A given packet might require up to masks.total lookups.
Why is the masks.hit/pkt ratio greater than 1? What range of values is reasonable?
The ratio cannot be less than 1.00 because that would mean that processing a packet didn't require even a single lookup. A ratio of 1.04 is very good because it means that most packets were processed with only a single lookup. Higher ratios are worse.
by Ben Pfaff (blp@ovn.org)
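To make that concrete with the numbers from the ovs-dpctl output at the top:

    lookups.hit + lookups.missed = 37994604 + 218759 = 38213363 packets
    masks.hit / total packets = 39862430 / 38213363 ≈ 1.04

which matches the hit/pkt value reported in the masks row, and shows why masks.hit can exceed the packet count: on average each packet probed slightly more than one mask.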
I want to define a Ring Group that, when called, rings one extension and one external number (mobile phone). What is the best way to achieve that?
Right now only the extension is called, so just entering an external number in the Destination field does not work; the logs say:
[NOTICE] switch_cpp.cpp:1376 [ring groups][call forward all] user_exists id <mobileno> <domainname>
and later
[DEBUG] switch_ivr_originate.c:3865 Originate Resulted in Error Cause: 27 [DESTINATION_OUT_OF_ORDER]
It will check all calls using the dialplan to see if the destination is a local number; for an external number it should say user_exists false every time.
This [DESTINATION_OUT_OF_ORDER] indicates that it may not have found a matching outbound route that matches the number of digits of the external phone number. Or it may mean that your carrier rejected the call, perhaps because it didn't like the caller ID that was sent. The easiest thing to try is attempting it with an outbound route to a different carrier.
In case you weren't aware, FusionPBX 4.4 was released on 5 April 2018. Instructions to upgrade are posted on docs.fusionpbx.com; search for 'upgrade' (version upgrade).
I am trying to find any element referring to incoming bitrate in a webrtc dump file.
Where can I find the incoming bitrate in webrtc-internals?
Also, how can I calculate the incoming bitrate from webrtc stats?
In webrtc-internals check the active connection -- it's printed in bold. Usually it is Conn-Audio-1-0. There are two fields bytesSent and bytesReceived which will allow you to calculate the bitrate. Also check the constraints + stats demo for an actual example: https://webrtc.github.io/samples/src/content/peerconnection/constraints/
In getStats, iterate the reports until you find one of kind googCandidatePair with .stat('googActiveConnection') === 'true'. That is giving you the same information as webrtc-internals. If you want per-track/stream values, reports of type ssrc have bytesSent or bytesReceived, depending on whether they are sent or received.
Then calculate the bitrate by dividing the bytes sent/received by the time difference between the getStats calls.
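As a minimal sketch of that calculation (the names are illustrative, not part of the getStats API): take two samples of the relevant report a known interval apart and divide the byte delta by the elapsed time:

    bitrate (bits/s) = (bytesReceived_now - bytesReceived_previous) * 8 / elapsed_seconds

For example, two samples taken one second apart with bytesReceived of 1250000 and 1300000 give (1300000 - 1250000) * 8 / 1 = 400000 bits/s, i.e. roughly 400 kbps.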
Following up on this GWT-RPC question (and answer #1) regarding field size checking, I would like to know the right way to check, pre-deserialization, the maximum data size sent to the server -- something like: if the request data size > X, then abort the request. Valuing simplicity, and based on the answer to the aforementioned question, I am inclined to believe that checking the maximum overall request size would suffice and that finer-grained checks (i.e., field-level checks) could be deferred to post-deserialization, but I am open to any best-practice suggestion.
Tech stack of interest: GWT-RPC client-server communication with Apache-Tomcat front-end web-server.
I suppose a first step would be to globally limit the size of any request (LimitRequestBody in httpd.conf and/or others?).
Are there finer-grained checks like something that can be set per RPC request? If so where, how? How much security value do finer grain checks bring over one global setting?
To frame the question more specifically with an example, let's suppose we have the two following RPC request signatures on the same servlet:
public void rpc1(A a, B b) throws MyException;
public void rpc2(C c, D d) throws MyException;
Suppose I approximately know the following max sizes:
a: 10 kB
b: 40 kB
c: 1 MB
d: 1 kB
Then I expect the following max sizes:
rpc1: 50 kB
rpc2: 1 MB
In the context of this example, my questions are:
Where/how to configure the max size of any request -- i.e., 1 MB in my above example? I believe it is LimitRequestBody in httpd.conf but not 100% sure whether it is the only parameter for this purpose.
If possible, where/how to configure max size per servlet -- i.e., max size of any rpc in my servlet is 1 MB?
If possible, where/how to configure/check max size per rpc request -- i.e., max rpc1 size is 50 kB and max rpc2 size is 1 MB?
If possible, where/how to configure/check max size per rpc request argument -- i.e., a is 10 kB, b is 40 kB, c is 1 MB, and d is 1 kB. I suspect it makes practical sense to do post-deserialization, doesn't it?
For practical purposes based on cost/benefit, what level of pre-deserialization checking is generally recommended -- 1. global, 2. servlet, 3. rpc, 4. object-argument? Stated differently, what, roughly, is the cost/complexity on one hand and the added value on the other of each of the above pre-deserialization checks?
Thanks much in advance.
Based on what I have learned since I asked the question, my own answer and strategy until someone can show me better is:
First line of defense and check is Apache's LimitRequestBody, set in httpd.conf. It is the overall max for all rpc calls across all servlets (a sample directive is shown right after this list).
Second line of defense is servlet pre-deserialization, by overriding GWT's AbstractRemoteServiceServlet.readContent. For instance, one could do it as shown further below, I suppose. This was the heart of what I was fishing for in this question.
Then one can further check each rpc call argument post-deserialization. One could conveniently use JSR 303 validation on both the server and client side -- see the StackOverflow and GWT references regarding the client side (a minimal sketch is included at the end of this answer).
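For the first point, assuming an Apache httpd front end as in the question, a directive like the following could be a starting point; the value is in bytes, so 1048576 corresponds to the 1 MB ceiling of the example:

    LimitRequestBody 1048576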
Example on how to override AbstractRemoteServiceServlet.readContent:
@Override
protected String readContent(HttpServletRequest request) throws ServletException, IOException
{
    final int contentLength = request.getContentLength();
    // _maxRequestSize should be large enough to be applicable to all rpc calls within this servlet.
    if (contentLength > _maxRequestSize)
        throw new IOException("Request too large");
    final String requestPayload = super.readContent(request);
    return requestPayload;
}
See this question in case the max request size is > 2GB.
From a security perspective, this strategy seems quite reasonable to me to control the size of data users send to server.
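As a minimal sketch of the JSR 303 idea from the third point (class, field, and method names are illustrative, the size budget is taken from the example above, and a JSR 303 implementation such as Hibernate Validator is assumed to be on the classpath):

    import java.util.Set;
    import javax.validation.ConstraintViolation;
    import javax.validation.Validation;
    import javax.validation.Validator;
    import javax.validation.constraints.Size;

    public class RpcArgumentValidation {

        // Illustrative DTO: cap the field's character length roughly in line with the ~10 kB budget for 'a'.
        public static class A implements java.io.Serializable {
            @Size(max = 10 * 1024)
            public String payload;
        }

        private static final Validator VALIDATOR =
                Validation.buildDefaultValidatorFactory().getValidator();

        // Call at the top of the rpc method, after GWT has deserialized the argument.
        public static <T> void checkValid(T argument) {
            Set<ConstraintViolation<T>> violations = VALIDATOR.validate(argument);
            if (!violations.isEmpty())
                throw new IllegalArgumentException("Argument failed validation: " + violations);
        }
    }

The same annotations can be reused on the client side for pre-submit checks, which is what the client-side references above discuss.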