Selenoid: What does the count attribute do in a quota file?

I started Selenoid with docker: aerokube/cm:latest selenoid start --args "-limit 20"
I then created a quota file with:
user.xml:
<qa:browsers xmlns:qa="urn:config.gridrouter.qatools.ru">
    <browser name="chrome" defaultVersion="62.0">
        <version number="62.0">
            <region name="1">
                <host name="1.2.3.4" port="4445" count="10"/>
            </region>
        </version>
    </browser>
</qa:browsers>
When I run with this user it runs 20 sessions in parallel. I thought count="10" meant this user could run at most 10 in parallel, and that -limit 20 was the maximum for the VM. Is this the correct usage of count?

In fact, the count field in a Ggr quota XML file means host weight; the attribute is called count for historical reasons. It only matters when two or more hosts are present in the quota. For example, if you have two hosts in the quota with counts 1 and 3, sessions will be distributed 1:3 across these hosts. When the counts are equal, the distribution is uniformly random. If you set count equal to the real number of browsers on each host, you also get a uniformly random distribution, which is what we recommend doing in production.
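For illustration, a sketch of a quota with two hosts weighted 1:3; the structure follows the user.xml above, and the second host address is made up:
<qa:browsers xmlns:qa="urn:config.gridrouter.qatools.ru">
    <browser name="chrome" defaultVersion="62.0">
        <version number="62.0">
            <region name="1">
                <!-- roughly 1 out of every 4 sessions is routed here -->
                <host name="1.2.3.4" port="4445" count="1"/>
                <!-- roughly 3 out of every 4 sessions are routed here -->
                <host name="5.6.7.8" port="4445" count="3"/>
            </region>
        </version>
    </browser>
</qa:browsers>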

Related

Spring JMS message-driven-channel-adapter: the number of consumers doesn't reduce to the standard level

I have a message-driven-channel-adapter and I defined max-concurrent-consumers as 100 and concurrent-consumers as 2. When I ran a load test, I saw that the number of concurrent consumers increased, but after the load test it didn't come back down to the standard level. I'm checking it with the RabbitMQ management portal.
When the project is restarted (no load test), the GET (Empty) rate is 650/s, but after the load test it stays at about 2500/s and does not return to 650/s. I think the concurrent-consumers value is being scaled up but never reduced back to its original value.
How can I make it scale back down to the normal level again?
Here is my message-driven-channel-adapter definition:
<int-jms:message-driven-channel-adapter id="inboundAdapter"
        auto-startup="true"
        connection-factory="jmsConnectionFactory"
        destination="inboundQueue"
        channel="requestChannel"
        error-channel="errorHandlerChannel"
        receive-timeout="-1"
        concurrent-consumers="2"
        max-concurrent-consumers="100" />
With receive-timeout="-1", the container has no control over an idle consumer (it is blocked in the JMS client).
You also need to set max-messages-per-task for the container to consider stopping a consumer.
<int-jms:message-driven-channel-adapter id="inboundAdapter"
        auto-startup="true"
        connection-factory="jmsConnectionFactory"
        destination-name="inboundQueue"
        channel="requestChannel"
        error-channel="errorHandlerChannel"
        receive-timeout="5000"
        concurrent-consumers="2"
        max-messages-per-task="10"
        max-concurrent-consumers="100" />
The time elapsed for an idle consumer is receiveTimeout * maxMessagesPerTask.
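With the example values above, that works out to roughly 5000 ms * 10 = 50 seconds of idle time before a surplus consumer is stopped.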

JMeter and apachetop - why do I see different values?

The explanation is probably simple, but I couldn't find an answer to my question:
I am running a JMeter test from one VM (worker) against another (target). On the worker I have JMeter with 100 threads (100 users). On the target I have an API that runs on Apache. When I run "apachetop -f access_log" on the target, I see only about 7 req/s.
Can someone explain why I don't see 100 req/s on the target?
In the JMeter test results I always see 200 OK, so all requests are hitting the target, and the target always responds; I am not dropping any requests here. Network bandwidth between the machines is 1G. What am I missing here?
Thanks,
Daddy
100 users doesn't necessarily mean 100 requests per second; in fact, that is highly unlikely.
According to the JMeter glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Roughly, if JMeter gets a response from the server in 1 second, you will get 100 requests/second. If the response time is 2 seconds, throughput will be 50 requests/second; with a response time of 4 seconds, 25 requests/second, and so on.
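In other words, as a rough formula (ignoring timers, think time and ramp-up): throughput (req/s) ≈ number of active threads / average response time (s).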
JMeter configuration also matters. If you don't provide enough loops, you may run into a situation where some threads have already finished while others have not even started. See the JMeter Test Results: Why the Actual Users Number is Lower than Expected article for a more detailed explanation.
Your target load = 100 threads (you are assuming it should generate 100 req/sec as per your plan)
Your actual load = 7 req/sec = 7 * 3600 = 25200 requests/hour
Per-thread throughput = 25200 / 100 threads = 252 iterations/thread/hour
Per-transaction time = 3600 / 252 = 14.2 secs
This means JMeter is actually sending one request roughly every 14.2 secs per thread, i.e., 100 requests every 14.2 secs.
Now, analyze your JMeter summary report for the transaction timers to find out where the remaining 13.2 secs are being spent.
Possible issues are
1. High DNS resolution time (DNS issue)
2. High connection setup time (indicates load balancer issues)
3. High Request send time (indicates n/w or firewall throttling issues)
4. High request receive time (same as #3)
Now, the time that you see in the Apache logs is mostly visible to JMeter as the time to first byte. I am not sure what machine you are running your tests from, but if your worker supports curl, use curl to break down the timing components of a single request.
echo 'request payload for POST' \
| curl -X POST -H 'User-Agent: myBrowser' -H 'Content-Type: application/json' -d @- -s \
  -w '\nDNS time:\t%{time_namelookup}\nTCP Connect time:\t%{time_connect}\nAppCon Protocol time:\t%{time_appconnect}\nRedirect time:\t%{time_redirect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' \
  http://mytest.test.com
If the above output indicates no such issues, then the time must be spent within JMeter itself. In that case, tune your JMeter implementation, for example your Beanshell / JSR223 scripting elements.

Reset Quota is not working as expected in Apigee

I have applied a quota policy by using the following code
<Quota async="false" continueOnError="false" enabled="true" name="Quota-1">
    <DisplayName>Quota 1</DisplayName>
    <Allow count="2"/>
    <Interval>1</Interval>
    <Distributed>true</Distributed>
    <Synchronous>false</Synchronous>
    <TimeUnit>minute</TimeUnit>
    <Identifier ref="request.queryparam.id"/>
    <AsynchronousConfiguration>
        <SyncIntervalInSeconds>1</SyncIntervalInSeconds>
        <SyncMessageCount>5</SyncMessageCount>
    </AsynchronousConfiguration>
</Quota>
And then I am resetting the count by using a Reset Quota policy with the following code:
<ResetQuota async="false" continueOnError="false" enabled="true" name="Reset-Quota-1">
    <DisplayName>Reset Quota 1</DisplayName>
    <Quota name="Quota-1">
        <Identifier ref="request.queryparam.id">
            <Allow>6</Allow>
        </Identifier>
    </Quota>
</ResetQuota>
As I understand it, as I send requests the available count should go 6, 5, 4, 3, 2, 1, 0.
But it is showing 1, 6, 11, 16, 21, ...
In this scenario the count never comes down to 0.
What might be wrong?
Thanks in advance.
I see that you are using an asynchronous quota by setting <Synchronous> to false. You also have an incorrect AsynchronousConfiguration, because your SyncMessageCount (5) is greater than your Allow count (2).
What the AsynchronousConfiguration really means is that the quota counters are not updated in the backend for every request, but only after every 5 messages (your SyncMessageCount), which is anyway higher than the allow count. That's the problem you are running into, and that's why you see the count increasing in steps of 5: 1, 6, 11, 16, 21, and so on.
You need to set the sync message count to be smaller than the allow count. Also note that an asynchronous quota will not give you 100% accuracy, since it only syncs the counters across multiple servers at regular intervals and not for every request.
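A minimal sketch of the same policy with the counters synced after every message, so the sync granularity stays below the Allow count; this only changes the SyncMessageCount from the policy above, all other values are unchanged:
<Quota async="false" continueOnError="false" enabled="true" name="Quota-1">
    <DisplayName>Quota 1</DisplayName>
    <Allow count="2"/>
    <Interval>1</Interval>
    <Distributed>true</Distributed>
    <Synchronous>false</Synchronous>
    <TimeUnit>minute</TimeUnit>
    <Identifier ref="request.queryparam.id"/>
    <AsynchronousConfiguration>
        <SyncIntervalInSeconds>1</SyncIntervalInSeconds>
        <!-- sync the distributed counter after every message instead of every 5 -->
        <SyncMessageCount>1</SyncMessageCount>
    </AsynchronousConfiguration>
</Quota>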

Apigee Spike Arrest Rate Limit Application

I am an Apigee newbie.
I am trying to understand the Spike Arrest policy.
I am looking at this documentation:
http://apigee.com/docs/api-services/content/shield-apis-using-spikearrest
http://apigee.com/docs/api-services/content/policy-attachment-and-enforcement
The one thing I cannot work out for certain is whether, when the Spike Arrest policy is applied to an API proxy, the rate limit is applied per key/client developer application, or shared between all keys/client developer applications.
For example if we have the following config:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<SpikeArrest async="false" continueOnError="false" enabled="true" name="spikearrest-1">
    <DisplayName>SpikeArrest-1</DisplayName>
    <FaultRules/>
    <Properties/>
    <Identifier ref="request.header.some-header-name"/>
    <MessageWeight ref="request.header.weight"/>
    <Rate>50ps</Rate>
</SpikeArrest>
And Client Dev Apps:
1. DevApp1
2. DevApp2
Is the 50ps rate limit shared between DevApp1 and DevApp2, or do DevApp1 and DevApp2 each get a 50ps rate limit?
Thanks,
You can use any of the predefined variables:
http://apigee.com/docs/api-services/api/variables-reference
The variable that is probably the most commonly used for Spike Arrest is client.ip.
Edge will make all elements of a request message available. If your clients are adding a client_id (aka API key) to a request as a query parameter, for example api.call.com?client_id=u34r8ur, then you would set the variable in your Spike Arrest Identifier to be:
<Identifier ref="request.queryparam.client_id"/>
Or if it is in an HTTP header:
<Identifier ref="request.header.client_id"/>
Hope that helps!
It's per app, as identified by your Identifier.
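For illustration, a sketch combining the two answers, assuming the API key arrives as a client_id query parameter (the policy name here is made up): each distinct client_id gets its own 50ps allowance, whereas omitting the Identifier would make the single 50ps limit shared across all callers.
<SpikeArrest async="false" continueOnError="false" enabled="true" name="spikearrest-per-app">
    <DisplayName>SpikeArrest per app</DisplayName>
    <!-- each distinct client_id value gets its own 50ps counter -->
    <Identifier ref="request.queryparam.client_id"/>
    <Rate>50ps</Rate>
</SpikeArrest>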

Maximum number of messages sent to a Queue in OpenMQ?

I am currently using GlassFish v2.1 and I have set up a queue to send and receive messages, with session beans and MDBs respectively. However, I have noticed that I can send only a maximum of 1000 messages to the queue. Is there any reason why I cannot send more than 1000 messages to the queue? I do have a "developer" profile set up for the GlassFish domain. Could that be the reason? Or is there some resource configuration setting that I need to modify?
I have set up the sun-resources.xml configuration properties as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//Sun Microsystems, Inc.//DTD Application Server 9.0 Resource Definitions //EN" "http://www.sun.com/software/appserver/dtds/sun-resources_1_3.dtd">
<resources>
    <admin-object-resource
            enabled="true"
            jndi-name="jms/UpdateQueue"
            object-type="user"
            res-adapter="jmsra"
            res-type="javax.jms.Queue">
        <description/>
        <property name="Name" value="UpdatePhysicalQueue"/>
    </admin-object-resource>
    <connector-resource
            enabled="true"
            jndi-name="jms/UpdateQueueFactory"
            object-type="user"
            pool-name="jms/UpdateQueueFactoryPool">
        <description/>
    </connector-resource>
    <connector-connection-pool
            associate-with-thread="false"
            connection-creation-retry-attempts="0"
            connection-creation-retry-interval-in-seconds="10"
            connection-definition-name="javax.jms.QueueConnectionFactory"
            connection-leak-reclaim="false"
            connection-leak-timeout-in-seconds="0"
            fail-all-connections="false"
            idle-timeout-in-seconds="300"
            is-connection-validation-required="false"
            lazy-connection-association="false"
            lazy-connection-enlistment="false"
            match-connections="true"
            max-connection-usage-count="0"
            max-pool-size="32"
            max-wait-time-in-millis="60000"
            name="jms/UpdateFactoryPool"
            pool-resize-quantity="2"
            resource-adapter-name="jmsra"
            steady-pool-size="8"
            validate-atmost-once-period-in-seconds="0"/>
</resources>
Hmm .. further investigation revealed the following in the imq logs:
[17/Nov/2009:10:27:57 CST] ERROR sendMessage: Sending message failed. Connection ID: 427038234214377984:
com.sun.messaging.jmq.jmsserver.util.BrokerException: transaction failed: [B4303]: The maximum number of messages [1,000] that the producer can process in a single transaction (TID=427038234364096768) has been exceeded. Please either limit the # of messages per transaction or increase the imq.transaction.producer.maxNumMsgs property.
So what would I do if I needed to send more than 5000 messages at a time?
What I am trying to do is read all the records in a table and update a particular field of each record based on the corresponding value of that record in a legacy table to which I have read-only access. This table has more than 10k records in it. As of now, I am sequentially going through each record in a for loop, getting the corresponding record from the legacy table, comparing the field values, updating the record if necessary and adding corresponding new records to other tables.
However, I was hoping to improve performance by processing all the records asynchronously. To do that I was thinking of sending each record's info as a separate message, hence requiring so many messages.
To configure OpenMQ and set arbitrary broker properties, have a look at this blog post.
But actually, I wouldn't advise increasing the imq.transaction.producer.maxNumMsgs property, at least not above the value recommended in the documentation:
The maximum number of messages that a producer can process in a single transaction. It is recommended that the value be less than 5000 to prevent the exhausting of resources.
If you need to send more messages, consider doing it in several transactions.
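For illustration, a minimal sketch (not the poster's code) of sending in several transactions: commit the transacted session every few hundred messages so that no single transaction exceeds the broker's per-transaction limit. The JNDI names follow the sun-resources.xml above; the payload list, batch size and class name are made up.
import javax.jms.*;
import javax.naming.InitialContext;
import java.util.List;

public class BatchedSender {
    private static final int BATCH_SIZE = 500; // well below the 1000-message default limit

    public void sendAll(List<String> payloads) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/UpdateQueueFactory");
        Queue queue = (Queue) ctx.lookup("jms/UpdateQueue");

        Connection connection = cf.createConnection();
        try {
            // Transacted session: messages are only delivered when commit() is called
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(queue);

            int inCurrentTx = 0;
            for (String payload : payloads) {
                producer.send(session.createTextMessage(payload));
                if (++inCurrentTx >= BATCH_SIZE) {
                    session.commit();   // end this transaction; the next send starts a new one
                    inCurrentTx = 0;
                }
            }
            if (inCurrentTx > 0) {
                session.commit();       // commit the final partial batch
            }
        } finally {
            connection.close();
        }
    }
}
Each commit() closes the current transaction and the following send starts a new one on the same session, so every transaction stays well under the broker's limit without changing any broker properties.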