AnyLogic selectOutput condition - conditional-statements

I'm simulating a queuing system where customers join a queue called RDQueue with a capacity of 5, and then move to a different queue called TDQueue when RDQueue is full (has reached its capacity).
I used a selectOutput block with RDQueue on the true branch and TDQueue on the false branch, with the condition RDQueue.size() < 5.
There should be customers going to TDQueue, but when I run the simulation no customers ever go through the false branch.
(for some reason the image of what I've done won't upload)
I have a source with an arrival rate of 0.361 per minute and a delay for RD with delay time exponential(8.76) minutes.
According to queuing theory, 68.5% of arriving customers should find RDQueue full and go to TDQueue.
TIA

If your delay time is exponential(8.76), the delays will almost always be far shorter than the gaps between arrivals, because the argument of exponential() is the rate λ, not the mean.
A random sample from the exponential distribution is x = −ln(1 − u)/λ, with u a uniform random number, so the expected value is 1/λ.
With λ = 8.76, the expected value of your delay time is 1/8.76 ≈ 0.114 minutes, so your RDQueue has a probability of being full of nearly 0%. If you intended a mean delay of 8.76 minutes, pass the rate 1/8.76 instead.
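You can check this outside AnyLogic by sampling the distribution both ways. A minimal Python sketch (assuming, as above, that the argument is the rate, so a mean service time of 8.76 minutes corresponds to a rate of 1/8.76):

import math, random

def exp_sample(lam):
    # Inverse-CDF sample: x = -ln(1 - u) / lam, so the mean is 1/lam.
    u = random.random()
    return -math.log(1 - u) / lam

for lam in (8.76, 1 / 8.76):
    mean = sum(exp_sample(lam) for _ in range(100_000)) / 100_000
    print(f"rate={lam:.4f}  mean delay = {mean:.3f} min")

With rate 8.76 the mean delay comes out near 0.114 minutes, so the queue almost never backs up; with rate 1/8.76 the mean delay is about 8.76 minutes and RDQueue will actually fill.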

What does "bw: SpinningDown" mean in a RedisTimeoutException?

What does "bw: SpinningDown" mean in this error -
Timeout performing GET (5000ms), next: GET foo!bar!baz, inst: 5, qu: 0, qs: 0, aw: False, bw: SpinningDown, ....
Does it mean that the Redis server instance is spinning down, or something else?
It means something else actually. The abbreviation bw stands for Backlog-Writer, which contains the status of what the backlog is doing in Redis.
For this particular status: SpinningDown, you actually left out the important bits that relate to it.
There are 4 values being tracked for workers: Busy, Free, Min and Max.
Let's take these hypothetical values: Busy=250,Free=750,Min=200,Max=1000
In this case there are 50 more existing (busy) threads than the minimum.
The cost of spinning up a new thread is high, especially if you hit the .NET-provided global thread pool limit, in which case only one new thread is created every 500 ms due to throttling.
So once the Backlog is done processing an item, instead of just exiting the thread, it will keep it in a waiting state (SpinningDown) for 5 seconds. If during that time there still is more Backlog to process, the same thread will process another item from the Backlog.
If no Backlog item needed to be processed in those 5 seconds, the thread will be exited, which will eventually lead to a decrease in Busy (existing) threads.
This only happens to threads above the Min count, of course; threads up to the Min count are kept alive even if there is no work to do.
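As a rough sketch of that spin-down behavior in Python (illustrative only, not StackExchange.Redis's actual implementation; backlog, busy, min_workers and process are assumed names):

import queue

def backlog_worker(backlog, process, busy, min_workers):
    while True:
        try:
            item = backlog.get(timeout=5)    # "SpinningDown": idle-wait up to 5 s for more work
        except queue.Empty:
            if busy["count"] > min_workers:  # only threads above Min may exit,
                busy["count"] -= 1           # so Busy gradually falls back toward Min
                return
            continue                         # at or below Min: stay alive even with no work
        process(item)                        # hypothetical per-item handler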

Catchpoint pause vs. waitForNoRequest - What's the difference?

I have a test that was alerting because it was taking extra time for an asset to load. We changed from waitForNoRequest to a pause (at Catchpoint's suggestion). That did not seem to have the expected effect of waiting for things to load. We increased the pause from 3000 to 12000 and that helped to allow the page to load and stop the alert. We noticed some more alerts, so I tried to increase the pause to something like 45000 and it would not allow me to pause for that long.
So the main question here is - what functionality does both of these different features provide? What do I gain by pausing instead of waiting, if anything?
Here's the test, data changed to protect company specific info. Step 3 is where we had some failures and we switched between pause and wait.
// Step - 1
open("https://website.com/")
waitForNoRequest("2000")
click("//*[#id=\"userid\"]")
type("//*[#id=\"userid\"]", "${username}")
setStepName("Step1-Login-")
// Step - 2
clickMouseAndWait("//*[#id=\"continue\"]")
waitForVisible("//*[#id=\"challenge-password\"]")
click("//*[#id=\"challenge-password\"]")
type("//*[#id=\"challenge-password\"]", "${password}")
setStepName("Step2-Login-creds")
// Step - 3
clickMouseAndWait("//*[#id=\"signIn\"]")
setStepName("Step3-dashboard")
waitForTitle("Dashboard")
waitForNoRequest("3000")
click("//*[#id=\"account-header-wrapper\"]")
waitForVisible("//*[#id=\"logout-link\"]")
click("//*[#id=\"logout-link\"]")
// Step - 4
clickAndWait("//*[text()=\"Sign Out\"]")
waitForTitle("Login - ")
verifyTextPresent("You have been logged out.")
setStepName("Step5-Logout")
Rachana here; I'm a member of the Technical Service Team at Catchpoint, and I'll be happy to answer your questions.
Please find the differences between the waitForNoRequest and Pause commands below:
Pause
Purpose: This command pauses script execution for a specified amount of time, whether or not HTTP/S requests are downloading. The time value is provided in milliseconds and can range from 100 to 30,000 ms.
Explanation: This command is used when the agent needs to wait for a set amount of time before proceeding to the next step or command, regardless of how requests are loading. The wait time is the only parameter required for this action.
WaitForNoRequest
Purpose: This command waits for a specified amount of time during which no HTTP/S requests are downloading. The wait time parameter can range from 1,000 to 5,000 ms.
Explanation: The only parameter for this action is the wait time. The agent waits for a quiet period of that length before moving on to the next step/command, which in turn allows necessary requests more time to load after document complete.
For instance, when you add waitForNoRequest(5000), the agent initially waits 5000 ms after document complete for any network activity. If there is network activity during that period, the agent waits another 5000 ms after that activity ends, and the process repeats until no other request loads within the specified timeframe (5000 ms).
A pause command with 12000 ms gives the page exactly 12 seconds to load. After 12 seconds, script execution continues to the next command whether or not the page has loaded.
Since waitForNoRequest has a maximum time value of 5000 ms, you can tell the agent to wait for at most a 5-second gap with no network activity. In your case, the page had no network activity for 3 seconds and hence proceeded to the next action; the page was not completely loaded and the script failed.
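To make the difference concrete, here is an illustrative Python sketch of the two behaviors (not Catchpoint's implementation; last_request_time is an assumed hook returning the timestamp of the most recent HTTP/S request):

import time

def pause(wait_ms):
    # Fixed wait: sleep the full duration, then continue whether or not the page finished loading.
    time.sleep(wait_ms / 1000)

def wait_for_no_request(wait_ms, last_request_time):
    # Reset-on-activity wait: keep waiting until a full wait_ms window passes with no new request.
    window_start = time.time()
    while time.time() - window_start < wait_ms / 1000:
        if last_request_time() > window_start:
            window_start = last_request_time()   # new activity restarts the quiet window
        time.sleep(0.05)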
I tried to increase the pause to something like 45000 and it would not allow me to pause for that long.
We allow a maximum pause time of 30 seconds, hence 45 seconds will not work.
Please reach out to our support team and we’ll be glad to connect you with our scripting SMEs and help you with any scripting needs you might have.

GPS location refresh rate extremely low

I'm trying to access GPS data from androidhelper, but the 'location' events come at roughly one-minute intervals.
I'm testing on a Motorola E5 with Android 8.
The basic code is:
import androidhelper
droid = androidhelper.Android()         # create the Android facade
droid.startLocating()                   # start collecting location data
droid.eventWaitFor('location', 9000)    # wait up to 9000 ms for a 'location' event
location = droid.readLocation().result
print(location['gps']['latitude'])
print(location['gps']['longitude'])
droid.stopLocating()
With other apps, the GPS data refresh rate is about 1 second.
Is there any way to improve the refresh rate for androidhelper?
https://kylelk.github.io/html-examples/androidhelper.html
I think it has to do with the defaults:
startLocating(minDistance=60000, minUpdateDistance=30): Starts collecting location data.
minDistance (Integer): minimum time between updates in milliseconds (default=60000)
minUpdateDistance (Integer): minimum distance between updates in meters (default=30)
If I reduce them it seems to be much faster.
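For example, passing smaller values than the defaults (the exact numbers here are just illustrative):

import androidhelper

droid = androidhelper.Android()
# Per the docs quoted above, the first argument is a time in milliseconds
# despite its name: ask for an update at most every 1000 ms, for any movement.
droid.startLocating(1000, 0)
droid.eventWaitFor('location', 10000)
location = droid.readLocation().result
print(location['gps']['latitude'], location['gps']['longitude'])
droid.stopLocating()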

Perf: what do [<n percent>] records mean in perf stat output?

perf stat -e <events> <command> with many different events usually returns an output like this
127.352.815.472 r53003c [23,76%]
65.712.112.871 r53019c [23,81%]
178.027.463.861 r53010e [23,88%]
162.854.142.303 r5302c2 [24,05%]
...
What do the percentage records mean?
The percentages show the percentage of time that the specific event was being measured in the case where perf has to multiplex events. Event multiplexing is explained in more detail on the perf wiki, and I've included a brief quote below:
If there are more events than counters, the kernel uses time multiplexing (switch frequency = HZ, generally 100 or 1000) to give each event a chance to access the monitoring hardware. Multiplexing only applies to PMU events. With multiplexing, an event is not measured all the time. At the end of the run, the tool scales the count based on total time enabled vs time running.
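In other words, the bracketed percentage is time_running / time_enabled for that event, and the reported count has already been extrapolated by that ratio. A small sketch of the scaling, with made-up numbers:

def scale(raw_count, time_enabled, time_running):
    # The event was only counted while "running"; perf extrapolates the
    # final value over the full "enabled" window.
    return raw_count * time_enabled / time_running

# Hypothetical event measured 25% of the time:
print(scale(raw_count=1_000_000, time_enabled=4.0, time_running=1.0))  # 4000000.0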

CPU scheduling algorithms and arrival time

I was looking at the examples found on this website:
http://www.tutorialspoint.com/operating_system/os_process_scheduling_algorithms.htm
And there's something that just doesn't make sense about those examples. Take shortest-job-first for example. The premise is that you take the process with the least execution time and run that first.
The example runs p1 first and then p0. But WHY? At t = 0 the only process that exists in the queue is p0. Wouldn't that start running at t = 0, and then p1 would start at t = 6?
I've got the same issue with priority based scheduling.
You are right: since process P0 arrives at the queue at 0 sec, before P1, it will start executing before P1.
Their answer would only be correct if the processes had no arrival times, in which case all processes are considered to have reached the queue at the same time, and the process with the shortest execution time is executed by the CPU first.
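A minimal sketch of non-preemptive SJF that respects arrival times (the process data here is hypothetical, not the tutorial's):

def sjf_non_preemptive(processes):
    # processes: list of (name, arrival_time, burst_time)
    remaining = sorted(processes, key=lambda p: p[1])      # order by arrival
    time, schedule = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]     # only arrived processes are candidates
        if not ready:                                      # CPU idles until the next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])               # shortest burst among the ready ones
        remaining.remove(job)
        schedule.append((job[0], time, time + job[2]))
        time += job[2]
    return schedule

# P0 arrives first, so it runs first even though P1 has the shorter burst:
print(sjf_non_preemptive([("P0", 0, 6), ("P1", 2, 3), ("P2", 4, 8)]))
# [('P0', 0, 6), ('P1', 6, 9), ('P2', 9, 17)]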