Is there any maximum allowed running time for a task instance in APScheduler?
What is the default value and how do I change it?
I have been running into problems with tasks that run for a long time.
After asking about it in #apscheduler on freenode, I understood that there is no maximum allowed running time for a task. It wouldn't make sense to have one, and even if there were, it would default to infinity.
However, there is a configurable parameter that controls the maximum number of instances of a job that can run in parallel, and it defaults to 1.
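For what it's worth, in current APScheduler releases this appears to correspond to the per-job max_instances option, which limits how many instances of the same job may run concurrently and defaults to 1; a long-running task is never killed, it just prevents further instances of itself from starting until it finishes.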
I'm trying to make sense of Process Default behavior on SSAS 2017 Enterprise Edition.
My cube is processed daily in this standard sequence:
1. Loop through 30 dimensions, performing Process Add or Process Update as required.
2. Process approximately 80 partitions for the previous day.
3. Execute a Process Default as the final step.
Everything works just fine, and for the amount of data involved, performs really well. However I have observed that after the process default completes, if I re-run the process default step manually (with no other activity having occurred whatsoever), it will take exactly the same time as the first run.
My understanding was that this step basically scans the cube looking for unprocessed objects and will process any objects found to be unprocessed. Given the flow of dimension processing, and subsequent partition processing, I'd certainly expect some objects to be unprocessed on the first run - particularly aggregations and indexes.
The end to end processing time is around 65 mins, but 10 mins of this is the final process default step.
One explanation would be that the Process Default isn't actually finding anything to do, and the elapsed time is simply the cost of scanning the metadata. But that seems an excessive amount of time, and if I don't run the step the cube doesn't come online, which suggests it is definitely doing something.
I've had a trawl through Profiler to try to find events to capture what process default is doing, but I'm not able to find anything that would capture the event specifically. I've also monitored the server performance during the step, and nothing is under any real load.
Any suggestions or clarifications..?
I have set up a JMeter script with a constant runtime set in a Runtime Controller, an infinite loop in a Loop Controller, and a constant delay between threads in a Constant Timer. How can I perform tuning using this setup? Is there a correlation between 'number of threads', 'ramp-up time' and 'delay' that should be kept in mind while trying different combinations of these values for performance testing?
Number of threads is basically the number of users you will be simulating. Each JMeter thread (or virtual user) must represent a real user using your application, so treat it that way. If you have a requirement that the application must support 1000 concurrent users, stick to this number as the baseline for your testing. As for "how much load will my N JMeter users generate", it depends on several factors, such as the nature of your test, server response time, timers, etc.
Ramp-up is the time JMeter takes to start the virtual users from point 1. Unless you're doing spike testing, you should increase the load gradually: if you release all the users right away you get much less information, whereas with a gradual increase you can correlate the growing load with increasing response time, decreasing throughput, number of errors, etc. Moreover, it lets the application under test "warm up", so it is better prepared for the stress.
Delay is the time the virtual user spends "thinking" between operations. Real users don't hammer an application non-stop; they need some time to "think" before making the next step. Depending on what the user is "doing", the think time may differ, so I would recommend a Uniform Random Timer instead of the Constant one.
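As a rough rule of thumb (Little's Law), the steady-state request rate you generate is approximately threads / (response time + think time). For example, 100 threads with a 1-second average response time and a 4-second think time produce roughly 100 / 5 = 20 requests per second. So the three settings are indeed correlated: more threads or shorter delays mean more load, while the ramp-up mainly controls how quickly you reach that load.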
I've started using a database at work that is based on SQL and Unix.
I was surprised to learn that if someone requests a change to their details to be made at around 5 PM, or on a certain date, then the person allocated the incident has to WAIT until 5 PM and make the change manually.
I'm surprised a button that says 'Apply changes later' does not exist; there is only a 'Save' button.
I have seen complicated solutions using Java on stackoverflow, but I am not familiar with UNIX or SQL, and googling brings no results.
Would it be a simple fix?
It wouldn't have to account for any time differences, and I'm assuming it would just work off the system clock; I know Java has a calendar facility that I assume works off the PC clock.
Java
Java does indeed have a sophisticated facility for scheduling a future task to be executed. See the ScheduledExecutorService interface.
Rather than specify a date-time, you pass the schedule method a number of nanoseconds, or milliseconds, or seconds, or minutes, or hours, or days. You also pass a TimeUnit enum instance to indicate which granularity.
And, yes, Java depends on the host operating system for its clock to track the date-time.
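As a minimal sketch of that API (the class name, task and target time below are just placeholders), scheduling a one-off job for later today might look like this:

    import java.time.Duration;
    import java.time.LocalTime;
    import java.time.ZonedDateTime;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ApplyLater {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

            // Turn the desired wall-clock moment (say, 17:00 today) into a delay in milliseconds.
            // If 17:00 has already passed, the delay is negative and the task runs immediately.
            ZonedDateTime runAt = ZonedDateTime.now().with(LocalTime.of(17, 0));
            long delayMillis = Duration.between(ZonedDateTime.now(), runAt).toMillis();

            // schedule(Runnable, long delay, TimeUnit unit) runs the task once after the delay.
            scheduler.schedule(
                    () -> System.out.println("Applying the requested change now"),
                    delayMillis,
                    TimeUnit.MILLISECONDS);

            // By default, tasks already scheduled still run after shutdown() is called.
            scheduler.shutdown();
        }
    }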
Task Master
I suggest using your database to track the jobs to be run, in conjunction with Java. If using only Java, the scheduled jobs would exist only in memory and would disappear if the Java app exits or crashes.
Instead, the Java app on launch should check the database for any pending jobs, and schedule them with an executor. Each job on completion should mark the database "task master" table row as finished.
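A rough sketch of that idea, assuming a hypothetical task_master table with id, run_at and finished columns (none of this comes from the question; adjust to your own schema):

    import java.sql.*;
    import java.time.Duration;
    import java.time.Instant;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class TaskMasterLoader {
        private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        // On application launch, reload every unfinished job and put it back on the schedule.
        public void reloadPendingJobs(Connection conn) throws SQLException {
            String sql = "SELECT id, run_at FROM task_master WHERE finished = 0";
            try (PreparedStatement ps = conn.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    long id = rs.getLong("id");
                    Instant runAt = rs.getTimestamp("run_at").toInstant();
                    long delayMillis = Math.max(0, Duration.between(Instant.now(), runAt).toMillis());
                    scheduler.schedule(() -> runJob(id), delayMillis, TimeUnit.MILLISECONDS);
                }
            }
        }

        private void runJob(long id) {
            // Perform the actual change here, then mark the row as finished, e.g.
            // UPDATE task_master SET finished = 1 WHERE id = ?
        }
    }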
On a client, a "Timeout" error is being raised when running some commands against the database.
My first idea for a fix is to increase the CommandTimeout to 99999 ... but I am afraid that this approach will cause further problems.
Has anyone experienced this ...?
I wonder whether my approach is reasonable, and/or whether there is a more robust and elegant fix.
You are correct to assume that upping the timeout is not the correct approach. Typically, I look for long-running queries that are running close to the timeout. They will typically stand out in terms of duration and reads.
Then I'll work to reduce the query run time using this method:
https://www.simple-talk.com/sql/performance/simple-query-tuning-with-statistics-io-and-execution-plans/
If it's a report causing issues and you can't get it running faster, you may need to start thinking about setting up a reporting database.
CommandTimeout is the time the client waits for a response from the server. If the query runs in the main VCL thread, the whole application is "frozen" and might be marked "not responding" by Windows. So, would you expect your users to wait at a frozen app for 99999 seconds?
Generally, leave the timeout values at their defaults and concentrate on tuning the queries, as Sam suggests. If you happen to have long-running queries (i.e. background data movement, calculations etc. in stored procedures), set the CommandTimeout to 0 (= INFINITE) but run them in a separate thread.
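The question appears to involve Delphi/VCL, but as an illustration of the same idea in Java/JDBC (the connection string and procedure name below are placeholders), a long job with no command timeout, run off the main thread, might look like:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BackgroundMaintenance {
        public static void main(String[] args) {
            ExecutorService worker = Executors.newSingleThreadExecutor();

            // Run the long job off the main/UI thread so the application stays responsive.
            worker.submit(() -> {
                try (Connection conn = DriverManager.getConnection("jdbc:<your connection string>");
                     Statement stmt = conn.createStatement()) {
                    stmt.setQueryTimeout(0);                  // 0 = no limit, the JDBC analogue of CommandTimeout = 0
                    stmt.execute("EXEC dbo.NightlyDataMove"); // hypothetical long-running stored procedure
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            });

            worker.shutdown(); // accepts no new work; the submitted job still runs to completion
        }
    }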
We're running into a strange problem with JProfiler 7.1.1 where it displays an elapsed time for a given HTTP request completely different from the one obtained with Firebug or a manual test - 2.5 s compared to 7.5 s. Default session settings are used. JProfiler has always proven reliable, so I'm a bit stumped by this behaviour.
Any ideas why?
Thanks!
Update 1: YourKit also provides accurate results, so this is clearly related to JProfiler.
By default, JProfiler shows time in the runnable thread state, not elapsed times. If you want to see elapsed times, adjust the thread state selector in the upper right corner to "All states".
For more information, see this screen cast.