JProfiler elapsed time different from actual load time

We're running into a strange problem with JProfiler 7.1.1 where it displays an elapsed time for a given HTTP request completely different from the one obtained with Firebug or a manual test - 2.5s compared to 7.5s. Default session settings are used. JProfiler has always proven to be reliable, so I'm a bit stumped by this behaviour.
Any ideas why?
Thanks!
Update 1: YourKit also provides accurate results, so this is clearly related to JProfiler.

By default, JProfiler shows time in the runnable thread state, not elapsed times. If you want to see elapsed times, adjust the thread state selector in the upper right corner to "All states".
For more information, see this screencast.

Related

How to use JMeter with a timer

I am having a problem with JMeter: using it with a Timer causes JMeter to crash.
The case is: I want to create a load of requests to be executed every half hour.
Is that something you can do with JMeter?
Every time I try it, JMeter keeps loading, hangs, and requires a shutdown.
If you want to leave JMeter up and running forever, make sure to follow JMeter Best Practices, as certain test elements might cause memory leaks.
If you need to create "spikes" of load every 30 minutes, it might be a better idea to use your operating system's scheduling mechanisms to execute "short" tests every half hour, for example:
Windows Task Scheduler
Unix cron
MacOS launchd
Or, even better, go for a Continuous Integration server like Jenkins: it has a powerful trigger mechanism that lets you define flexible criteria for when to start the job, and you can also benefit from the Performance Plugin, which automatically marks a build as unstable or failed depending on test metrics and builds performance trend charts. A plain-Java sketch of the same scheduled-run idea is shown below.
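If neither cron nor a CI server is an option, the same "short test every 30 minutes" idea can be approximated in plain Java with a ScheduledExecutorService that launches JMeter in non-GUI mode. This is only a minimal sketch: it assumes a jmeter executable on the PATH and a hypothetical test-plan.jmx test plan, neither of which comes from the original answer.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class JMeterEveryHalfHour {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Launch a short non-GUI JMeter run every 30 minutes.
        // "jmeter" on the PATH and "test-plan.jmx" are assumed placeholders.
        scheduler.scheduleAtFixedRate(() -> {
            try {
                Process jmeter = new ProcessBuilder(
                        "jmeter", "-n", "-t", "test-plan.jmx", "-l", "results.jtl")
                        .inheritIO()
                        .start();
                jmeter.waitFor();        // wait for the short test to finish
            } catch (Exception e) {
                e.printStackTrace();     // keep the scheduler alive if one run fails
            }
        }, 0, 30, TimeUnit.MINUTES);
    }
}

The trade-off compared to cron or the Task Scheduler is that this scheduler only lives as long as the JVM process, so the OS-level options above are still the more robust choice for long-term scheduling.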

JMeter test always freezes when the tested server gives up

When trying to run a load test in JMeter 5.1.1, the test always freezes at the end if the server gives up. The test completes correctly if the server does not give up. This is terrible, because the point of the test is to see at what point the server gives up, but as mentioned, the test never ends and it is necessary to kill it by hand.
Example:
A test running 500 threads against a local server goes smoothly and finishes with a tidying-up message.
Exactly the same test running 500 threads against a cloud-based server at some point results in errors; the test gets to about 99% and then freezes on the summary, as in the example below:
summary + 99 in 00:10:11 = 8.7/s Avg: 872 Min: 235 Max: 5265 Err: 23633 (100.00%) Active: 500 Started: 500 Finished: 480
And that's it; you can wait forever and it will just stay stuck at this point.
I tried using different thread types without success. The next step was to change the sampler error behaviour, and yes, changing it from Continue to Start Next Thread Loop or Stop Thread helps and the test ends, but then the results in the HTML report look bizarre and inaccurate. I even tried setting the timeouts to 60000 ms in HTTP Request Defaults, but this also gave strange results.
That said, can someone tell me how to successfully run a load test so that it always completes regardless of issues and remains accurate? I did see a few old questions about the same issue, and none of them had a helpful answer. Or is there another, more reliable open source testing app that also has a GUI for creating tests?
You're getting 100% errors, which looks "strange" to me in any case.
If setting the connect and response timeouts in the HTTP Request Defaults doesn't help, the reason for the "hanging" most probably lies somewhere else, and the only way to determine it is to take a thread dump and analyse the state of the threads, paying attention to the ones that are BLOCKED and/or WAITING. From there you should be able to trace the problem down to the JMeter test element that is causing it and look closely into what could be going wrong.
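The usual way to get a thread dump is to run the JDK's jstack tool against the JMeter process, but for illustration only, here is a minimal sketch of inspecting thread states from inside a JVM via the standard ThreadMXBean API; the class name is an arbitrary placeholder.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadStateDump {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        // Dump all threads, including locked monitors and synchronizers,
        // and print only the ones that are BLOCKED or WAITING.
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            Thread.State state = info.getThreadState();
            if (state == Thread.State.BLOCKED || state == Thread.State.WAITING) {
                System.out.printf("%s is %s, waiting on %s%n",
                        info.getThreadName(), state, info.getLockName());
            }
        }
    }
}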
Other hints include:
look for suspicious entries in the jmeter.log file
make sure JMeter has enough headroom to operate in terms of CPU, RAM, network sockets, etc.; this can be done using, for example, the JMeter PerfMon Plugin
make sure to follow the recommendations from 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure

UI testing fails with error "Failed to get snapshot within 15.0s"

I have a table view with a huge number of cells. I tried to tap a particular cell from this table view, but the test ends with this error:
Failed to get snapshot within 15.0s
I assume that the system takes a snapshot of the whole table view before accessing its elements. Because of the huge number of cells, the time allowed for the snapshot was not enough (15 seconds is perhaps the system default).
I manually set the sleep/wait time (I put 60 seconds). I still cannot access the cells after 60 seconds!
A strange thing I found is that, before accessing the cell, I print the object in the debugger like this:
po print XCUIApplication().cells.debugDescription
It shows an error like
Failed to get snapshot within 15.0s
error: Execution was interrupted, reason: internal ObjC exception breakpoint(-3)..
The process has been returned to the state before expression evaluation.
If I print the same object again using
po print XCUIApplication().cells.debugDescription
it now prints all the cells in the table view in the debugger.
I have no idea why this happens. Has anyone faced a similar kind of issue? Help needed!
I assume that, the system will take snapshot of the whole table view before accessing its element.
Your assumption is correct but there is even more to the story. The UI test requests a snapshot from the application. The application takes this snapshot and then sends the snapshot to the test where the test finally evaluates the query. For really large snapshots (like your table view) that means that:
The snapshot takes a long time for the application to generate and
the snapshot takes a long time to send back to the test for query evaluation.
I'm at WWDC 2017 right now and there is a lot of good news about testing - specifically some things that address your exact problem. I'll outline it here but you should go watch WWDC 2017 Session 409 and skip to timestamp 17:10.
The first improvement is remote queries. This is where the test transmits the query to the application, the application evaluates that query remotely from the test, and then only the result of that query is transmitted back. Expect a marginal improvement from this enhancement: ~20% faster and ~30% less memory.
The second improvement is query analysis. This enhancement reduces the size of snapshots by using a minimum attribute set when taking them. This means that full snapshots of the view are not taken by default when evaluating a query. For example, if you're querying to tap a button, the snapshot is limited to buttons within the view. This means writing less ambiguous queries is even more important; i.e. if you want to tap a navigation bar button, specify it in the query, like app.navigationBars.buttons["A button"]. You'll see even more performance improvement from this enhancement: ~50% faster and ~35% less memory.
The last and most notable (and dangerous) improvement is what they're calling the First Match API. This comes with some trade-offs/risks but offers the largest performance gain. It offers a new .firstMatch property that returns the first match for an XCUIElement query. Ambiguous matches that result in test failures will not occur when using .firstMatch, so you run the risk of evaluating or performing an action on an XCUIElement that you didn't intend to. Expect a performance increase of ~10x, with no memory spiking at all.
So, to answer your question: update to Xcode 9, macOS High Sierra and iOS 11. Use .firstMatch where you can, with highly specific queries, and your issue of timed-out snapshots should be solved. In fact, the timeout issue you're experiencing might already be solved by the general improvements you'll get from remote queries and query analysis, without you having to use .firstMatch at all!

Saving data at a particular time on an existing SQL/UNIX system?

I've started using a database at work that is based on SQL and Unix.
I was surprised to learn that if someone requests a change to their details at around 5 PM, or on a certain date, the person who is allocated the incident then has to WAIT until 5 PM and make the changes manually.
I'm surprised a button that says 'Apply changes later' does not exist; there is only a 'Save' button.
I have seen complicated solutions using Java on Stack Overflow, but I am not familiar with UNIX or SQL, and googling brings no results.
Would it be a simple fix?
It wouldn't have to account for any time differences and, I'm assuming, would just work off the system clock; I know Java has a calendar function that I assume works off the PC clock.
Java
Java does indeed have a sophisticated facility for scheduling a future task to be executed: see the ScheduledExecutorService interface.
Rather than specifying a date-time, you pass the schedule method a delay expressed as a number of nanoseconds, milliseconds, seconds, minutes, hours, or days, along with a TimeUnit enum instance to indicate which granularity.
And, yes, Java depends on the host operating system for its clock to track the date-time.
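For illustration, here is a minimal sketch of that approach: it computes the delay from now until 5 PM on the system clock and schedules a task to run once at that point. The applyChanges method is a hypothetical placeholder for whatever update the incident requires; it is not part of the original answer.

import java.time.Duration;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ApplyChangesLater {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Compute how long to wait from now until 5 PM today (system clock).
        LocalDateTime fivePm = LocalDateTime.of(LocalDate.now(), LocalTime.of(17, 0));
        long delayMillis = Duration.between(LocalDateTime.now(), fivePm).toMillis();

        // Schedule the placeholder task to run once after that delay.
        scheduler.schedule(ApplyChangesLater::applyChanges, delayMillis, TimeUnit.MILLISECONDS);
    }

    private static void applyChanges() {
        // Placeholder: issue the real update against the database here.
        System.out.println("Applying the requested changes now");
    }
}

Note that if 5 PM has already passed, the computed delay is negative and the task runs immediately; a real implementation would roll the target time over to the next day.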
Task Master
I suggest using your database to track the jobs to be run, in conjunction with Java. If using only Java, the scheduled jobs would exist only in memory and would disappear if the Java app exits or crashes.
Instead, the Java app on launch should check the database for any pending jobs, and schedule them with an executor. Each job on completion should mark the database "task master" table row as finished.
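A minimal sketch of that "task master" pattern follows, assuming a hypothetical task_master table with id, run_at and finished columns and a placeholder JDBC URL; none of these names come from the original answer.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TaskMaster {
    private static final String URL = "jdbc:postgresql://localhost/appdb"; // assumed URL

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        // On launch, load every unfinished job and schedule it for its run_at time.
        try (Connection con = DriverManager.getConnection(URL);
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id, run_at FROM task_master WHERE finished = FALSE");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                long id = rs.getLong("id");
                Timestamp runAt = rs.getTimestamp("run_at");
                long delayMillis = Math.max(0,
                        Duration.between(Instant.now(), runAt.toInstant()).toMillis());
                scheduler.schedule(() -> runJob(id), delayMillis, TimeUnit.MILLISECONDS);
            }
        }
    }

    private static void runJob(long id) {
        try (Connection con = DriverManager.getConnection(URL);
             PreparedStatement ps = con.prepareStatement(
                     "UPDATE task_master SET finished = TRUE WHERE id = ?")) {
            // ... perform the actual change here, then mark the row as done ...
            ps.setLong(1, id);
            ps.executeUpdate();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Because main() simply re-reads the unfinished rows on every launch, pending jobs survive an application crash or restart, which is the whole point of keeping them in the database rather than only in memory.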

TADOStoredProc/TADOQuery CommandTimeout...?

On a client machine, a "Timeout" error is raised when running some commands against the database.
My first candidate fix is to increase the CommandTimeout to 99999... but I am afraid that this treatment will generate further problems.
Have you experienced this...?
I wonder if my question is relevant, and/or if there is another, more robust and elegant correction.
You are correct to assume that upping the timeout is not the correct approach. Typically, I look for long-running queries that are running up against the timeouts. They will typically stand out in the areas of duration and reads.
Then I'll work to reduce the query run time using this method:
https://www.simple-talk.com/sql/performance/simple-query-tuning-with-statistics-io-and-execution-plans/
If it's a report causing issues and you can't get it running faster, you may need to start thinking about setting up a reporting database.
CommandTimeout is the time that the client waits for a response from the server. If the query is run in the main VCL thread, then the whole application is "frozen" and might be marked "not responding" by Windows. So, would you expect your users to wait at a frozen app for 99999 seconds?
Generally, leave the timeout values at their defaults and instead concentrate on tuning the queries, as Sam suggests. If you happen to have long-running queries (i.e. some background data movement, calculations, etc. in stored procedures), set the CommandTimeout to 0 (= infinite) but run them in a separate thread.