GWT DevMode serialization with gwt-rpc is 100 times slower than production mode. Does anyone have an idea why it is so slow?
For example, serializing 1000 POJOs takes 67 ms in production mode and 8000 ms in DevMode.
Because in dev mode every call into JavaScript goes through the server/IDE instead of running in the browser's JS engine. This is especially slow if you attach to a remote application over the network.
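For what it's worth, you can confirm where the time goes by timing the round trip on the client with GWT's Duration. A minimal sketch, assuming a hypothetical EchoService RPC that just echoes back a list of simple serializable POJOs (the service, the Pojo type and the makePojos helper are made up for illustration):

    // Imports: com.google.gwt.core.client.GWT, com.google.gwt.core.client.Duration,
    // com.google.gwt.user.client.rpc.* -- all standard GWT client classes.

    // Hypothetical RPC service; "echo" sends the payload there and back,
    // so the measurement is dominated by gwt-rpc serialization.
    @RemoteServiceRelativePath("echo")
    public interface EchoService extends RemoteService {
        List<Pojo> echo(List<Pojo> pojos);
    }

    public interface EchoServiceAsync {
        void echo(List<Pojo> pojos, AsyncCallback<List<Pojo>> callback);
    }

    // Client-side timing. In DevMode this entire call runs through the
    // code server / IDE bridge instead of the browser's JS engine, which is
    // why the same payload is orders of magnitude slower than in production.
    EchoServiceAsync service = GWT.create(EchoService.class);
    final List<Pojo> payload = makePojos(1000); // 1000 simple POJOs (hypothetical helper)
    final Duration timer = new Duration();
    service.echo(payload, new AsyncCallback<List<Pojo>>() {
        public void onSuccess(List<Pojo> result) {
            GWT.log("round trip for 1000 POJOs: " + timer.elapsedMillis() + " ms");
        }
        public void onFailure(Throwable caught) {
            GWT.log("RPC failed: " + caught.getMessage());
        }
    });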
I have a small website in ASP.NET Core Razor Pages.
The number of visitors will be low (max 1000 per day, but probably no more than 500).
My questions:
Is it OK to deploy the website to the D1 Shared plan (which I'm using for testing now)?
If I understand correctly it will cost me around €9/month. Is this right?
What does "240 CPU minutes / day" mean? Is the website offline in sleep mode when not in use?
What will happen if I go above 240 CPU minutes / day (is it possible?)?
I like the Azure portal and configuration screens, but is there a better alternative (not a VPS) to host a small website like the above?
1. As you may know, the D1 Shared plan is meant for test/development, not for production. It only allows 240 CPU minutes per day, so 500 or 1000 visits per day may exceed this limit and cause problems (see the rough math after this list). So no, it's not OK.
2. It will cost you about €8.003/month according to the pricing page.
5. Leaving other cloud services (Google Cloud etc.) aside: if you don't mind rewriting your site, you could consider an Azure static website, which is much cheaper (the limitation is that it can only host very simple sites).
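To put a rough number on point 1 (my own back-of-the-envelope arithmetic, not from the pricing page): 240 CPU minutes/day is 14,400 CPU-seconds/day, so at 1000 visits/day you would have about 14.4 CPU-seconds of budget per visit, and about 28.8 at 500 visits. Whether that is enough depends entirely on how much CPU each Razor Pages request actually burns, so it's worth measuring before committing either way.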
I ran a load of 100 users using the Ultimate Thread Group, executing in non-GUI mode.
The execution ran for only around 5 minutes; after that my test environment shut down. I am not able to drill down into the issue. What could be the reason for the server going down? My environment supports 500 users.
How do you know your environment supports 500 users?
100 threads don't necessarily map to 100 real users; you need to consider a lot of things while designing your test, in particular:
Real users don't hammer the server non-stop; they need some time to "think" between operations. So make sure you add Timers between requests and configure them to represent reasonable think times.
Real users use real browsers, and real browsers download embedded resources (images, scripts, styles, fonts, etc.), but they do it only once; on subsequent requests the resources are returned from the cache and no actual request is made. Make sure to add an HTTP Cache Manager to your Test Plan.
You need to add the load gradually; this way you will be able to state the number of threads (virtual users) at which response times start exceeding acceptable values or errors start occurring. Generate an HTML Reporting Dashboard (see the command after this list), look into the metrics and correlate them with the increasing load.
Make sure that your application under test has enough headroom to operate in terms of CPU, RAM, disk space, etc. You can monitor these counters using the JMeter PerfMon Plugin.
Check your application logs; most probably they will have some clue to the root cause of the failure. If you're familiar with the programming language your application is written in, using a profiler tool during the load test can tell you the full story about what's going on: which functions and objects consume the most resources, etc.
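For reference, a typical non-GUI run that also generates the HTML Reporting Dashboard mentioned above looks like this (file and folder names are placeholders):

    jmeter -n -t testplan.jmx -l results.jtl -e -o ./dashboard

-n runs non-GUI, -t points at the test plan, -l writes the results log, and -e/-o generate the dashboard into the given (empty) output folder.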
I am trying to run load tests on my existing Selenium web tests and my API (unit) tests. The tests run in Visual Studio using the Load Test editor, but it does not collect all the metrics, like response time and requests per second. Are there any additional parameters that I need to add to collect all the metrics?
Load testing: how many Selenium clients are you running? One or two will not generate much load. That's the first issue to think about: you need load generators, and Selenium is a poor way to go about this (unless you are running a headless grid, but even then).
So the target server is what, Windows Server 2012? Google "Create a Data Collector Set to Monitor Performance Counters".
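If the target is indeed Windows Server 2012, the Data Collector Set can also be scripted with logman; a sketch, where the set name, counter list and output path are just examples:

    logman create counter WebPerf -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 00:00:15 -o C:\PerfLogs\WebPerf
    logman start WebPerf

This samples CPU and available memory every 15 seconds into a log you can open in Performance Monitor after the test.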
Data collection, and the analysis of that data, is your second issue to think about. People pay loads of money for tools like LoadRunner because they provide load generators, sophisticated data collection for servers, databases, WANs and LANs, and analysis reports to pinpoint bottlenecks. Doing this manually is hard and not easily repeatable. Most folks who start down your path eventually abandon it. Look into the various load/performance tools to see what works best for you and what you can afford.
We are conducting load testing on our BI infrastructure at the moment. We are testing with 10 concurrent users against a single Pentaho node (BI server platform).
A test scenario for each user is:
Open pentaho page
Authenticate to the platform
Open a report using URL (like this http://itrac5125:8080/pentaho/api/repos/%3Ahome%3ALoadTesting%3A4Measures.xanalyzer/editor)
When the report has rendered, go to step 3 and open another report
As you can see, steps 3 and 4 run in a loop.
After 15 minutes of running this test the BI platform becomes extremely unresponsive. It takes almost three minutes to load the home page. Once it loads, pushing buttons like Browse Files / Create New does not result in any change of view.
We used a Java profiler to see what's happening inside the application and discovered 200 HTTP threads (see the Threads attachment). Around 95% of them were, for the majority of the time, blocked waiting for a resource (see Blocked). Is this normal? I am afraid that managing this number of threads waiting for a resource might be quite an overhead for the processor. We checked the code of the BI platform (see Code) and there is indeed a lock on a resource that, judging by the number of threads waiting inside this method, seems to be recalculated very often (a schematic of the pattern follows the attachment links below).
Threads (http://postimg.org/image/4c2yug17f/full/)
Blocked (http://postimg.org/image/gm32nbd29/)
Code (http://postimg.org/image/6p5vt1b6r/)
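For illustration only, the pattern in the profiler screenshots looks roughly like the following schematic Java (this is not Pentaho's actual code): one monitor guards a value that is recomputed on every call, so under load most HTTP threads sit blocked on the lock:

    // import java.util.Properties;
    // Schematic of the observed contention pattern -- NOT Pentaho's code.
    public class SettingsHolder {
        private final Object lock = new Object();

        public Properties getSettings() {
            synchronized (lock) {            // all 200 request threads queue here
                return recomputeSettings();  // expensive work repeated on every call
            }
        }

        private Properties recomputeSettings() {
            Properties p = new Properties();
            // ... parse configuration, hit the repository, etc.
            return p;
        }
    }

Caching the computed value and invalidating it only when it actually changes would remove most of the queueing; as it turned out in our case (see below), the root cause was an outright synchronization bug rather than plain contention.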
I am also attaching CPU and RAM usage graphs taken for the period when the test was executed.
CPU (http://postimg.org/image/tbxubog6b/full/):
RAM (http://postimg.org/image/jecpimes9/full/):
Is anyone else experiencing similar issues? I would be happy to hear about other experiences with load testing / load optimization for Pentaho BI Server.
After over a week of testing it turned out to be an issue on Pentaho's side, related to incorrect thread synchronization that led to a deadlock.
We were able to contact Pentaho and they confirmed it is a bug on their side (see JIRA: http://jira.pentaho.com/browse/BISERVER-12642). It should be fixed in a service pack for Pentaho 5.4.
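For anyone debugging something similar: a JVM thread dump makes this kind of problem visible directly, because HotSpot appends a deadlock report when it detects one:

    jstack <pid-of-the-bi-server-jvm>

At the bottom of the dump, look for a section starting with "Found one Java-level deadlock:", which lists the threads involved and the monitors they are waiting on.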
I am trying out imageresizing.net, but found its performance to be really poor: 5 simultaneous requests to resize 5 different large JPGs on an 8-core machine run at about 5 seconds per image,
while a single request to resize the same image takes 1.1 seconds.
This is on a clean Windows Server 2008 R2, on a separate new ASP.NET site running .NET 4 in integrated mode, with the imageresizing library running as an HTTP module.
I can't validate the performance and scale claimed on their website.
Can someone share their experience with imageresizing.net: is the kind of performance I measured normal? It seems that the library cannot resize multiple images at the same time and relies on the disk cache to gain performance on subsequent requests for the same image. My scenario is that image resizing will most likely not be repeated, i.e. there won't be cache hits, so raw performance is important.
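To separate library overhead from raw CPU scaling, here is a small stand-alone harness (plain Java 2D, nothing to do with imageresizing.net itself) that times one resize serially and then five in parallel; if the parallel runs don't scale on an 8-core box, the bottleneck is below the library:

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.util.concurrent.*;
    import javax.imageio.ImageIO;

    public class ResizeBench {
        static BufferedImage resize(BufferedImage src, int w, int h) {
            BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = dst.createGraphics();
            g.drawImage(src, 0, 0, w, h, null); // simple scale, enough for timing
            g.dispose();
            return dst;
        }

        public static void main(String[] args) throws Exception {
            final BufferedImage src = ImageIO.read(new File(args[0])); // a large JPG

            long t0 = System.nanoTime();
            resize(src, 800, 600);
            System.out.printf("1 resize: %d ms%n", (System.nanoTime() - t0) / 1_000_000);

            ExecutorService pool = Executors.newFixedThreadPool(5);
            CompletionService<BufferedImage> cs = new ExecutorCompletionService<>(pool);
            long t1 = System.nanoTime();
            for (int i = 0; i < 5; i++) cs.submit(() -> resize(src, 800, 600));
            for (int i = 0; i < 5; i++) cs.take(); // wait for all five
            System.out.printf("5 parallel resizes: %d ms total%n", (System.nanoTime() - t1) / 1_000_000);
            pool.shutdown();
        }
    }

On a machine where resizing scales across cores, the parallel total should be close to the single-image time, not 5x it.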