I just got a LoadRunner 11 VuGen licence and tried TruClient. It looks great, and the Firefox-based script recording works really nicely. However, I have not found answers to the following:
1) Is TruClient limited in the same way as QuickTest Pro virtual user scripts (one user per OS)?
2) It is called Ajax TruClient; does that mean it supports only JavaScript-based web pages, or all pages (standard PHP/HTML), including those with JavaScript?
Here are a few answers for ya:
1) TruClient is not limited like a GUI Vuser (WinRunner, or now QTP) to a single GUI session on a Load Generator. You can run multiple Ajax TruClient virtual users on a single Load Generator and they will run "invisibly" like a virtual user. You might find that the driver is much heavier (takes more memory and CPU), so you can't run as many vusers as with the Web (HTTP/HTML) vuser.
2) TruClient is not just for AJAX-based web pages - it can work on any web page that will render in a browser.
In addition to what Mark said, it's purely event-driven: when a user clicks a link, the resulting content actually gets fetched, consumed as a resource and subsequently displayed, as opposed to traditional headless implementations, which skip this and in return use fewer system resources.
This is one of the main caveats with TruClient (from experience): depending on the complexity of your script or workflow, a single simulated user can take a lot of resources, mainly memory in my case.
This is because every emulated virtual user spawns its own instance of the Gecko web engine in order to replay the script, and that has its cost.
However, the level of realism comes very close to a typical user session and experience: you can, for example, set the typing speed, decide whether to simulate caching mechanisms or not, make additional adjustments to pattern and image recognition, etc.
Overall a mostly positive experience, which, however, comes at a certain price. Talk to your HP sales rep (disclaimer: a company I don't work for; I just have experience with the product).
A little more ...
TruClient is a big win in some respects, as you can avoid a ton of nasty correlation. But it also has some downsides: the memory/CPU footprint can be huge, and synchronization issues can be tricky.
I am somewhat familiar with benchmarking/stress testing traditional web applications, and I find it relatively easy to start estimating the maximum load for one. With the tools I am familiar with (Apache ab, Apache JMeter) I can get a rough estimate of the number of requests per second a server with a standard app can handle. I can come up with a user story, create a list of pages I would like to check, and benchmark them separately. A lot of information can be found on the internet about how to go from a novice like me to a master.
But in my opinion a lot of things are different when benchmarking a single-page application. The main entry point is the most expensive request, because the user loads the majority of things needed for the proper app experience (or at least my app works this way). After that, navigating to other places is just AJAX requests, waiting for JSON, and templating, so time to window load is not important anymore.
To make matters worse, I was not able to find any resources on how people do this properly.
In my particular case I have an SPA written with Knockout, sitting on an Apache server (most probably this is irrelevant). I would like a rough estimate of how many users my app can handle on a particular server. I am not looking for a tool recommendation (though it would be nice); I am looking for an experienced person to share their insight about the benchmarking process.
I suggest you test this application just like you would test any other web application: as you said, identify the common use cases, prepare scripts for them, run them in the appropriate mix, and analyze the results.
Web applications can break in many different ways for different reasons. You are speculating that the first page load is heavy and the rest is just small AJAX calls. From experience I can tell you that this is sometimes misleading: for example, you may find that the heavy page is served from cache and the server does not work hard for it, while a small AJAX response requires a lot of computing power, runs a long database query, or hits some locking in the code that causes it to break or slow down under load. That is why we do load testing.
You can do this with any load testing tool, ideally one that can handle these types of scripts with many dynamic values. My personal preference is WebLOAD by RadView.
I am dealing with a similar scenario: an SPA where the first page loads, and thereafter everything is done by requesting other HTML pages and/or making web service calls to get the data.
My goal is to stress test the web server and DB server.
My solution is to just create requests for those HTML pages (a very low performance concern, IMO, since they are static and can be cached in the browser for a very long time) and for the web service calls. The biggest load will come from the requests for data/processing via the web service calls.
Capture all the requests for HTML and web service calls using a tool like Fiddler, then use any load testing tool (like JMeter) to replay those requests with as many virtual users as you want to test your application with.
Is there any way to consistently detect PhantomJS/CasperJS? I've been dealing with a spate of malicious spambots built with it and have been able to mostly block them based on certain behaviours, but I'm curious if there's a rock-solid way to know if CasperJS is in use, as dealing with constant adaptations gets slightly annoying.
I don't believe in using Captchas. They are a negative user experience and ReCaptcha has never worked to block spam on my MediaWiki installations. As our site has no user registrations (anonymous discussion board), we'd need to have a Captcha entry for every post. We get several thousand legitimate posts a day and a Captcha would see that number divebomb.
I very much share your take on CAPTCHA. I'll list what I have been able to detect so far, for my own detection script with similar goals. It's only partial, as there are many more headless browsers.
It is fairly safe to use exposed window properties to detect (or at least assume) these particular headless browsers:
    window._phantom (or window.callPhantom)   // PhantomJS
    window.__phantomas                        // PhantomJS-based web perf metrics + monitoring tool
    window.Buffer                             // Node.js
    window.emit                               // CouchJS
    window.spawn                              // Rhino
The above is gathered from the JSLint documentation and from testing with PhantomJS.
Browser automation drivers (used by BrowserStack and other web capture services for snapshots):
    window.webdriver                                            // Selenium
    window.domAutomation (or window.domAutomationController)    // Chromium-based automation driver
These properties are not always exposed, and I am looking into other, more robust ways to detect such bots, which I'll probably release as a full-blown script when done. But that mainly answers your question.
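For convenience, here is a minimal sketch (my own illustration, not part of the lists above) that rolls those property checks into a single function; treat the result as a heuristic, not proof:

    // Collect which known headless/automation properties are exposed
    // on window (see the two lists above for what each one indicates).
    function detectHeadlessSignals() {
      var signals = {
        phantomjs: !!(window._phantom || window.callPhantom),
        phantomas: !!window.__phantomas,
        nodejs: !!window.Buffer,
        couchjs: !!window.emit,
        rhino: !!window.spawn,
        selenium: !!window.webdriver,
        chromiumDriver: !!(window.domAutomation || window.domAutomationController)
      };
      var detected = [];
      for (var name in signals) {
        if (signals[name]) { detected.push(name); }
      }
      return detected; // an empty array means no known property was found
    }

Remember that an empty result proves nothing; as noted above, these properties are not always exposed.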
Here is another fairly sound method to detect JS-capable headless browsers more broadly:

    if (window.outerWidth === 0 && window.outerHeight === 0) {
      // likely a headless browser
    }
This should work well because these properties default to 0 even if the headless browser sets a virtual viewport size; by default it can't report the size of a browser window that doesn't exist. In particular, PhantomJS does not support outerWidth or outerHeight.
ADDENDUM: There is, however, a Chrome/Blink bug with the outer/inner dimensions: Chromium does not report those dimensions when a page loads in a hidden tab, such as when restored from a previous session. Safari doesn't seem to have that issue.
Update: Turns out iOS Safari 8+ has a bug where outerWidth and outerHeight are 0, and a Sailfish WebView can too. So while it's a signal, it can't be used alone without being mindful of these bugs. Hence the warning: please don't use this raw snippet unless you really know what you are doing.
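Given those bugs, one way to stay on the safe side (purely my illustration; the weights and threshold are arbitrary assumptions) is to treat the zero-dimension check as a weak signal combined with stronger ones:

    // Illustrative only: never act on the dimension check alone,
    // because of the iOS Safari 8+ and Sailfish WebView bugs above.
    function looksHeadless() {
      var score = 0;
      if (window.outerWidth === 0 && window.outerHeight === 0) {
        score += 1; // weak signal: zero outer dimensions
      }
      if (window._phantom || window.callPhantom) {
        score += 2; // strong signal: PhantomJS property exposed
      }
      if (window.webdriver) {
        score += 2; // strong signal: automation driver exposed
      }
      return score >= 2; // threshold chosen arbitrarily for illustration
    }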
PS: If you know of other headless browser properties not listed here, please share in comments.
There is no rock-solid way: PhantomJS and Selenium are just software being used to control browser software, instead of a user controlling it.
With PhantomJS 1.x in particular, I believe there is some JavaScript you can use to crash the browser by exploiting a bug in the version of WebKit being used (it is equivalent to Chrome 13, so very few genuine users should be affected). (I remember this being mentioned on the Phantom mailing list a few months back, but I don't know if the exact JS to use was described.) More generally, you could match the user agent against feature detection: if a browser claims to be "Chrome 23" but does not have a feature that Chrome 23 has (and that Chrome 13 did not have), get suspicious.
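As a sketch of that idea (the specific feature and the regex are my choices, not the answerer's): PhantomJS 1.x's old WebKit famously lacks Function.prototype.bind, which real Chrome has shipped since version 7, so a mismatch looks like this:

    // If the user agent claims a modern Chrome but a long-standard
    // feature is missing, something is spoofing the user agent.
    var claimsModernChrome = /Chrome\/(1[4-9]|[2-9]\d)/.test(navigator.userAgent);
    var missingBind = typeof Function.prototype.bind !== 'function';
    if (claimsModernChrome && missingBind) {
      // likely PhantomJS 1.x (or another old engine) with a faked UA
    }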
As a user, I hate CAPTCHAs too. But they are quite effective in that they increase the cost for the spammer: he has to write more software or hire humans to read them. (That is why I think easy CAPTCHAs are good enough: the ones that annoy users are those where you have no idea what it says and have to keep pressing reload to get something you recognize.)
One approach (which I believe Google uses) is to show the CAPTCHA conditionally. E.g. users who are logged-in never get shown it. Users who have already done one post this session are not shown it again. Users from IP addresses in a whitelist (which could be built from previous legitimate posts) are not shown them. Or conversely just show them to users from a blacklist of IP ranges.
I know none of those approaches are perfect, sorry.
You could detect PhantomJS on the client side by checking the window.callPhantom property. The minimal client-side check is:

    var isPhantom = !!window.callPhantom;
Here is a gist with proof of concept that this works.
A spammer could try to delete this property with page.evaluate, and then it comes down to who runs first. Once detection has run, you reload the page with the post form, with or without a CAPTCHA depending on the detection result.
The problem is that you incur a redirect that might annoy your users. This will be necessary with every client-side detection technique, and any of them can be subverted and changed with onResourceRequested.
Generally, I don't think this is possible, because you can only detect on the client and send the result to the server. Adding the CAPTCHA combined with the detection step in only one page load does not really add anything, as it could be removed just as easily with PhantomJS/CasperJS. Defence based on the user agent also doesn't make sense, since it can be easily changed in PhantomJS/CasperJS.
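For completeness, a minimal sketch of the client-to-server round trip described above (the /detect endpoint and its parameter are hypothetical):

    // Report the detection result to a hypothetical endpoint; the server
    // can then decide whether to serve the form with or without a CAPTCHA.
    var isPhantom = !!window.callPhantom;
    var beacon = new Image();
    beacon.src = '/detect?phantom=' + (isPhantom ? 1 : 0) +
                 '&t=' + Date.now(); // cache-buster

A headless client can of course strip or fake this beacon too, which is exactly the arms race described above.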
I now have a requirement for Web GUI test automation.
I want to simulate some users (20-30) clicking a button at the same time and evaluate the performance of the Web GUI at that moment.
I used Robot Framework + the Selenium library for Web GUI test automation before, but as far as I know the Selenium library can only handle one browser at a time, so I don't know what to do now.
Can you give me some advice? Do I need another library or framework?
As mentioned by others, what you want to do in this case is not UI testing but rather stress/load testing. You could easily try Gatling. First you record the HTTP request associated with the click on your button. Then you write a simple scenario that launches this request for 20 users at once. Something like:

    setUp(scn.inject(atOnceUsers(20)))
      .protocols(httpConf)
Selenium has a "grid" option you can use to configure many instances running many browsers.
http://www.seleniumhq.org/projects/grid/
http://www.seleniumhq.org/docs/07_selenium_grid.jsp
Grid allows you to:
scale by distributing tests on several machines (parallel execution)
manage multiple environments from a central point, making it easy to run the tests against a vast combination of browsers/OS
minimize the maintenance time for the grid by allowing you to implement custom hooks to leverage virtual infrastructure, for instance
In short, you create a "hub" that manages things, then each "node" can perform tests as required by the hub.
Consider, however, that this may not be the best route to go down. Something like multi-mechanize might be more useful: http://testutils.org/multi-mechanize/
That will allow you to have many users "clicking" the button, not via a browser but via direct HTTP calls. That might be more suitable for simultaneous multi-user "headless" load testing, which is what I think you are attempting to do.
I'm slightly confused by this question:
Are you wanting to test the GUI? If it's something like "This button makes a dropdown menu appear", then it doesn't matter how many users do it at the same time, it'll either always work, or never work.
Are you wanting to test the server under load? If so, then Selenium will work, but there are better tools. I have used JMeter with success, but there is a really good listing of all of them here: http://performance-testing.org/
Finally, are you wanting to press 20 different buttons on the same page in the same browser at the same time? If so, this isn't possible with Selenium, and it isn't a standard use case either.
I want to simulate our whole checkout process under load. This essentially involves running a number of POSTs in sequence, where the client stores a unique cookie for each sequence so that the session is preserved. Can anyone recommend software or a service that meets these conditions?
This sort of thing can be accomplished very easily, effectively, and freely using Apache JMeter. You can either record the journey using JMeter's proxy or simply add the requests manually.
To simulate cookies, add a Cookie Manager to the test plan. For any other tokens or session IDs that need to be correlated, you can use a Regular Expression Extractor.
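For illustration, suppose each response carries the session token in a hidden field such as <input name="token" value="..."> (the field name here is hypothetical); the extractor could then be configured roughly as:

    Reference Name:      token
    Regular Expression:  name="token" value="(.+?)"
    Template:            $1$
    Match No.:           1

Subsequent requests can then send the extracted value as ${token}.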
There are lots of options for this kind of test. Free/open-source tools will require a bit more work on your part but are otherwise free. Tools like ours (Load Tester 5) will get the job done much more quickly, but there is a cost for the software. If your organization does not have much experience with load testing and is on a tight schedule, you might want to bring in outside help to meet your deadline and learn the process (we offer services as well!).
I am developing a small intranet-based web application. I have YSlow installed, and it suggests I do several things, but they don't seem relevant for me.
e.g. I do not need a CDN.
My application is slow, so I want to reduce the bandwidth used by requests.
What rules of YSlow should I adhere to?
Are there alternative tools for smaller sites?
What is the check list I should apply before rolling out my application?
I am using ASP.NET.
Bandwidth on intranet sites shouldn't be an issue at all (unless you have VPN users, that is). If you don't and it's still crawling, it probably has more to do with the backend than the front-facing structure.
If you are trying to optimise for remote users, some of the same things apply to try and optimise the whole thing:
Don't use 30 stylesheets; concatenate them into one
Don't use 30 JS files; concatenate them into one
Consider compressing both JS and CSS using minifiers or the YUI Compressor
Consider using sprites (single images containing multiple versions, e.g. button-up and button-down, one above the other)
Obviously, massive images are a no-no
Make sure you send Expires headers so that stylesheets/JS/images etc. are all cached for a sensible amount of time
Make sure your pages aren't ridiculously large. If you're in a controlled environment and can guarantee JS availability, you might want to page data with AJAX
To begin, limit the number of HTTP requests made for images, scripts and other resources by combining them where possible. Consider minifying them too. I would recommend Fiddler for debugging HTTP.
Be mindful of the size of ViewState: set EnableViewState = false where possible. For example, for dropdown list controls that never have their list of items changed, disable ViewState and populate the list in Page_Init or by overriding OnLoad. "TRULY understanding ViewState" is a must-read article on the subject.
Oli posted an answer while I was writing this, and I have to agree that bandwidth considerations should be secondary or tertiary for an intranet application.
I've discovered Page Speed since asking this question. It's not really for smaller sites, but it is another great Firebug plug-in.
Update: As of June 2015, the Page Speed plugins for Firefox and Chrome are no longer maintained or available; instead, Google suggests the web version.
Pingdom tools provides a quick test for any publicly accessible web page.