I have to perform load testing of an application using Selenium WebDriver for 100 users: 100 users log in and hit the server at the same time.
How can I do this with Selenium WebDriver?
UPDATE: As mentioned in the comments, this is a bad idea. If you are considering load testing with Selenium Grid, reconsider your purpose and verify whether Selenium Grid really is the only option you have.
For a free solution:
Selenium provides an easily scalable testing framework called Selenium Grid. You can use this in conjunction with TestNG to create a scalable load-testing framework.
From the link:
scale by distributing tests on several machines (parallel execution)
manage multiple environments from a central point, making it easy to run the tests against a vast combination of browsers / OS.
minimize the maintenance time for the grid by allowing you to implement custom hooks to leverage virtual infrastructure, for instance.
I have leveraged Selenium Grid to load-test our web app with about a dozen concurrent browser sessions (so far). I used several references to achieve this; a sketch follows the links:
http://testng.org/doc/documentation-main.html#parallel-tests
http://blog.wedoqa.com/2013/07/how-to-run-parallel-tests-with-selenium-webdriver-and-testng-2/
http://www.mkyong.com/unittest/testng-selenium-load-testing-example/
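To make that concrete, here is a minimal sketch of a TestNG test driving Selenium Grid. The hub URL, target page, and element locators are assumptions for illustration, and the concurrency here comes from TestNG's invocationCount/threadPoolSize attributes rather than a suite file:

```java
import java.net.URL;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class GridLoginTest {

    // One driver per thread; TestNG runs the test method in parallel threads.
    private final ThreadLocal<WebDriver> driver = new ThreadLocal<>();

    @BeforeMethod
    public void openSession() throws Exception {
        // Hypothetical hub address -- point this at your own Grid hub.
        driver.set(new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"), new ChromeOptions()));
    }

    // Fires the method 100 times across 10 concurrent threads -- a crude
    // "many users" simulation. Raise threadPoolSize with care: every thread
    // consumes a full browser session on the Grid.
    @Test(invocationCount = 100, threadPoolSize = 10)
    public void loginAndHitServer() {
        WebDriver d = driver.get();
        d.get("https://example.com/login");                   // assumed URL
        d.findElement(By.id("username")).sendKeys("user");    // assumed locators
        d.findElement(By.id("password")).sendKeys("secret");
        d.findElement(By.id("submit")).click();
    }

    @AfterMethod
    public void closeSession() {
        driver.get().quit();
    }
}
```

Note how heavyweight this is: every virtual user costs a full browser. That is why the update above points toward protocol-level tools like JMeter once you get past a dozen or so concurrent sessions.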
Though it's not completely clear what type of performance testing you are looking for:
Selenium WebDriver is mainly meant for testing front-end functional cases, not for cranking up load on the front end.
So, as I see it, you might be looking for one of these:
JMeter
For performance testing of the API or the backend (e.g. the login API), if you're looking for a free tool I would suggest JMeter, hands down:
http://jmeter.apache.org/
JSP PAGES PERFORMANCE (WHITE BOX)
For front-end page rendering or response times (e.g. of a JSP page) there seem to be many techniques, but most point to white-box testing such as this; a sketch of one such technique follows the link:
http://www.javaperformancetuning.com/tips/j2ee_srvlt.shtml#REF12
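A typical white-box technique from references like that one is to time the server-side handling with a servlet filter. A minimal sketch, assuming the classic javax.servlet API; the filter name and log format are illustrative:

```java
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Logs how long each request (e.g. a JSP page) takes to render server-side.
public class TimingFilter implements Filter {

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        long start = System.nanoTime();
        try {
            chain.doFilter(req, res); // let the JSP/servlet do its work
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            String uri = ((HttpServletRequest) req).getRequestURI();
            System.out.println(uri + " took " + elapsedMs + " ms");
        }
    }

    @Override
    public void destroy() { }
}
```

Map it to /* in web.xml (or annotate it) and your log becomes a cheap per-page response-time report.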
Hope it helps.
Good afternoon.
I'm testing my company's streaming service, which works like Twitch.
The task is as follows:
Log in to your account and simulate viewing the stream (and chat).
I was thinking of writing the code with Selenium. But as far as I understand, in that case I would have to use a separate driver for each thread, and I'm afraid it will take up too much memory.
Now the questions: Is that true? Is there a way to avoid this? What methods would you recommend to solve this problem?
I just came up with the idea of not rendering the video to save resources. But there is one caveat: the streaming service must not decide I'm a bot. In other words, I have to keep receiving the stream without rendering it, and this won't work with Selenium.
The question is as follows: is it possible to send login data to the form and "view" the stream programmatically in Java?
Which libraries should I use?
Can you recommend the necessary libraries with links to the functionality I need?
You can use a cloud-hosted testing service for this; then you will not have to take care of the testing infrastructure. Some services allow you to use Selenium in the test scripts, so test creation will feel much like local testing.
Here is a link to a service that will let you achieve what you need; you can run some tests there for free.
Also, here is a step-by-step guide to creating and setting up your test.
The easiest way to achieve this would be to use Selenium Grid with TestNG.
If you need to validate the front end, Selenium is the tool; if not 100%, you can simply test using API calls (see the sketch after this list):
Log in via API calls
Perform a GET on the desired page and use an HTML parser to make some validations about the front end
Use API calls to check the chat
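A minimal sketch of that approach in Java, assuming a plain form-based login; the URLs, form field names, and CSS selector are hypothetical, and the HTML parsing uses the jsoup library:

```java
import java.util.Map;

import org.jsoup.Connection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class ApiLevelCheck {
    public static void main(String[] args) throws Exception {
        // 1. Log in via a plain HTTP POST (assumed endpoint and field names).
        Connection.Response login = Jsoup.connect("https://example.com/login")
                .data("username", "user")
                .data("password", "secret")
                .method(Connection.Method.POST)
                .execute();
        Map<String, String> cookies = login.cookies(); // session cookies

        // 2. GET the stream page with the session and parse the HTML.
        Document page = Jsoup.connect("https://example.com/stream/123")
                .cookies(cookies)
                .get();

        // 3. Validate something about the front end (selector is hypothetical).
        boolean playerPresent = !page.select("#video-player").isEmpty();
        System.out.println("Player rendered: " + playerPresent);
    }
}
```

Keep in mind this exercises HTTP responses only; it will not actually pull the video, which usually means requesting HLS/DASH segments on top of this.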
I have a custom API included in a website which creates a new UID on each unique user visit, like Google Analytics, and sends the UID data to a backend server (Node.js) for computation.
I need to check concurrent users and the maximum number of users that can be created/handled with the current cloud config.
I also need to check whether there is any limit on the API creating and sending user data. The API is on a CDN (Fastly).
Please suggest some testing tools to check the above scenario.
SoapUI is a de facto standard for web services functional testing, and it also has certain load testing capabilities.
Web services are basically JSON or SOAP over HTTP, so any tool which supports the HTTP protocol will suit. Here you can find the list of free and open source load testing tools. Narrowed down to the most powerful ones, it looks like:
Grinder
Gatling
Apache JMeter
Tsung
Check out the Open Source Load Testing Tools: Which One Should You Use? article for a comparison of the main features, sample scripts and reports.
I agree with Dmitry that those four (Grinder/Gatling/Tsung/JMeter) are good tools with a lot of functionality, but they are also fairly complex, require dependencies, and can be somewhat painful to get started with. Which tool is best for you depends entirely on your requirements.
It sounds to me like you want to test one or two REST API end points powered by NodeJS. If you want a simple-to-get-started with tool that can be scripted, there are some good command-line tools available:
Wrk - very fast, scriptable in Lua
Artillery - NodeJS-based, scriptable in JS
k6 - our own newly released tool, currently the fastest tool scriptable in JS
There is also Locust which is scriptable in Python, but very low-performing.
I like these tools because they offer simple command-line usage and can be scripted in a real language, as opposed to JMeter and Tsung, where you'll have to resort to XML if you want to do something slightly out of the ordinary. Gatling is a bit better, offering a DSL based on Scala classes where you can do most things, but it is still not "real" Scala. The Grinder is the only one of those other tools that offers true scripting (in Jython), but again, getting started is not a simple one-line command.
My company is at the beginning of building a test automation architecture.
There are different types of apps: Windows desktop, web, and mobile.
What would you experienced folks recommend starting from? I mean resources.
Should we build the whole system up front, or construct something basic and enhance it in the future?
Thanks a lot!
Start small. If you don't know what you need, build the smallest thing you can that adds value.
It's very likely that the first thing you build will not be what you need, and that you will need to scrap it and do something else.
Finally, don't try and test EVERYTHING. This is what I see fail over and over. Most automated test suites die under their own weight. Someone makes the decision that EVERYTHING must be tested, and so you build 10,000 tests around every CSS change. This then costs a fortune to update when the requirements change. And then you get the requirement to make the bar blue instead of red...
One of two things happens: either the tests get ignored and the suite dies, or the business compromises on what it wants because the tests cost so much to update. In the first case, the investment in tests was a complete waste; the second case is even more dangerous, as it implies that the test suite is actually impeding progress, not assisting it.
Automate the most important tests. Find the most important workflows. The analysis of what to test should take more time than writing the tests themselves.
Finally, embrace the Pyramid of Tests.
Just as Rob Conklin said,
Start small
Identify the most important tests
Build your test automation architecture around these tests
Ensure your architecture allows for reusability and manageability
Build easily understandable report and error logs
Add Test Data Management to your architecture
Once you ensure all these, you can enhance later as you add new tests
In addition to what was already mentioned:
Make sure you have fast feedback from your automated tests. Ideally they should be executed after each commit to the master branch.
Identify in which areas of your system test automation brings the biggest value.
Start with integration tests and leave end-to-end tests for later.
Try to keep every automated test very small, checking only one function.
Prefer a low-level test interface like an API or CLI over the GUI; a sketch of such a test follows this list.
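As an illustration of the API-over-GUI point, here is a minimal sketch of an API-level check using the HTTP client built into Java 11+; the endpoint and expected status are assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HealthCheckTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical endpoint -- replace with a real API of your system.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/health"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // A GUI-free assertion: fast, stable, and cheap to run on every commit.
        if (response.statusCode() != 200) {
            throw new AssertionError("Expected 200 but got " + response.statusCode());
        }
        System.out.println("API healthy: " + response.body());
    }
}
```

A check like this gives the fast, per-commit feedback mentioned above, at a fraction of the maintenance cost of a GUI test.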
I'm curious what path you chose. We run automated UI tests for mobile, desktop applications, and the web.
Always start small, but building a framework is what I recommend as the first step when facing this problem.
The approach we took is:
created a mono repo
installed Selenium WebDriver for web
installed WinAppDriver for desktop
installed Appium for mobile
created an api for each system
DesktopApi
WebApi
MobileApi
These APIs contain business functions that we share across teams (a sketch of one follows the list below).
This framework now lets us write tests that go across the different systems, such as:
create a user on a mobile device
enter a case for them in our desktop application
log in on the web as the user and check their balance
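A minimal sketch of what one of those shared business-function APIs might look like; the class and method names are hypothetical, and only the web flavour is shown:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// One of several per-platform "API" classes (WebApi, DesktopApi, MobileApi)
// exposing business functions that tests call instead of raw driver calls.
public class WebApi {

    private final WebDriver driver;

    public WebApi(WebDriver driver) {
        this.driver = driver;
    }

    // Business function shared across teams; URL and locators are assumptions.
    public void login(String username, String password) {
        driver.get("https://example.com/login");
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("submit")).click();
    }

    public String readBalance() {
        return driver.findElement(By.id("balance")).getText();
    }
}
```

A cross-system test then just composes calls such as mobileApi.createUser(...), desktopApi.enterCase(...), and webApi.login(...) followed by webApi.readBalance().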
Before getting started on the framework, it is always best to learn from others' test automation mistakes.
Start by prioritizing which tests should be automated, such as business-critical features, repetitive tests that must be executed for every build or release (smoke tests, sanity tests, regression tests), data-driven tests, and stress and load testing. If your application supports different operating systems and browsers, it's highly useful to automate tests early that verify stability and proper page rendering.
In the initial stages of building your automation framework, keep the tests simple and gradually include more complex ones. In all cases, the tests should be easy to maintain, and you need to consider how you will debug errors, report on test results, schedule tests, and do bulk test runs.
Suppose I need to do smoke testing of my application with automated scripts. Is it really necessary to run these scripts on all browsers (e.g. Chrome, Firefox, IE) to check stability?
Currently I only use Firefox with my selenium-webdriver scripts, so I was just wondering if it's necessary to regularly smoke test on all browsers.
To me, it primarily depends on two factors.
What is the application and how is it written?
For example, if the website is just an old-fashioned HTML 4 site, then I wouldn't bother testing all browsers. If the target is not some kind of monster web application but just a simple site, I'd just use the headless browser PhantomJS to make sure it's functional. However, if we are talking about complex modern-age web applications, then I'd recommend testing against some real browsers (Chrome, Firefox, and IE at least).
It also really depends on the technology used on the site. If something very cutting-edge is used to build the application, then I'd say have a think about, and test against, old browsers like IE8.
What browsers do your customers use?
The most important thing I care about is what browsers my customers use. If most of the users use IE6, then IE6 will be my main testing target. Have a look at the user data in your site's analytics tool (Google Analytics, Mixpanel, etc.) to see what matters most.
For example, whenever I see someone asking about how to run Selenium using Safari on Windows, I'd tell them straight away, Safari for Windows is dead, don't do it because it's just a waste of time (unless most customers use it for some strange reason).
More thoughts from Arran's comment:
One other thing you need to measure is ROI (Return on investment).
UI tests are normally written by developers and require a lot of effort to maintain. If tests run against Opera, which none of the customers use, developers may end up wasting time trying to maintain unstable, failing tests for something that would never happen in a real environment.
Using more browsers means higher maintenance costs. Look through all the UI bugs of the product: how many of them are browser-specific? For instance, if it's about 25% and all about IE, then I'd say run your tests on Chrome (or Firefox) and IE. But if it's only 5%, I'd say testing one browser is enough; don't worry about the others.
We are building a large CRM system based on the SalesForce.com cloud. I am trying to put together a test plan for the system but I am unsure how to create system-wide tests. I want to use some behaviour-driven testing techniques for this, but I am not sure how I should apply them to the platform.
For the custom parts we will build in the system, I plan to approach this with either Cucumber or SpecFlow driving Selenium actions on the UI. But for the SalesForce UI customisations, I am not sure how deep to go in testing. Customisations such as Workflows and Validation Rules can encapsulate a lot of complex logic that I feel should be tested.
Writing Selenium tests for this out-of-box functionality in SalesForce seems overly burdensome for the value. Can you share your experiences on System testing with the SalesForce.com platform and how should we approach this?
That is the problem with a detailed test plan up front: you are trying to guess what kinds of errors you will get, how many, and in which areas. This may be tricky.
Maybe you should have an overall Master Test Plan specifying only the test strategy, the main tool set, risks, and the relative amount of testing you want to put into given areas (based on risk).
Then, when you start to work on a given piece of functionality or an iteration (I hope you are working in iterations, not waterfall), you prepare a detailed test plan for that set of work. You adjust your tools/estimates/test coverage based on experience from previous parts.
This way you can say at the beginning what your general approach and priorities are, but you let yourself adapt later as the project progresses.
The question of how much testing you need to put into testing COTS is the same as with any software: you need to evaluate the risk.
If your software needs to be validated because of external regulations (FDA, DoD...), you will need to go deep with your tests, almost testing the entire app. One problem here may be assuring the external regulator that the tools you used for validation are themselves validated (and that is troublesome).
If your application is mission-critical for your company, then you still need to do a lot of testing based on extensive risk analysis.
If your application is not concerned with any of the above, you can go with lighter testing. You can probably skip functionality that was tested by the platform manufacturer and focus on your customisations. On the other hand, I would still write tests (at least happy paths) for the workflows you will be using in your business processes.
When we started learning Selenium testing in 2008, we created the Recruiting application from the SalesForce handbook, built a suite of tests, and described our path step by step in our blog. It may help you get started if you decide to write Selenium code to test your app.
I believe the problem with SalesForce is that you have unit and UI testing, but no service-level testing. The SpecFlow I've seen driving the Selenium UI is brittle and doesn't encapsulate what I'm after in engineering a service-level test solution:
When I navigate to "/Selenium-Testing-Cookbook-Gundecha-Unmesh/dp/1849515743"
And I click the 'buy now' button
And I click the 'proceed to checkout' button
That is not the spirit or intent of SpecFlow. Compare:
Given I have not selected a product
When I select Proceed to Checkout
Then ensure I am presented with a message
In order to test that with Selenium, you essentially have to translate it into clicks and typing, whereas in the .NET realm you can instantiate objects in the middle tier and run hundreds of instances and derivations against the same Background (mock setup).
I'm told that you can expose SF through an API, at some security risk. I'd love to find out more about THAT.
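For what it's worth, Salesforce does expose a REST API. A minimal sketch of querying it from Java, assuming you have already obtained an OAuth access token and your instance URL (both placeholders below), and noting that the API version segment may differ for your org:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SalesforceQuery {
    public static void main(String[] args) throws Exception {
        String instanceUrl = "https://yourInstance.my.salesforce.com"; // placeholder
        String accessToken = "OAUTH_ACCESS_TOKEN";                     // placeholder

        // SOQL query via the Salesforce REST API (version may differ).
        String soql = "SELECT+Name+FROM+Account+LIMIT+5";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(instanceUrl + "/services/data/v57.0/query?q=" + soql))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // JSON body with the queried records; assert against it in a test.
        System.out.println(response.body());
    }
}
```

This is the layer where the missing service-level tests could live: assert on the JSON records instead of driving the UI.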