How to strategize your test automation? - Selenium

I would like to get input on how to run automated smoke tests based on what developers check in. Currently, when there is a commit by devs, a Jenkins job is triggered to build the code, and smoke tests run to test the app. But the smoke suite contains more than 50 tests. How would you design your automation so that when there is a check-in by devs, it only runs against the app features that could be affected by the new check-in? Here is our flow: a dev checks in to the Git repo, a Jenkins job gets triggered through a webhook and builds the app, and once the build is done there is a downstream job to run the smoke tests. I would like to limit the smoke tests to only the features that are affected by the new check-in.

You can determine which areas of your product might be affected, but you cannot be 100% sure. Don't rely on that; you don't want regressions with an unknown source. They are extremely hard to triage, and one of the best things about continuous integration is that each change (or small batch of changes) is tested separately, so at any moment you know what is wrong with your app without spending much time on investigation. 10 minutes for a set of 50 tests is actually very good. If the only problem with running the tests is the time consumed, why not think about running them in parallel instead of reducing the test suite? I would prefer to speed up the test execution phase rather than shrink the test suite (one way to do that is sketched below).
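For example, with TestNG (commonly paired with Selenium in Java) you can turn on method-level parallelism without changing the tests themselves. A minimal sketch, assuming TestNG 7.x; the com.example.smoke.* class names are hypothetical placeholders:

```java
import java.util.ArrayList;
import java.util.List;

import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

public class ParallelSmokeRunner {
    public static void main(String[] args) {
        XmlSuite suite = new XmlSuite();
        suite.setName("Smoke");
        // Run test methods concurrently instead of one by one.
        suite.setParallel(XmlSuite.ParallelMode.METHODS);
        suite.setThreadCount(10);

        XmlTest test = new XmlTest(suite);
        test.setName("smoke-tests");
        List<XmlClass> classes = new ArrayList<>();
        // Hypothetical test classes; substitute your real smoke tests.
        classes.add(new XmlClass("com.example.smoke.LoginTests"));
        classes.add(new XmlClass("com.example.smoke.CheckoutTests"));
        test.setXmlClasses(classes);

        List<XmlSuite> suites = new ArrayList<>();
        suites.add(suite);
        TestNG testng = new TestNG();
        testng.setXmlSuites(suites);
        testng.run();
    }
}
```

The same parallel and thread-count settings can live in testng.xml instead; the one caveat is that each thread needs its own WebDriver instance (a ThreadLocal is the usual pattern), or parallel Selenium tests will fight over a shared browser.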

Related

Is it possible to continue tests sequentially from prior test state with Foundry/Forge?

I'm wondering if there is a way within Forge to continue your tests sequentially, starting from the contract end state of the last test, without re-pasting the prior test's code as setup. Obviously I could just make one massive test; however, I would lose the gas data and such that I would receive from individual tests. Thank you in advance to anyone who can help :)
No, this is not possible. Each test is run in isolation, with the exception that state from setUp is preserved.
You can make a bigger test and retain gas reporting information if you use forge test --gas-report instead of going by the gas reported by just forge test. The --gas-report flag uses transaction tracing to build a gas usage report of individual calls instead of the entire test.

How to use Jmeter with timer

I am having a problem with JMeter: using it with a Timer causes JMeter to crash.
The case is: I want to create a load of requests to be executed every half hour.
Is that something you can do with JMeter?
Every time I try it, JMeter keeps loading, hangs, and requires a shutdown.
If you want to leave JMeter up and running forever, make sure to follow JMeter Best Practices, as certain test elements might cause memory leaks.
If you need to create "spikes" of load every 30 minutes, it might be a better idea to use your operating system's scheduling mechanisms to execute "short" tests every half hour (one self-contained alternative is sketched after this list), like:
Windows Task Scheduler
Unix cron
MacOS launchd
Or, even better, go for a Continuous Integration server like Jenkins: it has the most powerful trigger mechanism, allowing you to define flexible criteria for when to start the job, and you can also benefit from the Performance Plugin, which can automatically mark a build as unstable or failed depending on test metrics and can build performance trend charts.
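If you'd rather keep the half-hourly trigger in code than in cron or Jenkins, a small wrapper process can launch JMeter in non-GUI mode on a fixed schedule. A minimal sketch, assuming jmeter is on the PATH and that plan.jmx / results.jtl are placeholder file names:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HalfHourlyJMeterRunner {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Launch a short, non-GUI JMeter run every 30 minutes.
        scheduler.scheduleAtFixedRate(() -> {
            try {
                Process jmeter = new ProcessBuilder(
                        "jmeter", "-n",       // -n = non-GUI mode, as the best practices recommend
                        "-t", "plan.jmx",     // placeholder test plan
                        "-l", "results.jtl")  // placeholder results file
                        .inheritIO()
                        .start();
                jmeter.waitFor();             // the test is short, so waiting is fine
            } catch (Exception e) {
                e.printStackTrace();          // keep the scheduler alive on failure
            }
        }, 0, 30, TimeUnit.MINUTES);
    }
}
```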

How are tests handled by Scrum using TFS 2012?

We are, for the first time, trying to implement Scrum at our company using TFS 2012. So far the process is not going very well, since we have questions that no one has been able to answer so far.
Our main concern is how to handle the test phase. Here is our scenario (in terms of people/jobs):
We have 6 programmers
We have a scrum master
We have 2 testers (that are not programmers)
That is what we have until now:
All desires go to the board
We have the sprint meeting where we add tasks to those desires
We prepare the sprint
People start to do their jobs
Our Definition of Done specifies that a story can only be considered done when the story goes to the test guys and one of them (in this case, me) says that the story is done. So far so good.
We have a Test Server where all tests are executed and that server is similar to the production server (web app).
As I said, our main concern is how to handle tests:
Since all developers can commit their code (using SVN), when should they commit? When a task is done or when a backlog item is done?
When should a test release be released?
When should the tests begin? Should we start testing after a task is done or after a backlog item is done? How can we get notified when we should begin testing?
Should we create a Deployment task and a Test Task on every backlog item?
Any help would be nice.
- Since all developers can commit their code (using SVN), when should they commit? When a task is done or when a backlog item is done?
Ans: I think you should commit as soon as the code is ready. If you have tasks created under a user story and a task covers some small piece of development, you can commit and close the task. Likewise, you have smaller tasks for the development of the user story; once all tasks are completed, the user story (backlog item) is completed.
To test these commits, you can have an automated test suite that runs against the CI environment, covering both smoke testing and regression testing. Which suite to run can be determined based on time: for example, run the regression suite weekly and the smoke suite nightly. One way to split the suites is sketched below.
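One common way to make that smoke/regression split concrete in Java is JUnit 5 tags, so the nightly and weekly CI jobs can filter on the same codebase. A minimal sketch; the test class and its contents are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class CheckoutTests {

    @Test
    @Tag("smoke")             // picked up by the nightly smoke run
    void userCanReachCheckout() {
        assertTrue(true);     // placeholder for a real check
    }

    @Test
    @Tag("regression")        // picked up by the weekly regression run
    void discountCodesStillApply() {
        assertTrue(true);     // placeholder for a real check
    }
}
```

The CI job then selects the suite at run time, e.g. mvn test -Dgroups=smoke with Maven Surefire, or includeTags("smoke") in Gradle's useJUnitPlatform block.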
- When should a test release be released?
- When should the tests begin? Should we start testing after a task is done or after a backlog item is done? How can we get notified when we should begin testing?
Ans: There shouldn't be a strict deadline for releasing a test build. Testing can start while a user story is being developed. One way a tester can help is to hold a 'show me' session on the code the developer is working on and give early feedback, then start feature testing once the user story is resolved (you can have this state in TFS). But make sure to complete testing before the sprint ends.
If the testing debt is high, you can have a hardening sprint.
- Should we create a Deployment task and a Test Task on every backlog item?
Ans: Yes, there should be two separate tasks for each user story; one of them covers the feature testing (user story testing). This is important for estimation and also for getting the correct team velocity.
There is nothing specific to TFS in the way tests are handled in Scrum. In fact, there's nothing particular to Scrum about handling tests! Your team has agreed that a story (or backlog item) is Done only after it has been tested, and the only thing that Scrum "cares about" is that you meet your Definition of Done.
With regard to your specific questions:
You should commit your changes early, and often. The more you commit, the easier it is for you to communicate changes to the rest of the team, and in case of a disaster (you screw up the code, or your hard drive fails), the easier it is for you to recover. In my experience it is best to commit whenever your code is stable and safe enough to add to the repository. This should be done at least when you complete a task.
You should release early and often (see a recurring theme here?). This means releasing as soon as your product has a valuable increment that is stable and safe enough to expose to stakeholders outside your team. After your first release, you should be able to do so whenever you complete a backlog item - every minimally viable product increment.
You should test - you guessed it - early and often. This means as soon as the changes to your product are sufficient to test. For developer unit tests, this means pretty much every time you add a scenario (if you're doing test-driven development). For the "QA" tests, it should be done as early as possible - whenever a task is completed, if you can manage it. In fact, you may wish - if necessary - to rethink how you break backlog items into tasks, in order to increase the tasks' testability. If, for example, you break the tasks by component or code layer, it is impossible to test the code before the entire story is code-complete. On the other hand, if you break the backlog items by use case or scenario, you will be able to test whenever you complete a task - and the developers will get their feedback immediately after they complete the tasks!
If you need to deploy every backlog item (and you do), the work involved should be accounted for. If every deployment is slightly different (one story might need to be published to the web, another might mean uploading an app to a mobile-app store), in a non-trivial and non-obvious way, then you should have a task for it, explaining how to deploy. If, however, every deployment task's title looks like "deploy" and the estimations are the same, then you should consider not wasting your time, and simply add a state to the backlog items' workflow (something like: To Do --> In Progress --> Testing --> Deploy --> Done)
Overall, Scrum is not going to give you the process to follow. It only provides a framework within which you can experiment with your process empirically to find what works best for your team.
Since all developers can commit their code (using SVN), when should they commit? When a task is done or when a backlog item is done?
Can you achieve an integrated, done increment without committing? If not, commit when it makes sense. Pick one and talk about the impact it had. What happens when you batch all of your changes until the backlog item is "done"? Can a backlog item be done without committing?
When should a test release be released?
When the Product Owner feels they have enough value to ship, or there is value in shipping. The increments you create in a sprint should be releasable.
Maybe you are asking whether you can ship mid-sprint. Sure! Consider it another empirical experiment to be executed within the Scrum framework. You can try the experiment and inspect the results empirically.
When should the tests begin? Should we start testing after a task is done or after a backlog item is done?
Again, pick one and use Scrum to inspect the impact. As a recommendation, don't wait too long. Try to get yourself into a position of building tests/test cases in parallel. Collaborate across those skills.
How can we get notified when we should begin testing?
You have 6 Development Team members. Ask the other person: email, Skype, turn your head, stand up and walk over. Maybe you can use the Daily Scrum as a place to plan the day.
Should we create a Deployment task and a Test Task on every backlog item?
Again, some teams do. If this helps you a) understand the remaining work and b) execute your process then try it. Maybe "Deployable via xyz script" is a new Definition of Done item in the making.

Can TeamCity tests be run asynchronously

In our environment we have quite a few long-running functional tests which currently tie up build agents and force other builds to queue. Since these agents are only waiting on test results they could theoretically just be handing off the tests to other machines (test agents) and then run queued builds until the test results are available.
For CI builds (including unit tests) this should remain inline as we want instant feedback on failures, but it would be great to get a better balance between the time taken to run functional tests, the lead time of their results, and the throughput of our collective builds.
As far as I can tell, TeamCity does not natively support this scenario so I'm thinking there are a few options:
Spin up more agents and assign them to a 'Test' pool. Trigger functional build configs to run on these agents (triggered by successful CI builds). While this seems the cleanest, it doesn't scale very well: we then have the lead time of purchasing licenses, and we often need to run tests in alternate environments, which would temporarily double (or more) the required number of test agents.
Add builds or build steps to launch tests on external machines and immediately mark the build as successful so queued builds can be processed; then, when the tests are complete, mark the build as succeeded/failed. This relies on being able to update the results of a previous build (REST API perhaps?). It also feels ugly to mark something as successful and then update it as failed later, but we could always be selective in what we monitor so we only see the final result.
Just keep spinning up agents until we no longer have builds queueing. The problem with this is that it's a moving target. If we knew where the plateau was (or whether it existed) this would be the way to go, but our usage pattern means this isn't viable.
Has anyone had success with a similar scenario, or knows pros/cons of any of the above I haven't thought of?
Your description of the available options seems to be pretty accurate.
If you want live update of the builds progress you will need to have one TeamCity agent "busy" for each running build.
The only downside here seems to be the agent licenses cost.
If the testing builds just launch processes on other machines, the TeamCity agent processes themselves can be run on a low-end machine and even many agents on a single computer.
An extension to your second scenario can be two build configurations instead of a single one: one would start the external process, and the other would be triggered on the external process's completion and then publish all the external process's results as its own (see the sketch below). It can also have a snapshot dependency on the starting build to maintain the relation.
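To illustrate the "publish the external results as its own" step: TeamCity picks up service messages (##teamcity[...]) written to a build's stdout, so the second configuration can re-emit the external outcome as regular tests. A minimal sketch, assuming the external runner left a name=PASSED/FAILED summary file (a hypothetical format):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class PublishExternalResults {
    public static void main(String[] args) throws Exception {
        // Hypothetical summary written by the external test machine,
        // one "testName=PASSED" or "testName=FAILED" entry per line.
        for (String line : Files.readAllLines(Path.of("external-results.txt"))) {
            String[] parts = line.split("=", 2);
            String name = parts[0];
            String outcome = parts.length > 1 ? parts[1] : "FAILED";
            // TeamCity parses these service messages from the build log
            // and displays the tests as if this build had run them itself.
            System.out.printf("##teamcity[testStarted name='%s']%n", name);
            if (!"PASSED".equals(outcome)) {
                System.out.printf("##teamcity[testFailed name='%s' message='failed on external machine']%n", name);
            }
            System.out.printf("##teamcity[testFinished name='%s']%n", name);
        }
    }
}
```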
For anyone curious, we ended up buying more agents and assigning them to a test pool. Investigation proved that it isn't possible to update build results (I can definitely understand why this ugliness isn't supported out of the box).

VS2010 Coded UI Tests vs. Web Performance test (What's the difference?)

Been playing with both for a couple of hours.
You use a Coded UI test to record some actions and verify them through assertions.
You use a Web Performance test to record some actions and verify them through validation tests/extraction tests... basically the same thing... and then you can optionally convert it to code, like the Coded UI tests.
But it seems you can only add a WEB PERFORMANCE TEST to a load test...
But aren't they both pretty much the same thing? What am I not understanding? Why not allow a Coded UI test to be inside a load test?
Coded UI tests are for automated functional testing. These tests will simulate user interaction against the UI, such as button clicks and entering text. Coded UI tests require an interactive desktop environment, because they actually interact with the windows and objects of your application. Coded UI Tests in VS2010 are the equivalent of using something like HP QuickTest Pro or Selenium to drive your automated functional regression tests.
Load tests record and drive your application at the HTTP level. These tests simulate headless user interaction against your app server by sending HTTP requests directly, without a UI. Load tests typically assume that your application works correctly for 1 user, but aim to see if it functions under a heavy user load. Load tests are headless because simulating thousands of users with an interactive UI is not practical. By being headless, a single load agent machine can simulate hundreds or thousands of users. VS load tests are the equivalent of using HP LoadRunner or JMeter to drive virtual user load.
Functional and performance testing are two distinct types, with different strategies and processes. On a given project, you might have hundreds of automated functional tests (coded ui, for example), but only dozens of automated performance tests. You have so many more functional tests because you are testing your app in many different scenarios relative to your business requirements. Whereas with performance tests, you take your top dozen most commonly used transactions and run them under load.
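To make the distinction concrete, below is roughly what the Selenium equivalent of a Coded UI test looks like: it drives a real browser through clicks and text entry rather than raw HTTP. A minimal Java sketch, assuming Selenium WebDriver; the URL and element IDs are hypothetical placeholders:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginUiTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();  // needs a real browser and desktop session
        try {
            driver.get("https://example.com/login");             // placeholder URL
            driver.findElement(By.id("username")).sendKeys("demo");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();
            // Functional check against the rendered UI, not raw HTTP.
            if (!driver.getPageSource().contains("Welcome")) {
                throw new AssertionError("login flow failed");
            }
        } finally {
            driver.quit();
        }
    }
}
```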
I think this article has great value for this discussion:
CodedUI Tests –
As described in the answer above.
Web Performance Tests -
Web testing involves much more than GUI testing. Web Performance Tests are used for testing the functionality and performance of web pages, web applications, web sites, web services, and combinations of all of these. Web Performance Tests can be created by recording the HTTP requests and events during user interaction with the web application. The recording also captures the web page redirects, validations, view state information, authentication, and all the other activities. They can be classified in two ways: Simple Web Performance Tests and Coded Web Performance Tests.
Simple Web Performance Tests generate and execute the test exactly as recorded, as a series of valid flows of events. Once the test has started there is no intervention, and the execution is not conditional.
Coded Web Performance Tests are more complex but provide a lot of flexibility. These tests are used for conditional execution based on values. Coded web tests can be created manually or generated from a Web Performance Test recording.
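Visual Studio generates these coded web tests as .NET code, but the underlying idea (replay recorded HTTP requests, validate responses, extract values to drive conditional execution) can be sketched in plain Java. The URL and the validation/extraction rules below are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CodedWebTestSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Replay a recorded request directly at the HTTP level - no browser involved.
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(URI.create("https://example.com/login")).build(), // placeholder URL
                HttpResponse.BodyHandlers.ofString());

        // Validation rule: the response must contain an expected marker.
        if (!response.body().contains("csrf")) {
            throw new AssertionError("validation rule failed");
        }

        // Extraction rule feeding conditional execution, like a coded web test.
        String sessionCookie = response.headers().firstValue("set-cookie").orElse("");
        if (!sessionCookie.isEmpty()) {
            System.out.println("extracted: " + sessionCookie);
        }
    }
}
```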
Load Tests –
As described in the answer above. The conclusion is the same: functional and performance testing are two distinct types, with different strategies and processes.
Coded UI tests are new in 2010. They validate against the actual UI of the application (placement in the DOM, visibility, etc.), whereas the other does not. The Web Performance Test validates the HTTP/HTTPS traffic exchanged with the server.
This talks about functional UI testing and shows the use of the Coded UI test.
http://channel9.msdn.com/shows/10-4/10-4-Episode-18-Functional-UI-Testing/
Good news: from VS2012 onward, you can add a Coded UI test to a Load Test.
http://msdn.microsoft.com/en-us/library/ff468125.aspx