How are tests handled by Scrum using TFS 2012?

We are, for the first time, trying to implement Scrum at our company using TFS 2012. So far the process is not going very well, since we have questions that no one has been able to answer.
Our main concern is how to handle the test phase. Here is our scenario (in terms of people/jobs):
We have 6 programmers
We have a scrum master
We have 2 testers (who are not programmers)
This is what we have so far:
All requests go to the board
We hold the sprint planning meeting, where we add tasks to those requests
We prepare the sprint
People start to do their jobs
Our Definition of Done states that a story can only be considered done when it goes to the testers and one of them (in this case, me) confirms that the story is done. So far so good.
We have a Test Server where all tests are executed and that server is similar to the production server (web app).
As I said, our main concern is how to handle tests:
Since all developers can commit their code (using SVN), when should they commit? When a task is done or when a backlog item is done?
When should a test release be made?
When should the tests begin? Should we start testing after a task is done or after a backlog item is done? How can we get notified when we should begin testing?
Should we create a Deployment task and a Test Task on every backlog item?
Any help would be nice.

- Since all developers can commit their code (using SVN), when should they commit? When a task is done or when a backlog item is done?
Ans: I think you should commit as soon as the code is ready. If you have tasks created under a user story and a task covers a small piece of development, you can commit and close that task. Likewise, you have smaller tasks for the development of the user story; once all tasks are completed, the user story (backlog item) is completed.
To test these commits, you can have an automated test suite that runs against the CI environment, covering smoke testing and regression testing. Which suite to run can be decided based on time; for example, you can run the regression test suite weekly and the smoke test suite nightly.
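As an illustration only, a minimal sketch of such a split using pytest markers (the marker names, the toy function, and the nightly/weekly commands are invented for the example; nothing here is specific to TFS 2012 or the asker's project):

```python
# A minimal sketch of splitting an automated suite into smoke and regression
# runs with pytest markers. Everything below is illustrative.
import pytest


def add_to_cart(cart, item):
    """Toy stand-in for the application code under test."""
    return cart + [item]


@pytest.mark.smoke
def test_cart_accepts_an_item():
    # Fast check that the build is basically alive.
    assert add_to_cart([], "book") == ["book"]


@pytest.mark.regression
def test_cart_preserves_existing_items():
    # Slower, broader check that existing behaviour still holds.
    assert add_to_cart(["pen"], "book") == ["pen", "book"]


# Nightly CI job:  pytest -m smoke
# Weekly CI job:   pytest -m regression
```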
- When should a test release be made?
- When should the tests begin? Should we start testing after a task is done or after a backlog item is done? How can we get notified when we should begin testing?
Ans: There shouldn't be a strict deadline for releasing a test build. Testing can start while a user story is being developed. One way testers can help is to have a 'show me session' on the code the developer is working on and give some feedback, then start feature testing once the user story is resolved (you can have this state in TFS). But make sure to complete testing before the sprint ends.
If the testing debt is high, you can have a hardening sprint.
- Should we create a Deployment task and a Test task on every backlog item?
Ans: Yes, there should be two separate tasks for each user story; one task covers the feature testing (user story testing). This is important for estimation and also for getting the correct team velocity.

There is nothing specific to TFS in the way tests are handled in Scrum. In fact, there's nothing particular to Scrum for handling tests! Your team has agreed that a story (or backlog item) is Done only after it has been tested, and the only thing that Scrum "cares about" is that you meet your Definition of Done.
With regard to your specific questions:
You should commit your changes early, and often. The more you commit, the easier it is for you to communicate changes to the rest of the team, and in case of a disaster (you screw up the code, or your hard drive fails), the easier it is for you to recover. In my experience it is best to commit whenever your code is stable and safe enough to add to the repository. This should be done at least when you complete a task.
You should release early and often (see a recurring theme here?). This means releasing as soon as your product has a valuable increment that is stable and safe enough to expose to stakeholders outside your team. After your first release, you should be able to do so whenever you complete a backlog item - every minimally viable product increment.
You should test - you guessed it - early and often. This means testing as soon as the changes to your product are sufficient to test. For developer unit tests, this means pretty much every time you add a scenario (if you're doing test-driven development). For the "QA" tests, it should be done as early as possible - whenever a task is completed, if you can manage it. In fact, you may wish - if necessary - to rethink how you break backlog items into tasks, in order to increase the tasks' testability. If, for example, you break the tasks by component or code layer, it is impossible to test the code before the entire story is code-complete. On the other hand, if you break the backlog items by use case or scenario, you will be able to test whenever you complete a task - and the developers will get their feedback faster, immediately after they complete the tasks!
If you need to deploy every backlog item (and you do), the work involved should be accounted for. If every deployment is slightly different (one story might need to be published to the web, another might mean uploading an app to a mobile-app store), in a non-trivial and non-obvious way, then you should have a task for it, explaining how to deploy. If, however, every deployment task's title looks like "deploy" and the estimations are the same, then you should consider not wasting your time, and simply add a state to the backlog items' workflow (something like: To Do --> In Progress --> Testing --> Deploy --> Done)

Overall, Scrum is not going to give you the process to follow. It only provides a framework within which you can experiment with your process empirically to find what works best for your team.
Since all developers can commit their code (using SVN), when should they commit? When a task is done or when a backlog item is done?
Can you achieve an integrated, done increment without committing? If not, commit when it makes sense. Pick one and talk about the impact it had. What happens when you batch all of your changes until the backlog item is "done"? Can a backlog item be done without committing?
When should a test release be made?
When the Product Owner feels there is enough value to ship, or there is value in shipping. The increments you create in a sprint should be releasable.
Maybe you are asking whether you can ship mid-sprint. Sure! Consider it another empirical experiment to be executed within the Scrum framework. You can try this experiment and inspect the results empirically.
When should the tests begin? Should we start testing after a task is done or after a backlog item is done?
Again, pick one and use Scrum to inspect the impact. As a recommendation, don't wait too long. Try to get yourself into a position of building tests/test cases in parallel. Collaborate across those skills.
How can we get notified when we should begin testing?
You have 6 Development Team members. Ask the other person. Email, Skype, turn your head, stand up and walk over. Maybe you can use the Daily Scrum as a place to plan the day.
Should we create a Deployment task and a Test Task on every backlog item?
Again, some teams do. If this helps you a) understand the remaining work and b) execute your process then try it. Maybe "Deployable via xyz script" is a new Definition of Done item in the making.

Related

Is it possible to continue tests sequentially from prior test state with Foundry/Forge?

I'm wondering if there is a way within Forge to continue your tests sequentially, starting from the contract's end state of the previous test, without re-pasting the prior test's code as setup. Obviously I could just make one massive test; however, I would lose the gas data and such that I would get from individual tests. Thank you in advance to anyone who can help :)
No, this is not possible. Each test is run in isolation, with the exception that state from setUp is preserved.
You can make a bigger test and retain gas reporting information if you use forge test --gas-report instead of going by the gas reported by just forge test. The --gas-report flag uses transaction tracing to build a gas usage report of individual calls instead of the entire test.

How to strategize your test automation?

I would like to get input on how to run automated smoke tests based on what developers check in. Currently, when there is a commit by devs, a Jenkins job is triggered to build the code and the smoke tests run to test the app. But the smoke suite contains more than 50 tests. How would you design your automation so that when there is a check-in by devs, the automation only runs against the app features that could be affected by the new check-in? Here is our flow: a dev checks in to the git repo, a Jenkins job gets triggered through a web hook and builds the app; once the build is done, there is a downstream job to run the smoke tests. I would like to limit the smoke tests to only test the features that are affected by the new check-in.
You can determine which areas of your product might be affected, but you cannot be 100% sure. Don't rely on that. You don't want regressions from an unknown source; they are extremely hard to triage, and one of the best things about continuous integration is that each change (or small batch of changes) is tested separately, so you know at each moment what is wrong with your app without spending much time on investigation. 10 minutes for a set of 50 tests is actually very good. Why not think about running them in parallel instead of reducing the test suite, if the only problem with running the tests is the time consumed? I would prefer to speed up the test execution phase instead of shrinking the test suite.
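As a rough illustration of that parallel approach, here is a minimal sketch assuming pytest and a hypothetical tests/smoke/ layout (the file names are invented); pytest-xdist ("pytest -n auto") does the same job with far less code, this just makes the idea explicit for a Jenkins downstream job:

```python
# A minimal sketch of speeding up the existing smoke suite by running shards
# of it in parallel instead of shrinking it. All paths are illustrative.
import concurrent.futures
import subprocess
import sys

SHARDS = [
    "tests/smoke/test_login.py",     # hypothetical file names;
    "tests/smoke/test_checkout.py",  # substitute your own layout
    "tests/smoke/test_reports.py",
]


def run_shard(path):
    """Run one slice of the suite in its own process and return its exit code."""
    return subprocess.call([sys.executable, "-m", "pytest", path])


if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        exit_codes = list(pool.map(run_shard, SHARDS))
    # Fail the Jenkins stage if any shard failed.
    sys.exit(max(exit_codes))
```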

Why cleanup a DB after a test run?

I have several test suites that read and write data from a dedicated database when they are run. My strategy is to assume that the DB is in an unreliable state before a test is run and if I need certain records in certain tables or an empty table I do that setup before the test is run.
My attitude is not to clean up the DB at the end of each test suite, because each test suite should do its cleanup and setup before it runs. Also, if I'm trying to "visually" debug a test suite, it helps that the final state of the DB persists after the tests have completed.
Is there a compelling reason to cleanup a DB after your tests have run?
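For concreteness, here is a minimal sketch of the setup-before-each-test strategy described above, assuming pytest and sqlite3 (the question does not specify either; the table, rows, and file name are invented):

```python
# A minimal sketch of "assume the DB is dirty, set it up yourself, don't clean
# up afterwards". Everything here is illustrative, not the asker's schema.
import sqlite3

import pytest


@pytest.fixture
def orders_table():
    conn = sqlite3.connect("test_state.db")
    # Don't trust whatever a previous run left behind: rebuild what this test needs.
    conn.execute("DROP TABLE IF EXISTS orders")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
    conn.execute("INSERT INTO orders (status) VALUES ('new')")
    conn.commit()
    yield conn
    conn.close()  # the data itself is deliberately left in place for debugging


def test_new_orders_are_present(orders_table):
    count = orders_table.execute(
        "SELECT COUNT(*) FROM orders WHERE status = 'new'").fetchone()[0]
    assert count == 1
```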
Depends on your tests, what happens after your tests, and how many people are doing testing.
If you're just testing locally, then no, cleaning up after yourself isn't as important, so long as you're consistently employing this philosophy AND you have a process in place to make sure the database is in a known-good state before doing something other than testing.
If you're part of a team, then yes, leaving your test junk behind can screw up other people/processes, and you should clean up after yourself.
In addition to the previous answer, I'd also like to mention that this is more applicable when executing integration tests, since integrated modules work together and in conjunction with infrastructure such as message queues and databases, and each independent part must work correctly with the services it depends on.
Cleaning up the DB after a test run helps you to isolate test data. A best practice here is to use transactions for database-dependent tests (e.g., component tests) and roll back the transaction when done. Use a small subset of data to effectively test behavior. Consider it a database sandbox, using the Isolate Test Data pattern. E.g., each developer can use this lightweight DML to populate his local database sandbox to expedite test execution.
Another advantage is that you decouple your database: ensure that the application is backward and forward compatible with your database so you can deploy each independently. Patterns like Encapsulate Table with View, as well as NoSQL databases, ensure that you can deploy two application versions at once without either of them throwing database-related errors. This was particularly successful in a project where it was imperative to access the database using stored procedures.
All this is actually one of the concepts that is used in Virtual test labs.
In addition to the above answers, I'll add a few more points:
The DB shouldn't be cleaned after a test because that's where you have your test data, test results, and all the history that can be referred to later on.
The DB should be cleaned only if you changed some application setting to run your (or any) specific test, so that it doesn't impact other testers.

Can TeamCity tests be run asynchronously?

In our environment we have quite a few long-running functional tests which currently tie up build agents and force other builds to queue. Since these agents are only waiting on test results they could theoretically just be handing off the tests to other machines (test agents) and then run queued builds until the test results are available.
For CI builds (including unit tests) this should remain inline as we want instant feedback on failures, but it would be great to get a better balance between the time taken to run functional tests, the lead time of their results, and the throughput of our collective builds.
As far as I can tell, TeamCity does not natively support this scenario so I'm thinking there are a few options:
Spin up more agents and assign them to a 'Test' pool. Trigger functional build configs to run on these agents (triggered by successful CI builds). While this seems the cleanest, it doesn't scale very well, as we then have the lead time of purchasing licenses and will often need to run tests in alternate environments, which would temporarily double (or more) the required number of test agents.
Add builds or build steps to launch tests on external machines, then immediately mark the build as successful so queued builds can be processed; then, when the tests are complete, we mark the build as succeeded/failed. This is reliant on being able to update the results of a previous build (REST API perhaps?). It also feels ugly to mark something as successful then update it as failed later, but we could always be selective in what we monitor so we only see the final result.
Just keep spinning up agents until we no longer have builds queueing. The problem with this is that it's a moving target. If we knew where the plateau was (or whether it existed) this would be the way to go, but our usage pattern means this isn't viable.
Has anyone had success with a similar scenario, or knows pros/cons of any of the above I haven't thought of?
Your description of the available options seems to be pretty accurate.
If you want live update of the builds progress you will need to have one TeamCity agent "busy" for each running build.
The only downside here seems to be the agent licenses cost.
If the testing builds just launch processes on other machines, the TeamCity agent processes themselves can be run on a low-end machine and even many agents on a single computer.
An extension to your second scenario can be two build configurations instead of a single one: one would start the external process, and another one can be triggered on the external process's completion and then publish all the external process results as its own. It can also have a snapshot dependency on the starting build to maintain the relation.
For anyone curious we ended up buying more agents and assigning them to a test pool. Investigations proved that it isn't possible to update build results (I can definitely understand why this ugliness wouldn't be supported out of the box).

Should test data be used in production?

We are deploying an update to our main application in production. The update has been tested in QA and it looks good to go. Our client wants to do a test in production. For that case, we will run the application using "test data" in production and once the test has been finished, we will delete the "test data".
A couple of server admins are against this because "test data doesn't belong to production". I think it's OK since the QA server and the production server have different hardware and the databases house different applications (QA has more databases, production is dedicated). Besides that, are there other facts that I can use to back my opinion?
EDIT: adding context
The application is a tool that automates the reception and validation of data. We receive the files via email and this tool automatically validates them and imports them into the database. We have a BI system that creates reports using this information (Excel files are received by email, then validated, then reports/views come out, all of this automated).
The "test data" would be old files (good and bad files from previous efforts) that represent true data (actually it is true data but with problems or just too old).
Yes! But manual usage of test data in production does not sound like a good idea to me, as it cannot be controlled or monitored. My answer below assumes the test data is used for automated testing.
Test data in production is today's need. It was not a requirement back when automated testing was not a requirement (or did not exist), so in general this will be frowned upon. Security is the main reason; its impact in messing up site analytics is another. These are genuine and good reasons.
One cannot decide one day to simply put test data in production, especially towards the end of a project. This needs to be made a requirement from the time development starts, so the test data is in production from the very first deployment onwards, and its impact needs to be studied and documented. The organization as a whole needs to understand its benefit and impact.
Test data needs to be divided based on its type, need, or context, e.g. retrievable test data and editable test data. The first step would be to have retrievable (read-only, never changing) test data available. Perhaps this is the farthest we can go in many cases, and it would still provide good results. The creation of this read-only test data needs to be automated and preferably documented.
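As a hedged illustration only, the automated creation of such retrievable test data might be a small, idempotent seeding script run at deployment time (the table, flag column, and customer codes below are invented; sqlite3 stands in for whatever production database is actually used):

```python
# A hedged sketch of seeding read-only "retrievable" test records. Everything
# here is illustrative, not a description of the asker's system.
import sqlite3

READ_ONLY_TEST_CUSTOMERS = [
    ("TEST-0001", "Test Customer A"),
    ("TEST-0002", "Test Customer B"),
]


def seed_test_data(conn):
    """Idempotently insert the read-only test records, flagged as test data."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customers "
        "(code TEXT PRIMARY KEY, name TEXT, is_test_data INTEGER DEFAULT 0)")
    for code, name in READ_ONLY_TEST_CUSTOMERS:
        conn.execute(
            "INSERT OR IGNORE INTO customers (code, name, is_test_data) "
            "VALUES (?, ?, 1)", (code, name))
    conn.commit()


if __name__ == "__main__":
    seed_test_data(sqlite3.connect("app.db"))
```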
The benefits of having test data in production are huge. An automated test of an application is more precious than the application itself. If management realizes that, then at least the initial "frown" changes. I feel test data in production should be considered a requirement/user story, all problems with it should be mitigated, and new patterns of development need to evolve in this area.
This discussion is also related to integration testing, and this article focuses on the benefits of it over unit testing.
Your admins are right. Having test data in production will expose you to risks (security holes):
Test data in production can be used to damage your company (intentionally or unintentionally).
For example, if you have non-existing identities in production, you can make payments to them. If they are linked to real bank accounts, you lose money without the ability to detect it.
Test data can change your management reports. Fake actions can influence reports and affect the decisions made. This will be very hard to track and even harder to correct.
Test data can interact with production data. If someone makes a mistake and creates a wrong relation, production data can be changed based on test data.
There is no good way of detecting that you have test data, even if you mark it: any data could be marked as test data. And if you handle test data differently in your business layer, it would not be a real test of your production environment.
Nowadays it is a good practice to have a staging environment with the same infrastructure configuration as production, so you can execute pentests, load tests, and whatever else you need to ensure that production will behave as you expect.