Can TeamCity tests be run asynchronously?

In our environment we have quite a few long-running functional tests which currently tie up build agents and force other builds to queue. Since these agents are only waiting on test results they could theoretically just be handing off the tests to other machines (test agents) and then run queued builds until the test results are available.
For CI builds (including unit tests) this should remain inline as we want instant feedback on failures, but it would be great to get a better balance between the time taken to run functional tests, the lead time of their results, and the throughput of our collective builds.
As far as I can tell, TeamCity does not natively support this scenario so I'm thinking there are a few options:
Spin up more agents and assign them to a 'Test' pool. Trigger functional build configs to run on these agents (triggered by successful CI builds). While this seems the cleanest, it doesn't scale very well: we would then have the lead time of purchasing licenses, and we will often need to run tests in alternate environments, which would temporarily double (or more) the required number of test agents.
Add builds or build steps to launch tests on external machines, then immediately mark the build as successful so queued builds can be processed; then, when the tests are complete, update the build as succeeded/failed. This relies on being able to update the results of a previous build (REST API perhaps?). It also feels ugly to mark something as successful and update it as failed later, but we could always be selective in what we monitor so we only see the final result.
Just keep spinning up agents until we no longer have builds queueing. The problem with this is that it's a moving target. If we knew where the plateau was (or whether it existed) this would be the way to go, but our usage pattern means this isn't viable.
Has anyone had success with a similar scenario, or does anyone know pros/cons of the above that I haven't thought of?

Your description of the available options seems to be pretty accurate.
If you want live updates of a build's progress, you will need to have one TeamCity agent "busy" for each running build.
The only downside here seems to be the cost of agent licenses.
If the testing builds just launch processes on other machines, the TeamCity agent processes themselves can run on a low-end machine, and you can even run many agents on a single computer.
An extension to your second scenario could be two build configurations instead of a single one: one would start the external process, and the other would be triggered on the external process's completion and then publish all of the external process's results as its own. It can also have a snapshot dependency on the starting build to maintain the relationship.
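As a rough sketch of what the second ("publishing") build configuration could run: the script below waits for the external test run to expose a JUnit report and then hands it to TeamCity via the importData service message. The results URL, file name, and polling interval are assumptions about your setup; only the ##teamcity[importData ...] message itself is an actual TeamCity feature.

    import time
    import urllib.request

    RESULTS_URL = "http://test-host.example/results/junit.xml"  # hypothetical location

    # Poll until the external test run publishes its JUnit report (give up after ~1 hour).
    for _ in range(120):
        try:
            with urllib.request.urlopen(RESULTS_URL) as resp:
                with open("junit-results.xml", "wb") as out:
                    out.write(resp.read())
            break
        except OSError:
            time.sleep(30)
    else:
        raise SystemExit("External test run never produced results")

    # Ask TeamCity to parse the report as this build's own test results.
    print("##teamcity[importData type='junit' path='junit-results.xml']")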

For anyone curious, we ended up buying more agents and assigning them to a test pool. Investigation showed that it isn't possible to update the results of a completed build (I can definitely understand why this ugliness wouldn't be supported out of the box).

Related

How to strategize your test automation?

I would like to get input on how to run automated smoke tests based on what developers check in. Currently, when there is a commit by devs, the Jenkins job gets triggered to build the code and the smoke tests run to test the app. But the smoke suite contains more than 50 tests. How would you design your automation so that when there is a check-in by devs, the automation only runs against the app features that could be affected by the new check-in? Here is our flow: a dev checks in to the git repo, a Jenkins job gets triggered through a webhook and builds the app, and once the build is done there is a downstream job to run the smoke tests. I would like to limit the smoke tests to only test the features that are affected by the new check-in.
You can determine which areas of your product might be affected, but you cannot be 100% sure. Don't rely on that. You don't want to have regressions with an unknown source; they are extremely hard to triage, and one of the best things about continuous integration is that each change, or small batch of changes, is tested separately, so you know at each moment what is wrong with your app without spending much time on investigation. 10 minutes for a set of 50 tests is actually very good. Why not think about making them parallel instead of reducing the test suite, if the only problem with running the tests is the time they consume? I would prefer to speed up the test execution phase instead of shrinking the test suite.
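As a minimal sketch of the parallel approach, assuming the smoke tests are pytest modules under a hypothetical smoke_tests/ directory, the downstream Jenkins job could shard them across worker processes (tools like pytest-xdist achieve the same thing with less code):

    import glob
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def run_shard(test_file):
        # Each shard runs one test module in its own process; swap in your real runner.
        return subprocess.run(["pytest", test_file]).returncode

    test_files = glob.glob("smoke_tests/test_*.py")  # hypothetical layout
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(run_shard, test_files))

    print(f"{results.count(0)}/{len(results)} shards passed")
    raise SystemExit(0 if all(rc == 0 for rc in results) else 1)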

Can spinnaker prevent out-of-order deployments?

Currently
We use a CI platform to build, test, and release new code when a new PR is merged into master. The "release" step is quite simple/stupid, and essentially runs kubectl patch with the tag of the newly-pushed docker image.
The Problem
When two PRs merge at about the same time (ex: A, then B -- B includes A's commits, but not vice versa), it may happen that B finishes its build/test first and begins its release step first. When this happens, A releases second, even though it has older code. The result is a steady state in which B's code has been effectively rolled back by A's deployment.
We want to keep our CI/CD as continuous as possible, ideally without:
serializing our CI pipeline (so that only one workflow runs at a time)
delaying/batching our deployments
Does Spinnaker have functionality or best-practice that solves for this?
Best practices for your issue are widely described under message ordering for asynchronous systems. The simplest solution would be to implement a FIFO principle for your CI/CD pipeline.
It will save you from implementing checks between the CI and CD parts.
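If you would rather keep the pipelines fully concurrent, a complementary option (not Spinnaker-specific, and only a sketch under assumptions about your setup) is an ordering guard in the release step itself: refuse to deploy a commit that is already an ancestor of what is running. The deployment name, annotation name, and argument handling below are hypothetical.

    import subprocess
    import sys

    def is_ancestor(maybe_ancestor, commit):
        # git merge-base --is-ancestor exits 0 when the first commit is an ancestor of the second.
        return subprocess.run(
            ["git", "merge-base", "--is-ancestor", maybe_ancestor, commit]
        ).returncode == 0

    def deployed_sha():
        # Hypothetical: the currently deployed SHA is kept as a deployment annotation.
        return subprocess.run(
            ["kubectl", "get", "deployment", "myapp",
             "-o", "jsonpath={.metadata.annotations.deployed-sha}"],
            capture_output=True, text=True
        ).stdout.strip()

    candidate = sys.argv[1]       # the SHA this pipeline wants to release
    current = deployed_sha()

    if current and is_ancestor(candidate, current):
        print(f"Skipping release: {candidate} is older than deployed {current}")
        sys.exit(0)

    print(f"Releasing {candidate}")  # the existing kubectl patch step would run here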

Unable to execute parallel run on Atlassian Bamboo?

I am trying to execute multiple runs in parallel, but the jobs configured in one plan are being executed sequentially. I have configured the plan for parallel runs according to https://confluence.atlassian.com/bamboo/configuring-concurrent-builds-289277193.html, but it has not worked.
If I am doing anything wrong, please point out the solution.
Thanks.
More details are needed to effectively answer your question (number of agents, how many are capable of running the tasks, etc.), but you can follow this advice:
Make sure that you have different agents available to execute each job and that the ones available have the necessary capabilities to run those jobs. For example, if you have two jobs that you expect to run in parallel, but only 1 agent is capable of building them, then they're going to run sequentially.

Why cleanup a DB after a test run?

I have several test suites that read and write data from a dedicated database when they are run. My strategy is to assume that the DB is in an unreliable state before a test is run, and if I need certain records in certain tables, or an empty table, I do that setup before the test runs.
My attitude is to not clean up the DB at the end of each test suite, because each test suite should do its own cleanup and setup before it runs. Also, if I'm trying to "visually" debug a test suite, it helps that the final state of the DB persists after the tests have completed.
Is there a compelling reason to cleanup a DB after your tests have run?
It depends on your tests, what happens after your tests, and how many people are doing testing.
If you're just testing locally, then no, cleaning up after yourself isn't as important, so long as you're consistently employing this philosophy AND you have a process in place to make sure the database is in a known-good state before doing something other than testing.
If you're part of a team, then yes, leaving your test junk behind can screw up other people/processes, and you should clean up after yourself.
In addition to the previous answer, I'd also like to mention that this is more applicable when executing integration tests, since integrated modules work together and in conjunction with infrastructure such as message queues and databases, and each independent part must work correctly with the services it depends on.
Cleaning up a DB after a test run helps you to isolate test data. A best practice here is to use transactions for database-dependent tests (e.g., component tests) and roll back the transaction when done. Use a small subset of data to effectively test behavior. Consider it a database sandbox, using the Isolate Test Data pattern. For example, each developer can use a lightweight DML script to populate their local database sandbox and expedite test execution.
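A minimal sketch of the transaction-and-rollback approach, assuming a DB-API connection (SQLite here for self-containment) and pytest; the table and file names are made up for illustration:

    import sqlite3
    import pytest

    @pytest.fixture
    def db():
        # Hypothetical dedicated test database; any DB-API connection works similarly.
        conn = sqlite3.connect("test.db", isolation_level=None)
        conn.execute("BEGIN")             # open an explicit transaction for the test
        try:
            yield conn                    # the test reads/writes inside this transaction
        finally:
            conn.execute("ROLLBACK")      # discard everything the test changed
            conn.close()

    def test_insert_is_isolated(db):
        db.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")
        db.execute("INSERT INTO users VALUES ('alice')")
        assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] >= 1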
Another advantage is that you decouple your database: make sure the application is backward and forward compatible with your database so you can deploy each independently. Patterns like Encapsulate Table with View, and NoSQL databases, ensure that you can deploy two application versions at once without either of them throwing database-related errors. This was particularly successful in a project where it was imperative to access the database using stored procedures.
All of this is actually one of the concepts used in virtual test labs.
In addition to the above answers, I'll add a few more points:
The DB shouldn't be cleaned after a test, because that's where you have your test data, test results, and all the history, which can be referred to later on.
The DB should be cleaned only if you are changing some application setting to run your (or any other) specific test, so that it doesn't impact other testers.

How are tests handled by Scrum using TFS 2012?

We are, for the first time, trying to implement Scrum at our company using TFS 2012. So far the process is not going very well, since we have questions that no one has been able to answer.
Our main concern is how to handle the test phase. Here is our scenario (in terms of people/jobs):
We have 6 programmers
We have a scrum master
We have 2 testers (that are not programmers)
This is what we have so far:
All desires go to the board
We have the sprint meeting where we add tasks to those desires
We prepare the sprint
People start to do their jobs
Our definition of Done clarifies that a story can only be considered done when the story goes to the test guys and one of them (in this case, me) says that the story is done. So far so good.
We have a Test Server where all tests are executed and that server is similar to the production server (web app).
As I said, our main concern is how to handle tests:
Since all developers can commit their code (using SVN), when should they commit? When a task is done or when a backlog item is done?
When should a test release be released?
When should the tests begin? Should we start testing after a task is done or after a backlog item is done? How can we get notified when we should begin testing?
Should we create a Deployment task and a Test Task on every backlog item?
Any help would be nice.
- Since all developers can commit their code (using SVN), when should they commit? When a task is done or when a backlog item is done?
Ans: I think you should commit as soon as the code is ready. If you have tasks created under a user story and a task covers some small piece of development, you can commit and close that task. Likewise, you have smaller tasks for the development of the user story; once all tasks are completed, the user story (backlog item) is completed.
To test these commits, what you can do is have an automated test suite which runs against the CI environment, so you cover both smoke testing and regression testing. Which type of test suite to run can be determined based on time. For example, you can run the regression test suite weekly and the smoke test suite nightly.
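As a toy sketch of picking the suite by schedule, assuming hypothetical smoke/ and regression/ test directories and a nightly CI job:

    import datetime
    import subprocess

    # Run the full regression suite on Saturdays, the smoke suite every other night.
    suite = "regression" if datetime.date.today().weekday() == 5 else "smoke"
    raise SystemExit(subprocess.run(["pytest", suite]).returncode)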
- When should a test release be released?
- When should the tests begin? Should we start testing after a task is done or after a backlog item is done? How can we get notified when we should begin testing?
Ans: There shouldn't be a strict deadline for releasing a test build. Testing can be started while a user story is being developed. One way a tester can help is to have a 'show me' session on the code the developer is working on and give some feedback, and then to start feature testing once the user story is resolved (you can have this state in TFS). But make sure to complete testing before the sprint ends.
If the testing debt is high, you can have a hardening sprint.
- Should we create a Deployment task and a Test task on every backlog item?
Ans: Yes, there should be two different tasks for each user story; one task covers the feature testing (user story testing). This is important for making estimations and also for getting a correct team velocity.
There is nothing specific to TFS in the way tests are handled in Scrum. In fact, there's nothing particular to Scrum for handling tests! Your team has agreed that a story (or backlog item) is Done only after it has been tested, and the only thing that Scrum "cares about" is that you meet your Definition of Done.
With regard to your specific questions:
You should commit your changes early, and often. The more you commit, the easier it is for you to communicate changes to the rest of the team, and in case of a disaster (you screw up the code, or your hard drive fails), the easier it is for you to recover. In my experience it is best to commit whenever your code is stable and safe enough to add to the repository. This should be done at least whenever you complete a task.
You should release early and often (see a recurring theme here?). This means releasing as soon as your product has a valuable increment and it is stable and safe enough to expose to stakeholders outside your team. After your first release, you should be able to do so whenever you complete a backlog item - every Minimum Viable Product increment.
You should test - you guessed - early and often. This means, as soon as the changes to your product are sufficient to test. For developer unit tests, this means pretty much every time you add a scenario (if you're doing test driven development). For the "QA" tests, it should be done as early as possible - whenever a task is completed, if you can manage it. In fact, you may wish - if necessary - to rethink how you break backlog items into tasks, in order to increase the tasks' testability. If, for example, you break the tasks by the component or code-layer, it is impossible to test the code before the entire story is code-complete. On the other hand, if you break the backlog items by use case or scenario, you will be able to test whenever you complete a task - and the developers will get their feedback faster - immediately after they complete the tasks!
If you need to deploy every backlog item (and you do), the work involved should be accounted for. If every deployment is slightly different (one story might need to be published to the web, another might mean uploading an app to a mobile-app store), in a non-trivial and non-obvious way, then you should have a task for it, explaining how to deploy. If, however, every deployment task's title looks like "deploy" and the estimations are the same, then you should consider not wasting your time, and simply add a state to the backlog items' workflow (something like: To Do --> In Progress --> Testing --> Deploy --> Done)
Overall, Scrum is not going to give you the process to follow. It only provides a framework within which you can experiment with your process empirically to find what works best for your team.
Since all developers can commit their code (using SVN), when should they commit? When a task is done or when a backlog item is done?
Can you achieve an integrated, done increment without committing? If not, commit when it makes sense. Pick one and talk about the impact it had. What happens when you batch all of your changes until the backlog item is "done"? Can a backlog item be done without committing?
When should a test release be released?
When the Product Owner feels they have enough value to ship, or there is value in shipping. The increments you create in a sprint should be releasable.
Maybe you are asking whether you can ship mid-sprint. Sure! Consider it another empirical experiment to be executed within the Scrum framework. You can try this experiment and inspect the results empirically.
When should the tests begin? Should we start testing after a task is done or after a backlog item is done?
Again, pick one and use Scrum to inspect the impact. As a recommendation, don't wait too long. Try to get yourself into a position of building tests/test cases in parallel. Collaborate across those skills.
How can we get notified when we should begin testing?
You have 6 Development Team members. Ask the other person. Email, Skype, turn your head, stand up and walk over. Maybe you can use the Daily Scrum as a place to plan the day.
Should we create a Deployment task and a Test Task on every backlog item?
Again, some teams do. If this helps you a) understand the remaining work and b) execute your process, then try it. Maybe "Deployable via xyz script" is a new Definition of Done item in the making.