Hi everyone.
I have a very basic question that I haven't been able to figure out yet:
How to run two branches at the same time in Jenkins?
I currently have two branches in my multi-branch pipeline, which uses a Jenkinsfile.
I need to run one branch for development and the other for production.
Is there any way to build both of them?
If not, please advise me on a good way.
Thanks.
Related
Currently, I am planning to build an application with each environment in a different plan. But that leaves Bamboo with a lot of plans, and it gets confusing. Is there a way to implement all the environments within one plan, so that before running the plan it shows a drop-down for which environment should run, and then that environment is executed? I have seen this option in Jenkins; is there a plugin that adds it to Bamboo?
I'm not 100% sure what you are doing, but it sounds like you are trying to create a Plan that can deploy your application, and you have multiple environments you want to deploy to.
Bamboo gives you several ways to do this, but none of them are quite perfect in my opinion. If I'm right about what you are doing, here are your options that I know of:
Create multiple Plans as you are doing. You can eliminate some of the busy work by creating your first plan (say "Deploy to DEV") and cloning it rather than manually re-typing everything for TEST.
Create a single plan that is designed to use a variable for the environment, and run it manually as a customized build to provide a variable value that designates the environment name.
Create a Deployment Project and Environments instead of a Plan. This ends up being similar to the first option in that you still have to clone the tasks that do the deployment for each environment, but it brings the added advantage that Bamboo is explicitly aware of each environment, of which release of your software is currently in each environment, and of what changes are inside that release. Basically, this treats environments and releases as first-class citizens in Bamboo rather than things that exist or happen behind the scenes of a regular build Plan.
It sounds like the middle option may be closest to what Jenkins allows.
The perfect solution in my mind would let you keep only a single copy of the deployment tasks that are common to all environments, with a Deployment Project Environment just substituting the environment-specific variables into that single build execution.
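As a rough sketch of that middle option (and of the "single copy" ideal), the deployment logic can be written once and parameterized by an environment name. The Java example below is purely illustrative: the DEPLOY_ENV variable, the host names, and the deploy steps are assumptions, and how the value reaches the process (Bamboo variable, script argument) is up to the plan configuration.

    import java.util.Map;

    // Minimal sketch: one deployment entry point shared by all environments.
    // The environment name is assumed to arrive as an environment variable
    // (e.g. populated from a plan/run variable); DEPLOY_ENV and the host
    // names below are hypothetical.
    public class Deploy {

        private static final Map<String, String> TARGET_HOSTS = Map.of(
                "DEV",  "dev.example.internal",
                "TEST", "test.example.internal",
                "PROD", "prod.example.internal");

        public static void main(String[] args) {
            String env = System.getenv().getOrDefault("DEPLOY_ENV", "DEV");
            String host = TARGET_HOSTS.get(env);
            if (host == null) {
                throw new IllegalArgumentException("Unknown environment: " + env);
            }
            // The steps themselves are identical for every environment;
            // only the substituted values differ.
            System.out.println("Deploying application to " + env + " at " + host);
            // ... copy artifacts, restart services, run smoke checks ...
        }
    }

Each environment (or each customized run) would then only need to set the variable and invoke this one entry point, rather than maintaining a separate copy of the deployment tasks.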
In my acceptance test suites specifically, I see a lot of tests designed to run in a particular order (top to bottom), which in some ways makes sense for testing a particular flow, but I've also heard this is bad practice. Can anyone shed some light on the advantages and drawbacks here?
In the majority of situations, if you rely on the order, something is wrong. It's better to fix this because:
Tests should be independent so that you can run them separately (you should be able to run just one test).
Test-running tools often don't guarantee the order. Even if the tests run in a particular sequence today, tomorrow you could add some configuration to the runner and the order would change.
It's hard to determine what's wrong from the test reports, since you see a lot of failures when really only one test failed.
Again, in the test report tools it won't be easy to track the steps of a scenario, because those steps are spread across different tests.
You won't be able to run them in parallel if you'd need to (hopefully you don't).
If you want to share logic, create reusable classes or methods (see the sketch below).
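For example, instead of one test depending on the data a previous test happened to leave behind, each test can build its own fixture through a shared helper method. A minimal JUnit 5 sketch (the cart domain here is made up purely for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    // Each test creates its own data through a shared helper, so the tests
    // can run alone, in any order, or in parallel.
    class CartTest {

        // Reusable setup logic, instead of relying on the side effects of
        // whichever test happened to run first.
        private List<String> newCartWithTwoItems() {
            List<String> cart = new ArrayList<>();
            cart.add("widget");
            cart.add("gadget");
            return cart;
        }

        @Test
        void cartStartsWithTwoItems() {
            assertEquals(2, newCartWithTwoItems().size());
        }

        @Test
        void removingAnItemLeavesOne() {
            List<String> cart = newCartWithTwoItems();
            cart.remove("widget");
            assertEquals(1, cart.size());
        }
    }
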
PS: I'd call these System Tests, not Acceptance Tests; you can write acceptance tests at the unit or component level too.
I'm working on an app that integrates with a 3rd party web service. I currently have separate integration / regression tests that call the web service to do the following:
Modify Policy - Add Vehicle
Modify Policy - Remove Vehicle
Modify Policy - Add Multiple Vehicles
Modify Policy - Add Insured
...
Most of these tests were created as bugs were found & fixed. The 3rd party web service is slooow and I'm trying to speed the testing process up. Because each test calls the web service, combining them into one test that only calls the web service once would make things much faster.
Would combining these tests be bad practice because each test was written for a specific bug? My concern is that a mistake in refactoring could potentially allow a bug to be re-introduced later on.
Yes, combining them would be a bad practice. Think instead about how to mitigate the risk without combining the tests. One approach - probably your best bet - would be to mock out the web service, so that the tests are much faster without jeopardizing their ability to detect a regression. Another would be to split your slow regression tests into their own suite that is run less frequently (but still frequently enough!) than your usual set of tests. Finally, you could combine them - but I would recommend explicitly reintroducing all the original bugs into your code to verify that the combined test still detects them.
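As a sketch of the mocking suggestion: put the third-party calls behind a small interface and substitute a fast in-memory fake in the tests. Everything below (PolicyService, FakePolicyService, the method names) is hypothetical and only illustrates the shape of the approach:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    // Hypothetical seam around the slow third-party web service.
    interface PolicyService {
        void addVehicle(String policyId, String vin);
        List<String> vehiclesOn(String policyId);
    }

    // Fast in-memory fake used only in tests; no network calls involved.
    class FakePolicyService implements PolicyService {
        private final List<String> vehicles = new ArrayList<>();

        @Override
        public void addVehicle(String policyId, String vin) {
            vehicles.add(vin);
        }

        @Override
        public List<String> vehiclesOn(String policyId) {
            return vehicles;
        }
    }

    class ModifyPolicyTest {

        @Test
        void addVehicleShowsUpOnPolicy() {
            PolicyService service = new FakePolicyService();
            service.addVehicle("POL-1", "VIN123");
            assertEquals(List.of("VIN123"), service.vehiclesOn("POL-1"));
        }
    }

The real implementation of the interface would wrap the actual web service, and the original slow tests can still run against it in a separate, less frequent suite.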
Specific, pointed, direct, unit tests are very valuable; it's nice to know exactly what has broken. Combining tests compromises that value.
I wouldn't recommend combining them, unless you keep the ability to run them separately (maybe keep them separate in your overnight build, and combined in your continuous build).
Try parallelizing them (on separate 'policies'), if your test framework supports it.
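If the framework is, say, JUnit 5, parallel execution can be opted into per class, with each test working on its own policy so nothing is shared. A hedged sketch (this also requires the junit.jupiter.execution.parallel.enabled=true configuration parameter, e.g. in junit-platform.properties; the test bodies are placeholders):

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.parallel.Execution;
    import org.junit.jupiter.api.parallel.ExecutionMode;

    // Sketch of parallel-friendly tests: each test uses its own policy,
    // so nothing is shared and they can safely run concurrently.
    @Execution(ExecutionMode.CONCURRENT)
    class ModifyPolicyParallelTest {

        @Test
        void addVehicleToPolicyA() {
            // hypothetical: exercise policy "A" end to end
            assertTrue(true);
        }

        @Test
        void removeVehicleFromPolicyB() {
            // hypothetical: exercise policy "B" end to end
            assertTrue(true);
        }
    }
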
I would suggest including them in your nightly build, so that they run once a day while you are asleep and not watching the clock, and only removing them from your developer-time tests.
Of course that assumes they are not soooo sloooow that one night is not enough.
Just combining your tests into one big test is likely to make them useless or worse. That's not much better than just deleting them.
I am the QA Test Lead for a large enterprise software company with a team of over 30 developers and a small team of QA testers. We currently use SVN for all our code and schema check-ins, which are then built out each night after hours.
My dilemma is this: all of development's code is promoted from their machines to a single branch in the central repository on a daily basis. This branch is our production code for the next software release. Each day when code is checked in, the stable branch is destabilized by that new piece of code until QA can get to testing it. It can sometimes take weeks for QA to get to a specific piece of code. The worst part of all of this is that we identify months ahead of time what code is going to go into the standard release and what code will be bumped to the next branch, which has us coding all the way up until almost the actual release date.
I'm really starting to see the effects of this process (put in place by my predecessors), and I'm trying to come up with a way, one that won't piss off development, whereby they can promote code to a QA environment without holding up another developer's piece of code. A lot of our code has shared libraries, and as I mentioned before, it can sometimes take QA a while to get to a piece of code to test. I don't want to hold up development in a certain area while that piece of code is waiting to be tested.
My question now is: what is the best methodology to adopt here? Is there software out there that can help with this? All I really want to do is ensure QA has enough time to test a release without any new code going in until it's tested. I don't want to end up on the street looking for a new job because "QA is doing a crappy job" according to a lot of people in the organization.
Any suggestions are greatly appreciated and will help with our testing and product.
It's a broad question that takes a broad answer, and I'm not sure I know all it takes (I've been working as a dev lead and architect, not as a test manager). I see several problems in the process you describe, each requiring its own solution:
Test team working on intermediate versions
This should be handled by working with the dev team on splitting their work into meaningful iterations (called sprints in agile methodology) and delivering a working version every few weeks. Moreover, it should be established that features are implemented by priority. This has the benefit of keeping the "test gap" fixed: you always test the latest version, which is a few weeks old, and the devs understand that any problem you find there is more important than new features for the next version.
Test team working on unstable versions
There is absolutely no reason why the test team should invest time in versions that are "dead on arrival". Continuous Integration is a methodology by which "breaking the code" is found as soon as possible. This requires some investment in products like Hudson, or in a home-grown solution, to make sure build failures are noticed as they occur and some "smoke testing" is applied to each build.
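A smoke test in this sense can be as small as one automated check that the freshly built and deployed application answers at all, so obviously broken builds are rejected before the full test cycle starts. A hedged JUnit sketch (the health URL and the use of Java's HttpClient are assumptions):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import org.junit.jupiter.api.Test;

    // Minimal smoke test run right after each CI build: if the deployed
    // build cannot even answer a health request, fail fast and skip the
    // rest of the (expensive) test cycle. The URL is hypothetical.
    class SmokeTest {

        @Test
        void applicationAnswersHealthCheck() throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://qa-server.example.internal/health")).build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            assertEquals(200, response.statusCode());
        }
    }
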
Your test cycle is long
Invest in automated testing. This is not to say your testers need to learn to program; rather, you should invest in recruiting or growing people with the knowledge of and passion for writing stable automated tests.
You choose "coding all the way up until almost the actual release date"
That's right; it's a choice made by you and your management, favoring more features over stability and quality. It's a fine choice for some companies that need to get to market ASAP or keep a key customer satisfied, but it's a poor long-term investment. Once you convince your management that it is a choice, you can stop making it when it's not really needed.
Again, it's my two cents.
You need a continuous integration server that can automate the build, testing, and deployment. I would look at a combination of Hudson, JUnit (DbUnit), Selenium, and code quality tools like Sonar.
To ensure that the code QA is testing is fixed and not constantly changing, you should make use of tags. A tag is like a branch except that its contents are treated as immutable: once a set of files has been checked in / committed, you do not change and commit on top of those files. This way QA has a stable version of the code to work with.
Using SVN without branching seems like a wasted resource. They should set up a stable branch and a test branch (i.e. the daily build). When code is tested in the daily build, it can then be pushed up to the development release branch.
As Albert mentioned, depending on what your code is, you might also look into some automated tests for the shared libraries (which, depending on where you are in development, really shouldn't be changing all that much, or your dev team is doing a crappy job of organization, IMHO).
You might also talk with your dev team leads (or whoever manages them) and discuss where they see QA and what QA can do to help them best. Ask: Does the dev team have a set cut-off time before releases? Do you test every single line of code? Are there places where you might be spending too much time on detailed testing? It shouldn't all fall on QA; QA and dev need to work together to get the product out.
I have one query. Maybe it is a silly question, but I still need an answer to clear my doubts.
Testing is evaluating the product or application. We do testing to check whether there are any show-stoppers, any issues that should not be present.
We automate (I am talking about scripting) test cases from the existing manual test cases. Once a test case is automated, how many cycles do we need to run the script to confirm that it runs without major errors, and is therefore reliable enough to run instead of executing the test cases manually?
Thanks in advance.
If the test script always fails when a test fails, you need to run the script only once. Running the script several times without changing the code will not give you additional safety.
You may discover that your tests depend on some external source that changes during the tests and thereby makes the tests fail sometimes. Running the tests several times will not solve this issue either. To solve it, you must make sure that the test setup really initializes all external factors in such a way that the tests always succeed. If you can't achieve this, you can't test reliably, so there is no way around this.
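A common example of such an external factor is the current time. A hedged sketch of pinning it down in the test setup (the ReportNamer class is made up for illustration; java.time.Clock is standard):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.time.Clock;
    import java.time.Instant;
    import java.time.LocalDate;
    import java.time.ZoneOffset;
    import org.junit.jupiter.api.Test;

    // Hypothetical class whose behavior depends on an external factor (time).
    // Injecting the Clock lets the test control that factor completely.
    class ReportNamer {
        private final Clock clock;

        ReportNamer(Clock clock) {
            this.clock = clock;
        }

        String todaysReportName() {
            return "report-" + LocalDate.now(clock);
        }
    }

    class ReportNamerTest {

        @Test
        void reportNameUsesCurrentDate() {
            // Fixed clock: the result no longer depends on when the test runs.
            Clock fixed = Clock.fixed(Instant.parse("2020-01-15T00:00:00Z"), ZoneOffset.UTC);
            assertEquals("report-2020-01-15", new ReportNamer(fixed).todaysReportName());
        }
    }
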
That said, tests can never ensure that your product is 100% correct or safe. They just make sure that your product is still as good as (or better than) it was before all the changes you made since the last test. It's kind of like having a watermark which tells you the least amount of quality that you can depend on. Anything above the watermark is speculation, but everything below it (the part that your tests cover) is safe.
So by refining your tests, you can make your product better with every change. Without the automated tests, every change has a chance of making your product worse. This means that without tests your quality will almost certainly deteriorate, while with tests you can guarantee that a certain level of quality is maintained.
It's a huge field with no simple answer.
It depends on several factors, including:
The code coverage of your tests
How you define reliable