Currently I am planning to build an application with each environment in a different plan. But then Bamboo ends up with a lot of plans and it gets confusing. Is there a way to implement all the environments within one plan, so that before running the plan it shows a drop-down for which environment to run, and then executes that environment? I have seen this option in Jenkins; is there a plugin for Bamboo that implements this?
I'm not 100% sure what you are doing, but it sounds like you are trying to create a Plan that can deploy your application, and you have multiple environments you want to deploy to.
Bamboo gives you several ways to do this, but none of them are quite perfect in my opinion. If I'm right about what you are doing, here are your options that I know of:
Create multiple Plans as you are doing. You can eliminate some of the busy work by creating your first plan (say "Deploy to DEV") and cloning it rather than manually re-typing everything for TEST.
Create a single plan that is designed to use a variable for the environment, and run it manually as a customized build to provide a variable value that designates the environment name.
Create a Deployment Project and Environments instead of a Plan. This ends up being similar to the first option in that you still have to clone the tasks that do the deployment to each environment, but it brings the added advantage that Bamboo is explicitly aware of each environment, which release of your software is currently in each environment, and what changes are inside that release. Basically this treats environments and releases as first-class citizens in Bamboo, rather than things that exist or happen behind the scenes of a regular build Plan.
It sounds like the middle option may be closest to what Jenkins allows.
The perfect solution in my mind would let you keep only a single copy of the deployment tasks that are common to all environments, with a Deployment Project Environment just substituting the environment-specific variables into that single build execution.
Background
1 x Dev SQL Server
1 x UAT SQL Server
1 x Prod SQL Server
Developers use SSMS to view SQL Server objects and code and make changes directly to these objects in SQL Server itself.
Challenge
We have multiple developers potentially making changes to the same database object (let's say a stored procedure or a view). The challenge arises when different bits of work happen on the same object and the delivery timescales for each bit of work are different. This means we end up with someone having completed their changes on the dev object, but releasing those changes into the next environment along may fail because the view (for example) may contain another developer's changes too, and those changes may themselves require other objects. The business may not be expecting that other developer's work to be released anyway, as there may be days/weeks of effort still to put into it before release. But that doesn't help the developer who's ready to go into the next environment.
How do we get round that?
How should each developer have started off, before they started making changes, to avoid dependency issues when releasing?
How can a developer "jump the queue" and release their bits of work without scuppering anyone else who is just starting off their own change?
This is not a perfect answer, nor is it the only potential answer, but it's a good start. It's based on my experience within a relatively small shop, where tasks are re-prioritised frequently and changes are often required after testing.
Firstly - it's about process. You need to make sure you have a decent process and people follow it. Software etc can help, but it won't stop people making process errors. There are a lot of products out there to help with this, but I find making small steps is often a good start.
In our shop, we use Git source control for managing code and releases. The repository scripts out the entire database structure, views, etc., and those scripts are used to manage any changes to the database.
In general, we have a 'release' branch, then 'feature' branches for updates we're working on, and 'hotfix' branches for when we do changes to live on the fly (e.g., fixes etc).
When working on a specific branch, you check out that branch and work on it. Any change to the database has to go into an appropriate branch.
When ready to go live, you merge the feature/hotfix branches into that release branch when they're released. This way the 'release' branch always exactly matches what is on the production database.
For software, we use Redgate Source Control integrated with SSMS, but there are definitely others available (e.g., ApexSQL Source Control). You can also do it manually, but I wouldn't suggest it.
You don't have to, but you can also use a git GUI (e.g., SourceTree) to manage your branching and merging etc.
There are additional software products that can help to manage releases/etc (including scripting etc) but the source control aspect should be the biggest help with the main issue (being able to work on different things and helping ensure no clashes).
Regarding Git and how to use it (or SVN etc) - if you haven't used them before, they're a bit weird and take some getting used to. We had a few re-starts with a few different processes before we came up with an approach we liked. It will also take some time to run into the different issues that can arise - so you cannot expect this to just fix it out of the box.
1 source control
Any source control system (Git/TFS) to manage your code and control changes.
2 branching/release strategy
Git Flow! For example: a main branch with the current working source code (main, develop, whatever you call it); each developer works on their own feature branch; when their work is done they test it by deploying to the DEV environment and running tests. After that it can be merged into a release branch that will go live on PROD.
Also you need to consider merge vs rebase strategy (some link).
3 and some SCRUM
The most basic version: two-week sprints. At the end of a sprint you create a new release branch and deploy it to UAT for testing. During the next sprint the release is tested on UAT while developers work on tasks from the new sprint. Then the tested release is deployed to PROD, developers move on to their third sprint, and UAT is ready for the next release to be deployed. And so on.
4 more than one DEV environment
Depending on the number of developers, you may need more DEV environments.
Due to some constraints on our production code, we have some .NET services that need to be run with their own config file. We've been using app-domains to provide arbitrary config files to these services at test run time.
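For reference, the app-domain trick looks roughly like this (a simplified, .NET Framework-only sketch; the harness type and names are made up for illustration):

```csharp
using System;
using System.Configuration;

// Illustrative only: a proxy object that lives inside the second AppDomain,
// so anything it does sees the service's own config file.
public class ServiceHarness : MarshalByRefObject
{
    public string ReadSetting(string key) =>
        ConfigurationManager.AppSettings[key];
}

public static class ServiceHost
{
    // Spin up an AppDomain whose ConfigurationFile points at the service's config.
    public static ServiceHarness CreateInDomain(string configPath, out AppDomain domain)
    {
        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
            ConfigurationFile = configPath
        };
        domain = AppDomain.CreateDomain("ServiceUnderTest", null, setup);

        // The returned object is a transparent proxy; every call is marshalled
        // across the app-domain boundary.
        return (ServiceHarness)domain.CreateInstanceAndUnwrap(
            typeof(ServiceHarness).Assembly.FullName,
            typeof(ServiceHarness).FullName);
    }
}
```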
The problem comes when we try to use SpecFlow for these tests: since each step is called separately, from an overall runner class that we don't have direct access to, pushing test data across app-domain boundaries for every single STEP is pretty messy and results in everything ending up in all sorts of odd lambdas. On top of that, serializability needs to be considered, when most of the time we shouldn't need to care about that in a test code context (internal data objects, that sort of thing).
Does anyone have a method by which SpecFlow can be convinced to run all of its steps in a provided app-domain, or just generally play nicer with the app-domain concept?
Would it be possible to write a plugin / test generator that did this, and if so would this be very technically complicated? I had a look at that sort of extensibility but couldn't find the right place to start to do this, so I may have missed it.
(I'm aware that "Refactor your service so you don't need arbitrary config files" would also solve the underlying problem, but for the purposes of this question please assume I can't do that - I'm interested in whether SpecFlow can be configured to solve this, whether on its own or by extending it.)
Edit: After some more investigation I think this -should- be possible by using a custom unit test generator plugin? The problem I then have is there's basically zero documentation on that, and not many examples around on the internet. If you can give me a good example that I can look at to adapt that would go a long way...
I'm currently starting a new project where we are hoping to develop a new system using reusable components and services.
We currently have 30+ systems that all have common elements, but at the moment we develop each system in isolation so it feels like we are often duplicating code and then of course we have 30+ separate code bases to maintain and support.
What we would like to do is create a generic platform using shared components to enable quick development of new collections, reusing code and reusing automated tests and reduce the code base that needs to be maintained.
Our thoughts so far are that we would have a common code base for specific modules for example User Management and Secure System Access, these modules could consist of their own generic web module, API and Context. This would create a generic package of code.
We could then deploy these different components/packages to build up a new system, saving us from coding the same modules over and over again; if the new system needed to manage users, you could get the User Management package and, boom, it does what you need. However, because we have 30+ systems we will deploy the components multiple times, once for each collection. We also appreciate that some of the systems will need unique functionality, so there would be the potential to add extensions to the generic modules for system-specific needs (a rough sketch of what that could look like follows the example below), or to choose not to use one of the generic modules and create a new one while still using the rest of the generic components.
For example, if we have 4 generic components, A, B, C and D, that make up a system, these could be deployed to create the following system set-ups:
System 1 - A, B, C and D (Happy with all generic components)
System 2 - Aa, B, C and D (extended component A to include specific functionality)
System 3 - A, E, C and F (Can't reuse components B and D so create specific ones, but still reuse components A and C)
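As a rough sketch of the "extended component" idea (the Aa in System 2), assuming a C# code base and with all type names made up for illustration:

```csharp
// Generic package: behaviour shared by every system that deploys this module.
public class UserManagementModule
{
    public virtual bool CanLogIn(string userName) =>
        !string.IsNullOrWhiteSpace(userName);
}

// "Aa": System 2 extends the generic component with a system-specific rule
// while reusing the rest of the package unchanged.
public class System2UserManagementModule : UserManagementModule
{
    public override bool CanLogIn(string userName) =>
        base.CanLogIn(userName) && userName.EndsWith("@system2.example");
}
```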
This is throwing up a few issues for me, as I need to be able to test this platform and each system to ensure they work, and this is the first time I've come across having to test a set-up like this.
I've done some reading around microservices and how to test them, but these often approach the problem for just one system built from microservices, whereas we are looking at multiple systems with different configurations.
My thoughts so far lead me to believe that, for the generic components that will be utilised by the different collections, I can create automated tests at the base code level; those tests will confirm the generic functionality, so it will not be necessary to retest those functions for each component, other than perhaps a manual sense check after deployment. Then at each system level additional automated tests can be added to check any specific functionality that may be created.
Ideally what I'd like would be some sort of testing platform set up so that if a change is made to a core component such as User Management, it would be possible to trigger all the automated tests at the core level and then all of the specific system tests for every system that shares the component, to ensure that the change doesn't affect core functionality or create a knock-on effect in the specific systems. Then only a quick manual check would be required. I'm keen to try and remove the massive manual test overhead of checking 30+ systems each time a shared component is changed.
We work in an agile way and for our current projects we have a strong continuous integration process set up: when a developer checks in some code (Visual Studio), this triggers a CI build (TeamCity / Octopus) that runs all of the unit tests; provided these all pass, it then triggers an integration build that runs my QA automated tests, which are a mixture of tests run at the API level and web tests using SpecFlow with PhantomJS or Selenium WebDriver. We would like to keep this sort of framework in place to keep the quick feedback loops.
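For context, a typical web test step in that set-up might look something like this (a sketch only; the element IDs and the driver wiring are placeholders, and the IWebDriver is assumed to be registered with SpecFlow's container):

```csharp
using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class LoginSteps
{
    private readonly IWebDriver _driver;

    // SpecFlow injects the WebDriver instance registered for the scenario.
    public LoginSteps(IWebDriver driver) => _driver = driver;

    [When(@"I log in as ""(.*)""")]
    public void WhenILogInAs(string userName)
    {
        _driver.FindElement(By.Id("username")).SendKeys(userName);
        _driver.FindElement(By.Id("login-button")).Click();
    }
}
```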
It all sounds great in theory, but where I'm struggling is trying to put something into practice and create a sound testing strategy to cover this kind of system set up.
So really what I'm hoping is that there is someone out there who has encountered something similar in the past and has thoughts on the best way to tackle this and has proven that they work.
I'm keen to get a better understanding of how I could set up a testing platform / rig to aid the continuous integration for all systems considering that each system could potentially look different, yet have shared code.
Any thoughts or links to blogs / whitepapers etc. that you think might help would be much appreciated!!
Your approach is quite good, and since I'll soon have to face the same issues as you, I can give you my ideas so far. I'm pretty sure that to
create a sound testing strategy to cover this kind of system set up
can't be squeezed into one post. So the big picture looks like this (to me): you're in the middle of an enterprise application integration process, and the fundamental thing to get test coverage on will be the data migration. Maybe you also need to consider the concept of service-oriented architecture for your
generic platform using shared components
since it will enable you to provide application functionality as services to other applications. An indirect benefit here is that SOA involves dramatically simplified testing: services are autonomous, stateless, with fully documented interfaces, and separate from the cross-cutting concerns of the implementation. There are a lot of resources on this, like this one on E2E testing or on efficiently testing SOA.
I'm working on an app that integrates with a 3rd party web service. I currently have separate integration / regression tests that call the web service to do the following:
Modify Policy - Add Vehicle
Modify Policy - Remove Vehicle
Modify Policy - Add Multiple Vehicles
Modify Policy - Add Insured
...
Most of these tests were created as bugs were found & fixed. The 3rd party web service is slooow and I'm trying to speed the testing process up. Because each test calls the web service, combining them into one test that only calls the web service once would make things much faster.
Would combining these tests be bad practice because each test was written for a specific bug? My concern is that a mistake in refactoring could potentially allow a bug to be re-introduced later on.
Yes, combining them would be a bad practice. Think instead about how to mitigate the risk without combining the tests. One approach - probably your best bet - would be to mock out the web service, so that the tests are much faster without jeopardizing their ability to detect a regression. Another would be to split your slow regression tests into their own suite that is run less frequently (but still frequently enough!) than your usual set of tests. Finally, you could combine them - but I would recommend explicitly reintroducing all the original bugs into your code to verify that the combined test still detects them.
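A minimal sketch of the mocking approach, assuming the web service calls go through an interface of your own (the IPolicyService, PolicyManager and Moq/NUnit usage below are illustrative, not your actual code):

```csharp
using Moq;
using NUnit.Framework;

// Hypothetical wrapper around the slow 3rd-party web service.
public interface IPolicyService
{
    PolicyResult AddVehicle(int policyId, string vin);
}

public class PolicyResult { public bool Succeeded { get; set; } }

// Hypothetical class under test that talks to the service through the interface.
public class PolicyManager
{
    private readonly IPolicyService _service;
    public PolicyManager(IPolicyService service) => _service = service;
    public PolicyResult AddVehicle(int policyId, string vin) => _service.AddVehicle(policyId, vin);
}

[TestFixture]
public class ModifyPolicyTests
{
    [Test]
    public void AddVehicle_Succeeds_For_Valid_Vin()
    {
        // The mock stands in for the web service, so the regression test runs in milliseconds.
        var service = new Mock<IPolicyService>();
        service.Setup(s => s.AddVehicle(42, "VIN123"))
               .Returns(new PolicyResult { Succeeded = true });

        var manager = new PolicyManager(service.Object);

        Assert.IsTrue(manager.AddVehicle(42, "VIN123").Succeeded);
    }
}
```

The real implementation of the service wrapper would then be exercised by a smaller, slower suite that still calls the third-party service, run less frequently.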
Specific, pointed, direct, unit tests are very valuable; it's nice to know exactly what has broken. Combining tests compromises that value.
I wouldn't recommend combining them, unless you keep the ability to run them separately (maybe keep them separate in your overnight build, and combined in your continuous build).
Try parallelizing them (on separate 'policies'), if your test framework supports it.
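For instance, if the tests run under NUnit 3 and are safe to run concurrently (an assumption about your framework, of course), parallelism can be switched on with assembly-level attributes:

```csharp
// AssemblyInfo.cs (or any file in the test project)
using NUnit.Framework;

// Run test fixtures in parallel, at most four at a time.
[assembly: Parallelizable(ParallelScope.Fixtures)]
[assembly: LevelOfParallelism(4)]
```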
I would suggest including them in your nightly build, so that they run once a day while you are asleep and not watching the clock, and only removing them from the tests you run during development.
Of course that assumes they are not soooo sloooow that one night is not enough.
Just combining your tests into one big test is likely to make them useless or worse. That's not much better than just deleting them.
I want to improve the integration testing methods where I work, and I would like to know how this process happens in other places.
Things like:
- When test plan writing begins
- The proportion between testers, developers and the stuff (entire applications or modifications) to be tested
- What kind of methods are used for integration testing.
Actually, I test web apps, and test plans are managed with TestLink. Bugs found are reported in Bugzilla. I am trying to automate tests with Selenium RC, but it takes some time to write the plans and write the code to execute in Selenium. And time is something that I don't have, because I am testing 3 or more applications.
Most of my problems are caused by differences between the test environment and the production environment. But tests take too long to begin: if someone finishes a modification today, it will take about 3 weeks for me to begin testing it. And the test process queue keeps growing.
It would be really good if anyone could suggest something that would improve the testing process (like more people testing, etc.). But mostly, I would like to hear how the testing process works in other places.
Thanks.
For us the integration test is generally performed by the developer before a commit. Just a simple surface test to see that nothing obvious is broken.
Then we deploy the code from trunk to a development server connected to a test database that is a complete copy of the production database, and have the users responsible for the new functionality do acceptance testing and further integration tests on that server.
We have a concept of "super user" to organize this. Super users are responsible for educating other users in their area of expertise and answering helpdesk questions related to the usage of the system. The super users are also the people who are involved in feature requests and requirement discussions for all features related to their work.
So when a new feature is developed the super user is the one who first validates the design suggestion and then performs the final stages of testing before deployment.
This setup is good because it ensures that domain experts are the ones who validate the system functionality and removes some responsibilities from the IT-department.
The bad thing is that they are not usually very technical or good testers. As users they tend to see the system for what it is rather than what it could be. The fact that they also have their ordinary functions in the organization as full-time employees means that they are a very limited resource in terms of testing.
I'll assume you mean integration testing as in checking to see whether the parts of the application work together (for example, getting the database and the website to work together after the DBA and web developer respectively say they're done). I'll use an example from my current project.
I code-generate several configuration files so I can observe the application with certain modules on or off, namely error reporting, authentication, debug-mode compilation, and with/without SSL. Development environments are likely to have "friendly error pages" turned off, no authentication, no SSL, etc.
I also use a build script to create a copy of the application for each variant of the config file.
It is helpful to pedantically reproduce the characteristics of production in staging and development as much as you can; use virtual machines if you lack the hardware.
I also wrote into the production code base a few pages that test the sort of things that break when code moves from one machine to another (does the DB connection work, do emails send, is the temp folder writable?) and made that page the home page for the server operator.
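As a sketch, that sort of self-check page can boil down to a handful of probes like these (connection string, SMTP host and addresses are placeholders):

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Net.Mail;

// Minimal environment self-checks of the kind described above.
public static class DeploymentChecks
{
    public static bool DatabaseReachable(string connectionString)
    {
        try { using (var conn = new SqlConnection(connectionString)) { conn.Open(); } return true; }
        catch (Exception) { return false; }
    }

    public static bool TempFolderWritable()
    {
        try
        {
            var probe = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
            File.WriteAllText(probe, "ok");
            File.Delete(probe);
            return true;
        }
        catch (Exception) { return false; }
    }

    public static bool CanSendEmail(string smtpHost, string from, string to)
    {
        try
        {
            using (var client = new SmtpClient(smtpHost))
            {
                client.Send(from, to, "Deployment check", "ping");
            }
            return true;
        }
        catch (Exception) { return false; }
    }
}
```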
The key is automating as much as you can. Frequent integration testing catches issues earlier.
From check-in to packaging code for deployment, it takes me 8 minutes of automated work and half an hour of manual clicking for smoke tests.