Deployment Environment Responsibility [closed] - development-environment

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
This might not be a technical question but rather a process-driven one. Please redirect me to the right forum if this is not the place to ask it.
Typically in a project, we have a deployment environment where the development team deploys the code for testing purposes, and the testing team executes test cases on that environment.
But I have seen projects with multiple environments for different teams to test on, and I struggle to understand the point; I do not see any reason to have more than two environments.
Two environments:
1. Lower Environment - developers use this environment to test their code (it is an exact replica of the higher environment where internal and external testing will happen).
2. Higher Environment - where multiple testing teams can test; from my experience this tends to be a stable environment to test on.
But I see multiple environments where testing happens with no apparent concrete reason. My question is: whose responsibility is it to support multiple environments? I find it difficult for the development team to support several environments on top of regular development activities: unit test preparation and getting clarification from design or business on user stories.
Any suggestion would be highly appreciated.

My question is: whose responsibility is it to support multiple environments?
Depending upon the size of the team and the roles within it, responsibility usually lies with a developer, a tester, or a release manager.
I find it difficult for the development team to support multiple environments apart from regular dev activities
Deployments across environments can and should be automated. Make sure that a proper source control tool is in place and that all developers check their code in there. Scripts can be created once and used for every deployment. There are also continuous integration tools that can help with automated deployment, by fetching the code from the source control repository and making an application build from it. This will save everyone's time and minimize human error.
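For illustration, such a script might look like the following Python sketch. All host names, branch names, and commands here are invented assumptions; the point is that one parameterised script, written once, serves every environment.

```python
# Hypothetical sketch: one deployment script, parameterised by target
# environment, instead of ad-hoc manual deployments per environment.
# Hosts, branches, and commands below are illustrative assumptions.

ENVIRONMENTS = {
    "test":       {"host": "test.example.com",    "branch": "develop"},
    "staging":    {"host": "staging.example.com", "branch": "release"},
    "production": {"host": "www.example.com",     "branch": "main"},
}

def deployment_steps(env):
    """Return the ordered shell commands a CI job would run for `env`."""
    cfg = ENVIRONMENTS[env]
    return [
        f"git fetch && git checkout {cfg['branch']}",          # fetch code from source control
        "make build",                                          # build the application
        f"scp -r build/ deploy@{cfg['host']}:/srv/app",        # copy the build to the target
        f"ssh deploy@{cfg['host']} 'systemctl restart app'",   # restart the service
    ]
```

A CI tool would call something like `deployment_steps("staging")` and execute each command in order, so every environment gets the same audited steps and there is nothing to remember by hand.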
Release management best practices involve setting up different environments which are mainly:
Development
Test/QA
Staging
Production/Live
Development environment:
This is where the development team runs its code. It could be the developer's workstation or a server shared by several developers working together on the same project. This environment is updated frequently by the developers, so testers may not be able to use it for executing proper test cycles. Anyone from the dev team can update it. This is what you have termed a Lower Environment.
Test environment:
A place where testing can be conducted. This would include functional testing as well as performance testing in a physical environment with hardware and software that is identical to the production environment. This environment is less frequently updated and provides a common platform for testers to report bugs on. Testers would not have to worry about frequent updates (by developers) and at the same time developers would not have to wait for the test cycle to complete so that they can update this environment. Updates to this environment should be scheduled and properly announced to the related group. Updates to this environment should be automated but controlled by the QA team/manager.
Staging environment:
This is preferably a mirror of the production setup. It is used as a pre-production location and contains the Release Candidate -- the next version of the application, planned to be made live. Final testing and client/manager approvals are done here. It is used to verify installation, configuration, and migration scripts/procedures before they are applied to the production environment. The software installed here should closely match the software installed in the production environment. However, it may be acceptable for the staging environment's hardware to be less capable, since this environment is not used for measuring performance. Updates to this environment are infrequent and highly controlled (usually done by the release manager). On some projects the developer, the release manager, and the QA tester can actually be the same person, but it should be clear that the roles are different.
Production environment:
This is the live environment which is available to all end users. As with staging, only a selected group should be able to update the live setup. A developer should not be making any changes directly to the staging or production environments.

At my previous job we also had different environments, but here is the thing: in your case you have two environments, lower and higher, which is good.
My suggestion is to have a dedicated person (or team) in charge of all deployments -- call them the "deployment person/team". They check that all developers follow the agreed standards, such as coding conventions, and they deploy to the testing site (the QA testing site), so that the development team can focus on its own tasks.
To achieve that, you can use a centralized repository for all developers, such as SVN (for example with the TortoiseSVN client).
Also, all developers should test on their local machines first. If the program still has errors, they are not allowed to commit the changes to the repository until the errors are fixed, to avoid deployment issues. If everything is fine, the developers commit the code to the repository; the deployment team then checks it, and if everything looks good to both the QA and dev teams, the deployment team deploys.
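A minimal Python sketch of that gate, assuming hypothetical check commands: the developer (or a pre-commit hook) runs the local checks and is only allowed to commit when all of them pass.

```python
# Hypothetical pre-commit gate: run each local check and allow the
# commit only if all of them succeed. The commands are assumptions;
# substitute your project's own build and test commands.
import subprocess

CHECKS = [
    ["python", "-m", "compileall", "-q", "src"],      # does the code compile?
    ["python", "-m", "unittest", "discover", "src"],  # do the unit tests pass?
]

def ready_to_commit(checks=CHECKS):
    """Return True only if every check command exits with status 0."""
    return all(subprocess.run(cmd).returncode == 0 for cmd in checks)

# A pre-commit hook would exit non-zero when not ready_to_commit(),
# which blocks the commit until the errors are fixed.
```

The same function can be reused by the deployment team as a final sanity check before pushing a build to the QA site.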
Hope it helps.
Regards.

Related

Which Environments Should Integration Test be Run In?

Given a development pipeline with playground, staging, and production environments, which environment is most appropriate for integration tests? What is the best practice around this?
My thinking is that it should be the playground environment, to get the earliest results (i.e. shift left). However, I have also seen examples of re-running integration tests for each environment.
Is there value in running integration tests multiple times, or does it make more sense to run them once in an appropriate environment?
There might not be a standard best practice; it also depends on the application and the testing setup you have.
You can skip running tests on the production environment, as doing so will affect performance for your users. It is also not a good idea to put test data into your production environment. To check whether functionality works on production, you can create an environment which mimics the production environment.
Since different environments like QA and staging can have different configuration and different CPU/memory settings, it is a good idea to run the integration tests on multiple environments.
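As an illustration, the same logical integration test can be expanded into one run per environment. The hosts and settings below are invented; a real suite might feed such a table into pytest parameters or a CI matrix.

```python
# Hypothetical environment table: each environment has its own base URL
# and resource-dependent settings (e.g. a slower QA box gets a longer timeout).
ENV_CONFIG = {
    "qa":      {"base_url": "https://qa.example.com",      "timeout": 30},
    "staging": {"base_url": "https://staging.example.com", "timeout": 10},
}

def make_test_plan(environments):
    """Expand one logical integration test into one run per environment."""
    return [
        {
            "env": env,
            "url": ENV_CONFIG[env]["base_url"] + "/health",  # invented endpoint
            "timeout": ENV_CONFIG[env]["timeout"],
        }
        for env in environments
    ]
```

Running the suite once per entry in the plan catches configuration-specific failures (wrong connection strings, tighter memory limits) that a single-environment run would miss.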

what is difference between sandbox and staging environments?

If the staging environment is an isolated environment for testing by testers, and a sandbox is also an isolated environment for testing, then what are the differences?
I could not find any useful and clear information on this.
Good question. Given the background you provide, they appear the same. This is true in that they are both isolated from the production environment; they should not contain production data, etc. However, there are a number of differences, particularly in how they are used.
Staging environment
A good staging environment will be a close replica (less the data) of the production system. It is used to test upgrades and patches prior to going to production. This means that it should be a controlled environment where the engineers responsible for the production deployment are allowed to test the rollout instructions.
Access restrictions in a staging environment should be as close to production as possible, i.e. deployment only by those engineers who are responsible for deployment, and no root (or privileged) access for developers.
Sandbox environment
As the name suggests, this is typically a playground for the engineering team. It has fewer restrictions than a staging environment because it is designed to allow the engineers to try out things easily and quickly. A sandbox environment is likely to drift away from the production environment as engineers try out different versions of the product, dependencies, plugins, etc.
Access to a sandbox environment typically allows privileged access to any engineer (developer, QA, etc.) working on the project, for easy and quick deployment and debugging.

What QA Server stand for?

To my understanding, a QA server is a testing server ... I would like to know what QA stands for and what the difference is with a Staging/Pre-Production server.
Thanks for your time!
I can only answer half of the question. QA stands for Quality Assurance.
The QA server is probably suited to testing, i.e. measuring the quality of the software/hardware. Unit tests and regression tests are probably meant to run on this server.
What you call the staging/pre-production server is probably a system running the production code, used for regular usage of the software/hardware.
I know this is an old post, but I found this while searching so I thought I'd add in some of my own knowledge in case other people come here wanting more information.
Michel got most of it right, but I'd like to correct a few things, if I may.
Firstly, a QA Server usually refers to a machine that handles the QA process, and runs software that helps create environments that can test different code branches, as part of the QA process. This can range from switching environments and checking out a branch, to rebuilding entire machines that match production environments and deploying code to them.
The basic principle of a QA Server is to help create QA environments for testing.
Staging/Pre-Production environments usually refer to one or multiple environments that match, as closely as possible, the production environment that the code will be deployed to. Again, this could be as simple as a machine with software installations that match the production machine's versions, to a mini web server farm where multiple machines and databases are connected together in a way that matches the production environment. The goal, again, is to have a place that matches production, but is not production, and again, for the purposes of testing and Quality Assurance.
I hope that helps anyone who is still unsure about the original question.
There is no single, universally agreed differentiation between these environments. A QA server environment is a platform where the application is deployed for testing purposes: executing functional, security, and performance test cases.
Staging is an environment where the application is also deployed for testing, but it is maintained so that it matches the production environment as closely as possible in terms of OS and specifications.

What to know before setting up a new Web Dev Env?

Say you want to create a new environment for a team of developers to build a large website on a LAMP stack.
I am not interested in the knowledge needed for coding the website (php,js,html,css,etc.). This stuff I know.
I am interested in what you need to know to setup a good environment and workflow with test server, production sever, version control, backups, etc.
What would be a good learning path?
As someone who has led this process at several companies, my recommendation is to gradually raise the "maturity" of your organisation as a software factory by incrementally consolidating a set of practices, in an order that makes sense for your needs. The order I tend to follow (starting with things that I consider more basic, moving to the more advanced stuff):
Version control - control your sources. I used to work with SVN but I'm gradually migrating my team to Mercurial (I agree with meagar's recommendation of a distributed VCS). A great Hg tutorial is hginit.
Establish a clear release process, label your releases in VCS, do clean builds in a controlled environment, test and release from these.
Defect tracking - be systematic about your bugs and feature requests. I tend to use Trac because it gives me a more or less complete solution for project management plus a wiki that I use as a knowledge base. But you have choices galore (Jira, Bugzilla, etc...)
Establish routine testing practices: unit tests, e.g. using one of the xUnit frameworks (make it a habit to at least write unit tests for new functions you write and old code you modify), and integration/system tests (for webapps, use a tool like Selenium).
Make your tests run frequently, as part of an automated build process.
Eventually, write your tests before you code (Test-Driven Development) and strive to increase coverage.
Go a step forward in your build/test/release cycle by setting up some continuous integration system (to make sure your build and tests are run regularly, at least nightly). I recently started using Hudson and it is great for our Java/Maven projects, but you can use it for any other build process as well
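The unit-testing habit above can be sketched with Python's built-in unittest framework; `slugify` here is an invented stand-in for whatever new function you have just written.

```python
# Minimal example of the habit described above: every new function gets
# a unit test. `slugify` is an invented stand-in for real project code.
import unittest

def slugify(title):
    """Turn a title like 'Hello, World!' into a URL-friendly 'hello-world'."""
    cleaned = "".join(c.lower() if c.isalnum() else " " for c in title)
    return "-".join(cleaned.split())

class SlugifyTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  a   b "), "a-b")
```

Run it with `python -m unittest` as part of the automated build, so the tests execute on every checkout rather than only when someone remembers.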
In terms of testing environments, I agree with meagar's recommendations. We have these layers:
Test at developers' workstations (each should contain a full setup to run your code)
Staging environment: clone your production environment as closely as possible and deploy and run your app there. We also use VMs.
Production preview: we deploy our app to the production servers with production data but in a different "preview" URL for our internal use only. We run part of our automated Integration tests against this server, and do some additional manual testing with internal users
Production - and keep fingers crossed ;)
In terms of backup, at least for your source code, distributed VCS give you the advantage that your full repos are replicated in many machines, thus minimising the risk of data loss (which is much more critical with centralised repos as is the case with SVN).
Before you do anything else, ask your developers what they want out of a test/production environment. You shouldn't be making this decision, they should. The answer to this depends entirely on what kind of workflow they're familiar with and what kind of software they'll be developing.
I'd personally recommend a distributed VCS like git or mercurial, local WAMP/LAMP stacks on each developer's workstation (shared "development" servers are silly) and a server running some testing VMs which are duplicates of your production environment. You can't ask for more specific advice than that without involving your developers.

In which practical ways can virtualization enhance your development environment?

Practical uses of virtualization in software development are about as diverse as the techniques to achieve it.
Whether running your favorite editor in a virtual machine, or using a system of containers to host various services, which use cases have proven worth the effort and boosted your productivity, and which ones were a waste of time?
I'll edit my question to provide a summary of the answers given here.
It would also be interesting to read about the virtualization paradigms employed, as they have grown quite numerous over the years.
Edit : I'd be particularly interested in hearing about how people virtualize "services" required during development, over the more obvious system virtualization scenarios mentioned so far, hence the title edit.
Summary of answers :
Development Environment
Allows encapsulation of a particular technology stack, particularly useful for build systems
Testing
Easy switching of OS-specific contexts
Easy mocking of networked workstations in an n-tier application context
We deploy our application into virtual instances at our host (Amazon EC2). It's amazing how easy that makes it to manage our test, QA and production environments.
Version upgrade? Just fire up a few new virtual servers, install the software to be tested/QA'd/used in production, verify the deployment went well, and throw away the old instances.
Need more capacity? Fire up new virtual servers and deploy the software.
Peak usage over? Just dispose of no-longer-needed virtual servers.
Virtualization is used mainly for various server uses where I work:
Web servers - If we create a new non-production environment, the servers for it tend to be virtual ones so there is a virtual dev server, virtual test server, etc.
Version control and QA applications - Quality Center and SVN are run on virtual servers. The SVN box also runs CC.Net for our CI here.
There may be other uses but those seem to be the big ones at the moment.
We're testing the way our application behaves on a new machine after every development iteration, by installing it onto multiple Windows virtual machines and testing the functionality. This way, we can avoid re-installing the operating system and we're able to test more often.
We needed to test the setup of a collaborative network application in which data produced on some of the nodes was shared amongst cooperating nodes on the network in a setup with ~30 machines, which was logistically (and otherwise) prohibitive to deploy and set up. The test runs could be long, up to 48 hours in some cases. It was also tedious to deploy changes based on the results of our tests because we'd have to go around to each workstation and make the appropriate changes, which was a manual and error-prone process involving several tired developers.
One approach we used with some success was to deploy stripped-down virtual machines containing the software to be tested to various people's PCs and run the software in a simulated data-production/sharing mode on those PCs as a background task in the virtual machine. They could continue working on their day-to-day tasks (which largely consisted of producing documentation, writing email, and/or surfing the web, as near as I could tell) while we could make more productive use of the spare CPU cycles without "harming" their PC configuration. Deployment (and re-deployment) of the software was simplified, since we could essentially just update one image and re-use it on all the PCs. This wasn't the entirety of our testing, but it did make that particular aspect a lot easier.
We put the development environments for older versions of the software in virtual machines. This is particularly useful for Delphi development, as not only do we use different units, but different versions of components. Using the VMs makes managing this much easier, and we can be sure that any updated exes or dlls we issue for older versions of our system are built against the right stuff. We don't waste time changing our compiler setups to point at the right shares, or de-installing and re-installing components. That's good for productivity.
It also means we don't have to keep an old dev machine set up and hanging around just-in-case. Dev machines can be re-purposed as test machines, and it's no longer a disaster if a critical old dev machine expires in a cloud of bits.