What does QA Server stand for? - testing

From my understanding, a QA Server is a testing server ... I would like to know what QA stands for and what the difference is with a Staging/Pre-Production server.
Thanks for your time!

I can only answer half of the question. QA stands for Quality Assurance.
The QA server is most likely used for testing, i.e. measuring the quality of the software/hardware; unit tests and regression tests are probably meant to run on this server.
What you call the staging/pre-production server is probably a system running the production code, used for regular usage of the software/hardware.

I know this is an old post, but I found this while searching so I thought I'd add in some of my own knowledge in case other people come here wanting more information.
Michel got most of it right, but I'd like to correct a few things, if I may.
Firstly, a QA Server usually refers to a machine that handles the QA process and runs software that helps create environments for testing different code branches. This can range from switching environments and checking out a branch, to rebuilding entire machines that match production environments and deploying code to them.
The basic principle of a QA Server is to help create QA environments for testing.
Staging/Pre-Production environments usually refer to one or multiple environments that match, as closely as possible, the production environment that the code will be deployed to. Again, this could range from a single machine with software installations that match the production machine's versions, to a mini web server farm where multiple machines and databases are connected together in a way that matches the production environment. The goal, again, is to have a place that matches production but is not production, and again, for the purposes of testing and Quality Assurance.
I hope that helps anyone who is still unsure about the original questions.

There is no such clear-cut differentiation between these environments. A QA server environment is a platform where the application is deployed for testing purposes: executing functional, security, and performance test cases.
Staging is an environment where the application is also deployed for testing purposes, but it is maintained to match the production environment as closely as possible in terms of OS and specifications.

Related

What is the difference between sandbox and staging environments?

If the staging environment is an isolated environment for testing by testers, and the sandbox is also an isolated environment for testing, then what are the differences?
Actually, I could not find any useful and clear information on this.
Good question. Given the background you provide, they appear the same. This is true in that they are both isolated from the production environments, they should not contain production data, etc. However, there are a number of differences, particularly in how they are used.
Staging environment
A good staging environment will be a close replica (less the data) of the production system. It is used to test upgrades and patches prior to going to production. This means it should be a controlled environment where the engineers responsible for the production deployment are allowed to test the rollout instructions.
Access restrictions in a staging environment should be as close to production as possible, i.e. deployment only by those engineers who are responsible for deployment, and no root (or privileged) access for developers.
Sandbox environment
As the name suggests, this is typically a playground for the engineering team. It has fewer restrictions than a staging environment because it is designed to allow the engineers to try things out easily and quickly. A sandbox environment is likely to drift away from the production environment as engineers try out different versions of the product, dependencies, plugins, etc.
Access to a sandbox environment typically allows privileged access to any engineer (developer, QA, etc.) working on the project, for easy and quick deployment and debugging.

Deployment Environment Responsibility [closed]

This might not be a technical query but rather a process-driven one. Please redirect me to the right forum if this is not the place to ask such a question.
Typically in a project, we have a deployment environment where the development team deploys the code for testing purposes, and the testing team executes test cases on that environment.
But I have seen projects where there are multiple environments for different teams to test on, and I do not understand the point; I do not see any reason to have multiple environments.
Two environments:
1. Lower Environment - developers can use this environment to test their code (this environment will be an exact replica of the Higher Environment, where internal and external testing will happen)
2. Higher Environment - where multiple testing teams can test; from my experience this tends to be the stable environment to test on.
But I see multiple environments where testing happens for no apparent concrete reason. My question is: whose responsibility is it to support multiple environments? I find it difficult for the development team to support multiple environments on top of regular dev activities: unit test preparation and getting clarification from design or business on user stories.
Any suggestion would be highly appreciated.
My question is: whose responsibility is it to support multiple environments?
Depending upon the size of the team and the roles within it, responsibility would usually lie with either the developer, the tester, or the release manager.
I find it difficult for the development team to support multiple environments apart from regular dev activities
Deployments across environments can and should be automated. Make sure a proper source control tool is in place and that all developers check their code in there. Scripts can be created once and used for every deployment. There are also Continuous Integration tools available that can help with automated deployment by fetching the code from the source control repository and building the application from it. This will save everyone's time and minimize human error.
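As an illustration, here is a minimal Python sketch of the kind of deployment script that can be written once and reused for every environment. The repository URL and deploy paths are hypothetical, and a real script would add error handling and rollback:

    #!/usr/bin/env python3
    """Minimal deployment sketch: export code from source control to a target path.

    The repository URL and paths below are placeholders, not real values.
    """
    import subprocess
    import sys

    REPO_URL = "https://example.com/svn/myapp/trunk"  # hypothetical repository
    DEPLOY_PATHS = {
        "dev": "/srv/myapp-dev",
        "test": "/srv/myapp-test",
        "staging": "/srv/myapp-staging",
    }

    def deploy(environment: str) -> None:
        target = DEPLOY_PATHS[environment]
        # Export a clean copy of the code (no VCS metadata) into the target path.
        subprocess.run(["svn", "export", "--force", REPO_URL, target], check=True)
        print(f"Deployed {REPO_URL} to {environment} at {target}")

    if __name__ == "__main__":
        deploy(sys.argv[1] if len(sys.argv) > 1 else "dev")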
Release management best practices involve setting up different environments which are mainly:
Development
Test/QA
Staging
Production/Live
Development environment:
This is where the development team runs their code. It could be the developer's workstation or a server shared by several developers working together on the same project. This environment is updated frequently by the developers, so testers may not be able to use it for executing proper test cycles. Anyone from the dev team can update it. This is what you have termed a Lower Environment.
Test environment:
A place where testing can be conducted. This would include functional testing as well as performance testing, in a physical environment with hardware and software identical to the production environment. This environment is updated less frequently and provides a common platform for testers to report bugs against. Testers do not have to worry about frequent updates (by developers), and at the same time developers do not have to wait for the test cycle to complete before they can update this environment. Updates to this environment should be scheduled and properly announced to the related group, and should be automated but controlled by the QA team/manager.
Staging environment:
This is preferably a mirror of the production setup. It is used as a pre-production location and contains the Release Candidate: the next version of the application, planned to be made live. Final testing and client/manager approvals are done here. It is used to verify installation, configuration, and migration scripts/procedures before they are applied to the production environment. The software installed here should closely match the software installed in the production environment. However, it may be acceptable for the staging environment's hardware to be less capable, since this environment is not used for measuring performance. Updates to this environment are infrequent and highly controlled (usually done by the release manager). On some projects, the developer, the release manager, and the QA tester can actually be the same person, but it should be clear that the roles are different.
Production environment:
This is the live environment which is available to all the end users. As with staging, only a selected group should be able to update the live setup. A developer should not be making any changes directly to the staging or production environments.
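To make the separation concrete, here is a small Python sketch of how per-environment settings might be kept apart in application code. The environment names follow the list above; the hostnames and flags are invented for illustration:

    # Sketch of per-environment configuration, one entry per environment
    # described above. Hostnames and settings are hypothetical.
    ENVIRONMENTS = {
        "development": {"db_host": "localhost", "debug": True},
        "test": {"db_host": "db.test.local", "debug": True},
        "staging": {"db_host": "db.staging.local", "debug": False},
        "production": {"db_host": "db.prod.local", "debug": False},
    }

    def get_config(env_name: str) -> dict:
        """Return the settings for one environment, failing fast on typos."""
        try:
            return ENVIRONMENTS[env_name]
        except KeyError:
            raise ValueError(f"Unknown environment: {env_name!r}") from None

Keeping the settings in one table like this makes it harder for anyone to accidentally point a test deployment at the production database.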
Hi, at my previous job we also had different environments, but here is the thing: in your case you have two environments, lower and higher, which is good.
My suggestion is that you need a dedicated person in charge of all deployments; maybe you can call them the "deployment person/team". They will be the one to check that all developers meet the agreed standards (coding conventions, etc.) and to deploy to the testing site (QA testing site), so that the development team can focus on their tasks.
To be able to achieve that, you can use a centralized repository for all developers, such as Subversion (e.g. with the TortoiseSVN client).
Also, all developers can do their checking on their local computers. If the program still has errors, they are not allowed to commit the changes to the repository until the errors are fixed, to avoid deployment issues. If everything is fine, the developers can commit the code to the repository; the deployment team then checks, and if everything is good for both the QA and Dev teams, the deployment team can deploy.
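One way to enforce the "don't commit broken code" rule described above is a Subversion pre-commit hook. The Python sketch below is only an illustration of the idea, using a minimal policy of rejecting commits with an empty log message; a real hook might run linters or check issue-tracker references instead:

    #!/usr/bin/env python3
    """Subversion pre-commit hook sketch (installed as hooks/pre-commit).

    The policy shown (require a log message) is illustrative only.
    """
    import subprocess
    import sys

    def main(repos: str, txn: str) -> int:
        # svnlook lets a hook inspect the in-flight commit transaction.
        log = subprocess.run(
            ["svnlook", "log", "-t", txn, repos],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        if not log:
            sys.stderr.write("Commit rejected: please supply a log message.\n")
            return 1
        return 0

    if __name__ == "__main__":
        # Subversion invokes the hook with the repository path and transaction id.
        sys.exit(main(sys.argv[1], sys.argv[2]))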
Hope it helps.
Regards.

What to know before setting up a new Web Dev Env?

Say you want to create a new environment for a team of developers to build a large website on a LAMP stack.
I am not interested in the knowledge needed for coding the website (PHP, JS, HTML, CSS, etc.). This stuff I know.
I am interested in what you need to know to set up a good environment and workflow with a test server, production server, version control, backups, etc.
What would be a good learning path?
As someone who has led this process at several companies, my recommendation is to gradually raise the "maturity" of your organisation as a software factory by incrementally consolidating a set of practices, in an order that makes sense for your needs. The order I tend to follow (starting with things I consider more basic, moving to the more advanced stuff):
Version control - control your sources. I used to work with SVN but I'm gradually migrating my team to Mercurial (I agree with meagar's recommendation of a distributed VCS). A great HG tutorial can be found at hginit
Establish a clear release process, label your releases in VCS, do clean builds in a controlled environment, test and release from these.
Defect tracking - be systematic about your bugs and feature requests. I tend to use Trac because it gives me a more or less complete solution for project management plus a wiki that I use as a knowledge base. But you have choices galore (Jira, Bugzilla, etc...)
Establish routine testing practices: unit tests, e.g. using one of the xUnit frameworks (make it a habit to at least write unit tests for new functions you write and old code you modify), and integration/system tests (for webapps, use a tool like Selenium). A minimal unit-test sketch appears after this list.
Make your tests run frequently, as a part of an automated build process
Eventually, write your tests before you code (Test-Driven Development) and strive to increase coverage.
Go a step further in your build/test/release cycle by setting up a continuous integration system (to make sure your build and tests run regularly, at least nightly). I recently started using Hudson and it is great for our Java/Maven projects, but you can use it for any other build process as well
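As mentioned in the testing item above, here is a minimal sketch of an xUnit-style unit test, using Python's unittest module purely for illustration; the function under test is invented:

    import unittest

    def slugify(title: str) -> str:
        """Hypothetical function under test: turn a page title into a URL slug."""
        return "-".join(title.lower().split())

    class SlugifyTest(unittest.TestCase):
        def test_lowercases_and_joins_words(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_empty_title_gives_empty_slug(self):
            self.assertEqual(slugify(""), "")

    if __name__ == "__main__":
        unittest.main()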
In terms of testing environments, I agree with meagar's recommendations. We have these layers:
Test at developers' workstations (they should contain a full setup to run your code)
Staging environment: clone your production environment as closely as possible and deploy and run your app there. We also use VMs.
Production preview: we deploy our app to the production servers with production data, but at a different "preview" URL for our internal use only. We run part of our automated integration tests against this server (a small sketch follows this list), and do some additional manual testing with internal users
Production - and keep fingers crossed ;)
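As referenced in the production-preview item above, here is a minimal Python sketch of an automated smoke check that could run against a preview URL. The URL is hypothetical, and a real integration suite (e.g. Selenium, as mentioned earlier) would exercise actual user flows:

    import urllib.request

    # Hypothetical internal preview URL; replace with your own.
    PREVIEW_URL = "https://preview.example.com/"

    def smoke_test() -> None:
        """Fail loudly if the preview deployment is not serving pages."""
        with urllib.request.urlopen(PREVIEW_URL, timeout=10) as response:
            assert response.status == 200, f"Unexpected status: {response.status}"
            body = response.read().decode("utf-8", errors="replace")
            assert "<html" in body.lower(), "Response does not look like a page"

    if __name__ == "__main__":
        smoke_test()
        print("Preview smoke test passed")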
In terms of backup, at least for your source code, distributed VCSs give you the advantage that your full repos are replicated on many machines, thus minimising the risk of data loss (which is much more critical with centralised repos, as is the case with SVN).
Before you do anything else, ask your developers what they want out of a test/production environment. You shouldn't be making this decision, they should. The answer to this depends entirely on what kind of workflow they're familiar with and what kind of software they'll be developing.
I'd personally recommend a distributed VCS like git or mercurial, local WAMP/LAMP stacks on each developer's workstation (shared "development" servers are silly) and a server running some testing VMs which are duplicates of your production environment. You can't ask for more specific advice than that without involving your developers.

In which practical ways can virtualization enhance your development environment?

Practical uses of virtualization in software development are about as diverse as the techniques to achieve it.
Whether running your favorite editor in a virtual machine, or using a system of containers to host various services, which use cases have proven worth the effort and boosted your productivity, and which ones were a waste of time?
I'll edit my question to provide a summary of the answers given here.
Also, it'd be interesting to read about the virtualization paradigms employed, as they have grown quite numerous over the years.
Edit: I'd be particularly interested in hearing about how people virtualize "services" required during development, over the more obvious system virtualization scenarios mentioned so far, hence the title edit.
Summary of answers :
Development Environment
Allows encapsulation of a particular technology stack, particularly useful for build systems
Testing
Easy switching of OS-specific contexts
Easy mocking of networked workstations in an n-tier application context
We deploy our application into virtual instances at our host (Amazon EC2). It's amazing how easy that makes it to manage our test, QA and production environments.
Version upgrade? Just fire up a few new virtual servers, install the software to be tested/QA'd/used in production, verify the deployment went well, and throw away the old instances.
Need more capacity? Fire up new virtual servers and deploy the software.
Peak usage over? Just dispose of no-longer-needed virtual servers.
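As a sketch of that fire-up-and-dispose workflow on EC2, here is roughly what it can look like with Amazon's boto3 Python library (a modern tool, not necessarily what this answer's author used); the AMI ID and instance type are placeholders:

    import boto3

    ec2 = boto3.resource("ec2")

    # Fire up a fresh virtual server for testing/QA.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical machine image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance = instances[0]
    instance.wait_until_running()
    print(f"QA instance {instance.id} is running")

    # ... install and verify the software, then throw the instance away.
    instance.terminate()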
Virtualization is used mainly for various server uses where I work:
Web servers - If we create a new non-production environment, the servers for it tend to be virtual ones so there is a virtual dev server, virtual test server, etc.
Version control and QA applications - Quality Center and SVN are run on virtual servers. The SVN box also runs CC.Net for our CI here.
There may be other uses but those seem to be the big ones at the moment.
We're testing the way our application behaves on a new machine after every development iteration, by installing it onto multiple Windows virtual machines and testing the functionality. This way, we can avoid re-installing the operating system and we're able to test more often.
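That "new machine without reinstalling the OS" workflow usually comes down to reverting VM snapshots. As one illustration, assuming VirtualBox (the answer does not name a hypervisor) and invented VM/snapshot names, the revert can be scripted around the VBoxManage CLI:

    import subprocess

    # Hypothetical VM and snapshot names; assumes VirtualBox's VBoxManage CLI.
    VM_NAME = "WinXP-clean"
    SNAPSHOT = "fresh-install"

    def revert_to_clean_state() -> None:
        """Restore the VM to its clean-OS snapshot and boot it for a test run."""
        subprocess.run(["VBoxManage", "snapshot", VM_NAME, "restore", SNAPSHOT], check=True)
        subprocess.run(["VBoxManage", "startvm", VM_NAME, "--type", "headless"], check=True)

    if __name__ == "__main__":
        revert_to_clean_state()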
We needed to test the setup of a collaborative network application in which data produced on some of the nodes was shared amongst cooperating nodes on the network in a setup with ~30 machines, which was logistically (and otherwise) prohibitive to deploy and set up. The test runs could be long, up to 48 hours in some cases. It was also tedious to deploy changes based on the results of our tests because we'd have to go around to each workstation and make the appropriate changes, which was a manual and error-prone process involving several tired developers.
One approach we used with some success was to deploy stripped-down virtual machines containing the software to be tested to various people's PCs and run the software in a simulated data-production/sharing mode on those PCs as a background task in the virtual machine. They could continue working on their day-to-day tasks (which largely consisted of producing documentation, writing email, and/or surfing the web, as near as I could tell) while we could make more productive use of the spare CPU cycles without "harming" their PC configuration. Deployment (and re-deployment) of the software was simplified, since we could essentially just update one image and re-use it on all the PCs. This wasn't the entirety of our testing, but it did make that particular aspect a lot easier.
We put the development environments for older versions of the software in virtual machines. This is particularly useful for Delphi development, as not only do we use different units, but different versions of components. Using the VMs makes managing this much easier, and we can be sure that any updated exes or dlls we issue for older versions of our system are built against the right stuff. We don't waste time changing our compiler setups to point at the right shares, or de-installing and re-installing components. That's good for productivity.
It also means we don't have to keep an old dev machine set up and hanging around just-in-case. Dev machines can be re-purposed as test machines, and it's no longer a disaster if a critical old dev machine expires in a cloud of bits.

Does my development environment mirror user's environment?

I am trying to get a better idea on this as so far I have had mixed answers in person.
I am a solo dev in a 5-man IT dept for a Health Care related business. My developer machine is running Win 7 RC1 (x64) but my users are all running Win XP Pro (x86). Is this a big deal? What kind of pitfalls should I be aware of? Is having a VM of the user image enough?
Should my environment completely mirror my end user's?
Your development environment doesn't need to mirror your user's environment, but your testing environment certainly should!
Having a VM of the user's image for testing would probably be good enough.
First and foremost, as a developer, your machine will never look like your client's machine. Just accept that.
You will have tools and utilities installed that they won't have. That will fundamentally change the configuration of your machine from the outset. You have DLLs, applications, services, and possibly drivers installed that your users have never even heard of (and likely never will).
As far as the OS is concerned, Win7 and WinXP, despite claims to the contrary, are not the same animal. Don't believe the hype. Having said that, don't believe the anti-hype, either. Just be aware, as you well should, that any piece of software developed under one version of an OS is not guaranteed to behave the same way under another.
The short of it: Yes, it's important that your environment is different. Should you panic about it? No. Should you account for it in testing? Absolutely. As rigorously as possible.
Is this a big deal?
Yes, it is. Your OS is two generations ahead of the one your users have, and on top of that you are running a non-release version.
What kind of pitfalls should I be aware of?
That depends on what you are developing. Some libraries you already have may be missing on the users' machines, the versions may be different, etc.
Should my environment completely mirror my end user's?
Not necessarily, but you definitely need to have a testing environment that corresponds to the one the users have.
If you were developing web applications, all that would not have been an issue (well, unless you used some fancy fonts that are not present in a clean OS by default).
It's unreasonable to develop on exactly the same type of system as your users. If nothing else, your life is made much easier by installing all sorts of developer tools your end users have no reason to install. I hear Visual Studio in particular likes to squirrel a number of potential dependencies onto a system.
However, you do need to test on a system more in line with that of your end users. If you have access to an image of such a system, your VM approach should be sufficient. At the very least, you should aim for a staged (or better yet, beta/trial) release so as to avoid pushing a completely broken app out the door.
In short, don't fret about the development environment but put some thought into your testing one!
It's effectively impossible for any two machines to be set up the same, so development and production environments will always be different. One advantage in them being VERY different is that you will be more aware of possible deployment problems.
The type of environment that you need to use for testing entirely depends on what you're developing.
If you're writing web applications, having a VM with the user's standard image should be more than enough (just make sure the VM contains all the browsers your users might be using). Web development is much easier in this respect (I'm also running Windows 7 and have a couple of VMs to test various environments).
If you're writing a full desktop environment, you'll probably want to ask for a second computer that you can test on (even if just to test before a final release). I say that because of differences in hardware. Just imagine if something runs fine for you, but slows down the user's computer so that everything else is unusable. Conversely, you might spend hours trying to make something faster in the VM, whereas on a user's computer it might run just fine.