What is the difference between system testing and non-system testing?

One of my friends got this question during his interview.

To my mind, system testing tests software and hardware together, e.g. running the tests on a staging system that is similar to the production system.
Non-system testing means running the tests in another environment, such as your development machine.


Deployment Environment Responsibility

This might not be a technical question but rather a process-driven one. Please redirect me to the right forum if this is not the place to ask it.
Typically in a project we have a deployment environment where the development team deploys the code for testing purposes, and the testing team executes test cases in that environment.
But I have seen projects where there are multiple environments for different teams to test on, and I do not understand the point; I see no reason to have more than two environments.
Two environments:
1. Lower Environment - developers use this environment to test their code (it is an exact replica of the higher environment, where internal and external testing happen).
2. Higher Environment - where multiple testing teams can test; in my experience this is the stable environment to test on.
But I see multiple environments where testing happens for no apparent concrete reason. My question is: whose responsibility is it to support multiple environments? I find it difficult for the development team to support multiple environments on top of regular dev activities: writing unit tests and getting clarification on user stories from design or the business.
Any suggestion would be highly appreciated.
My question is: whose responsibility is it to support multiple environments?
Depending on the size of the team and the roles you have, responsibility would usually lie with the developer, a tester, or the release manager.
I find it difficult for the development team to support multiple environments on top of regular dev activities
Deployments across environments can and should be automated. Make sure a proper source control tool is in place and that all developers check their code in there. Scripts can be created once and used for every deployment. Continuous Integration tools can also help with automated deployment, by fetching the code from the source control repository and building the application from it. This saves everyone's time and minimizes human error.
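To make the point concrete, here is a minimal sketch of such a scripted deployment. The repository URL, environment hostnames, and build commands are hypothetical placeholders; the idea is only that the same script serves every environment, so each deployment runs the same steps:

```python
# Sketch of an automated deployment pipeline. Repository URL, hostnames,
# and commands are invented examples, not a real setup.
import subprocess

ENVIRONMENTS = {
    "dev": "dev.example.internal",
    "qa": "qa.example.internal",
    "staging": "staging.example.internal",
}

def deploy(env, revision, dry_run=True):
    """Fetch a revision from source control, build it, and push it to the
    target environment. With dry_run=True the commands are only collected
    (e.g. for a CI job to inspect), not executed."""
    if env not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {env}")
    host = ENVIRONMENTS[env]
    steps = [
        ["svn", "export", "-r", revision, "https://svn.example.com/app", "build"],
        ["make", "-C", "build", "package"],
        ["scp", "build/app.tar.gz", f"{host}:/opt/app/"],
    ]
    if dry_run:
        return steps
    for cmd in steps:
        subprocess.run(cmd, check=True)  # fail fast; same steps every time
    return steps

plan = deploy("qa", "1042")
print(len(plan))  # three planned steps, identical for every environment
```

Because the environment is just a parameter, supporting one more environment costs one more dictionary entry rather than another manual procedure.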
Release management best practices involve setting up different environments which are mainly:
Development
Test/QA
Staging
Production/Live
Development environment:
This is where the development team runs its code. It could be a developer's workstation or a server shared by several developers working on the same project. This environment is updated frequently by the developers, so testers may not be able to use it for executing proper test cycles. Anyone from the dev team can update it. This is what you have termed the Lower Environment.
Test environment:
A place where testing can be conducted. This would include functional testing as well as performance testing in a physical environment with hardware and software that is identical to the production environment. This environment is less frequently updated and provides a common platform for testers to report bugs on. Testers would not have to worry about frequent updates (by developers) and at the same time developers would not have to wait for the test cycle to complete so that they can update this environment. Updates to this environment should be scheduled and properly announced to the related group. Updates to this environment should be automated but controlled by the QA team/manager.
Staging environment:
Ideally this is a mirror of the production setup. It is used as a pre-production location and contains the Release Candidate -- the next version of the application, planned to be made live. Final testing and client/manager approvals are done here. It is used to verify installation, configuration, and migration scripts/procedures before they are applied to the production environment. The software installed here should closely match the software installed in the production environment. However, it may be acceptable for the hardware capacity of the staging environment to be lower, since this environment is not used for measuring performance. Updates to this environment are infrequent and highly controlled (usually done by the release manager). On some projects the developer, the release manager, and the QA tester can actually be the same person, but it should be clear that the roles are different.
Production environment:
This is the live environment, available to all end users. As with staging, only a selected group should be able to update the live setup. A developer should not make changes directly to the staging or production environments.
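The four environments above differ mainly in who may update them and how often. That policy can itself be written down explicitly, which is a sketch of what release tooling enforces (the role names and frequencies here are illustrative, not prescriptive):

```python
# Illustrative summary of the per-environment update policy described above.
# Roles and frequencies are examples; adapt to your own team structure.
POLICY = {
    "development": {"updated_by": {"developer"},       "frequency": "continuous"},
    "test":        {"updated_by": {"qa_manager"},      "frequency": "scheduled"},
    "staging":     {"updated_by": {"release_manager"}, "frequency": "per release candidate"},
    "production":  {"updated_by": {"release_manager"}, "frequency": "per release"},
}

def may_update(role, environment):
    """Return True if the given role is allowed to push to the environment."""
    return role in POLICY[environment]["updated_by"]

print(may_update("developer", "development"))  # True
print(may_update("developer", "production"))   # False
```

Encoding the policy like this makes the answer to "whose responsibility is it?" explicit per environment instead of tribal knowledge.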
Hi, at my previous job we also had different environments, but here is the thing: in your case you have two environments, lower and higher, which is good.
My suggestion is that you need a dedicated person in charge of all deployments; you could call them the "deployment person/team". They check that all developers meet the standards (coding conventions and so on), and they deploy to the testing site (the QA testing site) so that the development team can focus on its tasks.
To achieve that, you can use a centralized repository for all developers, such as Tortoise SVN.
Also, all developers can run checks on their local computers. If the program still has errors, they are not allowed to commit the changes to the repository until the errors are fixed, to avoid deployment issues. If everything is fine, the developers commit the code to the repository; the deployment team then checks it, and if everything looks good to both the QA and dev teams, the deployment team deploys.
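The "do not commit until the checks pass" rule above can be enforced mechanically rather than by discipline alone. A minimal sketch of such a commit gate, assuming the checks are supplied by the team (in practice this logic would live in an SVN pre-commit hook or a client-side wrapper script):

```python
# Sketch of a commit gate: run local checks and refuse the commit unless
# all of them pass. The check functions are hypothetical stand-ins for
# things like compiling the project or running the unit test suite.
def run_checks(checks):
    """Each check is a zero-argument callable returning True on success."""
    return all(check() for check in checks)

def try_commit(checks, commit):
    """Run the commit callable only if every check passes."""
    if not run_checks(checks):
        return "commit rejected: fix errors first"
    commit()
    return "committed"

# Example: one passing check and one failing check block the commit.
committed = []
result = try_commit(
    [lambda: True, lambda: False],
    lambda: committed.append("r101"),
)
print(result)      # commit rejected: fix errors first
print(committed)   # [] -- nothing reached the repository
```

With a gate like this, the deployment team only ever sees revisions that already passed the developers' local checks.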
Hope it helps.
Regards.

What does QA Server stand for?

To my understanding, a QA server is a testing server. I would like to know what QA stands for, and what the difference is with a staging/pre-production server.
Thanks for your time!
I can only answer half of the question. QA stands for Quality Assurance.
The QA server is probably suited for testing and for measuring the quality of the software/hardware; unit tests and regression tests are likely meant to run on this server.
What you call the staging/pre-production server is probably a system running the production code, used for regular usage of the software/hardware.
I know this is an old post, but I found this while searching so I thought I'd add in some of my own knowledge in case other people come here wanting more information.
Michel got most of it right, but I'd like to correct a few things, if I may.
Firstly, a QA Server usually refers to a machine that handles the QA process, and runs software that helps create environments that can test different code branches, as part of the QA process. This can range from switching environments and checking out a branch, to rebuilding entire machines that match production environments and deploying code to them.
The basic principle of a QA Server is to help create QA environments for testing.
Staging/Pre-Production environments usually refer to one or multiple environments that match, as closely as possible, the production environment that the code will be deployed to. Again, this could be as simple as a machine with software installations that match the production machine's versions, to a mini web server farm where multiple machines and databases are connected together in a way that matches the production environment. The goal, again, is to have a place that matches production, but is not production, and again, for the purposes of testing and Quality Assurance.
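The "matches production as closely as possible" requirement can even be checked automatically, by diffing software manifests collected from the two environments. A sketch with invented package data (how you collect the manifests depends entirely on your stack):

```python
# Sketch: compare software manifests from staging and production and
# report any mismatch. All package names and versions here are invented.
def parity_report(staging, production):
    """Return {package: (staging_version, production_version)} for every
    package whose versions differ or that is missing on one side."""
    mismatches = {}
    for pkg in set(staging) | set(production):
        s, p = staging.get(pkg), production.get(pkg)
        if s != p:
            mismatches[pkg] = (s, p)
    return mismatches

staging    = {"nginx": "1.18.0", "openssl": "1.1.1f", "app": "2.4.0-rc1"}
production = {"nginx": "1.18.0", "openssl": "1.1.1d", "app": "2.3.9"}

# Here openssl has drifted and the app itself differs (as expected,
# since staging holds the release candidate).
print(parity_report(staging, production))
```

A report like this separates intentional differences (the release candidate) from environment drift that would make staging test results misleading.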
I hope that helps anyone who is still unsure about the original questions.
There is no universally clear-cut differentiation between these environments. A QA server environment is a platform where the application is deployed for testing purposes: executing functional, security, and performance test cases.
Staging is an environment where the application is also deployed for testing purposes, but it is maintained so that it matches the production environment as closely as possible in terms of OS and specifications.

Solution for a testing platform

We are looking for automated testing software for our web application. We need a solution that our non-IT staff, as well as the developers, could use to write test cases.
For example, I've run through some of the vendors, such as SmartBear, National Instruments, and IBM. Most of these are MS Windows based or target commercial Linux distros, which removes them from our list since we are all Debian based.
Any recommendation or guideline would be much appreciated.
P.S. We don't have any budget limit!
You're going to have a hard time finding tooling that lets non-technical testers build test cases if you limit yourselves to a Debian OS for developing and running the tests. There's no reason you couldn't have a few Windows systems to manage your test suites from -- those would run against your web site just fine, regardless of what stack it's hosted on. That would open you up to the tools you mentioned (and Telerik's Test Studio, the tool I help promote).
Those Windows systems could easily be run via whatever virtualization host you prefer, so you wouldn't even need physical systems to deal with that. You could easily share the same source control repository as your devs, too, since nearly every decent SCM has Windows clients.
If you're unwilling to consider having a few Windows boxes around for your testing, then you'll need to have a look at getting all your testers proficient in APIs and frameworks like WebDriver and Robot Framework. The Pages gem from Jeff Morgan (#chzy) in Ruby would be another option, as would Adam Goucher's Saunter (in Python).
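For the WebDriver route, the usual way to keep tests readable for less technical testers is the page-object pattern, which is roughly what the Pages gem and similar wrappers provide: locators live in one class, and tests call readable methods. A sketch with a stub driver standing in for a real WebDriver instance, so it runs without a browser (the page, selectors, and method names are all invented for illustration):

```python
# Page-object sketch. A real version would wrap a selenium WebDriver;
# here a stub driver just records actions so the example needs no browser.
class StubDriver:
    def __init__(self):
        self.actions = []
    def fill(self, locator, text):
        self.actions.append(("fill", locator, text))
    def click(self, locator):
        self.actions.append(("click", locator))

class LoginPage:
    """Testers call readable methods; CSS selectors live in one place."""
    USERNAME = "#username"   # hypothetical selectors
    PASSWORD = "#password"
    SUBMIT   = "#login-btn"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USERNAME, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = StubDriver()
LoginPage(driver).login("alice", "secret")
print(driver.actions[-1])  # ('click', '#login-btn')
```

When a selector changes, only the page class is edited; every test written in terms of `login(...)` keeps working, which is what makes this pattern tolerable for mixed technical/non-technical teams.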

Verifying IPv6 implementation

My development team is looking to implement IPv6 on an embedded platform. One of the primary issues we're encountering at this stage is creating our test environment. Currently the only verification suite we have found is the one created by TAHI.org. Running through an initial setup of this suite, it appears to be only for *NIX-based implementations.
Is there an available solution for creating a test environment other than this or going to UNH?
The TAHI tests, while they require a FreeBSD box to run, do not require that the target be UNIX based. In fact, we ran them against a VxWorks-based embedded device.
If memory serves, there are several "remote" scripts that you must implement, however, to (for example) reboot your target device so that compliance can be tested in cases where the IPv6 interface must go down and come back up.
UNH uses essentially the same tests as the TAHI suite. Running the TAHI tests is therefore highly recommended.

In which practical ways can virtualization enhance your development environment?

Practical uses of virtualization in software development are about as diverse as the techniques to achieve it.
Whether running your favorite editor in a virtual machine or using a system of containers to host various services, which use cases have proven worth the effort and boosted your productivity, and which ones were a waste of time?
I'll edit my question to provide a summary of the answers given here.
It would also be interesting to read about the virtualization paradigms employed, as they have become quite numerous over the years.
Edit: I'd be particularly interested in hearing about how people virtualize "services" required during development, over the more obvious system virtualization scenarios mentioned so far; hence the title edit.
Summary of answers :
Development Environment
Allows encapsulation of a particular technology stack, particularly useful for build systems
Testing
Easy switching of OS-specific contexts
Easy mocking of networked workstations in an n-tier application context
We deploy our application into virtual instances at our host (Amazon EC2). It's amazing how easy that makes it to manage our test, QA and production environments.
Version upgrade? Just fire up a few new virtual servers, install the software to be tested/QA'd/used in production, verify the deployment went well, and throw away the old instances.
Need more capacity? Fire up new virtual servers and deploy the software.
Peak usage over? Just dispose of no-longer-needed virtual servers.
Virtualization is used mainly for various server uses where I work:
Web servers - If we create a new non-production environment, the servers for it tend to be virtual ones so there is a virtual dev server, virtual test server, etc.
Version control and QA applications - Quality Center and SVN are run on virtual servers. The SVN box also runs CC.Net for our CI here.
There may be other uses but those seem to be the big ones at the moment.
We're testing the way our application behaves on a new machine after every development iteration, by installing it onto multiple Windows virtual machines and testing the functionality. This way, we can avoid re-installing the operating system and we're able to test more often.
We needed to test the setup of a collaborative network application in which data produced on some of the nodes was shared amongst cooperating nodes on the network in a setup with ~30 machines, which was logistically (and otherwise) prohibitive to deploy and set up. The test runs could be long, up to 48 hours in some cases. It was also tedious to deploy changes based on the results of our tests because we'd have to go around to each workstation and make the appropriate changes, which was a manual and error-prone process involving several tired developers.
One approach we used with some success was to deploy stripped-down virtual machines containing the software to be tested to various people's PCs and run the software in a simulated data-production/sharing mode on those PCs as a background task in the virtual machine. They could continue working on their day-to-day tasks (which largely consisted of producing documentation, writing email, and/or surfing the web, as near as I could tell) while we could make more productive use of the spare CPU cycles without "harming" their PC configuration. Deployment (and re-deployment) of the software was simplified, since we could essentially just update one image and re-use it on all the PCs. This wasn't the entirety of our testing, but it did make that particular aspect a lot easier.
We put the development environments for older versions of the software in virtual machines. This is particularly useful for Delphi development, as not only do we use different units, but different versions of components. Using the VMs makes managing this much easier, and we can be sure that any updated exes or dlls we issue for older versions of our system are built against the right stuff. We don't waste time changing our compiler setups to point at the right shares, or de-installing and re-installing components. That's good for productivity.
It also means we don't have to keep an old dev machine set up and hanging around just-in-case. Dev machines can be re-purposed as test machines, and it's no longer a disaster if a critical old dev machine expires in a cloud of bits.