I want to improve integration testing methods where I work, and I would like to know how this process works in other places.
Things like:
- When test plan writing begins
- The ratio of testers to developers and to the work to be tested (entire applications or modifications)
- What kinds of methods are used for integration testing
Currently, I test web apps, and test plans are managed with TestLink. Bugs found are reported in Bugzilla. I am trying to automate tests with Selenium RC, but it takes time to write the plans and to write the code that Selenium executes, and time is something I don't have, because I am testing 3 or more applications.
Most of my problems are caused by differences between the test environment and the production environment, but tests also take too long to begin: if someone finishes a modification today, it will take about 3 weeks before I can start testing it, and the test queue keeps growing.
It would be really good if anyone could suggest something to improve the testing process (more people testing, etc.), but mostly I would like to hear how the testing process works in other places.
Thanks.
For us, integration testing is generally performed by the developer before a commit: just a simple surface test to see that nothing obvious is broken.
Then we deploy the code from trunk to a development server connected to a test database that is a complete copy of the production database, and have the users responsible for the new functionality do acceptance testing and further integration testing on that server.
We have a concept of "super user" to organize this. Super users are responsible for educating other users in their area of expertise and answering helpdesk questions related to the usage of the system. The super users are also the people who are involved in feature requests and requirement discussions for all features related to their work.
So when a new feature is developed, the super user is the one who first validates the design suggestion and then performs the final stages of testing before deployment.
This setup is good because it ensures that domain experts are the ones who validate the system functionality and removes some responsibilities from the IT-department.
The bad thing is that they are usually not very technical or particularly good testers. As users, they tend to see the system for what it is rather than what it could be. The fact that they also have their ordinary functions in the organization as full-time employees means they are a very limited resource in terms of testing.
I'll assume you mean integration testing as in checking that the parts of the application work together (for example, getting the database and the website to work together after the DBA and web developer respectively say they're done). I'll use an example from my current project.
I code-generate several configuration files so I can observe the application with certain modules on/off, namely error reporting, authentication, debug-mode compilation, and with/without SSL. Development environments are likely to have "friendly error pages" turned off, no authentication, no SSL, etc.
I also use a build script to create a copy of the application for each variant of the config file.
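As an illustration only (the answer does not show its actual script), here is a minimal Python sketch of the idea: generate one config file per on/off combination so a build script can stamp out one application copy per variant. The file names and option keys are hypothetical.

```python
# Hypothetical config-variant generator; keys and paths are placeholders.
import itertools
import json
import pathlib

# Toggles mirroring the ones mentioned above.
toggles = {
    "friendly_error_pages": [True, False],
    "authentication": [True, False],
    "debug_compilation": [True, False],
    "ssl": [True, False],
}

out_dir = pathlib.Path("build/configs")
out_dir.mkdir(parents=True, exist_ok=True)

for values in itertools.product(*toggles.values()):
    variant = dict(zip(toggles.keys(), values))
    name = "_".join(f"{k}-{'on' if v else 'off'}" for k, v in variant.items())
    # One JSON config file per combination; a build script can then copy
    # the application once per generated file.
    (out_dir / f"app.{name}.json").write_text(json.dumps(variant, indent=2))
```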
It is helpful to pedantically reproduce the characteristics of production in staging and development as much as you can; use virtual machines if you lack the hardware.
I also wrote into the production code base a few pages that test the sorts of things that break when code moves from one machine to another (does the DB connection work, do emails send, is the temp folder writable?) and made that page the home page of the server operator.
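A minimal sketch of such a health-check page, assuming a Python/Flask stack (the original answer does not name one); the connection strings are placeholders:

```python
# Hypothetical /healthcheck page checking the things that typically break
# when code moves between machines: DB connection, mail server, temp folder.
import os
import smtplib
import tempfile

from flask import Flask, jsonify
import sqlalchemy

app = Flask(__name__)

# Assumed connection settings; replace with your real configuration.
DB_URL = os.environ.get("DATABASE_URL", "postgresql://user:pass@localhost/app")
SMTP_HOST = os.environ.get("SMTP_HOST", "localhost")

@app.route("/healthcheck")
def healthcheck():
    results = {}

    # Does the database connection work?
    try:
        engine = sqlalchemy.create_engine(DB_URL)
        with engine.connect() as conn:
            conn.execute(sqlalchemy.text("SELECT 1"))
        results["database"] = "ok"
    except Exception as exc:
        results["database"] = f"FAIL: {exc}"

    # Can we reach the mail server?
    try:
        with smtplib.SMTP(SMTP_HOST, timeout=5) as smtp:
            smtp.noop()
        results["smtp"] = "ok"
    except Exception as exc:
        results["smtp"] = f"FAIL: {exc}"

    # Is the temp folder writable?
    try:
        with tempfile.TemporaryFile(dir=tempfile.gettempdir()):
            pass
        results["temp_folder"] = "ok"
    except Exception as exc:
        results["temp_folder"] = f"FAIL: {exc}"

    status = 200 if all(v == "ok" for v in results.values()) else 500
    return jsonify(results), status

if __name__ == "__main__":
    app.run()
```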
The key is automating as much as you can. Frequent integration testing catches issues earlier.
From check in to packaging code for deployment, it takes me 8 minutes of automated work and 1/2 hour of manual clicking for smoke tests.
I have a requirement to automate LoadRunner execution with some pre-checks: the steps involved would be checking for a new application build, after which LoadRunner/Performance Center should start the load test.
See the HP documentation on Jenkins and the web services automation framework.
See the command-line options for mdrv and/or wlrun.
Ask difficult questions. What is the end goal of automating the execution? How will the developers be "controlled" so they do not introduce structural changes to the conversation that would require business process updates for each build? How will you add actionable intelligence to the analysis of the tests?
You may be better off coupling small performance unit tests to the dev stage, to ensure that code is performant at the unit and component-assembly stages where the cost to fix is very small; combine this with performance checks at the functional stage (see developer tools and rules related to performance), tracking response times for a single user on all business processes, and then a daily/nightly execution of a business-level performance test.
With performance comes ownership. If you have a working test and a developer changes something structural in their code (removes/adds web page elements, form fields, dynamic elements) and this information is not passed forward to test so the test definition can be updated before the test breaks, then that developer should be called out for breaking the build. The excuse of "Well, it didn't change the screen, so it shouldn't matter..." betrays naïveté about which OSI layer performance testing tools operate at.
After installing LoadRunner, you will have to create a new script by selecting the correct protocol for communication between the server and the client in LoadRunner, and then start recording the script. The next step is to run it after parameterizing the required data. Please find more details at http://performancetestworld.com/JSPFiles/LoadRunnerFirstScript.jsp
LoadRunner Enterprise supports integrations with a few popular CI systems to fulfill your requirement, such as Jenkins, Azure DevOps, etc. Learn more from the online help.
Our company is small, and only 1 or 2 testers are assigned to a project. All our test-related material is maintained in Excel sheets, and for bug tracking we are using Mantis. We create test cases in an Excel sheet and execute them from the same sheet.
Would TestLink or any other test management tool be helpful to us? Since the number of testers is small, no merging of test cases is done; only one QA person develops the test cases and executes them. Please tell me whether it would be of any help to us.
If so, please suggest only free applications.
I am working for a startup and we are pretty much using TestLink. Our QA team is always pretty small (between 1 and 3). It's very helpful for organizing and keeping the test cases for your whole system, and it becomes more useful when you go for a release. You can assign your testers to a test build so that they can go through the test cases one after another and mark which tests pass or fail. Finally, you can generate a report based on those results for your build.
Hope that helps.
Regardless of whether there is only one tester or many, it is still good practice to use a test management tool, and a lightweight solution will make you more productive.
There are many benefits over using a static Excel file, and we recently put together a short blog post which goes into a little detail on the benefits of organizing your testing process with a test management tool, which may be of interest.
If you are using Mantis to track your issues, you will often find that test management tools integrate with it, so that when a test fails a ticket is automatically created, which is a huge time saver.
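For illustration only, here is a rough Python sketch of that kind of integration, filing a ticket through MantisBT's REST API when a test fails; the URL, token, project and category names are placeholders, and your test management tool will normally do this for you.

```python
# Hedged sketch: create a MantisBT issue from a test-failure hook.
import requests

MANTIS_URL = "https://mantis.example.com/api/rest/issues"  # placeholder
API_TOKEN = "your-api-token"                               # placeholder

def report_failure(test_name: str, details: str) -> None:
    # Minimal issue payload; project/category names are placeholders.
    payload = {
        "summary": f"Automated test failed: {test_name}",
        "description": details,
        "project": {"name": "MyProject"},
        "category": {"name": "General"},
    }
    resp = requests.post(
        MANTIS_URL,
        json=payload,
        headers={"Authorization": API_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()

# Example: call report_failure(...) from your test runner's failure hook.
```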
We work with Scrum and I think we are on the right track, but one thing bothers me: the testers aren't part of the development cycle yet. I am now thinking about how to involve the testers in the development team. At the moment it is separated, and the testers have their 'own' sprint.
Currently we have a CI environment. Every time a developer finishes a user story, he checks in his code, and the build server builds the code on every check-in.
What I want is for the testers to test the user stories in the same sprint in which they are implemented. But I am struggling with how to set this up.
My main question is: where can the testers test the user story? They can't test on the build server, because every check-in creates a new build and there are a lot of check-ins, so that's not an option. Should I create a separate server where the testers can deploy by themselves? Or...
My question is: how have you set this up? How have you integrated the testers into the development process?
You need a staging server and to deploy a build to it every once in a while. That's how we do it: CI -> Dev -> Staging -> Live.
Edit: I always feel like an asshole posting wikilinks but this article about Multi-Stage CI is good: http://en.m.wikipedia.org/wiki/Multi-stage_continuous_integration
In my current project we have 4 small teams and each has 1 Tester assigned. The testers are part of the daily standup, sprint planning meetings etc. The testers also have their own daily standup so they can coordinate etc.
During Sprint Planning Meeting 2 we create acceptance criteria / examples / test cases (whatever you want to call them) together (testers, developers and PO). The intent is to create a common understanding of the user story, to get the direction right, and to split it into smaller pieces of functionality (scenario/test case), e.g. just a specific happy path. That way we are able to deliver smaller working features, which can then be tested by the testers, while the next part of the user story is being implemented.
Furthermore, it is decided which stories need an automated acceptance test and at what level (unit, integration, GUI test) it makes the most sense.
As already mentioned by OakNinja :) you will need at least one additional environment for the testers.
In our case those environments are not quality gates, but dev stages. So, whenever a developer finishes some functionality he tells the tester that he can redeploy if he wants to.
Once the user story is finished, it is deployed to the staging server, where acceptance of the user story takes place.
Deployment process:
Dev + Test => Staging (used for acceptance) => Demo (used for demoing user stories every 2nd week) => SIT and End2End testing environments (deployed every 2nd week) => Production (deployed roughly every 6 months)
We have QA resources involved throughout the sprint: Estimation, Planning, etc. When the devs first start coding, the QA members of the team start creating the test cases. As code gets checked in, it gets deployed out to a separate environment on a scheduled basis (or as needed) so that QA can execute their tests during the sprint. QA is also involved in regression after the stories have been mostly completed.
Our setup uses automated deployments using build configurations in TFS or TeamCity, depending on the project. Our environments are split like this:
Local development server. Developers have their own source code, IIS, and databases (if necessary) to isolate them from each other and from QA while working.
Build server. Used for CI, automated deployments. No websites or DBs here.
Daily Build environment (a.k.a. 'Dev' or 'Dev Test'). Fully functioning site where QA can review work as it is being done during the sprint and provide feedback.
QA lab (a.k.a. 'Regression' or 'UAT'). Isolated lab for regression testing, demos, and UAT.
We use build configurations to keep these up to date:
CI Build on checkins to handle checkins from local devs.
Daily scheduled build and automated deploy to Daily Build environment. Devs or QA can also trigger this manually, obviously, to make a push when needed.
Manual trigger for deploy to QA environment.
One point is missing from the explanations above: the best way to add your testers to the Scrum process is to make sure they are part of the Scrum team and work together with the rest of the team (devs, PO, etc.) during the sprint. Most of the time this is not really done, and all you end up with is (in the best case) a mini-waterfall process.
Now let me explain. There is little to add to the extensive hardware and environment explanations above: you can work with staged servers or, even better, make it an internal feature to have scripts in place that allow testers to create their own environments whenever they want (if you are using any CI framework, chances are you already have all the parts you need).
What is bothering me is that you said that your testers "have their 'own' sprint".
The main problem I've seen when getting testers involved in the Scrum process is that they are not really part of the process itself. Sometimes the feeling is that they are not technical enough to work really closely with developers; other times developers simply don't want to be bothered with explaining to testers what they are doing (until they are finished, which is not the same as done!); other times it is simply a case of management not explaining that this is what is expected from the team.
In a nutshell, each user story should have a technical owner and a testing owner. They should work together all the time, and testing should start as soon as possible, even as short "informal clean-up tests" in the developer's environment. After all, the idea is to cut the red tape by eliminating all the unnecessary bureaucracy in the process.
Testers should also explain to developers the kind of testing they should be doing before telling QA they can have a go at the feature. Manual testing is as much the responsibility of the developer as it is of the tester.
In short, if you want testers to be part of your development, even more important than having the right infrastructure in place is having the right mindset in place, and this means changing the rules of the game and, in many cases, the way each person in the team sees their task and responsibility.
I wrote a couple of posts on the subject in my blog; if I haven't bored you too much already, you may find these interesting:
Switching to Agile, not as simple as changing your T-Shirt
Agile Thinking instead of Agile Testing
I recommend reading the article "5 Tips for Getting Software Testing Done in the Scrum Sprint" by Clemens Reijnen. He explains how to integrate software testing teams and practices into a Scrum sprint.
Recently I came up with this question: is it worth spending development time on writing automated unit tests for web-based projects at all? It seems pointless at some point, because such projects are oriented around interaction with users/clients, so you cannot anticipate the whole possible set of user actions in order to check the correctness of the content shown. Even regression tests can hardly be done. So I'm very eager to know the opinion of other experienced developers.
Selenium has a good web testing framework:
http://seleniumhq.org/
Telerik is also in the process of developing one for web app testing:
http://www.telerik.com/products/web-ui-test-studio.aspx
"You cannot anticipate the whole possible set of user actions in order to check the correctness of the content shown."
You can't anticipate all the possible data your code is going to be handed, or all the possible race conditions if it's threaded, and yet you still bother unit testing. Why? Because you can narrow it down a hell of a lot. You can anticipate the sorts of pathological things that will happen. You just have to think about it a bit and get some experience.
User interaction is no different. There are certain things users are going to try and do, pathological or not, and you can anticipate them. Users are just inputting particularly imaginative data. You'll find programmers tend to miss the same sorts of conditions over and over again. I keep a checklist. For example: pump Unicode into everything; put the start date after the end date; enter gibberish data; put tags in everything; leave off the trailing newline; try to enter the same data twice; submit a form, go back and submit it again; take a text file, call it foo.jpg and try to upload it as a picture. You can even write a program to flip switches and push buttons at random, a bad monkey, that'll find all sorts of fun bugs.
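To make the "bad monkey" idea concrete, here is a hedged Python sketch using Selenium WebDriver; the base URL is an assumption, and the random clicking and gibberish input are just one simple way to implement such a monkey.

```python
# Illustrative "bad monkey": randomly clicks links/buttons and types
# gibberish (including Unicode) into fields, watching for error pages.
import random
import string

from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "http://localhost:8000"  # assumption: your app under test

def gibberish(n=20):
    # Mix ASCII with a few non-ASCII characters to exercise Unicode handling.
    return "".join(random.choice(string.printable[:94] + "✓☃é漢") for _ in range(n))

driver = webdriver.Firefox()
driver.get(BASE_URL)
try:
    for _ in range(200):
        inputs = driver.find_elements(By.CSS_SELECTOR, "input[type=text], textarea")
        clickables = driver.find_elements(By.CSS_SELECTOR, "a, button, input[type=submit]")
        if inputs and random.random() < 0.5:
            random.choice(inputs).send_keys(gibberish())
        elif clickables:
            try:
                random.choice(clickables).click()
            except Exception:
                pass  # stale or hidden elements are expected noise
        # Crude error detection; adjust to whatever your error pages look like.
        if "Traceback" in driver.page_source or "500" in driver.title:
            print("Possible error page at", driver.current_url)
        if BASE_URL not in driver.current_url:
            driver.get(BASE_URL)  # wandered off-site, come back
finally:
    driver.quit()
```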
It's often as simple as sitting someone down who's unfamiliar with the software and watching them use it. Fight the urge to correct them; just watch them flounder. It's very educational. Steve Krug refers to this as "Advanced Common Sense" and has an excellent book called "Don't Make Me Think" which covers cheap, simple user-interaction testing. I highly recommend it. It's a very short and eye-opening read.
Finally, the clients themselves, if their expectations are properly prepared, can be a fantastic test suite. Be sure they understand it's a work in progress, that it will have bugs, that they're helping to make their product better, and that it absolutely should not be used for production data, and let them tinker with the pre-release versions of your product. They'll do all sorts of things you never thought of! They'll be the best and most realistic testing you ever had, FOR FREE! Give them a very simple way to report bugs, preferably just a one-button box right in the application which automatically submits their environment and history; the feedback box on Hiveminder is an excellent example. Respond to their bugs quickly and politely (even if it's just "thanks for the info") and you'll find they'll be delighted you're so responsive to their needs!
Yes, it is. I just ran into an issue this week with a web site I am working on. I recently switched out the data access layer and set up unit tests for my controllers and repositories, but not for the UI interactions.
I got bit by a pretty obvious bug that would have been easily caught if I had integration tests. Only through integration tests and UI functionality tests do you find issues with the way different tiers of the application interact with one another.
It really depends on the structure and architecture of your web application. If it contains an application logic layer, then that layer should be easy to unit test with automation tools such as Visual Studio. Also, using a framework that has been designed to enable unit testing, such as ASP.NET MVC, helps a lot.
If you're writing a lot of JavaScript, a number of JS testing frameworks have come along recently for unit testing your JavaScript.
Other than that, testing the web tier using something like Canoo, HtmlUnit, Selenium, etc. is more a functional or integration test than a unit test. These can be hard to maintain if the UI changes a lot, but they can really come in handy. Recording Selenium tests is easy and something you could probably get other people (testers) to help you create and maintain. Just know that there is a cost associated with maintaining tests, and it needs to be balanced out.
There are other types of testing that are great for the web tier, fuzz testing especially, but a lot of the good options are commercial tools. One that is open source and plugs into Rails is called Tarantula. Having something like that at the web tier is nice to have running in a continuous integration process, and it doesn't require much in the way of maintenance.
Unit tests make sense in a TDD process; they do not have much value if you don't do test-first development. Acceptance tests, however, are a big thing for the quality of the software. I'd say the acceptance test is the holy grail of development: acceptance tests show whether the application satisfies the requirements. How do I know when to stop developing a feature? Only when all my acceptance tests pass. Automating acceptance testing is a big deal because I do not have to do it all manually each time I make changes to the application. After months of development there can be hundreds of tests, and it becomes unfeasible (sometimes impossible) to run them all manually. Then how do I know if my application still works?
Automation of acceptance tests can be implemented with xUnit test frameworks, which causes some confusion here. If I create an acceptance test using PHPUnit or HttpUnit, is it a unit test? My answer is no. It does not matter what tool I use to create and run the test: an acceptance test is one that shows whether a feature works in accordance with the requirements, while a unit test shows whether a class (or function) satisfies the developer's implementation idea. A unit test has no value for the client (user); an acceptance test has a lot of value to the client (and thus to the developer; remember Customer Affinity).
So I strongly recommend creating automated acceptance tests for the web application.
Good frameworks for acceptance testing are:
Sahi (sahi.co.in)
Selenium
Simpletest (it's a unit-test framework for PHP, but it includes a browser object that can be used for acceptance testing).
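For illustration, a minimal acceptance-test sketch using Python's unittest with Selenium WebDriver (one of the frameworks listed above); the URL and element IDs are hypothetical placeholders, and the point is simply that the test asserts a requirement, not an implementation detail.

```python
# Illustrative acceptance test: "a registered user can log in and reach
# the dashboard". URL and element IDs are placeholders.
import unittest

from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginAcceptanceTest(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()

    def tearDown(self):
        self.driver.quit()

    def test_user_can_log_in(self):
        d = self.driver
        d.get("http://localhost:8000/login")          # placeholder URL
        d.find_element(By.ID, "username").send_keys("alice")
        d.find_element(By.ID, "password").send_keys("secret")
        d.find_element(By.ID, "submit").click()
        # The acceptance criterion: the user lands on the dashboard.
        self.assertIn("Dashboard", d.title)

if __name__ == "__main__":
    unittest.main()
```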
However
You have mentioned that the web site is all about user interaction, and thus test automation will not solve the whole usability problem. For example: the testing framework shows that all tests pass, yet the user cannot see the form, link or other page element because of an accidental style="display:none" on the div. The automated tests pass because the div is present in the document and the test framework can "see" it. But the user cannot, and a manual test would fail.
Thus, all web applications need manual testing. Automated tests can reduce the test workload drastically (by around 80%), but manual tests remain essential for the quality of the resulting software.
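To illustrate the display:none pitfall described above, here is a hedged Selenium sketch (URL and element ID are placeholders): a presence-only check passes, while a visibility check via is_displayed catches the hidden element, though it still cannot judge layout or usability the way a human can.

```python
# Presence vs. visibility: only the second assertion catches display:none.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://localhost:8000/form")  # placeholder URL

element = driver.find_element(By.ID, "signup-form")  # placeholder id

# Presence-only assertion: passes even if the div has style="display:none".
assert element is not None

# Visibility assertion: fails for display:none, closer to what a user sees,
# though it still cannot judge layout, contrast, or overall usability.
assert element.is_displayed(), "Form is present in the DOM but not visible"

driver.quit()
```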
As for unit testing and TDD: they improve code quality. They are beneficial to the developers and to the future of the project (i.e. for projects longer than a couple of months). However, TDD requires skill. If you have the skill, use it. If you don't, consider gaining the skill, but mind the time it will take: it usually takes about 3 to 6 months to start creating good unit tests and code. If your project will last more than a year, I recommend studying TDD and investing time in a proper development environment.
I've created a web test solution (Docker + Cucumber); it's very basic and simple, so it is easy to understand and modify/improve. It lives in the web directory:
my solution: https://github.com/gyulaweber/hosting_tests
I'm building a web app against a database where a small number of records (about 5,000) are active at the same time. Each active working record probably goes through 50-300 changes by 30 users over a 4-hour period, which adds up to thousands of changes per minute.
Because our testing environment is so static, testing is not realistic, and some issues do not arise until we hit the production database.
I had the idea of running Profiler, collecting the DML statements, and then replaying them on the test server while debugging the app, assuming I can replay them in the same time intervals as the original run. But even this wouldn't be a valid test, since a tester's changes could corrupt future DML statements being replayed.
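For what it's worth, here is a rough sketch of that replay idea in Python. It is purely illustrative: it assumes the trace has been exported to a CSV with start_time and sql_text columns, and it uses pyodbc with a placeholder DSN.

```python
# Hedged sketch of replaying captured DML in the original time intervals.
import csv
import time
from datetime import datetime

import pyodbc

conn = pyodbc.connect("DSN=TestServer;Trusted_Connection=yes")  # placeholder DSN
cursor = conn.cursor()

with open("trace_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))  # columns: start_time (ISO), sql_text

if rows:
    first = datetime.fromisoformat(rows[0]["start_time"])
    replay_start = time.monotonic()
    for row in rows:
        offset = (datetime.fromisoformat(row["start_time"]) - first).total_seconds()
        # Sleep until this statement's original offset from the trace start.
        delay = offset - (time.monotonic() - replay_start)
        if delay > 0:
            time.sleep(delay)
        try:
            cursor.execute(row["sql_text"])
            conn.commit()
        except pyodbc.Error as exc:
            # A tester's manual change may have invalidated this statement,
            # which is exactly the weakness described above.
            print(f"Replay failed at offset {offset:.1f}s: {exc}")

conn.close()
```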
Does anybody know how to simulate real time database changes for realistic testing?
Thanks.
By the way, our problems are not concurrency issues.
Maybe this Selenium-based service is what you need: browsermob
A few people recommended it.
And yes, this is not an ad :)
There are a few commercial packages that do this. I work for Quest Software, makers of one of the packages, but I'm going to list three because I've worked with all of them before I came to work for Quest:
Microsoft Visual Studio Test Edition - it has load testing tools added on. It lets you design tests inside Visual Studio like simulating browsers hitting your web app. Recording the initial macros is kind of a pain, but when you've done it, it's easy to replay. It also has agents that you can deploy across multiple desktops to drive more load. For example, we installed it on several developers' desktops, and when we needed to do load testing after hours, we could throw a ton of computing power at the servers. The downside is that the setup and ongoing maintenance is kinda painful.
HP Quality Center (used to be Mercury Test Director and some other software) - also has load testing tools, but it's designed from the ground up for testers. If your testers don't have Visual Studio experience, this is an easier choice.
Quest Benchmark Factory - this tool focuses exclusively on the database server, not the web and app servers. It captures load on your production server and then can replay it on your dev/test servers, or it can generate synthetic transactions too. There's a freeware version you can use to get started.
If you know and love Visual Studio, and if you want to test your web servers and app servers, then go with Visual Studio Test Edition. If you just want to focus on the database, then go with Benchmark Factory.
Perhaps use something along the lines of a database stress-testing tool like the mysqlslap load-emulator. Here's a link explaining use-cases and specific examples.
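As a purely illustrative sketch (host, credentials, schema and query below are placeholders), mysqlslap can be driven from a script, for example in Python, to hammer a test database with concurrent statements:

```python
# Hedged example of invoking mysqlslap from Python; the connection details
# and query are placeholders, the flags are standard mysqlslap options.
import subprocess

cmd = [
    "mysqlslap",
    "--host=127.0.0.1",
    "--user=tester",
    "--password=secret",          # placeholder credentials
    "--create-schema=loadtest",   # schema to run against
    "--concurrency=30",           # roughly the 30 users from the question
    "--iterations=10",
    "--query=UPDATE records SET status = 'active' WHERE id = 42;",  # sample DML
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
```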