Is it good practice for each developer to have an individual database for testing an application? - sql

My company has decided that each developer should have their own local database for testing our ASP.NET web application. I am against the decision, as I think there should be a centralised test database. But the developers argue that if we all test against the same table, one developer deleting records while another adds records at the same time will produce incorrect results. So what do you guys think?

Each developer should have their own database if they are going to be making any changes to the data structure or values. If one of the devs adds/removes something that prevents a portion of code from working, it will only affect that dev instead of all the devs. This also lets devs get more comfortable with the data structure and with changing it, since they can break/fix their own environment as much as they want.
There should be a testing environment that has its own database where the current revision of the project can run for tests.
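If each dev does get their own database, a simple naming convention keeps the setup cheap and lets the shared test database coexist with the local ones. A minimal sketch, assuming SQL Server with integrated security (the naming convention here is invented):

```csharp
// Minimal sketch of a per-developer database convention.
// Assumes SQL Server with integrated security; the naming scheme is made up.
using System;

public static class DevDatabase
{
    public static string GetConnectionString()
    {
        // e.g. "MyApp_Dev_JSMITH": one local database per developer
        string dbName = "MyApp_Dev_" + Environment.UserName.ToUpperInvariant();
        return "Server=(local);Database=" + dbName + ";Integrated Security=true";
    }
}
```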

No, it's not a good practice (for integration tests, for example), but it's acceptable for functional tests (or smoke tests) in a development environment. The thing is, it's better to integrate the database and the application from the very first steps; that way you avoid issues, bugs, and wasted time. Also, make sure the developers don't make changes to the schema: you should have a single, well-identified database version.
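One way to keep that version well identified is a single-row version table that every schema script bumps, with the application refusing to start on a mismatch. A sketch only; the SchemaVersion table and numbering scheme are assumptions, not a standard:

```csharp
// Sketch: fail fast if the database schema version doesn't match the build.
// Assumes a SchemaVersion table maintained by your migration scripts
// (table and column names are hypothetical).
using System;
using System.Data.SqlClient;

public static class SchemaGuard
{
    public const int ExpectedVersion = 42; // bumped with every schema script

    public static void AssertVersion(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT MAX(Version) FROM SchemaVersion", conn))
        {
            conn.Open();
            int actual = (int)cmd.ExecuteScalar();
            if (actual != ExpectedVersion)
                throw new InvalidOperationException(string.Format(
                    "Database is at schema version {0}, expected {1}.",
                    actual, ExpectedVersion));
        }
    }
}
```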

But the developers argue that if we all test against the same table, one developer deleting records while another adds records at the same time will produce incorrect results.
Sounds like a really bad web application. A good web application should be able to support more than one simultaneous user!
For that reason alone, perhaps your team should at least have a phase where people are sharing a common database.

Related

Dependencies in Acceptance Testing

A co-worker and I are having a debate. We are on a craptastic legacy project and are slowly adding acceptance tests. He believes that we should be doing the work in the GUI/WatiN and then using a low-level library to query the database directly, to get an 'end to end' test as he puts it.
We are using NHibernate, and I advocate using the GUI/WatiN and then using those NHibernate objects to do the assertions in the acceptance testing. He dislikes the dependency on NHibernate in the test. My assertion was that we have/should have integration tests against the NHibernate objects to make sure they are working with the DB the way we intend, at which point there is no downside to using them in the acceptance test to assert proper operation. I also think his low-level SQL dependence will make the tests fragile and duplicate business logic in a lot of cases.
Integration testing in our shop basically means a single component with a dependency, e.g. fileRepository/FileSystem or Domain-NhibernateObject/Database. Acceptance testing means coming in through the GUI. Unit means all dependencies have been/can be mocked/stubbed out and you've got a pure test in memory, with only the method under test actually doing any real work. Let me know if my defs are off.
Anyway any articles/docs/parchment with opinions on this subject you can point me at would be appreciated.
The only reason you'd ever automate tests is to make things easier to change. If you weren't changing them, you could get away with manual testing. Tying the tests to the database will make the database much harder to change.
Tying them to the NHibernate objects won't help very much either, I'm afraid!
The users of your system won't be using the database or NHibernate. How do they get the benefit (or provide the benefit to other stakeholders)? How will they be able to tell that it's working well? If you can capture that in the Acceptance Tests, you'll be able to change the underlying code and data while still maintaining the value of your application. If someone generates reports from the data, why not generate the same reports and check that their contents are what you expect? If the data is read by another system, can you get a copy of that system and see what it outputs to its users?
Anyway, that's my opinion - keep acceptance tests as close to the business value as possible - and here's a blog post I wrote which might help. You could also try the Behavior Driven Development group on Yahoo, who have a fair bit of experience amongst them.
Oh, and doing integration tests to check that your (N)Hibernate bindings are good is an excellent idea. Saved us on a couple of projects.
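To make that concrete, here is a minimal sketch of such a binding test, assuming NUnit and a hypothetical Customer entity; the session factory comes from your own NHibernate configuration:

```csharp
// Sketch of an NHibernate "binding" integration test: save an entity, clear
// the session cache, reload it, and compare. Customer and its mapping are
// hypothetical; build _sessionFactory in [SetUp] from your configuration.
using NHibernate;
using NUnit.Framework;

public class Customer
{
    public virtual int Id { get; protected set; }
    public virtual string Name { get; set; }
}

[TestFixture]
public class CustomerPersistenceTests
{
    private ISessionFactory _sessionFactory; // assign from your Configuration

    [Test]
    public void Customer_round_trips_through_the_database()
    {
        using (ISession session = _sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            var saved = new Customer { Name = "Test Customer" };
            session.Save(saved);
            session.Flush();
            session.Clear(); // force the reload to hit the DB, not the cache

            var loaded = session.Get<Customer>(saved.Id);
            Assert.AreEqual("Test Customer", loaded.Name);

            tx.Rollback(); // leave no trace in the test database
        }
    }
}
```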

Regression Testing and Deployment Strategy

I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy?
The reason I ask is that there seems to be a lot of waste (and risk) in using an agile approach of deploying changes monthly, if 90% of the applications don't change. What I mean by this is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH, when they didn't have any feature changes or hot fixes. They had to be tested simply because the business is rolling updates every month.
And the risk that is involved... if a mission-critical application is deployed, that takes a few weeks, and multiple departments, to test, is it realistic to expect to have to constantly regression-test this application?
One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and deploying this framework) means the mission-critical app can never just enjoy the same code base for a long time.
These applications share the same database, hence the need for the constant testing. I'm aware of TDD and automated tests, but that doesn't exist at the moment.
Any advice?
The idea behind a framework is that it's supposed to be the "slow moving code". You shouldn't be changing the framework as frequently as the applications it supports. Try getting the framework on a slower development cycle: perhaps a release no more often than every three or six months.
My guess is that you're still working out some of the architectural decisions in this framework. If you think the framework changes really need to be that dynamic, find out what parts of the framework are being changed so often, and try to refactor those out to the applications that need them.
Agile doesn't have to mean unlimited changes to everything. Your architect could place boundaries on what constitutes the framework, and keep people from tweaking it so readily for what are likely application shortcuts. It may take a few iterations to get it settled down, but after that it should be more stable.
I wouldn't call it an Agile approach unless you have (unit) test coverage. One of the key tenets of Agile is that you have robust unit tests that provide a safety net for frequent refactoring and new feature development. There is a lot of risk in your scenario. Deploying twenty to thirty applications a month when 1) most of them don't add any new business value to their users; and 2) there are no tests in place would not qualify as a good idea in my book. And I'm a strong believer in Agile. But you can't pick and choose only the parts of it that are convenient.
If the business application has not changed, I wouldn't release it just to compile in a new framework. Imagine every .NET application needing to be re-released every time the framework changed. Reading into your question, I wonder if the common database is driving the need for this. If your framework is isolating the schema and you're finding you need to rebuild apps whenever the schema changes, then you need to tackle that problem first. Check out Refactoring Databases, by Scott Ambler for some tips.
As another aside, there's a big difference between integration test and unit tests. Your regression tests are integration tests. It's very difficult to automate at that level. I think the breakthroughs that are happening in testing are all about writing highly testable code that makes unit testing more and more of the code base possible.
Here are some tips I can think of:
1. Break the framework into independent parts, so that changing one part requires rerunning only a small portion of the test cases.
2. Employ a test-case prioritization technique. That is, you rerun only a small portion of each application's test pool, selected by some strategy. The "additional" branch-coverage strategy and ART (adaptive random testing) usually perform better than the others; both require knowing the branch coverage of each test case. (A sketch of the "additional" strategy follows this list.)
3. Update the framework less frequently. If an application doesn't need to change, it's OK not to change it, so I guess it's OK for these applications to keep using the old version of the framework. You can update the framework for these applications, say, every 3 months.
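By way of illustration, here is a minimal sketch of the "additional" greedy strategy from tip 2, assuming your coverage tool can give you a branch-coverage bit vector per test case (the data shapes here are invented):

```csharp
// Sketch of "additional" greedy test-case prioritization: repeatedly pick the
// test covering the most branches not yet covered by the tests already chosen.
// coverage[t][b] == true means test t exercises branch b; this data must come
// from your coverage tool.
using System.Collections.Generic;
using System.Linq;

public static class TestPrioritizer
{
    public static List<int> Prioritize(IList<bool[]> coverage)
    {
        int branchCount = coverage[0].Length;
        var covered = new bool[branchCount];
        var remaining = Enumerable.Range(0, coverage.Count).ToList();
        var order = new List<int>();

        while (remaining.Count > 0)
        {
            // pick the remaining test adding the most newly covered branches
            int best = remaining
                .OrderByDescending(t => coverage[t]
                    .Where((hit, b) => hit && !covered[b]).Count())
                .First();
            remaining.Remove(best);
            order.Add(best);
            for (int b = 0; b < branchCount; b++)
                if (coverage[best][b]) covered[b] = true;
        }
        return order; // run in this order; cut off wherever your budget ends
    }
}
```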
Regression testing is a way of life. You will need to regression test every application before it is released. However, since time and money are not usually infinite, you should focus your testing on the areas with the most changes. A quick and dirty way to identify these areas is to count the lines of code changed in a given business area; say "accounting" or "user management". Those should get the most testing first along with any areas that you have identified as "mission critical".
Now I know that lines of code changed is not necessarily the best way to measure change. If you have well defined change requests, it is actually better to evaluate these hot spots by looking at the number and complexity of the change requests. But not everyone has that luxury.
When you are talking about making a change to the framework, you probably don't need to test all the code that uses it. If you're talking about a change to something like the DAL, that would basically amount to everything anyway. You just need to test a large enough sample of the code to be reasonably comfortable that the change is solid. Again, start with the "mission critical" areas and the area most heavily affected.
I find it helpful to divide the project into 3 distinct code streams; Development, QA, and Production. Development is open to all changes, QA is feature locked, and Production is code locked (well, as locked as it gets anyway). If you are releasing to production on a monthly cycle, you probably want to branch a QA build from the Development code at least 1 month before the release. Then you spend that month acceptance testing the new changes and regression testing everything else that you can. You'll probably have to complete testing the changes about a week before the release so that the app can be staged and you can dry run the installation a few times. You won't get to regression test everything, so have a strategy ready for releasing patches to Production. Don't forget to merge those patches back into the QA and Development code streams too.
Automating the regression tests would be a really great thing, theoretically. In practice, you end up spending more time updating the testing code than you would spend running the test scripts manually. Besides, you can hire two or three testing monkeys for the price of one really good test-script developer. Sad but true.

In functional testing, should I compare all tabular data rendered in the browser with the data coming from the DB?

I'm working on a test plan for a website where some tests are taking the following path:
1. Hit the requested URI and get the data rendered inside some table (20 rows per page).
2. Make a database query to get the data that is supposed to be rendered in that table.
3. Compare the two data sets row by row; they should match.
Is that a correct way of doing functional testing? If the request were an Ajax request, would the answer change? Would the answer differ for integration testing?
I have some reason to believe this is wrong somehow... still need your opinions, guys!
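Roughly, the test I'm describing would look like this (a sketch only: the URL, SQL, and selectors are invented, and I'm using Selenium WebDriver's C# bindings with NUnit for illustration):

```csharp
// Sketch of the three steps above: render the page, query the database,
// compare row by row. All names here are placeholders.
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class CustomersTableTests
{
    [Test]
    public void Rendered_rows_match_database_rows()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            // 1. Hit the requested URI and read the rendered table
            driver.Navigate().GoToUrl("http://localhost:8080/customers?page=1");
            List<string> rendered = driver
                .FindElements(By.CssSelector("#customers tbody tr td:first-child"))
                .Select(td => td.Text)
                .ToList();

            // 2. Query the data that is supposed to be rendered (20 per page)
            var expected = new List<string>();
            using (var conn = new SqlConnection(
                "Server=(local);Database=MyApp_Test;Integrated Security=true"))
            using (var cmd = new SqlCommand(
                "SELECT TOP 20 Name FROM Customers ORDER BY Name", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        expected.Add(reader.GetString(0));
            }

            // 3. Compare row by row
            CollectionAssert.AreEqual(expected, rendered);
        }
    }
}
```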
Yes, this could be a productive test. Either you have a fixed data set or you don't.
If you have a fixed data set, this is much easier to test, because all you're doing is comparing against a fixed output.
If you don't have a fixed data set, then you need to duplicate the business logic, effectively duplicating the work already done by the developer. Then you have two sets of logic to maintain.
The second is the best approach because you get two ways of doing the same thing, effectively a peer review of the specification and code. It's also very expensive in terms of time and resources, which is why most people choose to have a fixed data set.
To answer your question, if your business logic in the query is simple, then you can get a test very easily. However, the value that the test brings isn't great, because you aren't testing very much.
If the business logic is complex, you are getting more value from the test, but it's going to be harder to maintain in the long term.
For me, what your test does bring is a simple integration test that proves that the system reads correctly from the database, and displays the data correctly. This is a good test, even better if it is automated.
This seems fine for functional testing. Integration testing, in my mind, has to do with testing different technologies or components that are supposed to work together, which is generally broader than functional testing. But of course this sort of testing could also be considered integration testing, depending on how your application is put together and where the testing happens in the lifecycle of your development. For example, it may be that in order for this site to work you have to put together a few components that were developed independently; this might be one of the tests that validates the integration works.
I don't see how the request being Ajax or not makes the answer any different.
I will likely be a dissenting opinion here, but I don't consider this to be a productive test. What you are doing is simply duplicating the code which produces the page. And any time you introduce duplicated code (even across departments) you'll be looking at defects cropping up long-term.
It is far better to load the DB with known data (either through the app, or directly) and then check that the output matches what you'd expect. This also ensures that your DB layer, or DB itself, hasn't modified the data in a way you do not expect.
That is:
1. Load known data (preferably through the app itself)
2. Load the requested URI
3. Check that the displayed data matches your known data
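A minimal sketch of those three steps, assuming NUnit and Selenium WebDriver's C# bindings (translate to WatiN or Selenium RC as needed; the connection string, SQL, URL, and selector are all invented for the example):

```csharp
// Sketch of the known-data approach: seed a known row, render the page,
// and assert the row appears. Every name here is a placeholder.
using System.Data.SqlClient;
using System.Linq;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class OrdersPageTests
{
    [Test]
    public void Seeded_order_appears_in_the_rendered_table()
    {
        // 1. Load known data (directly here; through the app is even better)
        using (var conn = new SqlConnection(
            "Server=(local);Database=MyApp_Test;Integrated Security=true"))
        using (var cmd = new SqlCommand(
            "INSERT INTO Orders (Id, CustomerName) VALUES (9001, 'Known Customer')",
            conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }

        // 2. Load the requested URI
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://localhost:8080/orders");

            // 3. Check that the displayed data matches the known data
            bool shown = driver.FindElements(By.CssSelector("#orders td"))
                               .Any(cell => cell.Text == "Known Customer");
            Assert.IsTrue(shown, "Seeded order was not rendered");
        }
    }
}
```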
This kind of test can be good for testing a large set of data with relatively little tester effort if there is not much developer logic between the database and the display to the end user. Our team has done this on a number of occasions, and it is especially useful for running large quantities of real production data through our tests to be sure that actual scenarios are handled as expected. Do make sure you do at least a little fixed-input testing for rare scenarios that are especially likely to be handled differently in the DB and on the web page: null values, special characters, and other oddities.
Personally, I would call this "integration testing", since you are testing the integration of the DB and the web site, and not "functional testing". For "functional testing", I'd probably want to make a mock of the datasource (e.g., the database) that will provide pre-written sets of data in the format you expect.
Having said that, if I had high confidence in the validity of the DB data and if the logic between the DB query and the web page display was very small and low-risk, I would probably not bother with the mock and would let the integration test cover the functionality as well. I don't know that testing the functionality and integration separately would be a big quality win in this case, and there are likely better things you could do with the available testing time. If there is a lot of logic around this data, you should probably test the integration separately from the functionality. Additional integration testing would probably include things like, "What if the database can't be reached?" and "What if the database is slow?".
While this technique will work with Ajax, make sure your testing tools will work with Ajax. Specifically, think about how you will capture the database query results and how you will gather the results displayed on the web page.
I'm assuming that the validity of the data in the query is being tested elsewhere, since you mentioned that this was just one type of test in the test plan. I'm also only discussing integration with the database and this report, not other features or components, and not other aspects of testing (performance, security, etc.), since that was the scope of your question.

How are integration tests performed at your company/job/project?

I want to improve integration tests methods where I work and I would like to know how this process happens in other places.
Things like:
- When test plan writing begins
- The proportion between testers, developers, and stuff (entire applications or modifications) to be tested
- What kinds of methods are used for integration testing.
Currently, I test web apps, and test plans are managed with TestLink. Bugs found are reported in Bugzilla. I am trying to automate tests with Selenium RC, but it takes some time to write the plans and write the code to execute in Selenium. And time is something that I don't have, because I am testing 3 or more applications.
Most of my problems are caused by differences between the test environment and the production environment. But tests take too long to begin: if someone finishes a modification today, it will take about 3 weeks for me to begin testing it. And the test process queue keeps growing.
It would be really good if anyone could suggest something that would improve the testing process (like more people testing, etc.). But mostly, I would like to hear how the testing process works in other places.
Thanks.
For us, the integration test is generally performed by the developer before a commit. Just a simple surface test to see that nothing obvious is broken.
Then we deploy the code from trunk on a development server connected to a test database that is a complete copy of the production database, and have the users responsible for the new functionality do acceptance tests and further integration tests on that server.
We have a concept of "super user" to organize this. Super users are responsible for educating other users in their area of expertise and answering helpdesk questions related to the usage of the system. The super users are also the people who are involved in feature requests and requirement discussions for all features related to their work.
So when a new feature is developed, the super user is the one who first validates the design suggestion and then performs the final stages of testing before deployment.
This setup is good because it ensures that domain experts are the ones who validate the system functionality, and it removes some responsibilities from the IT department.
The bad thing is that they are not usually very technical or good testers. As users, they tend to see the system for what it is rather than what it could be. The fact that they also have their ordinary functions in the organization as full-time employees means that they are a very limited resource in terms of testing.
I'll assume you mean integration testing as in checking that the parts of the application work together (for example, getting the database and the website to work together after the DBA and web developer respectively say they're done). I'll use an example from my current project.
I code-generate several configuration files so I can observe the application with certain modules on/off, namely error reporting, authentication, debug-mode compilation, and with/without SSL. Development environments are likely to have "friendly error pages" turned off, no authentication, no SSL, etc.
I also use a build script to create a copy of the application for each variant of the config file.
It is helpful to pedantically reproduce the characteristics of production in staging and development as much as you can; use virtual machines if you lack the hardware.
I also wrote into the production code base a few pages that test the sort of things that break when code moves from one machine to another (does the DB connection work? do emails send? is the temp folder writable?) and made that page the home page of the server operator.
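The logic of such a page boils down to a handful of probes; if any throws, the page errors, which is itself the signal. A sketch (the connection string, SMTP host, and addresses are placeholders):

```csharp
// Sketch of a deployment self-test page's logic: probe the things that
// typically break when code moves between machines. Wire this into a
// diagnostics page; all names here are hypothetical.
using System.Data.SqlClient;
using System.IO;
using System.Net.Mail;

public static class DeploymentSelfTest
{
    public static void Run()
    {
        // Does the DB connection work?
        using (var conn = new SqlConnection(
            "Server=(local);Database=MyApp;Integrated Security=true"))
            conn.Open();

        // Is the temp folder writable?
        string probe = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        File.WriteAllText(probe, "ok");
        File.Delete(probe);

        // Do emails send?
        var smtp = new SmtpClient("smtp.example.com"); // hypothetical relay
        smtp.Send("noreply@example.com", "ops@example.com",
                  "Deployment self-test", "If you can read this, mail works.");
    }
}
```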
The key is automating as much as you can. Frequent integration testing catches issues earlier.
From check-in to packaging code for deployment, it takes me 8 minutes of automated work and half an hour of manual clicking for smoke tests.

Test accounts and products in a production system

Is it worth designing a system to expect test accounts and products to be present and active in production, or should there be no contamination of production databases with test entities, even if your shipping crew knows not to ship any box addressed to "Test Customer"?
I've implemented messaging protocols that have a test="True" attribute in the spec, and wondered whether a modern schema should include metadata for tagging orders, accounts, transactions, etc. as test entities that get processed just like any other entity, but stop just short of the point where money gets spent. I.e., it fakes charging an imaginary credit card and fakes the shipment of a package.
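In code, I picture something like this (purely a hypothetical sketch; the IsTest flag and gateway shape are mine, not from any spec):

```csharp
// Sketch of the idea: test entities flow through the real pipeline, but money
// never moves. Order.IsTest and PaymentGateway are invented for illustration.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public bool IsTest { get; set; } // the test="True" marker, persisted with the order
}

public class PaymentGateway
{
    public string Charge(Order order, string cardNumber)
    {
        if (order.IsTest)
            return "TEST-AUTH-" + order.Id; // fake authorization, no money spent

        return ChargeRealCard(order.Total, cardNumber);
    }

    private string ChargeRealCard(decimal amount, string cardNumber)
    {
        // ... call the real payment processor here ...
        throw new System.NotImplementedException();
    }
}
```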
This isn't expected to be a substitute for a fully separated testing, development, and QA database, but even with those, we've always had the well-known Test SKU and Test Customer in the production system. Harmless?
Having testing accounts in production is something I usually frown upon because it opens up a potential security hole. One should strive to duplicate as much of the production environment in testing as possible but there are obviously cases where that isn't possible. Expensive production only hardware is a prime example. I would say as a general practice it should be discouraged but as with all things if you can provide a reason which makes sense to you then you might overlook a hard and fast rule.
I imagine the Best Practice Police would state the mantra "never ever test in prod" and maybe even throw in "developers should not have access to prod".
However, I work on a mainframe-based system where there are huge differences between production and test/qa/qc; the larger the system, the more likely such a situation is. Additionally, the more groups that have a stake in the application, the more likely this is.
I need more than two hands to count how many times we could only duplicate a problem in the production environment. The option then becomes creating test tables/users/data or using live customer data.
At times we do also create test records in production tables, as some users/clients like having something they can search/retrieve that is always there.
So my advice is that it is OK to put test accounts/products into production if it will help to troubleshoot after go-live.
If your database is created from scripts in an automated fashion, then this becomes a non-question.
In my environment we use CruiseControl for continuous builds. The SQL scripts for generating the database are checked into CVS with everything else, and the database is rebuilt from those scripts on a daily basis.
Our test data is a second set of SQL scripts, which are run for the test database and are not run for the production database.
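The mechanics are simple enough to sketch. Something like this, assuming each script is a single batch (GO-separated scripts would need splitting first); the paths and the isProduction flag are hypothetical, and in our case the CruiseControl build supplies them:

```csharp
// Sketch of the two-script-set idea: schema scripts always run; test-data
// scripts run only outside production. All paths/flags are placeholders.
using System.Data.SqlClient;
using System.IO;
using System.Linq;

public static class DatabaseBuilder
{
    public static void Rebuild(string connStr, bool isProduction)
    {
        RunScripts(connStr, "sql/schema");        // checked into CVS with the code
        if (!isProduction)
            RunScripts(connStr, "sql/testdata");  // never runs against production
    }

    private static void RunScripts(string connStr, string folder)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            // sort so scripts run in a defined order (e.g. numbered prefixes)
            foreach (string file in Directory.GetFiles(folder, "*.sql").OrderBy(f => f))
                using (var cmd = new SqlCommand(File.ReadAllText(file), conn))
                    cmd.ExecuteNonQuery();
        }
    }
}
```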
Given our environment, test data never touches the production database.
This solution really works great for us.
I wouldn't put any test data in a production system, nor would I want to have access to that system as a developer.
I'm working in an industry with very sensitive medical and financial information, and having test data there would make it impossible to distinguish production data from data that came out of the testing system.
IMHO the best practice is to completely separate these two worlds and invest in setting up a procedure to prepare a comprehensive testing environment.
In our ERP systems (internally accessible only) we have test data so that when we move changes from the test to the production environment we can test the whole process. I view that data as a necessary evil, since subtle configuration differences between systems can cause catastrophic results, so once a change is in production we test it fully before "releasing" it to the users.
As I said though, these are internal apps only, so the security risks are lessened somewhat - that's a very valid concern.
Never ever test in prod, even though that is where all the revenue is generated/stats are collected/magic happens...?
Always have a production test plan. There are going to be problems that happen on prod, or, if you are unlucky, only happens on prod. If you don't have anything in place, the first time you need to test on prod (which are usually high-stress cases) you'll be up the creek without a paddle.
It's not harmless to have test data on prod; you do need to be careful.