My company has decided that each developer should have their own local database for testing the ASP.NET web application. I am against the decision; I think there should be a centralised test database. But the developers argue that if we all test against the same tables, and another developer deletes records while I am adding records at the same time, we will get incorrect results. So what do you guys think?
Each developer should have their own database if they are going to be making any changes to the data structure or values. If one of the devs adds or removes something that prevents a portion of code from working, it will only affect that dev instead of all the devs. This also allows devs to get more comfortable with the data structure and with making changes to it, since they can break and fix their own environment as much as they want.
There should be a testing environment that has its own database where the current revision of the project can run for tests.
No, it's not good practice for integration tests, for example, but it is acceptable for functional tests (or smoke tests) in a development environment. The point is that it's better to integrate the database and the application from the very first steps; that way you avoid issues, bugs, and wasted time. Also, make sure the developers don't make ad-hoc changes to the schema: you should have a unique, well-identified database version.
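Whatever you decide, keeping every copy of the database on the same, well-identified schema version is much easier if the schema lives in source control as versioned scripts. Below is a minimal sketch of that idea, assuming a migrations/ folder of numbered .sql files and a local SQLite file per developer; the paths and names are only illustrative:

    # Minimal migration runner: every developer rebuilds/updates a private local
    # database from the same versioned scripts, so each copy has a known schema version.
    import sqlite3
    from pathlib import Path

    MIGRATIONS_DIR = Path("migrations")   # e.g. 001_create_users.sql, 002_add_orders.sql
    DB_PATH = "local_dev.db"              # each developer's private copy

    def applied_versions(conn):
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version TEXT PRIMARY KEY)")
        return {row[0] for row in conn.execute("SELECT version FROM schema_version")}

    def migrate():
        conn = sqlite3.connect(DB_PATH)
        done = applied_versions(conn)
        for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
            version = script.stem.split("_")[0]      # leading number identifies the version
            if version in done:
                continue
            conn.executescript(script.read_text())
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
            conn.commit()
            print(f"applied {script.name}")

    if __name__ == "__main__":
        migrate()

Each developer can then drop and rebuild their local copy at will without affecting anyone else, and every copy still carries the same version history.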
"But the developers argue that if we all test against the same tables, and another developer deletes records while I am adding records at the same time, we will get incorrect results."
Sounds like a really bad web application. A good web application should be able to support more than one simultaneous user!
For that reason alone, perhaps your team should at least have a phase where people are sharing a common database.
I'm having an issue with a web application I am responsible for maintaining.
The system experiences regular bugs, and our support vendors are always asking us to see if we can "replicate the error in UAT". This is obviously a reasonable request. A lot of the time, for various reasons (some of which are clear, some of which are not), these errors are not present in UAT. This lack of bug reproducibility in a testing environment is adding huge amounts of friction to the bug resolution process.
There are 3 key pieces of our system architecture where these bugs are flaring up (the CMS, the API layer, and the database). I am proposing we set up a system job that perpetually clones these 3 parts of the system into a sandboxed test environment. This cloning would happen periodically (e.g., once every 24 hours) and automatically.
Is there a technical term for this sort of environment? Is this an established method of helping diagnose system issues? Is there somewhere I can read up on the industry best practices for establishing something like this? Thanks.
The technical term for this kind of process is replication. It is often done for systems like databases, but normally not for testing purposes; rather, it is done to increase availability, with the replica used as a failover spare.
An exact copy of a production system, with all of its data, is not something you'll find often, due to the high demand on resources. At some point the two systems also have to differ: most systems (that I know of) have tons of interfaces, and you just can't copy a complete system.
Also: you only need the copy of the production system when you are actually debugging an issue. And if you are in the middle of that, you probably don't want everything to go away and be replaced by a new copy.
So instead I would recommend setting up scripts that let you obtain a copy of the relevant parts on demand.
Also consider how you might modify your system to make it easier to set up such a copy.
For example, when you have all the setup automated (with Chef/Docker or similar), you should be able to set up the same system again anywhere you want, so now you just have to get the production data over.
Which is an interesting point: production data often contains secret information (because it is vital to the business, or because it is personal data). You don't want this kind of stuff hanging around in a test system that everybody can access.
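To make that concrete, here is a rough sketch of an on-demand refresh script that copies only the relevant tables and scrubs personal data before anyone can read it in the sandbox. It assumes a PostgreSQL database and the standard pg_dump/psql command-line tools; the connection strings, table names, and columns are invented for illustration:

    # Pull a fresh copy of the relevant production tables into the sandbox,
    # then anonymise sensitive columns so the data is safe to share with testers.
    # (Assumes the sandbox schema already exists, e.g. from the automated setup.)
    import subprocess

    PROD = "postgresql://readonly@prod-db/app"        # illustrative connection strings
    SANDBOX = "postgresql://dev@sandbox-db/app"

    def refresh_sandbox():
        # Dump only the tables relevant to the issue being investigated.
        dump = subprocess.run(
            ["pg_dump", "--data-only", "--table=orders", "--table=customers", PROD],
            check=True, capture_output=True, text=True,
        ).stdout

        # Load the dump into the sandbox database.
        subprocess.run(["psql", SANDBOX], input=dump, check=True, text=True)

        # Scrub personal data in place before anyone looks at it.
        scrub = (
            "UPDATE customers "
            "SET email = 'user' || id || '@example.test', phone = NULL;"
        )
        subprocess.run(["psql", SANDBOX, "-c", scrub], check=True, text=True)

    if __name__ == "__main__":
        refresh_sandbox()

Run on demand (or from a nightly job once it has proven itself), this gives you the "clone of production" behaviour without leaving secrets lying around in a system everybody can access.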
I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy?
The reason I ask is that there seems to be a lot of waste (and risk) in using an agile approach of deploying changes monthly, if 90% of the applications don't change. What I mean by this is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH, when they didn't have any feature changes or hot fixes. They had to be tested simply because the business is rolling updates every month.
And the risk that is involved... if a mission-critical application is deployed, that takes a few weeks, and multiple departments, to test, is it realistic to expect to have to constantly regression-test this application?
One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and deploying this framework) means the mission-critical app can never just enjoy the same code base for a long time.
These applications share the same database, hence the need for the constant testing. I'm aware of TDD and automated tests, but that doesn't exist at the moment.
Any advice?
The idea behind a framework is that it's supposed to be the "slow moving code". You shouldn't be changing the framework as frequently as the applications it supports. Try getting the framework on a slower development cycle: perhaps a release no more often than every three or six months.
My guess is that you're still working out some of the architectural decisions in this framework. If you think the framework changes really need to be that dynamic, find out what parts of the framework are being changed so often, and try to refactor those out to the applications that need them.
Agile doesn't have to mean unlimited changes to everything. Your architect could place boundaries on what constitutes the framework, and keep people from tweaking it so readily for what are likely application shortcuts. It may take a few iterations to get it settled down, but after that it should be more stable.
I wouldn't call it an Agile approach unless you have (unit) test coverage. One of the key tenets of Agile is that you have robust unit tests that provide a safety net for frequent refactoring and new feature development. There is a lot of risk in your scenario. Deploying twenty to thirty applications a month when 1) most of them don't add any new business value to their users; and 2) there are no tests in place would not qualify as a good idea in my book. And I'm a strong believer in Agile. But you can't pick and choose only the parts of it that are convenient.
If the business application has not changed, I wouldn't release it just to compile in a new framework. Imagine every .NET application needing to be re-released every time the framework changed. Reading into your question, I wonder if the common database is driving the need for this. If your framework is isolating the schema and you're finding you need to rebuild apps whenever the schema changes, then you need to tackle that problem first. Check out Refactoring Databases, by Scott Ambler for some tips.
As another aside, there's a big difference between integration tests and unit tests. Your regression tests are integration tests. It's very difficult to automate at that level. I think the breakthroughs that are happening in testing are all about writing highly testable code that makes unit testing more and more of the code base possible.
Here are some tips I can think of:
1. Break the framework into independent parts, so that changing one part requires running only a small portion of the test cases.
2. Employ a test-case prioritization technique; that is, rerun only a small portion of each application's test pool, selected by some strategy. The "additional branch coverage" and ART techniques usually perform better than the others, but they require branch-coverage information for each test case (see the sketch after this list).
3. Update the framework less frequently. If an application doesn't need to change, it's fine not to change it, so it's also fine for those applications to keep using the old version of the framework. You could update the framework for them, say, every 3 months.
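For point 2, the "additional" strategy is essentially a greedy loop: repeatedly pick the test that covers the most branches not yet covered by the tests already selected. A toy sketch in Python follows; the coverage data would have to come from your own instrumentation, and the test names are made up:

    # Greedy "additional coverage" prioritisation: always pick next the test case
    # that covers the most branches still uncovered by the tests chosen so far.
    def prioritise(coverage):
        """coverage: dict mapping test name -> set of branch ids it covers."""
        remaining = dict(coverage)
        covered = set()
        order = []
        while remaining:
            best = max(remaining, key=lambda t: len(remaining[t] - covered))
            if not remaining[best] - covered:
                if covered:
                    covered = set()          # every remaining test adds nothing new: start a fresh pass
                    continue
                order.extend(remaining)      # tests that cover no branches at all
                break
            order.append(best)
            covered |= remaining.pop(best)
        return order

    if __name__ == "__main__":
        demo = {
            "test_login": {"b1", "b2"},
            "test_checkout": {"b2", "b3", "b4"},
            "test_profile": {"b1"},
        }
        print(prioritise(demo))   # ['test_checkout', 'test_login', 'test_profile']

You then rerun tests from the front of that order until your time budget runs out.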
Regression testing is a way of life. You will need to regression-test every application before it is released. However, since time and money are not usually infinite, you should focus your testing on the areas with the most changes. A quick and dirty way to identify these areas is to count the lines of code changed in a given business area, say "accounting" or "user management". Those should get the most testing first, along with any areas you have identified as "mission critical".
Now I know that lines of code changed is not necessarily the best way to measure change. If you have well defined change requests, it is actually better to evaluate these hot spots by looking at the number and complexity of the change requests. But not everyone has that luxury.
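If the code is in Git (or any version control system that can report per-file change counts), that quick-and-dirty hotspot count is only a few lines. Here is a sketch that sums added and deleted lines per top-level directory between two release tags; the tag names and directory layout are placeholders:

    # Rough change "hotspots": lines added + deleted per top-level directory
    # between two release tags, using git's --numstat output.
    import subprocess
    from collections import Counter

    def hotspots(old_tag, new_tag):
        out = subprocess.run(
            ["git", "diff", "--numstat", f"{old_tag}..{new_tag}"],
            check=True, capture_output=True, text=True,
        ).stdout
        totals = Counter()
        for line in out.splitlines():
            added, deleted, path = line.split("\t")
            if added == "-":                  # binary files report "-" instead of counts
                continue
            area = path.split("/")[0]         # e.g. "accounting", "usermanagement"
            totals[area] += int(added) + int(deleted)
        return totals.most_common()

    if __name__ == "__main__":
        for area, churn in hotspots("release-2024.04", "release-2024.05"):
            print(f"{churn:6d}  {area}")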
When you are talking about making a change to the framework, you probably don't need to test all the code that uses it. If you're talking about a change to something like the DAL, that would basically amount to everything anyway. You just need to test a large enough sample of the code to be reasonably comfortable that the change is solid. Again, start with the "mission critical" areas and the area most heavily affected.
I find it helpful to divide the project into 3 distinct code streams: Development, QA, and Production. Development is open to all changes, QA is feature-locked, and Production is code-locked (well, as locked as it gets anyway). If you are releasing to production on a monthly cycle, you probably want to branch a QA build from the Development code at least 1 month before the release. Then you spend that month acceptance-testing the new changes and regression-testing everything else that you can. You'll probably have to complete testing the changes about a week before the release so that the app can be staged and you can dry-run the installation a few times. You won't get to regression-test everything, so have a strategy ready for releasing patches to Production. Don't forget to merge those patches back into the QA and Development code streams too.
Automating the regression tests would be a really great thing, theoretically. In practice, you end up spending more time updating the testing code than you would spend running the test scripts manually. Besides, you can hire two or three testing monkeys for the price of one really good test script developer. Sad but true.
I want to improve the integration testing methods where I work, and I would like to know how this process works in other places.
Things like:
- When test plan writing begins
- The ratio between testers, developers, and the amount of work (entire applications or modifications) to be tested
- What kinds of methods are used for integration testing
Currently, I test web apps, and test plans are managed with TestLink. Bugs found are reported in Bugzilla. I am trying to automate tests with Selenium RC, but it takes time to write the plans and the code for Selenium to execute, and time is something I don't have, because I am testing 3 or more applications.
Most of my problems are caused by differences between the test environment and the production environment. Tests also take too long to start: if someone finishes a modification today, it will be about 3 weeks before I can begin testing it, and the test queue keeps growing.
It would be really good if anyone could suggest something that would improve the testing process (like more people testing, etc.). But mostly, I would like to hear how the testing process works in other places.
Thanks.
For us, integration testing is generally performed by the developer before a commit. Just a simple surface test to see that nothing obvious is broken.
Then we deploy the code from trunk to a development server connected to a test database that is a complete copy of the production database, and have the users responsible for the new functionality do acceptance testing and further integration tests on that server.
We have a concept of "super user" to organize this. Super users are responsible for educating other users in their area of expertise and answering helpdesk questions related to the usage of the system. The super users are also the people who are involved in feature requests and requirement discussions for all features related to their work.
So when a new feature is developed, the super user is the one who first validates the design suggestion and then performs the final stages of testing before deployment.
This setup is good because it ensures that domain experts are the ones who validate the system's functionality, and it removes some responsibilities from the IT department.
The bad thing is that they are not usually very technical, or good testers. As users they tend to see the system for what it is rather than what it could be. The fact that they also have their ordinary functions in the organization as full-time employees means they are a very limited resource in terms of testing.
I'll assume you mean integration testing as in checking that the parts of the application work together (for example, getting the database and the website to work together after the DBA and the web developer respectively say they're done). I'll use an example from my current project.
I code-generate several configuration files so I can observe the application with certain modules on or off: error reporting, authentication, debug-mode compilation, and with or without SSL. Development environments are likely to have "friendly error pages" turned off, no authentication, no SSL, etc.
I also use a build script to create a copy of the application for each variant of the config file.
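As a rough sketch of what that generation step can look like (the module names, paths, and config format here are invented for illustration, not taken from my actual project):

    # Generate one configuration file per test variant and copy the application
    # alongside it, so each combination of modules can be exercised separately.
    import shutil
    from pathlib import Path

    APP_DIR = Path("webapp")                  # the built application (illustrative path)
    VARIANTS = {
        "debug":    {"friendly_errors": False, "authentication": False, "ssl": False},
        "staging":  {"friendly_errors": True,  "authentication": True,  "ssl": False},
        "prodlike": {"friendly_errors": True,  "authentication": True,  "ssl": True},
    }

    def build_variants(out_root="build"):
        for name, flags in VARIANTS.items():
            target = Path(out_root) / name
            if target.exists():
                shutil.rmtree(target)
            shutil.copytree(APP_DIR, target)  # one full copy of the app per variant
            config = "\n".join(f"{key} = {str(value).lower()}" for key, value in flags.items())
            (target / "app.config").write_text(config + "\n")
            print(f"built variant '{name}' in {target}")

    if __name__ == "__main__":
        build_variants()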
It is helpful to pedantically reproduce the characteristics of production in staging and development as much as you can; use virtual machines if you lack the hardware.
I also wrote into the production code base a few pages that test the sort of things that break when code moves from one machine to another (does the DB connection work, do emails send, is the temp folder writable?) and made that page the home page for the server operator.
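The same checks also work as a standalone script rather than a page. Here is a rough sketch using only Python's standard library; the hosts and ports are placeholders:

    # Quick "does this environment actually work" checks: the things that
    # typically break when code moves from one machine to another.
    import smtplib
    import socket
    import tempfile

    DB_HOST, DB_PORT = "db.internal", 5432        # placeholder endpoints
    SMTP_HOST, SMTP_PORT = "mail.internal", 25

    def check(name, fn):
        try:
            fn()
            print(f"OK    {name}")
        except Exception as exc:
            print(f"FAIL  {name}: {exc}")

    def db_reachable():
        socket.create_connection((DB_HOST, DB_PORT), timeout=5).close()

    def mail_reachable():
        smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=5).quit()

    def temp_writable():
        with tempfile.NamedTemporaryFile() as probe:
            probe.write(b"probe")

    if __name__ == "__main__":
        check("database connection", db_reachable)
        check("outgoing mail server", mail_reachable)
        check("temp folder writable", temp_writable)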
The key is automating as much as you can. Frequent integration testing catches issues earlier.
From check-in to packaging code for deployment, it takes me 8 minutes of automated work and half an hour of manual clicking for smoke tests.
Like almost anyone who's been programming for a while, I'm familiar with the term "production code" and have a vague sense of what it means. However, can someone offer a semi-rigorous definition, since it seems Wikipedia and Google can't? It seems like there are a lot of gray areas in what counts as production, such as internal tools that are used by a small group of people and therefore not "formalized" in terms of UI, documentation, etc. and open source apps that are feature complete, reasonably bug free and working, but lack polish, UI and extensive testing.
When your code runs on a production system, that means it is being used by the intended audience in a real-world situation.
Production code, however, does not necessarily mean robust, reliable, or stable code. The Daily WTF provides plenty of evidence in this regard.
Production means anything that you need to work reliably and consistently.
Whether it's a build script or a public-facing web server.
When others rely on your code, particularly folks who may not understand it (even "smart" developers who are not in your group but are using a library you wrote), that code is production code.
It's production because "work stops" and "money is lost" when the production code fails.
The definition as I understand it is that production code is any code that is installed or in use on a live, non-test-bed system. A server used internally to a company is a production system if it is the live system used by the employees of the company. The point here is that code running on a server internal to the company writing the code can be production code.
Usually, a good distinction when looking at internal code is whether or not the group maintaining the code is separate from the group using the code. If the groups are separate, odds are that the code is production code. If running the business depends on the code, then it is certainly production code, even if it is developed and maintained in-house.
EDIT: The short answer: If you are "betting the farm on it", it is "production".
This is a great question--an absolutely critical distinction that routinely gets everyone in trouble due to misunderstandings. The question of what is "production" is a subset of the related question of what is an "environment".
So part of the answer is that "production" is THE "environment" that is most important and is most trusted as THE "real" thing.
So now we must define "environment" (and then revisit "production"). We are still far from a satisfactory answer.
We programmers use the term "environment" constantly to refer to computer systems consisting of hardware that is executing software. That software is the code that we wrote plus software that it depends upon, which was written by others. We write our code and integrate it with the other software, then we typically run the integrated software through an escalating series of tests (unit tests, integration tests, functional tests, acceptance tests, regression tests, etc.), until we finally run the integrated software in the full manner in which it was intended.
Of course, not everything is fully automated. There are usually numerous people involved, and they have manual processes to perform. We programmers look for ways to automate as many of these processes as possible, but there is always a "man/machine boundary" in the systems we work on. Often, there are many such boundaries in any particular case.
On the other hand, there may not be any significant automation at all. For example, we spoke of "production" way back when we had a room full of people performing manual labor which produced a product. So, there doesn't have to be any automation present in our "production" "environment". There is also a middle ground, where the automation involved does not include software, such as in the case of a person running a loom to weave cloth.
Also, there may not be a product, since we have adapted our language of "production" "environment" to include product-less service providers.
Likewise, the testing may not involve software, since we may be testing a non-software-driven machine (e.g., the loom) or even the people (training and evaluation).
Now we have touched on all the crucial elements of an "environment":
- there is a purpose, an intent, being pursued
- an intent requires an intender, so there must be a sponsor (a person or group, but not a machine) that specifies the intent
- that intent is pursued through various processes that are performed by various actors
- those actors may be people, they may be software executing on hardware, or they may be non-software-driven machines, so there may or may not be automation present
Now we can properly and fully define our original terms.
An environment consists of all the processes and their actors that collaborate to pursue a particular intent on behalf of its sponsor. That means software executing on hardware, that means non-software-driven machines, and that means people performing their various duties. It is the intent that primarily defines an environment, not its processes or its actors.
Furthermore...
If the intent being pursued in a particular environment is the sponsor's ultimate goal, which usually involves producing a product or providing a service in exchange for money, then we refer to that environment as production.
Now we can go a bit further.
If the intent being pursued in an environment is the verification of processes and their actors in preparation for production, we call that a test environment.
We further call it an integration environment if that testing involves the initial joining together of significant individuals or groups of processes and their actors.
If that preparation involves the "programming" of human actors to perform new processes, or the subsequent verification (evaluation), then we call that a training environment.
Armed with these distinctions and definitions, we can now understand several common scenarios.
An environment can be mislabeled with a name that does not match its intent, such as when a training environment is used as test.
An environment can be grossly misused, such as when integration or training is done in production.
An environment can be misrepresented, such as when key processes or actors are left unidentified (e.g., manual reconciliations, or even by ignoring the people altogether).
An environment can be retasked, by repurposing its processes and actors to a new intent. A very successful technique for some organizations is to routinely "flip" several sets of actors (servers hosting software) between production, test, training, and integration upon each release.
In most cases, a single actor (person or hardware) can execute multiple processes which can participate in multiple environments. For example, a single computer server can host software that performs production transactions while also hosting other software that performs test or training functions.
Normally, a single instance of an actor should participate in only one environment at a time. On very rare occasion, a single actor can be shared across environments if the intents are mutually compatible. Most of the time, it is very unwise to attempt such sharing because the intents are not really compatible. A perfect example is running a test process on a server that also supports production processes, resulting in downtime because the test caused the entire server to fail.
Therefore, the intent of an environment must be construed with very wide latitude, to include concepts such as availability, reliability, performance, disaster recovery, accuracy, precision, repeatability, longevity, etc. This means that the actors and processes must often be construed to include things like providing power, cooling, backups, and redundancy.
Finally, note that the situation can get quite complex. For example, a desktop computer (actor) may be tasked by the development team (sponsor) to host their source control (process), which the team relies upon for their primary jobs (production). Nevertheless, the IT staff sees that same desktop computer as simply a developer workstation (development, not production) and treats it with contempt and nonchalance when it develops a hardware problem. But the developers are producing production code, so aren't they also part of production? Perspective matters.
EDIT: Production quality
A solid verification (testing) methodology should take packaged code from development and run it through a series of tests (integration, TQA, functional, regression, acceptance, etc.) until it comes out the other side "stamped" for production use. However, that makes the package production quality, but not actually production. The package only becomes production when a sponsor actually deploys it into an environment with that ultimate level of intent.
However, if your organization merely produces that package (its product) for the consumption of others, then such a release comes as close to production as that organization will experience with respect to that product, so it is common to stretch the term production to apply rather than clarify that it is production quality. In reality, that organization's production environment consists of the actors and processes involved in its development/release efforts that result in that product.
I said it could get quite complex...
Any code that will be used by its intended userbase would fit my definition of 'production code'.
Of course, the grey area in that definition would be clearly defining who your userbase is.
- The production software can handle the necessary workload without disruption or degradation of the service
- The software has been successfully tested in different production scenarios
- Transforming a working prototype into production software that runs on a fail-safe, redundant architecture and can work in a real business (i.e. production) environment takes time, code refactoring, and attention to detail
- The production code has an acceptable level of maintainability and is reasonably well commented
- The documentation manual explains the functionality and all features, and facilitates maintenance
- If the production software is an international service or application, it must be localized
- Production code is used by end users, often customers, under conditions described in a Terms-of-Service agreement
- Production software does not necessarily mean reliable, mission-critical software
- The software does well what it was intended to do
- Log files provide an accurate picture of run-time performance and software reliability metrics, which facilitates debugging and software maintenance
I think the best way to describe it is as any code that "leads to" deployment and "follows" deployment. Deployment itself is defined as all of the activities that make a software system available for use. If your code is ready to be used by people, in-house or otherwise, then it is production code.
In simple words: production code is code that is live and in use by its intended audience.
The term "production code" mixes two different concepts. One is deployment management and the other is release life cycle.
In the strict sense of the word, a system is in production when it is being used as part of business or service operations. What's not in production are development, testing, QA, demo, and staging systems. A production system does not immediately imply quality.
From the release life cycle's point of view, a "production" build is the build that is released to the general public or to clients. It is the stage after pre-alpha, alpha, beta (feature complete, code complete, etc.), and release candidate. For shrink-wrapped products that cannot easily deploy updates, reaching the production stage likely implies a series of tests and bug fixes.