Sprint & Acceptance test phase - in Scrum and XP from the Trenches [closed]

I'm just in the middle of the book Scrum and XP from the Trenches, reading through the chapter How we do testing, in particular the part about the Acceptance Testing Phase (I'll refer to it as ATP). The author suggests one approach:
Approach 2: “OK to start building new
stuff, but prioritize getting the old
stuff into production”
but (in my opinion, or perhaps I'm missing something) that approach doesn't refer to the ATP at all. There's one sprint, then another, but where is the ATP? Or perhaps, in the author's mind, the first sprint contains the ATP. If so, how does that square with the statement from the subchapter Should acceptance testing be part of the sprint? several pages earlier:
We waver a lot here. Some of our teams
include acceptance testing in the
sprint. Most of our teams however
don’t, for two reasons: A sprint is
time-boxed. Acceptance testing (using
my definition which includes debugging
and re-releasing) is very difficult to
time-box. What if time runs out and
you still have a critical bug? Are you
going to release to production with a
critical bug? Are you going to wait
until next sprint? In most cases both
solutions are unacceptable. So we
leave manual acceptance testing
outside. If you have multiple Scrum
teams working on the same product, the
manual acceptance testing must be done
on the combined result of both team’s
work. If both teams did manual
acceptance within the sprint, you
would still need a team to test the
final release, which is the integrated
build of both team’s work.
So guys (here is the question): how do you understand that chapter?
Apart from that, here are my thoughts: the author says the ATP shouldn't be part of the sprint because of the critical-bug issue. Well, can't we have such an issue without the ATP in the sprint? Yes we can. And either way (whether we have the ATP in the sprint or not) we are in trouble. Bottom line: if the sprint timebox is long enough (perhaps that was the author's idea in Approach 2) it can accommodate the ATP as well. That would prevent a great many errors from surfacing after release.
Thanks, Pawel
P.S. Do you know any pages where there's a chance to have an active chat with the book's author?
P.S. 2 I was just enlightened when reading through my question before posting it: perhaps by saying:
Approach 2: “OK to start building new
stuff, but prioritize getting the old
stuff into production”
author meant: Sprint 1 is finished, and the codebase (version 1.0.0) enters the ATP. At the same time we start Sprint 2 for release 1.1.0 and simultaneously fix bugs spotted in version 1.0.0. When the codebase prepared during Sprint 1 is spotless, it goes live. So here we have some kind of overlapping. But if that was the author's intention (I'm sure it wasn't, though) then it breaks fundamental principles:
After the sprint, new software is available (it isn't, because we are waiting for the ATP to end)
If we consider a sprint as sprint+ATP :), then the sprint is not time-boxed.
All in all the book is a great read, but that chapter is a bit fuzzy (a nice word I picked up during the reading, too) to me.

Acceptance Test has little or nothing to do with building software.
You build as quickly and as well as you can.
Users accept some features (or reject some features).
You don't find "critical bugs" via acceptance test. The software already works. The users just don't like the way it works. It's not a bug. It's a miscommunication which will be fixed in the next sprint.
You can (with some tweaks) deploy the Accepted software, which is a subset of the tested software. People do this all the time.
We often conceal features, pages, buttons, screens, whatever, because they were not accepted. They passed unit test. They work. But for some reason they weren't accepted. So they aren't deployed.
Often, the user's original definitions were faulty, and the unit tests need to be fixed so that the code can be fixed and deployed in the next release.
Acceptance of a feature has nothing to do with whether it works or which sprint it was built in. It might be nice if it were all one smooth package. But it usually isn't one smooth package. That's why we have to be Agile.

IMO, at the beginning of a sprint the acceptance criteria should be well known and fixed for each user story. To be able to mark a story as "done" in the sprint review, the acceptance tests have to pass. Thus, IMO, the ATP belongs in the sprint!
I'd like to refer to "Agile Estimating and Planning" by Mike Cohn. He advocates writing the acceptance criteria on the user-story post-its. They are the basis for approval in the sprint review. From this I derive the need to have the ATP in the sprint!
Changing requirements or acceptance criteria result in new user stories. But you never change the ones in progress.
If the acceptance tests are to be automated, this work can be done during the sprint. But the underlying criteria should already be fixed at the beginning of the sprint.
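To make that concrete, a criterion fixed at sprint start can be turned into an automated test during the sprint. A minimal sketch in Python; the Order class and its "a new order starts pending and can be cancelled" story are purely hypothetical examples, not anything from the book:

```python
# Hypothetical user story: "a new order starts pending and can be cancelled".
# The Order class is an invented stand-in for the feature under development.
class Order:
    def __init__(self):
        self.state = "pending"  # acceptance criterion 1: new orders start pending

    def cancel(self):
        # acceptance criterion 2: only pending orders may be cancelled
        if self.state != "pending":
            raise ValueError("only pending orders can be cancelled")
        self.state = "cancelled"

# Acceptance tests derived directly from the criteria fixed at sprint start.
def test_new_order_is_pending():
    assert Order().state == "pending"

def test_pending_order_can_be_cancelled():
    order = Order()
    order.cancel()
    assert order.state == "cancelled"
```

Running these (e.g. with pytest) at the sprint review gives a yes/no answer to "is the story done?".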

Related

Is requirement engineering obsolete in the Scrum way of working? [closed]

The questions may seem strange!
In the project I am working on now, the Scrum methodology was adopted three months ago. We used to follow a V-model, as is standard in the embedded industry.
Our project ran into some trouble and this decision was made. What currently happens is that the customer (Product Owner) gives top-level requirements directly to the development team; the requirements team is just one part of it.
The development team works on them and shows the final outcome to the Product Owner, and changes are made if needed. Once the Product Owner is OK with the result, the changes are reported to the requirements team, who document them and pass them on to the test team.
My problem with such an approach is that in this process we are effectively making the requirements team and the test team obsolete. They come too late into the process.
Is this the way Scrum works? In this process everything is driven by the development team and the others are basically spectators.
Somewhere I saw that we could still have the V-model within the Scrum methodology?
Edit:
I understand the tiny V-model releases every sprint. But my question is: do they all work in parallel? For example, in the traditional V-model, which is a modified waterfall, there was always a flow - the requirements team released the requirements to development and test, both worked in parallel on design, and once development was completed the test team started testing. How is that flow handled in the Scrum way of working?
You have mentioned that "The sprint is not complete until the requirements and test parts are done for each story." In our project at least the requirements part is being done (the test team is completely kept out, and the testing is more or less done by the development team on the product). But the requirements job is more or less a documentation job.
The entire Scrum is being driven from the development team's perspective. We are seeing scenarios where the development team decides how certain functions work (because the initial concept is too difficult or too complex for them to implement).
There is no boundary being drawn at any level! Is this the way Scrum is supposed to work?
The test team on the project is more or less demoralized at the moment. They know very clearly that any issue they find at the system-test level is not going to be given much attention. The usual excuse from the development team is that they don't see the issue on their machines.
Having a separate requirement engineering team is obsolete in the Scrum way of working. You should all be working together.
Scrum suggests that you should be working in multidisciplinary teams and working in small increments. You can think of this as doing tiny v-model releases each sprint. The sprint is not complete until the requirements and test parts are done for each story. You should consider them part of your definition of done.
I'd suggest a good point for you is to actually read the Scrum Guide. It has the following to say about the make up of development teams:
Development Teams are cross-functional, with all of the skills as a team necessary to create a product Increment;
Scrum recognizes no titles for Development Team members other than Developer, regardless of the work being performed by the person; there
are no exceptions to this rule;
Scrum recognizes no sub-teams in the Development Team, regardless of particular domains that need to be addressed like testing or business
analysis; there are no exceptions to this rule; and,
Individual Development Team members may have specialized skills and areas of focus, but accountability belongs to the Development Team as
a whole.
As an aside, I have some experience working on an embedded system with Agile methods, and we had great success using automated testing to replace manual testing. Our testers pretty much became responsible just for running the test suite on various hardware, physically running the tests. We even had the tests fully built into the production process; every new piece of hardware went through (a subset of) our test suite straight off the assembly line!

Is testing an essential part of scrum?

I don't know if StackOverflow is the right place for this question, but I am still going to ask it.
Recently I have been doing some research on software methodologies, and one question came up on which I couldn't find a conclusive answer:
Is testing an (essential) part of Scrum, or can it be seen as a separate method? I know that when practicing a software methodology, things can be different in practice from what the theory describes. But in this case I just want the plain facts: a description of testing in relation to Scrum.
No, I think it's more correct to say that testing is a vital part of the agile process.
The scrum is the project management side of things, getting the user stories from the customer for a specific sprint and then letting the team loose to do their work, with the daily scrum meetings.
So, while testing may be part of the deliverables decided during the initial get-together with the customer, and it might be raised in the daily scrum meetings, it's not really required for the scrum process to work. The customer may (foolishly) not have a testing requirement and the developers may strike no testing problems.
Testing can be a part of your 'Definition of Done' or you can designate testing/defect-reduction sprints. Ultimately it depends on your release schedule, how your business operates, and your customers' requirements and expectations.
The scrum process does not declare that any particular type of testing has to be done within the confines of a sprint.
But as an aside, you will find much more value if you can automate your testing processes.
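To make the automation point concrete, here is a minimal sketch in Python. The discounted_price function is a hypothetical feature from an earlier sprint, and the tests are the kind of automated checks a build server would rerun on every change:

```python
# Hypothetical feature delivered in an earlier sprint: a percentage discount.
# These regression tests are rerun automatically on every build, so a defect
# introduced in a later sprint is caught immediately rather than sprints later.
def discounted_price(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

def test_no_discount():
    assert discounted_price(80.0, 0) == 80.0

def test_half_price():
    assert discounted_price(80.0, 50) == 40.0

def test_invalid_percent_rejected():
    try:
        discounted_price(80.0, 120)
    except ValueError:
        pass
    else:
        assert False, "expected ValueError"
```

Once a suite like this exists, "testing" stops being a phase at the end of the sprint and becomes a property of every check-in.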
We find it easier to include all of the testing as part of the sprint process. The reason is that having 'testing sprints' allows complexity to compound if a defect is introduced in, say, sprint one, but testing and defect resolution don't occur until, say, sprint eight.

Understanding Scrum [closed]

I have been working as a .net developer following the waterfall model. When working on, say a 12 months project, usually my team follows Analysis, Design, Coding and Testing phases. But when it comes to following the Scrum process, I don't really understand how I need to deal with it.
Consider a sprint of 4 weeks with 10 items in the backlog. Let the sprint start now. If developers work on some backlog items for the first 10 days, I don't know whether testing (both SIT and UAT) will require JUST the remaining 10 days to complete. And then our sprint has no time left for last-minute bug fixes, and only a few bugs can be fixed IN THE PLANNED SPRINT.
And when we do development, how can we make sure that we keep the testing team busy apart from just preparing test cases and waiting for us to deliver the functionality?
This raises the question of whether we need to deliver the first task/feature within the first 3 days of the sprint, so that testers can be ready with their test cases to test that piece.
I also need to educate my client to help in adapting the Scrum process.
I need some guidelines, references or a case study to make sure that our team follows a proper Scrum process. Any help would be appreciated.
In an ideal Scrum team, testers and developers are part of the team and testing should occur in parallel of the development, the phases are overlapping, not sequential (doing things sequentially inside a Sprint is an anti-pattern known as Scrumerfall). And by the way, contrary to some opinions expressed here, an ultimate Scrum implementation produces DONE DONE stories so testing - including IST, UAT - should be done during the Sprint.
And no, testers don't have to wait for Product Backlog Items (PBIs) to be fully implemented to start doing their job; they can start writing acceptance test scenarios, automating them (e.g. with FitNesse), setting up test data sets, etc. (this takes some time, especially if the business domain is complicated) as soon as the Sprint starts.
Of course, this requires very close collaboration, and releasing interfaces or UI skeletons early will make the testers' job easier, but still, testers don't have to wait for a PBI to be fully implemented. And actually, acceptance tests should be used by developers as a DONEness indicator ("I know I'm done when the acceptance tests pass")1.
I'm not saying this is easy, but that's what mature (i.e. Lean) Scrum implementations and mature Scrum teams are doing.
I suggest reading Scrum and XP from the Trenches by Henrik Kniberg; it is a very good practical guide.
1 As Mary Poppendieck writes, the job of testers should be to prevent defects (essential), not to find defects (waste).
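As an illustration of that "done when the acceptance tests pass" workflow, here is a minimal, hedged sketch in plain Python (FitNesse fixtures would serve the same purpose). The TransferService PBI, its interface, and its rules are all hypothetical:

```python
# Hypothetical PBI: "transfer money between accounts". The interface is
# agreed at sprint start; the tests below exist before the implementation
# is finished and serve as the DONEness indicator for the story.
class TransferService:
    def transfer(self, accounts, src, dst, amount):
        """Move amount from src to dst; refuse if funds are insufficient."""
        if accounts[src] < amount:
            return False
        accounts[src] -= amount
        accounts[dst] += amount
        return True

def test_transfer_moves_money():
    accounts = {"A": 100, "B": 0}
    assert TransferService().transfer(accounts, "A", "B", 60)
    assert accounts == {"A": 40, "B": 60}

def test_transfer_refused_when_insufficient():
    accounts = {"A": 10, "B": 0}
    assert not TransferService().transfer(accounts, "A", "B", 60)
    assert accounts == {"A": 10, "B": 0}
```

The developer's claim "I'm done" is then just "both tests pass", which testers can verify without waiting for a demo.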
You definitely don't want to do all development in the first half of the sprint and all testing in the second half. That's just a smaller waterfall.
Your stories and tasks should be broken up into very small, discrete pieces of functionality. (It may take a while to get used to doing this, especially if the software you're working on is a monolithic beast like a previous job of mine that moved to using scrum.) At the beginning of the sprint the testers are developing their tests and the developers are developing their code, and throughout the sprint the tasks and stories are completed and tested. There should be fairly constant interaction between them.
The end of the sprint may feel a bit hectic while you're getting used to the methodology. Developers will feel burdened while they're working on the rest of the code and at the same time being given bugs to fix by the testers. Testers will grow impatient because they see the end of the sprint looming and there's still code that hasn't been tested. There is a learning curve and it will take some getting used to, the business needs to be aware of this.
It's important that the developers and testers really work together to create their estimates, not just add each other's numbers to form a total. The developers need to be aware that they can't plan on coding new features up until the last minute, because that leaves the testers there over the weekend to do their job in a rush, which will end up falling back on the developers to come in and fix stuff, etc.
Some tasks will need to be re-defined along the way. Some stories will fail at the end of the sprint. It's OK, you'll get it in the next sprint. The planning meeting at the start of each sprint is where those stories/tasks will be defined. Remember to be patient with each other and make sure the business is patient with the change in process. It will pay off in the long run, not in the first sprint.
The sprint doesn't end with perfect code; if there are remaining bugs, they can go into the very next sprint, and some of the other items that would have gone into the next sprint will need to be taken out. You're not ending a sprint with something perfect but, ideally, with something stable.
You are (ironically) applying too much rigor to the process. The whole point of an agile process like scrum is that the schedule is dynamic. After your first sprint, you work with the users/testing team to evaluate the progress. At that point, they will either ask you to change details and features that were delivered in the first sprint, or they will ask you to do more work. It's up to them.
It's only eventually, once you have determined the velocity of the team (i.e. how many stories it can reasonably accomplish in a sprint), that you can start estimating dates and the like for larger projects.
First of all, not every sprint produces a Big Release (if at all). It is entirely acceptable for the first sprints to produce early prototypes / alpha versions, which are not expected to be bug-free but are still capable of demonstrating something to the client. This something may not even be a feature - it can simply be a skeleton UI, just for the user to see how it will look and work.
Also, developers themselves can (and usually do) write unit tests, so whatever is delivered in a sprint should be in a fairly stable working state. If a new feature is half-baked, the team simply should not deliver it. Big features are supposed to be divided into chunks small enough to fit within a single sprint.
A Scrum team is usually cross-functional, which means that the entire team is responsible for building completed pieces of functionality every Sprint. So if the QA testers did not finish the testing, it only means the Scrum team didn’t finish the testing. Scrum counts on everyone to do their part. Whenever any is needed, the people with those skills take the lead, but they all have to do their part.
Try to do continuous integration. The team should get into this habit and integrate continuously. In addition, having automated unit test suite built and executed after every check-in/delivery should provide certain level of confidence in your code base. This practice will ensure the team has code in working and sane condition at all time. Also it will enable integration and system test early in the sprint.
Defining and creating (automated) acceptance tests will keep people with primary QA/testing skills busy and involved right from the sprint start. Make sure this is done in collaboration with Product Owner(s) so everyone is on the same page and involved.
We started our agile project with developers first (a lot of training in Enterprise Framework, etc.) in the first sprint. Then we added QA slowly into the second sprint. At the end of sprint 2, QA started testing. Closing in on the end of sprint 3, QA had picked up the pace and was more or less alongside the developers. From sprint 4 onward, QA is more or less done with testing when the developers have completed the stories. The items usually left to test are big elephants that involve replication of data between the new and legacy systems. And it is more 'ensure the data is OK' than actual testing.
We're having some issues with our Definition of Done - e.g. we have none. We're working on a completely new version of a system, and now that we are closing in on the end of sprint 6, we are getting ready for deployment to production. Sprint 6 is actually something I would call a small waterfall. We have reduced the number of items to implement to ensure that we have enough time to manage potential new issues that come up. We have a code freeze, and developers will basically start on the next sprint and fix issues in the branch if necessary.
Product Owner is on top of the delivery, so I expect no issues in regards to what we deploy.
I can see that Pascal writes about mature sprint teams plus the Definition of Done. And agile always focuses on 'delivery immediately after the sprint has reached its end'. However, I'm not sure there are very many teams in the world actually doing this. We're at least not there yet :)
There isn't a testing team in Scrum. There is a development team, which is cross-functional. Scrum discourages specialists in the team so as to avoid dependencies. So the role of tester is somewhat different in Scrum than in waterfall. That's another debate, but for now let's stick to the question at hand.
I would suggest slicing the stories vertically into the smallest tasks you can during the 'how' part of the sprint planning meeting. It's recommended to break tasks into units small enough to be completed in a day or two.
Define a DoD at the start of the project and keep on refining it.
Work on one task at a time and limit work in progress.
Work in order of priority and reduce waste in your system.
Do not go for detailed upfront planning, and delay your decisions until the last responsible moment.
Introduce technical competencies like BDD and Automation.
And remember that quality is the responsibility of the whole team, so don't worry about testing being done by a dedicated person.
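The BDD point above can be illustrated with a Given/When/Then-structured test. This is a hedged sketch in plain Python (frameworks such as behave or pytest-bdd layer feature files on top of the same idea), and the ShoppingCart example is hypothetical:

```python
# BDD-style test: the Given/When/Then comments map each step of the scenario
# to code, so the test reads like the acceptance criterion it encodes.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_adding_items_updates_the_total():
    # Given an empty shopping cart
    cart = ShoppingCart()
    # When the customer adds two items
    cart.add("book", 12.50)
    cart.add("pen", 1.50)
    # Then the total reflects both items
    assert cart.total() == 14.0
```

The value is less in the tooling than in phrasing tests in the business's language, so the whole team (not just a dedicated tester) can own them.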

Misusing the term "Code Freeze" [closed]

I'm just curious if the community considers it acceptable to use the term "Code Freeze" for situations where we stop development except for testing and fixing bugs.
Development Situation
We're just finishing our third and final sprint, which will be followed by a "code freeze" and 2 weeks of QA testing. It is a big release, and the development of some components has spanned all 3 sprints. Historically, even though we call it a "code freeze", we still commit code to fix bugs.
Problem
Every release I try to correct my manager and co-workers: we should be calling it a "feature freeze", because it's pretty obvious that we're going to find bugs and commit code to fix them as soon as we start heavy testing. But they persist in calling it a "code freeze". Sometimes we even declare a "code freeze" while we still have known bugs.
The Wikipedia definition seems to agree with me here
Analysis
I suspect that calling these situations a "code freeze" is some sort of willful doublethink to provide false confidence to stakeholders. Or we are pretending to be in a "code freeze" situation because, according to Scrum, after every sprint we should have a shippable piece of software, and the expectation is that we are following Scrum. So we must call it what Scrum expects instead of what it really is.
Conclusion
Am I over-analyzing this? I just find it unhealthy to ignore the realities of the situation; we should either stop calling it something it's not or fix the root problem. Has anybody else had similar experiences with code freezes?
Am I over analyzing this?
Yes.
Well, probably. Realistically, you should be thinking twice before making any code changes after the freeze. Bugs should have to pass some severity test, more so if the fix requires potentially-dangerous changes to the codebase or invalidates the testing that's been done. If you're not doing that, then yeah, you're just deluding yourselves.
But if you're not gonna fix any bugs, then freezing the code is kinda pointless: just build and ship it.
Ultimately, what matters is that you all understand what's meant by the label, not the label itself. One big happy Humpty-Dumpty...
We use the term "Feature Complete". All the features are coded and functional, but we're heading into a test pass to confirm that there are no bugs. If there are bugs, we will find them, fix them, and retest. After we're satisfied with the result, we're "Code Complete".
I think, actually, that they are more correct in their interpretation. A feature freeze, to me, would be a halt to introducing new features, but features currently under development could continue to completion or you could schedule some refactoring work to remove technical debt without generating new features. A code freeze brings a halt to all new development, including refactoring -- the only new code allowed is that to fix bugs found during QA. The latter seems to be what your team is doing.
Some people who get into adaptive and agile engineering methodologies like scrum may not realise what you have gotten yourselves into.
The reason for agile engineering is to release to your customers whatever is usable now and gradually build up its usability and features.
If your project is projected to take 18 months, but you could have something increasingly usable every 2 months - why not release features every two months rather than wait for the grand holy day 18 months away, since either way the project will still last 18 months?
Your customers' requirements might change, so giving your customers the opportunity to change their minds frequently, before it's too late, results in exhilarated customers.
Someone might release an open-source version of one of your modules 10 months from now, and then you don't have to do much else but integrate it.
Therefore scrummers, or at least scrum masters and/or project managers/architects, are required by the dynamics of scrum to modularise the project ... and modularising is not good enough; you have to granularise it.
You have to granularise your modules to the right size and provide a contract-interface specification for each, so that changes within a module are managed within that module. If a module, by itself or because of other modules' dependence on it, is unable to satisfy its contract-interface, you have to code-freeze it so you can broadcast contract-interface version 1, letting other teams continue, albeit with fewer than the expected features in the next general product release.
A code freeze is a code freeze.
If your code freezes experience frequent thawing delays, your scrum master and product architect are not communicating or not doing their jobs properly. Perhaps there's no point in trying to impress management, or acquiesce to them, with some industry fad called agile programming. Or management needs to hire an architect and scrum master who can design and granularise the project within the skills of the team, the expectations of the customers, and the technological constraints of the project.
I think there are management elements, and their scrum masters, who do not realise how crucial a good architect is even in a scrum environment, and refuse to hire one. A good architect who is able to listen to and work with the team is invaluable to the scrumming process, because he/she has to constantly adapt the architecture to changing granularities and expectations.
I also think there are management elements, and their scrum masters, at the other end of the programming spectrum who, owing to bad experiences with longer development cycles like waterfall, think that scrum is meant to produce a product within a month, and that meticulous investigation of cross-module effects is therefore not really necessary. They sit down, wet their fingers in the air, and come up with a great sprint.
If your team is experiencing frequent thawing of code freezes, you might need to code-freeze the whole project and rethink your strategy; consider whether the cause is a refusal to define module contracts that fit the granularity of your modules. Are you defining module contracts at all, so that the features of a stuck module can be scaled back to let other teams or modules continue?
Do you have a UML strategy that aids in discovering the projected features of a product release and lets you see the effects of a stranded module, and then see which module needs focus to reach the desired release level? Are you attending scrums and sprints with no UML picture to show how far ahead or behind you are, so that you are just bumping along happily or otherwise blindly? Or does your scrum master say to a room of yeas and nays, hmm ... that module seems important - without actually having a clear picture of which modules are the most strandable in relation to a product release?
A product-release code freeze is achieved by progressively freezing modules. As soon as a module is completed, a product test is done to ensure that the module satisfies its contract, and that module is code-frozen at, say, version 2.1. Even though work progresses on that module towards 2.2, the project on the whole should depend not on 2.2 but on 2.1. The strategy is to minimise the number of modules whose contracts need to be thawed when a product release is tested or when the release has to scale down its features. If progressive modular freezing does not help your development team, either the product is so complex that your management is underestimating the number of iterations needed to achieve a proper release, or the modular architecture and strategy need serious rethinking.
I have worked on a project (waterfall) in which we had feature freeze AND code freeze.
Feature freeze means the beginning of a bug-fix period. A new branch was also created for the next version so that we could keep implementing features; i.e., this is the point when the company starts work on the next version. No new features are implemented, only bugs are fixed.
Code freeze comes when QA thinks the product is in a releasable condition (i.e. they don't know of any severe bugs). Before a final test cycle, a code freeze is announced (remember, a test cycle might take a week). If the test succeeds, this becomes the released product. If it fails, the new bugs are fixed. These check-ins are supervised by architects and managers, and the risk of every changed line is practically documented. Then the test cycle starts again.
Summary: After feature freeze you can only check in bugfixes. After code freeze you can only check in in exceptional cases.
Yeah, it's overthought.
Yeah, it's a misnomer.
If the code isn't broken/messy you wouldn't touch it, and if it is then you will fix it. That's exactly the same situation as when you're not in a code freeze. Yes, it shades into a "requirements freeze" or an "integration break", which are anti-patterns. It is a point at which you stop including new features in the next release, which is valuable on the sales/marketing/customer-support side of things. But they should probably call it a "prerelease".
What ought to happen is that there are always a few releasable versions of the system in version control, and the company picks one to ship.
the Lean name for "code freeze" is "waste."
In your comment you mentioned the word 'sprint'. That tells me you may be using Scrum (or another Agile methodology). In Scrum you hardly 'freeze' anything :) Flexibility, risk identification and mitigation, and above all, in engineering terms, continuous integration matter a lot in Scrum.
Given this, the team should be cross-functional and the code continuously integrated. As a result, you may not need things like a 'code freeze'. You just have a releasable product at the end of the sprint. It should have been tested continuously, and you should already have received the bug reports, which you should already have fixed.
Well, this is theory. However, good scrum teams aren't too far from theory, as scrum is mainly about principles. There aren't too many rules.
I personally won't split hairs over the terminology so much as over the intention behind the term. Most certainly the term is used to identify a stage in the SDLC in your organization. Strictly speaking, Scrum doesn't have a bug-fix phase. If you're dedicating one or more sprints to fixing bugs, then the term can mean "no feature backlog items will be included in the sprint, only bug fixes". This can easily be handled in the sprint planning (and pre-planning) meetings, and the team doesn't even have to worry about the terminology. Even better, this terminology/intention doesn't even have to go beyond the Product Owner.
While "Code Freeze" may have a clouded meaning and is, as has been mentioned, more aptly a "Feature Freeze" when considering individual projects/releases, it DOES have a place in a larger, integrated deployment where another entity is responsible for packaging and/or deploying multiple software releases from various teams. "Code Freeze" gives them time to make sure the environments are lined up and all packages accounted for. "Code Freeze" also means that nothing but "show stopping" changes are getting in. Everything else would be handled in the next maintenance release.
In a perfect world, scripted testing would have completed before this point and there would have been time allowed for deployment of any last fixes and retest. I have yet to see this happen at any "globo-corp". The (business) testers test up until and even after deployment and the "Code Freeze" becomes a signal to them to step up their efforts and log everything that they've been sitting on. In some cases, it's a signal for them to START testing.
Really, "Code Freeze" is just business speak for "Here there be Tygers". ;-)
When we code freeze, the repo is locked, hopefully all the bugs you intended to fix are fixed, and the testers do a whole new round of testing before branching and building for production. If there are any outstanding bugs scheduled for this iteration, the leads will be breathing down your neck until each one is closed out, or deemed non-critical and pushed back an iteration. So yes, it's really frozen.
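As an aside, the "repo is locked" part can be enforced mechanically rather than by policy alone. Here's a minimal, hypothetical sketch of a Subversion-style pre-commit check; the marker file path and the override tag are made up for illustration, not from any real setup:

```python
import os
import sys

# Hypothetical: the lead creates this file at freeze time, deletes it after release.
FREEZE_MARKER = "/srv/svn/repo/conf/FROZEN"

def commit_allowed(log_message, frozen):
    """Deny ordinary commits during a freeze; allow only commits the lead
    has explicitly tagged (e.g. an approved show-stopper fix)."""
    if not frozen:
        return True
    return "[FREEZE-OVERRIDE]" in log_message

if __name__ == "__main__":
    # A real Subversion pre-commit hook would fetch the log message with
    # `svnlook log -t TXN REPO`; here we just take it from the command line.
    message = sys.argv[1] if len(sys.argv) > 1 else ""
    if not commit_allowed(message, os.path.exists(FREEZE_MARKER)):
        sys.stderr.write("Repository is frozen for release; commit rejected.\n")
        sys.exit(1)
```

The override tag keeps the "nothing but show-stoppers gets in" rule honest: anyone bypassing the freeze has to say so in the log message, where it's visible later.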

What is your or your company's programming process? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I'm looking for process suggestions, and I've seen a few around the site. What I'd love to hear is what you specifically use at your company, or just you and your hobby projects. Any links to other websites talking about these topics is certainly welcome!
Some questions to base an answer off of:
How do users report bugs/feature requests to you? What software do you use to keep track of them?
How do bugs/feature requests get turned into "work"? Do you plan the work? Do you have a schedule?
Do you have specs and follow them? How detailed are they?
Do you have a technical lead? What is their role? Do they do any programming themselves, or just architecture/mentoring?
Do you unit test? How has it helped you? What would you say your coverage is?
Do you code review? When working on a tight deadline, does code readability suffer? Do you plan to go back later and clean it up?
Do you document? How much commenting do you or your company feel comfortable with? (Description of class, each method and inside methods? Or just tricky parts of the code?)
What does your SCM flow look like? Do you use feature branches, tags? What does your "trunk" or "master" look like? Is it where new development happens, or the most stable part of your code base?
For my (small) company:
We design the UI first. This is absolutely critical for our designs, as a complex UI will almost immediately alienate potential buyers. We prototype our designs on paper, then as we decide on specifics for the design, prepare the View and any appropriate Controller code for continuous interactive prototyping of our designs.
As we move towards an acceptable UI, we then write a paper spec for the workflow logic of the application. Paper is cheap, and churning through designs guarantees that you've at least spent a small amount of time thinking about the implementation rather than coding blind.
Our specs are kept in revision control along with our source. If we decide on a change, or want to experiment, we branch the code, and IMMEDIATELY update the spec to detail what we're trying to accomplish with this particular branch. Unit tests for branches are not required; however, they are required for anything we want to incorporate back into trunk. We've found this encourages experiments.
Specs are not holy, nor are they owned by any particular individual. By committing the spec to the democratic environment of source control, we encourage constant experimentation and revision - as long as it is documented so we aren't saying "WTF?" later.
On a recent iPhone game (not yet published), we ended up with almost 500 branches, which later translated into nearly 20 different features, a huge number of concept simplifications ("Tap to Cancel" on the progress bar instead of a separate button), a number of rejected ideas, and 3 new projects. The great thing is each and every idea was documented, so it was easy to visualize how the idea could change the product.
After each major build (anything in trunk gets updated, with unit tests passing), we try to have at least 2 people test out the project. Mostly, we try to find people who have little knowledge of computers, as we've found it's far too easy to design complexity rather than simplicity.
We use Doxygen to generate our documentation. We don't have its generation incorporated into our build process yet, but we are working on it.
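For anyone in a similar spot, hooking Doxygen into a build usually comes down to a small config file plus one extra build step. A minimal sketch (the project name and directory names here are examples, not our actual layout):

```
# Doxyfile (minimal sketch)
PROJECT_NAME     = "MyProject"
INPUT            = src include
RECURSIVE        = YES
OUTPUT_DIRECTORY = docs
GENERATE_HTML    = YES
GENERATE_LATEX   = NO
```

Running `doxygen Doxyfile` as a post-build step (or a CI job) then regenerates the docs on every build, so they can't silently go stale.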
We do not code review. If the unit test works, and the source doesn't cause problems, it's probably ok - but this is because we are able to rely on the quality of our programmers. This probably would not work in all environments.
Unit testing has been a godsend for our programming practices. Since no new code can be passed into trunk without appropriate unit tests, we have fairly good coverage in our trunk, and moderate coverage in our branches. However, it is no substitute for user testing; it's only a tool to aid in getting to that point.
For bug tracking, we use bugzilla. We don't like it, but it works for now. We will probably soon either roll our own solution or migrate to FogBugz. Our goal is to not release software until we reach a 0 known bugs status. Because of this stance, our updates to our existing code packages are usually fairly minimal.
So, basically, our flow usually looks something like this:
Paper UI Spec + Planning » Mental Testing » Step 1
View Code + Unit Tests » User Testing » Step 1 or 2
Paper Controller & Model Spec + Planning » Mental Testing » Step 2 or 3
Model & Controller Code + Unit Tests » User Testing » Step 3 or 4
Branched Idea » Spec » Coding (no unit tests) » Mental Testing » Rejection
Branched Idea » Spec » Coding (no unit tests) » Mental Testing » Acceptance » Unit Tests » Trunk » Step 2 or 4
Known Bugs » Bug Tracker » Bug Repair » Step 2 or 4
Finished Product » Bug Reports » Step 2 or 4
Our process is not perfect by any means, but a perfect process would also imply perfect humans and technology - and THAT's not going to happen anytime soon. The amount of paper we go through in planning is staggering - maybe it's time for us to get a contract with Dunder Mifflin?
I am not sure why this question was downvoted. I think it's a great question. It's one thing to search Google and read random websites, which a lot of the time are trying to sell you something rather than be objective. It's another thing to ask the SO crowd, who are developers and IT managers, to share their experiences and what does or doesn't work for their teams.
Now that this point is out of the way: I am sure a lot of developers will point you towards "Agile" and/or Scrum. Keep in mind that these terms are often used very loosely, especially Agile. I'm probably going to sound very controversial by saying this, which is not my intention, but these methodologies are over-hyped, especially Scrum, which is more a product marketed by Scrum consultants than a "real" methodology. Having said that, at the end of the day you've got to use what works best for you and your team, whether that's Agile/Scrum/XP or whatever. At the same time you need to be flexible about it; don't become religious about any methodology, tool, or technology. If something isn't working for you, or you can get more efficient by changing something, go for it.
To be more specific regarding your questions. Here's the basic summary of techniques that have been working for me (a lot of these are common sense):
Organize all the documents and emails pertaining to a specific project, and make them accessible to others through a central location (I use MS OneNote 2007 and love it for all my documentation, progress, feature, and bug tracking, etc.)
All meetings (which you should try to minimize) should be followed by action items where each item is assigned to a specific person. Any verbal agreement should be put into a written document. All documents added to the project site/repository. (MS OneNote in my case)
Before starting any new development, have a written document of what the system will be capable of doing (and what it won't do). Commit to it, but be flexible to business needs. How detailed should the document be? Detailed enough that everyone understands what the final system will be capable of.
Schedules are good, but be realistic and honest with yourself and the business users. The basic guideline I use: release quality, usable software that lacks some features rather than buggy software with all the features.
Have open lines of communication within your dev team and between your developers and business groups, but at the end of the day one person (or a few key people) should be responsible for making the key decisions.
Unit test where it makes sense, but DO NOT become obsessive about it. 100% code coverage != no bugs, nor does it mean the software works correctly according to the specs.
Do have code standards and code reviews. Commit to the standards, but if they don't work for some situations, allow for flexibility.
Comment your code, especially the hard-to-read/understand parts, but don't turn it into a novel.
Go back and clean up your code if you're already working on that class/method, e.g. implementing a new feature or working on a bug fix. But don't refactor just for the sake of refactoring, unless you have nothing else to do and you're bored.
And the last and most important item:
Do not become religious about any specific methodology or technology. Borrow the best aspects from each, and find the balance that works for you and your team.
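On the coverage point above, here's a contrived, hypothetical illustration (not from any real codebase) of why 100% line coverage proves very little on its own:

```python
def average(values):
    # Bug: crashes on an empty list, but the test below never exercises that case.
    return sum(values) / len(values)

def test_average():
    # This single assertion executes every line of average(), so a line-coverage
    # tool reports 100% for it, yet the empty-list bug goes completely unnoticed.
    assert average([2, 4, 6]) == 4

test_average()  # passes, coverage is "perfect", the bug remains
```

Coverage tells you which lines ran, not which behaviors were checked; treat it as a smoke alarm, not a certificate of correctness.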
We use Trac as our bug/feature request tracking system
Trac Tickets are reviewed, changed to be workable units and then assigned to a milestone
The trac tickets are our specs, containing mostly very sparse information which has to be talked over during the milestone
No, but our development team consists of only two members
Yes, we test, and yes, TDD has helped us very much. Coverage is at about 70 percent (Cobertura)
No, we refactor when appropriate (during code changes)
We document only public methods and classes; our maximum line count is 40, so methods are usually small enough to be self-describing (if there is such a thing ;-)
svn with trunk, rc and stable branches
trunk - Development of new features, bugfixing of older features
rc - For in house testing, bugfixes are merged down from trunk
stable - only bugfixing merged down from trunk or rc
To give a better answer, my company's policy is to use XP as much as possible and to follow the principles and practices as outlined in the Agile manifesto.
http://agilemanifesto.org/
http://www.extremeprogramming.org/
So this includes things like story cards, test-driven development, pair programming, automated testing, continuous integration, one-click installs and so on. We are not big on documentation, but we realize that we need to produce just enough documentation in order to create working software.
In a nut shell:
create just enough user stories to start development (user stories here are meant to be the beginning of a conversation with the business, not completed specs or fully fleshed-out use cases, but short bits of business value that can be implemented in less than one iteration)
iteratively implement story cards based on what the business prioritizes as the most important
get feedback from the business on what was just implemented (e.g., good, bad, almost, etc)
repeat until business decides that the software is good enough