Scenario:
Currently we are in the process of system and integration testing. Every day we get lots of defects raised by testers. Most of these defects do not match the requirements we were given. Many scenarios are new to the developers. The requirements we had were signed off by the business.
Could someone clarify how to distinguish between a defect and a CR (change request)?
Everything that was not a requirement is a change-request.
But life is unfortunately not that easy, so please read on.
Quarrels over what is a defect and what is a change request are very common in projects. Managing the situation is difficult because you often have to make compromises.
I have seen project managers removed by programme managers because they insisted too much that all the defects were really change requests. They were often right, but their behavior was still not helpful for the overall progress of the programme. I have also seen project managers sink themselves by accepting every defect, building castles that were never originally required or estimated for.
I personally always make absolutely sure that my managers know when I am building features that were not originally required but came in under the guise of defects. I also make sure the client/tester knows that this is my viewpoint. At the same time, I am quite tolerant in what I consider a defect.
Example: I recently joined a project where we were developing a financial payments system, and another programmer said to me, "It is outrageous what they want. That is not a defect, this is a CR!" I looked at it, and given my background in this business domain I thought it was actually a very fundamental requirement, and that asking for a CR for it would be laughable. So I decided we would fix it without making a fuss.
The following questions are also worth considering:
Are you in a fixed-price project? Do you still have resources, and can you show real greatness by adding features without moaning? That will earn you a good reputation and perhaps a future contract.
Do you get penalized if you accept a CR as a defect? Is a low number of defects a KPI (Key Performance Indicator) that affects your career?
Was the requirement definition poor at the beginning, and did you accept it? Was the requirement mentioned in the defect so obvious that it could be considered implied? E.g. it was never specified that the amount field should only allow numeric values, but it still makes sense.
Have you accepted requirements without asking about the whole big picture, making you partially responsible?
Is the client ripping you off, exploiting your inability to say no and reject the defect?
In projects I always try to get the best for the client while making sure I am not penalized unduly.
I don't know if StackOverflow is the right place for this question, but I am still going to ask it.
Recently I have been doing some research on software methodologies, and one question came up on which I could not find a conclusive answer:
Is testing an (essential) part of Scrum, or can it be seen as a separate method? I know that when practicing a software methodology, things can differ in practice from what is described in theory. But in this case I just want the plain facts: a description of testing in relation to Scrum.
No, I think it's more correct to say that testing is a vital part of the agile process.
Scrum is the project management side of things: getting the user stories from the customer for a specific sprint and then letting the team loose to do their work, with the daily scrum meetings.
So, while testing may be part of the deliverables decided during the initial get-together with the customer, and it might be raised in the daily scrum meetings, it's not really required for the scrum process to work. The customer may (foolishly) not have a testing requirement and the developers may strike no testing problems.
Testing can be part of your 'Definition of Done', or you can designate testing/defect-reduction sprints. Ultimately it depends on your release schedule, how your business operates, and your customers' requirements and expectations.
The scrum process does not declare that any particular type of testing has to be done within the confines of a sprint.
But as an aside, you will find much more value if you can automate your testing processes.
We find it easier to include all of the testing as part of the sprint process. The reason is that having separate 'testing sprints' allows complexity to compound if a defect is introduced in, say, sprint one, but testing and defect resolution don't occur until, say, sprint 8.
I have been working as a .NET developer following the waterfall model. When working on, say, a 12-month project, my team usually follows Analysis, Design, Coding and Testing phases. But when it comes to following the Scrum process, I don't really understand how to deal with it.
Consider a 4-week sprint with 10 items in the backlog. Let the sprint start now. If developers are working on some backlog items for the first 10 days, I don't know whether testing (both SIT and UAT) will need JUST the remaining 10 days to complete the work. Then our sprint has no time left for last-minute bug fixes, and only a few bugs can be fixed IN THE PLANNED SPRINT.
And when we do development, how can we keep the testing team busy, apart from just preparing test cases and waiting for us to deliver the functionality?
This raises the question of whether we need to deliver the first task/feature within the first 3 days of the sprint, so that testers can be ready with their test cases to test that piece.
I also need to educate my client to help in adapting the Scrum process.
I need some guidelines, references or a case study to make sure that our team follows a proper Scrum process. Any help would be appreciated.
In an ideal Scrum team, testers and developers are part of the team, and testing should occur in parallel with development; the phases overlap rather than run sequentially (doing things sequentially inside a Sprint is an anti-pattern known as Scrumerfall). And by the way, contrary to some opinions expressed here, an ultimate Scrum implementation produces DONE DONE stories, so testing, including IST and UAT, should be done during the Sprint.
And no, testers don't have to wait for Product Backlog Items (PBIs) to be fully implemented to start doing their job: they can start writing acceptance test scenarios, automating them (e.g. with FitNesse), setting up test data sets, etc. (this takes some time, especially if the business domain is complicated) as soon as the Sprint starts.
Of course, this requires very close collaboration, and releasing interfaces or UI skeletons early will facilitate the testers' job, but still, testers don't have to wait for a PBI to be fully implemented. In fact, acceptance tests should be used by developers as a DONEness indicator ("I know I'm done when the acceptance tests pass")1.
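To make that concrete, here is a minimal sketch of an acceptance test written at sprint start, before the feature exists; the passing test is the DONEness indicator. It assumes JUnit 5, and all the domain names (Account, PaymentService, InsufficientFundsException) are invented for illustration, not taken from any of the posts here.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class TransferAcceptanceTest {

    // Tiny stand-in domain classes so the sketch is self-contained; in a real
    // sprint the tests are written first and these arrive later from developers.
    static class InsufficientFundsException extends RuntimeException {}

    static class Account {
        private long cents;
        Account(long cents) { this.cents = cents; }
        long balance() { return cents; }
        void credit(long amount) { cents += amount; }
        void debit(long amount) {
            if (amount > cents) throw new InsufficientFundsException();
            cents -= amount;
        }
    }

    static class PaymentService {
        void transfer(Account from, Account to, long amount) {
            from.debit(amount);  // throws before any money moves
            to.credit(amount);
        }
    }

    @Test
    void transferMovesFundsBetweenAccounts() {
        Account from = new Account(100_00);
        Account to = new Account(0);
        new PaymentService().transfer(from, to, 40_00);
        assertEquals(60_00, from.balance());
        assertEquals(40_00, to.balance());
    }

    @Test
    void transferRejectsAmountExceedingBalance() {
        Account from = new Account(10_00);
        Account to = new Account(0);
        assertThrows(InsufficientFundsException.class,
                () -> new PaymentService().transfer(from, to, 50_00));
    }
}
```

Tests like these can be sketched by testers on day one of the sprint, even though they won't compile, let alone pass, until developers deliver the classes under test.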
I'm not saying this is easy, but that's what mature (i.e. Lean) Scrum implementations and mature Scrum teams are doing.
I suggest reading Scrum and XP from the Trenches by Henrik Kniberg; it is a very good practical guide.
1 As Mary Poppendieck writes, the job of testers should be to prevent defects (essential), not to find defects (waste).
You definitely don't want to do all development in the first half of the sprint and all testing in the second half. That's just a smaller waterfall.
Your stories and tasks should be broken up into very small, discrete pieces of functionality. (It may take a while to get used to doing this, especially if the software you're working on is a monolithic beast like a previous job of mine that moved to using scrum.) At the beginning of the sprint the testers are developing their tests and the developers are developing their code, and throughout the sprint the tasks and stories are completed and tested. There should be fairly constant interaction between them.
The end of the sprint may feel a bit hectic while you're getting used to the methodology. Developers will feel burdened when they're working on the rest of the code while at the same time being given bugs to fix by the testers. Testers will grow impatient because they see the end of the sprint looming and there's still code that hasn't been tested. There is a learning curve and it will take some getting used to; the business needs to be aware of this.
It's important that the developers and testers really work together to create their estimates, not just add each other's numbers to form a total. The developers need to be aware that they can't plan on coding new features up until the last minute, because that leaves the testers there over the weekend to do their job in a rush, which will end up falling back on the developers to come in and fix stuff, etc.
Some tasks will need to be re-defined along the way. Some stories will fail at the end of the sprint. It's OK, you'll get it in the next sprint. The planning meeting at the start of each sprint is where those stories/tasks will be defined. Remember to be patient with each other and make sure the business is patient with the change in process. It will pay off in the long run, not in the first sprint.
The sprint doesn't end with perfect code; if there are remaining bugs, they can go into the very next sprint, and some of the other items that would have gone into the next sprint will need to be taken out. You're not finishing a sprint with something perfect but, ideally, with something stable.
You are (ironically) applying too much rigor to the process. The whole point of an agile process like scrum is that the schedule is dynamic. After your first sprint, you work with the users/testing team to evaluate the progress. At that point, they will either ask you to change details and features that were delivered in the first sprint, or they will ask you to do more work. It's up to them.
It's only eventually, once you have determined the velocity of the team (i.e. how many stories it can reasonably accomplish in a sprint), that you can start estimating dates and deliverables for larger projects.
First of all, not every Sprint produces a Big Release (if any at all). It is entirely acceptable for the first sprints to produce early prototypes / alpha versions, which are not expected to be bug-free but are still capable of demonstrating something to the client. This something may not even be a feature: it can simply be a skeleton UI, just for the user to see how it will look and work.
Also, developers themselves can (and usually do) write unit tests, so whatever is delivered in a sprint should be in a fairly stable working state. If a new feature is half-baked, the team simply should not deliver it. Big features are supposed to be divided into chunks small enough to fit within a single sprint.
A Scrum team is usually cross-functional, which means the entire team is responsible for building completed pieces of functionality every Sprint. So if the QA testers did not finish the testing, it means the Scrum team did not finish the testing. Scrum counts on everyone to do their part: whenever a particular skill is needed, the people with that skill take the lead, but everyone has to contribute.
Try to do continuous integration. The team should get into this habit and integrate continuously. In addition, having an automated unit test suite built and executed after every check-in/delivery should provide a certain level of confidence in your code base. This practice will ensure the team has the code in a working and sane condition at all times. It will also enable integration and system testing early in the sprint.
Defining and creating (automated) acceptance tests will keep people with primary QA/testing skills busy and involved right from the sprint start. Make sure this is done in collaboration with Product Owner(s) so everyone is on the same page and involved.
We started our agile project with developers only (a lot of training in Enterprise Framework, etc.) in the first sprint. Then we added QA gradually in the second sprint. At the end of sprint 2, QA started testing. Closing in on the end of sprint 3, QA had picked up the pace and were more or less alongside the developers. From sprint 4 onwards, QA was more or less done with testing when the developers had completed the stories. The items usually left to test are big elephants involving replication of data between the new and legacy systems, and that is more an 'ensure the data is OK' exercise than actual testing.
We're having some issues with our Definition of Done; e.g. we have none. We're working on a completely new version of a system, and now that we are closing in on the end of sprint 6, we are getting ready for deployment to production. Sprint 6 is actually something I would call a small waterfall: we have reduced the number of items to implement to ensure we have enough time to manage potential new issues that come up. We have a code freeze, and developers will basically start on the next sprint and fix issues in the branch if necessary.
The Product Owner is on top of the delivery, so I expect no issues regarding what we deploy.
I can see that Pascal writes about mature sprint teams and the Definition of Done. And agile always focuses on delivery immediately after the sprint has reached its end. However, I'm not sure there are many teams in the world actually doing this. We're at least not there yet :)
There isn't a testing team in Scrum. There is a development team, which is cross-functional. Scrum discourages specialists in the team so as to avoid dependencies, so the role of the tester is somewhat different in Scrum than in waterfall. That's another debate, but for now let's stick to the question at hand.
I would suggest you slice the stories vertically, into tasks as small as you can, during the 'how' part of the sprint planning meeting. It is recommended to break tasks into units small enough to be completed in a day or two.
Define a DoD at the start of the project and keep on refining it.
Work on one task at a time and limit work in progress.
Work in order of priority and reduce waste in your system.
Do not go for detailed upfront planning; delay decisions until the last responsible moment.
Introduce technical practices like BDD and test automation (see the sketch after this list).
And remember that quality is the responsibility of the whole team, so don't worry about testing being done by a dedicated person.
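Since BDD was mentioned above, here is a minimal illustration of the style. Teams often use a dedicated tool such as Cucumber for this; plain JUnit 5 with a given/when/then structure is enough to show the idea. The story and every name here are invented for illustration.

```java
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CartDiscountTest {

    // Illustrative domain logic under test.
    static double checkout(double total) {
        return total > 100.00 ? total * 0.9 : total;
    }

    @Test
    @DisplayName("Given a cart over $100, when checking out, then a 10% discount applies")
    void discountAppliedOverThreshold() {
        // given: a cart totalling $150
        double cartTotal = 150.00;
        // when: the customer checks out
        double payable = checkout(cartTotal);
        // then: the payable amount is discounted by 10%
        assertEquals(135.00, payable, 0.001);
    }
}
```

The value of the style is that the test reads as the behaviour the Product Owner asked for, so testers and developers can agree on it before a line of production code is written.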
I'm just curious if the community considers it acceptable to use the term "Code Freeze" for situations where we stop development except for testing and fixing bugs.
Development Situation
We're just finishing up our third and final sprint, which will be followed by a "Code Freeze" and 2 weeks of QA testing. It is a big release, and some components' development has spanned all 3 sprints. Historically, even though we call it a "Code Freeze", we still commit code to fix bugs.
Problem
Every release I try to correct my manager and co-workers: we should be calling it a "Feature Freeze", because it's pretty obvious that we're going to find bugs and commit code to fix them as soon as we start heavy testing. But they persist in calling it a "Code Freeze". Sometimes we even declare a "Code Freeze" while we still have known bugs.
The Wikipedia definition seems to agree with me here
Analysis
I suspect that calling these situations a "Code Freeze" is some sort of willful doublethink to provide false confidence to stakeholders. Or we pretend to be in a "Code Freeze" because, according to Scrum, every sprint should produce a shippable piece of software, and the expectation is that we are following Scrum. So we must call it what Scrum expects instead of what it really is.
Conclusion
Am I over-analyzing this? I just find it unhealthy to ignore the realities of a situation; we should either stop calling it something it's not or fix the root problem. Has anybody else had similar experiences with code freezes?
Am I over-analyzing this?
Yes.
Well, probably. Realistically, you should be thinking twice before making any code changes after the freeze. Bugs should have to pass some severity test, more so if the fix requires potentially-dangerous changes to the codebase or invalidates the testing that's been done. If you're not doing that, then yeah, you're just deluding yourselves.
But if you're not gonna fix any bugs, then freezing the code is kinda pointless: just build and ship it.
Ultimately, what matters is that you all understand what's meant by the label, not the label itself. One big happy Humpty-Dumpty...
We use the term "Feature Complete". All the features are coded and functional, but we're heading into a test pass to confirm that there are no bugs. If there are bugs, we will find them, fix them, and retest. After we're satisfied with the result, we're "Code Complete".
I think, actually, that they are more correct in their interpretation. A feature freeze, to me, would be a halt to introducing new features, but features currently under development could continue to completion or you could schedule some refactoring work to remove technical debt without generating new features. A code freeze brings a halt to all new development, including refactoring -- the only new code allowed is that to fix bugs found during QA. The latter seems to be what your team is doing.
Some people who get into adaptive and agile engineering methodologies like scrum may not realise what they have gotten themselves into.
The point of agile engineering is to release to your customers whatever is usable now and gradually build up its usability and features.
If your project is projected to complete in 18 months, but you could have something increasingly usable every 2 months, why not release features every two months rather than wait for the grand holy day 18 months away? Either way the project will still last 18 months.
Your customers' requirements might change, so giving your customers the opportunity to change their minds frequently, before it's too late, results in exhilarated customers.
Someone might release an open source version of one of your modules 10 months from now, and then you don't have to do much else but integrate that module.
Therefore, scrummers, or at least scrum masters and/or project managers/architects, are required by the dynamics of scrum to modularise the project. In fact, modularising is not good enough; you have to granularise it.
You have to granularise your modules to the right size and provide a contract-interface specification for each, so that changes within a module are managed within that module. If a module, by itself or due to the dependence of other modules, is unable to satisfy a contract-interface, you have to code-freeze it, so that you can broadcast contract-interface version 1 and other teams can continue, albeit with fewer than expected features in the next general product release.
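A minimal sketch of what such a contract-interface might look like in Java; all names are invented for illustration. The point is that dependent modules compile only against the frozen interface, so the implementing module can keep evolving behind it.

```java
import java.util.HashMap;
import java.util.Map;

// The frozen contract (say, version 1): other modules build against this and
// nothing else, so it can be "code-frozen" independently of the code behind it.
interface LedgerContractV1 {
    void post(String account, long amountInCents);
    long balanceOf(String account);
}

// The implementing module is free to keep changing (1.1, 1.2, ...) as long as
// it honours LedgerContractV1; dependents never see this class directly.
class InMemoryLedger implements LedgerContractV1 {
    private final Map<String, Long> balances = new HashMap<>();

    public void post(String account, long amountInCents) {
        balances.merge(account, amountInCents, Long::sum);
    }

    public long balanceOf(String account) {
        return balances.getOrDefault(account, 0L);
    }
}
```

With the contract pinned, a team whose module is stuck can still publish the interface and let everyone else continue against it.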
A code freeze is a code freeze.
If your code freezes keep thawing, your scrum master and product architect are not communicating or not doing their jobs properly. Perhaps there is no point in trying to impress, or acquiesce to, a management that wants to be seen using some industry fad called agile programming. Or management needs to hire an architect and scrum master who are able to design and granularise the project within the skills of the team, the expectations of the customers, and the technological constraints of the project.
I think there are management elements, and their scrum masters, who do not realise how crucial a good architect is even in a scrum environment, and who refuse to hire one. A good architect who is able to listen and work with the team is invaluable to the scrumming process, because he or she has to constantly adapt the architecture to changing granularities and expectations.
I also think there are management elements, and their scrum masters, at the other end of the programming spectrum who, owing to bad experiences with longer development cycles like waterfall, think that scrum is meant to produce a product within a month, so that meticulous investigation into cross-module effects is not really necessary. They sit down, wet their fingers in the air, and come up with a great sprint.
If your team is experiencing frequent thawing of code freezes, you might need to code-freeze the whole project and rethink your strategy. Perhaps the cause is a refusal to define module contracts that fit the granularity of the modules. Are you defining module contracts at all, so that the features of a stuck module can be temporarily pared back to enable other teams or modules to continue?
Do you have a UML strategy that helps you discover the projected features of a product release and lets you see the effects of a stranded module, so you can tell which module needs focus to reach the desired release level? Are you attending scrums and sprints with no UML picture of how far ahead or behind you are, just bumping along happily or otherwise blindly? Or does your scrum master say to a room of yeas and nays, "hmm, that module seems important", without a clear picture of which modules are the most strandable in relation to a product release?
A product-release code freeze is achieved by progressively freezing modules. As soon as a module is completed, a product test is done to ensure the module satisfies its contract, and that module is code-frozen at, say, version 2.1. Even though work on that module progresses towards 2.2, the project as a whole should depend on 2.1, not 2.2. The strategy is to minimise the number of modules whose contracts need to be thawed when a product release is tested, or if the product release has to scale down its features. If progressive modular freezing does not help your development team, then either the product is so complex that management is underestimating the number of iterations needed for a proper release, or the modular architecture and strategy need serious rethinking.
I have worked on a project (waterfall) in which we had feature freeze AND code freeze.
Feature freeze means the beginning of a bug-fix period. A new branch is also created for the next version, so that feature work can continue there; i.e. this is the point when the company starts to work on the next version. On the release branch, no new features are implemented; only bugs are fixed.
Code freeze comes when QA thinks the product is in a releasable condition (i.e. they do not know of any severe bugs). Before a final test cycle, a code freeze is announced (remember, a test cycle might take a week). If the test succeeds, this becomes the released product. If it fails, the new bugs are fixed. These check-ins are supervised by architects and managers, and the risk of practically every line is documented. Then the test cycle is started again.
Summary: after a feature freeze you can only check in bug fixes. After a code freeze you can only check in in exceptional cases.
Yeah, it's overthought.
Yeah, it's a misnomer.
If the code isn't broken/messy you wouldn't touch it, and if it is then you will fix it. That's exactly the same situation as when you are not in a code freeze. Yes, it's really a "requirement freeze" or an "integration break", which are anti-patterns. It is a point at which to stop including new features in the next release, which is valuable on the sales/marketing/customer-support side of things. But they should probably call it a "prerelease".
What ought to happen is that there are always a few releasable versions of the system in version control, and the company picks one to ship.
The Lean name for "code freeze" is "waste".
In your comment, you mentioned the word 'sprint'. That tells me you may be using Scrum (or another Agile methodology). In Scrum you hardly 'freeze' anything :) Flexibility, risk identification and mitigation, and, above all on the engineering side, continuous integration matter a lot in Scrum.
Given this, the team should be cross-functional and the code will be continuously integrated. As a result, you may not have things like a 'code freeze'. You just have a releasable product at the end of the sprint. It should have been tested continuously, and you should already have received, and fixed, the bug reports.
Well, this is theory. However, good scrum teams aren't too far from theory, as scrum is mainly about principles. There aren't too many rules.
I personally wouldn't split hairs over the terminology so much as the intention behind the term. Most certainly, the term is used to identify a stage in the SDLC in your organization. Strictly speaking, Scrum doesn't have a bug-fix phase. If you're dedicating one or more sprints to fixing bugs, then the term can mean "no feature backlog items will be included in the sprint, only bug fixes". This can easily be handled at the sprint planning (and pre-planning) meetings, and the team doesn't even have to worry about the terminology. Even better, this terminology/intention doesn't even have to go beyond the Product Owner.
While "Code Freeze" may have a clouded meaning and is, as has been mentioned, more aptly a "Feature Freeze" when considering individual projects/releases it DOES have a place in a larger, integrated deployment where another entity is responsible for packaging and/or deploying multiple software releases from various teams. "Code Freeze" gives them time to make sure the environments are lined up and all packages accounted for. "Code Freeze" also means that nothing but "show stopping" changes are getting in. Everything else would be handled in the next maintenance release.
In a perfect world, scripted testing would have completed before this point and there would have been time allowed for deployment of any last fixes and retest. I have yet to see this happen at any "globo-corp". The (business) testers test up until and even after deployment and the "Code Freeze" becomes a signal to them to step up their efforts and log everything that they've been sitting on. In some cases, it's a signal for them to START testing.
Really, "Code Freeze" is just business speak for "Here there be Tygers". ;-)
When we code freeze, the repo is locked, hopefully all the bugs you intended to fix are fixed, and the testers do a whole new round of testing before branching and building to production. If there are any outstanding bugs scheduled for this iteration, the leads will be breathing down your neck until they are closed out or deemed non-critical and pushed back an iteration. So yes, it's really frozen.
The team I'm part of manages a number of software projects, and most of the stuff we do is end to end: from requirements tracking, to project management, to purchasing and setup. A big pain is tracking the financials, as we have a whole process to go through for them. At the moment we use a spreadsheet and store all the invoices and purchase orders in a shared folder. It's difficult to capture expenses and relate them back to projects that are ongoing or completed. Has anyone got better ideas? I hope this topic is relevant.
I'm just throwing this out there but Scrum seems like a methodology that might lend itself to tracking this kind of stuff easily.
I use the time-tracking aspects of JIRA with a bunch of custom fields that let me mark issues as having been billed (complete with storing the invoice number details against the JIRA issue). It's easy to relate hours worked to time billed. It might not work if you're billing for functionality rather than hours, but for $/hr work it's perfect.
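For flavour, here is a minimal sketch of the hours-to-invoice arithmetic this setup enables, assuming a flat hourly rate. The Worklog shape loosely mirrors a time-tracking export, and every name is invented for illustration.

```java
import java.util.List;

public class UnbilledHours {

    // One time-tracking entry; invoiceNumber stays null until the work is billed.
    record Worklog(String issueKey, double hours, String invoiceNumber) {}

    // Price all not-yet-billed hours at a flat hourly rate.
    static double unbilledAmount(List<Worklog> logs, double ratePerHour) {
        return logs.stream()
                .filter(w -> w.invoiceNumber() == null)
                .mapToDouble(Worklog::hours)
                .sum() * ratePerHour;
    }

    public static void main(String[] args) {
        List<Worklog> logs = List.of(
                new Worklog("PROJ-101", 6.5, "INV-0042"), // already billed
                new Worklog("PROJ-102", 3.0, null),
                new Worklog("PROJ-103", 4.5, null));
        System.out.printf("Unbilled: $%.2f%n", unbilledAmount(logs, 120.0));
        // prints: Unbilled: $900.00
    }
}
```

The same filter-and-sum is trivially a spreadsheet formula too; the win from custom fields is that the invoice number lives next to the work item, so nothing is billed twice or forgotten.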
You're talking about bookkeeping. Your best bet is probably to talk to an accountant who has organised the books for a software company before. Alternatively, you could read up on bookkeeping basics.
This question is not strictly software-development related, although the software business has some traits that make it different, e.g. the absence of raw materials.
Best of luck with your question, I personally find it interesting, but I don't believe it belongs on stackoverflow.
You could try http://projectsputnik.com. This software combines project management and billing.
If you are using Jira, you can try Clerk — Invoicing and Billing for Jira app. Jira has projects and time tracking feature and this add-on does the invoicing and billing on top of that. I hope that helps.
How about using a wiki, something like Confluence? Then you can export to a spreadsheet and import back again.