Is there a list of famous software products that do and do not do testing? [closed]

Closed 10 years ago.
I would be interested in looking at a list of projects that did and did not do unit testing, and other forms of regression testing, to see how those companies turned out.
All test-infected developers know it saves them time, but it would be interesting to see what correlation there is between code quality/test coverage and business success. Something objective like:
xyz corp, makes operating systems, didn't test, makes $50M
123 corp, makes operating systems, does test, makes $100M
Does anyone know of any studies done?

Microsoft commissioned this internal study not so long ago. It compared teams that did and didn't use TDD. To quote the summary:
Based on the findings of the existing studies, it can be concluded that TDD seems to improve software quality, especially when employed in an industrial context. The findings were not so obvious in the semi-industrial or academic context, but none of those studies reported on decreased quality either. The productivity effects of TDD were not very obvious, and the results vary regardless of the context of the study. However, there were indications that TDD does not necessarily decrease the developer productivity or extend the project lead times: In some cases, significant productivity improvements were achieved with TDD while only two out of thirteen studies reported on decreased productivity. However, in both of those studies the quality was improved.

Yes, pick up a copy of Code Complete or even Rapid Development by Steve McConnell. He cites a number of studies.

Any realistic study would have to include thousands of companies. There are far too many factors other than does/doesn't unit test that affect the bottom line. I doubt Microsoft's profit changes all that much whether or not they release an amazing OS every year or one that's as buggy as hell. Just listing a few companies is anecdotal evidence.

Perl is big on testing and regression testing.

I always associate Unit testing with Agile development (XP in particular); you might find that any link between project success and unit testing is influenced by use of agile as well.
I don't know of any surveys specifically, but I did find this just now:
http://people.engr.ncsu.edu/txie/testingresearchsurvey.htm which has around 30 links to material such as: "Qualitative methods in empirical studies of software engineering. Seaman, C.B., Software Engineering, IEEE Transactions on, Volume: 25, Issue: 4, July-Aug. 1999"
Not wanting to sound rude - I assume you've already done a bit of a search online?
I seem to remember that Code Complete might have references to research into unit testing and project success - but I'm not sure.
Another option would be to approach some software testing companies and see if they had any useful data.


Neural Networks Project? [closed]

Closed 9 years ago.
I'm looking for ideas for a Neural Networks project that I could complete in about a month or so. I'm doing it for the National Science Fair, so I need something that has some curb appeal as well since it's being judged.
It doesn't necessarily have to be completely new and unique; I'm just looking for ideas, but it should be complex enough that it would impress someone who knows about the field. My first idea was to implement a spam filter of sorts, but I recently found out that NNs aren't a very good way to do it. I've already got a basic NN simulator with Genetic Algorithms, and I'm also adding the generic back-propagation algorithms as well.
Any ideas?
Look into Numenta's Hierarchical Temporal Memory (HTM) concept. This may be slightly off topic if the expectation is of "traditional" Neural Nets, but it is also an extremely promising avenue for Artificial Intelligence.
Although Numenta introduced HTM and its associated software platform, NuPIC, almost five years ago, the first commercial product based upon this technology was released (in beta) a few weeks ago by Vitamin D. It is called Vitamin D Video and essentially turns any webcam or IP camera into a sophisticated video monitoring system, recognizing classes of items (say persons vs. cats or other animals) in the video feed.
With the proper setup, this type of application could make for an interesting display at the Science Fair, one with much "curb appeal".
To whet your appetite, or even get your feet wet with HTM technology, you can download NuPIC and check its various sample applications. Chances are that you may find something that meets typical criteria of both geekiness and coolness for science fairs.
Generally, HTMs aim at solving problems which are simple for humans but difficult for computers; such a statement is somewhat generic and applies to Neural Nets in general, but HTMs take this to the "next level".
Although written in C (I think), NuPIC is typically interfaced from Python, which makes it a convenient test bed for simple yet sophisticated proof-of-concept applications.
You could always try to play around with a neural network and stock prices; if I had a month of spare time for a neural network implementation, that's what I would play with.
A friend of mine in college wrote a NN to play go on a 9x9 board.
I don't think it ever got very good, but I think it would be fun to try.
Look at how a bidirectional associative memory compares with classical edit distance algorithms (Levenshtein, Damerau-Levenshtein, etc.) for typo correction. Also consider the articles on Hebbian unlearning while training your NN - it seems that the confabulation phenomenon is avoided.
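For reference, a minimal sketch of the Levenshtein baseline mentioned above, with a toy dictionary lookup at the end (the word list and the usage example are invented for illustration):

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[len(b)]

# Toy typo-correction baseline: pick the dictionary word with the smallest distance.
dictionary = ["receive", "believe", "separate"]
print(min(dictionary, key=lambda w: levenshtein("recieve", w)))  # -> "receive"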
I've done some work on top of NNs, mainly an XML-based language (Neural XML). See details here:
http://amazedsaint.blogspot.com/search/label/Neural%20Network
Also, one interesting .NET neural network project is AForge.NET - check that out as well.
You can implement the game Cellz or create a controller for it. It was first created by Simon M. Lucas. It's a nice and interesting game, and I'm sure that everyone will love it. I used it for a school project as well, and it turned out quite well.
You can find in that page some links to other interesting games.
How about applying it to predicting exchange rates (USD-EUR, for example, for sub-minute trading)? It should be fun to show a net gain of money over one month.
I doubt this will work for trades longer than a minute... without a lot of extra work.
I like using committee machines, so why not apply it to face detection in images/movies or voiceprint authentication?
Finally, you could get it to play pleasing music and use a crowd-sourced fitness function whereby people vote for the best "musicians".

Estimating testing effort as a percentage of development time [closed]

Closed 9 years ago.
Does anyone use a rule of thumb basis to estimate the effort required for testing as a percentage of the effort required for development? And if so what percentage do you use?
From my experience, 25% effort is spent on Analysis; 50% for Design, Development and Unit Test; remaining 25% for testing. Most projects will fit within a +/-10% variance of this rule of thumb depending on the nature of the project, knowledge of resources, quality of inputs & outputs, etc. One can add a project management overhead within these percentages or as an overhead on top within a 10-15% range.
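To make that rule of thumb concrete, here is a small sketch of the arithmetic (the 100-day total and the 10% project management overhead are invented figures, only to show how the split works):

def effort_breakdown(total_days: float, pm_overhead: float = 0.10):
    """Split a total estimate using the 25/50/25 rule of thumb described above.

    pm_overhead is an assumed project-management overhead added on top (10-15%).
    """
    phases = {
        "analysis": 0.25 * total_days,
        "design/dev/unit test": 0.50 * total_days,
        "testing": 0.25 * total_days,
    }
    phases["project management"] = pm_overhead * total_days
    return phases

print(effort_breakdown(100))
# {'analysis': 25.0, 'design/dev/unit test': 50.0, 'testing': 25.0, 'project management': 10.0}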
The Google Testing Blog discussed this problem recently:
So a naive answer is that writing tests carries a 10% tax. But, we pay taxes in order to get something in return.
(snip)
These benefits translate to real value today as well as tomorrow. I write tests because the additional benefits I get more than offset the additional cost of 10%. Even if I don't include the long-term benefits, the value I get from tests today is well worth it. I am faster developing code with tests. How much faster depends on the complexity of the code. The more complex the thing you are trying to build (more ifs/loops/dependencies), the greater the benefit of tests.
When you're estimating testing you need to identify the scope of your testing - are we talking unit tests, functional, UAT, interface, security, performance, stress and volume?
If you're on a waterfall project you probably have some overhead tasks that are fairly constant. Allow time to prepare any planning documents, schedules and reports.
For a functional test phase (I'm a "system tester" so that's my main point of reference) don't forget to include planning! A test case often needs at least as much effort to extract from requirements / specs / user stories as it will take to execute. In addition you need to include some time for defect raising / retesting. For a larger team you'll need to factor in test management - scheduling, reporting, meetings.
Generally my estimates are based on the complexity of the features being delivered rather than a percentage of dev effort. However this does require access to at least a high-level set of instructions. Years of doing testing enables me to work out that a test of a particular complexity will take x hours of effort for preparation and execution. Some tests may require extra effort for data setup. Some tests may involve negotiating with external systems and have a duration far in excess of the effort required.
In the end, though, you need to review it in the context of the overall project. If your estimate is well above that for BA or Development then there may be something wrong with your underlying assumptions.
I know this is an old topic but it's something I'm revisiting at the moment and is of perennial interest to project managers.
Some years ago, in a safety-critical field, I heard something like one day to unit test ten lines of code.
I have also observed 50% of effort for development and 50% for testing (not only unit testing).
Are you talking about automated unit/integration tests or manual tests?
For the former, my rule of thumb (based on measurements) is 40-50% added to development time, i.e. if developing a use case takes 10 days (before any QA and serious bugfixing happens), writing good tests takes another 4 to 5 days - though this should best happen before and during development, not afterwards.
When you speak of tests, you could mean waterfall or agile test development. In an agile environment, developers should spend 50% of their time developing and maintaining tests.
But that 50% extra will save you time when the refactoring and manual verification time comes.
Testing time is probably more closely correlated to feature scope than development time. I'd also argue (perhaps controversially) that testing time is correlated to the skill of your development team.
For a 6-to-9 month development effort, I demand an absolute minimum of 2 weeks testing time, performed by actual testers (not the development team) who are well versed in the software they will be testing (i.e., 2 weeks does not include ramp-up time). This is for a project that has ~5 developers.
Gartner in Oct 2006 states that testing typically consumes between 10% and 35% of work on a system integration project. I assume that it applies to the waterfall method. This is quite a wide range - but there are many dependencies on the amount of customisations to a standard product and the number of systems to be integrated.
The only time I factor in extra time for testing is if I'm unfamiliar with the testing technology I'll be using (e.g. using Selenium tests for the first time). Then I factor in maybe 10-20% for getting up to speed on the tools and getting the test infrastructure in place.
Otherwise testing is just an innate part of development and doesn't warrant an extra estimate. In fact, I'd probably increase the estimate for code done without tests.
EDIT: Note that I'm usually writing code test-first. If I have to come in after the fact and write tests for existing code that's going to slow things down. I don't find that test-first development slows me down at all except for very exploratory (read: throw-away) coding.
Judge by yesterday's weather. How long did it take last time? Are you trending longer or shorter? Each shop is different.
Most agile shops need a lot less time, have drastically fewer defects, and quicker time to resolve them because of TDD. Even so, most agile shops have some measurable time spent with testing/QC.
If this is the first test run for this application, then the answer is "let's see," followed by an attempt. It depends on:
- how quickly you can get questions answered,
- how testable it is,
- how many features/functions there are,
- how many defects are discovered,
- how quickly issues are resolved,
- how many times the code cycles through testing, and
- how many times testing is blocked by bugs.
There is no way to tell. You could call it 50% or 175% or more, and not be wrong. Why not make a rough guess and multiply by Pi? It won't be much worse than any other answer you can make up.
You should (must) know how long it takes now and whether it's getting faster or slower, and whether the coverage is increasing or decreasing. With those three bits of information, you should be able to guess quite well.
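If you wanted to turn those three bits of information into an actual number, a very crude projection might look like the following; the linear model and all figures are my own assumptions, not something from the answer above:

def guess_next_cycle(last_duration_days: float,
                     duration_trend: float,
                     coverage_trend: float) -> float:
    """Very rough projection of the next test cycle's duration.

    duration_trend: fractional change per cycle (e.g. +0.10 = getting 10% slower).
    coverage_trend: fractional change in coverage per cycle; growing coverage
                    usually means more tests to run and maintain.
    """
    return last_duration_days * (1 + duration_trend) * (1 + coverage_trend)

print(guess_next_cycle(10, duration_trend=0.05, coverage_trend=0.10))  # ~11.55 days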

Basic skills to work as an optimiser in the gaming industry [closed]

Closed 10 years ago.
I'm curious about a certain job title, that of "senior developer with a specialty in optimisation." It's not the actual title but that's essentially what it would be. What would this mean in the gaming industry in terms of knowledge and skills? I would assume basic stuff like
B-trees
Path finding
Algorithmic analysis
Memory management
Threading (and related topics like thread safety, atomicity, etc)
But this is only me conjecturing. What would be the real-life (and academic) basic knowledge required for such a job?
I interviewed for such a position a few years ago at one of the Big North American game studios.
The job required a lot of deep pipeline assembly programming, arithmetic optimization algorithms (think Duff's Device, branchless ifs), compile-time computation (SWAR), meta-template programming, computation of many values at once in parallel in very large registers (I forget the name for that)... You'll need to be solid on operating system fundamentals, low-level system operations, linear algebra, and C++, especially templates. You'll also become very familiar with the peculiar architecture of the PlayStation 3, and probably be involved in developing libraries for that environment that the company's game teams will build on top of.
Generally I concur with Ether's post; this will typically be more about low-level optimisation than algorithmic stuff. Knowing good algorithms comes in handy, but there are many cases in games where you prefer the O(N) solution over the O(logN) solution because the first is far friendlier on the cache and requires less memory management. So you need a more holistic knowledge.
Perhaps on a more general level, the job may want to know if you can do some or all of the following:
use a CPU profiler (eg. VTune, CodeAnalyst) in both sampling and call graph mode;
use graphical profilers (eg. Microsoft Pix, NVPerfHud)
write your own profiling/timer code and generate useful output with it;
rewrite functions to remove dynamic memory allocations;
reorganise and reduce data to be more cache-friendly;
reorganise data to make it more SIMD-friendly;
edit graphics shaders to use fewer and cheaper instructions;
...and more, I'm sure.
This is a lot like my job actually. Real-life knowledge that would be practical for this:
Experience in using profilers of all kinds to locate bottlenecks.
Experience and skill in determining the reason those bottlenecks exist.
Good understanding of CPU caches, virtual memory, and common bottlenecks such as load-hit-store penalties, L2 misses, floating point code, etc.
Good understanding of multithreading and both lockless and locking solutions.
Good understanding of HLSL and graphics programming, including linear algebra.
Good understanding of SIMD techniques and the specific SIMD interfaces on relevant hardware (paired singles, VMX, SSE/MMX).
Good understanding of the assembly language used on relevant hardware. If writing assembly, a good understanding of instruction pairing, branch prediction, delay slots (if applicable), and any and all applicable stalls on the target platform.
Good understanding of the compilation and linking process, binary formats used on the target hardware, and tools to manipulate all of the above (including available compiler flags and optimizations).
Every once in a while people ask how to become good at low-level optimization. There are a few good sources of info, mostly proprietary, but I think it generally comes down to experience.
This is one of those "if you got it you know it" type of things. It's hard to list out specifics, and some studios will have different criteria than others.
To put it simply, the 'Senior Developer' part means you've been around the block; you have multiple years of experience in which you've excelled and have shipped games. You should have a working knowledge of a wide range of topics, with things such as memory management high up the list.
"Specialty in Optimization" essentially means that you know how to make a game run faster. You've already spent a significant amount of time successfully optimizing games which have shipped. You should have a wide knowledge of algorithms, 3d rendering (a lot of time is spent rendering), cpu intrinsics, memory management, and others. You should also typically have an in depth knowledge of the hardware you'd be working on (optimizing PS3 can be substantially different than optimizing for PC).
This is at best a starting point for understanding. The key is having significant real world experience in the topic; at a senior level it should preferably be from working on titles that have shipped.

Given these expectations, what language or system would you choose to implement the solution? [closed]

Closed 9 years ago.
Here are the estimates the system should handle:
3000+ end users
150+ offices around the world
1500+ concurrent users at peak times
10,000+ daily updates
4-5 commits per second
50-70 transactions per second (reads/searches/updates)
This will be an internal-only business application, dedicated to helping a shipping company with worldwide shipment management.
What would be your technology choice, why that choice and roughly how long would it take to implement it? Thanks.
Note: I'm not recruiting. :-)
So, you asked how I would tackle such a project. In the Smalltalk world, people seem to agree that GemStone makes things scale somewhat magically.
So, what I'd really do is this: I'd start developing in a simple Squeak image, using SandstoneDB. Then, when the moment comes where a single image begins being too slow, I'd move over to GemStone.
GemStone then takes care of copying your public objects (those visible from a certain root) back and forth between all instances. You get sessions and enhanced query functionalities, plus quite a fast VM.
It shares data with C, Java and Ruby.
In fact, they have their own VM for Ruby, which is also worth a look.
Wikipedia manages much more demanding requirements with MySQL.
Your volumes are significant but not likely to strain any credible RDBMS if programmed efficiently. If your team is sloppy (i.e., casually putting SQL queries directly into components which are then composed into larger components), you face the likelihood of a "multiplier" effect where one logical requirement (get the data necessary for this page) turns into a high number of physical database queries.
So, rather than focussing on the capacity of your RDBMS, you should focus on the capacity of your programmers and the degree to which your implementation language and environment facilitate profiling and refactoring.
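To make that "multiplier" concrete, here is a hypothetical sketch (table names and data are invented, and SQLite is used purely for illustration) of one-query-per-item versus a single batched query:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shipment (id INTEGER PRIMARY KEY, ref TEXT);
    CREATE TABLE shipment_item (id INTEGER PRIMARY KEY, shipment_id INTEGER, sku TEXT);
    INSERT INTO shipment VALUES (1, 'A'), (2, 'B');
    INSERT INTO shipment_item VALUES (1, 1, 'x'), (2, 1, 'y'), (3, 2, 'z');
""")

# Sloppy: one query for the shipments, then one more per shipment (N+1 queries total).
shipments = conn.execute("SELECT id, ref FROM shipment").fetchall()
for sid, ref in shipments:
    items = conn.execute(
        "SELECT sku FROM shipment_item WHERE shipment_id = ?", (sid,)).fetchall()

# Better: a single joined query satisfies the same logical requirement.
rows = conn.execute("""
    SELECT s.id, s.ref, i.sku
    FROM shipment s LEFT JOIN shipment_item i ON i.shipment_id = s.id
""").fetchall()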
The scenario you propose is clearly a 24x7x365 one, too, so you should also consider the need for monitoring / dashboard requirements.
There's no way to estimate development effort based on the needs you've presented; it's great that you've analyzed your transactions to this level of granularity, but the main determinant of development effort will be the domain and UI requirements.
Choose the technology your developers know and are familiar with. All major technologies out there will handle such requirements with ease.
Your daily update numbers vs commits do not add up. Four commits per second = 14,400 per hour.
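A quick back-of-the-envelope check of that mismatch, using the lower bound of the figures given in the question:

commits_per_second = 4              # lower bound from the question (4-5/sec)
seconds_per_day = 24 * 60 * 60      # 86,400

commits_per_hour = commits_per_second * 3600             # 14,400
commits_per_day = commits_per_second * seconds_per_day   # 345,600

print(commits_per_hour, commits_per_day)
# 345,600 commits/day is more than 30x the stated "10,000+ daily updates".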
You did not mention anything about expected database size.
In any case, I would concentrate my efforts on choosing a robust back end like Oracle, Sybase, MS, etc. This choice will make the most difference in performance. The front end could be either a desktop app or a web app depending on needs. Since this will be used in many offices around the world, a web app might make the most sense.
I'd go with MySQL or PostgreSQL. Not likely to have problems with either one for your requirements.
I love object databases. In terms of commits per second and database round-trips, no relational database can hold up. Check out db4o. It's dead easy to learn; check out the examples!
As for the programming language and UI framework: well, take what your team is good at. Dynamic languages with less time wasted on ceremony will probably save time.
There is not enough information provided here to give a proper recommendation. A little more due diligence is in order.
What is the IT culture like? Do they prefer lots of little servers or fewer bigger servers or big iron? What is their position on virtualization?
What is the corporate culture like? What is the political climate like? The open source offerings may very well handle the load but you may need to go with a proprietary vendor just because they are already used to navigating the political winds of a large company. Perception is important.
What is the maturity level of the organization? Do they already have an Enterprise Architecture team in place? Do they even know what EA is?
You've described the operational side but what about the analytical side? What OLAP technology are they expecting to use or already have in place?
Speaking of integration, what other systems will you need to integrate with?

What are the most useful software development metrics? [closed]

Closed 10 years ago.
I would like to track metrics that can be used to improve my team’s software development process, improve time estimates, and detect special case variations that need to be addressed during the project execution.
Please limit each answer to a single metric, describe how to use it, and vote up the good answers.
ROI.
The total amount of revenue brought in by the software minus the total amount of costs to produce the software. Break down the costs by percentage of total cost and isolate your poorest performing and most expensive area in terms of return on investment. Improve, automate, or eliminate that problem area if possible. Conversely, find your highest return-on-investment area and find ways to amplify its effects even further. If 80% of your ROI comes from 20% of your cost or effort, expand that particular area and minimize the rest by comparison.
Costs will include payroll, licenses, legal fees, hardware, office equipment, marketing, production, distribution, and support. This can be done on a macro level for a company as a whole or a micro level for a team or individual. It can also be applied to time, tasks, and methods in addition to revenue.
This doesn't mean ignore all the details, but find a way to quantify everything and then concentrate on the areas that yield the best (objective) results.
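A toy sketch of that breakdown; all figures are invented, purely to show the arithmetic of isolating the most expensive area:

# Hypothetical revenue and cost breakdown (all figures invented for illustration).
revenue = 1_000_000
costs = {"payroll": 400_000, "licenses": 50_000, "hardware": 30_000,
         "marketing": 120_000, "support": 100_000}

total_cost = sum(costs.values())
roi = (revenue - total_cost) / total_cost  # return on investment as a ratio

for area, cost in sorted(costs.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{area:10s} {cost:>9,} ({cost / total_cost:.0%} of total cost)")
print(f"ROI: {roi:.0%}")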
Inverse code coverage
Get a percentage of code not executed during a test. This is similar to what Shafa mentioned, but the usage is different. If a line of code is run during testing then we know it might be tested. But if a line of code has not been run then we know for sure that it has not been tested. Targeting these areas for unit testing will improve quality and takes less time than auditing the code that has been covered. Ideally you can do both, but that never seems to happen.
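A minimal sketch of the idea, assuming you can already obtain the set of executed line numbers from whatever coverage tool you use (the line counts below are invented):

def untested_lines(all_lines, executed_lines):
    """Lines that were definitely not exercised by any test."""
    return all_lines - executed_lines

# e.g. a 200-line module where the coverage tool reports lines 1-150 as executed
all_lines = set(range(1, 201))
executed = set(range(1, 151))

missed = untested_lines(all_lines, executed)
print(f"inverse coverage: {len(missed) / len(all_lines):.0%}")  # 25%
print(sorted(missed)[:5])  # the first few lines to target with new unit tests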
"improve my team’s software development process": Defect Find and Fix Rates
This relates to the number of defects or bugs raised against the number of fixes which have been committed or verified.
I'd have to say this is one of the really important metrics because it gives you two things:
1. Code churn. How much code is being changed on a daily/weekly basis (which is important when you are trying to stabilize for a release), and,
2. Shows you whether defects are ahead of fixes or vice-versa. This shows you how well the development team is responding to defects raised by the QA/testers.
A low fix rate indicates the team is busy working on other things (features perhaps). If the bug count is high, you might need to get developers to address some of the defects.
A low find rate indicates either your solution is brilliant and almost bug free, or the QA team have been blocked or have another focus.
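A toy sketch of tracking find vs. fix rates from a list of dated defect events; the export format and the sample data are assumptions, not something prescribed by this answer:

from collections import Counter
from datetime import date

# (date, event) pairs exported from your bug tracker; "found" or "fixed".
events = [
    (date(2024, 5, 1), "found"), (date(2024, 5, 1), "found"),
    (date(2024, 5, 1), "fixed"), (date(2024, 5, 2), "found"),
    (date(2024, 5, 2), "fixed"), (date(2024, 5, 2), "fixed"),
]

per_day = Counter((d, kind) for d, kind in events)
for d in sorted({d for d, _ in events}):
    found, fixed = per_day[(d, "found")], per_day[(d, "fixed")]
    trend = "fixes keeping up" if fixed >= found else "defects pulling ahead"
    print(d, f"found={found} fixed={fixed} -> {trend}")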
Track how long it takes to do a task that has an estimate against it. If it comes in well under, question why. If it comes in well over, question why.
Don't make it a negative thing; it's fine if tasks blow out or were way underestimated. Your goal is to continually improve your estimation process.
Track the source and type of bugs that you find.
The bug source represents the phase of development in which the bug was introduced. (eg. specification, design, implementation etc.)
The bug type is the broad style of bug. eg. memory allocation, incorrect conditional.
This should allow you to alter the procedures you follow in that phase of development and to tune your coding style guide to try to eliminate over-represented bug types.
Velocity: the number of features per given unit time.
Up to you to determine how you define features, but they should be roughly the same order of magnitude otherwise velocity is less useful. For instance, you may classify your features by stories or use cases. These should be broken down so that they are all roughly the same size. Every iteration, figure out how many stories (use-cases) got implemented (completed). The average number of features/iteration is your velocity. Once you know your velocity based on your feature unit you can use it to help estimate how long it will take to complete new projects based on their features.
[EDIT] Alternatively, you can assign a weight like function points or story points to each story as a measure of complexity, then add up the points for each completed feature and compute velocity in points/iteration.
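A worked sketch of the velocity arithmetic described above (the story counts and backlog size are invented):

completed_per_iteration = [6, 8, 7, 7]          # stories finished in past iterations
velocity = sum(completed_per_iteration) / len(completed_per_iteration)  # 7.0

backlog_size = 42                                # stories in the new project
iterations_needed = backlog_size / velocity      # 6.0 iterations

print(f"velocity = {velocity} stories/iteration, "
      f"estimate = {iterations_needed:.1f} iterations")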
Track the number of clones (similar code snippets) in the source code.
Get rid of clones by refactoring the code as soon as you spot the clones.
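A crude sketch of counting clones by grouping identical sliding windows of lines; the window size, whitespace normalization, and the file name are arbitrary choices of mine, and a real clone detector would do much more:

from collections import defaultdict

def find_clones(source: str, window: int = 4):
    """Group identical `window`-line snippets, ignoring leading/trailing whitespace."""
    lines = [ln.strip() for ln in source.splitlines()]
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        snippet = "\n".join(lines[i:i + window])
        if snippet.strip():                      # skip all-blank windows
            seen[snippet].append(i + 1)          # 1-based start line
    return {snip: locs for snip, locs in seen.items() if len(locs) > 1}

with open("module.py") as f:                     # hypothetical file name
    for snippet, locations in find_clones(f.read()).items():
        print(f"clone at lines {locations}:\n{snippet}\n")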
Average function length, or possibly a histogram of function lengths to get a better feel.
The longer a function is, the less obvious its correctness. If the code contains lots of long functions, it's probably a safe bet that there are a few bugs hiding in there.
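A small sketch using Python's ast module to collect function lengths and print a rough histogram; the 10-line bucket size and the file name are assumptions:

import ast
from collections import Counter

def function_lengths(source: str):
    """Length in lines of every function/method in a Python source file."""
    tree = ast.parse(source)
    return [node.end_lineno - node.lineno + 1
            for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))]

with open("module.py") as f:                     # hypothetical file name
    lengths = function_lengths(f.read())

histogram = Counter((length // 10) * 10 for length in lengths)  # 10-line buckets
for bucket in sorted(histogram):
    print(f"{bucket:3d}-{bucket + 9:<3d} {'#' * histogram[bucket]}")
print(f"average length: {sum(lengths) / len(lengths):.1f} lines")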
Number of failing tests or broken builds per commit.
Interdependency between classes: how tightly your code is coupled.
Track whether a piece of source has undergone review and, if so, what type. And later, track the number of bugs found in reviewed vs. unreviewed code.
This will allow you to determine how effectively your code review process(es) are operating in terms of bugs found.
If you're using Scrum, the backlog. How big is it after each sprint? Is it shrinking at a consistent rate? Or is stuff being pushed into the backlog because of (a) stuff that wasn't thought of to begin with ("We need another use case for an audit report that no one thought of, I'll just add it to the backlog.") or (b) not getting stuff done and pushing it into the backlog to meet the date instead of the promised features.
http://cccc.sourceforge.net/
Fan in and Fan out are my favorites.
Fan in:
How many other modules/classes use/know this module
Fan out:
How many other modules does this module use/know
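A rough sketch of computing fan-out (and, by inversion, fan-in) from a dependency map; the module names are invented, and extracting the map from real code (e.g. from import statements) is left to your tooling:

from collections import defaultdict

# Hypothetical module -> modules-it-uses map.
uses = {
    "billing":  {"db", "auth", "email"},
    "reports":  {"db", "billing"},
    "email":    {"smtp"},
}

fan_out = {module: len(deps) for module, deps in uses.items()}

fan_in = defaultdict(int)
for deps in uses.values():
    for dep in deps:
        fan_in[dep] += 1

print("fan-out:", fan_out)        # {'billing': 3, 'reports': 2, 'email': 1}
print("fan-in:", dict(fan_in))    # {'db': 2, 'auth': 1, 'email': 1, 'billing': 1, 'smtp': 1}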
improve time estimates
While Joel Spolsky's Evidence-based Scheduling isn't per se a metric, it sounds like exactly what you want. See http://www.joelonsoftware.com/items/2007/10/26.html
I especially like and use the system that Mary Poppendieck recommends. This system is based on three holistic measurements that must be taken as a package (so no, I'm not going to provide 3 answers):
Cycle time
From product concept to first release or
From feature request to feature deployment or
From bug detection to resolution
Business Case Realization (without this, everything else is irrelevant)
P&L or
ROI or
Goal of investment
Customer Satisfaction
e.g. Net Promoter Score
I don't need more to know if we are in phase with the ultimate goal: providing value to users, and fast.
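A minimal sketch of the cycle-time measurement (the first of the three), computed from request and deployment dates pulled from your tracker; the dates below are invented:

from datetime import date
from statistics import mean

# (feature request date, feature deployment date) pairs
features = [
    (date(2024, 1, 3), date(2024, 1, 20)),
    (date(2024, 1, 10), date(2024, 2, 1)),
    (date(2024, 2, 5), date(2024, 2, 14)),
]

cycle_times = [(deployed - requested).days for requested, deployed in features]
print(f"average cycle time: {mean(cycle_times):.1f} days")  # (17 + 22 + 9) / 3 = 16.0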
Number of similar lines (copy/pasted code).
improve my team’s software development process
It is important to understand that metrics can do nothing to improve your team’s software development process. All they can be used for is measuring how well you are advancing toward improving your development process in regards to the particular metric you are using. Perhaps I am quibbling over semantics but the way you are expressing it is why most developers hate it. It sounds like you are trying to use metrics to drive a result instead of using metrics to measure the result.
To put it another way, would you rather have 100% code coverage and lousy unit tests or fantastic unit tests and < 80% coverage?
Your answer should be the latter. You could even want the perfect world and have both but you better focus on the unit tests first and let the coverage get there when it does.
Most of the aforementioned metrics are interesting but won't help you improve team performance. The problem is you're asking a management question in a development forum.
Here are a few metrics: estimates vs. actuals at the project schedule level and personal level (see the previous link to Joel's evidence-based method), % defects removed at release (see my blog: http://redrockresearch.org/?p=58), scope creep/month, and overall productivity rating (Putnam's productivity index). Also, developers' bandwidth is good to measure.
Every time a bug is reported by the QA team- analyze why that defect escaped unit-testing by the developers.
Consider this as a perpetual-self-improvement exercise.
I like the Defect Resolution Efficiency (DRE) metric. DRE is the ratio of defects resolved prior to software release to all defects found. I suggest tracking this metric for each release of your software into production.
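A tiny worked example of the DRE formula (the defect counts are invented):

def defect_resolution_efficiency(resolved_before_release: int, total_found: int) -> float:
    """DRE = defects resolved before release / all defects found (pre- and post-release)."""
    return resolved_before_release / total_found

# e.g. 90 defects fixed before the release, 10 more reported by users afterwards
print(f"DRE = {defect_resolution_efficiency(90, 100):.0%}")  # 90%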
Tracking metrics in QA has been a fundamental activity for quite some time now. But often, development teams do not fully look at how relevant these metrics are in relation to all aspects of the business. For example, the typical tracked metrics such as defect ratios, validity, test productivity, code coverage etc. are usually evaluated in terms of the functional aspects of the software, but few pay attention to how they matter to the business aspects of software.
There are also other metrics that can add much value to the business aspects of the software, which is very important when an overall quality view of the software is looked at. These can be broadly classified into:
Needs of the beta users captured by business analysts, marketing and sales folks
End-user requirements defined by the product management team
Ensuring availability of the software at peak loads and ability of the software to integrate with enterprise IT systems
Support for high-volume transactions
Security aspects depending on the industry that the software serves
Availability of must-have and nice-to-have features in comparison to the competition
And a few more….
Code coverage percentage
If you're using Scrum, you want to know how each day's Scrum went. Are people getting done what they said they'd get done?
Personally, I'm bad at it. I chronically run over on my dailies.
Perhaps you can test CodeHealer
CodeHealer performs an in-depth analysis of source code, looking for problems in the following areas:
- Audits: quality control rules such as unused or unreachable code, use of directive names and keywords as identifiers, identifiers hiding others of the same name at a higher scope, and more.
- Checks: potential errors such as uninitialised or unreferenced identifiers, dangerous type casting, automatic type conversions, undefined function return values, unused assigned values, and more.
- Metrics: quantification of code properties such as cyclomatic complexity, coupling between objects (Data Abstraction Coupling), comment ratio, number of classes, lines of code, and more.
Size and frequency of source control commits.