A few weeks back we finished a release, and the way we evaluated the software's quality was based solely on defects. Then I had an eye-opening discussion with a colleague, who explained how defect metrics are just the tip of the iceberg, and that there is a whole lot more that should go into actually determining software quality metrics. I am trying to recollect what he said and put it down here.
First of all, according to him, the traditional approach has been to use "defect metrics", but those only give an idea of how buggy the software is. Depending on which metrics are used, they can also give some insight into testing effectiveness or code quality (defects/KLOC), but knowing all of this still does not give us a way to accurately determine the quality of the software. So what are the other ingredients that should go into the mix? Here are a few that I will write down to see how others feel about them, and then probably go into more detail with examples on each of them later.
Coverage based metrics
Identify ways to measure requirements coverage by test cases, as well as actual test case execution coverage. Some examples are:
Number of test cases written for a module / Number of requirements for a module
Number of test cases executed for a release / Number of test cases written for a release
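The two ratios above can be sketched in a few lines of Python. This is a minimal illustration, not a real tool; the module and release counts are made-up numbers.

```python
def coverage_ratio(covered, total):
    """Return coverage as a percentage; 0 if there is nothing to cover."""
    return round(100.0 * covered / total, 1) if total else 0.0

# Requirements coverage: test cases written vs. requirements for a module
# (counts below are hypothetical)
requirements_in_module = 40
testcases_written = 34
req_coverage = coverage_ratio(testcases_written, requirements_in_module)

# Execution coverage: test cases executed vs. written for a release
testcases_in_release = 500
testcases_executed = 450
exec_coverage = coverage_ratio(testcases_executed, testcases_in_release)

print(req_coverage)   # 85.0
print(exec_coverage)  # 90.0
```

Tracking both numbers separately matters: a release can have high execution coverage while whole requirements remain untested.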
Process Effectiveness Metrics
Identify metrics to measure process statistics and spot improvement opportunities. Some examples include:
Total time taken for the number of test cases executed in every build.
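One way to make this actionable is to normalize total time by test case count, so builds of different sizes stay comparable. A small sketch, with hypothetical build names and timings:

```python
# Hypothetical per-build timing data (minutes spent, test cases run).
builds = {
    "build-101": {"total_minutes": 480, "testcases": 320},
    "build-102": {"total_minutes": 500, "testcases": 400},
}

def minutes_per_testcase(build):
    """Average execution time per test case for one build."""
    return round(build["total_minutes"] / build["testcases"], 2)

for name, data in sorted(builds.items()):
    print(name, minutes_per_testcase(data))
# build-101 1.5
# build-102 1.25
```

A rising per-test-case number across builds is a hint that the process (environment setup, data preparation, triage) is slowing down, even if total time looks flat.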
Resource Efficiency Metrics
Metrics to measure how effectively we are employing resources. Some examples:
How many test cases were developed by a particular resource?
How many defects were logged by a particular resource?
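Both questions reduce to counting events per person, which `collections.Counter` handles directly. The activity logs below are invented for illustration:

```python
from collections import Counter

# Hypothetical activity logs: one entry per test case authored / defect logged.
testcases_authored = ["alice", "alice", "bob", "carol", "alice", "bob"]
defects_logged = ["bob", "carol", "carol", "alice"]

testcases_per_resource = Counter(testcases_authored)
defects_per_resource = Counter(defects_logged)

print(testcases_per_resource["alice"])  # 3
print(defects_per_resource["carol"])    # 2
```

Raw counts like these are easy to game, so they are best read as conversation starters rather than performance scores.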
Test Optimization Metrics
Metrics to track and give us feedback on how to optimize testing activity around a particular module. E.g.:
How much time was spent testing a particular module? Using this metric we can see if a particular module is growing too large, creeping into regression time, and take corrective action.
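A simple way to operationalize this is to flag any module whose share of total testing time crosses a threshold. The module names and the 50% threshold below are assumptions for illustration:

```python
# Hypothetical hours of testing time spent per module in a cycle.
module_hours = {"payments": 42, "search": 11, "profile": 7}

def over_budget(hours_by_module, threshold=0.5):
    """Return modules consuming more than `threshold` of total testing time."""
    total = sum(hours_by_module.values())
    return [m for m, h in hours_by_module.items() if h / total > threshold]

print(over_budget(module_hours))  # ['payments'] (42 of 60 hours = 70%)
```

A module that trips the threshold is a candidate for splitting its suite, automating its regression set, or revisiting its test design.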
Automation ROI Metrics
Metrics to see how effective the automation effort is. One simple metric is to track the time taken to execute test cases manually against the time taken to execute the same test cases once automated.
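The manual-vs-automated comparison only tells the full story once the cost of building the automation is included. A minimal sketch, with all figures assumed for illustration:

```python
# Hypothetical figures: one full manual pass vs. the same suite automated.
manual_minutes_per_run = 300
automated_minutes_per_run = 20
automation_dev_cost_minutes = 2400  # one-time cost to build the automation

def net_savings(runs):
    """Minutes saved after `runs` executions, net of automation build cost."""
    saved = runs * (manual_minutes_per_run - automated_minutes_per_run)
    return saved - automation_dev_cost_minutes

print(net_savings(5))   # -1000 (automation has not yet paid for itself)
print(net_savings(20))  # 3200
```

The break-even run count (here, about 9 runs) is often the most persuasive number to show when arguing for or against automating a suite.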
In addition, performance testing metrics can also be included in this set, as can solution-specific metrics (e.g., if different models of mobile devices are tested, coverage of each model can be included as part of these metrics).
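For the mobile example, solution-specific coverage is just the per-model version of the execution ratio from earlier. A sketch with made-up model names and counts:

```python
def model_coverage(planned, executed):
    """Percent of planned test cases actually executed, per model."""
    return {m: round(100.0 * executed.get(m, 0) / planned[m], 1) for m in planned}

# Hypothetical planned vs. executed test cases per mobile model.
planned = {"ModelA": 120, "ModelB": 120, "ModelC": 80}
executed = {"ModelA": 120, "ModelB": 90, "ModelC": 40}

print(model_coverage(planned, executed))
# {'ModelA': 100.0, 'ModelB': 75.0, 'ModelC': 50.0}
```

Breaking coverage out per model (or per browser, per OS, per locale) surfaces gaps that an aggregate coverage number would hide.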