Determining C Complexity

Don Miller donm at margot.Eng.Sun.COM
Sat Aug 4 05:21:38 AEST 1990


First, a quick definition:

A software metric defines a standard way of measuring some attribute 
of the software development process.  For example, size, cost, defects,
communications, difficulty, and environment are all attributes.

  -- from the book Software Metrics: Establishing a Company-Wide
     Program, an account of the implementation of a metrics program
     at HP.

In article <1990Aug2.195825.29393 at zoo.toronto.edu> henry at zoo.toronto.edu (Henry Spencer) writes:
I said (among other things):
>>Thus, an attempt to
>>measure software quality with an eye toward continuous improvement
>>seems a rational course.
>
>Certainly.  What does that have to do with code metrics?
>
>That is the crux of the problem:  there is little or no evidence that
>those code metrics measure anything *useful*.
>
>If you want to measure quality, measure quality.  Count verified bug
>reports and performance problems, and perhaps introduce some sort of
>modifier for memory consumption.  These are not terribly good measures
>of quality, but at least they measure real problems!
  
  When I first joined this thread I understood and agreed with
  the discussions dismissing code metrics as ends in themselves.
  My assertion is that the measurement of the software development
  process (including code, defects, people, time, cost, etc.) is 
  the only way to evaluate changes to the process, presumably made
  to increase quality, speed up development time, decrease resource
  usage, or increase functionality.  In general, metrics are a 
  means towards developing a maximally efficient software development
  process.
  
>(If you want a suggestion on what to *do* with that information, forget
>Japan for the moment and apply a dose of capitalism.  Start with the
>number 1 at the beginning of the quarter.  Every time you get a legitimate
>report of a flaw -- bug, performance problem, etc. -- multiply the
>current number by 0.99.  At the end of the quarter, each person in
>the programming group gets that fraction of his salary as a lump-sum
>performance bonus.  Note that this applies across the whole group, not
>person-by-person, which encourages cooperative efforts to reduce the
>flaw rate.  And yes, this means that a very low flaw rate means paying
>those people a lot of extra money -- they'll be worth it!)
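  For concreteness, the quoted scheme works out like this (a
  back-of-the-envelope sketch in C; the salary figure and flaw
  count below are invented illustration values, not anything from
  the proposal): each legitimate flaw report multiplies the running
  number by 0.99, so after n flaws the group keeps 0.99^n of the
  bonus pool.

  #include <stdio.h>
  #include <math.h>

  /* Fraction of salary paid as bonus: start at 1.0, multiply
     by 0.99 once per verified flaw report.  */
  double bonus_fraction(int flaws)
  {
      return pow(0.99, (double)flaws);
  }

  int main(void)
  {
      double salary = 40000.0;   /* hypothetical quarterly salary */
      int    flaws  = 10;        /* verified flaw reports this quarter */
      double frac   = bonus_fraction(flaws);

      printf("fraction kept:  %.4f\n", frac);
      printf("lump-sum bonus: %.2f\n", salary * frac);
      return 0;
  }

  Ten flaws in a quarter still leaves the group about 90% of the
  pool, so the scheme degrades gently rather than zeroing out the
  bonus on the first bug.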

  Here is a fatal flaw of metrics practice.  Using metrics to
  evaluate and reward people is counterproductive: it only causes
  future data to be distorted and useless.  The goal becomes
  ensuring that no one knows you have bugs rather than not having
  them.

> ... -- the programmers themselves
>will figure out which quality-improvement schemes work and which don't.)
 
  Not if they don't have the means to evaluate the meaning of "work"
  (especially "works better").

  I've redirected followups to comp.software-eng for further
  interesting discussion.


Don Miller
Software Quality Engineering
Sun Microsystems, Inc.
donm at margot.eng.sun.com



More information about the Comp.lang.c mailing list