Saturday, 1 November 2008

i dream in infra red


I released mu 0.4 (my e-mail indexing/search tool), and as always, I tried to learn some things from it.

One of the main problems with writing correct and maintainable software is complexity. I am not talking about computational (big-O) complexity here - I am talking about code complexity: how hard the code is to read and understand. Some people write very elegant and readable code, while others write code that is very hard to follow. That judgment is subjective, though; it would be nice to have some objective measure.

cyclomatic complexity

While certainly not perfect, I found McCabe's Cyclomatic Complexity a useful tool for this. Thomas J. McCabe described his method in his classic paper from 1976 as a metric on the control-flow graph of the program. I won't go into the details of the exact calculation here (it's straightforward though, read the paper) - for a single function, it boils down to the number of decision points (if, while, for, case, ...) plus one. The bottom line is that the higher the complexity, the harder the code is to understand and to test. Indeed, it's not just about readability for humans: the complexity has a direct relation with the number of code paths, and consequently, with the testability of the function. If the complexity is high, you'll have an unholy number of code paths, which are impossible to fully test, and software quality will suffer.
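To make that concrete, here is a small sketch (an illustrative example, not code from mu); the if, the for and the inner if are three decision points, so the cyclomatic complexity of this function is 3 + 1 = 4:

/* illustrative example: three decisions => complexity 4 */
static int
is_prime (int n)
{
        int i;

        if (n < 2)                      /* decision 1 */
                return 0;

        for (i = 2; i * i <= n; ++i)    /* decision 2 */
                if (n % i == 0)         /* decision 3 */
                        return 0;

        return 1;
}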

Making sure your code is not too complex (according to this measure) simply means ensuring that there are not too many code paths (really: decisions); i.e., split your code into short functions that do one thing, and do it well.

pmccabe

Now, how do we get the numbers to identify overly complex functions? Thankfully, we don't need to calculate anything by hand: the pmccabe package (Debian/Ubuntu) does the work for us. For example:

$ pmccabe -fv prime.c
Modified McCabe Cyclomatic Complexity
| Traditional McCabe Cyclomatic Complexity
| | # Statements in function
| | | First line of function
| | | | # lines in function
| | | | | filename(definition line number):function
| | | | | |
6 6 18 4 26 prime.c(5): main
6 6 19 1 30 prime.c
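The prime.c itself is not shown here, but purely as an illustration, a main along the following lines has five decision points and thus a complexity of 6 (a hypothetical reconstruction, not the actual file). Note that the first two columns of the output are equal because there is no switch statement: pmccabe's "modified" count treats a whole switch as a single decision, while the traditional count adds one per case.

/* hypothetical reconstruction of prime.c, for illustration only;
 * five decisions in main => complexity 6 */
#include <stdio.h>
#include <stdlib.h>

int
main (int argc, char *argv[])
{
        int i, n, d, prime;

        if (argc < 2) {                          /* decision 1 */
                fprintf (stderr, "usage: prime <number>...\n");
                return 1;
        }

        for (i = 1; i < argc; ++i) {             /* decision 2 */
                n     = atoi (argv[i]);
                prime = (n >= 2);

                for (d = 2; d * d <= n; ++d)     /* decision 3 */
                        if (n % d == 0)          /* decision 4 */
                                prime = 0;

                if (prime)                       /* decision 5 */
                        printf ("%d is prime\n", n);
                else
                        printf ("%d is not prime\n", n);
        }

        return 0;
}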

An interesting example of complexity is the __strptime_internal function in evolution-data-server/trunk/libedataserver/e-time-utils.c, which has a complexity of 196(!). I am glad I do not have to maintain that one...

recommendation

What the maximum recommended cyclomatic complexity for a function should be is debatable, but many coding guidelines suggest a value of 10. Go much beyond that, and the function quickly becomes hard to follow and to test.

As always, we should use such guidelines with care. I can imagine some inherently complex algorithms that you nevertheless would not want to split, precisely *because* you want to keep things as understandable as possible. But those will be rare exceptions.

practical

Obviously, limiting cyclomatic complexity is not sufficient for creating maintainable software; there are still many other opportunities for making your code hard to understand. Still, it does not hurt to keep at least this one aspect under control, especially as experience suggests that there is a high correlation between function complexity and error density. Fortunately, it's usually not too hard to reduce the complexity: split big functions (carefully!) into smaller ones - logical units that do one thing, and do it well.
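As a sketch of what that splitting can look like (hypothetical code, not from mu): instead of one function that validates, parses and reports - with all the branches that entails - each helper keeps its own complexity trivial:

/* hypothetical sketch: after splitting, no function here
 * has a complexity above 3 */
#include <stdio.h>
#include <stdlib.h>

/* one decision (the if/else) => complexity 2 */
static void
report (int n)
{
        if (n % 2 == 0)
                printf ("%d is even\n", n);
        else
                printf ("%d is odd\n", n);
}

/* two decisions (the two ifs) => complexity 3 */
static int
handle_line (const char *line)
{
        char *end;
        long n;

        if (!line)
                return -1;

        n = strtol (line, &end, 10);
        if (end == line)        /* not a number */
                return -1;

        report ((int)n);
        return 0;
}

int
main (void)
{
        return handle_line ("42");
}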

I made sure the new mu follows the <=10 rule. I found a couple of extra Makefile targets quite useful for that:


cc10:
	@pmccabe `find . -name '*.c'` | sort -nr | awk '($$1 > 10)'

cc20:
	@pmccabe `find . -name '*.c'` | sort -nr | awk '($$1 > 20)'

Now, I can simply type make cc10 or make cc20 to get all the functions that violate the rule CC <= 10 or CC <= 20, respectively. Mu version 0.3 still contained a handful of functions that broke the rule, but I have now simplified them by splitting the big functions up. In my projects I have usually followed the rule to some extent, intuitively, but I could definitely have written better code if I had paid attention to this number before. There is of course a risk in changing working code just because of 'some number'; but in the long run, I think it will really pay off.

2 comments:

Anonymous said...

Is this only for C, or can it analyze my C++ too?

djcb said...

the pmccabe tool works quite well for C++; in some rare cases I've seen it get confused, e.g., when there is some preprocessor trickery.