Wednesday, 26 November 2008

it's so easy


Sometimes, I like to use mathematical notation in webpages, either to impress people or simply for decoration. One way to do that is MathML, which is an XML-based markup language for mathematical notation. However, many browsers do not support MathML at all, or require you to download plugins and/or special fonts. Another problem with MathML is that XML is a really inconvenient format to edit by hand. Practically, you'll need some kind of formula editor.

tex vs mathml


As an old-schooler, I prefer to use the math-notation invented for TeX instead - it is short and sweet and powerful. Donald Knuth invented the whole TeX language because he was unhappy with the quality of the typesetting of mathematics, and it is widely used in both computer science and mathematics. Anyway, I'm sure many people remember the 'abc-formula' to calculate the roots of a quadratic function:


In the TeX-sublanguage for math, one can specify the formula as follows:

-b \pm \sqrt{b^2 - 4ac} \over 2a
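Written out in full (with the x = in front, and using \frac instead of plain TeX's \over), the same formula in LaTeX-style notation reads:

x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}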

The corresponding MathML is no fewer than 20 lines; see the example in Wikipedia. Clearly, MathML is not designed for hand-editing. There are some editors available, but hand-editing TeX is much faster (at least for me); and, as mentioned, even if you have the MathML, many browsers will not show it correctly.

So what I'd like is a way to (i) use TeX-notation and (ii) have it display correctly in any (graphical) browser. One way to do that is to use LaTeX to process and render the formulae, and convert the result to a PNG-image. In 2004, I wrote a little tool called WebTeX to create small images from TeX-formulae. It was nothing too fancy; you enter an <img ...>-element with a description of some formula, and the little tool would turn it into an image, using LaTeX and ImageMagick. I don't maintain that old tool anymore - it was time for something new. Therefore...

texdrive


This weekend, I wrote a new maths-in-webpages tool using emacs-lisp. The emacs-integration makes adding formulae to html-pages really easy. For example, if I want to include the famous Bayes' Theorem, I simply type:

M-x texdrive-insert-formula
Formula: $P(A|B) = \frac{P(B|A)P(A)}{P(B|A)P(A) + P(B|\overline{A})P(\overline{A})}$
Title: bayes-theorem

Et voilà; the following is inserted:

<img src="bayes-theorem.png" title="bayes-theorem"
class="texdrive-formula" name="$P(A|B) = \frac{P(B|A)P(A)}{P(B|A)P(A) + P(B|\overline{A})P(\overline{A})}$"
border="0">

Now, all we need to do is run texdrive-generate-images-from-html, and the corresponding image will be generated:


So, for immediate download: texdrive.el. It works pretty well for me; please let me know if you have any problems or are missing something. In some cases, the formulae are not as sharp as they could be; I hope I'll be able to improve that with some tweaking. Anyway, it's nice to see how one can solve problems by gluing together some existing open-source tools. Standing on the shoulders of giants...

Note that some wiki software, notably Wikipedia's MediaWiki, uses a similar approach.

Tuesday, 11 November 2008

the test that stumped them all

Most of us are not Donald Knuth, and indeed need to test our software. That is even true for my hobby projects - when I offer software for use by others, it's a matter of craftsmanship to deliver the best software possible. It's very hard to foresee all the possible environments (architecture, compiler, library version, ...) where my software might be run. But at least I can minimize the number of programming errors by testing things as much as possible.

The trouble with testing, however, is that it is dead boring. I hate doing boring things -- life is just too short. So, I want to do my testing in the least boring way possible -- I'd like to be able to simply run:


$ make test

and have that go through all my test cases, and report any failures. The idea is that if it is that easy to run tests, you might actually do so, and make sure your software is working according to plan. When doing a release, it is so easy to forget something really obvious, for which you get embarrassing bug reports... Running some automated tests beforehand gives some peace of mind.

gtest

Since 2.16, the GLib library offers a unit-testing framework called GTest (note, this is not to be confused with Google Test, sometimes also called GTest). GTest is not much different from, say, check, but it's part of GLib and integrates nicely with it. I have started to use it for mu, and I am quite happy with it. Here, I will not go into the details of actually writing test cases, but talk about how to integrate GTest with your code. For the best results, you'd probably want to integrate it with your build system. I am using autotools.

The overall setup is that for all my directories with code, there is a subdirectory tests/ which contains the test code. Those test cases are unit tests, which test one function or a couple of them combined. Now, of course it's a lot easier when your code is written in a way that makes this easy[1]. In addition to the per-directory tests/, there is also a top-level tests/, which tests the whole software workflow. In the case of mu, this means that the tests will index some test messages, fill a database with them, and then run some test queries against this database. When all of that works correctly, I am quite confident that my software is not totally broken.
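Schematically, the layout looks something like this (the directory names are only an illustration; mu's actual tree differs in the details):

index/
  tests/       <- unit tests for the code in index/
query/
  tests/       <- unit tests for the code in query/
tests/         <- top-level: tests the whole workflow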

autotools

Now, let's discuss how you can integrate GTest with your code; this is inspired by the way GTK+ does it these days. First, here is gtest.mk, a file at the top of my source tree, that I include in all Makefile.am files that require GTest support:

TEST_PROGS=

test: all $(TEST_PROGS)
	@ test -z "$(TEST_PROGS)" || gtester -l --verbose $(TEST_PROGS); \
	test -z "$(SUBDIRS)" || \
	for subdir in $(SUBDIRS); do \
	        test "$$subdir" = "." || \
	        (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $@ ) || exit $? ; \
	done

.PHONY: test

This blob adds a test target to various Makefiles, which will run the gtester program (part of GTest) with your test programs.
In my configure.ac I have:

# g_test was introduced in glib 2.16
PKG_CHECK_MODULES(g_test,glib-2.0 >= 2.16,
                  [have_gtest=yes],[have_gtest=no])
AM_CONDITIONAL(MU_HAVE_GTEST, test "x$have_gtest" = "xyes")
if test "x$have_gtest" = "xno"; then
        AC_MSG_WARN([You need GLIB version >= 2.16 to build the unit tests])
fi

This way, I make sure that my code also works with older versions of GLib; the unit tests, of course, will only work with the newer versions. The check gives you a symbol MU_HAVE_GTEST that you can use in your Makefile.am; for example, in index/Makefile.am, I have:

include $(top_srcdir)/gtest.mk

SUBDIRS= .

if MU_HAVE_GTEST
SUBDIRS += tests
endif
[....]

As you can see, it includes the gtest.mk mentioned above, and (conditionally) adds tests/ as a subdirectory to visit; the unit tests live in that subdirectory. Note that by explicitly setting SUBDIRS to '.' first, we ensure that the code in index/ is built before we descend into tests/.
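The Makefile.am in tests/ then builds the test program(s) and adds them to TEST_PROGS. Roughly, it would look something like the following (the names test-index and libindex.la are made up for the example; the real mu sources differ):

include $(top_srcdir)/gtest.mk

TEST_PROGS += test-index
noinst_PROGRAMS = $(TEST_PROGS)

AM_CFLAGS = $(g_test_CFLAGS)

test_index_SOURCES = test-index.c
# link against the code under test
test_index_LDADD = $(g_test_LIBS) ../libindex.la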

unit tests

Below is a simple example unit test program; it only uses a small subset of GTest. You can further organize your test cases (see GTestSuite and GTestCase) and use fixtures, which set up the testing environment. I don't use those myself, but they might be useful for others (there is a small fixture sketch right after the example below); check out the GTest documentation to find out more. Anyway, here are some simple test cases:

#include <glib.h>
#include "my-code-to-test.h"


static void
test_num_str (void)
{
	char *str;

	g_assert_cmpstr (str = my_num_str(1001),==,"one thousand and one");
	g_free (str);

	g_assert_cmpstr (str = my_num_str(-1),==,"minus one");
	g_free (str);
}


static void
test_warning (void)
{
	/* no complex roots: my_sqrt(-1) should
	 * return MY_SQRT_ERROR and issue a g_warning; the
	 * g_warning will trigger the process to fail,
	 * which is what we're expecting */
	if (g_test_trap_fork (0, G_TEST_TRAP_SILENCE_STDERR))
		g_assert (my_sqrt (-1) == MY_SQRT_ERROR);

	g_test_trap_assert_failed ();
}



int
main (int argc, char *argv[])
{
	g_test_init (&argc, &argv, NULL);

	g_test_add_func ("/mytests/test-num-str", test_num_str);
	g_test_add_func ("/mytests/test-warning", test_warning);

	return g_test_run ();
}
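For completeness, since I mentioned fixtures above but don't use them myself: a fixture-based test would look roughly like this (my_db_new and friends are made-up names for code under test), registered with g_test_add instead of g_test_add_func:

typedef struct {
	MyDb *db;	/* whatever the tests need set up */
} DbFixture;

static void
db_setup (DbFixture *fixture, gconstpointer data)
{
	fixture->db = my_db_new ();	/* runs before each test */
}

static void
test_db_empty (DbFixture *fixture, gconstpointer data)
{
	g_assert_cmpint (my_db_count (fixture->db), ==, 0);
}

static void
db_teardown (DbFixture *fixture, gconstpointer data)
{
	my_db_destroy (fixture->db);	/* runs after each test */
}

/* in main():
 *   g_test_add ("/mydb/empty", DbFixture, NULL,
 *               db_setup, test_db_empty, db_teardown);
 */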


Now, we can run our tests with:
$ make test

(Note that the test cases are fork()ed, so you can actually write a test case that passes precisely when an abort or even a segfault occurs.)

For mu-0.4 I get the following output:


[...]
make[1]: Entering directory `/home/djcb/src/mu-0.4/tests'
TEST: test-index-search... (pid=15553)
/all/test-query01: OK
/all/test-query02: OK
/all/test-query03: OK
/all/test-query04: OK
/all/test-query05: OK
/all/test-query06: OK
/all/test-query07: OK
/all/test-stats01: OK
PASS: test-index-search
make[1]: Leaving directory `/home/djcb/src/mu-0.4/tests'

Nice and easy; if you're less lucky, you might get something like:

make[1]: Entering directory `/home/djcb/src/mu-0.4/tests'
TEST: test-index-search... (pid=16024)
/all/test-query01: **
ERROR:test-index-search.c:117:query_01: assertion failed (mu_msg_sqlite_get_subject(row) == "this can't be right"): ("Re: What does 'run' do in cperl-mode?" == "this can't be right")
FAIL
GTester: last random seed: R02S2d24e3907b0c62e6a008e891f401fedf
/bin/bash: line 5: 16023 Terminated gtester --verbose test-index-search
make[1]: Leaving directory `/home/djcb/src/mu-0.4/tests'

With that, all we need to do is fix the bug and test again... lather-rinse-repeat. Using GTest, it's really easy to run test cases. In general, I try to make sure my software passes the tests at the end of every programming session. Now, this does not work when I am making big changes, but after stabilizing things again, I make sure all test cases pass, both old and new.

parting thoughts

One thing still missing from GTest is some way to see the code coverage, i.e. to see which parts of the code are covered by the tests. I think it should be possible to do this using gcov, but it'd be nice if someone automated that a bit. Another issue is that, for effective use, you will need something like the setup described here. One can hardly expect someone new to Unix development to figure this out by themselves... but of course, we cannot really blame GTest for that.
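For the gcov part, something along the following lines might work (an untested sketch, nothing more): rebuild with coverage instrumentation, run the tests, and then let gcov report per-file coverage:

coverage:
	$(MAKE) clean
	$(MAKE) CFLAGS="-g -O0 -fprofile-arcs -ftest-coverage" \
		LDFLAGS="-fprofile-arcs" test
	gcov *.c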

Hopefully my setup helps a bit in setting up non-boring testing (even though it might be a bit boring in itself...). There are real-life examples of this in both mu and GTK+. And finally, if you find any inaccuracies, please let me know -- there are no unit tests for blog entries to save me from mistakes...



[1] Now, a discussion of how to write easily testable functions deserves its own blog entry, but there are some general things to keep in mind. Keep your functions short, limit the number of parameters, avoid global variables, limit side-effects to only a few functions, etc. In other words, use the lessons learnt from functional programming languages. And as a nice side-effect (ha!), such functions tend to be much less error-prone in the first place.

Saturday, 1 November 2008

i dream in infra red


I released mu 0.4 (my e-mail indexing/search tool), and as always, I try to learn things from it.

One of the main problems with writing correct and maintainable software is complexity. I am not talking about computational (big-O) complexity here - I am talking about code complexity, as a subjective measure for readability. Some people write very elegant and readable code, while others write code that is very hard to understand. It would be nice to have some objective measure.

cyclomatic complexity

While certainly not perfect, I find McCabe's Cyclomatic Complexity a useful tool for this. Thomas J. McCabe describes his method in his classic paper from 1976, as a metric over the control-flow graph of the program. I won't go into the details of the exact calculation here (it's straightforward though, read the paper) -- the bottom line is that the higher the complexity, the harder the code is to understand and to test. Indeed, it's not just about readability for humans: the complexity has a direct relation with the number of code paths, and consequently, with the testability of the function. If the complexity is high, you'll have an unholy number of code paths, which are impossible to fully test, and software quality will suffer.
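For the record, the definition is short: for a control-flow graph with E edges, N nodes and P connected components (one per function), the complexity is

M = E - N + 2P

which, for a single function, boils down to the number of decision points (if, while, for, case, the short-circuit operators) plus one.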

Making sure your code is not too complex (according to this measure) means simply ensuring that there are not too many code paths (really: decisions); i.e., split your code into short functions that do one thing, and do it well.

pmccabe

Now, how do we get the numbers to identify overly complex functions? Thankfully, we don't need to calculate anything by hand. There is the pmccabe package (debian/ubuntu) which does the work for us, for example:

$ pmccabe -fv prime.c
Modified McCabe Cyclomatic Complexity
|   Traditional McCabe Cyclomatic Complexity
|   |    # Statements in function
|   |    |   First line of function
|   |    |   |   # lines in function
|   |    |   |   |     filename(definition line number):function
|   |    |   |   |     |
6   6   18   4   26    prime.c(5): main
6   6   19   1   30    prime.c

An interesting example of complexity is __strptime_internal in evolution-data-server/trunk/libedataserver/e-time-utils.c, which has a complexity of 196(!). I am glad I do not have to maintain that one...

recommendation

What the maximum recommended cyclomatic complexity for a function should be is debatable - but many coding guidelines suggest a value of 10. If you go much beyond that, it's easy to see that the function gets very complex.

As always, we should use such guidelines with care. I can imagine some inherently complex algorithms that you nevertheless wouldn't like to split, precisely *because* you want to keep things as understandable as possible. But those will be rare exceptions.

practical

Obviously, limiting cyclomatic complexity is not sufficient to create maintainable software; there are still many other opportunities for making your code hard to understand. Still, it does not hurt to at least keep this one aspect under control, especially as experience suggests there is a high correlation between function complexity and error density. Fortunately, it's usually not too hard to reduce the complexity: split big functions (carefully!) into smaller ones - logical units that do one thing, and do it well.
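As a tiny, made-up illustration of what such a split does: in the first version below, every decision lives in one function; in the second, the per-character decision has been moved into a helper of its own, and each function is left with only a couple of decision points.

#include <glib.h>

/* before: all the decisions in one place */
static gboolean
valid_id_v1 (const char *str)
{
	int i;

	if (!str || !*str)
		return FALSE;

	for (i = 0; str[i]; ++i) {
		if (i == 0 && !g_ascii_isalpha (str[i]))
			return FALSE;
		if (i > 0 && !g_ascii_isalnum (str[i]) && str[i] != '_')
			return FALSE;
	}

	return TRUE;
}

/* after: the per-character decision is a small function of its own */
static gboolean
valid_id_char (char c, gboolean first)
{
	if (first)
		return g_ascii_isalpha (c);

	return g_ascii_isalnum (c) || c == '_';
}

static gboolean
valid_id_v2 (const char *str)
{
	int i;

	if (!str || !*str)
		return FALSE;

	for (i = 0; str[i]; ++i)
		if (!valid_id_char (str[i], i == 0))
			return FALSE;

	return TRUE;
}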

I made sure the new mu follows the <=10-rule. I found some extra targets for Makefiles quite useful for that:


cc10:
	@pmccabe `find -name '*.c'` | sort -nr | awk '($$1 > 10)'

cc20:
	@pmccabe `find -name '*.c'` | sort -nr | awk '($$1 > 20)'

Now, I can simply type make cc10 or make cc20 to get all the functions that violate the rule CC <= 10, resp. CC <= 20. Mu version 0.3 still contained a handful of functions that broke the rule, but I have now simplified them - splitting the big functions up. In my projects, I have usually followed the rule to some extent, intuitively, but I definitely could have written better code if I had paid attention to the number before. There is of course a risk in changing working code just because of 'some number'; but in the long run I think it will really pay off.