Java News from Sunday, February 15, 2004

Here's a question that's been bothering me for the last couple of days. How do you know when a project is done? Obviously you can look at all the deliverables, make sure all the features are implemented, all the documentation is in place, and so forth. That's easy. But more specifically, how do you know when it's bug-free?

I was actually thinking I'd have a fairly short beta cycle for XOM once I finally got all the pieces into place (the API is almost finished, and the documentation is coming along nicely), because I was fairly confident that the code was of very high quality. I use aggressive unit testing. XOM has over 700 unit tests, some of which really stretch the definition of "unit test" by doing things like running the entire XInclude test suite. (Honestly, that's a functionality test, not a unit test. However, it's just easier if I roll all the tests into one JUnit test suite.) I profile performance and memory usage. I use code coverage tools like Jester and Clover to make sure the tests exercise the entire code base. Furthermore, every week or two I run just about every open source static code checking tool, such as PMD and FindBugs, across the code base. And the code's all open source, so anybody can look at it. Most importantly, no significant bugs had been found in the months since I began using code coverage tools to verify that the tests covered the code.
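For readers who haven't seen the suite, here's a minimal sketch of the kind of test it's built from, in the JUnit 3 style current at the time. This is an illustration written for this post rather than an excerpt from XOM's actual tests, though the Element methods it calls are part of XOM's published API:

    import junit.framework.TestCase;
    import nu.xom.Element;

    public class ElementTest extends TestCase {

        // Verify that appending a string child stores it as text content
        // and that serialization reproduces it.
        public void testAppendTextChild() {
            Element root = new Element("root");
            root.appendChild("hello");
            assertEquals("hello", root.getValue());
            assertEquals("<root>hello</root>", root.toXML());
        }
    }

Multiply that by seven hundred and you have the suite the coverage tools are measuring.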

Thus, I thought XOM was in pretty good shape. However, in the last week I've had a crisis of confidence. Three days in a row, major bugs were discovered in XOM, despite the unit tests, despite the code coverage tools, despite the static analyzers. There are parts of XOM where I would not be surprised to discover bugs. The XInclude code is so ugly I'm not confident I understand it, much less that it works (except that all the tests do pass). And I frequently uncover problems in the software XOM works with: its unit tests regularly expose parser bugs, some of which I can work around, some of which I can't. Hell, I've even uncovered bugs in both the XML and XSLT conformance test suites that have apparently gone undetected by other tools. That's how comprehensive my tests are (and how rigorous XOM is). However, these three new problems were all in the core of XOM, which I thought was pretty damn well covered.

One was a path of execution that was thoroughly covered by multiple tests, none of which happened to perform the particular sequence of operations that exposed the bug. One was an edge condition (an empty text node) that had not actually been tested, though again the relevant section of code was covered by the tests. And one will expose itself only under extreme conditions, though the definition of "extreme" may vary from one VM and system to the next.
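To make that first failure mode concrete, here's a hypothetical illustration (not XOM's actual bug) of how a test suite can achieve full line coverage of a class while never executing the one sequence of calls that triggers the defect:

    // A buffer that caches its length. Every line below can be covered
    // by tests that never expose the bug.
    public class Buffer {

        private StringBuffer data = new StringBuffer();
        private int length = 0; // cached, updated on append

        public void append(String s) {
            data.append(s);
            length += s.length();
        }

        public void clear() {
            data = new StringBuffer();
            // Bug: length is not reset to 0 here.
        }

        public int length() {
            return length;
        }
    }

    import junit.framework.TestCase;

    public class BufferTest extends TestCase {

        // Covers append() and length(); passes.
        public void testAppendTracksLength() {
            Buffer b = new Buffer();
            b.append("ab");
            assertEquals(2, b.length());
        }

        // Covers clear(); also passes, so line coverage is 100%.
        public void testClearOnEmptyBuffer() {
            Buffer b = new Buffer();
            b.clear();
            assertEquals(0, b.length());
        }

        // The missing case: append(), then clear(), then length().
        // That test would fail, but nothing forces you to write it.
    }

A coverage tool reports that every line ran. It can't report that the interesting orderings of those lines never did.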

One bug was found by an end-user. One was found by me while using XOM for an unrelated project. And one was found by a fellow developer doing a code inspection. The first two have been fixed, and unit tests have been added to cover those cases. I'm still mulling over the most difficult problem, the one that might require serious API redesign to fix.

So what am I missing here? Is there any tool or technique that I'm not using that I should be? The only one I'm aware of that I really haven't tried is white box testing with a tool like Parasoft's JTest, mostly because it's too expensive and inconvenient. Maybe I should give that a whirl. But what else is out there? Are there any other techniques I'm missing? Any ideas?


JetBrains has released IntelliJ IDEA 4.0, a popular $499 payware integrated development environment for Java that runs on Windows, Mac OS X, Linux, and Unix. The big new feature in this release is a GUI designer, along with various smaller additions. The upgrade price is $299.