(This kicks off what I hope will be a regular, weekly series on my blog: focusing on a Perl module that’s unsung, or at least under-sung, and hopefully, in doing so, drawing some extra attention to a tool I feel can help other Perl developers.)
For my first “Perl Module Monday” post, I would like to introduce you to Adam Kennedy’s Test::XT. This module has been around for several months, but I only recently took the time to look at it, and see how I could utilize it.
When I first discovered the CPANTS effort, and the enormous amount of work its creators had put into it, I immediately set about improving my scoreboard. In CPAN circles, this was known as “gaming CPANTS”, and for good reason: a high score is an indicator of nothing more than the fact that your modules pass those particular metrics, none of which measure actual code quality. They only measure the quality of your distribution. I argued (which is almost too strong a word, as the discussion never really got that heated) that as more authors took the CPANTS guidelines to heart, the end result would be worthwhile in and of itself: a different sort of quality that stood on its own. Think of Ruby’s “gems” and the perception of how effortless they are to install; many people have the (mistaken) impression that Perl modules are difficult, an impression that most likely came from one or two isolated incidents (whether personal or related anecdotally). And, at least in my case, it has led to better overall module development: I no longer release even the initial version of a module unless I’m pretty confident that it will meet at least the “required” metrics, if not the optional ones as well.
This dedication, though I pat myself on the back so publicly for it, has its price: a fair amount of duplicated effort. One example of this is the author tests, or maintainer tests if you prefer.
These are the tests that are really meant to be run only by us, the authors, on our own modules. You, the user, have nothing to gain from watching them run, because if any of them fail you don’t really have a stake in the outcome. These are the tests for the cleanness of the POD structure, tests of the integrity of the YAML metadata file, and so on. If “META.yml” doesn’t pass its test, that’s a lot less meaningful to you than if the test script for the actual functionality has one or more failures.
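To make that concrete, here is a minimal sketch of what one such maintainer test might look like. It checks that the META.yml metadata parses; I’m using the core CPAN::Meta module here rather than the dedicated Test::CPAN::Meta module, and the skip-if-absent guard is my own illustrative choice:

```perl
#!/usr/bin/perl
# Sketch of a maintainer test: does META.yml parse as valid metadata?
# (Illustrative only; a real distribution might use Test::CPAN::Meta.)
use strict;
use warnings;
use Test::More tests => 1;

SKIP: {
    # Skip, rather than fail, outside a distribution checkout --
    # an end user has no stake in this test.
    skip 'no META.yml in this checkout (author test only)', 1
        unless -f 'META.yml';

    require CPAN::Meta;    # core module since Perl 5.14
    my $meta = eval { CPAN::Meta->load_file('META.yml') };
    ok $meta, 'META.yml parses as valid CPAN metadata'
        or diag $@;
}
```

Multiply that by every distribution you maintain, plus a similar file each for POD checks, Perl::Critic runs, and the rest, and the duplication adds up quickly.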
This is where Adam K. stepped in with Test::XT. It generates these boilerplate author/maintainer tests for you, which handily beats my old practice of copying them from an existing project whenever I created a new one. The test files it generates include checks, based on documented environment variables, that prevent the test suites from running unless you have specified that you (as the author or maintainer) want to run them. It looks at two variables, in fact, to let you choose whether to run them during author-initiated builds, during designated “integration” (nightly, hourly, etc.) builds, or both. The logic is set up so that the dependent modules (Test::Pod, Perl::Critic, etc.) don’t get loaded even for the purpose of the “can-we-run-these-tests” check, which helps to avoid failing the “list of prereqs does not match actual use” metric on CPANTS. (And yes, I still have some modules that fail that, as I haven’t back-ported this to everything yet!)
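The pattern such generated files follow looks roughly like this sketch. I’m assuming the conventional AUTOMATED_TESTING and RELEASE_TESTING environment-variable names here, and the exact code Test::XT emits may differ in detail; the key point is that the author-only module is loaded by name, at run time, only after the gate has been passed:

```perl
#!/usr/bin/perl
# Sketch of the gating pattern used by generated author tests
# (variable names and details are illustrative, not verbatim output).
use strict;
use warnings;
use Test::More;

# Bail out before touching any author-only module, so Test::Pod never
# becomes an undeclared prerequisite for ordinary installs.
unless ( $ENV{AUTOMATED_TESTING} or $ENV{RELEASE_TESTING} ) {
    plan skip_all => 'Author tests not required for installation';
}

# Only now load the author-test dependency, by name, at run time.
my $MODULE = 'Test::Pod 1.26';
eval "use $MODULE";    ## no critic (ProhibitStringyEval)
if ($@) {
    $ENV{RELEASE_TESTING}
        ? die "Failed to load release-testing module $MODULE: $@"
        : plan skip_all => "$MODULE not available for testing";
}

all_pod_files_ok();
```

Because the `use` happens inside a string eval after the environment check, a plain `perl Makefile.PL && make test` on an end user’s machine never even attempts to load Test::Pod, which is exactly what keeps the CPANTS prerequisite metric happy.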
It’s a simple module. I hope to offer some extensions or patches to it in the future, as it has been greatly helpful to me and I want to help make it even more so. So check it out: even if you aren’t a CPAN author, you may find it useful for the tests you develop in your day-to-day work!