Automated testing has become axiomatic in the Python community. The Django tutorial, for example, explains testing Django projects before even explaining how to deal with CSS. A common measurement for tests is test coverage: the percentage of lines, branches, or files of the code that are executed when the tests are run. How high this number should be is a frequently debated subject, but the general consensus is to aim for 90–95%, with 100% seen as utopian, unrealistic, and not worth the time it may take. Some don’t care about test coverage at all, because high test coverage doesn’t guarantee the tests are any good.
For my own projects, I’ve adopted a new rule: all code must have 100% test coverage. I am not done done until the unit tests have 100% coverage. How is this not utopian, unrealistic, and a waste of time? Because I count test coverage after exclusions. Although even that won’t help you catch every scenario.
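To make the idea of exclusions concrete: coverage.py lets you mark lines that should not count toward the total with a `# pragma: no cover` comment. A minimal sketch (the function and paths are hypothetical) is a platform-specific branch that a Linux CI run can never execute, excluded so it doesn’t drag the number below 100%:

```python
import platform


def data_dir() -> str:
    """Return the per-user data directory for this (hypothetical) app."""
    if platform.system() == "Windows":  # pragma: no cover
        # Windows-only branch: tests running on Linux/macOS CI can never
        # reach this line, so it is excluded from coverage measurement.
        return "%LOCALAPPDATA%\\myapp"
    return "~/.local/share/myapp"
```

With the pragma in place, `coverage report` treats the Windows branch as out of scope, and the remaining, actually-testable lines can reach 100%.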