Assertions and unit tests are all well and good, but they're too narrow-minded in my eyes. Unit tests are great for, well, testing small units of code to ensure they meet the basic requirements of a software contract - maybe a couple of typical cases, a couple of edge cases, and then additional cases added as bugs arise. No matter how many cases you write, however, you'll never cover every possible scenario.
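For concreteness, here's the sort of thing I mean, as a minimal JUnit 5 sketch - the clampPercentage function is just an inline stand-in so the example is self-contained:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class ClampTest {
    // Trivial unit under test, defined inline for the sake of the example.
    static int clampPercentage(int value) {
        return Math.max(0, Math.min(100, value));
    }

    @Test
    void typicalValuePassesThrough() {
        assertEquals(42, clampPercentage(42)); // typical case
    }

    @Test
    void edgeValuesAreClamped() {
        assertEquals(0, clampPercentage(-5));    // edge case: below range
        assertEquals(100, clampPercentage(250)); // edge case: above range
    }
}

However many of these you write, they only ever exercise the inputs you thought of.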
Assertions are excellent for testing in situ; you can ensure that unacceptable values aren't given to or by a piece of code, even in production (though there is a performance penalty to enabling assertions in production, of course). I like assertions, but they're not specific enough: any assertion that fails is automatically a fatal error, which is great, unless it's not really a fatal error.
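In Java terms, that looks like the snippet below (applyDiscount is a hypothetical method; the assert mechanics are standard Java):

void applyDiscount(int percentage) {
    // A standard Java assertion: fatal (throws AssertionError) when it fires,
    // and skipped entirely unless the JVM is started with -ea - which is part
    // of why assertions are usually left off in production.
    assert percentage >= 0 && percentage <= 100
            : "percentage out of range: " + percentage;
    // ... the rest of the method proceeds on the assumption above ...
}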
That's where the concepts of assumptions and expectations come in. What assertions and unit tests really do is test assumptions and expectations. A unit test says "does this code behave correctly when given this data, all assumptions considered?" An assertion says "this code assumes this thing, and will not behave correctly if it gets anything else, so throw an error."
When documenting an API, it's important to document assumptions and expectations, so users of the API know how to work with your code. Before I go any further, let me define what I mean by these very similar terms: to me, code that assumes something operates as if its assumptions are correct, and will likely fail if they turn out to be incorrect. Code that expects something operates as if its expectations are met, but will likely still operate correctly even if they aren't. It's neither guaranteed to work nor guaranteed to fail; it's likely to work, but someone should probably know about the violation and look into it.
Therein lies the rub: these are basically two types of assertions, one fatal, one not. What we need is an assertion framework that allows for warning-level assertion failures - and one performant enough to be left enabled in production.
So, any code that's happily humming along in production and says:
Assume.that(percentage).isBetween(0, 100);
will fail immediately if percentage is outside those bounds. It's assuming that percentage is between zero and one hundred, and if that assumption is wrong, the code will likely misbehave. Since it's always better to fail fast, any value outside that range should trigger a fatal error - preferably even in production.
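What might the Assume side of such a framework look like? Here's a hedged sketch in Java - my own guess at an implementation, not a real library; the only cost on the happy path is a comparison and a short-lived wrapper object:

public final class Assume {
    public static IntAssumption that(int actual) {
        return new IntAssumption(actual);
    }

    public static final class IntAssumption {
        private final int actual;
        private IntAssumption(int actual) { this.actual = actual; }

        // Fatal: a violated assumption throws immediately, failing fast.
        public void isBetween(int min, int max) {
            if (actual < min || actual > max) {
                throw new AssertionError("Assumption failed: expected a value between "
                        + min + " and " + max + " but got " + actual);
            }
        }
    }
}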
On the other hand, code that says:
Expect.that(numRows).isLessThan(1000);
will trigger a warning if numRows reaches a thousand. It expects numRows to be under a thousand; if it isn't, the code can still complete correctly, but it may take longer or use more memory than normal - or the unexpectedly large row count may mean something is amiss with the query that produced the rows, or with the dataset they originally came from. It's not a critical failure, but it's cause for investigation.
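The Expect side could mirror Assume almost exactly, but log a warning instead of throwing, so the calling code carries on. Again, a sketch under those assumptions, using java.util.logging just to keep it self-contained:

import java.util.logging.Logger;

public final class Expect {
    private static final Logger LOG = Logger.getLogger(Expect.class.getName());

    public static IntExpectation that(int actual) {
        return new IntExpectation(actual);
    }

    public static final class IntExpectation {
        private final int actual;
        private IntExpectation(int actual) { this.actual = actual; }

        // Non-fatal: a violated expectation is recorded, then execution continues.
        public void isLessThan(int limit) {
            if (actual >= limit) {
                LOG.warning("Expectation failed: expected a value less than "
                        + limit + " but got " + actual);
            }
        }
    }
}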
Any assumption or expectation that fails should, of course, be automatically and immediately reported to the development team for investigation. Naturally, a failed assumption, being fatal, should take priority over a failed expectation, which is recoverable.
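That reporting could be as simple as a pluggable hook that both Assume and Expect call into, with severity deciding the priority. A sketch only - the actual transport (email, pager, issue tracker) would be supplied by whoever wires it up:

// Hypothetical reporting hook shared by Assume and Expect; the severity
// lets failed assumptions (fatal) outrank failed expectations (recoverable).
public interface FailureReporter {
    enum Severity { FATAL, WARNING }

    void report(Severity severity, String message);
}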
This not only provides greater flexibility than a simple assertion framework, but also makes the code more explicitly self-documenting.