The Point of Testing Software
When dealing with software, you need a plan for finding the inevitable bugs you create along the way. In the beginning this was always just manual testing of the software, but the software was also a lot smaller.
Over time, automation was added to the mix. It generally comes in a few flavors (from closest to the developer to the farthest):
Unit tests
Automated smoke tests
Integration tests
A common theme in much of the software development space is to “shift left.” That is, get closer to the start of the process. (It makes more sense if your primary language is one that is read left to right.)
This is going to be about unit tests. I might write more about the other stuff.
With unit tests, you have to decide what it is that you’re going to test. Often there are some metrics that you need to hit — something like 70% coverage or whatever. But not all unit tests are created equal.
Let’s start off with what I find less valuable in terms of tests: testing that the language runs. I’ve seen far too many cases where people are simply testing that a language, say Java, can call functions. If you don’t have confidence that it can, you may want to reconsider your career. I can count on one hand (honestly, I think one finger) the number of times that I or my friends found a compiler issue that caused successfully compiled code to fail at runtime.
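To make that concrete, here’s a sketch of the kind of test I mean, assuming JUnit 5; the `User` bean is invented for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class UserTest {
    // A trivial bean, invented for this example.
    static class User {
        private String name;
        void setName(String name) { this.name = name; }
        String getName() { return name; }
    }

    // All this verifies is that Java can call a setter and then a getter.
    // If that ever fails, a unit test is the least of your problems.
    @Test
    void setNameStoresName() {
        User user = new User();
        user.setName("Ada");
        assertEquals("Ada", user.getName());
    }
}
```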
Up only very slightly from there is testing of external service calls using mocking. These tests don’t exercise the called service. They only verify that your code can handle whatever you just told the mock to return. All of the expectations you had when you wrote the code are baked into the test itself, so it never tests the part that’s most likely to be wrong: your own expectations.
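Here’s a minimal sketch of that circularity, assuming Mockito and an invented `PriceService`; notice that the only “fact” the assertion checks is one we fed to the mock a few lines earlier:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class CheckoutTest {
    // Hypothetical collaborator that, in production, calls a remote pricing API.
    interface PriceService {
        double priceFor(String sku);
    }

    // Hypothetical code under test.
    static class Checkout {
        private final PriceService prices;
        Checkout(PriceService prices) { this.prices = prices; }
        double total(String sku, int quantity) {
            return prices.priceFor(sku) * quantity;
        }
    }

    @Test
    void totalMultipliesPriceByQuantity() {
        PriceService prices = mock(PriceService.class);
        // We tell the mock to return 9.99...
        when(prices.priceFor("ABC-123")).thenReturn(9.99);

        Checkout checkout = new Checkout(prices);

        // ...and then assert that our code handled the 9.99 we just invented.
        // If the real service returns cents instead of dollars, or null,
        // or a 500, this test passes anyway.
        assertEquals(29.97, checkout.total("ABC-123", 3), 0.0001);
    }
}
```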
The things that are critical to test are also the ones that seem to be ignored when testing: the actual business logic. Most people tend to get enough coverage to pass just by “testing” the plumbing. The plumbing, at least as it’s exercised in unit tests, is the least likely thing to break.
The business logic is the most important thing. Not only is it the value-add that your code provides, it’s also the part of the code that is most likely to change, and therefore break, in the future.
What you need to do is isolate the business logic into easily testable functions and then test the ever-living snot out of it. If you’ve extracted the functions that make decisions or transformations into pure functions (deterministic, no side effects), you’ve made your job as a test author infinitely simpler. Not only are the units you are testing small, as implied by the very name, but you’ve also made it that much easier to write new tests. And when a test breaks, it points more directly to the specific bit of code that’s suspect.
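As an illustration (the discount rules and names here are invented): suppose pricing decisions were buried in a request handler. Pull them into a pure function and the tests practically write themselves:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountTest {
    // Pure business logic: same inputs always produce the same output,
    // no I/O, no hidden state. Easy to test exhaustively.
    static double discountRate(int priorOrders, double orderTotal) {
        if (priorOrders >= 10 && orderTotal >= 100.0) return 0.15;
        if (priorOrders >= 10) return 0.10;
        if (orderTotal >= 100.0) return 0.05;
        return 0.0;
    }

    @Test
    void loyalBigSpendersGetFifteenPercent() {
        assertEquals(0.15, discountRate(10, 100.0));
    }

    @Test
    void loyaltyAloneGetsTenPercent() {
        assertEquals(0.10, discountRate(10, 99.99));
    }

    @Test
    void bigOrderAloneGetsFivePercent() {
        assertEquals(0.05, discountRate(1, 100.0));
    }

    @Test
    void newSmallOrdersGetNothing() {
        assertEquals(0.0, discountRate(1, 99.99));
    }
}
```

Each test names one business rule, covers one boundary, and needs no setup. When one fails, you know exactly which rule changed.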
Smaller tests are also easier to understand than “god tests” that try to test everything from the highest level. The bigger the test, the more likely it is to break for small changes anywhere. And the harder a test is to understand, the more likely it is to be commented out when it inevitably breaks.
The key to testing is not to just chase a coverage number, but to let what’s most likely to change or break in the future guide where you invest your tests. A test that never fails is a test that isn’t really paying its rent.