I bet you run unit tests in your software project to ensure that your system works as expected. If you don’t, you definitely should 😉 since they give you the required safety for proper refactoring. But that’s another story…

So when you run your tests, you probably also check the code coverage to get some metrics about them. But does that say anything about the quality of your tests? At least it tells you which lines or statements are executed by your tests and which are not. Whether the tests actually verify the behaviour of those statements, however, is a question code coverage cannot answer.
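To illustrate the gap between coverage and verification, here is a minimal, hypothetical sketch: a test that calls a buggy function without asserting anything achieves full line coverage yet never notices the bug (the function name and values are invented for this example).

```python
def apply_discount(price, rate):
    # Intentional bug: the discount is added instead of subtracted.
    return price + price * rate

def test_apply_discount_weak():
    # Executes every line of apply_discount (100% line coverage)
    # but asserts nothing about the result, so the bug slips through.
    apply_discount(100.0, 0.2)

test_apply_discount_weak()           # passes, yet the function is wrong
print(apply_discount(100.0, 0.2))    # 120.0 instead of the expected 80.0
```

An assertion such as `assert apply_discount(100.0, 0.2) == 80.0` would fail immediately, which is exactly the kind of verification coverage alone cannot measure.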

When I am doing code reviews, I also check out the code and execute the test cases. Then I walk through the code looking for critical statements, such as difficult calculations, date manipulations, or important request parameters, to give some examples. After verifying that all tests run green, I comment out or change one of the identified statements and run the tests again. If at least one test fails I am happy, so I rejoice when the tests fail 😉 (which is one rule of The Way of Testivus). If all tests pass, however, I have identified a missing test case.
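The procedure above can be sketched with a small, invented example: a critical statement (a boundary comparison) is deliberately changed, and we check whether any existing test notices. All names and values here are hypothetical, chosen only to demonstrate the technique.

```python
def is_adult(age):
    return age >= 18              # original critical statement

def is_adult_mutated(age):
    return age > 18               # manual mutation: >= changed to >

def run_weak_tests(fn):
    # This test suite never probes the boundary value 18,
    # so the mutation survives undetected.
    assert fn(30) is True
    assert fn(10) is False

run_weak_tests(is_adult)          # green
run_weak_tests(is_adult_mutated)  # still green: a test case is missing

def run_boundary_test(fn):
    assert fn(18) is True         # this assertion would kill the mutation

run_boundary_test(is_adult)       # green on the original
# run_boundary_test(is_adult_mutated) would fail, proving the
# boundary test is the missing case the mutation revealed.
```

If the mutated version still passes every test, the surviving mutation points directly at the test case that needs to be written.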

After reading the blog post “Mutation Testing oder wie gut sind meine Tests wirklich?” (“Mutation testing, or how good are my tests really?”), I realised that my approach for identifying missing test cases (and measuring the quality of our tests) can be described as manual mutation testing.