If you are interested in testing best practices, patterns, or strategies, have a look at xUnitPatterns.com. The different ways of grouping test cases (e.g. Test Class per Fixture) helped me refactor and simplify our integration tests.
I bet you run unit tests in your software project to ensure that your system works as expected. If you don’t, you definitely should 😉 since they give you the safety net you need for proper refactoring. But that’s another story…
If you run your tests, you probably also check the code coverage to get some metrics about them. But does that say anything about the quality of your tests? It at least tells you which lines or statements are executed by your tests and which are not. Whether the tests actually verify the behaviour of those statements, however, is something code coverage cannot answer.
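To illustrate that gap, here is a minimal JUnit 5 sketch (the `PriceCalculator` class and its test are hypothetical examples, not code from our project): the test executes every line of the method, so a coverage report shows 100%, yet it asserts nothing about the result.

```java
// Hypothetical example: full line coverage, zero behavioural verification.
import org.junit.jupiter.api.Test;

class PriceCalculator {
    // A "critical statement": applies a 10% discount above a threshold.
    double finalPrice(double amount) {
        if (amount > 100.0) {
            return amount * 0.9;
        }
        return amount;
    }
}

class PriceCalculatorTest {
    @Test
    void coversEveryLineButVerifiesNothing() {
        PriceCalculator calculator = new PriceCalculator();
        calculator.finalPrice(150.0); // discount branch executed
        calculator.finalPrice(50.0);  // other branch executed
        // No assertions: coverage is 100%, but a broken discount would still pass.
    }
}
```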
When I do code reviews I also check out the code and execute the test cases. Then I walk through the code and look for critical statements, for example difficult calculations, date manipulations, or important request parameters. After verifying that all tests run green, I comment out or change one of the identified statements and run the tests again. If at least one test fails I am happy, so I rejoice when the tests fail 😉 (which is one rule of The Way of Testivus). If all tests still pass, however, I have identified a missing test case.
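As a sketch of that manual step (again using the hypothetical `PriceCalculator` from above), assume the discount threshold is such a critical statement. A test that pins down the expected behaviour will fail as soon as the statement is mutated, for example by commenting out the discount or changing the comparison operator:

```java
// Hypothetical sketch of a manual mutation check with JUnit 5.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class PriceCalculatorMutationTest {
    @Test
    void appliesDiscountOnlyAboveThreshold() {
        PriceCalculator calculator = new PriceCalculator();
        // Original statement: if (amount > 100.0) return amount * 0.9;
        // Manual mutations to try while re-running the tests:
        //   - change the operator:  if (amount >= 100.0) ...
        //   - change the factor:    return amount * 0.8;
        //   - comment the branch out so the method always returns amount
        // Each mutation should make at least one of these assertions fail.
        assertEquals(135.0, calculator.finalPrice(150.0), 0.0001);
        assertEquals(100.0, calculator.finalPrice(100.0), 0.0001); // boundary: no discount at exactly 100
        assertEquals(50.0, calculator.finalPrice(50.0), 0.0001);
    }
}
```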
After reading the blog post “Mutation Testing oder wie gut sind meine Tests wirklich?” (German: “Mutation testing, or how good are my tests really?”), I realised that my approach for identifying missing test cases (and measuring the quality of our tests) has a name: manual mutation testing.