My name is Marc and this is my first blog post. I’m keen on any feedback, so don’t hesitate!
Like my friend and colleague Felix, I visited the Clean Code Days in Munich. To get as much information and education as we possibly could, we decided to split up for some talks. So, here is my summary of Claudia Simsek-Graf’s talk.
First things first, let’s start at the very beginning with one of the most important themes of this talk: change!
The Clean Code Days in Munich is a developer conference about the “Clean Code Movement”. The main topics are code quality aspects, team dynamics and design principles. This post gives a small summary including the key aspects of the talks I attended.
Keynote: Test Intelligence?
The first day started with the keynote by Dr. Elmar Jürgens (@ElmarJuergens) about Test Intelligence. This talk explained how existing data can be used to answer three questions: what should be tested, which changes were not covered by the tests, and which test execution order fails fastest (which minimizes the feedback loop!).
For a couple of weeks I have been following a controversial topic on Twitter which I would describe as the “anti-feature-branching” movement. In a previous post I have already written about feature branching and why someone would recommend not following such a successful branching model.
Many of the daily problems a typical developer faces when applying feature branching can be solved by doing Trunk Based Development. If you integrate daily instead of waiting until the complete feature is finished, the merge won’t be a big deal. In contrast, merging a completed feature into the mainline after a few months can end in a big mess. I think that is something every developer has experienced already.
A few years ago @yegor256 published a book called Elegant Objects about object-oriented development. On www.elegantobjects.org you can find an interesting collection of patterns and principles which should not be used in true object-oriented software. For example, stating that getters and setters are an anti-pattern is an interesting thesis.
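To illustrate the idea behind that thesis, here is a minimal sketch in Python (the book uses Java, and the `Cash` class and its methods are my own hypothetical example, not taken from it): instead of exposing raw state through a getter, the object offers behaviour (“tell, don’t ask”).

```python
# Hypothetical Cash class: no getter for the amount; callers ask the
# object to do something with its state instead of reading it out.
class Cash:
    def __init__(self, amount: int) -> None:
        self._amount = amount  # state stays private

    def multiply(self, factor: int) -> "Cash":
        # Behaviour returns a new object instead of letting callers
        # fetch the raw amount and recompute it themselves.
        return Cash(self._amount * factor)

    def __str__(self) -> str:
        return f"${self._amount}"

price = Cash(5).multiply(3)
print(price)  # $15
```

With a `get_amount()` getter, the multiplication logic would leak into every caller; keeping it inside the object is the point of the anti-pattern argument.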
@tdpauw gave a nice talk about “Feature Branching is Evil” at the XP Day in Ukraine in November last year (the slides from a similar presentation a few days ago can be found here). The article “On DVCS, continuous integration, and feature branches” also lists some valid arguments against feature branching.
I have used “A successful Git branching model” in several software projects, with both positive and negative experiences. The distinction between a master branch for releases and a develop branch used for integration helps you keep your history clean. However, it works against the idea of being able to deliver at any point in time when applying CI/CD.
Combining your branching strategy with distributed code reviews (e.g. in combination with merge/pull requests) is another practice which has helped me and my team members share common knowledge about our code base and maintain a high level of software quality.
However, I have experienced more than once that feature branching can easily lead to multiple long-lived branches which diverge and often end in merge hell. That’s why I (and many others) recommend working in small increments and integrating very often to keep the lifetime of your branches short.
If you are interested in testing best practices, patterns or strategies, have a look at xunitpatterns.com. The different ways of grouping test cases (e.g. Test Class per Fixture) helped me refactor and simplify our integration tests.
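As a minimal sketch of the Test Class per Fixture idea (in Python with `unittest`; the `Cart` class and test names are hypothetical, not from the book): each test class sets up exactly one fixture, so every test in it starts from the same well-known state.

```python
import unittest

# Hypothetical class under test: a simple shopping cart.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self):
        return len(self.items)

# One test class per fixture: every test here starts from an empty cart.
class EmptyCartTest(unittest.TestCase):
    def setUp(self):
        self.cart = Cart()

    def test_total_is_zero(self):
        self.assertEqual(self.cart.total(), 0)

# A second fixture, a second class: every test starts with one item.
class OneItemCartTest(unittest.TestCase):
    def setUp(self):
        self.cart = Cart()
        self.cart.add("book")

    def test_total_is_one(self):
        self.assertEqual(self.cart.total(), 1)
```

Run with `python -m unittest`. The benefit for refactoring is that a test never has to undo setup it does not need, because everything in its class shares the same `setUp`.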
I bet you run unit tests in your software project to ensure that your system works as expected. If you don’t, you definitely should 😉 since they give you the safety required for proper refactoring. But that’s another story . . .
So when you run your tests, you may check the code coverage to get some metrics about them. But does this say anything about the quality of your tests? At least it tells you which lines or statements are executed by your tests and which are not. Whether the tests actually verify the behaviour of those statements, though, cannot be answered by code coverage.
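A tiny example of why coverage alone is misleading (the `discount` function and test names are made up for illustration): the first test executes every line of `discount`, giving 100% line coverage, yet it would never notice a bug, because it asserts nothing.

```python
# Hypothetical function under test.
def discount(price: float, percent: float) -> float:
    return price - price * percent / 100

def test_discount_covers_but_verifies_nothing():
    # Every statement of discount() is executed -> 100% line coverage,
    # yet a bug (e.g. '+' instead of '-') would go completely unnoticed.
    discount(100.0, 10.0)

def test_discount_verifies_behaviour():
    # Only an assertion turns executed code into verified behaviour.
    assert discount(100.0, 10.0) == 90.0
```

A coverage tool reports both tests identically; only the second one protects you during refactoring.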
When I am doing code reviews I also check out the code and execute the test cases. Then I walk through the code and look for critical statements, for example difficult calculations, date manipulations or important request parameters. After verifying that all tests run green, I comment out or change an identified statement and run the tests again. If at least one test fails I am happy, so I rejoice when the tests fail 😉 (which is one rule of The Way of Testivus). If all tests pass, however, I have identified a missing test case.
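That routine can be sketched in Python like this (the function, the mutation and the tiny test runner are hypothetical, purely for illustration): change one statement, re-run the tests, and expect at least one failure.

```python
# Hypothetical production code.
def is_adult(age: int) -> bool:
    return age >= 18

# The "mutant": the same function with one statement changed,
# as if I had edited it by hand during a review.
def is_adult_mutated(age: int) -> bool:
    return age > 18  # mutation: '>=' became '>'

def run_tests(fn) -> bool:
    # A good test suite covers the boundary value 18; if it does not,
    # the mutant survives and I have found a missing test case.
    return fn(18) is True and fn(17) is False

print(run_tests(is_adult))          # True  -> tests pass on the original
print(run_tests(is_adult_mutated))  # False -> the mutant is killed
```

If `run_tests` only checked ages like 30 and 5, both versions would pass and the surviving mutant would reveal the gap in the suite.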
After reading the blog post “Mutation Testing oder wie gut sind meine Tests wirklich?” (“Mutation testing, or how good are my tests really?”) I realised that my approach for identifying missing test cases (and measuring the quality of our tests) can be described as manual mutation testing.
It has been a while since I started thinking about publishing my own blog where I can write about all the software engineering stuff I come across while reading, learning and practicing. When working as a software engineer you gain new experience every single day. Here I will share some of my lessons learned. I hope this will help me reflect on what I have learned, and maybe someone out there will find it interesting.