When to Apply Code Consistency, and When Not To
This sounds like a debate about opinions, in which each side has a good reason to believe what it believes.
Keeping source code consistent and breaking consistency to improve coding techniques are both hypothetical imperatives.
Let’s examine the arguments on both sides, see which categorical imperatives they help to fulfil, and form our own opinion on the matter.
We won’t tackle the ideal case where you can refactor the whole codebase to make it both better AND consistent. We focus here on the situation where you must choose between improving the code OR keeping it consistent.
Most of all, keeping code consistent helps the people who will read it in the future. Consistent code gives the reader cognitive ease and lets them rely on their intuitive brain. With the code as easy to understand as possible, people can focus on writing new features and maintaining what exists. Automatic refactoring tools also handle consistent code more easily. So the main categorical imperative here seems to be ease of maintenance. [1–3]
On the other hand, breaking code consistency to bring in newer ways of doing things has one big advantage: it lets the code evolve. The code may have design issues or smells, or it may use obsolete idioms (better ways of doing the same thing may have appeared since it was written). Refusing this evolution in the name of consistency may mean the code never evolves. Moreover, newcomers who are used to modern techniques may find it easier to maintain code that is partially “well written” than code that is badly written everywhere. Adding new bad code also adds to the burden of eventually making it right. Finally, the fact that the current code is poorly written should not be an excuse per se to write more poor code: one should want to leave the code better than it was. The categorical imperatives here appear to be pragmatism and eventual ease of maintenance. [4–7]
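As a hypothetical illustration of an “obsolete idiom versus newer idiom” choice (the essay names no language; Python and these function names are assumed for the sketch), compare old-style `%` string formatting with an f-string:

```python
# Older idiom: consistent with the rest of a legacy codebase.
def greet_old(name, count):
    return "Hello %s, you have %d new messages" % (name, count)

# Newer idiom: arguably clearer, but it breaks consistency
# with the surrounding legacy code.
def greet_new(name: str, count: int) -> str:
    return f"Hello {name}, you have {count} new messages"

print(greet_old("Ada", 3))  # -> Hello Ada, you have 3 new messages
print(greet_new("Ada", 3))  # -> Hello Ada, you have 3 new messages
```

Both functions behave identically; the dilemma is purely about which style future readers will encounter, and whether mixing the two is worth the modernisation.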
It should be stressed that the right or wrong way of doing things is a matter of usage and fashion. Thus, if one person adds “better” code now and another person adds “better” code later, their two definitions of “better” are likely to differ and to add to the entropy of the code. Moreover, because consistency breaks are generally not documented, people keep breaking old code without following a shared vision, producing a “break and add new stuff” loop that may make the code worse at each iteration. Above all, deciding between consistency and evolution is a matter of compromise, and the need for consistency grows with the size of the codebase. [8–10]
My opinion is as follows: I think we generally underestimate the time needed to make things right, and we often have to bring new things into the code. Therefore, following new standards should be the default choice, unless we think we have a good reason not to. Whichever option we take, though, that choice should be challenged, so that we realise how good our reasons for adopting it really are.
I also think that doing so should be accompanied by:
- a documented design discussion stating why we chose to change the way of doing things (to help future readers understand the vision),
- a separation between old code and new code. For instance, a new library could use the new standards, but a new method in an existing class would not.
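The second point can be sketched in a few lines of hypothetical Python (the module and class names are invented for illustration): a new module is free to adopt the new standard, while a method added to an existing class stays consistent with its surroundings.

```python
# legacy/report.py -- an EXISTING class, written in the old style.
class Report:
    def get_title(self):
        return "title: %s" % "Q3"  # old formatting idiom

    # A NEW method in an OLD class: stays consistent with the class around it.
    def get_summary(self):
        return "summary: %s" % "ok"

# analytics/metrics.py -- a NEW module: free to adopt the new standard.
def format_metric(name: str, value: float) -> str:
    return f"{name}={value:.2f}"  # new idiom used consistently here
```

The boundary (module versus method) is the important design choice: each file remains internally consistent, so readers never switch styles mid-class.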
Finally, I think we should do our best to counteract our cognitive biases and to ensure an objective awareness of the pros and cons of the two options. We should also make a genuine effort to understand why people made such “bad decisions” in the first place. At the very least, we should close our eyes, try to imagine being a newcomer discovering one situation or the other, and let our intuition decide. After all, there is no right or wrong answer to this question, only educated choices. Besides, we cannot predict what the future will look like.
Of course, these are only my opinions, as good as any other on this matter.