
Is It Ok to Let the CI Fail and Not Do Anything About It?

Fleeting

To me, continuous integration is an indicator that tells you what shape the code is in.

The following is a true story that I have watched several times already.

The story: the team is only a month old, the code is only two weeks old, and we just set up SonarCloud in the CI.

We decide as a team to set up SonarCloud so that it considers ALL the code as new (bootstrapping Sonar on a project). The CI invokes sonar-scanner in a way that makes a failing quality gate fail the CI.
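For illustration, a minimal sketch of such a CI step; the project key and organization below are hypothetical placeholders, and the relevant part is `sonar.qualitygate.wait=true`, which makes the scanner itself report the gate result as its exit status:

```sh
# Hedged sketch of a CI step; "my-project" and "my-org" are placeholders.
# With sonar.qualitygate.wait=true, the scanner polls SonarCloud after the
# analysis and exits with a non-zero status when the quality gate is red,
# which in turn makes the CI job fail.
sonar-scanner \
  -Dsonar.projectKey=my-project \
  -Dsonar.organization=my-org \
  -Dsonar.qualitygate.wait=true
```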

As expected, Sonar reports a red quality gate, causing the CI to fail, so that we can fix the issues once and for all.

The team, already busy with the feature at hand, decides to change the CI so that it no longer requires the quality gate to pass.

Then features accumulate and the quality gate keeps getting worse. We realize that this has actually become technical debt and decide to follow the “clean as you code” approach. This means that we reset our Sonar project and let the first scan become the baseline. Everything before this scan becomes technical debt and we can focus on new code.

The develop branch is then considered ok and features keep being worked on in separate feature branches. Merge requests are scanned by Sonar. Then it happens that a developer decides to merge per work into develop, even though Sonar failed (and with it the whole CI).

Now, develop is back to showing a failing CI, due to Sonar.

When asked why per did not consider doing something about the CI, the developer answers that “the threshold to make the CI pass is too high”.

We eventually adjust the Sonar quality gate to reflect our own expectations.

To me, the fact that the developer simply ignored the indicator and still merged the code into develop is a symptom of a lack of understanding of why the indicator is there in the first place.

I think the developer lacked intellectual honesty: per should have discussed the topic with the team and reached, together, the decision to lower the indicator's expectations before merging the code. That way, the code would have a truly passing CI, in the sense that the CI would reflect the team's expectations more precisely. And if the CI was failing for good reasons, they could have put this issue in a trusted system.

To me, getting used to ignoring failing indicators eventually dulls our vigilance, until we simply stop noticing that the indicator exists. This, to me, is definitely a bad thing. Why is it so hard to do the right thing?