In short, safety-critical software is subject to several standards describing how it should be developed. The entire production process must be carefully studied, documented from the general, like the basic system requirements, to the specific, that is the exact design of the implementation. For each stage of project documentation, a parallel verification plan must be created.
The software implementation stage is treated here as one of many elements of the process. Of course, standards play an important role at this stage as well: coding methods, rules, and standards are imposed – e.g. MISRA C.
So, what does the developer’s work look like in such a project? Do all these documents and rules help or disturb the developer? Let’s start with the project documentation. It is beyond question that it should always be used in software development. However, in safety-critical projects there is often much more documentation, and it is usually more detailed. In addition to the basic description of the division into blocks, classes, writing out interfaces or data structures, we need to deal with implementation details. The software implementation plan of safety-critical software needs to describe its behavior under all circumstances, even the unlikely ones. There is no room for understatement or creativity here.
At first glance, this may seem like a huge limitation, but it simply is not. Admittedly, the rigid form of the implementation can sometimes be problematic. In some situations it would be much easier for a developer to pass an argument to a given function in a different way, but this would require changes to the software implementation design, and perhaps even to the architecture. Altering these documents means changing the plans for their verification, and most likely impacts the test environment as well.
However, these are very rare situations, which occur even less if the earlier stages of the project are well thought out. If they occur, they force the developer to take a broader look at the impact of possible changes on the entire project.
And what about the benefits of such detailed project documentation? The main benefit is the process of its creation itself: the entire team of designers, software architects, and often developers and testers as well, must carefully consider every detail and every situation that may occur, and must carefully plan out all the components and data exchange mechanisms.
So, when it comes to implementation, the lack of room for the developer’s invention is not a limitation. It is an assurance that what the developer creates will work as expected, and the risk of errors is as low as possible from the very beginning.
Personally, it gives me great satisfaction when the software I write works correctly right from the first launch and I do not have to fix dozens of bugs resulting from misunderstandings and missing guidelines.
Of course, it doesn’t mean bugs never happen in such software. As they say, who makes no mistakes never makes anything. So, despite even the most detailed project documentation, in some situations our code doesn’t behave as expected. It is important to detect such cases as soon as possible.
Therefore, in safety-critical projects, every development stage is accompanied by a software testing stage. Starting from unit tests that check individual functions, through the component tests, up to the functional tests of the entire product. Again, from the developer’s point of view, it might seem that such meticulous software testing is redundant and only creates unnecessary work, because “I have run the program myself, and it works for me, and these testers come up with some unlikely cases that will never happen.” Again, it couldn’t be further from the truth.
Working on a safety-critical project has often shown me how necessary software tests are and how complex the errors they sometimes find can be. A manual program launch, which is often the only test in many non-safety projects, does not check time dependencies. It does not reveal whether, for example, some initial state becomes unstable under specific conditions.
However, all of that is very clearly shown by well-planned tests that can enforce unlikely, but not impossible, conditions. The correction of errors detected via such tests is not bothersome for the developer, because a well-described test describes in detail the conditions for reproducing each bug, often making the process significantly faster and easier.
But how come these errors appear if we have such a precisely described design? Apart from the possibility of the implementation being inconsistent with the original assumptions, writing the code the wrong way is one of the major causes of defects.
Let’s talk about the software implementation itself. As I have already mentioned, the standards impose considerable requirements on safety-critical projects. Compliance with the rules of the MISRA C standard, the use of static analysis tools, ensuring code readability, and conducting code reviews are just a few examples. Once again, this might seem unnecessary and detrimental to the implementation deadlines. The MISRA rules, especially, may seem incomprehensible and useless.
For example, consider the strict requirement to cast variables of different types explicitly before an arithmetic operation. After all, everyone knows the compiler can handle it on its own and select the appropriate type for the variable. But what if during such an implicit cast we lose important information due to a rounding error?
Applying such a rule forces developers to consider whether their results are going to be correct and helps them understand how the code they write is going to be interpreted by the compiler.
Ensuring code readability and conducting reviews are also extremely important. Sure, a program written as one long block of code with variables named ‘a’, ‘b’, ‘c’ will probably work. But it will often cause huge problems when modified, not only by other team members, but even by the original author. We can avoid these problems by following clearly defined rules that describe how to divide code into sub-functions, how to name variables, and so on. Even when our code seems easy to read and understand, verification by another team member during the review often shows that there is still room for much improvement.
So, what has developing software in a safety-critical project taught me? Thanks to using the MISRA rules and analyzing my own mistakes, I have certainly improved my programming skills. I have learned to look more broadly at the goals I want to achieve and the predicted results of my work. I also consider the impact my work has on the whole project. I have also found out that the standards are not as scary as they may seem and meticulously planned processes, documentations, and detailed tests should be all integral parts of all projects. And I mean all projects, not only the safety-critical ones. It all makes developing a really pleasant experience, and not at all boring and very formal, as some might tell you.