Software quality is often either overlooked in embedded applications or simply assumed. Developers will often mention how their software is high quality, or trust that the software they are provided is quality software. Ask a developer what metrics they use to measure software quality, though, and the crickets will start chirping. Software quality is a measurable entity, and the Renesas Synergy™ Platform was developed with five key quality metrics that were monitored throughout the entire development cycle. Let’s briefly examine these five key quality indicators.

First, quality can be measured by how well the code base meets coding standards. Several coding standards can be followed simultaneously, such as stylistic guidelines and industry standards like MISRA C. Using a coding standard helps ensure that the software is not just readable but also maintainable and follows industry best practices, which should increase code quality.
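As a hedged illustration (not taken from the SSP code base itself), the snippet below shows the kind of constructs a MISRA C checker typically enforces, such as fixed-width integer types, braces on every selection statement, and a single point of exit:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative sketch of MISRA C style conventions:
 * - fixed-width types (uint32_t) instead of plain int,
 * - braces on every if/else body,
 * - a single return at the end of the function.
 */
static bool is_in_range(uint32_t value, uint32_t min, uint32_t max)
{
    bool result = false;

    if ((value >= min) && (value <= max))
    {
        result = true;
    }

    return result; /* single exit point */
}
```

Static analysis tools can check rules like these automatically across an entire code base, which is what makes standards compliance a measurable quality indicator rather than a matter of opinion.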

Second, software industry studies have shown a correlation between software quality and the complexity of the software’s functions. Very complex functions tend to be harder to read and maintain, and often contain more bugs than simpler code. Software complexity can be measured with metrics such as McCabe cyclomatic complexity, which automated tools can calculate across an entire code base.
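To make the metric concrete, here is a simple sketch: cyclomatic complexity is the number of decision points in a function plus one. The function below has two decision points, giving it a McCabe complexity of 3, which is well within the low-complexity range most tools recommend:

```c
#include <stdint.h>

/* Illustrative example: two decision points (the if and the else if)
 * plus one gives a cyclomatic complexity of 3. Automated tools
 * compute this number for every function in a code base.
 */
static int32_t clamp(int32_t value, int32_t lo, int32_t hi)
{
    int32_t result = value;

    if (value < lo)        /* decision point 1 */
    {
        result = lo;
    }
    else if (value > hi)   /* decision point 2 */
    {
        result = hi;
    }

    return result;
}
```

A function with deeply nested conditions and loops can easily reach complexity scores of 15 or more, which is exactly the kind of code that complexity monitoring is meant to flag for refactoring.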

Next, establishing traceable requirements is critical to achieving high-quality software. Traceable requirements allow the software quality team to track each requirement and its testing throughout the development cycle. Every requirement has test cases developed to ensure it can be verified and tested, and test scripts are then written to fulfill those test cases. The tests are executed on continuous integration servers, whose results include a test execution instance and a defect report. Any defects that are detected can then be traced back to the requirement.
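One common way to implement this linkage in practice is to embed the requirement identifier in the test case itself. The requirement ID and naming scheme below are hypothetical, invented purely to illustrate the idea:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical requirement, for illustration only:
 * REQ-TIMER-042: "The timer period shall be configurable
 * from 1 to 1000 ms."
 */
static bool timer_period_is_valid(uint32_t period_ms)
{
    return (period_ms >= 1u) && (period_ms <= 1000u);
}

/* TC-REQ-TIMER-042-01: boundary values are accepted.
 * The test case ID embeds the requirement ID, so a failure
 * reported by the CI server traces straight back to the
 * requirement it verifies.
 */
static bool tc_req_timer_042_01(void)
{
    return timer_period_is_valid(1u) && timer_period_is_valid(1000u);
}

/* TC-REQ-TIMER-042-02: out-of-range values are rejected. */
static bool tc_req_timer_042_02(void)
{
    return (!timer_period_is_valid(0u)) && (!timer_period_is_valid(1001u));
}
```

With a scheme like this, a defect report generated by the continuous integration server carries the test case ID, and the test case ID carries the requirement ID, closing the traceability loop described above.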

Another quality measurement that can be performed on a code base is to ensure that compiling the software produces not just zero errors but also zero warnings. This may seem like common sense, but how many times have you downloaded open source software, or maybe even shipped software yourself, that produced half a dozen or more warnings when compiled? A clean build at least guarantees that the compiler has no complaints about the code; other issues may remain, but the compiler is satisfied and code quality is improved. The SSP will compile without errors or warnings, although application projects and examples might compile with warnings if there is a mismatch between the coding standards used by different component suppliers.
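A practical way to enforce a zero-warning policy is to promote warnings to errors in the build, for example with GCC's `-Wall -Wextra -Werror` flags. The sketch below shows two typical fixes that keep a build clean under those flags; the function and data are illustrative, not from the SSP:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch of warning-free code under -Wall -Wextra -Werror:
 * - the loop index is size_t, avoiding a signed/unsigned
 *   comparison warning against the size_t length parameter,
 * - the accumulator is explicitly unsigned, avoiding implicit
 *   conversion warnings when adding uint8_t values.
 */
static uint32_t byte_sum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0u;

    for (size_t i = 0u; i < len; i++)
    {
        sum += data[i];
    }

    return sum;
}
```

Once warnings are errors, a build that was clean yesterday cannot silently accumulate warnings today, which is what makes the metric enforceable on a continuous integration server.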

Finally, and potentially the most important measurement, is achieving full test coverage. I’m sure many developers just cringed. How many developers really create enough test cases and traceability to ensure that their code is fully tested? From experience, the percentage is not as high as we might expect or hope. Test coverage is monitored and achieved for the SSP, and the results can be found in the SSP Quality Summary Report. Test coverage helps ensure that every function, every branch, and every line of code is executed, tested, and verified to work the way it should.
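As a small sketch of what "every branch" means in practice: the function below has one decision with two outcomes, so full branch coverage requires at least two test inputs, one driving each outcome. Tools such as gcov (when compiling with GCC's `--coverage` flag) report which branches the test suite actually exercised:

```c
#include <stdint.h>

/* Illustrative example for branch coverage: the single if/else
 * has two outcomes, so two tests are needed to cover both.
 * A normal increment exercises the true branch; incrementing
 * an already-saturated counter exercises the false branch.
 */
static uint8_t saturating_inc(uint8_t counter)
{
    uint8_t result;

    if (counter < 255u)
    {
        result = (uint8_t)(counter + 1u);  /* true branch */
    }
    else
    {
        result = 255u;                     /* false branch: saturate */
    }

    return result;
}
```

A test suite that only ever incremented small counter values would report 100% line coverage on the true branch yet leave the saturation path untested, which is exactly the kind of gap branch-coverage measurement exposes.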

Measuring and monitoring these five key quality indicators has allowed the Synergy Platform not just to claim that its code is high quality, but to produce the data to demonstrate that quality level, something very few in the embedded software industry can do. Don’t take my word for it: review the SSP Quality Summary Report and the SQA Handbook to better understand the processes the Synergy Platform follows and the results it achieves. Links are provided in the Hot Tip section below.



Until next time,


Live long and profit!





Hot Tip of the Week

Check out the details about how high quality levels are achieved with the Synergy Software Package. General information can be found at:

You may also want to check out the SQA Handbook:

and the SSP Quality Summary Report: