Surprise! You probably thought that our series on real-time operating systems was over. As it turns out, we missed an incredibly important topic that every developer needs to be aware of: optimization! There are several areas where a developer may want to optimize their software, such as performance and memory usage. In this post, we will discuss several techniques and tips for optimizing a ThreadX®-based application for memory usage.

Let’s start by discussing what a developer can do to help minimize how much memory their application will use. There are several critical best practices that a developer should pay attention to. The first, and possibly the most important, is to perform a worst-case stack analysis. By default, each thread is assigned a stack size of one kilobyte. Depending on what the thread is doing, the default value may not be enough, resulting in a stack overflow, or, as is more often the case, far more stack space is allocated than the thread will ever use and memory is wasted.

Take the blinky LED application that is generated by e2 studio, for example. The blinky thread uses the default stack size of one kilobyte, but if the only thing it is doing is calling functions that change the LED state, the stack size probably could be 128 bytes or less! Performing a worst-case stack analysis through manual calculation, static analysis, or experimentation will help a developer ensure that they not only have enough stack space, but that they don’t have too much either.
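If you want to experiment, one straightforward approach is to pre-fill a thread’s stack with a known pattern and then see how much of that pattern has been overwritten after the system has run through its worst-case paths. The sketch below is just that, a sketch: the entry function, trial stack size, and priorities are placeholder assumptions, not the code that e2 studio actually generates.

#include "tx_api.h"
#include <string.h>

#define BLINKY_STACK_SIZE  256u    /* trial value; adjust after measuring             */
#define STACK_FILL_BYTE    0xEFu   /* known pattern used to find the high-water mark  */

static TX_THREAD blinky_thread;
static UCHAR     blinky_stack[BLINKY_STACK_SIZE];

extern void blinky_entry(ULONG input);   /* placeholder: your LED toggling loop */

void blinky_create(void)
{
    /* Pre-fill the stack so untouched bytes can be identified later. */
    memset(blinky_stack, STACK_FILL_BYTE, BLINKY_STACK_SIZE);

    tx_thread_create(&blinky_thread, "blinky", blinky_entry, 0,
                     blinky_stack, BLINKY_STACK_SIZE,
                     10u, 10u, TX_NO_TIME_SLICE, TX_AUTO_START);
}

/* Call after the application has exercised its worst-case execution paths. */
ULONG blinky_stack_bytes_used(void)
{
    ULONG untouched = 0;

    /* Stacks grow downward on most ports, so unused bytes sit at the low end. */
    while ((untouched < BLINKY_STACK_SIZE) && (blinky_stack[untouched] == STACK_FILL_BYTE))
    {
        untouched++;
    }

    return BLINKY_STACK_SIZE - untouched;
}

ThreadX can also do much of this work for you: building the kernel with TX_ENABLE_STACK_CHECKING fills and monitors thread stacks automatically, at the cost of a little extra run-time overhead.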

A second technique that developers can use to optimize memory, and one that is slightly controversial, is to minimize how many threads and ThreadX® objects are in the application. Now, I’m not suggesting that developers just start combining threads and removing objects indiscriminately. I am advocating that developers very carefully design their system so that they have just enough threads and objects to get the job done right. When I review software, I often encounter applications with far more threads and semaphores than are necessary. Each additional thread requires its own stack, and stack space can very quickly eat up memory. Every ThreadX® object also has a control block, which means memory is being allocated just to manage the object’s state. While the amount of memory may not be much, in a memory-constrained environment it can quickly become an issue.
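If you are curious what each object costs just for its control block, before any stack or message storage is counted, you can print the sizes straight from tx_api.h. The exact numbers vary with the port and the ThreadX configuration options in use, and the sketch below assumes printf has been retargeted to a debug console.

#include <stdio.h>
#include "tx_api.h"

void print_threadx_object_overhead(void)
{
    /* RAM consumed per object just for bookkeeping, before stacks,
       queue storage, and the like are added on top. */
    printf("TX_THREAD:            %u bytes\n", (unsigned)sizeof(TX_THREAD));
    printf("TX_SEMAPHORE:         %u bytes\n", (unsigned)sizeof(TX_SEMAPHORE));
    printf("TX_MUTEX:             %u bytes\n", (unsigned)sizeof(TX_MUTEX));
    printf("TX_QUEUE:             %u bytes\n", (unsigned)sizeof(TX_QUEUE));
    printf("TX_EVENT_FLAGS_GROUP: %u bytes\n", (unsigned)sizeof(TX_EVENT_FLAGS_GROUP));
}

Running a quick audit like this early in a project makes it much easier to justify, or reject, each additional object during design reviews.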

A third technique that developers can use to help minimize memory usage is to use memory block pools when allocating memory dynamically. Memory block pools not only exhibit deterministic behavior, but they also don’t fragment the way a byte pool or the heap does. This helps ensure that the application doesn’t lose usable memory as it runs, and it gives developers a safe way to perform dynamic memory allocation that can minimize the overall memory footprint of the application.
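Here is a minimal sketch of the block pool services. The pool name, block size, and block count are invented for illustration; the backing array is sized with a pointer-sized overhead per block, which is what the ThreadX documentation specifies for block pools.

#include "tx_api.h"

#define MSG_BLOCK_SIZE   32u   /* every block is the same size, so no fragmentation */
#define MSG_BLOCK_COUNT  8u
#define MSG_POOL_SIZE    (MSG_BLOCK_COUNT * (MSG_BLOCK_SIZE + sizeof(void *)))

static TX_BLOCK_POOL msg_pool;
static UCHAR         msg_pool_memory[MSG_POOL_SIZE];

void msg_pool_create(void)
{
    tx_block_pool_create(&msg_pool, "msg pool", MSG_BLOCK_SIZE,
                         msg_pool_memory, MSG_POOL_SIZE);
}

void msg_pool_example(void)
{
    VOID *block;

    /* Allocate a fixed-size block, waiting up to 10 ticks if none is free. */
    if (tx_block_allocate(&msg_pool, &block, 10u) == TX_SUCCESS)
    {
        /* ... use the block, for example as a message buffer ... */

        tx_block_release(block);   /* return the block to the pool */
    }
}

Because every block is the same size, the worst-case footprint of the pool is known at design time, which is exactly what you want when budgeting RAM.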

Finally, a technique that doesn’t involve the way the code is architected or developed is to use the compiler to optimize for memory usage. Developers who find that they are near the memory limit of their processor can examine the compiler’s optimization settings and enable memory-footprint optimizations. This will help decrease memory usage, but be warned: it may also affect debugging or even run-time performance, so make these adjustments carefully and make sure that you take before-and-after measurements for comparison.
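As an example, if your project is built with the GCC toolchain that e2 studio can use, this typically means moving the optimization level to -Os and compiling with -ffunction-sections and -fdata-sections so that the linker’s --gc-sections option can discard functions and data that are never referenced. Other toolchains, such as IAR, offer equivalent size optimizations under different names, so check your compiler’s documentation for the specifics.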

Now that we understand a few techniques that we can use to help minimize our ThreadX application’s memory usage, in the next post we will examine how to maximize performance.


Until next time,

Live long and profit!

Professor_IoT


Hot Tip of the Week

You may be interested in the memory report that is included in the SSP datasheet. You might have forgotten that the SSP even has a datasheet; just as Synergy MCUs have datasheets with guaranteed characteristics, the SSP has one for the software that goes into it. No other MCU manufacturer warrants their software the way Renesas does for the SSP. The datasheet includes a variety of useful metrics, and in the context of this blog, the estimated memory requirements table is of particular interest. There is a table for ThreadX, and just about every SSP module has a similar one. These tables can be exceptionally useful for estimating your overall memory footprint and establishing a known ‘high end’ from which you can start optimizing. The SSP datasheet can be found here:

https://www.renesas.com/en-us/doc/products/renesas-synergy/doc/r01ds0272eu0138-synergy-ssp-120-datasheet.pdf