In the last several posts, we have been examining the different tools available in the ThreadX® real-time operating system (RTOS) that can be used to synchronize threads, such as semaphores, mutexes, event flags and message queues. Task synchronization isn't the only important concept RTOS application developers need to understand. Memory management is just as important if a system is going to behave properly and maintain its deterministic behavior. In this post, we will examine memory pools: what they are, how to use them and, most importantly, why developers should use them.
Every RTOS-based application needs to allocate memory. Recall for a moment that every task, semaphore, mutex and other RTOS object has a control block associated with it. There may also be a need to allocate buffers for communication. Typically, a developer will use malloc to dynamically allocate memory in an application. The problem with malloc is that most implementations are nondeterministic, which means there is no way to determine exactly how long a call will take to execute. Even more importantly, malloc uses the heap, and the more a developer allocates and then frees memory, the greater the opportunity for fragmentation to occur, which further degrades performance and risks a failure to allocate memory on a future request.
A memory pool is a feature built into ThreadX that allows a developer to avoid malloc and still allocate memory dynamically in the application in a safe manner. There are two memory pool types available to developers: byte pools and block pools. A byte pool behaves just like the heap and comes with all the same problems, such as memory fragmentation. A block pool, on the other hand, allocates memory blocks of a fixed number of bytes. Block pool algorithms are not only deterministic but also free of fragmentation issues! This makes block pools a great tool for real-time embedded software developers.
Creating a memory block pool in the Renesas Synergy™ environment is simple, but there is no GUI for creating one. If developers want to use block pools, they need to dig into the documentation and find the APIs necessary to create and use them. A developer can use the smart manual's autocomplete feature by typing tx_block and then pressing ctrl-space to select and fill out the API. The APIs are straightforward and, to save the reader time, the block pool APIs can be found below:
UINT tx_block_pool_create(TX_BLOCK_POOL *pool_ptr, CHAR *name_ptr, ULONG block_size, VOID *pool_start, ULONG pool_size)
UINT tx_block_allocate(TX_BLOCK_POOL *pool_ptr, VOID **block_ptr, ULONG wait_option)
UINT tx_block_release(VOID *block_ptr)
Before using the tx_block_allocate or tx_block_release calls, there are several steps that a developer needs to take. First, a few variables need to be created: a TX_BLOCK_POOL control block for the pool, a memory area to hold the pool, a pointer to receive each allocated block and a status variable for the API return values.
An example of how these variables might be declared can be seen below:
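The following declarations are a minimal sketch; the names (my_block_pool, block_pool_memory and so on) and the 1000-byte pool size are illustrative choices, not requirements:

TX_BLOCK_POOL my_block_pool;              /* Block pool control block                   */
static UCHAR  block_pool_memory[1000];    /* Memory area that will back the pool        */
VOID         *block_ptr;                  /* Receives the address of an allocated block */
UINT          status;                     /* Return status from the ThreadX API calls   */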
Once the variables are created, a developer will want to use the tx_block_pool_create API to create the block pool. Note that if a developer allocates 1000 bytes for a memory pool and each block contains 50 bytes, the developer will not end up with 20 blocks in the memory pool. Each block carries a little overhead used to manage the block. A simple equation can be used to calculate the resulting total number of blocks:
Total blocks = (total bytes) / (block size + sizeof(void *))
For the example with 1000 bytes, the block total would be calculated as:
(1000 bytes) / (50 bytes + 4 bytes) ≈ 18.5 blocks
Since half a block makes no sense, a developer has 18 blocks that they can allocate in their memory block pool.
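For reference, the same calculation can be written directly in C. The macro names here are illustrative, and the example assumes a 32-bit Synergy MCU where sizeof(void *) is 4; integer division drops the partial block:

#define BLOCK_POOL_SIZE   1000   /* Total bytes reserved for the pool */
#define BLOCK_SIZE        50     /* Usable bytes in each block        */

/* Each block carries one pointer of overhead for pool management. */
ULONG total_blocks = BLOCK_POOL_SIZE / (BLOCK_SIZE + sizeof(void *));   /* = 18 */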
With the block pool created, a developer can allocate blocks using the tx_block_allocate API call. (Notice that allocating a block requires a pointer to a pointer, and thus the ** used in the following example is not a typo!) A complete example of how to create a memory pool and then allocate a block can be seen in the following code snippet:
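The snippet below is a minimal sketch rather than production code; the function name example_block_pool_usage, the pool name "My Block Pool" and the macro values are assumptions made for illustration:

#include "tx_api.h"

#define BLOCK_POOL_SIZE   1000   /* Total bytes reserved for the pool */
#define BLOCK_SIZE        50     /* Usable bytes in each block        */

TX_BLOCK_POOL my_block_pool;                        /* Block pool control block     */
static UCHAR  block_pool_memory[BLOCK_POOL_SIZE];   /* Memory area backing the pool */

void example_block_pool_usage(void)
{
    VOID *block_ptr;
    UINT  status;

    /* Create the block pool; ThreadX carves block_pool_memory into
       fixed-size blocks of BLOCK_SIZE bytes (18 blocks in this case). */
    status = tx_block_pool_create(&my_block_pool, "My Block Pool",
                                  BLOCK_SIZE, block_pool_memory,
                                  BLOCK_POOL_SIZE);
    if (status != TX_SUCCESS)
    {
        return;   /* Handle the error as appropriate for the application. */
    }

    /* Allocate one block, waiting until a block becomes free.
       Note the &block_ptr: tx_block_allocate needs a pointer to the pointer. */
    status = tx_block_allocate(&my_block_pool, &block_ptr, TX_WAIT_FOREVER);
    if (status == TX_SUCCESS)
    {
        /* Use the block, for example as a communication buffer ... */

        /* ... and return it to the pool when finished. */
        tx_block_release(block_ptr);
    }
}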
Memory pools can be a useful tool for developers who need to dynamically allocate memory but want to ensure that they don't face fragmentation or performance issues in their applications. There are quite a few other capabilities associated with memory pools in ThreadX, but for now it will be up to the developer to review the user manual. Next week we will examine how we can include the ThreadX source code in a project and customize it.
Until next time,
Live long and profit!
Professor_IoT
Hot Tip of the Week
All the X-Ware™ component documents for Renesas Synergy are available in a single convenient zip file. This file is located on the Synergy Gallery under the SSP window Documentation tab: https://synergygallery.renesas.com/ssp/support#read
The zip file is in the right-side guide bar, about halfway down the list. It contains documents for ThreadX, NetX, NetX Duo, USBX and all the other ThreadX components included with Synergy.