Coding fundamentals
Important
This is the Azure Sphere (Legacy) documentation. Azure Sphere (Legacy) is retiring on 27 September 2027, and users must migrate to Azure Sphere (Integrated) by this time. Use the Version selector located above the TOC to view the Azure Sphere (Integrated) documentation.
We recommend that your application code meet the minimum quality standard defined in this topic. Through our partnerships with customers seeking to improve their production-deployed applications, we've found some common issues that, when fixed, improve application performance.
Common issues
- When setting the target API set, we recommend using the latest CMake and Azure Sphere tools, and ultimately compiling the final Release binaries with `AZURE_SPHERE_TARGET_API_SET="latest-lts"`. For more details, see Coding for renewable security.
Note
When creating image packages to be sideloaded during a manufacturing process, set `AZURE_SPHERE_TARGET_API_SET` to the Azure Sphere OS version to which the device has been sourced or recovered; failing to do so will cause the Azure Sphere OS to reject the image package.
- When you are ready to deploy an application to production, make sure to compile the final image packages in Release mode.
- It is common to see applications deployed to production despite compiler warnings. Enforcing a zero-warnings policy for full builds ensures that every compiler warning is intentionally addressed. We highly recommend that you address the following, most frequent, warning types:
- Implicit conversion-related warnings: Bugs are often introduced by implicit conversions left over from initial, quick implementations that were never revised. For example, code with many implicit numeric conversions between different numeric types can suffer critical precision loss or even calculation or branching errors. To adjust all the numeric types properly, intentional analysis of each conversion is recommended, not just casting.
- Avoid altering the expected parameter types: When calling APIs without explicit casts, implicit conversions can cause issues; for example, a buffer can be overrun when a signed numeric type is passed where an unsigned numeric type is expected.
- const-discarding warnings: When a function requires a `const` type as a parameter, discarding that qualifier can lead to bugs and unpredictable behavior. The warning exists to ensure that the `const` parameter remains intact and that the restrictions intended in the design of the API or function are respected.
- Incompatible pointer or parameter warnings: Ignoring these warnings can hide bugs that are difficult to track down later. Eliminating them can also reduce the time needed to diagnose other application issues.
- Setting up a consistent CI/CD pipeline is key to sustainable long-term application management, because it lets you easily rebuild binaries and their corresponding symbols to debug older application releases. A proper branching strategy is also essential for tracking releases and avoids spending costly disk space storing redundant binary data.
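To make the implicit-conversion and signed/unsigned warning classes above concrete, here is a minimal sketch (all names are illustrative, not from any Azure Sphere API) of validating a signed length before it reaches a `size_t` parameter. Compiling with `-Wall -Wextra -Wconversion` flags the implicit `int` to `size_t` conversion, and a zero-warnings policy forces this kind of analysis instead of a silencing cast:

```c
#include <stdbool.h>
#include <stddef.h>

// Hypothetical helper: returns true and writes the validated length to *out
// only when 'requested' is non-negative and fits within 'cap'.
// A negative int implicitly converted to size_t would become a huge value
// (for example, -1 becomes SIZE_MAX) and could overrun a buffer.
bool validated_length(int requested, size_t cap, size_t *out)
{
    if (requested < 0) {
        return false; // reject instead of letting the conversion wrap around
    }
    if ((size_t)requested > cap) {
        return false; // would exceed the destination buffer
    }
    *out = (size_t)requested; // intentional, analyzed cast, not a silent one
    return true;
}
```

The point is that the cast happens only after the value's range has been checked, so the conversion is a documented decision rather than an accident.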
Memory-related issues
- When possible, define all common fixed strings as `global const char*` instead of hard-coding them (for example, within `printf` commands) so that they can be used as data pointers throughout the entire codebase while keeping the code more maintainable. In real-world applications, harvesting common text from logs or string manipulations (such as `OK`, `Succeeded`, or JSON property names) and globalizing it to constants has often resulted in savings in the read-only data memory section (also known as .rodata), which translates to savings in flash memory that can be used for other sections (such as .text for more code). This scenario is often overlooked, but it can yield significant savings in flash memory.
Note
The above can also be achieved simply by activating compiler optimizations (such as `-fmerge-constants` on gcc). If you choose this approach, also inspect the compiler output and verify that the desired optimizations have been applied, as these might vary across different compiler versions.
- For global data structures, whenever possible, consider giving fixed lengths to reasonably small array members rather than using pointers to dynamically allocated memory. For example:

```c
typedef struct {
    int chID;
    ...
    char chName[SIZEOF_CHANNEL_NAME]; // This approach is preferable, and easier to use, for example, on a function stack.
    char *chName;                     // Unless this points to a constant, tracking a separate memory buffer adds complexity,
                                      // to be weighed against the cost/benefit, especially with multiple instances of the structure.
    ...
} myConfig;
```
- Avoid dynamic memory allocation whenever possible, especially within frequently called functions.
- In C, look for functions that return a pointer to a memory buffer and consider converting them to functions that return a result code and pass back a referenced buffer pointer together with its size. The reason: returning just a pointer has often led to issues in the calling code, because the size of the returned buffer is not explicitly communicated and the heap's consistency can therefore be put at risk. For example:
```c
// This approach is preferable:
MY_RESULT_TYPE getBuffer(void **ptr, size_t *size, [...other parameters..])

// This should be avoided, as it does not communicate the size of the
// returned buffer and lacks a dedicated result code:
void *getBuffer([...other parameters..])
```
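A minimal sketch of the preferable pattern might look as follows. The names `MY_RESULT_TYPE` and `getBuffer` come from the signatures above; the result-code values and the `requested` parameter are illustrative assumptions:

```c
#include <stdlib.h>
#include <stddef.h>

// Illustrative result codes; real applications would define their own.
typedef enum { MY_OK = 0, MY_ERR_BADPARAM = 1, MY_ERR_ALLOC = 2 } MY_RESULT_TYPE;

// Returns a dedicated result code and hands back both the buffer pointer and
// its size through out-parameters, so the caller always knows the usable size.
MY_RESULT_TYPE getBuffer(void **ptr, size_t *size, size_t requested)
{
    if (ptr == NULL || size == NULL || requested == 0) {
        return MY_ERR_BADPARAM;
    }
    *ptr = NULL;
    *size = 0;

    void *buf = calloc(1, requested); // zero-initialized allocation
    if (buf == NULL) {
        return MY_ERR_ALLOC;
    }

    *ptr = buf;
    *size = requested; // the caller receives the exact allocated size
    return MY_OK;
}
```

A caller checks the result code first, uses `*size` as the bound for every access, and frees the buffer when done; the size can no longer be silently ignored.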
Dynamic containers and buffers
Containers such as lists and vectors are also frequently used in embedded C applications, with the caveat that, because of the memory limitations of standard libraries, they typically need to be explicitly coded or linked as libraries. These library implementations can trigger intensive memory usage if not carefully designed.
Rather than the typical statically allocated arrays or highly dynamic memory implementations, we recommend an incremental allocation approach. For example, start with an empty queue implementation of N pre-allocated objects. On the (N+1)th queue push, the queue grows by a fixed X additional pre-allocated objects (N=N+X), which remain allocated until a further push overflows the current capacity and increases the allocation by another X pre-allocated objects. You can eventually implement a compacting function, to be called sparingly (as it would be too expensive to call on a regular basis), that reclaims unused memory.
A dedicated index will dynamically preserve the active object count for the queue, which can be capped to a maximum value for additional overflow protection.
This approach eliminates the "chatter" generated by continuous memory allocation and deallocation in traditional queue implementations. For details, see Memory management and usage. You can implement similar approaches for structures such as lists, arrays, and so on.
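The incremental approach described above can be sketched for an integer queue as follows. All names and the specific capacity values are illustrative assumptions, not part of any Azure Sphere API:

```c
#include <stdlib.h>
#include <stdbool.h>
#include <stddef.h>

#define QUEUE_INITIAL_CAPACITY 4  // N: pre-allocated objects at start
#define QUEUE_GROW_BY          4  // X: fixed growth step on overflow
#define QUEUE_MAX_CAPACITY     64 // hard cap for additional overflow protection

typedef struct {
    int *items;
    size_t count;    // dedicated index: active object count
    size_t capacity; // currently allocated slots
} IntQueue;

bool queueInit(IntQueue *q)
{
    q->items = malloc(QUEUE_INITIAL_CAPACITY * sizeof(int));
    q->count = 0;
    q->capacity = (q->items != NULL) ? QUEUE_INITIAL_CAPACITY : 0;
    return q->items != NULL;
}

bool queuePush(IntQueue *q, int value)
{
    if (q->count == q->capacity) {
        // Grow by a fixed block instead of reallocating on every push.
        size_t newCapacity = q->capacity + QUEUE_GROW_BY;
        if (newCapacity > QUEUE_MAX_CAPACITY) {
            return false; // capped: refuse rather than grow unbounded
        }
        int *grown = realloc(q->items, newCapacity * sizeof(int));
        if (grown == NULL) {
            return false;
        }
        q->items = grown;
        q->capacity = newCapacity;
    }
    q->items[q->count++] = value;
    return true;
}

// Call sparingly: shrinks the allocation back to the active count.
void queueCompact(IntQueue *q)
{
    size_t target = (q->count > 0) ? q->count : QUEUE_INITIAL_CAPACITY;
    int *shrunk = realloc(q->items, target * sizeof(int));
    if (shrunk != NULL) {
        q->items = shrunk;
        q->capacity = target;
    }
}
```

Because allocations happen only once per X pushes (and compaction only on demand), the allocator sees far fewer calls than with per-element allocation, which is the "chatter" this approach eliminates.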