The stack is the memory set aside as scratch space for a thread of execution.
- When a function is called, a block is reserved on the top of the stack for local variables and some bookkeeping data.
- When that function returns, the block becomes free and can be reused the next time a function is called.
- The stack is always reserved in a LIFO (last in first out) order;
- the most recently reserved block is always the next block to be freed.
- This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.
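As a rough C sketch of that behaviour (the function and variable names are purely illustrative), each call reserves a block on the stack for its locals and gives it back on return, so successive calls tend to land on the same addresses:

```c
#include <stdio.h>

void frame_demo(void) {
    /* Reserved on the stack when frame_demo is called,
       released automatically when it returns. */
    int locals[4] = {1, 2, 3, 4};
    printf("stack block at roughly %p\n", (void *)locals);
}

int main(void) {
    /* Successive calls typically reuse the same stack block,
       because the previous frame was already popped (LIFO). */
    frame_demo();
    frame_demo();
    return 0;
}
```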
The heap is memory set aside for dynamic allocation.
- Unlike the stack, there’s no enforced pattern to the allocation and deallocation of blocks from the heap;
- therefore, you can allocate a block at any time and free it at any time.
- This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time.
- There are many custom heap allocators available to tune heap performance for different usage patterns.
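A minimal C sketch of that freedom (whether a later allocation reuses a freed block is entirely up to the allocator):

```c
#include <stdlib.h>

int main(void) {
    /* Heap blocks can be allocated and freed in any order;
       the allocator has to track which regions are in use. */
    int *a = malloc(100 * sizeof *a);
    int *b = malloc(200 * sizeof *b);
    if (!a || !b) { free(a); free(b); return 1; }

    free(a);                          /* freed before b, unlike a stack */
    int *c = malloc(50 * sizeof *c);  /* may or may not reuse a's space */

    free(b);
    free(c);
    return 0;
}
```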
Each thread gets a stack, while there’s typically only one heap for the application (although it isn’t uncommon to have multiple heaps for different types of allocation).
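A hedged sketch of that layout with POSIX threads (compile with -pthread; the worker function is illustrative): each thread's local variable sits on its own stack, while the single malloc'd block is shared by both.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static int *shared_heap_block;   /* one heap block, visible to all threads */

void *worker(void *name) {
    int stack_local = 0;         /* lives on this thread's own stack */
    printf("thread %s: stack local at %p, heap block at %p\n",
           (const char *)name, (void *)&stack_local, (void *)shared_heap_block);
    return NULL;
}

int main(void) {
    shared_heap_block = malloc(sizeof *shared_heap_block);

    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)"A");
    pthread_create(&t2, NULL, worker, (void *)"B");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    free(shared_heap_block);
    return 0;
}
```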
What is their scope?
- The stack is attached to a thread, so when the thread exits the stack is reclaimed.
- The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically process) exits.
What determines the size of each of them?
- The size of the stack is set when a thread is created (see the sketch after this list).
- The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).
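For example, with POSIX threads the stack size can be requested explicitly when the thread is created (a sketch; 1 MiB is an arbitrary choice, and the platform enforces a minimum):

```c
#include <pthread.h>

void *worker(void *arg) {
    (void)arg;
    return NULL;
}

int main(void) {
    /* The thread's stack size is fixed at creation time;
       here we ask for 1 MiB. */
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1024 * 1024);

    pthread_t t;
    pthread_create(&t, &attr, worker, NULL);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```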
What makes one faster?
- The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented; see the sketch after this list), while the heap has much more complex bookkeeping involved in an allocation or deallocation.
- Also, each byte in the stack tends to be reused very frequently which means it tends to be mapped to the processor’s cache, making it very fast.
- Another performance hit for the heap is that the heap, being mostly a global resource, typically has to be thread-safe, i.e. each allocation and deallocation typically needs to be synchronized with all other heap accesses in the program.
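To make the pointer-bump claim concrete, here is a toy bump allocator in C; it is only an analogy (the real call stack is managed by the compiler and CPU), but it shows why stack-style allocation and freeing are each a single pointer adjustment:

```c
#include <stddef.h>

static unsigned char arena[1024];  /* pretend this is the stack */
static size_t top = 0;             /* the "stack pointer" */

void *bump_alloc(size_t n) {
    if (top + n > sizeof arena) return NULL;  /* "stack overflow" */
    void *p = &arena[top];
    top += n;            /* allocate: one addition */
    return p;
}

void bump_free_to(size_t mark) {
    top = mark;          /* free everything above mark: one assignment */
}
```

A heap allocator, by contrast, has to maintain free lists or similar bookkeeping on every allocation and deallocation, which is where much of the cost difference comes from.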