No, you can't decide on heap vs stack. Go's compiler decides that. You can get feedback about the decision if you pass the right debug flags, and then based on that you may be able to tickle the optimizer into changing its mind based on code changes you make, but it'll always be an optimization decision subject to change without notice in any future versions of Go, just like any other language where you program to the optimizer.
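For example, the compiler's escape-analysis decisions are printed with -gcflags='-m'. A minimal sketch (file name and function names are mine):

    // escape.go
    package main

    type payload struct{ buf [64]byte }

    func stays() payload {
        var p payload // returned by copy; the compiler can keep it on the stack
        return p
    }

    func escapes() *payload {
        p := payload{}
        return &p // the pointer outlives the frame, so p is moved to the heap
    }

    func main() {
        _ = stays()
        _ = escapes()
    }

Then `go build -gcflags='-m' escape.go` prints lines along the lines of "./escape.go:12:2: moved to heap: p". The exact messages depend on the Go version and on inlining decisions, which is the point: it's feedback, not a contract.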
If you need that level of control, Go is generally not the right language. However, I would encourage developers to be sure they need that level of control before taking it, and that's not special pleading for Go but special pleading for the entire class of "languages that are pretty fast but don't offer quite that level of control". There are still a lot of programmers running around with very 200x ideas of performance, even programmers who weren't programmers at the time and must have picked it up by osmosis.
(My favorite example to show 200x perf ideas is paginated APIs where the "pages" are generally chosen from the set {25, 50, 100} for "performance reasons". In 2025, those are terribly, terribly small numbers. Presenting that many results to humans makes sense, but my default size for paginating API calls nowadays is closer to 1000, and that's the bottom end, for relatively expensive things. If I have no reason to think it's expensive, tack another order of magnitude on to my minimum.)
By default the Elasticsearch results were paginated and defaulted to some small page size on the order of 25..100. I increased this steadily, eventually beyond 100,000, to the point where every request always returned the entire result in the first page. And it _transformed_ the performance of the service: from one that was unbearably slow for human users to one that _felt_ instantaneous. I had real perf numbers at the time, but now all I have are the impressions.
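(Back-of-envelope with assumed numbers: fetching 100,000 hits at 100 per page is 1,000 round trips; at 20 ms per round trip that's 20 seconds of pure latency before any real work. At 10,000 per page it's 10 round trips, 0.2 seconds. The per-request overhead, not the payload, is what dominates.)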
But the lesson about the impact of the overhead of those paginated calls was important. Obviously everything is specific and YMMV, but this is something worth having in the back of your mind.
So, if you have predictable object sizes, the pool will stay flat. If the workloads are random, you have a new problem: as in this scenario, your pool grows to 5x the size it actually needs.
You can solve this problem, e.g. by only giving items back to the pool if they are small enough (see the sketch below). Alternatively, you could have a small pool and a big pool, but now you're playing cat and mouse.
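A rough sketch of the first option; the names and the 64 KiB cap are mine, purely illustrative:

    package bufpool

    import (
        "bytes"
        "sync"
    )

    var pool = sync.Pool{
        New: func() any { return new(bytes.Buffer) },
    }

    func Get() *bytes.Buffer { return pool.Get().(*bytes.Buffer) }

    // Put gives the buffer back only if it stayed small; oversized
    // buffers are dropped so the GC can reclaim them.
    func Put(b *bytes.Buffer) {
        const maxPooled = 64 << 10 // 64 KiB, an arbitrary threshold
        if b.Cap() > maxPooled {
            return
        }
        b.Reset()
        pool.Put(b)
    }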
In such a scenario, it could also work to simply allocate and let the GC clean up. Then you don't have to worry about memory and the lifetime of objects, which makes your code much simpler to read and reason about.
This only happens when every request lasts 1 second.
If that's the case, it's usually better to have non-global pools, pool ranges, dropping things after a certain capacity, etc.:
https://github.com/golang/go/issues/23199 https://github.com/golang/go/blob/7e394a2/src/net/http/h2_bu...
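For reference, a sketch of the "pool ranges" idea, one pool per size class; the class boundaries and names are mine:

    package bufpool

    import "sync"

    // One pool per size class, so a burst of huge requests only inflates
    // the pool for that class instead of poisoning a single shared pool.
    var classes = [...]int{1 << 10, 4 << 10, 16 << 10, 64 << 10}

    var pools [len(classes)]sync.Pool

    func init() {
        for i := range pools {
            size := classes[i]
            pools[i].New = func() any {
                b := make([]byte, size)
                return &b
            }
        }
    }

    // Acquire returns a buffer of at least n bytes; requests bigger than
    // the largest class bypass the pools entirely.
    func Acquire(n int) *[]byte {
        for i, size := range classes {
            if n <= size {
                return pools[i].Get().(*[]byte)
            }
        }
        b := make([]byte, n)
        return &b
    }

    // Release files the buffer under the largest class it can still serve;
    // buffers smaller than the smallest class are left to the GC.
    func Release(b *[]byte) {
        for i := len(classes) - 1; i >= 0; i-- {
            if cap(*b) >= classes[i] {
                pools[i].Put(b)
                return
            }
        }
    }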
What people do is what this article suggested: pool.Get/pool.Put, which makes the pool only grow in size even if the load profile changes. The app literally accumulates now-unwanted garbage in the pool, and no app I have seen has made any attempt to GC it.
> If the Pool holds the only reference when this happens, the item might be deallocated.
Conceptually, the pool is holding a weak pointer to the items inside it. The GC is free to clean them up if it wants to, when it gets triggered.
The GC calls out to sync.Pool's cleanup.
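You can watch that happen. A toy demo; the details vary by Go version (since Go 1.13, pooled items survive one GC in a "victim" cache and are dropped on the next):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        pool := sync.Pool{New: func() any { return new([1024]byte) }}

        p := pool.Get().(*[1024]byte)
        pool.Put(p)

        runtime.GC() // moves pooled items to the victim cache
        runtime.GC() // clears the victim cache; the item is now collectable

        q := pool.Get().(*[1024]byte)
        fmt.Println("survived both GCs:", p == q) // usually false: New ran again
    }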
For example, you might take the block of memory and send it, as-is, to another system that will decode it. Or you might write it to a file or to a hardware device where the bytes mean something in that specific order.
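A hedged sketch of that kind of fixed layout; the record format here is made up:

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
    )

    // A hypothetical fixed-layout record; the field order and endianness
    // are the contract the other system (or file format) decodes against.
    type header struct {
        Magic   uint32
        Version uint16
        Length  uint16
    }

    func main() {
        var buf bytes.Buffer
        h := header{Magic: 0xCAFEF00D, Version: 1, Length: 512}
        // binary.Write lays the fields out in declaration order, big-endian here.
        if err := binary.Write(&buf, binary.BigEndian, h); err != nil {
            panic(err)
        }
        fmt.Printf("% x\n", buf.Bytes()) // ca fe f0 0d 00 01 02 00
    }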
Huh, what?
I mean, who uses a 32-bit architecture by default?
Also, it would be 4 KiB, not 4 KB.
The article is about the Go compiler. On a 64-bit arch, Go's int is 64 bits.
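Easy to check:

    package main

    import (
        "fmt"
        "strconv"
        "unsafe"
    )

    func main() {
        fmt.Println(strconv.IntSize)       // 64 on 64-bit platforms, 32 on 32-bit
        fmt.Println(unsafe.Sizeof(int(0))) // 8 bytes on 64-bit, 4 on 32-bit
    }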