Comment by bastawhiz
I'm not 100% sure, but my understanding was that the non-standard pages are always larger than the standard pages. If you need more than a standard page, you always get a freshly allocated non-standard page. But when one was released, it was being treated as though it was standard-sized. The pool would then reuse that memory, but only at the standard size. So every released non-standard page leaked the difference between what was actually allocated and the standard size.
Which is to say, I don't think the memory was actually being resized. I think it was the page's metadata claiming the (incorrect) standard size, plus the incorrect handling that followed once the metadata had been changed.
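A minimal sketch of how I'm picturing it (hypothetical names and layout, not the actual allocator's code): the release path stamps every page as standard-sized before putting it on the free list, so the surplus bytes of a non-standard allocation are never handed out again.

```c
#include <stdlib.h>

#define STD_PAGE_SIZE 4096

/* Hypothetical page header; the real allocator's layout may differ. */
typedef struct Page {
    size_t size;        /* what the pool *believes* this page's capacity is */
    struct Page *next;  /* free-list link */
} Page;

static Page *free_list = NULL;

/* Requests larger than the standard size always get a fresh, non-standard
 * allocation; only standard-sized requests are served from the free list. */
Page *page_alloc(size_t payload) {
    if (payload > STD_PAGE_SIZE || free_list == NULL) {
        size_t cap = payload > STD_PAGE_SIZE ? payload : STD_PAGE_SIZE;
        Page *p = malloc(sizeof(Page) + cap);
        p->size = cap;
        return p;
    }
    Page *p = free_list;
    free_list = p->next;
    return p;
}

/* The suspected bug: on release, the page is stamped as standard-sized.
 * The underlying malloc'd block is still its original (larger) size, but
 * the pool will only ever reuse STD_PAGE_SIZE of it -- the surplus is
 * effectively leaked. */
void page_release(Page *p) {
    p->size = STD_PAGE_SIZE;   /* metadata shrinks; the allocation doesn't */
    p->next = free_list;
    free_list = p;
}
```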
Yes, that last point was what I meant. I see no reason the metadata's size field should get updated without some realloc of the memory it points to. I think I'll need to look into the actual code to see what's going on there, though, because we may both be misunderstanding. It just seems very error-prone to categorize how you free memory based on a field that, by the time you get to `free`, has no guaranteed relationship to where the memory came from. I think that should be fixed. What was done in the blog is more of a band-aid imo.
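For the kind of fix I have in mind (again hypothetical, not what the blog did): record where the page came from once, at allocation time, in a field that is never rewritten, and dispatch the release path on that instead of on the mutable size field.

```c
#include <stdlib.h>

/* Set once when the page is allocated and never updated afterwards. */
typedef enum { PAGE_FROM_POOL, PAGE_FROM_MALLOC } PageOrigin;

typedef struct Page2 {
    PageOrigin origin;    /* how this block was actually obtained */
    size_t size;
    struct Page2 *next;
} Page2;

/* Release dispatches on the allocation's origin, so a later change to the
 * size field can't send the block down the wrong free path. */
void page2_release(Page2 *p, Page2 **free_list) {
    if (p->origin == PAGE_FROM_MALLOC) {
        free(p);                  /* non-standard page: return it outright */
    } else {
        p->next = *free_list;     /* standard page: back onto the free list */
        *free_list = p;
    }
}
```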