The main disadvantage of this approach is that the only way to free memory is to terminate the program. That's fine for a program that runs for a second, but it's unusable for long-running processes and servers, where you want to free memory after finishing each memory-intensive task so that everything else running concurrently on the machine gets memory too.
That’s not a problem for cases like a dispatcher that forks off a child per request, with the memory-intensive work done in the child's address space. In fact, it can result in admirably simple and stable software. Things have probably changed, but that used to be how telco software managed trunks.
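Roughly, the pattern looks like this. A minimal sketch in C; handle_request() is a hypothetical placeholder for the memory-hungry work, not any real API:

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical: does the memory-intensive work for one request,
 * allocating freely and never bothering to call free(). */
extern void handle_request(int fd);

void dispatch(int request_fd)
{
    pid_t pid = fork();
    if (pid == 0) {
        handle_request(request_fd);  /* allocate without cleanup */
        _exit(0);   /* exit reclaims the child's entire address space */
    } else if (pid > 0) {
        close(request_fd);           /* parent keeps no per-request state */
        waitpid(pid, NULL, 0);       /* or reap asynchronously via SIGCHLD */
    }
}
```

All the "freeing" is that one _exit(): the kernel tears down the child's address space wholesale, so the worker never needs a deallocation strategy at all.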
madvise(MADV_FREE) (on *BSD, and on Linux since 4.5) lets the kernel either reclaim the specified memory range or leave it intact at its discretion, typically only under memory pressure. The older Linux option, madvise(MADV_DONTNEED), is eager instead: the pages are discarded immediately, and the next access faults in zero-filled pages.
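For a vector, that means you can hand the unused tail of the buffer back to the OS without shrinking the mapping. A minimal sketch, assuming the backing buffer came from mmap() (madvise wants page-aligned ranges); buf, used, and capacity are illustrative names, not from any real vector implementation:

```c
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Advise the kernel that everything past `used` bytes of a
 * `capacity`-byte mmap'd buffer is reclaimable. */
void release_tail(void *buf, size_t used, size_t capacity)
{
    long page = sysconf(_SC_PAGESIZE);
    /* First page boundary at or past the live data. */
    uintptr_t start = ((uintptr_t)buf + used + page - 1) & ~(uintptr_t)(page - 1);
    uintptr_t end   = (uintptr_t)buf + capacity;
    if (start >= end)
        return;  /* tail is smaller than a page: nothing to give back */

#ifdef MADV_FREE
    /* Lazy: pages stay mapped and may keep their contents until the
     * kernel actually needs the memory. */
    madvise((void *)start, end - start, MADV_FREE);
#else
    /* Eager fallback: pages are dropped now; the next touch is
     * zero-filled. */
    madvise((void *)start, end - start, MADV_DONTNEED);
#endif
}
```

The capacity stays reserved, so a later push into the tail just faults pages back in; no pointer in the program ever changes.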
This is also fine for any long-running process or server where the "vector" in question is not expected to shrink its allocation, which I would guess is the main use of vectors. Shrinking a vector's allocation is a niche use with finicky APIs that most programmers have never needed or touched.
It is elegant in that eviction only happens if memory pressure warrants it. If the vector happens to be small enough, nothing special happens.
And you save on having to write some kind of allocator for it.