Hacker News

How does that work? You cannot write a ZIP entry's local header to disk before you know its compressed size. Or, if you do, you can use a data descriptor instead, but then you cannot write entries concurrently.

I guess they buffer the compressed stream in RAM before writing it to the zip. If they want their zip output to be stable (always the same output given the same input), they also need to keep it in RAM a bit longer than strictly necessary, so that entries land in the file in a deterministic order regardless of which thread finishes first.
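To illustrate the buffer-in-RAM approach, here is a minimal sketch (not the actual implementation being discussed): members are deflated in parallel into memory, and only then written out sequentially, so the compressed size and CRC are known before each local header is emitted and no data descriptor is needed. The `write_zip` and `deflate` helpers are hypothetical names, and error handling, timestamps, and large-file (ZIP64) support are omitted.

```python
import struct
import zlib
import concurrent.futures


def deflate(data):
    # Raw deflate (negative wbits = no zlib header), as the ZIP format requires.
    c = zlib.compressobj(6, zlib.DEFLATED, -15)
    return c.compress(data) + c.flush()


def write_zip(path, members):
    """members: list of (name, bytes). Hypothetical sketch, not a full writer."""
    # Compress every member in parallel, buffering the results in RAM.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        compressed = list(pool.map(lambda m: deflate(m[1]), members))

    central = []
    with open(path, "wb") as f:
        # Write members sequentially, in input order: deterministic output.
        for (name, data), cdata in zip(members, compressed):
            offset = f.tell()
            crc = zlib.crc32(data)
            nb = name.encode()
            # Local file header: sizes are known up front because the
            # compressed bytes were buffered, so no data descriptor is needed.
            f.write(struct.pack("<4sHHHHHIIIHH", b"PK\x03\x04", 20, 0, 8,
                                0, 0, crc, len(cdata), len(data), len(nb), 0))
            f.write(nb)
            f.write(cdata)
            central.append((nb, crc, len(cdata), len(data), offset))

        cd_start = f.tell()
        for nb, crc, csize, usize, offset in central:
            # Central directory entry for each member.
            f.write(struct.pack("<4sHHHHHHIIIHHHHHII", b"PK\x01\x02", 20, 20,
                                0, 8, 0, 0, crc, csize, usize, len(nb),
                                0, 0, 0, 0, 0, offset))
            f.write(nb)
        cd_size = f.tell() - cd_start

        # End-of-central-directory record.
        n = len(central)
        f.write(struct.pack("<4sHHHHIIH", b"PK\x05\x06", 0, 0, n, n,
                            cd_size, cd_start, 0))
```

The trade-off is exactly the one described above: peak memory grows with the amount of compressed data still waiting its turn to be written.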



Maybe Windows allows a file to be constructed quickly from parts that are not necessarily multiples of the block size. Maybe there is a fast API for writing multiple files and then turning them into a single file. POSIX doesn't allow that, but POSIX is quite old.


I think you get different compressed files depending on how many threads you use to compress.
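That can happen when a compressor splits the input into per-thread chunks with flush points at the boundaries: the decompressed data is identical, but the compressed bytes differ from a single-stream run, because each flush emits a sync marker and resets the dictionary. A small sketch of the effect using zlib (the chunk split here is an illustrative assumption, not how any particular tool does it):

```python
import zlib

data = bytes(range(256)) * 2000

# Single-stream compression.
one = zlib.compress(data, 6)

# Chunked compression with a full flush at the boundary, as a
# chunk-parallel compressor might produce after stitching parts together.
c = zlib.compressobj(6)
mid = len(data) // 2
two = (c.compress(data[:mid]) + c.flush(zlib.Z_FULL_FLUSH)
       + c.compress(data[mid:]) + c.flush())

assert two != one                                      # different bytes on disk
assert zlib.decompress(one) == zlib.decompress(two) == data  # same content
```

So a "stable" parallel zipper must also fix the chunking scheme (not just the write order) if it wants byte-identical output across thread counts.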



