> The file system limit is 64k hard-links for the same file
I had never heard that, so I went sniffing around, and it seems to be ext4-specific[1], but I wasn't able to easily find the limits for ZFS (or XFS, etc.). So depending on how much glucose one wishes to spend, it may be better to use a different FS than to do all that renaming workaround.
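For what it's worth, the limit is also easy to measure empirically on whatever filesystem you're on, by hard-linking one file until the kernel refuses with EMLINK. A rough sketch (the function name and the cap are made up):

```python
import errno
import os
import tempfile

def probe_hardlink_limit(directory=".", cap=100_000):
    """Hard-link one file repeatedly until the FS refuses with EMLINK.

    Returns the link count reached (the original file counts as one),
    or `cap` if no limit was hit first.
    """
    with tempfile.TemporaryDirectory(dir=directory) as tmp:
        target = os.path.join(tmp, "target")
        open(target, "w").close()
        count = 1
        try:
            for i in range(cap):
                os.link(target, os.path.join(tmp, f"link-{i}"))
                count += 1
        except OSError as e:
            if e.errno != errno.EMLINK:
                raise
        return count

if __name__ == "__main__":
    # Prints 65000 on ext4; other filesystems report other limits (or none below the cap).
    print(probe_hardlink_limit())
```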
The whole extract_mbtiles.py file is 97 lines of code. That covers parsing the MBTiles file, writing the metadata, and some CLI-specific lines. It's actually quite a concise script for doing this while taking care of the hard-link limits.
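The core of such an extraction really is short. Here is a rough sketch of the idea, not the actual extract_mbtiles.py: it assumes the standard MBTiles schema (a `metadata` key/value table and a `tiles` table with TMS row numbering) and leaves out the 64k-link rollover handling:

```python
import hashlib
import json
import os
import sqlite3

def extract_mbtiles(mbtiles_path, out_dir, ext="pbf"):
    """Dump MBTiles to out_dir/z/x/y.<ext>, hard-linking duplicate tiles."""
    db = sqlite3.connect(mbtiles_path)

    # The metadata table is a simple key/value store.
    meta = dict(db.execute("SELECT name, value FROM metadata"))
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, "metadata.json"), "w") as f:
        json.dump(meta, f)

    seen = {}  # content hash -> path of the first copy written
    rows = db.execute(
        "SELECT zoom_level, tile_column, tile_row, tile_data FROM tiles"
    )
    for z, x, row, data in rows:
        y = (1 << z) - 1 - row  # MBTiles stores rows in TMS order
        path = os.path.join(out_dir, str(z), str(x), f"{y}.{ext}")
        os.makedirs(os.path.dirname(path), exist_ok=True)

        digest = hashlib.sha1(data).digest()
        if digest in seen:
            os.link(seen[digest], path)  # duplicate tile: link, don't copy
        else:
            with open(path, "wb") as f:
                f.write(data)
            seen[digest] = path
```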
In the Nutiteq mobile maps SDK (later Carto, now abandonware) we used a compressed bitmap specifically to represent 'water' and 'empty land' tile masks, covering these two special cases. We shipped a planet-scale embedded MBTiles package for mobile at around 30 GB, if I remember correctly. This tile-mask concept (essentially an instant bitmap index) should work well for the server case too.
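A minimal, uncompressed sketch of that tile-mask idea, one bit per tile at a fixed zoom level (class and method names are made up; the real thing was compressed):

```python
class TileMask:
    """One bit per tile at a fixed zoom level; a set bit marks a 'special' tile."""

    def __init__(self, zoom):
        self.zoom = zoom
        self.width = 1 << zoom  # tiles per axis at this zoom
        self.bits = bytearray((self.width * self.width + 7) // 8)

    def _index(self, x, y):
        return y * self.width + x

    def set(self, x, y):
        i = self._index(x, y)
        self.bits[i // 8] |= 1 << (i % 8)

    def contains(self, x, y):
        i = self._index(x, y)
        return bool(self.bits[i // 8] & (1 << (i % 8)))

# The server checks the mask first and returns one shared canonical
# "all water" (or "empty land") tile for every hit, skipping disk entirely.
water = TileMask(zoom=8)   # 256x256 tiles -> 8 KiB of bits, uncompressed
water.set(10, 20)
assert water.contains(10, 20)
```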
The Linux kernel's filesystem cache is actually really efficient at doing this. I doubt we could come up with an nginx scripting solution that would be equally efficient.
In total, there are 271 million hard links. So out of 300 million files, 271 million are hard links!
The file system limit is 64k hard links to the same file, so I have to handle the case when it's reached and then start a new file for the next 64k.
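One way to handle that rollover, as a sketch rather than the script's actual code: attempt the hard link, and on EMLINK write a fresh copy that becomes the new link target:

```python
import errno
import os
import shutil

def link_or_copy(canonical, path):
    """Hard-link `path` to `canonical`; on EMLINK start a fresh copy.

    Returns the file that later duplicates should link against: the old
    canonical path, or the new copy once the link count is exhausted
    (65000 links per inode on ext4).
    """
    try:
        os.link(canonical, path)
        return canonical
    except OSError as e:
        if e.errno != errno.EMLINK:
            raise
        # Link count exhausted: write a real copy and make it the
        # canonical file for the next ~64k duplicates.
        shutil.copyfile(canonical, path)
        return path
```

The caller would keep a map from tile-content hash to the current canonical path and replace the entry with whatever this returns.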