At these levels of spending the actual cost is heavily negotiated and is usually far below the advertised on-demand pricing.
Considering I could negotiate A100s for under a dollar an hour 8 months ago, when they were in high demand, I wouldn't be surprised if the cost was close to $100k for this training run.
I got the impression that kind of thing (buying time on GPUs hosted in people's homes) isn't useful for training large models, because model training requires extremely high bandwidth connections between the GPUs such that you effectively need them in the same rack.
I suspect most A100s on vast.ai are actually in a datacenter, and might even be on other public clouds such as AWS. I don't see why either vast.ai or AWS would care if that were the case.
Anyone training this size of model is almost certainly using AWS/GCE.
The GPU marketplaces are nice for people who need smaller/single GPU setups, don't have huge reliability or SLA concerns, and where data privacy risks aren't an issue.
Google is generous in giving TPUs away for free for research, so this run is likely using that. The more representative number is the one from Meta, which required 87k A100-hours, which works out to roughly $100-200k for a 7B model training run.
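Rough arithmetic behind that range (the $1-2 per A100-hour rate is an assumption about negotiated pricing, not a quoted figure), as a quick sketch:

```python
# Back-of-envelope cost for Meta's reported 87k A100-hours.
# The $/hr range below is an assumed negotiated rate, not a published price.
a100_hours = 87_000
for rate_per_hour in (1.0, 2.0):
    print(f"${a100_hours * rate_per_hour:,.0f} at ${rate_per_hour:.2f}/A100-hour")
# -> $87,000 at $1.00/A100-hour
# -> $174,000 at $2.00/A100-hour
```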
> Overall we reach a throughput of over 1900 tokens / second / TPU-v4 chip in our training run
1 trillion / 1,900 ≈ 526,315,789 chip-seconds ≈ 150,000 chip-hours.
Assuming "on-demand" pricing [1], that's about $500,000 in training cost.
[1] https://cloud.google.com/tpu/pricing
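The same back-of-envelope in code, for anyone who wants to plug in different numbers (the ~$3.22 per TPU-v4 chip-hour on-demand rate is an assumption based on the pricing page above; actual rates vary by region and commitment):

```python
# Chip-hours and rough cost for ~1 trillion tokens at 1,900 tokens/sec/chip.
# The on-demand rate is an assumed figure from the linked pricing page;
# negotiated or committed-use pricing would be lower.
tokens = 1e12
tokens_per_sec_per_chip = 1900
on_demand_rate = 3.22  # USD per TPU-v4 chip-hour (assumed)

chip_seconds = tokens / tokens_per_sec_per_chip  # ~5.26e8 chip-seconds
chip_hours = chip_seconds / 3600                 # ~146,000 chip-hours
print(f"{chip_hours:,.0f} chip-hours, ~${chip_hours * on_demand_rate:,.0f} on-demand")
# -> ~146,000 chip-hours, ~$470,000 at on-demand rates (i.e. roughly $500k)
```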