> even if the depreciation rates are slightly wrong.
The TFA cites a linked study stating that "CoreWeave, for example, depreciates its GPUs over six years", which strikes me as way more than 'slightly wrong'. Mapping that backward: 2020's hot new data center GPU was the A100, and those cards are just reaching their fifth year of service. How many large customers are lining up to pay top dollar to rent a five-year-old GPU for the next 12 months? For most current workloads I suspect A100s are already net negative to keep operating in opportunity-cost terms: that power, cooling, and rack space are more profitably allocated to 2023's now mid-life H100s.
The rate of data center GPU progress has accelerated significantly in the last five years. I hardly know anything about AI workloads, but even I know that newer capabilities like FP8 support arrived only recently and can deflate the value of older GPUs almost overnight. With everyone now hunting for those optimization shortcuts, it would be foolish to think more won't be found soon. The odds that this year's newly installed H200s will keep generating significant rental fees for a full 72 months are, IMHO, vanishingly small. Over a trillion dollars of loans have been secured by assets actually worth maybe half the claimed value. It's like 2008 sub-prime mortgages all over again.
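A rough back-of-the-envelope sketch of the gap being described, with every number here a hypothetical assumption rather than a figure from the article: six-year straight-line book value vs. a market value assumed to halve every ~18 months as new generations land.

```python
# Hypothetical illustration only: book value under 6-year straight-line
# depreciation vs. an assumed rental/market value that halves every ~18
# months as newer GPU generations (and formats like FP8) arrive.
# All inputs are made up for the sake of argument.

PURCHASE_PRICE = 25_000        # assumed purchase price of one data center GPU, USD
DEPRECIATION_YEARS = 6         # CoreWeave's schedule per the linked study
MARKET_HALF_LIFE_YEARS = 1.5   # assumed halving time for market value

for year in range(1, DEPRECIATION_YEARS + 1):
    book_value = PURCHASE_PRICE * max(0.0, 1 - year / DEPRECIATION_YEARS)
    market_value = PURCHASE_PRICE * 0.5 ** (year / MARKET_HALF_LIFE_YEARS)
    gap = book_value - market_value
    print(f"year {year}: book ${book_value:>8,.0f}  "
          f"market ${market_value:>8,.0f}  gap ${gap:>8,.0f}")
```

Under these assumed numbers the book value overstates the market value for most of the schedule, which is exactly the collateral mismatch being argued about.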
Yes, I think it's worth getting into the details. My mental model is that anything within 2x is reasonable estimation error; what you look for are 10x errors and cliff edges, like in the 2007 crisis, where a good anecdote is default probability assumptions of ~2% that realized at ~30% (a 15x miss).
Is a 15x error plausible in realized GPU value plus the debt, even after inflation? I suppose, but it feels less likely except in some tail scenarios that have other interesting properties.
That doesn't mean there isn't a significant possibility of a market correction due to other factors, but the GPU factor itself seems medium-sized compared to historical scenarios. Am I missing anything in the first-order thinking?
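To make the "size of error" comparison concrete, here is a minimal sketch (the residual-value figures are assumed, purely illustrative) of how the error multiple in the 2007-style anecdote compares to a couple of hypothetical GPU residual-value misses:

```python
# Illustrative error multiples; all inputs are assumptions for the sake of argument.

def error_multiple(assumed: float, realized: float) -> float:
    """Ratio of the larger value to the smaller: how far off the assumption was."""
    return max(assumed, realized) / min(assumed, realized)

# The 2007-crisis anecdote from the comment above:
# assumed ~2% default probability, realized ~30%.
print(error_multiple(assumed=0.02, realized=0.30))   # ~15x

# Hypothetical GPU collateral scenarios: assumed to retain 50% of purchase
# price at loan maturity, but realizing 25% or only 10%.
print(error_multiple(assumed=0.50, realized=0.25))   # ~2x
print(error_multiple(assumed=0.50, realized=0.10))   # ~5x
```

Even the pessimistic residual-value assumption here comes out around a 5x miss rather than 15x, which is roughly the sense in which the GPU factor looks medium-sized unless something else compounds it.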