
I feel this is too hardline and e.g. eliminates the useful things people do with SAT solvers.


The first SAT solver case that comes to mind is circuit layout, and then you have an m vs. n problem. Because you don’t SAT solve per chip, you SAT solve per model and then amortize that cost across the first couple years’ sales. And they’re also “cheating” by copy-pasting cores, which means the SAT problem is growing much more slowly than the number of gates per chip. Probably more like n^(1/2) these days.
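A back-of-the-envelope sketch of that amortization, in Python with entirely made-up numbers (gate counts, core counts, unit sales, and the cubic cost model are all hypothetical):

  # Hypothetical sketch: why solving per core, per model,
  # instead of per chip, changes the economics.
  def naive_cost(n):
      return n ** 3                  # cubic solve over the whole chip, per chip

  def amortized_cost(n, c, s):
      m = n / c                      # solver only sees one core's worth of gates
      return m ** 3 / s              # one-time solve spread over the sales run

  n, c, s = 1e9, 64, 5e7             # gates, copy-pasted cores, units sold
  print(naive_cost(n) / amortized_cost(n, c, s))   # ~1.3e13x cheaper per chip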

If SAT solvers suddenly got inordinately more expensive, you’d go back to using humans: they used to do this work, and the solver only took over because it was better/cheaper.

Edit: checking my math, looks like in a 15 year period from around 2005 to 2020, AMD increased the number of cores by about 30x and the transistors per core by about 10x.
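Working those rough figures through (the 30x and 10x are from the edit above; the rest is arithmetic):

  cores_growth = 30          # ~30x more cores, ~2005 -> ~2020
  per_core_growth = 10       # ~10x more transistors per core
  total_growth = cores_growth * per_core_growth   # ~300x transistors overall
  # If the solver-visible instance scaled as n^(1/2), we'd expect
  # ~sqrt(300) = ~17x growth; 10x is slower still, so the per-core
  # figure is consistent with the "n^(1/2) or better" claim.
  print(total_growth, total_growth ** 0.5)        # 300 17.32...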


That's quite a contortion to avoid losing the argument!

"Oh well my algorithm isn't really O(N^2) because I'm going to print N copies of the answer!"

Absurd!


What I’m saying is that the gate-count problem that is profitable to solve is of size m, not n, so the solver’s cost is m³, not n³. And as long as m < n^(2/3), the total cost stays under n² despite applying a cubic-time solver to m.
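A quick numeric check of that inequality (n chosen arbitrarily):

  n = 1e9
  m = 0.9 * n ** (2 / 3)      # anything strictly below n^(2/3)
  print(m ** 3 < n ** 2)      # True: cubic cost on m stays under n^2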

I would argue that this is essentially part of why Intel is flagging now. They had a model of ever-increasing design costs offset by steadily growing sales quarter after quarter. They introduced the “tick-tock” model of biting off a major design every second cycle, with small refinements in between, to keep the slope of the cost line below the slope of the sales line. Then they stumbled on that, and now it’s tick tick tock, and clearly TSM, AMD, and possibly Apple (with TSM’s help) can now produce a better product for a lower cost per gate.

Doesn’t TSM’s library of existing circuit layouts constitute a substantial decrease in the complexity of laying out an entire chip? As the library grows, you introduce more precalculated components that are dropped in, bringing the slope of the line down.

Meanwhile NVIDIA has an even better model where they spam GPU units like mad. What’s the doubling interval for GPU units?
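For what it’s worth, the doubling interval falls out of any two data points; here with made-up unit counts, since I don’t have NVIDIA’s real numbers at hand:

  import math

  def doubling_interval(t0, u0, t1, u1):
      # units grow as u0 * 2**((t - t0) / T); solve for T
      return (t1 - t0) * math.log(2) / math.log(u1 / u0)

  # Hypothetical: 512 shader-core-like units in 2010, 16384 in 2020
  print(doubling_interval(2010, 512, 2020, 16384))  # 2.0 years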




