It's 2000. Build failures were pretty much expected for any software, so it was probably a good idea to stay home and work through whatever came up. Nowadays you just fire up a build and go, and it's probably finished before you're out the door.
Yes. I never had problems with Linux itself and compiled kernels constantly. What I did have incessant problems with was compiling GNOME 1.2 and 1.4. SO MANY problems, just non-stop... it was always something. I learned a bit, though not as much as I could have if I'd paid more attention.
Back in the pre-module days, Slackware shipped with a "big" kernel that had lots of drivers compiled in. The advantage was that the kernel could boot on a wide range of hardware, but it was very bloated (for the time), and users were expected to recompile it with the unnecessary drivers removed. I remember compiling it on a Pentium 60 with 16 MB of RAM. Took 1-2 hours or so.
I remember starting a 1.2 kernel compile on my 486 with 4 MB of RAM, going to bed, then going to school, and finding that it had finished when I came back home.
On the one hand, we have Moore's law. On the other hand, kernel compilation time. Since compilation time is monotonically increasing, do we observe exponentially growing compilation complexity in the kernel?
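A rough back-of-the-envelope way to frame it (the doubling period and the numbers below are illustrative assumptions, not measurements): if per-machine build throughput keeps improving roughly exponentially while wall-clock compile time never goes down, then the total work being compiled has to grow at roughly the same exponential rate.

```python
# Back-of-the-envelope sketch; the doubling period and ratios are
# illustrative assumptions, not measured data.

def implied_work_growth(years: float,
                        throughput_doubling_years: float = 2.0,
                        compile_time_ratio: float = 1.0) -> float:
    """Factor by which total compile work must have grown over `years`,
    assuming build throughput doubles every `throughput_doubling_years`
    and wall-clock compile time changed by `compile_time_ratio`
    (1.0 = flat, >1.0 = compiles got slower)."""
    throughput_factor = 2 ** (years / throughput_doubling_years)
    return throughput_factor * compile_time_ratio

# Flat compile times over 20 years would already imply ~1024x more work...
print(implied_work_growth(20))
# ...and if compiles also got 2x slower, that's ~2048x.
print(implied_work_growth(20, compile_time_ratio=2.0))
```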
In many organizations, compilation time tends to hover around a benchmark of "this is acceptable." If it is below that benchmark, nobody pays attention to performance. If it is above, someone fixes something.
In multiple interviews Linus Torvalds has said that this benchmark is about 10 minutes for him. But considering that his personal hardware gets better faster than Moore's law alone would predict, that means compiles get slower for the rest of us.
Searching "Phoronix ${cpuModel}" will take you to the full review for that model, along with the rest of the build specs.
With the default build in a standard build environment, clock speed tends to matter more. With tuning, one could probably squeeze more out of the higher-core-count systems.
That's using the same config as the server systems (allmodconfig), but it has the 9950X listed there, and on that config it takes 547.23 seconds instead of 47.27. That puts all of the consumer CPUs behind every server system on the list. You can also see the five-year-old 2.9 GHz Zen 2 Threadripper 3990X in front of the brand-new top-of-the-range 4.3 GHz Zen 5 9950X3D because it has more cores.
You can get a pretty good idea of how kernel compiles scale with threads by comparing the results for the 1P and 2P EPYC systems that use the same CPU model. The build generally gets ~75% faster when you double the number of cores, and that includes the cost of the cross-socket latency introduced when you go from a 1P to a 2P system.
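To put a rough number on what that kind of scaling implies (the build times and the 64-core count below are placeholders, not figures from the benchmark, and Amdahl's law is just my framing): a ~1.75x speedup from doubling an already high core count corresponds to ~87% doubling efficiency and a very large parallel fraction.

```python
# Hedged sketch: given hypothetical 1P and 2P build times for the same EPYC
# model (numbers below are made up, not taken from the Phoronix tables),
# estimate how well the kernel build scales when the core count doubles.

def doubling_efficiency(t_1p: float, t_2p: float) -> float:
    """Observed speedup from doubling cores, as a fraction of the ideal 2x."""
    return (t_1p / t_2p) / 2.0

def amdahl_parallel_fraction(t_1p: float, t_2p: float, cores_1p: int) -> float:
    """Parallel fraction p implied by Amdahl's law, T(n) = T1*((1-p) + p/n),
    when going from cores_1p to 2*cores_1p cores."""
    s = t_1p / t_2p                      # observed speedup, e.g. ~1.75
    n = cores_1p
    return (1 - s) / (1 - 1/n - s + s/(2*n))

# Made-up example: a 64-core 1P box at 300 s vs a 128-core 2P box at ~171 s
# (i.e. ~75% faster) gives ~87% doubling efficiency and a parallel fraction
# around 99.7%.
print(doubling_efficiency(300, 171.4))
print(amdahl_parallel_fraction(300, 171.4, 64))
```

Under that simplified model the build is close to embarrassingly parallel, with only a small serial tail plus the cross-socket overhead eating into each doubling.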
Oh good catches! I must have grabbed the wrong chart from the consumer CPU benchmark, thanks for pointing out the subsequent errors. The resulting relations do make more sense (clock speed certainly helps, but there is wayyyy less of a threading wall than I had incorrectly surmised).
It varies a lot depending on how much you have enabled. The distro kernels that are designed to support as much hardware as possible take a long time to build. If you make a custom kernel where you winnow down the config to only support the hardware that's actually in your computer, there's much less code to compile so it's much faster.
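If you want a quick feel for how much a trimmed config cuts out, something like this works (the paths are hypothetical, and counting enabled options is only a crude proxy for how much code actually gets compiled):

```python
# Rough sketch: count how many options are enabled (built-in =y or module =m)
# in a distro config versus a trimmed custom config. Paths are hypothetical;
# option count is only a crude proxy for the amount of code compiled.

def enabled_options(config_path: str) -> set[str]:
    enabled = set()
    with open(config_path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("CONFIG_") and (line.endswith("=y") or line.endswith("=m")):
                enabled.add(line.split("=", 1)[0])
    return enabled

distro = enabled_options("/boot/config-6.17.0-distro")  # hypothetical path
custom = enabled_options(".config")                     # your trimmed config
print(f"distro config: {len(distro)} options enabled")
print(f"custom config: {len(custom)} options enabled")
print(f"not built in the custom kernel: {len(distro - custom)} options")
```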
I recently built a 6.17 kernel using a full Debian config, and it took about an hour on a fast machine. (Sorry, I didn't save the exact time, but the exact time would only be relevant if you had the exact same hardware and config.) I was surprised how slow it still was. It appears the benefits of faster hardware have been canceled by the amount of new code added.