Go 1.6's GC is exactly what I mean. It's a design that achieves tiny pauses by not being generational, incremental, or compacting. Those features weren't developed by GC researchers because it was more fun than Minesweeper; they were developed to solve actual problems real apps had.
By optimising for a single metric whilst ignoring all other criteria, Go's GC is setting its users up for problems. A quick Google search for [go 1.6 gc] turns up, as the second result, a company that can't upgrade past Go 1.4 because newer versions have much worse GC throughput: https://github.com/golang/go/issues/14161
Their recommended solution is "give the Go app a lot more memory". Well, now they're back in the realm of GC tuning and trading memory for throughput, which is exactly what the JVM does: you can make the JVM use much less memory for any given app if you're willing to trade it off against CPU time, but if you have free RAM then by default the JVM will use it to go faster.
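To make that trade-off concrete, here's roughly what "give it more memory" looks like on the Go side. The knob is GOGC, or debug.SetGCPercent if you want to set it from code; the 300 below is just an illustrative value I picked, not something that issue or the Go team recommends:

    // Rough sketch of Go's memory-vs-GC-CPU dial. The 300 is illustrative,
    // not a recommendation from the linked issue.
    package main

    import (
        "fmt"
        "runtime"
        "runtime/debug"
    )

    func main() {
        // Equivalent to running with GOGC=300: let the heap grow to roughly
        // 4x the live set before the next collection, so the GC runs less
        // often and burns less CPU, at the cost of holding more memory.
        old := debug.SetGCPercent(300)
        fmt.Println("previous GOGC:", old)

        // ... allocate and do real work here ...

        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        fmt.Printf("GC cycles: %d, total pause: %d ns, heap in use: %d bytes\n",
            m.NumGC, m.PauseTotalNs, m.HeapInuse)
    }

The JVM version of the same dial is heap sizing (-Xms/-Xmx) plus, on the newer collectors, a pause-time goal like -XX:MaxGCPauseMillis. Same trade, different spelling.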
BTW, the point of an optimising compiler is to make code run faster, not to reduce memory usage. Go seems to impose something like a 3x overhead vs C; at least, that was the performance hit reported when the Go compiler itself was converted from C to Go (which, as I understand it, was done with an automatic translation tool). The usual observed overhead of Java vs C is somewhere between 0 and 0.5x. The difference presumably comes down to what the compilers can do: Go's compiler wasn't even using SSA form until recently, so it's likely missing a lot of advanced optimisations.
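Overhead numbers like these are anecdotes, of course; if you care, measure your own hot paths. A throwaway Go benchmark like the sketch below (sum64 is a made-up loop, only there to show the mechanics) tells you more about your own code than any of these figures:

    // Minimal, illustrative benchmark; sum64 is a made-up hot loop, not a
    // measurement of the overheads discussed above.
    package bench

    import "testing"

    var sink int64 // keeps the compiler from optimising the call away

    func sum64(xs []int64) int64 {
        var total int64
        for _, x := range xs {
            total += x
        }
        return total
    }

    func BenchmarkSum64(b *testing.B) {
        xs := make([]int64, 1<<16)
        for i := range xs {
            xs[i] = int64(i)
        }
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            sink = sum64(xs)
        }
    }

Run it with "go test -bench=. -benchmem", and "go build -gcflags=-S" will dump the assembly the compiler actually emitted, if you want to see what the new SSA backend is (or isn't) doing with your code.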
tl;dr - I have seen no evidence that the Go developers have any unique insight into, or solutions for, the problem of building managed language runtimes. They don't seem to be fundamentally smarter or better than the JVM or .NET teams. That's why I think Go users will eventually want to migrate: those other teams have been doing this a lot longer than the Go guys have.