Hacker News

Transfer learning is a thing. But the issue with the gap is that the datasets for "applying X" aren't easy to come by.
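For readers unfamiliar with the mechanics: the usual transfer-learning move is to freeze a pretrained backbone and train only a small new head on the target task. A minimal PyTorch sketch (the tiny `backbone` here is a hypothetical stand-in for a real pretrained model):

```python
import torch.nn as nn

# Stand-in for a pretrained feature extractor (in practice, a real pretrained model).
backbone = nn.Sequential(nn.Linear(16, 8), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad = False  # freeze: keep the pretrained weights fixed

# New task-specific layer, trained from scratch on the "applying X" data you do have.
head = nn.Linear(8, 2)
model = nn.Sequential(backbone, head)

# Only the head's parameters remain trainable.
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
```

The catch the parent comment points at: this only works if you have labeled examples of the target task to train the head on, which is exactly the data that's hard to come by.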


There is an awful lot of "looking for my keys under the street light" going around these days. I've seen a bunch of projects proposed that are either based on existing data (but have no useful application of that data) or have a specific application (but lack the data and evaluation required to perform that task). It doesn't matter how good your data is if no one has any use for things like it, and it doesn't matter how neat your application would be if the data doesn't match.

I'm including things like RL metrics as "data" here, for lack of a better umbrella term. Still, the number of proposed projects I've seen that treated ongoing evaluation of actual effectiveness as a distraction from the more important task of having expensive engineers turn expensive servers into expensive heatsinks is maddening.


The importance of having good metrics cannot be overstated.

On the "applying X" problem - this almost feels to me like another argument against fine-tuning? Because Applying seems to be a surprisingly broad skill, and frontier-lab AIs are getting good at Applying in a broad fashion.



