
The closed-form solution to the L1 regularization problem (soft thresholding) is actually a shifted form of the classical ReLU nonlinearity used in deep learning. I'm not sure whether similar results hold for other nonlinearities, but this gave me good intuition for what thresholding is doing mathematically!
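
For anyone who wants to see it concretely, here's a rough NumPy sketch (function names are mine) of the soft-thresholding operator written in terms of ReLU: it zeroes out small entries and shrinks the rest toward zero by the regularization strength.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def soft_threshold(x, lam):
        # Proximal operator of lam * ||.||_1:
        # sign(x) * max(|x| - lam, 0), i.e. a ReLU applied to the magnitude.
        return np.sign(x) * relu(np.abs(x) - lam)

    x = np.array([-2.0, -0.5, 0.0, 0.3, 1.5])
    print(soft_threshold(x, 1.0))  # entries with |x| <= 1 become 0, others shrink by 1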

