Hacker News

It's obvious, no? For double descent: the network with a billion parameters is so large that it learns to simulate a neural network itself, which first trains itself on the test data set and then produces the final overfitted values??


