
> The score reported uses a minor prompt addition: "You should use tools as much as possible, ideally more than 100 times. You should also implement your own tests first before attempting the problem."

I'm not sure the SWE-bench score can be compared like-for-like with OpenAI's scores because of this.



https://en.wikipedia.org/wiki/Goodhart%27s_law: "When a measure becomes a target, it ceases to be a good measure."

I'm also curious what results we would get if the SWE-bench maintainers came up with a new set of 500 problems to run all these models against, to guard against overfitting.



