
>People don’t make better decisions when given more data, so why do we assume A.I. will?

How much of this is just because it says something people do not want to hear, or because there are incentives not to consider it?



>People don’t make better decisions when given more data, so why do we assume A.I. will?

It's well recognized that many machine learning systems today need very large amounts of training data, far more than humans facing the same task. That's a property of the current brute-force approaches, where you often go from no prior knowledge to a specific classification in one step. This often works better than previous approaches involving feature extraction as an intermediate step, so it gets used.

This is probably an intermediate phase until someone has the next big idea in AI.


Every single adult person has a 20+ year learning lead on any machine, so it's a bit unfair to say humans can do the same task with less data.

And computers are already better at tasks involving a lot of math, which is the main reason they've become commonplace.


Yep, a 20+ year learning lead built on 10,000+ years of wisdom taught from generation to generation. People take both things for granted when comparing humans with "AIs".


I recommend consulting people who have seen above average amounts of relevant data if you have a medical, legal, or engineering problem.





