
"In the process it has automagically determined properties of the image which allow it to perform the classification."

(Emphasis mine.) But that's the point; it may be "auto", but if you understand how NNs work it's not magic. It's not even all that hard to understand, considered broadly, and once you understand how they work it is easy, for instance, to construct cases where they fall flat....
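
A concrete example of a case where they fall flat: a single-layer perceptron can't learn XOR, because XOR isn't linearly separable. A minimal sketch in Python/numpy (the training loop and epoch count are just the textbook perceptron, purely illustrative):

    import numpy as np

    # XOR: not linearly separable, so a single linear threshold unit can't learn it.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])

    w, b = np.zeros(2), 0.0
    for epoch in range(100):
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0)
            # Standard perceptron update rule.
            w += (yi - pred) * xi
            b += (yi - pred)

    preds = (X @ w + b > 0).astype(int)
    print(preds, "vs", y)  # no linear boundary gets all four right

However long you train it, at least one of the four inputs is always misclassified; you need a hidden layer to get XOR.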

"So it has in effect determined how to solve a problem without your input."

... and it's less "auto" than you think. It figured out how to solve a problem based on your input of sample cases. And there's a certain amount of art involved in selecting and herding your sample cases, so regrettably you can't discard this part, either. Just flinging everything you've got at the NN is not going to produce good results.

If you don't understand NNs, you are unlikely to get good results by just flinging data at them; if you do get good results, it's probably because you have a problem that could equally well have been solved by even simpler techniques. They're really not magic.
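
A sketch of the "just fling data at it" failure mode (assuming scikit-learn; the dataset here is made up for illustration): with a heavily imbalanced, uncurated training set, a small network can score ~99% accuracy while being useless, simply by always predicting the majority class.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def make_set(n=1000, positives=10):
        # 990 negatives, 10 positives, and no real signal in the features --
        # a stand-in for a badly curated training set.
        X = rng.normal(size=(n, 5))
        y = np.zeros(n, dtype=int)
        y[:positives] = 1
        return X, y

    X_train, y_train = make_set()
    X_test, y_test = make_set()

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500)
    clf.fit(X_train, y_train)

    print(clf.score(X_test, y_test))         # ~0.99: looks great...
    print(clf.predict(X_test)[y_test == 1])  # ...but the positives are likely all missed

The accuracy number looks impressive right up until you notice it never finds a positive case, which is exactly the herding-your-sample-cases problem.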



It's classic emergent behavior. You may understand how the algorithm works, and you may even be able to step through it and see how each neuron affects the whole, but that doesn't mean you know why the combination of all of them arrives at the correct answer.

The classic example is facial recognition. Training a neural network for facial recognition results in lots of neurons, each contributing a very small part of the whole; only when all (or most) of them are involved is the answer correct.
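
One way to see this distributed contribution (a rough sketch, assuming scikit-learn, with its digits dataset standing in for face data): ablate hidden units one at a time and watch how little any single one matters.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    clf.fit(X, y)
    base = clf.score(X, y)

    # Zero out one hidden unit's output weights at a time and re-score.
    drops = []
    for i in range(32):
        saved = clf.coefs_[1][i].copy()
        clf.coefs_[1][i] = 0
        drops.append(base - clf.score(X, y))
        clf.coefs_[1][i] = saved

    print("max single-unit accuracy drop:", max(drops))

Typically no single ablation moves accuracy much; the answer comes from all of the units together.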

To most people, this (emergent behavior) is "magic".


But it's not emergence. Using Gaussian elimination to solve a gigantic system of equations isn't emergence either, even if there are a few too many numbers to carry in your head at once. (As a matter of fact, solving systems of linear equations is part of the RBM training algo.)
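
For reference, Gaussian elimination is about this much code (a minimal sketch with partial pivoting; np.linalg.solve is the production version):

    import numpy as np

    def gaussian_solve(A, b):
        # Solve Ax = b by Gaussian elimination with partial pivoting.
        A, b = A.astype(float), b.astype(float)
        n = len(b)
        for k in range(n):
            # Swap in the row with the largest entry in column k.
            p = k + np.argmax(np.abs(A[k:, k]))
            A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
            # Eliminate column k below the pivot.
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        # Back-substitution.
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.array([[2., 1.], [1., 3.]])
    b = np.array([3., 5.])
    print(gaussian_solve(A, b))  # same answer as np.linalg.solve(A, b)

Run it on a 10,000-variable system and no human can follow the arithmetic, but nobody calls the result emergent.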

And even if it were emergence, that wouldn't automatically imply that we don't understand it or that it's "magic". The famous Boids flocking simulation is a classic example of emergence, and it's not very mysterious. Yes, large-scale behaviour emerges from simple rules, and it's amazing that this happens, but it doesn't put up a barrier to understanding, analyzing, and modelling that large-scale behaviour. Crystallisation is emergence too, and again we model it with a bunch of very hard combinatorial math.
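
Boids really is that simple, for what it's worth. A minimal update step (the rule weights here are made-up illustrative constants, not the canonical ones):

    import numpy as np

    def boids_step(pos, vel, radius=1.0, dt=0.1):
        # One step of the three classic rules: cohesion, alignment, separation.
        new_vel = vel.copy()
        for i in range(len(pos)):
            d = np.linalg.norm(pos - pos[i], axis=1)
            nbrs = (d < radius) & (d > 0)
            if not nbrs.any():
                continue
            cohesion   = pos[nbrs].mean(axis=0) - pos[i]   # steer toward neighbours' centre
            alignment  = vel[nbrs].mean(axis=0) - vel[i]   # match neighbours' heading
            separation = (pos[i] - pos[nbrs]).sum(axis=0)  # push away from close neighbours
            new_vel[i] += dt * (0.5 * cohesion + 0.3 * alignment + 0.8 * separation)
        return pos + dt * new_vel, new_vel

    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 5, size=(30, 2))
    vel = rng.normal(size=(30, 2)) * 0.1
    for _ in range(100):
        pos, vel = boids_step(pos, vel)  # flocks form; no rule says "flock"

Three local rules, no global coordination, and flocking falls out; and yet we can still analyze and model the flock's behaviour just fine.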

But in this case, neural networks are not an example of emergence. They are really built in a fairly straightforward manner from components that we understand, and the whole performs as the sum of the components, like gears in a big machine.


Understanding NNs is not the same as understanding "how they do it". You may have a good understanding of how the algorithms work, but after training a moderately sized network, it's not very easy to know what _exactly_ it does to solve your problem.

I think that's what chii was trying to point out; not to say that devising them and training them is incomprehensible.
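
To make the "what exactly it does" point concrete (a toy sketch, assuming scikit-learn): even for a tiny network that solves XOR perfectly, the learned weights don't read as "exclusive or" in any obvious way.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])

    clf = MLPClassifier(hidden_layer_sizes=(4,), activation='tanh',
                        solver='lbfgs', random_state=0)
    clf.fit(X, y)
    print(clf.predict(X))  # typically [0 1 1 0] -- it has solved XOR...
    print(clf.coefs_)      # ...but these numbers don't "explain" how

You can read every weight and still not have an explanation a human would accept as "how it does it".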



