
I think at some point there will be a paradigm shift toward a different architecture, in the same way transformers were a shift for language (or, more generally, pattern) processing.

You will no longer need a model that has been exposed to enormous amounts of training data to be good; instead you will have on-the-fly learning. A human doesn't need to hear the same piece of information over and over again: we can be told once and, if it's important, contextualize it. The same will happen with models. You will have a model trained on the core skill of contextualizing new data, and between executions it will have a persistent "memory".

You may start to see things like Hebbian learning come back into play in some form.
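For anyone unfamiliar: the classic Hebbian rule is just "neurons that fire together wire together" — weights strengthen in proportion to correlated pre- and post-synaptic activity (Δw = η · y · xᵀ). A minimal sketch of that update (not any specific proposed architecture, just the textbook rule):

```python
import numpy as np

def hebbian_update(w, x, eta=0.1):
    """One Hebbian step: strengthen weights where pre- and
    post-synaptic activity are correlated (delta_w = eta * y * x^T)."""
    y = w @ x                      # post-synaptic activations
    return w + eta * np.outer(y, x)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 8))   # 8 inputs -> 4 outputs
x = rng.normal(size=8)                    # a single input pattern
w_new = hebbian_update(w, x)
```

The appeal in this context is that the update is purely local and online — no backprop through a frozen network, just weights changing as inputs arrive — which is exactly the "learn from one exposure, between executions" property being described.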


