
That’s not a flaw in parallelism. The mathematical reality remains that independent operations scale better than sequential ones. Even if we were stuck with current CPU designs, transformers would have won out over RNNs.
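To make the contrast concrete, here is a minimal NumPy sketch (my own illustration, not from the thread): an RNN-style recurrence has to run its time steps one after another because each hidden state depends on the previous one, while attention-style mixing computes every position in one batched matrix product.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 4                      # sequence length, hidden size
x = rng.standard_normal((T, d))  # input sequence

# RNN-style recurrence: step t depends on h from step t-1,
# so the T steps are inherently sequential.
W = rng.standard_normal((d, d)) * 0.1
h = np.zeros(d)
for t in range(T):
    h = np.tanh(x[t] + h @ W)

# Attention-style mixing: every position attends to every other
# position in one matrix product -- all T positions at once.
scores = x @ x.T / np.sqrt(d)                   # (T, T) similarities
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)   # softmax over keys
out = weights @ x                               # (T, d), no step-by-step chain
```

The loop has a serial dependency chain of length T no matter how much hardware you have; the matrix products parallelize across all positions, which is the scaling argument in the comment above.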

Unless you were pushing back on my phrase "all kinds": I meant it the way someone might say "there are all kinds of animals in the forest", i.e. simply "lots of types".



I was pushing back against "all kinds". The reason is that I've been seeing a number of inherently parallel architectures, but existing GPUs don't like some aspect of them (usually the memory access pattern).
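A rough CPU-side analogy of the memory-access issue (my own sketch, not from the thread): the two reductions below touch the same data, but one reads contiguous addresses while the other gathers through a permuted index array, which is the kind of scattered pattern GPUs handle poorly because the loads can't be coalesced.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 20
a = np.arange(n, dtype=np.float64)

# Contiguous access: neighbouring lanes read neighbouring addresses
# (the coalescing-friendly case on a GPU).
contiguous = a[: n // 2].sum()

# Scattered access: the same elements, but the addresses jump around
# memory via a random permutation -- the pattern GPUs tend to dislike.
idx = rng.permutation(n // 2)
scattered = a[idx].sum()
```

Both sums are equal; only the access pattern differs, and on a GPU that difference alone can dominate the runtime of an otherwise embarrassingly parallel architecture.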


yeah, bad writing on my part.



