
I suspect the argument is that both AI and a compiler enable building software at a higher level of abstraction.


Abstraction is only useful when it involves a consistent mapping between A and B; LLMs don't provide that.

In most contexts you can abstract the earth as a sphere and it works fine, e.g., aligning solar panels. That holds until you enter the realm of precision where treating the earth as a sphere utterly fails. There's no realistic set of tests you can write where an unsupervised LLM's output can be trusted to generate a complex system that works, if that output is constantly being recreated. Actual compilers don't have that issue.
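The "consistent mapping" point above can be sketched as a property test. This is a toy illustration, not a real compiler or LLM: `toy_compile` stands in for a deterministic translator (same input always yields the same output, so its output hash is stable), while `toy_generate` stands in for a sampled generator whose output varies run to run.

```python
import hashlib
import random

def toy_compile(source: str) -> bytes:
    """Toy stand-in for a compiler: a pure function of its input."""
    return source.upper().encode()

def toy_generate(source: str) -> bytes:
    """Toy stand-in for a sampled LLM: output varies between calls."""
    return (source + str(random.random())).encode()

src = "x = 1"

# A deterministic mapping lets you verify once and trust the result:
h1 = hashlib.sha256(toy_compile(src)).hexdigest()
h2 = hashlib.sha256(toy_compile(src)).hexdigest()
assert h1 == h2  # same A always maps to the same B

# A sampled generator gives no such guarantee; any test you ran
# yesterday validated yesterday's output, not today's regeneration.
out1 = toy_generate(src)
out2 = toy_generate(src)
print(out1 == out2)  # almost certainly False
```

The asymmetry is that a compiler's output only needs to be validated once per (compiler, input) pair, while regenerated output must be re-validated every time.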



