I consider it a tool. A tool multiplies performance. Though from the research it appears that the multiplication is nonlinear, ranging from "a complete greenhorn builds an app that would otherwise take them weeks just to learn the skills for", through "low-double-digit improvements just from saving time writing boilerplate and looking up common problems in libraries", all the way to "the time wasted trying to make the LLM do it exceeds just doing it yourself".
If you can't use your tools properly (i.e., in this case, keep backups), you will hurt yourself. And trying to blame it on tools that come with NO guarantee in the first place doesn't help.
> However, my case reveals a fundamental weakness: these tools were not developed with academic standards of reliability and accountability in mind.
Yeah, the article is ridiculous. I'm not trying to defend it but rather to extrapolate, in particular on the "bro, you are not working by chatting with ChatGPT" point.
If we consider it a tool, then why doesn't using it count as work?
And to be clear I’m not even sure what I think. I’m throwing the question out there because I’m curious about what other devs think out here.
Just the first thing that comes to mind: ChatGPT can act as an enhanced Jupyter notebook where you specify tasks in English. This isn't an analogy; they literally run Jupyter kernels in the backend with chat as the frontend.
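To make the "kernel backend, chat frontend" point concrete, here is a minimal sketch of driving a Jupyter kernel from code using the public jupyter_client package. This is obviously not OpenAI's implementation, just an illustration of the architecture being described; it assumes a local ipykernel ("python3") is installed.

```python
# Rough sketch (not OpenAI's code): drive a local Jupyter kernel
# programmatically, with this script standing in for the chat frontend.
# Assumes jupyter_client and ipykernel are installed.
from jupyter_client.manager import start_new_kernel

km, kc = start_new_kernel(kernel_name="python3")  # backend kernel
try:
    # In the ChatGPT case, the English request gets turned into a cell
    # like this before being sent to the kernel for execution.
    kc.execute("import statistics; print(statistics.mean([1, 2, 3, 4]))")

    # Read results back off the IOPub channel, like a notebook UI would.
    while True:
        msg = kc.get_iopub_msg(timeout=30)
        if msg["msg_type"] == "stream":
            print(msg["content"]["text"], end="")  # -> 2.5
        elif (msg["msg_type"] == "status"
              and msg["content"]["execution_state"] == "idle"):
            break
finally:
    kc.stop_channels()
    km.shutdown_kernel()
```

ChatGPT presumably layers sandboxing, file persistence, and the English-to-code step on top, but the plumbing is the same shape.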
There's also canvas mode for iterating on actual documents, and the search/retrieval features make it a genuinely useful research tool independent of generation.
And this is me defending OpenAI, which I've stopped using. Other systems are more capable.