I see it the same way, but LLMs, like any tool, depend on how they are used. I mostly use them:
1. As an additional layer of checks, to find potential errors in things I have been doing (texts, code, ideas). This works well for the most part.
2. As a mental sparring partner where I explore certain ideas, with me guiding the LLM instead of the other way around. This is much more about exploring thoughts than about the LLM producing anything I actually use anywhere.
3. As an additional tool to explore new bodies of text and code
But of course it makes me wonder how people who don't yet know their shit will fare. I've already had an LLM tell a student an electrical falsehood that would very likely have caused a fire: it got the math wrong in exactly the opposite way of how it works in reality.