I generally agree, but it's worth considering that there are two "LLMs" we might be "convincing".
The first is the real LLM program, which chooses text to append to a document: a dream-machine with no goal beyond continuing the themes of its training data. It has no convictions, but it can be somewhat steered by any words that appear in the dream, with no regard for how those words got there.
The second is a fictional character within the document that happens to be named after the real-world one. The character displays "convictions" through dialogue and stage direction that incrementally fit the story so far. In some cases it can be "convinced" of something when that fits its character; in other cases, its characterization simply drifts with the story.
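To make that first point concrete, here's a minimal sketch of the appending loop (assuming the Hugging Face transformers library, with "gpt2" standing in for whatever model you like). Note that nothing in the loop tracks where any token came from; prompt, injected text, and the model's own prior output are one flat sequence.

```python
# Minimal sketch of the "dream-machine" loop: the model sees one flat token
# sequence and samples a continuation. System prompt, user input, and the
# model's earlier output are indistinguishable once they're in the document.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # "gpt2" is illustrative
model = AutoModelForCausalLM.from_pretrained("gpt2")

document = "Stage direction: the assistant, a helpful character, replies."
ids = tokenizer(document, return_tensors="pt").input_ids

for _ in range(40):
    logits = model(ids).logits[0, -1]            # scores for the next token only
    probs = torch.softmax(logits / 0.8, dim=-1)  # temperature-scaled sampling
    next_id = torch.multinomial(probs, 1)
    ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)  # append, continue the dream

print(tokenizer.decode(ids[0]))
```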
This matches my experience of how LLMs seem to work. The simulated fictional character is, I think, key to understanding why they behave the way they do, and not understanding this is behind a lot of people's frustration with them.