why would that be a joke? there's a ton of Reddit comments in the training data, and the output is of similar quality. LLMs are literally outputting average Reddit comments.
I have heard similar things, but I think that's an exaggeration. When I tell GPT o3 or o4-high to assume a professional air, it stops acting like a meat-based AI on r/politics; specifically, it stops making inane assumptions about the situation and becomes useful again.
For example, a colleague sent me a question that made no sense, and I was trying to understand it. After I fed the question to o3, it aggressively told me that I had made a major mistake in a quote and needed to make major changes. (That would have been fine if it were what the colleague had actually said, but it wasn't.) In reality, the colleague had misunderstood something about the scope of the project, and GPT had latched onto the other person's opinion as the "voice of reason" and simply restated what it thought he was saying, only more forcefully.
I changed its instructions to "Be direct, but polite, professional and helpful. Make an effort to understand the assumptions underlying your own points and the assumptions made by the user. Offer outside-of-the-box thinking as well if you are being too generic." The aggro was gone immediately; instead it actually tried to clarify what my colleague was saying and became useful again.
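If anyone wants to do the same thing via the API rather than the custom-instructions UI, the usual pattern is to prepend the instruction as a system message. A minimal sketch (the prompt text is mine from above; the helper name and the example question are just illustrative, and the actual API call is left out):

```python
# Sketch: wiring a tone instruction in as a system message.
# The helper name and example question are hypothetical.

SYSTEM_PROMPT = (
    "Be direct, but polite, professional and helpful. "
    "Make an effort to understand the assumptions underlying your own points "
    "and the assumptions made by the user. "
    "Offer outside-of-the-box thinking as well if you are being too generic."
)

def build_messages(user_question: str) -> list[dict]:
    """Prepend the custom instruction as a system message."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("A colleague asked me this and I can't parse what they mean: ...")
# messages[0] carries the tone instruction; pass the whole list to your
# chat-completion call (e.g. client.chat.completions.create(...)).
```

The point is that the instruction rides along with every request, so the model doesn't fall back into the default register mid-conversation.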
I agree with those who say the vanilla version is sycophantic, but the plain-talk version has far too many bad habits picked up from the wrong crowd. It's a bit like Monday: lots of aggro, little introspection of its own assumptions.