Strongly disagree. If you read enough of it, the patterns in AI text become all too familiar. Take this paragraph for example:
> Here’s what surprised me: the practices that made my exit smooth weren’t “exit strategies.” They were professional habits I should have built years earlier—habits that made work better even when I was staying.
The “it’s not x—it’s y” construction, the dashes, the Q&A-style text from the parent comment, and the overall cadence were too hard to look past.
So as a counterpoint to the complaints being tedious: I’d say they’re useful because they preempt the realization that I’m wasting time reading AI output.
Regardless, people are going to start naturally writing like current LLM output, because that's a lot of what they're reading.
A tech-doc writer once mentioned that she'd been reading Hunter S. Thompson, and that his style was immediately bleeding into her technical writing.
So I tried reading some HST myself, and... some open-source code documentation I was writing immediately got a little punchy.
> So as a counterpoint to the complaints being tedious: I’d say they’re useful because they preempt the realization that I’m wasting time reading AI output.
Good point. And if it's actually genuine original text from someone whose style was merely tainted by reading lots of "AI" slop, I guess that might be a reason to prefer reading someone who has a healthier intellectual diet.
AI writing often leads to word inflation, so getting the original, more concise version is helpful IMO. Hiding the AI use is the annoying part; marking that you used AI to help and publishing a 'source code' version would, I think, go over much better. If a person is deceptive and dishonest about something so obvious, how can you trust anything else they say?
It also leads to slop spam content. Writing it yourself is a form of anti-spam. I think tools like Grammarly help strike a balance between 'AI slop machine' and 'help with my writing'.
And because they're so low effort, posting them feels essentially like posting links to a Google search. Higher noise, lower signal.
It has a bunch of human imperfections, and I love that: the lowercase lists, the inconsistent casing for similarly structured content throughout, the grammar mistakes, the overall structure. This article has a totally different feel compared to the newest ones. When you say it’s very similar, what are you picking up on? They feel like night and day from my perspective.
LLMs got all these patterns from humans in the first place*. They're common in LLM output because they're common in human output. Therefore this argument isn't very reliable.
If P is the probability that a text containing these patterns was generated by an LLM, then yes, P > 0, but readers who are (understandably) tired of generated comments are overestimating P: by Bayes' rule, P depends not only on how often LLMs produce the patterns but also on how often humans do, and on the base rate of LLM comments in the first place.
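To make that concrete, here's a toy Bayes'-rule calculation in Python; all three input numbers are made up purely for illustration, not measured from anywhere:

    # Toy Bayes'-rule calculation. All three inputs are hypothetical,
    # illustrative numbers, not measurements.
    base_rate = 0.10          # assume 10% of comments are LLM-generated
    p_patterns_llm = 0.60     # assume 60% of LLM comments show the patterns
    p_patterns_human = 0.15   # assume 15% of human comments show them too

    # P(patterns) = P(patterns|LLM)P(LLM) + P(patterns|human)P(human)
    p_patterns = p_patterns_llm * base_rate + p_patterns_human * (1 - base_rate)

    # P(LLM|patterns) = P(patterns|LLM)P(LLM) / P(patterns)
    p_llm_given_patterns = p_patterns_llm * base_rate / p_patterns
    print(f"P(LLM | patterns) ~= {p_llm_given_patterns:.2f}")  # prints 0.31

Even with the patterns assumed to be four times more common in LLM output than in human output, the posterior here is only about 0.31, well short of certainty.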
* Edit: I see now that the GP comment already said this.
Fair enough. That style of writing sucks. I read the rest of their blog post and can now see they do it repeatedly. Is it OK if I dismiss the article for its human but nonetheless bad writing?
@dang, these complaints about AI are more tedious than any other complaints on the website. Might be time to add something to the "guidelines".