Strongly disagree. Read enough of it and the patterns in AI text become unmistakable. Take this paragraph for example:
> Here’s what surprised me: the practices that made my exit smooth weren’t “exit strategies.” They were professional habits I should have built years earlier—habits that made work better even when I was staying.
The “it’s not x—it’s y” construction, the em dashes, the Q&A-style text from the parent comment, and the overall cadence were too hard to look past.
So for a counterpoint about the complaints being tedious, I’d say they are nice to preempt the realization that I’m wasting time reading ai output.
Regardless, people are going to start writing naturally like current LLM output, because that's a lot of what they are reading.
A tech doc writer once mentioned how she'd been reading Hunter S. Thompson, and that it was immediately bleeding into her technical writing.
So I tried reading some HST myself, and... some open source code documentation immediately got a little punchy.
> So for a counterpoint about the complaints being tedious, I’d say they are nice to preempt the realization that I’m wasting time reading ai output.
Good point. And if it's actually genuine original text from someone whose style was merely tainted by reading lots of "AI" slop, I guess that might be a reason to prefer reading someone who has a healthier intellectual diet.
AI writing often inflates word count, so getting the original, more concise version is helpful IMO. Hiding it is the annoying part; disclosing that you used AI and sharing a 'source code' version would go over much better. If a person is deceptive and dishonest about something so obvious, how can you trust anything else they say?
It also leads to slop spam content. Writing it yourself is a form of anti-spam. I think tools like Grammarly strike a balance between 'AI slop machine' and 'help with my writing'.
And because they are so low effort, posting them is essentially like posting links to a Google search. Higher noise, lower signal.
It has a bunch of human imperfections, and I love that. The lowercase lists and inconsistent casing for similarly structured content throughout, the grammar mistakes, and overall structure. This article has a totally different feel compared to the newest ones. When you say it’s very similar, what are you picking up on? They feel like night and day from my perspective.
LLMs got all these patterns from humans in the first place*. They're common in LLM output because they're common in human output. Therefore this argument isn't very reliable.
If P is the probability that a text containing these patterns was generated by an LLM, then yes, P > 0, but readers who are (understandably) tired of generated comments are overestimating P.
* Edit: I see now that the GP comment already said this.
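The base-rate point can be made concrete with Bayes' theorem. A minimal sketch with made-up, purely illustrative probabilities (nobody knows the real values):

```python
# Bayes' theorem:
#   P(LLM | patterns) = P(patterns | LLM) * P(LLM) / P(patterns)
# All numbers below are hypothetical, for illustration only.

def posterior_llm(prior_llm, p_patterns_given_llm, p_patterns_given_human):
    """Probability a text is LLM-generated given it shows the patterns."""
    evidence = (p_patterns_given_llm * prior_llm
                + p_patterns_given_human * (1 - prior_llm))
    return p_patterns_given_llm * prior_llm / evidence

# Even if LLM text shows the patterns twice as often as human text,
# the posterior stays well short of certainty.
p = posterior_llm(prior_llm=0.3,
                  p_patterns_given_llm=0.8,
                  p_patterns_given_human=0.4)
print(round(p, 2))  # 0.46
```

Under these assumed numbers, seeing the patterns raises the probability from 0.3 to about 0.46, i.e. P > prior but nowhere near 1, which is the overestimation the comment describes.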
Fair enough. That style of writing sucks. I read the rest of their blog post and can see now they do it repeatedly. Is it OK if I dismiss the article for its human but otherwise bad writing?
I manually type en and em dashes: on macOS with the Option key, on Linux with a Compose key, and even on iOS by long-pressing the hyphen key.
I had to learn the difference between hyphen, en-dash, and em-dash when typesetting scientific papers and theses in LaTeX, and after that it just doesn’t feel right not to use them "properly".
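For reference, the three characters really are distinct Unicode code points, which a quick snippet (mine, not from the original comment) can show:

```python
# Hyphen-minus, en dash, and em dash are three distinct code points.
chars = {
    "hyphen-minus": "-",   # U+002D, the plain keyboard key
    "en dash": "\u2013",   # used for ranges, e.g. "pages 10\u201312"
    "em dash": "\u2014",   # used for parenthetical breaks
}
for name, ch in chars.items():
    print(f"{name}: {ch!r} U+{ord(ch):04X}")
```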
There are em dashes: "I’d been talking openly about my struggles for over a year—lack of motivation, missing promotion, compensation not matching my contributions."
Please cease and desist from labeling things as AI. If you want to criticize the material, assume good faith and criticize it as if it is written by a human.