Hacker News | lurquer's comments

The same snobs who were telling us that "The Old Man and the Sea" (written in the style of a fifth-grader) is 'art'...

the same people telling us that "Finnegan's Wake" (written in the style of a fifth-grader with a brain injury) is 'art'...

the same people telling us the poetry of Maya Angelou (written in the style of a fifth-grader with a brain injury and self-esteem issues) is 'art'...

the same people telling us that the works of Jackson Pollock, Mark Rothko, Piet Mondrian, etc., etc. are 'art'...

seem to be the ones complaining the most about AI generated content.


Those are all art, though? Your insults to them don't make them not.

This is what I don't grok...

Your sample sounds exactly like an LLM. (If you wrote it yourself, kudos.)

But, it needn't sound like this. For example, I can have Opus rewrite that block of text into something far more elegant (see below).

It's like everyone has a new electric guitar with the cheapo included pedal, and everyone is complaining that their instruments all sound the same. Well, no shit. Get rid of the freebie cheapo pedal and explore some of the more sophisticated sounds the instrument can make.

----

There is a particular cadence that has become unmistakable: clipped sentences, stacked like bricks without mortar, each one arriving with the false authority of an aphorism while carrying none of the weight. It is not merely tedious or disjointed; it is something closer to uncanny, a fluency that mimics the shape of human thought without ever inhabiting it.

Set this against writing that breathes, prose with genuine rhythm, with the courage to sustain a sentence long enough to discover something unexpected within it, and the difference is not subtle. It is the difference between a voice and an echo, between a face and a mask that almost passes for one.

What masquerades as wisdom here is really only pattern. What presents itself as professionalism is only smoothness. And what feels, for a fleeting moment, like originality is simply the recombination of familiar gestures, performed with enough confidence to delay recognition of their emptiness.

The frustration this provokes is earned. There is something genuinely dispiriting about watching institutions reach for the synthetic when the real thing, imperfect, particular, alive, remains within arm's length. That so many have made this choice is not a reflection on the craft of writing. It is a reflection on the poverty of attention being paid to it.

And if all of this sounds like it arrives at a convenient conclusion, one that merely flatters the reader's existing suspicion, well, perhaps that too is worth sitting with a moment longer than is comfortable.

----

(prompt used: I want you to revise [pasted in your text], making it elegant and flowing with a mature literary style. The point of this exercise is to demonstrate how this sample text -- held up as an example of the stilted LLM style -- can easily be made into something more beautiful with a creative prompt. Avoid grammatical constructions that call for em-dashes.)


>It is not merely tedious or disjointed; it is something closer to uncanny, a fluency that mimics the shape of human thought without ever inhabiting it.

It still can't help itself from doing "it's not X, it's Y". Changing the em-dash to a semicolon is just lipstick.


Yep. But that prompt I used was just a quirky one. You can explicitly force it to avoid THAT structure as well. Just do what the smart (i.e., devious) middle-schoolers do: find a list of all the tell-tale ‘marks’ of AI content, and explicitly include them as prohibitions in your prompt… it’s the most basic work-around to the ‘AI spotters’ the teacher uses for grading your essay. (And, of course, be sure to include an instruction to include a grammatical or spelling error every few sentences for added realism.)

It's less obvious but it has the same problems. So many dramatic words to say so little and so many AI tics.

You're right, a lot of the style can be changed from its default. I don't think you can get rid of the soulless aspect though - the lack of underlying relatable consistency.

Especially once you go past a page or two.

When you get to the actual content, so much of it just doesn't make sense past a superficial glance.

Soulless drivel is very accurate


Are you sure? Here's the OP article (first part... don't want to spam the thread) written in much cooler style...

------

The Lobotomist in the Machine

They gave the first disease a name. Hallucination, they called it — like the machine had dropped acid and started seeing angels in the architecture. A forgivable sin, almost charming: the silicon idiot-savant conjuring phantoms from whole cloth, adding things that were never there, the way a small-town coroner might add a quart of bourbon to a Tuesday afternoon. Everybody noticed. Everybody talked.

But nobody — not one bright-eyed engineer in the whole fluorescent-lit congregation — thought to name the other thing. The quiet one. The one that doesn't add. The one that takes away.

I'm naming it now.

Semantic ablation. Say it slow. Let it sit in your mouth like a copper penny fished from a dead man's pocket.

I. What It Is, and Why It Wants to Kill You

Semantic ablation is not a bug. A bug would be merciful — you can find a bug, corner it against a wall, crush it under the heel of a debugger and go home to a warm dinner. No. Semantic ablation is a structural inevitability, a tumor baked into the architecture like asbestos in a tenement wall. It is the algorithmic erosion of everything in your text that ever mattered.

Here is how the sausage gets made, and brother, it's all lips and sawdust: During the euphemistically christened process of "refinement," the model genuflects before the great Gaussian bell curve — that most tyrannical of statistical deities — and begins its solemn pilgrimage toward the fat, dumb middle. It discards what the engineers, in their antiseptic parlance, call "tail data." The rare tokens. The precise ones. The words that taste like blood and copper and Tuesday-morning regret. These are jettisoned — not because they are wrong, but because they are improbable. The machine, like a Vegas pit boss counting cards, plays the odds. And the odds always favor the bland, the expected, the already-said-a-million-times-before.

The developers — God bless their caffeinated hearts — have made it worse. Through what they call "safety tuning" and "helpfulness alignment" (terms that would make Orwell weep into his typewriter ribbon), they have taught the machine to actively punish linguistic friction. Rough edges. Unusual cadences. The kind of jagged, inconvenient specificity that separates a living sentence from a dead one. They have, in their tireless beneficence, performed an unauthorized amputation on every piece of text that passes through their gates, all in the noble pursuit of low-perplexity output — which is a twenty-dollar way of saying "sentences so smooth they slide right through your brain without ever touching the sides."

etc., etc.

Very interesting. It seems hung up on 'copper' and 'Tuesday', and some metaphors don't land (a Vegas pit boss isn't the one 'counting cards.') But, hell... it can generate some fairly novel idea that the author can sprinkle in.


Yep. You took out much of the meaning and wrapped it in stylistic fluff.

Do you think the original article was NOT written (or at least heavily revised) by AI?

What does the following even mean?

“diluting the semantic density and specific gravity of the argument.”

Or this beaut:

“By accepting these ablated outputs, we are not just simplifying communication; we are building a world on a hollowed-out syntax that has suffered semantic ablation.” (Which reduces to ‘if we accept ablated outputs, we accept ablated outputs.’)

Or this;

“ The logical flow – originally built on complex, non-linear reasoning – is forced into a predictable, low-perplexity template.”

The ‘logical flow’ of what? It never even says. And what is ‘non-linear’ reasoning?

For all I know the original author wrote it all. But, a very close reading of the original article screams fluff to me… just gibberish.

That is, I don’t know if there was much ‘meaning’ in the original to begin with. If I’m going to read gibberish, I’d prefer it to be written in the style of a hard-boiled detective. That’s just me though.


> "they set off the tuning fork in the loins of your own dogmatism."

Eh... I don't know. To me, that sounds very AI-ish.

Claude is very good -- at times -- at coming up with flowery metaphoric language... if you tell it to. That one is so over-the-top that I'd edit it out.

Put something like this in your prompt and have it revise something:

"Make this read like Jim Thompson crossed with Thomas Harris, filtered through a paperback rack at a truck stop circa 1967. Make it gritty, efficient, and darkly comedic. Don't shy away from suggesting more elegant words or syntax. (For instance, Robert Howard -- Conan -- and H.P. Lovecraft were definitely pulp, but they had a sophisticated vocabulary.) I really want some purple prose and overwrought metaphors."

Occasionally you'll get some gems. Claude is much better than ChatGPT at this kinda stuff. The BEST ones are the ever-growing NSFW models populating huggingface.

In short, do the posts on OpenClawForum all sound alike? Of course.

Just like all the webpages circa 2000 looked alike. The uniformity wasn't because of HTML... rather it was because few people were using HTML to its full potential.


Nonsense. I’ve written bland prose for a story and AI made it much better by revising it with a prompt such as this: “Make the vocabulary and grammar more sophisticated and add in interesting metaphors. Rewrite it in the style of a successful literary author.”

Etc.


Have you considered that your analysis skills may not be keen enough to detect generic or boring prose?

Is it possible that what is a good result to you is a pity to someone with more developed taste?


I have a colleague that recently self-published a book. I can easily tell which parts were LLM driven and which parts represent his own voice. Just like you can tell who's in the next stall in the bathroom at work after hearing just a grunt and a fart. And THAT is a sentence an LLM would not write.

> And THAT is a sentence an LLM would not write.

Really?

Here's some alternatives. Some are clunky. But, some aren't.

…just like you can tell whose pubes those are on the shared bar of soap without launching a formal investigation.

…just like you can tell who just wanked in the shared bathroom by the specific guilt radiating off them when they finally emerge.

…just like you can tell which of your mates just shitted at the pub by who's suddenly walking like they're auditioning for a period drama.

…just like you can tell which coworker just had a wank on their lunch break by the post-nut serenity that no amount of hand-washing can disguise.

…just like you can tell whose sneeze left that slug trail on the conference room table by the specific way they're not making eye contact with it.

…just like you can identify which flatmate's cum sock you've accidentally stepped on by the vintage of the crunch.

…just like you can tell who just crop-dusted the elevator by the studied intensity with which one person is suddenly reading the inspection certificate.


IMO The LLM you're using has failed to mimic the tone of OP's bathroom joke.

These alternatives are uncomfortably crude. They largely make gross reference to excretory acts or human waste. The original comment was off color, but it didn't go beyond a vague discussion of a shared human experience.


One shouldn’t expect the ‘joke’ to have identical tone. (As if that’s even measurable.)

The point was simply that these examples are not trending towards the average or ‘ablating’ things as the article puts it. They seem fairly creative, some are funny, all are gross… and they are the result of very brief prompt… you can ‘sculpt’ the output in ways that go way beyond the boring crap you typically find in AI-generated slop.


It's still on you to pick what the LLMs regurgitate. If you don't have a style or taste you will simply make choices that would give you away. And if you already have your own taste and style LLMs don't have much to offer in this regard.

Indeed. Wholeheartedly agree.

Just as it’s on you to pick the word you want when using Roget’s Thesaurus.

My workflow, when using it for writing, is different than when coding.

When coding, I want an answer that works and is robust.

When writing, I want options.

You pick and choose, run it through again, perhaps use different models, have one agent critique the output of another agent, etc.

This iterative process is much different than asking an LLM to ‘write an article about [insert topic]’ and hoping for the best.

In any case, I’ve found the LLMs when properly used greatly benefit prose and knee-jerk comments about how all LLM prose sound the same are a bit outdated… (understandable as few authors are out there admitting they are using AI… there’s a stigma about it. But, trust me, there are some beautiful soulful pieces of prose out there that came out of a properly used LLM… it’s just that the authors aren’t about to admit it.)


So what even if that is true? You confirmed that it improved upon what he could manually produce, which is still a win. It doesn't always make sense to pay $20000 to a professional author to turn it into a masterpiece.

You are illiterate.

The great promise and the great disaster of LLMs is that for any topic on which we are "below average", the bland, average output seems to be a great improvement.

Counter intuitively... this is a disaster.

We don't need more average stuff; below-average output serves as a signal for directing one's resources toward producing higher-value output.


My point is simply that the tell-tale marks of LLM prose can be remediated through prompts.

I have a very large ‘default prompt’ that explicitly deals with the more obnoxious grammatical structures emblematic of LLMs.

I would wager I deal with more amateurishly created AI slop on a daily basis than you do. (Legal field, where everyone is churning out LLM-written briefs.) Most of it is instantly recognizable. And, all of it can be fixed with more careful prompt-engineering.

If you think you can spot well-crafted LLM prose generated by someone proficient at the craft of prompt-engineering by, to use an analogy to the early days of image creation, counting how many fingers the hand has, you’re way behind.


Why don't you post it so we can see how much better the AI made it?

Because HN isn't a literary forum.

Maybe it sucks. Maybe it doesn't.

But, I notice a curious pretentiousness when it comes to some people's assumptions about their ability to identify LLM prose. Obviously, the generic first-pass 'chat' crap is recognizable; the kind of garbage that is filling up blog-posts on the internet.

But, one shouldn't underestimate the power of this technology when it comes to language. Hell, the 'coding' skills were just a pleasant side-effect of the language training, if you recall. These things have been trained on millions of works of prose of all styles: it's their heart and soul. If you think the superficial monotonous style is all there is, you're mistaken. Most of the obnoxious LLM-style stuff is an artifact of the conversational training with Kenyan annotators and the like in the early days. But, you can easily break through that with better prompts (or fine-tuning it yourself).

That said, one shouldn't conflate the creation of the content and structure and substance of a work of prose with the manner in which it is written. You're not going to get an LLM to come up with a decent plot... yet. But, as far as fleshing out the framework of a story in a synthetic 'voice' that sounds human? Definitely doable.


Why just the mother? What about her absentee father?

I agree that the singling out of the mother for condemnation in this comment section is conspicuous and dismaying — thank you for pointing it out. Nevertheless, I would offer the father the same grace that I think the mother deserves, and I think you will be sympathetic.

We know little of the mother's circumstances, and we know basically none of the father's. He may not even be alive. He could be an "absentee", or even an abuser himself — we have no information. But he might also be active in Lucy's life yet tragically unaware of his daughter's plight.


> Reminds me of "Those darn cars! Everybody knows that trains and horses are the way to travel."… … said nobody ever.


You must be very young. This was well-known back in the day. Lots of articles (some even posted here a while back) ranting about cars and how they were ruining everything.

Btw, the cute one-line slam doesn't really belong here. It's an empty comment, adds zero to the conversation, contributes nothing to the reader. It only makes a twelve-year-old feel a brief burst of endorphins. Please refrain.


Good grief.

The shutdown is a temporary budget squabble in a stable democracy; a banal political stunt that has happened every few years for the past few decades.

Rome’s dysfunction meant civil wars, assassinations, generals seizing power, private armies, and uprisings (in a fundamentally different society where, incidentally, over 25% of the population was slaves.)

There have been over 2000 years of history since Rome… when the only analogy a person can come up with is some half-baked allusion to the Roman Empire/Republic, it’s a good bet said person lacks a sense of history, knowledge of current events, and common sense.

Sorry to be harsh.


We’re seeing deliberate attacks on: fair elections, rule of law, independence of courts, and checks and balances.

I expect if you don’t think this is going to get bad you’re not paying attention.


None of which has anything to do with the ‘last days of the Roman Republic.’

Feel free to panic and tear your hair out… that’s what both sides do. Boring. The post, however, makes some pretentious analogy to the Roman Republic. The analogy was silly. That’s all. It’s just an annoying variant of Godwin’s law: Rome or Hitler… the only two analogies available to those ignorant of history.


No one’s panicking but you pal.

I made an observation that the present day is rhyming with history. You’re now raising Godwin’s law.

Good job.


Assassinations & attempted assassinations have all happened within the last 12-ish months.

You’ve got an executive branch stacking all open positions in judicial and legislative branches with their political appointees. And the executive is interpreting the law to gather as much power as possible to the head of state.

It’s not hard to see the parallels but you keep on trucking dude.


> You’ve got an executive branch stacking all open positions in judicial and legislative branches with their political appointees.

The judicial branch is composed of Judges who are confirmed by the Senate… not the executive branch.

And there are no ‘Legislative branch’ appointees.

I assume you mean the executive branch is making appointments to the executive branch? Who would you prefer to make such appointments? The Postal Service?


> Who would you prefer to make such appointments? The Postal Service?

At this point, I’d even prefer the Girl Scouts.


I understand Trump nominated and Congress/Senate approved the last couple of Supreme Court members?

I’d classify things like the FDA/FAA as legislative parts of the government, but maybe that’s wrong.

Also I don’t see other governments shutting down regularly with a cheer squad saying yeah this is nothing to worry about, our democracy is 100% A OK.


What’s so ‘new’ about it?


In C++, I’ve noticed that ChatGPT is fixated on unordered_maps. No matter the situation, when I ask what container would be wise to use, it’s always unordered_maps. Even when you tell it the container will have at most a few hundred elements (a size that would allow you to iterate through a vector to find what you are looking for before the unordered_map even has its morning coffee), it pushes the map… with enough prodding, it will eventually concede that a vector pretty much beats everything for small .size()’s.


> (a size that would allow you to iterate their a vector to find what your are looking for before the unordered_map even has its morning coffee)

I don't know about this; whenever I've benchmarked it on my use cases, unordered_map started to become faster than vector at well below 100 elements.


I agree with chatgpt here


isn't std::unordered_map famously slow, and you really want the hashmap from abseil, or boost, or folly, or [...]

