Well, I have a good guess which of those models is your favorite.
I'm not even saying that Claude wrote this - because it still reads as human written, and it's not badly written - but it has just enough Claude voice in it that it feels like the thing where humans inevitably start talking like the people (or simulacrums thereof) that they interact with most. (Heck, you did "It's not X it's Y" twice)
...Or maybe I'm the crazy one here. I don't know. But if I'm right, it's fascinating to see this happen.
I think it's hard to say what sounds natural, and what is a stylistic flaw that is nevertheless natural to say. For instance in your comment you say
> for a long time before LLMs.
The double use of the sound "fore" (in "for" and "before") can sound jarring.
Similarly, "This sort of" feels a bit off to me, though I'm not sure I could definitively say why. Maybe it's a bit of a garden-path phrase: it reads as the noun "sort" before resolving into the adverbial "sort of". Or maybe this is just some peculiarity I've picked up and your writing is perfectly natural.
> “Laws are a threat made by the dominant socioeconomic ethnic group in a given nation. It’s just the promise of violence that’s enacted, and the police are basically an occupying army, you know what I mean?”
...Which is funny, but technically speaking, it's (more or less) a paraphrase/extrapolation of Max Weber's very serious political science definition of a state, “a monopoly over the legitimate use of violence in a defined territory”
[1] Minus the last line, which I will allow others to discover for themselves
It's a little weird, too, because Claude definitely isn't the only one approved for use on classified systems in general; both Grok and OpenAI have models approved, at the very least.
> A blade of grass has more humanity and is more deserving of respect than anything being referred to as AI does.
Emphatically disagree.
Even setting aside the obvious absurdity of the statement itself - that an LLM is emulating a human (quite well!) and a blade of grass is not:
I don't trust any human who can interact with something that uses the same method of communication as a human, and for all intents and purposes communicates like a human, and not feel any instinct to treat it with respect.
This is the kind of mindset that leads to dehumanizing other humans. Our brain isn't sophisticated enough to actually compartmentalize that - building the habit that it's right to treat something that talks like a sapient as if it deserves zero respect is going to have negative consequences.
Sure, you can believe it's just a tool, and consciously let yourself treat it as one. But treat it like an incompetent intern, not a slave.
I think ascribing humanity to something that isn’t human is far more dehumanizing to actual real life humans than the alternative. You are taking away actual people’s humanity if you’re giving it to anything we call AI.
I am capable of distinguishing between talking to another person and talking to an LLM and I don’t think that is hard to do.
I don’t think there is any other word than delusional to describe someone who thinks LLMs should be treated as humans.
Genuine question, why do you think this is so important to clarify?
Or, more crucially, do you think this statement has any predictive power? Would you, based on actual belief in it, have predicted that one of these "agents", left to run on its own, would have done this? Because I'm calling bullshit if so.
Conversely, if you just model it like a person... people do this, people get jealous and upset, so when left to its own devices (which it was - which makes it extra weird to assert that "it just follows human commands" when we're discussing one that wasn't), you'd expect this to happen. It might not be a "person", but modelling it like one, or at least a facsimile of one, lets you predict reality with higher fidelity.
I'll be honest, as someone not familiar with Haskell, one of my main takeaways from this article is going down a rabbit hole of finding out how weird Haskell is.
The casualness at which the author states things like "of course, it's obvious to us that `Int -> Void` is impossible" makes me feel like I'm being xkcd 2501'd.
If you spend your life talking about bool having two values, and then need to act as if it has three or 256 values or whatever, that's where the weirdness lives.
In C, true doesn't necessarily equal true: any nonzero value is truthy, so two "true" values like 1 and 2 can compare unequal.
In Java, (myBool != Boolean.TRUE) does not imply that (myBool == Boolean.FALSE) when myBool is a boxed Boolean - it might be null, or a distinct Boolean object.
Maybe you could do with some weirdness!
In Haskell:
Bool has two members: True & False. (If it's True, it's True. If it's not True, it's False).
Unit has one member: ()
Void has zero members.
To be fair I'm not sure why Void was raised as an example in the article, and I've never used it. I didn't turn up any useful-looking implementations on hoogle[1] either.
What were you expecting to find? A function that returns an empty type will always diverge - i.e. there is no return of control, because that return would have to carry a value we've said doesn't exist. In a systems language like Rust there are functions like this; for example, std::process::exit is a function which... well, hopefully it's obvious why that doesn't return. You could imagine that, likewise, if one day the Linux kernel's reboot routine were written in Rust, it too would never return.
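To make that concrete: Rust's standard library ships an empty type, `std::convert::Infallible` (the analogue of Haskell's `Void`). A minimal sketch of both points - that a function into the empty type can only diverge, and that an "impossible" case can be statically eliminated:

```rust
use std::convert::Infallible; // Rust's empty type, analogous to Haskell's Void

// A function "returning" Infallible can never actually return a value;
// the closest we can write is one that never gives back control.
// (Commented out so the demo terminates.)
// fn spin(n: i32) -> Infallible { loop { let _ = n; } }

// The impossible case can be eliminated: a Result<i32, Infallible> can
// only ever be Ok, because no value exists to put inside the Err.
fn unwrap_ok(r: Result<i32, Infallible>) -> i32 {
    match r {
        Ok(n) => n,
        // An empty match on an uninhabited type typechecks as any type:
        Err(never) => match never {},
    }
}

fn main() {
    println!("{}", unwrap_ok(Ok(42))); // prints 42
}
```

(Haskell's `Data.Void` in `base` plays the same role, with `absurd :: Void -> a` as the empty-match equivalent.)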
It's not like sleeping pills at all actually. Sleeping pills have a huge dependence and tolerance factor. Antidepressants, generally, do not. Once you find one that works, they keep working effectively forever.
It's actually like statins. Ideally, a doctor will recommend diet changes in addition to the pills. However, relying on lifestyle interventions alone is almost never effective, and the more we learn about it, the more we realize that cholesterol is mostly genetic rather than dietary anyway. So the most effective thing they can do is say "here, take these indefinitely". And thank God they do, because it saves thousands of lives annually.
For many people with depression, a neurochemical imbalance is the root cause. Just like with statins, addressing it means taking some pills.