Hacker News | coldtea's comments

>This is the worst possible take. It dismisses an entire branch of science that has been studying neurology for decades. Biological brains exist, we study them, and no they are not like computers at all.

They're unlike computers only in superficial ways that don't matter.

They're still computational apparatus, with an architecture that's not that dissimilar (if far more advanced).

Just as 0s and 1s aren't vibrating air molecules, yet they can still encode sound just fine.

>Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter.

Not begging the question matters even more.

This is just handwaving and begging the question. 'An algorithm is an algorithm' means nothing. Who said what the brain does can't be described by an algorithm?


>So were mannequins in clothing stores. But that doesn't give them rights or moral consequences

If mannequins could hold discussions, argue points, and convince you they're human in a blind conversation, then it would.


>* I think it absolutely adds to the discussion. Until the conversation around Ai can get past this fundamental error of attributing "choice, "alignment", "reasoning" and otherwise anthropomorphizing agents, it will not be a fruitful conversation. *

You call it a "fundamental error".

I and others call it an obvious pragmatic description based on what we know about how it works and what we know about how we work.


>What we know about how it works is you can prompt it to address you however you like, which could be any kind of person or a group of people, or as fictional characters. That's not how humans work.

You admitted it yourself: you can prompt it to address you however you like. That's what the original comment wanted. So why are we quibbling about words?

That's all that happens on this website.

>The agent has no "identity". There's no "you" or "I" or "discrimination".

If identity is an emergent property of our mental processing, the AI agent can just as well be said to possess some, even if much cruder than ours. It sure walks and talks like a duck (someone with identity).

>It's just a piece of software designed to output probable text given some input text.

If we generalize "input text" to sensory input, how is that different from a piece of wetware?


Also, since AI will mean most employees are just let go, why would they need meeting minutes? AI would supposedly be so crucial as to be the make-or-break phone/laptop feature, but people would still have meetings?

At best they will use it to alert them to special offers that they can buy with food coupons.


>Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last...

Not at all the experience of users past 2025. The biggest sentiment around model updates was disappointment, and even worries of nerfing.

>But it's time now. Not in an "eventually we should talk about this" way. In a "this is happening right now and I need you to understand it" way. (...) And something clicked. Not like a light switch... more like the moment you realize the water has been rising around you and is now at your chest. (...) We're not making predictions. We're telling you what already occurred in our own jobs (...) It wasn't just executing my instructions. It was making intelligent decisions."

Clear AI slop writing patterns.

The guy works in the AI space and is fuelling the hype with slop as a content strategy.


Here's an example of just one of an endless list of bullshit and hallucinations from paid latest-model AI:

Q. Compare the characters Wormtail and Wormtongue.

A. You’re asking about Wormtail and Wormtongue — two very different (but very unpleasant) characters from The Lord of the Rings.


I'm not sure if that is hallucination or "telling it like it is":

Lord of the Potter https://youtu.be/3KmXPcSxz2g?si=HUpGSkpDxaJ9wHxW&t=251


It's a joke: because hashing loses information, the original is not retrievable. Woosh.
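A minimal sketch of why a hash can't be inverted: many inputs map to the same digest. Here the SHA-256 digest is truncated to one byte (an artificial shrinking, just to make collisions easy to find); real hashes are just as many-to-one, only with a vastly larger output space.

```python
import hashlib

def tiny_hash(s: str) -> str:
    # Truncate SHA-256 to a single byte (2 hex chars) so collisions appear quickly.
    return hashlib.sha256(s.encode()).hexdigest()[:2]

seen = {}
collision = None
for i in range(1000):
    msg = f"msg-{i}"
    h = tiny_hash(msg)
    if h in seen:
        collision = (seen[h], msg)
        break
    seen[h] = msg

# Two different inputs, one digest: there is no "the" original to recover.
print(collision)
```

With only 256 possible digests and up to 1000 distinct messages, the pigeonhole principle guarantees a collision turns up.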

Lol, good one, then :)

The parent said "it's surprising". It's not surprising.

You're correct in the literal sense that they did say those words, but the rest of the comment clearly demonstrates a lack of surprise, revealing the opening words to be ironic.

>If I get blown off, or if somebody takes 4 days to respond to my email, my impression is always that my counterparty views the matter as unimportant

Usually it is unimportant, and the other side is just wasting their time.


>But then, it’s not wrong to scratch your head. Blurring amounts to averaging the underlying pixel values. If you average two numbers, there’s no way of knowing if you’ve started with 1 + 5 or 3 + 3. In both cases, the arithmetic mean is the same and the original information appears to be lost. So, is the advice wrong?

Well, if you have a large enough averaging window (as is the case with blurred letters), the underlying content is constrained (letters come from a fixed set of shapes), so information about it is partly retained.

Not very different from the information retained in minesweeper games.
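A toy sketch of the constraint argument: the mean of two numbers is ambiguous on its own, but once the values are known to come from a small fixed "alphabet" (as letter shapes are for blurred text), the original pair can become recoverable. The `candidates` helper here is a hypothetical illustration, not any actual deblurring code.

```python
def candidates(mean, allowed):
    # Enumerate all unordered pairs (a, b) drawn from `allowed` whose average is `mean`.
    vals = sorted(set(allowed))
    return [(a, b) for a in vals for b in vals
            if a <= b and (a + b) / 2 == mean]

# Unconstrained values 1..6: the mean 3 is ambiguous.
print(candidates(3, range(1, 7)))   # [(1, 5), (2, 4), (3, 3)]

# Constrain values to a known alphabet {1, 5}: only one pair fits.
print(candidates(3, {1, 5}))        # [(1, 5)]
```

The averaging itself is still lossy; it's the side constraints that restore the information, which is the same reason blurred text over a known font can sometimes be reconstructed.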

