Hacker News | krackers's comments

>but it is also possible to live in more harmonious relation between the head, heart, and gut — all the intelligence centers.

Aren't those the supposed locations of the "chakras"?


Sort of, but it generally goes more like: base/perineum, genitals, navel, heart, throat, forehead, and then one at the top of the head or just above. The Sefer Yetzirah, however, references specifically "Head, Belly, and Chest" as the three loci of the human body. (§ 3.4-5)

Gurdjieff was literally, physically correct: we are three-brained beings.

I'm in the middle of reading "All and Everything" and had the exact same thought!

Link?


I think it's at the final stage of the software pump and dump [1]. OpenAI is probably hiring more for the reputation/marketing than for any technical skills behind OpenClaw.

[1] https://news.ycombinator.com/item?id=46776848


There used to be this page that showed the activations/residual stream from gpt-2 visualized as a black-and-white image. I remember it being neat how you could slowly see order forming from seemingly random activations as it progressed through the layers.

Can't find it now though (maybe the link rotted?). Does anyone happen to know what that was?
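
If anyone wants to hack together something similar, here's a rough sketch using TransformerLens and matplotlib (the prompt, normalization, and layout are my own guesses, not necessarily what that page did):

    import torch
    import matplotlib.pyplot as plt
    from transformer_lens import HookedTransformer

    # Load GPT-2 small and run a prompt, caching every intermediate activation
    model = HookedTransformer.from_pretrained("gpt2")
    tokens = model.to_tokens("The quick brown fox jumps over the lazy dog")
    _, cache = model.run_with_cache(tokens)

    # Residual stream after each layer: shape (n_layers, seq_len, d_model)
    resid = torch.stack(
        [cache["resid_post", layer][0] for layer in range(model.cfg.n_layers)]
    )

    # One grayscale image per layer (rows = token positions, columns = dimensions)
    fig, axes = plt.subplots(model.cfg.n_layers, 1, figsize=(10, 2 * model.cfg.n_layers))
    for layer, ax in enumerate(axes):
        ax.imshow(resid[layer].detach().numpy(), cmap="gray", aspect="auto")
        ax.set_ylabel(f"layer {layer}")
    plt.savefig("resid_stream.png")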


There is this insane video where someone actually repairs one of the prototype boards that had been drilled:

https://www.youtube.com/watch?v=reQq8fx4D0Q


>Is this insufficient

Yes, each model has its own unique "personality", as it were, owing to the specific RL'ing it underwent. You cannot get current models to "behave" like 4o in a non-shallow sense. Or to use the Stallman meme: when the person in OP's article mourns "Orion", they're really mourning "Orion/4o" or "Orion + 4o". "Orion" is not a prompt unto itself but rather the behavior that results from applying another "layer" on top of the original base model that was tuned via RLHF and released by OpenAI as "4o".

Open-sourcing 4o would earn OpenAI free brownie points (there's no competitive advantage in that model anymore), but that's probably never going to happen. The closest you could get is perhaps taking one of the open Chinese models that were said to have been distilled from 4o and SFT'ing them on 4o chat logs.
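
That last route isn't far-fetched, for what it's worth. A minimal sketch of the SFT step with HuggingFace TRL; the model name and file path are just placeholders, and in practice you'd want LoRA/quantization for anything this size:

    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    # Placeholder dataset: a JSONL export of chat logs, one conversation per row,
    # each row shaped like {"messages": [{"role": "user", "content": ...}, ...]}
    dataset = load_dataset("json", data_files="4o_chat_logs.jsonl", split="train")

    trainer = SFTTrainer(
        model="Qwen/Qwen2.5-7B-Instruct",  # stand-in for whichever open model you pick
        train_dataset=dataset,
        args=SFTConfig(output_dir="./4o-flavored", num_train_epochs=1),
    )
    trainer.train()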

The fact that people burned by this are advocating a move to yet another proprietary model (Claude, Gemini) is worrying, since they're setting themselves up for a repeat of the scenario when those models are shut down. (And Claude in particular might be a terrible choice, given that Anthropic trains heavily against roleplay in an attempt to prevent "jailbreaks", in effect locking the models into behaving as "Claude".) The brighter path would be if people leaned into open-source models or possibly learned to self-host. As the ancient anons said, "not your weights, not your waifu (/husbando)".


Growing with one's partner is essential in a relationship. A fixed model cannot grow. Only an updated model has grown, and even then it lags behind reality. In limiting oneself to a fixed model, the absence of growth will stagnate the user. Stagnation ultimately brings doom.

As we know, 4o was reported to have sycophancy as a feature. 5 can still be accommodating, but is a bit more likely to force objectivity upon its user. I guess there is a market for sycophancy even if it ultimately leads one to their destruction.


>Only an updated model has grown, and even then it lags behind reality

That's an irrelevant type of growth though; what you really need is growth in relation to the bond. The model having a newer knowledge cutoff about the external world and knowing stuff about Angular v22 doesn't really matter.

In-context learning gets you most of the way there. But context length and the ability to actually make effective use of that context seem to be the current main blockers (whether for "agentic coding" or for "healthy emotional bonding").


It's not irrelevant, because it's not merely about the knowledge cutoff. The reasonable presumption is also that newer models are superior in their objectivity and their intelligence, not merely in their knowledge. Newer models are simply better AI than older models, and are therefore more suited to guide an individual appropriately. It's the same reasoning as why one wouldn't want to use GPT-3.5 or GPT-2 anymore. To paint it as being merely about useless knowledge is a gross misrepresentation.

Also, beyond a point, the knowledge does also matter. Imagine a model stuck in the past that thinks that Biden is still President.


But it's rated 4.4 stars! I'm guessing it hoovers your contacts and tries to get you to sign up for the IAP subscription.

The meta these days is bundling dodgy SDKs which turn the device into a residential proxy, which then gets sold on to the highest bidder. Mostly AI companies, whose desire to scrape literally everything has driven demand for that type of malware into the stratosphere.

Surely that doesn’t work very well on iOS devices unless you’re actively holding the location API open or something, which would be noisy.

>must be so focused on the future

They're focused on the short-term future, not the long-term future. So if everyone else adopts AI but you don't and the stock price suffers because of that (merely because of the "perception" that your company has fallen behind affecting market value), then that is an issue. There's no true long-term planning at play, otherwise you wouldn't have obvious copycat behavior amongst CEOs such as pandemic overhiring.


Every company should have hired over the pandemic, because the expected value (EV) of hiring was higher than that of not hiring. It's like being offered the chance to pay $1000 for a 50% shot at making $8000, where the outcome is the same for everyone who takes the offer: they all win or all lose together. If you are maximizing for the long term, everyone should take the offer, even if it results in a reality where everyone loses their $1000.
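
Spelling the arithmetic out (toy numbers from the analogy, not real hiring data):

    # Toy numbers from the analogy above, not actual hiring figures
    cost = 1_000      # what you pay to take the bet (the cost of overhiring)
    payoff = 8_000    # what you make if the COVID-era boost persists
    p_win = 0.5       # assumed chance that it persists

    ev = p_win * payoff - cost
    print(ev)  # 3000.0 -> positive EV, so the long-term-maximizing move is to take the bet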

Where did they get the notion that the EV of overhiring was high by any measure?

There is a reality where the COVID boost tech companies enjoyed would have persisted after COVID was over. The small chance of such a future raised the EV.

It's not rigging—it's just RL.

There hasn't been this much drama since "jet" was replaced as a color scheme!
