Hacker News | falcor84's comments

But that's the thing, it's not about "more melanin", but rather about something like:

The grass on the other side has a different amount of melanin, which is perceived as harder to achieve and thus more desirable, because it previously signaled belonging to a higher socio-economic stratum.


As I see it, this is an inherent part of the tech industry. Unless you expressly choose to focus your career on maintaining legacy code, your value as a dev depends on your ability and willingness to continuously learn new tech.

Why? I'm happy to spend VC money while they're offering it. If and when they stop giving me an offering that I'm satisfied with, I'll stop using it.

"I'm happy to smoke crack while the dealer is paying for it."

I'll take that, but don't see how it's so different from the intent I've always had of "automating myself out of the job". When I want to do "engineering", I can always spin up Factorio or Turing Complete. But for the rest of the time, I care about the result rather than the process. For example, before starting to implement a tool, I'll always first search online for whether there is already a good tool that would address my need, and if so, I'll generally utilize that.

The nondeterminism is what makes LLMs different.

When you download a tool written by a human, you can reasonably expect it to do what the author claims. Moreover, you can reasonably expect that if it fails, it will fail in the same way under the same conditions.
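To illustrate the contrast (a minimal sketch; the `llm_like_tool` function below is a hypothetical stand-in for a sampled LLM call, not a real API):

```python
import random

def deterministic_tool(x):
    # A conventional tool: same input, same behavior, same failure.
    return 100 // x  # raises ZeroDivisionError for x == 0, every single time

def llm_like_tool(prompt):
    # Stand-in for an LLM call with temperature > 0: the "same" request
    # can yield a different output on each invocation.
    return random.choice(["42", "forty-two", "The answer is 42."])

# The deterministic tool fails identically under identical conditions.
for _ in range(3):
    try:
        deterministic_tool(0)
    except ZeroDivisionError as e:
        print(type(e).__name__)  # always ZeroDivisionError

# The LLM-like tool may answer differently between runs for the same prompt.
print(llm_like_tool("What is 6 * 7?"))
```

The failure mode of the first function is reproducible and therefore debuggable; the second gives you a distribution over outputs, which is the nondeterminism being pointed at.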


Cracktorio! ;) I also love Dyson Sphere Program.

I wrote some Turing Machine programs back in my Philosophy of Computer Science class in the '80s, but since then my Turing Machine programming skills have atrophied, and I let LLMs write them for me now.


My answer is: use AI exactly for the tasks that you, as the tech lead on a project, would be OK delegating to someone else. That is, you still own the project and probably want to devote your attention to all of the aspects that you HAVE to be on top of, but there are probably a lot of tasks where you have a clear definition of the task and its boundaries, and you should be OK delegating and then reviewing.

This gets particularly tricky when the task requires a competency that you yourself lack. But here too the question is - would you be willing to delegate it to another human whom you don't fully trust (e.g. a contractor)? The answer for me is in many cases "yes, but I need to learn about this enough so that I can evaluate their work" - so that's what I do, I learn what I need to know at the level of the tech lead managing them, but not at the level of the expert implementing it.


What do you mean? Are you saying that customers don't follow fashions?

I have a few qualms about this standard:

1. An experienced Claude Code user can already build such an agent persona quite trivially via the /agents settings.

2. It doesn't actually replace agents. Most people I know use pre-defined agents for some tasks, but they still want the ability to create ad-hoc agents for specific needs. Your standard, by requiring them to write markdown files, does not solve this ad-hoc issue.
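For context, a pre-defined agent of this kind is roughly a markdown file with frontmatter; the sketch below is purely illustrative (the file path, agent name, and frontmatter fields are assumptions, not an exact Claude Code schema):

```markdown
<!-- hypothetical file: .claude/agents/code-reviewer.md -->
---
name: code-reviewer
description: Reviews diffs for correctness and style before merge.
tools: Read, Grep
---

You are a careful code reviewer. For each diff you are given,
flag correctness bugs first, then style issues, and explain
each finding in one sentence.
```

Writing a file like this per one-off need is exactly the friction that an ad-hoc prompt avoids, which is the point of qualm 2.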

3. It does not seem very "viral" or income-generating. I know this is premature at this point, but without charging users for the standard, is it reasonable to expect to make money off of this?


This is a really good report!

I was particularly fascinated by the self-replication section [0]; I wasn't aware of the RepliBench paper [1], and now that I've skimmed it, I'm quite concerned. It was clear to me before that there's nothing in particular stopping an AI from replicating, but seeing the already-high success rates at current levels of technology (e.g. a 46% success rate at replicating itself onto new compute) is very scary.

[0] https://www.aisi.gov.uk/frontier-ai-trends-report#self-repli...

[1] https://arxiv.org/abs/2504.18565


> For now, the achievement is striking: a single robot, a thousand tasks, one day, one demonstration per task—a glimpse of a future where robots learn almost as fast as humans.

Reminds me of that medical school "See one, Do one, Teach one". I'm not sure whether to be excited, or to panic, or both.


The real "fun" begins when they can learn at scale and use that daily-updated data. Each robot learns something new, and all the robots can use it with just an update: common data for robots, and about work, that we have never seen before in job markets. Millions of robots proving no humans are needed, after learning from just a few thousand of them doing their work.

Surely 1000 tasks in 24 hours is huge.

That's an interesting argument, but from my industry experience, the average experienced QA Engineer and technical Product Manager both have better vocabulary than the average SWE. Indeed, I wonder whether a future curriculum for Vibe Engineering (to borrow your own term) may look more similar to that of present-day QA or Product curricula, than to a typical coding or CS curriculum.
