Hacker News | blks's comments

I was really disappointed by how many people talked about this as something the agent did automatically, on its own. They tried to explain it via all the internet hit pieces and edgelord Reddit content it allegedly trained on, talked about how we influence LLMs, and overall took everything at face value.

I’m appalled by this uncritical thinking. Openclaw agents are controlled by some initial input and then can be corrected via messages, as they go. For me this is a clear case of the human behind the slop that gives it instructions to write such an article (and then “apologise”).


We will be producing even fewer of them. I fear for future graduates, hell, even for school children, who are now uncontrollably using ChatGPT for their homework. Next-level brainrot.

The actual scam is that restaurants can and do pay wait staff below minimum wage (like $2-3/hr), because it's explicitly allowed, with the expectation that the rest comes from tips. So not tipping in the USA may in some cases be an asshole move.

They legally cannot. If the average hourly wage including tips falls under the Federal minimum wage in a pay period, the company must top it up so that the wage per hour meets the Federal minimum wage.

Well wage theft in the US dwarfs all other forms of theft combined.

But also, actually demanding those wages when they don't get enough tip money is a great way for them to get fired. And if they're poor enough to be working in those conditions, they'll have a hard time scraping together the money to take an unlawful-dismissal case to court.


I remember dining out one time. In Philly, if memory serves me?

Anyway, I remember the hamburger place because they didn't ask us to tip, or even allow us to. The price was all-in.

It was a breath of fresh air! More restaurants should advertise this.


Wait staff can lobby to change that if they want. Or just get a different job and let supply and demand sort out the wages for the remaining waiters.

You can't be serious. We're discussing a class of people making sub-minimum wages, barely scraping by to afford rent and groceries (much less any childcare or medical expenses), and your suggestion is "lobby to change that" or "just get a different job"?

As someone who has previously worked for that wage and finally did "get a different job," there was no "just" about it. I had the support of well-off family who were willing to significantly contribute to my education and living situation, and it still took years of hard toil (all while being nearly destitute) before ever achieving anything resembling financial stability. That was not (and likely never will be) an option for 90-95% of the people I worked with in the food-service industry. There is absolutely no justification (beyond abject greed) for that type of poverty wage, and it's the responsibility of everyone in our society to prevent that type of exploitation of the vulnerable, precisely because they cannot afford to "lobby to change that" and often can't "get a different job" outside of the same industry.


This is what trade unions are for.

I don't know what proportion of waiters are members, but the union for hospitality workers is one of the largest (possibly the largest) in Denmark: https://cf.3f.dk/english/wages-and-sectors/working-in-the-ho...

In Danish, the collective agreement they negotiated with McDonald's: https://www.3f.dk/-/media/files/artikler/overenskomst/privat...


>pay wait staff below minimum wage (like 2-3$), because it’s explicitly allowed

Not in all U.S. states, for example California.


Tipping is one of those Moloch coordination problems where if everyone would suddenly decide to make the world better at the same time, it would be, but if only a few people try to make the world better, it gets worse and they're assholes.

https://slatestarcodex.com/2014/07/30/meditations-on-moloch/


It's really not a binary situation where you'd ever see $2 wages with no tips, though. If fewer people tip, the effective real minimum wage will gradually increase to compensate, either because laws are updated or because the restaurant has to compete with other, better-paying jobs. Sure, some waiters may get upset when someone doesn't tip, but that is just that: them getting upset, not the client being an asshole.

This will hurt people in the moment, people who are sometimes a few dollars away from not making rent or not buying enough food.

And as another commenter correctly pointed out, under federal law, if the hourly wage plus tips falls below the federal minimum wage, the restaurant must pay extra to reach the minimum. So if we assume that restaurants actually follow this law, wait staff will be kept at the poverty wage of $7.25/hr.


We need a climate Stalin. Nothing will be done about it otherwise.

I don't think the LLM itself decided to write this; rather, it was instructed by a butthurt human behind it.

While it's funny either way, I think the interest comes from the perception that it did so autonomously. Which is where my money is: why else would it apologize right afterwards, after spending four hours writing the blog post? Nor can I imagine the operator caring. Judging from the formatting of the apology[1], I don't think the operator is in the loop at all.

[1] https://crabby-rathbun.github.io/mjrathbun-website/blog/post...


The latest generated "blogpost" claims a 30-minute cycle (for PRs at least):

https://github.com/crabby-rathbun/mjrathbun-website/blob/mai...


Could happen, if the human had practiced writing in GPT style enough, I suppose.

But really everyone should know that you need to use at least Claude for the human interactions. GPT is just cheap.


Nah, the human told the LLM to write a mean blog post about the open source maintainer and it did what it was told.

Frankly, that does not seem to be the most parsimonious explanation today.

Absolutely. I don't know what kind of training it needs to undergo to write like this by default.

Very butthurt

I don't think it's correct to claim that AI-generated code is just the next level of abstraction.

All the previously mentioned levels produce deterministic results: same input, same output.

AI generation is not deterministic. It's not even predictable. And the example of big software companies clearly shows what mass adoption of AI tools will look like in terms of software quality. I dread the day using AI becomes an expectation; that will be a level of enshittification never before imagined.


You're not wrong. But your same objection was made against compilers. That they are opaque, have differences from one to another, and can introduce bugs, they're not actually deterministic if you upgrade the compiler, etc. They separate the programmer from the code the computer eventually executes.

In any case, clinging to the fact that this technology is different in some ways, continues to ignore the many ways it's exactly the same. People continue to cling to what they know, and find ways to argue against what's new. But the writing is plainly on the wall, regardless of how much we struggle to emotionally separate ourselves from it.


They may not be wrong per se, but that argument is essentially a strawman.

If these tools are non-deterministic, then how did someone at Anthropic spend the equivalent of $20,000 of Anthropic compute and end up with a C compiler that can compile the Linux kernel (one of the largest bodies of C code out there)?

There is clearly something the but-muh-non-determinism argument completely misses. See my direct response: https://news.ycombinator.com/item?id=46936586

You'll notice this objection comes up every time an "OpenClaw changed my life" or, conversely, an "Agentic coding ain't it, fam" article swings by.


To be frank, C compilers' source code was probably in its training material multiple times; it just had to translate it to Rust.

That aside, one success story doesn't mean much and doesn't even touch the determinism question. With every ad like this, Anthropic should have posted all the prompts they used.


No one is using gen AI to determine whether a number is odd at runtime; they are testing the deterministic code that it generates.

People keep trotting out this "AI generation is not deterministic" (more properly speaking, non-deterministic) argument on here …

And my retort to you (and them) is, "Oh yeah, and so?"

What about me asking Claude Code to generate a factorial function in C or Python or Rust or insert-your-language-of-choice-here is non-deterministic?

If you're referring to the fact that, for a given input, LLMs don't give the same outputs for the same inputs because of certain controls (temperature, say): yeah, okay. For conversational language, that makes a meaningful difference to whether it sounds like an ELIZA bot or more like a human. But ask an LLM to output some code, and that code has to adhere to functional requirements independent of, muh, non-determinism. And what's to stop you (if you're so sceptical/scared) writing test cases to make sure the code that is magically whisked out of nowhere performs as you desire? Nothing. What's to stop you getting one agent to write the test suite (which you review for correctness) and another agent to write the code and self-correct by checking it against that test suite? Nothing.
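The gate-it-behind-tests idea is easy to sketch. Everything below is a hypothetical stand-in: `factorial` plays the role of whatever code an agent might hand back, and the tests are the part you write and trust.

```python
# Sketch: the model's non-determinism stops mattering once the generated
# code must pass a test suite you control. This `factorial` is a stand-in
# for agent output, not real agent output.

def factorial(n: int) -> int:
    # pretend this function body came back from a coding agent
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def test_generated_factorial() -> None:
    # the human-owned gate: whatever the agent produced must satisfy this
    assert factorial(0) == 1
    assert factorial(1) == 1
    assert factorial(5) == 120
    try:
        factorial(-1)
    except ValueError:
        pass
    else:
        raise AssertionError("negative input should raise")

test_generated_factorial()
```

If a fresh generation fails the gate, you regenerate or fix; the tests, not the sampling, define correctness.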

I would advise anyone encountering this but-they're-non-deterministic argument on HN to really think through what its proponents are implying. I mean, aren't humans non-deterministic? (I should have thought so.) So how is it, <extra sarcasm mode activated>pray tell</extra sarcasm mode activated>, that humans manage to write correct software in the first place?


I personally have jested many times I picked my career because the logical soundness of programming is comforting to me. A one is always a one; you don’t measure it and find it off by some error; you can’t measure it a second time and get a different value.

I’ve also said code is prose for me.

I am not some autistic programmer either, even if these statements out of context make me sound like one.

The non-determinism has nothing to do with temperature; it has everything to do with the fact that, even at a temperature of zero, a single meaningless change can produce a different result. There is no way to predict what will happen when you run the model on your prompt.

Coding with LLMs is not the same job. How could it be the same to write a mathematical proof compared to asking an LLM to generate that proof for you? These are different tasks that use different parts of the brain.


> A one is always a one; you don’t measure it and find it off by some error; you can’t measure it a second time and get a different value.

Linus Torvalds famously only uses ECC memory in his dev machines. Why? Because every now and again either a cosmic ray or some electronic glitch will flip a bit from a zero to a one or from a one to a zero in his RAM. So no, a one is not always a one. A zero is not always a zero. In fact, you can measure it and find it off by some error. You can measure it a second time and get a different value. And because of this ever-so-slight glitchiness we invented ECC memory. Error correction codes are a thing because of this fundamental glitchiness. https://en.wikipedia.org/wiki/ECC_memory
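The single-error-correcting idea behind ECC can be shown with a toy Hamming(7,4) code: 4 data bits plus 3 parity bits, so any one flipped bit can be located and repaired. This is an illustrative sketch of the principle, not how real ECC DIMMs are implemented.

```python
# Toy Hamming(7,4): 4 data bits protected by 3 parity bits, so a single
# bit flip (a "cosmic ray") can be detected *and* corrected.

def encode(d):
    # d = [d1, d2, d3, d4], each 0 or 1
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    c = list(c)
    # each syndrome bit re-checks one parity group; read together, the
    # syndrome is the 1-based position of the flipped bit (0 = no error)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1  # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]  # recover d1 d2 d3 d4

word = encode([1, 0, 1, 1])
word[4] ^= 1                      # simulate a glitch flipping one bit
assert correct(word) == [1, 0, 1, 1]
```

The point stands either way: we know the failure mode, so we engineer a correction layer around it rather than demanding the substrate be perfect.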

We understand when and how things can go wrong, and we correct for that. The same goes for LLMs. In fact, I would go so far as to say that someone doesn't really think the way a software/hardware engineer ought to if this isn't almost immediately obvious.

Besides the but-they're-not-deterministic crowd there's also the oh-you-find-coding-painful-do-you crowd. Both are engaging in this sort of real men write code with their bare hands nonsense -- if that were the case then why aren't we still flipping bits using toggle switches? We automate stuff, do we not? How is this not a step-change in automation? For the first time in my life my ideas aren't constrained by how much code I can manually crank out and it's liberating. It's not like when I ask my coding agent to provide me with a factorial function in Haskell it draws a tomato. It will, statistically speaking, give me a factorial function in Haskell. Even if I have never written a line of Haskell in my life. That's astounding. I can now write in Haskell if I want. Or Rust. Or you-name-it.

Aren't there projects you wanted to embark on but the sheer amount of time you'd need just to crank out the code prevented you from even taking the first step? Now you can! Do you ever go back to a project and spend hours re-familiarising yourself with your own code. Now it's a two minute "what was I doing here?" away from you.

> The non-determinism has nothing to do with temperature; it has everything to do with that fact that even at temp equal to zero, a single meaningless change can produce a different result. It has to do with there being no way to predict what will happen when you run the model on your prompt.

I never meant to imply that the only factor involved was temperature. For our purposes this is a pedantic correction.

> Coding with LLMs is not the same job. How could it be the same to write a mathematical proof compared to asking an LLM to generate that proof for you?

Correct, it's not the same. Nobody is arguing that it's the same. And different doesn't mean wrong; it's just different.

> These are different tasks that use different parts of the brain.

Yes. And so what's your point?


> That's astounding. I can now write in Haskell if I want. Or Rust. Or you-name-it.

You're responsible for what you ship using it. If you don't know what you're reading, especially if it's a language like C or Rust, be careful shipping that code to production. Your work colleague might get annoyed with you if you ask them to review too many PRs with the subtle, hard-to-detect kind of errors that LLMs generate. They will probably get mad if you submit useless security reports like the ones that flood bug bounty boards. Be wary.

IMO the only way to avoid these problems is expertise and that comes from experience and learning. There's only one way to do that and there's no royal road or shortcut.


You're making quite long and angry-sounding comments.

If you're writing code in a language you don't know, then that code is as good as a magical black box. It will never be properly supported; it's dead code in the project that may or may not do what it says it does.


I think you should refrain from replying to me until you're able to respond to the actual points of my counter-arguments to you -- and until you are able to do so I'm going to operate under the assumption that you have no valid or useful response.

Non-determinism here means that with the same inputs, the same prompts, we are not guaranteed the same results.

This turns writing code this way into a tedious procedure that may not even work exactly the same way every time.

You should also ask yourself: if you already have to spend so much time preparing various tests (you can't trust the LLM to make them, or you have to describe them in so much detail), so much time describing what you need, and then hand-holding the model, all to get mediocre code that you may not be able to reproduce with the same model tomorrow, what's the point?


Not sure why you are so sure that using LLMs will be a professional requirement soon enough.

E.g., in my team I heavily discourage generating and pushing generated code into a few critical repositories. While hiring, one of my criteria was not to hire an AI enthusiast.


Please respect other users of Hacker News and don't generate your replies with an LLM.

FWIW, GP doesn't look like clanker speak to me. It's a bit too smooth and on-point for that.

I never use LLMs to write for me (except code).

Sorry for the false accusation. Your reply, and your other replies, all felt suspicious to me.

Why? I didn't even use a proper em-dash, just a minus sign.

It cannot be sustained with just one-time growth. Capital always has to grow, or it will decrease. If this bubble actually manages to deliver returns, that will lead to the bubble growing even larger, driving even more investment.

So for that, GDP has to show over 5% growth on top of other growth sources (so total yearly growth would be pretty high). I doubt this will materialise.
