Hacker News | veunes's comments

Reading works when you generate 50 lines a day. When AI generates 5,000 lines of refactoring in 30 seconds, linear reading becomes a bottleneck. Human attention doesn't scale like GPUs. Trying to "just read" machine-generated code is a sure path to burnout and missed vulnerabilities. We need change-summarization tools, not just syntax highlighting.

This is exactly the gap I'm worried about. Human review still matters, but linear reading breaks down once the diff is mostly machine-generated noise. Summarizing what actually changed before reading feels like the only way to keep reviews sustainable.

The tools exist; they're just rarely used in web dev. Look into ApiDiff, or tools that use Tree-sitter to compare function signatures. In the Rust/Go ecosystems, there are tools that scream in CI if the public contract changes. We need to bring that rigor into everyday AI-assisted dev. A diff should say "Function X now accepts null", not "line 42 changed".
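That kind of signature-level diffing can be sketched without Tree-sitter. As an illustration only (not how ApiDiff or any specific tool works), Python's stdlib `ast` module is enough to compare public function contracts between two versions of a file:

```python
import ast

def signatures(source):
    """Map function name -> tuple of argument names (a crude 'public contract')."""
    sigs = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            sigs[node.name] = tuple(a.arg for a in node.args.args)
    return sigs

def contract_diff(old_src, new_src):
    """Report added/removed/changed function signatures, not changed lines."""
    old, new = signatures(old_src), signatures(new_src)
    changes = []
    for name in old:
        if name not in new:
            changes.append(f"removed: {name}{old[name]}")
        elif old[name] != new[name]:
            changes.append(f"changed: {name} {old[name]} -> {new[name]}")
    for name in new:
        if name not in old:
            changes.append(f"added: {name}{new[name]}")
    return changes

old = "def fetch(url): ...\ndef parse(data): ..."
new = "def fetch(url, timeout): ...\ndef parse(data): ..."
print(contract_diff(old, new))
```

A real tool would also track types, defaults, and return contracts, but even this toy version produces "fetch changed" instead of "line 1 changed", which is the point.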

I totally get the fear regarding probabilistic changes being reviewed by probabilistic tools. It's a trap. If we trust AI to write the code and then another AI to review it, we end up with perfectly functioning software that does precisely the wrong thing.

Diffs are still necessary, but they should act as a filter. If a diff is too complex for a human to parse in 5 minutes, it's bad code, even if it runs. We need to force AI to write atomically and clearly; otherwise we're building legacy code that's unmaintainable without that same AI.
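As a toy version of that filter (the thresholds here are invented for illustration, not from any standard), a CI gate could simply reject unified diffs that blow the reviewability budget:

```python
def diff_budget_ok(diff_text, max_hunks=10, max_changed_lines=200):
    """Crude reviewability gate: fail diffs too large to read in one sitting."""
    lines = diff_text.splitlines()
    # Count +/- lines but skip the +++/--- file headers.
    changed = sum(1 for l in lines
                  if l.startswith(("+", "-")) and not l.startswith(("+++", "---")))
    hunks = sum(1 for l in lines if l.startswith("@@"))
    return changed <= max_changed_lines and hunks <= max_hunks

diff = """--- a/f.py
+++ b/f.py
@@ -1,2 +1,2 @@
-x = 1
+x = 2
"""
print(diff_budget_ok(diff))
```

In practice you'd want a semantic measure (touched public symbols, cyclomatic delta) rather than raw line counts, but even a dumb size gate forces the generator to split its work into reviewable pieces.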


Agreed - that trap is very real. The open question for me is what we do when atomic, five-minute-readable diffs are the right goal but not always realistically achievable. My gut says we need better deterministic signals to reduce noise before human review, not to replace it.
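One such deterministic signal, as a rough sketch (the whitespace-normalization rule is my own simplification): drop diff hunks that vanish after normalization, so the reviewer only sees the changes that could alter behavior:

```python
import difflib
import re

def substantive_hunks(old_lines, new_lines):
    """Return only the hunks that survive whitespace normalization.

    Pure reformatting (re-indentation, spacing changes) compares equal
    after normalization and is filtered out as review noise.
    """
    norm = lambda s: re.sub(r"\s+", " ", s).strip()
    sm = difflib.SequenceMatcher(a=[norm(l) for l in old_lines],
                                 b=[norm(l) for l in new_lines])
    hunks = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag != "equal":
            hunks.append((tag, old_lines[i1:i2], new_lines[j1:j2]))
    return hunks

old = ["def f(x):", "  return x"]
new = ["def f(x):", "    return x", "", "def g(): pass"]
# The re-indented `return x` is noise; only the new function remains.
print(substantive_hunks(old, new))
```

Real AI-generated diffs need stronger normalization (AST-level, import reordering, rename detection), but the principle is the same: burn down the noise deterministically, then hand the residue to a human.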

For a lot of people, postponement isn't a strategic move.

It's hard to imagine a parenting strategy with no tradeoffs

The tricky part (both as a parent and a manager) is knowing which fires are safe to burn and which ones will burn the house down

From the outside, "strategic delay" and "indecision" can look identical, but they feel very different to the people depending on you

The technique is very role-dependent. It works well when (a) others are capable of resolving things on their own, and (b) the cost of delay is low and bounded. In environments with unclear ownership or asymmetric information, delaying can just push coordination costs downstream or silently create resentment.

If it were just an average, we'd only get gray sludge. Models learn the shape of the underlying distribution: they don't just mix existing works; they discover the hidden rules by which those works were created. This is reverse engineering of human culture, and that's exactly why control over the latent space is so critical; otherwise we surrender culture to an algorithm that has "learned the rules" but has no concept of meaning.

The analogy is great, but it breaks down when we talk about professional use. An aimbot in multiplayer is evil because it ruins the game for others. But an aimbot in game development (e.g., procedural aiming animation) is just a tool. The problem with current AI video is that the model often shoots not where the director wants, but where it's easiest to hit: the template.
