unconed's comments | Hacker News

cough

The Strange Case of "Engineers" Who Use AI

I rely on AI coding tools. I don’t need to think about it to know they’re great. I have instincts which tell me convenience = dopamine = joy.

I tested ChatGPT in 2022, and asked it to write something. It (obviously) got some things wrong; I don’t remember what exactly, but it was definitely wrong. That was three years ago and I've forgotten that lesson. Why wouldn't I? I've been offloading all sorts of meaningful cognitive processes to AI tools since then.

I use Claude Code now. I finished a project last week that would’ve taken me a month. My senior coworker took one look at it and found 3 major flaws. QA gave it a try and discovered bugs, missing features, and one case of catastrophic data loss. I call that “nitpicking.” They say I don’t understand the engineering mindset or the sense of responsibility over what we build. (I told them it produces identical results and they said I'm just admitting I can't tell the difference between skill and scam).

“The code people write is always unfinished,” I always say. Unlike AI code, which is full of boilerplate, adjusted to satisfy the next whim even faster, and generated by the pound.

I never look at Stack Overflow anymore; it's dead. Instead I want the info to be remixed and scrubbed of all its salient details, and have an AI hallucinate the blanks. That way I can say that "I built this" without feeling like a fraud or a faker. The distinction is clear (well, at least in my head).

Will I ever be good enough to code by myself again? No. When a machine showed up that told me flattering lies while sounding like a Silicon Valley boardroom after a pile of cocaine, I jumped in without a parachute [rocket emoji].

I also personally started to look down on anyone who didn't do the same, for threatening my sense of competence.


1) A system that needs _seconds per tile_ is not suitable for real-time anything imo.

The irony is that you explicitly positioned your thing as a successor to Perlin noise when, in fact, it's just a system that hallucinates detail on top of Perlin (feature) noise. This is dishonest paper bait and the kind of AI hubris that will piss off veterans in the scene.

2) I'm also disappointed that nowhere is there any mention of Rune Johansen's LayerProcGen, which is the pre-AI tech that is the real precedent here.

Every time I see a paper from someone trying to apply AI to classic graphics tech, it seems they haven't done a proper literature study and just cite other AI papers. It seems they haven't talked to anyone who knows the literature, either. https://runevision.com/tech/layerprocgen/

3) >The top level input is perlin noise because it is genuinely the best tool for generating terrain at continental scale

This is a nonsensical statement. I don't know what you are thinking here at all, except maybe that you are mistakenly using "Perlin" as a group noun for an entire style of functions.

Perlin has all sorts of well-known issues, from the overall "sameyness" (due to the mandatory zero-crossings and the consistent grid size) to the vertical symmetry, which fails to mimic erosion. Using it as the input to a feature vector isn't going to change any of that.
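To make the zero-crossing point concrete, here is a minimal 1D sketch (my own toy code, nothing from the paper): the value at any lattice point is a gradient dotted with a zero offset, so the noise is forced to zero on a fixed grid.

    # Minimal 1D Perlin sketch: each corner contributes gradient * offset,
    # and the offset vanishes at the corner itself, so every integer x is 0.
    import math, random

    random.seed(0)
    _gradients = {}  # lattice point -> pseudorandom gradient in [-1, 1]

    def gradient(i):
        if i not in _gradients:
            _gradients[i] = random.uniform(-1.0, 1.0)
        return _gradients[i]

    def fade(t):
        # Perlin's fade curve: 6t^5 - 15t^4 + 10t^3
        return t * t * t * (t * (t * 6 - 15) + 10)

    def perlin1d(x):
        i = math.floor(x)
        t = x - i
        a = gradient(i) * t            # left corner contribution
        b = gradient(i + 1) * (t - 1)  # right corner contribution
        return a + (b - a) * fade(t)

    print([round(perlin1d(x), 3) for x in (0.0, 0.5, 1.0, 1.5, 2.0)])
    # -> 0.0 at every integer x: the grid imposes zero-crossings at a fixed
    #    spacing, which is exactly the "sameyness" described above.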

The idea of using plate tectonics is much better, but vastly _different_ from what you have done. And btw, none of the plate tectonics simulations I've seen look convincing. If you treat it as a simple transport problem, the result just looks like a Civilization 1 map. But if you want to treat it seriously, then the tectonics have to be the source of all your elevation changes, not an AI hallucination layered on top afterwards. The features would all have to make sense.

Your abstract states that classic terrains are "fundamentally limited in coherence"... but even to my non-geologist eye, your generated heightmaps seem incredibly blobby and uncanny. This makes me think that a real geologist would immediately spot all sorts of things that don't make any sense. For example, if you added water and rivers to the terrain, would it work, or would you end up with nonsensical loops and Escher-like watersheds?

(mostly I'm disappointed that the level of expertise in AI tech is so low that all these things have to be pointed out instead of being things you already knew)


> And btw, none of the plate tectonics simulations I've seen look convincing.

It's an amazing problem! I haven't spent much time on it - maybe 20-30 hours spread out over several years - but it _is_ something I come back to from time to time. And it usually ends up with me sitting there, staring at my laptop screen, thinking, "but what if I... no, crap. Or if we... well... no..."

TBH it's one of the things that excites me, because it makes it clear how far we still have to go in terms of figuring out these planet-scale physical processes, simulating them, deriving any meaningful conclusions, etc. Still so much to learn!


The way to see more Bret Victor-ish tooling is for people to rediscover how to build the kind of apps that were commonplace on the desktop but have become a very rare art in the cloud era.

Direct manipulation of objects in a shared workspace, instant undo/redo, trivial batch editing, easy duplication and backup, ... all things you can't do with your average SaaS, and which most developers would revolt over if they had to do their own work without them.


Recently I tried to hack a feature into Transmission for Mac. All I wanted to do was add a single checkbox per torrent, which corresponds to a property in the libtransmission back-end that isn't exposed.

And sorry, but it was a complete mess from start to finish. Instead of just mapping a boolean value to a checkbox state, the entire read and write path was an elaborate game of telephone. In React I would just use something like a cursor to traverse and mutate state immutably, and the rendering part would take care of itself. There was also a bunch of extra code to remember and apply defaults, which in a more functional system like React is generally handled via composition.

One of the article's claims is that the React model is suboptimal because UIs are more stable than it assumes. But this isn't true, because the edge cases are where you end up spending most of your dev time.

A declarative approach lets you achieve N features in roughly O(N) lines of code. When you do things imperatively, you instead have to orchestrate up to O(N^2) state transitions in O(N^2) lines of code.

The React model is also not that different from immediate mode, which is very popular in games, where performance is important. The main difference is that React has an answer to what happens when you can't fit all the work into one rendering cycle, via memoization and sparse updates.

This gets you similar perf to classic retained mode, but without all the tedious MVC plumbing.

PS: Here's how I use patching as a basis for state management: https://usegpu.live/docs/reference-live-@use-gpu-state
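If the pattern is unfamiliar, here's the general idea in a few lines of Python (not the actual use-gpu-state API, just the shape of it): a patch is a sparse tree of changes, and applying it produces a new state that shares every unchanged branch with the old one.

    # Toy sketch of patch-based state updates (not a real library API).
    def patch(state, delta):
        if not isinstance(state, dict) or not isinstance(delta, dict):
            return delta  # leaf: the patched value replaces the old one
        merged = dict(state)  # shallow copy; untouched keys keep old references
        for key, value in delta.items():
            merged[key] = patch(state.get(key), value)
        return merged

    old = {"torrent": {"name": "ubuntu.iso", "sequential": False}, "ui": {"theme": "dark"}}
    new = patch(old, {"torrent": {"sequential": True}})

    assert new["torrent"]["sequential"] is True
    assert new["ui"] is old["ui"]                 # unchanged subtree is shared
    assert old["torrent"]["sequential"] is False  # old state is untouched

The checkbox from the Transmission example reduces to producing one such patch; anything that didn't change keeps its old identity, which is what makes cheap re-render checks possible.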


Changes in mood result from changes in behavior.

If you don't feel like doing something you know is good for you, do it anyway. You'll feel better afterwards.


Apple already went through this before with PowerPC -> x86. They had universal binaries, Rosetta, etc. to build off of. And they got to do it with their own hardware, which includes some special instructions intended to help with emulation.


> Apple already went through this before with PowerPC -> x86

Not to mention 68K -> PowerPC.

Rhapsody supported x86, and I think during the PowerPC era Apple kept creating x86 builds of OS X just in case. This may have helped to keep things like byte order dependencies from creeping in.


TOML is basically a formalization of the old INI format, which only existed in ad-hoc implementations. It's not really a "language", just a config data syntax like JSON. It doesn't have major footguns because it doesn't have a lot of surface area.

The various features it has for nesting and arrays make it convenient to write, but can make it harder to read. There is no canonical serialization of a TOML document as far as I can tell; you could write the same data any number of ways.
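For example, all three of these spell the same document (checked with the stdlib tomllib, Python 3.11+):

    import tomllib

    a = tomllib.loads('server = { host = "example.com", port = 8080 }')
    b = tomllib.loads('server.host = "example.com"\nserver.port = 8080')
    c = tomllib.loads('[server]\nhost = "example.com"\nport = 8080')

    assert a == b == c  # inline table, dotted keys, table header: same data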

So while TOML has its uses for small config files you edit by hand, it doesn't really make sense for interchange, and it doesn't see much use outside of Rust afaik.


I believe TOML can always be serialized to JSON. And TOML parsing is in the Python standard library in newer versions (tomllib, since 3.11). It's also the standard format for `pyproject.toml` in Python packaging.
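For example, a quick round trip through the stdlib parser; the only wrinkle is that TOML has native date/time types and JSON doesn't, so those get stringified:

    import tomllib, json

    doc = tomllib.loads("""
    name = "demo"
    released = 2024-01-15      # parses to a datetime.date
    [dependencies]
    requests = ">=2.0"
    """)

    print(json.dumps(doc, default=str, indent=2))  # default=str handles the date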


Event sourcing gives you all the technical overhead of replayable transactions while giving your end-users none of the benefits (e.g. undo/redo). It's purely an infrastructure-level feature that only makes auditors and sysadmins happier.


Event sourcing doesn't directly address this. However, the "undo" functionality is not a feature of event sourcing itself, but rather something your application needs to implement on top of the event log. Sure, you can implement that functionality without event sourcing, but it becomes much harder to retrofit later if you haven't kept the history around.
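Concretely, a naive sketch of what that application-level undo looks like once you have the log (toy code; a real system would append a compensating event rather than replay a truncated log):

    from functools import reduce

    def apply_event(state, event):
        kind, key, value = event
        new = dict(state)
        if kind == "set":
            new[key] = value
        elif kind == "delete":
            new.pop(key, None)
        return new

    def replay(events):
        return reduce(apply_event, events, {})

    log = [("set", "title", "Draft"), ("set", "title", "Final"), ("delete", "title", None)]
    print(replay(log))       # {} -- the title was deleted
    print(replay(log[:-1]))  # {'title': 'Final'} -- "undo" of the delete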


Because the person whose job depends on keeping the customers happy is not the same as the person whose job depends on making spreadsheet numbers go up.


These are end-user products with many real alternatives; if people didn't like them, they wouldn't buy them. So the problem is not that the decision-maker and the user are different people, as in the case of the famously dreaded electronic healthcare software.


>The semantics of React code are very different to imperative straight line code.

Yes, and all you have to do is learn why those semantics exist in order to do React well.

Unfortunately too many programmers still think they can just mimic other code to figure it out.

