I can say at Google we usually just had engineering-tip posters in the washrooms. They were usually very insightful and written by other engineers at the company.
Stuff like how to reduce nesting logic, how to restructure APIs for better testing, etc.
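One of those tips, reducing nesting with early returns, can be sketched like this (a hypothetical example, not taken from an actual poster; `process_order` and its fields are made up for illustration):

```python
# Deeply nested version: each condition adds a level of indentation,
# and the "happy path" ends up buried at the innermost level.
def process_order_nested(order):
    if order is not None:
        if order["items"]:
            if not order.get("cancelled"):
                return sum(item["price"] for item in order["items"])
    return 0

# Guard-clause version: handle the edge cases first and return early,
# so the main logic sits at the top indentation level.
def process_order_flat(order):
    if order is None:
        return 0
    if not order["items"]:
        return 0
    if order.get("cancelled"):
        return 0
    return sum(item["price"] for item in order["items"])
```

Both behave the same; the second just reads top-to-bottom without the reader having to track which branch they're inside.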
People usually like them. I can't say I've seen what the parent post described so I imagine it's "the other" FAANG mentioned here.
Yeah honestly I've been racking my brain about the same question (where can I move on to).
HN has been my home for learning about all sorts of things for 10+ years, but blockchain + AI has just killed all the interesting discussion that could be had.
It's hard to define a community around "not being obsessed" with something. Maybe instead it's worth thinking about what the goal of such a community / forum is. Might be easier to find / define.
If you come up with something I'm happy to check it out.
Standards and patterns matter, but discernment matters more. The issue isn't reusability itself, it's the cargo-cult adoption of frameworks that solve problems you don't have, when you don't have them.
Your LLM agent works, for reasons left undiscussed, because you made deliberate architectural choices for your specific context. That's engineering. Blindly importing a framework just because "everyone uses it" is the opposite. That's the point, nothing more, nothing less.
In my experience, the lack of joy or difficulty with tests almost always comes from the test environment being different enough from the real environment that you end up stretching your code to fit the test env instead of actually testing what you're interested in.
This doesn't apply to very simple functions, but tests on simple functions are the least interesting/valuable ones.
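One common way to shrink that env gap is to push the environment behind a narrow seam, so the test exercises the logic you care about rather than a stand-in for the whole system. A minimal sketch under assumed names (`discount_for` and its weekend rule are hypothetical):

```python
from datetime import datetime

# The logic under test takes its "environment" (here, the current time)
# as a parameter instead of reaching for datetime.now() directly.
def discount_for(now: datetime) -> float:
    # Hypothetical business rule: 10% off on weekends.
    return 0.10 if now.weekday() >= 5 else 0.0

# Production code passes the real clock; a test passes a fixed one.
# The test env and the real env now differ only at this one seam,
# so nothing else has to be stretched to fit the test harness.
assert discount_for(datetime(2024, 1, 6)) == 0.10  # a Saturday
assert discount_for(datetime(2024, 1, 8)) == 0.0   # a Monday
```

The same idea applies to filesystems, networks, and databases: the narrower the seam, the less the test distorts the code around it.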
Ironically, I feel like our QA team is busier than ever, since most end-to-end user-ish tests require coordinating tools in ways that are just beyond current LLM capabilities. We are pumping out features faster, which requires more QA to verify them.
Progress is not always linear. Until it actually does it, we can't say anything. This assumption is peddled by AI companies to attract investment; it is not a scientific one.
I wonder if that's only really true for "pre-LLM" engineers though. If all you know is prompting, maybe there's no higher quality that more focus can really achieve.
It might just all meld into a mediocre soup of features.
To be clear not against AI assisted coding, think it can work pretty great but thinking about the implications for future engineers.
>If all you know is prompting, maybe there's no higher quality that more focus can really achieve.
That's true of any particular individual but not for a company that can decide to hire someone who can do more than prompting.
>It might just all meld into a mediocre soup of features
I don't think the relative economics have changed. Mediocre makes sense for a lot of software categories because not everyone competes on software quality.
But in areas where software quality makes a difference, it will continue to make a difference. It's not a question of tools.
This analogy has always been bad any time someone has used it. Compilers directly transform via known algorithms.
Vibecoding is literally just random probabilistic mapping between unknown inputs and outputs on an unknown domain.
Feels like saying that because I don't know how my engine works, my car could've just been vibe-engineered. People have put thousands of hours into making certain tools work up to a given standard and spec, reviewed by many, many people.
"I don't know how something works" != "This wasn't thoughtfully designed"