Hacker Newsnew | past | comments | ask | show | jobs | submit | vorticalbox's commentslogin

You could use ai in read only mode and use it as a rubber duck.

I do this a lot and it’s super helpful.


LLMs also try and find short cuts to get the task done, for example I wrote some code (typescript) for work that had a lot of lint errors (I created a pretty strict rule set)

And I asked codex to fix them for me, first attempt was to add comments to disable the rules for the whole file and just mark everything as any.

Second attempt was to disable the rules in the eslint config.

It does the same with tests it will happily create a work around to avoid the issue rather than fix the issue.


The problem is that “clear”, “modular”, “well designed” and all pretty abstract ideas.

I personally like builder style when doing oop new Client().withTimeOut().ignoreHttpErrors()

Not everyone would consider that clean when using it in your code base.

And let’s face it all code has hacks and patches just to get it out before the deadline then there are more things to do so it will just stay that way.


That might be true. But unclear, non-modular, and poorly designed is actually much easier to identify.

I don't know if I like the builder style; I could go either way. But if I saw that, I'd still consider that clear and well designed. But I've seen some truly ugly code from both people and AI.


But same is true about "good food": some people will prefer some specific food and someone "good food" may not be the taste of someone else.

And yet, it would be ridiculous to pretend that we cannot say that there is an advantage in avoiding cooking a dish made with dirt and toxic waste. The fact that we cannot define an absolute objective "good food" is not at all a problem.


i have pretty strict rules for the code bases at work and one is to use for..of

I’ve noticed this too at work.

If keep the change’s focused I can iterate far faster with ideas because it can type faster than I can.


My favourite agent crush[0] has lsp support for a while.

I’ve not noticed the agent deciding to use it all that much.

[0] https://github.com/charmbracelet/crush


Did it make no difference when you mentioned in your AGENT.md which LSP servers are installed?


I guess supporting tool call natively would improve read token efficiency since they can just run the tool directly


This used to happen with bench marks on phones, manufacturers would tweak android so benchmarks ran faster.

I guess that’s kinda how it is for any system that’s trained to do well on benchmarks, it does well but rubbish at everything else.


yes, they turned off all energy economy measures when benchmarking software activity was detected, which completely broke the point of the benchmarks because your phone is useless if it's very fast but the battery lasts one hour


the issue is not that devs don't know what they are its that they don't pin packages

if you run `npm i ramda` it will set this to "ramda": "^0.32.0" (as of comment)

that ^ means install any version that is a feature or patch.

so when a package is released with malware they bump version 0.32.1 and everyone just installs it on next npm i.

pinning your deps "ramda": "0.32.0" completely removes the risk assuming the version you listed is not infected.

the trade off is you don't get new features/patches without manually changing the version bump.


For context: ramada 0.32.0 isn't a concrete thing, in the sense that glibc 2.35 is. It really means "the latest ramada code because if you were to pin on this version it'll at some point stop working". glibc 2.35 never stops working.


> the trade off

I see that as a desirable feature. I don’t want new functionality suddenly popping into my codebase without one of my team intending it.


me too but a lot of people see it as massive overhead they don't want to deal with.

personally i pin all mine because if you don't a version could be deployed during a pipeline and this makes your local version not the same as the one in docker etc.

pinning versions is the only way to be sure that the version I am running is the same as everyone elses


Sure but if you are always unique for every website then you can’t be tracked overtime.


They meant a signal of uniqueness for your setup that could still assist with tracking, not being unique for every site.


last year i learn about the Collatz Conjecture which i found super interesting.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: