Hacker News | alainrk's comments

I can confirm; this happened to me a few days ago.

How did you find out?


Haven't used it but just checked it out — interesting project. Different goals though.

Raptor configures Claude Code as a security agent for active pentesting and adversarial research. It's an LLM doing dynamic security analysis.

Prodlint is the opposite direction with deterministic static analysis, no LLM in the loop. 52 rules that check for the structural patterns AI coding tools consistently get wrong (leaked secrets, missing rate limiting, hallucinated imports, etc.). Same result every time, under 100ms, works offline.
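Prodlint's actual rules aren't public in this thread, but the kind of deterministic check described (no LLM, same result every time) can be sketched. This is a hypothetical, minimal secret-leak rule, not Prodlint's implementation; the regex and function names are illustrative only:

```python
import re

# Hypothetical sketch of a deterministic lint rule: flag lines that
# look like hard-coded secrets. Pure regex, so the same input always
# produces the same output, with no network or model in the loop.
SECRET_PATTERN = re.compile(
    r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
)

def check_leaked_secrets(source: str) -> list[int]:
    """Return 1-based line numbers that appear to hard-code a secret."""
    return [
        i
        for i, line in enumerate(source.splitlines(), start=1)
        if SECRET_PATTERN.search(line)
    ]

code = 'API_KEY = "abcd1234abcd1234abcd"\nprint("hello")\n'
print(check_leaked_secrets(code))  # [1]
```

Because the rule is a pure function of the source text, it runs in microseconds and works offline, which is the trade-off against an LLM-driven analysis.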


I suggest reading the full article :)


The thesis of the article is that software other people have built is bad (frameworks) but software my LLM agent builds is good (for undisclosed reasons).

I think it just adds to the noise in our industry suggesting that reusable patterns and standards don't matter.


Strawman argument.

Standards and patterns matter, but discernment matters more. The issue isn't reusability itself, it's the cargo-cult adoption of frameworks that solve problems you don't have, when you don't have them.

Your LLM agent works for undisclosed reasons because you made deliberate architectural choices for your specific context. That's engineering. Blindly importing a framework just because "everyone uses it" is the opposite. That's the point, nothing more, nothing less.


If you have no idea how to set up the pillars, you're absolutely right. Maybe you should try.


Interesting analysis


Thank you for the insightful feedback :) If you also have something to say on the point of the article itself, instead of pointing the finger at the person, I'll be happy to answer that.


I agree with your point. My concern is more about the tedious aspects. You could argue that tedium is part of what makes the craft valuable, and there's truth to that. But it comes down to trade-offs: what could I accomplish with that saved time, and would I get more value from those other pursuits?


If you're gonna take this tack, at least be honest with yourself. Does your boss get more value out of you? You aren't going to get a kickback from being more productive, but your boss sure will.


I had this moment recently implementing Facebook OAuth. I don't need to spend mental cycles figuring that out, doing the back and forth with their API, pulling my hair out at their docs, etc. I just want it to work so I can build my app. AI just did that part for me and I could move on.
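For context, the first step of that flow, building the authorization redirect URL, looks roughly like this. This is a hedged sketch following the standard OAuth 2.0 code flow; the API version in the endpoint, the client ID, and the redirect URI are placeholders:

```python
from urllib.parse import urlencode

# Sketch of step one of Facebook's OAuth 2.0 code flow: send the user
# to Facebook's authorization dialog. The version segment ("v19.0") is
# illustrative; Facebook versions its endpoints over time.
FB_AUTH_ENDPOINT = "https://www.facebook.com/v19.0/dialog/oauth"

def build_auth_url(client_id: str, redirect_uri: str, state: str) -> str:
    params = {
        "client_id": client_id,        # your app's ID (placeholder here)
        "redirect_uri": redirect_uri,  # must match the app's registered URI
        "state": state,                # CSRF-protection token you verify later
        "response_type": "code",       # ask for an authorization code
        "scope": "email,public_profile",
    }
    return f"{FB_AUTH_ENDPOINT}?{urlencode(params)}"

url = build_auth_url("123456", "https://example.com/callback", "xyz")
```

After the redirect back, the server exchanges the returned code for an access token; that back-and-forth is exactly the part that's tedious to get right by hand.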


Integrating auth code is probably a good example of code you want to understand, rather than just seeing that it appears to work.


I honestly think the stuff AI is really good at is the stuff around the programming that keeps you from the actual programming.

Take a tool like Gradle. Bigger pain in the ass than using an actual cactus as a desk chair. It has a staggering rate of syntax and feature churn with every version upgrade, sprawling documentation that is clearly written by space aliens, and every problem is completely ungoogleable because every single release does things differently and no advice stays valid for more than 25 minutes.

It's a comically torturous DevEx. You can literally spend days trying to get your code to compile again, and not a second of that time will be put toward anything productive. Sheer frustration. Just tears. Mad laughter. Rocking back and forth.

"Hey Claude, I've upgraded to this week's Gradle and now I'm getting this error I wasn't getting with last week's version, what could be going wrong?" makes all that go away in 10 minutes.


I'm glad to hear the gradle experience hasn't changed in the decade since I started avoiding it.


I think it's still an open question if it's actually a net savings of time.


One thing I’ve noticed is that effort may be saved but not as much time. The agent can certainly type faster than me but I have to sit there and watch it work and then check its work when done. There’s certainly some time savings but not what you think.


Another thing I've noticed is that using AI, I'm less likely to give existing code another look to see if there's already something in it that does what I need. It's so simple to get the AI to spin up a new class / method that gets close to what I want, so sometimes I end up "giving orders first, asking questions later" and only later realizing that I've duplicated functionality.
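One flavor of that duplication is easy to catch mechanically: the same top-level function name defined in more than one module. A small sketch, where a dict of file contents stands in for walking a real codebase:

```python
import ast

def duplicate_function_names(sources: dict[str, str]) -> dict[str, list[str]]:
    """Map each top-level function name defined in 2+ files to those files."""
    seen: dict[str, list[str]] = {}
    for filename, source in sources.items():
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                seen.setdefault(node.name, []).append(filename)
    return {name: files for name, files in seen.items() if len(files) > 1}

files = {
    "utils.py": "def slugify(s):\n    return s.lower()\n",
    "helpers.py": "def slugify(text):\n    return text.lower()\n",
}
print(duplicate_function_names(files))  # {'slugify': ['utils.py', 'helpers.py']}
```

It only spots name collisions, not semantically equivalent code under different names, but even that cheap check would have flagged some of the "orders first, questions later" duplicates.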


Always have another agent around that is just reviewing code. "Look for duplicate code, code smells, bad architecture, future scalability problems..."

I usually keep one agent/prompt/context ripping through new features and another 1-3 agents/prompts/contexts reviewing, testing, planning.


The absence of evidence is evidence in its own way. I don't understand how there haven't been more studies on this yet. The one from last year that showed AI made people think they were faster when they were actually slower gets cited a lot, and I know that was a small study with older tools, but it's amazing that it hasn't been repeated. Or maybe it has and we don't know because the results got buried.


Indeed, this has been one of the first things I've noticed


Thank you for the constructive feedback :)

