I find most bugs take less time to fix than they take to reproduce and verify.


This is where LLMs have helped me the most: adding copious, detailed logging across the app on demand, then inspecting the logs to figure out the bug, and even how to reproduce it.


I did that once: logging ended up taking 80% of the CPU, leaving too little headroom for everything else the system had to do. Now I'm more careful to figure out what's worth logging at all, and to make sure disabled logs are bypassed cheaply.
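For example, with Python's stdlib logging (a minimal sketch; expensive_summary is a made-up stand-in for a costly computation):

    import logging

    logger = logging.getLogger("app")

    def expensive_summary():
        # stand-in for a costly dump we only want while debugging
        return "..."

    # Bad: an f-string is built even when DEBUG is disabled:
    #   logger.debug(f"state={expensive_summary()}")

    # Better: %-style args defer formatting until the record is
    # actually emitted -- but expensive_summary() still runs.
    logger.debug("state=%s", expensive_summary())

    # Best for costly values: one cheap check when the level is off.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("state=%s", expensive_summary())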


You misunderstand: I remove the logging as soon as the task is done. I definitely do not keep the LLM logging around.

That's the beauty of it - it's able to add and remove huge amounts of logging per task, so I never need to manage the scale and complexity of logging that outlasts the task it was purposefully added for. With typical development, adding logging takes time so we keep it around and maintain it.


One of my needs is being able to figure out why something broke in the real world. For bugs that happen at my desk, I do what you said: add the logs I need, then delete them once it's fixed. But often there are things I can't figure out how to reproduce at my desk, so I need logs that are always running, on the off chance a new bug happens that I need to debug.


Yeah, that's valid. I do keep some kinds of logs around for this. But I'm selective about it, and most logs I don't need to retain to manage this risk.


we've gotten into adding verbosity levels in logging, where each logged event comes with an assigned level and only makes it into the log if it meets the requested log level. there are times when full verbose output is just too damn much for day-to-day debugging, but it's helpful when debugging the one feature.

i used to think options like -vvv or -loglevel panic were just someone being funny, but they do work when necessary. -loglevel sane, -loglevel unsane, -loglevel insane would be my take, but i'm aware most people would roll their eyes, so we're lame and use ERROR, WARNING, INFO, VERBOSE.
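a rough sketch of the shape i mean, in python (the -v counting is illustrative, not anyone's actual cli):

    import argparse
    import logging

    parser = argparse.ArgumentParser()
    parser.add_argument("-v", "--verbose", action="count", default=0)
    args = parser.parse_args()

    # map -v / -vv / -vvv onto standard levels; events below the
    # requested level never make it into the log
    level = {0: logging.ERROR, 1: logging.WARNING,
             2: logging.INFO}.get(args.verbose, logging.DEBUG)
    logging.basicConfig(level=level)

    logging.warning("shown at -v and above")
    logging.debug("shown only at -vvv")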


That's great when you have to maintain a large body of logging for weeks, months, or years.

But I'm talking about adding and removing logs per dev task. There's really no need for sophisticated log levels, or for maintaining them as the app evolves and grows, because the LLM can "instantly" add and remove the logging it needs per granular task. That's much faster for me than maintaining logs, carefully selecting log levels, and managing how logs can be filtered. That only made sense to me when adding or removing logs took actual dev effort.


On smaller projects that works. We have a complex system where individual logs can have their log level changed, though that turns out to be too fine-grained. I'm moving to making every subsystem controllable, but not the individual logs. I'm still not sure what the right answer is, though - it always seems like there are 10,000 lines of unrelated, useless logs to wade through before finding the useful one, but any time I remove something, it turns out to be the needed log for the very next bug report...
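A rough sketch of the per-subsystem version, using Python's stdlib logger hierarchy (the subsystem names are made up):

    import logging

    logging.basicConfig(level=logging.WARNING)  # quiet default everywhere

    # one logger per subsystem; a level set on "app.network"
    # applies to everything logged under that name
    net_log = logging.getLogger("app.network")
    db_log = logging.getLogger("app.storage")

    # crank up only the subsystem under investigation
    logging.getLogger("app.network").setLevel(logging.DEBUG)

    net_log.debug("handshake detail")  # emitted
    db_log.debug("cache miss")         # suppressed (inherits WARNING)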


Use something like syslog, where everything is recorded and you can filter on display by subsystem and log level.
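In Python, for instance, it's a few lines (a sketch; /dev/log is the usual Linux socket, other platforms differ):

    import logging
    from logging.handlers import SysLogHandler

    logger = logging.getLogger("app.network")
    logger.setLevel(logging.DEBUG)  # record everything at write time

    handler = SysLogHandler(address="/dev/log")
    handler.setFormatter(
        logging.Formatter("%(name)s: %(levelname)s %(message)s"))
    logger.addHandler(handler)

    # everything lands in syslog; filter at read time instead,
    # e.g. grep "app.network" /var/log/syslog
    logger.debug("recorded now, filtered later")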


Yes. I often just copy the whole core dump, and feed it into the prompt.


This is something that I've been trying to improve at. I work on a Windows application and so I get crash dumps that I open with WinDbg and then I usually start looking for exceptions.

Is this something an LLM could help with? What exactly do you mean when you say you feed a dump to the prompt?


I literally copy the whole stack dump from the log, and paste it into the LLM (I find that ChatGPT does a better job than Claude), along with something along the lines of:

> I am getting occasional crashes in my UIKit program on iOS 17 or above. Given the following stack trace, what problem do you think it might be?

I will attach the source file if I think I know the general area, along with any symptoms and steps to reproduce. One of the nice things about an LLM is that it's difficult to overwhelm it with too much information (unlike people).

It will usually respond with a fairly detailed analysis. Usually, it has some good ideas to use as starting points.

I don't think "I have a bug. Please fix it." would work, though. It's likely to try, but caveat emptor.


I kinda wonder if at some point this is something we might use the LLM more directly for. As in, train them on raw binary dumps as input.


I wonder if we’ll be seeing tools that do this.

I could see Apple or Microsoft building it into their IDEs.

But, as was noted elsewhere, I think it’s only useful as an advisor. I think a lot of folks look at LLMs as some kind of programmer replacement.


They are that too


I still wouldn't trust them for a lot of stuff.

Some of the code I get from Claude and ChatGPT is ... not so good.


It's like an intern that is incapable of learning. But a very enthusiastic one.


I review it, and I sometimes have it retry the same task 40+ times.


And this, kids, is how one bug got fixed and two more were created.


There's a huge difference between using an LLM to assist you and letting it do all the work for you. Your implication that they're the same, and that the previous commenter let the LLM do the work, is lazy.

ChrisMarshallNY only said they fed the dump into the LLM. They said nothing about using the LLM to write the fix.


Nope.

Good result == LLM + Experience.

The LLM just reduces the overhead.

That’s really what every “new paradigm” has ever done.


Also, robust test coverage helps prevent regressions.



