To minimise context bloat while still providing holistic context, as a first step I extract the important elements from a codebase via its AST; the LLM then uses that outline to decide which files to fetch in full for a given task.
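A minimal sketch of that first step in Python, using the stdlib `ast` module (the actual extraction pipeline is not shown in this thread, so the choice of which elements count as "important" here is an assumption for illustration):

```python
import ast
import textwrap

def outline(source: str) -> list[str]:
    """Walk a module's AST and keep only the compact "important elements"
    (class names and function signatures) that an LLM could use to decide
    which files it needs in full."""
    tree = ast.parse(source)
    items = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            items.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            items.append(f"class {node.name}")
    return items

# Hypothetical source file, for demonstration only.
sample = textwrap.dedent("""
    class Parser:
        def parse(self, text):
            ...

    def main():
        ...
""")
print(outline(sample))
```

The outline for a whole repo is tiny compared with the raw sources, so it fits in one prompt even for large codebases.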
Random related idea: for code, similarly to what we see with 'git blame', I would love to see which lines of code were generated by AI. That would also help during code review.
I've found myself wanting that over the past couple of weeks of starting to figure out an AI-assisted workflow.
When do I step in to put in assembly-like specifics in the code? How do I represent where I intervened vs. what is compiler-generated, when the end assembly-ish code is the product up for review?
You can do that with the Analyst feature of Matcha RSS.[1] Just write in the prompt what you want back from Analyst feeds. Disclaimer: it's my own tool.
The way I have addressed this is with my own tool [1], which creates daily digests instead (Markdown files named by date). This removes the stress of checking everything or having hundreds of unread items; I only care about the current day. If I get some time, say on the weekend, I can go back through previous days and check what I skipped. I think it's a better trade-off than an unread count that keeps growing over time.
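The core of the daily-digest idea can be sketched in a few lines of Python (this is not the actual vogte implementation, just an illustration of the one-dated-file-per-day pattern; the item tuples are hypothetical):

```python
from datetime import date
from pathlib import Path

def write_digest(items, out_dir="digests"):
    """Write today's items into a single dated Markdown file, so there is
    no growing "unread" pile -- only the current day's file matters."""
    Path(out_dir).mkdir(exist_ok=True)
    today = date.today().isoformat()
    path = Path(out_dir) / f"{today}.md"
    lines = [f"# Digest {today}", ""]
    lines += [f"- [{title}]({url})" for title, url in items]
    path.write_text("\n".join(lines) + "\n")
    return path

# Hypothetical feed items, for demonstration only.
p = write_digest([("Example story", "https://example.com/story")])
print(p.read_text())
```

Because each day overwrites or creates exactly one file, skipping a day costs nothing: old digests just sit there until you feel like browsing them.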
https://github.com/piqoni/vogte