Right, but the discussion we're having here is context size. I, and others, are saying that the current context size limits when the tool can actually be useful.
Replies along the lines of "well, just change the situation so context doesn't matter" are irrelevant and off-topic. The rationalizations even more so.
A huge context is a problem for humans too, which is why I think it's fair to suggest maybe the tool isn't the (only) problem.
Tools like Aider build a repository map that indexes the code into a small context, which I think is similar to what we humans do when we try to understand a large codebase.
I'm not sure if Aider can then load only portions of a huge file on demand, but it seems like that should work pretty well.
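To illustrate the idea, here's a toy version of that kind of code map: walk a source file and keep only the class and function signatures, plus line numbers so portions of the file could be loaded on demand later. This is a rough sketch of the concept, not Aider's actual implementation (which, as I understand it, uses tree-sitter and a ranking pass over the whole repo):

```python
# Toy "code map": condense a Python file down to its top-level structure
# so a large codebase can fit in a small context window.
import ast

def map_file(path: str) -> str:
    """Return a compact outline of a file: classes, functions, signatures."""
    with open(path) as f:
        tree = ast.parse(f.read())
    lines = [f"{path}:"]
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            # Record the line number so the full body can be fetched on demand.
            lines.append(f"  def {node.name}({args})  # line {node.lineno}")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"  class {node.name}  # line {node.lineno}")
    return "\n".join(lines)
```

A map like this for every file in a repo is tiny compared to the code itself, and the line numbers are exactly what would let a tool pull in only the relevant slice of a huge file when it's needed.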
As someone who's worked with both fragmented/modular codebases (smaller classes, shorter files) and ones whose files span thousands of lines (sometimes tens of thousands), I very much prefer the former and hate the latter.
That said, some of the models out there (Gemini 2.5 Pro, for example) support a 1M-token context; it's just going to be expensive, and filling that much context will still probably degrade the quality of the model's output somewhat.