These LLMs are really good at digging up internal docs if you give them access to your knowledge sources with tooling to search and reason in a loop before responding.
>These LLMs are really good at digging up internal docs if you give them access to your knowledge sources with tooling to search and reason in a loop before responding.
Are those internal documents in the room with us right now?
No but seriously, most of the software out there is legacy code (don't quote me on that though). IME, legacy code is very poorly documented, if it's documented at all. Sure, you could let the LLM extract semantics from the code alone, but with old code and arcane hacks, LLM interpretation can only take you so far. And even then, the semantics don't always translate directly into the business logic.
> Are those internal documents in the room with us right now?
I have no clue what you're on about here.
If you have a legacy knowledge base, like maybe using MediaWiki for corp knowledge, what you do is maintain a vector database that gets updated when it sees changes. Using embeddings enables lookup by semantic similarity rather than keyword match.
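Roughly, the update/lookup side is something like this (a minimal sketch using sentence-transformers; the page IDs, revision tracking, and in-memory dict are stand-ins for whatever your wiki export and real vector store look like):

```python
# Keep a wiki-backed vector index fresh and query it by semantic similarity.
# The in-memory dict stands in for a real vector store (pgvector, Qdrant, etc.).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# page_id -> (revision_id, embedding)
index: dict[str, tuple[int, np.ndarray]] = {}

def upsert_page(page_id: str, revision_id: int, text: str) -> None:
    """Re-embed a page only when its revision has changed."""
    current = index.get(page_id)
    if current is None or current[0] != revision_id:
        index[page_id] = (revision_id, model.encode(text, normalize_embeddings=True))

def search(query: str, k: int = 5) -> list[tuple[str, float]]:
    """Return the k pages most similar to the query (cosine similarity)."""
    q = model.encode(query, normalize_embeddings=True)
    scored = [(pid, float(vec @ q)) for pid, (_, vec) in index.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

upsert_page("ops/runbook", 42, "How to rotate the staging database credentials...")
print(search("how do I change the staging DB password?"))
```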
In a control loop with well-maintained vector embeddings, these LLMs are absolutely better than a human at finding, citing, and summarizing the information a user needs.
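The loop itself is nothing fancy; something like the sketch below (OpenAI chat client plus the search() helper from the previous snippet; the model name, prompt, and step budget are just illustrative, and a real version would validate the JSON):

```python
# Bare-bones retrieve/reason loop: the model either asks for another search
# or commits to an answer, up to a small step budget.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You answer questions from an internal wiki. Reply with JSON only: "
    '{"action": "search", "query": "..."} to look something up, or '
    '{"action": "answer", "text": "...", "sources": [...]} when you are done.'
)

def answer(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": question}]
    for _ in range(max_steps):
        resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        reply = json.loads(resp.choices[0].message.content)
        if reply["action"] == "answer":
            return reply["text"]
        # Feed the retrieved snippets back in and let the model decide again.
        hits = search(reply["query"], k=5)
        messages.append({"role": "assistant", "content": resp.choices[0].message.content})
        messages.append({"role": "user", "content": f"Search results: {hits}"})
    return "No confident answer within the step budget."
```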
Tools like Glean already exist for this if you doubt it.