This is cool. Was looking for model weights, but it seems like maybe it will work with a variety of different models. This is like a RAG/agent app built on top of your typical llama. Am I reading that right?
You are exactly right! Since I wanted a solution that works with many LLMs out of the box, I focused on chain-of-thought prompting and few-shot learning.
Lots of papers show that fine-tuning mostly helps with steerability and form (https://arxiv.org/abs/2402.05119), so I figured it would be sufficient to provide just the right examples in the prompt, and it did work!
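Roughly, the idea looks like this. This is a minimal sketch of few-shot chain-of-thought prompting, not the actual prompts from the project; the task, examples, and helper names are made up for illustration:

```python
# Sketch of few-shot chain-of-thought prompting: show the model a couple of
# worked examples with explicit reasoning, then ask the real question.
# Everything here (task, examples) is hypothetical, not the app's real prompts.

FEW_SHOT_EXAMPLES = [
    {
        "question": "Is 17 a prime number?",
        "reasoning": "Check divisors up to sqrt(17): neither 2 nor 3 divides 17.",
        "answer": "yes",
    },
    {
        "question": "Is 21 a prime number?",
        "reasoning": "21 = 3 * 7, so it has divisors other than 1 and itself.",
        "answer": "no",
    },
]

def build_prompt(question: str) -> str:
    """Build a few-shot prompt whose examples demonstrate step-by-step reasoning."""
    parts = []
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(
            f"Q: {ex['question']}\n"
            f"Reasoning: {ex['reasoning']}\n"
            f"A: {ex['answer']}\n"
        )
    # The trailing "Reasoning:" nudges the model to reason step by step
    # before committing to an answer -- that's the chain-of-thought part.
    parts.append(f"Q: {question}\nReasoning:")
    return "\n".join(parts)

print(build_prompt("Is 29 a prime number?"))
```

Because the prompt is plain text, the same approach should work with any reasonably capable instruction-following model, which is why the project doesn't ship specific weights.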
We do intend to build a decentralized dataset to further train models, and eventually get maybe a 2B or 7B model working well.